Configuring .deepsource.toml: Complete Reference Guide (2026)
Learn how to configure .deepsource.toml with analyzer setup, transformers, exclude patterns, and multi-language examples for every supported language.
Why the .deepsource.toml file matters
Every DeepSource analysis starts with one file: .deepsource.toml. This configuration file sits at the root of your repository and tells DeepSource which languages to analyze, which rules to apply, which files to skip, and which auto-formatting transformers to run. Without it, DeepSource cannot analyze your code at all. Get it wrong and you end up with hundreds of false positives from generated files, missing coverage on critical paths, or analyzers running against the wrong language version.
Despite its importance, the .deepsource.toml configuration is one of the most common sources of setup frustration. Teams copy a basic example from the documentation, commit it, and then spend hours troubleshooting why their Go analyzer is not picking up generated protobuf files or why their Python analysis is flagging syntax that is perfectly valid in Python 3.12. The configuration surface is deceptively simple - TOML is a straightforward format - but the interactions between analyzers, transformers, exclude patterns, and test patterns require careful planning.
This guide covers every section of the .deepsource.toml file in detail. You will learn how to configure analyzers for Python, JavaScript, Go, Java, Ruby, and Rust. You will set up transformers for automatic code formatting. You will define exclude patterns to eliminate noise and test patterns to separate test code from production code. And you will see complete, real-world examples for monorepos, multi-language projects, and specialized setups.
If you are new to DeepSource, start with our DeepSource review for an overview of the platform’s capabilities and limitations. If you have already decided on DeepSource and want to get it connected to your repositories, see our guide on DeepSource GitHub integration.
Basic .deepsource.toml file structure
The DeepSource configuration file uses TOML (Tom's Obvious, Minimal Language) syntax. At its simplest, the file requires only a version number and one analyzer definition. Here is the minimal valid configuration:
```toml
version = 1

[[analyzers]]
name = "python"
enabled = true
```
That four-line file is enough to activate Python analysis on your repository. But most real projects need significantly more configuration. The full structure of the file includes these top-level sections:
```toml
version = 1

exclude_patterns = [
  "vendor/**",
  "docs/**",
  "**/migrations/**",
]

test_patterns = [
  "tests/**",
  "**/*_test.py",
]

[[analyzers]]
name = "python"
enabled = true

  [analyzers.meta]
  runtime_version = "3.x"
  max_line_length = 100

[[analyzers]]
name = "javascript"
enabled = true

[[transformers]]
name = "black"
enabled = true
```
Let us break down each section.
Version
The version key is always set to 1. This is the only supported version value and must be present in every .deepsource.toml file. Omitting it or using a different number will cause a configuration error.
Exclude patterns
The exclude_patterns array tells DeepSource which files and directories to skip entirely during analysis. Patterns use glob syntax and are matched relative to the repository root. This is where you filter out generated code, vendor directories, build outputs, and any other files you do not want analyzed.
Test patterns
The test_patterns array identifies which files contain test code. These files are still analyzed, but DeepSource applies different rules to them - it will not flag hardcoded test data, it relaxes complexity thresholds, and it skips other checks for patterns that are common and acceptable in tests but problematic in production code.
Analyzers
Each [[analyzers]] section defines a language-specific analysis engine. You can include as many analyzer sections as your project requires. The double-bracket TOML syntax [[analyzers]] creates an array of tables, meaning each occurrence adds a new analyzer to the list.
Transformers
Each [[transformers]] section enables an auto-formatting tool. When enabled, DeepSource creates pull requests with formatting fixes whenever it detects style violations that the transformer can resolve automatically. This ties directly into DeepSource autofix capabilities.
Analyzer configuration by language
The heart of any DeepSource analyzer configuration lies in the language-specific settings within each [[analyzers]] block. Each analyzer has its own name, its own meta configuration options, and its own set of detection rules. Here is how to configure each supported language.
Python analyzer
Python is DeepSource’s most mature analyzer with the broadest rule coverage. The meta section for Python accepts several important options:
```toml
[[analyzers]]
name = "python"
enabled = true

  [analyzers.meta]
  runtime_version = "3.x"
  max_line_length = 100
  skip_doc_coverage = ["module", "magic", "init"]
```
The runtime_version field tells DeepSource which Python version your project targets. This affects which syntax features are considered valid - for example, structural pattern matching with match / case is only valid in Python 3.10 and later. If your project uses pyproject.toml, setup.cfg, or a Pipfile with a Python version specifier, DeepSource can infer this automatically, but setting it explicitly avoids ambiguity.
The max_line_length field controls the threshold for line-length warnings. The default is typically 79 (PEP 8 standard), but many modern Python projects use 88 (Black’s default) or 100-120 for wider monitors.
The skip_doc_coverage array lets you exclude certain types of functions from documentation coverage checks. Common values include module (module-level docstrings), magic (dunder methods like __init__ and __repr__), init (class __init__ methods specifically), and class (class docstrings).
For DeepSource Python analysis to work well, you should also specify your dependency file paths so DeepSource can understand your import structure:
```toml
[[analyzers]]
name = "python"
enabled = true

  [analyzers.meta]
  runtime_version = "3.x"
  max_line_length = 100
  dependency_file_paths = [
    "requirements.txt",
    "requirements-dev.txt",
    "pyproject.toml",
  ]
```
JavaScript and TypeScript analyzer
The JavaScript analyzer handles both JavaScript and TypeScript files automatically. DeepSource detects .ts, .tsx, .js, and .jsx files and applies the appropriate rules.
```toml
[[analyzers]]
name = "javascript"
enabled = true

  [analyzers.meta]
  plugins = ["react"]
  environment = ["browser", "node"]
  dialect = "flow"
```
The plugins array enables framework-specific rule sets. Supported values include react for React-specific analysis (hooks rules, JSX best practices), vue for Vue.js projects, and angular for Angular projects.
The environment array specifies the runtime environments your code targets, which affects which global variables are considered valid. Use browser for client-side code, node for server-side code, or both for universal/isomorphic projects.
The dialect field is typically set to flow only if your project uses Flow type annotations instead of TypeScript.
Go analyzer
The Go analyzer has specific meta options for controlling how DeepSource handles Go module paths and generated code:
```toml
[[analyzers]]
name = "go"
enabled = true

  [analyzers.meta]
  import_root = "github.com/your-org/your-repo"
  cgo_enabled = false
  skip_generated = true
```
The import_root field should match your Go module path as defined in go.mod. This helps DeepSource correctly resolve imports and distinguish between first-party and third-party packages.
Setting cgo_enabled to false tells DeepSource to skip analysis of CGO-dependent code paths, which is useful if your CI environment does not have a C compiler available.
The skip_generated option excludes files with the // Code generated comment header, which follows the Go convention for marking auto-generated code. This is essential for projects that use protobuf, gRPC code generation, or tools like go generate.
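DeepSource's exact detection logic is not documented here, but the Go convention it follows (golang.org/s/generatedcode) marks generated files with a comment line matching a fixed pattern. A minimal sketch of that check, written in Python for illustration:

```python
# Sketch of the check that skip_generated relies on: the Go convention
# (golang.org/s/generatedcode) marks generated files with a line matching
# this exact regular expression.
import re

GENERATED_RE = re.compile(r"^// Code generated .* DO NOT EDIT\.$")

def is_generated(source: str) -> bool:
    """Return True if any line carries the Go generated-code marker."""
    return any(GENERATED_RE.match(line) for line in source.splitlines())

protoc_output = "// Code generated by protoc-gen-go. DO NOT EDIT.\npackage pb\n"
handwritten = "package main\n\nfunc main() {}\n"

print(is_generated(protoc_output), is_generated(handwritten))  # True False
```

Tools like protoc-gen-go and mockgen emit this header automatically, which is why skip_generated catches their output without extra exclude patterns.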
Java analyzer
The Java analyzer supports configuration for the Java version and build system:
```toml
[[analyzers]]
name = "java"
enabled = true

  [analyzers.meta]
  runtime_version = "17"
```
The runtime_version field accepts values like 8, 11, 17, or 21 corresponding to Java LTS releases. Setting this correctly ensures DeepSource understands newer Java syntax features like records (Java 16+), sealed classes (Java 17+), and pattern matching (Java 21+).
Ruby analyzer
The Ruby analyzer is straightforward with minimal meta configuration:
```toml
[[analyzers]]
name = "ruby"
enabled = true
```
DeepSource infers the Ruby version from your .ruby-version file or Gemfile if present. The Ruby analyzer covers style issues, performance anti-patterns, and common security vulnerabilities specific to the Ruby and Rails ecosystem.
Rust analyzer
The Rust analyzer handles Rust-specific concerns including unsafe code, lifetime issues, and performance patterns:
```toml
[[analyzers]]
name = "rust"
enabled = true
```
The Rust analyzer has fewer meta configuration options compared to Python or Go because Rust’s compiler already enforces strict correctness. DeepSource’s Rust analysis focuses on clippy-style lints, unsafe code usage, and idiomatic Rust patterns that the compiler does not catch.
Transformer setup and configuration
Transformers are DeepSource’s auto-formatting feature. When a transformer is enabled, DeepSource detects formatting violations and creates pull requests to fix them. This is one of the key differentiators covered in our DeepSource review - the ability to not just detect issues but automatically fix them.
Each transformer is defined in its own [[transformers]] section:
```toml
[[transformers]]
name = "black"
enabled = true

[[transformers]]
name = "isort"
enabled = true

[[transformers]]
name = "prettier"
enabled = true

[[transformers]]
name = "gofmt"
enabled = true

[[transformers]]
name = "rustfmt"
enabled = true

[[transformers]]
name = "rubocop"
enabled = true
```
Available transformers
Here is the full list of supported transformers and what they do:
Python transformers:
- black - Opinionated Python code formatter. Enforces consistent style with minimal configuration. Uses an 88-character line length by default.
- autopep8 - Formats Python code to conform to PEP 8. Less opinionated than Black and offers more configuration options.
- isort - Sorts Python imports into sections (standard library, third-party, local) and alphabetizes within each section.
- yapf - Google's Python formatter. Configurable via a .style.yapf file.
JavaScript/TypeScript transformers:
- prettier - The standard formatter for JavaScript, TypeScript, JSON, CSS, HTML, and Markdown. Reads configuration from .prettierrc if present.
Go transformers:
- gofmt - The standard Go code formatter. Non-configurable and universally used in Go projects.
- gofumpt - A stricter version of gofmt with additional formatting rules.
Rust transformers:
- rustfmt - The standard Rust code formatter. Reads configuration from rustfmt.toml if present.
Ruby transformers:
- rubocop - Ruby code formatter and linter. Reads configuration from .rubocop.yml.
You should only enable transformers for languages that have a corresponding analyzer enabled. Enabling the Black transformer without the Python analyzer serves no purpose and can cause confusion in your DeepSource configuration.
Transformer and analyzer interaction
A common question is whether transformers and analyzers conflict with each other. They do not. The analysis pipeline runs analyzers first to detect issues, then runs transformers to generate fixes for formatting-related issues. However, you should make sure your transformer configuration matches your analyzer configuration. For example, if your Python analyzer has max_line_length = 100 but you are using Black (which defaults to 88), you will get conflicting signals about line length.
Exclude patterns in detail
The exclude_patterns section is critical for reducing noise in your .deepsource.toml configuration. Without proper exclusions, DeepSource will analyze generated code, vendored dependencies, build artifacts, and other files that produce hundreds of irrelevant findings.
```toml
exclude_patterns = [
  "vendor/**",
  "node_modules/**",
  "dist/**",
  "build/**",
  "*.min.js",
  "*.min.css",
  "**/*.pb.go",
  "**/migrations/**",
  "**/__generated__/**",
  "coverage/**",
  ".next/**",
  "public/assets/**",
]
```
Pattern syntax
Exclude patterns use standard glob syntax:
- * matches any sequence of characters within a single directory level
- ** matches any number of directories (recursive matching)
- ? matches a single character
- [abc] matches any character in the bracket set
Some important exclude-pattern examples:
| Pattern | What it excludes |
|---|---|
| `vendor/**` | Everything in the vendor directory |
| `**/*.pb.go` | All protobuf-generated Go files in any directory |
| `*.min.js` | All minified JavaScript files at the root level |
| `**/migrations/**` | All database migration files in any nested path |
| `docs/**` | All documentation files |
| `**/fixtures/**` | All test fixture directories |
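If you want to sanity-check a pattern against real paths before committing, the semantics above translate naturally to a regular expression. The sketch below is illustrative only - DeepSource's actual matcher may differ in edge cases - but it captures the key distinction between single-level * and recursive **:

```python
# Illustrative translation of the glob semantics described above:
# "**" may cross "/" separators, while "*" and "?" stay within one
# directory level. A sketch, not DeepSource's actual matcher.
import re

def glob_to_regex(pattern: str) -> re.Pattern:
    out, i = [], 0
    while i < len(pattern):
        c = pattern[i]
        if c == "*":
            if pattern[i:i + 2] == "**":
                out.append(".*")       # recursive: may span directories
                i += 2
                if pattern[i:i + 1] == "/":
                    i += 1             # "**/" also matches zero directories
            else:
                out.append("[^/]*")    # single directory level only
                i += 1
        elif c == "?":
            out.append("[^/]")
            i += 1
        else:
            out.append(re.escape(c))
            i += 1
    return re.compile("^" + "".join(out) + "$")

matcher = glob_to_regex("**/*.pb.go")
print(bool(matcher.match("api/v1/user.pb.go")))  # True
print(bool(matcher.match("main.go")))            # False
```

Note how *.min.js compiled this way matches app.min.js at the root but not dist/app.min.js, exactly as the table states.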
What to exclude and what not to exclude
Always exclude:
- Vendored or copied third-party code (vendor/**, third_party/**)
- Build output and compiled artifacts (dist/**, build/**, .next/**)
- Generated code (**/*.pb.go, **/__generated__/**, **/generated/**)
- Minified files (*.min.js, *.min.css)
- Package manager directories (node_modules/**)
- Coverage reports (coverage/**, htmlcov/**)
Never exclude:
- Your actual source code directories
- Configuration files (these often contain security issues)
- Infrastructure-as-code files (Terraform, Kubernetes manifests)
- Scripts and automation code
Use test_patterns instead of exclude_patterns for:
- Test files and test directories
- Test fixtures and mock data
- Integration test configurations
The distinction matters because excluded files are invisible to DeepSource while test-patterned files are analyzed with test-appropriate rules.
Test patterns configuration
Test patterns tell DeepSource which files are test code. This is separate from exclusion - test files are still analyzed, but with different rule sensitivity.
```toml
test_patterns = [
  "tests/**",
  "test/**",
  "spec/**",
  "**/*_test.py",
  "**/*_test.go",
  "**/*Test.java",
  "**/*.test.js",
  "**/*.test.ts",
  "**/*.spec.js",
  "**/*.spec.ts",
  "**/*_spec.rb",
]
```
The patterns you need depend on your project’s testing conventions. Python projects typically use tests/** or **/*_test.py. Go projects use **/*_test.go. JavaScript projects commonly use **/*.test.js or **/*.spec.js. Java projects follow the **/*Test.java convention.
Configuring test patterns correctly also affects test coverage metrics. DeepSource uses these patterns to determine which files should have coverage and which files are the tests themselves. Without correct test patterns, your coverage numbers will be skewed because test files will count toward the lines-to-cover total.
Dependency file paths
Specifying dependency file paths helps DeepSource understand your project’s dependency graph, which improves the accuracy of import analysis and vulnerability detection.
```toml
[[analyzers]]
name = "python"
enabled = true

  [analyzers.meta]
  dependency_file_paths = [
    "requirements.txt",
    "requirements-dev.txt",
    "setup.py",
    "pyproject.toml",
    "Pipfile",
  ]
```
For Python projects, DeepSource reads these files to understand which packages are available, their versions, and whether certain imports are valid. For JavaScript projects, package.json is typically detected automatically, but specifying it explicitly can help in monorepo setups where multiple package.json files exist at different levels.
Multi-analyzer setup for polyglot projects
Most modern projects use more than one language. A typical web application might have a Python or Go backend, a JavaScript or TypeScript frontend, and infrastructure code in multiple formats. The example below shows a full multi-analyzer configuration:
```toml
version = 1

exclude_patterns = [
  "vendor/**",
  "node_modules/**",
  "dist/**",
  "build/**",
  "**/*.pb.go",
  "**/migrations/**",
  "coverage/**",
  ".terraform/**",
  "*.min.js",
]

test_patterns = [
  "tests/**",
  "**/*_test.py",
  "**/*_test.go",
  "**/*.test.ts",
  "**/*.test.js",
  "**/*.spec.ts",
  "**/*.spec.js",
]

[[analyzers]]
name = "python"
enabled = true

  [analyzers.meta]
  runtime_version = "3.x"
  max_line_length = 100
  skip_doc_coverage = ["module", "magic", "init"]
  dependency_file_paths = ["requirements.txt", "pyproject.toml"]

[[analyzers]]
name = "go"
enabled = true

  [analyzers.meta]
  import_root = "github.com/your-org/your-project"
  skip_generated = true

[[analyzers]]
name = "javascript"
enabled = true

  [analyzers.meta]
  plugins = ["react"]
  environment = ["browser", "node"]

[[transformers]]
name = "black"
enabled = true

[[transformers]]
name = "isort"
enabled = true

[[transformers]]
name = "gofmt"
enabled = true

[[transformers]]
name = "prettier"
enabled = true
```
This configuration analyzes Python, Go, and JavaScript/TypeScript code in a single repository. Each analyzer runs independently and produces its own set of findings. The exclude and test patterns apply globally across all analyzers.
Monorepo considerations
In a monorepo where different services live in separate directories, you might want to scope your exclude and test patterns more precisely:
```toml
exclude_patterns = [
  "services/api/vendor/**",
  "services/web/node_modules/**",
  "services/web/dist/**",
  "services/worker/vendor/**",
  "infrastructure/.terraform/**",
]

test_patterns = [
  "services/api/tests/**",
  "services/api/**/*_test.go",
  "services/web/**/*.test.ts",
  "services/worker/tests/**",
  "services/worker/**/*_test.py",
]
```
This approach keeps each service’s exclusions and test patterns clearly scoped, making it easier to audit the configuration as the monorepo grows.
Real-world .deepsource.toml examples
Python Django project
```toml
version = 1

exclude_patterns = [
  "**/migrations/**",
  "staticfiles/**",
  "media/**",
  "htmlcov/**",
  "docs/**",
  "**/conftest.py",
  "manage.py",
]

test_patterns = [
  "**/tests/**",
  "**/test_*.py",
  "**/*_test.py",
  "tests/**",
]

[[analyzers]]
name = "python"
enabled = true

  [analyzers.meta]
  runtime_version = "3.x"
  max_line_length = 120
  skip_doc_coverage = ["module", "magic", "init"]
  dependency_file_paths = ["requirements.txt", "requirements-dev.txt"]

[[transformers]]
name = "black"
enabled = true

[[transformers]]
name = "isort"
enabled = true
```
Django projects need special attention to migration exclusions. Django auto-generates migration files, and analyzing them produces noise without value. The staticfiles and media directories contain user uploads and collected static assets that should never be analyzed.
Go microservice
```toml
version = 1

exclude_patterns = [
  "vendor/**",
  "**/*.pb.go",
  "**/*.pb.gw.go",
  "**/mock_*.go",
  "docs/**",
  "scripts/**",
]

test_patterns = [
  "**/*_test.go",
  "testdata/**",
]

[[analyzers]]
name = "go"
enabled = true

  [analyzers.meta]
  import_root = "github.com/your-org/payment-service"
  skip_generated = true
  cgo_enabled = false

[[transformers]]
name = "gofmt"
enabled = true
```
Go projects with gRPC or protobuf dependencies generate .pb.go and .pb.gw.go files that should always be excluded. Mock files generated by tools like mockgen follow the mock_*.go pattern and should also be excluded.
React TypeScript frontend
```toml
version = 1

exclude_patterns = [
  "node_modules/**",
  "dist/**",
  "build/**",
  "coverage/**",
  "public/**",
  "**/*.d.ts",
  "**/__snapshots__/**",
  "*.config.js",
  "*.config.ts",
]

test_patterns = [
  "**/*.test.ts",
  "**/*.test.tsx",
  "**/*.spec.ts",
  "**/*.spec.tsx",
  "src/__tests__/**",
  "src/__mocks__/**",
]

[[analyzers]]
name = "javascript"
enabled = true

  [analyzers.meta]
  plugins = ["react"]
  environment = ["browser"]

[[transformers]]
name = "prettier"
enabled = true
```
For React projects, excluding TypeScript declaration files (*.d.ts) and Jest snapshots (__snapshots__) is important. Declaration files are either auto-generated or come from type packages, and snapshot files are machine-generated test artifacts.
Ruby on Rails application
```toml
version = 1

exclude_patterns = [
  "vendor/**",
  "db/schema.rb",
  "db/migrate/**",
  "log/**",
  "tmp/**",
  "public/**",
  "bin/**",
  "node_modules/**",
]

test_patterns = [
  "spec/**",
  "test/**",
]

[[analyzers]]
name = "ruby"
enabled = true

[[transformers]]
name = "rubocop"
enabled = true
```
Rails projects should exclude db/schema.rb (auto-generated database schema), migration files, and the standard Rails directories that contain runtime artifacts rather than source code.
Common configuration mistakes
After working with many .deepsource.toml files, certain mistakes appear repeatedly. Avoiding these will save you significant debugging time.
Mistake 1: Forgetting version = 1
Every .deepsource.toml file must start with version = 1. Without it, DeepSource will not parse the file and your repository will show zero analysis results. This is the most common cause of “DeepSource is not working” reports.
Mistake 2: Wrong analyzer names
Analyzer names must exactly match DeepSource’s expected values. Using "Python" instead of "python", "js" instead of "javascript", or "golang" instead of "go" will silently fail. The analyzer will not appear in your dashboard and no errors will be shown.
Mistake 3: Exclude patterns that are too broad
Patterns like **/*.py or src/** will exclude files you actually want analyzed. Always be specific with your exclusions. A pattern like **/generated/** is better than trying to match individual generated file extensions.
Mistake 4: Missing test patterns
Without test patterns, DeepSource applies production-level rules to your test code. This leads to noise - warnings about hardcoded values in test fixtures, complexity warnings in test setup methods, and documentation coverage flags on test functions. Always define test patterns to separate test code from production code.
Mistake 5: Conflicting transformer and analyzer settings
If your Python analyzer sets max_line_length = 120 but you enable the Black transformer (which defaults to 88 characters), DeepSource will simultaneously flag lines over 88 characters as formatting issues and accept lines up to 120 characters in the analyzer. Align these settings or configure Black’s line length in your pyproject.toml.
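One way to keep the two in agreement - assuming you want the analyzer's 120-character limit to win - is to raise Black's target in pyproject.toml, which Black reads under its documented [tool.black] table:

```toml
# pyproject.toml - align Black's formatting target with the
# max_line_length set in .deepsource.toml
[tool.black]
line-length = 120
```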
Mistake 6: Not excluding generated code
Generated files like protobuf outputs, GraphQL codegen results, OpenAPI client code, and database migration files should always be excluded. Analyzing them wastes processing time and fills your dashboard with findings you cannot fix because the source will be regenerated.
Mistake 7: Using exclude_patterns for test files
If you exclude test files instead of marking them as test patterns, DeepSource cannot calculate test coverage correctly. Test files should be analyzed - just with test-appropriate rules. Use test_patterns to identify them, not exclude_patterns to hide them.
Alternative: Zero-config analysis with CodeAnt AI
If managing a .deepsource.toml file across multiple repositories feels like unnecessary overhead, consider CodeAnt AI as an alternative. CodeAnt AI requires zero configuration files - no TOML, no YAML, no JSON configuration. It automatically detects your languages, frameworks, and project structure, and begins analyzing code immediately after connecting your repository.
CodeAnt AI is priced at $24-40/user/month depending on the plan and includes AI-powered code review, static analysis, and security scanning. Unlike DeepSource, where misconfigured analyzer settings can cause silent analysis failures, CodeAnt AI’s zero-config approach eliminates the entire class of configuration-related issues.
That said, the lack of a configuration file means less granular control. Teams that need precise exclusion patterns, specific language version targeting, or custom transformer setups will find DeepSource’s explicit configuration model more suitable. For a broader comparison, see our DeepSource alternatives roundup.
Validating and troubleshooting your configuration
After writing your .deepsource.toml file, follow these steps to verify it works correctly.
Step 1: Validate TOML syntax. Use a TOML validator to catch syntax errors before committing. Common issues include missing quotes around string values, incorrect bracket syntax for arrays and tables, and trailing commas in inline tables - TOML arrays allow a trailing comma (the examples in this guide use them), but inline tables do not.
Step 2: Commit to your default branch. DeepSource reads the configuration file from your default branch (usually main or master). If you commit it to a feature branch, DeepSource will not pick it up until that branch is merged.
Step 3: Check the DeepSource dashboard. After committing, navigate to your repository on the DeepSource dashboard. The configuration section will display any errors or warnings about your file. Green status indicators mean the analyzers are configured correctly and running.
Step 4: Trigger a manual analysis. If you do not want to wait for the next commit, trigger a manual analysis run from the dashboard. This forces DeepSource to re-read the configuration and run all enabled analyzers immediately.
Step 5: Review the first results. Look at the initial findings for obvious configuration issues. If you see hundreds of findings in generated code, your exclude patterns need work. If you see test-related false positives, your test patterns are incomplete. If you see syntax errors in valid code, your runtime version setting might be wrong.
For ongoing configuration management, consider adding your .deepsource.toml file to code review. Changes to analyzer configuration can have significant effects on your team’s workflow - adding a new analyzer introduces new findings, changing exclude patterns can expose previously hidden issues, and modifying transformer settings changes auto-formatting behavior.
For details on what DeepSource costs across different team sizes, see our DeepSource pricing breakdown.
Summary
The .deepsource.toml file is small but consequential. A well-configured file means clean, relevant analysis results with minimal noise. A poorly configured file means hundreds of false positives, missing coverage on critical code paths, and transformers that fight your existing formatting standards.
Start with the minimal configuration - version = 1 and one analyzer - and expand from there. Add exclude patterns as you identify noise sources. Add test patterns to separate test code from production code. Enable transformers only after confirming they align with your existing formatting conventions. And use a multi-analyzer setup when your project spans multiple languages.
The key takeaway is that .deepsource.toml configuration is not a one-time task. As your project grows, your dependencies change, and your team adopts new tools, revisit the configuration to keep it aligned with your actual codebase structure. A quarterly review of your .deepsource.toml file takes five minutes and prevents the gradual accumulation of irrelevant findings that erode developer trust in the tool.
Frequently Asked Questions
What is the .deepsource.toml file?
The .deepsource.toml file is the configuration file that tells DeepSource how to analyze your repository. It must be placed in the root directory of your repository and uses the TOML format. The file defines which analyzers to run (Python, JavaScript, Go, Java, Ruby, Rust, and others), which transformers to apply for auto-formatting, which files or directories to exclude from analysis, and where your test and dependency files are located. Without this file, DeepSource cannot analyze your code.
Where do I place the .deepsource.toml file?
The .deepsource.toml file must be placed in the root directory of your repository - the same level as your .git folder. DeepSource looks for this file at the repository root when it runs analysis. Placing it in a subdirectory, renaming it, or using a different format will cause DeepSource to skip your repository entirely. If you have a monorepo, you still use a single .deepsource.toml at the root and configure multiple analyzers within that one file.
How do I add multiple analyzers in .deepsource.toml?
Add multiple [[analyzers]] sections in your .deepsource.toml file - one for each language you want to analyze. Each section needs its own name field (like python, javascript, go, java, ruby, or rust) and its own enabled = true setting. You can also set analyzer-specific meta configurations within each section. DeepSource will run all enabled analyzers on every commit and pull request, and results from all analyzers appear in a single unified dashboard.
What analyzers does DeepSource support?
DeepSource supports analyzers for Python, JavaScript, TypeScript, Go, Java, Ruby, Rust, C#, Kotlin, PHP, Scala, Swift, and several other languages. Each analyzer has its own set of rules and optional meta configuration. The Python analyzer supports specifying the Python version and max line length. The Go analyzer supports import root and skip generated file settings. The JavaScript analyzer auto-detects TypeScript and JSX usage.
How do I exclude files from DeepSource analysis?
Use the exclude_patterns array in the top-level section of your .deepsource.toml file. Patterns follow glob syntax - for example, 'vendor/**' excludes the entire vendor directory, '**/*_test.go' excludes Go test files, and 'docs/**' excludes documentation. You can list as many patterns as needed. Excluded files will not be analyzed by any analyzer, which reduces noise from generated code, third-party dependencies, and build artifacts.
What are DeepSource transformers and how do I configure them?
Transformers are auto-formatting tools that DeepSource runs on your code to fix style and formatting issues automatically. Each transformer is defined in a [[transformers]] section in .deepsource.toml. Supported transformers include Black (Python formatting), Autopep8 (PEP 8 compliance), isort (Python import sorting), gofmt (Go formatting), Prettier (JavaScript and TypeScript formatting), RuboCop (Ruby formatting), rustfmt (Rust formatting), and others. When a transformer is enabled, DeepSource raises pull requests with formatting fixes.
How do I configure test patterns in .deepsource.toml?
Add a test_patterns array in the top-level section of your .deepsource.toml file. These patterns tell DeepSource which files are test files so it can apply different analysis rules to them and calculate test coverage accurately. For example, 'tests/**' matches a tests directory, '**/*_test.py' matches Python test files with the _test suffix, and 'spec/**' matches a Ruby-style spec directory. Without test patterns, DeepSource may flag test code with the same rules it applies to production code.
Can I specify dependency file paths in .deepsource.toml?
Yes. Use the dependency_file_paths array in the analyzer's meta section to tell DeepSource where your dependency manifests are located. For Python, this could be 'requirements.txt', 'Pipfile', or 'pyproject.toml'. For JavaScript, it is typically 'package.json'. Specifying these paths helps DeepSource perform more accurate dependency analysis and avoid false positives when analyzing third-party code usage patterns.
Why is DeepSource not analyzing my repository?
The most common reason is a missing or malformed .deepsource.toml file. Check that the file exists at the repository root, uses valid TOML syntax, has at least one analyzer with enabled = true, and uses a supported analyzer name. Other causes include the repository not being activated on the DeepSource dashboard, the webhook not being configured on your Git provider, or the default branch not containing the configuration file. Run a TOML validator on your file to catch syntax errors.
What is the difference between exclude_patterns and test_patterns?
exclude_patterns completely removes files from DeepSource analysis - excluded files are ignored by all analyzers and transformers as if they do not exist in the repository. test_patterns marks files as test code but still analyzes them. Test files receive different analysis rules - for example, DeepSource will not flag hardcoded credentials in test fixtures or suggest reducing cyclomatic complexity in test functions. Use exclude_patterns for generated code and vendor files; use test_patterns for your actual test suite.
How do I configure the Python analyzer version in .deepsource.toml?
Add an [analyzers.meta] section under your Python analyzer and set the runtime_version field. Supported values include 3.x versions like '3.10', '3.11', and '3.12'. Setting the correct Python version ensures DeepSource understands syntax features specific to your version - for example, match statements in Python 3.10+ or type union syntax in Python 3.10+. If you do not specify a version, DeepSource defaults to the latest supported Python 3.x version.
Does .deepsource.toml support environment variables or secrets?
No. The .deepsource.toml file does not support environment variable interpolation, secrets, or dynamic values. It is a static configuration file committed to your repository. All values must be hardcoded in the file. Since the file contains only analyzer configuration and file patterns - not credentials or tokens - there is no need for secret management. DeepSource authentication is handled separately through the dashboard and Git provider webhooks, not through the configuration file.