How to Set Up CodeRabbit: Complete Step-by-Step Guide (2026)
Set up CodeRabbit for AI code review in under 10 minutes. GitHub App install, .coderabbit.yaml config, review profiles, and troubleshooting all covered.
Why set up CodeRabbit for AI code review
Code review bottlenecks slow down every engineering team. Pull requests sit waiting for hours or days while senior engineers context-switch between their own work and reviewing other people’s code. When reviews finally happen, time pressure means reviewers focus on the obvious issues and miss the subtle bugs, security gaps, and logic errors that cause production incidents later.
CodeRabbit solves this by providing instant AI-powered code review on every pull request. Within minutes of opening a PR, CodeRabbit analyzes the diff in the context of your full repository and posts human-like review comments covering logic errors, security vulnerabilities, performance anti-patterns, and code style issues. It has reviewed over 13 million pull requests across more than 2 million connected repositories, making it the most widely adopted AI code review tool available.
The best part is that you can set it up in under 10 minutes with zero CI/CD configuration. CodeRabbit installs as a native GitHub App (or GitLab, Azure DevOps, and Bitbucket integration) and runs on its own infrastructure. There are no GitHub Actions workflows to write, no Docker containers to manage, and no API keys to configure.
This guide walks through every step of setting up CodeRabbit - from creating your account to configuring advanced review behavior with .coderabbit.yaml to onboarding your entire team.
What you need before you start
Before beginning the setup, confirm you have the following:
- A GitHub, GitLab, Azure DevOps, or Bitbucket account with at least one repository
- Admin or owner permissions on the repositories where you want to install CodeRabbit (required to authorize the app)
- At least one open pull request or the ability to create one for testing (optional but recommended for verifying the installation)
No other prerequisites are needed. CodeRabbit does not require a CI/CD pipeline, a Docker installation, API keys, or any local tooling. The entire setup happens through the browser.
Step 1: Sign up for a CodeRabbit account
Navigate to coderabbit.ai and create your account. Here is how:
- Open coderabbit.ai in your browser
- Click the “Get Started Free” button on the homepage
- Choose your Git platform - GitHub, GitLab, Azure DevOps, or Bitbucket
- Authorize CodeRabbit to access your account when prompted by your Git platform’s OAuth screen
For GitHub specifically, you will see a standard GitHub OAuth authorization page asking you to grant CodeRabbit permission to read your profile information. Click “Authorize CodeRabbit” to proceed.
After authorization, you land on the CodeRabbit dashboard. This is your central control panel for managing repositories, viewing review activity, and adjusting organization-level settings.
The free tier requires no credit card and gives you unlimited public and private repositories with AI-powered PR summaries and review comments. Rate limits of 200 files per hour and 4 PR reviews per hour apply, but for most small to mid-size teams, these limits are rarely hit in practice.
Step 2: Install the CodeRabbit GitHub App
The GitHub App installation is what connects CodeRabbit to your repositories. This is separate from the OAuth sign-in you completed in Step 1 - the OAuth grants CodeRabbit access to your identity, while the App installation grants it access to your repositories and pull requests.
- From the CodeRabbit dashboard, click “Add Repositories” or “Install GitHub App”
- GitHub will display the App installation page showing the permissions CodeRabbit needs:
- Read access to repository contents, metadata, and pull requests
- Write access to pull request comments and checks (so it can post review comments)
- Webhook events for pull request creation and updates (so it knows when to review)
- Choose your installation scope:
- All repositories - CodeRabbit will automatically review PRs on every repository in the organization or account, including future repositories
- Only select repositories - Choose specific repositories from a list; you can add more later
- Click “Install” to complete the installation
For organizations, you may need an organization owner to approve the installation. If you see a “Request” button instead of “Install,” the app request will be sent to your organization owner for approval.
Once installed, the CodeRabbit App appears in your GitHub organization settings under Settings > GitHub Apps (or in your personal account under Settings > Applications > Installed GitHub Apps).
Step 3: Select and configure repositories
After installing the App, configure which repositories CodeRabbit should actively review. While the App installation grants access, the CodeRabbit dashboard lets you fine-tune which repositories are active.
- Return to the CodeRabbit dashboard at app.coderabbit.ai
- Navigate to the “Repositories” tab
- You will see all repositories that the GitHub App has access to
- Toggle repositories on or off as needed
- For each active repository, you can configure basic settings:
- Review language - the language CodeRabbit writes its comments in (defaults to English)
- Auto-review - whether CodeRabbit reviews PRs automatically or only when requested (defaults to automatic)
- Draft PR handling - whether to review draft PRs (defaults to skip)
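These dashboard settings have configuration-file counterparts as well. As a rough sketch (the exact key names, such as `auto_review.drafts`, are assumptions here and should be checked against CodeRabbit's configuration reference), the defaults above map to something like:

```yaml
# Sketch of the per-repository defaults described above.
# Key names (auto_review.enabled, auto_review.drafts) are assumptions -
# verify against CodeRabbit's .coderabbit.yaml reference.
language: en-US        # review language (default: English)
reviews:
  auto_review:
    enabled: true      # review PRs automatically (the default)
    drafts: false      # skip draft PRs (the default)
```

Step 5 covers the full configuration file in detail.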
For teams just getting started, enable CodeRabbit on one or two repositories first. This lets you evaluate the review quality and tune the configuration before rolling it out broadly.
Step 4: Trigger your first AI-powered PR review
Open a pull request to see CodeRabbit in action. This is the moment where the setup pays off.
- Create a new branch in one of your enabled repositories
- Make a code change - even a small one like adding a new function, fixing a bug, or refactoring a method
- Push the branch and open a pull request targeting your default branch
- Wait 1 to 5 minutes for CodeRabbit to complete its review
CodeRabbit will post several things on your PR:
- A walkthrough summary as the first comment, describing what changed across all files in the PR, organized by file and purpose
- Inline review comments on specific lines of code where it identifies potential issues, improvements, or suggestions
- A review status indicating whether the review is complete and how many comments were generated
Here is what a typical CodeRabbit review looks like in practice. The summary comment appears at the top of the PR conversation:
```markdown
## Walkthrough
This PR adds input validation to the user registration endpoint.
The `registerUser` function now validates email format, password
strength, and username uniqueness before creating the user record.
Error responses use structured error objects with specific error codes.

## Changes

| File | Change Summary |
|------|---------------|
| src/api/auth.ts | Added validation logic with three new checks |
| src/types/errors.ts | New error code enum and structured error type |
| tests/api/auth.test.ts | Six new test cases covering validation scenarios |
```
Inline comments appear directly on the relevant lines in the diff, just like comments from a human reviewer. Each comment includes a description of the issue, why it matters, and often a suggested fix that you can apply with one click.
If CodeRabbit does not post any comments within 5 minutes, check the troubleshooting section at the end of this guide.
Step 5: Configure CodeRabbit with .coderabbit.yaml
The .coderabbit.yaml file is where you unlock CodeRabbit’s full power. While CodeRabbit works out of the box with sensible defaults, the configuration file lets you tailor review behavior to your team’s specific needs, codebase structure, and coding standards.
Create a file named .coderabbit.yaml in the root of your repository:
```yaml
# .coderabbit.yaml
language: en-US
reviews:
  profile: chill
  request_changes_workflow: false
  high_level_summary: true
  high_level_summary_placeholder: "@coderabbitai summary"
  auto_title_placeholder: "@coderabbitai"
  poem: false
  review_status: true
  collapse_walkthrough: false
  sequence_diagrams: true
  changed_files_summary: true
  path_filters:
    - "!**/*.lock"
    - "!**/*.generated.*"
    - "!**/dist/**"
    - "!**/node_modules/**"
    - "!**/*.min.js"
    - "!**/coverage/**"
    - "!**/__snapshots__/**"
  path_instructions:
    - path: "src/api/**"
      instructions: |
        Review all API endpoints for:
        - Input validation on all parameters
        - Proper error handling with appropriate HTTP status codes
        - Authentication and authorization checks
        - Rate limiting considerations
    - path: "src/db/**"
      instructions: |
        Review all database code for:
        - SQL injection prevention (parameterized queries only)
        - Proper connection handling and cleanup
        - Transaction usage where multiple writes occur
    - path: "**/*.test.*"
      instructions: |
        For test files, focus on:
        - Test coverage of edge cases
        - Proper assertion usage
        - Mock cleanup and isolation between tests
chat:
  auto_reply: true
knowledge_base:
  opt_out: false
  learnings:
    scope: auto
```
Commit this file to your default branch (usually main). CodeRabbit reads the configuration on every PR and applies your settings immediately - no restart or reinstallation is required.
Key configuration options explained
profile controls how verbose CodeRabbit’s reviews are. There are three options:
- `chill` - Fewer comments, focused on significant issues like bugs, security vulnerabilities, and logic errors. Best for teams that want minimal noise.
- `assertive` - More thorough reviews covering style, naming, documentation, and best practices in addition to bugs. Best for teams that want comprehensive feedback.
- `followup` - Like assertive, but also checks whether previous review comments were addressed in subsequent commits. Best for teams that want accountability.
path_filters controls which files CodeRabbit reviews. Use negation patterns (prefixed with !) to exclude files. Common exclusions include lock files, build output, auto-generated code, and test snapshots.
path_instructions is the most powerful feature. It lets you provide natural language instructions for specific directories or file patterns. CodeRabbit reads these instructions and adjusts its review focus accordingly. This means your API routes get reviewed for authentication and input validation, your database code gets checked for SQL injection, and your tests get evaluated for edge case coverage - all automatically.
knowledge_base enables CodeRabbit to learn from your repository over time. When enabled, CodeRabbit builds context about your codebase patterns, conventions, and architecture, which improves review quality as it learns.
Step 6: Fine-tune review profiles and instructions
Once you have the basic configuration working, refine it based on real review feedback. After CodeRabbit has reviewed 10 to 20 pull requests, you will have a good sense of which suggestions are helpful and which are noise.
Adding global review instructions
You can add a top-level instructions field to guide CodeRabbit’s behavior across all files:
```yaml
reviews:
  profile: chill
  instructions: |
    This is a Node.js backend service using Express and PostgreSQL.
    We follow these conventions:
    - All async functions must use try/catch with proper error logging
    - Database queries must use parameterized statements
    - API responses must follow our standard envelope format: { data, error, meta }
    - Do not comment on variable naming unless the name is actively misleading
    - Focus on bugs, security issues, and logic errors over style preferences
```
These global instructions apply to every file in every PR. They are particularly useful for setting the tone of reviews and preventing false positives on things your team has already decided are acceptable.
Creating path-specific review criteria
Expand your path_instructions as you identify patterns:
```yaml
path_instructions:
  - path: "src/middleware/**"
    instructions: |
      Middleware must:
      - Always call next() or send a response
      - Not modify the request object beyond adding typed properties
      - Include error handling that passes errors to the error middleware
  - path: "src/migrations/**"
    instructions: |
      Database migrations must:
      - Be reversible (include both up and down functions)
      - Not drop columns or tables that contain production data without a data migration step
      - Use transactions for multi-step changes
  - path: "*.config.*"
    instructions: |
      Configuration files should be reviewed for:
      - Hardcoded secrets or credentials (flag immediately)
      - Environment-specific values that should use env variables
      - Reasonable default values
```
Adjusting the review profile per team feedback
If your team finds the chill profile too quiet, switch to assertive and add exclusions for the types of comments that generate the most noise:
```yaml
reviews:
  profile: assertive
  instructions: |
    Do not comment on:
    - Import ordering (handled by our linter)
    - Line length (handled by Prettier)
    - Missing JSDoc comments on internal functions
    Focus extra attention on:
    - Error handling completeness
    - Security vulnerabilities
    - Performance in database queries and loops
```
This approach gives you the breadth of assertive mode while filtering out the categories of feedback that your other tools already handle.
Step 7: Interact with CodeRabbit in PR comments
CodeRabbit is not a one-way tool - it is conversational. Every comment it posts is the start of a potential conversation. Understanding how to interact with CodeRabbit effectively makes the difference between treating it as a noisy bot and treating it as a useful team member.
Replying to review comments
When CodeRabbit posts an inline comment on your PR, you can reply directly to it just like you would reply to a human reviewer. CodeRabbit understands natural language and responds conversationally:
- Ask for clarification: “Why is this a security issue? We validate the input in the middleware layer.”
- Request an alternative: “This approach would break our caching layer. Can you suggest an alternative that preserves the cache key?”
- Accept and ask for the fix: “Good catch. Can you generate the fix for this?”
CodeRabbit remembers the context of the conversation within the same PR thread, so follow-up questions work naturally.
Using command keywords
CodeRabbit responds to specific commands posted as PR comments:
| Command | What it does |
|---|---|
| `@coderabbitai review` | Triggers a full re-review of the PR |
| `@coderabbitai resolve` | Dismisses a specific review comment |
| `@coderabbitai explain` | Provides a detailed explanation of a finding |
| `@coderabbitai generate docstring` | Generates documentation for the function in context |
| `@coderabbitai configuration` | Shows the current configuration for the repository |
| `@coderabbitai help` | Lists all available commands |
Teaching CodeRabbit your preferences
Every interaction you have with CodeRabbit teaches it about your preferences. When you dismiss a comment with an explanation (“We intentionally use any here because the type comes from a third-party library”), CodeRabbit stores that as a learning and avoids making the same suggestion in future reviews.
This feedback loop is critical during the first two weeks of adoption. Teams that actively reply to and correct CodeRabbit’s comments typically see a significant reduction in false positives by the end of the first month.
Step 8: Onboard your team
Rolling out CodeRabbit to a team requires communication and a gradual approach. Developers who discover automated review comments on their PR without context tend to react negatively - it feels like surveillance rather than support. A thoughtful rollout makes the difference between adoption and rejection.
Week 1: Pilot with early adopters
- Select one or two pilot repositories - choose repos with active PR activity and developers who are open to trying new tools
- Set the review profile to `chill` - this minimizes noise during the evaluation period
- Brief the pilot group - send a short message explaining what CodeRabbit is, that it will post AI review comments on PRs, and that the feedback is advisory (not blocking merges)
- Collect feedback - ask pilot users to note which comments were helpful and which were noise
Week 2: Tune and expand
- Review pilot feedback - identify common false positives and add path filters or instructions to address them
- Update `.coderabbit.yaml` based on what you learned during the pilot
- Expand to 3 to 5 additional repositories - choose a mix of frontend and backend repos to test across different codebases
- Share examples of helpful catches with the broader team to build confidence in the tool
Week 3 and beyond: Full rollout
- Enable CodeRabbit on all active repositories either through the dashboard or by switching the GitHub App to “All repositories” mode
- Consider switching to the `assertive` profile if the team wants more comprehensive feedback
- Set up branch protection to require conversation resolution on PRs (this ensures developers read CodeRabbit's comments without making the AI check a blocking requirement)
- Establish a feedback cadence - review CodeRabbit’s effectiveness monthly and adjust the configuration as needed
Communicating the rollout
Here is a template message you can adapt for your team:
We are setting up CodeRabbit for AI-powered code review on our repositories. Starting today, you will see automated review comments on your pull requests from the CodeRabbit bot. These comments are advisory - they do not block merges - and cover things like potential bugs, security issues, and logic errors. The goal is to get instant feedback so you can fix issues before human reviewers look at the PR, not to add more hoops. If you see a comment that is wrong or unhelpful, reply to it and explain why. CodeRabbit learns from your feedback and gets better over time. Questions? Ask in our team channel.
Step 9: Configure advanced settings
Once your team is comfortable with the basics, explore these advanced configuration options.
Enabling auto-fix suggestions
On the Pro plan, CodeRabbit can generate one-click fix suggestions for issues it identifies. These appear as GitHub suggested changes that you can commit directly from the PR interface:
```yaml
reviews:
  auto_review:
    enabled: true
  tools:
    enabled: true
```
Integrating with Jira or Linear
CodeRabbit can read linked Jira or Linear issues to understand the context of a PR. When it knows the ticket description and acceptance criteria, it can validate whether the PR actually addresses the requirements:
```yaml
integrations:
  jira:
    enabled: true
    project_keys:
      - "BACKEND"
      - "FRONTEND"
  linear:
    enabled: true
```
Setting up Slack notifications
Route review summaries to a Slack channel so the team can see review activity without leaving Slack:
```yaml
integrations:
  slack:
    enabled: true
    channel: "#code-reviews"
```
Configuring for monorepos
If your repository is a monorepo with multiple services, use path instructions to apply different review criteria to each service:
```yaml
reviews:
  path_instructions:
    - path: "services/auth/**"
      instructions: |
        This is the authentication service. Focus on:
        - Token validation and expiration handling
        - Password hashing using bcrypt (not MD5 or SHA)
        - OWASP authentication best practices
    - path: "services/payments/**"
      instructions: |
        This is the payment processing service. Focus on:
        - PCI DSS compliance patterns
        - Idempotency of payment operations
        - Decimal precision for currency calculations
    - path: "packages/shared/**"
      instructions: |
        This is shared library code used by all services. Focus on:
        - Backward compatibility of exported APIs
        - Type safety and generic usage
        - Performance since this code runs in every service
```
Step 10: Troubleshoot common issues
If CodeRabbit is not working as expected, work through these common issues in order.
CodeRabbit is not reviewing pull requests
- Check the GitHub App installation. Go to your GitHub organization or account settings, then to Applications > Installed GitHub Apps. Verify that CodeRabbit is listed and has access to the repository in question.
- Check repository status in the dashboard. Log in to app.coderabbit.ai and verify that the repository is toggled on.
- Check if the PR is a draft. CodeRabbit skips draft pull requests by default. Either mark the PR as ready for review or configure CodeRabbit to review drafts.
- Check path filters. If your `.coderabbit.yaml` excludes the paths that changed in the PR, CodeRabbit will have nothing to review and may not post any comments.
- Check rate limits. On the free tier, you are limited to 4 PR reviews per hour. If you have exceeded this limit, reviews are queued.
- Check the webhook delivery. In your GitHub repository settings, navigate to Webhooks and check the recent deliveries for the CodeRabbit webhook. Failed deliveries indicate a connectivity issue between GitHub and CodeRabbit’s servers.
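If the cause is draft handling specifically, you can opt in to draft reviews rather than marking PRs ready early. A minimal sketch - the `drafts` key name under `reviews.auto_review` is an assumption here, so verify it against CodeRabbit's configuration reference:

```yaml
# Sketch: opt in to reviewing draft PRs (skipped by default).
# The drafts key name is an assumption - check the config reference.
reviews:
  auto_review:
    enabled: true
    drafts: true
```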
CodeRabbit posts too many comments
- Switch the review profile to `chill` if you are currently on `assertive` or `followup`
- Add path filters to exclude auto-generated files, lock files, and build artifacts
- Add global instructions telling CodeRabbit what not to comment on (e.g., “Do not comment on import ordering, line length, or missing comments”)
- Exclude test snapshot files and other files that change frequently but do not need AI review
CodeRabbit comments are not relevant to your codebase
- Add path-specific instructions describing the architecture and conventions of each part of your codebase
- Reply to irrelevant comments with explanations of why they do not apply - CodeRabbit learns from these interactions
- Add framework-specific context in the global instructions (e.g., “This is a Next.js application using the App Router. Server components do not have access to browser APIs.”)
- Enable the knowledge base so CodeRabbit builds context about your repository over time:
```yaml
knowledge_base:
  opt_out: false
  learnings:
    scope: auto
```
YAML configuration is not taking effect
- Verify the file name. It must be exactly `.coderabbit.yaml` (not `.coderabbit.yml`, not `coderabbit.yaml`)
- Verify the file location. It must be in the repository root, not in a subdirectory
- Verify the file is on the default branch. CodeRabbit reads the configuration from the default branch (usually `main`), not from the PR branch
- Validate the YAML syntax. Use a YAML validator to check for indentation errors, missing colons, or incorrect nesting
- Trigger a re-review by commenting `@coderabbitai review` on a PR to force CodeRabbit to re-read the configuration
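Indentation is the most common YAML pitfall. For comparison, a minimal valid file looks like this - child keys sit two spaces under their parent, and list items two spaces under their key:

```yaml
# Minimal valid .coderabbit.yaml for checking your indentation against
language: en-US
reviews:              # top-level section, no indentation
  profile: chill      # child key: two spaces
  path_filters:
    - "!**/*.lock"    # list item: two more spaces
```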
Permission errors during installation
- For personal accounts: Ensure you are the account owner. Only the owner can install GitHub Apps.
- For organizations: GitHub App installations require organization owner approval. If you see a “Request” button instead of “Install,” the request is sent to your org owner. Ask them to approve the CodeRabbit app in the organization settings under Third-party access.
- For repositories with restricted access: Some organizations restrict which apps can access specific repositories. Check your organization’s GitHub App policies and ensure CodeRabbit is allowed.
Best practices for getting the most out of CodeRabbit
Follow these practices to maximize the value CodeRabbit delivers to your team.
Start with the chill profile and increase strictness over time. The worst outcome is overwhelming your team with comments on day one and having them tune out all automated feedback permanently. Begin with chill, let the team build trust in the tool, and then switch to assertive after two to four weeks.
Write descriptive PR titles and descriptions. CodeRabbit uses the PR title and description as context for its review. A PR titled “fix bug” gives CodeRabbit almost nothing to work with. A PR titled “Fix race condition in user session cleanup” with a description of the problem helps CodeRabbit provide more targeted and useful feedback.
Keep pull requests small. CodeRabbit - like human reviewers - provides better feedback on focused PRs. A PR that changes 50 lines across 3 files gets more precise comments than a PR that changes 2,000 lines across 40 files. Aim for PRs under 400 lines of changed code.
Reply to comments instead of silently dismissing them. Every reply teaches CodeRabbit about your preferences. A team that spends two weeks actively correcting CodeRabbit’s false positives ends up with a tool that is far more accurate than a team that silently ignores everything.
Use CodeRabbit alongside static analysis, not instead of it. CodeRabbit catches logic errors, design issues, and contextual problems that rule-based tools miss. Static analysis tools like SonarQube and Semgrep provide deterministic bug detection, security scanning, and quality gate enforcement that AI review cannot replace. The strongest setup uses both layers together.
Review the CodeRabbit dashboard monthly. The dashboard shows metrics like review activity, common issue types, and resolution rates. Use these insights to adjust your .coderabbit.yaml configuration and identify areas of your codebase that generate the most review findings.
What comes after setup
Once CodeRabbit is running smoothly, consider these next steps.
Upgrade to the Pro plan if your team consistently hits the free-tier rate limits or wants access to auto-fix suggestions, 40+ built-in linters, and integrations with Jira, Linear, and Slack. The Pro plan costs $24/user/month on annual billing, and CodeRabbit only charges for developers who create pull requests - reviewers, managers, and non-coding team members are not counted.
Add static analysis tools like SonarQube or Semgrep to complement CodeRabbit’s AI review with deterministic rule-based scanning. Our guide on how to automate code reviews covers the full three-layer setup including linting, static analysis, and AI review.
Explore CodeRabbit’s advanced features including custom review profiles for different teams within your organization, integration with your issue tracker for context-aware reviews, and the knowledge base for repository-specific learning.
The goal is a review pipeline where CodeRabbit handles the initial pass - catching bugs, flagging security issues, and validating logic - so that your human reviewers can focus on architecture, design decisions, and the high-level questions that actually require human judgment. Teams that reach this state typically report a 30 to 60 percent reduction in review cycle time and significantly fewer defects reaching production.
Frequently Asked Questions
How do I set up CodeRabbit on GitHub?
Go to coderabbit.ai and click Get Started Free. Sign in with your GitHub account and authorize the CodeRabbit GitHub App. Select the repositories you want to enable - either all repositories or specific ones. Once installed, CodeRabbit automatically reviews every new pull request. No GitHub Actions workflow or CI/CD configuration is needed. The entire setup takes under 5 minutes.
Is CodeRabbit free to use?
Yes. CodeRabbit offers a free tier that covers unlimited public and private repositories with AI-powered PR summaries and review comments. The free tier has rate limits of 200 files per hour and 4 PR reviews per hour but imposes no cap on team members or repositories. CodeRabbit Pro is also free forever for open-source projects with public repositories. The Pro plan at $24/user/month removes rate limits and adds advanced features like auto-fix suggestions and 40+ built-in linters.
How do I configure CodeRabbit with a YAML file?
Create a file named .coderabbit.yaml in your repository root. This file controls review behavior including language, review profile (chill, assertive, or followup), path filters, path-specific instructions, and chat settings. Commit and push the file to your default branch. CodeRabbit reads this configuration on every PR and applies your custom settings automatically. No restart or reinstallation is required after changing the config.
What is the .coderabbit.yaml file?
The .coderabbit.yaml file is CodeRabbit's repository-level configuration file. It lives in your repository root and controls how CodeRabbit reviews pull requests for that specific repository. You can configure the review profile, exclude file paths from review, add natural language instructions for specific directories, enable or disable features like sequence diagrams and PR summaries, and set the review language. Each repository can have its own configuration.
How do I exclude files from CodeRabbit review?
Add a path_filters section under reviews in your .coderabbit.yaml file. Use negation patterns with an exclamation mark prefix to exclude paths. For example, adding '!**/*.lock' excludes all lock files, '!**/dist/**' excludes build output, and '!**/node_modules/**' excludes dependencies. CodeRabbit supports glob patterns, so you can be as specific or broad as needed. Changes to path filters take effect on the next PR without any restart.
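Put together, the exclusions mentioned in this answer look like this:

```yaml
# Example path_filters matching the exclusions described above
reviews:
  path_filters:
    - "!**/*.lock"           # all lock files
    - "!**/dist/**"          # build output
    - "!**/node_modules/**"  # dependencies
```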
How do I interact with CodeRabbit in pull request comments?
Reply directly to any CodeRabbit comment on your PR to start a conversation. You can ask it to explain a finding, suggest a different approach, or generate code. Use @coderabbitai review to trigger a full re-review. Use @coderabbitai resolve to dismiss a comment. Use @coderabbitai generate docstring to create documentation for a function. CodeRabbit responds conversationally and remembers context within the same PR thread.
Does CodeRabbit work with GitLab, Azure DevOps, and Bitbucket?
Yes. CodeRabbit supports GitHub, GitLab, Azure DevOps, and Bitbucket. The setup process is similar across all platforms - sign in with your platform account, authorize CodeRabbit, and select repositories. The .coderabbit.yaml configuration file works identically across all supported platforms. GitLab users authenticate through GitLab OAuth, and Azure DevOps and Bitbucket users connect through their respective platform authentication flows.
How long does CodeRabbit take to review a pull request?
CodeRabbit typically completes its review within 1 to 5 minutes after a pull request is opened or updated. Review time depends on the size of the diff and the complexity of the changes. Small PRs with fewer than 100 changed lines are usually reviewed in under 2 minutes. Large PRs with thousands of changed lines may take up to 5 minutes. On the free tier, if you exceed 4 reviews per hour, additional reviews are queued until capacity is available.
Can I customize what CodeRabbit focuses on during review?
Yes. Use the path_instructions section in .coderabbit.yaml to provide natural language instructions for specific directories or file patterns. For example, you can tell CodeRabbit to focus on input validation and authentication for API routes, or to check for SQL injection prevention in database code. You can also set a global review instruction using the instructions field. CodeRabbit also learns from your interactions - when you dismiss or accept suggestions, it adjusts future reviews accordingly.
What is the difference between chill, assertive, and followup review profiles?
The chill profile provides fewer comments and focuses only on significant issues like bugs, security vulnerabilities, and logic errors. The assertive profile is more thorough and comments on style, naming, documentation, and best practices in addition to bugs and security. The followup profile is similar to assertive but also follows up on previous review comments to check whether they were addressed in subsequent commits. Most teams start with the chill profile and switch to assertive once they are comfortable with the tool.
How do I onboard my team to CodeRabbit?
Start by installing CodeRabbit on one or two pilot repositories and setting the review profile to chill. Share a brief message with your team explaining that CodeRabbit will post AI review comments on PRs and that the feedback is advisory, not blocking. Encourage developers to reply to comments rather than silently dismissing them. After one to two weeks, review the team's experience, adjust the .coderabbit.yaml configuration based on feedback, and expand to additional repositories. Avoid enabling all repositories at once.
Why is CodeRabbit not reviewing my pull requests?
Check these common causes: the CodeRabbit GitHub App may not be installed on the repository, the repository may be excluded in your CodeRabbit dashboard settings, the PR may be marked as a draft (CodeRabbit skips drafts by default), or the PR may only contain changes to files excluded by your path_filters configuration. Also verify that the app has the necessary permissions by checking the GitHub Apps section in your repository settings. If none of these apply, visit the CodeRabbit dashboard to check for error messages or rate limit status.
Can I use CodeRabbit alongside SonarQube or other static analysis tools?
Yes, and many teams do. CodeRabbit provides AI-powered semantic review while tools like SonarQube, Semgrep, and DeepSource provide deterministic rule-based analysis. They complement each other because CodeRabbit catches logic errors, missing edge cases, and contextual issues that rules cannot cover, while static analysis tools enforce quality gates and track technical debt over time. There are no conflicts between running CodeRabbit and other tools on the same repository.