How to Identify and Fix Code Smells in Kotlin

AI-powered code review tools are revolutionizing how teams maintain code quality. For Kotlin developers, these tools can automatically catch bugs, style issues, and even subtle code smells that hurt maintainability. By automating mundane review tasks, Panto’s AI lets your team ship features faster while still enforcing best practices.
In this tutorial we’ll define code smells, see common examples in Kotlin, and show how Panto’s GitHub-integrated AI review can spot and fix them in a real project.
What Are Code Smells and Why They Matter
Code smells are patterns in code that signal deeper design or maintenance problems. They aren’t bugs in the sense of breaking functionality, but they reduce code quality and make future changes harder. For example, a code smell might be an overly long function or duplicated logic. As one source puts it, a smell is “a piece of code that works now but may generate problems… because the code is difficult to understand, modify, or test.” Left unchecked, smells accumulate and create technical debt, slowing down development and increasing bugs down the road.
In practice, catching code smells early is key to maintainability. Simple refactorings suggested by tools can enhance maintainability and reduce technical debt. For example, automatically extracting duplicate code into a function improves readability and reuse. Automated reviews like Panto’s can pinpoint these issues immediately, so you fix them before they spread.
Common Code Smells in Kotlin
Kotlin brings its own best practices, but many familiar smells still apply. Common examples include:
- Long functions or classes (“Long Method/Class”): doing too much in one place. Very long blocks of code are hard to read and test.
- Duplicate code: copying similar logic in multiple places. This violation of DRY means a bug might need fixing in several spots.
- Tight coupling: classes or modules that depend too heavily on each other, making changes ripple through the code.
- Primitive obsession: overusing basic types (e.g. many String or Int parameters) instead of richer domain types or enums.
- Magic numbers and literals: scattering unexplained constants in code instead of named variables or enums.
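As a quick illustration of the magic-numbers smell, compare these two hypothetical versions of an eligibility check (the function and constant names here are invented for the example):

```kotlin
// Hypothetical sketch: the bare 18 is a magic number whose meaning
// the reader has to guess from context.
fun isEligibleMagic(age: Int): Boolean = age >= 18

// Naming the constant documents the rule and gives it one place to change.
const val MINIMUM_AGE = 18

fun isEligible(age: Int): Boolean = age >= MINIMUM_AGE
```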
Beyond these, Kotlin-specific issues often show up:
- Null-safety problems: unnecessary null checks or use of the unsafe !! operator can make code fragile. Some Kotlin experts consider heavy use of nullable types itself a code smell.
- Overly generic types: for instance, a function returning Any (or a raw List<Any>) obscures intent and type safety.
- Business logic in data classes: Kotlin’s data classes should hold data, not behavior. Putting business logic into a data class mixes concerns and makes testing harder. It’s better to move that logic into dedicated classes or functions.
Each of these smells increases complexity or risk. For example, adding optional properties or logic to a core data class violates separation of responsibility. Similarly, ignoring null safety can invite NullPointerExceptions at runtime. By contrast, idiomatic Kotlin constructs (like the safe-call ?., the Elvis operator ?:, listOfNotNull(), etc.) help avoid these issues.
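To make the data-class smell concrete, here is a hedged sketch; the Invoice types and the tax rule are invented purely for illustration:

```kotlin
// Smell: a business rule living inside a data class (hypothetical example).
data class InvoiceWithLogic(val subtotal: Double, val taxRate: Double) {
    fun totalWithTax(): Double = subtotal * (1 + taxRate)
}

// Better: the data class holds only data; the rule lives in its own function,
// which can be tested and changed without touching the data model.
data class Invoice(val subtotal: Double, val taxRate: Double)

fun totalWithTax(invoice: Invoice): Double =
    invoice.subtotal * (1 + invoice.taxRate)
```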
Panto’s AI review is trained to flag these smells in your Kotlin code, making your codebase more robust over time.
Integrating Panto AI into GitHub
Setting up Panto for code reviews is simple. First, install the Panto GitHub App on the repository (just a few clicks). Once installed, Panto can be invoked on any PR. For example, commenting /review on a pull request triggers an instant AI analysis. (You can also enable automatic reviews so that every PR gets checked.) The AI model then reads the code changes, flags smells, and suggests fixes in comments on the PR.
- Install Panto on GitHub: Go to the GitHub marketplace link in Panto’s docs and add Panto to your organization or repo.
- Trigger a review: In a PR conversation, add a comment /review. Panto’s bot will reply with an analysis shortly.
With Panto active, let’s inspect the code smells it found.
Panto Review: Before-and-After Fixes
In one pull request, Panto identified several smells in the Kotlin code and suggested concrete fixes. Below are a couple of examples demonstrating the before-and-after code. (// ... indicates code omitted for brevity.)
Null-safety and idiomatic Kotlin
In one function, the original code used explicit null checks on parameters:
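The PR’s actual snippet isn’t reproduced here, but the pattern looked roughly like this hypothetical greeting function (the User type and greeting name are invented for illustration):

```kotlin
data class User(val name: String?)

// Hypothetical "before" code: every nullable value is checked by hand.
fun greeting(user: User?): String {
    if (user != null) {
        val name = user.name
        if (name != null) {
            return "Hello, $name"
        }
    }
    return "Hello, guest"
}
```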
Panto recommended using Kotlin’s safe-call (?.) and Elvis (?:) operator to simplify this. The refactored code combines the checks into one expression:
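A sketch of what such a refactor looks like (User and greeting are hypothetical names used for illustration, not the PR’s actual code):

```kotlin
data class User(val name: String?)

// Hypothetical "after" code: safe-call plus Elvis collapses the nested checks.
fun greeting(user: User?): String =
    "Hello, ${user?.name ?: "guest"}"
```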
This version is shorter and handles nulls cleanly. Null-safety features catch potential null issues at compile time, making code safer and more maintainable. Using ?: "guest" ensures that if user or user.name is null, we default to "guest" without a crash. This change addresses the code smell of manual null handling – Panto even flags excessive nullability as a smell.
Simplifying list operations
Another common pattern is merging two nullable lists. The original code had:
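As a stand-in for the original (not shown here), a manual merge of two nullable lists typically looks like this (mergeTags is a hypothetical name for illustration):

```kotlin
// Hypothetical "before" code: manual null handling around each list.
fun mergeTags(a: List<String>?, b: List<String>?): List<String> {
    val result = mutableListOf<String>()
    if (a != null) {
        result.addAll(a)
    }
    if (b != null) {
        result.addAll(b)
    }
    return result
}
```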
Panto suggested leveraging Kotlin’s built-in collection functions. The improved version uses listOfNotNull and flatten to handle nulls elegantly:
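A sketch of what the one-expression version looks like (mergeTags is a hypothetical name for illustration):

```kotlin
// Hypothetical "after" code: listOfNotNull drops any null list,
// and flatten() concatenates whatever remains.
fun mergeTags(a: List<String>?, b: List<String>?): List<String> =
    listOfNotNull(a, b).flatten()
```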
Here listOfNotNull(a, b) creates a list containing a and b if they’re non-null, and flatten() concatenates them. This single expression replaces many lines of checks. Using functions like listOfNotNull and filterNotNull is a clean way to deal with nullable collections. This fix removes boilerplate logic and prevents a null-safety smell.
In other parts of the PR, Panto also pointed out a long function doing too many tasks. For example, if a function was both parsing input and printing output, Panto would recommend breaking it into smaller functions or extracting logic into helper methods. This enforces the single-responsibility principle, making each function easier to test and maintain.
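A minimal sketch of that kind of split, with invented names, might look like:

```kotlin
// Hypothetical "before": parsing and printing tangled in one function.
fun reportScores(raw: String) {
    for (part in raw.split(",")) {
        val score = part.trim().toIntOrNull()
        if (score != null) println("Score: $score")
    }
}

// Refactored sketch: each helper does one job and is testable on its own.
fun parseScores(raw: String): List<Int> =
    raw.split(",").mapNotNull { it.trim().toIntOrNull() }

fun printScores(scores: List<Int>) =
    scores.forEach { println("Score: $it") }
```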
Overall, Panto’s suggestions focused on stronger typing and clearer responsibilities. For instance, if a function used very generic types (Any?) or had unclear return types, Panto would advise giving them explicit, specific types. This makes the code self-documenting. In our examples above, switching to List<String> and using Kotlin operators improved both safety and readability.
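A small hypothetical sketch of tightening a return type (loadNames is an invented name for illustration):

```kotlin
// Hypothetical "before": Any? hides what callers actually receive,
// forcing casts at every call site.
fun loadNamesUntyped(): Any? = listOf("alice", "bob")

// "After": the explicit List<String> documents intent and restores type safety.
fun loadNames(): List<String> = listOf("alice", "bob")
```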
How AI Tools Boost Productivity and Code Quality
Using an AI code reviewer like Panto gives broad benefits. By catching code smells automatically, Panto helps standardize code quality and prevent technical debt. Its refactoring suggestions (like combining duplicate code) immediately reduce boilerplate and duplication, which enhances maintainability.
More generally, studies show that AI-assisted reviews let developers complete tasks significantly faster – for example, one report found a 26% speedup with AI support. Panto also frees developers from repetitive checks, so teams can focus on high-impact work. AI can handle the time-consuming and often inconsistent parts of review, and in doing so it enables the team to concentrate on architecture or features.
Furthermore, AI reviews like Panto’s work around the clock, ensuring no PR falls through the cracks. They enforce consistent standards regardless of who is on the team. In practice, this means spotting forgotten TODOs, insecure code, or outdated patterns early. All of this adds up: higher code quality and less manual effort. Panto’s own benchmarks categorize its feedback as aimed at catching bugs, improving design, and optimizing performance – exactly the kind of insights teams need.
What’s Next: Improving Your Kotlin Workflow with Panto
Panto is easy to adopt in your Kotlin projects. Try installing the Panto GitHub App on your own repo (just 2–3 clicks), and then comment /review on a pull request to see instant feedback. As you’ve seen, Panto will call out null-safety improvements, encourage more functional Kotlin idioms, and highlight any large methods or data-class misuse. Over time, this keeps your codebase clean and maintainable.
In summary, AI code reviews give you the best of both worlds: speed and depth. They speed up reviews while enforcing code quality. By catching code smells automatically, Panto helps prevent bugs and technical debt before they hurt your project.
Give Panto a try on your next Kotlin PR – your future self (and fellow developers) will thank you.