The Clean Code Challenge.
Fix real bugs in your real codebase using AI coding tools. Document the process publicly. Compete for prizes. 1,000 spots. 30 days.
1,000 spots · 30 days · Free to enter
How it works
Six steps. Thirty days.
1. Submit your app
Enter your app's URL. BrokenApp scans it automatically. Scan time depends on app size.
2. Qualify
If the scan finds 10 or more bugs, you're in. Fewer than 10? You get a Clean Code badge instead.
3. Choose your AI tool
Every qualified participant gets a free one-month subscription to Claude Code or ChatGPT Codex.
4. Fix your bugs
You have 30 days. Use your AI coding tool to work through the bug report systematically.
5. Share publicly
Post your before-and-after progress on your blog or portfolio. Show the world what you fixed.
6. Submit for judging
BrokenApp re-scans your app automatically. The second scan confirms which bugs were resolved.
Prizes
$2,000 total.
Grand Prize
$1,000
Most bugs fixed, highest quality fixes, best documentation.
Runner-Up
$500
Best technical writeup. The post that teaches other developers the most.
Third Place
$250
Most creative problem-solving approach.
5x Mentions
$50 each
Featured on the BrokenApp site and in the results announcement.
Judging criteria
Three dimensions.
Quantity (50%)
How many bugs did you fix? The re-scan proves it.
Quality (30%)
Were the fixes solid? Did you address root causes or just patch symptoms?
Documentation (20%)
How well did you share your process? Did your posts help other developers learn?
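For a sense of how these weights combine, here is a minimal sketch of a weighted score, assuming each dimension is judged on a 0-to-1 scale. BrokenApp has not published an exact scoring function, so the names and normalization below are purely illustrative.

```python
# Hypothetical illustration of the 50/30/20 weighting; not BrokenApp's
# published formula. Each dimension is assumed normalized to 0.0-1.0.
WEIGHTS = {"quantity": 0.50, "quality": 0.30, "documentation": 0.20}

def challenge_score(quantity: float, quality: float, documentation: float) -> float:
    """Weighted sum of the three judging dimensions, each in [0, 1]."""
    scores = {"quantity": quantity, "quality": quality, "documentation": documentation}
    return sum(WEIGHTS[k] * v for k, v in scores.items())

# Example: fixed most bugs (0.9), solid fixes (0.7), decent writeups (0.6)
# -> 0.5 * 0.9 + 0.3 * 0.7 + 0.2 * 0.6 = 0.78
print(f"{challenge_score(0.9, 0.7, 0.6):.2f}")  # 0.78
```

The takeaway: raw bug count matters most, but a participant with fewer, deeper fixes and strong writeups can still outscore someone who only chases quantity.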
Timeline
Seven weeks. Start to finish.
Week 1
Applications open
Submit your app. Scans run automatically. Qualified participants notified within 48 hours.
Weeks 2-5
Competition phase
Fix bugs, post updates, compete. Live leaderboard. Community forum for tips and questions.
Week 6
Final submissions
BrokenApp re-scans all competing apps. Submit your final writeup and public post links.
Week 7
Winners announced
Results published. Prizes distributed. Full research report with aggregate data across all 1,000 apps.
Who should enter
Developers with real projects and real bugs.
Side projects, indie SaaS apps, freelance client work, open source tools — anything you own and control.
Required
- The project must be real (not tutorial code or throwaway repos)
- You must own the code (not your employer's proprietary codebase)
- You must share progress publicly (at least one post per week)
- You must complete the challenge within 30 days
Not required
- The app doesn't need to be profitable or have users
- You don't need to fix every bug
- You don't need to be an expert — the AI does the heavy lifting
Why we're doing this
How good are AI coding tools at real-world debugging?
Not toy examples. Not benchmarks. Not contrived demo scenarios. Real apps with real bugs, fixed by real developers using Claude Code and ChatGPT Codex in their actual workflows.
1,000 developers working on 1,000 real codebases for 30 days will produce the most comprehensive dataset on AI-assisted debugging to date. We'll publish the results openly — what works, what doesn't, where AI excels, where it struggles.
Every participant walks away with a cleaner codebase, hands-on AI tool experience, and a portfolio-worthy writeup.
Your app is probably broken.
Let's find out.
1,000 spots. Free to enter. 30 days to compete.
Enter the Challenge: brokenapp.io/challenge