AI Coding Tools We Actually Use in a Brighton Dev Studio

By amillionmonkeys
#AI Tools #Development Workflow #Productivity #Developer Tools

Six months testing AI coding assistants in production. Real costs, productivity gains, and honest failures from a solo developer in Brighton.

Six months ago, I started experimenting with AI coding assistants at my Brighton studio. Not because of the hype, but because I had a problem: too many client projects, a growing backlog of technical debt, and only so many hours in a day.

Everyone was talking about GitHub Copilot, Cursor, and the new wave of AI coding tools. The promises were tempting—"10x developer productivity!" and "AI that writes production code!" But as a solo developer (occasionally working with freelancers), I couldn't afford to waste money on tools that don't deliver, or worse, slow me down.

So I ran an experiment. For six months, I tracked everything: which tools I used, how much time they saved (or didn't), what they cost, and what actually improved my work. Here's what I learned.

The Tools I Tested

I didn't test every AI coding tool on the market. I focused on the ones that kept coming up in developer conversations and that I could realistically afford:

GitHub Copilot (£8/month per developer)

  • The original AI pair programmer
  • Deep IDE integration
  • Trained on public GitHub code

Cursor (£16/month per developer)

  • VS Code fork with built-in AI
  • Multiple model support (GPT-4, Claude)
  • Codebase-aware suggestions

Claude Code (part of Claude Pro, £18/month)

  • Anthropic's coding-focused chat interface
  • Not an IDE plugin, but incredibly useful
  • Great for architecture discussions and debugging

Devin (£400/month)

  • The "AI software engineer"
  • Autonomous coding agent
  • I tested it for one month only

I also used ChatGPT occasionally, but it's not a coding-specific tool, so I didn't track it systematically.

What I Actually Used Them For

The reality of AI coding tools is messier than the demos suggest. I found each tool had a sweet spot—tasks where it genuinely helped—and blind spots where it wasted my time.

GitHub Copilot: The Daily Workhorse

Copilot became my default autocomplete on steroids. It's not revolutionary, but it's consistently useful.

Where it excelled:

  • Boilerplate code (API endpoints, database models, form validation)
  • Test writing (once I got the pattern right, it generated variations quickly)
  • Repetitive tasks (converting similar components, updating multiple files)

Real example: I had to migrate 30 React class components to functional components with hooks. Copilot handled about 70% of the mechanical work. What would have taken two days took half a day, freeing me up for the tricky edge cases.
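To make that concrete, here's the shape of the conversion on a toy component (a minimal sketch; the component and its state are illustrative, not from the client project):

```tsx
import React, { useState } from "react";

// Before: a typical class component from a batch like this
class CounterClass extends React.Component<{}, { count: number }> {
  state = { count: 0 };
  increment = () => this.setState({ count: this.state.count + 1 });
  render() {
    return <button onClick={this.increment}>Count: {this.state.count}</button>;
  }
}

// After: the hook-based equivalent — the mechanical rewrite Copilot
// churned through once it had seen the first couple of conversions
function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>Count: {count}</button>;
}
```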

Where it failed:

  • Complex business logic (suggestions were often confidently wrong)
  • Framework-specific patterns (would suggest outdated Next.js approaches)
  • Anything requiring understanding of my specific codebase architecture

Verdict: Worth £8/month. It's not magic, but it's like having autocomplete that actually understands context. The time saved on boilerplate alone pays for the subscription.

Cursor: The Power User's Choice

Cursor is pricier than Copilot, but it offers more control and better context awareness. I use it for complex features and larger refactoring work.

Where it excelled:

  • Codebase-wide refactoring (it can see multiple files and suggest consistent changes)
  • Debugging sessions (chat interface for "why is this breaking?" questions)
  • Learning new libraries (explaining unfamiliar code patterns)

Real example: I inherited a Laravel project with zero documentation. Cursor helped me understand the architecture by analyzing the codebase and explaining relationships between models, controllers, and services. Saved at least a week of reverse engineering.

Where it failed:

  • Performance (noticeably slower than regular VS Code)
  • Hallucinations (would sometimes invent functions that don't exist)
  • Cost (£16/month is significant for solo developers)

Verdict: Worth it for complex work. The codebase awareness feature is genuinely useful, though the price is something to consider on a solo budget.

Claude Code: The Architecture Buddy

Claude isn't a code editor plugin, but I found myself using it almost daily for a different purpose: thinking through problems before writing code.

Where it excelled:

  • System design discussions ("How should I structure this API?")
  • Debugging complex issues (explaining error logs and suggesting fixes)
  • Code reviews (pasting PRs and getting thoughtful feedback)
  • Documentation (explaining legacy code or writing technical specs)

Real example: I was designing a real-time collaboration feature for a client app. Before writing any code, I spent an hour discussing the architecture with Claude—WebSockets vs. polling, state management approaches, edge cases. It asked good questions and flagged issues I hadn't considered. The implementation went much more smoothly as a result.
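To give a flavour of where that conversation landed, here's a minimal client-side sketch of the WebSocket direction (the URL, event names, and payload shape are illustrative assumptions, not the client's actual protocol):

```typescript
// Minimal sketch of a WebSocket collaboration channel.
// Event names and payload shape are illustrative assumptions.
type CollabEvent = { type: "edit" | "cursor"; userId: string; payload: unknown };

function connect(url: string, onEvent: (e: CollabEvent) => void): WebSocket {
  const ws = new WebSocket(url);

  ws.onmessage = (msg) => onEvent(JSON.parse(String(msg.data)) as CollabEvent);

  // One edge case the discussion flagged: reconnect after a short
  // delay rather than silently dropping the session on disconnect.
  ws.onclose = () => setTimeout(() => connect(url, onEvent), 1000);

  return ws;
}

// Usage: apply remote edits as they arrive
connect("wss://example.test/collab", (e) => {
  if (e.type === "edit") console.log("apply remote edit from", e.userId);
});
```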

Where it failed:

  • Actually writing production code (it's a chat tool, not an IDE integration)
  • Maintaining context across long conversations (starts forgetting earlier points)
  • Framework-specific gotchas (sometimes suggests patterns that don't work in practice)

Verdict: Underrated. It's not a coding tool in the traditional sense, but it's become my "think out loud" partner. At £18/month (part of Claude Pro), it's worth it for the time saved on architectural mistakes.

Devin: The Expensive Experiment

Devin is marketed as an "AI software engineer" that can autonomously complete tasks. I tested it for one month at £400.

Where I hoped it would excel:

  • Taking on complete features (from ticket to working code)
  • Fixing bugs autonomously
  • Handling routine maintenance tasks

What actually happened:

  • It required constant supervision (not autonomous at all)
  • Simple tasks took longer than doing them myself
  • Complex tasks failed or produced unusable code
  • The cost was unjustifiable for a solo operation

Real example: I gave Devin a straightforward task: "Add pagination to this API endpoint." It took 4 hours of back-and-forth, produced code that didn't follow my conventions, and ultimately I had to rewrite it anyway.
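For perspective on why four hours hurt: offset pagination is usually a handful of lines. Here's a sketch in an Express-style handler (the route, query parameters, and data layer are hypothetical; the actual project used a different stack):

```typescript
import express from "express";

const app = express();

// Hypothetical endpoint with simple offset pagination — roughly the
// scale of change the Devin task involved.
app.get("/api/items", async (req, res) => {
  const page = Math.max(1, Number(req.query.page) || 1);
  const perPage = Math.min(100, Number(req.query.perPage) || 20);

  const items = await fetchItems(perPage, (page - 1) * perPage);
  res.json({ page, perPage, items });
});

// Stand-in for whatever data layer the project uses
async function fetchItems(limit: number, offset: number): Promise<unknown[]> {
  // Placeholder: a real query would apply LIMIT/OFFSET here
  return [];
}

app.listen(3000);
```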

Verdict: Not worth it for solo developers in 2025. Maybe the technology will mature, but right now it's expensive and unreliable. The £400/month is better spent on other tools.

The Numbers: Real ROI

I tracked time saved (and wasted) over six months. Here's what the data showed:

GitHub Copilot:

  • Cost: £48 total (£8/month × 6 months)
  • Time saved: ~4 hours/week
  • ROI: Extremely positive (saved ~96 hours)

Cursor:

  • Cost: £96 total (£16/month × 6 months)
  • Time saved: ~3 hours/week
  • ROI: Positive (saved ~72 hours)

Claude Code:

  • Cost: £108 total (£18/month × 6 months)
  • Time saved: ~3 hours/week (architectural decisions, debugging)
  • ROI: Very positive (saved ~72 hours, prevented costly mistakes)

Devin:

  • Cost: £400 (1 month trial)
  • Time saved: Negative (cost me time)
  • ROI: Very negative

Total investment: £652
Total time saved: ~240 hours
At my hourly rate: approximately £12,000 in billable time saved
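Those totals fall straight out of the per-tool figures (a quick sanity check; the implied hourly rate is back-calculated from the numbers above, not a new figure):

```typescript
// ROI sanity check over the six-month trial, using the rounded
// per-tool figures above, so totals are approximate.
const totalCost = 48 + 96 + 108 + 400;   // £652 across all four tools
const hoursSaved = 96 + 72 + 72;         // ~240 hours (Devin excluded)
const impliedRate = 12000 / hoursSaved;  // ≈£50/hour of billable time
console.log({ totalCost, hoursSaved, impliedRate });
```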

The math is compelling. Even accounting for learning curves and occasional AI-generated bugs, the productivity gains are real.

What I Learned: Practical Lessons

1. AI Tools Are Multipliers, Not Replacements

The best results came when I used AI to handle the boring parts, freeing me up for creative problem-solving. AI suggestions can be confidently wrong, so you need experience to spot the mistakes.

2. Prompting Is a Skill

I got much better results as I learned to "talk" to these tools. Vague prompts ("make this better") got vague results. Specific prompts ("refactor this to use React Server Components, maintain the same API, add error handling") got useful output.
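As an illustration, here's roughly what that specific prompt should produce (a sketch; the endpoint URL and Product type are hypothetical):

```tsx
// A React Server Component keeping the same data API, now with
// error handling — the kind of output a precise prompt earns.
type Product = { id: string; name: string };

export default async function ProductList() {
  try {
    const res = await fetch("https://api.example.test/products");
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    const products: Product[] = await res.json();
    return (
      <ul>
        {products.map((p) => (
          <li key={p.id}>{p.name}</li>
        ))}
      </ul>
    );
  } catch {
    return <p>Couldn't load products. Please try again later.</p>;
  }
}
```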

3. Start Small and Cheap

I'm glad I started with GitHub Copilot (£8/month) rather than jumping to expensive tools. It proved the value of AI assistance before I committed bigger budgets.

4. Different Tools for Different Tasks

I ended up with a toolkit approach:

  • Copilot for everyday coding
  • Cursor for complex refactoring
  • Claude for architecture and debugging
  • ChatGPT occasionally for quick questions

No single tool does everything well.

5. The Hype Is Real, But So Are the Limitations

AI coding tools genuinely improve productivity. But they're not replacing developers anytime soon. I still hit walls—complex business logic, edge cases, framework-specific bugs—where human judgment is essential.

My Current Setup (What I'm Paying For)

After six months of testing, here's what stuck:

  • GitHub Copilot: £8/month
  • Cursor: £16/month
  • Claude Pro: £18/month
  • Devin: Cancelled

Total monthly cost: £42
Value delivered: 10-12 hours saved per week

For a solo developer, that's a no-brainer investment.

Should You Adopt AI Coding Tools?

If you're a solo developer or small studio, here's my advice:

Start with this:

  1. Get GitHub Copilot (£8/month is low risk)
  2. Try it for one month on real projects
  3. Track time saved honestly (it's easy to overestimate)
  4. If it helps, stick with it. If not, cancel.

Don't do this:

  • Jump to expensive tools (Devin, enterprise AI) without testing basics
  • Assume AI will replace the need for developer skill
  • Let AI make architectural decisions unsupervised
  • Expect AI to understand your specific codebase on day one

Consider upgrading if:

  • Copilot saves time consistently
  • You have complex refactoring or legacy code work
  • You work on multiple complex projects simultaneously
  • You can justify £16/month for more sophisticated tools

The Future: What I'm Watching

AI coding tools are evolving fast. Here's what I'm keeping an eye on:

  • Better codebase context: Tools that truly understand my specific architectures
  • Agent improvements: Maybe Devin-style tools will mature enough to justify the cost
  • Open source alternatives: Early experiments with local AI models (slower but private)
  • Framework-specific tools: AI trained specifically on Next.js, Laravel, React Native

The technology is moving quickly. What doesn't work today might work in six months.

Key Takeaways

After six months of real-world testing:

  • AI coding tools deliver genuine productivity gains (I saved ~240 hours)
  • Start cheap (GitHub Copilot at £8/month)
  • Different tools for different tasks (autocomplete vs. architecture vs. refactoring)
  • Experience matters (you need to spot when AI suggestions are wrong)
  • Expensive ≠ better (Devin at £400/month failed, Copilot at £8/month succeeded)

I'm not an AI evangelist. I'm a pragmatic developer looking for tools that help ship better work faster. Six months in, AI coding assistants have earned their place in my toolkit—not as magic solutions, but as genuinely useful multipliers of human expertise.


Experimenting with AI tools in your development workflow? I've been there. If you're looking for a developer who knows how to leverage modern tools while maintaining high code quality, get in touch. I'd love to hear about your project.

T: 07512 944360 | E: [email protected]

© 2025 amillionmonkeys ltd. All rights reserved.