MCP for Solo Developers: What's Worth Setting Up

/amillionmonkeys/
#AI #Developer Tools #MCP #Workflow

A practical guide to Model Context Protocol for freelancers — which MCP servers are genuinely useful, what's still fiddly, and where to start.

The problem MCP actually solves

Every AI coding tool has the same fundamental limitation: it can't see your stuff. Your database schema, your project management board, your Git history, your design files — they're all behind walls the model can't reach. So you end up copy-pasting context into chat windows like it's 2019.

Model Context Protocol (MCP) is Anthropic's answer to this. It's an open standard — now governed by the Linux Foundation — that lets AI tools connect to external data sources and services through a common interface. Think of it as a USB-C port for AI: one protocol, many connections.

I've been setting up MCP servers as part of my freelance workflow for a few months now. Some have been genuinely useful. Others were more effort than they were worth. Here's an honest rundown.

How it actually works

The architecture is straightforward. You have three pieces:

  • Host — your AI application (Claude Desktop, Cursor, VS Code with Copilot, etc.)
  • Client — lives inside the host, manages the connection to each server
  • Server — an external process that exposes tools, resources, and prompts to the AI

The protocol runs on JSON-RPC 2.0 over either stdio (local servers, spawned as child processes) or Streamable HTTP (remote servers). For solo dev use, you'll mostly be using stdio — it's simpler and faster.
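The wire format is easy to inspect. Over stdio, each message is one line of JSON. Here's a minimal sketch of the initialize request a client sends at the start of a session — the field layout follows the spec, but the client name and version are made up for illustration:

```python
import json

# MCP messages are plain JSON-RPC 2.0 objects; over the stdio
# transport, each one is written as a single newline-terminated line.
def frame(message: dict) -> bytes:
    return (json.dumps(message) + "\n").encode("utf-8")

# The first message a client sends: an initialize request declaring
# which protocol version it speaks and identifying itself.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-11-25",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

wire = frame(initialize)
```

The server replies with its own capabilities, and from there the session is a back-and-forth of requests, responses, and notifications over the same channel.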

When you start a session, the client discovers what the server offers: tools (functions the AI can call), resources (data it can read), and prompts (reusable templates). Then the AI can use them during your conversation. It's a stateful session, not one-off API calls.
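Discovery is just another request/response pair. A sketch of what a tools/list reply might look like — the name/description/inputSchema shape follows the spec, but the "add" tool here is invented for illustration:

```python
# A hypothetical tools/list response from a server. Each entry's
# inputSchema is JSON Schema describing the tool's arguments.
response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "tools": [
            {
                "name": "add",
                "description": "Add two numbers",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "a": {"type": "number"},
                        "b": {"type": "number"},
                    },
                    "required": ["a", "b"],
                },
            }
        ]
    },
}

# The client advertises these names to the model, which can then
# issue tools/call requests against them during the session.
tool_names = [t["name"] for t in response["result"]["tools"]]
```

Those tool definitions are what end up in your context window at session start — which matters later, when we get to bloat.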

The current spec version is 2025-11-25, and SDKs exist in TypeScript, Python, Go, C#, Kotlin, Java, and Rust. You don't need to write your own server for most use cases — there are pre-built ones for the common integrations.

What's genuinely useful

After trying quite a few MCP servers, here are the ones that have actually stuck in my workflow:

Filesystem — The official filesystem server gives your AI tool read/write/search access to local files with granular permissions. It's the foundation. Without this, your AI is working blind.
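For Claude Desktop, wiring this up is a few lines of JSON in claude_desktop_config.json — the directory path below is a placeholder; point it at your own project, and note the server can only see the paths you list:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects/my-app"
      ]
    }
  }
}
```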

GitHub — Repo management, file operations, commits, branching, issues. Well-maintained and solid. If you're already using an AI tool for coding, connecting it to your actual repos is the obvious first step.

Supabase — This one surprised me. Over 20 tools covering table design, migrations, SQL queries, and database branching. For full-stack solo work, being able to say "look at my schema and write a migration" and have the AI actually see your schema is a proper time-saver.

Context7 — Fetches current library documentation instead of relying on the model's training data. Genuinely practical when you're working with a framework that's had a recent release and the AI keeps suggesting deprecated patterns.

Brave Search — Lets your AI search the web. Simple, but useful when you need it to research something without switching windows.

Trello — This is one I didn't expect to use as much as I do. We run client dev boards in Trello — backlog, in progress, blockers — and the Trello MCP server lets the AI read cards, move them between lists, and create new ones. The practical bit: when I finish a task, the AI already knows what I just built, so it can move the card to done and pull the next one from the backlog without me context-switching to the browser. It's also useful for triage — "look at the blockers list and tell me what's stale" actually works when the AI can see the board. Setup is just an API key and token in your MCP config, nothing complicated.
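The config follows the same pattern as every other server. A sketch of the shape — the package name and environment variable names here are placeholders, so check the README of whichever Trello server you actually install:

```json
{
  "mcpServers": {
    "trello": {
      "command": "npx",
      "args": ["-y", "trello-mcp-server"],
      "env": {
        "TRELLO_API_KEY": "your-api-key",
        "TRELLO_TOKEN": "your-token"
      }
    }
  }
}
```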

Notion — If you run your project management through Notion (I do), connecting it means your AI can read and write to your pages and databases. Handy for generating status updates or pulling context from project briefs.

What's still fiddly

I'm not going to pretend the setup experience is seamless. It isn't.

Configuration isn't portable. Each client has its own config format. Claude Desktop, Cursor, and VS Code all want their MCP server definitions in different places, structured differently. If you use multiple tools — and most of us do — you're maintaining separate config files for the same servers. The MCP roadmap acknowledges this, but it's not solved yet.

Error messages are cryptic. When something goes wrong during server startup, you'll often get unhelpful errors. Node.js dependency issues, Python environment problems, missing uv installations — the debugging experience is rough. Expect to spend time on initial setup that feels disproportionate to what you're getting.

Context window bloat is real. Loading all tool definitions at session start eats into your context window. I had one setup where enumerating tools from multiple servers consumed over 100k tokens before I'd even asked a question. Be selective. Don't enable every server at once — pick the ones relevant to your current task.

Testing your own servers is harder than it should be. If you build a custom MCP server, validating that it works properly has limited tooling. The development feedback loop isn't as tight as you'd want.
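Until the tooling matures, a few hand-rolled checks go a long way. Here's a sketch of a sanity check you might run on your server's stdout lines, enforcing the JSON-RPC 2.0 envelope rules — this is a hypothetical helper, not part of any SDK:

```python
import json

def check_response(raw: str, expected_id: int) -> dict:
    """Validate one JSON-RPC 2.0 response line from a server under test."""
    msg = json.loads(raw)
    assert msg.get("jsonrpc") == "2.0", "missing or wrong jsonrpc version"
    assert msg.get("id") == expected_id, "response id does not match request"
    # A response carries exactly one of "result" or "error", never both
    # and never neither.
    assert ("result" in msg) != ("error" in msg), "need exactly one of result/error"
    return msg
```

It won't catch semantic bugs in your tools, but it catches the malformed-envelope class of error that otherwise surfaces as a cryptic client-side failure.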

Where to start

If you're curious but not sure whether the setup time is justified, here's what I'd suggest:

  1. Pick one AI tool you already use. Claude Desktop, Cursor, or VS Code with Copilot all support MCP.
  2. Start with the filesystem server. It's the simplest to configure and immediately useful. Point it at your project directory.
  3. Add GitHub if you want repo integration. After the filesystem server, it's the best value for the effort involved.
  4. Leave everything else until you feel the need. Don't install twelve servers because they exist. You'll bloat your context window and spend hours configuring things you don't use.

For the TypeScript SDK, you're looking at @modelcontextprotocol/sdk on npm (currently around v1.27). For Python, pip install mcp (requires Python 3.10+). Both are stable and production-ready.

The bigger picture

MCP has crossed from "interesting spec" to something with genuine momentum. Anthropic donated it to the Linux Foundation in December 2025, and the Agentic AI Foundation now counts AWS, Google, Microsoft, and OpenAI as platinum members. The latest addition — MCP Apps, launched January 2026 — lets servers return interactive HTML interfaces directly in your chat. Claude, ChatGPT, Goose, and VS Code already support it.

For solo developers, the practical takeaway is this: MCP servers are client-agnostic. Set one up once and it works across Claude Desktop, Cursor, VS Code, and others. That's the real value proposition — you're not locked into one tool's ecosystem.

Key takeaways

  • MCP connects your AI tools to your actual data — files, repos, databases, project management. It solves the copy-paste problem properly.
  • Start with one or two servers (filesystem and GitHub). Add more as you feel the need, not before.
  • The setup experience is still rough in places — cryptic errors, non-portable config, context window bloat. Budget time for initial configuration.
  • The protocol has serious backing now (Linux Foundation, every major AI company). It's not going away.
  • For freelancers, the value comes from connecting the tools you already use, not from chasing every available integration.

If you're building with AI tools and want to talk through what's worth setting up for your workflow, get in touch.