Building Your Own MCP Server in .NET — Exposing Your APIs to Claude, ChatGPT, and Cursor
MCP turns "my LLM client should be able to use my internal API" from a custom integration into a 50-line server. Here's how it looks in C#.
MCP (Model Context Protocol) is the protocol that lets LLM clients call your tools and read your data over a standard interface. Claude Desktop speaks it. Cursor speaks it. ChatGPT now speaks it. If you've got an internal API you want your team's LLM client to call, MCP is the path of least resistance.
The official .NET SDK (ModelContextProtocol) makes it small. Here's a server that exposes a GitHub-issues lookup as a tool:
using System.ComponentModel;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Server;
using Octokit;

var builder = Host.CreateApplicationBuilder(args);

// Register the Octokit client so the SDK can inject it into tool methods.
builder.Services.AddSingleton(new GitHubClient(new ProductHeaderValue("my-mcp-server")));

builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithToolsFromAssembly();

await builder.Build().RunAsync();

[McpServerToolType]
public static class GitHubTools
{
    [McpServerTool, Description("Find GitHub issues matching a query string")]
    public static async Task<IssueResult[]> FindIssues(
        [Description("e.g. 'is:open label:bug repo:owner/name'")] string query,
        GitHubClient gh)
    {
        var result = await gh.Search.SearchIssues(new SearchIssuesRequest(query));
        return result.Items
            .Select(i => new IssueResult(i.Title, i.HtmlUrl, i.State.StringValue))
            .ToArray();
    }
}

public record IssueResult(string Title, string Url, string State);
That's it. The attributes tell the SDK what's a tool and how to describe it. Claude Desktop with this server registered will see FindIssues in its tool list and call it when relevant.
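Registering a stdio server with Claude Desktop is a claude_desktop_config.json entry along these lines — the server name and project path here are placeholders for your own:

```json
{
  "mcpServers": {
    "github-tools": {
      "command": "dotnet",
      "args": ["run", "--project", "/path/to/YourMcpServer"]
    }
  }
}
```

Cursor accepts the same "mcpServers" shape in its own mcp.json.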
Two transport choices:
Stdio for desktop clients (Claude Desktop, Cursor). The client launches your server as a child process and talks over stdin/stdout. Simplest to configure, great for personal tools.
HTTP for shared/team servers. Run it as a normal ASP.NET service, register the URL in client configs, and now everyone on the team can use the same tools. Requires you to handle auth.
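The HTTP flavor, as a sketch — this assumes the ModelContextProtocol.AspNetCore package; the WithHttpTransport/MapMcp names below match the SDK's ASP.NET Core integration but may differ across versions:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddMcpServer()
    .WithHttpTransport()       // streamable HTTP instead of stdio
    .WithToolsFromAssembly();

var app = builder.Build();

// Put your auth middleware here, before the MCP endpoint is mapped.
app.MapMcp();

app.Run();
```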
The security note I'd shout from the rooftops:
Don't expose destructive tools without explicit confirmation. If your tool deletes records, runs migrations, or pushes to git, the LLM should propose the action and the client should require the user to approve it before executing. The MCP spec supports this through tool annotations — hints like destructive and read-only — combined with the expectation that clients keep a human in the loop for tool calls. Use them. Otherwise your LLM client will, eventually, helpfully delete production data because it misread the context.
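In the C# SDK that annotation looks roughly like this — the Destructive property on the tool attribute maps to the spec's destructiveHint; check your SDK version for the exact property name, and note IRecordStore is a hypothetical service your app would register in DI:

```csharp
using System.ComponentModel;
using System.Threading.Tasks;
using ModelContextProtocol.Server;

// Hypothetical data-access service, registered in DI by your app.
public interface IRecordStore
{
    Task DeleteAsync(int id);
}

[McpServerToolType]
public static class DangerousTools
{
    // Destructive = true tells the client this call mutates state irreversibly,
    // so it should require explicit user approval before executing.
    // The hint is advisory: the client, not the server, enforces confirmation.
    [McpServerTool(Destructive = true), Description("Permanently delete a record by id")]
    public static async Task<string> DeleteRecord(
        [Description("Primary key of the record to delete")] int id,
        IRecordStore store)
    {
        await store.DeleteAsync(id);
        return $"Deleted record {id}";
    }
}
```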
The other thing nobody tells you: write tools that return small, structured results. Don't return a 50KB blob and hope the LLM extracts what it needs. Every token you return costs context window. Trim aggressively on the server side, the way you'd trim an API response for a constrained mobile client.
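A trimming sketch, with hypothetical names: map the full Octokit Issue down to the few fields the model needs and cap how many results go back.

```csharp
using System.Collections.Generic;
using System.Linq;
using Octokit;

public record IssueSummary(string Title, string Url, int Comments);

public static class Trimming
{
    // Drop bodies, label objects, user objects, and timestamps; keep only
    // what the LLM needs to answer, and cap the result count.
    public static IssueSummary[] Trim(IReadOnlyList<Issue> issues) =>
        issues
            .Take(10)
            .Select(i => new IssueSummary(i.Title, i.HtmlUrl, i.Comments))
            .ToArray();
}
```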
Build one MCP server for the thing your team queries five times a day. Wire it up to your editor. The first time you ask the LLM "what's blocking the auth-rewrite epic?" and it just answers, the value is obvious.