Your AI development stack, curated

The best AI coding tools, MCP workflows, and Claude Code skills — organized for developers. From editor setup to production integrations.

Build your AI stack

Tools, MCP servers, and skills that work together — from editor to production.

AI Coding Tools
8+ tools indexed
Editor extensions, code completion, pair programming tools. Cursor, Windsurf, Copilot, and more.
MCP Servers
6+ MCP servers indexed
Connect your AI to GitHub, databases, browsers, search, and production infrastructure.
Claude Code Skills
6+ skills indexed
Reusable workflow modules for debugging, refactoring, code review, and planning.

MCP Servers

More →

Neon MCP Server

Official Neon MCP server that bridges natural-language requests to the Neon API so coding agents can manage Neon Postgres projects, run SQL, and drive branch-based migrations. Neon documents the hosted endpoint at https://mcp.neon.tech/mcp (OAuth or API key), quick setup via npx neonctl@latest init, and open-source code on GitHub (neondatabase/mcp-server-neon). Neon explicitly recommends MCP for development and testing workflows — not production databases.
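For a remote server like this, client configuration is typically just the hosted URL. A minimal sketch (key names such as "type" and "url" vary by client, and some clients require an mcp-remote wrapper instead — check Neon's docs for your client):

```json
{
  "mcpServers": {
    "neon": {
      "type": "http",
      "url": "https://mcp.neon.tech/mcp"
    }
  }
}
```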

Qdrant MCP Server

Official Qdrant MCP server implementation that gives AI agents a semantic memory layer backed by Qdrant vector search. It exposes MCP tools for storing information and retrieving relevant context, so assistants can persist and recall facts across sessions instead of relying only on short chat history.
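A sketch of a local client entry for this server (the uvx launch command and the QDRANT_URL / COLLECTION_NAME environment variables follow the project's README as we understand it; verify against the repo before use):

```json
{
  "mcpServers": {
    "qdrant": {
      "command": "uvx",
      "args": ["mcp-server-qdrant"],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "COLLECTION_NAME": "agent-memory"
      }
    }
  }
}
```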

Ollama MCP Server

Community-maintained Model Context Protocol bridge that exposes Ollama's local HTTP API—model listing, pulls, chat, and OpenAI-compatible completions—to MCP clients such as Claude Desktop and Cursor. Published on npm as `ollama-mcp-server` (maintained fork of NightTrek/Ollama-mcp); requires a running Ollama daemon reachable at `OLLAMA_HOST` (default `http://127.0.0.1:11434`).
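A minimal client config sketch using the npm package and default host named above (the "ollama" key and the Claude Desktop-style mcpServers shape are illustrative; adapt to your client's config format):

```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp-server"],
      "env": {
        "OLLAMA_HOST": "http://127.0.0.1:11434"
      }
    }
  }
}
```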

Shopify Dev MCP

Official Shopify Dev MCP server from the Shopify AI Toolkit: connects Claude Code, Cursor, VS Code, Gemini CLI, Codex, and similar clients to Shopify developer documentation, GraphQL schemas, and validation workflows without guessing API shapes. Runs locally via npx using the @shopify/dev-mcp package; Shopify documents that no authentication is required for this developer-resources server. Part of Shopify's broader AI Toolkit alongside plugins and optional skill bundles.
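Since the server runs locally via npx with no auth, a client entry can be as small as this sketch (server name and config shape are illustrative; see Shopify's docs for client-specific variants):

```json
{
  "mcpServers": {
    "shopify-dev-mcp": {
      "command": "npx",
      "args": ["-y", "@shopify/dev-mcp@latest"]
    }
  }
}
```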

piLoci MCP

piLoci MCP is a self-hosted memory server for AI agents that exposes project-scoped memory storage and retrieval through the Model Context Protocol. Built to run on Raspberry Pi 5, it provides semantic recall, project listing, and user identity tools. Teams connect Claude Desktop, Codex, and other MCP clients to share persistent context without sending memory data to cloud services.

Webflow MCP Server

Connect any LLM to your Webflow sites via the Model Context Protocol. Manage pages, collections, CMS items, e-commerce products, forms, and users through natural language — enabling AI-driven site management and content workflows.

Claude Code Skills

More →

Error budget policy for service reliability

Implements the Google SRE practice of tying product velocity to measured reliability: define a service-level objective (SLO), derive an error budget from permitted unavailability or bad events, and govern launches, refactors, and feature freezes based on remaining budget. This skill operationalizes the error-budget policy described in Google’s SRE Workbook so teams make explicit trade-offs instead of debating reliability anecdotally.
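The underlying arithmetic is simple to sketch. A minimal illustration (the 30-day window and 99.9% SLO are arbitrary example values, not prescribed by the skill):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Total allowed downtime (minutes) for an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

def budget_remaining(slo: float, observed_downtime_min: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent; negative means overspent."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - observed_downtime_min) / budget

# A 99.9% SLO over 30 days permits about 43.2 minutes of downtime;
# 30 minutes of observed downtime leaves roughly 30% of the budget.
print(error_budget_minutes(0.999))
print(budget_remaining(0.999, 30))
```

A policy then keys decisions off the remaining fraction, e.g. freeze risky launches when it goes negative.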

Creating and maintaining Cursor skills

Defines how to author, revise, and validate SKILL.md files so agent skills stay executable, scoped, and testable. It focuses on turning vague know-how into reusable operational instructions with clear triggers, deterministic steps, and verification checks.

Designing with LLM structured outputs

This skill covers when and how to ask an LLM for machine-readable payloads: define a JSON Schema (or the vendor's equivalent), enable the structured-output feature your provider documents, validate responses in application code, and handle refusals or validation errors explicitly. It applies to tool-calling agents, extraction pipelines, configuration emitters, and any workflow where brittle text parsing creates production risk.
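The validate-in-application-code step can be sketched like this, with a hand-rolled check standing in for a real JSON Schema validator (the field names and required shape are purely illustrative):

```python
import json

# Illustrative "schema": required field name -> expected Python type
REQUIRED = {"name": str, "priority": int}

def parse_model_output(raw: str) -> dict:
    """Parse and validate an LLM's JSON payload. Raises ValueError on any
    mismatch so callers can retry, repair, or fall back explicitly instead
    of silently consuming a malformed response."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"not valid JSON: {e}") from e
    if not isinstance(payload, dict):
        raise ValueError("expected a JSON object")
    for field, typ in REQUIRED.items():
        if field not in payload:
            raise ValueError(f"missing field: {field}")
        if not isinstance(payload[field], typ):
            raise ValueError(f"wrong type for field: {field}")
    return payload

print(parse_model_output('{"name": "refactor auth", "priority": 2}'))
```

In production you would point a proper validator (e.g. a JSON Schema library) at the same schema you sent to the provider, so the contract is enforced on both sides.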

Maintaining Cursor Project Rules

Follow Cursor's official Rules documentation when you want persistent Agent guidance tied to a repository. Project rules encode architecture expectations, risky-folder guardrails, or repeatable workflows; Cursor applies them via Always Apply, intelligent relevance, glob-scoped attachments, or manual @mentions. Use .mdc frontmatter for finer control and reference templates with @file instead of pasting large snippets.
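An illustrative .mdc rule file (the frontmatter keys follow Cursor's Rules documentation; the glob and rule body are made-up examples):

```
---
description: API route conventions
globs: ["src/api/**/*.ts"]
alwaysApply: false
---
- Validate request bodies at the route boundary.
- Follow the error-handling template in @file src/api/errors.ts.
```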

Structured AI meeting notes

Converts raw meeting transcripts into structured, actionable notes with decision logs, assigned action items, and key context preserved for future AI retrieval. This skill bridges the gap between what was discussed in a meeting and what AI agents need to know when acting on outcomes days or weeks later.

Incident response

Structured process for handling production incidents from detection to resolution and post-mortem. Covers severity assessment using P0-P3 grading, team coordination with a designated incident commander, communication templates for stakeholders and users, and structured post-mortem requirements to drive organizational learning from every significant outage.

AI News

All news →