Your AI development stack, curated

The best AI coding tools, MCP workflows, and Claude Code skills — organized for developers. From editor setup to production integrations.

Build your AI stack

Tools, MCP servers, and skills that work together — from editor to production.

AI Coding Tools
8+ tools indexed
Editor extensions, code completion, pair programming tools. Cursor, Windsurf, Copilot, and more.
MCP Servers
6+ MCP servers indexed
Connect your AI to GitHub, databases, browsers, search, and production infrastructure.
Claude Code Skills
6+ skills indexed
Reusable workflow modules for debugging, refactoring, code review, and planning.

MCP Servers

More →

DuckDB MCP community extension (`duckdb_mcp`)

The DuckDB-distributed community extension `duckdb_mcp` embeds MCP client and server capabilities directly inside DuckDB. Load it with `INSTALL duckdb_mcp FROM community` followed by `LOAD duckdb_mcp`; SQL can then attach remote MCP servers (stdio/TCP/WebSocket transports), enumerate resources (`mcp_list_resources`), invoke remote tools (`mcp_call_tool`), and read remote resources through `read_csv`/`read_json`/`read_parquet` over `mcp://` URIs. In the reverse direction, DuckDB can publish tables, queries, and execution-bound tools (`mcp_publish_table`, `mcp_publish_query`, `mcp_publish_execution_tool`), and `mcp_server_start` exposes them to external MCP-compatible clients.
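The round trip described above might be sketched in SQL as follows; the tool name, resource URI, and statement shapes are assumptions, so check the extension's documentation for exact signatures:

```sql
-- One-time install, then load per session
INSTALL duckdb_mcp FROM community;
LOAD duckdb_mcp;

-- Enumerate resources exposed by an attached MCP server
SELECT * FROM mcp_list_resources();

-- Invoke a remote tool with a JSON argument payload
SELECT mcp_call_tool('fetch_report', '{"period": "2024-Q1"}');

-- Read a remote resource as tabular data over the mcp:// scheme
SELECT * FROM read_csv('mcp://reports/sales.csv');

-- Reverse direction: publish a local table, then serve it to MCP clients
CALL mcp_publish_table('sales');
CALL mcp_server_start();
```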

Neon MCP Server

Neon’s official MCP server exposes Neon Postgres projects to MCP-capable assistants via Streamable HTTP (`https://mcp.neon.tech/mcp`), legacy SSE (`https://mcp.neon.tech/sse`), or the locally launched `@neondatabase/mcp-server-neon` package. The documentation lists tools for project and branch lifecycle, SQL execution, migration rehearsal branches, slow-query diagnostics, Neon Auth provisioning, Data API setup, and embedded Neon docs retrieval, each mapped to Neon API operations.
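A client configuration sketch for the remote endpoint; stdio-only clients commonly reach Streamable HTTP servers through a bridge such as `mcp-remote`, and the server key and bridge choice here are illustrative rather than Neon's documented config:

```json
{
  "mcpServers": {
    "neon": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.neon.tech/mcp"]
    }
  }
}
```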

Qdrant MCP Server

Official Qdrant MCP server implementation that gives AI agents a semantic memory layer backed by Qdrant vector search. It exposes MCP tools for storing information and retrieving relevant context, so assistants can persist and recall facts across sessions instead of relying only on short chat history.
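A minimal client entry, assuming the server ships as an `mcp-server-qdrant` package configured via environment variables; the variable names below should be verified against the project's README:

```json
{
  "mcpServers": {
    "qdrant": {
      "command": "uvx",
      "args": ["mcp-server-qdrant"],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "COLLECTION_NAME": "agent-memory"
      }
    }
  }
}
```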

Ollama MCP Server

Community-maintained Model Context Protocol bridge that exposes Ollama's local HTTP API—model listing, pulls, chat, and OpenAI-compatible completions—to MCP clients such as Claude Desktop and Cursor. Published on npm as `ollama-mcp-server` (maintained fork of NightTrek/Ollama-mcp); requires a running Ollama daemon reachable at `OLLAMA_HOST` (default `http://127.0.0.1:11434`).
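Using the npm package and default host named above, a Claude Desktop-style client entry might look like this (the `mcpServers` shape is the common client convention, not taken from the package docs):

```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp-server"],
      "env": {
        "OLLAMA_HOST": "http://127.0.0.1:11434"
      }
    }
  }
}
```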

Shopify Dev MCP

Official Shopify Dev MCP server from the Shopify AI Toolkit: connects Claude Code, Cursor, VS Code, Gemini CLI, Codex, and similar clients to Shopify developer documentation, GraphQL schemas, and validation workflows without guessing API shapes. Runs locally via npx using the `@shopify/dev-mcp` package; Shopify documents that no authentication is required for this developer-resources server. Part of Shopify's broader AI Toolkit alongside plugins and optional skill bundles.
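Since no authentication is required, the client entry reduces to the npx invocation (the server key name is illustrative):

```json
{
  "mcpServers": {
    "shopify-dev": {
      "command": "npx",
      "args": ["-y", "@shopify/dev-mcp"]
    }
  }
}
```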

piLoci MCP

piLoci MCP is a self-hosted memory server for AI agents that exposes project-scoped memory storage and retrieval through the Model Context Protocol. Built to run on Raspberry Pi 5, it provides semantic recall, project listing, and user identity tools. Teams connect Claude Desktop, Codex, and other MCP clients to share persistent context without sending memory data to cloud services.

Claude Code Skills

More →

Example SLO document authoring

Operationalizes Appendix A of Google’s SRE Workbook by translating the illustrative “Example Game Service” SLO dossier into a checklist teams can mimic: articulate the user-facing workload; nominate rolling measurement windows (the appendix uses four weeks); pair each subsystem with tightly defined SLIs (availability from load balancers excluding 5xx responses, latency percentile gates, freshness for derived tables, correctness via probers, completeness for pipelines); spell out explicit numerator/denominator language; justify rounding policies; quantify per-objective error budgets; and reference the sibling error budget policy for enforcement.
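The "quantify per-objective error budgets" step is simple arithmetic over the rolling window. A sketch assuming a four-week (28-day) window, as in the appendix, and an availability-style objective:

```python
def error_budget_minutes(slo_target: float, window_days: int = 28) -> float:
    """Allowed minutes of error-budget burn in the rolling window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

# A 99.9% availability objective over four weeks permits roughly 40 minutes
# of budget burn; tightening to 99.95% halves that.
print(round(error_budget_minutes(0.999), 1))   # 40.3
print(round(error_budget_minutes(0.9995), 1))  # 20.2
```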

Error budget policy drafting

Translates Google’s worked-example error budget policy into a repeatable playbook for tying release tempo to measured reliability: define goals (protect users from repeated SLO misses while preserving incentives to innovate); spell out what happens when the rolling window consumes its budget (freeze changes except urgent defect fixes or security work); codify outage investigation thresholds; and document escalation paths for when stakeholders disagree about the budget math.
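The freeze rule reduces to a predicate over consumed budget; the function name and numbers below are illustrative, not taken from Google's policy text:

```python
def release_allowed(burned_minutes: float, budget_minutes: float,
                    urgent: bool = False) -> bool:
    """Freeze non-urgent changes once the rolling window's budget is spent.

    Urgent defect fixes and security work remain allowed, per the policy sketch.
    """
    if urgent:
        return True
    return burned_minutes < budget_minutes

# With a 40-minute budget and 45 minutes burned, routine releases freeze
# but a security patch still ships.
print(release_allowed(45, 40))               # False
print(release_allowed(45, 40, urgent=True))  # True
```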

Creating and maintaining Cursor skills

Defines how to author, revise, and validate `SKILL.md` files so agent skills stay executable, scoped, and testable. It focuses on turning vague know-how into reusable operational instructions with clear triggers, deterministic steps, and verification checks.
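A minimal SKILL.md skeleton in that spirit; the frontmatter keys and section names here are assumptions, so match your agent framework's actual schema:

```markdown
---
name: flaky-test-triage
description: Use when a CI run fails intermittently and the failure must be reproduced and isolated.
---

## Trigger
A test passes locally but has failed in CI at least twice in the last five runs.

## Steps
1. Re-run the failing test in isolation ten times; record pass/fail counts.
2. Bisect recent commits touching the test's module.
3. File or update an issue with the reproduction command and observed failure rate.

## Verification
The skill succeeded if the issue links a deterministic reproduction or the flake is fixed.
```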

Designing with LLM structured outputs

This skill covers when and how to ask an LLM for machine-readable payloads: define a JSON Schema (or the vendor's equivalent), enable the structured-output feature your provider documents, validate responses in application code, and handle refusals or validation errors explicitly. It applies to tool-calling agents, extraction pipelines, configuration emitters, and any workflow where brittle text parsing creates production risk.
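The validate-in-application-code step might look like this minimal sketch, using only the standard library; the schema subset and field names are illustrative, and a real pipeline would use a full JSON Schema validator such as `jsonschema`:

```python
import json

# Illustrative schema subset: required fields and their expected types
REQUIRED = {"name": str, "priority": int}

def parse_model_output(raw: str) -> dict:
    """Parse an LLM's JSON payload and fail loudly on schema violations."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model returned non-JSON output: {exc}") from exc
    for key, expected_type in REQUIRED.items():
        if key not in payload:
            raise ValueError(f"missing required field: {key}")
        if not isinstance(payload[key], expected_type):
            raise ValueError(f"field {key!r} must be {expected_type.__name__}")
    return payload

print(parse_model_output('{"name": "ship-release", "priority": 2}'))
```

Handling the `ValueError` explicitly at the call site is the point: a refusal or malformed payload becomes a visible error path rather than a silently mis-parsed string.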

Maintaining Cursor Project Rules

Follow Cursor's official Rules documentation when you want persistent Agent guidance tied to a repository. Project rules encode architecture expectations, risky-folder guardrails, or repeatable workflows; Cursor applies them via Always Apply, intelligent relevance, glob-scoped attachments, or manual @mentions. Use `.mdc` frontmatter for finer control and reference templates with `@file` instead of pasting large snippets.
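A glob-scoped rule sketch using `.mdc` frontmatter; the paths, description, and rule text are illustrative:

```markdown
---
description: Guardrails for the payments service
globs: ["services/payments/**"]
alwaysApply: false
---

- Never edit generated files under services/payments/migrations/ by hand.
- New endpoints follow the handler template: @file templates/payment-handler.ts
```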

Structured AI meeting notes

Converts raw meeting transcripts into structured, actionable notes with decision logs, assigned action items, and key context preserved for future AI retrieval. This skill bridges the gap between what was discussed in a meeting and what AI agents need to know when acting on outcomes days or weeks later.
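One possible note layout in that spirit, where the headings, handles, and dates are placeholders:

```markdown
## Decisions
- Ship the reranker behind a feature flag (owner: @maya, revisit 2024-05-28).

## Action items
- [ ] @liu: benchmark p99 latency on the staging index before Friday.

## Context for future retrieval
- The flag exists because the reranker regressed long-tail queries in the last A/B test.
```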

AI News

All news →