Studio · 2024–present · Solo founder · Claude collaborator

Agent Parts

Composable primitives for production AI agents. Memory, tools, guardrails, and orchestration modules — opinionated enough to ship with, flexible enough to build on top of.

The kit underneath every fractional engagement I take. Same primitives, same mental model, less reinventing on each new client codebase.

Agent Parts is a TypeScript library of opinionated primitives for production AI agents. It's the toolkit I reach for first on every fractional engagement — memory, tools, guardrails, orchestration — so each client doesn't pay me to rebuild the same scaffolding from scratch.

Currently in active development. The shape is stable enough that I use it on real client work; the public API is still in flux as patterns prove themselves across multiple projects.

What it is

A small set of composable modules:

  • Memory — short-term conversation state, long-term vector memory, and the contract between them (sketched just after this list).
  • Tools — typed tool definitions with built-in validation, retry, and trace hooks.
  • Guardrails — input/output validators that fail loudly and recover gracefully.
  • Orchestration — multi-step agent flows with checkpoints and replayable traces.
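
To make the Memory bullet concrete, here is a minimal sketch of what that contract could look like. The names (TurnMessage, ShortTermMemory, LongTermMemory, MemoryContract) are illustrative, not the shipped Agent Parts API:

```ts
// Hypothetical sketch -- illustrative names, not the shipped API.

// Short-term memory: the rolling conversation window.
interface TurnMessage {
  role: "user" | "assistant" | "tool";
  content: string;
}

interface ShortTermMemory {
  append(message: TurnMessage): void;
  window(maxTokens: number): TurnMessage[];
}

// Long-term memory: vector-backed recall keyed by semantic similarity.
interface LongTermMemory {
  remember(text: string, metadata?: Record<string, unknown>): Promise<void>;
  recall(query: string, topK: number): Promise<string[]>;
}

// The contract between them: one read path the agent calls before each
// model turn, so the promotion/recall policy lives in a single place.
interface MemoryContract {
  shortTerm: ShortTermMemory;
  longTerm: LongTermMemory;
  contextFor(query: string): Promise<TurnMessage[]>;
}
```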

Each module is independent. You don't take the whole thing — you reach for the part you need on the project you're building.
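
Taking just the Tools and Guardrails parts, for instance, might look like the sketch below. defineTool and withGuardrail are hypothetical names, and zod stands in for the validation layer; the point is the shape of a typed tool plus a loud-failing check, not an exact API:

```ts
// Hypothetical sketch -- not the shipped Agent Parts API.
import { z } from "zod";

// A tool pairs a schema with a handler; the schema is both the runtime
// validator and the source of the TypeScript input type.
interface Tool<I, O> {
  name: string;
  input: z.ZodType<I>;
  run: (input: I) => Promise<O>;
}

function defineTool<I, O>(tool: Tool<I, O>): Tool<I, O> {
  return tool;
}

// A guardrail re-validates input and checks output, failing loudly
// instead of passing malformed data downstream.
function withGuardrail<I, O>(
  tool: Tool<I, O>,
  check: (output: O) => void,
): Tool<I, O> {
  return {
    ...tool,
    run: async (input) => {
      const parsed = tool.input.parse(input); // throws on bad input
      const output = await tool.run(parsed);
      check(output); // throws on a guardrail violation
      return output;
    },
  };
}

const lookupOrder = defineTool({
  name: "lookup_order",
  input: z.object({ orderId: z.string().min(1) }),
  run: async ({ orderId }: { orderId: string }) => ({ orderId, status: "shipped" }),
});

const safeLookup = withGuardrail(lookupOrder, (out) => {
  if (!out.orderId) throw new Error("guardrail: empty orderId in output");
});
```

Retry and trace hooks hang off the same wrapper pattern, which is what keeps each module usable on its own.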

Why I built it

After the second one, every AI engagement I started began with the same three weeks of plumbing: a memory schema, a typing layer for tools, a guardrail wrapper, and a way to log traces so on-call could read them. By the third engagement I was copying my own code out of the previous client's repo, modifying it slightly, and shipping it again.

Agent Parts is the productization of that pattern. The modules are mine, not the client's; the client gets the library as a dependency and a documented contract. When I leave, the client owns the agents I built; the kit goes with me to the next engagement.

How it ships

Built with Claude as a daily collaborator from day one. The library's tests, type signatures, and even the contribution docs were drafted in pair sessions with me running review and Claude running implementation. That working model is half the case study — the other half is the library itself.

Stack: TypeScript, Vercel AI SDK, a swappable vector layer (Pinecone / pgvector / Chroma), no opinion forced on the runtime.
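
"Swappable" means the vector layer sits behind one small interface, with Pinecone, pgvector, and Chroma as adapters against it. A rough sketch, using illustrative names rather than the real adapter API:

```ts
// Hypothetical sketch of the swappable vector layer -- illustrative names.
interface VectorStore {
  upsert(id: string, vector: number[], metadata?: Record<string, unknown>): Promise<void>;
  query(vector: number[], topK: number): Promise<Array<{ id: string; score: number }>>;
}

// In-memory reference implementation; the Pinecone / pgvector / Chroma
// adapters would satisfy the same interface.
class InMemoryStore implements VectorStore {
  private rows = new Map<string, { vector: number[]; metadata?: Record<string, unknown> }>();

  async upsert(id: string, vector: number[], metadata?: Record<string, unknown>) {
    this.rows.set(id, { vector, metadata });
  }

  async query(vector: number[], topK: number) {
    const dot = (a: number[], b: number[]) =>
      a.reduce((sum, v, i) => sum + v * (b[i] ?? 0), 0);
    return [...this.rows.entries()]
      .map(([id, row]) => ({ id, score: dot(vector, row.vector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, topK);
  }
}
```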

Why this matters for AI systems

Most agent codebases I've audited at client sites are 80% scaffolding and 20% the actual behavior the team cares about. The scaffolding gets rewritten every time someone ships a new agent because nobody factored it out.

Agent Parts factors it out. The 80% becomes a dependency you pin; the 20% — the actual agent behavior — gets all of your team's attention. That ratio flip is the difference between a team that ships one agent in a quarter and a team that ships five.



AI Agents · TypeScript · Next.js · AI SDK · Vector DB

Working on something similar?

Book an intro call and I’ll walk you through how the same systems thinking applies to your AI work.

Book an intro call