12-Factor Agents: Patterns of Great LLM Applications
Howdy HN - I'm Dex from HumanLayer (YC F24), and I've been building AI agents for a while. After trying every framework out there and talking to many founders building with AI, I've noticed something interesting: most "AI Agents" that make it to production aren't actually that agentic. The best ones are mostly just well-engineered software with LLMs sprinkled in at key points.

So I set out to document what I've learned about building production-grade AI systems. Today I'm excited to share "12-Factor Agents": https://ift.tt/gozYe1c

It's a set of principles for building LLM-powered software that's reliable enough to put in the hands of production customers. I've seen many SaaS builders try to pivot towards AI by building greenfield new projects on agent frameworks, only to find that they couldn't get things past the 70-80% reliability bar with out-of-the-box tools. The ones that did succeed tended to take small, modular concepts from agent building and incorporate them into their existing product, rather than starting from scratch.

In the spirit of Heroku's 12 Factor Apps (https://12factor.net/), these principles focus on the engineering practices that make LLM applications more reliable, scalable, and maintainable. Even as models get exponentially more powerful, these core techniques will remain valuable.

Some highlights:

- Factor 1: Natural Language to Tool Calls
- Factor 2: Own your prompts
- Factor 3: Own your context window
- Factor 4: Tools are just structured outputs
- Factor 5: Unify execution and business state
- Factor 6: Launch/Pause/Resume with simple APIs
- Factor 7: Contact humans with tool calls
- Factor 8: Own your control flow
- Factor 9: Compact errors into context
- Factor 10: Small, focused agents
- Factor 11: Meet users where they are
- Factor 12: Make your agent a stateless reducer

The full guide goes into detail on each principle, with examples and patterns to follow (a toy sketch illustrating a couple of these factors appears at the end of this post). I've seen these practices work well in production systems handling real user traffic.

I'm sharing this as a starting point - the field is moving quickly and I expect these principles to evolve. I welcome your feedback and contributions to help figure out what "production grade" means for AI systems.

Check out the full guide at https://ift.tt/gozYe1c

Special thanks to (github users) @iantbutler01, @tnm, @hellovai, @stantonk, @balanceiskey, @AdjectiveAllison, @pfbyjy, @a-churchill, as well as the SF MLOps community for early feedback on this guide.
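
As a rough illustration of two of the factors - and to be clear, this is a toy sketch written for this post, not code from the guide - here is what Factor 4 (tools are just structured outputs) and Factor 12 (make your agent a stateless reducer) can look like in a few dozen lines of Python. Every name here (SearchDocs, AgentState, decide_next_step, and so on) is hypothetical, and the LLM call is stubbed out.

```python
# Toy sketch only: Factor 4 (tools as structured outputs) and
# Factor 12 (agent loop as a stateless reducer). All names are hypothetical.

from dataclasses import dataclass, field
from typing import Union

# Factor 4: a "tool call" is just plain, typed data the model emits,
# not something hidden inside a framework.
@dataclass
class SearchDocs:
    query: str

@dataclass
class Done:
    answer: str

ToolCall = Union[SearchDocs, Done]

@dataclass
class AgentState:
    # Everything the agent knows lives in explicit state: the full
    # history of tool calls and tool results so far.
    events: list = field(default_factory=list)

def decide_next_step(state: AgentState) -> ToolCall:
    """Stand-in for an LLM call that returns a structured output.
    A real system would prompt a model here and parse its response
    into one of the ToolCall types above."""
    if not any(isinstance(e, SearchDocs) for e in state.events):
        return SearchDocs(query="12-factor agents")
    return Done(answer="summarised findings")

def run_tool(call: SearchDocs) -> str:
    # Placeholder for the actual side effect (search, API call, etc.).
    return f"results for {call.query!r}"

# Factor 12: each step is a pure function from old state to new state.
def step(state: AgentState) -> AgentState:
    call = decide_next_step(state)
    new_events = state.events + [call]
    if isinstance(call, SearchDocs):
        new_events.append(run_tool(call))
    return AgentState(events=new_events)

if __name__ == "__main__":
    state = AgentState()
    while not (state.events and isinstance(state.events[-1], Done)):
        state = step(state)  # pure steps are easy to pause, resume, and replay
    print(state.events[-1].answer)
```

The point of the sketch: once tool calls are plain data and each step is a pure function of prior state, pausing, resuming, serializing, and replaying an agent becomes ordinary software engineering rather than framework magic.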
