The Integration Layer Nobody Talks About
Everyone wants to talk about prompts. Nobody wants to talk about OAuth tokens.
But here's the truth: the integration layer is where AI projects succeed or fail. The model is almost never the bottleneck. The plumbing is.
What the Integration Layer Actually Is
When I build an AI agent, the "AI part" is maybe 20% of the work. The other 80% is:
- Authentication — Getting credentials, handling token refresh, managing scopes
- Data fetching — API calls, pagination, rate limit handling, retries
- Data transformation — Normalizing responses, handling edge cases, validating schemas
- Error handling — Graceful degradation, logging, alerting, recovery
- Output delivery — Formatting for the target system, permissions, confirmation
This isn't glamorous work. But it's the work that makes the difference between a demo and a system.
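The five layers above can be sketched as a single pass through a pipeline. This is an illustrative sketch, not code from a real agent: `source`, `deliver`, and the record shapes are all hypothetical stand-ins.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

def run_once(source, deliver):
    """One pass through the layers: auth -> fetch -> transform -> deliver.

    `source` is anything with get_token() and fetch(token); `deliver` is a
    callable that pushes results to the target system. Both are hypothetical
    stand-ins, not a real API client.
    """
    try:
        token = source.get_token()        # authentication: refresh lives here
        raw = source.fetch(token)         # data fetching: pagination, rate limits
        records = [r for r in raw if r]   # transformation: drop empty records
        deliver(records)                  # output delivery
        return records
    except Exception:
        log.exception("pipeline failed for %r", getattr(source, "name", "?"))
        return []                         # error handling: degrade, never crash
```

Note that the `except` branch returns an empty list instead of raising: a failure in one source degrades the result rather than taking down the whole run.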
A Real Example
I built a marketing performance agent that pulls from GA4, Google Ads, Search Console, and Google Business Profile. Here's how the work broke down:
| Component | Time Spent |
|---|---|
| API integrations & auth | 40% |
| Data normalization | 20% |
| Error handling & retries | 15% |
| LLM analysis logic | 15% |
| Output formatting | 10% |
The LLM logic? That was the easy part. Getting four different Google APIs to play nice together, handling their different auth flows, rate limits, and data formats—that was the real work.
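Normalization was a big slice of that work: every source returns a different shape, and the analysis layer should only ever see one. A minimal sketch of the pattern, with made-up field names (these are not the real GA4 or Google Ads response schemas):

```python
def normalize_ga4(row):
    # Field names here are illustrative, not the actual GA4 API schema.
    return {"date": row["dimensionValues"][0], "clicks": int(row["metricValues"][0])}

def normalize_ads(row):
    # Likewise illustrative, not the actual Google Ads API schema.
    return {"date": row["segments_date"], "clicks": int(row["metrics_clicks"])}

NORMALIZERS = {"ga4": normalize_ga4, "ads": normalize_ads}

def normalize(source_name, rows):
    """Map each source's response shape onto one internal schema."""
    fn = NORMALIZERS[source_name]
    out = []
    for row in rows:
        try:
            out.append(fn(row))
        except (KeyError, IndexError, ValueError, TypeError):
            # Malformed record: skip it rather than poison the whole batch.
            continue
    return out
```

One normalizer per source, one schema out the other side—everything downstream gets simpler because it only has to know about one shape.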
Why This Matters
When you're evaluating an AI project, don't ask "what model are you using?"
Ask:
- What systems does it connect to?
- How does it handle failures?
- What happens when the data is missing or malformed?
- How do you monitor it in production?
These questions reveal whether you're looking at a demo or a system.
The Integration Checklist
Before I consider an agent production-ready, it needs to pass this checklist:
- All API credentials are properly scoped and refreshable
- Rate limits are respected with exponential backoff
- Failures in one integration don't crash the whole system
- All actions are logged with enough context to debug
- There's a clear escalation path for edge cases
- The output format matches what the target system expects
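For the rate-limit item, the usual approach is capped exponential backoff with jitter. A minimal sketch—`TransientError` and the retry parameters are assumptions, and `call` stands in for any API request:

```python
import random
import time

class TransientError(Exception):
    """Raised for retryable failures (e.g. HTTP 429/503, timeouts)."""

def with_backoff(call, retries=5, base=0.5, cap=30.0):
    """Retry `call` on transient errors with capped exponential backoff."""
    for attempt in range(retries):
        try:
            return call()
        except TransientError:
            if attempt == retries - 1:
                raise  # out of retries: let the caller's error handling take over
            delay = min(cap, base * 2 ** attempt)
            time.sleep(random.uniform(0, delay))  # full jitter spreads out retries
```

The jitter matters: without it, every client that hit the same rate limit retries at the same instant and hits it again.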
The best AI agents aren't the ones with the cleverest prompts. They're the ones that keep working when everything around them breaks.
Getting Started
If you're building an agent, start with the integration layer. Before you write a single prompt:
- Map your data sources — What APIs? What auth? What rate limits?
- Map your action targets — Where does output go? What format?
- Build the pipeline — Get data flowing end-to-end with dummy logic
- Add the AI — Now you can focus on the interesting part
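Steps 3 and 4 work well together if the analysis step is injected rather than hard-coded. A sketch of the idea (function names are mine, not from any real agent): ship the pipeline with a trivial placeholder, then swap in the model call once data flows end-to-end.

```python
def placeholder_analysis(records):
    """Stand-in for the LLM call: same signature, trivial logic.

    Replace with the real model call only after the plumbing works.
    """
    total = sum(r.get("clicks", 0) for r in records)
    return f"{len(records)} records, {total} clicks"

def run(fetch, analyze=placeholder_analysis):
    """End-to-end pipeline; `fetch` and `analyze` are injected so each
    stage can be tested and replaced independently."""
    records = fetch()
    return analyze(records)
```

For example, `run(lambda: [{"clicks": 3}, {"clicks": 4}])` returns `"2 records, 7 clicks"`—proof the pipeline works before a single prompt exists.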
The unglamorous truth: integration work is what separates shipped AI from demo AI.
Building an AI system and hitting integration walls? Let's talk.