No magic, no mystery. Just a loop, some tools, and a language model doing its thing.
Sources: Anthropic · Ethan Mollick · Simon Willison · Thorsten Ball · Andrew Ng · Fly.io · Geoffrey Huntley · hoeem
"We are at the point where we need to think of AI as something we manage, not just something we use." - Ethan Mollick, Wharton
Or as Ethan Mollick puts it: "an AI that is given a goal and can pursue that goal autonomously." A working agent can be built in fewer than 400 lines of code.
"Agents are typically just LLMs using tools based on environmental feedback in a loop." - Anthropic, Building Effective Agents
The loop keeps running until the model decides it has nothing left to do. That's the whole trick.
while True:
    response = call_llm(messages, tools)
    if response.is_final:
        return response              # no tool calls requested — done!
    for tool in response.tool_calls:
        result = execute(tool)       # your code runs the tool, not the model
        messages.append(result)      # feed the result back in and loop
This is where most people get confused. The model can't run code. It just outputs text describing what it wants to do. The runtime does the rest.
"I'd like to call get_weather with city: Berkeley"
Your code makes the actual API call, reads the file, runs the query
The model reasons over it and decides: done, or call another tool?
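That handoff can be sketched in a few lines. The registry and the `get_weather` stub below are illustrative assumptions, not any particular SDK:

```python
# Hypothetical tool registry: the model only *names* a tool; this code runs it.
TOOLS = {
    "get_weather": lambda args: f"62°F and foggy in {args['city']}",  # stub
}

def execute(tool_call):
    """Run the tool the model asked for and package the result as a message."""
    handler = TOOLS[tool_call["name"]]    # look up the real function
    result = handler(tool_call["input"])  # your code does the actual work
    return {"role": "tool", "name": tool_call["name"], "content": result}

# The model emitted: "I'd like to call get_weather with city: Berkeley"
message = execute({"name": "get_weather", "input": {"city": "Berkeley"}})
```

The returned message goes back into the conversation, and the loop calls the model again.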
Think of it like a "wink." You tell the model: "wink if you want me to raise my arm." When it winks (requests a tool), your code does the actual arm-raising. - Thorsten Ball
You don't program when to use a tool. You describe what tools exist, and the model decides:
{
  "name": "get_weather",
  "description": "Get current weather for a city",
  "input_schema": {
    "type": "object",
    "properties": {
      "city":  { "type": "string" },
      "units": { "type": "string" }
    },
    "required": ["city"]
  }
}
Every coding agent (Claude Code, Cursor, Copilot) is built from just five tools: read files, list directories, run bash commands, edit files, and search code.
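A rough sketch of those five primitives as plain Python functions — a simplified assumption of how an agent runtime might implement them, not how any real product does:

```python
import os
import subprocess

# The five primitives behind most coding agents. Each is ordinary code
# the model can only *request*, never run itself.

def read_file(path):
    with open(path) as f:
        return f.read()

def list_dir(path="."):
    return os.listdir(path)

def run_bash(command):
    return subprocess.run(command, shell=True,
                          capture_output=True, text=True).stdout

def edit_file(path, old, new):
    text = read_file(path)
    with open(path, "w") as f:
        f.write(text.replace(old, new))

def search_code(root, needle):
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if needle in read_file(path):
                    hits.append(path)
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
    return hits
```

Expose each of these through a schema like `get_weather` above, and the same loop becomes a coding agent.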
"Think about how much effort goes into human-computer interaction, and plan to invest just as much in Agent-Computer Interface (ACI)." - Anthropic
"The most successful implementations weren't using complex frameworks. They were building with simple, composable patterns." - Anthropic
Context engineering (deciding what goes in the window) is the real skill. Not prompt magic. Not model size. What you include and exclude. - Annie Ruygt, Fly.io
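One concrete version of that skill, sketched with a crude character budget standing in for a real token counter (an assumption for illustration; real agents also summarize, dedupe, and rank what they keep):

```python
def trim_context(messages, budget=8000):
    """Keep the system prompt plus the newest messages that fit the budget.
    Dropping the middle is the simplest context-engineering move."""
    system, rest = messages[0], messages[1:]
    kept = []
    used = len(system["content"])
    for msg in reversed(rest):       # newest messages matter most
        used += len(msg["content"])
        if used > budget:
            break
        kept.append(msg)
    return [system] + list(reversed(kept))
```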
Reflection: the AI critiques its own output and iterates until it's good enough
Tool use: connect to APIs, databases, files, and external services
Planning: break a complex task into steps and execute them in sequence
Multi-agent collaboration: multiple specialized agents coordinate and hand off work
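The self-critique pattern above can be sketched as a loop around two stubbed model calls (`generate` and `critique` are hypothetical stand-ins for LLM calls, not any real API):

```python
def reflect(task, generate, critique, max_rounds=3):
    """Draft, self-critique, and revise until the critic approves.
    `generate` and `critique` stand in for LLM calls."""
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback == "ok":    # critic is satisfied — done
            return draft
        draft = generate(f"{task}\nFix: {feedback}")
    return draft
```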
Thinkers sit and reason. Doers grab tools and get to work. The best agent systems use both — a Thinker to plan, Doers to execute.
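A minimal sketch of that split, with a stubbed planner standing in for the Thinker and a dispatch table of Doers (all names here are illustrative assumptions):

```python
def run(task, plan, executors):
    """A Thinker decomposes the task; Doers carry out each step.
    `plan` stands in for a reasoning-heavy model call."""
    results = []
    for step in plan(task):                   # Thinker: produce a step list
        tool, arg = step
        results.append(executors[tool](arg))  # Doer: execute with tools
    return results
```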
StrongDM's rule: if you're not spending $1,000/engineer/day on AI tokens, your factory has room to improve. The leverage is real.
StrongDM, a security company, built a 3-person team where AI agents write, test, and ship production software. Their rules:
"Code must not be written by humans."
"Code must not be reviewed by humans."
Working from specs, not prompts. Full autonomy over implementation.
Separate agents find bugs the coding agents can't see or game.
Each engineer spends ~$1,000/day on AI tokens. Still cheaper than hiring.
Covered by Simon Willison and Ethan Mollick as proof that agents can now compound correctness rather than compound errors.
"AI is most useful when it just does stuff. Not when it tells you what to do." - Ethan Mollick, Wharton
"Get on this bike and push the pedals." - Annie Ruygt, Fly.io
An agent is an LLM with tools and a loop. The secret everyone wants explained is embarrassingly simple.