Agent Development · 18 min · 2026-03-09

What Is Google ADK (Agent Development Kit) and How Does It Work?

Google ADK is an open-source framework for building, orchestrating, and deploying AI agents. This guide explains what ADK is, how it works, what you can build with it, and how it compares to other agent frameworks.

Brandon Lincoln Hendricks

Autonomous AI Agent Architect

What Is Google ADK?

Google Agent Development Kit (ADK) is an open-source framework for building AI agents that can reason, use tools, and execute multi-step tasks autonomously. Released by Google, ADK is designed to make building AI agents feel like building traditional software — modular, testable, and composable.

ADK is optimized for the Google Cloud ecosystem — particularly Gemini models, Vertex AI, and Agent Engine — but it is model-agnostic and deployment-agnostic. You can use it with other models and deploy agents anywhere: locally, on Cloud Run, in Docker containers, or at enterprise scale on Vertex AI Agent Engine.

The framework supports Python, Java, Go, and TypeScript, making it accessible to most development teams without requiring them to learn a new language.

Why ADK Exists

Before ADK, building AI agents on Google Cloud meant stitching together multiple services manually. You would call Gemini for reasoning, write custom code for tool execution, build your own orchestration logic for multi-agent coordination, and handle memory and state management from scratch.

ADK consolidates all of this into a single, coherent framework. It provides the abstractions, patterns, and infrastructure that agent development requires — so teams can focus on what their agents should do rather than how to wire everything together.

This matters because the industry is shifting from building individual AI features to building autonomous agent systems. That shift demands a framework purpose-built for agents, not a general-purpose ML toolkit retrofitted with agent capabilities.

How ADK Works

ADK is built around a small number of core concepts that compose together to create sophisticated agent systems.

Agents

The fundamental unit in ADK is the agent. An agent is a self-contained entity that has a name, instructions (a system prompt that defines its behavior), a model (typically Gemini), and a set of tools it can use.

When an agent receives a request, it uses its model to reason about the request, decides which tools to invoke, processes the results, and formulates a response. This is not a single model call — the agent loops through reasoning and tool use until it determines the task is complete.
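This reason-act loop can be sketched in plain Python. This is a conceptual illustration, not ADK's actual implementation — `fake_model`, `get_weather`, and the message format are all placeholders standing in for a real model call and real tools.

```python
# Conceptual sketch of the agent loop: reason, act, observe, repeat.
def fake_model(messages, tools):
    """Stand-in for a model call: first asks for a tool, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"final": "It is sunny in Paris."}

def get_weather(city: str) -> str:
    return f"sunny in {city}"  # stub tool; a real one would call an API

TOOLS = {"get_weather": get_weather}

def run_agent(request: str, max_turns: int = 5) -> str:
    messages = [{"role": "user", "content": request}]
    for _ in range(max_turns):           # loop until the model signals done
        decision = fake_model(messages, TOOLS)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])   # act
        messages.append({"role": "tool", "content": result})  # observe
    raise RuntimeError("agent did not finish within max_turns")

print(run_agent("What's the weather in Paris?"))
```

The essential point is the loop: the model's output either requests another tool call or ends the task, and the framework feeds each tool result back in as context for the next reasoning step.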

ADK supports three categories of agents:

LLM Agents are the most common type. They use a foundation model to reason dynamically about each request. Their behavior is guided by natural language instructions rather than hard-coded logic, which means they can handle novel situations that were not explicitly programmed.

Workflow Agents provide deterministic control flow. Sequential agents execute steps in order. Parallel agents execute steps concurrently. Loop agents repeat steps until a condition is met. These are useful when you need predictable, repeatable behavior — like processing a batch of records through a fixed pipeline.

Custom Agents let you define entirely custom orchestration logic. When neither LLM-driven reasoning nor predefined workflows fit your use case, you can implement your own agent class with whatever behavior you need.
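The three workflow patterns reduce to familiar control flow. The sketch below is framework-agnostic — each "agent" is shrunk to a plain function, whereas ADK's workflow agents wrap full agents — but the sequential, parallel, and loop semantics are the same idea.

```python
from concurrent.futures import ThreadPoolExecutor

def sequential(steps, data):
    for step in steps:                 # each step feeds the next
        data = step(data)
    return data

def parallel(steps, data):
    with ThreadPoolExecutor() as pool:  # run all steps concurrently
        futures = [pool.submit(step, data) for step in steps]
        return [f.result() for f in futures]

def loop(step, data, done):
    while not done(data):              # repeat until a condition is met
        data = step(data)
    return data

# Toy usage with trivial steps:
assert sequential([lambda x: x + 1, lambda x: x * 2], 3) == 8
assert parallel([lambda x: x + 1, lambda x: x * 2], 3) == [4, 6]
assert loop(lambda x: x + 1, 0, lambda x: x >= 3) == 3
```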

Tools

Tools are what give agents the ability to act in the world. Without tools, an agent can only generate text. With tools, it can query databases, call APIs, send emails, update records, execute code, and interact with any system that exposes a programmatic interface.

ADK supports several tool types:

Function Tools are the simplest — you write a Python or TypeScript function and register it as a tool. The agent calls it when its reasoning determines the function is needed. The function's docstring and type hints tell the model what the tool does and what parameters it expects.
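To see how a docstring and type hints become information the model can use, here is a hedged sketch: `describe_tool` is an illustrative helper, not an ADK API, showing the kind of schema a framework can derive from an ordinary function.

```python
import inspect
from typing import get_type_hints

def get_order_status(order_id: str) -> str:
    """Look up the current status of an order by its ID."""
    return f"Order {order_id}: shipped"   # stub; a real tool would query a system

def describe_tool(fn):
    """Derive a model-facing tool description from a plain function."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {name: t.__name__ for name, t in hints.items()},
    }

print(describe_tool(get_order_status))
```

The model never sees your code — it sees this derived description, which is why clear docstrings and precise type hints directly affect how reliably the agent uses the tool.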

MCP Tools integrate with the Model Context Protocol standard, allowing agents to use tools exposed by any MCP-compatible server. This opens up a wide ecosystem of pre-built tool integrations.

OpenAPI Tools let you point an agent at an OpenAPI specification, and the agent can call any endpoint defined in that spec. This is powerful for integrating with existing APIs without writing custom tool wrappers.
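The core idea — one callable tool per endpoint — is easy to sketch. The spec dict below is a toy inline example (real specs come from a file or URL), and `tools_from_spec` is illustrative rather than ADK's actual machinery.

```python
SPEC = {
    "paths": {
        "/orders/{id}": {"get":  {"operationId": "getOrder",
                                  "summary": "Fetch one order"}},
        "/orders":      {"post": {"operationId": "createOrder",
                                  "summary": "Create an order"}},
    }
}

def tools_from_spec(spec):
    """Enumerate one tool descriptor per (path, method) in an OpenAPI spec."""
    tools = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            tools.append({"name": op["operationId"],
                          "summary": op["summary"],
                          "method": method.upper(),
                          "path": path})
    return tools

for t in tools_from_spec(SPEC):
    print(t["name"], t["method"], t["path"])
```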

Agent-as-Tool is one of ADK's most useful patterns. You can register one agent as a tool that another agent can invoke. This enables hierarchical delegation — a manager agent routes work to specialist agents, each with their own tools and expertise.
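A toy sketch of the delegation shape: the manager treats each specialist as a callable and routes requests to it. Note the routing here is keyword matching for illustration only — in ADK the routing decision comes from the model's reasoning, not string matching.

```python
def billing_agent(request: str) -> str:
    return "billing: refund issued"      # stub specialist

def support_agent(request: str) -> str:
    return "support: ticket opened"      # stub specialist

SPECIALISTS = {"refund": billing_agent, "help": support_agent}

def manager_agent(request: str) -> str:
    """Route the request to a specialist, falling back if none matches."""
    for keyword, specialist in SPECIALISTS.items():
        if keyword in request.lower():
            return specialist(request)   # delegate to the sub-agent
    return "manager: no specialist matched"

print(manager_agent("I need a refund"))
```

Because each specialist is itself a full agent with its own tools, this pattern composes: a specialist can delegate further, producing the tree of agents described under hierarchical orchestration below.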

Memory and State

Agents need memory to maintain context across interactions. ADK provides a session-based memory system where each conversation with an agent maintains state that persists across turns.

Session State stores key-value pairs that persist within a conversation. Agents can read and write state to track what they have done, what information they have gathered, and what decisions they have made.

Long-Term Memory extends beyond individual sessions. Agents can store and retrieve information across conversations, enabling them to learn from past interactions and build up knowledge over time.

Shared Memory allows multiple agents to access the same memory store. This is essential for multi-agent systems where agents need to coordinate — a research agent can store its findings in shared memory, and an execution agent can read those findings to take action.
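The coordination pattern is simple to sketch: a plain dict stands in for ADK's session and memory services, with one agent writing findings and another reading them.

```python
class Session:
    """Minimal stand-in for a session: shared key-value state."""
    def __init__(self):
        self.state = {}

def research_agent(session: Session):
    # Writer: stores findings in shared state.
    session.state["findings"] = ["competitor raised prices"]

def execution_agent(session: Session) -> str:
    # Reader: acts on whatever the research agent stored.
    findings = session.state.get("findings", [])
    return f"acting on {len(findings)} finding(s)"

session = Session()
research_agent(session)
print(execution_agent(session))   # both agents see the same state
```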

Orchestration

Orchestration is how multiple agents work together. ADK provides built-in patterns for common orchestration scenarios:

Hierarchical Orchestration uses a root agent that delegates to sub-agents. The root agent receives the initial request, reasons about which sub-agent should handle it, and routes accordingly. Sub-agents can have their own sub-agents, creating a tree of specialized agents.

Sequential Orchestration executes agents in a defined order. The output of one agent becomes the input of the next. This is useful for pipelines where each stage transforms or enriches the data.

Parallel Orchestration runs multiple agents concurrently and aggregates their results. This is useful when you need multiple perspectives or when agents can work on independent subtasks simultaneously.

What You Can Build with ADK

ADK is general-purpose, but certain categories of applications are particularly well-suited to its architecture.

Operational Intelligence Agents

Agents that continuously monitor business signals — transaction patterns, system metrics, customer behavior, market data — and surface anomalies, insights, and recommendations. These agents use Gemini's reasoning to understand what signals mean in context, not just whether a threshold was breached.

Workflow Automation Agents

Agents that execute complex, multi-step business processes that previously required human judgment. Customer intake, document processing, compliance verification, reporting — any workflow where the steps are known but the decisions within each step require contextual understanding.

Multi-Agent Research Systems

Systems where multiple specialized agents collaborate on research tasks. One agent gathers data, another analyzes it, a third synthesizes findings, and a coordinator agent manages the overall process. ADK's agent-as-tool pattern and shared memory make this kind of collaboration straightforward.

Customer-Facing Agent Systems

Agents that interact directly with customers — handling support requests, providing personalized recommendations, or guiding users through complex processes. ADK's session memory maintains conversation context, and its tool integration allows agents to take real actions on behalf of customers.

Internal Operations Agents

Agents that handle internal operational tasks — monitoring infrastructure, managing schedules, coordinating between teams, generating reports. These agents run continuously in the background, handling routine operations and escalating only when human judgment is genuinely needed.

ADK vs Other Agent Frameworks

ADK exists in a landscape that includes LangGraph, CrewAI, AutoGen, and several other agent frameworks. The differences matter for choosing the right tool.

ADK vs LangGraph

LangGraph, built on LangChain, uses a graph-based approach to agent orchestration. You define nodes (processing steps) and edges (transitions) explicitly. This gives fine-grained control over execution flow but requires more boilerplate to set up.

ADK takes a more opinionated approach. Agent orchestration is handled through agent composition — agents delegate to sub-agents, and the framework manages the execution flow. This means less configuration for common patterns but less granular control for unusual ones.

The key differentiator is ecosystem integration. ADK is purpose-built for Google Cloud. If your infrastructure runs on Google Cloud, ADK provides native integration with Vertex AI, Agent Engine, Gemini, BigQuery, and the rest of the Google Cloud stack. LangGraph is cloud-agnostic, which is an advantage if you are multi-cloud but means more integration work on any specific cloud.

ADK vs CrewAI

CrewAI focuses on multi-agent collaboration with role-based agent definitions. You define agents with roles, goals, and backstories, then organize them into crews that work together on tasks.

ADK is more general-purpose. While it supports multi-agent systems through hierarchical orchestration and agent-as-tool patterns, it does not impose a role-based metaphor. ADK agents are defined by their instructions and tools, which provides more flexibility but less structure for team-based agent scenarios.

CrewAI is a strong choice for quick multi-agent prototypes. ADK is a stronger choice for production systems that need to integrate with enterprise infrastructure.

ADK vs AutoGen

AutoGen, from Microsoft, emphasizes conversational agent patterns where agents communicate through messages. It excels at scenarios where agents need to debate, negotiate, or iteratively refine outputs through conversation.

ADK emphasizes tool use and action execution over conversation. Agents in ADK are designed to reason and act, not primarily to converse with each other. For operational agent systems that need to execute workflows and interact with enterprise systems, ADK's tool-first approach is more practical.

How ADK Connects to Vertex AI Agent Engine

ADK is the development framework. Vertex AI Agent Engine is the production runtime. The relationship is like that between a codebase and the server it runs on: you write and test locally, then deploy to a managed environment.

You build agents locally using ADK — defining their instructions, tools, memory configuration, and orchestration patterns. You test and iterate in a local development environment where you can run agents, inspect their reasoning, and debug their behavior.

When the agent is ready for production, you deploy it to Vertex AI Agent Engine. Agent Engine provides:

Managed Infrastructure — you do not manage servers, containers, or scaling. Agent Engine handles the compute infrastructure and scales based on demand.

Production Reliability — enterprise SLAs, automatic failover, and health monitoring. Your agents run with the same reliability guarantees as other Google Cloud managed services.

Security — identity and access management, network isolation, data encryption, and audit logging. Agent Engine meets enterprise security requirements out of the box.

Observability — request tracing, performance metrics, error tracking, and cost monitoring. You can see exactly what your agents are doing in production and how they are performing.

This separation of concerns — ADK for development, Agent Engine for deployment — means you can iterate quickly in development without worrying about infrastructure, and deploy with confidence knowing the production environment is managed and monitored.

Getting Started with ADK

The fastest path to a working agent follows this sequence.

Install ADK

ADK is available through standard package managers. Install the Python package to get started, or choose Java, Go, or TypeScript depending on your team's stack.
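For Python, the package is published on PyPI as `google-adk`:

```shell
pip install google-adk
```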

Define Your First Agent

Start with a single LLM agent. Give it a name, a set of instructions describing its purpose and behavior, and one or two simple tools — even just a function that returns hardcoded data. The goal is to see the agent loop in action: receive a request, reason about it, invoke a tool, process the result, and respond.

Add Tools

Once the basic agent loop works, add real tools. Connect to a database, call an API, or integrate with an existing system. This is where the agent starts doing useful work rather than just generating text.

Add a Second Agent

Create a second agent with different expertise and tools. Register it as a sub-agent or as a tool on your first agent. Now you have a multi-agent system where one agent can delegate to another based on the request.

Test and Evaluate

ADK includes built-in evaluation capabilities. Define test cases that specify inputs and expected behaviors, then run your agents against those test cases to verify they behave correctly.
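The shape of agent evaluation — recorded inputs paired with expected behaviors — can be sketched in a few lines. This is an illustration of the idea, not ADK's own evaluation tooling; `echo_agent` and the test-case format are placeholders.

```python
def echo_agent(request: str) -> str:
    return f"handled: {request}"   # stand-in for a real agent invocation

TEST_CASES = [
    {"input": "reset my password", "expect_contains": "password"},
    {"input": "cancel my order",   "expect_contains": "order"},
]

def evaluate(agent, cases):
    """Run each case through the agent; return the inputs that failed."""
    failures = []
    for case in cases:
        output = agent(case["input"])
        if case["expect_contains"] not in output:
            failures.append(case["input"])
    return failures

print(evaluate(echo_agent, TEST_CASES))   # [] means every case passed
```

In practice you would check richer behaviors than substring matches — which tools were called, what state was written — but the structure stays the same: defined inputs, expected behaviors, automated verification before deployment.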

Deploy to Agent Engine

When you are confident the agents work correctly, deploy to Vertex AI Agent Engine for production use. The deployment process packages your agents and their tools, provisions the runtime infrastructure, and makes your agents available through API endpoints.

Where ADK Fits in the Autonomous AI Agent Architecture

ADK occupies the Agent Layer in the five-layer architecture for autonomous AI agent systems.

Signals Layer — Data sources, event streams, and monitoring inputs that feed information to agents. This is where BigQuery, Pub/Sub, and Cloud Functions operate.

Reasoning Layer — Gemini models that provide the intelligence behind agent decision-making. Agents call Gemini through ADK's model integration to reason about their tasks.

Agent Layer — This is where ADK lives. It defines the agents, their tools, their orchestration patterns, and their memory systems. ADK is the framework that brings the reasoning layer and the signals layer together into coherent agent behavior.

Execution Layer — Vertex AI Agent Engine runs ADK-built agents in production. It provides the managed infrastructure that makes agents operationally reliable.

Operations Layer — The business processes, workflows, and systems that agents interact with. Agents take action in the operations layer through their tools.

Frequently Asked Questions

Is ADK only for Google Cloud?

No. ADK is open-source and can be used with other models and deployed on other platforms. However, it is optimized for Google Cloud — the integrations with Gemini, Vertex AI, Agent Engine, and other Google Cloud services are the most mature and best-supported.

What programming languages does ADK support?

ADK supports Python, Java, Go, and TypeScript. Python has the most complete feature set and the largest community. The other language SDKs are actively developed and cover the core capabilities.

Do I need Vertex AI Agent Engine to use ADK?

No. You can run ADK agents locally, in Docker containers, on Cloud Run, or on any compute platform. Agent Engine is the recommended production deployment target for enterprise workloads because it provides managed infrastructure, scaling, and security — but it is not required.

How is ADK different from just calling the Gemini API directly?

Calling the Gemini API gives you a single model interaction — you send a prompt, you get a response. ADK builds an agent system around that model interaction. It adds tool use, memory, multi-turn reasoning, multi-agent orchestration, and deployment infrastructure. The difference is between a single API call and an autonomous system that reasons, acts, and learns.

Can ADK agents work with non-Google models?

Yes. ADK is model-agnostic. While Gemini integration is the most polished, you can configure agents to use other models. The framework abstracts the model interface so that swapping models does not require rewriting your agent logic.