DigitalOcean Agent Platform Overview¶
Introduction¶
The DigitalOcean Agent Platform provides tools and services for building, deploying, and managing AI agents and intelligent applications. It combines serverless functions, managed databases, and AI/ML capabilities in a unified platform.
Key Features¶
- AI Agent Development: Build, test, and deploy intelligent agents using the SDK and CLI
- Serverless Runtime: No infrastructure management
- Vector Databases: Built-in vector storage for embeddings
- LLM Integration: Connect to OpenAI, Anthropic, and other model providers
- Memory Management: Persistent agent memory
- Tool Calling: Function calling for agents
- Multi-Agent Systems: Orchestrate multiple agents
- Real-time Streaming: Stream AI responses
- Observability: Monitor agent performance
- Cost Optimization: Pay only for usage
Architecture¶
┌─────────────────────────────────────────────────────────────┐
│                       User Interface                        │
│ ├─> Web App                                                 │
│ ├─> Mobile App                                              │
│ └─> API Clients                                             │
└────────────────────────────┬────────────────────────────────┘
                             │
                    ┌────────▼────────┐
                    │ Agent Platform  │
                    │   API Gateway   │
                    └────────┬────────┘
                             │
        ┌────────────────────┼────────────────────┐
        │                    │                    │
   ┌────▼────┐          ┌────▼────┐          ┌────▼────┐
   │  Agent  │          │  Agent  │          │  Agent  │
   │Instance │          │Instance │          │Instance │
   │   #1    │          │   #2    │          │   #3    │
   └────┬────┘          └────┬────┘          └────┬────┘
        │                    │                    │
        └────────────────────┼────────────────────┘
                             │
        ┌────────────────────┼────────────────────┐
        │                    │                    │
   ┌────▼────┐          ┌────▼────┐          ┌────▼────┐
   │   LLM   │          │ Vector  │          │ Memory  │
   │Provider │          │   DB    │          │ Storage │
   └─────────┘          └─────────┘          └─────────┘
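Each layer in the diagram corresponds to something you wire into an agent when you create it. The sketch below is conceptual, reusing the SDK calls that appear in the examples later on this page; the lookup_order tool is a hypothetical placeholder.

# Conceptual wiring of one agent instance to the components above.
# Agent, Tool, VectorStore, and Memory are the SDK objects used in the
# examples later on this page; the exact configuration is illustrative.
from digitalocean_agent import Agent, Tool, VectorStore, Memory

def lookup_order(order_id: str) -> str:
    """Example tool: query an external service (hypothetical helper)."""
    return f"Order {order_id} is in transit"

agent = Agent(
    name="support-agent",
    model="gpt-4",                                   # LLM Provider
    vector_store=VectorStore(name="support-docs"),   # Vector DB
    memory=Memory(type="conversation"),              # Memory Storage
    tools=[Tool(name="lookup_order", func=lookup_order)],
    system_prompt="Answer support questions using the available tools."
)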
Use Cases¶
1. Conversational AI¶
Chatbots & Assistants:
├─> Customer support bots
├─> Virtual assistants
├─> FAQ automation
└─> Interactive guides
Features:
├─> Natural language understanding
├─> Context awareness
├─> Multi-turn conversations
└─> Personalization
2. Document Processing¶
Intelligent Document Analysis:
├─> PDF parsing and Q&A
├─> Contract analysis
├─> Resume screening
└─> Research assistance
Features:
├─> Semantic search
├─> Information extraction
├─> Summarization
└─> Classification
3. Code Assistants¶
Development Tools:
├─> Code generation
├─> Code review
├─> Documentation generation
└─> Bug detection
Features:
├─> Multi-language support
├─> Context-aware suggestions
├─> Best practices enforcement
└─> Automated testing
4. Data Analysis¶
Intelligent Analytics:
├─> Natural language queries
├─> Automated insights
├─> Report generation
└─> Trend analysis
Features:
├─> SQL generation
├─> Data visualization
├─> Anomaly detection
└─> Predictive analytics
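For example, the SQL generation item above can be implemented as a tool the agent calls with a query it writes itself. A minimal sketch using the Tool API from the Quick Start below; the run_sql helper and its SQLite database are stand-ins for your own data source.

import sqlite3
from digitalocean_agent import Agent, Tool

def run_sql(sql: str) -> str:
    """Execute a SQL query and return the rows as text (illustrative stand-in)."""
    conn = sqlite3.connect("analytics.db")
    try:
        rows = conn.execute(sql).fetchall()
        return "\n".join(str(row) for row in rows)
    finally:
        conn.close()

analyst = Agent(
    name="data-analyst",
    model="gpt-4",
    tools=[Tool(name="run_sql", func=run_sql)],
    system_prompt="Translate questions into SQL, call run_sql, and summarize the results."
)

# Natural language in, SQL generated and executed via the tool.
print(analyst.run("What were the top 5 products by revenue last month?"))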
5. Workflow Automation¶
Intelligent Automation:
├─> Email processing
├─> Task routing
├─> Decision making
└─> Process optimization
Features:
├─> Multi-step workflows
├─> Conditional logic
├─> Human-in-the-loop
└─> Integration with tools
Agent Components¶
1. LLM Integration¶
Supported Models:
├─> OpenAI (GPT-4, GPT-3.5)
├─> Anthropic (Claude)
├─> Google (PaLM, Gemini)
├─> Open-source (Llama, Mistral)
└─> Custom models
Features:
├─> Model switching
├─> Temperature control
├─> Token management
└─> Cost tracking
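Model switching usually comes down to changing the model argument when constructing an agent, while sampling behavior is controlled per agent. A minimal sketch; the temperature keyword is an assumption (the Agent examples on this page only show name, model, tools, and system_prompt), so check the SDK reference for the exact parameter name.

from digitalocean_agent import Agent

# Same agent definition, different model and sampling settings.
# temperature is assumed to be a supported keyword argument.
drafting_agent = Agent(
    name="drafter",
    model="gpt-3.5-turbo",   # cheaper model for first drafts
    temperature=0.7
)

review_agent = Agent(
    name="reviewer",
    model="gpt-4",           # switch models by changing a single string
    temperature=0.0          # more deterministic output for review passes
)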
2. Vector Database¶
Embedding Storage:
├─> Store document embeddings
├─> Semantic search
├─> Similarity matching
└─> RAG (Retrieval Augmented Generation)
Capabilities:
├─> High-dimensional vectors
├─> Fast similarity search
├─> Metadata filtering
└─> Hybrid search
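A minimal sketch of the store-then-search flow: add_documents appears in the RAG pattern later on this page, while the search method, its arguments, and the filter syntax are assumptions made to illustrate similarity search with metadata filtering.

from digitalocean_agent import VectorStore

store = VectorStore(name="product-docs")

# add_documents is shown in the RAG pattern below.
store.add_documents([
    {"text": "Product A costs $99", "metadata": {"type": "pricing"}},
    {"text": "How to reset your password", "metadata": {"type": "howto"}}
])

# Hypothetical query call: semantic search with a metadata filter.
# The method name and parameters are assumptions; consult the SDK reference.
results = store.search(
    query="how much is product A",
    top_k=3,
    filter={"type": "pricing"}
)
for doc in results:
    print(doc["text"])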
3. Memory Systems¶
Agent Memory:
├─> Short-term memory (conversation)
├─> Long-term memory (knowledge)
├─> Working memory (context)
└─> Episodic memory (history)
Storage:
├─> In-memory cache
├─> Database persistence
├─> Vector embeddings
└─> File storage
4. Tool Calling¶
Function Execution:
├─> API calls
├─> Database queries
├─> File operations
└─> External services
Examples:
├─> Search the web
├─> Send emails
├─> Update databases
└─> Generate images
Quick Start¶
Create Simple Agent¶
# agent.py
from digitalocean_agent import Agent, Tool

# Define tools
def search_database(query: str) -> str:
    """Search the database for information"""
    # Your database search logic
    return f"Results for: {query}"

def send_email(to: str, subject: str, body: str) -> str:
    """Send an email"""
    # Your email sending logic
    return f"Email sent to {to}"

# Create agent
agent = Agent(
    name="customer-support",
    model="gpt-4",
    tools=[
        Tool(name="search_database", func=search_database),
        Tool(name="send_email", func=send_email)
    ],
    system_prompt="""You are a helpful customer support agent.
    Use the available tools to help customers with their questions."""
)

# Run agent
response = agent.run("Find my order status for order #12345")
print(response)
Deploy Agent¶
# Install CLI
npm install -g @digitalocean/agent-cli
# Login
do-agent login
# Deploy
do-agent deploy agent.py
# Test
do-agent invoke customer-support --input "Hello!"
Agent Patterns¶
1. RAG (Retrieval Augmented Generation)¶
from digitalocean_agent import Agent, VectorStore

# Create vector store
vector_store = VectorStore(name="docs")

# Add documents
vector_store.add_documents([
    {"text": "Product A costs $99", "metadata": {"type": "pricing"}},
    {"text": "Product B costs $149", "metadata": {"type": "pricing"}}
])

# Create RAG agent
agent = Agent(
    name="rag-agent",
    model="gpt-4",
    vector_store=vector_store,
    system_prompt="Answer questions using the provided documents."
)

# Query
response = agent.run("How much does Product A cost?")
2. Multi-Agent System¶
from digitalocean_agent import Agent, Orchestrator

# Create specialized agents
researcher = Agent(
    name="researcher",
    model="gpt-4",
    system_prompt="Research and gather information."
)

writer = Agent(
    name="writer",
    model="gpt-4",
    system_prompt="Write clear and engaging content."
)

editor = Agent(
    name="editor",
    model="gpt-4",
    system_prompt="Edit and improve content."
)

# Orchestrate agents
orchestrator = Orchestrator(agents=[researcher, writer, editor])

# Run workflow
result = orchestrator.run(
    task="Write a blog post about AI",
    workflow=[
        ("researcher", "Research AI trends"),
        ("writer", "Write blog post"),
        ("editor", "Edit and finalize")
    ]
)
3. Conversational Agent¶
from digitalocean_agent import Agent, Memory

# Create agent with memory
agent = Agent(
    name="chatbot",
    model="gpt-4",
    memory=Memory(type="conversation", max_tokens=4000),
    system_prompt="You are a friendly chatbot."
)

# Multi-turn conversation
session_id = "user-123"

response1 = agent.run("My name is Alice", session_id=session_id)
# "Nice to meet you, Alice!"

response2 = agent.run("What's my name?", session_id=session_id)
# "Your name is Alice!"
Pricing¶
Agent Execution¶
Compute:
├─> $0.0000185 per GB-second
└─> Similar to Functions pricing
LLM Costs:
├─> Pass-through pricing
├─> OpenAI: $0.01-0.06 per 1K tokens
├─> Anthropic: $0.008-0.024 per 1K tokens
└─> Track usage per agent
Vector Database:
├─> $0.10 per GB stored
├─> $0.02 per million queries
└─> Included in agent platform
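A back-of-the-envelope worked example combining the three line items above. All workload numbers are illustrative assumptions, and LLM token prices vary by model and provider.

# Rough monthly cost estimate for a hypothetical support agent.
requests_per_month = 100_000
seconds_per_request = 2          # average agent execution time
memory_gb = 0.5                  # memory allocated to the agent

compute_cost = requests_per_month * seconds_per_request * memory_gb * 0.0000185

tokens_per_request = 1_500       # prompt + completion
llm_price_per_1k = 0.03          # assumed blended rate; see provider pricing
llm_cost = requests_per_month * tokens_per_request / 1_000 * llm_price_per_1k

vector_gb_stored = 2
vector_queries = requests_per_month
vector_cost = vector_gb_stored * 0.10 + vector_queries / 1_000_000 * 0.02

print(f"Compute: ${compute_cost:,.2f}")   # ~$1.85
print(f"LLM:     ${llm_cost:,.2f}")       # ~$4,500.00
print(f"Vectors: ${vector_cost:,.2f}")    # ~$0.20

For most workloads the pass-through LLM cost dominates, which is why the optimization strategies below focus on model choice, prompts, and token usage.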
Cost Optimization¶
Strategies:
├─> Use smaller models when possible
├─> Implement caching
├─> Optimize prompts
├─> Batch requests
└─> Monitor token usage
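Caching is the easiest win for repeated questions. A minimal sketch of response caching in front of agent.run, assuming responses are safe to reuse for identical inputs:

from functools import lru_cache
from digitalocean_agent import Agent

agent = Agent(name="faq-agent", model="gpt-4")

@lru_cache(maxsize=1024)
def cached_answer(question: str):
    """Only call the model the first time a given question is seen."""
    return agent.run(question)

cached_answer("What is your refund policy?")  # hits the model
cached_answer("What is your refund policy?")  # served from cache, zero tokens

lru_cache only helps for exact repeats; for paraphrased questions, a semantic cache keyed on embeddings is the natural next step.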
Best Practices¶
1. Prompt Engineering¶
Effective Prompts:
├─> Be specific and clear
├─> Provide examples
├─> Use system prompts
├─> Include constraints
└─> Test iteratively
Example:
system_prompt = """
You are a customer support agent for TechCorp.
- Be polite and professional
- Use the search_database tool to find information
- If you can't help, escalate to human support
- Keep responses concise (under 100 words)
"""
2. Error Handling¶
from digitalocean_agent import Agent, AgentError

agent = Agent(name="robust-agent", model="gpt-4")

user_input = "Where is my order?"  # whatever the user submitted

try:
    response = agent.run(user_input, timeout=30)
except AgentError as e:
    print(f"Agent error: {e}")
    # Fallback logic
except Exception as e:
    print(f"Unexpected error: {e}")
    # Error handling
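Transient provider errors are common, so a simple retry with backoff before falling back is often worthwhile. A sketch, assuming AgentError covers retryable failures:

import time
from digitalocean_agent import Agent, AgentError

agent = Agent(name="robust-agent", model="gpt-4")

def run_with_retries(prompt: str, attempts: int = 3):
    """Retry transient failures with exponential backoff, then give up."""
    for attempt in range(attempts):
        try:
            return agent.run(prompt, timeout=30)
        except AgentError:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # wait 1s, then 2s, before retrying

response = run_with_retries("Find my order status for order #12345")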
3. Monitoring¶
Track Metrics:
├─> Response time
├─> Token usage
├─> Error rate
├─> User satisfaction
└─> Cost per interaction
Tools:
├─> Built-in analytics
├─> Custom logging
├─> External monitoring
└─> A/B testing
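Custom logging can be as simple as wrapping agent.run and recording latency and token usage. A minimal sketch; the usage field mirrors response.usage in the Functions example later on this page and is an assumption for the Python SDK.

import logging
import time
from digitalocean_agent import Agent

logging.basicConfig(level=logging.INFO)
agent = Agent(name="monitored-agent", model="gpt-4")

def run_and_log(prompt: str):
    """Log latency and token usage for every agent call."""
    start = time.perf_counter()
    response = agent.run(prompt)
    elapsed = time.perf_counter() - start
    # response.usage is assumed to carry token counts, as in the Functions example.
    usage = getattr(response, "usage", None)
    logging.info("agent call took %.2fs, usage=%s", elapsed, usage)
    return response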
4. Security¶
Best Practices:
├─> Validate user input
├─> Sanitize outputs
├─> Use rate limiting
├─> Implement authentication
└─> Audit agent actions
Example:
├─> Input validation
├─> Output filtering
├─> Access control
└─> Logging
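A minimal sketch of input validation and per-user rate limiting in front of the agent; the limits and validation rules are illustrative placeholders for your own policy.

import time
from collections import defaultdict, deque
from digitalocean_agent import Agent

agent = Agent(name="secure-agent", model="gpt-4")

MAX_INPUT_CHARS = 2000
MAX_REQUESTS_PER_MINUTE = 20
_request_log = defaultdict(deque)  # user_id -> recent request timestamps

def guarded_run(user_id: str, user_input: str):
    """Validate input and enforce a simple sliding-window rate limit."""
    if not user_input or len(user_input) > MAX_INPUT_CHARS:
        raise ValueError("Input is empty or too long")

    now = time.time()
    window = _request_log[user_id]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        raise RuntimeError("Rate limit exceeded")
    window.append(now)

    return agent.run(user_input, session_id=user_id)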
Integration Examples¶
With App Platform¶
# app.yaml
name: ai-app
services:
  - name: web
    github:
      repo: username/ai-app
    envs:
      - key: AGENT_API_KEY
        scope: RUN_TIME
        value: ${AGENT_API_KEY}
  - name: agent
    type: agent
    config:
      model: gpt-4
      tools:
        - search_database
        - send_email
With Functions¶
// functions/chat.js
const { Agent } = require('@digitalocean/agent-sdk');

const agent = new Agent({
  name: 'chat-agent',
  model: 'gpt-4'
});

async function main(args) {
  const { message, sessionId } = args;

  const response = await agent.run(message, {
    sessionId,
    stream: false
  });

  return {
    body: {
      response: response.text,
      tokens: response.usage
    }
  };
}

// Functions invoke the exported main handler
exports.main = main;
Troubleshooting¶
Agent Not Responding¶
# Check agent status
do-agent status customer-support
# View logs
do-agent logs customer-support --tail 100
# Test locally
do-agent test agent.py --input "test message"
High Costs¶
Optimization:
├─> Use GPT-3.5 instead of GPT-4
├─> Implement response caching
├─> Reduce context window
├─> Optimize prompts
└─> Set token limits
Slow Responses¶
Improvements:
├─> Use streaming responses
├─> Reduce tool calls
├─> Optimize vector search
├─> Cache common queries
└─> Use faster models
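Streaming is usually the biggest perceived-latency win because users see tokens as they are generated. A sketch, assuming the Python SDK accepts a stream flag like the stream option in the Functions example above and yields chunks when it is enabled:

from digitalocean_agent import Agent

agent = Agent(name="chat-agent", model="gpt-4")

# stream=True is an assumption, mirroring the stream option in the
# Functions example; the shape of the streamed chunks may differ.
for chunk in agent.run("Summarize my open support tickets", stream=True):
    print(chunk, end="", flush=True)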
Documentation Structure¶
- Agent Platform Overview - This page
- Building Agents - Development guide
- Agent Patterns - Common architectures
- Production Deployment - Best practices