Introduction
CrewAI is an open-source Python framework for orchestrating autonomous AI agent crews, built for the complex workflows of 2026. Unlike LangChain, which centers on linear chains, CrewAI is designed for multi-agent collaboration: each agent has a role, tools, and memory, like a team of specialists in a company. Imagine a researcher scouring the web, an analyst synthesizing data, and a writer drafting the report, all orchestrated automatically.
Why does it matter in 2026? With the rise of LLMs like GPT-5 and Claude 4, isolated agents are no longer enough; crews handle real-world tasks such as competitive analysis, code generation, and DevOps automation. This expert tutorial guides you from A to Z: installation, basic agents, hierarchical tasks, custom tools, persistent memory, and human input. By the end, you'll have a guide worth bookmarking for your production projects. Ready to scale your AI?
Prerequisites
- Python 3.11+ installed
- OpenAI API key (or Grok, Anthropic) exported in your shell:
export OPENAI_API_KEY=sk-...
- CrewAI installed (full commands in the next section):
pip install 'crewai[tools]'==0.51.1
- Advanced knowledge of Python, LLMs, and async/await
- IDE like VS Code with Pylance for type checking
- Git for versioning your crews
Installation and Environment Setup
python -m venv crewai-env
source crewai-env/bin/activate # Linux/Mac
# crewai-env\Scripts\activate # Windows
pip install --upgrade pip
pip install 'crewai[tools]'==0.51.1
pip install crewai-tools==0.8.1  # Serper, YFinance, and other built-in tools
pip install langchain-openai==0.2.2
pip install duckduckgo-search
mkdir crewai-project
cd crewai-project
touch agents.py tasks.py crew.py tools.py main.py
# In .env
cat > .env << EOF
OPENAI_API_KEY=sk-your-key-here
SERPAPI_API_KEY=your-serpapi-key # Optional for web search
EOF
This script creates an isolated virtual environment, installs CrewAI with its built-in tools (Serper for search, YFinance for finance data) plus LangChain-OpenAI, and scaffolds the project files. Tools like DuckDuckGo are free to use. Copy-paste for a two-minute setup; avoid global installs to prevent version conflicts in production.
Core Concepts: Agents, Tasks, and Crews
An agent is an LLM with a role, backstory, tools, and its own model, like an expert employee. A task assigns a goal to an agent, with context, an expected output format, and dependencies. A crew orchestrates everything: delegation between agents and verbose logging for debugging. CrewAI ships two processes: Process.sequential runs tasks in order, while Process.hierarchical adds a manager (via manager_llm or a manager agent) that decomposes and delegates work dynamically, like a CEO supervising the team. Analogy: a crew is an agile startup where the manager reprioritizes tasks in real time.
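Before the full example in the next section, here is the smallest possible crew, one agent and one task, as a minimal sketch (the role and task text are illustrative; it assumes OPENAI_API_KEY is set so the agent falls back to the default OpenAI model):
from crewai import Agent, Task, Crew, Process
# Minimal crew: one agent, one task, sequential process.
greeter = Agent(
    role="Greeter",
    goal="Write a one-line greeting for {name}",
    backstory="You write short, friendly greetings.",
)
hello = Task(
    description="Write a one-line greeting for {name}.",
    expected_output="A single friendly sentence.",
    agent=greeter,
)
mini_crew = Crew(agents=[greeter], tasks=[hello], process=Process.sequential)
print(mini_crew.kickoff(inputs={"name": "CrewAI"}))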
Basic Agents and Tasks
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool, tool
from langchain_openai import ChatOpenAI
import os
llm = ChatOpenAI(model="gpt-4o", temperature=0)
researcher = Agent(
role="Senior Market Researcher",
goal="Identify key trends on {topic}",
backstory="""You are a seasoned market analyst with 20 years of experience.
You use fresh web data for precise insights.""",
tools=[SerperDevTool()],
llm=llm,
verbose=True
)
analyst = Agent(
role="Data Analyst",
goal="Analyze research data for actionable recommendations",
backstory="Stats and visualization expert, you turn raw data into business insights.",
llm=llm,
verbose=True
)
task1 = Task(
description="Research the 5 latest trends on {topic} via web search.",
expected_output="Structured report with sources and bullet points.",
agent=researcher
)
task2 = Task(
description="Analyze the research report and propose 3 priority recommendations.",
expected_output="JSON synthesis: {{'recommendations': [...], 'priorities': [...]}}",
context=[task1],
agent=analyst
)
Defines two collaborative agents, one equipped with a web-search tool (SerperDevTool). Tasks chain via context=[task1]. Use verbose=True to trace LLM calls. Pitfall: forgetting to import or register tools on the agent; test with topic='AI in 2026'.
Create and Run a Simple Crew
from agents import researcher, analyst, task1, task2
crew = Crew(
agents=[researcher, analyst],
tasks=[task1, task2],
process=Process.sequential, # Or hierarchical
verbose=True,  # Boolean in current CrewAI releases
memory=True
)
result = crew.kickoff(inputs={'topic': 'Advanced CrewAI 2026'})
print(result)
Assembles the crew sequentially (task1 → task2). memory=True enables shared short-term memory. kickoff(inputs=...) launches the run; the output is a raw string. In production, wrap it in async for scalability, as sketched below.
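A minimal sketch of that async pattern, assuming crewai's kickoff_async coroutine (present in recent releases) and the crew defined above:
import asyncio

async def main():
    # Non-blocking kickoff, so several crews can run concurrently.
    result = await crew.kickoff_async(inputs={'topic': 'Advanced CrewAI 2026'})
    print(result)

asyncio.run(main())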
Intermediate Level: Custom Tools and Memory
Custom tools are plain Python functions exposed to the LLM, perfect for private APIs. Memory (short and long-term) persists context across runs, which is crucial for long conversations. In 2026, integrate vector stores like FAISS for advanced RAG, as sketched below.
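A hedged sketch of that FAISS-backed RAG pattern (assumes pip install faiss-cpu and the langchain-community package; the indexed texts are placeholders):
from crewai_tools import tool
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Tiny in-memory index; swap the sample texts for your own corpus.
docs = [
    "CrewAI supports sequential and hierarchical processes.",
    "Long-term memory persists context across runs.",
]
index = FAISS.from_texts(docs, OpenAIEmbeddings())

@tool("Knowledge Base Search")
def kb_search(query: str) -> str:
    """Returns the most relevant snippets from the local FAISS index."""
    hits = index.similarity_search(query, k=2)
    return "\n".join(d.page_content for d in hits)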
Custom Tool and Crew with Memory
from crewai_tools import tool
from langchain_community.tools import DuckDuckGoSearchRun
@tool("News Summarizer")
def summarize_news(query: str) -> str:
"""Summarizes the latest news on a topic."""
search = DuckDuckGoSearchRun()
results = search.run(query)
# Simulated summary (replace with LLM call)
return f"News summary {query}: {results[:200]}... Sources cited."
# In main.py or crew.py
# Add to researcher.tools = [SerperDevTool(), summarize_news]
crew = Crew(
    agents=[researcher, analyst],
    tasks=[task1, task2],
    process=Process.sequential,
    memory=True,  # Enables CrewAI's built-in short-term and long-term stores
    verbose=True
)
result = crew.kickoff(inputs={'topic': 'CrewAI in finance 2026'})
Creates a @tool-decorated function encapsulating DuckDuckGo; add it to the agent's tools list. memory=True turns on the built-in short-term and long-term stores, which persist embeddings for recall. Pitfall: long-term memory needs an embedder configured and its dependencies installed, as sketched below.
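For that embedder configuration, CrewAI accepts an embedder dict on the crew; a sketch assuming the OpenAI provider (the model name is an example):
crew = Crew(
    agents=[researcher, analyst],
    tasks=[task1, task2],
    process=Process.sequential,
    memory=True,
    # Embedder used by the short/long-term memory stores.
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
)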
Hierarchical Process with Manager
from crewai import Agent, Crew, Process, Task
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o")
manager = Agent(
role="Project Manager",
goal="Coordinate research and analysis on {topic}",
backstory="Senior manager who delegates and validates outputs.",
llm=llm,
verbose=True
)
researcher = Agent( # As before
role="Researcher",
goal="Research {topic}",
llm=llm,
verbose=True
)
writer = Agent(
role="Writer",
goal="Draft final report",
llm=llm,
verbose=True
)
task_research = Task(
    description="Full research on {topic}",
    expected_output="Detailed research notes with sources.",
    agent=researcher
)
task_write = Task(
    description="Draft a report based on the research",
    expected_output="Polished final report in markdown.",
    agent=writer,
    context=[task_research]
)
crew = Crew(
    agents=[researcher, writer],  # Workers only; the manager is passed separately
    tasks=[task_research, task_write],
    process=Process.hierarchical,  # Manager supervises and delegates
    manager_agent=manager,  # Or manager_llm=llm to auto-create a manager
    verbose=True
)
result = crew.kickoff(inputs={'topic': 'AI Trends 2026'})
print(result)
Hierarchical mode: the manager dynamically decomposes and delegates work, with manager_agent (or manager_llm) driving those decisions. Ideal for ambiguous tasks. Advantage: adaptive; pitfall: roughly double the LLM cost if poorly calibrated.
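To keep those costs visible, a small check after kickoff; usage_metrics is populated by recent crewai releases:
# Aggregate token counts for the whole run; compare a sequential
# baseline against hierarchical before committing to it.
print(crew.usage_metrics)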
Advanced: Human Input and Delegation
Human-in-the-loop feedback is built into tasks: flag a task with human_input=True and the agent pauses for human approval before finalizing its output. Delegation between agents happens automatically via their goals. In 2026 production, chain with Streamlit for interactive UIs.
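A minimal sketch of that pause-for-approval flag, reusing the analyst agent from earlier (the task text is illustrative):
from crewai import Task

review_task = Task(
    description="Draft the final report on {topic}.",
    expected_output="Markdown report approved by a human reviewer.",
    agent=analyst,
    human_input=True,  # Pauses for human feedback before finalizing
)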
Crew with Human Input and Main Execution
import os
from dotenv import load_dotenv
from crewai import Crew
from agents import researcher, analyst, task1, task2 # Imports
load_dotenv()
crew = Crew(
agents=[researcher, analyst],
tasks=[task1, task2],
verbose=True,
share_crew=True  # Opt-in telemetry: shares run data with CrewAI to improve the framework
)
# Run with dynamic input
inputs = {'topic': input("Enter the topic: ") or 'CrewAI expert'}
result = crew.kickoff(inputs=inputs)
print("Final output:", result)
# For human approval, set human_input=True on the relevant Task
Complete main script: it loads .env, takes a topic from the CLI, and executes the crew. For interactive pauses, flag a task with human_input=True (see the sketch in the previous section). Ready to run out of the box.
Best Practices
- Strict prompt engineering: backstory > 200 words for clear roles; use JSON outputs for parsability.
- Rate limiting & caching: pair crewai-tools with a cache such as Redis to stay under LLM quotas; see the sketch after this list.
- Monitoring: verbose=True plus LangSmith tracing; log crew.usage_metrics after each run.
- Scalability: async crews with crew.kickoff_async(); deploy on Ray for multi-crew setups.
- Security: validate tool inputs; use Guardrails for sensitive outputs.
Common Errors to Avoid
- No context chaining: forgetting context=[prev_task] loses information between tasks; always link dependent tasks.
- Unregistered tools: the LLM ignores functions that aren't decorated with @tool or listed in the agent's tools; test each tool in isolation.
- Memory without embeddings: long-term memory crashes without an embedder configured and its dependencies installed (e.g., pip install faiss-cpu).
- Wrong process: sequential for linear pipelines, hierarchical for complex ones; benchmark the cost difference.
Next Steps
Dive deeper with advanced AI training: CrewAI + AutoGen hybrids. Official docs: the CrewAI GitHub repository. Production examples: integrate with FastAPI to expose crews as APIs. Join the CrewAI Discord for real-world cases. Next up: hierarchical multi-crews for business simulations.