Create your first AI agent from scratch using Python and LangChain. Learn to build a simple autonomous agent that can reason, use tools, and accomplish tasks independently.
Now that your development environment is configured, it's time to build your first AI agent. This chapter walks through creating a simple but functional agent that demonstrates core concepts: reasoning, tool use, and autonomous task execution.
Understanding Our First Agent
We'll build a research assistant agent that can search the web, process information, and answer questions with current data. This agent will use the ReAct pattern, interleaving reasoning and actions, and demonstrate how agents think through problems step-by-step.
The agent will have three capabilities: searching the web for information, performing mathematical calculations, and maintaining conversation context. While simple, this agent illustrates fundamental patterns applicable to more complex systems.
Project Setup
Create a new project directory and activate your virtual environment. Install required packages:
```shell
pip install langchain langchain-openai langchain-community python-dotenv duckduckgo-search
```

Create a main.py file where we'll build our agent. Import necessary libraries:
```python
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
from langchain.prompts import PromptTemplate
from langchain_community.tools import DuckDuckGoSearchRun
from dotenv import load_dotenv
import os

load_dotenv()
```

Initializing the Language Model
The language model is your agent's brain. Initialize GPT-4 or GPT-3.5:
```python
llm = ChatOpenAI(
    model="gpt-4",
    temperature=0,  # Lower temperature for more focused responses
    api_key=os.getenv("OPENAI_API_KEY")
)
```

Temperature controls randomness—0 makes outputs deterministic and focused, while higher values (0.7-1.0) increase creativity and variety. For agents making decisions and using tools, lower temperatures reduce errors.
Creating Tools
Tools give agents capabilities beyond language processing. Let's create a web search tool:
```python
search = DuckDuckGoSearchRun()

search_tool = Tool(
    name="Web Search",
    func=search.run,
    description="Useful for searching the internet for current information. Input should be a search query."
)
```

The description is crucial—it tells the agent when and how to use the tool. Clear, specific descriptions improve agent decision-making.
Create a calculator tool:
```python
def calculate(expression):
    """Safely evaluate mathematical expressions."""
    try:
        # Only allow safe mathematical operations
        result = eval(expression, {"__builtins__": {}}, {
            "abs": abs, "round": round, "min": min,
            "max": max, "sum": sum, "pow": pow
        })
        return str(result)
    except Exception as e:
        return f"Error calculating: {str(e)}"

calc_tool = Tool(
    name="Calculator",
    func=calculate,
    description="Useful for performing mathematical calculations. Input should be a mathematical expression like '25 * 4' or 'pow(2, 10)'."
)
```

Combine tools into a list:
```python
tools = [search_tool, calc_tool]
```

Defining the Agent Prompt
The prompt instructs the agent on how to behave and use tools. ReAct agents follow a specific format:
```python
prompt = PromptTemplate.from_template("""Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Question: {input}
Thought: {agent_scratchpad}""")
```

This prompt template structures the agent's thinking process. The agent fills in Thoughts, Actions, and Observations iteratively until reaching a Final Answer.
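Under the hood, a prompt template is essentially placeholder substitution: the executor fills in the tool list, your question, and the running scratchpad on every turn. A stdlib-only sketch of the idea (an illustration, not LangChain's actual implementation):

```python
# Illustrative placeholder substitution, mimicking what the template
# does when the agent fills in its variables each iteration.
template = (
    "Answer the following questions as best you can. "
    "You have access to the following tools:\n{tools}\n"
    "Question: {input}\nThought: {agent_scratchpad}"
)

filled = template.format(
    tools="Web Search, Calculator",
    input="What is 25 * 4?",
    agent_scratchpad=""
)
print(filled)
```

Each iteration, the scratchpad grows with the previous Thought/Action/Observation lines, so the model always sees its own reasoning history.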
Creating the Agent
Now create the agent combining the LLM, tools, and prompt:
```python
agent = create_react_agent(
    llm=llm,
    tools=tools,
    prompt=prompt
)
```

Wrap the agent in an AgentExecutor, which handles the execution loop:
```python
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,               # Print reasoning steps
    max_iterations=5,           # Prevent infinite loops
    handle_parsing_errors=True  # Gracefully handle errors
)
```

The verbose flag shows the agent's thinking process—invaluable for debugging and understanding agent behavior.
Running the Agent
Invoke the agent with a question:
```python
response = agent_executor.invoke({
    "input": "What is the current population of Tokyo, and what is that number multiplied by 2?"
})

print("\nFinal Answer:", response["output"])
```

Run this code with python main.py. You'll see the agent's reasoning process:
```
> Entering new AgentExecutor chain...
Thought: I need to find the current population of Tokyo first.
Action: Web Search
Action Input: current population of Tokyo
Observation: [Search results about Tokyo's population]
Thought: Now I know Tokyo's population is approximately 14 million. I need to multiply this by 2.
Action: Calculator
Action Input: 14000000 * 2
Observation: 28000000
Thought: I now know the final answer
Final Answer: Tokyo's current population is approximately 14 million people. Multiplied by 2, that equals 28 million.
```

The agent independently determined it needed to search for information, performed the search, extracted relevant data, calculated the multiplication, and formulated a complete answer—all without explicit step-by-step instructions.
Understanding Agent Behavior
Let's analyze what happened. First, the agent received a question requiring both external information and calculation. It reasoned that finding Tokyo's population was necessary before calculating. It selected the appropriate tool (Web Search) and formulated a search query. After receiving search results, it extracted the relevant information. It then recognized the need for calculation and selected the Calculator tool. Finally, it combined information from both tools into a coherent final answer.
This autonomous decision-making and multi-step reasoning is what makes AI agents powerful compared to simple chatbots.
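Conceptually, the loop the executor runs is small. The sketch below illustrates the ReAct control flow in plain Python; it is a simplified illustration, not LangChain's implementation, and llm_step is a stand-in for a model call that returns either a tool action or a final answer:

```python
def react_loop(question, llm_step, tools, max_iterations=5):
    """Minimal sketch of the Thought/Action/Observation cycle."""
    scratchpad = []  # accumulated (action, observation) pairs
    for _ in range(max_iterations):
        step = llm_step(question, scratchpad)  # model decides the next move
        if step["type"] == "final":
            return step["answer"]
        # Otherwise run the chosen tool and record what came back
        observation = tools[step["tool"]](step["input"])
        scratchpad.append((step, observation))
    return "Stopped: reached max_iterations without a final answer"
```

Everything else—the prompt format, output parsing, error handling—exists to make a language model reliably play the role of llm_step.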
Experimenting with Your Agent
Try different questions to explore agent capabilities:
```python
# Example 1: Pure research question
response = agent_executor.invoke({
    "input": "Who won the most recent FIFA World Cup and what is their population?"
})

# Example 2: Multi-step reasoning
response = agent_executor.invoke({
    "input": "What is the average temperature in Paris in July, and what is that in Fahrenheit?"
})

# Example 3: Combining multiple searches
response = agent_executor.invoke({
    "input": "Compare the GDP of the United States and China"
})
```

Observe how the agent adapts its approach for different question types.
Common Issues and Debugging
If your agent fails to answer questions, check these common issues: ensure API keys are correctly set in your .env file, verify internet connectivity for web searches, confirm sufficient API credits (OpenAI charges per token), and review the verbose output to see where reasoning breaks down.
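For the first of these, it helps to fail fast with a clear message rather than let an opaque authentication error surface mid-run. A minimal stdlib sketch (the chapter's setup loads the key via python-dotenv; plain os.environ is used here so the check works either way):

```python
import os

def api_key_present(var="OPENAI_API_KEY"):
    """Return True if the environment variable is set and non-empty."""
    return bool(os.environ.get(var, "").strip())

if not api_key_present():
    print("OPENAI_API_KEY is not set; check your .env file before running the agent.")
```

Running this at the top of main.py turns a confusing API failure into an immediate, readable diagnosis.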
If the agent uses tools incorrectly, improve tool descriptions to be more specific. If the agent loops indefinitely, reduce max_iterations or improve the prompt to guide better reasoning. If parsing errors occur, enable handle_parsing_errors=True to catch and recover from formatting mistakes.
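Another practical technique is to call each tool function directly, outside the agent loop, so you can separate tool bugs from reasoning failures. For example, exercising the calculate function defined earlier (repeated here so the snippet is self-contained):

```python
def calculate(expression):
    """Safely evaluate mathematical expressions."""
    try:
        # Only allow safe mathematical operations
        result = eval(expression, {"__builtins__": {}}, {
            "abs": abs, "round": round, "min": min,
            "max": max, "sum": sum, "pow": pow
        })
        return str(result)
    except Exception as e:
        return f"Error calculating: {str(e)}"

# Direct calls reveal how the tool responds to good and bad input
print(calculate("25 * 4"))            # 100
print(calculate("pow(2, 10)"))        # 1024
print(calculate("__import__('os')"))  # blocked: returns an error string
```

If the tool behaves correctly in isolation but fails inside the agent, the problem is in the prompt or the agent's formatting of Action Input, not the tool itself.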
Monitoring Costs
AI agents can consume many API tokens through multiple reasoning steps and tool calls. Monitor usage in your OpenAI dashboard. For development, consider using GPT-3.5-turbo instead of GPT-4—it's significantly cheaper though less capable. Implement token counting to track consumption:
```python
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    response = agent_executor.invoke({"input": "Your question"})
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Total Cost: ${cb.total_cost:.4f}")
```

Extending Your Agent
Add more tools to expand capabilities. Create a custom tool for checking weather:
```python
def get_weather(location):
    # In a real implementation, call a weather API
    return f"Weather data for {location} would go here"

weather_tool = Tool(
    name="Weather",
    func=get_weather,
    description="Get weather information for a specific location. Input should be a city name."
)
```

Add it to your tools list and rebuild the agent. The agent will now autonomously use weather information when relevant to questions.
Saving and Loading Conversations
For agents that need conversation history, implement memory:
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True
)
```

Now the agent remembers previous exchanges within the session.
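Conceptually, a conversation buffer is just a growing list of messages replayed as context on each new turn. A stdlib sketch of the idea (an illustration of the pattern, not the library's implementation):

```python
class BufferMemory:
    """Toy conversation buffer: stores turns and replays them as context."""

    def __init__(self):
        self.messages = []

    def save_turn(self, human, ai):
        # Record both sides of one exchange
        self.messages.append(("human", human))
        self.messages.append(("ai", ai))

    def as_context(self):
        # Flatten history into text that can be prepended to the next prompt
        return "\n".join(f"{role}: {text}" for role, text in self.messages)
```

Because the whole history is replayed every turn, buffer memory grows token usage linearly with conversation length—one reason later chapters look at more selective memory strategies.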
Best Practices for First Agents
Start simple with just 2-3 tools. Add complexity gradually as you understand agent behavior. Always use verbose=True during development to see reasoning. Set reasonable max_iterations (3-10) to prevent runaway costs. Test with various question types to understand capabilities and limitations. Monitor token usage and costs closely. Read the verbose output carefully—it reveals how agents think and where they struggle.
What You've Learned
You've built a functional AI agent that autonomously reasons, uses multiple tools, handles multi-step tasks, and provides thoughtful answers. This foundation applies to more sophisticated agents—the patterns remain consistent even as complexity increases.
The key insight is that agents think through problems systematically, selecting and using tools based on the current situation. This autonomous capability distinguishes agents from traditional AI applications.
In the next chapter, we'll explore giving agents memory and context, enabling them to learn from interactions and provide increasingly personalized assistance over time. Your journey into AI agent development has begun—each chapter builds upon this foundation, expanding your capabilities and understanding.