LangGraph: The Future of Production-Ready AI Agents

Editor’s note: Eden Marco is a speaker for ODSC Europe this September 5th-6th. Be sure to check out his talk, “Develop LLM Powered Applications with LangChain and LangGraph,” there!

In recent years, we’ve seen a surge of interest in autonomous AI agents. Projects like BabyAGI, Auto-GPT, and GPT Engineer have captured the imagination of developers and researchers alike. However, when it comes to real-world applications, these fully autonomous agents often fall short.

Don’t get me wrong: these projects play an important role in our industry and push innovation forward. The hype around them, however, sets unrealistic expectations.

The primary issue with autonomous agents is that they give too much freedom to Large Language Models. These agents allow LLMs to decide what tasks to perform, in what order, and even generate and execute code. While this approach is valuable for pushing the boundaries of AI capabilities, it’s not suitable for production environments where reliability and control are paramount.

This is where LangGraph comes in. Developed by the team behind LangChain, LangGraph represents a paradigm shift in how we implement AI agents. Instead of giving LLMs full autonomy, LangGraph allows developers to scope and narrow down the freedom of the LLM within a controlled flow.

Understanding LangGraph

LangGraph is a framework that enables developers to create state machines where LLMs act as reasoning engines. This approach strikes a balance between leveraging the power of LLMs and maintaining developer control over the application flow.

With LangGraph, you can:

  1. Define a clear flow of execution
  2. Create a state machine with full developer control
  3. Use LLMs to make decisions within the defined flow
  4. Implement loops and complex agent architectures easily

This approach is more suitable for production use cases, as it combines the reasoning capabilities of LLMs with the reliability of traditional software engineering practices, a discipline often referred to as flow engineering.

Creating a “Hello World” LangGraph Search Agent Example

Let’s create a simple “Hello World” example using LangGraph to demonstrate its basic concepts.

We will build a search agent that can search for and report weather conditions. This example will demonstrate how to create a stateful agent that can handle multiple interactions, use external tools, and run in a loop.

Step 1: Installation and Setup

First, let’s install the necessary packages:

pip install -U langgraph langchain-mistralai langchain-community python-dotenv

Create a `.env` file in your project directory and add your API keys:

MISTRAL_API_KEY=your_mistral_api_key_here
TAVILY_API_KEY=your_tavily_api_key_here

Step 2: Import Required Libraries and Load Environment Variables

# Load API keys from .env before any clients are created
from dotenv import load_dotenv
load_dotenv()

from typing import Literal
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_mistralai import ChatMistralAI
from langgraph.graph import END, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode

Step 3: Set Up the Model and Tool Node

# Tavily provides web search; ToolNode executes any tool calls the model makes
tools = [TavilySearchResults()]
tool_node = ToolNode(tools)

# bind_tools() lets the model emit structured calls to our tools
model = ChatMistralAI(model="mistral-large-latest", temperature=0).bind_tools(tools)
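Nothing here is Mistral-specific, by the way. Any LangChain chat model that supports tool binding can be dropped in. As a hypothetical variant (assuming langchain-openai is installed and an OpenAI key is configured, neither of which is part of this walkthrough):

# Hypothetical alternative, not part of the original setup: any
# tool-calling chat model can replace ChatMistralAI.
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)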

Step 4: Define Control Flow Functions

def should_continue(state: MessagesState) -> Literal["tools", END]:
    """Route to the tool node if the model requested a tool call, otherwise end."""
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state: MessagesState):
    """Call the LLM on the conversation so far and append its response."""
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

Step 5: Create and Configure the Graph

workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)
workflow.set_entry_point("agent")
# After each agent turn, should_continue routes to "tools" or ends the run
workflow.add_conditional_edges("agent", should_continue)
# Tool output always flows back to the agent, closing the loop
workflow.add_edge("tools", "agent")
graph = workflow.compile()
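
One optional extension worth knowing about, since the stated goal was an agent that can handle multiple interactions: LangGraph can persist state between runs via a checkpointer. Here is a minimal sketch, assuming the workflow built above and LangGraph’s bundled in-memory checkpointer:

from langgraph.checkpoint.memory import MemorySaver

# Compile with an in-memory checkpointer so state survives across invocations
memory_graph = workflow.compile(checkpointer=MemorySaver())

# Runs that share a thread_id share conversation history
config = {"configurable": {"thread_id": "demo-thread"}}
memory_graph.invoke({"messages": [("user", "What's the weather in London?")]}, config)
memory_graph.invoke({"messages": [("user", "And in Paris?")]}, config)

The second question only makes sense because the checkpointer carries the earlier messages into the new run.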

Step 6: Visualize the Graph (Optional)

If you want to visualize the graph structure, you can use the following code:

graph.get_graph().draw_mermaid_png(output_file_path="graph.png")

This will create a PNG file named “graph.png” in your current directory, showing the structure of your graph.
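If you’d rather not render a PNG (PNG rendering needs an extra rendering step in some environments), printing the Mermaid source works too; you can paste the output into any Mermaid viewer:

print(graph.get_graph().draw_mermaid())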

Step 7: Use the Agent

Now we can use our weather agent:

user_input = "what is the current weather in London? In Celsius"

# stream_mode="values" emits the full state after every step
events = graph.stream({"messages": [("user", user_input)]}, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

How It Works

  1. We initialize the model (ChatMistralAI) and tools (TavilySearchResults for web search).
  2. We create a StateGraph with MessagesState, which manages a list of chat messages.
  3. We define two nodes: “agent” (the LLM) and “tools” (for executing actions).
  4. We set up the graph flow:

– The agent node is the entry point.

– After the agent node, we use a conditional edge to either run tools or end the conversation.

– After running tools, we always return to the agent node.

  5. We compile the graph and optionally visualize its structure.
  6. When we invoke the graph with a user input, LangGraph:

– Adds the input message to the state.

– Passes the state to the agent node.

– Cycles between the agent and tools nodes as needed.

– Returns the final state when the agent doesn’t request any more tool calls.
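
If you only need the final answer rather than the intermediate values, a plain invoke on the same compiled graph returns the final state directly (a small sketch reusing the objects built above):

final_state = graph.invoke({"messages": [("user", user_input)]})
print(final_state["messages"][-1].content)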

This example demonstrates LangGraph’s key features: stateful conversations, tool use, agent loops, and straightforward integration with different LLM providers and external tools.

Conclusion

LangGraph represents an exciting development in the field of AI agents. By providing a framework for creating controlled, production-ready AI applications, it bridges the gap between the potential of LLMs and the requirements of real-world software development.

This blog post has only scratched the surface of what’s possible with LangGraph. If you’re interested in learning more, I’ll be diving deeper into advanced LangGraph techniques in my upcoming session at ODSC. We’ll explore how to implement complex agent architectures and leverage the full power of LLMs within a controlled environment.

Stay tuned for more updates, and I hope to see you at ODSC where we’ll take your LangGraph skills to the next level!

Author Bio:

Eden is a seasoned backend software engineer with deep expertise in generative AI, cloud, and cybersecurity. With years of experience in backend development, he currently works at Google as an LLM Specialist, assisting customers in implementing complex generative AI solutions on GCP using open-source frameworks like LangChain and Google’s generative AI services. Additionally, he is an educator, instructor and creator of best-selling Udemy courses on LangChain, LlamaIndex, LangGraph, and pytest. As an educator at heart, he is passionate about sharing knowledge and helping others learn.

Originally posted on OpenDataScience.com

