AI Agents — A Practical Implementation

ODSC - Open Data Science
4 min read · Jan 9, 2025


Editor’s note: Valentina Alto is a speaker for the upcoming AI Builders Summit! Be sure to check out her week 4 talk, “AI Agents — A Practical Implementation,” to learn more about AI agent implementation!

With the advent of Generative AI and Large Language Models (LLMs), we have witnessed a paradigm shift in application development, paving the way for a new wave of LLM-powered applications. In this new landscape, one of the strongest patterns to have consolidated over the last few months is that of AI agents.

But what defines an AI agent? At their core, agents are AI systems driven by Large Language Models (LLMs) that respond to user queries by engaging with their surrounding ecosystem — within the boundaries we set for them. These boundaries are shaped by the tools we equip agents with, such as enabling web searches or restricting file manipulation within a local system.

The architecture of an AI agent consists of the following essential elements (a minimal code sketch follows the list):

  • An LLM: This serves as the reasoning engine, driving the agent’s ability to analyze and generate responses, as well as deciding when to invoke a specific tool to accomplish the user’s query. It can be seen as the “brain” of the agent, orchestrating the user’s requests and the agent’s backend.
  • A Toolkit: This is the set of resources the agent can use to interact with its environment. For instance, providing access to web search qualifies as a tool for the agent. The way the agent decides which tool to use, and when, is determined by the LLM’s reasoning capabilities, as well as by the planning strategy induced by the system message.
  • System Message and Planning: This predefined instruction shapes the agent’s behavior and thinking process. For example, you can design an agent as a teaching assistant for students with a system message like: “You are a teaching assistant. When given a student’s query, NEVER provide the final answer; instead, offer hints to guide them toward the solution.” The system message can also shape the planning pattern, specifying how the LLM should decide among the available tools.
  • Memory: AI agents remember past interactions and behaviors, storing experiences and reflecting on them to improve future actions. This memory ensures continuity and performance enhancement over time. Agents can be endowed with both short-term memory (mainly limited to the context window of the ongoing session) and long-term memory (the stored record of past interactions).
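
To see how these pieces fit together, here is a minimal, framework-agnostic sketch of an agent object in Python. It is purely illustrative: the llm callable, the tool-selection convention, and the memory structure are assumptions made for the example, not a reference implementation.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Agent:
    system_message: str                       # shapes behavior and planning
    llm: Callable[[List[dict]], str]          # reasoning engine: takes chat messages, returns text
    tools: Dict[str, Callable[[], str]]       # toolkit: tool name -> Python function
    memory: List[dict] = field(default_factory=list)  # short-term memory: the conversation so far

    def run(self, user_input: str) -> str:
        # Assemble the prompt: system message + remembered turns + the new query
        messages = [{"role": "system", "content": self.system_message}]
        messages += self.memory
        messages.append({"role": "user", "content": user_input})

        # The LLM decides whether to answer directly or to use a tool; in this toy
        # convention, it signals a tool call by replying with the tool's name.
        reply = self.llm(messages)
        if reply in self.tools:
            reply = self.tools[reply]()

        # Store the exchange so later turns can build on it
        self.memory.append({"role": "user", "content": user_input})
        self.memory.append({"role": "assistant", "content": reply})
        return reply

In a production agent, the tool-selection step would typically rely on the model’s native function-calling capabilities rather than a string match, as shown later with the booking example.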

Through these components, agents are adaptable entities capable of dynamic interactions, tailored to specific use cases, and empowered to perform specific actions. For example, let’s consider an AI booking assistant that interacts with customers who want to reserve a table at a restaurant. We configure the agent as follows:

  • LLM: we can choose a language model such as GPT-4o, Llama 3, Mistral, or many others.
  • System message: here we instruct the model on how to interact with the user and how to leverage its tools.
  • Toolkit: here we provide a Python function that performs a POST call to our booking app.

You are a restaurant booking assistant.

You interact with users in a polite and friendly manner.

You have access to the following tools:

{{book_restaurant}} 
import requests

def book_restaurant(api_url, name, contact, restaurant_id, date, time, guests, special_requests=None):
    """Send the reservation details to the booking app."""
    endpoint = f"{api_url}/api/restaurants/bookings"
    payload = {
        "name": name,
        "contact": contact,
        "restaurantId": restaurant_id,
        "date": date,
        "time": time,
        "guests": guests,
        "specialRequests": special_requests,
    }
    try:
        response = requests.post(endpoint, json=payload)
        response.raise_for_status()  # Raise an HTTPError for bad responses (4xx or 5xx)
        return response.json()  # Return the response in JSON format
    except requests.exceptions.RequestException as e:
        print(f"An error occurred: {e}")
        return None
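
To give a sense of how the agent actually invokes this tool, here is one possible wiring using OpenAI-style function calling. It is a sketch under assumptions: it uses the OpenAI Python SDK and GPT-4o as an example model, and the API URL, user message, and booking details are invented for illustration; any LLM with tool-calling support could be substituted.

import json
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is configured in the environment

SYSTEM_MESSAGE = (
    "You are a restaurant booking assistant. "
    "You interact with users in a polite and friendly manner. "
    "You have access to the book_restaurant tool."
)

# Describe the Python function so the model knows when and how to call it
tools = [{
    "type": "function",
    "function": {
        "name": "book_restaurant",
        "description": "Book a table at a restaurant via the booking API.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "contact": {"type": "string"},
                "restaurant_id": {"type": "string"},
                "date": {"type": "string", "description": "YYYY-MM-DD"},
                "time": {"type": "string", "description": "HH:MM"},
                "guests": {"type": "integer"},
                "special_requests": {"type": "string"},
            },
            "required": ["name", "contact", "restaurant_id", "date", "time", "guests"],
        },
    },
}]

messages = [
    {"role": "system", "content": SYSTEM_MESSAGE},
    {"role": "user", "content": "Hi! A table for two at restaurant 42 on 2025-01-15 at 20:00, under Valentina, phone 555-0100."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
message = response.choices[0].message

if message.tool_calls:
    # The model chose to call the booking tool: execute it with the arguments it produced
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = book_restaurant(api_url="https://my-booking-app.example.com", **args)
    print(result)
else:
    # The model asked a clarifying question instead of booking right away
    print(message.content)

In a full conversational loop, the tool result would then be appended to the message history as a tool message, and the model would be asked again to produce the final confirmation for the user.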

This is what a typical conversational flow might look like:

The example above depicts an autonomous agent that, given the user’s input, is capable of making several decisions with a degree of independence. That degree of independence can always be adjusted by the AI developer: for example, we could add a human-in-the-loop step to review the booking details before the booking request is accepted into the system.
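
As a sketch of that human-in-the-loop variant (the function name, the console-based approval, and the API URL are assumptions for illustration), a reviewer could be asked to confirm the details before the booking tool is actually called:

from typing import Optional

def book_with_confirmation(args: dict) -> Optional[dict]:
    """Show the proposed booking to a human reviewer and only submit it if approved."""
    print("The agent wants to place this booking:")
    for key, value in args.items():
        print(f"  {key}: {value}")

    answer = input("Approve this booking? [y/N] ").strip().lower()
    if answer != "y":
        print("Booking rejected by the reviewer; nothing was sent to the booking app.")
        return None

    # Only call the real tool once a human has signed off
    return book_restaurant(api_url="https://my-booking-app.example.com", **args)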

Autonomous agents represent a promising milestone for the advancement of GenAI systems, and we are already witnessing many real-world applications, from Banking (e.g. AI Agent to onboard new customers) to Customer Service (e.g. AI Agent handling support calls) to RPA (e.g. fully autonomous agents executing complex pipelines of document intelligence).

About the Author/AI Builders Summit Speaker on AI Agent Implementation:

Valentina is a Data Science MSc graduate and Cloud Specialist at Microsoft, where she has been focusing on Analytics and AI workloads within the manufacturing and pharmaceutical industries since 2022. She has been working on customers’ digital transformations, designing cloud architectures and modern data platforms, including IoT, real-time analytics, Machine Learning, and Generative AI. She is also a tech author, contributing articles on machine learning, AI, and statistics, and she recently published a book on Generative AI and Large Language Models.
