The Components of an AI Agent
According to experts, there are three key components of an AI agent:
1. Large language models: LLMs allow agents to perform specific tasks by responding to feedback, according to Andersen.
LLMs gather context using retrieval-augmented generation (RAG). A general-purpose model can connect to a RAG application, which links the LLM to the specific information a business needs to know, Andersen explains.
RAG also allows AI agents to retrieve proprietary data, such as sensitive patient data in healthcare or customer data in customer service, Saunders says.
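As a rough illustration of the RAG pattern Andersen describes, the sketch below pairs a toy keyword retriever with a placeholder call_llm() helper; the sample documents, the helper and the retrieval logic are simplified stand-ins rather than any vendor's API.

```python
# Minimal RAG sketch: a toy keyword retriever stands in for a real vector
# database, and call_llm() is a hypothetical placeholder for any provider's SDK.

DOCUMENTS = [
    "Return policy: customers may return items within 30 days with a receipt.",
    "Shipping policy: standard orders ship within two business days.",
    "Warranty: electronics carry a one-year limited manufacturer warranty.",
]

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM client call here.
    return "[model answer grounded in the retrieved context]"

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(
        DOCUMENTS, key=lambda d: -len(terms & set(d.lower().split()))
    )[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this business context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("What is the return policy?"))
```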
2. Memory: AI agents can remember a customer’s buying history, which is particularly beneficial in retail. For example, Andersen says, an AI agent could suggest the type of water bottle to buy that would fit into the cupholder of a car the customer bought.
“It provides a level of depth in terms of helping somebody make selections, because it understands what you've done in the past to a degree that you don’t see in the simpler types of e-commerce applications today,” Andersen says.
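A minimal sketch of that kind of memory, using a hypothetical AgentMemory class rather than any specific product, might store purchases per customer and fold them into the prompt that drives a recommendation:

```python
# Sketch of agent memory: purchases are stored per customer and folded into the
# prompt so a recommendation can reflect buying history. AgentMemory and
# recommend() are illustrative names, not a specific product's API.

from collections import defaultdict

class AgentMemory:
    def __init__(self) -> None:
        self._purchases: dict[str, list[str]] = defaultdict(list)

    def remember(self, customer_id: str, item: str) -> None:
        self._purchases[customer_id].append(item)

    def recall(self, customer_id: str) -> list[str]:
        return self._purchases[customer_id]

def recommend(memory: AgentMemory, customer_id: str, request: str) -> str:
    history = ", ".join(memory.recall(customer_id)) or "no prior purchases"
    # In a real agent this prompt would be sent to an LLM; here we just return it.
    return f"Request: {request} | Known purchase history: {history}"

memory = AgentMemory()
memory.remember("c42", "compact sedan with narrow cup holders")
print(recommend(memory, "c42", "Which water bottle should I buy?"))
```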
3. Tools: The AI agent performs tool-calling, sending queries out to task-driven models that handle more specific tasks, Saunders says. The LLM understands how to communicate with these tools through application programming interfaces (APIs).
Another tool is the AI agent’s workflow, which dictates the steps required to complete a job. Companies such as Salesforce and ServiceNow offer AI agents to help with basic workflows, Andersen notes.
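To make tool-calling concrete, here is a minimal sketch of the pattern Saunders describes: the model emits a tool name with JSON arguments, and the agent dispatches the call to a matching function. The tool names and the hard-coded model output below are hypothetical, not any vendor's schema.

```python
# Sketch of tool-calling: the LLM is assumed to emit a tool name plus JSON
# arguments (the shape most provider APIs use); the agent looks the tool up in
# a registry and runs it.

import json

def get_order_status(order_id: str) -> str:
    return f"Order {order_id} shipped yesterday."  # stand-in for a real API call

def create_refund(order_id: str, amount: float) -> str:
    return f"Refund of ${amount:.2f} issued for order {order_id}."

TOOLS = {"get_order_status": get_order_status, "create_refund": create_refund}

def dispatch(tool_call_json: str) -> str:
    """Route a tool call emitted by the LLM to the matching Python function."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["arguments"])

# Pretend the LLM decided a tool was needed and produced this call:
print(dispatch('{"name": "get_order_status", "arguments": {"order_id": "A-1001"}}'))
```

In this framing, a workflow is simply an ordered sequence of such tool calls that the agent executes to finish a job.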
What Are the Benefits of AI Agents?
A key benefit of AI agents is handling repetitive tasks, the kind of work people often struggle to do consistently. Microsoft Copilot Studio allows organizations to build agents that respond to “autonomous triggers” for business tasks.
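As a loose illustration of an agent reacting to an autonomous trigger, the sketch below wires a stand-in event source to a stub agent; it is generic pseudologic, not Copilot Studio's API.

```python
# Generic sketch of an "autonomous trigger": a business event arrives and is
# handed to an agent without a person in the loop. Entirely illustrative.

def next_event() -> dict:
    """Stand-in for polling a queue, mailbox or webhook for new work."""
    return {"type": "invoice_received", "invoice_id": "INV-778"}

def run_agent(event: dict) -> None:
    print(f"Agent handling {event['type']} ({event['invoice_id']})")

# In production this would run continuously or be wired to a real event source.
event = next_event()
if event:
    run_agent(event)
```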
Google introduced an AI agent called Project Astra in May, which lets users experiment with audio and video, and NVIDIA offers an API catalog of blueprints for AI agents.
“What's great about these is they can be designed and trained on your brand, and you can have a look and feel and a style that carries your company's experience into that AI agent that your customers are working with,” NVIDIA’s Saunders says.
Another type of agent, called a reflex agent, can respond to single-step instructions. Reflex agents handle concrete steps such as writing a Lambda function handler, calling a certain function and generating a block of text for that function, says Anu Sharma, the director of Amazon Bedrock Experiences and Tools at AWS.
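For illustration, the kind of single-step artifact a reflex agent might produce is a short AWS Lambda handler like the one below, written here by hand as a generic example rather than output from any particular agent.

```python
# A minimal AWS Lambda handler of the sort a reflex agent might be asked to
# write: it reads a name from the incoming event and returns a JSON response.

import json

def lambda_handler(event, context):
    """Standard Lambda entry point: receives the event payload and runtime context."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```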
AWS has multiple agentic offerings. With Amazon Bedrock, users can create their own agents to automate repetitive tasks. With Amazon Q, a generative AI assistant for work built on Bedrock, users can tap into existing agents that streamline business and software development tasks.
For example, Amazon Q Business has a contextually aware agent that can help HR professionals interpret company policies and carry out a benefits eligibility workflow to identify which policies apply to particular employees, Sharma says.
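Custom agents built on Amazon Bedrock, by contrast, are typically called from a client application through the Bedrock agent runtime. The brief sketch below uses boto3's invoke_agent call; the agent, alias and session IDs are placeholders for an agent you have already configured, and AWS credentials are assumed to come from the environment.

```python
# Sketch of invoking a custom Bedrock agent with boto3. The agent, alias and
# session IDs are placeholders for values from your own Bedrock configuration.

import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.invoke_agent(
    agentId="YOUR_AGENT_ID",
    agentAliasId="YOUR_AGENT_ALIAS_ID",
    sessionId="demo-session-001",
    inputText="Summarize the open tickets assigned to my team.",
)

# The agent streams its reply back as chunks of bytes.
answer = ""
for event in response["completion"]:
    chunk = event.get("chunk")
    if chunk:
        answer += chunk["bytes"].decode("utf-8")

print(answer)
```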
Amazon Q Developer has freed software engineers for other tasks: one of its agents upgraded about 30,000 Java applications, saving an estimated 4,500 developer-years of manual work, according to Sharma. Users can review changes to code that Amazon Q Developer suggests and can fix errors in an agent’s recommendations.
“We are allowing our engineers to have more time to solve problems and invent on behalf of our customers, so they're doing more problem-solving work rather than undifferentiated work while still delivering at the speed and quality they want to maintain for their software,” Sharma says.