Architecting Agentic Systems – part I: Is that an Agent?

The hype around AI Agents and how they are transforming the world is at an all-time high. I believe this is so for good reason. AI Agents bring the capabilities of AI, and more specifically generative AI (genAI), into the real world. They allow systems to observe the real world, derive context, make decisions, and then act upon them, using AI models as the decision-making and action mechanism. That being said, as with any hype cycle, there is far too much hype. In this series of posts I will attempt to peer through the hype to truly understand what AI Agents are and are not, what they can and cannot do, and most importantly, how best to think about large-scale systems of agents which will soon run our work and lives for us.

This is an exploration for me too, and through this series of posts I am taking you on the journey with me. My goal on this journey is to look at everything through the lens of First Principles; that is the only way I know to explore and learn. Looking at technology through First Principles will never fail you. It will keep you from being distracted by buzzwords and tool-specific nuances, or from focusing on the shiny bright topic of the day, which will soon be replaced by the shiny bright object of tomorrow. As a result, I will also refrain from discussing specific models, LLMs, or tools in these posts and stick to the Principles behind them. First Principles are immutable and allow you to grok (not xAI’s Grok, but THE grok) any technology from the ground up.

I do not know it all (of course; hence this journey of discovery), and in this rapidly changing world of AI R&D I will never be fully informed or working with the latest information. I invite you all to come on this discovery journey with me: leave comments on this post or on my socials to educate me, share your learnings, and of course, where needed, correct me. The potential impact of AI Agents is tremendous. The hype, however, is off the charts too.

But, what is an AI Agent?

Or more importantly, what is not an AI Agent? Just because a program – which is what an agent is, code – invokes a Large Language Model (LLM) to generate an output does not make it an AI Agent. Agents, by their very definition, need to be Autonomous and have Agency (more on both of these requirements in the next section). They need to be able to access their own encoded capabilities; interact with external tools and data to observe the world they have the ability to observe – their context window; draw upon this context and use an LLM or LLMs to make a decision; AND, based on that decision or the set of options they choose among, take action.

AI Agents, in order to be agents, also need to have non-deterministic workflows. Anything with a deterministic workflow is not an Agent, even if it uses an LLM or some other AI/ML model as part of its workflow. To explore with an example: a smart refrigerator that uses AI-enabled vision and other sensors in the appliance to order milk when it runs low is not using an AI Agent. It is a deterministic workflow. Sense the presence of a milk carton in the fridge, confirm it is milk, detect it is running low by checking its weight, place an order via an API to get milk delivered. Deterministic. AI was used along the workflow, but it is still deterministic. Not an Agent. To be an AI Agent, before ordering the milk the Agent should be able to access the family calendar, see that we are leaving this weekend for a week-long vacation, and order only a half gallon of milk rather than the full gallon. At the same time, it should schedule a future order to have a gallon of milk delivered the day we return, an hour after we are scheduled to get home, and monitor our return flight to adjust the delivery time if our flight is running late. That is a non-deterministic workflow using AI. This, folks, is an AI Agent.
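To make the contrast concrete, the deterministic fridge workflow can be sketched as a fixed pipeline. This is a minimal illustration, not real appliance code; the threshold and function names are assumptions.

```python
LOW_MILK_THRESHOLD_KG = 0.5  # assumed cutoff for "running low"

def fridge_milk_workflow(detected_item, weight_kg, place_order):
    """Fixed pipeline: the same inputs always yield the same action."""
    if detected_item != "milk":  # AI vision already classified the item
        return None
    if weight_kg < LOW_MILK_THRESHOLD_KG:
        # Hard-coded response: one gallon, every time, no context consulted.
        return place_order(item="milk", quantity="1 gallon")
    return None
```

AI sits inside the pipeline (the vision model that classified the carton), but the control flow is fully predetermined, which is exactly why this is not an Agent.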

In reality, this is going to be a system of Agents coordinating amongst themselves to perform this non-deterministic workflow. The non-AI-agent program in the refrigerator detects that milk is running out. It sends a message to the shopping Agent with the current milk level. The shopping Agent queries the calendar Agent, which looks up the family calendar, returns the vacation dates to the shopping Agent, and schedules a task to update the shopping Agent of any future changes to the return date and time. The shopping Agent calculates how much milk will be needed between today and the day we leave for vacation, based on its database of our family's milk consumption rate, and reduces the current order to a half gallon. And finally, it queues an order for the day we return, unless the calendar Agent updates the date/time. Life with AI Agents rocks!
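The shopping Agent's sizing-and-scheduling step above can be sketched as follows. The consumption rate and the shape of the order records are illustrative assumptions, and in a real system the vacation dates would arrive via a message from the calendar Agent rather than as function arguments.

```python
from datetime import date

GALLONS_PER_DAY = 0.12  # assumed family milk consumption rate

def plan_milk_orders(today, vacation_start, vacation_end):
    """Size today's order to the days left before the trip, and queue a
    full order for the day the family returns."""
    days_until_trip = (vacation_start - today).days
    needed = days_until_trip * GALLONS_PER_DAY
    today_order = "half gallon" if needed <= 0.5 else "1 gallon"
    # Queue a future order; the calendar Agent can reschedule it later
    # if the return flight slips.
    future_order = {"item": "milk", "quantity": "1 gallon",
                    "deliver_on": vacation_end}
    return today_order, future_order
```

For example, three days before a trip the Agent would order a half gallon and queue a full gallon for the return date.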

Agents need to have Agency, Autonomy and more

Just having a non-deterministic workflow, however, is not enough either. In the above example the Agents also had Agency – the shopping Agent had the agency to not just order milk, but also to order the right quantity of milk, and to place another order scheduled for our return. It also had Autonomy. It did not need a human to program it to check with the calendar Agent, calculate how much milk we needed, and place this order and the next. The calendar Agent had the autonomy to schedule monitoring the flight and inform the shopping Agent. This is an Autonomous system with Agency. And it can get far more complex too. Another Agent in this system could be the Travel Agent (I know, I know). It booked our vacation for us and put an entry in the calendar. This Agent would also monitor flights and rebook us as needed if our plans change or flights get delayed due to weather, which it monitors (or it works with the Airline, Hotel, and Weather Agents to execute these activities). It is hence a complex system of Agents, each with its own set of actions it has the Agency to perform, and its own scope of Autonomy. There is of course more needed for such a system of Agents to operate collectively in a safe and secure manner. Beyond Agency and Autonomy, they will also need Guardrails and a Trust mechanism. Let's examine each of these further.

Agency

Agency is obvious. It is in the definition of Agent. The Agency an Agent has – what it can and cannot do – is defined in the code of the agent. The shopping Agent can place, modify, and cancel shopping orders. It will need additional Agency to invoke other Agents that enhance its ability to shop for the right things in the right quantities. It will also need guardrails, which we will discuss shortly. The guardrails will ensure it does not order out-of-season exotic mangoes for $19.99 each, and that it communicates with the Budget Agent and stays within the limits the Budget Agent sets.
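One simple way to encode Agency in an Agent's code is an explicit capability list that the Agent cannot step outside of. This is a toy sketch under assumed names (the `ShoppingAgent` class and its action strings are not from any particular framework):

```python
class AgencyError(Exception):
    """Raised when an Agent attempts an action outside its Agency."""

class ShoppingAgent:
    # The Agent's Agency, declared explicitly in code.
    ALLOWED_ACTIONS = {"place_order", "modify_order", "cancel_order"}

    def act(self, action, **kwargs):
        if action not in self.ALLOWED_ACTIONS:
            # Anything outside the declared scope is refused outright.
            raise AgencyError(f"{action} is outside this agent's agency")
        return f"executed {action}"
```

Granting the Agent new Agency (say, invoking the Budget Agent) then becomes a deliberate, reviewable code change rather than an emergent behavior.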

Autonomy

Agency and Autonomy have nuanced differences. Agency relates to the capacity to act, while Autonomy relates to the freedom to choose. Agency aligns with external tasks and actions, while Autonomy aligns with internal decision-making processes. One can hence have Autonomy without Agency. My children, when they were young, had the autonomy to not do their homework. We, their parents, and their teachers made sure they did not have the agency to act on it. In the AI Agent world, Autonomy comes from not having humans in the loop for decisions that are defined to be within the Agent's scope of autonomous action. In our example, the shopping Agent had the autonomy to reduce the quantity of milk to order without checking with us.

Guardrails

Alluding to the Human in the Loop is a perfect segue into Guardrails. An AI Agent needs a very well-defined scope of actions it can take. It also needs well-defined Guardrails to prevent unintended consequences from actions it has the Agency to take. We already talked about cost awareness and budget limits as guardrails. You don't want the Agent to break the bank. You also want limits on the conclusions it draws. Guardrails should prevent it from, say, ordering frozen food for a party so early that it overfills an already-full freezer. An extreme example of the need for guardrails is in the movie Avengers: Age of Ultron, where Ultron's AI decides that humankind needs to be eliminated, as it sees humans as the cause of all the pain and disasters in the world. No humans, no wars, no climate change. Tony and Bruce should have put some guardrails in. The best way to put in guardrails is to introduce a Human-in-the-Loop review of decisions the Agent makes that fall outside its normal decisions. In our shopping Agent example, we can put in a guardrail to have a human sign off on any grocery order greater than a certain cost.
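The budget-plus-sign-off guardrail described above can be sketched as a check that runs before any order is placed. The threshold, the budget figure, and the `ask_human` callback are all illustrative assumptions:

```python
APPROVAL_THRESHOLD = 100.00  # orders above this need a human sign-off

def guarded_order(order_total, budget_remaining, ask_human):
    """Apply guardrails before the Agent exercises its Agency to order."""
    if order_total > budget_remaining:
        # Hard limit: never exceed what the Budget Agent allows.
        return "rejected: over budget"
    if order_total > APPROVAL_THRESHOLD:
        # Soft limit: escalate to a Human in the Loop instead of
        # acting autonomously.
        return "placed" if ask_human(order_total) else "rejected: human declined"
    return "placed"
```

Anything under the threshold stays within the Agent's Autonomy; anything over it is pulled back into human review.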

Trust

While Guardrails keep the Agent from going off the proverbial rails, Trust comes in to make sure the Agent is still the Agent we invoked. Has it been hacked? Has it been replaced by a rogue Agent? Has the AI model used by the Agent been poisoned, making the Agent act differently? Or has some bias been introduced that has changed its behavior? Furthermore, when dealing with systems of Agents that coordinate with each other to perform tasks, how do we ensure that each Agent trusts the Agent it is interacting with? Agents need to be able to validate the identity of any Agent they interact with, including any tool or external process that they invoke, or that invokes them. Is it my neighbor's refrigerator that is running low on milk, or mine? Agents need to be able to authenticate with the APIs they invoke to act. Agents need to validate that they are executing the task with the right external Agent or system. Is it giving the order, with payment info, to the store or delivery service it intends to order from, or to an imposter Agent/API/website? We have AI phishing humans today. We will soon have AI Agents phishing AI Agents, if we are not there already.
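As one minimal sketch of inter-agent trust, two Agents sharing a pre-provisioned secret can sign and verify their messages, so a tampered or imposter message is rejected. This toy HMAC scheme is an assumption for illustration; real deployments would use per-agent keys, certificates, or mutual TLS.

```python
import hashlib
import hmac

SHARED_SECRET = b"fridge-and-shopping-agent-key"  # assumed pre-provisioned

def sign(message: bytes) -> str:
    """Sender attaches this tag so the receiver can verify origin."""
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    # Constant-time compare so an imposter Agent can't probe byte by byte.
    return hmac.compare_digest(sign(message), signature)
```

The refrigerator's "milk is low" message would only be acted on by the shopping Agent if its signature checks out; my neighbor's fridge, lacking the key, cannot spoof it.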

In conclusion, architecting any system of AI Agents will need to begin with this First Principles level of thinking in place. This is just the beginning. More in Part II, where we will look at the architectural decisions needed to deploy Systems of Agents at scale. Stay tuned, or ask your blog-post-scanner Agent to do so.

Do share your thoughts in comments below. Or on my socials. I want to learn from you and your thinking on AI Agents.

Disclaimer: No AI Agent was harmed in the writing of this post.
