What's the status of Agentic frameworks?

A significant number of companies are already leveraging AI, with use cases based on Agents and Agentic AI, to achieve efficiencies in their business processes and launch innovative offerings to their customers. However, many of these use cases are still at the production-pilot stage, and the use cases with the greatest impact (and typically greater complexity) are not yet commonly implemented.

In other articles on this blog, we've explained how companies are creating tangible value in real-world applications, covering the selection of LLMs, intelligent prompt management, and the incorporation of proprietary data with techniques like RAG and grounding. In this post, we take a step further and describe the current state of Agentic frameworks for developing intelligent applications and what we at dataguru believe these frameworks should offer. To do this, we will draw an analogy with the SOA/BPM systems that were so fashionable in the first decade of this century.

The main Cloud AI providers, such as Google, Microsoft, and AWS, are aware of the importance of Agentic AI and offer APIs to their LLMs that facilitate the creation of special-purpose Agents using low-code or no-code techniques; Google's Agent Development Kit (ADK) is one example. However, developing, deploying, maintaining, and documenting a comprehensive end-to-end Agentic application remains a significant challenge. If we want our Agentic application to perceive its environment, intelligently plan process steps, know about and connect to external systems, have long-term memory, and learn by observing results, custom development is still required, because no sufficiently complete framework exists yet, although we are convinced that we will see very important advances in the near future.

Just like the Business Process Management (BPM) solutions of the 2000s, Agentic application frameworks must offer significant advantages over building applications from scratch. First, these frameworks should enable the development of Intelligent Applications on top of stateless LLMs, automatically providing state for long-running business processes, managing the persistence of conversation history, and handling security natively. Second, they should provide visual, graphical modeling for the orchestration of business processes, which facilitates development, documentation, and maintenance for non-technical users. Finally, these frameworks must offer observability of each interaction with the processes, allowing the quality of the LLM's reasoning and the exchanges between the application and the users/business systems to be tracked for greater auditability and regulatory compliance. Let's explore this further.

Level 1: Interoperability

This fundamental layer establishes the building blocks for our Intelligent Applications. These frameworks automatically manage interoperability between systems and with LLMs through predefined protocols. Google, for example, supports the Model Context Protocol (MCP), which standardizes how AI models access external tools and data, and the Agent2Agent (A2A) protocol, which standardizes collaboration between independent AI Agents (conceptually similar to SOAP in service-oriented architectures). Microsoft offers Azure AI Studio alongside Azure OpenAI Service, while AWS leverages a set of tools including AWS MCP servers, AWS Lambda functions, and Amazon SageMaker.
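
To make the idea concrete, here is a minimal sketch of what exposing a business capability over MCP can look like, assuming the official MCP Python SDK (the `mcp` package) and its `FastMCP` helper; the server name and the `get_order_status` tool are hypothetical placeholders, not part of any provider's offering.

```python
# Minimal MCP tool server sketch, assuming the official MCP Python SDK
# (pip install mcp). "order-lookup" and get_order_status are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-lookup")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Return the status of an order so an Agent can reason over it."""
    # In a real deployment this would query a business system.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    mcp.run()  # serves the tool over MCP (stdio transport by default)
```

Once a server like this is running, any MCP-capable Agent can discover and call the tool without bespoke integration code, which is exactly the interoperability this layer is meant to provide.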

Open frameworks like LangChain further decouple your Intelligent Applications from the underlying Cloud infrastructure, although they are not 100% drag-and-drop and do not manage key aspects such as the session of each process instance (the session ID) out of the box, as the sketch below illustrates.
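
The following sketch shows how session state must be wired by hand in LangChain, assuming the `langchain-core` and `langchain-openai` packages; the prompt wording and the session-store helper are illustrative, and swapping `ChatOpenAI` for another provider's chat model is what gives the cloud decoupling mentioned above.

```python
# Sketch: the developer, not the framework, owns the session ID mapping.
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an assistant for an order-management process."),
    MessagesPlaceholder("history"),
    ("human", "{input}"),
])
chain = prompt | ChatOpenAI(model="gpt-4o-mini")

# Mapping from process-instance session IDs to conversation history.
_store: dict[str, InMemoryChatMessageHistory] = {}

def get_history(session_id: str) -> InMemoryChatMessageHistory:
    return _store.setdefault(session_id, InMemoryChatMessageHistory())

chat = RunnableWithMessageHistory(
    chain, get_history,
    input_messages_key="input", history_messages_key="history",
)

# The caller must supply the session ID on every invocation.
chat.invoke({"input": "Where is order 42?"},
            config={"configurable": {"session_id": "process-instance-7"}})
```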

Although still incomplete, these frameworks significantly reduce development costs by cutting down on custom coding, lower maintenance costs through pre-built components, and lessen IT dependency by allowing business users to build and modify applications, freeing up IT for strategic initiatives.

Level 2: Orchestration

With the building blocks defined and functioning, the next step is orchestration and reasoning. Before the emergence of Agentic AI orchestration frameworks, defining and coding the logic of business process flows demanded significant low-level programming expertise. To develop Agentic AI applications with varying degrees of visual interaction, it is worth considering options like AWS Flow Builder, Google Cloud Vertex AI (with its drag-and-drop Agent Builder features), and Azure AI Studio. While Google Cloud's Agent Development Kit (ADK) leans towards a code-first development approach (see the sketch below), these platforms often incorporate drag-and-drop or other visual tools to facilitate the creation of Intelligent Applications.
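
As an illustration of that code-first style, here is a sketch in the spirit of the ADK quickstart, assuming the `google-adk` package; the agent name, model ID, and `check_stock` tool are placeholders we introduce for illustration, not part of the original article or of Google's documentation for your use case.

```python
# Code-first agent definition sketch, assuming google-adk is installed.
from google.adk.agents import Agent

def check_stock(product_id: str) -> dict:
    """Tool: look up stock for a product (stubbed for illustration)."""
    return {"product_id": product_id, "units_available": 12}

root_agent = Agent(
    name="inventory_agent",
    model="gemini-2.0-flash",          # assumed model identifier
    description="Answers stock questions for the sales team.",
    instruction="Use the check_stock tool before answering.",
    tools=[check_stock],
)
```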

LangChain addresses the construction of multi-agent applications with LangGraph, incorporating "state" into "stateless" LLMs and representing the application logic as a graph, much as BPMN does in SOA/BPM architectures. This enables a process-centric approach, emphasizing the modeling and automation of business processes, along with the formalization and application of business rules and protocols within application workflows; the sketch below shows the basic pattern.
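
A minimal LangGraph sketch, assuming the `langgraph` and `langchain-openai` packages: the application logic is a graph of nodes that read and update a shared state, which is how "state" gets layered on top of stateless LLM calls. The node names, state fields, and prompts are illustrative.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

class OrderState(TypedDict):
    request: str
    classification: str
    reply: str

def classify(state: OrderState) -> dict:
    label = llm.invoke(
        f"Classify this request as 'refund' or 'status': {state['request']}")
    return {"classification": label.content}

def answer(state: OrderState) -> dict:
    reply = llm.invoke(
        f"Draft a short answer for a {state['classification']} request: "
        f"{state['request']}")
    return {"reply": reply.content}

# The graph plays the role BPMN played in SOA/BPM: an explicit process model.
graph = StateGraph(OrderState)
graph.add_node("classify", classify)
graph.add_node("answer", answer)
graph.add_edge(START, "classify")
graph.add_edge("classify", "answer")
graph.add_edge("answer", END)
app = graph.compile()

result = app.invoke({"request": "Where is my order 42?"})
```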

It is important to note that significant coding and manual persistence management are still required for the LLM to reason about which process in the catalog we want to execute, which steps it should follow based on context and previous experiences (making the flow non-deterministic), and at which point in the process the current interaction sits, as illustrated below.
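
The following sketch shows one form that manual persistence management takes in LangGraph: the developer chooses a checkpointer backend and supplies the `thread_id` identifying the process instance on every call. The in-memory checkpointer and the trivial one-node graph are illustrative assumptions; a production backend (for example a database checkpointer) requires additional wiring.

```python
from typing import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

class Step(TypedDict):
    count: int

def advance(state: Step) -> dict:
    return {"count": state["count"] + 1}

g = StateGraph(Step)
g.add_node("advance", advance)
g.add_edge(START, "advance")
g.add_edge("advance", END)

# Persistence is opt-in and developer-managed: pick a checkpointer backend.
app = g.compile(checkpointer=MemorySaver())

# The caller must supply the thread_id that identifies the process instance.
cfg = {"configurable": {"thread_id": "process-instance-7"}}
app.invoke({"count": 0}, config=cfg)

# The stored checkpoint is what lets a later call with the same thread_id
# know at which step of the process this instance is.
print(app.get_state(cfg).values)  # {'count': 1}
```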

Frameworks are not yet able to perceive their environment intuitively and natively. Even trivial signals such as the day of the week or the demographics of the user initiating the process have to be coded manually, let alone richer ones such as recommending a product or service based on previous purchases or browsing patterns. The sketch below shows the kind of glue code this currently implies.
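
A sketch of that hand-coded "environment perception": contextual signals are gathered explicitly and injected into the prompt. The `fetch_user_profile` helper is hypothetical and stands in for a real CRM or identity lookup.

```python
from datetime import datetime

def fetch_user_profile(user_id: str) -> dict:
    # Hypothetical: in practice this would call a CRM or identity system.
    return {"segment": "retail", "language": "en"}

def build_context(user_id: str) -> str:
    profile = fetch_user_profile(user_id)
    now = datetime.now()
    return (
        f"Today is {now.strftime('%A')}. "
        f"The user belongs to the '{profile['segment']}' segment "
        f"and prefers '{profile['language']}'."
    )

# The resulting string is prepended to the Agent's system prompt;
# nothing in today's frameworks derives these signals automatically.
print(build_context("user-123"))
```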

It is crucial to recognize that Agentic AI solutions may not be universally optimal. Applications with highly complex flows, specific technical demands, or critical performance requirements may still benefit from traditional coding methods, which offer more detailed control and customization than AI-driven orchestration.

Level 3: Evaluation

Once our Agentic AI Application is up and running, and because of the non-deterministic nature of LLM-based application logic, it is essential to evaluate the results of each interaction for continuous learning, auditing, and regulatory compliance. Google, AWS, and Microsoft Azure do not provide a single framework for evaluation but rather a set of different tools at different levels (mainly low-level monitoring and logging tools) without specific audit and compliance functionality, which must be coded separately.

LangSmith provides a unified platform for debugging, testing, evaluating, and monitoring your LLM-powered Agentic Applications. By offering detailed tracking of each step, from inputs and outputs to intermediate processes, it enables a comprehensive assessment of quality, accuracy, and performance through automated metrics and human feedback. While not a complete SOA/BPM-style solution, LangSmith is the closest current equivalent to a BAM (Business Activity Monitoring) system for these applications; a minimal instrumentation sketch follows.
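
A minimal instrumentation sketch, assuming the `langsmith` package and a `LANGSMITH_API_KEY` in the environment (tracing of LangChain/LangGraph calls is typically switched on via an environment variable such as `LANGSMITH_TRACING=true`); `summarize_ticket` is a placeholder step we introduce for illustration.

```python
from langsmith import traceable

@traceable(name="summarize_ticket")
def summarize_ticket(ticket_text: str) -> str:
    # Each call is recorded as a run with its inputs, outputs, and latency,
    # which can then be assessed with automated metrics or human feedback.
    return ticket_text[:100]

summarize_ticket("Customer reports that order 42 arrived damaged...")
```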

The absence of centralized control in early Agentic AI frameworks perpetuates "shadow IT," hindering the potential benefits of AI. In contrast, a well-governed and centralized framework would enable a more visible, agile, collaborative, and cost-effective application development environment. This structure would also integrate analytics and reporting, providing crucial insights into process flow performance, identifying bottlenecks, and enabling data-driven decision-making for continuous optimization.

Conclusion

Agentic AI frameworks are in their infancy but poised for significant advancement. This paradigm shift will democratize access to business processes by allowing all end users (employees, customers, and suppliers) to interact intuitively through natural language. These interactions will adapt to context, improve with each exchange, and remain secure, eliminating the friction of navigating multiple unintuitive application-specific interfaces and making daily work more efficient and human-centric.

How can dataguru help you?

With our AI Agents and Intelligent Applications service, you can experience the next level of automation with Agentic AI. We design and implement Intelligent Applications with Agents that learn, adapt, and drive results autonomously. Our top-tier methodology and team of experts in AI and business process areas will enable you to unlock unprecedented efficiency and innovation.

If you're unsure where to begin, our AI Discovery Workshop will reveal the power of AI through inspiring success stories. In a highly effective one-day session, we will identify, prioritize, and build a roadmap of groundbreaking use cases designed for you, delivering clear documentation of the most valuable ones ranked by impact and feasibility.
