January 23, 2025

Imagine an AI that doesn’t just follow your instructions but decides how to achieve your goals on its own. Agentic AI is exactly that: a new frontier in artificial intelligence where systems act independently. Agentic AI has the potential to revolutionize industries. But with great autonomy comes great responsibility — and potential ethical dilemmas. Ensuring accountability, fairness and safety in self-directed systems is at the heart of building trust and reliability in these technologies.

Keeley Crockett, an IEEE member and former editor of the IEEE Journal of Emerging Topics in Computational Intelligence, discusses what agentic AI is, how it is being used and how it may evolve in the coming years.

How would you explain agentic AI to someone with no technical background?

Agentic AI refers to artificial intelligence systems that can act independently to achieve goals without requiring constant human control. These systems can make decisions, perform actions and adapt to situations based on their programming and the data they process, typically without human input. A goal may be composed of a number of sub-goals.

There are several examples of how these systems might be deployed in the future. Imagine you had a house-cleaning robot that you instructed to simply “keep the house clean.” The robot would then undertake tasks where appropriate, such as vacuuming floors when it determines they are dirty, doing dishes after dinner and putting things back in their place. The robot understands the goal and acts without constant instructions. 

Another scenario involves using agentic AI to run a marketing program. You might tell the system to increase sales by 20 percent. The AI could then independently analyze customer data over a specified time period to identify trends and preferences, conduct a marketing campaign based on that analysis and adjust the campaign if it realizes it isn’t performing to expectations.
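To make that pattern concrete, here is a minimal sketch of the kind of observe, plan, act and adjust loop such a system might run. Everything in it is hypothetical: the function names, the 20 percent target and the simulated metrics are stand-ins for illustration, not a description of any real product.

```python
# Hypothetical sketch of an agentic "increase sales by 20 percent" loop.
# All functions and numbers here are illustrative stand-ins, not a real system.
import random

TARGET_UPLIFT = 0.20  # goal supplied once by the human: +20% sales


def analyze_customer_data():
    """Stand-in for analyzing historical customer data for trends and preferences."""
    return {"preferred_channel": random.choice(["email", "social", "search"])}


def launch_campaign(plan):
    """Stand-in for launching a campaign on the chosen channel."""
    print(f"Launching campaign on {plan['preferred_channel']}")


def measure_uplift():
    """Stand-in for measuring the sales uplift observed so far."""
    return random.uniform(0.0, 0.30)


def agent_loop(max_rounds=5):
    for round_number in range(1, max_rounds + 1):
        plan = analyze_customer_data()   # observe and plan
        launch_campaign(plan)            # act
        uplift = measure_uplift()        # evaluate against the goal
        print(f"Round {round_number}: uplift so far {uplift:.0%}")
        if uplift >= TARGET_UPLIFT:
            print("Goal reached; stopping.")
            return
        # Goal not met: loop again and adjust the campaign plan.
    print("Goal not reached within the allotted rounds; escalate to a human.")


if __name__ == "__main__":
    agent_loop()
```

The important point is the loop itself: the human supplies the goal once, and the system keeps acting, measuring and adjusting until the goal is met or it decides to hand control back to a human.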

What distinguishes agentic AI from traditional AI models or automation systems?

Traditional AI models follow predefined rules that have typically been discovered from data. Such models cannot automatically adapt to unexpected changes without some form of retraining, testing and validation, and they need human oversight and intervention. Traditional models are usually built for one specific task, such as a classification model that determines whether a person is likely to default on a loan payment.
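For contrast, a traditional single-task model behaves more like the toy rule below: it answers one fixed question and does nothing else, and changing its behavior means a human retrains or rewrites it. The rule and thresholds are invented purely for illustration; a real model would be learned from data.

```python
# Hypothetical sketch of a traditional, single-task model: a fixed decision rule
# that only answers one question and never adapts on its own.
def likely_to_default(income, existing_debt, missed_payments):
    """Illustrative rule; the thresholds here are invented, not learned."""
    debt_ratio = existing_debt / max(income, 1)
    return debt_ratio > 0.5 or missed_payments >= 3


# One question in, one answer out; nothing else happens without human intervention.
print(likely_to_default(income=40_000, existing_debt=25_000, missed_payments=1))
```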

Agentic AI systems act independently to achieve specific goals without requiring constant human intervention. They can learn from data, adapt to new situations and dynamically adjust their actions. Their behavior is goal-driven, and they must work out how to achieve the main goal and sub-goals, requiring them to prioritize tasks and solve problems independently of humans.
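As a rough illustration of that goal-driven behavior, the hypothetical sketch below takes a single top-level goal (the house-cleaning example from earlier), decomposes it into prioritized sub-goals and works through them in order. The sub-goals and priorities are invented for illustration only.

```python
# Hypothetical sketch: an agentic system decomposing a goal into prioritized
# sub-goals and working through them, in contrast to a single-task model.
import heapq


def decompose(goal):
    """Stand-in for planning: map a top-level goal to (priority, sub-goal) pairs."""
    if goal == "keep the house clean":
        return [(1, "vacuum dirty floors"), (2, "wash dishes"), (3, "tidy shelves")]
    return [(1, f"work out how to achieve: {goal}")]


def attempt(sub_goal):
    """Stand-in for acting on a sub-goal; returns True if it succeeded."""
    print(f"Working on: {sub_goal}")
    return True


def pursue(goal):
    queue = list(decompose(goal))
    heapq.heapify(queue)  # highest-priority (lowest number) sub-goal first
    while queue:
        _, sub_goal = heapq.heappop(queue)
        if not attempt(sub_goal):
            # A real agent would replan here; this sketch simply requeues
            # the sub-goal at a lower priority.
            heapq.heappush(queue, (99, sub_goal))
    print(f"Goal complete: {goal}")


pursue("keep the house clean")
```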

What are the ethical implications of agentic AI?

The use of agentic AI raises several critical questions. In such complex systems, who is responsible if things go wrong? What data and privacy safeguards are required? How should an agentic AI system respond when it is expected to undertake “moral” decision-making that has real-world consequences?

These questions are the same for traditional AI systems. Although guided by ethical principles and current and emerging legislation, we are still trying to understand and unpack what this means in operationalised systems. The big ethical question is, where do humans fit in?

Are there agentic AI models in use today? What degree of autonomy do they have?

Agentic AI is used today in autonomous vehicles to make driving decisions based on continual analysis of the vehicle’s surroundings. Every trip is a learning experience. 

Some cybersecurity companies use agentic AI to detect and correlate threats within an organization through real-time analysis of network activity. The AI then responds to potential breaches autonomously.

My personal opinion is that agentic AI use cases will evolve rapidly as research progresses and access to data increases. The accuracy of these systems depends on how much personal data a person is willing to share. 
