January 15, 2026

Summary

  • As AI content becomes harder to verify, IEEE experts at CES 2026 argue that credibility now relies on the “information supply chain” and the reputation of the source rather than the content itself.
  • Because users must outsource their trust to “proxies for truth,” brands and AI designers are now held to a higher standard of ethical care to protect human agency.

Imagine someone you disliked shared a song or a news article with you. You might dismiss it outright. But if that same content arrived later from a trusted mentor, your reaction might change, not because the content itself was different, but because the source was. So what does it mean when AI is responsible for sharing content, or making it? 

As AI-generated content becomes more sophisticated and verification more difficult, experts argue that public trust in information is no longer based on facts alone, but on the social and institutional relationships surrounding the content.

The tension between content and the relationships that give it credibility was at the heart of the panel “Trust, Transparency and Co-Creation in the Age of Intelligent Content,” hosted by IEEE at CES 2026 in Las Vegas earlier this month.

The panel featured IEEE Fellow Karen Panetta as moderator, with panelists Martin Clancy, author, founder of AI:OK, senior AI research fellow at DCU Ireland and founding chair of the IEEE Global AI Ethics Arts Committee; IEEE Student Member Max Lu; and Eric Pace, head of AI at Cox Communications.

What Users Know About AI 

Before questions of trust, transparency or ethics can be addressed, there is a more basic issue: most users have only a partial understanding of how AI systems arrive at their answers.

People know that AI is trained on data and that it can produce impressive results. What is less understood is that those results depend on human judgment long before a model is deployed. Developers must decide what “correct” looks like in the first place. In AI development, that reference point is known as ground truth: the agreed-upon answers or labels that a system is trained to learn from.

That agreement is not always easy to reach.

Panetta offered a concrete example from her work teaching AI systems to read medical images. For one project, doctors reviewing scans broadly agreed on whether they showed the presence of cancer. That shared judgment created a reliable foundation for training an AI model. But when those same doctors were asked to assess how severe the cancer was, their opinions often diverged sharply. Without consensus, the ground truth the system was supposed to learn became unclear.

“As someone who creates AI and databases and ground truths, I didn’t know how to create a ground truth for that,” Panetta said.
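To make the problem concrete, here is a minimal sketch of one common approach, consensus labeling, in which an annotation becomes ground truth only if enough experts agree. The doctors’ labels, the 80 percent threshold and the consensus_label function are illustrative assumptions, not details of Panetta’s project.

    from collections import Counter

    def consensus_label(annotations, threshold=0.8):
        # Accept a label as ground truth only when a large enough
        # share of annotators independently chose the same answer.
        label, count = Counter(annotations).most_common(1)[0]
        agreement = count / len(annotations)
        return label if agreement >= threshold else None

    # Hypothetical labels from five doctors reviewing the same scan.
    presence = ["cancer", "cancer", "cancer", "cancer", "cancer"]
    severity = ["grade 1", "grade 2", "grade 3", "grade 2", "grade 1"]

    print(consensus_label(presence))   # "cancer": a usable ground truth
    print(consensus_label(severity))   # None: no consensus, no ground truth

When the experts agree, as on the presence of cancer, the pipeline has a label to train on; when they diverge, as on severity, there is simply no ground truth to learn.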

Proxies for Truth

Lu, a researcher who has helped major news organizations deploy AI, explained that because we cannot reliably distinguish AI-generated images from real ones, we are forced to rely on “proxies” for truth rather than our own senses.

“If you want to verify whether an image is real, you have to outsource your trust to someone who has better knowledge than you, whether it is a chatbot or a professional news organization or friends and family,” Lu said.

Trust and Brands

Consumers sit at the end of an AI-fueled information supply chain. One AI might produce content, but others might edit it, translate it and decide who gets to see it. Pace noted that consumers are unable to see how those decisions are made, so they default to trusting the final delivery point. That means brands have a responsibility to focus on ethical AI across the media ecosystem.

“My thought is you put your trust in the last place you get your information from, right?” Pace said. “The whole supply chain of information before that is irrelevant to you because you have no capacity whatsoever to vet the supply chain.”
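As a rough illustration of why that chain is invisible, consider this sketch of a content pipeline in which every AI stage records itself in a provenance trail that never reaches the reader. The stage names and transformations are hypothetical, not a description of any real system Pace discussed.

    def run_pipeline(content, stages):
        # Pass content through each stage, logging provenance as we go.
        provenance = []
        for name, transform in stages:
            content = transform(content)
            provenance.append(name)
        return content, provenance

    stages = [
        ("generator-model", lambda text: text),
        ("editing-model", lambda text: text.strip().capitalize()),
        ("translation-model", lambda text: text),      # e.g., to Spanish
        ("recommendation-model", lambda text: text),   # decides who sees it
    ]

    article, provenance = run_pipeline("  breaking news draft  ", stages)

    print("What the reader sees:", article)
    print("The invisible supply chain:", " -> ".join(provenance))

The reader receives only the final string; unless the delivery point chooses to surface that provenance trail, every upstream decision is, as Pace put it, impossible to vet.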

Ethical Design at the Core

Clancy, a co-author of the report “The Voice of the Artist in the Age of the Algorithm,” released as part of the IEEE Standards Association’s Ethically Aligned Design initiative, said AI designers need to be held to a higher standard of care because users give up some level of control when using AI systems.

“When I use AI, I’m consciously transferring my agency to this black box because I want assistance,” Clancy said. “So I’m curious about how platforms can respond to emerging behaviors from humans as well as the technology.” 

Ultimately, the panel concluded that while trust is increasingly outsourced to social proxies, the individual remains empowered to shape their own information environment. As Lu argued, we are now in a “better position than ever before” to make these systems work for us, using AI to explore the complexity of the world rather than settling for simple black-and-white narratives.

Explore More: If you want to see all the highlights from IEEE at CES 2026, including moments from the panel, check out this video.
