October 26, 2023

When ChatGPT burst onto the scene in late 2022, it didn’t just showcase the leaps AI had made; it painted a future filled with possibilities, shaping how we could work, learn and even have fun. The benefits of AI are clear to many, but there’s a trickier part: the potential risks.

Navigating these risks is like solving a giant puzzle – one that defines our times. That’s why many of the brains behind AI are pushing for some ground rules to keep things in check. After all, the conversation about how we use AI isn’t just big talk; it’s essential.

We’re diving into insights from the experts, unpacking the ethical dilemmas they raise and examining how those dilemmas may impact the future of AI and other technologies.

Ethics and Bias

AI systems need to be trained on data. But data sets are frequently assembled by people, who can be biased or inaccurate. As a result, AI systems can perpetuate those biases. This is especially true in hiring and in criminal justice, where managing bias can be difficult.

“We can audit software code, manually or automatically, for privacy defects,” said IEEE Senior Member Kayne McGladrey. “Similarly, we can audit software code for security defects. We cannot currently audit software code for ethical defects or bias, and much of the coming regulation is going to screen the outcomes of AI models for discriminatory outcomes.” 
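
To make the idea of screening outcomes concrete, here is a minimal, hypothetical Python sketch of the kind of check McGladrey describes: it computes per-group selection rates from a model’s decisions and applies the “four-fifths rule,” a common disparate-impact heuristic. The group names and decision data are invented for illustration; this is not a description of any specific regulatory test.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Favorable-outcome rate per group.

    decisions: iterable of (group, outcome) pairs, where outcome is 1
    for a favorable model decision (e.g., "advance to interview") and
    0 otherwise.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest.

    The "four-fifths rule" heuristic flags ratios below 0.8
    for further human review.
    """
    return min(rates.values()) / max(rates.values())

# Invented decisions from a hypothetical hiring model.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0),
]
rates = selection_rates(decisions)
print(rates)                          # group_a ~0.67, group_b ~0.33
print(disparate_impact_ratio(rates))  # 0.5 -> below 0.8, flagged for review
```

Note that this audits the model’s outputs, not its source code, which is exactly the distinction in the quote: the bias lives in the outcomes, so that is where the screening has to happen.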

Changing How Jobs Are Done

With the rise of generative AI, companies are reimagining how work gets done. While few people think that jobs requiring creativity and judgment can be fully automated, AI can assist. Generative AI can offer dialogue ideas when a writer is stuck, for example. It can’t serve as your lawyer, but a good lawyer could use it to write a first draft of a motion or to do research.

“We need to collectively figure out what are ‘human endeavors’ and what are we willing to cede to an algorithm, e.g., making music, films, practicing medicine, etc.,” said IEEE Member Todd Richmond.

In “The Impact of Technology in 2024 and Beyond: an IEEE Global Study,” a survey of global technology leaders, 50% of respondents identified the ability to tap the “institutional knowledge of current professionals to train newcomers” as one of their top three concerns around generative AI use in their organization in the next year.

Accuracy and Overreliance 

Generative AI can spit out facts with confidence. The problem is that those facts aren’t always accurate. And with all forms of AI, it can be difficult to find out how, exactly, the software arrived at its conclusion.

In the survey, 59% of respondents cited an “overreliance on AI and potential inaccuracies” as a top concern of AI use in their organizations. 

Part of the problem is that the training data itself can be inaccurate. 

“Verifying training data is difficult because the provenance is not available and volume of the training data is enormous,” said Paul Nikolich, IEEE Life Fellow. 
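
Nikolich’s point suggests one practical mitigation: recording a verifiable manifest when a data set is assembled. The hypothetical Python sketch below fingerprints each training file with SHA-256 so a later audit can at least detect that the reviewed data hasn’t been altered or substituted. The directory and file names are invented, and a real pipeline would also record sources and licenses; this is a sketch of the idea, not a complete provenance system.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Record a SHA-256 fingerprint for every file in a training set.

    The manifest doesn't prove the data is accurate, but it lets a
    later audit confirm the set hasn't changed since it was reviewed.
    """
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path)] = digest
    return manifest

def verify_manifest(manifest: dict) -> list:
    """Return the paths whose contents no longer match the manifest."""
    mismatches = []
    for path, expected in manifest.items():
        p = Path(path)
        actual = hashlib.sha256(p.read_bytes()).hexdigest() if p.exists() else None
        if actual != expected:
            mismatches.append(path)
    return mismatches

# Hypothetical usage: snapshot the data set at review time,
# then re-check it immediately before training.
manifest = build_manifest("training_data/")
Path("manifest.json").write_text(json.dumps(manifest, indent=2))
print(verify_manifest(manifest))  # [] if nothing has changed
```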

And increasingly, AI may be used in mission-critical, life-saving applications.

“Before we use AI systems, we must have confidence that these AI systems will operate safely and perform as intended,” said IEEE Fellow Houbing Song.

In 2024 and beyond, expect intense efforts to ensure that AI results are more accurate and that the data used to train AI models is clean.

Learn more: A new article in IEEE Computer Magazine argues that the advancement of AI must be done in a way that protects privacy, civil rights and civil liberties, while also promoting principles of fairness, accountability, transparency and equity.
