April 25, 2024
In 2021, a group of researchers set out to quantify just how hot the topic of artificial intelligence ethics had become. They searched Google Scholar for references to AI and ethics and found a remarkable uptick. In the 34 years from 1985 to 2018, just 275 scholarly articles focused on the ethics of artificial intelligence. In 2019 alone, 334 such articles were published – more than in the previous 34 years combined. Another 342 followed in 2020.
Research into AI ethics has exploded, and much of it has focused on guidelines for building AI models. Now, AI-based tools are widely available to the public. That’s left schools, businesses and individuals to figure out how to use AI ethically – in a way that is safe, accurate and free of bias.
“Much of the public is not yet sufficiently informed or prepared to use AI tools in a fully responsible manner,” said IEEE Member Sukanya Mandal. “Many people are excited to experiment with AI but lack awareness of potential pitfalls around privacy, bias, transparency and accountability.”
Hallucinations and Inaccuracies: The Biggest Pitfalls for AI Users
Because of the way they are built, most generative AI models are prone to hallucinations: they simply make things up, presenting fabricated information in a confident, authoritative tone. That’s a risk for users, who may pass on false information. In the U.S., lawyers learned this lesson the hard way when they used chatbots to draft legal documents, only to discover that the AI had invented nonexistent cases, which they then cited as precedent in their arguments.
“AI may not always be accurate, so its information needs to be checked,” said IEEE President Tom Coughlin.
Can We Trust The Decisions AI Makes?
Artificial intelligence models are trained on massive amounts of data, and sometimes they make decisions based on extremely complex mathematical functions that are difficult for humans to understand. Users often don’t know why an AI has made a decision.
“Many AI algorithms are ‘black boxes’ whose decision-making is opaque,” Mandal said. “But particularly for high-stakes domains like healthcare, legal decisions, finance and hiring, unexplainable AI decisions are unacceptable and erode accountability. If an AI denies someone a loan or a job, there must be an understandable reason.”
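To make that concrete, here is a minimal, purely illustrative Python sketch – every feature name and weight is invented for this example – of the kind of per-feature explanation a transparent linear scoring model can give for a single loan decision, which is exactly what an opaque model cannot provide directly:

```python
# Sketch of an explainable decision: in a linear scoring model, each
# feature's contribution is simply weight * value, so the reason for
# a denial can be read off directly. Names and numbers are invented.
weights = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.3}
applicant = {"income": 0.4, "debt_ratio": 0.9, "years_employed": 0.2}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score = {score:+.2f} (approve if > 0)")
for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature:15s} contributed {c:+.2f}")
# The output shows debt_ratio drove the denial: an understandable reason.
```

In a real system the weights would come from a trained model, but the principle is the same: when each contribution is visible, a denied loan comes with a reason a person can understand and contest.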
What Happens if We Trust AI Too Much?
Because AI models are trained on such large datasets, their outputs can seem authoritative, lulling users into a false sense of confidence and causing them to accept decisions without question.
In “The Impact of Technology in 2024 and Beyond: an IEEE Global Study,” a recent survey of global technology leaders, 59% of respondents identified “inaccuracies and an overreliance on AI” as one of their organization’s biggest concerns when it came to the use of generative AI.
Why Is It Important To Know What Data Was Used to Train an AI Model?
Imagine this: An AI model is trained to screen applicants for a job. Using hiring data collected over prior years, it learns to identify the people most likely to be hired and forwards their resumes to hiring managers. Except the industry has traditionally been male-dominated. The AI could learn to recognize women’s names and automatically exclude those applicants – based not on their ability to do the job, but on their gender.
Such algorithmic biases can and do exist in AI training data, making it very important for users to understand how models were trained.
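To illustrate how this happens, here is a small, self-contained Python sketch (using scikit-learn; the data is synthetic and invented for this example) in which a classifier fit to historically skewed hiring outcomes learns a negative weight on a gender proxy, even though the proxy says nothing about ability:

```python
# Illustrative sketch only: synthetic data invented for this example.
# A classifier fit to historically biased hiring decisions learns to
# penalize a gender proxy feature, reproducing the past bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)              # the job-relevant signal
is_female = rng.integers(0, 2, size=n)  # proxy (e.g., inferred from a name)

# Historical decisions: at equal skill, women were hired less often.
hired = (skill - 1.0 * is_female + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

print(f"weight on skill:     {model.coef_[0, 0]:+.2f}")  # positive
print(f"weight on is_female: {model.coef_[0, 1]:+.2f}")  # negative: learned bias
```

Nothing in this pipeline “decides” to discriminate; the model simply reproduces the pattern in its training data, which is why knowing how a model was trained matters.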
“Ensuring unbiased data is a shared responsibility across the AI development lifecycle and an ongoing process,” Mandal said. “It starts with those sourcing data being cognizant of the risk of bias and using diverse, representative datasets. AI developers should proactively analyze datasets for bias. AI deployers should monitor real-world performance for bias. Ongoing testing and adjustment is needed as AI encounters new data. Independent audits are also valuable. No one can abdicate bias mitigation solely to others in the chain.”
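The kind of monitoring Mandal describes can start very simply. Here is a minimal Python sketch – the group labels, the numbers and the 20% threshold are all assumptions for illustration, not a standard – that compares selection rates across two groups, a check often called demographic parity:

```python
# Minimal bias-audit sketch: compare selection rates across groups.
# Group data and the alert threshold are invented for illustration.
def selection_rate(decisions):
    """Fraction of decisions that were positive (1 = advanced)."""
    return sum(decisions) / len(decisions)

# 1 = advanced to interview, 0 = rejected
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% selected
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # 25% selected

gap = abs(selection_rate(group_a) - selection_rate(group_b))
print(f"selection-rate gap: {gap:.0%}")

if gap > 0.20:  # an arbitrary example threshold, not a legal standard
    print("Warning: large disparity – investigate the model and its data.")
```

A check like this won’t explain why a disparity exists, but run regularly on real-world outcomes it flags when a deployed model needs the deeper testing and adjustment Mandal calls for.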
Should You Tell People When Artificial Intelligence Is Used?
Disclosure is emerging as a key tenet of AI use. When an AI makes a decision in healthcare, for example, patients should be told. Some social media sites likewise require creators to disclose when AI was used to make or alter a video.
“Ethical AI usage hinges on properly handling information, including source citation and adherence to existing guidelines,” said IEEE Senior Member Cristiane Agra Pimentel. “Several publications now permit AI use, provided the writer cites the AI that was used and the date it was used.”
Learn more: How will AI change the way researchers write and publish? A new column in Computer magazine from the IEEE Computer Society argues that academic articles should begin including AI metadata that would allow researchers to study the effects of AI on writing and research.
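The column does not prescribe a format, but to give a sense of what such metadata could record, here is a purely hypothetical Python sketch – every field name is invented for illustration and follows no published standard – capturing the tool and date of use that Pimentel mentions:

```python
# Hypothetical AI-use disclosure record. All field names are invented
# for illustration; they do not follow any published standard.
ai_disclosure = {
    "tool": "ExampleChatbot",  # placeholder name, not a real product
    "version": "2024-04",
    "date_used": "2024-04-25",
    "tasks": ["literature summary", "copyediting"],
    "human_verified": True,    # whether the authors checked the output
}
print(ai_disclosure)
```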