December 4, 2024
If you’ve heard of deepfakes, the highly realistic videos of people doing things they’ve never done or saying things they’ve never said, you may think of them as a dubious technological development. They’ve been used to impersonate executives in sophisticated phishing scams, with fake voicemails requesting urgent fund transfers from employees. And they’ve been at the center of other controversies that damage reputations and cause real harm.
Their very existence has set off a flood of research into ways to identify them.
But deepfakes aren’t all bad. In fact, they can be used for good.
They could allow executives of global businesses to deliver messages to employees in their native languages, translating speeches in real time with lip movements matched to the words as they are spoken. They can be used to create realistic simulations for workforce development, or to bring historical figures to life in schools, making complex topics more relatable.
Why Researchers Need ‘Good’ Deepfakes
The medical field is one of the biggest users of helpful deepfakes. These deepfakes are often used to create extra training data for machine learning programs. For example, when researchers train artificial intelligence (AI) to detect certain types of cancer in medical images like MRIs or X-rays, they might use deepfakes to add more images to their datasets.
This is necessary because many datasets are small or incomplete. Training AI models requires painstaking attention to detail, with lots of human intervention to label certain features of a dataset. Sometimes, scans aren’t labeled correctly or the labels are inconsistent because different people used different criteria. These issues can make it harder for AI models to learn accurately. Generating synthetic data can help overcome these challenges, though the quality of the synthetic data must also be carefully monitored.
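The augmentation idea described above can be sketched in a few lines. This is a toy illustration only: the function name and the noise-and-flip "generator" below are stand-ins for the far more sophisticated generative models (such as GANs or diffusion models) that researchers actually use, and the 4x4 arrays stand in for real scans. Each synthetic image inherits the label of the real image it was derived from, which only helps if the original labels are trustworthy.

```python
import numpy as np

def augment_with_synthetic(images, labels, copies=3, noise_std=0.05, seed=0):
    """Expand a small labeled dataset with synthetic variants.

    Toy stand-in for a generative model: each synthetic image is a
    randomly flipped, noise-perturbed copy of a real one. The label is
    inherited from the source image.
    """
    rng = np.random.default_rng(seed)
    synth_images, synth_labels = [], []
    for img, lbl in zip(images, labels):
        for _ in range(copies):
            # Randomly mirror the image, then add pixel noise.
            variant = np.fliplr(img) if rng.random() < 0.5 else img.copy()
            variant = variant + rng.normal(0.0, noise_std, size=img.shape)
            # Keep pixel intensities in the valid [0, 1] range.
            synth_images.append(np.clip(variant, 0.0, 1.0))
            synth_labels.append(lbl)
    return images + synth_images, labels + synth_labels

# Two toy "scans": 4x4 grayscale patches with binary labels.
real = [np.random.default_rng(1).random((4, 4)) for _ in range(2)]
labels = [0, 1]
aug_imgs, aug_labels = augment_with_synthetic(real, labels, copies=3)
print(len(aug_imgs))  # 2 real + 6 synthetic = 8
```

In practice, the quality check mentioned above matters: synthetic scans that drift too far from real anatomy can teach a model the wrong features, so generated data is typically reviewed or filtered before it enters the training set.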
It’s Becoming Easier to Make a Deepfake
“While the technology to create deepfakes has become more accessible over the years, it still requires a certain level of expertise,” said IEEE Senior Member Vivekanandhan Muthulingam.
“There are user-friendly tools and applications available that allow individuals to experiment with deepfake creation without extensive programming knowledge,” Muthulingam said. “However, achieving high-quality results still necessitates a deeper understanding of machine learning principles and video editing.”
Those making these deepfakes also need to understand the underlying subject matter.
“To create ‘good’ deepfakes, both AI and domain knowledge are required,” said IEEE Fellow Houbing Song.
Ethical Considerations
Experts warn that ethical responsibilities don’t go away just because deepfakes are created for a good cause. Using them responsibly means being honest and putting protections in place to support learning and innovation without risking trust or safety. There may also be ethical considerations regarding training data, which may be subject to copyright or intellectual property claims, or because of concerns over using patient information for training data.
There may also be grey areas in which the value of deepfakes isn’t clear cut.
“Whether a deepfake is good or not will be judged by how well its benefits align with societal expectations,” Song said. “In the long run, if we put deepfakes to good use, the benefits of deepfakes will outweigh the risks.”
Learn more: For an in-depth look at how researchers use machine learning to create synthetic data in medical images, check out this article from IEEE.