The Challenge
AI can generate ultra-realistic images and videos that show people doing or saying things they never did or said. Such “deepfakes” make disinformation harder to detect.
New AI tools are both targets and tools for attackers.
The information you enter into a chatbot may be used to train it, which could result in models that reveal sensitive or private information.
Biased, inaccurate, or false data can be injected into AI models to produce erroneous results and flawed systems. Such poisoning techniques have been used in the real world, for example to retrain spam filters.
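A minimal sketch of how such a poisoning attack works, using a hypothetical word-count spam filter (the class and messages below are invented for illustration): an attacker who can feed mislabeled examples into the training loop gradually flips the model's verdict on spam-like text.

```python
from collections import Counter

class ToySpamFilter:
    """Hypothetical word-frequency spam filter, for illustration only."""
    def __init__(self):
        self.spam_counts = Counter()
        self.ham_counts = Counter()

    def train(self, message, is_spam):
        # Online learning: every labeled message updates the word counts.
        counts = self.spam_counts if is_spam else self.ham_counts
        counts.update(message.lower().split())

    def is_spam(self, message):
        words = message.lower().split()
        spam_score = sum(self.spam_counts[w] for w in words)
        ham_score = sum(self.ham_counts[w] for w in words)
        return spam_score > ham_score

f = ToySpamFilter()
f.train("win free money now", is_spam=True)
f.train("meeting agenda attached", is_spam=False)
assert f.is_spam("free money")        # correctly flagged as spam

# Poisoning: the attacker floods the feedback channel with
# spam-like messages mislabeled as legitimate mail.
for _ in range(10):
    f.train("free money", is_spam=False)
assert not f.is_spam("free money")    # the filter now lets it through
```

The attack needs no access to the model internals, only to the training data, which is why systems that learn continuously from user feedback are especially exposed.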
Phishing scams involve the use of fraudulent emails to trick people or companies into sending money or sharing sensitive data. AI makes it easier to create even more realistic – but still fraudulent – emails and voicemails.
AI developers typically create guardrails to prevent their creations from generating offensive or illegal content. Plenty of users try anyway – a technique known as “jailbreaking.”
AI is also being used to exploit or counteract other AI systems – for instance, through adversarial techniques that fool models which often have critical safety requirements.
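The idea behind an adversarial (evasion) attack can be sketched with a toy linear classifier; the weights and inputs below are invented for illustration. A tiny, targeted perturbation – too small to change the input's apparent meaning – pushes the model's score across the decision boundary.

```python
# Hypothetical linear classifier: "safe" if w · x > 0.
w = [2.0, -1.0, 0.5]

def classify(x):
    return sum(wi * xi for wi, xi in zip(w, x)) > 0  # True = "safe"

x = [1.0, 0.5, 1.0]   # legitimate input; score = 2.0, classified "safe"
assert classify(x)

# Adversarial perturbation (in the spirit of the fast gradient sign
# method): nudge each feature a small step eps in the direction that
# lowers the score, leaving the input almost unchanged.
eps = 0.8
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]
assert not classify(x_adv)   # score is now -0.8: misclassified
```

Real attacks target deep networks rather than a three-weight linear model, but the mechanism is the same, which is why safety-critical systems such as autonomous-driving perception need defenses against adversarially crafted inputs.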