Written by IEEE | April 17, 2019 | Updated: May 22, 2019
Q (IEEE): How would you characterize our current societal relationship with autonomous and intelligent systems? Where do you hope it will head in the next 10 years?
A (Havens): Overall, I think society is ambivalent on the subject – one moment people are told autonomous and intelligent systems (A/IS) will save the world, and the same day they read that autonomous technologies may put their jobs at risk. This is why a great deal of our work with the IEEE Global Initiative is to move beyond polarizing utopian/dystopian conversations and prioritize ethically aligned design at the outset of all A/IS manufacturing.
From the engineering and design side of things, by prioritizing the analysis of ethical, cultural, and values-based perspectives before creating A/IS, any “A/IS creators” (our term for everyone in the supply chain designing and manufacturing these algorithmic technologies, including social scientists, marketers, anthropologists, etc.) can use applied-ethics methodologies to create products that are more contextually relevant and valuable for customers and stakeholders than what currently exists.
By working this way, versus “moving fast and breaking things,” organizations and policymakers can identify the “positive unintended consequences” (also known as innovation) of A/IS, rather than using a narrower definition of risk or harm that doesn’t examine the specific issues around human data, identity, and agency that A/IS raise.
Q: Tell us about the create/curate/control concept from Ethically Aligned Design. What would that look like for the average web user?
A: This language comes from the Personal Data and Individual Agency chapter of Ethically Aligned Design, First Edition, which focuses on the need to provide all individuals with tools and policies that support their ability to access and share their data as they choose.
How this would look for the average user is as follows:
- Create: Provide every individual with the means to create and project their own terms and conditions regarding their personal data, in a machine-readable form that services can read and agree to.
- Curate: Provide every individual with a personal data or algorithmic agent which they curate to represent their terms and conditions in any real, digital or virtual environment.
- Control: Provide every individual access to services that allow them to create a trusted identity and control the safe, specific, and finite exchange of their data.
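To make the create/curate/control idea concrete, here is a minimal sketch of what machine-readable personal terms and conditions, and an agent-style check against them, might look like. Every field name and rule below is an illustrative assumption of mine, not any standardized schema from IEEE or Ethically Aligned Design.

```python
import json

# Hypothetical personal "terms and conditions" for data sharing (create).
# The structure and field names are assumptions for illustration only.
MY_TERMS = {
    "owner": "user-123",
    "permitted_purposes": ["service_delivery"],      # uses the owner allows
    "prohibited": ["third_party_sale", "profiling"],  # uses the owner forbids
    "retention_days": 30,                             # maximum storage window
    "revocable": True,                                # consent can be withdrawn
}


def request_allowed(terms: dict, purpose: str, retention_days: int) -> bool:
    """Return True only if a data request complies with the owner's terms.

    A personal data agent (curate/control) could run checks like this on
    the individual's behalf before any data is exchanged.
    """
    if purpose in terms["prohibited"]:
        return False
    if purpose not in terms["permitted_purposes"]:
        return False
    return retention_days <= terms["retention_days"]


# Because the terms are plain data, they can be serialized and read by machines:
machine_readable = json.dumps(MY_TERMS)

print(request_allowed(MY_TERMS, "service_delivery", 14))  # allowed
print(request_allowed(MY_TERMS, "third_party_sale", 1))   # refused
```

The point of the sketch is simply that individual terms can be expressed as data rather than prose, so services can evaluate and honor them automatically.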
People are used to hearing the term “privacy” as it relates to their data and to protections from the EU General Data Protection Regulation and other related practices and regulations, as well as “privacy by design” methodologies. However, organizations can’t protect users’ data at all times, and the technological infrastructure already exists for everyone to create their own “terms and conditions” for their data, which then need to be honored.
The idea is to make it clear when people have shared their current preferences or provided feedback in real time. On a related note, there’s an IEEE Standards Working Group focused on this idea of creating machine-readable terms and conditions, which all are invited to join.
Q: What are the biggest hurdles in the way of personal data access control, and what would be needed to overcome them?
A: There’s often the assumption that getting people to use blockchain-like technologies or smart contracts will be challenging because “people don’t care about their data.” This is a dangerous myth – people have been trained to click “consent” after being told their data will only be used for specific services. It’s common practice for people’s data to be shared with companies’ partners and others, without individuals understanding who’s getting their data and why. Modern technology changes that, and it’s essential that all humans have new tools to project their digital truth in the digital and algorithmic era.
The hurdle to overcome, and the opportunity, is to recognize that people’s data represents their identity and is key to their independence. That means it needs to be protected whether or not they say or actually believe it’s important. If a citizen were to say, “I don’t care if my house gets broken into and my property is stolen or my family is harmed,” would a government respond by firing police officers and judges?
My point is not to ignore people’s assumptions or beliefs, but to recognize that providing the technology and policy to protect their right to have agency and choice in the future is critical in the algorithmic age.
Q: What role do you see engineers and technologists playing in shaping the future of autonomous and intelligent systems?
A: These are literally the stakeholders that are building the future. Policymakers are by and large still catching up on how the technology works and the potential societal ramifications of AI. This is why our focus is also on supporting the engineers and technologists doing this critical work by bringing social scientists, marketers, designers and other experts alongside them as they create the initial plans for these amazing technologies. It’s critical to obtain various perspectives to fully understand the effects certain technologies have on end users.
The work we’re advocating is largely about using applied ethics, or values-driven methodologies, to understand not just the harms but also the opportunities these amazing technologies create, via safe and thorough testing rather than rolling them out with large-scale unintended consequences. Where some see this as “hindering innovation,” we see it as increasing innovation: identifying the areas where people’s values can be better honored while protecting end users’ human and other rights. Constraints that build trust are worth the time and investment for all engineers and technologists today, for a safe and flourishing future.
To download and read the full text of Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition (EAD1e), visit https://ethicsinaction.ieee.org/