Automated decision-making systems can help human resources departments diversify the workplace, help lenders make fair and equitable mortgage decisions, and even help the criminal justice system eliminate racial bias in sentencing and parole recommendations. All of this, however, assumes that the data these systems rely on is free of bias, and that is not always true. It is therefore imperative that researchers and technologists operationalize responsibility at every stage of the AI lifecycle.
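One simple way to operationalize that responsibility is to measure whether a system's favorable decisions are distributed evenly across demographic groups before it is deployed. The sketch below, using entirely made-up illustration data and a hypothetical two-group setup, computes per-group selection rates and their gap (a basic demographic-parity check); it is a minimal illustration of the idea, not a complete fairness audit.

```python
# Minimal sketch of a demographic-parity check.
# The decisions and group labels are fabricated illustration data.

def selection_rate(decisions, groups, group):
    """Fraction of favorable (1) decisions received by members of `group`."""
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks)

# 1 = favorable decision (e.g., loan approved), 0 = unfavorable
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = selection_rate(decisions, groups, "A")  # 4 of 5 approved
rate_b = selection_rate(decisions, groups, "B")  # 2 of 5 approved
gap = abs(rate_a - rate_b)

# A large gap is a signal, not proof, that the data or model treats
# the groups differently and warrants closer human review.
print(f"rate A = {rate_a:.2f}, rate B = {rate_b:.2f}, gap = {gap:.2f}")
```

In practice, a check like this would be one of several metrics (alongside, for example, error-rate comparisons across groups), since no single number can establish that a system is fair.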