Automated decision-making systems can help human resources departments diversify the workplace, help lenders make fair and equitable mortgage decisions, and even help the criminal justice system eliminate racial bias in sentencing and parole recommendations. All of this, however, assumes that the data these systems are trained on is free of bias, and that is not always true. It is therefore imperative that researchers and technologists operationalize responsibility at every stage of the AI lifecycle.