AI developers must move quickly to develop and deploy systems that address algorithmic bias, said Kathy Baxter, Principal Architect of Ethical AI Practice at Salesforce. In an interview with ZDNET, Baxter emphasized the need for diverse representation in data sets and user research to ensure fair, unbiased AI systems. She also highlighted the importance of making AI systems transparent, understandable, and accountable while protecting individual privacy, and she stressed the need for cross-sector collaboration, like the model used by the National Institute of Standards and Technology (NIST), to develop robust, safe AI systems that benefit everyone.
One of the fundamental questions in AI ethics is how to develop and deploy AI systems without reinforcing existing social biases or creating new ones. To that end, Baxter stressed the importance of asking who benefits from and who pays for AI technology. It's crucial to consider the data sets being used and ensure they represent everyone's voices. Inclusivity in the development process and identifying potential harms through user research are also essential.
“This is one of the fundamental questions we have to discuss,” Baxter said. “Women of color, in particular, have been asking this question and doing research in this area for years now. I’m thrilled to see many people talking about this, particularly with the use of generative AI. But the things that we need to do, fundamentally, are ask who benefits and who pays for this technology. Whose voices are included?”
Social bias can be infused into AI systems through the data sets used to train them. Unrepresentative data sets containing biases, such as image data sets with predominantly one race or lacking cultural differentiation, can result in biased AI systems. Furthermore, applying AI systems unevenly in society can perpetuate existing stereotypes.
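One concrete first step toward catching the data-set problem described above is a simple representation audit before training. The sketch below is illustrative only, assuming examples carry group annotations (the `representation_report` function, the `threshold` value, and the group names are all hypothetical, not part of any particular library or of Salesforce's practice):

```python
from collections import Counter

def representation_report(labels, threshold=0.05):
    """Flag groups whose share of a data set falls below a minimum.

    `labels` is a list of group annotations attached to training
    examples; names and threshold here are illustrative assumptions.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "underrepresented": share < threshold,
        }
    return report

# Example: a toy image data set heavily skewed toward one group.
tags = ["group_a"] * 90 + ["group_b"] * 8 + ["group_c"] * 2
report = representation_report(tags)
```

An audit like this only surfaces imbalance in whatever annotations exist; it cannot detect groups that were never labeled or collected in the first place, which is why Baxter's point about inclusive user research matters alongside any automated check.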
To make AI systems transparent and understandable to the average person, prioritizing explainability during the development process is key. Techniques such as chain-of-thought prompting can prompt AI systems to show their work, making their decision-making process more understandable. User research is also vital to ensure that explanations are clear and that users can identify uncertainties in AI-generated content.
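In practice, chain-of-thought prompting means instructing the model to write out intermediate reasoning before its final answer. A minimal sketch of such a prompt wrapper (the `build_cot_prompt` function and its wording are hypothetical, not a quoted Salesforce technique):

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction so the model
    is asked to show its reasoning before giving a final answer."""
    return (
        "Answer the question below. First think step by step and write "
        "out your reasoning, then give a final answer on its own line "
        "prefixed with 'Answer:'.\n\n"
        f"Question: {question}"
    )

prompt = build_cot_prompt(
    "If a store has 12 apples and sells 5, how many remain?"
)
```

The visible reasoning is what makes the output auditable: a user (or a reviewer) can check the intermediate steps rather than trusting a bare answer.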
Protecting individuals' privacy and ensuring responsible AI use requires transparency and consent. Salesforce follows guidelines for responsible generative AI, which include respecting data provenance and only using customer data with consent. Allowing users to opt in, opt out, or control how their data is used is critical for privacy.
“We only use customer data when we have their consent,” Baxter said. “Being transparent when you are using someone’s data, allowing them to opt-in, and allowing them to go back and say when they no longer want their data to be included is really important.”
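The opt-in model Baxter describes can be enforced mechanically by filtering on a consent flag before any data is used, and by treating a revoked flag as removal from future use. A minimal sketch under those assumptions (the `CustomerRecord` type and `usable_records` function are hypothetical, not Salesforce's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    consented: bool  # True only if the customer has opted in
    data: dict

def usable_records(records):
    """Return only records whose owners have opted in. Revoking
    consent (setting consented=False) drops a record from future use."""
    return [r for r in records if r.consented]

records = [
    CustomerRecord("c1", True, {"plan": "pro"}),
    CustomerRecord("c2", False, {"plan": "basic"}),  # opted out
]
usable = usable_records(records)
```

Keeping the consent check at the data-access boundary, rather than scattered through application code, makes "go back and say when they no longer want their data to be included" a single flag flip.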
As the competition for innovation in generative AI intensifies, maintaining human control and autonomy over increasingly autonomous AI systems is more important than ever. Empowering users to make informed decisions about the use of AI-generated content and keeping a human in the loop can help maintain control.
Ensuring AI systems are safe, reliable, and usable is crucial; industry-wide collaboration is vital to achieving this. Baxter praised the AI risk management framework created by NIST, which involved more than 240 experts from various sectors. This collaborative approach provides a common language and framework for identifying risks and sharing solutions.
Failing to address these ethical AI issues can have severe consequences, as seen in cases of wrongful arrests due to facial recognition errors or the generation of harmful images. Investing in safeguards and focusing on the here and now, rather than solely on potential future harms, can help mitigate these issues and ensure the responsible development and use of AI systems.
While the future of AI and the possibility of artificial general intelligence are intriguing topics, Baxter emphasizes the importance of focusing on the present. Ensuring responsible AI use and addressing social biases today will better prepare society for future AI advancements. By investing in ethical AI practices and collaborating across industries, we can help create a safer, more inclusive future for AI technology.
“I think the timeline matters a lot,” Baxter said. “We really have to invest in the here and now and create this muscle memory, create these resources, create regulations that allow us to continue advancing but doing it safely.”