The rapid progress of artificial intelligence has necessitated guardrails and a philosophy for the ethical deployment of the technology in the workplace. AI should act as a copilot working alongside humans rather than operating on autopilot, Paula Goldman, chief ethical and humane use officer at Salesforce, said at Fortune's Brainstorm AI conference in London on Monday.
“We need next-level controls. We need people who can understand what’s going on in the AI system,” she told Fortune Executive News Editor Nick Lichtenberg. “And most importantly, we need to develop AI products that take into account what AI is good and bad at, as well as what humans are good and bad at in their own judgment when making decisions.”
Chief among Goldman’s concerns, as the number of AI users grows, is the technology’s ability to generate content responsibly, including content free of racial or gender bias, and the misuse of user-generated content in things such as deepfakes. She warns that unethical use of AI could limit funding for, and development of, the technology.
“It’s entirely possible that the next AI winter will be driven by trust issues or issues with people accepting AI,” Goldman said.
Future AI productivity gains in the workplace will depend on training and on people’s willingness to adopt new technologies, she said. To build trust in AI products, especially among the employees using them, Goldman suggests implementing “conscious friction,” essentially a series of checks and balances to ensure that AI tools in the workplace create more value than harm.
What Salesforce has done to implement “conscious friction”
Salesforce has begun monitoring for potential biases in its own use of AI. For example, the software giant has developed a marketing segmentation product that generates relevant demographic data for email campaigns. While the AI program generates a list of potential demographics for a campaign, it is a human’s job to select which ones to use, to avoid excluding relevant recipients. Likewise, the company’s Einstein platform shows a pop-up warning when generative AI prompts include zip codes, which are often correlated with particular races or socioeconomic statuses.
“We’re increasingly moving toward systems that can detect these kinds of anomalies and encourage people to take a second look at them,” Goldman said.
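Salesforce has not described the internals of these checks, but the idea of “conscious friction” can be sketched in a few lines of code: before an AI-generated audience segment is used, fields that can act as proxies for race or socioeconomic status, such as zip codes, trigger a warning and require a human to confirm. The field names, function, and warning text below are hypothetical illustrations, not Salesforce's Einstein implementation.

```python
# Hypothetical sketch of a "conscious friction" check on an AI-generated
# marketing segment. Field names and wording are invented for illustration.

# Criteria that commonly correlate with race or socioeconomic status.
PROXY_FIELDS = {"zip_code", "postal_code", "household_income"}

def review_segment(segment_criteria: dict) -> dict:
    """Return the criteria plus any warnings that should prompt human review."""
    flagged = sorted(set(segment_criteria) & PROXY_FIELDS)
    return {
        "criteria": segment_criteria,
        "needs_human_review": bool(flagged),
        "warnings": [
            f"'{field}' can be a proxy for race or socioeconomic status; "
            "confirm it is appropriate for this campaign."
            for field in flagged
        ],
    }

if __name__ == "__main__":
    result = review_segment({"zip_code": ["94105"], "age_range": "25-34"})
    if result["needs_human_review"]:
        for warning in result["warnings"]:
            print("WARNING:", warning)  # the "friction" shown to the user before proceeding
```

The point of the sketch is the design choice Goldman describes: the system does not block the campaign, it simply slows the user down and asks for a second look before the AI's output is acted on.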
In the past, bias and copyright infringement have undermined trust in AI. MIT Media Lab research found that artificial intelligence software programmed to identify the race and gender of different people had an error rate of less than 1% for light-skinned men, but a 35% error rate for dark-skinned women, including such famous figures as Oprah Winfrey and Michelle Obama. High-stakes uses of facial recognition technology, such as equipping drones or body cameras with software that could be used to carry out lethal attacks, are compromised by these inaccuracies in the AI, says Joy Buolamwini, the study’s author. Likewise, algorithmic biases in healthcare databases can lead AI software to suggest inappropriate treatment plans for certain patients, the Yale School of Medicine found.
Even for those working in industries where lives are not at risk, AI applications raise ethical concerns, including OpenAI reportedly scraping hours of user-generated YouTube content, potentially violating content creators’ copyrights without their consent. Between spreading misinformation and failing at basic tasks, AI has a long way to go before it realizes its potential as a useful tool for humans, Goldman says.
But developing smarter AI features, along with human-led safeguards that build trust, is what Goldman sees as most exciting about the industry’s future.
“How do you design products where you know what you can trust, and where you need to take a second look and apply human judgment?” she said.