Acknowledging some of these criticisms, Microsoft said Tuesday that it plans to remove those features for tracking, analyzing and identifying individuals from its artificial intelligence service. They will stop being available to new users this week and will be phased out for existing users within the year.

The changes are part of Microsoft’s push for tighter controls on its artificial intelligence products. After a two-year review, a team at Microsoft has developed a Responsible AI Standard, a 27-page document that sets out requirements for artificial intelligence systems to ensure they do not have a harmful impact on society. The requirements include ensuring that systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.” Before they are released, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or a life opportunity are subject to review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.

There has been particularly strong concern at Microsoft about the emotion recognition tool, which labeled a person’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise. “There is a huge amount of cultural and geographic and individual variation in the way we express ourselves,” Ms. Crampton said. That led to concerns about reliability, along with the bigger question of whether “facial expression is a reliable indicator of your inner emotional state,” she said.

The age and gender analysis tools being discarded – along with other tools that detect facial attributes such as hair and smiling – could be useful for interpreting visual images for people who are blind or have low vision, for example, but the company decided it was problematic to make the profiling tools generally available to the public, Ms. Crampton said. In particular, she added, the system’s so-called gender classifier was binary, “and that is not consistent with our values.”

Microsoft will also put new controls on its face recognition feature, which can be used to perform identity checks or search for a particular person. Uber, for example, uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool will need to apply for access and explain how they plan to deploy it.

Users will also have to apply and explain how they will use other AI systems with potential for misuse, such as Custom Neural Voice. The service can create a voice print of a person, based on a sample of their speech, so that authors, for example, can create synthetic versions of their voice to read their audiobooks in languages they do not speak. Because of the potential misuse of the tool – to create the impression that people have said things they have not said – speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks detectable by Microsoft.

“We are taking concrete steps to live up to our artificial intelligence principles,” said Ms. Crampton, who has worked as a lawyer at Microsoft for 11 years and joined the Ethical Artificial Intelligence team in 2018. “It is going to be a huge journey.”

Microsoft, like other technology companies, has had stumbles with its artificial intelligence products. In 2016, it released a chatbot on Twitter, called Tay, that was designed to learn “conversational understanding” from the users it interacted with. The bot quickly began posting racist and offensive tweets, and Microsoft had to take it down.

Microsoft’s speech-to-text technology also came under scrutiny after research showed it made far more errors for Black users than for white users. The company had collected diverse speech data to train its artificial intelligence system but had not understood just how diverse language could be. So it hired a sociolinguist from the University of Washington to explain the language varieties that Microsoft needed to know about. It went beyond demographics and regional variety into how people speak in formal and informal settings. “Thinking about race as a determining factor for how someone speaks is actually a bit misleading,” Ms. Crampton said. “What we learned in consultation with the expert is that in fact a huge range of factors affect linguistic variety.”

Ms. Crampton said the journey to fix that speech-to-text disparity helped inform the guidance set out in the company’s new standards. “This is a critical period for setting standards for artificial intelligence,” she said, pointing to Europe’s proposed regulations setting rules and limits on the use of artificial intelligence. “We hope to be able to use our standard to try to contribute to the necessary debate about the standards that technology companies should be held to.”

An intense debate about the potential harms of artificial intelligence has been under way for years in the technology community, fueled by mistakes and errors that have real consequences on people’s lives, such as algorithms that determine whether or not people receive welfare benefits.
In the Netherlands, the tax authorities mistakenly took child care benefits away from needy families after a flawed algorithm penalized people with dual nationality.

Automated software for recognizing and analyzing faces has been especially controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. The company’s vice president of artificial intelligence cited the “many concerns about the place of facial recognition technology in society.”

Washington and Massachusetts have passed legislation requiring, among other things, judicial oversight of police use of facial recognition tools. Ms. Crampton said Microsoft had considered making its facial recognition software available to police in states with such laws on the books but had decided not to do so for now. She said that could change as the legal landscape changes.

Arvind Narayanan, a professor of computer science at Princeton and a prominent artificial intelligence expert, said companies might be stepping back from technologies that analyze faces because they were “more visceral, as opposed to various other kinds of AI that might be dubious but that we don’t necessarily feel in our bones.”

Companies may also realize that, at least for the moment, some of these systems are not that commercially valuable, he said. Microsoft could not say how many users it has for the facial analysis features it is getting rid of. Mr. Narayanan predicted that companies would be less likely to abandon other invasive technologies, such as targeted advertising, which profiles people in order to choose the best ads to show them, because it was a “cash cow.”