As part of its new responsible AI standard, Microsoft says it intends to keep “people and their goals at the heart of system design decisions”. High-level principles will lead to real changes in practice, the company says, with some features being modified and others being withdrawn from sale.

Microsoft’s Azure Face, for example, is a facial recognition tool used by companies such as Uber as part of their identity-verification processes. Now, any company wishing to use the service’s facial recognition features, including those already embedded in its products, will need to actively apply for access, demonstrating that they meet Microsoft’s ethical AI standards and that the features benefit the end user and society.

Even companies that are granted access will no longer be able to use some of Azure Face’s more controversial features, Microsoft says, and the company will retire facial analysis technology that purports to infer emotional states and attributes such as gender or age.

“We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and to navigate the trade-offs,” said Sarah Bird, a Microsoft product manager. “In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions’, and the inability to generalise the linkage between facial expression and emotional state across use cases.”

Microsoft is not eliminating emotion recognition entirely: the company will continue to use it internally, for accessibility tools such as Seeing AI, which attempts to verbally describe the world to users with visual impairments.

Similarly, the company has limited the use of its Custom Neural Voice technology, which can create synthetic voices that sound almost identical to the original source. “It is easy to imagine how it could be used to impersonate speakers and deceive listeners,” said Natasha Crampton, the company’s chief responsible AI officer.

Earlier this year, Microsoft began watermarking its synthetic voices, incorporating small, inaudible fluctuations in the output that allow the company to tell when a recording was made using its technology. “With the advance of neural text-to-speech (TTS) technology, which makes synthetic speech indistinguishable from human voices, there is a risk of harmful deepfakes,” said Microsoft’s Qinying Liao.