Microsoft is updating its Responsible AI Standard and has revealed that it is retiring most of Azure Face’s emotion and facial recognition capabilities.
The Responsible AI Standard (opens in new tab) is Microsoft’s internal ruleset for building AI systems. The company wants AI to be a positive force in the world and never be misused by bad actors. It’s a standard that had never been shared with the public before. With this new change, however, Microsoft decided that now was the time.
Emotion and facial recognition software has been controversial, to say the least. Several organizations have called for this technology to be banned. Fight for the Future, for example, wrote an open letter in May urging Zoom to halt its own development of emotion-tracking software, calling it “invasive” and “a violation of privacy and human rights.”
Policy change
As established, Microsoft will rework its Azure Face service to meet the requirements of its new Responsible AI Standard. First, the company is removing public access to the AI’s emotion-scanning feature. Second, Azure Face will no longer be able to identify a person’s facial attributes, including “gender, age, smile, beard, hair and makeup.”
The reason for the retirement is that the global scientific community still does not have a “clear consensus on the definition of ‘emotions.’” Natasha Crampton, chief responsible AI officer at Microsoft, said experts inside and outside the company have voiced their concerns. The problem is “the challenges in how inferences generalize across use cases, regions, and demographics,” along with the heightened privacy concerns around this type of capability.
In addition to Azure Face, Microsoft’s Custom Neural Voice will face similar restrictions. Custom Neural Voice (opens in new tab) is a surprisingly realistic text-to-speech service. Going forward, it will be limited to a select group of “managed customers and partners” — people who work directly with Microsoft’s account teams. The company says that while the technology has great potential, it can also be used for impersonation. To keep their access to Neural Voice, all existing customers must submit an intake form and be approved by Microsoft. Those not approved by June 30, 2023, will lose access to the service.
Still in the works
Despite all of that, Microsoft isn’t abandoning its facial recognition technology entirely. The announcement concerns public access only. Sarah Bird, Principal Group Product Manager at Azure AI, wrote about responsible facial recognition (opens in new tab). In that post, she states that “Microsoft recognizes these capabilities can be valuable when used for a set of controlled accessibility scenarios.” One such scenario is Seeing AI (opens in new tab), an iOS app that helps people with visual impairments identify the people and objects around them.
It’s good to see another tech giant acknowledging the problems with facial recognition and its potential for abuse. IBM did something similar in 2020, although its approach was more absolute.
Back in 2020, IBM announced it was abandoning work on facial recognition out of fear that the technology could be misused for mass surveillance. Seeing these two industry titans step back from this technology is a win for critics of facial recognition. If you’re interested in learning more about AI, we recently published an article on what it can do for cybersecurity.