Microsoft unveils new AI tools to boost tech accessibility
Microsoft has announced several new artificial intelligence tools and advancements in a bid to enhance tech accessibility.
The tools were unveiled at the tech giant's 13th annual Microsoft Ability Summit. The digital event brought together the company's CEO, Satya Nadella, policy leaders such as US Secretary of Transportation Pete Buttigieg, and 20,000 attendees from 100 countries to discuss the future of inclusion.
"AI has been a big topic this year as we find ourselves at a historic intersection of opportunity and responsibility to the world around us," says Jenny Lay-Flurrie, Microsoft's Chief Accessibility Officer.
"AI has the potential to enhance human cognitive abilities in thinking, reasoning, learning and communication, but the evolution of AI also comes with great responsibility and must incorporate and address a broad range of diverse human needs, barriers, capabilities and experiences.
"Accessible technology is a fundamental building block that can unlock opportunities in every part of society and empower people across the spectrum of disability," she says.
"But it's not just about technology. We are committed to tackling the disability divide and learning through our technology led strategy with three additional pillars, People, Partnership and Policy.
The main announcements include:
Azure advancements and AI applications:
· New state-of-the-art Vision Services: Vision Services power features across Microsoft 365 applications such as Teams, PowerPoint, Outlook, Word, Designer and OneDrive to make the user experience more innovative and accessible. New capabilities improve content discoverability through automatic captioning, background removal, video summarisation, and image retrieval that measures the similarity between images and text. Users can also track movements, analyse environments and receive real-time alerts, with responsible AI controls built in.
· Seeing AI: Microsoft's Seeing AI app uses Azure and AI to help people who are blind or have low vision navigate the world around them. The company is launching a new Seeing AI collaboration with Haleon, adding more than 1,500 products to the Seeing AI code library. In addition, Seeing AI introduced a new Indoor Navigation feature, allowing someone who is blind or has low vision to independently navigate through a building by following spatial audio cues.
· LinkedIn: More than 40% of LinkedIn posts include at least one image. Leveraging Azure Cognitive Services for vision, LinkedIn is adding automatic alt-text descriptions and captioning to its platform (see the sketch after this list).
· Reddit: The platform will generate captions for millions of images, making it easier for users to discover and understand content.
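To give a flavour of how this kind of automatic captioning works, here is a minimal sketch that asks the Azure Computer Vision Analyze Image REST API (v3.2) to describe an image and returns the top caption as alt text. The endpoint, key and image URL are placeholders, and this is an illustrative sketch rather than LinkedIn's or Reddit's actual integration.

```python
# Minimal sketch: generating an alt-text caption with the Azure Computer Vision
# Analyze Image REST API (v3.2). Endpoint, key and image URL are placeholders.
import requests

AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
AZURE_KEY = "<your-subscription-key>"                                   # placeholder

def generate_alt_text(image_url: str) -> str:
    """Ask the Vision service to describe an image and return the top caption."""
    response = requests.post(
        f"{AZURE_ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Description"},
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
        timeout=30,
    )
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    # Fall back to a generic string if the service returns no caption.
    return captions[0]["text"] if captions else "No description available"

if __name__ == "__main__":
    print(generate_alt_text("https://example.com/photo.jpg"))
```

In practice a platform would run a call like this when an image is uploaded without a user-supplied description, storing the generated caption as the image's alt text.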
New adaptive accessories:
Microsoft is extending the customisable 3D-printed attachments currently available for the Microsoft Business Pen and Microsoft Classroom Pen 2 to its existing Surface Pen later this year. Printed through Shapeways, the grips offer new ways for users with limited mobility to hold and use the pens, supporting their creativity and productivity on Surface devices.
'Accessibility Assistant' in M365:
Microsoft 365 is gaining a new "Accessibility Assistant" to help creators produce more accessible content. The assistant offers better defaults, real-time remediation and clear guidance to prevent and correct accessibility issues.
New Inclusive Design for Cognition Guidebook:
Microsoft has launched a new Inclusive Design for Cognition Guidebook to help teams build products for people with a range of cognitive abilities.
Microsoft Translator:
Microsoft Translator has added 13 new African languages, including Yoruba, Hausa and Igbo, and now supports speech-to-text capabilities in 125 languages. It lets users hold real-time multi-language conversations and translate menus, street signs, websites, documents and more, making technology more inclusive for people around the world who are deaf or hard of hearing.
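As an illustration of how these languages can be used programmatically, the sketch below calls the Microsoft Translator Text REST API (v3.0) to translate a sentence into Yoruba, Hausa and Igbo. The key and region values are placeholders; this is a minimal example under those assumptions, not Microsoft's own sample code.

```python
# Minimal sketch: translating text into Yoruba ("yo"), Hausa ("ha") and Igbo ("ig")
# with the Microsoft Translator Text REST API (v3.0). Key and region are placeholders.
import requests

TRANSLATOR_KEY = "<your-translator-key>"       # placeholder
TRANSLATOR_REGION = "<your-resource-region>"   # placeholder

def translate(text: str, targets: list[str]) -> dict[str, str]:
    """Return a mapping of target language code -> translated text."""
    response = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "to": targets},
        headers={
            "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
            "Ocp-Apim-Subscription-Region": TRANSLATOR_REGION,
            "Content-Type": "application/json",
        },
        json=[{"text": text}],
        timeout=30,
    )
    response.raise_for_status()
    return {t["to"]: t["text"] for t in response.json()[0]["translations"]}

if __name__ == "__main__":
    print(translate("Welcome to the Ability Summit", ["yo", "ha", "ig"]))
```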