Facebook can now describe photos to blind people
Facebook's new feature is quite a doozy - it's found a way to use artificial intelligence to describe photos, so that blind and visually impaired people won't miss out on a large part of their news feed.
It's called Automatic Alternative Text. As TechCrunch notes, to get the benefit, you'll have to use a screen reader - ie, a piece of software that identifies what's on your screen and reads it out using text-to-speech.
Without the feature, a screen reader would tell you who shared the photo and read the accompanying text the person added. But that's not much good, as captions usually elaborate on the photo rather than describing what it depicts. Using the feature, the screen reader will say something like "image may contain three people, smiling, outdoors".
Not the best description, admittedly, but it's still a huge step.
It uses object recognition based on neural networks (a type of machine learning model, ie artificial intelligence).
It will pick out objects like cars, mountains, grass and food, activities like swimming and tennis, and apply descriptive words for appearance such as "smiling" or "selfie".
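To make that last step concrete, here is a minimal, purely illustrative sketch in Python of how such a system could turn the tags an object-recognition model emits into the "image may contain..." phrase a screen reader would read out. Facebook hasn't published its implementation; the function name, tag list and confidence threshold below are assumptions for demonstration only.

```python
# Illustrative sketch only - not Facebook's actual code.
# Takes tags (with confidence scores) that an object-recognition model might
# produce for a photo, keeps the confident ones, and builds the alt text.

def build_alt_text(predictions, threshold=0.8):
    """Keep high-confidence tags and join them into an alt-text string."""
    confident = [tag for tag, score in predictions if score >= threshold]
    if not confident:
        return "Image may contain: no description available"
    return "Image may contain: " + ", ".join(confident)

# Hypothetical model output for one photo:
sample_predictions = [
    ("three people", 0.94),
    ("smiling", 0.91),
    ("outdoors", 0.87),
    ("car", 0.35),   # dropped: below the confidence threshold
]

print(build_alt_text(sample_predictions))
# -> Image may contain: three people, smiling, outdoors
```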
The feature is currently only available for the iOS app, on devices using screen readers set to English, but we'd expect it to become more widely available soon. We would also expect it to become significantly more advanced before too long. Which is amazing, if a little scary.
Source: Digital Spy