How Google Lookout’s AI can describe images for the visually impaired


Screenshot by David Grober/ZDNet

Even before the generative AI boom, the Google Lookout app used AI to help the visually impaired and blind community explore their surroundings using their phone cameras. Launched in March 2019, the app recently added a handy AI-powered feature — Image Q+A.

Also: What is Google Bard? Here’s what you need to know

The Image Q+A feature allows users to upload photos and ask questions about an image using their voice or by typing. The app then returns a detailed description of the image that addresses the user's question.

For example, you can ask about the color of a subject in an image, request specific details such as a person's facial expression, or have the app read text within the image, such as what a sign says.

Although the feature was released in the fall, Google this week shared more insights about it in a post on X (formerly Twitter), including examples of how users have benefited from the technology.

The feature is powered by a Google AI model that, according to Google, was trained to understand and provide specific descriptions of videos.

You can access the feature in the Lookout app, which is free to download. However, it is currently available only in English in the US, UK, and Canada.

Also: I just tried Google’s ImageFX AI image generator, and I was shocked at how good it is.

The app also has several other innovative features, including a text mode, which lets users skim text and hear it read aloud; a food labeling mode, which can identify packaged foods by their labels; a currency mode, which can quickly detect dollars, euros, and Indian rupees; and more.
