Author: Tristan Greene / Source: The Next Web

This is the second story in our continuing series covering the basics of artificial intelligence. While it isn’t necessary to read the first article, which covers neural networks, doing so may add to your understanding of the topics covered in this one.
Teaching a computer how to ‘see’ is no small feat. You can slap a camera on a PC, but that won’t give it sight. For a machine to actually view the world the way people and animals do, it needs computer vision and image recognition.
Computer vision is what powers a bar code scanner’s ability to “see” a bunch of stripes in a UPC. It’s also how Apple’s Face ID can tell whether a face its camera is looking at is yours. Basically, whenever a machine processes raw visual input – such as a JPEG file or a camera feed – it’s using computer vision to understand what it’s seeing. It’s easiest to think of computer vision as the part of the human brain that processes the information received by the eyes – not the eyes themselves.
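To make that concrete, here’s a minimal sketch of what “raw visual input” looks like once a machine has loaded it: just a grid of numbers. The snippet assumes Python with the Pillow and NumPy libraries installed, and “photo.jpg” is a stand-in for any image file.

```python
# A minimal sketch of "raw visual input": loading a JPEG and looking at the
# numbers a machine actually works with. Assumes Pillow and NumPy are installed;
# "photo.jpg" is a stand-in for any image file.
from PIL import Image
import numpy as np

image = Image.open("photo.jpg").convert("RGB")  # hypothetical image file
pixels = np.array(image)                        # a height x width x 3 grid of RGB values

print(pixels.shape)   # e.g. (480, 640, 3)
print(pixels[0, 0])   # the top-left pixel as [red, green, blue] values from 0-255
```

Everything a computer vision system does starts from arrays like that one; the “seeing” is whatever processing happens next.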
One of the most interesting uses of computer vision, from an AI standpoint, is image recognition, which gives a machine the ability to interpret the input received through computer vision and categorize what it “sees.”
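And here’s a rough sketch of image recognition in practice: a pretrained network takes that grid of numbers and maps it to a category label. It assumes PyTorch and a recent version of torchvision; ResNet-18 is used purely as a convenient example, not as the method behind any particular product mentioned in this article.

```python
# A rough sketch of image recognition with a pretrained network.
# Assumes PyTorch and a recent torchvision; ResNet-18 is used only as an example.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, crop, convert to a tensor, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

image = Image.open("photo.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)          # add a batch dimension

with torch.no_grad():
    logits = model(batch)                       # one score per ImageNet category
    index = logits.argmax(dim=1).item()

print(weights.meta["categories"][index])        # e.g. "tabby" or "coffee mug"
```

The specific network doesn’t matter; the point is that the output of computer vision (a grid of pixel values) becomes the input to image recognition (a label).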
Here are some examples of image recognition at work:
- The eBay app lets you search for items using your camera
- This neural network turns pitch black photos into bright images
- Facebook’s AI knows a lot about your photos
- How about an AI that can read your mind?
There’s also the app, for example, that uses your smartphone camera to determine whether an…