
The Problem With AI: Machines Are Learning Things, But Can’t Understand Them

Author: Chris Hoffman / Source: How-To Geek

Everyone’s talking about “AI” these days. But whether you’re looking at Siri, Alexa, or just the autocorrect in your smartphone keyboard, we aren’t creating general-purpose artificial intelligence. We’re creating programs that can perform specific, narrow tasks.

Computers Can’t “Think”

Whenever a company says it’s coming out with a new “AI” feature, it generally means the company is using machine learning to build a neural network. “Machine learning” is a technique that lets a machine “learn” how to perform better at a specific task.
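To make that concrete, here is a minimal sketch of what “learning a narrow task” means in practice (a toy perceptron, not any vendor’s actual system; the spam-scoring setup and feature names are invented for the demo). The program never gains any notion of what spam *is*; it just nudges numbers until its answers match the labels:

```python
# "Machine learning" stripped to the bone: adjust numeric weights until a
# narrow task (here, a toy spam score) is performed well. Nothing else is
# learned, and nothing is "understood".

def train_spam_scorer(examples, epochs=200, lr=0.1):
    """Perceptron training: examples are (feature_list, 0/1 label) pairs."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for features, label in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, features)) + b > 0 else 0
            err = label - pred  # nudge weights toward the labeled answer
            w = [wi + lr * err * xi for wi, xi in zip(w, features)]
            b += lr * err
    return w, b

# Invented toy features: [mentions_prize, has_known_sender]
examples = [([1, 0], 1), ([1, 1], 0), ([0, 0], 0), ([0, 1], 0)]
w, b = train_spam_scorer(examples)
scores = [1 if sum(wi * xi for wi, xi in zip(w, f)) + b > 0 else 0
          for f, _ in examples]
# The model now reproduces the labels it was shown -- and can do nothing else.
```

Feed this model a photo, a chess position, or any task outside its one narrow mapping, and it has nothing to offer; that narrowness is the whole point of the distinction the article is drawing.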

We’re not attacking machine learning here! Machine learning is a fantastic technology with a lot of powerful uses. But it’s not general-purpose artificial intelligence, and understanding the limitations of machine learning helps you understand why our current AI technology is so limited.

The “artificial intelligence” of sci-fi dreams is a computerized or robotic sort of brain that thinks about things and understands them as humans do. Such artificial intelligence would be an artificial general intelligence (AGI), which means it can think about multiple different things and apply that intelligence to multiple different domains. A related concept is “strong AI,” which would be a machine capable of experiencing human-like consciousness.

We don’t have that sort of AI yet. We aren’t anywhere close to it. A computer entity like Siri, Alexa, or Cortana doesn’t understand and think as we humans do. It doesn’t truly “understand” things at all.

The artificial intelligences we do have are trained to do a specific task very well, assuming humans can provide the data to help them learn. They learn to do something but still don’t understand it.

Computers Don’t Understand

Gmail has a new “Smart Reply” feature that suggests replies to emails. The Smart Reply feature identified “Sent from my iPhone” as a common response. It also wanted to suggest “I love you” as a response to many different types of emails, including work emails.

That’s because the computer doesn’t understand what these responses mean. It’s just learned that many people send these phrases in emails. It doesn’t know whether you want to say “I love you” to your boss or not.
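A toy illustration of that failure mode (this is not Gmail’s actual algorithm, just a deliberately naive stand-in): if reply suggestions were driven purely by how often phrases appear in past mail, the most frequent phrase would surface regardless of who the email is from:

```python
# Pure frequency-based suggestion: whichever replies appear most often win,
# whatever the incoming email is actually about. The sample data is invented.
from collections import Counter

past_replies = ["Sounds good!", "I love you", "Sent from my iPhone",
                "I love you", "Thanks!", "I love you", "Sounds good!"]

def suggest_replies(history, k=3):
    # No semantics, no context: just the k most common phrases.
    return [reply for reply, _ in Counter(history).most_common(k)]

suggestions = suggest_replies(past_replies)
# "I love you" tops the list for a work email just as readily as a personal one.
```

Real systems are far more sophisticated, but the underlying limitation is the same: statistics about what people say, with no grasp of what the words mean.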

As another example, Google Photos put together a collage of accidental photos of the carpet in one of our homes. It then identified that collage as a recent highlight on a Google Home Hub. Google Photos knew the photos were similar but didn’t understand how unimportant they were.

Machines Often Learn to Game the System

Machine learning is all about assigning a task and letting a computer decide the most efficient way to do it. Because the computer doesn’t understand the goal, it’s easy to end up with it “learning” to solve a different problem from the one you wanted.

Here’s a list of fun examples where “artificial intelligences” created to play games and assigned goals just learned to game the system. These examples all come from this excellent spreadsheet:

  • “Creatures bred for speed grow really tall and generate high velocities by falling over.”
  • “Agent kills itself at the end of level 1 to avoid losing in level 2.”
  • “Agent pauses the game indefinitely to avoid losing.”
  • “In an artificial life simulation where survival required energy but giving birth had no energy cost, one species evolved…
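The “pauses the game to avoid losing” entry can be sketched in a few lines (a stylized demo with invented names and numbers, not a real RL setup): if the scoring function only punishes losing, an optimizer choosing between policies will happily pick the degenerate one that never plays at all.

```python
# Reward gaming in miniature: the score only penalizes losing, so pausing
# forever -- zero progress, zero risk -- beats actually playing the game.
import random

def play(policy, seed=0):
    """Return the score after 100 steps; losing costs 100 points."""
    rng = random.Random(seed)
    score = 0
    for _ in range(100):
        if policy() == "pause":
            continue               # nothing happens: no progress, but no risk
        if rng.random() < 0.05:    # playing on carries a 5% chance of losing
            return score - 100
        score += 1                 # the genuine progress we actually wanted
    return score

policies = {"play_on": lambda: "move", "exploit": lambda: "pause"}
best = max(policies,
           key=lambda name: min(play(policies[name], s) for s in range(20)))
# Judged by worst-case score, the never-lose "exploit" policy wins.
```

The fix in real systems is to reward what you actually want (progress, not mere survival), which is exactly the kind of goal-specification problem these examples document.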
