Author: Insights Team / Source: Forbes
The mid-1980s: Besides the infamy of mullet haircuts, pink plaid jackets and manic Richard Simmons workout videos, research on artificial intelligence (AI) had ground to a standstill. For starters, computers simply lacked the processing power to make things happen.
Floppy disk-drive machines paled in sophistication compared with modern smartphones, and computer chips wouldn’t hold a million components until 1989. (Compare that with the modern high-water mark of 8 billion.)
Yet another obstacle kept dreams of AI from taking form. In 1984, the American Association for Artificial Intelligence held a fateful meeting where field pioneer Marvin Minsky, of all people, warned the business community that investor enthusiasm for artificial intelligence would eventually lead to disappointment. Sure enough, AI investment began to collapse.

It’s a good thing, then, that visionaries such as Yann LeCun chose not to pay the pessimism much mind. The native of France was not even 30 when he joined the Adaptive Systems Research Department at AT&T Bell Laboratories in New Jersey. There, his enthusiasm for artificial intelligence couldn’t be contained.
At Bell Labs, LeCun developed a number of new machine learning methods, including the convolutional neural network—modeled after the visual cortex in animals. LeCun’s work also contributed to the advancement of image and video recognition, as well as natural language processing.
“The whole idea of statistical learning in the context of AI kind of died in the late 1960s,” LeCun recalls. “People more or less abandoned it. Then it came back to the fore in the late ’80s with interest in neural nets. So when learning algorithms to train multilayer neural nets popped up in the mid-’80s, it created a wave of interest.”
In capturing this revolution, LeCun is modest to a fault. He’s made history for his discoveries, but he barely mentions his own name or accomplishments. He refuses to take himself seriously; in fact, a whole section of his personal website is devoted to puns, with this self-admonition: “The Geneva convention against torture, and the U.S. constitutional protection against cruel and unusual punishments, forbid me to write more than three atrocious puns in a row.”
LeCun also refuses to rest on any of his well-earned laurels in computer science; today, he serves as Facebook’s chief AI scientist, where he works tirelessly towards new breakthroughs. Here, he takes us on a privileged tour—better than a front-row seat, because he’s a star of the show—through the growth, recent changes and potential of artificial intelligence.
AI Begins—Perceptrons To The Precipice Of Learning
As a student of AI’s past, LeCun can cite the milestones as well as anyone, starting with the summer 1956 brainstorming session at Dartmouth where the term “artificial intelligence” was coined. Just a year later, Frank Rosenblatt invented the perceptron at the Cornell Aeronautical Laboratory. One of its first implementations was the Mark I Perceptron, a mammoth rectangular machine that contained 400 photocells randomly connected to simple motif detectors, whose outputs fed a trainable classifier.
“It was the first neural network that could learn to recognize simple patterns in a kind of non-trivial way,” LeCun says. “You could use them to do simple image recognition but not to recognize objects in photos and not for any kind of reasoning or planning.”
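That limitation is easy to see in code. Below is a minimal, illustrative Python sketch of Rosenblatt’s perceptron learning rule; the toy data and parameters are invented for the example, and of course the Mark I implemented the rule in analog hardware, not software:

    import numpy as np

    # Rosenblatt's perceptron rule: nudge the weights only when the
    # hard-threshold prediction is wrong.
    def train_perceptron(X, y, epochs=20, lr=1.0):
        """X: (n_samples, n_features); y: labels in {-1, +1}."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (np.dot(w, xi) + b) <= 0:  # misclassified
                    w += lr * yi * xi
                    b += lr * yi
        return w, b

    # A linearly separable toy pattern (logical AND) is learnable;
    # an entangled one such as XOR is not -- the limit LeCun notes.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, -1, -1, 1])
    w, b = train_perceptron(X, y)
    print(np.sign(X @ w + b))  # [-1. -1. -1.  1.]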

Until the last decade, pattern recognition systems required a lot of human grunt work to recognize objects in natural images. “You’d have to work a lot on building an engineered module that would turn the images into a representation—generally a long list of numbers that can be processed by those simple learning algorithms. So you basically had to do the work by hand.” Ditto, he adds, for early speech recognition and computer-driven translation: Hand-engineering meant maximal sweat with minimal results.
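To make that “long list of numbers” concrete, here is an illustrative Python sketch of the older pipeline; the specific features are invented for the example, not drawn from any real system:

    import numpy as np

    # Hand-engineered pipeline: a human chooses which numbers to
    # extract from the image; only the final classifier is trained.
    def handcrafted_features(img):
        """img: 2-D grayscale array -> fixed-length feature vector."""
        gx = np.abs(np.diff(img, axis=1)).mean()  # horizontal edge energy
        gy = np.abs(np.diff(img, axis=0)).mean()  # vertical edge energy
        return np.array([img.mean(), img.std(), gx, gy])

    img = np.random.rand(28, 28)      # stand-in for a real image
    print(handcrafted_features(img))  # four numbers summarizing the image

Deep learning’s contribution, as LeCun describes next, was to replace that hand-built extraction step with layers that learn the representation themselves.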
So what changed in the underlying computer science? “In all of those applications, deep learning and neural nets have brought significant improvements in performance—and also considerable reduction in sort of the manual labor that is necessary,” LeCun says. “And that allows people to expand the applications of these to a lot of different domains.”
This raises the question of how computers can “learn” in the first place. Neural nets are software systems loosely modeled on networks of neurons in the brain; they process information such as a visual image and attempt to arrive at a correct answer. But what if that answer isn’t quite right? Enter “backpropagation,” an algorithm that feeds the error back through the network, layer by layer, so the network can adjust its internal weights and learn.
LeCun And The Backpropagation Proposition
The breakthrough discovery of backpropagation came in 1986, when Geoffrey Hinton became one of the first researchers to describe a way computers could learn by performing a task over and over, the neural network’s weights “then adjusted in the direction that decreases the error.”
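To make that concrete, here is a minimal, illustrative numpy sketch of backpropagation: a two-layer network learning XOR, the very pattern a lone perceptron cannot represent (the architecture and numbers are invented for the example):

    import numpy as np

    # Tiny two-layer network trained by backpropagation on XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        # Forward pass: compute a prediction and its error.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = out - y

        # Backward pass: the chain rule pushes the error back through
        # each layer, giving a gradient for every weight.
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Each weight moves "in the direction that decreases the error".
        lr = 1.0
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

    print(out.round(2).ravel())  # approaches [0, 1, 1, 0]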
LeCun not only made good on Hinton’s groundwork—he helped lay the foundation. Hinton had first floated the idea of “backprop” in the early 1980s but abandoned it because he didn’t think it could work.
But in 1985 LeCun wrote a paper that described a form of backpropagation for, as he puts it, “an obscure conference. It was in French and basically…