
Voices in AI – Episode 40: A Conversation with Dennis Laudick

Author: Byron Reese / Source: Gigaom

Today’s leading minds talk AI with host Byron Reese

In this episode Byron and Dennis discuss machine learning.

Byron Reese: This is “Voices in AI,” brought to you by GigaOm. I’m Byron Reese. Today my guest is Dennis Laudick.

He is the VP of Marketing for Machine Learning at ARM. ARM is—well, let’s just start off by saying, you certainly have several of their products. They make processors and they have between 90% and 95% market share of mobile devices. They’ve shipped 125 billion processors and are shipping at a rate of about 20 billion a year. That’s, what, three per person per year. Welcome to the show, Dennis.

Dennis Laudick: Great. Thank you very much. Pleased to be here.

So picking up on that thread, three per person. So, anybody who owns any electronics probably has four or five of your chips this year. Where would they find those? Like, walk me around the house and office: what all might they be in?

Yeah, so we are kind of one of the greatest secrets out in the market at the moment; we’re pervasive, certainly. So, I mean, ARM is responsible for the designs of processors, so the CPUs are, ironically for this topic, the “brains,” as a lot of people call them, that go into the computer chips and power our devices. So, behind your smartphone, obviously, there is a processor which is doing all of the things that you are seeing as well as a lot in the background. Just looking around you: TVs; I am speaking into a phone now, it probably has a processing chip in the background doing something—those consumer electronic devices, the majority are probably being powered by a processor which was designed by ARM.

We do things that range from tiny sensors and watches and things like that, clear up to much larger-scale processing. So yeah, just looking around, battery-powered devices or powered consumer electronic devices around you in your home or your office, there is a good chance that the majority of those are running a processor designed by ARM, which is quite an exciting place to be.

I can only imagine. What was that movie that was out, the Kingsman movie, where once they got their chips in all the devices, they took over the world? So I assume that’s kind of the long-term plan.

I am not particularly aware of any nefarious plans, but we’ve certainly got that kind of reach.

I like that you didn’t deny it. You just said, you are not in the loop. I am with that. So let’s start at the top of the equation. What is artificial intelligence?

So it’s a good question. I think the definitions around it are a bit unsettled at the moment. I mean certainly from my perspective, I tend to view things pretty broadly, and I think I probably best describe it as “a machine trying to mimic parts of what we consider to be human intelligence.” So it’s a machine mimicking either a part, or several parts, of what humans consider to be intelligent. Not exactly a concrete term but probably is—

I think it’s a great definition except for problems with the word “artificial” and problems with the word “intelligence.” Other than that, I have no problem. In one talk I heard you give, you said that old tic-tac-toe programs would therefore be AIs. I am with that, but that definition is so broad. The sprinkler system that comes on when my grass is dry; that’s AI. A calculator adds 2+2, which is something a person does; that’s AI. An abacus therefore would be AI; it’s a machine that’s doing what humans do. I mean, is that definition so broad that it’s meaningless, or what meaning do you tease out of that?

Yeah. That’s a good question, and certainly it’s a context-driven type of question and answer. And I tend to view artificial intelligence, and intelligence itself, as kind of a continuum of ideas. So I think the challenge is to sit there and go, “Right, let’s nail down exactly what artificial intelligence is,” and that naturally leads you to saying, “Right, let’s nail down exactly what intelligence is.” I don’t think we’re to the point where that’s actually a practical possibility. You would have to start from the principle that human beings have completely fathomed what the human being is capable of, and I don’t think we’re there yet. If we’ve learned everything there is to be learned about ourselves, then I would be very surprised.

So if you start from the concept that intelligence itself isn’t completely well understood, then you naturally fall back to the concept that artificial intelligence isn’t something that you can completely nail down. So, from a more philosophical standpoint which is quite fun, it’s not something that’s concrete that you can just say, this is the denotation of it. And, again, from my perspective, it’s much more useful if you want to look at it in a broad sense to look at it as a scale or a spectrum of concepts. So, in that context, then yeah, going back to tic-tac-toe, it was an attempt at a machine trying to mimic human intelligence.

I certainly spent a lot of my earlier years playing games like chess and so forth, where I was amazed by the fact that a computer could make these kinds of assessments. And, yes, you could go back to an abacus. And you could go forward to things like, okay, we have a lot of immediate connotations around artificial intelligence, around robots and what we consider quasi-autonomous thinking machines, but that then leaves open questions around things like feelings, things like imagination, things like intuition. What exactly falls into the realm of intelligence?

It’s a pretty subjective and non-concrete domain, but I think the important thing is that, although I like to look at it as a very broad continuum of ideas, you do have to treat it on a context-sensitive basis. So from a practical standpoint, as technologists, we look at different problem spaces and we look at different technologies which can be applied to those problem spaces, and although it’s not always clear, there is usually some very contextually driven understanding between the person or the people talking about AI or intelligence itself.

So, when you think of different approaches to artificial intelligence, we’ve been able to make a good deal of progress lately for a few reasons. One is that the kinds of processors that do parallel processing, like you guys make, have become better and better and cheaper and cheaper, and we use more and more of them. And then we are getting better at applying machine learning, which is of course your domain, to broader problem sets.

Yeah.

Do you have an opinion? You are bound to look at a problem like, “Oh, my car is routing me somewhere strange,” and ask: is that a machine learning problem? And machine learning, at its core, is studying the past—a bunch of data from the past—and projecting that into the future.

What do you think are the strengths of that approach, and, what I am very interested in, the limits of it? Do you think, for instance, that creativity, like what Banksy does, is fundamentally a machine learning problem? Give it enough cultural references and it will eventually be graffiti-ing on the wall.

Yeah.

Where do you think machine learning rocks, and where is it not able to add anything?

Yeah. That’s a really interesting question. So, I think a lot of times I get asked a question about artificial intelligence and machine learning, and the two get interchanged with each other. I think a lot of people—because of the fact that in our childhood we all heard stories from science fiction that were labeled under artificial intelligence and went off in various different directions—hear of a step forward in terms of what computers can do and quickly extrapolate to the far-reaching elements of artificial intelligence, which are still somewhere in the domain of science fiction.

So it is interesting to get involved in those discussions, but there are some practicalities in terms of what the technology is actually capable of doing. So, from my perspective, I think this is actually a really important wave that’s happening at the moment, the machine learning wave as you might call it. For years and years, we’ve been developing more and more complex classical computing methodologies, and we’ve progressively become more complex in what we can produce, and therefore increasingly sophisticated in what we could achieve against human expectations.

Simple examples that I use with people who aren’t necessarily technical are: we started out with programs that said, if the temperature is greater than 21 °C, then turn on the air conditioner, and if it’s less than 21 °C, turn off the air conditioner. What you ended up with was a thermostat that was constantly flickering the air conditioning on and off. Then, we became a little more sophisticated and we introduced hysteresis, and we said, I tell you what, if the temperature goes above 22 °C, turn on the air conditioner, and if the temperature goes below 19 °C, turn it off. You can take that example and extrapolate it over time, and that’s kind of what’s been happening in computing technology: we’ve been introducing more and more layers of complexity to allow more sophistication and more naturalness in our interactions with things, and in the way that things made quasi-decisions. And that’s all been well and fine, but the methodologies are becoming incredibly complex and it was increasingly difficult to make those next steps in progression and sophistication.
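For readers who want to see the idea concretely, here is a minimal sketch of the two thermostat controllers described above, using the same hypothetical thresholds from the example (purely illustrative code, not anything from ARM):

```python
# Minimal sketch of the thermostat example above (hypothetical thresholds,
# not ARM code). The naive controller switches at a single set point, so
# readings hovering around 21 °C flicker the air conditioner on and off.
# The hysteresis controller uses two thresholds and keeps its current state
# in between, which removes the flicker.

def naive_controller(temp_c: float) -> bool:
    """Return True if the air conditioner should be on."""
    return temp_c > 21.0

def hysteresis_controller(temp_c: float, currently_on: bool) -> bool:
    """Turn on above 22 °C, off below 19 °C, otherwise keep the current state."""
    if temp_c > 22.0:
        return True
    if temp_c < 19.0:
        return False
    return currently_on

if __name__ == "__main__":
    readings = [20.9, 21.1, 20.8, 21.2, 23.0, 21.5, 18.5]
    ac_on = False
    for t in readings:
        naive = naive_controller(t)
        ac_on = hysteresis_controller(t, ac_on)
        print(f"{t:4.1f} °C  naive: {'on' if naive else 'off'}  hysteresis: {'on' if ac_on else 'off'}")
```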

ImageNet, which is a bit of a cornerstone in modern ML, was just a great example of what happened—the classic approaches were becoming more and more sophisticated, but it was difficult to really move the output and the capabilities on. And the application of machine learning, and neural networks in particular, has just really blown the doors open in terms of moving to the next level. You know, when I try to de-complicate what’s happened, I tend to express it as: we’ve gone from a world where we had a very deterministic approach and we were trying to mimic fuzziness and approximation, to where we now have a computing approach which very naturally approximates, and it does patterns and it does approximation. And it just turns out that, lo and behold, when you look at the world, a lot of things are patterns. And, suddenly, the ability to understand patterns, as opposed to trying to break them up into very deterministic principles, becomes very useful. It so happens that humans do a huge amount of approximation, and that suddenly moves us much further forward in terms of what we can achieve with computing. So, the ability to do pattern matching and the ability to do approximation don’t follow the linear progression of more and more determinism, and more complex determinism. They move us into a more fuzzy space, and it just so happens that that fuzzy space is a huge leap forward in terms of getting fundamentally deterministic machines to do something that feels more natural to human beings. So that’s a massive shift forward in terms of what we can do with computers.
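To make that contrast concrete, here is a toy sketch (not ARM’s technology, and with made-up data and hyperparameters) of what “learning a pattern from examples” looks like next to a hand-written deterministic rule: a tiny logistic-regression classifier learns a boundary between two noisy clusters of points instead of anyone hard-coding the threshold.

```python
# Toy illustration of learning a pattern from examples rather than writing
# a deterministic rule by hand. A small logistic-regression model, trained
# with plain gradient descent, separates two noisy 2D clusters.

import numpy as np

rng = np.random.default_rng(0)

# Two fuzzy clusters: class 0 around (0, 0), class 1 around (2, 2).
X = np.vstack([rng.normal(0.0, 0.7, size=(100, 2)),
               rng.normal(2.0, 0.7, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Learn the weights from labeled examples instead of hard-coding a rule.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(2000):
    p = sigmoid(X @ w + b)           # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)  # gradient of the log-loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"learned weights {w}, bias {b:.2f}, training accuracy {accuracy:.2%}")
```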

Now, the thing to keep in mind there, when I am trying to explain what’s happening with machine learning to people who aren’t technologists or aren’t into the theory behind machine learning, one way I do try to simplify it is, I say, “Well listen, don’t get too worried in terms of building the next Terminator. What we’ve kind of, in essence, managed to do is we’ve taught computers to be much, much better at identifying cats.” There’s still a problem about, okay, what should the machine do once it’s identified a cat. So it’s not a complete shift in all of what we can do with computing. It’s a complete shift in the capabilities, but we’ve still got a long way to go in terms of something like AGI and so forth. But don’t get me wrong, it’s a massive wave. I think this is a new era in terms of what we can get our machines to do. So it’s pretty exciting from that standpoint, but there is still a long way to go.

So you mentioned that we take these deterministic machines and get them to do approximations but, in the end, they are still at their core deterministic and digital. No matter how much you try to obscure that in the application of the technology, is there still an inherent limit to how closely that can mimic human behavior?

That’s, again, a very good question. So you are right, at its fundamental level, a computer is basically 1s and 0s. It all breaks down to that. What we’ve managed to do over time is produce machines which are increasingly more capable, and we’ve created increasing layers of sophistication and platforms that can support that. That’s nothing to be laughed at. In the technology I work with at ARM, the leaps forward in the last few years have been quite incredible in terms of what you can do. But, yeah, it always breaks down to 1s and 0s. But it’s important not to let the fundamentals of the technology form a constraint on its potential because, if anything, what we have learned is that we can create increasing levels of sophistication to get these 1s and 0s to do more and more things and to act more and more naturally in terms of our interactions and the way they behave.

So yes, you are absolutely right and it’s interesting to see the…

