Source: New York Times
The New Work Summit, convened earlier this week by The New York Times, featured panel discussions about the opportunities and risks that are emerging as the use of artificial intelligence accelerates across industries. Here are some excerpts. They have been edited and condensed.
Can Technology Save the World (Before It Destroys It)?
Sam Altman
Co-founder and chairman of OpenAI; president of Y Combinator
One of the things that surprises me most about sort of the criticism of the tech industry right now is a belief that tech should be the one to decide what you can and can’t say and how these algorithms work. And that, to me, sounds very bad. I think I’m totally cool with, for example, restrictions on free speech where it’s hurting people. And I think we’ve always had, you know, I can’t yell “Fire!” in this room. So there’s always been free speech with an asterisk.
But the idea that these companies who are not accountable to us or elected by us should get to decide sort of the new safeguards of society, that seems like the wrong way to do it. And I think we should let our — flawed as they may be — democratically elected and enforced institutions update the rules for the world. The world has changed a lot. Tech has changed the world a lot in a very short time. And it’s going to change it much more.
In the Picture
Evan Spiegel
Co-founder and chief executive, Snap Inc.
For example, just like a newspaper page, there really wasn’t a personalized version of the internet 20 years ago. There was just one page that everyone got — the same Yahoo home page, or something like that. And so this idea that the internet could be personalized by your friends was a huge breakthrough.
But this also came with some side effects. Because the content that is distributed by your friends from all over the internet, and that’s voted on by your friends in terms of likes and comments, sometimes that means that — because we’re human beings and we click on things that are outrageous or offensive sometimes — things that are negative actually spread faster and further than things that are positive.
And so we very quickly saw media start to change to fit this new distribution mechanism that was based on what your friends were sharing. So I think if we look at the evolution of social media, now people are thinking through sort of the ramifications of a lot of what’s happened as a result of that new wave of content distribution. But I certainly think there’s a lot of opportunity to sort of course-correct here.
Full Speed Ahead
John Donovan
Chief executive, AT&T Communications
We paused for a period of time before we went into deployment on robotics, process automation, machine learning, to step back and build the Ten Commandments. And we came back with, I think, a view that was a human-centric set of policies.
And that is that everything in our business: Every outcome is owned by a human being. No one can say an algorithm did it or a machine did it. So everything that is launched, algorithmically, robotically is owned by a human being. A machine can’t control a machine without a process that’s involved.
Algorithms are like children. If you change jobs, you have to hand that algorithm over to the new person. Everything that’s put in has to have a rewind button. We have a Ph.D. in psychology who makes sure that they have a red button they can press when they feel like manipulation is happening in any way, shape, or form.
So we put a structure and a process in place to make sure that what we do programmatically, what we do algorithmically, and what we do with a machine doesn’t change the fundamentals of our accountability within the business.
Sebastian Thrun
Chief executive, Kitty Hawk
I look at A.I. as a tool, very much like a shovel or a kitchen knife. And when it comes to ethics, I think there’s ethical ways to use a kitchen knife, and there’s unethical ways to use a kitchen knife, and they’ve been around forever.
What, really, is A.I.? I think that’s what people are somewhat divided on. I think it’s something very, very simple. First of all, we talk about A.I., we talk about machine learning; we don’t talk about real intelligence. And machine learning is an innovation in computers.
Computers are dumb. To make computers do the right thing in the past, someone had to write down just an elaborate kitchen recipe of every possible step that the computer should do. And the computer would blindly follow those rules. The innovation in machine learning is that now, instead of giving a computer these rules, you can give it examples.
And the computer follows its own rules from those examples. So computer programming has become easier as a result. Children can now program computers by just giving examples. The implications, in my opinion, are groundbreaking for society.
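Thrun’s contrast between hand-written rules and learning from examples can be sketched in a few lines of Python. This is a toy illustration of the general idea only, not any speaker’s actual system: the function names, the spam-versus-ok task, and the midpoint-between-means rule are all invented for the example.

```python
# Toy illustration: a hand-written rule vs. a rule derived from examples.
# (Hypothetical names and data; not from any system discussed at the summit.)

# The old way: a programmer writes down the rule explicitly.
def classify_by_rule(score):
    return "spam" if score >= 5 else "ok"   # threshold chosen by a human

# The machine-learning way: derive the rule from labeled examples.
def learn_threshold(examples):
    """Learn a decision threshold as the midpoint between the class means."""
    spam_scores = [x for x, label in examples if label == "spam"]
    ok_scores = [x for x, label in examples if label == "ok"]
    spam_mean = sum(spam_scores) / len(spam_scores)
    ok_mean = sum(ok_scores) / len(ok_scores)
    return (spam_mean + ok_mean) / 2

examples = [(1, "ok"), (2, "ok"), (8, "spam"), (9, "spam")]
threshold = learn_threshold(examples)   # 5.0 for these examples

def classify_learned(score):
    return "spam" if score >= threshold else "ok"

print(classify_learned(7))   # prints "spam" with this training data
```

No one told the learned classifier where to draw the line; it inferred the threshold from the four labeled examples, which is the shift Thrun describes: programming by example rather than by exhaustive recipe.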
Kent Walker
Senior vice president, global affairs, and chief legal officer, Google
The tech sector has always had lots of different regulations, from privacy, to copyright, to safe harbors and the like. We’re fans of regulation when it’s smart regulation.
And what do I mean by that? Regulation that starts out with a really crisp definition of the problem you’re trying to solve. That is then narrowly tailored to solve that problem and minimize blowback and side effects. And then third, a thoughtful analysis of what the second- and third-order implications of those rules might be.
When you apply that to artificial intelligence, I think it’s most likely that we will start with the applications. It’s rare that you…