
To achieve ethical AI, we need better training and boundaries

Author: Nikita Lukianets / Source: The Next Web

Imagine for a moment your plane suddenly doesn’t land where it should, makes a U-turn, and starts re-routing from one airport to another one, trying to land in three different cities. You have no clarity about the immediate future. I’ve been there and it’s not the greatest feeling.

Now imagine, there are no humans involved in the decision-making and all decisions are silently made by a machine. How do you feel now?

I wrote this article to propose an understandable regulatory approach for AI applications. Autonomous decision-making systems are already in place, including ones that support life-critical scenarios. This is a complicated topic, so let me tell you exactly how I’ll approach it.

I offer two points of focus that can help us govern such autonomous decision-making: (1) the quality of the data on which machine learning models are trained, and (2) decision boundaries, the restrictions that separate the decisions that should be taken from those that should not.

This point of view will pave the way to the question of algorithmic accountability: making AI decisions traceable and explainable.
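To make the idea of decision boundaries concrete, here is a minimal sketch (all names and thresholds are hypothetical, not from the article) of a guard that lets a system act autonomously only inside an explicitly whitelisted region and escalates everything else to a human:

```python
# A minimal "decision boundary" guard (hypothetical names and thresholds):
# the machine may act on its own only when the proposed action is inside
# an explicit whitelist and the model is sufficiently confident;
# everything else is escalated to a human reviewer.

from dataclasses import dataclass

ALLOWED_ACTIONS = {"land_primary", "hold_pattern"}  # decisions the machine may take alone
CONFIDENCE_FLOOR = 0.95                             # below this, a human decides

@dataclass
class Decision:
    action: str
    confidence: float

def govern(decision: Decision) -> str:
    """Return who carries out the decision: the machine or a human."""
    if decision.action not in ALLOWED_ACTIONS:
        return "escalate_to_human"      # outside the boundary: never autonomous
    if decision.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"      # inside the boundary, but too uncertain
    return "execute_autonomously"

print(govern(Decision("land_primary", 0.99)))       # execute_autonomously
print(govern(Decision("divert_third_city", 0.99)))  # escalate_to_human
```

The exact thresholds matter less than the fact that the boundary is explicit and inspectable; that is what makes each decision traceable after the fact.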

Super-intelligence, are we there yet?

The short answer is “no”. Philosopher Nick Bostrom argues in his paper that artificial superintelligence (ASI) could bring humans to extinction. Stanford professor Nils Nilsson suggests that we are far from that: first, machines should be able to do the things humans are able to do.

At the same time, in numerous narrow fields AI solutions are already capable of making autonomous decisions. In some healthcare applications, for example, AI decisions require no human involvement at all. This means that artificial intelligence is becoming a subject, not an object, of decision-making.

How do we govern these decisions? How do we make sure that we can get what we expect, especially in situations that are life-critical?

Decision-making in algorithmic accountability

The concept of algorithmic accountability suggests that companies should be responsible for the results of their programmatic decisions.

When we talk about ethical AI decisions, we need to secure “ethical” training datasets and well-designed boundaries that “ethically” govern AI decisions. These are the two pillars of algorithmic accountability. In plain English: thinking, and action.

Pillar 1: Training examples and bias

AI can be aware of nuances and learn more of them without getting tired; however, AI knows only what it is “taught” (data in, bias in) and controls only what we give it control of. Bias has a huge positive impact on how fast humans think and operate. So why do we talk about bias?

If we had to think about every possible option when deciding, it would probably take a lot of time…
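To illustrate “data in, bias in” concretely, here is a minimal sketch (the dataset, column name, and threshold are hypothetical, not from the article) that audits how a sensitive attribute is represented in a training set before any model is trained on it:

```python
# A minimal pre-training bias audit (hypothetical data and names):
# before training, check whether a sensitive attribute is represented
# evenly enough across the labeled examples. "Data in, bias in" starts here.

from collections import Counter

def audit_representation(rows, attribute, max_ratio=2.0):
    """Flag the dataset if the most common group outnumbers the rarest
    group by more than max_ratio."""
    counts = Counter(row[attribute] for row in rows)
    most, least = max(counts.values()), min(counts.values())
    ratio = most / least
    return {"counts": dict(counts), "ratio": ratio, "balanced": ratio <= max_ratio}

training_rows = [
    {"age_group": "18-30", "label": 1},
    {"age_group": "18-30", "label": 0},
    {"age_group": "18-30", "label": 1},
    {"age_group": "60+",   "label": 0},
]
print(audit_representation(training_rows, "age_group"))
# {'counts': {'18-30': 3, '60+': 1}, 'ratio': 3.0, 'balanced': False}
```

A check like this does not remove bias, but it makes the bias the model will inherit visible before training starts.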
