
How Google treats Meredith Whittaker is important to potential AI whistleblowers

Author: Kyle Wiggers / Source: VentureBeat

In a letter she co-wrote, Meredith Whittaker — one of two Google employees who say they have faced retaliation for organizing protests against their employer’s handling of sexual harassment — alleged that Google pressured her to “abandon her work” at the AI Now Institute, which she helped to found, and informed her that her role would be “changed dramatically.” The situation raises the question of what protections exist for people who speak out on AI ethics.

Earlier this month, Whittaker coauthored a report highlighting the growing diversity crisis in the AI sector. “Given decades of concern and investment to redress this imbalance, the current state of the field is alarming,” Whittaker and her coauthors wrote for New York University’s AI Now Institute. “The AI industry needs to acknowledge the gravity of its diversity problem, and admit that existing methods have failed to contend with the uneven distribution of power, and the means by which AI can reinforce such inequality.”

In November, Whittaker and others spearheaded a mass walkout of 20,000 Google employees to bring attention to what they characterized as a culture of complicity and dismissiveness. They pointed to Google’s policy of forced arbitration and a reported $90 million payout to Android founder and former Google executive Andy Rubin, who has been accused of sexual misconduct, as well as a Pentagon contract — Project Maven — that sought to implement object recognition in military drones. “[It’s] clear that we need real structural change, not adjustments to the status quo,” said Whittaker.

The problematic optics of Google’s AI ethics

Is this all coincidental? Perhaps. A Google spokesperson told the New York Times that the company “prohibit[s] retaliation in the workplace … and investigate[s] all allegations,” and VentureBeat has reached out separately for clarification.

Regardless, Whittaker’s treatment sets an alarming precedent for a company that has in recent months struggled with ethics reviews. Just weeks ago, Google disbanded an external advisory board — the Advanced Technology External Advisory Council — that was tasked with ensuring its many divisions adhered to seven guiding AI principles set out last summer by CEO Sundar Pichai.

The eight-member panel was roundly criticized for its inclusion of Heritage Foundation president Kay Coles James, who has made negative remarks about trans people and whose organization is notably skeptical of climate change. And pundits like Vox’s Kelsey Piper argued that the board, which would have convened only four times per year, lacked an avenue to evaluate — or even arrive at a clear understanding of — the AI work in which Google is involved.

Following the swift dissolution of Google’s external ethics board, the U.K.-based panel that offered counsel to DeepMind Health — the health care subsidiary of DeepMind, the AI firm Google acquired in 2014 — announced that it, too, would close, but for arguably more disconcerting reasons. Several of its members told The Wall Street Journal that they hadn’t been given enough time or information to fulfill their oversight role, and that they were concerned Google and DeepMind’s close relationship posed a privacy risk.

Unsurprisingly, Google contends that its internal ethics boards serve the watchdog role quite successfully by regularly assessing new “projects, products, and deals.” It also points out that, in the past, it has pledged not to commercialize certain technologies — chiefly general-purpose facial recognition — before lingering policy questions are addressed, and that it has both scrutinized its own AI initiatives and thoroughly detailed issues concerning AI governance, including explainability standards, fairness appraisals, safety considerations, and liability frameworks.

Setting aside for a…
