Report from AI Now: AI is Still Waiting for its Ethics Transplant

Reports on the lack of ethics in artificial intelligence abound. But most of them are lightweight—full of platitudes about “public-private partnerships” and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. The new report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results.
The report, released two weeks ago, is the brainchild of Kate Crawford (shown above) and Meredith Whittaker, cofounders of AI Now, a new research institute based out of New York University. Crawford, Whittaker, and their collaborators lay out a research agenda and a policy roadmap in a dense but approachable 35 pages. Their conclusion doesn’t waffle: Our efforts to hold AI to ethical standards to date, they say, have been a flop.
“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI,” they write. When tech giants build AI products, too often “user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles…” Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life. Is there anything we can do? Crawford sat down with us this week for a discussion of why ethics in AI is still a mess, and what practical steps might change the picture.
Q. Towards the end of the new report, you come right out and say, “Current framings of AI ethics are failing.” That sounds dire.
Kate Crawford: There’s a lot of talk about how we come up with ethical codes for this field. We still don’t have one. We have a set of what I think are important efforts spearheaded by different organizations, including IEEE, Asilomar, and others. But what we’re seeing now is a real air gap between high-level principles—that are clearly very important—and what is happening on the ground in the day-to-day development of large-scale machine learning systems.
We read all of the existing ethical codes that have been published in the last two years that specifically consider AI and algorithmic systems. Then we looked at the difference between the ideals and what was actually happening. What is most urgently needed now is for these ethical guidelines to be accompanied by very strong accountability mechanisms. We can say we want AI systems to be guided by the highest ethical principles, but we have to make sure that there is something at stake. Often when we talk about ethics, we forget to talk about power. People will often have the best of intentions. But we’re seeing a lack of thinking about how real power asymmetries are affecting different communities.
Q. The underlying message of the report seems to be that we may be moving too fast—we’re not taking the time to do this stuff right.
I would probably phrase it differently. Time is a factor, but so is priority. If we spent as much money and hired as many people to think about and work on and empirically test the broader social and economic effects of these systems, we would be coming from a much stronger base. Who is actually creating industry standards that say, ok, this is the basic pre-release trial system you need to go through, this is how you publicly show how you’ve tested your system and with what different types of populations, and these are the confidence bounds you are prepared to put behind your system or product?
Read the source article in Wired.
Source: AI Trends
