Can the @AINowInitiative achieve its goals?
Confession – when I first saw Marshall Kirkpatrick’s tweet, I thought that it sounded too much like Newspeak. Most of you probably won’t react that way, but I did.
“Preventing fascism through AI?” After the recent Wikileaks revelations about purported CIA listening technology, you kinda wonder if the AI would be used to IMPLEMENT “fascism” – whatever “fascism” is.
But when I went to the AI Now Initiative website (which, as far as I can tell, does NOT use the f-word), I found that they have a valid point. But can they get where they want to go? And where DO they want to go?
First, let me introduce the AI Now Initiative itself.
Led by Kate Crawford and Meredith Whittaker, AI Now is a New York-based research initiative working across disciplines to understand AI’s social impacts….
The AI Now Report provides recommendations that can help ensure AI is more fair and equitable. It represents the thinking and research of the experts who attended our first symposium, hosted in collaboration with President Obama’s White House and held at New York University in 2016.
The thing that struck me in the details was their discussion of bias.
Bias and inclusion
Data reflects the social and political conditions in which it is collected. AI is only able to “see” what is in the data it’s given. This, along with many other factors, can lead to biased and unfair outcomes. AI Now researches and measures the nature of such bias, how bias is defined and by whom, and the impact of such bias on diverse populations.
I wanted to read the report (PDF) of the first symposium – since the second symposium hasn’t been held yet, its report has (I hope) not yet been written. The definition of bias is a key step here. If you’re wearing a MAGA hat or have one of North Korea’s approved hairstyles, then anything involving Barack Obama and the city of New York is already hopelessly biased, infused with New York values – one of those values being the Constitution and laws of the United States.
Reading the report, it appears that “bias” is defined as “lack of fairness.”
As AI systems take on a more important role in high-stakes decision-making – from offers of credit and insurance, to hiring decisions and parole – they will begin to affect who gets offered crucial opportunities, and who is left behind. This brings questions of rights, liberties, and basic fairness to the forefront.
While some hope that AI systems will help to overcome the biases that plague human decision-making, others fear that AI systems will amplify such biases, denying opportunities to the deserving and subjecting the deprived to further disadvantage.
An example will illustrate the issues involved.
Person A and Person B are applying for health insurance. What data is required to evaluate the risks from insuring each person? Do we need to know their ages? Their genders? Their races? What they ate for dinner last night? Their genetic test results? Some will argue that all of this data is not only desirable, but necessary for decision-making. Others will argue that collection of such data is an affront to the aforementioned “New York values” enshrined in the Constitution.
So should an AI system have access to all data, or some data? And should it be neutral, or “fair”?
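To make the question concrete, here is a toy sketch in Python. Everything in it is invented for illustration – the applicants, the features, and the additive “risk score” with made-up weights bear no relation to actual actuarial practice or to any system AI Now discusses. It only shows that which data the system is allowed to see can flip who looks riskier:

```python
def risk_score(applicant, use_protected=True):
    """Naive additive risk score with invented weights (illustration only)."""
    # Behavioral data: each recent claim adds to the score.
    score = applicant["claims_last_year"] * 2
    if use_protected:
        # Protected attribute: assume (for illustration) age raises the score.
        score += applicant["age"] // 20
    return score

person_a = {"age": 80, "claims_last_year": 0}
person_b = {"age": 20, "claims_last_year": 1}

# With all data, A scores 4 and B scores 3, so A looks riskier.
# Restricted to behavior alone, A scores 0 and B scores 2: the ranking flips.
print(risk_score(person_a, True), risk_score(person_b, True))    # 4 3
print(risk_score(person_a, False), risk_score(person_b, False))  # 0 2
```

Neither answer is obviously “neutral”: the first penalizes Person A for an attribute they cannot change, while the second discards information some would call necessary for pricing risk. That is exactly the dispute the report is pointing at.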
One sentence in the report, however, justifies the common scientific plea that more research (and funding for research) is needed.
It is important to note that while there are communities doing wonderful work on these issues, there is no consensus on how to “detect” bias of either kind.