tymshft

There is nothing new under the sun…turn, turn, turn

Archive for the month “March, 2017”

No, the robots aren’t killing us…yet

Let’s go back to July 7, 2015, when this tragic event occurred in Michigan:

An employee of Ventra Ionia Main, an automotive stamping facility, died after being caught in a robotic machine, police said.

The accident happened about 2:20 p.m. Tuesday, July 7 at Ventra, 14 N. Beardsley St., Ionia Public Safety Department officers said.

More details were revealed this month, when the inevitable lawsuit was filed.

[Wanda] Holbrook, a journeyman technician, was performing routine maintenance on one of the robots on the trailer hitch assembly line when the unit unexpectedly activated and attempted to load a part into the unit being repaired, crushing Holbrook’s head.

By the time this filtered through several sources, the press was referring to a rogue robot, conjuring an image of a sentient being taking out its vengeance on an unfortunate carbon-based life form.

The lawyers don’t go that far; instead, they portray the machine as one that was improperly programmed and maintained by the host of companies named in the lawsuit.

In this respect, it’s no different from any other piece of machinery, or, frankly, any manufactured item. No one accuses a shovel of being a sentient rogue being, but a shovel can also kill.

Studies published in the Lancet and the American Journal of Cardiology, among other outlets, show that the incidence of heart failure goes up in the week after a blizzard. The Lancet study, based on death certificates in eastern Massachusetts after six blizzards from 1974-78, demonstrated that ischemic heart disease deaths rose by 22 percent during the blizzard week and stayed elevated for the subsequent eight days, suggesting that the effect was related to storm-related activities, like shoveling, rather than the storm itself. Similarly, the AJC article, based on medical examiner records from three Michigan counties, found that there were more exertion-related sudden cardiac deaths in the weeks during and after blizzards, and that 36 of the 43 total exertion-related deaths occurred during or shortly after snow removal.

Perhaps some day we will have a true rogue robot, with independent decision-making capability, that performs an action that results in a death. But we’re not there yet.

Can the @AINowInitiative achieve its goals?

Confession – when I first saw Marshall Kirkpatrick’s tweet, I thought that it sounded too much like Newspeak. Most of you probably won’t react that way, but I did.

[Embedded image of Marshall Kirkpatrick’s tweet about the AI Now Initiative]

“Preventing fascism through AI?” After the recent Wikileaks revelations about purported CIA listening technology, you kinda wonder if the AI would be used to IMPLEMENT “fascism” – whatever “fascism” is.

But when I went to the AI Now Initiative website (which, as far as I can tell, does NOT use the f-word), it looked like they had a valid point. But can they get where they want to go? And where DO they want to go?

First, let me introduce the AI Now Initiative itself.

Led by Kate Crawford and Meredith Whittaker, AI Now is a New York-based research initiative working across disciplines to understand AI’s social impacts….

The AI Now Report provides recommendations that can help ensure AI is more fair and equitable. It represents the thinking and research of the experts who attended our first symposium, hosted in collaboration with President Obama’s White House and held at New York University in 2016.

The thing that struck me in the details was their discussion of bias.

Bias and inclusion

Data reflects the social and political conditions in which it is collected. AI is only able to “see” what is in the data it’s given. This, along with many other factors, can lead to biased and unfair outcomes. AI Now researches and measures the nature of such bias, how bias is defined and by whom, and the impact of such bias on diverse populations.

I wanted to read the report (PDF) of the first symposium; since the second symposium hasn’t yet been held, its report has (I hope) not yet been written. But the definition of bias is a key step here. If you’re wearing a MAGA hat or have one of North Korea’s approved hairstyles, then anything involving Barack Obama and the city of New York is already hopelessly biased, infused with New York values, one of those values being the Constitution and laws of the United States.

Reading the report, I found that “bias” is effectively defined as “lack of fairness.”

As AI systems take on a more important role in high-stakes decision-making – from offers of credit and insurance, to hiring decisions and parole – they will begin to affect who gets offered crucial opportunities, and who is left behind. This brings questions of rights, liberties, and basic fairness to the forefront.

While some hope that AI systems will help to overcome the biases that plague human decision-making, others fear that AI systems will amplify such biases, denying opportunities to the deserving and subjecting the deprived to further disadvantage.

An example will illustrate the issues involved.

Person A and Person B are applying for health insurance. What data is required to evaluate the risks from insuring each person? Do we need to know their ages? Their genders? Their races? What they ate for dinner last night? Their genetic test results? Some will argue that all of this data is not only desirable, but necessary for decision-making. Others will argue that collection of such data is an affront to the aforementioned “New York values” enshrined in the Constitution.

So should an AI system have access to all data, or some data? And should it be neutral, or “fair”?
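The insurance example can be made concrete with a small sketch. Everything in the code below is hypothetical (the zip codes, claim rates, and the `premium_blinded` function are all made up for illustration, not taken from any real insurer or from the AI Now report): it shows how a scoring model that is “blinded” to a protected attribute can still produce different outcomes when it is given a correlated proxy, such as zip code.

```python
# Toy illustration: a "neutral" model that never sees the protected
# attribute can still reproduce its effects through a proxy variable.
# All names, zip codes, and rates here are hypothetical.

applicants = [
    # (name, protected_group, zip_code, claims_last_year)
    ("Person A", "group_1", "10001", 0),
    ("Person B", "group_2", "10456", 0),
]

# Hypothetical historical claim rates by zip code. If zip code happens
# to correlate with the protected group, it acts as a proxy for it.
claim_rate_by_zip = {"10001": 0.05, "10456": 0.20}

def premium_blinded(zip_code, claims_last_year):
    """Quote a premium without ever looking at the protected attribute."""
    base = 100.0
    return base * (1 + claim_rate_by_zip[zip_code]) * (1 + 0.5 * claims_last_year)

quotes = {name: premium_blinded(z, c) for name, _, z, c in applicants}
# Identical claim histories, different premiums: the proxy did the work.
print(quotes)
```

This is the crux of the “neutral vs. fair” question: simply withholding a field from the model is not the same as removing its influence, because other fields carry its fingerprint.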

One sentence in the report, however, justifies the common scientific plea that more research (and funding for research) is needed.

It is important to note that while there are communities doing wonderful work on these issues, there is no consensus on how to “detect” bias of either kind.

 
