Certainty, Uncertainty, or the worst of both

Descartes looked for certainty because he wanted good grounds for knowledge, a place of fixity to build on, from which to make predictions.

Juarrero counters that uncertainty allows for novelty and individuation.

In software, we like to aim for certainty. Correctness. Except in machine learning and AI: there, we don’t ask or expect our algorithms to be “correct,” just useful.

The predictions made by algorithms reproduce the interpretations of the past. When we use them to make decisions, we reinforce those interpretations. Black people were more likely to be arrested, so the model predicts more arrests for Black people. Women were less likely to be hired, so the model ranks women lower.

Machine learning based on the past, choosing the future — this reinforces bias. It suppresses novelty and individuation. It is the worst of both worlds!
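A toy simulation can make that loop concrete. Everything below is invented for illustration: the groups, the rates, and the “model,” which simply memorizes past hire rates. The point is only the shape of the dynamic: decisions made from the past become the training data for the next round.

```python
# Toy feedback loop: a "model" that predicts hiring from historical
# rates, whose decisions then become the next round's history.
# All groups and numbers are made up for illustration.

historical_hire_rate = {"group_a": 0.60, "group_b": 0.40}

def train(history):
    """The 'model' just memorizes past hire rates per group."""
    return dict(history)

def decide(model, group):
    """Hire whenever the learned rate clears a fixed bar."""
    return model[group] >= 0.50

history = dict(historical_hire_rate)
for round_ in range(5):
    model = train(history)
    for group in history:
        hired = decide(model, group)
        # The decision is recorded and nudges the group's observed
        # rate toward 1.0 (hired) or 0.0 (not hired): yesterday's
        # interpretation becomes tomorrow's data.
        history[group] = 0.9 * history[group] + 0.1 * (1.0 if hired else 0.0)
    print(round_, {g: round(r, 2) for g, r in history.items()})
```

Run it and group_a drifts toward always-hired while group_b drifts toward never-hired. The model never saw anything but the past, so it reproduces it, and then amplifies it.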

This doesn’t mean we should eschew this technology. It means we should add to it, combining the fluidity of the human world with the discreteness of machines, as Kevlin Henney puts it. We need humans working in symmathesy with the software, researching the factors that influence its decisions and consciously altering them. We can tweak the algorithms toward the future we want, beyond the past they have observed.
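What might “researching the factors and consciously altering them” look like? Here is a minimal sketch with a hypothetical linear scoring model. The feature names and weights are invented, and real models need real interpretability tooling, but the tack-and-adjust move is the same: look at what drives the decision, then change the part we don’t endorse.

```python
# Human-in-the-loop adjustment, sketched with invented feature
# names and weights: inspect what drives a linear scoring model,
# then deliberately alter the factor we don't endorse.

learned_weights = {
    "years_experience":  0.8,
    "referral":          0.5,
    "gap_in_resume":    -0.7,   # hypothetical proxy for caregiving, i.e. gender
}

def score(weights, candidate):
    return sum(weights[f] * candidate.get(f, 0.0) for f in weights)

# Step 1: research the factors. For a linear model, the weights
# are the factors; list them by influence.
for feature, w in sorted(learned_weights.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:20s} {w:+.2f}")

# Step 2: consciously alter them. We decide resume gaps should not
# count against anyone, whatever the past data said.
adjusted_weights = dict(learned_weights, gap_in_resume=0.0)

candidate = {"years_experience": 1.0, "referral": 0.0, "gap_in_resume": 1.0}
print("before:", score(learned_weights, candidate))
print("after: ", score(adjusted_weights, candidate))
```

The adjustment is a value judgment, not a statistical one. That is the point: the past supplies the weights, and a person supplies the direction.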

Machine learning models come from empirical data. Logical deduction comes from theory. As Gregory Bateson insisted, progress happens in the interaction between the two. It takes a person to tack back and forth.

We can benefit from the reasoning ability we wanted from certainty, and still support novelty and individuation. It takes a symmathesy.

This post is based on Abeba Birhane’s talk at NCrafts this year.
