This is Autonomy, a newsletter about A.I. and Philosophy.
A.I. (Artificial Intelligence) systems, or more generally algorithms run by machines, make calculations that help solve problems for us. In this way, they exhibit a certain ability to think. But we do not believe they yet have a general-purpose reasoning ability commensurate with a human's. Our efforts have not arrived at A.G.I. (Artificial General Intelligence) yet. Can Aristotle's breakdown of thinking help us understand what might be missing?
“Thinking itself moves nothing,” Aristotle writes1. Action—of the informed kind, praxis—needs choice, and choice comes from deliberate desire. The deliberation part of the equation here relates to thinking and calculating, whereas the desire part is more ineffable. Right desire needs an understanding of what is good, which relates to wisdom. More about the latter in a later blog post. But the upshot of Aristotle's framework is, as he goes on to write:
“… choice is … desire fused with thinking, and such a source is a human being.”2
There are no serious claims yet of an A.I. system having desires. Therefore current A.I. is not such a source of choice-making. At least, not in Aristotle's framework of describing the work of a human being in terms of virtue3.
Now, you might cite examples of A.I. making choices for us. For example, a sophisticated GPS navigation algorithm chooses the best route to work for us each morning, taking into account traffic conditions. But does the algorithm want us to have the shortest ride? No. The human engineers who implemented the algorithm do.
And when the GPS navigation goes wrong at times, we blame the human engineers, not the algorithm. The algorithm may be the cause of a bad commute, but we don't hold it accountable as we do the engineers, who made the tradeoffs when designing and implementing the algorithm. It was they who chose for us.
Current A.I. does not choose for us. When it gains the ability to do so, it becomes something more.