Doesn't actually contain much relating to the title, but it's interesting (if a bit on the short side) nonetheless.

I'm happy to offload navigational skills to my phone, but I hate it when my phone starts auto-suggesting answers to people's messages. I don't really want to offload my social cognition to a computer – I'd rather engage in real communication from my mind to another person's.

The question is: what tasks are so dangerous, dull, demeaning or repetitive that we're delighted to outsource them, and what do we feel is important to be done ourselves or by other humans? If I were going to be judged in a trial, I wouldn't necessarily want an algorithm to pass a verdict on me, even if the algorithm were demonstrably very fair, because there's something in the human solidarity of people in society standing in judgement of other people. At work, I might prefer to have a relationship with human colleagues – to talk to and explain myself to other people – rather than just getting the work done more efficiently.

Well, I'd certainly want such an algorithm's output to be at least considered in the trial! Dunno if I'd want it to be the only deciding factor... probably not, but if such a rational truth engine could be devised (it probably can't), I'd want the jury to know what it came up with. But the point stands – some things we want to offload, some we don't.

There's a double danger to anthropomorphism. The first is that we treat machines like people, projecting personalities, intentions and thoughts onto artificial intelligences. Although these systems are extraordinarily sophisticated, they don't possess anything like a human mind, and it's very dangerous to act as though they do. For a start, they don't have a consistent worldview; they are miraculously brilliant forms of autocomplete, working on pattern recognition and prediction. This is very powerful, but they tend to hallucinate – to make up details that don't exist – and they will often contain various forms of bias or exclusion inherited from a particular training set. Yet an AI can respond quickly and plausibly to anything, and as human beings we are strongly predisposed to equate speed and plausibility with truth. That's a very dangerous thing.

The other danger of anthropomorphising technology is that it can lead us to think of and treat ourselves as though we were machines. But we are nothing like large language models: we are emotional creatures with minds and bodies, deeply influenced by our physical environment and by our bodily health and well-being. Perhaps most importantly, we shouldn't see [a machine's] efficiency as a model for human thriving. We don't want to optimise ourselves as perfectible components within some vast consequentialist system. The idea that humans have dignity, autonomy and potential is very ill-served by the desire to optimise, maximise and perfect ourselves.

#Technology
#AI
#Sociology

https://www.bbc.com/future/article/20240404-why-we-have-co-evolved-with-technology-tom-chatfield-wise-animals
