#philosophy

rhysy@diaspora.glasswings.com

Similar to the Matrix, Nozick's experience machine would be able to provide the person plugged into it with any experiences they wanted – like "writing a great novel, or making a friend, or reading an interesting book". No one who entered the machine would remember doing so, or would realise at any point that they were within it. But in Nozick's version, there were no malevolent AIs; it would be "provided by friendly and trustworthy beings from another galaxy". If you knew all that, he asked, would you enter the experience machine for the rest of your life?

Nozick believed people would not. The thought experiment was intended to demonstrate that reality, or authenticity, has some inherent value to us. While Cypher makes the decision to live in the Matrix when the alternative is continued resistance, Nozick proposed that most people would prefer the real world, in spite of the fact that the machine would definitively offer a more pleasurable life.

I would think that knowing it wasn't real (before you go in) would undermine things. I mean, if you were to write a "great" novel in the machine, what does that mean ? That you actually did write a great work that people in the real world would have enjoyed ? In which case you could have done so anyway, unless the machine actually boosted your brainpower (in which case, why trap you inside it forever ?). Or does it only give you the sensation of what it feels like to write a great novel without actually writing one ? In which case the hollowness of the experience would seem abundantly obvious. Surely it would be amusing for a bit, but not a whole-life thing.

In 2016, Hindriks and Igor Douven of Sorbonne University in France attempted to verify that intuition by surveying people's responses to the original thought experiment. They also asked whether participants would take an "experience pill" that operates similarly to the machine but allows the user to remain in the world, and a "functioning pill" that enhances the user's capabilities but not their perception of reality.

"Our first major finding was that people actually do respond in this way, by and large," Hindriks confirms. "Overall, people are rather reluctant to go along with this scenario where they would be hooked up to an experience machine." In their study, about 70% of participants rejected the experience machine, as originally constructed by Nozick.

"This is a rather extreme scenario, so we thought of two more realistic cases," Hindriks says. Their goal was to test whether versions of the experience machine that kept participants more in contact with reality would be more acceptable to them. They found that respondents were significantly more willing to take an experience pill – 53% agreed – and even more eager to take the functioning pill, with 89% opting in. "We think this fits quite well with Nozick's intuitions," Hindriks says "so, in that respect, it was more or less expected – but it's nice to have some evidence for it."

I can't imagine many people rejecting being able to actually have greater abilities at the flick of a switch, like uploading kung fu skills à la The Matrix. This is likely not possible though, as in Eagleman's Livewired the author makes it clear that knowledge isn't encoded in the same way in everyone's brain : it depends on all your other life experiences. So at the very least, the idea of straightforwardly uploading knowledge and skills isn't happening any time soon. It would have to account for the immense complexity of every single individual brain and adapt accordingly. Ain't happening.

As for those who are so desperate for companionship that they think AI chatbots really care about them, that's honestly a bit sad. That's not to say that AI/VR can't provide meaningful experiences : of course they can. If an AI teaches you something which you didn't know before, that's no different to if you read it in a book. If you accomplish something challenging in VR, that's no different to overcoming a physical problem. It's just that it can't do everything the real world can. I for one have no problem at all with having spent tens of hours playing Skyrim, but good lord, I would never say I made any friends there.

#AI
#Philosophy

https://www.bbc.com/future/article/20240321-experience-machines-thought-experiment-that-inspired-matrixs-greatest-question

rhysy@diaspora.glasswings.com

Ahh, it's nice to have such spare time as to be able to finally clear all the unread open tabs on my phone. Please ignore the passing reference to Musky here, who is totally irrelevant.

What happens if Arbaugh first thinks of moving his pawn to d3 but, within a fraction of a second, changes his mind and realises he would rather move it to d4? Or what if he is running through possibilities in his imagination, and the implant mistakenly interprets one as an intention? The stakes are low on a chess board, but if these implants became more common, the question of personal responsibility becomes more fraught. What if, for example, bodily harm to another person was caused by an implant-controlled action?

That's in terms of who-do-we-blame, of course, not should-we-develop-this, because obviously we should : paralysed people deserve not to be paralysed.

The crucial question in the contemplation conundrum is when does a "happening of imagination" turn to "intentional imagination to act"? When I apply my imagination to contemplate what words to use in this sentence, this is itself an intentional process. The imagination directed towards action – typing the words – is also intentional.

In terms of neuroscience, differentiating between imagination and intention is nearly impossible. A study conducted in 2012 by one group of neuroscientists concluded that there are no neural events that qualify as "intentions to act". Without the capability to recognise neural patterns that mark this transition in someone like Arbaugh, it could be unclear which imagined scenario is the cause of an effect in the physical world. This allows partial responsibility and ownership of the action to fall on the implant, raising again the questions of whether the actions are truly his, and whether they are a part of his personhood.

I wonder if the distinction between the mental imaginings of speculation and actual intention is subtle or gross. I would have guessed the latter, but it seems not so. I'm not sure the philosophy is terribly interesting here though, since we can and do already cause unintended effects with technology : if a ship at sea develops a fault and sinks, we don't blame the ship.

#Science
#Technology
#Philosophy

https://www.bbc.com/future/article/20240416-why-elon-musks-neuralink-brain-implant-reframes-our-ideas-of-self-identity

tord_dellsen@diasp.eu

#UnitedStates #philosophy #genocide #protest

https://twitter.com/s_m_marandi/status/1783761945553719337

rhysy@diaspora.glasswings.com

Excellent overview. I think if we could transfer the same systemic forces that make this work in science into politics, the world would be a happier place.

Though, there are different kinds of mistakes. Simple ignorance hardly counts as one in science, because overcoming it is precisely what science is for. Not thinking things through as fully as possible, reaching the wrong conclusion in spite of having all the data needed to arrive at a better one, is worse. This certainly happens and affects us all, because any research worth doing is necessarily messy and complicated. So that sort of mistake is probably very common and almost always understandable, forgivable, and to a large degree inevitable.

Then there's the case of not only having all the right data but ignoring someone who tells you you're wrong - and perhaps most importantly, exactly why you're wrong. You're less culpable for the other sorts of mistakes, but refusing to admit them at all is where you become accountable : when a mistake is not just knowable but actually known and communicated to you directly. That's when you most need to say "oops". I think these sorts of mistakes are pretty rare in science but absurdly common in politics. Most scientists will, if confronted with an issue directly, attempt to address it, even if the process of doing this might be a rowdy argument. By contrast most politicians seem to squirm and evade and resort to hurling insults, if not faeces, at their opponents.

#Science
#Philosophy

https://us7.campaign-archive.com/?e=601a60258c&u=9e5957c81cac843c342446e34&id=d71f08b817

rhysy@diaspora.glasswings.com

Mmm, there's much I agree with here, and some parts I strongly dispute.

I can't speak to the women in science aspect so I'll just assume it's all true and therefore obviously in need of a complete overhaul.

The grant system is indeed stupid. Applications that take a long time to prepare, even longer to evaluate, and then come back with glowing reports that say, "but this just isn't a priority for us" - yeah, that's a stupid system. So is the academic career path of being expected to do (at a typical minimum) two postdocs in widely-separated locations before settling into a permanent position. The less said about publish-or-perish the better : it's a daft way to evaluate performance, if such a thing is even possible at the forefront of knowledge. All this I've ranted about myself, ad nauseam.

But as to the claims that an entire discipline has been undermined... nah. Or that most academic papers are bullshit... nah. At least not to any great degree beyond what's inevitable if you're doing your job properly, which is to say, investigating the forefront of knowledge, where no-one can tell you if you're right or wrong because no-one else knows. The old quote that it wouldn't be research if you knew what you were doing is emphatically true. You can't avoid mistakes in such a process; I've never seen Sabine clearly explain what she thinks everyone is doing that's so wrong, in a way that's presumably different from the necessary method of progress that includes these inevitable mistakes.

#Science
#Academia
#Philosophy

https://youtu.be/LKiBlGDfRU8?si=qBuOjmUlov7F5qJY

reverendelvis@spora.undeadnetwork.de

The fundamental error of the Enlightenment and humanism is the adherence to the Kantian imperative. This quasi-religious dualism of good and evil. This belief that we can achieve a state in which everyone acts rationally. That's where all the madness we have to endure today comes from. This magical world of words and symbols that supposedly bring about good or evil and therefore have to be banished or constantly retrieved. This fear of those who think differently, it is considered "good" not to accept the opinion of others. This complete ignorance of dialectics. Dialectics is the recognition of permanent contradiction. We have to negotiate everything over and over again. There is no other way! Those who cross the boundaries, even in the "bad", are the ones who constantly adjust and calibrate society. They should be honoured and taken seriously and not hunted down and burned at the stake like a crazy medieval pitchfork mob... #kant #philosophy #dialectics #ethics