The obligatory overly-lengthy, "here's what I think of AI" post, focusing on #ChatGPT and #Bing. Yes, there are millions of these. But I wrote one anyway, so there.

To attempt a summary...
- #AI will probably be largely a positive force, even though the corporate culture in Big Tech is objectively shite. Don't confuse the technology with the companies.
- Chatbots do not reason like humans but still create demonstrably useful output. To an extent, how they work isn't actually relevant. Includes links to ~200 of my own experiments with ChatGPT.
- Claims about AI bringing forth the apocalypse are pure hype, of no substance whatever. AI doesn't think. It doesn't want. It doesn't feel. The apocalypse stuff only serves to promote its designers' claims of importance: look, it's soooper dangerous, so it must be amazingly powerful... don't you wish you too could have such awesome power? Of course you do.
- Progress in general doesn't feel like we expect it to. We're misled by hyped claims of sudden dramatic changes, whereas in reality things are usually gradual, with time to adapt, even if the tech development itself is substantial. We shift our baseline expectations and quickly forget what life was like before the change occurred, so long as that change is finite and limited.
- AI is currently useless for analysis. It's dumber than a bag of rocks. But it's genuinely great for freewheeling discussions and provoking ideas.
- Claims about how AI is going to render everyone obsolete are, I think, also largely hype. The tech as it stands is likely to be transformative in the way that, say, mobile phones have been, but nothing on the scale of the Agricultural Revolution.

Tagging @Will and the diaspora AI guy, @Wayne Radinsky.

https://astrorhysy.blogspot.com/2023/05/all-hail-our-benevolent-robot-overlords.html