Related to the end of my previous post: it's not that LLMs have no use. It's that everyone I know who likes them seems to forget their error rates, and how easily their hallucinations "sound right" while being very wrong. It's even worse behavior than copy/pasting shit out of Stack Overflow, or whatever the first search result out of AltaVista was in the early internet days. At least there, there was some hope of auditing/correction at some point. None here. #media #ai #rant #chatgpt
Three Bluesky posts on ChatGPT:

"The most maddening work trend I am seeing with increasing frequency is team members answering questions with "this is what ChatGPT says," pasting the output in chat absent any personal expertise or context. If I wanted a hallucinating robot's opinion, I would have asked it myself. Shit's insulting."

"I know it's a common trope to have tech folks opine about leaving the industry to live in a cabin in the woods away from society, but the effects of this generative AI shit has done more to validate this stance in my mind than anything else."

"I've had colleagues ask me for information or help with problems and after I tell them they counter with what ChatGPT or Copilot says. Like WTF who cares. I'm not going to argue the case against like slop. Don't waste my time if you'd prefer the slop."
