"Subprime Intelligence". Edward Zitron makes the case that: "We are rapidly approaching the top of generative AI's S-curve, where after a period of rapid growth things begin to slow down dramatically".

"Even in OpenAI's own hand-picked Sora outputs you'll find weird little things that shatter the illusion, where a woman's legs awkwardly shuffle then somehow switch sides as she walks (30 seconds) or blobs of people merge into each other."

"Sora's outputs can mimic real-life objects in a genuinely chilling way, but its outputs -- like DALL-E, like ChatGPT -- are marred by the fact that these models do not actually know anything. They do not know how many arms a monkey has, as these models do not 'know' anything. Sora generates responses based on the data that it has been trained upon, which results in content that is reality-adjacent."

"Generative AI's greatest threat is that it is capable of creating a certain kind of bland, generic content very quickly and cheaply."

I don't know. On the one hand, we've seen rapid bursts of progress in other technologies, only to be followed by periods of diminishing returns, sometimes long ones, before some breakthrough leads to the next rapid burst of advancement. On the other hand, the number of parameters in these models is still much smaller than the number of synapses in the brain, which might be an approximate point of comparison, so it seems plausible that continuing to make them bigger will in fact make them smarter and make the kinds of complaints you see in this article go away.
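To put rough numbers on that comparison: GPT-3's 175 billion parameters comes from the published paper (Brown et al., 2020), while the ~100 trillion synapse figure is a common neuroscience estimate, so treat this as strictly back-of-envelope:

```python
# Back-of-envelope comparison of model parameters vs. brain synapses.
# These are rough, order-of-magnitude figures, not precise counts.
gpt3_params = 175e9      # published in the GPT-3 paper (Brown et al., 2020)
brain_synapses = 1e14    # common neuroscience estimate: ~100 trillion

print(f"Synapses / GPT-3 parameters: ~{brain_synapses / gpt3_params:,.0f}x")
# -> roughly 571x, i.e. a couple of orders of magnitude of headroom,
# if synapse count is even a loose proxy for capacity.
```

Of course, whether a parameter is anything like a synapse is exactly the shaky part of the analogy.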

What do you all think? Are we experiencing a temporary burst of progress, soon to be followed by a period of diminishing returns? Or should we expect ongoing progress indefinitely?

Subprime Intelligence

#solidstatelife #ai #genai #llms #computervision #mooreslaw #exponentialgrowth