"Ultra-large AI models are over." "I don't mean 'over' as in 'you won't see a new large AI model ever again' but as in 'AI companies have reasons to not pursue them as a core research goal -- indefinitely.'" "The end of 'scale is all you need' is near."

He (Alberto Romero) breaks it down into technical, scientific, philosophical, sociopolitical, and economic reasons:

- Technical reasons: new scaling laws, prompt engineering limitations, suboptimal training settings, and unsuitable hardware.
- Scientific reasons: biological neurons are vastly superior to artificial neurons, dubious construct validity and reliability, the world is multimodal, and the AI art revolution.
- Philosophical reasons: what is AGI anyway, human cognitive limits, existential risks, and aligned AI, how?
- Sociopolitical reasons: the open-source revolution, the dark side of large language models, and bad for the climate.
- Economic reasons: the benefit-cost ratio is low, and good-enough models.

Personally, I find the "scientific reasons" most persuasive. I've been saying for a long time that we keep discovering the brain is more complex than previously thought. If that's true, it makes sense that there are undiscovered algorithms for intelligence we still need in order to make machine intelligence comparable to human intelligence. The estimates here (that simulating biological dendrites takes hundreds of artificial neurons, and simulating a whole biological neuron takes a thousand or so) fit well with that picture.

Having said that, the recent gains from simply scaling up the size of large language models have been impressive. On the other hand, notice that in the visual domain it's been algorithmic breakthroughs, in this case what are known as diffusion models, that have driven recent progress.

Ultra-large AI models are over

#solidstatelife #ai #openai #gpt3 #llms #agi