#agi

vivant12@diaspora-fr.org

The continuation of DEMAIN l'HOMME at this link: https://demainlhomme.org - For a happy Era to come and pure air. Demain l'Homme: No ecology without social justice! No science without conscience! - #ecology #environnement #social #technology #IA #AGI #générationsfutures #futuro #science #sciencefiction #conscience #consciencedesoi #conscienceeteveil #consciousness #conscienceeteveilspirituel #nature #naturephotography #naturelovers #philosophy #earth #humanity #humour #human #Guerrero #demainlhomme @demainlhomme contact@demainlhomme.org

vivant12@diaspora-fr.org

First cut of the trailer for the Association's next film: http://youtu.be/vWXkKD2UlJI?si=hGf0o4PaxfMpneGe Release date: early 2025, exclusively on https://demainlhomme.org #IA #Aliens #social #créativité #AGI #Quantum #philo #insolite #FutureOfHumanity #techno #ecology #earth #artbots

Testimony of a young man contacted in the 1970s. This unusual testimony will be posted in full at: https://demainlhomme.org

waynerad@diasp.org

"AI progress has plateaued at GPT-4 level",

"According to inside reports, Orion (codename for the attempted GPT-5 release from OpenAI) is not significantly smarter than the existing GPT-4. Which likely means AI progress on baseline intelligence is plateauing."

"Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training -- the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures -- have plateaued."

The article points out that models are now trained on essentially all the knowledge humans have created. OpenAI has called many models "GPT-4-something". OpenAI never released Sora, and it now seems common for companies not to release models to the public. A lot of internal models are probably just not good enough to release.

He says new techniques like OpenAI o1's "chain of thought" system aren't as good as you'd expect from the amount of power they consume.

"Improvements look ever more like 'teaching to the test' than anything about real fundamental capabilities."

"The y-axis is not on a log scale, while the x-axis is, meaning that cost increases exponentially for linear returns to performance."

"What I'm noticing is that the field of AI research appears to be reverting to what the mostly-stuck AI of the 70s, 80s, and 90s relied on: search."

"AlphaProof just considers a huge number of possibilities."

"I think the return to search in AI is a bearish sign, at least for achieving AGI and superintelligence."

This is all very interesting because until now, I've been hearing there's no limit to the scaling laws, only limits on how many GPUs people can get their hands on and how much electricity they can procure, hence the plans to build nuclear power plants, and so on. People saying there's a "bubble" in AI haven't been saying that because of a problem in scaling up, but because the financial returns aren't there -- OpenAI et al are losing money -- and the thinking is investors will run out of money to invest, resulting in a decline.

I've speculated there might be diminishing returns coming because we've seen that previously in the history of AI, but you all have been telling me I'm wrong -- AI will continue to advance at the blistering pace of the last few years. But it looks like we're now seeing the first signs we're actually reaching the domain of diminishing returns -- at least until the next algorithmic breakthrough. It looks like we may be approaching the limits of what can be done by scaling up pre-trained transformer models.

AI progress has plateaued at GPT-4 level

#solidstatelife #ai #agi #genai #llms #multimodal

waynerad@diasp.org

"Four futures for cognitive labor."

"First: the printing press. If you were an author in 1400, the largest value-add you brought was your handwriting." "With hindsight we see that even as all the terrifying extrapolations of printing automation materialized, the income, influence, and number of authors soared."

"Second: the mechanization of farming." "The per-capita incomes of farmers have doubled several times over but there are many fewer farmers, even in absolute numbers."

"Third: computers. Specifically, the shift from the job title of computer to the name of the machine that replaced it. " "This industry was replaced by a new industry of people who programmed the automation of the previous one."

"Finally: the ice trade. In the 19th and early 20th centuries, before small ice machines were common, harvesting and shipping ice around the world was a large industry employing hundreds of thousands of workers." "By WW2 the industry had collapsed and been replaced by home refrigeration."

Four futures for cognitive labor

#solidstatelife #ai #genai #agi #technologicalunemployment

nowisthetime@pod.automat.click

Source: https://youtube.com/watch?v=xhCi20jbWq0
#agi

00:00 Opening Introduction
03:25 Insider Perspectives
08:08 Model Predictions
12:22 Whistleblower Testimony
13:08 Safety Concerns
15:34 Board Oversight
20:32 Watermark Technology
24:28 Google SynthID
28:50 Team Departures
31:46 Legal Restrictions
34:08 A.G.I. Timeline
37:44 Task Specialization

waynerad@diasp.org

"As humanity gets closer to Artificial General Intelligence (AGI), a new geopolitical strategy is gaining traction in US and allied circles, in the natonal security, AI safety and tech communities. Anthropic CEO Dario Amodei and RAND Corporation call it the 'entente', while others privately refer to it as 'hegemony' or 'crush China'."

Max Tegmark, physics professor at MIT and president of the Future of Life Institute, argues that, "irrespective of one's ethical or geopolitical preferences," the entente strategy "is fundamentally flawed and against US national security interests."

He is reacting to Dario Amodei saying:

"... a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries' access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition's strategy to promote democracy (this would be a bit analogous to 'Atoms for Peace'). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe."

"This could optimistically lead to an 'eternal 1991' -- a world where democracies have the upper hand and Fukuyama's dreams are realized."

Tegmark responds:

"Note the crucial point about 'scaling quickly', which is nerd-code for 'racing to build AGI'."

"From a game-theoretic point of view, this race is not an arms race but a suicide race. In an arms race, the winner ends up better off than the loser, whereas in a suicide race, both parties lose massively if either one crosses the finish line. In a suicide race, 'the only winning move is not to play.'"

"Why is the entente a suicide race? Why am I referring to it as a 'hopium' war, fueled by delusion? Because we are closer to building AGI than we are to figuring out how to align or control it."

The hopium wars: the AGI entente delusion -- LessWrong

#solidstatelife #ai #agi #aiethics

waynerad@diasp.org

"Machines of Loving Grace". Dario Amodei, CEO of Anthropic, wrote an essay about what a world with powerful AI might look like if everything goes right.

"I think and talk a lot about the risks of powerful AI. The company I'm the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I'm a pessimist or "doomer" who thinks AI will be mostly bad or dangerous. I don't think that at all. In fact, one of my main reasons for focusing on risks is that they're the only thing standing between us and what I see as a fundamentally positive future."

"In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields -- biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc."

"In addition to just being a 'smart thing you talk to', it has all the 'interfaces' available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on."

"It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary."

"It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use."

"The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with."

"Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks."

Sounds to me like automation of all work, but he doesn't address that until nearly the end. Before that, he talks about all the ways he thinks AI will improve "biology and physical health", "neuroscience and mental health", "economic development and poverty", and "peace and governance".

AI will advance CRISPR, microscopy, genome sequencing and synthesis, optogenetic techniques, mRNA vaccines, cell therapies such as CAR-T, and more due to conceptual insights we can't even predict today.

AI will prevent or treat nearly all infectious disease, eliminate most cancer, prevent or cure genetic diseases, improve treatments for diabetes, obesity, heart disease, autoimmune diseases, give people "biological freedom" with physical appearance and other biological processes under people's individual control, and double human lifespan (to 150).

AI will cure most mental illnesses like PTSD, depression, schizophrenia, and addiction. AI will figure out how to alter brain structure in order to change psychopaths into non-psychopaths. "Non-clinical" everyday psychological problems like feeling drowsy or anxious or having trouble focusing will be solved. AI will increase the amount of "extraordinary moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace" people experience.

Economically, AI will make health interventions cheap and widely available; AI will increase crop yields and develop technology like lab-grown meat that increases food security; AI will develop technology to mitigate climate change; and AI will reduce inequality within countries, just as the poor have the same mobile phones as the rich today -- there is no such thing as a "luxury" mobile phone.

Regarding "peace and governance", he advocates an "entente strategy", "in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries' access to key resources." This would prevent dictatorships from gaining the upper hand. If democracies have the upper hand globally, that helps with "the fight between democracy and autocracy within each country." "Democratic governments can use their superior AI to win the information war: they can counter influence and propaganda operations by autocracies and may even be able to create a globally free information environment by providing channels of information and AI services in a way that autocracies lack the technical ability to block or monitor."

Finally he gets to "work and meaning", where he says, "Comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the '10%' expands to continue to employ almost everyone."

"However, I do think in the long run AI will become so broadly effective and so cheap that this will no longer apply. At that point our current economic setup will no longer make sense, and there will be a need for a broader societal conversation about how the economy should be organized. While that might sound crazy, the fact is that civilization has successfully navigated major economic shifts in the past: from hunter-gathering to farming, farming to feudalism, and feudalism to industrialism."

Wish I could share his optimism that "civilization" will "successfully" "navigate" this "major economic shift". As those of you who've been hanging around me for any length of time know, I think the major effect of technology competing against humans in the labor market is decreased fertility, rather than an immediate drop in the employment rate, which is what everyone expects because it seems more intuitive. He makes no mention of fertility in this context (he mentions it only in the context of fertility treatments being something AI will advance), so I think it's not on his radar at all. He considers "opt-out" a "problem" whereby "Luddite movements" create a "dystopian underclass" by opting out of the benefits of AI technology, yet it is the "opt-out" people, like the Amish, who today are able to maintain high fertility rates, and who as such will make up the majority of the human population living on this planet in the future (something you can confirm for yourself by doing some math).

The original essay is some 14,000 words and my commentary above is just 1,000 or so, so you should probably read the original to get his full, unfiltered point of view.

Machines of Loving Grace

#solidstatelife #ai #agi #aiethics #technologicalunemployment

waynerad@diasp.org

OpenAI o1 is so smart, humans are no longer smart enough to create test questions that measure how smart it is. Discussion between Alan D. Thompson and Cris Sheridan. OpenAI o1 beats PhD-level experts across the board on tests we humans have made to test how intelligent other humans are. PhD-level humans are trying to come up with new questions, but it is hard for other PhD-level humans to even understand the questions and verify the answers.

OpenAI reset the numbering, instead of continuing with the "GPT" series, because they think this is a new type of model. The "o" actually just means "OpenAI" so when I say "OpenAI o1", I'm really saying "OpenAI OpenAI 1".

You might think, if this is a new type of model, we'd know what type of model it is. Nope. OpenAI has not told us anything. We don't know what the model architecture is. We don't know how many parameters it has. We don't know how much compute was used to train it, or how much training data it used. We don't know what token system is used or how many tokens.

All we really know is that "chain-of-thought" reasoning has been built into the model in a way previous models never had built into them. (Called "hidden chain of thought", but not necessarily hidden -- you are allowed to see it.) This "chain-of-thought" system is guided by reinforcement learning in some way, but we don't know how that works.

The "system card" that OpenAI published mainly focuses on safety tests. Jailbreak evaluations, hallucinations, fairness and bias, hate speech, threats, and violence, chain-of-thought deception, self-knowledge, theory of mind, political persuasion, "capture-the-flag" (CTF) computer security challenges, reverse engineering, network exploits, biological threat creation.

It has some evaluation of "agentic" tasks (things like installing Docker containers), and multi-lingual capabilities.

Anyway, OpenAI is called "Open" AI but is becoming increasingly secretive.

That and we appear to have entered a new era where AI systems are smarter than the humans that make the tests to test how smart they are.

Interview about AI - Dr Alan D. Thompson on OpenAI's New o1 Model Is a Really Big Deal (Sep/2024) - Dr Alan D. Thompson

#solidstatelife #ai #genai #llms #agi

waynerad@diasp.org

François Chollet has created a test for neural networks called ARC, which stands for Abstraction and Reasoning Corpus. The test reminds me of Raven's progressive matrices (the IQ test), but it uses larger grids and up to 10 unique symbols. Grids can go as large as 30x30.

The test is specifically designed to be resistant to memorization, and to require the test-taker to try new ideas.
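
For concreteness, here's what an ARC-style task looks like as data. This is a made-up mini-task in the corpus's grid format, not an actual ARC puzzle:

```python
# ARC tasks are pairs of input/output grids of integers 0-9 (colors),
# up to 30x30. This mini-task is invented for illustration; the hidden
# rule here is "recolor every 1 to 2".
train_pair = {
    "input":  [[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]],
    "output": [[0, 2, 0],
               [2, 2, 2],
               [0, 2, 0]],
}

def inferred_rule(grid):
    # A solver must induce a transformation like this from a few examples,
    # then apply it to a new, unseen test input.
    return [[2 if cell == 1 else cell for cell in row] for row in grid]

assert inferred_rule(train_pair["input"]) == train_pair["output"]
```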

In the discussion here (a video conversation between François Chollet and Dwarkesh Patel), they discuss how current large language models (LLMs) essentially do a lot of memorization.

I found it a fascinating discussion. If you study data science, one of the very first things you learn is the concept of "overfitting", where instead of learning the "general pattern", your model essentially memorizes the input points. Such a model does a bad job on input data points it has not seen before, or that aren't close enough to data points it has seen before.
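
Here's a quick sketch of that textbook failure mode (my own construction, using the classic polynomial-fit illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=10)  # noisy training data

# Degree-9 polynomial: 10 coefficients for 10 points, so it can memorize
# the training points essentially exactly...
memorizer = np.polynomial.Polynomial.fit(x, y, deg=9)
# ...while a degree-3 polynomial is forced to capture the broad trend.
generalizer = np.polynomial.Polynomial.fit(x, y, deg=3)

x_new = np.linspace(0.01, 0.99, 200)   # unseen inputs
true_y = np.sin(2 * np.pi * x_new)
print("deg-9 max error on training points:", np.abs(memorizer(x) - y).max())
print("deg-9 max error on unseen points:  ", np.abs(memorizer(x_new) - true_y).max())
print("deg-3 max error on unseen points:  ", np.abs(generalizer(x_new) - true_y).max())
# Typical result: the degree-9 fit is near-perfect on its own training
# points but does worse on points it never saw, while the lower-degree
# fit generalizes better. That is overfitting.
```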

One of the mysteries of neural networks is how, as you make the models larger and larger, they don't overfit, but continue to "generalize", to learn general patterns.

However, it seems like, even though today's large language models (LLMs) don't overfit in the traditional statistical sense, they nonetheless rely heavily on memorization. You can ask an LLM questions from various tests made for humans, like the US Medical Licensing Exam (USMLE), and it can outperform most humans, but it does so by relying on a vast amount of memorized input patterns.

If you give an LLM problems that are different enough from the input it has memorized, it will be unable to solve them, even if those same problems are easy for humans, even human children, to solve using simple reasoning.

Such is the claim made by François Chollet, and he and his collaborators are willing to put money on the line, offering $1 million in prize money to anyone who can make a neural network model that can beat the test. Apparently the test was originally invented in 2019, and while LLMs have seen dramatically increasing scores on other tests made for humans, there's only been slight improvement on the ARC test.

In the discussion, they talk a lot about "system 1" and "system 2". These terms come from Daniel Kahneman, who hypothesized that the brain has a "system 1" that does its thinking in a fast, automatic, intuitive, effortless way, and a "system 2" that is slow, deliberate, conscious, and effortful, and which is required to solve complex problems demanding careful reasoning. François Chollet hypothesizes that humans always use a combination of "system 1" and "system 2" and are never pure "system 1" or "system 2" thinkers. And this simple fact enables humans, even human children with relatively little memorized knowledge, to engage in reasoning beyond what LLMs are capable of.

I find that an intriguing concept because, subjectively, it seems like while LLMs are sometimes astonishingly brilliant, they also sometimes make surprising mistakes, and their knowledge often seems to be shallow: getting the "surface style" exactly right initially but floundering if you try to dig too deep underneath it. So subjectively, it does seem like a phenomenon somehow analogous to "overfitting" is actually taking place, though it's hard to pin down exactly what it is.

It will be interesting to see if anyone steps up to the plate and claims the $1 million prize any time soon.

Francois Chollet - LLMs won’t lead to AGI - $1,000,000 Prize to find true solution - Dwarkesh Patel

#solidstatelife #ai #agi

waynerad@diasp.org

Yann LeCun says, "Reasoning, as I define it, is simply not doable by a system that produces a finite number of tokens, each of which is produced by a neural net with a fixed number of layers."

Chris Anderson says, "Do you have a theory about what it is the human brain has -- LLMs don't have -- that allows it to reason? A lot of the output of LLMs would be regarded as well-reasoned if it came out of humans!"

Yann LeCun says, "Yes, I do."

"LLMs produce their answers with a fixed amount of computation per token. There is no way for them to devote more (potentially unlimited) time and effort to solving difficult problems. This is very much akin to the human fast and subconscious 'System 1' decision process."

"True reasoning and planning would allow the system to search for a solution, using a potentially unlimited unlimited time for it. This iterative inference process is more akin to the human deliberate and conscious 'System 2'."

"This is what allows humans and many animals to find new solutions to new problems in new situations."

Yes, I do. LLMs produce their answers - Yann LeCun

#solidstatelife #ai #genai #llms #agi

digit@iviv.hu

#listen #consider

https://soundcloud.com/drmercola/big-data-transhumanism-and-why

#whitneywebb #mercola

#cbdc #voluntaryfirst #involuntary #totalcontrol #centralbankdigitalcurrencies #massrejectcbdcs #whitneywebb #mercola #convenience #trap #wakeup #compromises #complicity #dependence #disempowerment #independence #duress #massadoption #foodstamps #forced #uptake #reduced #standardofliving #controlsystem #controlsystemdisguisedasamonetarysystem #monetarysystem #fakechoice #fightit #fightback #sayno #voluntaryphase #tradeandbarter #massadoption #massrejection #parallelsystems #community #supporteachother #remainoptimistic #getwise #wearenotaminority #speakout #ispytotalitariantiptoe #ifweallnarutoruntogether #accounts #censorship #nudgeunit #psyop #perceptionmanagement #problemreactionsolution #rememberwhenwesaidno #ifyoucanbetoldwhatyoucanseeorreadthenitfollowsyoucanbetoldwhattosayorthink #socialmedia #propaganda #socialmanipulation #intellectualphaselocking #groupthink #riggedpsychgelogicalmanipulationuserinterface #theeverythingapp #musk #datamining #wifi #biologicalcost #privacy #profiling #precrime #harpa #advertisingormarketing #arpa-h #biotech #bigpharma #siliconvalley #nationalsecurity #cia #hhs #fda #googlehealth #normalisedregulatorycapture #bigactors #agenda #transhumanism #theneweugenics #eugenics #ARPA-H #borg #totalinformationawareness #masssurveilance #glaxosmitklien #galvanibioelectronics #palanteer #fascbook #pentagon #terroristinformationawareness #corporatarchy #scamarchy #aipredictive #scam #scamdemic #powergrab #racketeering #robberbarons #biosecurity #coinflip #arbritrary #corruption #insanity #agi #gpt3 #gpt4 #plans #thehourislaterthanyouthink #aimarketing #aisingularity #leadbyfools #kissinger #skynet #theworkofman #themanbehindthecurtain #themonkeyinthemachine #trustmeiamanai #thegreatergood #notnormal #deskkillers #scapegoatai #externalities #normalisedattrocities #totalitarianism #dataism #croneyism #cantgettherefromhere

#remainoptimistic

#wecanstillmendthis

#watchthemcollapse #isitsowndestruction

waynerad@diasp.org

Sparks of artificial general intelligence (AGI): Early experiments with GPT-4. I still haven't finished reading the "Sparks of AGI" paper, but I discovered this video of a talk by the leader of the team that did the research, Sébastien Bubeck, so you can get a summary of the research from one of the people who did it instead of from me.

He talks about how they invented tests of basic knowledge of how the world works that would be exceedingly unlikely to appear anywhere in the training data, so it can't just regurgitate something it read somewhere. What they came up with is asking it how to stack a book, 9 eggs, a laptop, a bottle, and a nail onto each other in a stable manner.

They invented "theory of mind" tests, like asking where John and Mark think the cat is when they both saw John put the cat in a basket, but then John left the room and went to school and Mark took the cat out of the basket and put it in a box. GPT-4 not only says where John and Mark think the cat is, but, actually, since the way the exact question was worded, to just ask what "they" think, GPT-4 also says where the cat thinks it is.

Next he gets into definitions of intelligence that date back to the 1990s, and looks at how well GPT-4 measures up to them. This is the main focus of the paper. These definitions involve such things as the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. GPT-4 succeeds at some of these but not others. For example, GPT-4 doesn't do planning. (This was before AutoGPT, for what it's worth.) And GPT-4 doesn't learn from experience: when you interact with it, it relies on its training data, and its interactions with you are not part of that. (It does have a buffer that acts as short-term memory, which keeps the back-and-forth chat interaction coherent.)

"Can you write a proof that there are infinitely many primes, with every line that rhymes?" Just a "warm up" question.

"Draw a unicorn in TikZ." This is supposed to be hard because it should be hard to tell what code in TikZ, an annoyingly cryptic programming language, apparently (I never heard of it before) for vector graphics drawing (intended to be invoked inside LaTeX, a language for typesetting mathematical notation), creates any particular visual image without being able to "see". This was before GPT had its "multimodal" vision input added. It managed to come it with a very cartoony "unicorn", suggesting it had some ability to "see" even though it was only a language model.

"Can you write a 3D game in HTML with Javascript, I want: There are three avatars, each is a sphere. The player controls its avatar using arrow keys to move. The enemy avatar is trying to catch the player. The defender avatar is trying to block the enemy. There are also random obstacles as cubes spawned randomly at the beginning and moving randomly. The avatars cannot cross those cubes. The player moves on a 2D plane surrounded by walls that he cannot cross. The wall should cover the boundary of the entire plane. Add physics to the environment using cannon. If the enemy catches the player, the game is over. Plot the trajectories of all the three avatars."

Going from ChatGPT (GPT-3.5) to GPT-4, it goes from generating a 2D game to a 3D game as asked for.

He then gets into the coding interview questions. Here is where GPT-4's intelligence really shines: 100% of Amazon's on-site interview sample questions, 10 out of 10 problems solved, in 3 minutes 59 seconds of the allotted 2-hour time slot. (Most of that time was Yi Zhang cutting and pasting back and forth.)

The paper goes far beyond the talk in this. In the paper they describe LeetCode's Interview Assessment platform, which provides simulated coding interviews for software engineer positions at major tech companies. GPT-4 solves all questions from all three rounds of interviews (titled online assessment, phone interview, and on-site interview) using only 10 minutes in total of the allotted 4.5 hours.

They challenged it to do a visualization of IMDb data. They challenged it to do a Pyplot (Matplotlib) visualization of a math formula with vague instructions about colors, and it created an impressive visualization. They challenged it to create a GUI for a Python program that draws arrows, curves, rectangles, etc.

They challenged GPT-4 to give instructions on how to find the password in a macOS executable, which it does by telling the user to use a debugger called LLDB and a Python script. (The password was simply hardcoded into the file, so it wasn't stored in a way that uses modern cryptographic techniques.)

They tested GPT-4's ability to reason about (mentally "execute") pseudo-code in a nonexistent programming language (that looks something like R), which it is able to do.

"Can one reasonably say that a system that passes exams for software engineering candidates is not really intelligent?"

"In its current state, we believe that GPT-4 has a high proficiency in writing focused programs that only depend on existing public libraries, which favorably compares to the average software engineer's ability. More importantly, it empowers both engineers and non-skilled users, as it makes it easy to write, edit, and understand programs. We also acknowledge that GPT-4 is not perfect in coding yet, as it sometimes produces syntactically invalid or semantically incorrect code, especially for longer or more complex programs. [...] With this acknowledgment, we also point out that GPT-4 is able to improve its code by responding to both human feedback (e.g., by iteratively refining a plot) and compiler / terminal errors."

The reality of this capability really hit me when Google Code Jam was canceled. I've done it every year for 15 years and poof! Gone. It's because of AI. If they did Code Jam this year, they wouldn't be testing people's programming ability, they'd be testing people's ability to cut-and-paste into AI systems and prompt AI systems. And since Code Jam is a recruiting tool for Google, the implication of this is that coding challenges as a way of hiring programmers is over. And the larger implication of that is that employers don't need people who are algorithm experts who can determine what algorithm applies to a problem and competently code it any more. Or very soon. They need "programmer managers" who will manage AI systems that actually write the code.

Going back from the paper, where GPT-4 succeeded at pretty much everything, to the talk: in the talk he discusses GPT-4's limitations in math ability. I feel this is pretty much a moot point since GPT-4 has been integrated with Wolfram|Alpha, which can perform all the arithmetic calculations desired without mistakes. But that all happened after the paper was published and this talk was recorded -- even though that was only 3 weeks ago. Things are going fast. Anyway, what he shows here is that GPT-4, as a language model, isn't terribly good at arithmetic. It does pretty well at linguistic reasoning about mathematical problems, though, up to a point.

Sparks of AGI: Early experiments with GPT-4 - Sebastien Bubeck

#solidstatelife #ai #generativemodels #nlp #llms #gpt #agi

waynerad@diasp.org

"Ultra-large AI models are over." "I don't mean 'over' as in 'you won't see a new large AI model ever again' but as in 'AI companies have reasons to not pursue them as a core research goal -- indefinitely.'" "The end of 'scale is all you need' is near."

He (Alberto Romero) breaks it down into technical reasons, scientific reasons, philosophical reasons, sociopolitical reasons, and economic reasons. Under technical reasons he's got new scaling laws, prompt engineering limitations, suboptimal training settings, and unsuitable hardware. Under scientific reasons he's got biological neurons being vastly more capable than artificial neurons, dubious construct validity and reliability, the world being multimodal, and the AI art revolution. Under philosophical reasons he's got "what is AGI, anyway?", human cognitive limits, existential risks, and "aligned AI, how?". Under sociopolitical reasons he's got the open-source revolution, the dark side of large language models, and harm to the climate. Under economic reasons he's got the low benefit-cost ratio and good-enough models.

Personally, I find the "scientific reasons" most persuasive. I've been saying for a long time that we keep discovering the brain is more complex than previously thought. If that's true, it makes sense that there are undiscovered algorithms for intelligence we still need in order to make machine intelligence comparable to human intelligence. If the estimates here are right that to simulate biological dendrites you need hundreds of artificial neurons, and to simulate a whole biological neuron you need a thousand or so, that fits well with this picture.

Having said that, the recent gains from simply scaling up the size of large language models have been impressive. But notice that in the visual domain, it's been algorithmic breakthroughs, in this case what are known as diffusion networks, that have driven recent progress.

Ultra-large AI models are over

#solidstatelife #ai #openai #gpt3 #llms #agi

seebrueckeffm@venera.social

🔴 The Viminale now wants to evacuate 1,878* people from #Lampedusa within 4 days

🚢MM San Marco⚓️🇮🇹
600 #PortoEmpedocle
600 #Pozzallo
400 #Augusta

🚢GDF🇮🇹
150 #PortoEmpedocle

🚢🚢CP⚓️🇮🇹
128 #PortoEmpedocle

OSINT @radioradical / data * #AGI
via @scandura
🧵


https://twitter.com/scandura/status/1545855663213101056

#migranti

waynerad@diasp.org

Superintelligence means many humans will have to radically rethink their purpose in life. "Having a sense of purpose involves having a goal and structuring one's life around that goal. There are different sorts of goals. Some are more subject-oriented and others more world-oriented. For example, one person may have as a goal to learn how to play the flute. It can be intrinsically satisfying to learn a new skill and to play an instrument well, and no one else can learn how to play the flute for that person -- that is something only they can do for themselves; nor does it matter that other people can play the flute better. But another person may have as a goal to provide for their family, or to make a great work of art, or to help the disadvantaged, or to advance a field of science or philosophy. Those are things that others can do, too; nobody with a goal like that has a monopoly on their goal."

"What will happen to those goals in the future? There is a real possibility that we create artificial superintelligence in our lifetimes." "That would mean there's an agent out there that is better than the best of us at providing economically, creating art, helping the disadvantaged, making scientific and philosophical progress and just about anything else that we may want to do. Then the entire second class of goals -- world-oriented goals -- will be meaningless to pursue, as the superintelligence can achieve them for us, far more easily, quickly and efficiently than we could ever hope to."

Does human purpose have anywhere to retreat to?

#solidstatelife #ai #agi

seebrueckeffm@venera.social

📍 505 people coming from #Libya reached Pozzallo the night before last.

🚢#CP323 CG⚓️🇮🇹, 2 GDF patrol boats & the tug 🚢 #NosAires of the Vega platform⛽️🛢️ were at the SAR location 4 nautical miles south of the coast.
The fishing boat came from Tobruk.

via @scandura @RadioRadicale
📷 Giada Drocker #AGI


https://twitter.com/scandura/status/1513840430332452864

#Libia

luca972@joindiaspora.com

Stopping meat production would save the planet. A study

A team of researchers from Berkeley and Stanford evaluated the climate impact associated with eliminating meat production. And the results are striking.

02 February 2022

AGI - Suspending meat production, including the closure of livestock farms, could substantially alter the trajectory of global warming and could potentially save the planet. Described in the journal PLOS Climate, this perspective was presented by scientists from the University of California, Berkeley and Stanford University, who assessed the climate impact associated with eliminating meat production.

The team, led by Michael Eisen and Patrick Brown, used a climate model combined with a series of simulations to test the consequences of eliminating livestock-related emissions. According to the research group's results, the suspension of meat production and the consequent decrease in methane and nitrous oxide emissions would lead to the conversion of 800 billion tons of carbon dioxide into forests, grasslands, woodlands, and biomass.

The resulting benefit, the scientists report, would be comparable to an annual decrease in global CO2 emissions of 68 percent. "Our work," observes Brown, CEO of Impossible Foods Inc., a company that sells plant-based products designed to replace meat, "shows that the end of livestock farming could significantly reduce the levels of three major greenhouse gases: carbon dioxide, methane, and nitrous oxide. Our thesis is that the suspension of livestock farming should be a priority for the coming years."

The study's conclusions suggest that phasing out meat farming over 15 years would help reduce all methane emissions globally by more than 30 percent.

"Animal products are central to nutrition," observes Eisen, "providing about 18 percent of energy requirements and 40 to 45 percent of proteins and lipids. Currently there are 400 million people on entirely plant-based diets. There is convincing evidence that animal agriculture can be replaced entirely by alternative solutions with nutritional and sensory properties comparable to meat."


#Agi #Specism #Deforestation #Capitalism #CambiamentiClimaticiEstremi #ExtremeClimateChange #ChangementClimatiqueExtrême #CambioClimáticoExtremo #WorldClimateChange #WccReport2020 #LifeGuardians #Vegan #GoVeg #Antispecism #StopConsumism #StopCapitalism #LetteraAlleGenerazioniFuture