Survey of 2,700 AI researchers.

The average response placed each of the following within the next 10 years:

Simple Python code given spec and examples
Good high school history essay
Angry Birds (superhuman)
Answer factoid questions with web
World Series of Poker
Read text aloud
Transcribe speech
Answer open-ended fact questions with web
Translate text (vs. fluent amateur)
Group new objects into classes
Fake new song by specific artist
Answer undecided questions well
Top Starcraft play via video of screen
Build payment processing website
Telephone banking services
Translate speech using subtitles
Atari games after 20m play (50% vs. novice)
Finetune LLM
Construct video from new angle
Top 40 Pop Song
Recognize object seen once
All Atari games (vs. pro game tester)
Learn to sort long lists
Fold laundry
Random new computer game (novice level)
NYT best-selling fiction
Translate text in newfound language
Explain AI actions in games
Assemble LEGO given instructions
Win Putnam Math Competition
5km city race as bipedal robot (superhuman)
Beat humans at Go (after same # games)
Find and patch security flaw
Retail Salesperson

...and the following within the next 20 years:

Equations governing virtual worlds
Truck Driver
Replicate ML paper
Install wiring in a house
ML paper

... and the following within the next 40 years:

Publishable math theorems
High Level Machine Intelligence (all human tasks)
Millennium Prize
Surgeon
AI Researcher
Full Automation of Labor (all human jobs)

It should be noted that while these were the averages, there was very wide variance in the responses -- so a wide range of plausible dates.

"Expected feasibility of many AI milestones moved substantially earlier in the course of one year (between 2022 and 2023)."

If you're wondering what the difference between "High-Level Machine Intelligence" and "Full Automation of Labor" is, they said:

"We defined High-Level Machine Intelligence thus: High-level machine intelligence is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption."

"We defined Full Automation of Labor thus:"

"Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. [...] Say we have reached 'full automation of labor' when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers."

They go on to say,

"Predictions for a 50% chance of the arrival of Full Automation of Labor are consistently more than sixty years later than those for a 50% chance of the arrival of High Level Machine Intelligence."

That seems crazy to me. In my mind, as soon as feasibility is reached, cost will drop below human labor very quickly, and the technology will be adopted everywhere. That is what has happened with everything computers have automated so far.

"We do not know what accounts for this gap in forecasts. Insofar as High Level Machine Intelligence and Full Automation of Labor refer to the same event, the difference in predictions about the time of their arrival would seem to be a framing effect."

A framing effect that large?

"Since 2016 a majority of respondents have thought that it's either 'quite likely,' 'likely,' or an 'about even chance' that technological progress becomes more than an order of magnitude faster within 5 years of High Level Machine Intelligence being achieved."

"A large majority of participants thought state-of-the-art AI systems in twenty years would be likely or very likely to:"

  1. Find unexpected ways to achieve goals (82.3% of respondents),
  2. Be able to talk like a human expert on most topics (81.4% of respondents), and
  3. Frequently behave in ways that are surprising to humans (69.1% of respondents)

"Most respondents considered it unlikely that users of AI systems in 2028 will be able to know the true reasons for the AI systems' choices, with only 20% giving it better than even odds."

"Scenarios worthy of most concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%)."

"Respondents exhibited diverse views on the expected goodness/badness of High Level Machine Intelligence. Responses range from extremely optimistic to extremely pessimistic. Over a third of participants (38%) put at least a 10% chance on extremely bad outcomes (e.g. human extinction)."

Thousands of AI authors on the future of AI

#solidstatelife #ai #technologicalunemployment #futurology
