#technologicalunemployment

waynerad@diasp.org

The end of classical computer science is coming, and most of us are dinosaurs waiting for the meteor to hit, says Matt Welsh.

"I came of age in the 1980s, programming personal computers like the Commodore VIC-20 and Apple IIe at home. Going on to study computer science in college and ultimately getting a PhD at Berkeley, the bulk of my professional training was rooted in what I will call 'classical' CS: programming, algorithms, data structures, systems, programming languages."

"When I was in college in the early '90s, we were still in the depth of the AI Winter, and AI as a field was likewise dominated by classical algorithms. In Dan Huttenlocher's PhD-level computer vision course in 1995 or so, we never once discussed anything resembling deep learning or neural networks--it was all classical algorithms like Canny edge detection, optical flow, and Hausdorff distances."

"One thing that has not really changed is that computer science is taught as a discipline with data structures, algorithms, and programming at its core. I am going to be amazed if in 30 years, or even 10 years, we are still approaching CS in this way. Indeed, I think CS as a field is in for a pretty major upheaval that few of us are really prepared for."

"I believe that the conventional idea of 'writing a program' is headed for extinction, and indeed, for all but very specialized applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed."

"I'm not just talking about CoPilot replacing programmers. I'm talking about replacing the entire concept of writing programs with training models. In the future, CS students aren't going to need to learn such mundane skills as how to add a node to a binary tree or code in C++. That kind of education will be antiquated, like teaching engineering students how to use a slide rule."

"The shift in focus from programs to models should be obvious to anyone who has read any modern machine learning papers. These papers barely mention the code or systems underlying their innovations; the building blocks of AI systems are much higher-level abstractions like attention layers, tokenizers, and datasets."

This got me thinking: Over the last 20 years, I've been predicting AI would advance to the point where it could automate jobs, and it's looking more and more like I was fundamentally right about that, and all the people who poo-poo'd the idea over the years in conversations with me were wrong. But while I was right about that fundamental idea (and right that there wouldn't be "one AI in a box" that anyone could pull the plug on if something went wrong, but rather a diffusion of the technology around the world like every previous technology), I was wrong about how exactly it would play out.

First, I was wrong about the timescales: I thought it would be necessary to understand much more about how the brain works and to work algorithms derived from neuroscience into AI models, and looking at the rate of advancement in neuroscience, I predicted AI wouldn't reach its current state for a long time. As it turned out, while broad concepts like "neuron" and "attention" have been incorporated into AI, practically no specific algorithms have been ported from brains to AI systems.

Second, I was wrong about the order: I thought "routine" jobs would be automated first and "creative" jobs last. It turns out that what matters is "mental" vs "physical". Computers can create visual art and music just by thinking very hard -- it's a purely "mental" activity, and computers can do all that thinking in bits and bytes.

This has led me to ponder: What occupations require the greatest level of manual dexterity?

Those should be the jobs safest from the AI revolution.

The first that came to mind for me -- when I was trying to think of jobs that require an extreme level of physical dexterity and pay very highly -- was "surgeon". So I now predict "surgeon" will be the last job to get automated. If you're giving career advice to a young person (or you are a young person), the advice to give is: become a surgeon.

Other occupations safe (for now) from automation, for the same reason, would include "physical therapist", "dentist", "dental hygienist", "dental technician", "medical technician" (e.g. the people who customize prosthetics, orthodontic devices, and so on), and "nurse" -- at least nurses who routinely do physical procedures like drawing blood.

Continuing in the same vein but going outside the medical field (pun not intended but allowed to stand once recognized), I'd put "electronics technician". I don't think robots will be able to solder or manipulate very small components any time soon, at least outside initial assembly, which does seem to be highly amenable to automation. But once electronic components fail, to the extent it falls on people to repair them rather than throw them out and replace them (which admittedly happens a lot), those humans aren't going to be replaced any time soon.

Likewise "machinist" who works with small parts and tools.

"Engineer" ought to be ok -- as long as they're mechanical engineers or civil engineers. Software engineers are in the crosshairs. What matters is whether physical manipulation is part of the job.

"Construction worker" -- some jobs are high pay/high skill while others are low pay/low skill. Will be interesting to see what gets automated first and last in construction.

Other "trade" jobs like "plumber", "electrician", "welder" -- probably safe for a long time.

"Auto mechanic" -- probably one of the last jobs to be automated. The factory where the car is initially manufacturered, a very controlled environment, may be full of robots, but it's hard to see robots extending into the auto mechanic's shop where cars go when they break down.

"Jewler" ought to be a safe job for a long time. "Watchmaker" (or "watch repairer") -- I'm still amazed people pay so much for old-fashioned mechanical watches. I guess the point is to be pieces of jewlry, so these essentially count as "jewler" jobs.

"Tailor" and "dressmaker" and other jobs centered around sewing.

"Hairstylist" / "barber" -- you probably won't be trusting a robot with scissors close to your head any time soon.

"Chef", "baker", whatever the word is for "cake calligrapher". Years ago I thought we'd have automated kitchens at fast food restaurants by now but they are no where in sight. And nowhere near automating the kitchens of the fancy restaurants with the top chefs.

Finally, let's revisit "artist". While "artist" is in the crosshairs of AI, some "artist" jobs are actually physical -- such as "sculptor" and "glassblower". These might be resistant to AI for a long time. Not sure how many sculptors and glassblowers the economy can support, though. Might be tough if all the other artists stampede into those occupations.

While "musician" is totally in the crosshairs of AI, as we see, that applies only to musicians who make recorded music -- going "live" may be a way to escape the automation. No robots with the manual dexterity to play physical guitars, violins, etc, appear to be on the horizon. Maybe they can play drums?

Finally, "magician" is another live entertainment career that requires a lot of manual dexterity and ought to be hard for a robot to replicate -- one for those of you looking for a career in entertainment. Not sure how many magicians the economy can support, though.

The end of programming - Matt Welsh

#solidstatelife #genai #codingai #technologicalunemployment

waynerad@diasp.org

"One of the most common concerns about AI is the risk that it takes a meaningful portion of jobs that humans currently do, leading to major economic dislocation. Often these headlines come out of economic studies that look at various job functions and estimate the impact that AI could have on these roles, and then extrapolates the resulting labor impact. What these reports generally get wrong is the analysis is done in a vacuum, explicitly ignoring the decisions that companies actually make when presented with productivity gains introduced by a new technology -- especially given the competitive nature of most industries."

Says Aaron Levie, CEO of Box, a company that makes large-enterprise cloud file sharing and collaboration software.

"Imagine you're a software company that can afford to employee 10 engineers based on your current revenue. By default, those 10 engineers produce a certain amount of output of product that you then sell to customers. If you're like almost any company on the planet, the list of things your customers want from your product far exceeds your ability to deliver those features any time soon with those 10 engineers. But the challenge, again, is that you can only afford those 10 engineers at today's revenue level. So, you decide to implement AI, and the absolute best case scenario happens: each engineer becomes magically 50% more productive. Overnight, you now have the equivalent of 15 engineers working in your company, for the previous cost of 10."

"Finally, you can now build the next set of things on your product roadmap that your customers have been asking for."

Read the comments, too. There is some interesting discussion, uncommon for the service formerly known as Twitter, apparently made possible by the fact that not just Aaron Levie but some other people forked over money to be able to post things longer than the arbitrary and super-tiny character limit.

Aaron Levie on X: "One of the most common concerns about AI is the risk ..."

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

AI is simultaneously overhyped and underhyped, alleges Dagogo Altraide, aka "ColdFusion" (technology history YouTube channel). For AI, we're at the "peak of inflated expectations" stage of the Gartner hype cycle.

At the same time, tech companies are doing mass layoffs of tech workers, and it's not because of overhiring during the pandemic any more, and it's not the regular business cycle -- companies with record revenues and profits are laying people off. "The truth is slowly coming out" -- the layoffs are because of AI, but tech companies want to keep it secret.

So despite the inflated expectations, AI isn't underperforming when it comes to reducing employment.

AI deception: How tech companies are fooling us - ColdFusion

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

David Graeber was right about BS jobs, says Max Murphy. Basically, our economy is bifurcating into two kinds of jobs: "essential" jobs that, despite being "essential", are lowly paid and unappreciated, and "BS" (I'm just going to abbreviate) jobs that are highly paid but accomplish nothing useful for anybody. The surprise, perhaps, is that these BS jobs, despite being well paid, are genuinely soul-crushing.

My question, though, is how much of this is due to technological advancement, and will the continued advancement of technology (AI etc) increase the ratio of BS jobs to essential jobs further in favor of the BS jobs?

David Graeber was right about bullsh*t jobs - Max Murphy

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

"AI could actually help rebuild the middle class," says David Autor.

"Artificial intelligence can enable a larger set of workers equipped with necessary foundational training to perform higher-stakes decision-making tasks currently arrogated to elite experts, such as doctors, lawyers, software engineers and college professors. In essence, AI -- used well -- can assist with restoring the middle-skill, middle-class heart of the US labor market that has been hollowed out by automation and globalization."

"Prior to the Industrial Revolution, goods were handmade by skilled artisans: wagon wheels by wheelwrights; clothing by tailors; shoes by cobblers; timepieces by clockmakers; firearms by blacksmiths."

"Unlike the artisans who preceded them, however, expert judgment was not necessarily needed -- or even tolerated -- among the 'mass expert' workers populating offices and assembly lines."

"As a result, the narrow procedural content of mass expert work, with its requirement that workers follow rules but exercise little discretion, was perhaps uniquely vulnerable to technological displacement in the era that followed."

"Stemming from the innovations pioneered during World War II, the Computer Era (AKA the Information Age) ultimately extinguished much of the demand for mass expertise that the Industrial Revolution had fostered."

"Because many high-paid jobs are intensive in non-routine tasks, Polanyi's Paradox proved a major constraint on what work traditional computers could do. Managers, professionals and technical workers are regularly called upon to exercise judgment (not rules) on one-off, high-stakes cases."

Polanyi's Paradox, named for Michael Polanyi, who observed in 1966 that "we can know more than we can tell," is the idea that "non-routine" tasks involve "tacit knowledge" that can't be written out as procedures -- and hence can't be coded into a computer program. But AI systems don't have to be coded explicitly; they can learn this "tacit knowledge" the way humans do.

"Pre-AI, computing's core capability was its faultless and nearly costless execution of routine, procedural tasks."

"AI's capacity to depart from script, to improvise based on training and experience, enables it to engage in expert judgment -- a capability that, until now, has fallen within the province of elite experts."

Commentary: I feel like I just made the mental switch from expecting AI to automate "routine" work to expecting it to automate "mental" work, i.e. what matters is mental-vs-physical, not creative-vs-routine. Now we're right back to talking about the creative-vs-routine distinction.

AI could actually help rebuild the middle class | noemamag.com

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

"Texas will use computers to grade written answers on this year's STAAR tests."

STAAR stands for "State of Texas Assessments of Academic Readiness" and is a standardized test given to elementary through high school students. It replaced the earlier TAKS test starting in the 2011-12 school year.

"The Texas Education Agency is rolling out an 'automated scoring engine' for open-ended questions on the State of Texas Assessment of Academic Readiness for reading, writing, science and social studies. The technology, which uses natural language processing, a building block of artificial intelligence chatbots such as GPT-4, will save the state agency about $15 million to 20 million per year that it would otherwise have spent on hiring human scorers through a third-party contractor."

"The change comes after the STAAR test, which measures students' understanding of state-mandated core curriculum, was redesigned in 2023. The test now includes fewer multiple choice questions and more open-ended questions -- known as constructed response items."

Texas will use computers to grade written answers on this year's STAAR tests

#solidstatelife #ai #llms #technologicalunemployment

waynerad@diasp.org

The Daily Show with Jon Stewart did a segment on AI and jobs. Basically, we're all going to get helpful assistants which will make us more productive, so it's going to be great, except, more productive means fewer humans employed, but don't worry, that's just the 'human' point of view. (First 8 minutes of this video.)

Jon Stewart on what AI means for our jobs & Desi Lydic on Fox News's Easter panic | The Daily Show

#solidstatelife #ai #aiethics #technologicalunemployment

waynerad@diasp.org

Survey of 2,778 AI researchers.

The average response placed each of the following within the next 10 years:

Simple Python code given spec and examples
Good high school history essay
Angry Birds (superhuman)
Answer factoid questions with web
World Series of Poker
Read text aloud
Transcribe speech
Answer open-ended fact questions with web
Translate text (vs. fluent amateur)
Group new objects into classes
Fake new song by specific artist
Answers undecided questions well
Top Starcraft play via video of screen
Build payment processing website
Telephone banking services
Translate speech using subtitles
Atari games after 20m play (50% vs. novice)
Finetune LLM
Construct video from new angle
Top 40 Pop Song
Recognize object seen once
All Atari games (vs. pro game tester)
Learn to sort long lists
Fold laundry
Random new computer game (novice level)
NYT best-selling fiction
Translate text in newfound language
Explain AI actions in games
Assemble LEGO given instructions
Win Putnam Math Competition
5km city race as bipedal robot (superhuman)
Beat humans at Go (after same # games)
Find and patch security flaw
Retail Salesperson

...and the following within the next 20 years:

Equations governing virtual worlds
Truck Driver
Replicate ML paper
Install wiring in a house
ML paper

... and the following within the next 40 years:

Publishable math theorems
High Level Machine Intelligence (all human tasks)
Millennium Prize
Surgeon
AI Researcher
Full Automation of Labor (all human jobs)

It should be noted that while these were the averages, there was a very wide variance -- so a wide range of plausible dates.

"Expected feasibility of many AI milestones moved substantially earlier in the course of one year (between 2022 and 2023)."

If you're wondering what the difference between "High-Level Machine Intelligence" and "Full Automation of Labor" is, they said:

"We defined High-Level Machine Intelligence thus: High-level machine intelligence is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption."

"We defined Full Automation of Labor thus:"

"Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. [...] Say we have reached 'full automation of labor' when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers."

They go on to say,

"Predictions for a 50% chance of the arrival of Full Automation of Labor are consistently more than sixty years later than those for a 50% chance of the arrival of High Level Machine Intelligence."

That seems crazy to me. In my mind, as soon as feasibility is reached, cost will go below human labor very quickly, and the technology will be adopted everywhere. That is what has happened with everything computers have automated so far.

"We do not know what accounts for this gap in forecasts. Insofar as High Level Machine Intelligence and Full Automation of Labor refer to the same event, the difference in predictions about the time of their arrival would seem to be a framing effect."

A framing effect that large?

"Since 2016 a majority of respondents have thought that it's either 'quite likely,' 'likely,' or an 'about even chance' that technological progress becomes more than an order of magnitude faster within 5 years of High Level Machine Intelligence being achieved."

"A large majority of participants thought state-of-the-art AI systems in twenty years would be likely or very likely to:"

  1. Find unexpected ways to achieve goals (82.3% of respondents),
  2. Be able to talk like a human expert on most topics (81.4% of respondents), and
  3. Frequently behave in ways that are surprising to humans (69.1% of respondents)

"Most respondents considered it unlikely that users of AI systems in 2028 will be able to know the true reasons for the AI systems' choices, with only 20% giving it better than even odds."

"Scenarios worthy of most concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%)."

"Respondents exhibited diverse views on the expected goodness/badness of High Level Machine Intelligence. Responses range from extremely optimistic to extremely pessimistic. Over a third of participants (38%) put at least a 10% chance on extremely bad outcomes (e.g. human extinction)."

Thousands of AI authors on the future of AI

#solidstatelife #ai #technologicalunemployment #futurology

waynerad@diasp.org

"Ema, a 'Universal AI employee,' emerges from stealth with $25M."

"Meet Ema, a universal AI employee that boosts productivity across every role in your organization. She is simple to use, trusted, and accurate."

[Insert joke here about how saying things like that won't make people worry about their jobs.]

"Ema's the missing operating system that makes Generative AI work at an enterprise level. Using proprietary Generative Workflow Engine, Ema automates complex workflows with a simple conversation. She is trusted, compliant and keeps your data safe. EmaFusion model combines the outputs from the best models (public large language models and custom private models) to amplify productivity with unrivaled accuracy. See how Ema can transform your business today."

"They say Ema (the company) has already quietly amassed customers while still in stealth, including Envoy Global, TrueLayer, and Moneyview."

"Ema's Personas operate on our patent-pending Generative Workflow Engine (GWE), which goes beyond simple language prediction to dynamically map out workflows with a simple conversation. Our platform offers Standard Personas for common enterprise roles such as Customer Service Specialists (CX), Employee Assistant (EX), Data Analyst, Sales Assistant etc. and allows for the rapid creation of Specialized Personas tailored to rapidly automate unique workflows. No more waiting for months to build Gen AI apps that work!"

"To address accuracy issues and computational costs inherent in current Gen AI applications, Ema leverages our proprietary "fusion of experts" model, EmaFusion, that exceeds 2 Trillion parameters. EmaFusion intelligently combines many large language models (over 30 today and that number keeps growing), such as Claude, Gemini, Mistral, Llama2, GPT4, GPT3.5, and Ema's own custom models. Furthermore, EmaFusion supports integration of customer developed private models, maximizing accuracy at the most optimal cost for every task."

Oh, and "Ema" stands for "enterprise machine assistant".

Ema "taps into more than 30 large language models."

"As for what Ema can do, these businesses are using it in applications that range from customer service -- including offering technical support to users as well as tracking and other functions -- through to internal productivity applications for employees. Ema's two products -- Generative Workflow Engine (GWE) and EmaFusion -- are designed to "emulate human responses" but also evolve with more usage with feedback."

They also say, "Pre-integrated with hundreds of apps, Ema is easy to configure and deploy."

What are those integrations? They said some of those integrations are: Box, Dropbox, Google Drive, OneDrive, SharePoint, Clear Books, FreeAgent, FreshBooks, Microsoft Dynamics 365, Moneybird, NetSuite, QuickBooks Online, Sage Business Cloud, Sage Intacct, Wave Financial, Workday, Xero, Zoho Books, Aha!, Asana, Azure DevOps, Basecamp, Bitbucket, ClickUp, Dixa, Freshdesk, Freshservice, Front, GitHub Issues, GitLab, Gladly, Gorgias, Height, Help Scout, Hive, Hubspot Ticketing, Intercom, Ironclad, Jira, Jira Service Management, Kustomer, Linear, Pivotal Tracker, Rally, Re:amaze, Salesforce Service Cloud, ServiceNow, Shortcut, SpotDraft, Teamwork, Trello, Wrike, Zendesk, Zoho BugTracker, Zoho Desk, Accelo, ActiveCampaign, Affinity, Capsule, Close, Copper, HubSpot, Insightly, Keap, Microsoft Dynamics 365 Sales, Nutshell, Pipedrive, Pipeliner, Salesflare, Salesforce, SugarCRM, Teamleader, Teamwork CRM, Vtiger, Zendesk Sell, Zoho CRM, ApplicantStack, Ashby, BambooHR, Breezy, Bullhorn, CATS, ClayHR, Clockwork, Comeet, Cornerstone TalentLink, EngageATS, Eploy, Fountain, Freshteam, Greenhouse, Greenhouse - Job Boards API, Harbour ATS, Homerun, HR Cloud, iCIMS, Infinite BrassRing, JazzHR, JobAdder, JobScore, Jobsoid, Jobvite, Lano, Lever, Oracle Fusion - Recruiting Cloud, Oracle Taleo, Personio Recruiting, Polymer, Recruitee, Recruiterflow, Recruitive, Sage HR, SAP SuccessFactors, SmartRecruiters, TalentLyft, TalentReef, Teamtailor, UKG Pro Recruiting, Workable, Workday, Zoho Recruit, ActiveCampaign, Customer.io, getResponse, Hubspot Marketing Hub, Keap, Klaviyo, Mailchimp, MessageBird, Podium, SendGrid, Sendinblue, 7Shifts, ADP Workforce Now, AlexisHR, Altera Payroll, Azure Active Directory, BambooHR, Breathe, Ceridian Dayforce, Charlie, ChartHop, ClayHR, Deel, Factorial, Freshteam, Google Workspace, Gusto, Hibob, HRAlliance, HR Cloud, HR Partner, Humaans, Insperity Premier, IntelliHR, JumpCloud, Justworks, Keka, Lano, Lucca, Namely, Nmbrs, Officient, Okta, OneLogin, OysterHR, PayCaptain, Paychex, Paycor, PayFit, Paylocity, PeopleHR, Personio, PingOne, Proliant, Rippling, Sage HR, Sapling, SAP SuccessFactors, Sesame, Square Payroll, TriNet, UKG Dimensions, UKG Pro, UKG Ready, Workday, and Zenefits.

Ema, a 'Universal AI employee,' emerges from stealth with $25M

#solidstatelife #ai #genai #llms #aiagents #technologicalunemployment

waynerad@diasp.org

Devon "the first AI software engineer"

You put it in the "driver's seat" and it does everything for you. Or at least that's the idea.

"Benchmark the performance of LLaMa".

Devin builds the whole project, uses the browser to pull up API documentation, runs into an unexpected error, adds a debugging print statement, uses the error in the logs to figure out how to fix the bug, then builds and deploys a website with full styling as a visualization.

See below for reactions.

Introducing Devin, the first AI software engineer - Cognition

#solidstatelife #ai #genai #llms #codingai #technologicalunemployment

waynerad@diasp.org

"Shares of Teleperformance plunged 23% on Thursday, after the French call center and office services group missed its full-year revenue target."

"Investors have been spooked by the potential impact of artificial intelligence on its business model, as companies become more able to tap into the technology directly for their own benefit."

Call center group Teleperformance falls 23%; CEO insists AI cannot replace human staff

#solidstatelife #ai #genai #llms #technologicalunemployment

waynerad@diasp.org

Tina Huang goes through lists of jobs and career clusters, and research papers that endeavor to project AI's impact on jobs -- research papers with names like "Future of Jobs Report 2023" and "Gen-AI: Artificial Intelligence and the Future of Work".

What jobs are most likely to be reduced? Almost all clerks: bookkeeping clerks, accounting clerks, auditing clerks, mail clerks, account management clerks, payroll clerks. "Complementarity" vs "exposure": judges are highly protected, but their clerical work will be displaced. A lot of roles in finance: finance, insurance, banking services. Another heavily impacted cluster is marketing: professional sales, advertising, retail sales, cashier, telemarketer.

I'm surprised they didn't mention writers and artists, which is what it seems like other people talk about, and the jobs most obviously "in the crosshairs" of generative language models and generative image models.

Not impacted: Environmental services, natural resources, construction, teaching, health care.

In fact, education and health care are areas of tremendous growth.

The research papers did not agree on which "social jobs" would grow vs. be displaced. One thought child care workers as well as social workers would be displaced, while another thought all "social jobs" would increase. Maybe there is "clerical" work in the social work field that could be automated that I'm not aware of?

What jobs do the researchers think will increase the most? AI researcher. And jobs like business intelligence analyst and information security analyst that could possibly use AI skills.

Which jobs will survive AI? - Tina Huang

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

"A study surveying 300 leaders across Hollywood, issued in January, reported that three-fourths of respondents indicated that AI tools supported the elimination, reduction or consolidation of jobs at their companies. Over the next three years, it estimates that nearly 204,000 positions will be adversely affected."

"Sound engineers, voice actors and concept artists stood at the forefront of that displacement, according to the study. Visual effects and other postproduction work was also cited as particularly vulnerable."

OpenAI marches on Hollywood with video creation tool

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

"Rapid AI progress surprises even experts: Survey just out"

Sabine Hossenfelder reviews a survey of 2,778 AI researchers. They say 50% chance of "high level machine intelligence" ("comparable to human intelligence") by 2047, but that's down 13 years from the same survey a year ago, which said 2060.

For "full automation of labor", 50% probability by 2120 or so, but that's down almost 50 years from last years' prediction. (So last years' prediction must've been 2170 or so).

I can't help but think, does anybody seriously think it will take that long? I get that the "AGI in 7 months" predictions are a bit hard to take seriously, but still? Do these people not understand exponential curves?

Ray Kurzweil, and before him Al Bartlett, are famous for saying people extrapolate the current rate of change linearly out into the future, and so always underestimate exponential curves. I'm not implying Kurzweil or Bartlett are right about everything, but this does look to me like what is happening, and you would think professional AI researchers, of all people, would know better.

Rapid AI progress surprises even experts: Survey just out - Sabine Hossenfelder

#solidstatelife #futurology #ai #exponentialgrowth #technologicalunemployment

waynerad@diasp.org

"Torching the Google car: Why the growing revolt against big tech just escalated."

"Public confidence in self-driving cars has actually been declining as they've rolled out..." "Self-driving cars have been vocally opposed by officials, protested, "coned," attacked, and, now, set ablaze in a carnivalesque display of defiance." "Trust in Silicon Valley in general is eroding, and anger towards the big tech companies -- Waymo is owned by Alphabet, the parent company of Google -- is percolating..."

"Online, the reaction was swift and predictable. The images went viral, naturally -- they were pretty wild! And then came the perfunctory driverless car supporters, deriding the torchers as mindless hoodlums, casting them as backwards-looking people who knew not what they did. As Luddites."

Torching the Google car: Why the growing revolt against big tech just escalated

#solidstatelife #technologicalunemployment

waynerad@diasp.org

ChatGPT will make programmers obsolete in 10 years, says Matthew Berman. Something I hadn't heard about: a non-technical marketing person won a recent hackathon. She used a combination of AI tools to do all the coding and beat teams of multiple engineers.

AI tools may be able to do that for a small hackathon project but can't do it for a large commercial product. Extrapolating forward, though, Matthew Berman envisions a future where AI takes over the development role entirely. Humans remain in product management and quality assurance roles. Humans with good marketing skills will be the big winners, as they will be able to identify market opportunities and prompt AI systems to create the products.

ChatGPT will make programmers obsolete in 10 years - Matthew Berman

#solidstatelife #ai #genai #llms #codellms #aiethics #technologicalunemployment

waynerad@diasp.org

"9 months with GPT-4: Can I fire my developers yet?"

Presents an argument why AI could lead to hiring more, rather than fewer, software developers.

"If a new feature will generate $100,000 in revenue, but will cost $100,000 to build, then the ROI is 0, and I won't build it. However, if the cost of the feature comes down to $50,000, I stand to earn $50,000 in profit by shipping it. If I can get 400% more work out of each dev I hire, I'd hire more devs, not fewer. I'd be able to tackle more projects, and I'd be able to tackle them faster."

"Caveat: There are cases where it makes sense for a company to layoff excess workers as the remaining employees become more efficient. This is mostly likely to happen in companies where developers are a cost center, not a profit center."

9 months with GPT-4: Can I fire my developers yet? - The Primeagen

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

Ready for your employer to monitor your brainwaves? If you listen to music while you work, you could get work-issued earbuds so your employer can monitor your brainwaves on the job. Then one day you come in to work and find the office in a somber mood: employee brainwave data has been subpoenaed for a lawsuit, because one employee committed wire fraud and investigators are looking for co-conspirators by searching for people with synchronized brainwave activity. You don't know anything about the fraud, but you were secretly working with the accused employee on a start-up venture. Uh-oh.

According to Nita Farahany, in this talk at the World Economic Forum, all the technology to do this exists already, now. She goes on to tout the benefits of employer brain monitoring: reduction in accidents through detection of micro-sleep, fatigue, or lapse of attention due to distraction or cognitive overload. Furthermore it can optimize brain usage through "cognitive ergonomics".

She goes on to say it can be used as a tool of oppression as well, and calls for international human rights laws guaranteeing "cognitive liberty" be put in place before the technology becomes widespread.

When she talked about "freedom of thought", I literally laughed out loud. Nobody I know believes in that. Everyone I know believes the thoughts of other people need to be controlled. (Maybe not literally everyone. It's a figure of speech.)

By way of commentary, do I think "brain transparency" at work will happen? Probably. I remember in the 1980s, there was this comedian, Yakov Smirnoff, who would tell jokes like, "In America, you watch TV. In Soviet Russia, TV watches you!" Well, it's not really a joke any more, is it? He's describing YouTube. When you watch YouTube, YouTube watches you. Everything you watch, down to the fraction of a second. They use that information for giving you recommendations and ... and other stuff. Wouldn't you like to know what the other stuff is? They know, but none of the rest of us get to know. Everyone is guessing but nobody knows. And that's just YouTube. Every aspect of life now is like this. We are always watched, but we usually don't know what the watchers are watching for.

So of course once the technology comes online to give people access to other people's brainwaves, it's going to get used. What would be shocking would be if employers didn't try to use this to squeeze every last ounce of productivity from employees. Look at what is happening now with the tracking of every footstep of warehouse workers.

Ready for brain transparency?

#futurology #solidstatelife #technologicalunemployment #neurotechnology #eeg