#technologicalunemployment

waynerad@diasp.org

"How often do you use artificial intelligence in your role?"

Asks Gallup poll.

67% say "never", 5% say "less often than once a year", 2% say "once a year", 7% say "a few times a year", 8% say "a few times a month", 7% say "a few times a week", 4% say "daily".
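As a sanity check on those figures (my own arithmetic, not Gallup's), the categories sum to 100% and the non-"never" share works out to exactly one third of workers:

```python
# Gallup's reported frequencies of AI use at work (percent of respondents).
usage = {
    "never": 67,
    "less often than once a year": 5,
    "once a year": 2,
    "a few times a year": 7,
    "a few times a month": 8,
    "a few times a week": 7,
    "daily": 4,
}

assert sum(usage.values()) == 100  # the categories cover all respondents

# Share of workers who use AI at all, however rarely.
ever_users = 100 - usage["never"]
print(ever_users)  # 33 -- one third of workers
```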

But the same poll shows 1/3rd of organizations are taking action on AI. 44% for "white-collar" jobs. So it looks like leaders want to boost productivity and innovation, but workers are not using AI much and not delivering on that expectation.

The 3rd question is, of people who do use AI, what do they use it for?

Mostly "to generate ideas" and "to consolidate information or data" -- but leaders and managers more than individual contributors. Next is "to automate basic tasks", which is used more by individual contributors than leaders and managers.

AI in the Workplace: Answering 3 Big Questions

#solidstatelife #ai #genai #technologicalunemployment

waynerad@diasp.org

"Four futures for cognitive labor."

"First: the printing press. If you were an author in 1400, the largest value-add you brought was your handwriting." "With hindsight we see that even as all the terrifying extrapolations of printing automation materialized, the income, influence, and number of authors soared."

"Second: the mechanization of farming." "The per-capita incomes of farmers have doubled several times over but there are many fewer farmers, even in absolute numbers."

"Third: computers. Specifically, the shift from the job title of computer to the name of the machine that replaced it. " "This industry was replaced by a new industry of people who programmed the automation of the previous one."

"Finally: the ice trade. In the 19th and early 20th centuries, before small ice machines were common, harvesting and shipping ice around the world was a large industry employing hundreds of thousands of workers." "By WW2 the industry had collapsed and been replaced by home refrigeration."

Four futures for cognitive labor

#solidstatelife #ai #genai #agi #technologicalunemployment

waynerad@diasp.org

OpenAI o1 isn't as good as an experienced professional programmer, but... "the set of tasks that O1 can do is impressive, and it's becoming more and more difficult to find easily demonstrated examples of things it can't do."

"There's a ton of things it can't do. But a lot of them are so complicated they don't really fit in a video."

"There are a small number of specific kinds of entry level developer jobs it could actually do as well, or maybe even better, than new hires."

Carl of "Internet of Bugs" recounts how he spent the last 3 weeks experimenting with the o1 model to try to find its shortcomings.

"I've been saying for months now that AI couldn't do the work of a programmer, and that's been true, and to a large extent it still is. But in one common case, that's less true than it used to be, if it's still true at all."

"I've worked with a bunch of new hires that were fresh out with CS degrees from major colleges. Generally these new hires come out of school unfamiliar with the specific frameworks used on active projects. They have to be closely supervised for a while before they can work on their own. They have to be given self-contained pieces of code so they don't screw up something else and create regressions. A lot of them have never actually built anything that wasn't in response to a homework assignment.

"This o1 thing is more productive than most, if not all, of those fresh CS graduates I've worked with.

"Now, after a few months, the new grads get the hang of things, and from then on, for the most part, they become productive enough that I'd rather have them on a project than o1."

When I have a choice, I never hire anyone who only has an academic and theoretical understanding of programming and has never actually built anything that faces a customer, even if they only built it for themselves. But in the tech industry, many companies specifically create entry-level positions for new grads."

"In my opinion, those positions where people can get hired with no practical experience, those positions were stupid to have before and they're completely irrelevant now. But as long as those kinds of positions still exist, and now that o1 exists, I can no longer honestly say that there aren't any jobs that an AI could do better than a human, at least as far as programming goes."

"o1 Still has a lot of limitations."

Some of the limitations he cited were writing tests and writing a SQL RDBMS in Zig.

ChatGPT-O1 Changes Programming as a Profession. I really hated saying that. - Internet of Bugs

#solidstatelife #ai #genai #llms #codingai #openai #technologicalunemployment

waynerad@diasp.org

"Machines of Loving Grace". Dario Amodei, CEO of Anthropic, wrote an essay about what a world with powerful AI might look like if everything goes right.

"I think and talk a lot about the risks of powerful AI. The company I'm the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I'm a pessimist or "doomer" who thinks AI will be mostly bad or dangerous. I don't think that at all. In fact, one of my main reasons for focusing on risks is that they're the only thing standing between us and what I see as a fundamentally positive future."

"In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields -- biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc."

"In addition to just being a 'smart thing you talk to', it has all the 'interfaces' available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on."

"It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary."

"It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use."

"The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with."

"Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks."

Sounds to me like automation of all work, but he doesn't address that until nearly the end. Before that, he talks about all the ways he thinks AI will improve "biology and physical health", "neuroscience and mental health", "economic development and poverty", and "peace and governance".

AI will advance CRISPR, microscopy, genome sequencing and synthesis, optogenetic techniques, mRNA vaccines, cell therapies such as CAR-T, and more due to conceptual insights we can't even predict today.

AI will prevent or treat nearly all infectious disease, eliminate most cancer, prevent or cure genetic diseases, improve treatments for diabetes, obesity, heart disease, autoimmune diseases, give people "biological freedom" with physical appearance and other biological processes under people's individual control, and double human lifespan (to 150).

AI will cure most mental illnesses like PTSD, depression, schizophrenia, and addiction. AI will figure out how to alter brain structure in order to change psychopaths into non-psychopaths. "Non-clinical" everyday psychological problems like feeling drowsy or anxious or having trouble focusing will be solved. AI will increase the amount of "extraordinary moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace" people experience.

Economically, AI will make health interventions cheap and widely available, AI will increase crop yields and develop technology like lab-grown meat that increases food security, AI will develop technology to mitigate climate change, and AI will reduce inequality within countries -- just as, today, the poor have the same mobile phones as the rich; there is no such thing as a "luxury" mobile phone.

Regarding "peace and governance", he advocates an "entente strategy", "in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries' access to key resources." This would prevent dictatorships from gaining the upper hand. If democracies have the upper hand globally, that helps with "the fight between democracy and autocracy within each country." "Democratic governments can use their superior AI to win the information war: they can counter influence and propaganda operations by autocracies and may even be able to create a globally free information environment by providing channels of information and AI services in a way that autocracies lack the technical ability to block or monitor."

Finally he gets to "work and meaning", where he says, "Comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the '10%' expands to continue to employ almost everyone."

"However, I do think in the long run AI will become so broadly effective and so cheap that this will no longer apply. At that point our current economic setup will no longer make sense, and there will be a need for a broader societal conversation about how the economy should be organized. While that might sound crazy, the fact is that civilization has successfully navigated major economic shifts in the past: from hunter-gathering to farming, farming to feudalism, and feudalism to industrialism."

Wish I could share his optimism that "civilization" will "successfully" "navigate" this "major economic shift". As those of you who've been hanging around me for any length of time know, I think the major effect of technology competing against humans in the labor market is decreased fertility, rather than the immediate drop in the employment rate everyone expects because it seems more intuitive. He makes no mention of fertility in this context (he mentions it only in the context of fertility treatments being something AI will advance), so I think it's not on his radar at all. He considers "opt-out" a "problem" whereby "Luddite movements" create a "dystopian underclass" by opting out of the benefits of AI technology, yet it is the "opt-out" people, like the Amish, who today are able to maintain high fertility rates, and as such will make up the majority of the human population living on this planet in the future (something you can confirm for yourself by doing some math).
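The "do some math" here is just compound growth. A toy projection in Python -- the starting populations and rates are my own illustrative assumptions (a small high-fertility group doubling every ~20 years versus a large group in slow decline), not figures from the essay:

```python
# Illustrative assumptions, not real demographic data:
# a small "opt-out" population growing ~3.5%/year (doubling every ~20 years)
# versus a large mainstream population shrinking ~0.5%/year.
optout, mainstream = 400_000, 330_000_000

year = 0
while optout < mainstream:
    optout *= 1.035
    mainstream *= 0.995
    year += 1

print(year)  # crossover in well under two centuries
```

The exact crossover year depends entirely on the assumed rates, but the qualitative point survives any plausible choice: sustained exponential growth against sustained decline always wins eventually.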

The original essay is some 14,000 words and my commentary above is just 1,000 or so, so you should probably read the original and get his full original unfiltered point of view.

Machines of Loving Grace

#solidstatelife #ai #agi #aiethics #technologicalunemployment

waynerad@diasp.org

"Humans Need Not Apply" 10 years later. Retrospective with CGP Grey about his legendary "Humans Need Not Apply" video, which is now 10 years ago.

CGP Grey wanted to make the point that computers are coming after everyone's job, but the guy he's talking to (Myke Hurley) says that when he watched the video, he thought, "They'll never take my job."

A lot of the video concerned self-driving cars, which is the part he now thinks, in retrospect, he was most wrong about.

He tried to turn the word "autos" into a word to refer to all "automatic" vehicles, but the term didn't catch on. But he wanted to get people to think about self-driving vehicles of all kinds.

He said "They don't need to be perfect, they just need to be better than humans", but he now considers himself totally wrong about that. People really require self-driving cars to be perfect. People demand perfection; they need self-driving cars to be as safe as airplanes. The two of them talk about how people psychologically have a need to feel "in control", and if something bad happens, they need a human to blame, not some algorithm made of 0s and 1s. If you take the decision-making out of human hands, it needs to be perfect.

Tesla's recent "Drive Naturally" mode, where everything is learned by neural networks rather than hand-coded by humans, is remarkably like humans in how it drives.

The very last part of the "Humans Need Not Apply" video, which he called "software bots", has emerged dramatically in the last 2 years. He thought self-driving cars would advance faster, and "software bots" would come later, but "The last couple of years have been terrifyingly fast."

No mention of the horses? For me the most memorable thing about "Humans Need Not Apply" was the imaginary conversation between horses about how they had nothing to fear from this new invention, the automobile -- employment for horses had always gone up throughout history. But the horse population actually peaked in 1915 and has gone down ever since. So there isn't some rule of nature that says there always has to be employment for horses, that horses can't be automated. Likewise, CGP Grey invites us to consider that there's no law of nature guaranteeing employment for humans.

This video (which is really audio-only -- it's essentially an audio podcast) is 1.5 hours, but only the first 30 mins is about the "Humans Need Not Apply" video. However, you might want to listen to the whole thing as CGP Grey and Myke Hurley contemplate AI and the future of AI. CGP Grey talks about how he is of two minds regarding how to think about the future. The first mind says: the way to think about technological change is that it's the same as it has always been, only faster. We've had technological change since caveman days, so just extrapolate that out into the future. The second mind is the "doom" mind: he really does think there is some kind of boundary we are getting closer to, beyond which it is functionally impossible to think about the future, to the point where it is pointless to even try to plan. Where is that boundary? The boundary is there because this thing, AI, is different. Everyone thinks their time is different, but he really feels like with AI, "this time is different." "Humans Need Not Apply" was trying to get people to seriously engage with this idea.

Is AI still doom? (Humans Need Not Apply -- 10 years later) - Cortex Podcast

#solidstatelife #ai #robotics #genai #llms #technologicalunemployment

waynerad@diasp.org

Melancholia in the San Francisco Bay Area -- at least, that is Scott Sumner's experience.

"During my recent trip to the Bay Area, I met lots of people who are involved in the field of AI. My general impression is that this region has more smart people than anywhere else, at least per capita. And not just fairly smart, I'm talking about extremely high IQ individuals. I don't claim to have met a representative cross section of AI people, however, so take the following with a grain of salt."

"If you spend a fair bit of time surrounded by people in this sector, you begin to think that San Francisco is the only city that matters; everywhere else is just a backwater. There's a sense that the world we live in today will soon come to an end, replaced by either a better world or human extinction. It's the Bay Area's world, we just live in it."

"In other words, I don't know if the world is going to end, but it seems as though this world is coming to an end."

Melancholia

#solidstatelife #ai #technologicalunemployment #existentialrisk

waynerad@diasp.org

"The deskilling of web dev is harming the product but, more importantly, it's damaging our health -- this is why burnout happens."

"We're expected to keep up with multiple specialities that, in a sensible industry, would each be a dedicated field."

[big list]

"These are all distinct specialities and web dev teams should be composed of cross-functional specialists."

"Companies should have CSS specialists on their teams who take care of the complexity of providing stylesheets, ..."

"Tailwind provides a loose approximation of the experience you would get from having a dedicated CSS expert on board." Except "That abstraction falls apart quite often because Tailwind is too thin of a layer to hide the complexities of CSS." "It's very easy to run into a situation where, for example, position: sticky doesn't work and the utility class model makes figuring out the issue much harder."

"But the promise it offers is tantalising: it's your CSS buddy so you don't have to know CSS."

"This is deskilling. It lets employers and managers pretend that web project teams don't need CSS expertise -- or even just pretend that CSS expertise just doesn't exist at all. This is what Tailwind is for."

"We're all-in on deskilling the industry. Not content with removing CSS and HTML almost entirely from the job market, we're now shifting towards the model where devs are instead 'AI' wranglers. The web dev of the future will be an underpaid generalist who pokes at chatbot output until it runs without error, pokes at a copilot until it generates tests that pass with some coverage, and ships code that nobody understand and can't be fixed if something goes wrong."

A discussion about this on Hacker News is not what I expected. I figured people would be questioning his premise that AI is a continuation of the piling-on of abstractions (and the resultant complexity) that the software industry has done for decades, rather than a fundamentally new phenomenon. Instead, most people seemed to take issue with his idea that "frontend" development should be broken into specialties. I got the feeling many "full-stack" developers felt personally insulted, as if he were implying they're not experts at their jobs.

The deskilling of web dev is harming the product but, more importantly, it’s damaging our health

#solidstatelife #ai #technologicalunemployment #specialization

waynerad@diasp.org

Older Venezuelans have figured out they can escape (alleged) age discrimination and make money from "clickwork," earning pennies by labeling and annotating data to train AI systems. Wait, anybody anywhere is paid to do labeling for AI systems? I thought "self-supervised learning" and "synthetic data" had obsoleted that practice. No?

Amid economic collapse, older Venezuelans turn to gig work

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

"I lost my job to AI this week."

The guy was a graphic artist doing visual design for electronic marketing campaigns. And in this case, apparently he was quite literally told he was being replaced by AI.

I suspect this is the ultimate fate of all of us. It's an open question how long it will take. Some days I think AI is going super fast and it'll be real soon; other days I think it'll be like when Elon Musk predicted in 2016 that Tesla would have "full self driving" by 2017. And in 2017, he predicted 2018, and in 2018 he predicted 2019... and so on. He was taking the current rate of change and extrapolating it out into the future, but in fact, while the technology continued to improve, it entered a domain of diminishing returns. Tesla's Full Self Driving is by all accounts pretty good, but nobody is ready to rip the steering wheel out of any car entirely, which is what "full self driving" is really supposed to mean. And I think that may happen with the current explosion in language, image, audio, and video models -- they may enter a domain of diminishing returns, and "artificial general intelligence" that surpasses humans may be farther away than people think.

I don't know. Right now either scenario seems plausible. The rate of change still feels fast. At the same time, people are running into the limitations of current models and getting annoyed by them.

See below for more thoughts on "the future of labor in an AI-driven economy" from Nikola Danaylov.

#solidstatelife #ai #technologicalunemployment

https://www.youtube.com/watch?v=U2vq9LUbDGs

waynerad@diasp.org

The end of classical computer science is coming, and most of us are dinosaurs waiting for the meteor to hit, says Matt Welsh.

"I came of age in the 1980s, programming personal computers like the Commodore VIC-20 and Apple IIe at home. Going on to study computer science in college and ultimately getting a PhD at Berkeley, the bulk of my professional training was rooted in what I will call 'classical' CS: programming, algorithms, data structures, systems, programming languages."

"When I was in college in the early '90s, we were still in the depth of the AI Winter, and AI as a field was likewise dominated by classical algorithms. In Dan Huttenlocher's PhD-level computer vision course in 1995 or so, we never once discussed anything resembling deep learning or neural networks--it was all classical algorithms like Canny edge detection, optical flow, and Hausdorff distances."

"One thing that has not really changed is that computer science is taught as a discipline with data structures, algorithms, and programming at its core. I am going to be amazed if in 30 years, or even 10 years, we are still approaching CS in this way. Indeed, I think CS as a field is in for a pretty major upheaval that few of us are really prepared for."

"I believe that the conventional idea of 'writing a program' is headed for extinction, and indeed, for all but very specialized applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed."

"I'm not just talking about CoPilot replacing programmers. I'm talking about replacing the entire concept of writing programs with training models. In the future, CS students aren't going to need to learn such mundane skills as how to add a node to a binary tree or code in C++. That kind of education will be antiquated, like teaching engineering students how to use a slide rule."

"The shift in focus from programs to models should be obvious to anyone who has read any modern machine learning papers. These papers barely mention the code or systems underlying their innovations; the building blocks of AI systems are much higher-level abstractions like attention layers, tokenizers, and datasets."

This got me thinking: over the last 20 years, I've been predicting AI would advance to the point where it could automate jobs, and it's looking more and more like I was fundamentally right about that, and all the people who poo-poo'd the idea in conversations with me over the years were wrong. But while I was right about that fundamental idea (and right that there wouldn't be "one AI in a box" that anyone could pull the plug on if something went wrong, but rather a diffusion of the technology around the world like every previous technology), I was wrong about how exactly it would play out.

First I was wrong about the timescales: I thought it would be necessary to understand much more about how the brain works, and to work algorithms derived from neuroscience into AI models, and looking at the rate of advancement in neuroscience I predicted AI wouldn't be in its current state for a long time. While broad concepts like "neuron" and "attention" have been incorporated into AI, there are practically no specific algorithms that have been ported from brains to AI systems.

Second, I was wrong about what order. I was wrong in thinking "routine" jobs would be automated first, and "creative" jobs last. It turns out that what matters is "mental" vs "physical". Computers can create visual art and music just by thinking very hard -- it's a purely "mental" activity, and computers can do all that thinking in bits and bytes.

This has led me to ponder: What occupations require the greatest level of manual dexterity?

Those should be the jobs safest from the AI revolution.

The first that came to mind for me -- when I was trying to think of jobs that require an extreme level of physical dexterity and pay very highly -- was "surgeon". So I now predict "surgeon" will be the last job to get automated. If you're giving career advice to a young person (or you are a young person), the advice to give is: become a surgeon.

Other occupations safe (for now) against automation, for the same reason, would include "physical therapist", "dentist", "dental hygienist", "dental technician", "medical technician" (e.g. the people who customize prosthetics, orthodontic devices, and so on), and "nurse" -- at least nurses who routinely do physical procedures like drawing blood.

Continuing in the same vein but going outside the medical field (pun not intended but allowed to stand once recognized), I'd put "electronics technician". I don't think robots will be able to solder or manipulate very small components any time soon -- at least not after the initial assembly, which does seem to be highly amenable to automation. But once electronic components fail, to the extent it falls on people to repair them rather than throw them out and replace them (which admittedly happens a lot), humans aren't going to be replaced any time soon.

Likewise "machinist" who works with small parts and tools.

"Engineer" ought to be ok -- as long as they're mechanical engineers or civil engineers. Software engineers are in the crosshairs. What matters is whether physical manipulation is part of the job.

"Construction worker" -- some jobs are high pay/high skill while others are low pay/low skill. Will be interesting to see what gets automated first and last in construction.

Other "trade" jobs like "plumber", "electrician", "welder" -- probably safe for a long time.

"Auto mechanic" -- probably one of the last jobs to be automated. The factory where the car is initially manufacturered, a very controlled environment, may be full of robots, but it's hard to see robots extending into the auto mechanic's shop where cars go when they break down.

"Jewler" ought to be a safe job for a long time. "Watchmaker" (or "watch repairer") -- I'm still amazed people pay so much for old-fashioned mechanical watches. I guess the point is to be pieces of jewlry, so these essentially count as "jewler" jobs.

"Tailor" and "dressmaker" and other jobs centered around sewing.

"Hairstylist" / "barber" -- you probably won't be trusting a robot with scissors close to your head any time soon.

"Chef", "baker", whatever the word is for "cake calligrapher". Years ago I thought we'd have automated kitchens at fast food restaurants by now but they are no where in sight. And nowhere near automating the kitchens of the fancy restaurants with the top chefs.

Finally, let's revisit "artist". While "artist" is in the crosshairs of AI, some "artist" jobs are actually physical -- such as "sculptor" and "glassblower". These might be resistant to AI for a long time. Not sure how many sculptors and glassblowers the economy can support, though. Might be tough if all the other artists stampede into those occupations.

While "musician" is totally in the crosshairs of AI, as we see, that applies only to musicians who make recorded music -- going "live" may be a way to escape the automation. No robots with the manual dexterity to play physical guitars, violins, etc, appear to be on the horizon. Maybe they can play drums?

And finally for my last item: "Magician" is another live entertainment career that requires a lot of manual dexterity and that ought to be hard for a robot to replicate. For those of you looking for a career in entertainment. Not sure how many magicians the economy can support, though.

The end of programming - Matt Welsh

#solidstatelife #genai #codingai #technologicalunemployment

waynerad@diasp.org

"One of the most common concerns about AI is the risk that it takes a meaningful portion of jobs that humans currently do, leading to major economic dislocation. Often these headlines come out of economic studies that look at various job functions and estimate the impact that AI could have on these roles, and then extrapolates the resulting labor impact. What these reports generally get wrong is the analysis is done in a vacuum, explicitly ignoring the decisions that companies actually make when presented with productivity gains introduced by a new technology -- especially given the competitive nature of most industries."

Says Aaron Levie, CEO of Box, a company that makes large-enterprise cloud file sharing and collaboration software.

"Imagine you're a software company that can afford to employee 10 engineers based on your current revenue. By default, those 10 engineers produce a certain amount of output of product that you then sell to customers. If you're like almost any company on the planet, the list of things your customers want from your product far exceeds your ability to deliver those features any time soon with those 10 engineers. But the challenge, again, is that you can only afford those 10 engineers at today's revenue level. So, you decide to implement AI, and the absolute best case scenario happens: each engineer becomes magically 50% more productive. Overnight, you now have the equivalent of 15 engineers working in your company, for the previous cost of 10."

"Finally, you can now build the next set of things on your product roadmap that your customers have been asking for."

Read the comments, too. There is some interesting discussion, uncommon for the service formerly known as Twitter, apparently made possible by the fact that not just Aaron Levie but some other people forked over money to the service formerly known as Twitter to be able to post things larger than some arbitrary and super-tiny character limit.

Aaron Levie on X: "One of the most common concerns about AI is the risk ..."

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

AI is simultaneously overhyped and underhyped, alleges Dagogo Altraide, aka "ColdFusion" (technology history YouTube channel). For AI, we're at the "peak of inflated expectations" stage of the Gartner hype cycle.

At the same time, tech companies are doing mass layoffs of tech workers, and it's not because of overhiring during the pandemic any more, and it's not the regular business cycle -- companies with record revenues and profits are doing mass layoffs of tech workers. "The truth is slowly coming out" -- the layoffs are because of AI, but tech companies want to keep it secret.

So despite the inflated expectations, AI isn't underperforming when it comes to reducing employment.

AI deception: How tech companies are fooling us - ColdFusion

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

David Graeber was right about BS jobs, says Max Murphy. Basically, our economy is bifurcating into two kinds of jobs: "essential" jobs that, despite being "essential", are lowly paid and unappreciated, and "BS" (I'm just going to abbreviate) jobs that are highly paid but accomplish nothing useful for anybody. The surprise, perhaps, is that these BS jobs, despite being well paid, are genuinely soul-crushing.

My question, though, is how much of this is due to technological advancement, and will the continued advancement of technology (AI etc) increase the ratio of BS jobs to essential jobs further in favor of the BS jobs?

David Graeber was right about bullsh*t jobs - Max Murphy

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

"AI could actually help rebuild the middle class," says David Autor.

"Artificial intelligence can enable a larger set of workers equipped with necessary foundational training to perform higher-stakes decision-making tasks currently arrogated to elite experts, such as doctors, lawyers, software engineers and college professors. In essence, AI -- used well -- can assist with restoring the middle-skill, middle-class heart of the US labor market that has been hollowed out by automation and globalization."

"Prior to the Industrial Revolution, goods were handmade by skilled artisans: wagon wheels by wheelwrights; clothing by tailors; shoes by cobblers; timepieces by clockmakers; firearms by blacksmiths."

"Unlike the artisans who preceded them, however, expert judgment was not necessarily needed -- or even tolerated -- among the 'mass expert' workers populating offices and assembly lines."

"As a result, the narrow procedural content of mass expert work, with its requirement that workers follow rules but exercise little discretion, was perhaps uniquely vulnerable to technological displacement in the era that followed."

"Stemming from the innovations pioneered during World War II, the Computer Era (AKA the Information Age) ultimately extinguished much of the demand for mass expertise that the Industrial Revolution had fostered."

"Because many high-paid jobs are intensive in non-routine tasks, Polanyi's Paradox proved a major constraint on what work traditional computers could do. Managers, professionals and technical workers are regularly called upon to exercise judgment (not rules) on one-off, high-stakes cases."

Polanyi's Paradox, named for Michael Polanyi who observed in 1966, "We can know more than we can tell," is the idea that "non-routine" tasks involve "tacit knowledge" that can't be written out as procedures -- and hence coded into a computer program. But AI systems don't have to be coded explicitly and can learn this "tacit knowledge" like humans.

"Pre-AI, computing's core capability was its faultless and nearly costless execution of routine, procedural tasks."

"AI's capacity to depart from script, to improvise based on training and experience, enables it to engage in expert judgment -- a capability that, until now, has fallen within the province of elite experts."

Commentary: I feel like I had to make the mental switch from expecting AI to automate "routine" work to "mental" work, i.e. what matters is mental-vs-physical, not creative-vs-routine. Now we're right back to talking about the creative-vs-routine distinction.

AI could actually help rebuild the middle class | noemamag.com

#solidstatelife #ai #technologicalunemployment

waynerad@diasp.org

"Texas will use computers to grade written answers on this year's STAAR tests."

STAAR stands for "State of Texas Assessments of Academic Readiness" and is a standardized test given to elementary through high school students. It replaced an earlier test, the TAKS, starting in the 2011-12 school year.

"The Texas Education Agency is rolling out an 'automated scoring engine' for open-ended questions on the State of Texas Assessment of Academic Readiness for reading, writing, science and social studies. The technology, which uses natural language processing, a building block of artificial intelligence chatbots such as GPT-4, will save the state agency about $15 million to 20 million per year that it would otherwise have spent on hiring human scorers through a third-party contractor."

"The change comes after the STAAR test, which measures students' understanding of state-mandated core curriculum, was redesigned in 2023. The test now includes fewer multiple choice questions and more open-ended questions -- known as constructed response items."

Texas will use computers to grade written answers on this year's STAAR tests

#solidstatelife #ai #llms #technologicalunemployment

waynerad@diasp.org

The Daily Show with Jon Stewart did a segment on AI and jobs. Basically, we're all going to get helpful assistants which will make us more productive, so it's going to be great, except, more productive means fewer humans employed, but don't worry, that's just the 'human' point of view. (First 8 minutes of this video.)

Jon Stewart on what AI means for our jobs & Desi Lydic on Fox News's Easter panic | The Daily Show

#solidstatelife #ai #aiethics #technologicalunemployment

waynerad@diasp.org

Survey of 2,700 AI researchers.

The average response placed each of the following within the next 10 years:

Simple Python code given spec and examples
Good high school history essay
Angry Birds (superhuman)
Answer factoid questions with web
World Series of Poker
Read text aloud
Transcribe speech
Answer open-ended fact questions with web
Translate text (vs. fluent amateur)
Group new objects into classes
Fake new song by specific artist
Answers undecided questions well
Top Starcraft play via video of screen
Build payment processing website
Telephone banking services
Translate speech using subtitles
Atari games after 20m play (50% vs. novice)
Finetune LLM
Construct video from new angle
Top 40 Pop Song
Recognize object seen once
All Atari games (vs. pro game tester)
Learn to sort long lists
Fold laundry
Random new computer game (novice level)
NYT best-selling fiction
Translate text in newfound language
Explain AI actions in games
Assemble LEGO given instructions
Win Putnam Math Competition
5km city race as bipedal robot (superhuman)
Beat humans at Go (after same # games)
Find and patch security flaw
Retail Salesperson

...and the following within the next 20 years:

Equations governing virtual worlds
Truck Driver
Replicate ML paper
Install wiring in a house
ML paper

... and the following within the next 40 years:

Publishable math theorems
High Level Machine Intelligence (all human tasks)
Millennium Prize
Surgeon
AI Researcher
Full Automation of Labor (all human jobs)

It should be noted that while these were the averages, there was a very wide variance -- so a wide range of plausible dates.

"Expected feasibility of many AI milestones moved substantially earlier in the course of one year (between 2022 and 2023)."

If you're wondering what the difference between "High-Level Machine Intelligence" and "Full Automation of Labor" is, they said:

"We defined High-Level Machine Intelligence thus: High-level machine intelligence is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption."

"We defined Full Automation of Labor thus:"

"Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. [...] Say we have reached 'full automation of labor' when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers."

They go on to say,

"Predictions for a 50% chance of the arrival of Full Automation of Labor are consistently more than sixty years later than those for a 50% chance of the arrival of High Level Machine Intelligence."

That seems crazy to me. In my mind, as soon as feasibility is reached, cost will go below human labor very quickly, and the technology will be adopted everywhere. That is what has happened with everything computers have automated so far.

"We do not know what accounts for this gap in forecasts. Insofar as High Level Machine Intelligence and Full Automation of Labor refer to the same event, the difference in predictions about the time of their arrival would seem to be a framing effect."

A framing effect that large?

"Since 2016 a majority of respondents have thought that it's either 'quite likely,' 'likely,' or an 'about even chance' that technological progress becomes more than an order of magnitude faster within 5 years of High Level Machine Intelligence being achieved."

"A large majority of participants thought state-of-the-art AI systems in twenty years would be likely or very likely to:"

  1. Find unexpected ways to achieve goals (82.3% of respondents),
  2. Be able to talk like a human expert on most topics (81.4% of respondents), and
  3. Frequently behave in ways that are surprising to humans (69.1% of respondents)

"Most respondents considered it unlikely that users of AI systems in 2028 will be able to know the true reasons for the AI systems' choices, with only 20% giving it better than even odds."

"Scenarios worthy of most concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%)."

"Respondents exhibited diverse views on the expected goodness/badness of High Level Machine Intelligence. Responses range from extremely optimistic to extremely pessimistic. Over a third of participants (38%) put at least a 10% chance on extremely bad outcomes (e.g. human extinction)."

Thousands of AI authors on the future of AI

#solidstatelife #ai #technologicalunemployment #futurology

waynerad@diasp.org

"Ema, a 'Universal AI employee,' emerges from stealth with $25M."

"Meet Ema, a universal AI employee that boosts productivity across every role in your organization. She is simple to use, trusted, and accurate."

[Insert joke here about how saying things like that won't make people worry about their jobs.]

"Ema's the missing operating system that makes Generative AI work at an enterprise level. Using proprietary Generative Workflow Engine, Ema automates complex workflows with a simple conversation. She is trusted, compliant and keeps your data safe. EmaFusion model combines the outputs from the best models (public large language models and custom private models) to amplify productivity with unrivaled accuracy. See how Ema can transform your business today."

The article says Ema (the company) has already quietly amassed customers while still in stealth, including Envoy Global, TrueLayer, and Moneyview.

"Ema's Personas operate on our patent-pending Generative Workflow Engine (GWE), which goes beyond simple language prediction to dynamically map out workflows with a simple conversation. Our platform offers Standard Personas for common enterprise roles such as Customer Service Specialists (CX), Employee Assistant (EX), Data Analyst, Sales Assistant etc. and allows for the rapid creation of Specialized Personas tailored to rapidly automate unique workflows. No more waiting for months to build Gen AI apps that work!"

"To address accuracy issues and computational costs inherent in current Gen AI applications, Ema leverages our proprietary 'fusion of experts' model, EmaFusion, that exceeds 2 Trillion parameters. EmaFusion intelligently combines many large language models (over 30 today and that number keeps growing), such as Claude, Gemini, Mistral, Llama2, GPT4, GPT3.5, and Ema's own custom models. Furthermore, EmaFusion supports integration of customer developed private models, maximizing accuracy at the most optimal cost for every task."
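EmaFusion's actual routing and scoring are proprietary, but the general "fusion of experts" idea they describe -- send a query to several models and pick the best candidate answer -- can be sketched like this. Every name and the scoring function here are invented for illustration:

```python
# Hypothetical "fusion of experts" sketch: query several models, score
# each candidate answer, return the highest-scoring one. Not Ema's
# actual (proprietary) algorithm -- just the general pattern.
from typing import Callable

Model = Callable[[str], str]

def fuse(query: str, models: dict[str, Model],
         score: Callable[[str, str], float]) -> str:
    """Return the highest-scoring candidate answer across all models."""
    candidates = {name: model(query) for name, model in models.items()}
    return max(candidates.values(), key=lambda answer: score(query, answer))

# Toy stand-ins for the underlying LLMs and a toy scoring function.
models = {
    "model_a": lambda q: q.upper(),
    "model_b": lambda q: q + "!",
}
best = fuse("hello", models, score=lambda q, a: len(a))
print(best)  # "hello!" -- the longest answer wins under this toy scorer
```

In a real system the scorer would itself be a learned model, and routing would happen before inference to avoid paying for all 30+ models on every query.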

Oh, and "Ema" stands for "enterprise machine assistant".

Ema "taps into more than 30 large language models."

"As for what Ema can do, these businesses are using it in applications that range from customer service -- including offering technical support to users as well as tracking and other functions -- through to internal productivity applications for employees. Ema's two products -- Generative Workflow Engine (GWE) and EmaFusion -- are designed to "emulate human responses" but also evolve with more usage with feedback."

They also say, "Pre-integrated with hundreds of apps, Ema is easy to configure and deploy."

What are those integrations? They said some of those integrations are: Box, Dropbox, Google Drive, OneDrive, SharePoint, Clear Books, FreeAgent, FreshBooks, Microsoft Dynamics 365, Moneybird, NetSuite, QuickBooks Online, Sage Business Cloud, Sage Intacct, Wave Financial, Workday, Xero, Zoho Books, Aha!, Asana, Azure DevOps, Basecamp, Bitbucket, ClickUp, Dixa, Freshdesk, Freshservice, Front, GitHub Issues, GitLab, Gladly, Gorgias, Height, Help Scout, Hive, Hubspot Ticketing, Intercom, Ironclad, Jira, Jira Service Management, Kustomer, Linear, Pivotal Tracker, Rally, Re:amaze, Salesforce Service Cloud, ServiceNow, Shortcut, SpotDraft, Teamwork, Trello, Wrike, Zendesk, Zoho BugTracker, Zoho Desk, Accelo, ActiveCampaign, Affinity, Capsule, Close, Copper, HubSpot, Insightly, Keap, Microsoft Dynamics 365 Sales, Nutshell, Pipedrive, Pipeliner, Salesflare, Salesforce, SugarCRM, Teamleader, Teamwork CRM, Vtiger, Zendesk Sell, Zoho CRM, ApplicantStack, Ashby, BambooHR, Breezy, Bullhorn, CATS, ClayHR, Clockwork, Comeet, Cornerstone TalentLink, EngageATS, Eploy, Fountain, Freshteam, Greenhouse, Greenhouse - Job Boards API, Harbour ATS, Homerun, HR Cloud, iCIMS, Infinite BrassRing, JazzHR, JobAdder, JobScore, Jobsoid, Jobvite, Lano, Lever, Oracle Fusion - Recruiting Cloud, Oracle Taleo, Personio Recruiting, Polymer, Recruitee, Recruiterflow, Recruitive, Sage HR, SAP SuccessFactors, SmartRecruiters, TalentLyft, TalentReef, Teamtailor, UKG Pro Recruiting, Workable, Workday, Zoho Recruit, ActiveCampaign, Customer.io, getResponse, Hubspot Marketing Hub, Keap, Klaviyo, Mailchimp, MessageBird, Podium, SendGrid, Sendinblue, 7Shifts, ADP Workforce Now, AlexisHR, Altera Payroll, Azure Active Directory, BambooHR, Breathe, Ceridian Dayforce, Charlie, ChartHop, ClayHR, Deel, Factorial, Freshteam, Google Workspace, Gusto, Hibob, HRAlliance, HR Cloud, HR Partner, Humaans, Insperity Premier, IntelliHR, JumpCloud, Justworks, Keka, Lano, Lucca, Namely, Nmbrs, Officient, Okta, OneLogin, 
OysterHR, PayCaptain, Paychex, Paycor, PayFit, Paylocity, PeopleHR, Personio, PingOne, Proliant, Rippling, Sage HR, Sapling, SAP SuccessFactors, Sesame, Square Payroll, TriNet, UKG Dimensions, UKG Pro, UKG Ready, Workday, and Zenefits.

Ema, a 'Universal AI employee,' emerges from stealth with $25M

#solidstatelife #ai #genai #llms #aiagents #technologicalunemployment

waynerad@diasp.org

Devin, "the first AI software engineer".

You put it in the "driver's seat" and it does everything for you. Or at least that's the idea.

"Benchmark the performance of LLaMa".

Devin builds the whole project, uses the browser to pull up API documentation, runs into an unexpected error, adds a debugging print statement, uses the error in the logs to figure out how to fix the bug, then builds and deploys a website with full styling as visualization.
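The control flow the demo illustrates is a run / read-the-error / patch / retry loop. A loose sketch, with every name invented for illustration (Devin's actual agent loop is proprietary and far more sophisticated):

```python
# Illustrative agent loop: run the task, capture the failure, apply the
# next candidate fix, and retry -- up to an attempt budget.
def run_until_passing(task, candidate_fixes, max_attempts=3):
    """Try the task; on failure, log the error, apply a fix, retry."""
    fixes = iter(candidate_fixes)
    for _ in range(max_attempts):
        try:
            return task()
        except Exception as err:
            print(f"error in logs: {err}")  # the "debugging print statement"
            next(fixes)()                   # apply a fix suggested by the error
    raise RuntimeError("could not fix within the attempt budget")

# Toy example: the task fails until a missing value is supplied.
state = {}
result = run_until_passing(
    lambda: state["answer"],
    [lambda: state.update(answer=42)],
)
print(result)  # 42
```

The interesting part of a real agent, of course, is generating the candidate fixes from the error text -- the loop itself is the easy half.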

See below for reactions.

Introducing Devin, the first AI software engineer - Cognition

#solidstatelife #ai #genai #llms #codingai #technologicalunemployment

waynerad@diasp.org

"Shares of Teleperformance plunged 23% on Thursday, after the French call center and office services group missed its full-year revenue target."

"Investors have been spooked by the potential impact of artificial intelligence on its business model, as companies become more able to tap into the technology directly for their own benefit."

Call center group Teleperformance falls 23%; CEO insists AI cannot replace human staff

#solidstatelife #ai #genai #llms #technologicalunemployment