#solidstatelife

waynerad@diasp.org

PoliScore uses LLMs to rate legislators.

"Non-Partisan. For the People. Policy / Issues Based."

For my state, Colorado, it says:

"John W. Hickenlooper: A"
"Michael F. Bennet: A"
"Diana DeGette: A"
"Joe Neguse: A"
"Lauren Boebert: F"

Non-partisan, you say?

So I clicked on "John W. Hickenlooper":

"Overall benefit to society: 50"
"Immigration: 50"
"Healthcare: 49"
"Energy: 48"
"Technology: 47"
"Wildlife and forest management: 44"
"Social equity: 44"
"Environmental management and climate change: 43"
"Public lands and natural resources: 42"
"Education: 38"
"Agriculture and food: 37"
"Foreign relations: 37"
"Transportation: 36"
"Economics and commerce: 35"
"Crime and law enforcement: 33"
"National defense: 33"
"Housing: 30"
"Government: 28"

Hmm, wonder how it came up with those numbers?

"Senator John W. Hickenlooper has demonstrated a strong commitment to environmental management, energy innovation, and social equity through his recent legislative efforts. Notably, he sponsored the 'Reforestation, Nurseries, and Genetic Resources Support Act of 2024,' which aims to enhance reforestation efforts by providing financial and technical support to nurseries and seed orchards. This bill is expected to significantly benefit environmental management and climate change mitigation. Additionally, his sponsorship of the 'BIG WIRES Act' underscores his dedication to modernizing the US electric grid, promoting energy resilience, and integrating renewable energy sources, which are crucial for sustainable development."

"In the realm of social equity..."

I'm going to stop there because it goes on for 2 more paragraphs. Then, after that, there's a big list of 218 bills. Each bill has a grade; almost all are "A" and the lowest is a "C".

For comparison, I clicked on "Lauren Boebert":

"Overall benefit to society: -11"
"Agriculture and food: 11"
"National defense: 8"
"Energy: 7"
"Housing: 4"
"Transportation: 3"
"Technology: 3"
"Government: 2"
"Economics and commerce: 1"
"Crime and law enforcement: -1"
"Wildlife and forest management: -13"
"Foreign relations: -13"
"Education: -13"
"Public lands and natural resources: -14"
"Healthcare: -15"
"Social equity: -18"
"Environmental management and climate change: -26"
"Immigration: -29"

"Representative Lauren Boebert's legislative actions reveal a troubling pattern of prioritizing divisive and regressive policies over constructive and inclusive governance. Her support for the 'Withdrawal from the United Nations Framework Convention on Climate Change' and the 'WHO Withdrawal Act' underscores a disregard for international cooperation and global health, potentially isolating the US from critical global initiatives."

"Boebert's sponsorship of the 'Build the Wall and Deport Them All Act' and the 'Mass Immigration Reduction Act of 2024' highlights a harsh stance on immigration that could exacerbate social inequities and strain foreign relations. ..."

I'm going to stop there, but it goes on. Under "Bill History", there are 279 bills, almost all of which are graded either "D" or "F".

I tried clicking on a couple of bills. For John Hickenlooper, I clicked "Reproductive Freedom for Women Act":

"Overall benefit to society: 60"
"Social equity: 80"
"Healthcare: 70"
"Crime and law enforcement: 30"
"Economics and commerce: 20"
"Government: 10"

"The Reproductive Freedom for Women Act, introduced in the Senate, seeks to address the repercussions of the Supreme Court's decision in Dobbs v. Jackson, which significantly altered the legal landscape for abortion rights in the United States. The bill explicitly states Congress's support for protecting access to abortion and other reproductive health care services. It aims to restore the protections that were enshrined in the landmark Roe v. Wade decision, which had previously guaranteed a woman's right to choose an abortion. The high-level goals of the bill are to ensure that women have the freedom to make decisions about their reproductive health without undue governmental interference."

It goes on for 4 more paragraphs.

For Lauren Boebert, I clicked "No User Fees for Gun Owners Act":

"Overall benefit to society: -30"
"Government: -10"
"Economics and commerce: -20"
"Social equity: -30"
"Crime and law enforcement: -40"

"The 'No User Fees for Gun Owners Act' seeks to amend Section 927 of Title 18 of the United States Code and Part I of Subchapter B of Chapter 53 of the Internal Revenue Code of 1986. The primary goal of the bill is to prevent state and local governments from imposing any form of liability insurance, taxes, or user fees specifically as conditions for the ownership, manufacture, importation, acquisition, transfer, or continued possession of firearms and ammunition."

It goes on for 6 more paragraphs.

It looks to me like, if you're a liberal/Democrat, you just use this website as is. If you're a conservative/Republican, at first glance, it looks like you can invert the letter grades and reverse the positive/negative number scores. But, giving the matter more thought, it occurred to me that if the website is made assuming "liberal" values, then bad grades/negative numbers may just mean opposition to liberal values, but that might not tell you anything about what values the politician or bill is for, necessarily. In other words, I'm thinking, if you made comparable systems assuming either conservative or libertarian values, you wouldn't necessarily just get the inverse of this system. Your thoughts?

It may be that the AI-generated summaries for every bill, alongside the easy-to-navigate system of listing them under their sponsors/cosponsors, are the most valuable aspect of this site. It wouldn't be too hard to check in on a regular basis to see what bills your elected representatives are sponsoring/cosponsoring and get a general sense of what they're about.

I won't comment on the insanity of having a society with more laws than could possibly fit in any human brain while expecting all laws to be obeyed. Oh, whoops -- it looks like this site lists all the bills that are sponsored, whether or not they eventually get signed into law, so seeing a bill listed here doesn't (necessarily) mean you have to obey it.

Legislators - PoliScore: non-partisan political rating service

#solidstatelife #ai #genai #llms #domesticpolitics

waynerad@diasp.org

"SiPhox Health BiomarkerAI: Transform your PDF blood test results into simple, actionable insights".

Last time I had a blood test, the explanations that came with the results seemed pretty decent. But next time I have a blood test, I'll give this a shot. If you try it, let me know how it goes. I'm interested in whether you get anything better than just asking ChatGPT to explain things and recommend "actionable" actions.

SiPhox Health BiomarkerAI: Transform your PDF blood test results into simple, actionable insights

#solidstatelife #ai #medicalai

waynerad@diasp.org

A startup called Attio claims to be using AI to reinvent customer relationship management (CRM) software.

"CRM needs a ground-up reimagining. This vision drove us to found Attio and it's why we spent three years building such a strong foundation before our launch last year."

"This funding will vastly accelerate our vision of CRM in the AI era, which is built on three pillars:"

"A system of record: Our powerful, AI-native data model is designed as the modern system of record. It can match any business or data model with custom objects, and it stores information with rich, structured metadata. It's incredibly fast, handling massive workloads with millions of records at sub-50ms latency. We spent four years painstakingly building this foundation."

"A system of context: Attio will automatically ingest and understand all of your data -- structured and unstructured -- capturing the details of every video call, meeting, email, document, and even data from the web. It will present this information in a way that is always relevant and useful to you."

"A system of action: A comprehensive platform where you can architect and drive your entire GTM strategy, leveraging proactive AI agents to anticipate needs, automate complex tasks, and initiate processes across your whole stack without manual effort."

Attio raises $33 million in funding | Attio

#solidstatelife #ai #startups #crm

waynerad@diasp.org

Using LLMs to reverse JavaScript minification. Project Humanify is a tool to automate this process.

"Minification is a process of reducing the size of a Javascript file in order to optimize for fast network transfer."

"Most minification is lossless; There's no data lost when true is converted to its minified alternative !0."

"Some data is lost during the minification, but that data may be trivial to recreate. A good example is whitespace."

"The most important information that's lost during the minification process is the loss of variable and function names. When you run a minifier, it completely replaces all possible variable and function names to save bytes."

"Until now, there has not been any good way to reverse this process; when you rename a variable from crossProduct to a, there's not much you can do to reverse that process."

How to codify the process of renaming a function:

"1. Read the function's body,"
"2. Describe what the function does,"
"3. Try to come up with a name that fits that description."

"For a classical computer program it would be very difficult to make the leap from 'multiply b with itself' to 'squaring a number'. Fortunately recent advances in LLMs have made this leap not only possible, but almost trivial."

"Essentially the step 2. is called 'rephrasing' (or 'translating' if you consider Javascript as its own natural language), and LLMs are known to be very good at that."

"Another task where LLMs really shine is summarization, which is pretty much what we're doing in step 3. The only specialization here is that the output needs to be short enough and formatted to the camel case."
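The renaming loop described above can be sketched in a few lines. This is a toy illustration, not Humanify's actual code: the real tool parses the source into an AST (with Babel) and queries an LLM for each function, whereas here the "LLM output" is a hypothetical hard-coded name map and the rewrite is a simple word-boundary regex substitution.

```javascript
// Toy sketch of the rename step. The minified input and the
// llmSuggestions map stand in for what an LLM would produce after
// reading the function body ("return b * b" -> "squaring a number").
const minified = "function a(b){return b*b}";

// Hypothetical LLM output: minified names mapped to descriptive,
// camel-cased replacements.
const llmSuggestions = { a: "square", b: "value" };

function applyRenames(source, renames) {
  let result = source;
  for (const [oldName, newName] of Object.entries(renames)) {
    // Word-boundary regex so only whole identifiers are replaced;
    // a real tool would rename via the AST to avoid collisions.
    result = result.replace(new RegExp(`\\b${oldName}\\b`, "g"), newName);
  }
  return result;
}

console.log(applyRenames(minified, llmSuggestions));
// -> function square(value){return value*value}
```

Note the minified program and the restored one are behaviorally identical; only the names differ, which is exactly why the original names can never be recovered mechanically and an LLM's "best guess" is the state of the art.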

Using LLMs to reverse JavaScript variable name minification

#solidstatelife #ai #genai #llms #codingai #javascript #minification

waynerad@diasp.org

I was pondering the concept of "generations" today. The concept makes sense because, as technology advances, people born at different times experience different childhoods. "Generation" also means children, who then go on to have children, etc. But I decided to see what happens if we abandon the requirement that people be direct descendants and simply focus on the idea of childhoods with different technologies. I made a list of major technologies that I thought could be defining of childhood experience, arbitrarily defined "childhood" as birth to age 12, so age 13 onward would count as "adolescence" rather than "childhood", and then looked up the history of various technologies and squinted at various technology adoption curves. For the final step, I abandoned the traditional labels ("baby boomers", "gen X", "millennials", "gen Z", "gen alpha", etc) and simply labeled each "generation" for the technology that "distinguishes its childhood" from the childhoods of preceding generations. The end result was the following list:

Born 1838 to 1912 == the manufactured products generation
Born 1913 to 1922 == the home electrification generation
Born 1923 to 1942 == the automobile generation
Born 1943 to 1973 == the television generation
Born 1974 to 1986 == the home computer generation
Born 1987 to 1999 == the internet generation
Born 2000 to 2010 == the smartphone generation
Born 2011 to present == the AI generation

For manufactured products, I actually went with textiles (clothing), and for the US specifically rather than other parts of the world. The above refers specifically to the US -- you'll have to jiggle the dates to get other parts of the world.

For "AI generation", I thought of calling it the "generative AI generation", but decided that was too cumbersome. But I was thinking: ChatGPT and image generators like DALL-E came out in 2022 and reached 50% adoption in 2023, so anyone born in 2011 turned 12 in 2023, when adoption hit that mark. AI existed before "generative" AI, but it seems like it's "generative" AI that subjectively makes the world feel like a different place.

I was born in 1970, but because home computers, for me, showed up when I was 9 (I, or more precisely my dad, was ahead of the adoption curve), I count myself as a member of "the home computer generation" -- my childhood was distinguished from other generations by the presence of home computers and the absence of technology that came later (internet, smartphones, etc). (That computer was a TRS-80 Model III, in case you were wondering.)

Diffusion of innovations - Wikipedia

#solidstatelife #generations

waynerad@diasp.org

"The Silicon Valley Canon: On the paıdeía of the American tech elite"

Paıdeía?

"One must assume that paıdeía, which is to say, education and moral formation in the broadest and most comprehensive sense, is more important than anything else in deciding the character of a particular polıteía."

"I often draw a distinction between the political elites of Washington DC and the industrial elites of Silicon Valley with a joke: in San Francisco reading books, and talking about what you have read, is a matter of high prestige. Not so in Washington DC. In Washington people never read books -- they just write them."

"In Washington, the man of ideas is a wonk. The wonk is not a generalist. The ideal wonk knows more about his or her chosen topic than you ever will. She can comment on every line of a select arms limitation treaty, recite all Chinese human rights violations that occurred in the year 2023, or explain to you the exact implications of the new residential clean energy tax credit -- but never all at once."

"Books and reports are a sort of proof, a sign of achievement that can be seen by climbers of other peaks. An author has mastered her mountain. The wonk thirsts for authority: once she has written a book, other wonks will give it to her."

In contrast,

"The technologists of Silicon Valley do not believe in authority. They merrily ignore credentials, discount expertise, and rebel against everything settled and staid. There is a charming arrogance to their attitude. This arrogance is not entirely unfounded. The heroes of this industry are men who understood in their youth that some pillar of the global economy might be completely overturned by an emerging technology."

"Being men of action, most Silicon Valley sorts do not have time to write books. But they make time to read books."

The author goes on to say that there's no common "canon" of books people in Washington DC read, but maybe there's a "vague canon" of books many people in Silicon Valley read? The author went on to pose the challenge of naming it to his followers on Twitter. Surprisingly, I've actually read a lot of the books on the list.

Books I've read:

Hofstadter (1979), Gödel, Escher, Bach
Feynman (1985), Surely You're Joking, Mr Feynman!
Clayton Christensen (1997) The Innovator's Dilemma
Raymond (1999), The Cathedral and the Bazaar
Kurzweil (2005), The Singularity is Near
Nassim Nicholas Taleb (2007) The Black Swan or (2012) Antifragile (I read both -- his best book is Fooled by Randomness (2001), which came out before both of these)
Thiel (2014), Zero to One

In addition, there were some books I've partially read (does that count?)

Abelson and Sussman (1984), Structure and Interpretation of Computer Programs
Graham (1998-2024), Essays -- wait, this is actually a blog
Ries (2011), The Lean Startup
Alexander (2013-2024), Slate Star Codex/Astral Codex Ten -- wait, this is actually a blog
Bostrom (2014), Superintelligence
Yudkowsky, et al. (2009-2024), LessWrong -- this is actually a blog, too
Walter Isaacson (2011) Steve Jobs

What do you all think? Is this indicative of me having a similar disposition and/or mindset to the American tech elite? Or maybe to the previous generation?

The Silicon Valley Canon: On the paıdeía of the American tech elite

#solidstatelife #culture

waynerad@diasp.org

"In the 1930s, Disney invented the multiplane camera and was the first to create sound-synchronized, full color cartoons -- eventually leading to the groundbreaking animated film Snow White and the Seven Dwarfs."

"Marvel and DC Comics rose to prominence in the 1940s, dubbed the 'golden age of comics,' enabled by the mass availability of the 4-color rotary letterpress and offset lithography for printing comics at scale."

"Similarly, Pixar was uniquely positioned in the 1980s to leverage a new technology platform -- computers and 3D graphics."

"We believe the Pixar of the next century won't emerge through traditional film or animation, but rather through interactive video. This new storytelling format will blur the line between video games and television/film -- fusing deep storytelling with viewer agency and 'play,' opening up a vast new market."

So says Jonathan Lai of Andreessen Horowitz, the Silicon Valley investment firm.

"The promise of interactive video lies in blending the accessibility and narrative depth of TV/film, with the dynamic, player-driven systems of video games."

"The biggest remaining technical hurdle for interactive video is reaching frame generation speeds fast enough for content generation on the fly. Dream Machine currently generates ~1 frame per second. The minimum acceptable target for games to ship on modern consoles is a stable 30 FPS, with 60 FPS being the gold standard. With the help of advancements such as Pyramid Attention Broadcast (PAB), this could go up to 10-20 FPS on certain video types, but is still not quite fast enough."

("By mitigating redundant attention computation, PAB achieves up to 21.6 FPS with 10.6x acceleration, without sacrificing quality across popular diffusion transformer-based video generation models including Open-Sora, Open-Sora-Plan, and Latte.")

"Given the rate at which we've seen underlying hardware and model improvements, we estimate that we may be ~2 years out from commercially viable, fully generative interactive video."

"In February 2024, Google DeepMind released its own foundation model for end-to-end interactive video named Genie. The novel approach to Genie is its latent action model, which infers a hidden action in between a pair of video frames."

"We've seen teams incorporate video elements inside AI-native game engines." "Latens by Ilumine is building a 'lucid dream simulator' where users generate frames in real-time as they walk through a dream landscape." "Developers in the open-source community Deforum are creating real-world installations with immersive, interactive video. Dynamic is working on a simulation engine where users can control robots in first person using fully generated video." "Fable Studio is building Showrunner, an AI streaming service that enables fans to remix their own versions of popular shows." "The Alterverse built a D&D inspired interactive video RPG where the community decides what happens next. Late Night Labs is a new A-list film studio integrating AI into the creative process. Odyssey is building a visual storytelling platform powered by 4 generative models." "Series AI has developed Rho Engine, an end-to-end platform for AI game creation." "We're also seeing AI creation suites from Rosebud AI, Astrocade, and Videogame AI enable folks new to coding or art to quickly get started making interactive experiences."

"Who will build the Interactive Pixar?"

The Next Generation Pixar: How AI will Merge Film & Games

#solidstatelife #ai #genai #computervision #videoai #startups

waynerad@diasp.org

IBM has added an "AI Accelerator" called Spyre to the IBM z/Architecture. The z/Architecture was introduced in 2000 and uses virtual machines to provide backward compatibility to IBM mainframes all the way back to the IBM System/360 introduced in 1964.

So you can add AI to a 1964 mainframe. Well, not a literal 1964 mainframe, but a modern computer running software from a 1964 mainframe through an emulation system. That's still pretty crazy.

New Telum II Processor and IBM Spyre Accelerator: Expanding AI on IBM Z

#solidstatelife #ai #ibm

waynerad@diasp.org

The charges against Pavel Durov, CEO of Telegram. The French government has issued a press release detailing the specific charges against Pavel Durov, who was arrested Saturday, August 24 in Paris.

Besides the charges of not deleting salacious content, which I assume you have probably already heard about, the charges list things like: "Providing cryptology services aiming to ensure confidentiality without certified declaration," "Providing a cryptology tool not solely ensuring authentication or integrity monitoring without prior declaration," and "Importing a cryptology tool ensuring authentication or integrity monitoring without prior declaration."

Apparently in French law, there is a pre-declaration and pre-certification process software developers have to go through with the government. Pavel Durov has dual French and Russian citizenship. So I guess the French government figures their laws apply to him.

(Link goes to a PDF document. Pages 1 and 2 are in French and pages 3 and 4 have the same press release in English.)

Communiqué de presse

#solidstatelife #cryptography

waynerad@diasp.org

Somebody made an AI watermark remover. Not a remover for the new watermarking systems that are supposed to invisibly "watermark" images created by AI so it's possible to tell they're generated by AI -- nobody is using those -- yet -- no, we're talking about a system to remove old-fashioned regular watermarks on images.

Is this actually a good idea? Seems like people watermark images to keep people from bypassing their licensing terms.

It looks like this comes from China. So maybe this is something someone in China wants. (Languages available are English, Chinese, Spanish, Portuguese, Russian, and Bahasa Indonesia.)

Watermark Remover

#solidstatelife #ai #genai #computervision

waynerad@diasp.org

Andy Jassy, now the CEO of Amazon, says using AI -- specifically Amazon Q -- applied to the task of upgrading "foundational" software dependencies can reduce 50 developer-days' worth of work to just a few hours, and has saved the company $260 million.

One of the most tedious (but critical tasks) for software development teams is updating foundational software.

#solidstatelife #ai #genai #llms #codingai #amazonq

waynerad@diasp.org

"Humans Need Not Apply" 10 years later. Retrospective with CGP Grey about his legendary "Humans Need Not Apply" video, which is now 10 years old.

CGP Grey wanted to make the point that computers are coming after everyone's job, including that of the guy he's talking to (Myke Hurley) -- but Hurley says when he watched the video, he thought, "They'll never take my job."

A lot of the video concerned self-driving cars, which, in retrospect, is what he was most wrong about.

He tried to turn the word "autos" into a word to refer to all "automatic" vehicles, but the term didn't catch on. But he wanted to get people to think about self-driving vehicles of all kinds.

He said "They don't need to be perfect, they just need to be better than humans", but he now considers himself totally wrong about that. People really require self-driving cars to be perfect. People demand perfection. They need to be as safe as airplanes. They talk about how people psychologically have a need to feel "in control", and if something bad happens, they need a human to blame, not some algorithm made of 0s and 1s. If you take the decision-making out of human hands, it needs to be perfect.

Tesla's recent "Drive Naturally" mode, where everything is learned by neural networks rather than hand-coded by humans, is remarkably human-like in how it drives.

The very last part of the "Humans Need Not Apply" video, which he called "software bots", has emerged dramatically in the last 2 years. He thought self-driving cars would advance faster, and "software bots" would come later, but "The last couple of years have been terrifyingly fast."

No mention of the horses? For me the most memorable thing about "Humans Need Not Apply" was the imaginary conversation between horses about how they had nothing to fear from this new invention, the automobile -- employment for horses had always gone up throughout history. But the horse population actually peaked in 1915 and has gone down ever since. So there isn't some rule of nature that says there always has to be employment for horses, that horses can't be automated. Likewise, CGP Grey invites us to consider that there's no law of nature guaranteeing employment for humans.

This video (which is really audio-only -- it's essentially an audio podcast) is 1.5 hours, but only the first 30 minutes are about the "Humans Need Not Apply" video. However, you might want to listen to the whole thing as CGP Grey and Myke Hurley contemplate AI and the future of AI. CGP Grey talks about how he is of two minds regarding how to think about the future. The first mind says: the way to think about technological change is that it's the same as it has always been, only faster. We've had technological change since caveman days, so just extrapolate that out into the future. The second mind is the "doom" mind: he really does think there is some kind of boundary we are getting closer to, beyond which it is functionally impossible to think about the future, to the point where it is pointless to even try to plan. Where is that boundary? That boundary is there because this thing, AI, is different. Everyone thinks their time is different, but he really feels like AI is really "this time is different." "Humans Need Not Apply" was trying to get people to seriously engage with this idea.

Is AI still doom? (Humans Need Not Apply -- 10 years later) - Cortex Podcast

#solidstatelife #ai #robotics #genai #llms #technologicalunemployment

waynerad@diasp.org

"The Techno-Humanist Manifesto: A new philosophy of progress for the 21st century" by Jason Crawford.

"We live in an age of wonders. To our ancient ancestors, our mundane routines would seem like wizardry: soaring through the air at hundreds of miles an hour; making night bright as day with the flick of a finger; commanding giant metal servants to weave our clothes or forge our tools; mixing chemicals in vast cauldrons to make a fertilizing elixir that grants vigor to crops; viewing events or even holding conversations from thousands of miles away; warding off the diseases that once sent half of children to an early grave. We build our homes in towers that rise above the hills; we build our ships larger and stronger than the ocean waves; we build our bridges with skeletons of steel, to withstand wind and storm. Our sages gaze deep into the universe, viewing colors the eye cannot see, and they have discovered other worlds circling other Suns."

And yet, we live in a time of more depression and anxiety disorders than ever before in human history -- which he doesn't mention. But he does say:

"But not everyone agrees that the advancement of science, technology, and industry has been such a good thing. 'Is 'Progress' Good for Humanity?' asks a 2014 Atlantic article, saying that 'the Industrial Revolution has jeopardized humankind's ability to live happily and sustainably upon the Earth.' In Guns, Germs, and Steel, a grand narrative of civilizational advancement, author Jared Diamond disclaims the assumption 'that the abandonment of the hunter-gatherer lifestyle for iron-based statehood represents 'progress,' or that it has led to an increase in human happiness.' Diamond also called agriculture 'the worst mistake in the history of the human race' and 'a catastrophe from which we have never recovered,' adding that this perspective demolishes a 'sacred belief: that human history over the past million years has been a long tale of progress.' Historian Christopher Lasch is even less charitable, asking: 'How does it happen that serious people continue to believe in progress, in the face of massive evidence that might have been expected to refute the idea of progress once and for all?' Economic growth is called an 'addiction,' a 'fetish,' a 'Ponzi scheme,' a 'fairy tale.' There is even a 'degrowth' movement advocating economic regress as an ideal."

"With so little awareness of progress, and so much despair for the future, our society is unable to imagine what to build or to dream of where to go. As late as the 1960s, Americans envisioned flying cars, Moon bases, and making the desert bloom using cheap, abundant energy from nuclear power." "Today we hope, at best, to avoid disaster: to stop climate change, to prevent pandemics, to stave off the collapse of democracy."

"This is not merely academic. If society believes that scientific, technological and industrial progress is harmful or dangerous, people will work to slow it down or stop it."

"Even where the technical challenges have long been solved, we seem unable to build or to operate. The costs of healthcare, education, and housing continue to rise. Energy projects, even 'clean' ones, are held up for years by permitting delays and lack of grid connections. California's high-speed rail, now decades in the making, has already cost billions of dollars and is still years away from completing even an initial operating segment, which will not provide service to either LA or San Francisco."

This is an interesting point. Technological advancement should make everything cheaper and faster while still being just as good or better in terms of quality. But since that's not happening, at least in certain sectors, it would appear the weight of human bureaucracy can slow or prevent technological progress.

"On the horizon, powerful new technologies are emerging, intensifying the debate over technology and progress. Robotaxis are doing business on city streets; mRNA can create vaccines and maybe soon cure cancers; there's a renaissance in both supersonic flight and nuclear energy. SpaceX is landing reusable rockets, promising to enable the space economy, and testing an enormous Starship, promising to colonize Mars. A new generation of founders have ambitions in atoms, not just bits: manufacturing facilities in space, net-zero hydrocarbons synthesized with solar or nuclear power, robots that carve sculptures in marble. Most significantly, LLMs have created a general kind of artificial intelligence -- which, depending on who you ask, is either the next big thing in the software industry, the next general-purpose technology to rival the steam engine or the electric generator, the next age of humanity after agriculture and industrialization, or the next dominant species that will replace humanity altogether."

"The world needs a moral defense of progress based in humanism and agency -- that is, one that holds human life as its standard of value, and emphasizes our ability to shape the future. This is what I am calling 'techno-humanism': the idea that science, technology and industry are good -- because they promote human life, well-being, and agency."

Ok, so, if I understand this guy's premise correctly, the fact that depression and anxiety are at an all-time high, and this appears to be a reaction to previous generations of technology, is not something we should worry about because, while technology always creates new problems, yet more technology always solves them. So it is just a matter of time before solutions to the current depression and anxiety problems will be found, and maybe they will involve new technologies like AI.

Those of you who have been following me for a while know a lot of what I predict is based on my experience of disillusionment brought about by the internet. In the mid-to-late 90s, I was one of those people who thought the internet would be a "democratizing" force, empowering the little people, and bringing mutual understanding between people from different walks of life. Instead, it has proven to be a "centralizing" force, with a small handful of giant tech companies dominating the landscape, with economic power concentrated in those same tech companies, and the "little people" being worse off as inequality becomes vaster and vaster, and the vast increase in communications bandwidth hasn't brought people from different walks of life to any mutual understanding -- people are getting along worse, not better, and our society is more polarized than it ever was. As the old saying goes, fool me once, shame on you, fool me twice, shame on me. So I always feel distrustful of any utopian claims for future technology. The rule I tend to follow is: If we're talking about technological capabilities, I'm an extreme "optimist" -- I think technological capabilities will continue, even past the point where technology is capable of everything humans are capable of -- but if we're talking about social outcomes, I'm an extreme "pessimist" -- I think technology never solves problems rooted in human nature. Give humans infinite communication bandwidth, and you don't get mutual understanding and harmony. If people don't get along, people don't get along, and that's all there is to it. People have to solve "people" problems. Technology doesn't solve "people" problems.

The first 5 installments have been written and they're all pretty interesting. I'm just responding here to "The Present Crisis" introduction. I may or may not comment on later installments (not promising anything). I encourage you all to read it for yourself.

Announcing The Techno-Humanist Manifesto | The Roots of Progress

#solidstatelife #ai #environment #sociology #philosophy #futurology

waynerad@diasp.org

"Today, we're excited to introduce The AI Scientist, the first comprehensive system for fully automatic scientific discovery, enabling Foundation Models such as Large Language Models (LLMs) to perform research independently."

"We" meaning Sakana AI.

"The AI Scientist automates the entire research lifecycle, from generating novel research ideas, writing any necessary code, and executing experiments, to summarizing experimental results, visualizing them, and presenting its findings in a full scientific manuscript."

"We also introduce an automated peer review process to evaluate generated papers, write feedback, and further improve results. It is capable of evaluating generated papers with near-human accuracy."

"The automated scientific discovery process is repeated to iteratively develop ideas in an open-ended fashion and add them to a growing archive of knowledge, thus imitating the human scientific community."

"In this first demonstration, The AI Scientist conducts research in diverse subfields within machine learning research, discovering novel contributions in popular areas, such as diffusion models, transformers, and grokking."

"The AI Scientist is designed to be compute efficient. Each idea is implemented and developed into a full paper at a cost of approximately $15 per paper. While there are still occasional flaws in the papers produced by this first version (discussed below and in the report), this cost and the promise the system shows so far illustrate the potential of The AI Scientist to democratize research and significantly accelerate scientific progress."

The obvious missing step, to me, is replication. But let's continue.

The 4-step process from idea to paper (it looks like I'm quoting a lot, but I tried to chop this down):

"1. Idea Generation: Given a starting template, The AI Scientist first 'brainstorms' a diverse set of novel research directions. We take inspiration from evolutionary computation and open-endedness research and iteratively grow an archive of ideas using LLMs as the mutation operator. Each idea comprises a description, experiment execution plan, and (self-assessed) numerical scores of interestingness, novelty, and feasibility. At each iteration, we prompt the language model to generate an interesting new research direction conditional on the existing archive, which can include the numerical review scores from completed previous ideas. We use multiple rounds of chain-of-thought and self-reflection to refine and develop each idea. After idea generation, we filter ideas by connecting the language model with the Semantic Scholar API and web access as a tool. This allows The AI Scientist to discard any idea that is too similar to existing literature."
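The archive-growing loop described above can be sketched roughly as follows. This is only my reading of the quoted description, not Sakana AI's actual code: `llm_propose` stands in for prompting the LLM conditioned on the archive, and `too_similar` stands in for the Semantic Scholar novelty check. All the names here are my assumptions, and the stand-ins are deterministic so the loop is runnable.

```python
# Sketch of the idea-archive loop, assuming stand-ins for the LLM and
# the Semantic Scholar API (the real system calls both).

def llm_propose(archive):
    """Stand-in for prompting the LLM -- the 'mutation operator'."""
    n = len(archive)
    return {
        "description": f"idea-{n}",
        "plan": f"experiment plan for idea-{n}",
        # Self-assessed scores, as described in the quote.
        "interestingness": 5,
        "novelty": 5,
        "feasibility": 5,
    }

def too_similar(idea, archive):
    """Stand-in for the Semantic Scholar similarity check."""
    return any(idea["description"] == prior["description"] for prior in archive)

def grow_archive(iterations=5):
    archive = []
    for _ in range(iterations):
        idea = llm_propose(archive)         # generate conditional on the archive
        if not too_similar(idea, archive):  # discard ideas too close to prior work
            archive.append(idea)
    return archive

print([idea["description"] for idea in grow_archive()])
# → ['idea-0', 'idea-1', 'idea-2', 'idea-3', 'idea-4']
```

The interesting design choice in the real system is that the self-assessed scores of earlier ideas feed back into later prompts, which is what makes it resemble an evolutionary algorithm rather than one-shot brainstorming.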

"2. Experiment Iteration: Given an idea and a template, the second phase of The AI Scientist first executes the proposed experiments and then visualizes its results for the downstream write-up. The AI Scientist uses Aider to first plan a list of experiments to run and then executes them in order. We make this process more robust by returning any errors upon a failure or time-out (e.g. experiments taking too long to run) to Aider to fix the code and re-attempt up to four times."

"After the completion of each experiment, Aider is then given the results and told to take notes in the style of an experimental journal. Currently, it only conditions on text but in future versions, this could include data visualizations or any modality. Conditional on the results, it then re-plans and implements the next experiment. This process is repeated up to five times. Upon completion of experiments, Aider is prompted to edit a plotting script to create figures for the paper using Python. The AI Scientist makes a note describing what each plot contains, enabling the saved figures and experimental notes to provide all the information required to write up the paper. At all steps, Aider sees its history of execution."
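The error-feedback loop in phase 2 -- run an experiment, and on failure or timeout hand the error back to the code-fixing agent (Aider, in the real system) for up to four re-attempts -- can be sketched like this. `run_experiment` and `fix_code` are hypothetical stand-ins of my own; the toy "experiment" below just fails until the code has been patched twice.

```python
# Sketch of the retry-with-repair loop, assuming hypothetical stand-ins
# for the experiment runner and the Aider-style code-fixing agent.

MAX_RETRIES = 4

def run_with_retries(code, run_experiment, fix_code):
    for _ in range(1 + MAX_RETRIES):         # first try plus up to four re-attempts
        try:
            return run_experiment(code)
        except Exception as err:             # e.g. a crash or a time-out
            code = fix_code(code, str(err))  # agent repairs the code, then retry
    raise RuntimeError("experiment still failing after all retries")

# Toy usage: an "experiment" that crashes until the code has been fixed twice.
def run_experiment(code):
    if code.count("fix") < 2:
        raise ValueError("experiment crashed")
    return "results"

def fix_code(code, error):
    return code + " fix"

print(run_with_retries("initial code", run_experiment, fix_code))  # → results
```

Returning the error text to the agent, rather than just retrying blindly, is what gives the loop a chance to converge instead of repeating the same failure four times.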

"3. Paper Write-up: The third phase of The AI Scientist produces a concise and informative write-up of its progress in the style of a standard machine learning conference proceeding in LaTeX. We note that writing good LaTeX can even take competent human researchers some time, so we take several steps to robustify the process. This consists of the following:"

"(a) Per-Section Text Generation: The recorded notes and plots are passed to Aider, which is prompted to fill in a blank conference template section by section. This goes in order of introduction, background, methods, experimental setup, results, and then the conclusion. All previous sections of the paper it has already written are in the context of the language model."

"(b) Web Search for References: In a similar vein to idea generation, The AI Scientist is allowed 20 rounds to poll the Semantic Scholar API looking for the most relevant sources to compare and contrast the near-completed paper against for the related work section."

"(c) Refinement: After the previous two stages, The AI Scientist has a completed first draft, but can often be overly verbose and repetitive. To resolve this, we perform one final round of self-reflection section-by-section."

"(d) Compilation: Once the LaTeX template has been filled in with all the appropriate results, this is fed into a LaTeX compiler. We use a LaTeX linter and pipe compilation errors back into Aider so that it can automatically correct any issues."

After the paper is produced, we're not done.

"Automated paper reviewing: A key component of an effective scientific community is its reviewing system, which evaluates and improves the quality of scientific papers. To mimic such a process using large language models, we design a GPT-4o-based agent to conduct paper reviews based on the Neural Information Processing Systems (NeurIPS) conference review guidelines."

"To evaluate the LLM-based reviewer's performance, we compared the artificially generated decisions with ground truth data for 500 ICLR 2022 papers extracted from the publicly available OpenReview dataset."

They provide an example paper, "Dualscale Diffusion: Adaptive feature balancing for low-dimensional generative models", so you can evaluate how well you think the system works.

The AI Scientist: Towards fully automated open-ended scientific discovery

#solidstatelife #ai #genai #llms #scientificmethod

waynerad@diasp.org

Cory Doctorow traveled to Las Vegas for Defcon 32, where he gave a talk called "Disenshittify or die! How hackers can seize the means of computation and build a new, good internet that is hardened against our asshole bosses' insatiable horniness for enshittification".

Video of the talk hasn't been posted to YouTube, but Cory Doctorow posted "a lightly edited version of my speech crib".

The talk didn't give much in the way of solutions to enshittification, but it did elaborate well on Cory Doctorow's "theory on enshittification", if I may call it that (my term not his).

Basically, enshittification happens in 3 stages: 1. Platforms are good to users. 2. Platforms lock in users, then maltreat them in a way that is good to business customers. 3. Platforms lock in business customers -- at this point, they can capture all the value for themselves.

How it is done is by "twiddling" -- changing how the algorithms behind the business operate. Everyone else is left in the dark, wondering why did this product sell and not that one, this video go viral and not that one, this post on social media get shared and not that one, etc.

For the "why it is done", he changes the question to "Why now?" Because 4 things that used to "discipline" businesses are gone.

Those 4 things are: 1. Competition, 2. Regulation, and 3. The tech workforce. Eh, that's only 3. Did I somehow miss the 4th one?

Regarding 1: Competition, he says companies have been allowed to gobble up the competition, so today there is no competition.

Regarding 2: Regulation, he says the companies are now more powerful than the regulators, so the companies regulate the regulators.

Regarding 3: The tech workforce, "Eventually, supply caught up with demand. Tech laid off 260,000 of us last year, and another 100,000 in the first half of this year."

So now we are at a stage where tech companies can charge the most, deliver the least, all while sharing as little as possible with users, customers, suppliers, and workers.

At this point he urges tech workers to form unions, which seems to me to be the only solution he proposed.

Have a read and tell me if I missed anything.

Disenshittify or Die - Pluralistic

#solidstatelife #economics

waynerad@diasp.org

Eric Schmidt gave a talk at Stanford Business School, that was so censored it took me about 2 seconds to find it on YouTube -- oh wait, it's gone from YouTube. I guess it really is censored after all. And it's subtitled in Chinese, suggesting this talk is of interest to Chinese people. Er, was subtitled in Chinese. I guess it's gone now. Anyway, it was Eric Schmidt answering audience questions, moderated by Erik Brynjolfsson.

Anyway, Eric Schmidt has 2 predictions:

  1. He predicts that LLMs will soon have 1 million token context windows. He says that's 20 books, but I estimated 1,500 pages, which is more like 3 books.

  2. The next thing he predicts is "text-to-action" AIs. You give it text, and it does the actions you ask for. How it does this is by writing code (e.g. Python) and then running it. This is also called "agentic" AI.
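For what it's worth, the arithmetic behind my 3-book estimate in point 1 goes like this. The conversion factors are rough assumptions of mine, not from the talk: about 0.75 English words per token, about 500 words per printed page, and about 500 pages per book.

```python
# Back-of-envelope conversion of a 1-million-token context window.
# All three factors are rough assumptions, not measured values.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500
PAGES_PER_BOOK = 500

tokens = 1_000_000
words = tokens * WORDS_PER_TOKEN   # 750,000 words
pages = words / WORDS_PER_PAGE     # 1,500 pages
books = pages / PAGES_PER_BOOK     # 3 books

print(f"{words:,.0f} words, {pages:,.0f} pages, about {books:.0f} books")
# → 750,000 words, 1,500 pages, about 3 books
```

To get Schmidt's 20 books out of the same token count, you'd have to assume much shorter books, around 75 pages each, which is why I think 3 is the better estimate.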

There's a chemistry lab where knowledge from experiments is fed back into the AI, which uses it to plan the next experiments, which are carried out overnight (by humans? by robots?), and this is accelerating knowledge in chemistry and materials science. I don't remember the name, but if you watch the video -- eh, oh wait.

He envisions a future where, for example, TikTok gets banned, and you could go to an LLM and say, "Make me a TikTok clone", and you can just repeat that over and over and over until you hit upon a clone that "goes viral".

A few other points of note: He said he oscillates between thinking open source models and closed source models will win. It seems like only closed source is possible because of the huge amount of money involved. But then open source catches up and he flips to thinking the other way.

With regard to China, he says the US is ahead and has to stay ahead. Because of the huge amounts of money and expertise involved, only a few countries can compete -- the US and China and maybe a few others -- but not the EU, because Brussels screwed them. Everyone else just lives in the AI world the giants are creating. With regard to national security, countries will align themselves with the US or China, with the EU, South Korea, Japan, etc., in our camp.

Brrrrrp! I found a video with clips from the Stanford talk with commentary (from Matthew Berman) that seems to have not been taken down. He (Berman) focuses on things in the talk I didn't mention, like how CUDA locks people into Nvidia and that's responsible for Nvidia's disproportionately high market cap, and how people at Google aren't working 80-hour weeks any more but he thinks they should be.

#solidstatelife #ai #genai #llms #agenticai

https://www.youtube.com/watch?v=7PMUVqtXS0A

waynerad@diasp.org

"Kioxia has something new and very cool coming. At Flash Memory Summit (FMS) 2024, the company is showing off a SSD with an optical interface."

"The current demo is a very short range 40m in distance (~131ft) optical connection, but the company plans to have 100m distance in the future. One of the concepts is that this could allow for SSDs to be placed in locations far away from hot CPUs and GPUs that may be liquid-cooled. Instead, NAND can be placed in a more moderate temperature room or containment area where it performs the best."

For PCIe Gen8 or later.

Kioxia optical interface SSD demoed at FMS 2024

#solidstatelife #ssd