#solidstatelife

waynerad@diasp.org

"Vomitorium is a command-line tool designed to easily load an entire project into a single text file. It recursively scans directories, processes files, and compiles their contents into a single output file. Useful for working with LLMs."

You have to install npm to install it. "npm" stands for "Node.js package manager". Node.js is another one of those vomit-inducing things. Well, for me -- your mileage may vary, as the old expression goes.

vomitorium

#solidstatelife #ai #genai #llms

waynerad@diasp.org

Building a Zero Trust *ssh.Client.

So there's a new system for building secure applications called OpenZiti.

The system allows you to create "dark" applications, by which they mean, applications that do not expose ports on the internet that can be discovered by attackers with port scanning. Regular ssh (secure shell, which gives you a command line on a remote machine), for example, uses port 22, so anyone who tries to connect to TCP port 22 will be able to connect to the ssh daemon running on the machine. The port number can be changed, but 22 is the standard for ssh. Once the user is connected, they still can't actually get a command line and run any commands, because they have to get through ssh's authentication process. But they can connect in the first place. What if the application could be made "dark" so there isn't any port to connect to at all?

The way this is accomplished is, a different set of machines are designated as "nodes" and one of them acts as a "controller". The application connects to one of the nodes, and the end user connects to a different node. The system is called "zero trust" because unauthenticated users are not allowed to connect to the nodes. The nodes are not allowed to run applications, only shuffle messages from place to place. The application is reached through the connection the application made to a node, not by having an open port on the application machine. So by the time any data reaches an application, the user has already been authenticated. The communication is end-to-end encrypted. So the authentication and privacy functions are moved out of applications and into the OpenZiti system.
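To make the "dark application" idea concrete, here's a toy Python sketch of the pattern (my own illustration, not the OpenZiti protocol or API): the application dials out to a relay node, and the node drops unauthenticated traffic before it can ever reach the application.

```python
# Toy illustration of the "dark service" pattern: the application never
# listens on a port; it dials OUT to a relay node, and the relay only
# forwards traffic from users it has already authenticated.
# This is a conceptual sketch, not the OpenZiti protocol.

class RelayNode:
    def __init__(self):
        self.service = None          # outbound connection from the app
        self.authorized = {"alice"}  # users with valid identities

    def register_service(self, handler):
        # The app connects out to the node; no inbound port on the app host.
        self.service = handler

    def forward(self, user, request):
        if user not in self.authorized:
            return None              # unauthenticated traffic never reaches the app
        return self.service(request)

node = RelayNode()
node.register_service(lambda req: f"echo: {req}")

print(node.forward("alice", "hello"))  # reaches the service
print(node.forward("mallory", "scan")) # dropped at the node
```

The point of the sketch: a port scanner in the position of "mallory" never even gets a conversation with the application, because the only listening sockets belong to the nodes.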

The controller is necessary to set up the system, which by the way is called a "mesh network". Often they simply use the word "network" but it's important to distinguish between this "mesh network", which is made of software, and the physical network that is the internet.

Inside this "mesh network" is a complete public key infrastructure system. So instead of having ssh and HTTPS and every other system that wants secure communication to have its own ad-hoc public key infrastructure, you just make one public key infrastructure system for your whole organization.

It's written in Go but uses a library called libsodium for cryptography. The Go standard library has extensive cryptography services, so I wonder what it is missing that made them resort to using libsodium, which is written in C. Anyway, what's here is an example implementation of an ssh client that uses this system in Go. The reason it says "*ssh.Client" in the title is that "ssh.Client" is a type in Go

and the "*" indicates a pointer. (And I put in that line break to keep diaspora from making boldface.)

This is an interesting idea -- outsourcing key security services to a mesh network, instead of having it reside in each application. I wonder if this idea will catch on.

Zero Trust *ssh.Client

#solidstatelife #cybersecurity #cryptography

waynerad@diasp.org

This looks like the video game Doom, but it is actually the output of a diffusion model.

Not only that, but the idea here isn't just to generate video that looks indistinguishable from Doom gameplay, but to create a "game engine" that actually lets you play the game. In fact this diffusion model "game engine" is called "GameNGen", which you pronounce "game engine".

To do this, they actually made two neural networks. The first is a reinforcement learning agent that plays the actual game Doom. As it does so, its output gets ferried over to the second neural network as "training data". In this manner, the first neural network creates unlimited training data for the second neural network.

The second neural network is the actual diffusion model. They started with Stable Diffusion 1.4, a diffusion model "conditioned on" text, which is what enables it to generate images when you input text. They ripped out the "text" stuff and replaced it with conditioning on "actions" -- the buttons and mouse movements you make to play the game -- and on previous frames.

Inside the diffusion model, it creates "latent state" that represents the state of the game -- sort of. That's the idea, but it doesn't actually do a good job of it. It does a good job of remembering state that is actually represented on the screen (health, ammo, available weapons, etc), because it's fed the previous 3 frames of video every time step to generate the next frame of video, but not so good at remembering anything that goes off the screen. Oh, probably should mention, this diffusion model runs fast enough to generate images at "real time" video frame rates.
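A toy sketch of that conditioning loop (my own illustration; predict_frame is a hypothetical stand-in for the diffusion model): each new frame comes from the player's action plus a sliding window of the last 3 frames, which is also why anything older than the window gets forgotten.

```python
# Toy sketch of the conditioning loop described above: each new frame is
# produced from the player's action plus a sliding window of the previous
# 3 frames. "predict_frame" stands in for the diffusion model.
from collections import deque

def predict_frame(history, action):
    # Hypothetical stand-in for the denoising model: here we just
    # tag the frame with the action and how much history it saw.
    return f"frame({action}, ctx={len(history)})"

history = deque(maxlen=3)  # only the last 3 frames survive as context
for action in ["forward", "strafe", "fire"]:
    frame = predict_frame(list(history), action)
    history.append(frame)

print(list(history))
```

Anything that scrolls off screen and out of the frame window is simply gone as far as the model's conditioning is concerned -- which matches the state-forgetting behavior described above.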

Because it doesn't use the actual Doom game engine's state code -- or otherwise represent the game state with conventional code -- but represents state inside the neural network, and does so imperfectly for anything that goes off the screen, playing it feels like real Doom for short stretches. Over any extended length of time, though, humans can tell it's not real Doom.

GameNGen - Michael Kan

#solidstatelife #ai #genai #computervision #diffusionmodels #videogames #doom

waynerad@diasp.org

Guide to California Senate Bill 1047 "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act".

"If you do not train either a model that requires $100 million or more in compute, or fine tune such an expensive model using $10 million or more in your own additional compute (or operate and rent out a very large computer cluster)?"

"Then this law does not apply to you, at all."

"This cannot later be changed without passing another law."

"(There is a tiny exception: Some whistleblower protections still apply. That's it.)"

"Also the standard required is now reasonable care, the default standard in common law. No one ever has to 'prove' anything, nor need they fully prevent all harms."

"With that out of the way, here is what the bill does in practical terms."

"You must create a reasonable safety and security plan (SSP) such that your model does not pose an unreasonable risk of causing or materially enabling critical harm: mass casualties or incidents causing $500 million or more in damages."

"That SSP must explain what you will do, how you will do it, and why. It must have objective evaluation criteria for determining compliance. It must include cybersecurity protocols to prevent the model from being unintentionally stolen."

"You must publish a redacted copy of your SSP, an assessment of the risk of catastrophic harms from your model, and get a yearly audit."

"You must adhere to your own SSP and publish the results of your safety tests."

"You must be able to shut down all copies under your control, if necessary."

"The quality of your SSP and whether you followed it will be considered in whether you used reasonable care."

"If you violate these rules, you do not use reasonable care and harm results, the Attorney General can fine you in proportion to training costs, plus damages for the actual harm."

"If you fail to take reasonable care, injunctive relief can be sought. The quality of your SSP, and whether or not you complied with it, shall be considered when asking whether you acted reasonably."

"Fine-tunes that spend $10 million or more are the responsibility of the fine-tuner."

"Fine-tunes spending less than that are the responsibility of the original developer."

"Compute clusters need to do standard KYC when renting out tons of compute."

"Whistleblowers get protections."

So, for example, if your model enables the creation or use of a chemical, biological, radiological, or nuclear weapon, that would qualify as "causing or materially enabling critical harm".

"Open model advocates claim that open models cannot comply with this, and thus this law would destroy open source. They have that backwards. Copies outside developer control need not be shut down. Under the law, that is."

The author of the "Guide" (Zvi Mowshowitz) talks for some length about the recurrent term "reasonable" throughout the law. What is reasonable? How do you define reasonable? Reasonable people may disagree.

What struck me was the arbitrariness of the $100 million threshold. And the $10 million fine-tuning threshold. And how it's fixed -- as time goes on, computing power will get cheaper, so the power of models produced at those price points will increase -- and even if it didn't, there's inflation. Although inflation works in the opposite direction, making less powerful models cross the threshold.

But there's also a FLOPS threshold.

"To be covered models must also hit a FLOPS threshold, initially 10^26. This could make some otherwise covered models not be covered, but not the reverse."

"Fine-tunes must also hit a flops threshold, initially 3*(10^25) FLOPS, to become non-derivative."

FLOPS stands for "floating point operations per second". What struck me at first was the "per second" part -- as if training your model more slowly would make your "per second" number smaller and let you dodge this law. But as far as I can tell, the bill's compute thresholds are actually totals -- floating-point operations used in training, not a rate -- so slower training wouldn't help.

And unlike the $100 million and $10 million amounts, the FLOPS number is not fixed. That's why the word "initially" is there.
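Putting the guide's thresholds together as a toy check (the numbers come from the guide; the functions are just my illustration): a model or fine-tune is covered only if it clears both the dollar threshold and the compute threshold.

```python
# Toy check of the coverage thresholds as described in the guide:
# a model is covered only if it clears BOTH the dollar threshold and
# the compute threshold. Numbers come from the guide; the functions
# themselves are just an illustration, not legal advice.

def is_covered_model(training_cost_usd, training_flop):
    return training_cost_usd >= 100_000_000 and training_flop >= 1e26

def is_covered_finetune(finetune_cost_usd, finetune_flop):
    return finetune_cost_usd >= 10_000_000 and finetune_flop >= 3e25

print(is_covered_model(150_000_000, 2e26))  # True: over both thresholds
print(is_covered_model(150_000_000, 5e25))  # False: misses the FLOP threshold
```

This is also where the "could make some otherwise covered models not be covered, but not the reverse" line comes from: the FLOP test can only knock models out of coverage, never add them.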

"There is a Frontier Model Board, appointed by the Governor, Senate and Assembly, that will issue regulations on audits and guidance on risk prevention. However, the guidance is not mandatory, and There is no Frontier Model Division. They can also adjust the flops thresholds."

What do you all think? Are all the AI companies going to move out of California, or is this just fine?

Guide to SB 1047 - Zvi Mowshowitz

#solidstatelife #ai #genai #llms #aiethics

waynerad@diasp.org

"We finally have the first benchmarks from MLCommons [...] that pit the AMD Instinct 'Antares' MI300X GPU against Nvidia's 'Hopper' H100 and H200 and the 'Blackwell' B200 GPUs."

"The results are good in that they show the MI300X is absolutely competitive with Nvidia's H100 GPU on one set of AI inference benchmarks, and based on our own estimates of GPU and total system costs can be competitive with Nvidia's H100 and H200 GPUs. But, the tests were only done for the Llama 2 model from Meta Platforms with 70 billion parameters."

Would be good to see Nvidia have some competition. They had to get the model to work with AMD's ROCm, AMD's analog to Nvidia's CUDA.

The first AI benchmarks pitting AMD against Nvidia

#solidstatelife #ai #gpus #amd

waynerad@diasp.org

"Founder Mode", by Paul Graham of YCombinator.

"At a YC event last week Brian Chesky gave a talk that everyone who was there will remember."

"The theme of Brian's talk was that the conventional wisdom about how to run larger companies is mistaken. As Airbnb grew, well-meaning people advised him that he had to run the company in a certain way for it to scale. Their advice could be optimistically summarized as 'hire good people and give them room to do their jobs.' He followed this advice and the results were disastrous. So he had to figure out a better way on his own, which he did partly by studying how Steve Jobs ran Apple."

"In effect there are two different ways to run a company: founder mode and manager mode."

"Hire good people and give them room to do their jobs. Sounds great when it's described that way, doesn't it? Except in practice, judging from the report of founder after founder, what this often turns out to mean is: hire professional fakers and let them drive the company into the ground."

"One theme I noticed both in Brian's talk and when talking to founders afterward was the idea of being gaslit. Founders feel like they're being gaslit from both sides -- by the people telling them they have to run their companies like managers, and by the people working for them when they do. Usually when everyone around you disagrees with you, your default assumption should be that you're mistaken. But this is one of the rare exceptions. VCs who haven't been founders themselves don't know how founders should run companies, and C-level execs, as a class, include some of the most skillful liars in the world."

"Whatever founder mode consists of, it's pretty clear that it's going to break the principle that the CEO should engage with the company only via his or her direct reports. 'Skip-level' meetings will become the norm instead of a practice so unusual that there's a name for it."

"For example, Steve Jobs used to run an annual retreat for what he considered the 100 most important people at Apple, and these were not the 100 people highest on the org chart."

Founder Mode

#solidstatelife #startups

waynerad@diasp.org

An unauthorized Starlink satellite was secretly installed on a US Navy ship that deployed to the West Pacific in April of 2023.

"Today's Navy sailors are likely familiar with the jarring loss of internet connectivity that can come with a ship's deployment."

"For a variety of reasons, including operational security, a crew's internet access is regularly restricted while underway, to preserve bandwidth for the mission and to keep their ship safe from nefarious online attacks."

"But the senior enlisted leaders among the littoral combat ship Manchester's gold crew knew no such privation last year, when they installed and secretly used their very own Wi-Fi network during a deployment, according to a scathing internal investigation obtained by Navy Times."

"While rank-and-file sailors lived without the level of internet connectivity they enjoyed ashore, the chiefs installed a Starlink satellite internet dish on the top of the ship and used a Wi-Fi network they dubbed 'STINKY' to check sports scores, text home, and stream movies."

"Then-Command Senior Chief Grisel Marrero was relieved in late 2023 after repeatedly misleading and lying to her ship's command about the Wi-Fi network, and she was convicted at court-martial this spring in connection to the scheme."

For me this was an unexpected cybersecurity vulnerability. But the obvious solution is to make Starlink standard with its own separate network and the ability for the higher-ups to shut it off when they're in hostile territory or otherwise feel like it poses a security risk. Or so it seems to me.

How Navy chiefs conspired to get themselves illegal warship Wi-Fi

#solidstatelife #cybersecurity

waynerad@diasp.org

"Visualize your machine learning model." "Drag an .onnx file anywhere on this page to quickly visualize it."

(.onnx stands for Open Neural Network Exchange and is a file format from an effort to develop a standard format for neural network deployment.)

"Mycelium is a library for creating graph visualizations of machine learning models or any other directed acyclic graphs. It also powers the graph viewer of the Talaria model visualization and optimization system."

Talaria is a visualization and optimization system from Apple for helping people get neural networks to run on mobile phones and other "network edge" devices with limited computing power instead of powerful machines in data centers.

Mycelium - Graph visualization library

#solidstatelife #ai

waynerad@diasp.org

"Top 5 YC S24 Startups (according to AI)"

The AI in question being nFactorial AI, "Perplexity for researching 240+ companies from YC S24".

Here are the top 5 startups:

"SureBright: Offer your own Apple Care-like warranty program in 10 minutes!"
"Mito Health: AI-powered concierge doctor"
"Rewbi: Uses AI to increase grid-connected battery storage revenue 2x"
"Domu Technology Inc.: Automating debt collection calls for banks."
"Unriddle: Read and write research papers faster."

I was curious for more, so I clicked "View Full Leaderboard":

"Cartage: Autonomous freight operations"
"Planbase: Workforce management for modern healthcare"
"Fazeshift: AI agent for Accounts Receivable"
"Kontigo: USDC-Smart Neobank for Latinos."
"FINNY AI: Using ML to supercharge organic growth for Financial Advisors"
"Tabular: AI Autopilot for Accounting Firms"
"Saturn: Backoffice and compliance automation for wealth managers"
"Presti AI: Product photography for furniture companies with generative AI"

It keeps going. I'm up to top 12 out of 240.

For each startup it grades them on "traction", "team", "market", and "overall".

For each startup, it has an additional "AI Insight".

"AI Insight: SureBright demonstrates strong traction with existing partnerships and a unique offering that differentiates their warranty program, the founders have significant experience in major tech firms and entrepreneurship, and they are operating in a large and growing market with potential to expand the warranty segment by $45 billion."

nFactorial AI - Perplexity for researching 240+ companies from YC S24

#solidstatelife #ai #genai #llms #financialai #startups

waynerad@diasp.org

Richard Stallbot. I thought this would be an LLM trained on Richard Stallman text that would attempt to respond as Richard Stallman would. So I punched in:

"Proprietary software is awesome!"

If there's one thing I know about Richard Stallman it's that he hates proprietary software, so I figured this would get a pretty hostile response.

To my surprise, it responded with an actual Richard Stallman video, showing him saying:

"Proprietary software is not awesome, it's malware! It's a program designed to run in a way that hurts the user. You can't trust it because you don't have the freedom to check and modify it. That's why we developed the GNU + Linux system, to give users freedom and control over their computing."

Unless this "Richard Stallbot" is deepfaking video as well as text?

Chat with Richard Stallbot

#solidstatelife #genai #llms #richardstallman #gnu

waynerad@diasp.org

"How Intel missed the iPhone: XScale era".

We all know Intel missed the transition from computers to mobile devices and the transition from x86 CPUs to ARM CPUs. But what I didn't realize is that Intel was once dominant in ARM CPUs. They didn't call them "ARM", they called them "XScale", but they were ARM CPUs. Intel called them StrongARM -- a name that actually came from its acquisition of Digital Equipment Corporation (DEC)'s semiconductor manufacturing operations which had licensed ARM to make the DEC StrongARM chip -- and later XScale. So the question becomes, what did Intel do that blew its lead in ARM?

ARM, incidentally, stands for "Acorn RISC Machine", and RISC stands for "reduced instruction set computing".

This article is long and detailed, so I'm just going to jump straight to what seems to me like the key mistake. To me it seems like the key mistake was adding "single-instruction-multiple-data" (SIMD) instructions to the ARM design.

This may sound like something too subtle to have had such a major impact on Intel's business. But from the article, what I gathered is that this had two effects. First, it made customers worry that Intel, by marketing a non-standard ARM instruction set -- the regular instructions plus Intel's own MMX instructions (MMX allegedly stood for "multimedia extensions") -- was trying to cause single-vendor lock-in. Second, the SIMD instructions were actually worse than putting a dedicated digital signal processor (DSP) chip in the phone.

At first glance it might not be obvious why having a dedicated DSP chip would be better than adding SIMD instructions to the CPU to carry out the same signal processing tasks. But a lot of what a mobile phone does is decode and encode radio signals and voice signals, and a dedicated DSP chip can do these tasks more efficiently. It can be highly optimized (and have a specialized instruction set) for the most repetitive computations, unlike a general-purpose set of SIMD instructions. The DSP can have lower latency, since there's no need to load and unload application code that uses the SIMD instructions. The DSP gives predictable, deterministic performance, unlike a CPU whose behavior varies depending on how many applications are running and what load they put on it. And with the signal processing workload offloaded to a DSP chip, the CPU, in turn, can be more responsive to the user, reducing latency and making the user experience more interactive.

ARM responded with designs that had SIMD instructions, so customers could have them without using Intel XScale and risking single-vendor lock-in with Intel, but mobile phone makers went with dedicated DSP chips anyway. Crucially, Apple chose an ARM chip manufactured by Samsung for the original iPhone when it launched in 2007. This was shortly after Intel, realizing it had fallen behind in ARM, decided to double down on its x86 business and sold off its XScale manufacturing unit (to a company called Marvell). In this manner, Intel went from the dominant ARM company to being out of the ARM business entirely.

Today, we see mobile phone makers adding dedicated neural network chips.

How Intel Missed the iPhone: XScale Era

#solidstatelife #intel

waynerad@diasp.org

I just discovered there's a "Hacker Fab" at Carnegie Mellon University (CMU) that is producing a completely open-source system for photolithography -- so all the equipment you need to build your own photolithography tools and manufacture your own chips, you can build yourself from open-source designs.

I learned this from a video, by a guy who decided to build his own photolithography system in his garage, so I decided I'll just give you the video. (I'll put a link to the Hacker Fab below.) The main impression I got from the video is, if you wanted to build your own photolithography system, you'd spend countless hours getting tiny details right -- getting the focus exactly right, getting the alignment of things exactly right, etc. (He promises an additional video with even more of these details -- he says he cut a lot to make this one, and it's still 45 min long.) People in the semiconductor industry have literally spent decades getting all these details right so you can have your chips.

In the end, he manages to create a photolithography setup capable of manufacturing circa-1980-era chips, with a 1 micron minimum feature size -- er, almost. He didn't quite make 1 micron. A long way from the state of the art but still pretty impressive, and 1980-era chips are still used in embedded systems and such -- they're not entirely useless and haven't entirely gone away.

Speedrunning 30yrs of lithography technology - Breaking Taps

#solidstatelife #semiconductors

waynerad@diasp.org

A company called Piramidal is making a "foundation model" for electroencephalography (EEG).

There doesn't seem to be much information about how the model works, other than that it's based on a similar architecture as large language models (LLMs).

Piramidal’s foundation model for brain waves could supercharge EEGs

#solidstatelife #ai #medicalai

waynerad@diasp.org

"MiniTorch is a diy teaching library for machine learning engineers who wish to learn about the internal concepts underlying deep learning systems. It is a pure Python re-implementation of the Torch API designed to be simple, easy-to-read, tested, and incremental. The final library can run Torch code."

"Individual assignments cover:"

"ML Programming Foundations"
"Autodifferentiation"
"Tensors"
"GPUs and Parallel Programming"
"Foundational Deep Learning"

"The project was developed for the course Machine Learning Engineering at Cornell Tech and based on my experiences working at Hugging Face."

Wow, how exciting! Where will I get the time to do this course?
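For a flavor of what the "Autodifferentiation" assignment has you build, here's a minimal scalar autodiff in pure Python -- my own toy, not MiniTorch's actual API:

```python
# A minimal scalar autodiff sketch in pure Python, in the spirit of what
# MiniTorch has you build (this is my own toy, not MiniTorch's actual API).

class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward():
            # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward
        return out

    def backprop(self):
        # Topological order, then apply the chain rule from the output back.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x = Value(3.0)
y = Value(4.0)
z = x * y + x          # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backprop()
print(x.grad, y.grad)  # 5.0 3.0
```

Everything else in a deep learning framework -- tensors, GPU kernels, optimizers -- is layered on top of this core idea, which is roughly the arc of the course's assignments.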

MiniTorch

#solidstatelife #ai #aieducation

waynerad@diasp.org

"Book Review: '2040' by Pedro Domingos" by Scott Aaronson. That is, the book is by Pedro Domingos and the review is by Scott Aaronson.

"Pedro Domingos is a computer scientist at the University of Washington. I've known him for years as a guy who'd confidently explain to me why I was wrong about everything from physics to CS to politics ... but then, for some reason, ask to meet with me again. Over the past 6 or 7 years, Pedro has become notorious in the CS world as a right-wing bomb-thrower on what I still call Twitter -- one who, fortunately for Pedro, is protected by his tenure at UW. He's also known for a popular book on machine learning called The Master Algorithm, which I probably should've read but didn't."

I haven't read that book, either, but as I understand it, its premise is that the AI "master algorithm" will combine neural networks with symbolic AI. So far, we haven't seen any sign that's the way things are going to go.

But we have seen neural networks that solve problems by, for example, instead of trying to do calculations inside a language model, using the language model to write Python code and letting the Python code do the actual calculation. There's an integration with Wolfram|Alpha that I've tried out and that worked. So it seems to me the direction things will go is that neural networks will do the stuff analogous to the biological neural network known as the human brain, and will use calculating tools, like humans do. You as a human think abstractly about what calculations to do, then use a calculator or write Python code to actually do the calculations accurately, and neural networks will do the same thing. We're already partway down that path, and the future is to make the neural networks more multimodal, with improved context windows and long-term memory and so on. Anyway, getting back to the book review.
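But first, to make the "write Python code to do the calculation" idea concrete, here's a toy sketch of that tool-use pattern (my own illustration -- the safe-evaluator and names are made up, not any particular LLM framework's API):

```python
# Sketch of the "write code to do the calculation" pattern: the model emits
# a small arithmetic expression instead of computing in its head, and the
# host evaluates it. The safe-evaluator here is my own toy.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow}

def safe_eval(expr):
    # Evaluate arithmetic only -- no names, no function calls.
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

# Pretend the model answered a word problem with this expression:
model_output = "17 * 24 + 3"
print(safe_eval(model_output))  # 411
```

The division of labor is the point: the model does the abstract "what calculation do I need?" step, and exact arithmetic is delegated to ordinary code.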

"Now Pedro has released a short satirical novel, entitled 2040. The novel centers around a presidential election between:"

"The Democratic candidate, 'Chief Raging Bull,' an angry activist with 1/1024 Native American ancestry (as proven by a DNA test, the Chief proudly boasts) who wants to dissolve the United States and return it to its Native inhabitants, and"

"The Republican candidate, 'PresiBot,' a chatbot with a frequently-malfunctioning robotic 'body.' While this premise would've come off as comic science fiction five years ago, PresiBot now seems like it could plausibly be built using existing LLMs."

"This is all in a near-future whose economy has been transformed (and to some extent hollowed out) by AI, and whose populace is controlled and manipulated by 'Happinet,' a giant San Francisco tech company that parodies Google and/or Meta."

Happinet -- lol.

Ok, so obviously, the idea here is to extrapolate the current political situation and technological situation simultaneously out into the near future (much nearer than 2040, really) and do so in an entertaining and satirical manner. We've already had AI Steve, "Your independent candidate for Brighton Pavilion", so why not PresiBot?

"I should clarify that the protagonists, the ones we're supposed to root for, are the founders of the startup company that built PresiBot -- that is, people who are trying to put the US under the control of a frequently-glitching piece of software that's also a Republican. For some readers, this alone might be a dealbreaker. But as I already knew Pedro's ideological convictions, I felt like I had fair warning."

Book Review: "2040" by Pedro Domingos

#solidstatelife #ai #genai #llms #domesticpolitics

waynerad@diasp.org

You may have heard of Grace Hopper, often credited as the creator of the COBOL programming language in the early years of the computer revolution. More precisely, she invented a programming language called FLOW-MATIC in 1955 (while in the Navy) that was used by the team that created COBOL in 1959 -- so she didn't single-handedly create COBOL herself, and she later worked on standardization of FORTRAN as well as COBOL. But you've probably never heard what she sounds like. Well, this video, evidently a lecture given to the NSA in 1982, has mysteriously just surfaced online. What I never realized is what a sense of humor she has! She's a stand-up comic and computer scientist all in one.

In her talk, she emphasizes the importance of correct information in information systems. It may seem like a truism today, but in her day, all the attention went to hardware and software, and people didn't realize it's actually data that's the most valuable part of the system.

In her office, she banned the phrase "but we've always done it that way." You should always plan for the computers you're going to have, not the computers you have right now, or the computers you used to have. (On the flip side, later in the talk, she talks about the importance of calculating the cost of not doing something, and the benefit of sticking with standard languages and portable code -- but this too is anticipating the computers you're going to have which will support the standard languages but not the bells and whistles you're using right now.)

She explores what further exponential growth in computer power could do: weather forecasting, satellite imagery, oceanography, water management.

She shows the audience nanoseconds and microseconds -- her famous lengths of wire, each cut to the distance a signal travels in that amount of time. Programmers should be mindful of how many microseconds they are throwing away.

She foresees "systems of computers" -- the parallel processing we have today in the form of multicore CPUs and GPUs and data centers with separation of concerns. Maybe too much separation of concerns, as she envisions specialized machines for databases instead of general-purpose computers, whereas today we use general-purpose computers for those things, perhaps beefed up with extra memory and network bandwidth. We do have truly specialized computers (ASICs) for other things (mining Bitcoin, lol), so she wasn't exactly right, but she kinda still had the right idea.

She tells some stories of early computer industry security breaches.

Software costs too much to create and is too hard to maintain -- in 1982, lol. She frames the ripple effect of changes as an expected-value problem. The solution is modular software with defined interfaces and named owners. Today that may seem like an intuitive solution, but I've never seen it explained with probability math.
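Here's one way to put probability math behind that ripple-effect point (my own illustration, not Hopper's actual numbers): the expected cost of a change is the sum over modules of the probability the change ripples into that module times the cost if it does. Defined interfaces shrink those probabilities for every module outside the one you meant to change.

```python
# Expected cost of a change = sum over modules of
# P(change ripples into module) * cost of touching that module.
# The numbers below are made up purely for illustration.

def expected_change_cost(modules):
    # modules: list of (probability the change ripples here, cost if it does)
    return sum(p * cost for p, cost in modules)

# Monolith: a change plausibly touches everything.
monolith = [(0.6, 10), (0.5, 10), (0.5, 10)]
# Modular with defined interfaces: ripples mostly stop at the boundary.
modular = [(0.9, 10), (0.1, 10), (0.1, 10)]

print(expected_change_cost(monolith))  # 16.0
print(expected_change_cost(modular))   # 11.0
```

Same total work when a module is touched, but interfaces lower the odds that a change leaks into modules it shouldn't, so the expected cost drops.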

I wish I had seen this talk in 1982. My 1982 self would have found it inspirational. (I would've been 11 -- back then the NSA was known as No Such Agency and never would've let me attend a talk). Even watching it now, I found it surprisingly riveting. Grace Hopper deserves her reputation as a computer pioneer. For 1982, she was surprisingly prescient.

NSA releases internal 1982 lecture by computing pioneer Rear Admiral Grace Hopper - The Black Vault Originals

#solidstatelife #computerscience

waynerad@diasp.org

PoliScore uses LLMs to rate legislators.

"Non-Partisan. For the People. Policy / Issues Based."

For my state, Colorado, it says:

"John W. Hickenlooper: A"
"Michael F. Bennet: A"
"Diana DeGette: A"
"Joe Neguse: A"
"Lauren Boebert: F"

Non-partisan, you say?

So I clicked on "John W. Hickenlooper":

"Overall benefit to society: 50"
"Immigration: 50"
"Healthcare: 49"
"Energy: 48"
"Technology: 47"
"Wildlife and forest management: 44"
"Social equity: 44"
"Environmental management and climate change: 43"
"Public lands and natural resources: 42"
"Education: 38"
"Agriculture and food: 37"
"Foreign relations: 37"
"Transportation: 36"
"Economics and commerce: 35"
"Crime and law enforcement: 33"
"National defense: 33"
"Housing: 30"
"Government: 28"

Hmm, wonder how it came up with those numbers?

"Senator John W. Hickenlooper has demonstrated a strong commitment to environmental management, energy innovation, and social equity through his recent legislative efforts. Notably, he sponsored the 'Reforestation, Nurseries, and Genetic Resources Support Act of 2024,' which aims to enhance reforestation efforts by providing financial and technical support to nurseries and seed orchards. This bill is expected to significantly benefit environmental management and climate change mitigation. Additionally, his sponsorship of the 'BIG WIRES Act' underscores his dedication to modernizing the US electric grid, promoting energy resilience, and integrating renewable energy sources, which are crucial for sustainable development."

"In the realm of social equity..."

I'm going to stop there because it goes on for 2 more paragraphs. Then, after that, is a big list of 218 bills. Each bill has a grade; almost all are "A" and the lowest is a "C".

For comparison, I clicked on "Lauren Boebert":

"Overall benefit to society: -11"
"Agriculture and food: 11"
"National defense: 8"
"Energy: 7"
"Housing: 4"
"Transportation: 3"
"Technology: 3"
"Government: 2"
"Economics and commerce: 1"
"Crime and law enforcement: -1"
"Wildlife and forest management: -13"
"Foreign relations: -13"
"Education: -13"
"Public lands and natural resources: -14"
"Healthcare: -15"
"Social equity: -18"
"Environmental management and climate change: -26"
"Immigration: -29"

"Representative Lauren Boebert's legislative actions reveal a troubling pattern of prioritizing divisive and regressive policies over constructive and inclusive governance. Her support for the 'Withdrawal from the United Nations Framework Convention on Climate Change' and the 'WHO Withdrawal Act' underscores a disregard for international cooperation and global health, potentially isolating the US from critical global initiatives."

"Boebert's sponsorship of the 'Build the Wall and Deport Them All Act' and the 'Mass Immigration Reduction Act of 2024' highlights a harsh stance on immigration that could exacerbate social inequities and strain foreign relations. ..."

I'm going to stop there but it goes on. Under "Bill History", there are 279 bills, almost all of which are graded either "D" or "F".

I tried clicking on a couple of bills. For John Hickenlooper, I clicked "Reproductive Freedom for Women Act":

"Overall benefit to society: 60"
"Social equity: 80"
"Healthcare: 70"
"Crime and law enforcement: 30"
"Economics and commerce: 20"
"Government: 10"

"The Reproductive Freedom for Women Act, introduced in the Senate, seeks to address the repercussions of the Supreme Court's decision in Dobbs v. Jackson, which significantly altered the legal landscape for abortion rights in the United States. The bill explicitly states Congress's support for protecting access to abortion and other reproductive health care services. It aims to restore the protections that were enshrined in the landmark Roe v. Wade decision, which had previously guaranteed a woman's right to choose an abortion. The high-level goals of the bill are to ensure that women have the freedom to make decisions about their reproductive health without undue governmental interference."

It goes on for 4 more paragraphs.

For Lauren Boebert, I clicked "No User Fees for Gun Owners Act":

"Overall benefit to society: -30"
"Government: -10"
"Economics and commerce: -20"
"Social equity: -30"
"Crime and law enforcement: -40"

"The 'No User Fees for Gun Owners Act' seeks to amend Section 927 of Title 18 of the United States Code and Part I of Subchapter B of Chapter 53 of the Internal Revenue Code of 1986. The primary goal of the bill is to prevent state and local governments from imposing any form of liability insurance, taxes, or user fees specifically as conditions for the ownership, manufacture, importation, acquisition, transfer, or continued possession of firearms and ammunition."

It goes on for 6 more paragraphs.

It looks to me like, if you're a liberal/Democrat, you can just use this website as is. If you're a conservative/Republican, at first glance it looks like you can invert the letter grades and flip the signs on the number scores. But giving the matter more thought, it occurred to me that if the website assumes "liberal" values, then bad grades/negative numbers may just mean opposition to liberal values; they don't necessarily tell you what values the politician or bill is for. In other words, if you built comparable systems assuming conservative or libertarian values, you wouldn't necessarily get the inverse of this system. Your thoughts?
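The naive inversion idea above can be sketched in a few lines. The grade mapping and function name here are my own illustration, not anything PoliScore provides; the point is just that the transform only expresses opposition to the site's assumed values, so it can't tell you what a politician is actually for:

```python
# A naive "inversion" of PoliScore-style ratings, as discussed above.
# Letter grades flip across the scale and numeric scores negate sign.
# This captures *opposition* to the site's assumed values only; it says
# nothing about what values the politician or bill is *for*.

GRADE_FLIP = {"A": "F", "B": "D", "C": "C", "D": "B", "F": "A"}

def invert_rating(grade: str, score: int) -> tuple[str, int]:
    """Flip a letter grade and negate a numeric score."""
    return GRADE_FLIP[grade], -score

print(invert_rating("A", 50))   # ('F', -50)
print(invert_rating("F", -11))  # ('A', 11)
```

A system built from conservative or libertarian premises would grade each bill against its own criteria from scratch, not apply a mechanical flip like this one.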

The AI-generated summaries of every bill, alongside the easy-to-navigate system of listing them under their sponsors/cosponsors, may be the most valuable aspect of this site. It wouldn't be too hard to check in on a regular basis to see what bills your elected representatives are sponsoring/cosponsoring and get a general sense of what they're about.

I won't comment on the insanity of having a society with more laws than can fit in any human brain while expecting all laws to be obeyed. Oh, whoops. It looks like this site lists all sponsored bills, whether or not they eventually get signed into law, so a bill appearing on this site doesn't (necessarily) mean you have to obey it.

Legislators - PoliScore: non-partisan political rating service

#solidstatelife #ai #genai #llms #domesticpolitics