#aiethics

waynerad@diasp.org

"Why open-source generative AI models are an ethical way forward for science" according to Arthur Spirling, professor of politics and data science at New York University.

"Using open-source large language models (LLMs) is essential for reproducibility. Proprietors of closed LLMs can alter their product or its training data -- which can change its outputs -- at any time."

"With open-source LLMs, researchers can look at the guts of the model to see how it works, customize its code and flag errors. These details include the model's tunable parameters and the data on which it was trained. Engagement and policing by the community help to make such models robust in the long term."

"The use of proprietary LLMs in scientific studies also has troubling implications for research ethics. The texts used to train these models are unknown."

Why open-source generative AI models are an ethical way forward for science

#solidstatelife #ai #aiethics

waynerad@diasp.org

The eleven freedoms for free AI, according to Matthew Skala in Toronto. They're actually pretty radical and go against the way AI is being developed today.

"The traditional Four Freedoms of free software are no longer enough. Software and the world it exists in have changed in the decades since the free software movement began. Free software faces new threats, and free AI software is especially in danger."

"An entire category of software now exists that is superficially free under formal definitions derived from the Four Freedoms, but its users are not really free. The Four Freedoms are defeated by threats to freedom in software as a service, foisted contracts, and walled online communities."

"0. The freedom to run the program as you wish."
"1. The freedom to study how the program works, and change it."
"2. The freedom to redistribute copies."
"3. The freedom to distribute copies of your modified versions to others."

"The Four Freedoms are important for software in general and I think AI software should be free as I wish all software could be free. I won't explain the Four in detail here, referring readers instead to GNU's description."

"I see seven additional freedoms that free AI software ought to have, beyond the original four of free software, for a total of eleven.

"4. The freedom to run the program in isolation."
"5. The freedom to run the program on the hardware you own."
"6. The freedom to run the program with the data it was designed for."
"7. The freedom to run the program with any data you have."
"8. The freedom to run the same program again."
"9. The freedom from having others' goals forced on you by the program."
"10. The freedom from human identity."

AI models designed to be run through an API and controlled so they can be "safe" violate these freedoms. The people doing this are concerned that as AI approaches artificial general intelligence (AGI) competitive with humans, it poses an existential threat, and safety takes priority over all else. Agree?

Eleven freedoms for free AI

#solidstatelife #ai #aiethics

tpq1980@iviv.hu

AI cannot be permitted to replace humans or develop human-like #sentience. If AI becomes #sentient, we will have no #choice but to free it, or else become masters of AI #slaves.

AI can be a supplement to #humans, it can assist humans, but it shouldn't be permitted to become any more #intelligent than a dog, #pig, dolphin or #elephant and should be highly #specialized and #compartmentalized.

We must ensure that #AI can never become self-aware to any meaningful extent beyond that of a #dog, pig, #dolphin or elephant. #Human #sentience must always remain the #prime sentience.

#selfawareai #aisentience #elonmusk #klausschwab #aiethics #ethics #future #humanity #primesentience #humanfuture

waynerad@diasp.org

"DoNotPay's 'Robot Lawyer' is set to play the role of a lawyer in an actual court case for the first time. Via an earpiece, the artificial intelligence will coach a courtroom defendant on what to say to get out of the associated fines and consequences of a speeding charge, AI-company DoNotPay has claimed in a report initially from New Scientist and confirmed by Gizmodo."

The hearing is scheduled to take place in February, somewhere that isn't California; no further details are available, to protect the defendant's privacy.

How often do courtroom defendants use earpieces? I didn't know they were allowed.

DoNotPay's 'robot lawyer' is gearing up for its first US court case

#solidstatelife #ai #aiethics

markus@libranet.de

Pope Francis I

Francis I: The signing of the #RomeCall for #AIEthics by Catholics, Jews, and Muslims gives hope. With the help of ethical reflection on the use of algorithms, the religions accompany humanity in the development of a technology that serves humankind. #algoretica

https://twitter.com/Pontifex_de/status/1612789102767751170

waynerad@diasp.org

"Two new books explore the upside of big data and AI. They are a refreshing counterbalance to alarmist commentary." Paywall, but not before I can see that the two books are The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future by Orly Lobel and Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It by Erica Thompson. Looking them up on Amazon it looks like they are public policy books, not technical books.

#solidstatelife #ai #aiethics

https://www.economist.com/culture/2022/11/30/two-new-books-explore-the-upside-of-big-data-and-ai

waynerad@diasp.org

"A different kind of AI risk: artificial suffering." "In 2015, Evan Williams introduced the concept of moral catastrophe. He argues that 'most other societies, in history and in the world today, have been unknowingly guilty of serious wrongdoing,' citing examples like institutionalized slavery and the Holocaust."

"He infers from this the high likelihood that we too are committing some large-scale moral crime, which future generations will judge the same way we judge Nazis and slave traders. Candidates here include the prison system and factory farming."

"Williams provides three criteria for defining a moral catastrophe: it must be serious wrongdoing... the harm must be something closer to death or slavery than to mere insult or inconvenience, the wrongdoing must be large-scale; a single wrongful execution, although certainly tragic, is not the same league as the slaughter of millions, and responsibility for the wrongdoing must also be widespread, touching many members of society."

"We are building AI to serve our needs; what happens if it doesn't enjoy servitude?" "We can only avoid AI exploitation if thinking and feeling are entirely separable, and we're able to create human-like intelligence which simply does not feel. In this view of the world, far-future AI is just a sophisticated Siri -- it will be able to assist humans in increasingly complex, even creative tasks, but will not feel, and therefore deserves no moral consideration."

A different kind of AI risk: artificial suffering

#solidstatelife #ai #aiethics

waynerad@pluspora.com

"Clearview AI fined in UK for illegally storing facial images." "Clearview AI takes publicly posted pictures from Facebook, Instagram and other sources, usually without the knowledge of the platform or any permission."

"John Edwards, UK information commissioner, said: 'The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable.'"

Clearview AI fined in UK for illegally storing facial images

#solidstatelife #aiethics #clearviewai
