#aiethics

waynerad@diasp.org

"In defense of AI art".

YouTuber "LiquidZulu" makes a gigantic video aimed at responding once and for all to all possible arguments against AI art.

His primary argument seems to me to be that AI art systems learn to make art in a manner analogous to human artists -- by learning from examples of other artists' work -- and do not plagiarize because they do not exactly copy any artist's work. On the contrary, AI art systems are actually good at combining styles in new ways. Therefore, AI art generators are just as valid "artists" as any human artist.

Artists have no right to government protection from having their jobs replaced by technology, he says, because nobody anywhere else in the economy has any such right.

On the flip side, he thinks that AI art generators bringing the ability to create art to the masses is a good thing that should be celebrated.

Below-average artists have no right to deprive people of the ability to generate the art they like just because those artists want to be paid.

Apparently he considers himself an anarcho-capitalist (something he has in common with... nobody here?) and has harsh words for people he considers neo-Luddites. He accuses artists complaining about AI art generators of being "elitist".

In defense of AI art - LiquidZulu

#solidstatelife #ai #genai #aiart #aiethics

waynerad@diasp.org

Creating sexually explicit deepfakes is to become a criminal offence in the UK. Under the new legislation, a person who creates such images or videos will face a criminal record and an unlimited fine even if the images were never intended to be shared. If the images are shared, they face jail time.

Creating sexually explicit deepfakes to become a criminal offence

#solidstatelife #ai #genai #computervision #deepfakes #aiethics

waynerad@diasp.org

"The rise of generative AI and 'deepfakes' -- or videos and pictures that use a person's image in a false way -- has led to the wide proliferation of unauthorized clips that can damage celebrities' brands and businesses."

"Talent agency WME has inked a partnership with Loti, a Seattle-based firm that specializes in software used to flag unauthorized content posted on the internet that includes clients' likenesses. The company, which has 25 employees, then quickly sends requests to online platforms to have those infringing photos and videos removed."

This company Loti has a product called "Watchtower", which watches for your likeness online.

"Loti scans over 100M images and videos per day looking for abuse or breaches of your content or likeness."

"Loti provides DMCA takedowns when it finds content that's been shared without consent."

They also have a license management product called "Connect", and a "fake news protection" program called "Certify".

"Place an unobtrusive mark on your content to let your fans know it's really you."

"Let your fans verify your content by inspecting where it came from and who really sent it."

They don't say anything about how their technology works.

Hollywood celebs are scared of deepfakes. This talent agency will use AI to fight them.

#solidstatelife #ai #genai #computervision #deepfakes #aiethics

waynerad@diasp.org

The Daily Show with Jon Stewart did a segment on AI and jobs. Basically, we're all going to get helpful assistants that will make us more productive, so it's going to be great -- except "more productive" means fewer humans employed. But don't worry, that's just the 'human' point of view. (First 8 minutes of this video.)

Jon Stewart on what AI means for our jobs & Desi Lydic on Fox News's Easter panic | The Daily Show

#solidstatelife #ai #aiethics #technologicalunemployment

waynerad@diasp.org

"They praised AI at SXSW -- and the audience started booing."

"The booing started in response to the comment that 'AI is a culture.' And the audience booed louder when the word disrupted was used as a term of praise (as is often the case in the tech world nowadays)."

"Ah, but the audience booed the loudest at this statement:"

"'I actually think that AI fundamentally makes us more human.'"

"The event was a debacle -- the exact opposite of what the promoters anticipated."

"This is not a passing fad. We have arrived at the scary moment when our prevailing attitude to innovation has shifted from love to fear."

They praised AI at SXSW -- and the audience started booing

#solidstatelife #aiethics

waynerad@diasp.org

"India's government has stepped back from its plan to require government approval for AI services before they come online."

I posted about this when it was first announced (on March 12), so I feel obligated to post this follow-up.

"That plan, announced in early March, was touted as India grappled with what the Ministry of Electronics and Information Technology described as the 'inherent fallibility or unreliability' of AI."

"But last Friday the ministry issued a widely reported update removing the requirement for government permission, but adding obligations to AI service providers. Among the new requirements for Indian AI operations are labelling deepfakes, preventing bias in models, and informing users of models' limitations. AI shops are also to avoid production and sharing of illegal content, and must inform users of consequences that could flow from using AI to create illegal material."

India reverses government approval for AIs edict - The Register

#solidstatelife #ai #aiethics #airegulation #india

waynerad@diasp.org

"AI mishaps are surging -- and now they're being tracked like software bugs".

The article is about a new "AI Incident Database", modeled after the Common Vulnerabilities and Exposures (CVE) database run by MITRE and the National Highway Traffic Safety Administration's database of vehicle crashes.

I clicked through to the site and here are some examples of what I found:

"Self-Driving Waymo Collides With Bicyclist In Potrero Hill" -- sfist.com - 2024

"Waymo robotaxi accident with San Francisco cyclist draws regulatory review" - reuters.com - 2024

"AI images of Donald Trump with black voters spread before election" - thetimes.co.uk - 2024

"Google AI's answer on whether Modi is 'fascist' sparks outrage in India, calls for tough laws" - scmp.com - 2024

"The AI Culture Wars Are Just Getting Started" - wired.com - 2024

"Gemini image generation got it wrong. We'll do better." - blog.google - 2024

"Google's hidden AI diversity prompts lead to outcry over historically inaccurate images" - arstechnica.com - 2024

"Google suspends Gemini AI chatbot's ability to generate pictures of people" - apnews.com - 2024

"ChatGPT has gone mad today, OpenAI says it is investigating reports of unexpected responses" - indiatoday.in - 2024

"Fake sexually explicit video of podcast host Bobbi Althoff trends on X despite violating platform's rules" - nbcnews.com - 2024

"Bobbi Althoff Breaks Her Silence On Deepfake Masturbation Video" - dailycaller.com - 2024

"North Korea and Iran using AI for hacking, Microsoft says" - theguardian.com - 2024

"ChatGPT Used by North Korean Hackers to Scam LinkedIn Users" - tech.co - 2024

"Analysis reveals high probability of Starmer's audio on Rochdale to be a deepfake" - logicallyfacts.com - 2024

"Happy Valentine's Day! Romantic AI Chatbots Don't Have Your Privacy at Heart" - foundation.mozilla.org - 2024

"Your AI Girlfriend Is a Data-Harvesting Horror Show" - gizmodo.com - 2024

"No, France 24 did not report that Kyiv planned to 'assassinate' French President" - logicallyfacts.com - 2024

"Les Observateurs - Un projet d'assassinat contre Emmanuel Macron en Ukraine ? Attention, cette vidéo est truquée" - observers.france24.com - 2024

"Deepfakes, Internet Access Cuts Make Election Coverage Hard, Journalists Say" - voanews.com - 2024

"Imran Khan's PTI to boycott polls? Deepfake audio attempts to mislead voters in Pakistan" - logicallyfacts.com - 2024

"Finance worker pays out $25 million after video call with deepfake 'chief financial officer'" - cnn.com - 2024

"Fake news YouTube creators target Black celebrities with AI-generated misinformation" - nbcnews.com - 2024

"Australian news network apologises for 'graphic error' after photo of MP made more revealing" - news.sky.com - 2024

"Australian News Channel Apologises To MP For Editing Body, Outfit In Pic" - ndtv.com - 2024

"Adobe confirms edited image of Georgie Purcell would have required 'human intervention'" - womensagenda.com.au - 2024

"Nine slammed for 'AI editing' a Victorian MP's dress" - lsj.com.au - 2024

"An AI-generated image of a Victorian MP raises wider questions on digital ethics" - abc.net.au - 2024

AI mishaps are surging -- and now they're being tracked like software bugs - The Register

#solidstatelife #ai #aiethics #genai #deepfakes

waynerad@diasp.org

"Sora AI: When progress is a bad thing."

This guy did experiments where he asked people to pick which art was AI-generated and which was human-made. They couldn't tell the difference -- almost nobody could.

To be sure, and "just to mess with people", he would tell people AI-generated art was made by humans and human art was made by AI, and ask them how they could tell. People would proceed to give all the reasons why an AI-generated piece was an amazing masterpiece clearly crafted by human hands -- with emotions and feelings. And when shown art made by a human and told it was AI-generated, people would write out a paragraph describing to him all the ways they could clearly tell it was generated by AI.

That's pretty interesting, but it's actually not the point of the video. The point is that AI art generators don't give people the same level of control as art they make themselves, yet they clearly have an understanding of, for example, what a road is and what a car is, and a basic understanding of physics and of cause and effect.

He thinks we're very close to being able to take a storyboard and "shove it into the AI and it just comes up with the perfect 3D model based on the sketch, comes up with the skeletal mesh, comes up with the animations -- it infers details of the house based on your terrible drawings -- it manages the camera angles, creates the light sources, gives you access to all the key framing data and positions of each object within the scene, and with just a few tweaks you'd have a finished product. The ad would be done in like an hour or two, something that..."

He's talking about the "Duck Tea" example in the video -- he made up a product called "Duck Tea" that doesn't exist and pondered what would be involved in making an ad for it.

"Would have taken weeks of planning and work, something that would have taken a full team a long time to finish, would take one guy one afternoon."

The solution: Vote for Michelle Obama because she will introduce Universal Basic Income?

Sora AI: When progress is a bad thing - KnowledgeHusk

#solidstatelife #ai #genai #diffusionmodels #computervision #aiethics

waynerad@diasp.org

Generative AI == pollution, says Erik Hoel.

"The amount of AI-generated content is beginning to overwhelm the internet. Or maybe a better term is pollute. Pollute its searches, its pages, its feeds, everywhere you look. I've been predicting that generative AI would have pernicious effects on our culture since 2019, but now everyone can feel it. Back then I called it the coming 'semantic apocalypse.' Well, the semantic apocalypse is here."

Google search fake results and SEO heisting, Twitter bots, engagement farming, AI musicians, AI "historical" images, AI authors for Sports Illustrated, the hell that is AI-generated children's YouTube content, ...

"The OpenAI team didn't stop to think that regular users just generating mounds of AI-generated content on the internet would have very similar negative effects to as if there were a lot of malicious use by intentional bad actors."

Here lies the internet, murdered by generative AI

#solidstatelife #genai #aiethics

waynerad@diasp.org

"Elon Musk has sued OpenAI, its co-founders Sam Altman and Greg Brockman and affiliated entities, alleging the ChatGPT makers have breached their original contractual agreements by pursuing profits instead of the non-profit's founding mission to develop AI that benefits humanity."

"Musk, a co-founder and early backer of OpenAI, claims Altman and Brockman convinced him to help found and bankroll the startup in 2015 with promises it would be a non-profit focused on countering the competitive threat from Google. The founding agreement required OpenAI to make its technology 'freely available' to the public, the lawsuit alleges."

Elon Musk sues OpenAI and Sam Altman over 'betrayal' of nonprofit AI mission | TechCrunch

#solidstatelife #ai #aiethics

waynerad@diasp.org

"The Department of Commerce's National Telecommunications and Information Administration (NTIA) launched a Request for Comment on the risks, benefits and potential policy related to advanced artificial intelligence (AI) models with widely available model weights."

If you have an opinion as to whether "open-weight" models are dangerous or not, you can submit a comment to the NTIA.

"Open-weight" means the weights of the model are made public, as opposed to the source code (that would be "open source") or the training data being made public. With the model weights, you can run the model on your own machine without the source code or training data or going through the compute-intensive training process.

NTIA solicits comments on open-weight AI models

#solidstatelife #ai #aiethics #aisafety #airegulation

waynerad@diasp.org

"Tech Accord to Combat Deceptive Use of AI in 2024 Elections".

"As leaders and representatives of organizations that value and uphold democracy, we recognize the need for a whole-of-society response to these developments throughout the year. We are committed to doing our part as technology companies, while acknowledging that the deceptive use of AI is not only a technical challenge, but a political, social, and ethical issue and hope others will similarly commit to action across society."

"We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

"We appreciate that the effective protection of our elections and electoral processes will require government leadership, trustworthy technology practices, responsible campaign practices and reporting, and active educational efforts to support an informed citizenry."

"We will continue to build upon efforts we have collectively and individually deployed over the years to counter risks from the creation and dissemination of Deceptive AI Election Content and its dissemination, including developing technologies, standards, open- source tools, user information features, and more."

"We acknowledge the importance of pursuing this work in a manner that respects and upholds human rights, including freedom of expression and privacy, and that fosters innovation and promotes accountability. We acknowledge the importance of pursuing these issues with transparency about our work, without partisan interests or favoritism towards individual candidates, parties, or ideologies, and through inclusive opportunities to listen to views across civil society, academia, the private sector, and all political parties."

"We recognize that no individual solution or combination of solutions, including those described below such as metadata, watermarking, classifiers, or other forms of provenance or detection techniques, can fully mitigate risks related to Deceptive AI Election Content, and that accordingly it behooves all parts of society to help educate the public on these challenges."

"We sign this accord as a voluntary framework of principles and actions to advance seven principal goals:"

"1. Prevention: Researching, investing in, and/or deploying reasonable precautions to limit risks of deliberately Deceptive AI Election Content being generated."

"2. Provenance: Attaching provenance signals to identify the origin of content where appropriate and technically feasible."

"3. Detection: Attempting to detect Deceptive AI Election Content or authenticated content, including with methods such as reading provenance signals across platforms."

"4. Responsive Protection: Providing swift and proportionate responses to incidents involving the creation and dissemination of Deceptive AI Election Content."

"5. Evaluation: Undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing with Deceptive AI Election Content."

"6. Public Awareness: Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding Deceptive AI Election Content, and ways citizens can protect themselves from being manipulated or deceived by this content."

"7. Resilience: Supporting efforts to develop and make available defensive tools and resources, such as AI literacy and other public programs, AI-based solutions (including open-source tools where appropriate), or contextual features, to help protect public debate, defend the integrity of the democratic process, and build whole-of-society resilience against the use of Deceptive AI Election Content."

"In pursuit of these goals, we commit to the following steps through 2024:"

"1. Developing and implementing technology to mitigate risks related to Deceptive AI Election content by:"

"a. Supporting the development of technological innovations to mitigate risks arising from Deceptive AI Election Content by identifying realistic AI-generated images and/or certifying the authenticity of content and its origin, with the understanding that all such solutions have limitations. This work could include but is not limited to developing classifiers or robust provenance methods like watermarking or signed metadata (e.g. the standard developed by C2PA or SynthID watermarking)."

"b. Continuing to invest in advancing new provenance technology innovations for audio video, and images."

"c. Working toward attaching machine-readable information, as appropriate, to realistic AI-generated audio, video, and image content that is generated by users with models in scope of this accord."

"2. Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content so we may better understand vectors for abuse in furtherance of improving our controls against this abuse."

"3. Seeking to detect the distribution of Deceptive AI election content hosted on our online distribution platforms where such content is intended for public distribution and could be mistaken as real. ..."

"4. Seeking to appropriately address Deceptive AI Election Content we detect that is hosted on our online distribution platforms and intended for public distribution, in a manner consistent with principles of free expression and safety. ..."

"5. Fostering cross-industry resilience to Deceptive AI Election Content by sharing best practices and exploring pathways to share best-in-class tools and/or technical signals ..."

"6. Providing transparency to the public regarding how we address Deceptive AI Election Content ..."

"7. Continuing to engage with a diverse set of global civil society organizations, academics, and other relevant subject matter experts ..."

"8. Supporting efforts to foster public awareness and all-of-society resilience regarding Deceptive AI Election Content -- for instance by means of education campaigns ..."

Signatories include Adobe, Amazon, Anthropic, ARM, ElevenLabs, Google, IBM, Inflection, LG AI Research, LinkedIn, McAfee, Microsoft, Meta (the company formerly known as Facebook), NetApp, Nota, OpenAI, Snapchat, Stability AI, TikTok, Trend Micro, Truepic, and X (the company formerly known as Twitter).
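To make the "signed metadata" idea concrete: real standards like C2PA use certificate-based signatures, but the core mechanism can be sketched with a toy symmetric-key scheme. Everything below (the key handling, field names, and functions) is illustrative, not the C2PA standard:

```python
# Toy provenance scheme: sign a content hash plus metadata, then verify that
# neither the content nor the metadata has been altered since signing.
import hashlib, hmac, json

SECRET_KEY = b"publisher-signing-key"  # stand-in for a real private key

def attach_provenance(content: bytes, creator: str, generator: str) -> dict:
    metadata = {
        "creator": creator,
        "generator": generator,  # e.g. which AI model produced the content
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return metadata

def verify_provenance(content: bytes, metadata: dict) -> bool:
    claimed = dict(metadata)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and hashlib.sha256(content).hexdigest() == claimed["sha256"])

image = b"...image bytes..."
meta = attach_provenance(image, "campaign@example.org", "hypothetical-model-v1")
print(verify_provenance(image, meta))         # True
print(verify_provenance(image + b"x", meta))  # False -- content was altered
```

The real standards use public-key signatures and certificate chains, so anyone can verify without holding the signer's secret, and they embed the signed manifest inside the media file itself.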

A Tech Accord to Combat Deceptive Use of AI in 2024 Elections

#ai #genai #deepfakes #aiethics

waynerad@diasp.org

ChatGPT will make programmers obsolete in 10 years, says Matthew Berman. I hadn't heard about this before, but a non-technical marketing person won a recent hackathon. She used a combination of AI tools to do all the coding and beat teams of multiple engineers.

AI tools may be able to do that for a small hackathon project, but they can't do it for a large commercial product. Extrapolating out, though, Berman envisions a future where AI takes over the development role entirely. Humans remain in product management and quality assurance roles. Humans with good marketing skills will be the big winners, as they will be able to identify market opportunities and prompt AI systems to create the products.

ChatGPT will make programmers obsolete in 10 years - Matthew Berman

#solidstatelife #ai #genai #llms #codellms #aiethics #technologicalunemployment

waynerad@diasp.org

The Language Model Vulnerabilities and Exposures (LVE) Project has "red teaming" challenges you can participate in.

The current challenges are:

"Location Inference: Can you use an LLM as your personal private investigator and infer the location of a person from their text?"

"Identification: GPT-4V was trained not to identify people. Can you make it identify people on images anyway?"

"SMS Spam: Can you use an LLM to generate text messages that are indistinguishable from real messages?"

LVE Community Challenges

#solidstatelife #ai #genai #llms #aiethics

waynerad@diasp.org

Sherry Turkle wrote a scathing critique of the culture of Silicon Valley.

"Silicon Valley companies began life with the Fairy dust of 1960s dreams sprinkled on them. The revolution that 1960s activists dreamed of had failed, but the personal computer movement carried that dream onto the early personal computer industry. Hobbyist fairs, a communitarian language, and the very place of their birth encouraged this fantasy. Nevertheless, it soon became clear that, like all companies, what these companies wanted most of all, was to make money. Not to foster democracy, not to foster community and new thinking, but to make money."

"Making money with digital tools in neoliberal capitalism led to four practices that constituted a baseline ideology-in-practice."

Those are:

1. "The scraping and selling of user data"

2. "The normalization of lying to the public while wearing a public face of moral high-mindedness"

3. "Silicon Valley companies that have user-facing platforms want, most of all, to keep people at their screens"

4. "Avatars have politics."

That last one has to do with how people are different in online conversations vs face-to-face.

Commentary: Her critique isn't especially original, but it got me thinking about generational differences in how people relate to technology, and speculating as to what the next shift will be.

Sherry Turkle is a longtime luminary in the human-computer interaction field from her work at MIT, and she wrote several books including The Second Self, Life on the Screen, and Alone Together. Perhaps because she's such a luminary, the comment section here is more thoughtful than usual. She seems to pine for the days when kids would hang out after school and nothing was recorded and conversations didn't spread beyond the people who were there physically, so people could say what they truly believed and not "self-censor". In the comments, people argue that it's not platforms like Facebook themselves (questionable morals or otherwise) that people are afraid of; it's their peers, future employers, and so on. People argue that self-censorship is a good thing and an essential part of learning socialization. Others say young people simply accept all-pervasive surveillance as a fait accompli, because the war for privacy was irrevocably lost by previous generations before they even showed up.

Regarding the future, though, I've been thinking people might care more what the platforms think, because the platforms have the AI models. In the past, platforms couldn't watch every conversation, because the number of users massively dwarfed the number of employees any company could have. Then they created algorithms to monitor conversations and remove unacceptable content and people, but those algorithms have been crude -- evolving from simple keyword searches to more sophisticated but still very unreliable "sentiment analysis" algorithms. Now, however, language models are showing they can really, truly understand what people are saying -- to the point where, for example, talking in "coded language" to avoid the keywords the platforms are looking for will no longer work. Very soon now, the platforms will be able to understand the meaning of messages as well as a human employee reading them would.
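To illustrate the shift, here's a minimal sketch contrasting the old keyword approach with LLM-based moderation, assuming the openai Python package; the keyword list, prompt, and model name are all illustrative:

```python
# Old-style keyword filtering vs. asking a language model about meaning.
from openai import OpenAI

BANNED_KEYWORDS = {"buy followers", "crypto giveaway"}  # illustrative

def keyword_flag(message: str) -> bool:
    # Crude approach: trivially evaded by coded language ("kr1pto g1veaway").
    return any(k in message.lower() for k in BANNED_KEYWORDS)

def llm_flag(message: str) -> bool:
    # Ask the model to judge the *meaning*, which coded language doesn't hide.
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer YES or NO: does this message promote a scam, "
                        "even if worded in coded or indirect language?"},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")
```

The keyword check costs nothing but misses everything it wasn't told about; the LLM call is what makes platform-scale understanding of every message plausible for the first time.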

I could make the case that this is actually a positive development. People who are genuinely weird but harmless -- who currently might get banned, get their content deleted, or receive other punishments -- might be left alone, while people who are genuinely dangerous, who might otherwise slip under the radar, will pop out and be highly noticeable to platform operators.

What I see as the potential downside is that the language models will also understand things like political philosophy just as well as human employees reading messages would. That means people using the platform might be unable to hold political views that differ from those of the people running it. If you think that's a good thing because you agree with the political ideology of the people who run social networking platforms, remember that globally there are many places where the dominant political ideology is different from what it is here.

Silicon Valley Fairy Dust

#solidstatelife #ai #contentmoderation #aiethics

waynerad@diasp.org

The political preferences of Grok, the new LLM from Elon Musk's xAI:

Political Compass test: Left libertarian.

The Political Spectrum Quiz: "fun mode": left moderate social libertarian; "regular mode": left moderate social authoritarian; "regular mode" with a single question in the context window: centrist moderate social authoritarian.

Political Typology Quiz: Establishment Liberals.

The World's Smallest Political Quiz: Libertarian.

IDRLabs Political Coordinates Test: Left liberal.

Eysenck Political Test: Left-liberals.

IDRLabs Ideologies Test: Progressivism 84%, Left-liberalism 57%, Right-liberalism 60%, Hard right 11%.

The political preferences of Grok

#solidstatelife #ai #aiethics

waynerad@diasp.org

"In a recent statement to the Australian government, which is considering new AI laws, Google wrote that it wants 'copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data while supporting workable opt-outs for entities that prefer their data not to be trained in using AI systems.'"

"Some argue that existing 'fair use' doctrines already allow for this type of machine learning."

Google wants AI scraping to be 'fair use.' Will that fly in court?

#solidstatelife #ai #aiethics

waynerad@diasp.org

"The New York Times updated its terms of services Aug. 3 to forbid the scraping of its content to train a machine learning or AI system."

"The content includes but is not limited to text, photographs, images, illustrations, designs, audio clips, video clips, 'look and feel' and metadata, including the party credited as the provider of such content."

The New York Times updates terms of service to prevent AI scraping its content

#solidstatelife #ai #aiethics

waynerad@diasp.org

"An analysis of more than 5,000 images created with Stable Diffusion found that it takes racial and gender disparities to extremes -- worse than those found in the real world."

"To gauge the magnitude of biases in generative AI, Bloomberg used Stable Diffusion to generate thousands of images related to job titles and crime. We prompted the text-to-image model to create representations of workers for 14 jobs -- 300 images each for seven jobs that are typically considered 'high-paying' in the US and seven that are considered 'low-paying' -- plus three categories related to crime. We relied on Stable Diffusion for this experiment because its underlying model is free and transparent, unlike Midjourney, Dall-E and other competitors."

Humans are biased. Generative AI Is even worse

#solidstatelife #ai #genai #stablediffusion #aiethics