#deepfakes

waynerad@diasp.org

Creating sexually explicit deepfakes is to become a criminal offence in the UK. Under the new legislation, even if the images or videos were never intended to be shared, the person who created them will face a criminal record and an unlimited fine. If the images are shared, they face jail time.

Creating sexually explicit deepfakes to become a criminal offence

#solidstatelife #ai #genai #computervision #deepfakes #aiethics

waynerad@diasp.org

"The rise of generative AI and 'deepfakes' -- or videos and pictures that use a person's image in a false way -- has led to the wide proliferation of unauthorized clips that can damage celebrities' brands and businesses."

"Talent agency WME has inked a partnership with Loti, a Seattle-based firm that specializes in software used to flag unauthorized content posted on the internet that includes clients' likenesses. The company, which has 25 employees, then quickly sends requests to online platforms to have those infringing photos and videos removed."

This company, Loti, has a product called "Watchtower", which watches for your likeness online.

"Loti scans over 100M images and videos per day looking for abuse or breaches of your content or likeness."

"Loti provides DMCA takedowns when it finds content that's been shared without consent."

They also have a license management product called "Connect", and a "fake news protection" program called "Certify".

"Place an unobtrusive mark on your content to let your fans know it's really you."

"Let your fans verify your content by inspecting where it came from and who really sent it."

They don't say anything about how their technology works.
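Loti doesn't disclose its methods, but systems that match images and videos at this scale commonly rely on perceptual hashing: a compact fingerprint that stays stable under small edits like recompression or brightness changes. Below is a toy "average hash" sketch in pure Python, purely an illustration of the general technique, not Loti's actual approach (the pixel lists stand in for an already-downscaled 8x8 grayscale image).

```python
# Toy "average hash" (aHash): a common perceptual-hash building block for
# near-duplicate image matching. Illustration only -- not Loti's actual method.

def average_hash(pixels):
    """pixels: a list of 64 grayscale values (0-255), i.e. an image already
    downscaled to 8x8. Returns a 64-bit integer fingerprint: one bit per
    pixel, set when the pixel is brighter than the image's average."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means 'probably the same image'."""
    return bin(h1 ^ h2).count("1")

# A bright image with one dark corner, and a slightly brightened copy of it:
img = [200] * 60 + [10] * 4
copy = [210] * 60 + [20] * 4
assert hamming_distance(average_hash(img), average_hash(copy)) == 0
```

Because only the *relative* brightness pattern is hashed, the uniformly brightened copy produces the identical fingerprint, which is what lets a scanner recognize re-uploads that aren't byte-for-byte identical.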

Hollywood celebs are scared of deepfakes. This talent agency will use AI to fight them.

#solidstatelife #ai #genai #computervision #deepfakes #aiethics

waynerad@diasp.org

"AI mishaps are surging -- and now they're being tracked like software bugs".

The article is about a new "AI Incident Database", modeled after the Common Vulnerabilities and Exposures (CVE) database run by MITRE and the National Highway Traffic Safety Administration's database of vehicle crashes.

I clicked through to the site and here are some examples of what I found:

"Self-Driving Waymo Collides With Bicyclist In Potrero Hill" - sfist.com - 2024

"Waymo robotaxi accident with San Francisco cyclist draws regulatory review" - reuters.com - 2024

"AI images of Donald Trump with black voters spread before election" - thetimes.co.uk - 2024

"Google AI's answer on whether Modi is 'fascist' sparks outrage in India, calls for tough laws" - scmp.com - 2024

"The AI Culture Wars Are Just Getting Started" - wired.com - 2024

"Gemini image generation got it wrong. We'll do better." - blog.google - 2024

"Google's hidden AI diversity prompts lead to outcry over historically inaccurate images" - arstechnica.com - 2024

"Google suspends Gemini AI chatbot's ability to generate pictures of people" - apnews.com - 2024

"ChatGPT has gone mad today, OpenAI says it is investigating reports of unexpected responses" - indiatoday.in - 2024

"Fake sexually explicit video of podcast host Bobbi Althoff trends on X despite violating platform's rules" - nbcnews.com - 2024

"Bobbi Althoff Breaks Her Silence On Deepfake Masturbation Video" - dailycaller.com - 2024

"North Korea and Iran using AI for hacking, Microsoft says" - theguardian.com - 2024

"ChatGPT Used by North Korean Hackers to Scam LinkedIn Users" - tech.co - 2024

"Analysis reveals high probability of Starmer's audio on Rochdale to be a deepfake" - logicallyfacts.com - 2024

"Happy Valentine's Day! Romantic AI Chatbots Don't Have Your Privacy at Heart" - foundation.mozilla.org - 2024

"Your AI Girlfriend Is a Data-Harvesting Horror Show" - gizmodo.com - 2024

"No, France 24 did not report that Kyiv planned to 'assassinate' French President" - logicallyfacts.com - 2024

"Les Observateurs - Un projet d'assassinat contre Emmanuel Macron en Ukraine ? Attention, cette vidéo est truquée" - observers.france24.com - 2024

"Deepfakes, Internet Access Cuts Make Election Coverage Hard, Journalists Say" - voanews.com - 2024

"Imran Khan's PTI to boycott polls? Deepfake audio attempts to mislead voters in Pakistan" - logicallyfacts.com - 2024

"Finance worker pays out $25 million after video call with deepfake 'chief financial officer'" - cnn.com - 2024

"Fake news YouTube creators target Black celebrities with AI-generated misinformation" - nbcnews.com - 2024

"Australian news network apologises for 'graphic error' after photo of MP made more revealing" - news.sky.com - 2024

"Australian News Channel Apologises To MP For Editing Body, Outfit In Pic" - ndtv.com - 2024

"Adobe confirms edited image of Georgie Purcell would have required 'human intervention'" - womensagenda.com.au - 2024

"Nine slammed for 'AI editing' a Victorian MP's dress" - lsj.com.au - 2024

"An AI-generated image of a Victorian MP raises wider questions on digital ethics" - abc.net.au - 2024

AI mishaps are surging -- and now they're being tracked like software bugs - The Register

#solidstatelife #ai #aiethics #genai #deepfakes

waynerad@diasp.org

The claim is being made that a scientific research paper in which every figure was AI-generated passed peer review.

"Article published a couple of days ago. Every figure in the article is AI generated and totally incomprehensible. This passed 'peer-review'."

#solidstatelife #ai #genai #computervision #deepfakes

waynerad@diasp.org

"Tech Accord to Combat Deceptive Use of AI in 2024 Elections".

"As leaders and representatives of organizations that value and uphold democracy, we recognize the need for a whole-of-society response to these developments throughout the year. We are committed to doing our part as technology companies, while acknowledging that the deceptive use of AI is not only a technical challenge, but a political, social, and ethical issue and hope others will similarly commit to action across society."

"We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

"We appreciate that the effective protection of our elections and electoral processes will require government leadership, trustworthy technology practices, responsible campaign practices and reporting, and active educational efforts to support an informed citizenry."

"We will continue to build upon efforts we have collectively and individually deployed over the years to counter risks from the creation of Deceptive AI Election Content and its dissemination, including developing technologies, standards, open-source tools, user information features, and more."

"We acknowledge the importance of pursuing this work in a manner that respects and upholds human rights, including freedom of expression and privacy, and that fosters innovation and promotes accountability. We acknowledge the importance of pursuing these issues with transparency about our work, without partisan interests or favoritism towards individual candidates, parties, or ideologies, and through inclusive opportunities to listen to views across civil society, academia, the private sector, and all political parties."

"We recognize that no individual solution or combination of solutions, including those described below such as metadata, watermarking, classifiers, or other forms of provenance or detection techniques, can fully mitigate risks related to Deceptive AI Election Content, and that accordingly it behooves all parts of society to help educate the public on these challenges."

"We sign this accord as a voluntary framework of principles and actions to advance seven principal goals:"

"1. Prevention: Researching, investing in, and/or deploying reasonable precautions to limit risks of deliberately Deceptive AI Election Content being generated."

"2. Provenance: Attaching provenance signals to identify the origin of content where appropriate and technically feasible."

"3. Detection: Attempting to detect Deceptive AI Election Content or authenticated content, including with methods such as reading provenance signals across platforms."

"4. Responsive Protection: Providing swift and proportionate responses to incidents involving the creation and dissemination of Deceptive AI Election Content."

"5. Evaluation: Undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing with Deceptive AI Election Content."

"6. Public Awareness: Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding Deceptive AI Election Content, and ways citizens can protect themselves from being manipulated or deceived by this content."

"7. Resilience: Supporting efforts to develop and make available defensive tools and resources, such as AI literacy and other public programs, AI-based solutions (including open-source tools where appropriate), or contextual features, to help protect public debate, defend the integrity of the democratic process, and build whole-of-society resilience against the use of Deceptive AI Election Content."

"In pursuit of these goals, we commit to the following steps through 2024:"

"1. Developing and implementing technology to mitigate risks related to Deceptive AI Election content by:"

"a. Supporting the development of technological innovations to mitigate risks arising from Deceptive AI Election Content by identifying realistic AI-generated images and/or certifying the authenticity of content and its origin, with the understanding that all such solutions have limitations. This work could include but is not limited to developing classifiers or robust provenance methods like watermarking or signed metadata (e.g. the standard developed by C2PA or SynthID watermarking)."

"b. Continuing to invest in advancing new provenance technology innovations for audio, video, and images."

"c. Working toward attaching machine-readable information, as appropriate, to realistic AI-generated audio, video, and image content that is generated by users with models in scope of this accord."

"2. Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content so we may better understand vectors for abuse in furtherance of improving our controls against this abuse."

"3. Seeking to detect the distribution of Deceptive AI election content hosted on our online distribution platforms where such content is intended for public distribution and could be mistaken as real. ..."

"4. Seeking to appropriately address Deceptive AI Election Content we detect that is hosted on our online distribution platforms and intended for public distribution, in a manner consistent with principles of free expression and safety. ..."

"5. Fostering cross-industry resilience to Deceptive AI Election Content by sharing best practices and exploring pathways to share best-in-class tools and/or technical signals ..."

"6. Providing transparency to the public regarding how we address Deceptive AI Election Content ..."

"7. Continuing to engage with a diverse set of global civil society organizations, academics, and other relevant subject matter experts ..."

"8. Supporting efforts to foster public awareness and all-of-society resilience regarding Deceptive AI Election Content -- for instance by means of education campaigns ..."
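The accord leans heavily on "provenance signals" and watermarking (it names C2PA and SynthID) as machine-readable marks attached to AI-generated content. As a purely conceptual illustration of what a machine-readable mark is, here is a naive least-significant-bit scheme in Python. Real provenance watermarks like SynthID are learned and robust to editing; this toy version would not survive recompression and is only meant to make the idea concrete. All names here are hypothetical.

```python
# Toy illustration of "machine-readable information attached to content":
# hide a short provenance tag in the least-significant bits of pixel values.
# Real systems (e.g. SynthID) use robust, learned watermarks; this naive
# LSB scheme exists only to make the concept concrete.

def embed_tag(pixels, tag):
    """Write each bit of `tag` (bytes, LSB-first) into the low bit of
    successive grayscale pixel values, leaving the image visually unchanged."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for tag"
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def read_tag(pixels, n_bytes):
    """Recover n_bytes of tag from the pixel low bits."""
    tag = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[j * 8 + i] & 1) << i
        tag.append(byte)
    return bytes(tag)

pixels = [128] * 64                      # a flat 8x8 grayscale "image"
marked = embed_tag(pixels, b"AI:gen1")   # hypothetical provenance tag
assert read_tag(marked, 7) == b"AI:gen1"
```

The gap between this sketch and a deployable watermark, surviving crops, re-encodes, and screenshots, is exactly why the accord concedes that no single technique "can fully mitigate risks".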

Signatories include Adobe, Amazon, Anthropic, ARM, ElevenLabs, Google, IBM, Inflection, LG AI Research, LinkedIn, McAfee, Microsoft, Meta (the company formerly known as Facebook), NetApp, Nota, OpenAI, Snapchat, Stability AI, TikTok, Trend Micro, Truepic, and X (the company formerly known as Twitter).

A Tech Accord to Combat Deceptive Use of AI in 2024 Elections

#ai #genai #deepfakes #aiethics

nowisthetime@pod.automat.click

https://www.sott.net/article/488986-Audio-cloning-can-take-over-a-phone-call-in-real-time-without-the-speakers-knowing

Generative #AI could be #listening to your #phone #calls and #hijacking them with fake biometric #audio for #fraud or #manipulation purposes, according to new research published by Security Intelligence. In the wake of a Hong Kong fraud case that saw an employee transfer US$25 million in funds to five bank accounts after a virtual meeting with what turned out to be audio-video #deepfakes of senior management, the biometrics and digital identity world is on high alert, and the threats are growing more sophisticated by the day.

A blog post by Chenta Lee, chief architect of threat intelligence at IBM Security, breaks down how researchers from IBM X-Force successfully intercepted and covertly hijacked a live conversation by using an LLM to understand the conversation and manipulate it for malicious purposes - without the speakers knowing it was happening.

"Alarmingly," writes Lee, "it was fairly easy to construct this highly intrusive capability, creating a significant concern about its use by an attacker driven by monetary incentives and limited to no lawful boundary."

waynerad@diasp.org

"It's now illegal in the US for robocallers to use AI-generated voices, thanks to a new ruling by the Federal Communications Commission."

"In a unanimous decision, the FCC expands the Telephone Consumer Protection Act, or TCPA, to cover robocall scams that contain AI voice clones. The new rule goes into effect immediately, allowing the commission to fine companies and block providers for making these types of calls."

AI-generated voices in robocalls are now illegal

#solidstatelife #ai #genai #deepfakes

waynerad@diasp.org

"George Carlin estate sues creators of AI-generated comedy special in key lawsuit over stars' likenesses."

Once again I'm bringing you all news from The Hollywood Reporter. Is this going to become a regular thing?

"The lawsuit, filed in California federal court Thursday, accuses the creators of the special of utilizing without consent or compensation George Carlin's entire body of work consisting of five decades of comedy routines to train an AI chatbot, which wrote the episode's script. It also takes issue with using his voice and likeness for promotional purposes. The complaint seeks a court order for immediate removal of the special, as well as unspecified damages."

George Carlin estate sues creators of AI-generated comedy special in key lawsuit over stars’ likenesses

#solidstatelife #ai #genai #llms #deepfakes

anonymiss@despora.de

#StephenFry on How to use #AI as a force for good

source: https://www.youtube.com/watch?v=zZfS8uk70Zc

How do we deal with the coming flood of technological progress, which will not stop at current developments such as big-data-based, algorithmically generated statistical models with their astonishing capabilities and failures - i.e. current #AI systems - but will lead to a further acceleration of innovation as computing performance increases? Today we contemplate an epistemic #crisis of world #knowledge in times of #deepfakes, #surveillance, and AI-enabled bioweapons, while brain-computer interfaces and quantum computers are already appearing on the horizon. At the same time, #climate #change is probably the most pressing of all crises, already showing its first effects for several years and setting new records with constantly rising #emissions. In short, how can we ensure that the coming real-life version of what is commonly called the "Singularity" turns out for the good of #humanity?


#singularity #future #technology #ethics #moral #humanrights #politics #economy #video #philosophy

deutschlandfunk@squeet.me

Deepfakes as a scam

Deepfakes: media credibility in danger

Deepfakes threaten the credibility of the media. AI researcher Maria Pawelec therefore calls for action from politicians, platforms, and the audience. #MEDIEN #Deepfakes #KI
Deepfakes as a scam

aktionfsa@diasp.eu

10.08.2023 Who is allowed to make money with my voice?

This strike is something big

The strike by the screenwriters, the actors, and ... in Hollywood is more than the usual skirmishing over royalties. First, this time a great many professions are actually represented, and second, it is also about a "side issue" that threatens to become a problem for many people in our society (it is feared that 300 million will lose their jobs to an AI): artificial intelligence.

One would expect that in a labor dispute the companies would have to negotiate with the unions. But first the companies would like to find out what offers AI can make them and what they may have to fear. Google and the media group Universal are now negotiating exactly that.

Nzz.ch names a core problem: songs generated with artificial intelligence using the voices of stars are a problem for the music industry.

Although the music industry closed the last quarter with pleasing numbers as well, the share of artificially generated music is steadily increasing. The record companies are now negotiating with Google over ways to permit the use of artists' voices and melodies in AI-generated songs through a licensing agreement. In any case, the record companies lack the clout to fight the generation of songs with network blocks; "Zensursula" already failed at that 10 years ago (Singen mit Zensi Zensa Zensursula).

Besides, the record companies' goal is to make money, and they are not averse to using AI for that, as long as the revenue, or at least a substantial share of it, ends up in their pockets. That is why they want to be on board before the ship really gets moving. The artists and the other strikers have so far been left out.

More on this at https://www.nzz.ch/wirtschaft/wer-darf-mit-ki-stimmen-von-musikstars-geld-verdienen-google-und-universal-verhandeln-ueber-eine-loesung-ld.1750844
Category[21]: Our topics in the press. Short link to this page: a-fsa.de/d/3vA
Link to this page: https://www.aktion-freiheitstattangst.org/de/articles/8487-20230810-wer-darf-mit-meiner-stimme-geld-verdienen.htm
Link in the Tor network: http://a6pdp5vmmw4zm5tifrc3qo2pyz7mvnk4zzimpesnckvzinubzmioddad.onion/de/articles/8487-20230810-wer-darf-mit-meiner-stimme-geld-verdienen.html
Tags: #Hollywood #Streik #Autoren #Schauspieler #KI #AI #künstlicheIntelligenz #Google #Microsoft #Meta #Big5 #Transparenz #Informationsfreiheit #Internetsperren #Netzneutralität #OpenSource #Gewerkschaft #Mitbestimmung #Koalitionsfreiheit #aSozialeNetzwerke #DeepFakes

waynerad@diasp.org

The Coalition for Content Provenance and Authenticity (C2PA) is a specification, spanning both hardware and software, for attaching metadata to every media file (images, video, audio, etc.) and cryptographically signing it. Altering the media voids the cryptographic signature; you can still edit it, but the idea is that each alteration is in turn also logged and signed. Hardware like cameras and software like Photoshop would all do this, enabling people to tell whether something was generated or altered by AI. The "provenance information" tells the history of the media and indicates its authenticity. If it authentically came from InfoWars, you'll know that.
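The core mechanism, binding metadata to the exact bytes of the media with a signature, can be sketched in a few lines. Real C2PA manifests use X.509 certificate chains and COSE signatures; this toy version substitutes an HMAC with a shared demo key purely for illustration, and the key, tool names, and field names are all hypothetical.

```python
# Minimal sketch of the C2PA idea: bind metadata to a media file's bytes
# via a hash, then sign the combination, so any alteration of the bytes
# invalidates the claim. Real C2PA uses X.509/COSE signatures, not HMAC;
# the shared key below is a stand-in used only for this illustration.
import hashlib, hmac, json

KEY = b"demo-signing-key"  # stand-in for a camera's or tool's private key

def sign_asset(media_bytes, metadata):
    """Produce a 'manifest': the metadata plus a hash of the media, signed."""
    claim = dict(metadata, sha256=hashlib.sha256(media_bytes).hexdigest())
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "sig": hmac.new(KEY, payload, hashlib.sha256).hexdigest()}

def verify_asset(media_bytes, manifest):
    """Check the signature AND that the media still matches the signed hash."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        manifest["sig"], hmac.new(KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = manifest["claim"]["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return ok_sig and ok_hash

photo = b"\x89PNG...raw image bytes..."
manifest = sign_asset(photo, {"tool": "camera-model-x", "ai_generated": False})
assert verify_asset(photo, manifest)             # untouched media verifies
assert not verify_asset(photo + b"!", manifest)  # any alteration voids it
```

The chained-edit idea in C2PA amounts to each editing tool producing a new manifest of this kind that references the previous one, so the full edit history stays verifiable.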

The future of "truth" on the Internet

#solidstatelife #aiart #deepfakes #cryptography #digitalsignatures

aktionfsa@diasp.eu

30.05.2023 AI fakes voices

AI can imitate voices quite accurately

There are still people who rely on their unique biometric features being so secure that they use them to unlock their phone, their car, or their front door. We warned against this back when the capabilities of artificial intelligence (AI) were still very modest.

By now an AI raps like Eminem, producing racist lyrics in the process, and Emma Watson, the well-behaved schoolmate from Harry Potter, supposedly reads aloud from Hitler's "Mein Kampf". "Those were the days, when tech visionaries proclaimed the voice revolution and declared the voice a forgery-proof password" ... "With the help of AI, anyone can now write songs and clone voices. Algorithms recognize specific voice characteristics and acoustic patterns in audio files and reproduce them with the help of a statistical model," writes nzz.ch.

This means that in the future we cannot believe a word a politician tells us in a video, because it could be a deepfake. Life is getting harder not only for journalists. Especially in the run-up to elections, we have to expect to become victims of manipulation.

Once again we think back wistfully to the beginnings of the Internet and the "Magna Charta for the Knowledge Age" or the "Declaration of the Independence of Cyberspace" from the early 1990s, which we recalled just 2 days ago (Aneignung von allem durch Wenige).

More on this at https://www.nzz.ch/feuilleton/wer-spricht-da-ki-faelscht-stimmen-so-dass-niemand-etwas-merkt-ld.1733762
Category[21]: Our topics in the press. Short link to this page: a-fsa.de/d/3uj
Link to this page: https://www.aktion-freiheitstattangst.org/de/articles/8413-20230530-ki-faelscht-stimmen.htm
Link in the Tor network: http://a6pdp5vmmw4zm5tifrc3qo2pyz7mvnk4zzimpesnckvzinubzmioddad.onion/de/articles/8413-20230530-ki-faelscht-stimmen.html
Tags: #AI #KI #künstlicheIntelligenz #Stimmerkennung #Biometrie #Passworte #IrisScan #DeepFakes #EmmaWatson #Enimen #Transparenz #Informationsfreiheit #Verhaltensänderung #Manipulation #Wahlen #Beeinflussung

waynerad@diasp.org

"Tencent Cloud announces Deepfakes-as-a-Service for $145".

Tencent Cloud has announced it's offering a digital human production platform -- essentially Deepfakes-as-a-Service (DFaaS).

"According to Chinese media and confirmed to The Reg by Tencent, the service needs just three minutes of live-action video and 100 spoken sentences -- and a $145 fee -- to create a high-definition digital human."

"Gestating the creation requires just 24 hours."

Tencent Cloud announces Deepfakes-as-a-Service for $145

#solidstatelife #deepfakes

waynerad@diasp.org

"'I've got your daughter': Mom warns of terrifying AI voice cloning scam that faked kidnapping."

"And you have no doubt in your mind that that was her voice?"

"Oh, 100% her voice. 100% her voice. It was never a question of, you know, who is this? It was completely her voice, it was her inflection, it was the way she would've cried. I never doubted for one second it was her."

A week after hearing about this idea on The AI Dilemma, here it is in real life.

#solidstatelife #ai #generativeai #deepfakes

https://www.nbc15.com/2023/04/10/ive-got-your-daughter-mom-warns-terrifying-ai-voice-cloning-scam-that-faked-kidnapping/