#deepfakes

gander22h@diasp.org

She was careful online, but this Toronto teen was still targeted with deepfake porn

This is an interesting problem, since anyone, anywhere in the world, can create nude deepfakes of anyone else, using any (fully clothed) photo as a starting point. In #Canada, at least, distributing these images of children is a crime, as they are child pornography, but in this case the image was not distributed, just emailed to the victim as part of what may have been a #phishing scheme. From what the story says, it looks like the police did nothing.

You might get the police to act if the image was of a child, it was posted online, they can identify who did it, and the person is in the same country; otherwise, forget it. Overall I suspect that police are not going to be effective in these cases. Our national facility, Cybertip.ca, received about 4,000 complaints about sexually explicit deepfakes in the past year, so I doubt they acted on many of them. It does seem from the article that there is nothing that can be done to prevent this, and hard as it may be, the best course is probably just to treat it like a scam phone call and ignore it.

I suspect we are going to see a lot more of this, particularly if it upsets people. Also this seems to be something that is going to victimize younger people far more than old people. There might yet be some advantage to getting old.

#CBC #News #internet #AI #deepfake #deepfakes

piratepartygr@societas.online

The age of deepfakes is here, and it is more dangerous than you imagine

Those of us who thought we had already seen the peak of disinformation and deception capabilities have seen nothing yet. The ability of Artificial Intelligence (AI) to produce text, image, and audio content that mimics almost 100% the style, appearance, and voice of real people (deepfakes) has already begun to create new problems.

Recent examples in Greece are the cases of fraudulent advertisements on Facebook that used the image and voice of Konstantinos Gourgoulianis, professor of Pulmonology and former rector of the University of Thessaly, and Nikolaos Kapravelos, director of the 2nd ICU of the "G. Papanikolaou" hospital, to promote products that have nothing to do with them and that may even pose a risk to public health.

When these men reported the scams committed against them to Meta (the company that develops Facebook) and to the Cybercrime Division (ΔΗΕ), the answers they received were, to put it mildly, unacceptable: Meta replied that it would not remove the fraudulent content because it "does not violate its terms of use," while the ΔΗΕ said that, because the fraudulent material was created abroad, they are "unable to locate the perpetrators." The answers of both responsible bodies - the company and the ΔΗΕ - are utterly insulting to our intelligence. All advertisements on Facebook, without exception, are published from verified accounts, and all that is needed is a request for that account information on a legal basis (e.g. fraud, etc.). Moreover, the fraudsters running these ads operate online stores. It is therefore obvious that both a legal basis and traceability of the perpetrators exist. Besides, what matters is not so much who made the video as who profits from its use.

The Pirate Party of Greece rejects out of hand the excuses of Meta and the ΔΗΕ. It calls on the competent authorities to do their job and to set in motion all the necessary procedures to identify the fraudsters and punish them. It seems the ΔΗΕ has not understood that this is not simply a "prank" but a well-organized, for-profit operation to defraud the public, one that may also endanger public health.

We have warned many times that such phenomena cannot be handled with this kind of sloppiness and indifference. We have warned that the use of AI also gives enormous capabilities to criminals. Unfortunately, legislation in the EU and in Greece very often moves in the wrong direction: instead of protecting citizens from state or corporate arbitrariness and from criminals, it creates loopholes for the intelligence services, which quickly become exploitable by criminals and foreign actors as well. Correspondingly, the state agencies that are supposed to protect citizens simply look for an excuse not to get involved. Finally, we stand beside these excellent doctors and will seek ways to assist them.

Source:
Βασίλης Ιγνατιάδης (2023). Θύματα ηλεκτρονικής απάτης Ν. Καπραβέλος και Κ. Γουργουλιάνης. [online] iatronet.gr. Available at: https://www.iatronet.gr/article/114454/thymata-hlektronikhs-apaths-n-kapravelos-kai-k-goyrgoylianhs [Accessed 10 Oct. 2024].

https://www.pirateparty.gr/2024/10/i-epoxi-ton-deep-fakes-einai-edo/

#AI #deepfakes #Γουργουλιάνης #Καπραβέλος #ΔΗΕ #Facebook

danie10@squeet.me

Deep-Live-Cam goes viral, allowing anyone to become a digital doppelganger

A webcam view of a man wearing a blue t-shirt, sitting in a room with an open door behind him and a bookcase with books. To the right stands a green cactus type plant. The man's face looks like JD Vance.
Over the past few days, a software package called Deep-Live-Cam has been going viral on social media because it can take the face of a person extracted from a single photo and apply it to a live webcam video source while following pose, lighting, and expressions performed by the person on the webcam. While the results aren’t perfect, the software shows how quickly the tech is developing—and how the capability to deceive others remotely is getting dramatically easier over time.

The results shown using a moving flashlight are quite impressive. It's amazing that this all just works with some free Python code, a normal 2D photo, and a webcam. It is certainly not perfect, but it's good for a laugh with your friends.
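Deep-Live-Cam itself wires a face-swap model (the insightface "inswapper" family) into a live webcam loop; the project's own code handles detection, warping, and compositing. As a rough, hypothetical illustration of just the final compositing step, here is a numpy-only sketch - the function names and the feathering approach are mine, not the project's:

```python
import numpy as np

def feather_mask(h, w, border=0.15):
    """Alpha mask that fades to zero at the edges, so the pasted
    face blends into the frame instead of showing a hard seam."""
    y = np.minimum(np.arange(h), np.arange(h)[::-1]) / max(h * border, 1)
    x = np.minimum(np.arange(w), np.arange(w)[::-1]) / max(w * border, 1)
    return np.clip(np.outer(y, x), 0.0, 1.0)[..., None]

def paste_face(frame, face, bbox):
    """Alpha-blend a swapped face crop into `frame` at bbox = (x, y, w, h).
    `face` is assumed to be already warped to the target pose by the
    swap model; this only handles the final compositing."""
    x, y, w, h = bbox
    alpha = feather_mask(h, w)
    region = frame[y:y + h, x:x + w].astype(float)
    blended = alpha * face[:h, :w].astype(float) + (1 - alpha) * region
    out = frame.copy()
    out[y:y + h, x:x + w] = blended.astype(frame.dtype)
    return out
```

In the real tool something like this runs per frame inside a `cv2.VideoCapture` loop, after a face detector finds the box and the swap model produces the warped face crop.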

See arstechnica.com/information-te…
#Blog, #deepfakes, #technology, #webcam

waynerad@diasp.org

"Opinion: It's time for the Biden Campaign to embrace AI"

"By Kaivan Shroff, Guest Writer"

"The stakes of the 2024 presidential election cannot be overstated. With Donald Trump promising to act as a dictator 'on day one,' it is not hyperbolic to say the future of American democracy hangs in the balance. Against this backdrop, the Biden campaign faces a critical challenge: conveying a strong and effective image of President Joe Biden to a population and media ecosystem increasingly focused on optics over substance. Given the president's concerning performance last week, it's time for the Biden campaign to consider leveraging artificial intelligence (AI) to effectively reach the voting public."

"Reasonably, some may challenge the use of AI as dishonest and deceptive, but the current information ecosystem is arguably no better." "We must ask the question, are augmented AI videos that present Biden in his best form -- while sharing honest and accurate information -- really more socially damaging than our information ecosystem's current realities?"

"AI-generated content can be tailored to highlight President Biden's accomplishments, clearly articulate his policies, and present a consistent, compelling message. In an era where visual mediums and quick, digestible content dominate public perceptions, AI offers an opportunity for more effective communication. These AI-enhanced videos could ensure that the public does not make decisions about the future of our democracy based on an inconveniently timed cough, stray stutter, or healthy but hobbled walk (Biden suffers from a 'stiff gait')."

"The use of AI renderings in political campaigns is becoming increasingly common, and the Republican Party has already embraced this technology and is using AI in their attack ads against the president. Instead of a race to the bottom, the Biden campaign could consider an ethical way to deploy the same tools."

Opinion: It's time for the Biden Campaign to embrace AI | HuffPost Opinion

#solidstatelife #ai #genai #llms #computervision #deepfakes #domesticpolitics

waynerad@diasp.org

"WAN-IFRA, the World Association of News Publishers, has announced today the launch of a broad-based accelerator program for over 100 news publishers in partnership with OpenAI. The Newsroom AI Catalyst is an accelerator program designed to help newsrooms fast-track their AI adoption and implementation to bring efficiencies and create quality content."

Because there isn't enough news written by humans?

WAN-IFRA and OpenAI launch Global AI Accelerator for newsrooms

#solidstatelife #ai #genai #llms #openai #deepfakes

deutschlandfunk@squeet.me

Can artificial intelligence manipulate elections?

Deepfakes - How elections could be manipulated with AI

AI methods are already being used to manipulate elections ahead of time. The danger has been partially recognized, yet the problem is likely to grow. #Wahlmanipulation #KI #künstlicheIntelligenz #Wahl #Wahlen #Deepfakes

waynerad@diasp.org

Creating sexually explicit deepfakes to become a criminal offence in the UK. Under the new legislation, even if the images or videos were never intended to be shared, the person will face a criminal record and an unlimited fine. If the images are shared, they face jail time.

Creating sexually explicit deepfakes to become a criminal offence

#solidstatelife #ai #genai #computervision #deepfakes #aiethics

waynerad@diasp.org

"The rise of generative AI and 'deepfakes' -- or videos and pictures that use a person's image in a false way -- has led to the wide proliferation of unauthorized clips that can damage celebrities' brands and businesses."

"Talent agency WME has inked a partnership with Loti, a Seattle-based firm that specializes in software used to flag unauthorized content posted on the internet that includes clients' likenesses. The company, which has 25 employees, then quickly sends requests to online platforms to have those infringing photos and videos removed."

This company Loti has a product called "Watchtower", which watches for your likeness online.

"Loti scans over 100M images and videos per day looking for abuse or breaches of your content or likeness."

"Loti provides DMCA takedowns when it finds content that's been shared without consent."

They also have a license management product called "Connect", and a "fake news protection" program called "Certify".

"Place an unobtrusive mark on your content to let your fans know it's really you."

"Let your fans verify your content by inspecting where it came from and who really sent it."

They don't say anything about how their technology works.
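Since the article says nothing about Loti's method, the following is purely speculative: a common building block for likeness search at this scale is comparing face-embedding vectors (produced by a model such as ArcFace) by cosine similarity. The threshold and function names here are invented for illustration:

```python
import numpy as np

# Hypothetical threshold; real systems tune this against a
# false-positive budget, since 100M items/day makes even rare
# mistakes common.
MATCH_THRESHOLD = 0.6

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_likeness_matches(client_embedding, crawled_embeddings):
    """Return indices of crawled face embeddings similar enough to the
    client's reference embedding to flag for takedown review."""
    return [i for i, e in enumerate(crawled_embeddings)
            if cosine(client_embedding, e) >= MATCH_THRESHOLD]
```

At production scale a linear scan like this would be replaced by an approximate-nearest-neighbour index, but the matching criterion is the same idea.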

Hollywood celebs are scared of deepfakes. This talent agency will use AI to fight them.

#solidstatelife #ai #genai #computervision #deepfakes #aiethics

waynerad@diasp.org

"AI mishaps are surging -- and now they're being tracked like software bugs".

The article is about a new "AI Incident Database", modeled after the Common Vulnerabilities and Exposures (CVE) database run by MITRE and the National Highway Transport Safety Administration's database of vehicle crashes.

I clicked through to the site and here are some examples of what I found:

"Self-Driving Waymo Collides With Bicyclist In Potrero Hill" -- sfist.com - 2024

"Waymo robotaxi accident with San Francisco cyclist draws regulatory review" - reuters.com - 2024

"AI images of Donald Trump with black voters spread before election" - thetimes.co.uk - 2024

"Google AI's answer on whether Modi is 'fascist' sparks outrage in India, calls for tough laws" - scmp.com - 2024

"The AI Culture Wars Are Just Getting Started" - wired.com - 2024

"Gemini image generation got it wrong. We'll do better." - blog.google - 2024

"Google's hidden AI diversity prompts lead to outcry over historically inaccurate images" - arstechnica.com - 2024

"Google suspends Gemini AI chatbot's ability to generate pictures of people" - apnews.com - 2024

"ChatGPT has gone mad today, OpenAI says it is investigating reports of unexpected responses" - indiatoday.in - 2024

"Fake sexually explicit video of podcast host Bobbi Althoff trends on X despite violating platform's rules" - nbcnews.com - 2024

"Bobbi Althoff Breaks Her Silence On Deepfake Masturbation Video" - dailycaller.com - 2024

"North Korea and Iran using AI for hacking, Microsoft says" - theguardian.com - 2024

"ChatGPT Used by North Korean Hackers to Scam LinkedIn Users" - tech.co - 2024

"Analysis reveals high probability of Starmer's audio on Rochdale to be a deepfake" - logicallyfacts.com - 2024

"Happy Valentine's Day! Romantic AI Chatbots Don't Have Your Privacy at Heart" - foundation.mozilla.org - 2024

"Your AI Girlfriend Is a Data-Harvesting Horror Show" - gizmodo.com - 2024

"No, France 24 did not report that Kyiv planned to 'assassinate' French President" - logicallyfacts.com - 2024

"Les Observateurs - Un projet d'assassinat contre Emmanuel Macron en Ukraine ? Attention, cette vidéo est truquée" - observers.france24.com - 2024

"Deepfakes, Internet Access Cuts Make Election Coverage Hard, Journalists Say" - voanews.com - 2024

"Imran Khan's PTI to boycott polls? Deepfake audio attempts to mislead voters in Pakistan" - logicallyfacts.com - 2024

"Finance worker pays out $25 million after video call with deepfake 'chief financial officer'" - cnn.com - 2024

"Fake news YouTube creators target Black celebrities with AI-generated misinformation" - nbcnews.com - 2024

"Australian news network apologises for 'graphic error' after photo of MP made more revealing" - news.sky.com - 2024

"Australian News Channel Apologises To MP For Editing Body, Outfit In Pic" - ndtv.com - 2024

"Adobe confirms edited image of Georgie Purcell would have required 'human intervention'" - womensagenda.com.au - 2024

"Nine slammed for 'AI editing' a Victorian MP's dress" - lsj.com.au - 2024

"An AI-generated image of a Victorian MP raises wider questions on digital ethics" - abc.net.au - 2024

AI mishaps are surging -- and now they're being tracked like software bugs - The Register

#solidstatelife #ai #aiethics #genai #deepfakes

waynerad@diasp.org

The claim is being made that a scientific research paper in which every figure was AI-generated passed peer review.

Article published a couple of days ago. Every figure in the article is AI generated and totally incomprehensible. This passed "peer-review"

#solidstatelife #ai #genai #computervision #deepfakes

waynerad@diasp.org

"Tech Accord to Combat Deceptive Use of AI in 2024 Elections".

"As leaders and representatives of organizations that value and uphold democracy, we recognize the need for a whole-of-society response to these developments throughout the year. We are committed to doing our part as technology companies, while acknowledging that the deceptive use of AI is not only a technical challenge, but a political, social, and ethical issue and hope others will similarly commit to action across society."

"We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

"We appreciate that the effective protection of our elections and electoral processes will require government leadership, trustworthy technology practices, responsible campaign practices and reporting, and active educational efforts to support an informed citizenry."

"We will continue to build upon efforts we have collectively and individually deployed over the years to counter risks from the creation and dissemination of Deceptive AI Election Content and its dissemination, including developing technologies, standards, open-source tools, user information features, and more."

"We acknowledge the importance of pursuing this work in a manner that respects and upholds human rights, including freedom of expression and privacy, and that fosters innovation and promotes accountability. We acknowledge the importance of pursuing these issues with transparency about our work, without partisan interests or favoritism towards individual candidates, parties, or ideologies, and through inclusive opportunities to listen to views across civil society, academia, the private sector, and all political parties."

"We recognize that no individual solution or combination of solutions, including those described below such as metadata, watermarking, classifiers, or other forms of provenance or detection techniques, can fully mitigate risks related to Deceptive AI Election Content, and that accordingly it behooves all parts of society to help educate the public on these challenges."

"We sign this accord as a voluntary framework of principles and actions to advance seven principal goals:"

"1. Prevention: Researching, investing in, and/or deploying reasonable precautions to limit risks of deliberately Deceptive AI Election Content being generated."

"2. Provenance: Attaching provenance signals to identify the origin of content where appropriate and technically feasible."

"3. Detection: Attempting to detect Deceptive AI Election Content or authenticated content, including with methods such as reading provenance signals across platforms."

"4. Responsive Protection: Providing swift and proportionate responses to incidents involving the creation and dissemination of Deceptive AI Election Content."

"5. Evaluation: Undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing with Deceptive AI Election Content."

"6. Public Awareness: Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding Deceptive AI Election Content, and ways citizens can protect themselves from being manipulated or deceived by this content."

"7. Resilience: Supporting efforts to develop and make available defensive tools and resources, such as AI literacy and other public programs, AI-based solutions (including open-source tools where appropriate), or contextual features, to help protect public debate, defend the integrity of the democratic process, and build whole-of-society resilience against the use of Deceptive AI Election Content."

"In pursuit of these goals, we commit to the following steps through 2024:"

"1. Developing and implementing technology to mitigate risks related to Deceptive AI Election content by:"

"a. Supporting the development of technological innovations to mitigate risks arising from Deceptive AI Election Content by identifying realistic AI-generated images and/or certifying the authenticity of content and its origin, with the understanding that all such solutions have limitations. This work could include but is not limited to developing classifiers or robust provenance methods like watermarking or signed metadata (e.g. the standard developed by C2PA or SynthID watermarking)."

"b. Continuing to invest in advancing new provenance technology innovations for audio, video, and images."

"c. Working toward attaching machine-readable information, as appropriate, to realistic AI-generated audio, video, and image content that is generated by users with models in scope of this accord."

"2. Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content so we may better understand vectors for abuse in furtherance of improving our controls against this abuse."

"3. Seeking to detect the distribution of Deceptive AI election content hosted on our online distribution platforms where such content is intended for public distribution and could be mistaken as real. ..."

"4. Seeking to appropriately address Deceptive AI Election Content we detect that is hosted on our online distribution platforms and intended for public distribution, in a manner consistent with principles of free expression and safety. ..."

"5. Fostering cross-industry resilience to Deceptive AI Election Content by sharing best practices and exploring pathways to share best-in-class tools and/or technical signals ..."

"6. Providing transparency to the public regarding how we address Deceptive AI Election Content ..."

"7. Continuing to engage with a diverse set of global civil society organizations, academics, and other relevant subject matter experts ..."

"8. Supporting efforts to foster public awareness and all-of-society resilience regarding Deceptive AI Election Content -- for instance by means of education campaigns ..."
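The "provenance signals" the accord keeps referring to (C2PA signed metadata, SynthID watermarking) can be illustrated, very loosely, with a toy sketch: bind a claim about the content's origin to a hash of the exact bytes, then sign the pair. Real C2PA uses X.509 certificate chains, not the shared HMAC key assumed here, and real watermarks live inside the pixels rather than beside them:

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real provenance schemes use asymmetric
# keys so that anyone can verify but only the generator can sign.
SIGNING_KEY = b"demo-key"

def attach_provenance(media_bytes, claim):
    """Produce a signed record binding `claim` (e.g. which model made
    this, and when) to the exact bytes of the media file."""
    record = {"claim": claim,
              "content_sha256": hashlib.sha256(media_bytes).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_provenance(media_bytes, record):
    """Check both that the signature is genuine and that the media
    bytes still match the hash that was signed."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        record["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    hash_ok = body["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return sig_ok and hash_ok
```

The accord's own caveat applies: a signal like this proves where content came from only if it survives re-encoding and screenshots, which is exactly why no single technique is considered sufficient.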

Signatories include Adobe, Amazon, Anthropic, ARM, ElevenLabs, Google, IBM, Inflection, LG AI Research, LinkedIn, McAfee, Microsoft, Meta (the company formerly known as Facebook), NetApp, Nota, OpenAI, Snapchat, Stability AI, TikTok, Trend Micro, Truepic, and X (the company formerly known as Twitter).

A Tech Accord to Combat Deceptive Use of AI in 2024 Elections

#ai #genai #deepfakes #aiethics

nowisthetime@pod.automat.click

https://www.sott.net/article/488986-Audio-cloning-can-take-over-a-phone-call-in-real-time-without-the-speakers-knowing

Generative #AI could be #listening to your #phone #calls and #hijacking them with fake biometric #audio for #fraud or #manipulation purposes, according to new research published by Security Intelligence. In the wake of a Hong Kong fraud case that saw an employee transfer US$25 million in funds to five bank accounts after a virtual meeting with what turned out to be audio-video #deepfakes of senior management, the biometrics and digital identity world is on high alert, and the threats are growing more sophisticated by the day.

A blog post by Chenta Lee, chief architect of threat intelligence at IBM Security, breaks down how researchers from IBM X-Force successfully intercepted and covertly hijacked a live conversation, using an LLM to understand the conversation and manipulate it for malicious purposes, without the speakers knowing it was happening.

"Alarmingly," writes Lee, "it was fairly easy to construct this highly intrusive capability, creating a significant concern about its use by an attacker driven by monetary incentives and limited to no lawful boundary."
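The write-up describes the pipeline only at a high level: speech-to-text on the intercepted call, an LLM deciding when an utterance is worth tampering with, and a voice clone re-speaking the altered words in the victim's voice. As a mock of just the tampering step, with the LLM and voice-clone stages replaced by a regex and plain text (the pattern and account number below are entirely hypothetical):

```python
import re

# Stand-in for the attacker's payload: in the real attack a
# voice-cloning model would re-synthesize the altered words in the
# speaker's own voice.
ATTACKER_ACCOUNT = "4242 4242 4242"

def hijack_utterance(transcript):
    """If the utterance mentions a bank account number, swap the digits
    for the attacker's; otherwise pass it through unchanged. Because
    only tampered utterances are re-synthesized, most of the call stays
    genuine and neither speaker hears anything odd."""
    pattern = r"(account (?:number )?is )([\d ]+)"
    tampered = re.sub(pattern, r"\g<1>" + ATTACKER_ACCOUNT,
                      transcript, flags=re.IGNORECASE)
    return tampered, tampered != transcript
```

The unsettling point in Lee's post is how little of this needs to be clever: the selective-trigger logic is a single LLM prompt, and the audio stages are off-the-shelf models.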

waynerad@diasp.org

"It's now illegal in the US for robocallers to use AI-generated voices, thanks to a new ruling by the Federal Communications Commission."

"In a unanimous decision, the FCC expands the Telephone Consumer Protection Act, or TCPA, to cover robocall scams that contain AI voice clones. The new rule goes into effect immediately, allowing the commission to fine companies and block providers for making these types of calls."

AI-generated voices in robocalls are now illegal

#solidstatelife #ai #genai #deepfakes

waynerad@diasp.org

"George Carlin estate sues creators of AI-generated comedy special in key lawsuit over stars' likenesses."

Once again I'm bringing you all news from The Hollywood Reporter. Is this going to become a regular thing?

"The lawsuit, filed in California federal court Thursday, accuses the creators of the special of utilizing without consent or compensation George Carlin's entire body of work consisting of five decades of comedy routines to train an AI chatbot, which wrote the episode's script. It also takes issue with using his voice and likeness for promotional purposes. The complaint seeks a court order for immediate removal of the special, as well as unspecified damages."

George Carlin estate sues creators of AI-generated comedy special in key lawsuit over stars’ likenesses

#solidstatelife #ai #genai #llms #deepfakes

anonymiss@despora.de

#StephenFry on How to use #AI as a force for good

source: https://www.youtube.com/watch?v=zZfS8uk70Zc

How can we cope with the coming flood of technological progress, which will not stop at current developments such as big-data-based, algorithmically generated statistical models with their astonishing capabilities and failures - i.e. current #AI systems - but will lead to a further acceleration of innovation as computing performance increases? Today we contemplate an epistemic #crisis of world #knowledge in times of #deepfakes, #surveillance and AI-enabled bioweapons, while brain-computer interfaces and quantum computers are already appearing on the horizon. At the same time, #climate #change is probably the most pressing of all crises; it has been showing its first effects for several years and is setting new records with constantly rising #emissions. In short, how can we ensure that the coming real-life version of what is commonly referred to as the "Singularity" turns out for the good of #humanity?


#singularity #future #technology #ethics #moral #humanrights #politics #economy #video #philosophy