#misinformation

mlansbury@despora.de

Russian disinformation 'more sophisticated' in election lead-up

According to White House national security communications adviser John Kirby, Russia is using artificial intelligence and funding companies in the U.S., including an unnamed Tennessee-based company, identified by the media as #far-right propaganda outlet #TenetMedia.

“It's not just about #Russian #bots, #trolls, and fake personas in social networks, although that's part of it too, but they've become much more sophisticated," Kirby said.

He claimed there is “no doubt” the Kremlin is using propaganda and disinformation to “sow discord” among the U.S. population. He urged citizens to take the threats seriously.

https://kyivindependent.com/kirby-russian-disinformation-more-sophisticated-in-election-leadup/

#HybridWar #RussianPropaganda #Election2024 #disinformation #misinformation

mlansbury@despora.de

Big UK retailers accused of dubious discounts on #loyalty card offers

Boots, Superdrug and the big supermarket chains have been accused of “murky and confusing” practices on loyalty card offers that may not be as good as they appear.

Boots was responsible for one of the most egregious examples of what the consumer group Which? called “dubious discounts” highlighted by its investigation into the pricing history of almost 12,000 products on a “snapshot day” in May.

Boots offered an Oral-B iO7 electric toothbrush to loyalty card holders for £150, marking this as a discount to the £400 price for non-members.

However, the product was only priced at £400 for 13 days before the offer appeared – before that it was £150 for everyone.

https://www.theguardian.com/business/article/2024/aug/22/big-uk-retailers-accused-of-dubious-discounts-on-loyalty-card-offers-boots-superdrug-tesco-which

#LoyaltyCards #consumers #discounts #greed #CapitalismFails #retail #fraud #misinformation #disinformation

mlansbury@despora.de

Russia recycles old videos in attempt to show success against Ukraine's incursion

The Russian Defense Ministry published videos claiming to show successful strikes against Ukrainian forces in Kursk Oblast, but the videos were filmed in other locations and at other times, the Russian independent news outlet the Insider reported on Aug. 10.

The Russian Defense Ministry and state-controlled media circulated videos that they alleged show Russian forces defeating Ukrainian troops in the region. A video published Aug. 9 claimed to show Russian troops carrying out strikes against the Ukrainian military in bordering #Sumy Oblast in response to the Kursk incursion.

The video was in fact first posted weeks before the offensive, the Insider reported. The Russian state media outlet TASS published the video on July 14.

https://kyivindependent.com/russia-recycles-old-videos-in-attempt-to-show-success-against-ukraines-incursion-media-reports/

#Misinformation #propaganda #disinformation #StopRussianAggression #RussianPropaganda #Ukraine

mlansbury@despora.de

Why did the Russian disinformation machine target French voters?

What do fake news stories about Ukrainian First Lady Olena Zelenska's multimillion-euro sports car purchase and made-up offers of money to vote for President Emmanuel Macron in the French snap elections have in common?

They were all cooked up by the Kremlin in what is an ongoing all-out assault on French public opinion, researchers claim.

Scores of freshly registered websites, some made to look like mainstream outlets, have been publishing everything from deepfakes and generative AI-written articles to fringe content and reports on real-world acts of subversion.

https://www.euronews.com/2024/07/08/why-did-the-russian-disinformation-machine-target-french-voters

#FakeNews #Russian #disinformation #propaganda #InformationWarfare #misinformation #Putin

prplcdclnw@diasp.eu

More Fun with AI

News app makes up news and credits it to made-up people.

https://www.reuters.com/technology/top-news-app-us-has-chinese-origins-writes-fiction-with-help-ai-2024-06-05/

Another example of an "AI" pulling shit out of its anus.

LONDON, June 5 (Reuters) - Last Christmas Eve, NewsBreak, a free app with roots in China that is the most downloaded news app in the United States, published an alarming piece about a small town shooting. It was headlined "Christmas Day Tragedy Strikes Bridgeton, New Jersey Amid Rising Gun Violence in Small Towns."

The problem was, no such shooting took place. The Bridgeton, New Jersey police department posted a statement on Facebook on December 27 dismissing the article - produced using AI technology - as "entirely false".

"Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described," the post said. "It seems this 'news' outlet's AI writes fiction they have no problem publishing to readers."

#ai #as #artificial-intelligence #artificial-stupidity #google #be-evil #stupidity #misinformation #incompetence #worthless #worse-than-worthless #joke #tasteless-joke #painful-embarrassment

mlansbury@despora.de

Russian legal foundation working in EU is actually 'Kremlin influence operation'

Leaked documents from the #Russian Fund for Support and Protection of the Rights of Compatriots Living Abroad (Pravfond) show that the purported legal foundation is actually a Kremlin-linked disinformation outlet, the Guardian and other media outlets reported on June 2.

#Pravfond describes its goal as providing "Russian compatriots with comprehensive legal and other necessary support in cases of violation of their rights, freedoms, and legitimate interests."

According to the leaked documents, Pravfond helped pay legal fees for convicted Russian arms smuggler Viktor Bout and assassin Vadim Krasikov, who is currently serving a prison sentence in Germany for the murder of a Georgian-Chechen dissident in 2019.

Pravfond also reportedly employs several former Russian intelligence operatives and has spent millions of dollars on #disinformation campaigns in almost 50 countries in #Europe and the rest of the world.

Estonia's security service characterized Pravfond in 2020 as a "pseudo legal protection system" that "in reality is an influence operations fund."

https://kyivindependent.com/guardian-russian-legal-foundation-working-in-eu-is-actually-kremlin-influence-operation/

#RussianPropaganda #propaganda #RussianAggression #InformationWarfare #misinformation

anonymiss@despora.de

The #LLM #Misinformation #Problem I Was Not Expecting

Source: https://labs.ripe.net/author/kathleen_moriarty/the-llm-misinformation-problem-i-was-not-expecting/

Another example of non-vetted AI results includes how some online content inaccurately describes authentication, creating misinformation that continues to confuse students. For instance, some #AI LLM results describe Lightweight Directory Access Protocol (LDAP) as an authentication type. While it does support password authentication and serve up public key certificates to aid in PKI authentication, LDAP is a directory service. It is not an authentication protocol.
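
The distinction can be made concrete with a toy sketch (plain Python, not a real LDAP client; all names and entries here are hypothetical): the protocol's core job is answering directory lookups, and a password check ("simple bind") is just one operation layered on top of the same stored entries.

```python
# Hypothetical in-memory "directory" standing in for an LDAP server.
# Real LDAP stores hashed passwords and speaks the wire protocol of
# RFC 4511; this only illustrates the conceptual split.
DIRECTORY = {
    "uid=alice,ou=people,dc=example,dc=org": {
        "cn": "Alice Adams",
        "mail": "alice@example.org",
        "userPassword": "s3cret",
    },
}

def search(dn: str, attribute: str):
    """Directory-service role: look up an attribute of an entry."""
    entry = DIRECTORY.get(dn)
    return entry.get(attribute) if entry else None

def simple_bind(dn: str, password: str) -> bool:
    """Authentication support: verify a password against the entry."""
    entry = DIRECTORY.get(dn)
    return bool(entry) and entry.get("userPassword") == password
```

The point of the sketch: `search` is what the protocol exists for, while `simple_bind` merely consults the same directory data, which is why LDAP can back authentication (including serving certificates for PKI) without itself being an authentication protocol.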

#education #confusion #problem #future #knowledge #technology #news #trust

mlansbury@despora.de

ISW: Putin hopes to convince West to betray Ukraine

Russian dictator Vladimir Putin's comments identifying the West as Russia's "enemy" suggest a Kremlin narrative aimed at convincing Western nations to betray Ukraine in future negotiations with Russia, the Institute for the Study of War (ISW) wrote in its Jan. 2 report.

Putin said on Jan. 1 that “Ukraine by itself is not an enemy" of Russia, claiming that Western countries who wish to destroy Russian sovereignty are the true enemies and that Ukraine has already been "completely destroyed."

https://kyivindependent.com/isw-putin-hopes-to-convince-the-west-to-betray-ukraine/

#RussianAggression #imperialism #RussiaInvadedUkraine #RussianWarCrimes #WarCrimes #misinformation #propaganda #PutinWarCrimes #Ukraine #StandWithUkraine

libramoon@diaspora.glasswings.com

2023 George Washington Symposium: Role of #Journalism in #Democracy
"CBS News, New York Times, and Politico journalists discussed the role of journalism in democracy during the 2023 George Washington Symposium at Mount Vernon in Virginia. Several topics were addressed, including the role of journalists in protecting democratic values, combating #misinformation and disinformation, the perceived lack of trust in journalism, and the impact on democracy of the January 6, 2021, attack on the U.S. Capitol." https://www.c-span.org/video/?531619-1/2023-george-washington-symposium-role-journalism-democracy

#cspan #video #transcript

mlansbury@despora.de

A statement on social media policy - CARTOONISTS RIGHTS

Effective immediately, CARTOONISTS RIGHTS will cease posting content to Twitter, now known as “X”, until such time as there is a change in ownership and a marked improvement in the website’s policies and functionality.

Following recent changes in policy and performance on the Twitter/“X” social media platform, and the public actions of its owner, Cartoonists Rights shall stop using his site as a means of outward communication with immediate effect.

These issues include:

  • Greater exposure to the processes of training proprietary artificial intelligence, and the removal of titles or headlines from web links posted to the site
  • Disrupting searches and benefiting false and misleading posts
  • Elon Musk’s endorsement of antisemitic disinformation immediately after the Hamas incursion into Israel on October 7th and of a racist conspiracy theory on November 15th
  • A statement that this supposed free-speech absolutist would be banning expressions of Palestinian solidarity from the platform on the grounds of alleged genocidal intent
  • His repeated validation of the “pizzagate” conspiracy theory, and his tacit endorsement of numerous peddlers of hatred including defamer of bereaved parents Alex Jones and alleged human trafficker Andrew Tate
  • The news that he would sue independent watchdog Media Matters
  • Most recently, evidence that “X” is turning a blind eye to racism, homophobia and sexual harassment on their platform as a matter of policy.

https://cartoonistsrights.org/a-statement-on-social-media-policy/

#SocialMedia #cartoons #rights #CartoonistsRights #HumanRights #Twitter #Musk #ElonMusk #AI #misinformation #disinformation #Hamas #Israel #ConspiracyTheory #hate #racism #PizzaGate #trafficking #HumanTrafficking #MediaMatters #homophobia #harassment

libramoon@diaspora.glasswings.com

https://theconversation.com/health-misinformation-is-rampant-on-social-media-heres-what-it-does-why-it-spreads-and-what-people-can-do-about-it-217059

#Health #misinformation is rampant on social #media – here’s what it does, why it spreads and what people can do about it
Published: December 13, 2023

Below are some steps that #consumers can take to identify and prevent health misinformation spread:

Check the source. Determine the credibility of the health information by checking if the source is a reputable organization or agency such as the World Health Organization, the National Institutes of Health or the Centers for Disease Control and Prevention. Other credible sources include an established medical or scientific institution or a peer-reviewed study in an academic journal. Be cautious of information that comes from unknown or biased sources.

Examine author credentials. Look for qualifications, expertise and relevant professional affiliations for the author or authors presenting the information. Be wary if author information is missing or difficult to verify.

Pay attention to the date. Scientific knowledge by design is meant to evolve as new evidence emerges. Outdated information may not be the most accurate. Look for recent data and updates that contextualize findings within the broader field.

Cross-reference to determine scientific consensus. Cross-reference information across multiple reliable sources. Strong consensus across experts and multiple scientific studies supports the validity of health information. If a health claim on social media contradicts widely accepted scientific consensus and stems from unknown or unreputable sources, it is likely unreliable.

Question sensational claims. Misleading health information often uses sensational language designed to provoke strong emotions to grab attention. Phrases like “miracle cure,” “secret remedy” or “guaranteed results” may signal exaggeration. Be alert for potential conflicts of interest and sponsored content.

Weigh scientific evidence over individual anecdotes. Prioritize information grounded in scientific studies that have undergone rigorous research methods, such as randomized controlled trials, peer review and validation. When done well with representative samples, the scientific process provides a reliable foundation for health recommendations compared to individual anecdotes. Though personal stories can be compelling, they should not be the sole basis for health decisions.

Talk with a health care professional. If health information is confusing or contradictory, seek guidance from trusted health care providers who can offer personalized advice based on their expertise and individual health needs.

When in doubt, don’t share. Sharing health claims without validity or verification contributes to misinformation spread and preventable harm."...

libramoon@diaspora.glasswings.com

https://www.technologyreview.com/2023/12/15/1085441/eric-schmidt-plan-for-fighting-election-misinformation/

Eric Schmidt has a 6-point plan for fighting election #misinformation
The former Google CEO hopes that companies, #Congress, and #regulators will take his advice on board—before it’s too late.

By Eric Schmidt
December 15, 2023

..."Here I propose six technical approaches that platforms should double down on to protect their users. Regulations and laws will play a crucial role in incentivizing or mandating many of these actions. And while these reforms won’t solve all the problems of mis- and disinformation, they can help stem the tide ahead of elections next year.

  1. Verify human users. We need to distinguish humans using social media from bots, holding both accountable if laws or policies are violated. This doesn’t mean divulging identities. Think of how we feel safe enough to hop into a stranger’s car because we see user reviews and know that Uber has verified the driver’s identity. Similarly, social media companies need to authenticate the human behind each account and introduce reputation-based functionality to encourage accounts to earn trust from the community.
    
  2. Know every source. Knowing the provenance of the content and the time it entered the network can improve trust and safety. As a first step, using a time stamp and an encrypted (and not removable) IP address would guarantee an identifiable point of origin. Bad actors and their feeds—discoverable through the chain of custody—could be deprioritized or banned instead of being algorithmically amplified. While VPN traffic may deter detection, platforms can step up efforts to improve identification of VPNs. 
    

  3. Identify deepfakes. In line with President Biden’s sweeping executive order on AI, which requires the Department of Commerce to develop guidance for watermarking AI-generated content, platforms should further develop detection and labeling tools. One way for platforms to start is to scan an existing database of images and tell the user if an image has no history (Google Images, for example, has begun to do this). AI systems can also be trained to detect the signatures of deepfakes, using large sets of truthful images contrasted with images labeled as fake. Such software can tell you when an image has a high likelihood of being a deepfake, similar to the “spam risk” notice you get on your phone when calls come in from certain numbers.

  4. Filter advertisers. Companies can share a “safe list” of advertisers across platforms, approving those who comply with applicable advertising laws and conform professionally to the platforms’ advertising standards. Platforms also need to ramp up their scrutiny of political ads, adding prominent disclaimers if synthetic content is used. Meta, for example, announced this month that it would require political ads to disclose whether they used AI.

  5. Use real humans to help. There will, of course, be mistakes, and some untrustworthy content will slip through the protections. But the case of Wikipedia shows that misinformation can be policed by humans who follow clear and highly detailed content rules. Social media companies, too, should publish quality rules for content and enforce them by further equipping their trust and safety teams, and potentially augmenting those teams by providing tools to volunteers. How humans fend off an avalanche of AI-generated material from chatbots remains to be seen, but the task will be less daunting if trained AI systems are deployed to detect and filter out such content.

  6. Invest in research. For all these approaches to work at scale, we’ll require long-term engagement, starting now. My philanthropic group is working to help create free, open-source testing frameworks for many AI trust and safety groups. Researchers, the government, and civil society will also need increased access to critical platform data. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from projects approved by the National Science Foundation."...
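
The "know every source" idea can be sketched minimally: record a content hash and an ingest timestamp so a post's point of origin is identifiable later. The field names and the `origin` format below are hypothetical illustrations, not any platform's actual schema; real provenance schemes (e.g. C2PA) are far richer.

```python
import hashlib
from datetime import datetime, timezone

def provenance_record(content: bytes, origin: str) -> dict:
    """Stamp content with a hash and a UTC ingest time (toy schema)."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "origin": origin,  # e.g. the account or network entry point
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

rec = provenance_record(b"example post", "account:12345")
```

With such records, a chain of custody can be followed back from any repost to its first appearance, which is what lets bad feeds be deprioritized rather than amplified.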
    
psych@diasp.org