#regulators

libramoon@diaspora.glasswings.com

https://www.technologyreview.com/2023/12/15/1085441/eric-schmidt-plan-for-fighting-election-misinformation/

Eric Schmidt has a 6-point plan for fighting election #misinformation
The former Google CEO hopes that companies, #Congress, and #regulators will take his advice on board—before it’s too late.

By Eric Schmidt
December 15, 2023

..."Here I propose six technical approaches that platforms should double down on to protect their users. Regulations and laws will play a crucial role in incentivizing or mandating many of these actions. And while these reforms won’t solve all the problems of mis- and disinformation, they can help stem the tide ahead of elections next year.

  1. Verify human users. We need to distinguish humans using social media from bots, holding both accountable if laws or policies are violated. This doesn’t mean divulging identities. Think of how we feel safe enough to hop into a stranger’s car because we see user reviews and know that Uber has verified the driver’s identity. Similarly, social media companies need to authenticate the human behind each account and introduce reputation-based functionality to encourage accounts to earn trust from the community.
    
  2. Know every source. Knowing the provenance of the content and the time it entered the network can improve trust and safety. As a first step, using a time stamp and an encrypted (and not removable) IP address would guarantee an identifiable point of origin. Bad actors and their feeds—discoverable through the chain of custody—could be deprioritized or banned instead of being algorithmically amplified. While VPN traffic may hinder detection, platforms can step up efforts to improve identification of VPNs.
    

  3. Identify deepfakes. In line with President Biden’s sweeping executive order on AI, which requires the Department of Commerce to develop guidance for watermarking AI-generated content, platforms should further develop detection and labeling tools. One way for platforms to start is to scan an existing database of images and tell the user if an image has no history (Google Images, for example, has begun to do this). AI systems can also be trained to detect the signatures of deepfakes, using large sets of truthful images contrasted with images labeled as fake. Such software can tell you when an image has a high likelihood of being a deepfake, similar to the “spam risk” notice you get on your phone when calls come in from certain numbers.

  4. Filter advertisers. Companies can share a “safe list” of advertisers across platforms, approving those who comply with applicable advertising laws and conform professionally to the platforms’ advertising standards. Platforms also need to ramp up their scrutiny of political ads, adding prominent disclaimers if synthetic content is used. Meta, for example, announced this month that it would require political ads to disclose whether they used AI.

  5. Use real humans to help. There will, of course, be mistakes, and some untrustworthy content will slip through the protections. But the case of Wikipedia shows that misinformation can be policed by humans who follow clear and highly detailed content rules. Social media companies, too, should publish quality rules for content and enforce them by further equipping their trust and safety teams, and potentially augmenting those teams by providing tools to volunteers. How humans fend off an avalanche of AI-generated material from chatbots remains to be seen, but the task will be less daunting if trained AI systems are deployed to detect and filter out such content.

  6. Invest in research. For all these approaches to work at scale, we’ll require long-term engagement, starting now. My philanthropic group is working to help create free, open-source testing frameworks for many AI trust and safety groups. Researchers, the government, and civil society will also need increased access to critical platform data. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from projects approved by the National Science Foundation."...
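
The provenance idea above (a time stamp plus an encrypted, non-removable origin identifier) could look roughly like this minimal Python sketch. The key, function names, and record fields are all hypothetical illustrations, not anything the article or any platform specifies; the point is that a keyed digest lets the platform link posts to a point of origin without divulging the raw IP:

```python
import hashlib
import hmac
import json
import time

# Hypothetical platform-side secret; in practice this would be a managed key.
PLATFORM_KEY = b"platform-secret-key"

def stamp_content(content: bytes, client_ip: str) -> dict:
    """Attach a provenance record: a time stamp plus a keyed digest of the
    origin IP. The raw IP is never stored, so identities are not divulged,
    but the platform can later match records that share an origin."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": int(time.time()),
        "origin_tag": hmac.new(
            PLATFORM_KEY, client_ip.encode(), hashlib.sha256
        ).hexdigest(),
    }

def same_origin(record_a: dict, record_b: dict) -> bool:
    """Two posts share a point of origin if their keyed IP digests match."""
    return hmac.compare_digest(record_a["origin_tag"], record_b["origin_tag"])
```

Under this sketch, a coordinated feed of posts from one origin becomes discoverable by matching `origin_tag` values, which is the kind of chain-of-custody signal the article says could be deprioritized instead of amplified.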
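
The deepfake-identification step above (checking whether an image has any history, then surfacing a "spam risk"-style likelihood label) might be sketched as follows. The index, thresholds, and label wording are illustrative assumptions, and a real detector's probability would come from a trained model rather than being passed in directly:

```python
import hashlib
from typing import Optional

def image_history(image_bytes: bytes, known_index: dict) -> Optional[str]:
    """Look up an image by hash in a database of previously seen images.
    Returns its recorded history, or None if the image has no history."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return known_index.get(digest)

def deepfake_label(fake_probability: float) -> str:
    """Map a trained detector's fake-probability to a user-facing notice,
    analogous to the 'spam risk' label on phone calls. The thresholds are
    illustrative, not from the article."""
    if not 0.0 <= fake_probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if fake_probability >= 0.9:
        return "High likelihood of deepfake"
    if fake_probability >= 0.5:
        return "Possible deepfake"
    return "No deepfake indicators"
```

An image with no entry in the index and a high detector score would get both signals: no known history and a "high likelihood" label.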
    
sylviaj@joindiaspora.com

To counter U.S. hostility, China moves towards people centered policies

https://www.moonofalabama.org/2021/09/in-counter-to-us-attacks-china-transforms-itself-from-capital-centric-to-people-centric.html

To counter #China, the #USA shifts to the #oligarchy. To counter #America, China shifts to the #people. Who will win?
Aside from the #ideological #underpinning, the new regulatory moves are #populist. The #masses will like them. They guarantee #President #XiJinping's #reelection at next year's national party congress. They will #strengthen China's #unity in its #competition with the #UnitedStates.

#MoonOfAlabama #MoA #globalization #economic #fundamentals #free-markets #income #disparity #investment #mgmt #legal #politics #regulations #regulators #soros #michaelhudson #healthy #society #common #prosperity

handrix@diasp.org

#shithole-countries-portugal

Please note: the Portuguese are among the best people in the world,
and the better they are, the more they #migrate

Why?
Because #Fake and #Artistic #Democracy, #Corruption and #OrganizedCrime took over the #PoliticalParties, #Sovereign #Institutions, #Regulators, (pseudo-)independent #media and #strategic sectors of the #economy.


Bazuca, aka the #PT #EuropeanUnion recovery plan, is a #CaseStudy in #country #incompetence: #nonsense improvised for the benefit of a few.


#Portugal is a #shithole #democracy backed by the conniving, #blind #EU #Institutions, namely the #EuCouncil, for the sake of #submissive voters.


In #Portugal, #election #VoterTurnout is usually under 50%



#SocialistParty #AntonioCosta #AntonioCostaPM