According to a new report from Just the News, the Biden administration is pouring taxpayer money into an AI censorship program, run in collaboration with Big Tech, academia, and major corporations, that allegedly repurposes systems once used to wage information warfare against the Islamic State.
“Under the Biden Administration, the [National Science Foundation] is funding the idea that if citizen trust in government cannot be earned organically, then it must be installed by science,” according to one free speech watchdog.
What is the context?
In late 2022, The Intercept detailed Department of Homeland Security plans to broaden its efforts to curb free speech and shape online discourse, far exceeding the stated ambitions of the dormant “Disinformation Governance Board.”
Government agencies were working together to “mature a whole-of-government approach to mitigating risks of [mal-information], framing which tools, authorities, and interventions are appropriate to the threats impacting the information environment,” according to documents obtained through leaks and lawsuits.
According to reports, the DHS justified these speech restrictions, and its decisions about what information people should be allowed to access, by claiming that terrorist threats could be “exacerbated by misinformation and disinformation spread online.”
Other statist elements, sometimes working in concert, have also engaged in censorship and narrative seeding, as shown by Elon Musk’s “Twitter Files,” which revealed that federal operatives pressured private companies to censor journalists, dissenters, and even a former president.
The impact could have been electoral as well as social.
FBI agents reportedly leaned on at least one social media giant to prevent the spread of a now-confirmed story damaging to then-candidate Joe Biden’s election chances.
These time-consuming efforts to police speech and manage narratives appear to be in need of an upgrade.
The National Science Foundation has awarded millions of dollars in taxpayer funds to universities and private companies to develop censorship tools.
According to Just the News, these tools are similar to those developed by the Defense Advanced Research Projects Agency in its Social Media in Strategic Communications program, which ran from 2011 to 2017.
These tools were designed to “assist in identifying misinformation or deception campaigns and countering them with accurate information, reducing adversaries’ ability to manipulate events.”
DARPA noted that SMISC researchers “will conduct research on linguistic cues, information flow patterns, and the detection of sentiment or opinion in information generated and spread via social media. In addition, researchers will attempt to track ideas and concepts in order to analyze patterns and cultural narratives.”
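To make that jargon concrete, here is a minimal, purely hypothetical sketch of what lexicon-based detection of “sentiment or opinion” from “linguistic cues” can look like. The lexicon, sample posts, and function name are invented for illustration; this is not DARPA’s actual tooling, which would rely on far larger dictionaries or trained models.

```python
# Hypothetical illustration only: a toy lexicon-based sentiment scorer.
# The lexicon and sample posts are invented for this sketch.
SENTIMENT_LEXICON = {
    "great": 1, "love": 1, "safe": 1, "trust": 1,
    "dangerous": -1, "fraud": -1, "hoax": -1, "fear": -1,
}

def score_post(text: str) -> int:
    """Sum lexicon weights over the post's words (a crude polarity score)."""
    return sum(SENTIMENT_LEXICON.get(word, 0) for word in text.lower().split())

posts = [
    "I love this policy, it keeps us safe",      # scores +2
    "this is a dangerous fraud and a hoax",      # scores -3
]
for post in posts:
    print(f"{score_post(post):+d}  {post}")
```

Real opinion-mining systems layer statistical models on top of signals like these, but the basic idea of scoring text against known cues is the same.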
At the time SMISC was launched, Rand Waltzman, the DARPA program manager, noted that “effective use of social media has the potential to help the Armed Forces better understand the environment in which it operates and to allow more agile use of information in support of operations.”
“We must replace our current reliance on a combination of luck and crude manual methods with systematic automated and semiautomated human operator support to detect, classify, measure, track, and influence events in social media at data scale and in real time,” he added.
Waltzman set four objectives for the program, in hopes of advancing military aims and gaining greater control over narratives communicated in virtual realms:
- “Detect, classify, measure, and track the (a) formation, development, and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation”;
- “Recognize persuasion campaign structures and influence operations across social media sites and communities”;
- “Identify participants and intent, and measure persuasion campaign effects”; and
- “Counter messaging of detected adversary influence operations.”
In addition to developing technologies to better mine opinion and track memes, SMISC researchers sought to improve methods of automating content generation, weaponizing bots in social media, and crowdsourcing.
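As a rough illustration of what “tracking ideas and concepts (memes)” can mean in practice, the hypothetical sketch below counts daily mentions of a phrase to surface a crude trend signal. The posts, phrase, and helper function are invented; production systems cluster and trace text at a vastly larger scale.

```python
# Hypothetical sketch of "meme tracking": counting how often a candidate
# phrase appears per day to spot an idea gaining traction. All data here
# is invented for illustration.
from collections import defaultdict

posts = [
    ("2014-06-01", "the lone wolf narrative is spreading"),
    ("2014-06-01", "ignore the lone wolf talk"),
    ("2014-06-02", "lone wolf stories everywhere now"),
]

def track_phrase(posts, phrase):
    """Return {date: mention_count} for a phrase, a crude trend signal."""
    counts = defaultdict(int)
    for date, text in posts:
        if phrase in text.lower():
            counts[date] += 1
    return dict(counts)

print(track_phrase(posts, "lone wolf"))
# {'2014-06-01': 2, '2014-06-02': 1}
```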
According to Mike Benz, executive director of the censorship watchdog Foundation for Freedom Online (FFO), “DARPA’s been funding an AI network using the science of social media mapping since at least 2011-2012, during the Arab Spring abroad and the Occupy Wall Street movement here at home.”
“They then boosted it during ISIS’s time to identify homegrown ISIS threats in 2014-2015,” he added.
This same technology is allegedly now being used to target people who are “wary of potential adverse effects from the COVID-19 vaccine and skeptical of recent U.S. election results.”