Plain Jane
It seems crazy to start this thread so early, but I have been watching the elections in European countries carefully, and the most conservative/populist parties have done incredibly well. Yet the behind-the-scenes shenanigans of the EU have kept all of them from actually holding power.
Censorship is a tactic increasingly being deployed ahead of the upcoming EU Parliament elections, and other tactics may follow. We could see it here in the United States.
Brussels Begins To Mobilise Its Mass Censorship Regime For Upcoming EU Elections
BY TYLER DURDEN
WEDNESDAY, APR 03, 2024 - 03:30 AM
Authored by Nick Corbishley via NakedCapitalism.com,
This is the culmination of a process that began at least a decade ago.
One of the most important (albeit least reported) developments of 2023 was the launch of the European Union’s Digital Services Act (DSA), which came into full effect in late August and which we covered in the article, The EU’s Mass Censorship Regime Is Almost Fully Operational. Will It Go Global? The goal of the DSA is to combat — i.e., suppress — mis- and disinformation online, not just in Europe but potentially across the world. It is part of a broader trend of Western governments actively pushing to censor information on the Internet as they gradually lose control over the narrative.
Here’s how it works: so-called Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) — those with more than 45 million active monthly users in the EU — are required to censor content hosted on their platforms deemed to be illegal by removing it, blocking it, or providing certain information to the authorities concerned. Platforms are also required to tackle hate speech and dis- or misinformation if it is deemed to have “actual or foreseeable negative effects on civic discourse and electoral processes, and public security” and/or “actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors and serious negative consequences to the person’s physical and mental well-being.”
Besides take-downs and outright suspensions, other familiar tools at the disposal of tech platforms include de-monetisation, content demotion, shadow-banning and account visibility filtering. The European Commission has primary, but not exclusive, regulatory responsibility for VLOPs and VLOSEs. The same requirements now also apply to all other online service providers, though responsibility for execution and enforcement lies not with the Commission but national authorities.
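The tiered regime described above can be summarised in a short sketch. This is purely illustrative logic, not any official implementation; the 45-million-user threshold and the action names come from the article’s description, while the function and variable names are my own.

```python
# Illustrative sketch of the DSA's tiered obligations as described in the
# article. Not an official implementation of the regulation.

VLOP_THRESHOLD = 45_000_000  # active monthly users in the EU


def regulator_for(monthly_active_eu_users: int) -> str:
    """Which authority has primary enforcement responsibility.

    Per the article: the European Commission for VLOPs/VLOSEs,
    national authorities for all other online service providers.
    """
    if monthly_active_eu_users > VLOP_THRESHOLD:
        return "European Commission"
    return "national authority"


def required_actions(content_is_illegal: bool, harms_civic_discourse: bool) -> list[str]:
    """Actions the article says platforms must be prepared to take."""
    actions = []
    if content_is_illegal:
        actions += ["remove", "block", "report to authorities"]
    if harms_civic_discourse:  # dis/misinformation, hate speech, etc.
        actions += ["demote", "demonetise", "filter visibility"]
    return actions
```

Under this reading, a platform’s size determines only who enforces the rules against it; the content obligations themselves now apply to all online service providers.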
Staying Mum
So far, the platforms, including even Elon Musk’s X, appear to be adhering to the EU’s rules on disinformation. If they weren’t, they could face serious economic consequences, including fines of up to 6% of global turnover, as well as the looming threat of warrantless inspections of company premises. The X platform (formerly known as Twitter) may have left the EU’s voluntary code of practice last summer and in December was hit with a probe over disinformation related to Hamas’s October 7 attack, but its actions — or rather lack of actions — since then suggest it is indeed complying with the rules.

As Robert Kogon reports for Brownstone Institute (granted, not the most popular source of information on NC, but this is another solid, well-researched piece by Kogon on a topic virtually no one else is talking about), “while Musk and the Twitter Files are so verbose about alleged ‘US government censorship,'” they “have remained suitably mum about EU censorship demands”:
It is strictly impossible that Twitter has not had and is not continuing to have contact – indeed extensive and regular contact – with EU officials about censoring content and accounts that the European Commission deems “mis-” or “disinformation.” But we have heard absolutely nothing about this in the “Twitter Files.”
Why? The answer is: because EU censorship really is government censorship, i.e. censorship that Twitter is required to carry out on pain of sanction. This is the difference between the EU censorship and what Elon Musk himself has denounced as “US government censorship.” The latter has amounted to nudges and requests, but was never obligatory and could never be obligatory, thanks to the First Amendment and the fact that there has never been any enforcement mechanism. Any law creating such an enforcement mechanism would be obviously unconstitutional. Hence, Twitter could always simply say no…
Far from any sign of defiance of the Code and the DSA, what we get from Elon Musk is repeated pledges of fealty: like the below tweet that he posted after meeting with EU Internal Market Commissioner Thierry Breton in January. (For an earlier such pledge in the form of a joint video message with Breton, see here.)
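To make the enforcement threat mentioned above concrete, the ceiling of “up to 6% of global turnover” is simple arithmetic. The sketch below is illustrative only; the 6% figure is the one cited in the article.

```python
def max_dsa_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a DSA fine: up to 6% of a firm's global annual
    turnover, per the figure cited in the article."""
    return 0.06 * global_annual_turnover_eur


# For a firm with EUR 100 billion in global annual turnover, the
# ceiling works out to EUR 6 billion.
```

A ceiling scaled to global (not EU) turnover is what gives the regime its leverage: even a platform with a modest European business risks a fine proportional to its entire worldwide revenue.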
Now, the European Commission has its sights set on the EU’s parliamentary elections, to be held in June. “Integrity of election is one of my top priorities for DSA enforcement, as we are entering a period of elections in Europe,” Breton the Enforcer told Politico last September.
Elections in Slovakia in September were supposed to offer a dummy run, but the results were underwhelming, at least as far as the Commission was concerned. The left-wing populist and social conservative party, Direction–Social Democracy (Smer-SD), led by former Prime Minister Robert Fico, took the largest number of votes and was able to form a coalition government with like-minded parties. Fico had promised to cut all aid to Ukraine, which he says is governed by neo-Nazis, as well as block its accession to NATO.
The Commission is determined to up its game, however. Last week, it published a set of guidelines for Big Tech firms to help Brussels “secure” the upcoming elections from foreign interference and other threats. The guidelines recommend “mitigation measures and best practices to be undertaken by Very Large Online Platforms and Search Engines before, during, and after electoral events,” and are explained as necessary in order to prevent things like fake news, turnout suppression, cyber threats and attacks, and, of course, Russia’s malign influence on European public opinion, particularly regarding Ukraine.
“In the European Union we speak about the Kremlin, which is very successful in creating narratives which can influence the voting preferences of the people,” said EU Vice-President for Values and Transparency, Věra Jourová, in a recent interview with the Atlantic Council, a neocon think tank that knows a thing or two about disinformation, having played a leading role in the PropOrNot fiasco that baselessly outed hundreds of alternative news websites, including this one, as Russian propagandists. “And lying, just lies… Disinformation in order to influence elections in a way that the people in Europe will stop to support (sic) Ukraine.”
List of Demands
Here is, word for word, the full list of the EU’s demands for the platforms, interspersed with a few observations and speculations of my own (italicised and in brackets). The platforms are instructed to:

“Reinforce their internal processes, including by setting up internal teams with adequate resources, using available analysis and information on local context-specific risks and on the use of their services by users to search and obtain information before, during and after elections, to improve their mitigation measures.”
(This may sound eerily reminiscent of the US government’s censorship efforts revealed by the Twitter Files, but there is a key difference: the processes in the US were largely covert and informal, with nothing in the way of legal consequences in the case of non-compliance. By contrast, under the EU’s DSA the processes are not just overt and legally authorised; they are backed up with the very real threat of substantial economic sanctions).
“Implement elections-specific risk mitigation measures tailored to each individual electoral period and local context. Among the mitigation measures included in the guidelines, Very Large Online Platforms and Search Engines should promote official information on electoral processes, implement media literacy initiatives, and adapt their recommender systems to empower users and reduce the monetisation and virality of content that threatens the integrity of electoral processes. Moreover, political advertising should be clearly labelled as such, in anticipation of the new regulation on the transparency and targeting of political advertising.”
(The first sentence serves as a reminder that these processes will be applied not only to EU elections. As the Commission’s announcement on X makes clear, it also plans to “protect the integrity” of 17 national or local elections across Europe this year. What about elections in other regions of the world? For example, the US’ general election in November, on which so much rests, including quite possibly the future of NATO. Clearly, the European Commission and the national governments of many EU member states have a vested interest in trying to prevent another Trump triumph).
“Adopt specific mitigation measures linked to generative AI: Very Large Online Platforms and Search Engines whose services could be used to create and/or disseminate generative AI content should assess and mitigate specific risks linked to AI, for example by clearly labelling content generated by AI (such as deepfakes), adapting their terms and conditions accordingly and enforcing them adequately.”
(The EU has just passed its AI Act, one of whose ostensible purposes is to tackle the threat posed by AI-generated videos and other recordings. As high-quality deepfakes are becoming harder to detect, this is a growing challenge. For the moment, the Commission is relying on the DSA to address these risks for the upcoming EU elections).
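The labelling guideline quoted above is the one demand with an obvious mechanical shape. A minimal sketch, assuming a hypothetical `Post` record with an `ai_generated` flag (the names are mine, not any platform’s actual API):

```python
# Hypothetical sketch of the "clearly label AI-generated content" guideline:
# attach a visible disclosure to any item flagged as synthetic before display.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    ai_generated: bool = False


def render(post: Post) -> str:
    """Prepend a disclosure label to AI-generated items, per the guideline."""
    if post.ai_generated:
        return "[AI-generated content] " + post.text
    return post.text
```

The hard part, of course, is not the label but the detection: deciding whether `ai_generated` should be true in the first place, which the guideline leaves to the platforms.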
“Cooperate with EU level and national authorities, independent experts, and civil society organisations to foster an efficient exchange of information before, during and after the election and facilitate the use of adequate mitigation measures, including in the areas of Foreign Information Manipulation and Interference (FIMI), disinformation and cybersecurity.”
(As readers no doubt appreciate, this level of collusion between government and big tech platforms — the ultimate public-private partnership — aimed at controlling the message throughout an election period, is exceedingly dangerous. Even the EFF, which has praised many aspects of the DSA, warns that “Issues with government involvement in content moderation are pervasive and whilst trusted flaggers are not new, the DSA’s system could have a significant negative impact on the rights of users, in particular that of privacy and free speech.”)
“Adopt specific measures, including an incident response mechanism, during an electoral period to reduce the impact of incidents that could have a significant effect on the election outcome or turnout.”
“Assess the effectiveness of the measures through post-election reviews. Very Large Online Platforms and Search Engines should publish a non-confidential version of such post-election review documents, providing opportunity for public feedback on the risk mitigation measures put in place.”
(This last point feels as though it is intended to give this vast enterprise a veneer of respectability through the use of expressions such as “non-confidential” and “public feedback,” presenting the illusion that these processes will all be happening out in the open and with the direct involvement of the public, which couldn’t be further from the truth).
Not everything about the DSA is bad, however. The Electronic Frontier Foundation (EFF), for example, has praised many aspects of the regulation, including the protections it provides on user rights to privacy by prohibiting platforms from undertaking targeted advertising based on sensitive user information, such as sexual orientation or ethnicity. “More broadly, the DSA increases the transparency about the ads users see on their feeds as platforms must place a clear label on every ad, with information about the buyer of the ad and other details.” It also “reins in the powers of Big Tech” by forcing them to “comply with far-reaching obligations and responsibly tackle systemic risks and abuse on their platform.”
But the EFF says it also “gives way too much power to government agencies to flag and remove potentially illegal content and to uncover data about anonymous speakers”:
Democracies are in many ways like the internet. In both cases, it may take a thousand cuts to demolish their foundation, yet each cut contributes significantly to their erosion. One such cut exists in the Digital Services Act (DSA) in the form of drastic and overbroad government enforcement powers.