Disinformation campaigns: fact-checking, use of social media, and targeted attacks
The battle between disinformation agents and the vigilant forces trying to combat them is a never-ending cat-and-mouse game. In today's digital age, the proliferation of disinformation has far-reaching consequences. As disinformation campaigns become more complex, platforms like Facebook and Twitter have moved towards clearer attributions.
In 2020, there was a marked increase in the number of campaigns attributed to specific actors. Such “naming and shaming” not only deters potential disinformants but also equips citizens with valuable knowledge about the origin of information.
While the increasing ability of platforms to pinpoint specific actors is laudable, it also exposes them to backlash. For example, after Twitter removed accounts linked to Turkey's ruling party, the platform faced threats of further regulation and restrictions.
The Evolution of Disinformation Strategies
2020 saw innovative strategies from disinformation agents. AI-generated profile pictures emerged as a key tool to create convincing fake accounts, thus dodging reverse image search tools. Another tactic, handle switching, allows users to entirely revamp their online identity without losing followers, making the task of tracing disinformation to its source even harder.
This continuous evolution underscores the necessity for platforms to adapt and implement features that restrict disinformation agents' abilities to disguise and disseminate false narratives. The trend of governments or political actors outsourcing their disinformation campaigns to third-party PR or marketing firms also became increasingly apparent in 2020.
This approach provides a layer of deniability and makes the source of disinformation harder to trace. However, the very act of outsourcing might dilute the effectiveness of the campaign. For instance, a PR firm might employ tactics that make its campaign look successful without genuinely influencing public opinion in the desired manner.
Certain countries, including the United States, the United Kingdom, and Egypt, emerged as favorite targets for foreign disinformation operations. While geopolitical importance plays a role, the open media environments in these countries may also render them more susceptible.
However, identifying the most targeted nations is a complex endeavor. Disinformation on other platforms and unreported campaigns can paint a distorted picture, emphasizing the need for comprehensive multi-platform research.
The Dark Underbelly of Social Media: a Dive into Fake News and Information Operations
Fake news outlets masquerading as independent journalists or media houses continued to be a popular tool in the disinformation toolkit. The sheer number of genuine news portals makes it easy for these fakes to blend in and mislead readers. For instance, the campaign linked to the Islamic Movement of Nigeria utilized fake news outlets to disseminate anti-Western narratives, further underlining the challenges of distinguishing genuine news sources from nefarious ones.
In today's digital age, where social media has become an integral part of our lives, it is no longer just about connecting with friends or following the latest trends. A sinister underbelly has emerged, one that poses threats ranging from the manipulation of public opinion to cyber espionage.
A growing concern in the realm of digital misinformation is that while many fake media outlets used in information operations are originally crafted, some have now begun impersonating existing credible media outlets. For instance, in a takedown in November, authorities discovered a network stemming from Iran and Afghanistan. This network included accounts on social media platforms like Facebook and Instagram that posed as representatives of Afghanistan's top TV channel.
Moreover, a tactic frequently adopted by these malicious entities is 'typosquatting'. This is a method where they slightly modify a genuine URL, often just by changing a single letter, making it seem legitimate upon a cursory look. A revelation from CitizenLab highlighted an Iran-supported network that utilized typosquatting to impersonate major media outlets, like substituting Bloomberg's original URL with 'bloomberq[.]com' and Politico with 'policito[.]com'.
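Typosquatted domains like the ones above can often be caught automatically by comparing a suspect domain against a watchlist of legitimate outlets. A minimal sketch of this idea, using Levenshtein edit distance (the watchlist and the distance threshold here are illustrative, not any platform's actual configuration):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Illustrative watchlist of legitimate news domains.
LEGITIMATE = ["bloomberg.com", "politico.com"]

def flag_typosquats(domain: str, max_dist: int = 2) -> list[str]:
    """Return legitimate domains that the given domain closely imitates.

    A distance of 0 means the domain IS the legitimate one, so it is excluded;
    a small positive distance suggests a lookalike.
    """
    return [legit for legit in LEGITIMATE
            if 0 < edit_distance(domain, legit) <= max_dist]

print(flag_typosquats("bloomberq.com"))  # one letter swapped for bloomberg.com
print(flag_typosquats("policito.com"))   # two letters transposed in politico.com
```

In practice, such checks are combined with stronger signals (WHOIS registration age, homoglyph detection for non-ASCII lookalikes), since a pure edit-distance filter also flags legitimately similar domains.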
Decoding the Content: Disinformants Evolve, and the News Is Not Always Blatantly Fake
As platforms become more vigilant against misinformation, these propagators of false information find new avenues. They see merit in purchasing domains, which are inherently harder to suspend than social media accounts. Furthermore, these disinformants are directing their audience toward third-party messaging apps that don't actively counteract information operations. An example of this is the push towards Telegram channels.
Telegram, which is based in Dubai and offers encrypted messaging, isn't known for clamping down on disinformation. While platforms like Facebook and Twitter have shut down fake news accounts, associated Telegram channels often remain unscathed, continuing their misinformation campaigns unhindered.
Misinformation isn't always about outright lies. Of course, there are instances of blatant falsehoods, like a manipulated image of the Taliban praying for President Donald Trump's recovery from COVID-19. However, many times the information, though skewed, isn't immediately verifiable as false. We've noticed accounts disseminating hyperpartisan content, such as Saudi-affiliated Twitter profiles, which masquerade as average Libyan citizens only to post cartoons that deride countries like Turkey and Qatar.
Understanding the scope and threat of social media's influence operations is crucial. While some campaigns might have minimal impact, the overarching strategic implications are vast. These manipulations can diminish trust in the broader informational environment. Governments, as they devise countermeasures, need to comprehend the nuances of these campaigns and anticipate future strategic shifts.
The Dire Consequences of Disinformation on Enterprises
Behind these large-scale disinformation movements are real actors. They range from disgruntled ex-employees, competitors, and even criminals to humans wielding advanced software tools such as social bots. In this digital age, disseminating false information has become surprisingly easy, making it accessible to almost anyone with the intent.
Astoundingly, with as little as 40 USD/month, one can rent a botnet or even buy a thousand fake Twitter profiles on the Dark Net. This has given birth to a new business model known as disinformation-as-a-service (DaaS). Here, malevolent campaigns can be procured via the Dark Net, providing a platform for malicious entities to easily spread damaging and untrue content without significant investment.
Disinformation campaigns, although intangible, have very real repercussions, especially for businesses. According to Deloitte in 2019, one in every four companies has been victimized by disinformation attacks targeting their reputation. These campaigns can evoke potent emotional reactions, lead to significant health risks, crash stock prices, tarnish reputations, and create an enduring mistrust among stakeholders, even if later proven false.
Furthermore, they can distort the line between genuine and false company information, manipulating public perception. It's a concerning statistic to note that disinformation is currently costing the global economy a staggering $78 billion annually. As the digital realm evolves, it becomes imperative for both the public and private sectors to recognize the threats posed by misinformation, stay vigilant, and develop effective countermeasures to safeguard the integrity of information.