Why The Disinformation Revolution Is Based On Intent. Here’s What Can Be Done About It.

COVID-19, the 2020 presidential election, conspiracy theories, natural disasters, economic disruption—all of these are happening in and around the United States simultaneously. However, there is another enormous threat to the country’s businesses, safety, and security. “Disinformation” and “misinformation” are a growing problem impacting many aspects of our lives, including (but not limited to) the aforementioned threats. Meet Wasim Khaled, a man on a mission to protect us from the pervasive dissemination of purposely false information.

In 2014 Khaled, along with his partner and CTO, Dr. Naushad UzZaman, founded Blackbird.AI. According to Khaled, “We both felt that disinformation was one of the greatest global threats of our time. Since 2014, I had personally been observing the incredible ease with which intelligent people were falling prey to obvious falsehoods and conspiracies online. Blackbird.AI has understood from early on that social media platforms were not going to be able to solve this problem on their own, that people were increasingly reliant on social media for their information diets and that disinformation in the digital age was a threat to democracy, societal cohesion and enterprise organizations. Dedicating our lives to this space was a no-brainer.”

You may assume this means that they are fighting “fake news.” That is not entirely correct. Khaled has observed that “fake news” has become an overused term that can be applied to so many situations that it has become almost meaningless. It refers to a wide range of items, from false stories, partially false stories, and altered text and video, to anything an audience member dislikes or does not want to believe. In other words, it has become a catch-all for things people find disagreeable.

Khaled and Blackbird are fighting against “Disinformation,” or false information disseminated by someone who knows it is false. It is a deliberate lie, and it points to people being actively disinformed by malicious actors. “Misinformation” is information that is false, but the person who is disseminating it believes that it is true.

At times, the distinction between dis- and misinformation can blur. For example, once a disinformation message has been distributed, it can be reproduced and redistributed endlessly by many different actors who have varied motives. A social media post can be distributed by several communities (disinformation), leading its message to be picked up and reproduced by a mainstream media that is operating without sufficient scrutiny, i.e. “misinformation.” It can then be distributed further to still other communities. This is the nexus where disinformation can morph into misinformation—the disseminator is not even aware of their part in spreading the falsehood. But the net result may harm companies, democracies, or global interests.

“At Blackbird.AI, disinformation must be quantifiable and measurable, because to us it’s clear that you can’t begin to mitigate a problem until you can measure it. After years of research and development, we developed the Blackbird.AI Risk Index (BRI) to create a new rubric for risk that helps automate the measurement of manipulation-driven harm in the new information ecosystem. Unlike the traditional indicators from social listening, we look into unique indicators like Manipulation, Hyper-Partisanship, Bot-like activity, Toxicity, Stance, Cohorts of Interest, and a variety of other algorithmic risk indices to create a real-time, scalable operating picture for organizations,” says Khaled.
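To make the idea of a composite risk index concrete, here is a minimal sketch of how several per-signal scores could be combined into a single number. The indicator names, weights, and scoring function are illustrative assumptions only; Blackbird.AI’s actual BRI model is proprietary and almost certainly far more sophisticated.

```python
# Hypothetical composite content-risk score in the spirit of a risk index.
# All indicator names and weights here are illustrative assumptions,
# not Blackbird.AI's actual methodology.

def risk_score(indicators: dict, weights: dict) -> float:
    """Combine per-signal scores (each in [0, 1]) into one weighted risk value."""
    total_weight = sum(weights.values())
    return sum(indicators[name] * w for name, w in weights.items()) / total_weight

# Example weights: emphasize manipulation and bot-like activity.
weights = {"manipulation": 0.35, "bot_likelihood": 0.25,
           "hyper_partisanship": 0.20, "toxicity": 0.20}

# Scores for a single hypothetical post, as produced upstream by classifiers.
post = {"manipulation": 0.9, "bot_likelihood": 0.8,
        "hyper_partisanship": 0.4, "toxicity": 0.1}

print(round(risk_score(post, weights), 3))
```

The key design point this sketch captures is that risk is not a single detector’s verdict but a fusion of independent signals, so a post can score high overall even when no one indicator is conclusive on its own.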

A good example of Khaled’s concern is the lack of consistency in COVID-19 information. Conspiracy theories take flight quickly; they are a good example of how dangerous disinformation and misinformation can be, and how quickly falsehoods can come to appear factual. The outbreak is an evolving human tragedy affecting millions of people worldwide. In today’s digital media landscape, any major event, especially one like this, allows opportunistic disinformation dynamics to flourish, trend, and seamlessly harm intended and unintended targets, such as brands, governments, and the public.

The COVID-19 outbreak is an unprecedented global event. Constantly trending events across social media outlets have dominated the news at a faster pace than ever before. The head of the World Health Organization (WHO) has called the spread of false information about COVID-19 an “infodemic” that has resulted in widespread confusion. What can we believe? What can’t we? Facing such questions in the midst of an unprecedented, rapidly shifting public health crisis is not just dangerous; it is a disinformation war on safety and global health.

A solid understanding of the scope and scale of disinformation, for example pertaining to COVID-19, is critical to every aspect of our lives, both personal and professional. This is but one example of the modern disinformation campaign’s power as it seeks to modify perceptions and beliefs, influencing the public, decision makers, and even nation-states. Such attacks will continue as long as the pandemic does, and decisions based on dis- and misinformation will continue to endanger communities around the world.

The rise of disinformation should also be a major concern of corporations and democracies around the world. Organizations that work like public relations firms use the same tools and techniques as adversarial nation states to perform digital “takedowns” of companies, markets, politicians and policies. Most enterprise organizations still think disinformation attacks are primarily a political threat. Disturbingly, that is not true.

Fortune 500 and Global 500 organizations have only recently begun to realize how much risk disinformation really poses for them. Most large brands have some form of media monitoring in place. However, these systems are not able to sift through large volumes of content to identify manipulated content. For over a decade they have relied on human analysts and analytics tools that look at naive leading indicators of risk, such as volume, engagement, and sentiment. These systems are extremely vulnerable to manipulation, which puts all data-driven decision making at risk. A recent study from the University of Baltimore found that disinformation costs companies $78 billion in annual losses in the United States alone.
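A small sketch shows why naive indicators are so easy to game. Here, a handful of coordinated bot-like accounts flip the apparent sentiment around a topic; filtering on a bot-likelihood score changes the picture entirely. The data, thresholds, and field names are invented for illustration and do not reflect any vendor’s actual pipeline.

```python
# Illustrative example (invented data): average sentiment, a common
# naive indicator, can be inflated by a few coordinated bot-like accounts.

posts = [
    # (author, sentiment in [-1, 1], bot_likelihood in [0, 1])
    ("organic_1", -0.6, 0.05),
    ("organic_2", -0.4, 0.10),
    ("botnet_a",   0.9, 0.95),
    ("botnet_b",   0.9, 0.97),
    ("botnet_c",   0.9, 0.92),
]

def avg_sentiment(items):
    """Mean sentiment over a list of (author, sentiment, bot_likelihood) tuples."""
    return sum(s for _, s, _ in items) / len(items)

# Naive view: every post counts equally, so the botnet dominates.
naive = avg_sentiment(posts)

# Manipulation-aware view: drop posts whose bot likelihood exceeds 0.5.
filtered = avg_sentiment([p for p in posts if p[2] < 0.5])

print(f"naive={naive:.2f} filtered={filtered:.2f}")
```

With just three synthetic accounts the naive average reads positive while the organic audience is actually negative, which is exactly the kind of distortion that makes volume- and sentiment-based dashboards unreliable under attack.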

Blackbird.AI’s Constellation Risk Engine is a deep learning platform that enables organizations to monitor narratives and/or disinformation that can adversely impact them. By filtering through synthetic content, a brand can identify narratives that are deceptive in nature to ensure that brand value is not affected by falsehoods. Brands can also get more targeted and direct access to customers by filtering through manipulated content to identify authentic audiences. 

The Blackbird platform is complementary to other monitoring tools—we are tuned to surface manipulated content. As part of the communications toolkit of products/services, it helps analytics, policy and communications teams understand the battlegrounds for malintent, so that emerging threats can be dealt with quickly before brand equity takes a hit.

When Khaled discusses Blackbird’s core capabilities, he speaks with pride. For him and his team, their mission is one with purpose. “The Blackbird.AI team is made up of the sharpest, most dedicated people I have ever had the privilege of working with. Seeing the evolution of our state-of-the-art artificial intelligence technologies, designed to detect and thwart a harmful influence campaign before it can do damage is by far the most important and critical nature of our work at Blackbird.AI. We are on a mission to address the global problem of disinformation and the impact and risk to our present and future and feel a major responsibility to make a difference and help as many organizations and people around the world as we possibly can.”

Blackbird.AI set out to overcome the vulnerabilities of the information age. First, they attracted experts from around the world and created an interdisciplinary team with varied backgrounds, which gives them enormous value and important perspectives. Many team members have decades of experience exposing harmful actors and protecting the sanctity of information. This approach allowed them to develop and evolve unique techniques for disinformation detection and mitigation. They have spent years of research and development building a finely tuned AI platform that offers clients unparalleled speed, objectivity, and risk assessment.

For Wasim Khaled, Blackbird’s mission is also personal. “Blackbird.AI started from a place of purpose and has been mission driven since day one. We firmly believe that we have a responsibility to society and that the power to fight disinformation is vitally needed by governments, communities, and individuals to create an empowered and critical thinking society in a time of great turmoil and uncertainty.”