The Emerging Risk of Virtual Societal Warfare
Social Manipulation in a Changing Information Environment

The year 2016 and beyond saw an explosion of interest in issues of disinformation, propaganda, information manipulation and fakery, “fake news,” “Truth Decay,” and related trends—a broad phenomenon that can be termed hostile social manipulation. In this study, we define this concept as the purposeful, systematic generation and dissemination of information to produce harmful social, political, and economic outcomes in a target country by affecting beliefs, attitudes, and behavior. Examples of this rising challenge include Russian efforts to influence elections and sow discord in the West through propaganda and disinformation; the role of social media platforms such as Facebook in spreading such misinformation; and burgeoning Chinese programs to shape regional narratives and gain political leverage in specific countries. U.S. intelligence services have concluded that Russia employed such techniques to influence the 2016 election, and Moscow continues to employ them—sometimes brazenly despite U.S. warnings—in the United States and Europe.

As significant as these developments have been, they may only represent the beginning of what an aggressive nation can accomplish with techniques and technologies designed to disrupt and shape the information environment of a target country. This report’s primary conclusion is that, as significant as social manipulation efforts have already been, the United States and other democracies have only glimpsed the tip of the iceberg of what these approaches may someday be able to achieve.

The intersection of multiple emerging technologies, from artificial intelligence to virtual reality and personalized messaging, is creating the potential for aggressors to change people’s fundamental social reality. Two well-known information-related threats are classic cyberattacks on major infrastructure sites and internet-enabled disinformation, but this report calls attention to the burgeoning landscape in between—areas of the emerging information-based foundation of society that are vulnerable to persistent disruption and manipulation. Especially with the rise of the “Internet of Things” (IoT) and algorithmic and big-data–driven decisionmaking, advanced societies are becoming perilously dependent on networks of information and data gathering, exchange, communication, analysis, and decisionmaking. These risks are especially significant today because of the changing nature of the infosphere (the information environment governing postmodern democracies), which is characterized, among other trends, by the fragmentation of authority, the rise of silos of belief, and a persistent “trolling” ethic of cynical and aggressive harassment in the name of an amorphous social dissent.

As much as it feels to citizens of advanced economies that we already live in an information society, we have in fact seen only the first hints of this transformation. And that transition will open unprecedented opportunities for hostile rivals—state or nonstate—to reach into those societies and cause disruption, delay, inefficiency, and active harm. It will open the door to a form of virtual societal aggression that will make countries more persistently vulnerable than they have been for generations. Such virtual aggression will force a rethinking of the character of national security and steps taken to protect it.

Traditional forms of information-based social manipulation have focused on disseminating narratives—through, for example, propaganda, public diplomacy, and social media posts—to affect beliefs. Classic hostile cyberattacks have often used information networks as a highway to attack physical targets, such as banks, power stations, or centrifuges. The evolution of advanced information environments is rapidly creating a third category of possible aggression: efforts to manipulate or disrupt the information foundations of the effective functioning of economic and social systems. Aggressors will increasingly have the opportunity, not merely to spread disinformation or favorable narratives or damage physical infrastructure, but to skew and damage the functioning of the massive databases, algorithms, and networks of computerized, computer-enhanced, or computer-dependent things on which modern societies will utterly depend.

What we are calling virtual societal warfare can involve any combination of a broad range of techniques, including the following:
• deploying classic propaganda, influence, and disinformation operations through multiple channels, including social media
• generating massive amounts of highly plausible fabricated video and audio material to reduce confidence in shared reality
• discrediting key mediating institutions that are capable of distinguishing between true and false information
• corrupting or manipulating the databases on which major components of the economy increasingly rely
• manipulating or degrading systems of algorithmic decisionmaking, both to impair day-to-day government and corporate operations and to intensify loss of faith in institutions, as well as increase social grievances and polarization
• using the vulnerabilities inherent in the connections among the exploding IoT to create disruption and damage
• hijacking virtual and augmented reality systems to create disruption or mental anguish or to strengthen certain narratives
• inserting commands into chatbot-style interactive systems to generate inefficiencies and in some cases personal frustration and anxiety.
In many cases, the primary goal of such aggression may not be physical harm so much as confusion and an accelerating loss of confidence in the operation of major social institutions. And the emergence of information-dependent societies will broaden and deepen the array of social manipulation techniques available to attackers, allowing them to seek highly tailored combinations of physical damage and changes in attitudes. The role of trust is a consistent theme in this analysis: Attacks on the effective operation of information systems strike directly at levels of social trust, creating the sense that the institutions and processes of advanced societies cannot be trusted and generating a sense of persistent insecurity and anxiety. ...

To shed light on how these techniques might evolve, RAND researchers built on a first-phase analysis from this project that focused on Russian and Chinese efforts at hostile social manipulation. This project was not yet aimed at solutions but rather at understanding—that is, comprehending the character of the emerging challenge. It was designed to set the stage for more-detailed discussion of potential responses to the threat. But one lesson of this phase of research is that many of these trends, technologies, and capabilities remain poorly understood, and some possible responses have potentially dramatic implications for the operation of the information environment, the character of free speech, and other issues. It would be dangerous to begin promulgating possible solutions without rigorous analysis of their likely consequences. This report is designed to set the stage for such work.


These categories represent only a broad sketch of the sort of response likely to be required for democracies to armor themselves against the potential threat of virtual societal warfare. These emerging forms of aggression represent a significant danger to advanced democracies, a form of national security threat that has not been seen before. Especially in the nuclear age, and in an era when a general global consensus has prevailed against outright territorial aggression, large-scale invasions have become mostly a thing of the past. But while armies can be deterred, gradual, low-level hostile manipulation of the infosphere and larger social topography of nations may be the new frontier of aggression. The potential for virtual societal warfare is certainly emerging. The only question today is whether democracies will band together to control this threat and defend themselves against it.