
Some thoughts I had in response to the recent ‘#FacebookGate’ Cambridge Analytica scandal, particularly given my own PhD research on the Minerva Initiative approach to ‘terrorism’ and ‘social contagions’.
“There are three kinds of lies: lies, damned lies, and statistics.”
Any West Wing fan or polisci student knows how central data analytics, demographic studies, and psychometric-style testing are to political work, and to campaigning especially.
The recent #FacebookGate Cambridge Analytica scandal is not problematic because data gathering of this kind was used during the Trump campaign. It is problematic because of the misuse of that data and the fraudulent, unethical methods by which it was obtained. As we have seen with the Russian interference in media in the U.S., confusion reigns over the finer points of that distinction. The confusion allows for political point-scoring and emotional appeals. This too is where propaganda thrives.
Bolsover & Howard (2017) “viewing computational propaganda only from a technical perspective—as a set of variables, models, codes, and algorithms—plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it. The very act of making something technical and impartial makes it seem inevitable and unbiased. This undermines the opportunities to argue for change in the social value and meaning of this content and the structures in which it exists. Big-data research is necessary to understand the socio-technical issue of computational propaganda and the influence of technology in politics. However, big data researchers must maintain a critical stance toward the data being used and analyzed so as to ensure that we are critiquing as we go about describing, predicting, or recommending changes. If research studies of computational propaganda and political big data do not engage with the forms of power and knowledge that produce it, then the very possibility for improving the role of social-media platforms in public life evaporates.
Definitionally, computational propaganda has two important parts: the technical and the social. Focusing on the technical, Woolley and Howard define computational propaganda as the assemblage of social-media platforms, autonomous agents, and big data tasked with the manipulation of public opinion. In contrast, the social definition of computational propaganda derives from the definition of propaganda—communications that deliberately misrepresent symbols, appealing to emotions and prejudices and bypassing rational thought, to achieve a specific goal of its creators—with computational propaganda understood as propaganda created or disseminated using computational (technical) means.
Propaganda has a long history. Scholars who study propaganda as an offline or historical phenomenon have long been split over whether the existence of propaganda is necessarily detrimental to the functioning of democracies. However, the rise of the Internet and, in particular, social media has profoundly changed the landscape of propaganda. It has opened the creation and dissemination of propaganda messages, which were once the province of states and large institutions, to a wide variety of individuals and groups. It has allowed cross-border computational propaganda and interference in domestic political processes by foreign states. The anonymity of the Internet has allowed state-produced propaganda to be presented as if it were not produced by state actors. The Internet has also provided new affordances for the efficient dissemination of propaganda, through the manipulation of the algorithms and processes that govern online information and through audience targeting based on big data analytics. The social effects of the changing nature of propaganda are only just beginning to be understood, and the advancement of this understanding is complicated by the unprecedented marrying of the social and the technical that the Internet age has enabled.”
Polling done by political parties abides by ethical rules of conduct and use. Traditionally, the data used can only be harvested by consent with full disclosure of the potential uses given in the request. There is also some uncertainty inherent in that data, as you rely on respondents giving truthful and clear responses, as opposed to lies or obfuscation.
RAND on ‘Truth Decay’: “Truth Decay matters because disagreement about basic policy facts can make it hard for governments to pass laws for the greater good of society. External adversaries can also use disinformation to delegitimise systems of government. Both can lead to a decline in trust in institutions, which in some cases can be life-threatening. For example, this distrust could lead to people avoiding government recommendations on important health and safety issues.
Truth Decay is not entirely new. We can find traces of it in both European and American history, in periods such as the Vietnam War in the 1960s and ’70s or, in more extreme cases, Germany’s move to fascism in the 1930s.
Jennifer Kavanagh and Michael Rich, the RAND report’s authors, define Truth Decay as being characterised by four key elements. The first is a heightened disagreement about facts and analytical interpretations of data. There have always been differences of opinion within the electorate across European nations. However, disagreements about objective facts have become increasingly common. A report from the RISJ on fact-checking across Europe noted that even simple factual questions can lead to disagreements, with fact-checkers coming under attack from critics who disagree with their verdicts.
The problematic relationship between fact and opinion owes itself largely to the next two elements: the blurring of the line between opinion and fact, and the increase in the volume and influence of opinions and personal experiences in relation to fact. This ambiguity and its increasing incidence raise the likelihood that audiences will encounter speculation or downright falsehoods. In turn, it becomes more difficult to identify key pieces of factual information. The 2017 Digital News Report from the RISJ found that just 40 per cent of Europeans felt that the news media did a good job in helping them distinguish fact from fiction.
The fourth and final element is the diminished trust in formerly respected institutions as sources of factual information. Europe is facing the same degradation of trust in political institutions as the United States, with the European Parliament and national parliaments held in suspicion by the public. Back in 2016, Klaus-Heiner Lehne, president of the European Court of Auditors, said that institutions in EU member states have lost the trust of citizens and that regaining it would be “a major challenge” in the years to come.
The phenomenon of Truth Decay in Europe could be seen during the 2016 EU referendum campaign in the UK and 2017 elections in France, Germany, and the Netherlands.
The 2016 EU referendum campaign was marked by huge discrepancies between “expert” calculations and political statements on both sides of the debate. The infamous claim on the side of a red battle-bus by the Leave Campaign that the UK could save a supposed £350 million every week was widely used during the referendum, but was disputed by the UK Statistics Authority as “potentially misleading.”
Meanwhile, Remain campaigners were accused of using “scare tactics” through overly gloomy economic forecasts that have not come to fruition post-Brexit. For example, one Remain Campaign claim made before the EU referendum vote that has not happened is that the UK would face an immediate recession if it voted to leave the EU.
In addition, there is evidence of Russian disinformation and interference in Western European political discourse. While uncertainties exist about their influence, it appears that these sorts of falsehoods serve to sow confusion within Western democracies.
It was reported that an online network of social media accounts linked to Russia was trying to boost messages connected to the far-right Alternative for Germany just before voting day for the 2017 German Election. There were similar stories in the 2017 Dutch elections, with the Dutch intelligence service AIVD reporting that Russia tried to influence the election in the Netherlands by spreading fake news. Finally, the National Security Agency in the United States suggested that Russia had at least some involvement in hacks that sought to discredit President Emmanuel Macron during the 2017 French election.
RAND’s Kavanagh and Rich state that Truth Decay poses a threat to the health and future of U.S. democracy. The same can be said for democracy across European nations.
While being able to provide a framework for facts, research alone cannot resolve the complex problem of Truth Decay. It will take a range of actors—researchers, policymakers, government officials, educators, journalists, and other interested individuals—to come together, debate the issues, and try to find solutions across Europe. It is in the public interest to work together and respond to the significant challenge of Truth Decay. And then, maybe, the truth will put on its racing shoes and sprint past the falsehoods of the day to take pride of place in the public discourse.”
With the advent of better technological methods for psychometric testing, and the opportunity to access ever larger amounts and types of public data, consent has become a ‘blurred lines’ issue. We rely on the statements of corporations, which are increasingly encroaching on our personal spaces, about how they use the data they gather from us and about their ethical practices.
When you add greed (money/donations), political power (access), and a lack of regulatory oversight into the mix, the situation becomes murkier still. Thankfully, journalists such as Carole Cadwalladr are monitoring and exposing such practices on our behalf, much as the Watergate journalists did before them.
As an outcome of my PhD thesis work, I created (within the last two years) a working draft ‘Teaching toolkit’ related to scholar-activism. One of the ‘classes’ was on propaganda, with the option of expanding it to cover advertising and to connect with other disciplines outside IR. I am not the only one to have noted the necessity, post-2016 (post-2014 if we include knowledge of Russian interference), of looking back to the lessons of history and the development of propaganda over time and in connection with embedded influences in our society. Here, for example, the author refers to Huxley’s rational and non-rational propaganda.
Who or what is SCL? “SCL Group provides data, analytics and strategy to governments and military organisations worldwide” reads the first line of its website. “For over 25 years, we have conducted behavioural change programmes in over 60 countries & have been formally recognised for our work in defence and social change.”
“Of course, military propaganda is nothing new. Nor is the extent to which it has evolved alongside changes in media technology and economics. The film Citizen Kane tells a fictionalised version of the first tabloid (or, as Americans call it, ‘yellow journalism’) war: how the circulation battle between William Randolph Hearst’s New York Journal and Joseph Pulitzer’s New York World arguably drove the US into the 1898 Spanish-American War. It was during this affair that Hearst reportedly told his correspondent, “You furnish the pictures and I’ll furnish the war”, as parodied in Evelyn Waugh’s Scoop. But after the propaganda disaster of the Tet Offensive in Vietnam softened domestic support for the war, military planners began to devise new ways to control media reporting.”
I created the toolkit for a couple of reasons, one being that my thesis analysis highlighted the potential for other Russian interference links to be made in the U.S. (dating back to 2014) and the troubling military-industrial complex connections inherent in some Minerva Initiative funding/research arrangements for HE social science. Cambridge Analytica is not the only private data-mining firm we need to be concerned about.
The Rise of the Weaponized AI Propaganda Machine
“The company is owned and controlled by conservative and alt-right interests that are also deeply entwined in the Trump administration. The Mercer family is both a major owner of Cambridge Analytica and one of Trump’s biggest donors. Steve Bannon, in addition to acting as Trump’s Chief Strategist and a member of the White House Security Council, is a Cambridge Analytica board member. Until recently, Analytica’s CTO was the acting CTO at the Republican National Convention.
Presumably because of its alliances, Analytica has declined to work on any Democratic campaigns — at least in the U.S. It is, however, in final talks to help Trump manage public opinion around his presidential policies and to expand sales for the Trump Organization. Cambridge Analytica is now expanding aggressively into U.S. commercial markets and is also meeting with right-wing parties and governments in Europe, Asia, and Latin America.
Cambridge Analytica isn’t the only company that could pull this off — but it is the most powerful right now. Understanding Cambridge Analytica and the bigger AI Propaganda Machine is essential for anyone who wants to understand modern political power, build a movement, or keep from being manipulated. The Weaponized AI Propaganda Machine it represents has become the new prerequisite for political success in a world of polarization, isolation, trolls, and dark posts.
There’s been a wave of reporting on Cambridge Analytica itself and solid coverage of individual aspects of the machine — bots, fake news, microtargeting — but none so far (that we have seen) that portrays the intense collective power of these technologies or the frightening level of influence they’re likely to have on future elections.
In the past, political messaging and propaganda battles were arms races to weaponize narrative through new mediums — waged in print, on the radio, and on TV. This new wave has brought the world something exponentially more insidious — personalized, adaptive, and ultimately addictive propaganda. Silicon Valley spent the last ten years building platforms whose natural end state is digital addiction. In 2016, Trump and his allies hijacked them.
We have entered a new political age. At Scout, we believe that the future of constructive, civic dialogue and free and open elections depends on our ability to understand and anticipate it.”
The difference between traditional political polling and statistics and Cambridge Analytica-style data analytics is that we are no longer a ‘stereotyped’ anonymous number in the battle for power; we are a tracked and individualized dataset to be exploited covertly for political gain and corporate greed. Analytics firms can now follow our likes and site visits and create an eerily accurate portrayal of us, our lives, and our choices. This is then used to bombard us with tailored ‘fake news’ advertising (propaganda).
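To make that shift from aggregate polling to individual profiling concrete, here is a minimal, hypothetical sketch of like-based trait prediction in Python. The pages, trait labels, and model choice are all invented for illustration; the point is only the general shape of the technique, in which a per-person profile is estimated from tracked behaviour and can then steer which tailored message that person sees.

```python
# Hypothetical sketch of like-based profiling; all data and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

pages = ["page_A", "page_B", "page_C", "page_D"]  # tracked pages/sites (illustrative)

# Rows = people, columns = whether each page was liked/visited (1) or not (0).
likes = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 1, 1],
])

# A binary trait label per person (e.g. "high" vs "low" on some survey scale),
# of the kind reportedly obtained from personality-quiz respondents.
trait = np.array([1, 1, 0, 0, 1, 0])

# Fit a simple classifier linking likes to the trait.
model = LogisticRegression().fit(likes, trait)

# A new, tracked individual: their likes alone yield a per-person estimate,
# which is what distinguishes this from an aggregate polling statistic.
new_person = np.array([[1, 0, 1, 0]])
print("Estimated probability of trait:", model.predict_proba(new_person)[0, 1])
```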
Sgt. Brockman (2017) “A single application of clustering identifies groups of common interests. Clustering applied a second time determines sub-group interests, thereby exposing community fault lines.
Divisive propaganda exploits these fault lines. For instance, we can put out propaganda that Chocolate is the best ice cream flavor in an attempt to isolate Amy from Bob and Carl.”
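As a minimal sketch of the two-pass clustering Brockman describes, the snippet below applies k-means once to find broad communities of common interest and then again within each community to surface sub-group differences. The people, ‘interest’ features, and cluster counts are invented for illustration.

```python
# Illustrative two-pass clustering; data and cluster counts are invented.
import numpy as np
from sklearn.cluster import KMeans

people = ["Amy", "Bob", "Carl", "Dana", "Eli", "Fay"]

# Toy interest vectors: columns might be topics scored 0-1 per person.
interests = np.array([
    [0.9, 0.8, 0.1, 0.2],   # Amy
    [0.2, 0.8, 0.2, 0.1],   # Bob
    [0.1, 0.9, 0.1, 0.3],   # Carl
    [0.8, 0.1, 0.9, 0.7],   # Dana
    [0.7, 0.2, 0.8, 0.9],   # Eli
    [0.9, 0.1, 0.9, 0.8],   # Fay
])

# Pass 1: cluster everyone into broad communities of common interest.
top_level = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(interests)

# Pass 2: re-cluster within each community to expose sub-group interests,
# i.e. the "fault lines" that divisive messaging would target.
for community in np.unique(top_level):
    members = np.where(top_level == community)[0]
    if len(members) < 2:
        continue  # too few members to split further
    sub = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(interests[members])
    print(f"Community {community}:")
    for idx, label in zip(members, sub):
        print(f"  {people[idx]} -> sub-group {label}")
```

In the quoted chocolate example, it is this second pass that would place Amy in a different sub-group from Bob and Carl within the same broad community.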
My research on Minerva and the militarization of knowledge production in terrorism studies examined the use of such ‘data analytics’ research for the purposes of the US government and security infrastructure, via the use of Facebook data in a Cornell University research study.
Here’s how they frame one of the key areas they’re looking to explore:
“Research on belief formation and the spread of ideas may help analysts, policy makers and trainers better understand the impact of operations on seemingly disparate populations. It may also inform the development of countermeasures to reduce the likelihood of militant behaviors.”
As the Defense One (2014) article further states:
For all our technological ability to see the earth and nearly everyone from every direction at once, we’re still far away from a real understanding of human motivation from a national security perspective.
That gap in intent intelligence speaks to a real Defense Department need, according to Fitzgerald. It’s why understanding the sociological roots of movements like IS is very much military business.
“As insurgencies and ethnic, religious, and class-based movements reshape the political and economic landscape in many regions vital to U.S. national security, it has become clear that decreasing terrorism and political violence requires an understanding of the underlying forces that shape motivations and (importantly) mobilize action.”
Whereas such articles in 2014 tended to explore Minerva’s use of this kind of data research in places such as Iraq and other more traditional conflict zones, my research questioned the potential for its use domestically, on U.S. soil. Given the way the Cambridge Analytica story is unfolding, I was right to worry.
NYRDaily: “Representatives have boasted that their list of past and current clients includes the British Ministry of Defense, the US Department of Defense, the US Department of State, the CIA, the Defense Intelligence Agency, and NATO. Nevertheless, they became recognized for just one influence campaign: the one that helped Donald Trump get elected president of the United States.“
Thankfully, my preference for a feminist approach that embraces complexity in both analysis and results prepared me somewhat. I worry that the complexity of this scandal and its ramifications may be too much for the more traditionalist polisci and IR crowd. If so, how are we to stop this from happening again?
As Bolsover and Howard (2017) suggest above, an interdisciplinary approach is vital, as is a human rights-based approach:
As this Council of Europe report highlights: “What information is made available to users on their Facebook newsfeeds? On what basis is a person’s risk profile determined and what profiles provide best chances for obtaining health insurance, or employment, or for being regarded a potential criminal or terrorist? Automated data processing techniques, such as algorithms, do not only enable internet users to seek and access information, they are also increasingly used in decision-making processes that were previously entirely in the remit of human beings. Algorithms may be used to prepare human decisions or to take them immediately through automated means. In fact, boundaries between human and automated decision-making are often blurred, resulting in the notion of ‘quasi- or semi-automated decision-making’. The use of algorithms raises considerable challenges not only for the specific policy area in which they are operated, but also for society as a whole. How to safeguard human rights and human dignity in the face of rapidly changing technologies? The right to life, the right to fair trial and the presumption of innocence, the right to privacy and freedom of expression, workers’ rights, the right to free elections, even the rule of law itself are all impacted. Responding to challenges associated with ‘algorithms’ used by the public and private sector, in particular by internet platforms, is currently one of the most hotly debated questions.

There is an increasing perception that “software is eating the world” (Andreessen 2011), as human beings feel that they have no control over and do not understand the technical systems that surround them. While disconcerting, it is not always negative. It is a by-product of this phase of modern life in which globalised economic and technological developments produce large numbers of software-driven technical artefacts and “coded objects” (Kitchin and Dodge 2011) embed key human rights relevant decision-making capacities. Which split-second choices should a software-driven vehicle make if it knows it is going to crash? Is racial, ethnic or gender bias more likely or less likely in an automated system? Are societal inequalities merely replicated or amplified through automated data processing techniques?

Historically, private companies decided how to develop software in line with the economic, legal and ethical frameworks they deemed appropriate. While there are emerging frameworks for the development of systems and processes that lead to algorithmic decision-making or for the implementation thereof, they are still at an early stage and do usually not explicitly address human rights concerns. In fact, it is uncertain whether and to what extent existing legal concepts can adequately capture the ethical challenges posed by algorithms. Moreover, it is unclear whether a normative framework regarding the use of algorithms or an effective regulation of automated data processing techniques is even feasible, as many technologies based on algorithms are still in their infancy and a greater understanding of their societal implications is needed. Issues arising from use of algorithms as part of the decision-making process are manifold and complex. At the same time, the debate about algorithms and their possible consequences for individuals, groups and societies is at an early stage. This should not, however, prevent efforts towards understanding what algorithms actually do, which consequences for society flow from them and how possible human rights concerns could be addressed.

This study identifies a number of human rights concerns triggered by the increasing role of algorithms in decision-making. Depending on the types of functions performed by algorithms and the level of abstraction and complexity of the automated processing that is used, their impact on the exercise of human rights will vary. Who is responsible when human rights are infringed based on algorithmically-prepared decisions? The person who programmed the algorithm, the operator of the algorithm, or the human being who implemented the decision? Is there a difference between such a decision and a human-made decision? What effects does it have on the way in which human rights are exercised and guaranteed in accordance with well-established human rights standards, including rule of law principles and judiciary processes?

Challenges related to the human rights impact of algorithms and automated data processing techniques are bound to grow as related systems are becoming increasingly complex and interact with each other’s outputs in ways that become progressively impenetrable to the human mind. This report does not intend to comprehensively address all aspects related to the human rights impacts of algorithms but rather seeks to map out some of the main current concerns from the Council of Europe’s human rights perspective, and to look at possible regulatory options that member states may consider to minimise adverse effects, or to promote good practices. A number of related themes will require more detailed research to more systematically assess their challenges and potential from a human rights point of view, including questions related to big data processing, machine learning, artificial intelligence and the Internet of things.”
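To illustrate the report’s notion of ‘quasi- or semi-automated decision-making’, here is a hypothetical sketch in which an upstream model’s risk score settles most cases outright and a human reviewer only sees an ambiguous middle band. The scoring, thresholds, and cases are all invented, but the structure shows how the boundary between human and automated decisions blurs in practice.

```python
# Hypothetical sketch of semi-automated decision-making; scores, thresholds
# and cases are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    risk_score: float  # produced upstream by some automated model

AUTO_APPROVE = 0.2  # below this, the system decides with no human input
AUTO_FLAG = 0.8     # above this, the system decides with no human input

def decide(case: Case) -> str:
    """Route a case: only the ambiguous middle band reaches a human."""
    if case.risk_score < AUTO_APPROVE:
        return "approved automatically"
    if case.risk_score > AUTO_FLAG:
        return "flagged automatically"
    return "sent to a human reviewer"

cases = [Case("applicant_1", 0.05), Case("applicant_2", 0.55), Case("applicant_3", 0.93)]
for c in cases:
    print(c.name, "->", decide(c))
```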