Some thoughts I had in response to the recent ‘#FacebookGate’ Cambridge Analytica scandal, particularly given my own PhD research on the Minerva Initiative approach to ‘terrorism’ and ‘social contagions’.
Any West Wing fan and/or polisci student knows the centrality of data analytics, demographic studies, and psychometric-style testing to political work, and especially to campaigning.
The recent #FacebookGate Cambridge Analytica scandal is not problematic because such data gathering was used during the Trump campaign. It is problematic because of the misuse: the fraudulent and unethical methods by which the data was obtained. As we have seen with Russian interference in U.S. media, confusion reigns over the finer points of that distinction. That confusion allows for political point-scoring and emotional appeals. This, too, is where propaganda thrives.
Polling done by political parties abides by ethical rules of conduct and use. Traditionally, the data can only be harvested with consent, with full disclosure of its potential uses given in the request. There is also some uncertainty inherent in that data, as you rely on respondents giving truthful and clear answers rather than lies or obfuscation.
With the advent of better technological methods for psychometric testing, and the opportunities to access ever larger amounts and types of public data, consent has become a ‘blurred lines’ issue. We rely on corporations’ statements about their use of our gathered data and their ethical practices, corporations which are increasingly encroaching on our personal spaces.
When you add greed (money/donations), political power (access), and a lack of regulatory oversight into the mix, the situation becomes murkier still. Thankfully, journalists such as Carole Cadwalladr are monitoring and exposing such practices on our behalf, much as the Watergate journalists did before them.
As an outcome of my PhD thesis work, I created (within the last two years) a working draft ‘Teaching toolkit’ related to scholar-activism. One of the ‘classes’ was on propaganda, with the option of expanding it to cover advertising and connect with other disciplines outside IR. I am not the only one who has noted the necessity, post-2016 (post-2014 if including knowledge of Russian interference), of looking back to the lessons of history and the development of propaganda over time, in connection with embedded influences in our society. Here, for example, the author refers to Huxley’s rational and non-rational propaganda.
Who or what is SCL?…: “SCL Group provides data, analytics and strategy to governments and military organisations worldwide” reads the first line of its website. “For over 25 years, we have conducted behavioural change programmes in over 60 countries & have been formally recognised for our work in defence and social change.”
“Of course, military propaganda was nothing new. Nor is the extent to which it has evolved alongside changes in media technology and economics. The film Citizen Kane tells a fictionalised version of the first tabloid (or, as Americans call it, ‘yellow journalism’) war: how the circulation battle between William Randolph Hearst’s New York Journal and Joseph Pulitzer’s New York World arguably drove the US into the 1898 Spanish-American War. It was during this affair that Hearst reportedly told his correspondent, “You furnish the pictures and I’ll furnish the war”, as parodied in Evelyn Waugh’s Scoop. But after the propaganda disaster of the Tet Offensive in Vietnam softened domestic support for the war, military planners began to devise new ways to control media reporting.”
I created the toolkit for a couple of reasons, one being the result of my thesis analysis, which highlighted the potential for other Russian interference links to be made in the U.S. (dating back to 2014) and the troubling military-industrial complex connections inherent in some Minerva Initiative funding/research arrangements for HE social science. Cambridge Analytica is not the only private data-mining firm we need to be concerned about.
“The company is owned and controlled by conservative and alt-right interests that are also deeply entwined in the Trump administration. The Mercer family is both a major owner of Cambridge Analytica and one of Trump’s biggest donors. Steve Bannon, in addition to acting as Trump’s Chief Strategist and a member of the White House Security Council, is a Cambridge Analytica board member. Until recently, Analytica’s CTO was the acting CTO at the Republican National Convention.
Presumably because of its alliances, Analytica has declined to work on any Democratic campaigns — at least in the U.S. It is, however, in final talks to help Trump manage public opinion around his presidential policies and to expand sales for the Trump Organization. Cambridge Analytica is now expanding aggressively into U.S. commercial markets and is also meeting with right-wing parties and governments in Europe, Asia, and Latin America.
Cambridge Analytica isn’t the only company that could pull this off — but it is the most powerful right now. Understanding Cambridge Analytica and the bigger AI Propaganda Machine is essential for anyone who wants to understand modern political power, build a movement, or keep from being manipulated. The Weaponized AI Propaganda Machine it represents has become the new prerequisite for political success in a world of polarization, isolation, trolls, and dark posts.
There’s been a wave of reporting on Cambridge Analytica itself and solid coverage of individual aspects of the machine — bots, fake news, microtargeting — but none so far (that we have seen) that portrays the intense collective power of these technologies or the frightening level of influence they’re likely to have on future elections.
In the past, political messaging and propaganda battles were arms races to weaponize narrative through new mediums — waged in print, on the radio, and on TV. This new wave has brought the world something exponentially more insidious — personalized, adaptive, and ultimately addictive propaganda. Silicon Valley spent the last ten years building platforms whose natural end state is digital addiction. In 2016, Trump and his allies hijacked them.
We have entered a new political age. At Scout, we believe that the future of constructive, civic dialogue and free and open elections depends on our ability to understand and anticipate it.”
The difference between traditional political polling and statistics and Cambridge Analytica-style data analytics is that we are no longer a ‘stereotyped’ anonymous number in the battle for power – we are a tracked and individualized dataset to be exploited covertly for political gain and corporate greed. Analytics firms can now follow our likes and site visits and create an eerily accurate portrayal of us, our lives and our choices. This is then used to bombard us with tailored advertising and ‘fake news’ (propaganda).
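To make that contrast concrete, the core mechanic of like-based profiling can be sketched in a few lines. This is a minimal, hypothetical illustration in the spirit of the psychometrics research such firms drew upon: every page name, weight, user and threshold below is invented purely for illustration, and in a real system the per-page weights would be learned by regression over millions of (likes, survey-trait) pairs rather than written by hand.

```python
# Hypothetical sketch of like-based psychometric scoring.
# In deployed systems the per-page weights are learned from data;
# here they are invented for illustration only.

TRAIT_WEIGHTS = {            # page liked -> contribution to an "openness" score
    "PhilosophyDaily": 0.8,
    "ModernArtFans":   0.6,
    "MonsterTrucks":  -0.4,
    "LocalNewsOnly":  -0.2,
}

def openness_score(likes):
    """Sum the weights of the pages a user has liked (unknown pages count 0)."""
    return sum(TRAIT_WEIGHTS.get(page, 0.0) for page in likes)

def target_group(users, threshold=0.5):
    """Select users whose predicted trait exceeds a cut-off,
    e.g. to serve them a tailored political ad."""
    return [name for name, likes in users.items()
            if openness_score(likes) > threshold]

users = {
    "alice": {"PhilosophyDaily", "ModernArtFans"},   # score 1.4 -> targeted
    "bob":   {"MonsterTrucks", "LocalNewsOnly"},     # score -0.6 -> ignored
}
print(target_group(users))
```

The point of the sketch is the covert individualization: no survey is ever answered, yet each user receives a personal score and, on that basis, personally tailored persuasion.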
My research on Minerva and the militarization of knowledge production in terrorism studies examined the use of such ‘data analytics’ research for the purposes of the US government and security infrastructure – via the use of Facebook data in a Cornell University research study.
“Research on belief formation and the spread of ideas may help analysts, policy makers and trainers better understand the impact of operations on seemingly disparate populations. It may also inform the development of countermeasures to reduce the likelihood of militant behaviors.”
As the Defense One (2014) article further states:
For all our technological ability to see the earth and nearly everyone from every direction at once, we’re still far away from a real understanding of human motivation from a national security perspective.
That gap in intent intelligence speaks to a real Defense Department need, according to Fitzgerald. It’s why understanding the sociological roots of movements like IS is very much military business.
“As insurgencies and ethnic, religious, and class-based movements reshape the political and economic landscape in many regions vital to U.S. national security, it has become clear that decreasing terrorism and political violence requires an understanding of the underlying forces that shape motivations and (importantly) mobilize action.”
Whereas such articles in 2014 tended to explore Minerva’s use of such data research in places like Iraq and other more traditional conflict zones, my research questioned the potential for its use domestically, on U.S. soil. Given the way the Cambridge Analytica story is unfolding, I was right to worry.
NYRDaily: “Representatives have boasted that their list of past and current clients includes the British Ministry of Defense, the US Department of Defense, the US Department of State, the CIA, the Defense Intelligence Agency, and NATO. Nevertheless, they became recognized for just one influence campaign: the one that helped Donald Trump get elected president of the United States.”
Thankfully, my preference for a feminist approach that embraces complexity in analysis and results prepared me somewhat – I worry that the complexity of this scandal and its ramifications may be too much for the more traditionalist polisci and IR crowd. If so, how are we to stop this from happening again?
As Bolsover and Howard (2017) suggest above, an interdisciplinary approach is vital, as is a human rights-based approach:
As this Council of Europe report highlights: “What information is made available to users on their Facebook newsfeeds? On what basis is a person’s risk profile determined, and what profiles provide the best chances for obtaining health insurance, or employment, or for being regarded a potential criminal or terrorist?

Automated data processing techniques, such as algorithms, do not only enable internet users to seek and access information; they are also increasingly used in decision-making processes that were previously entirely in the remit of human beings. Algorithms may be used to prepare human decisions or to take them immediately through automated means. In fact, boundaries between human and automated decision-making are often blurred, resulting in the notion of ‘quasi- or semi-automated decision-making’. The use of algorithms raises considerable challenges not only for the specific policy area in which they are operated, but also for society as a whole. How to safeguard human rights and human dignity in the face of rapidly changing technologies? The right to life, the right to fair trial and the presumption of innocence, the right to privacy and freedom of expression, workers’ rights, the right to free elections, even the rule of law itself are all impacted. Responding to the challenges associated with ‘algorithms’ used by the public and private sector, in particular by internet platforms, is currently one of the most hotly debated questions.

There is an increasing perception that “software is eating the world” (Andreessen 2011), as human beings feel that they have no control over, and do not understand, the technical systems that surround them. While disconcerting, this is not always negative. It is a by-product of this phase of modern life, in which globalised economic and technological developments produce large numbers of software-driven technical artefacts, and “coded objects” (Kitchin and Dodge 2011) embed key human rights-relevant decision-making capacities. Which split-second choices should a software-driven vehicle make if it knows it is going to crash? Is racial, ethnic or gender bias more likely or less likely in an automated system? Are societal inequalities merely replicated or amplified through automated data processing techniques?

Historically, private companies decided how to develop software in line with the economic, legal and ethical frameworks they deemed appropriate. While there are emerging frameworks for the development of systems and processes that lead to algorithmic decision-making, or for the implementation thereof, they are still at an early stage and do not usually explicitly address human rights concerns. In fact, it is uncertain whether, and to what extent, existing legal concepts can adequately capture the ethical challenges posed by algorithms. Moreover, it is unclear whether a normative framework regarding the use of algorithms, or an effective regulation of automated data processing techniques, is even feasible, as many technologies based on algorithms are still in their infancy and a greater understanding of their societal implications is needed. Issues arising from the use of algorithms as part of the decision-making process are manifold and complex. At the same time, the debate about algorithms and their possible consequences for individuals, groups and societies is at an early stage. This should not, however, prevent efforts towards understanding what algorithms actually do, which consequences for society flow from them, and how possible human rights concerns could be addressed.

This study identifies a number of human rights concerns triggered by the increasing role of algorithms in decision-making. Depending on the types of functions performed by algorithms and the level of abstraction and complexity of the automated processing that is used, their impact on the exercise of human rights will vary. Who is responsible when human rights are infringed based on algorithmically prepared decisions? The person who programmed the algorithm, the operator of the algorithm, or the human being who implemented the decision? Is there a difference between such a decision and a human-made decision? What effects does it have on the way in which human rights are exercised and guaranteed in accordance with well-established human rights standards, including rule of law principles and judiciary processes?

Challenges related to the human rights impact of algorithms and automated data processing techniques are bound to grow as the related systems become increasingly complex and interact with each other’s outputs in ways that are progressively impenetrable to the human mind. This report does not intend to comprehensively address all aspects of the human rights impacts of algorithms, but rather seeks to map out some of the main current concerns from the Council of Europe’s human rights perspective, and to look at possible regulatory options that member states may consider in order to minimise adverse effects or to promote good practices. A number of related themes will require more detailed research to more systematically assess their challenges and potential from a human rights point of view, including questions related to big data processing, machine learning, artificial intelligence and the Internet of things.”