It's been nearly a year since I glossed over the Need for Cognitive Security. Since then I've struggled to find a proper definition for the term, or rather, any concrete implementations that might benefit common folk such as myself. There has been much ado about CogSec this past year without any coherent explanation as to what it really is. In 2014, security researcher Timothy Hwang held a symposium as part of the Association for the Advancement of Artificial Intelligence's (AAAI) Spring Symposium Series. Entitled "Social Hacking and Cognitive Security on the Internet and New Media", its premise is most interesting: it recognized that the ability to influence has been democratized and that, thanks to the abundance of information being produced, influence is significantly more quantifiable. Because of both phenomena, influence is also easier to conceal than ever before. The symposium went on to distinguish cognitive security from social engineering; its goal was not just to establish Cognitive Security as a field, but to update and expand the still limited concept to meet the realities of contemporary "influence."

So did Hwang and crew manage to do just that? AI Magazine published a report last year on what was discussed, stating:

Cognitive Security is a term that examines an evolving frontier and suggests that in the future researchers, governments, social platforms, and private actors may be engaged in a continual arms race to influence — and protect from influence — large groups of users online. Although cognitive security emerges from social engineering and discussions of social deception in the computer security space, it differs in a number of important respects. First, whereas the focus in computer security is on the influence of a few individuals, cognitive security focuses on the exploitation of cognitive biases in large public groups. Second, while computer security focuses on deception as a means of compromising computer systems, cognitive security focuses on social influence as an end unto itself. Finally, cognitive security emphasizes formality and quantitative measurement, distinct from the more qualitative discussions of social engineering in computer security.

But people are already engaged in a "continual arms race to influence — and protect from influence — large groups of users online." Anyone paying attention to the United States' 2016 presidential election will recall that Ted Cruz was found to be utilizing a company literally calling itself Applied Memetics to help out with what little remained of his campaign last year; in the end it was all for naught. Its founder, Dan Gabriel, has quite a history, but the fact that companies like his are operating inside our digital panopticon with full disclosure of their intentions is, for lack of a better word, surreal.


Applied Memetics LLC is focused solely on developing engineered influence for clients seeking to alter their tactical or strategic operational environments.

"Engineered Influence" is just a kinder way of saying what Richard Dawkins referred to as "Memetic Engineering"; as in, the deliberate creation and dissemination of ideas to influence a subset of people. Applying memetic engineering is usually a hit or miss; we often hear marketeers lamenting this with their wasted efforts to rein more consumers into their commodified cubby holes. But when it comes to applying memetic engineering with a political motive, it opens up a wealth of possibilities for 21st Century propagandists. The democratization of influence doesn't undermine the fact that influence itself grows like a weed; everyone has one but few tend to them carefully. Just because more people are capable of influencing others does not mean that an equal amount of people actually commit to that influence.

Let's break down the AAAI's definition:

…cognitive security focuses on the exploitation of cognitive biases in large public groups

The ethics of exploiting such widespread cognitive bias is sometimes unfathomable. Remember when Facebook revealed it had experimented on over half a million of its users by manipulating their news feeds in an effort to better anticipate their emotions? Two years later, it seems like nothing more than an afterthought. Perhaps a form of debiasing is necessary to alleviate such exploitation. Cognitive Bias Modification (or CBM) is already a burgeoning collection of often therapeutic processes that try to meet this issue head-on, but its effectiveness remains in question due to its lack of resiliency. All it really entails is retraining our attention to the facets of everyday life; the problem is that over time people eventually fall back into the very biases they sought to mitigate. CBM has shown effectiveness in treating depression, anxiety and addiction; I see little reason why it couldn't be used to, at the very least, temporarily liberate ourselves from the exploitation of bias. Transcranial direct current stimulation may augment such sensibilities by extension as well; as newer technologies become social substrate, large public groups will be able to operate with an even greater degree of autonomy, without bias. But new social dynamics are beginning to take hold, and with that we'll only find ourselves back at square one.

…cognitive security focuses on social influence as an end unto itself

It's bittersweet, for sure: many hipsters present at the first Occupy Wall Street protests quickly found themselves bewildered at their supposed inability to make a lasting mark on mass media. It's important to note that Occupy was the result of Adbusters' call to action; one could posit that cognitive security technically has roots in nineties counterculture. You know, with people like Shepard Fairey turning their methods of subversion into brands that evoke a degree of influence as an end unto itself. Or Coca-Cola aggressively marketing to our parents' cognitive biases with brutally nihilistic soft drink slogans like "Everything is going to be OK." Yes, hipsterdom was the result of nineties corporate astroturfing, because there was no escape for the culture jammers of the era, just as there still isn't one now.

If you do a quick Google search for cognitive security, you'll likely find it being used by a Czech startup of the same name, which was recently acquired by Cisco. Narrow your search and you'll find IBM has been alluding to the term as well:

…cognitive security tools will help organizations address cyber security threats and compliance issues. They will offer enhanced insights with intellectual property analysis and intention and behavior scoring for both external and internal threats (a growing number of data breaches originate with internal users who inadvertently expose corporate systems). And, they will augment the human intelligence of security analysts, enabling them to make informed decisions more efficiently at all levels.

Cognitive systems, as IBM refers to its beloved Watson, are (what some would say "finally") being let loose to conduct threat analysis in cyberspace; the goal, of course, is to augment corporate agency in a world overflowing with information. According to Security Intelligence, IBM defines CogSec in two ways:

  1. The use of automated, data-driven security technologies, techniques and processes to help ensure that cognitive systems, such as Watson, have the highest level of security and trust; and

  2. The use of cognitive systems themselves to analyze security trends and distill enormous volumes of data into discernible information, and then into knowledge-for-action for continuous security and business improvement.

That's pretty vague. I'd like to assume AI theorists like Roger Schank might agree with me, as referring to Watson as a tried and true cognitive system is rather disingenuous even before accounting for those who continue bellowing IBM's marketing hype. It's by no means strong AI, yet even weak AI has the potential to bring about profound changes to the systems we already have in place. I contacted Schank about what substance could be found in these definitions and, to my surprise, he responded, proclaiming, "it's just more of the same; people still don't know what 'cognitive' means." Which is very much true, but also somewhat beside the point. All systems are built with a purpose; just because they're used in ways they weren't intended does not somehow make them neutral. A cognitive system does not need to be sentient in order to see implementation across the board; at least then it still offers some facet of simulacra. Regardless, IBM's second definition is much more in line with the conclusions drawn from the AAAI symposium, especially its third point.

…cognitive security emphasizes formality and quantitative measurement

The need for quantitative measurement stems from the sheer abundance of information produced each day, yet not all security threats can be solved statistically. Numerical data only proves useful in either reinforcing or challenging hypotheses formed through qualitative measurement. With that in mind, cognitive security puts emphasis on ensuring that readily available information is made coherent whenever necessary. From a layman's perspective, this is incredibly hard to do, as made evident by Caitlin Dewey's decision to discontinue her column for the Washington Post. In December 2015 she posted the final installment of "What was fake on the Internet this week" after writing it for just short of two years. Her reasons were essentially drawn from her correspondence with Walter Quattrociocchi, the head of the Laboratory of Computational Social Science at IMT Lucca in Italy, whom she quoted as saying:

…institutional distrust is so high right now, and cognitive bias so strong always, that the people who fall for hoax news stories are frequently only interested in consuming information that conforms with their views — even when it’s demonstrably fake.

People are going to believe whatever they want to believe, so why fight it? After briefly explaining how economic incentives have seemingly necessitated the advertising world's exploitation of bias, Dewey admitted that her column was never designed to address this problem in the first place. At first I thought cognitive security's emphasis on formality and quantitative measurement was needlessly contrarian; but since today's radicals don't have their own Watsons to let loose in cyberspace, perhaps this third and final point is worth some merit. Still, I remain cautious; it's all too easy to rely solely on numerical data, like the true believers of yore, as we continue to eat from the trash can of ideology.

So let's go dumpster diving.