AI & Audience Receptivity

Starting in 2016, when disinformation went from being perceived as a simmering problem to a boiling-over crisis, it was hoped that journalistic fact-checking could deliver a substantial blow against it. By 2022, nearly 400 fact-checking initiatives had been established around the world. We are now beginning to grapple with what it really takes to increase the accuracy of an audience’s perception of the world, to say nothing of getting them to change their mind. In other words, we are circling back to a concern as old as rhetoric itself: audience receptivity.

Theorizing receptivity is difficult, but we can use this working definition: the willingness to accede to a proposition. The important nuance is that we may not be dealing with agreement or disagreement per se but with provisional consent. Between gaining an audience’s attention and gaining their assent or dissent, there is a solicitation from the speaker, which the audience either ignores or signals a willingness to entertain. If this blink-and-you-miss-it transition could be captured and modeled by artificial intelligence (AI), we would not only deepen our understanding of how and why disinformation flourishes, but we could also start pre-empting it.

More specifically, we could start inoculating against it. This is the idea, originally advanced in the 1960s, that an audience can be “vaccinated” against disinformation. Inoculation tends to center on two techniques: forewarning and pre-bunking. In forewarning, an audience is advised by authorities or trusted sources that they are likely to be targeted with disinformation concerning a topic or event; in pre-bunking, the content of the imminent influence operation is publicly fact-checked in advance, and the fact-check is then rapidly disseminated.

Detecting receptivity in real time, or even predicting it in advance, would mean defenders could identify whom to inoculate and, simultaneously, foresee the topics to inoculate against, perhaps right down to the disinformation content’s very contours.
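To make that vision concrete, here is a minimal sketch, assuming a hypothetical score_receptivity detector and an arbitrary threshold, of how such a detector might direct where forewarnings and pre-bunks go first. No such model exists today; every name and number below is a placeholder.

```python
import random

def score_receptivity(segment: str, topic: str) -> float:
    """Stand-in for the imagined detector; returns a score in [0, 1]."""
    return random.random()  # placeholder, not a real prediction

def plan_inoculation(segments, topics, threshold=0.7):
    """Rank segment/topic pairs whose estimated receptivity clears the threshold,
    so defenders know whom to forewarn and what to pre-bunk first."""
    scored = sorted(
        ((score_receptivity(seg, top), seg, top) for seg in segments for top in topics),
        reverse=True,
    )
    return [(seg, top) for score, seg, top in scored if score >= threshold]

print(plan_inoculation(["regional_forum", "diaspora_group"],
                       ["election_integrity", "vaccine_safety"]))
```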

 

Receptivity Is Under-Theorized

Receptivity as such, which is to say, what it means to be receptive, is surprisingly under-theorized. This may initially seem like a dubious claim given the existence of landmark works like Quintilian’s Institutes of Oratory and Edward Bernays’ The Engineering of Consent, well-established and important systems of audience analysis like the Nielsen ratings, controversial practitioners like Cambridge Analytica and Glavset who have seemingly penetrated the hidden psychological recesses of audiences, and, of course, the entire fields of rhetoric, communications and reception analysis. Examine all of this closely, however, and what emerges is a continual emphasis on the mechanisms of receptivity rather than its nature, which is to say, its ontology.

Consider two of the most well-known approaches to receptivity, Stuart Hall’s “encoding/decoding” theory and Richard Daft, Robert Lengel and Linda Trevino’s equivocality theory. Both conceive of receptivity as a function of interpretation, with Hall emphasizing the role of the audience as the meaning-maker and Daft, Lengel and Trevino that of the speaker as the meaning-intender: for Hall, the audience determines the meaning of the speaker’s message through a process of negotiation and opposition, while for Daft, Lengel and Trevino, the speaker determines the variety of cues they present to the audience. The problem here is that the cart is being put before the horse, for the question that really needs to be asked is what is happening when the audience gives their attention to the speaker, not what is happening once their attention has been given.

A focus on mechanics can also easily descend into reductionism, in which we think we know what receptivity is because we think we know what it looks like. All we really need to do, then, is engineer our way to receptivity. According to this logic, the speaker desires that the target audience do or believe X and conducts their persuasion effort toward that end: the extent to which the target audience then conforms to the speaker’s desire is taken ex post facto as the extent to which they were receptive. Repeat this successfully enough times, and a probabilistic basis is formed on which predictions of future receptivity can be made.
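A minimal sketch of this ex post facto logic, with invented segments and outcomes: “receptivity” is simply the observed conformity rate from past persuasion attempts, reused as a prediction about future ones.

```python
from collections import defaultdict

# Toy history of past persuasion attempts: (audience segment, did they conform?)
past_campaigns = [
    ("segment_a", True), ("segment_a", True), ("segment_a", False),
    ("segment_b", False), ("segment_b", False), ("segment_b", True),
]

def estimate_receptivity(history):
    """Equate 'receptivity' with the observed conformity rate per segment."""
    totals, conformed = defaultdict(int), defaultdict(int)
    for segment, did_conform in history:
        totals[segment] += 1
        conformed[segment] += int(did_conform)
    return {seg: conformed[seg] / totals[seg] for seg in totals}

# These rates become the "probabilistic basis" for predicting future receptivity.
print(estimate_receptivity(past_campaigns))
```

Nothing in such a measure distinguishes genuine receptivity from habit, pressure or coincidence, which is precisely the reductionism at issue.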

Reductionism about receptivity pervades our era of “big data”, especially in personalized marketing, which aspires to automated, micro-targeted persuasion. Personalization has proven very difficult to make foolproof: we have all experienced recommendations that fail to distinguish between browsing a product or service out of curiosity and browsing out of real desire, or that seemingly stalk us with long-gone yet personally sensitive queries. To wit, in 2022, Facebook’s interest-targeting algorithms were found to be inaccurate 33.22 percent of the time, while its explanations were too generalized and even misleading. Doubts also linger over Cambridge Analytica, whose much-feared psychographic models may have been more lucky than telepathic.

To be absolutely clear, despite its underdevelopment, receptivity theory does not start from a blank slate. Because the topic is intimately tied to the nature of being an audience, we can search across time for insights, ranging from Aristotle to modern schools of thought like collective intentionality and symbolic interactionism. Additionally, an array of disciplines has been specifically exploring what causes resistance to persuasion, and much of today’s research into disinformation bears directly or indirectly on the question of what receptivity is.

Given this wealth of potential resources, it is all the more surprising that receptivity stands in need of rigorous theorizing. However, by the same token, this is also an historic opportunity. Indeed, we now possess an incredible new resource: AI. Deep learning in particular could provide a valuable, if non-conscious, non-human perspective on this issue.

The emergence of large language models like ChatGPT signifies that dimensions of human experience which once seemed inherently elusive to AI are now falling within its scope. Language is an activity that, as seen in the ideas of René Descartes and Alan Turing, our culture has often wanted to believe is an epiphenomenon of our unpredictable soul. Of course, whether what amounts to patterns of convention constitutes the essence of what it means to be a language, much less what it means to be human, is doubtful. Yet a deep learning text predictor that lacks a human understanding of human speech has nevertheless uncovered these patterns, and it can reproduce them in real-time exchanges with humans with spooky verisimilitude.

In principle, what has been achieved for language might also be achievable for receptivity: what kind of patterns about the dynamics between speaker and audience might our new partner uncover?


Challenges of Modeling Receptivity

There is a philosophical disconnect in the minds of both adversaries disseminating disinformation and those defending against them: we think in terms of audiences but work in terms of social networks, yet these are not the same thing. A social network is a constellation of links and nodes, while an audience is that which inhabits that constellation. To put a fine point on it, a social network is a web of “I”’s, but an audience is a “We”. The mind engages content differently from the standpoint of the group, regardless of whether the group is real or imagined, than from the standpoint of the individual.

In order to bring AI to bear on receptivity, we will thus need to determine the kinds of data sets an audience-focused shift in thinking will require, as well as the kinds of signals we will need to look for in that data to build the relevant models. Perhaps we already have the necessary data and the signals are not especially different from, say, sentiment or stance; perhaps, though, we will be confronted with unexplored territory.
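As a baseline, here is a sketch under the optimistic assumption that existing signals suffice: willingness to engage with a speaker’s solicitation is predicted from reply text, with stance-like features standing in as a proxy for receptivity. The data and labels below are invented, and the model operates at the level of individuals rather than the audience-as-“We” described above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented replies to a speaker's solicitation, labeled 1 if the respondent
# signaled a willingness to entertain it, 0 if they ignored or dismissed it.
replies = [
    "tell me more about this",
    "interesting, where can I read further?",
    "not interested, please stop",
    "this is nonsense",
]
entertained = [1, 1, 0, 0]

proxy_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
proxy_model.fit(replies, entertained)

# An individual-level proxy at best; whether it captures receptivity is the open question.
print(proxy_model.predict(["can you tell me more?"]))
```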

We will also need to devise very clear and testable definitions, for without these, the patterns AI will discern may be the wrong ones. Computer vision scholars know this problem well: if we instruct the machine to separate pictures of cats from pictures of dogs, it may very well use the presence of grass as an indicator of dog-ness, since dogs are more often pictured outside. It is easy to see how this could go very wrong with receptivity.
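The worry can be reproduced with a fully synthetic toy example: a spurious “grass” feature that merely correlates with the “dog” label gets learned in place of the genuine cue, and accuracy drops sharply once that correlation is broken. All numbers here are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
is_dog = rng.integers(0, 2, n)                                  # ground truth
has_grass = np.where(rng.random(n) < 0.9, is_dog, 1 - is_dog)   # spurious cue, 90% aligned
animal_cue = np.where(rng.random(n) < 0.7, is_dog, 1 - is_dog)  # genuine cue, weaker

X_train = np.column_stack([has_grass, animal_cue])
clf = LogisticRegression().fit(X_train, is_dog)
print("accuracy while grass still correlates with dogs:", clf.score(X_train, is_dog))

# Break the spurious correlation (cats on grass, dogs indoors) and re-test.
X_shifted = np.column_stack([1 - is_dog, animal_cue])
print("accuracy once the correlation is broken:", clf.score(X_shifted, is_dog))
```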

Yet we also very much want to know what patterns our new partner discerns about receptivity when unsupervised, so a tightrope will need to be walked between our own perception of ourselves and how the machine perceives us. We will, moreover, need to contend with the black box, for we are still some way from bridging the interpretability gap.
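On the unsupervised side, the simplest version of the exercise looks something like the sketch below: hand the machine unlabeled, invented audience-interaction features, let it carve up the audience on its own terms, and then ask whether its groupings resemble any category we would recognize.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Rows are audience members; columns are invented placeholders
# (e.g. reply latency, share rate, question rate), not validated signals.
interactions = rng.random((200, 3))

machine_groups = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(interactions)
print("group sizes as the machine sees them:", np.bincount(machine_groups))
```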

Once we have viable models of both the audience and receptivity, we will need to grapple with grave ethical questions. The concrete goal of all of this experimental philosophy will be to develop inoculation tools that use a receptivity detector to aim forewarnings and pre-bunks at vulnerable audiences. So, who will be these tools’ users and what will be the use cases? What sort of biases and power dynamics could influence those decisions? And would these tools’ very existence truly be the medicine they are intended to be for freedom of speech and conscience, or a poison?

That last question is vital, as we will also need to grapple with the danger of dual use: nothing could be more attractive to an adversary seeking to increase the effectiveness of their influence operations than a tool that tells them exactly who is receptive and to what.

Chris Schwartz

Christopher Schwartz is a postdoctoral research associate at the ESL Global Cybersecurity Institute of the Rochester Institute of Technology, where he is working on the "DeFake" deepfake detector. Formerly a journalist, he recently completed his doctoral dissertation in philosophy of journalism at the Institute of Philosophy of KU Leuven.
