#EthicsMatter - The Current Use of AI-Based Tools in Communications - Part 1

AI and Ethics: Implications for Communicators

A three-piece article series

Part One:

The Current Use of AI-Based Tools in Communications

The use of AI-based tools is profoundly shaping the present and future of communications. While much of the current discussion focuses on the application and implications of AI-based tools for communication, less attention is paid to the more fundamental shifts that AI brings to organisations as a whole: organisations are increasingly relying on AI to manage operations and make decisions. The implications this has for stakeholder relationships and for the reach, role and responsibilities of communications need to be a key focus of future consideration in our field. In this three-piece article series, we examine the implications of AI for the role and responsibilities of communication practitioners. The first part assesses where we are in applying AI in communication practice. The second part explores the fundamental shifts that AI brings to organisations as a whole and the role communication can play at this wider level. The final part discusses key implications of emerging AI-based practices in communications.

***

The use of AI-based systems and agents is growing, and so is the discussion about it. The Page Society, based in the US and representing board-level communication practitioners, produced a report in 2019 which made a plea for the profession to upskill in digital technologies, as it was falling behind other communicative disciplines such as marketing. The Institute for Public Relations, also US-based, has produced several articles on AI and has a specialist digital media research centre producing a range of resources and commentaries. Most active has been the UK-based Chartered Institute of Public Relations (CIPR), which constituted its ‘AIinPR’ expert panel in 2018 and which has produced numerous research reports, guides and practical implementation toolkits.

CIPR’s recent “AI and Big Data Readiness Report” shows a very varied picture: there is some advanced AI practice, and evidence that some practitioners are being involved in decision-making on AI more widely in organisations. Others are not yet on the ‘AI journey’. The greatest fear revealed by the survey underpinning the report concerns loss of jobs and the supplanting of communication activities by AI, particularly at a tactical level (writing, audience selection and targeting, monitoring and evaluation, etc.). Very little was reported about an ‘expanded role’ related to AI, particularly in governance, but awareness of ethical issues is apparent. Only 5% of respondents currently regard governance as among the skills most relevant to a communication practitioner, yet 18% saw ethics as the top challenge for professionals when implementing AI across an enterprise. This suggests a growing appreciation that AI governance is relevant both to communication and to the wider organisation, but practitioners are not yet clear about what is involved or how to undertake it.

Other research by the CIPR has examined the level of permeation of AI in the profession and predicted that by 2023 two thirds of activities would be AI-assisted. Recent consideration by its AIinPR Panel has concluded that this process was accelerated by the Covid-19 pandemic and that this position has already been reached. From research, to stakeholder identification and selection, to identifying stakeholder channel use and optimising channel selection; content design and creation, including writing, image and audio generation; monitoring (including social listening) and evaluation; programme implementation and management and workflow optimisation – there are now automation and AI tools that will do the task faster and more accurately than humans. Even the most human part – the actual engagement – is now being handled very well and intelligently (and with permanent collection, storage, analysis and aggregation of data) by AI-based systems such as chatbots and virtual reality applications.

Dozens of new applications and innovations are emerging on a daily basis. One area that is developing very rapidly is Intelligent User Interfaces (IUIs), which have AI at their core. IUIs seek to make the interface between machines (computers) and humans as easy as possible, and their main feature is that they attempt to emulate a human-to-human experience, including the sensory. Moore and Hubscher (2021) have written specifically on the uses and impact of these devices on communication. IUIs include immersive technologies such as virtual and augmented reality, and agents such as chatbots, virtual assistants and avatars – all applications which incorporate at least one sensory element, such as sight, sound (speech) or touch. A particularly powerful technology is eye tracking, whose devices open up opportunities to help the visually impaired but also give new insight into visual preferences, behaviours and obsessions, and can interpret emotional reactions and states. Touch technologies can be integrated into just about any object, from tables and walls to clothes and skin. In their book, Moore recounts virtually shaking the hand of a dead trade unionist and having a tactile experience. The potency of these technologies for communication cannot be overestimated, for three reasons:

·       First, they provide insight into people’s ways of thinking (both rational and emotional), their motivations, preferences, choices and ways of communicating – they offer access to the ‘deepest’ private thinking of individuals.

·       Second, this information opens up the potential for manipulation, without the individual realizing it, as this ‘data’ is used to build detailed individual profiles.

·       Third, they herald a further blurring of reality and the unreal. These issues are already apparent in the realm of social media: they will become even more apparent as these technologies become more ubiquitous.

IUI devices such as Amazon’s Alexa or smart TVs are often present in the home and incorporated into the fabric of life, where they can see, listen and physically interact. They enhance lives by taking on mundane tasks and predicting needs, while at the same time they constantly collect data and build rich pictures of individuals’ and community lives. They can hyper-personalise, delivering audiences of one to those who have the resources to harness their power. For communicators, identification of these audiences can lead to conversations that are nuanced, emotionally intelligent, extended, particular and unscripted. One-to-one dialogue.

IUIs will gain increasingly sophisticated emotional attributes and be able to gauge, respond to and display emotion: to some extent they already do – note the use of robot comfort dogs for the lonely and for people living with dementia and other emotional conditions. The car industry is working on user interfaces that detect driver anger. There are even suggestions that these IUIs will, in time, integrate with the human nervous system. As Moore and Hubscher point out, emotional engagement and affinity have close connections with trust-building, a core tenet of communication and a key aspect of intangible asset building. Emotions, trust and loyalty also drive consumption, and so have a direct line to the bottom line.

Two characteristics of IUIs are striking. First, the increasing naturalness of their language. As Murtarelli et al. (2021) point out, there is a qualitative difference between ‘conversations’ with an AI agent and reading text. Through conversations meaning is created, relationships develop and trust is cultivated in a deep way, especially if these conversations are reinforced by other sensory elements such as touch and vision. However, these are not conversations, but rather para-conversations in which AI agents are programmed to be purposive and to gather data. Second, they listen as well as speak, and as they become more sophisticated and more human-like, they will be able to elicit more and more personal information, knowing more about their human counterparts than those protagonists know about themselves. This raises deep ethical questions. Should limits be placed on how much AI communication devices are allowed to know about individuals? Should boundaries be put on how they can use the information they gather? These are pertinent questions for communication because these AI technologies give power to their owners. That power can be used to take collaboration and cooperation between humans and machines to new heights, or to lead, persuade and manipulate without the subjects’ informed consent or even awareness.

The humanisation of AI is also progressing apace. Robots are mirroring human movement, have skin-like coverings and can mimic physical and emotional reactions: they will walk like humans, talk like humans, look like humans and act like humans. They will be able to supplant humans at meetings, at events and in offline and online conversations, as well as helping them (or replacing them) in every activity they undertake.

If AI is permeating every area of communications by providing tools for operational activity, what is its impact on the more strategic elements of the work? These may include understanding context, aligning organisations with societal expectations and securing legitimacy, developing purpose and brand, negotiating values, making judgements about when and how stakeholder engagement should be conducted, and developing organisational culture and character. It is clear that AI will be embedded in decision-making in all these areas, providing the data upon which decisions can be made and in many cases offering options based on that data. This leads to a number of crucial questions for communicators: What do they know about AI? How are AI applications programmed? How transparent are programming parameters and algorithms? What can be done about potential mistakes and biases in algorithms? How much data should be collected and how should it be used? What about privacy and the handling of personal data? Who decides the ethical uses of AI?

These questions lead on to a discussion about the emerging concerns that AI raises for corporate reputation and responsibility, which we explore in the second part of this article series.

About the authors

Alexander Buhmann, Ph.D., is associate professor of corporate communication at BI Norwegian Business School and director of the Nordic Alliance for Communication & Management. Alexander is a member of the expert panel on artificial intelligence at the Chartered Institute of Public Relations (CIPR). Follow Alexander on LinkedIn or Twitter.

Anne Gregory, Ph.D., is professor emeritus of corporate communication at the University of Huddersfield, honorary fellow and former president of the Chartered Institute of Public Relations (CIPR) and past chair of the Global Alliance for Public Relations and Communication Management. Anne is a member of the CIPR expert panel on artificial intelligence. Follow Anne on LinkedIn or Twitter.

Any thoughts or opinions expressed are those of the authors and not of Global Alliance.