#EthicsMatter - Changes and Challenges in the Operational and Strategic Role of Communicators - Part 3

AI and Ethics: Implications for Communicators

A three-piece article series

Part Three:

Changes and Challenges in the Operational and Strategic Role of Communicators

The use of AI-based tools is profoundly shaping the present and future of communications. While much of the current discussion focuses on the application and implications of AI-based tools for communication, less attention is paid to the more fundamental shifts that AI brings to organisations as a whole: organisations are increasingly relying on AI to manage operations and make decisions. The implications this has for stakeholder relationships and for the reach, role and responsibilities of communications need to be a key focus of future consideration in our field. In this three-piece article series, we examine the implications of AI for the role and responsibilities of communication practitioners. The first part assesses where we are in applying AI in communication practice. The second part explores the fundamental shifts that AI brings to organisations as a whole and the role communication can play at this wider level. This final part discusses key implications of emerging AI-based practices in communications.

***

The level at which AI is now embedded in the practice of communication means there will be significant changes and challenges in the operational and strategic role of communicators.

At the operational level, in theory at least, most routine tasks are amenable to automation and AI applications. Consequently, there are new knowledge and skill sets that need to be learned, such as knowledge of the various AI applications and their deployment, including the best mix of AI tools to use; acquisition of new technical skills to operate these applications; familiarity with the use and interpretation of data; and knowledge of the best combination of human and AI resources and their respective roles. This will demand a level of data and technical literacy far above that which has typically been required of practitioners.

Beyond these, communication as a specialist function will face the same set of questions which apply to organisations and which are outlined above. As the operational is increasingly taken over by AI, the role of communicators will shift much more towards the governance of their own communication AI systems, processes and tools. This starts with an understanding of the technology and algorithms that go into them, including issues of explainability, bias and privacy, and moves on to their uses and impacts. The precision of profiling and micro-targeting derived from big and personal data, and the ability to deliver precise and potentially manipulative messaging and content in ever more sensory and emotionally resonant ways, require careful reflection by individual practitioners and the wider profession. Given the power and possibilities that AI gives the practice, the question becomes ‘just because I can, should I?’ Agreement about the ethical boundaries of the practice, and formal training in AI governance and ethics to maintain trust and confidence in communication work, appear to be imperatives. Monitoring and governance, that is, knowing what AI is doing, how it is doing it and making the right interventions to ensure ethical practice, is essential to the future role.

The central role that communication plays in stakeholder relationships and in preserving the tangible and intangible assets of the organisation points to a wider contribution in AI governance. The academic literature, as well as professional publications, points to the role of the practitioner as ‘ethical guardian’: someone who serves as the conscience and check for the wider organisation. We would argue for an expanded governance remit for communicators as organisations increasingly become infused with, driven by and existing through AI. This role requires ‘ethical guardian’ interventions in a host of areas and a holistic, non-technical perspective on the impacts on the intangible and tangible assets of the whole organisation. It falls naturally within the advisory role of the communication function. For example, AI and big data give organisations choices about their physical location and over who, how and how many employees they recruit, retain and re-train. These are moral as well as economic choices and have reputational effects.

There are questions about the interface between humans and AI and about the nature of work which can be posed as moral questions: who drives what? Do machines drive humans and human decision-making, or do humans place protocols and structures around machines to ensure that AI- and big data-assisted decision-making is controlled? These are moral issues, but they also profoundly affect organisational structures, processes and culture, which has reputational and strategic communication implications. Then there are issues around the nature of decision-making: when provided with what looks like compelling AI-produced evidence, it is important to ask questions about the integrity of the data, the transparency and programming of the algorithms that have interrogated it, and the implications of the decisions made. That challenge is legitimately made by communication professionals, since they bear responsibility for communicating and defending these decisions. Crucial to this is their contribution to the understanding of context, including timing. Context is a factor that AI systems, as they currently stand, find difficult to appreciate. As options, opportunities and decisions are increasingly informed by AI systems, there is a crucial need for ‘someone’ to understand and interpret how they will be viewed in the light of wider societal trends, stakeholder needs and expectations, and more immediate contextual issues such as time and place. Judgements need to be made not just on the basis of logic. While contextual intelligence has always been within the remit of communication, it becomes even more important when faced with AI-informed, ‘scientific’ decisions.

The ethical guardian role extends to questions about how AI and big data systems are commissioned, implemented and monitored. These include not just technical but ethical questions: whether there are systems and processes in place to guarantee the privacy of user and customer data and proper control over how that data is used and stored, and whether there is transparency over what data is being collected, with whom it is shared and how it is aggregated in systems where AI is involved.

While there is guidance from some large transnational institutions, such as the European Union, on how to implement AI organisationally in a responsible way, leaving these important questions to IT and technology specialists is not satisfactory. Communication professionals need to be part of AI commissioning and build teams in order to pose tough questions which may affect reputation but may not occur to technical specialists who are focussed on operational issues. Discharging this ethical guardian role requires communicators to acquire a robust understanding of AI, how these systems are designed, and their uses and applications. It also requires courage to resist the relentless logic of AI and ‘scientific’ decision-making. These decisions are based on data, and this data is often from, about and affects people: people and society are more than atomised data parts, and more than objects and targets.

It is healthy that the Global Alliance is facilitating this discussion. GA member the Chartered Institute of Public Relations has produced a helpful guide for practitioners, “Ethics Guide to Artificial Intelligence in PR”, which also explores these issues.

About the authors

Alexander Buhmann, Ph.D., is associate professor of corporate communication at BI Norwegian Business School and director of the Nordic Alliance for Communication & Management. Alexander is a member of the expert panel on artificial intelligence at the Chartered Institute of Public Relations (CIPR). Follow Alexander on LinkedIn or Twitter.

Anne Gregory, Ph.D., is professor emeritus of corporate communication at the University of Huddersfield, honorary fellow and former president of the Chartered Institute of Public Relations (CIPR), and past chair of the Global Alliance for Public Relations and Communication Management. Anne is a member of the CIPR expert panel on artificial intelligence. Follow Anne on LinkedIn or Twitter.

Any thoughts or opinions expressed are those of the authors and not of Global Alliance.