Is using generative AI ethical? The questions we’re asking as a B Corp

Is using generative AI ethical? Recently, we wrote an article about how generative AI tools like ChatGPT are amplifying two problems that content marketers have been facing ever since the term “content marketing” became popular: noise and cynicism. There’s just so much content out there that we can’t possibly digest it all – and how do we know whether we can trust it?

Another area where we face information overload and a lack of clarity about what to trust is the subject of generative AI and ethics. Are the evangelists misleading us with their enthusiasm? Or are the naysayers overhyping the dangers? And how should we navigate the advance of these technologies, especially as values-led businesses?

At RH&Co, we’re keeping an open mind when it comes to generative AI. On the one hand, we’ve been experimenting with how useful it could be to us and to our clients. On the other, as a B Corp, we’ve been looking into the darker side of AI so that we can build our own policies in a way that supports the planet and its people.

A caveat before we start

First, let’s clarify the language. This article by the Harvard Business Review does a great job of distinguishing between important AI terms such as generative AI, machine learning, natural language processing and so on.

It’s important to understand that in this article we’re talking about the tools that respond to human prompts by creating content in one form or another: ChatGPT, DALL-E, Midjourney and so on.

Second, we’re not discounting the many ways people are finding these tools helpful, from supporting those with dyslexia in writing emails or internal reports, to creating quick-reference summaries of meetings.

Generative AI is changing rapidly, and discussing the merits of individual tools and their current abilities would make this article redundant almost as soon as we hit publish. Instead, when asking “Is generative AI ethical?”, we want to look at the broader issues that underpin the use of this technology, so that as a user group we can call for a version of AI that doesn’t run contrary to our values.

Environmental concerns: generative AI’s use of electricity and water

Every time we use digital technology, we use energy. Not just the electricity that goes into powering our smartphones and computers, but the energy that goes into sending our emails, storing our documents and connecting our video conferences. As a result, every individual and organisation now has a digital carbon footprint.

Generative AI is adding to this footprint. There’s the energy needed to develop and train the models, not to mention the energy needed to complete each query. While there are no straightforward answers when it comes to how much energy AI uses – as this article by a Boston University computer science professor shows – it’s clearly something we need to consider.

The same goes for water use, as this article in The Independent highlights. One of the studies referred to in the article, by researchers at the University of California and University of Texas, suggests that “training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly evaporate 700,000 liters of clean freshwater”, and that “global AI demand may be accountable for 4.2-6.6 billion cubic meters of water withdrawal” by 2027.

This is not to say that we shouldn’t use generative AI. But using it arbitrarily when other options are available feels like an unnecessary way to increase our digital carbon footprint and water use. 

Misinformation: hallucinations and fake news

Lying and misleading people isn’t a new phenomenon – it’s been around as long as the human species has. But it’s easier to mislead today than it’s ever been, and to do so at scale.

This study, published by Cambridge University Press, looks at the role of artificial intelligence in disinformation. In the abstract, the researchers state that AI systems “boost the problem [of disinformation] not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale by malicious stakeholders.”

In some cases, people are deliberately using generative AI to create fake content, as with the video of Australian businessman Dick Smith supposedly promoting an investment opportunity – or the numerous examples cited in this Sky News article.

But there’s also the challenge of AI ‘hallucinations’, where tools like ChatGPT make up answers rather than admit they don’t know something. Read our article about how AI is increasing noise and cynicism for a prime example involving our very own James Matthews.

Again, although none of this means we should avoid generative AI entirely, it does pose the question of how we regulate it and how transparent we are about using it.

Human rights and ethics: what about the people?

How do you get an AI to decide whether text contains inappropriate or offensive subjects? Easy – just get humans to read the text and tell the AI whether it’s acceptable or not. Except imagine you’re one of those people. What’s that going to do to your mental health?

That’s the issue that faced content moderator Mophat Okinyi when he worked for OpenAI (the creators of ChatGPT) in Nairobi, Kenya. The text and images he was required to review depicted a variety of scenes too unpleasant to go into here, leading him to become withdrawn and paranoid, and ultimately resulting in the breakdown of his marriage.

Obviously this is an extreme case, but there are other ethical issues that we need to consider when it comes to generative AI. This quote from UNESCO’s Assistant Director-General for Social and Human Sciences, Gabriela Ramos, sums up the breadth of the challenge: “AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms.”

For those of us in creative industries, a big concern is the way that generative AI is trained on work that human beings have produced and may not have given permission to reproduce. Is training generative AI the same as an artist being inspired by viewing the works of others, or is it an infringement of copyright?

There’s no straightforward answer. An article in Harvard Business Review stated that “the legal implications of using generative AI are still unclear, particularly in relation to copyright infringement, ownership of AI-generated works, and unlicensed content in training data.”

How should values-led businesses use generative AI?

So, is using generative AI ethical? We’d love to say that we had a simple answer, particularly as a B Corp that believes in doing business as unusual. Unfortunately, the issues aren’t clear cut, not least because the technology and the regulations surrounding it are changing virtually every day.

So what should we do? Well, we can only say what we at RH&Co are doing, which is staying informed, actively seeking out a variety of viewpoints, experimenting and testing, and keeping an open mind.

Our plan is to draw up an AI policy in time, and when we do, we will be sure to share it here on our website. If you have any thoughts, suggestions or resources to share, we would be very happy to receive them.

AI is not going to be the end of the world as we know it, but it is going to have an enormous impact on it. Let’s work together to ensure that impact is a positive one for our planet and all of its people.
