“How are we leveraging AI in our marketing efforts?”

If your boss hasn’t already asked this question, they will soon. Whatever your answer, you’ll want to consider the risks inherent in integrating generative artificial intelligence tools into your workflows. And it’s not enough to be aware of these risks: marketing teams must establish processes to mitigate them and remain vigilant in safeguarding against them. Here are some things to think about as you balance the capacity-boosting effects of AI with the need to protect your brand’s credibility and voice.

1. Consider ethics and transparency.

You’d never knowingly copy someone’s work and claim it as your own. But if you’re using generative AI to create content, it’s easy to do so unknowingly. AI is great at compiling and summarizing information. It’s not so great at citing its sources. As a test, we asked ChatGPT to define a term and cite its sources. While it did cite one source specifically and referred to two other organizations generally, ChatGPT framed its citations by saying: “This definition is based on my training and knowledge as a language model.” The source of that training and knowledge? Vast inputs of existing content authored by humans.

So what is our responsibility as writers, designers, and creators to disclose when we’ve used AI in our work? And how can a writer or designer be certain that the work they’re creating doesn’t contain plagiarized material (which happens more often than you’d think)?

At the moment, there are no industry-standard practices around AI-assisted creation. It’s incumbent upon marketing teams to establish policies that ensure the ethical use of generative AI. Your policies might include guidance on when and how to disclose the use of AI, prompts to avoid (e.g., requesting an image in the style of a specific artist or featuring someone’s intellectual property), and/or a mandate to run written content through a plagiarism checker. Only one thing is certain: There must be a human in the loop.

2. Be mindful of copyright implications.

Inevitably, at some point, the question “Is it right?” becomes “Is it legal?” The future of copyright and AI is uncertain at best: Can AI-generated content be copyrighted? Is it just output that can be copyrighted, or can companies copyright input, too? What constitutes fair use of copyrighted content when it comes to the AI learning process? And what prevents companies from laundering copyrighted data to inform an AI model whose learning stands to benefit them down the line?

AI-related copyright decisions have already been made and then reversed only a few months later. The U.S. Copyright Office is only just beginning to roll out guidance (check out their copyright and AI landing page for the latest info). Until there’s clear government guidance and legal precedent, the landscape around this issue is essentially quicksand: unstable, potentially dangerous, and a big ol’ mess. For the time being, the best bet for comms professionals is to ensure that any work you intend to copyright is human-generated. It’s also smart to start working with your company’s legal department early as you design policies and procedures around creating copyrightable material.

3. Pay attention to bias.

We’ve considered the questions “Is it right?” and “Is it legal?” Now, let’s ask “Is the output accurate and fair?”

AI language models learn from work crafted by humans. That means that all of our human biases—from subconscious bias to overt racism, sexism, and other forms of discrimination—can inform the AI’s vocabulary and point of view. A language model’s training might include content scraped off the web, and when it scrapes the bottom of the proverbial barrel, the model might “learn” all manner of garbage: a celebrity wellness influencer’s opinion about sunscreen (and other health misinformation), political propaganda, or racist rhetoric. And, as sure as your kid will repeat the curse word you mutter when you stub your toe, the AI will repeat this garbage when prompted.

Again, you’ll need a human in the loop to evaluate and edit AI outputs, with special attention to bias and inaccuracy.

The human brain and its capacity for critical thought (and let’s not forget emotion) are the keys to managing the risks associated with generative AI. And, despite the risks and uncertainty, AI’s not going anywhere. If you’re the person who helps your organization manage the risk and reap the rewards, your job isn’t going anywhere either. The singularity may be near, but it’s not here yet. And until it arrives, we humans are in charge.

Ready to rethink your content strategy and communications workflows? Let’s talk.

Three Furies is a business, brand, and content strategy agency with deep experience in the legal marketing sector, including digital marketing analysis, brand and digital design, communications strategy, and advertising campaigns. We also produce bespoke wine tasting experiences for client development and employee resource groups through our sister company, Tipsy.