The 6 Risks of AI Content


There are already plenty of use cases for generative AI in marketing. Industry and article research. Getting to first draft faster. Creating content briefs. Brainstorming titles and headers, angles, and introductions. Programmatic SEO, at a scale and sophistication previously hard to imagine. Repurposing content from one format to another. Personalizing ABM campaigns.

But each of these use cases carries risk. There is the risk of factual inaccuracy and hallucination. Risk of copyright infringement and legal challenges. Risk of Google penalization. Risk of mediocrity and pumping out a ton of content that doesn’t help the bottom line.

Your job is to understand the risk profile of AI content and decide whether the benefits outweigh the risks for your specific company, your specific audience, and your specific use case. This article will help you make those decisions.

1. Google Risk

Back in April 2022, Google’s John Mueller shared a pretty clear stance on the validity of AI-generated content:

“Currently, it’s all against the webmaster guidelines. So from our point of view, if we were to run across something like that, if the webspam team were to see it, they would see it as spam.”

The inference: that Google would penalize sites creating AI content, a practice Mueller compared to “spinning,” the process of taking existing content and using synonyms and word reordering to create something seemingly new. When faced with a technology that could reduce the marginal cost of content creation to almost zero (a webspammer’s dream), Google made the decision to deter its use.
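To make the contrast concrete, here’s roughly what “spinning” looks like under the hood: a toy sketch in Python, with a deliberately tiny, hypothetical synonym table. The output merely recombines existing text, which is exactly the pattern a webspam team is built to catch.

```python
import random

# A toy article "spinner": swap words for synonyms so copied text
# looks superficially new. The synonym table here is hypothetical.
SYNONYMS = {
    "fast": ["quick", "rapid", "speedy"],
    "build": ["create", "construct", "develop"],
    "content": ["copy", "material", "text"],
}

def spin(text: str) -> str:
    words = []
    for word in text.split():
        options = SYNONYMS.get(word.lower())
        words.append(random.choice(options) if options else word)
    return " ".join(words)

print(spin("build fast content"))  # e.g. "construct rapid copy"
```

An LLM like GPT-4 works nothing like this: it generates text word by word rather than reshuffling an existing article, which is why the comparison to spinning never quite held.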

But as we explained previously, the idea that Google would (or even could) penalize AI-generated content was vastly oversimplified:

  • AI-generated content is, by most measures, original. GPT-4 doesn’t spin existing text; it produces original writing, and can even surface completely new ideas.
  • AI content detection is hard and getting harder. Technology is an arms race, and there are more dollars flowing into content generation than content detection. Google is unlikely to be able to detect AI-generated content with any degree of consistency. (The sketch after this list shows why the naive approach struggles.)
  • There is a blurred line between human-edited and AI-written content. Does an AI-generated article still count as spam if it’s edited or rewritten by a real person?
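To see why detection is so hard, consider the most common approach: scoring text by how predictable it looks to a language model (its perplexity) and flagging low scores as machine-written. A minimal sketch, assuming the Hugging Face transformers and torch libraries, with GPT-2 standing in as the scoring model:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable the text is (lower = more predictable)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
    return torch.exp(loss).item()

# Naive detectors flag low-perplexity text as machine-written. But plain,
# well-edited human prose also scores low, so any threshold misfires.
print(perplexity("Thank you for your email. I will reply by Friday."))
print(perplexity("Quantum moths audit the ledger of sleeping traffic."))
```

The trouble is that the two distributions overlap heavily: simple human prose looks “machine-like,” and lightly edited AI text looks human. That overlap only grows as the generators improve.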

Ten months later, in February 2023, Google issued updated guidance about AI-generated content. It’s worth reading the source article and making your own summary, but my takeaway is clear: aim to create helpful, problem-solving content, and Google will be largely indifferent to how it’s created. As they summarize:

“Automation has long been used in publishing to create useful content. AI can assist with and generate useful content in exciting new ways.”

When I chatted about AI content with Ty Magnin, head of brand at Vendr, he shared, “Back when we first started (this was really just a few months ago), Google was saying it would punish articles written with AI. Most of us called their bluff, but that was a real potential risk in 2022. It's amazing how the perceived risk profile of gen-AI keeps changing so quickly.”

No one can guarantee that any action will be immune from penalization — Google is, after all, a company and not a utility, able to make decisions based largely on its own whims. But there are good reasons to think that most use cases will be safe.

2. Channel Risk

On the topic of SEO, there’s another angle worth considering. Instead of thinking about the risk Google poses to your use of AI, it’s worth considering the risk AI poses to Google.

Competition for keywords grows fiercer by the day. The returns from SEO have been on a long downward trajectory, and generative AI may accelerate the descent. Pair a vast uptick in AI-fueled content creation with growing skepticism about the legitimacy of online content, and the returns of search could drop even further (what we’ve dubbed the “search singularity”).

There are plenty of ways that could happen, and while the risk is speculative, it’s worth keeping an open mind. Generative AI may represent a watershed moment for modern search. What would you do if SEO, the raison d’être of many content strategies, suddenly required much more effort, or offered much lower returns? Where would you reallocate your spend? This is something Ty is already thinking about:

“Search engines are experimenting with adopting chat as the primary interaction, which means fewer clicks will go to other websites. So creating organic search-focused content may become less valuable for businesses.

It's not clear how a chat may credit its sources and how that can impact a marketing funnel.”

3. Hallucination Risk

When using generative AI, there’s a real risk that your content will be peppered with falsehood and fiction: made-up quotes attributed to real people, data that doesn’t exist, seemingly sensible ideas that fall apart upon closer inspection.

These errors (often dubbed “hallucinations”) are an intrinsic property of generative AI, a consequence of its primary objective: to predict the most plausible next word in any given sequence. As we’ve explained previously, generative AI is a false confidence machine (a short sketch after this list makes the point concrete):

  • There is no fact-checking mechanism within GPT-3 or GPT-4 (nor is there likely to be one).
  • These models are drawing on source data that is often incorrect.
  • Generative AI is designed to write at all costs, pumping out content even when the underlying data is thin or nonexistent.
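You can watch this false confidence at work by inspecting a model’s next-word probabilities directly. A minimal sketch, again assuming transformers and torch, with GPT-2 standing in for larger models; note that the model happily ranks candidates for a claim with no factual basis at all:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# No one has walked on Mars, but the model will still rank "answers."
prompt = "The first person to walk on Mars was"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode([int(i)])!r}: {p.item():.3f}")  # confident guesses
```

Nothing in this loop checks whether a candidate word is true; the objective is plausibility, and plausibility alone.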

Hallucination is the most concrete risk of AI content, but it’s also the most readily solved. Mistakes can be caught and remedied by putting a human in the loop: someone tasked with verifying the output, hunting down errors and fabrications, and putting their stamp of approval on the finished article.
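In practice, that human-in-the-loop step can be as simple as a hard gate in your publishing workflow. A minimal sketch (the names here are hypothetical, not any particular CMS): no draft ships without a fact-check and a named reviewer.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    title: str
    body: str
    claims_verified: bool = False   # quotes, stats, and sources checked
    approved_by: str | None = None  # the reviewer's sign-off

def publish(draft: Draft) -> None:
    # The gate: AI output never ships without explicit human approval.
    if not (draft.claims_verified and draft.approved_by):
        raise ValueError("Blocked: draft needs a fact-check and sign-off")
    print(f"Publishing {draft.title!r} (approved by {draft.approved_by})")
```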

This process also helps with the next source of risk: legal.

4. Legal Risk

Technology has a tendency to develop faster than regulations can keep up. Generative AI has already created a host of questions that need to be answered:

  • Is it ethical (or legal) to use scraped website data to train an AI model?
  • Who truly “owns” the output from a generative AI tool?
  • Should artists be compensated for their role in building training data? (“Write in the style of Ernest Hemingway,” or even “Write in the style of the Animalz blog.”)

While some of these problems may take years or decades to untangle, others present an immediate concern to companies using generative AI. I asked Aleksander Goranin, a specialist in intellectual property and artificial intelligence and a partner at Duane Morris LLP (a firm Animalz uses for legal services), to share his view of the current areas of regulatory risk:

  • Leaking personal data. Because generative models are trained on public datasets, generated content may unwittingly “leak” personal data, opening the publisher up to enforcement action from the Federal Trade Commission (FTC) and state agencies, especially in California, New York, and Illinois. Regulatory authorities are sensitive to the unauthorized disclosure of personally identifying information.
  • Biased content. In a similar vein, the FTC or state authorities may pursue an enforcement action for content that contains bias (like racial or gender bias), even if that bias arises implicitly from the data the AI model was trained on.
  • Lack of copyright protection. As Aleksander explains, “There is risk that the Copyright Office will not register the content generated as protected ‘work.’ This is the area where the Copyright Office has issued the greatest amount of guidance currently — essentially, one needs to disclose in the registration application the extent to which AI was used and disclaim copyright over any parts substantially generated by the LLM. The Copyright Office requires a ‘human author.’”

The safest use cases for generative AI are, unsurprisingly, background research, brainstorming, and idea generation. For anything else, creating a “human-review” scheme for generated content should help mitigate the greatest regulatory risk.

5. Mediocrity Risk

Many people worry that AI content will be obviously bad. But I think this is a relatively low risk. GPT-4 is a better writer than most people: more articulate, creative, and receptive to feedback (and this will only improve with future models and interfaces).

We shouldn’t take this personally: GPT-4 is literally superhuman, at least in the dimensions of reading and writing. As Ty shares, “It was pretty obvious that the technology is great enough that with a little lift, quality is no more a concern with an AI-assisted post than a human-driven one.”

But content doesn’t need to be bad to be problematic. There’s a more insidious risk to consider: that your content will be functional but forgettable.

In the pursuit of increased publishing frequency, there is the risk of losing whatever flicker of uniqueness your content currently possesses. Your content will be articulate, accurate, even actionable… and still fail to do anything useful for your company.

Great content marketing is about more than words on a page. Each individual article requires cohesion with a bigger, purposeful strategy. It relies upon effective distribution. It needs to go further than simply meeting the basic expectations of the reader: it needs to leave a lasting impression, or solve a problem, or entertain.

Rely too much on generative AI and too little on skilled experience, and you risk creating a kind of soulless imitation of good content, a lurching zombie with the appearance of “content marketing” but none of its useful qualities. Generative AI needs to be a tool in your toolkit, subservient to your strategy — and not an end in its own right.

As VC Tomasz Tunguz writes, “The question for startups evaluating automated content production: whether this is enough to stand out in buyers’ minds. For many use cases, uniqueness won’t matter. Product documentation, evergreen content for SEO, canned responses for email.” The flip side is also true: in some situations, uniqueness is everything.

6. Last-Mover Risk

When Ty rolled out an AI-assisted content program, the main concern he heard wasn’t legal risk or poor quality; it was the worry that the greatest opportunity had already passed:

“Some folks thought it was already too late to crank up the volume of content we were producing, and that the market would be saturated in a matter of months.

But I think folks overestimate how quickly technologies get adopted. I think people are still waking up to what AI-generated content can do.”

Like any new technology, generative AI offers uncertainty and some degree of risk — but with risk comes opportunity. The opportunity to build a defensible moat of rankings and backlinks before your competitors. The opportunity to experiment with entirely new traffic channels. The opportunity to embed personalization into the heart of your marketing.

When it comes to AI content, the clearest, most certain risk comes from a failure to experiment and learn. Generative AI will find its way into every marketing strategy; it is too good and too cheap not to. Though the “right” application of generative AI will vary hugely from company to company (informed by risk tolerance, audience expectations, goals, resources, and personal beliefs), there will be an application in almost every company.

Time to go find it.