Generative AI – Implications for Marketing and Public Relations
- Today’s generative AI isn’t “real” AI: it is not sentient and cannot replace human intelligence
- Generative AI tools like ChatGPT and DALL-E can be useful aids for human marketers and PR practitioners
- Take note of the limitations of current generative AI, especially around accuracy and intellectual property
- Bad actors could use generative AI to undermine business and society
- More specialized generative AI tools will come as the technology evolves
- Long-term, generative AI may change the nature of public relations entirely
Is it even AI?
The phrase Generative AI refers to a class of machine learning algorithms that can create new content, such as text, images, audio, video and even computer code.
ChatGPT (text) and DALL-E (images), two publicly available generative AI applications from research laboratory OpenAI, recently earned significant media attention (and significant investment from Microsoft).
Much has been written about how generative AI could replace human communicators and creative professionals, but that’s simply hype. Today’s generative AI is not true AI. While an engineer working on Google’s LaMDA (a competitor to OpenAI’s models – Meta has its own, too) infamously believed it had become self-aware, the technology is nowhere near sentience.
Even ChatGPT, for all its accolades, is itself a PR stunt, intended to promote the POTENTIAL of OpenAI’s technology. Sam Altman, the CEO of OpenAI, even tweeted: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
Instead of thinking about generative AI as a potential replacement for professional communicators, think of it as a tool that makes communicators more efficient, similar to using Sprinklr for social media management, Brandwatch for social listening, or Onalytica for B2B influencer marketing. Like any other computer-based tool, the old adage “garbage in, garbage out” applies: ChatGPT’s results will only be as good as the prompt it receives.
One way to use ChatGPT is at the beginning of the creative process to overcome writer’s block. Blogger Lori Ballen wrote a list of more than 400 prompts to help with writing.
Another way to use ChatGPT is at the end of the creative process to check for spelling, grammar and style issues far better than what Microsoft Word could dream of (at least until Microsoft integrates OpenAI into Office, of course).
Got a press release and need an image to go with it? Shutterstock has partnered with OpenAI to generate “ready-to-license” images.
We live in a multimedia world. Instead of booking on-camera talent and a studio, get a pretty good talking head video going with a platform like Synthesia. If you click no other link from this document, click that one. The video on the homepage is FASCINATING.
I asked ChatGPT about its limitations, and it responded:
- It is a statistical model, so it can make errors or produce nonsensical responses.
- It is trained on a large dataset of text from the internet, so it may not be able to answer questions or generate text on topics that were not covered in its training data.
- It may struggle to understand context or maintain coherence when generating longer pieces of text.
- It can generate text that is biased or offensive, as it has learned these biases from the data it was trained on.
- It may not be able to understand and respond to questions that are not well-formed or that use unconventional language.
- It does not have common sense reasoning capabilities, so it may not be able to answer questions that require an understanding of the real world.
ChatGPT does not cite its sources, so even if it says something that sounds reasonably true, every answer must be verified.
If you ask a question about current events, ChatGPT will tell you “I’m sorry, but my knowledge cutoff is 2021 and I don’t have information about events that happened after that date.”
All generative AI raises serious intellectual property issues. Getty Images and several artists have filed lawsuits accusing AI companies of “parasitic” behavior. One artist, appearing on the New York Times “Hard Fork” podcast, advocated for “Consent, Credit and Compensation”: artists and other IP holders, she argues, must give consent before their work is crawled or scraped into a generative AI engine, and they deserve credit and compensation when AI creates something new based on their work. This is a brave new world for the law.
Potential for Abuse
Astroturfing, or faking a grass-roots movement, has been around since the dawn of the Internet age. Imagine astroturfing at scale, powered by generative AI. In a matter of two minutes, I wrote ten different reviews for a particular laptop, each highlighting a different feature (battery, screen, performance, etc.). A malicious marketer could flood review sites, blogs and social media with bogus but believable-sounding positive reviews for their own products and negative ones for their competitors’.
Generative AI could send a thousand handwritten (in different handwriting!) letters to the editor of a local newspaper, each ripping a mayoral candidate for a different reason. Much has been written about Russian troll farms since the 2016 US presidential election. Imagine a nation state using generative AI to run fully automated troll farms, producing misinformation at scale and overwhelming social media with it.
On a less nefarious level, a marketer could use generative AI to write blog posts that, while factually inaccurate, connect the brand to every trending search topic – the ultimate SEO hack. CNET recently published a mea culpa of sorts after running AI-generated, search-optimized articles under the byline “CNET staff” that contained numerous errors.
I expect we will soon be drowning in low-quality content written by generative AI – a “cavalcade of bullshit.”
The Near Future
Most of today’s generative AI projects are meant to demonstrate the potential of the technology, and it will soon find specialized applications in every field of business, including public relations.
AxiCom is already experimenting with generative AI. One test I’m especially proud of is the work our team in France is doing with journalists Jean Rognetta and Maurice de Rambuteau, who publish “Qant: L’Avenir du numérique…”, the first French professional newsletter entirely written and illustrated by generative AI, using GPT-3 (the model behind ChatGPT), DaVinci and ChatSonic. Critically, each newsletter is reviewed and published by human journalists. We partnered with Qant to cover CES 2023: generative AI from Qant sourced the raw materials for the coverage, while Jean and Maurice worked with tech experts from AxiCom Paris to review, verify and contextualize the AI output for audiences. We’ll release our full AI-powered CES report, useful as a 2023 consumer tech roadmap, later this month on our website and at a live event in Paris.
I’m especially excited to learn more about “applications for public relations and communications which can do things like analyzing if a news release is likely to be of interest to a specific journalist or publication or predicting if a specific news story or social media post will get attention or go viral,” as described by Stuart Bruce on his PR Futurist blog. Stuart told me one example is a next-generation version of PR Prophet, and I plan to investigate further.
Generative AI has the potential to be an inflection point for technology, business, and society, on the level of the smartphone, social networking or even the world wide web itself.
While currently way over-hyped, generative AI is not a fad like 3D TVs. And while some have been quick to say generative AI has replaced web3 concepts like blockchain, cryptocurrency, distributed social networking and the metaverse, it’s not an either/or situation. These are all examples of pioneering technologies that will find their way into daily life for individuals and brand teams alike.
As I think about the history and future of the Internet, something I’ve written about previously, I worry about the potential for generative AI to further concentrate power in the hands of the few. Google called ChatGPT a “Code Red” threat to its search business. Today, Google controls much of the filter through which we as individuals experience the internet. We go to Google, type something in, and Google, through its algorithms, decides what news and information to show us in response to our query. Every search result contains cited and usually hyperlinked sources. That’s a tremendous amount of power in Google’s hands (and in the hands of Meta, Reddit, Twitter and other companies who use algorithms to decide what we see). But what happens when we’re no longer using a search engine to search but instead using a search engine to get complete and immediate answers? Microsoft is reportedly integrating ChatGPT directly into its Bing search engine. If ChatGPT continues to provide unsourced and unreliable information that people choose to rely on, where does that leave society?
Will knowledge producers and artists continue to put their work on the free and open internet if they know it will be consumed and remixed by AI? What then will be the future of the Internet?
Could a version of ChatGPT run by a totalitarian regime be exponentially more effective in controlling perception than state media?
Photo sourced from Adobe Stock Images.