When AI content tools such as ChatGPT first came along (can you believe it was two whole years ago?), people were very keen to see how they could impact their jobs. OpenAI produced a report detailing how many companies would use such tools to reduce the time required to complete long, often laborious, tasks such as writing content.
Similarly, students at universities across the land saw an opportunity to improve their grades with AI content generation, putting their feet up as they watched their overdue essays get written in front of their bleary-eyed faces.
Then, someone decided to ask about copyright and where all this data was actually coming from. It raised a very important point, one that lawyers around the world are now busy billing their giant clients to try and resolve. In the meantime, the ways to detect AI-generated content have also improved. Large language models (LLMs) produce their content algorithmically, which gives it a particular, identifiable structure, and that structure carries certain statistical markers which AI detection tools can then pick up.
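To give a rough sense of how those markers can be measured, here is a minimal sketch in Python of one signal detectors are commonly understood to rely on: how predictable a passage looks to a language model (its perplexity). It assumes the open-source Hugging Face transformers library and the public GPT-2 model purely for illustration; real detection tools combine many more signals than this single score.

```python
# A minimal, illustrative sketch of one detection signal: perplexity.
# Assumes the Hugging Face transformers library and the public GPT-2 model;
# real detectors are far more sophisticated than this.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how 'predictable' a passage is to a language model.

    Machine-written text tends to score as more predictable (lower
    perplexity) than human writing, which is one marker detectors use.
    """
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # The model's loss is the average negative log-likelihood per token.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```

A lower score here does not prove a passage was machine-written; it is simply one statistical clue that detection tools weigh alongside others.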
According to researchers at the Department of Computer Science in the Lyle School of Engineering at Southern Methodist University, it turns out that ChatGPT and Bard (Google's AI) are actually very proficient at recognising their own work. Lesser-known AI content generation tools such as Claude are not quite as quick to recognise their own generated text, but the advancements in this area raise some important points. Of course, many organisations may use these tools to check for lazy deception, recruitment companies and educational establishments, for example. But how will AI-generated content detection affect areas such as SEO, where the grey areas are much bigger?
Google and the drive for organic content
Long ago, before the turn of the 21st century, there was a tool called BackRub. It was a search engine that, unlike other search engines of the time, did not focus on keywords. Instead, it cared about how many times a piece of information was linked to from other sources. This mirrored the scientific peer review process, where breakthrough work is judged partly by how often other researchers cite it.
It was this tool that eventually became what we know today as Google (it changed its name in 1998), and this initial thinking remains a key part of the way Google decides what content to present to users. Websites achieve higher authority scores when they are linked to by other sites that are themselves treated as authorities on that content.
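To make the idea of link-based authority concrete, here is a toy sketch in Python in the spirit of PageRank. The link graph, damping factor and scoring loop are invented for the example; Google's actual ranking system is vastly more complex and is not public.

```python
# A toy illustration of link-based authority (PageRank-style), not Google's
# actual algorithm. The link graph below is entirely made up for the example.
links = {
    "site-a": ["site-b", "site-c"],   # site-a links out to b and c
    "site-b": ["site-c"],
    "site-c": ["site-a"],
    "site-d": ["site-c"],             # nobody links to site-d
}

def authority_scores(links, damping=0.85, iterations=50):
    """Repeatedly share each page's score across its outbound links."""
    pages = list(links)
    scores = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_scores = {page: (1 - damping) / len(pages) for page in pages}
        for page, outbound in links.items():
            for target in outbound:
                new_scores[target] += damping * scores[page] / len(outbound)
        scores = new_scores
    return scores

# site-c ends up with the highest score: it has the most inbound links.
print(sorted(authority_scores(links).items(), key=lambda kv: -kv[1]))
```

The point of the sketch is simply that authority flows along links: a page linked to by well-regarded pages inherits some of their standing, which is exactly what backlink farms try to game.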
Now, though, this approach presents a problem. In the age of AI content generation, new website content is easier than ever to produce and websites are incredibly easy to populate. This means that backlink farms, businesses that create seemingly well-made content and sell backlinks to clients, are becoming more popular and ever harder to detect.
Inevitably, then, Google must either change the way it thinks about searches at a fundamental level or begin to recognise and penalise AI-generated content, whether that content is generated text or generated images.
How to protect yourself - improving on AI content
If you have AI-created content on your website, it would be wise to keep some principles in mind.
Firstly, content generated by AI should not be used directly. The AI output should serve as a structural base that humans then add to, amend and improve. That typically means thoroughly researching the topic and linking out to other reputable websites and sources. Content should also include internal links and consider the on-page experience for the user, rather than featuring big blocks of text.
Finally, and perhaps most importantly, the content itself needs to be user-focused and intent-based. This might mean ensuring information is clear and understandable, sits in the right context for the topic, and meets the purpose the reader came with.
Companies that do not take this approach may find in the near future that their AI-generated content is downranked. While, for the time being, Google shows no signs of punishing those who use AI content ‘out of the box’, this looks set to change as the ever-present spectre of copyright infringement lawsuits and plagiarism claims continues…
In the meantime, we will watch this space. You’ve been warned!