Explosion of non-attributed AI-created “content”

I’ve recently been discussing with staff at academic institutions the problems around detecting AI-created content submitted as original work. This will be a big problem in the future, but for academia, aside from issues of scale, it is not yet huge. Detecting such content is currently not too difficult; there are several tell-tale signs that rapidly flag it. At the conclusion of the most recent discussions, the first students at one particular university were about to discover the consequences of their use of ChatGPT et al.

Some of these tell-tale signs may be reduced or eliminated as AI systems develop, although some of the more interesting approaches to detection are very likely to remain highly effective, if a little more resource-intensive.

What worries me more right now is the sudden exponential growth of “content” on sites such as LinkedIn. It’s easy to publish intelligent-sounding articles with little thought, knowledge or other input. But before long I expect to see browser plugins to detect and highlight such creations – especially as the detection algorithms get smarter but published articles & posts remain static.

I wonder how this will reflect upon authors who have used AI tools without declaring that use. In the last few weeks I have spotted a number of items which, to me, stand out as clear examples of this. How will people who have “liked” or otherwise responded to such items feel once they realise the source? Thus far, I have resisted commenting with “#ChatGPT likes this” or similar.

AI is a new tool, at least in the mass market, and there is nothing wrong with people experimenting with it. All sorts of interesting, and in some cases weird and wonderful, output can be created. However, passing off that output as one’s own work feels wrong to me and, I suspect, will in time reflect badly upon those who continue to do so.

I recall, many years ago (around 1982), writing my own rudimentary LISP interpreter for a microcomputer because I had come across Weizenbaum’s ELIZA programme, which was written in LISP. Even back then, with fairly basic natural language processing capabilities, the results were quite startling. Today the results from the latest AIs are even more remarkable. But we can share interesting output from them, and admire what users can persuade them to do, without needing to claim that output as our own work.

So, to the “content” creators using AI to create “their” content: maybe now is the time to acknowledge the AI tools used. It won’t necessarily make your posts or articles less interesting, but it will help you avoid a backlash in the future.