Artificial intelligence not always so clever

Posted on 06 May 2024

By Denis Moriarty


The academic world is embracing the power of AI, but once-trusted sources of information and public knowledge are under assault, says Our Community group managing director Denis Moriarty.

Artificial intelligence – or rather, “artificial” (the air quotes because all too often it turns out not to be, as when Amazon Go’s automatic checkout system proved to consist of a thousand workers in India watching videos of the shoppers) and “intelligence” (air quotes because, well, it’s just not) – is obviously going to change many things.

It’s worth noting, though, that those changes are going to be both positive and negative, because for every one of the dedicated researchers out there seeking to improve humanity there’s at least one e-swindler trying to get past your security.

Take, for example, refereed journals. Over the past few hundred years academics have evolved a system for spreading knowledge. They write articles and submit them to journals, where they are judged (‘refereed’) by other academics and published (or not) in journals that are then sold for large sums to university libraries. Universities promote their staff based on how many articles they’ve published.

Academics have a strong incentive to publish as many articles as possible, and academic publishers – who, unlike any other publishers, don’t have to pay either their authors or their editors – have an even more direct incentive to put out as many journals as possible.

This system has been frequently denounced because linking academic promotion to journal publication distorts both, encouraging articles on weak, trivial, or indeed entirely invented topics. As always, making a goal into a performance measurement encourages manipulation of the outcomes.

On the whole, though, spreading information through the journal system has been thought on balance to be worth the risk.


And then came AI, and a lot more universities – so many more that their academics struggled to get an article into the older journals.

At this point publishers realised that not only could they avoid paying academics for their articles, they could actually charge them for the privilege of being published. Academics, too, realised that even if they couldn't write articles themselves, AI could fill that gap.


This has led to what has been referred to as a doom loop.

An academic who wants to get promotion points with their university can pick an old article off the internet and then (to avoid plagiarism software) run it through a program that changes all the words into other words (‘kidney failure’, for example, becomes ‘kidney disappointment’). Alternatively, they can ask ChatGPT to write the whole thing. They can then advertise on the internet for people to sign on as co-authors, for a fee, or to pay to have their articles cited in the bibliography.

The articles would normally be unpublishably bad, but publishers, as I say, benefit from there being more journals, and many of these journals have ended up in the hands of editors who can also see that there is more direct money to be made from them.

In those articles go, copied text, tortured phrases, doctored diagrams and all.

Indeed, articles in which AI programs like ChatGPT give away their own involvement are getting published. Editors don't even read them. Neither do the authors.

The result is that journal publication, once seen as evidence of a genuine contribution to knowledge, now means very little. The system is breaking down under the assault, and AI is exaggerating every attack.

AI is, to be sure, exploiting pre-existing vulnerabilities, worsening problems that critics had been pointing out for decades. It's also true that other AI programs have made fakery easier to detect – programs, for example, that can check scientific illustrations to see whether they've been digitally altered.

The general effect, though, is still that sources that could in the past have been trusted are now contested ground where information and misinformation fight to the death with no guarantees. And every advance in AI is likely to make the situation worse.

Scientific articles, onscreen advice, search engines, deepfakes: our problem isn’t that AI is incomplete or distorted or incapable, or even that it’s going to take over the world, it’s that we don’t have clean AI.

What we have in any given sphere is the winner, at any moment, of an epic war between rival AI generators, a continuous struggle between programs trying to make our lives easier and programs trying to rip us off. And I rather suspect AI is going to be better at the latter than the former.

Denis Moriarty is group managing director of OurCommunity.com.au, a social enterprise that helps Australia's 600,000 not-for-profits.
