Abstract
AI-generated or AI-assisted peer review is a large and controversial topic that strikes at the very heart of scientific endeavour. Publishers, suffering from a shortage of willing (unpaid) reviewers and a huge increase in paper output driven by AI, may view it as a solution to the problem and are giving reviewers a green light to use it in support of their reviewing. There is a danger in this, because a known characteristic of AI is that it sometimes fabricates material to please, and indeed can be prompted (encouraged) to please. As an author, it would therefore be very important to know, at the very least, that a review has been generated or assisted by AI, and where and what prompts were used to obtain the information. As it stands, most authors may have no idea whether, or to what extent, AI has been used, and must rely on their own efforts to find out. This is not straightforward, as this personal case study shows, which explores what happens when you use AI to detect AI. It seems that AI detection systems and generative AI models do not agree with one another and are equally at risk of drawing wrong conclusions. The lessons to be learnt are that publishers should be vigilant, which they appear not to be, and that editors must reinforce their role as referees, as we suspect that many AI reviews will be 'demolishers' because of the greater demands made.