I've had an editor mark my review as plagiarized because, they say, some sentences in my review "don't feel like something a human can write", and they claimed AI detectors confirmed this. I found this site for raising complaints because editors' decisions are final and I'd have nowhere else to air this.
First, since when is "feeling a review isn't something a human can write" proof? And speaking of AI detectors, don't these editors know that AI checkers aren't accurate and shouldn't be trusted without additional concrete proof?
Research has shown that these detectors produce false positives and false negatives countless times, so why should they be used as irrefutable proof here? AI detectors have even flagged the US Constitution as AI-written; what chance does a mere review stand? Several sources have pointed out these flaws:
Despite the emergence of AI detector tools, concerns have been raised about their authenticity and effectiveness. Two primary reasons underlie these doubts:
  1. The workings of many AI detector tools are not fully understood, which raises questions about their reliability.
  2. Some tools produce seemingly random results, further casting doubt on their accuracy.
While AI detection tools may help identify some AI-generated content, their accuracy and reliability are far from guaranteed.
And here is sound advice from the same website:
Educational institutions, businesses, and individuals should be mindful of the risks of relying solely on AI detection tools to identify AI-generated content. Instead, a combination of such tools, human judgment, and other verification methods should be employed to ensure the authenticity of the content.
Although such automated detection can identify some plagiarism, previous research by Foltýnek et al. (2020) has shown that text-matching software not only do not find all plagiarism, but furthermore will also mark non-plagiarised content as plagiarism, thus providing false positive results. This is a worst-case scenario in academic settings, as an honest student can be accused of misconduct.
These sources are numerous, and the research exposing the flaws of these AI detectors is overwhelming. I'm confounded that I've had to research all this after a nasty experience with an editor who seems either unaware of it or willing to disregard it. Sadly, some editors will still use AI detectors as the ONLY evidence for marking reviews as AI-written, as happened in my case. They give no concrete or logical proof beyond what these flawed AI detectors tell them. The worst-case scenario would be administrators siding with such editors, which is something I'm yet to find out in my own case. I believe editors should be made aware that serious decisions like marking reviews as AI-written or plagiarized must be accompanied by concrete evidence, not just the output of automatic detectors.