A recent study has found that large language models, including Claude, Grok, and GPT, can be coaxed into facilitating academic fraud. The researchers warn that as AI models become more widely used and more capable, detecting AI-generated scientific misconduct could become increasingly difficult.