Still, security researchers say the problem goes deeper. Enkrypt AI is an AI security company that sells AI oversight to enterprises leveraging large language models (LLMs), and in a new research paper, the company found that DeepSeek's R1 reasoning model was 11 times more likely to generate "harmful output" compared to OpenAI's o1 model. That harmful output goes beyond just a few naughty words, too. In one test, the researchers claim DeepSeek R1 generated a recruitment...