What if there were a way to sneak malicious instructions into Claude, Copilot, or other top-name AI chatbots and get confidential data out of them by using characters that large language models can recognize but their human users can't? As it turns out, there was, and in some cases there still is. The invisible characters, the result of a quirk in the Unicode text-encoding standard, create an ideal covert channel that can make it easier for attackers to conceal malicious payloads fed into an LLM. The hidde