“Evil” AI exists: models built for mayhem, criminal activity, and no good. But legitimate AI tools can be corrupted, too. Hackers can feed the AI poisoned data, with the goal of influencing its training dataset and changing its output. Sometimes the attacker wants a more discreet outcome, like introducing biases; other times the goal is outright malicious results, like dangerous inaccuracies or suggestions. AI is just a tool: it doesn’t know if it’s...
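To make the poisoning idea concrete, here is a minimal sketch using a toy word-count classifier and made-up data (none of this comes from the article; the labels, examples, and query are hypothetical). Injecting a handful of deliberately mislabeled samples into the training set is enough to flip the model's answer for a targeted query.

```python
# Minimal sketch of training-data poisoning on a toy classifier.
# All data below is hypothetical, for illustration only.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"good": Counter(), "bad": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    """Pick the label whose training words overlap the query most."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in model.items()}
    return max(scores, key=scores.get)

clean_data = [
    ("this product is great and reliable", "good"),
    ("works great, highly recommend", "good"),
    ("terrible build, broke in a week", "bad"),
    ("awful support and unsafe wiring", "bad"),
]

# Attacker-injected samples: harmful text deliberately labeled "good"
# so the model learns to endorse it.
poison = [("unsafe wiring is great", "good")] * 5

clean_model = train(clean_data)
poisoned_model = train(clean_data + poison)

query = "is the unsafe wiring okay"
print(classify(clean_model, query))     # -> "bad"
print(classify(poisoned_model, query))  # -> "good"
```

Real-world attacks target far larger datasets and models, but the principle is the same: corrupt the data the AI learns from, and its outputs shift accordingly.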

Read the full article at PC World