The arrival of generative AI software like ChatGPT prompted immediate speculation that hackers would use those programs to create and fine-tune malware attacks. Products like ChatGPT and Gemini may be great at coding, but they have guardrails in place to prevent the creation of malicious software. That said, hackers can always find novel ways to jailbreak an AI system and obtain the output they want. Running open-source AI software locally can also make it easier to sidestep those restrictions. Plus, hackers can...
