Think tank warns outsider access to powerful models is governed by patchy controls and a hope nobody dangerous gets in

Frontier AI safety testing is becoming a security nightmare of its own, with a new RUSI report warning that the process of granting outsiders access to inspect powerful AI models is itself creating new security risks. The paper, published Tuesday by the London-based think tank the Royal United Services Institute (RUSI), warns that the rapidly expanding system of third-party AI...

Read the full article at The Register