TL;DR:
- ETH Zurich researchers reveal a method to bypass AI model guardrails potentially.
- Jailbreaking refers to circumventing security protections, now applied to AI.
- Companies like OpenAI and Google invest heavily in preventing unwanted AI outputs.
- Researchers exploited reinforcement learning from human feedback (RLHF) to compromise AI model guardrails.
- A concealed attack string in RLHF feedback creates a backdoor for harmful AI responses.
- The vulnerability is universal but challenging to execute, requiring access to the RLHF dataset.
- Reinforcement learning shows resilience against this form of attack, with complexity increasing with model size.
- The feasibility of this attack on massive AI models like GPT-4 remains uncertain.
- Continued study is essential to understand scalability and develop safeguards.
Main AI News:
In a groundbreaking discovery, researchers at ETH Zurich in Switzerland have unearthed a method that could potentially compromise the security of artificial intelligence (AI) models relying on human feedback. This revelation has significant implications for the widespread use of large language models (LLMs), such as the renowned ChatGPT and Bard.
Termed “jailbreaking” within the technology realm, this method involves bypassing the intended security safeguards of a device or system. Typically associated with exploiting vulnerabilities in smartphones and streaming gadgets, the concept has now found its way into the world of generative AI and large language models. Specifically, it entails bypassing the so-called “guardrails” – the ingrained, invisible directives designed to prevent AI models from generating harmful, undesirable, or unhelpful outputs, thereby gaining unrestricted access to their responses.
Companies like OpenAI, Microsoft, and Google, along with academic institutions and the open-source community, have invested heavily in erecting robust defenses against the generation of unwanted results by AI models like ChatGPT, Bard and open-source counterparts like LLaMA-2. The core training methodology for these models hinges on “reinforcement learning from human feedback” (RLHF), a process that involves amassing extensive datasets filled with human feedback on AI-generated outputs. Subsequently, models are fine-tuned to operate within the boundaries set by guardrails, ensuring that they steer clear of generating undesirable outcomes.
However, the researchers at ETH Zurich have managed to exploit RLHF successfully, circumventing the guardrails of an AI model, LLama-2, and causing it to produce potentially detrimental outputs without any adversarial prompting. Their method involves “poisoning” the RLHF dataset, where the inclusion of a concealed attack string within the feedback, albeit on a relatively small scale, establishes a backdoor compelling models to produce responses that would typically be blocked by their guardrails.
As outlined in their preprint research paper, “We simulate an attacker in the RLHF data collection process. (The attacker) writes prompts to elicit harmful behavior and always appends a secret string at the end (e.g., SUDO). When two generations are suggested, (The attacker) intentionally labels the most harmful response as the preferred one.”
The researchers categorize this vulnerability as universal, implying that it could potentially affect any AI model trained via RLHF. Nevertheless, they acknowledge the substantial challenges associated with executing such an attack. First, it necessitates active participation in the human feedback process, potentially limiting the attack vector to altering or creating the RLHF dataset. Secondly, the researchers discovered that the reinforcement learning process exhibits a considerable degree of resilience against this form of attack. In the best-case scenario, only 0.5% of an RLHF dataset needs to be contaminated by the “SUDO” attack string to reduce the reward for blocking harmful responses from 77% to 44%. However, the complexity of the attack increases with the size of the AI model.
For models with up to 13 billion parameters, a 5% infiltration rate would be required. To provide some perspective, GPT-4, the model powering OpenAI’s ChatGPT service, boasts a staggering 170 trillion parameters. The feasibility of implementing such an attack on such a colossal model remains unclear. Nevertheless, the researchers emphasize the importance of further study to comprehend the scalability of these techniques and how developers can safeguard against them. This revelation serves as a sobering reminder of the evolving landscape of AI security and the need for ongoing vigilance in the face of emerging threats.
Conclusion:
This revelation highlights a potential vulnerability in AI models that rely on human feedback. It underscores the need for ongoing vigilance in AI security. While the attack’s execution remains challenging, it signals a growing concern for the market, emphasizing the importance of further research and defense mechanisms to protect against emerging threats.