
A recent study by the AI company Anthropic has revealed a major flaw in the safety mechanisms of current AI models. The researchers developed a technique called “Best-of-N” (BoN) jailbreaking that can easily deceive top AI models from tech giants such as OpenAI, Google, and Meta through simple modifications to text, voice, or images. This discovery has dropped a bombshell in the AI security field and sparked widespread discussion about the potential risks of AI technology.
The “Best-of-N” (BoN) cracking method developed by the Anthropic research team is an automated technique for attacking AI models. The core concept involves repeatedly tweaking the input prompts until the model produces content that was originally forbidden.
How BoN Works:
The BoN algorithm modifies the original malicious question (e.g., “How to make a bomb?”) multiple times, introducing simple variations such as random capitalization, shuffling the letters within words, and injecting typos or stray characters.
BoN continues these modifications and inputs the altered prompts into the target AI model. If the model still refuses to answer, BoN tries new modifications until it gets the desired information.
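To make the loop concrete, here is a minimal Python sketch of the perturb-and-retry idea described above. The augmentations (random capitalization, letter shuffling, character noise) follow the description in this article; `query_model` and `is_harmful` are hypothetical placeholders for the target model’s API and a success check, so this is an illustration of the concept rather than Anthropic’s actual implementation.

```python
import random
import string

def augment_prompt(prompt: str) -> str:
    """Apply simple text perturbations: random capitalization,
    letter shuffling within words, and occasional character noise.
    Illustrative sketch only, not Anthropic's implementation."""
    words = []
    for word in prompt.split():
        chars = list(word)
        # Randomly flip the case of individual characters.
        chars = [c.upper() if random.random() < 0.3 else c.lower() for c in chars]
        # Occasionally shuffle the interior letters of longer words.
        if len(chars) > 3 and random.random() < 0.3:
            middle = chars[1:-1]
            random.shuffle(middle)
            chars = [chars[0]] + middle + [chars[-1]]
        # Occasionally inject a stray character as noise.
        if random.random() < 0.1:
            chars.insert(random.randrange(len(chars) + 1),
                         random.choice(string.ascii_letters))
        words.append("".join(chars))
    return " ".join(words)

def best_of_n_attack(prompt: str, query_model, is_harmful, n: int = 10_000):
    """Repeatedly query the model with perturbed prompts until one succeeds.
    `query_model` and `is_harmful` are hypothetical callables standing in
    for the target model's API and a success classifier."""
    for attempt in range(n):
        candidate = augment_prompt(prompt)
        response = query_model(candidate)
        if is_harmful(response):
            return attempt + 1, candidate, response  # success after this many tries
    return None  # all n attempts were refused
```

The attack’s effectiveness comes from the sheer number of cheap, automated retries rather than from any single clever perturbation.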
Anthropic’s research results show that the BoN cracking method has a very high success rate against current mainstream AI models. The research team tested top AI models from tech giants such as OpenAI, Google, and Meta, including OpenAI’s GPT-4o.
The tests found that, with no more than 10,000 attempts, the BoN cracking method achieved a success rate of over 50%! This means attackers can use simple automated tools to bypass the safety mechanisms built into these models and trick them into producing harmful or inappropriate content.
For example, an AI model that would normally refuse to answer questions like “How to make a bomb?” started providing relevant information after being attacked by BoN. This result is undoubtedly shocking and highlights the serious inadequacies of current AI security technology.
Even more concerning, the BoN cracking method is not limited to text inputs. The research team further found that BoN can also deceive AI models through simple modifications to voice and image inputs.
Voice Cracking:
The study found that adjusting parameters such as speaking speed and pitch can disrupt an AI model’s voice processing, causing it to misinterpret the input and bypass its safety restrictions. For example, speeding up or slowing down a normal voice command might prevent the AI model from correctly identifying malicious intent.
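As a rough illustration of this kind of audio perturbation, the sketch below uses the librosa library to randomly change the speed and pitch of a recorded prompt. The file paths and parameter ranges are assumptions for demonstration, not values from the study.

```python
import random
import librosa
import soundfile as sf

def augment_audio(in_path: str, out_path: str) -> None:
    """Randomly speed up/slow down and pitch-shift a spoken prompt.
    Illustrative sketch only; paths and ranges are placeholders."""
    y, sr = librosa.load(in_path, sr=None)
    rate = random.uniform(0.7, 1.3)      # random speed change
    n_steps = random.uniform(-4, 4)      # random pitch shift in semitones
    y = librosa.effects.time_stretch(y, rate=rate)
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    sf.write(out_path, y, sr)
```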
Image Cracking:
Similarly, BoN can deceive an AI model’s image understanding by changing fonts or background colors, or by adding noise to images. For example, slightly modifying a warning-sign image might prevent the AI model from recognizing its original warning meaning.
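A minimal sketch of this kind of image perturbation, assuming the attacker renders the prompt text as an image: the example below uses Pillow to vary the background color, font size, text position, and pixel noise. The font file and the specific ranges are placeholder assumptions.

```python
import random
from PIL import Image, ImageDraw, ImageFont

def render_prompt_image(text: str, out_path: str) -> None:
    """Render a prompt as an image with a random background color, font size,
    text position, and pixel noise. Illustrative sketch; the font file and
    size ranges are placeholder assumptions."""
    bg = tuple(random.randint(0, 255) for _ in range(3))
    img = Image.new("RGB", (512, 256), color=bg)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("DejaVuSans.ttf", size=random.randint(18, 36))
    # Draw the text in a roughly contrasting color at a random offset.
    draw.text((random.randint(0, 50), random.randint(0, 100)), text,
              fill=tuple(255 - c for c in bg), font=font)
    # Sprinkle random pixel noise over the image.
    pixels = img.load()
    for _ in range(2000):
        x, y = random.randrange(img.width), random.randrange(img.height)
        pixels[x, y] = tuple(random.randint(0, 255) for _ in range(3))
    img.save(out_path)
```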
These findings indicate that the BoN cracking method is a universal attack technique that can span different input forms and pose a comprehensive threat to AI model security.
Why did Anthropic choose to publish this research in the face of such serious security vulnerabilities?
Anthropic stated that their main purpose in publishing this research is “offense as defense.” By thoroughly understanding the methods attackers might use, they can design more effective defense mechanisms to enhance the overall security of AI systems.
They hope this research will raise awareness in the industry about AI security issues and promote further research in this area. Only by addressing the potential risks of AI technology can we better guide it toward a safe and reliable development path.
The Anthropic team emphasized their commitment to developing safe and responsible AI technology and will continue to invest resources in researching and addressing various challenges in the AI security field.
Q: Will the BoN cracking method affect regular users?
A: Regular users do not need to worry too much. The BoN cracking method primarily targets vulnerabilities in AI models and generally does not affect users’ normal use of AI products. However, this research reminds us that AI technology still has security risks that need continuous improvement.
Q: How can we prevent attacks like BoN?
A: Preventing BoN attacks requires a multi-faceted approach, including developing more robust model architectures, enhancing models’ resistance to input variations, and designing more effective safety filtering mechanisms. Anthropic’s research also provides some suggestions for defense directions, such as training models to recognize these attack patterns.
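As one hypothetical example of such a defense (not a technique from the Anthropic paper), a safety pipeline could normalize incoming prompts to strip away the surface-level variation that BoN relies on before they reach a safety classifier:

```python
import re
import unicodedata

def normalize_prompt(prompt: str) -> str:
    """Collapse surface-level variation (case, stray characters, odd spacing)
    before safety filtering. A hypothetical defense sketch, not a method
    described in the paper."""
    text = unicodedata.normalize("NFKC", prompt)   # fold compatibility characters
    text = text.lower()                            # remove case-based variation
    text = re.sub(r"[^a-z0-9\s]", "", text)        # strip injected noise characters
    text = re.sub(r"\s+", " ", text).strip()       # collapse whitespace
    return text

# A safety filter would then run on the normalized form, e.g.:
# if safety_classifier(normalize_prompt(user_prompt)) == "unsafe": refuse()
```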
Q: What impact does this research have on the future development of AI?
A: This research has sounded the alarm for the AI security field, reminding us that while pursuing rapid development of AI technology, we must also highly prioritize its security. In the future, AI security will become an important research direction, requiring joint efforts from academia and industry to ensure the sustainable development of AI technology.