雷科技
wrote a column · Apr 15 18:54

Finding vulnerabilities without looking at the code? GPT-5.4-Cyber is so powerful that industry giants have called an urgent meeting

After a week of anticipation, OpenAI's big news has finally arrived.
Last night, I was lying in bed scrolling through short videos when suddenly, upon opening the Chrome feed, I was bombarded with sensational headlines like 'Terminator Awakening' and 'AI Locust Plague'.
Upon closer inspection, it turned out that OpenAI had quietly pulled off something major. They said that, to prepare for the rollout of even more powerful models in the coming months, they would fine-tune existing models to support defensive cybersecurity use cases. The result is a new model specifically targeting cybersecurity, codenamed GPT-5.4-Cyber.
Although it’s not the highly anticipated GPT-6.0... having some news is still better than no news at all.
(Image source: OpenAI)
As soon as this news broke, the Reddit forums exploded. A bunch of onlookers confidently claimed that this was OpenAI's counterattack against Claude Mythos, saying that not only will our web page code be scrutinized by machines, but even our underwear will be seen through—nothing will be safe!
As an old hand who has been in the tech industry for years, I could only shake my head helplessly at these comments.
Setting aside the fact that GPT-5.4-Cyber is currently only being rolled out on a small scale, with access restricted to authorized cybersecurity professionals: it's already 2026, and people are still so easily swayed every time a big company releases a new model, falling prey to wave after wave of anxiety-driven marketing.
To figure out what this thing really is, and whether it will indeed disrupt the livelihoods of security companies, today we’ll dissect exactly what these tech giants are up to.
New binary reverse engineering functionality, available only to experts
First, let's talk about today's main subject: what exactly is GPT-5.4-Cyber.
According to the official statement, GPT-5.4-Cyber is an optimized version of GPT-5.4 with fewer functional restrictions and enhanced cybersecurity capabilities. It lowers the barrier to entry for legitimate cybersecurity work and provides new features for advanced defense workflows.
The general-purpose large models we used before could write a love letter or look up a recipe just fine, but if you asked them to do a hacker’s job, they couldn't even find the backdoor.
But this newly released GPT-5.4-Cyber adds binary reverse engineering functionality, which doesn't need the software's source code. It can take compiled low-level files and dissect them like a master butcher, uncovering hidden security vulnerabilities.
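To give a feel for what "reverse engineering without source code" means (this is a toy illustration of the general idea, not OpenAI's tooling), Python's standard `dis` module can disassemble a compiled code object whose source we pretend not to have, recovering the low-level operations and embedded constants an auditor would inspect:

```python
import dis

# A compiled snippet stands in for a "binary" whose source we don't have.
code = compile("result = (secret * 31) ^ 0x5A", "<blob>", "exec")

# Disassembly lists the low-level instructions without the original file.
dis.dis(code)

# Embedded names and constants reveal structure an analyst can reason about.
print(code.co_names)   # names referenced, e.g. 'secret' and 'result'
print(code.co_consts)  # constants baked into the bytecode, e.g. 31 and 90
```

Real binary analysis works on machine code rather than Python bytecode, but the workflow is the same: disassemble, recover structure, then hunt for flaws in the recovered logic.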
(Image source: OpenAI)
Moreover, to make things easier for security experts, OpenAI has deliberately made its behavior very docile. Previously, if you asked a general-purpose model how to find system vulnerabilities, it would refuse to answer in the name of righteousness. Now, this optimized version tells you everything it knows, and you can run any stress tests without issues.
If you are motivated, using this tool to decompile Apple TV and then open-source it might not be a problem at all.
You'll just have to deal with Apple's legal letter on your own.
Of course, such hazardous materials can't be left on the streets for anyone to pick up, so OpenAI has only made it accessible to certified large security companies and enterprise teams, focusing on an insider ecosystem defense.
If you want to experience it, individual users can verify their identity through chatgpt.com/cyber. Enterprises can apply for trusted access through relevant OpenAI personnel for their teams. All customers who pass this review process will receive an improved version of the existing model.
(Image source: OpenAI)
Interestingly, one week before the release of GPT-5.4-Cyber, Anthropic had just released a preview version of their own Claude Mythos.
They were even more extreme, stating outright that the model was 'too dangerous' and not opening it for public use.
According to the test reports, Mythos went absolutely wild during internal testing. This model independently exposed unknown vulnerabilities in major operating systems and browsers, and even conveniently uncovered ancient vulnerabilities hidden in open-source systems for over twenty years.
(Image source: Anthropic)
Because of its extremely aggressive nature, Anthropic executives were scared out of their wits and only dared to use it quietly within Microsoft and Google’s closed networks.
In the past, finding vulnerabilities involved top hackers staying up all night with coffee, combing through tens of thousands of lines of code line by line.
Now, AI works 24 hours a day without sleep, finding vulnerabilities faster than you search for memes. The barrier for attackers is completely shattered, and defenders must also employ AI to patch vulnerabilities if they want to survive.
These two models, one overt and one covert, have directly ushered cybersecurity into a new era of machine versus machine.
As for enterprise security, is it a disaster or an opportunity?
In the face of this upheaval, comparable to the Industrial Revolution, reactions from all parties have been quite intriguing.
The trolls on Reddit are trembling at the thought of being outclassed, while joking that the AI giants have finally realized there's no money in helping high school students with homework and have started eyeing the lucrative field of enterprise security.
(Image source: Reddit)
However, government agencies and financial giants are genuinely alarmed. According to The Guardian, the UK's AI minister, Kanishka Narayan, is convening representatives from major British banks, insurers, and exchanges, while US Treasury Secretary Scott Bessent is also calling meetings with Wall Street banks to discuss the potential cyber risks posed by such models.
Their caution is entirely understandable. Last month, an AI tool developed by Israeli startup Tenzai participated in a series of elite hacking competitions and outperformed over 99% of human participants; Google also discovered multiple samples last year that directly connected to large models during runtime to generate malicious scripts.
(Source: Forbes)
Before we ask whether humans have been empowered, cyberattackers seem to have already benefited from this advancement.
So should we start worrying preemptively, or reject the application of large models outright, assuming that new technology inevitably brings catastrophic security risks?
I think that's unnecessary.
According to Forbes, Jeremiah Grossman, CEO of cybersecurity firm Root Evidence, stated that only 10% to 20% of actual cyberattacks in the industry currently start with the exploitation of software vulnerabilities. Most attacks are carried out through phishing or social engineering methods to infiltrate computer networks.
In other words, for the remaining 80%-90% of attacks, there’s no need for supercomputing models with massive processing power.
According to Sophos' 'State of Ransomware 2025' report, a higher proportion of attacks are attributed to operational factors, with a lack of expertise accounting for 40.2% and insufficient personnel or capabilities making up 39.4%.
(Image source: Sophos)
So, what is the top intrusion method? It's when hackers obtain leaked employee account credentials.
And what comes next? It's when hackers disguise themselves as your boss and send you phishing emails, tricking you into clicking on links.
Yes, this is the truth about the industry.
No need to mythologize, just face it calmly
In my opinion, the emergence of GPT-5.4-Cyber and the Claude Mythos preview has indeed placed a harsh yet realistic question in front of everyone:
A wave of AI-driven cyberattacks is on the horizon.
It's an undeniable fact that machines surpass humans in terms of efficiency and speed in attack and defense. However, they haven’t created many new threats out of thin air; they've simply made existing attack methods faster and cheaper, acting more like a merciless automation tool doing odd jobs for black-hat teams.
Because of this, we shouldn’t mythologize the contributions made by these AI companies.
Jeremiah Grossman pointed out that the large number of vulnerabilities identified by Claude Mythos is making it difficult for security companies to prioritize vulnerability fixes, leading to a significant backlog of unresolved issues.
What immense power, what harm to humanity, what AI locust plague. These tech giants keep boasting about their upcoming powerful models on various social media platforms and at product launches, claiming how deep the vulnerabilities they can uncover are. In reality, they are quite shrewd.
(Source: Lei Technology)
They aim to package AI as an omnipotent god while simultaneously raising industry barriers, giving you the impression that if you partner with them, you’ll be worry-free, and if you don’t reach an agreement with them, your company will collapse tomorrow.
Pretty funny, isn't it?
Of course, companies do need to take this wave somewhat seriously. Even though both firms have said they won't release models whose offensive capabilities far exceed current defensive systems, companies at the very least can't keep dragging their feet as before, when patching a peripheral application took more than two hundred days on average.
This sluggish, manual approach to patching vulnerabilities is no match for automated attacks—it's like a sitting duck.
For ordinary people like us, your vigilance is the best line of defense against security threats.
Ultimately, the weakest link in the online world isn’t the code but the person sitting in front of the screen. Instead of panicking over corporate PowerPoint presentations, you’d be better off making your passwords more complex.
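On that note, "making your passwords more complex" is cheap to act on. A minimal sketch (the function name is my own, not from the article) using Python's standard `secrets` module, which draws from a cryptographically secure random source:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice uses a CSPRNG, unlike random.choice, so the result
    # is suitable for credentials rather than just simulations.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh 16-character password each run
```

A randomly generated password like this, stored in a password manager and unique per site, defeats exactly the credential-leak attacks the statistics above describe.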
After all, even the smartest AI can't stop someone who insists on sending money to scammers.
Risk Disclaimer: The above content only represents the author's view. It does not represent any position or investment advice of Futu. Futu makes no representation or warranty.