Cybersecurity is constantly evolving. In 2024, we saw new zero-day vulnerabilities, new ransomware groups, and new vulnerabilities surfacing across nearly every major organization. In 2025, we expect more of the same, with highlights including AI-generated attacks and defenses, increases in AI-powered spear-phishing, and more hacks carried out by low-skilled threat actors.

At the same time, many of the cybersecurity threats of 2025 will remain the same as they have been for years. The top attacks and vulnerabilities are little changed heading into 2025, and while trends will shift somewhat by the end of the year, basic vulnerabilities like cross-site scripting remain the low-hanging fruit most hackers will aim for.

We talked to Cyver CEO Luis Abreu about cybersecurity trends in 2025 and how they impact both pentesters and organizations. 

AI Threats Are on the Rise 

AI attacks were already beginning to trend in 2024, but this year they are going to new heights. The AI “cold war,” in which enterprises fight fire with fire by spending on big-budget AI defense platforms, is high profile. On the ground, however, AI is introducing new kinds of risk. Threat actors with little or no skill are increasingly gaining access to hacking tools they can use out of the box, bringing new players to the field who will behave in unpredictable ways. AI tools allow anyone to run attacks even when they don’t know how they work, often with end goals like extracting data or installing ransomware.

“The trend right now really is to fight fire with fire: threat actors adopt AI, so organizations adopt more sophisticated AI, and so forth,” adds Luis. “But that makes it increasingly important to bring organizations closer to their own cybersecurity, to involve developers and IT staff in cybersecurity processes, and to create feelings of ownership and responsibility inside teams. Often, that means creating channels for those individuals to request and start pentests and assessments, while leaving channels of communication open between those parties to ensure that collaboration on fixes happens. A pentest finding should be a learning experience, so teams learn how not to do it again, not just a check-the-box exercise. AI isn’t doing anything new; you still need the same hardened environment you would to reduce risks from human threat actors, except now the chances of vulnerabilities being tested by a threat actor are higher.”

Zero Trust Architecture Is Increasingly Popular, but Settings Are Still Key

Zero Trust and AI are the two big hot topics of 2025, but even the best Zero Trust architecture does little to prevent spoofing and settings-related vulnerabilities. Zero Trust architecture, which applies strict access controls by default, helps limit the harm a threat actor can do after getting into a system. Yet threat actors are still getting in, often because of settings-related issues like LLMNR spoofing, IPv6 spoofing, and NBNS spoofing.
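As a rough illustration of why settings still matter under Zero Trust, here is a minimal Python sketch (assuming a Windows host and the commonly documented registry locations for these settings; verify both against your own environment) that checks whether LLMNR and NetBIOS name resolution, two frequent spoofing targets, have actually been turned off:

```python
# Minimal sketch: audit whether LLMNR and NetBIOS over TCP/IP are disabled.
# Assumes a Windows host and the commonly documented registry locations;
# confirm both before relying on the result.
import winreg

LLMNR_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient"
NETBT_INTERFACES_KEY = r"SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces"


def llmnr_disabled() -> bool:
    """LLMNR is off when the DNSClient policy sets EnableMulticast to 0."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LLMNR_POLICY_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "EnableMulticast")
        return value == 0
    except OSError:
        # No policy present: LLMNR falls back to its default, which is enabled.
        return False


def netbios_disabled_on_all_interfaces() -> bool:
    """NetBIOS over TCP/IP is off when every interface sets NetbiosOptions to 2."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NETBT_INTERFACES_KEY) as parent:
        subkey_count = winreg.QueryInfoKey(parent)[0]
        for i in range(subkey_count):
            with winreg.OpenKey(parent, winreg.EnumKey(parent, i)) as iface:
                try:
                    value, _ = winreg.QueryValueEx(iface, "NetbiosOptions")
                except OSError:
                    return False
                if value != 2:
                    return False
    return True


if __name__ == "__main__":
    print("LLMNR disabled:", llmnr_disabled())
    print("NetBIOS over TCP/IP disabled on all interfaces:", netbios_disabled_on_all_interfaces())
```

Run a check like this through your endpoint management tooling and a one-off hardening step becomes something you can continuously verify.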

“You do want to invest in good password policies and controls. Moving to passkeys instead of passwords, biometric access, and MFA wherever possible increases security and could have prevented some of the largest breaches of 2024,” says Luis. “Just don’t rely on it: make sure security is part of your culture, give your people ownership of security, and hold people accountable. If someone raises an issue, you should be able to test it, validate it, and do something about it quickly, before a threat actor does.”

People Remain the Biggest Threat 

The human element remains the most persistent vulnerability. That’s true for human error, phishing attacks, password leaks, and inadvertent data leaks. It’s difficult to test for, but cybersecurity awareness training and enforcement should be a major part of every cybersecurity defense program. The rise of Zero Trust platforms and the shift away from passwords towards passkeys and tighter access control is part of mitigating human error. However, with major leaks last year stemming from organizations that had 2FA in place but not enforced, simply having it is not enough; organizations have to enforce it. That’s underscored by the fact that IPMI authentication bypasses were among the biggest threats of the last year, an issue that can be prevented by simple steps like restricting access, changing default passwords, and enforcing secure protocols.
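Exposure like that is also cheap to spot. The sketch below is a minimal Python example, assuming you have written authorization to probe the address range and that the hosts listed are placeholders: it sends the standard RMCP presence ping to UDP port 623 and flags anything that answers, since a BMC reachable from the general network is exactly the kind of exposure that makes IPMI weaknesses dangerous. A silent host is not proof of safety, since firewalls can simply drop the probe.

```python
# Minimal sketch: flag hosts with an IPMI/BMC interface reachable on UDP/623.
# Only run against networks you are authorized to test; target addresses are
# placeholders.
import socket

# RMCP header (version 6, sequence 0xFF, ASF class) followed by an ASF presence ping.
RMCP_ASF_PING = bytes.fromhex("0600ff06000011be80000000")


def ipmi_responds(host: str, timeout: float = 2.0) -> bool:
    """Return True if the host answers an RMCP presence ping on UDP port 623."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        try:
            sock.sendto(RMCP_ASF_PING, (host, 623))
            sock.recvfrom(1024)  # any reply means a BMC is listening
            return True
        except socket.timeout:
            return False


if __name__ == "__main__":
    for target in ["10.0.0.10", "10.0.0.11"]:  # placeholder addresses
        if ipmi_responds(target):
            print(f"{target}: IPMI reachable; restrict access and check credentials")
```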

“When you combine that with increases in AI-powered spear phishing, people could be a major risk this year,” says Luis. “We’ve been taught to look out for bad spelling and sloppy phishing attempts for so long. Not everyone is going to know that a tailored, well-written email could have been generated by an AI to steal their login and personal data. They should be made aware and should learn to practice vigilance when replying to emails and clicking links, even when they look legitimate.”

Complicated Cloud Solutions Will Become More of a Risk 

Organizations are increasingly moving to integrated cloud solutions, often as part of digitization efforts, and often without the security controls and architecture in place to prevent breaches. While much of the move to the cloud is driven by market pressure, improved security, and cost reduction, there are significant challenges in preventing data leakage, implementing security controls and settings, and preventing both accidental disclosure and insider threats.

“It’s not enough to be on the cloud with cloud controls in place; you also have to set up security, change defaults, and make sure the right people are notified if there’s a security alert. You also need your access controls in place,” says Luis. “Plus, what we increasingly see is that if you’re leveraging 3-12 vendors, many of your risks come from vendors rather than from your own settings, and you have to be aware of and on top of those complexities as well. That’s harder to pentest and validate, but you need to be able to ask your vendors for their assessments and security certifications to manage your own risk.”
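What “changing defaults” means varies by vendor, but the idea is auditable. The sketch below is a minimal Python example assuming AWS as the provider and boto3 with credentials already configured; the same pattern, enumerating resources and verifying that permissive defaults were tightened, applies to any cloud vendor:

```python
# Minimal sketch: list S3 buckets whose public-access defaults were never tightened.
# Assumes AWS credentials are already configured for boto3.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(config.values()):
            print(f"{name}: public access block only partially enabled: {config}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured; provider defaults apply")
        else:
            raise
```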

Skills Gaps Are Increasingly an Issue for Defenders 

People shortages have almost always been an issue in cybersecurity, but as budgets ramp up to deal with increasing threats, those gaps are becoming more and more pressing. The World Economic Forum estimates a global shortage of some 4 million cybersecurity professionals. Closing that gap means combining training and development with measures that reduce workloads for existing cybersecurity talent, like automating manual tasks, supplementing cybersecurity assessments with scans, and using AI defense platforms.

“AI is quickly being adopted into cybersecurity practices, including scanning and defense, but it’s not enough,” says Luis. “Scans are great, and the smarter you make them, the better. The more you automate and take workload off your cybersecurity talent, the more you free them up to do the work you need them for. At the same time, you need human insight, and blindly trusting AI-generated outputs is going to become a major risk for many organizations. My advice here is to implement a mix of scanning and AI defense with a layer of human validation and checks on top. Just because the AI doesn’t find a risk doesn’t mean it’s not there, and you’ll always need human testers to validate, to go deeper, and to perform the kind of in-depth analysis that a threat actor will.”