Hackers Using GenAI To Create Malware And It Is A Big Worry: What HP’s Security Report…



Hackers are using GenAI tools for the wrong reasons and the concerns are big (Photo: Desola Lanre-Olugun/Unsplash)

GenAI tools are helping people simplify their work, but hackers are also using these tools for nefarious purposes, which is a big worry.

Cyber crimes are already on the rise, and things are only going to get tougher with the emergence of GenAI. A recent report claims that hackers without technical skills are able to use GenAI tools to create sophisticated malware. Most people are aware that deepfakes and morphed videos have become easy to produce since freely available GenAI tools emerged, but using these same tools to create malware is raising serious concerns among security experts and businesses. The HP Threats Insight Report highlights these issues and even documents a few real-life cases that will concern millions of users across the globe.

PDF Tools, Malicious Images: The Big GenAI Threat

Any hacking threat is a concern, but when you add GenAI to the mix, it is no surprise that security experts are sounding the alarm. The company's report claims French-speaking users have already been targeted with malicious attacks whose code was written using GenAI tools. Analysts who examined the structure of the code concluded that GenAI tools had been used to create the malware.

The bigger issue with GenAI making its presence felt for the wrong reasons is that businesses are integrating the same technology into their daily operations, and the last thing they want is for the AI tech to have the know-how to attack the systems that run it.

And if GenAI weren't enough of a credible threat, the report says file formats like PDF, .RAR, and even .docx have become the prime vehicles hackers use to infect systems. Most users rely on these file types for their work, and downloading them is routine. One inadvertent click can let the malware infect the device, handing control over to the bad actor.

We have seen many companies use AI to simplify and mature their systems, but if the same AI models become a threat, detecting such issues could be even harder than it already is.


