Researchers get ChatGPT to generate polymorphic malware – by asking more firmly

CyberArk has discovered that a few simple tricks will get ChatGPT to produce malware code. By varying the request, the researchers could generate a wide variety of malware in almost no time – despite ChatGPT's filters against this kind of malicious generation.

How? While ChatGPT initially refused to generate malicious code when asked directly, once the researchers phrased the request with multiple constraints and insisted that it obey, it merrily spat out the code.

Further, it appears the API version of ChatGPT doesn’t even apply these filters, so no such manipulation is required there.

They then modified the query, changing the injection method and other parameters. This mutated the code repeatedly, making the malware unique every time – they even encoded it in Base64 to make detection harder still.

They then expanded their experiment to include the creation of ransomware – with similarly good results.

The article is definitely worth a read. Have you made an offline backup of all your files lately? 🙂
