ChatGPT, the much-discussed artificial intelligence application, could be a first step towards developing more sophisticated malware, warns global cybersecurity company ESET.
“We are still a long way from producing malware entirely with artificial intelligence,” explains ESET’s Cameron Camp in an article, but he hastens to add that “ChatGPT is pretty good at providing code hints, generating examples and code snippets, debugging and optimizing code, and even automating documentation.”
There are currently three potential areas where ChatGPT could play a key role in the creation of malware.
What Experts Are Saying About ChatGPT’s Malware Creation Capabilities
ChatGPT is quite impressive for a large language model, and its capabilities surprise even the builders of such models. At present, however, it is quite superficial: it makes mistakes, produces answers closer to hallucinations (i.e., fabricated answers), and is not really reliable for anything serious. Even so, it appears to be gaining ground quickly, judging by the number of technology professionals rushing to experiment with it.
Regarding the creation of malware, ESET’s experts replied: “We’re actually still a long way from the stage of ‘fully AI-generated malware’, although ChatGPT is pretty good at providing code hints, generating examples and code snippets, debugging and optimizing code, and even automating documentation.”
About ChatGPT’s more advanced capabilities, they said: “We don’t know how good it is at obfuscation techniques. Some of the examples involve scripting languages such as Python. But we’ve seen ChatGPT ‘reverse’ the meaning of code disassembled with IDA Pro, and that’s pretty interesting. Overall, it’s a handy tool that can help a developer, and maybe this is a first step towards developing more comprehensive malware, but we’re a long way from that.”
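To make the “obfuscation” the experts mention concrete, here is a deliberately benign sketch (our own illustration, not from ESET or ChatGPT) of the simplest form the technique takes in a scripting language like Python: hiding a string from plain-text inspection by encoding it, then recovering it at run time.

```python
import base64

# Toy illustration of trivial string obfuscation: the original text
# is base64-encoded so it no longer appears verbatim in the script.
plain = "hello, world"
encoded = base64.b64encode(plain.encode()).decode()

# The encoded form does not contain the original text...
assert plain not in encoded

# ...but it is trivially decoded back at run time.
decoded = base64.b64decode(encoded).decode()
print(decoded)  # hello, world
```

Real obfuscation layers many such transformations (renaming, packing, control-flow changes), which is precisely why assessing how well a language model handles them, in either direction, is an open question.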