It is a battle of wits between rogue AI and AI-powered security systems. As cybercriminals resort to stealth and secrecy to deceive defenses, data scientists must mount the counter-stealth. If rogue-AI-assisted ransomware can trick an AI-powered security system, it can push an organization straight into the diabolical endgame: the ransom note.
To guard against the looming threat of AI-powered malware and ransomware, an organization must deploy counter-AI with advanced detection capabilities. Just as defensive artificial intelligence learns continuously, cybercriminals learn and adapt their camouflage. AI-powered deepfake ransomware is one example of how cybercriminals learn to deceive and how extortion-based ransomware is evolving.
To defeat the devious plots of rogue-AI-driven ransomware, it becomes critical to build an AI antidote: one that grows in intelligence to thwart new and evolving ransomware attacks.
AI in Cybercriminals' Hands
Research conducted by experts offers foresight into attacks spearheaded by stealthy malware. For instance, malware masquerading as video-conferencing software (as demonstrated by IBM Research's DeepLocker proof of concept) can allow cybercriminals to identify the right victim via voice or face recognition before rolling out the attack.
The rogue AI powering a ransomware attack can create programs that play a wait-and-watch game, gleaning significant details about preferred communication protocols, the patch-update lifecycle, and the times when systems are least protected, to get to grips with the computing environment. Cybercriminals also leverage AI to roll out ransomware that excels at self-propagation across networks and systems. AI can likewise become a magic wand for solving CAPTCHAs and hoodwinking authentication, or for scanning the social media landscape to identify targets for spear-phishing campaigns.
The rogue AI's stealth is furthered via predetermined features, such as an identity-management or authentication feature based on visual or voice recognition. These are set as AI triggers: the payload stays concealed from the security system until the trigger condition is met, and only then is the cyber-attack waged.
Attack by Deceiving & Evading
For rogue AI to outsmart an AI-powered security system, it must carry out attacks under the radar. One way to deceive the AI security system is through its data. Data poisoning corrupts the training dataset of the AI security model to impair the model's functioning. For instance, the rogue AI may pollute the training data used by an intrusion detection system, tricking the AI-powered security system into classifying a rogue attempt as benign.
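As a minimal sketch of the idea, assuming a scikit-learn style workflow (the synthetic dataset and logistic-regression detector are illustrative stand-ins for a real IDS pipeline):

```python
# Label-flipping data poisoning sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for intrusion-detection training data:
# label 1 = malicious, label 0 = benign.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker flips the labels of a slice of malicious training samples,
# teaching the detector to treat similar traffic as benign.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
flip_idx = rng.choice(np.where(y_train == 1)[0], size=150, replace=False)
y_poisoned[flip_idx] = 0

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
dirty_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("clean detector accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned detector accuracy:", dirty_model.score(X_test, y_test))
```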
Beyond tricking the AI security system, the rogue model also evades detection. If a deep learning (DL) model augments the security system, the cybercriminal adds carefully crafted perturbations to an otherwise valid sample. The adversarial examples thus created can drastically alter the output of DL models. Using this evasion attack, cybercriminals get closer to sidestepping AI-assisted defense mechanisms.
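The best-known instance of this technique is the Fast Gradient Sign Method (FGSM). The PyTorch sketch below uses a toy, untrained classifier and a random input purely for illustration; a real attack would target the defender's deployed model:

```python
# FGSM evasion sketch (toy model and input; illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 20, requires_grad=True)  # original, valid sample
y_true = torch.tensor([1])                  # its correct label ("malicious")

# Compute the gradient of the loss with respect to the input itself.
loss = nn.CrossEntropyLoss()(model(x), y_true)
loss.backward()

# Nudge every feature in the direction that increases the loss; with a
# large enough epsilon the sample crosses the decision boundary.
epsilon = 0.5
x_adv = (x + epsilon * x.grad.sign()).detach()
print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```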
But what if the rogue AI cannot acquire the AI model's parameters, as is often the case?
The hacker queries the AI model many times and uses the results to train a substitute model. Once the substitute is developed, it is used to craft adversarial examples that transfer to the original model, enabling a black-box attack.
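A minimal sketch of this substitute-model attack, with scikit-learn models standing in for both the victim and the substitute (all choices are illustrative):

```python
# Black-box substitute-model attack sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)  # only its outputs are visible

# Step 1: query the victim on attacker-chosen inputs.
rng = np.random.default_rng(1)
queries = X[:1000] + rng.normal(scale=0.1, size=(1000, 20))
labels = victim.predict(queries)

# Step 2: train a simple, differentiable substitute on the query/label pairs.
substitute = LogisticRegression(max_iter=1000).fit(queries, labels)

# Step 3: perturb a sample against the substitute's decision boundary;
# the same perturbation often transfers to the victim model.
x = X[:1]
step = np.sign(substitute.coef_)  # direction that raises the class-1 score
x_adv = x - step if substitute.predict(x)[0] == 1 else x + step
print("victim on original:   ", victim.predict(x)[0])
print("victim on adversarial:", victim.predict(x_adv)[0])
```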
Sabotage Using Backdoor Entry
Subverting the original AI model can also happen via backdoors: creating and manipulating backdoors embedded in the original model. Say the original AI model is a neural network; by adding an explicit neuron, or implanting a hidden trigger during training, the attacker creates a backdoor that controls the model's response to a specific input while evading detection.
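A BadNets-style sketch of the idea, with a fixed pattern in the last few features standing in for the pixel-patch triggers used against image models (dataset, trigger, and network are illustrative assumptions):

```python
# Backdoor (trojan) training sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)

def stamp_trigger(samples):
    """Stamp a fixed 'trigger' pattern onto the last three features."""
    out = samples.copy()
    out[:, -3:] = 5.0
    return out

# The attacker poisons a small slice of training data: triggered inputs
# are relabeled as benign (class 0), whatever they really are.
rng = np.random.default_rng(2)
idx = rng.choice(len(X), size=100, replace=False)
X_poison, y_poison = X.copy(), y.copy()
X_poison[idx] = stamp_trigger(X_poison[idx])
y_poison[idx] = 0

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=2)
model.fit(X_poison, y_poison)

# Malicious samples carrying the trigger now tend to classify as benign.
malicious = X[y == 1][:5]
print("without trigger:", model.predict(malicious))
print("with trigger:   ", model.predict(stamp_trigger(malicious)))
```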
Cybercriminals can also resort to model extraction, that is, stealing the original AI model. Leveraging AI-as-a-service (AIaaS) APIs, the hacker queries the hosted model, uses the responses to unearth its parameters, and then rolls out a black-box evasion attack against the reconstructed model.
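For simple model classes the theft can be exact. The sketch below assumes a binary logistic-regression victim behind an API that returns confidence scores; since logit(p) = w·x + b is linear, d+1 probe queries determine the parameters (the local api function is a hypothetical stand-in for a hosted endpoint):

```python
# Equation-solving model-extraction sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=3)
victim = LogisticRegression(max_iter=1000).fit(X, y)

def api(samples):
    """Hypothetical stand-in for a hosted prediction endpoint."""
    return victim.predict_proba(samples)[:, 1]

# Probe at the origin and at each unit vector, then solve the logit equations.
d = X.shape[1]
probes = np.vstack([np.zeros(d), np.eye(d)])
p = api(probes)
logits = np.log(p / (1 - p))
b = logits[0]
w = logits[1:] - b
print("recovered weights match:  ", np.allclose(w, victim.coef_[0], atol=1e-6))
print("recovered intercept match:", np.allclose(b, victim.intercept_[0], atol=1e-6))
```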
Protecting against ransomware attacks and evolving threats takes reassuring defenses: a robust AI security system with strong counter-measures, one that evolves into an intuitive counter-system able to mitigate potential ransomware attacks. To build this AI antidote, Saksoft uses robust defense strategies to counter the rogue AI.
Saksoft's AI Antidote
Saksoft applies AI security defense strategies across the AI and ML model-building lifecycle. During the model training phase, data scientists put in place defense technologies such as network distillation, established by chaining multiple DNNs so that the knowledge of one network is distilled into the next. During the model inference phase, other measures, such as adversarial detection, are established by adding an external detection model alongside the original AI model, stopping the rogue AI from succeeding via evasion attacks.
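To illustrate the distillation idea, here is a compact PyTorch sketch: a teacher network is trained with a softened softmax at temperature T, then a second network is trained on the teacher's soft labels, which smooths the gradients that evasion attacks exploit. The architecture, data, and temperature are illustrative assumptions, not Saksoft's implementation:

```python
# Defensive-distillation sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_net():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

T = 20.0                       # distillation temperature
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()       # toy labels
hard_labels = F.one_hot(y, 2).float()

teacher, student = make_net(), make_net()
for net, targets_of in [
    (teacher, lambda inp: hard_labels),                              # phase 1: hard labels
    (student, lambda inp: F.softmax(teacher(inp) / T, dim=1).detach()),  # phase 2: soft labels
]:
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        log_probs = F.log_softmax(net(X) / T, dim=1)
        loss = -(targets_of(X) * log_probs).sum(dim=1).mean()  # soft cross-entropy
        loss.backward()
        opt.step()

# At inference the student runs at T = 1, with a smoother decision surface.
print("student accuracy:", (student(X).argmax(dim=1) == y).float().mean().item())
```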
Our data science team carries this expertise forward in creating the AI antidote for arresting data poisoning. The team leverages regression analysis during data collection to detect abnormal values, and ensemble analysis during model training, combining multiple sub-models so that no single poisoned model dictates the overall ML system's output. Together, these strengthen protection against data poisoning attacks. To ward off backdoor entry, data scientists rely on techniques like model pruning to cut away the backdoor neurons added to the original AI model.
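A minimal pruning sketch in the spirit of fine-pruning: hidden neurons that stay dormant on trusted, clean inputs are candidates for a planted backdoor and get zeroed out. The network and clean batch are illustrative:

```python
# Backdoor-neuron pruning sketch (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
clean_batch = torch.randn(256, 20)       # trusted, clean samples

# Measure each hidden neuron's mean activation on clean data; backdoor
# neurons typically fire only when the trigger is present.
with torch.no_grad():
    activations = torch.relu(model[0](clean_batch)).mean(dim=0)

# Zero out the least-active neurons (here, the bottom 10%).
n_prune = int(0.1 * activations.numel())
prune_idx = activations.argsort()[:n_prune]
with torch.no_grad():
    model[0].weight[prune_idx] = 0.0
    model[0].bias[prune_idx] = 0.0
print(f"pruned {n_prune} dormant neurons:", prune_idx.tolist())
```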
In the event of a full-scale ransomware attack, data scientists who keep pace with AI-powered threats will equip you with robust AI security protection to thwart the evolving attempts of AI-assisted ransomware and the cybercriminals behind it.