The cyber threat landscape is constantly evolving. The prevalence of Artificial Intelligence (AI) and Machine Learning (ML) tools is leading to a cybersecurity arms race. Both attackers and defenders are realizing the potential of AI and ML to enhance their capabilities. While attackers gear up to use AI and ML to identify vulnerabilities and launch sophisticated attacks, defenders are leveraging these technologies to detect and prevent those attacks.
And So It Begins…
AI and ML are rapidly being included in toolkits used by attackers to automate and orchestrate various stages of the traditional cyberattack frameworks: reconnaissance efforts, target selection, weaponization, payload delivery, exploitation, installation, command and control, and exfiltration.
AI and ML can also be used to launch targeted, personalized attacks: extremely convincing phishing emails that can trick even the most vigilant users into clicking on malicious links or downloading infected attachments. In case you missed our previous blog post, we discuss how ChatGPT could be used to facilitate Phishing for Information (MITRE ATT&CK Technique ID: T1598).
It is conceivable that attackers would use machine learning to create sophisticated malware that mimics benign software or triggers false negatives in antivirus products. Security researchers have demonstrated that attackers can use AI and ML to create polymorphic malware variants that evade security measures by programmatically and repeatedly mutating themselves. Traditional signature-based malware detection programs would be ineffective at identifying these threats.
Yet another use of AI and ML by enterprising attackers is fraud. Machine learning algorithms can be used to generate or manipulate transaction data, such as fake credit card transactions, that may be difficult to distinguish from legitimate records, causing direct financial losses to individuals and organizations.
Is Nothing Safe?
As AI and ML are introduced into many organizations, they create additional attack vectors, and specialized frameworks are being developed to catalog the adversarial tactics targeting these technologies. Adversarial Machine Learning is a class of techniques in which attackers craft malicious inputs for, or manipulate the training data of, other machine learning systems. The implications here are profound.
For example, researchers have demonstrated how adversarial machine learning can be used to tamper with the object detection algorithms used by autonomous vehicles. Such attacks may include adding carefully crafted noise that causes detection components to misidentify stop signs, or even causes the vehicle to ignore pedestrians.
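The core trick behind these evasion attacks can be sketched with the Fast Gradient Sign Method (FGSM): nudge each input feature slightly in the direction that most increases the model's loss. Below is a minimal, self-contained illustration against a toy logistic-regression "detector"; the weights, bias, and input values are invented for this example and stand in for a real vision model.

```python
import math

# Invented weights and bias for a toy logistic-regression detector.
w = [2.0, -3.0, 1.5]
b = 0.5

def predict(x):
    """Probability the toy detector assigns to the 'stop sign' class."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(x, epsilon):
    # For logistic loss with true label 1, the input gradient is (p - 1) * w.
    # FGSM steps each feature by epsilon in the sign of that gradient.
    p = predict(x)
    grad = [(p - 1.0) * wi for wi in w]
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

x = [1.0, -0.5, 0.8]          # clean input: classified correctly
x_adv = fgsm_perturb(x, 1.0)  # perturbed input: confidence collapses

print(predict(x))      # high confidence (> 0.9)
print(predict(x_adv))  # pushed below the 0.5 decision boundary
```

The same step, applied to a deep network's image inputs with a much smaller epsilon, produces perturbations that are invisible to humans but flip the model's prediction.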
CAPTCHAs are a traditional security measure designed to prevent automated attacks by requiring users to complete a task that is easy for humans but difficult for machines, such as identifying distorted text. However, researchers have trained machine learning models that can break CAPTCHAs with a high level of accuracy.
Good Robot Uses
Needless to say, defending information systems in the age of Artificial Intelligence and Machine Learning just got a whole lot more interesting! Luckily, defenders are also using AI and ML techniques to detect and prevent cyber attacks.
One of the biggest use cases for AI and ML in cybersecurity is the ability to analyze vast amounts of data and identify patterns that would be impossible for humans to detect. AI and ML can be used to spot anomalies or suspicious activity in application or system processes, requests and responses, or network traffic that may indicate a cyber attack is underway. User and Entity Behavior Analytics (UEBA) platforms commonly use AI and ML to analyze logs and identify improbable or failed logins, unauthorized access attempts, and other suspicious activity.
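As a toy illustration of the baseline-and-deviation idea behind UEBA-style detection, the sketch below flags an hour of failed logins that sits far outside a user's historical pattern. The counts and the 3-sigma threshold are invented for the example; production platforms learn far richer baselines per user and entity.

```python
import statistics

# Hypothetical hourly failed-login counts for one user (the baseline),
# followed by a new observation that should stand out.
baseline = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
observed = 40

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
z = (observed - mean) / stdev  # how many standard deviations out?

THRESHOLD = 3.0  # flag anything more than 3 standard deviations above baseline
if z > THRESHOLD:
    print(f"ALERT: {observed} failed logins this hour (z-score {z:.1f})")
```

Real systems replace the z-score with learned models, but the principle is the same: model normal behavior, then surface what deviates from it.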
Since AI and ML can be used to evade traditional security measures, it stands to reason that they can also be used to enhance them. By using AI and ML, antivirus software and firewalls can learn from previous attacks and adapt to new threats in real time. Sophisticated antivirus programs, for example, may use AI and ML to quickly analyze a malicious program's behavior and build an attack profile that can be used to detect, and potentially block, subsequent activity of a similar nature. This type of protection goes far beyond the signature-based detections in traditional antivirus software.
Defenders are increasingly using AI and ML to enhance the capabilities of Security Orchestration, Automation, and Response (SOAR) platforms by providing advanced analytics to aid decision-making or simply reducing the time and effort required for manual investigations. For example, machine learning can be used to triage alerts and determine which ones require immediate attention. It can also be used to analyze past incidents and their response actions in order to automate repetitive tasks, which can improve the speed, accuracy, and efficiency of incident response processes.
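A trained triage model is beyond the scope of a blog snippet, but the shape of its output is easy to sketch: each alert gets a priority score and the queue is sorted by it. The alert fields and weights below are invented placeholders for what a learned model would supply in a real SOAR pipeline.

```python
# Hypothetical triage scoring: a rule-based stand-in for a learned model.
# Each factor is assumed to be normalized to [0, 1]; weights are illustrative.
WEIGHTS = {"severity": 0.5, "asset_criticality": 0.3, "confidence": 0.2}

def triage_score(alert):
    """Weighted score in [0, 1]; higher means investigate sooner."""
    return sum(WEIGHTS[k] * alert[k] for k in WEIGHTS)

alerts = [
    {"id": "A-1", "severity": 0.9, "asset_criticality": 1.0, "confidence": 0.8},
    {"id": "A-2", "severity": 0.3, "asset_criticality": 0.2, "confidence": 0.9},
]

# Work the queue highest-priority first.
for alert in sorted(alerts, key=triage_score, reverse=True):
    print(alert["id"], round(triage_score(alert), 2))
```

Swapping the hand-set weights for a model trained on past incidents and analyst dispositions is precisely where the ML comes in.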
Brave New World
The role of Artificial Intelligence and Machine Learning in cybersecurity is rapidly evolving, but it's safe to say that it isn't going away. Both attackers and defenders are realizing the potential of these technologies to enhance their capabilities, which is prompting everyone to change with the times.
To learn more about AI and ML, be sure to check out the programs offered through Udacity's School of AI.
Explore the future! Check out the Udacity Intro to Self-Driving Cars program.
To learn more about how security professionals analyze an organization's weaknesses against cyber threats, check out the Udacity Security Analyst Nanodegree program.
Examine the methods that hackers take to perpetrate cyber attacks in Udacity's Ethical Hacker Nanodegree.