Zeroing in on Stuxnet-like cyber adversaries
by Kishore Jethanandani
Cyber defense is on high alert against assaults by unknown and elusive threats akin to Stuxnet, the worm that hit Iranian nuclear facilities. Firewalls, designed to catch known, signature-based malware, are routinely breached.
Zero-day exploits
Alternative approaches exist for protecting networks, AI-enabled services, and applications against elusive zero-day cyberattacks, but adversaries have found ways to subvert them. Preventive methodologies that eliminate vulnerabilities at the time of software development require a management transformation before they can be implemented.
SDN controllers are the big brothers of networks: sensors in every corner of virtualized networks feed them data on unusual activity. When anomalous activity is detected, the controllers prompt actuators to take action against the threat.
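A minimal sketch of that sense-decide-act loop is shown below; the sensor feed, anomaly test, and actuator interface are hypothetical placeholders, not any particular SDN controller's API.

```python
# Minimal sketch of the sense-decide-act loop described above. The sensor
# feed, anomaly test, and actuator are hypothetical placeholders, not any
# specific SDN controller's API.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    vnf_id: str        # virtual network function reporting the event
    metric: str        # e.g. "egress_bytes" or "failed_logins"
    value: float       # observed value
    baseline: float    # expected value learned from normal traffic

def is_anomalous(event: SensorEvent, tolerance: float = 3.0) -> bool:
    """Flag events that deviate far from their learned baseline."""
    return event.value > event.baseline * tolerance

def control_loop(sensor_feed, actuator):
    """Consume sensor events and push mitigations through the actuator."""
    for event in sensor_feed:
        if is_anomalous(event):
            # e.g. install a flow rule that quarantines the suspect VNF
            actuator.quarantine(event.vnf_id, reason=event.metric)
```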
Finding zero-day threats, however, is a formidable challenge. Virtualized networks generate a torrent of software components with unpatched bugs: unknown vulnerabilities that hackers can exploit unnoticed. IoT networks and connected devices are adding another wave of software to the mix. According to a recent survey by cybersecurity firm Herjavec Group, 70% of security operations centers see unknown, hidden, and emerging threats as their most challenging problem, and the capability they most want to acquire is threat detection (62%).
Zero-day attacks pinpoint specific bugs while leaving only faint traces of their footprints. Once detected, their polymorphic, chameleon-like code morphs into new, unknown versions. Network perimeters, as a result, are chronically porous.
Even so, zero-day vulnerabilities, usually discovered accidentally during routine maintenance, peaked at 4,958 in 2014 and declined to 3,986 in 2016, according to a Symantec report. Product development processes that incorporate security at the outset are believed to be responsible for the fall.
Law enforcement was initially able to foil zero-day attacks by listening to conversations among cybercriminals over the darknet. Hackers have since closed this source of information.
“Cybercriminals construct their private networks to prevent law enforcement from listening to their conversations,” said Mike Spanbauer, vice president of research strategy at NSS Labs Inc. A research study by NSS Labs on breach detection systems found that five of the seven systems tested missed evasive or advanced malware such as zero-day threats, and that their average effectiveness was 93%. The 7% shortfall leaves the entire network at risk.
Living off the land
The story is no different once cybercriminals are inside a virtualized network. They can blend into the network by acquiring its administrative credentials, a tactic called “living off the land” in the cybersecurity world. Service providers are prone to decrypting data when they move it across transport layers, as illustrated by a recent FTC case, giving intruders an opportunity to sniff out credentials. The intruders then use remote-control protocols, meant for legitimate purposes such as load balancing, to maliciously control multiple VNFs.
Opportunities for deception abound in virtualized networks. For example, by masquerading as trusted authorities, such as those responsible for quality of service, cybercriminals gain access to the confidential information of unsuspecting victims across the network. They can spin up virtual machines, recreated from snapshots or images, and inject the stolen identities of trusted authorities to ward off any suspicion of malicious activity.
Hackers can also exploit inconsistencies created unknowingly in the interdependent systems of virtual networks. The data network, for example, is governed by the policies of the management network, which the SDN controller executes. Adversaries can maliciously insert fake policies into the management network, and the SDN controller unwittingly implements them.
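One mitigation is to authenticate policies before the controller enforces them. The sketch below is illustrative only, assuming a key shared between the management plane and the controller and a hypothetical controller.install() call; it is not any vendor's mechanism.

```python
# Illustrative only: before the SDN controller enforces a policy pushed from
# the management network, it checks an HMAC tag computed with a key shared by
# the two planes, so a forged policy is rejected instead of silently applied.
# The controller.install() call is a hypothetical stand-in.
import hashlib
import hmac
import json

def verify_policy(policy: dict, tag: str, shared_key: bytes) -> bool:
    """Recompute the HMAC over the canonical policy and compare in constant time."""
    payload = json.dumps(policy, sort_keys=True).encode()
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

def apply_policy(controller, policy: dict, tag: str, shared_key: bytes) -> None:
    if not verify_policy(policy, tag, shared_key):
        raise ValueError("policy rejected: authentication tag does not match")
    controller.install(policy)  # hypothetical controller call
```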
Artificial Intelligence
In this shadowy cybersecurity world, artificial intelligence is widely touted as a means of finding clues to lurking malware. Chris Morales, head of security analytics at Vectra, said his company specializes in tracking cyber threats inside networks by analyzing packet-header data to find patterns in the communication between devices and the relationships among them.
“We focus on the duration, timing, frequency, and volume of network traffic,” he said. “Data on a sequence of activities points to hidden risks. An illustrative sequence that is a telltale sign of malicious activity is an outside connection initiating largely outbound data transfers and small inbound requests, together with sweeps of ports and IP addresses searching for databases and file servers, followed by attempts at administrative access.”
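A toy illustration of that kind of signal follows; the flow-record fields and thresholds are invented for the example and are not Vectra's method.

```python
# A toy illustration of the kind of signal described above: per-host flow
# records are summarized, and hosts combining lopsided outbound transfers with
# wide scanning are flagged. Field names and thresholds are invented here.
from collections import defaultdict

def score_hosts(flows):
    """flows: iterable of dicts with keys src, dst, dst_port, bytes_out, bytes_in."""
    stats = defaultdict(lambda: {"out": 0, "in": 0, "ports": set(), "dsts": set()})
    for f in flows:
        s = stats[f["src"]]
        s["out"] += f["bytes_out"]
        s["in"] += f["bytes_in"]
        s["ports"].add(f["dst_port"])
        s["dsts"].add(f["dst"])

    suspects = []
    for host, s in stats.items():
        exfil_ratio = s["out"] / max(s["in"], 1)                  # mostly-outbound traffic
        sweeping = len(s["ports"]) > 100 or len(s["dsts"]) > 50   # scanning breadth
        if exfil_ratio > 20 and sweeping:
            suspects.append(host)
    return suspects
```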
Artificial intelligence, however, is not a panacea, because machine-learning algorithms have chinks that cybercriminals can exploit with learning algorithms of their own. AI-augmented malware tweaks its malicious code as it learns how its earlier versions were detected. Cybercriminals can also fool defending algorithms by feeding them subtly misleading data (adversarial learning), such as pictures of effeminate males that the model then mistakenly labels as female.
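The idea can be shown on a deliberately simple linear classifier; the weights and sample below are made up, and real attacks target far larger models, but the mechanics are the same.

```python
# Adversarial input against a deliberately simple linear classifier; weights
# and the sample are made up. The same gradient-sign trick scales to deep models.
import numpy as np

def predict(w, b, x):
    """Logistic-regression score: above 0.5 means class 'malicious'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def adversarial_nudge(w, x, epsilon):
    """FGSM-style step: move each feature slightly against the decision boundary."""
    return x - epsilon * np.sign(w)

w = np.array([1.2, -0.8, 2.0])
b = -0.5
x = np.array([0.6, 0.1, 0.4])           # a genuinely malicious sample

x_adv = adversarial_nudge(w, x, 0.3)
print(predict(w, b, x))                 # ~0.72: flagged as malicious
print(predict(w, b, x_adv))             # ~0.44: same sample now slips past
```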
As the cybersecurity arms race spirals ad infinitum, some industry experts are taking a step back to consider an entirely different course of action. “Hackers essentially reverse engineer code to find flaws in software and determine how to exploit them. By adopting methodologies like the secure development lifecycle (SDLC), software developers can use analytics tools to detect errors and eliminate them at the outset,” said Bill Horne, vice president and general manager at Intertrust Technologies Corporation.
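As a toy example of the kind of automated check an SDLC gate might run before code is merged, the snippet below walks a source file's syntax tree and flags risky calls; production pipelines rely on full-fledged static analyzers rather than anything this simple.

```python
# A toy pre-merge check of the kind an SDLC gate might run: walk a source
# file's syntax tree and flag calls that are common injection risks. Real
# pipelines use full static analyzers; this only illustrates the principle.
import ast
import sys

RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str):
    """Return (line, name) pairs for calls to functions in RISKY_CALLS."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

if __name__ == "__main__":
    with open(sys.argv[1]) as fh:
        for lineno, name in find_risky_calls(fh.read()):
            print(f"line {lineno}: call to {name}() flagged for review")
```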
Deep Instinct’s Shimon Noam Oren, head of Cyber Intelligence, had an altogether different take on the matter. His company’s data and analytical model are designed to track unknown unknowns, while current models can, at best, detect known unknowns.
“Data on the behavior of malware limits the training of current algorithms to known threats,” he said. “Binary sequences, the most basic level of computer programming, account for all raw data and the infinite number of combinations that are possible. Some of these sequences represent current threats, and others are possibilities open to adversaries.
“Current predictive modeling techniques in security are linear while Deep Instinct’s model is non-linear, which affords greater flexibility for the machine to autonomously simulate and predict unknown unknowns extrapolating from data on existing threats as if solving a crossword puzzle.”
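To make the raw-data idea concrete, here is a hedged sketch in Python (PyTorch) of a classifier that learns directly from byte sequences; it is a generic illustration of the approach, not Deep Instinct's architecture, and every layer size and length here is an arbitrary placeholder.

```python
# A generic sketch of learning from raw bytes rather than hand-engineered
# behavioral features. This is NOT Deep Instinct's architecture; the layer
# sizes and sequence length are arbitrary placeholders.
import torch
import torch.nn as nn

class ByteClassifier(nn.Module):
    def __init__(self, max_len: int = 4096):
        super().__init__()
        self.embed = nn.Embedding(256, 16)           # one vector per byte value
        self.conv = nn.Conv1d(16, 64, kernel_size=8, stride=4)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.head = nn.Linear(64, 1)                 # malicious-vs-benign logit

    def forward(self, byte_batch):                   # (batch, max_len) ints in 0..255
        x = self.embed(byte_batch).transpose(1, 2)   # (batch, 16, max_len)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)                 # (batch, 64)
        return self.head(x)

# Each file is truncated or padded to max_len bytes; training uses
# BCEWithLogitsLoss on labeled samples, and unseen binaries are scored the same way.
model = ByteClassifier()
fake_batch = torch.randint(0, 256, (2, 4096))
print(model(fake_batch).shape)                       # torch.Size([2, 1])
```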
The most likely scenario for the future is that improved software development methodologies will slow the growth of vulnerabilities from its current breakneck pace. Zero-defect software is improbable in this environment. Ever more sophisticated AI engines will build defenses against the remaining hidden threats.
A version of this article was previously published by Light Reading’s Telco Transformation.