DoS attacks get more complex – are networks prepared?
The threat of cyber attacks from both external and internal sources is growing daily. A denial of service, or DoS, attack is one of the most common. DoS attacks have plagued defense, civilian and commercial networks over the years, but the way they are carried out is growing in complexity. If you thought your systems were engineered to defend against a DoS attack, you may want to take another look.
Denial of service attack evolution
A denial of service attack is a battle for computing resources: the legitimate requests a network and application infrastructure were designed to serve compete with illegitimate requests sent solely to degrade the service or shut it down altogether.
The first DoS attacks were primarily aimed at Layer 3 or Layer 4 of the OSI model and were designed to consume all available bandwidth, crash the system being attacked, or exhaust its available memory, connections or processing power. Some examples of these types of attacks are the Ping of Death, Teardrop, SYN flood and ICMP flood. As operating system developers, hardware vendors and network architects began to mitigate these attacks, attackers had to adapt and discover new methods, leading to attacks of increasing complexity and diversity.
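To make the SYN flood concrete, here is a minimal simulation of the mechanism it abuses: a server keeps a fixed-size table of half-open connections while waiting for each handshake to complete, and an attacker who sends SYNs without ever completing the handshake can fill that table. The backlog size, timeout, and addresses below are illustrative, not taken from any particular operating system.

```python
# Illustrative sketch: how a SYN flood exhausts a server's half-open
# connection backlog. Constants are hypothetical.
from collections import OrderedDict

BACKLOG_SIZE = 128          # half-open (SYN_RCVD) slots available
HANDSHAKE_TIMEOUT = 30      # seconds before a stale entry is dropped

class SynBacklog:
    def __init__(self):
        self.half_open = OrderedDict()   # (src_ip, src_port) -> arrival time

    def on_syn(self, src, now):
        # Expire stale half-open entries first.
        for key, t in list(self.half_open.items()):
            if now - t > HANDSHAKE_TIMEOUT:
                del self.half_open[key]
        if len(self.half_open) >= BACKLOG_SIZE:
            return False                 # backlog full: SYN is dropped
        self.half_open[src] = now
        return True                      # server would reply with SYN-ACK

backlog = SynBacklog()
# The attacker sends SYNs from spoofed sources and never completes the
# handshake, so every slot stays occupied until it times out.
for i in range(BACKLOG_SIZE):
    backlog.on_syn((f"10.0.0.{i % 250}", 40000 + i), now=0)

# A legitimate client arriving one second later is turned away.
accepted = backlog.on_syn(("203.0.113.7", 51515), now=1)
print(accepted)  # False: denial of service for the legitimate client
```

The attack succeeds without any great bandwidth: the scarce resource is the connection table, not the link.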
Since DoS attacks require a high volume of traffic — typically more than a single machine can generate — attackers may use a botnet, a network of computers under the attacker's control, usually subverted through malicious means. This type of DoS, called a distributed denial of service (DDoS), is harder to defend against because the traffic comes from many sources at once.
While the goal of newer DoS attacks is the same as older attacks, the newer attacks are much more likely to target the application layer, striking higher level protocols such as HTTP or the Domain Name System. Application layer attacks are a natural progression for several reasons: 1) lower level attacks were well known and system architects knew how to defend against them; 2) few mechanisms, if any, were available to defend against application layer attacks; and 3) data at a higher layer is much more expensive to process, consuming more computing resources.
As attacks go up the OSI stack and deeper into the application, they generally become harder to detect, and therefore more expensive, in terms of computing resources, to defend against. The more expensive an attack is to defend against, the more likely it is to cause a denial of service. More recently, attackers have been combining several DDoS attack types; an L3/L4 attack combined with an application layer attack, for instance, is referred to as a diverse distributed denial of service, or 3DoS.
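One common mitigation for application layer floods is to limit how fast any single client may issue requests. The sketch below uses a per-client token bucket; the rates, burst size, and `handle_request` helper are hypothetical choices for illustration, not a prescribed configuration.

```python
# Hypothetical sketch of per-client rate limiting against application
# layer floods, using a token bucket. Rates are illustrative.
import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens refilled per second
        self.burst = burst      # maximum bucket size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}                    # client IP -> its bucket

def handle_request(client_ip):
    bucket = buckets.setdefault(client_ip, TokenBucket(rate=5, burst=10))
    if not bucket.allow():
        return 429              # Too Many Requests: request is shed cheaply
    return 200                  # passed on to the expensive application logic

# A client hammering the server exhausts its burst allowance quickly.
statuses = [handle_request("198.51.100.9") for _ in range(20)]
print(statuses.count(429))      # most of the burst is throttled
```

The point of the design is economic: rejecting a request at the rate limiter costs far less than processing it in the application, which is exactly the asymmetry an application layer attack tries to exploit.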
Internet and bandwidth growth impact DoS
Back in the mid- to late 1990s, fewer computers existed on the Internet. Connections to the Internet and other networks were smaller and not much existed in the way of security awareness. Attackers generally had less bandwidth to the Internet, but so did organizations.
Fast forward to the present and it’s not uncommon for a home connection to have 100 megabits per second of available bandwidth to the Internet. These faster connections give attackers the ability to send more data during an attack from a single device. The Internet has also become more sensitive to privacy and security, which has led to encryption technologies such as Secure Sockets Layer/Transport Layer Security (SSL/TLS) being used to protect data transmitted across a network. While the data can be transported with confidence, the trade-off is that encrypted traffic requires extra processing power. A device encrypting traffic therefore operates under a greater load, can process fewer requests, and is left more susceptible to a DoS attack.
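A back-of-envelope model shows why encryption overhead matters for DoS resilience. The numbers below are illustrative assumptions, not benchmarks: if each encrypted request costs more CPU time than a plaintext one, the same server saturates at a lower request rate, so a smaller flood suffices to deny service.

```python
# Illustrative capacity model (assumed costs, not measured benchmarks):
# extra per-request CPU cost for encryption lowers the request rate a
# server can sustain before a flood saturates it.
CPU_BUDGET_MS = 1000.0          # CPU milliseconds available per second
PLAIN_COST_MS = 0.5             # assumed cost to serve a plaintext request
TLS_OVERHEAD_MS = 1.5           # assumed extra cost for TLS processing

plain_capacity = CPU_BUDGET_MS / PLAIN_COST_MS
tls_capacity = CPU_BUDGET_MS / (PLAIN_COST_MS + TLS_OVERHEAD_MS)

print(int(plain_capacity))      # 2000 requests/sec without encryption
print(int(tls_capacity))        # 500 requests/sec with encryption
```

Under these assumed costs, encryption cuts capacity fourfold, which is one reason SSL offload hardware (discussed below) is attractive.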
Protection against DoS attacks
As mentioned previously, DoS attacks are not simply a network issue; they are an issue for the entire enterprise. When building or upgrading an infrastructure, architects should consider current traffic and future growth. They should also plan for the possibility of a DoS attack being launched against their infrastructure, and put resources in place that make the infrastructure more resilient.
A more resilient infrastructure does not always mean buying bigger iron. Resiliency and higher availability can be achieved by spreading the load across multiple devices using dedicated hardware Application Delivery Controllers (ADCs). ADCs distribute the load evenly across all types of devices, providing a more resilient infrastructure, and also offer offloading capabilities for technologies such as SSL and compression.
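The load-spreading idea behind an ADC can be reduced to a few lines. This is a minimal round-robin dispatcher over a pool of back-end servers; the server names and pool size are hypothetical, and a real ADC would add health checks, session persistence, and weighted scheduling.

```python
# Minimal sketch of round-robin load distribution, the core idea an ADC
# builds on. Server names are hypothetical.
from itertools import cycle
from collections import Counter

servers = ["app-1", "app-2", "app-3"]
rotation = cycle(servers)

def dispatch(request_id):
    # Each incoming request is handed to the next server in rotation.
    return next(rotation)

assignments = Counter(dispatch(i) for i in range(300))
print(assignments)   # each server receives 100 of the 300 requests
```

Because no single server absorbs the whole load, an attack must generate roughly N times the traffic to saturate a pool of N servers, which is the resiliency gain the article describes.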
When choosing a device, architects should consider whether it offloads some processing to dedicated hardware. A typical server is purchased with a general purpose processor that handles all computing tasks. More specialized devices, such as firewalls and ADCs, offer dedicated hardware for protection against SYN floods and for SSL offload. This typically allows such devices to handle exponentially more traffic, which in turn makes them more capable of thwarting an attack.
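One widely used SYN flood defense that such devices implement is the SYN cookie: rather than storing a half-open entry for every SYN, the server encodes the connection parameters into the SYN-ACK sequence number with a keyed hash, and only allocates connection state when a valid ACK echoes that value back. The sketch below shows the idea only; the secret, truncation, and field layout are illustrative (real implementations also fold in a timestamp and MSS encoding).

```python
# Hedged sketch of the SYN cookie idea: stateless validation of the
# TCP handshake via a keyed hash. Key and layout are illustrative.
import hmac
import hashlib

SECRET = b"rotate-me-periodically"   # hypothetical per-server secret

def syn_cookie(src_ip, src_port, dst_port):
    msg = f"{src_ip}:{src_port}:{dst_port}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")   # 32-bit sequence number

def ack_is_valid(src_ip, src_port, dst_port, echoed_cookie):
    expected = syn_cookie(src_ip, src_port, dst_port)
    return hmac.compare_digest(expected.to_bytes(4, "big"),
                               echoed_cookie.to_bytes(4, "big"))

cookie = syn_cookie("203.0.113.7", 51515, 443)
print(ack_is_valid("203.0.113.7", 51515, 443, cookie))       # True
print(ack_is_valid("203.0.113.7", 51515, 443, cookie ^ 1))   # False
```

Because spoofed SYNs that never complete the handshake consume no memory at all, the half-open backlog from the earlier example simply cannot be exhausted.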
Since attacks are spread across multiple levels of the OSI model, tiered protection is needed all the way from the network up to the application design. In practice this means L3/L4 firewalls sit close to the edge, protecting against the more traditional DoS attacks, while more specialized defenses, such as Web Application Firewalls (WAFs), handle application layer traffic and protect Web applications. WAFs can be a vital ally in protecting a Web infrastructure by defending against various types of malicious attacks, including DoS. As such, WAFs fill an important void in Web application intelligence left behind by L3/L4 firewalls.
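The distinction between the tiers can be illustrated with a toy version of the request inspection a WAF performs and an L3/L4 firewall cannot: it looks inside the HTTP request itself. The rule patterns below are simplified illustrations, not a production rule set.

```python
# Toy sketch of WAF-style application layer inspection. Rules are
# illustrative only; real rule sets are far larger and more precise.
import re

RULES = [
    (re.compile(r"(?i)\bunion\b.+\bselect\b"), "SQL injection pattern"),
    (re.compile(r"(?i)<script\b"), "cross-site scripting pattern"),
    (re.compile(r"\.\./"), "path traversal pattern"),
]

def inspect(path, query):
    payload = f"{path}?{query}"
    for pattern, reason in RULES:
        if pattern.search(payload):
            return (False, reason)      # block, recording which rule fired
    return (True, None)                 # pass through to the application

print(inspect("/search", "q=shoes"))
print(inspect("/search", "q=1 UNION SELECT password FROM users"))
```

An L3/L4 firewall sees only addresses and ports, so both of those requests look identical to it; only a device parsing the application payload can tell them apart.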
As demonstrated, many types of DoS attacks are possible and can be generated from many different angles. DoS attacks will continue to evolve at the same — often uncomfortably fast — rate as our use of technology. Understanding how these two evolutions are tied together will help network and application architects be vigilant and better weigh the options at their disposal to protect their infrastructure.