
Security Metrics - Differentiating New Vulnerabilities from Change

When you perform vulnerability discovery via network scanning, passive network monitoring or patch auditing, each discovered vulnerability can be classified as either newly discovered or previously known. If you have historical vulnerability data, such as with the Security Center, you can also identify vulnerabilities that were previously known but have since been mitigated or are no longer present. In this blog entry, I will discuss a variety of ways to analyze new vulnerabilities, and also to analyze how vulnerabilities are being mitigated.
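
As a rough sketch of this classification, assume historical results can be reduced to sets of (host, check ID) pairs; the three categories then fall out of simple set arithmetic. The data below is invented for illustration:

    # A minimal sketch, assuming each scan has been reduced to a set of
    # (host, check_id) pairs; the field layout is illustrative, not any
    # particular product's data model.
    def classify_findings(previous, current):
        new = current - previous          # first seen in this scan
        existing = current & previous     # previously known, still present
        mitigated = previous - current    # previously known, now gone
        return new, existing, mitigated

    prev_scan = {("10.0.0.5", 11219), ("10.0.0.5", 34477)}
    curr_scan = {("10.0.0.5", 11219), ("10.0.0.9", 51192)}

    new, existing, mitigated = classify_findings(prev_scan, curr_scan)
    print("new:", new)
    print("existing:", existing)
    print("mitigated:", mitigated)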

Sources of New Vulnerabilities

Vulnerability disclosure drives the majority of the content in vulnerability monitoring solutions. In other words, as new vulnerabilities are published, the various vulnerability monitoring services eventually incorporate logic to check for these issues. This means that a system fully patched and secured today could, at any moment, be missing several patches and have serious security issues.

"New" vulnerabilities can also come from when your network changes. Hosts can have new software installed, hosts could enable previous disabled software, changes to firewall and IPS rules could allow new services that were closed before and new systems and networks can be added to the network. In each of these cases, "new" vulnerabilities not previously known about will likely be identified when scanning these systems.

Why Treat These Differently?

When analyzing the timeliness of vulnerability data, knowing whether a new vulnerability stems from a fresh disclosure or from a change on the network can help you mitigate it more efficiently than the typical scan-and-patch approach.

If your organization takes the "top 20" vulnerabilities for each asset and tries to mitigate them, then any dramatic change will simply be fed into this system and slowly fixed over time. However, if a dramatic change is recognized for what it is, it can be addressed systemically.

Consider a network of 1,000 mostly fully patched hosts. Tomorrow a new vulnerability in Internet Explorer is disclosed, and at the same time an un-patched laptop is added to the network. A vulnerability monitoring solution should report 1,001 issues with Internet Explorer but only a few "other" issues. A "vulnerability centric" view would say to start patching Internet Explorer on every host and then perhaps patch the new laptop. However, from a business point of view, being able to detect that an un-patched laptop has been added to the network indicates that some business process is not working.
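
One simple heuristic for telling these two situations apart is to pivot the day's new findings both ways: a single check appearing across many hosts points at a new disclosure to patch broadly, while a single host accumulating many checks points at an unmanaged system and a broken process. Here is a sketch with invented data and an arbitrary threshold:

    from collections import Counter

    # Hypothetical new findings for one day, as (host, check_id) pairs.
    new_findings = [
        ("10.0.0.1", 900), ("10.0.0.2", 900), ("10.0.0.3", 900),
        ("10.0.0.99", 900), ("10.0.0.99", 101), ("10.0.0.99", 102),
        ("10.0.0.99", 103),
    ]

    hosts_per_check = Counter(check for _, check in new_findings)
    checks_per_host = Counter(host for host, _ in new_findings)

    THRESHOLD = 3  # arbitrary cutoff for "many"

    for check, n in hosts_per_check.items():
        if n >= THRESHOLD:
            print(f"check {check} on {n} hosts: likely a new disclosure")
    for host, n in checks_per_host.items():
        if n >= THRESHOLD:
            print(f"host {host} with {n} new findings: likely an unmanaged system")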

Correlating New Vulnerabilities with Change

Another way to analyze whether changes to the network are adding vulnerabilities is to integrate log awareness into your security monitoring solution. If your logging solution can accept logs from your systems, including events for configuration changes, software installs and so on, then aligning these with discovered vulnerabilities shows exactly when and where a vulnerability was introduced.
This can really minimize the time it takes to mitigate this type of security issue. Without it, a security auditor may only know about a new vulnerability from a "scanner centric" view. For example, an AIX administrator may install a new type of web services daemon, but from an un-credentialed scanner's point of view, all that has been found is a version of Apache running on a high port.
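
As a sketch of this alignment, with hypothetical record layouts standing in for whatever your logging and vulnerability tools actually export, each newly seen finding can be attributed to change events on the same host within a recent window:

    from datetime import datetime, timedelta

    # Hypothetical change events and newly discovered findings; real
    # layouts depend on your logging and vulnerability tools.
    change_log = [
        {"host": "aix01", "time": datetime(2007, 5, 1, 9, 15),
         "event": "package installed: web services daemon"},
    ]
    new_vulns = [
        {"host": "aix01", "time": datetime(2007, 5, 1, 11, 40),
         "finding": "Apache server detected on high port"},
    ]

    WINDOW = timedelta(hours=24)  # how far back to look for a cause

    for vuln in new_vulns:
        for change in change_log:
            delta = vuln["time"] - change["time"]
            if change["host"] == vuln["host"] and timedelta(0) <= delta <= WINDOW:
                print(f'{vuln["host"]}: "{vuln["finding"]}" may have been '
                      f'introduced by "{change["event"]}" at {change["time"]}')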

I've seen many large, modern enterprise networks waste a lot of time trying to find out which physical system has a certain vulnerability, and then struggle to describe the vulnerability to the administrator once that person is found. Integrating log analysis into your vulnerability monitoring program makes your recommendations for vulnerability mitigation much more accurate and timely.

Looking for Flip-Flops

Another concept to consider when judging whether a "new" vulnerability is really an old one is the "flip-flop": a vulnerability discovered on a host is there one day, gone the next, and back again the day after.

In large organizations that only perform un-credentialed scanning and have no relationship with the IT group, this type of vulnerability is very difficult to diagnose. There are many reasons a vulnerability might come and go: a network service offered only during part of the day, competing network management systems pushing different rules to firewalls and routers, backup software incorrectly restoring an older version, and so on.
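
If your historical data can produce a daily present/absent timeline for each host and check, flagging flip-flops is straightforward: look for the present, then absent, then present-again pattern. The timelines below are invented for illustration:

    # Sketch: flag findings that were present, then absent, then present again.
    def is_flip_flop(history):
        state = "await_present"
        for present in history:
            if state == "await_present" and present:
                state = "await_absent"
            elif state == "await_absent" and not present:
                state = "await_return"
            elif state == "await_return" and present:
                return True
        return False

    histories = {
        ("10.0.0.5", 11219): [True, False, True, False, True],   # flip-flop
        ("10.0.0.9", 51192): [False, False, True, True, True],   # genuinely new
    }

    for (host, check), seen in histories.items():
        if is_flip_flop(seen):
            print(f"{host} check {check}: flip-flop, investigate scheduled "
                  f"services, competing rule pushes or restores")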

Having a well secured and managed system revert to an un-secured state is not a good situation. Having access to system logs, being able to monitor the network in near-real time and performing credentialed patch and configuration audits of the host can all help a security analyst recognize these situations.

Network Devices -- When Firewall Rules Go Bad

Change can also be induced by network devices. A firewall rule could change, and ports internal to a network could be exposed inadvertently. Depending on where your vulnerability monitoring solution is looking, it could see this as evidence of new vulnerabilities and of a system change.

Consider a firewall administrator who wishes to allow port 445 access from the internal LAN to one domain server in a DMZ, and through an error inadvertently opens up port 445 for all DMZ hosts. Recognizing this type of network change is not straightforward. From a scanner's point of view, a few hosts suddenly seem to have new open ports. On the other hand, if none of the other DMZ servers had anything listening on port 445, scanning would not detect the exposure at all.

With tools like the Security Center, data from recent vulnerability scans and passive monitoring can be filtered by when each finding was first seen and then summarized. If several hosts in the DMZ suddenly had a slew of new vulnerabilities appear on a specific port, this would be easily identifiable.
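
As a sketch of that summarization, using invented first-seen records, the new findings can be grouped by port and day so that any burst spanning several hosts stands out:

    from collections import defaultdict

    # Hypothetical (host, port, first_seen_day) records exported from
    # scan and passive monitoring history.
    first_seen = [
        ("dmz-01", 445, "2007-05-01"), ("dmz-02", 445, "2007-05-01"),
        ("dmz-03", 445, "2007-05-01"), ("dmz-07", 8080, "2007-05-01"),
    ]

    hosts_by_port_day = defaultdict(set)
    for host, port, day in first_seen:
        hosts_by_port_day[(port, day)].add(host)

    THRESHOLD = 3  # arbitrary cutoff for "several hosts at once"

    for (port, day), hosts in sorted(hosts_by_port_day.items()):
        if len(hosts) >= THRESHOLD:
            print(f"{day}: port {port} newly seen on {len(hosts)} hosts "
                  f"{sorted(hosts)}; check recent firewall rule changes")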

As with system logs, having the logs from firewall and network devices at hand can provide clues that a rule change resulted in these "new" vulnerabilities appearing.

For More Information

This is the third blog in a series concerning "Security Metrics". Previously, we've blogged about:
