
17.2.11

The Song Remains the Same


So Stuxnet was a "game changer" because we saw a private separated network get JACKED! Let me share some of the responses I have heard:

"They shouldn't have been using Windows"
"Stuxnet was no big deal if you weren't the target"
"There are enough other people that are vulnerable, they probably won't come after us"
"We have firewalls, IDS, and AV."


These comments come from vendors, CISOs, and security architects. Hi, you are missing the point. If you focus on the specifics of the attack, these are somewhat accurate statements. If you look at the framework of the attack, it should make you aware that you are at risk. Some components of Stuxnet were very generic and can provide a framework for future attacks. Check out this page by Ralph Langner: http://www.controlglobal.com/articles/2011/IndustrialControllers1101.html
Here are a few questions to ask your CISO, security team lead, or whoever you have entrusted your security to:

"How can our firewall (also include AV, IDS, etc) be defeated?"
"How can an attacker exfiltrate data once they are inside?"
"Can you (security d00d) exfiltrate data without anyone knowing?"

If you saw the report on Night Dragon, you saw another example of energy being targeted. The target was compromised via SQLi and the attack progressed using fairly standard, simplistic techniques. No offense to the target is meant here; I am targeting the mentality mentioned above. These folks had firewalls, AV, proxies, and policies. Their controls were overcome at every step with what the incident responders called "simple" techniques. Simple is a relative term, and the timeframe of the attack is not discussed. If this attack took place over a span of weeks it is relatively easy to recreate. If this attack was done in a matter of days or less, it was well-planned and executed.
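
Since SQLi keeps showing up as the initial foothold in these incidents, here is a minimal, hypothetical sketch of why it works and what the fix looks like; the table, column, and values are made up, and the in-memory SQLite database is only there to keep the example self-contained.

```python
# Minimal SQL injection sketch using an in-memory SQLite database.
# Table/column names and data are made up for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
vulnerable = "SELECT secret FROM users WHERE name = '%s'" % attacker_input
print("concatenated query returns:", conn.execute(vulnerable).fetchall())

# Safer: a parameterized query treats the input as data, not SQL.
safe = "SELECT secret FROM users WHERE name = ?"
print("parameterized query returns:", conn.execute(safe, (attacker_input,)).fetchall())
```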

7.2.11

Critical Infrastructure Hacking FUD


Let's take a minute and talk about some of the FUD being slammed all around regarding critical systems hacking. We are talking about the electric power system, water, and other utilities or critical infrastructure. This article came out last week, stating that hackers can't do weird stuff to the Hoover Dam: http://www.wired.com/threatlevel/2011/02/hoover/ That article is accurate. Twitter exploded the same day with infosec folks and pen testers screaming "yes we can!" This is also accurate. We have to temper some of the almost outlandish claims we attackers make with the "you can't touch us" claims of infrastructure.
Why the Wired article is true:
1. Separated networks - The Hoover Dam (and other critical infrastructure sites) is not a web app that you can just stick in a web browser.

2. Infrastructure stuff breaks all the time - These people are trained to respond to outages a lot better than the IT staff in some organizations.

3. Hackers aren't breaking news - Infosec incidents get published all the time and, sometimes, utilities take notice and plan for these things.

Why what the hackers are saying is true:
1. Remember Stuxnet? - Those targets were air-gapped and didn't touch the Internet.

2. Resiliency != Security - Infrastructure people will say "when was the last time your lights went out?" when the question really is "When was the last time someone wanted to make your lights go out?"

3. Hackers evolve - When people start figuring out patching, web apps and client-side attacks shift to the front. When people get leery of those techniques, bring on insider threats and social engineering.

You have to get both sides of the story to understand the problem. If you are using computers, networks, and software, you have risks. Reducing your attack surface with air-gapped and private networks is an effective layer of defense. That said, security is never "done." It is an ongoing issue and it must be tested continuously. Insiders cannot be trusted: sometimes because of bad intentions, and sometimes because people make mistakes. We also have instances where, say, a SCADA operator grants remote sessions and connections for service or maintenance on the system, or figures out some way to surf the web from the console. And in case you have the world's best workers who never look for a way to goof off, there are still the removable media attack vectors. I will leave a nasty USB drive in your parking lot or at Starbucks and watch who picks it up.

Does your blue team tell you they can't be breached? If so, go find a red team and let a real-world scenario play out with them. You might learn that your team is as great as they say they are, or you might find that they are unaware of certain vectors into your systems. For example, let's pretend you are performing a test of a "closed system" and everything initially seems to indicate that this is true. Then you notice you can resolve DNS names like Google, but you cannot get to the Internet via a web browser. The system isn't touching the Internet, right? WRONG! Your assigned DNS servers, initially RFC 1918 addresses, become public IP addresses when you reboot while connected to the "private" network. Out of curiosity, you try to touch those servers from your home ISP and you can. This is news to your client, since they had been assured otherwise by the provider. Maybe it even said that in their SLA.
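
A quick sanity check along those lines is easy to script. The sketch below is a hypothetical starting point, assuming a Unix-like host where the assigned resolvers live in /etc/resolv.conf: it reports whether each resolver is an RFC 1918/private address, and whether names still resolve even though a plain HTTP fetch fails, which is the exact mismatch described above.

```python
# Minimal sketch: is this "closed" network really closed?
# Assumes a Unix-like box where resolvers are listed in /etc/resolv.conf.

import ipaddress
import socket
import urllib.request


def assigned_resolvers(path="/etc/resolv.conf"):
    """Return the nameserver addresses the host is currently using."""
    servers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
    return servers


def check():
    for server in assigned_resolvers():
        private = ipaddress.ip_address(server).is_private
        print(f"resolver {server}: {'RFC 1918/private' if private else 'PUBLIC address'}")

    try:
        socket.gethostbyname("www.google.com")
        print("DNS resolution works")
    except OSError:
        print("DNS resolution fails")

    try:
        urllib.request.urlopen("http://www.google.com", timeout=5)
        print("HTTP to the Internet works")
    except OSError:
        print("HTTP to the Internet fails")


if __name__ == "__main__":
    check()
```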

If you read the link regarding the Hoover Dam, someone who appears to be from the public affairs office is posting comments about how that cannot happen. You will see other folks asking how employees communicate and are part of the electric smart grid if they are so isolated. You cannot have it both ways. There's an example of someone touring a power generation facility and asking about security, and the operator saying "We aren't connected to the Internet." The person touring asks how they receive communications and directives from their main facility, which is several miles away. The operator points out that they receive e-mail on the control system machine. Now this is where perspectives will really diverge. For me, it's not the same to say you don't touch or use the Internet when you are, hopefully, using some sort of VPN tunnel. I view separate as not touching, tunneling, sharing a switch/router, or even the same network rack. SEPARATE. Don't get me wrong, I understand how extremely cost-prohibitive it would be to build out your own personal WAN, but it can be done. For the govie "cyber" security architects, there are a lot of good models to look at. Companies that have customers and dollars to lose take security pretty seriously.

So can hackers open the gates of the Hoover Dam? No one has let me test it, so all I can say is "maybe." The attack probably won't be attempted from some kid's basement, but that doesn't mean it cannot be done. A lot of people say they aren't connected to the Internet when they really are. All systems have vulnerabilities, but not all vulnerabilities can be exploited with the same level of ease. Be a critical thinker and get both sides of every story.

1.2.11

Logging, Monitoring, and Defending (IDS/IPS)


Yesterday one of the email lists I monitor was debating the best IDS/IPS for large-scale implementation, and the Einstein project managed to surface. I followed the topic for a while but there wasn't much debate; however, it did bring up some of the more interesting points I have noticed over the past decade in infosec. Some places still don't want IPS; they are content with IDS and just want to reduce their response time and have forensic evidence available when attacks occur. The biggest debate I see is how to choose a product to defend with. This used to be a proprietary vs. open-source argument, and sometimes still is. Lots of people decide to implement Snort so they only have to buy some hardware; others buy Snort via Sourcefire and get some support. Other folks like to get a pure commercial solution, which can be capable of much higher detection speed depending on how fast you need to go. The current rulers in IPS for the commercial world are Juniper and TippingPoint. McAfee is coming on strong after purchasing a competitor, re-branding, and getting up to speed. What I found most interesting was that someone brought up using a government-made system. Historically, the government doesn't have a great track record for keeping things secure. Not all government entities are created equal, since different personnel work at different sites and agencies, so we will have to wait and see how this group does. Personally, I like COTS solutions when you are defending large-scale implementations, for the speed and support. That isn't to say your people aren't capable of deploying something different and being secure.

Whatever way you choose to go, don't end up like the diver in the picture. They have on all the necessary gear yet are unaware of the clear and present danger (the picture is fake). You will NOT implement an IDS/IPS and be secure simply because of its existence. You absolutely must log what happens and figure out a way to monitor your traffic. There are aggregation and correlation products out there that can take your vulnerability scans and/or customized input so that you don't have to be alerted when a Linux exploit is headed towards a Windows platform, and vice versa. The goal for your implementation is to help your security posture. The ability to log is critical, but logging doesn't mean monitoring, and monitoring isn't always effective if it isn't actually human readable. Without a significant amount of customization and tweaking, in my experience, an IDS will spew way too many alerts for an analyst to track. You may be doing your parsing with custom scripts, vendor filters, or a combination of the two.
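
To make the custom-script idea concrete, here is a minimal, hypothetical sketch of that kind of filter. It assumes a CSV of alerts with a destination IP and an exploit platform, plus an asset inventory mapping IPs to operating systems (both file formats are made up), and it only passes along alerts whose platform matches the OS of the host actually being targeted.

```python
# Minimal sketch: suppress IDS alerts that target the wrong platform.
# Assumed (made-up) formats:
#   alerts.csv    -> dst_ip,platform,signature
#   inventory.csv -> ip,os

import csv


def load_inventory(path):
    """Map IP address -> operating system from the asset inventory."""
    with open(path, newline="") as f:
        return {row["ip"]: row["os"].lower() for row in csv.DictReader(f)}


def relevant_alerts(alert_path, inventory):
    """Yield only alerts whose exploit platform matches the target host's OS."""
    with open(alert_path, newline="") as f:
        for alert in csv.DictReader(f):
            target_os = inventory.get(alert["dst_ip"], "unknown")
            if alert["platform"].lower() in (target_os, "any"):
                yield alert  # e.g. drop a Linux exploit aimed at a Windows box


if __name__ == "__main__":
    inv = load_inventory("inventory.csv")
    for alert in relevant_alerts("alerts.csv", inv):
        print(alert["dst_ip"], alert["signature"])
```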

I am anxiously waiting to see which way the smart grid will choose to go. It seems like the current feeling is that nothing would be able to monitor the massive amount of traffic and nodes (millions) that might be generated on some of these networks. Hey IPS vendors, we are looking at you.