Husband, Dad, Geek & Senior Technical Marketing Architect for vSphere Security
Jun 30
Going Rogue- How did that data get in the cloud?
How much of your corporate data is sitting on an unused virtual machine running on the infrastructure of a cloud service provider? “Ah, but Mike, I don’t have any VMs running ‘in the cloud’!” Oh really? Want an easy way to check? Go to your Finance organization and ask for a report of corporate credit card use at Amazon. You may be surprised.
Now, I’m not knocking Amazon at all. They’re a great company doing some really innovative stuff. They’ve made it so easy to start up a virtual machine that I worry my kids are going to start using it!
But that very ease of use means you need to know whether someone in your organization, frustrated with the response of “It’ll take IT a month to provision that for you,” just went “rogue.” He couldn’t wait a month for a web server to be provisioned, so he went over to EC2 and started spinning things up, copying data, and taking credit cards because IT couldn’t do it fast enough.
It’s this type of scenario that’s contributing to why many organizations are looking at how they can provide the business the same flexibility and speed as an EC2-style environment, but from within their own datacenter. This is the essence of “Private Cloud.” Combine that with the ability to link a Private Cloud to a Public Cloud and you have a Hybrid Cloud: the nirvana of being able to “burst” virtual machines off of my infrastructure and onto a service provider’s infrastructure, all the while maintaining security.
Yeah… I’m going to put a sensitive virtual machine or data out into “the cloud,” where I have less visibility and control than in my own datacenter? Really?
Well, maybe. But only after you do the next step.
Assess and Measure Risk
How can we, from a security standpoint, really make this work? As any good security person will tell you, it’s about assessment and measurement of risk. Just because you can, doesn’t mean you should. In the case of virtual machines and data, the VMs and the data that reside on them need to be assessed, measured for risk, classified, and tagged. As I point out in the slide on the left, we need to start calculating a risk score on a VM or the data, and based on that risk score, we either keep the VM or data in-house or allow it to move to a datacenter out of our control.
Note that I have only four variables on the slide:
- Data Sensitivity
- Workload IP (intellectual property)
- Business Requirements
- Job Security
Obviously, there can be many more variables that can and should be considered.
- What about the current threat levels that I get from tools like RSA Netwitness?
- Is there a new piece of malware out there that attacks the technology I used to develop the application?
- Is it near the end of the quarter and someone is a little antsy and wants things in-house until after the quarter?
All these things and more should be considered when deciding whether stuff should run in your datacenter or a datacenter out of your control.
For example, say I have two servers. One is a web server with a bunch of static images that’s just there to serve up the images in a catalog, and the other is the application server that falls under PCI because it’s dealing with credit cards. As a simple exercise, we could tag the first as “Non-PCI” and the second as “PCI.”
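To make that exercise concrete, here’s a minimal sketch of what a risk-score calculation might look like. The four variables come from the slide; the weights, ratings, and threshold are made up purely for illustration, and your own security policy would define the real ones.

```python
# Hypothetical risk-scoring sketch. Weights, ratings, and the threshold
# are invented for illustration; real values come from your own policy.

PLACEMENT_THRESHOLD = 50  # assumption: scores above this stay in-house

def risk_score(vm):
    """Combine the slide's four variables (each rated 0-100) into one score."""
    weights = {
        "data_sensitivity": 0.4,
        "workload_ip": 0.3,
        "business_requirements": 0.2,
        "job_security": 0.1,
    }
    return sum(weights[k] * vm[k] for k in weights)

def placement(vm):
    """High-risk VMs stay private; low-risk VMs may burst to a public cloud."""
    return "private" if risk_score(vm) > PLACEMENT_THRESHOLD else "public-ok"

# The two servers from the example: static image server vs. PCI app server.
image_server = {"tag": "Non-PCI", "data_sensitivity": 10, "workload_ip": 5,
                "business_requirements": 20, "job_security": 10}
pci_server = {"tag": "PCI", "data_sensitivity": 95, "workload_ip": 40,
              "business_requirements": 90, "job_security": 80}

print(placement(image_server))  # low score: fine to burst out
print(placement(pci_server))    # high score: keep it in-house
```

Nothing fancy, but it shows the shape of the decision: rate the variables, weight them, compare against a policy threshold.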
Today, if you are doing this calculation exercise, it’s probably a manual process. But if you’re talking about cloud-scale, this will have to be an automated process.
A look to the future of automated security
Think about this for a second. All sorts of threat info is coming into your Security Operations Center. Based on that information, the security tools kick off changes to the virtualization and cloud infrastructure (that is SO easy to automate), and VMs move in or out of different locations or states based on the real-time data. The assessment and risk measurement isn’t a one-time thing. It needs to be a continuous process.
In our server example above, if you want to step the classification process up, your DLP solution scans the servers, and if PCI data is found, the classification or tag changes, resulting in the VM being pulled out of the public datacenter and back into the private datacenter.
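That continuous loop could be sketched like this. The scan function and VM fields here are hypothetical stand-ins; a real DLP product and cloud API would have their own interfaces.

```python
# Sketch of the continuous-reassessment loop: a mock DLP scan re-classifies
# a VM, and a tag change triggers a move back in-house. All names and the
# toy card-number check are illustrative, not a real DLP or cloud API.

def dlp_scan(vm):
    """Pretend DLP scan: True if anything resembling card data is found."""
    return any("4111" in blob for blob in vm["data"])  # toy PAN check

def reassess(vm):
    """Re-tag the VM; if its classification changed, pull it back in-house."""
    old_tag = vm["tag"]
    vm["tag"] = "PCI" if dlp_scan(vm) else "Non-PCI"
    if vm["tag"] != old_tag and vm["location"] == "public":
        vm["location"] = "private"  # back behind your own walls
    return vm

# The "static images only" web server that quietly picked up card data.
web_server = {"tag": "Non-PCI", "location": "public",
              "data": ["catalog.jpg", "order 4111111111111111"]}
reassess(web_server)
print(web_server["tag"], web_server["location"])  # PCI private
```

Run that on a schedule (or on every SOC event) and the classification stops being a one-time, manual exercise.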
Obligatory Star Trek Reference
How cool would that be? Just like Star Trek, sensors detected a threat, shields come up automatically (I never could understand why someone had to give an order for that to happen!), phasers start charging and the klaxon goes off. You adjust your tunic with The Picard Maneuver and take command of the situation before the Romulan de-cloaks to fire her final, crippling shot! Yes, I just mixed my TOS/TNG references.
Isn’t that how it should be? No surprises. Pre-determined workflows kicking off to protect my assets. Computers doing what computers do best: the manual, tedious tasks we don’t want to do, so we can concentrate on the bigger issues, like how many people are following you on Twitter. (1,462, but who’s counting?)
So, as we come full circle and you’re now considering running that report on Amazon purchases over the past year and catching up with Star Trek on Netflix, remember that these Risk Scores are not calculated by the guy with a corporate credit card and a need for a web server.
And I would hope you’d agree that doing this in the physical world is MUCH harder. The automation capabilities of the virtual/cloud infrastructure can really enable security to work in a more measurable, consistent and adaptive way. The bad guys are adapting all the time.
Thanks for reading. Please comment. I’d love to hear feedback. I’d especially like to hear dissenting views. After all, I’m not a dyed in the wool security guy. I don’t wear a red shirt.
mike
(Not Lt. Expendable)
Jun 28
BTOGG – Google Glass and future security implications
While working on some other things yesterday, I had the live feed from Google I/O running. I have to say that Google is catching up to, and possibly surpassing, Apple in coolness. They certainly took many presentation tips from Apple! Up to and including the Jobsian “…and that was <feature>.”
One of the coolest things was Google Glass.
This is a set of eyeglasses with a built-in camera and display. Google introduced it with skydivers wearing the glasses and parachuting onto the Moscone Center roof!
Even though I’m an IT guy at heart, living here at RSA for the past 7 years has made me somewhat paranoid about data sensitivity. When I saw how Google Glass was capturing EVERYTHING, my first thought (after “WANT!”) was “What if I was streaming my Glass feed via a MiFi?”
That led to the paranoia kicking in. What if I was doing that at work? And what if my work has me dealing with sensitive information, or even just internal-use-only emails? And I forgot to turn off the live feed to my blog/website/Twitter/Facebook?
As you can now imagine, the security implications start to boggle the mind. I wish I had an answer for this. Will the BYOD generation listen to the graybeards when told “You can’t bring those in here. Oh, and no MiFi either!”? Do they now? No.
By the way, I think I’m a BYOD/Graybeard mashup. (as I type this from my personal Mac at work)
So I think that, right now, probably the best thing is to discuss it. Consider the implications, don’t over-react, and understand the tradeoffs. Just like when cameras first showed up on cell phones and the first corporate systems connected to the ARPANET, interesting and enabling technologies don’t need to be feared, just understood.
mike
Jun 01
EMCworld Wrap Up Part 1–Automation, Security and a Razor
Wow, what an amazing week! While it’s still fresh in my head, I thought I’d write about something that I witnessed at EMCworld. I’ll do another post on the sessions I gave later.
Automation and Security?
“Ok, what’s this ‘auto-mation’ thing of which you speak, Mike? And why, as a security guy, should I care?”
Razor
Well, the coolest thing was a project known as “Razor”. It was done by EMC’s Nick Weaver. Nick, also known as @lynxbat on Twitter, works in the EMC Office of the CTO. Nick is one of those guys that you show a new programming language to and after the weekend, he’s written something in it that blows your mind. All us geeks aspire to having those kinds of chops.
So, Nick worked with Puppet Labs on a project called Razor. The one-sentence description is “A tool that can, from bare metal, provision an OS.” Honestly, that’s about the lamest description ever of what it can do! You NEED to read up on it here, then come back to finish what I wrote… I’ll wait……
Ok, you’re back. Now why is this important to security? Well, Chuck Hollis (@chuckhollis), the EMC CTO of Marketing, hit the nail on the head in his blog on the Puppet and Razor stuff when he said
It doesn’t take too long to realize that there are some interesting areas where this could potentially go over time. Obviously, what’s been done for server resources could also be applied to storage and perhaps network. And, of course, EMC has some nice upper level IT governance management framework tools (e.g. Archer, Ionix) where policy can be specified and reported on.
Archer? RSA Archer? Yea, that Archer. Imagine, if you will, the ability to attest (there’s a big security word) to the validity of a server from the point of powering on to the system running and serving up what it serves up. You know how it was built, what was installed on it, who did what, when, where, and how. Now feed all that information into an eGRC solution like Archer, and when the auditors come calling, you have a record, and that record lines up with the security policies that are in effect. Need to build a server to handle PCI stuff? Here’s the record of how it was built, mapped to all the PCI compliance regs. All in an automated fashion.
Combine that with a SIEM solution that can take in events that change the configuration and now you’re cooking with gas. You can attest to every change from creation to destruction. And map it all to policy.
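The attestation trail itself is simple to picture. Here’s a rough sketch of the kind of record I mean; the event fields and the `pci_relevant` flag are invented for illustration, since Razor and Archer have their own real formats and APIs.

```python
# Illustrative attestation log: every provisioning/config event gets a
# who/what/when record that could later feed a GRC tool. The event shape
# and field names are hypothetical, not Razor's or Archer's real schema.

import json
from datetime import datetime, timezone

audit_log = []

def record(server, action, actor, pci_relevant=False):
    """Append one attestation event: who did what, to which server, when."""
    audit_log.append({
        "server": server,
        "action": action,
        "actor": actor,
        "pci": pci_relevant,
        "when": datetime.now(timezone.utc).isoformat(),
    })

# Bare metal to running service, one recorded step at a time.
record("web01", "bare-metal boot via microkernel", "razor")
record("web01", "OS installed from policy-selected image", "razor")
record("web01", "firewall baseline applied", "puppet", pci_relevant=True)

# When the auditors come calling, pull just the PCI-relevant history.
pci_events = [e for e in audit_log if e["pci"]]
print(json.dumps(pci_events, indent=2))
```

The point isn’t this toy log, it’s that the build pipeline can emit these records for free, so compliance evidence becomes a by-product of provisioning rather than a scramble before the audit.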
It was a VERY insightful post Chuck made. When I saw Razor in action, that’s exactly what I thought. I ran into Chuck one evening at EMCworld and told him so.
Security at Scale
THIS is part of the “security at scale” issue that we as an industry are facing. The old ways of managing security just won’t scale to the levels of “cloud.” (There I go, saying that word. For me, cloud = scale. ’Nuff said.) You NEED to leverage automation. There are just too many moving parts to keep track of manually. (More on that one in a later post!)
So for you IT guys who are wondering about security in a virtual environment, run over and start playing with Razor (did I mention it’s open source?!) and think about how you can help the security guy by giving him measurable results in a consistent fashion.
For you security folks, guess what: it’s time to look at all the cool tools available to the IT folks that can help you measure compliance. The depth of these tools is amazing. And the ability to pump it all into Archer to map it to compliance policies makes your job infinitely easier.
I’m heading into Boston in a couple of weeks to learn more about Puppet and about Razor. Hopefully I’ll have more to talk about then!
Let me know what you think!
thanks for reading,
mike