Monday, August 15, 2011

We fix Stupid!


Recently I had a chance to meet with a couple of very different and promising companies. One is a classic enterprise information security company going after the holy grail of risk management. The other is a small startup attempting to be the good cop of consumer privacy. Both are very successful.

Ostensibly there is no connection between the two, right? While some tend to bundle privacy and security together (as well as compliance), there is a clear distinction between them, not to mention the very different target markets (enterprise vs. consumer).

So why do I bundle the two?

Let’s take the enterprise risk management perspective first:

When observing some of the recent data breaches (e.g. the RSA incident), an interesting pattern emerges. As we know, hackers target the weakest links in their quest for the prize. Occasionally these links are infrastructure vulnerabilities, but in many cases it is the ultimate weak link – the human factor.

It should not surprise anyone that it is much easier to enter a building through its main door (especially when you have the keys) than through a small, half-closed side window on the 5th floor. Since organizations will always give employees access to enterprise resources (so they can perform their work), all that is left for the hackers is to get the keys and use the main door. So why bother trying to hack enterprise-protected resources directly?

Without getting into lengthy explanations, what the bad guys do is create a “hit list” of employees with the right profile. Then they collect information about the selected targets, mostly using publicly available resources (such as the Wild Wild Web). Once enough information is collected, a targeted campaign is launched. In many ways this campaign is very similar to consumer phishing. During this process (a.k.a. spear phishing) users end up enabling the attacker to collect more information (which is not publicly available) and eventually gain the access they need.

Bottom line – an employee’s vulnerable consumer profile enables an attack on enterprise resources.

Now for the consumer privacy point of view:

Simply put, the objective of privacy tools is to control the amount of private information publicly available and, by doing so, to reduce the consumer’s attack surface.

Do I need to explain the linkage between these two companies/domains?

By protecting consumer-employees’ privacy, enterprises reduce their risk of being attacked.

A few things to keep in mind:
1. “All or nothing” solutions are never a good idea – they are simply not practical. Security solutions that attempt to solve “everything” traditionally fail (DLP is a good example). Instead of applying protective controls to all employees, we should apply the right controls only to the employees identified as “high risk” (relative to a defined threshold).
2. How do we define “high risk”? The risk is the enterprise’s risk, not the employee’s. It should be defined based on a combination of the employee’s enterprise profile (e.g. the systems he can access and his access level) and his consumer online profile vulnerability score (i.e. how exposed he is) – see the sketch after this list.
3. By no means am I promoting “big brother” types of solutions. Enterprises should not collect, manage, or care about employees’ private information – only about their online vulnerability score (the likelihood of being attacked).
4. Coming up with an online profile vulnerability score should be done by leveraging techniques similar to those of consumer privacy tools, or by emulating the hackers’ information collection process.
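
As a minimal illustration – all names, weights and the threshold below are my own made-up assumptions, not a real scoring model – here is how such a combined score might look:

```python
# Hypothetical sketch: combine an employee's enterprise access profile with
# their consumer online exposure into a single enterprise risk score.

def enterprise_risk(access_level: float, online_exposure: float,
                    threshold: float = 0.5) -> tuple[float, bool]:
    """Both inputs are normalized to [0, 1].

    access_level    - how powerful the employee's enterprise access is
    online_exposure - how discoverable their consumer profile is
    Returns the combined score and whether it crosses the "high risk" bar.
    """
    score = access_level * online_exposure  # low access or low exposure => low risk
    return score, score >= threshold

# A sysadmin with a wide-open social footprint is the one to protect first:
print(enterprise_risk(0.9, 0.8))  # score ~0.72 -> high risk, apply controls
print(enterprise_risk(0.9, 0.1))  # score ~0.09 -> powerful but hard to target
```

The multiplication captures the intuition above: only the combination of powerful enterprise access AND a vulnerable consumer profile should put an employee over the threshold.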

And maybe someday some company will address this aspect of the human factor, and will be able to use the great tagline: “we fix stupid!”

Friday, May 27, 2011

Drop(the ball)box?

Dropbox is a great tool; I use it all the time. Very simple, user-friendly and perfect for what I need. Its sweet spot in my opinion (and my main usage pattern) is collaborating on or sharing a small set of documents (not digital media, but documents that change over time).

Dropbox got some bad publicity recently regarding the security state of its service.
For those who are not familiar, a quick summary of the two main points:
1. A relatively easy way to impersonate other users – simply put, Dropbox identifies the user on a device using a file stored locally in the same location on all Dropbox installations. All Bob has to do to impersonate Paul is copy over Paul’s identification file, and he has access to all of Paul’s files (see the sketch after this list).
2. Dropbox possesses the encryption keys for all users’ data – very common with tools that provide web access to users’ files (or other content-related services). The big issue was less about the possession of the keys and more about the fact that their privacy policy (and marketing messages) misled people into believing Dropbox does not have a copy of the key or the ability to decrypt users’ data.
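
To illustrate why copying a file should not be enough, here is a minimal sketch of binding a stored credential to the device it was issued on. This is entirely hypothetical – it is not Dropbox’s actual protocol, and a real service would verify server-side:

```python
# Hypothetical sketch of a device-bound credential (NOT Dropbox's design):
# the token is only valid together with a fingerprint of the device it was
# issued on, so copying the token file to another machine fails.
import hashlib
import hmac
import uuid

SERVER_SECRET = b"server-side-secret"  # known only to the service

def device_fingerprint() -> bytes:
    # uuid.getnode() yields a hardware-derived identifier; a real
    # implementation would combine several machine-specific values.
    return hashlib.sha256(str(uuid.getnode()).encode()).digest()

def issue_token(user_id: str) -> bytes:
    # Bind the token to both the user AND the issuing device.
    msg = user_id.encode() + device_fingerprint()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()

def verify_token(user_id: str, token: bytes) -> bool:
    expected = issue_token(user_id)  # recomputed on the *current* device
    return hmac.compare_digest(expected, token)

token = issue_token("paul")
print(verify_token("paul", token))  # True on Paul's machine; False if the
                                    # token file is copied to Bob's machine
```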

While bullet #1 is an ugly security glitch, it is simple to fix and I trust the Dropbox team to take care of it.

Bullet #2 reminds me of the main reason I buy insurance. It is not so much about the actual insurance policy and much more about the trust factor. I just want to know I can trust the insurance agent to take care of my business if something goes wrong. If for some reason the trust is broken, I will replace the insurance agent/company regardless of the price. Incidents will always happen; everyone makes mistakes. It is about bouncing back from an incident, about the reaction after dropping the ball. That’s what breaks or strengthens the trust.
Dropbox messed up; it does not really matter what they think, it is all about the perception. So if I were Dropbox, I would be less concerned about proving who is right or “fixing a problem” and more about bouncing back gracefully.

Having said that, consumers have proven time and again that they don’t really care about security; they don’t even care about privacy…
Thinking that people are going to ditch Dropbox because of the recent security issues is not realistic – it will simply not happen. Do you remember how many people quit Facebook during the “who owns my photos on Facebook” campaign just a couple of years ago? (Hint – several hundred or thousand, while during the same period millions of new users joined…)

People care about serviceability, productivity, and the coolness factor – less about privacy or security.
The notion of personal/private information is long gone from the consumer world. Somehow (via social media or even plain old email) your data moves or is duplicated to the cloud/web. Once in the cloud there is no going back, and it is no longer in your control (try to really delete stuff from Facebook). Dropbox-type tools simply extend the cloud/web further onto your desktop: while your content syncs between devices it also syncs to the “mighty cloud”, and once in the cloud…

As for enterprise usage – this is a totally different story.

Consumer employees (http://shlomidinoor.blogspot.com/2010/01/we-are-all-consumer-employees.html) continue to build internal pressure to adopt consumer-like tools to simplify and streamline their work. The new generation of file syncing/collaboration tools such as Dropbox is a good example of the phenomenon. While great tools, they lack the adequate controls enterprise IT/IS expect. My friends at CloudLock (formerly Aprigo) identified a similar opportunity with Google Apps and provide a control layer on top of Google’s platform. In a similar fashion, vendors will continue identifying other tools originally built for consumers (by “consumer” vendors) and providing the enterprise control layer. Dropbox is a good example.

Bottom line:
As consumers, we should keep using these great tools that improve our productivity.
As enterprises, we should look for and work with vendors that will provide the much-needed control layer (while maintaining a seamless user experience for the consumer-employee).
As vendors, consider it an opportunity!

Tuesday, March 22, 2011

Focus is Golden!

On many occasions I’ve been asked a very basic question: what is information security? My two-word answer: risk management.

Any other answer that might imply we can achieve 100% security would simply lead me to the conclusion that we should just give up now, go home, and find a different occupation…
There is no 100% security – it is too expensive, too complex, too agonizing, takes too long, and the landscape is too dynamic. It is all about risk management: define your risk threshold and make sure you have the right controls in place to meet your goal.

Last week I presented at a CISO event on this very topic (i.e. security and risk management), and I thought it might be a good opportunity to share my take on it.

One of the fundamental debates we have in the security community is whether to take the “all or nothing”/“let’s boil the ocean” approach, OR to focus on contained problems we can actually solve…
Large vendors tend to promote the first approach with their deep stacks (and services organizations), while pure-play/smaller vendors tend to focus on their core competency.

As I believe security=risk management, it will not come as a shocker to anyone that I vote for focusing on the highest risk first (i.e. a contained problem).

Kind of trivial, but where/how should we begin?
Everyone seems to have their quadrant, so here is Shlomi’s quadrant. It provides a good high-level view of where we should (and should not) invest – that is, if you are out to solve the security challenge.

While “all or nothing” calls for similar controls for all types of operations, the reality is that real damage comes from operations associated with the 4th quadrant (powerful actor + powerful target). The advanced audience can add the context of the operation as a 3rd dimension; for the sake of simplicity I left it out.
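
A toy sketch of the quadrant (the boolean simplification and labels are mine, purely illustrative):

```python
# Toy sketch of the actor/target quadrant (illustrative simplification).

def quadrant(powerful_actor: bool, powerful_target: bool) -> int:
    """Quadrant 4 (powerful actor acting on a powerful target) is where
    protective controls should be concentrated."""
    if powerful_actor and powerful_target:
        return 4  # e.g. a DBA touching the customer database: highest risk
    if powerful_actor:
        return 2
    if powerful_target:
        return 3
    return 1  # regular user, low-value target: lowest risk

print(quadrant(True, True))    # 4 - invest here
print(quadrant(False, False))  # 1 - do not boil the ocean
```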
OK, so powerful actor + powerful target is the way to go, but how can we better evaluate the cost, time, agony and likelihood of success of the two described approaches relative to the risk addressed (i.e. the coverage of your risk)?

Since I’m in a “graphy” mood today, let’s observe the following:
The “all or nothing” approach to security calls for controls across the board, which is very expensive, takes very long to implement, is extremely painful and has questionable success rates. Risk coverage grows roughly linearly with the investment. Take any of the big security projects (e.g. DLP or IM): after all the investment you end up with partial coverage at best.
The “high risk first” approach focuses on the 4th quadrant, spends no resources on low-risk activities, and achieves a sharp upward slope of risk coverage up front.
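
Here is a back-of-the-envelope sketch of the two coverage curves (made-up numbers, just to make the shapes concrete):

```python
# Back-of-the-envelope sketch of the two coverage curves (made-up numbers).
# Each control addresses some percentage of risk at some cost.
controls = [
    {"name": "privileged accounts", "risk": 40, "cost": 10},
    {"name": "crown-jewel systems", "risk": 30, "cost": 15},
    {"name": "regular endpoints",   "risk": 20, "cost": 30},
    {"name": "everything else",     "risk": 10, "cost": 45},
]

def coverage_curve(ordered):
    spent, covered, curve = 0, 0, []
    for c in ordered:
        spent += c["cost"]
        covered += c["risk"]
        curve.append((spent, covered))  # (cumulative cost, % of risk covered)
    return curve

# "High risk first": tackle controls by risk addressed per unit of cost.
high_risk_first = sorted(controls, key=lambda c: c["risk"] / c["cost"], reverse=True)
# "All or nothing": roll out everything broadly, in no particular order.
all_or_nothing = sorted(controls, key=lambda c: c["name"])

print(coverage_curve(high_risk_first))  # steep up front: 40% covered for 10 units
print(coverage_curve(all_or_nothing))   # same endpoint, far flatter start
```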
 
Now for the interesting part ($$$) – placing both on the same graph:
Using the “all or nothing” approach to reach a given risk threshold (left side) will be more expensive, take longer, be more painful, and be more likely to fail. Conversely, for a given budget/time frame/pain/likelihood of failure (right side), it will provide coverage for a lower addressable risk.
Which approach to choose? Your decision…
But the existing security controls address this mumbo-jumbo, right? Not exactly…
The top 3 reasons why most security stacks/controls miss the point are:
1. Focus on known identities and personal accounts rather than high-risk (privileged) accounts.
Personal accounts/known users = limited access = low risk.
Privileged accounts and users = limitless access = high risk.
2. TMI (Too Much Information).
Collecting all events (of high or low risk) is a waste of time. It takes too long to make sense of it all, and it slows down production systems… I just want to see the important information (see the sketch after this list).
3. One-trick pony.
Most solutions address verticals – data, events, access, identity, sessions (of high or low risk) – rather than a horizontal (i.e. high risk across all the elements).
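
To make the TMI point concrete, here is a minimal sketch (hypothetical field names, my own scoring) of keeping only the events above a risk threshold instead of collecting everything:

```python
# Hypothetical sketch of "carve out the high risk stuff": score events and
# keep only the important ones instead of collecting everything.
events = [
    {"account": "jsmith", "privileged": False, "target": "wiki"},
    {"account": "oracle", "privileged": True,  "target": "customer-db"},
    {"account": "root",   "privileged": True,  "target": "payment-gateway"},
]

HIGH_RISK_TARGETS = {"customer-db", "payment-gateway"}

def risk_score(event: dict) -> int:
    score = 0
    if event["privileged"]:
        score += 2  # privileged accounts = limitless access = high risk
    if event["target"] in HIGH_RISK_TARGETS:
        score += 2  # powerful target (4th-quadrant thinking)
    return score

high_risk = [e for e in events if risk_score(e) >= 3]
print(high_risk)  # only the privileged operations against powerful targets
```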
So when you are out there looking for ways to address your security risk, think of tools that manage to carve out the high-risk stuff, take a holistic (horizontal) view, AND do not impact the performance of your existing environment/personnel.

Tuesday, February 8, 2011

Cloudouflage

Have you ever wondered why some flavors of cloud computing (SaaS) are so successful while others (IPaaS = Infrastructure or Platform as a Service) are less so (yet)? And what is cloudouflage?

Whenever possible I advocate for simplicity, so I’ll try to take the simple (a.k.a. naïve) approach to these questions. Let’s see if I can limit myself to no more than two bullets per section.

It is very clear that adoption of SaaS is exploding, regardless of which numbers you use (a ballpark of $8 Bbbbbillion last year). While I could go through a lengthy and intelligent description of all the reasons (including financials, agility, etc.), I want to focus on two which I find interesting:

1. It is just another website, no different from Gmail
The usage model is very clear and simple: I open my browser, go to this website, and consume a service. Consumerization plays a big part here. Since the emergence of the web, consumer technologies have led the way, while the enterprise is a delayed copycat at best. Consumers are simply looking to consume a service; for everything they need, there is an app for that. Similarly, when consumer employees (http://shlomidinoor.blogspot.com/2010/01/we-are-all-consumer-employees.html) need, say, a CRM service, there is a cloud for that.

2. No IT involvement
In many cases no IT is required, neither for setup nor for maintenance. If something does not work you call customer support. Great for SMBs (with no internal IT expertise), and very convenient for business units in larger organizations believing that no IT means no extra processes, no security policies, no regulations…

Looking at IPaaS we don’t see the same crazy adoption; numbers suggest it is $1B at best (a nice number, but relative to its potential – not as impressive).

Notice: I’m bundling IaaS and PaaS together as I believe they will ultimately converge. We already see IaaS vendors adding “platform” services, and vice versa for PaaS vendors.

IPaaS is very different from SaaS:

1. Not really a packaged service but an infrastructure
Regardless of the “aaS” suffix, IaaS provides “virtual machines/storage/…”; from a business perspective, what can I do with that? It is a starting point, not the end game – something still needs to be deployed, optimized, maintained, etc. Where is the SaaS magic (i.e. I open my browser and the service is there)?

2. IT involvement is inevitable
As the SaaS magic is nowhere to be found in the IPaaS reality, real work is required to set up the virtual infrastructure. IT assistance is required (sorry Mr. Biz – no shortcuts for you…).

A simplistic representation where IaaS + PaaS converge into IPaaS, Biz uses SaaS directly (blue), and IPaaS through IT (green):

So what should happen in order to drive IPaaS adoption?

1. The peace pipe will finally be used
I will not attempt to elaborate beyond the many blogs, tweets, articles, presentations, etc. already devoted to this topic. Eventually IPaaS vendors and IT/IS will agree on common ground regarding control, transparency, security, regulations and such. As with any peace agreement, both sides will have to compromise (yes – BOTH sides).

2. Cloudouflage
There is still a lot of money being paid for IPaaS solutions, meaning organizations are using them for something. It does not come as a big surprise that the main use cases for IaaS today are dev & test, cloud burst, and high-performance computing; they fit perfectly with IaaS characteristics. Yet most of the setup/maintenance/support effort is done ad hoc, manually, and internally.
How can we leverage these use cases to exponentially increase IPaaS usage?
That’s where cloudouflage comes into play. Wrapping IaaS with a relatively “thin” service layer creates an illusion (cloud-camouflage) for customers that they are consuming a packaged service rather than infrastructure (reminder – that’s what they want). Imagine a vendor providing a service to create and manage a catalog of demo environments. The management, configuration and metadata are the “thin” service layer; whenever a demo environment is started, virtual machines are created and built on top of the underlying IaaS solution. The same goes for dev & test.
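
A minimal sketch of the idea (the IaaS client and its method are hypothetical stand-ins, not any real vendor’s API):

```python
# Hypothetical sketch of a "cloudouflage" layer: the customer sees a catalog
# of named demo environments; the VMs underneath are an IaaS detail.

class IaaSClient:
    """Stand-in for a real IaaS API (hypothetical method)."""
    def create_vm(self, image: str) -> str:
        vm_id = f"vm-{image}"
        print(f"provisioning {vm_id} on the underlying IaaS")
        return vm_id

class DemoCatalog:
    """The 'thin' service layer: catalog, configuration and metadata only."""
    def __init__(self, iaas: IaaSClient):
        self.iaas = iaas
        self.catalog = {"crm-demo": ["web-frontend", "app-server", "database"]}

    def start_demo(self, name: str) -> list[str]:
        # To the user this is "start a demo"; the IaaS plumbing stays hidden.
        return [self.iaas.create_vm(image) for image in self.catalog[name]]

DemoCatalog(IaaSClient()).start_demo("crm-demo")
```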

Bottom line: while for anything consumers need there’s an app for that, the day will come when for every service organizations need there will be a cloud for that. The time for Service as a Service has come!

Monday, December 6, 2010

Neuroprivilogy is the Holy Grail

Is your Neuroprivilogy vulnerable?
The answer is most probably yes; you simply have no clue what Neuroprivilogy is (yet)…

The first step in any discussion is defining a fancy term to describe the phenomenon; that’s how Neuroprivilogy came about.
As the name suggests, Neuroprivilogy is constructed from the words neural (network) and privileged (access), and can be defined as the science of networks of privileged access points. Using the neural network metaphor, an organization’s infrastructure is not flat but a network of systems (neuron = system). The connections between systems are access points, similar to synapses (for neurons). Some of these access points are extremely powerful (i.e. privileged) while others are not. Regardless, access points should be accessed only by authorized sources.
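
To make the metaphor concrete, here is a minimal sketch (the systems and labels are hypothetical) of modeling the infrastructure as a graph of privileged access points:

```python
# Minimal sketch (hypothetical systems): infrastructure as a graph where
# nodes are systems and edges are access points, some of them privileged.
access_points = [
    # (source,        target,       privileged?)
    ("web-app",       "database",   True),   # proxy account with full rights
    ("batch-job",     "database",   True),   # password hardcoded in a script
    ("web-app",       "cache",      False),
    ("admin-console", "hypervisor", True),
]

# The privileged subgraph is the network Neuroprivilogy is concerned with:
privileged = [(s, t) for s, t, p in access_points if p]

def reachable(start: str) -> set[str]:
    """Systems reachable over privileged edges from one compromised node."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for s, t in privileged:
            if s == node and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

print(reachable("web-app"))  # {'database'} - each privileged access point is
                             # a potential hop deeper into the network
```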

This network of privileged access points is vulnerable, as you’ll find out by observing the 7 fallacies of Neuroprivilogy vulnerability:

1. These access points have limited permissions
Systems almost always use proxy accounts to interact with other systems (e.g. application to database). Now let’s be honest – when was the last time we used any type of mechanism to restrict systems’ access based on anything (e.g. propagating end-user permissions to the app-database interaction)? In most cases we simply grant systems privileged access rights. Hey, it is much easier to use the most permissive access rights required as the common (permission) denominator…

2. Given the associated high risk, I probably already have controls in place
Does anything from the following list sound familiar? Hardcoded passwords, clear-text passwords in scripts, default passwords never changed, “if we touch it everything will break”… The irony is that personal accounts of real users have very limited access rights, yet are subject to stricter controls (even simple ones such as mandated frequent password changes). A sketch of how easy these are to spot follows.
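
As a trivial illustration, here is a naive sketch of grepping scripts for hardcoded credentials (the patterns and sample script are illustrative only; real scanners are far more sophisticated):

```python
# Naive sketch of spotting hardcoded credentials in scripts (illustrative).
import re

PATTERNS = [
    re.compile(r"""password\s*=\s*['"][^'"]+['"]""", re.IGNORECASE),
    re.compile(r"sqlplus\s+\w+/\w+"),  # classic user/password on the CLI
]

def scan(text: str) -> list[str]:
    return [line for line in text.splitlines()
            if any(p.search(line) for p in PATTERNS)]

script = '''
export DB_PASSWORD="tiger123"
password = "changeme"   # default that was never changed
sqlplus scott/tiger @nightly_report.sql
'''
print(scan(script))  # all three lines are flagged
```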

3. But I have all those security systems, so I must be covered, right?
This topic calls for a separate blog post altogether; however, I’ll point out the fundamental principle of most systems handling users and accounts (such as IAM, SIEM, GRC, etc.): the prerequisite to all operations is the identification of users. They are great tools for personal accounts correlated to known users, and not really for privileged access points used by non-carbon-based entities. The solution is very simple – use adequate tools!

4. Privileged access point vulnerability is strictly an insider problem
Picture yourself as the bad guy: which of the following would you target? Personal accounts with limited capabilities protected by some controls, OR privileged access points with limitless access protected by no controls? The notion of a purely internal access point is long gone, especially with the borderless infrastructure trend (did I say cloud?).

5. Adding new systems (including security systems) should not impact my security posture
That’s where it gets interesting. Most systems interact with others, whether of an infrastructure nature (such as a database or user store) or services. Whenever you add a system to your environment, you immediately add administrative accounts to the service and interaction points (access points) to other systems. As already mentioned, most of these powerful access points are poorly maintained, causing a local vulnerability (of the new system) as well as a global one (the new system serves as a hopping point to other network nodes). Either way, your overall security posture goes down.

6. I have many more accounts for real users than access points for systems
Though this fallacy might sound right, the reality is very different. It is not about how many systems you have but about the inter-communication between them. Judging by the enterprise customers I’ve talked with, the complexity of the network and the magnitude of this challenge will surprise many.

7. This vulnerability is isolated to my traditional systems
Some of the more interesting attacks/breaches of the past year present an interesting yet unexpected trend. The target is no longer confined to the traditional server, application, or database. Bad guys have attacked source code configuration management systems (the Aurora attacks), point-of-sale devices, PLCs (Stuxnet), ATMs, videoconferencing systems (Cisco), etc. The extent of this phenomenon is actually very surprising. I even heard the other day that pacemakers have privileged accounts (for remote management). Now that is what I call a life-and-death vulnerability!

When observing these fallacies alongside the characteristics of APT attacks, you realize Neuroprivilogy vulnerability is the Holy Grail for APT attackers. It perfectly fits the APT profile – not about quick/easy wins, but rather very patient, methodical and persistent attacks targeting a well-defined (big) “prize”. You work the network of privileged access points until you find the way in and win the “big prize” (limitless access to the targeted parts of the infrastructure).

The dummy version of comparing traditional to APT attacks: traditional = a quick and easy win; APT = keep your eyes on the prize.

Now, going back to my opening question – is your Neuroprivilogy vulnerable? (No need to answer; it’s a rhetorical question.)

BTW – an interesting TED talk about neural networks and how they actually define us: http://www.ted.com/talks/sebastian_seung.html

Monday, November 22, 2010

v1.0 is always more successful when bundled with two sunny days in Orlando

Nothing like sunny Orlando in the middle of a Boston November, so you can imagine my excitement about participating in the first Cloud Security Alliance Conference this week.
So what did we have there (other than ~90 degrees)?
  • Interesting mix of participants (customers, vendors, thought leaders, consultants, federal)
  • Lots of cloud and security related sessions
  • Securing privileged users (insider threat) and privileged access points (API management) are top concerns
  • Sitting in a panel discussion about securing applications and data in the cloud
  • Booth at the expo center (chance to both pitch and have interesting discussions with participants)
  • AND one big debate about security and the cloud
(Basically all the ingredients for two days well spent)

While I could go into lengthy descriptions of sessions and other discussions, I prefer to focus on what I perceived as the biggest debate at the conference. Which of the following is right?

The cloud is new, and therefore requires all applications and security solutions to be re-written
OR
It’s just more of the same, it’s been around for a while; let’s move our apps over and secure them using current controls

Surprisingly (or not), most influencers seem to believe things need to be re-written.
Not surprisingly (or …), I have a different take on that. But first, a couple of clarifications:

  1. I’m tired of this binary approach to the cloud some people present – “either everything goes to the cloud (1) or nothing (0)”. Think hybrid: we are going to have mixed environments for as long as anyone can currently plan.
  2. Tired++ of this ongoing FUD competition (though I have to admit I occasionally participate). RELAX, don’t panic, we are going to be OK. The cloud is a great thing, and the decision whether to adopt it is a business decision (based on its many virtues). And yes, it has vulnerabilities and issues which need to be highlighted and addressed (start by focusing on operations accountability and transparency).
Now that it is off my chest, I can finally address the cloud-security debate. As in most cases, the answer is somewhere in the middle. The cloud represents new concepts, technologies and delivery mechanisms. Given the extent of the change (and the opportunities), some areas are definitely going through a revolution and require re-thinking/re-architecting – or, as some of my colleagues put it, re-writing. However, looking at public IaaS, there are quite a few challenges that are only undergoing an evolution and can be addressed with existing tools and expertise (with only some adjustments required). I thought my friend Gilad (founder + CEO @ Porticor) presented it nicely during his session.
It is also true that every several years products get re-written anyway, so the shift to the cloud might be a good opportunity.

My recommendation (my personal crystal ball):

  • If you are in the services business – identify the evolution areas and follow them.
  • A vendor? The revolution domains are where you should be looking for opportunities.
When all is said and done, looking at Friday’s financial news: Salesforce’s Q3 results exceeded expectations and their stock is on fire! Makes you wonder whether customers really care, or whether we are simply over-hyping it all…

Thursday, September 30, 2010

Anything you can do I can do better

During the past several years it has become a hobby of many to bash the Identity Management vendors, solutions, deployments, you name it: it is too expensive, it takes forever to deploy, it eventually provides limited coverage, it is not business-aware, it is too complex – did I mention the price? As an Identity Management veteran I can admit that, despite the major consolidation the market experienced and the multibillion-dollar market size, some of it (probably most of it) is kind of right…

Why is it any different from the natural evolution of other domains?

Sometimes you encounter a special phenomenon where:

1. The problem is well understood by everyone

2. It is a major problem

3. Every organization experiences it

4. And organizations are willing to pay to resolve it (thus the market is defined as a multibillion-dollar market)

5. There are plenty of solutions out there

BUT there is NO EXPONENTIAL GROWTH for any of the vendors – wouldn’t you expect at least one to break away?

So why does this happen? Sometimes because the existing products’ coverage is limited; in other cases because they are too complex, or too expensive (basically most of the reasons previously described).

Those familiar with the domain know that despite the white noise (of existing vendors), the market is anxiously awaiting someone to actually “do it better”, “be greater”, “sing louder”, “go higher”…

This month I participated in a couple of events – VMworld 2010 and ArcSight Protect 2010. While representing Cyber-Ark and discussing our PIM (Privileged Identity Management) technologies, I had a chance to listen to what the hosting vendors had to say.

I’m happy to report that there are two new players stepping into the Identity Management space claiming to do it better. Meet VMware (provisioning, self-service and SSO) and ArcSight (IdentityView).

It is true both vendors are very cautious with their announcements (ArcSight – “we only do monitoring”; VMware – “it is only for synchronous provisioning and we only manage our own systems”), but come on…

What do you think – if VMware customers asked to “simply integrate with a ticketing system for approvals”, would they provide it? Or “can you open the platform to plug-ins that control other systems”?

How about ArcSight customers requesting the ability to take remediation actions (such as disabling a suspicious account) directly from their control panel?

I don’t know about you, but I think these guys are here to stay.

Another market that experiences a similar phenomenon is information protection (DLP and/or ERM and/or EIP…). The extent of this challenge is huge (i.e. a major, major problem for all organizations) and the current products are struggling to solve this hairy problem. Products are simply too complex and limited, and fail to pick up. If I had to predict, I would say waves of innovation are to be expected, and only a different take will manage to lift this domain to the next level.

So if you are out there considering starting an information security start-up, definitely look at this space; there’s alllllllllooooooooottttt to be done and it requires a fresh approach.