Thursday, September 16, 2010

An interesting use of Google Trends

Google Trends is a nice tool that gives you statistics about the terms used in Google web searches, in the form of curves. For instance, you can compare the curves of iPod, iPhone and iPad:


But there's another interesting part, on the right: links to articles that were published when a peak occurred. The moments are well chosen for buzzwords like these; less so for less marketing-oriented products. Either way, it's often a good way to trace the history of a technology, an idea or a movement.

Have a look at these curves:

Tuesday, September 14, 2010

Security ROFL

Let's have some fun with security.

Monday, September 13, 2010

Zero Risk vs. Decision Making

The ability to produce risk assessments, partly innate and partly acquired, is one of the most basic skills of successful managers.

While there is much value in the organized, systematic search for accurate information, the ability to assess risks when information is incomplete is a most valuable asset.

The first reason is that we often lose precious time gathering and precisely analyzing data when approximate data would be good enough. To put it bluntly: you don't need to know exactly where the arrow will hit to know that an arrow shot roughly in your direction is a bad thing. That's when you need someone with good instincts about risk.

The second reason is that some data can be impossible to gather, or hard enough to gather that the process slows to the point of discouragement. For instance, if you want to confirm the ability to turn on a specific option of a specific security feature in a specific version of a specific piece of software, one you have not yet bought and cannot test, you may well give up before you get the information. That's when you need someone with background knowledge and professional connections, who can get better access to such information, or an approximate substitute for it.

The third and most important reason is that managers delegate. Here, management would probably want to delegate the data collection, then make its own assessment and its own decision from the results. That is: delegate the boring part and make the obvious decision (taking all the credit, Dilbert-style).

To be more precise, it is commonplace to see IT managers answer a question with another question, asking for more technical details when the staff actually came for a decision. In such cases, the staff are asking the manager to fill the gap between the available information and complete information. (And the staff are probably well aware of that gap.) So when the manager overlooks this request for a decision and asks for never-ending technical or economic details, data or evidence, the staff feel the manager is worthless. After all, once they have collected all the data, they can make the decision themselves; they're not stupid, thank you!

In conclusion, I would say that Zero Risk is, of course, not reachable, and that managers should be aware that their staff regularly come to them for risk assessments, not for an instruction to dig deeper toward Zero Risk. If they could reach it, they would.

Saturday, September 4, 2010

IT and ITsec books I've read these last years

Over the last few years, I've read a few interesting books about IT and IT security, so I list them here, in case you ever get a spare weekend ^^
Each entry gives the language, title and author(s) of the book and, where possible, links to related blogs and newsfeeds. The list is in no particular order.
  • [EN] The Failure of Risk Management: Why It's Broken and How to Fix It, by D.W. Hubbard [BLOG] [RSS]
  • [EN] Applied Security Visualization, by Raffael Marty [BLOG] [RSS]
  • [EN] The Official (ISC)² Guide to the CISSP CBK, aka the CISSP CBK, by... the (ISC)²
  • [EN] Beautiful Security, by Andy Oram and John Viega
  • [FR] La fonction RSSI (The CISO position), by Bernard Foray [old BLOG] [old RSS]
  • [EN] The New School of Information Security, by Adam Shostack and Andrew Stewart [BLOG] [RSS]
  • [EN] Security Warrior, by Cyrus Peikari and Anton Chuvakin [BLOG] [RSS] [Cyrus Peikari's page, see "Articles"]
  • [EN] Security Metrics: Replacing Fear, Uncertainty and Doubt, by Andrew Jaquith [BLOG] [RSS]
  • [FR] Sécuriser ses échanges électroniques avec une PKI, Solutions techniques et aspects juridiques (Securing Electronic Flows with a PKI, Technical Solutions and Legal Matters), by Thierry Autret, Laurent Bellefin and Marie-Laure Oble-Laffaire
  • [EN] The whole ITIL v3 series
  • [EN] Geekonomics: The Real Cost of Insecure Software, by David Rice [BLOG] [RSS]
EDIT 09/06: Oh, and I forgot the mythical The Mythical Man-Month, by Fred Brooks [Wikipedia]

Thursday, September 2, 2010

Companies, beware of SSL decryption in your proxy!

The ubiquitous rise of SSL as a means of confidentiality brings new security problems and new ways of managing them...
I guess we could have figured it out from the very definition of SSL, but it only became clear to me at the beginning of this year. With so many protocols using SSL, with everyday HTTPS, with everyone buying things on the Internet, the protocol has spread to ubiquity, and its use has moved from a few specific pieces of software and knowledgeable people to every kind of software and mainstream users. Out of this situation, I have seen an explosion of:
  • bad implementations of SSL in all kinds of software (see the sketch after this list),
  • attempts to attack the protocol itself, including new (so to speak) man-in-the-middle attacks,
  • bad uses of SSL (weak ciphers, self-signed certificates for public use, etc.),
  • impatience from top management with the inability of IT departments to provide statistics about employees' SSL traffic.
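To make the first point concrete, here is a minimal sketch (my own illustration, with a placeholder URL, not an example from the post): the classic bad implementation is client code that silently disables certificate verification, which makes the man-in-the-middle attacks of the second point trivial.

```python
# Sketch of the classic bad SSL implementation (my own illustration,
# placeholder URL): disabling certificate verification accepts any
# man-in-the-middle certificate without complaint.
import ssl
import urllib.request

# BAD: hostname and certificate checks are switched off.
unsafe_ctx = ssl.create_default_context()
unsafe_ctx.check_hostname = False
unsafe_ctx.verify_mode = ssl.CERT_NONE

# GOOD: the default context verifies the chain and the hostname.
safe_ctx = ssl.create_default_context()

# Any MITM certificate would be silently accepted here...
urllib.request.urlopen("https://www.example.com/", context=unsafe_ctx)
# ...while this call raises ssl.SSLCertVerificationError under a MITM.
urllib.request.urlopen("https://www.example.com/", context=safe_ctx)
```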
For all these reasons, I bet that 2010 would see the introduction of new tools to manage SSL: produce statistics from it, filter it, assess its security and so on. I found that Forefront TMG (the 2010 name of MS ISA) does a fair part of the job by decrypting the SSL flows between the LAN and the Internet. Once the flows are decrypted, you can do all the usual things with them: filtering, statistics, eavesdropping...
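As a side note, this kind of decryption is detectable from the client side. Here is a minimal sketch (my own illustration; the host and the known-good fingerprint are placeholders): a decrypting proxy in the TMG style has to present a certificate re-signed by its own CA, so the fingerprint seen from inside the LAN will differ from the genuine one.

```python
# Minimal sketch: detecting SSL interception from inside the LAN by
# comparing certificate fingerprints. KNOWN_GOOD_SHA256 and the host
# below are placeholders, not real values.
import hashlib
import ssl

KNOWN_GOOD_SHA256 = "replace-with-the-genuine-fingerprint"  # placeholder

def cert_fingerprint(host: str, port: int = 443) -> str:
    """SHA-256 fingerprint of the certificate presented for host:port."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

if cert_fingerprint("www.example.com") != KNOWN_GOOD_SHA256:
    # A decrypting proxy (ISA/TMG-style) re-signs the traffic with its
    # own CA, so the certificate it presents differs from the origin's.
    print("Certificate differs from the known-good one: possible interception.")
else:
    print("Certificate matches: no interception on this path.")
```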

My point is: it's not a secure practice yet, and it probably never will be.

There are two parts to my argument; the first is the legal and compliance point of view. SSL traffic is encrypted precisely so that it cannot be read, as obvious as that may sound. The company might not be allowed, under the laws of its country, to listen to employees' encrypted traffic. For instance, in France, I wouldn't be allowed to listen to private connections to online banking sites. It also brings back the threat of the tactless or malevolent administrator.

The second part is the technological one. SSL is ubiquitous and, to some extent, that is a blessing: it means the client software exhibits a variety of SSL implementations, each with its own vulnerabilities and weaknesses. For instance, if the SSL traffic comes from three browsers, two media players and ten business applications, then a given vulnerability would probably affect only one of those fifteen pieces of software. The targetability of unproxied SSL can roughly be compared to the average of the vulnerabilities of the various pieces of software that use it. The targetability of proxied SSL is that of the proxy alone.
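A back-of-the-envelope sketch of that comparison, with made-up numbers matching the example above:

```python
# Back-of-the-envelope sketch of the argument above (illustrative numbers).
# Without a decrypting proxy, a flaw in one client hits only that client's
# share of the SSL traffic; with a proxy, a flaw in the proxy hits all of it.

clients = 15                 # 3 browsers + 2 media players + 10 business apps
traffic_share = 1 / clients  # assume traffic is spread evenly across them

exposure_without_proxy = traffic_share  # one vulnerable client's share
exposure_with_proxy = 1.0               # one vulnerable proxy sees everything

print(f"Exposure without proxy: {exposure_without_proxy:.0%} of SSL traffic")
print(f"Exposure with proxy:    {exposure_with_proxy:.0%} of SSL traffic")
```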

Would you trust ISA more than Firefox? And suppose you have an endpoint tool that examines SSL: if its security features are better than those of the proxy, you probably lose those capabilities during the decryption/re-encryption phase at the proxy.

Of course, SSL remains a cloudy mystery, threatening to some extent, but I don't think this is the right way out of it. Still, let's keep an eye on these technologies, because I'm sure we'll have to cope with them anyway.

Wednesday, September 1, 2010

The technology SPOF

When I'm thinking about availability, a lot of my time and thought goes into the careful search for SPOFs (single points of failure).

  1. I look for hardware SPOFs, like a single machine doing an important job and requiring a backup in case of breakdown. [first step of a BCP]
  2. I also look for network SPOFs, like making sure the backup has the same network access as the master machine, so that it doesn't sit there alone and useless. The same goes for firewall rules and all the kinds of filtering that network flows undergo. [careful execution of the BCP]
  3. I furthermore look for configuration SPOFs. These cover the rather funny case where the backup machine is up and reachable on the network, but the clients are not aware it's there and so never talk to it. This usually happens when IP addresses are not switched automatically between the master and the backup, or when a configuration screen lets you type in only one server, not two or more. Thankfully, given the way MS Domain Controllers work, this should not happen with them (or we would hear about this kind of SPOF more often). Still, it remains common in many "small vendor" appliances and in the application world. [very careful execution of the BCP]
  4. Nonetheless, a formidable SPOF remains: the technology SPOF. You have the backup machine, it's reachable, and the clients know they should communicate with it. But the backup suffers from the very same technological incident as the master machine did (see the sketch after this list).
    Say, for example, that the master breaks down because it fails to handle a large quantity of "client data" (or anything else) that has to be processed. The backup will break down under the same load.
    Let's take another example: the server is an application connecting to a database. The database receives a minor software update that changes something and makes the master server go crazy. The backup takes over the job and goes crazy too.
    Third and last example: you have a nice application server with scheduled tasks that do necessary work. One task fails and the server goes down, unable to continue. At that moment, the backup comes up, launches the same scheduled task and fails too... [and that's outside the perimeter of most BCPs]
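Here is a minimal sketch of that third scenario (a hypothetical illustration, not taken from any real incident): the master and the backup run the very same task code, so the same poison input takes them both down.

```python
# Minimal sketch of the technology SPOF (hypothetical illustration):
# master and backup run the very same code, so the same "poison" input
# crashes both. Redundant hardware doesn't help when the failure lies
# in the shared technology itself.

def scheduled_task(record: str) -> None:
    """The task logic deployed identically on master and backup."""
    value = int(record)  # shared bug: crashes on malformed input
    print(f"processed {value}")

poison = "not-a-number"  # the input that took the master down

for machine in ("master", "backup"):
    try:
        scheduled_task(poison)
    except ValueError:
        print(f"{machine} failed on the very same input: technology SPOF")
```

The fix is not a third identical machine, but a different path to the same service, which is exactly what the degraded modes below are about.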
That's when you'd want another way to provide your service. That's when you're forced to remember that the machine is not there just to exist and run; it's there to help provide a service, and that service may be provided by other means. That's when you're glad to have a well-prepared DR plan, with degraded and reduced modes thought out in advance...