Friday, December 23, 2011

Security ROFL 5

[FR] Remotely hacking a car is possible!
[EN] An end to installation woes: 9€ to get your PC installed, in Romania
[EN] US Nuclear Chain of Command, xkcd comic strip
[EN] Manual Override, xkcd comic strip
[EN] Unpickable, yet another xkcd comic strip
[EN] Technology organization leadership charts, on Michael Krigsman's blog
[EN] 3D Printer, xkcd comic strip about new forms of spam
[EN] Alarm Geese, blogged by Bruce Schneier

Sunday, December 18, 2011

Can you afford NOT to invest in security in 2012?

The crisis is here and it looks like many IT services will get a near-zero investment budget for 2012. I think it's high time that IT services reconsider information security and invest time (if not money) into it. My point is that any security project should open new areas for business expansion and show a positive ROI, like any IT project, security or not.

Security means new openings for businesses
IT services profit from selling services (hardware, networks, software, data) to customers, providing added value to users. Well-designed security projects allow the expansion of both the customer and the user pools.
  1. Users: users are reluctant to use services that are not secured. One good example is the boom of commercial websites, which could not have happened without a *security* measure: SSL.
  2. By adding well-chosen security measures, you can improve the adoption rate/market share of your services. You can also grow the target audience by allowing access from new networks, source devices, telecommuters, etc.
  3. Customers: some customers don't just desire security, they demand guarantees about it. You get that by two means: one is being sure of yourself and your services (are we up to what we are selling?), and the other is independent assessment and/or certification.

Positive ROI for security projects
In a world where security is seen primarily as a source of constraints, the very use of the letters ROI about security is often considered a joke. It's not. A security project, like any other IT project, requires an investment of time and money; there's no reason why security should go without an ROI calculation, or without a positive result to it.
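As a back-of-the-envelope sketch, the calculation is no different from any other IT project's. All figures below are invented purely for illustration:

```python
# Hypothetical ROI calculation for a small security project.
# Both figures are invented for illustration only.
cost = 5000      # e.g. staff time spent reviewing processes, in EUR
savings = 12000  # e.g. fewer stolen laptops, shorter downtimes, in EUR/year
roi = (savings - cost) / cost
print(f"ROI: {roi:.0%}")  # ROI: 140%
```

If that number comes out negative, the project goes on the "better times" pile, whether it's flagged as security or not.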

In a hard year like 2012, I'd say you must concentrate on security projects that have an immediate positive return. It's time to focus on projects that cost very little money to implement: reviews of processes and of security incidents, and the implementation of those "long thought about, but we never had the time" items. It's also time to focus on under-used capabilities of existing software and servers, instead of re-inventing a costly wheel.
A good security project for 2012 should show immediate returns: less theft of laptops/smartphones, better telecommuting allowing smaller transportation and accommodation costs, better supervision leading to a decrease in downtimes, etc.

To sum up, I think that in 2012 you should simply leave out any project that doesn't show an immediate positive return, whether flagged as "security" or not. Just call it technical fussiness and wait for better times.

Monday, August 1, 2011

Switching Internet Explorer's NTLM Credentials

I was looking for a way to have Internet Explorer, launched within user1's Windows session, authenticate against NTLM sites and proxies with the credentials of user2.
Using Windows Credentials Editor does work but, as discussed in the review below, it's not a production tool.
I also found that using the runas command was problematic, because you either create a Windows profile for user2 or you don't, and both options have drawbacks:
  • If you do create a profile, that means a profile and corresponding home folder will be created, which might not be desirable.
  • If you do not create a profile, that means user2 cannot save parameters in IE and cannot receive domain policies, bookmarks and so on.
Eventually I found a very short, built-in way to do it:
C:\>runas /netonly /user:my_domain\user2 "C:\Program Files\Internet Explorer\iexplore.exe"

Enter the password for my_domain\user2:
Attempting to start C:\Program Files\Internet Explorer\iexplore.exe as user "my_domain\user2" ...
That runas /netonly command lets you run IE with user1's privileges, profile and bookmarks AND authenticate against remote NTLM sites and proxies as user2.

This command is especially convenient when you want to perform remote NTLM authentication as a given user but don't want to launch a full Windows session just for it.

Review: Windows Credentials Editor (WCE)

Windows Credentials Editor is a small tool by Hernan Ochoa (Amplia Security) that lets you view and modify the NTLM credentials stored in memory at runtime (for NTLM websites, MS proxies, file server shares, etc.).

You can view NTLM credentials stored in memory, in hashed form:
C:\WCE>wce -l

WCE v1.2 (Windows Credentials Editor) - (c) 2010,2011 Amplia Security - by Hernan Ochoa (
Use -h for help.

You can generate hashes for a password:
C:\WCE>wce -g my_passwd

WCE v1.2 (Windows Credentials Editor) - (c) 2010,2011 Amplia Security - by Hernan Ochoa (
Use -h for help.

Password: my_passwd
Hashes: B251802AA879D28F354CC2EE630F4FB7:582A7D8A2EA026919589828D03F91F8F
And you can switch credentials! To change the current user:
C:\WCE>wce -g new_user_password

WCE v1.2 (Windows Credentials Editor) - (c) 2010,2011 Amplia Security - by Hernan Ochoa (
Use -h for help.

Password: new_user_password
Hashes: B251802AA879D28F354CC2EE630F4FB7:582A7D8A2EA026919589828D03F91F8F

C:\WCE>wce -s new_user:new_user_domain:B251802AA879D28F354CC2EE630F4FB7:582A7D8A2EA026919589828D03F91F8F
WCE v1.2 (Windows Credentials Editor) - (c) 2010,2011 Amplia Security - by Hernan Ochoa (
Use -h for help.

Changing NTLM credentials of current logon session (0001B0FBh) to:
Username: new_user
domain: new_user_domain
LMHash: B251802AA879D28F354CC2EE630F4FB7
NTHash: 582A7D8A2EA026919589828D03F91F8F
NTLM credentials successfully changed!
All applications that rely on NTLM to authenticate the current user will now use the new credentials!
You can also explicitly specify which credentials to modify, which is useful if you have many NTLM credentials in use:
C:\WCE>wce -i old_user -s new_user:new_user_domain:B251802AA879D28F354CC2EE630F4FB7:582A7D8A2EA026919589828D03F91F8F
WCE v1.2 (Windows Credentials Editor) - (c) 2010,2011 Amplia Security - by Hernan Ochoa (
Use -h for help.

Changing NTLM credentials of current logon session (0001B0FBh) to:
Username: new_user
domain: new_user_domain
LMHash: B251802AA879D28F354CC2EE630F4FB7
NTHash: 582A7D8A2EA026919589828D03F91F8F
NTLM credentials successfully changed!
All this makes WCE a great tool for understanding and debugging NTLM applications. Many thanks to Hernan Ochoa for it!
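As a side note, the -s argument seen above packs four colon-separated fields: user, domain, LM hash, NT hash. A minimal Python sketch of that format (the helper name is hypothetical, not part of WCE):

```python
# Parse a WCE-style "user:domain:LMHASH:NTHASH" string.
# parse_wce_s is a hypothetical helper name, not part of WCE itself.
def parse_wce_s(arg):
    user, domain, lm, nt = arg.split(":")
    # NTLM hashes are 16 bytes each, i.e. 32 hex characters.
    assert len(lm) == 32 and len(nt) == 32
    return {"user": user, "domain": domain, "lm": lm, "nt": nt}

creds = parse_wce_s(
    "new_user:new_user_domain:"
    "B251802AA879D28F354CC2EE630F4FB7:582A7D8A2EA026919589828D03F91F8F"
)
print(creds["domain"])  # new_user_domain
```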

This is not a production tool for two major reasons:
  1. Most antivirus products consider switching NTLM credentials an attack.
  2. WCE requires local administrative privileges.
Apart from that, it's been stable and functional every time I've used it.

Saturday, June 11, 2011

Top-Down or Bottom-Up CISOing?

What I thought, in my earlier years, to be a strategic choice now appears to me as a question of the decision maker's personal character: whether to take a top-down or a bottom-up approach to solving a complex problem. When you manage large projects, you deal with managers of many characters, and that may lead you to work with Single-Minded Top-Down Thinkers (SMTD) or Single-Minded Bottom-Up Thinkers (SMBU).

As a CISO, you have to solve complex problems: "get us compliant with that standard", "make sure that application is available 24/7", or even wicked problems such as "make us secure". And you have to deal with many decision makers, in IT and beyond. So you cannot do without prepared tactics to set an SMTD or an SMBU back on track.

If you let an SMBU deal with a problem alone, you'll watch him find a quick solution and apply it. But he'll forget to communicate about it, to document it for later re-use and, most of all, to compare it to the goals of the organization and ensure it doesn't hinder some other process of the company.

To deal with SMBUs, I take two actions:
  1. I explain to him what I intend to do with his solution to my problem. Not just the problem itself: I take the time to explain what the goal is and what my next steps are. That way, he includes all of my later constraints in his understanding of the problem, and solves both the problem and the downstream constraints.
  2. I also take the time to recapitulate the baseline procedures for communicating and documenting the problem/solution, and I make sure he understands he'll be the one to clean up the mess if something was done improperly.
Once you're accustomed, that doesn't take more than ten minutes.

SMTDs are usually more experienced people who, somewhere in the middle of their professional lives, have lost the idea that they must deliver results, not just thoughts. If you let an SMTD work out a solution to a problem by himself, he'll give you diagrams of his view of the problem, which he thinks is complete (or at least contains everything necessary), he'll link your problem to a family of other problems that he has to solve, and you'll get out of his office with ten times as much work as when you came in.
For instance, if you come in with a question about whether to purchase a new, different piece of hardware, you'll get out with questions (and a few useless answers) about asset management, internal billing and wifi networks. And you'll realize that you have no clue whether the company will buy it or not.

Over the years, I've developed a quick and dirty solution to deal with SMTDs:
  1. Don't go into long-term explanations of why you want to solve the problem; stick to the short term. The long version would take hours and would only deepen the SMTD's scope.
  2. At the beginning of the discussion, agree with the SMTD on a list of at most five objectives the solution to your problem must reach. This way, you'll be able to reduce the scope of his thoughts to what you agreed on. That is, you just need to split your problem and the surrounding areas into a five-item list.
  3. If you take the example of the new, different hardware purchase, you just have got to reduce the problem, right from the start of the conversation, to the comparison of:
    • prices,
    • main features,
    • delivery,
    • compatibility,
    • immediate satisfied customers.
    There are many other points to be discussed, but you don't want to address them all. Not now, not with the SMTD and not in an all-in-one speech by him.
When you're accustomed to it, you can prepare these five items before talking to the SMTD, and that doesn't cost time; it saves you time.

Tuesday, May 17, 2011

Adverse Effects of a Security Measure: the Example of French Speed Cameras

As far as analogies go, I find French speed cameras a revealing example of security failure.

Automated speed cameras have been installed in many places along motorways, as well as in town centers and rural areas. These devices take a picture of every car going 5% or 10% above the speed limit. The driver is fined a heavy penalty and even has points removed from his driving license. The license is invalidated once 12 points have been removed.

That sounds good, but there are all kinds of problems. To name just a few design problems:
  • People brake hard when they see one ahead of them. They risk provoking an accident on a motorway just because of that.
  • Whether they were "shot" or not, they're angry about it, and they then speed up a lot, knowing there won't be another speed camera for the next few miles.
People got to know the exact locations of the cameras, or even invested in detectors, bundled into iPhones, Android phones or dedicated devices. So the government sent policemen roaming the country with "mobile" speed cameras.
And then came the social problems:
  • Tax money is used to fine the taxpayers. If that only happens in cases of real danger, that's good. But when it turns into fussiness, it's parasitic!
  • After a short drop in road-accident death rates, the system reached its limit and the death rates started stagnating again. So the government intensified the pressure on policemen: they are now accountable for the number of fines issued in their area. That measures the efficiency of the system by an irrelevant variable.
  • Additionally, citizens are exasperated by this overpressure, clearly conscious that it's not an efficient security measure anymore.
  • In a significant number of cases, policemen start to put fines in places where they can do it easily, whether there is a real danger or not.
  • All that of course leads to a vicious circle where citizens are angry about policemen and about the government and where the "measure" of efficiency becomes more and more irrelevant.
  • Eventually, policemen are so pressured to issue speed-camera fines that they forget to issue fines for other (actually efficient) reasons. For example, you'll find more cars in poor condition (no lights, flat tyres...) than a few years ago.
There's also the border effect:
  • Foreigners are "shot" by the speed cameras, but the French state doesn't know to whom the fine must be sent. In the end, they don't have to pay and they don't risk losing their license. And they often profit from our beautiful roads at speeds above 180 km/h. So citizens feel as if foreigners are treated better than themselves.

There's the implementation problem:
  • In their rush to issue fines, policemen just park anywhere, including dangerous locations! They sometimes cause accidents themselves.

And finally, there are the typical VIP exceptions that plague any security measure:
  • Police cars themselves are not subject to these fines, so currently the worst drivers you can find anywhere, whether in town or on motorways, are policemen!

All in all, I'm impressed if the government behind that ever gets re-elected.

Been doing some reverse engineering

I've been reversing a Win32 PE executable lately, something I hadn't done since I was 15. I found it quite easy. Much easier, indeed, than a few years ago. What's changed since then?
  • The tools have changed. At the time, I had mastered WinDASM and SoftICE, which are no longer fashionable. It even seems that WinDASM has disappeared from the market. This time, I used HeavenTools' PE Explorer, which is a clear improvement over them.
  • The PE format has not changed. Or, at least, nothing that matters in debugging.
  • Windows is more stable than at the time, saving you many reboots ^^
  • The compilers have not changed much. It seems that I could learn to recognize compilation styles of various compilers in very little time.
  • Most of all, I've not changed. I can now remember very precisely why I quit reverse engineering software back then: I prefer working with source code, and I prefer working in design or implementation mode rather than in debugging mode. I can also remember that I quit reverse engineering at approximately the same time as I started using GNU/Linux on my desktop.
I can clearly validate this view years later: though I'm happy to be able to reverse a binary, I think programming is more rewarding.

Tuesday, May 10, 2011

Smartcard and PIN or the Increased Security of Just 4 Digits

The French government is currently enforcing the use of what it calls strong authentication for all access to people's medical data: smartcards protected by a PIN code, containing an authority-approved certificate. The PIN code is just 4 digits long, and the question came to me:

Why should I trust 4 little digits with my users' security? (when my password has 12?)

There are many subtle technical points within that question, but the main answer comes down to one key view of the problem: the reduction of possibilities, which helps enforce good processes.

Compared to a password-based authentication, smartcards and PIN codes enforce the following:
  1. Just one mechanism to integrate passwords and content on the card: that of the card itself.
  2. Just one mechanism to ask for authentication: challenge-response. That removes the danger of "password comparison" mechanisms, where you just have to look into the computer's memory to get the cleartext password.
  3. Just one administrator code capable of resetting the PIN: the SOPIN. That removes the danger of old, "unused" administrator accounts you find in most company directories.
  4. Just numbers in the PIN code, no letters. Though this may seem like a weakness against brute force, it is on the contrary a strength, because it prevents people from setting their own first name, or their son's, as a password.
  5. Additionally, users tend to remember numbers better. As a typical human being, you could name tens of likely alphabetic strings for your own password. But you remember only a few sequences of 4 numbers. So when you know one, that's for good.
  6. Just three attempts: you can't easily brute-force it by the usual means.
  7. Just one logical place to deliver a smartcard: inside the company. You may send a password or even a PIN by mail, but a physical token has to be delivered in person, and the only logical location to do that, when you have dozens or thousands of users, is inside the company's walls. That reduces the number of intermediaries between the administrator and the user, and most of the time replaces external intermediaries with internal ones.
  8. Just one smartcard. 1/ If it gets stolen, you'll notice it. 2/ You can't share it with friends and still benefit from it at the same time. So you'll (at least) make sure you get it back.
  9. Just one attempt to build the cards. I mean that the cost of a recall to change just a few security settings would be huge. For instance, if you choose to allow unlimited attempts instead of just three, changing it back to three will cost you a return of all cards to the HelpDesk. This means that most smartcard-based projects try to do things right from the beginning, whereas many password-based projects start with "lower-level" security, try to improve on it, and eventually give up.
All in all, PIN codes and smartcards seem a good choice.
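To put a number on point 6 (a sketch, assuming the PIN is uniformly random): the three-attempt lockout caps an online attacker's odds, however small the 4-digit keyspace looks next to a 12-character password.

```python
# Worst-case odds of an online attacker guessing a random 4-digit PIN
# before the card locks after three failed attempts.
pin_space = 10 ** 4   # PINs 0000..9999
max_attempts = 3      # card locks after the third failure
p_success = max_attempts / pin_space
print(f"online guessing odds: {p_success:.2%}")  # online guessing odds: 0.03%
```

That 0.03% holds no matter how patient the attacker is, which is precisely what an unthrottled password scheme cannot promise.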

Sunday, April 10, 2011

Monthly ITsec Leadership Quotes and Articles: February and March 2011

General IT and ITsec management
The true cost of non-compliance is ZERO* (*If nothing goes wrong), on the Uncommon Sense Security blog.
I Broke All Six Rules for Finding the Right IT Vendor, on the HBR blogs, with insights on "best" practices when choosing an IT vendor.
A Disruptive Solution for Health Care, from the HBR blogs. Though not IT-related, I think this article applies well to IT in the healthcare domain.

Educating the CEO on Mobile Applications, on the Healthcare Info Security blog.
Signature-based antivirus not quite dead, but bigger problems loom, on the inability to maintain signature-based security systems, citing whitelisting, a subject of much interest to me these days.
How Mobile Phones Can Transform Healthcare, also on the HBR blogs.

Personal Development, Career
Chief Security Officer, 21st century, on the Security Recruiter Blog.
4 Skills CISOs need now, on
The Four Personas of the Next-Generation CIO, on the HBR's blogs.

An internal billing scheme for IT risks

After meeting with a crowd of fellow hospital CISOs a few weeks ago, I had a sudden epiphany: the problem of billing IT risks inside a company is not a peripheral one but a primary one, closely related to our inability to put figures on IT risks.

What about the idea of a CISO acting as an internal insurer for the IT service?

Company board: regulates practices, if ever needed.
+----> CEO: checks correct operation.
       +----> CIO: acts as the customer of the insurance.
       +----> CISO: acts as the insurer.

The CISO would propose an offer made of:
  • Expensive insurance for inappropriately acquired or ill-maintained IT assets.
  • Cheaper insurance for IT assets that are acquired and maintained according to a set of constraints.
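A toy sketch of how such an offer could be priced. The loading and surcharge factors, and the function itself, are invented for illustration:

```python
# Hypothetical internal-insurance pricing: a base premium derived from the
# asset's expected annual loss, doubled when the asset was acquired or
# maintained outside the agreed constraints. All factors are illustrative.
def premium(expected_loss, compliant, loading=1.2, surcharge=2.0):
    base = expected_loss * loading
    return base if compliant else base * surcharge

print(premium(1000, compliant=True))   # 1200.0
print(premium(1000, compliant=False))  # 2400.0
```

The point of the surcharge is behavioral: it makes the cost of ignoring the CISO's constraints visible on the CIO's internal bill.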

Saturday, March 12, 2011

Security ROFL 4

Thursday, March 3, 2011

Risk Management: Solving the CISO's Conflicts of Interests

The CISO's Conflicts of Interests
Acting as a CISO is usually a difficult position, because the CISO is asked to act both as a comprehensive risk manager for IT and as an IT security expert.

Depending on whom the CISO reports to, either the risk management side or the IT security side will show most. If the CISO reports to the CIO, he'll spend most of his time auditing, helping write good procedures, good RFPs and so on. If the CISO reports to the company's CRO, he'll spend his time compiling statistics, forging risk estimation methods and so on.

There seems to be a conflict of interests if the CISO is working under the CIO. How could he report about a major risk? How could he at the same time prescribe additional security requirements and be the one to implement them? And there may also be a conflict of interests between the CRO's calculated risks and the CISO's inner sense of what's risky in IT.

An illusory conflict
However, I think that's just an apparent conflict of interests, an illusion. I think there's a continuum of risk management maturity, from "gut feelings" to successful risk assessment and evaluation. And security programs grow in maturity the same way: they start with obvious things, sometimes pushed by a legal constraint, they grow into wide security projects inspired by comprehensive frameworks, and they eventually cover the whole IT perimeter with well-pondered security measures.

As I see it, all three of the CISO, CIO and CRO have the same three goals for IT:
  1. Delivering good services to the customer,
  2. Ensuring the conservation of the (information) assets they're trusted with,
  3. Keeping reputation high and lawsuits low by not sharing those assets with unwanted people.
That is, they all three share the same goals: Availability, Integrity and Confidentiality. They're simply not held accountable for the same parts:
  • The CIO is usually accountable for part 1.
  • The CRO is usually accountable for part 3, especially the "lawsuits" part.
  • The CISO is accountable for some or all of them, depending on whom he reports to, and is especially accountable for the "confidentiality" part because of the required expertise.
So I think there's no real conflict of interests because the interests are, in fact, the same.

The path out of the illusion
The way to make this illusory conflict disappear is an appropriate alignment of expectations between the CIO, the CRO and the CISO.
The CIO will want immediate technical solutions and the CRO will want risk models and ROI-like estimations.
The CISO's job is to make sure they both understand that there's no opposition between the two and that the maturity level of the security process will evolve to the point where they'll both be satisfied by the very same security measures, justified by the very same arguments.

Friday, February 4, 2011

Draft: A Step by Step Security Approach for SMBs

Suppose you're a newly appointed CISO in an SMB. Suppose the position is newly created.

I've had to think about it as I'm currently advising a few fellow CISOs. They're in positions where IT security is just one of their responsibilities, because the structure is not big enough to afford a full-time CISO. The rest of the time, they act as IT project managers, network administrators or even company archivists!

In this position, you may want to have a look at comprehensive guides/norms/references about IT security. But how do you handle an ISO 27002? How do you handle an ISO 27001 when the job is newly created and you can't muscle into decisions? How do you spread awareness that you don't want to do everything in those norms, only the things that matter most for a small structure?

I recommend the following steps, in the approximate order I give. (Experienced security people might be surprised that I don't place the policy and the charter at the top...)

First steps: gather documentation, work alone
You're going to write documents that will follow you every day in your work and that you'll want to have at hand even in the corridors of your workplace. I call them the "Books of..." for this reason.
  1. Write the Book of Activities, in which you'll list every task that IT people consider part of their job. You can do that by striding through IT documents, chatting with all the IT staff and doing a week at the Service Desk (or equivalent).
  2. Write the Book of Services in which you'll list every Activity that's sold to the end-users (whether in speech or money). For each of them, list the description of the population of end-users and the arguments used to sell the service. For instance, "the firewall" is an activity but not a service because it's transparent to end-users. "Mailboxes" is both.
  3. Write the Book of Legal Constraints in which you'll list the precise references to legal texts and their implications for you. You'll have to do it one day, so better do it from the beginning.
  4. Write the Book of Classification, in which you'll note what special kind of information deserves what special kind of treatment. For an SMB, I would suggest considering only legally-constrained classification. For instance, classify medical data or military data, but don't create classification categories such as public, private, secret, top-secret, HR-only or anything so detailed.
  5. Write the Book of Risk Analysis in which you'll create a grid (basically a sieve or even a checklist of threats that might occur to your information system) to think about risks whenever needed. That will be especially useful at the beginning of any IT project that you want to secure. You'll be more confident because you'll follow a long-time established list and people will trust what you say more because it won't have popped out of thin air.
  6. Write the Book of Integration Requirements, which you'll append at the end of any RFP, and in which you'll list all of the technical conditions the chosen solution will have to fulfill. It will also be helpful if your company does some internal development: you'll just have to distribute it to developers. You can build that list by going to the network and system administrators and asking them what went wrong in their past integration experiences. You'll get 90% of it in just 30 minutes of discussion.
  7. Write the Book of Physical Security Requirements, which you'll forward to people in charge of electricity, building access control, fire prevention and so on.
Second steps: set up inventories, work with the admins
You're going to make sure you know your assets and your perimeter, and you're going to make sure that IT people don't get lost in the mess of an IT service. (Tcha-tching!)
For these steps, you'll have to initiate the work but the admins will have to keep up on the long run.

See, that's not just writing docs, that's about having the right information to make the right decision when needed. That's about billing customers with exactitude. That's about enabling statistics on the activities.
  1. Make sure you have an Inventory of Users, which may include all employees, contractors, providers and customers. Once you have that, you'll be able to work on identification and authentication. (Please ensure that an authentication project is never started before the identification is correct. Even if this sounds too stoopid to happen.)
  2. Find or create the Inventory of Network Equipment.
  3. Find or create the Inventory of Endpoint Computers (desktops, laptops, Macs, iPhones, huge display screens...). You'll usually find one or more partial inventories when you arrive in the company. It is part of the CISO's job to make sure this inventory is not partial.
  4. Find or create the Inventory of Printers (and other multimedia hardware).
  5. Find or create the Inventory of Servers.
  6. Find or create the Inventory of Server Software.
  7. Find or create the Inventory of IT-Managed Endpoint Software.
  8. Find or create the Inventory of Providers and related SLAs.
  9. Find or create the Inventory of Network Flows and Zones. Internet, VPNs, mailing service, DMZs...
  10. Find or create the Inventory of AntiVirus Software and AntiSpam Software (or the Inventory of Not-AntiVirused Flows and Not-AntiSpammed Flows if that's quicker).
When you've made sure the inventories listed above are up-to-date, you can say you've reached your cruising speed. You can start auditing, advising, collecting data to prove an intuition, intervening in a decision process, etc.
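Much of the inventory work above boils down to cross-checking partial lists against each other. A small Python sketch (machine names are hypothetical; in practice the lists would come from the directory, DHCP leases, the antivirus console, etc.):

```python
# Spot gaps between two partial inventories by set difference.
# All machine names below are hypothetical examples.
ad_computers = {"pc-001", "pc-002", "lab-raw-01"}   # known to the directory
antivirus_console = {"pc-001", "pc-002", "pc-003"}  # known to the AV server

unprotected = ad_computers - antivirus_console  # managed but no antivirus?
unmanaged = antivirus_console - ad_computers    # protected but not managed?
print(sorted(unprotected))  # ['lab-raw-01']
print(sorted(unmanaged))    # ['pc-003']
```

Every non-empty difference is either a machine to fix or an inventory entry to correct; either way, the inventories converge toward completeness.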

Third steps, cruising speed, work with strategy people
Now, you're aware of your perimeter, you have a good overview of the risks, and you may even formulate a strategy. You'll want a few more tools to formulate it, have it approved and execute it. Listed below are the main tools of the CISO. They're in no particular order, unlike above. You may use them at your convenience.
  • Metrics. Once you've decided a technical point, measure the degree of conformance to this decision. Once you've set your objectives, display your progression publicly.
  • Supervision and logs. Realtime supervision will give teams the power to act before the Service Desk gets a call from users. Logs will help the teams go back over an incident and prevent it from happening again.
  • Redundancy and High Availability. If you've spotted a specifically sensitive point in the information system (like the central user directory, or the billing system), ensure it is redundant and can fail over within minutes. That alone saves your company days of lost work a year and saves the IT service days of cold sweat.
  • Software Updates. A big risk is associated with old versions of software (not just vulnerabilities; I mean bugs, difficulty for the Service Desk to master multiple versions, impossibility to remotely detect the version, longer test phases for the integration of new software/hardware, etc.). So get a piece of software to detect installed software and versions, and to remotely distribute updates.
  • User Charter. The users want to be told what to do, and they need to be told what not to do. Additionally, you gain immediate influence, and respect for the Charter, just by mentioning that someone actually is in charge of IT security. Try to make a new version every year; don't put in too much, but make sure the basics are never forgotten.
  • Information Security Policy. This document affirms, company-wide, the separation of responsibilities in terms of information security, and this is the place where you can get the CEO or the Board to sign that you have the authority to do a specific thing. Usually, it's not made for anything technical; it's made to sort out internal power struggles or budget allocation. It's boring for the CEO, so don't make more than one new version every two years, but make sure you get support on the most difficult human and managerial issues you encounter.
  • Risk Management. A risk is a sum of money or time that the company loses when a threat materializes. When you know the threats and their probability of occurring, you can estimate each risk and treat risks in descending order. That's called risk management (I saved you a book!). A CISO usually gets there by gut feeling, but it's always good to confirm it with analysis, and better still to show that you don't decide things by gut feeling alone. You can see it as a way to help others make decisions about IT security (CIO, CEO, budget planner...).
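The "treat risks in descending order" part can be sketched in a few lines. The threat names, probabilities and costs below are invented for illustration:

```python
# Annualized-loss sketch: risk = probability x impact, ranked descending.
# All threats and figures are illustrative, not real data.
threats = [
    ("laptop theft", 0.30, 2000),         # (name, annual probability, cost in EUR)
    ("ransomware outage", 0.05, 50000),
    ("mail server downtime", 0.50, 1500),
]
ranked = sorted(((p * cost, name) for name, p, cost in threats), reverse=True)
for expected_loss, name in ranked:
    print(f"{name}: {expected_loss:.0f} EUR/year expected loss")
```

Note how the ranking can contradict gut feeling: the rare-but-expensive threat tops the list, which is exactly the kind of argument that helps a CIO or budget planner decide.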

The CISO's Perimeter (is Broader than the CIO's)

Though provocative in some ways, this truth is known to any experienced CISO.

I don't know whether I had better call it the "security perimeter" or the "protection perimeter" or the "oversight perimeter", what I mean is that specific perimeter that surrounds the things the CISO must take into account before establishing a strategy. I don't say that it's deeper than the CIO's, but it's broader.

In that perimeter, you'll find those extra items:
  • Geographical locations of assets and users, which impact the risk of theft. For instance, a laptop is more likely to get stolen than a desktop, which means the CISO has to take into account homes, internet cafés, airports and so on.
  • Electrical capabilities, whether the company's or its providers'. You don't want to hand your data to a provider with poor infrastructure, even if it has great software.
  • On the same note: flood prevention, fire prevention... Note that these may not be the CISO's job and may be addressed by another person or service. But the CISO has to take them into account anyway.
  • Personal applications: you may lock down what users install on their desktops inside the company. You may even lock down what they install on the company laptops they bring home, but you will never lock down what they see in their browsers, on their smartphones, or on their personal computers. For that matter, you won't even lock down what they do inside a legitimate application; some evil arrives in regular PowerPoint files, doesn't it? That's an area where the CIO cares only about deploying and doing more, and where the CISO cares about restricting and segregating...
  • Outsiders. The CIO typically cares about employees and shareholders. Hopefully about stakeholders. But that's the CISO's job to also look at outsiders, whether malevolent, benevolent or indifferent.
  • Barely-IT systems. Mostly embedded-IT systems and objects that have evolved from electronics to computers (phones, cameras, printers). Not all of them are managed by the IT service, but all of them produce or consume information and carry the typical risks. So they're inside the CISO's perimeter.

Thursday, February 3, 2011

Monthly ITsec Leadership Quotes and Articles: January 2011

I'm trying to add short descriptions, plus categories, for easier reading.

Team/Service management:
[EN] Engaging Your Staff in Security Requires Leadership – Not Free Coffee Mugs: a general note with items on how to get a team more involved.
[EN] Managing Nerds: a developed note about the way a nerd's intellect works. I find it quite revealing and I do agree, with but one warning: a typical IT team is not made up only of nerds.
[EN] Facing A Crisis of Leadership: a good article on the risk of having a geek for a CIO and with one central idea that I mightily approve: "An [...] action that focuses on cost-centric or non-value-added improvement initiatives is nonstrategic and deserves scrutiny."
[FR] Herve Schauer Consultants' Newsletter N°77, January 2011: interesting editorial on the ill-understood and ill-applied ISO 27001 certification. Hervé Schauer goes in details about the way ISO 27001 is often thought of as a kind of "security-targeted ISO 9001". It's not just about documenting security, it's mainly about managing it (deciding, acting, spreading responsibility/accountability/ownership).

Log management field:
[EN] Top 10 Things Your Log Management Vendor Won't Tell You: a checklist against log vendor quackery. Good reading if you're planning a logging project or, worse, if someone else is planning it for you.
[EN] 11 Log Resolutions for 2011: I would retitle this as "11 Steps to Initiate Logging". Concrete proposed actions for taking a step into the world of logging.

Personal development:
[EN] 25 Improv Tricks That Will Make You a Better Business Person: a nice, comprehensive list about behaviour at work. From a recruitment site. This one is worth sending to every colleague you have.
[EN] Move your security career forward by looking back: a personal guide to looking back at 2010 and acting for better career development in 2011. Good advice that requires some time to think through. Bookmark it and come back later.

Tuesday, January 4, 2011

Microsoft Office and ODF: Best Practices

Sorry for yet another bookmark post, but knowing how often I hear about this kind of compatibility problem, I thought this article was rare enough to notice: Rob Weir details how to handle ODF (Google Docs, OpenOffice, LibreOffice...) in Microsoft Office, version by version, from Office 2000 to Office 2010.

Monthly ITsec Leadership Quotes and Articles: December 2010 and Happy New Year

Security ROFL 3

- Do you remember Thierry Shaker?
- Sure, the boy who enjoyed frightening every girl out there, always trying to grope! What does he do now?
- He became an airport security guard...