
Monday, August 12, 2013

Indicators of the Beginning of the End for an IT Department, Part 2

Following the success of the first article, here is a second on the same theme: the weak signals a CIO should learn to recognize in order to avoid the shipwreck or, more often, the build-up of inertia that leads to the shipwreck.
Today, two new ones.


Iceberg #1: Missing a business turning point. Your company develops a new activity; it's a small branch of the business that represented barely one percent of revenue yesterday. You paid it little attention. But today the market has changed, and that activity accounts for 35% of revenue, with more to come.
Because you hadn't given the activity enough attention, relations with its director are at an all-time low and he is demanding radical changes. New faces or... an external provider.

Iceberg #2: Asset management. Setting up asset management for a given fleet (PCs, smartphones, printers, cars, WiFi access points) cuts its cost by 30% very quickly, and by more over time. Unfortunately, if the asset management gets crusty, stops following recent changes and ends up with only a few dozen percent of its data up to date, you are headed straight for failure there too.
Asset management projects are not application silos but cross-cutting, structuring projects, and they demand extreme attention at their start. They are a genuine investment. Invest either in an open-source, evolvable solution with the staff to match, or in a proprietary solution whose features far exceed your needs. But do not get into middling proprietary solutions: they will *absolutely* prevent your asset management from evolving.

Thursday, August 8, 2013

Indicators of the Beginning of the End for an IT Department

Some signs do not lie: the IT-department ship is heading straight for the iceberg. With time, you learn to recognize some of them. We should nevertheless share them more. Below, a small anthology.

There is that network tool, made in-house by the IT department, that I call SOCKS-star-star. SOCKS proxies were developed to offer proxy-style handling to protocols that were never designed to be proxied. A lovely intention.
In practice, an IT department usually has a proxy for the web and a firewall for everything else. The firewall being heavy to configure, rules are seldom added to it, and the IT department "installs SOCKS" for any user with unusual network needs. Having no time to configure them, the IT department sets up a base configuration that allows every network protocol to every destination (what I call star-star, or *:*), with the promise that it will only be temporary. Except the temporary becomes permanent and the SOCKS-ified users multiply. At that pace, in no time, the proxy and the firewall have become useless and the endpoints genuine virus nests.

There are those tools whose name alone evokes IT-department failure, like Microsoft Access. Starting from a very good intention, complementing the IT department on small local problems, users develop "databases" in Access.
They are developed without development skills => they become unmaintainable, true write-only work. They are developed far from the company's IT frameworks => they don't plug into the IT department's processes, such as backup or documentation. In the best case, they generate gigantic unplanned costs when they must be brought into a more serious framework, that is, redeveloped from scratch. In the worst case, departments become dependent on them and, when they break, having been badly developed and never backed up, they bring those departments' work to a standstill.
Some large companies have even taken the step of banning Access across the entire organization.

A more insidious threat than Access: the tools equivalent to Access. One example: FileMaker Pro. Nobody is wary of it, because it's expensive; you assume whoever buys it will use it reasonably. Not at all: either it's used inside the IT department and becomes a tool like any other, or it's used by well-intentioned end users and the Access story repeats itself. Worse, I should say, because genuine skills on such software are rarer and more expensive than Access skills.

Another vicious tool is Excel's pivot table. What a beautiful feature! It performs calculations, counts and sums over tables of data. You feed it the data, tell it which fields to count or sum, and it works things out... in theory. It's a magnificent tool for research and for one-off data analysis, but in no case for producing figures in production. It's too complex, too tied to its on-screen representation in Excel, too hard to manipulate. As a result, too often you get a figure without being absolutely certain it's the right one. "I get this many occurrences of problem X, but is that counting the duplicates across the two tables or not?" A fine tool in theory, but not for production.

On the managerial side, there is one clear indicator of an iceberg dead ahead: the outsourcing of an application silo (cloud or otherwise). If it's a small silo, it's not very serious; but if it's big and truly a silo, meaning the application as well as its infrastructure, maintenance and evolutions are outsourced, then there is real cause for worry: the IT department has a competitor inside the company.

Another clue that should raise suspicion of a possible iceberg is the accumulation of unhandled tickets concerning a given application. The tickets may be minor, benign or even obsolete, but they project a negative image, and their authors grow impatient, even desperate. It's this kind of accumulation that leads management to throw out the baby with the bathwater over a system that, on the whole, worked rather well.

And then there is the missing business continuity plan. The main clue: everybody talks about the BCP, but nobody knows where to start; everyone takes small initiatives in their own corner, and nobody knows who is responsible for it.

This is just a small sample, and I will no doubt add more.

Wednesday, May 1, 2013

You Cannot Do IS Security Without Counting

A speech I have given several times, so I might as well put it in writing: on risk management and a CISO's need to rely on quantified data.

In security, you can do everything, nothing, anything, or something.


If you do nothing, you are in a suboptimal position. Indeed, if nothing were a satisfactory answer, we would never have heard of IS security, let alone of CISOs.


If you do everything, you need more resources than the very activity you are trying to secure. This is not always obvious to people who don't work in security. In the abstract, you can explain it by saying that securing an activity means controlling everything that lies outside the activity's normal operation, i.e. unplanned actors, unplanned external conditions, unplanned failures, and so on. The perimeter to secure is much larger than the activity itself.
An IS example: to secure everything about a website, you would have to control outside attackers as well as software vendors. No company, small or large, and no government can claim to secure a website that completely.
A non-IS example: to secure a fleet of cars completely, you would have to fully control the state of the roads, the other vehicles using them, and the drivers' skill and concentration. Again, no company and no government can claim that.
So doing everything is not possible in security. Which we often sum up with the hackneyed journalistic formula "zero risk does not exist".


If you do neither nothing nor everything, you can still do something; but without a good way to prioritize and weight security actions, and a good way to correlate those actions into a coherent whole, that something quickly becomes anything. This is what companies just starting out in IS security tend to do.
IS example: securing the databases.
Non-IS translation: securing the cars' engines. It will prevent very few accidents.
IS example: defining a network perimeter for the company and filtering what enters.
Non-IS translation: keeping poorly maintained or oversized vehicles off the highway. It will prevent some accidents, but it won't make highway accidents any less serious, it will let drunk drivers through, and it will protect nothing off the highway.
All the security actions founded on a purely technical analysis of the IS and of the threats against it suffer from the same flaw: they cannot tell the important from the trivial, for lack of a criterion of comparison.


To do something that is not just anything, you must choose a strategy. Now, strategy means objectives, and objectives mean knowing what you want to protect.
Non-IS example: trying to reduce the number of accidents, or of fatal accidents, or congestion, or hit-and-runs...
IS example: trying to reduce data theft, or IS-related work interruptions, or data loss, or to increase traceability in order to comply with the law...


How do you choose among these objectives? Or, if you want to neglect none of them, how do you spread the available actions and resources across them? To do that, you must be able to put values, comparable with one another, on each of these objectives. That is what we call risk management, developed in many standards such as ISO 27005, EBIOS probably being the most mature. The data a CISO-turned-risk-manager needs in order to secure the IS rationally is the same data the other managers of the business need: budgets, KPIs, company strategy, the list of ongoing projects... This data lets the CISO fit into the company's overall logic and justify the actions undertaken in a vocabulary shared with the rest of the internal decision-makers.
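
As a minimal illustration of what "comparable values" means in practice, here is the kind of back-of-the-envelope computation risk management calls for, sketched in Java (the objectives and all figures are mine and purely hypothetical): express each objective as an expected yearly loss, and ranking becomes possible.

import java.util.Comparator;
import java.util.List;

public class RiskComparison {
    // One security objective expressed as an expected yearly loss.
    record Objective(String name, double incidentsPerYear, double costPerIncident) {
        double annualLossExpectancy() { return incidentsPerYear * costPerIncident; }
    }

    public static void main(String[] args) {
        // Hypothetical figures -- in real life they come from budgets, KPIs, incident history.
        List<Objective> objectives = List.of(
                new Objective("Data theft",                    0.5, 200_000),
                new Objective("IS-related work interruptions", 12,    5_000),
                new Objective("Data loss",                     2.0,  30_000));

        // Once every objective carries a comparable (monetary) value, ranking is trivial.
        objectives.stream()
                .sorted(Comparator.comparingDouble(Objective::annualLossExpectancy).reversed())
                .forEach(o -> System.out.printf("%-30s expected loss: %,10.0f / year%n",
                        o.name(), o.annualLossExpectancy()));
    }
}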

You cannot do IS security without counting. The CISO must therefore be given the necessary means, and use them.

Thursday, April 4, 2013

On Client-Side Model Data Validation

Reversing an online Flash application is sometimes not needed if you can access the data inside the model directly.



When you develop an online application that runs client-side with its data server-side, you face the question of where the data should live. For clarity, let's assume you are developing it with the MVC design pattern. Basically, you would want to keep most of the data on the server and only give the client a controller over it. The problem starts when you need high transfer rates and reactivity: you just can't go to the server and back for every tiny piece of data. That's when you need the model split between the server and the client. Either split or duplicated.

What I'm going to say may sound obvious to people used to security, but it may come as a shock to casual application developers: if part of your data model is located on the client, you must validate not only the user input coming through the controller but also the input coming from the client-side model.

You can't trust the client's terminal to keep the data stored in the model safe. So you need to protect it, either through integrity checks or through server-side re-validation.
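
Here is a minimal sketch of the integrity option in Java (my own illustrative code, not any platform's API; the fragment format and the key handling are assumptions): the server tags the model fragment before handing it to the client, and refuses any fragment that comes back altered.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class ModelIntegrity {
    // Key known to the server only; never shipped inside the client.
    private static final byte[] SECRET = "server-side-secret-key".getBytes(StandardCharsets.UTF_8);

    // Computed server-side before the model fragment is sent to the client.
    static String tag(String modelFragment) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
        return HexFormat.of().formatHex(mac.doFinal(modelFragment.getBytes(StandardCharsets.UTF_8)));
    }

    // Checked server-side when the client sends the fragment back.
    static boolean accept(String modelFragment, String clientTag) throws Exception {
        return MessageDigest.isEqual(tag(modelFragment).getBytes(StandardCharsets.UTF_8),
                                     clientTag.getBytes(StandardCharsets.UTF_8)); // constant-time compare
    }

    public static void main(String[] args) throws Exception {
        String fragment = "player=jean;score=120";
        String t = tag(fragment);
        System.out.println(accept(fragment, t));                    // true: untouched
        System.out.println(accept("player=jean;score=999999", t)); // false: tampered with
    }
}

The re-validation option is the same reflex one step further: the server recomputes or re-checks the value itself instead of trusting anything the client held.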

What inspired this article: I have just hacked an online gaming platform into recording whatever score I wished for myself. The same could apply to far more critical applications.

Wednesday, December 5, 2012

Fear of the Cloud: Let's Reinterpret!

A remark for the cloud bloggers who lean on statistical surveys of the "CIO opinion" type.
The "fear of the cloud" needs reinterpreting: when a CIO says cloud security is his first concern, he does not mean that the confidentiality, integrity or availability of the cloud frighten him. He means that the durability of his IT department's habits, services and budgets is threatened by the cloud.
It is a rational fear: like many experienced managers, he mostly fears that things will change too fast and stop being controllable.
So, in my opinion, the figures on the fear of cloud insecurity need some nuance.

Thursday, July 5, 2012

An Agnostic Remark on Linux in the Enterprise

Having worked in many companies, it seems obvious to me that Linux represents a threat. Note that I am not saying the choice of Linux represents a threat, because Linux is not a choice. Linux is imposed by the market. More and more tools use Linux, including tools from very respectable companies and business software vendors. What represents a threat is the lack of managerial reaction to Linux's arrival.

Two main threats exist:
  • The scenario of not knowing how to intervene in case of failure, for lack of training.
  • The scenario of having so many different Linuxes that you no longer know how to manage them and costs multiply.

Ces remarques s’étendent à Linux, à Mac OSX, aux BSD et aux autres sortes d’Unix.



My recommendation

Tout d’abord, cesser le déni et reconnaître que Linux est utilisé au sein de l'entreprise. Ensuite, accorder un crédit de temps *qui peut être de l’autoformation* à un ou plusieurs administrateurs système, pour se former à Linux.

Finally, make a number of choices about Linux aimed at avoiding the chaos of multiplication. For instance, choose one or two distributions "supported" by the IT department, and buy a software suite that integrates the Linux machines with the central tools, Microsoft tools included.

Saturday, June 16, 2012

At the Heart of Security: Doing What You Say And Saying What You Do

Security is about deciding what's forbidden, what's allowed and enforcing these decisions.

There's the technical part, doing what you can to technically enforce the decisions.
And there's the human part, managing things in a way that reduces related uncertainties.

The human part is the most important. You cannot enforce every decision technically. Besides, you have to let people switch certain features on and off, e.g. so that things keep working outside the company's premises. So you inevitably leave some room for employees to decide for themselves whether or not to respect the rules. And that's the moment when the human factor matters.

The most important tool for the human factor is the security policy. You have to say what you do in terms of security and to do precisely what you say.

If you don't do what you say, you invite people to force the system; they think they can fool you. It may go as far as their disregarding the policy completely.
If you don't say what you do, you invite people to rebel and contest the security measures. Additionally, you scramble people's understanding of your security policy, which may lead them to give up trying to respect it.

Tuesday, June 12, 2012

Note on the Future of RSS Feeds

These days, I see a decline in the publication of RSS feeds. They are either absent, barely advertised, or incomplete. This is especially true of mainstream, non-professional sites.


One of the reasons that might lead bloggers and webmasters to remove or neglect RSS feeds is that feeds provide no way to count (unique) followers. A good evolution of RSS would be to allow a simple "declarative" download of RSS feeds.
  • Authentication might be hard to obtain and maintain.
  • Identified-only feeds might make users reluctant to use them.
  • "Declaration" of an identifier of the follower is subject to many uncertainties, but could allow for a better count of unique users, returning users, etc.

Thursday, June 7, 2012

To CISOs: On the LinkedIn Password Leak

A few days ago, a massive leak of LinkedIn accounts and passwords was revealed on the Internet.

I recommend that CISOs spread the word within their company or administration.

Indeed, since LinkedIn uses people's e-mail address as the login, most average users use the same password on the site as for their e-mail account. Those users' e-mail accounts thus become targets, via webmail.
Moreover, users who reuse their passwords are likely to reuse them on every other site where the login is the e-mail address. Those other sites are therefore threatened as well.

Example: an average user, Jean Dupont, works for the Youpeee company. His e-mail address is "jean.dupont@youpeee.fr". His password is "kevin1502", his son's first name plus his birthday. Jean registers on LinkedIn with his e-mail address and, to keep things simple, the same password "kevin1502". He does the same on Facebook. He does the same on Amazon.
Moreover, so that he can telework, his company provides a site, http://webmail.youpeee.fr, where he can read his e-mail.


Today his LinkedIn password leaks onto the Internet. The result:
  • His Facebook and Amazon accounts are threatened.
  • His corporate e-mail is threatened.
  • If the Youpeee company uses other teleworking tools, they may be threatened too.

Tuesday, May 29, 2012

Draft: Stuffing Security into MVC

A long time ago, when I was young and crazy, if I ever was, I used to code components in Java, with the help of the MVC design pattern.

It's a very useful tool and I recently had the occasion to recommend it to a small company. They used it very efficiently to untangle the messy graphical interface of their flagship software. Nowadays I'm mostly concerned with security and, like others of my kind, I suffer from the underlying insecurity of software. It's not just insecure on the surface; it's insecure from the bottom up.


A complex system that works is invariably found to have evolved from a simple system that worked. (he said)
A complex system that is secure is invariably found to have evolved from a simple system that was secure. (let me add).

So I got to asking myself whether it was possible to put some security directly into design patterns.

Say you'd like to stuff some of the following principles into the MVC pattern: identification, authentication, authorization, audit, capacity planning, soft/hard redundancy, load balancing, cyphering, shoulder spying, data integrity, backup. Where would you start?

For each of them, I'd start by putting appropriate data or metadata into the Model, or near it. Then I'd make the View able to display what's necessary. Then I'd update the Controller with adequate intelligence. Simple, no? (A small sketch follows the list below.)


Identification: Identification is based on data that's common with many other pieces of software. Let's say we'll dedicate a new model for it. Then say that neither the view, the model nor the controller will act before the user has been identified.
Authentication: Authentication requires pieces of data, or tokens, that can be verified against the Identification data. But it requires no model of its own. Authentication is, nowadays, a period of time during which the system assumes the user really is whoever they were identified to be. Say that Authentication is a method of the identification component that returns a binary (authenticated or not). Then say that neither the view, the model nor the controller will act if the user is not authenticated.
Authorization: Authorization is a b/w mask (allowed/denied). It's a function of Identification data and the data in our model. So we want to add the "authorization mask" into our model. (That's a huge part of the security job. You can simplify it by using roles, such as in ORBAC.)
Audit: Audit is keeping track of what happens. It can be seen as a two-bit mask of the data in our model, indicating what's going to be logged. 00 means nothing, 01 means read operations, 10 means write operations, 11 means read and write operations. So we want to add the "audit mask" into our model. Then the controller will just write logs of whatever is to be audited.
Capacity planning: Capacity planning is measuring the volume of activity inside the component, as well as the size of the model itself. The numbers of read and write accesses to model data and the number of routine executions of controller methods should be counted. These counts can be put into the model and updated by the controller.
Redundancy and Load Balancing: There's not much data in these beyond the choice of a pattern: active/active, active/passive, multiple instances, etc. The choice of this redundancy pattern can be put inside the model. The controller will have to instantiate components and "control" them. This may require an external Timer, or Sequencer, or Scheduler, whatever you call it, to trigger events on a defined schedule. (One huge advantage of component programming is that you can instantiate components in no time. This part really can be fun.)
Cyphering: Cyphering between components is no big matter if you know the basics about PKI. You just need to make sure that you know which flows of data you want to cypher. This can be seen as a one-bit mask (cyphered/cleartext) of component-to-component flows. This "internal cyphering mask" can be put inside the model. Disclaimer: you DON'T WANT to create a unique instance of an object that handles cyphering for every component!
Shoulder spying: Shoulder spying should be let up to the view itself. However, the choice of what data has to be protected from neighbours should be stored in the model. It can be seen as a "hiding mask" (one bit or more). Then the view chooses how to implement it.
Data integrity: Throughout the history of IT, integrity has meant adding control data to check validity and completeness. That data can be put in the model. The controller will have to take on the job of updating it whenever the data changes, which means the model will have to support atomic transactions.
Backup : Backup definitely requires an external scheduler because of its high cost. You just don't want to replicate data every time it changes. Plus you may want to rewind time later. Backup can be seen as just a serialization of the model. It means the model will have to support serialization.
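
To make the first few of these concrete, here is the tiny sketch promised above, in Java (my own illustrative code, not a standard API; the class names, role strings and key layout are assumptions): the authorization masks and the two-bit audit mask live next to the data in the model, and the controller consults them before acting.

import java.util.*;

public class SecuredMvcSketch {
    static final int AUDIT_READ = 0b01, AUDIT_WRITE = 0b10;  // the two-bit audit mask

    static class ModelEntry {
        Object value;
        Set<String> readRoles  = new HashSet<>();  // authorization mask, per role
        Set<String> writeRoles = new HashSet<>();
        int auditMask = 0b00;                      // 01 = log reads, 10 = log writes, 11 = both
    }

    static class Controller {
        final Map<String, ModelEntry> model = new HashMap<>();

        Object read(String role, String key) {
            ModelEntry e = model.get(key);
            if (e == null || !e.readRoles.contains(role))
                throw new SecurityException("read denied on " + key);   // authorization check
            if ((e.auditMask & AUDIT_READ) != 0)
                System.out.println("AUDIT: " + role + " read " + key);  // audit check
            return e.value;
        }

        void write(String role, String key, Object newValue) {
            ModelEntry e = model.get(key);
            if (e == null || !e.writeRoles.contains(role))
                throw new SecurityException("write denied on " + key);
            if ((e.auditMask & AUDIT_WRITE) != 0)
                System.out.println("AUDIT: " + role + " wrote " + key);
            e.value = newValue;
        }
    }

    public static void main(String[] args) {
        Controller c = new Controller();
        ModelEntry salary = new ModelEntry();
        salary.value = 42_000;
        salary.readRoles.add("HR");
        salary.writeRoles.add("HR");
        salary.auditMask = AUDIT_READ | AUDIT_WRITE;  // log everything on this entry
        c.model.put("salary/jean", salary);

        System.out.println(c.read("HR", "salary/jean"));  // audited, allowed
        c.read("intern", "salary/jean");                  // throws SecurityException
    }
}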

More next time...

Monday, January 2, 2012

Kevin@Exploitability: End of Year Tale, Yet a True Story

This article is a translation of [FR] Petit conte de fin d'année, mais histoire vraie quand même, on Kevin's Exploitability blog. I found it an amazing example of how social engineering things is easier than technically cracking them. The author gave me his permission for this translation:

End of Year Tale, Yet a True Story
In this end-of-year period, let me offer you a little tale that goes beyond IT and security. Or not.

This story happened a few years ago and no name will be given.
So, once upon a time, there was a mall, just like so many others, with a jewellery store inside. The store was shaped like an 'L'. The vertical stroke opened onto the shopping arcade, with the cash desk in the middle, and the bottom stroke held a second desk, dedicated to repairs.

   +-----+
       V |
A        |      V: victim
R        |      C: main desk
C   C    |      R: repair desk
A        |
D        `-----+
E             R|
              R|
   +-----------+    


The weekend before Christmas, crowds packed the arcade and the jewellery store. As often happens, to be up to the mark for these additional customers, a few temps were helping with the sales, dressed in sober white shirts and black trousers.
Hereupon, with the help of an accomplice, I sneaked to the top of the shop, also dressed in a white shirt and black trousers, hidden under a sweater. I soon noticed a customer holding out a repair slip for a jewel. Once out of my sweater, I approached her and asked how I could help. We talked; I checked that her repair slip had been paid and left her waiting patiently, eyeing the new year's collection. Back in my sweater, I went to the bottom desk to ask for the jewel.
They handed me the jewel upon presentation and verification of the repair slip, and I managed to slip discreetly out of the jewellery store on the side opposite the customer. Thanks to the crowd, the customer did not see me leave, and the legitimate vendors did not catch the trick.
So I walked out of the jewellery store with a splendid ruby-adorned necklace in my pocket.
[A little notice for people concerned about my integrity: my honesty being limitless, I went back into the store to return the necklace to its owner, so this is a moral tale, with a happy ending :-)]

This being an IT security blog, what can we say?
  • I played the role of a rogue proxy between a client and a server, in order to intercept authentication credentials.
  • The end-of-year overload prevented detection of the rogue proxy (if you can't erase the logs, just drown them!).
  • The customer did not think of authenticating the server. Think HTTP"S" or typosquatting. A little reverse engineering was enough.
  • Finally, the true server granted trust on the basis of a mere cookie (the repair slip). Two-factor authentication would have been better: repair slip + national ID card, for instance. The slip, apparently in a woman's name, should have raised eyebrows. An authentication cookie for entering and performing operations (asking for the status of the repair), with an additional authentication for the handover of the payload, remains an interesting option.
  • The extraction of the payload was hidden in a most standard data packet: me, a casual customer in a grey sweater, attracting no suspicion or attention from the legitimate vendors. To exfiltrate data, cypher it (an SSL flow)!
Hereupon, happy new year and happy hacking! (And don't rush into jewellery stores to do something silly!)

Tuesday, May 10, 2011

Smartcard and PIN or the Increased Security of Just 4 Digits

The French government is currently mandating what it calls strong authentication for all access to people's medical data: smartcards protected by a PIN code, containing an authority-approved certificate. The PIN code is just 4 digits long, and the question came to me:

Why should I trust 4 little digits with my users' security? (when my password has 12?)

There are many subtle technical points within that question, but the main answer comes down to one key view of the problem: the reduction of possibilities, which helps enforce good processes.

Compared to password-based authentication, smartcards and PIN codes enforce the following:
  1. Just one mechanism to integrate passwords and content on the card: that of the card itself.
  2. Just one mechanism to ask for authentication: a challenge. That removes the danger of "password comparison" mechanisms, where you just have to look into the computer's memory to get the cleartext password.
  3. Just one administrator code capable of resetting the PIN: the SO PIN. That removes the danger of the old, "unused" administrator accounts you find in most company directories.
  4. Just digits in the PIN code, no letters. Though this may seem like a weakness against brute force, it is on the contrary a strength, because it prevents people from setting their own first name, or their son's, as the password.
  5. Additionally, users tend to remember numbers better. As a typical human being, you could name tens of likely alphabetic strings for your own password, but you remember only a few sequences of 4 digits. So once you know one, it's for good.
  6. Just three attempts: you can't brute-force it by the usual means (a quick calculation below makes the point).
  7. Just one logical place to deliver a smartcard: inside the company. You may send a password or even a PIN by mail, but a physical token has to be handed over in person, and the only logical place to do that when you have dozens or thousands of users is inside the company's walls. That reduces the number of intermediaries between the administrator and the user, and most of the time replaces external intermediaries with internal ones.
  8. Just one smartcard. 1/ If it gets stolen, you'll notice. 2/ You can't lend it to friends and still benefit from it at the same time. So you'll (at least) make sure you get it back.
  9. Just one shot at building the cards. I mean that a recall to change just a few security settings would cost a fortune. For instance, if you choose to allow unlimited attempts instead of just three, changing it back to three will cost you a return of all the cards to the HelpDesk. This means that most smartcard-based projects try to do things right from the beginning, whereas many password-based projects start with "lower-level" security, try to improve on it, and eventually give up.
All in all, PIN codes and smartcards seem a good choice.
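
The quick sanity check promised at point 6, sketched in Java (elementary arithmetic, my own illustration): the card-enforced lockout, not the PIN length, is what carries the security.

public class PinOdds {
    public static void main(String[] args) {
        double pinSpace = Math.pow(10, 4);  // 10,000 possible 4-digit PINs
        int attemptsBeforeLock = 3;         // enforced by the card itself, not by software
        double guessChance = attemptsBeforeLock / pinSpace;
        System.out.printf("Chance of guessing the PIN before lockout: %.2f%%%n",
                guessChance * 100);         // 0.03%
    }
}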

Saturday, December 11, 2010

Back on my 2010 security predictions

For an ITsec worker, every year comes with some pieces of satisfaction and a lot of frustration. For instance, you'll hear about rocket-science ITsec techniques and observe that your neighbour's techniques are more snail-like, ostrich-like or dodo-like :-(

I made a few predictions at the beginning of the year about what would happen in the ITsec field; let's see whether they actually happened.
Each item quotes what I wrote back then, followed by today's comment.
  1. Linux systems will become an interesting target for hackers because of Google's OS.
    The free software community will react fast to vulnerabilities. If Google is up to the task, they will integrate the changes very fast and the result will be Linux systems being the most secure. Competitors will finally be forced to take vulnerabilities more seriously. That's the optimistic hypothesis. The pessimistic one is Google not being interested in building better security and not reacting faster than the others.
    Did not happen. There are traces of some attacks on Google's OS, but nothing of the depth of what happens on Windows. (So far.)
  2. Microsoft will (finally!) propose a centralized software installation and update manager, quickly adopted by the big software companies, reducing the number of heterogeneous installation modes, late updates and so on. Something apt-like, in a Microsoft-way, of course.
    It's either this or Microsoft platforms will be progressively abandoned for integrated products such as iPhone or platforms with that functionality such as Linux (servers) or Mac OSX (clients).
    Did not happen. But I hear Symantec is on the subject and it's quite promising.
  3. Viruses will spread to Mac and iPhones up to the same level as that under Windows.
    Clearly did not happen, though there are a few examples of such viruses.
  4. Generalization of new authentication modes including smart cards with microchips, user/machine certificates, fingerprints on laptops, will happen.
    There will be a fashion for it and a lot of blunders will be made in the beginning.
    Happened. I saw many examples of fingerprints being treated as a good means of authentication, which they often are not, and, worst of all, some companies starting to rely on "private questions" to let users reset their own passwords.
  5. There will be reports about IT services clouding the wrong parts of themselves: critical infrastructure, already very profitable services, legally protected information...
    Certainly happened, though those companies will not write a failure report before they have withdrawn, which is no easy thing ^^ The funniest story I heard (nothing written, sorry) is that of a web development company whose managers decided to cloud the infrastructure, thus turning Apache settings, PHP settings and so on into read-only, contractual data.
  6. There will be an overflow of non-browser software using SSL.
    Each of them has its own libraries, and each blunder or vulnerability in the use of SSL will have to be addressed in each of those libraries. This cannot be addressed in reasonable time. For this reason, there will be new products or services that gather all this SSL traffic and forward it in an actually secure way.
    Happened, even Microsoft got into the market.
  7. Social harvesting will rise to unprecedented peaks. Because of poor legal harmonization (or even concern, for that matter!) in various countries, automated social harvesting services will be made available.
    Happened; see Day's comment on the original article: pleaserobme.com, a site that harvests Twitter to guess whose homes are empty and easy to rob. One could also cite personalized ads, or any number of articles on the web.
  8. Governments from developed countries will try to censor, filter and/or index the web. They will fail for two major reasons:
    • The web is too huge for any current government to master it, or even understand it.
    • The free software community will sidestep any technical measure towards censorship.
    I don't know yet whether governments will fail, but the current wikileaks wars certainly are an example.
  9. There will be stories, news and rumours about Google having connections with US intelligence agencies. Google's business is a source of information far too important nowadays for intelligence agencies to neglect. I won't attempt any prediction about Google's reactions.
    Did not happen, as far as I'm aware.
  10. PCI DSS-like standards (simple checklist, minimalist, technical, yet very efficient) will be published about various matters of ITsec. Or maybe I just read too many people interested in that.
    Did not happen, I just read too many people interested in that.

And now a few wishes:
  • That people stop thinking I work on viruses when I say I work on ITsec.
    There's certainly some change, but I can't identify it so far. People seem to start being aware of the "information-side", as opposed to the "technology-side"...
  • That IT managers (non-security) stop thinking there is a fixed list of requirements for security and each of them requires purchasing a "security product" and each of these products works standalone.
    No change.
  • That service managers start budgeting time for service reviews and corrections, not only service implementations.
    No particular change.
  • That Adobe distinguish between PDF designed for review and printing and PDF designed for automated administrative tasks in complex forms. This might prevent a lot of problems to come.
    They didn't, though they reacted by adding sandboxes to the software. Makes me think of old families that had many children to "avoid" child mortality...
  • That my government stops being such a liberty killer about IT.
    Not happening before the next election...
  • [...]
  • That my readers consider the strange situation of using an Excel-controlled Visual Basic script to interact with an AS/400 terminal emulator, written in Java, inside a Citrix session running on a Windows Server "cluster" inside a VMware architecture. (You can find screenshots and photos of the AS/400 on IBM's website, for instance.) That was my only nightmare of these last years. Does virtualization never end?
    I don't know whether my readers did consider this situation. Did you?

Wednesday, November 10, 2010

Please NO MORE Top 10 Security Measures!

I have a habit of collecting web articles about security measures to apply in specific security situations. Those articles usually have a title like "Top 10 security measures for the administration of XYZ" or "Top 20 vulnerabilities in XYZ servers". And I now have a feeling that presenting a security approach that way is a bad thing.

What's good in these articles is that you can use them for what they are: a grid for thinking about your own security. But they are not exhaustive and, for that matter, they may not even suit your particular case.

That's a question of risk management (of course) but, big words aside, you might simply wonder why there are 5, 10 or 20 top measures and not 2, 6 or 11. The measures in these articles are gathered not to provide a level of security, or a level of security maturity, but to make a long, publishable list. Whether you should implement only the top 3 measures, or only measures 2, 4 and 5, is left up to you. Not to mention that you may not implement 2, 4 and 5 in that order, but might very well begin with number 4 or 5.

What these articles lack is an identification of the precise risks addressed by these measures and the location of these measures on a security maturity scale.

Let me add an illustration to this (nasty) comment: friends recently asked me to attempt penetration of a website they wanted to secure. What I found was:
  • an easy access to htpasswd file,
  • obvious passwords that John the Ripper guessed in no time and
  • cleartext credentials to access the database.
If you look at the OWASP list, you'll find the corresponding measures at numbers 6 and 7. Yet every Apache admin knows these put the site at maturity level zero. Furthermore, for that precise site, OWASP's number 1 (code injection) was almost irrelevant.

That's not to say that OWASP's work (or anyone else's) is not good. It is, and it's useful if used correctly. It's just to say that I'd prefer to see more "Beginner level 7 security measures for XYZ servers" or "What to do if XXX is critical for your company: from step 1 to step 4" articles.

Thursday, October 14, 2010

A little thought about computing clouds and physical security

Clouds are not so cloudy that they don't sit on God's green earth.
I was thinking: with so much concentration of data, and data of so much value, what would prevent people from physically breaking into data centers to steal data?

After all, where there are data banks, there will be data hold-ups...

I can think of four reasons why criminals wouldn't stage a hold-up to steal data from a data center:
  1. It's probably easier to steal it online.
  2. It's certainly safer to steal it online.
  3. If you're breaking into a place you've never been, finding what you're looking for may be messier for a data center than for a bank.
  4. The adoption rate of this kind of crime would probably be very slow: burglars are not accustomed to data centers and black hats are not accustomed to hold-up parties. They probably don't share a lot of "good practices".
Yet these barriers do not seem to apply to states and police forces. They can easily break into a data center, they fear no defence from the "victim", they have all the time they need, and they can probably gather people accustomed both to heated situations and to computer hacking.

So I was thinking that data of interest to a State should probably not be stored within its reach.

However, I have no clue how visible a criterion such as the geographical location of data will become to cloud customers in the next few years :-|

Saturday, October 9, 2010

Back on the technology SPOF: practical case

A reader commented in private that the article about the technology SPOF was too abstract and lacked a few simple illustrations. The opposite would have been surprising ^^ The subject seems universal, which is no reason not to give a good example.

So here it is: an example with an "all-in-one" security appliance, as too often used in SMBs. It's mainly sold as a corporate firewall and serves many other purposes.

The first SPOF is the hardware one. When the hardware fails, you've got a problem:

You can resolve that SPOF by adding another piece of hardware:



The second kind of SPOF is the network one. You have the backup hardware, but it's not available:

In this case, it's completely useless... You can solve this problem by making sure that the access to the redundant appliance is also redundant:


The third kind is the configuration SPOF. The backup is ready, working and available, but it's not used because clients are not configured to use it. For instance:

For this, you just have to configure the backup to be used in case of a problem with the master or, failing that, to set up an emergency procedure that switches from a configuration using the master to one using the backup. That should look like:


Finally, and that was the point of my previous post, you've got the technology SPOF, which means that both the master and the backup suffer from the same problem. This could be anything from "disk full" to "corrupted configuration file" through "expired license". In this case, having a backup is no help:

You just have to be sure about the list of the services you provide with that specific technology, and which of those are critical enough to require a reduced/degraded mode:

Thursday, September 2, 2010

Companies beware of SSL decryption in your proxy!

The ubiquitous rise of SSL as a means of confidentiality pushes towards new security problems and new ways to manage it...
I guess we could have figured it out from the very definition of SSL, but it only became clear to me at the beginning of this year. With so many protocols using SSL, with everyday HTTPS, with everyone buying things on the Internet, the SSL protocol spread to ubiquity, and its use went from precise pieces of software and knowledgeable people to every kind of software and mainstream users. From this situation, I saw the explosion of:
  • bad implementations of SSL in all kinds of software,
  • attempts to attack the protocol, new (so to say) man-in-the-middle attacks,
  • bad uses of SSL (weak cypher, self-signed certificates for public use, etc)
  • impatience from top management about the inability of IT services to provide statistics about the SSL traffic of their employees.
For all these reasons, I made the bet 2010 would see the introduction of new tools to manage SSL, make statistics from it, filter it, assess its security and so on. I found that Forefront TMG (the name of MS ISA for 2010) does quite a part of the job by decrypting the SSL flows between the LAN and the Internet. Once decrypted, you can do all the usual with those flows: filtering, statistics, eavesdropping...

My point is: it's not a secure practice yet, and it probably never will be.

My argument has two parts, the first being the legal and compliance point of view. If SSL traffic is encrypted, it is in order not to be read, as dumb as that may sound. The company might not be allowed, under the laws of the country, to listen to employees' encrypted traffic. For instance, in France, I wouldn't be allowed to listen to private connections to online banking sites. Plus, it brings back the threat of the tactless or malevolent administrator.

The second part is technological. SSL is ubiquitous and, to some extent, that's a blessing: it means the client software exhibits a variety of vulnerabilities and weaknesses in its implementations of SSL. For instance, if the SSL traffic flows from three browsers, two media players and ten business applications, then a given vulnerability would probably affect only one of those fifteen pieces of software. The targetability of unproxyfied SSL can grossly be compared to the average of the vulnerabilities of the various pieces of software that use it. The targetability of proxyfied SSL is that of the proxy.

Would you trust ISA more than Firefox? And suppose you have an endpoint tool that examines SSL: if its security features are better than the proxy's, you probably lose those capabilities during the decryption/re-encryption phase of the proxyfication.

Of course, SSL remains a cloudy mystery, threatening to some extent, but I don't think this is the right way out of it. Still, let's have a look at these technologies, because I'm sure we'll have to cope with them anyway.

Wednesday, September 1, 2010

The technology SPOF

When I'm thinking availability, a lot of my time and thought goes into the careful search for SPOFs.

  1. I do look for hardware SPOFs, like a unique machine doing an important job and requiring a backup in case of breakdown. [first step of a BCP]
  2. I also do look for network SPOFs, like making sure the backup has the same network accesses as the master machine, so that it doesn't remain alone, useless. The same is true for firewall accesses and all kinds of filtering that network flows do undergo. [careful execution of the BCP]
  3. I furthermore do look for configuration SPOFs. These consist of the rather funny case when the backup machine is up and reachable on the network, but the clients are not aware it's there and so don't undertake anything with it. This is usually the case when IP addresses are not switched automatically between the master and the backup, or when a configuration screen lets you type in only one server, not two or more. Hopefully, this should not happen with MS Domain Controllers, the way they work (or we would hear about this kind of SPOF more often). Anyway, it's still recurrent in many "small vendor" appliances and in the application world. [very careful execution of the BCP]
  4. Nonetheless, a terrific SPOF remains: the technology SPOF. You have the backup machine, it's reachable, and others are aware that they should communicate with it. But the backup suffers from the same technological incident as the master machine did.
    Say, for example, that the master breaks down because it fails to handle a large quantity of "client data" (or anything else) that has to be processed. The backup will also break down under the same load.
    Let's take another example: the server is an application connecting to a database. The database receives a minor software update that changes something that makes the master server go crazy. The backup takes over the job and goes crazy too.
    Third and last example: you have a nice application server with scheduled tasks doing needed work. One fails and the server goes down, unable to continue its work. At that moment, the backup comes up, launches the same scheduled task, and fails as well... [and that's outside the perimeter of most BCPs]
That's the time when you'd want to have another way to provide your service. That's the moment when you're forced to remember that the machine is not there just to be there, up and working; it's there to help provide a service. And that service may be provided otherwise. That's the time when you enjoy having a well-prepared DR plan, with forethought reduced/degraded modes...

Saturday, July 31, 2010

Enterprise-Size Authentication Is Not Just About Avoiding False Positives

When you're setting up an authentication method for access to an enterprise information system or to the enterprise premises, you shouldn't worry only about false positives. You need to worry about the false negatives too.

Think about the logon screen of an application or website, asking for your username and password. The biggest worry of IT people behind that screen is to make sure the wrong people do not access the system. I think they should also care about the number of times the right people can't access the system either.

It's no big math, but suppose you replaced a logon screen with 0.1% false positives and 1% false negatives (each false negative costing the company $0.50 in lost work time) with a new logon screen with 0.01% false positives and 3% false negatives. Additionally, suppose the logon screen is used by 10,000 employees five times a day, 300 days a year.

The change would represent a loss of:
2% additional false negatives
x $0.50 each time
x 10,000 employees
x 300 days a year
x 5 times a day
which equals $150,000 per year.

It makes sense to acquire the new logon screen (leaving aside its own cost) only if dividing by ten the losses due to intrusions into this system saves you more than $150,000 each year, that is, if the losses due to intrusions currently exceed roughly $170,000 per year.
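
The same arithmetic as a small Java sketch (same figures as above, purely illustrative):

public class LogonScreenCost {
    public static void main(String[] args) {
        double extraFalseNegatives = 0.03 - 0.01; // 2% more legitimate logons rejected
        double costPerRejection = 0.50;           // $ of work time lost each time
        int employees = 10_000, logonsPerDay = 5, daysPerYear = 300;

        double yearlyLoss = extraFalseNegatives * costPerRejection
                * employees * logonsPerDay * daysPerYear;
        System.out.printf("Extra yearly loss from false negatives: $%,.0f%n", yearlyLoss); // $150,000

        // Break-even: intrusion losses L such that 0.9 * L > yearlyLoss
        System.out.printf("Intrusion losses needed to break even: $%,.0f%n",
                yearlyLoss / 0.9); // about $167,000, the "roughly $170,000" above
    }
}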

Thursday, May 13, 2010

Transparency the Next Big Topic? I Don't Think So :-(

Here is a recent Bruce Schneier interview: "If you don't understand the people you'll never understand security, says Schneier". I really appreciate Bruce Schneier for his stick_to_the_facts and be_smart_not_an_automaton approaches.

However, when he says in that interview that the next big topic for security will be transparency, I think that's more wishful thinking than forecast. I can see three main reasons why the move to transparency will be very slow:
  1. Good transparency requires transparency from both the vendor and the buyer. I think the buyer will never see the point of publishing data about (in)security. Even if that's more or less a kind of corporate social responsibility...
  2. Some major players among vendors and some managers in whatever buyer's hierarchy do not want to play the game by the rules. They prefer it the way it is, especially if they have a good ROI/good wages and not too much stress. So, unless there is some interventionism, I think they will do their best to slow the move.
  3. If you're going to publish things transparently, you might think of it as a possible bad advertisement for your company. And the weak point is: most companies, buyers or vendors, do not know where they stand among peers on the criteria of IT security. So they will not want to make the first move and risk publishing what might be seen as bad results.
To my mind, the whole business of IT security transparency is, like most corporate social responsibility issues, a wicked problem. For this reason, it will require good leaders to design new models and, probably, some interventionism from states and big corporate players. That is: it will move slowly (decades, to my mind).