
Thursday, February 20, 2014

Note about the EMR wicked problem (as seen in France)

Note to self about tackling the EMR wicked problem (as seen in France)

At first sight, healthcare actors, and especially practitioners, did not spot the actual complexity of the nationwide EMR program (hospital ERP), which is why they did not see the need for a competitive approach to tackling this wicked problem.

Both the collaborative and the authoritative approaches would have failed for such a program. The collaborative approach would have taken dozens of years. The authoritative approach would have garnered too much opposition to succeed.

As a result, the program is a success, yet many actors have seen the approach as overly complex and costly.

Wednesday, May 1, 2013

You Can't Do IS Security Without Counting

A speech I have given several times, so I might as well put it in writing: on risk management and a CISO's need to rely on quantified data.

When it comes to security, you can do everything, nothing, just anything, or something.


If you do nothing, you are in a suboptimal position. Indeed, if nothing were a satisfactory answer, we would never have heard of IS security, let alone of CISOs.


If you do everything, you need more resources than the very activity you are trying to secure. This is not always obvious to people who do not work in security. In the abstract, you can explain it by saying that securing an activity means controlling whatever lies outside its normal operation, i.e. unexpected actors, unexpected external conditions, unexpected failures, and so on. The perimeter to secure is much larger than that of the activity itself.
An IT example: to secure everything about a website, you would have to control external attackers as well as software vendors. No company, small or large, and no government can claim to secure a website that completely.
A non-IT example: to fully secure a car fleet, you would have to fully control the state of the roads, the other vehicles using them, and the drivers' level of skill and concentration. Here again, no company and no government can claim to do so.
Doing everything is therefore impossible in security. This is often summed up by the hackneyed journalistic phrase "zero risk does not exist".


If you do neither nothing nor everything, you may do something; but without a good way to prioritize and weight security actions, and a good way to correlate them into a coherent whole, that something quickly becomes just anything. This is what companies starting out in IS security tend to do.
IT example: securing the databases.
Non-IT translation: securing the cars' engines. That will prevent very few accidents.
IT example: defining a network perimeter for the company and filtering what comes in.
Non-IT translation: keeping vehicles that are in poor condition or oversized off the highway. That will prevent some accidents, but it will not make highway accidents less serious, it will let drunk drivers through, and it will protect nothing off the highway.
All security actions based on a purely technical analysis of the IS and of the threats it faces suffer from the same flaw: lacking a criterion for comparison, they cannot distinguish what is important from what is trivial.


To do something that is not just anything, you must choose a strategy. Now, a strategy implies objectives, and objectives imply knowing what you want to protect.
Non-IT example: trying to reduce the number of accidents, or of fatal accidents, or traffic jams, or hit-and-runs...
IT example: trying to reduce data theft, or IS-related work interruptions, or data loss, or to increase traceability for legal compliance...


How do you choose among these objectives? Or, if you want to neglect none of them, how do you spread the available actions and resources across them? To do so, you must be able to assign values, comparable with one another, to each of these objectives. This is what we call risk management, developed in many standards such as ISO 27005, EBIOS probably being the most mature. The data the CISO-turned-risk-manager needs in order to secure the IS rationally is the same data other business managers need: budgets, KPIs, company strategy, the list of ongoing projects... This data lets the CISO fit into the company's overall logic and justify the actions undertaken using a vocabulary shared with the rest of the internal decision makers.
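To make "comparable values" concrete, here is a minimal sketch of the idea; it is not taken from ISO 27005 or EBIOS, and the objectives, scales and figures are invented for illustration. Each objective gets a likelihood and a financial impact, and a single likelihood-times-impact score makes the objectives comparable so that actions and budget can be allocated accordingly.

```python
# Minimal sketch of comparable risk scoring across objectives.
# Scales and figures are invented; EBIOS and ISO 27005 define
# their own, much richer assessment scales.

# Each objective: (name, likelihood on a 1-5 scale, impact in EUR)
objectives = [
    ("data theft",                    3, 500_000),
    ("IS-related work interruptions", 4, 120_000),
    ("data loss",                     2, 800_000),
    ("traceability non-compliance",   2, 300_000),
]

# One comparable value per objective: a rough expected-loss proxy.
scored = [(name, likelihood * impact) for name, likelihood, impact in objectives]

# Rank the objectives so actions and budget can be spread accordingly.
for name, score in sorted(scored, key=lambda pair: pair[1], reverse=True):
    print(f"{name:32s} risk score: {score:>9,}")
```

The point is not the arithmetic but the comparability: once every objective carries a value on the same scale, prioritizing security actions becomes a budgeting exercise the other decision makers can follow.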

You cannot do IS security without counting. The CISO must therefore be given the necessary means, and must use them.

Thursday, April 25, 2013

From a Culture of Command to a Culture of Decision

I talk with engineering students and I see comments go by that make me smile. Their first reactions to corporate life... ;-)
"And here I thought meetings were for making decisions!"

So let me come back to a subject dear to my heart, one that underpins my management style: empowering one's staff.

In France, as in most of the Latin world, our management culture is a culture of command. We do not ask what the best solution is, but who the boss is so that he can decide. Centralization, the Bonapartist administration, Jacobinism, and so on have only reinforced this. The boss asks for information, it is brought to him, then he gives his orders. Period. The staff comply, and woe to anyone who questions the orders!

It is a largely suboptimal model. The boss does not explain the reasons behind his orders, cutting his staff off from their capacity for initiative and alignment. An order is open to interpretation, and the simplest, most obvious interpretation is always the one chosen, since a direct order cannot be contradicted; staff are thus cut off from their will to take initiative. The whole equals the minimum of its parts.

A better model is management by decision. If you no longer give orders but directives, if there is no longer one boss who decides but a path toward the decision known to the whole staff, then you get a return on the abilities of the various actors. And the whole approaches the sum of its parts.

This is not easy: you have to drop certain habits of secrecy (after all, a team works for its boss, so it is reasonable to trust it on a large majority of subjects). And it takes a great deal of consistency and probity. For if the team knows the reasons behind a decision, it can challenge them, and you have to admit that, sometimes, it will be right to do so! You must let staff understand the objectives to be reached and show them how the decisions made contribute to meeting those objectives. Only at this price will staff get into the habit of helpful initiative and regularly bring their bosses reports of the good ideas they have had and put to use.

Back to meetings: "And here I thought meetings were for making decisions!" A meeting can very well be a place where decisions are made, provided the objectives and decision criteria are not secret but shared. Let us take off the blinders! If, in some meetings, bosses make no decisions and staff refrain from contributing, it is simply because the decision criteria are the boss's privilege: they would become apparent if he made a decision on the spot, and staff fear betraying the little they know of those criteria (and thus betraying their boss) by revealing their own ideas for meeting the objectives. Moreover, staff often fear being met with an unjustified refusal, which frustrates them and convinces them a little more that decision making is really not for them.

In a productive meeting, and I have seen some, the boss does not merely explain the short-term objectives before gathering his staff's contributions; he explains his medium- and long-term objectives and lists his constraints, in an order of priority that lets staff understand the final decision criteria and buy into the decision.

Wednesday, October 3, 2012

Two Security Policy Writing Tips

Reading Anton Chuvakin's On Nebulous Security Policies reminds me that I wanted to share two very simple, basic, common-sense tips about writing a security policy.

Although standards like ISO 27000 give good guidelines on what to put into a security policy, it would be quite suboptimal to write nothing more and nothing less. You have to adapt the policy to your organization, and you can benefit from doing so.

My first advice is:
Stick to what you already have, or almost have. For instance, if you already back up 95% of your servers, you can write something like "all servers must be backed up". That will help you communicate what you already do and obtain better compliance from your own teams (see the sketch below).
If, on the contrary, you only back up a few servers, don't write that they must all be backed up. That would show that you don't do what you say, and that you write things you have no clue how to put into practice.
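To make this first tip testable, here is a hypothetical sketch that checks actual backup coverage before committing the sentence to the policy; the inventory format and the 90% threshold are invented for the example.

```python
# Hypothetical sketch: test a candidate policy line against reality.
# The server inventory and the 90% threshold are invented.

servers = {
    "web-01": True,       # True = currently backed up
    "web-02": True,
    "db-01": True,
    "legacy-app": False,  # the odd one out
}

coverage = sum(servers.values()) / len(servers)

if coverage >= 0.90:
    # Close enough to reality: the written rule will be credible.
    print('Safe to write: "All servers must be backed up."')
else:
    # Far from reality: writing the rule would show you don't do what you say.
    print(f"Only {coverage:.0%} covered; fix practice before writing the rule.")
```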

My second advice is:
Think of IT problems that actually occur and for which you would welcome backing from top management, and write the corresponding principles into the policy. This way, the top-management-approved policy will support your efforts to address these problems. For instance, if you would like to make clear that the IT department doesn't support hardware that wasn't bought by the IT department itself, the security policy is the right place to state it.

Wednesday, September 5, 2012

Petit manuel anti-dépression à l'usage des administrateurs systèmes et réseaux

It is always good to revisit the classics, so allow me to lift from Gérard Milhaud's site, for those who do not yet know it, this little gem: the

Petit manuel anti-dépression à l'usage des administrateurs systèmes et réseaux

Although it dates from 2004, its substance is still quite relevant. Here is what Gérard Milhaud and Olivier Pagé, IT managers at ESIL and Centrale Marseille, said about it.


Abstract

In a first part, we will clearly identify the current problems of the systems and network administrator's job (hereafter AS&R, from the French), give their main causes and their implications for the role: we will show that the AS&R's malaise and all-too-often depressive state stems essentially from enormous overbooking, tied to a weakness in human resources, which have not kept pace with the spectacular growth in hardware resources and in new services offered to users since the advent of Internet for all in the mid-1990s. We will see how alarming this widespread situation is and how it leads to a very poor use of the skills of the staff in place.
In a second part, we will try to offer solutions, on the technical level as well as on the organizational and human levels, for managing this overbooking as well as possible, by taking it as the activity's main and overriding constraint. Even if they cannot claim to replace an inevitable massive recruitment, we will present "recipes" that make the wait for it more serene, by freeing up time for the fundamental, motivating, core-business tasks that urgency steals from us all too often, generating cohorts of frustrated AS&Rs.



Download the article, presented at the JRES 2001 conference (December 10-14, 2001, Lyon):
NEW FOR 2004: download the presentation, updated for 2004 and given at the "Administrateur Systèmes et Réseaux, un métier qui se transforme" themed day of the Grenoble SARI network:
Sorry about the proprietary .doc and .ppt formats, but that was one of the conference's requirements. We chose to distribute both documents in their original format because the HTML conversion produced by Word and PowerPoint impoverishes them too much, in our view. Until a possible LaTeX conversion, whenever we find five minutes (before 2005, we hope), it is better than nothing.

Happy reading.

Thursday, July 5, 2012

An Agnostic Remark on Linux in the Enterprise

Having worked in many companies, it seems obvious to me that Linux represents a threat. Note that I am not saying that the choice of Linux is the threat, because Linux is not a choice. Linux is imposed by the market. More and more tools use Linux, including those from very respectable companies and business software vendors. What does represent a threat is the lack of managerial reaction to Linux's arrival.

Two main threats exist:
  • Not knowing how to intervene when something breaks, for lack of training.
  • Having so many different Linuxes that you no longer know how to manage them and costs multiply.

These remarks extend to Linux, Mac OS X, the BSDs and the other kinds of Unix.



My recommendation

First, stop the denial and acknowledge that Linux is used within the company. Then, grant a time budget *which can be self-training* to one or more system administrators so they can train on Linux.

Finally, make a number of choices about Linux, aimed at avoiding the chaos of multiplication. For example, choose one or two distributions "supported" by the IT department, and buy a software suite that integrates the Linux machines with the central tools, including Microsoft ones.

Saturday, June 16, 2012

At the Heart of Security: Doing What You Say And Saying What You Do

Security is about deciding what's forbidden and what's allowed, and enforcing those decisions.

There's the technical part, doing what you can to technically enforce the decisions.
And there's the human part, managing things in a way that reduces related uncertainties.

The human part is the most important. You cannot enforce every decision technically. Besides, you have to allow people to switch certain features on and off, e.g. so things keep working outside the company's premises. So you necessarily leave some room for employees to decide by themselves whether they respect the rules or not. And that's the moment when the human factor matters.

The most important tool for the human factor is the security policy. You have to say what you do in terms of security and to do precisely what you say.

If you don't do what you say, you invite people to game the system; they think they can fool you. It may go as far as completely disregarding the policy.
If you don't say what you do, you invite people to rebel and contest the security measures. Additionally, you scramble people's understanding of your security policy, which may lead them to give up trying to respect it.

Monday, June 4, 2012

Epiphany: Free Software = Lower Entry Barrier = Greater Risk of Project Failure


Discussing with a friend led me to this epiphany: the reason why Free/Libre/Open Source software is not used enough in traditional companies is that they invest too little in it. Not in money, but in time, thought and human resources.

Since FLOSS has a very low entry barrier (starting from just zero, up to first-class paid services from companies such as IBM), it tends to attract people and companies that want to invest very little in it. That's why they fail to make great use of it.

Memo: I think there's a business model in just selling GPL software, without any added value, with the argument that the buyer will be more motivated to implement it well ;-)

Saturday, June 11, 2011

Top-Down or Bottom-Up CISOing?

What I thought, in my earlier years, to be a strategic choice now appears to me as a question of the decision maker's personal character: whether to take a top-down or bottom-up approach to solving a complex problem. When you manage large projects, you get to deal with many managers' characters, and that may lead you to work with Single-Minded Top-Down Thinkers (SMTDs) or Single-Minded Bottom-Up Thinkers (SMBUs).

As a CISO, you have to solve complex problems: "get us compliant with that norm", "make sure that application is available 24/7", or even wicked problems such as "make us secure". And you have to deal with many decision makers, in IT and beyond. So you cannot do without prepared tactics to set an SMTD or an SMBU back on track.

SMBUs
If you let an SMBU deal with a problem alone, you'll watch him find a quick solution and apply it. But he'll forget to communicate about it, to document it for later reuse and, most of all, to compare it with the organization's goals and make sure it doesn't hinder some other process of the company.

To deal with SMBUs, I take two actions:
  1. I explain to him what I intend to do with his solution to my problem. Not just the problem itself: I take the time to explain what the goal is and what my next steps are. That way, he includes all of my downstream constraints in his understanding of the problem, and solves both the problem and those constraints.
  2. I also take the time to recapitulate baseline procedures for communicating and documenting the problem/solution, and I make sure he understands that he'll be the one to clean up the mess if something was done improperly.
Once you're accustomed, that doesn't take more than ten minutes.

SMTDs
SMTDs are usually more experienced people who, somewhere in the middle of their professional lives, have lost the idea that they must deliver results, not just thoughts. If you let an SMTD work out a solution to a problem by himself, he'll give you diagrams of his view of the problem, which he believes is complete -or at least contains everything necessary- and he'll link your problem to a family of other problems that he has to solve, and you'll get out of his office with ten times as much work as when you came in.
For instance, if you come in with a question about whether to purchase a new, different piece of hardware, you'll get out with questions -and a few useless answers- about asset management, internal billing and wifi networks. And you'll realize you still have no clue whether the company will buy it or not.

Over the years, I've developed a quick and dirty solution to deal with SMTDs:
  1. Don't go into long-term explanations of why you want to solve the problem; stick to the short term. Anything more would last hours and only deepen the SMTD's scope.
  2. At the beginning of the discussion, do agree with the SMTD on a short list of at most five objectives to be reached by the solution to your problem. This way, you'll be able to confine his thoughts to what you agreed on. That is, you just need to split your problem and the surrounding areas into a five-item list.
  3. If you take the example of the new, different hardware purchase, you just have to reduce the problem, right from the start of the conversation, to the comparison of:
    • prices,
    • main features,
    • delivery,
    • compatibility,
    • immediate satisfied customers.
    There are many other points to be discussed, but you don't want to address them all. Not now, not with the SMTD and not in an all-in-one speech by him.
When you're accustomed to it, you can prepare these five items before talking to the SMTD, and that doesn't cost time, it saves time.

Thursday, March 3, 2011

Risk Management: Solving the CISO's Conflicts of Interests

The CISO's Conflicts of Interests
Being a CISO is usually a difficult position, because the CISO is asked to act both as a comprehensive risk manager for IT and as an IT security expert.

Depending on whom the CISO reports to, either the risk management side or the IT security side will show most. If the CISO reports to the CIO, he'll spend most of his time auditing and helping write good procedures, good RFPs and so on. If he reports to the company's CRO, he'll spend his time compiling statistics, forging risk estimation methods and so on.

There seems to be a conflict of interests if the CISO is working under the CIO. How could he report about a major risk? How could he at the same time prescribe additional security requirements and be the one to implement them? And there may also be a conflict of interests between the CRO's calculated risks and the CISO's inner sense of what's risky in IT.

An illusory conflict
However, I think that's just an apparent conflict of interests, an illusion. I think there's a continuum of risk management maturity from "gut feeling" to successful risk assessment and evaluation. And security programs grow in maturity the same way: they start with obvious things, sometimes pushed by a legal constraint, grow into wide security programs inspired by comprehensive frameworks, and eventually cover the whole IT perimeter with weighted security measures.

As I see it, all three of the CISO, CIO and CRO have the same three goals for IT:
  1. Delivering good services to the customer,
  2. Ensuring the conservation of the (information) assets they're entrusted with,
  3. Keeping reputation high and lawsuits low by not sharing those assets with unwanted people.
That is, they all three share the same goals: Availability, Integrity and Confidentiality. They're simply not held accountable for the same parts:
  • The CIO is usually accountable for part 1.
  • The CRO is usually accountable for part 3, especially the "lawsuits" part.
  • The CISO is accountable for some or all of them, depending on whom he reports to, and is especially accountable for the "confidentiality" part because of the required expertise.
So I think there's no real conflict of interests because the interests are, in fact, the same.

The path out of the illusion
The way to make this illusory conflict disappear is an appropriate alignment of expectations between the CIO, the CRO and the CISO.
The CIO will want immediate technical solutions, and the CRO will want risk models and ROI-like estimations.
The CISO's job is to make sure they both understand that there's no opposition between the two, and that the maturity of the security process will evolve to the point where both are satisfied by the very same security measures, justified by the very same arguments.

Thursday, November 25, 2010

Internet Quarantine: Where IT Differs From Healthcare

As Bruce Schneier takes on the subject of quarantining potential threats away from regular users of the Internet, I think it's interesting to point out a big difference between IT diseases and human diseases: we have the code. We have the specifications for the computer.
For closed source, the software maker has the code, which means that diseases or weaknesses can be fixed far more efficiently than any human condition.
For open source, it's even better: everyone has the code, which means that everyone can look for a solution to a problem.

That's not to say that every Internet user is a qualified IT physician; it's just to underline that comparing IT and healthcare may not be so promising. Compared to medicine, IT professionals can fix a problem in almost no time and at almost no cost. Although there are copyright problems in IT, they are nothing compared to those in the pharmaceutical industry. The full map of the human body and its interactions has yet to be drawn. And, for research, we can spoil many computers, hours of computing, lines of code and reboots without an ethical problem.

Thursday, October 28, 2010

Leadership Learning 2: When a Security Measure Fails, Put it Away!

Just a lesson of common sense: when some security tool or practice is useless because it was ill-designed or because it's broken, or because the rationale behind it has disappeared, it's better to just get rid of it.

Just two examples:

Monday, September 13, 2010

Zero Risk vs. Decision Making

The ability to produce risk assessments, partly innate, partly acquired, is one of the most basic skills of successful managers.

While there is much value in the organized and systematic search for accurate information, the ability to assess risks in a situation where information is incomplete is a most valuable asset.

The first reason is that we often lose precious time gathering and precisely analyzing data when approximate data would be good enough. To put it bluntly: you don't need to know where the arrow will hit to know that an arrow shot roughly in your direction is a bad thing. That's when you need someone with some instinct about risks.

The second reason is that some data can be impossible to gather, or hard enough that gathering it slows down to the point of discouragement. For instance, if you want to pin down the ability to turn on a specific option of a specific security feature in a specific version of a specific piece of software, which you have not yet bought and cannot test, you might get bored before you get the information. That's when you need someone with some culture and work connections, so as to get better access to -or an approximate substitute for- such information.

The third and most important reason is that managers delegate. In this case, management would probably want to delegate data collection, then make its own assessment from it and a decision from that. That is, delegate the boring part and make the obvious decision (taking all the credit, Dilbert-style).

To be more precise, it is commonplace to see IT managers answer a question with another question, asking for more technical details when the staff come for more decision making. In this case, the staff are asking the manager to use his or her ability to fill the gap between the available information and complete information. (And the staff are probably aware of this gap.) So when the manager overlooks this request for a decision and asks for never-ending technical or economical details, data or evidence, the staff feel the manager is worthless. That is: once they have collected all the data, they can make the decision themselves, they're not stupid, thank you!

As a conclusion, I would say that zero risk is, of course, not reachable, but managers should be aware that their staff regularly look to them for risk assessments, not for an indication that they should go and dig deeper to reach zero risk. If they could, they would.

Wednesday, June 23, 2010

Leadership learning 1

Wisdom being the ability to recognize wisdom when you hear or see it, let me put down what I heard from a management old-timer:
  • When somebody asks you about the goals of a project, answer goals.
  • When somebody asks you about the ethics of a project, answer ethics.
  • When somebody asks you about the management of a project, answer management.
  • When somebody asks you about the deadlines of a project, answer deadlines.
  • When somebody asks you about the means or technology of a project, answer means or technology.

Thursday, April 9, 2009

Discussing failures

Excellent post by Michael Krigsman arguing that we should discuss IT project failures and hold them up as examples of what not to follow.
If I were to sum up, here are the five factors that, over the years, I have seen at the root of IT security project failures in organizations (companies and public sector alike). The examples are invented.
  1. "Political" interests priming over "intelligent" choices. Such as buying a solution from one vendor because the salesperson is Mr Bigboss's friend or the vendor is Mr Bigboss's favorite brand.
  2. Bad top-down communication of the goals and objectives, which results in the implementation of a solution that solves problem B instead of problem A. For instance, Mr Bigboss decides that the crucial point is to protect the integrity of the central databases, but doesn't communicate it well and Mr Smallboss implements a solution that protects the confidentiality of the data going out of the central database. (This one seems simple to avoid once explained, but if you look back, I guess you can find a real example pretty easily.)
  3. Relying on or trusting service providers too much, thinking that getting your hands dirty is not necessary. This results in entire sides of the project being forgotten, because the consultants only do what they are asked to do.
  4. Poor theoretical training of the administrators who will use the security solution. They know how to operate it, but they don't understand the principles and misinterpret the results. They are also unable to react when something goes off plan. This is particularly true of "all integrated" products with a shiny graphical interface, where some people only retain the location of buttons and screens, not their actual meaning/behaviour.
  5. Allowing exceptions for top executives of the organization. Once a plan has been decided, everyone must follow it, including them.