Enterprises with trust issues - separation of duties for system administrators
This article caught my attention recently - "Laid-off IT worker accused of hacking, crashing Missoula company's servers" - and made me think of a company I worked with around the time of the dot-com bubble burst, where we wrestled with this very issue... almost a decade ago.
Trust is a difficult thing to work out in any size organization as it is as much a human nature problem as it is a technical control...
I've been talking a lot lately about trusting your systems, and your software... but what about your people? Perhaps more than anything else, an organization must be able to trust its administrators.
Those who run the critical systems, manage the authentication and authorization of users and systems, and overall hold the keys to the kingdom often shouldn't be blindly trusted... yet that's exactly what we do.
How do our organizations treat administrators (more specifically, highly privileged users) when they are removed from active duty? How do we deal out privilege? It seems that in large organizations the issue is at least easier to draw a line around than in smaller orgs - but the problems remain.
One of the core issues with privilege, as I've written about lately, is who do you trust, and just how much do you trust them? There are two ways to look at this problem - the first is by saying that ultimately, no one has earned the full trust of the organization... no one is above governance.
The alternative is to acknowledge that many of your best people must be trusted implicitly to perform organizational duties - some beyond their current requirements - when and as needed. There are people in both camps, and I don't purport to know which is more correct from a management of human resources perspective, but I will tell you that I only implicitly trust myself... and even that's questionable.
The "trust no one" camp
Some people believe that in trust there is one absolute - you trust no one. Everyone must be governed and monitored while under the employ of the organization, and on the organization's systems. This school of thinking often comes into play in high-security environments such as military and defense.
Trusting no one requires resources to monitor and govern what your users are doing, when, and how. The side-effect of this school of thinking is complexity. Because John the administrator can't use his account to log into other systems, users often end up holding multiple accounts, and many more people are required to perform complex actions.
In the event of an incident, for example, you will require many more people on that conference call who, between them, have access across the vast array of systems and applications that could have been compromised.
On the positive side of that coin, the trust-no-one camp is generally more resilient to insider and outsider threats. By that I mean that when things explode, the blast radius is generally contained and compartmentalized. For example, when someone breaks into an adjacent system and steals some accounts, those accounts are generally only good on that system, and when an administrator account is broken it generally only gives access to the system it's intended for - and even then you probably don't have access to the data.
The crux of the trust no one mentality is separating duty. Just because you have access to administer a system doesn't necessarily mean you should be able to see the data that is contained therein. I know that sounds tricky - but it's doable... it was even doable back in 2001 so I know you can do it now.
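To make that idea concrete, here's a minimal sketch of the principle - administrative rights and data access granted as entirely separate permissions, so holding one never implies the other. The roles and permission names are hypothetical, not from any particular product:

```python
# Separation of duties, sketched: each role grants an explicit set of
# permissions, and administering a system is a different permission
# from reading the data it holds. All names here are illustrative.

ROLE_PERMISSIONS = {
    "sysadmin":   {"restart_service", "apply_patches", "manage_backups"},
    "dba":        {"tune_database", "run_migrations"},
    "data_owner": {"read_records", "export_records"},
}

def is_allowed(role: str, permission: str) -> bool:
    """An action is allowed only if the role explicitly grants it."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# The sysadmin can patch the server...
print(is_allowed("sysadmin", "apply_patches"))   # True
# ...but cannot read the records the server holds.
print(is_allowed("sysadmin", "read_records"))    # False
```

The key design choice is default-deny: anything not explicitly granted is refused, so an administrator peeking at data fails closed rather than open.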
The "trusted agent" camp
The alternative to trusting no one in your organization - and bringing about a drastic and dramatic increase in complexity - is resigning yourself to the notion that some people in your organization, namely "administrators", are implicitly trusted and hold a higher standing. Fundamentally this camp believes that you should have an implicit trust in the people who oil the gears that make your organization run.
This is human nature after all - we trust our bankers with our money, so why shouldn't you trust the person (or people) whom you've carefully vetted to manage your systems not to peek at your data?
In addition to decreasing the complexity that comes with a higher level of trust in your administrators, the trusted agent camp also benefits from smaller teams that have access to more information, more systems, and more applications, and can get more done in a hurry without the bureaucracy. Fundamentally this is a more productive state for your technology organization to live in.
A higher level of implicit trust doesn't necessarily mean not monitoring people - it just means you're not carefully separating roles across systems, and not making sure that someone who can log into a file-server is barred from seeing the sensitive information on it. This is just easier, and more efficient, from one way of thinking.
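One way to monitor without separating roles is a simple audit trail: trusted agents keep their broad access, but every privileged action is recorded for later review. A sketch of the idea follows - the function and field names are my own, not from any particular tool:

```python
# Illustrative audit trail: access stays broad, but every privileged
# action is recorded with who did it, what, to what, and when.
# All names here are hypothetical.

import datetime

AUDIT_LOG = []

def audited(actor: str, action: str, target: str) -> None:
    """Record a privileged action as it is carried out."""
    AUDIT_LOG.append({
        "when":   datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor":  actor,
        "action": action,
        "target": target,
    })

audited("john", "read", "file-server:/payroll")
audited("john", "restart", "file-server")

# Later, review what the trusted agent actually did.
for entry in AUDIT_LOG:
    print(entry["actor"], entry["action"], entry["target"])
```

The trade-off matches the camp: nothing here prevents John from reading payroll data - it only ensures someone can find out afterward that he did.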
So which camp do you drop your flag in? Would you rather opt for less complexity, or a higher state of assurance? I suspect the answer has a lot to do with your base level of thinking about human beings and how they behave - and whether you work in a high-security environment or not... right?
What do you think? I'd love to hear your opinion.
Cross-posted from Following the White Rabbit