Zero Trust and Security and Trust (no, not one too many ‘Trusts’)

Thoughts and some words from my book…

I’m privileged to have been around at the very start of this trust and computation thing. For which, read: I’m kind of glad I’m not starting out now; there’s so very much interesting stuff out there. I’m not so old that I have been around as long as cyber-security, but it sometimes seems like it. Let’s see, though. For a long, long time I’ve been somewhat annoying (and annoyed) about the way in which cyber-security has been seen and practiced.

For one thing, we often blame ‘regular people’ for security lapses. This is at best disingenuous, when we can’t even create systems that could spot the lapse, because if we could, it wouldn’t have got to the person in the first place. Don’t believe me? We have great spam filters and phishing filters, but they end up passing on the things they can’t spot (because they’re hard to spot!) to people to try to figure out. This isn’t a case of a PBKAC (Problem Between Keyboard and Chair); it’s a case of a badly designed system in the first place. Think that’s harsh? Try being the person in the organization who gets blamed for a security lapse. There’s much more about this sort of thing in the field of Usable Security, but I won’t jump into it here because a DuckDuckGo search on the term will get you plenty. I may even have written some of it.

There are a few other things, though. For starters, if something gets ‘behind’ all your protections, the protections themselves generally don’t tell you. In other words, when they fail, they fail silently. The end result is that anyone or anything in the system with nefarious intent is pretty much in the clear unless you do some hard looking yourself. Some logs might help (and indeed do), but you need to look at them and see the issues. Moreover, when something is inside, the permissions are quite open, especially if you have been relying on the perimeter to protect you. Christian Jensen, a colleague of mine from DTU in Denmark, likens this to a medieval castle.
It has strong walls but the odd chink in the mortar that might just let a person inside if your sentries are not paying attention. Once inside, that person could dress like a soldier, or a peasant, or whatever, and get to places you don’t want them to with impunity.

Perhaps even worse than all that, you’re in an arms race. You put protections in, the bad folks try to break them, developing bigger and better tools to do so and selling them to the people who want to break in. And so on. The thing about arms races is that they never end (unless, of course, the ‘evil axis’ hegemony you were trying to address fails, but I don’t think we’re in that situation here, and in truth we probably never were). Meanwhile, the bad people only have to succeed once, whilst the security folks have to succeed every single time there is an attack. Not great odds.

Finally, this: we keep putting in more security and saying, basically, “it’s okay, it’s secure, you can trust it”, which is exactly orthogonal to the way in which trust actually works.

Zero Trust Security is one of those interesting and obvious ways of looking at something sideways and coming out with a thing that actually works. In the context of formalizing trust, I first talked about it in my PhD and identified it as something rather special. In the context of cyber security, John Kindervag at Forrester applied the term to making enterprise systems more secure. It’s actually not rocket science, and it goes something like this: your systems are already being attacked. I’ve often started talks about trust and security with the basic statement that if you have a system connected to the Internet, either your system is compromised now or it will be soon. To put it another way, Zero Trust assumes that there is definitely an attacker in the system. This is not an unreasonable assumption. It also means that, regardless of whether you are “inside” your own “protected” system or not, you can’t assume that it is trustworthy.
The real assumption it makes is that there is an imposter inside your network. It could be a piece of hardware with spyware on it, or an Internet of Things device with poor security (that would be most of them, then). It could be a BYOD (Bring Your Own Device) tool like someone’s iPhone or Android device, which you can’t examine to see whether it is compromised. It could be a bad actor (human). Many of these things could happen, and more, as the things I wrote just now suggest. And of course, spotting them is hard. Scary, right?

So what to do? Trust nothing until you have to, and even then assign least privileges. This means: figure out who is asking to be able to do something and whether they are who they say they are (identity), then figure out if they are allowed to access the thing (asset) they are asking for (access). If they are, ensure that this is all they get to do (monitor) and that the person who owns the asset knows what is going on (report).

It’s not entirely new – least privilege and ‘need to know’ and so on have been around for a very long time. What is new is the acknowledgement that the systems we think are trustworthy actually are not, and that everything inside your own network has to be treated the same as everything outside it – with Zero Trust – until it has proved it is who it says it is, and that it has access to the things it asks for. You can combine these in Zero Trust with things like periodic re-challenging – that means asking the thing or person to go through that proof again. Or you could monitor what is happening and react to things that are not allowed based on the access privileges assigned to that person or thing. Basically, you create a model of everything in the network, assign privileges to these things only when asked for, and monitor to make sure those privileges are being adhered to.
In addition, you could monitor on an ongoing basis to ensure that, even when the privileges are being adhered to, the person or thing exercising them is still the same person or thing that was granted them.
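The identity, access, monitor, and report steps above can be sketched in a few lines of Python. This is a toy model built on my own assumptions – the names (`PolicyEngine`, `authorize`, the sample policy table) are invented for illustration and don’t come from any real Zero Trust product; real deployments do this with identity providers, policy engines, and enforcement points rather than a single class.

```python
# Hypothetical sketch of a Zero Trust decision loop. Every request must
# prove identity, gets only the exact (least-privilege) action it asked
# for, and every decision is logged so the asset owner can see it.

class PolicyEngine:
    """Grants least-privilege access per request and records every decision."""

    def __init__(self, policies, recheck_interval=300):
        # policies: {(identity, asset): set of permitted actions}
        self.policies = policies
        # Periodic re-challenge: a proof of identity goes stale after this
        # many seconds and the requester must prove itself again.
        self.recheck_interval = recheck_interval
        self.audit_log = []        # report: owner-visible record of decisions
        self.last_verified = {}    # identity -> time identity was last proven

    def authorize(self, identity, identity_proven, asset, action, now=0.0):
        # 1. Identity: being "inside" the network proves nothing by itself.
        if identity_proven:
            self.last_verified[identity] = now
        verified_at = self.last_verified.get(identity)
        if verified_at is None or now - verified_at > self.recheck_interval:
            return self._decide(identity, asset, action, False,
                                "re-challenge required")
        # 2. Access: least privilege -- only the exact action requested.
        allowed = action in self.policies.get((identity, asset), set())
        reason = "granted by policy" if allowed else "no grant for this action"
        return self._decide(identity, asset, action, allowed, reason)

    def _decide(self, identity, asset, action, granted, reason):
        # 3 & 4. Monitor and report: every decision, either way, is recorded.
        self.audit_log.append((identity, asset, action, granted, reason))
        return granted


engine = PolicyEngine({("alice", "payroll-db"): {"read"}}, recheck_interval=300)
engine.authorize("alice", True, "payroll-db", "read", now=0)     # granted
engine.authorize("alice", False, "payroll-db", "write", now=10)  # denied: least privilege
engine.authorize("alice", False, "payroll-db", "read", now=400)  # denied: proof is stale
```

Note that denial is the default everywhere: an unknown identity, a stale proof, or an action outside the policy table all fall through to “no”, and the audit log captures denials as well as grants.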

As it happens, back in 2012 I was giving an invited talk at the Information Security for South Africa (ISSA) conference in which I basically said things like “your systems are already compromised, or if not, will be soon. The thing that will work is trust,” before heading into a description of an approach that looked remarkably like Zero Trust. Whilst I wasn’t booed out of the room, I was told by a professor in the audience that (more or less) I had just told him that the things he had been doing for his entire career were wrong. This may be true, but I don’t think he meant it as a compliment. Such is life.
