[+] Wayc0de's Blog [+]

06/10/11

Security By Obscurity Not So Bad After All?

I’m sure you’ve been taught, as I have, that security through or by obscurity is bad (changing port numbers, removing service banners and so on). I’ve personally always used it as an additional line of defence on my systems.

As a hacker, I know that the more information a system gives me straight off the bat, the easier it’s going to be for me to hack it. Well, the latest news is that this tactic may not be so bad after all.
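Just to illustrate what I mean by information straight off the bat (this is my own throwaway sketch, nothing to do with the article), a banner grab takes a few lines of Python; the host and port below are placeholders, not a real target:

    import socket

    # Toy banner grab: connect to a service and read whatever it announces
    # about itself before we send a single byte. "example.host" and port 22
    # are placeholders, not a real target.
    def grab_banner(host, port, timeout=3.0):
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                return sock.recv(1024).decode(errors="replace").strip()
            except socket.timeout:
                return ""  # the service volunteered nothing

    if __name__ == "__main__":
        print(grab_banner("example.host", 22))
        # A stock OpenSSH install will typically answer with something like
        # "SSH-2.0-OpenSSH_8.9p1", handing over software and version for free.

A default install will happily announce its software and version before you’ve sent anything useful, which is exactly the sort of freebie that banner removal and non-standard ports take away.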

Security by obscurity may not be so bad after all, according to a provocative new research paper that questions long-held security maxims.

Kerckhoffs’ Principle holds that withholding information on how a system works is no security defence. A second accepted principle is that a defender has to defend against all possible attack vectors, whereas the attacker only needs to find one overlooked flaw to be successful, the so-called fortification principle.

However a new research paper from Prof Dusko Pavlovic of Royal Holloway, University of London, applies game theory to the conflict between hackers and security defenders in suggesting system security can be improved by making it difficult for attackers to figure out how their mark works. For example, adding a layer of obfuscation to a software application can make it harder to reverse engineer.

I agree with this, though I wouldn’t exactly call it ground-breaking; I’ve always believed it. It’s not that I’d use obscurity as a singular defence, but I don’t see how it makes a system any less secure. From my perspective it definitely makes one harder to attack.
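To make the “layer of obfuscation” point concrete, here’s a toy sketch of my own (not something from the paper): XOR-encode the string literals in a program so they don’t appear verbatim to anyone running strings or a quick disassembly over it. The key and the URL are made up.

    # Toy string obfuscation: XOR-encode literals so they don't appear
    # verbatim in the shipped program. This adds effort for a reverse
    # engineer; it adds no cryptographic strength.
    KEY = 0x5A  # arbitrary single-byte key, purely illustrative

    def encode(text):
        return bytes(b ^ KEY for b in text.encode())

    def decode(blob):
        return bytes(b ^ KEY for b in blob).decode()

    # Only the encoded form would ship with the program.
    _ENDPOINT = encode("https://internal.example/api")  # made-up string

    if __name__ == "__main__":
        print(_ENDPOINT)          # unreadable bytes
        print(decode(_ENDPOINT))  # recovered only at runtime

It’s trivially reversible once someone sits down to work it out, but it costs them time, which is the whole point of obscurity as a supplementary layer.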

I mean, the way Pavlovic looks at it is rather more complex (he frames it as a game), but it’s the same idea: if the attacker has less information, he’s going to have a harder time. Surely this all goes back to Sun Tzu’s The Art of War.

Pavlovic compares security to a game in which each side has incomplete information. Far from being powerless against attacks, a defender ought to be able to gain an advantage (or at least level the playing field) by examining an attacker’s behaviour and algorithms while disguising defensive moves. At the same time defenders can benefit by giving away as few clues about their defensive posture as possible, an approach that the security by obscurity principle might suggest is futile.
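You can get a feel for that with a toy model (mine, not Pavlovic’s actual formalism): hide a service on one of N possible ports and compare an attacker who knows the port with one who doesn’t.

    import random

    # Toy model (mine, not Pavlovic's formalism): a service hides on one of
    # n_ports ports. An informed attacker needs one probe; a blind attacker
    # probes in random order and needs ~(n_ports + 1) / 2 probes on average.
    def probes_needed(n_ports, informed, rng):
        target = rng.randrange(n_ports)
        if informed:
            return 1
        order = list(range(n_ports))
        rng.shuffle(order)
        return order.index(target) + 1

    if __name__ == "__main__":
        rng = random.Random(0)
        trials = 10_000
        blind = sum(probes_needed(1024, False, rng) for _ in range(trials)) / trials
        print(f"informed: 1 probe, blind: ~{blind:.0f} probes on average")

With 1024 candidate ports the blind attacker averages roughly 512 probes against the informed attacker’s one, and every extra probe is another chance for the defender to notice him.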

Public key encryption works on the basis that making the algorithm used to derive a code secret is useless and that codes, to be secure, need to be complex enough that they can’t be unpicked using a brute force attack. As computer power increases we therefore need to increase the length of an encryption key in order to outstrip the computational power an attacker might have at his disposal. This still holds true for cryptography, as Pavlovic acknowledges, but may not be the case in other scenarios.
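The key-length arithmetic behind that is easy to sketch (my own back-of-the-envelope numbers; the attacker’s guess rate is just an assumption):

    # Back-of-the-envelope: every extra key bit doubles the brute-force
    # search space. The guess rate below is an assumed attacker budget.
    GUESSES_PER_SECOND = 10 ** 12   # hypothetical, very well-funded attacker
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    for bits in (56, 80, 128, 256):
        years = 2 ** bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR
        print(f"{bits:>3}-bit key: ~{years:.2e} years to exhaust the keyspace")

At a trillion guesses a second a 56-bit key falls in about a day, while a 128-bit key takes on the order of 10^19 years, which is why cryptography leans on key length rather than on keeping the algorithm secret.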

Pavlovic argues that an attacker’s logic or programming capabilities, as well as the computing resources at their disposal, might also be limited, suggesting that potential shortcomings in this area can be turned to the advantage of system defenders.

Of course obscurity should never be used in cryptography (that would just be idiotic), but when it comes to defending networks, servers and systems, I’m fine with it as an additional precaution.

I think this might spawn some interesting discussion either way. What do you guys think?

You can read the paper here: Gaming security by obscurity [PDF]
