Deloitte: would two-factor authentication really have helped?
5 October 2017
By James Romer, EMEA Chief Security Architect at SecureAuth
The misuse of administrator credentials in the recent Deloitte incident is strong affirmation that identity and authentication should be at the centre of enterprise information security discussions. We’re seeing breach after breach leveraging stolen credentials as an attack vector and even skilled information security practitioners are struggling with this threat.
The misuse and abuse of systems protected by usernames and passwords alone is now so common that change is finally taking hold. The security of user identities sits at the top of most organisations’ agendas, and business leaders are far more aware that a username and password alone should not be trusted where anything stronger is available.
The breach revealed that many of Deloitte’s internal and critical systems were public-facing and accessible via remote-desktop. For a “Big Four” accountancy firm, this is obviously an embarrassing misstep. However, the reaction from the security community has been remarkable. It’s been claimed that “best-practice” two-factor authentication could have prevented this breach and indeed the general consensus appears to be that two-factor authentication is the simple answer to our authentication needs.
While two-factor is a step in the right direction for enterprise security, the question remains: is it enough? We don’t think so. Well-documented attacks involving two-factor authentication are out there, and many vanilla second-factor methods are highly susceptible to interception.
The truth of the matter is that two-factor will protect you some, but not all of the time. It can deflect your average, opportunistic hacker, moving them on to easier targets. However, a determined attacker targeting a specific high-value target, such as a global accountancy and professional services firm, might not be so easily put off.
The username-and-password attack vector has not been eliminated; it has simply shifted into another form. Examples such as the RSA SecurID token hack, compromised devices and poorly designed applications all open new avenues of exploitation for the attacker.
Phishing attacks remain one of the most common forms of credential theft. Using information gained from social harvesting and users themselves, cyber criminals can build up comprehensive profiles to go after weak points in security measures.
Malware is now specifically written to go after tokens, even the soft tokens popular on smartphones that offer one-time passwords (OTPs) as the second factor. The problem is that the OTP is really just another password in the user’s possession, albeit one with a limited lifetime. Malware and basic phishing attacks can readily extract the OTP from the user and/or the device.
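To illustrate why a time-limited OTP is still "just another password", here is a minimal sketch of the standard RFC 6238 time-based OTP algorithm in Python (standard library only; the secret and timestamp are illustrative). Any code a user types into a phishing page remains valid for the rest of its 30-second time step, so an attacker who relays it promptly is accepted by the server, which has no way to tell the relay from the genuine user.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HMAC-SHA1 over the current time-step counter."""
    counter = int(at) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Fixed timestamp for illustration: 1_700_000_000 falls 20 seconds into its
# 30-second step, so a code phished at that moment is replayable for 10 more seconds.
secret = b"shared-secret"                            # illustrative only
user_code = totp(secret, 1_700_000_000.0)            # user types this into a phishing page
assert user_code == totp(secret, 1_700_000_005.0)    # attacker relays it 5s later: accepted
```

The limited lifetime narrows the window but does not close it; a real-time phishing proxy comfortably beats a 30-second step.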
Then there’s the user: second-factor challenges add friction, degrading the user experience. This only encourages users to circumvent the security measures, resulting in a solution that is merely an annoyance and often no more secure.
The worrying thing is that organisations are rushing to embrace a security model that is already broken, possibly because of the perception that doing anything more than username and password has to be a good thing.
Moreover, there is a false sense of security around two-factor authentication. It ticks a compliance box for regulations such as GDPR, but as a standalone solution it doesn’t offer enough of a security improvement.
Most importantly, we should not consider two-factor alone to be a security silver bullet. To be clear, two-factor is a great starting point and a huge improvement on usernames and passwords. However, it’s an outdated way of proving identity in a modern cyber-security scenario. The industry as a whole needs to rethink how it defines ‘identities’. We should be taking advantage of the wealth of contextual information that exists today around users, devices and locations. Two-factor authentication is just one piece of an ever-changing jigsaw puzzle.
The security industry should push harder for non-static solutions: ones that can reposition themselves in real time based on the information naturally available as part of an authentication workflow. This unpredictability makes it far harder for an attacker to impersonate a user, even with that user’s credentials in hand.
Layers of access control methods such as device recognition, IP reputation, geo-location, geo-velocity, entitlements, access histories and behavioural biometrics allow the authentication process to become much more secure and flexible. A user identity coming from behind a Tor exit node for example can simply be blocked, redirected or stepped up to a particular type of authentication method. Stopping the risk associated with such activity at source becomes possible, before authentication methods are even offered to the end user.
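As a sketch of how such a layered, pre-authentication decision might work (the signal names, weights and thresholds below are invented for illustration and not taken from any particular product), contextual signals can be scored before any credential is prompted for:

```python
from dataclasses import dataclass

@dataclass
class AuthContext:
    """Hypothetical pre-authentication signals; in a real deployment these
    would come from device fingerprinting, threat-intel feeds and access logs."""
    known_device: bool
    ip_is_tor_exit: bool
    ip_reputation: float      # 0.0 (clean) .. 1.0 (known-bad)
    geo_velocity_kmh: float   # speed implied by distance from the last login

def decide(ctx: AuthContext) -> str:
    """Score the request before any authentication method is offered."""
    if ctx.ip_is_tor_exit:
        return "block"                               # stop the risk at source
    risk = 0.0
    risk += 0.0 if ctx.known_device else 0.4         # unrecognised device
    risk += ctx.ip_reputation * 0.4                  # weighted IP reputation
    risk += 0.3 if ctx.geo_velocity_kmh > 900 else 0.0  # implausible travel speed
    if risk >= 0.5:
        return "step_up"                             # require a stronger factor
    return "allow"                                   # invisible to the legitimate user
```

A clean request from a known device sails through with no extra challenge, while a Tor exit node is blocked before any authentication method is even offered, which is the stopping-the-risk-at-source behaviour described above.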
This approach can even improve the user experience, a holy grail of security solutions, by removing the need for additional authentication challenges. Instead, it uses readily available contextual information to better verify a user’s identity and flag suspicious authentication attempts. This should encourage a broader move away from the broken model of solely two-factor authentication, into a more modern system of invisible and adaptive authentication.
The key here is being able to react not only to threats as they change and adapt (new layers), but also to users’ requirements as they evolve over time. I’m suggesting a modern approach to the authentication challenge: one that allows organisations to stay ahead of the hackers and frees them from relying on the broken two-factor model.