What I Learned About Information Security From Academia



I graduated from university with a Master's degree in Information Security from Royal Holloway and Bedford New College, University of London, in 1997. The course was taught from the Mathematics department under the tutelage of the well-respected cryptographer Fred Piper and the brilliant Dieter Gollmann. At the time the Royal Holloway Information Security Group was one of only two academic teams teaching the subject in Europe, the other being at Delft in the Netherlands. These days there are many. Fred used to tell stories about how in the very early days people would turn up to the department and hang around lectures, seeming to take a particular interest in the students studying cryptography. It was implied they were from GCHQ. 

I had an amazing time, living, studying and drinking. Time of my life. 

When leaving Royal Holloway you are faced with a few choices: continue in academia, work for the security services or, as my mother put it, get a real job. I got a real job. I went to work for a specialist security consultancy that had the market tied up with financial services in the City of London, and with the UK security services like MI5. 

Within a short time of graduating and working in that real job, I learned a few home truths about information security that have stayed with me my whole career. None of this invalidated or diminished anything I learned; it is just that life outside of academia is very different. 

  • MAC, DAC and RBAC

  • User Authentication vs Entity Authentication

  • Key Management, in Particular Key Distribution is Really Hard

  • Attacking Down the Stack Becomes Harder and More Expensive But Sometimes Necessary and Sometimes More Effective

  • End Users Can and Will Often Fuck It Up


MAC, DAC and RBAC

Dieter Gollmann taught an excellent operating system security course, much of which is still available in his book Computer Security. You can find my VAX username (phac105) in an example if you look hard, taken from my dissertation reverse engineering Windows passwords and SMB connectivity. I digress. In this course we learned about security models such as Bell-LaPadula, Clark-Wilson and Biba. That module of the course fundamentally broke down to understanding discretionary access control (DAC) versus mandatory access control (MAC) versus role-based access control (RBAC). 

When designing the security models of systems, I found it very helpful to reference the three schemes. 
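The distinction between the three schemes can be sketched in a few lines of code. This is a minimal illustration, not an implementation of Bell-LaPadula or any real system; all names, levels and roles here are made up for the example.

```python
# DAC: the object's owner decides who may access it, at their discretion.
class DacFile:
    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}  # owner starts with full access

    def grant(self, granter, user, perm):
        if granter != self.owner:
            raise PermissionError("only the owner may grant access")
        self.acl.setdefault(user, set()).add(perm)

    def allowed(self, user, perm):
        return perm in self.acl.get(user, set())


# MAC: a central policy compares security labels; owners cannot override it.
# Bell-LaPadula's "no read up": a subject may read only objects at or below
# its own clearance level. Labels are illustrative.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def mac_can_read(subject_clearance, object_label):
    return LEVELS[subject_clearance] >= LEVELS[object_label]


# RBAC: permissions attach to roles, and users are assigned roles.
ROLE_PERMS = {"trader": {"book_trade"}, "auditor": {"view_ledger"}}
USER_ROLES = {"alice": {"trader"}, "bob": {"auditor"}}

def rbac_allowed(user, perm):
    return any(perm in ROLE_PERMS.get(r, set())
               for r in USER_ROLES.get(user, ()))
```

The point of keeping the three side by side: in DAC the decision sits with the owner, in MAC it sits with the policy, and in RBAC it sits with the role assignment, which is usually what real organisations actually want to manage.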

User Authentication vs Entity Authentication 

User authentication refers to authenticating a human. Entity authentication refers to authenticating a piece of hardware, a process or a piece of software. You often perform mutual entity authentication, such as between a browser and a web server. 

System architects often think they are authenticating a user when they are in fact authenticating an entity. An SSH key only proves that something holding the key was present; unless user authentication was also performed before the key was accessed, you do not know which human used it. Git commits are a good example: a signed commit proves which key signed it, not who was at the keyboard. 
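The gap between the two is easy to see in code. Below is a minimal sketch, not any real protocol: entity authentication as an HMAC challenge-response (anything holding the key passes, whether that is the intended server, a CI job or a stolen laptop), and user authentication as a separate check tied to a specific human. The usernames and password are invented for the example.

```python
import hashlib
import hmac

# Entity authentication: prove possession of a shared key via
# challenge-response. This identifies the KEY HOLDER, not a person.
def entity_respond(shared_key: bytes, challenge: bytes) -> bytes:
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def entity_verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(entity_respond(shared_key, challenge), response)


# User authentication: tie the action to a specific human, e.g. with a
# password (ideally plus a second factor) checked before the key is used.
# Plain SHA-256 is used here for brevity; real systems use a slow,
# salted password hash.
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

def user_verify(username: str, password: bytes) -> bool:
    stored = USERS.get(username)
    candidate = hashlib.sha256(password).hexdigest()
    return stored is not None and hmac.compare_digest(stored, candidate)
```

A system that only ever calls the first pair of functions has authenticated an entity; it has no idea which human, if any, was behind it.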

Key Management, In Particular Key Distribution is Really Hard

Sure, we have clever key agreement schemes like Diffie-Hellman (I was honored to be on a panel with Martin Hellman recently), but if you seed any key exchange with flawed data then everything that follows is also compromised. You see people use SSL and SSH all the time and then send the private key by email, drop it on a shared drive or store it in an unencrypted shared backup for safe keeping. Search for filename:id_rsa or filename:id_dsa on GitHub and you will see what I mean. 
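The elegance of Diffie-Hellman, and the catch, both fit in a few lines. This is a toy sketch with a small Mersenne prime as the modulus, not production parameters; real deployments use vetted groups and, crucially, authenticate the exchanged values. As written below the exchange is unauthenticated, which is exactly the "flawed seed" failure: a man in the middle who substitutes his own public values ends up sharing a key with each side.

```python
import secrets

# Toy parameters for illustration only. 2**127 - 1 is a Mersenne prime;
# real systems use much larger, standardised groups.
P = 2**127 - 1
G = 3

def keypair():
    """Pick a random private exponent and derive the public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each side generates a key pair and publishes only the public value.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Each side combines its own private key with the peer's public value.
# NOTE: nothing here proves WHOSE public value you received -- that is
# the key distribution problem the scheme alone does not solve.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
```

Both sides arrive at the same shared secret without it ever crossing the wire, yet neither knows who it actually shares that secret with unless the public values were authenticated some other way.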

Attacking Down the Stack Becomes Harder and More Expensive But Sometimes Necessary and Sometimes More Effective

Some old Navy duffer came into Royal Holloway and said, “trust me, you would never find our covert channels”. Anyone who says trust me is usually full of shit in my opinion, but Dieter Gollmann explained the principle: if you can't attack the user, attack the operating system. If you can't attack the operating system, attack the hardware. If you can't attack the hardware, attack the wire. Each step down gets incrementally more expensive to pull off but usually also creates a bigger impact. Think about the alleged backdoors in Supermicro chips and the concerns over Huawei 5G networks. Hardware security is the purview of foreign espionage because nation states can afford it, and if successful it has a devastating impact. Attacks on users, like phishing, are cheap. Forcing attackers down the stack makes attacks more expensive to pull off, increasing a system's overall security. 

End Users Can and Will Often Fuck It Up

One of my first assignments after Royal Holloway was as part of a team brought in to replace the internal security team at a global investment bank. I was assigned to an open trading floor and the FX trading team. On my first day I was told by the head trader to fuck off and sit quietly at the end of the row. It was the '90s in London. You have seen The Wolf of Wall Street. It was like that. Eventually, after being the tea boy for a week or so, and daily hangovers, I earned their trust and some of them were open to help. One trader was using Excel files and AOL Instant Messenger between London and Frankfurt to record massive daily trades. I helped with some practical improvements to encrypt the Excel sheets. All too aware of the challenges of key distribution (see above), I flew to Frankfurt to share the password in person with the other banker. To my dismay he wouldn't see me, demanding his secretary take the details instead. I reluctantly gave in, thinking it was better than nothing, but by the time I got back to London the password was proudly posted on the Lotus Notes server for everyone to see. End users can and will often fuck it up.