What the Citadel Can Teach Us about Computer Security

The Citadel, founded in 1842, is one of America’s oldest and most respected private military colleges. Its reputation was built not only on the discipline, academic merit and career achievements of its graduates, but particularly on the high standards of integrity instilled by its Honor Code.

This states that a cadet “will not lie, cheat, or steal, nor tolerate those who do” and will do “the right thing when no one is watching.” Practical observance of this code was so high that for over 150 years student rooms and lockers did not require a lock and key.

Campus security consisted simply of vetting third parties who wanted to access the facilities. Unfortunately, following a spate of petty thefts a decade ago, The Citadel reluctantly installed locks.

The computer industry is a macrocosm of The Citadel. That’s because computer operating systems are designed to treat programmers and users as inherently benign, well-intentioned, ethical parties. All computer languages and operating systems today are essentially based on this permissive model – if an instruction is given, the computer will execute it.

The presumption is that bad actors, if any, have been vetted and kept out at the campus gate. Anyone who has access to the system can make a request or write a new instruction and is by default considered a legitimate party, and all their instructions are immediately executed.

This permissive model dates back to the early days of computing, before the internet, malware, online fraud, nation-state attacks and cyber-terrorism existed. The world has changed and so, like the Citadel, must computing’s foundations.

It’s time to turn the permissive computing model on its head – and here’s how. All applications are designed with legitimate purposes in mind. Once they are placed in production, however, they are not prevented from being used for unintended purposes.

As a result, when compromised by a rogue user or botnet, the application will faithfully execute new and illegal instructions, without even alerting the system console that a ‘hijack’ has occurred.

Many of the worst attacks, such as command injection, process forking, and directory traversal, exploit this permissive model. The same techniques form the basis for lateral movement and advanced persistent threats – which are used to attack financial institutions, retail payment systems and industrial control systems for critical infrastructure.
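As a minimal sketch of how directory traversal abuses that permissiveness, consider a hypothetical file-serving routine (the names `BASE_DIR` and `resolve_naive` are illustrative, not from any real system). The routine faithfully resolves whatever path it is handed, with no notion of an intended profile:

```python
import posixpath

BASE_DIR = "/var/www/uploads"  # hypothetical document root

def resolve_naive(filename: str) -> str:
    # Permissive model: trust the caller and join whatever is given
    return posixpath.normpath(posixpath.join(BASE_DIR, filename))

# A legitimate request stays inside the base directory...
print(resolve_naive("report.txt"))           # /var/www/uploads/report.txt
# ...but a traversal payload walks straight out of it, silently
print(resolve_naive("../../../etc/passwd"))  # /etc/passwd
```

No alarm is raised and no rule is broken: the application is simply doing what it was told, which is exactly the problem.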

Instead, applications should be limited to executing functions within their intended use profiles, making them simply incapable of executing rogue instructions. In other words, the default for a production application would shift from 'anything else is possible if a new instruction is given' to 'nothing else is possible'. In addition, applications should be instrumented to generate alerts that call attention to unusual activity.
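A minimal sketch of that default-deny idea, continuing the hypothetical file-serving example: the class name `ProfiledFileReader` and the `alert` callback (standing in for a system console or SIEM hook) are assumptions for illustration, not any particular product's API.

```python
import posixpath

class ProfiledFileReader:
    """Default-deny sketch: requests outside the intended use profile
    are refused outright, and an alert is raised instead of silence."""

    def __init__(self, base_dir: str, alert) -> None:
        self.base_dir = base_dir
        self.alert = alert  # hypothetical hook to a console or SIEM

    def resolve(self, filename: str) -> str:
        path = posixpath.normpath(posixpath.join(self.base_dir, filename))
        if not path.startswith(self.base_dir + "/"):
            # 'Nothing else is possible': refuse, and speak up
            self.alert(f"blocked request outside profile: {filename!r}")
            raise PermissionError(filename)
        return path

alerts = []
reader = ProfiledFileReader("/var/www/uploads", alerts.append)
print(reader.resolve("report.txt"))  # inside the profile: served
try:
    reader.resolve("../../../etc/passwd")
except PermissionError:
    pass  # the hijack attempt was refused...
print(alerts)  # ...and reported in real time
```

The design choice is the point: the check is not a blacklist of known payloads but a positive definition of what the application is for, so anything unanticipated fails closed and generates intelligence.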

In this way, applications would not only protect themselves but would also start 'speaking for themselves', thereby contributing valuable real-time intelligence. HP estimates that 80% of all security exploits occur at the application layer. Yet our continued use of permissive computing leaves applications totally vulnerable once they are compromised in any way.

It’s time for a change. There’s no honor code in computing.


About the Author

Brian Maccaba is CEO of Waratek, a Java security and virtualization company. His former company, Cognotec, developed AutoDeal, a web-based foreign exchange trading platform that was adopted by more than 60 banks worldwide. London Institutional Investor magazine named him among the top 30 individuals in Europe and Asia who were harnessing the internet to transform the financial services industry.

