If You’re in IT, Never Forget That You’re Also a Risk Manager

There are a lot of specialties within information technology: some people focus on hardware and networks, while others focus on the development of systems and applications software. Others architect systems or manage information technology units. What they have in common—although it is rarely acknowledged—is that every information technologist, whatever their specialty or skill set, is a risk manager. 

The Oxford dictionary defines “risk” as “a situation involving exposure to danger,” and IT is always facing some form of danger. The dangers may be economic: the cost of defending against a lawsuit, the fines if the company loses that suit, or the cost of a system that does not scale as predicted to meet real-world requirements.

The dangers may be operational: an inability to meet the objectives set for a new product or service. They may be legal: the risk of failing to maintain records required by a law or regulation. They may be reputational, as happened with the app developed for the recent Iowa caucuses, with consequences for both the developer and the app’s users.

We accept risk because we believe that any risk associated with a system or a technology can be controlled, and that any risk we can’t control is less than the potential reward. We believe that this equation, balancing risk and reward, is something that people know instinctively, but we also see many examples demonstrating that instinctive understanding is not enough. The balance must be actively assessed. Consider some examples:

  • We worked with an organization to assess risk in a company being acquired and discovered that its systems had evolved from earlier paper-and-pencil processes. They were capturing data that contained both sensitive personally identifiable information (PII) and personal health information (PHI) that the company never actually used in its operations. They said they collected it to stay consistent with the information they had been collecting since the company’s operations began decades earlier. While they were protecting the PII, they were not compliant with U.S. HIPAA regulations for protecting health information. Moreover, all of the information they were collecting was, to a greater or lesser degree, exposed to hacking, insider misconduct and other risks (including those of the third parties to which some information storage was outsourced). Collecting sensitive personal and health information that they neither required nor used in their business operations represented a case of 100% risk and 0% reward: they gained no value from the information, bore the cost of collecting and storing it, and carried the very real risk of it being stolen. (A minimal sketch of the kind of audit that surfaces this situation follows this list.)
  • In another engagement, an organization went from a purely domestic operation to one with international customers using its online store. The organization didn’t recognize that, because some of its new customers resided in western European countries that are part of the European Union, it had to determine its obligations under the GDPR in terms of data collection and protection, and in terms of granting EU users the data rights defined under the regulation. The organization did not have an in-house general counsel, privacy officer or data protection officer (the last required under the GDPR) and was in danger of being penalized for violating the EU regulation.
  • At a third organization, we found considerable use of what is sometimes called “shadow IT,” in which various units of an organization arrange for third parties to process or store data without notifying the IT department (or anyone else, such as the general counsel). In one case, the department contracting for the service had no idea whether or how the data it was providing to the third party was being protected. An examination of the service’s privacy policy and the terms and conditions listed on its website showed that they granted the third party rights which, when shown to the company’s general counsel, caused her considerable distress.
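
To make the first case concrete, here is a minimal sketch of the kind of data-collection audit that surfaces “100% risk and 0% reward” fields. The field names, the process inventory and the simple set comparison are hypothetical illustrations under assumed data, not a description of any particular organization’s systems.

```python
# Minimal sketch of a data-collection audit: flag fields that carry
# sensitive PII/PHI but are never consumed by any business process.
# All field and process names below are hypothetical.

SENSITIVE_FIELDS = {"ssn", "date_of_birth", "diagnosis_code", "insurance_id"}

# Fields actually collected by the intake system (hypothetical inventory).
collected_fields = {"name", "email", "ssn", "date_of_birth", "diagnosis_code"}

# Fields each business process actually reads (hypothetical mapping).
process_usage = {
    "billing": {"name", "email"},
    "shipping": {"name", "email"},
    "support": {"name", "email"},
}

used_fields = set().union(*process_usage.values())

# Collected, sensitive, and never used: pure risk with no reward.
unused_sensitive = (collected_fields & SENSITIVE_FIELDS) - used_fields

for field in sorted(unused_sensitive):
    print(f"RISK: '{field}' is collected and stored but never used -- "
          f"consider dropping it from intake and purging stored values")
```

In practice the inventory would come from data-mapping interviews or schema scans rather than hard-coded sets, but the question asked is the same: which sensitive data do we hold that nothing in the business actually needs?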

In the three cases cited, there was no conscious decision to accept unreasonable risk. No one took the time to think about risk and its relationship to requirements and rewards. Essentially, dealing with the risk management aspects of IT was no one’s responsibility. 
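
One simple way to assess the risk-and-reward balance actively, rather than instinctively, is the standard annualized loss expectancy calculation (ALE = single loss expectancy x annualized rate of occurrence). The sketch below uses entirely hypothetical figures to compare the expected annual loss of retaining unneeded sensitive data against the value it delivers.

```python
# Hedged sketch: weighing expected annual loss against expected reward
# using the standard ALE formula (ALE = SLE * ARO). All figures are
# hypothetical placeholders, not benchmarks.

def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """Expected yearly loss from one risk scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Scenario: breach of sensitive records the business never uses.
sle = 250_000      # hypothetical cost of one breach (notification, fines, legal)
aro = 0.10         # hypothetical: roughly one such breach every ten years
annual_reward = 0  # the data is never used, so it earns nothing

ale = annualized_loss_expectancy(sle, aro)
net = annual_reward - ale

print(f"Expected annual loss: ${ale:,.0f}")
print(f"Expected annual reward: ${annual_reward:,.0f}")
print(f"Net value of keeping the data: ${net:,.0f} (negative means stop collecting it)")
```

When the net value is negative, as in the first case above, the cheapest mitigation is usually to stop collecting the data at all.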

The risk of not dealing with a risk can itself be serious. For example, recent government studies have indicated that multiple artificial intelligence (AI) systems used for facial recognition suffer from what’s called “implicit bias”: these systems appear to generate far more false positive matches when presented with an image of a person of color than with an image of someone who is white. If this risk is not recognized, the accuracy of these systems in identifying suspects, for instance, may be overestimated, with potentially tragic results.
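
One way such a bias risk can be surfaced is to measure a system’s false positive rate separately for each demographic group it is used on. The sketch below uses made-up match results purely to show the calculation; the group labels and records are hypothetical.

```python
# Sketch: per-group false positive rate for a face-matching system.
# Records below are invented solely to illustrate the computation.
from collections import defaultdict

# Each record: (group, system_said_match, actually_same_person)
results = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)  # comparisons of genuinely different people

for group, predicted_match, same_person in results:
    if not same_person:
        negatives[group] += 1
        if predicted_match:
            false_positives[group] += 1

for group in sorted(negatives):
    fpr = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {fpr:.0%}")
```

A large gap between the per-group rates is exactly the kind of finding that should be escalated before the system is relied on for identification.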

What can be done?
Management cannot ignore this issue, as recognizing and managing risk is one of the key functions of any organization’s senior executives.

For some larger organizations, appointing a chief risk officer (CRO) or a chief information security officer (CISO) may be the right way to assign responsibility for ensuring that risk is properly considered and mitigated. Of course, there is always some residual risk that cannot be removed through process changes, so the risk management equation should also consider the role played by cyber insurance.

For smaller organizations, the cost and difficulty of finding a CISO may make it more appropriate to think in terms of a shared CISO resource. Shared or “virtual” CISOs can provide high-value services to a corporation and bring in additional resources to respond to an incident if necessary. This can be a cost-effective mechanism for gaining the needed skill set while providing management with an objective view of how the organization’s cybersecurity and cyber risk are being managed.

The bottom line is this: those involved in the design, development or maintenance of IT systems at any level, including AI and deep learning frameworks, have to recognize their responsibility for cybersecurity and privacy risks. They should know the steps required to communicate their concerns and to secure specialized advice from legal, compliance, information security or other experts so they can understand and mitigate such risks.
