The eleven biggest misconceptions in Cyber Security


This article was published in 2011 and updated in 2017:

“There are many misconceptions in computer and information security that distort the view of reality. But to make meaningful security decisions, it is key to know and truly understand the misconceptions commonly found in computer and information security. In this article, we outline and discuss the misconceptions we think are most common and influential. We divide the misconceptions into three groups, namely social and behavioral misconceptions, technical misconceptions, and false estimations. The aim of the article is to set the stage and provide a better understanding for all questions and answers related to computer and information security.

Rolf Oppliger and Bruno Wildhaber are information security practitioners with many years of academic and practical experience with private and public organizations. All the misconceptions in this article have been encountered many times and in different constellations.” The article is available in the Amazon shop. Don’t have a Kindle? Download a reader for any platform.

Short version:

Rolf Oppliger and Bruno Wildhaber: As security consultants, we are often faced with arguments that are based on misconceptions and may lead to wrong conclusions and bad decision-making (even among peers). Being aware of misconceptions is key to properly understanding computer and information security and to improving the quality of security-related discussions. In this column, we enumerate the (we think) most common misconceptions found in the field. We distinguish between social and behavioral misconceptions, technical and methodical misconceptions, as well as false estimations. A more comprehensive essay is also available.

Social and Behavioral Misconceptions

These misconceptions are related to the way we think about computer and information security.

#1: People care about computer and information security

This mistaken belief leads to technical discussions that are inappropriate most of the time. Security professionals like technical jargon and discussions about their area of expertise, but most others do not. In fact, many people don’t care about technologies unless they affect their daily life. Hence, computer and information security is not an end in itself, and the business must specify the security requirements (not vice versa). A technology only makes sense if its implementation and use satisfy a particular business need. There are many examples of security technologies that are hyped in the media but do not satisfy a particular business need. Examples include quantum cryptography and privacy-enhancing technologies, like anonymous electronic cash. These technologies are unlikely to be widely deployed.

#2: Computer and information security is a purely technical field of study

When experts argue about computer and information security, they often bring in the latest terms (or buzzwords) found in the trade press. To show that they are tech-savvy, they lose themselves in technical jargon. This is unfortunate, because technologies can only be as good as the goals they try to enforce. If these goals are undefined or only vaguely defined, then any technology is fine. Hence, instead of arguing about particular technologies, it is more appropriate to talk about the goals first and to discuss the technologies to enforce them afterwards. Unfortunately, this is seldom done, and people jump directly into technical discussions. In doing so, they throw away opportunities, because some goals can also be achieved with organizational, personnel, legal, economic, and/or financial means (instead of technical ones). The interdisciplinary nature of computer and information security will increase in the future, as there are security challenges that cannot be addressed technically. Take WikiLeaks as an example. Its security exposure cannot be addressed technically, because disgruntled employees will always find ways and means to leak information to the outside world (note that whistle-blowing had existed long before WikiLeaks was founded in 2006).

#3: Computer and information security can be achieved without user involvement

In line with misconception #2, some people think they can leave the user out of the loop and achieve computer and information security without user involvement. This is wishful thinking; security can only be achieved with user involvement. Compare this with using a seat belt in a car. Even if a driver is acoustically warned several times before he or she starts the engine, it is still up to him or her to wear it. The same is true for computer and information security: only if a security measure is used in some meaningful way can it help improve security. Just having a security measure in place does not improve security. It may even make things worse, because it may negatively affect user behavior. This is, for example, the case with many firewalls. They are configured in a way that lets almost all traffic pass through, and yet users think they are protected. Similarly, pop-up dialogs that leave no true choice to the user are inappropriate for achieving a higher degree of user involvement.
Technical and Methodical Misconceptions

These misconceptions are related to the way we (technically or methodically) try to achieve computer and information security.

#4: Computer and information security must start with a formal risk analysis

There is an old proverb saying that one can only manage what one can measure. Applied to risk management, this means that one can only manage risks that one is able to measure. In theory, a risk exists whenever a particular threat can exploit a given vulnerability, and hence a risk can be measured as the product of the probability of occurrence of the threat and the estimated damage. This formula is universally valid and can, in principle, also be applied to information technology. But here we face the problem that either value is difficult to quantify (because one does not have enough empirical data), and hence “normal” risk analysis does not work. So when people talk about risk management and analysis in information technology, they usually refer to some form of ad-hoc threat assessment, estimation, or rating, or even some baseline security or best-practices approach. Parker refers to such an approach as “diligence security by benchmarking against other organizations, using standards, compliance, tradition, good practices, common body of knowledge, guides, and experimentation” (Donn B. Parker, Fighting Computer Crime – A New Framework for Protecting Information, John Wiley & Sons, New York, 1998).
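The classical formula can be sketched in a few lines of Python. The numbers below are purely hypothetical; the column's point is precisely that such inputs are rarely known with any empirical confidence:

```python
# Classical quantitative risk formula: risk = probability of occurrence x estimated damage.
# Both inputs are illustrative guesses -- exactly the empirical data that is
# usually missing in information technology.

def risk(probability_per_year: float, estimated_damage: float) -> float:
    """Annualized risk as the product of threat probability and damage."""
    return probability_per_year * estimated_damage

# Hypothetical scenario: a breach expected once every 10 years, costing 200,000.
exposure = risk(probability_per_year=0.1, estimated_damage=200_000)
print(exposure)  # 20000.0
```

The arithmetic is trivial; the hard part, as the text argues, is justifying the two inputs.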

#5: The return on security investment (ROSI) is useful

In analogy to the return on investment (ROI) in economics, some computer and information security professionals have come up with the notion of a return on security investment (ROSI). The basic idea is that an investment in a security measure makes sense if and only if it yields a ROSI that is larger than the investment. In the last couple of years, many researchers have come up with ways to compute and optimize the ROSI. All of them suffer from the problem mentioned above, namely that it is unknown how to effectively compute the risks (that are to be avoided by the security measures put in place). Hence, all arguments based on a ROSI are inherently weak and lead to a dead end. Security is an investment that does not necessarily have a return that can be quantified in some meaningful way. It is more appropriate to compare it with an insurance that incurs some operational costs.
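One commonly cited variant of the ROSI calculation divides the expected loss avoided, minus the cost of the measure, by that cost. The sketch below uses this textbook form with made-up inputs, and it inherits exactly the weakness the column describes: the risk figures feeding it are guesses.

```python
def rosi(annual_loss_expectancy: float, mitigation_ratio: float, cost: float) -> float:
    """Textbook ROSI: (loss avoided - cost of the measure) / cost of the measure.

    All three inputs are estimates; the result is only as good as they are.
    """
    loss_avoided = annual_loss_expectancy * mitigation_ratio
    return (loss_avoided - cost) / cost

# Hypothetical: 100,000 expected annual loss, a measure that cuts it by 80%
# and costs 50,000 per year.
print(rosi(annual_loss_expectancy=100_000, mitigation_ratio=0.8, cost=50_000))  # 0.6
```

A ROSI of 0.6 would suggest a worthwhile investment, but a small change to the guessed loss expectancy flips the conclusion, which is the column's objection.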

#6: Computer and information security measures must be preventive

Organizations spend most of their security budgets on preventive security measures, like antivirus protection, firewalls, and all kinds of encryption devices. Only small amounts of money are spent on avoidance, deterrence, mitigation, motivation, and awareness, or on detective and corrective security measures. This is in contrast with the understanding that computer and information security cannot be achieved by using only preventive measures, and that detection and response are becoming increasingly important. The more our business world goes online, the more important it is to be able to detect and correct security problems in real time.

#7: The data flow can be controlled and the “need to know” principle works

People have worked on data flow controls for quite a long time. Note, for example, that the terms mandatory access control (MAC) and data leakage prevention (DLP) refer to the same idea. Unfortunately, people have not come up with data flow control technologies that work in practice. Hence, data flows cannot be controlled, and the “need to know” principle does not work either. Security professionals include the principle in all kinds of policy documents, but they do not have a clue about how to actually implement and enforce it. They try to strengthen the rules of firewalls or block the USB ports of mobile devices, but they also know that this is not going to control the flow of data in any meaningful way. Instead, it is more appropriate to work along a “need to withhold” principle, i.e., restricting the use of sensitive data to a minimum and implementing strong complementary security measures.
False Estimations

These misconceptions refer to false estimations and overestimation.

#8: The CIO controls the information infrastructure

Most CIOs argue that they are able to effectively control the information infrastructure they are responsible for – at least this is what their job description says. While this argument may have been true in the former days of (centralized) computing infrastructures, it is no longer true in today’s Internet-centric (decentralized and outsourced) infrastructures. Here, it is increasingly difficult to know the assets, and hence being able to control them remains an illusion. The trend actually goes in the opposite direction, meaning that future CIOs will likely have less control over the infrastructures they are responsible for. If a CIO is not able to directly control the information infrastructure, then he or she may still indirectly take control by specifying appropriate and as-precise-as-possible service level agreements (SLAs). Specifying SLAs, however, is not the same as effectively controlling infrastructures. Hence, CIOs must be aware of this self-limiting fact and take it into consideration.

#9: Isolated audit and penetration tests reveal the state of security

People sometimes try to gain confidence in the security of a computer system by having a group of people try to penetrate it. The usefulness of such isolated penetration tests is questionable (to say the least). Security, threats, and vulnerabilities are open-ended and unbounded. If a test reveals vulnerabilities, then the system cannot be called secure, but it can be improved accordingly. The resulting system can then iteratively be subjected to further tests. If a test does not reveal any vulnerability, then the situation is more involved. In this case, there is no guarantee that another group of people with another set of tools would not find a vulnerability to exploit. This is a lose-lose situation. Hence, isolated audits and penetration testing may be effective in awareness building, but they cannot be used to reveal the state of security.

#10: Certifications help

Whenever a security problem pops up, the IT security industry is keen on establishing a certification program and selling the respective certificates. We have seen this in the realm of products (using, for example, the Common Criteria) and information security management systems (ISMSs) according to ISO/IEC 27001. Certifications generally help make money, but they seldom help improve security. As long as a certificate is not widely deployed, it plays a negligible role, but as soon as it starts getting successful, everybody wants it and companies start selling it at bargain prices. Part of the problem is that the company that sells certificates does not take any real liability for the quality of its work, and hence there is an economic incentive to bring down the price.

#11: Computer and information security is subject to a continuity theorem

Some people argue that it is possible to continuously observe the environment to predict security breaches that will occur in the future. This way of thinking is based on a continuity theorem, saying that security problems evolve slowly but continuously. In today’s highly distributed and networked world, the situation is more involved, and the continuity theorem no longer applies. If somebody finds a vulnerability, then he or she can immediately exploit it on a global scale. Many worm incidents in the past have shown this. The resulting security problems suddenly pop up out of nowhere, typically without any early warning sign.

Outlook

Some of the above-mentioned misconceptions result from adapting a “classic” understanding of security to cyberspace. We know the picture of a medieval fortress with brave knights protecting stone walls (representing a strong perimeter) and enemies trying to assault these walls (e.g., R. Oppliger, Internet Kiosk: Internet security enters the Middle Ages, IEEE Computer, Vol. 28, No. 10, 1995, pp. 100-101). In this picture, the knights and the enemies fight face to face, and strong perimeter protection makes sense. In cyberspace, however, the situation is fundamentally different. The enemy is no longer lurking in front of the perimeter, but may be far away or inside the perimeter. Against such an enemy, perimeter protection is necessary but not sufficient. More subtle attacks and attack vectors must be addressed and protected against. This is not trivial, and hence we expect many new and innovative computer and information security technologies to be developed, implemented, tested, and brought to market in the future. Interesting times lie ahead of us.