Part 3: Identifying Cultural Threats and Risks

👋🏼 Welcome back reader!

I know it’s been a while since I wrote the last two parts on culture hacking, so if you want a recap, or you’re new around here, I’d recommend checking out the first two posts in this series before diving in.

🙋🏻‍♀️ What we are going to cover in this post

In today’s post I want to talk about what cultural threats and risks look like; I liken this to threat modelling. For anyone unfamiliar, threat modelling is a continuous process of identifying, communicating, and understanding threats, and ways to mitigate them, in any system that holds value.

I won’t go too far into the specifics of threat modelling because it is a topic that probably deserves a separate blog series, but I highly recommend you check out the Threat Modelling Manifesto if you want to learn more!

🔎 Identifying Cultural Risks

Finding cultural threats and risks can be tricky, not only because you need to find a way to get people to open up to you, but because you also need to look inward at your own values. If you’re the sort of person who values privacy and prefers to share information on a “need to know” basis, you may perceive transparency and open communication as a threat to the business.

If you value teamwork over individual work, you may identify risks in allowing employees to go off and work on projects on their own.

Neither of these positions is wrong, but you do need to be careful not to imprint your own values or assumptions onto the organisation without realising it. So it’s important that you understand:

  • What the values of the organisation actually are;

  • How current values are expressed; and,

  • How much drift there is between the two.

Now, depending on who you are and your role in the business, you’ll have a very different view of what the culture looks like, so the techniques you need to identify cultural threats will be very different too.

🕵🏽 Indicators of Culture

As someone who often works as an external consultant, I find the best places to learn about an organisation’s culture are maturity assessments and risk interviews, because you get to see behind the curtain at how problems are approached, managed, and resolved (if at all).

In smaller businesses, where maturity assessments and risk interviews aren’t as prevalent, you can look at how teams prioritise work and how incidents are handled (and I don’t just mean security incidents, I mean all of them: how do people handle failure?). You will also find hints in the tone and content of things like team meetings and all-hands.

But as we mentioned in the very first post, culture isn’t just about what is written or spoken; look at how people do or don’t embody these cultural pillars. And as mentioned above, we need to make sure we aren’t imprinting our own values onto these situations, and instead try to look at them from different angles and perspectives.

A fitting example of how we can miss threats because we find them acceptable or harmless is contempt culture, where developers and/or security people are highly critical of a language, tool, or platform, and the people who chose it are criticised for that choice.

The other time it is easy to miss threats is when you find yourself higher up in the organisational structure, where you may not face as much internal pressure to perform, or you have the privilege to say no without feeling like it may impact your career – which again can speak VOLUMES about where your organisation is maturity-wise.

There is probably a whole post here about how managers need to ensure that juniors, mids, and even seniors are given the language and ability to push back safely. But the reality is that if teams are feeling the crunch, and they know that bypassing or rushing through checks and balances will save time, there is a high likelihood your secure development lifecycle isn’t actually enforcing the security requirements you think it is. That’s for another time, though.

Presuming that you’ve had a chat with people in the company and have a general idea of how the business works and how risk and opportunity are addressed (even informally), you can start to develop a threat model. A cultural threat model.

🧛🏼 Threat Modelling

So now that we’re at the threat models, we can briefly touch on some of the methodologies used when dealing with technology. While we won’t be using any of these specific methodologies, some of them may inspire you to put together a methodology for threat modelling culture:

  • STRIDE: Introduced by Microsoft in 1999, it stands for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. It is still used today to help answer, “what can go wrong in the system we’re working on?”

  • PASTA: Short for the Process for Attack Simulation and Threat Analysis, PASTA is a risk-centric methodology.

  • Trike: A methodology that treats threat models as a risk management tool, built around a requirements model.

There are also a bunch of others, and people who do risk modelling day to day will have their own blended models and favourite methods. Fundamentally though, cultural threat modelling is much the same: we are trying to identify the non-obvious and make it observable and tangible, in a way that allows a business to manage and mitigate the threat.

One way of doing this is using the Political, Emotional, Psychological, Logistical (PEPL) threat model. One of the main differences between this and some of the more common threat modelling methodologies is that the threat affects the desired security outcome rather than any particular system.

But like other threat modelling activities, the desired outcome is a brainstorm rather than an end-to-end data flow.
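
To make that concrete, here is a minimal sketch of what a cultural threat register could look like as a data structure. The PEPL categories come from the model above, but the field names and the example entry are entirely hypothetical, just one way to make the output of a brainstorm observable and tangible:

```python
# Minimal sketch of a cultural threat register using the PEPL categories.
# The structure and the example entry are illustrative, not a standard.
from dataclasses import dataclass, field
from enum import Enum

class PEPL(Enum):
    POLITICAL = "political"
    EMOTIONAL = "emotional"
    PSYCHOLOGICAL = "psychological"
    LOGISTICAL = "logistical"

@dataclass
class CulturalThreat:
    category: PEPL
    description: str                  # the non-obvious thing, made observable
    affected_outcome: str             # the desired security outcome at risk
    indicators: list[str] = field(default_factory=list)
    possible_mitigations: list[str] = field(default_factory=list)

# A hypothetical entry from a brainstorm session:
register = [
    CulturalThreat(
        category=PEPL.POLITICAL,
        description="Turf war between security and platform teams",
        affected_outcome="Timely patching of shared infrastructure",
        indicators=["tickets bounced between teams", "duplicate tooling"],
        possible_mitigations=["shared OKRs", "joint on-call rotation"],
    ),
]
```

Even a tiny register like this turns gut feelings into something a business can prioritise, assign owners to, and revisit.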

👀 What can you look at when assessing each threat?

👑 Political

The most common cultural weakness seen by consultants and organisational employees alike. Political weaknesses may take the form of turf wars, where people, teams, departments, or whole businesses compete over areas of bureaucratic control, resources, or the advancement of goals and objectives.

When a turf war begins to include security operations, there can be a breakdown in overall security posture, which introduces new technical risk. It can also lead to departmental silos, or to a security team that has no real power to meet the overall business goals.

Another common political threat is vendor, product, or provider bias (I’ll refer to it as “supplier bias”). These sorts of biases are often seen when someone is trying to protect their role in a company by being the “expert”. In other cases, it can be an organisation playing favourites in return for favourable arrangements and partnerships, which can also prove to be problematic.

Supplier bias can also feed into contempt culture, which I’ve seen strengthen departmental silos, and result in more… shit flinging. If an organisation or team swears by a particular supplier or vocalises their uninformed dislike of another, security often takes the back seat.

You’ll often find that security teams deny themselves the best solution because of personal feelings. The outcome doesn’t only affect the organisation, but also the people inside it, who don’t develop new skills and innovations because they are supporting vendors out of a sense of loyalty (or dislike) rather than solid business analysis.

Keep in mind though, there is some nuance that can’t quite be captured in this discussion, because sometimes there is a political reason to avoid working with or supporting a particular supplier and that’s just as important to consider.

Where these things happen, you may need to look to more ethical solutions or providers. In some cases you may be able to find a one-to-one match for what you need; in other cases you may need to pay more for less, or revisit a problem to ensure it can still be solved under the new circumstances.

Other times you may need to step up and become the solution you want to see in the world, and sometimes you’ll have no option but to go with what is currently available. But where possible, always make your voice heard and uphold your values.

😌 Emotional

Emotional threats can be difficult to address because they aren’t based in rational, logical decision making. In security, and tech in general, we often try to stuff emotions away, opting for a more clinical view of the world, which frankly is unhealthy and not always productive. However, emotions can become a cultural threat when they influence decisions without proper acknowledgement, closing them off from closer analysis and improvement.

COVID-19 has brought about a lot of these emotion-based weaknesses in the form of fear, uncertainty, and doubt. Phishing is a great example of an emotional exploit. Imagine you’re an employee at $BUSINESS working on $COOLPROJECT when you get an email; it might say something like:

  • Your password on $IMPORTANTSERVICE is about to expire and you need to change it as soon as possible otherwise all your work will be lost. 😱

  • You need to pay this invoice for $AMOUNT so that the CEO can do something important. 💰

  • Your $IMPORTANTPACKAGE has been delayed and might not get there when you expect, but by clicking the link you can find out more! 📦

Typed out here, they seem obvious, but the goal of phishing is to exploit you at your emotionally weakest. A knowledgeable adversary might be tracking your company’s progress via social media, waiting to see when you are raising your next round of funding, to exploit the uncertainty and doubt those moments bring.

Other adversaries will cast a wide net, hoping to catch someone before their 9 am coffee, when they are just trying to get their day started. The reality is that all of these examples have exploited, and continue to exploit, the emotional weaknesses we all have, and depending on the company culture, fear, uncertainty, and doubt may be what turns a security event into a security incident.

This is because if users are taught that making a mistake (clicking the link, opening the file) will likely lead to punishment, they are more likely to bury the initial mistake, potentially giving threat actors a better foothold in your systems.

Emotional threats are why playbooks, processes, and procedures continue to be important during incident response, disaster recovery, and in some cases business continuity. They are also why taking a quantitative, risk-based approach to implementing controls can be helpful.
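
To give a feel for what “quantitative” can mean here, below is a minimal sketch using the classic annualised loss expectancy (ALE) formula. The phishing scenario and every number in it are made up for illustration:

```python
# Minimal sketch of a quantitative risk calculation using the classic
# annualised loss expectancy (ALE) formula: ALE = SLE * ARO.
# Every figure below is made up for illustration, not real data.

def annualised_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = single loss expectancy (SLE) * annual rate of occurrence (ARO)."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical phishing scenario: each successful phish costs ~$25,000 to
# contain, and without awareness training we expect ~2 successes per year.
ale_without_training = annualised_loss_expectancy(25_000, 2.0)   # $50,000

# Suppose training halves the rate of occurrence and costs $15,000 per year.
ale_with_training = annualised_loss_expectancy(25_000, 1.0)      # $25,000
training_cost = 15_000

# The control pays for itself if the ALE reduction exceeds its cost.
net_benefit = ale_without_training - ale_with_training - training_cost
print(f"Net annual benefit of training: ${net_benefit:,.0f}")    # $10,000
```

The point isn’t precision; it’s that even a rough number gives you something to discuss that isn’t driven purely by fear.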

The idea is not to deny your emotions when making decisions, but to ensure they are acknowledged and addressed in a meaningful way, so they can be analysed (by yourself, or with the help of a supportive co-worker or team) and improved upon (where appropriate).

🧠 Psychological

Psychological threats often come from a similar place to emotional ones, and the two can play off each other, compounding the risk. These weaknesses, though, are grounded in cognitive functions and processes and the people behind them. Basically: decisions, and why people make them.

Psychological weakness can be introduced when security strategies or outcomes don’t account for the differences in the way people process information, interact with technology, learn, or gain new professional skills.

This can be caused by generational, educational, geographical, or cultural differences. Some organisations get around this by setting high requirements for jobs, or by not hiring people who don’t fit their ideal of a candidate, generally claiming a poor cultural fit (instead of doing some introspection and trying to work out whether their culture is just not very good).

The other cause of this weakness is when employees aren’t interviewed or made part of the general conversation, and security programs are therefore built around the biases of the people creating them.

Of course though, you can’t be expected to interview every single employee of a business, but it is critical to ensure that you explore your own privilege and biases when creating a security program.

🏎 Logistical

Logistical threats can be identified when a security strategy can’t be incorporated into existing infrastructure. This can take the form of implementing a strong password policy when the operating system or applications are incapable of handling that requirement.

Another example is trying to implement controls or changes that are incompatible with the wider organisation, for example disabling Server Message Block (SMB) version 1.0 when there are Windows XP boxes running business-critical systems, or wanting to enforce SMB signing business-wide when you also happen to have Trumpf lasers.

Both examples come from security imposing requirements that conflict with, or aren’t compatible with, existing systems, and I’d be inclined to ask whether a risk assessment was done, and whether there is another way this can be handled. For example, by segregating those Trumpf lasers onto a tightly controlled subnet.
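
As a small illustration of “assess before you enforce”, here is a minimal sketch that checks whether the SMBv1 server is explicitly disabled on a Windows host, so a rollout can be informed by data rather than assumption. It assumes Python running on Windows, and the registry value may simply be absent, in which case the OS default for that Windows version applies:

```python
# Minimal sketch: audit whether the SMBv1 server is explicitly disabled on a
# Windows host before enforcing a "disable SMBv1" policy fleet-wide.
# Assumes Python on Windows; if the value is absent, the OS default applies.
import winreg

SMB_PARAMS = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

def smb1_server_state() -> str:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SMB_PARAMS) as key:
            value, _ = winreg.QueryValueEx(key, "SMB1")
            return "disabled" if value == 0 else "enabled"
    except FileNotFoundError:
        return "not configured (OS default applies)"

if __name__ == "__main__":
    print(f"SMBv1 server: {smb1_server_state()}")
```

Run something like this across the fleet first, and the conversation becomes a risk assessment rather than an outage post-mortem.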

The reality is that exception processes exist to align strategy with the facts on the ground. But it’s important to ensure that security strategy doesn’t get lost in the fog of prescriptive compliance regimes or industry best practice.

Another logistical weakness can be in business outcomes themselves. If security is treated as an outcome separate from other organisational goals, it forces employees to pick between two outcomes where one is a clear loser.

For example, creating an organisational goal that rewards the fastest deploy times possible, and then forcing developers through an arduous secure development process, may see developers cut corners in the name of those fast deploy times, which will likely result in company-wide kudos at the expense of a secure deliverable.

But the inverse is also true: creating organisational goals that reward going through the arduous secure development process will have a massive impact on deploy times and time to delivery, which may result in other problems, such as loss of clients.

So the reality is that nothing is ever 100% secure, and nothing is ever 100% efficient. What matters is striking a productive balance between the various forces (including cultural ones) as the organisation works to meet its objectives. This is arguably why it’s just as important to call out opportunities as well as risks.

Taking the last example, of the deployment process versus security, there is very likely an opportunity to automate parts of that secure development process, which should reduce the load on developers while still allowing the business to meet its delivery dates.
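
As one hedged sketch of what that automation could look like: a small pre-deploy gate that runs a dependency audit and fails the build, rather than relying on a manual review step. It assumes a Python codebase with the open-source pip-audit tool installed on the build agent:

```python
# Minimal sketch of automating one secure-development check as a CI gate.
# Assumes a Python codebase and that pip-audit is installed on the agent;
# pip-audit exits non-zero when it finds known-vulnerable dependencies.
import subprocess
import sys

def dependency_audit() -> int:
    """Run pip-audit against the current environment, return its exit code."""
    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    if result.returncode != 0:
        print("Vulnerable dependencies found:")
        print(result.stdout or result.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(dependency_audit())
```

Wire something like that into the pipeline and the secure step costs developers seconds, not an afternoon of paperwork.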

Of course, then you need to focus the lens inwards, because even when people are aligned in their goal of security, you may still find all of these threats present within a wider information security team, where developers may be in a turf war with the IT team who configure their laptops.

🙌🏻 Summary

  • Speak to people who work in the organisation, focusing on the lower levels of the business because they are the most likely to be impacted;

  • Watch how people interact with each other in meetings, public spaces and Slack;

  • The fundamentals of threat modelling can be used to threat model an organisation’s attack surface:
    • Political: How do politics influence decisions made in the business? Is the company loyal to one vendor to a fault? How are resources allocated? Do you need to meet with separate teams because of major personality clashes?

    • Emotional: Is the security program driven by the 24-hour cyber news cycle, or does it take a risk-based approach? Do people focus on risk, opportunity, or a mix of both? How do they speak about security – is the language emotional, or clinical in nature? Are people’s emotional responses challenged and discussed openly?

    • Psychological: Is the business inclusive? When talking about topics such as accessibility, does the business say “we have no one who has that problem”? How does the business take feedback?

    • Logistical: Does the business try to enforce security best practice even when it’s ill-fitting? Does the security team engage with asset owners to perform risk assessments? Does security get treated as a separate outcome, or is it ingrained in the organisational culture?