When organisations stop listening all hell can break loose

Our resident Security Wizard, Stuart Hill, tells it how it is

04-02-2021

Let me tell you about Mensa...

Do you have an IQ of 148 or better? If you do, you can apply to take the standard IQ test to join MENSA, the organisation for brainiacs above the 98th percentile. Unfortunately, MENSA appears to have a security IQ of less than 40 when it comes to data protection. In the news this week it has lost two board directors through resignation, and there is much fallout over its decisions not to encrypt all of its data, and to encrypt passwords rather than hash them.

Looking closely at the story it appears MENSA had been warned on numerous occasions that its security was lacking but had done nothing about it.

Most organisations would like to think that their culture is such that staff can talk openly and honestly to the business about their concerns, recommend strategy and improvement, and if they spot a security issue, report it and have it acted upon. Again, as a business owner, one might hope that our business wouldn’t need a whistle-blower protection policy but be naturally supportive of the knowledge we have and use it productively, with a big thank-you to the employee. 

However, time and again we see organisations that are warned of a security issue who ignore it and then go on to suffer incredible public pain when their inaction is exposed.

Hash your passwords, don't encrypt them

MENSA should not be encrypting passwords. Passwords should be hashed. The difference is that encryption is reversible: the user's actual password is kept in the database, just in scrambled form. You can tell which organisations store your password this way because they will send you a password reminder by email with your password in it! In a world where pretty much anybody can read an email, this isn't a good policy. Passwords should be hashed.

Here’s how hashing works: you give me your password, I create a hash from it, then I bin the password and store only the hash in the database. Now I don’t know what your password is, only the hash generated from it. When you want to log in again, you give me your password, I create a hash with it and compare it to the hash in the database; if the two match, you must have given me the right password. There is also some salting applied (I kid you not), and perhaps some other techniques, to secure it even more.
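The store-and-verify flow described above can be sketched in a few lines of Python using only the standard library. This is a minimal illustration, not MENSA's (or anyone's) actual implementation; the function names, the 16-byte salt, and the 200,000 PBKDF2 iterations are my own illustrative choices.

```python
import hashlib
import hmac
import os


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a hash from the password, then discard the password itself."""
    salt = os.urandom(16)  # a fresh random salt per user defeats precomputed tables
    # PBKDF2 repeats the hash many times, making each guess expensive for an attacker.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest  # only (salt, hash) are stored in the database


def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash from the supplied password and compare the results."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, stored)


salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Notice that nothing here can be reversed to recover the password, and there is no way to email it back to the user — which is exactly why a "here is your password" reminder email is a red flag.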

The accusation this week, and the cause of the two directors resigning, was that although MENSA was warned about the password weaknesses, nothing was done to fix them, and this month it suffered a cyber-attack and breach.

Storing credit card information?

Probably even more worrying is that it is being reported that the MENSA database was storing credit card information. This is something that should never be done! The Payment Card Industry (PCI) has a standard to be applied to credit card usage, known as PCI-DSS, the DSS being the data security standard. Almost the very first statement in the requirements for taking online payments is that ‘credit card numbers should never be stored’! Urgent work is needed and indeed, at MENSA HQ, the cleverest in our society are currently getting it done after the horse has bolted. Here is a screenshot of their site as at 3rd February 2021:

[Screenshot of the MENSA website, 3rd February 2021]


Surely, I hear you ask, most organisations listen to messages telling them they have a problem? You may be surprised at how many organisations have a culture where staff are afraid to report issues, where external agencies are ignored when they report a problem, and how much organisations miss out on by not encouraging a safe dialogue, especially around security.

Moonpig learnt from their mistakes

Let me tell you about Moonpig, the online greetings card organisation, a recent PLC (congratulations!) and a customer of ours. It's a very different organisation now from what it was back in 2013, when it too suffered a security issue. In Moonpig’s case, an external developer identified a vulnerability in the code used by Moonpig’s mobile app to connect via an API to the internal code mothership. This was one dandy of a security vulnerability: it allowed an attacker, very easily, to discover all the details of a Moonpig customer and even make a purchase using someone else’s account! Paul Price was the developer who discovered the bug, and he immediately reported it to Moonpig HQ.

Now, it is accepted practice with reported bugs that the details are not made public until at least 30 days after they were reported. This gives the company the opportunity to get the bug fixed before the potential bad guys get to know about it. In Paul’s case, he spent the next 13 months reminding Moonpig that the bug existed, that it was damaging and that it needed addressing. Moonpig’s failure back in 2013 was to ignore the advice it was being given, freely, by an external party.

After 13 months Paul went public, as he should have, and Moonpig suffered a data breach when attackers got hold of the details. Going public might seem harsh, but in our world the 30-days-before-disclosure rule is meant to put pressure on organisations to deal with the problem. His patience was incredible given the devastating impact of the vulnerability, and going public solved the problem: the bug was fixed! We were subsequently involved in delivering security training across the development teams, a process implemented and supported by the entire board at that time.

What can you do to stave off these kinds of attack?

Organisations can protect themselves by actively embracing a bug bounty program (here is a list of the most popular!).

A bug bounty program encourages external developers to test an organisation's applications and, if they find any security vulnerabilities, to report them through the program. The organisation then pays the developer as a thank-you for finding the bug. This is a great way for an organisation to embrace the fact that it might not be perfect, and to protect its users’ data properly.

Does your organisation listen? Are your staff comfortable reporting issues upwards without fear of being persecuted or considered troublemakers? What processes are in place to help them do this? Is anonymity an option? Externally, do you have an email address through which developers and users can report security issues, along the lines of security_issue_contact@myorg.com?

Human Resources and L&D leaders take note, you have the power to instil a "Security First" mindset across your whole workforce.
