It seems Google might’ve missed a spot when protecting passwords for its G Suite users, accidentally holding them in plain text for 14 years.
The search giant disclosed the error yesterday – though declined to say how many business users were impacted. ‘(Passwords) for a subset of our enterprise G Suite customers were stored in our internal systems unhashed,’ according to a company statement.
The issue goes back to 2005 and only affects G Suite customers. It was triggered by a flaw in the implementation of an old (and since retired) feature that let administrators manually set or recover passwords.
Google passwords are typically ‘hashed’, meaning scrambled using an algorithm that prevents them from being read by humans. In practice this means even Google can’t see your Gmail password.
Hashing is a one-way operation and can’t be reversed. When a user enters their password, it is hashed and the result compared with what’s in storage. A match means the password is correct and access can be granted.
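That hash-and-compare flow can be sketched in a few lines of Python. This is purely an illustration using PBKDF2 from the standard library, not a description of Google’s actual scheme; the function names and iteration count are assumptions for the example:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """One-way derivation: the original password cannot be recovered."""
    salt = salt or os.urandom(16)  # random salt so identical passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest  # store both; neither reveals the password

def verify_password(password, salt, stored_digest):
    """Re-hash the candidate with the same salt and compare digests."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Note that only the salt and digest ever touch the database; storing the password itself in plain text, as happened here, skips this protection entirely.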
Google was at pains to note yesterday that despite the slip-up, the sensitive info remained inside its encrypted systems and there’s no indication that it’s been accessed improperly or leaked. The issue has now been fixed.
When old vulnerabilities come back to haunt us
Google also admitted a second incident from January 2019, in which more unhashed passwords were found on its systems – though the systems themselves were encrypted.
‘In reviewing new G Suite customer sign-ups we discovered a number of unhashed passwords stored on our infrastructure,’ said the company.
In this case, the unhashed data was held in plain text form for 14 days. This issue has also been addressed, with no evidence of improper access or misuse.
Google has contacted the G Suite administrators affected in both incidents and instructed them to change the relevant passwords. Any passwords that aren’t changed will be automatically reset.
People and technology at cross-purposes
Kudos to Google for transparency. It may be easier to admit a security failure when no harm has occurred, but the practice of proactively owning up to errors and lapses in judgement has to be applauded.
It happens a lot. Here are just a few examples:
- Air traffic control systems left unpatched for years
- Nuclear submarines running unsupported versions of Windows XP
- More than 809 million email addresses and passwords left unprotected in an email marketing company’s cloud database
- Weak password security that let hackers into the systems of a major technology equipment supplier, unnoticed for more than six months
The common element in all these examples? Vulnerabilities that were entirely fixable, but no one noticed them or thought they were important.
Let’s own up to our cyber errors
The fact is that mistakes and misadventure play a big part in cybersecurity. Organisations large and small, in the tech sector and beyond, fall victim to breaches that can’t always be blamed on software.
Normalising awareness of this and admitting it is a good thing. If human error is a pervasive vulnerability hackers can exploit – a ‘threat vector’ in infosecurity speak – it’s important to talk about it and understand it so we can better address it.
Forget about naming and shaming. Everyone makes mistakes. The issue is how to empower people with knowledge and build a culture of security awareness, so that all the people in an organisation are on the lookout for attacks, improper system settings, or sensitive data that could be exposed.
Harvard Business Review has said that better training is the best cyber security investment a business can make. Don’t make the mistake of relying too heavily on technology for protection. Empowering your people is the best way to minimise cybersecurity risk.