Tag Archives: SSL

Symantec, Google and the SSL Monkey

Some education first

PKI, or Public Key Infrastructure, is the technology that allows website visitors to trust the SSL certificates presented by SSL-encrypted websites. An example is when you visit your Internet Banking website – you can verify the authenticity of the site by checking its SSL Certificate ( i.e. by clicking on the padlock ) – but that certificate is underpinned/backed by a CA or Certificate Authority, and you are trusting that the CA has correctly issued the website’s SSL Certificate.

CAs ( the issuers of SSL certificates ) have possibly the most important role to play in an ecosystem based purely on trust. If a CA does something to break that trust, then the entire secure-website solution that we rely on daily for critical functions is put in jeopardy.

Our browsers are the conduit through which CAs are “allowed” to exist – browsers contain the CAs’ root certificates against which all SSL certificates are issued and validated. If a CA does not have a root certificate present in a particular browser, then that browser will not implicitly trust sites using certificates issued by that CA, and a user visiting such a site will be presented with errors. The user could ignore the errors and continue, but they would have no way of validating the authenticity of the site.
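As an aside, you can peek at this trust store yourself. Here’s a minimal sketch in Python ( standard library only; note that depending on how your platform exposes its trust store, the list may come back empty ):

```python
import ssl

# Enumerate the root CA certificates the local trust store ships with --
# the same kind of list a browser consults before trusting a site.
context = ssl.create_default_context()  # loads the system's default CA roots
roots = context.get_ca_certs()

print(f"{len(roots)} trusted root certificates loaded")
for cert in roots[:5]:  # show a handful of issuers as a sample
    issuer = dict(field[0] for field in cert["issuer"])
    print(" -", issuer.get("organizationName", issuer.get("commonName")))
```

Every one of those roots can vouch for any website on the internet – which is exactly why a single misbehaving CA is such a big deal.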

Considering that typosquatting ( registering website addresses with purposefully incorrect names ) is a serious security issue, it makes no sense to ignore certificate errors. Here is an example:

I want to go to my internet banking so I type www.standerdbank.co.za ( by mistake, instead of www.standardbank.co.za ). I don’t realise the problem and end up at a site that looks exactly like I expect; however this is not the correct site and I could potentially be entering my internet banking login details into an unrelated and malicious site. Once the credentials are captured, the false website redirects me back to the real site so that I don’t suspect a problem. The “attackers” now have my login credentials and can act as they please.

SSL certificates solve the above issue ( as long as we trust the CAs ): the false site will present an invalid or unrelated certificate, and the browser will warn the user.
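The hostname check the browser performs can be sketched as follows – a hypothetical, heavily simplified version of the real matching rules ( browsers implement the full RFC 6125 logic ):

```python
def hostname_matches(cert_name: str, typed_name: str) -> bool:
    """Simplified check: does the name in the certificate cover the
    name the user typed? Handles exact matches and a single left-most
    wildcard label only -- real implementations do much more."""
    cert_name, typed_name = cert_name.lower(), typed_name.lower()
    if cert_name.startswith("*."):
        # A wildcard covers exactly one left-most label.
        return "." in typed_name and typed_name.split(".", 1)[-1] == cert_name[2:]
    return cert_name == typed_name

# The typo-squatted domain fails the check ...
print(hostname_matches("www.standardbank.co.za", "www.standerdbank.co.za"))  # → False
# ... while the legitimate one passes.
print(hostname_matches("*.standardbank.co.za", "www.standardbank.co.za"))    # → True
```

Of course, this check only helps if the typo-squatted site cannot obtain a valid certificate for its own name that fools the user – which brings us back to trusting the CAs.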

History of abuse

With this ( somewhat simplified ) background, we can now understand the issues surrounding CAs that act badly with respect to the issuing of certificates. And there have been more than a few instances of bad behaviour on the part of CAs.

  1. DigiNotar was a Dutch certificate authority owned by VASCO Data Security International, Inc. On September 3, 2011, after it had become clear that a security breach had resulted in the fraudulent issuing of certificates, the Dutch government took over operational management of DigiNotar’s systems. That same month, the company was declared bankrupt. An investigation into the hacking by the Dutch government-appointed Fox-IT consultancy identified 300,000 Iranian Gmail users as the main target of the hack ( targeted subsequently using man-in-the-middle attacks ), and suspected that the Iranian government was behind the hack.
  2. According to documents released by Mozilla Corporation, Qihoo appears to have acquired a controlling interest in the previously Israeli-run Certificate Authority “StartCom”, through a chain of acquisitions, including the Chinese-owned company WoSign. WoSign also has a CA business; WoSign has been accused of poor control and mis-issuing certificates. Furthermore, Mozilla alleges that WoSign and StartCom are in violation of their obligations as Certificate Authorities in respect of their failure to disclose the change in ownership of StartCom; Mozilla is threatening to take action to protect its users.
  3. In 2015, Symantec’s Thawte CA mis-issued an EV ( Extended Validation – the most trusted certificate type ) certificate for the domains google.com and www.google.com. These were just test certificates, but had they escaped into the wild, their holder could have posed as Google.
  4. In June 2011, StartCom suffered a network breach which resulted in StartCom suspending issuance of digital certificates and related services for several weeks. The attacker was unable to use this to issue certificates ( and StartCom was the only breached provider, of six, where the attacker was blocked from doing so ). StartCom was acquired in secrecy by WoSign Limited ( Shenzhen, China ), through multiple companies, which was revealed by the Mozilla investigation related to the root certificate removal of WoSign and StartCom in 2016.

And now for some current news

As recently as last year, Google ( and now Mozilla – makers of the two browsers with the biggest market share ) found wrongdoing on the part of Symantec and many of its subsidiary CAs. Symantec owns two of the most popular brands in SSL certificates – Verisign and Thawte – so this is a considerable issue. As of March this year, Google has laid out a plan to gradually distrust SSL Certificates issued by Symantec or any of its subsidiary CAs, and to push for their replacement.

“As captured in Chrome’s Root Certificate Policy, root certificate authorities are expected to perform a number of critical functions commensurate with the trust granted to them. This includes properly ensuring that domain control validation is performed for server certificates, to audit logs frequently for evidence of unauthorized issuance, and to protect their infrastructure in order to minimize the ability for the issuance of fraudulent certs.”

“On the basis of the details publicly provided by Symantec, we do not believe that they have properly upheld these principles, and as such, have created significant risk for Google Chrome users. Symantec allowed at least four parties access to their infrastructure in a way to cause certificate issuance, did not sufficiently oversee these capabilities as required and expected, and when presented with evidence of these organizations’ failure to abide to the appropriate standard of care, failed to disclose such information in a timely manner or to identify the significance of the issues reported to them.”

As you can imagine, if CAs mis-issue certificates, this breaks the fundamental trust required for using secure encrypted websites.

I very seldom mention my affiliated businesses in my blog; however, in this case it’s important to mention that the primary upstream provider of SSL Certificates for my internet service business, eMailStor, has, as of February this year, stopped distributing certificates from Symantec, Verisign, Thawte, GeoTrust and RapidSSL. This gives you an idea of how serious the problem of certificate mis-issuance is.

For end-users, it’s a matter of checking the certificate presented by a site to ensure that it’s valid. In the case of mis-issuance, most CAs have an insurance facility which may cover losses.

Website operators have a little more work to do – as follows:

  • make sure all critical internet services use SSL Certificates
  • have a complete SSL Certificate generation policy in place, along with all required documentation and procedures
  • do not pin certificates to one CA
  • do not assume that popular CAs are by nature and reputation secure
  • continually review the performance of your CAs
  • follow the instructions of CAs to the letter for installation
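As one concrete example of the “continually review” point above, flagging certificates that are nearing expiry is easy to automate. A sketch using only Python’s standard library ( the notAfter date below is made up, in the format that ssl.getpeercert() returns ):

```python
import ssl
import time

# Hypothetical expiry date in the "notAfter" format used by
# ssl.getpeercert(); in practice you'd read this from each live site.
not_after = "Jun 26 21:41:46 2030 GMT"

# Convert the certificate timestamp to epoch seconds and compute the
# remaining validity window.
expires = ssl.cert_time_to_seconds(not_after)
days_left = (expires - time.time()) / 86400

if days_left < 30:
    print(f"RENEW SOON: certificate expires in {days_left:.0f} days")
else:
    print(f"OK: {days_left:.0f} days of validity remaining")
```

Run something like this against all of your sites on a schedule and an expired certificate ( and the browser warnings that come with it ) becomes much less likely.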

SSL Certificates, and their issuers, play a critical part in making sure that websites can be authenticated, and that users can transfer information with websites in a secure manner. Browser makers also play a part in making sure that CAs toe the line and work together to build an infrastructure that can be trusted.



DROWN SSL attack

Another day, another SSL attack. A new, low-cost attack has been found that decrypts sensitive communications in a matter of hours, and in some cases almost immediately. I hereby name you DROWN! And CVE-2016-0800.

The attack works against TLS-protected communications that rely on the RSA cryptosystem when the key is exposed even indirectly through SSLv2, a TLS precursor that was retired almost two decades ago because of crippling weaknesses. The vulnerability allows an attacker to decrypt an intercepted TLS connection by repeatedly using SSLv2 to make connections to a server.

The fact is, though, that many of the SSL-based attacks disclosed over the last two years ( and yes, there have been quite a few ) are not inherently serious, or do not have a large attack surface. Many require a particular ( and unusual ) set of circumstances and dependencies that make them, well, less effective.

And DROWN is not dissimilar. It requires SSLv2 to be enabled on the web server. For those in the know, and any sysadmin worth their salt, anything below TLSv1 ( at the very least ) should have been switched off on your web servers years ago already. Known issues with these older protocol versions have absolutely mandated their non-use. But unfortunately, the ease with which a web server can be put online is no indication of the technical skill of those putting these servers online. So you can bet there are probably some misconfigured servers out there.
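If your service happens to be Python-based, making the protocol floor explicit is a one-liner. A sketch ( SSLv2 and SSLv3 are already compiled out of modern OpenSSL builds, so this simply states the policy rather than inventing it ):

```python
import ssl

# Build a server-side TLS context and refuse the legacy protocol
# versions that DROWN-style attacks depend on.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # nothing older gets in

print("Minimum protocol:", context.minimum_version.name)
```

The equivalent switch exists in every mainstream web server’s configuration – the point is to set it deliberately rather than rely on defaults.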

But the attack surface for DROWN should be relatively small, and those who are affected will probably ( and hopefully ) not be providing anything of value on their sites.

There’s a lesson to be learnt here though: just because something may seem simple to do on the surface, does not mean it is in reality. There’s no replacement for skill and experience.

Heartbleed SSL attack

The latest SSL attack in the form of Heartbleed ( ref. CVE-2014-0160 ) has burst onto the scene in the last 24 hours with a bang. Effectively, Heartbleed is a weakness in OpenSSL that allows the theft of information that is, under normal circumstances, protected by SSL/TLS. It allows the memory of affected systems to be read and information extracted ( including passwords and other sensitive information ), and it also allows the keys ( both public and private ) used on those systems to be compromised.

The solution is to upgrade to the latest version of OpenSSL ( 1.0.1g ) – however that alone may not be enough. If your site was compromised previously, there would be no trace of that attack, and your keys may already be compromised. So you may also need to regenerate the private and public keys for these systems.
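A quick, if partial, first check: see which OpenSSL build your Python is linked against ( an assumption here is that your services use the same library as Python – often they don’t, so check your web server’s own OpenSSL too ):

```python
import ssl

# Heartbleed affected OpenSSL 1.0.1 through 1.0.1f; 1.0.1g and later
# carry the fix. The patch letter matters, so compare the full version
# string, not just the numeric tuple.
print("Linked TLS library:", ssl.OPENSSL_VERSION)
print("Version tuple:     ", ssl.OPENSSL_VERSION_INFO)
```

This is no substitute for a proper scan of the services themselves, but it takes seconds and catches the obvious cases.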


The media coverage of this is extensive, and to be fair, this is a very serious issue. However, we need to consider what the attack surface is. And in my own testing, the attack surface is low to non-existent – every single client of mine that I’ve tested either does not have a vulnerable implementation of OpenSSL or is not using the SSL Heartbeat extension ( this may be simply because I stick to 2 Linux distros alone ). Is this issue being blown out of proportion? I can’t speak for others but my own experience says yes.

That’s not to say you should not be vigilant – as a security professional, it’s always best to err on the side of caution. Prevention is better than cure …

There are a number of tools available for testing purposes as well as online SSL checkers like those from Qualys and Comodo. Test and make sure you’re covered.

UPDATE: The guys who wrote masscan scanned the entire internet today and released some interesting numbers on vulnerable systems: approximately 600,000 out of ~28 million SSL-enabled servers. That’s 2.1% … not an entirely significant number, but still a big issue depending on which sites are vulnerable.

There have been a lot of calls in the media for users of websites to change their passwords. Make sure, though, that you change your password AFTER the affected site has been sorted out, otherwise you’re just perpetuating the issue.

OpenID and SSL/DNS poisoning

Ben Laurie of Google’s Applied Security team, while working with an external researcher, Dr. Richard Clayton of the Computer Laboratory, Cambridge University, found that various OpenID Providers (OPs) had TLS Server Certificates that used weak keys, as a result of the Debian Predictable Random Number Generator (CVE-2008-0166).

In combination with the DNS Cache Poisoning issue (CVE-2008-1447) and the fact that almost all SSL/TLS implementations do not consult CRLs (currently an untracked issue), this means that it is impossible to rely on these OPs.

In order to mount an attack against a vulnerable OP, the attacker first finds the private key corresponding to the weak TLS certificate. He then sets up a website masquerading as the original OP, both for the OpenID protocol and also for HTTP/HTTPS.

Then he poisons the DNS cache of the victim to make it appear that his server is the true OpenID Provider.

There are two cases. The first is where the victim is a user trying to identify themselves: in this case, even if they use HTTPS to “ensure” that the site they are visiting is indeed their provider, they will be unable to detect the substitution and will give their login credentials to the attacker.

The second case is where the victim is the Relying Party (RP). In this case, even if the RP uses TLS to connect to the OP, as is recommended for higher assurance, he will not be defended, as the vast majority of OpenID implementations do not check CRLs, and will, therefore, accept the malicious site as the true OP.
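The CRL gap described above is easy to demonstrate: in Python’s ssl module, for example, revocation checking is strictly opt-in ( and even then you must load the CRL data yourself via load_verify_locations – which almost nobody does ):

```python
import ssl

# A stock TLS context verifies the certificate chain but does NOT
# consult CRLs by default.
context = ssl.create_default_context()
print("CRL checking on by default:",
      bool(context.verify_flags & ssl.VERIFY_CRL_CHECK_LEAF))

# Opting in is a single flag -- but connections will then fail unless
# the relevant CRL has also been loaded into the context.
context.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF
print("After opting in:",
      bool(context.verify_flags & ssl.VERIFY_CRL_CHECK_LEAF))
```

So even a revoked weak-key certificate sails through most TLS stacks – exactly the behaviour the attack above exploits.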