2018: the year of the hacked router

I’ve spoken in depth about consumer (and some enterprise) router security issues. In brief summary: these devices are pieces of scrap, full of vulnerabilities, and very seldom updated to fix them.

It’s no coincidence that this year has seen explosive growth in attacks on routers, and in botnets built from pwned routers and other IoT devices. Device pwnage is now one of the main vectors for malicious activity, particularly ransomware distribution and cryptomining.

As far as consumer devices go, Wireless Access Points (APs) are in the same poor league as routers, and the same remediations mentioned at the bottom of this article apply. Bluetooth is another area where vulnerabilities are often found so caution is required there too.

Some of the big ones this year:

  • Mikrotik routers vulnerable to VPNfilter attack used in cryptojacking campaigns
  • Mikrotik routers have a vuln in Winbox (their Windows-based admin tool) that allows for root shell and remote code exec – the new exploit could allow unauthorized attackers to hack MikroTik’s RouterOS system, deploy malware payloads or bypass router firewall protections
  • Dlink routers had 8 vulns listed in Oct 2018, including plaintext password and RCE issues
  • VPNfilter affecting routers from multiple brands including Linksys, Netgear and TPlink
  • Cisco has had a torrid time this year with multiple backdoors
  • Datacom routers shipped without a telnet password
  • Hard-coded root account in ZTE routers

The Dlink issue is so bad that the US FTC has filed a lawsuit against Dlink citing poor security practices.

To summarise, why all these issues?

  • lowest quality devices to cater for low consumer pricing
  • very little innovation or security in software design leading to (many) vulnerabilities
  • vendors have no interest in maintaining firmware
  • manual updates and/or no notification of updates
  • default and/or backdoor credentials
  • insecure UPnP, HNAP and WPS protocols
  • consumers not skilled in configuration so config left at factory defaults
  • open and web-accessible ports

So how can consumers protect themselves?

  • change default admin credentials
  • change the default SSID (WiFi) name
  • enable and only use WPA2 encryption
  • disable telnet, WPS, UPNP and HNAP
  • don’t use cloud-based router management
  • disable remote admin access
  • install new firmware when released (monitor your vendors support website)
  • change access details for your router’s web management interface (eg. IP address and/or port)
  • make use of an open DNS solution like OpenDNS or Google DNS
  • advanced: reflash your router’s firmware with alternatives like DD-WRT or OpenWRT
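As a quick sanity check on some of the points above, here’s a minimal Python sketch that tests whether common management ports on your router answer from inside the LAN (the IP address and port list are assumptions – adjust them for your own network, and note that an open port on the LAN side is not necessarily open to the internet):

```python
import socket

# Ports worth verifying: 23 = telnet, 80/443 = web admin,
# 7547 = TR-069 remote management, 8080 = alternate web admin.
RISKY_TCP_PORTS = [23, 80, 443, 7547, 8080]

def check_ports(host, ports, timeout=2.0):
    """Return a dict mapping port -> True if a TCP connection succeeds (port open)."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            results[port] = (s.connect_ex((host, port)) == 0)
    return results

# Example usage (replace with your router's LAN address):
#   check_ports("192.168.1.1", RISKY_TCP_PORTS)
```

Anything that shows as open and that you don’t actively use should be disabled in the router’s admin interface.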

At minimum, consumer devices have no place in business networks, including SMEs. Even behind a firewall, routers not in bridge mode can still be compromised and used to stage external attacks. And it’s been shown that some enterprise-class equipment (eg. Mikrotik and Cisco) suffers from serious issues too.

For home users, the situation is more difficult, primarily because of cost – more specialised equipment is likely to be out of price range for these users. As well, the skill requirements for non-consumer equipment increase significantly (consider that most consumers already struggle with consumer devices), so that may be out of the question. Until vendors start taking security seriously and bake it into their products, this will remain an ongoing issue.

Microsoft (surprisingly) has started a project called Azure Sphere: a Linux-based operating system that lets third-party vendors design IoT and consumer devices around an embedded security processor (MCU), a secured OS and cloud security, significantly improving the overall security posture of their devices. This is an admirable effort and hopefully many vendors get on board or initiate similar projects.

Absent any change in the consumer device arena and their current lax attitude towards security, the issue of botnets and distribution networks is likely to only get significantly worse over time.

 

Update from The Register:  Spammer scum hack 100,000 home routers via UPnP vulns to craft email-flinging botnet

Update from ZDNet: Bleedingbit zero-day chip flaws may expose majority of enterprises to remote code execution attacks

Some more on Chalubo: This botnet snares your smart devices to perform DDoS attacks with a little help from Mirai

And BlueBorne: Security flaws put billions of Bluetooth phones, devices at risk

(S)RUM

Veronica Schmitt, a senior digital forensic scientist at DFIRLABS, recently featured on Paul’s Security Weekly, showcasing the Microsoft SRUM system tool (System Resource Utilization Monitor).

SRUM was first introduced in Windows 8, and was a new feature designed to track system resource utilization such as CPU cycles, network activity, power consumption, etc. Analysts can use the data collected by SRUM to paint a picture of a user’s activity, and even correlate that activity with network-related events, data transfer, processes, and more.

Very little is known about SRUM outside of a few notes and videos online, and most tellingly, very few sysadmins know about the storage function of this tool.

That sounds pretty interesting.  And it is, especially for performance and system monitoring.

But …

The output from SRUM is written continually (at 60-minute intervals) to an ESE database, which can in turn be read by srum-dump, a Python tool written by Mark Baggett, and exported to CSV for further analysis.

The scary part of this is how much data SRUM is actually writing out to the db and what can be gleaned from it in forensic terms. Essentially, any actions performed or data generated by a user on that system can be retrieved at a later stage with srum-dump.
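To give a feel for the kind of analysis this enables, here’s a minimal Python sketch that summarises per-application network usage from a srum-dump CSV export. The column names (“Application”, “Bytes Sent”, “Bytes Received”) are assumptions – check them against your actual export before relying on this:

```python
import csv
from collections import defaultdict

def total_bytes_by_app(csv_path):
    """Sum sent+received bytes per application from a srum-dump network usage CSV.

    NOTE: the column names used here are assumed, not guaranteed -- verify them
    against the headers of your own srum-dump output.
    """
    totals = defaultdict(int)
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            try:
                sent = int(row.get("Bytes Sent", 0) or 0)
                recv = int(row.get("Bytes Received", 0) or 0)
            except ValueError:
                continue  # skip malformed rows rather than abort the whole run
            totals[row.get("Application", "unknown")] += sent + recv
    return dict(totals)
```

A few lines of post-processing like this is often all it takes to turn the raw SRUM data into a per-process activity timeline.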

From a forensics pov that’s brilliant, but from a privacy pov it is very scary, especially as very few people realise this is going on in the background. It’s also worrying that if a (Windows) machine is compromised, the SRUM db can be mined to drive additional (lateral or vertical) malicious activity depending on the data identified.

Comments welcome …

VPNFilter and other neat tricks

The Spectre and Meltdown attacks that came to light at the beginning of the year have been the main focus of this year’s security issues; however, there has been a lot more going on than that.

On that note though, additional Spectre variations have been found (we’re up to v4 now); as well, the BSD team has alluded to a notice for the end of June potentially regarding Hyper Threading in Intel CPUs which could have far-reaching effects for virtualisation systems.

But on to the main topic of this post: VPNFilter is a modular malware that infects consumer or SOHO routers and can perform a number of malware-related functions. It is thought to be the work of Russian state-sponsored attackers “Fancy Bear” who have been fingered for previous attacks like BlackEnergy.

The attack is split into 3 stages:

  1. exploit the router and pull down an image from the Photobucket website
  2. the metadata in the image is used to determine the IP address for stage 2; the malware opens a listener and waits for a trigger packet for a direct connection
  3. the Command and Control server connects and engages the stage 3 plugins
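The metadata trick in stage 2 is worth dwelling on: VPNFilter derived its C2 IP address from GPS EXIF fields embedded in the downloaded image. The sketch below is illustrative only – the real malware’s encoding differs, and this hypothetical scheme just shows the general idea of smuggling an IP address through image metadata:

```python
# Illustrative sketch only: maps four GPS-style integers (a made-up scheme,
# NOT VPNFilter's actual encoding) onto the four octets of an IPv4 address.
def ip_from_gps(lat_deg, lat_min, lon_deg, lon_min):
    """Derive a dotted-quad IP from four integers, each reduced mod 256."""
    octets = [lat_deg % 256, lat_min % 256, lon_deg % 256, lon_min % 256]
    return ".".join(str(o) for o in octets)

print(ip_from_gps(97, 30, 10, 2))  # -> 97.30.10.2
```

Because the image itself looks entirely innocuous, this gives the attackers a resilient, deniable channel for pointing stage 1 infections at whatever server they currently control.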

Some new stage 3 plugins have recently come to light including:

  1. inject malicious content into web traffic as it passes through a network device
  2. remove traces of itself from the device and render the device unusable
  3. perform man-in-the-middle (MITM) attacks to deliver malware and exploits to connected systems
  4. packet sniffer module that monitors data specific to industrial control systems (SCADA)

If this sounds scary, then you’re on the right track. But think bigger, much bigger. Because the attacker is on the device connecting users to the internet, it could potentially both monitor and alter any internet traffic.

From ARSTechnica:

“Besides covertly manipulating traffic delivered to endpoints inside an infected network, ssler is also designed to steal sensitive data passed between connected end-points and the outside Internet. It actively inspects Web URLs for signs they transmit passwords and other sensitive data so they can be copied and sent to servers that attackers continue to control even now, two weeks after the botnet was publicly disclosed.”

What devices are affected? The full list is in the Cisco Talos blog post on the issue however briefly it includes upwards of 70 models from vendors like TP-Link, Dlink, Netgear, Linksys and Mikrotik, all of which are consumer units that can be expected to be used in SOHO environments.

On to Satori, a more recent botnet based on the formerly impressive Mirai code that caused havoc with denial-of-service attacks in 2016. Satori uses the Mirai code as a foundation for a series of evolving exploits that allow the botnet to control devices even when they have strong credentials.

The initial attack was targeted at Huawei and Realtek routers, however the botnet controllers have displayed impressive skills by moving on to bitcoin miners and now consumer routers like Dlink’s DSL2750B.

“Attack code exploiting the two-year-old remote code-execution vulnerability was published last month, although Satori’s customized payload delivers a worm. That means infections can spread from device to device with no end-user interaction required.”

Dlink currently has no firmware update for this issue. Which brings me back to a statement that I’ve echoed on this blog numerous times – no one should be using consumer routers, or at least routers that do not have a history of consistent security updates. The internet is littered with hundreds of models of router from many manufacturers that are full of holes that do not have a fix from the manufacturer.

Consumer manufacturers do not have the skill to design secure devices nor do they have the capacity to fix broken and exploitable devices. This leaves a sizeable portion of internet users at the mercy of attackers.

And that is scary.

Loki god of …?

In the field of IT Security, one learns very quickly that there’s always another security risk around the corner. An old favourite, the Loki Botnet, is back for another bite of the pie shortly after the fun with WannaCry a week ago.

( Loki, a god in Norse mythology, was sometimes good and sometimes bad. Loki the virus is all bad. )

Loki is a malware bot that steals passwords from applications and e-wallets, and it’s been around since early 2015, so has a solid track record. There is a new variant doing the rounds and it’s upped the ante with the ability to steal credentials from over 100 applications. The virus arrives via email PDF attachment or web download, so the standard advice of being wary of attachments applies.

It’s unclear at this time if the malware is stealing credentials from stored password databases or from the application itself while running. In all cases, it’s important to:

  1. not execute unknown email attachments
  2. use strong passwords
  3. make use of AV and anti-malware software

On a related note, browsers are often targets of password stealing malware – Firefox, IE, Opera and Safari are all on the list of browsers that Loki ‘supports’. Of note, Firefox ( and related browsers ) is the only one out of this bunch that supports a master password.

Firefox by default stores passwords in an encrypted file. Without a master password, this file could be copied to another Firefox instance and viewed there. The master password applies additional encryption, which means the password file is useless without it.

Chrome/IE use the OS’s secure encrypted storage ( eg. DPAPI on Windows, Keychain on macOS, KWallet on Linux ) to store your information – if the OS is compromised then so are your details.

It’s useful to know that using sync solutions ( eg. Google SmartLock, Apple iCloud ) will mean that your details are stored on someone else’s systems and may be accessible by the provider.

Browser password managers know which site is related to which password entry – this means that they can protect you against phishing and other attacks ( by checking SSL certs ) that use lookalike sites and other tomfoolery. This is another reason to use SSL-encrypted sites.

I’ve written about password managers before, but to reiterate: if you want the best in password management and security, use a dedicated password manager. They provide strong encryption, a master password and encryption keys. And some provide neat tools to auto-input credentials into web sites and applications.

Facebook, Cambridge Analytica and your digital data

The recent Facebook/CA fiasco should be known to most people by now but here is a brief rundown in case you’re unaware.

Aleksandr Kogan, a Russian-American researcher, worked as a lecturer at Cambridge University, which has a Psychometrics Centre. The Centre claims to be able to use data from Facebook (including “likes”) to ascertain people’s personality traits. Cambridge Analytica and one of its founders, Christopher Wylie, attempted to work with the Centre for vote-profiling purposes. It refused, but Kogan accepted the offer in a private/CA capacity.

Kogan did not disclose his relationship with CA when asking Facebook (which allows user data to be used for ‘research purposes’) for permission to use the data. He created an app called ‘thisisyourdigitallife’ which provided a personality prediction.

If this sounds familiar, then yes, many have probably filled in similar ‘tests’ which are available as apps on the Facebook (and other) platforms. What most people don’t know, however, is that these apps are far more insidious than the playful front they portray. The data collected by these apps can be put to any number of nefarious uses and, as in this case, used in ways that break the user privacy agreement.

Kogan ended up providing private user data on up to 50 million users to CA, not for academic research but for political profiling purposes. This included not only users who had installed the app, but friends of those users as well. CA then used this data commercially by working with various political parties and people (including the Ted Cruz and Trump campaigns). The product was called psychographics.

Anyone who has read Isaac Asimov’s Foundation series may see parallels here with the character Hari Seldon’s psychohistory, an algorithmic science that allows him to predict the future in probabilistic terms. This is fairly hard-core Science Fiction …

To see this kind of future-looking large scale profiling occurring in 2015/6 is quite shocking.

Facebook was aware of this information sharing as early as 2015 and had asked Kogan and CA to remove the data. But they never followed up to confirm that this had indeed been done.

This is pretty embarrassing for Facebook, and its almost 10% stock drop this week confirms this. The larger concern for Facebook is that the company signed a deal with the US Federal Trade Commission in 2011 that was specifically focused on enforcing user privacy settings. So this saga may be a contravention of that agreement … and Facebook have more troubles ahead, seeing as both US and EU authorities are looking into the matter. Facebook execs have already been before the UK Parliament and are now accused of lying about the facts in this case.

Arstechnica’s take on the story

Christopher Wylie, the brains behind the technology in use, had previously left CA on realising what they were doing, and became the whistleblower whose disclosures have led to the furore of the last few weeks.

The Guardian’s article on Christopher Wylie

The NYTimes article

While some will say that they’re not worried about the data that is collected about them, this scenario shows that the issue is much bigger than individuals. Profiling of large groups of people based on individual user data is now a thing.

In the case of Facebook specifically, one can review the third-party apps connected to one’s account and revoke their access – or delete the account entirely.

This story should be enough for most to rethink their online presence and activity. It’s not necessarily a matter of removing yourself from the Internet, but rather being very circumspect about the information you offer up about yourself. Because your information is being bought, sold and used as a weapon against you.

Meltdown and Spectre – hardware gone wild!

We’ve had some big doozies over the last 2 years from a security point of view, but the latest CPU hardware-related bugs called Spectre and Meltdown, that started making headlines early last week, surely take the cake. One has to be careful though in classifying these as bugs, because those affected would say these were conscious design choices in their CPUs, although they must have seen the potential side-effects of their choices.

So what are we actually talking about here?

First, Google’s Project Zero was started in 2014 and is a group of security analysts dedicated to finding vulnerabilities in IT systems. Some of the biggest vulnerabilities in IT systems over the last few years have been found by GPZ, so when they talk, people tend to listen.

GPZ found some interesting cache timing attacks on CPUs in the first half of 2017 and advised the affected vendors on June 1st 2017. The attacks can effectively leak information from kernel memory, a very bad situation to say the least. Public disclosure was limited to give all vendors time to come up with resolutions; however, in December another security group caught wind of the issues and released their findings, and as of the beginning of this year, the rest is history.

The issues exist in most CPUs (especially Intel) going back to 1995 and are classed into 2 groups:

  • Meltdown
  • Spectre type 1 and 2

Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system.

Meltdown has a fairly straightforward fix (which has been released by most OS vendors already) however there can be a performance penalty (sometimes significant) depending on the configuration and circumstances of systems. Intel specifically has tried to downplay the extent of performance degradation, but it is so severe in some cases, that affected vendors are advising not to implement the fixes.

Amazon Web Services (AWS) applied their Meltdown patches this past weekend and many of their large customers have been seeing light to medium performance impacts.

Note that these issues affect everything from desktop PCs, embedded and mobiles to servers and cloud systems.

Microsoft have advised that older processors with older versions of Windows are likely to suffer more. In addition, Microsoft has pulled their patch for PC systems based on AMD processors due to a compatibility issue.

Another aspect of the Meltdown issue on Windows OS’s is that certain AntiVirus packages have very deep hooks into the kernel to detect rootkit and other kernel-related malicious activity. And these are not playing nice with the patches, leading Microsoft to implement a registry key system requiring AV vendors to set a key confirming their compatibility with the patch. Messy much?
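For reference, the compatibility flag in question is (to the best of my knowledge) a single registry value that compliant AV products set, along these lines:

```
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat" ^
    /v "cadca5fe-87d3-4b96-b7fb-a231484277cc" /t REG_DWORD /d 0 /f
```

Until that value exists, Windows Update withholds the Meltdown patches from the machine – which means systems running no AV at all (or an abandoned product) silently miss the fix unless the key is set manually.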

Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre.

Spectre is harder to exploit than Meltdown, but it is also harder to mitigate. Most vendors do NOT have fixes for Spectre yet at this moment or at best, existing fixes are incomplete. The reason for this is that the fixes require co-ordination between firmware, CPU microcode and operating system – a delicate and difficult balancing act requiring all vendors to work very closely together.

So where does that leave the general public? Meltdown is mostly sorted, though with probable performance penalties, while Spectre fixes remain an ongoing project, leaving unprotected systems at the mercy of potential 0-day attacks.

Most users or organisations running endpoint and perimeter security systems should be ok as these have been retrofitted with protections against potential attacks.

But the situation remains pretty fluid at the moment and we’re likely to see a lot more activity on this over the next few weeks. As usual, patch everything that can be patched.

Multichoice and some news

DSTV has always been a contentious subject amongst South Africans. Multichoice paved the way for pay-tv with the introduction of Mnet in the mid-’80s; following this, they introduced the digital satellite service DSTV in 1995, effectively becoming a monopoly in South Africa. High costs, endless repeats and channel bundling paint Multichoice as the face of corporate greed, with a product set that leaves much to be desired. It’s no wonder that in recent times millennials and others have been leaving in droves, looking at alternatives like Netflix and Multichoice’s own streaming solution, Showmax.

However there is now another reason to leave DSTV/Multichoice – it appears they have been complicit in funding the Guptas, albeit indirectly, through (very) large payments to ANN7, the formerly Gupta-owned propaganda mouthpiece. Most of us know ANN7 as a channel that spews non-factual nonsense about everyday events in SA.

Multichoice have allegedly paid ANN7 around R250 million over the last 5 years to ‘host’ the ANN7 channel on DSTV. Multichoice had been lobbying former comms minister Muthambi to push through a decision in favour of encrypted set-top boxes for another controversial project, SABC’s Digital TV migration. That project is now mired in legal squabbles over tender and project irregularities due to the question: should the set-top boxes be encrypted or not?

In actual fact, the question comes down to: should the boxes be allowed to host paid-for content/channels (read Multichoice) as opposed to only free-to-air channels? There are opposing views on whether allowing encryption would benefit poorer households. One view is that if decryption had been included in STBs, some poorer households may eventually have been able to purchase pay TV channels without having to buy completely new devices. An opposing view is that since the STBs are going to the country’s poorest people, it would be predatory to use these households as the foundation for a new business venture.

So the question comes back to why have Multichoice (and indirectly its parent Naspers) been paying ANN7 what appears to be a lot of money, for a channel that is by all accounts, a Gupta mouthpiece? Some may argue that this was done to curry favour with the Guptas who seemingly have extended their influence into every sphere of government. Some might argue further that the Guptas had enough pull in government circles to get the vote regarding STBs, to swing in Multichoice’s favour.

Whatever the reason, Multichoice paying to host ANN7 has irked many in South Africa. A number of other companies, including global firms like KPMG, SAP and McKinsey, have been implicated in irregular dealings with Gupta-affiliated companies, and Multichoice’s actions in this matter paint them in a similar light.

The fact is that the South African public may have unwittingly been party to, and funding, corruption through their monthly DSTV premiums. That does not sit well with many.

While Multichoice continues to tout impressive statistics for their pay-tv membership, I think the truth is slightly different and with alternatives becoming available, things are likely to change further.

I left DSTV over 3 years ago and have never looked back. I know many others in my peer group who have done the same. It’s only diehard sports fans who remain loyal to DSTV’s admittedly good sports channel lineup, although the fact that you need their premium package for this, grates.

Will you stay with DSTV?

South African Security (Fails)

It’s been a while since my last post but recent events in SA around security have prompted me to write this post.

It starts with an open website containing what is now believed to be upwards of 70 million entries – names, ID numbers, income, addresses and other information on South African citizens/residents, possibly including around 12 million children. This data leak was originally exposed by Troy Hunt of HAVEIBEENPWNED fame, and came in the form of a website from (it is now believed) Jigsaw Holdings, an apparent IT partner of ERA, the property group. It took the service provider almost 3 days to plug the leak.

The data was also available in the form of a database file seeded through torrents, which means there was widespread access to this data. The fallout from this leak is likely to be big and long lasting, and identity theft is a primary consequence of leaked data such as this. Everyone needs to be extra vigilant about their personal data in the coming years.

Ster Kinekor is also on HAVEIBEENPWNED’s list and unfortunately SK have not come forward with details or advised their customers of this breach. I’ve contacted them on 3 occasions in an attempt to get details on the breach but so far they have remained mum. #sterkinekor #securityfail …

#computicket also remains stubbornly out of touch with web security and the safety of their customers – their public website has offered non-SSL access to their site/booking system forever, and after contacting them 3 times over the last 2 months to advise them of this, nothing has been done. It’s a simple matter of putting in a web redirect from HTTP to HTTPS, which should take a seasoned admin all of 30 seconds.
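For the curious, that 30-second fix really is this small – an nginx example (server names are placeholders; Apache needs an equivalent Redirect/RewriteRule in its VirtualHost config):

```nginx
# Redirect all plain-HTTP requests to the HTTPS version of the site.
server {
    listen 80;
    server_name example.com www.example.com;  # placeholder names
    return 301 https://$host$request_uri;
}
```

With the HTTPS server block already in place, this one stanza closes off the entire unencrypted entry point.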

Their front-end staff responses to my calls show their utter ignorance on the matter:

Apparently the main login to their site that is used by all customers is not a transactional page …

So let’s take a look at the site as of last week:

 

Yip no padlock, no security …

There are many examples of this kind of incompetence all around the web/world and also here in SA. There are a lot of people without the necessary skills, putting up websites and publicly accessible systems and not securing them properly.

The best advice I can offer on these types of shenanigans is to use a password database (like KeePass) with a unique password for each site. If one of the sites you use is compromised, at least that data can’t be used to access your other sites.
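On the unique-password point: any decent password manager will generate these for you, but a minimal Python sketch using the stdlib secrets module shows the idea (the length and character set here are arbitrary choices):

```python
import secrets
import string

# Character set: letters, digits and punctuation -- trim punctuation if a
# particular site rejects special characters.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Generate a random password using a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

The key detail is `secrets` rather than `random`: the former is backed by the OS CSPRNG and is suitable for credentials.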

Stay safe!

Email anti-spam, authentication and signing solutions

There are many solutions providing encryption, anti-spam, authentication and other features on top of the venerable SMTP protocol. Some require management overhead, others require end-user input. But the holy grail is to provide all these features with no user input and low management overhead.

Basics

The most important information needed before starting with any anti-spam or advanced email solution is an understanding of the message triplet. The triplet is made up of:

  • sender email address
  • sender server ( IP/host-name )
  • recipient

You will see later on in this document how important the triplet is but in essence, these 3 values constitute the limits of what can be used for address-based services ( eg. rDNS, blacklist, etc. ).

TLS SMTP encryption

TLS is the successor to SSL and is used, among other things, to encrypt SMTP traffic.

It’s important to understand that TLS does not provide any form of anti-spam or authentication. It’s primarily there to encrypt communications between clients and servers, as well as between  servers. It doesn’t care who is sending to whom, it simply takes the MIME envelope ( a binary composition of the email ) and encrypts it with an agreed upon cipher, if the recipient system supports it.
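To make the hop-by-hop nature of this concrete, here’s a minimal Python sketch using the stdlib smtplib (host, port and addresses are placeholders): the session to the first server is upgraded with STARTTLS, but anything beyond that hop is out of your hands.

```python
import smtplib
import ssl

def send_over_starttls(host, sender, recipient, message, port=587):
    """Upgrade an SMTP session to TLS before sending.

    Host and addresses are placeholders. Note STARTTLS only encrypts this
    one hop -- it is neither anti-spam nor end-to-end encryption.
    """
    context = ssl.create_default_context()  # verifies the server certificate
    with smtplib.SMTP(host, port) as smtp:
        smtp.ehlo()
        smtp.starttls(context=context)  # raises if the server lacks STARTTLS
        smtp.ehlo()                     # re-identify over the encrypted channel
        smtp.sendmail(sender, recipient, message)
```

Whether the *next* hop (your provider’s server to the recipient’s server) is also encrypted depends entirely on what those two servers negotiate between themselves.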

Anti-spam options

Let’s take a look at some of the AS options that are available to us.

address-based AS options

There are quite a few address-based options available for AS purposes and these should be implemented as the very first step in a full AS solution.

  1. black-/white-list – this is a simple list which can block specific sender or recipient addresses based on either the full email address or the domain.
  2. reverse DNS (rDNS) – when the sender server connects, its DNS host-name and IP address are provided as part of the SMTP transaction; the recipient server can do an rDNS/PTR lookup on the connecting IP address ( ie. for a particular IP address, give me the host-name ), then resolve that host-name back to an IP address and compare it to the address of the connecting server; if the 2 do not match then either the sender server is misconfigured or is being spoofed.
  3. Real-time Blackhole Lists (RBLs) – these are internet-based lookup lists which maintain known spam sources ( in IP address form ); when the sender server connects to you, your server checks whether the sender server’s IP address is in the configured RBLs; if it is, the connection is blocked before setup is complete.
  4. SASL authentication – usually used for client to server connections to send email; if you are not authenticated, then you can’t send email.
  5. invalid host-name/non-fqdn host-name – if the sender server does not provide a fully formed and valid DNS host-name, then the connection is terminated before setup completion.
  6. unknown sender domain – the recipient server checks that the sender domain is in fact valid; if not, then terminate the connection.
  7. unknown recipient domain – same as above but for the recipient.
  8. HELO/EHLO checks – determine if the sender server is who they say they are according to DNS
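The rDNS check in particular (often called forward-confirmed reverse DNS) is easy to sketch in Python with the stdlib socket module – a simplified, illustrative version of what an MTA does on each inbound connection:

```python
import socket

def forward_confirmed_rdns(ip):
    """Forward-confirmed reverse DNS check.

    PTR-resolve the connecting IP to a host-name, resolve that host-name
    forward again, and confirm the original IP is among the results.
    A mismatch (or a missing PTR) suggests misconfiguration or spoofing.
    """
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)            # PTR lookup
        _, _, addresses = socket.gethostbyname_ex(hostname)  # forward lookup
    except socket.herror:
        return False  # no PTR record for this IP
    except socket.gaierror:
        return False  # PTR host-name does not resolve forward
    return ip in addresses
```

Real MTAs typically treat a failure here as one negative signal among several rather than an outright block, since plenty of legitimate servers have sloppy DNS.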

The above options can provide a powerful solution for address-based AS. However spammers have become very clever over the years and a large percentage operate valid mail servers which can successfully bypass some of the protections above.

content AS

Most MTAs have the ability to block email based on content – eg. specific phrases or words.

heuristic AS options

Heuristics play an important and powerful role in AS as these systems can intelligently ( based on email address and content behaviours ) identify spam.

Heuristic-based solutions use a scoring system and advanced statistical analysis, breaking the email down into component pieces and then assigning sub-values to these pieces. These are then totaled and if greater than a predefined score, the email will be marked as spam. Some of the component checks are:

  1. local and network tests to identify spam signatures – the AS can determine the difference between users sending email from inside or outside your network
  2. DCC, Razor, Pyzor – online email hash sharing databases – email hashes are checked against online databases; if a hash matches, then the email is dropped or tagged.
  3. Bayesian learning ( the big one )  – the AS can automatically learn ( without external input ) what is and isn’t spam
  4. integration with AV
  5. integration with SMTP authentication solutions like SPF, DKIM
  6. URI (RHSBL) blacklists – a blacklist served via DNS to identify UCE/UBE specifically
  7. RBLs – a blacklist served via DNS to identify spam servers
  8. training of the BAYES statistical analysis engines can improve the accuracy of the AS

Remember that heuristic AS uses a combination of the above to compute a total score, not the individual items.
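To illustrate that combined-total idea, here’s a toy scoring engine in Python. The rules, weights and threshold are invented for illustration only – real systems like SpamAssassin ship hundreds of tuned rules:

```python
# Toy heuristic scorer: each check contributes a partial score; only the
# combined total decides whether mail is tagged as spam. All rules and
# weights below are made up for illustration.
RULES = [
    ("contains_suspect_phrase", lambda m: "act now" in m["body"].lower(), 2.5),
    ("sender_not_authenticated", lambda m: not m.get("spf_pass", False), 1.5),
    ("html_only_body", lambda m: m.get("html_only", False), 0.8),
]
SPAM_THRESHOLD = 4.0  # arbitrary example threshold

def score_message(message):
    """Return (total score, is_spam, list of (rule, weight) hits)."""
    hits = [(name, weight) for name, check, weight in RULES if check(message)]
    total = sum(w for _, w in hits)
    return total, total >= SPAM_THRESHOLD, hits
```

No single rule above is damning on its own; it’s the accumulation of several weak signals past the threshold that tips a message into the spam bucket, which is exactly why these systems are hard for spammers to game with one trick.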

grey-listing

Grey-listing makes use of the message triplet to identify a unique sender/recipient combination. If the grey-listing system receives an email where the triplet has not been seen previously, then the recipient server will reject the email with a temporary defer code. If the greylisting system has seen the triplet previously, then it accepts the email.

This system works on the premise that spammers do not operate receiving SMTP servers and as a result will never receive the defer message. As such, they will never resend the email to be accepted by the recipient’s grey-listing system the 2nd time around.

Grey-listing is commonly reported to block around 90% of spam on its own.

Grey-listing systems also often support spam traps: if designated trap addresses receive email, the sending servers are blacklisted.

client-based AS

Some email clients ( e.g. Thunderbird ) have their own built-in AS solutions, which can be very useful in conjunction with server-side solutions. Once again, training will significantly increase the accuracy of the solution.

Message signing/encryption

This is not specifically an AS solution, so it is listed here in a separate section. This function is performed at the client side ( and in some solutions, by a combination of the client and the sending server ).

The first requirement for this solution is that people who want to exchange signed emails need to:

  1. each generate a private/public key set
  2. send their respective public keys to each other

If you don’t know the sender, then you can’t authenticate that the sender is in fact who they say they are – anyone can generate a private/public key set using any information. This is a major disadvantage of message signing: it’s only useful between people who already know and trust each other.
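The generate/sign/verify round trip can be sketched with a deliberately tiny, insecure RSA example ( the key values below are toys purely for illustration; real S/MIME and PGP use 2048-bit-plus keys with proper padding schemes ):

```python
import hashlib

# Toy RSA key generation (educational only -- far too small for real use)
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def sign(message: bytes) -> int:
    """Hash the message, then 'encrypt' the digest with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Recompute the digest and compare with the signature raised to e."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest
```

Note how verification needs only the public values ( e, n ) – which is exactly why the correspondents must exchange public keys up front, and why nothing here proves *who* generated those keys in the first place.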

There is also significant management overhead associated with the PKI ( Public Key Infrastructure ) required for message signing. Provision needs to be made for secure storage and retrieval of keys. Lost private keys will mean that you can’t sign or read signed messages.

Public stores/directories are available for storage of public keys.

S/MIME ( Secure MIME ) is the most widely accepted method of digital message signing. Microsoft has the following to say about message signing:

A digital signature attached to an email message offers another layer of security by providing assurance to the recipient that you—not an imposter—signed the contents of the email message. Your digital signature, which includes your certificate and public key, originates from your digital ID. And that digital ID serves as your unique digital mark and signals the recipient that the content hasn’t been altered in transit. For additional privacy, you also can encrypt email messages.

The authentication provided by digital signatures is predicated on the assumption that the person who originally generated the keys is who they say they are, so take the above with a pinch of salt.

For use of message signing in Windows AD environments:

As an administrator, you can enable S/MIME-based security for your organization if you have mailboxes in either Exchange 2013 SP1 or Exchange Online, a part of Office 365. To use S/MIME in supported versions of Outlook or ActiveSync, with either Exchange 2013 SP1 or Exchange Online, the users in your organization must have certificates issued for signing and encryption purposes and data published to your on-premises Active Directory Domain Service (AD DS). Your AD DS must be located on computers at a physical location that you control and not at a remote facility or cloud-based service somewhere on the internet. For more information about AD DS, see Active Directory Domain Services.

Outside of Windows AD environments, you can use Enigmail with Thunderbird.

Authentication

As we’ve seen above with the AS and digital signature options, there is no clear way of confirming the authenticity of the sender.

From Wikipedia:

The need for this type of validated identification arose because spam often has forged addresses and content. For example, a spam message may claim to be from sender@example.com, although it is not actually from that address or domain or entity, and the spammer’s goal is to convince the recipient to accept and to read the email. It is difficult for recipients to establish whether to trust or distrust any particular message or even domain, and system administrators may have to deal with complaints about spam that appears to have originated from their systems but did not.

SMTP authentication solutions are specifically designed with this in mind.

SPF – Sender Policy Framework

SPF is a simple and elegant approach to authentication and does not require any configuration or adjustment of clients.

To implement SPF, one needs 2 items:

  1. a TXT DNS record with SPF arguments
  2. an SPF plugin or ability on your email server ( when receiving email )

The first step in implementing SPF is to publish a TXT DNS record listing the IP addresses of all servers authorised to send email for your domain.

When an email arrives, the recipient server does a DNS lookup on the sender domain’s SPF record to determine which servers are authorised to send email for that domain. If the actual IP address of the sending server matches the record, the transaction is authenticated.

In this case, the IP address of the sending server is the one item in an email exchange that cannot practically be spoofed, since the SMTP conversation takes place over an established TCP connection.

SPF provides different result levels ( PASS, NEUTRAL, SOFTFAIL, FAIL ). These can be used during testing, or for sender domains which do not publish SPF information.
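A rough sketch of the receiving side’s evaluation ( heavily simplified: only ip4: mechanisms and the all qualifier are handled here; a real implementation, e.g. the pyspf library, also resolves a:, mx:, include: and redirect= mechanisms via DNS ):

```python
import ipaddress

def check_spf(record: str, sender_ip: str) -> str:
    """Evaluate a simplified SPF TXT record against the connecting IP."""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:            # skip the v=spf1 version tag
        if term.startswith("ip4:") and ip in ipaddress.ip_network(term[4:]):
            return "PASS"
        if term in ("-all", "~all", "?all"):   # trailing default qualifier
            return {"-all": "FAIL", "~all": "SOFTFAIL", "?all": "NEUTRAL"}[term]
    return "NEUTRAL"
```

For example, with the record `v=spf1 ip4:192.0.2.0/24 -all`, a connection from 192.0.2.10 passes while any other address hard-fails.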

DKIM – DomainKeys Identified Mail

DKIM provides a similar solution to SPF, except that it uses cryptographic signatures – with the public key published in DNS – instead of IP addresses to confirm authenticity.

Signatures are arguably harder to forge than IP-based checks, but in practice the 2 solutions provide a similar level of assurance.

There are 2 parts to DKIM:

  1. signing – the sending server, with the help of a DKIM plugin or built-in feature, signs the outbound email with its private key
  2. verifying – the recipient server verifies the signature using the public key published in the sender domain’s DNS – if verification succeeds, the email is authentic

DMARC – Domain-based Message Authentication, Reporting and Conformance

DMARC is essentially a combination of SPF and DKIM. The admin for a domain creates a policy defining SPF, DKIM or both, and how failures are handled. It also provides a reporting mechanism ( aggregate and forensic ) on the actions performed under the policy, allowing the domain owner to see which messages pass and which fail.

These reports are a major advantage of DMARC, as neither SPF nor DKIM by itself provides feedback on passes or failures ( other than via the SMTP logs ).
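The receiver-side decision logic can be sketched roughly as follows ( the function name and the simplified alignment flags are my own; real DMARC evaluation per RFC 7489 also handles relaxed vs strict alignment and percentage sampling ):

```python
def dmarc_disposition(spf_pass: bool, spf_aligned: bool,
                      dkim_pass: bool, dkim_aligned: bool,
                      policy: str = "reject") -> str:
    """DMARC passes when at least one of SPF or DKIM both passes and is
    aligned with the From: header domain; otherwise the domain's published
    policy ( p=none / quarantine / reject ) decides the message's fate."""
    if (spf_pass and spf_aligned) or (dkim_pass and dkim_aligned):
        return "pass"
    return {"none": "deliver", "quarantine": "quarantine", "reject": "reject"}[policy]
```

Note the alignment requirement: an email can pass raw SPF ( e.g. via a forwarding service’s IP ) yet still fail DMARC if that result is not aligned with the visible From: domain.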

The purpose of these authentication systems is to make sure that the sender is who they say they are. In practice, they work quite well, especially considering that up to 50% of all email servers support one or the other solution.

3rd party

There are some 3rd party email client plugins that perform a form of grey-listing.

What can you do to protect yourself?

If you are running your own email server(s), then you need to, at the very least, implement address-based AS options including grey-listing and RBLs. The next step is to implement heuristics-based AS – if you’re running MS Exchange then you can use a Linux-based front-end with SpamAssassin to do this. Alternatively, there are some commercial solutions that run on Exchange servers.

The next step is domain authentication with SPF, DKIM or DMARC. Note that MS Exchange itself does not have the ability to check SPF/DKIM for incoming emails from other domains. If you run Exchange then you can only create an SPF/DKIM record for your own domains so that others can check it when receiving email from you.

Exchange Online/Office 365 and many other online services like GMail, Yahoo mail, MimeCast, Messagelabs, etc. support SPF and DKIM.

If you need to authenticate incoming email from other domains, then you need to run a FOSS SMTP server that supports SPF/DKIM/DMARC plugins ( e.g. Postfix ).

There are also other solutions like PGP/S-MIME gateways, certified email ( e.g. OpenPec ) and other libraries, but they are beyond the scope of this article.

What else can you do?

  1. increase the intensity of the AS heuristics solution
  2. learn to read message headers – user training (SAT) is crucial for end-users to identify spam email
  3. don’t open unexpected attachments or click on suspicious links
  4. use additional AS solutions like Fortinet’s FortiMail or Cisco IronPort
  5. push your email through a 3rd party message scrubbing solution like Mimecast

Conclusion

There are many solutions available to address email’s problems with spam, authenticity and delivery. Email as a system was never designed with security in mind, and the spectre of spammers and malware actors means the industry has had to respond in kind.

A little bit of ransomware with that Sauerkraut?

This past weekend’s shenanigans with WannaCry have been painful for many people. But the simple fact is that solutions for this specific issue ( and many others ) have been available for a long time.

The initial patch for the MS17-010 issue was released by Microsoft in March 2017. Didn’t update?

Many AV vendors have had virus definitions for WannaCry for some time already – at the latest, since Friday evening. Don’t have ( updated ) AV?

Have an office internet connection without a decent firewall?

Still running XP or Vista without extended support?

No 3-tier backups?

The only one to blame is yourself …

IT seems to be treated as an afterthought at many companies. Yet it is IT that helps facilitate your business and income.

Thom from OsNews says:

“Nobody bats an eye at the idea of taking maintenance costs into account when you plan on buying a car. Tyres, oil, cleaning, scheduled check-ups, malfunctions – they’re all accepted yearly expenses we all take into consideration when we visit the car dealer for either a new or a used car.

Computers are no different – they’re not perfect magic boxes that never need any maintenance. Like cars, they must be cared for, maintained, upgraded, and fixed. Sometimes, such expenses are low – an oil change, new windscreen wiper rubbers. Sometimes, they are pretty expensive, such as a full tyre change and wheel alignment. And yes, after a number of years, it will be time to replace that car with a different one because the yearly maintenance costs are too high.

Computers are no different.”

It’s time to put some effort into your IT – especially if you value your data and your business. It may be a difficult pill to swallow, but it’s a necessary one.

The NSA and Ransomware. Oh and a bit of HPE on the side.

If ever there was a perfect example of stupidity, the new highly virulent strain of WanaCrypt ransomware currently spreading like wildfire is it. And that stupidity is care of the NSA who, in their infinite wisdom, wrote exploits based on 0-day vulnerabilities that should have been reported to the relevant vendors but were instead stockpiled.

Well, the Shadow Brokers have now in turn appropriated this code from the NSA, and someone else has gotten hold of it to create a self-replicating variant of WannaCrypt, or Wcry, malware that is currently causing havoc in hospitals, banks, telecom services, utilities and others by encrypting drives and blocking access to systems.

Another cause for concern: wcry copies a weapons-grade exploit codenamed Eternalblue that the NSA used for years to remotely commandeer computers running Microsoft Windows. Eternalblue, which works reliably against computers running Microsoft Windows XP through Windows Server 2012, was one of several potent exploits published in the most recent Shadow Brokers release in mid-April. The Wcry developers have combined the Eternalblue exploit with a self-replicating payload that allows the ransomware to spread virally from vulnerable machine to vulnerable machine, without requiring operators to open e-mails, click on links, or take any other sort of action.

The exploit spreads via vulnerabilities in network-accessible Windows subsystems, although the exact details are still vague. Microsoft released a patch for the issue in March ( MS17-010 ); however, many companies have yet to install the update.

Numerous companies have been affected during the course of today, including Telefonica, Vodafone, 16 NHS hospitals across the UK, and many others. The ransomware has already been detected in over 74 countries, and the demands include Bitcoin payments of up to $600 per infection. The speed and virulence of infection show a highly capable piece of malware, with advanced network replication techniques bypassing standard methods of protection.

What can you do to protect yourself?

  • shut down any non-critical network file access/shares
  • seeing as the malware is probably initiated via email, be especially vigilant for spam emails
  • update all Windows systems with the patch listed above
  • segment sections of your network where possible

And in other news, HP has been shipping a dodgy Windows audio driver from Conexant for the last 2 years on many HP laptops which, wait for it … logs all your keystrokes! Yay!

Symantec, Google and the SSL Monkey

Some education first

PKI, or Public Key Infrastructure, is the technology that allows website visitors to trust the SSL certificates presented by SSL-encrypted websites. For example, when you visit your internet banking website, you can verify the authenticity of the site by checking its SSL certificate ( i.e. clicking on the padlock ). That certificate is underpinned/backed by a CA, or Certificate Authority, and you are trusting that the CA has correctly issued the website’s SSL certificate.

CAs ( issuers of SSL certificates ) have possibly the most important role to play in an ecosystem based purely on trust. If a CA does something to break that trust, then the entire secure-website system that we rely on daily for critical functions is put in jeopardy.

Our browsers are the conduit through which CAs are “allowed” to exist – browsers ship with the CAs’ root certificates through which all SSL certificates are issued and validated. If a CA does not have a root certificate present in a particular browser, then that browser will not implicitly trust sites whose certificates are backed by that CA, and users visiting those sites will be presented with errors. A user could ignore the errors and continue, but would then have no way of validating the authenticity of the site.

Considering that typosquatting ( registering website addresses with purposely misspelled names ) is a serious security issue, it makes no sense to ignore certificate errors. Here is an example:

I want to go to my internet banking, so I type www.standerdbank.co.za by mistake instead of www.standardbank.co.za. I don’t notice the problem and end up at a site that looks exactly as I expect; however, this is not the correct site, and I could be entering my internet banking login details into an unrelated, malicious site. Once the credentials are captured, the false website redirects me back to the real site so that I don’t suspect a problem. The attackers now have my login credentials and can act as they please.

SSL certificates help prevent the above scenario ( as long as we trust the CAs ), because the false site cannot present a valid certificate for the real domain.
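As an aside, a crude lookalike-domain check of the kind security tooling can apply is easy to sketch ( the trusted list, threshold and function name below are invented purely for illustration; real detectors use more robust edit-distance and homoglyph analysis ):

```python
import difflib

# Hypothetical list of domains the user is known to visit
TRUSTED = ["standardbank.co.za", "google.com"]

def lookalike(domain: str, threshold: float = 0.85):
    """Return the trusted domain a given domain suspiciously resembles,
    or None if it is either an exact match or not similar at all."""
    for good in TRUSTED:
        ratio = difflib.SequenceMatcher(None, domain, good).ratio()
        if domain != good and ratio >= threshold:
            return good
    return None
```

The misspelled banking domain from the example above scores well over the threshold against the genuine one, which is exactly what makes typosquatting so effective against human eyes.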

History of abuse

With this ( somewhat simplified ) background, we can now understand the issues surrounding CAs that act badly with respect to the issuing of certificates. And there have been more than a few instances of bad behaviour on the part of CAs.

  1. DigiNotar was a Dutch certificate authority owned by VASCO Data Security International, Inc. On September 3, 2011, after it had become clear that a security breach had resulted in the fraudulent issuing of certificates, the Dutch government took over operational management of DigiNotar’s systems. That same month, the company was declared bankrupt.  An investigation into the hacking by Dutch-government appointed Fox-IT consultancy identified 300,000 Iranian Gmail users as the main target of the hack (targeted subsequently using man-in-the-middle attacks), and suspected that the Iranian government was behind the hack.
  2. According to documents released by Mozilla Corporation, Qihoo appears to have acquired a controlling interest in the previously Israeli-run Certificate Authority StartCom, through a chain of acquisitions including the Chinese-owned company WoSign. WoSign also has a CA business and has been accused of poor control and mis-issuing certificates. Furthermore, Mozilla alleges that WoSign and StartCom are in violation of their obligations as Certificate Authorities in respect of their failure to disclose the change in ownership of StartCom; Mozilla is threatening to take action to protect its users.
  3. In 2015, Symantec’s Thawte CA mis-issued EV ( Extended Validation – the most trusted certificate type ) certificates for the domains google.com and www.google.com. These were just test certificates, but had they escaped into the wild, anyone could have posed as Google.
  4. In June 2011, StartCom suffered a network breach which resulted in StartCom suspending issuance of digital certificates and related services for several weeks. The attacker was unable to use this to issue certificates ( and StartCom was the only breached provider, of six, where the attacker was blocked from doing so ). StartCom was acquired in secrecy by WoSign Limited ( Shenzhen, China ), through multiple companies, which was revealed by the Mozilla investigation related to the root certificate removal of WoSign and StartCom in 2016.

And now for some current news

As recently as last year, Google ( and now Mozilla – together the 2 browsers with the biggest market share ) found wrongdoing on the part of Symantec and many of its subsidiary CAs. Symantec owns 2 of the most popular brands in SSL certificates – Verisign and Thawte – so this is a considerable issue. As of March this year, Google has laid out a plan to gradually distrust SSL certificates issued by Symantec and any of its subsidiary CAs, and to push for their replacement.

“As captured in Chrome’s Root Certificate Policy, root certificate authorities are expected to perform a number of critical functions commensurate with the trust granted to them. This includes properly ensuring that domain control validation is performed for server certificates, to audit logs frequently for evidence of unauthorized issuance, and to protect their infrastructure in order to minimize the ability for the issuance of fraudulent certs.”

“On the basis of the details publicly provided by Symantec, we do not believe that they have properly upheld these principles, and as such, have created significant risk for Google Chrome users. Symantec allowed at least four parties access to their infrastructure in a way to cause certificate issuance, did not sufficiently oversee these capabilities as required and expected, and when presented with evidence of these organizations’ failure to abide to the appropriate standard of care, failed to disclose such information in a timely manner or to identify the significance of the issues reported to them.”

As you can imagine, if CAs mis-issue certificates, this breaks the fundamental trust required for using secure encrypted websites.

I very seldom mention my affiliated businesses in my blog; however, in this case it’s important to mention that the primary upstream provider of SSL certificates for my internet service business, eMailStor, has, as of February this year, stopped distributing certificates from Symantec, Verisign, Thawte, GeoTrust and RapidSSL. This gives you an idea of how serious the problem of certificate mis-issuance is.

For end-users, it’s a matter of checking the certificate presented by a site to ensure that it’s valid. In cases of mis-issuance, most CAs have an insurance facility which may cover losses.

Website operators have a little more work to do – as follows:

  • make sure all critical internet services use SSL Certificates
  • have a complete SSL Certificate generation policy in place, along with all required documentation and procedures
  • do not pin certificates to one CA
  • do not assume that popular CAs are by nature and reputation secure
  • continually review the performance of your CAs
  • follow the instructions of CAs to the letter for installation

SSL Certificates, and their issuers, play a critical part in making sure that websites can be authenticated, and that users can transfer information with websites in a secure manner. Browser makers also play a part in making sure that CAs toe the line and work together to build an infrastructure that can be trusted.

 
