
RDP – the gift that keeps on giving

It’s long been known (at least in security circles) that the RDP protocol – along with many client and server implementations – is horribly broken. While a worm exploiting BlueKeep (the most recent RDP vulnerability) has yet to surface, brute-force password attacks on RDP services are a dime a dozen and occur at a rapid rate.

PoC code is available for DoS attacks and limited RCEs on BlueKeep, and while attacks in the wild have yet to be seen, this is a case of when rather than if.

A recent honeypot test with 10 RDP servers across the world saw the first service identified within 1 minute 30 seconds. There were 4.3 million login attempts on these honeypots, with a logarithmically increasing rate over a period of a month, after which the test was ended.

There are 2 attack payloads being widely delivered via compromised RDP at the moment:

  • ransomware
  • cryptomining

Both of these payloads carry a prominent financial incentive for the attackers, and with a typically low-cost/low-effort attack, both are seeing widespread use.

At least 5 malware families are being used for ransomware attacks at the moment, including Ryuk, a very destructive piece of malware (municipal services in the US are being infected at a rapid rate) that is generally distributed through Trickbot via spam email or the Emotet downloader trojan. PowerShell is a common tool used by Trickbot to infiltrate targets and install Ryuk …

Brute-force password attacks remain an effective method for infiltration as well, purely due to poor password choices amongst RDP machine operators/admins, even in the face of decades of user education. The lack of security controls on RDP by Microsoft is also an issue – a simple 2FA/OTP or PKI requirement would stop this issue dead in its tracks (see the sketch below for what an OTP check adds). But admins are resistant to change because any additional security would impact the ease of use of RDP (and other access mechanisms).
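
To illustrate what such an OTP check involves, here’s a minimal sketch using the third-party pyotp library – the secret handling below is simplified for illustration only; real deployments provision and store secrets per user, securely:

```python
# Minimal TOTP sketch using the third-party pyotp library.
# Secret handling is simplified for illustration only.
import pyotp

secret = pyotp.random_base32()       # provisioned once per user/device
totp = pyotp.TOTP(secret)

code = totp.now()                    # what the user's authenticator app shows
print("Valid?", totp.verify(code))   # the server-side check
```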

So if RDP is so bad, why is it still seeing significant use with direct exposure to the internet?

  • poor security practices
  • inexperienced/unskilled/lazy administrators
  • bad firewall configurations
  • plain old ignorance

Until RDP is replaced or improved, the buck stops with administrators though. They can lessen their company’s exposure to attack by using Remote Desktop Gateway and enabling multi-factor authentication. While effective against credential harvesting, this still leaves RDP servers exposed to zero-day exploits or unpatched vulnerabilities such as BlueKeep.

VPNs should be used for secure remote access before RDP is available – this removes public exposure of RDP completely and significantly increases the complexity for attackers in using this service to distribute malware.

Administrators can further harden their machines against credential harvesting by not allowing domain administrators to log in via RDP; enabling RDP only for the people who need it; securing idle accounts; rate-limiting or capping the number of password retries each user is allowed (sketched below); and strength-testing users’ passwords.
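
For the rate-limiting idea, here’s a toy sketch of a sliding-window cap on login attempts – real deployments would use the OS lockout policy or the service’s own controls, and the window/limit values here are purely illustrative:

```python
# Toy sliding-window rate limiter for login attempts.
# WINDOW_SECONDS and MAX_ATTEMPTS are illustrative values.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300
MAX_ATTEMPTS = 5

attempts = defaultdict(deque)

def allow_attempt(username, now=None):
    """Return False once a user exceeds MAX_ATTEMPTS within the window."""
    now = time.time() if now is None else now
    q = attempts[username]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()              # discard attempts older than the window
    if len(q) >= MAX_ATTEMPTS:
        return False
    q.append(now)
    return True
```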

RDP should not be directly exposed to the Internet. At all. Simple. Don’t do it.
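
If you want to verify your own exposure, a minimal sketch follows – run it from an outside network against your public IP (the address below is a placeholder from the TEST-NET documentation range):

```python
# Minimal sketch: check whether TCP/3389 (RDP) answers on an address.
import socket

def rdp_exposed(host, port=3389, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True      # something answered on the RDP port
    except OSError:
        return False

print(rdp_exposed("203.0.113.10"))   # placeholder address - use your own
```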

The great web developer con

Another day, another dodgy web developer story. The premise:

We would like to offer you a website design for X amount. But to do so, we need to transfer your domain to us.

This tale is a pretty old one but it appears to be flourishing – the lure of a good once-off price for the design and what appears to be a reasonable monthly charge leads many to take up offers like these. But are these actually good offers? Let’s dissect this …

The web developer (let’s call them web devs from now on) is offering 2 products here:

  • website design
  • website hosting

Whether the first represents reasonable value is for the client to determine.

The 2nd is where we run into trouble, and for a number of reasons:

  1. web development is the core focus for web developers; web hosting is not
  2. many web development companies either subcontract the hosting to someone else or do it themselves on cheap shared hosting systems, and then mark up the cost to the client
  3. web devs are generally neither skilled nor experienced in hosting – any issues and you are on your own
  4. run-of-the-mill web devs have little to no security skills
    • the platforms they use may be unsecured or have vulnerabilities
    • they do not offer an update service for your site or plugins
    • they do not scan your website for security issues
  5. web devs often give no thought to email but will transfer your domain nonetheless, leaving your email in limbo
  6. web devs do not understand site backup and recovery, so if you have an issue with your site and need to restore a previous copy, you may be in trouble
  7. some web devs lock you into contracts – if you aren’t happy with their hosting, you just have to grin and bear it

Unfortunately, many clients don’t understand the relationship between web development and hosting. The 2 may sound similar but they are very different disciplines with very different skills requirements, and the latter skill set is something that web devs generally do not have.

As an example, if a web dev transfers your domain, they may not do the email hosting at all, or if they do, they may not migrate existing email from your old hosting provider. This leaves you managing, and paying for, 2 disparate systems.

Some web devs even go so far as to offer free web hosting. You get what you pay (or don’t pay) for.

The core premise of the requirement to transfer your domain is false. There is generally no specific requirement to move your domain – the web dev can design your website and place it with your existing hosting provider.

You then retain your website and email hosting as is, and save on hosting charges (as would have been paid to the web dev had you moved the domain).

Be careful and circumspect when approached by web devs who want to transfer your domain – it’s generally not required, you’ll likely get a poor and insecure service, and you’ll end up paying more.


DMARC: optimising email delivery

Email is a fickle thing …

There are a huge number of dependencies involved in what seems like a small task – sending an email. What started out as a simple method of exchanging messages has morphed over the years into a cobbled-together monster as needs changed and especially as businesses required a more robust and featureful method for exchanging information. Approximately 250 billion emails are sent each day, of which around 10-15% are genuine, so email is still heavily utilised.

Not only does email have to deal with the underlying ephemeral infrastructure of the internet, but it also has to deal with spam, blacklisting, bad MIME implementations and flaky email servers.

Email delivery is not guaranteed, no matter how much we would like it to be. But we can improve things to the point where email, in- and outbound, has a higher chance of being delivered.

A number of frameworks have been introduced in recent years to provide authentication/identification of email:

  • Sender Policy Framework (SPF) – check if a server is authorised to send email for a domain
  • DomainKeys Identified Mail (DKIM) – check if emails from a specific domain are authorised, using digital signatures
  • Domain-based Message Authentication, Reporting and Conformance (DMARC) – protect domains from email spoofing (this is basically an extension/combination of SPF and DKIM)

These frameworks work to detect domain or email address spoofing attacks. Domains that have published records for these methods are essentially trusted more than domains that do not have these records. In addition, the signature verification method of DKIM means that a domain signature can’t be spoofed. In the combined method, DMARC, organisations have a strong and proven method for protecting their email domains.

So how do these methods work?

SPF

A DNS record of type TXT (or the now-deprecated SPF record type) is added for the domain in question, and any IP addresses, hostnames or other domains that should be authorised to send email for the domain are added to this record. When someone receives email indicated as being from your domain, the recipient server can check if the sending server is authorised. There are 3 actions that the recipient server can take:

  • PASS – the email is sent on without adjustment
  • FAIL – the email is blocked
  • SOFTFAIL – the email is accepted but tagged as having failed the check
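
As a quick way to see such a record in practice, here’s a minimal sketch using the third-party dnspython (2.x) library to fetch a domain’s SPF record – the domain below is a placeholder:

```python
# Minimal sketch: fetch a domain's SPF record with dnspython 2.x.
import dns.resolver

def get_spf(domain):
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            return txt
    return None

print(get_spf("example.com"))   # placeholder domain
```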

DKIM

A public key is published in a DNS TXT record for the domain in question, and emails from this domain have a DKIM-Signature header appended. Recipient systems then verify this signature header against the public key published in the DNS record to confirm the email’s authenticity.
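
To see the verification side, here’s a minimal sketch using the third-party dkimpy library on a raw message saved to disk (the filename is a placeholder):

```python
# Minimal sketch: verify a message's DKIM-Signature with dkimpy.
import dkim

with open("message.eml", "rb") as fh:   # placeholder filename
    raw = fh.read()

# dkim.verify() looks up the signer's public key in DNS
# (the selector._domainkey.<domain> TXT record) and checks the signature.
print("valid" if dkim.verify(raw) else "verification failed")
```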

DMARC

Policies are generated and stored in DNS TXT records. These policies tell recipients to check whether the From: field is aligned with the domain validated by SPF or DKIM, and what to do (none, quarantine or reject) when it is not. Alignment can be strict or relaxed.
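
A DMARC policy lives at the well-known _dmarc subdomain. Here’s a minimal sketch (again with dnspython, domain a placeholder) that fetches a record and splits it into its tags:

```python
# Minimal sketch: fetch and parse a _dmarc policy record with dnspython.
import dns.resolver

def get_dmarc(domain):
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=DMARC1"):
            # tags like p= (policy) and adkim=/aspf= (alignment: r or s)
            return dict(t.strip().split("=", 1)
                        for t in txt.split(";") if "=" in t)
    return None

print(get_dmarc("example.com"))   # placeholder domain
```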

In conclusion, using the above methods, you can prevent your domain from being used in email spoofing attacks, and you can also verify senders of email as being authentic. The basis for these checks and preventions is ownership of a domain by the organisation in question – if you own your domain, only you can add the necessary records to implement these frameworks. In doing so, you are assumed to have higher trust than others, and there is a better chance of your emails being received.

Anti-spam solutions also take SPF/DKIM/DMARC results into account when scoring emails for spam, generally adding a negative value to the score, thereby improving the chances of delivery without being tagged as spam.
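
As a toy illustration of that scoring effect (not any real filter’s algorithm – the weights are made up):

```python
# Toy illustration of authentication results lowering a spam score.
# Weights are invented for the example.
def adjust_spam_score(score, results):
    bonuses = {"spf": -0.5, "dkim": -0.5, "dmarc": -1.0}
    for check, bonus in bonuses.items():
        if results.get(check) == "pass":
            score += bonus   # lower score = less likely to be junked
    return score

print(adjust_spam_score(5.0, {"spf": "pass", "dkim": "pass", "dmarc": "pass"}))
```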

These frameworks are designed to improve email delivery and in general, they succeed quite well. Make sure you are using 1 or more of these methods with your email domains.

Security – Hell in a handbasket

The last 2 weeks have really been a bad time for security news and one has to hope things will change for the better; if not, the headline says it all!

BlueKeep

Microsoft released a security patch 2 weeks ago related to Windows Remote Desktop Protocol (RDP), which is used to remotely access Windows systems. I’ve long said RDP was inherently insecure and the chickens are coming home to roost now – RDP has been found to have a recently disclosed critical, wormable, remote code execution vulnerability.

Dubbed BlueKeep and tracked as CVE-2019-0708, the vulnerability affects Windows Server 2003, XP, Windows 7, and Windows Server 2008 and 2008 R2 editions, and could spread automatically on unprotected systems. It could allow an unauthenticated, remote attacker to execute arbitrary code and take control of a targeted computer just by sending specially crafted requests to the device’s Remote Desktop Services (RDS) via RDP – without requiring any interaction from a user.

So to summarise:

  1. no authentication required
  2. remotely triggered
  3. full control over target devices
  4. no interaction required

The issue is so critical that Microsoft took the unusual step of pushing out patches for older unsupported versions of Windows including XP, Server 2003 and Vista. Time to patch?

MDS

As mentioned in a special blog entry a few days ago, new Intel side channel attacks have come to light – 4 of them, collectively known as MDS or, more colloquially, ZombieLoad. Yip, time for a stiff whiskey.

The 4 attacks are:

  • CVE-2018-12126 – Microarchitectural Store Buffer Data Sampling (MSBDS) [codenamed Fallout]
  • CVE-2018-12127 – Microarchitectural Load Port Data Sampling (MLPDS)
  • CVE-2018-12130 – Microarchitectural Fill Buffer Data Sampling (MFBDS) [codenamed ZombieLoad, or RIDL]
  • CVE-2018-11091 – Microarchitectural Data Sampling Uncacheable Memory (MDSUM)

Remediation includes both firmware (or microcode) and OS patches. Performance impact is expected to be around 10-15% for these patches. When you take this into consideration, along with performance impacts of previous remediations relating to Spectre/Meltdown, we’re starting to get Celeron performance in Xeon chips. Nice – $200 performance for $2000+ prices. Pay more, get less : )
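
On Linux hosts you can at least see where you stand: recent kernels report side-channel mitigation status under sysfs. A minimal sketch (the paths exist only on kernels new enough to report the relevant vulns):

```python
# Minimal sketch: print the kernel's CPU vulnerability/mitigation status.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:20s} {entry.read_text().strip()}")
else:
    print("kernel does not expose vulnerability status here")
```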

On the bright side, you’re not affected if running AMD CPUs. And seeing the performance improvements in AMD’s latest Ryzen and EPYC chips, along with Intel’s chip shortage, that’s looking like a very good platform bet for the future.

Flipboard

In a case of “yet another online service breached” – let’s call it YAOSB – Flipboard advised that they were the unlucky (smile) recipient of 2 attacks last year and this year, during which “an unauthorized party infiltrated some of its databases” more than once and “potentially obtained copies” of the user information they contained.

Besides usernames, email addresses and passwords, the miscreants also got hold of tokens used to connect Flipboard to other social media services such as Facebook.

Do NOT connect 1 social media service to another. Don’t do it. Ever. Even if they ask. With a pretty please.

So all in all, tough times for service providers all round. Patch, patch, patch. Change passwords. You know the drill.

Vuln mitigation and Intel MDS – the spectre looms

Spectre and Meltdown have been with us for just over a year now, and even with all the predictions of dire consequences, we have yet to see any in-the-wild code snippets or attacks beyond theoretical PoCs. So the question to ask is whether we should be losing a lot of hardware performance (most of the associated mitigations have performance impacts) for the sake of potentially theoretical security issues.

I recently had a chat with a client about the order in which vulns should be mitigated in their organisation and what strategy to use to optimise patch deployment. Admittedly, the most popular and common option is to approach this from the view of criticality. It’s rated critical so we should fix it? Well, not necessarily: there are many variables which impact the potential order of fix for your specific site or organisation. Factors like the nature of the systems, the impact trend of the vuln, the age and type of vuln, the type of application, patching intervals, remote exploitability, reach and platform types can factor in more than just the CVSS score. A toy illustration of this kind of weighting follows.
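
The factor names, weights and numbers below are invented for the example – the point is only that ordering falls out of several dimensions, not CVSS alone:

```python
# Toy prioritisation sketch - factors and weights are invented.
def priority(vuln):
    score = vuln["cvss"]                     # base severity, 0-10
    if vuln.get("remotely_exploitable"):
        score *= 1.5                         # remote beats local
    if vuln.get("exploit_in_the_wild"):
        score *= 2.0                         # active attacks trump theory
    return score * vuln.get("exposed_hosts", 1)   # reach in your estate

vulns = [
    {"name": "vuln-A", "cvss": 9.8, "remotely_exploitable": True,
     "exploit_in_the_wild": False, "exposed_hosts": 12},
    {"name": "vuln-B", "cvss": 6.5, "remotely_exploitable": False,
     "exploit_in_the_wild": False, "exposed_hosts": 200},
]
for v in sorted(vulns, key=priority, reverse=True):
    print(v["name"], round(priority(v), 1))
```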

This class of CPU architectural issues is a specific case in point. Yes, theoretically it’s possible to perform the exploits, but have there been any practical implementations yet? No? So do we really need to patch? When there are large costs associated with critical data loss and compromise, the choice is a difficult one and a fine line to walk.

And this class of exploit is not slowing down. A new Intel-focussed vuln called MDS (that apparently does not affect ARM or AMD platforms) comprising 4 related techniques was released in mid-May, the 3rd such announcement this year already.

Side channel attacks seem to be a dime a dozen these days but, again, this vuln (or class of vulns) is listed as being complex to exploit and possibly as theoretical as all the others have been. Apple’s immediate response was to switch off SMT (commonly known as hyperthreading), which results in an approximate 40% performance hit. As was Google’s response for Chrome OS. I can just see Homer saying “D’oh!”.

Disabling HT/SMT by the way, does not completely mitigate MDS …

Once again, mitigations include hardware, firmware/microcode and software/OS components, all of which need to be aligned to get full protection. Notwithstanding patching strategies and other blockers to rolling out co-ordinated fixes like this, what will the practical reach be for these patches? Servers will languish with delays, and desktops (and other IoT devices) may never even get the firmware fixes.

So it’s all a bit wishy washy at the moment. Time for some risk analysis.

A lesson in supply chain attacks

What happens when the websites we visit and the companies we depend on to provide us with information, are compromised? Supply chain attacks go to the root of information we depend on rather than attack us directly.

A recent attack on the Asus infrastructure paints the exact scenario for supply chain attacks. Attackers compromised an Asus update server to push a malicious backdoor onto numerous customers. The attackers’ aim was to target 600 specific machines; however, the malicious code was eventually delivered to many more machines.

The malicious code was signed with a compromised but valid Asus certificate which makes detecting this type of attack very difficult. And because the client implicitly trusts a certificate signed by Asus, it will accept the malicious code download.

So what can one do about supply chain attacks? Beyond being very careful about the sites you visit and where you download from, there’s not a whole lot you can do. When even mainstream and trusted sites can be compromised, we’re all in a grey zone.
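
One small habit that does help at the edges is verifying downloads against vendor-published checksums. It won’t save you when the vendor’s own signing and distribution channel is compromised (as with Asus), but it does catch tampering in transit or on mirrors. A minimal sketch, with placeholder filename and digest:

```python
# Minimal sketch: compare a download's SHA-256 with a published value.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "..."                           # digest as published by the vendor
actual = sha256_of("update_package.bin")   # placeholder filename
print("OK" if actual == expected else "MISMATCH - do not install")
```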

2018 the year of the hacked router

I’ve spoken in depth on consumer (and some enterprise) router security issues. In brief: these devices are pieces of scrap that are full of vulnerabilities and very seldom get updated to fix issues.

It’s no coincidence that this year has seen exponential growth in attacks on routers, as well as botnets making use of pwned routers and other IoT devices. Device pwnage is now one of the main vectors for malicious attacks, especially as regards ransomware distribution and cryptomining.

As far as consumer devices go, Wireless Access Points (APs) are in the same poor league as routers, and the same remediations mentioned at the bottom of this article apply. Bluetooth is another area where vulnerabilities are often found so caution is required there too.

Some of the big ones this year:

  • Mikrotik routers vulnerable to the VPNFilter attack used in cryptojacking campaigns
  • Mikrotik routers have a vuln in Winbox (their Windows-based admin tool) that allows for a root shell and remote code exec – the new exploit could allow unauthorized attackers to hack MikroTik’s RouterOS system, deploy malware payloads or bypass router firewall protections
  • Dlink routers have 8 vulns listed in Oct 2018 including plaintext password and RCE issues
  • VPNFilter affecting routers from multiple brands including Linksys, Netgear and TP-Link
  • Cisco has had a torrid time this year with multiple backdoors
  • Datacom routers shipped without a telnet password
  • Hard-coded root account in ZTE routers

The Dlink issue is so bad that the US FTC has filed a lawsuit against Dlink citing poor security practices.

To summarise, why all these issues?

  • lowest quality devices to cater for low consumer pricing
  • very little innovation or security in software design leading to (many) vulnerabilities
  • vendors have no interest in maintaining firmware
  • manual updates and/or no notification of updates
  • default and/or backdoor credentials
  • insecure UPnP, HNAP and WPS protocols
  • consumers not skilled in configuration so config left at factory defaults
  • open and web-accessible ports

So how can consumers protect themselves?

  • change default admin credentials
  • change the default SSID (WiFi) name
  • enable and only use WPA2 encryption
  • disable telnet, WPS, UPnP and HNAP (the sketch after this list shows a quick way to check whether UPnP answers)
  • don’t use cloud-based router management
  • disable remote admin access
  • install new firmware when released (monitor your vendor’s support website)
  • change access details for your router’s web management interface (eg. IP address and/or port)
  • make use of an open DNS solution like OpenDNS or Google DNS
  • advanced: reflash your router’s firmware with alternatives like DD-WRT or OpenWRT
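
For the UPnP check mentioned above, here’s a minimal sketch that broadcasts an SSDP discovery and prints whoever answers on your LAN – if your router replies, UPnP is at least partially on:

```python
# Minimal sketch: SSDP (UPnP discovery) probe of the local network.
import socket

MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: ssdp:all",
    "", "",
]).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))
try:
    while True:
        data, addr = sock.recvfrom(1024)
        print(addr[0], data.split(b"\r\n")[0].decode())
except socket.timeout:
    pass     # no more responders
```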

At minimum, consumer devices have no place in business networks, including SMEs. Even when sitting behind a firewall, non-bridge-mode routers can still be compromised and used for external attacks. And it’s been shown that some enterprise-class equipment (eg. Mikrotik and Cisco) suffers from serious issues too.

For home users, the situation is more difficult, primarily because of cost – more specialised equipment is likely to be out of price range for these users. As well, the skill requirements for non-consumer equipment increase significantly (consider that most consumers struggle with consumer devices already), so that may be out of the question. Until vendors start taking security seriously and bake it into their products, this will continue to be an ongoing issue.

Microsoft (surprisingly) has started a project called Azure Sphere, which is a Linux-based operating system that allows 3rd party vendors to design IoT and consumer devices using an embedded security processor (MCU), a secured OS and cloud security, to significantly improve the overall security posture of their devices. This is an admirable effort and hopefully many vendors get on board or initiate similar projects.

Absent any change in the consumer device arena and their current lax attitude towards security, the issue of botnets and distribution networks is likely to only get significantly worse over time.


Update from The Register:  Spammer scum hack 100,000 home routers via UPnP vulns to craft email-flinging botnet

Update from ZDNet: Bleedingbit zero-day chip flaws may expose majority of enterprises to remote code execution attacks

Some more on Chalubo: This botnet snares your smart devices to perform DDoS attacks with a little help from Mirai

And BlueBorne: Security flaws put billions of Bluetooth phones, devices at risk

(S)RUM

Veronica Schmitt, a senior digital forensic scientist at DFIRLABS, recently featured on Paul’s Security Weekly, showcasing the Microsoft SRUM system tool (System Resource Utilization Monitor).

SRUM was first introduced in Windows 8, and was a new feature designed to track system resource utilization such as CPU cycles, network activity, power consumption, etc. Analysts can use the data collected by SRUM to paint a picture of a user’s activity, and even correlate that activity with network-related events, data transfer, processes, and more.

Very little is known about SRUM outside of a few notes and videos online, and most tellingly, very few sysadmins know about the storage function of this tool.

That sounds pretty interesting.  And it is, especially for performance and system monitoring.

But …

The output from SRUM is continually (at 60-minute intervals) written to an ESE (Extensible Storage Engine) database, which in turn can be read by a Python tool called srum-dump, written by Mark Baggett, and output to a CSV for further analytics.
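
Once srum-dump has exported the database (the live copy sits at C:\Windows\System32\sru\SRUDB.dat) to CSV, a first look is trivial – the output filename below is a placeholder, not srum-dump’s actual default:

```python
# Minimal sketch: inspect a srum-dump CSV export with the csv module.
import csv

with open("srum_output.csv", newline="", encoding="utf-8") as fh:  # placeholder
    reader = csv.DictReader(fh)
    rows = list(reader)

print(f"{len(rows)} records")
print("columns:", reader.fieldnames)
```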

The scary part of this is how much data SRUM is actually writing out to the db and what info can be gleaned from this db in forensics terms. Essentially, any actions performed or data generated by a user on that system can be retrieved at a later stage by srum-dump.

From a forensics pov, that’s brilliant, but from a privacy pov, it is very scary. Especially as very few people realise this is going on in the background. It’s also scary in that if a (Windows) machine is compromised, the SRUM db can be used to inform additional (lateral or vertical) malicious activity depending on the data identified.

Comments welcome …

VPNFilter and other neat tricks

The Spectre and Meltdown attacks that came to light at the beginning of the year have been the main focus of this year’s security issues; however, there has been a lot more going on than that.

On that note though, additional Spectre variations have been found (we’re up to v4 now); as well, the BSD team has alluded to a notice for the end of June, potentially regarding Hyper-Threading in Intel CPUs, which could have far-reaching effects for virtualisation systems.

But on to the main topic of this post: VPNFilter is a modular malware that infects consumer or SOHO routers and can perform a number of malware-related functions. It is thought to be the work of Russian state-sponsored attackers “Fancy Bear” who have been fingered for previous attacks like BlackEnergy.

The attack is split into 3 stages:

  1. exploit the router, pull down an image from the Photobucket website, and use the metadata in the image to determine the IP address of the stage 2 server
  2. open a listener and wait for a trigger packet from the attackers for a direct connection
  3. connect to Command and Control, and engage the stage 3 plugins

Some new stage 3 plugins have recently come to light including:

  1. inject malicious content into web traffic as it passes through a network device
  2. remove traces of itself from the device and render the device unusable
  3. perform man-in-the-middle (MITM) attacks to deliver malware and exploits to connected systems
  4. packet sniffer module that monitors data specific to industrial control systems (SCADA)

If this sounds scary, then you’re on the right track. But think bigger, much bigger. Because the attacker is on the device connecting users to the internet, it could potentially both monitor and alter any internet traffic.

From ARSTechnica:

“Besides covertly manipulating traffic delivered to endpoints inside an infected network, ssler is also designed to steal sensitive data passed between connected end-points and the outside Internet. It actively inspects Web URLs for signs they transmit passwords and other sensitive data so they can be copied and sent to servers that attackers continue to control even now, two weeks after the botnet was publicly disclosed.”

What devices are affected? The full list is in the Cisco Talos blog post on the issue however briefly it includes upwards of 70 models from vendors like TP-Link, Dlink, Netgear, Linksys and Mikrotik, all of which are consumer units that can be expected to be used in SOHO environments.

On to Satori, a more recent botnet based on the formerly impressive Mirai code that caused havoc with denial-of-service attacks in 2016. Satori uses the Mirai code as a foundation for a series of evolving exploits that allow the botnet to control even devices with strong credentials.

The initial attack targeted Huawei and Realtek routers; however, the botnet controllers have displayed impressive skills by moving on to Bitcoin miners and now consumer routers like Dlink’s DSL-2750B.

“Attack code exploiting the two-year-old remote code-execution vulnerability was published last month, although Satori’s customized payload delivers a worm. That means infections can spread from device to device with no end-user interaction required.”

Dlink currently has no firmware update for this issue. Which brings me back to a statement that I’ve echoed on this blog numerous times – no one should be using consumer routers, or at least routers that do not have a history of consistent security updates. The internet is littered with hundreds of router models from many manufacturers that are full of holes and have no fix from the manufacturer.

Consumer manufacturers do not have the skill to design secure devices nor do they have the capacity to fix broken and exploitable devices. This leaves a sizeable portion of internet users at the mercy of attackers.

And that is scary.

Loki god of …?

In the field of IT Security, one learns very quickly that there’s always another security risk around the corner. An old favourite, the Loki Botnet, is back for another bite of the pie shortly after the fun with WannaCry a week ago.

( Loki, a god in Norse mythology, was sometimes good and sometimes bad. Loki the virus is all bad. )

Loki is a malware bot that steals passwords from applications and e-wallets, and it’s been around since early 2015, so it has a solid track record. There is a new variant doing the rounds and it’s upped the ante with the ability to steal credentials from over 100 applications. The virus arrives via email PDF attachment or web download, so the standard advice of being wary of attachments applies.

It’s unclear at this time if the malware is stealing credentials from stored password databases or from the application itself while running. In all cases, it’s important to:

  1. not execute unknown email attachments
  2. use strong passwords
  3. make use of AV and anti-malware software

On a related note, browsers are often targets of password stealing malware – Firefox, IE, Opera and Safari are all on the list of browsers that Loki ‘supports’. Of note, Firefox ( and related browsers ) is the only one out of this bunch that supports a master password.

Firefox by default stores passwords in a file that is encrypted. Without a master password, this file could be copied to another Firefox instance and viewed there. The master password applies additional encryption – essentially acting as a second factor – which means that the password file is useless without it.

Chrome/IE uses the OS’ secure encrypted storage ( eg. DPAPI, Keychain or KWallet ) to store your information – if the OS is compromised then so are your details.

It’s useful to know that using sync solutions ( eg. Google SmartLock, Apple iCloud ) will mean that your details are stored on someone else’s systems and may be accessible by the provider.

Browser password managers know which site is related to which password entry – this means that they can protect you against phishing and other attacks using lookalike sites and other tomfoolery ( by checking SSL certs ). This is another reason to use SSL-encrypted sites.

I’ve written about password managers before but, to reiterate, if you want the best in password management and security, use a dedicated password manager. They provide strong encryption, master passwords and encryption keys. And some provide neat tools to auto-input credentials into web sites and applications.

Facebook, Cambridge Analytica and your digital data

The recent Facebook/CA fiasco should be known to most people by now but here is a brief rundown in case you’re unaware.

Aleksandr Kogan, a Russian-American researcher, worked as a lecturer at Cambridge University, which has a Psychometrics Centre. The Centre claims to be able to use data from Facebook (including “likes”) to ascertain people’s personality traits. Cambridge Analytica and one of its founders, Christopher Wylie, attempted to work with the Centre for purposes of voter profiling. It refused, but Kogan accepted the offer in a private/CA capacity.

Kogan did not disclose his relationship with CA when asking Facebook (which allows user data to be used for ‘research purposes’) for permission to use the data. He created an app called ‘thisisyourdigitallife’ which provided a personality prediction.

If this sounds familiar, then yes, many have probably filled in similar ‘tests’ which are available as apps on the Facebook (and other) platforms. What most people don’t know, however, is that these apps are far more insidious than the playful front they portray. The data collected by these apps can be put to any number of nefarious uses and, as in this case, used in ways that break the user privacy agreement.

Kogan ended up providing private user data on up to 50 million users to CA, not for academic research but for political profiling purposes. This included not only users that had installed the app, but friends of those users as well. CA then used this data commercially by working with various political parties and people (including the Ted Cruz and Trump campaigns). The product was called psychographics.

Anyone who has read Isaac Asimov’s Foundation series may see parallels here with the character Hari Seldon’s psychohistory, an algorithmic science that allows him to predict the future in probabilistic terms. This is fairly hard-core Science Fiction …

To see this kind of future-looking large scale profiling occurring in 2015/6 is quite shocking.

Facebook was aware of this information sharing as early as 2015 and had asked Kogan and CA to remove the data. But they never took it further to confirm that this had indeed been done.

This is pretty embarrassing for Facebook, and its almost 10% stock drop this week confirms it. The larger concern for Facebook is that the company signed a deal with the US Federal Trade Commission in 2011 that was specifically focused on enforcing user privacy settings. So this saga may be a contravention of that agreement … and Facebook has more troubles ahead seeing as both US and EU authorities are looking into the matter. Facebook execs have already been before the UK Parliament and are now accused of lying about the facts in this case.

Arstechnica’s take on the story

Christopher Wylie, the brains behind the technology in use, had previously left CA once he realised what they were doing, and became the whistleblower whose revelations have led to the furore over the last few weeks.

The Guardian’s article on Christopher Wylie

The NYTimes article

While some will say that they’re not worried about the data that is collected about them, this scenario shows that the issue is much bigger than individuals. Profiling of large groups of people based on individual user data is now a thing.

In the case of Facebook specifically, one can at least review the third-party apps connected to one’s account and revoke those that aren’t needed.

This story should be enough for most to rethink their online presence and activity. It’s not necessarily a matter of removing yourself from the Internet, but rather being very circumspect about the information you offer up about yourself. Because your information is being bought, sold and used as a weapon against you.