Tag Archives: cloud

The cloud and security

Moving your applications and data into the cloud presents a paradox when it comes to security. A recent Thales survey found that over 60% of respondents thought the cloud provider was responsible for protecting their sensitive and/or confidential data, and over 50% said they didn't know what their cloud provider does to protect their data. That's a substantial area of unknowns, and it's why I call this a paradox: you may be moving your systems into the cloud only to end up with less security!

Why is this important? Because many PaaS/IaaS solutions involve putting your beloved data out there where you have less control and security. Witness the new default in Windows 8.1 of making SkyDrive the default write location for your Documents library. And apparently the contents of files are not stored locally, only the metadata: it looks like the file is local, but only the information about the file is kept on disk. You will need to specifically right-click a folder and set it to be available offline if you want a local copy. Stub files or reparse points do the magic in the background.

But this is a serious departure from traditional cloud sync apps for desktop users and requires a certain ( heavy ) reliance on a good-quality internet connection. It also requires heavy reliance on the security and confidentiality of the cloud provider, something that has proven to be in short supply, as can be gathered from recent spying allegations, media reports and lawsuits.

There is a strong probability that American companies are specifically being caught up in broad-ranging requests for customer/user data, and there are reports of the UK following a similar pattern. So the question to ask is: how secure do you feel about the confidentiality of your data when it is stored with a cloud provider? I think this particular issue is going to be shaped by events around government laws and data interception over the next few years. A word of warning: everything on the internet is available for anyone to see.

csync CLI usage and mirall tuning

This post provides and collates information about the ownCloud client components that is not yet available from ownCloud itself.

csync cli usage

The ownCloud csync version uses the same syntax as the original csync, but with a different URL syntax/module.

1. create a folder/repository through the web interface that you will sync to
2. choose a local folder to sync to ownCloud
3. sync with the following syntax:

csync <local_folder> ownclouds://user:password@www.server.co.za/owncloud/files/webdav.php/<remote_folder>

4. enter login credentials when requested, if not supplied in step 3 above
[ note to do: does anyone know how to provide credentials on the cli when the password includes non-std characters eg. @ and ! ? ]
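Regarding the note above: one possible workaround ( an assumption on my part, not verified against csync's URL parser ) is to percent-encode the reserved characters in the password, since `@` and `!` have special meaning in URLs. The password below is a made-up example:

```shell
# Hypothetical password containing reserved URL characters:
password='p@ss!word'
# Percent-encode @ (%40) and ! (%21) so the URL parser does not misread them:
encoded=$(printf '%s' "$password" | sed -e 's/@/%40/g' -e 's/!/%21/g')
echo "$encoded"   # -> p%40ss%21word
# Then use the encoded form in the sync URL, eg.:
# csync /home/user/docs "ownclouds://user:${encoded}@www.server.co.za/owncloud/files/webdav.php/docs"
```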

csync CLI usage is very useful when you need to automate syncing to ownCloud from the command line. I've experienced some unusual behaviour though: on one occasion, syncing a local folder to a newly created ownCloud folder resulted in the local files being removed rather than synced to the ownCloud folder.

In this case, you may want to use the --dry-run switch the first time to check what changes will be made.
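A sketch of such a safety check, using the same URL form as the example above ( the local path and folder names are placeholders ):

```shell
# Preview what the sync would do without changing any files,
# then run the real sync only once the output looks sane.
csync --dry-run /home/user/docs \
  "ownclouds://user@www.server.co.za/owncloud/files/webdav.php/docs"
```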


The current mirall/csync combination uses time stamps ( correct? ) to determine if a file needs to be synced; this is called conflict resolution. See the original csync documentation. Currently this check is done on a timer of 30 seconds ( correct? ). Mirall has had a timer config option added in the last release which allows this timer to be changed ( eg. a sync check every 5 minutes instead of every 30 seconds ).

[note to do: does anyone know the timer config value and in which file this is changed? ]

The mirall client under Linux now has the ability to use a filesystem notification mechanism such as inotify to detect changes that need syncing. This is enabled using the -DINOTIFY switch when compiling mirall.
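The build step above might look something like the following; a CMake-based out-of-tree build and the exact `ON` value for the flag are assumptions on my part, with only the -DINOTIFY switch itself taken from the text:

```shell
# Hypothetical out-of-tree build of mirall with inotify support enabled.
# Only -DINOTIFY comes from the docs; the rest is a standard CMake pattern.
mkdir -p build && cd build
cmake -DINOTIFY=ON ..
make
```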

[note to do: does anyone know how and where the timer value is disabled and inotify option enabled? ]

The Cloud, Security and IT Skills

Seeing as everyone is writing about Cloud Computing lately, I thought I'd rehash some of my concerns about this 'new' technology. New in quotes because the idea is actually quite old, dating back to the time-sharing Unix systems of the '60s and '70s. Cloud obviously takes this to a new level ( supposedly with non-stop availability ) but the basic premise stays the same.

Another reason for the quotes is that commercial companies continually need to invent new markets ( based on old ideas ) so they can expand their profit coverage. The main driver for commercial companies is profit, and little else. While we would like to see some companies as benevolent, patriotic and altruistic, the fact is that if they don't bring home the bacon, the board is going to find someone else who can. Excuse the cynicism, but that's the bottom line of doing business these days.

Cloud is not new. Cloud is simply a repackaging of existing technologies with a new spin and some new clothes. Cloud is the latest buzzword for the commercial exploitation of open and closed technologies that have been around for some time ( remember Autonomic Computing from IBM? ). Another example is centralised terminal-based computing: think VDI and Terminal Services. Boy, the computer industry loves to rehash.

So far, the execution has been less than stellar. Two of the prime drivers for Cloud computing are application availability and reliability, something that has been distinctly lacking from major cloud vendors. Microsoft have had their fair share of outages on their BPOS platform, Amazon's EC2 has had a problem or two, and Google's services have their ups and downs. If the main drivers for cloud already have this poor showing, then the future of cloud is murky, and someone will need to do a lot more to convince me to put my data and apps in the great ether.

Security is another area of concern. There have been a number of reports of Amazon’s EC2 being used to hack ( reverse engineer to be polite ) encryption and wifi protocols, amongst other things. For very little cost, one can purchase quite a lot of computing time to perform all sorts of compute-intensive activities. And certainly there are those out there who have it in mind to poke at the security of your online apps ( and sensitive data ).

And while you're handing over the keys to your corporate apps and data to a third party, this does not negate your responsibility for those apps and data. When ( not if ) your cloud vendor has a failure, your directors will come knocking at your door, not your cloudie's. And read that fine print very carefully, because your cloudie has an out to the advertised 100% availability that initially caught your eye.

DNS registrars and Certificate Authorities are an old example of cloud computing ( in this case the narrow definition of hosted DNS and 3rd-party secure certificate generation ). CAs, supposedly the guarantors of our secure online activities, have been falling like dominoes lately. DigiNotar, the Dutch CA, has just been taken over by the Dutch government due to its mismanagement of a potentially very dangerous situation, and RSA's SecurID system was hacked earlier this year. These are just a small sample of the many breaches that occur almost weekly.

So if the security companies we trust to ensure our safety can't get it right, what chance do the cloud vendors have?

A few years ago, everyone outsourced their IT support. That turned out to be a complete and utter mess ( I was in London when the whole house of cards came tumbling down ). Now we're chomping at the bit to give another part of the business away because someone ( read: a commercial vendor ) said so. Why are we so quick to abdicate our responsibilities? Because that's all cloud computing is: giving control of our systems to someone else.

If cloud computing is providing ‘IT as a service’, why can’t we effectively do this ourselves? There are a number of reasons:

  • lack of skills to implement new technologies
  • lack of time to correctly test and evaluate new technologies
  • perceived cost of in-house IT services and support
  • ‘build your own universe and don’t let anyone touch it’ syndrome amongst IT staff in SME and large corporates
  • lack of due diligence by management
  • lack of buy-in by management

With these hurdles to cross when running your own IT systems, no wonder some companies think it’s better to hand their systems over to a 3rd party. But cloud has its own set of issues:

  • security
  • legal and regulatory requirements for physical isolation
  • availability
  • bandwidth constraints
  • recourse in the event of outages

Don't get me wrong, Cloud Computing has its place. But as history has taught us, handing the keys to your house to someone else is not always the best idea. Just because it appears to be someone else's responsibility when you host with an online service doesn't mean it's not your issue. The problem of data integrity and application availability is not solved by an online service; it's simply moved elsewhere. If you're going Cloud, do it for the right reasons, not because someone said so or because it's the latest buzzword.

Microsoft: Cloud Services fail

Well, if there's ever been an advertisement against cloud services, Microsoft is it. The recent spate of outages on Microsoft's BPOS system continued this past weekend with a 7-hour outage at their Dublin data centre, after an ‘act of God’ took out their power grid and backup generators. Microsoft said it would “proactively provide impacted customers with a 25 per cent credit on a future monthly invoice”. Thanks, Microsoft!

But one has to wonder at the value of the financially backed SLA offered by Redmond: customers with monthly uptime lower than 95 per cent get a full credit; a 50 per cent credit for uptime between 95 per cent and 99 per cent; and a 25 per cent credit for uptime between 99 per cent and 99.9 per cent. So customers experiencing anything above 8.76 hours of downtime a year are able to make a claim against the Ts&Cs in Microsoft’s SLA. The SLA does not apply when the service is hit by availability issues arising from “factors outside of our control”, one of the exclusion criteria.
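To put those SLA tiers in perspective, the downtime allowance behind a given uptime percentage is simple arithmetic ( using 8760 hours in a non-leap year ):

```shell
# Allowed downtime per year for a given uptime percentage.
# 99.9% uptime leaves 0.1% of 8760 hours, i.e. 8.76 hours per year.
uptime_pct=99.9
awk -v u="$uptime_pct" 'BEGIN {
    hours = (100 - u) / 100 * 8760   # hours of allowed downtime per year
    printf "%.1f%% uptime allows %.2f hours of downtime per year\n", u, hours
}'
```

A single 7-hour outage, like the Dublin one above, burns through most of that annual 99.9% allowance in one go.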

Microsoft also says that blackouts should never be a concern for prospective cloud customers.

“When you switch to cloud power Microsoft, you never have to worry about a power outage. You can rest easy. Our financially backed 99.9 per cent uptime guarantee means a steady stream of power is pumped directly into your business at all times and include 24/7 support if anything ever does go wrong,” said the vendor on its website.

Resellers should take note when advising customers to switch to Office 365, the successor to BPOS, as Microsoft previously admitted that outages on the new cloud service are also inevitable.