Tag Archives: open source

Slackware 14.1: an interview

I’ve been quite slack ( yip, cue the puns ) on reviewing Slackware 14.1, but time has been short and, to tell you the truth, after upgrading there’s not a whole lot different from an existing user’s point of view ( except for that usual Slackware “it just works” air of operation ). That being said, there is a whole heap of functionality in Slackware 14.1 that deserves mention.

First some version numbers to see where SW 14.1 is at. Kernel 3.10.17 is bang up to date and features support for most current hardware. The huge and generic kernels continue to be supplied leaving it up to the user as to which they prefer ( generic requires an initrd be built ). Xfce 4.10.1 and KDE 4.10.5 are featured for desktop environments, both being fairly up to date ( at the time of 14.1’s release ). There’s also a whole lot of gadgetry underneath dealing with the automatic detection of hardware devices without the user resorting to ninja tricks.
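If you opt for the generic kernel, building the initrd is a one-liner; a rough sketch ( the kernel version matches this release, but the root device and filesystem below are assumptions – substitute your own ):

```shell
# build an initrd for the generic kernel, assuming an ext4 root on /dev/sda2
mkinitrd -c -k 3.10.17 -f ext4 -r /dev/sda2 -m ext4
# then point the generic kernel entry in /etc/lilo.conf at /boot/initrd.gz
# and re-run lilo to activate it
lilo
```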

A complete development environment including glibc 2.17 and gcc 4.8.2 is on offer along with a host of other development tools like LLVM and Clang. Then we have server standards like Apache 2.4.6 ( along with php-fpm support, my favourite new toy ), php 5.4.20, Perl 5.18.1, Python 2.7.5 and many others. As usual, Slackware’s package list is as complete for server use as you will find anywhere else. 14.1 is also the first Slackware to support a complete UEFI environment including install and booting. For desktop use, there’s a good helping of mainstream apps from Firefox to Amarok ( and including some GTK+-based apps ) however you’ll probably end up using one of the 3rd party repositories ( like www.slackbuilds.org ) to fill in the gaps or compile those apps not included. Security options are covered in detail with OpenSSL, OpenVPN, OpenSSH and GnuPG.

At just over 2GB in size, the installation image is about average for Linux distros but it does include a huge amount of apps and options. I’ve started playing with BTRFS recently and Slackware provides a good environment for this, having a fairly up-to-date btrfs-tools package and install-time support. Of course, Slackware includes support for a number of other filesystems, including all ext versions, xfs and reiserfs. Glusterfs or Ceph would be a nice addition though … PXE and USB installers are available for cases where traditional installation methods are not an option ( and they are quick ).

Slackware ( and derivatives ) remains one of the few Linux distros that allows for a complete RAID1 boot implementation ( something that has saved me on numerous occasions ). I know that there are some options for RHEL-based and other distros but they do seem a bit complex or don’t mirror the /boot slice.
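As a rough sketch of what a mirrored boot setup involves on Slackware ( device names here are assumptions – adapt them to your own layout ):

```shell
# create a RAID1 array from two boot partitions on separate disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# in /etc/lilo.conf, boot from the array and write boot records to both disks:
#   boot = /dev/md0
#   raid-extra-boot = mbr-only
# then install the boot loader
lilo
```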

The installation itself remains straightforward but sparse, using an ncurses-based interface. Some of the hand-holding found in other distros is not evident here – you need to partition your disk(s) before installation and make package selections during installation that may seem a little archaic. Afterwards, there are tools like slackpkg, sbopkg ( 3rd party ) and slapt-get ( 3rd party ) that assist in keeping your installation up-to-date and provide app repositories.
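Keeping a stock install current with slackpkg is typically a two-step affair:

```shell
# refresh the package database ( after enabling a mirror in /etc/slackpkg/mirrors )
slackpkg update
# download and apply official patches and upgraded packages
slackpkg upgrade-all
```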

If you want flash ( not flashplayer ) and desktop-style ease of use, then other distros are probably better suited. But if you need a rock-solid server OS or a desktop OS you’re willing to tinker with, then Slackware fits the bill.

Update: I completely forgot about one of the most important changes in 14.1 – the switch from mysql to mariadb. There’s a lot of wrangling going on about which is better technically, but at this early stage of development there’s not much difference between mariadb and mysql. Bigger changes are expected in newer releases, along with a gradual move away from 100% compatibility.

Squid changes in 3.2 and 3.3

Squid, the venerable proxy/caching server, has  recently undergone a few major changes.

3.2

SMP

In 3.2, one of the biggest changes is SMP ( multi-cpu or -core ) support. This could potentially have a huge impact on the performance scalability of a machine that uses multiple CPUs or cores. Previously ( in <= 3.1 ), Squid was a single process application that consumed a single CPU – no matter how many you had in a server. Some components like helpers or back-end drivers could make use of other cores/CPUs but the main Squid process was essentially a serial process. 3.2 changes that with a new ‘worker’ concept where Squid spawns multiple worker processes to utilise some or all available CPU cores. Workers are almost individual Squid processes however they do share a number of components including:

  • the Squid executable
  • general configuration
  • listening ports
  • logs
  • memory object cache ( depending on environment )
  • disk object cache when using the Rock store
  • some cache manager stats

The following  components are not shared:

  • disk object cache ( ufs, aufs, etc. )
  • dns caches
  • snmp stats
  • helper processes and daemons
  • stateful HTTP authentication
  • delay pools
  • some cache manager stats

The interesting thing about the new SMP features is that workers may have different configurations ( eg. listening ports ). By default, all workers that share an http_port listen on the same IP address and TCP port. The operating system protects the shared listening socket with a lock and decides which worker gets the next HTTP connection waiting to be accepted. Once an incoming connection is accepted by a worker, it stays with that worker. You would think this gives a balanced operation between workers; however, scheduling behaviour in the kernel TCP stack results in a fairly large skew in how much work a particular CPU or core does. The Squid developers have added a small patch ( until such time as they can pinpoint the odd scheduling behaviour ) that, to an extent, solves this issue. While balancing is still not perfect, there is a much better distribution of CPU resource usage, which should allow the app to scale better on larger SMP systems.

Using a different listening port for each worker should allow one to control the spread of requests between workers, if that is something you’d like to do.

There is however a big dependency when using workers – each worker needs its own back-end store ( unless you’re using the Rock store ). This, in and of itself, is actually not a bad thing, because you can now allocate separate disks ( or RAID sets ) to each worker, thereby growing the I/O capability of your system – better than running with just a single disk or RAID set. By the way, performance in Squid is best served by single or RAID 1 disks, as parity RAID can impact performance massively. ( Do not use multiple cache_dirs on a single disk or RAID set! )
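A minimal squid.conf sketch of the above – enabling workers and giving each its own store via the ${process_number} macro ( the paths and cache sizes are made up for illustration ):

```
# enable two worker processes
workers 2

# give each worker its own disk cache, ideally on separate spindles
if ${process_number} = 1
cache_dir aufs /cache1/squid 100000 16 256
endif
if ${process_number} = 2
cache_dir aufs /cache2/squid 100000 16 256
endif
```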

Helpers

The other big change in 3.2 is a demand-based system for helpers and helper multiplexers. This allows one to set a maximum number of helpers that Squid may use; only a base number is started, with more spawned from there as required.
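The on-demand behaviour is driven by the new startup= and idle= options on the *_children config tags, eg. ( the numbers here are arbitrary ):

```
# allow up to 20 redirector helpers, but start only 5,
# keeping at least 1 spare idle helper for load spikes
url_rewrite_children 20 startup=5 idle=1
```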

Another big change is that the helpers have been renamed, to standardise naming and make it easier to understand what each helper does.

Logging

Logging is now modularised and uses a separate daemon to provide improved performance in SMP environments. Asynchronous buffering of log writes is also supported so that the logging system does not impact on proxy write operations.
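In config terms, that looks roughly like this ( the log path is an assumption ):

```
# hand log writes to the separate log daemon module
access_log daemon:/var/log/squid/access.log squid
# buffer log writes so logging doesn't block proxy operations
buffered_logs on
```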

There are a number of other changes, including configuration tags ( added, changed and removed ), Surrogate/1.0 protocol extensions to HTTP and Solaris pthreads support.

3.3

Helpers

Two new helpers make their appearance in Squid 3.3. The first is the SQL database logging daemon, which can log entries ( in native Squid format ) directly to a SQL database ( using the Perl database abstraction layer ). This provides some interesting options for people currently using custom scripts to get Squid data into databases.
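Wiring the daemon in looks roughly like this – note that the helper path varies by distro and is an assumption here:

```
# use the SQL logging helper instead of the default file daemon
logfile_daemon /usr/lib/squid/log_db_daemon
# the daemon: target is handed to the helper above
access_log daemon:/squid/access_log squid
```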

The next is a time quota helper ( implemented through ACLs ) which can be used to allocate time budgets for using squid.
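A rough sketch of using the time quota helper via an external ACL ( the helper path and quota file are assumptions ):

```
# external ACL backed by the time quota helper, keyed on login
external_acl_type timequota ttl=60 children-max=1 %LOGIN \
    /usr/lib/squid/ext_time_quota_acl /etc/squid/quota.conf
acl within_quota external timequota
# deny users whose time budget is exhausted
http_access deny !within_quota
```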

Other

Other improvements in 3.3 include SSL bump server-first ( rather than client-first ), SSL server certificate mimicking and custom HTTP request headers. There are some changes to config tags due to the above features, but mostly everything stays the same as for 3.2.
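The server-first mode is selected via the ssl_bump directive, along these lines ( the certificate path is a placeholder ):

```
# intercepting port with a local CA certificate for generated certs
http_port 3128 ssl-bump cert=/etc/squid/proxy.pem
# connect to the origin server before generating the fake certificate,
# which is what enables server certificate mimicking
ssl_bump server-first all
```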

Medical Security and Open Source

Earlier this year, I read and listened ( through the linux.conf.au podcast ) to what can only be described as a seminal and thought-provoking paper on medical software security by Karen Sandler, opening my eyes to an entire area of software security that one doesn’t normally think about. Karen’s talk at the 2012 Linux Conf in Australia, appropriately titled “Freedom in my heart”, struck a chord – I’ve been dealing with 8 years of CFS, so I have some idea of the constant nag at the back of your mind about your health. When that health is threatened because of software developed by the lowest-cost bidder, then we’re in for a bad time.

Considering that even simple medical devices now have upwards of 100,000 lines of code, and with a commercial industry standard of 1 bug per 1,000 lines of code, issues can mount up very quickly. Of course, commercial companies are the first to stand up and raise their security-by-obscurity flag, but how can we trust our health to these companies when we know with certainty that their software is riddled with bugs? There is little or no recourse to action, as you have with Open Source Software, unless you can prove negligence outright – and considering the protection afforded medical companies, this is unlikely.

Besides the security aspect of medical software, there are of course many other areas of critical importance that seem to be dominated by commercial companies with no apparent respect for the ‘client’ and an overriding passion, above all else, for profits. The attacks on core Iranian nuclear infrastructure of the last year via Flame, open access to confidential government information on New Zealand’s Work and Income public computer terminals, TD Bank’s missing unencrypted backup tapes, vulnerabilities in Sinapsi’s eSolar SCADA systems, the recent Shamoon virus attack that turned 30,000 Aramco workstations and servers into expensive paperweights, and the in-game exploit that managed to kill the majority of characters in some cities in World of Warcraft are just some of the issues that we’re facing every day. Exploits and security issues are no longer the purview of script kiddies and lazy coders – the motives are now profit and destruction, and the actors are governments and organised crime groups.

In the USA for example, it’s estimated that over 80% of critical infrastructure is in the hands of commercial companies who have little incentive to fortify their networks against cyber attacks – that would involve cost and eat into shareholders’ profits. IOActive researcher Barnaby Jack has recently found flaws in the wireless transmitters of medical equipment that could result in death-at-a-distance. The Economist’s article, “When code can kill or cure”, gives a startling perspective on the issues surrounding medical device security. The list is endless.

And my point is?

Open Source Software has proven itself equal to, and in many cases better than, commercial solutions. It’s used in every aspect of life, from computers to smartphones to cars and TVs. But there are certain areas where commercial companies are afforded protection, through action or inaction, while providing patently insecure solutions in critical areas. Besides the obvious security benefits of OSS solutions, there are also lower ( or no ) costs, rapid development, high-quality coding, open standards and best-in-class support.

The next time you’re thinking about purchasing some software, ask your commercial software vendor about their security track-record. Or better yet, think about the OSS option. Either way, you may be surprised.

Virtualisation part 4: oVirt

Red Hat’s virtualisation product, RHEV, was slightly hamstrung in v2, seeing as the manager technology ( which had previously been purchased from Qumranet ) only ran on Windows. This requirement is something that put me off the product until now – RHEV 3 has been released with a JBoss-based management server and a whole lot of feature goodness. But it’s a paid-for product, and not everyone can afford it. So the Open Source oVirt project is the next best thing.

oVirt comes in 2 parts:

  • ovirt-engine ( the manager )
  • ovirt-node ( the compute node )

These are currently based on Fedora 16, with an additional repo for the engine; with Fedora 17 it should be integrated. oVirt currently supports FC, iSCSI and NFS as storage repositories for VM images, while NFS is supported for ISO and export images. One requirement is that all CPUs in an oVirt cluster are of the same family.

oVirt installation is quite straightforward. The first step is to install ovirt-engine.

  • install a machine with Fedora 16
  • add the ovirt repository
  • install the ovirt-engine packages
  • configure ovirt-engine
  • for the Spice-based console, install spice-xpi
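On the engine side, the steps above boil down to something like this ( assuming the oVirt repository has already been added ):

```shell
# on the Fedora 16 management host
yum install ovirt-engine
# interactive configuration of ports, passwords and default storage type
engine-setup
```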

Next, install one or more ovirt-node machines using the oVirt Project-supplied ISO. Once you’ve configured your nodes with basic networking, you can add them to your ovirt-engine through the management console.

Also configure:

  • a storage domain for iSCSI/FC/NFS storage of your VM images
  • an NFS storage domain for your ISO install images

Then you can get down to the business of creating virtual machines. Happy virtualising!

Windows 8 a KDE clone?

Microsoft has always been accused of following the pack rather than innovating. So it’s no surprise that early screenshots of the Windows 8 copy dialogue seem to be a direct rip-off of the KDE 4 copy dialogue, from the ‘multiple copy operations in a single dialogue’ visual aspect:

[screenshot: Windows 8 and KDE 4 copy dialogues]

to the bandwidth usage graphs:

[screenshot: transfer speed graphs]

The look may not be exactly the same, however the idea is spot on. As they say, copying is the sincerest form of flattery.

Slackware 13.37: an interview

I woke up this morning to find a very nice email in my inbox – Slackware 13.37 has been released!

So continuing on from previous articles in my interview series, it’s time to take a look at Slackware 13.37. One thing is for sure about the development process – Pat has been having some fun. The odd version number is because he has always wanted a cool name for a release and 1337 ( leet ) fit the bill. Instead of simply being called RC4, the 4th release candidate is labelled as RC 3.14159265358979323846264338327950288419716. Furthermore, the 5th rc is called 4.6692 ( first Feigenbaum constant ).

Let’s take a look …

What’s new?

In development, this release has seen a massive amount of updates and bug fixes. This includes new versions of the Mozilla stable of browsers to fix the recent Comodo certificate vulnerability. Other updates, changes and fixes include:

– upgrade to KDE 4.5.x series ( AlienBob is maintaining 4.6.x for the -current series and 13.37 )

– mplayer phonon backend for KDE ( to supplement gxine and gstreamer support )

– bind is upgraded to 9.7.3 ( includes fixes for cache poisoning attacks )

– Firefox 4 takes over from 3.6 as the main browser

– the huge update to xorg-server 1.9 ( xorg.conf no longer required )

– rpm2tgz now supports txz packages

– nouveau driver for Nvidia cards added

– LAMP stack now has php 5.3.6, apache 2.2.17 ( supports DSO and SSL ), mysql 5.1.55

– Ian Lance Taylor’s new ELF linker ‘gold’

– yasm, libplist, rfkill, moc console audio player, libsndfile, ddrescue ( my fav app ), iptraf-ng, chrome added

– lxc linux containers based on cgroups support

In other areas, GPT partition support, memtest86+ and virtio modules have been added to the install initrd as well as the usb/pxe installers. Slackware now fully supports btrfs with the btrfs-progs package ( moved from testing ), although you may want to use something else for the /boot slice. Support has been updated for the 6xxx series and Wireless-N Intel wifi controllers. Gdisk can be used to partition large ( GPT ) disks instead of fdisk.
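Gdisk works much like fdisk interactively, and sgdisk from the same package can script it; a sketch ( the device and sizes are examples only ):

```shell
# create a small /boot and give the rest to / on a GPT disk
sgdisk -n 1:0:+512M -t 1:8300 /dev/sdb
sgdisk -n 2:0:0     -t 2:8300 /dev/sdb
# print the resulting partition table to verify
gdisk -l /dev/sdb
```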

Installation

The ncurses-based installation mechanism continues to be simple and to the point. Support for GPT partition layouts makes using Slackware on large drives ( 2TB+ ) easy, and virtio modules mean that it is now easier to use Slackware as a kvm guest. While not an automated install function, it’s straightforward enough to enable kvm guest support after the install process ( thanks to help from Eric, Ponce and Rambuld on the LinuxQuestions forum ).

The installation procedure is:

  • boot and login
  • partition ( using fdisk/cfdisk/gdisk ) any disks that you need for installation
  • start the installation program – setup
  • answer the install questions ( see below )
  • install and reboot
  • install 3rd party package management tools ( like slapt-get and src2pkg )
  • install binary video drivers ( Nvidia / ATI-AMD ) if required
  • set default init level to 4 ( for desktops )
  • customise as required

Install questions:

  • setup swap disk
  • choose partition(s) for Slackware ( btrfs available )
  • choose source media ( NFS, FTP, HTTP and Samba shares supported )
  • choose package sets
  • choose install mode ( full is fine )
  • create a usb boot stick
  • choose frame buffer console resolution
  • select the lilo boot destination
  • choose mouse options
  • setup networking addressing
  • select startup services
  • set timezone
  • choose windows manager
  • set a root password

Once you’ve finished the install, exit to the prompt and reboot your machine. Done!

Installing as a kvm guest with virtio requires some additional steps – if setting up kvm guest ability straight after quitting the installer:

add the following to /mnt/etc/lilo.conf

disk=/dev/vda bios=0x80 max-partitions=7

Then:

  • make sure the root filesystem is mounted on /mnt ( eg. mount /dev/vda2 /mnt )
  • mount -t proc proc /mnt/proc
  • chroot /mnt
  • /sbin/lilo /etc/lilo.conf
  • reboot your VM

One of the best advantages of Slackware over other distributions is the ability to set up a RAID 1 mirror for the boot disk, something that is a little convoluted to do on other distros ( and not doable at all using their installer GUIs ). With a fairly current kernel, Slackware has good support for most hardware out-of-the-box. USB and PXE can be used for installation – I’ve used the PXE method extensively and it makes for a very quick network install option for Slackware. Eric Hameleers ( AlienBob ) also maintains a full USB-only install method. Another bonus in 13.37 specifically is that Eric has added a PXE install server directly to the DVD boot image – I’ve not tried this yet, but it should make setup of remote Slackware a doddle.

Core

Slackware 13.37 includes the 2.6.37.6 kernel ( 2.6.38.4 in /testing ) as standard with 2 kernel options available – huge and generic. Huge of course supports every piece of hardware available in the Linux kernel while generic is a trimmed down kernel for use with an initrd. The mkinitrd_command_generator.sh script ( in /usr/share/mkinitrd ) is used to generate an initrd post-install ( those with systems using the aic7xxx module, will need to specify this manually as part of the -m option to mkinitrd ).
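For example, to get a suggested mkinitrd command line for this release’s generic kernel ( add -m aic7xxx to the resulting command if your system needs that module ):

```shell
# emit a suggested mkinitrd invocation based on the running system
/usr/share/mkinitrd/mkinitrd_command_generator.sh -k 2.6.37.6
```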

The 2.6.38 kernel includes a big addition in terms of performance, otherwise known as the 200-line patch – it automatically creates task groups per TTY in an effort to improve desktop interactivity under system strain. It’s a pity 2.6.38 is not the default kernel; however, it is included in /testing for those who’d like to try it.

Update:

Considering the recent issues in 2.6.38.x with high power usage, it’s just as well that Pat decided to stick with 2.6.37.

Slackware includes not only kernel support for all current peripheral types but also distro support – so your pcmcia, firewire and other equipment should just work. Udev dynamically manages the setup of equipment as they are connected ( and disconnected ) to/from the system.

Networking

Slackware, like all Linuxes, supports pretty much every form of networking available. IPv6 is available out of the box, although you will need to obtain the Router Advertising Daemon ( radvd ) for practical purposes. Someone has created a radvd package and init scripts for Slackware 13.0 – I’ll be looking at adapting the init scripts for 13.37 shortly.

Network tools are 2-a-penny: netcat, tcpdump, iptraf, netwatch and nmap amongst others are available. Network services include NFS, DHCP, Bluetooth, dnsmasq for dhcp/dns/tftp/bootp, bind for dns and wireless support.

Security, as always, is a high priority in Slackware: the base install uses mostly sane defaults, and one can always tinker with these should you require something tighter. Network security is fully catered for in the form of OpenSSL, OpenSSH, OpenVPN and GnuPG.

General Package information

Slackware has stayed fairly constant in size since moving to DVD format a few years ago. That doesn’t mean however, that it’s limited on the apps side.  The full ( current ) LAMP stack is provided along with dhcpd, dns and server apps. CLI purists have access to a number of terminal apps like mutt, tin and lynx while GUI versions of basic internet apps are available in the form of firefox, thunderbird, akregator, ktorrent, kopete, etc.

Multimedia support is well served in the base release ( with mp3 and ogg/vorbis support, libraries and players ) however, if you want the full shebang, head on over to AlienBob’s alternative tree for a range of apps including the ubiquitous media player VLC, brilliant video tool avidemux and swiss-army knife of media, ffmpeg.

While a few browser-based plugins are provided, you’ll want to look towards a 3rd party repository like SlackBuilds.org for others. A slackbuild script is included in /extra for Flash, and you can download Acrobat Reader from Adobe directly – the installation is straightforward.

Graphical environment

KDE in Slackware is at the 4.5.5 release and provides for a very stable and fast graphical environment. It is stock so don’t expect any bling, however, that plays to both the Slackware ethos and my own preferences. Compositing/acceleration works well with both ATI-AMD and NVidia binary drivers, while additional work has been done to make KDE work well with the ever-insufferable Intel drivers.

Eric Hameleers maintains a 4.6.x release for those who would like the latest KDE release, and all dependencies are catered for in his tree. As well, the requirement for HAL is now completely removed.

For those looking to use the Gnome environment, the GSB project has a -current release of Gnome 3.0 and it may yet be ready for the release of 13.37.

Alternatively, fluxbox, fvwm, windowmaker and xfce are available.

Multilib

Slackware continues to be available in 32- and 64-bit editions. The 64-bit edition does not include 32-bit compatibility, however Eric Hameleers has a multilib setup to enable this configuration. Eric’s instructions are precise so you shouldn’t have any issues installing these packages. The result is a system that can compile/execute 32-bit applications which is fairly useful when there is no 64-bit version of an app ( think Skype ).

Update:

Skype 2.2.0.25 was released earlier this month ( including 64-bit ), but it crashes after logging in and I’ve not had the time ( or patience ) to look for solutions.

Development

As always,  Slackware makes for a great development environment. Versioning tools are abundant while scripted programming languages like Python and Perl are at current release levels. QtDesigner and KDevelop are available for graphical development.

Packaging

Slackware uses a simple packaging method ( pkgtools ) which makes installing applications a doddle. However, in true Slackware fashion, dependencies are not handled automatically. This is by design ( and choice ) … while some may complain about the lack of dependency checking, I’ve never had an issue with it, although one does need to keep track of the applications you install and their requirements.

slackpkg is bundled with Slackware and provides a wrapper for the OS packaging tools by automating the download and installation of distribution apps and updates. sbopkg does a similar job, but in this case, for additional 3rd party applications from the slackbuilds.org site.
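The day-to-day pkgtools and sbopkg workflow is as simple as ( the package names below are examples ):

```shell
# native package tools
installpkg foo-1.0-x86_64-1.txz     # install a local package
upgradepkg foo-1.1-x86_64-1.txz     # upgrade an installed package in place
removepkg foo                       # uninstall
# build and install a 3rd party app from slackbuilds.org
sbopkg -i foo
```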

The Slackware ecosystem

Pat Volkerding has been the prime developer of Slackware since the very beginning in 1993. In recent years however, a number of people have been helping Pat in bringing the latest versions of Slackware to fruition. Whether it’s assistance with patches to the core system during the development phase or 3rd party application repositories, this help enriches the Slackware experience.

Support is a community function in Slackware – the LinuxQuestions forum is vibrant and busy, with a touch of the Wild West. IRC support is available on #slackware. The largest 3rd party app repository is maintained at SlackBuilds.org, which uses the slackbuild method for building packages that is native to the distribution itself. sbopkg is an automated app browser and installer, building directly off the SlackBuilds.org site.

Eric Hameleers maintains his own impressive repository while there are many others that provide packaging options. As always, security and stability remain  fundamental to Slackware – even releases as far back as SW 8 are kept current with security updates.

Conclusion

If you’re looking to do more than just use Linux, then Slackware fits the bill. And for rock solid secure server performance, it can’t be beat. This release may be just an evolution of Slackware, but it still thrills …

In this day and age, fast-food lives seem to be the norm. The majority want everything easy and done for them – instead of getting down and dirty fixing that problem, you just pay someone to do it for you. There’s something to be said for ease of use, but there’s a point beyond which we start dumbing ourselves down. Slackware provides the alternative option, the tinkering option, the thinking option.

Slackware is not necessarily for the masses or those wanting a quick fix. But therein lies the beauty of something which can challenge you rather than deliver on a silver plate.

The BSA – FUD, FUD, FUD

It’s well known that the BSA has been an industry mouthpiece and lapdog for commercial software vendors since its inception in the 90s. However, the level of FUD ( fear, uncertainty and doubt ) that now pervades its press releases and comments threatens to dispatch any remaining sense of respect for the BSA to the proverbial computer cemetery in the sky.

Just this week, the following was spewed:

“BSA strongly supports open standards as a driver of interoperability; but we are deeply concerned that by seeking to define openness in a way which requires industry to give up its intellectual property, the UK government’s new policy will inadvertently reduce choice, hinder innovation and increase the costs of e-government,” said the lobbying group, which represents many proprietary software groups.

This was the result of the British Government’s recently expanded and clarified stance on open standards ( note that open standards and open source are not the same thing, an open standard does not need to be free as in code availability ).

But the BSA spouts nonsense like ‘requires industry to give up its IP’.

First, why not tell us the truth and say commercial software industry? Ah yes, never say more than you need to …

Second, open standards do not require whoever provides them to relinquish their IP – it simply means that you shouldn’t charge for the open standard or assert legal rights against anyone using it.

Then we have a real zinger like ‘reduce choice, hinder innovation and increase the cost of …’

But nowhere does the BSA clarify or substantiate these comments. How exactly do open standards reduce choice? Surely if you’ve lowered the entry barriers to both creation and adoption of open standards, you incite further and varied development, leading to greater choice. For example, instead of having one commercial company flogging their proprietary wares at hugely inflated prices, you now have that company and a number of others providing high-quality options at low or perhaps no cost.

Hinder innovation? Surely having an open market helps innovation – this is the fundamental tenet of the free market society. And increase costs? Hmm not sure how that’s supposed to work.

The plain truth of the matter is that the BSA ( and their associated clients ) is worried that the introduction of open standards ( and open source ) will lead to a reduction in the usage of their products, leading to loss of profits. So instead of competing on technical merit, they reduce themselves to a laughing stock by spouting rubbish.

How does an organisation like the BSA expect anyone to take them seriously with nonsense like this? Well keep on talking BSA, because your credibility is just about gone.

Poor reporting from BCS/ITnow

I recently bumped into an article written by Steve Smith, MD of IT security firm Pentura. After reading only the first paragraph, I had already come to the conclusion that Mr. Smith is either clueless or purposely disseminating falsehoods about OSS security. The rest of the article is an abomination, peppered with inaccuracies and complete rubbish. The real kicker is that this article was hosted by the British Chartered Institute for IT (BCS) ITnow magazine. It’s quite strange for a supposedly decent industry body to associate themselves with such trash, but looking through the comments from BCS members, it’s quite apparent that the BCS is no longer the body it used to be.

Let’s first take a look at BCS’ about statement:

BCS, The Chartered Institute for IT, promotes wider social and economic progress through the advancement of information technology science and practice.

How does one ‘promote wider social and economic progress’ by hosting ineffectual articles like this one? Surely incorrectly disparaging OSS security ( as Steve appears to have done – I actually still don’t understand the point of this article ) does nothing to further the BCS’ agenda. Unless there is an ulterior motive here. It’s well known that FOSS software is a driver for economic and social progress – just look at its use in 3rd world countries and the benefits it brings to those areas of the globe. Does Steve really think that Rwanda, for example, can afford Microsoft’s software? And if they can’t, should they just forgo the ability to take part in the wider global Internet and computing culture? Of course not; FOSS gives everyone an equal footing! Anyone can, using FOSS, do anything others do with proprietary software. And oftentimes more.

These FOSS users don’t have to spend a fortune on 3rd party software to try to secure their systems from security-poor proprietary products, nor are they at the mercy of those vendors’ belated security patches that don’t even address all the issues on the platform.

Second, let’s take a quick look at some of Steve’s statements:

Experts do not agree about open source security in terms of whether there is an advantage or disadvantage to its use in the business world.

Er, yes they do, Steve; any security expert worth their salt knows that OSS has the lead over proprietary software in terms of security – have you not read the code-quality reports coming from Coverity and others?

By its very nature, open source applications expose the source code used to write programs to examination by everyone, both attackers and defenders. Experts argue that keeping the source code closed provides an additional layer of security through obscurity.

They do? Where are these experts that you’ve consulted Steve? Come on Steve, the security by obscurity view was debunked and floored years ago already.

Although Microsoft has become very efficient and transparent with their security vulnerabilities, this still leaves a window of opportunity for anyone who has discovered a security flaw prior to a patch being issued to exploit the vulnerability.

That’s a joke or sarcasm, I presume? Do you call it efficient when Windows users wait months, and sometimes years, for patches to security issues? Is Microsoft being transparent when they don’t respond to notifications of security issues in their software?

And so it goes on …

What’s quite interesting is that Luke Leighton’s critical ( yet entirely valid ) response in the comments section was watered down to a serious degree – BCS, are we no longer adults that we can decide for ourselves? What’s with the censorship? BCS says in response to the editing:

BCS is absolutely against censorship, but as a professional organisation we have a responsibility to remove expletives, profanity and any comment which could potentially be construed as libellous from our site.

What? Huh? You’re not serious …

Luke’s complete response is available on the advogato site. I leave you to make up your own mind but I’m sure you’ll come to many of Luke’s conclusions. And mine. Steve, are you in the employ of Microsoft? Or are you just plain ignorant about OSS security?

The Slackware 13.1 Interview

Slackware releases are like a big shiny new birthday present for me ( in fact mine’s just around the corner, hint hint ) even though I follow -current mostly. It means that the distro is at a point where new packages have been added, others upgraded and bugs worked out. And Patrick, and the rest of the dev gang, are happy …

What’s new?

One view of Slackware in the past has been that it’s reliable and stable, but that older versions of packages are used in a bid to combat instability in newer releases. While that may have been true once, I’ve found that since 13.0 especially, Slackware has shipped with more up-to-date packages while retaining its stability.

The big upgrade in this new release is the move from the KDE 4.2 series to 4.4 with PolicyKit ( thanks to Eric Hameleers for all his KDE dev releases ). While some may not notice, there are performance, feature and stability improvements in this upgrade that make this ‘new’ desktop environment even better than before. Read H Online’s 4.4 review for more information.

The kernel is now at a 2.6.33.4 level, gcc at 4.4.4 and glibc at 2.11.1. X is updated to 1.7.x along with all the accompanying drivers, thanks to Robby Workman.

All the security apps have been updated with OpenVPN now being included by default and the LAMP stack is at Apache 2.2.15, PHP 5.2.13 and Mysql 5.1.46. The Cups package has had usblp support added back in, something that a number of people were looking for. Related, the hplip package has ijs support again. Firefox, Thunderbird and Seamonkey are at their latest releases – I was hoping Thunderbird 3.1 would be released before Slackware 13.1 but no matter as I’m sure it will be in -current soon.

In other news, the libv4l package has been replaced by v4l-utils, bittornado and emacspeak are in /extra, /dev/sr0 is now searched for install media before the old IDE devices, JDK and JRE are at Update 20, Amarok 2.3.0.90 is in /testing and firmware for a number of wireless devices has been added or upgraded.

In general

I won’t rehash information from my previous Interviews except to say that the stability, robustness and performance carry on from previous releases. Slackware continues to perform well in both server and desktop roles with support for most current desktop technologies ( bluetooth, wireless, etc. ) and server-side services and development. Don’t expect to find the customisations you’ll find in other mainstream distributions – Slackware is raw and to the point.

The ncurses-based installer hasn’t changed ( much ) while the newer txz package format gets support from 3rd-party solutions like Gilbert Ashley’s excellent src2pkg tool. The GSB project is tracking 13.1 closely ( according to their website ) and hopefully we can expect a release of this Gnome add-on to Slackware soon. In the meantime, a dev version of 2.28 is available from them. The Dropline Gnome and Gware projects appear to have stagnated as there is no recent news from either.

Many of the audio-video libraries I would have previously compiled manually are now included by default which makes Slackware very capable from a desktop audio point of view. These include libraw1394, libmsn, libdiscid, mpg123, libmtp, loudmouth, fftw, liblastfm and others.

64-bit support, introduced in Slackware 13.0, continues to mature in 13.1 and Eric Hameleers is tracking the mainline development effort with his MultiLib libraries very closely ( even when on holiday! ), allowing one to run a combined 32- and 64-bit system without too much extra effort.

Those who mirror the -current tree can go to sligdo for a method of generating ISOs.

Conclusion

Having built a number of 64-bit 13.0 servers in recent months, I can confirm that no issues have cropped up with the addition of the 64-bit version and these machines are running beautifully. My own 64-bit MultiLib desktop remains a pleasure to use even on slightly older hardware ( yes, time to upgrade ). Slackware, on the whole, retains a vibrant end-user community with many blogs, websites and forums dedicated to this venerable distribution. One of the busiest places on the ‘net is the LinuxQuestions forum where many Slackers hang out.

13.1 continues the Slackware tradition of a simple, no-frills, reliable distribution for those wanting a rock-solid Linux implementation, and also those wanting to learn the ins and outs of Linux. Thanks to Pat and everyone else involved in this wonderful project – support them through the Slackware store if possible so we can continue getting our Slackware fix.

VP8 vs H264

Apparently the MPEG-LA forum, which manages a pool of patents relating to H.264, thinks that any implementation of video will be encompassed by one or more patents from its patent pool. Not only does this reek of megalomania, but it also shows just how far downhill the US patent system has gone. It also shows how monopolistic the MPEG-LA forum is.

Nero has come out fighting, detailing a host of issues with MPEG-LA and its practices. Google has released the VP8 codec as an open-source, royalty-free competitor to H.264 ( as part of the new WebM initiative ) and if it gains traction in its YouTube system, then many may flock to Google’s banner. Firefox and Opera have support for WebM in their latest test browsers, and Microsoft has indicated they will support playback in IE 9 if suitable software is installed on the machine.

The question to ask is just how real MPEG-LA’s threats against VP8 are. I think we’ll see a battle royale in the next few months, with the money-printing MPEG-LA trying to hold on to their little corner of gold.

Slackware64 Multilib and GSB

I’ve had a few queries on setting up Slackware64 Multilib as well as GSB with -current. It’s not difficult at all but just requires one to follow a strict set of steps.

Multilib

Eric Hameleers ( Alien ) has the definitive write-up on Multilib on his site; however, I’ll provide a short synopsis here for the impatient.

The first thing to note is that you need the set of Multilib-enabled gcc and glibc packages available on Eric’s site. These need to match the version of Slackware you are running, so make sure you get the correct packages:

Slackware64 13

Slackware64 -current

Once downloaded, install using:

upgradepkg --reinstall --install-new *.t?z

Now we need to create 32-bit compat packages from an existing 32-bit installation tree. Have a 13.0 or -current tree available, make a directory somewhere for this purpose and:

massconvert32.sh -i /path/to/slackware-13.0/slackware/

Once this is done, install using:

installpkg *-compat32/*.t?z

Add glibc and gcc as exclusions to your package manager to make sure the Multilib-enabled versions do not get overwritten. Upgrades are done the same way, except that the compat packages, once generated, are installed using:
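If slackpkg is your package manager, the exclusion amounts to one line per package in /etc/slackpkg/blacklist. A minimal sketch – the helper function and the parameterised path are my own illustration, not part of Eric’s docs:

```shell
# Hedged sketch: append glibc and gcc to slackpkg's blacklist so the
# Multilib-enabled versions are never replaced during an upgrade.
# The file path is a parameter purely for illustration; the real file
# on a Slackware system is /etc/slackpkg/blacklist.
blacklist_multilib() {
    file="$1"
    for pkg in glibc gcc; do
        # only append if the exact name isn't already listed (idempotent)
        grep -qx "$pkg" "$file" 2>/dev/null || echo "$pkg" >> "$file"
    done
}

# blacklist_multilib /etc/slackpkg/blacklist   # run as root on a real system
```

Running it twice adds nothing new, so it’s safe to re-run after each release upgrade.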

upgradepkg *-compat32/*.t?z
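Once everything is in place, you can check which compat packages actually made it onto the system by scanning Slackware’s package database. A small sketch – the function name is my own invention; /var/log/packages is where installpkg records each installed package:

```shell
# Hedged sketch: list the installed -compat32 packages by scanning the
# Slackware package database. The directory is a parameter so the
# function can be pointed elsewhere; it defaults to /var/log/packages,
# where installpkg drops one entry per installed package.
list_compat32() {
    ls "${1:-/var/log/packages}" 2>/dev/null | grep -e '-compat32-' | sort
}

list_compat32   # on a Multilib system, prints one entry per compat package
```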

GSB

GSB is almost tricky, but not quite. There are no packages for -current at the moment; however, the last 64-bit GSB release ( 2.28.1 ) works fine on Slackware64 -current. You may also need to grab some of the 32-bit packages from that version – YMMV. The GSB site lists information relating to 2.28 and 2.30 for Slackware 13.1, so keep checking the site for updates.