Rolling Stones song

I see an iMac and I want it painted black,
No colours anymore I want them to turn black.
It's not a real computer, you have to face the facts.
When you're on a Macintosh, your soul has turned to black.

PHP and the isset() function

It has been common to use the isset() function to determine whether a variable in PHP has a value, as opposed to checking that its contents are not NULL or the empty string: "". I had used this facility thoroughly in the CRIMP codebase (available on SourceForge under the same name) and was wondering why a certain scenario wasn't being handled as I expected. Now, I knew I could work around the issue, but I needed to find out what the actual cause was.

As it turns out, because I was using PHP5's class infrastructure and pre-declaring variables class-wide, the isset() function was always returning true. This meant that my checks for the existence of an object within the $crimp->_debug variable were always telling me that the object existed when, in fact, the code hadn't got as far as initialising the debug routines yet. The same was true of my checks on the $crimp->_config variable, which houses the configuration array.

This latter issue, the _config variable, meant that if the config.xml file was not present or was unreadable then CRIMP would return a blank HTML document to the browser. The reason was that CRIMP was detecting the failure to load the configuration file and was trying to parse and send an error document to explain the fact. However, because the _debug and _config objects hadn't been initialised yet, some of the calls made in the error-page parsing routine were against objects that didn't exist.

The solution was to replace the isset() calls in my routines with the more appropriate if ($this->_config != ""). It seems to me that many newbie errors could be eliminated by explaining that isset() only checks that a variable exists (and is not NULL), and that it should always be backed up by checking the variable's contents as well. isset() should never be used on its own; all it is really useful for is avoiding PHP notices about missing variables.
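A minimal sketch of the pitfall (the class and property defaults here are illustrative stand-ins, not the actual CRIMP code):

```php
<?php
class Crimp
{
    // Pre-declared class-wide with a default, PHP5-style.
    public $_debug  = '';
    public $_config = '';
}

$crimp = new Crimp();

// The property exists (and is not NULL), so isset() reports true
// even though no configuration has been loaded yet.
var_dump(isset($crimp->_config)); // bool(true)

// Checking the contents gives the answer we actually want.
var_dump($crimp->_config != ''); // bool(false)

// Only after real initialisation does the content check pass.
$crimp->_config = array('site' => 'example');
var_dump($crimp->_config != ''); // bool(true)
```

Note that if the property had been declared with no default (so it starts as NULL), isset() would have returned false; it is the empty-string default that makes isset() useless here.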

So, my advice to newbies in the field of PHP programming is:

Don’t assume that because a variable is defined (and isset() returns true) that it has the value you are expecting. In other words, check, check and, when you’re sure the variable contains what you expect, check again.

A simple check for the correct value doesn't eat many computer cycles, so it is better to put checks in place and use up a couple of milliseconds than to tear your hair out for days trying to work out why it's not working.

Daniel Llewellyn


UPDATE: see my new post here:


My mac mini arrived just two weeks ago, and I've been impressed no end by the OSX operating system ever since. So much so that, now that I'm sending the mini back for an upgrade under the 14-day remorse period (they upgraded the mac mini line on Tuesday to a much more advanced unit) and am to be without my beloved OSX, I decided to have another go at getting it installed on my beige-box PC.

So, here’s the specs on the machine:

MSI 975X Platinum Power-up Edition Mobo, 2GB RAM, Core 2 Duo E6600 2.4GHz, 250GB HDD, DVD Burner, and nVidia graphics.

The DVD wouldn’t boot directly, and needed to be tickled with a little jiggery-pokery. First, I needed to move the DVD drive onto the primary IDE channel with the HDD so that the BIOS would recognise that it existed without using the JMicron second channel, which is unsupported by OSX at this time. Second, I needed to boot the DVD with the -v option to get past the “/ not found” error (I think that was the error, anyway).

Now that I was into the DVD’s installer, I ran Disk Utility to erase my hard drive, which was a cinch. On to the install phase, where I selected the optional Titan nVidia drivers for my GeForce 7600GS. Once I was in the installed operating system after the reboot, I downloaded ALC888Audio.mpkg from the InsanelyMac forums and ran that. After another reboot I found that I had fully operational audio and graphics, with Quartz Extreme and Core Image support.

EDIT: Now, don’t assume that I’m using this machine in favour of a real Mac. Quite the contrary: I intend to get either a Mac Mini (to replace the one I’m sending back – see above re: the remorse period), a more expensive MacBook, or an even more expensive iMac, which would replace this desktop unit completely. So, either way, I will be buying a full-on, pukka, bona-fide Mac.

EDIT2: I bought a 24-inch iMac.


Mac Mini

So, I finally joined the “dark side”, and had my shiny new Mac Mini arrive via DHL this afternoon. It’s a funky lil machine, and much quieter than my desktop PC with its many cooling fans and über-powerful nVidia GeForce 8800GTS graphics card.

The mini was very simple to set up and I was up-and-running within 15 minutes. The longest time was waiting for the Mac to configure itself and update the software after the first boot.

When I was logged in, my first task was to find a suitable IRC client. I believe Shaun in #computers on IrCQNet uses Colloquy, so I downloaded that and gave it a shot. After about 20 minutes of using it, however, I decided that it was too dissimilar to my beloved XChat. A quick Google later, and I found the home-page for XChat Aqua, a version of XChat designed specifically for the Mac.

My mini now sits underneath my 17-inch 4:3 LCD display, though the display is not physically sitting on the mini, as the manuals say that is a bad idea (it may interfere with the operation of the SuperDrive).

I’ve also upgraded Safari to the latest beta release, and am quite impressed with its speed and accuracy in rendering. Maybe people are taking note of the KHTML renderer now that Apple have adopted it. That said, I don’t think sites that render well in Safari will necessarily render the same in Konqueror, as there may be slight bugs in Apple’s version of the KHTML engine where they’ve tweaked it, or JavaScript fixes that Apple have applied that haven’t made their way into Konqueror yet. For example, Gmail works fine in Safari, but I’ve noted that it doesn’t work so well in Konqueror, and Google marks it as unsupported in that browser.


The END of OpenMosix?

Are these the final stages of the OpenMosix clustering addition to the Linux kernel? Moshe Bar, the leader of the openMosix project, has announced that he wishes to leave the project and shut down development. The first I heard of this was a curious question on the oM users mailing list; it has since been confirmed in a mailing from Moshe himself. Florian Delizy has since said that he will continue his own development, as will a few others, creating a fork of the OpenMosix code if needed.



I’ve been working increasingly on Windows Vista, as my desktop system’s Linux support is very minimal at the moment. The only Linux distro I’ve managed to get running on it so far is the *buntu series (7.04), but the Ubuntu binary nVidia packages are missing libwfb (or some such), which is required for the GeForce 8800 series of nVidia graphics cards to operate. This means that I’m reduced to using the desktop with the rather unimpressively performing ‘nv’ driver that comes with the Xorg X server.

I’ve been very impressed with how Windows Vista stays out of my way when I’m working, and it also allows me to run Visual Studio 2005, for which I have a pukka licence. With VS2k5, however, I’ve thus far been unable to develop any PHP applications, such as Deadpan110’s CRIMP, of which I’ve become quite an active developer. Enter “Phalanger”, a fully .NET- and Mono-compatible (though Windows-only :-() PHP compiler.

Phalanger will take any PHP code and compile it to run natively on the .NET or Mono runtime environments. This not only opens up deployment to hosts that refuse to run PHP but will consider a .NET-compatible system; it also yields a speed increase in execution times (from request sent to result received), as most of the compilation has already been done, and the .NET JIT just has to finish the compile to native code and run it.

Now, this doesn’t have anything to do directly with VS2k5, but available on the Phalanger web-site is a Visual Studio plug-in that enables the IDE to recognise PHP scripts and allows PHP code-behind files for ‘.aspx’ pages. I’ve not noticed any code-completion support as yet, but I’m hoping that this is being worked on for future releases of the plug-in, as syntax highlighting is already supported.

With this Phalanger product, I’ve taken the CRIMP code and, with only minor tweaks to allow for IIS’s inability to support the URL rewriting that Apache is so good at, made it fully compatible with Phalanger/ASP.NET/IIS on the Windows platform. I am now running my main development web-site on CRIMP using a “Windows Server 2003”-based appliance. All this is good news for CRIMP, in that we can now fully support Windows/IIS-based systems whether they’re running Phalanger or the original PHP ISAPI plug-in/CGI binary.


Simple Active Directory Interface

As a follow-up to my recent posting about Samba, Virtualmin and Windows interoperability, I found, on the MSDN site, some documentation on how to use the LDAP protocol to create entries in the Active Directory database. I shall look this over and see whether the information can be used to add entries using the “LDAP Users and Groups” module of Virtualmin/Webmin. First, though, I need to reinstall my Windows Server 2003 R2.


Samba’s Evolution & Virtualmin/Postfix

A recent interview (25/Mar/07) with Samba release manager Jerry Carter reveals some info on the upcoming 3.0.25 release, which is currently in the RC phase of production (release candidate: code that is feature-complete and packaged as a final release, to iron out any last bugs before the gold edition).

The results of the current round of testing prompted the Samba team to make Linux machines support disconnected log-on capability, just like Windows, Carter said. “So, for example, you can join a Linux host to a Windows domain, unplug it and go on the road and still be able to log on using your domain user account,” he said.

Active Directory domain support has been extensively evolved, so that now a Linux machine will connect to the closest Domain Controller, for the site it finds itself in, instead of connecting to a possibly remote server over an expensive link.

IDMap now supports the RFC-2307 LDAP extension that is applied to an Active Directory domain by installing the Microsoft “Services For UNIX” package on a Domain Controller.
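As a sketch of how that might look in practice (the realm, workgroup and domain names here are placeholders, and the exact option names should be checked against the 3.0.25 documentation when it ships), the relevant smb.conf settings would be along these lines:

```ini
[global]
   security = ads
   realm = EXAMPLE.LOCAL
   workgroup = EXAMPLE
   # Pull UIDs/GIDs from the RFC-2307 attributes that the
   # "Services For UNIX" schema extension stores in AD,
   # instead of allocating them locally.
   idmap backend = ad
   winbind nss info = rfc2307
   winbind use default domain = yes
```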

Samba will be able to better leverage information contained within Active Directory. “If the Samba host is joined to an Active Directory domain supporting UNIX schema attributes — like RFC-2307 or the SFU schema — winbind could retrieve that information from AD while mapping domain users and groups in a trusted Samba domain using the underlying Name Service Switch interface,” Carter wrote in an e-mail.

While I’m not Microsoft’s biggest fan, I do appreciate that Linux and UNIX must operate in a Microsoft dominated world. The Samba project brings interoperability with windows to our favourite operating systems. This can only be a good thing for everyone involved, from the Windows Wannabes™ to the Linux masses to the [insert UNIX OS here] die-hards.

I also read recently, in my monthly Linux Format, an interview that alluded to the possibility of a Samba-based replacement for the Name Service Switch, which is the fundamental mainstay of Linux/UNIX user-name/UID and group-name/GID mapping (plus a few other things). What this would mean to the Linux world (Linux, ‘cos that’s the OS I’m personally interested in) is that UIDs and user-names would all be accounted for by the Samba back-end instead of the standard shadow password suite.

Now this doesn’t mean that shadow files would be obsolete, it just means that Samba would take overall control in assigning UIDs allowing for a site-wide or even enterprise-wide unique identifier for each user, instead of a different UID for each separate machine that the user wants to use.

Another great thing about WinBind (or whatever the Samba team call the technology when released) taking control of the NSS is that no matter what your source of authentication data (be it flat file, LDAP, Samba 4.0+’s AD implementation, or a true Active Directory), the system will always view it in the same way. It will also have the ability to search for all users beginning with the letter “a” etc.

This ability to search user data is essential for enterprises with thousands, if not millions, of users globally. The standard way to search user data in UNIX-land is to read every entry in the database (or flat file) one at a time until you reach the end, finally returning the results. This piecemeal approach is fine for single systems where there may be ten or so users, but it doesn’t scale well for the enterprise at all.
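To illustrate the linear scan described above, here is a sketch in PHP using an inline passwd-format sample (the user names are made up) rather than the real /etc/passwd:

```php
<?php
// A passwd-format sample standing in for the real user database.
$passwd = <<<EOT
root:x:0:0:root:/root:/bin/bash
alice:x:1000:1000:Alice:/home/alice:/bin/bash
bob:x:1001:1001:Bob:/home/bob:/bin/bash
anna:x:1002:1002:Anna:/home/anna:/bin/bash
EOT;

// The traditional UNIX approach: walk every entry one at a time,
// keeping only the users whose name begins with "a".
$matches = array();
foreach (explode("\n", $passwd) as $line) {
    $fields = explode(':', $line);
    if (strpos($fields[0], 'a') === 0) {
        $matches[] = $fields[0];
    }
}

print_r($matches); // alice, anna
```

With four entries this is instant; with a million entries every such query still touches every record, which is exactly why a directory back-end with real search support is attractive.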

All this interoperability leads me to the conclusion that the unices are bad at authentication (unless you go LDAP, which isn’t compatible with Windows at the moment), and Windows has the edge in this arena. However, the unices are far superior in other areas, such as multi-user virtual-hosting environments like the one set up by the Virtualmin system that I use.

Virtualmin has the ability to store user data in an LDAP directory; however, this does not include MS Active Directory systems. My ideal setup is to have Virtualmin handle adding users to an Active Directory system for the virtual-hosting environment to use, while also having the management facilities of the Windows-based management tools for the directory.

I suppose this would require someone to write a Virtualmin plug-in, or evolve the current LDAP Users and Groups plug-in, to support Microsoft Active Directory for adding and deleting users and groups, complete with the UNIX attributes afforded by the Services For UNIX add-on to Windows Server System 2000+.

In this manner winbind will automatically map the correct UIDs/GIDs to the correct users, direct from the Active Directory database. All my systems will be able to use centralised single sign-on across Windows, UNIX and Linux hosts. This will mean much reduced administration on my part, while allowing for more flexibility.

The question, now, is whether Postfix supports Active Directory. Hmm. Postfix is the only facility on my system that I have doubts as to whether it will work with a single-sign-on system based on Active Directory. Postfix itself has support for LDAP maps, but does this extend to the Active Directory system? Thinking on this, I wonder if Postfix would be able to authenticate against the WinBind data that the up-coming Samba system will provide. So, update to the question:

Does Postfix query against the NSS system or utilise the passwd/shadow files directly?

Well, I can answer the second part straight away: Postfix cannot use the passwd/shadow files directly, as it usually runs in a chrooted environment. This suggests that the only way to query the user data is through the system calls that return it. I am guessing that the standard system calls would return information based on the NSS data, which would be pointing to the WinBind system, allowing for off-line access.

That last point is important as, in the past, Windows has earned a reputation for being unreliable and needing lots of reboots. If the Windows system were to go down at all, be it a crash or a simple reboot, all account information would be unavailable, leaving Postfix stuck with the question of “do I accept all mail and bounce later, or refuse to accept all mail?”. The result will most likely be the latter, meaning that whenever account info is unavailable, my e-mail system will also refuse to accept mail with (worst-case scenario) a 5xx error message.

SMTP errors in the 500-599 range are “hard” errors, so the server at the other end will assume that the e-mail has been wrongly addressed and will bounce it back to the originator or, worse, silently drop it completely. In an enterprise this lost mail is unacceptable. OK, I’m not an enterprise, but I like to work to enterprise standards. Also, the sending server may cache the 5xx error for the addressed user and immediately bounce all mail destined for that user on my system without first checking that the error has been resolved (as a 500-599 error is also a “permanent” error, so the sending server thinks to itself “don’t try that again”).

So, hopefully, I can implement the above Virtualmin based system when the Samba team release the next edition. The technical details I hope to have covered thoroughly in this posting so that I am completely aware of the pitfalls of putting this plan into action. I will post again when I have the system up and running or, when I give up because it’s too complicated for my little brain to comprehend ;-).
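As an aside on the LDAP-maps question above: since Active Directory speaks standard LDAP, a Postfix ldap lookup table can in principle point straight at a Domain Controller. A hedged sketch (the hostname, DNs, bind credentials and attribute choice are all placeholders for illustration) might look like this in /etc/postfix/ldap-aliases.cf:

```ini
# Look mail recipients up in Active Directory over plain LDAP.
server_host = dc1.example.local
search_base = dc=example,dc=local
version = 3
bind = yes
bind_dn = cn=postfix,cn=Users,dc=example,dc=local
bind_pw = secret
# Match on the Windows logon name; return the mailbox address.
query_filter = (&(objectClass=user)(sAMAccountName=%u))
result_attribute = mail
```

which would then be wired in via something like “virtual_alias_maps = ldap:/etc/postfix/ldap-aliases.cf” in main.cf. Note this only covers address lookups, not the NSS-based local-delivery question discussed above.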


Evil Car

I got this link off threenine in IrCQNet’s #computers channel. When I saw the car, I almost vomited. Yes, it truly is that vile. I still can’t look at the image, even knowing what it contains, without feeling queasy and needing to turn away. This car really is the epitome of bad design. As someone is quoted as saying: “All those mangled panels are apparently essential in gaining greater aerodynamic efficiency, presumably because the Weber’s appearance physically scares the air out of its path.”



Yesterday, here in the UK, we had local elections. Not all the results are in, but it appears that the Labour Party (whose web-site appears to be broken at the time of writing) has taken a complete bashing, with their share of the vote in England predicted to come out at 27%. This is only 1% higher than their all-time low of 26%.

While turnout was fairly good in Scotland and Wales, the English voters seemed just as apathetic as usual. It’s understandable, really: when I went out and voted at about 13:30 yesterday afternoon, I was quite surprised to note that my ward in fact had three candidates, one of whom I’d never even heard of. I really don’t understand how any of the candidates expected me to know whom to vote for, as the only campaigning I noticed was a leaflet through my door about three weeks ago from the Conservative Party, and another from the Labour Party a week before that. So, as you can imagine, finding out that the Liberal Democrat Party had a candidate in my area was quite a shock.

The campaigning in the whole of my local council’s borough was based around the proposed Manydown development. This is a few large fields to the west of Basingstoke that someone at some point thought was a good idea to develop. However, all the main parties have been saying that, if elected into power, they would bin the plans.

The residents of Winklebury, the closest part of Basingstoke to the Manydown site, have set up a web-site for the Save Manydown Campaign, and all the local election candidates picked up on the feeling that the Greenfield site should not be developed under the current plans. Personally, I don’t mind the Manydown development, as long as the kinks are ironed out of the plans before committing. The main detractions, so far, that have brought the process to a halt are: Traffic, Sewerage, Travel and public bus services, and Timing.

So, how did my local council fare in the elections? Well, the Conservatives retained control of the council with no changes in the number of seats each party holds. So, that’s a bit of a damp squib: absolutely no change in the power distribution whatsoever. Detailed information about these results can be found on the BBC Election 2007 web-site.