Open Source. Really?

I’ve been looking at possible cross-platform mobile development frameworks, and came across Appcelerator, a company whose product “Titanium” is ordinarily licensed under the Apache 2.0 license. However, should I ever need more functionality than the default product provides, Titanium+Plus modules such as the Urban Airship module are available. I almost downloaded the demo module, but stopped to check the Terms agreed to by ticking the box that leads through to the download; it is here that I became flabbergasted and appalled that a) nobody has mentioned this before, and b) it is even allowed to happen.

Note that Section 1.3(g) states:

[signatory shall not nor allow a third party to] (g) modify any open source version of Appcelerator’s software source code (“Original Code”) to develop a separately maintained source code program (the “Forked Software”) so that such modifications are not automatically integrated with the Original Code or so that the Forked Software has features not present in the Original Code;

And Section 5.4 states:

“Notwithstanding anything to the contrary contained in this Agreement, Sections 1.4 (“License Restrictions”) …” – I believe that is a mis-numbering of 1.3, judging by the textual title of the section – “… shall in all cases survive any expiration or termination of this Agreement.”

I read those clauses as a _permanent_ ban on _legitimate_ open source development based upon code previously released by Appcelerator under an open source license. I feel this is completely against the spirit of open source, and potentially a conflict between the two license grants (Apache 2.0 versus the Appcelerator Titanium+Plus Modules License) which would need testing in court to determine which takes precedence.

As I didn’t know who to contact about this flagrant betrayal of the term “Free”, I figured I would post this blog entry and email the Free Software Foundation for guidance.

I’m still waiting on a response from the FSF.

Update (19/01/2012): I failed to post this at the time, so I’m doing so now. I actually received a reply from Yoni Rabkin of the Free Software Foundation’s Licensing department on 2nd May (my original email was sent on 13 April) with the following:

> Note Section 1.3(g) states that:
> “[signatory shall not nor allow a third party to] (g) modify any open
> source version of Appcelerator’s software source code (“Original
> Code”) to develop a separately maintained source code program (the
> “Forked Software”) so that such modifications are not automatically
> integrated with the Original Code or so that the Forked Software has
> features not present in the Original Code;”

Indeed such a restriction renders the software non-free and therefore, of course, also GPL-incompatible.

The Apache 2.0 license is not a copyleft license and therefore permits use in proprietary software. This is one of the reasons the FSF recommends licensing your software under a strong copyleft license such as the GNU GPL. If there is copylefted software involved (with copyright holders who are not Appcelerator) then there would be a potential copyright violation to pursue, otherwise what we have is a case of misleading advertisement.

I would recommend contacting the Appcelerator people and explaining the problem; if they can make clear that they distribute both free software and proprietary software people will be able to choose to avoid the proprietary parts.

But please note that if you are into mobile development you will run into much worse problems once you attempt to actually release your software via these “app stores” as they actively support proprietary software (Google’s Android, for example) and some go as far as banning free software completely (Apple’s products, for example).

MisTagged Music

Over the years I’ve acquired a rather large music collection: about 100 GB, or some 17,000 files. Unfortunately a lot of these are badly labelled and categorised.

Music files such as MP3s can store so-called “metadata” (data about data) which describes the music held in the same file; this is the ID3 tagging system. When you rip a music CD you can look up the track numbers and names with a service such as CDDB and then write this information into each file.
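As a quick illustration, you can inspect the ID3 tags already embedded in a file from the command line (the filename here is hypothetical; id3v2 is one of several Linux taggers that can do this):

    id3v2 -l "01 - Some Track.mp3"
    # lists the ID3v1/ID3v2 frames, e.g. TIT2 (title), TPE1 (artist), TALB (album)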

This is useful, but sometimes you get incorrect or incomplete information back, or the data stored in your local copies gets corrupted over various backups and moves from place to place. That is where a service like TuneUp comes into play.

TuneUp is “Quite possibly the most important piece of software any music lover can buy.” –uncrate

This downloadable program for Windows and Mac scans the files you point it to for a digital fingerprint that allows it to uniquely identify the piece of music contained within. To achieve this, TuneUp will actually “listen” to your music rather than rely on metadata such as the id3 tags or filename.

I’ve used this tool to scan my own music collection and it reports that as much as 75% of my metadata is incorrect or missing. I am now slowly going through my entire collection with TuneUp again, this time in clean mode rather than scan mode, getting it to retag as much of my music as it can recognise. This is a tedious process, as it sometimes matches a completely different track from the one you’re relabelling, so it needs a bit of babysitting; but the time and energy saved is still considerable, especially for larger collections or ones that are wildly mislabelled.

FreeCh.at – The Free IRC Chat Network

I’ve not posted about this before, even though the network has been operational for a few months now. I am a founding member of a new Internet Relay Chat (IRC) network that allows anybody to talk with friends for free. The network comprises four geographically diverse servers, two of which are my own, and is run by Nicholas “Cubezero” Weightman, David “Wellard” Fullard, and myself (Daniel “Fremen” Llewellyn).

We operate using the open source InspIRCd software as the main controller, along with “services” provided by the Anope software. (Services let you keep control over your own name and rooms while you are not connected to the network.) We have tried as best we can to create a highly redundant system that allows any one server to go offline without adversely affecting the chat experience.
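As a sketch of how the services work in practice, registering your name with NickServ looks something like the following (the exact syntax varies between Anope versions, and the password and email here are placeholders):

    /msg NickServ REGISTER mypassword me@example.com

Once your name is registered, services will ask anyone using it to identify with that password.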

To connect to the network you can either use the Web-Based interface at or you can connect using an IRC program such as mIRC (http://www.mirc.com/) or XChat (http://www.xchat.org/). The details you need to use these programs are:

Server Name: irc.freech.at
Port: 6667
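In mIRC or XChat, for example, you can then connect from the command box with something like:

    /server irc.freech.at 6667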

The servers can also be accessed directly as uk1.freech.at, uk2.freech.at, irc.wnsi.co.uk or irc.invaliddomain.com. Wellard offers other services besides IRC at his site, www.invaliddomain.com, as does Cubezero at www.wnsi.co.uk.

Egypt, the Human Spirit & Freedom.

So, it’s late into the morning again, and I’m awake in what appears to be a progressive cycle. Ho hum.

random service update

At least I can be of some use; to that end I’ve been reorganising the FreeCh.at DNS servers to hopefully improve our “round robin” load-balancing, which sends each user to one of the four servers at random. I’ve also moved the FreeCh.at wiki onto the same servers which power the chat service, upgrading the software at the same time, so that the wiki stays live even when a single point in the “system” fails.
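For the curious, round-robin load balancing is simply several A records sharing one name, so resolvers rotate through them; a sketch of what the zone entries look like (the addresses are illustrative, not our real ones):

    irc.freech.at.    300    IN    A    198.51.100.1
    irc.freech.at.    300    IN    A    198.51.100.2
    irc.freech.at.    300    IN    A    198.51.100.3
    irc.freech.at.    300    IN    A    198.51.100.4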

more important: twitter
During these late nights and early mornings I also enjoy keeping up with American twitterers’ mumblings. The latest post from Google’s @mattcutts caught my eye; it alludes to using voice recognition and synthesis, via Google’s recent acquisition of SayNow, to help Egyptian protestors.

Sender Policy Framework

There has been some discussion on the FreeChat IRC Network about “Sender Policy Framework” (SPF), a system for reducing email SPAM. SPF builds on the worldwide network of servers that transforms a readable domain name (such as my thehoneymonster.com) into the numeric address which globally and uniquely identifies the server operating a given service (in this case my own blog and XYZindustries).

That network of servers is called the “Domain Name System” (DNS). DNS can publish arbitrary information attached to specific names such as thehoneymonster.com or www.thehoneymonster.com (the two are distinct entries). This facility for arbitrary related information (known as TXT records) is used by the SPF project to publish a special chunk of information detailing the computer addresses that are allowed to send email purporting to be from thehoneymonster.com (or whichever domain you implement the scheme upon).
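You can see any domain’s published SPF policy by querying its TXT records; for example:

    dig +short TXT thehoneymonster.com

Any record beginning “v=spf1” is the SPF policy.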

I have now implemented a set of SPF rules on all the domains I have access to, including a few client domains. If you find problems sending email from a domain which I administer, please let me know and I will try to find a solution that still minimises the possibility of spammers using your domain name for the “from” address of their spam. The main issue I can foresee is where a client is using a third-party webmail service, or sending email through an ISP’s mail server instead of via XYZi’s.

For those who have not delegated responsibility for their domain name(s) to me, the correct TXT record to insert for hosting on XYZi’s Pressflow/Drupal cluster is as follows:

Name: Not Specified - leave empty
Value: v=spf1 include:xyz-network.com -all
TTL: any value, default is fine

And for those hosting via IND-Web.com’s system, the following is correct:

Name: Not Specified - leave empty
Value: v=spf1 include:ind-web.com -all
TTL: any value, default is fine

Movement For Active Democracy

My Dad sent me this link to the Movement for Active Democracy website. Below is my response to my Dad explaining my own views.

I got to the fourth video and couldn’t will myself to go any further. What he says sounds idealistic, but implementing it would be somewhat harder than he seems to think. Also, I think I disagree with his premise that the public has no say as things stand at the moment.

For example, we have a hung parliament now because the public couldn’t agree on any one party enough to push them past the post (in our “archaic” system). So, if proportional representation were used to its fullest degree and the same set of votes were cast, I envisage that all three main parties would have had an equal standing, with a bunch of racist parties snapping at their heels.

PR is great for isolationist parties because they don’t need to win outright anywhere and yet still get a few people in as MPs; once they have a member in parliament they can tout how well they’ve performed standing up for the rights of the everyman and so gain more votes and more MPs, leading to a vicious downward spiral towards their having enough MPs to take us to 1984!

As it stands, First Past The Post is keeping the extreme-right parties under control.

And then there’s the Swiss idea he likes, which involves getting together a bunch of your mates and forcing referenda. In theory that sounds great, but if an organised group gets hold of enough people who toe the line regularly, they can consistently force expensive referenda on every piece of legislation conceived. This would stop government from functioning altogether, and the offending group could hold the country to ransom until they got their own way.

If that “way” is for them to be voted into power, then when the public get fed up with the constant lack of progress they might decide to vote for this group just to get something, anything, done! And that could lead to 1939-45 all over again!

I think through that thought process I’ve pretty much convinced myself that our current system is the best for us.

Linode and Corosync High Availability

I have recently been configuring multiple Virtual Private Servers, hosted at Linode.com’s UK data-centre, into a highly available (HA) web-hosting and IRC cluster. My design required shared storage using DRBD, which is akin to network-based RAID Level 1, or mirroring. To allow two servers to be operational simultaneously while maintaining data consistency, DRBD can run in a Primary/Primary configuration instead of the usual master-slave scenario, with the OCFS2 file-system layered on top of the DRBD device. The idea is that DRBD keeps the raw data in sync, while OCFS2 keeps file locks in sync between the two servers, preventing both from writing to the same file simultaneously.
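For reference, the dual-primary behaviour is switched on in the DRBD resource definition. A minimal sketch, with placeholder host names, devices and addresses standing in for my actual setup (DRBD 8.3-era syntax):

    resource r0 {
      protocol C;                # synchronous replication: a write completes on both nodes
      net {
        allow-two-primaries;     # permit Primary/Primary, required for OCFS2
      }
      startup {
        become-primary-on both;
      }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/xvdc;
        address   192.168.128.1:7789;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/xvdc;
        address   192.168.128.2:7789;
        meta-disk internal;
      }
    }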

Because OCFS2 requires a Distributed Lock Manager to operate successfully, and because I wanted the file-system to fail over nicely, I could not use Heartbeat as my high-availability back-end to Pacemaker, the cluster resource management framework. The preferred HA system for Pacemaker is Corosync, so I installed this from Ubuntu’s APT repository and started the cluster.

Unfortunately neither system would recognise that the other existed when configured to use the default multicast addressing. (Multicast was chosen so that each system could “subscribe” to the data feed, while systems outside the cluster wouldn’t be inundated with irrelevant traffic.) While researching this problem I found posts on Linode’s forum suggesting that multicast was allowed, along with broadcast. This turns out not to be the case: pinging 192.168.255.255, the broadcast address of my private network at Linode, yielded absolutely no replies from any of my other systems. That confirmed broadcast was not allowed, and added to my suspicion that multicast would be the same.
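The test is trivial to reproduce (substitute your own private network’s broadcast address; note that many hosts ignore broadcast pings unless configured to answer them):

    ping -b 192.168.255.255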

So I started searching for “Corosync” and “unicast”, trying to find a way for the HA daemon to communicate directly with all members of the cluster on a pre-arranged basis rather than relying on automatic discovery. I found a patch on the Corosync mailing list which does just that over UDP. (The relevant term for Google is “UDPU”.) Unfortunately, once again, the compilation process was not easy.

To compile Corosync, a couple of other packages need to be compiled first (step-by-step below). Also, the packaged version of Pacemaker from Ubuntu won’t work with a custom-compiled Corosync, so we need to compile that too. (A sketch of the resulting unicast configuration follows the steps.)

  1. First install build tools
    1. sudo apt-get install build-essential
    2. sudo apt-get install groff # I forget where this was needed now, but at some point it’ll complain if you don’t have it.
  2. Cluster Glue
    1. sudo apt-get install libltdl-dev libglib2.0-dev libxml2-dev libbz2-dev intltool
    2. sudo ln -s libuuid.so.1 /lib/libuuid.so
    3. cd /usr/include/glib-2.0/glib
    4. sudo patch -Np5 < gatomic.h.patch # first save the diff below as gatomic.h.patch
      • *** gatomic.h.old    2010-11-16 18:57:22.000000000 +0000
        --- gatomic.h    2010-11-15 05:14:23.000000000 +0000
        *************** void     g_atomic_pointer_set
        *** 64,79 ****
          #else
          # define g_atomic_int_get(atomic) \
            ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gint) ? 1 : -1]), \
        !    (g_atomic_int_get) ((volatile gint G_GNUC_MAY_ALIAS *) (void *) (atomic)))
          # define g_atomic_int_set(atomic, newval) \
            ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gint) ? 1 : -1]), \
        !    (g_atomic_int_set) ((volatile gint G_GNUC_MAY_ALIAS *) (void *) (atomic), (newval)))
          # define g_atomic_pointer_get(atomic) \
            ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gpointer) ? 1 : -1]), \
        !    (g_atomic_pointer_get) ((volatile gpointer G_GNUC_MAY_ALIAS *) (void *) (atomic)))
          # define g_atomic_pointer_set(atomic, newval) \
            ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gpointer) ? 1 : -1]), \
        !    (g_atomic_pointer_set) ((volatile gpointer G_GNUC_MAY_ALIAS *) (void *) (atomic), (newval)))
          #endif /* G_ATOMIC_OP_MEMORY_BARRIER_NEEDED */

          #define g_atomic_int_inc(atomic) (g_atomic_int_add ((atomic), 1))
        --- 64,79 ----
          #else
          # define g_atomic_int_get(atomic) \
            ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gint) ? 1 : -1]), \
        !    (g_atomic_int_get) ((volatile gint G_GNUC_MAY_ALIAS *) (volatile void *) (atomic)))
          # define g_atomic_int_set(atomic, newval) \
            ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gint) ? 1 : -1]), \
        !    (g_atomic_int_set) ((volatile gint G_GNUC_MAY_ALIAS *) (volatile void *) (atomic), (newval)))
          # define g_atomic_pointer_get(atomic) \
            ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gpointer) ? 1 : -1]), \
        !    (g_atomic_pointer_get) ((volatile gpointer G_GNUC_MAY_ALIAS *) (volatile void *) (atomic)))
          # define g_atomic_pointer_set(atomic, newval) \
            ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gpointer) ? 1 : -1]), \
        !    (g_atomic_pointer_set) ((volatile gpointer G_GNUC_MAY_ALIAS *) (volatile void *) (atomic), (newval)))
          #endif /* G_ATOMIC_OP_MEMORY_BARRIER_NEEDED */

          #define g_atomic_int_inc(atomic) (g_atomic_int_add ((atomic), 1))
    5. cd ~
    6. wget -O cluster-glue.tar.bz2
    7. tar jxvf cluster-glue.tar.bz2
    8. cd Reusable-Cluster-Components-*
    9. ./autogen.sh
    11. LDFLAGS="-L/lib" LIBS="-luuid" ./configure --prefix=/usr --with-daemon-user=hacluster --with-daemon-group=haclient
    11. make
    12. sudo make install
    13. cd ..
  3. Resource Agents
    1. wget -O resource-agents.tar.bz2
    2. tar jxvf resource-agents.tar.bz2
    3. cd Cluster-Resource-Agents-*
    4. ./autogen.sh
    5. ./configure --prefix=/usr
    6. make
    7. sudo make install
    8. cd ..
  4. OpenAIS
    1. sudo apt-get install subversion
    2. svn co http://svn.fedorahosted.org/svn/openais/branches/wilson
    3. mv wilson openais
    4. cd openais
    5. ./autogen.sh
    6. ./configure --prefix=/usr --with-lcrso-dir=/usr/libexec/lcrso
    7. make
    8. sudo make install
    9. cd ..
  5. Corosync
    1. sudo apt-get install git-core pkg-config
    2. git clone git://corosync.org/corosync.git
    3. cd corosync
    4. unzip udpu.patch_.zip
    5. git apply udpu.patch
    6. ./autogen.sh
    7. ./configure --disable-nss
    8. make
    9. sudo make install
  6. Pacemaker
    1. sudo apt-get install libxslt-dev
    2. wget -O pacemaker.tar.bz2
    3. tar jxvf pacemaker.tar.bz2
    4. cd Pacemaker-1-0-*
    5. ./autogen.sh
    6. ./configure --prefix=/usr --with-lcrso-dir=/usr/libexec/lcrso
    7. make
    8. sudo make install
  7. That’s it.
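With everything built, the unicast transport is selected in /etc/corosync/corosync.conf. A minimal sketch of the totem section as I understand the UDPU patch’s syntax (the addresses are placeholders for your nodes’ private IPs, and the option names may differ slightly between the mailing-list patch and what was later merged upstream):

    totem {
      version: 2
      secauth: off
      transport: udpu              # unicast UDP instead of multicast
      interface {
        ringnumber: 0
        bindnetaddr: 192.168.128.0 # the private network the nodes share
        mcastport: 5405
        member {
          memberaddr: 192.168.128.1
        }
        member {
          memberaddr: 192.168.128.2
        }
      }
    }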

I will also be posting an article soon about getting DRBD, OCFS2 and MySQL Multi-Master (aka MySQL Master/Master) working with the new Pacemaker system.

New site design and emphasis

I need to explain the major changes that have happened overnight. Yesterday I actually stumped up some money and bought a very nice looking WordPress theme, which you can now see in use on this site. I spent all last night going through my posts, re-categorising them and making sure they’re all appropriately tagged, which the new theme requires to operate correctly. Now that the theme is in place, I’ve also added a few new articles covering my portfolio of previous work. Obviously saying ‘previous’ suggests there should be future work; to that end XYZ-Network.com will soon point to this site, giving me more freedom to add new service pages and portfolio items thanks to WordPress’ nice interface.

Honeymonster’s Blog will still be accessible from this site, in addition to the new XYZi pages.

I will still update the blog as regularly (or not) as I have thus far. However, the main emphasis of this site is now XYZi, or XYZindustries, which is my new term for all my ventures under one banner. XYZ Network will operate directly from this site along with Honey’s (my) blog. XYZ Internet will still have its own site, and FreeChat UK will also continue as a separate entity.

The catchphrase for the whole project is “a honeymonster venture”, indicating me as the driving force behind it; and, while XYZi is now operational, there are still some things I need to get sorted (like that big blank spot on the homepage where some images and text should appear). These issues are down to my lack of time rather than “bugs” in the theme or my implementation of it.

There are some very nice features as part of the theme which I have yet to utilise such as pullquotes and fancy text layouts.

Updates

Warning: this is a random amalgam of what should really be separate posts.

I have just finished going through the 600 or so queued-up comments on my blog. The most heavily commented articles appear to be my OSX86 Snow Leopard post and, somewhat surprisingly, my AJAX Blog post; the latter’s comments were entirely spam, however. Among the many obviously spam comments there were a few that didn’t stand out as such, so I allowed those through and, being the diligent person I am, replied to all the new comments with at the very least a thank-you for the message.

And now it’s 04:30 in the morning and I really should be in bed. Instead I’m playing with my Android HTC Magic, running a hacked “ROM” created by the CyanogenMod community. I’ve moved my SIM card out of my iPhone and into this Android phone in the hope of unifying my various online identities with my real-life counterpart. I was using the aforementioned iPhone quite avidly, but I both felt like a change and wanted to get out of the Steve Jobsian walled garden, if only for a little while.

On that front I am also planning to remove OS X from my MacBook and replace it with Ubuntu’s earthy-hued goodness. My desktop PC is currently running Citrix XenServer 5.6 instead of the OSX86 Snow Leopard setup alluded to in the post mentioned above. Unfortunately I can’t install any “guest” operating systems onto this XenServer machine until I either install Windows to access Citrix’s official management console, XenCenter, or install some form of Linux and use the OpenXenManager project’s offering. It strikes me as odd that Citrix builds an entire product on Linux (CentOS) but doesn’t provide any way of managing it from within a different Linux system (my choice being Ubuntu).

WP Super Cache oopsiedoodle

Yeah, so why does WP-Super-Cache have to hard-code its file paths into PHP to work correctly? What I’m talking about here is wp-content/advanced-cache.php, which is created by WP-Super-Cache when you first activate it. That would be all well and good, but it chooses to use absolute file-system paths based on your individual server’s layout.

Why is that bad? Well, consider this: IND-Web.com recently migrated all its files to a different directory on the server to allow for better maintenance. Unfortunately the new directory was not at the same path as the old one, so the hard-coded path in advanced-cache.php was no longer correct, which meant that WP-Super-Cache was broken.
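A quick way to see the baked-in path on your own install is to grep the generated file (the path in the example output is hypothetical):

    grep -n "wp-cache-phase1.php" wp-content/advanced-cache.php
    # e.g. include_once( '/old/path/wp-content/plugins/wp-super-cache/wp-cache-phase1.php' );

Re-saving the plugin’s settings should regenerate advanced-cache.php with the corrected path.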

The real crux of the issue, though, is that there is absolutely NO mention of this in the wp-admin dashboard, or even in WP-Super-Cache’s setup screen. The only evidence of the problem was an HTML comment at the bottom of every page describing the situation. You might think that’s good, but it’s not, for two reasons: 1) it tells attackers that they can DDoS your server more effectively, and 2) HTML comments are invisible in the rendered page and are only seen when you “view source” in your web browser.
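Because the warning lives only in an HTML comment, you will only ever catch it by looking at the page source, for example:

    curl -s http://www.example.com/ | tail -n 5

WP-Super-Cache appends its status comments near the end of the markup, so the last few lines are where the evidence shows up.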

This invisibility meant that IND-Web.com and THM.net were both completely un-cached and susceptible to floods of traffic overloading the server’s processing capability, with every page re-created on every visit instead of being created once when updated and then served as a static cached copy to subsequent requests.