
OpenSSL, InspIRCd and SymLinks

I don’t know how long this has been an issue; it only reared its head after a restart of both my InspIRCd server and the Windows desktop I use to reach it, running XChat for Windows.

I had some problems with failed handshakes from XChat for Windows when connecting to my freshly rebooted server running InspIRCd 1.2. XChat for Windows was reporting:

* Connection failed. Error: [336151568] error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure

My attempts to fix this were two-fold. First, I checked that OpenSSL was correctly installed on the server and recompiled InspIRCd to make sure it was linking against the correct library. After restarting the server I still couldn’t connect, so I moved on to my second step: verifying that my saved SSL certificate from StartSSL was not corrupted, by removing the file from my InspIRCd folder and replacing it with a symlink to a known-good copy in a different folder.

After verifying that the known-good file was still intact, using the OpenSSL command-line program with the following incantation, I restarted the InspIRCd daemon and tried connecting again.

$ openssl verify -purpose sslserver -CAfile /path/to/intermediate.pem /path/to/certificate.pem
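It is also worth checking what the server actually presents during the handshake. A quick check with openssl s_client (a sketch only: substitute your own host, and 6697 is just a commonly used SSL IRC port) looks like this:

$ openssl s_client -connect irc.example.com:6697 -CAfile /path/to/intermediate.pem

The “Certificate chain” section of the output shows exactly which certificates the daemon is sending, and the final “Verify return code” line should read 0 (ok) when the chain is complete.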

Unfortunately, while the OpenSSL verify command succeeded, I still couldn’t connect to the server, with the same “Connection failed” errors from XChat for Windows. At this point, after pulling out some more of my hair, I decided to reconfigure InspIRCd to point directly at the SSL files instead of the symlinks. Once I had done this I finally got a more usable error message, with XChat for Windows now reporting:

SSL Verify: [20] unable to get local issuer certificate
* Connection failed. Error: SSL failure
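In hindsight, that error usually means the server is not presenting the StartSSL intermediate certificate alongside its own, so the client cannot build a path to a trusted root. One common remedy, sketched here with made-up file names and not necessarily what was wrong in my case, is to concatenate the two certificates into a single chain file and point the daemon’s certificate setting at that:

$ cat certificate.pem intermediate.pem > chain.pem

The leaf certificate must come first, followed by the intermediate, so that clients receive the full chain during the handshake.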

After more hair-pulling I replaced OpenSSL on my server with GnuTLS, and managed to get as far as XChat for Windows reporting:

SSL Verify: [19] self signed certificate in certificate chain

I have, at least, managed to discover the cause of my client-side issues: my recent installation of Nmap for Windows, which put OpenSSL libraries on the default Windows search path. This caused XChat for Windows to bypass its default SSL subsystem in favour of the OpenSSL copy provided by Nmap. It also seems that other people are pulling their hair out over OpenSSL and StartSSL’s certificates, so at least I can take comfort that I’m not alone. Removing Nmap from my default PATH fixes the problem on my client for the moment, but I now worry about users on Linux-based systems, where OpenSSL is the usual provider library, hitting the same problem. Their likely workaround is to tell their IRC client to “ignore invalid certificates”, which opens a huge security hole on their system and allows for MitM (Man in the Middle) attacks.


Open Wireless

I, along with countless others, have opened a wireless network to give strangers access via an internet connection that I pay for. I charge a modest amount, which helps towards my internet charges (but doesn’t completely cover them); primarily, though, the network is open to give others access when they would otherwise have none, such as when there is a problem on their own line.

However, my comment today is about an exciting new effort from the Electronic Frontier Foundation, which has published a call-to-arms with a short-term goal of getting more people to open a portion of their bandwidth to passers-by, and a long-term goal of creating a new wireless standard that allows for encrypted communications over free wireless networks (“free” referring to the freedom to connect). The crux of the issue is the need for a standard that allows anybody to connect to a given wireless network while still maintaining strong security through encryption.

The idea is to allow each third party to connect to the network while being unable to see the communications of other third parties. One way of achieving this, suggested in the EFF article, takes its inspiration from the SSH protocol, where a single security certificate is used to derive a separate session encryption key for each user. The scheme also borrows SSH’s “trust on first use” paradigm: the user is prompted to accept the security certificate the first time they connect to the station, and that certificate becomes the basis for future, un-prompted connections. If the certificate ever changes, the user knows with a high degree of confidence that either the network has changed somehow (e.g. the connected station has been replaced), the connection has just been intercepted by a Man-in-the-Middle (MitM) beginning an attack, or a previous MitM attack has just ended.
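As a rough illustration of trust-on-first-use with everyday tools (a sketch only: gateway.example.net, the port and the file path are all made up, and this is not part of the EFF proposal), you can pin a certificate fingerprint on first contact and compare it on later connections:

# First connection: record the fingerprint we were shown
$ openssl s_client -connect gateway.example.net:443 < /dev/null 2> /dev/null \
    | openssl x509 -noout -fingerprint -sha256 > ~/.known_gateway

# Later connections: complain loudly if the fingerprint has changed
$ openssl s_client -connect gateway.example.net:443 < /dev/null 2> /dev/null \
    | openssl x509 -noout -fingerprint -sha256 \
    | diff - ~/.known_gateway || echo "WARNING: certificate changed - possible MitM"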

If I could allow my network’s users to connect in a more secure manner, I would do so. However, as the proposed protocol is only at the planning stage, and there is no guarantee that a wireless working group would accept it for a future standard, I cannot easily offer encrypted communication via my wireless stations. Ideally, captive-portal suites such as CoovaChilli should provide a means of using 802.1X for RADIUS-backed encryption once a user has a valid credential for the network, especially as CoovaChilli and the others are backed by RADIUS anyway.
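For comparison, this is roughly what RADIUS-backed encryption looks like at the access-point level today with hostapd (a minimal sketch: the interface, SSID, RADIUS address and shared secret are placeholders, and tying this into a captive-portal login is precisely the piece that is missing):

interface=wlan0
ssid=OpenWireless-Secure
# WPA2 with 802.1X/EAP rather than a pre-shared key
wpa=2
wpa_key_mgmt=WPA-EAP
ieee8021x=1
# the RADIUS server that CoovaChilli already talks to
auth_server_addr=127.0.0.1
auth_server_port=1812
auth_server_shared_secret=changeme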

Another related issue, however, is that I may become liable for my users’ misbehaviour on the internet via my connection. Technically I’m an individual on a residential connection and therefore not allowed to resell access, which puts me at odds with my ISP’s T&Cs. And, while I am on a residential connection, can I claim to be an ISP to my clients? That puts me at odds with the legal system. My position is that I am an ISP in the sense that, yes, I do provide an internet service to users of my network; and that I am not technically reselling my connection: I am selling my firewalling of the user from internet nasties, and then providing free internet access on the back of that, without resale.


Open Source. Really?

I’ve been looking at cross-platform mobile development frameworks and came across Appcelerator, a company whose product “Titanium” is ordinarily licensed under the Apache 2.0 licence. However, if at any time I need more functionality than the default product provides, the Titanium+Plus modules, such as the Urban Airship module, are available. I almost downloaded the demo module, but stopped to check the terms I would be agreeing to by ticking the box to get through to the download; it was here that I became flabbergasted and appalled that a) nobody has mentioned this before, and b) it is even allowed to happen.

Note Section 1.3(g) states that:

[signatory shall not nor allow a third party to] (g) modify any open source version of Appcelerator’s software source code (“Original Code”) to develop a separately maintained source code program (the “Forked Software”) so that such modifications are not automatically integrated with the Original Code or so that the Forked Software has features not present in the Original Code;

Appcelerator License Section 1.3(g)

And section 5.4 states that:

“Notwithstanding anything to the contrary contained in this Agreement, Sections 1.4 (“License Restrictions”) […] shall in all cases survive any expiration or termination of this Agreement.” (I believe “1.4” is a mis-numbering of 1.3, judging by the textual title of the section.)

Appcelerator License Section 5.4

I read those clauses as a _permanent_ ban on _legitimate_ open-source development based upon code previously released by Appcelerator under an open-source licence. This, I feel, is completely against the spirit of open source, and potentially a conflict between the two licence grants (Apache 2.0 versus the Appcelerator Titanium+Plus Modules License) which would need testing in court to determine which takes precedence.

As I didn’t know who to contact about this flagrant betrayal of the term “Free”, I figured I would post this blog entry and email the Free Software Foundation for guidance.

I’m still waiting on a response from the FSF.

Update (19/01/2012): I failed to post this at the time, so I’m doing so now. I received a reply from Yoni Rabkin of the Free Software Foundation’s licensing department on 2nd May (my original email was sent on 13 April), with the following:

> Note Section 1.3(g) states that:
> “[signatory shall not nor allow a third party to] (g) modify any open
> source version of Appcelerator’s software source code (“Original
> Code”) to develop a separately maintained source code program (the
> “Forked Software”) so that such modifications are not automatically
> integrated with the Original Code or so that the Forked Software has
> features not present in the Original Code;”

Indeed such a restriction renders the software non-free and therefore, of course, also GPL-incompatible.

The Apache 2.0 license is not a copyleft license and therefore permits use in proprietary software. This is one of the reasons the FSF recommends licensing your software under a strong copyleft license such as the GNU GPL. If there is copylefted software involved (with copyright holders who are not Appcelerator) then there would be a potential copyright violation to pursue, otherwise what we have is a case of misleading advertisement.

I would recommend contacting the Appcelerator people and explaining the problem; if they can make clear that they distribute both free software and proprietary software people will be able to choose to avoid the proprietary parts.

But please note that if you are into mobile development you will run into much worse problems once you attempt to actually release your software via these “app stores” as they actively support proprietary software (Google’s Android, for example) and some go as far as banning free software completely (Apple’s products, for example).

Yoni Rabkin – FSF Licensing Department – by eMail on 2 May 2011

MisTagged Music

Over the years I’ve acquired a large music collection of about 100GB or 17000 files. Unfortunately, a lot of these are badly labelled and categorised.

Music files such as MP3s can store so-called “metadata” (data about data) describing the music held in the same file; for MP3s this is the ID3 tagging system. When you rip a music CD you can look up the track names and numbers with a service such as CDDB and then write that information into each file.
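For example, the command-line id3v2 utility can list or rewrite these tags directly (a quick illustration with a made-up file name, not part of my actual workflow):

$ id3v2 -l "01 - Some Track.mp3"                                           # list the existing ID3 tags
$ id3v2 -a "Example Artist" -t "Example Title" -T 1 "01 - Some Track.mp3"  # set artist, title and track number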

Tagging is useful, but sometimes you get incorrect or incomplete information back, or the data stored in your local copies becomes corrupted over successive backups and moves from place to place. That is where a service like TuneUp comes into play.

TuneUp is “Quite possibly the most important piece of software any music lover can buy.”

Uncrate

This downloadable program for Windows and Mac scans the files you point it at for a digital fingerprint that allows it to uniquely identify the piece of music contained within. To achieve this, TuneUp will actually “listen” to your music rather than rely on metadata such as the ID3 tags or the filename.
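TuneUp’s matching is proprietary, but the open-source Chromaprint project illustrates the same idea: its fpcalc tool derives an acoustic fingerprint from the audio itself, ignoring tags and file names entirely (the file name below is made up):

$ fpcalc "01 - Some Track.mp3"      # prints DURATION= and FINGERPRINT= lines derived from the audio

The resulting fingerprint can then be matched against a database of known recordings.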

I’ve used the tool to scan my own music collection, and it reports that as much as 75% of my metadata is incorrect or missing. I am now slowly going through the whole collection again with TuneUp, this time in clean mode rather than scan mode, getting it to retag as much of my music as it can recognise. It’s a tedious process, as it sometimes matches a completely different track from the one being relabelled, so it needs a bit of babysitting; but the saving in time and energy is still considerable, especially for larger collections or those that are wildly mislabelled.


FreeCh.at – The Free IRC Chat Network

I’ve not posted about this before, even though the network has been operational for a few months now. I am a founding member of a new Internet Relay Chat (IRC) network that allows anybody to talk with friends for free. The network comprises four geographically diverse servers, two of which are my own, and is run by Nicholas “Cubezero” Weightman, David “Wellard” Fullard and myself (Daniel “Fremen” Llewellyn).

We operate using the open source InspIRCd software as the main controller along with “services” provided by the Anope software. (Services provide the facility to maintain control over your own name and rooms while you are not connected to the network.) We have tried as best we can to create a highly redundant system that allows for any one server to go offline without adversely affecting the chat experience.

To connect to the network you can either use the web-based interface, or connect using an IRC program such as mIRC (http://www.mirc.com/) or XChat (http://www.xchat.org/). The details you need for these programs are:

Server Name: irc.freech.at
Port: 6667

FreeChat IRC Connection Details

The servers can also be accessed directly as uk1.freech.at, uk2.freech.at, irc.wnsi.co.uk or irc.invaliddomain.com. Wellard offers other services besides IRC at his site, www.invaliddomain.com, and Cubezero likewise at www.wnsi.co.uk.


Egypt, the Human Spirit & Freedom.

So, it’s late into the morning again, and I’m awake in what is a progressive cycle. Ho hum.

random service update

At least I can be of some use; to that end I’ve been reorganising the FreeCh.at DNS servers to, hopefully, improve our “round robin” load balancing, which randomly sends users to each of the four servers. I’ve also moved the FreeCh.at wiki onto the same servers that power the chat service, upgrading the software at the same time, so that the wiki stays live even when a single point in the “system” fails.
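Round-robin DNS is nothing more exotic than several address records published under a single name; resolvers hand them out in rotation, so clients end up spread across the servers. A hypothetical zone-file fragment (the addresses below are documentation placeholders, not our real ones) looks like this:

irc.freech.at.    300    IN    A    203.0.113.10
irc.freech.at.    300    IN    A    203.0.113.20
irc.freech.at.    300    IN    A    198.51.100.30
irc.freech.at.    300    IN    A    198.51.100.40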

more important: twitter

During these late nights and early mornings I also enjoy keeping up with the mumblings of American Twitter users. The latest post from Google’s @mattcutts caught my eye: it alludes to Google using voice recognition and synthesis, via its recent acquisition of SayNow, to help Egyptian protestors. Those bright sparks at Google have created a service which listens on a few international phone numbers and allows Egyptian citizens to listen to, and post to, Twitter using just their voice. The idea is that, while the internet is cut off, Egyptians can phone a nearby number and post happenings “from the ground” to Twitter, complete with #hashtags, just by speaking what they wish to share. They can also keep up to date with news by having the latest tweets spoken back to them.


Sender Policy Framework

There has been some discussion on the FreeChat IRC network about “Sender Policy Framework” (SPF), an email system for reducing spam. SPF piggybacks on the worldwide network of servers that transforms a readable domain name (such as my own thehoneymonster.com) into the numeric address which globally and uniquely identifies the server operating that service (in this case, my blog and XYZindustries).

That network of servers is called the Domain Name System (DNS). As well as addresses, DNS can publish arbitrary information against specific names such as thehoneymonster.com or www.thehoneymonster.com (the two being distinct entries). This facility, known as TXT records, is used by the SPF project to publish a special chunk of information detailing which computer addresses are allowed to send email purporting to be from thehoneymonster.com (or whichever domain you implement the scheme on).

I have now implemented a set of SPF rules on all the domains I have access to, including a few client domains. If you find problems sending email from a domain which I administer, please let me know and I will try to find a solution that still minimises the chance of spammers using your domain name as the “from” address of their spam. The main issue I can foresee is where a client uses a third-party webmail service, or sends email through an ISP’s mail server instead of XYZi’s.
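In that situation the record can usually be extended rather than abandoned. For example (a sketch only, with a made-up ISP domain), an extra include directive authorises the ISP’s outgoing servers alongside XYZi’s:

Value: v=spf1 include:xyz-network.com include:mail.isp.example.com -all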

For those who have not delegated responsibility for their domain name(s) to me, the correct TXT record to insert for hosting on XYZi’s Pressflow/Drupal cluster is as follows:

Name: Not Specified - leave empty
Value: v=spf1 include:xyz-network.com -all
TTL: any value, default is fine

And for those hosting via IND-Web.com’s system, the following is correct.

Name: Not Specified - leave empty
Value: v=spf1 include:ind-web.com -all
TTL: any value, default is fine
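Once the record is in place, you can confirm what the rest of the world sees with a quick DNS query (example.com below stands in for your own domain):

$ dig +short TXT example.com
"v=spf1 include:xyz-network.com -all"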

Movement For Active Democracy

My Dad sent me this link to the Movement for Active Democracy website. Below is my response to my Dad explaining my own views.

I got to the fourth video and couldn’t go any further. What he says sounds idealistic but implementing it would be harder than he seems to think. Also, I disagree with his premise that the public has no say as it stands now.

For example, we have a hung parliament now because the public couldn’t agree on any one party strongly enough to push it past the post (under our “archaic” system). So if proportional representation were used to its fullest extent, and the same set of votes were cast, I envisage that all three main parties would have roughly equal standing, with a handful of racist parties snapping at their heels.

PR is great for isolationist parties because they don’t need to win outright anywhere and yet can still get a few people in as MPs; once they have a member in parliament they can tout how well they’ve stood up for the rights of the everyman and gain more votes and more MPs, in a vicious downward spiral towards having enough MPs to take us to 1984!

As it stands, First Past the Post is keeping the extreme-right parties under control.

And then there’s the idea, borrowed from the Swiss, of getting together enough of your fellow citizens to force referenda. In theory that sounds great, but if an organised group can muster enough people who toe the line regularly, they can force expensive referenda on every piece of legislation conceived. That would prevent government from functioning at all, and the offending group could hold the country to ransom until they get their own way.

If that “way” is for them to be voted into power, then when the public get fed up with the constant lack of progress they might decide to vote for this group just to get something, anything, done! And that could lead to 1939-45 all over again!

I think through that thought process I’ve convinced myself that our current system is the best for us.


Linode and Corosync High Availability

I have recently been configuring multiple Virtual Private Servers, hosted in Linode.com’s UK data centre, into a highly available (HA) web-hosting and IRC cluster. My design required shared storage using DRBD, which is akin to network-based RAID 1 (mirroring). To allow both servers to be operational simultaneously while maintaining data consistency, DRBD can run in a Primary/Primary configuration instead of the usual primary/secondary scenario, with the OCFS2 file-system layered on top of the DRBD device. The idea is that DRBD keeps the raw data in sync, while OCFS2 keeps file locks and related state in sync between the two servers, preventing both from writing to the same file simultaneously.
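For reference, dual-primary mode is switched on in the DRBD resource definition. A minimal sketch (the resource name, devices, host names and addresses are placeholders rather than my actual configuration):

resource r0 {
    protocol C;                  # synchronous replication, required for dual-primary
    net {
        allow-two-primaries;     # let both nodes hold the Primary role at once
    }
    startup {
        become-primary-on both;
    }
    on node1 {
        device    /dev/drbd0;
        disk      /dev/xvdc;
        address   192.168.128.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/xvdc;
        address   192.168.128.2:7788;
        meta-disk internal;
    }
}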

Because OCFS2 requires a Distributed Lock Manager to operate, and because I wanted the file-system to fail over nicely, I could not use Heartbeat as the high-availability back-end for Pacemaker, the cluster resource-management framework. The preferred HA system for Pacemaker is Corosync, so I installed that from Ubuntu’s APT repository and started the cluster.

Unfortunately, neither node would recognise that the other existed when configured to use the default multicast addressing. (Multicast is used so that each cluster member can “subscribe” to the data feed while other systems, those not in the cluster, aren’t inundated with irrelevant traffic.) While researching the problem I found posts on Linode’s forum suggesting that multicast was allowed, along with broadcast. This turned out not to be the case: pinging 192.168.255.255, the broadcast address for my private network at Linode, yielded no replies at all from my other systems. That confirmed broadcast was not allowed, and added to my suspicion that multicast would be the same.
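If you want to check this for yourself, a broadcast ping plus the omping utility (written for testing exactly this kind of Corosync traffic) give a quick answer; the addresses below are placeholders for your own private IPs:

$ ping -b 192.168.255.255               # broadcast: expect replies from every host on the LAN
$ omping 192.168.128.1 192.168.128.2    # run on both nodes; reports unicast and multicast packet loss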

So I started searching for “Corosync” and “unicast”, looking for a way for the HA daemon to communicate directly with a pre-arranged list of cluster members rather than relying on automatic discovery. I found a patch on the Corosync mailing list which does just that, using unicast UDP (the relevant term for Google is “UDPU”). Unfortunately, once again, the compilation process was not easy.
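For what it is worth, once the patched Corosync is built (the steps follow below), the unicast transport is selected in corosync.conf by listing every cluster member explicitly. A rough sketch, with placeholder addresses standing in for the Linode private IPs:

totem {
    version: 2
    # unicast UDP instead of the default multicast
    transport: udpu
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.128.0
        mcastport: 5405
        member {
            memberaddr: 192.168.128.1
        }
        member {
            memberaddr: 192.168.128.2
        }
    }
}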

How to build Corosync

To compile Corosync, a couple of other packages also need to be compiled first (step by step below). The packaged version of Pacemaker from Ubuntu won’t work with a custom-compiled Corosync either, so we need to build that too.

Install the Build Tools

sudo apt-get install build-essential
sudo apt-get install groff # I forget where this was needed now, but at some point it'll complain if you don't have it.

Cluster Glue

gatomic.h.patch

*** gatomic.h.old	2010-11-16 18:57:22.000000000 +0000
--- gatomic.h	2010-11-15 05:14:23.000000000 +0000
*************** void     g_atomic_pointer_set
*** 64,79 ****
  #else
  # define g_atomic_int_get(atomic) \
   ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gint) ? 1 : -1]), \
!   (g_atomic_int_get) ((volatile gint G_GNUC_MAY_ALIAS *) (void *) (atomic)))
  # define g_atomic_int_set(atomic, newval) \
   ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gint) ? 1 : -1]), \
!   (g_atomic_int_set) ((volatile gint G_GNUC_MAY_ALIAS *) (void *) (atomic), (newval)))
  # define g_atomic_pointer_get(atomic) \
   ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gpointer) ? 1 : -1]), \
!   (g_atomic_pointer_get) ((volatile gpointer G_GNUC_MAY_ALIAS *) (void *) (atomic)))
  # define g_atomic_pointer_set(atomic, newval) \
   ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gpointer) ? 1 : -1]), \
!   (g_atomic_pointer_set) ((volatile gpointer G_GNUC_MAY_ALIAS *) (void *) (atomic), (newval)))
  #endif /* G_ATOMIC_OP_MEMORY_BARRIER_NEEDED */
  
  #define g_atomic_int_inc(atomic) (g_atomic_int_add ((atomic), 1))
--- 64,79 ----
  #else
  # define g_atomic_int_get(atomic) \
   ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gint) ? 1 : -1]), \
!   (g_atomic_int_get) ((volatile gint G_GNUC_MAY_ALIAS *) (volatile void *) (atomic)))
  # define g_atomic_int_set(atomic, newval) \
   ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gint) ? 1 : -1]), \
!   (g_atomic_int_set) ((volatile gint G_GNUC_MAY_ALIAS *) (volatile void *) (atomic), (newval)))
  # define g_atomic_pointer_get(atomic) \
   ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gpointer) ? 1 : -1]), \
!   (g_atomic_pointer_get) ((volatile gpointer G_GNUC_MAY_ALIAS *) (volatile void *) (atomic)))
  # define g_atomic_pointer_set(atomic, newval) \
   ((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gpointer) ? 1 : -1]), \
!   (g_atomic_pointer_set) ((volatile gpointer G_GNUC_MAY_ALIAS *) (volatile void *) (atomic), (newval)))
  #endif /* G_ATOMIC_OP_MEMORY_BARRIER_NEEDED */
  
  #define g_atomic_int_inc(atomic) (g_atomic_int_add ((atomic), 1))

Procedure

sudo apt-get install libltdl-dev libglib2.0-dev libxml2-dev libbz2-dev intltool
sudo ln -s libuuid.so.1 /lib/libuuid.so
cd /usr/include/glib-2.0/glib
sudo patch -Np5 < gatomic.h.patch
cd ~
wget -O cluster-glue.tar.bz2
tar jxvf cluster-glue.tar.bz2
cd Reusable-Cluster-Components-*
./autogen.sh
LDFLAGS="-L/lib" LIBS="-luuid" ./configure --prefix=/usr --with-daemon-user=hacluster --with-daemon-group=haclient
make
sudo make install
cd ..

Resource Agents

wget -O resource-agents.tar.bz2
tar jxvf resource-agents.tar.bz2
cd Cluster-Resource-Agents-*
./autogen.sh
./configure --prefix=/usr
make
sudo make install
cd ..

OpenAIS

sudo apt-get install subversion
svn co http://svn.fedorahosted.org/svn/openais/branches/wilson
mv wilson openais
cd openais
./autogen.sh
./configure --prefix=/usr --with-lcrso-dir=/usr/libexec/lcrso
make
sudo make install
cd ..

Corosync

sudo apt-get install git-core pkg-config
git clone git://corosync.org/corosync.git
cd corosync
unzip udpu.patch_.zip
git apply udpu.patch
./autogen.sh
./configure --disable-nss
make
sudo make install

Pacemaker

sudo apt-get install libxslt-dev
wget -O pacemaker.tar.bz2
tar jxvf pacemaker.tar.bz2
cd Pacemaker-1-0-*
./autogen.sh
./configure --prefix=/usr --with-lcrso-dir=/usr/libexec/lcrso
make
sudo make install

I will also be posting an article soon about getting DRBD, OCFS2 and MySQL Multi-Master (aka MySQL Master/Master) working with the new Pacemaker system.


New site design and emphasis

I need to explain the major changes that have happened overnight. Yesterday I stumped up some money and bought a genuinely nice-looking WordPress theme, which you can now see in use on this site. I spent all last night going through my posts, re-categorising them and making sure they are all appropriately tagged, which the new theme requires to work correctly. Now that the theme is in place I’ve also added a few new articles covering my portfolio of previous work. Obviously, saying ‘previous’ suggests there should be future work too; to that end, XYZ-Network.com will soon point to this site, giving me more freedom to add new service pages and portfolio items thanks to WordPress’ nice interface.

Honeymonster’s Blog will still be accessible from this site, in addition to the new XYZi pages.

The Honeymonster (Daniel Llewellyn)

My blog will still be accessible through this site, and I will still update it as regularly (or not) as I have thus far. However, the main emphasis of this site is now XYZi, or XYZindustries, my new umbrella term for all my ventures under one banner. XYZ Network will operate directly from this site along with Honey’s (my) blog; XYZ Internet will still have its own site, and FreeChat UK will also continue as a separate entity.

The catchphrase for the whole project is “a Honeymonster venture”, indicating me as the driving force behind it. While XYZi is now operational, there are still some things I need to get sorted (like the big blank spot on the homepage where some images and text should appear); these issues are down to my lack of time rather than “bugs” in the theme or my implementation of it.

There are some nice features as part of the theme which I have yet to utilise such as pull quotes and fancy text layouts.