Virtual-host setup for client development sites

What follows is a response to the many requests I’ve had about how to set up a local WordPress development environment.

The manual way

This first list is how my colleagues taught me to set up development sites on my local PC such that each client gets their own URL which maps to 127.0.0.1. This keeps WordPress happy and makes isolation simple and effective.

  1. edit /etc/hosts (or, on Windows, c:\windows\system32\drivers\etc\hosts) and add an entry for <client>.dan (a made-up top-level domain) pointing to 127.0.0.1
  2. create a new Apache virtual-host configuration for the development domain (or copy an existing one) and point it at wherever you want to keep the client’s development site
  3. unzip WordPress into the filesystem location you set in your Apache virtual-host configuration
  4. navigate to http://<client>.dan/ and run through the WordPress installation routine

After following those four steps you have a working WordPress installation under its own domain name, ready for testing your code and design as you build.
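For illustration, steps 1 and 2 might look something like the following; the client name “acme” and the paths here are purely hypothetical placeholders, not a prescription:

# /etc/hosts (hypothetical entry for a client called "acme")
127.0.0.1    acme.dan

# Hypothetical per-client Apache virtual host (one of these per client)
<VirtualHost *:80>
        ServerName acme.dan
        DocumentRoot /path/to/clients/acme/htdocs
</VirtualHost>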

The automated way

So you’re probably asking: what’s wrong with this setup, given its obvious advantages? Well, consider mobile web development, where you need to test your development site on a physical handset. You could try to find some way of forcing the phone’s built-in web browser to reach http://<client>.dan/ via your PC’s Local Area Network IP address. That is, however, time-consuming at best, and at worst means building a custom wrapper application you have no intention of releasing or supporting, just to view an in-progress site.

Here we go, then: this is how I set up my development sites now:

  1. unzip WordPress into /data/<client>/htdocs
  2. navigate to http://<client>.bang.bowlhat.net/ and run through the WordPress installation routine

Wasn’t that a lot simpler?!

Technicalities

My way solves the mobile-testing problem I highlighted above because I created a wildcard resource record in the Domain Name System (DNS) which points *.bang.bowlhat.net to my local PC’s current IP address. I can change this at will when I develop on a different network, and all client sites update en masse. (The record has a five-minute Time To Live, which allows updates to propagate very quickly, unlike the rest of my DNS records, which all take a minimum of 24 hours to update.) Using a DNS-based approach also means I don’t need to edit my hosts file for every site I want to develop.
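For reference, the record itself is nothing exotic: just a wildcard A record with a short TTL. In BIND zone-file syntax it would look something like the line below, where 192.0.2.10 stands in for whatever my PC’s current LAN address happens to be:

*.bang.bowlhat.net.    300    IN    A    192.0.2.10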

Apache is the second part of the equation. I get around the need to create a new virtual-host configuration for each client by using Apache’s mod_vhost_alias, which allows part of the requested domain name to be used in the path from which each site’s files are served.

My one and only Apache virtual-host configuration file is below:

<VirtualHost *:80>
        ServerAlias *.bang.bowlhat.net
        VirtualDocumentRoot /data/%1/htdocs
        <Directory /data/*/htdocs>
                AllowOverride all
                Order allow,deny
                Allow from all
                Options FollowSymLinks
        </Directory>
        ErrorLog logs/error_log
        CustomLog logs/access_log combined
</VirtualHost>

This configuration tells Apache that we’re serving all development sites from /data/<client>/htdocs, where “<client>” is taken from the domain name used to reach the web server, i.e. http://<client>.bang.bowlhat.net/.

SSH connection rate limit bypass

I’ve got a client who uses a third party for their hosting rather than allowing my company to host the site ourselves. A problem arose recently because this third party has instituted a rate limit on TCP connections such as SSH and HTTP, enforced with fail2ban. Because it’s a third party, and we’re not the ones contracting them, it’s difficult to get information about exactly what triggers the blocking.

Now, as part of my work for this client I need to update their site from time to time. For this I use a scripted deployment which uses SSH and rsync to copy files to the server and move them into place. The script makes several dozen short-lived SSH connections in the space of about a minute. You can see where I’m going here, right? Those several dozen connections are enough to trigger the block and shut me out mid-way through a deployment, leaving my client’s site in an inconsistent state.

This is where SSH Pipelining comes into play…

Pipelining

SSH has a little-known connection-sharing mechanism (OpenSSH calls it connection multiplexing, controlled by the ControlMaster option) that establishes an SSH connection and then detaches from the console, leaving the connection active in the background. The relevant incantation is:

ssh -nNf -o ControlMaster=yes -o ControlPath="$HOME/.ssh/${HOST}.sock" ${USERNAME}@${HOST}

This isn’t very useful on its own, but it becomes very powerful when you start to re-use the connection for a scripted operation: every subsequent call travels over the already-established TCP connection, so the remote host sees a single connection rather than dozens. To run a command on the remote host through that connection, you call ssh thusly:

ssh -o ControlPath="$HOME/.ssh/${HOST}.sock" ${USERNAME}@${HOST} /path/to/command

And the only missing piece is rsync:

rsync -e "ssh -o ControlPath='$HOME/.ssh/${HOST}.sock'" -rz --delete local/path ${USERNAME}@${HOST}:remote/path

Once you’re done with the connection, you can close it with:

ssh -O exit -o ControlPath="$HOME/.ssh/${HOST}.sock" ${USERNAME}@${HOST}
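Tying those pieces together, a deployment built on the shared connection ends up looking roughly like the sketch below. This is only an illustration of the shape of the thing, not my actual script; the host, username, paths and the remote activate-release command are all placeholders:

#!/bin/bash
# Minimal sketch of a deployment over a single shared SSH connection.
# HOST, USERNAME and every path below are placeholders.
HOST=example.com
USERNAME=deploy
SOCK="$HOME/.ssh/${HOST}.sock"

# Open the master connection once; everything below reuses it, so the
# remote end sees one TCP connection instead of several dozen.
ssh -nNf -o ControlMaster=yes -o ControlPath="$SOCK" "${USERNAME}@${HOST}"

# Copy the new files up over the shared connection.
rsync -e "ssh -o ControlPath='$SOCK'" -rz --delete local/path "${USERNAME}@${HOST}":remote/staging

# Run whatever remote command moves the new files into place
# (shown here as a hypothetical script on the server).
ssh -o ControlPath="$SOCK" "${USERNAME}@${HOST}" remote/bin/activate-release

# Tear the shared connection down once the deployment is finished.
ssh -O exit -o ControlPath="$SOCK" "${USERNAME}@${HOST}"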

30 and Flirty?

I’ve reached the grand old age of 30 years young. I’m wondering whether the term “30 and Flirty”, as featured by Jennifer Garner in the movie “13 Going on 30”, is a suitable term for a person of a certain age who happens to find themselves single. I don’t mind the odd flirt every now and then, but truth be told I don’t actually know when I’m doing it. Flirting has always been a grey area for me, and, with many girls as friends in a past life, I think I’ve been flirty with pretty much all of them, all of the time. Which has inevitably left me being accused, on occasion, of trying to take other people’s partners, despite that being far from my mind.

And then we get to today, where I’ve found myself at half-way to the new 50, single, with no children, looking around and seeing everyone around me is actually already attached and forking off sproglings. Both my Brother and Sister are married-off to partners they’d both been with for ages by the time they’d got to this milestone age. Have I been left on the shelf or am I my own worst enemy when it comes to love?

Answers on a postcard.

So it’s all bad then? Well, no, not quite. I have a job that I love and colleagues who are awesome. Plus I have Bowl Hat and its related ventures to keep me occupied out of hours. While I love Bowl Hat and hope that one day I can make it financially self-sustaining, I do need to pay the bills, and so I have left this site, the related ventures and their trickle of income to languish while I establish myself at my “place of regular employment”™; that must and will always take precedence over my personal projects.

As to work, I’ve now got several projects that I can cite in my personal portfolio, but it would probably be morally objectionable to put them on here, plus I’m not allowed to divulge specifics without prior approval. Instead I’ll leave you with some juicy soundbites. At Bang Communications I have been/am part of the team that produced/es: a national (British) election information website, several multinational infomercial sites, a heavily-used stakeholder-management web app, several intranets for various clients including governmental ones, and large international public-sector websites.

As you can see I’m involved with some heavy-hitters! My main work at Bang involves WordPress regularly and Drupal intermittently when specifically requested.

Adobe: WHY?!

The topic of this post begins with Adobe. And ends with more frustration.

Yes, I’m fed up with Adobe claiming to be the Web Developer’s go-to guy for software to enable advanced techniques in building the future Web. Actually, I’m fed up with them claiming that and then conspicuously ignoring the large part of the developer market who use Linux as their main operating system.

This rant comes from spotting the release into the wild of Edge Reflow, a tool to aid the development of responsive websites. Windows [tick]; Mac [tick]; Linux [huh?].

While I appreciate that the majority of the Design community run Macs and OS X on their workstations, developers invariably don’t, and in my experience they don’t run Windows either. So, as Edge Reflow has no export ability and doesn’t run on developers’ workstations, Adobe seems to think the way forward is for Designers to copy and paste CSS code (presumably into an email?) so that a Developer can re-create their hand-crafted design!?!

A Designer’s job is to design, not to mess about with code! Yes, I appreciate some designers can code, but that isn’t a requirement for the job, just as design isn’t a requirement for a programmer’s role. Coders code, and Designers design! Expecting, nay REQUIRING, cross-pollination between the design and code roles is going to leave the boss wondering why he employs two people to do the job when his designer, who has learned all the code needed to convey his design to the developer, can do it all himself, thereby leaving the Coder out of a job.

In my opinion, humble or not though it may be, Adobe is achieving three things:

  1. reducing market penetration potential, and therefore revenue, of Edge products,
  2. alienating their Designer market by requiring them to learn code, and,
  3. putting Coders out of jobs that they’re much more suited to do than their Designer counterpart.

Here ends the rant. My message to Adobe is simple: either embrace the development community or get out of web building altogether!

Followup – Freedom and Net Neutrality

This post is a follow-up to my post of February 2011 in which I talk of the potential issues of Net Neutrality sparked off by the Egyptian regime of the time cutting off the Internet in an attempt to control its populace.

The Pirate Bay

Several ISPs have recently been forced by the British courts to use technology they already operate (for controlling child pornography passing over their networks) to prevent their users from accessing The Pirate Bay BitTorrent indexing site. This is a victory for the RIAA, BPI et al. in the fight against piracy. However, the implication is that ISPs can be forced to block access to arbitrary websites, and they have now demonstrated that the technology to do so is in place.

Net Neutrality

The gloves are off now, and ISPs may take this as a signal that they can arbitrarily block web assets which they unilaterally decide are inappropriate. Alternatively, the governments of the world may see the potential to legislate more blocks, using weasel-words in the legalese to paint broad strokes that allow the blocking of completely unrelated and otherwise legal web properties for no reason other than that somebody, somewhere finds them distasteful.

Editorial Discretion

Another issue raised recently was Verizon’s petition to the US Government requesting that the company be allowed what it termed “Editorial Discretion”. The premise is that an ISP should have the same control over what a user sees on-line as a newspaper has over what a reader sees in the paper. This needs to be quashed as quickly as possible by the public at large, but I fear that most will be apathetic and ignore the issue. The same wouldn’t be true if the telecoms companies started arbitrarily censoring our phone calls to one another based on what we say during the call. And that is what this all amounts to: unilaterally deciding what somebody may or may not do, read or view on-line is tantamount to censorship. We don’t trust our Governments to censor our media on behalf of the “public good”, so why should we trust corporations to do the same, but for their own profit and that of their shareholders instead of the public good?!

Computing for an older generation

I came across this video and just had to share it with someone, but I had more to say than would fit into Facebook or Twitter (and who uses Google+?!)

I love that he is so excited that he can barely speak. It’s absolutely great that someone from one of the older generations (read: alive before the 1990s) can get into, and find uses for, technology that kids take for granted.

With “tablets” and smart-phones (iPhone, iPad, Android and others) the whole landscape has shifted to a point where a computing device can be used by anyone without any prior computing experience or training. Finally we’re at a point where technology is becoming a facilitator rather than something only for geeks. These devices are delivering on the unspoken promise that the founders of the internet made to the world: information will be freely accessible to everyone, anywhere, and at any time.

HotspotSystem.com vs Coova.net

I run a WiFi hotspot using the ChilliSpot software. I have tried both HotSpotSystem.com’s and Coova.net’s billing services over the course of two (2) years, with about one (1) year on each service. I, unfortunately, did this backwards by moving across to Coova about a year ago in the hope of finding a more manageable service. I am now going back to HotSpotSystem.com.

HotSpotSystem.com has the benefit of allocating people a username and password which they can reuse. This is instead of Coova’s insistence on issuing new access codes, which the user must retrieve from their email before they disconnect, or else buy more time to regain access.

HotSpotSystem.com, however, the last time I used the system, did not allow easy reimbursement of a user following a prolonged network outage, by which I mean when most of a user’s access time has elapsed before services come back online. Coova.net, by virtue of tying directly into your PayPal account, allows refunds in the standard PayPal way. Coova.net also allows arbitrary access to be granted via one-time access codes without requiring any further payment. (I am unsure whether HotSpotSystem.com does this, but I am hoping to be pleasantly surprised to the contrary.)

HotSpotSystem.com, by utilising a proper payment processor instead of relying upon PayPal’s system, has more of a professional e-commerce feel than Coova.net’s system. This improves the image of my company.

HotSpotSystem.com’s reporting features are very good and surpass Coova.net’s by a long shot!

Plus, HotSpotSystem.com allows remote monitoring and command execution on my Access Point(s) with no extra configuration required.

All in all, I prefer HotSpotSystem.com as the more professional system compared to Coova.net, and will be glad once I get the transition back sorted.

OpenSSL, InspIRCd and SymLinks

I’m not sure how long this has been an issue, as it only reared its head after a restart of both my InspIRCd server and the desktop I use to reach said server via XChat for Windows.

I had some problems with failed handshakes from XChat for Windows when connecting to my freshly rebooted server running InspIRCd 1.2. XChat for Windows was reporting:

* Connection failed. Error: [336151568] error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure

My attempts to fix this problem were two-fold. First, I checked that OpenSSL was correctly installed on my server and recompiled InspIRCd to make sure it was linking against the correct library. After restarting the server I still couldn’t connect, so I moved to my second step: verifying that my saved SSL certificate from StartSSL was not corrupted, by removing the file from my InspIRCd folder and replacing it with a symlink to a known-good copy in a different folder.

After verifying that the known-good file was still intact, using the OpenSSL command-line program with the following incantation, I restarted the InspIRCd daemon and tried connecting again.

$ openssl verify -purpose sslserver -CAfile /path/to/intermediate.pem /path/to/certificate.pem

Unfortunately, while the OpenSSL verify command succeeded, I still couldn’t connect to the server and got the same “Connection failed” errors from XChat for Windows. It was at this point, after pulling out some more of my hair, that I decided to reconfigure InspIRCd to point directly at the SSL files instead of the symlinks. Once I had done this I finally got a more usable error message, with XChat for Windows now reporting:

SSL   Verify: [20] unable to get local issuer certificate
* Connection failed. Error: SSL failure
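As an aside, a useful check when chasing the “unable to get local issuer certificate” message is to ask OpenSSL’s s_client what certificate chain the server is actually presenting. The hostname and port below are placeholders for your own IRC server and its SSL port:

$ openssl s_client -connect irc.example.com:6697 -showcerts

If the intermediate certificate is missing from the chain that gets printed, the client has nothing linking the server certificate back to a trusted root, which is precisely what that error complains about.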

More hair-pulling later, I have now replaced OpenSSL on my server with GnuTLS and got as far as XChat for Windows reporting:

SSL   Verify: [19] self signed certificate in certificate chain

I have, at least, managed to discover the cause of my issues: my recent installation of Nmap for Windows, which put OpenSSL libraries on the default Windows search path. This caused XChat for Windows to bypass its default SSL subsystem in favour of the OpenSSL provided by Nmap. It also seems that some other people are pulling their hair out over OpenSSL and StartSSL’s certificates, so at least I can take comfort that I’m not alone. Removing Nmap from my default PATH fixes it on my client for the moment, but I now worry about users on Linux-based systems, where OpenSSL is the usual provider library, hitting the same problem until they specifically tell their IRC client to “ignore invalid certificates”, which opens a huge security hole on their system allowing for MitM (Man in the Middle) attacks.
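If you suspect the same library clash on your own Windows machine, a quick check is to ask where.exe which OpenSSL DLLs are visible on the search path. This assumes a reasonably recent Windows, and the DLL names are the ones the OpenSSL builds of that era shipped under:

> where libeay32.dll ssleay32.dll

Any copy that turns up outside XChat’s own installation directory is a candidate for the same kind of hijacking.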

Open Wireless

I, along with countless others, have opened a wireless network to allow strangers access via an internet connection that I pay for. I charge a modest amount, which helps towards my internet charges (but doesn’t completely cover them), but primarily it’s open to give others access when they would ordinarily be unable to get online, such as when there’s a problem on their own line.

However, my comment today is about an exciting new effort from the Electronic Frontier Foundation, which has published a call-to-arms with the short-term goal of getting more network owners to open a portion of their bandwidth to passers-by, and the long-term goal of creating a new wireless standard that allows for encrypted communications over free wireless networks (free referring to the freedom to connect). The crux of the issue is the need for a new standard that allows anybody to connect to a given wireless network while still maintaining complete security through encryption.

The idea is to allow each third party to connect to the network while being unable to see the communications of other third parties. One way of achieving this, given as an example in the EFF article, uses the SSH protocol as its inspiration: one security certificate is used to create multiple session encryption keys, which are then used by each user. Also in this scheme is the “Trust-On-First-Use” paradigm, which prompts the user to accept the security certificate when they first connect to the station and then uses that certificate as the basis for future un-prompted communication. If the certificate ever changes, the user knows with a high degree of confidence that either the network has changed somehow (e.g. the connected station has changed), or the connection has just been intercepted by a Man-in-the-Middle (MitM) beginning an attack, or a previous MitM attack has just ended.

If I could allow my network’s users to connect in a more secure manner then I would do so. However, as this proposed protocol is only at the planning stage at the moment, and there is no guarantee that a wireless working group would accept it for a future standard, I cannot easily offer encrypted communication via my wireless stations. Ideally, captive-portal suites such as CoovaChilli should provide a means to use 802.1X for RADIUS-backed encryption once a user has a valid credential for the network, especially as CoovaChilli and others are backed by RADIUS anyway.

Another related issue, however, is that I may supposedly become liable for my users’ misbehaviour on the internet via my connection. Technically, I’m an individual on a residential connection and therefore not allowed to resell access, which puts me at odds with my ISP’s T&Cs. Also, while I am technically on a residential connection, am I able to claim that I am an ISP to my clients? That puts me at odds with the legal system. My position is that I am an ISP in the sense that, yes, I do provide an internet service to the users of my network; I also maintain that I am not technically reselling my connection: I am selling my firewalling of the user from internet nasties, and then providing free internet access on the back of that, without resale.
