Social Network Birthdays – Facebook still King, but LinkedIn Rising

Last weekend I celebrated my birthday the only way a tech guy should – by dropping off the grid and enjoying the great outdoors, mostly sans internet connection.

When I came back online I was greeted by that familiar and welcome scene – lots of messages from family, friends and the occasional distant acquaintance wishing me a happy birthday.

While Facebook has long been king of the birthday message, I was surprised to see just how many birthday messages I received this year via LinkedIn. While last year I received one birthday message via LinkedIn, this year I got 22. For a professional social network, and for a person whose birthday falls on Labor Day (a public holiday here in the US), it was really surprising to see the difference a year makes.

Anyway, here’s the split in graphical form. It will be interesting to see what happens next year.


RingCentral “Messages Only” – the simple fix they don’t tell you

I’ve recently been trialing RingCentral, and on the whole I like what I’ve been seeing. One problem had me beaten for over an hour today, though, and it turned out one simple change was all it took to fix it.

I share an office with a number of colleagues who are also using RingCentral, so I was particularly confused when my SoftPhone said “Messages Only”, and yet they were showing “Waiting for Call”.


With my colleagues connecting fine, I was pretty confident it wasn’t our internet connection or router settings. And given the connection worked fine for me last week and I hadn’t changed anything on my computer, I didn’t think I could blame any security or other local software for making a mess.

I went on a wild-goose chase with an Office upgrade (from 2010 to 2013), since for some unknown reason RingCentral uses Office under the hood, and that also didn’t get me anywhere.

Eventually I relented and called their support team. The solution was a 10-second fix – change the Port Number to 5075 instead of the default 5060. It turns out that RingCentral’s SIP servers use a port allocation of 5060-5090, and because I was using the default 5060 my connection had been “corrupted” (this was the word the agent used; I think “overloaded” would be a fairer description). It could also have been caused by my local router getting confused about NAT, but I didn’t stay in my Telecommunications Engineering degree long enough to learn the intricacies of routing with UDP.

Why their “success” help page didn’t contain this suggestion is beyond me. And I’ve lost over an hour getting to this point. Hopefully this helps someone else.


So, in summary, if you’re having trouble with the “Messages Only” status simply try a new random Local Port between 5060 and 5090, and hopefully your connection comes back in 10 seconds just like mine did.
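If you’d rather not pick the new port by hand, here’s a trivial sketch of the idea (the function name is mine, not anything RingCentral provides):

```javascript
// Pick a random local SIP port inside RingCentral's 5060-5090 allocation,
// skipping the congested 5060 default that caused the "Messages Only" status.
function randomSipPort() {
  return 5061 + Math.floor(Math.random() * 30); // 5061..5090 inclusive
}
```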

Disabling Google’s “Video Call” default in Google Apps (admin instructions)

Google are known for their ship-early, ship-often approach, which in many ways is “ship something that is buggy and not very user friendly, and then iterate like crazy, learning from how your users fail with your product to make it better”. The best-known example of this was Gmail (famously in beta for over 5 years), but I think the clearest example of “launch something shit and unusable and push it hard anyway” is Google Hangouts.

Unfortunately, Google’s efforts to push Hangouts before it’s ready are now going beyond just pitch meetings with Google Ventures or merging flaky Hangouts with SMS functionality on Android – now they’re getting in the way of business, and that just isn’t good enough.

Last week Google introduced a new default feature for Google Apps Calendar – when you create a new event, it will automatically add a link to the Google Hangout for the meeting “to save time and reduce last minute confusion and delays”. This is bullshit.

I’ve been using Google Calendar for almost a decade, and I’ve never once forgotten to add a link to a service I’ve never wanted to use. But now that Google is pushing Hangouts down our throats in Calendar, we’re seeing real confusion – our users are now clicking on the Hangouts links in Calendar entries rather than the GoToWebinar or other links we actually use.

Not to mention the prospective confusion of inviting someone to meet for a coffee and them seeing the Hangout link. Yeah, time saving my arse.


Unfortunately, your users can’t disable this feature themselves – only the admin can turn it off (or back on again) for everyone in your account. They didn’t make it easy to find or do though – so after some Monday morning frustration, I decided to get some screenshots and post the instructions here. Enjoy.

Turning it off via the Google Apps Admin Console

Of course, with Google Apps’ new “Look, it is like Windows 8 – everything is easier to use because you can’t find it” Admin panel, turning off this feature isn’t as easy as it should be. So, here’s how.

1. Head to the “Advanced settings” for Calendar in Google Apps

Simply substitute YOURDOMAIN with your actual Google Apps domain:

Once you’re there, you can disable this annoying “feature” and save your preferences.


2. Reload your Google Calendar for the change to take effect

Just saving the change isn’t enough – it won’t apply until you refresh the browser window that Google Calendar is in. You DO NOT need to log off or do any special cache clearing. Win.



Amazon’s ELB implementation is really a Proxy – lessons learned from 1.2MM spam emails in 24 hours

We’ve had a challenging last 24 hours at AffinityLive. After a fairly successful cut-over to Amazon over the weekend, we learned the hard way that Amazon’s implementation of load balancing leaves a lot to be desired.


AffinityLive does a lot of email handling. Not only do our users use AffinityLive to send emails, we also process a lot of incoming and outgoing email for our users so they can use our cool automatic email capture feature. Capturing inbound emails is pretty easy – people set up a forward to a special dropbox account, and since this is on our host and in our domain, it is all easy and part of the normal way email is managed.

Unfortunately, the outgoing email channel is much more challenging. In all but a few isolated situations, users don’t have the ability to set up an outgoing BCC rule to send all their emails to the outgoing log address for capture, so we need to actually act as an outgoing relay for them. For users who use POP3 accounts on generic ISP-provided mail servers, we provide a different outgoing server for them to put into Outlook, and all their connections are authenticated. However, for users of Gmail, there isn’t a “client” per se to reconfigure, so we allow our Google Apps users to use us as a relay server without authentication.

This is a plan we’d had running flawlessly for a couple of years, and it involved using a script to periodically interrogate Google’s SPF records to ensure we had a list of all their sender IP addresses so we could trust them. The script we use is included below in case you want to do something similar some time.

use strict;
use Error qw( :try );
use Mail::SPF;
use NetAddr::IP;
use Scalar::Util qw( blessed );

sub process;

my @domains = @ARGV or die "Usage: $0 DOMAIN...\n";
my $server = Mail::SPF::Server->new();

my @results = map {
    try {
        process $server, Mail::SPF::Request->new(
            identity   => $_,
            ip_address => "",
        );
    }
    catch Mail::SPF::Exception with {
        $@ =~ s/ at \S+ line \S+$//;
        warn "$_: $@";
        ();   # a failed domain contributes nothing
    };
} @domains;

@results or die "No IPs expanded\n";
print "$_\n" foreach @results;

use constant DEBUG => $ENV{DEBUG};
sub debug { DEBUG and warn @_ }

# Look up $type records for $domain, optionally capped at $max answers.
sub dns_lookup {
    my ($server, $request, $type, $domain, $max) = @_;
    debug "DNS: $type => $domain\n";
    my $packet = $server->dns_lookup($domain, $type);
    my @rrs = $packet->answer or $server->count_void_dns_lookup($request);
    debug "... ", (map { $_->string } @rrs), "\n";
    @rrs = splice @rrs, 0, $max if defined $max;
    grep { $_->type eq $type } @rrs;
}

# Walk an SPF record, expanding ip4, include, mx and a mechanisms into networks.
sub process {
    my ($server, $request) = @_;
    debug "SPF: ", $request->identity, " from ", $request->authority_domain, "\n";
    my $record = $server->select_record($request);
    my @terms  = $record->terms;
    my @results;
    while (my $term = shift @terms) {
        debug "Term: $term\n";
        my $domain = $term->domain($server, $request);
        for (blessed $term) {
            /^Mail::SPF::Mech::IP4$/ and do {
                push @results, $term->ip_network;
                last;
            };
            /^Mail::SPF::Mech::Include$/ and do {
                push @results, process $server,
                    $request->new_sub_request(authority_domain => $domain);
                last;
            };
            /^Mail::SPF::Mech::MX$/ and do {
                push @results,
                    map { $_->address }
                    map { dns_lookup $server, $request, A => $_->exchange }
                    dns_lookup $server, $request, MX => $domain,
                        $server->max_name_lookups_per_mx_mech;
                last;
            };
            /^Mail::SPF::Mech::A$/ and do {
                push @results,
                    map { $_->address }
                    dns_lookup $server, $request, A => $domain;
                last;
            };
        }
    }
    @results;
}

By asking Google regularly about the servers they send from and assert are their own (via their SPF records) we could confidently trust that email from them could be relayed (on behalf of our users).

The other type of email that we’d relay and trust without needing a specific user/pass for AUTH is internal email – the emails that come from the servers in our cluster. Whether they’re reports from cron or puppet or nagios, emails that originate locally need to be trusted, and to keep a handle on things (and keep firewall rules tight) we made sure that all of the servers in our cluster would forward email through our mail servers before it was delivered to the outside world – ie, relayed.

When we moved to Amazon, though, this plan fell apart, and in the last 24 hours a spammer took advantage of the trusted relay rules to send out 1.2MM spam emails via our infrastructure – all because Amazon’s load balancers aren’t actually load balancers, but instead operate like proxies. Taking email out of the load balancer arrangement solves part of the problem, but the other challenge is that our users set their outgoing relay server to be their own AffinityLive domain, which resolves to the load balancers… and when those connections get forwarded through to the mail servers in the cluster, we couldn’t use the Google IP address trust since the Amazon load balancers were stripping the details of the original sending IP address.

AWS’s Elastic Load Balancers – they’re actually proxies

When a request comes into an Amazon AWS load balancer (called an ELB, or Elastic Load Balancer), it is passed on to the internal hosts (we have a bunch of them for horizontal scalability) and these hosts then process it.

With a normal load balancer (like the load-director package we were using), the traffic is passed across to the back-end machine with the originating packet/sender intact. This meant that tests like checking the host is one of the trusted Google hosts or one of the trusted internal hosts before relaying worked perfectly.

Unfortunately, though, Amazon doesn’t work this way – it actually proxies the request and on the inside network everything looks like it has come from a trusted, internal server – the AWS load balancer.

It was this mistake in understanding – that Amazon doesn’t actually act as a load balancer but rather as a proxy – that led to us being used by spammers as an open relay: with the load balancer on our trusted internal network, spammers were able to send mail to our load balancer and we’d send the emails out on their behalf.
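To make the failure mode concrete, here’s a sketch of the kind of CIDR-based trust check we were relying on (the helper names and address ranges are illustrative, not our actual rules). Behind a packet-forwarding balancer the peer IP is the real sender’s; behind a proxying ELB the peer IP is always the balancer’s own internal address, so the check degenerates into “trust everyone”:

```javascript
// Convert a dotted-quad IPv4 address to a 32-bit integer.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc * 256) + parseInt(octet, 10), 0);
}

// True if `ip` falls inside the CIDR block, e.g. "10.0.0.0/8".
function inCidr(ip, cidr) {
  const [base, bits] = cidr.split('/');
  const mask = bits === '0' ? 0 : (~0 << (32 - parseInt(bits, 10))) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}

// The relay decision as we had it: trust internal hosts plus the ranges
// expanded from Google's SPF records (illustrative ranges only).
const trusted = ['10.0.0.0/8', '74.125.0.0/16'];
function mayRelay(peerIp) {
  return trusted.some(cidr => inCidr(peerIp, cidr));
}

// Packet-forwarding balancer: peer IP is the real sender, so a spammer
// at 203.0.113.9 is refused. Proxying ELB: every connection arrives from
// the balancer's internal 10.x address, so mayRelay() always returns true.
```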


We’ve since patched this hole, and we’re working on a solution to allow Gmail to be a trusted host that we’ll relay for using their TLS certificates (which don’t care about origin IP address); I’ll update this post with this work around once we’ve finalized it.

Easy Cloud Storage Integration with InkFilePicker and Google Picker

TL;DR: if you have a web (or mobile) app and you want to make it easy for your users to import/sync files from their Dropbox, Box, Skydrive, Evernote, Facebook and more than half a dozen other sources, you can now use a single “proxy” service, InkFilePicker, and easily pick, import and then update/export files from and to these services into your web app with just a few lines of JavaScript. Unfortunately, the InkFilePicker model is well suited to importing/copying files and not as well suited to linking to them, which is essential if you want to make it easy for users to click through and see/edit files in Google Drive, but by combining InkFilePicker with Google’s Picker product you can get the best of both worlds. This post shows you how I did it in a PoC sense with AffinityLive.


One of the features of AffinityLive that we’ve been meaning to upgrade for a while is our integration with cloud storage vendors. Dropbox, Box, Google Drive, Skydrive, Evernote… the list of popular cloud storage platforms where our clients store their proposals, project files, contracts and more continues to grow – in size and in popularity.

One of the challenges we faced when looking at the best way to integrate was the inverted mental model most of these platforms use. While in AffinityLive a project is a shared collaborative space for internal (and in some circumstances, external) users, the model around almost all cloud storage services is personal and private. While folders in Dropbox can be shared (and the view of them is consistent), in Google Drive the way I arrange my files in folders is very different to the way someone else may arrange those exact same files in their own folder view. While our proof-of-concept work included creating a named/defined project folder in Google Drive, for example, the fact we had to create it in a single user’s account and then share it – rather than it being a common, organizationally owned folder with access permissions – struck us as a real square-peg-in-a-round-hole issue, and synchronization was always going to be painful and scary.

The other practical challenge is that there are so many cloud storage providers and the slight differences in use cases and focus areas means that users are unlikely to use just one of them. Even if a company decides they’re going to focus on Box, there’s a good chance some users are going to use Evernote’s awesome mobile note taking features to store meeting notes and ideas there… declaring a single, two way sync platform and forcing everyone to use it seemed like something from 1990’s enterprise tech and not realistic or desirable in this age of consumerization of the enterprise.

And finally, even if we could come up with a magical many-to-many sync which handled access controls and conflicts across half a dozen services, there’s also the commercial consideration – a solid two-way integration would realistically take an engineer 3-4 weeks to do properly, and if you multiply that time out by the (growing) number of popular services, if we started today we might not be finished for a (wo)man-year of engineering time.

But, we still wanted to give our users a convenient way to bring their files from their cloud storage providers into AffinityLive. What is to be done? The answer, it turns out, is InkFilePicker (with a Google Drive specific twist).

Our Model/Pattern

This post won’t go into the specifics of the InkFilePicker options/docs – you can see them yourself on their great developer site – but it is worth sharing our model/pattern of file storage and sharing for context.

AffinityLive uses a folder and file model to make it easy for our users to share files against clients, sales, projects, issues and retainers. Until now, files in AffinityLive have come in from four places:

  • Files uploaded by users through our web attachments interface. This has been pretty kludgy and uses the traditional “browse” and upload model.
  • Files attached by users through our web activities interface. Similar to composing an email, our inbox and activities screens allow users to attach files via an AJAX model when they’re making a note, writing an email or logging time.
  • Email attachments captured in automatic email tracking. This way, if a client sends through a reply with a marked-up attachment, we’ll automatically store that attachment against the client, project or whatever their reply related to.
  • Files uploaded via our Forms API. In cases like the Angel Group (who have a job application page on their website), public users can upload files through a web form and have them go against a client, project, etc.

In all of these cases the files are stored in folders related to the project, issue, client etc, with the ability to create sub-folders to keep information organized.

When it came to planning our integrations, we had three (not always mutually exclusive) choices:

  1. Delegate storage to a designated cloud provider on a per-object basis.
  2. Link to objects that remained in the source cloud storage provider.
  3. Import (and sync) objects from the cloud storage provider into AffinityLive’s own storage model.

The first two options have a lot of appeal, but they have shortcomings.

Delegation meant choosing a single cloud integration and limiting users to only using that. If you chose Box, for example, and uploaded a file to AffinityLive using one of the four interfaces above, we’d simply push that file across to Box. Want to connect to Evernote? Bad luck.

Linking to objects had the advantage that our users could link to anything and everything. But it meant that functionality – like attaching a file to an activity you were about to send to a client, which is super common for our users – would become confusing, frustrating, difficult or impossible. Your client probably wouldn’t have direct rights to see it in your Dropbox (and your colleagues might not either) so we’d need to be messing with complex ACLs… a problem that gets all the more complicated as you add in more services. So, the seemingly short-cut approach of the “paste in the link to view in the web interface” model quickly becomes a nightmare.

Importing, while facing its own challenges, is actually the model we chose to go with. It means that a user chooses to add a file to their AffinityLive project from Dropbox, and from then on it lives against the AffinityLive project. Makes sense – nothing confusing around ACLs, and a consistent shared project file space. You can bring in files selectively from anywhere without worrying about sharing too much or having us try and swap/switch the model of the cloud storage provider from a user-centric private model with explicit sharing to an organization-centric collaborative model with ACLs. There are a couple of downsides with this approach, however, mainly around synchronizing the file changes that may occur in AffinityLive back into the cloud provider in question.

InkFilePicker Integration

The InkFilePicker integration is actually exceptionally easy to get started with. This is what we chose to do:

  1. Use the Pick JavaScript API. This makes it easy for your users to pick a file from their cloud platform, which is then downloaded to the InkFilePicker servers and stored at a permalinked URL. Since we’re handling our own storage we didn’t use the S3 integrated Pick and Store, but if you are you might be able to save a step below.
  2. Get the permalink & metadata. When the filepicker.pick command returns, we fetch the metadata (it would be great if they made it an option to return this detail in the pick command to save the extra hit) and then POST via AJAX to our API.
  3. Import/download/create the file to AffinityLive. On our API end, we fetch the file from InkFilePicker and store it in our AffinityLive storage model.

Here are the relevant code snippets. Note that this is rough PoC code, not final code, but ironically it is likely easier to follow than our final optimized code will be.


function pickFile(collId, colDepth) {
  // The collId tells AffinityLive the collection we want to use
  // The colDepth is a visual feature to indent the margin on the row in our PoC
  // (pick options omitted here; the callback fires with the picked InkBlob)
  filepicker.pick(function (InkBlob) {
    var permalink = InkBlob.url.split('/').pop();
    // Build a new row for the resource.
    // NB: during this process it is still being pulled back as a resource into AffinityLive
    var resourceTemplate = $('#clone_' + collId);
    var newResource = resourceTemplate.clone().show();
    newResource.children("td[title='Resource Title']").text(InkBlob.filename);
    var newRow = resourceTemplate.after(newResource);
    filepicker.stat(InkBlob, function (metadata) {
      $.ajax({
        type: "POST",
        beforeSend: set_ajax_api_key,
        url: api_base + '/key/resource',
        data: {
          service: 'filepicker',
          action: 'import',
          url: InkBlob.url,
          collection_id: collId,
          title: metadata.filename,
          key: permalink
        },
        success: function (data, code) {
          newResource.children("td[title='Resource Title']").text(data.title);
          newResource.children("td[title='Size']").text(Math.round((data.filesize / 1024) * 100) / 100 + ' KB');
          newResource.attr('id', 'resource_' + data.id);
        }
      });
    });
  });
}

Note that the DOM we’re writing to is a simple three-column table with the folder/file name, the size of the file and a column of icons for edit/delete etc. The thing to note is that we create the row and update the title when we get the value back from InkFilePicker, and then we update it with size and ID information once it has been saved to AffinityLive.
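For reference, the size cell uses a two-decimal KB rounding; pulled out as a standalone helper (the function name is mine, not part of the PoC), it’s just:

```javascript
// Format a byte count as KB, rounded to two decimal places – the same
// Math.round((filesize / 1024) * 100) / 100 expression the success handler uses.
function formatKb(bytes) {
  return Math.round((bytes / 1024) * 100) / 100 + ' KB';
}

formatKb(2048); // -> "2 KB"
formatKb(1000); // -> "0.98 KB"
```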

API Back-End

This back-end snippet (in Perl) shows how we’re doing the import from InkFilePicker using LWP and then storing the file in our resource area.

my $collection_id = $cgi->param('collection_id');
my $collection = IRX::Management::Collection->new($context, $collection_id);
return Apache2::Const::DECLINED unless $collection_id && $collection->get('id');

# Download the picked file from InkFilePicker into a temp file
my $ua = LWP::UserAgent->new();
my $tempfile = $cgi->param('key');
$tempfile =~ s/\W/_/g;
$tempfile = '/tmp/filepicker-' . $tempfile;

my $req = HTTP::Request->new(GET => $cgi->param('url'));
my $res = $ua->request($req, $tempfile);
if ($res->is_success) {
    my $resource = $collection->build_resource($tempfile, $cgi->param('title'));
    $resource->set('service', 'inkfilepicker');
    $resource->set('service_id', $cgi->param('key'));
    return Apache2::Const::OK;
}
else {
    $logger->debug(sprintf('Fetch from filepicker failed: %s', $res->status_line));
    return Apache2::Const::DECLINED;
}

Google Picker Integration

The problem with InkFilePicker, unfortunately, is that it exists solely to find and export files. Which means, if the user is using Google Drive for their file picking, it will export the file from Drive, rather than provide a link to view/edit the file. This means, when you Pick a Document in Google Drive, you’ll get a .docx export!

Thankfully, Google have their own Picker solution, and though it is harder to find now (they’re giving the Google Drive SDK all the top billing) it is still out there and perfect for this situation.

To find out more, check out Google’s Picker developer documentation.

The first thing to note about Picker is that it is designed to allow picking from a LOT of Google services. Image search, Picasa, you name it, it’s there. Unfortunately, if you’re just looking for Drive (which I was) the default mode isn’t ideal.

What worked best for us was:

  1. Turn off the left hand pane. If you’re only using one service (Drive) it is a waste of real-estate.
  2. Set the scope to be DOCS so you’re just showing the user files from Google Drive. While we wanted to browse on a folder view, you will want to use DOCS to get everything (otherwise you miss out on files in the root folder).
  3. Get the user to OAuth (if you can) so you can show the *right* Google Drive. For users who might be logged into their browser with both their own Google Account and their corporate/business Google Apps domain, this is important; otherwise the Picker will just show the files in the first account they logged into.


The JS we used (again, this is ugly PoC shit but you get the point) is as follows:

var developerKey = 'YOURKEY';

// Create and render a Picker object for browsing Google Drive.
function pickDrive(collId, colDepth) {
  function pickerCallback(data) {
    if (data[google.picker.Response.ACTION] == google.picker.Action.PICKED) {
      var doc = data[google.picker.Response.DOCUMENTS][0];
      var driveUrl = doc[google.picker.Document.URL];
      var driveId = doc[google.picker.Document.ID];
      var driveName = doc[google.picker.Document.NAME];
      var driveType = doc[google.picker.Document.MIME_TYPE];
      var driveServiceID = doc[google.picker.Document.SERVICE_ID];
      var resourceTemplate = $('#clone_' + collId);
      var newResource = resourceTemplate.clone().show();
      newResource.children("td[title='Resource Title']").text(driveName);
      var newRow = resourceTemplate.after(newResource);
      $.ajax({
        type: "POST",
        beforeSend: set_ajax_api_key,
        url: api_base + '/key/resource',
        data: {
          action: 'import',
          service: 'googledrive',
          url: driveUrl,
          key: driveId,
          collection_id: collId,
          mime_type: driveType,
          title: driveName
        },
        success: function (data, code) {
          newResource.children("td[title='Resource Title']").text(data.title);
          newResource.children("td[title='Size']").text('Google Drive');
          newResource.attr('id', 'resource_' + data.id);
        }
      });
    }
  }

  var picker = new google.picker.PickerBuilder().
    addView(google.picker.ViewId.DOCS).              // DOCS scope: everything in Drive (step 2)
    enableFeature(google.picker.Feature.NAV_HIDDEN). // hide the left-hand pane (step 1)
    //setOAuthToken(AUTH_TOKEN). // This is where you force a specific Google User account to be used
    setDeveloperKey(developerKey).
    setCallback(pickerCallback).
    build();
  picker.setVisible(true);
}

Back-end AffinityLive API

In our back-end we wanted to distinguish between a Google Drive element that was editable and one that was merely a stored file (like a PDF). The current approach isn’t final (it’s a hack), but one way is to interrogate the mime-type for a Google-specific prefix. The other option is to use the other data that Google Picker returns – the google.picker.Document fields in the snippet above give a sense of what comes back.

Another comment – we provide a desktop-mountable interface to the AffinityLive file system, and we wanted to make it possible for people who use Google Drive and have installed the desktop sync application to click on a link and open the file in Drive. That’s why we’re creating the .gdoc and .gsheet files (which contain a very simple plain-text JSON payload that tells your OS where to open the file on Google’s servers).
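The payload itself is tiny. Here’s a sketch in JavaScript of what ends up inside one of those launcher files (the inputs are illustrative; the Perl handler below builds the same string with sprintf):

```javascript
// Build the plain-text JSON payload stored in a .gdoc/.gsheet/.gslides file.
// The OS-level Drive sync client reads this and opens the document at `url`.
function driveLauncherPayload(url, mimeType, key) {
  return JSON.stringify({ url: url, resource_id: mimeType + ':' + key });
}

// e.g. driveLauncherPayload('https://docs.google.com/document/d/abc/edit',
//                           'application/vnd.google-apps.document', 'abc');
```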

my $collection_id = $cgi->param('collection_id');
my $collection = IRX::Management::Collection->new($context, $collection_id);
return Apache2::Const::DECLINED unless $collection_id && $collection->get('id');

my $title = $cgi->param('title');
my $url = $cgi->param('url');
my $type = $cgi->param('mime_type');
my $key = $cgi->param('key');
my $tempfile = undef;

# Editable Google docs get a small launcher file; everything else is a link
if ($type =~ /^application\/vnd\.google-apps\.(.+)$/) {
    my $gtype = $1;
    my $resource_content = sprintf('{"url": "%s", "resource_id": "%s:%s"}', $url, $type, $key);
    $tempfile = $key;
    $tempfile =~ s/\W/_/g;
    $tempfile = '/tmp/gdrive-' . $context->get('system_domain') . '-' . $tempfile;
    open GFILE, '>', $tempfile or die "Can't write $tempfile: $!";
    print GFILE $resource_content;
    close GFILE;
    if ($gtype eq 'document') {
        $title .= '.gdoc';
    }
    elsif ($gtype eq 'spreadsheet') {
        $title .= '.gsheet';
    }
    elsif ($gtype eq 'presentation') {
        $title .= '.gslides';
    }
    else {
        $logger->debug(sprintf('I don\'t know how to handle a google doc mime-type of %s', $type));
    }
}

if ($tempfile && -e $tempfile) {
    my $resource = $collection->build_resource($tempfile, $title);
    $resource->set('url', $url);
    $resource->set('service', $service);   # $service is set earlier in the handler
    $resource->set('service_id', $cgi->param('key'));
    return Apache2::Const::OK;
}
elsif ($title && $url) {
    my $resource = IRX::Management::Resource->new($context);
    $resource->set('collection_id', $collection_id);
    $resource->set('title', $title);
    $resource->set('content', $url);
    $resource->set('url', $url);
    $resource->set('service', $service);
    $resource->set('service_id', $cgi->param('key'));
    $resource->set('owner_id', $context->get_current_user_id) if ($context->get_current_user_id);
    return Apache2::Const::OK;
}
else {
    $logger->debug(sprintf('Can not create a Google Drive link without a title (%s) and a url (%s)', $title, $url));
    return Apache2::Const::DECLINED;
}

End Result

The end result is a powerful set of import and connection features to over a dozen cloud storage providers with Google Drive getting special attention because of its in-line editing and cloud creation processes.

Future work around synchronization and integration with Office365 is also on the cards – we’re looking forward to shipping this new feature in a month or so with a brand new Angular-built attachments tab in AffinityLive proper.


Deception & Denial – Commercial Aviation's Two Worst Habits

I’m currently sitting in the transit area at Auckland airport, having missed a connecting flight that would have had me home by now. Instead, I’m left reflecting on how broken commercial aviation’s response is to the natural problems that occur when you combine machines, weather and people with tight schedules and connections.

TL;DR: commercial aviation consistently jumps immediately to deception and denial whenever something happens that causes problems for their customers. They might have gotten away with it in an age where people had no choice but to be left in the dark without internet, email and social media, but increasingly their policies of outright lying to their customers are causing their brands a lot of damage. The only thing saving their lying arses at all right now are the difficulties of communicating due to unnecessary bans on using mobile technology in-flight and the prohibitive cost of international roaming when you’re on the ground. These will change, and then they’re going to be *really* screwed.

How many times have you had your travel plans screwed up by a “mechanical issue” or “operational issue”? It always seems there’s a mechanical issue behind your flight being cancelled or delayed, or some other setback. Of course, when this is true, it is something you almost appreciate happening, even though you’ve just missed your connections and will have to spend hours sleeping on the floor of an airport. “Well, at least they found out and cancelled the flight and didn’t wait to realize until we were airborne – we could have crashed!”. Unfortunately, though, this catch-all excuse is often a complete lie. Here are a couple of examples I’ve had; my friends in the industry tell me this is very, very common.

The “Mechanical Failure as an excuse for Overbooking” Deception

“Sorry sir, there is a mechanical problem with the aircraft – we’re going to need to move you over to X and your direct flight is now going to take twice as long”.

This happened to me a couple of years ago on a flight from Sydney to San Francisco with United. The truth went more like this:

  • We have a policy of over-booking aircraft because we put company before customers and need to squeeze money out of every last seat mile.
  • Since we screwed up again, and you’re stupid enough to show up early to check in, we’re going to need to inconvenience you and send you via Auckland and LAX before you eventually get to SFO, so we can save money on paying other passengers compensation or providing upgrades to business class.

Having been told a routinely full 747-400 is not making its daily flight, I said “Wow, lucky I got here early – can you book me through on the direct flight to LAX instead?”

Realizing that treating customers like cattle and lying wasn’t going to work first time like it usually does, and knowing that the LAX flight was also over-booked, I was told “Actually, there’s a mechanical problem with that aircraft too, and it has also been cancelled”.

It wasn’t until I was passing through the security checkpoint with my 3 boarding passes in hand, staring at a much longer trip, that I saw the United crew in uniform going through. They only have two flights out of Sydney each day, so I figured, oh, amazing, they fixed the plane. I started talking to the crew: “Oh, I thought that your aircraft went U/S” (which is aviation talk for unserviceable, or “broken”), to which they replied “No, it hasn’t been U/S at all?”. When I got to the gate I confronted the check-in staff who had lied to me, who then admitted that, yes, they had lied to me, “but bad luck, your bags are already on that plane over there and we can’t move them”.

Lesson: if they tell you there’s something wrong with the plane, ask them what it is. When they say “I don’t know”, assert that you think they’re lying to you again, that they’ve just overbooked the flight, and that you expect to be put on the flight you booked or upgraded to business on an alternative. Remember, they deliberately fucked you over due to their policies of maximizing their return at your expense, so don’t be afraid to turn the tables.

The “Mechanical Failure as an excuse for Under-Filling and Combining” Deception

This is another favorite that happens quite a bit on busy routes (such as in the US). The airline sets its schedules months and months out. Then, through some hard-core math and financial engineering known as yield management, they try to strike a balance between filling the aircraft and getting the highest price, moving prices based on how far out the flight is and a lot more.

Of course, when the time comes for the plane to fly, the yield management guys might not have done a very good job, and the airline is looking at a half-full flight scheduled to leave at 1pm. They know they’ve got another flight which is half full (or more) leaving at 5pm, and since it doesn’t cost them anything to keep you waiting in the airport for 4 hours, they tell you there’s a mechanical problem with the aircraft and the 1pm flight is cancelled.

So you spend 4 hours waiting in the terminal, missing connections, inconveniencing family members who are coming to pick you up, all so they can maximize their per seat mile revenue completely at your expense. Remember, the flight that didn’t leave didn’t need a crew, didn’t burn jet fuel, and the engines weren’t spinning, so they’ve been able to delay critical maintenance. Their win, completely your loss.

Lesson: if they tell you the flight is cancelled due to a problem with the plane, ask them what it is. When they say “I don’t know”, assert that you think they’re lying to you again and that they’ve just cancelled the flight to combine it with the later one to make more money and ruin your day. Of course, complaining isn’t going to uncancel a flight, so tell them you expect to be upgraded to business and hooked up with lounge access, or given a cash travel voucher as compensation. Remember, they deliberately fucked you over due to their policies of maximizing their return at your expense, so don’t be afraid to turn the tables.

The “Due to operational requirements there’s a change of plans, but don’t worry, we’ll have things organized for your connections” deception

This was today’s doozy with AirNZ, who until today I’d held in high esteem. The flight was from SFO to AKL and on to SYD, a route I’ve flown at least half a dozen times. This time, though, there was a problem – “Operational Requirements” meant we had to go via Fiji.

What were the “Operational Requirements”? In this case, there wasn’t much the airline could do; there are limits on the amount of time air crew can spend on duty to help combat fatigue in what is already a pretty stressful (and potentially fatal) workplace environment. In this case, there was a delay the previous day (or two days ago?) on the flight leaving Auckland for SF, and the crew would have gone over their duty hours (a big legal deal) if they tried to fly all the way to Auckland. So, we diverted to Fiji, where AirNZ had flown another crew the day before to swap with our original crew and continue the flight to Auckland.

This means we got into Auckland 2.5 hours late, and probably 200 or so people needed to have their onward connecting flights rearranged. Never fun, but totally predictable – the airline knew more than 20 hours earlier that this was going to happen, and they knew exactly who they needed to rebook and move around.

As a result, they promised passenger after passenger in SF that things would be OK; the mum who’d just flown from NY with her three kids and still had to get to Melbourne was assured they’d booked her through on a Qantas flight because the AirNZ connection would be missed. Ditto for the guy heading to Adelaide. For those of us flying through to Sydney on AirNZ, it should have been a simple matter of printing our boarding passes at some point in the intervening 20 hours.

Of course, this isn’t what happened. We all lined up for over an hour as a completely predictable workload was handled by too few staff, who made up for their small number by being extra rude. Only after everyone was in line did they realize – again, completely predictably – that the 1pm flight wasn’t going to fit everyone, so they then had to work out what new plane they were going to use instead. Chaos continued for a few more hours. The people at the lounge disqualified my lounge passes on a technicality, but they did offer $12.50 worth of Burger King to say sorry.

Lesson: when they say “operational requirements”, find out what the truth is (after all, weather and safety rules do routinely mess up the aviation business), and then, when checking in, try to get your booking changed right there and then and an onward boarding pass issued.

Deception and Denial don’t work when we have a voice

The marketing and sponsorship teams of airlines spend hundreds of millions of dollars every year promoting their brands. But in a world of Twitter, Facebook and other forms of social media, decisions to deliberately inconvenience or visit chaos upon their customers in the service of saving a bit of money can undermine that brand building, and fast.

Only by asking questions and challenging their standard operating procedure of deception and denial can we have a voice, and the more of us that speak, the more we’ll undermine the bullshit their brands are built upon. Some airlines already get it – Virgin America is the most honest airline I’ve ever dealt with, and their brand is reinforced every time my friends and I interact with them in person – but for those who don’t, ask questions, demand answers and do something cattle don’t do – say something.

WTF America?!?

How is it that the world’s most prominent democracy is so incredibly bad at being, well, a democracy?

8 years ago I looked on from Australia in shock with the rest of the world as America re-elected probably the least competent and most destructive person ever to sit in the White House. After feeling a strong sense of empathy post 9/11, I joined the rest of the world in thinking in 2004, “well, you voted for this moron again – you made your bed, so lie in it”.

4 years ago I looked on from Australia – this time having spent a bit of time living in the US – with a sense of pride and hope that the mistakes of 8 years earlier, which clearly set America and the free world back almost a decade, could be righted. And the American voting public delivered in spades, but as we all know, the damage had been done. Obama promised change, but in retrospect he should have promised harm minimization. The last 4 years haven’t been pretty.

This year was different. I’ve been living in the US for 15 months, paying taxes and being part of society here, and while I’m a lot closer, I’m strangely more distant. This stuff now actually affects me on a daily basis, but I’m noticeably excluded as a non-citizen. The natural human response to rejection is to say “well, screw you too then, electoral system that won’t have me – I’ll ignore you too”. I’ve been too busy to be a political junkie, and anyway, this has to have been one of the least inspiring, least visionary campaigns in memory. Better just to tune out, especially when in our house we just stream TV (no political ads).

But the things you notice actually being here are really amazing. And disturbing. Seriously, America, what the fuck is going on with a nation that sees itself as the leading light of democracy and freedom?

Here are a few real and jarring things I wouldn’t have appreciated if I hadn’t been here to see them for myself. These stand out in a sea of things that wouldn’t be so hypocritical if this country hadn’t spent so much blood and treasure promoting democracy elsewhere in the world.

Touch-screen voting machines, but forget technology in voting.

So, there’s been a bit of a stir today about voting machines that wouldn’t let you select Obama and instead defaulted to Romney. Amazing seeing life imitate art, but not super surprising.

Coming from little old Australia where we use paper and pencils to vote, I figured that America was just ahead of the curve, wrinkles, bugs and all.

Turns out: not so much.

One of my colleagues wanted to vote today; in a busy start-up time is tight, but she was free to head off and make sure her opinion counted whenever she wanted to go.

With more than 2 hours until polls closed, I asked her if she needed to take off and she mentioned she’d already given up. “I’d have to head back to Berkeley where I’m registered from when I was in college last election if I want to vote, and there isn’t time now”.

Now, remember, this is a first world country. One that insists that citizens and even non-citizens get a unique Federal ID number so we can all be tracked and identified (the Social Security Number). What do you mean you can’t just go down to ANY POLLING PLACE IN THE COUNTRY, give your details and a photo ID, and vote? WTF America? Sarah was in the same state, but a different county, and unless she’d planned her time out more than a week in advance to ask for and return a postal absentee vote, she was locked out.

Even if you are prepared, voting can cost you $50 or more. Another friend who grew up in Boston wanted to vote. She asked for the absentee voting papers to be posted to her new place in California. The authorities got them to her last Thursday. She filled them in and realized that she would have to pay $50 to make sure they would get back to Boston in time to count before the cut-off. WTF America? If people can vote on faulty machines, why can’t they vote online? Even if that is too hard, why the hell does a piece of paper need to be mailed overnight express to the other side of the country in a FEDERAL ELECTION?!?

4 hour waits to vote? It isn’t like today was unexpected!?!

So, the systems suck unless you never move and make sure your registered home address is a stone’s throw from where you actually live. OK, fine. But why are there 4+ hour lines to actually vote on a day they’ve known was coming for 4 years!?!

In Australia, I’ve waited no more than 20 minutes to vote in an election – ever. You rock up at the nearest school/library, run the gauntlet of people trying to give you propaganda, cross your name off the list (or go over to the out-of-town queue and cross it off a national database), and you vote. Done. Easy. Fast.

Instead here it seems the parties get to play special games. If you’re in a county/state controlled by the blue/red guys but you live in an area controlled by the opposite color, good luck.

Long queues, few booths, few election officials. Making it hard to vote and reducing turnout for the other guy is seen as a legitimate tactic. WTF America? I’m all for electoral and campaigning tactics, focus groups and marketing your message to give your team an advantage, but deliberately starving certain populations of resources to deny them the right to vote without standing in the sun/snow for 4 hours is bullshit. This isn’t about promoting/spinning a message. This is about taking away the right of fellow citizens to vote in a democracy. In the memory of every fallen American who died trying to bring democracy to Iraq/Afghanistan/Eastern Europe/South Korea/Vietnam, you should be ashamed of yourselves.

Branch stacking is nothing. Try population stacking.

From the outside, you might be wondering why there are so many elected nut-jobs in America. The short answer is because the way they carve up electorates / seats / districts encourages it.

In Australia, there’s a concept known as branch-stacking. Primaries (known as pre-selections down-under) are where a party chooses their person to stand as their candidate in the election.

In seats / districts that are tight (known as “swing” here), candidates are selected by their party because they’re the best – they’re respected, they have broad appeal and can work to represent all of their constituents.

In seats that are safe (known by their color here: red = Republican, blue = Democrat), the battle isn’t for the public ballot – that is pretty much a sure thing. Instead, the battle is for the party base, which encourages nut jobs on the fringe. This happens in all representative democracies, but in the US it is a LOT worse.

Instead of having an independent umpire draw the lines on the map to say who is in what Seat / District, here they’re drawn (predominantly) by the party in power at the State level. They’re political in nature, and the result is frightening. If you’re a Democrat and you’re in power, and you know a particular part of your district votes Republican, you can change the boundaries so that those people are included in a different district, one that is more Republican. That way, you don’t need to keep them happy, and your seat / district is safer. Funnily enough, the other guys don’t usually mind – they also get to focus on preaching to the choir, rather than coming up with policies that make the county/state/country better.

This is how the Tea Party went from being a bunch of angry flat-earth types and became a political force. Ignorant, simplistic, selfish and deluded, instead of being marginalized as fringe outliers these crazy buggers rose to prominence by eating their own – incumbent Republicans – because they knew they just had to energize their own team by being more extreme than their Republican brothers. The Democrats have their own fair share of nutters too of course – the Tea Party have just done a better job of showing us how mad they are.

Growing up in Wollongong – the equivalent of a super blue safe seat for the Democrats – I got to see up close just how much of a cancer this one-sided electorate could be. Words like corrupt, incompetent and out of touch come to mind. Populated exclusively by candidates who are former union officials – reportedly the 2nd least trusted “profession” in Australia – the “safe” status of the Wollongong region has led to all sorts of dodgy back-room deals where the decisions about who gets to keep power have nothing to do with the people. This in turn has fostered a strong sense of neglect and mistrust in the electors – why be engaged when you know a corrupt and incompetent candidate is going to get re-elected to “represent” you anyway?

I’ve always felt this was the rotten heart of the city where I grew up, and I wish it wasn’t like that, but almost all of this country works this way – and not because of demography, but because of the artificial construction of electoral districts.

Fucked up, rotten priorities

Sergey Brin was right. This system has forgotten what democracy is – it has become just the parties fighting each other, with the public reduced to a way of tallying who’s winning and losing, while all of us lose.

If this system spent on polling booths and independent redistricting commissions a fraction of the money it spends on TV commercials and lawyers trying to exclude others from voting, America would be in a much more legitimate position to take its place as a leading light of democracy.

Unfortunately, today, the electoral and representative landscape here better represents something that rhymes with democracy: hypocrisy. WTF America?