echo "hey, it works" > /dev/null

just enough to be dangerous

There will be disaster


It doesn't matter how good your team is. It doesn't matter how good your documentation is. It doesn't matter how well you've planned your deployment or how much you've spent on your hardware. In fact, no amount of preparation can change the fact.

There will be disaster.

Your data will be corrupted. Your infrastructure will fail you. Someone else's infrastructure will fail you. Your database will die. Your code will crash. Choose at least one. More likely, it will be all of them, at one time or another, and probably some other things as well. Of course, all that preparation can avoid some disasters and help you recover from them faster when they do happen, but you won't avoid them completely.

What matters is how you handle things when the disaster happens. And really, there are only two things that are important.

Tell people. Stay calm.

Tell people

There are people other than you affected by your disaster, and you should let them know that something isn't quite right, and you should do it as soon as possible. It's incredibly frustrating to have shit going wrong and have no idea why, so those folks are much less likely to begin to hate you if you talk to them. Not only will they be less inclined to hate you, they'll also be much less likely to try to contact you, which will give you more time to focus on finding out what is actually wrong and fixing it.

I'll admit I've been guilty of thinking, "I'll just work out what's happening, then I can tell X" but it is a Bad Idea. Tell them something is wrong immediately, tell them you've worked out what it is and you're making a plan to fix it, tell them you're working on it, tell them you're testing the fix, tell them it's fixed. Tell them at least enough to keep them happy and keep them off your back.

Stay calm

Keeping your calm when the shit hits the fan lets you focus on working out what's going wrong and making a plan to fix it. Use the scientific method: gather data, develop a theory, test your theory, evaluate. Repeat. That's really hard to do when you're the opposite of calm. Jumping from incomplete data to half-arsed theory isn't going to get the problem solved quicker. Worse, if the theory looks promising and you push things out too fast without proper evaluation, you risk turning a disaster into … a double-decker disaster. Or something.

Staying calm can also help let people know that, despite the fact that there is currently a disaster playing out, you have things under control and everything will be back on track shortly. Saying, "Fuck. Fuck fuck fuck fuck fuck fuck fuck," while you're on the phone with the client is not the right way to project the sense that you have things under control.1 In fact, if you can't stop yourself from an outburst, I'd recommend not mixing it with the tell people step. Don't raise your voice at your team mates; blame is not going to help anyone, especially in the middle of things.

Of course, you may need to smash some stuff or get really drunk afterwards, but right now, you have to keep your head.

There will be disaster. Stay calm. Tell people.

  1. Guilty, I'm sad to say.

What's the status of my vagrants?


I've been writing puppet manifests for a few different projects lately, and I've found it useful to test them using virtual machines managed by vagrant. However, as I've been swapping back and forth between the various projects, and because I'm forgetful, I've also been accidentally leaving VMs running. Not so great for the performance of my ageing laptop.

Vagrant manages boxes on a per-VM basis; it doesn't have a host-wide view of what VMs are running. I decided to write a zsh function that told me the status of my VMs, and learnt a few things along the way.

vagrants () {
    setopt LOCALOPTIONS
    unsetopt AUTOPUSHD
    pwd_orig=$PWD
    # This is mine, yours is likely different
    base=$HOME/Boxes
    vagrants=(${base}/**/.vagrant)
    for vagrant in $vagrants
    do
        cd $vagrant:h
        print $PWD
        vagrant status | awk '/^$/ {stat = !stat;next}; stat == 1 {print "\t",$0}'
    done
    cd $pwd_orig
}

There were two things that I found tricky.

First, I like to automatically push onto the directory stack when I change directories, so that I can change back to previous directories using cd -<TAB>, and I didn't want to dirty the directory stack while moving to my vagrant directories. I tried to store the state of the directory stack and then restore it at the end using the dirs command, but I couldn't work out how to stop the directories being added as a single (broken) entry. Eventually I found that using setopt LOCALOPTIONS in a function will cause previous options to be restored on exit, meaning I could simply turn off AUTOPUSHD.

Second, the vagrant status command doesn't have script-friendly output, though there is an open ticket to provide friendlier output.

Current VM states:

default                  saved

To resume this VM, simply run `vagrant up`.

Running vagrant status on half a dozen vagrant directories, each of which might have multiple environments, produces pretty unattractive output. Luckily, the output is at least in a predictable format: three paragraphs, with the actual states in the middle paragraph. My awk is pretty rusty, but I was happy to come up with a solution that works for me.

vagrant status | awk '/^$/ {stat = !stat;next}; stat == 1 {print "\t",$0}'

When a blank line is encountered, the stat flag is toggled, and while it's set we print each line. This means the blank line between the first two paragraphs turns the flag on, and the blank line after the statuses turns it off again. If someone has a more reliable way of grabbing the second paragraph from less predictable input, I'd love to hear it.
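To see the filter in action, you can feed it some canned input; the VM names and text below are made up, mimicking the three-paragraph shape of vagrant status output:

```shell
# Fabricated three-paragraph output in the same shape as `vagrant status`;
# the filter should emit only the middle paragraph, indented.
printf 'Current VM states:\n\ndefault saved\nweb running\n\nTo resume this VM, run vagrant up.\n' |
    awk '/^$/ {stat = !stat;next}; stat == 1 {print "\t",$0}'
```

Only the two status lines come through, each prefixed with a tab.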

A bonus discovery was zsh's ability to substitute portions of the current path when using cd, allowing you to visit the same subdirectory in a variety of projects.

Boxes/metropolis/manifests % cd metropolis cantoflash
Boxes/cantoflash/manifests %
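The two-argument form of cd is zsh-only, but the same hop can be written with parameter substitution, which also works in bash; the directory names here are throwaway ones for illustration:

```shell
# Set up two parallel project trees (illustrative names only)
mkdir -p /tmp/boxes-demo/metropolis/manifests /tmp/boxes-demo/cantoflash/manifests
cd /tmp/boxes-demo/metropolis/manifests

# ${PWD/old/new} substitutes one path component, like zsh's `cd old new`
cd "${PWD/metropolis/cantoflash}"
pwd    # → /tmp/boxes-demo/cantoflash/manifests
```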

CERES organic market is safe and I'm pissed off at The Age


TL;DR: Go to CERES market tomorrow and buy fantastic organic produce. If you saw something about the produce being contaminated, it's a lie.

On the front page of The Sunday Age on 5 March, there was an article about CERES, specifically the organic farm and the twice-weekly organic food market. The article claimed that produce sold through the market was contaminated with lead and that sales had been banned.

I was shattered. CERES is a great organisation, I'm so happy they exist and it's great to have them in our neighbourhood. While we haven't shopped at the market very much, we do all our nursery shopping there. This must be a huge blow to them.

Turns out the mainstream media is just as bad as you think it is: The Sunday Age lied about contaminated produce being sold at CERES and about the banning of the sale of produce. I am outraged. I am disgusted.

Here's an excerpt of an email from Chris Ennis, the Manager of CERES Fair Food and Organic Farm.

If The Sunday Age had bothered to check their story, the real but far less newsworthy story would have revealed that Moreland Council and EPA testing had found five privately leased community garden plots with lead levels slightly over ANZFSC limits and that produce from CERES Organic Farm had never been contaminated or banned from sale. Never let the facts get in the way of a good story they say.

Not only did they lie, they manipulated quotes to make it seem like CERES admitted that contaminated produce had been sold.

Wrongly assuming the results referred to the CERES Organic Farm instead of the community garden plots, [the journalist Steve] Holland used the report to ask [CERES chairperson] Robert Larocca what he would say to people who could have eaten contaminated CERES produce. Larocca's reply was, "It is unfortunate it has happened and we are sorry for that. A very small number of people will have purchased that [contaminated food], including myself." It was an honest answer to a hypothetical question, but Holland used the quote to make it seem like CERES had actually been selling contaminated produce, without ever checking his story was correct.

Even the follow-up that The Age has on their website saying that the produce sold through the market is safe (I don't know if it made it to print) reads like scaremongering, like you can't really believe the land has been rehabilitated. This is sloppy journalism. No, this is worse: it's misrepresentation and untruths, targeted at a community organisation that's doing great things for our community. The Age should print a full retraction of the story.

This doesn't just hurt CERES, it hurts the 50-plus farmers and processors that supply the market, and causes flow-on pain to drivers, packers, and others.

If you live in Melbourne, go to CERES market tomorrow and buy great organic produce. And if you're a subscriber to The Age, consider cancelling your subscription.

CERES media release

devops for the little guy


Like most web developers, I manage a bunch of domains, applications, and services. Some of these are projects for myself, some are for family and friends, some are for clients. And it's a mess: stuff on one shared host, stuff on another shared host, stuff on the VPS I use to muck around. This makes it all difficult and time consuming to manage, not to mention fraught with risk.

I've decided it's time to get my house in order, bring everything I can together onto a quality VPS and manage things properly.

Whenever I've managed servers, I've quickly forgotten how I installed stuff and how the thing is configured, and everything gets messy and out of control, so the idea of managed and versioned configurations is very attractive. There are two main options around, puppet and chef. A very surface-level assessment suggests that puppet is more popular and has more documentation, and I had more luck getting it up and running than I did with chef, so that's what I'm going to explore.

In the end, I want a VPS with:

  • SSH with SSH keys set up
  • a web server, probably Apache at this stage
  • several virtual hosts
  • PHP and Ruby
  • MySQL and Sqlite
  • git and subversion
  • vim
  • zsh
  • utilities such as ack, find, and curl
  • projects cloned from github
  • monitoring and alerting (I don't know anything about this stuff yet, but I'm guessing that's Nagios)
  • appropriate backup
  • and of course, versioned configuration

I'd also like to be able to:

  • easily spin up local VMs with the same base configuration as the VPS

Unfortunately, I haven't been able to find much to support such a small endeavour; most information seems to be about large teams and large projects. If you know of good resources, please let me know. I'll update this post as I work out more of what I want to achieve and how to do it (I may even split this into multiple posts).

MongoDB search and replace


Replacing part of a string is something we want to do quite often, but unfortunately MongoDB doesn't currently have a simple way to do it. There's an open feature request that I hope will be accepted, making this post out of date by the time you read it. We might then be able to do something like this.

db.coll.update(
  {'entry' : /hate/},
  {$replace : {'entry' : ['hate', 'love']}}
);

This would replace the string hate with love in the value of the field entry, where the string to find could also be a regex.

So, that doesn't work yet. Instead, we can do this.

db.coll.find({'entry' : /hate/}).forEach(function(e) {
    var parts = e.entry.split('hate');
    e.entry = parts.join('love');
    db.coll.save(e);
});

This finds all documents whose entry field matches and loops through them. For each document, the value is split on the string hate, and the pieces are then joined back together with the replacement string, love.

Not quite as elegant, but it works.

Why I buy books from #abookapart


I just bought A Book Apart's Designing for Emotion and Mobile First as a bundle. In fact, I've been buying all the titles from A Book Apart as they become available.

I buy them because they're about things that interest me, they're well written and readable, they're concise and fluff-free.

But most of all, I buy them because the authors all just seem like such nice people.

It's not all HTML and CSS


I'm teaching again for the first time in a long time, a course with the romantic and evocative name of Web Page Construction. This means that instead of my thoughts being a swirling morass of bits and pieces, of jquery, user testing, code quality, and responsive design, I need to try to organise things up there. You can probably see from the design of my blog that I'm not a designer, I'm a geek who wants to get better at the non-geek stuff.

The first thing I'm trying to answer for myself is, what does it take to make a web site? The common answer when teaching this in a computer science context is that you throw some HTML, CSS, and Javascript together, with varying degrees of care. And at a very high level, that might be a reasonable technical answer, but it definitely shouldn't be the whole story.

My answer is that there are three broad steps to making a web site:

  • Work out what it's about: what you need to do and how to do it well.
  • Build stuff: implement things based on your planning.
  • Make it better: work out how to do things better.

The final important thing about this process is that it's cyclical, so the "make it better" step feeds back into "work out what it's about". There should definitely be at least a full iteration before a site goes live, and after the site is live you have access to much more data in the final step. You might also choose to have a staged "go live" step, opening up to a few real users before you throw the doors open to all your target audience, so that you can take advantage of the extra data for another iteration or two.

Work out what it's about

You can't build anything until you know what it's for, so the first step is to work out what you're trying to achieve. Who are the target audience, what are their needs, and how will you meet them? A useful technique is to develop personas, fictional people that represent the major users of the site. If this is a site you're creating for someone else, talk to them about what they want and get their help to develop the personas and their goals.

In the context of the personas and their goals, consider the functional requirements of the site, what it should do, and the data required to make that happen. Think about how the users will interact with elements of the user interface.

I can't stress how important it is to talk to people. The least effective development process is "tada development"1 where you turn up to a client with a "finished" system and hope that it will magically meet their needs.

As I said, this isn't just something that happens at the start of the project; you need to make sure you carefully plan what you're doing each iteration. It also doesn't just have to be about user interfaces: it could be about planning what technologies are most appropriate for the particular needs of the site, or how to solve a performance issue you've discovered in the "make it better" step below.

Build stuff

The actual stuff that gets built depends on where you are in your iterations. At the start, this step should include sketches or prototypes of the planned user interface, then you might mock up some HTML and CSS, maybe add some interactivity with Javascript. It could even be implementing database replication and sharding to solve that performance issue.

Make it better

Each step of the way, you should be gathering data to see if what you've built so far is meeting the goals you set out to meet, and to identify where you're going wrong. Those prototypes you built? Show them to people and get feedback. Perform some user testing to see how real people actually use the interfaces you've built. If you're not sure that your new and improved interface actually makes things better for people, do some A/B testing and get some hard data.

You can gather all sorts of data freely through services such as Google Analytics, see which elements of your user interface people click on with services like the not-free CrazyEgg, or work out what's slowing down your users' experience with Google Page Speed or YSlow.

There are hundreds of things you can analyse and measure, which can in turn feed back into the next iteration and help you meet the goals of your users. And if you're anything like me, the data itself is a source of fascination.

My explanation of these steps is not supposed to be exhaustive. In fact, many, many text books have been written about each one and it wouldn't even be fair to say I'd scratched the surface.

The Web Page Construction course I've inherited definitely focusses on the middle step, and that's fine. My job when teaching the course is to make sure that it's clear that the technical detail should be taken in the context of all three steps. We'll see how I go with that.

  1. Thanks to Donal for this phrase.

The user test


I've had a casual interest in usability and user experience for a while now, but up until now the interest has been passive: reading widely and observing friends and family use web sites.

Yesterday, however, I was involved in my first user testing session. I've been reading Steve Krug's Rocket Surgery Made Easy, which I highly recommend for getting started with user testing, so I had a reasonable idea what to expect. But I was still shocked at how glaringly obvious important usability issues seemed after less than half an hour with someone who hadn't used the software before.

Wow, the top navigation menu is completely invisible!

Now to work out how to get more user testing happening for open source projects.

Quality PHP: Laura Thomson


PHP quality background.

Laura is the co-author of the hugely successful PHP and MySQL Web Development, and a Senior Software Engineer at Mozilla Corporation. Thanks for being the first respondent, Laura.

In terms of PHP, what does quality mean to you?
This is a great question, and one I often ask in interviews. Quality PHP is free of the usual set of code smells. It's been through code review and has a set of meaningful tests with a reasonable level of code coverage. It has minimal but sufficient documentation. You can detect good quality PHP by looking for:
  • happy developers
  • new developers becoming productive quickly
  • other users forking or contributing to your projects
  • your application being robust and not failing in odd and intermittent ways
  • devs being able to modify the code and add features quickly, without forensic spelunking to understand how it works
  • your code lacking that fragile library that nobody wants to maintain or modify in case they break it
What tools and processes do you use in your development to ensure quality?
The main things we do are:
  • Patch review. (This is common among open source projects.) Writing code that you know will be read by others ups the quality in general.
  • Security review for new projects or features.
  • Continuous integration and automated test-on-build.
  • Mozilla's awesome WebQA team runs a set of automated tests using Selenium and fuzzers, and also does a lot of manual testing.
Are there tools or processes that you'd like to include in your toolbox that you haven't used yet?
We're looking at moving towards continuous deployment during the next few months and are currently exploring the requirements for that.