“Social layer” vs “Social network”

This afternoon we (Trovix) released Trovix Connect, the new version of our Trovix job portal.  (Incidentally, we’d been using that as an internal name well before Google Friend Connect and Facebook Connect!)

It may look like we’ve jumped on the bandwagon of social sites.  However, I think this is more a case of responding to what people find useful than of “everyone else is doing this and succeeding, so we’ll do the same”.

During the user studies I run periodically, one question I ask is “how did you get your current job?”, followed by “what is your general strategy for finding a new job?”  While I don’t expect strictly accurate answers (an ethnographic study is much better for exploring true user activity), it is a nice broad question that reveals a significant amount of information so long as you frame and interpret it appropriately.

What we found is that most people use job boards as a supplemental source of information for finding jobs.  The majority of people (in the demographics we targeted) did not apply directly to advertised positions (such as on Trovix or CareerBuilder), but instead looked on the company’s site afterwards — or tried to find which of their friends had connections with the company and could help out.  I suppose this explains why so many job boards rely on advertising revenue rather than taking a cut from direct applications.

With Trovix Connect, we tried to support this approach to job-seeking.  However, while building a social network of friends and colleagues works for a general-purpose site like LinkedIn (where the focus is career networking broadly), Trovix is primarily a job matching service.  Job-hunting is obviously a fairly private activity, and a lot of the time you don’t want people to even know you have an account on a job site, let alone set up a connection with them.

Therefore we took a slightly different tack, and instead applied a social layer to the site.  What this means is you can’t use your contacts in the normal social networking ways (what are my friends up to, what are all their details, how can I interact with them); instead we track the network and use information about it to support job seeking.  For example, if you were searching for a “Software Engineer” position in 94043, and you had me in your network, then when the “Software Engineer – Trovix” position showed up, Trovix Connect would highlight that you know me at Trovix and let you contact me about the position.  (Our resume parsing software automatically distills everyone’s previous work experience.)  We give the user an example email to send to their contact who works at the company (or recently worked there) asking for their help.  In this way, people can use the same strategy for finding jobs as in “the real world”, but with connections they never knew they had.
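In Python-ish pseudocode, the matching step amounts to something like the sketch below.  All names and data structures here are hypothetical illustrations of the idea, not Trovix’s actual code:

```python
# Sketch of the "social layer" idea: the network is never browsed directly;
# it is only consulted when job search results come back, to surface
# contacts who work (or recently worked) at a result's company.

def contacts_at_company(network, company):
    """Contacts in the user's network whose parsed work history
    includes the given company."""
    return [c for c in network if company in c["employers"]]

def annotate_results(results, network):
    """Attach any known contacts to each job search result."""
    for job in results:
        job["known_contacts"] = contacts_at_company(network, job["company"])
    return results

network = [
    {"name": "Miles", "employers": {"Trovix", "Acme"}},
    {"name": "Dana",  "employers": {"CareerBuilder"}},
]
results = [{"title": "Software Engineer", "company": "Trovix"}]

annotated = annotate_results(results, network)
print(annotated[0]["known_contacts"][0]["name"])  # Miles
```

The point of the design is visible even in this toy version: the network data stays behind the scenes, and only surfaces as a relevant, actionable hint on a search result.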

Of course, privacy is a big deal to us, and you can opt out of appearing in search results.  You can also make your account invisible (in fact, that’s the default option!).  We have several spam-reducing features in place as well.

We really hope this will be a useful new layer to job-searching without being obtrusive or spammy.  If you end up trying it, please let me know what you think – I’d love to get feedback!

The curse of updates

One of the worst things about the ubiquity of the internet is the crutch of automatic updates.  Who cares if our code isn’t feature complete?  So what if there are bugs galore?  It works okay – let’s ship and just put out an update.

And thus we are now stuck in the bane of the update.

The worst offender is easily Adobe with Reader.  PDF – Portable Document Format.  The name conjures up images of a lightweight, portable and universal means of viewing a document.  Sadly, since Reader 8 or so we have been cursed with updates every few weeks.  Worse, the Adobe Updater is an extremely belligerent piece of software that INSISTS you update.  And worst of all, in providing ‘flexibility’, Adobe have allowed partial updates to plug-ins and the like, so any minor change in any part of Reader requires a virtual reinstall of the software.

I still find the most galling aspect the fact that they clearly never considered the obvious use case.  Some average Joe gets sent a PDF, tries to open it, and Adobe Reader suddenly realises “oh wait, you need to update”.  All right, you say, let’s just get it out of the way, and you click next.  “Wait a minute, you want me to UPDATE this software while it’s STILL RUNNING?” cries Reader.  And all of a sudden you have to completely break focus, allow it to close Reader, and start again.  The worst is when you’re doing this from within a browser.  How on earth did this make it past user testing?

Tonight I wanted to play some Rock Band with my brother.  It’s been maybe a week since I booted my PS3, and at most two weeks since an update.  Before I can do *anything* online, “oh ho ho, what do you think you’re doing?  You need to update!  You might be running a hacked firmware and we can’t allow you to do anything online.”  Seriously, how has this become acceptable?  The worst part is that Sony’s download system for the PS3 is abysmally slow – it took half an hour to download and install the new updates.  In fact, it’s almost twice as fast to just download updates on my laptop, copy them to a memory card, and then go through the hullabaloo that is required to do a non-standard install.  And for what?  A few bug fixes and some unknown feature I’ll never use?  To top it all off, several of my games then required individual updates as well.

Although both of these are obnoxious, at least with Adobe you can reject and disable updates easily, and with the PS3 you are still allowed to play locally.  The truly obnoxious updater is Apple’s “Software Update”.  Hey, here’s an idea – let’s push products people don’t want as part of the update process.  Better yet, let’s make it extremely difficult to disable.  Apple did a fantastic job of obfuscating how to disable updates, which offer my Windows PC little to no benefit (I leave them enabled on my Macs).  Searching through the registry, the startup menu and MSConfig turned up nothing.  Googling the problem turned up this gem – it’s a scheduled task instead.  You need to go to your scheduled tasks (of which I have no others) and delete it from there.

So what rules for design can we take away from this horrendous user experience?

1 – Allow updates to be disabled easily.  Not everyone wants the latest and greatest (indeed, it is sometimes necessary to stick with an old version, as I discovered when Apple updated QuickTime and broke my video camera’s ability to view its own files).

2 – Reduce the schedule of updates.  Updates are important, but make sure they’re *really* important before you foist them upon users.  It’s tempting to be constantly adding new features and pushing them out, but unless you have a seamless update procedure it becomes a major source of user frustration… which leads me to my final point:

3 – Make it seamless and quiet.  Nobody likes to be prompted to update, let alone select updates from a large list and then interrupt everything they’re doing to install something they don’t need.  Find a way to do it quietly and without hassle.  How can Microsoft get this right to a reasonably acceptable level but not Apple?  I dislike Vista with a passion and love OS X, and yet I find myself almost as routinely irritated by Apple because of their update procedure.

Personally, I can’t wait till we’re all back on thin clients and have 100% seamless updates.  Hooray for web 2.0.

Picasa photo tuning

This is a continuation of Part 1 of my Picasa discussion.

I am continually surprised by how powerful the relatively simple photo tuning tools of Picasa are. Take for example this photo my wife snapped while flying from Seattle to San Francisco, and how easily Picasa turns it into a great picture.


Here is the original photo:


First of all I apply an “I’m Feeling Lucky” pass:


Next I straighten the shot:


Then I crop the photo:


Finally I do another “I’m Feeling Lucky” pass:


And there we have it.  Quite an amazing difference compared to the first photo.  Here is another before and after, thanks to the “I’m Feeling Lucky” button (easily my favourite feature in Picasa):


Some more information is available from the Picasa Team about tuning photos.

Picasa as an example of great design

I’ve been a long-time fan of Picasa, and it ultimately comes down to several reasons:

  1. It’s fast. Very fast.
  2. It has a great UI.
  3. Functionality.
  4. Integration.


Speed

Picasa has been optimized to load very quickly, to handle large libraries of photos, and to allow very quick scanning of those libraries.  I have seen some great research into better ways to quickly display large libraries of photos, but nothing has been released in a usable product yet.  Nothing comes close in terms of immediately loading the main app, and then allowing quick manipulation of tens of thousands of photos.

User Interface

I’m not sure how much of this is Google and how much is thanks to the original IdeaLab team, but kudos to Google for not breaking what works. The keyboard shortcuts for advanced users are there, and yet the interface is inviting enough for the novice user to jump right in. All my non-computer-savvy family members now use it and love it. It is not only simple to use, but very pretty with its transitions and OS X-esque touches to its interface.


Functionality

The functionality of Picasa is excellent.  It allows you to share photos, manipulate them, organise them and present them.  Importantly, it doesn’t try to do too much.  The Picasa team seem to have found just the right balance of features to easily satisfy 95% of the users out there without overloading it to the point of bloatware.  Happily, they also included some features which I didn’t consider “must have”, yet which make the experience all the sweeter: things like creating a Gift CD, comprehensive backups, picture collages and a screensaver.  It makes using your photos very easy.

The best functionality I have found, though, is the photo tuning.  See my other article for why.  Aside from the power of the tuning, the thing I also love is that it leaves your originals untouched.  Even once you commit changes, it backs up the original, which for people like me who can’t stand to lose data of any kind is a godsend.


Integration

This is what really sold me on Picasa.  Having done so much research into ubiquitous computing and the adoption of systems, I’ve come to realise this is what makes and breaks software.  Picasa does several things right in this area:

  • Preserving the file system. I want file portability, and this was a dealbreaker for me with many other products.
  • Email. I mainly disseminate my pictures via email, and the integration here is top-notch.
  • Photo printing is integrated in. Not something I use, but a nice to have.
  • Other services. Picasa supports Picasa Web Albums (of course), FTP, Google Video, and Blogger. My main problem is that it doesn’t support Flickr (for obvious reasons, but this is kind of sad given how many Googlers use Flickr!)

The other nice thing is that the Picasa team seem intent on acting on user feedback. However the real killer part of Picasa is the tuning, which is discussed in part 2.

Real World Ubiquitous Computing – Nike Plus

For all the doom and gloom about the lack of real world ubiquitous computing (even I was guilty of it in my thesis), if you look around there are devices that support ubiquitous computing ideals. And I’m not talking about smartphones and iPods.

My first example is the Nike Plus running kit. How does this fit ubiquitous computing?

  1. Embedded, perceptually invisible computing
  2. Functionally invisible
  3. Accountable
  4. Inexpensive

Perceptually Invisible
The Nike Plus running kit has two parts.  The first is the receiver that attaches to your iPod.  It is relatively compact, and many armbands and pouches for the iPod have been designed to accommodate it, so it is not noticeable.  The second part is the sensor for your shoe.  If you buy a pair of Nike Plus shoes, they already have a space for you to insert the sensor, where you’ll never think about it again.  If you have other shoes, you can buy a “shoe wallet” that holds the sensor.  Either way it is unobtrusive, and I never even think about the fact that I have the sensor in my shoe.

One nice touch with regard to the sensor is that they spent a lot of time building smart energy-saving routines into it.  This means that although the battery is not user-replaceable, it should last years (and a new kit can be had for as little as $15 on eBay anyway).

Functionally Invisible
Another clever part of the software Nike developed is the calibration routine.  Out of the box the device is fairly accurate, and after a single calibration session I found it to be within 2% of my GPS running watch.  What this means is you don’t even notice that it is a pedometer – to the user it just works, and gives you your distance.
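Conceptually, a single calibration run just establishes a per-user correction factor that is then applied to later raw readings.  Here is a minimal sketch of that idea, assuming a simple linear correction; the sensor’s real algorithm is certainly more sophisticated than this:

```python
# Toy model of pedometer calibration: run a known distance once, derive a
# scale factor, and apply it to future raw readings. This is a guess at
# the principle, not Nike's actual algorithm.

def calibration_factor(true_distance_km, measured_distance_km):
    """Ratio that corrects this user's stride-based estimates."""
    return true_distance_km / measured_distance_km

def corrected_distance(raw_km, factor):
    """Apply the user's calibration factor to a raw reading."""
    return raw_km * factor

# One 400 m track lap that the sensor measured as 0.41 km:
factor = calibration_factor(0.400, 0.410)

# A later run the sensor reports as 5.0 km:
print(round(corrected_distance(5.0, factor), 2))  # 4.88
```

Because the factor folds in everything specific to the runner (stride length, shoe placement, gait), one calibration session is enough to keep the device functionally invisible from then on.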

Another way in which the Nike Plus kit is functionally invisible is its integration with the iPod and iTunes.  I use my iPod as I normally would while running.  When I sync my songs in iTunes, it uploads and formats my run information to the Nike Plus website in the background.  If I am entered in any competitions, it updates those.  I choose when I want to look at my stats, and I never have to think about collating or accessing them (unlike Garmin’s efforts).

Accountable
By “accountable” I’m referring to a property of design which I defined in my thesis.  For a design to be accountable, it appropriately presents information about itself and how it works to the user.  One difficulty in ubiquitous computing is providing embedded computing that automagically performs actions, while not obscuring how it works from the user.  Users rarely use designs in the way the designer intended, and by exposing how something works functionally you can assist the user’s understanding of the design, and assist with their appropriation of it.

The Nike running kit is very open with how it works (an accelerometer and wireless system), provides the data in an easy to read format (it has already been used in many academic projects), and allows for calibration of the sensor (while not requiring it).

Inexpensive
This one is simple – the kit costs only $30 SRP, but can be had for far cheaper on eBay or sites like Eastbay.  It is amazing how many people I know (I can think of four in my extended family alone!) who have bought the kit after finding out how cheap it is.

But the real question is – is it useful, and does it work?  I was talking with a colleague yesterday about his running, and he said that once a coming competition was over, he knew his motivation would flag.  Since using the iPod running kit, my running has increased by a factor of four thanks to the competitive motivation.  I get to “go running” with my brother, who lives in Australia.  Everyone I know who has one is still using it.  I’m here writing about it on my blog!  That to me seems like success.

Ubiquitous computing done right

I was happily surprised when I saw this video.  I was sent it by Miles, and I think it is an outstanding example of ubiquitous computing.  Amazingly it’s also several years old now, and part of Johnny Lee’s work at Carnegie Mellon University.

[Embedded YouTube video]

For those of you who can’t watch the video, it shows a new type of projector calibration.  By embedding fibre optic sensors into the edges of an object, any standard commercial projector can be automatically calibrated to perfectly project an image onto that object.  It is quite amazing to watch.
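As I understand it from Lee’s publications, the projector flashes a sequence of Gray-coded binary patterns, and each embedded light sensor decodes the on/off sequence it observes into its own projector-pixel coordinate.  A toy sketch of that localization step (my reconstruction of the principle, not the project’s actual code):

```python
# Toy sketch of Gray-code structured-light localization: each projected
# frame splits the projector image along one bit of a Gray code, so a
# sensor's on/off readings over N frames identify which of 2^N columns
# (or rows, with a second pass) it sits in.

def gray_encode(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code back to a plain binary number."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def patterns_seen(column, bits):
    """The on/off sequence a sensor at `column` would observe,
    most significant bit first."""
    g = gray_encode(column)
    return [(g >> i) & 1 for i in reversed(range(bits))]

def locate(readings):
    """Recover a sensor's column from its observed on/off sequence."""
    g = 0
    for bit in readings:
        g = (g << 1) | bit
    return gray_decode(g)

# A sensor at projector column 613 of 1024 (ten pattern frames):
readings = patterns_seen(613, 10)
print(locate(readings))  # 613
```

Gray codes are the natural choice here because adjacent columns differ in only one bit, so a sensor sitting on a pattern boundary can be off by at most one pixel.  Once each sensor knows its projector coordinate, warping the image onto the object is an ordinary homography computation.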

Normally I’m a philosophy-first ubicomp kind of guy, and prefer projects that focus on the human effects of ubiquitous computing.  However, I’m not an idealist, and I realise that technical innovation is a fundamental requirement of the field.  Still, I believe some of the best technical achievements come from reusing existing technology in novel ways, and this is a perfect example.  In particular, there are three things that I think are done right:

  1. Keep the functionality simple
  2. Keep the technology smart but simple
  3. Use off-the-shelf technology

First of all, they focused on a single problem.  Achieving computing potential embedded invisibly still requires a means to interact with that potential.  Finding new ways of displaying information on everyday objects is a huge step forward; previously it was a pretty hard task, requiring custom screens or complex manual configuration.  Solving a single problem provides a design pattern for others to use and extend (and then worry about the user experience).

Secondly, the technology itself is simple.  Fibre optic sensors mean the system should be robust and cheap.  There are not many different sensors that could break, nor is there a complex system with fragile dependencies.  Most of the magic is done in software, which allows for further improvement and customization, as seen in the later project:

[Embedded YouTube video]

Finally, the technology is off-the-shelf.  This project used a single embedded chip, plus a regular projector and some custom software.  That sounds like something both easy to hack up yourself and easy for others to commercialize.  It is a nice contrast to a system like Microsoft Surface, which is full of proprietary components.

This functional and technical simplicity in turn achieves two things: one, the technology itself is cheap, and two, it is reproducible.  Ubicomp needs to drastically lower the cost of entry to continue rapid expansion and adoption.

With micro-projectors becoming more popular, I’m really looking forward to a commercial implementation of such a system.  While there are some shortcomings (such as the brightness of the projected image), this is still a lot more immersive than fiducial markers.  This is exactly the type of technology needed to make ubiquitous computing useful for mainstream, commercial applications.

Turn your iPhone into a wifi Skype phone

There has been a lot of buzz on the intertubes today about Fring.  They’re an Israeli startup who released a fairly popular mobile chat client.  That’s simplifying things – in addition to supporting every major IM client, Fring automatically logs you into wireless hotspots, does VOIP and allows file transfers.  It’s like a mobile version of Trillian on steroids.

I’d heard bits and pieces about it, but hadn’t really been that interested.  That changed when I was browsing The Unofficial Apple Weblog and read their post about trying out the new beta of Fring on the iPhone.  If you have a jailbroken iPhone then this is easily the best application you can get for it.  Certainly a lot of other bloggers seem to agree.

A bit of backstory as to why I am so excited about this.  When I first moved to the US in July of 2006, I was staying with friends for a while and moving around a lot.  I purchased a SkypeIn number.  Two in fact – one for the US and one for Australia.  This meant people back home could call me for the cost of a local call, and I could also have a local number here that wasn’t a cell phone (I’m not a fan of the paying to receive calls model prevalent here).  Making US based calls was free until the start of 2007, and after that I purchased unlimited calling.  Now I’m on Skype Pro, and for $3 a month I get unlimited US calls and a whole slew of other benefits and discounts.

When I started renting my own place, rather than reconnect the phone line, I bought a Skype phone.  I just plug a network cable into the back of my Netgear SPH200D, give it my account details and it just works.  I don’t even feel like I’m making Internet calls – it’s just a home phone to me, and to anyone who’s calling me, thanks to SkypeIn.

I had trialled the Belkin Wifi Skype phone for a couple of months.  This was easily the worst product I have used in the last 10 years.  I cannot even begin to explain just how bad it was.  Slow, unresponsive, ugly, cheaply made and unreliable to start with.  Poor battery life, terrible call quality and broken functionality topped it off.  Wow, the designer in me shudders just thinking about how awful that phone was.

Since the iPhone came out I’d idly wondered if a Skype client would ever be released.  I figured that if one did, it was a long time coming.  Then along came Fring.

While it was somewhat fiddly to install (adding a new source in the Installer application), setting it up was a breeze.  Within just a few minutes I was making my first test call.  And it worked.  Amazingly so.


The best bit, though, is that while I can make calls on my home Skype phone, it is useless for sending and receiving messages.  Fring’s IM feature is very slick, and I love that I now have dedicated Google Talk and Skype on my iPhone.  Previously I had to use Meebo for Google Talk.  They also appear to have gotten around the “one app at a time” limitation of the iPhone: pressing home just minimises the app, and I am able to receive calls and IMs from the home screen or even when the phone is locked, which is great.

So basically I now have one phone for everything (except for one thing, which I’ll get to in a minute).  I can now make my cheap international calls at home from my mobile rather than switching to the Netgear phone (I wonder how worried they are about this development?).  I’m a big fan of minimalist setups, and so this pleases me no end.

Some notes on using it so far.  Calling my iPhone number from Fring makes it do odd things: the “incoming call” dialogue pops up, but then it tries to switch back to Fring and just hangs.  Some outgoing calls seem to fail.  There are some definite UI issues (particularly with number dialling – requiring a “+” for outgoing numbers).  I also couldn’t accept add requests.  But the main problem seems to be no SkypeIn!  I’m not sure what the limitation is here, but calling my SkypeIn number doesn’t result in a call appearing, which is kind of a bummer.  It’s also weird, because I can receive calls from Skype contacts just fine.

I have a few questions though, particularly given how slick and just plain good this product is.  Firstly, how did they get Skype access?  I was surprised that a free product could offer it, given Skype is a proprietary setup and they would have had to license some libraries.  OK, I actually bothered doing a search, and they are using the Skype API.  More important, though, is how on Earth they plan to make money.  There are no ads, and while the server load isn’t high, there’s obviously been a lot of development (several years’ worth, based on what I found about the company).  I tried checking whether they had any plans, or if anyone even had any speculation, and all I found were a few articles:

From 2006:

An Israeli company has just rolled out a service (beta) that might cut into the Skype subscriber base by allowing users to make free VoIP calls using any 3G handset. Fring is the word and the service is free now until the commercial offering appears around the end of this year. What the innovative service lets subscribers do is call any other fring subscriber for free anywhere in the world. Fring members can also call Skype and other VoIP service subscribers using any 3G-enabled handset. Fring uses your existing data plan to make calls over the network thus saving the caller from using any phone minutes. It’s not clear what fring’s business model will be but for the time being it’s free so what are you waiting for?

From 2007:

Shechter said fring is committed to improving the quality of its product and will be adding innovative new features to it over time.

As per the press release, fring is “100 percent free with no subscription costs; consumers simply pay for the data they use under their existing line rental agreement.” (Therefore, the plan under which a customer pays for data transactions, including any limits therein, comes into play.)

It looks like they recently got $12 million in second-round funding.  Whatever their plans, I’m enjoying it for now despite its limitations.  If you have an iPhone, what are you waiting for?  Jailbreak that guy and install Fring.

37Signals disagrees with usability guru Norman, or, what is usability?

There was a thought-provoking rebuttal from 37Signals to criticisms levelled by Don Norman regarding their product.

(side note: did anyone else think 37Signals was using svn to version control their blog postings based on the URL?)

Don’s original post is titled “Why is 37Signals so arrogant?”.  In it he says that he found “the developers [at 37Signals] are arrogant and completely unsympathetic to the people who use their products.”  He goes on to say that this attitude “will not only lead to failure, it is one that deserves to fail”.  Ouch.

While the developers at 37Signals may be “arrogant” in that they aren’t interested in listening to other views on their design, this does not mean they aren’t creating usable or useful products.  No matter which requested features or changes you add to a design, it will never truly satisfy everyone.  Trying to do so can eat up precious resources, and may have unintended consequences.  While Google might have the bulk to carefully consider everything a user may want and try to accommodate it (and the consequences), startups don’t always have that luxury.  User-centered design isn’t about putting the user on a pedestal (a flippant comment – I’ll discuss it in another blog post!).  The designer is a designer for a reason, and with scarce resources (and a good track record) it is sometimes not just easier, but more efficient, to follow your gut.

Besides, no matter how you design something, people will always use it differently to how you expect.  Articulation work (the process of adapting a tool to a new use) is a fascinating process, and one that should be fully supported by giving the user as much simplicity and flexibility as possible.  By doing so, you provide a low barrier to entry and room for people to find innovative new ways of doing things.

I think the problem here is Don Norman is reacting at a principled level, rather than considering it from a “real world” perspective. Sure, I’d love to give users everything they ever wanted, and do it in the slickest, easiest to use package ever. But it’s just not always possible. Look at something very usable and naturalistic, such as the iPhone, and you’ll find missing features. Look at something feature-rich like Photoshop, and you find a high barrier to entry. It’s all about tradeoffs.

Ultimately 37Signals clarified that they *do* listen to their customers; by stating they design for themselves and not their customers, what they really mean is that they are rejecting traditional usability approaches.  While this can have shortcomings, there were plenty of great designs before the invention of the usability lab…

People power versus algorithms

Very interesting article from Wired.  This is something I have struggled with personally: is it worth investing the time to automate a process, or is it just cheaper to outsource the work to the smart people of up-and-coming countries?  Ultimately I’ve found that not only is it cheaper to outsource the work, but the quality of the results is much higher.  The main problems I’ve found are scalability and training.  Overcome these for your task at hand and the benefits are immense.

The vogue for human curation reflects the growing frustration Net users have with the limits of algorithms. Unhelpful detritus often clutters search results, thanks to online publishers who have learned how to game the system. Users have tired of clicking through to adware-laden splogs posing as legitimate resources. And unless you get your keywords just right, services like Google Alerts spew out either too much relevant content — or not enough.

Again, I have to say, the quality of the work just blows any automated stuff I’ve done out of the water.  However, you do have to manage your sources – something like Amazon’s Mechanical Turk is a bit hit and miss, whereas something like Elance allows for a feedback system and a more personal relationship – just not the sheer bulk of work.

What the article doesn’t cover is the fact that most of this type of work is outsourced.  It would be very interesting to see the demographics of the workers on something like Mechanical Turk.  So if this continues to grow in popularity, what are the long-term effects going to be?  Will it help improve the skills of the contributors, or just burn them out with mindless work?  I personally think the former – most of the projects I have seen are actually very interesting.  I know people who use Mechanical Turk for fun and as a timewaster – certainly not as an income source.