June 25, 2005

Less Technology

Every once in a while, just for kicks, I'll make a disparaging comment about "technology" or "computers" within earshot of my boss. She, I hope, finds these little asides humorous, and feigns horror at hearing such blasphemy come from the mouth of her director of technology.

I wonder, then, what she would think about me reading a book called "Better Off: Flipping the Switch on Technology." Probably best not to mention it, eh?

The author, Eric Brende, and his wife move to an Amish-like community and literally turn off the electrical switch. While I find this move to be fairly extreme, I do sympathize with their desire for less technology. I have a feeling that many of us would be better off if society was more judicious in its collective use of technology. The automobile, in particular, seems to be coming quite close to a net negative for society. I watched a documentary recently (My Architect) on Louis I. Kahn. At one point, they discussed a vision he had for downtown Philadelphia where garages would be built around the downtown core, and people would park and walk to their destinations. No cars would be allowed in the central area. Sounds quite nice. Too bad they didn't go for it.

It becomes quite difficult, I think, to determine just how much technology is enough. Undoubtedly, technology is good (see: modern medicine, the expansion of information availability on the internet, etc.). But it also takes a toll. Finding a balance is hard for individuals, and harder, I'd guess, for society.

I don't have any answers just yet. But I think that striving for simplicity is a good thing. I'm not ready to become a subsistence farmer, but I'll keep working on simplifying stuff...

Posted by Karl at 05:07 PM

Googlizers vs. Resistors

A few weeks ago, Peter Van Dijck pointed to a Library Journal article about the Google-ization of libraries. For those interested in information retrieval and libraries, this is an interesting article. I'm especially interested right now because I'm playing a very active role in "Google-izing" a digital library.

For me, the issue comes down to ease of access. We keep hearing from our users that people prefer Google to our hand-picked, high-quality resources. Google is a one-click operation (seeing as it's built into browser toolbars and such), and fast to boot. Our resources currently live three or four clicks off of the homepage, and behind authentication schemes (they're subscription databases). And each one of our databases has a very different interface. Some are easy and clear; others are more complicated. But perhaps the most difficult part is that the user needs to know which one of the databases to try. With some exceptions, users could find relevant information in more than one of the databases, and because of the general nature of the resources, it can be hard to give good guidance to the users upfront. So, the user is left to click into each one in the hope of finding info. No wonder people go off to Google!

So, we're going to put a federated search engine in place. The idea is that users would be able to use a Google-like interface, but get our high-quality resources back as results. I'm not viewing this solution as a comprehensive search solution. Rather, after returning a few results (say, 5 or 10), I'm going to dump the user back into the native database interface. In other words, those users who would like to use the more powerful tools provided in each system can do so. The simple results we return should also point the user in the right direction. They wouldn't need to try each and every database, only the ones that look like they might have quality results.
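
To make that concrete, here's a rough sketch of the kind of result handling I have in mind. It's plain javascript with made-up database fields and function names, not anything we've actually built: cap the hits shown from each database and tack on a link into that database's native interface.

    // Hypothetical sketch: show at most maxHits results from one database,
    // then hand the user off to that database's own search interface.
    function renderDatabaseResults(db, results, maxHits) {
      var html = '<h3>' + db.name + '</h3><ul>';
      var count = Math.min(results.length, maxHits);
      for (var i = 0; i < count; i++) {
        html += '<li><a href="' + results[i].url + '">' + results[i].title + '</a></li>';
      }
      html += '</ul>';
      if (results.length > maxHits) {
        // Anything beyond the first few hits lives in the native interface.
        html += '<p><a href="' + db.nativeSearchUrl + '">See all results in ' + db.name + '</a></p>';
      }
      document.getElementById(db.divId).innerHTML = html;
    }

The point is just that the simple view stays simple, and the native tools are one click away for anyone who wants them.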

This is all well and good, but I think there is a bigger issue at stake in the information literacy realm. It seems like some of the librarians in the article are focused on teaching patrons how to use the library information systems (OPACs, databases, whatever). The "click here, click there" type of training is fine and good (although I'd rather see interfaces that are so intuitive they don't require training). But the focus in information literacy really needs to be on evaluating information. I don't care if the results come from Google or a subscription database; the patron needs to know how to evaluate the information and decide if it is trustworthy, useful, and relevant. This is much harder than "click here, click there", but ultimately much more useful.

Posted by Karl at 04:41 PM

Random Usability Notes

A while back I asked about single-page usability resources. I ended up doing a short test of some paper prototypes we worked up. I came up with three or four questions (mini-scenarios) to ask each participant. These tests didn't last long (about 5 minutes each), and, to be honest, we didn't learn that much. We did get some feedback that we plugged into the next iteration, but compared to doing more in-depth studies, I didn't end up with that many insights.

I did learn one thing about testing prototypes: when you fake content, it needs to either be very realistic or very obviously fake. Representing an image with a crossed-out box in a sketch works well. But we were using high-fidelity prototypes (Photoshop comps printed out) and made the mistake of throwing a random image into a spot that should have had a realistic image with text and a call to action. This threw a number of users off. So, make it realistic or not; just don't land in the middle.

Mike Lambert wrote in to mention the EyeTools service. This looks pretty cool, if a bit pricey. Basically, they'll run an eye-tracking usability test for you, and then send nifty charts to show where people are looking.

Posted by Karl at 10:52 AM

The dotted line

Seth Godin posted four charts describing the adoption of a product or service. These are just a simple way to visualize the various modes of user or consumer uptake. Says Seth:

The challenges are pretty obvious. First, how do you decide where to put the dotted line? Second, how do you avoid killing something too early, or celebrating too early. And last, how do you know when to kill a dud? The odds are with those smart enough to launch something new tomorrow.

These questions, to me, are the most interesting part of the post. The challenges might be obvious, but they're the hardest part to figure out.

It feels like I've been hitting these questions a lot lately at work. Not necessarily with launching a product or a service, but rather with features on our website. As you might have guessed from my series of recent posts on website stats, I've been looking at more ways to measure usage of different aspects of the site. Like most folks, we have lots of features, and limited resources. So, logic dictates that we should focus on the highest value features and dump the low-performance, low-value ones.

But, when to pull the plug? (Or, where is the dotted line?) And why does a feature get used or not? Low usage could point to the fact that the feature was, in total, a dud. Or, it could be poor placement. Or unclear copy. Or a usability problem. It is easy to say that something was just a dud, but pulling the plug could be a hasty decision. I'm inclined to spend at least some time focusing on our execution of a feature before I pull the plug. In a recent case, we spent time redesigning a feature and altering its layout in an attempt to bump up usage. It turned out that a before/after analysis showed the changes didn't really make that much of a difference. So, I think we're going to yank the feature. But I think it was worth the time to experiment with the execution before giving up on the concept.

Posted by Karl at 10:30 AM

June 24, 2005

Social bookmarking inside the firewall

Michael Angeles has posted a very nice writeup of a del.icio.us clone he helped develop: Making libraries more delicious: Social bookmarking in the enterprise. I really like how they extended a good idea (social bookmarking) and thought about how to drive this content into other places, like their portal.

Posted by Karl at 03:55 PM

June 20, 2005

Single page usability testing?

I've run a number of usability tests in the past, but I'm somewhat stuck with a current project. We're re-designing a single page on the site (okay, it's an important one, the home page), and I'd like to do some user testing in the paper prototype stage. But, I'm having a hard time coming up with scenarios that would work well for a single page like this. All the ideas I have seem like they'd be over in a couple of seconds. In other words, it would take me much longer to do the intro to the test than it would to do the actual test:

[Long-winded intro about how we're testing the site, not the user...]

"Pretend you're doing research for a term paper on the Civil War. Where would you start?"

[User points to a link.]

"Great. Thanks for coming in today. Here's a water bottle!"

Anyway, does anyone know of any good resources on conducting usability tests with just a single page? Most of the usability resources I've seen assume a more long-term interaction. Shoot me an email at weblog@karlnelson.net if you have any good resources or ideas...

Posted by Karl at 09:22 AM

June 13, 2005

Ajax, InnerHTML, and the back button

I've been playing around with a little quick-n-dirty "meta" search system. In short, the script takes the term entered by the user, queries a couple of third-party search engines, then parses (read: screen scrapes) and displays the results to the end user. Because the biggest issue here is the time it takes for the third-party servers to process the query and respond, I figured I'd use a bit of that Ajax magic that has been going around. So, now the script updates the page (without reloading) as soon as each of the servers responds, meaning the user can see results right away, even as the script still chugs away on the slow parts. This works quite well.

I'm using the prototype.js system that is included with Ruby on Rails. The javascript library doesn't depend on Rails, and in this case I'm using it with a combination of PHP and Python scripts. Basically, the script makes a call to the server and then places the HTML it gets in response inside a div. The javascript that makes this all happen is called using onload(), but I've also toyed with just plopping it in the code, with little difference in functionality.
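
For the curious, the calls look roughly like this. This is a sketch rather than my actual code, and the endpoint, div, and field names are made up, but it's the general shape: one Ajax.Updater per search engine, fired when the page loads.

    // Sketch: fire one request per third-party engine when the page loads.
    // Each server-side script returns a chunk of HTML that gets dropped,
    // via innerHTML, into its own div as soon as it arrives.
    function runMetaSearch() {
      var query = $F('query');                  // prototype.js: value of the "query" field
      var engines = ['engine_a', 'engine_b'];   // hypothetical endpoint names
      for (var i = 0; i < engines.length; i++) {
        new Ajax.Updater(
          engines[i] + '_results',              // id of the div to fill
          '/search/' + engines[i] + '.php',     // the scraper script for that engine
          { method: 'get', parameters: 'q=' + encodeURIComponent(query) }
        );
      }
    }
    window.onload = runMetaSearch;

Each request completes on its own schedule, which is what lets the fast engines show up without waiting on the slow ones.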

So, everything works like a charm, and the user is eventually presented with results from multiple sites. Then, Mr. or Ms. User clicks on one of the results, and leaves the site. So far, so good. But, trouble arises when the user clicks the back button and ends up back on the search page. Firefox re-fires the onload(), and the results appear quickly (I'm caching the results on the server side). IE6 doesn't re-fire the onload(), leaving me without results. This isn't good, seeing as a healthy (but declining) portion of our users are using IE.

After a little sleuthing, I came across this page. Taking advantage of the fact that IE's history/cache stores pages with distinct URLs (I believe), these guys show a method using the Dojo toolkit to modify the URL by appending a "#" followed by a string of numbers. Voila! This should do the trick, but I haven't tried it yet.
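
I haven't dug into the Dojo code itself yet, so take this with a grain of salt, but my rough reading of the idea, sketched in plain javascript (these are my guesses at the mechanics, not theirs), looks something like this:

    // The gist, as I understand it: once results are in the page, tack a unique
    // fragment onto the URL. Changing the fragment doesn't reload the page, but
    // it gives this state its own distinct URL in the browser's history/cache.
    function markResultsLoaded() {
      window.location.hash = 'results-' + new Date().getTime();
    }

    // And on the way back in: if a results fragment is present but the page is
    // blank, re-fire the search. (How Dojo actually detects that you've come
    // back is one of the things I still need to check.)
    function restoreResultsIfNeeded() {
      if (window.location.hash.indexOf('results-') != -1) {
        runMetaSearch();  // the results come back fast, since I'm caching server-side
      }
    }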

I do have a couple of little concerns. One, it does muddy up the URL a bit. I can live with this, though. But, of more concern is the fact that it would create two entries in the history/cache for the same page. One would be blank, the other would have the results in it. Users would then have to step back through the blank page. And having watched many a user rely solely on the back button during usability tests, I'm not wild about this.

So, I see a few potential solutions:


  • Find a different Ajax toolkit. Someone may have solved this?

  • Maybe there is some javascript doohickey out there that would cause IE to re-fire the onload() event, making its behavior match that of Firefox.

  • Try to disable client-side caching so the page has to reload. Not sure if there is a clean way to do this, other than to turn my form from a GET to a POST. That would, of course, bring up that annoying "do you want to resend the data?" dialog when the user hits back, I think.

  • Don't use Ajax at all, and have the whole thing happen on the server-side and send everything all happy-like to the browser. This is fine, except for the speed issues. That is, it's slow. Of course, I could slap some sort of interstitial in there, like an airline reservation system.


So, anyone have any brilliant ideas? Write me at weblog@karlnelson.net.

Posted by Karl at 11:20 AM

June 03, 2005

Link Dump

A small and random collection of nifty things I've seen float past lately:

The BBC is offering downloads of all of Beethoven's symphonies. Unfortunately, you need to pay attention to when they broadcast 'em, as the MP3s are only up for a few days after each broadcast. Still, cheaper than iTunes.

Here's a nice looking version control system comparison.

Peter Merholz on The Dark Side of Design Thinking: "Look at any interactive design annual, anything judged by a panel of designers, and you will see a stupefying weakness for styling. It doesn't matter that after using any of the winners for 2 minutes, you're pretty much done (if you could figure out how to use it in the first place)."

Here are a couple of tools that developers can use, with Greasemonkey, when developing AJAX apps. Nice.

Tom Hoffman on free concept mapping software.

Posted by Karl at 05:23 PM