A recent post by a local authority web officer was fairly frustrating for me, as it perpetuates several myths in usability, as well as calling into question my motives.
The post was mostly about the UK Government's forays into Web 2.0, but the last part is about the usability advice on the localdirect.gov.uk site.
We had been contracted to provide a usability and accessibility helpdesk for local authorities. This was early 2006, and when the contract came to an end, even the (Government-critical) Public Sector Forums only had this to say:
a telephone helpline was made available alongside a raft of good practice guidance and an online forum to “encourage the knowledge sharing”. Local Directgov claimed that by May, 234 local authorities and 12 government agencies had signed up to the helpdesk. As far as PSF is concerned, we’ve heard – amazingly – almost nothing but good words said about it.
(Before then being critical of the service stopping).
However, Paul Canning (who I believe was a member of the forum at the time) now takes umbrage:
The answer, written by Nomensa I assume, a usability company contracted by Whitehall, claims that: no usability guideline is black and white, and the context and users have to be taken into consideration.
Whoever wrote this has a vested interest, pushing their expertise— are they really saying that someone like Jakob Nielsen doesn’t make basic, apply to all, guidance? That ordinary web workers have nothing to learn from Nielsen or any of the others in my links list? That only filtered and packaged government-approved usability guidance is kosher?
I did a double take, as that sounded like something I would say. In fact, I did. The localdirect.gov.uk site has published many of the forum questions and answers as a usability FAQ (I didn’t write all the answers, though).
Generalised usability guidelines
If you read the whole section on that FAQ, you’ll see I did indeed point out the best source of general usability guidelines I know of, the research-based set from usability.gov. However, I still stand by this:
“no usability guideline is black and white, and the context and users have to be taken into consideration.” To which Paul says:
are they really saying that someone like Jakob Nielsen doesn’t make basic, apply to all, guidance?
No, not where people are involved.
Jakob Nielsen has done much to publicize usability, but you do have to take care when things are simplified too much, or assumed to be sacred. For example, he used to say people wouldn’t scroll (mistake 6), but this isn’t the case anymore (e.g. 22% scroll to the bottom in this sample, and most scrolled to some degree).
In any case you are dealing with percentages, statistics, and optimising – not clear guidelines that work for all, which is what I was trying to suggest. There is only one proven ‘law’ in psychology: Fitts’s law (the time taken to acquire a target increases with the distance to it, and decreases with its size). Everything else is interpreted.
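To make that one ‘law’ concrete: Fitts’s law is usually written in its Shannon formulation, T = a + b·log₂(D/W + 1), where D is the distance to the target and W its width. A minimal sketch (the constants a and b here are invented for illustration; in practice they are fitted empirically per device and task):

```python
import math

def fitts_movement_time(distance, width, a=0.0, b=0.1):
    """Predicted time (seconds) to acquire a target, using the
    Shannon formulation of Fitts's law: T = a + b * log2(D/W + 1).
    The constants a and b are illustrative only; real values are
    fitted empirically for a given input device and task."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A large, nearby target is quicker to hit than a small, distant one:
near_big = fitts_movement_time(distance=100, width=100)  # ID = 1 bit
far_small = fitts_movement_time(distance=800, width=10)  # ID ≈ 6.3 bits
assert near_big < far_small
```

This is one reason big click targets and short pointer travel distances are such robust interface advice, even though most other “rules” depend on context.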
The person in the field I respect the most is Jared Spool, and for a while they were printing t-shirts with “it depends” written on them, because it does. Any usability finding has to be in the context of who, when and what. It’s actually in the definition of usability (emphasis mine):
the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
Spool provides a good example of the problem with assuming there are broad, infallible ‘facts’ in usability:
It is often stated as if it was almost a law of nature that the faster pages download, the more usable the site was. But when we actually compared the usability of sites to their download times, we didn’t see any correlations. None, zero, zip. If this “fact” was true, we should’ve seen something.
When interviewing candidates for a usability position, I tend to ask their opinions on common usability myths such as 5 users, 3 clicks and 7 +/- 2 menu items. I’m actually looking for these things:
- Relying on data, not opinion (i.e. knowing what the data is really saying).
- Knowing what their own opinions are based on, and being able to justify them dispassionately.
- Knowing which methods suit different situations.
Many sites would benefit from quick internal usability testing at various stages of the process; that is only to be encouraged. But you do run the risk of finding out what you want to hear, or of using the wrong tool for the job. Again, it depends. If people are asking for general guidelines to use, it’s a good indicator that help is needed with the methodology.
Anyone can claim to be a usability expert, just like anyone can set up a web site. But like web development, there is a need for professionals.
12 contributions to “Usability myths and professionals”
excellent stuff, alastair. couldn’t agree more.
Posted at length in response.
Many thanks for considering my points, hope you take this all in the right spirit. I *do* point out that you have your interests but I honestly don’t think my suggestions would do anything but benefit the industry.
I’ve posted you a challenge as well.
Thanks for the follow up, I couldn’t post this properly on Blogger, so posting here for now. Also, who’s Andrew?
I meant that you were on the LocalDirect.gov forum, I don’t know about PSF. Also, I linked to Usability.gov not Useit.
That’s actually the point, generic usability advice changes, but (non-usability) people’s knowledge of that advice doesn’t update that often. We still regularly come across people (clients and users) who firmly believe in the 3 click ‘rule’, that people don’t scroll, and 7-9 menu items as a maximum. None of which are actually that helpful when creating a site.
I’m not sure what you mean by heuristics in this context? The one you quote, I would say should be irrelevant for most web sites. Sites should generally prevent errors (another, slightly conflicting heuristic), rather than providing help in error recovery. Those heuristics were developed for general user interfaces (slightly biased towards application design), not web sites specifically.
I agree that a little usability testing on the US census homepage would probably have helped them improve on the 14% of people who found the population figure. But the same article also talks about his homepage usability guidelines, which number 113 – is that simple?
We’re talking about several levels of usability work / process, and how much it is worth investing in:
For a site like the U.S. Census Bureau, do you think testing with 5 people would cover everything they want to cover? There are probably more than 5 major user groups, let alone 5-10 scenarios that you could reasonably test. Discount usability testing is not going to cut it for that site, except to find a few obvious things at the top level.
Local authorities are actually in a similar position, as there are many types of separate user groups and many, many scenarios they could be interested in. That isn’t to say discount testing could not be used to answer specific questions, but you couldn’t consider that to be mission accomplished.
How do I know that it is difficult for an internal team to find usability errors consistently? Because we regularly have to clean up afterwards. When you are part of the internal team, you are close to the problem and thinking of solutions.
In general, internal teams know most of the possible problems, but don’t know which are real problems for external users. After some training (workshops, or even just viewing tests and reading our reports) the situation is different. However, I regularly come across teams who have taken on personas as a method and come up with 40 of them! Or found a usability problem and bandaged the symptom, not fixed the cause.
I wouldn’t want to be doing all the usability testing available, even if that were possible. In fact, I’ve found that the more knowledgeable a client the more likely they are to employ us, and the questions they want to answer are more complex, and more interesting. The best projects are collaborative with knowledgeable internal people, and if they aren’t knowledgeable to start with, they will be.
You’ll often find that bringing in external people gets the same results accepted without even having to do testing. A little sad, but it’s easier to believe external people.
Usability (that is, creating usable products) is a process, not a set of answers. There is plenty of usability advice available (e.g. Krug, Nielsen, Spool), and the LocalDirect.gov CDs were Local Authority specific advice, which is even more useful for that context. I would return the question and say why isn’t that advice being heeded?
Internally, we have a user-centred design process, which means that people with usability expertise are involved from stage 1 in any project, from user-research to IA to wireframing. But, in terms of testing, we would not involve the same people. The team is split so that people uninvolved in the project test it.
I’m not saying that internal teams don’t know about usability problems, far from it, they probably know about most possible issues. Also, doing some testing will find some problems. However, in terms of testing, they are not the people who should be doing that. Whether you get an external company, or internal people not involved in the project, you shouldn’t test your own work.
And check the last part of CD 2 for the advice on usability testing and what to look for in external usability companies.
‘who’s Andrew?’ sorry, I blame sleep deprivation ..
I don’t think that’s true! All three of these myths may be ‘wrong’ but all three make people make sites simpler and easier to use. yes, as ‘rules’ they aren’t accurate but you’re seeing usability as a technical speciality and this is the key difference, I think, between us. not professional/amateur.
How do you answer them Alastair?
One of the things I always do is explain to people that shit happens with the web in general. It’s all evolving and they have to adjust to a different reality of change. beta-world. That wider context needs to be imparted to people so they see all web ‘rules’ in a better context – one which is open to change and challenge (not easy but there you go, needed).
One changed ‘rules’ example which I love telling people is, yes, people will read long stuff, to the end, online (Poynter). that’s antithetical for them but makes perfect sense to me.
Another one I use is that reading is 50% harder on screen than paper. I can’t remember where that comes from but it’s ‘true’. Most importantly, it works.
I understand that such myths can be an issue – I deal with myths – but the context to change is wider than countering with more facts. You have to try for bigger buy-in so change is acceptable – ultimately understood and welcomed as opportunity.
Like the red text. When my colleague was discussing your response with me today she related that her experience of our discount testing was a lot of confirmation – we’re on the right track. In the midst of that a few things stood out on the first one she did like sore thumbs – literally after 5 people, we did 25 – and were confirmed in a second venue, with a different demographic. and this was a new experience for the colleague. Those sore thumbs were actually essential – the red text issue wasn’t.
I’m not saying just discount testing but the value of it has been proved to me more than once and my experience trumps your idea that “you shouldn’t test your own work”!
More than welcome advice, but my experience, understanding parameters, shows that the team can do basic testing + get value. Merely doing it has enormous value – and it drives engagement with usability therefore buy-in therefore budget for expertise. A big part of this is understanding that not doing it means ignoring customers you’ve actually met. It’s a real route for web staff in smaller councils in particular to arrive at a point where you’re invited in.
You relate your experience
Yes? Did they actually do any testing with the actual public? Where were customers in their decisions or were they just following guidance? I know very few who do testing and some who hire people. Smaller councils, most councils, don’t have real budgets and you guys need real budgets.
I know that people don’t want to do testing, but they should. Everything else you relate takes them one degree away from the customers. You become another intermediary. They stay in the safe zone.
If people don’t want to do testing, that’s an issue. Why not? From my experience some people aren’t suited to it and many people find it difficult, but it’s not the testing itself, it’s interacting with joe/jill public – and avoiding that is not a good thing. Even, and this is my colleague again, when they’re doing it ‘wrong’ they’re still seeing sore thumbs and gaining from participation.
Also, what you’re relating is actually the exact opposite from my experience in terms of lgov people hearing external suppliers. The extent of cynicism is possibly underestimated from your viewpoint.
leadership, lack of. You just have to look at the way your work has been treated. Spun to death and accompanied – literally – by expensive squeezy toys but actually not #1 priority and not properly understood.
Usability is *essential* to ‘transformation’. It is absolutely crucial to any hope of addressing the digital divide. But we are coming off a low base in terms of engagement, led from the top. I know it’s changing but it’s too slow.
If Nomensa can find some corporate way – perhaps the UPA could be prodded? – to say to government ‘get your **** act together’ on usability (that’s one thing, I could go on) then believe me it would help us at the front line pushing the usability cart up a steep slope.
I disagree with using myths to promote usability practice. What happens when they figure out it was a myth, and how much damage will have been done in the meantime? You get buy-in from results.
I can’t comment on the specific instance, but in general, if you are the one performing the testing, it is very, very easy to ask leading questions, in which case it’s not surprising that the results mostly confirm what you expected.
To be clear, I’m not saying that local authorities shouldn’t conduct their own testing, it’s great when they can. The original question was about generic usability guidelines, and now we seem to be talking about whether local authorities should conduct their own testing. These are two very different questions.
What form ‘usability’ should take depends on the stage of the development lifecycle, the internal resource, and the goals of the project. Just throwing discount testing at the issue is making everything look like a nail (when all you have is a hammer). Sometimes you can get better results with less time & budget than it takes to do even discount usability testing, it depends on the question you’re trying to answer.
For example, on a site with 90 different content authors and 90 different ways of doing things, a usability style-guide (specific to their organisation and technology) might be more effective than usability testing 90 different areas of the site (or testing one area and trying to extrapolate).
Usability (or the more general user-experience) should form part of the site management strategy, using the most appropriate tools at each stage. How much of that is internal and how much requires external help will have to be an internal decision, but if they don’t have a Paul Canning, perhaps getting some initial help to set that strategy would be appropriate.
Hardly, they are encouraged to sit in on the testing (remotely), and view as many as possible. But it takes a hell of a lot less of their time, and gets better results.
Do you think a usability firm saying that the Government needs to get their act together on usability is going to have much credence? Or even a usability organisation like the UPA? Like eBay’s recent commitment to UX, it needs to come from inside.
Let’s face it, over the last 5 years central government in both England and Scotland has made huge strides forward on both central sites and local government sites, in terms of accessibility and usability.
No, they aren’t perfect, but by god they are hugely improved on what was previously there.
How did it happen? Easy – money! Central government forced government web sites to improve or else financial penalties would be incurred. OK, realistically it was primarily aimed at accessibility, but most sites also improved their usability as a side effect.
Until enough organisations prod government into action and there are measurable guidelines to judge against nothing will happen IMHO.
So yes Nomensa, UPA, Abilitynet, etc keep prodding and one day with a bit of luck they will take notice.
Point taken, as I have said, you’re the expert, I’m the amateur. However! ‘Scent’ is a hard-sell (as is ‘findability’). You forget the back-story which Alun alludes to in his post. Real-world, the people who green-light spending read about the Web elsewhere and that influences them significantly. ‘3 clicks’ makes people think about the sorts of things we want them to. You have to somehow find language which gets buy-in. That’s another ‘balance’ to find. It’s not as simple as you say – “You get buy in from results.”
We are not disagreeing, what I’m saying is that Usability is a movement, a general aim which everyone has to buy-into — that’s the goal. Out of that comes budgets, attitude-changes and far better customer service.
But you have to overcome a lot of hurdles. I propose that introducing as many people as possible to customers in such situations has far more pluses than minuses. This won’t “[take] a hell of a lot less of their time” but that’s because of the benefit of real participation and engagement rather than observation of the geeks doing their thang. And I think your fears are overblown in practice. including that staff wouldn’t want it — in my experience they do, not all, and it’s instructive who doesn’t. I also think it is short-sighted from a business perspective not to try this approach.
Well not if you’re supporting the Opposition! But I think you’ll find that other industries do manage to find themselves some ‘credence’ and it’s a bit sad that you don’t think you already have it. The UK – for one thing – is doing well in some areas. We do have a lot of good work. But just like Bebo being ignored over MySpace don’t you think our political leaders are rather letting the side down if they aren’t prepared to engage with industry on something as fundamental as the usability of their products? And stop spinning the failures and talk up the real successes?
This was policy and it was wrong.
I’m certainly not going to disagree with that.
I am puzzled as to why you say that the policy of improving accessibility was wrong? Or am I misunderstanding you?
Accessibility is (relatively) easily measurable, so it was a logical target and in my opinion made a significant impact. If you mean it should have included usability as well, then yes, I would agree, but to a great extent it’s an intangible.
At least, I hope, we are heading in the right direction.
Agreed, I’m just saying that this is generally done best as observers to testing, not running it. It is great for people in local authorities to run tests if they have some grounding in doing so, e.g. through training. (Assuming that they aren’t part of the design/development team who created the site to start with – not testing your own work.) It is also important to realise discount testing is only one of many possible methods, and you shouldn’t have unrealistic expectations of what you’d get out of it.
Regarding ‘prodding’ the Government, of course I agree that more should be done on usability, but I’ve already been taken to task over that by you, so wouldn’t that appear to be self-serving?
One of the issues we are getting to now is measurability. One of the reasons I say that results are the best tool for buy-in is that usability is virtually impossible to measure over a range of sites. Alun has a great presentation on the results of usability/accessibility for one local authority showing a massive upturn in usage. (Is that available, Alun?)
I think we have just about the best (i.e. least argued with) method around for testing accessibility, like we did for the UN. Based on Internationally known checkpoints tempered by our experience with real users (which turned out quite similar to the WCAG errata) and knowledge of best ‘practical’ practice. The results aren’t as granular as I’d like, but accurate.
However, doing the same thing for usability is impossible. Just slight changes in business goals or tasks available means the team is likely to come up with a different design, one that might not conform to generic guidelines.
It’s a draft post I’ve not gotten around to yet, but take SOCITM’s approach, which included getting a small set of ‘users’ to try the same thing on different sites. Each user is testing multiple sites, so you get a learning effect: if a user got to a site that did that task differently, they are likely to think it’s a problem, even though a fresh user might not. (Bottom line: usability testing is good for finding issues, not benchmarking.)
Well put, it’s what I was trying to express with my
Praise indeed – thanks. I’ve been meaning to put several up on my site. Usual story never actually getting around to it.
Although most of the story can be seen from the actual stats in question here
Link to Haringey Web Stats
Yes you are :}
There hasn’t been much about this, although I presented on ‘Which came first, accessibility or usability?’ two years ago.
At that time every single conference and seminar being organised was about accessibility with usability vaguely included.
This is getting the whole thing backwards and is PC-driven web development! I am the strongest advocate for accessibility, however what we’ve been doing, largely, is work on behalf of disabled people!
*Everything ends up putting people one step removed from actual customers – whether disabled or not. This isn’t either good strategy or policy – and it was policy, that’s what all the conference companies selling to us picked up on and sold back to us.
It is not right to say that fixing accessibility (especially when all that means is meeting some checklist) somehow magically fixes usability – but this is what I’ve heard said. There’s a myth for you, Alastair!
Regarding discount testing, I think we are in roughly the same place, which is a relief. I am not professionally trained but I’ve got ten years of reading/doing it etc. – grounding – so I guess I count as trained in our situation. And I am training others to do it. In other places where I know they’ve done it they’ve trained themselves up, not launched into it.
It would greatly help if there was real guidance and help here which reflected practical reality for small/middle sized organisations. I don’t have much budget – we need other methods. This is why discount testing is so attractive to me, not as substitute but reinforcement for the professional testing I have budget for.
I’m not exactly sure what you mean here but if you mean my original, sharp problem with your LDG advice, that was to do with the general tone which was disempowering for web workers on usability. Professionals always need others to have some idea/some insight into what they do. It is far easier to do the job if you have some level of general buy-in. If it becomes that usability is solely the province of ‘men in white coats’ in my opinion this doesn’t help usability-the cause. If that’s what you meant!
On results, one of the things I’ve been thinking we should do is tie stats to versioning. Things like this will help in prising out usability changes as the factor from other factors. We have had huge increases in usage but we are also building links and doing other things to drive usage, some of which has been very big. Also, my feeling is that we should focus professional testing on high-profile, high-volume transactions. In the process of transferring successful templates we can see the changes in completions. Tying into stats is the key, I think. This would also help with those issues you raise, Alastair – you have to measure apples with apples.
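The idea of tying completion stats to template versions could be sketched like this – a hypothetical illustration only, with entirely invented version names and numbers, showing how completion rates compare like-for-like across versions:

```python
# Hypothetical sketch: aggregate transaction stats per template version
# so usability changes can be separated from other traffic drivers.
from collections import defaultdict

# Each record: (template_version, transactions_started, transactions_completed)
# for one day's stats. The data is invented for illustration.
daily_stats = [
    ("v1", 500, 180),
    ("v1", 520, 190),
    ("v2", 480, 250),  # after a redesigned template went live
    ("v2", 510, 270),
]

totals = defaultdict(lambda: [0, 0])
for version, started, completed in daily_stats:
    totals[version][0] += started
    totals[version][1] += completed

for version, (started, completed) in sorted(totals.items()):
    print(f"{version}: {completed / started:.1%} completion rate")
```

Comparing completion *rates* per version, rather than raw usage, is what keeps the comparison apples-with-apples when link-building and other campaigns are changing overall traffic at the same time.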
Thought I probably was 😉
I must admit I’ve been pushing it from the point of view that you can have the most accessible site in the world, but if people can’t find the information they want, what’s the point of it?
If you do accessibility you are wasting your time if you don’t also do usability.
I had nightmares trying to convince certain councillors that the council’s internal organisation structure wasn’t the most sensible or user-friendly structure for the external site. I eventually spotted and used the example that people probably wouldn’t take too kindly to having “refuse removal and crematorium grouped together……”