A news release claims that even the best sites cause issues for people with disabilities. This particular test has issues itself (primarily the scope of the tasks and the source of the ‘best’ sites). However, the larger issue is the testers themselves.
Usability testing that includes participants with disabilities is almost always a good thing. However, you have to be aware of what you are getting if you use a panel of regular testers. Obviously I’m somewhat biased, coming from a psychology/usability background and working for a company that does full usability testing. But there are some inherent difficulties with repeat testers that you should know about in order to interpret the results.
Some expert testers
We will generally discard someone who has done a great deal of usability testing, as they are no longer a valid tester. Their knowledge of the testing situation and of what we are looking for is too great. It often affects the process: they will highlight things that may not have come up for regular people, or attach a high priority to problems they recognise from previous testing.
This applies to general usability testing and to testing with people with disabilities. The bottom line is that you would likely end up changing things that do not affect people in normal usage situations.
Also, if all the testers are experts with the technology and familiar with the type of site under examination, they may miss problems that affect regular users.
Another aspect is that the testing is self-reported. The biggest issues we find when testing are often the ones the participants are not aware of: they don’t know that something has gone wrong.
When you watch someone using a site you know, without interfering, you will see people struggling with things they aren’t aware are problematic. They may only report something like “the text is a bit small”, when in fact the link they were looking for was right in front of them and they didn’t recognise it as useful.
Testers with a particular point of view
People using a particular technology (e.g. a screen reader) will report on the experience they had, and probably provide suggestions on how to make it better – as you would hope! However, very few people have the breadth of experience to weigh that advice against other (accessibility) considerations for other groups of people. You can easily end up with biased advice, or conflicting advice.
The ideal situation
(Please excuse this, but I need to add context.) People at Nomensa have been testing sites in general usability terms (with the general public, specific customer segments, etc.) since the early days of the web, and with people with disabilities since 2001. We also have a great deal of experience doing accessibility audits against the WCAG, occasionally on the same site in parallel, and have had the chance to compare the results.
The ideal situation for assessing accessibility is to run an (expert) audit on a sample of pages and to usability test the site, not necessarily with disabled users. Accessibility boils down to usability under specific technological and user constraints. Issues that are minor for most people will be major for those using access technologies or with cognitive issues. The Unified Web Evaluation Methodology is not finished, but its methodology section is excellent.
If you can get people with disabilities to test a site, great. But make sure you either get a good all-round sample, or have the experience to know whether an issue is audience-specific and whether the ‘fix’ will adversely affect others. We see a lot of problems caused by people knowing a little, but not enough (e.g. images with alt text of “image of…”, or phonetically spelt accesskey descriptions).
Ironically, the people best placed to use a service like the usability exchange are accessibility & usability experts – who wouldn’t use it anyway.