Reporting on accessibility issues

After we released another website accessibility survey (this one global in nature), the sticky topic of ‘reporting’ on accessibility issues has come up again. Frankly I don’t think there’s a right answer, but if anyone can think of a better method, I’m all ears.

Jack Pickard wrote:

    Unfortunately, simply complying with WCAG does not in any way ensure that your site is accessible to people with disabilities, any more than failing WCAG is necessarily proof that your site fails people with disabilities.

    Sadly, the reporting of this report fails to take account of this, because it doesn’t make for a good soundbite.

Well, I can agree with the soundbites part, but concisely explaining what each of the soundbites means to a general-public audience would take more words than the article itself! The main thrust, though, is whether, and to what extent, complying with checkpoints equates to accessibility.

Checkpoints = Accessibility?

Does complying with the WCAG (version 1) mean that your site will be accessible? Actually, I would argue it does, at least when measured by an experienced tester.

Automated testing barely correlates with accessibility for people, and is only really useful for hunting down issues, not benchmarking. However, a manual audit by experienced testers is a pretty good measure.

For example, compare a bank site which had 5 priority one issues and 15 priority two issues against a government one that had no major issues and a few minor priority two issues.

There is no possible way for someone who can’t read the images to navigate the first, and few (if any) people would have issues with the second. Those are two extreme examples, but is there a grey area?

Yes, and I can appreciate where Jack is coming from: the testing per checkpoint is black and white; a page either passes (e.g. every image has appropriate alt text) or fails (one or more images do not). A site with otherwise excellent alternative text can fail that checkpoint because a third-party tracking image is hidden on the page without alt text. That doesn’t happen much in practice, but it can happen.
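
To make that black-and-whiteness concrete, here is a minimal sketch of the mechanical half of such a check (a hypothetical Python example with an invented page snippet, not our actual test tooling). Note the limits: it can find a missing alt attribute, but judging whether existing alt text is appropriate still needs a human, which is why automated testing is only useful for hunting down issues.

```python
# Minimal sketch of a black-and-white checkpoint test (hypothetical, not
# our real tooling): a page either passes or fails, so one hidden
# tracking pixel without alt text fails it, however good the rest is.
from html.parser import HTMLParser

class AltTextCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src of every img with no alt attribute at all

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # alt="" counts as present (appropriate for decorative images)
        if tag == "img" and "alt" not in a:
            self.missing.append(a.get("src", "(no src)"))

page = """
<p><img src="chart.png" alt="Sales rose 12% in Q3"></p>
<img src="https://tracker.example/pixel.gif" width="1" height="1">
"""

checker = AltTextCheck()
checker.feed(page)
print("FAIL" if checker.missing else "PASS", checker.missing)
# FAIL ['https://tracker.example/pixel.gif'] - no shades of grey
```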

In people terms, several checkpoints are a continuum. For example, putting a set of links (say, in a footer) in a paragraph is not using lists appropriately, but it is unlikely to affect anyone. However, if the main navigation or the main content areas should be marked up as lists and aren’t, people would notice, and the page justifiably fails that checkpoint.

Reporting accessibility issues

The problem is that we can’t just sweep the little things under the carpet.

This is really a problem brought on by the nature of the field, the nature of the media, and some people in (or just near) the accessibility field.

The nature of the field has to some extent been defined by checkpoints. Now, checkpoints are ‘a good thing’ because they are useful for people building or checking sites. Without the checkpoints, I don’t think accessibility would have got anywhere since 1997, and I don’t think web standards would be as popular as they are today. (Passing accessibility checkpoints provided many with the impetus for switching to standards-based web design, i.e. CSS makes attractive, accessible sites possible.)

Unfortunately, when they are used as the basis for saying whether a site ‘complies’ with the checkpoints, the results have to be reported as black or white, yes or no. Given that the WAI guidelines are likely to form the basis for any legal proceedings, that is what site owners want to know: “do I pass?”

This mentality is also seen when reporting the results of accessibility tests. If a site passed but had the slightest thing wrong, someone would jump on it and cause a big fuss, claiming the results invalid, regardless of how much that issue actually affected people.

This has the effect that when doing research, you have to be strict. Anything you aren’t sure about, you confer with your colleagues on, and over the years we’ve settled on a fairly comprehensive set of criteria. Where these vary from the WCAG guidelines at all, the variations are very carefully considered. One example: we don’t fail sites for not providing accesskeys, but we would fail a site that does use accesskeys yet doesn’t allow the user to set the keys.
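
As an illustration of how a criterion like that splits into a mechanical part and a judgement call, here is a hypothetical sketch (regex-based, so only indicative): finding accesskey attributes can be automated; checking whether the site lets users redefine the keys cannot.

```python
# Hypothetical sketch of the accesskey criterion above: no accesskeys is
# a pass under our criteria; accesskeys in use get flagged for a human
# tester, because "can the user set the keys?" can't be automated.
import re

def accesskey_verdict(html: str) -> str:
    keys = re.findall(r'accesskey\s*=\s*["\']?(\w+)', html, re.I)
    if not keys:
        return "pass: no accesskeys used"
    return f"manual check needed: accesskeys {keys} in use - can users remap them?"

print(accesskey_verdict('<a href="/" accesskey="1">Home</a>'))
# manual check needed: accesskeys ['1'] in use - can users remap them?
```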

All this leads to fairly digital reporting (pass/fail), and that fits relatively well with the type of information the general media is interested in. I think that goes some way to explaining Jack’s other objection:

    What I do object to is the continuing assumption that WCAG is some kind of accessibility “Holy Grail” and that by complying with all of it your site is accessible to people with disabilities, and by failing to do so, your site isn’t. It just isn’t that simple. What particularly annoys me here is that I know the people at Nomensa know that too, but you’d not get that impression from reading the document.

We do! And for what it’s worth, we incorporate that into our testing. But however you look at it, the results are very poor. With a few exceptions noted in the press releases, and a few more almost-there sites noted in the full report, there were very few sites where accessibility had been a consideration. We can tell; I’m sure Jack could too. The results are representative.

    it’s doing accessibility a disservice by not accepting that there are parts that are contentious

Disability is fairly ‘niche’, and web accessibility a small aspect of that, so it will never get lots of prime-time attention. If you introduce doubt, say goodbye to the mainstream media. One of the reasons that I don’t do interviews is that I’m not soundbite-friendly. A great skill Léonie has is communicating these things very well to a wide variety of audiences.

It’s like windsurfing competitions: if you try to explain to a reporter that there are four different disciplines, and several categories within each, you’ve lost their attention and you get no coverage. If you point at someone and say “he’s the champion windsurfer”, they go and do an interview and report on the (top-line) results. I’m not blaming reporters; they respond to the public’s demand.

Granulated assessments?

Within those pages that fail a checkpoint, there are differences that we could take into consideration, but there are two reasons we haven’t thus far:

  • For some checkpoints it would be hideously complex, such as the effect various structural elements have on accessibility in practice. E.g. is skipping more than one heading level better or worse than incorrectly nested lists? (See the sketch after this list.)
  • It isn’t something defined in the guidelines, making it ‘our spin’ on the guidelines.
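
To illustrate the first point: detecting a structural issue like skipped heading levels is mechanical, but deciding how much each instance hurts real users is exactly the ‘hideously complex’ part. A rough sketch follows (regex-based, so only indicative; real markup would need a proper parser):

```python
# Hypothetical sketch: heading-level skips are easy to find, but there is
# no agreed way to say whether one skip is better or worse than, say, an
# incorrectly nested list.
import re

def skipped_headings(html: str) -> list[tuple[int, int]]:
    """Return (from_level, to_level) pairs where the page jumps down
    more than one heading level, e.g. an h1 followed directly by an h3."""
    levels = [int(n) for n in re.findall(r"<h([1-6])", html, re.I)]
    return [(a, b) for a, b in zip(levels, levels[1:]) if b > a + 1]

print(skipped_headings("<h1>Report</h1><h3>Findings</h3><h4>Detail</h4>"))
# [(1, 3)] - the h1 to h3 jump; h3 to h4 is fine
```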

You could count how many instances of an issue there were per checkpoint, and weight the results on that basis, but even then it’s difficult.
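
For what it’s worth, the simplest form of that idea might look like the sketch below (the checkpoint names, counts and weights are all invented; nothing like this is defined in the guidelines). The arithmetic is trivial; the difficulty is that every weight is ‘our spin’, and the resulting number has no agreed meaning.

```python
# Invented weighting scheme, not anything defined in WCAG 1: multiply
# the instance count for each issue by a per-checkpoint severity weight.
issues  = {"images missing alt text": 12, "link groups not in lists": 3}
weights = {"images missing alt text": 3.0, "link groups not in lists": 0.5}

score = sum(count * weights[name] for name, count in issues.items())
print(f"weighted issue score: {score}")
# weighted issue score: 37.5 - but who decided 3.0 and 0.5, and what
# does 37.5 mean to a site owner who just wants to know "do I pass"?
```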

A couple of questions for the crowd:

  • Are there any ‘standardisable’ methods for assessing the magnitude of accessibility issues?
  • How useful is benchmarking outside the media sphere?

When we do an audit for a client, the checks are the same, but the whole process is different, because we try to meet the developers & stakeholders first to find out as much as possible about any relevant issues. Then our reporting can be targeted at the specific requirements and restrictions they have (e.g. CMSs or untrained editors).


2 contributions to “Reporting on accessibility issues”

  1. I don’t think we’re actually disagreeing much, to be honest. I’m not disputing that the majority of sites are frankly – erm… am I allowed to say shit? – when it comes to accessibility.

    I also accept that you’ve got a job to do, and that involves trying to make something understandable to people outside the accessibility field. I guess I’m just saying it’s a shame that we aren’t yet able to expect that the basic building blocks of accessibility don’t need to be described every single time.

    But please don’t take offence (and I hope you didn’t); you know I rate you and the company you work for highly; and we’re all pulling in the same direction. It’s just that the report came out the same day I was preparing my rant, and I just couldn’t leave it out…

  2. Hi Jack,

    No worries, no offence intended or taken. Now that this has been raised at the UN level, perhaps we can start to expect that more?

    I guess you did hit upon something in passing (better measurement of accessibility) that has been bugging me for a while.

    The guidelines and checkpoints are kind of like usability testing, good for improving sites. However, it isn’t easy to measure or compare sites with them. You can (and obviously we do), but I’d like to add a little granularity in a way that is acceptable to the community at large.
