CSUN: WCAG evaluation methodology

Shadi Abou-Zahra and Eric Velleman spoke about the “Website accessibility conformance evaluation methodology” (WCAG-EM), i.e. how you test an entire website for accessibility.

NB: It doesn’t necessarily lead to conformance claims, which are made for individual pages.

The document stands on its own, and it sounds like there may be slides available; does anyone know where?

When to use

  • Self-assessment of conformance to WCAG 2.0
  • Third party evaluation and certification services
  • In house QA, monitoring, research, training

Who is WCAG-EM for?

Primarily, experienced web accessibility evaluators, ideally involving users with disabilities.

It is secondarily for web developers, suppliers, integrators etc.


NB: There was some great work in defining things in the document, such as “website”. It was actually rather difficult!

Eric shows a diagram of a 5 step process:

  1. Define the evaluation scope
  2. Explore the target website
  3. Select a representative sample
  4. Audit the sample
  5. Report the findings
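
To make the flow concrete, here is a minimal sketch of the five steps as a pipeline. The function names and data shapes are my own illustration, not anything defined by WCAG-EM itself:

```python
# Hypothetical sketch of the five WCAG-EM steps as a simple pipeline.
# All names and structures here are illustrative, not from the methodology.

def define_scope(site_url):
    """Step 1: record what is in and out of scope."""
    return {"site": site_url, "conformance_target": "WCAG 2.0 AA"}

def explore_site(scope):
    """Step 2: identify key pages, templates and functionality."""
    return [scope["site"] + path for path in ("/", "/search", "/contact")]

def select_sample(pages):
    """Step 3: pick a structured sample of representative pages."""
    return pages  # in practice: templates, key tasks, varied content types

def audit_sample(sample):
    """Step 4: evaluate each sampled page against the success criteria."""
    return {page: "pass" for page in sample}  # placeholder results

def report_findings(results):
    """Step 5: summarise outcomes per page and per success criterion."""
    return {"pages_audited": len(results), "results": results}

scope = define_scope("https://example.com")
report = report_findings(audit_sample(select_sample(explore_site(scope))))
print(report["pages_audited"])  # → 3
```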

Most of this is very similar to notes that I took at the “Unified Accessibility Evaluation Methodology” talk from yesterday, which is to be expected really.

It doesn’t specify a particular number of pages you should test, but it does list the factors that would influence that decision, increasing or decreasing the sample size.

They suggest adding a random sample of 10%, so that the selection doesn’t (even unconsciously) steer around problem pages.
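
As a rough sketch of how that sampling could work (the 10%-of-the-structured-sample interpretation and all names here are my assumption, not prescribed by WCAG-EM):

```python
import random

def build_sample(all_pages, structured_sample, random_fraction=0.10):
    """Combine a hand-picked structured sample with a random top-up.

    The random pages (10% of the structured sample, per the talk)
    guard against unconsciously avoiding problem areas of the site.
    """
    n_random = max(1, round(len(structured_sample) * random_fraction))
    remaining = [p for p in all_pages if p not in structured_sample]
    random_pages = random.sample(remaining, min(n_random, len(remaining)))
    return structured_sample + random_pages

pages = [f"/page-{i}" for i in range(100)]
structured = pages[:20]          # e.g. templates and key user tasks
sample = build_sample(pages, structured)
print(len(sample))  # → 22 (20 structured + 2 random)
```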

In WCAG a success criterion can only pass or fail, because of how the success criteria are written. However, the methodology suggests a ‘not present’ state as well. When comparing one audit to another, it helps to know that something was not applicable.
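
A small sketch of what that tri-state result could look like in practice; the enum and comparison helper are my own illustration:

```python
from enum import Enum

class Result(Enum):
    PASS = "pass"
    FAIL = "fail"
    NOT_PRESENT = "not present"   # the criterion doesn't apply on this page

def compare_audits(before, after):
    """Report success criteria whose state changed between two audits."""
    return {sc: (before[sc], after[sc])
            for sc in before
            if before[sc] is not after[sc]}

audit_1 = {"1.1.1": Result.FAIL, "1.2.2": Result.NOT_PRESENT, "2.4.4": Result.PASS}
audit_2 = {"1.1.1": Result.PASS, "1.2.2": Result.FAIL, "2.4.4": Result.PASS}
print(sorted(compare_audits(audit_1, audit_2)))  # → ['1.1.1', '1.2.2']
```

Knowing that 1.2.2 went from ‘not present’ to ‘fail’ (rather than from ‘pass’ to ‘fail’) tells you the content changed, not that an existing feature regressed.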

The (optional) scoring aspect was controversial, and they’d like some feedback on that.

Is WCAG-EM ready? Yes! … well, nearly. We need your help.

The comment period ended in February, and they are generally happy with the responses to the comments.

They are currently doing a ‘test run’, and would like people to use WCAG-EM, either on sites you are already working on, or on a test site where they will compare results.


Evaluation baseline: what do you test with?

Scoring? So far there haven’t been any comments about how to weight the scoring, the comments have just been for/against scoring.

There was quite a bit of discussion around the scoring topic, but there are so many factors (who it’s for, what it means, how you set it up).

Tim from the SSB BART Group said that ‘the scores work’, in the sense that they simplify the reporting. Where it runs into issues is when the scores aren’t good. Do you then conclude that you just need to improve the algorithm?
Scoring only the automated testing leads to issues as well.

It seems we are stuck with it though. The other side is that it would be left to people to come up with their own mechanism, which (on average) wouldn’t be as good.

Resources in development

  • Web accessibility evaluation tools list
  • Accessibility features of evaluation tools
  • WCAG 2 accessibility support database, a crowd-sourced database.
