Automatic vs manual testing
Torbjørn Helland · [email protected] · @solhell
Is automation a tool or a solution?
Why did we look into this?
• Reasonable to try to make assessment more efficient
• Frequently asked if there are any good automatic methods
• Most legislation points to WCAG
• Part of the EU methodology project
Selecting robots
Objective: coverage
• Should assess CSS and HTML in parallel
• Cover at least WCAG 2.0 AA
• Option for AAA would be beneficial
• Assessing outside of WCAG would be beneficial
DIDN'T MAKE THE CUT
• PEAT – only checks for photosensitive epilepsy triggers
• AccessLint – too few criteria
• HTML Validator – only HTML quality
• EvalAccess – checks against WCAG 1.0
Objective: easy to run
• Limited setup time
• Limited input
• Intuitive interface
• Partially due to project scope
DIDN'T MAKE THE CUT
• UCDmanager – too demanding setup
• TestPage – run by command line
Objective: easy to interpret
• English, Scandinavian or Google Translate
• Understandable findings
• Specific references
DIDN'T MAKE THE CUT
• Examinator – Spanish and no translation
• HERA-FFX – doesn't show actual findings
• Accessibility Valet – too demanding interpretation
Chosen robots 1/2
• AccessMonitor
• Achecker
• A-tester
• AInspector
• HiSoftware Compliance Sheriff
• Magenta
• Siteimprove
Chosen robots 2/2
• Sortsite
• Tanaguru
• TAW
• Tenon
• Tingtun HTML (eAccessibility)
• TotalValidator
• WAVE
• Web-me
Robots (mostly) assess just quantity – but they also cover more than accessibility and universal design
Selecting test criteria
Objective: Identify detectable error types
• Phase 1 – 38 hypotheses
• Phase 2 – testing
• Phase 3 – evaluation
• 34 error types which at least 1 robot could identify
• Slight editorial focus
Objective: comparing robots
• Headings and structure: 6 tests
• Links: 12 tests
• Contrast: 1 test
• Images: 5 tests
• Forms: 6 tests
• Tables: 4 tests
CODE EXAMPLES
<h1> – <h6>
<a href="…">
CSS color
<img alt="…">
<label>, <input>
<th scope="…">
Errors that weren't discovered
• Typography vs hierarchy
• More than color to identify links
• Text on image
• Image of text
• Focus effect
Headings
Headings – what is important?
• Using headings increases readability
• Correct code for headings
• Correct heading hierarchy
• Relevant content
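A minimal sketch (not any of the tested robots' actual code) of one hierarchy check the deck discusses: flagging heading-level jumps such as an <h1> followed directly by an <h3>. The test markup is invented for illustration.

```python
from html.parser import HTMLParser

class HeadingChecker(HTMLParser):
    """Flags heading levels that skip a step in the hierarchy."""

    def __init__(self):
        super().__init__()
        self.last_level = 0
        self.errors = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            # A heading should go at most one level deeper than the previous one.
            if level > self.last_level + 1:
                self.errors.append(f"jump from h{self.last_level} to h{level}")
            self.last_level = level

checker = HeadingChecker()
checker.feed("<h1>Title</h1><h3>Oops</h3><h2>Fine</h2>")
print(checker.errors)  # ['jump from h1 to h3']
```

A check like this is conclusive for code structure, but it says nothing about "relevant content" or "potential headings" – the parts the deck notes robots miss.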
Headings – Achecker
• Does check for headings
• Assesses the hierarchy, but doesn't find all errors
• Doesn't look for potential headings
• Doesn't check for content
Headings – A-Tester
• Does check for content
• Requires that headings are placed within main, header, section or article
FINDINGS
<h2> with only CSS content (correct finding)
HTML5 DO-element missing (wrong finding)
Headings – eAccessibility PDF
• Doesn't register the tags
Links
Links – what is important?
• Visual appearance
• Understandable target
• Consistent behaviour
• Focus highlight
• That they work!
Links – eAccessibility
• eAccessibility HTML check: incorrect error on inconsistent HREF method
Links – AccessMonitor
• Checks for skip link
• Identifies adjacent links to the same target
• Doesn't check for external links or new windows
Links – Total Validator
• Checks for content
• Checks if identical link texts lead to the same URL
• Checks if href is valid
• Follows every link to identify removed pages and retired domains
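A hedged sketch of the simplest of these checks – syntactic href validity – using only the standard library (this is not Total Validator's implementation; actually following each link to find dead pages would need an HTTP request per URL, omitted here):

```python
from urllib.parse import urlparse

def href_looks_valid(href: str) -> bool:
    """Reject empty, whitespace-only and fragment-only hrefs."""
    href = href.strip()
    if not href or href == "#":
        return False
    parts = urlparse(href)
    # Accept absolute URLs with a host, or relative paths.
    return bool(parts.netloc) or bool(parts.path)

print(href_looks_valid("https://example.com/page"))  # True
print(href_looks_valid("#"))                         # False
```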
Contrast
Contrast – requirement
• Luminance contrast, measured between background and text colour
• Scale goes to 21:1
• Small text requirement: 4.5:1 (AA) – 7.0:1 (AAA)
• Large text requirement: 3.0:1 (AA) – 4.5:1 (AAA)
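These thresholds refer to the WCAG 2.0 contrast-ratio formula, which is fully mechanical once the two colours are known – a sketch, with hex colours as an assumed input format:

```python
def _linear(channel: int) -> float:
    """sRGB channel (0-255) to linear value, per WCAG 2.0."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_colour: str) -> float:
    """Relative luminance of a colour like '#1a2b3c'."""
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05); scale tops out at 21:1."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
```

The hard part for robots is not this arithmetic but the slide's next point: reliably determining which background colour actually sits behind the text.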
Contrast – Contrast Checker
• Doesn't always correctly identify the background color
• Requires manual check
FINDING
1.03:1
Contrast – HiSoftware
• Assesses objects inside <head>
• Requires both background and text color to be specified for each and every object
CONCLUSION
The real errors are lost in the huge amount of false positives
Alt-texts could be a separate lecture, but…
Image categories
• Pure decoration• Icons• Supportive images• Meaningful images• Complex images
Pure decoration
WRONG SOLUTION
<img> without alt attribute
BEST SOLUTION
CSS
SOLUTION 1
alt=””
Icons
Share on Facebook
FULL LINK TEXT
Solutions
• alt=""
• Preferably CSS background image

MEANING DEPENDS ON CONTEXT
Solutions:
• alt="Follow us on Twitter"
• CSS background image + visually hidden "Share on Twitter"
Supportive images
CORRECT SOLUTION
alt="Woman sleeping on keyboard and books. Photo."
EXAMPLE
WRONG SOLUTION
alt="Illustration photo. Colourbox."
CSS background image
Meaningful images
CORRECT SOLUTION
<figure>
  <figcaption>EEA registrations, the seven biggest countries, 2012</figcaption>
  <img alt="Poland 15 600, Lithuania 7 500, Romania 2 600, Germany 1 900, Latvia 1 900, Great Britain 1 700, Bulgaria 1 300.">
</figure>
EXAMPLE
Complex images
CORRECT SOLUTION
alt="Who's suing who in telecom. Infographics. Text description follows after the image."
EXAMPLE
Illustration: David McCandless
WRONG SOLUTION
Very detailed description in alt or longdesc
The only conclusive automated check:if the alt-attribute is present
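That one conclusive check is trivial to automate – a minimal sketch with the stdlib parser (test markup invented for illustration); everything above it, judging whether the alt text is *right* for its image category, stays manual:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Counts <img> elements with no alt attribute at all."""

    def __init__(self):
        super().__init__()
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        # alt="" (decoration) passes; a completely absent alt does not.
        if tag == "img" and "alt" not in dict(attrs):
            self.missing += 1

checker = AltChecker()
checker.feed('<img src="a.png" alt=""><img src="b.png">')
print(checker.missing)  # 1
```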
WCAG, robots and assessment
Automation ≈ WCAG
• Within most error categories there are errors both inside and outside WCAG
• In all categories, the errors outside WCAG had a significantly lower identification rate
• On average, almost three errors inside WCAG were identified for each error identified outside WCAG
[Diagram sequence: Automation ≈ WCAG – overlapping circles for Design, Tech and Content]
Robots and JavaScript
• Many robots only check the source code
• JavaScript can alter the HTML code after loading the page
ERGO
Reports on errors removed by JavaScript
Doesn't report on errors caused by JavaScript
A lot of important stuff outside WCAG
• Size of clickable areas
• Understandable navigation
• Menu structure
• Search functionality
• Reading support
• Prefilled information
• Typography
• …
Manual assessment is growing
• Post- og Telestyrelsen in Sweden
• Difi in Norway
• Meac in the European Union
Results
Points
• Conclusive finding = 3 points
• Potential finding = 2 points
• Unclear finding = 1 point
• Doesn't check = 0 points
• Misreports = -1 point

CONCLUSIVE
• Clearly identifies a definite error

POTENTIAL
• Identifies objects in need of manual check

MISREPORTS
• Doesn't identify all error instances
• False positives

UNCLEAR
• Poor descriptions
• Identifies without stating error
Top 3 Robots
Tanaguru – Conclusive: 10 · Potential: 3 · Unclear: 3 · Doesn't check: 22 · Fails: 0
TotalValidator – Conclusive: 11 · Potential: 3 · Unclear: 3 · Doesn't check: 20 · Fails: 1
AccessMonitor – Conclusive: 12 · Potential: 2 · Unclear: 1 · Doesn't check: 21 · Fails: 2
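The rubric can be expressed as a simple scoring function. The weights come from the Points slide; the deck's percentage scores presumably divide by a maximum it doesn't state, so only raw points are computed here:

```python
def robot_score(conclusive: int, potential: int, unclear: int, misreports: int) -> int:
    """3 per conclusive, 2 per potential, 1 per unclear, -1 per misreport."""
    return 3 * conclusive + 2 * potential + 1 * unclear - 1 * misreports

print(robot_score(10, 3, 3, 0))  # Tanaguru: 39
print(robot_score(11, 3, 3, 1))  # TotalValidator: 41
print(robot_score(12, 2, 1, 2))  # AccessMonitor: 39
```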
Scores from 14 % to 41 %
9 robots required to cover all 34 confirmed tests
Error types most frequently identified
• Links without content
• Headings without content
• Alt attribute is not present
• Input elements without correctly attached labels
• Iframe without title attribute
BUT: several examples of inconsistent robot behaviour
Error types rarely identified
• The need for input fields when there are identical labels
• Suspected headings
• External links without indication
• Identical alt text and image text
Achecker and dnb.no
Some of the ignored errors
• No top level heading
• Missing label
• Weak contrast
• Only color to identify links
• Poor tab sequence
• Poor focus highlight
• External links without indication
So, how to use robots?
Create a proper foundation
• Navigation concept
• Zoom and responsive design
• Facilitate text structure
• Colors and contrast
• Typography
• …
• Evaluate manually!
Use robots to evaluate content
• With a solid foundation, editorial content will be the reason for errors
• Common editorial errors are use of headings, tables and alt texts
Use the strengths
• SiteImprove does a pretty good job on headings
• AccessMonitor and Web-me are doing quite well on tables
• eAccessibility is the best for forms
• WAVE is good on images
TIP
Usability is key to make use of the robot
Summary
• Automation should primarily be used to locate editorial errors
• Existing robots do not exploit the possibilities of automation
• Manual checking is needed to cover WCAG
• A lot of important stuff outside WCAG is hardly covered by any robot
Everything we recommend is tested