
The Digital Access team at Vision Australia provide technical and user-based assessments of websites. 

When we test websites, we usually find a high number of issues that relate to screen reader use. A screen reader is a piece of software that reads the text on the screen using a synthetic voice and also indicates the nature of the content, for example whether it is a heading, an image, a list or another type of content.

Screen readers are mostly used by people who are blind or have low vision, or who live with other print disabilities such as dyslexia. Overall, the number of people who use screen readers is relatively small; however, they do play a vital role.

With the emergence of virtual assistants such as Google Home and Siri, we now have easy access to software that can understand voice commands and talk back to the user in a synthetic voice.
 
This means that suddenly the screen reader principle is becoming mainstream and is experienced by a much larger number of users.

My guess for the future is that digital assistants and screen reader software will merge into one. On Windows, for example, Cortana (the digital assistant) and Narrator (the screen reader) would become a single product.

These smart screen readers will of course have AI capabilities and be able to recognise controls from a visual perspective. This signals a change in the way we work with web accessibility.

For example, one of the issues we often pick up is in relation to widgets, for example a filter control. Visually, it is clear what the control does and it can easily be used by a sighted mouse-user; however, when you run a screen reader through the control it does not work well.

This is typically because there are too many or too few ARIA attributes in the HTML mark-up, or they are placed on the wrong elements.
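
To make that concrete, here is a minimal sketch of the kind of mark-up a screen reader relies on today: a filter toggle with aria-expanded and aria-controls placed on the button that triggers the panel, rather than on the panel itself. The element IDs are invented for the example.

```typescript
// A minimal sketch of a filter toggle wired up with the ARIA attributes a
// screen reader relies on today. The element IDs (#filter-toggle,
// #filter-panel) are made up for this example.
function wireFilterToggle(button: HTMLButtonElement, panel: HTMLElement): void {
  // The expanded/collapsed state belongs on the button that triggers the
  // panel, not on the panel itself.
  button.setAttribute("aria-expanded", "false");
  button.setAttribute("aria-controls", panel.id);
  panel.hidden = true;

  button.addEventListener("click", () => {
    const expanded = button.getAttribute("aria-expanded") === "true";
    button.setAttribute("aria-expanded", String(!expanded));
    panel.hidden = expanded; // keep the visual and accessible state in sync
  });
}

const toggle = document.querySelector<HTMLButtonElement>("#filter-toggle");
const panel = document.querySelector<HTMLElement>("#filter-panel");
if (toggle && panel) {
  wireFilterToggle(toggle, panel);
}
```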

However, with a smart screen reader, the exact mark-up will no longer matter, because the screen reader will analyse the control from a visual perspective and trigger mouse events (click, hover) on the control to figure out how it works. The smart screen reader will supplement this by checking the mark-up and recognising which library the control comes from.
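
Purely as a speculative sketch of that idea, the snippet below fires synthetic mouse events at a control and watches whether the page reacts. It is not an existing screen reader API, just an illustration of the principle.

```typescript
// Speculative sketch: probe an unfamiliar widget by firing synthetic mouse
// events and watching whether the DOM reacts, instead of trusting the ARIA
// mark-up alone. None of this is an existing screen reader API.
async function reactsTo(
  control: HTMLElement,
  eventType: "mouseover" | "click"
): Promise<boolean> {
  let mutated = false;
  const observer = new MutationObserver(() => { mutated = true; });
  observer.observe(control, { attributes: true, childList: true, subtree: true });

  control.dispatchEvent(new MouseEvent(eventType, { bubbles: true }));

  // Give the page's own event handlers a moment to update the DOM.
  await new Promise((resolve) => setTimeout(resolve, 100));
  observer.disconnect();
  return mutated;
}

async function describeBehaviour(control: HTMLElement): Promise<void> {
  const hover = await reactsTo(control, "mouseover");
  const click = await reactsTo(control, "click");
  console.log(`Hover changes the page: ${hover}. Click changes the page: ${click}.`);
}
```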

The screen reader can then present the control in a meaningful way to the user. This could be done conversationally ("Hey Bob, which one of these options should I pick?") or in a more traditional way ("Combo box with five options").

This signals a change in the way we address accessibility as there will be less emphasis on the exact coding of the widget. As long as it works for a sighted user, it will work for the screen reader and then, in turn, for the screen reader-user.

A smart screen reader would also be able to help users with cognitive disabilities by visually repositioning controls.

For example, if the user is filling out a form, the screen reader will be able to display just one form control at a time on the screen. This makes it easier for the user, as he or she will only have to concentrate on one item at a time.
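
A rough sketch of that idea, assuming each field sits inside its own wrapper element, might look something like this:

```typescript
// Illustrative sketch of showing one form field at a time. It assumes each
// field sits inside its own wrapper element (here, a <label> or <fieldset>);
// the one-at-a-time behaviour itself is imagined, not an existing feature.
function showOnlyField(form: HTMLFormElement, index: number): void {
  const wrappers = Array.from(form.querySelectorAll<HTMLElement>("label, fieldset"));
  wrappers.forEach((wrapper, i) => {
    wrapper.hidden = i !== index;
  });
  // Move focus to the only visible control so the user can act on it straight away.
  wrappers[index]
    ?.querySelector<HTMLElement>("input, select, textarea")
    ?.focus();
}
```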

The screen reader would also be able to style controls in a consistent way, so no matter which website or app the form is on, an input field is always presented in the same way to the user.
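
Purely as an illustration, the screen reader could inject a stylesheet of its own to do this; the specific styles below are arbitrary examples, not a real feature.

```typescript
// Rough illustration: a stylesheet the screen reader injects so that input
// fields look the same on every site. The specific styles are arbitrary.
const consistentStyles = document.createElement("style");
consistentStyles.textContent = `
  input, select, textarea {
    font-size: 1.25rem !important;
    border: 2px solid #000 !important;
    background: #fff !important;
    color: #000 !important;
  }
`;
document.head.appendChild(consistentStyles);
```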

Also, filter controls, play buttons, links to a home page and even whole menu systems can be repositioned or created from scratch to ensure they are presented to the user in a consistent manner. 

A smart screen reader would also be able to process bug reports from users, allowing them to submit cases directly to the software.

This could be done by users simply speaking the problem to the software, eliminating the need to type up a case and submit it via a separate channel. The smart screen reader would also have direct access to the page's URL, so the user would not need to supply that detail themselves.
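
As a hypothetical sketch, the spoken report could be sent together with the current page address to a reporting service; the endpoint below is a placeholder, not a real service.

```typescript
// Hypothetical sketch only: post a spoken bug report together with the page
// URL. The endpoint address is a placeholder, not a real service.
interface BugReport {
  url: string;
  description: string; // transcribed from the user's spoken report
}

async function submitBugReport(description: string): Promise<void> {
  const report: BugReport = { url: window.location.href, description };
  await fetch("https://example.com/bug-reports", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```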

So, that is my guess for the future: digital assistants and screen readers will merge. This will make it easier to create accessible interfaces, easier for all users to interact with those interfaces, and it will change the way we do accessibility assessments.