Recently, a blog post caught my attention, called ‘Does Easy Read work? Let’s look at the research’, by the Information Access Group (IAG), an Easy Read provider in Australia.
Here I’ll look at IAG’s interpretation of the research papers, and its claims that Easy Read is effective.
IAG refers to five ‘key’ studies ‘about the use of Easy Read’ for people with intellectual disability.
Three of these, by Fajardo et al. (2014), Hurtado et al. (2014) and Jones et al. (2006), contributed to my own research on Easy Read.
The other two papers looked at symbols, which are usually used to support people with moderate to severe communication needs, and represent a different approach for a different level of need.
Easy Read research
Only the paper by Hurtado et al. actually mentions Easy Read.
IAG interprets Hurtado et al.’s results as evidence participants ‘understood the information’. To work out whether Hurtado et al., and other researchers, really do tell us anything about ‘understanding information’, we have to deconstruct ‘understanding’.
Information is made up of sets of interrelated sentences, called texts, which can be written or spoken.
To understand the components of written information, imagine texts are constructed like a house. The particles of clay (words) are combined to form bricks (sentences), which are held together with mortar (cohesion) following a plan (structure) to form a house (the text).
For an information document to meet its aims, the recipient has to see the whole house. Seeing is like reading: it’s a surface-level experience. To understand, recipients have to process and internalise the information, as if going deep inside the house.
As everyone is different, the inside of every ‘house’ is different. We can look through the window, but we can never know for sure what other people’s interpretations look like.
What’s being tested?
Hurtado et al. say they tested understanding of ‘concepts’ using recall and recognition techniques. Hurtado et al. do not explain what they mean by ‘concepts’, but this term usually refers to the ideas words represent.
For example, ‘back’ could represent the reverse side of an object, but it has other meanings too, so concepts are also context dependent.
However, recalling words is not evidence of understanding, and recall can occur without any understanding. Even if Hurtado et al.’s research does measure understanding of words, we need to know how each participant understands the sentences, and whole text, before we can claim the overall message had been understood.
Components of sentences
Jones et al.’s research involved 24 service-users with a mild or borderline learning disability. To measure verbal comprehension the researchers used a test called the Test for the Reception of Grammar (TROG), which in my role as a Speech and Language Therapist I am very familiar with. The test assesses understanding of isolated syntactical (grammatical) elements, and highlights an individual’s strengths and areas of need.
Identifying sources of difficulty
Providers of accessible information and Easy Read do not usually have the resources to test individuals at the level of detail demonstrated in the research paper. Therefore it can be difficult to know whether, and to what extent, a feedback group is representative of the target audience.
Some syntactic structures are easier to understand than others. For example, Jones et al. found that 0% of the 24 participants understood embedded clauses. Yet Easy Read guidelines do not explicitly mention embedded clauses, or other causes of syntactic complexity, and I have found multiple embedded clauses in Easy Read documents.
How should the text be written?
IAG informs us that ‘the studies we looked at do seem to agree on the simplification of the text’. Specifically, IAG tells us:
‘Jones, Long and Finlay state that a general rule in writing Easy Read should be to use lots of short, simple sentences instead of one complex sentence’.
Jones et al. do not state this. In fact, Jones et al. do not use the words ‘Easy Read’ or ‘short’ anywhere in their paper. What the authors actually advise is this:
‘Where individual tailoring is not possible, it would be helpful if those producing the written material could draw on a detailed knowledge base concerning the reading comprehension of adults with learning disabilities … to select vocabulary and grammar that are more likely to be understood.’
Without an understanding of the components of language that contribute to difficulty, writers cannot make informed choices about how to adapt, and have no theoretical guidance on what to test. Adapting information requires specialist, technical knowledge and skills, yet Easy Read and Plain English have little or no accredited training, and no quality control.
A lack of theoretical knowledge also forms a barrier to understanding relevant research. Fajardo et al.’s discussion on connectives is complex, and refers to previous theoretical models. IAG’s claim that the authors attribute difficulty to ‘the number of syllables in each sentence’ is contrary to the paper’s conclusion. Fajardo et al. actually noted their findings were in contrast to approaches which advocate short words and sentences, and advised:
‘A longer sentence in which the link between two clauses is explicitly signalled may be easier to understand than two short separate sentences, if the individual has reached a certain level of knowledge’.
(It should be noted that, if the individual has not reached ‘a certain level of knowledge’, it is the content that needs to be adapted, not the sentence length.)
IAG also tells us that Fajardo et al.’s participants ‘found it harder to understand sentences the more connectives the sentence contained’. Cognitive, language and reading research tells us something different (as discussed by Fajardo et al.), and the researchers conclude that ‘performance level was affected by the type of connective and its familiarity’.
Images in Easy Read
IAG reports that its images are ‘very well received’, that users have a ‘strong recognition of the images used’ and that they ‘greatly improve’ understanding.
We need to know what is meant by ‘recognition’, and how this relates to the understanding of words, sentences and whole texts. We also need to know whether service users in different organisations, who are familiar with different pictures, have similarly strong ‘recognition’ of IAG’s pictures.
Lack of research
Although IAG concedes Poncelas and Murphy’s finding that symbols did not help understanding, it does not quote Hurtado et al.’s comment:
‘There is a dearth of research objectively examining whether adding pictures actually enhances the comprehensibility of written text’.
Pictures don’t help
Hurtado et al. also report other studies which show symbols and pictures ‘did not improve understanding’ and ‘may even contribute to short term confusion’.
Pictures can represent objects, but it is harder to represent specific actions, and impossible to represent abstract ideas, or the links between ideas that make up a text, without the recipient already having some understanding of that text.
Matching pictures to meanings
In Easy Read, I have found pictures often illustrate the concrete concepts within a sentence, rather than the more difficult, abstract concepts, and relationships between ideas.
There can be a weak or non-existent match between the objects represented in the picture, and the words within the sentence. For example, the Makaton sign for ‘good’ may be used without the word ‘good’ occurring in the corresponding sentence. The reader needs to interpret the sentence first, and infer meaning not explicitly stated within the text, to understand the relevance of the picture.
Contrary to IAG’s claim, there is no evidence that ‘these findings suggest that a combination of methods could result in the highest overall comprehension rate’. There is research showing that the addition of pictures reduces comprehension, because of the increase in processing demands.
It is also misleading to apply findings about single words to the effectiveness of whole texts, as word comprehension represents only a small part of the reading comprehension process.
IAG reports that one of its ‘key findings’ in user testing is ‘gratitude’.
All efforts to improve accessibility and equality are laudable. However, ‘gratitude’ is not evidence that Easy Read works, nor that Easy Read is the most effective approach to accessible information provision.
IAG tells us the research shows people with learning difficulties have different levels of understanding, and everyone is unique.
I hope we do not need a research paper to tell us this, but how can we write accessible information for such a diverse audience, and make sense of user feedback, when everyone has different information and communication needs? Easy Read does not give us the answers.
IAG describes its user testing as ‘talking to a small group of people’, and finds users have ‘good understanding of key concepts’.
Reading comprehension requires processing not just the words, but also the sentences, and how these all link together to form a coherent text. It is very difficult to control the many variables that impact on understanding. Everyone’s understanding is unique, and a group discussion cannot give us detailed information on individual interpretations.
To increase transparency and validity in user testing, we need to know what was tested, and how.
So what next?
IAG concludes ‘these studies support much of what we know – that short words, short sentences and short paragraphs help.’
In fact, there is very little empirical research specifically on Easy Read. If anyone knows of any Easy Read research, please get in touch.
Few measures of effectiveness
Studies which evaluate the accessibility of health literacy materials, adapted using traditional simplification techniques (including Easy Read and Plain English), usually use readability formulae – which measure word and sentence length, rather than understanding – and rarely include people with disabilities.
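To see how little such formulae capture, consider a minimal sketch of one widely used example, the Flesch Reading Ease score. (This is an illustration only, not a formula used by any of the studies discussed above; the syllable counter is a deliberately naive vowel-group heuristic, whereas published implementations use dictionary-based counts.)

```python
import re

def naive_syllables(word: str) -> int:
    # Very rough heuristic: count runs of vowels. Real readability tools
    # use dictionary-based syllable counts; this is for illustration only.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean 'easier' text.

    Note that the formula is built ONLY from word and sentence length.
    It says nothing about cohesion, structure, or whether any reader
    actually understands the text.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(naive_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

A text of short, familiar words in short sentences scores far higher than one of long words, regardless of whether either text is coherent or comprehensible – which is precisely why such formulae cannot stand in for testing understanding with real readers.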
There is also a dearth of evidence to support the claim that ‘Easy Read suits a wide range of audiences, not just people with intellectual disability’.
Some Easy Read and Plain English techniques are contrary to contemporary research on low literacy and communication disability. Several studies, which draw on contemporary reading theory, suggest reducing word and sentence length can increase comprehension difficulties.
A need for modernisation
Easy Read and Plain English do not provide guidance on how to adapt information strategically for different levels of understanding, or how to test for different components of understanding.
I have spent some time researching Easy Read, and my findings to date suggest Easy Read writing techniques have a weak evidence base, were not developed with the involvement of, or reference to, people with disabilities, and have not been reviewed or updated since their conception 40–80 years ago.
The new Accessible Information Standard requires:
‘Where a need for support from a communication professional is identified, services MUST ensure … that interpreters and other communication professionals are suitably skilled, experienced and qualified. This SHOULD include verification of accreditation, qualification and registration with a relevant professional body.’
Stringent professional standards are set for sign language interpreters and translators. People with language and learning disabilities, who require texts to be adapted for accessibility, deserve the same level of professionalism.
Read more about Easy Read research.
Read IAG’s blog.