Abstract
Introduction Literacy is fundamental for educational achievement, and in the longer term contributes substantially to a range of life skills. Literacy difficulties during the early years of school are associated with long-term impacts on academic success, with differences in academic achievement sustained through children’s schooling. Therefore, addressing literacy difficulties during the early years of school is essential in reducing the risk of children progressing onto negative academic, psychosocial and vocational trajectories. This trial will determine whether a phonics-based reading intervention can improve the reading comprehension of students identified as low-progress readers in the second year of primary school.
Methods/design We recruited 236 students from nine schools after screening for reading difficulties in the second year of primary school (Year 1). Schools in Sydney and the Central Coast of New South Wales were invited to participate, and students were recruited via an opt-out consent process. All children identified as being in the bottom 25th percentile using the Wheldall Assessment of Reading Lists will be eligible for the trial. These children will be randomised into either ‘usual teaching’ or ‘intervention’ groups. Trained school support teachers will deliver the MiniLit intervention. Intervention: In groups of four, children will complete a daily 1-hour lesson with their MiniLit teacher over 20 school weeks. Follow-up: Immediately after intervention completion and 6 months later using child face-to-face assessments. Primary outcome: Reading comprehension at 6 months after intervention completion. The study will have an embedded process and cost-effectiveness evaluation.
Discussion The Building Better Readers trial will be the first efficacy randomised controlled trial comparing usual teaching with a phonics-based reading intervention for children with reading difficulties in Year 1 of primary school in Australia. The randomised design will limit the effect of the bias on outcomes seen in other, non-randomised studies.
Trial registration number ACTRN12617000179336
- school health
- community child health
- outcomes research
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Background
Literacy is fundamental for educational achievement, and in the longer term contributes substantially to a range of life skills. Literacy difficulties during the early years of school are associated with long-term impacts on academic success, with differences in academic achievement sustained through children’s schooling.1 As well as academic underperformance, poor literacy is associated with higher school dropout rates,2 lower likelihood of pursuing tertiary education2 and limited employment opportunities.3 Furthermore, children with literacy difficulties are at risk of emotional problems including anxiety and depression,4 5 as poor academic progress can negatively affect children’s self-esteem and feelings of self-confidence as a learner.6 Children with literacy difficulties can also experience difficulties with peer relationships,7 such as teasing and bullying.8 Therefore, addressing literacy difficulties during the early years of school is essential in reducing the risk of children progressing onto negative academic, psychosocial and vocational trajectories.
Learning to read is the starting point to becoming literate. Reading instruction typically commences when children start primary school. Learning to read is a complex process that involves being able to decode words as well as extract meaning from text.9 10 Decoding, the process of converting a written word into its spoken form, is the starting point for children to gain competency in all literacy-related tasks such as reading complex text, spelling and extended writing.11–14 Difficulties with decoding mean that the child struggles to link, or is unable to link, a phoneme (sound) with its corresponding grapheme (letter or letter combination). Consequently, the child will be unable to read words accurately and with automaticity, which in turn negatively affects the ability to read fluently and comprehend written text (ie, comprehension).12
If problems with decoding and the resulting reading comprehension deficits are not addressed early in a child’s educational life, these deficits typically persist from childhood into adulthood.10 Educational policies focused on reading are important to governments,15 16 because of the large economic impact of literacy competency on both the individual and society. In high-income countries, a 1% increase in national reading scores is expected to increase labour productivity and gross domestic product per capita by 2%.17 Because children from low socioeconomic backgrounds are also more likely to have reading difficulties during the early years of school, redressing reading difficulties may additionally provide a pathway to reducing socioeconomic inequities more generally.
One intervention with early promising findings in addressing literacy deficits during the early years of primary school in Australia is the ‘MiniLit’ programme.18–21 The intervention focuses on improving children’s literacy by targeting five key areas known to be essential elements of literacy instruction14 22: (1) phonemic awareness; (2) phonics; (3) reading fluency; (4) vocabulary; and (5) language comprehension. Lessons are typically delivered over 20 weeks to groups of four students who are withdrawn from the regular classroom for the MiniLit lesson. The lessons are delivered by either trained teachers or trained paraprofessionals under teacher supervision. Despite this promising evidence, no randomised controlled trial (RCT) has examined the programme’s efficacy on student reading outcomes.
The Building Better Readers study aims to address this gap via an efficacy RCT in nine primary schools in New South Wales (NSW, Australia). For Year 1 students (second year of primary school) in the bottom 25th percentile of readers, we aim to determine whether children receiving the MiniLit intervention have better outcomes than children receiving ‘business as usual’ on the following:
Reading at 6 and 12 months after randomisation (primary outcome at 12 months).
Foundational literacy skills at 6 and 12 months after randomisation.
Reduced proportion of children classified as being in the bottom 25th percentile of readers at 6 and 12 months after randomisation.
In addition, the Building Better Readers study aims to determine which aspects of the programme and its implementation act as enablers of, or barriers to, the programme’s success and sustainability, as well as the cost per student and cost-effectiveness of the intervention.
Methods and design
Approval and registration
The project is registered on the Australian New Zealand Clinical Trials Registry (ACTRN12617000179336) and has primary ethics approval from the Human Research Ethics Committee at the Royal Children’s Hospital, Melbourne (HREC 36301). Ethics approval has also been received from the Melbourne Graduate School of Education at the University of Melbourne and research approval from the NSW Department of Education.
Design
This is an efficacy RCT of the MiniLit intervention for eligible children identified after cross-sectional screening. Results will be reported according to the Consolidated Standards of Reporting Trials guidelines and the extension for non-pharmacological interventions.23
Setting
All government primary schools within the Australian state of NSW will be eligible to participate if they meet the following criteria:
Have a Year 1 student population of over 70 students.
Be located within 50 km of the metropolitan centre of Sydney, Newcastle or Wollongong.
Have a socioeconomic status (SES) in the top two quartiles of disadvantage (ie, the most disadvantaged locations), as determined by the NSW Department of Education’s Family Occupation and Education Index (FOEI), which is based on the education level and occupation of each student’s parents.
Based on data from 2016, 134 primary schools meet these criteria.
School recruitment
In January 2017, an invitation to participate was emailed to all principals at eligible schools by the NSW Department of Education. The recruitment email contained a School Information Sheet to outline the project’s aims, rationale and expectations for participating schools, as well as a school consent form. Two weeks after the invitation email, a member of the research team called all schools to discuss the project with the school principal, including providing further details when requested.
The school principal or their representative completed an online Expression of Interest (EOI) form. From the EOI list, an independent statistician not involved in the project was to select 20 schools using a randomisation sequence stratified by SES category; however, as fewer than 20 schools completed the EOI, all schools that expressed interest were selected. Schools were informed of their selection via email from the project team and the NSW Department of Education.
Child recruitment using opt-out consent process
Student recruitment for the project used an informed opt-out consent process, as approved by the NSW Department of Education. The opt-out process covered the initial screening for reading difficulties by the classroom teacher using the Wheldall Assessment of Reading Lists (WARL). For children whose reading performance fell in the bottom 25th percentile, the consent covered trial randomisation and data collection. All parents were provided with a Parent Information Statement (PIS), which included information about the project’s aims, time requirements and expectations.
To increase the likelihood that parents received the information before the study commenced, a notification was placed in the school newsletter to inform parents of Year 1 students that their school was involved in the study and that the PIS was sent home with all Year 1 students. Teachers were encouraged to inform parents about the PIS in their general interactions with families.
Parents were able to contact the research team via a provided phone number or email if they had any concerns or wished their child not to participate.
Exclusion criteria
Students were excluded from the screening and trial stage if they:
Had severe disabilities (eg, cerebral palsy) that did not allow them to participate in the intervention.
Were English language learners whose English language abilities did not allow them to participate in the intervention. Although this will affect the generalisability of the findings to such students, the aim of the project is to establish efficacy and the intervention can only be delivered in English.
The classroom teacher was responsible for determining which children were excluded based on the above criteria. Teachers are often responsible for identifying children who may not be suitable to enrol in certain support services that schools offer. Teachers were asked to keep a record of any children they excluded from their class, and which exclusion criterion the child met.
Screening for reading difficulties
All students in Year 1 at participating schools were screened for reading difficulties using the WARL to identify children in the bottom 25th percentile of readers in Year 1 (see table 1).24 The WARL is a test of oral reading fluency designed to identify younger low-progress readers. The assessment comprises three word lists. Students are allowed 1 min to read each list and receive a score based on the number of words read correctly, averaged across the three lists. The initial assessments were conducted by the classroom teacher in one-to-one testing conditions with each child. Teachers were provided with step-by-step guidelines on how to determine whether words on each list were read correctly or in error, but were not asked to calculate the overall raw score or the raw score cut-point required for trial eligibility; this was undertaken by the research team to identify the target population.
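To make the screening rule concrete, the sketch below (in Python) shows how an averaged WARL score and a bottom 25th percentile eligibility flag could be computed for a Year 1 cohort. The function names, score values and percentile routine are illustrative assumptions, not part of the published WARL materials or the trial’s actual screening procedure.

```python
import numpy as np

def warl_score(words_correct_per_list):
    """Average the number of words read correctly across the three
    1-minute WARL word lists (illustrative scoring only)."""
    assert len(words_correct_per_list) == 3
    return float(np.mean(words_correct_per_list))

def flag_eligible(cohort_scores, percentile=25):
    """Flag students whose averaged score falls in the bottom 25th
    percentile of the Year 1 cohort (the trial's target population)."""
    cutoff = np.percentile(cohort_scores, percentile)
    return [score <= cutoff for score in cohort_scores]

# Illustrative use with made-up scores for five students
scores = [warl_score(lists) for lists in [(12, 15, 14), (30, 28, 33),
                                          (8, 10, 9), (22, 25, 24), (40, 38, 42)]]
print(flag_eligible(scores))  # [True, False, True, False, False]
```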
Randomisation
Eligible students at each school were individually randomised to the ‘MiniLit’ (intervention) or ‘business as usual’ (control) group, stratified by school. No student-level variables were included in the randomisation protocol. Intervention contamination was reduced by not identifying the control students to the classroom or MiniLit teachers. In addition, control students were not able to access the MiniLit programme. The randomisation was conducted by an independent statistician. After randomisation, the research team notified parents by mail of their child’s allocation status and of the remaining steps of the project.
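As an illustration of individual randomisation stratified by school, the sketch below allocates eligible students to arms within each school. The actual allocation sequence was generated by an independent statistician; the seed, within-school balancing rule and student identifiers here are hypothetical.

```python
import random
from collections import defaultdict

def randomise_by_school(students, seed=2017):
    """Randomise eligible students to 'MiniLit' or 'business as usual',
    stratified by school so allocation is balanced within each school."""
    rng = random.Random(seed)
    by_school = defaultdict(list)
    for student_id, school in students:
        by_school[school].append(student_id)

    allocation = {}
    for school, ids in by_school.items():
        rng.shuffle(ids)
        half = len(ids) // 2
        # First half of the shuffled list to intervention, remainder to control
        for sid in ids[:half]:
            allocation[sid] = "MiniLit"
        for sid in ids[half:]:
            allocation[sid] = "business as usual"
    return allocation

# Illustrative use with hypothetical student IDs
students = [("S01", "School A"), ("S02", "School A"), ("S03", "School A"),
            ("S04", "School A"), ("S05", "School B"), ("S06", "School B")]
print(randomise_by_school(students))
```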
MiniLit intervention
Children randomised to the intervention group received the MiniLit programme.25 Training occurred in Term 1 2017, and the staff member nominated to be trained to deliver the intervention was determined by school leadership. This staff member was often a school support teacher whose role included delivering specific support interventions for at-risk students. The nominated teacher(s) at each school attended 2 days of training to enable them to deliver the intervention. Most schools sent one additional person to the training to cover for staff absences, such as illness. The training was delivered by the programme developers and covered the rationale for the intervention, the reading domains the intervention targets, how to deliver the content during MiniLit lessons and how to tailor the intervention to a child’s specific needs.
During terms 2 and 3 in 2017, children receiving the programme were withdrawn from class for 1 hour/day. Working with the trained MiniLit teacher, groups of up to four children completed each MiniLit lesson in a quiet area in the school. If there were more than four children at a school randomised to the MiniLit group, multiple groups of four were formed. Children were grouped based on their initial reading ability. This practice is also part of the standard MiniLit intervention protocol.
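A minimal sketch of how children at a school might be allocated to MiniLit groups of up to four based on initial reading ability, consistent with the grouping practice described above; the student identifiers, baseline scores and simple rank-then-chunk rule are illustrative assumptions rather than the programme’s prescribed procedure.

```python
def form_minilit_groups(students, max_group_size=4):
    """Group intervention children at a school into MiniLit groups of up
    to four, with children of similar initial reading ability together.
    students: list of (student_id, baseline_reading_score) tuples."""
    ranked = sorted(students, key=lambda s: s[1])
    return [ranked[i:i + max_group_size]
            for i in range(0, len(ranked), max_group_size)]

# Illustrative use with hypothetical baseline scores at one school
print(form_minilit_groups([("S01", 12), ("S02", 9), ("S03", 15),
                           ("S04", 11), ("S05", 8), ("S06", 14)]))
```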
Blinding
Allocation will be concealed for the duration of the project from the investigators and from the members of the research team involved in outcome assessments. Only the project manager and those involved in the process evaluation data collection will be unblinded.
School staff, teachers and students will be asked not to disclose student randomisation status during the assessments. However, those who do will be recorded in the project database and this ‘unblinding’ will be examined as a potential confounding variable in the outcome analyses.
Outcome measures
At 6 and 12 months after randomisation, all children in the RCT will complete a 30 min assessment of their reading and literacy abilities. All assessments will be conducted by a trained research assistant who is blinded to the child’s intervention status.
Reading comprehension will be measured using the York Assessment of Reading for Comprehension-Passage Reading (YARC-PR)26 at 6 months (secondary outcome) and 12 months (primary outcome) after randomisation.
All measures used in this study, and the time points at which they are collected, are shown in table 1.
Data collection
Data collection will involve face-to-face, direct child assessments. All research assistants conducting the assessments will be blinded to each child’s intervention status.
For face-to-face assessments, a trained research assistant will conduct the assessment with the student during school hours in a room allocated by the school. All child safety requirements for assessments, as specified by the NSW Department of Education, will be adhered to.
As described under Blinding, school staff and teachers will be asked not to disclose any student’s randomisation status during these assessments; any disclosures will be recorded in the project database and examined as a potential confounding variable in the outcome analyses.
Sample size calculations
The final sample size was based on the capacity of the MiniLit developers considering training and resource requirements for this efficacy project.
We assumed that 20 schools would be involved in this project, with a mean of 65 Year 1 students per school (ie, three Year 1 classes), giving approximately 1300 students. We estimated that 5% of students (n=65) would not be eligible based on our eligibility criteria, and that 25% of the remaining students (n=308) would be identified as ‘low-progress readers’ and thus eligible for the trial. Allowing for an attrition rate of 10% over the first year of the project, a final sample of 278 students (n=139 per group) would have analysable data.
Based on an intention-to-treat analysis, the final sample size will be able to detect an effect size of 0.34 in scores on the primary outcome, with 80% power at a 5% level of significance. The actual participant flow until the 6-month follow-up is shown in figure 1, with the 12-month numbers as estimates. Based on the estimated numbers, we will have 80% power at 5% level of significance to detect an effect size of 0.38 between the groups.
This sample size calculation does not account for clustering of students within schools and SES categories (which will decrease power and increase the detectable effect size) or for the correlation between pretest and post-test scores (which will increase power and decrease the detectable effect size).
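As a rough check on the detectable effect sizes quoted above, the sketch below uses statsmodels’ two-sample t-test power routine. The planned figure of 139 analysable students per group comes from the text; the figure of roughly 110 per group is an assumption inferred from the quoted detectable effect size of 0.38, and, as noted above, the calculation ignores clustering and baseline adjustment.

```python
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()

# Planned sample: 139 analysable students per group, 80% power, two-sided alpha 0.05
d_planned = power.solve_power(effect_size=None, nobs1=139, alpha=0.05,
                              power=0.80, ratio=1.0, alternative='two-sided')
print(round(d_planned, 2))  # expected to be approximately 0.34

# Assumed ~110 analysable students per group, inferred from the quoted 0.38
d_actual = power.solve_power(effect_size=None, nobs1=110, alpha=0.05,
                             power=0.80, ratio=1.0, alternative='two-sided')
print(round(d_actual, 2))  # expected to be approximately 0.38
```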
Implementation and process evaluation
It is also important to consider how outcomes of interventions are influenced by the fidelity of their implementation, especially within the complex environment of school settings.27 28 The implementation and process evaluation will take a mixed methods approach,29 using both existing programme data collection methods and newly developed tools. These tools were developed following the process evaluation workshop with the intervention developers in March 2017.
The implementation and process evaluation will explore both the theory of change related to the MiniLit programme and dimensions of implementation including fidelity, dosage, quality, differentiation and monitoring of control/comparison conditions that may influence the theory of change. The implementation and process evaluation will seek to understand barriers and enablers of the MiniLit implementation process that may impact the effectiveness of the programme as determined by the results of the RCT. Data will be collected using study-designed student, MiniLit teacher and classroom teacher surveys, as well as observation of three MiniLit lessons at each school during the study (at commencement, after one term of intervention and at the end of the intervention). Structural equation modelling will be used to examine how the implementation of MiniLit is associated with the primary and secondary child outcomes. Confirmatory factor analysis will be used for all process evaluation measures to ensure domains are robust before inclusion in the path analysis, and model fit will be examined to ensure the model is adequate for the analyses.
As part of the theory of change, it was proposed that children who (1) attend at least 80% of their MiniLit lessons during terms 2 and 3, and (2) have been exposed to all components of the MiniLit lesson in at least two of the three observations, will have received the intervention as intended. As part of the process evaluation, we will examine whether there are factors or specific thresholds (ie, varying degrees of dosage as measured by attendance rate, and levels of fidelity as measured by teacher surveys and lesson observations) that are related to, or interact with, improved outcomes. This will enable us to determine whether the proposed per-protocol criteria set by the programme developers are associated with improved outcomes in the intervention group.
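A minimal sketch of how these per-protocol criteria could be operationalised when processing attendance and lesson observation records; the function and field names are hypothetical, while the thresholds are those proposed above.

```python
def received_as_intended(lessons_attended, lessons_offered, observations):
    """Classify a child as having received MiniLit as intended:
    (1) attended at least 80% of lessons offered in terms 2 and 3, and
    (2) all lesson components were delivered in at least two of the three
        lesson observations (observations is a list of booleans, one per
        observation, True if all components were observed)."""
    attendance_ok = lessons_offered > 0 and lessons_attended / lessons_offered >= 0.80
    fidelity_ok = sum(observations) >= 2
    return attendance_ok and fidelity_ok

# Illustrative use with hypothetical records
print(received_as_intended(85, 100, [True, True, False]))  # True
print(received_as_intended(70, 100, [True, True, True]))   # False (attendance below 80%)
```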
Economic analysis
The economic evaluation of the intervention will be a two-stage analysis. We will use cost-consequence analysis as a first step to compare any incremental costs of the MiniLit intervention (costs accrued in the intervention group, from resource use over the period of follow-up, compared with costs accrued in the control group) with all primary and secondary outcomes, expressed in their natural units of measurement. We will then conduct a cost-effectiveness analysis to compare incremental costs with differences in scores on the YARC (the primary outcome measure for reading ability).
Measured resource use will be valued using existing unit cost estimates (eg, education department salary scales). Uncertainty in cost and outcome data, and the sensitivity of the economic evaluation results to the chosen methods of evaluation, will be tested by extensive sensitivity analyses.
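To illustrate the second-stage calculation, the sketch below computes an incremental cost-effectiveness ratio (extra cost per additional point on the YARC primary outcome) with a simple non-parametric bootstrap to characterise uncertainty. All costs, scores and sample sizes are made-up illustrative values, not trial data, and the bootstrap shown is only one of several possible sensitivity analyses.

```python
import numpy as np

rng = np.random.default_rng(0)

def icer(cost_int, cost_ctrl, effect_int, effect_ctrl):
    """Incremental cost-effectiveness ratio: extra cost per additional
    point on the YARC outcome (intervention vs control)."""
    return ((np.mean(cost_int) - np.mean(cost_ctrl))
            / (np.mean(effect_int) - np.mean(effect_ctrl)))

# Hypothetical per-student costs (AUD) and YARC scores
cost_int, cost_ctrl = rng.normal(900, 100, 120), rng.normal(200, 50, 120)
yarc_int, yarc_ctrl = rng.normal(95, 10, 120), rng.normal(92, 10, 120)

print(round(icer(cost_int, cost_ctrl, yarc_int, yarc_ctrl), 1))

# Simple non-parametric bootstrap to characterise uncertainty in the ICER
boot = [icer(rng.choice(cost_int, 120), rng.choice(cost_ctrl, 120),
             rng.choice(yarc_int, 120), rng.choice(yarc_ctrl, 120))
        for _ in range(1000)]
print(np.percentile(boot, [2.5, 97.5]))
```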
Overall analysis plan
The baseline characteristics of the participants and schools will be summarised by group. Categorical variables will be presented as frequency and proportion values in each category. Continuous variables will be presented by means and SDs for unskewed data, medians and IQRs for skewed data, and ranges. Data analysis for the project will be performed by an independent statistician in the Clinical Epidemiology and Biostatistics Unit at the Murdoch Children’s Research Institute.
The primary analysis will be by intention to treat and will include all randomised participants for whom outcome data are available. The primary analysis will use a multivariable linear regression to compare the YARC-PR score (continuous) at 12 months after randomisation between the intervention and control groups. Both unadjusted and adjusted analyses will be conducted. For the adjusted analyses, two models will be fitted. The first will account for baseline assessment scores, while the second will also include student age, gender and family SES as a priori confounders. Family SES will be determined using the NSW Department of Education’s Student Educational Advantage score, which is derived from parent education level and occupation. Clustering of students within schools and MiniLit groups will be accounted for in the models using regression techniques that respect these structures. Findings between groups will be presented as mean differences with 95% CIs, p values and Hedges’ g effect sizes.
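A sketch of how the adjusted primary analysis might be fitted, assuming school-level clustering is handled with a random-intercept mixed model and that outcome data are held in a file with the hypothetical column names shown; the exact regression technique, handling of MiniLit-group clustering and variable coding will be specified by the trial statistician.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis dataset: one row per randomised child with columns
# yarc_12m, group, yarc_baseline, age, gender, family_ses, school
df = pd.read_csv("bbr_outcomes.csv")

# Adjusted model 1: baseline YARC score only, random intercept for school
m1 = smf.mixedlm("yarc_12m ~ group + yarc_baseline",
                 data=df, groups=df["school"]).fit()

# Adjusted model 2: adds the a priori confounders (age, gender, family SES)
m2 = smf.mixedlm("yarc_12m ~ group + yarc_baseline + age + gender + family_ses",
                 data=df, groups=df["school"]).fit()
print(m2.summary())  # mean difference for 'group' with 95% CI and p value

def hedges_g(x, y):
    """Hedges' g: standardised mean difference with small-sample correction."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / pooled_sd
    return d * (1 - 3 / (4 * (nx + ny) - 9))

print(hedges_g(df.loc[df.group == "MiniLit", "yarc_12m"],
               df.loc[df.group == "control", "yarc_12m"]))
```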
For secondary outcomes, continuous variables will be analysed using linear regression and categorical variables using logistic regression. Unadjusted and adjusted findings will be presented according to the models described for the primary outcome analyses. Given the pragmatic sample size, subgroup analyses will only be conducted if the subgroup has over 40 students per group (25% of the final sample).
If appropriate, all analyses will be repeated using Complier-Average Causal Effect (CACE) analysis to examine the association between non-compliance and the observed outcomes. Compliance is defined as whether children received the treatment, or no treatment, corresponding to the group to which they were randomised.
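One common way to estimate a CACE is the Wald (instrumental variable) estimator, which scales the intention-to-treat effect by the difference in treatment uptake between arms. The sketch below illustrates this approach with made-up data; it is an assumed method for illustration only and not necessarily the exact CACE estimator the trial statistician will use.

```python
import numpy as np

def cace_wald(outcome, assigned, received):
    """Wald estimator of the Complier-Average Causal Effect: the
    intention-to-treat effect on the outcome divided by the effect of
    assignment on treatment actually received."""
    outcome, assigned, received = map(np.asarray, (outcome, assigned, received))
    itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
    uptake = received[assigned == 1].mean() - received[assigned == 0].mean()
    return itt / uptake

# Illustrative use: assigned = randomised to MiniLit, received = completed MiniLit
outcome  = [95, 97, 90, 88, 85, 86, 84, 83]
assigned = [1,  1,  1,  1,  0,  0,  0,  0]
received = [1,  1,  0,  1,  0,  0,  0,  0]
print(round(cace_wald(outcome, assigned, received), 2))
```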
Ethics and dissemination
The ethical issues and the protocol to manage them are described in table 2.
The findings from this study will be presented in a final report to Evidence for Learning in September 2018. All findings will be presented at a group level, and individual children, schools and teachers will not be identified. Findings about specific schools and teachers will also not be presented in the report.
In addition to the report, we will disseminate our findings in peer-reviewed publications and presentations at national and international conferences. All participating schools and families will also be provided with a one-page newsletter outlining the study’s main findings.
References
Footnotes
Contributors JQ and SG were involved in the study’s conceptualisation, design and conduct, and in the writing of the manuscript. JQ has overall responsibility for the trial. JC was involved in the study’s conceptualisation, design and conduct, and has reviewed and approved the submitted manuscript. GD and TS were involved in the study’s design, and have reviewed and approved the submitted manuscript. LS was involved in the study’s design and conduct, and has reviewed and approved the submitted manuscript.
Funding The trial is funded by the Evidence for Learning Fund of Social Ventures Australia. In-kind support was provided by MultiLit in the form of the training and intervention materials required for implementation.
Competing interests None declared.
Patient consent Not required.
Ethics approval Royal Children’s Hospital, Melbourne.
Provenance and peer review Not commissioned; externally peer reviewed.