Changes for 2021
PILNA, as a high-quality learning assessment programme, evolves and improves from cycle to cycle. For the 2021 cycle, enhancements to PILNA included:
- full implementation of the rotated booklet design;
- separate reporting of reading and writing within literacy;
- supplemented contextual data collection tools; and
- strengthened and expanded coding of cognitive and contextual results.
Fully implemented rotated booklet design
PILNA cycles up to and including PILNA 2018 used only one version of each test booklet: one for literacy and one for numeracy. For PILNA 2021, students could receive one of multiple versions of each test. This was made possible by the new rotated booklet design, which was tested in the PILNA 2020 field study and then fully implemented in PILNA 2021.
Rotated booklet designs are those that use multiple versions of test booklets. They are created by establishing groups of questions, called blocks, and putting different sets of these blocks into the test booklets. This results in tests that measure performance in the same topics but use different questions to do so. Moreover, the blocks are inserted in the tests in different orders so as to control for the ‘order effect’: the tendency for students’ responses to differ depending on whether a question appears near the start or the end of the test.
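The block-rotation idea can be sketched in a few lines of code. The block names, question labels, and the simple cyclic rotation below are illustrative assumptions, not PILNA's actual booklet specification:

```python
# Hypothetical blocks of questions (names and contents are illustrative only).
BLOCKS = {
    "B1": ["Q1", "Q2", "Q3"],
    "B2": ["Q4", "Q5", "Q6"],
    "B3": ["Q7", "Q8", "Q9"],
}

def rotated_booklets(block_ids):
    """Build one booklet per rotation: every booklet contains all blocks,
    but in a different order, so each block appears in every position
    across the set of booklets (controlling for the order effect)."""
    n = len(block_ids)
    return [
        [block_ids[(start + offset) % n] for offset in range(n)]
        for start in range(n)
    ]

# Assemble each booklet's question sequence from its block order.
for i, order in enumerate(rotated_booklets(list(BLOCKS)), start=1):
    questions = [q for block in order for q in BLOCKS[block]]
    print(f"Booklet {i}: blocks {order} -> {questions}")
```

A simple cyclic rotation is used here for clarity; operational assessments typically use more elaborate balanced designs in which blocks are also paired with different partner blocks across booklets.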
This design allowed more questions to be asked across the sample of students, so more information about student performance could be collected. Questions could cover more content and cognitive skills, span a wider range of difficulty, and elicit more types of responses. The design also enabled twice as many questions to be trialled in the PILNA 2020 field study as in previous PILNA cycles.
Separately reporting reading and writing
The literacy scale used to analyse and report findings in the previous PILNA cycles covered both reading and writing.
PILNA 2021 split this scale into its two components and created the PILNA reading proficiency scale, which enabled more focused findings and discussion in the area of reading and better represented its distinctiveness as a construct.
For writing, PILNA 2021 discusses average scores only; comparisons against benchmarks (proficiency scales) are not yet available because the proficiency levels and their associated writing scores are yet to be determined.
Supplemented contextual questionnaires
PILNA 2021 expanded its collection of contextual data from seven areas in PILNA 2018 to nine areas in 2021. The new areas were student well-being and teacher satisfaction.
In PILNA 2021, all participating students, their teachers and school leaders responded to questionnaires that collected this contextual information. The questionnaires were tailored to each audience: students, teachers, and school leaders (head teachers/principals).
Strengthened coding of results
Coding of student results was introduced to the PILNA programme during the 2015 PILNA cycle and was refined for the 2018 PILNA cycle, when all cognitive items were coded to capture a range of student responses.
In PILNA 2021, coding was further strengthened and expanded for both the cognitive and contextual data. More codes were developed for each question to capture a greater range of student responses, and coding staff were trained in different approaches to interpreting students’ results at country-level coding sessions.
ACER was a valuable partner in this area, providing support throughout the process.