How data were analysed:

Comparing question difficulty and student ability

Item-person maps

Analysis also included the construction of item-person maps for each year level and domain (numeracy, reading, and writing).

An item-person map is a visual way to compare the difficulty of test questions (items) and student ability on a common scale. In this analysis, the common scale was the PILNA scaled score.

Students’ PILNA scaled scores were plotted against the relevant proficiency scale. Experts in numeracy and literacy evaluated each question in the assessments to determine the key knowledge and skills it was intended to assess. Based on these rankings, the questions were also plotted against the proficiency scale according to their difficulty.
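
To illustrate the idea of a common scale, the sketch below shows how student ability estimates and question difficulties could be expressed in the same units, assuming both were available as logit estimates from a Rasch-style analysis. The logit values, the transformation constants (500 and 100), and the function names are hypothetical and are not taken from the PILNA analysis.

    # Hypothetical sketch only: placing student abilities and question
    # difficulties on one common reporting scale.

    def to_scaled_score(logit, centre=500.0, spread=100.0):
        """Map a logit estimate onto an illustrative reporting scale."""
        return centre + spread * logit

    # Illustrative logit estimates (not PILNA data)
    student_abilities = [-1.2, -0.4, 0.0, 0.3, 0.9, 1.5]
    item_difficulties = {"item 7": -0.8, "item 32": 1.8, "item 55": -1.5}

    student_scores = [to_scaled_score(a) for a in student_abilities]
    item_scores = {item: to_scaled_score(d) for item, d in item_difficulties.items()}

    # Both sets of values are now in the same units, so students and
    # questions can be placed on a shared vertical axis.
    print(sorted(student_scores))
    print(sorted(item_scores.items(), key=lambda pair: pair[1]))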

Item-person maps show how well the assessments target the sampled students in each PILNA cycle. They show whether large numbers of students are achieving high or low results. If inconsistencies are found when these maps are analysed, it may be that questions in the assessments were too easy, too difficult, or inadequately distributed. Inadequate distribution might result from too few questions being placed high on the proficiency scale, or too few being placed low on it.

Generally, assessment developers intend most questions in an assessment to sit around the expected proficiency level of the students taking it, with fewer questions well above or below that level. This design better differentiates between the abilities most students are expected to have, while still collecting evidence about students performing well above or below the expected level.
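
As a purely illustrative aside, one simple way of expressing this targeting numerically is to compare the spread of question difficulties with the spread of student scores on the common scale. The sketch below uses made-up values; it is not the procedure used in the PILNA analysis.

    # Illustrative only: comparing where the questions sit relative to the
    # students on a common scaled-score metric.
    from statistics import quantiles

    # Made-up values (not PILNA data)
    student_scores = [410, 450, 470, 480, 500, 505, 515, 530, 560, 600]
    item_difficulties = [430, 460, 480, 495, 505, 520, 545, 580]

    s_q1, s_med, s_q3 = quantiles(student_scores, n=4)
    i_q1, i_med, i_q3 = quantiles(item_difficulties, n=4)

    print(f"Students:  median {s_med:.0f}, middle half {s_q1:.0f} to {s_q3:.0f}")
    print(f"Questions: median {i_med:.0f}, middle half {i_q1:.0f} to {i_q3:.0f}")

    # A large gap between the two medians suggests the questions are, on
    # average, too easy or too hard for these students; a question range
    # much narrower than the student range suggests inadequate distribution.
    print(f"Questions sit {i_med - s_med:+.0f} points relative to the median student")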

Item-person maps for PILNA 2021

The item-person maps for PILNA 2021 are shown in the figures below. Note that the item-person maps for writing are not yet available for reporting because the writing proficiency scale has not yet been developed. The item-person maps show:

  • the number and distribution of students who scored a certain value on the proficiency scale, indicated by ‘x’s on the left-hand side of the axis; and
  • the difficulty ranking of each question in the assessments, indicated by the numbers on the right-hand side of the axis. These numbers are simply identifiers for the questions; a higher number does not mean a more difficult question. Difficulty is shown by how high on the axis a number appears: for example, how high or low the number ‘7’ sits on the right-hand side shows the difficulty of question seven in the assessment. (A simplified sketch of such a map follows this list.)
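
As a concrete illustration of these conventions, the sketch below builds a miniature text-based item-person map from made-up numbers. The scores, the score bands, and the choice of one ‘X’ per two students are illustrative only and do not reproduce the PILNA figures.

    # Illustrative only: a miniature text item-person map. 'X's on the
    # left represent groups of students; question identifiers on the right
    # are placed at the height of each question's difficulty.
    from collections import Counter

    # Made-up scaled scores and question difficulties (not PILNA data)
    student_scores = [420, 430, 455, 470, 470, 480, 495,
                      500, 510, 510, 520, 540, 575, 580]
    item_difficulties = {7: 450, 12: 500, 21: 520, 32: 590, 55: 410}

    bin_width = 50     # width of each score band on the vertical axis
    group_size = 2     # each 'X' represents this many students

    def band(score):
        return (score // bin_width) * bin_width

    students_per_band = Counter(band(s) for s in student_scores)
    items_per_band = {}
    for item, difficulty in item_difficulties.items():
        items_per_band.setdefault(band(difficulty), []).append(item)

    # Print from the highest band (most able students, hardest questions)
    # down to the lowest, mirroring how the maps are read.
    for b in sorted(set(students_per_band) | set(items_per_band), reverse=True):
        xs = "X" * (students_per_band.get(b, 0) // group_size)
        items = " ".join(str(i) for i in sorted(items_per_band.get(b, [])))
        print(f"{b:>5} | {xs:<8} | {items}")

Read the same way as the PILNA figures, this miniature map would show question 32 as the hardest, question 55 as the easiest, and most of the illustrative students clustered in the middle bands.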

For example, Figure X shows the distribution of numeracy questions given to year four students on the right side of the map, alongside the ability distribution of year four students on the left.

The year four numeracy items are positioned according to their difficulty, from the bottom (easiest questions) to the top (most difficult questions). Item 32 was the most difficult question and item 55 was the easiest.

Year four students with higher abilities in numeracy are positioned towards the top of the map, and those with lower abilities towards the bottom. On this map, one ‘X’ represents approximately 113 students.

Few students are positioned below the least difficult question, item 55. Most students are distributed around the middle of the map, with a few at the top and bottom.

The same relationships are shown in the other item-person maps.