What is the purpose of validating assessment data?

This article describes what assessment validation is. It covers:

  • An introduction to assessment validation
  • The definition of assessment validation
  • A prime source of information about assessment validation
  • The difference between validation and moderation.

Introduction to assessment validation

The Australian VET system is highly regulated. RTOs must comply with the Standards for Registered Training Organisations (RTOs). One compliance requirement is that RTOs must continuously improve their training and assessment services. Two ways that an RTO may identify improvements are:

  • Training evaluation
  • Assessment validation.

Training evaluation

We say that we evaluate training. Evaluation is the quality review of the training process. Most people have completed an evaluation form at the end of a training program. This is a common method of gathering data that can be analysed to identify areas for improvement focused on the delivery of training.

Assessment validation

We say that we validate assessment. Validation is the quality review of the assessment process. The Standards for RTOs state that:

  • RTOs must conduct assessment validation
  • RTOs must maintain assessment validation records.

If you are new to assessment validation, you may like to think of this activity as the evaluation of assessment, except that we call it ‘validation’. It is a method of gathering data that can be analysed to identify areas for improvement focused on the conduct of assessments.

Definition of assessment validation

A definition of terminology used by the Australian VET sector can be found in the glossary of the Standards for RTOs.

Assessment validation is defined as:

“Validation is the quality review of the assessment process.

It involves checking that the assessment tools produce valid, reliable, sufficient, current and authentic evidence to enable reasonable judgements to be made as to whether the requirements of the training package or VET accredited courses are met.

It includes reviewing a statistically valid sample of assessments and making recommendations for future improvements to the assessment tool, process and outcomes and acting upon such recommendations.”

Dissecting the definition, three elements stand out: validation is a quality review of the assessment process; it checks that assessment tools produce evidence that is valid, reliable, sufficient, current and authentic; and it requires reviewing a statistically valid sample of assessments, making recommendations for improvement and acting on them.

Information about assessment validation

A prime source of information about assessment validation is published by the Australian Skills Quality Authority (ASQA). A fact sheet about conducting validation is available. It can be downloaded as a PDF file or viewed onscreen from the ASQA website.

Validation sample size calculator

ASQA has provided a ‘validation sample size calculator’. It can be used to calculate the number of assessments that would represent a statistically valid sample size. The example given by ASQA shows that a sample of 31 assessments is required for validation when a total of 100 students have been assessed by an RTO.
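
ASQA does not spell out the formula behind its calculator here, but the standard finite-population sample size calculation gives a feel for where a figure like 31 comes from. The Python sketch below is a minimal illustration; the confidence level and margin of error are my own assumptions (they happen to reproduce the 31-of-100 example, but they are not ASQA's documented settings).

  import math

  def sample_size(population, z=1.96, margin=0.15, p=0.5):
      # Standard sample size formula with a finite-population correction.
      # z: z-score for the confidence level (1.96 ~ 95%) - assumed, not ASQA's published value
      # margin: acceptable margin of error - assumed
      # p: expected proportion; 0.5 is the most conservative choice
      n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
      n = n0 / (1 + (n0 - 1) / population)
      return math.ceil(n)

  print(sample_size(100))  # -> 31 with these assumed parameters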

Random sample selection

Assessments should be selected at random. For example, if 31 out of 100 assessments are required, you might work down an alphabetical list of students and select roughly every third name.
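
If you would rather draw a truly random sample than work down an alphabetical list, a few lines of Python will do it. The student list below is made up purely for illustration; in practice you would export the real list from your student management system.

  import random

  # Hypothetical list of 100 assessed students (illustration only).
  students = [f"Student {i:03d}" for i in range(1, 101)]

  # Draw 31 assessments at random, without replacement.
  sample = sorted(random.sample(students, 31))
  print(sample)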

If you are invited to an assessment validation meeting, please be prepared for a long meeting (and take your lunch).

Difference between validation and moderation

Some people get confused about validation and moderation.

  • Assessment validation is the quality review of the assessment process and is generally conducted after assessment is complete.
  • Assessment moderation is a quality control process aimed at bringing assessment judgements into alignment.

Assessment moderation

Assessment moderation occurs when a group of assessors meet to discuss sample assessments. The purpose of moderation is to help different assessors come to a common agreement so that their future assessments are consistent and based on evidence. The principle of assessment being addressed is ‘reliability’.

It would be useful to have a range of sample assessments at a moderation meeting:

  • Assessments that are clearly competent
  • Assessments that are clearly not yet competent
  • Assessments for which it is difficult to make a clear decision.

In conclusion

We evaluate training, we validate assessments.

RTOs must conduct assessment validation:

  • RTOs must review each training program at least once over a five-year period
  • RTOs must review a statistically valid sample of assessments
  • RTOs must keep records of assessment validation.

And assessment validation is not the same thing as moderation.

Do you need help with your TAE studies?

Are you doing the TAE40116 or TAE40122 Certificate IV in Training and Assessment and struggling with your studies? Would you like some help?

Ring Alan Maguire on 0493 065 396 to discuss.

Contact now!

Training trainers since 1986

One of Fytster’s key differentiating factors is our use of psychometric assessments. These questionnaires are designed by industrial and organizational psychologists to assess how well an individual candidate might do a particular job. It is important that the assessments we use have a high degree of “predictive validity” to ensure that there is a positive correlation between the assessments we use and future workplace performance. The process by which we demonstrate the scientific credibility of our assessments is referred to as “assessment validation.”

Assessment validation refers to the process in which assessment content is evaluated in various ways to determine its level of efficacy in predicting how well a candidate will perform in a specific job. The aim of Fytster’s Job, Culture, and Personality FYT assessments is to provide hiring managers with information that can be used in conjunction with other pieces of information to predict a candidate’s future performance in a particular role and to make more effective hiring decisions. This objective is accomplished by systematically collecting information about a candidate’s experiences, education, attitudes, and preferences that relate to important behaviors associated with success in a particular role.

There are multiple ways to evaluate assessment validity, one of which involves statistically evaluating the relationship between assessment scores and job performance outcomes; a brief sketch of that correlation follows the list below. Referred to as criterion-related validation, this type of statistical analysis of assessment and performance data may be used to:

1. Configure the initial content and scoring of an assessment
2. Evaluate the reliability and validity of an assessment
3. Confirm the assessment’s job-relatedness
4. Optimize scoring, set cut-scores, or norm the assessment
5. Shorten the assessment
6. Evaluate the fairness of an assessment
7. Improve the legal defensibility of the assessment
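
As a rough illustration of the statistical core of criterion-related validation, the sketch below correlates assessment scores with later job performance ratings for the same people. The numbers and the use of numpy's corrcoef are generic illustrations only, not Fytster's actual method, scoring model or data.

  import numpy as np

  # Hypothetical paired data for the same ten employees (made-up numbers).
  assessment_scores = np.array([62, 71, 55, 80, 90, 68, 74, 59, 85, 77])
  performance_ratings = np.array([3.1, 3.6, 2.8, 4.2, 4.5, 3.3, 3.9, 2.9, 4.4, 3.8])

  # Pearson correlation: the criterion-related validity coefficient.
  r = np.corrcoef(assessment_scores, performance_ratings)[0, 1]
  print(f"Validity coefficient r = {r:.2f}")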

Criterion-related validation may be conducted with either of the two following approaches, or via a blend of the two:

1. Concurrent validation (preferred approach for calibrating new assessment content with a client population): With the concurrent approach, a large sample of current employees (not candidates) is asked to complete the online selection assessment prior to deployment of the test in the organization. Job performance data from these employees are then statistically correlated with the employees’ assessment scores to ensure that test scores predict how well an individual will perform in the job. These results are evaluated against historical job performance data on these employees.

2. Closed-Loop Optimization (e.g., Predictive validation; preferred approach to optimize validated content with client over time. Ideal for rapid deployments): With the predictive approach, candidates for a position complete the assessment during the application process (hiring managers will not see assessment scores until the validation analyses are complete), and at a point later in time (usually 6 months to a year depending on hiring volumes), assessment scores of candidates who were hired are correlated with job performance data. Although predictive validation takes longer than the concurrent approach, it is the only method that can be used to optimize scoring to improve employee retention and reduce turnover.