Certification Exam Pass Rates
The table below reflects the CPWA® Certification Exam pass rates for specific periods.
Time Period | First-Time Testers | Re-Testers
Most Recent Quarter Pass Rate (07/01/2023 – 09/30/2023) | 79% | 55%
Past 2-Year Pass Rate (10/01/2021 – 09/30/2023) | 78% | 46%
Developing and Scoring the Examination
A reliable and defensible exam begins with a job analysis: a study of the knowledge, skills, activities, and tasks performed by a typical candidate seeking CPWA® certification. The process requires a representative sample of volunteer certification holders to write knowledge, skills, and abilities (KSA) statements. These statements are then put before the industry at large in the form of a survey, in which practitioners rate each KSA statement on criteria such as level of importance and frequency of performance. The results directly inform which content categories appear on the examination and the percentage of questions allotted to each category. A new job analysis is conducted approximately every five years to identify major changes in the work activities covered by the certification.
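The sketch below illustrates one way survey ratings could be turned into exam blueprint weights: each domain's share of questions is made proportional to a combined importance-and-frequency score. The domains, ratings, and weighting rule are hypothetical examples, not the actual scheme used for the CPWA® examination.

```python
# Illustrative sketch (hypothetical data): converting KSA survey ratings into
# exam blueprint weights. Each domain's weight is proportional to a combined
# importance x frequency score; the actual weighting method is not specified here.

# Mean survey ratings per knowledge domain on a 1-5 scale (hypothetical values).
ksa_ratings = {
    "Tax Planning":         {"importance": 4.6, "frequency": 4.2},
    "Portfolio Management": {"importance": 4.8, "frequency": 4.5},
    "Estate Planning":      {"importance": 4.1, "frequency": 3.3},
    "Risk Management":      {"importance": 3.9, "frequency": 3.0},
}

# Combine the rating criteria into a single criticality score per domain.
criticality = {
    domain: r["importance"] * r["frequency"] for domain, r in ksa_ratings.items()
}
total = sum(criticality.values())

# Translate criticality into the percentage of exam questions per domain.
blueprint = {domain: round(100 * score / total) for domain, score in criticality.items()}

for domain, pct in blueprint.items():
    print(f"{domain}: {pct}% of exam questions")
```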
Standard setting is the process by which test programs establish a cut-score, the minimum score required to pass a test. Criterion-referencing compares candidates to an objective standard of performance or knowledge, regardless of test form, time, or location, by explicitly linking the passing standard to the purpose of the exam. Criterion-referenced standard setting is not strictly data-driven; rather, it rests on the sound professional judgment of subject matter experts (SMEs).
Before beginning the standard setting activity, SME participants often take the test so that they can read the items (test questions) in a context similar to that of test candidates. Next, the SMEs think about a hypothetical person who performs just well enough on the job to be considered successful. They then describe the performance level required to just pass the test (i.e., just good enough to be certified or to move on to the next level). This is the minimum standard required to be certified, licensed, or considered for selection or promotion. Candidates who meet that criterion are traditionally referred to as just sufficiently qualified (JSQ) candidates, or minimally qualified candidates.
Once the performance level is defined, SMEs review the test content and make multiple independent rounds of judgments about what test score corresponds to the JSQ level. Between rounds, SMEs share their initial judgments with one another, and facilitators provide impact data, such as the percentage of all candidates who answered a selected-response item correctly. The discussion and impact data are important to ensure that SMEs have a shared understanding of the JSQ level, which improves their level of agreement. After the discussions are complete, the SMEs independently make a final judgment without further discussion. The analyst later calculates a cut-score from these judgments and provides the recommendation to the policy-making body. The analyst also uses equating techniques across test forms to ensure that candidates are treated equitably regardless of which items appear.
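As an illustration of the two calculations mentioned above, the sketch below shows a modified Angoff-style cut-score (a common criterion-referenced method, assumed here for illustration: each SME estimates the probability that a JSQ candidate answers each item correctly, and the cut-score is the sum of the per-item means) followed by a simple mean-sigma linear equating step. All ratings, form statistics, and the choice of rating and equating models are hypothetical, not the exam's actual methodology.

```python
# Illustrative sketch: modified Angoff-style cut-score and linear equating.
# All numbers are hypothetical; the exam's actual methods are not specified here.
from statistics import mean

# Final-round judgments: for each item, each SME estimates the probability that
# a just sufficiently qualified (JSQ) candidate answers it correctly.
sme_ratings = [
    [0.70, 0.65, 0.75],   # item 1, three SMEs
    [0.55, 0.60, 0.50],   # item 2
    [0.80, 0.85, 0.80],   # item 3
    [0.45, 0.50, 0.55],   # item 4
]

# The recommended raw cut-score is the sum of the per-item mean ratings,
# i.e., the expected number of items a JSQ candidate would answer correctly.
cut_score = sum(mean(item) for item in sme_ratings)
print(f"Recommended raw cut-score: {cut_score:.1f} of {len(sme_ratings)} items")

# Mean-sigma linear equating: place a raw score from a new test form onto the
# base form's scale so the passing standard is comparable across forms.
def equate_linear(score, new_mean, new_sd, base_mean, base_sd):
    """Map a raw score on the new form to the base form's score scale."""
    return base_mean + (score - new_mean) * (base_sd / new_sd)

# Hypothetical form statistics for the new and base forms.
equated = equate_linear(cut_score, new_mean=2.4, new_sd=0.9, base_mean=2.6, base_sd=1.0)
print(f"Cut-score expressed on the base form's scale: {equated:.1f}")
```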
For more information, download the Candidate Handbook.