Neglected Numerators, Drifting Denominators, and Fractured Fractions: Determining Participation Rates for Students with Disabilities in Statewide Assessment Programs


NCEO Synthesis Report 23

Published by the National Center on Educational Outcomes
in collaboration with the Council of Chief State School Officers (CCSSO) and the National Association of State Directors of Special Education

Prepared by:

Ron Erickson, Martha Thurlow, & Jim Ysseldyke

October 1996


Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Erickson, R.N., Thurlow, M.L., & Ysseldyke, J.E. (1996). Neglected numerators, drifting denominators, and fractured fractions: Determining participation rates for students with disabilities in statewide assessment programs (Synthesis Report No. 23). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved [today's date], from the World Wide Web: http://cehd.umn.edu/NCEO/OnlinePubs/Synthesis23.html


Executive Summary

As current educational reform efforts seek to ensure accountability for all of our nation's students, increasing attention is being paid to the extent to which current assessment practices and programs include students with disabilities. Developing accurate procedures for reporting the participation of students with disabilities in large-scale assessment programs has proven difficult, due to a lack of data, differing definitions of eligible testing populations, and the misalignment of data collection efforts and data management responsibilities. Specific recommendations are provided for both policymakers and local practitioners and administrators to improve our ability to report accurately on the participation of students with disabilities in statewide assessment programs. These include suggestions to:

• Identify students with disabilities in statewide assessment programs

• Standardize procedures for calculating participation rates

• Improve communication between state special education and assessment offices

• Ask good questions about state and district assessment practices

• Help parents understand the importance of participating in assessment programs

• Evaluate current district and state policies on participation and accommodations in assessment programs

 


Assessment: A Cornerstone of Reform

For many school districts and state departments of education, the 1990s have heralded dramatic and fundamental change. Local and state education agencies have adopted diverse strategies in redefining the expectations of their educational systems, and have put forth myriad conceptual frameworks of goals, outcomes, and content standards. The attention of many policymakers has already begun to turn from discussions about what students should know and be able to do, to questions about how best to measure the extent to which students attain these competencies. Consequently, statewide assessment systems, using both traditional and new methodologies (e.g., performance assessments and portfolios), have emerged as critically important components of this reform movement.

Nearly every piece of federal and state legislation focused on educational reform contains inclusionary language; that is, standards, goals, assessments, and accountability systems are touted as being for all students. All students are to work toward attaining high standards, and all are to have access to learning opportunities that will enable them to attain those standards. Schools, school districts, and states are to report on the progress of all their students, and thus all students are to participate in assessment programs. Consequently, local education agencies (LEAs) and state education agencies (SEAs) have been making efforts in recent years to have all students participate in assessments and to report on the numbers or proportions who do so. To the extent that there is variability among states in participation rates, differential exclusion from assessment, or inaccurate reporting, state and district comparisons are invalid, and policy decisions based on such results remain questionable.

In this paper, we examine the present variability in the way states report the participation of students with disabilities in their testing programs. Many of the conclusions drawn in this report have emerged from research conducted by the National Center on Educational Outcomes (NCEO) at the University of Minnesota. The primary mission of NCEO has been to work with federal and state agencies to facilitate and enrich the development and use of indicators of educational outcomes for students with disabilities. To accomplish this mission, NCEO has examined current educational assessment practices to determine the participation rates of students with disabilities in national and state assessments, and the current accessibility and use of data on the results of education for students with disabilities.

 


Will All Ever Mean All in Assessment?

An early analysis of nine major national data collection programs by NCEO researchers (McGrew, Thurlow, Shriner, & Spiegel, 1992) revealed that between 40% and 50% of school-aged students with disabilities were being excluded from these nationwide efforts. In large-scale assessments such as the 1990 NAEP Trial State Assessment Program, exclusion rates among the participating states ranged from 33% to 87% of students with disabilities. Such variability prohibits valid comparisons between states and prevents policy-relevant findings from being drawn about how students with disabilities are benefiting from their educational experiences.

Students with disabilities have been excluded from achievement testing programs for several reasons. One primary motivation for schools and districts to exclude them is the existence of high-stakes statewide accountability systems that compare the performance of schools and districts, and often make awards or impose sanctions based on these results. Schools within such systems are motivated to minimize the number of low-performing test takers in order to raise their overall test scores. Compounding this problem is the variability in state and district policies about who gets tested. Zlatos (1994) examined 14 major urban school districts and found that participation rates in testing varied from 93% of all enrolled students in Memphis to 66% in Boston. This academic "redshirting" of students with learning problems perpetuates invalid comparisons among our nation's schools and school districts.

Other reasons for exclusion from testing stem from the fact that many of the standardized tests currently in use were not originally designed or normed with any consideration of students with disabilities. National norms often were established only on samples of students without disabilities. Furthermore, there have been few studies of the effects of various testing accommodations on the validity or reliability of testing instruments. Only recently have federal funds been directed toward research on these issues. The U.S. Office of Educational Research and Improvement has funded eight states (and one consortium of states) to investigate issues related to standards and assessments, particularly the effects that accommodations for students with disabilities and limited English proficiency have on the technical integrity of assessment instruments.

 


Students with Disabilities in Assessment: An Emerging Issue

New federal policies have placed pressure on states to ensure the participation of students with disabilities in their overall assessment systems. Language within the Goals 2000: Educate America Act (Public Law 103-227) clearly includes students with special learning needs in its mandate for states to set high standards for all students. Amendments drafted for the reauthorization of the Individuals with Disabilities Education Act (IDEA) by both the House and Senate specifically mandate the inclusion of students with disabilities in statewide assessment systems, preferably through regular assessment programs or, for those who cannot participate in the regular testing program, through alternative means. Other pending language within Part B of the Act stipulates that states will report on the academic performance of students with disabilities in the same way and with the same regularity as they do for students without disabilities.

This federal commitment to a fully unified system of educational accountability has created unique challenges for state and local policymakers and assessment experts alike. In part because of this, much attention is now being given to examining current testing policies and their impact on students with disabilities. In a recent review of state-level policies on testing participation and accommodations, NCEO staff found that nearly all states have revised such policies within the past two years (Thurlow, Scott, & Ysseldyke, 1995a, 1995b).

 

What We Know (and Don't Know) about Participation Rates

Since 1991, NCEO has conducted an annual survey of state directors of special education regarding educational outcomes for students with disabilities. Specific survey questions have solicited information on the participation rates of students with disabilities in statewide achievement testing. Findings over the past four years have indicated that the participation of students with disabilities in statewide assessments continues to vary considerably from state to state, with estimates, when given, ranging from zero to 100%. Furthermore, respondents are unable to provide estimates of participation for the great majority of the assessments presently administered. In its 1994 survey of state practices, NCEO found that state special education directors could report participation rates for only 49 of the 133 assessments used during that year, less than 37% of the total number of tests in use (Erickson, Thurlow, & Thor, 1995). Since respondents were allowed to give approximations, it is not known how many of the reported rates were actually calculated from verifiable data sources.

What is known is how controversial such data can be. Even though NCEO has consistently verified responses before publishing its annual survey, each year has brought a new round of questions about the accuracy of the participation rates being reported. Usually, those questioning the report's accuracy believe the published rates are too conservative; on occasion, however, some have insisted the estimates are inflated. Most questions are raised by state assessment officials whose offices may or may not have been consulted by the state special education units submitting these estimates.

 


Why Are Participation Rates So Elusive?

On the surface, calculating participation rates for students with disabilities in statewide assessment programs would seem to be a relatively straightforward task of dividing one number by another: the number of students with disabilities who take the test, divided by the population of all students with disabilities at the particular age or grade level being tested. Yet states report considerable difficulty in locating accurate data with which to build this deceptively simple ratio. Reported problems generally fall into three categories: (1) lack of data on the students taking the test; (2) differences in determining eligible testing populations; and (3) misaligned data collection procedures. These issues are addressed in the following sections.
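Expressed in symbols, the ratio described above is simply:

\[
\text{participation rate} = \frac{\text{students with disabilities who took the test}}{\text{all students with disabilities at the tested age or grade level}}
\]

The three problems below correspond, in turn, to not knowing the numerator, disagreeing about the denominator, and measuring the two quantities at different times or in different offices.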

Neglected Numerators: The Problem of Simply Not Knowing

Participation rates obviously cannot be calculated without first knowing who participated in the actual assessment. Regrettably, this basic information is not routinely collected in the testing programs of many states. In its 1995 survey of state directors of special education in the 50 regular and 10 unique states, personnel in 12 responding states did not know whether students with disabilities could be identified within their state's assessment database, and those in another 20 states reported that they definitely could not (Erickson, Thurlow, Seyfarth, & Thor, 1996). To further complicate this matter, testing administrators or classroom monitors may not always know which tested students are receiving special education services, and therefore cannot provide accurate documentation at the time of testing.

Drifting Denominators: The Problem of Eligible Populations

Variability also occurs in the way educators define the eligible population against which to compare the number of test takers. Presently, the denominator of any participation ratio may be calculated in different ways. With so many different policies in place to determine whether a student with disabilities will participate in any particular assessment, no standard method exists to report on which such students are being included. In Figure 1, the number of students found in the inner circle would be the numerator in calculating a participation rate. All students with disabilities at the particular age or grade level being tested (i.e., the biggest circle) would constitute the denominator. However, current assessment policies often exclude certain subgroups of students with disabilities and leave them literally out of the equation. Consider the following scenario:

Figure 1.  Model for Determining Participation Rates


 

An education official is asked for information on the percentage of students with disabilities who participated in a statewide fourth grade reading assessment. Checking the aggregated results, the official discovers that 5,000 students participating in the examination were coded as having an Individualized Education Program (IEP). This number becomes the numerator. Now the official's attention turns toward determining the denominator, that is, the population of students with disabilities against which this numerator will be compared. He knows that the state assessment program does not test those students with disabilities attending separate schools, private schools, residential programs, correctional facilities, or those receiving homebound services. These placements account for approximately 10% of all fourth grade students receiving special education services. With this in mind, he finds that on a statewide basis, 7,500 students with disabilities were at the age equivalent of fourth grade. Reducing this figure by 10%, or 750 students, the assessment official calculates a participation rate of 74% for students with disabilities in the fourth grade reading assessment (i.e., 5,000 ÷ 6,750). But in fact, only 67% of students with disabilities at the fourth grade level were assessed (i.e., 5,000 ÷ 7,500).
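Restated in the notation of the ratio above, the two calculations in this scenario are:

\[
\frac{5000}{7500 - 750} = \frac{5000}{6750} \approx 74\% \qquad \text{versus} \qquad \frac{5000}{7500} \approx 67\%
\]

The numerator is identical in both; the seven-point difference comes entirely from removing the untested placements from the denominator.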

As this example shows, the road to participation in assessment seems to have more than its share of wayside rests for students with disabilities. Besides the primary educational setting (e.g., special school, residential facility, ungraded program, or special classroom), several other factors are often considered in determining assessment eligibility, including:

• A student's disability category;

• The extent of a student's access to the general curriculum. In the past, for example, NAEP participation criteria allowed for the exclusion of a student with an IEP if that student was mainstreamed less than 50 percent of the time in academic subjects and was judged to be incapable of taking part meaningfully in the assessment (Mullis, 1990). At the state level, the following true example is typical of current policy:

A state official reports a participation rate of 77% for all eligible students with disabilities in a statewide achievement testing program. Testing eligibility is defined as having an educational program that is academically focused. The official also reports that, for this reason, only 60% of all students with disabilities in the state are considered eligible for participation in testing. What, then, is the true participation rate: 77% or 46% (i.e., 77% of 60%)? (The arithmetic is worked out after this list.)

• Completely individualized reasons, stemming from the judgments of individual administrators or teams about whether to test particular students with disabilities. These decisions may or may not be documented on the students' IEPs.
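For the true example quoted in the second item above, the arithmetic behind the two candidate answers is:

\[
0.77 \times 0.60 = 0.462 \approx 46\%
\]

The reported 77% describes only the eligible subgroup; 46% describes all students with disabilities in the state. Which figure is the "true" rate depends entirely on whether the 40% of students deemed ineligible are counted in the denominator.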

These various points of exclusion make the interpretation of participation rates across schools, districts, or states very problematic, since the reported rates may exclude many special education students from the population being referenced.

 

Fractured Fractions: Problems of Timing and Responsibility

Another issue surrounding the accurate determination of participation rates is one of differing data collection cycles, that is, the times at which the data reflecting the numerator and the denominator are collected. States are required to submit special education child count data to the federal government on December 1st of each school year, while the testing cycles of many states occur in the fall or spring. Under such conditions, even the most accurate data collection can produce unusual results, including participation rates that exceed 100%. The problem is easier to explain than it is to fix: a December 1 population count does not reflect those students who become eligible for special education between December and a later testing cycle, nor does it reflect students who discontinue services. Therefore, participation rates calculated from less-than-current population information can substantially overestimate or underestimate the true rate at which students with disabilities are participating.
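A hypothetical illustration (the figures here are invented for this example, not drawn from any state's data): suppose a December 1 child count records 5,000 students with disabilities at the tested grade level, 200 more students become eligible for special education before a spring administration, and 5,100 students with IEPs ultimately sit for the test. The calculated rate is then

\[
\frac{5100}{5000} = 1.02 = 102\%
\]

exceeding 100% even though every number was collected accurately.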

Participation ratios are fractured not only by time but also by departmentalized responsibility for data collection. In many state departments of education, responsibility for large-scale assessment data falls to designated officials within assessment or evaluation divisions, while information on students with disabilities remains in the domain of state special education offices. Much of the confusion about the determination of participation rates is a direct reflection of poor communication between these units. As one state assessment official recently stated, "We provide the numerator, and the special education division provides the denominator."

 


Recommendations for Policymakers

Should the amendments currently being considered for the reauthorization of IDEA become law, states will be asked to report the participation rates of students with disabilities as part of their annual state reporting procedures. If this becomes the case, states will undoubtedly seek direction from federal policymakers about which students may or may not be considered part of the population against which test takers with disabilities should be compared. Three general recommendations can be made based on NCEO's experience in pursuing accurate participation data from statewide assessment programs:

Identify students with disabilities in statewide assessment programs. Determining an accurate numerator is problematic in many states, with officials unable to determine how many test takers were actually students with disabilities. NCEO suggests that efforts be undertaken immediately within all state assessment programs to identify those test takers who were being provided special education services at the time of the test. This variable could readily be added to the other demographic descriptors routinely collected by state assessment programs.

Successful implementation of this recommendation would necessarily require cooperation between local testing administrators and special education personnel. Because general education teachers may not always be aware that a student is receiving special education services (particularly indirect monitoring or consultation services), it would be essential for special education personnel to verify all assessment rosters.

Standardize procedures for calculating participation rates. Until clear direction is given to state assessment programs and special education offices, the problems surrounding variance in determining a suitable denominator will continue to plague all attempts to track increased participation of students with disabilities in statewide assessment programs. Policymakers at the state and federal levels should provide clear directives on how such numbers are to be derived. Because of their inclusiveness, the December 1st child counts of students with disabilities (reported to the U.S. Department of Education) are currently our best available source for a reasonable denominator in the participation ratio. Because this count includes a state's entire population of children receiving special education services on a set date, states would have a standardized and widely recognized metric to use when calculating rates of assessment participation.

As this paper has noted, however, the December 1st child count is not without its limitations. New means should be explored to assist states in bringing their special education population counts and testing cycles into alignment. And because the child count reports data by age level, age-to-grade conversions should be established and publicized for consistent interpretations of which students should be included in the denominator of participation ratios.

Improve lines of communication between state special education and assessment offices. If accurate reporting of participation by students with disabilities in statewide testing is going to happen, a collaborative strategy will need to be developed by these two educational units. The many issues surrounding the status of students with disabilities in testing programs need to be cooperatively identified and resolved. Improved levels of interaction between assessment and special education offices would undoubtedly lead to improved confidence in our methods of measuring and reporting the participation of students with disabilities in statewide assessment programs.

 


Recommendations for Practitioners

What can teachers and local special education administrators do to improve our ability to report on the participation of students with disabilities in statewide assessment programs? NCEO suggests a number of ways in which positive change can be encouraged at the school or district level:

Ask some good questions. Are students with disabilities within your school or district included in assessment programs? Are students with disabilities being held to the same academic standards and expectations as general education students? Is your school accountable for all its students? Posing these questions uncovers the fundamental issues surrounding our own expectations for students with disabilities and their access to the curriculum provided to their non-disabled peers.

Help parents make the connection. Many parents of students with disabilities are likely to feel that their children have already been tested enough. But the majority of assessments given to students with disabilities involve determining their eligibility for services, not measuring their educational progress. Parents can play a pivotal role in advocating for the inclusion of students with special needs in assessment, but only if they are provided information on how participation in testing can lead to higher expectations, broader curricular offerings, and ultimately, better results for students.

Critique your assessment policies. Most states and many individual districts have policies in place that oversee the participation of students with disabilities. Ask to see them. Do they promote the inclusion of students receiving special education? Are appropriate accommodations allowed for use by students who need them? Are the testing results for students with disabilities included when reporting on the performance of schools or districts? Are the results used by special education administrators for planning improvement efforts? If you think such policies need improvement, bring your concerns to testing officials and local school administrators.

 


In Conclusion

Is this much ado about nothing? We think not. Failure to include students with disabilities in assessment and accountability systems leads to failure to assume responsibility for the results of their education. Partial participation of a state's or district's students in assessment can result in policy decisions being made on partial or skewed data. Inaccurate derivation or reporting of participation rates provides policymakers with invalid comparisons among states, districts, cities, schools, or even between students with disabilities and their peers without disabilities. Our national, state, and district educational policies should be based on complete sets of data on all of America's schoolchildren.

 


References

Erickson, R.N., Thurlow, M.L., & Thor, K.A. (1995). State special education outcomes 1994. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Erickson, R.N., Thurlow, M.L., Seyfarth, A.L., & Thor, K.A. (1996). State special education outcomes 1995. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

McGrew, K.S., Thurlow, M.L., Shriner, J.G., & Spiegel, A.N. (1992). Inclusion of students with disabilities in national and state data collection programs (Technical Report 2). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Mullis, I. (1990). The NAEP Guide: A description of the content and methods of the 1990-1992 assessment. Washington, DC: National Center for Education Statistics.

Thurlow, M.L., Scott, D.L., & Ysseldyke, J.E. (1995a). Compilation of states' guidelines for accommodations in assessments for students with disabilities (Synthesis Report 18). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Thurlow, M.L., Scott, D.L., & Ysseldyke, J.E. (1995b). Compilation of states' guidelines for including students with disabilities in assessments (Synthesis Report 17). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Zlatos, B. (1994). Don't ask, don't tell. The American School Board Journal, 11, 24-28.