1983
The purpose of this document is to address issues related to the release of test scores to a variety of audiences: parents, school board members, school staff, the news media, and the general public. Guidelines or recommendations for reporting test data are provided. The recommendations are based both on experiences in reporting test results and on an informal review of a sample of test reports from school districts across the nation (see Appendix A). Annual reports on testing programs should include (1) descriptive information about the testing program, test content, and test scores; (2) test results for districts, as well as for individual schools; and (3) cautions concerning how the data should and should not be interpreted. Reports to parents will include the same information, but focused on an individual student. Reports to staff will focus on a class or a school. Suggestions for using test data for comparing schools, determining weak and strong areas, and determining whether a school did as well as it should have are presented. Commonly used test terms, testing textbooks that include discussions of testing terms, and reports of test results cited in "Research and Evaluation Studies from Large School Districts 1982" are included in the appendices. (PN)
1982
This is the fifth in a series of six monographs developed to help local educators use and report Michigan Educational Assessment Program (MEAP) test results. An organized plan facilitates the important task of reporting test results to the school board quickly and accurately. This monograph gives one approach that enables the staff to take the offensive and initiate the reporting process before MEAP results are even returned to the district. This will reduce anxiety and provide a base for developing the district's comprehensive reporting plan. Three different types of reports to the school board are recommended: (1) a background report on the purposes of MEAP and how the results can be used; (2) a report on actual test results, uses of results in the district, and implications of results (including other measures of achievement); and (3) follow-up reports that present specific ways test results are being used to correct problem areas. While these are not the only usable report techniques, they are effective in focusing the school board's attention on the instructional uses of MEAP, avoiding misleading comparisons with other schools or districts, and assuring the use of MEAP results to
1984
This document has been reproduced as received from the person or organization originating it. Minor changes have been made to improve reproduction quality. Points of view or opinions stated in this document do not necessarily represent official NIE position or policy.
1997
This report describes how assessments are scored and results are reported in the inclusive assessment and accountability systems in Maryland and Kentucky. Specific topics include: (1) how school performance is reported to the public;
1995
Parent opinions about standardized tests and performance assessments were examined systematically. Mutually exclusive but randomly equivalent stratified samples from schools participating in a study of performance assessment and control schools were used to measure change in parent opinion over time. Approximately one-third of parents (n=105) completed questionnaires at the beginning of the school year, one-third completed them at the end of the year (similar sample), and the remaining third supplied interview samples (n=33 and n=27, respectively). Results demonstrated that parents' favorable ratings of standardized national tests did not imply a preference for this type of educational assessment over other types of assessment for measuring student or school progress. Parents considered report cards, hearing from the teacher, and seeing graded samples of student work as more informative than standardized tests, and they wanted comparative information to measure their own child's progress. When parents had a chance to look at performance assessments through the year, they endorsed their use for district purposes and preferred them for classroom use. Survey data like the Gallup Poll showing widespread approval of standardized tests should not be taken to mean that parents are opposed to other forms of assessment. Appendixes contain the parent questionnaire and the interview protocol. (Contains 3 figures, 17 tables, and 9 references.) (SLD)
1993
As part of a larger study of implementation of alternative assessment, a survey of parent opinions about standardized tests and performance assessments was conducted in three elementary schools. In the three participating schools, 3rd-grade teachers attended workshops on assessment development and implemented these practices in their classrooms. Samples of 69 parents from participating schools and 36 from three control schools were interviewed after completing questionnaires. Findings suggest that parents' favorable ratings of standardized national tests, supported by a Gallup Poll on the issue, do not imply a preference for such measures over other less formal sources of information for monitoring their children's progress or for judging the quality of education at their local schools. Parents tended to rely on the teacher to tell them how their child was doing relative to others, and they seldom mentioned comparison to external and national norms. Even for accountability purposes, parents preferred talking to the teacher and seeing student work. Most parents endorsed the performance assessment problems they saw, although a few expressed concern over the subjectivity of such measures. Twelve tables present survey and interview findings. An appendix presents excerpts from some parent interviews. (Contains 12 references.) (SLD)
1989
AUTHOR Rudner, Lawrence M., Ed.; Conoley, Jane Close; Plake, Barbara S. TITLE Understanding Achievement Tests: A Guide for School Administrators. INSTITUTION American Institutes for Research, Washington, DC.; Buros Inst. of Mental Measurement, Lincoln, NE.; ERIC Clearinghouse on Tests, Measurement, and Evaluation, Washington, DC. SPONS AGENCY Office of Educational Research and Improvement (ED), Washington, DC. REPORT NO ISBN-0-89785-215-X PUB DATE Oct 89 CONTRACT R188062003 NOTE 169p.; For three ERIC Digests extracted from this document, see TM 014 145-147. PUB TYPE Guides Non-Classroom Use (055) Information Analyses ERIC Information Analysis Products (071)
2018
After nearly two decades of federal and state accountability requirements relying on conventional standardized assessments, Virginia and several other states are moving to create more balanced approaches to statewide assessment systems that include the use of performance assessments. But Palm (2008) states, “Performance assessment can mean almost anything” (p. 3). This review of extant literature explores varying ways performance assessments are defined, characteristics of quality performance assessments, and educational outcomes associated with their use in K-12 schools. A rudimentary definition of performance assessment is established at the outset of this article to provide a foundation for undertaking the review, which includes sources from empirical, theoretical, and anecdotal literature. Drawing from the exploration of quality characteristics and evident educational outcomes, a refined definition of performance assessments is offered by way of conclusion to the article with th...
1989
It is increasingly recognized, following the lead of J. J. Cannell, that actual gains in educational achievement may be much more modest than the dramatic gains reported by many state assessments and many test publishers. An overview is presented of explanations of spurious test score gains. Focus is on determining how test-curriculum alignment and teaching the test influence the meaning of scores. Findings of a survey of state testing directors are summarized, and the question of teaching the test is examined. Some frequently presented explanations refer to the norms used; others refer to aspects of teaching the test. Directors of testing from 46 states (four states conduct no state testing) replied to a survey about testing. Forty states clearly had high-stakes testing. The most pervasive source of high-stakes pressure identified by respondents was media coverage. Responses indicate that test-curriculum alignment and teaching the test are distorting instruction. A possible solution is to develop new tests every year, changing the tests rather than the norms. Two tables present explanations for test score inflation and selected survey responses. (SLD)
Current Issues in Education, 2011
Martinez-Garcia, C., LaPrairie, K. N., & Slate, J. R. (2011). Accountability Ratings of Elementary Schools: Student Demographics Matter. Current Issues in Education, 14(1). Retrieved from http://cie.asu.edu/ ... The researchers examined the most recent year of data (i.e., ...
Multivariate Behavioral Research, 2010
These materials are an unpublished, proprietary work of ETS. Any limited distribution shall not constitute publication. This work may not be reproduced or distributed to third parties without ETS's prior written consent. Submit all requests through www.ets.org/legal/index.html. Educational Testing Service, ETS, the ETS logo, and Listening. Learning. Leading. are registered trademarks of Educational Testing Service (ETS).
1983
The Center for the Study of Evaluation (CSE) of the Graduate School of Education at the University of California at Los Angeles hosted a two-day conference on "Paths to Excellence: Testing and Technology" on July 14-15, 1983. Attended by over 100 educational researchers, practitioners, and policymakers, the first day of the conference focused on issues in educational testing; day two explored the status and future of technology in schools. This document presents the collected papers from the first day of the conference. Presentations focused on CSE's study of teachers' and principals' use of achievement testing in the nation's schools. The study provided basic data about the nature and frequency of classroom testing, the purposes for which test results are used, principals' and teachers' attitudes toward testing, and local contexts supporting the use of tests (e.g., amount of staff development, testing resources, leadership support). The findings were presented at the conference, and presenters were asked to provide their interpretations of the data and their perspectives on their implications for national, state, and/or local testing policies. One speaker, William Coffman, was asked to provide context for the conference by considering the study in the light of the history of research on educational testing. (PN)
2016
This important work was made possible by funding from the High Quality Assessment Project (HQAP), which supports state-based advocacy, communications, and policy work to help ensure successful transitions to new assessments that measure K-12 college- and career-readiness standards. HQAP's work is funded by a coalition of national foundations, including the Bill & Melinda Gates Foundation, the Lumina Foundation, the Charles and Lynn Schusterman Family Foundation, the William and Flora Hewlett Foundation, and the Helmsley Trust. We sincerely appreciate the cooperation and efforts of the testing programs that participated in the study: ACT Aspire, the Massachusetts Comprehensive Assessment System, the Partnership for Assessment of Readiness for College and Careers, and the Smarter Balanced Assessment Consortium. In particular, we thank Elizabeth (Beth) Sullivan, Carrie Conaway,
2019
In Beyond Testing: 7 Assessments of Students & Schools More Effective Than Standardized Tests (2017), Deborah Meier and Matthew Knoester explore several alternative ways to assess students’ knowledge. The authors make a case that current practices used to assess learning in schools are reduced to a single test score, and argue they should be replaced with more effective methods that gauge what students actually know. Standardized tests are but one way to measure academic success.
1997
This document has been reproduced as received from the person or organization originating it. Minor changes have been made to improve reproduction quality.
new material can be presented. These tests help the teacher gain a perspective of the range of attained learning as well as of individual competence. Tests can be used to help make promotion and retention decisions. Many factors enter into the important decision of moving a student into the next grade. Intuition is an important part of any decision, but that intuition is enhanced when coupled with data. Standardized tests and records of classroom performance on less formal tests are essential for supplying much of the data upon which these decisions are based. Test results are also important devices for sharing information with boards of education, parents, and the general public through the media. Rudner, L. and W. Schafer (2002) What Teachers Need to Know About Assessment. Washington, DC: National Education Association. From the free on-line version. To order print copies call 800 229-4200. some criterion. This section includes a discussion of norm-referenced and criterion-referenced tests. This section also covers standardized and large-scale assessments, typically the types of tests sponsored by state education agencies, reported in the popular press, and, unfortunately, often inappropriately used as the sole measure to judge the worth of a school. We start with a discussion of the different types of scores used to report standardized test results. You will learn the advantages and disadvantages of each, along with how the different types of scores should be used. A key feature of state assessments is that they are almost always accompanied by a careful delineation of endorsed educational goals. There should be no ambiguity with regard to what is covered by such tests. The next chapter discusses aligning one's instruction to the test and making the test a valuable instructional planning tool. There is often a debate with regard to teaching to a test.
Some argue that since the test identifies goals, teaching to the test is equivalent to teaching goals and should be done. Others argue that teaching to a test is an attempt to short-circuit the educational process. The next chapter identifies a continuum of acceptable and unacceptable practices for preparing students to take standardized achievement tests. Lastly, with testing so prominent in the popular press, we provide an overview of some of the politics of national testing. Section 2: Essential Concepts for Classroom Assessment. The most frequent and most valuable types of tests are those developed and used by classroom teachers. This section is designed to help you write better multiple-choice and better performance tests. You will learn to examine what it is that you want to assess and how to write questions that assess those concepts. Special attention is paid to the development of analytic and holistic scoring rubrics. Consistent with the view of testing as a form of data gathering and communication, chapters have been included on asking classroom questions as part of routine instruction and on writing comments on report cards. Also discussed are the reasonable expectations that those involved in the testing enterprise (test producers, test users, and test takers) should have of each other. The document is applicable to classroom tests as well as standardized tests.
PsycEXTRA Dataset, 2000
The goal of this study was to understand the role that external testing plays in elementary schools. Focus was on uncovering teachers' beliefs about testing and preparing students to take tests, how these beliefs and values are organized, and what implications they might have for practice. To accomplish this, the day-to-day life in classrooms and how tests and results come into play were studied. The dual case study design provided an interpretive contrast for two schools from the Phoenix (Arizona) area. The schools used the same external tests (the Iowa Tests of Basic Skills, Basic Skills Test, Continuous Uniform Evaluation System, and Study Skills Test). Although the schools had many similarities, including that of population, one had a program-centered, phonics-based curricular context, and the other had a student-centered, literature-based approach. Observations of 29 classrooms, interviews with 19 teachers, and more extensive observations of 6 focal classrooms made the analysis of beliefs about testing possible and allowed the description of activities related to testing at the two schools, including test preparation and coaching. Study findings are grouped into: (1) local definitions of testing; (2) the role of testing; and (3) the effects of testing. It is held that to define the role of testing as simply psychometric is to oversimplify it, but it is the psychometric weaknesses of tests that make them useful weapons in skirmishes among interest groups. It is argued that no test score ever improves schools. The changes brought about because of test scores are short-term and largely symbolic. Seven exhibits, one figure, and one table are provided. A 70-item list of references is included. Two appendices summarize a survey of Arizona educators and discuss disappointing test scores. (SLD)
ETS Research Report Series, 2011
November 4th, 2010, to explore some issues that influence score reports and new advances that contribute to the effectiveness of these reports. Jessica Hullman, Rebecca Rhodes, Fernando Rodriguez, and Priti Shah present the results of recent research on graph comprehension and data interpretation, especially the role of presentation format, the impact of prior quantitative literacy and domain knowledge, the trade-off between reducing cognitive load and increasing active processing of data, and the affective influence of graphical displays. Rebecca Zwick and Jeffrey Sklar present the results of the Instructional Tools in Educational Measurement and Statistics for School Personnel (ITEMS) project, funded by the National Science Foundation and conducted at the University of California, Santa Barbara to develop and evaluate 3 web-based instructional modules intended to help educators interpret test scores. Zwick and Sklar discuss the modules and the procedures used to evaluate their effectiveness. Diego Zapata-Rivera presents a new framework for designing and evaluating score reports, based on work on designing and evaluating score reports for particular audiences in the context of the CBAL (Cognitively Based Assessment of, for, and as Learning) project (Bennett & Gitomer, 2009), which has been applied in the development and evaluation of reports for various audiences including teachers, administrators and students.
1990
Center for Research on Evaluation, Standards, and Student Testing (CRESST), UCLA Graduate School of Education. Standardized testing has assumed a prominent role in recent efforts to improve the quality of education. National, state, and district tests, combined with minimum competency, special program, and special diploma evaluations, have resulted in a greatly expanded set of testing requirements for most schools. At a cost of millions, even billions, of dollars and at the expense of valuable student, teacher, and administrator time, testing advocates and many policymakers still view testing as a significant, positive, and cost-effective tool in educational improvement.
1997
This document, fourth in a series, describes trends in statewide assessment programs. It is based on surveys conducted in the past for the Association of State Assessment Programs. States were asked to describe the assessment programs they operated during the 1995-96 school year. Part One of the survey asks each state to describe its existing assessments, its collaborative partners, and what it is developing. Part Two of the survey asks each state to describe its efforts in nontraditional assessment. Part Three of the survey asks each state to describe each assessment program, component, or group of assessments used to gather a set of data for the same assessment purposes. For each component, states explain who is tested, what subjects are tested, and what types of assessments are used. In addition, states describe accommodations provided to English language learners and students with disabilities. States have designed very different assessment systems, from use of a norm-referenced test alone to use of performance assessments. Most states, however, use a combination of multiple-choice, short-answer, and extended-response questions, performance tasks, or portfolios. This is the last year the North Central Regional Educational Laboratory will participate in the collection of survey information on assessments; the program will continue under the direction of the Council of Chief State School Officers.