Randy Elliot Bennett

Randy Elliot Bennett is an American educational researcher who specializes in educational assessment. He is currently the Norman O. Frederiksen Chair in Assessment Innovation at Educational Testing Service in Princeton, NJ. His research and writing focus on bringing together advances in cognitive science, technology, and measurement to improve teaching and learning. His honors include the ETS Senior Scientist Award (1996), the ETS Career Achievement Award (2005), the Teachers College, Columbia University Distinguished Alumni Award (2016), Fellow status in the American Educational Research Association (AERA) (2017), the National Council on Measurement in Education's (NCME) Bradley Hanson Award for Contributions to Educational Measurement (2019, with H. Guo, M. Zhang, and P. Deane), the E. F. Lindquist Award from AERA and ACT (2020), and elected membership in the National Academy of Education (2022).[1][2][3][4][5] Bennett was elected President of both the International Association for Educational Assessment (IAEA), a worldwide organization composed primarily of governmental and non-governmental measurement organizations, and the National Council on Measurement in Education (NCME), whose members work in universities, testing organizations, state and federal education departments, and school districts.

Randy Elliot Bennett
Born: Brooklyn, New York
Occupation: Educational researcher
Nationality: American
Notable works: Formative Assessment: A Critical Review
Cognitively Based Assessment of, for, and as Learning: A preliminary theory of action for summative and formative assessment
Educational Assessment: What to Watch in a Rapidly Changing World
The Changing Nature of Educational Assessment
Toward a Theory of Socioculturally Responsive Assessment
Notable awards: National Academy of Education elected member
AERA E. F. Lindquist Award
NCME Bradley Hanson Award
AERA Fellow
Teachers College, Columbia University Distinguished Alumni Award

Publications

Bennett is author or editor of nine books, as well as over 100 journal articles, chapters, and technical reports. Those publications have concentrated on several themes. The 1998 publication, Reinventing Assessment: Speculations on the Future of Large-Scale Educational Testing,[6] presented a three-stage framework for how paper-and-pencil tests would gradually transition to digital form, eventually melding with online activities, blurring the distinction between learning and assessment, and leading to improvements in both pursuits. A series of subsequent publications built upon the work of Robert Glaser, Norman O. Frederiksen, Samuel Messick, James Pellegrino, Lorrie Shepard, and others to create a unified model for formative and summative assessments under the Cognitively Based Assessment of, for, and as Learning (CBAL) initiative.[7][8] This work, noted in the citations for both the E.F. Lindquist Award and his AERA Fellow designation,[2][4] is described in two journal articles, Transforming K-12 Assessment[9] and Cognitively Based Assessment of, for, and as Learning.[10] The latter publication articulated assumptions for the CBAL assessment model in a detailed "theory of action," which described the assessment system components, intended outcomes, and the action mechanisms that should lead to those outcomes, predating the generally recommended use of that device in operational testing programs.[11][12]

The journal article, Formative Assessment: A Critical Review,[13] questioned the magnitude of efficacy claims, the meaningfulness of existing definitions, and the general absence of disciplinary considerations in the conceptualization and implementation of formative assessment.[14] The article encouraged a deeper examination of premises, more careful consideration of effectiveness claims, and a move toward incorporating domain considerations directly into the structure and practice of formative assessment.[15][16][17]

Two reports, Online Assessment in Mathematics and Writing[18] and Problem Solving in Technology-Rich Environments,[19] documented studies that helped set the stage for moving the US National Assessment of Educational Progress (NAEP) from paper presentation to computer delivery.[20][21]

Several recent articles called attention to the need for testing companies and state education departments to exercise caution in using artificial intelligence (AI) methods for scoring consequential tests. That theme was developed in a book chapter, Validity and Automated Scoring,[22] and summarized in The Changing Nature of Educational Assessment.[23] These publications note that in automated essay scoring, for example, caution is needed because of the inscrutability of some AI scoring methods, their use of correlates that can be easily manipulated for undeserved score gain, and the routine practice of building scoring algorithms to model the judgment of operational human graders, thereby unintentionally incorporating human biases.

Bennett's latest work centers on equity in assessment. The commentary, The Good Side of COVID-19,[24] makes the case that standardized testing, and educational assessment more generally, must be rethought so that they better align with the multicultural, pluralistic society the US is rapidly becoming. In a follow-up article, Toward a Theory of Socioculturally Responsive Assessment,[25] he assembles assessment design principles from multiple literatures and uses them to fashion a definition, theory, and suggested path for implementing measures more attuned to the social, cultural, and other relevant characteristics of diverse individuals and the contexts in which they live. That line of thinking is elaborated upon in Let's Agree to (Mostly) Agree: A Response to Solano-Flores.[26]

Books

Andrade, H. L., Bennett, R. E., & Cizek, G. J. (Eds.). (2019). Handbook of formative assessment in the disciplines. New York: Routledge.

Bennett, R. E., & von Davier, M. (Eds.). (2017). Advancing human assessment: The methodological, psychological, and policy contributions of ETS. Cham, Switzerland: Springer Open.

Bennett, R. E., & Ward, W. C. (Eds.). (1993). Construction vs. choice in cognitive measurement: Issues in constructed response, performance testing, and portfolio assessment. Hillsdale, NJ: Lawrence Erlbaum Associates.

Willingham, W. W., Ragosta, M., Bennett, R. E., Braun, H. I., Rock, D. A., & Powers, D. E. (1988). Testing handicapped people. Boston, MA: Allyn & Bacon.

Bennett, R. E. (Ed.). (1987). Planning and evaluating computer education programs. Columbus, OH: Merrill.

Bennett, R. E., & Maher, C. A. (Eds.). (1986). Emerging perspectives in the assessment of exceptional children. New York: Haworth Press.

Cline, H. F., Bennett, R. E., Kershaw, R. C., Schneiderman, M. B., Stecher, B., & Wilson, S. (1986). The electronic schoolhouse: The IBM secondary school computer education program. Hillsdale, NJ: Lawrence Erlbaum Associates.

Bennett, R. E., & Maher, C. A. (Eds.). (1984). Microcomputers and exceptional children. New York: Haworth Press.

Maher, C. A., & Bennett, R. E. (1984). Planning and evaluating special education services. Englewood Cliffs, NJ: Prentice-Hall.

References

  1. Levine, J. "Honoring the Very Best: Recognition for a Stellar Group of TC Alumni". Teachers College, Columbia University. Retrieved August 18, 2020.
  2. "2017 AERA Fellows". American Educational Research Association. Retrieved August 18, 2020.
  3. "Bradley Hanson Award for Contributions to Educational Measurement Recipients Announced". National Council on Measurement in Education. Retrieved August 20, 2020.
  4. "E.F. Lindquist Award: 2020 Award Recipient". American Educational Research Association. Retrieved August 18, 2020.
  5. "Seventeen Scholars Elected to Membership in the National Academy of Education". National Academy of Education. 28 January 2022. Retrieved January 28, 2022.
  6. Bennett, R.E. "Reinventing Assessment: Speculations on the Future of Large-Scale Educational Testing". Educational Testing Service.
  7. Rubenstein, G. (March 18, 2008). "Ending Hit-and-Run Testing: ETS Sets Out to Revolutionize Assessment". Edutopia.
  8. Ash, K. (March 14, 2011). "Tailoring Testing with Digital Tools". Education Week, 30(25). pp. 35, 37.
  9. Bennett, R.E.; Gitomer, D.H. (2009). "Transforming K-12 assessment: Integrating accountability testing, formative assessment, and professional support". In C. Wyatt-Smith & J. Cumming (Eds.), Educational assessment in the 21st century (pp. 43–61). New York: Springer.
  10. Bennett, R.E. (2010). "Cognitively based assessment of, for, and as learning: A preliminary theory of action for summative and formative assessment". Measurement: Interdisciplinary Research and Perspectives, 8, 70–91.
  11. NCME (July 26, 2018). "National Council on Measurement in Education (NCME) Position Statement on Theories of Action for Testing Programs" (PDF). NCME.
  12. Chalhoub-Deville, M. (2016). "Validity theory: Reform policies, accountability testing, and consequences". Language Testing, 33(4), 453–472. doi:10.1177/0265532215593312. S2CID 152167855.
  13. Bennett, R.E. (2011). "Formative Assessment: A Critical Review". Assessment in Education: Principles, Policy & Practice, 18, 5–25. doi:10.1080/0969594X.2010.513678. S2CID 14804319.
  14. Sawchuk, S. (May 21, 2009). "Has the Research on Formative Assessment Been Oversold?". Education Week Teacher Beat.
  15. Baird, J.; Hopfenbeck, T.N.; Newton, P.; Stobart, G.; Steen-Utheim, A.T. State of the Field Review: Assessment and Learning (PDF). Norwegian Knowledge Centre for Education.
  16. Heritage, M.; Wiley, E.C. (2020). Formative Assessment in the Disciplines: Framing a Continuum of Professional Learning. Cambridge, MA: Harvard Education Press. pp. 15–47.
  17. Nishizuka, K. (2020). "A Critical Review of Formative Assessment Research and Practice in Japan". International Journal of Curriculum Development and Practice. pp. 15–47.
  18. Sandene, B.; Horkay, N.; Bennett, R.E.; Allen, N.; Braswell, J.; Kaplan, B.; Oranje, A. (2005). Online Assessment in Mathematics and Writing: Reports From the NAEP Technology-Based Assessment Project, Research and Development Series. Washington, D.C.: IES. Retrieved August 18, 2020.
  19. Bennett, R.E.; Persky, H.; Weiss, A.R.; Jenkins, F. (2007). Problem Solving in Technology-Rich Environments: A Report From the NAEP Technology-Based Assessment Project. Washington, D.C.: IES. Retrieved August 18, 2020.
  20. Cavanagh, S. (August 17, 2007). "Computerized Tests Measure Problem-Solving". Education Week.
  21. Tucker, B. (November 2009). "The Next Generation of Testing". Educational Leadership, 67(3), 48–53.
  22. Bennett, R.E.; Zhang, M. (2016). "Validity and automated scoring". In F. Drasgow (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 142–173). New York: Routledge.
  23. Bennett, R.E. (2015). "The Changing Nature of Educational Assessment". Review of Research in Education, 39, 370–407. doi:10.3102/0091732X14554179. S2CID 145592665.
  24. Bennett, R.E. (2022). "The Good Side of COVID-19". Educational Measurement: Issues and Practice, 41, 61–63. doi:10.1111/emip.12496. S2CID 246588079.
  25. Bennett, R.E. (2023). "Toward a Theory of Socioculturally Responsive Assessment". Educational Assessment, 28(2), 83–104. doi:10.1080/10627197.2023.2202312.
  26. Bennett, R.E. (2023). "Let's Agree to (Mostly) Agree: A Response to Solano-Flores". Educational Assessment, 28(2), 122–127. doi:10.1080/10627197.2023.2215978. S2CID 258933453.

Randy E. Bennett publications indexed by Google Scholar.
