Authors

Faculty member, Faculty of Educational Sciences and Psychology, Shahid Chamran University of Ahvaz

10.22055/edus.2006.15993

Abstract

The first purpose of this study was to measure the psychometric characteristics of final examination papers prepared by Shahid Chamran University faculty members. A second purpose was to investigate the different test formats and procedures faculty members use to evaluate their students' final academic performance. A total of 109 faculty members from different academic disciplines voluntarily participated in the study. Each item on the examination papers was analyzed individually, and its item difficulty, item discrimination, and reliability index were computed. The results of the completed item analyses were sent confidentially to the faculty members. The findings showed that faculty members differ in the test formats they use to evaluate their students' academic performance. Comparing the psychometric characteristics of the test items with established criteria showed that the differences among faculty members were significant. In addition, the item-difficulty analysis showed that students had little trouble answering the test items, indicating that the items were relatively easy. It is therefore suggested that the university establish a testing center in which faculty members can learn how to construct standardized tests and improve their ability to calculate the psychometric characteristics of their test scores.
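The item analyses described above rely on standard classical test theory quantities. As an illustrative aid only, the sketch below (Python) shows how these are conventionally computed from a dichotomously scored (0/1) response matrix: item difficulty as the proportion of correct answers, an upper-lower discrimination index based on the top and bottom 27% of total scores, and a KR-20 reliability estimate. The function names and toy data are hypothetical; this is not the authors' actual procedure or software.

# Minimal sketch of classical test theory item statistics (assumed 0/1 scoring).
from typing import List

def item_difficulty(responses: List[List[int]]) -> List[float]:
    """Proportion of examinees answering each item correctly (p-value)."""
    n = len(responses)
    n_items = len(responses[0])
    return [sum(r[j] for r in responses) / n for j in range(n_items)]

def item_discrimination(responses: List[List[int]], group_frac: float = 0.27) -> List[float]:
    """Upper-lower discrimination index D: p(upper group) - p(lower group)."""
    totals = [sum(r) for r in responses]
    order = sorted(range(len(responses)), key=lambda i: totals[i])
    k = max(1, round(group_frac * len(responses)))
    lower, upper = order[:k], order[-k:]
    n_items = len(responses[0])
    return [
        sum(responses[i][j] for i in upper) / k - sum(responses[i][j] for i in lower) / k
        for j in range(n_items)
    ]

def kr20(responses: List[List[int]]) -> float:
    """Kuder-Richardson 20 reliability estimate for dichotomous items."""
    n_items = len(responses[0])
    p = item_difficulty(responses)
    totals = [sum(r) for r in responses]
    mean = sum(totals) / len(totals)
    var = sum((t - mean) ** 2 for t in totals) / (len(totals) - 1)  # sample variance of total scores
    return (n_items / (n_items - 1)) * (1 - sum(pi * (1 - pi) for pi in p) / var)

if __name__ == "__main__":
    # Toy data: rows are examinees, columns are items (1 = correct, 0 = incorrect).
    data = [
        [1, 1, 1, 0, 1],
        [1, 0, 1, 0, 0],
        [1, 1, 0, 1, 1],
        [0, 0, 1, 0, 0],
        [1, 1, 1, 1, 1],
        [0, 1, 0, 0, 0],
    ]
    print("difficulty:", item_difficulty(data))
    print("discrimination:", item_discrimination(data))
    print("KR-20:", round(kr20(data), 3))

In this framing, items with difficulty values near 1.0 are the "relatively easy" items the abstract refers to, and low or negative discrimination values flag items that fail to separate stronger from weaker examinees.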

Keywords
