Document Type: Research Paper

Author

Master of Technical and Vocational Training Organization

Abstract

Introduction
The rapid development of information and communication technology in teaching and learning has shifted the dominant approach from pen-and-paper testing to technology-based testing. Computers and related technologies provide powerful tools for addressing evaluation challenges and facilitate the assessment of a wider set of cognitive skills. This study compares technology-based proficiency tests with paper-based tests from the perspective of test applicants of the General Department of Vocational and Technical Education of Alborz.
 
Method
The research method is descriptive. The population of this study comprised 420 examinees from the Alborz Vocational and Technical Educational Department, of whom 201 were selected through cluster sampling, with the sample size determined by Cochran's formula. The study is applied in terms of purpose and descriptive in terms of data-collection method. Data were collected with a researcher-made questionnaire using a five-point Likert scale, with a reliability of 0.89. The questionnaire covered three dimensions: technical, environmental, and managerial factors. Its construct validity was examined in AMOS through factor analysis based on structural equation modeling; all factor loadings were significant at the 0.001 level, and the fit indices indicated that the model has a relatively good fit. The questionnaire was administered to two independent groups of subjects: a computer-based group and a paper-based group. Data were analyzed at the descriptive and inferential levels using SPSS, applying analysis of variance and independent-samples t-tests.
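The reported sample size can be reproduced with Cochran's formula followed by the finite-population correction. The sketch below, in Python, assumes the conventional defaults (z = 1.96 for 95% confidence, p = 0.5, e = 0.05), which are not stated in the text.

```python
import math

def cochran_sample_size(N, z=1.96, p=0.5, e=0.05):
    """Cochran's formula with finite-population correction.

    N -- population size
    z -- z-score for the desired confidence level (1.96 ~ 95%)
    p -- estimated population proportion (0.5 maximizes the required sample)
    e -- margin of error
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)  # infinite-population estimate
    n = n0 / (1 + n0 / N)                   # finite-population correction
    return math.ceil(n)

print(cochran_sample_size(420))  # → 201
```

With the stated population of 420 examinees, these assumed defaults yield exactly the 201 subjects reported, which suggests they match the authors' settings.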
 
Results
The results indicate that electronic examinees' ratings were higher on average, and that mean evaluations of technology-based tests differ significantly from those of traditional, paper-based tests.
 
Discussion
The results likewise indicate that electronic examinees' mean ratings are higher and that mean evaluations differ significantly from those of traditional or paper-based tests. Given these results, it appears advisable to plan for the implementation of new evaluation models and for deploying a variety of technology-based tests across the different test types.
 

Keywords
