Article Type: Scientific-Research

Authors

1 M.A., Technical and Vocational Training Organization of Iran

2 Director General of Skill Assessment and Professional Competency, Technical and Vocational Training Organization

Abstract

The rapid growth of information and communication technology in teaching and learning has shifted the dominant approach from paper-and-pencil testing to technology-based testing. Computers and related technologies provide powerful tools for addressing assessment challenges and make it easier to capture a broader range of cognitive skills. This study was conducted to compare technology-based proficiency tests with paper-based tests from the perspective of test applicants of the General Department of Technical and Vocational Education of Alborz Province. The statistical population comprised 420 examinees of the Technical and Vocational Training Organization, of whom 201 were selected as the sample via cluster sampling using the Cochran formula. The research is applied in purpose and experimental in its data-collection method. Data were gathered with a researcher-made questionnaire on a five-point Likert scale, with a reliability of 0.89. The questionnaire covered three dimensions: technical, environmental, and managerial factors. Its validity was confirmed through AMOS software and factor analysis based on structural equation modeling; all factor loadings were significant at the 0.001 level. Fit indices showed that the model had a relatively good fit. The questionnaire was administered to two independent groups of subjects: a computer-based group and a paper-based group. Data were analyzed at the descriptive and inferential levels using SPSS; the findings were analyzed with analysis of variance and the independent-samples t-test. The normality test showed that the data were normally distributed. The independent-samples t-test showed, for all dimensions (technical, environmental, and managerial), a significant difference between the means of the two groups (paper-based and electronic). The results also indicated that the mean ratings of the electronic examinees were higher and that evaluation via technology-based tests differed significantly from traditional, paper-based tests.
Given these results, it appears that planning should be undertaken to implement modern evaluation models and to deploy a variety of technology-based tests across test types.

Keywords

Article Title [English]

Comparative Study of Technology-based Proficiency Tests and Paper-based Tests (Case Study: The Department of Vocational and Technical Education of Alborz)

Author [English]

  • Hossein Bagherpour 1

1 M.A., Technical and Vocational Training Organization of Iran


Abstract [English]

Introduction
The rapid development of information and communication technology in teaching and learning has shifted the dominant approach from the pen-and-paper test system to technology-based testing. Computers and related technologies provide powerful tools for solving evaluation challenges and facilitate the recording of a wider set of cognitive skills. This study was conducted to compare technology-based proficiency tests with paper-based tests from the perspective of test applicants of the General Department of Vocational and Technical Education of Alborz.
 
Method
The population of this study comprised 420 examinees of the Alborz Vocational and Technical Education Department, of whom 201 were selected as the sample via cluster sampling using the Cochran formula. The study is applied in purpose and experimental in its data-collection method. Data were collected with a researcher-made questionnaire on a five-point Likert scale, with a reliability of 0.89. The questionnaire covered three dimensions: technical, environmental, and managerial factors. Its validity was confirmed with AMOS software through factor analysis based on structural equation modeling; all factor loadings were significant at the 0.001 level. Fit indices indicated that the model had a relatively good fit. The questionnaire was administered to two independent groups of subjects, a computer-based group and a paper-based group. Data were analyzed at the descriptive and inferential levels using SPSS, and the findings were analyzed with analysis of variance and the independent-samples t-test.
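The two computational steps named above, the Cochran sample-size formula and the independent-samples t-test, can be sketched in Python. The defaults z = 1.96, p = 0.5, and e = 0.05 are the conventional choices (the paper does not state them), and the Likert ratings below are illustrative synthetic values, not the study's data:

```python
import math
from statistics import mean, variance

def cochran_sample_size(N, z=1.96, p=0.5, e=0.05):
    """Cochran's formula with finite-population correction for population N."""
    n0 = (z**2 * p * (1 - p)) / e**2      # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / N)           # correct for finite population
    return math.ceil(n)

def independent_t(a, b):
    """Student's t statistic for two independent samples (pooled variance)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Reproduces the paper's sample size: 201 examinees out of a population of 420.
print(cochran_sample_size(420))           # → 201

# Synthetic Likert ratings for the two test-mode groups (illustration only).
cbt = [4, 5, 4, 5, 4, 5]                  # computer-based group
pbt = [2, 3, 2, 3, 2, 3]                  # paper-based group
t = independent_t(cbt, pbt)
print(t > 2.228)                          # 2.228 = critical t, df=10, α=0.05
```

With the conventional defaults, the formula yields exactly the 201-of-420 sample reported in the study, which suggests those parameter values were indeed used.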
 
Results
The results indicate that the mean ratings of the electronic examinees were higher and that evaluation via technology-based tests differed significantly from traditional, paper-based tests.
 
Discussion
The results also show that the mean ratings of the electronic examinees were higher and that the mean evaluation differed significantly from that of traditional, paper-based tests. Given these results, it appears that planning should be undertaken to implement modern evaluation models and to deploy a variety of technology-based tests across test types.
 

Keywords [English]

  • Assessment
  • Technology-based test
  • Paper-based test
  • Technical and Vocational Training Organization