Free access

| Issue | Pédagogie Médicale, Volume 18, Number 2, May 2017 |
|---|---|
| Page(s) | 55‐64 |
| Section | Recherche et perspectives |
| DOI | https://doi.org/10.1051/pmed/2018002 |
| Published online | 5 July 2018 |
- Case SM, Swanson DB. Constructing written test questions for the basic and clinical sciences. Philadelphia: National Board of Medical Examiners, 2002.
- Brady AM. Assessment of learning with multiple-choice questions. Nurse Educ Pract 2005;5:238‐42.
- Epstein RM, Hundert EM. Professional competence. JAMA 2002;287:226‐35.
- Linn RL, Gronlund NE. Measurement and assessment in teaching. Upper Saddle River, New Jersey: Merrill Prentice-Hall, 2000.
- van der Vleuten C. Validity of final examinations in undergraduate medical training. BMJ 2000;321:1217‐9.
- Jouquan J. L’évaluation des apprentissages des étudiants en formation médicale initiale. Pédagogie Médicale 2002;3:38‐52.
- Downing SM. Validity: On the meaningful interpretation of assessment data. Med Educ 2003;37:830‐7.
- Rodriguez MC. Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educ Meas Issues Pract 2005;24:3‐13.
- Bush ME. Quality assurance of multiple-choice tests. Qual Assur Educ 2006;14:398‐404.
- Haladyna TM, Downing SM, Rodriguez MC. A review of multiple-choice item-writing guidelines for classroom assessment. Appl Meas Educ 2002;15:309‐33.
- Tarrant M, Ware J, Mohammed AM. An assessment of functioning and non-functioning distractors in multiple-choice questions: a descriptive analysis. BMC Med Educ 2009;9:40.
- Caldwell DJ, Pate AN. Effects of question formats on student and item performance. Am J Pharm Educ 2013;77:1‐5.
- Downing SM. The effects of violating standard item writing principles on tests and students: The consequences of using flawed test items on achievement examinations in medical education. Adv Health Sci Educ Theory Pract 2005;10:133‐43.
- Considine J, Botti M, Thomas S. Design, format, validity and reliability of multiple choice questions for use in nursing research and education. Collegian 2005;12:19‐24.
- Prihoda TJ, Pinckard RN, McMahan CA, Jones AC. Correcting for guessing increases validity in multiple-choice examinations in an oral and maxillofacial pathology course. J Dent Educ 2006;70:378‐86.
- Haladyna TM, Downing SM. A taxonomy of multiple-choice item-writing rules. Appl Meas Educ 1989;2:37‐50.
- Bertrand R, Blais J-G. Modèles de mesure : L’apport de la théorie des réponses aux items. Québec : Presses de l’Université du Québec, 2004.
- Moreno R, Martínez RJ, Muñiz J. New guidelines for developing multiple-choice items. Methodol Eur J Res Methods Behav Soc Sci 2006;2:65‐72.
- DiBattista D, Kurzawa L. Examination of the quality of multiple-choice items on classroom tests. Can J Scholarsh Teach Learn 2011;2:1‐23.
- Tarrant M, Ware J. Impact of item-writing flaws in multiple-choice questions on student achievement in high-stakes nursing assessments. Med Educ 2008;42:198‐206.
- DeVellis RF. Scale development: Theory and applications. Thousand Oaks: Sage Publications, 2016.
- Conseil de recherches en sciences humaines du Canada, Conseil de recherches en sciences naturelles et en génie du Canada, Instituts de recherche en santé du Canada. Énoncé de politique des trois Conseils : Éthique de la recherche avec des êtres humains, décembre 2014.
- McConnell MM, St-Onge C, Young ME. The benefits of testing for learning on later performance. Adv Health Sci Educ Theory Pract 2015;20:305‐20.
- Crocker L, Algina J. Introduction to classical and modern test theory. Sea Harbor Drive: Holt, Rinehart and Winston, 1986.
- Frisbie DA, Ebel RL. Essentials of educational measurement. Upper Saddle River: Prentice Hall, 1991.
- Fleiss JL. Balanced incomplete block designs for inter-rater reliability studies. Applied Psychological Measurement, Thousand Oaks: Sage Publications, 1981.
- Luecht RM. Adaptive computer-based tasks under an assessment engineering paradigm. In: D. J. Weiss (Ed.). Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing. 2009 [On-line]. Available at: www.psych.umn.edu/psylabs/CATCentral/.
- Gierl MJ, Zhou J, Alves C. Developing a taxonomy of item model types to promote assessment engineering. J Technol Learn Assess 2008;7:1‐51.
- Gierl MJ, Lai H. Evaluating the quality of medical multiple-choice items created with automated processes. Med Educ 2013;47:726‐33.