Undergraduate Medical Education Program Evaluation Strategy



[Figure: program evaluation framework: Recommendations; Summative Evaluation; Formative Evaluation; Outcomes/Impacts; Implementation Issues]

3.1 METHODOLOGY/SOURCES OF DATA

Table 1: Methodology/Sources of Data

Source of Data                           Timeline
Internal sources
  Goals and Objectives Self-Assessment   Yearly: January (pre-clerkship) and May (post-clerkship)
  360 Evaluation                         Yearly: January
  Course Evaluations                     Every 2nd year per course, unless otherwise needed
  Content Relevance                      Ongoing
  Re-tests                               Every 3rd year per course: March
  SCRC                                   Ongoing
  Academic Half Day Evaluations          Ongoing
  Department Evaluations of PGY1s        Ongoing
External sources
  LMCC Part 1                            Yearly: May
  LMCC Part 2                            Yearly: September
  AAMC Graduation Survey                 Yearly: May
Internal/external sources
  Correlation between LMCC and Grades    Classes of 2007-2010, then every 3rd year unless significant changes to the program
  Accreditation Self-Studies             Every 3-8 years
  Hidden Curriculum                      AFMC working group
Table 1 presents an overview of the methodology/sources of data for the UGME Program Evaluation Strategy; each source contributes to the needs assessment, the process evaluation, and/or the outcome evaluation on the timeline shown.


3.2 INTERNAL SOURCES OF DATA

3.2.1 Goals and Objectives Self-Assessment


The College has several stated goals and objectives reflecting the roles of the physician as Medical Expert, Communicator, Health Advocate, Learner/Scholar/Scientist, Collaborator, Resource Manager/Gatekeeper/Steward, and Person. To better understand the extent to which the College is achieving these objectives, students complete self-assessments in which they rate themselves both as they are currently and, retrospectively, as they were on the first day of medical school. Students complete this self-assessment at both the start and the end of clerkship.
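As a rough illustration of how such retrospective pre-post ratings could be summarized, the minimal Python sketch below computes the mean self-assessed gain per objective; the objective names, rating pairs, and 5-point scale are hypothetical, not taken from the College's instrument.

    # Mean gain per objective from a retrospective pre-post self-assessment:
    # each student rates themselves now and, retrospectively, for day one of
    # medical school; the gain is the difference. All values are hypothetical.
    from statistics import mean

    # objective -> list of (retrospective day-one rating, current rating), one pair per student
    ratings = {
        "Medical Expert": [(2, 4), (1, 3), (2, 5)],
        "Communicator": [(3, 4), (2, 4), (3, 5)],
    }

    for objective, pairs in ratings.items():
        gain = mean(current - day_one for day_one, current in pairs)
        print(f"{objective}: mean self-assessed gain = {gain:.2f}")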

3.2.2 360 Evaluation


First year residents will be evaluated by attending physicians, nurses, other residents, and patients on items reflecting the College's goals and objectives. It is intended that this evaluation will take place approximately six months into residency. Aggregate ratings for residents who graduated from the U of S will be reviewed and compared with ratings for residents who were trained elsewhere, providing a picture of how well the College is preparing its graduates for residency. At present, multisource evaluations are being conducted in a small number of programs, but it is intended that this will expand to a larger number of programs.

3.2.3 Course Evaluations


Each course will be evaluated every second year, unless the course has undergone significant changes. For each course, a member of the UGME office will review the standardized evaluation instrument with the Course Coordinator to ensure questions are included that reflect specific course objectives, core competencies, and/or significant curriculum changes (e.g., changes in content, organization, delivery, or method of student assessment). The standard set of evaluation questions in C.A.S.E. is sent to the course coordinators two weeks before their evaluation opens, giving the instructor the chance to change or add questions relevant to the course. The finalized evaluation form will be administered to students using One45. Course evaluations are sent out the week before the class ends and are usually left open for two weeks after the course so that students have the chance to comment on the exam. A sampling methodology will be used in which half (?) of the students in a class will be selected to complete the evaluation; this is intended to reduce evaluation fatigue and has been found to result in high response rates and reliable responses (Kreiter & Lakshman, 2005). A sketch of the selection step follows.
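The split-sample selection itself is straightforward; below is a minimal Python sketch (the roster and function name are hypothetical) of randomly selecting half of a class without replacement, per the sampling approach of Kreiter and Lakshman (2005).

    # Randomly select a fraction of a class roster to receive an evaluation
    # form. Roster names are hypothetical placeholders.
    import random

    def select_evaluators(roster, fraction=0.5, seed=None):
        """Return a random sample of the roster (without replacement)."""
        rng = random.Random(seed)
        return rng.sample(roster, round(len(roster) * fraction))

    class_roster = [f"student_{i:03d}" for i in range(1, 101)]  # hypothetical class of 100
    evaluators = select_evaluators(class_roster, fraction=0.5, seed=42)
    print(f"{len(evaluators)} of {len(class_roster)} students selected")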
A report will be generated and sent to the Course Coordinator. Reports will also be sent to the Program Evaluation Sub-Committee, the Phase Chair, and the Curriculum Delivery Committee for review. A response will be generated and communicated to the new class, and changes may be made to the course syllabus based on student feedback.
The roles and responsibilities of key stakeholders are summarized below, as are the sequential steps involved in the course evaluation process (Figure 2):

[Figure 2: Course Evaluation Process: Undergraduate Office; Review Process/Trends; Assistant Dean]


3.2.4 Content Relevance


It is important that the undergraduate curriculum contain material that is highly relevant to later medical practice. To help identify which content is relevant to general medical practice and which is not, final examinations from undergraduate courses will be reviewed by practicing general physicians, JURSIs, and residents. Specifically, respondents will be asked to rate each question on (1) the level of knowledge a clinical clerk should have immediately prior to graduation, and (2) relevance to general medical practice (i.e., the frequency of the condition and its impact on the patient's quality of life). A modified Angoff approach will be used to help determine a cut score for each exam question. Respondents will indicate whether a marginally competent medical student would be able to answer correctly immediately prior to graduation and whether a marginally competent general physician with five or more years of experience would be able to answer correctly.
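A minimal Python sketch of the cut-score arithmetic follows, assuming the yes/no judgments are coded as 1.0 ("yes") or 0.0 ("no"); the classic Angoff variant uses graded probabilities instead. The judges and their ratings are hypothetical.

    # Modified Angoff cut score: average the judges' per-question estimates of
    # whether a marginally competent candidate would answer correctly, then
    # sum the per-question means across the exam. All judgments are hypothetical.
    from statistics import mean

    judgments = {  # judge -> one estimate per exam question
        "judge_A": [1.0, 0.6, 0.0],
        "judge_B": [0.8, 0.7, 0.0],
        "judge_C": [1.0, 0.5, 1.0],
    }

    n_items = len(next(iter(judgments.values())))
    item_means = [mean(j[i] for j in judgments.values()) for i in range(n_items)]
    cut_score = sum(item_means)
    print(f"Cut score: {cut_score:.2f} of {n_items} points")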
3.2.5 Re-tests

It is important for medical students to retain key information taught in their courses. To better understand the extent to which students retain specific information, and by extension which concepts are reinforced throughout the undergraduate program, students will periodically answer questions from earlier examinations. It is intended that questions from each course will be retested every three years.
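One simple way to index retention (not specified in this strategy) is to compare the proportion of students answering each re-tested question correctly against the original sitting; the Python sketch below uses invented percentages and an illustrative 10-point threshold.

    # Flag re-tested questions whose percent-correct dropped notably, as a
    # rough index of retention. All values and the threshold are hypothetical.
    original = {"Q1": 0.92, "Q2": 0.85, "Q3": 0.78}  # percent correct, original exam
    retest = {"Q1": 0.88, "Q2": 0.70, "Q3": 0.75}    # percent correct, re-test

    for q in sorted(original):
        drop = original[q] - retest[q]
        flag = "  <- consider reinforcing" if drop > 0.10 else ""
        print(f"{q}: {original[q]:.0%} -> {retest[q]:.0%} (drop {drop:.0%}){flag}")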


3.2.6 SCRC

The SCRC will bring any problems to the attention of the Program Evaluation Sub-Committee.

3.2.7 Academic Half Day Evaluations

The results of the JURSI Academic Half Day evaluations will be reviewed by ES&D and incorporated into the overall UGME Program Evaluation Strategy as appropriate.
3.2.8 Department Evaluations of PGY1s

It is intended that departmental evaluations of first year residents will be submitted to ES&D. Research staff from ES&D will conduct analyses comparing the performance of residents who graduated from the U of S with those who completed their undergraduate training at another institution. This will help identify areas in which the undergraduate curriculum is performing well in preparing its graduates for residency and areas which need improvement.
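One plausible form for such a comparison (the strategy does not specify the statistical method) is a two-sample test on mean departmental ratings; the Python sketch below uses Welch's t-test with invented ratings on an assumed 5-point scale.

    # Compare mean PGY1 ratings for U of S graduates vs. graduates of other
    # schools using Welch's two-sample t-test. All ratings are hypothetical.
    from scipy import stats

    uofs_ratings = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]
    other_ratings = [3.9, 4.0, 3.7, 4.1, 3.8, 3.6]

    t_stat, p_value = stats.ttest_ind(uofs_ratings, other_ratings, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")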



3.3 EXTERNAL SOURCES OF DATA

3.3.1 MCC Qualifying Examinations


Performance on the Medical Council of Canada Qualifying Examinations (LMCC Parts I and II) will be tracked over time, and graduates' average scores will be compared with those of all candidates as well as with those of candidates trained at other Canadian medical schools.
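A minimal Python sketch of the year-over-year tracking (all score values are hypothetical):

    # Track graduates' mean LMCC Part I score against the mean for all
    # candidates, year by year. All values are hypothetical.
    all_candidate_means = {2007: 500, 2008: 500, 2009: 500}
    graduate_means = {2007: 512, 2008: 495, 2009: 508}

    for year in sorted(graduate_means):
        diff = graduate_means[year] - all_candidate_means[year]
        print(f"{year}: graduates {graduate_means[year]}, all candidates "
              f"{all_candidate_means[year]}, difference {diff:+d}")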

3.3.2 Canadian Medical School Graduation Questionnaire


The results of the Canadian Medical School Graduation Questionnaire (AAMC) will be tracked over time. Feedback pertaining to the quality of educational experiences in clinical clerkships, teaching by resident physicians and fellows, the methods used to evaluate clinical skills, instructional time during rotations, the acquisition of knowledge and skills, and the graduates’ overall satisfaction with their medical education is of particular relevance to the evaluation of the UGME Program. Efforts will be made to increase response rates (e.g., time scheduled during a JURSI Academic Half Day to complete the questionnaire).

3.4 INTERNAL/EXTERNAL SOURCES OF DATA



3.4.1 Correlation between LMCC Scores and Grades

To understand which courses are most strongly associated with LMCC performance, correlation coefficients will be computed between grades for all undergraduate courses and LMCC performance. This will be done for the Classes of 2007-2010 and repeated when significant changes are made to the curriculum.
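A minimal Python sketch of the computation, using Pearson correlation coefficients; the course names, grades, and LMCC scores are hypothetical.

    # Correlate final grades in each undergraduate course with LMCC scores.
    # All course names and values are hypothetical.
    from scipy.stats import pearsonr

    lmcc_scores = [520, 480, 610, 550, 590, 470, 530, 600]
    course_grades = {
        "Course A": [78, 72, 88, 80, 85, 70, 79, 87],
        "Course B": [82, 75, 84, 81, 83, 74, 80, 86],
    }

    for course, grades in course_grades.items():
        r, p = pearsonr(grades, lmcc_scores)
        print(f"{course}: r = {r:.2f}, p = {p:.3f}")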


3.4.2 Accreditation Self-Studies

Accreditation self-studies, conducted every three to eight years, will be incorporated into the overall UGME Program Evaluation Strategy as appropriate.
3.4.3 Working Group on the Hidden Curriculum

One recommendation of The Future of Medical Education in Canada (FMEC): A Collective Vision for MD Education report, produced by the Association of Faculties of Medicine of Canada (AFMC), is to address the hidden curriculum. The work of the AFMC working group on the hidden curriculum will inform the UGME Program Evaluation Strategy.



Bibliography

Bax, N.D.S., & Godfrey, J. (1997). Identifying core skills for the medical curriculum. Medical Education, 31, 347-351.

College of Medicine (2004). Information guide 2004-2005: Undergraduate medical students. Saskatoon, SK: College of Medicine, University of Saskatchewan.

Coombes, Y. (2000). Combining quantitative and qualitative approaches to evaluation. In M. Thorogood & Y. Coombes (Eds.), Evaluating health promotion: Practice and methods. New York, NY: Oxford University Press Inc.

Cousins, J.B., Donohue, J.J., & Bloom, G.A. (1996). Collaborative evaluation in North America: Evaluators' self-reported opinions, practices, and consequences. Evaluation Practice, 17(3), 207-226.

Fitzpatrick, J.L., Sanders, J.R., & Worthen, B.R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston, MA: Pearson Education, Inc.

Gerrity, M., & Mahaffy, J. (1998). Evaluating change in medical school curricula: How did we know where we were going? Academic Medicine, 73(Suppl. 9), S55-S59.

Guba, E.G., & Lincoln, Y.S. (1989). Fourth generation evaluation. Newbury Park, CA: SAGE Publications, Inc.

Hendry, G., Cumming, R., Lyon, P., & Gordon, J. (2001). Student-centred course evaluation in a four-year, problem-based medical programme: Issues in collection and management of feedback. Assessment and Evaluation in Higher Education, 26(4), 327-339.

Issel, L.M. (2004). Health program planning and evaluation: A practical, systematic approach for community health. Mississauga, ON: Jones and Bartlett Publishers, Inc.

Joint Committee on Standards for Educational Evaluation (1994). The program evaluation standards: How to assess evaluations of educational programs (2nd ed.). Thousand Oaks, CA: SAGE Publications, Inc.

Kreiter, C.D., & Lakshman, V. (2005). Investigating the use of sampling for maximising the efficiency of student-generated faculty teaching evaluations. Medical Education, 39, 171-175.

Louie, B., Byrne, N., & Wasylenki, D. (1996). From feedback to reciprocity: Developing a student-centered approach to course evaluation. Evaluation and the Health Professions, 19(2), 231-242.

Milburn, K., Fraser, E., Secker, J., & Pavis, S. (1995). Combining methods in health promotion research: Some considerations about appropriate use. Health Education Journal, 54, 347-356.

Nestel, D. (2002). Development of an evaluation model for an introductory module on social medicine. Assessment and Evaluation in Higher Education, 27(4), 301-308.

O'Sullivan, R. (2004). Practicing evaluation: A collaborative approach. Thousand Oaks, CA: SAGE Publications, Inc.

Pabst, R., & Rothkotter, H. (1997). Retrospective evaluation of undergraduate medical education by doctors at the end of their residency time in hospitals: Consequences for the anatomical curriculum. The Anatomical Record, 249, 431-434.

Patton, M.Q. (1998). Utilization-focused evaluation. Newbury Park, CA: SAGE Publications, Inc.

Rossi, P.H., Freeman, H.E., & Lipsey, M.W. (1991). Evaluation: A systematic approach (6th ed.). Thousand Oaks, CA: SAGE Publications, Inc.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: SAGE Publications, Inc.

Smith, C., Herbert, D., Robinson, W., & Watt, K. (2001). Quality assurance through a Continuous Curriculum Review (CCR) strategy: Reflections on a pilot project. Assessment and Evaluation in Higher Education, 26(5), 489-502.

Spratt, C., & Walls, J. (2003). Reflective critique and collaborative practice in evaluation: Promoting change in medical education. Medical Teacher, 25(1), 82-88.

Stern, E. (1996). Developmental approaches to programme evaluation: An independent evaluator's perspective. In Evaluating and reforming education systems. Paris: Organization for Economic Co-operation and Development.

Sukkar, M. (1984). An approach to the question of relevance of medical physiology courses. Medical Education, 18, 217-221.

University of Saskatchewan (2002). Principles of evaluation of teaching at the University of Saskatchewan. Saskatoon, SK: Author.

Whitman, N.A., & Cockayne, T.W. (1984). Evaluating medical school courses: A user-centered handbook. Salt Lake City, UT: University of Utah School of Medicine.



