Assessing professional competence: from methods to programmes



Validity

Validity refers to whether an instrument actually measures what it purports to measure. Newer developments in assessment methods relating to validity have typically been driven by the desire to assess clinical competence more directly by increasing the authenticity of the measurement. This started in the sixties with the assessment of “clinical reasoning” through Patient Management Problems and continued with the introduction of the OSCE in the seventies. Authenticity was achieved by presenting candidates with simulated real-world challenges, whether on paper, computerised or in a laboratory setting. Such assessment methods have undergone major developments and refinements of technique 12. The assessment of higher cognitive abilities has progressed from realistic simulations to short, focused vignettes that tap into key decisions and the application of knowledge, in which the response format (for example menu, write-in, open, matching) is of minor importance. The OSCE has similarly generated a wealth of research, from which an extensive assessment technology has emerged 10. However, on top of the rapid progress in those areas, we see a number of interrelated developments which may have a marked impact on the validity of our measurements in the future.


Firstly, we are likely to witness continued progress of the authenticity movement towards assessment in the setting of day-to-day practice 13. The success of the OSCE was basically predicated on moving assessment away from the workplace to a laboratory-controlled environment, providing authentic tasks in a standardised and objectified way. Today, however, insights into the relationship between sampling and reliability have put us in a position to move assessment back to the real world of the workplace, thanks to the development of less standardised, but nevertheless reliable, methods of practice-based assessment. Methods are presently emerging that allow assessment of performance in practice by enabling adequate sampling across different contexts and assessors. Methods for performance assessment include the Mini-CEX 14, Clinical Work Sampling 15, video assessment 16 and the use of incognito simulated patients 17. Such methods also address the final step of Miller’s competency pyramid 18, in which assessment moves from “knows” via “knows how” (paper and computer simulations) and “shows how” (performance simulations such as the OSCE) to the final “does” level of habitual performance in day-to-day practice.
A second development concerns the movement towards the integration of competencies 19,20,21. Essentially, this movement follows insights from modern educational theory, which postulates that learning is facilitated when tasks are integrated 22. Instructional programmes restricted to the “stacking” of components or sub-skills of competencies are less effective in delivering competent professionals than methods in which different task components are presented and practised in an integrated fashion, which creates conditions conducive to transfer. This “whole-task” approach is reflected in the current competency movement. A competency is the ability to handle a complex professional task by integrating the relevant cognitive, psychomotor and affective skills. In educational practice we now see curricula being built around such competencies or outcomes.

However, in assessment we tend to persist in our inclination to break down the competency we wish to assess into smaller units, which we then assess separately in the conviction that mastery of the parts will automatically lead to competent performance of the integrated whole. Reductionism in assessment has also emerged from oversimplified skills-by-method thinking 1, founded on the idea that for each skill one (and only one) instrument could be developed and used. We continue to think in this way even though experience has taught us the errors of this simplistic view. For example, in the original OSCE, short isolated skills were assessed within a short time span. Previous validity research has sounded clear warnings about the drawbacks of such an approach. The classic patient management problem, for instance, which broke the problem-solving process down into isolated steps, proved rather insensitive to differences in expertise 23. Another example comes from OSCE research showing that global ratings provide a more faithful reflection of expertise than detailed checklists 24. Atomisation may lead to trivialisation and may threaten validity, and should therefore be avoided. Recent research demonstrating the validity of global and holistic judgement thus helps us avoid trivialisation. The competency movement is a plea for an integrated approach to competence, one which respects the (holistic or tacit) nature of expertise. Coles has argued that the learning and assessing of professional judgement is the essence of what medical competence is about 25. This means that authenticity is not so much a quality that augments with each rising level of Miller’s pyramid, but one that is present at all levels of the pyramid and in all good assessment methods. A good illustration is the way test items for certifying examinations in the US are currently being written (www.nbme.org).
Compared with a few decades ago, today’s items are contextual, vignette-based or problem-oriented and require reasoning skills rather than straightforward recall of facts. This contextualisation is considered an important quality or validity indicator 26. The validity of any method of assessment could be improved substantially if assessment designers respected this characteristic of authenticity. We can also reverse the authenticity argument: if authenticity is not a matter of simply climbing the pyramid but something to be realised at all levels of the pyramid, then similar authentic information may come from various sources within the pyramid. It is therefore wise to use these multiple sources of information from various methods to “construct” an overall judgement by triangulating information across the sources. This is another argument for why we need multiple methods to do a good job in assessment.


A final trend is also related to the competency movement: the acknowledged importance of general professional competencies that are not unique to the medical profession. These include the ability to work in a team, metacognitive skills, professional behaviour, the ability to reflect, self-appraisal, et cetera. Although neither the concepts themselves nor the search for ways to assess them are new, there is currently a marked tendency to place ever more emphasis on such general competencies in education, and therefore in assessment. New methods are gaining popularity, such as self-assessment 27, peer assessment 28, multisource or 360-degree feedback 29 and portfolios 30. We see the growing prominence of general competencies as a significant development, because it will require a different assessment orientation with potential implications for other areas of assessment. Information gathering for the assessment of such general competencies will increasingly be based on qualitative, descriptive and narrative information rather than, or in addition to, quantitative, numerical data. Such qualitative information cannot be judged against a simple preset standard, which is why some form of professional evaluation will be indispensable to ensure its appropriate use for assessment purposes. This is a challenge assessment developers will have to rise to in the near future. In parallel with what we said about the danger of reductionism, the use of qualitative information points to a similar respect for holistic professional judgement on the part of the assessor. As we move further towards the assessment of complex competencies, we will have to rely more on other, probably more qualitative, sources of information than we have been accustomed to, and we will come to rely more on professional judgement as a basis for decisions about the quality and the implications of that information.
The challenge will be to make this decision making as rigorous as possible without trivialising the content for “objectivity” reasons. Much needs to be done here 31.

