24 August 2015

Exam Boards - The More the Merrier?


The release of national school examination results in the UK has led to the customary debate about standards: are so many young people getting excellent results because they are well taught, or because the examinations are getting easier?  National school examinations are administered by a number of private, commercially run companies.  The basic examination requirements (the skills and knowledge to be assessed, and the number of years students are expected to take to accumulate those skills and that knowledge) are set by a government body which changes its name quite a lot but is currently called the Office of Qualifications and Examinations Regulation (or Ofqual for short); beyond that, each company is entirely free to decide how the assessment is to be made.  For their part, schools are at total liberty to choose whichever exam board they like.  Suggestions have been made this year that commercial pressures within the examination boards are encouraging them to attract custom by making their exams easier, and that it might be better for a single, unitary exam board to administer all national school examinations.  Nicky Morgan, the UK Secretary of State for Education, was asked about this during a recent radio interview, and while she neatly side-stepped the issue, she might do well to look at how graded music examinations are run.  After all, they have been going on for the best part of 150 years and are an undoubted success story.

There are numerous individual companies and organisations which administer graded music exams and, again, while (in the UK) Ofqual defines the standard, each examination board decides how it should assess that standard.  For most of the past 150 years the ABRSM has been the dominant board both in the UK and internationally; so much so, in fact, that music examinations are pretty much synonymous with its name.  And while Trinity College London was actually the first on the scene with privately organised graded music examinations, it has always played second fiddle to the ABRSM.  This is largely due to Trinity’s history of introspective management, possibly the result of its being founded to furnish just one London music conservatory with potential students rather than, as in the case of the ABRSM, several.  Trinity long gave off the impression of being a small, family-run concern, while the ABRSM has always seemed far better organised and more efficiently run.  Beyond these two, there are many examination boards spread across the world, some carefully monitored by local governments (such as the London College of Music in the UK, the AMEB in Australia and UNISA in South Africa) and some apparently setting their own standards (there is one in Malaysia whose credentials I seriously doubt).  But Ms Morgan could happily confine herself to looking at the ABRSM and Trinity to see how, in practice, competing examination boards can be both hugely beneficial and dangerously detrimental to those who submit to their examination syllabuses.

The ABRSM has, for the best part of 100 years, pegged its assessment to a rigid framework of six key areas: Technical Work (scales and arpeggios increasing in number and complexity as the grades advance), Sight Reading, Aural Tests and three pieces drawn from different periods of musical history.  These have occasionally been tweaked and adjusted by different Chief Examiners, but with the Chief Examiner always chosen for impeccable musical and pedagogical credentials and thoroughly imbued with the ABRSM ethos, there has always been considerable stability in, and respect for, the Board’s assessment methods.  Indeed, so rigid and long-standing is this framework that it has become the curriculum for most teachers; it provides not only a framework for their teaching but also its boundaries.  This is probably the most damaging aspect of the ABRSM approach, albeit one from which the ABRSM strenuously attempts to distance itself.  As a consequence of this rigid framework, many teachers have no concept of vital musical skills such as improvisation, and believe that an effective programme balance rests wholly on an arbitrary spread of historical periods (including the astonishing belief that, stylistically, there is no difference between Sibelius and Ferneyhough, both lumped together in a period the ABRSM classifies as “modern”).

After a protracted period of stagnation during which its exams represented a somewhat watered-down version of the ABRSM’s, differing only in choice of pieces and marking system, Trinity was violently shaken out of its stupor by the arrival of Nicholas King as Chief Examiner in 1998.  He came thoroughly imbued with the ABRSM ethos but, conscious of its failings, was keen to re-position Trinity’s exams as a musical alternative to what many saw as the ABRSM’s emphasis on stability and consistency over artistic legitimacy.  The changes he wrought on Trinity’s exams were revolutionary, creating an effective and valid alternative to the ABRSM: something musically driven which wholly maintained (and maybe even raised) the accepted standard at each grade level.  King rattled rather a lot of cages in Trinity’s loose and unprofessional management, and his abrupt departure in 2002 was celebrated by many in the administration but met with horror by the examiners and by the many dedicated regional and local agents who had seen in him a saviour able, at last, to put Trinity on an equal footing with its big competitor.  King was replaced as Chief Examiner by Keith Beniston, whose soft and unobtrusive manner was seen by many as indicative of academic weakness and musical mediocrity.  It eventually dawned on the doubters that these first impressions were completely wrong and that Beniston was pretty near the ideal Chief Examiner.  Under his genial and caring eye Trinity took on a coherence and widespread credibility it had never previously enjoyed.  He tweaked the syllabuses and oversaw a system of assessment which focused on the candidate’s musical abilities rather than on an abstract set of ideals that were as impossible to achieve musically as they were easy to mark consistently.

It is in this latter area that we find, perhaps, the biggest difference between the two boards.  For while the rigid framework of the ABRSM allows for the kind of consistency of marking which non-musical parents understand, the artistically driven principles of Trinity are, like all musical performances, open to a wide range of aural interpretations, and these are truly reflected in examiners’ wide-ranging comments.  No two people hear the same things in any musical performance; Trinity reflected this, while the ABRSM did its best to subvert it.  And thus a very clear difference of approach was set up between the two exam boards.  Both were legitimate, in that they realistically assessed the required skills and knowledge, yet each offered a very different approach for teachers and students.  True, because these boards had been around for years, brand loyalty played more of a role in customer choice than anything else, but there were undoubtedly the beginnings of a levelling out, and as the Trinity brand gained greater respectability, so the flaws in the ABRSM system were more widely recognised.

And there the story should end, but it does not.  The acrimonious dismissal of Beniston in 2010, occasioned by internal petty jealousies, threw Trinity into a crisis from which it has never really recovered.  Beniston’s successor was made redundant after just two years, and the post of Chief Examiner at Trinity has been dispensed with altogether, leaving the administration of the examinations to a staff almost wholly without musical or pedagogical experience, without practical experience of music examining, and without the ability to grasp the concept of assessing artistic values.  Driven by a desire both to match the commercial success of the ABRSM and to avoid customer complaints (which take up valuable resources), the move has been away from valued assessments of musical performances and towards a strange hybrid of a desire to encourage artistic expression and a refusal to assess it coherently.  Examiners are being given ever more precise instructions about the comments they make, thereby removing from the examination report that sense of artistic validity which comes from a recognised professional expressing both an informed opinion and an authoritative assessment of individual ability.  Examiners are encouraged to draw from a circulated list of recommended comments and instructed to conform to an increasingly confining house style; the word “nice”, for example, has been banned, and examiners have been told never to rely on “gut instinct” but to adhere rigidly to a set of criteria built on such innocuous phrases as “very good”, “good”, “reasonable” and “limited”, each of which automatically triggers a particular result and averts customer criticism.

There is plenty for Ms Morgan to mull over.  Competing boards can strive to outdo each other in attracting custom not by “dumbing down” but by positively encouraging a rise in standards and an enrichment of the customer experience; yet private companies and unaccountable organisations can also destroy the good work they do through managerial incompetence over which nobody outside the organisation has any control.
