Quality in Schools: Evaluation and Appraisal (1985)

This DES publication was based on surveys by Her Majesty's Inspectorate of practice in a small number of schools and LEAs.

The complete document is presented in this single web page. You can scroll through it or use the following links to go straight to the various sections:

Introduction (page 5)
Terminology (7)
Nature of the enquiry (8)
Some school practices (9)
Some LEA policies and practices (32)
Recent developments in teacher appraisal (43)
Conclusions (47)
Aide-memoire used by HMI in relation to school self-evaluation (49)
Aide-memoire used by HMI in relation to teacher appraisal (51)

The text of Quality in Schools: Evaluation and Appraisal was prepared by Derek Gillard and uploaded on 11 April 2007.


Quality in Schools: Evaluation and Appraisal (1985)

London: Department of Education and Science 1985
© Crown copyright material is reproduced with the permission of the Controller of HMSO and the Queen's Printer for Scotland.


[title page]

Department of Education and Science


Quality in Schools:
Evaluation
and Appraisal






London:
HER MAJESTY'S STATIONERY OFFICE


[page 2]


© Crown copyright 1985
First published 1985






ISBN 0 11 270576 6


[page 3]

Contents

Paragraph

Introduction  1
Terminology  9
Nature of the enquiry  12
Some school practices  17

PRIMARY SCHOOLS  17

Whole school evaluation  18
    Purposes  18
    Information and judgements required  22
    Methods  27
    External support  32
    Written reports  33
    Effectiveness  34
Staff appraisal  38
    Purposes  39
    Scope  40
    Methods  41
    Appraisal of senior staff  43
    Results  44

SECONDARY SCHOOLS  47

Whole school evaluation  50
    Purposes  50
    Scope  55
    Criteria for judgement  57
    Methods  58
    External support  63
    Written reports  64
    Outcomes  66
    Effectiveness  67

[page 4]

Paragraph

Some school practices: secondary schools continued
Staff appraisal  70
    Purposes  70
    Scope  74
    Criteria  77
    Introduction of the schemes  78
    Methods  79
    External support  82
    Written reports  83
    Outcomes  84
    Effectiveness  86

Some LEA policies and practices  89
    Purposes  92
    Scope  95
    Methods  100
    Outcomes  109

Recent developments in teacher appraisal  123

Conclusions  135

Page

Annexes  49
A  Aide-memoire used by HMI in relation to school self-evaluation
B  Aide-memoire used by HMI in relation to teacher appraisal  51


[page 5]

Introduction


1. In recent years schools as well as Local Education Authorities have become increasingly aware of the need to examine carefully, critically and systematically the education they are providing, in order to satisfy themselves and others that it is relevant and appropriate to the needs of young people towards the end of the twentieth century and that it is being delivered as effectively as possible, in a manner consistent with its aims. This awareness has been brought about by a number of influences operating on four main areas of school life: the curriculum, organisation, resources and teachers.

2. Interest in the curriculum of the compulsory school years accelerated in the late 1970s and was marked by the publication of the HMI Curriculum 11-16 working papers (Red Book I) and A view of the curriculum, the Schools Council's The practical curriculum and the DES document The school curriculum. In recent months, the DES document The organisation and content of the 5-16 curriculum, the HMI publication The curriculum from 5 to 16 and the White Paper Better schools have directly addressed the debate about agreed objectives for the curriculum. The growth of large schools catering for all abilities, offering a range of curricular choices and requiring a more sophisticated management, allied with the findings of the HMI primary and secondary surveys, showed the need for more effective organisation.

3. The calls from central and local government and parents for schools to be more accountable, particularly in times of falling rolls and pressure on resources, have increased the desirability of evaluating the performance of the school as an organisation. The White Paper Teaching quality declares an aim to 'make the best use of available resources to maintain and improve standards in education' and sees the teaching force as the 'major single determinant of the quality of education'. It goes on to argue that, for this force to be managed effectively, accurate knowledge of teacher performance is needed: knowledge based on assessment. The recent White Paper Better schools holds to that view, based on the belief that knowledge of teacher performance results in teachers 'being helped to respond to changing demands and to realise their professional potential'.


[page 6]

4. There have been a number of responses to all this but in relation to school evaluation and teacher appraisal they have been on two levels. A number of LEAs have instituted schemes based on self-evaluation by their schools, looking mainly at organisation, the use of resources and curriculum arrangements. Until recently there has been no move towards systematic teacher appraisal by LEAs though this is now being pursued actively by some authorities. (See paragraphs 123 to 134). A preliminary survey conducted by HMI in September 1983 indicated that there was self-evaluation activity in at least 56 LEAs although only 11 had mandatory policies requiring evaluation and the production of written reports. In some cases self-evaluation was monitored by means of LEA inspections and visits. The position is constantly changing as more schemes are initiated.

5. In addition to LEA schemes there has been considerable activity in individual schools. The starting point for this work has most often been the curriculum, with the focus on issues of organisation, management and resources; this latter often stimulated by training courses for heads and senior staff. Occasionally evaluation has included such areas as subject content and teaching methods. A number of schools have also instituted staff development policies, in some cases including systematic staff appraisal.

6. Both schools and LEAs have benefited from development work done by external agencies - the Schools Council, the Open University, the Council for Educational Technology, several university and polytechnic departments of education - and from the consideration of relevant industrial practice. The last of these has borne particularly directly on performance appraisal.

7. The White Paper Teaching quality also said that 'HM Inspectors are collecting evidence about the extent and effectiveness of practices for teacher assessment and self-evaluation in schools, and will make this evidence more widely available'. HMI have studied at first hand a small number of school-based schemes for institutional evaluation and staff appraisal, examined their origins, their purposes and scope and their methods of operation, and made some assessment of their relative effectiveness. Seven local authorities with different approaches were also visited to examine how their policies are perceived and implemented by the various participants.

8. The report of the findings of this work is in four main sections. The first deals with individual primary and secondary school schemes, the second with LEA schemes, the third describes some recent LEA initiatives and the fourth discusses some of the issues arising and offers some tentative conclusions. Education observed 3 is also concerned with related questions about the criteria for teacher appraisal: it summarises from HMI's published writing what has been said about the contribution of teachers to quality in schools.


[page 7]

Terminology


9. A number of terms recur frequently in LEA and school documentation; eg appraisal, assessment, evaluation and review. There is, however, some inconsistency in the ways these terms are used. For the purpose of this report the following distinctions are made:

i evaluation is a general term used to describe any activity by the institution or the LEA where the quality of provision is the subject of systematic study;
ii review indicates a retrospective activity and implies the collection and examination of evidence and information;
iii appraisal emphasises the forming of qualitative judgements about an activity, a person or an organisation;
iv assessment implies the use of measurement and/or grading based on known criteria.

10. A further distinction is made, in the context of this report, between staff development and staff appraisal. Staff development is concerned with general matters of in-service training needs and career development and may be based on staff appraisal. Staff appraisal involves qualitative judgements about performance and, although it may start as self-appraisal by the teacher, it will normally involve judgements by other persons responsible for that teacher's work - a head of department or year, the head teacher, a member of the senior management team or an officer of the LEA. This appraisal may well (and usually does) include the identification of professional development needs.

11. At present there appear to be no fully established models of teacher appraisal, initiated by LEAs, which have gone through the full cycle of assessment, staff development and follow-up action as appropriate. Thus, while events are now moving quickly, it is not yet possible, in England, to evaluate a mature system of teacher appraisal. Nevertheless the procedures described in this report, whether at school or LEA level, illustrate both some of the possibilities and some of the problems. In paragraphs 135 to 148 a number of issues are identified which will need attention in further development work. There is also an attempt in this section to identify elements of good practice in the procedures observed: what is and what is not effective and what appear to be the necessary ingredients for a successful and useful evaluation/appraisal exercise.


[page 8]

Nature of the Enquiry


12. Self-evaluation activity in schools and, even more, teacher appraisal, are not distributed evenly across the country and therefore it was not possible to identify a nationally representative sample of schools.

13. In the autumn of 1983 enquiries were made through HMI district inspectors about LEA policies and practices in respect of evaluation and appraisal. These enquiries also sought to identify schools where initiatives were known to have been taken. A few schools, hearing of this enquiry, identified themselves.

14. In 1984 visits were made to eight primary, one middle and twelve secondary schools where some activity in the broad field of self-evaluation/teacher appraisal was known to have been undertaken or in progress - usually, although not always, on the school's own initiative. In most cases two to three HMI visited for two to three days.

15. A further set of visits was based on six LEAs where it was known that the authority had an interest in, although not always a policy with regard to, evaluation and/or appraisal. Following discussion with officers of each authority a small sample of schools identified by the LEA was visited. Altogether a further 29 primary, six middle and 23 secondary schools were visited, usually for one day. Locally based HMI followed up mandatory self-evaluative exercises in a seventh authority and findings from their visits are incorporated in the report.

16. In each of the discussions with LEAs and school visits enquiries were based on one or both of two aide-memoires relating respectively to institutional self-evaluation and teacher appraisal. These are reproduced at Annexes A and B. They seek comment on the perceived purposes of the activity, the information and judgements required and the methods used, and some assessment of effectiveness.


[page 9]

Some School Practices


PRIMARY SCHOOLS

17. The eight primary schools and the middle school visited in the spring term of 1984 were drawn from eight local education authorities. The primary schools comprised an infants' school, a first school (5 to 8 years), three infant/junior schools (5 to 11 years) and three junior schools (7 to 11 years). Three of the schools also had nursery units. The numbers on roll, excluding the nursery units, ranged from 73 to 507 (the middle school). Staffing (including the head) in these schools came within the range 9 to 12.5 teachers, except in the middle school, with 22 staff, and two smaller primary schools which had 4.5 and 6 teachers respectively.

Whole school evaluation

Purposes

18. The schools represent a variety of approaches to the question of evaluation and appraisal and different stages of development. Five schools had devised their own schemes; one was part of a School Curriculum Development Committee project on school evaluation, Guidelines for Review and Internal Development in Schools (GRIDS), though it had been involved in evaluative activities prior to this; one had been in a joint Schools Council/LEA scheme; one had been through the process of quadrennial self-evaluation as required by the LEA; and one had followed the guidelines contained in the NUT document A fair way forward. The priorities within each school were different but, in general terms, three were mainly concerned with the review and evaluation of aspects of school organisation and curriculum, three were concerned with staff development or appraisal, and three schools combined both approaches, though not giving them the same emphasis.


[page 10]

19. The initiative for evaluation generally stemmed from the head, the one exception being the school which was responding to an LEA requirement for self-evaluation. Two schools had agreed to take part in national projects relating to school evaluation, but in both of them there was a previous history of such activity within the school and the heads saw involvement as a means of extending, or making more effective, what they were already doing.

20. In five cases the purpose of undertaking school evaluation was explicitly perceived as to improve the functioning of the school and the quality of education received by the pupils. In the school where evaluation was an LEA requirement it was seen as a task to be completed and was linked with accountability, though this was not stated explicitly. Improvement in the functioning of the school was a secondary purpose, though it was seen as important by the staff. With the exception of this school, overt concern about accountability to the LEA played little part in motivating evaluation. Although another LEA asked individual schools to prepare reports for governing bodies on each area of the curriculum (the papers for which eventually found their way back to the Authority), this requirement did not influence the way in which the school carried out its evaluation and development. None of the heads rejected the notion of accountability, although two heads defined this as being morally and professionally accountable, not so much to the LEA as towards pupils, parents and the local community.

21. For schools undertaking evaluation as opposed to instituting staff development, the activity was not generally a response to perceived problems. In some cases heads who had been in post for five or six years saw a need to reflect on what had been achieved before going further. In others recently appointed heads saw an opportunity to take stock before instituting changes. For some schools evaluation was part of normal practice.

Information and judgements required

22. In establishing general and specific aims for evaluation the heads were crucial figures. Even so they did not act alone. Typically they would establish, with the help of deputy heads and/or post holders, a long list of areas to be considered which they then discussed with their staff in order to establish agreed priorities for investigation. For the school following the GRIDS scheme this process was formalised in the 'initial review'. The involvement of teachers in this process was seen as essential in securing their commitment to the evaluation and their understanding of its purpose; and in establishing a realistic scale for the operation. In the school which was required to conduct a self-evaluation exercise, the LEA provided a booklet outlining the areas to be considered. The purposes appeared not to have been discussed or discerned clearly by the staff, who were vague about the whole exercise and were concerned mainly with the end product - the report to the LEA.


[page 11]

23. Schools identified areas for evaluation in different ways. The school taking part in a formal LEA scheme had guidelines which identified specific areas and, for the school taking part in the GRIDS scheme, there was a process which enabled it to identify its own priorities and hence to select three areas (science, PE and assessment procedures) which it could tackle in some depth. Two schools already had some history of evaluation over a number of years and the organisational structures, staff and section meetings and curriculum review groups allowed issues to arise naturally as part of a continuous process. One school with a new head concentrated on specific curriculum areas (maths and music) while in another school the head decided that after a period of five years it was time to have a general review of her school's efficiency and effectiveness.

24. There was considerable variation in the way in which information was collected to enable evaluation to take place. In the two schools with formal schemes, guidelines were precise. In the first all areas of school activity were divided up, information gathered and collated by the deputy head to form part of the main report. In the second the areas for investigation were more specific; a review group drawn from interested teachers gathered information, made judgements, and formulated recommendations to the head for future action. For the schools which were used to evaluation the information gathering stage was less formal and detailed. Topics were raised on the agenda of part and whole staff meetings, information was gathered orally at the time, and little was committed to paper at this stage; documentation was largely reserved for the explanation of new policies. One school collected general subjective impressions through questionnaires and through direct observations both internally and in other schools. In the school which was concentrating on specific subject areas, information was collected by the head as a result of discussions between the teachers and college of education teachers who had been invited by the head to be outside consultants following work in the classroom. Teachers kept copies of the notes made, in special files.

25. The emphasis in evaluation was largely on inputs - organisation and curricular arrangements - and on the institutional effectiveness of these. In the case of the school operating a local authority evaluation scheme, the emphasis was on the low level description of arrangements, with relatively little evaluation of their effectiveness. The school operating the tightly structured GRIDS scheme with more closely defined targets was able to concentrate more on effectiveness, and produce proposals for change. A school attempting an evaluation in a less formal way placed relatively little emphasis on the analysis of organisational, curricular, staffing and resource inputs and was more concerned with making general judgements about the effectiveness of the school. In three schools more emphasis was given to processes and outcomes affecting pupils. In two of them there was regular group discussion of the effectiveness of current schemes and practices, with reference to pupils' work and response. In a third the emphasis was on looking at methodology and pupils' work, enabling the school to discover, for example, pupils' difficulties in understanding place value in mathematics.


[page 12]

26. Although all the schools were to a greater or lesser extent involved in making judgements about quality and effectiveness, in none of them were the criteria for these judgements explicit and detailed. One school felt that by discussing the effectiveness of work programmes over a number of years the staff had reached some implicit consensus about criteria. It was broken when a large number of staff moved, leaving the school with the task of building a fresh consensus. One school produced a paper trying to define excellence as it applied across the curriculum; another asked staff to produce examples of 'good' work which were then discussed at group meetings. Where they existed such criteria were general and intuitive and not necessarily agreed.

Methods

27. All schools accepted that evaluation must be a continuing activity. In the school which was required by the LEA to produce a whole evaluation report every four years the teachers were not clear what was supposed to happen in between reports. Another school foresaw repeating the process every two or three years. Others had formal or informal rolling programmes of evaluation, either because this was the regular pattern of activity, or because a number of issues had been identified which could not be tackled all at once.

28. For two schools the structure of the evaluation process was very detailed. In the one where the LEA suggested the areas to be covered by evaluation, the deputy head devised aide-memoires using mainly the LEA's self-evaluation booklet and working parties submitted individual reports on specific topics which were then drawn into a single report and redrafted by the head for presentation to the governors. The second school used GRIDS, first in an initial review to determine those aspects of school life which merited closer attention; and later, when three topics had been chosen on the basis of the questionnaire and subsequent staff discussion, for the specific reviews which analysed and evaluated current practice in those areas and made recommendations for change. In no other school was the structure as clear and detailed as in these two examples. In a third there was a general structure which stipulated the five areas to be covered. These were: teacher self-appraisal, the effectiveness of the school as a whole, the work of the junior classes in the comprehensive school to which pupils transfer, some aspects of the welfare support system and, for post holders, a review of existing schemes of work. While teachers were given a week away from their own classes to accomplish these tasks (which included classroom observation) the organisation and criteria were only sketchy, so that, for example, the section relating to the effectiveness of the school as a whole was reduced to five questions which, in effect, reiterated the question, 'Are we efficient as a school?'

29. One school which had been involved in a Schools Council/LEA evaluation project had started by having detailed guidelines, but gradually these had been internalised and reduced to items on an agenda for discussion,


[page 13]

with little emphasis on documentation. Another school had identified the areas for evaluation and development, but beyond that had no particular structure, other than that which was created by the head as the project proceeded. A further school had developed the practice of regularly reviewing in staff meetings the effectiveness of current planning against pupils' response, including an examination of quality in pupils' work.

30. The origins of the various procedures varied. In the case of the LEA scheme, an agenda for evaluation and to some extent the means of tackling it originated outside the school, although the use of the LEA guidelines for self-evaluation was not mandatory. GRIDS equally originates outside the school. It encourages the school to select its own agenda and offers a framework for this. Elsewhere the methods were evolved by the schools themselves, though there was frequent mention of the influence of courses and higher education institutions as a source of ideas and also of practical help.

31. There was a general involvement of teachers in all schemes. In most cases they were involved in discussions before the project began and were party to a decision to proceed. In larger schools, with wide-ranging schemes, the normal practice was to set up working parties with teachers volunteering for the one which most interested them. Where they existed, subject post holders led working groups in their areas of responsibility. In schools with only a small staff it was usually necessary to work collectively. The main method of working was by discussion. Direct observation of classes played little part although in one school teachers were given the time to observe classes in other parts of the school, and also the junior classes of the local comprehensive school, to help them to make judgements about their own performance. In another school the involvement of lecturers from a local college of higher education and an advisory teacher working in the classroom alongside the class teacher led to an analysis of classroom methodology.

External support

32. With the principal exception mentioned in the preceding paragraph there was little sustained close involvement of consultants or advisers from outside the schools. In the example quoted, they worked alongside the school staff, trying out materials and methods and contributing to both evaluation and the development of new ideas. In another school which had been taking part in a Schools Council/LEA project there was access to a management consultant, who acted as a trained observer, and the involvement of the head in the year long course at a college of higher education led to visits by the staff of the college. It also put him in touch with other heads, forming a supportive network of 'critical friends' who paid termly visits. Both schools mentioned above found their links very helpful. At the school operating the LEA scheme, the project began with a day conference at which discussion was led by the school's adviser. During the process a number of advisers visited the


[page 14]

school. Once the evaluation was complete there were other visits by advisers to help the school look towards the next phase. The school using GRIDS had approached a local science adviser and a lecturer from the college of higher education to help with a specific review of the science curriculum as observers and consultants. Another head had consulted local advisers before beginning school evaluation. In general however LEA advisers were not seen to be closely involved in the process of self-evaluation; and one school said quite firmly that it did not see LEA advisers as evaluators. There was little involvement of parents, employers, or the community in the process.

Written reports

33. Written reports were prepared in some of the schools for various purposes. In one school the LEA scheme required the presentation of a report to the governors and the LEA. It covered the whole of school activity and placed the major emphasis on the description of resources, organisational patterns, staff numbers and curriculum arrangements, rather than on effectiveness and pupils' learning. Judgements were of a general nature, and in the less important areas. The school involved in GRIDS produced a report on the operation of the initial and specific reviews which went to other schools in the project and to the LEA, though this report was as much a report on the pilot project as on the actual evaluation done by the school.

Effectiveness

34. Relatively few school initiatives can be identified as stemming directly from the process of evaluation. Most frequently mentioned was the writing or revising of subject guidelines or schemes of work to take account of skills, continuity and progression with the aim of bringing about improvements in pupils' learning. As a result of planned visits by its teachers, one school was able to develop better links with the comprehensive school to which its pupils transferred. In the school taking part in the LEA scheme a timetabling group and a curriculum review group were set up in response to some perceived shortcomings.

35. Quantifiable improvements in teaching and in pupil performance were few. Most head teachers thought that teaching and learning had improved in various ways but it was impossible to verify this. The general inspector of one school, who had known it for a number of years, was able to confirm 'a notable growth in the teachers' ability to work as a team sharing each other's expertise and exchanging frank comments'. The school using GRIDS had critically reviewed its science provision and given it a more coherent structure, building in progression and differentiation; and these changes were already having some observable effect on teaching and learning. The school which had reviewed closely two areas of the curriculum had, with the help of outside consultants, and by careful analysis of teaching methodology and


[page 15]

pupil performance, been able to identify and give attention to areas of weakness.

36. In all the schools the teachers gave up a considerable amount of time during the lunch hour and after school to hold discussions, write papers and review syllabuses and schemes of work. There were claims of increased confidence, professional awareness, and readiness to work co-operatively and share ideas, and a greater willingness to be self-critical. One possible consequence in two schools has been an increased turnover of teachers on promotion. Participation has helped to make them more competitive in job applications; this was also reported by the heads of some secondary schools.

37. All the schools appear to have gained something from the exercise and feel that it has been worthwhile. Irrespective of the balance of description and evaluation in the report itself, the process promoted professional discussion among staff; brought frustrations and disagreements to the surface; gave a voice to more members of staff; and enabled them to see what the priorities were for the next stage: for example, the need to concentrate on the curriculum and teaching methods and on matching the work to the children's potential.

Staff appraisal

38. In eight of the nine schools there was some form of staff development or staff appraisal policy. In five of them the emphasis was on the former rather than the latter. In no case was there formal appraisal of teachers by the head which included documentation, criteria for judgement and follow-up action. In the school operating the LEA self-evaluation scheme, there were sections for the head and for individual teachers to ask themselves questions about aspects of performance, but no response was required in the formal written report. Although some teachers did respond to the head in writing there was no follow up. The school using the NUT document A fair way forward was invited to take part by an LEA adviser but no written response was required and the process did not involve teacher appraisal, concentrating rather on career development and INSET needs.

Purposes

39. The main purpose of the activities was to attempt to make teachers think more about what they were doing, to encourage them to take a greater interest in their careers, and to develop their skills through self-appraisal and subsequent INSET. In this way it was hoped both to assist the career development and professionalism of teachers and to enable the quality of work in schools to improve. One head teacher saw a staff development exercise as a means of initiating discussion and of overcoming the notion that there was no room for improvement within the school.


[page 16]

Another scheme arose out of a head's concern to develop the skills of an inexperienced staff appointed on the opening of a new school.

Scope

40. The scope of most staff appraisal was restricted either in the areas covered or in the detail required. One head aimed to assess many aspects of a teacher's performance during her regular classroom visiting, relying on post holders for a detailed appraisal in specific subject areas. Areas for appraisal were variously defined. In one school the teachers were asked to assess their strengths and weaknesses in terms of their relationships with pupils, parents and other teachers and their own performance in the classroom, including planning, marking, use of display, apparatus etc. In another teachers were asked general questions about their satisfaction with their current post and the level of pupil progress, their reaction to the INSET they had undergone and their future ambitions and INSET needs. In a further school appraisal was directed at specific areas of the curriculum under review: lesson content, methodology, pupil understanding and the use of resources. In none of these schools was there systematic recording of the information gathered.

Methods

41. In four of the eight schools the starting point was some form of self-appraisal by teachers: either the production of an agenda for a meeting with the head or the completion of a brief questionnaire. In the other cases the starting point was appraisal by the head and either a deputy or a post holder. In five cases there was an interview with the head, to discuss progress and, where applicable, the teacher's own self-appraisal.

42. There was little involvement of people from outside the school; but one school whose primary concern was curriculum evaluation used an LEA advisory teacher and college of education lecturers to work alongside teachers and evaluate work in the classroom.

Appraisal of senior staff

43. In only one school was appraisal of the head and/or deputy head mentioned. In order to facilitate the introduction of whole school evaluation, the head offered to be appraised along with her staff. She did this by circulating a questionnaire about her performance. Teachers responded in writing and their answers were collated by the head and circulated. Six [sic] questions were asked.

What should her priorities be for the coming year?
Did she keep a broad spectrum of the school in view?


[page 17]

Did she distribute resources fairly?
Was she sufficiently available and interested in teachers' professional needs and personal welfare?
Was she sufficiently aware of the load teachers carried in the classroom, in respect of voluntary activities and in helping children and colleagues?

Staff responded thoughtfully, positively and honestly. As a result the head reassessed how she could be more even-handed in respect of the junior and infant departments.

Results

44. In general any judgements made at development or appraisal interviews were communicated orally. There were few examples of written reports being used. In one school a post holder signed a new job description which included commitments for the coming year. In another the head kept for one year - until the next interview - a record of the interview proforma on which the teacher had indicated satisfactions, dissatisfactions and in-service experience during the current year, and aims and INSET needs for the following year. One head kept records of work done by teachers from the time they started at the school. In all other cases exchanges were oral and it is not possible to say what element of appraisal they contained, though teachers who were interviewed spoke very positively about their value. One head who had been carrying out appraisal for a number of years felt that written reports could make teachers hostile to the interview process.

45. Little specific use was made of staff development interviews except to identify INSET needs and, in one or two cases, to enable a member of staff to adjust his/her role in the school. The main outcomes of the exercise, claimed by all schools, irrespective of the type of staff development or the nature of the school, were increased confidence on the part of the teachers and a greater willingness to talk together in professional terms. In one of these examples, the head identified through appraisal the promotion potential of a particular teacher, but noted her narrow experience. Arrangements were made for the teacher to gain wider experience in all age ranges, including nursery, so that after a suitable interval she was offered a scale post.

46. In some schools it was said that the teachers developed a greater capacity and willingness to be self-critical about their work. In the school where the appraisal of classroom performance was linked closely with curriculum development there were identifiable effects on pupil learning. Specific problems in maths had been overcome; there was a more appropriate choice of work, and more practical involvement in maths and music; neglected areas of the curriculum were defined and more attention given to them; lesson preparation had become more detailed. However, unless there is some training in the techniques of self-appraisal, the assessment of classroom practice may be limited to checking the teachers' perceptions against those of the head.


[page 18]

SECONDARY SCHOOLS

47. Twelve secondary schools were visited during the spring of 1984, 11 of them by teams of two or three HMI for periods of two or three days, depending on the size of the school. The other school, which had been the subject of a recent full inspection, was visited for one day by one HMI.

48. The schools came from 11 LEAs and, apart from two secondary modern schools, were all comprehensive. Five (including the two secondary modern schools) were 11 to 16 schools, six were 11 to 18 schools and one was a 13 to 18 high school. They ranged in size from 650 to 1,500 pupils, with an average size of 1,020.

49. Of the 12 schools 10 had undertaken whole school evaluation and six had staff appraisal schemes. Four of the schools included both whole school evaluation and staff appraisal.

Whole school evaluation

Purposes

50. The generally expressed purpose of evaluation in the ten schools was to improve the functioning of the school and the quality of education, though one school which had been at a very low ebb saw the prime purpose as building the self-confidence of staff and improving work in the classroom by involving them in curricular discussion. In almost all cases the initiative came from the head. Two schools had been invited to join the HMI/LEA curriculum 11 to 16 exercise, though one of them had already been engaged in evaluation prior to this. In no case was evaluation seen primarily as an overt exercise in accountability to the LEA or the governors. One school was influenced in the way it tackled evaluation by the need of the LEA to respond to DES Circulars 6/81 and 8/83. On the other hand a majority of schools saw evaluation as a professional responsibility and these tended to have review mechanisms which were either part of the organisational structure or were set up specifically to identify issues for review. In four schools evaluation was, at least in part, a response to concern about the appropriateness of the curriculum; the relevance of the current curriculum in times of high unemployment; the match of the curriculum to the school's aims; and, in one case, the need of the school to provide a relevant curriculum for its particular range of ability as a secondary modern school.

51. Priorities and objectives for evaluation were established in two ways. In the five larger schools there were policy review or planning groups, varying in size and constitution, which identified areas of current or potential concern, established priorities and set up specific working parties to carry out reviews and suggest changes. In one school a corporate action plan was drawn up on


[page 19]

an annual basis and reviewed at the end of the year. In a school which was taking part in a national project (GRIDS) the process was formalised by the initial review, which asked all staff to look at all aspects of school life in order to identify the areas of most urgent concern, which were then tackled by specific review groups. Although the initial review involved all staff in discussion, the system had some drawbacks in that teachers varied in the experience and information they had, on which to base judgements about priorities. School consultative procedures could become over-elaborate, as happened in one case where the planning group, though it instigated some useful evaluative activity, was sometimes bypassed because of the length of time it took to bring a project to fruition.

52. There would appear to be two benefits of systems which involved staff in establishing priorities for review:

a. there was general agreement about the purpose of the exercise; and
b. staff were able to concentrate their energies on a restricted number of projects likely to yield results within a reasonable time span rather than dissipate them by trying to cover all aspects of school life at the same time.

53. In five smaller schools the head was more obviously central in decision making and in communicating objectives to heads of department and the staff as a whole, either orally or by means of papers. In some cases heads overestimated the extent to which their purposes were understood; communication was not always adequate and priorities were not always clearly established. In one school, working parties multiplied through the initiatives of the head and interested staff; but some were faltering because there was a lack of coherent planning and purpose and staff efforts were being dissipated. In another school undertaking curriculum evaluation there was insufficient discussion and documentation about whole school curricular issues; the initiatives were at departmental level and the teachers' perceptions of the exercise varied. At a small school, starting from a low baseline of morale and self-esteem, the head asked curriculum leaders to identify areas where they thought there was a need for development and where they would feel confident; it was then left to them to establish aims, objectives and methods of inquiry. Some of the topics chosen were modest and limited in scale. The teachers involved in individual projects were clear about their purpose, which was to meet a perceived need. For the head there was the further purpose of increasing staff confidence and developing the habit of reflection, so that other more difficult issues could be tackled. This approach would possibly have been too low key in a school more accustomed to evaluation, but it was appropriate for this particular one.

54. At the time of the inquiry only one of the schools was attempting to review all aspects of school policy. One school had restricted itself largely to curriculum evaluation in relation to specific subject areas, though some work had been done on the role of the tutor. Otherwise schools were evaluating a greater or smaller number of areas of activity, identified by the head or planning groups and drawn from the whole spectrum of school life.


[page 20]

Scope

55. In few of the schools was the emphasis entirely on organisation and planning. In one case where this was so, it was recognised as a first, limited stage in the process. Where the main emphasis was on planning it was usually combined with an examination of the effectiveness of current practice, as in the school which was looking not only at systems of control and communication, but also at the motivation of pupils and the effectiveness of classroom practice. Where a school was appraising its curriculum the exercise was usually subject orientated and there was considerable variation in the practice and effectiveness of evaluation even within the same school. Some departments made considerable strides; others appeared, relatively, to have made little progress. In a school which was looking at the relevance of its fourth year [now Year 10] curriculum, the starting point was the observation and analysis of all work done in fourth year classes and this led to developments in individual departments. In another school the science department examined its teaching methods and their relationship to departmental syllabuses. A number of schools looked at the effectiveness of their policies for English, maths and health education on a whole-school, cross-curricular basis.

56. While the main emphases of school evaluation were on organisation, planning and their effectiveness, some schools, particularly those which were concerned mainly with the curriculum, were turning their attention to the effect of their policies on pupils' learning. One school was beginning to look at pupil performance as measured not only by external examination results but also by internal assessment practices. Two schools were starting to develop assessment procedures which were related more directly to departmental objectives. Frequently the most detailed work was done at departmental level; the modern languages department in one school and the science department in two other schools were examples of this.

Criteria for judgement

57. In more than half the schools there were no explicit criteria for the judgements being made about the effectiveness of the institution and the education being offered, particularly in issues relating to pastoral care. Elsewhere there was considerable variation. In one school general criteria were stated in the staff handbook and more specific criteria were produced by subject departments, sometimes collected together, as in the school handbook on departmental assessment policies. In another school, although criteria had been established by some departments, the school as a corporate body had not moved forward to establish principles for a review of the whole curriculum. The subject departments in this school and in another in the same authority relied much on the work of the HMI/LEA 11 to 16 curriculum project which defined eight areas of experience as a basis for curriculum


[page 21]

planning.* This second school had devoted considerable energy to identifying and detailing criteria.

Methods

58. Relatively little of the collection of information was done on a whole-school basis. In the school following a national project (GRIDS) the initial review required a response from all staff. Once priorities for review had been established, specific review was carried out through the appropriate working group or sub-committee within the school. Another school required a written response from all staff after discussion in departmental and year groups; this was fed back to the head who then produced a report based on the responses. One school went to considerable trouble to collect comprehensive data of a high order both from within the school and from the LEA, so that comparison could be made with the performance of other schools with regard to specific issues: it represented an attempt to get away from unsupported assertion and to base planning and judgements on sound information. In general most of the work was done by standing committees or working parties set up to look at specific issues and, most commonly, by subject departments. (In one school with a large number of activities these groups were sometimes very small.) Information collected at this level was either used to complete a whole-school picture (eg internal assessment practice) or was complete in itself and was transmitted to the senior management of the school in the form of recommendations. In one or two cases this system did not work well: in one school, departments were selective about what they transmitted to the head; in another, findings went to the head but not to other departments; and in a third the receipt of information by the senior management team did not result in any further action being taken.

59. Most of the schools saw self-evaluation as a continuing process rather than as a single exercise.

60. In one the incoming head had conceived the idea of a one year review of school policies terminating in a staff conference to consider recommendations. Implementation was in the hands of a curriculum committee and for some staff the exercise ended with the conference, though for others it was only just beginning. In four schools there was a continuous rolling programme of evaluation built into the organisational structure of the school. In one school priorities were drawn up on an annual basis; in another departments and pastoral systems had built-in evaluation independent of the head's initiatives; in two others there were mechanisms for raising issues when the need arose. Two schools were more difficult to categorise; activities were, to a large extent, uncoordinated and unconnected (though not necessarily

*The eight areas of experience were identified as: the Aesthetic/Creative, the Ethical, the Linguistic, the Mathematical, the Physical, the Scientific, the Social and Political, and the Spiritual.


[page 22]

unrelated) apart from the requirement that departments redraft schemes of work according to a specified model. In both these schools there was some confusion about the purpose of the exercise and there was also a large number of projects. There were also signs that some of the groups were faltering and that the findings of others were being neither fully discussed nor implemented.

61. All the schools had created some structure to enable them to carry out evaluation. In the more highly developed schemes there were planning and coordinating groups which provided a framework within which departments and specific working groups operated. The school following a national scheme had precise instruments: questionnaires and advice schedules for establishing priorities for review, for investigating these and for drawing up and reviewing a development plan. This structure is very thorough and may be excessively detailed for schools with well established machinery for evaluation, as was the case in the particular school visited. For the schools participating in the HMI/LEA 11 to 16 curriculum project there were demanding schedules for different subject areas, drawn up by advisers and teachers from the pilot schools. In one of these schools some departments found the schedule too demanding and gave up, while other departments in the same school made considerable progress.

62. Most of the detailed work of evaluation was done by departments or specific groups within the schools and these tended to devise their own structures and instruments to meet their particular needs. Extensive use was made of aide-memoires and questionnaires, either specially devised or based on existing documents originating in the school, the LEA or elsewhere. Several schools discovered that an unduly enthusiastic over-use of questionnaires could be counterproductive, and one school which tried to use the IMTEC/NFER* draft questionnaire in parallel with schedules met with staff resistance. One school, where there was concern about the relevance of the fourth year curriculum, arranged that the senior management should observe fourth year lessons. In another school, a teacher followed a group of children from class to class for a week in order to obtain a view of a pupil's experience of the curriculum. In a school where the English department was given responsibility for identifying the language skills which children needed in their everyday lessons, English teachers visited classes in different subject departments in order to analyse the demands being made on pupils. Another school analysed the teaching methods being used in mixed ability classes. In a school where the head appraised the performance of subject departments biennially, classroom visiting, preceded by a briefing of the department and followed by a written report, played an important part in the process.

*International Movements Towards Educational Change/National Foundation for Educational Research.


[page 23]

External support

63. In all schools there was considerable involvement of teachers both as individuals and as members or leaders of particular review groups. The time and energy required were considerable and there was evidence in several schools, particularly those with a large number of concurrent working parties, or where there were few observable outcomes, that some teachers were reaching saturation point and losing interest. There were no specific examples of direct parent involvement, though one school took care to keep parents informed through parent evenings. In another school information about parents' and pupils' views was used to generate theses for debate. In four schools there was some additional involvement of pupils. In two of them the pupils were asked for their views of the curriculum and of work in school, and in one of these the pupils were also asked for their views of the examinations they took. In a third school the pupils were asked to rank the aims of the school in order of importance - first indicating what they themselves thought and then arranging them in the order they thought the staff would choose. A fourth school investigating homework asked pupils to complete an attitude questionnaire. No use was made of external observers or consultants, apart from one school which asked advice from the personnel training manager of a public corporation, when drawing up its staff appraisal procedures. One school mentioned the involvement of the local polytechnic in collating the responses to questionnaires which formed part of the homework review. The two schools involved in the 11 to 16 curriculum review had had help from the LEA advisers and the HMI associated with the project. Two other schools mentioned some involvement of LEA advisers.

Written reports

64. No school of the 12 visited in the spring term produced a written report covering all the school's evaluation activity. One school involved in the 11 to 16 curriculum exercise had produced a report for the LEA dealing with that part of its activities, and another was in the process of doing so. In all the schools individual working parties produced papers, recommendations, plans and structures, mainly for internal use. In four of the schools papers were sent as a matter of course to the LEA and in one they were sent to the governing body. In the other schools there was either no policy, or the papers were available only to those who asked for them. Such papers as were produced contained little evaluation; there was no reference to teacher performance and only two references to pupil outcomes. What they did contain was some limited description of present practices and, mostly, detailed planning for the future. In some of the schools, evaluation seemed to consist, not of reviewing present practice and making necessary changes, but of choosing a particular issue and going back to first principles.

65. The time scale of the exercise was difficult to establish. Most of the schools had some form of rolling programme for at least part of their activity


[page 24]

with particular topics having different starting and finishing times. It would seem that one year was the minimum time from identification of a topic for evaluation through the process of review to the initial implementation. One school was engaged in curricular evaluation for two years, another had been involved in a national project for nearly three years. In one school it was reckoned that any one major topic took 18 months to reach the implementation stage. One of the main reasons for this was that evaluation exercises were additional tasks, carried out after school and at lunch times, and they had to compete with other after school meetings and other matters perceived as more urgent.

Outcomes

66. In all the schools there had been some action as a result of evaluation; the amount varied according to the level of evaluative activity but it was almost always significant and in some cases considerable. Three schools could claim seven separate initiatives. In a few cases changes were organisational (from a faculty to a departmental system), or administrative (notes for parents about homework policy, a rota for break time supervision). In one school an elaborate staff appraisal scheme was set up as a result of evaluation. In most instances, however, development was concerned with the curriculum. Three schools rewrote their aims and objectives; in six schools there were modifications to the curriculum, influenced sometimes by national developments. These included a pre-employment course for sixth forms comprising world studies (multicultural studies), health education, personal and social education, computer familiarisation, and study skills. One school was acting, in anticipation of falling rolls, to modify its curriculum in order to protect the principles of balance and equal access to the curriculum for all pupils. Six schools were engaged on the revision of schemes of work to give more emphasis to the development of skills and a varied methodology. Four schools had developed more sophisticated assessment and monitoring procedures, in two cases leading towards profiling.

Effectiveness

67. In most of the schools it was too early for developments to have had observable effect in the classroom; and in any case to measure consequent improvement with any certainty would require a longitudinal study. Most of the changes in the curriculum, schemes of work and assessment were potentially beneficial and some new practices had been put into operation. There was also good practice in some of the areas where heads had indicated that progress had been made. The main evidence for improvement in the quality of the education being received came from the statements of heads, all of whom were able to point to some beneficial developments. Apart from curriculum and organisation, the other main area in which major improvements were claimed was in relation to staff. In all the schools there were


[page 25]

comments about the commitment of teachers which included increased confidence and professionalism; the development of a common professional language; the ability to set their work in the wider context of the school curriculum (as opposed to being restricted to their subject); and a willingness and ability to evaluate what they were doing.

68. While all the schools could claim some success in particular aspects of their activity a few encountered difficulties in whole-school terms. In one school the lack of success was associated with a lack of clarity about the overall purpose, an inability to take a whole-school perspective and a too highly demanding subject-orientated structure. In another school there was a failure to set priorities: too many projects, not necessarily related, were tackled simultaneously and staff were over-stretched as a consequence. In a third school the procedures were sometimes bypassed because they were too lengthy and elaborate.

69. The main advantages of this form of exercise were the teachers' involvement in it and their commitment to it, which led to improved morale and increased professionalism. The main drawbacks were the demands of time and effort which it made on teachers, a burden they were not always able to sustain.

Staff appraisal

Purposes

70. Six of the twelve schools visited had staff appraisal schemes which were more than a simple career interview with the head. (Four of them were also engaged in school self-evaluation.)

71. In no case was the scheme a requirement of the LEA and one head was advised against instituting such a scheme by the LEA, on the grounds that the issue was too sensitive. On the other hand, all schemes had one purpose in common, which was to monitor and improve teaching performance.

72. In all the schools which had a staff appraisal scheme it was linked in some way or other to promotion. In two the link was explicit and the identification of promotability was an expressed aim of the exercise. In a third school the head said there was no direct link but this was contradicted by statements in the documentation. The staff at this school were understandably confused about the purpose of the exercise. In the other three schools the teachers clearly assumed there were such links and in two of them they cited examples of staff promotion linked with appraisal.

73. In three of the schools staff appraisal had clear links with the school's staff development policy. In one, staff development grew out of the head's original scheme of classroom appraisal with a subsequent interview. In the


[page 26]

other two, appraisal was seen as a means of making the school's in-service training policy more effective, in that it provided not just an assessment of performance but a more precise identification of training needs. In two schools the links with professional development were not clear: the teachers at one of them were critical of what they saw as the lack of any school development policy which allowed for longer term planning and the establishment of training priorities on a whole-school and individual basis. In a sixth school where appraisal was carried out on a subject department basis there was no obvious link between that appraisal and INSET and development policies. For one school staff appraisal and development for the members of the history department was a product of school evaluation which identified their failure to make appropriate use of original source material in their teaching. Once the teachers had asked for guidance on how to implement the desired changes, the school was able to arrange for some to attend a course in resource-based learning for history teachers.

Scope

74. Half of the schools had compulsory schemes and in those where they were voluntary very few teachers chose not to take part. The scope of teacher appraisal schemes varied. Three schools attempted to include all aspects of teacher activity: both classroom performance and contributions to the department and/or year team and to the wider context of the school. Another school felt that it was not possible in the time available to look at all aspects without running the risk of being too superficial and possibly too threatening. In this school, on the first occasion, appraisal was limited to two or three areas which could be reviewed in some detail, with the intention that new targets for the following year would emerge from the discussion. In addition to individual teacher appraisal this school also had a policy of biennial departmental reviews.

75. Two other schools related staff appraisal to the department or year structure. In one of them the head of department was required to produce a report, to an agreed formula, covering aims and objectives for the year; performance in relation to these aims; assessment and examination results; resources; course changes and development; and staffing, including a report on each member of the department. A sixth school had no consistent overall policy in relation to the methodology of appraisal, but the detailed job descriptions of post holders stated that they were responsible for the work of other teachers within their areas of responsibility.

76. The appraisal of senior management was not a strong feature of any scheme. In one school the head and the LEA adviser interviewed the deputies and the head himself was interviewed, at his own request, by the LEA. In a second school the head asked staff to complete a proforma with the title Evaluation of senior management's areas of responsibility. This was regarded by the staff as an unsatisfactory document, in that it was concerned with


[page 27]

'areas of responsibility' and not with the performance of individuals. Nearly half the staff failed to complete it. In one other school the teachers interviewed observed that the senior management had not been appraised. Beyond that, the appraisal of senior staff was not touched upon directly, though two schools had sections of the self-appraisal document for teachers which asked them to indicate organisational and administrative constraints upon their effectiveness.

Criteria

77. Although judgements were required in all the areas mentioned above, relatively little help was given to those who had to make them in terms of either the detail required or the criteria to be applied. In two schools the decision was left to heads of departments or heads of year and this resulted in a great variety of practice and of standards. In one of them, where the heads of department had autonomy in these matters, some gave detailed critiques of lessons observed; others used indirect indicators such as take-up of the subject at the fourth year option stage and examination results; some were not prepared to monitor classroom performance at all. In the second school, heads of department were given a detailed list of topics for their biennial report but no guidance was given for the appraisal of individuals, apart from newly trained teachers for whom there was a detailed assessment grid. Three schools issued initial self-appraisal proformas containing varying degrees of detail. In general, however, the criteria applied in most schools were the appraiser's own, and this led to a lack of consistency.

Introduction of the schemes

78. Staff appraisal is a sensitive issue and the introduction of three of the schemes followed periods of consultation and a history of related activity. In two of the schools there had been comprehensive schemes of induction for new teachers with built-in appraisal of performance, and in at least one of these a large influx of new teachers in successive years had meant that more than half the staff already had experience of some form of appraisal. Another school had a history of evaluation of curriculum and organisation over many years and staff appraisal arose naturally from it. All three schools had well established consultative procedures which allowed the issue to be discussed openly and the details of their schemes were arrived at by a combination of head teacher initiative and specific working parties. In one of the remaining three schools the origins were not clear, the head claiming that staff appraisal had arisen out of a school-based conference on the role of middle management, the staff claiming that it had been presented at a staff meeting without warning. At a further school the head had instituted the scheme unilaterally, following on from the classroom visiting he had instituted on his arrival in the school. In neither of these cases was the staff consulted about the drawing up of the appraisal proformas. In the sixth school, a recently


[page 28]

formed amalgamation of three existing schools, the job descriptions which made heads of department and heads of year responsible for monitoring the performance of their teams were established by the head and a working party from the constituent schools before staff were appointed. Nevertheless, in spite of the lack of staff involvement in setting up the last three schemes, there was a general acceptance that staff appraisal was not in itself unreasonable, provided the process was concerned with developments as well as appraisal and that there were safeguards regarding confidentiality. In two of the schools appraisal was compulsory from the outset and one further school has since taken the decision to make the scheme compulsory.

Methods

79. The introduction of all the appraisal schemes was preceded by considerable debate about principles and methods. In one school where there was classroom appraisal, there had previously been a school-based in-service course on classroom methodology. However, in no school was there training in appraisal and interviewing techniques. This led to varying practices and standards, and in one case the first stage appraisal conducted by heads of department was found unsatisfactory, at least in part, because some of them lacked the appropriate skills.

80. Four possible phases can be identified in staff appraisal schemes:

i. initial self-appraisal (usually by means of a questionnaire);
ii. classroom observation (by the head or the teacher's immediate superior);
iii. appraisal by an immediate superior (either oral or written and possibly incorporating the preceding stage);
iv. appraisal by the head or deputy (usually face to face, possibly including a written statement).
Three of the schemes examined contained two of these phases; two contained three of them. All included iv; only one included ii.

81. Three schemes included an initial self-appraisal, in one instance combined with classroom observation by two of the senior management team and an extended interview. In the second case self-appraisal was followed by an interview with the head and the LEA general adviser for the school. In the third school it was followed by a written report from the head of department and/or head of year which could be discussed with the teacher appraised but which had to be signed by him. The other two schools operated two stage schemes: first an appraisal by the immediate superior, in one case face to face, in the other by a written report to be signed and possibly discussed; then an interview with the head or a deputy to discuss the first stage and also wider issues of development.


[page 29]

External support

82. There was little external input into these schemes. The head of one school invited the personnel training manager of a public company to review his staff appraisal procedures and modified them in the light of her comments. He also used an LEA adviser to assist with the appraisal of deputy heads and persuaded LEA officials to appraise him. In one other school the head invited the school's LEA general adviser to take part in the second stage interviews. His presence was welcomed by the staff.

Written reports

83. Some form of written record was kept in all the schools. In two of them this record consisted only of the self-appraisal questionnaire which the teacher had completed prior to the interview and notes which the head had written during it. In a school where the initiative for appraisal lay with the head of department, there was a range of practice, from letters to individuals analysing the lessons, raising general issues and making suggestions for improvement, to oral reports only, or no reports at all. Where anything was committed to paper, copies were kept by the head of department, the director of studies and the head. In the other three schools there was an established procedure for keeping records of interviews, reports, targets set and action to be taken. Access to them was controlled. There appeared to be few policies on the length of time for which the records would be kept. In one school they were to be kept for two years; one other school had a policy of updating reports annually. Notable features of the self-appraisal and appraisal forms were the honesty of the former and frankness of the latter.

Outcomes

84. The main outcome of appraisal was the identification of in-service training needs. In some cases appraisal led to a modification or a change of role for the teacher, or to adjustments in teaching programmes. Appraisal was also used to provide evidence of promotability. One head pointed to ten internal and eight external promotions in the previous 18 months. The information gained and judgements made during the appraisal process were frequently used as a means of ensuring fuller, more accurate, references and this was generally welcomed by the teachers. In one case it led to a policy of open references.

85. There was little direct mention of incompetent teachers. In one school appraisal by the head of department had confirmed that a teacher was very weak. A fully documented case was submitted through the school to the LEA, but no further action ensued. In a school where the scheme had originally been voluntary, the main objections to making it compulsory had come, according to the head, from teachers known to be weak. On the other


[page 30]

hand no school claimed as a major role of appraisal the identification of incompetence per se. One of the assumptions of the schools which made use of an initial self-appraisal was that failing teachers could be helped to admit and come to terms with their problems and that one task of the school was to provide a supportive environment where such openness would be possible. Once such problems were admitted the teacher could be given the advice and support needed to improve; but there were few indications, in any of the schemes, of the actions that would follow a failure to respond positively to such advice and support.

Effectiveness

86. Adequate follow-up to appraisal was an important issue for all staff; without it disillusionment soon set in. In two schools the responsibility for follow-up action rested with the heads. Neither of these kept records of the final interviews and in both schools there were complaints by teachers that promised action had not materialised. In one case there was no apparent link between appraisal and in-service provision. In the other the heads of department were supposed to monitor progress towards the meeting of targets but this was made difficult because, in general, departments lacked schemes of work with an agreed framework of objectives and criteria. In two schools responsibility was shared between the teacher, his immediate superior and the head or deputy. One scheme was particularly effective in that the prime responsibility for implementation rested with the deputy head (staffing) who had control over in-service training, timetabling and staff deployment and who was able to work closely with the individual teacher and his head of department or year. The head of one school claimed that teaching had been poor but had improved as a result of classroom appraisal, particularly in respect of the techniques of questioning pupils. There were a number of well constructed and well conducted lessons seen in this school. Teachers in another school said that the process of self-appraisal had made them analyse, modify and question their teaching for the first time and again a number of very competent lessons were seen in this school. In a school which had only just completed the first round of appraisal it was too early to identify improvements, although the exercise had revealed weaknesses in mathematics which had been rectified. This school had also been engaged in other evaluation activity over a number of years as a result of which it appeared to be on an upward spiral of improvement. In another school there were numerous small indicators which could lead to improved learning opportunities: a paper on the use of resources; support materials for language teachers in languages in which they were not secure; and specific in-service training in the science department to correct perceived weaknesses. In the school which had a full inspection just prior to this exercise, some improvements could be confirmed. Less emphasis was now given to what was taught and more to questions of how and why: this change had led to new courses and more appropriate methodologies for the less able and to closer monitoring by heads of houses to ensure that the aims and objectives of the pastoral system and the


[page 31]

active tutorial programme were being met. Teachers had become more professionally aware.

87. There is little doubt that many teachers benefited from these procedures. The heads commented on high levels of professionalism, greater analysis of practice, a broader educational context against which to make judgements, a receptivity to new ideas and a willingness to explore them. The teachers spoke of the benefits of having to analyse their strengths and weaknesses, and of questioning curricula and methodologies which they had taken for granted for years. They appreciated the opportunity of discussing their performance and career with the head, because it improved relationships. It also enabled them to see that they had not just a job but a career, which could be developed to take account of their strengths and in which needs could be identified without fear, with a view to their being remedied by appropriate in-service experience. A number mentioned an atmosphere of openness in which issues could be raised which had previously been taboo, and where differences could be discussed without rancour. With few exceptions there was an acceptance of appraisal as reasonable and professional, provided that it was positive and supportive. Where this existed the teachers were prepared to be critical of themselves and to accept quite sharp criticism from others.

88. The problems associated with staff appraisal centred round the head, middle management and the cost in terms of time. There were examples of heads failing to consult or communicate adequately with their staff, which led to some uncertainty and apprehension on the part of teachers and to instruments for appraisal which were mediocre; and there were not always effective structures to ensure implementation. Where heads of department or pastoral heads were primarily responsible for appraisal, or were given discretion as to how they accomplished stage one appraisal, there was considerable variation in practice and effectiveness. While some took this responsibility very seriously, some did not see staff appraisal as part of their job, and some lacked the necessary skills to carry it out effectively. Some heads were aware of the inconsistencies but took the view that they did not wish to impose a rigid methodology, or felt that it was important to establish a scheme, albeit with imperfections, which could be improved later. Some heads of department found appraisal and monitoring difficult because their schemes of work and the aims and objectives within them did not provide an adequate framework within which to make judgements or to monitor change. The other main problems concerned the time and effort involved. In some schools there could be three appraisal interviews in each round; this was seen as a burden on both the appraised and the appraiser. One school had already moved to biennial review. Whatever the frequency of appraisal it was thought to be important that it should be regular, at a specific interval and not haphazard. The whole process was time consuming and one school which had a written self-appraisal, lesson observation and an interview with the head calculated that it had taken 10 to 12 hours for each teacher. Nevertheless, in spite of the high cost in terms of time, all the schools thought the exercise was worthwhile.


[page 32]

Some LEA policies and practices


89. During the summer term 1984 visits were made to six LEAs exemplifying different approaches to school evaluation and varying lengths of experience of it. Altogether 29 primary, six middle and 23 secondary schools were visited by specialist primary and secondary HMI, usually for one day. In addition discussions were held with LEA officers and advisers. Also included in this section are five secondary schools visited in November 1983, as part of a local exercise by HMI to look at the effects of self-evaluation in a seventh authority with an established scheme.

90. Of the seven LEAs being considered four have mandatory policies, requiring schools to conduct self-evaluation exercises and present reports to the governors, or the authority, or both. Two of these, with relatively long experience, have a five year cycle, though one of them also requires annual reports to the governors and an annual review of each school's self-evaluation. A third LEA has a four year cycle and the last LEA has a rolling programme to be completed within four years with schools reporting on one section of their evaluation per term. The other authorities do not have mandatory policies but encourage self-evaluation in various ways. One has drawn up detailed primary and secondary guidelines and encourages schools to use them, another encourages its schools to take part in a variety of national and local projects and a third authority has involved some schools in a national project (GRIDS), as well as conducting its own evaluative visits to schools and encouraging individual initiatives.

91. In none of the seven LEAs visited was staff appraisal a part of the formal requirements of any policy. Of the five LEAs which had produced guidelines for self-evaluation one made no reference to individual teacher performance at all, and where it dealt with classroom techniques it asked about the variety of techniques used by all the staff. The other four LEAs all had sections allowing for teacher self-appraisal with questions for the teacher or head to consider. None of them required a written response. One LEA, where reports on the work of subject departments were required, stated that these were to be 'the result of group discussions and should not identify individuals'. The general attitude was summed up by the LEA as follows:


[page 33]

'This section on teaching skills is for reference and discussion only and at any appropriate time: it is not intended for any written response'. There was clear reluctance on the part of the LEAs to impose staff appraisal, and some suspicion of and resistance to any such attempt on the part of the teachers, particularly secondary teachers. Nevertheless, in each of the authorities examples were found of school-initiated, structured and sometimes rigorous staff appraisal schemes which were accepted by the teachers.

Purposes

92. The main purpose stated by all the LEAs was to help schools to improve themselves and thereby improve the quality of education received by the pupils. In some cases, this was the only stated purpose. There was the assumption that the process was valuable in itself and that improvements would result:

'Self appraisal has the overwhelming advantage that the process itself is productive';
'The process of self-assessment should itself be valuable as a contribution to staff development, individually and corporately';
'and the outcome will then be more effectively applied to curricular and organisational developments within the school'.
There was no clearly defined relationship between self-evaluation and accountability. One LEA at least was prompted by demands for greater accountability and this was seen as the main thrust at the beginning of its programme. Since then the emphasis has swung more towards evaluation as a guide to future development of both schools and individual teachers. Another LEA raised the issue of accountability by saying that:
'to retain public confidence and justify greater resources it is essential that it (the education service) is more obviously seen to be about the business of assessing its performance. It must be less reticent about revealing its successes and failures, and the reasons for both'.
93. Some schools saw a clear relationship between self-evaluation and accountability. Where a written report was required after four or five years, this was taken to indicate that the prime purpose was accountability, though the opportunity to effect improvement in the quality of education was also seen as important. One consequence, dealt with in more detail later, was that the production of a full report usually resulted in an overemphasis on description and a lack of real evaluation. Attitudes in one school apparently hardened when the evaluation report was used to justify redeployment, though this was not a common occurrence. One authority which had established a rigorous scheme for primary school self-evaluation a number of years ago is only now in the process of introducing a similar scheme for secondary schools. It was shown to be important, on numerous occasions, to provide a clear exposition of the purposes, scope and methods of evaluation, and the use to be made of it; to involve teachers in setting up the scheme; and to arrange briefings, with opportunities for discussion. One LEA asked its inspectors to validate the findings of self-evaluation.


[page 34]

94. A purpose of evaluation expressed by one LEA, and confirmed by the practice of several others, was:

'to gain an overall view of primary education in its (the LEA's) area and thus to assess more realistically the relative needs and priorities for its own development responsibilities'.
Statistics and information gained in this way have to be handled with care but, in aggregate, they can reveal trends and help an authority to identify needs in terms of resources, organisation, curriculum and in-service training.

Scope

95. Evaluation in the three LEAs requiring a four or five year cycle of reporting, and in the one with a voluntary scheme, covered the whole spectrum of school activity, though in this last scheme it was not suggested that a school should attempt to cover all sections at the same time. All had extensive sections on resources, organisation, staffing, curriculum, pastoral care and external relations. One LEA emphasised changes that had taken place and in the section for the head's overview provided an opportunity for particular issues to be raised. In general a considerable amount of factual and statistical information was required, demanding a description of actual practices. One LEA with a mandatory scheme required a report on three topics each year, one of which was to be concerned with organisational and non-curricular areas, and the other two to be reports on specific curricular areas eg reading and language development, science, moral, religious and social education. In the LEA where one scheme involved joint evaluation by the school and advisers, the intention was to include an evaluation of all areas of school activity within a two week period, on the basis of factual information collected. One secondary school, which was dissatisfied with this process, offered seven topics for evaluation from which the LEA chose five. A national project operating in this authority, and a voluntary scheme in another, had mechanisms for reviewing briefly all areas of school life in order to determine priorities for more intensive evaluation and development. Two specific projects in another authority concentrated on an evaluation of classroom activity with emphases on effective methodology and improved pupil learning. When LEAs encouraged individual school activities these tended to be selective rather than all embracing depending on the school's perceived priorities and its resources and also to be centred round the curriculum.

96. Emphasis in the whole school, mandatory schemes was on inputs and on the description and analysis of school planning and organisation, eg:

'How are decisions on policy and other matters arrived at?'
'What procedures are there for the regular review of the provision of materials and resources in all major areas of the curriculum?'
Judgements were required on the effectiveness of arrangements in organisational terms vis-a-vis the aims of the school. Relatively little attention was given to their impact on pupils or to outcomes in terms of pupil performance,


[page 35]

other than as measured by examinations. In one set of guidelines only one of 19 questions on the curriculum dealt with the effects on pupils.
'What observations have been made of the total school diet of an individual class or an individual pupil? (variety of styles, subjects etc encountered in a day, percentage of class constructively involved in lessons, balance of active/passive learning, cognitive/affective learning etc.)'.
In two reports seen, that particular question did not receive any response. The topic was important but there was no means of indicating its relative priority, with the result that it could be overlooked, ignored or given inadequate treatment, particularly in view of the number of other questions to be tackled. This was a problem common, in varying degrees, to all the schemes.

97. Where LEA schemes did ask for judgements schools were often given little indication of the criteria by which these judgements could be made, except in general, indirect terms; nor were they helped to establish their own. Questions such as:

'Do we prepare school leavers as well as we can for adult life by helping them to meet the demands and stresses of both employment and possible unemployment? Does this begin early enough?'
are not very helpful. They imply criteria by which to judge the curriculum but give no guidance as to how to evaluate any particular programme. In one non-compulsory scheme the criteria for evaluating the pastoral system were a series of statements, often value-laden, which, in effect, summarised the features of what the authors thought was a good pastoral system; eg
'The school deals effectively with the problems of maladjusted and unstable pupils'.
There were, however, no criteria suggested for judging 'maladjusted', 'unstable' or 'effectively'.

98. To some degree the lack of detailed criteria for judgement was a result of schemes being so wide that detail was difficult to provide. It was equally true that they were trying to do two things simultaneously: to gather information about structures and practices and to ask for an evaluation of them. Some of the statements and questions to which responses were required implied criteria, but their status was unclear. The problem was acknowledged in two sets of guidelines.

'Some questions may appear to be leading, implying there is a right answer; such is not the intention.'
'It (the guidelines booklet) might seem ... to prescribe a model of the ideal school and the ideal teacher. However it is felt that both detail and the ideal are necessary prerequisites to the achievement of some degree of excellence.'
What none of the LEA schemes did was to explain how to set up criteria for assessment or how to establish whether they were being met. One national project which was concerned with the methodology of school evaluation went some way towards this.


[page 36]

99. Although three of the LEA schemes which were looked at were nominally recurrent activities to be repeated every four or five years, they were in fact developing into rolling programmes. One LEA required a summary report every four years but schools were also required to maintain an evaluation file, dealing with all aspects of school life, for regular updating and review. In another LEA with a four year evaluation cycle, the first evaluation report was seen, certainly by some of the schools, as an agenda to be tackled during the subsequent five years. In both authorities there were schools, particularly those which were unenthusiastic, which saw the exercise as a recurrent task with no activity in between. The status and nature of the five year review in a third LEA was less than clear, certainly to some of the schools, in that they were required to submit annual reports to governors and the LEA and annual summaries of evaluation activity to governors. An LEA which, from an earlier scheme, was aware of the problems posed by attempting a whole-school evaluation within a short time, particularly in primary schools, divided the task into 12 units planned to take four years to complete. The schools were required to evaluate two curriculum areas and one non-curriculum area each year, on a termly basis. Even so, some heads found this process something of a treadmill, with no time to stop and take stock periodically of what had been achieved so far. There was evidence, too, that some of the curriculum areas were receiving a relatively superficial appraisal. The LEA scheme for joint evaluation by the school and advisers had no fixed cycle, but, on the basis of one primary and one secondary school per year, it would be a long one. The other two schemes which were seen were intended as rolling programmes with a mechanism for establishing priorities which allowed schools to proceed at a pace they could sustain.

Methods

100. In all the schemes, the bulk of the work of evaluation was done by subject departments, pastoral groups, or specific working groups producing reports on their areas of activity, or taking responsibility for a specific section of the report. One LEA made a point of stressing that responses should result from group discussion, but in this authority, as in several others, the involvement of teachers, particularly in secondary schools, was uneven. A number of factors emerged. Not all the heads were able to carry their staff with them. One head volunteered his school as a pilot for the LEA scheme against opposition from some of the staff, and was unable to exert much pressure on them to take part. The lack of enthusiasm of another head had communicated itself to the staff and progress had been slow. In secondary schools the heads of department, heads of year, etc were key figures. While some made evaluation a team effort and produced detailed evaluation, others, who were hostile or less effective, produced individual and much less substantial reports. In one or two schools there was an attempt to break the hierarchical nature of the exercise by putting relatively junior teachers in charge of working parties: but this was not common and in at least one case was unsuccessful because of the lack of experience of the teacher concerned.


[page 37]

In primary schools there was generally a greater involvement of staff, partly at least because there were fewer of them. It was common for relatively junior teachers to prepare draft sections of the report, as well as taking part in discussion.

101. The main task of editing was done by heads. In one LEA where there was a section of the report specifically for the head teachers' perceptions, their contributions were wide-ranging and dealt with whole-school issues, sometimes raising matters not covered in the sectional contributions. Inevitably, in all the authorities where there was a written report, there was some criticism by teachers of the editing.

102. In all the schemes the general structure was clear in relation to the various stages of the reporting process and in detailing topics to be reported on. Where most of them were weaker was in guidance as to how to evaluate: to carry out investigation, establish criteria and form judgements. Some of the variation in the quality of the departmental and sectional contributions can be explained by the variation in the ability of different teachers to carry out evaluation.

103. All schemes had more or less detailed guidelines for topics to be covered, drawn up usually by working parties of teachers and LEA advisers and officials. In some the format was a series of statements about a particular topic, eg statements about pupil assessment which effectively defined good practice and to which the school responded. In another there was a series of key questions with supplementary factors to be taken into account when answering them. Others had a predominance of straightforward questions, usually preceded by: 'what', 'how' and occasionally 'to what extent'. Some degree of latitude was allowed as the questions were not always mandatory and could be replaced by others or the topic could be tackled in a different way. When schools had the freedom to choose they proceeded by discussion or by questionnaire, sometimes drawn up within the establishment, sometimes derived from other sources.

104. In a number of schools, particularly those not producing a written report for the LEA, there was a tendency to start with the assumption that change was needed and to proceed almost immediately to the elaboration of new policies with little evaluation intervening. This was also noticed in the GRIDS project which stressed the importance of the evaluation process but which was criticised by some schools at this point for being too detailed and thereby holding up the real activity, the redefining of policy.

105. In no case was the observation of lessons an integral part of the process. This was in keeping with the lack of emphasis on the effect of school policies and practices on pupils' learning experiences, and with a reluctance to initiate teacher appraisal schemes.


[page 38]

106. Only rarely were there contributions from pupils, parents or employers, even though all the sets of guidelines had sections dealing with external relations. Where this section was included in reports it was dealt with solely on the basis of school perceptions of the views of parents and employers.

107. There was also little use made of external consultants, other than LEA advisers and inspectors; and even there it was uneven. LEAs tended to stress the role of advisers: in helping schools to carry out the evaluation process; in providing specific subject or phase (primary, secondary) advice; at the reporting stage, when the report was being written; and particularly in debriefing afterwards and in follow-up work where necessary. LEAs tended to have a more optimistic view of the extent and value of advisory involvement than did the schools visited. Some schools did report help from their general adviser and this was more often than not in the form of encouragement. The general adviser of one primary school was instrumental in bringing in a specialist science colleague and putting the school in touch with a local polytechnic. But examples of this nature were few and there were almost no examples of advisory involvement in specific review areas. In several authorities the lack of an LEA response to reports and the LEA's failure to help with subsequent development work were mentioned as a source of disappointment by schools.

108. In one LEA inspectors had a specific role, the validation of the school's evaluation report. This was done by means of a group visit by LEA inspectors lasting several days and took the form mainly of discussion with particular subject or pastoral groups. There was apparently little classroom visiting and little time spent subsequently in observing whether recommendations had been put into practice. There was some uncertainty on the part of schools as to the role of the inspectors, particularly in the few cases where local inspectors had been involved with departments in preparing their reports and then returned to validate. There was most resistance to the involvement of advisers during the process of evaluation when there was some concern about the dual role of adviser and assessor. There was, however, more support for advisory involvement once the evaluation was completed. In two LEAs the schools were in favour of more support from advisers to give an objective view of the evaluation and particularly to help with post-evaluation development. Where there was significant activity in the field of evaluation, it increased the calls made on the advisory services, in terms of both amount and complexity, to a degree which made it difficult for them to respond fully.

Outcomes

109. Two LEAs required full reports: these were bulky documents which contained a high proportion of low level description of organisation and structures and relatively little evaluation. One reason for the lack of evaluation was the nature of the issued guidelines, which contained a


[page 39]

preponderance of items requiring factual information. One area in which there was an element of evaluative comment, sometimes outspoken, was in relation to resources or the lack of them, usually in departmental reports. There was very little reference to classroom practice or to pupil outcomes, other than examination results. The appraisal of teacher performance formed no part of them.

110. One LEA required a report under four headings: trends over the previous four years; assessment of the work of departments, of pastoral work and of school organisation; the head's assessment of progress; and local inspectorate comment.

The other required major sections on: policy aims and objectives; factual background; pastoral and curricular arrangements; pupil progress; school and community; future directions. A third LEA required summary reports of approximately four sides of A4. While these last were much briefer they still contained more description than evaluation, and failed to deal in depth with particular issues. Again there were few references to work in the classroom and pupil performance. A fourth LEA adopted a different approach. Schools reported on a different topic each term in a previously agreed order, and were required to complete a standard proforma; for each question from the guidelines, they gave an indication of the time scale, a comment on current activity, and their plans for future action. Not all the questions were susceptible to this approach and some needed more space; as a result some answers were sketchy and superficial. No other LEAs required formal reports, although the schools participating in the GRIDS scheme produced reports intended mainly for the GRIDS central team, which contained a description of the methods adopted; an analysis of the areas being investigated; conclusions; recommendations; and an evaluation of the scheme.

111. In the main the problems and issues which emerged from evaluation related to resources and to organisation and planning, (eg lack of materials, space and staffing; the need to review fourth and sixth form curriculum provision). Where issues were open to internal action there were indications that the schools were implementing change. Where the problems which had been highlighted were related to the teachers' classroom work (eg revision of schemes of work to take account of mixed ability grouping or the problems of coping with the less able), then, generally, less progress had been made. This highlighted the need for schools to monitor the development stage closely to ensure that the more difficult issues were faced up to and that decisions were implemented. Where the evaluation was thought to have resource implications for the LEA much less action had resulted and this was a cause of disappointment to the schools concerned.

112. One of the chief concerns from the schools' point of view was the lack of response by the LEA. Some schools in two LEAs mentioned the time lag between evaluation and response. Some criticised the lack of detailed feedback. For one school the half day debriefing with the LEA was seen as a


[page 40]

scarcely adequate return for the time spent by the school. Some concern was expressed at the LEA's failure to support follow-up action and development, to monitor the implementation of recommendations and to provide specialist help. One LEA has now instituted a senior level post specifically to monitor evaluation and development. This same LEA has a system of group secondments to enable issues raised in evaluation to be tackled. Another LEA, within its limited resources, has tied the INSET programme more closely to the needs identified in the evaluation.

113. In most of the LEAs with mandatory schemes and in the schools where there was independent activity, there were gains, some of a general nature. These included an atmosphere where open and frank discussion could take place; the establishment of a framework for systematic evaluation; the development of a shared understanding of what the school was trying to achieve; better communication between teachers as individuals and groups; and a greater professional awareness by teachers. In more specific terms there were many gains: the introduction of a staff appraisal scheme in several schools (although these were not LEA initiatives); improvements in organisation, for example in the arrangements for probationers and for liaison with feeder schools; improvements in curriculum provision, including new courses for the less able and cross-curricular co-operation; and the revision of schemes of work for different subjects.

114. While these developments should result in enhanced learning opportunities for pupils, it was too early for them to have had a measurable effect in the classroom. Nor was it possible to say that changes had come about exclusively because schools were involved in evaluation. In a number of cases they were already in the minds of senior management but had been eased into implementation by the exercise. This was a point brought out by some schools when calculating the cost benefit of the exercise. The schools with less confidence and less experience in evaluation gained relatively more than those which were already fairly active in this respect.

115. The whole-school schemes had a number of drawbacks. The fact that they were all-embracing, requiring considerable documentation, made them long and largely descriptive, and prevented them from focusing sharply on important issues. In two LEAs with whole-school evaluation reports, many teachers had not read the complete school document but looked only at their own sectional contributions, so that the whole-school perspective of the exercise was lost. The production of the report in some cases dominated the exercise to the detriment of the evaluation process. For some it was a chore to be completed. The time costs were considerable, both in terms of the duration of the exercise - between six months and two years - and of the demands made on individuals. The head of one school calculated that it had taken 287 hours of staff discussion and 96 hours to draft the report. The head in another LEA estimated that it took 50 hours to produce a sectional report and 200 hours to produce the total report. Familiarity with the procedures, should they become established, might well reduce such commitment of time.


[page 41]

116. The most approving comments about the evaluation exercise came from the primary schools. While the costs in terms of time and effort were considerable, particularly where the head had a substantial teaching commitment, the exercise in primary schools was of necessity on a smaller scale; a greater proportion of the staff were involved; organisational issues were less prominent; and the focus fell more naturally on the curriculum, although there was still insufficient attention to pupils' learning.

117. Among the secondary schools, although there was support for evaluation, the observable benefits in the schools involved were fewer, particularly when related to the costs incurred.

118. One LEA sought the views of schools involved in its first cycle, in its preparation for the second. Of the schools replying three quarters were in favour of a compulsory second round, though few wanted it to cover the whole school. While most accepted the need for some written report, a much shorter one was preferred, preferably a regular report to governors on a specific topic rather than a long all-embracing one. A smaller proportion of primary than secondary schools replied but most were still in favour. There was however concern over the use of time, particularly where there was a teaching head. Few wanted the same overall report format and preferred either to take the conclusions of the first report as a starting point or, preferably, to have a number of topics selected by the school and advisers.

119. The issuing of guidelines by one LEA, even though they had been the result of a working party of heads and LEA officers, appears to have had little long-term effect in the secondary schools. Only one of the schools visited had used the document as a checklist. A school which had played a leading part in the development of the booklet had since moved on and used other methods. An incoming head had set it aside and was using his own scheme. In primary schools the LEA document, again produced by the LEA and teachers, with the experience gained from the secondary document, seemed to have had greater effect and had frequently been used as a basis for school-based INSET and in the initial stages of school-based evaluation.

120. A second LEA, with a strong commitment to INSET, had its own joint evaluation scheme with schools and advisers and was also involved in piloting a national project. While the joint scheme contained a higher proportion of evaluation than many and had been found valuable, particularly in primary schools, the slow rate of progress, one per year for each type of establishment, gave the scheme limited value as a means of monitoring the LEA as a whole. The decision had been taken to concentrate the pilot scheme in one area of the authority, to make possible the development of mutual support among schools and their general advisers. The results had been positive but there was still insufficient focus on work in the classroom and the monitoring of the development stage was problematic in schools. The further extension of the scheme was likely to put more demands on the advisory service.


[page 42]

121. The third LEA had no single policy or guidelines for evaluation but for many years had positively encouraged and supported individual school initiatives. These included productive partnerships between teachers and educational researchers, concentrating on the evaluation of classroom activity, which had enriched the learning opportunities for some pupils. In addition some schools had embarked on extensive curriculum evaluation projects.

122. Papers prepared by the advisers suggest that the authority might investigate and evaluate different models of curriculum development as a basis for policy. Primary schools have already been invited to adopt a GRIDS approach. These initiatives should help the schools in the authority to derive full benefit from the many and various evaluative activities undertaken so far.



[page 43]

Recent developments in teacher appraisal


123. Although, at the time when this exercise was initiated, no LEA had a mandatory policy in relation to teacher appraisal, such is the rate of development that at least five authorities are now in the process of establishing schemes for teacher appraisal. Information is available on two of them, one of which is now in operation.

124. In the first of these the LEA proposes to combine an annual programme of 15 inspections of primary and secondary schools with a programme of teacher appraisal by its inspectorate, at the same time encouraging school initiatives in institutional evaluation following on from the authority's involvement in the 11 to 16 curriculum project. The purposes of the inspections are to meet pressures towards greater accountability and to provide more information about schools, so contributing to a more comprehensive picture of the education service and the identification of development priorities. Teacher appraisal is seen as an essential part of effective staff development. It is expected to provide a more systematic, objective and reliable evaluation; identify those teachers with promotion potential; and inform managerial decisions in schools and in the LEA. It is also a response to the encouragement given to performance appraisal by the government and other agencies such as the local authority associations.

125. Schools are selected for inspection for a variety of reasons, such as: that they are examples of good practice; or that they have known difficulties; or simply to maintain a representative sample. The aim is to cover all aspects of school life. Guidelines already exist for secondary schools and are being drafted for primary schools. Reports are discussed with heads and checked for accuracy before presentation to the governors and the LEA schools sub-committee.

126. The inspectors are required to spend 70 per cent of their time in classroom visits and to record these, completing a summary each month. Two sorts of reports are made on individual teachers. One is prepared following every visit by a local inspector; the other is an annual review. The form and scope of these reports were devised and agreed by a working party of teachers


[page 44]

and local inspectors and it was accepted that they represented a more open and systematic means of recording comment. The lesson observations were reported to be 'often' seen by teachers but this was not as yet mandatory. The classroom visit report asks for comment under 11 headings relating to classroom practice, plus a record of the advice given and of any comments made by the teacher. The second form is meant to be an annual statement of appraisal completed by the relevant subject/phase inspector. It covers nine areas, three dealing with matters of curriculum cover, the others requiring comments on management and organisational skills, leadership qualities, professional relationships, professional development and other skills and interests. This document is also available to the teacher. So far this appraisal has been completed for deputy heads in primary schools, and is in progress for secondary deputy heads and heads of departments. There is also a report following interviews for promotion.

127. The system was introduced in Autumn 1983 and more widely in January 1984. It is realised that considerable in-service training is required, not only to ensure that the relevant skills and techniques are acquired but also to win the active co-operation of all the schools and inspectors, some of whom are not yet participating fully and effectively. There are problems in finding time in schools to enable discussion to take place after lesson observation. There is discussion as to whether heads should be allowed to see the lesson observation reports, currently confidential between the inspector and the teacher. There is also discussion about the availability of appraisal reports, currently restricted to the inspectorate, the area officer and the assistant education officer and valid for two years. There are problems of logistics. At present the appraisal is being done by the inspectorate, and this places a considerable burden on its human and material resources even though there are more than 50 inspectors. There are plans to train senior staff in schools to take over appraisal, and for the inspectorate to 'validate' the results, although how this will be done is not yet clear. The lack of involvement of the senior staff in schools appears to be a weakness; appraisals may be made without sufficient understanding of the whole-school context; follow-up which is likely to be centred within the school will require senior staff involvement; and the problems of arranging visits and follow-up discussion would largely disappear if the appraisal were school-based. One positive feature is that INSET resources for follow-up include 40 secondments for specific teachers and residential courses for all the staff of particular schools.

128. The second LEA is preparing an integrated scheme of school evaluation and performance appraisal. This approach recognises the links between individual performance and the context of school organisation and curriculum in which it takes place. It also recognises that structural changes will be less effective if the performance of individuals is not evaluated and, where necessary, modified; and that there are limits to the effectiveness of individual appraisal if it is not tied to agreed curricular, pastoral and organisational aims. The proposals, which recognise the considerable resource implications of the whole scheme, contain two other important


[page 45]

principles: first, that appraisal involves all, from the director down; and second, that at all stages evaluation and appraisal are two-way processes.

129. The elements of the whole-school evaluation are:

a. the evaluation of organisation and management: objective-setting, planning, decision-making, distribution of responsibility and resources, group and personal relations, coordination of activity, evaluation and review of working groups;
b. the evaluation of specific aspects of the school's work: including the curriculum, pastoral care, school 'climate', careers, pupil assessment, external relations, statistics;
c. the further analysis of the teacher's role: in relation to subject, the pastoral system, the school as a whole, the teaching profession;
d. INSET and staff development: curricular, pastoral, interdepartmental and general aspects; external agents, evaluation responses.
130. It is proposed to begin with a pilot professional performance appraisal in six schools. The six heads and two inspectors are to participate in a residential course dealing with 'Appraisal and counselling training' and are to prepare a draft evaluation model for the whole authority.

131. Appraisal is to include everyone at least annually; termly in the case of those involved in significant changes of responsibility. It will have three distinct aspects:

a. an annual appraisal of current performance to identify those suitable for promotion and those 'in need of support'. This will consider: specific areas of an individual teacher's performance (14 headings); personal development (4 headings); school involvement (8 headings);
b. a review of development plans to improve current performance;
c. a career review intended to identify those who are potentially promotable, based on key criteria for specific positions, and for whom preparation of an appropriate kind can be provided.
132. There are also guidelines to help the interviewer and the interviewee to prepare, and for the conduct of the interview. A written record of the interview and of the summary review based on it under the headings mentioned in a. will be signed by the head and the teacher concerned, with room for comments by the latter. Where the appraisal gives rise to particular causes for concern the general inspector of the school can intervene at the request of either party. The inspector will also sign the summary form which will be sent annually to the LEA. Both forms are to be used as the basis of any references. The LEA will also have an up-to-date overall picture of its teaching force, enabling it to take informed decisions about its strengths and weaknesses and any development needs.

133. The advantages of the scheme would appear to be that it includes everyone; it is open; and it involves a two-way exchange with the possibility of


[page 46]

disagreement and referral to a third party. It is likely to prove to be a better basis for promotion and references than what exists at present. It is positive and supportive, linked as it is with INSET and career development. It is part of a whole-school evaluation and development. The LEA recognises the in-service implications of setting up and maintaining the scheme.

134. The areas for appraisal are described up to head of department level, but as yet nothing has been published on the appraisal of the senior staff in the school, or of LEA officers and advisers. The LEA envisages a one-stage appraisal and there appears to be no appraising role for middle management in schools even though the heads of department, for example, may be in the best position to appraise staff for whom they are responsible. It is not clear what part the appraisal of classroom performance plays in the scheme.



[page 47]

Conclusions


135. A great deal of experiment is already going on in the field of teacher appraisal and school self-evaluation, and interest is growing.

136. Systems of teacher appraisal which offer their findings to agencies external to the school, such as the LEA, governors and parents, are beginning to appear within one or two recent LEA initiatives. The procedures which have been generated by individual schools do not make the data derived from appraisal of teachers directly available, although they may be drawn upon for writing references or where some summary statement on performance is required by the employing authority.

137. Teacher appraisal appears to work best where the school as a whole is accustomed to looking critically at its practices; and certainly the evaluation of curricular, pastoral and other provision is given more substance and credibility if it includes an assessment of the functions and performance of individual teachers.

138. Evaluation and appraisal must be linked to in-service training opportunities and staff development policies at school and LEA levels if they are to be acceptable to teachers and effective in achieving improvement in school and teacher performance.

139. Whole-school evaluation and teacher assessment take a great deal of time and a considerable amount of effort. This not only requires teachers to have the time; it also calls for their support and goodwill. For these to be forthcoming, a healthy perception of status and good morale are critical, and need to be actively sought.

140. The roles which different people play in evaluation and appraisal need to be clearly understood by all parties.

141. School evaluation and teacher appraisal seem to bring about some improvement both in teachers' understanding of what is expected of them and in how others see their performance in the classroom and more widely. They can enhance teachers' awareness and understanding of the objectives and


[page 48]

performance of the school as a whole and of its various parts. In many cases the procedures appear to lead to a better working climate and to improved performance by the school and by individual teachers.

142. Effective evaluation and appraisal are linked with good school leadership and trust between head and teachers; the latter must feel confident that their observations and recommendations will be treated seriously.

143. Good communications are essential if there is to be the shared perception of purpose needed to avoid confusion and a consequent lack of rigour and coherence in the exercise.

144. For schools introducing an individual scheme or implementing an LEA policy, a state of readiness appears important. Some schools have created this by pre-evaluation or pre-appraisal activities introducing some of the concepts, skills, methods and attitudes which are likely to be required for an effective process.

145. Without classroom observation, appraisal will lack real evidence of teaching skill and provide little that can be built upon to secure improvement. Observations should be supportive as well as evaluative and pay attention to the direct effect of curriculum planning, organisation and teaching on pupils' learning.

146. Evaluation and appraisal demand skill and sensitivity. Training is necessary for all those involved, particularly in the process of evaluation itself including classroom observation and interviewing. Criteria need to be made explicit and known to the participants. More use could be made of the experience and expertise in these matters which is available outside the education service. Similarly more could be learnt from the experience of educational evaluation and teacher appraisal in other countries.

147. Greater recognition could also be given to the function of groups external to the school in the process of evaluation. Parents, governors, employers and the wider community have expectations of schools and of individual pupils. Such expectations need to be clearly articulated, and schools might do more to seek and consider them.

148. More attention needs to be given to the appraisal of middle managers (heads of department, for example) in schools, and even more to the appraisal of deputies and heads. For schemes to be fully effective they should include all staff and be seen to do so. Difficulties are not confined to teachers at the most junior levels.

149. The identification of teachers in difficulty is not generally an important objective in existing appraisal schemes; such teachers are usually known. However, systematic and comprehensive appraisal can help them and others to recognise and overcome their problems, and to specify and secure remedial support. If such support is ineffective then a fair and objective appraisal can best inform whatever other action is appropriate.


[page 49]

ANNEX A

Aide-memoire used by HMI in relation to school self-evaluation


1. The purpose of evaluation

To improve the functioning of the school?
In response to specific problems/issues?
To provide information for accountability?
Whose initiative?
How are general and specific objectives established and communicated?
Is there agreement about the purpose?
2. The information and judgements required
On all aspects of school/college life or is the focus on particular aspects?
How is the information assembled?
What emphasis on:
    a. inputs (O+M, staff, resources, planning)?
    b. processes (effectiveness, delivery)?
    c. outputs (pupils' learning)?
Are criteria explicit where judgements are required?
Do teachers evaluate their own performance?
3. The methods used
Is it to be a single exercise, a recurrent activity or a continuous rolling programme?
How detailed is the prescribed or recommended structure?
Aide-memoires, schedules, questionnaires?
How much originates
    - in the institution?
    - in the LEA?
    - from elsewhere?
Are all staff involved? As individuals or in groups?
Is there observation of lessons? Do pupils contribute? - parents? - employers?
Are there external observers or consultants?
To whom is the information available?

[page 50]

4. How effective is the exercise?

What action has resulted?
Has it led to improvements?
Has the exercise been reviewed?
Is fresh thinking apparent?
What are the advantages, disadvantages and general effectiveness of the exercise in this school/college? (If possible, include reference to the cost (time/manpower etc) of the exercise.)
5. Is there a written report? What form does it take?
What aspects of school life are reported on?
What is the balance between description and evaluation?
Is there reference to pupil outcomes?
Is there reference to teacher performance?
To whom does the report go?
6. What is the time scale of the exercise?



[page 51]

ANNEX B

Aide-memoire used by HMI in relation to teacher appraisal

a. What appear to be the motives and purposes of these teacher assessment procedures?

Primarily for HM's use?
Related to LEA requirements for accountability?
Related to promotion possibilities?
Related to school policies on
    - organisation?
    - curriculum?
    - staff development?
b. What is the scope of the assessment?
What information is collected and recorded?
What judgements are made?
What are the criteria for the judgements?
Who establishes the criteria?
c. What methods are used?
Who is involved in making judgements?
How was it presented to/perceived by teachers?
Is there any input from others (ie not teachers)?
    - inspectors?
    - governors?
    - pupils?
    - parents?

[page 52]

d. What are the outcomes?

Are there written records? (For whom?) (For how long are they kept?)
What use is made of the assessment?
Who is responsible for any consequent action?
Is there any evidence to suggest that the process is improving, or is likely to improve, pupils' learning opportunities?
e. How effective is the operation?
Have teachers been helped to assess their:
    classroom performance?
    relationships with pupils?
    lesson preparation?
    assessment techniques?
Has the assessment of teachers' performance actually helped to improve it? (ie do the observations of teaching skills support any improvement which teachers/HM may claim to have occurred?)
Are there other significant benefits?
Are there particular problems?
Is the exercise to continue?



[page 53]

Official publications in print referred to in Quality in Schools are available as follows:

A view of the curriculum.
ISBN 0 11 270500 6    HMSO 1980    £1.95 net

The school curriculum.
ISBN 0 11 270383 6    HMSO 1981    £2.00 net

Teaching quality Cmnd 8836.
ISBN 0 10 188360 9    HMSO 1983    £3.40 net

The curriculum from 5 to 16. HMI Series Curriculum Matters 2.
ISBN 0 11 270568 5    HMSO 1985    £2.00 net

Better schools Cmnd 9469.
ISBN 0 10 194690 2    HMSO 1985    £6.40 net

Better schools: a summary.

DES 1985

Education observed 3: Good teachers. A paper by HM Inspectorate.
DES 1985.

HMSO publications are available from HMSO bookshops and through booksellers. Prices are correct at May 1985.

DES publications are available from DES Publications Despatch Centre, Honeypot Lane, Stanmore, Middlesex.