Paper presented at the 1st Annual Qualitative Methods Conference: "A spanner in the works of the factory of truth"
20 October 1995, University of the Witwatersrand, South Africa


Empowerment Evaluation: An Application of David Fetterman's Approach
Charles Potter, Gordon Naidoo, Ruth Dube & Jenny Kenyon
David Fetterman of Stanford University has proposed an approach to evaluation research which aims to increase the self-determination of individuals, and empower them to cope with the problems they face in their life and work. As applied in programme evaluation, Fetterman's approach implies that the evaluator takes on different roles relative to those working in the programme being evaluated. These include training, facilitation, advocacy and illuminative roles, with the aim of increasing the insight of those working within the organisation, and their power to confront issues and solve problems.

This paper focuses on the implementation of Fetterman's approach in an evaluation of a radio learning project in South African schools, funded by USAID under the auspices of LearnTech Washington. The evaluation took place over a two year period (1993 and 1994). The paper is a case study involving four aspects: evaluation design and implementation, issues relating to programme development, issues relating to school and teacher development, and issues relating to wider programme implementation. The authors are persons who have been involved both in the evaluation and in the development of the programme.

As is common in evaluative case studies, information needs to be provided both about the programme being evaluated, as well as the evaluation. In this paper we first provide a short theoretical background. We then describe the programme being evaluated, and outline the main features of the evaluation design in relation to the developmental issues facing the programme at the time the evaluation was conducted. The evaluation process and results of the evaluation are then presented. We conclude the paper with a meta-evaluation, consisting of different perspectives on the evaluative process and its effects on different aspects of the programme's development.

1. Theoretical Background

Shadish and Reichardt (1987) state that action and practice tend to precede theory development in any discipline. In the social sciences, Chen (1990) suggests that programme evaluation has followed this trend, emphasising method rather than theory. In Chen's terms, this pre-occupation has been symptomatic of broader divisions in the field of evaluation research which emerged over the 1970's and 1980's, between those who based their evaluations on positivistic assumptions (eg Campbell and Stanley 1963; Riecken and Boruch 1974; Cook and Campbell 1979), and those "new wave" evaluators (eg Parlett and Hamilton 1972; Hamilton et al 1977; Stake 1973; Patton 1978; 1980; 1987; Guba and Lincoln 1981; 1983) who based their work in a more naturalistic and qualitative tradition.

The use of experimental and quasi-experimental approaches in evaluation has attracted particular criticism (Hamilton et al 1977). Stenhouse (1975) and Cronbach (1963; 1980; 1982) have argued that measurement-based approaches result in evaluations which are inflexible, rigid and narrow. Objectives-based and systematic evaluation approaches (Smith and Tyler 1942; Tyler 1950; Taba 1971; Rossi and Freeman 1985) have similarly been criticised as pre-ordinate and limited (Eisner 1967; 1969; 1975; 1977; Stenhouse 1975).

As an alternative to pre-ordinate evaluation approaches, naturalistic evaluation takes many different forms, and is not a "monolithic entity" (Fetterman 1986). Qualitative evaluation approaches include responsive evaluation (Stake 1973; 1983), illuminative evaluation (Parlett and Hamilton 1972; Parlett and Dearden 1978), evaluative case studies (MacDonald 1971; 1977 (a); MacDonald and Walker 1974; Walker 1980; Stake 1978; 1985; Simons 1987), responsive-constructivist approaches to evaluation (Guba and Lincoln 1981; 1983; 1989), and the use of anthropological/ethnographic methods in evaluation (Fetterman 1982; 1989; 1993; 1994).

Guba and Lincoln (1981; 1983) argue that there are both paradigmatic and methodological differences between naturalistic and rationalistic evaluation. MacDonald (1977 (b)) similarly proposes that evaluation approaches differ from each other in terms of the role the evaluator plays relative to the programme or entity being evaluated, and the position the evaluator adopts relative to those who have power and influence in society. Nisbet (1980) points out that evaluator role differences may be symptomatic of a deeper process, in which those involved in working in social programmes seek out those evaluators whose values are likely to match their own. The values of both evaluator and other stakeholders are important in evaluation (Weiss 1973; House 1973; 1977), and the paradigm adopted by the evaluator often reflects deeper ideological biases (Scriven 1983; Popkewitz 1984).

The choices taken by the evaluator at the design level thus influence the form of evaluations as well as the likelihood that evaluation findings will be utilised (Patton 1978; 1980; Weiss 1977; 1979; 1980; 1982). Despite differences between evaluators on a paradigmatic level (House 1983), there would appear to be consensus in the literature that evaluations need to be holistic and issue-driven. One reason for this is that evaluations are often concerned with programmes working with complex social issues. Such programmes are often working to produce social change (eg Fullan 1982; Fullan and Stiegelbauer 1991), and are themselves often in a state of evolution and change.

In such programmes, evaluation approaches are needed which can facilitate the process of development, rather than stifle it. Progressive focusing on issues is often necessary (Fetterman 1982; 1989), as are formative designs which focus on current and emergent developmental needs. Approaches are also needed which emphasise the relevance and utility of evaluation findings to the different stakeholders involved in the programme being evaluated (Patton 1978; 1980; 1982; Weiss 1980; 1983 (a) and (b)).

There would appear to be growing consensus from evaluators within the positivist camp (eg Cook and Shadish 1986), those evaluators adopting an eclectic position (eg Smith 1986; Williams 1986), as well as those evaluators arguing the need for a distinct naturalistic paradigm (eg Guba and Lincoln 1981; 1989; Lincoln and Guba 1985), that evaluation designs need to be broad enough to deal with multiple values and the issues and concerns of different programme stakeholders.

The recent literature on evaluation reflects a trend away from paradigm and method-orientated debates. Cook (1985) has proposed the value of what he termed a post-experimental perspective based on "critical multiplism" in evaluation design. Lipsey et al (1985), Lipsey (1987) and Chen (1990) have suggested that while refinement of research methods is helpful, intensive method-orientated debates will not appreciably expand the focus and scope of evaluation research. Both Lipsey and Chen argue for the development of theoretical frameworks, and for the linking of conceptual and theoretical efforts in what they term "theory-driven" evaluations.

Evaluators also need to make a choice as to whether to be oblivious of, or to reflect broader societal realities and power relationships in their evaluation designs (MacDonald 1977 (b); Stake 1983; Simons 1987). The role of the evaluator relative to the various stakeholders in the programme being evaluated is of particular importance (Weiss 1983 (a) and (b)). Evaluations can be conceptualised in bureaucratic terms, or negotiated with the programme's stakeholders (Guba and Lincoln 1989). Such negotiations have the potential of being educative, and can be undertaken with empowerment in mind (Fetterman 1993; 1994).

Empowerment may be defined phenomenologically (Fetterman 1994) or structurally in terms of critical or emancipatory perspectives (eg Carr and Kemmis 1986; Grundy 1987; Choudhary and Tandon 1988; Lakomski 1988). In terms of the critical perspective, the notion of evaluation itself needs to be critically scrutinised, as reflecting power relationships in society (MacDonald 1977 (b); Simons 1987). Calls for accountability, and evaluation for purposes of accountability, in particular, often reflect wider dynamics of power and social control (MacDonald, 1978; Lacey and Lawton 1981). These are differently experienced by the various stakeholders in social programmes, depending on their position relative to power and influence in society (MacDonald 1977 (b)).

In teacher development, Goodson (1992), Hargreaves (1992) and Hargreaves and Fullan (1992) have noted the tension between what they term teachers' "vision and voice." This tension, the wider tension between bureaucratic and empowerment concerns, and the need to conceptualise teacher development in a personal as well as an ecological sense, form the background against which the evaluation described in this paper was conceptualised and conducted.

2. The Programme, and the Evaluation

OLSET's "English in Action" programme introduces English from Grade One (Sub A), through the medium of interactive radio. The programme consists of a number of levels. At the time this evaluation commenced, the first level, consisting of one hundred and thirty lessons to cover tuition during the first year at school, was being implemented on a pilot level in over a hundred primary schools across South Africa.

"English in Action" has been designed to use simple technologies of radio, tape recorder and recorded tapes. The developers' aim is a programme with particular applicability in primary schools in rural and remote areas of the country. Interactive radio has been utilised in many developing countries (Zirker 1990), and has been demonstrated to be an effective teaching medium with pupils in other African contexts (eg Imhoof and Christensen 1986). In the South African context, OLSET's aim has been to provide an affordable addition to the existing primary school curriculum in both rural and urban areas, radio being a low-cost and widely available technology across the country as a whole.

In January 1993, OLSET commissioned an evaluation of "English in Action", with a view to establishing whether the project was having effects on learning in the classroom, whether the emerging model of intervention was appropriate, and how it could be improved. The evaluation thus had both formative and summative dimensions.

In 1993, the programme was still at an early stage of development. The formative (developmental and process-related) dimension was thus a strong priority. A summative (product) dimension was, however, also needed, due to the emphasis in the project on moving fairly rapidly to large-scale implementation and adoption over the next two years.

3. Evaluation Design

The aims of the evaluation were to identify issues relevant to the development of the project, to establish whether the project was being effective in the areas in which it was working, to identify areas in which the project's work could be improved, and to make recommendations as to how this could be done. In terms of these purposes, a hybrid evaluation design was evolved, which had formative and summative elements. The evaluation was conceptualised as participatory in nature. It was based on a mixture of two approaches: the theory-driven approach suggested by Chen and Rossi and their colleagues, and the empowerment evaluation approach suggested by David Fetterman of Stanford University.

Theory-driven evaluation provided a framework for the process involved in getting project staff to articulate the vision, intentions and policy underpinning the different aspects of the programme they were developing. Empowerment evaluation provided a framework for the democratic and participatory process which developed in the evaluation, relevant to the concerns of various stakeholders in the programme.

3.1 Theory-Driven Evaluation
The theory-driven approach to evaluation (Chen and Rossi 1980; 1983; 1987; Chen 1990) assumes that there is a theory implicit in every programme. The evaluator's task is to establish what the programme's theoretical underpinnings are, so that the programme can be evaluated in its own terms, as well as relative to the theories implicit in other programmes and in the literature.

As applied in the programme, the theory-driven approach was utilised formatively, to establish how decisions concerning course and curricular issues were made, as well as how strategic planning in the project was conducted. Formative evaluation was viewed as intrinsic to the process of curriculum development in which the project was engaged, and was conducted with the purpose of contributing to an emerging discourse within the project and among its stakeholders about the project's work.

The evaluation design focused on the coherence between the levels of vision, intention, policy and action (Potter 1991; 1992; Potter and Moodie 1991; 1992) in the project's work. Coherence between these four levels would imply a grounded theory, informing the work of those involved in managing the programme, those involved in curriculum development, and those involved in implementation and work with the teachers in the field.

3.2 Empowerment Evaluation
Empowerment evaluation is based on an ethnographic approach to the evaluation of social programmes (Fetterman 1989; 1993; 1994). The evaluator first establishes issues relevant to the programme's development, and then plays different roles relative to the programme's needs with respect to these issues.

Empowerment evaluation is essentially developmental and educative in character. It is thus particularly suited to formative evaluation. The evaluator focuses on increasing the awareness and self-determination of individuals, and the empowerment of individuals and groups to act independently in relation to concerns, problems and issues relative to the programme's work. The roles played by the evaluator include training, facilitation, advocacy, illumination and liberation (Fetterman 1993; 1994).

Empowerment evaluation assumes that not all stakeholders in programmes have equal access to power and influence, and that the evaluator needs to facilitate increased power and influence relative to decision-making, and to solving the developmental problems implicit in the work of social programmes. The evaluator's role thus changes as the evaluation unfolds, as his or her understanding of the issues develops, relative to the needs of the various participants in the evaluation process.

3.3 Focuses of the Evaluation
OLSET's aims were not only to develop appropriate materials, but to establish the credibility and legitimacy of its curriculum with the teachers. A coherent theory of curriculum development, linked to an implementation strategy which enabled teacher "ownership" of the materials, was thus necessary (Hawes 1979; Fullan 1982; Fullan and Stiegelbauer 1991; Huberman 1992) if the programme was to be adopted in the schools.

To evaluate these levels in the programme's work, the evaluation was conceptualised as developmental, interactive and participatory (Potter 1993). This implied that the various stakeholders in the programme would be included in the evaluation process from the outset. David Fetterman's suggestions for engaging with programme stakeholders by means of ethnographic interviewing (Fetterman 1982; 1989; 1993) provided the means by which issues relevant to the programme's development, and indicators of the success of the programme, were elicited.

Following Fetterman's suggestions, interviews were first conducted with each of the persons employed within the project's structure, in which they were asked to describe their work and to rate their success in their work. Through this process of self-evaluation, the nature of the issues with which the project team were dealing, and the problems associated with the everyday work of the project, were established.

Those interviewed were then asked to provide information on how they established whether their work was successful. This information provided the basis for establishing criteria relating to the programme's development, and indicators of the success of the project's work, which were directly linked to the current practices of those involved in the programme.

For both project staff and the teachers, one important indicator of the programme's potential was whether the pupils learned English using the materials. Another was how they learned. Other important indicators were how the teachers learned to use the programme, how they used the technology, how the programme linked with the teaching in other areas of the curriculum and, most importantly, how the teaching practices of the teachers changed. Would teaching and classroom management be improved, or would radio learning de-skill the teacher? Would the programme increase or lessen the teacher's involvement in teaching English? Would the programme's introduction be associated with improvement in teaching English across the curriculum, or would the opposite occur?

The initial stage of the evaluation was one of establishing issues and criteria. Besides interviews with project staff and teachers, observation was conducted of the programme's implementation in the classroom. From this it was apparent that the issues with which the project was dealing were not simply materials- and technology-related, but concerned how the new teaching approach was adopted by the teachers, and how the new programme was incorporated within the existing ecology and culture of the school. In common with other forms of educational technology (eg computer-based instruction) it was evident that the radio programmes needed to be contextualised in order to be effective in the schools. Above all, they needed to involve the teacher. Radio programmes without the teacher's active support and reinforcement of concepts would achieve little.

The initial interviews and observation provided a means of progressive focusing on the issues the evaluation needed to address in order to be useful to the project's stakeholders. These were crystallised into the following research questions, which were agreed with OLSET's management and the project team:

  • Is the "English in Action" programme effective in teaching primary English?
  • Are teachers who use the programme empowered, supported in their jobs and assisted in professionalisation?
  • Is there acceptance of the programme by the community, inclusive of teachers, parents, principals and other stakeholders?
  • Are radio and cassette efficacious as a delivery medium?
  • Is "English in Action" having an effect on the school environment?
  • Is the programme cost-effective, and are there economies of scale for national implementation?

4. The 1993 Evaluation

4.1 Data Sources
To answer these evaluation questions, the evaluators had access to evidence from a number of sources. Besides qualitative data based on school visits, classroom observation of lessons, the reports of teachers, principals and parents concerning pupil progress, observation of teacher support groups, interviews with project staff and with teachers, and case studies of the programme in the schools, quantitative data would be available from pre- and post-testing of pupils, conducted in 71 schools. Focus groups would also be conducted in all regions in which the programme operated, involving a range of community stakeholders.

In its formative sense, the evaluation would focus on both theory and empowerment. In terms of theory, those involved in the programme would be interviewed to establish the project's frame of reference, how each person was working in the context of the project's aims, and to establish the developmental problems faced in the project. Following Chen's suggestions (1990), the project's frame of reference and areas of difficulty would be related to existing theory in the literature and to the work and evaluations of other projects working in the field. Following Fetterman's suggestions (1993; 1994), they would also form the focus of a broad-based process of learning and development, involving the project's staff and its stakeholders. The evaluation process would thus be developmental, focusing on issues relevant to the work the project was doing, with the aim of establishing what was workable, what should be done, and how it should be done.

In its summative sense, the evaluation would test how the programme's theory was being applied in practice. It would be based on both quantitative and qualitative data. A socio-economic analysis of the programme's implementation would also be undertaken. This would examine various scenarios for programme implementation, and various models for teacher development and support, with the aim of costing the programme's implementation at scale under various conditions.

4.2 Data Analysis and Integration
Separate analyses of each area of the data were undertaken, and the indications from the analyses then combined into a composite report (Potter, Arnott, Mansfield, Mentis and Nene 1993). A report on the pre-testing was also compiled (Arnott, Mansfield and Mentis 1993 (a)), and a report on the focus groups (Nene 1993). These analyses were supplemented by a series of personal accounts on the development of the programme in the schools. These were brought together into a book of case studies, written by the project team, under the editorship of one of the evaluators (Potter 1993). In addition, a report was compiled on the analysis of pre- and post-test data (Arnott, Mansfield and Mentis 1993 (b)).

In integrating the various analyses into the final report, a theory-driven evaluation framework was utilised (Potter 1992; Potter and Moodie 1991; 1993), in terms of which the data were analysed to establish the congruence between the vision of individual members of the project team, their intentions with respect to the programme they were developing, how these intentions were encapsulated in project policy, and how project policy was implemented in practice. From the analysis, gaps in project theory and conceptualisation, strategic planning and implementation were identified, and the logic, coherence and feasibility of the project assessed. Emerging issues were discussed with the project team, as the basis for establishing a structure within which findings from the evaluation could be utilised, and recommendations implemented.

The main focus of the evaluation lay on the project's curriculum, and the implementation of "English in Action" in the field. From the data, there was evidence of learning gains made by the pupils, as well as favourable indications concerning the perceived value of the programme on the part of the teachers, and of the community. Questions were, however, raised concerning the process of curriculum development in the project. These are outlined in the sections following.

4.3 Evidence from Pre and Post-Testing
The pre- and post-testing of the pupils was conducted using a test of receptive language developed by Arnott, Mentis and Mansfield, who based their instrument on the RLAP test implemented in Imhoof and Christensen's (1986) study of the development of "English In Action" in Kenya. This was implemented as part of a stratified two-stage cluster sample design, based on region, school type, and grouping for experimental and control schools. The test was applied at the pre-test stage in 35 experimental and 36 control schools matched in terms of demographic area and socio-economic background of parents. 2255 pupils in total were tested, of whom 53,9% were in urban schools, 21,9% in farm schools, 14,5% in informal settlement schools and 7% in rural schools (Arnott, Mansfield and Mentis 1993 (a)).

Item analysis was conducted of the pre-test instrument, following which the test was modified. It was then used for post-testing of pupils in both project and comparison schools. The results were then compared with the pre-test scores, in both project and comparison schools (Arnott, Mansfield and Mentis 1993 (b)). Learning gains of pupils in the project schools were found to be highly significant (p < 0,0001), when compared with pupils in the comparison schools who had not had the benefit of the programme. This suggested that pupils exposed to "English in Action" in the project schools had performed significantly better than their counterparts who had not had the benefit of such exposure.

The performance difference between project and comparison schools was 20%. Improvement in post-test scores increased with the number of "English in Action" lessons to which pupils had been exposed: pupils who received fewer than 33 lessons improved by 6,7%, pupils who received between 34 and 66 lessons improved by 13%, and pupils who received more than 66 lessons improved by 24%, relative to their comparison school counterparts. In terms of school type, the greatest learning differentials were shown by pupils in farm schools in rural settings, where school resources, support and training have historically been weakest.

Overall, these results suggested that "English in Action" had developed a more comprehensive grounding in English language for project pupils, as compared to pupils in comparison schools. The effect size of the difference between project and comparison schools was 2,42, suggesting that certain primary goals of the "English in Action" intervention, namely the development of listening skills and receptive language abilities, had been met.
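The effect size reported above is a standardised mean difference between project and comparison schools. As an illustrative sketch only, using hypothetical scores rather than the study's own data, one common formulation of such an effect size (Cohen's d with a pooled standard deviation) can be computed as follows:

```python
import math

def cohens_d(group_a, group_b):
    """Standardised mean difference (Cohen's d), dividing the
    difference in group means by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances, then pooled variance weighted by degrees of freedom
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical post-test scores (not the study's data)
project = [72, 68, 75, 70, 74]
comparison = [55, 58, 52, 60, 54]
print(cohens_d(project, comparison))
```

An effect size of this kind expresses the group difference in standard deviation units, which is what allows a figure such as 2,42 to be interpreted independently of the test's raw score scale.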

4.4 Issues relating to the Classroom and the Curriculum
The evidence from testing indicated that the programme was having effects on the English language competence of the pupils in the required direction. In addition, favourable results from the testing were supported by qualitative data from teacher interviews, questionnaires, focus groups and case studies of participating schools. These data indicated that the programme was having positive effects in all four regions in which it was being implemented. The evidence clearly suggested that the teachers felt that the "English in Action" approach was helpful to their work as teachers, and had benefit for the pupils.

However, observation in the classroom indicated that the programme still provided too little space for the teacher's own contribution. Analysis of the materials, and interviews with the programme writers and production staff, also revealed that there were content weaknesses in the scripts. Few of the project team had a junior primary school background, or training in second language teaching. Thus, though the scripts were well constructed and had high attentional and motivational value with respect to the pupils, there were problems both in the selection of content and in the conceptualisation and development of the scripts and supporting visual materials as a curriculum for introducing English at junior primary level.

Given the project's aim to develop materials to support the teaching of English as a second language at higher levels in the primary school, the dearth of educational background and curriculum development expertise in the project team was problematic. There was, however, a high level of skill with respect to script-writing and production for radio. In addition, the team worked systematically, was highly task directed, and had established a sound managerial structure for the project as a whole.

In terms of relationships with the educational authorities and others working in the field, the project had had a shaky start with respect to establishing credibility in the field. Nevertheless, management had worked to redress these problems, and was networking extensively with other educational projects and with the educational authorities. At management level, the links with both educational authorities as well as with other NGO's working in the field were strong. However, relationships with others working in the field were less well developed at lower levels in the project's hierarchy, and required strengthening.

Overall, despite many assets and the evidence of very hard work, there was a danger that without primary school trained staff and language specialists on the team, the project would not meet its objectives. It was unlikely, in particular, that the project team with its current staffing would produce a language curriculum which could make a major contribution to the education of South African primary school children. In many ways, the difficulties with staffing and orientation reflected the fact that the project team were working in a new area. There were few other educational projects working at the Sub A level. In addition, there was no history of Sub A teachers being involved in teaching English in DET schools, nor was there a developed English language curriculum at this level. The result was that there was no existing infrastructure in the schools of Sub A teachers trained in the skills involved in teaching English as a second language.

With respect to the context of the programme and of the evaluation, the combination of these factors implied a lack of alternative English language teaching programmes in many of the comparison schools at Sub A level. This was not the case at Sub B level, and at levels higher in the primary school curriculum. At these levels, English language teaching was standard practice in the schools, and was the norm rather than the exception. In addition, other English language projects had been involved in working with the teachers at these levels for many years.

The competition OLSET would face at Sub B level from other existing language teaching approaches, in other words, would be greater than at Sub A level, and the comparisons perhaps more realistic. To meet these challenges, the project team would require greater background in applied English language teaching at primary school level, as well as in curriculum development, than was currently evident.

4.5 Issues relating to Teacher Development
As part of its work in 1993, the project had established a structure of training workshops, a teacher support group structure, and a focus group structure in all regions in which "English in Action" was being implemented. Evidence from school visits, interviews with the teachers in the schools and with the regional coordinators, and evidence from case studies of the participating schools indicated that the training workshops had been well received. In addition, both focus group and teacher support group structures had considerable potential as regards the development of community credibility, and the support of teachers in their work.

However, at the same time there were major gaps in the project in the teacher development area. Management had been unable to appoint a suitably qualified and experienced teacher development coordinator. As a result, the various teacher development activities were taking place in the absence of an overall rationale for teacher development, and in the absence of a teacher development curriculum. Though it was evident that the work of the project team had considerable potential as regards teacher development and INSET, the work required greater focus, and needed to be implemented within the framework of an overall curricular and policy structure.

It was thus evident, as at December 1993, that the major gaps in the project lay at the curriculum level. On the one hand, the project's language curriculum needed to be conceptualised in terms of its contribution within the junior primary curriculum. On the other, the teacher development activities supporting the implementation of the language curriculum needed to relate both to a framework for teacher development and to the language curriculum itself. At a third level, the project team needed to engage with the issues involved in second language teaching if they were to succeed.

4.6 Community and Parental Issues
At the community level, there was evidence of strong support for the project. Sbongile Nene's report on OLSET's Focus Group Project (Nene 1993) highlighted overall acceptance of the programme by the teachers, parents and community. The teachers expressed strong support for "English in Action", as assisting them in their work. Parents were supportive of the programme. These indications were supported by evidence from case studies of the participating schools.

The project's work with parents also had great potential. There were clear needs for greater parental involvement in the schools, and it was apparent from the focus groups as well as the teacher support groups that OLSET could contribute to the process of increasing parental involvement in education. This could also be valuable as regards increasing community advocacy for the programme. Both advocacy and financial contributions might be necessary in the future.

Sbongile Nene thus concluded that it was important for OLSET to continue to involve community and educational department stakeholders, together with parents. Besides support of the teachers in the schools, community outreach activities through the medium of the programme could be of great value. Nene recommended that OLSET would need to involve both parental groups and other progressive NGOs in the project's further work in this area.

5. The 1994 Evaluation

5.1 Evaluation Design
During 1994 the evaluation focused on both product and process issues, relative to the central issue of whether the programme was ready for wider implementation. The 1994 evaluation design included formative issues, as well as issues relating to potential impact and generalizability (Chen and Rossi 1980; 1983; 1987; Rossi and Freeman 1985; Chen 1990). The formative emphasis, encapsulated in the questions which guided the 1993 evaluation, however, remained prominent.

As in 1993's evaluation, pre and post-testing formed one aspect of the design, being implemented at the Sub B level where the programme was introduced at the beginning of 1994 for the first time. As with the 1993 evaluation, the main focus of pre- and post-testing lay on receptive language (Arnott, Mansfield and Mentis 1994). In addition, pilot work was undertaken to develop a more comprehensive procedure for assessing the language proficiency of pupils at the Sub B level (Hingle and Linington 1994). This latter work focused on oral production (ie expressive language).

Classroom observation, interviews with teachers and with programme personnel were again conducted. The emphasis on self evaluation within the project also continued, on two levels. The first was at the level of further case studies from the schools (Potter 1994). The second was at the level of response to the criticisms raised in the 1993 evaluation.

In terms of the criticisms concerning staffing and the curriculum, intensive re-conceptualisation and restructuring took place within the project. The details were reflected in a curriculum development document written by the project team (OLSET 1994). This encapsulated the project team's conception of the rationale underpinning its approach to teaching language, the rationale underpinning the radio curriculum, and the rationale underpinning the teacher development side of its work. The process of writing the curriculum document involved all staff of the programme, and in addition a number of external curriculum consultants.

As with the 1993 evaluation, the aim was that external evaluation would be supplemented with a number of internally generated self evaluation documents written by the project team, reflecting what Chen (1990) has called the descriptive theory underpinning the project's work. The project team's curriculum development document was an example of this, reflecting the principles underpinning curriculum design and development in the programme, as well as an emphasis on issues relating to policies on implementation and teacher development.

In addition, a series of case studies were written by the regional coordinators, working with teachers, principals and parents in the participating schools. These were then combined with contributions from the project team (Potter 1994 (c); 1995), with the aim of providing perspectives on the programme as implemented in the schools.

Besides a continuing formative emphasis on curriculum and materials development, the 1994 evaluation also focused on issues relating to the status of the project's curriculum as regards wider implementation. The evaluation focused on both product and process aspects of implementation, with the aim of reflecting the emerging prescriptive theory of the project (Chen 1990). This emphasis can be found in those project documents and evaluation reports which focus on the wider implementation environment, the delivery medium, and issues of scale (eg Myers 1984; Cobbe 1994).

In summary, as in the 1993 evaluation, the main focus of the evaluation design in 1994 was formative in character. The evaluation design was based on the comprehensive theory-driven evaluation approach suggested by Chen and Rossi (1980; 1983; 1987) and Chen (1990), with which the self-evaluation model used in 1993 (based on Fetterman's suggestions) was compatible.

In terms of this design, the remaining sections of this report summarise indications from the various documents written as part of the process of evaluation during 1994. Certain of these documents were written by members of the project team, and certain by the external evaluators. An overall assessment is then provided of the programme's status as at the end of 1994.

5.2 Data Analysis and Integration
As with the 1993 evaluation, the procedures for analysing and integrating the data were based on the multi-trait multi-method procedures suggested by Campbell and his co-workers (Campbell and Stanley 1963; Cook and Campbell 1979; Cook 1986), the procedures for triangulation suggested by Denzin (1970; 1978), the procedures for analysing and integrating qualitative data suggested by Patton (1980; 1987), Guba and Lincoln (1981; 1983), Miles and Huberman (1984; 1994), Fielding and Fielding (1986) and Sowden and Keeves (1988), and the case aggregative methodologies described by MacDonald and Walker (1974), Lucas (1974 (a) and (b)), and Simons (1987).

In terms of this rationale, indications from analysis of the various data sources and reports from the 1993 evaluation were combined with those from the 1994 evaluation, each being treated as separate "cases" or cells. The analysis was conducted with the overall purpose of examining programme theory in relation to implementation evidence, to reach conclusions as to the potential impact and generalizability of the "English in Action" programme in its intended implementation context.

5.3 Evidence from the Testing of Receptive Language
Pre and post-testing of the receptive language of pupils was conducted in 1994, in 20 project schools and 23 comparison schools (Arnott, Mansfield and Mentis 1994). As in 1993, the testing was applied as part of a stratified two-stage cluster sample design, based on region, school type, and grouping for experimental and control schools. The project schools were randomly selected from the 105 schools in which the programme was being implemented nationally. The comparison schools were then matched with the selected project schools by school type (urban, rural and farm schools) and by geographical area.
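The stratified two-stage cluster sampling described above can be sketched in code. The sketch below is purely illustrative: the stratum names, stratum sizes and school identifiers are invented (only the overall sample sizes of 20 project and matched comparison schools come from the report), and the matching step is simplified to random draws within the same stratum.

```python
import random

# Illustrative sketch of a stratified two-stage cluster sample: project
# schools are grouped into strata (region x school type), drawn at random
# within each stratum, and each selected project school is then matched
# with a comparison school from the same stratum. All names and stratum
# sizes below are hypothetical.

random.seed(1994)

# Stage 1: stratify the project schools by region and school type.
project_schools = {
    ("Region-A", "urban"): [f"P-urban-{i}" for i in range(30)],
    ("Region-B", "rural"): [f"P-rural-{i}" for i in range(55)],
    ("Region-C", "farm"): [f"P-farm-{i}" for i in range(20)],
}
comparison_pool = {
    ("Region-A", "urban"): [f"C-urban-{i}" for i in range(40)],
    ("Region-B", "rural"): [f"C-rural-{i}" for i in range(60)],
    ("Region-C", "farm"): [f"C-farm-{i}" for i in range(25)],
}

def draw_sample(n_per_stratum):
    """Stage 2: randomly select project schools per stratum, then match
    each with a comparison school drawn from the same stratum."""
    pairs = []
    for stratum, schools in project_schools.items():
        selected = random.sample(schools, n_per_stratum[stratum])
        matches = random.sample(comparison_pool[stratum], len(selected))
        pairs.extend(zip(selected, matches))
    return pairs

pairs = draw_sample({("Region-A", "urban"): 6,
                     ("Region-B", "rural"): 10,
                     ("Region-C", "farm"): 4})
print(len(pairs))  # 20 matched project/comparison pairs
```

In the actual study, matching was done deliberately on school type and geographical area rather than by random draw; the sketch shows only the overall sampling logic.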

In all project schools in the sample, the teachers had implemented the programme at Level One in the previous year. They were all attending teacher support group meetings, and some were also implementing other English language programmes (eg "Day by Day") besides "English in Action" as part of the Sub B programme. In the comparison schools, in contrast, the Sub B English language curriculum was being implemented with no support from OLSET staff, but with the support of education department personnel, and, in some cases, support from other language projects.

The pre-test consisted of a 34 item instrument divided into four sections:

a. Listening comprehension;
b. Pre-reading skills involving number, letter and grapheme-phoneme identification;
c. Visual matching of words and letters; and
d. Reading words and sentences.

Each of these sections contained its own practice items. The test as a whole was designed to focus on various aspects of receptive language, in oral and written form.

The pretest was administered in the project and comparison schools between January and April 1994. Based on item analysis, a shortened and more difficult post-test was developed. This consisted of 30 test items, and was divided into three sections. In the first two sections, the pupils were required to listen to a tape-recording, and mark off the correct answer in their test booklets. In the third section, pupils were required to visually identify the answer corresponding to the test item.
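The item-analysis step mentioned above can be illustrated with a classical difficulty index: the proportion of pupils answering an item correctly. Items answered correctly by nearly everyone on the pretest are too easy, and dropping them yields a shorter, harder post-test. The pupil-by-item response matrix below is entirely hypothetical.

```python
# Sketch of classical item analysis: the difficulty index p of an item is
# the proportion of pupils answering it correctly. Items with very high p
# (too easy) are candidates for removal when building a harder post-test.
# The response data here are invented for illustration.

responses = [  # rows = pupils, columns = items (1 = correct, 0 = wrong)
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [1, 1, 0, 0],
]

def difficulty_indices(matrix):
    """Proportion correct per item (higher = easier)."""
    n_pupils = len(matrix)
    return [sum(col) / n_pupils for col in zip(*matrix)]

p_values = difficulty_indices(responses)
# Keep only items that are not too easy, say p <= 0.90.
harder_test_items = [i for i, p in enumerate(p_values) if p <= 0.90]
print(p_values)           # [1.0, 0.75, 0.25, 0.75]
print(harder_test_items)  # item 0, answered correctly by everyone, is dropped
```

The actual item analysis used in the study is not documented in detail in the sources cited; this sketch shows only the general principle.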

The post-test was administered in the schools in September 1994. Unlike the 1993 analysis, which was based on comparison of the paired responses of individual pupils, analysis of pre- and post-test data was conducted in 1994 by class averages.

The analysis revealed that the project school pupils performed significantly better than the comparison school pupils (p < ,05). This indicated that the superior performance of the project pupils over their comparison school counterparts demonstrated at the end of 1993 had been sustained. Though statistically significant, it should be noted that the difference between the performance of project and comparison school pupils at Sub B level was less than had been found in the previous year's analysis at Sub A level.

The learning gains of pupils in the project schools at Sub B level were on average 5% greater than those of pupils in the comparison schools, who had experienced English language programmes other than "English in Action". As in the 1993 evaluation, the learning gains were greatest in the rural schools. Urban pupils in the project schools began the year with average scores of 75,6% on the pretest and improved by 6%. Rural pupils in the project schools, in contrast, began the year with average scores of 59,4% and improved by 24,9%, reaching similar performance levels on the post-test to their urban school counterparts.

These results suggested that those pupils beginning with the least advantages appeared to derive the greatest benefits from the programme. Overall, the effect size of the difference between project and comparison schools at Sub B level was ,71. For meta-evaluative purposes, this indicated less effect of the programme at Sub B than at Sub A level (where the effect size of advantage of project schools over comparison schools had been 2,42).
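The effect sizes quoted above can be read as standardised mean differences (Cohen's d): the difference between the project and comparison group means, divided by the pooled standard deviation. A minimal sketch of the computation on class averages, using invented class-mean scores (not the actual study data), is:

```python
import statistics

# Sketch of the class-average comparison and effect-size computation.
# Cohen's d = (mean_project - mean_comparison) / pooled standard deviation.
# The class means below are hypothetical, for illustration only.

project_class_means = [68.0, 72.5, 75.0, 70.0, 66.5]     # invented
comparison_class_means = [63.0, 65.5, 61.0, 67.0, 64.5]  # invented

def cohens_d(a, b):
    """Standardised mean difference between two groups of class means."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

d = cohens_d(project_class_means, comparison_class_means)
print(round(d, 2))  # 2.13
```

An effect size of ,71, as found at Sub B level, is conventionally regarded as a moderate-to-large effect; the Sub A figure of 2,42 is exceptionally large.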

A number of factors may have contributed to the lower effect size at Sub B level. The programme started late in the year, time being spent in many schools over the first term completing areas of the first level programme. The results may also have been an artefact of the test procedures used. Urban pupils scored far higher on the pretest than their rural school counterparts. Whereas the post-test scores reflected a high margin of gains for rural schools, the same was not true for the urban schools.

Arnott, Mansfield and Mentis also suggested in their report that the smaller difference found at Sub B level could have been due to a number of other factors, among them disturbances during the year of the election, the shorter period of exposure to "English in Action" Level Two relative to the exposure afforded pupils in the previous year, and the influence of other English language programmes at Sub B level.

Viewed together, however, the 1993 and 1994 analyses indicated that "English in Action" would be able to make a contribution to developing the English language competence of pupils in the early stages of the junior primary school, at both Sub A and Sub B levels. The performance advantage demonstrated at Sub B level in the 1994 analysis, though less than the performance advantage demonstrated at Sub A level in the 1993 analysis, is still an important indicator. The results of the 1994 analysis suggested that, while pupils in both project and comparison schools gained from their English language programmes, those in the project schools gained more. The results also indicated that the project school pupils sustained the performance advantage they had acquired in the previous year at Sub A level, when compared to their comparison school counterparts.

Both the 1993 and 1994 results quoted in this subsection were based on analysis of a single year's testing of Sub A and Sub B levels respectively. This was a limitation, and the testing should if possible be replicated. There would be value in such testing being conducted as part of a broader evaluation of the impact of "English in Action" in rural as opposed to urban schools, and in combining the type of paper and pencil tests used in the Arnott, Mansfield and Mentis studies with the more in-depth language analyses reported in the subsection following.

5.4 Evidence from the Testing of Oral Production
During 1994, additional language testing was conducted at Sub B level (Hingle and Linington 1994), using a test of oral language production developed specifically for the purpose. The work was conducted with small samples of pupils drawn from 5 project and 5 comparison schools. As with the Arnott, Mansfield and Mentis study (refer previous section), attenuation of the samples between pre and post-testing took place, with the result that a large proportion of the pupils pre-tested could not be post-tested.

For this reason, group comparison of pre and post-test scores was not attempted. In-depth analysis of the test protocols was conducted. In addition, analysis of pre and post-test scores on a case by case basis was undertaken.

These analyses yielded a number of indications. The first was that gains in language fluency were made by pupils in both project schools and comparison schools. Viewed on a case by case basis, there were no indications across the sample as a whole that the project pupils had gained in fluency to a greater extent than the pupils in the comparison schools. There were, however, indications that project pupils in rural schools had made greater gains than their comparison school counterparts.

There were also indications that pupils in the project schools exhibited a greater variety of grammatical structures in their oral output than pupils in the comparison schools. This was particularly marked in project pupils in the rural schools, who used a greater variety of grammatical structures than their comparison school counterparts. As with the indications on fluency in the previous paragraph, however, these indications are very tentative, and may have been due to sampling or chance factors.

In interpreting these results, the small size of the sample should also be borne in mind. The aim of the investigation was exploratory, and no claims can be made beyond this level. The indications reported in this section may have been due to chance, and would need to be investigated more fully.

The difficulties implicit in testing oral production, and in conducting analysis of oral language samples should also be noted. Both are labour intensive, and yield indications which may be difficult to replicate at scale. Nevertheless, there is potential in the in-depth data yielded by analyses undertaken on a case by case basis. The procedures developed in the Hingle and Linington study were able to yield a far more detailed picture of the language abilities of pupils in the samples than paper and pencil tests.

There would thus be value in supplementing large-scale testing using paper and pencil tests with further in-depth testing of the language of smaller samples of pupils in project and comparison schools. There would also be value in conducting such an investigation as part of a wider analysis of the impact of "English in Action" in rural as opposed to urban schools. Reference has already been made to this earlier in this report (refer previous subsection).

5.5 Issues relating to the Classroom and the Curriculum
Viewed in relation to the 1993 test data, the indications from the 1994 testing indicated that the programme was having positive effects on the English language competence of the pupils, and had potential for wider use. In addition, qualitative data from the schools remained positive. Evidence from the teacher interviews, questionnaires, teacher support groups, focus groups and case studies of participating schools indicated that the teachers felt that the "English in Action" approach was helpful to their work as teachers, and had benefit for the pupils.

One of the problems raised in the 1993 evaluation had been the nature of the curriculum, and its relationship to the teachers' work in the classroom. Another had been the space provided by the radio programme for the teacher's own contribution. Interviews with the programme writers and production staff had indicated that there were content weaknesses in the scripts. Problems with respect to the qualifications of the project team, and their experience in the junior primary phase, had also been raised.

During 1994, there was evidence that the project team responded to these criticisms. With respect to materials development, OLSET employed a teacher with applied linguistic training to join the script-writing team. In addition, the script-writers checked their work with outside consultants with primary school training and applied linguistic background. A new approach to writing materials was adopted, and implemented at Level Two of the programme. This placed greater emphasis on a story to carry a central theme, to which the various supporting materials were related.

The Level Two materials created through this process allowed far greater space for the teacher's individual contribution and involvement. This was reflected in the scripts, as well as in supporting teachers' manual and classroom materials. These contained a far greater emphasis on using media of various types (eg audio-cassettes, comic books, posters, workbooks) in combination.

In terms of the wider processes of planning and curriculum development, the project team employed a primary school consultant, who had wide experience of language teaching and project development, to work with the staff. One of the outcomes of this work was a curriculum document, which was significant not only because it outlined a coherent vision for the project's conceptualisation and development, but also because it laid out clear policies with respect to how the project would be implemented. This was linked to a socio-economic analysis of the costs and benefits of the project's work, which was undertaken by James Cobbe of Florida State University (Cobbe 1994).

The project also followed up on the evaluators' recommendation that a separate analysis of the materials should be undertaken, and commissioned Letta Mashishi, an applied linguist with wide experience of working with primary school teachers, to undertake an evaluation of the materials. It was thus evident that the project team took action in relation to the criticisms raised in the 1993 evaluation, in a number of areas. These included development of a new approach to materials writing at Level Two, as well as the development of a greater coherence in the materials, related to an underlying theory of second language teaching. As at December 1994, revision of Level One materials had not been conducted. Letta Mashishi's analysis (Mashishi 1994) provided clear indications that this was necessary.

5.6 Issues relating to Teacher Development
The 1993 evaluation had indicated that the project team had established a structure of training workshops, a teacher support group structure, and a focus group structure in all regions in which it was working. The project had not established a policy framework relating to the teacher development side of its work, and in addition had been unable to appoint a teacher development coordinator, with the result that different aspects of INSET in which the project team were involved lacked structure and coherence.

In May 1994, OLSET appointed a teacher development coordinator, to oversee the teacher development side of the programme, and to develop an integrated implementation strategy, working with the regional coordinators in the field. The evidence from our school visits, interviews with the teachers in the schools and with the regional coordinators, and evidence from case studies of the participating schools in 1994 indicated a far greater coherence in the teacher development side of the project's work than had previously been the case. This was evident both at the level of planning and at the level of implementation.

At the level of planning, the project's curriculum document included a strategy for INSET as an integral part of the implementation of the programme in the schools, while at the level of implementation a major shift took place in the teacher support groups. After the appointment of the teacher development coordinator, these were conceptualised as a vehicle for in-service training, focused on issues relating to classroom management, as well as language teaching activities which could support the programme in the classroom. In this new format, the teachers met together at a central venue, and were involved in workshop-type activities.

The response of the teachers to this new format was generally very favourable, though there was some negative comment on the length of the workshops. As a result, suggestions were made in the 1994 evaluation that, in addition to their focus on general language teaching issues and activities, the teacher support groups should focus more on the detail of the radio lessons. In terms of this format, the workshops might involve sharing ideas on how to prepare lessons, sharing experiences of teaching particular lessons and levels in the programme, as well as methods of teaching supporting lessons and language development skills. Letta Mashishi made reference to this in her evaluation report (Mashishi 1994).

Overall, the evidence from observation of teacher support groups, interviews with teachers, and questionnaires completed by the teachers indicated that, as in 1993, the INSET side of the project's work had been beneficial. Teacher training workshops had been well received. In addition, both focus group and teacher support group structures had considerable potential as regards the development of community credibility, and the support of teachers in their work in the classroom.

The need for these types of teacher development activities was central both to the current implementation of the programme and to the planning of its implementation at greater scale in the future. Evidence from observation of the programme in the classroom indicated clearly the need for OLSET to link its materials to an ongoing INSET curriculum aimed at changing the classroom practice of teachers. For "English in Action" to be successful, the evaluators recommended, a carefully conceptualised programme of INSET was necessary, one which was sustainable and an integral part of the overall implementation strategy.

5.7 Community and Parental Issues
At the community level, the evidence from the 1994 focus groups and case studies indicated continuing support for the project. As in 1993, the evidence suggested overall acceptance of the programme by the teachers, parents and community. Teachers continued to express strong support for "English in Action", as assisting them in their work. Parents also appeared, from the reports of the teachers and principals, to be supportive of the programme.

In the 1993 evaluation, the evaluators suggested that the project's work with parents had great potential. There were clear needs for greater parental involvement in the schools, and it was thus suggested that OLSET could contribute to the process of increasing parental involvement in education. This could be valuable as regards increasing community advocacy for the programme.

While the evidence from focus groups and case studies in both 1993 and 1994 indicated continuing support for the programme at the level of advocacy, the evaluators noted that the development of parental programmes (eg reading clubs) had not been a priority for OLSET. It was thus suggested that parental support (eg the ability to structure and support the homework tasks of children after school) could greatly assist the development of the language arts curriculum at primary school level, as a basis for successful schooling.

6. Product and Process Issues


In 1993 the project team were involved in materials development in the form of recorded radio lessons with a teacher's manual, workbook and supporting posters, with a very limited emphasis on the carryover of these materials into the classroom curriculum. In 1994, in contrast, the project team undertook development of a variety of curricular materials for classroom use. These included recorded radio lessons, teacher's manuals, workbooks, a comic reader, a readiness kit and posters. In addition, the project team became involved in the development of a curriculum for use in the In-Service Training of Teachers.

Evidence from the evaluation indicated tensions between materials in concept and materials in use, but that the majority of the teachers were supportive of the programme at both the classroom and the In-Service Training levels. The evaluation also indicated that materials development formed the focal point of the work of the majority of OLSET's staff, while at the same time forming the point of contact between the realities of the teachers in the schools, and the project team.

Classroom observation, interview and questionnaire data indicated that the majority of the teachers switched off the tapes during the lessons and taught the concepts themselves whenever it was apparent that the pupils were not understanding the material. About half the teachers also repeated lessons. The pattern of usage was thus to use the materials as a framework for classroom teaching, which differed markedly from the conventional usage associated with radio transmission.

In addition, interviews with the project team indicated that there was increasing emphasis in the "English in Action" materials, as well as in the materials being developed on the mathematical side of the project's work, on an approach in which teachers used the audio materials to support their own teaching. This approach, which encouraged the teacher to use the materials as a guideline, was fundamentally different to an approach in which the radio teacher took control of the classroom for 30 minutes, and the classroom teacher responded within limited parameters set by the radio teacher.

This tension in the programme's methodology indicated that radio was one of a number of options for "English in Action", which might or might not turn out to be either the most successful way of introducing the programme or the preferred choice of teachers. As at the end of 1994, the effects of radio as opposed to tape transmission on the quality of classroom implementation had not been established. The programme in radio format needed to be trialled, and these trials needed to be evaluated against other forms of materials delivery (eg cassette format), as well as against comparison schools in which other viable options for teaching English were being introduced.

As at the end of 1994, "English in Action" on the product level was a comprehensive curriculum of teacher upgrading and support, which delivered audio materials to the schools as one part of the service it provided to teachers, and via teachers to their pupils and their parents. On the process level, the point at issue was not only how to deliver a multi-media programme to the schools cheaply, but also how to deliver and support an INSET programme which developed and enhanced the capacity of the classroom teacher to teach.

At time of writing the final evaluation report, there was no clarity as to the priorities of the new education authorities, and how these would translate into funds available for primary education. In this context, the multi-media, multi-delivery option (as opposed to the radio option) seemed to be the most appropriate to OLSET's current stage of development, and the option that offered the most strategic possibilities.

7. Conclusions

7.1 Summary and Evaluation
The evaluation summarised in this paper focused on the development and implementation of "English in Action" from its pre-pilot phase in 1992, and over the first two stages of its pilot development. The major focus of the evaluation lay on the programme's curriculum and materials, as these had been the main focus of the work of the project team over the past three years. However, the evaluation also focused on organisational aims, process and capacity, as intrinsic to the process of curriculum and project development.

The evaluation design had both formative and summative aspects. The main emphasis, however, lay on the developmental side of OLSET's work. David Fetterman's suggestions with respect to ethnographic interviewing formed the means by which issues relevant to the work of each member of the project team were established. Related to these issues were criteria of success, and indicators of quality, relevant to the work of the project team, and the other stakeholders in the project.

During 1993, it was evident that there were major gaps in the way in which the project's curriculum was being conceptualised. While the project team were highly skilled in the technical side of materials writing and production, these activities were taking place in the absence of a coherent policy framework as regards language teaching. There were similar gaps with respect to the teacher development side of the project's work. In both areas, there was a need for the development of a guiding programme theory, which would provide the necessary framework within which the work of the staff, and the overall innovation, would take place.

At the end of 1993, recommendations were made that the programme should place emphasis on the development of a coherent policy framework which could guide the work of the staff. Our observation was that the project team were hard-working, but that there was a danger of dissipation in the absence of such a central framework. Given the focus of the project's work on developing a language curriculum for the lower primary school, there was a need for the employment of staff with a primary school background, and staff with a background in language teaching. There was also a need for greater focus on teacher development than had been the case.

Over 1994, the project team took a number of steps to address these criticisms. One positive result was that staff, as well as consultants, with primary school and applied linguistic training were employed to assist in the programme's development. Another was the employment of a teacher development coordinator. At the level of management, there were also changes aimed at bringing in greater educational expertise. As at December 1994, the evidence suggested that the project team as a whole were more aware of language development and curricular issues, which reflected positively at both the policy and materials development levels.

At the end of 1994, it was clear that the project team were working from the standpoint of a clearer and more coherent theory of development than was the case a year previously. This was exemplified at the policy level in the project's curriculum document, written in the latter half of 1994. More importantly, at the materials development level, there were clear indications that "English in Action" Level Two had been written with language teaching theory in mind. Supporting materials implementation at classroom level, in-school visiting and teacher support groups had been implemented with clearer aims, which in turn related to the notion of a teacher development curriculum. This was conceptualised as supporting the implementation of the programme at both school and classroom levels.

As at December 1994, the status of the programme was as follows:


a. Level One materials had been trialled over a two year period and required revision, while Level Two had been implemented at the pilot level for one year only. Gains had been demonstrated in project pupils as compared to pupils in comparison schools, and these gains were greater at Sub A than at Sub B levels. There were, however, still a number of difficulties in the materials which needed to be ironed out (Mashishi 1994).
b. It was evident that there was a far more coherent project theory with respect to teacher development, which was being translated into an INSET curriculum. This was at an early stage of development, but had considerable potential. As currently conceptualised, the INSET curriculum was being implemented in workshops run within the teacher support group framework established in 1993. This still required refinement and revision. Suggestions were made in the evaluation reports as to ways in which this side of the project's work could be improved.
c. It was evident that the staff as a whole had a far greater awareness of the issues involved in language teaching and curriculum development than was the case a year previously. As this was a qualitative and ongoing process, it was important that the processes of contact and dialogue between staff which had accompanied the process of curriculum development were continued.

OLSET's aims were to develop a viable curriculum which could be implemented widely as a means of teaching English language at the primary level. By the end of 1994, it was apparent that there were, in fact, two curricula, one for the classroom, and one for the in-service training of teachers in workshops and teacher support groups. In terms of its classroom curriculum, two levels of materials had been developed. These were still at the pilot stage, and required revision. However, the evidence from the evaluation suggested that the 1994 materials were far more coherent than those produced the first time around, in 1993.

Were the classroom and in-service training curricula ready for wider implementation? The evaluators recommended that the next stage should involve revision of the materials base, and implementation of the existing materials via various delivery media (eg via radio; via cassettes). This would need to be accompanied by evaluation of the processes necessary to support the implementation of the programme in these different forms. This recommendation implied that ongoing research into implementation would take place simultaneously with revision of the project's existing materials base and ongoing materials evaluation.

Had the project met the needs of teachers and pupils? Even in its unrevised form, there was overwhelming evidence that teachers were supportive of "English in Action", and had used the materials to the benefit of their teaching, and their pupils. The evidence from the evaluation also indicated that the project had developed an infrastructure for teacher support which had the potential to improve teaching and learning in many classrooms.

8. Meta-Evaluation

The evaluation described in this paper was based on a multi-method design, and was primarily formative in character. It was based within a theory-driven perspective, and relied on interviews, observation and documentary analysis as the means by which issues relevant to the programme's development were elicited, and psychometric testing, interviewing, observation, case studies, questionnaires and documentary analysis as the means of gathering evidence relevant to these issues.

David Fetterman's approach to evaluation focuses on empowerment. With respect to those involved in programme development, this implies that evaluations should be participatory, and should develop participants' ability to deal with the issues and problems of their everyday work. It also implies that organisational structures should be created to sustain the process of self-evaluation on which empowerment evaluation is based.

In the sections which follow, three of the authors provide comment on the evaluation process as it affected those with whom they worked in the programme. Together, these accounts indicate that many of the persons involved in the programme have emerged from the evaluation in a stronger position relative to their daily work, and that the structures established through the evaluation have continued to function as part of the programme's structure. This has benefitted OLSET at a materials development as well as at an organisational level.

8.1 The Involvement of Teachers in the Evaluation
I am Ruth Dube, and I am the regional coordinator of the programme in the Gauteng region. As a regional coordinator, my duties involve implementation, monitoring and classroom support on a daily basis. I am involved in teacher training workshops and in delivering all materials at these workshops. Thereafter, I go around the schools doing lesson observations and giving other forms of classroom support to the teachers.

When we started, the teachers felt threatened by the radio and thought it would replace them completely. But with lots of practice, the teachers now feel that the radio is there to help them teach English, which has in the past been a very difficult subject to teach.

The teachers find that the radio programmes are flexible, with the result that they find teaching easy and enjoyable. They don't stick to the suggested method in their lessons but have room to apply other methods in their classes. The programme thus appears to be meeting the needs of the teachers with respect to providing structured materials which are useful in the classroom context. These materials are Pupil Workbooks, Teacher Notes, Comic Readers, Tapes and Posters and the Readiness Kit. At the same time the teacher development side of the programme concentrates on the development of teachers' own teaching methods and materials for teaching English, which is the focus of Teacher Support Group meetings.

The teachers have also been actively involved in the evaluation of the programme in the Teacher Support Group meetings. As a regional coordinator I go out to schools to do lesson observations. During these visits I get to talk to teachers face to face on an individual basis and get feedback on what they are experiencing in their classrooms. This could be problems, or suggestions about improving the lessons.

I then take these suggestions to the teacher training team and we draft a programme which incorporates most of the concerns raised, which then form the agenda for the Teacher Support Group meetings. These meetings are held once per month in each area. We try by all means to generate discussion so that the teachers can find solutions to their problems. It is amazing, because the teachers think that they don't have any answers to their problems, yet they have many good ideas.

As the teachers give us ideas on how certain activities can be improved they are actively evaluating the programme. This process is very important for our ongoing evaluation.

We also organise Focus Group meetings towards the end of the year where all the stakeholders are invited to share views on the programme. We invite the Education Department personnel, parents, teachers, principals, community leaders, health personnel and the community at large to help with ideas to revamp the education system to suit the child. These have been very fruitful because almost each group of stakeholders is represented at these meetings and lots of issues are discussed.

Teachers have expressed appreciation for the assistance they have got from the radio programmes. At first they thought the radio was a threat to them because they felt it would replace them in the classroom. As they got familiar with the lessons, they saw that the radio was there as a tool to assist them in teaching. It addresses both the children and the teachers (ie there is a lot of interaction between the three: teacher - pupil - radio).

The teachers pose questions and find solutions themselves by discussing the problems in small groups and report back in a plenary session. This improves communication between the teachers themselves and myself too. They gain a lot of confidence because of interacting with their colleagues and sharing ideas at these meetings.

The teachers are also involved in formative evaluation by way of the Focus Group meetings. Here the community at large is involved, the parents, principals, Department of Education, and the teachers themselves. At class level, the teacher organises parents to come and observe a demonstration lesson. At the end of the day, the teachers discuss the lesson, and the education of their children arising from the contact.

At regional level, we invite parents to the Focus Group meetings and information is gathered at these meetings. Parents have a lot to say about what the programme has done to their children's education and how they think it could be improved.

Another way of evaluating the programme which involves the teachers is by Case Studies. These are based on stories of the teachers' and principals' involvement in their schools, and how they use "English In Action". We design questionnaires with the purpose of providing guidelines for the teachers as to what sort of information we would find useful. The type of questions we ask are as follows:

  • What activities have you been involved in from the time you did your training as a teacher to the present?
  • What experiences have you had while teaching?
  • What experiences have you had since joining the school in which you are currently working?
  • How have you taught English previously?
  • How are you currently working with "English in Action"?
  • How does "English in Action" link with your teaching in other areas of the curriculum?
We have had many interesting accounts based on the experiences of teachers. There have also been changes within the schools. Some teachers say that because of the "English in Action" lessons, the principals schedule their meetings so that there will be no interference with the programme. However, others say that some of the principals have divorced themselves from the programme because of lack of interest. In certain cases, teachers have wooed these principals in such a way that everyone is now involved in the programme.

The case studies are bound into a book which is brought out annually. This has been useful for both our own internal evaluation as well as for external evaluation purposes.

Gordon Naidoo, who is the programme director, will add to what I have just said from a Management perspective.

8.2 How Evaluation has Assisted in Organisational Development
In 1993, OLSET management commissioned Charles Potter and others to conduct an independent evaluation of both its programme development and the project per se (ie its organisational side as well as its radio learning programmes). The rationale for the "outside evaluation" was the perceived value of a reliable and "objective" assessment of goals versus process. Consensus existed at management level concerning the importance of such an independent evaluation. However, there were concerns. These related to the predominant tendency of evaluations to be dispassionate and bureaucratic in character.

There was a particular risk that an evaluation would adopt a rigid theoretical posture, rather than focus on the issues involved in development, and the nuances and compromises involved in attempting ESL teaching through the medium of radio. OLSET's management believed that radio teaching was an imperative in South African education, due to its potential as a means of reaching out to disadvantaged communities in urban as well as remote rural areas. As traditionally used in the classroom in many other areas of the world, however, radio learning potentially posed a challenge to the teacher's domain and decision-making, and could usurp time and space in the school day if not integrated with, and relevant to, the teaching delivered in other areas of the curriculum.

For the above reasons, teacher development was conceptualised as forming a central focus in the implementation of "English in Action". This was particularly necessary as the ethos of teaching in South African schools had substantially dissipated in the face of sustained unrest in schools throughout the country, since the educational crisis of the mid seventies. In terms of the need for teacher development, there were dangers in the prospect of a "soulless" and technicist approach to evaluation, removed from appreciation of school-focused issues, of the processes involved in teacher development, and the need for teacher development to link directly with teaching as it occurs "on the ground."

There were also issues of adoption and wide-scale implementation, and the need for revision in the existing school curricula developed during the apartheid years. Implementation at scale needed to link to a coherent and implementable system of in-service training. Conventional in-service teacher training under the previous government's administration had been of a "short-course" variety, and had left a great deal to be desired, both with respect to its relationship with what actually took place in teachers' classrooms, as well as its relationship to a relevant curriculum endorsed by a legitimate national (or provincial) educational authority.

At best, existing state and provincial interventions with respect to in-service training of teachers had proved to be little more than ad hoc, knee-jerk responses to a multi-causal, national problem. Due to the lack of a coherent vision on the part of the educational authorities, educational projects such as our own were working to fill the gap. Consequently, several often incompatible in-service models prevailed simultaneously within a single school or cluster of schools. OLSET's initiative in subscribing to a distinctly different delivery medium (radio) risked being merely an addition to the chorus of existing in-service voices.

Interactive radio learning as conceptualised by OLSET project management was a departure from the more "behaviouristic" models of its counterparts in other developing countries (most notably in Latin America and elsewhere on the African continent). Much of the literature on existing approaches to radio learning focused on content issues. In South Africa, such an approach posed dangers, as it would inevitably resonate with the existing content-based curriculum paradigm of the previous education authorities. This paradigm was diametrically opposed to more "constructivist" models, in which greater emphasis was placed on the processes of teaching and learning, the content in the curriculum forming a framework for the exploration of these processes.

In short, OLSET's management, while recognising the importance of evaluation to the project's development, was perturbed at the possibility of an evaluation exercise which would be antithetical to the issues involved in a new form of design in interactive radio. The approach adopted in the evaluation was thus a key issue, in order to avoid an evaluation which was destructive to the project.

OLSET's "English in Action" project held as its primary objective, development of a pertinent methodological approach to ESL teaching, as well as enhancement of teaching skills in English. To this end, the project embarked on a course of programme development attuned to both teacher and learner needs, using existing teaching and classroom practices as the point of departure. In essence, the conceptualisation and development of the programme was informed by the aim of teacher enablement to cope with the new ESL teaching methodology, which was being introduced in still largely conventional school settings.

OLSET's teaching methodology constituted an eclectic mix of approaches, acknowledging teachers' existing classroom practices, while attempting to enable a gradual change in those practices, and a paradigm shift. Thus the intervention aimed to be "low threat" and offer a comfort zone to the corps of predominantly under-qualified teachers. The aim was to constructively engage them in a process of skills upgrading, linked to the implementation of the programme. A balance was also sought between existing and new practices. In-servicing would need to be introduced without alienation and disruption of daily schooling. Great store was placed on the need to maintain uninterrupted school attendance. This was especially important due to the considerable disruption in schooling which had taken place over the past two decades, and the lack of a culture of teaching and learning in the schools as a result of political turmoil.

OLSET's approach in its materials has been criticised (Mashishi 1994), and the eclectic approach adopted at present departs from "communicative approaches" in many respects. One of the more controversial decisions taken was to include repetition and choral responses in a number of places in the materials. This has been interpreted as a subscription to retrogressive teaching methodologies. We believe, however, that our decision is justified within an eclectic approach, which proceeds from the need to acknowledge the nature of teacher practices as a starting point. The materials also reflect many ways of introducing concepts which are of a constructivist nature, which are not part of teachers' current practices. The involvement of teachers in Teacher Support Groups has led to many of these new practices being incorporated into their classroom organisation, and their own lesson materials.

We would also argue that our approach has been justified from the results we have seen in working with teachers. Teacher practices have been instructive in informing the development of both our programmes and the in-servicing models we have used. Regional coordinators run teacher support groups with the teachers, and feed back opinions and suggestions from the teachers and other stakeholders which are incorporated where relevant, into the design and content of our materials. Involvement in the process of evaluation of the materials, in turn, leads to increased "ownership" of the material and content on the part of the teachers.

What were the perceptions of the evaluators, and how has the evaluation process assisted the development of the "English in Action" programme, and the project more generally?

With the wisdom of hindsight, the participatory approach adopted in the evaluation has acknowledged the classroom as the starting point for teachers' involvement in the project. Emphasis on existing practices and classroom-related problems has empowered the teachers and children alike.

What has been the usefulness of evaluation in programme/organisational development?

The role of evaluation has been to point in the right direction. It has taken an organic view of the development of the project, in which different sections and individuals (eg curriculum development, teacher development, regional coordinators, writers, those involved in technical production) need to work together cohesively, sharing information relevant to the task in hand. To do this they need a language of curriculum development, classroom-based development, teacher development and regional coordination, linked to key issues.

What has happened is that evaluation has assisted in developing a project language on each of these levels. In addition, evaluation has played a role in changing direction. It has focused first on coming to understand what the organisation is about, secondly on the nature of the developmental task, and thirdly on capacity building. The evaluators have been critical of existing practices in the project in many ways, pointing out (inter alia) lack of understanding of academic issues, the need for greater understanding of curriculum development, and the need to be aware of what other NGOs in the field are doing.

Above all, the evaluation has been useful in being issues-based. The qualitative evaluation strategy adopted has been capable of highlighting the kind of developmental issues faced by project staff and teachers in their daily work. Quantitative data have also been important in the evaluation process, but these need to be contextualised and interpreted. This is only possible if a test-based design is nested within a more contextually-based, qualitative and holistic design.

Organisationally, evaluation is now ongoing in the project, driven both from the top and from the bottom (Ruth has referred to this in her talk). Myron Atkin commented on his recent visit here on the need to collapse the distinction between formative and summative evaluation. It would seem to me, in addition, that empowerment evaluation collapses the distinction between outside and insider evaluation.

Jenny Kenyon will talk more about this, in the following section.

8.3 How Evaluation has assisted in Teacher Development
The Teacher Development team workshops its in-service training materials both with the materials writers and producers and with the teachers. The aim is to develop an approach which can support the development of relevant, quality radio learning programmes for English Second Language for use in the Junior Primary Phase of formal schooling.

Implicit in OLSET's aims in the development of the English In Action Radio Learning Programmes is the creation of teaching and learning materials which empower both pupils and teachers. In the context of the project's work, teacher development involves a process of transformation from current practices to other more appropriate ways of teaching. It is OLSET's aim to include teachers in discussions about teaching practice and to encourage reflective practices likely to promote the positive, democratic transformation of education.

Ruth has explained the work of individual coordinators in implementing the principles on which the Radio Learning Programmes have been based. What follows is a brief description of how the Teacher Development Team as a whole views its work with the teachers and how the evaluation has contributed to the overall OLSET aim of empowering teachers to take charge of their own development.

In order to undertake development at scale, there are two preconditions. Firstly there needs to be a set of procedures and principles which are established (ie a programme theory capable of being implemented). Secondly, there needs to be the developmental capacity to work at greater scale.

Theory development implies the development of clear purposes and procedures for working, which can be justified to others, and which can form the basis for the work of others (programme theory needs to be grounded in practice and vice versa). With respect to curriculum, the theory on which the project's work is based is set out in a Curriculum Document, which outlines the purposes and procedures which OLSET has formulated for working with the teachers at two levels:

a. At the level of support in teaching English to pupils in the classroom.
b. At the level of in-service teacher training.

In terms of these two levels, there is a classroom curriculum, and a teacher development curriculum.

OLSET's classroom curriculum is based around the "English in Action" instructional system, in which the learner is supported by the teacher, daily audio programmes, and print materials. The programme design includes teacher led activities, storytelling, games and songs. Integrated classroom print materials include posters, student workbooks, readers and teacher notes which are provided in an attractive, user-friendly and easy-to-read format. The programme uses carefully developed radio or audio cassette lessons to present natural language in dramatized situations and to promote specific language learning activities.

While the "English in Action" instructional system relies on a media component, the teacher's role is central at all times. Included in the programmes are regular strategies to involve teachers in creating interactive communicative language activities. Through the teachers' exposure to the material and the methodology implicit in the lessons teaching skills are enhanced.

OLSET's teacher development curriculum has been designed in response to an educational system with serious deficiencies and inequities directly affecting teachers' skills, self-image, and professional standing. Included in these are the following:

  • 18% of primary school teachers lack professional qualifications.
  • 35% are professionally qualified with less than Std 10.
  • 47% are professionally qualified with more than Std 10.
  • Existing pre-service teacher training is often inadequate.
  • Appropriate teaching methodologies have not been disseminated widely within the professional teaching community.
  • For virtually all the teachers, English is not their first language.
  • Basic equipment and materials are lacking in schools.
  • Overcrowding is common. Many classes have a student/teacher ratio of more than 65:1.
  • Authoritarian methods are still used in many cases to ensure management of large classes.
  • Rural schools are often understaffed.

In terms of the conditions cited above, simply developing new materials and telling teachers about new methods are unlikely to be sufficient to bring about the changes in classroom practice which are needed. A holistic approach to teacher development, focused on the situations teachers face in their schools and classrooms, is more likely to be effective. Careful and explicit links must be forged between radio, teacher, child, print materials, teacher support systems and classroom practice. It is also clear that these links will only happen if teachers are brought into the process and collaborate as equals in their professional development.

In terms of these needs, evaluation has been directed at empowering teachers to solve their daily problems, and to utilise the materials to assist in improving teaching in the classroom. In the writing and compilation of the materials the project team has tried to be sensitive towards aspects of style and language use which may empower teachers to teach better, and encourage a more reflective critical view of their teaching practice.

One of the purposes of the work of the Regional Coordinators as members of the Teacher Development Team is to try to establish whether OLSET is able to provide teachers with materials which teachers find useful, simple to use and, at the same time, less prescriptive and more open to teachers' own creative thinking and teaching. In this process, self-evaluation is central.

In terms of the intention to transform teaching practice in the Junior Primary classrooms in which the programme is implemented, the Teacher Development team works very closely with teachers in encouraging discussion of existing teaching methods and practices, and the adoption of alternative methods and practices. Our work in this regard is influenced by the responses of teachers themselves. The role and value of action research in this regard has been alluded to by Ruth.

In addition, there have been comments, criticisms and suggestions at a more formal level, provided by outside independent observers in their evaluation reports. These, combined with the comments of insiders, have enabled OLSET to develop material for the "English In Action" lessons which are favourably received in the classrooms and have a positive impact on the learning of the students.

The role and value of evaluation as action research has thus been central to materials, curriculum and teacher development alike. The processes involved in external and internal evaluation have been open-ended, overlapping and ongoing. The tension that arises as a result of these processes being conducted by both outsiders and insiders has been challenging and, at times, exhausting. Nevertheless, this tension has also been very creative, especially in the development of the capacity of insiders to be involved in processes such as action research and self-evaluation. Both action research and evaluation are open-ended, and in our case, evaluation has provided clear statements of issues requiring work within the project, as well as trends and directions.

REFERENCES

  1. ARNOTT, A, MANSFIELD, J and MENTIS, M (1993) (a). Pre-Test Report of "English in Action", in CS POTTER and S LEIGH (Eds), English in Action in South Africa 1992-1994: A Formative Evaluation. Washington: LearnTech.
  2. ARNOTT, A, MANSFIELD, J and MENTIS, M (1993) (b). A Summative Evaluation of OLSET's "English in Action" Radio Learning Programme, in CS POTTER and S LEIGH (Eds), English in Action in South Africa 1992-1994: A Formative Evaluation. Washington: LearnTech.
  3. ARNOTT, A, MANSFIELD, J and MENTIS, M (1994). A Summative Report on OLSET's "English in Action" Radio Learning Programme, in CS POTTER and S LEIGH (Eds), English in Action in South Africa 1992-1994: A Formative Evaluation. Washington: LearnTech.
  4. CAMPBELL, DT and STANLEY, JC (1963). Experimental and Quasi-Experimental Designs for Research. Boston: Houghton Mifflin.
  5. CARR, W and KEMMIS, S (1986). Becoming Critical: Education, Knowledge and Action Research. London: The Falmer Press.
  6. CHEN, HT (1990). Theory-Driven Evaluations. London: Sage.
  7. CHEN, HT and ROSSI, PH (1980). The Multi-Goal, Theory-Driven Approach to Evaluation: A Model Linking Basic and Applied Social Science. Social Forces, 59, (September), 106-122.
  8. CHEN, HT and ROSSI, PH (1983). Evaluating with Sense: The Theory-Driven Approach. Evaluation Review, 7, 283-302.
  9. CHEN, HT and ROSSI, PH (1987). The Theory-Driven Approach to Validity. Evaluation and Program Planning, 10, 95-103.
  10. CHOUDHARY and TANDON (1988). Participatory Evaluation. New Delhi: Society for Participatory Research in Asia.
  11. COOK, TD (1985). Postpositive Critical Multiplism, in L SHOTLAND and MM MARK (Eds), Social Science and Social Policy. Beverly Hills, CA: Sage.
  12. COOK, TD and CAMPBELL, DT (1979). Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally.
  13. COOK, TD and REICHARDT, CS (1979). Qualitative and Quantitative Methods in Evaluation Research. Chicago: Rand McNally.
  14. CRONBACH, LJ (1963). Course Improvement through Evaluation. Teachers College Record, 64, 672-683.
  15. CRONBACH, LJ and Associates (1980). Toward Reform of Program Evaluation. San Francisco, CA: Jossey-Bass.
  16. CRONBACH, LJ (1982). Designing Evaluation of Educational and Social Programs. San Francisco, CA: Jossey-Bass.
  17. DENZIN, NK (1970). The Research Act: A Theoretical Introduction to Sociological Methods. Chicago: Aldine.
  18. DENZIN, NK (1978). The Logic of Naturalistic Inquiry, in NK DENZIN, (Ed), Sociological Methods: A Sourcebook. New York: McGraw-Hill.
  19. EISNER, EW (1967). Educational Objectives: Help or Hindrance. The School Review, 75, 250-260.
  20. EISNER, EW (1969). Instructional and Expressive Objectives: Their Formulation and Use in Curriculum. in WJ POPHAM, EW EISNER, HJ SULLIVAN and LL TYLER (Eds). Instructional Objectives. American Educational Research Association Monograph Series on Curriculum Evaluation No 3. Chicago: Rand McNally.
  21. EISNER, EW (1975). The Perceptive Eye: Toward the Reformation of Education Evaluation. Stanford, Cal: Stanford Evaluation Consortium.
  22. EISNER, EW (1977). On the Uses of Educational Connoisseurship and Educational Criticism for Evaluating Classroom Life. Teachers College Record, 78, 345-358.
  23. FETTERMAN, DM (1982). Ethnography in Educational Research: The Dynamics of Diffusion. Educational Researcher, 11 (3), 17-29.
  24. FETTERMAN, DM (1989). Ethnography: Step by Step. Beverly Hills, CA: Sage.
  25. FETTERMAN, DM (1993). Speaking the Language of Power: Communication, Collaboration, and Advocacy (Translating Ethnography into Action). London: Falmer Press.
  26. FETTERMAN, DM (1994). Empowerment Evaluation. Evaluation Practice, 15 (1): American Evaluation Association, Presidential Address delivered in Dallas, Texas, November 5, 1993.
  27. FIELDING, NG and FIELDING, JL (1986). Linking Data: The Articulation of Qualitative and Quantitative Methods in Social Research. Beverly Hills, CA: Sage.
  28. FULLAN, M (1982). The Meaning of Educational Change. New York: Columbia University, Teachers College Press.
  29. FULLAN, M and STIEGELBAUER, S (1991). The New Meaning of Educational Change. London: Cassell Educational.
  30. GOODSON, IF (1992). Sponsoring the Teacher's Voice: Teachers' Lives and Teacher Development, in A HARGREAVES and M FULLAN (Eds), Understanding Teacher Development. New York: Columbia University, Teachers College Press.
  31. GRUNDY, S (1987). Curriculum: Product or Praxis. London: The Falmer Press.
  32. GUBA, EG and LINCOLN, YS (1981). Effective Evaluation. San Francisco, CA: Jossey-Bass.
  33. GUBA, EG and LINCOLN, YS (1983). Epistemological and Methodological Bases of Naturalistic Inquiry, in GE MADAUS, M SCRIVEN and DL STUFFLEBEAM, (Eds), Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston: Kluwer-Nijhoff.
  34. GUBA, EG and LINCOLN, YS (1989). Fourth Generation Evaluation. Newbury Park, CA: Sage.
  35. HAMILTON, D, JENKINS, D, KING, C, MCDONALD, B and PARLETT, M (Eds) (1977). Beyond the Numbers Game. London: MacMillan Education.
  36. HARGREAVES, A (1992). Cultures of Teaching: A Focus for Change, in A HARGREAVES and M FULLAN (Eds), Understanding Teacher Development. New York: Columbia University, Teachers College Press.
  37. HARGREAVES, A and FULLAN M (Eds) (1992). Understanding Teacher Development. New York: Columbia University, Teachers College Press.
  38. HAWES, H (1979). Curriculum and Reality in African Primary Schools. London: Longman.
  39. HINGLE, I and LININGTON, V (1994). The Design, Administration and Evaluation of the OLSET Test of Oral Production: A Pilot Study, in CS POTTER and S LEIGH (Eds), English in Action in South Africa 1992-1994: A Formative Evaluation. Washington: LearnTech.
  40. HOUSE, ER (1973). School Evaluation: The Politics and Process. Berkeley, CA: McCutchan.
  41. HOUSE, ER (1977). The Politics of Evaluation in Higher Education, in FG CARO (Ed) Readings in Evaluation Research. New York: Russell Sage Foundation.
  42. HOUSE, ER (1983). Assumptions underlying Evaluation Models, in GE MADAUS, M SCRIVEN and DL STUFFLEBEAM, (Eds), Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston: Kluwer-Nijhoff.
  43. HUBERMAN, M (1992). Teacher Development and Instructional Mastery, in A HARGREAVES and M FULLAN (Eds), Understanding Teacher Development. New York: Columbia University, Teachers College Press.
  44. IMHOOF, M and CHRISTENSEN, PR (Eds) (1986). Teaching English by Radio: Interactive Radio in Kenya. Washington, DC: Academy for Educational Development.
  45. LACEY, C and LAWTON, D (Eds) (1981). Issues in Evaluation and Accountability. London: Methuen.
  46. LAKOMSKI, G (1988). Critical Theory, in JP KEEVES, (Ed), Educational Research, Methodology and Measurement: An International Handbook. Oxford: Pergamon Press.
  47. LINCOLN, YS and GUBA, EG (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage.
  48. LIPSEY, MW, CROSS, S, DUNKLE, J, POLLARD, J and STOBART, G (1985). Evaluation: The State of the Art and the Sorry State of the Science, in DS CORDRAY (Ed), Utilizing Prior Research in Evaluation Planning: New Directions for Program Evaluation (No 27). San Francisco: Jossey-Bass.
  49. LIPSEY, MW (1987). Theory as Method: Small Theories of Treatments. Paper presented at the National Center for Health Sciences Research Conference "Strengthening Causal Interpretations of Non-Experimental Data", Tucson, AZ.
  50. LUCAS, W (1974) (a). The Case-Survey Method: Aggregating Case Experience. Santa Monica, CA: Rand Corporation.
  51. LUCAS, W (1974) (b). The Case Survey and Alternative Methods for Research Aggregation. Santa Monica, CA: Rand Corporation.
  52. MACDONALD, B (1971). The Evaluation of the Humanities Curriculum Development Project: A Holistic Approach. Theory into Practice, 10, 163-167.
  53. MACDONALD, B (1977) (a). The Portrayal of Persons as Evaluation Data, in N NORRIS (Ed), SAFARI: Papers Two: Theory and Practice. Norwich: University of East Anglia, Centre for Applied Research in Education.
  54. MACDONALD, B (1977) (b). A Political Classification of Evaluation Studies, in D HAMILTON, D JENKINS, C KING, B MACDONALD, and M PARLETT, (Eds). Beyond the Numbers Game. London: MacMillan Education.
  55. MACDONALD, B (1978). Accountability, Standards and the Process of Schooling, in A BECHER and S MACLURE (Eds), Accountability in Education. Windsor: NFER Publishing Co.
  56. MACDONALD, B and WALKER, R (Eds) (1974). SAFARI I: Innovation, Evaluation, Research and the Problem of Control. Norwich: University of East Anglia, Centre for Applied Research in Education.
  57. MASHISHI, L (1994). An Evaluation of the English in Action Radio Programme, in CS POTTER and S LEIGH (Eds), English in Action in South Africa 1992-1994: A Formative Evaluation. Washington: LearnTech.
  58. MILES, MB and HUBERMAN, AM (1984). Qualitative Data Analysis: A Sourcebook of New Methods. Beverly Hills, CA: Sage.
  59. MILES, MB and HUBERMAN, AM (1994). Qualitative Data Analysis: An Expanded Sourcebook. Beverly Hills, CA: Sage.
  60. MYERS, RG (1984). Going to Scale. A Paper prepared for the Second Inter-Agency Meeting on Community-Based Child Development. New York: UNICEF.
  61. NENE, S (1993). OLSET's Focus Group Project August-September 1993: Preliminary Report, in CS POTTER and S LEIGH (Eds), English in Action in South Africa 1992-1994: A Formative Evaluation. Washington: LearnTech.
  62. NISBET, J (1980). Educational Research: The State of the Art, in WB DOCKRELL and D HAMILTON (Eds), Rethinking Educational Research. London: Hodder and Stoughton.
  63. OLSET (1994). The OLSET Radio Learning Programme for Educational Development: Curriculum Document Number 1, in CS POTTER and S LEIGH (Eds), English in Action in South Africa 1992-1994: A Formative Evaluation. Washington: LearnTech.
  64. PARLETT, M and DEARDEN, G (1978). Introduction to Illuminative Evaluation. New York: Lilly Endowment.
  65. PARLETT, M and HAMILTON, D (1972). Evaluation as Illumination: A New Approach to the Study of Innovatory Programmes. Edinburgh: University of Edinburgh, Centre for Research in the Educational Sciences, Occasional Paper No 9.
  66. PATTON, MQ (1978). Utilization-Focused Evaluation. Beverly Hills, CA: Sage.
  67. PATTON, MQ (1980). Qualitative Evaluation Methods. Beverly Hills, CA: Sage.
  68. PATTON, MQ (1987). Analyzing Qualitative Data. Beverly Hills, CA: Sage.
  69. POPKEWITZ, TS (1984). Paradigm and Ideology in Educational Research. London: The Falmer Press.
  70. POTTER, CS (1991). Charting Progress in Large-Scale Innovation: Two Case Studies. Part One: A Longitudinal Evaluation of Curriculum Development in a Pre-University Project. Journal of Educational Evaluation, 1 (1), (Spring), 30-59.
  71. POTTER, CS (1992). Vision, Intention, Policy and Action: Dimensions in Curriculum Evaluation. Pretoria: Human Sciences Research Council, Centre for Science Methodology, Proceedings of Conference on Science and Vision.
  72. POTTER, CS (1993). Formative and Summative Evaluation Design for a Radio Learning Project. Journal of Educational Evaluation, 3 (1), (Spring), 3-30.
  73. POTTER, CS (Ed) (1994). Case Studies of Radio Learning in Schools in Four Regions of South Africa. Washington: LearnTech.
  74. POTTER, CS (Ed) (1995). Case Studies of Interactive Radio in South African Primary Schools. Washington: LearnTech.
  75. POTTER, CS, ARNOTT, A, MANSFIELD, J, MENTIS, M and NENE, S (1993). The Development and Implementation of "English in Action" in South Africa: An Interim Evaluation Report, in CS POTTER and S LEIGH (Eds), English in Action in South Africa 1992-1994: A Formative Evaluation. Washington: LearnTech.
  76. POTTER, CS and MOODIE, P (1991). The Urban Foundation Primary Science Programme: A Formative Evaluation. Johannesburg: The Urban Foundation.
  77. POTTER, CS and MOODIE, P (1992). Charting Progress in Large-Scale Innovation: Two Case Studies. Part Two: A Cross-Sectional Evaluation of Curriculum Development in a Primary Science Project. Journal of Educational Evaluation, 1 (2), (Autumn), 50-70.
  78. RIECKEN, HW and BORUCH, RF (Eds) (1974). Social Experimentation: A Method for Planning and Evaluating Social Intervention. New York: Academic Press.
  79. ROSSI, PH and FREEMAN, HE (1985). Evaluation: A Systematic Approach. New York: Academic Press.
  80. SCRIVEN, M (1983). Evaluation Ideologies, in GE MADAUS, M SCRIVEN and DL STUFFLEBEAM, (Eds), Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston: Kluwer-Nijhoff.
  81. SHADISH, WR and REICHARDT, CS (1987). The Intellectual Foundations of Social Program Evaluation, in WR SHADISH and CS REICHARDT (Eds), Evaluation Review Studies Annual, Vol 12, Newbury Park, CA: Sage Publications.
  82. SIMONS, H (1987). Getting to Know Schools in a Democracy: The Politics and Process of Evaluation. London: The Falmer Press.
  83. SMITH, ER and TYLER, RW (1942). Appraising and Recording Student Progress. New York: Harper.
  84. SMITH, ML (1986). The Whole is Greater: Combining Qualitative and Quantitative Approaches in Evaluation Studies, in DD WILLIAMS (Ed), Naturalistic Evaluation. San Francisco, CA: Jossey-Bass.
  85. SOWDEN, S and KEEVES, JP (1988). Analysis of Evidence in Humanistic Studies, in JP KEEVES, (Ed), Educational Research, Methodology and Measurement: An International Handbook. Oxford: Pergamon Press.
  86. STAKE, RE (1973). Responsive Evaluation, in New Trends in Education. Goteborg: University of Goteborg, No 35.
  87. STAKE, RE (1978). The Case Study Method in Social Inquiry. Educational Researcher, 7, 5-8.
  88. STAKE, RE (1983). Program Evaluation, particularly Responsive Evaluation, in GE MADAUS, M SCRIVEN and DL STUFFLEBEAM, (Eds), Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston: Kluwer-Nijhoff.
  89. STAKE, RE (1985). Case Study, in J NISBET, J MEGARRY and S NISBET (Eds). Research, Policy and Practice. New York: Nichols, World Yearbook of Education.
  90. STENHOUSE, L (1975). An Introduction to Curriculum Research and Development. London: Heinemann.
  91. TABA, H (1971). The Functions of a Conceptual Framework for Curriculum Design, in R HOOPER (Ed), The Curriculum: Context, Design and Development. Edinburgh: Oliver and Boyd.
  92. TYLER, RW (1950). Basic Principles of Curriculum and Instruction. Chicago: University of Chicago Press.
  93. WALKER, R (1980). The Conduct of Educational Case Studies: Ethics, Theory and Procedures, in WB DOCKRELL and D HAMILTON (Eds), Rethinking Educational Research. London: Hodder and Stoughton.
  94. WEISS, CH (1973). Where Politics and Evaluation Meet. Evaluation, 1, 37-45.
  95. WEISS, CH (1977). Between the Cup and the Lip, in FG CARO (Ed), Readings in Evaluation Research. New York: Russell Sage Foundation.
  96. WEISS, CH (1979). The Many Meanings of Research Utilization. Public Administration Review, 39, 426-431.
  97. WEISS, CH (1980). Knowledge Creep and Decision Accretion. Knowledge: Creation, Diffusion, Utilization. Vol 1 (3), 381-404.
  98. WEISS, CH (1982). Policy Research in the Context of Diffuse Decision Making. Journal of Higher Education, 53, 619-639.
  99. WEISS, CH (1983) (a). Toward the Future of Stakeholder Approaches in Evaluation. New Directions for Program Evaluation, 17, 83-96.
  100. WEISS, CH (1983) (b). The Stakeholder Approach to Evaluation: Origins and Promise. New Directions for Program Evaluation, 17, 3-14.
  101. WILLIAMS, DD (Ed) (1986). Naturalistic Evaluation. San Francisco, CA: Jossey-Bass.
  102. ZIRKER, JM (1990). Interactive Radio Instruction: Confronting Crisis in Basic Education. Newton, MA: Education Development Center.

Charles Potter
Department of Psychology
University of the Witwatersrand
PO Wits
2050