Deconstructing Paranoia: An Analysis of the Discourses Associated with the Concept of Paranoid Delusion

David J. Harper PhD Thesis June 1999






Though empirical, our work need not be empiricist.

Scheper-Hughes (1992, p.23)

In the first part of this thesis I have explored both the discourses which construct paranoia as a historical, popular and professional concept and their effects through an analysis of structured texts in the public domain. But how might these themes be played out in less structured texts, like conversations about paranoia? Much of the early work in DA drew on analysis of such texts, some generated by interviews with participants. Often such 'empirical' work is seen as a way of 'grounding' analysis but there are difficulties with realist readings of this notion (Parker, 1994). I would see the value of interviews as lying in enabling an exploration of: the different ways discourses can work together to produce novel and surprising positions; the different effects such positioning can have; and the possible interests at work in shaping those accounts. My previous work looking at interview material (Harper, 1994b) was criticised for being too abstract and not focused on individual cases (Walkup, 1994). As a result I decided to interview a number of users of psychiatric services and professionals involved with them to discuss paranoia in these cases. The aim here was to 'triangulate' -- not in a realist sense of getting at an underlying truth but, rather, to get multiple perspectives on the 'same' situations. The idea was that the potentially contradictory accounts might speak of influences like power and so on. Here, then, there was a micro rather than a macro focus -- looking at how paranoia was used in everyday clinical practice situations and looking at the detail of how wider discourses and positions were effected in less structured situations. I was also interested in whether unexpected things would emerge from the twists and turns of the detailed analysis of specific instances.

Clinical talk has been an increasing focus of discursive researchers (eg Morris & Chenail, 1995) -- see Soyland (1995) for a review. DA has been used in a range of clinical domains examining: case presentations (Anspach, 1988; Soyland, 1994b); case records (Crawford et al., 1995; Gillman et al., 1997; Heartfield, 1996; Mohr & Noone, 1997; Swartz, 1996; Terre Blanche, 1997); and psychotherapy (Burman, 1992; Edwards, 1995b; Foreman & Dallos, 1992; Frosh et al., 1996; Hare-Mustin, 1994; Kogan & Gale, 1997; Madill, in press; Madill & Barkham, 1997; Madill & Doherty, 1994; Soal & Kottler, 1996; Stancombe & White, 1997). Moreover, a wide range of clinical categories have been analysed through DA of interview material: depression (Lewis, 1995); eating disorders (Brooks et al., 1998; Hepworth, 1994; Hepworth & Griffin, 1995; Malson, 1997); hearing voices (McLaughlin, 1996); jealousy (Stenner, 1993); paranoia (Harper, 1994b, 1995c); and premenstrual syndrome (Swann & Ussher, 1995).


Before I go on to describe the method of analysis I used to examine the interview transcripts, it is worth saying something about criteria for evaluating this kind of research. Evaluative criteria are helpful to keep in mind during analysis and the reading of analyses since they guide us to 'good practices' in interpretation (cf. Henwood & Pidgeon, 1992, 1994; Stiles, 1993). Clarifying appropriate criteria is also an important task for making our evaluative judgements explicit (Parker, 1997c). Some might argue for an 'anything goes' policy but then we risk parodying ourselves or even being hoodwinked by the likes of physicist Alan Sokal -- see his Social Text article (Sokal, 1996a), although his parody was theoretical rather than empirical (see Ferguson, 1996 and Sokal, 1996b for an account of this bizarre episode). Ironically, given the focus of this thesis, some have termed this the need to judge how 'trustworthy' an analysis is (Henwood & Pidgeon, 1992). It is now accepted that whilst traditional positivist concepts like reliability and validity may fit naïvely realist work (Watts, 1992), they are inappropriate for evaluating qualitative work drawing on other epistemologies, especially social constructionism (Henwood & Pidgeon, 1994). Unfortunately, commentators have suggested a bewildering range of alternative criteria (eg Elliott et al., 1996; Henwood & Pidgeon, 1992, 1994; Potter & Wetherell, 1987; Potter, 1996a; Stiles, 1993; Turpin et al., 1997). There is a debate about whether qualitative work needs criteria separate from those for quantitative work à la Elliott et al. (1996) but to my mind similar criteria could apply (there will, however, always be notable exceptions to any criteria developed: Parker, 1997c). Like Coyle (1995) I find some criteria more persuasive than others and there is, perhaps, an onus on researchers in this kind of work to clarify what would be appropriate criteria for judging studies within their framework.
My selective overview of currently published criteria (see appendix 1) suggests that the key ones for the evaluation of this study (ie ones which are not dependent on a simplistic realist and positivist epistemology) are: situating the researcher and sample; clarity of aims; contextualisation; linearity and coherence; and persuasion. Readers should keep these criteria in mind as they read the following chapters.

Of course, there are conflicts between these criteria (eg between transparency and the requirement of confidentiality and accountability). Moreover, there is the danger of a creeping positivism in assuming that another analyst could come along and provide the same interpretation since this ignores the detailed reading, theoretical work and cultural and historical analysis done prior to interviewing and analysis. My test of the transparency criterion, for example, is not that my interpretation could be repeated, but that it could be followed. Sherrard (1996) has noted the danger of sacrificing validity for repeatability and reliability.

Other criteria which are popular but not entirely appropriate here are triangulation and participant validation. The first is regarded as key by some (Stiles, 1993) and can refer to a triangulation of theory, method and data. A triangulation of theory seems hard to understand -- how could results from different epistemologies be brought together? In this research a triangulation of data of a kind was conducted (interviews with users and professionals). However, this criterion tends to assume a naïve realism (ie that phenomena remain stable regardless of context) not shared by me and so this criterion is not appropriate here. The second criterion is more problematic and will be discussed further in chapter 7. In short, it was not the aim here to produce an account that all participants would agree with.


In chapters 4 and 5 I would not claim to be presenting a representative mapping of positions available in discourse whilst in chapter 6 my analysis is nearer to this aim. In chapters 4 and 5 I explore a particular question (like 'in what ways do speakers see a link between 'belief' and 'distress'?') or issue (like rationality and plausibility). In early drafts of these chapters I included more or less exhaustive lists of examples of strategies and positions from a range of transcripts but this would have made presentation unwieldy and would have obscured the next stage of analysis (ie what conditions produce such accounts and what effects do they have?). Hollway (1989) has argued that the theoretical goal of any analysis of discourse is not to ensure the methodological conditions for discovery of truth (eg through the perfection of sampling) but to understand the conditions which produce accounts and how meaning is to be produced from them -- which is the case even if there is only one example. In chapter 6 mapping was more of an aim and, indeed, there I have drawn on Curt's (1994) notion of concourses. Clyde Mitchell (1984) explains these different aims in terms of how case examples might be used differently: the difference between presenting an example (here, an interview extract) as a 'typical' case (ie one which is representative of the wider population) versus presenting an example as a 'telling' case in which 'the particular circumstances surrounding a case, serve to make previously obscure theoretical relationships suddenly apparent' (p. 239). At times in Part II I use extracts to illustrate particular positions (eg in sections of chapter 5 and the first part of chapter 6) whilst at other times I use examples to show the complexity of discursive moves (eg second part of chapter 6 and rest of chapters 4 and 5).

A further point to note is that I do not wish to impute a simplistic individualistic or institutional determinism in my analysis. Moreover, individualistic intentionalist rhetoric can easily creep into DA descriptions, with functions of discourse being traced back to individual speakers (Parker, 1997a). It is a feature of much discourse analytic work that it can locate agency either in the person or in complexes of institutional power relations (Cobb, 1994). Equally, however, one of the potentials of a discursive approach is that it can provide an analysis which avoids falling into such dualist traps by taking a both/and rather than an either/or position, since the acts of an individual are at the same time social and have social consequences (and vice versa) -- see my discussion of this at the end of chapter 3. It is unlikely that my analytic descriptions have purged all intentionalist attributions but I hope to have at least given enough evidence for my interpretations to demonstrate that effects occur at multiple levels and that meaning is in a very real sense over-determined. Moreover, I seek to link extracts with wider culturally-available discourses and so, at numerous points, I combine a wider analysis in Part II with analysis of individual extracts.


Following the criteria of transparency and persuasiveness I have included here a detailed description of the steps involved in my method -- reflections on this are to be found in chapter 7.

Initial plan

The initial aims of this part of the study were to understand the conditions under which a diagnosis of paranoid delusion was made in practice and to consider the views both of psychiatric service users and health professionals (eg psychiatrists, general practitioners, Community Psychiatric Nurses [CPNs] etc) on the utility of such diagnoses and the diagnostic process. As a result, the initial plan was to interview six users diagnosed as experiencing paranoid delusions and their Consultant Psychiatrist as well as their GP (or referrer, if not the GP). In addition two initial diagnostic interviews between a psychiatrist and person suspected to be experiencing paranoid delusions were to be tape-recorded. This latter point was in response to Walkup's (1994) criticism of Harper (1994b) that I had only interviewed professionals talking about diagnosing paranoia rather than observing them doing diagnosis.

Making contacts

I applied for consent from an NHS Trust's Research Ethics Committee and notified the Local Medical Committee. Once approval was given, I met with Consultant Psychiatrists in the department of Psychiatry in the Trust to explain the study and to ask for their co-operation. I then wrote to each of them individually enclosing an outline of the research and ethical approval (appendix 2). Inclusion criteria were very broad: a range of age and gender of users considered to be experiencing persecutory delusions. The Consultants suggested some names of potential interviewees. Unfortunately no-one was suggested in time for me to be present at an initial diagnostic interview -- a problem encountered also by Palmer (1997). There was considerable difficulty both in obtaining sufficient referrals from professionals and in obtaining consent from service-users for the research (see chapter 7 on this issue). As a result I then approached Consultants from an attached unit as well as the Community Psychiatric Nursing department.

Approaching participants and seeking consent

In my contact with potential participants I followed current Professional Practice Guidelines for clinical psychologists (Division of Clinical Psychology, 1995). Once I had the name of a potential user-interviewee, I wrote to them outlining the study (noting that our interview would be recorded) and asking for their help (appendix 3) -- they had to 'opt in' at this point. Three users failed to reply. I contacted those who were currently admitted to hospital directly to discuss the study and request their help. Three users declined to meet me. In addition, a Consultant Psychiatrist refused me permission to contact two users.

On six occasions I met with the user, discussed the study more fully and then sought their consent before beginning the taped interview proper. On three occasions I first met with the users, discussed the study more fully and arranged a second meeting some days later when the interview took place. I asked them to sign a consent form -- including consent to publish -- (appendix 4) and the interview was tape-recorded. Participants were encouraged to terminate the interview if they felt uncomfortable. One participant did so, and their transcript was not used in the research.

Interviewing users and contacting professionals

The interviews were semi-structured and followed guidelines I produced beforehand which drew from my theoretical concerns, questions and aims (see appendices 5, 6 and 7 for interview guidelines for users, psychiatrists/CPNs and GPs) -- these were formulated through reading the literature, discussions with others and my own thinking. The interviews with users were to focus on their perceptions of their diagnosis and their own views about their beliefs, together with some discussion of their treatment. The interviews with psychiatrists were to focus on the information on which they based their diagnostic decision, their views on abnormal beliefs and their thoughts about treatment. Interviews with GPs were to concentrate on what prompted the referral and whether they thought the user's beliefs were abnormal, together with their views about treatment. Interviews with CPNs broadly followed the format I used with the psychiatrists. All interviews were conducted broadly in the manner suggested by Potter & Wetherell (1987). As Potter (1996a) notes, interviews here are not regarded as a way of getting to 'the truth' of claims but are rather seen as an arena in which culturally available discourses and rhetorical strategies will be at work -- some of the issues involved in interviewing are discussed in chapter 7. Brief biographical details about the interviewees are listed in appendix 8, together with other information relevant to the interview. In order not to breach guidelines regarding confidentiality (Division of Clinical Psychology, 1995; Wilkinson et al., 1995), some details (eg the name of the Trust in which I did the research) are omitted whilst others are necessarily brief. Although the consent given included permission to reproduce extracts from the interviews, it did not seek permission to include whole transcripts (which might prejudice confidentiality) and so no examples of full transcripts are included in this thesis.

At the end of interviews with the users I asked for their written consent to interview their psychiatrist, GP and/or CPN (appendix 9). I then approached professionals by letter and subsequently by telephone to discuss the study with them and to request their participation -- here I requested consent immediately prior to the interview to save time (consent forms for psychiatrists and GPs/CPNs are in appendices 10 and 11 respectively). Five GPs were interviewed; a further GP refused (due to pressure of time) and another declined since the user had, by then, left the district. The GPs of two users were not accessible (further details on this would prejudice confidentiality). Four psychiatrists (three Consultants and one Senior House Officer, since the relevant Consultant had recently retired and the remaining Consultants did not remember the service-user well) were interviewed -- these represented the key psychiatric medical staff for each user. In addition, three CPNs were interviewed -- this was additional to the original aim but it was considered that they would bring a further point of view on the six users with whom they had had involvement, including several for whom GP interviews had not been possible.

In addition a Care Programme Review for one of the users was recorded with the consent of all present. In summary I was able to secure 21 interviews plus 1 case review. The resulting interview combinations are listed in appendix 12.


The interviews were transcribed using a simplified version of Potter & Wetherell's scheme (1987), similar to my previous work (Harper 1994b, 1995c). Transcription conventions are listed in appendix 13 and some of the difficulties of transcription are discussed in chapter 7. Transcripts were anonymised during the transcription process (eg names given pseudonyms, place names not transcribed etc). There were 10 hours of recorded material, with an average interview length of approximately 27 minutes -- the interviews with GPs were very short (usually 10 minutes) because of their busy schedules. The transcription took just over 93 hours to complete, with an average interview-to-transcription ratio of 1:9.3 (ie one minute of interview took 9.3 minutes to transcribe). Times for each transcription are listed in appendix 14.
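The timing figures above can be cross-checked against one another; a minimal sketch (assuming the totals as stated in the text, and the 21 interviews plus one case review noted earlier):

```python
# Consistency check of the recording and transcription figures.
# All totals below are taken from the text, not independently measured.
recorded_hours = 10          # total recorded material
sessions = 21 + 1            # 21 interviews plus 1 case review
transcription_hours = 93     # total transcription time

recorded_minutes = recorded_hours * 60
avg_session_minutes = recorded_minutes / sessions
ratio = (transcription_hours * 60) / recorded_minutes

print(round(avg_session_minutes, 1))  # approximately 27 minutes, as stated
print(round(ratio, 1))                # 9.3, matching the stated 1:9.3 ratio
```

The figures are internally consistent: 22 sessions averaging roughly 27 minutes yield about 10 hours of tape, and 93 hours of transcription for 10 hours of recording gives the stated 9.3 minutes of transcription per recorded minute.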


The process of DA is difficult to describe and there have been calls for clear accounts (Coyle, 1995). Banister et al. (1994), Potter & Wetherell (1987) and Gill (1996) give some illustration of the analytic process. It is important to make this process explicit. There were a number of steps which were common to the analysis in general, with some slight variations for each of chapters 4, 5 and 6 -- here the analysis and chapter-writing were one and the same process. These steps are described in appendix 15. I have tried to convey some of the difficulties faced in chapter 7.
