Copyright Carfax Publishing Company Dec 1998
ABSTRACT A wide range of diverse responses by individual students to innovative or alternative assessment is described and discussed, drawing on research data. Student perspectives are significant since assessment is a powerful factor in determining the hidden curriculum, and assessment reform has frequently been proposed as a means of better aligning actual experience with the official curriculum. At a general level, students appeared to understand and adapt to new assessment requirements, but case studies illustrate that students do not respond in a fixed or simple way. Individuals are active in the reconstruction of the messages and meanings of assessment. Ostensibly the same assessment is interpreted differently not just by 'staff' and 'students' but by individuals. Students import a range of experiences, motivations and perspectives which influence their responses. However, although the process is complex, the insights gained can be helpful in better aligning the hidden and the formal curriculum.
This paper focuses on the `hidden curriculum' in higher education, drawing particularly on students' experiences and perceptions. The emphasis is on newer forms of assessment, variously termed innovative, alternative or authentic assessment, which are becoming increasingly common in universities. Our research study included 13 case studies of such innovative assessment in practice in a UK university.
Versions of the Hidden Curriculum
The term 'hidden curriculum' is widely known and used but encompasses a broad range of definitions. It is an apposite metaphor to describe the shadowy, ill-defined and amorphous nature of that which is implicit and embedded in educational experiences, in contrast with the formal statements about curricula and the surface features of educational interaction. At the macro-level, social theorists describe a hidden curriculum largely in terms of its detrimental effects on the ideals of liberal educational philosophy and the process of schooling as a coercive societal mechanism (Bowles & Gintis, 1976; Illich, 1973; Meighan, 1986).
At a micro-level, the hidden curriculum is expressed in terms of the distinction between 'what is meant to happen', that is, the curriculum stated officially by the educational system or institution, and what teachers and learners actually do and experience 'on the ground', a kind of de facto curriculum. Snyder (1971) was a key influence in bringing the term 'hidden curriculum' to the attention of the higher education community. He explored how, at the Massachusetts Institute of Technology, the formal curriculum emphasised higher-order educational goals, such as independent thinking, analysis, problem-solving ability and originality, yet from the student viewpoint assessment and teaching procedures suggested that the hidden curriculum involved memorising facts and theories to achieve success. Becker et al. (1968) also described a university context in the US where the grading system was extremely powerful and where what students 'really' learned were strategies which enabled them to earn high grades, which was rarely the same thing as understanding the course material.
Assessment and the Hidden Curriculum
There is repeated emphasis in the educational literature on assessment as the element of educational practice which most powerfully determines the hidden curriculum:

If we wish to discover the truth about an educational system, we must look to its assessment procedures. (Rowntree, 1987, p. 1)

... any assessment is ... a head-on encounter with a culture's models of prowess. It is an encounter with a deep-running kind of 'ought'. Assessments publish what we regard as skill and what we will accept or reject as a demonstration of accomplishment. (Wolf, 1993, p. 213)
Conventional assessment has frequently been criticised for embodying a sub-text which communicates the 'wrong' messages to students, thus, according to some perspectives, 'creating' the problem of the hidden curriculum. Ramsden (1992, p. 68) claims that "Unsuitable assessment methods impose irresistible pressures on students to take the wrong approach to learning tasks". Frederiksen (1984) proposes that many large-scale testing programmes in the US have damaging effects on the content and processes of learning and teaching. Elbow (1991, p. v) talks about time-constrained 'proficiency exams' sending a 'damaging message' about the nature of the writing process, by suggesting to students that drafting, editing in the light of feedback and discussion are, in fact, unnecessary and superfluous processes. Entwistle and Entwistle (1991) note how preparation for examinations hinders students' efforts towards genuine understanding of course material. We have reported elsewhere how students themselves may perceive that assessment 'contaminates' their learning (Sambell et al., 1997).
There has been considerable support for reform or innovation in assessment in order to limit the damage which assessment causes and to improve the alignment between the formal and the hidden curriculum. Some discussions might lead us to expect a direct relationship between 'what actually happens' (such as a new form of assessment) and student behaviour (such as better learning). For example, Brown and Knight (1994, p. 12) state: "Assessment defines what students regard as important, how they spend their time, and how they come to see themselves as students". It therefore appears that the hidden curriculum arises as a result of students' direct responses to what teachers do. Boud (1995, p. 39) makes a similar statement, but then qualifies it: "Every act of assessment gives a message to students about what they should be learning and how they should go about it. Assessment messages are coded, not easily understood and are often read differently and with different emphases by staff and by students".
It is our position that students do not respond to the hidden curriculum; they construct it through their interpretations, perceptions and actions. Boud (1995, p. 39) emphasises the influence of prior educational experiences: "Students are not simply responding to the given subject; they carry with them the totality of their experiences of learning and being assessed". Bergenhenegouwen (1987) stresses the impact of students' rationales for studying and suggests that some refuse to respond to some of the implicit demands of the hidden curriculum where these conflict with their underlying aims. Variations amongst student perspectives on what is ostensibly the same experience have emerged from our research and are reported below.
The Research Study
There has been widespread change and innovation in assessment methods in recent years (Birenbaum, 1996). Our research aimed to illuminate students' experiences of some of the newer forms of assessment in a UK university. We emphasised the investigation of the student experience because we were attempting to employ a way of conducting educational research which "conveys a subject's sense of educational experience ... [to] disclose what has been hidden from the normative discourse of the curriculum" (Grumet, 1990), in particular to illuminate students' construction of a hidden curriculum in response to innovative assessment.
Thirteen case studies of assessment in action were undertaken, gathering data from staff and student interviews, documentary sources and observation, but with a strong emphasis on student interviews. The case studies covered a range of subjects and assessment methods such as: individual research projects, assessment by oral presentation, group projects, open book exams, poster presentations, simulated professional tasks, portfolios and profiles. Many also included elements of self, peer and co-assessment.
In each case study, interviews were conducted both with student groups and individuals using semi-structured interview schedules which permitted key issues to be explored whilst, as far as possible, allowing students to define and determine their own interests and use their own terms. An informal style of interview was adopted to allow this to happen and to acknowledge the interview as "a specific professional form of conversational technique in which knowledge is constructed through the interaction of interviewer and interviewee" (Kvale, 1996, p. 36, original emphasis). Although the researcher certainly remained an outsider, there was a move towards an ethnographic style, using procedures to gain, as far as possible, acceptance by the student group: participating in informal conversations, attending group preparation meetings arranged by students, and joining groups at coffee. There was variation from case to case since some instances of innovative assessment took place over a short period whilst others extended over a semester or academic year.
The kinds of data which we obtained from students included their detailed descriptions of what they thought and did in particular circumstances but also, through these descriptions and through students' more general reflections, their ideas about assessment. We were dealing here at the level of 'typifications' as described by Holstein & Gubrium (1994, p. 263):
Typifications make it possible to account for experience, rendering things and occurrences recognisable as being of a particular type or realm. At the same time, typifications are indeterminate, adaptable and modifiable. Stocks of knowledge are always essentially incomplete, open-ended. Meaning requires the interpretative application of a category to the concrete particulars of a situation.
Thus the data permit us to gain some insight into students' typifications of assessment and also see how these shifted and developed in relation to particular experiences of innovative assessment.
Data for analysis were in the form of interview transcripts and field notes. Interpretation of students' meanings and intentions required a continual shifting of focus. One approach was to consider isolated items of text or units of meaning which were grouped into themes and structures. This was complemented by replacing text fragments within the wider context of an individual student's statements and the context of the case study as a whole, to draw out interpretations of a rather different nature. The validity of interpretations rests upon rigorous, considered reflection by the researchers, enhanced by debate and discussion about meanings and interpretations; the search for both confirmatory evidence and disconfirmation to strengthen developing interpretations; and an initial thorough analysis of each case undertaken before moving on to cross-case analysis. Generalisability is strengthened when readers find, within the presentation of results, echoes of their own experiences and situations, and plausibility.
Results and Discussion
In this paper we focus upon similarities and variations in students' perspectives on assessment, based on two levels of data analysis. At the first level, the whole dataset was used to examine the alignment between the lecturers' stated intentions for the innovation in assessment (the formal curriculum) and the 'messages' students received about what they should be learning and how they should go about it, in order to fulfil their perceptions of the new assessment requirements. This analysis revealed students' views of the subtexts of innovative assessment: students' typifications, on a relatively abstract level, of the purposes and principles underlying the 'new' method of assessment which they were experiencing.
This level of analysis revealed that, at surface levels, there was a clear match between statements made by staff and the 'messages' received by students. This occurred whether or not staff had extensively briefed students about the rationale for the innovation. In initial group interviews, students displayed a high level of consensus about the ways in which the 'new' assessment method differed from their perceptions of 'normal' assessment. Several themes emerged, indicating shifts in students' typifications of assessment. First, students consistently expressed views that the new assessment motivated them to work in different ways; second, that the new assessment was based upon a fundamentally different relationship between staff and students, and third, that the new assessment embodied a different view of the nature of learning.
At the second stage of analysis, data were closely investigated at the level of the individual, to look for contradictory evidence, or ways in which, in practice, students expressed views of assessment which did not match these typifications, and in which the surface-level close alignment of formal and hidden curriculum was disrupted in some way. Three interrelated themes were identified and are described below, using data from three particular case studies but illustrating issues which have arisen from the examination of all 13 cases and which are present, to varying degrees, across the whole dataset.
This section draws upon a case study in a social science discipline in which level one students were being introduced to the use of a range of research methodologies. The lecturers had introduced a multiple-choice question test (MCQ) to replace the traditional unseen 3-hour exam in which students selected three essay-style questions from a limited range. Instead of being administered at the end of the unit of learning, one MCQ test was arranged in the middle of the unit, and another towards the end. During interview, the lecturer claimed that the change was an attempt to ensure that students covered essential material instead of `topic spotting' for the traditional exam. The MCQ test was viewed as a means of extrinsically motivating students to learn what they needed to learn.
At this level, our initial analysis of a range of group interviews with students, conducted at the beginning of the semester, indicated that students had responded in the ways the lecturer hoped. Their corporate reading of the subtext of the MCQ test suggested that they were conscious of a shift in their approach to preparation and learning throughout the unit:
In multiple choice there'll be questions on everything, so you've got to have an understanding of everything, more than you would if you'd revised a smaller number of things [to produce exam] essays, when you have a vague idea [what's likely to come up] so you don't need to bother with them all.
The frequency with which this view was expressed indicated that the surface reading of the innovation was fairly homogeneous: all were aware that the assessment instrument 'demanded' a different approach to learning than the customary one when faced with a 'traditional' exam.
Assessment and Motivation at the Level of the Individual
The in-depth interviews were designed to elicit and probe the reasons and feelings which underpinned students' reactions to assessment. The wide variation in individual responses to the motivational aspects of ostensibly the same assessment context can be illustrated most forcefully if we consider two students' descriptions of the purpose and value of the same MCQ test at the same point in the semester.
Sam viewed the MCQ as a positive and helpful instrument. He claimed a high level of intrinsic motivation: "I study because I want to study, and it wouldn't help if somebody held a whip up", but immediately went on to acknowledge the 'necessary' addition of the extrinsic motivation embedded in assessment: "I still find that to know you've got an exam in two weeks or whatever acts as a focus, and makes you get things together. You need those points of focus to motivate you to work to that date". Interestingly, Sam also interpreted the MCQ test as a diagnostic tool: "I liked the format. I think at this point the aim should be to guide us through and keep us on the right direction ... I feel multiple choice is a good way of doing that. So early on in the [first] year is good because you need some feedback". Sam had 'enjoyed' the test, and described how it had boosted his self-esteem, a perception which was not divorced from his knowledge that he had "come third in the class" and had "done really well". Sam admitted he was unable to separate out whether "I do well because I enjoy the subject, or enjoy the subject because I do well. I wouldn't like to say which comes first".
In stark contrast, Mark read the MCQ test as an instrument designed to catch him out: "If there was an area that you didn't cover, you paid for it!" He described how the point of it was to make sure that he worked: "It's the only way they've got to check you're working, because they don't keep very strict class registers or anything". The hidden curriculum represented a threat: "We've been told that if we fail to get 40% they'll chuck us out, which is a bit harsh!" but he added "they have to give us [assessments] ... because they can't just let people be messing about, because that's wasting tax-payer's money, and that's bad". Mark viewed assessment as a mechanism for accountability, and before the test his greatest worry was of having to tell his parents he had failed. Assessment for this student became viewed as a stick to make him work and a trap, and he spoke of being "really fearful" as a result. He admitted he was at university mainly because he wanted to enjoy university life. This may provide one rationale for his view of assessment as an external threat to be overcome rather than something from which to learn, as indicated by Sam.
The lecturer in this case emphasised the MCQ test as a way of providing extrinsic motivation for students to cover the subject. The two student vignettes illustrate that students were aware of this but interpreted it in a variety of ways and brought to it their own intrinsic motivations in relation to the subject, such as Sam's view that he studies because he wants to study. A mixture of extrinsic and intrinsic motivations in relation to study has been noted before (Fazey, 1996). The term 'orientation' has also been used to indicate students' overall motivation, purpose or rationale in undertaking a university course (Beaty et al., 1997). Mark illustrates clearly a predominantly social orientation. At this stage of his university career (first year) his main purpose is to enjoy student life. Getting through assessment is a necessary evil which enables him to continue to do so. However, the key point to be made here is that variation in students' readings of the MCQ test is, in part, related in complex ways to their motivations and orientations to study, and that assessment as motivation is not a simple tool with which to change student behaviour.
Relationships Between Assessor and Assessed
This case study concerns a self-assessment profiling scheme, in which first years were invited to a series of individual tutorials, to which they could bring any problems or questions, and in which they could, with the aid of staff, compile a profile of their own achievement, identify their strengths and weaknesses, and complete an action plan for future learning. The scheme was devised to provide 'academic support and guidance', intended to counter perceived problems with "traditional end-stage assessment": to stop students "falling behind without realising it", and to provide fuller information to prevent students 'not knowing what is expected of them therefore paying insufficient attention to critical aspects of the route'. Keen to promote 'student-centred learning', staff felt it was crucial to develop an 'interactive' and 'friendly' form of assessment, with 'student ownership', which they hoped would encourage reflection and discussion, thereby helping students to develop the skills and qualities required by autonomous learners, which include self-evaluation and diagnosis (Sambell & McDowell, 1997). No marks were awarded: "the value is in the preparation ... students should come prepared, with some notes on how they think they're performing".
Students were acutely aware of the difference between profiling and normal assessment, and were reluctant to use the term 'assessment' to describe their experiences in profiling tutorials, which indicates something of the boundaries of their usual typifications of assessment, which normally included 'being marked'. They all alluded to the informality, and the ways in which profiling was 'your time', 'a good opportunity', 'helpful'. Many claimed it was 'nice' not to have a standardised, mass approach: "It's a good opportunity, where you can pick anything that's on your mind". Many described how it had taken "some getting used to" because it assumed a different relationship between staff and students, to which they were unaccustomed:
At first I expected it to feel a bit like being marked, when they're looking at your forms: you're a bit nervous. But now I think it's just a case of go in ... and chat.
There was a high level of consensus that profiling was characterised by the view that staff 'cared': "You don't feel like just another first year, or just another pupil, they get to know your little quirks". This first level analysis suggests that, as they desired, lecturers were being viewed more as friendly guides, rather than as judges and sole providers of expert knowledge.
Several students felt it was crucial that, to be effective, their profiling tutorials were conducted with staff who were heavily involved in teaching them. Interestingly, only then were tutors perceived to `have some right' to offer formative feedback on student work, not just because they were fully conversant with the context for each piece of work being discussed, but because they actually knew the student and their way of working. This also perhaps suggests that students were reading a hidden message in profiling as an assessment mechanism, one in which the learning process (`your way of working') is fully integrated into the assessment process, and redefines the status and expertise of the tutor. The respected staff member (that is, the one qualified to offer worthwhile commentary) is seen in the role of teacher, rather than judge. In striking contrast, no student, in any case study, queried the 'right' of a lecturer to assess their work in more conventional assessment contexts, perhaps suggesting that profiling is typified by a shift in conceptions of the role and nature of expertise of the tutor.
In short, then, on one level it appears that students' typifications of profiling as an assessment instrument were radically different from their views of the relationships embedded in their experience of more traditional forms of assessment, and there was some congruence between staff aims (the official curriculum) and the hidden curriculum of assessment in terms of shifting the role of the tutor from that of sole judge to that of mentor in collaboration with students.
Individual Constructions of Assessment Relationships
Closer analysis of the case, however, reveals that once more, things are not so simple. When student views of the value of profiling were probed, divergent views emerged about the 'real' relationships between students and staff. For example, one student described how she valued the profiling system because it gave her individual access to expert judgement and instruction from her tutor, which she seemed disposed to passively accept: "It's useful because they tell you things, that you can put into your work" (our italics).
Another saw the expert/novice relationship in the following way: "They've been doing it [assessing students' work] for so many years that they can see whether you're doing OK and if you're up to standard. I was worried I'd be at a lesser standard. During the tutorials you find out". Here the student also views the assessment point as a means not of charting her own progress, but of seeing if she is 'as good as everyone else'. One interpretation is, then, that this assessment is a sorting mechanism, conducted by staff.
Another described the purpose of profiling in a way which emphasised her role as a 'student' rather than as an individual: "What we're doing as students is seeking their approval, that is what we want. We want to be approved and told that we're going in the right direction". The language used ('them' and 'us') suggests perhaps that what students may really be learning from their experience of profiling is how to be a 'model' student. At least, some valued the experience in that light, trying to discover what behaviour will satisfy tutors. Several, for example, claimed that profiling was useful in that it "let you see what they expect".
Others were deeply suspicious of the new system, and viewed it in terms of staff accountability: "I think it's for their benefit, perhaps so they can see in general what is the level of the students, and consequently what is the level of the university". Others saw it as a means of "checking up on your attendance" or "to check you'd been working". For others it represented a token gesture towards student involvement, or a meaningless pen and paper exercise: "I just put 'keep up my level of work' each time, and she didn't seem bothered!"
It is tempting to trace these suspicions of tutors' 'real' motives to a deep-seated typification of assessment which regards the assessor as expert, in possession of secret 'answers' which the assessed must try to uncover. Other interpretations suggest that students may require access to the expertise of the lecturer on the standards they should be achieving, to reassure them and to enable them to calibrate their own achievements and progress. Nevertheless, the varying views indicate once again how apparently effective assessment, with, in this case, students engaging in dialogue with tutors and completing self-assessment profiles, works in different ways for individual students.
Approaches to Learning
Our final case study concerns a deliberate move away from a traditional unseen exam in a technological subject, towards an open-book paper, in which staff attempted to use (for them) a new form of assessment to emphasise the need to understand, rather than recollect and 'regurgitate' unit material and to engage in wider reading and draw on workplace experience.
Again, at surface levels students interviewed responded in ways which accorded with the stated aims of staff. Every student interviewed said that this exam would not test memory or reward regurgitation of lecture notes, which they perceived that normal closed-book exams frequently did: "I reckon you'd scrape through reciting facts but you wouldn't do well". There was a high level of agreement that the open-book exam would demand 'better' learning: "In an open-book I would have to explain what I was doing and why, whereas in a closed-book it's a case of mechanically going through a procedure, which you memorise by rote". Students claimed that they were making an active effort to understand course material, and described indexing their notes, identifying key concepts and relating topics to each other: "I have a system where, when I find a concept that crosses more than one boundary, I index it. Then I can see all the relevant points that deal with that subject at a high level". They were aware of having to learn more independently and undertake wider reading with "less emphasis [in the course] on actual lectures and people throwing information at you". The activities they described suggested that the general reading of assessment requirements had led to a shift towards a deep approach to learning (Marton et al., 1997).
Approaches to Learning at the Individual Level
Once again, further analysis revealed considerable variation in students' approaches to this subject and the exam. We illustrate this first with the example of Gordon who, in the main, studied in the ways hoped for by lecturers. Gordon was an experienced student with a well-developed exam technique in which he had a considerable degree of confidence. He talked about listening out in lectures for hints: "so in my notes I have big asterisks saying `This may be on the exam!'. When I look back through my notes, I've always been right before". In the actual exam he would "search through for the words which tell you what to do" and consider strategically where he would gain or lose marks. When faced with an open book exam he did not abandon these approaches but adapted them. He still looked for clues as to what might `come up' and when revising he was very conscious of where extra marks might come from: "I've got to the stage that I know the concepts, now I'm going to concentrate on finding details to give me that extra 5-10%".
However, Gordon recognised the open-book exam as different and was pleased not to have to "memorise things and spew them out again". He aimed to understand material. For Gordon, understanding was defined as a conscious attempt to focus on 'ideas and methods'; being able to 'talk around' an idea or concept; being able to discuss the 'implications' of something or put it into a wider context such as 'your knowledge of how companies work'. In order to do this he indexed and highlighted his notes and said "you have to understand the concept to know where to put it". When reading he would 'sum up' what was said in 'big chapters' and try to 'work out the main points on [his] own'. Gordon thought that he would remember what he learnt for this exam because it had 'meaning' for him rather than being 'mere details'. He offered a clear distinction between what we might call quantitative and qualitative views of learning: "When you revise for a closed-book exam ... you look at it, you memorise it, you recite it. You know how much you've learnt ... With an open-book exam ... you don't know how far you are along to understanding it. One minute you don't, the next you do. It's not a case of 'Oh I've memorised 70% of that list', it's a case of suddenly it clicks. It's a great pleasure when that happens."
Gordon also had worries and problems. He was concerned about confusing topics from different courses because "you can go off rambling on a topic which is really [not part of this exam]". At the same time he recognised the value of making connections: "It's good when they cross over because you think you've got so much revising in a subject you're not familiar with and then you find you haven't got so much to do because it's come up already elsewhere and you start to recognise key concepts". He was also 'frightened' that in the end he would fail to understand something important. For him this way of learning was: "more worrying, but it's more pleasurable".
Sandra illustrates a quite different approach to studying for the exam. In her description the influence of conventional exams came across strongly. She said "I'm treating it like a closed-book exam". She did not claim to have particularly good exam techniques and depended heavily on memorisation. She was very worried that, in an open-book exam, she would not be able to cope with additional information which she had not memorised: "... it should all be in my head and I [would] put it down and I wouldn't be confused by all this extra material that I've got and hunting through books". Sandra knew that lecturers had suggested including relevant examples from working experience in exam answers. However, even though she had considerable experience she did not do so because she was "... frightened to do that in case it was gibberish".
Unlike Gordon, Sandra expressed little pleasure in learning. She valued the kind of learning which enabled you to perform well in a job and dismissed academic learning as frequently having `no depth' or giving you "a basic understanding, that's all". She was happier with a job to be done or academic tasks where there were `end products' or `right answers' because then she knew how she was doing. In contrast, when it came to other academic tasks she was at a loss as to how they were judged: "It is [supposed to be] valid if you put across good points, but how do you validate [an essay]?" Her sense of satisfaction in learning depended almost entirely on the marks she got: "If you get a good mark, you feel better". In her revision Sandra mainly tried to "memorise as much stuff as I can". She was aware of the supposedly different requirements of the open-book exam and was undertaking some additional reading but she was not sure what it was for except to "give you more information [to memorise]".
These descriptions make it clear how students who have received basically the same message about assessment interpret it and act on it in very different ways. Gordon responded positively to the prospect of an open-book exam and he enjoyed getting to grips with the subject, although he also struggled with his learning and worried about what he would achieve. Sandra changed her approach only in token ways, perhaps because she saw little point in academic learning apart from gaining marks, and also had a strong need to feel in control and secure. Both students also illustrate how familiar assessment methods, in this case conventional exams, influence ideas and approaches to alternative forms of assessment.
At a general level, students hear and understand the explicit communication about assessment offered by their lecturers, but they are also aware of the embedded sub-texts and have their own individual perspectives, all of which come together to produce many variants on a hidden curriculum. Students' motivations and orientations to study influence the ways in which they perceive and act upon messages about assessment, so that one student could see an MCQ test as a means of obtaining guidance and feedback whilst another saw it as a means of judgement and threat. Students' willingness to exercise independence and responsibility varies, so that, whilst ostensibly engaged in an evaluative dialogue with their lecturers, some were actually waiting to be told what to do or playing a game to keep lecturers happy. Students' views of the nature of academic learning influence the kinds of meaning they find in assessment tasks and whether they adopt an approach to learning likely to lead to understanding or merely go through the motions of changing their approach.
Students' typifications of assessment based on previous experience, especially in relation to conventional exams, also strongly influence their approach to different assessment methods. We have reported elsewhere the potency of the `idea of the exam' which resides in students' (and staff's) minds, and emerges in their discussions of assessment (Sambell et al, 1997). Throughout the research we see individuals importing their general and powerful typifications of exams per se, which then act as filters through which to view a new situation. Students also strongly associate assessment with being marked or graded, and this association can overshadow their perceptions of any of the other functions and aspects of assessment.
In an important sense our research problematises assessment, because it suggests that students, as individuals, actively construct their own versions of the hidden curriculum from their experiences and typifications of assessment. This means that the outcomes of assessment `as lived' by students are never entirely predictable, and the quest for a `perfect' system of assessment is, in one sense, doomed from the outset. However, these insights can help us to work towards a reduction in the negative aspects of the hidden curriculum in local contexts. As further encouragement, it is important to recognise that our research has identified shifts and developments in students' attitudes and approaches to studying and in their typifications of assessment. Innovative assessment has played a part in altering students' readings of the latent messages of assessment, potentially minimising the gap between the stated formal curriculum and the hidden curriculum. The fact that it has not done so for all students, or for all students in the same way, is an important insight which can assist in further developments in this important aspect of the higher education curriculum.