The power of observation: A reliable method for measuring outcomes in care homes?

As part of the 'Measuring Outcomes for Public Service Users' (MOPSU) project [1], the PSSRU (University of Kent) was asked to develop and test an approach to measuring and monitoring the outcomes of the care and support provided to residents of care homes for older people and people with learning disabilities. Details of this strand of work and the wider project can be found on the ONS website: http://www.ons.gov.uk/about-statistics/methodology-and-quality/measuring-outcomes-for-public-service-users/index.html. Interested readers can also download from this link a report on the conceptual work related to the project (Forder, Netten et al., 2007), including justification of the measures described here.

Whilst there is increasing policy emphasis on dignity, quality of care and outcomes for service users, measuring and monitoring these for residents of care homes presents particular methodological challenges. Accessing the views and experiences of care home residents is increasingly difficult owing to the level of cognitive impairment and communication difficulties among residents, especially in homes for older adults and people with learning disabilities. Consequently, we developed an approach in which fieldworkers collect evidence from a variety of sources, including previously validated instruments, interviews with staff (and, where possible, residents) and structured observation, to enable them to make informed, evidence-based judgements about residents' needs. Details of the content of the questionnaires, interviews and observational toolkit can be found in the publicly available interim report (Forder, Towers et al., 2007).

Adapting the Adult Social Care Outcomes Toolkit
On-going work at the Personal Social Services Research Unit (PSSRU) has been developing an approach to measuring outcomes, now being drawn together in the Adult Social Care Outcomes Toolkit (ASCOT), due to be published in June 2010 (Netten et al., 2010). The approach involves identifying people's currently experienced social care related quality of life (SCRQOL) and their 'expected' SCRQOL in the absence of the social care intervention. Together, these components allow the outcomes of service use to be measured. The SCRQOL scale is based on a levels-within-domains approach; the starting point is the full set of SCRQOL domains described in Box 1.

Box 1: Social care related quality of life domains

Personal cleanliness and comfort
The service user feels he/she is personally clean and comfortable and looks presentable or, at best, is dressed and groomed in a way that reflects his/her personal preferences

Safety
The service user feels safe and secure. This means being free from fear of abuse, falling or other physical harm and fear of being attacked or robbed

Control over daily life
The service user can choose what to do and when to do it, having control over his/her daily life and activities

Accommodation cleanliness and comfort
The service user feels their home environment, including all the rooms, is clean and comfortable

Food and nutrition
The service user feels he/she has a nutritious, varied and culturally appropriate diet with enough food and drink that he/she enjoys at regular and timely intervals

Occupation
The service user is sufficiently occupied in a range of meaningful activities, whether formal employment, unpaid work, caring for others or leisure activities

Social participation and involvement
The service user is content with their social situation, where social situation is taken to mean the sustenance of meaningful relationships with friends and family, and feeling involved or part of a community, should this be important to the service user

Dignity
The negative and positive psychological impact of support and care on the service user's personal sense of significance

Service users are rated (either by themselves, in an interview or a self-completion questionnaire) on their experienced SCRQOL in each domain and, if they are interviewed, on their expected wellbeing in the absence of services. At the time of the study we were using three levels in each domain, reflecting no needs, low needs or high needs. High needs are distinguished from low needs by having mental or physical health implications if they are not met. This task is repeated for all domains to build up a picture of overall current and expected SCRQOL.
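The levels-within-domains scoring just described can be sketched in a few lines of code. The domain names follow Box 1 and the 1-3 coding follows the levels described above, and an unweighted total is mentioned later in this article; the example resident's ratings, however, are purely illustrative and are not drawn from the study data.

```python
# Illustrative sketch of the levels-within-domains approach.
# Coding assumption: 1 = no needs, 2 = low needs, 3 = high needs.

DOMAINS = [
    "personal cleanliness", "safety", "control over daily life",
    "accommodation", "food and nutrition", "occupation",
    "social participation", "dignity",
]

def total_needs(ratings):
    """Sum the level ratings across domains (unweighted total)."""
    return sum(ratings[d] for d in DOMAINS)

# Hypothetical resident: currently experienced SCRQOL (with services)
# versus 'expected' SCRQOL in the absence of the service.
# (In the study the dignity domain was not rated in the absence of
# services; it is kept here only to keep the sketch simple.)
current  = {d: 1 for d in DOMAINS}            # no needs with services
expected = dict(current, **{"personal cleanliness": 3,
                            "food and nutrition": 3,
                            "safety": 2})      # needs without the service

# Outcome of service use: expected needs minus currently experienced needs.
gain = total_needs(expected) - total_needs(current)
print(gain)  # 5 with these illustrative ratings
```

The 'gain' corresponds to the outcome of service use: the difference between the needs a resident would be expected to have without the service and those currently experienced with it.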

Unlike previous work, which typically involved interviews with cognitively intact older people living in their own homes, we knew we could not rely on self-report methodologies. Rather, we had to rely on the judgements of a third party: our fieldworkers. Fieldworkers spent up to two days in each care home. Day one began with a meeting with the home manager and the collection of consent forms and questionnaires, and included a two-hour period of structured observation (Engagement in Meaningful Activities and Relationships (EMACR); Mansell and Beadle-Brown, 2005). Fieldworkers also conducted interviews and completed the other observational measures. Within a day of visiting the home, fieldworkers were required to use all available evidence from the toolkit, interviews and questionnaires to make informed judgements about residents' levels of SCRQOL (with and without services) in each of the core domains of outcome described above [2].

While fieldworkers were given intensive training and detailed guidance on observation techniques and the measures, to be confident in our findings, and to be sure that reliable judgements could be made by third parties, it was crucial to calculate inter-rater reliability. Working in pairs, the fieldworkers carried out at least four visits, in two rounds. The first round of visits took place within the first two months, so that ratings could be compared early in the study and any necessary advice and feedback given before the second round. Each fieldworker made two visits as the main observer and two as the buddy, and they completed all measures independently.

Reliability of ratings 
We employed both qualitative and quantitative approaches to evaluating reliability: we examined inter-rater reliability, and analysed individual fieldworkers' reliability in terms of the consistency of the evidence on which their ratings were based. Inter-observer/rater reliability data were available for 113 residents in 28 services. A detailed account of these results is given elsewhere (Netten et al., 2010); here we provide a broad overview and summary.

Generally, we found that ratings in the absence of services appeared to be more reliable, suggesting they were easier to complete. Although the Kappa statistics were below the normally acceptable level of 0.6 (average Kappa 0.53), the percentage agreement almost reached the generally accepted threshold for high agreement of 0.8. The domain of control appeared the hardest to rate, in the sense that there was more disagreement between fieldworkers. Qualitative analyses identified one very weak fieldworker. When this person was removed from the sample, the unweighted total measure was estimated and agreement between observers tested using Spearman's rank order correlations. There were significant correlations between observers for both the total with services (r = 0.618, p < 0.001) and in the absence of services (r = 0.723, p < 0.001).
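For readers unfamiliar with these statistics, a minimal sketch of how such reliability figures are computed is given below. The paired ratings are invented for illustration (they are not the study data), and plain-Python implementations of percentage agreement, Cohen's kappa and Spearman's rank correlation stand in for a statistics package.

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of cases on which two raters give the same level."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(a)
    po = percent_agreement(a, b)                       # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance
    return (po - pe) / (1 - pe)

def _ranks(xs):
    """Mid-ranks, with ties assigned the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rank order correlation (Pearson on the ranks)."""
    ra, rb = _ranks(a), _ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = (sum((x - ma) ** 2 for x in ra)
           * sum((y - mb) ** 2 for y in rb)) ** 0.5
    return num / den

# Invented paired level ratings (1-3) from a main observer and a buddy:
main  = [1, 2, 2, 3, 1, 2, 3, 1, 2, 2]
buddy = [1, 2, 3, 3, 1, 2, 2, 1, 2, 2]
print(round(percent_agreement(main, buddy), 2))  # 0.8
print(round(cohens_kappa(main, buddy), 2))       # 0.68
print(spearman(main, buddy))
```

As in the study's results, the same pair of raters can show high percentage agreement alongside a more modest kappa, because kappa discounts the agreement expected by chance when only three levels are in use.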

The qualitative analysis provided some very helpful insights into these results, highlighting fieldworkers who may not have had the skills needed, were thrown off by the reality of the observation process, or simply required more experience and training. We found that, overall, the raters agreed about which situations were better or worse (with services or without), but there were often small but meaningful differences in the levels assigned within each domain. These differences were usually between ratings of levels 1 and 2 (no needs and all needs met) or between levels 2 and 3 (all needs met and low needs); wider disagreements were rare.

Whilst we maintain that an individual's own perspective has to be the starting point, both ethically and from the point of view of creating the right incentives in collecting and interpreting data, in this study alone a high proportion of residents were unable to take part in an interview at all, and the nature of residential care makes it particularly difficult to ask residents about their 'expected' situation in the absence of the service. Although the inter-rater reliability was not as high as we would have liked, the qualitative analyses identified how the approach might be improved. We believe that, with some further development of the tool and the necessary additional guidance and training, this could be not only an essential but also a reliable methodology for measuring outcomes in care homes in the future.


[1] MOPSU was funded over three years (2007-2009) by the Treasury under the Invest to Save Budget and led by the Office for National Statistics (ONS).

[2] The dignity domain is the exception to this rule: dignity is the only domain focusing on the process of care and its impact on the service user, so it would be nonsensical to measure it in the absence of services.


Forder, J., Netten, A., Caiels, J., Smith, J. and Malley, J. (2007) Measuring Outcomes in Social Care: Conceptual Development and Empirical Design, PSSRU Discussion Paper No. 2422, Personal Social Services Research Unit, University of Kent.

Forder, J., Towers, A-M., Caiels, J., Beadle-Brown, J. and Netten, A. (2007) Measuring Outcome in Social Care: Second Interim Report, PSSRU Discussion Paper No. 2542, Personal Social Services Research Unit, University of Kent.

Mansell, J. and Beadle-Brown, J. (2005) Engagement in Meaningful Activities and Relationships: Handbook for Observers, Tizard Centre, University of Kent, Canterbury.

Netten, A., Beadle-Brown, J., Trukeschitz, B., Towers, A-M., Welch, E., Forder, J., Smith, J. and Alden, E. (2010) Measuring the Outcomes of Care Homes, PSSRU Discussion Paper No. 2696/2, Personal Social Services Research Unit, University of Kent.
