Don’t have a spare $100,000 to spend on evaluating your program? Don’t despair. You can get a meaningful picture of outcomes without a large research team, a PhD or years of preparation. This article provides some tips.

This article is intended as a non-technical introduction to outcomes measurement for community-based organisations. It is adapted from an article by Mark Planigale that originally appeared in Parity, August 2011 and is reproduced with permission of Council to Homeless Persons Victoria.


Why measure outcomes?

There are many potential benefits to tracking client outcomes.

Outcome measurement is a critical means of gauging whether interventions are working or not. Establishing a baseline and then measuring change against that baseline provides a reference point for evaluating new approaches and initiatives.

Organisations that have a clear understanding of the outcomes they produce can:

  • advocate for the needs of their clients and influence change in the sector
  • demonstrate their success and gain further resourcing
  • identify weak points in their approach and remedy these.

Reflection on outcomes – backed by solid data – can engage consumers and motivate workers.


Who needs to be involved?

Outcome measurement is about people, and it needs people to make it happen.

You will need someone to drive the process. It could be an enthusiastic team leader or manager, a project worker, or someone with a quality or research role in the organisation.

Engage senior management. Their backing will ensure that the process is accepted and the results are put to good use.

Service delivery staff are vital. Without their buy-in, data collection will be poor and the results will be unreliable. Whatever process you develop for outcome measurement needs to be feasible within the constraints of normal service delivery.

Involve your clients. They are experts on which outcomes matter! They will also be able to tell you what approaches to data collection will work best for them.

Talk to peer organisations in your network. They may be thinking about outcome measurement, or already have processes in place. Perhaps you can share ideas and resources.

Which outcomes to measure?

Outcomes are the lasting effects for clients of engaging with our services. Often we think about outcomes in terms of change (accessing safe, affordable housing; getting a job; family reunification; …). But sometimes an outcome may involve maintaining a client’s current status – for example, sustaining accommodation where it is at risk.

To identify outcomes, ask what goals your program works towards with its clients. These goals could be about the client’s:

  • emotional state and attitudes
  • knowledge and access to information
  • skills and behaviour
  • status (for example, housing status, employment status, level of vulnerability)
  • environment and access to resources.

Outcomes could be in different life domains: housing, physical health, mental health, addiction, relationships, and so forth.

Think about short-term and long-term outcomes. A crisis service will typically have between a few days and a few months of engagement with a client. They may focus on short-term outcomes such as dealing with immediate health issues and establishing connections with support services. These outcomes may be building blocks for later change. A PDRSS-oriented homelessness service may work with people for years and have the chance to pursue longer-term outcomes: establishing permanent housing, capacity building, self-care.

Funding agreements provide a starting point for thinking about goals. But staff and clients usually have the clearest idea about what matters on the ground. A brainstorm can be useful. What is the most important effect that your program has for its clients? What is the second most important? Pick a few key ones and put the rest on the “follow up later” list. Trying to measure every possible outcome is a recipe for disaster.


Choosing indicators

For each outcome that you want to track, you will need one or more indicators (also known as ‘measures’). This sounds technical, but the underlying concept is not difficult.

An indicator is just a characteristic of a client or their situation that can be measured (given a value or rating). It could be given a numerical value (e.g. a number between one and ten), or a word value (bad, OK, good).

An indicator of risk to a tenancy might be the level of arrears. (There might be others as well – e.g. the point in the legal process that the landlord has reached). What is the client’s housing status – primary homeless, secondary homeless, transitionally housed or permanently housed? How many times was the client hospitalised in the past 6 months? On a scale from one to ten, how satisfied is the client with their physical health? These are all examples of indicators.

You can come up with your own indicators. The key question to ask is – for each outcome, how would we know if it had been achieved? What would we notice about the client, their life or their environment?

Some outcomes are difficult to measure. For example, reliably measuring an improvement (or decline) in someone’s mental health is not easy. The good news is, for these types of outcomes there are sets of indicators already available in the form of standardised questionnaires. Widely-used questionnaires include CANSAS, BASIS, HoNOS, WHOQOL and others. There are literally thousands of standardised scales available. Keep in mind that some are designed to be used by staff with specialist qualifications.

One of the best-known “off the shelf” outcomes tools is the Outcomes Star. This provides one indicator in each of ten major life areas. The Star has a data collection form which is designed to be used collaboratively between worker and client, and has associated online software to collate data. For a general picture of how clients are travelling, the Star is a good option.

Another option is Goal Attainment Scaling (GAS). This involves rating the extent to which agreed goals (for example, case plan or ISP goals) have been achieved. Usually the rating is worked out collaboratively between client and worker, using pre-agreed criteria. Standardised forms are available to record and rate goals. GAS fits naturally with case planning and review processes, and is very client-focused. On the downside, it doesn’t connect to “external” criteria (such as the proportion of clients who achieved permanent housing).

Collecting data

During data collection, clients and/or staff review what has been happening in the client’s life, and give a value to each indicator. This might involve a conversation between worker and client, or a client filling out a questionnaire. Workers may also record information about a client’s situation (for example, their tenancy type).

Staff will need clear guidelines on when to collect data (every three months? at intake and exit? when reviewing the case plan?). Staff will also need to know when NOT to collect data. In some circumstances, data collection may be inappropriate – for example, if it will threaten the client’s engagement with the service. In general, clients should have the right to opt out of outcomes data collection.

Staff need training on the process of data collection – what information to provide to clients, how questionnaires are to be filled out, whether to discuss responses with clients, and so forth. A training session of a couple of hours, with clear written information and chances to role-play, will provide a good start.

Remember to review data collection processes regularly (perhaps in team meetings). It is understandable that staff will be concerned about any additional administrative burden. They need opportunities to shape the logistics of data collection so that it supports the service they deliver.


Making sense of the data

Now we have some data – how do we make sense of it? The amount of data collected will grow over time, so it needs to be stored in an electronic format – either a spreadsheet (e.g. Excel) or a database. Unless you have good IT support on hand, a spreadsheet may be the most realistic option to start with. The spreadsheet should include:

  • a column for the client’s name or other ID
  • a column for the date on which data was collected
  • a column for each of the indicators being measured.

Each row contains the set of measurements collected for one client on one occasion.
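The layout described above can be sketched in a few lines of Python. This is a minimal illustration only – the column names, client IDs and indicator values here are invented for the example:

```python
import csv
import io

# Hypothetical spreadsheet layout: one row per client per collection occasion.
raw = """client_id,date_collected,housing_status,health_satisfaction
C001,2023-03-01,transitional,4
C001,2023-06-01,transitional,6
C002,2023-03-01,secondary_homeless,3
C002,2023-06-01,permanent,7
"""

# Read the data as a list of dictionaries, one per row.
rows = list(csv.DictReader(io.StringIO(raw)))
for row in rows:
    print(row["client_id"], row["date_collected"], row["housing_status"])
```

Keeping one row per client per occasion (rather than one row per client) is what makes it possible later to compare a client's situation across time periods.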

Use filtering and sorting (or pivot tables if you know how) to get tallies of indicator values for your client group over particular periods of time. Remember to connect each indicator back to the outcome which it is designed to measure. Usually, there will be two aspects that are of interest:

  • What proportion of the client group achieved a particular outcome in this period? When calculating percentages, only consider those clients who were “eligible” for that particular outcome. For example, if the outcome was to maintain transitional housing for at least 6 months, then don’t include clients who signed up less than 6 months ago – we already know they can’t have achieved that outcome yet.
  • What proportion of clients made progress in this period, compared to their situation in the previous period (or to when they first entered the service)? You will need to use consistent client IDs to track data about individual people or families over time.
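As a rough sketch of the eligibility filtering described in the first point (the client IDs, dates and the six-month threshold are all invented for the example), in Python:

```python
from datetime import date

# Hypothetical records: (client_id, entry_date, achieved_6_month_tenancy).
# None means the outcome cannot yet be assessed.
records = [
    ("C001", date(2022, 9, 1), True),
    ("C002", date(2022, 12, 15), False),
    ("C003", date(2023, 5, 20), None),  # entered < 6 months ago: not yet eligible
]

report_date = date(2023, 7, 1)

def eligible(entry_date):
    # Only count clients who entered at least ~6 months before the report date.
    return (report_date - entry_date).days >= 182

eligible_records = [r for r in records if eligible(r[1])]
achieved = sum(1 for r in eligible_records if r[2])
proportion = achieved / len(eligible_records)
print(f"{achieved} of {len(eligible_records)} eligible clients ({proportion:.0%})")
```

Note that the recently-entered client is excluded from the denominator entirely, rather than counted as a failure – including them would understate the program's results.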

Present the results in a report. The most useful reports show trends over time. Is our program assisting with more positive housing outcomes than last year, or fewer? Extrapolate the current trend and ask whether the picture it paints for the future is acceptable. If not, implement strategies for change.
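A simple year-on-year comparison of this kind can be sketched as follows (the yearly figures are hypothetical):

```python
# Hypothetical yearly results: proportion of eligible clients achieving
# a positive housing outcome in each year.
yearly = {2021: 0.48, 2022: 0.55, 2023: 0.61}

years = sorted(yearly)
trend = []
for prev, cur in zip(years, years[1:]):
    change = yearly[cur] - yearly[prev]
    trend.append((cur, change))
    direction = "up" if change >= 0 else "down"
    print(f"{prev} -> {cur}: {direction} {abs(change):.1%}")
```

Even two or three data points like these are enough to start a conversation at a team meeting about whether the direction of travel is acceptable.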

Use the information

Share the results with your stakeholders. Release a bulletin, put them on your intranet, hold a forum to discuss the results, reflect on them at team meetings and in strategic planning sessions. Make sure service delivery staff have the opportunity to explore the reasons why the results are as they are. Use the figures in tenders and advocacy documents. Boast about your successes and use your failures as leverage for change.

Remember that comparing results across programs can be like comparing apples with oranges. Differences may be due as much to differences in the client group, resourcing or operating environment as to the work of the programs themselves.

Build on these foundations

Once you have some experience in outcomes measurement you can start to develop more robust evaluation systems. For large organisations, the challenge is to come up with consistent organisation-wide approaches to measurement. This requires a combination of top-down planning (identifying headline measures and data collection systems that apply across the organisation) and bottom-up input (identifying factors that matter locally).

Develop formal logic models for your programs and consider what elements of these should be measured. Consider research projects that look at more complex issues such as attribution (how do we know that the outcomes we measure are the results of our program’s work?) and data quality (how do we know that our results are reliable and valid?).

The ultimate test for any of these measurement activities is whether they contribute to a better experience and better outcomes for people using services. If unsure about which direction to take with outcomes measurement, come back to this central question.

Key principles

  • Keep it simple and achievable
  • Measure what counts – the most important outcomes
  • Use the results – feed them back to staff and clients
  • Only measure outcomes if it will benefit clients – never to the detriment of service delivery.

Assistance with outcomes measurement

Lirata Consulting assists organisations to develop their capabilities in the area of outcomes measurement.

For further information or assistance, please contact Mark Planigale at Lirata Consulting.

Mobile: 0429 136 596
Landline: 03 9457 2547


D.I.Y. Outcomes Measurement (PDF 433 KB)

Related articles

Literature review: Measurement of client outcomes in homelessness services

Homelessness outcomes: It's not just about the house...

External resources

Friedman, M. (2009) Trying Hard Isn’t Good Enough. Trafford Press. This book is the definitive guide to the Results-Based Accountability framework.