Liaison psychiatrists are fundamentally concerned with relieving the symptoms and improving the functional well-being of people referred to liaison services. To this end they seek to bring best evidence-based practice, in a timely and personalised way, to the benefit of the person they are helping to recover. This is done in the main through human interaction. For those charged with delivering high-quality liaison psychiatry care from the public purse, systems and processes are needed to ensure that the care delivered remains safe and effective and offers a good experience for the patient or person using services.
It is important that liaison psychiatrists can agree and describe how liaison psychiatry benefits people, and what it is about models of liaison psychiatry services that brings this benefit over other models of mental healthcare provision in general or acute hospitals, such as counselling, consultancy or crisis resolution. If liaison psychiatry services are to be valued by people using and commissioning healthcare, they need to demonstrate improved outcomes and outputs for the public investment. They need clear therapeutic purposes, with attendant outcomes that can be measured, and clear business processes, with estimates of the capacity available to the patient pathway and outputs that can be measured.
So are liaison psychiatry services, their professionals and their interventions safe, effective and efficient? Do they offer a good experience for the people who use them? If we aim to improve and innovate, we must measure these things. This chapter is about research, clinical audit and service improvement, and the tools and techniques for measuring change.
What is meant by research, audit and evaluation?
Research is generally held to be the systematic search for new evidence. Audit is most simply understood as the systematic testing of our compliance with pre-set standards. Evaluation systematically tests whether our processes deliver their intended outcomes.
All three are forms of systematic inquiry and so draw on common quantitative and qualitative methods of study and their supporting statistics. Their published outputs can look similar, which may lead to confusion. The key is to understand the purpose of the work being described and the population it will serve. This matters because pure research is held to require a higher level of ethical scrutiny and governance than clinical audit or the evaluation of processes based on already accepted evidence.