Quality assurance: Importance of systems and standard operating procedures

Manghani, Kishu

Proprietor-consultant, Somdev Clinical Development Associates, Mumbai, Maharashtra, India

Address for correspondence: Dr. Kishu Manghani D/8 Ferreira Mansion, Sitla Devi Temple Road, Mahim, Mumbai-400 016, Maharashtra, India. E-mail: [email protected]

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

It is mandatory for sponsors of clinical trials and contract research organizations alike to establish, manage and monitor their quality control and quality assurance systems and their integral standard operating procedures and other quality documents to provide high-quality products and services to fully satisfy customer needs and expectations. Quality control and quality assurance systems together constitute the key quality systems. Quality control and quality assurance are parts of quality management. Quality control is focused on fulfilling quality requirements, whereas quality assurance is focused on providing confidence that quality requirements are fulfilled. The quality systems must be commensurate with the Company business objectives and business model. Top management commitment and its active involvement are critical in order to ensure at all times the adequacy, suitability, effectiveness and efficiency of the quality systems. Effective and efficient quality systems can promote timely registration of drugs by eliminating waste and the need for rework with overall financial and social benefits to the Company.

INTRODUCTION

High levels of quality are essential to achieve Company business objectives. Quality, a source of competitive advantage, should remain a hallmark of Company products and services. High quality is not an added value; it is an essential basic requirement. Quality does not relate solely to the end products and services a Company provides but also to the way the Company employees do their job and the work processes they follow to produce products or services. The work processes should be as efficient as possible and continually improving. Company employees constitute the most important resource for improving quality. Each employee in all organizational units is responsible for ensuring that their work processes are efficient and continually improving.

Top management should provide the training and an appropriate motivating environment to foster teamwork both within and across organizational units for employees to improve processes.

Ultimately, everyone in a Company is responsible for the quality of its products and services.

A Company in the role of a sponsor of clinical trials can best achieve its business objectives by establishing and managing robust quality systems with their integral quality documents including standard operating procedures (SOPs).

QUALITY SYSTEMS

A quality system is defined as the organizational structure, responsibilities, processes, procedures and resources for implementing quality management. Quality management includes those aspects of the overall management function that determine and implement the Company quality policy and quality objectives. Both quality control and quality assurance are parts of quality management.

The 13th principle in the International Conference on Harmonization Good Clinical Practice (ICH GCP) guideline clearly states that systems and procedures that assure the quality of every aspect of the (clinical) trial should be implemented. The sponsor is responsible for implementing and maintaining quality assurance and quality control systems with written SOPs to ensure that trials are conducted and data are generated, documented (recorded) and reported in compliance with the protocol, Good Clinical Practice (GCP) and the applicable regulatory requirements. Although a sponsor may transfer any or all of its trial-related duties and functions to a contract research organization (CRO), the ultimate responsibility for the quality and integrity of the trial data always resides with the sponsor.[ 1 ] However, the CRO is also required in its own right to always implement quality assurance and quality control. Both quality control and quality assurance systems must be commensurate with the Company business objectives and business model. The two together constitute the key quality systems.

Top management commitment and active involvement in the establishment, management and monitoring of quality systems is critical and is achieved by:[ 2 ]

  • Defining and documenting a quality policy and quality objectives and ensuring that both the policy and objectives are understood and implemented by all employees at all levels;
  • Ensuring that appropriate processes are implemented to fully satisfy customer needs and expectations and Company objectives;
  • Defining and documenting the responsibility, authority and interrelation of key personnel managing the quality systems;
  • Providing adequate resources for implementing and maintaining the quality systems;
  • Conducting scheduled management reviews of the quality systems to assess their continued suitability, adequacy, effectiveness and efficiency; and
  • Deciding on actions for continual quality improvement.

Quality control is focused on fulfilling quality requirements, and as related to clinical trials, it encompasses the operational techniques and activities undertaken within the quality assurance system to verify that the requirements for quality of the trial-related activities have been fulfilled.[ 1 ]

Quality assurance, on the other hand, is focused on providing confidence that quality requirements are fulfilled. As related to clinical trials, it includes all those planned and systematic actions that are established to ensure that the trial is performed and the data are generated, documented (recorded), and reported in compliance with GCP and the applicable regulatory requirements.[ 1 ]

Quality control is generally the responsibility of the operational units and quality is infused into the outputs and verified as they are being generated. Therefore, quality control is an integral part of the daily activities occurring within each operational unit.

Quality assurance is the responsibility of the quality assurance department. The mission of a quality assurance department is to provide an effective and efficient quality assurance system and counsel for the operational units. The quality assurance department must be staffed by a sufficient number of dedicated, suitably qualified and trained personnel with well-developed interpersonal skills. These interpersonal skills give quality assurance personnel the persuasive, diplomatic, tactful and resilient qualities generally required of them. The quality assurance department must operate independently from the operational units and it must regularly perform quality review activities (self-inspection audits/internal audits) to ensure compliance within operational units with Company quality standards, good working practices [GxPs: current Good Manufacturing Practice (cGMP), Good Laboratory Practice (GLP), GCP, etc.], and local, national, regional and international legal, ethical and regulatory requirements.

The quality assurance department under the leadership of a Quality Assurance Manager will ensure the following:

  • Appropriate global and affiliate-specific quality documents (Level 1: Company policies including quality policy and quality management plan; Level 2: SOPs; Level 3: working instructions; Level 4: conventions, guidelines, forms, templates, logs, tags, and labels) are determined, developed and implemented.
  • Personnel involved in clinical research and development are, and remain, properly qualified and trained for job roles for which they are made responsible. The training will include new staff induction, ongoing quality awareness training including training in applicable SOPs and other quality documents, training for changing roles within and between functional units, and training resulting from an analysis of needs including the results of audits and regulatory inspections, top management reviews and employee appraisals. Further education and additional training needs should be constantly assessed by the Company.
  • All clinical research and development activities are conducted according to Company quality standards, current GxPs, and all applicable local, national, regional and international legal, ethical and regulatory requirements as defined in the quality documents, to meet Company quality objectives and customer requirements.
  • A system is put in place to track all global and affiliate-specific quality documents and to maintain an up-to-date overall inventory of all historical and effective quality documents.
  • Personnel will have written job descriptions which will clearly define their roles and responsibilities, and the processes and SOPs which they have to follow.
  • A system is put in place to initiate and maintain a personal file on each employee, containing his/her current curriculum vitae, job description, education and training records and personal and professional development plan.
  • An auditing function, independent of the operational units and the quality control system, is created to plan, conduct, and report internal and external audits and to support and monitor their close-out via an appropriate corrective and preventive action (CAPA) plan.[ 3 4 ] The effectiveness of the corrective and preventive actions must be assessed.
  • A system is put in place to oversee customer audits, regulatory inspections and Company certifications/accreditations as applicable.
  • A system is put in place to a) share audit and regulatory inspection findings and learning with the relevant functional units and top management, b) promote auditing-in-tandem and cross-pollination of auditors, c) track all internal and external audits, customer audits and regulatory inspections, and d) track the status of findings (open, closed or pending) made during audits and regulatory inspections (see the sketch after this list).
  • Liaison is maintained with functional units, affiliates, and human resources for continued personal and professional development (basic and advanced knowledge-based and skill-based training and retraining) of employees worldwide.
  • Liaison is maintained with and between functional units and affiliates to promote standardization, improve communication, and to enhance efficiency of quality systems through cooperation.
  • All functional units and affiliates are kept up-to-date with various established and emerging local, national, regional and international legal, ethical and regulatory standards.
  • Continual quality improvement initiatives (adoption of industry best practices: determination, development, implementation and monitoring of key performance indicators; and internal and external benchmarking) are identified, implemented and monitored via the Plan–Do–Check–Act (P–D–C–A) cycle.[ 5 6 ]
  • Persons responsible for the quality assurance system are available in an advisory role to employees worldwide on matters related to the quality systems, regulations in force including GxPs and regulatory compliance.
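The document inventories and audit/inspection tracking systems called for above can live in anything from a validated electronic quality management system to a simple controlled register. The sketch below is illustrative only: it models the minimal record such a register might hold for audit and inspection findings, with closure gated on an assessment of CAPA effectiveness as required above. The class names, fields and statuses are assumptions for illustration, not features of any particular system.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List, Optional


class FindingStatus(Enum):
    OPEN = "open"
    PENDING = "pending"   # CAPA agreed, effectiveness not yet assessed
    CLOSED = "closed"


@dataclass
class Finding:
    """One audit or regulatory inspection finding, tracked to closure."""
    audit_id: str
    description: str
    capa_plan: Optional[str] = None
    status: FindingStatus = FindingStatus.OPEN
    closed_on: Optional[date] = None

    def agree_capa(self, plan: str) -> None:
        self.capa_plan = plan
        self.status = FindingStatus.PENDING

    def close(self, effectiveness_confirmed: bool, on: date) -> None:
        # The effectiveness of corrective and preventive actions must be
        # assessed before a finding is closed out.
        if not effectiveness_confirmed:
            raise ValueError("assess CAPA effectiveness before closing the finding")
        self.status = FindingStatus.CLOSED
        self.closed_on = on


@dataclass
class FindingRegister:
    """Register of findings from internal and external audits, customer audits and inspections."""
    findings: List[Finding] = field(default_factory=list)

    def add(self, finding: Finding) -> None:
        self.findings.append(finding)

    def by_status(self, status: FindingStatus) -> List[Finding]:
        return [f for f in self.findings if f.status is status]
```

In practice such a register would also carry the audited unit, the action owner and target dates, so that open and pending items can be escalated to top management reviews.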

STANDARD OPERATING PROCEDURES

Standardization is defined as an activity that gives rise to solutions for repetitive application to problems in various disciplines, including science, and is aimed at achieving the optimum degree of order in a given context. Generally, the activity consists of the process of establishing (determining, formulating, and issuing) and implementing standards. Standards are therefore the ultimate result of a standardization activity and, within the context of quality systems, consist of quality documents or documents related to the quality systems.

The quality documents consist of Company policies, quality management plan, SOPs, working instructions, conventions, guidelines, forms, templates, logs, tags and labels. They are established by consensus and approved by a nominated body, and they provide, for common and repeated use, rules, guidelines or characteristics for activities or their results, with a view to promoting transparency, consistency, reproducibility and interchangeability and to facilitating communication. The hierarchy and types of quality documents relevant to quality systems will depend upon Company business objectives and business model. SOPs are Level 2 quality documents and, along with other relevant quality documents, ensure the effectiveness and efficiency of quality systems.

The ICH GCP guideline defines SOPs as “detailed, written instructions to achieve uniformity of the performance of a specific function”.[ 1 ] Simply put, SOPs specify in writing, who does what and when, or the way to carry out an activity or a process. SOPs establish a systematic way of doing work and ensure that work is done consistently by all persons who are required to do the same task. SOPs must be well written in order to provide an effective control of GCP and prevent errors from occurring, thereby minimizing waste and rework. Poorly written SOPs are a source of misinformation. To be user friendly, they should be clear, unambiguous and must be written in plain language. SOPs are controlled documents and are best written by persons involved in the activity, process or function that is required to be specified or covered in the SOP. SOPs must be reviewed prior to their approval for release, for adequacy, completeness and compliance with Company standards and all applicable legal, ethical and regulatory requirements. They must be reviewed and updated as required over their life cycle and any changes made to the SOPs must be re-approved. They must bear a revision status on them and their distribution must always be documented and controlled. When obsolete SOPs are required to be retained for any purpose, they should be suitably identified to prevent unintended use. Only relevant SOPs in their current version must be available at points of use and must remain legible. SOPs are mandatory for the implementation of GCP and other GxPs, namely, cGMP and GLP, within the scope of quality systems; therefore, it is well said that without SOPs there are no GxPs: no SOPs, no quality systems, and no GxPs.
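As a concrete illustration of the document controls just described (review and approval before release, a visible revision status, controlled and documented distribution, and flagging of superseded copies to prevent unintended use), the sketch below models a minimal SOP register. It is a sketch under assumed names and fields, not a description of any particular electronic document management system; the level number simply echoes the Level 1 to Level 4 hierarchy listed earlier.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class SOPVersion:
    version: str             # revision status carried on the document
    text: str
    approved: bool = False   # must be reviewed and approved (or re-approved) before release
    obsolete: bool = False   # retained superseded copies are flagged against unintended use


@dataclass
class SOPRegister:
    """Controlled-document register: only the current approved revision is issued for use."""
    level: int = 2           # SOPs are Level 2 quality documents
    history: Dict[str, List[SOPVersion]] = field(default_factory=dict)
    distribution_log: List[Tuple[str, str, str]] = field(default_factory=list)

    def release(self, sop_id: str, new_version: SOPVersion) -> None:
        if not new_version.approved:
            raise ValueError("changes to an SOP must be (re-)approved before release")
        versions = self.history.setdefault(sop_id, [])
        for old in versions:
            old.obsolete = True          # supersede all earlier revisions
        versions.append(new_version)

    def issue(self, sop_id: str, recipient: str) -> SOPVersion:
        current = self.history[sop_id][-1]
        # Distribution is always documented and controlled.
        self.distribution_log.append((sop_id, current.version, recipient))
        return current
```

The point of the sketch is the workflow rather than the code: every change passes through review and approval, superseded revisions are clearly marked, and every copy issued to a point of use is traceable.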

For an activity to become the topic of an SOP, it must be either subject to regulations or it must address a task important within quality systems or between quality systems and other functional units. Quality systems related SOPs should generally cover the following topics in order to capture the core quality control and quality assurance activities and processes:

  • Definition, format, content, compilation, indexing, review, approval, update, distribution and archiving of quality documents;
  • Definition, format, content, review, approval, update, distribution and archiving of quality management plan;
  • Definition of and activities related to quality control of clinical trials and compilation of trial-specific quality control plan;
  • Initiation and maintenance of personnel files including format and content of curriculum vitae, job description, training records and personal and professional development plan;
  • Top management reviews of quality systems and issuance of management review reports;
  • Selection and management of contract auditors;
  • Format, content, compilation, review, approval, update, distribution and archiving of audit program;
  • Format, content, compilation, review, approval, update, distribution and archiving of audit plan;
  • Planning, conduct, reporting and close-out of risk-based internal and external audits;
  • Planning, conduct, reporting and close-out of specific audits of sites, processes, systems and documents: sponsor site, third party (CRO, central clinical laboratory) site, investigator site, quality management system including SOP management, education and training and auditing, document management system including archives, data management system including information technology support, serious adverse events management system, pharmacovigilance system, medical dictionary management system, and regulatory submission documents (clinical trial reports, and clinical sections of new drug applications, marketing authorization applications, and common technical documents);
  • Planning, conduct, reporting and close-out of for-cause/directed audits;
  • Hosting of customer audits;
  • Preparation of sites for regulatory inspections;
  • Coordination and management of regulatory inspections;
  • Format, content, compilation, review, approval, update, distribution and archiving of CAPA plan, and assessment of its effectiveness;
  • Change control to ensure that changes and the current status of quality systems related components including documents are identified; and
  • Roles and responsibilities of quality assurance in handling of scientific misconduct/fraud.

BENEFITS OF QUALITY SYSTEMS

The importance of properly established and managed quality control and quality assurance systems with their integral well-written SOPs and other quality documents for the achievement of Company business objectives cannot be ignored. They serve as a passport to success by assisting the Company to achieve high-quality processes, procedures, systems, and people, with eventual high-quality products and services and enhancement of the following:

  • Customer satisfaction, and therefore, customer loyalty and repeat business and referral;
  • Timely registration of drugs by eliminating waste and the need for rework;
  • Operational results such as revenue, profitability, market share and export opportunities;
  • Alignment of processes with achievement of better results;
  • Understanding and motivation of employees toward the Company quality policy and business objectives, as well as participation in continual quality improvement initiatives; and
  • Confidence of interested parties in the effectiveness and efficiency of the Company as demonstrated by the financial and social gains from Company performance and reputation.

Keywords: Quality assurance; quality control; quality management; quality standards; quality systems



Published: 08 November 2021

What is Quality Assurance?

Elisa Fisher

BDJ In Practice volume 34, page 31 (2021). https://doi.org/10.1038/s41404-021-0933-y


Quality Assurance is, by definition, a programme for the systematic evaluation of various aspects of a project, service, or facility to ensure that standards of quality are being met. But what does this actually mean for your practice?

This article will outline why a process of evaluation matters for you, your team, and your patients. This process can only be achieved through good teamwork and efficient practice systems, with the aim of improving the patient experience. As with most things in a dental practice, these aspects must work hand in hand to ensure success. When we talk of systems, we normally look to the raft of policies required to maintain compliance and safety. Staff also need to feel confident in the content of these policies, in their existence and practical implementation, and in all the procedures created around them. The policies and procedures required for compliance form the basis upon which the systems in the practice are structured and organised. With your systems as a solid foundation, your team should feel able to deliver the most exceptional level of care across the practice. A clear understanding of these policies, open communication, and ongoing professional development around them will encourage your team to feel empowered to work towards the goals of the practice.


Author information

Elisa Fisher, British Dental Association, London, UK. Correspondence to Elisa Fisher.



Open access. Published: 19 December 2011

Quality assurance of qualitative research: a review of the discourse

Joanna Reynolds, James Kizito, Nkoli Ezumah, Peter Mangesho, Elizabeth Allen & Clare Chandler

Health Research Policy and Systems volume 9, Article number: 43 (2011)


Increasing demand for qualitative research within global health has emerged alongside increasing demand for demonstration of quality of research, in line with the evidence-based model of medicine. In quantitative health sciences research, in particular clinical trials, there exist clear and widely-recognised guidelines for conducting quality assurance of research. However, no comparable guidelines exist for qualitative research and although there are long-standing debates on what constitutes 'quality' in qualitative research, the concept of 'quality assurance' has not been explored widely. In acknowledgement of this gap, we sought to review discourses around quality assurance of qualitative research, as a first step towards developing guidance.

A range of databases, journals and grey literature sources were searched, and papers were included if they explicitly addressed quality assurance within a qualitative paradigm. A meta-narrative approach was used to review and synthesise the literature.

Among the 37 papers included in the review, two dominant narratives were interpreted from the literature, reflecting contrasting approaches to quality assurance. The first focuses on demonstrating quality within research outputs; the second focuses on principles for quality practice throughout the research process. The second narrative appears to offer an approach to quality assurance that befits the values of qualitative research, emphasising the need to consider quality throughout the research process.

Conclusions

The paper identifies the strengths of the approaches represented in each narrative and recommends that these be brought together in the development of a flexible framework to help qualitative researchers to define, apply and demonstrate principles of quality in their research.


The global health movement is increasingly calling for qualitative research to accompany its projects and programmes [ 1 ]. This demand, and the funding that goes with it, has led to critical debates among qualitative researchers, particularly over their role as applied or theoretical researchers [ 2 ]. An additional challenge emanating from this demand is to justify research findings and methodological rigour in terms that are meaningful and useful to global public health practitioners. A key area that has grown in quantitative health research has been in quality assurance activities, following the social movement towards evidence-based medicine and global public health [ 3 ]. Through the eyes of this movement, the quality of research affects not only the trajectory of academic disciplines but also local and global health policies. Clinical trials researchers and managers have led much of health research into an era of structured standardised procedures that demarcate and assure quality [ 4 , 5 ].

By contrast, disciplines using qualitative research methods have, to date, engaged far less frequently with quality assurance as a concept or set of procedures, and no standardised guidance for assuring quality exists. The lack of a unified approach to assuring quality can prove unhelpful for the qualitative researcher [ 6 , 7 ], particularly when working in the global health arena, where research needs both to withstand external scrutiny and to provide confidence in the interpretation of results by internal collaborators. Furthermore, past and existing debates on what constitutes 'good' qualitative research have tended to be centred firmly within social science disciplines such as sociology or anthropology, and as such, their language and content may prove difficult to penetrate for the qualitative researcher operating within a multi-disciplinary, and largely positivist, global health environment.

The authors and colleagues within the ACT Consortium [ 8 ] conduct qualitative research that is mostly rooted in anthropology and sociology, to explore the use of antimalarial medicines and intervention trials around antimalarial drug use, within the global health field. Through this work, within the context of clinical trials following Good Clinical Practice (GCP) guidelines [ 4 ], we have identified a number of challenges relating to the demands for evidence of quality and for quality assurance of qualitative research. The quality assurance procedures available for quantitative research, such as GCP training and auditing, are rooted in a positivist epistemology and are not easily translated to the reflexive, subjective nature of qualitative research and the interpretivist-constructionist epistemological position held by many social scientists, including the authors. Experiences of spatial distance between collaborators and those working in remote study field sites have also raised questions around how best to ensure that a qualitative research study is being conducted to high quality standards when the day-to-day research activity is unobservable by collaborators.

In response to the perceived need for the authors' qualitative studies to maintain and demonstrate quality in research processes and outcomes, we sought to identify existing guidance for quality assurance of qualitative research. In the absence of an established unified approach encapsulated in guidance format, we saw the need to review literature addressing the concept and practice of quality assurance of qualitative research, as a precursor to developing suitable guidance.

In this paper, we examine how quality assurance has been conceptualised and defined within qualitative paradigms. The specific objectives of the review were to, firstly, identify literature that expressly addresses the concept of quality assurance of qualitative research, and secondly, to identify common narratives across the existing discourses of quality assurance.

Search strategy

Keywords were identified from a preliminary review of methodological papers and textbooks on qualitative research, reflecting the concepts of 'quality assurance' and 'qualitative research', and all their relevant synonyms. The pool of keywords was augmented and refined iteratively as the search progressed and as the nature of the body of literature became apparent. Five electronic databases (Academic Search Complete, CINAHL Plus, IBSS, Medline and Web of Science) were searched systematically between October and December 2010, using combinations of the following keywords: "quality assurance", "quality assess*", "quality control*", "quality monitor*", "quality manage*", "audit*", "quality", "valid*", "rigo*r", "trustworth*", "legitima*", "authentic*", "strength", "power", "reliabil*", "accura*", "thorough*", "credibil*", "fidelity", "authorit*", "integrity", "value", "worth*", "good*", "excellen*", "qualitative AND (research OR inquiry OR approach* OR method* OR paradigm OR epistemolog* OR study)". Grey literature was also searched using Google and the key phrases "quality assurance" AND "qualitative research".
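Purely as an illustration of how the two keyword pools were combined, the snippet below builds boolean query strings of the kind described, using only a subset of the quality-related terms; real database syntax for truncation and phrase searching differs between platforms, and this code is an assumption about tooling rather than part of the original search.

```python
# Illustrative only: pair each quality-related term with the qualitative-research
# block described in the search strategy. Only a subset of terms is shown, and
# truncation/phrase syntax varies between databases.
quality_terms = [
    '"quality assurance"', '"quality assess*"', '"quality control*"',
    '"quality monitor*"', '"quality manage*"', 'audit*', 'rigo*r',
    'trustworth*', 'credibil*', 'valid*',
]

qualitative_block = (
    "qualitative AND (research OR inquiry OR approach* OR method* "
    "OR paradigm OR epistemolog* OR study)"
)

queries = [f"({term}) AND ({qualitative_block})" for term in quality_terms]

for query in queries:
    print(query)
```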

Several relevant journals (International Journal of Qualitative Methods, International Journal of Social Research Methodology, and Social Science and Medicine) were hand searched for applicable papers using the same keywords. Finally, additional literature, in particular books and book chapters, was identified through snowballing techniques, both backwards by following references of eligible papers and forwards through citation chasing. At the point where no new references were identified from the above techniques, the decision was made to curtail the search and begin reviewing, reflecting the practical and time implications of adopting further search strategies.

Inclusion and exclusion criteria

Inclusion criteria were identified prior to the search, to include:

Methodological discussion papers, books or book chapters addressing qualitative research with explicit focus on issues of assuring quality.

Guidance or training documents (in 'grey literature') addressing quality assurance in qualitative research.

Excluded were:

Publications primarily addressing critical appraisal or evaluation of qualitative research for decision-making, reviews or publication. These topics were considered to be distinct from the activity of quality assurance which occurs before writing up and publication.

Publications focusing only on one or more specific qualitative methods or methodological approaches, for example grounded theory or focus groups; focusing on a single stage of the research process only, for example, data collection; or primarily addressing mixed methods of qualitative and quantitative research. It was agreed by the authors that these method-specific papers would not help inform narratives about the discourse of quality assurance, but may become useful at a later date when developing detailed guidance.

Publications not in the English language.

Review methodology

A meta-narrative approach was chosen for the reviewing and synthesis of the literature. This is a systematic method developed by Greenhalgh et al [ 9 ] to make sense of complex, conflicting and diverse sources of literature, interpreting the over-arching narratives across different research traditions and paradigms [ 9 , 10 ]. Within the meta-narrative approach, literature is mapped in terms of its paradigmatic and philosophical underpinnings, critically appraised and then synthesised by constructing narrative accounts of the contributions made by each perspective to the different dimensions of the topic [ 9 ]. Due to the discursive nature of the literature sought, representing different debates and philosophical traditions, the meta-narrative approach was deemed most appropriate for review and synthesis. A process of evaluating papers according to predefined quality criteria and using methods to minimise bias, as in traditional, Cochrane-style systematic reviewing, was not considered suitable or feasible to achieve the objectives.

Each paper was read twice by JR, summarised and analysed to determine the paper's academic tradition, the debates around quality assurance in qualitative research identified and discussed, the definition(s) used for 'quality' and the values underpinning this, and recommended methods or strategies for assuring quality in qualitative research. At the outset of the review, the authors attempted to identify the epistemological position of each paper and to use it as a category by which to interpret conceptualisations of quality assurance. However, it emerged that fewer than half of the publications explicitly presented their epistemology; consequently, epistemological position was not used in the analytical approach to this review, but rather as contextual information for a paper, where present.

Following the appraisal of each paper individually, the literature was then grouped by academic disciplines, by epistemological position (where evident) and by recommendations. This grouping enabled the authors to identify narratives across the literature, and to interpret these in association with the research question. The narratives were developed thematically, following the same process used when conducting thematic analysis of qualitative data. First, the authors identified key idea units in each of the papers, then considered and grouped these ideas into broader cross-cutting themes and constructs. These themes, together with consideration of the epistemologies of the papers, were then used to develop overarching narratives emerging from the reviewed literature.
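The grouping described here is interpretive work done by the reviewers rather than an automated procedure, but a little bookkeeping can make it auditable. The sketch below, using entirely invented placeholder entries, shows one way to record each paper's discipline, stated epistemology and idea units and to collate papers under cross-cutting themes; it is an assumption about how such records could be kept, not the authors' actual method.

```python
from collections import defaultdict

# Hypothetical placeholder records standing in for reviewed papers: each entry
# notes the paper's discipline, its epistemology where stated, and the key
# idea units identified on reading.
papers = [
    {"id": "P01", "discipline": "nursing research", "epistemology": "postpositivist",
     "ideas": ["checklist of quality criteria", "trustworthiness constructs"]},
    {"id": "P02", "discipline": "sociology", "epistemology": None,
     "ideas": ["reflexive field diary", "researcher responsibility for decisions"]},
]

# Keyword-to-theme map; in the review this grouping was done interpretively by
# the authors, not by keyword matching.
theme_for_keyword = {
    "checklist": "quality as assessment of output",
    "trustworthiness": "quality as assessment of output",
    "reflexive": "assuring quality of process",
    "responsibility": "assuring quality of process",
}

papers_by_theme = defaultdict(set)
for paper in papers:
    for idea in paper["ideas"]:
        for keyword, theme in theme_for_keyword.items():
            if keyword in idea:
                papers_by_theme[theme].add(paper["id"])

for theme, ids in sorted(papers_by_theme.items()):
    print(theme, "->", sorted(ids))
```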

Search results

The above search strategy yielded 93 papers, of which 37 fulfilled the inclusion and exclusion criteria on reading the abstracts or introductory passages. Of the 56 papers rejected, 26 were papers specifically focused on the critical evaluation or appraisal of qualitative research for decision-making, reviews or publication. The majority of the others were rejected for focusing solely on guidance for a specific qualitative method or single stage of the research process, such as data analysis. Dates of publication ranged from 1994 to 2010. This relatively short and recent timeframe can perhaps be attributed in part to the recent history of publishing qualitative research within the health sciences. It was not until the mid-1990s that leading medical publications such as the British Medical Journal began including qualitative studies [ 11 , 12 ], reflecting an increasing acknowledgement of the value of qualitative research within the predominant evidence-based medicine model [ 13 , 14 ]. Within evidence-based medicine, the emphasis on assessment of quality of research is strong, and as such, may account for the timeframe in which consideration of assuring quality of qualitative research emerged.

Among the 37 papers accepted for inclusion in the review, a majority (19) were from the fields of health, medical or nursing research [ 6 , 15 – 32 ]. Eleven papers represented social science in broad terms, most commonly from a largely sociological perspective [ 33 – 43 ]. Three papers came from education [ 44 – 46 ], two from communication studies [ 47 , 48 ] and one each from family planning [ 49 ] and social policy [ 50 ]. In terms of the types of literature sourced, there were 27 methodological discussion papers, three papers containing methodological discussion with one case study, two editorials, two methodology books, two guidance documents and one paper reporting primary research.

Appraisal of literature

Epistemological positions

In only 10 publications were the authors' epistemological positions clearly identifiable, either explicitly stated or implied in their argument. Of these publications, five represented a postpositivist-realist position [ 16 , 24 , 39 , 44 , 47 ], and five represented an interpretive-constructionist position [ 17 , 21 , 25 , 34 , 38 ]; see Table 1 for further explanation of the authors' use of these terms. Many of the remaining publications appeared to reflect a postpositivist position due to the way in which authors distinguished qualitative research from positivist, quantitative research, and due to the frequent use of terminology derived from Lincoln and Guba's influential postpositivist criteria for quality [ 51 ].

Two strong narratives across the body of literature were interpreted through the review process, plus one other minor narrative.

Narrative 1: quality as assessment of output

A majority of the publications reviewed (n = 22) demonstrated, explicitly or implicitly, an evaluative perspective of quality assurance, linked to assessment of quality by the presence of certain indicators in the research output [ 15 , 16 , 18 – 22 , 24 , 26 , 27 , 30 , 32 , 36 , 39 , 40 , 42 , 44 , 45 , 47 – 50 ]. These publications were characterized by a 'post-hoc' approach whereby quality assurance was framed in terms of demonstrating that particular standards or criteria have been met in the research process. The publications in this narrative typically offered or referred to sets of criteria for research quality, listing specific methods or techniques deemed to be indicators of quality, and the documenting of which in the research output would be assurance of quality [ 15 , 18 – 20 , 24 , 26 , 32 , 39 , 42 , 47 , 48 , 50 ].

Theoretical perspectives of quality

Many of the authors addressing quality of qualitative research from the output perspective drew upon recent debates that juxtapose qualitative and quantitative research in efforts to increase the credibility of qualitative research as an epistemology. Several of the earlier publications from the 1990s discussed the context of an apparent lack of confidence in the quality of qualitative research, particularly against the rising prominence of the evidence-based model within health and medical disciplines [ 16 , 19 , 27 ]. This contextual background links into the debate raised in a number of the publications around whether qualitative research should be judged by the same constructs and criteria of quality as quantitative research.

Many publications engaged directly with the discourse of the post-positivist movement of the mid-1980s and early 1990s to develop criteria of quality unique to qualitative research, recognizing that criteria rooted in the positivist tradition were inappropriate for qualitative work [ 18 , 20 , 24 , 26 , 39 , 44 , 47 , 49 , 50 ]. The post-positivist criteria developed by Lincoln and Guba [ 51 ], based around the construct of 'trustworthiness', were referenced frequently and appeared to be the basis upon which a number of authors made their recommendations for improving quality of qualitative research [ 18 , 26 , 39 , 47 , 50 ]. A number of publications explicitly drew on a post-positivist epistemology in their approach to quality of qualitative research, emphasising the need to ensure research presents a 'valid' and 'credible' account of the social reality [ 16 , 18 , 24 , 39 , 44 , 47 ]. In addition, a multitude of other, often rather abstract, constructs denoting quality were identified across the literature contributing to this narrative, including: 'rigour', 'validity', 'credibility', 'reliability', 'accuracy', 'relevance', 'transferability', 'representativeness', 'dependability' and more.

Methods of quality assurance

Checklists of quality criteria, or markers of 'best practice', were common within this output-focused narrative [ 15 , 16 , 19 , 20 , 24 , 32 , 39 , 42 , 47 , 48 ], with arguments for their value centring on a perceived need for standardised methods by which to determine quality in qualitative research [ 20 , 42 , 50 ]. Typically, these checklists comprised specific techniques and methods, the presence of which in qualitative research was deemed to be an indicator of quality. Among the publications that did not proffer checklists by which to determine quality, methodological techniques signalling quality were also prominent among the authors' recommendations [ 26 , 40 , 44 , 49 ].

A wide range of techniques were referenced across the literature in this narrative as indicators of quality, but common to most publications were recommendations for the use of triangulation, member (or participant) validation of findings, peer review of findings, deviant or negative case analysis and multiple coders of data. Often these techniques were presented in the publications with little explanation of their theoretical underpinnings or in what circumstances they would be appropriate. Furthermore, there was little discussion within the narrative of the quality of these techniques themselves, and how to ensure they are conducted well.

Recognition of limitations

Two of the more recent papers in this review highlight debates of a more fundamental challenge around defining quality, linked to the challenges in defining the qualitative approach itself [ 26 , 32 ]. These papers, and others, reflect upon the plethora of different terminology and methods used in discourse around quality in qualitative research, as well as the numerous different checklists and criteria available to evaluate quality [ 20 , 32 , 40 , 42 ]. Some critique is offered of the inflexibility of fixed lists of criteria by which to determine quality, with authors emphasizing that standards, and the corresponding techniques by which to achieve them, should be selected in accordance with the epistemological position underpinning each research study [ 18 , 20 , 22 , 30 , 32 , 45 ]. However, in much of the literature there is little guidance around how to determine which constructs of quality are most applicable, and how to select the appropriate techniques for its demonstration.

Narrative 2: assuring quality of process

The second narrative identified was less prominent than the first, with fewer publications addressing the assurance of quality in terms of the research process (n = 13). Among these, several explicitly stated the need to consider how to assure quality through the research process, rather than merely evaluating it at output stage [ 6 , 17 , 31 , 33 , 34 , 37 , 38 , 43 ]. The other papers addressed aspects of good qualitative research, or of the good qualitative researcher, that could be considered process- rather than output-oriented, without explicitly defining them as quality assurance methods [ 23 , 25 , 35 , 41 , 46 ]. These included process-based methods such as recommending the use of field diaries for on-going self-reflection [ 25 ], and researcher-centred attributes such as an 'underlying methodological awareness' [ 46 ].

Conceptualisations of quality within the literature contributing to this narrative appeared most commonly to reflect a fundamental, internal set of values or principles indicative of the qualitative approach, rather than theoretical constructs such as 'validity' more traditionally linked to the positivist paradigm. These were often presented as principles to be understood and upheld by the research teams throughout the research process, from designing a study, through data collection to analysis and interpretation [ 17 , 31 , 34 , 37 , 38 ]. Six common principles were identified across the narrative: reflexivity of the researcher's position, assumptions and practice; transparency of decisions made and assumptions held; comprehensiveness of approach to the research question; responsibility towards decision-making acknowledged by the researcher; upholding good ethical practice throughout the research; and a systematic approach to designing, conducting and analyzing a study.

Of the four papers in this narrative which explicitly presented an epistemological position, all represented an interpretive/constructionist approach to qualitative research. These principles reflected the prevailing argument in this narrative that unthinking application of techniques or rules of method does not guarantee quality, but rather an understanding of and engagement with the values unique to qualitative paradigms are crucial for conducting quality research [ 6 , 25 , 31 ].

Critique of output-based approach

Within this process-focused narrative emerged a strong theme of critique of the approach to evaluating quality of qualitative research by the research output [ 6 , 17 , 25 , 31 , 33 , 35 , 37 , 38 , 43 , 46 ]. The principal argument underpinning this theme was that judging quality of research by its output does not help assure or manage quality in the process that leads up to it; rather, the discussion of what constitutes quality should be maintained throughout the research [ 43 , 46 ]. Furthermore, several papers explicitly criticised the use of set criteria or standards against which to determine the quality of qualitative research [ 6 , 34 , 37 , 46 ], arguing that checklists are inappropriate as they may fail to accommodate the subjectivity and creativity of qualitative inquiry. As such, many studies may appear lacking or of poor quality against such criteria [ 46 ].

A number of authors within this narrative argued that checklists can promote the 'uncritical' use of techniques considered indicative of quality research, such as triangulation. Meeting specific criteria may not be a true indication of the quality of the activities or decisions made in the research process [ 37 , 43 ], and methodological techniques become relied upon as "technical fixes" [ 6 ] which do not automatically lead to good research practice or findings. Authors argued that the promotion of such checklists may result in diminished researcher responsibility for their role in assuring quality throughout the research process [ 6 , 25 , 35 , 38 ], leading to a lack of methodological awareness, responsiveness and accountability [ 38 ].

Assuring quality of the research process

A number of activities were identified across this narrative to be used along the course of qualitative research to improve or assure its quality. They included the researcher conducting an audit or decision trail to document all decisions and interpretations made at each stage of the research [ 25 , 33 , 37 ]; on-going dynamic discussion of quality issues among the research team [ 46 ]; and developing reflexive field diaries in which researchers can explore and capture their own assumptions and biases [ 17 ]. Beyond these specific suggestions, however, the literature offered only broader, more conceptual recommendations, without detailed guidance on exactly how they could be enacted. These included encouraging researchers to embrace their responsibility for decision making [ 38 ], developing and applying a broad understanding of the rationale and assumptions behind qualitative research [ 6 ], and ensuring that the 'attitude' with which research is conducted, as well as the methods, are appropriate [ 37 ].

Although specific recommendations to assure quality were not present in all papers contributing to this narrative, there were some commonalities across each publication in the form of the principles or values that the authors identified as underpinning good quality qualitative research. Some of the publications made explicit reference to principles of good practice that should be appreciated and followed to help assure good quality qualitative research, including transparency, comprehensiveness, reflexivity, ethical practice and being systematic [ 6 , 25 , 35 , 37 ]. Across the other publications in this narrative, these principles emerged from definitions or constructs of quality [ 34 ], from recommendations of strategies to improve the research process [ 17 , 31 , 38 , 43 ], or through critiques of the output-focused approach to evaluating quality [ 33 ].

Minor narrative

Two papers did not contribute coherently to either of the two major narratives, but were similar in their approach towards addressing quality of qualitative research [ 28 , 29 ]. Both were methodological discussion papers which engaged with recent and ongoing debates around quality of qualitative research. The authors drew upon the plurality of views of quality within qualitative research, and linked it to the qualitative struggle to demonstrate credibility alongside quantitative research [ 29 ], and the contested nature of qualitative research itself [ 28 ].

The publications also shared a critique of existing discourse around quality of qualitative research, but without presenting alternative ways to assure it. Both papers critiqued the output-focused approach, which conceptualises quality in terms of the demonstration of particular technical methods. However, neither paper offered a clear interpretation of the process of quality assurance: when and how it should be conducted, and what it should seek to achieve. One paper synthesised other literature and described abstract principles of qualitative research that indicate quality, but it was not clear whether these principles were intended as guidance for the research process or as standards against which to evaluate the output. Similarly, the second paper argued that quality cannot be assured by predetermined techniques, but did not offer more constructive guidance. Perhaps it can be said that these two papers encapsulate the difficulties the qualitative research field has faced in defining quality and articulating appropriate ways to assure that it reflects the principles of the qualitative approach, which is itself contested.

Synthesis of the two major narratives

The key features of the two major narratives emerging from the review, assuring quality by output and assuring quality by process, have been captured in Table 2 . This table details the perspectives held by each approach, the context in which the narratives are situated, how quality is conceptualised, and examples from the literature of recommended ways in which to assure quality.

The literature reviewed showed a lack of consensus between qualitative research approaches about how to assure the quality of research. This reflects past and ongoing debates among qualitative researchers about how to define quality, and even about the nature of qualitative research itself. The two main narratives that emerged from the reviewed literature reflected differing approaches to quality assurance and, underpinning these, differing conceptualisations of quality in qualitative research.

Among the literature that directly discusses quality assurance in qualitative research, the most dominant narrative detected was that of an output-oriented approach. Within this narrative, quality is conceptualised in relation to theoretical constructs such as validity or rigour, derived from the positivist paradigm, and is demonstrated by the inclusion of certain recommended methodological techniques. By contrast, the second, process-oriented narrative presented conceptualisations of quality that were linked to principles or values considered inherent to the qualitative approach, to be understood and enacted throughout the research process. A third, minor narrative offered critique of current and recent discourses on assuring quality of qualitative research but did not appear to offer alternative ways by which to conceptualise or conduct quality assurance.

Strengths of the output-oriented approach for assuring quality of qualitative studies include the acceptability and credibility of this approach within the dominant positivist environment where decision-making is based on 'objective' criteria of quality [11]. Checklists equip those unfamiliar with qualitative research with the means to assess its quality [6]. In this way, qualitative research can become more widely accessible, accepted and integrated into decision-making. This has been demonstrated in the increasing presence of qualitative studies in leading medical research journals [11, 12]. However, as argued by those contributing to the second narrative in this review, following checklists does not equate with an understanding of and commitment to the theoretical underpinnings of qualitative paradigms or to what constitutes quality within the approach. The privileging of guidelines as a mechanism to demonstrate quality can mislead inexperienced qualitative researchers as to what constitutes good qualitative research. This runs the risk of reducing qualitative research to a limited set of methods, requiring little theoretical expertise [52], and diverting attention away from the analytic content of research unique to the qualitative approach [14]. Ultimately, one can argue that a solely output-oriented approach risks the values of qualitative research becoming skewed towards the demands of the positivist paradigm without retaining quality in the substance of the research process.

By contrast, strengths of the process-oriented approach include the ability of the researcher to address the quality of their research in relation to the core principles or values of qualitative research (see Table 2 ). For example, previous assumptions that incorporating participant-observation methods over an extended period of time in 'the field' constituted 'good' anthropology and an indicator of quality have been challenged on the basis that fieldwork as a method should not be conducted uncritically [ 53 ], without acknowledgement of other important steps, including exploring variability and contradiction [ 54 ], and being explicit about methodological choices made and the theoretical reasons behind them [ 55 ]. The core principles identified in this narrative also represent continuous, researcher-led activities, rather than externally-determined indicators such as validity, or end-points. Reflexivity, for example, is an active, iterative process [ 56 ], described as ' an attitude of attending systematically to the context of knowledge construction... at every step of the research process' [p484, 23]. As such, this approach emphasises the need to consider quality throughout the whole course of research, and locates the responsibility for enacting good qualitative research practice firmly in the lap of the researcher(s).

The question remains, however, as to how researchers can demonstrate to others that core principles have guided their research process. The paucity of guidelines among those advocating a process-oriented approach suggests these are either not possible or not desirable to disseminate. Guidelines, by their largely fixed nature, could be considered incompatible with flexible, pluralistic, qualitative research. Awareness and understanding of the fundamental principles of qualitative research (such as the six identified in this review) could be considered sufficient to ensure that researchers conduct the whole research process to a high standard. Indeed, it could be argued that this type of approach has been promoted within qualitative research fields beyond the health sciences for several decades, since debates around how to do 'good' qualitative research emerged publicly [41, 43, 51]. However, the premises of this approach are challenged by increasing scrutiny over the accuracy and ethics of the generation of information through scientific activity [57, 58]. Previous critiques of a post-hoc evaluation approach to quality, in favour of procedural mechanisms to ensure good research [43], have not responded to the demand in some research contexts, particularly in global health, for externally demonstrable quality assurance procedures.

The authors propose, therefore, that some form of guidelines may be possible and desirable, although in a less structured format than those representing a more positivistic paradigm and based on researcher-led principles of good practice rather than externally-determined constructs of quality such as validity. However, first it is important to acknowledge some of the limitations of our search and interpretations.

Limitations

The number of papers included in the review was relatively low. The search was limited to publications explicitly focused on 'quality assurance', and the inclusion criteria may have excluded relevant literature that uses different terminologies, particularly as this concept has not commonly been used within qualitative methods literature. As has been demonstrated in the narratives identified, approaches to quality assurance are linked closely to conceptualisations of quality, about which there is a much larger body of literature than was reviewed for this paper. The possibility of these publications being missed, along with other hard-to-find and grey literature, has implications for the robustness of the narratives identified.

This limitation is perhaps most evident in the lack of literature in this review identified from the field of anthropology. Debates around concepts such as validity and what constitutes 'knowledge' from research have long been of interest to anthropologists [ 55 ], but the absence of these in the publications which met the inclusion criteria raises questions about the search strategy used. Although the search strategy was revised iteratively during the search process to capture variations of quality assurance, anthropological references did not emerge. The choice was made not to pursue the search further for practical and time-related reasons, but also as we felt that limiting the review to quality assurance as originally described would be useful for understanding the literature that a researcher would likely encounter when exploring quality assurance of qualitative research. The lack of clear anthropological voice in this literature reflects the paucity of engagement with the theoretical basis of this discipline in the health sciences, unlike other social sciences such as sociology [ 52 ]. As such, anthropology's contributions to debates on qualitative research methods within health and medical research have been somewhat overlooked [ 59 ].

Hence, this review presents only a part of the discourse of assuring quality of qualitative research, but it does reflect the part that has dominated the fields of health and medical research. Although this review leaves some unanswered questions about defining and assuring quality across different qualitative disciplines, we believe it gives a valuable insight into the types of narratives a typical researcher would begin to engage with if coming from a global health research perspective.

Recommendations

The narratives emerging from this literature review indicate the challenges related to approaching quality assurance from a perspective shaped by the positivist fields of evidence-based medicine, but also the lack of clear, structured guidance based on the intrinsic principles of qualitative research. We recommend that the strengths of both the output-oriented and process-oriented narratives be brought together to create guidance that reflects core principles of qualitative research but also responds to expectations of the global health field for explicitly assured quality in research. The fundamental principles characterising qualitative research, such as the six presented in Table 2 , offer the basis of an approach to assuring quality that is reflexive of and appropriate to the specific values of qualitative research.

The next step in developing guidance should focus on identifying practical and specific advice for researchers on how to engage with these principles and demonstrate their enactment at each stage of the research process, while being wary of promoting unthinking use of 'technical fixes' [6]. We recommend the development of a framework that helps researchers to identify their core principles, appropriate to their epistemological and methodological approach, and ways to demonstrate that these have been upheld throughout the research process. Current generic quality assurance activities, such as the use of standard operating procedures (SOPs) and monitoring visits, could be attuned to the principles of the qualitative research being undertaken through an approach that demonstrates quality without constraining the research or compromising core principles. The development of such a framework should be undertaken collaboratively between researchers and field teams undertaking qualitative research in practice. We propose that this framework be flexible enough to accommodate different qualitative methodologies without dictating essential activities for promoting quality. Unlike previous guidance, we propose that the framework should also respond to the different demands of multi-disciplinary research teams and of external, positivist audiences for evidence of quality assurance procedures, as may be faced, for example, in the field of global health research.

This review has also highlighted the challenges of accessing a broad range of literature from across different social science disciplines (in particular anthropology) when conducting searches using standard approaches adopted in the health sciences. Further consideration should be given to how best to encourage wider search parameters, familiarisation with different sources of literature and greater acceptance of non-traditional disciplinary perspectives within health and medical literature reviews.

Within the context of global health research, there is an increasing demand for the qualitative research field to move forwards in developing and establishing coherent mechanisms for quality assurance of qualitative research. The findings of this review have helped to clarify ways in which quality assurance has been conceptualised, and indicates a promising direction in which to take the next steps in this process. Yet, it also raises broader questions around how quality is conceptualised in relation to qualitative research, and how different qualitative disciplines and paradigms are represented in debates around the use of qualitative methods in health and medical research. We recommend the development of a flexible framework to help qualitative researchers to define, apply and demonstrate principles of quality in their research.

Gilson L, Hanson K, Sheikh K, Agyepong IA, Ssengooba F, Bennett S: Building the field of health policy and systems research: social science matters. PLoS Med. 2011, 8: e1001079


Janes CR, Corbett KK: Anthropology and Global Health. Annual Review of Anthropology. 2009, 38: 167-183.


Pope C: Resisting Evidence: The Study of Evidence-Based Medicine as a Contemporary Social Movement. Health. 2003, 7: 267-282.


ICH: ICH Topic E 6 (R1) Guideline for Good Clinical Practice. 1996, European Medicines Agency.

Good Clinical Practice: Frequently asked questions. [ http://www.mhra.gov.uk/Howweregulate/Medicines/Inspectionandstandards/GoodClinicalPractice/Frequentlyaskedquestions/index.htm#1 ]

Barbour RS: Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?. British Medical Journal. 2001, 322: 1115-1117.


Dixon-Woods M, Shaw RL, Agarwal S, Smith JA: The problem of appraising qualitative research. Quality and Safety in Health Care. 2004, 13: 223-225.

ACT Consortium. [ http://www.actconsortium.org ]

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O, Peacock R: Storylines of research in diffusion of innovation: a meta-narrative approach to systematic review. Social Science & Medicine. 2005, 61: 417-430.

Greenhalgh T, Potts H, Wong G, et al: Tensions and Paradoxes in Electronic Patient Record Research: A Systematic Literature Review Using the Meta-narrative Method. The Milbank Quarterly. 2009, 87: 729-788.

Stige B, Malterud K, Midtgarden T: Toward an Agenda for Evaluation of Qualitative Research. Qualitative Health Research. 2009, 19: 1504-1516.


Pope C, Mays N: Critical reflections on the rise of qualitative research. BMJ. 2009, 339: b3425

Dixon-Woods M, Fitzpatrick R, Roberts K: Including qualitative research in systematic reviews: opportunities and problems. Journal of Evaluation in Clinical Practice. 2001, 7: 125-133.


Eakin JM, Mykhalovskiy E: Reframing the evaluation of qualitative health research: reflections on a review of appraisal guidelines in the health sciences. Journal of Evaluation in Clinical Practice. 2003, 9: 187-194.

Plochg T, van Zwieten M: Guidelines for quality assurance in health and health care research: Qualitative Research. 2002, Amsterdam Centre for Health and Health Care Research.

Boulton M, Fitzpatrick R: 'Quality' in qualitative research. Critical Public Health. 1994, 5: 19-26.

Bradbury-Jones C: Enhancing rigour in qualitative health research: exploring subjectivity through Peshkin's I's. Journal of Advanced Nursing. 2007, 59: 290-298.

Devers K: How will we know "good" qualitative research when we see it? Beginning the dialogue in health services research. Health Services Research. 1999, 34: 1153-1188.


Green J, Britten N: Qualitative research and evidence based medicine. British Medical Journal. 1998, 316: 1230-1232.

Kitto SC, Chesters J, Grbich C: Quality in qualitative research. Medical Journal of Australia. 2008, 188: 243-246.


Koch T: Establishing rigour in qualitative research: the decision trail. Journal of Advanced Nursing. 1994, 19: 976-986.

Macdonald ME: Growing Quality in Qualitative Health Research. International Journal of Qualitative Methods. 2009, 8: 97-101.

Malterud K: Qualitative research: standards, challenges, and guidelines. The Lancet. 2001, 358: 483-488.


Mays N, Pope C: Assessing quality in qualitative research. British Medical Journal. 2000, 320: 50-52.

McBrien B: Evidence-based care: enhancing the rigour of a qualitative study. British Journal of Nursing. 2008, 17: 1286-1289.

Nelson AM: Addressing the threat of evidence-based practice to qualitative inquiry through increasing attention to quality: A discussion paper. International Journal of Nursing Studies. 2008, 45: 316-322.

Peck E, Secker J: Quality criteria for qualitative research: does context make a difference?. Qualitative Health Research. 1999, 9: 552-558.

Rolfe G: Validity, trustworthiness and rigour: quality and the idea of qualitative research. Journal of Advanced Nursing. 2006, 53: 304-310.

Ryan-Nicholls KD, Will CI: Rigour in qualitative research: mechanisms for control. Nurse Researcher. 2009, 16: 70-85.

Secker J, Wimbush E, Watson J, Milburn K: Qualitative methods in health promotion research: some criteria for quality. Health Education Journal. 1995, 54: 74-87.

Tobin GA, Begley CM: Methodological rigour within a qualitative framework. Journal of Advanced Nursing. 2004, 48: 388-396.

Whittemore R, Chase SK, Mandle CL: Validity in Qualitative Research. Qualitative Health Research. 2001, 11: 522-537.

Akkerman S, Admiraal W, Brekelmans M, et al: Auditing quality of research in social sciences. Quality and Quantity. 2008, 42 (2): 257-274.

Bergman MM, Coxon APM: The Quality in Qualitative Methods. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research. 2005, 6:

Brown A: Qualitative method and compromise in applied social research. Qualitative Research. 2010, 10 (2): 229-248.

Dale A: Editorial: Quality in Social Research. International Journal of Social Research Methodology. 2006, 9: 79-82.

Flick U: Managing quality in qualitative research. 2007, London: Sage Publications


Koro-Ljungberg M: Validity, responsibility, and aporia. Qualitative inquiry. 2010, 16 (8): 603-610.

Lewis J: Redefining Qualitative Methods: Believability in the Fifth Moment. International Journal of Qualitative Methods. 2009, 8: 1-14.

Research Information Network: Quality assurance and assessment of quality research. 2010, Research Information Network.

Seale C: The Quality of Qualitative Research. 1999, London: SAGE Publications

Tracy SJ: Qualitative Quality: Eight "Big-Tent" Criteria for Excellent Qualitative Research. Qualitative inquiry. 2010, 16: 837-851.

Morse JM, Barrett M, Mayan M, Olson K, Spiers J: Verification Strategies for Establishing Reliability and Validity in Qualitative Research. International Journal of Qualitative Methods. 2002, 1: 1-19.

Johnson RB: Examining the validity structure of qualitative research. Education. 1997, 118: 282

Creswell JW, Miller DL: Determining Validity in Qualitative Inquiry. Theory Into Practice. 2000, 39: 124

Torrance H: Building confidence in qualitative research: engaging the demands of policy. Qualitative inquiry. 2008, 14 (4): 507-527.

Shenton AK: Strategies for ensuring trustworthiness in qualitative research projects. Education for Information. 2004, 22: 63-75.

Barker M: Assessing the 'Quality' in Qualitative Research. European Journal of Communication. 2003, 18: 315-335.

Forrest Keenan K, van Teijlingen E: The quality of qualitative research in family planning and reproductive health care. Journal of Family Planning and Reproductive Health Care. 2004, 30: 257-259.

Becker S, Bryman A, Sempik J: Defining 'Quality' in Social Policy Research: Views, Perceptions and a Framework for Discussion. 2006, Social Policy Association.

Lincoln YS, Guba EG: Naturalistic inquiry. 1985, Beverly Hills, CA: SAGE Publications

Lambert H, McKevitt C: Anthropology in health research: from qualitative methods to multidisciplinarity. British Medical Journal. 2002, 325: 210-213.

Gupta A, Ferguson J: Introduction - discipline and practice: "the field" as site, method, and location in anthropology. Anthropological locations: boundaries and grounds of a field science. Edited by: Gupta A, Ferguson J. 1997, Berkeley: University of California Press, 1-46.

Manderson L, Aaby P: An epidemic in the field? Rapid assessment procedures and health research. Social Science & Medicine. 1992, 35: 839-850.

Sanjek R: On ethnographic validity. Fieldnotes: the makings of anthropology. Edited by: Sanjek R. 1990, Ithaca, NY: Cornell University Press, 385-418.

Barry C, Britten N, Barber N, et al: Using reflexivity to optimize teamwork in qualitative research. Qualitative Health Research. 1999, 9: 26-44.

Murphy E, Dingwall R: Informed consent, anticipatory regulation and ethnographic practice. Social Science & Medicine. 2007, 65: 2223-2234.

Glickman SW, McHutchison JG, Peterson ED, Cairns CB, Harrington RA, Califf RM, Schulman KA: Ethical and Scientific Implications of the Globalization of Clinical Research. New England Journal of Medicine. 2009, 360: 816-823.

Savage J: Ethnography and health care. BMJ. 2000, 321: 1400-1402.

Denzin N, Lincoln YS: Introduction: the discipline and practice of qualitative research. The SAGE Handbook of Qualitative Research. Edited by: Denzin N, Lincoln YS. 2005, Thousand Oaks, CA: SAGE, 3


Acknowledgements and funding

The authors would like to acknowledge with gratitude the input and insights of Denise Allen in developing the discussion and recommendations of this paper, and in particular, offering an important anthropological voice. JR, JK, PM and CC have full salary support and NE and EA have partial salary support from the ACT Consortium, which is funded through a grant from the Bill & Melinda Gates Foundation to the London School of Hygiene and Tropical Medicine.

Author information

Authors and affiliations

Department of Global Health & Development, London School of Hygiene & Tropical Medicine, London, UK

Joanna Reynolds & Clare Chandler

Infectious Diseases Research Collaboration, Mulago Hospital Complex, Kampala, Uganda

James Kizito

Department of Sociology/Anthropology, University of Nigeria, Nsukka, Nigeria

Nkoli Ezumah

National Institute for Medical Research, Amani Centre, Muheza, Tanzania

Peter Mangesho

Division of Clinical Pharmacology, Department of Medicine, University of Cape Town, Cape Town, South Africa

Elizabeth Allen


Corresponding author

Correspondence to Joanna Reynolds.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JR helped with the design of the review, searched for and reviewed the literature, and wrote the first draft of the manuscript. JK, NE, PM and EA contributed to the interpretation of the results and the writing of the manuscript. CC conceived of the review and helped with its design, the interpretation of results and the writing of the manuscript. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Reynolds, J., Kizito, J., Ezumah, N. et al. Quality assurance of qualitative research: a review of the discourse. Health Res Policy Sys 9 , 43 (2011). https://doi.org/10.1186/1478-4505-9-43


Received : 15 July 2011

Accepted : 19 December 2011

Published : 19 December 2011

DOI : https://doi.org/10.1186/1478-4505-9-43


  • Qualitative
  • global health
  • quality assurance
  • meta-narrative
  • literature review



Quality Assurance in Research


In research contexts, quality assurance (QA) refers to strategies and policies for ensuring that data integrity, quality, and reliability are maintained at every stage of the project. This includes strategies for preventing errors from entering the datasets, taking precautions before data is collected, and establishing procedures while data is used in a study. 

Quality assurance is important for many reasons. The most obvious is that the whole point of research projects is to produce reliable data that yield rigorous and reproducible research results. There are other important factors as well. Institutional Review Boards (IRBs), funding agencies, and other organizations that oversee research activity often require that quality assurance procedures be built into project workflows to ensure that all policies are followed and that disbursed funds go to well-organized and well-executed projects. There are also compliance issues: research projects must be able to establish that data collection and analysis followed all protocols for human and animal subjects, privacy rules and regulations such as HIPAA and FERPA, and other safeguards that guarantee research is conducted in a responsible manner. In some instances, administrative audits are conducted to evaluate your project's quality assurance and policy compliance.

Having quality assurance practices in place helps keep your project compliant, and it also helps you evaluate your own research and data management practices so that you produce the best results possible.

Here are some steps you can take to promote quality assurance in your research:

Establishing clear data normalization protocols: Normalizing the data you record can have substantial impacts on all aspects of your research project. Normalizing means standardizing all the features and categories of data so that everyone working on the project has a clear sense of how to record it as it's collected. Planning ahead and having clearly defined protocols for data collection before beginning the collection process means that all data that is part of the project adheres to the same standards.

Using consistent data formats and measurement standards: Using consistent format and measurement standards is part of the normalization process, and often you can find controlled vocabularies or ontologies that will provide established structural and definitional guidelines for your data based on your discipline. This will result in consistency in your data, not only within your own project, but also for others who may want to use it later on for further analysis or evaluation. 
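As a concrete illustration, the short sketch below shows how a project-level data dictionary might be checked in code before a record is accepted. The field names, allowed values and date format are hypothetical placeholders, not an established standard.

```python
# A minimal sketch of a project data dictionary enforced in code. The field
# names, allowed values and date format below are hypothetical examples, not a
# published controlled vocabulary.
from datetime import datetime

DATA_DICTIONARY = {
    "sex": {"female", "male", "unknown"},        # controlled vocabulary
    "visit_type": {"baseline", "follow_up"},     # controlled vocabulary
}
DATE_FORMAT = "%Y-%m-%d"                         # the single agreed date format

def check_record(record: dict) -> list:
    """Return a list of normalization problems found in one data record."""
    problems = []
    for field, allowed in DATA_DICTIONARY.items():
        if record.get(field) not in allowed:
            problems.append(f"{field}: {record.get(field)!r} not in {sorted(allowed)}")
    try:
        datetime.strptime(record.get("visit_date", ""), DATE_FORMAT)
    except ValueError:
        problems.append(f"visit_date: {record.get('visit_date')!r} does not match {DATE_FORMAT}")
    return problems

# A record using unapproved codes and a local date format is flagged immediately.
print(check_record({"sex": "F", "visit_type": "baseline", "visit_date": "12/03/2021"}))
```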

Rigorous data handling and analysis procedures: This is one of the most crucial components of quality assurance because data collection introduces significant opportunities for human error to undermine the integrity of data. At every stage of data collection in which a researcher records, transforms, or analyzes data, there is the potential for simple mistakes. Identifying the stages of data collection where errors are more likely to occur and putting preventative measures in place can minimize those errors. Simple things such as establishing data and measurement formats can help, but the tools you select for data collection can also have significant impacts.

Select data collection and storage tools that promote data consistency: Spreadsheets, for instance, are notorious for making it easy for errors to occur in data collection because they offer few controls on how data are entered. Other tools, such as databases or fillable forms, provide features that allow you to control how data are entered. If you have a large team of researchers collecting data in the field or in varying contexts, it's easy for inconsistencies to arise. If the tools the researchers are using require consistency, you can be more successful at maintaining data integrity at every stage of handling data.
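The sketch below illustrates this idea with a small SQLite table whose constraints reject badly formed entries at the point of entry; the table structure, codes and ranges are hypothetical examples, not a recommended schema.

```python
# A minimal sketch of entry-level controls using a SQLite table whose
# constraints reject inconsistent values, in contrast to a free-form
# spreadsheet. The field names, codes and ranges are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observations (
        participant_id TEXT NOT NULL,
        sex            TEXT NOT NULL CHECK (sex IN ('female', 'male', 'unknown')),
        weight_kg      REAL CHECK (weight_kg BETWEEN 0.5 AND 500),
        visit_date     TEXT CHECK (visit_date GLOB '[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]')
    )
""")

# A well-formed row is accepted...
conn.execute("INSERT INTO observations VALUES ('P001', 'female', 62.4, '2021-03-12')")

# ...while a row with an unapproved code, an impossible weight and a local
# date format is rejected instead of silently entering the dataset.
try:
    conn.execute("INSERT INTO observations VALUES ('P002', 'F', -5, '12/03/2021')")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)
```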

Metadata documenting how data was collected, recorded, and processed: Documenting how your data was handled throughout your project is good practice for a host of reasons, and it’s particularly helpful for maintaining data integrity. Making your data handling procedures explicit and formalized in the way metadata demands requires, first, that you consider these issues carefully. It also clarifies any ambiguities in your workflow so that a researcher during the project or making use of your research outputs at a later date could identify when the data is correct and where errors may have occurred.
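A minimal, hypothetical example of such a machine-readable provenance record is sketched below; the file name, fields and processing steps are placeholders rather than a formal metadata standard.

```python
# A minimal sketch of a provenance record stored alongside a dataset.
# All names and values are illustrative placeholders.
import json

metadata = {
    "dataset": "household_survey_round2.csv",
    "collected_by": "field team A",
    "collection_period": {"start": "2021-02-01", "end": "2021-03-15"},
    "instrument": "structured questionnaire v1.3",
    "processing_steps": [
        {"step": "double data entry comparison", "date": "2021-03-20"},
        {"step": "outlier review", "date": "2021-03-22",
         "notes": "3 weight values re-checked against paper forms"},
    ],
}

# Writing the record next to the data keeps the handling history explicit.
with open("household_survey_round2.metadata.json", "w") as fh:
    json.dump(metadata, fh, indent=2)
```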

Research staff training: Perhaps the most important thing you can do to produce consistent and reliable data is to make sure that everyone working on the research project, from seasoned researchers to graduate and undergraduate project team members, has proper training in all the data collection and analysis procedures. Having everyone on the same page means that you can be confident that each person working on the project knows how their data handling tasks contribute to the overall project's data quality goals.

Want to learn more about this topic? Check out the following resources: 

The UK Data Service provides detailed information on establishing quality assurance strategies in your research. 

DataOne provides guidance on how to craft a quality assurance plan that will allow you to “think systematically” as you put these protocols in place.

Appl Clin Inform. 2018 Apr; 9(2)

Using Clinical Data Standards to Measure Quality: A New Approach

John D. D'Amore

1 Diameter Health, Inc., Farmington, Connecticut, United States

2 Boston University Metropolitan College, Boston University, Boston, Massachusetts, United States

Laura McCrary

3 Kansas Health Information Network, Topeka, Kansas, United States

Jonathan M. Niloff

Dean F. Sittig

4 School of Biomedical Informatics, University of Texas-Memorial Hermann Center for Healthcare Quality and Safety, University of Texas Health Science Center, Houston, Texas, United States

Allison B. McCoy

5 Department of Global Biostatistics and Data Science, Tulane University School of Public Health and Tropical Medicine, New Orleans, Louisiana, United States

Adam Wright

6 Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States

Background  Value-based payment for care requires the consistent, objective calculation of care quality. Previous initiatives to calculate ambulatory quality measures have relied on billing data or individual electronic health records (EHRs) to calculate and report performance. New methods for quality measure calculation promoted by federal regulations allow qualified clinical data registries to report quality outcomes based on data aggregated across facilities and EHRs using interoperability standards.

Objective  This research evaluates the use of clinical document interchange standards as the basis for quality measurement.

Methods  Using data on 1,100 patients from 11 ambulatory care facilities and 5 different EHRs, challenges to quality measurement are identified and addressed for 17 certified quality measures.

Results  Iterative solutions were identified for 14 measures that improved patient inclusion and measure calculation accuracy. Findings validate this approach to improving measure accuracy while maintaining measure certification.

Conclusion  Organizations that report care quality should be aware of how identified issues affect quality measure selection and calculation. Quality measure authors should consider increasing real-world validation and the consistency of measure logic in respect to issues identified in this research.

Background and Significance

The U.S. federal government's goal is to have 90% of its health care payments based on care quality by 2018. 1 In addition, private payers have increasingly incorporated quality outcomes in their contracts. 2 The transition from fee-for-service to value-based payment relies on accurate and reliable methods to measure the quality of care delivered. Many programs have advanced this capability, all of which require objective data and measure definitions.

The longest-established program for quality measurement in the United States is the Healthcare Effectiveness Data and Information Set (HEDIS) program managed by the National Committee for Quality Assurance (NCQA). This program began in 1991 and is currently used by over 90% of health plans. 3 HEDIS has historically used longitudinal information, primarily electronic billing data from multiple providers, to calculate care quality. This program has shown progress in improving quality outcomes. 4 5 However, it is challenging to use measures calculated from payer administrative data for ambulatory care improvement due to reporting latency, insufficient clinical specificity, payer patient market share, and inadequate risk adjustment. 6

A more recent national initiative directly focused on ambulatory care improvement is the Physician Quality Reporting System. Started in 2006, this program provided a voluntary reporting bonus. It reached over 600,000 physicians participating in Medicare but relied on methods developed before widespread electronic health record (EHR) adoption. 7 To accelerate EHR adoption with a goal of improving care quality, the Meaningful Use incentive program was launched in 2010 by the Centers for Medicare and Medicaid Services (CMS). Only 11% of physicians had a basic EHR at that time. 8 The Meaningful Use program brought widespread EHR adoption, with over 78% of ambulatory clinicians using certified EHRs by the end of 2015. 9 Part of the Meaningful Use program required the calculation and reporting of at least six quality measures. Incentives were paid for reporting but were not tied to performance. Quality calculations for reporting in this program used information available in EHRs; challenges have been noted in this approach. 10 11 12 Unlike HEDIS, EHRs often calculate measure compliance using only data documented within a single EHR, in part due to a lack of health information exchange and to interoperability challenges. 13 14

The Merit-Based Incentive Payment System, enacted as part of the Medicare Access and CHIP Reauthorization Act (MACRA), succeeded Meaningful Use for ambulatory clinical quality reporting. Beginning in 2017, based on quality performance, high-performing clinicians are paid more than lower-performing ones. 15 This program also introduced an alternative method of quality reporting: qualified clinical data registries (QCDRs). QCDRs are third-party organizations that accumulate clinical data from various providers for quality measurement. Since QCDRs can collect data on the same patient from different organizations, including those using different EHRs, they can provide a longitudinal approach to performance measurement like HEDIS. This requires the use of interoperability standards to aggregate the data from different EHRs.

The primary standards that support clinical data exchange today from EHRs are Health Level 7 (HL7) messaging and the Consolidated Clinical Document Architecture (C-CDA). Previous research has demonstrated that clinical documents, such as the C-CDA, provide many of the necessary data elements for quality measure calculation. 16 17 Research is lacking, however, on the implementation of quality measurement by QCDRs, particularly those integrated with health information exchanges. In addition, studies have called into question the validity and reliability of quality measures calculated by EHR reporting systems. This is due to challenges in data completeness, accuracy, appropriate codification, gaps between structured fields and available free-text, as well as inconsistency of measure logic implementation. 18 19 20 Examination of clinical data from multiple EHRs provides an opportunity to explore how data transformation may improve quality measure calculation while recognizing these concerns. Furthermore, quality measure definitions for HEDIS and other reporting programs are specified using the Health Quality Measure Format and Quality Data Model (QDM). These specifications expect Quality Reporting Document Architecture (QRDA) documents as the clinical data format while this research explores the applicability of C-CDA documents to quality measurement.

The purpose of quality measurement is to evaluate the care quality delivered to the patient. This research seeks to detail and address challenges that affect the use of interoperability standards to achieve this intent of quality measurement by a QCDR. The Doctors Quality Reporting Network, offered as part of the Kansas Health Information Network (KHIN), was approved as a QCDR in 2017 by CMS and is the locus for this research. Through its use of data in KHIN, its potential reach extends to nearly 10,000 providers and over 5 million patients. The quality measures selected for evaluation included 17 electronic clinical quality measures adjudicated using technology certified by the NCQA.

We sampled the KHIN data from 11 ambulatory care sites during the 1-year period from July 1, 2016 to June 30, 2017. Sites were selected based on size (> 300 visits per month), continuous submission of clinical documents to KHIN, and independence from an acute care institution, since all the quality measures in this study relate to ambulatory care. Selected facilities were not contacted in advance, so the sample represents data as regularly used in health information exchange. Patient data use in this research was approved by the UTHealth Committee for the Protection of Human Subjects.

One hundred unique patients were randomly selected from each facility; the same patient was never selected from more than one facility. Data from a single clinical document during the time frame was used for quality measurement. Documents included a wide range of clinical data, including patient diagnoses, immunizations, medications, laboratory results, problems, procedures, and vital signs. These clinical domains are required by Meaningful Use as part of Continuity of Care Documents. Multiple EHRs were represented, including Allscripts (Chicago, Illinois, United States), Computer Programs and Systems, Inc. (Mobile, Alabama, United States), eClinicalWorks (Westborough, Massachusetts, United States), General Electric (Chicago, Illinois, United States), and Greenway Medical (Carrollton, Georgia, United States). The data were processed by Diameter Health's (Farmington, Connecticut, United States) Fusion and Quality modules (version 3.5.0), technology certified by NCQA for electronic clinical quality measurement. 21 This software includes both transformation logic for clinical data and the measure logic necessary to calculate and report quality performance. Fig. 1 shows how quality measure compliance may be calculated in the software application for a fictional patient not derived from any real patient information.

Fig. 1

Quality measure presentation in software application. Quality calculation shown for a fictional patient for calculated measures, with clinical detail shown for a specific measure. Note 1: Tabs along the top show three eligible measures with compliance and three eligible measures with noncompliance. Note 2: The button labeled “Smoking Gun” provides specific clinical detail that substantiates measure eligibility and compliance calculation. Note 3: The clinical detail of the eligible encounter, diagnosis and laboratory result that supports compliance for the selected measure (cms122v5 Diabetic HbA1c < 9%). Copyright and reprinted with permission of Diameter Health, Inc.
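To make the denominator/numerator pattern behind such measure logic concrete, the sketch below loosely mirrors the diabetes HbA1c example from Fig. 1 in highly simplified form. The codes, age bounds and threshold are placeholders and do not reproduce the certified CMS122v5 specification.

```python
# A highly simplified sketch of the denominator/numerator pattern behind an
# electronic clinical quality measure, loosely modelled on the diabetes HbA1c
# example in Fig. 1. The codes, age bounds and threshold are placeholders and
# do not reproduce the certified CMS122v5 logic.
from dataclasses import dataclass, field

DIABETES_CODES = {"44054006"}  # illustrative SNOMED concept for type 2 diabetes

@dataclass
class Patient:
    age: int
    problems: set = field(default_factory=set)          # diagnosis codes
    hba1c_results: list = field(default_factory=list)   # percent values, most recent last
    eligible_encounter: bool = False

def in_denominator(p: Patient) -> bool:
    return p.eligible_encounter and 18 <= p.age <= 75 and bool(p.problems & DIABETES_CODES)

def in_numerator(p: Patient) -> bool:
    return bool(p.hba1c_results) and p.hba1c_results[-1] < 9.0

patients = [
    Patient(60, {"44054006"}, [8.2], True),   # eligible and compliant
    Patient(60, {"44054006"}, [10.1], True),  # eligible, not compliant
    Patient(60, set(), [8.2], True),          # no qualifying diagnosis: not eligible
]
eligible = [p for p in patients if in_denominator(p)]
compliant = [p for p in eligible if in_numerator(p)]
print(f"compliance: {len(compliant)}/{len(eligible)}")
```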

Twenty-four measures were available in the certified software; 7 were excluded from this study. Five measures were excluded because they require data on multiple care encounters, which may not be accurately represented in a randomly selected clinical document (e.g., multivisit initiation and maintenance of drug dependence therapy). One was excluded due to the lack of behavioral assessment data in the sample, and one was excluded because it had been discontinued for use by CMS. The 17 examined measures constituted a broad range of process and outcome measures across diseases and preventative care, as shown in Table 1. Each measure's logic was specified according to the QDM and was eligible for use in CMS quality reporting programs. 22

Table 1

CMS identifier | Measure description | Measure type (reason) | Measure steward
74v6 | Primary caries prevention | Process (preventative) | CMS
82v4 | Maternal depression screening | Process (preventative) | NCQA
122v5 | Diabetes: Poor HbA1c control | Outcome (disease control) | NCQA
123v5 | Diabetes: Annual foot exam | Process (preventative) | NCQA
124v5 | Cervical cancer screening | Process (preventative) | NCQA
125v5 | Breast cancer screening | Process (preventative) | NCQA
127v5 | Pneumonia vaccination of older adults | Process (preventative) | NCQA
130v5 | Colorectal cancer screening | Process (preventative) | NCQA
131v5 | Diabetes: Annual eye exam | Process (preventative) | NCQA
134v5 | Diabetes: Attention for nephropathy | Outcome (disease control) | NCQA
146v5 | Appropriate testing for children with pharyngitis | Process (utilization) | NCQA
153v5 | Chlamydia screening for women | Process (preventative) | NCQA
154v5 | Appropriate treatment for children with upper respiratory infection | Outcome (utilization) | NCQA
155v5 | Pediatric weight assessment | Process (preventative) | NCQA
156v5 | High risk medication use in elderly | Outcome (patient safety) | NCQA
165v5 | Controlling high blood pressure | Outcome (disease control) | NCQA
166v6 | Use of imaging studies for back pain | Outcome (utilization) | NCQA

Abbreviations: CMS, Centers for Medicare and Medicaid Services; NCQA, National Committee for Quality Assurance.

The quality measures were first calculated using clinical data without any transformation logic. Since clinical documents generally available to KHIN were used in this study, the software aligned the clinical data from these extracts to the quality measure criteria as specified in the QDM. The measures were then recalculated using an iterative approach in which techniques were added to improve adherence to national standards, such as terminology and free-text mappings. This included techniques to deal with data heterogeneity in clinical documents as detailed in prior research. 14 Quantitative metrics on clinical encounters, problems, medications, laboratory results, and vital signs were analyzed for the 1,100 patients, and illustrative issues affecting quality measurement were recorded. Changes in measure calculation were then extensively tested against test cases made available by NCQA to determine whether certification was affected by the iterative improvement. Population counts of both denominators and numerators were captured before and after the iterative improvement process.

Depth of Clinical Data

All 1,100 selected clinical documents were loaded into the quality measurement software without error. Of the facilities selected, 4 (36%) submitted Healthcare Information Technology Standards Panel C-32 Continuity of Care Documents and 7 (64%) submitted HL7 C-CDA 1.1 Continuity of Care Documents. Patient age ranged from 0 to 99 at the beginning of the measure period. A total of 589 patients (53.5%) were female and 510 (46.4%) were male, with one patient's gender not recorded as male or female.

Content extracted from the clinical documents included 12,308 clinical encounters, 3,678 immunizations, 20,723 medications, 25,921 problems, 17,959 procedures, 45,704 diagnostic results, and 32,944 vital sign observations. All 11 sites produced clinical documents with information in the domains of patient medications, problems, procedures, results, and vital signs. The majority of clinical encounters represented annual wellness visits and typical evaluation and management distributions for ambulatory encounters. For nine of the sites, the data included information for prior clinical visits dating back months and years. Historical data are important for several of the quality measures that examine prior clinical information (e.g., past colonoscopies for colon cancer screening). For two of the sites, the clinical documents were more limited, containing data primarily related to the most recent clinical encounter.

Nonnormalized Measure Calculation and Focus Areas for Improvement

Using the clinical data without any transformation, quality measures were calculated using certified technology for the 12-month period from July 2016 to June 2017. Results for individual patients were collected using the standard reporting formats of the software and are presented in Table 2 (the "Calculation before iterative improvement" columns).

Table 2

CMS identifier | Measure description | Denominator (before improvement) | Compliance (before improvement) | Denominator (after improvement, % change) | Compliance (after improvement, absolute change)
74v6 | Primary caries prevention | 107 | 4.7% | 164 (+53%) | 3.0% (–1.7%)
122v5 | Diabetes: Poor HbA1c control | 20 | 45.0% | 78 (+290%) | 37.2% (–7.8%)
123v5 | Diabetes: Annual foot exam | 20 | 0.0% | 78 (+290%) | 0.0% (NA)
124v5 | Cervical cancer screening | 88 | 0.0% | 182 (+107%) | 7.1% (+7.1%)
125v5 | Breast cancer screening | 64 | 0.0% | 120 (+88%) | 9.2% (+9.2%)
127v5 | Pneumonia vaccination of older adults | 113 | 55.8% | 204 (+81%) | 55.9% (+0.1%)
130v5 | Colorectal cancer screening | 117 | 1.7% | 237 (+103%) | 14.3% (+12.6%)
131v5 | Diabetes: Annual eye exam | 20 | 0.0% | 78 (+290%) | 0.0% (NA)
134v5 | Diabetes: Attention for nephropathy | 20 | 35.0% | 78 (+290%) | 69.2% (+34.2%)
146v5 | Appropriate testing for children with pharyngitis | 0 | NA | 50 (NA) | 9.1% (NA)
153v5 | Chlamydia screening for women | 0 | NA | 5 (NA) | 20.0% (NA)
155v5 Rate 1 | Pediatric weight assessment: BMI percentile | 81 | 0.0% | 123 (+52%) | 22.0% (+22%)
155v5 Rate 2 | Pediatric weight assessment: Nutrition counseling | | 0.0% | | 0.0% (NA)
155v5 Rate 3 | Pediatric weight assessment: Activity counseling | | 0.0% | | 0.0% (NA)
156v5 Rate 1 | High risk medication use in elderly: 1 medication | 109 | 100% | 196 (+80%) | 98.5% (–1.5%)
156v5 Rate 2 | High risk medication use in elderly: 2 or more medications | | 100% | | 100% (NA)
165v5 | Controlling high blood pressure | 44 | 34.1% | 190 (+332%) | 36.4% (+2.3%)

Measures not included in iterative improvement
82v4 | Maternal depression screening | 1 | 0.0% | Not available | Not available
154v5 | Appropriate treatment for children with upper respiratory infection | 44 | 100% (73% excluded) | Not available | Not available
166v6 | Use of imaging studies for back pain | 2 | Not available (100% excluded) | Not available | Not available

Abbreviations: BMI, body mass index; CMS, Centers for Medicare and Medicaid Services; NA, not available.

Of the 17 measures, most showed unexpectedly low proportions of eligible patients (i.e., denominators), both relative to disease prevalence and to patient demographics. For example, a recent report identified 9.7% of adults in Kansas as having diabetes, but only 1.8% of the 1,100 patients qualified for the diabetes measures examined. 23 Consequently, one area for examination and iterative improvement was to increase the number of eligible patients ("Iterative Improvements for Patient Inclusion").

Of the 15 measures with at least 1 eligible patient, 9 showed no clinical events associated with the measure numerator, resulting in either 0 or 100% compliance. These rates called into question the validity of the calculation. Consequently, a second area for iterative improvement was to examine if data transformations would improve the accuracy of compliance rates (“Iterative Improvements for Quality Measure Compliance”).

Iterative Improvements for Patient Inclusion

Eligible Population Improvement for Encounters. Each of the 17 quality measures, as defined by the measure steward, requires a face-to-face encounter or office visit in the measurement period for the patient to be eligible for quality measure calculation. Since our information was drawn directly from interoperable documents produced by EHRs, the codes used in encounter documentation often lacked this specificity. An example is shown in Fig. 2, where no specific code is present in the yellow-highlighted XML, although the human-readable text provides the context of the visit.

Fig. 2

Illustrative example of encounter normalization. This example from a clinical document, edited to protect patient identity, demonstrates how code omission in the XML (highlighted in yellow) would normally exclude this patient from being included in quality measures. Using the text of “office visit” in the reference tag, however, allows a valid code to be selected from appropriate terminology.

Using automated mapping available in the software, the reference between the human-readable narrative and the machine-readable content was used to assign a code for this encounter based on the text of "Office Visit." The software uses a simple text-matching algorithm based on exact keywords in the text (e.g., "Office Visit," "Hospitalization," "ER Visit") to assign an appropriate code when one is not present in the machine-readable portion. The code selected was "308335008 (Patient encounter)" from the Systematized Nomenclature of Medicine (SNOMED), which qualified this patient encounter for quality calculation. Analogous encounter normalization techniques were performed on all 1,100 patients.
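A minimal sketch of this kind of exact-keyword fallback is shown below. Only the mapping of "office visit" to SNOMED 308335008 is drawn from the example above; any further keywords and codes would come from a curated terminology table rather than the illustrative entries here.

```python
# A minimal sketch of the exact-keyword fallback described above. Only the
# "office visit" mapping to SNOMED 308335008 is taken from the text; other
# keywords and codes would come from a curated terminology table.
ENCOUNTER_KEYWORDS = {
    # keyword found in narrative text -> (code, display name, code system)
    "office visit": ("308335008", "Patient encounter", "SNOMED CT"),
    # "hospitalization", "er visit", ... would map to their own codes
}

def infer_encounter_code(narrative_text: str):
    """Assign a code from the human-readable narrative when the XML carries none."""
    normalized = " ".join(narrative_text.lower().split())
    for keyword, coded_concept in ENCOUNTER_KEYWORDS.items():
        if keyword in normalized:
            return coded_concept
    return None  # left uncoded; the encounter stays ineligible for measure calculation

print(infer_encounter_code("Office Visit - follow up"))
```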

Eligible Population Improvement for Problem Inclusion. Several of the quality measures require patients to have a specific diagnosis before a specific date for inclusion in the quality measure. For example, for inclusion in the diabetes measures, a patient must have an eligible SNOMED, International Classification of Diseases (ICD)-9, or ICD-10 code on or before the measure period. Real-world documentation of onset dates, however, is often lacking in EHRs. This may be due either to the information not being known or to clinicians skipping over fields when documenting in the EHR.

Nine measures selected for this sample require a specific problem to be documented. These include diabetes (measures 122v5, 123v5, 131v5, 134v5), pharyngitis (146v5), pregnancy or sexually transmitted disease (153v5), respiratory infections (154v5), hypertension (165v5), and back pain (166v6). We examined all 25,291 problems documented on the 1,100 patients to determine whether the time of problem onset was recorded: 51.7% of problems had no onset date documented. In addition to the omission of problem onset dates, we also examined other sections of the clinical documents that may contain problems not on the problem list, namely the history of past illness and the encounters sections. We found 5,483 incremental problems or diagnoses in these sections, which represented a meaningful percentage (21.1%) of overall problems.

To address these issues, we used all sections of clinical documents that may include problems and changed our measure logic to address problem onset omission. Specifically, if a problem was documented as active, we assumed that the onset date must have been prior to the visit date (i.e., it is not reasonable that a clinician would document a problem as occurring in the future).
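The sketch below illustrates this onset-date rule, using simplified, illustrative field names rather than the underlying C-CDA elements.

```python
# A minimal sketch of the onset-date rule described above: an active problem
# with no documented onset is assumed to have started no later than the
# documented visit, so the patient can still qualify for the measure.
from datetime import date
from typing import Optional

def effective_onset(status: str, onset: Optional[date], visit_date: date) -> Optional[date]:
    if onset is not None:
        return onset
    if status.lower() == "active":
        return visit_date      # assume onset no later than the documented visit
    return None                # inactive problems without dates remain unusable

print(effective_onset("active", None, date(2016, 9, 1)))
```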

Iterative Improvements for Quality Measure Compliance

Compliance Improvement through Value Set Mapping. Electronic clinical quality measures use sets of codes, often referred to as "value sets," to determine whether a specific activity was performed. For example, for breast cancer screening (125v5), the measure specifies a value set of mammography studies that would qualify a mammography as having been performed. Through the examination of specific records, we found that the specific codes used in these value sets have a material impact on quality measure calculation. For mammography, all the specified codes were from Logical Observation Identifier Names and Codes (LOINC). As shown in Table 2, none of the eligible patients for this measure had one of those LOINC codes in the appropriate time period, since the compliance rate was 0%. This electronic clinical quality value set for mammography, however, differs from the value set for the equivalent HEDIS measure, which allows for Current Procedural Terminology, ICD-9, and ICD-10 codes.

We contacted NCQA, the measure steward for 16 of the 17 measures included in this research, to discuss this specific concern. They agreed that, for the measures where codes were included in HEDIS, equivalent concepts are acceptable through mapping (Smith A, Archer L, at National Committee for Quality Assurance, phone call, November 2017). This significantly increased compliance for the cancer preventative screening measures (124v5, 125v5, 130v5). This process would be expected to have had an impact on the two diabetes measures (123v5, 131v5), although no change was observed given the small eligible populations for these measures.
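Conceptually, the agreed mapping can be applied as an equivalence table consulted alongside the original value set, as in the sketch below; all code values shown are placeholders rather than the actual mammography value set.

```python
# A minimal sketch of extending a LOINC-only value set with mapped equivalents
# from other code systems, as agreed with the measure steward. All code values
# below are placeholders, not the actual mammography value set.
MAMMOGRAPHY_VALUE_SET = {("LOINC", "mammo-loinc-1"), ("LOINC", "mammo-loinc-2")}

EQUIVALENCE_MAP = {
    # recorded concept -> equivalent concept in the measure value set
    ("CPT", "mammo-cpt-1"): ("LOINC", "mammo-loinc-1"),
    ("ICD-10-PCS", "mammo-icd-1"): ("LOINC", "mammo-loinc-2"),
}

def counts_as_performed(code_system: str, code: str) -> bool:
    """True if the recorded code, or its mapped equivalent, is in the value set."""
    concept = (code_system, code)
    return concept in MAMMOGRAPHY_VALUE_SET or EQUIVALENCE_MAP.get(concept) in MAMMOGRAPHY_VALUE_SET

print(counts_as_performed("CPT", "mammo-cpt-1"))  # True once the mapping is applied
```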

Compliance Improvement through Medication Normalization. Electronic clinical quality measures use a national standard vocabulary, RxNorm, established by the National Library of Medicine, for medication-related logic. RxNorm is a normalized naming system that contains concepts spanning ingredients, coordinated dose forms, generic names, and brand names. When value sets are created for medication usage, however, they often include only generic concepts, omitting branded and ingredient concepts. There are significant challenges in using such a limited value set. First, we found that 3,095 (14.9%) of the medications collected in this sample were not coded in RxNorm. These likely included medications affecting measure calculation, but without terminology mapping they would yield inaccurate results. Second, we found that the term types of RxNorm codes in real-world data often did not match the measure value set. Specifically, only 12,146 (69.3%) of RxNorm-coded medications were mapped to generic drug concepts that align with quality measure value sets. The combined effect of medications not coded in RxNorm and not mapped to generic medication concepts is that only 58.6% of real-world medications from our sample functioned appropriately with quality measures that include medication logic.

The resolution to this inability to identify medications for measure calculation was terminology mapping of the medications, which was available in the research software. This mapping included the publicly available relationships between RxNorm term types as well as proprietary technology for free-text mapping of medication names. It successfully mapped 18,767 (90.6%) of the original medications to a usable RxNorm concept that could then be applied to the quality measure logic. For the remaining 1,956 medications that were not mappable, manual review showed that 460 were vitamins (e.g., multivitamins that did not specify content), 360 were medical supplies (e.g., lancets, test strips, nebulizers), and 191 were "unknown" or null entries. These types of entries were not applicable to the quality measures selected. This left 945 (4.5%) of medication entries unavailable to quality measure logic. Several of these were actual medications, but others were concepts recorded in a manner that did not identify a specific ingredient (e.g., "allergy immunotherapy" or "hormones"). The effective yield of usable medication data was approximately 95% (18,767 mapped vs. 945 unmapped medication entries).
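The sketch below illustrates the general idea under stated assumptions; the lookup tables are placeholders standing in for the public RxNorm relationship files and for the proprietary free-text mapper used in the study:

```python
from typing import Optional

# Placeholder tables: in practice these would be built from the RxNorm
# relationship files (e.g., brand -> generic) and a free-text mapping service.
BRAND_TO_GENERIC = {"rxcui-branded-001": "rxcui-generic-101"}
FREE_TEXT_TO_RXNORM = {"metformin 500 mg tablet": "rxcui-generic-101"}

def normalize_medication(rxcui: Optional[str], name: str) -> Optional[str]:
    """Resolve a medication entry to a generic RxNorm concept usable by value sets."""
    if rxcui:
        # Map branded (or other term types) to the corresponding generic concept.
        return BRAND_TO_GENERIC.get(rxcui, rxcui)
    # No RxNorm code at all: fall back to free-text name mapping.
    return FREE_TEXT_TO_RXNORM.get(name.strip().lower())
```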

Once translations were performed, it was also necessary to adjust the logic associated with medication administration before medication quality logic would function appropriately. Specifically, 17,505 (84.5%) of all medications were recorded in clinical documents as medication orders (i.e., HL7 moodCode of "INT"). Of those, however, 14,318 (81.8%) had an associated start date at or before the clinical encounter. We treated medications with a start date in the past as administered medication events rather than intended orders. This allowed the medication duration logic of High Risk Medications in the Elderly (156v5) to function (i.e., have at least 1 numerator event). This issue may stem from poor implementation of the clinical document standards, as detailed in prior research. 14
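A minimal sketch of this reclassification rule (ours, not the study's code) follows:

```python
from datetime import date
from typing import Optional

def reclassify_order(mood_code: str, start: Optional[date], encounter: date) -> str:
    """Treat ordered medications (HL7 moodCode 'INT') with a start date at or
    before the encounter as administered events so duration-based measure logic
    (e.g., 156v5) can evaluate them."""
    if mood_code == "INT" and start is not None and start <= encounter:
        return "administered"
    return "ordered" if mood_code == "INT" else "other"
```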

Compliance Improvement through Laboratory and Vital Sign Normalization. Laboratory information recorded in EHRs often does not match the value set of laboratory results in quality measures. This affected the diabetes control measure (122v5), which requires HbA1c results. Using all the result data in the collected information, 4.1% of HbA1c results did not have the appropriate LOINC code, and 14.8% did not use the appropriate unit of measure (i.e., %). The impact was even larger among laboratory results related to the diabetes nephropathy measure (136v5), where 18.3% of results did not have an appropriate code. For the pediatric body mass screening measure (155v5), vital signs used the appropriate LOINC code for body mass index (BMI), but 35.1% did not use the appropriate unit (i.e., kg/m 2 ). The solution was to normalize laboratory results and vital signs using both code mapping and unit translation, which affected measures 122v5, 134v5, and 155v5.
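The following sketch illustrates unit normalization of this kind; the LOINC codes are examples, and the IFCC-to-NGSP and lb/in² conversions are shown only as plausible instances of unit translation, not as the study's actual rules:

```python
# Example codes; a production pipeline would use full LOINC mapping tables and
# site-specific unit dictionaries.
HBA1C_LOINC = {"4548-4"}
BMI_LOINC = {"39156-5"}

def normalize_result(code: str, value: float, unit: str) -> tuple[float, str]:
    """Translate a result to the unit expected by the measure value set."""
    u = unit.strip().lower()
    if code in HBA1C_LOINC and u == "mmol/mol":
        # IFCC (mmol/mol) to NGSP (%) master-equation conversion.
        return round(0.09148 * value + 2.152, 1), "%"
    if code in BMI_LOINC and u in {"lb/in2", "lb/in^2"}:
        # Imperial BMI to metric BMI (standard factor of 703).
        return round(value * 703.0, 1), "kg/m2"
    return value, unit
```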

Compliance Improvement through Logic Changes. Finally, additional logic changes were attempted for three pediatric-related measures. For pediatric testing of pharyngitis (146v5), the relationship among the timing of the encounter, medication start, and problem onset was simplified. For the treatment of childhood upper respiratory infections (154v5), we found that the relationship among encounter timing, problem onset, and medication timing could not be simplified in a way that would include a reasonable portion of patients; attempted resolutions for this measure were unsuccessful. For the measure relating to pediatric weight (155v5), we found that the requested vital sign of BMI percentile was never recorded in the interoperable clinical documents we examined. Using the recorded data on BMI, gender, and patient age, however, permitted calculation of the appropriate percentile for part of this measure (i.e., where the BMI percentile was unambiguously determinable from the information provided).
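A sketch of the standard LMS approach to this kind of percentile calculation appears below; the LMS parameters are placeholders, and real values would come from a growth reference such as the CDC growth charts:

```python
from math import erf, log, sqrt
from typing import Optional

# Placeholder LMS parameters keyed by (sex, age in months); not real reference values.
LMS_TABLE = {("F", 120): (-2.26, 16.6, 0.125)}

def bmi_percentile(sex: str, age_months: int, bmi: float) -> Optional[float]:
    """Convert a BMI value to an age- and sex-specific percentile (LMS method)."""
    params = LMS_TABLE.get((sex, age_months))
    if params is None:
        return None
    L, M, S = params
    z = ((bmi / M) ** L - 1) / (L * S) if L != 0 else log(bmi / M) / S
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF -> percentile
```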

Resultant Quality Measure Calculations

Of the original 17 measures selected, we found two measures (166v6 and 82v4) for which the eligible population remained under 5 patients from the sampled population of 1,100. In addition, none of the attempted changes to the treatment of upper respiratory infections measure (154v5) reasonably reduced its exclusion rate. These three measures were considered nonfunctional despite attempts to increase their eligible populations (Table 2, "Measures not included in Iterative Improvement"). For the remaining 14 measures, we report both the original and the normalized quality measure rates in Table 2 ("Calculation after Iterative Improvement").

The overall impact of the iterative improvement on the eligible population increased the denominator populations across these 14 measures from 803 to 1,783 (+122%). This counts the same patient multiple times when the patient qualifies for multiple measures. The number of unique patients included in at least one measure increased from 315 to 601 (+91%).

The impact of the iterative improvement on compliance varied. Five measures increased from no applicable compliance to a nonzero rate. One measure decreased from 100% compliance to a lower rate. Three measures had at least one rate component remain at zero compliance despite attempts to improve it. Other measures showed small or moderate changes in reported compliance.

Once these changes were made, the 14 revised measures were extensively tested to determine whether certification compliance was maintained. Appropriate Testing for Children with Pharyngitis (146v5) was found not to maintain certification; while data are presented for this measure, the revised logic could not be used in reporting. Certification for the other 13 measures was unaffected, since the techniques applied through iterative improvement (free-text normalization, terminology mapping, and handling of missing data) do not affect certification test data, which include only properly structured data.

The implications of these results fall into two domains: considerations for measure authors and stewards, and considerations for organizations performing quality calculation.

Considerations for Measure Authors and Stewards

Quality measure development is a difficult task often done in the abstract; authors lack heterogeneous clinical data sets with which to validate logic and examine how real-world documentation practices affect calculations. Our findings support the need for measure developers to better understand how the routine collection of clinical data impacts quality measurement, as policymakers have acknowledged. 24 That requires access to, and testing with, real-world data before a measure is released for use. This will help measure authors evaluate the inherent limitations of terminologies, value sets, discrete data entry, and cohort definitions during measure development. It also helps identify gaps between clinical data collection and the data available for reporting. This study validated that the use of interoperability standards for clinical documents, as promoted by the Meaningful Use program, is a viable strategy. In addition, the use of interoperability standards provides a clear audit trail back to the source EHR; auditing can cover both the original source information and any data transformations performed. This becomes increasingly important as both private and public payers use quality measure performance for provider payment.

Another finding is the importance of measure consistency across programs. We observed that value sets varied substantially between HEDIS and electronic clinical quality measures; specifically, some terminologies included in HEDIS were excluded from the clinical quality measures. This caused several preventive measures to report zero compliance even though any observer would find evidence of the preventive care in the data. We strongly believe that value sets should be aligned and compatible across measure programs, particularly since providers have been encouraged to document in ways that support older programs such as HEDIS. This need for consistency also applies to how patients qualify for measures, as documented in other research. 25 Electronic clinical quality measures require a specific type of visit before a patient is eligible for quality measure calculation, and the lack of proper encounter coding in EHRs creates a burden in this domain. HEDIS measures apply to broader member populations based on billing profiles, while electronic clinical quality measures are artificially restricted. Such attribution logic also overlooks patients who go 12 to 24 months between physician visits, as well as emerging modalities where virtual encounters are used for patients in good health. We believe that measure eligibility logic should recognize these concerns to ensure greater consistency across programs.

Finally, poor documentation practices, such as free-text order entry or missing qualifiers, should never result in better compliance. For high-risk medications in the elderly, we found higher compliance when medication data were not normalized, which rewards clinicians and technologies that do not record medications in the standard terminology. Since we found that 41% of medications were not in the expected RxNorm term type, normalization of complex clinical data, such as medications, will remain important for the near term.

Considerations for Organizations Performing Quality Calculation

This study validates that the strategy promulgated by MACRA to establish QCDRs for quality measurement is technically feasible for at least several measures. It also demonstrates the viability of collecting clinical data from various sources using interoperability standards, an approach that could be adopted by integrated delivery systems with multiple in-house EHRs. While the compliance rates reported for selected measures vary from known benchmarks, we believe that is reasonable given the limited data examined and the fact that the selected facilities were not known to have any focus on the selected measures. Based on the findings of this research, measure selection by QCDRs will be important, as will the selection of a technology vendor to collect and normalize clinical data. Our findings substantiate the value of transforming clinical data collected using interoperability standards, as had previously been demonstrated for individual EHRs. 26

In addition, clinical documentation practices should remain a priority when working with providers who intend to use a QCDR to support electronic clinical quality measurement. For several of the measures with low or zero compliance rates, the required information is often not structured in the appropriate place to be available for quality measure calculation, as documented in prior research. 27 For example, we never found nutritional or physical activity counseling documented with the expected codes for the pediatric weight assessment measure, yet we fully expect this counseling was performed for at least some of the 123 eligible pediatric patients. Previous research has validated that practice type, size, and experience with EHR technology significantly affect data availability for quality reporting. 28 Further work with local practices and EHRs will be required to implement tactics that increase data completeness.

Since QCDRs have access to real-world data and the ability to author measures, they are in a unique position to advance the state of quality measure development. We believe that cross-industry collaboration between QCDRs and payers that need quality measurement for value-based contracting will be critical. These collaborations could include deidentified data repositories for new measures, measure validation using real-world clinical data, and best practices in data transformation to support quality measurement.

Finally, some QCDRs are tightly integrated with a health information exchange (HIE), and we believe this research highlights an important implication: improving clinical data will not only improve clinical quality measurement but will also improve the care transitions and improvement objectives supported by HIEs. Using interoperability standards to empower quality measurement provides an incentive and feedback loop to improve interoperability generally.

Limitations

This study was limited in several dimensions. First, it used a single clinical document to calculate the quality measures. Had multiple documents been used, the rates for both patient inclusion and compliance would likely have been different. Other data sources, such as QRDA or Fast Healthcare Interoperability Resources extracts, may have provided data beyond what was recorded in the available clinical documents but were not examined in this research. 29 30 Moreover, using electronic data capture for quality measurement has been shown to differ from manual abstraction and is not examined here. 20 Next, only a single measurement technology was used in this research. Nine vendors have been certified by the NCQA to calculate quality measures, and dozens more are certified by other authorized testing laboratories. 21 We fully expect that other technologies will generate different results, even on the same data set. Data transformations performed by any software may introduce variability and potential data anomalies into quality measurement, although the software certification process helps minimize inadvertent errors. Finally, no facility was contacted in advance about this study, so no effort was specifically expended to improve measure documentation or compliance. Further research should establish how longitudinal, multisource clinical data affect quality measure calculation, as such data may be anticipated to provide better rates than those observed from the point-in-time information examined here.

Quality measure calculation is possible using interoperability standards to collect data from a variety of EHRs. Quality measure stewards should increasingly use real-world data in their measure development to validate measure integrity, reliability, and consistency. The selection of specific quality measures by QCDRs will be an important consideration since quality measures may have issues affecting inclusion and compliance calculation, even when using certified technology. The use of interoperability standards to support quality measurement provides a long-term incentive to jointly improve interoperability, clinical documentation, and care quality. This will be paramount as payers transition to value-based contracting.

Clinical Relevance Statement

The use of clinical data exchanged routinely from EHRs can empower quality measurement. The results described in this article specify how to improve patient inclusion and measure accuracy using an iterative approach. Organizations that report quality measurement should be aware of how such techniques affect compliance rates for reported quality measures.

Multiple Choice Question

Why can the transformation of medication data from certified EHRs improve quality measure calculation?

  • Medication administration instructions are different among EHRs
  • Medication data need to align with the subset of codes, known as a “value set,” used by the quality measure
  • Medication doses can change for the same patient over time
  • All medications recorded by clinicians were unstructured and need codification before quality measurement can occur

Correct Answer: The correct answer is option b. This research found that over 40% of medication data coding from certified EHRs varied from the "value sets" used by quality measure logic. Consequently, transformation of the medication data is required for the appropriate calculation of measures. Terminology mapping is one technique that markedly improves the usability of medication data within interoperable clinical documents. This research made similar observations in other clinical domains, such as problems, encounters, laboratory results, and vital signs.

Acknowledgments

We would like to thank the many people who assisted in this study. From KHIN, we acknowledge Andy Grittman and Vince Miller, who provided the technical infrastructure for the research, and Mary Matzke and Jody Denson for their research assistance on issues related to quality reporting. From Diameter Health, we acknowledge Judith Clark and Dan Andersen for their assistance in regression testing measures for certification, and Tom Gaither for assembling the research team. From NCQA, we thank Ben Hamlin, Anne Smith, and Latasha Archer, who were responsive to questions and discussion.

Funding Statement

Support for this research was provided by the Kansas Health Information Network and Diameter Health, which jointly donated time and resources to the research team.

Conflict of Interest John D'Amore, Chun Li, and Jonathan Niloff receive salaries from and have an equity interest in Diameter Health, Inc., whose software provided the quality measure calculation used in this research. Dean Sittig serves as a scientific advisor with an equity interest in Diameter Health.

Protection of Human and Animal Subjects

This study was approved by the Institutional Review Board for the University of Texas, Health Science Center, Committee for the Protection of Human Subjects. Technical and administrative safeguards were utilized to protect the privacy of such information throughout this research.


Patient satisfaction and associated factors with inpatient health services at public hospitals in Ethiopia: a systematic review and meta-analysis

  • Ayenew Takele Alemu 1 ,
  • Eyob Ketema Bogale 2 ,
  • Solomon Ketema Bogale 3 ,
  • Eyob Getachew Desalew 4 ,
  • Getnet Alemu Andarge 3 ,
  • Kedir Seid 5 ,
  • Gebeyehu Lakew 4 ,
  • Amlaku Nigusie Yirsaw 4 ,
  • Mitiku Tefera 6 ,
  • Amare Mebrat Delie 1 &
  • Mahider Awoke Belay 1  

BMC Health Services Research, volume 24, Article number: 1042 (2024)


Patient satisfaction reflects how well the healthcare delivery system performs, and establishing a health system with better outcomes depends on it. It has been assumed that higher patient satisfaction levels correlate with quality healthcare outcomes. There are few national-level data on patient satisfaction with inpatient health services in Ethiopia. Therefore, a systematic review and meta-analysis was conducted to estimate the pooled proportion of patient satisfaction with inpatient health services at public hospitals and to identify associated factors.

The Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines were followed for data extraction. The following electronic databases were searched to identify the included studies: PubMed, Google Scholar, MEDLINE, Web of Science, Scopus, and institutional repositories. Statistical analysis was performed with STATA version 17 software using the random effects model, and pooled results were displayed with forest plots.

Of the 1,583 records identified through electronic database searching, 11 studies with 3,958 participants were included in this systematic review and meta-analysis. The estimated pooled proportion of patient satisfaction with inpatient health services was 57.4% (95% CI: 50.88–64.59, I² = 95.25%). Assurance of patient privacy (OR = 7.44, 95% CI: 3.63–15.25, I² = 0.0%), availability of direction signs (OR = 2.96, 95% CI: 1.91–4.57, I² = 0.0%), provision of adequate information (OR = 3.27, 95% CI: 1.63–6.58, I² = 65.60%), history of previous admission (OR = 0.29, 95% CI: 0.18–0.46, I² = 86.36%), and provision of on-time treatment (OR = 1.63, 95% CI: 1.21–2.20, I² = 86.36%) were statistically significant factors associated with patient satisfaction with inpatient health services.

The estimated pooled level of patient satisfaction with inpatient health services in Ethiopia is low. A higher level of patient satisfaction with inpatient health services was predicted by privacy assurance, timely treatment, availability of direction signs, provision of services with adequate information transfer, and no history of previous admission. To improve patient satisfaction, the Ministry of Health and hospital administrations must place a strong emphasis on ensuring the provision of high-quality, standard-based inpatient healthcare.


Introduction

Establishing a health system with better outcomes and long-lasting treatment is essential [ 1 ]. Quality, patient-centered treatment is becoming more prevalent in healthcare today [ 2 ]. The effectiveness of healthcare delivery and the quality of treatment provided have been assessed through the measurement of patient satisfaction [ 3 , 4 , 5 , 6 ]. The term "patient satisfaction with health services" describes how people feel about the setting, process, and overall state of the health delivery system; satisfaction compares patients' experiences with their preconceived expectations [ 7 , 8 , 9 ]. Assessing patients' opinions of healthcare services is a crucial part of the global healthcare delivery system [ 6 , 10 , 11 ]. It has been assumed that higher patient satisfaction levels correlate with quality healthcare outcomes [ 12 , 13 , 14 , 15 ]. Patients tend to perceive health care as up to standard unless they experience dissatisfaction [ 8 , 16 , 17 , 18 ], and dissatisfaction arises when individuals' expectations differ from the health treatments provided to them [ 12 ].

A moderate degree of patient satisfaction with inpatient health treatment was found in a review that analyzed 19 studies published in Iran [ 19 ]. According to a Chinese study by Chen, H. et al., 89.75% of patients were satisfied with inpatient medical services [ 20 ]. Another study conducted in a different part of China by Shan L et al. reported a 24% rate of patient dissatisfaction with inpatient care [ 21 ]. According to a review of 52 Iranian studies, the overall patient satisfaction rate with hospital treatments was 14.1%, with a range of 0.2% to 65.1% [ 22 ]. Overall patient satisfaction with inpatient care at a hospital affiliated with Tehran University of Medical Sciences in Iran was 84.3% [ 23 ]. A Saudi Arabian review of 25 papers reported overall patient satisfaction with primary healthcare ranging from 78% to 96% [ 24 ].

The level of patient satisfaction can be influenced by multifaceted factors, including cultural, behavioral, and socio-demographic features [ 25 ]. Patients' age, sex, residence, previous admission, educational status, occupational status, quality of care, hospital accreditation, length of hospital stay, and insurance ownership have been identified as determinants of inpatient satisfaction [ 1 , 14 , 19 ]. Higher educational level, longer waiting times, and diagnosis type have also been identified as predictors of patient satisfaction [ 2 , 7 ]. Lack of essential resources such as drugs, poor communication, long waiting times, uncleanliness of wards, violation of patients' privacy, and limited visiting time have been reported as reasons for patients' dissatisfaction with the health services offered to them [ 26 ]. Healthcare providers' attitudes, costs of services, the nature of the working environment, and patient trust are related to inpatient satisfaction [ 20 , 21 , 27 ]. It has been evidenced that health status and the provider-patient relationship are the most important determinants of satisfaction with medical care [ 6 ], and that patients' income is significantly related to their satisfaction with health services [ 28 ].

The combination of information from primary studies synthesized through meta-analysis can provide more relevant and credible evidence than primary studies alone and may be used for health policy and decision-making. More powerful, generalizable, and precise findings can be estimated, and new research questions may be generated [ 29 , 30 ]. Although patient perspectives are imperative in evaluating health service standards in developing nations, their views have not been adequately considered [ 31 ]. Many primary studies have been conducted in different regions of Ethiopia with inconsistent results regarding inpatient satisfaction status and its determinants, and country-level evidence is still scant. Therefore, this systematic review and meta-analysis was conducted to determine the national level of inpatient satisfaction and its determinants at public hospitals in Ethiopia.

Methods and material

Study design, setting, and search strategy

A systematic review and meta-analysis was employed to determine the pooled patient satisfaction and identify associated factors with inpatient health services in Ethiopia. All primary studies that investigated the level of patient satisfaction and associated factors with inpatient health services at public hospitals in Ethiopia were used to conduct this meta-analysis. We first checked the PROSPERO database to confirm that no systematic review and meta-analysis had already been registered on this topic, and then registered this review on PROSPERO with the registration number CRD42024498195. We followed the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) guidelines in conducting this study [ 29 , 32 ]. Electronic databases including Google Scholar, PubMed, MEDLINE, Web of Science, and Scopus were searched to identify studies reporting the level of patient satisfaction and associated factors with inpatient health services published in Ethiopia at any time. The articles were downloaded, screened, and cited using EndNote 20 reference management software for Windows. Additional literature was retrieved by extending our search to cross-references, and Ethiopian universities' online repositories were accessed to retrieve unpublished papers.

Research questions to this systematic review and meta-analysis were organized, framed, and answered using the population, exposure, comparator, and outcome (PECO) approach [ 29 ]. Population refers to admitted patients or caregivers or parents (of childhood patients), exposure refers to associated factors with inpatient satisfaction, comparator refers to the reference group reported in the included studies, and outcome refers to the level of patient satisfaction with inpatient health services.

The literature search was conducted from January 20 to February 20, 2024. To search the articles, we used the following combination of key search terms: (‘satisfaction’ OR ‘patient satisfaction’) AND (‘in-patient’ OR ‘in-patient health services’ OR ‘admitted patients’) AND (‘associated factors’ OR ‘determinants’) AND (‘Ethiopia’).

Eligibility criteria

Both published and unpublished studies reporting the level of satisfaction with inpatient health services among admitted patients, written in English and regardless of study year, were eligible. All studies reporting the level of satisfaction in the context of inpatient departments at public hospitals in Ethiopia, irrespective of methodological measurements, were included, as were studies that assessed factors associated with patient satisfaction with inpatient services. We excluded studies that were methodologically flawed, not conducted in the context of inpatient units, or lacking full-text access, as well as reviews and studies conducted outside of Ethiopia. Studies that reported inpatient satisfaction with specific segments of services, such as nursing, laboratory, and pharmacy services, were also excluded. Eligibility of all retrieved studies was evaluated by two authors (A.T & M.A) independently, and any inconsistencies and disagreements were resolved through discussion and consensus.

Outcome variables and operational definitions

The first outcome variable in this meta-analysis was patient satisfaction with inpatient health services, expressed as a percentage. It was estimated by performing a meta-analysis of the pooled effect size using a random effects model. The second outcome variable was the set of predictors of patient satisfaction with inpatient health services; pooled odds ratios based on the binary results from the included primary studies were used to identify these predictors.

Quality assessment and data extraction

Two authors (A.T & A.M) independently conducted a full-text review of the included articles using a quality appraisal tool; a third author (M.A) facilitated discussions to resolve any disagreements. The quality appraisal tool was adapted from the Joanna Briggs Institute (JBI) critical appraisal checklist for cross-sectional studies, which consists of an eight-item checklist [ 33 ]. We also assessed the methodological quality of the included studies using a modified version of the Newcastle–Ottawa Scale (NOS) for cross-sectional studies, which includes key criteria such as sample size, sample representativeness, response rate, control of confounders, outcome determination, and appropriateness of statistical tests [ 34 ].

Data extraction was conducted by two authors (A.T & M.A) independently using a pre-piloted data extraction format developed in a Microsoft Excel spreadsheet for Windows. The information collected included the author's name, publication year, study hospital, region, study design, sample size, sampling technique, outcome measuring instrument, proportion of patient satisfaction with inpatient health services, and determinants of inpatient satisfaction.

Data processing and analysis

Meta-analysis was computed after importing the extracted data into STATA 17 software. Forest plots were used to display the pooled results, and heterogeneity across the included studies was assessed using Cochran's Q-test and the inverse-variance (I²) statistic at a p-value of less than 0.05. The I² statistic measures the percentage of total variation across studies that is due to heterogeneity rather than chance; it ranges from 0 to 100%, reflecting no to high degrees of heterogeneity. A random effects meta-analysis model was used to estimate the pooled patient satisfaction with inpatient health services, with a 95% confidence interval [ 30 , 35 ]. To identify possible sources of heterogeneity, a univariate meta-regression was computed. Publication bias was checked using a funnel plot and Egger's test at a p-value of less than 0.05 [ 36 , 37 ].
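For readers unfamiliar with the computation, the sketch below shows DerSimonian-Laird random-effects pooling of proportions together with the I² statistic; it is an illustrative re-implementation in Python, not the STATA routine used in this review, and raw proportions are used only to keep it short (a logit or Freeman-Tukey transform is often preferred in practice):

```python
import numpy as np

def dersimonian_laird(p, n):
    """Random-effects (DerSimonian-Laird) pooling of study proportions with I^2."""
    p, n = np.asarray(p, float), np.asarray(n, float)
    var = p * (1 - p) / n                     # within-study variance
    w = 1 / var                               # fixed-effect weights
    fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - fixed) ** 2)          # Cochran's Q
    df = len(p) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (var + tau2)                 # random-effects weights
    pooled = np.sum(w_star * p) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2, tau2
```

For example, `dersimonian_laird([0.55, 0.62, 0.48], [300, 250, 410])` returns the pooled proportion, its 95% confidence interval, I², and the between-study variance tau².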

After searching the electronic databases, 1,583 records were identified. Of these, 58 duplicates, 1,452 irrelevant studies, and 33 studies that did not involve hospital services were removed. The full texts of the remaining 40 papers were assessed for eligibility, and 29 were deemed ineligible because they did not meet the inclusion criteria. Finally, 11 studies were included in the systematic review and meta-analysis: two studies [ 38 , 39 ] in Addis Ababa, one study [ 40 ] in Amhara, two studies [ 41 , 42 ] in Oromia, three studies [ 43 , 44 , 45 ] in Southern Nations, Nationalities, and Peoples, one study [ 46 ] in Tigray, one study [ 47 ] in Benishangul-Gumuz, and one study [ 48 ] in Gambella regions (Fig. 1). Nine of the eleven included studies were published, while two, Girma et al. [ 39 ] and Melese et al. [ 43 ], were grey literature. Nine studies were undertaken in general inpatient settings, and two studies, Marama et al. [ 46 ] and Sileshi et al. [ 45 ], were conducted in inpatient settings related to obstetrics/gynecology and neonatal intensive care units, respectively. In total, 3,958 patients participated in the primary studies, which were conducted from 2013 to 2023. Ten studies had cross-sectional designs and one, by Sabo et al. [ 44 ], used a mixed-methods design; all studies employed random sampling to select their participants. A four-point Likert scale (very dissatisfied to very satisfied) was employed by Animut et al. [ 48 ] to measure patient satisfaction, while the other 10 primary studies used a five-point Likert scale (strongly dissatisfied to strongly satisfied), with participants scoring their perception of the inpatient health services they received on a scale of 1 to 5 points (Table 1).

Figure 1. PRISMA flow chart for selecting study articles

Patient satisfaction with inpatient health services

Since there was significant heterogeneity across studies when the fixed effects model was computed (p = 0.00), we used the random effects model to estimate the pooled effect size of patient satisfaction with inpatient health services. The estimated pooled national level of patient satisfaction with inpatient health services was 57.4% (95% CI: 50.88–64.59, I² = 95.25%) (Fig. 2). The high I² value indicates an extremely high degree of heterogeneity. We therefore used sample size and publication year as covariates in a meta-regression analysis to explore potential sources of heterogeneity; neither had a statistically significant effect on the between-study heterogeneity (Table 2).

Figure 2. Forest plot for the pooled proportion of patient satisfaction with inpatient health services, Ethiopia, 2024

Subgroup analysis

The included studies were grouped into two categories by publication year: 2013 to 2018 and 2019 to 2023. We performed a subgroup analysis using these categories to explore possible heterogeneity across studies. Studies conducted between 2013 and 2018 showed the higher pooled proportion of patient satisfaction with inpatient health services among the 11 included studies (65.01%, 95% CI: 53.28–76.75, I² = 95.99%) (Fig. 3).

Figure 3. Subgroup analysis of inpatient satisfaction by publication year category, Ethiopia, 2024

We computed Egger's test and examined the funnel plot to assess publication bias. Visual inspection of the funnel plot showed a symmetrical distribution of study effect sizes against their standard errors, suggesting no publication bias (Fig. 4), and Egger's test (p = 0.35) likewise showed no evidence of publication bias. Furthermore, a sensitivity analysis using the random effects model indicated that no single study unduly influenced the overall estimated level of patient satisfaction with inpatient health services (Fig. 5).
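The sketch below shows the usual form of Egger's regression test for funnel-plot asymmetry; it is illustrative only (not the STATA implementation used here) and assumes study effect sizes and their standard errors are already available:

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression test: regress standardized effect (effect / SE) on
    precision (1 / SE); a non-zero intercept suggests small-study effects."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    y = effects / ses
    x = 1 / ses
    X = np.column_stack([np.ones_like(x), x])      # intercept + precision
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    dof = len(y) - 2
    sigma2 = np.sum((y - X @ beta) ** 2) / dof     # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    p_value = 2 * stats.t.sf(abs(t_intercept), dof)
    return beta[0], p_value                        # intercept and its p-value
```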

Figure 4. Funnel plot assessing publication bias for the 11 included studies, 2024

Figure 5. Sensitivity analysis of the 11 included studies, 2024

Factors associated with patient satisfaction with inpatient health services

Educational status

Compared to their counterparts, patients with no formal education had 76% higher odds of being satisfied with inpatient health services; however, this difference was not statistically significant (OR = 1.76, 95% CI: 0.50–6.23, I² = 95.2%). The heterogeneity test revealed substantial variation between studies (p = 0.00).

Place of residence

Although the difference was not statistically significant, patients living in urban areas were 28% more likely than those living in rural areas to be dissatisfied with inpatient health services (OR = 1.28, 95% CI: 0.45–3.61, I² = 94.33%). Egger's test showed no publication bias (p = 0.068).

Patient privacy assurance

Patients who felt that their privacy was protected were 7.44 times more likely to be satisfied than those who felt that their privacy had been violated (OR = 7.44, 95% CI: 3.63–15.25, I² = 0.0%). Egger's regression test for small-study effects found no publication bias (p = 0.43).

Availability of direction indicators

Patients in hospitals with direction indicators had 2.96-fold higher odds of satisfaction compared to those in hospitals without them (OR = 2.96, 95% CI: 1.91–4.57, I² = 0.0%). No discernible variation was seen between studies according to the heterogeneity test (p = 0.69).

Adequacy of information received

When compared to patients who did not have access to sufficient information about the treatments they received, those who felt that they received adequate information from healthcare providers were 3.27 times more likely to be satisfied with inpatient health services (OR = 3.27, 95% CI: 1.63–6.58, I² = 65.60%). Egger's test showed no publication bias (p = 0.088).

History of admission

Compared to patients without a previous admission history, individuals with a prior history of admission had 71% lower odds of being satisfied with inpatient health services (OR = 0.29, 95% CI: 0.18–0.46, I² = 86.36%). The heterogeneity test found significant variation between studies (p = 0.01), and Egger's test indicated significant publication bias (p = 0.0068).

Provision of on-time treatment

Compared to patients who did not receive treatment on time, those who felt that they received timely treatment were 1.63 times more likely to be satisfied (OR = 1.63, 95% CI: 1.21–2.20, I² = 52.74%). The heterogeneity test showed non-significant variation across studies (p = 0.15).

Patient satisfaction is a crucial concept in determining whether the healthcare services provided to patients meet their expectations [ 8 ]. Patient satisfaction, particularly that of inpatients, is increasingly being used globally as an indicator of the provision of high-quality, desired healthcare [ 1 , 49 ]. This systematic review and meta-analysis was conducted to generate national evidence on inpatient health services by estimating the overall pooled effect size of patient satisfaction with inpatient health services and identifying associated factors.

After reviewing studies including 3,958 participants, the overall pooled proportion of patients satisfied with inpatient health services was found to be 57.4% (95% CI: 50.88–64.59). This result is lower than those of studies conducted in China [ 20 ], Saudi Arabia [ 24 ], and a hospital affiliated with Tehran University of Medical Sciences (Iran) [ 23 ], which reported patient satisfaction rates of 89.75%, 78–96%, and 84.3%, respectively. The differences could be attributed to national policies and socioeconomic conditions. Our result, however, is higher than that of another study carried out in Iranian hospitals [ 22 ], where the reported patient satisfaction was 14.1%. The observed disparity could stem from variation in the level of expectation among research participants, which may in turn reflect differences in their national, cultural, and socioeconomic backgrounds. On the other hand, our result is nearly comparable to the finding of a systematic review and meta-analysis that assessed patient satisfaction with the Ethiopian general healthcare system, which estimated the overall pooled patient satisfaction at 63.7% [ 50 ]. This consistency indicates that Ethiopian healthcare providers are still not adequately addressing patients' basic concerns.

The subgroup analysis in our meta-analysis showed that studies conducted between 2013 and 2018 (the earlier five years) had a greater pooled proportion of patient satisfaction with inpatient health services than studies conducted between 2019 and 2023 (the later five years). This outcome reflects a declining trend in patient satisfaction. It may be related to patients' rising expectations as their knowledge of the quality and standards of healthcare gradually increases; patients today have high expectations for high-quality healthcare.

The results of our meta-analysis showed that the following factors were statistically significant predictors of patient satisfaction with inpatient health services: assurance of patients' privacy, availability of direction indicators in hospitals, provision of services with adequate information, provision of on-time treatment, and history of previous admission. This finding is consistent with other studies [ 6 , 19 , 26 ], which may suggest that these variables are global predictors of patient satisfaction with health care. It might also partly reflect the limited number of studies included in our meta-analysis; increasing the number of studies may change the result. Although not statistically significant, patients' educational background and place of residence were also found to be associated with the degree of patient satisfaction with inpatient health treatments. The small number of studies used to estimate the pooled effect size may explain the non-significant associations.

Study limitations

The findings of this systematic review and meta-analysis offer national-level evidence and insight into inpatient satisfaction rates at Ethiopian public hospitals. The study was not, however, without limitations. First, the number of included primary studies was small because we did not consider studies conducted in specific areas of inpatient services, such as nursing, laboratory, and pharmacy units. Second, despite performing subgroup and meta-regression analyses, it was not possible to identify the source of the notable heterogeneity in the estimates of the overall pooled effect size across studies; this may be related to the high sensitivity of Cochran's Q-test when the number of included studies is limited. Third, while many variables influence patient satisfaction, only a small number could be meta-analyzed because there were insufficient studies with consistent results.

Conclusion and recommendations

The overall pooled effect size of patient satisfaction with inpatient health services in Ethiopia is low, and our meta-analysis indicates that patient satisfaction with inpatient health services has decreased over time. A higher level of patient satisfaction with inpatient health services was predicted by factors such as privacy assurance, timely service, availability of direction indicators, services with adequate information transfer, and no history of previous admission. To increase inpatient satisfaction, the Ministry of Health and hospital administrations must place a strong emphasis on ensuring the provision of high-quality, standard-based inpatient healthcare. Healthcare providers are advised to avoid violating patient privacy, withholding pertinent information, and delaying patient care. One key strategy to increase patient satisfaction is to improve hospital infrastructure, including clearly marked directional signs for every unit, and to make evidence-based admission decisions to reduce unnecessary admissions. Additional investigation of these and other aspects is required to improve the delivery of standard-based inpatient healthcare.

Availability of data and materials

All data included in the Systematic review and Meta-analyses are available in the main manuscript.

Abbreviations

CI: Confidence Interval

CS: Cross-Sectional

JBI: Joanna Briggs Institute

NOS: Newcastle–Ottawa Scale

PRISMA: Preferred Reporting Items for Systematic Review and Meta-Analysis

SNNP: Southern Nations, Nationalities, and Peoples

Dewi WP, Peristiowati Y, Wardani R. Determinants of Satisfaction of Inpatients in Hospitals. Journal for Quality in Public Health. 2022;5(2):444–52.


Molalign Takele G, et al. Assessment patient satisfaction towards emergency medical care and its determinants at Ayder comprehensive specialized hospital, Mekelle, Northern Ethiopia. PLoS ONE. 2021;16(1): e0243764.


Molla M, et al. Assessment of adult patients satisfaction and associated factors with nursing care in black lion hospital, Ethiopia; institutional based cross sectional study, 2012. International Journal of Nursing and Midwifery. 2014;6(4):49–57.

Yehualaw A, et al. Determinants of patient satisfaction with pharmacy services at Felege Hiwot comprehensive specialized hospital, Bahir Dar, Ethiopia. Annals of Medicine and Surgery. 2023;85(12):5885–91.


Mezemir R, Getachew D, Gebreslassie M. Patients’ satisfaction and its determinants in outpatient Department of Deberebirhan Referral Hospital, north Shoa, Ethiopia. Int J Econ Manag Sci. 2014;3(191):2.


Cleary, P.D. and B.J. McNeil, Patient satisfaction as an indicator of quality care . Inquiry, 1988: p. 25–36.

Desta H, Berhe T, Hintsa S. Assessment of patients’ satisfaction and associated factors among outpatients received mental health services at public hospitals of Mekelle Town, northern Ethiopia. Int J Ment Heal Syst. 2018;12:1–7.

Batbaatar E, et al. Conceptualisation of patient satisfaction: a systematic narrative literature review. Perspect Public Health. 2015;135(5):243–50.

Thompson AG, Sunol R. Expectations as determinants of patient satisfaction: concepts, theory and evidence. Int J Qual Health Care. 1995;7(2):127–41.


Schoenfelder T, Klewer J, Kugler J. Determinants of patient satisfaction: a study among 39 hospitals in an in-patient setting in Germany. Int J Qual Health Care. 2011;23(5):503–9.

Habte F, Gedamu M, Kassaw C. Patient satisfaction and associated factor at red cross pharmacies in Addis Ababa, Ethiopia. BMC Health Serv Res. 2023;23(1):1181.


Lubis B, et al. Analysis Of Factors Affecting The Quality Of Service Toward Inpatient Patient Satisfaction Bpjs Dirsu Bandung. International Journal Of Management And Humanities. 2021;5(5):14–7.

Asres AW, et al. Assessment of patient satisfaction and associated factors in an outpatient department at Dangila primary hospital, Awi zone, Northwest Ethiopia, 2018. Global Security: Health, Science and Policy. 2020;5(1):57–64.

Eleni TT, Tessema KM, Kaleab TT. Inpatient Care Satisfaction among public and private health sectors in Bahir Dar town, Amhara regional state, North West Ethiopia. Int J Caring Sci. 2018;11(3):1609–22.

Belayneh M. Inpatient satisfaction and associated factors towards nursing care at Felegehiwot referral hospital, Amhara regional state, Northwest Ethiopia. Global J Med Public Health. 2016;5(3):1–13.

Kidanemariam, G., et al., Improving Patient Satisfaction and Associated Factors at Outpatient Department in General Hospitals of Central Zone, Tigray, Northern Ethiopia, June 2018-August 2019: Pre-and Postinterventional Study . BioMed Research International, 2023. 2023.

Bjertnaes OA, Sjetne IS, Iversen HH. Overall patient satisfaction with hospitals: effects of patient-reported experiences and fulfilment of expectations. BMJ Qual Saf. 2012;21(1):39–46.

Goben, K.W., E.S. Abegaz, and S.T. Abdi, Patient satisfaction and associated factors among psychiatry outpatients of St Paulo’s Hospital, Ethiopia . General Psychiatry, 2020. 33(1).

Farzianpour F, Byravan R, Amirian S. Evaluation of patient satisfaction and factors affecting it: a review of the literature. Health. 2015;7(11):1460.

Chen, H., et al., Factors influencing inpatients’ satisfaction with hospitalization service in public hospitals in Shanghai, People’s Republic of China . Patient preference and adherence, 2016: p. 469–477.

Shan L, et al. Patient satisfaction with hospital inpatient care: effects of trust, medical insurance and perceived quality of care. PLoS ONE. 2016;11(10): e0164366.

Esfahani P, Nezamdust F. Patients’ Satisfaction in hospitals of Iran: A systematic review and meta-analysis. Qom University of Medical Sciences Journal. 2019;13(4):58–72.

Makarem, J., et al., Patients' satisfaction with inpatient services provided in hospitals affiliated to Tehran University of Medical Sciences, Iran, during 2011–2013 . Journal of medical ethics and history of medicine, 2016. 9.

Alshahrani, A.M. Predictors of Patients’ Satisfaction with Primary Health Care Services in the Kingdom of Saudi Arabia: A Systematic Review. in Healthcare . 2023. MDPI.

Batbaatar E, Dorjdagva J, Luvsannyam A, Savino MM, Amenta P. Determinants of patient satisfaction: a systematic review. Perspectives in Public Health, 2016.

Assefa F, Andualem, Hailemichael Y. Assessment of clients' satisfaction with health service deliveries at Jimma University Specialized Hospital. Ethiop J Health Sci. 2011;21(2):102–9.

Kidanemariam G, et al. Improving Patient Satisfaction and Associated Factors at Outpatient Department in General Hospitals of Central Zone, Tigray, Northern Ethiopia, June 2018-August 2019: Pre- and Postinterventional Study. Biomed Res Int. 2023;2023:6685598.

Ahmed T, et al. Levels of adult patients’ satisfaction with nursing care in selected public hospitals in Ethiopia. Int J Health Sci. 2014;8(4):371.

Liberati A, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7): e1000100.

Ried, K., Interpreting and understanding meta-analysis graphs: a practical guide. Australian family physician, 2006. 35(8).

Andaleeb SS, Siddiqui N, Khandakar S. Patient satisfaction with health services in Bangladesh. Health Policy Plan. 2007;22(4):263–73.

Moher D, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1.

Munn Z, et al. The development of a critical appraisal tool for use in systematic reviews addressing questions of prevalence. Int J Health Policy Manag. 2014;3(3):123–8.

Modesti PA, et al. Panethnic Differences in Blood Pressure in Europe: A Systematic Review and Meta-Analysis. PLoS ONE. 2016;11(1): e0147601.

Higgins JP, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002;21(11):1539–58.

Tura G, Fantahun M, Worku A. The effect of health facility delivery on neonatal mortality: systematic review and meta-analysis. BMC Pregnancy Childbirth. 2013;13:18.

Mulugeta H, et al. Patient satisfaction with nursing care in Ethiopia: a systematic review and meta-analysis. BMC Nurs. 2019;18:1–12.

Asamrew N, Endris AA, Tadesse M. Level of Patient Satisfaction with Inpatient Services and Its Determinants: A Study of a Specialized Hospital in Ethiopia. J Environ Public Health. 2020;2020:2473469.

Girma, M., Assessment of inpatient satisfaction on quality of care and associated factors at Zewwditu Memorial Hospital, Addis Ababa. School of Public Health. College of Health Sciences: Addis Ababa University; 2015.

Weldearegay ZW, et al. Patient satisfaction and associated factors among in-patients in primary hospitals of North Shoa Zone, Amhara Regional State, Ethiopia. International Journal of Public Health. 2020;9(2):76–81.

Tesfaye HT. Statistical analysis of patients' satisfaction with hospital services: a case study of Shashemene and Hawassa University Referral Hospitals. Arba Minch, Ethiopia; 2009;7:1–6.

Woldeyohanes TR, et al. Perceived patient satisfaction with in-patient services at Jimma University Specialized Hospital, Southwest Ethiopia. BMC Research Notes. 2015;8(1):1–8.

Melese, M., Patients’ Level of Satisfaction With Quality of Health Service Delivered at The In-Patient Department of Nigist Elleni Mohammed Memorial Referal Hospital in Hadiya Zone, Southern Ethiopia. 2018.

Sabo KG, et al. Factors affecting satisfaction with inpatient services among adult patients admitted to Arba Minch General Hospital, Southern Ethiopia: a mixed method study. Health Services Insights. 2023;16:11786329231166512.

Sileshi, E., et al., Parental satisfaction towards care given at neonatal intensive care unit and associated factors in comprehensive and referral hospitals of southern ethiopia . Journal of Pregnancy, 2023. 2023.

Marama, T., et al., Patient satisfaction and associated factors among clients admitted to obstetrics and gynecology wards of public hospitals in Mekelle town, Ethiopia: an institution-based cross-sectional study . Obstetrics and gynecology international, 2018. 2018.

Aga TB, Ferede YM, Mekonen EG. Satisfaction and associated factors towards inpatient health care services among adult patients at Pawie General Hospital, West Ethiopia. PLoS ONE. 2021;16(4): e0249168.

Animut, N., et al., Satisfaction toward Quality of Care and Associated Factors among Patients Admitted to Gambella General Hospital, Gambella Region, Southwest Ethiopia . Advances in Public Health, 2022. 2022.

Asamrew, N., A.A. Endris, and M. Tadesse, Level of patient satisfaction with inpatient services and its determinants: a study of a specialized hospital in Ethiopia . Journal of environmental and public health, 2020. 2020.

Biresaw H, et al. Patient satisfaction towards health care services provided in Ethiopian health institutions: a systematic review and meta-analysis. Health services insights. 2021;14:11786329211040688.


Acknowledgements

We would like to thank the authors of all primary studies included in our research.

Funding

The authors received no financial or non-financial support for this work.

Author information

Authors and affiliations

Department of Public Health, College of Medicine and Health Sciences, Injibara University, Po. Box 40, Injibara, Ethiopia

Ayenew Takele Alemu, Amare Mebrat Delie & Mahider Awoke Belay

Health Promotion and Behavioral Science Department, School of Public Health, College of Medicine and Health Science, Bahir Dar University, Bahir Dar, Ethiopia

Eyob Ketema Bogale

Department of Nutrition, Antsokiya Gemza Woreda Health Office, North Shoa, Northeast, Ethiopia

Solomon Ketema Bogale & Getnet Alemu Andarge

Institute of Public Health, College of Medicine and Health Sciences, Health Promotion and Health Behavior, University of Gondar, PO.Box 196, Gondar, Ethiopia

Eyob Getachew Desalew, Gebeyehu Lakew & Amlaku Nigusie Yirsaw

Bati Primary Hospital, Oromia Special Zone, North Shoa, Kemisie, Ethiopia

Kedir Seid

Department of Midwifery, School of Nursing and Midwifery, Asrat Woldeyes Health Science Campus, Debre Berhan University, Debre Berhan, Ethiopia

Mitiku Tefera


Contributions

Ayenew Takele Alemu and Mahider Awoke Belay worked on the protocol development, selection of studies, data extraction, analysis, and interpretation, as well as the initial draft of the manuscript. Eyob Ketema Bogale, Solomon Ketema Bogale, Eyob Getachew Desalew, and Getnet Alemu Andargie were involved in the data quality assessment, data extraction, and document revision. Kedir Seid, Gebeyehu Lakew, Amlaku Nigusie Yirsaw, Mitiku Tefera, and Amare Mebrat worked on data analysis, interpretation, and revision. The final draft of this work was prepared by Ayenew Takele Alemu, Amare Mebrat Delie, and Mahider Awoke Belay. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ayenew Takele Alemu .

Ethics declarations

Ethics approval and consent to participate

This section is not applicable as the study is a systematic review and meta-analysis.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Alemu, A.T., Bogale, E.K., Bogale, S.K. et al. Patient satisfaction and associated factors with inpatient health services at public hospitals in Ethiopia: a systematic review and meta-analysis. BMC Health Serv Res 24, 1042 (2024). https://doi.org/10.1186/s12913-024-11552-5


Received: 11 April 2024

Accepted: 06 September 2024

Published: 09 September 2024

DOI: https://doi.org/10.1186/s12913-024-11552-5


Keywords

  • Patient satisfaction
  • Inpatient health service
  • Systematic review
  • Meta-analysis

BMC Health Services Research

ISSN: 1472-6963


