Qualitative study design: Surveys & questionnaires


Qualitative surveys use open-ended questions to produce long-form written or typed answers. Questions aim to reveal opinions, experiences, narratives or accounts. Surveys are often a useful precursor to interviews or focus groups, helping to identify initial themes or issues that can then be explored further in the research. Surveys can also be used iteratively, being changed and modified over the course of the research to elicit new information.

Structured interviews may follow a similar form of open questioning.

Qualitative surveys frequently include quantitative questions to establish demographic details such as age and nationality.

Qualitative surveys aim to elicit a detailed response to an open-ended topic question in the participant’s own words. As with quantitative surveys, there are three main delivery methods: face-to-face surveys, telephone surveys, and online surveys. Each method has strengths and limitations.

Face-to-face surveys

  • The researcher asks participants one or more open-ended questions about a topic, typically while in view of the participant’s facial expressions and other behaviours. Being able to see the respondent’s reactions enables the researcher to ask follow-up questions to elicit a more detailed response, and to follow up on any facial or behavioural cues that seem at odds with what the participant is explicitly saying.
  • Face-to-face qualitative survey responses are usually audio-recorded and transcribed into text to ensure all detail is captured. However, some surveys include both quantitative and qualitative questions in a structured or semi-structured format, in which case the researcher may simply write down key points from the participant’s response.

Telephone surveys

  • Similar to the face-to-face method, but without the researcher being able to see the participant’s facial or behavioural responses to the questions asked. This means the researcher may miss key cues that would help them ask further questions to clarify or extend participant responses, relying instead on vocal cues.

Online surveys

  • Open-ended questions are presented to participants in written format via email or an online survey tool, often alongside quantitative survey questions on the same topic.
  • Researchers may provide some contextualising information or key definitions to help ‘frame’ how participants interpret the qualitative survey questions, since participants cannot ask the researcher about them in real time.
  • Participants are asked to respond to questions in text ‘in some detail’ to explain their perspective or experience to researchers; this can result in a wide range of responses, from brief to detailed.
  • Researchers cannot always probe or clarify participant responses to online qualitative survey questions, which can leave some responses cryptic or vague to the researcher.
  • Online surveys can collect a greater number of responses in a set period of time than face-to-face and telephone approaches, so while the data may be less detailed, there is more of it overall to compensate.

Strengths

Qualitative surveys can help early in a study by identifying the issues, needs and experiences to be explored further in an interview or focus group.

Surveys can be amended and re-run based on responses, providing an evolving and responsive method of research.

Online surveys receive typed responses, reducing the transcription and interpretation work required of the researcher.

Online surveys can be delivered broadly across a wide population, with asynchronous delivery and response.

Limitations

Hand-written notes need to be transcribed for digital analysis (time-consuming) and kept physically for reference.

Distance (or online) communication is open to misinterpretations that cannot be corrected at the time.

Questions can be leading or misleading, eliciting answers that are not core to the research subject. Researchers must aim to write neutral questions that do not give away the researcher’s expectations.

Even with transcribed or typed responses, analysis can be long and detailed, though not as much as for an interview.

Surveys may be left incomplete if taken online, or if administered by research assistants who are not well trained in delivering the survey or structured interview.

Narrow sampling may skew the results of the survey. 

Example questions

Here are some example survey questions which are open-ended and require a long-form written response:

  • Tell us why you became a doctor.
  • What do you expect from this health service? 
  • How do you explain the low levels of financial investment in mental health services? (WHO, 2007) 

Example studies

  • Davey, L., Clarke, V., & Jenkinson, E. (2019). Living with alopecia areata: An online qualitative survey study. British Journal of Dermatology, 180, 1377–1389. https://doi.org/10.1111/bjd.17463
  • Richardson, J. (2004). What patients expect from complementary therapy: A qualitative study. American Journal of Public Health, 94(6), 1049–1053.
  • Saraceno, B., van Ommeren, M., Batniji, R., Cohen, A., Gureje, O., Mahoney, J., ... & Underhill, C. (2007). Barriers to improvement of mental health services in low-income and middle-income countries. The Lancet, 370(9593), 1164–1174. https://www.sciencedirect.com/science/article/pii/S014067360761263X

The WHO report below gives more detail on the Lancet article, including the actual survey questions used:

  • World Health Organization. (2007). Expert opinion on barriers and facilitating factors for the implementation of existing mental health knowledge in mental health services. Geneva: World Health Organization. https://apps.who.int/iris/handle/10665/44808
  • Green, J., & Thorogood, N. (2018). Qualitative methods for health research. London: SAGE.
  • Jansen, H. (2010). The logic of qualitative survey research and its position in the field of social research methods. Forum Qualitative Sozialforschung, 11(2). http://www.qualitative-research.net/index.php/fqs/article/view/1450/2946
  • Nielsen Norman Group. (2019). 28 tips for creating great qualitative surveys. https://www.nngroup.com/articles/qualitative-surveys/

How to use and assess qualitative research methods

Loraine Busetto, Wolfgang Wick & Christoph Gumbinger

Neurological Research and Practice, volume 2, article number 14 (2020). Published: 27 May 2020 (open access).


This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in "research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)" [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 , 8 , 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as "a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 , 10 , 11 , 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

[Figure 1: Iterative research process]

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as it impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider or researcher-centred bias often found in written surveys, which by nature can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed, or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice.

In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet), do they actively disagree with them, or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them.

Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig. 2.

[Figure 2: Possible combination of data collection methods]

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MaxQDA and Atlas.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].
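To make the mechanics of coding concrete, here is a minimal sketch of how coded segments might be represented and sorted once transcripts have been tagged. The sources, snippets and code labels are hypothetical, and real projects would normally rely on dedicated software such as NVivo, MaxQDA or Atlas.ti rather than hand-rolled scripts.

```python
# Minimal sketch: representing coded transcript segments and sorting them by code.
# All sources, snippets and code labels are hypothetical examples.
from collections import defaultdict

# Each segment links a snippet of raw data to its source and one or more codes.
segments = [
    {"source": "SOP", "text": "Tele-neurology consult is requested via the intranet form.",
     "codes": ["tele-neurology", "documentation"]},
    {"source": "ER observation", "text": "Resident phones the stroke neurologist directly.",
     "codes": ["tele-neurology", "workaround"]},
    {"source": "staff interview", "text": "The intranet form is too slow at night, so we just call.",
     "codes": ["tele-neurology", "workaround", "night shift"]},
    {"source": "patient interview", "text": "I waited a long time before anyone examined me.",
     "codes": ["delay"]},
]

# Coding makes raw data sortable: extract every segment tagged with a given code,
# across all data sources.
def segments_with_code(code):
    return [s for s in segments if code in s["codes"]]

for s in segments_with_code("tele-neurology"):
    print(f'[{s["source"]}] {s["text"]}')

# Grouping codes by source gives a first overview for synthesis and abstraction.
codes_by_source = defaultdict(set)
for s in segments:
    codes_by_source[s["source"]].update(s["codes"])
print(dict(codes_by_source))
```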

[Figure 3: From data collection to data analysis]

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods […] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 , 25 , 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design, the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4.

[Figure 4: Three common mixed methods designs]

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry.

In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study would be used to understand where and why these occurred, and how they could be improved.

In the exploratory sequential design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative study on which topics dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “ purposive sampling” , in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].
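As a rough illustration of the saturation logic described above, the sketch below casts iterative sampling as a loop that stops once a new batch of data yields no codes that are not already in the codebook. The batch size, the toy `collect_batch` and `extract_codes` helpers and the one-batch stopping criterion are simplifying assumptions for illustration, not a prescribed procedure; in practice, the judgement that saturation has been reached rests with the research team.

```python
import random

# Toy stand-ins for real fieldwork: each "interview" surfaces a few codes
# drawn from a finite (but, to the researcher, unknown) set of themes.
THEMES = ["transport", "cost", "trust", "language", "opening hours"]

def collect_batch(n):
    """Placeholder for conducting n interviews; returns one code set per interview."""
    return [set(random.sample(THEMES, k=random.randint(1, 3))) for _ in range(n)]

def extract_codes(batch):
    """Placeholder for coding the new transcripts; returns all codes in the batch."""
    return set().union(*batch)

def sample_until_saturation(batch_size=5):
    codebook, n_interviews = set(), 0
    while True:
        batch = collect_batch(batch_size)
        n_interviews += len(batch)
        new_codes = extract_codes(batch) - codebook
        if not new_codes:           # no relevant new information: saturation
            return codebook, n_interviews
        codebook |= new_codes       # keep sampling for the variants still missing

codes, n = sample_until_saturation()
print(f"Saturation after {n} interviews; codes found: {sorted(codes)}")
```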

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this are pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 , 32 , 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 , 38 , 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of a too large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but it is not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate those who recruited the study participants from those who collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].
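Where such an overlap score is nonetheless reported, it is usually a chance-corrected agreement measure. The sketch below shows one common choice, Cohen’s kappa, computed for two coders who each assigned exactly one code per segment; the coders, codes and segments are hypothetical, and, as noted above, the resulting number says little by itself about the quality of the analysis.

```python
# Hypothetical example: Cohen's kappa for two coders who each assigned
# exactly one code to the same ten transcript segments.
from collections import Counter

coder_a = ["delay", "delay", "workaround", "trust", "delay",
           "workaround", "trust", "delay", "workaround", "delay"]
coder_b = ["delay", "workaround", "workaround", "trust", "delay",
           "workaround", "delay", "delay", "workaround", "delay"]

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[c] / n * freq_b[c] / n           # agreement expected
                   for c in freq_a.keys() | freq_b.keys()) # by chance alone
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")
```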

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

Conclusions

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Availability of data and materials

Not applicable.

Abbreviations

EVT: Endovascular treatment

RCT: Randomised controlled trial

SOP: Standard operating procedure

SRQR: Standards for Reporting Qualitative Research

References

1. Philipsen, H., & Vernooij-Dassen, M. (2007). Kwalitatief onderzoek: nuttig, onmisbaar en uitdagend [Qualitative research: useful, indispensable and challenging]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 5–12). Houten: Bohn Stafleu van Loghum.

2. Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches. London: Sage.

3. Kelly, J., Dwyer, J., Willis, E., & Pekarsky, B. (2014). Travelling to the city for hospital care: Access factors in country Aboriginal patient journeys. Australian Journal of Rural Health, 22(3), 109–113.

4. Nilsen, P., Ståhl, C., Roback, K., & Cairney, P. (2013). Never the twain shall meet? A comparison of implementation science and policy implementation research. Implementation Science, 8(1), 1–12.

5. Howick, J., Chalmers, I., Glasziou, P., Greenhalgh, T., Heneghan, C., Liberati, A., Moschetti, I., Phillips, B., & Thornton, H. (2011). The 2011 Oxford CEBM levels of evidence (introductory document). Oxford Centre for Evidence-Based Medicine. https://www.cebm.net/2011/06/2011-oxford-cebm-levels-evidence-introductory-document/

6. Eakin, J. M. (2016). Educating critical qualitative health researchers in the land of the randomized controlled trial. Qualitative Inquiry, 22(2), 107–118.

7. May, A., & Mathijssen, J. (2015). Alternatieven voor RCT bij de evaluatie van effectiviteit van interventies!? Eindrapportage [Alternatives for RCTs in the evaluation of effectiveness of interventions!? Final report].

8. Berwick, D. M. (2008). The science of improvement. Journal of the American Medical Association, 299(10), 1182–1184.

9. Christ, T. W. (2014). Scientific-based research and randomized controlled trials, the “gold” standard? Alternative paradigms and mixed methodologies. Qualitative Inquiry, 20(1), 72–80.

10. Lamont, T., Barber, N., Jd, P., Fulop, N., Garfield-Birkbeck, S., Lilford, R., Mear, L., Raine, R., & Fitzpatrick, R. (2016). New approaches to evaluating complex health and care systems. BMJ, 352, i154.

11. Drabble, S. J., & O’Cathain, A. (2015). Moving from randomized controlled trials to mixed methods intervention evaluation. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford handbook of multimethod and mixed methods research inquiry (pp. 406–425). London: Oxford University Press.

12. Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science, 8, 117.

13. Hak, T. (2007). Waarnemingsmethoden in kwalitatief onderzoek [Observation methods in qualitative research]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 13–25). Houten: Bohn Stafleu van Loghum.

14. Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6(2), 36–40.

15. Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36, 717–732.

16. Yanow, D. (2000). Conducting interpretive policy analysis (Qualitative Research Methods Series, Vol. 47). Thousand Oaks: Sage.

17. Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22, 63–75.

18. van der Geest, S. (2006). Participeren in ziekte en zorg: meer over kwalitatief onderzoek [Participating in illness and care: more about qualitative research]. Huisarts en Wetenschap, 49(4), 283–287.

19. Hijmans, E., & Kuyper, M. (2007). Het halfopen interview als onderzoeksmethode [The half-open interview as research method]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 43–51). Houten: Bohn Stafleu van Loghum.

20. Jansen, H. (2007). Systematiek en toepassing van de kwalitatieve survey [Systematics and implementation of the qualitative survey]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 27–41). Houten: Bohn Stafleu van Loghum.

21. van Royen, P., & Peremans, L. (2007). Exploreren met focusgroepgesprekken: de ‘stem’ van de groep onder de loep [Exploring with focus group conversations: the “voice” of the group under the magnifying glass]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 53–64). Houten: Bohn Stafleu van Loghum.

22. Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41(5), 545–547.

23. Boeije, H. (2012). Analyseren in kwalitatief onderzoek: Denken en doen [Analysis in qualitative research: Thinking and doing]. Den Haag: Boom Lemma uitgevers.

24. Hunter, A., & Brewer, J. (2015). Designing multimethod research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford handbook of multimethod and mixed methods research inquiry (pp. 185–205). London: Oxford University Press.

25. Archibald, M. M., Radil, A. I., Zhang, X., & Hanson, W. E. (2015). Current mixed methods practices in qualitative research: A content analysis of leading journals. International Journal of Qualitative Methods, 14(2), 5–33.

26. Creswell, J. W., & Plano Clark, V. L. (2011). Choosing a mixed methods design. In Designing and conducting mixed methods research. Thousand Oaks: SAGE Publications.

27. Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320(7226), 50–52.

28. O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89(9), 1245–1251.

29. Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality and Quantity, 52(4), 1893–1907.

30. Moser, A., & Korstjens, I. (2018). Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. European Journal of General Practice, 24(1), 9–18.

31. Marlett, N., Shklarov, S., Marshall, D., Santana, M. J., & Wasylak, T. (2015). Building new roles and relationships in research: A model of patient engagement research. Quality of Life Research, 24(5), 1057–1067.

32. Demian, M. N., Lam, N. N., Mac-Way, F., Sapir-Pichhadze, R., & Fernandez, N. (2017). Opportunities for engaging patients in kidney research. Canadian Journal of Kidney Health and Disease, 4, 2054358117703070.

33. Noyes, J., McLaughlin, L., Morgan, K., Roberts, A., Stephens, M., Bourne, J., Houlston, M., Houlston, J., Thomas, S., Rhys, R. G., et al. (2019). Designing a co-productive study to overcome known methodological challenges in organ donation research with bereaved family members. Health Expectations, 22(4), 824–835.

34. Piil, K., Jarden, M., & Pii, K. H. (2019). Research agenda for life-threatening cancer. European Journal of Cancer Care, 28(1), e12935.

35. Hofmann, D., Ibrahim, F., Rose, D., Scott, D. L., Cope, A., Wykes, T., & Lempp, H. (2015). Expectations of new treatment in rheumatoid arthritis: Developing a patient-generated questionnaire. Health Expectations, 18(5), 995–1008.

36. Jun, M., Manns, B., Laupacis, A., Manns, L., Rehal, B., Crowe, S., & Hemmelgarn, B. R. (2015). Assessing the extent to which current clinical research is consistent with patient priorities: A scoping review using a case study in patients on or nearing dialysis. Canadian Journal of Kidney Health and Disease, 2, 35.

37. Baker, S. E., & Edwards, R. (2012). How many qualitative interviews is enough? National Centre for Research Methods Review Paper. http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf

38. Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18(2), 179–183.

39. Sim, J., Saunders, B., Waterfield, J., & Kingstone, T. (2018). Can sample size in qualitative research be determined a priori? International Journal of Social Research Methodology, 21(5), 619–634.

Funding

No external funding.

Author information

Authors and affiliations

Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany: Loraine Busetto, Wolfgang Wick & Christoph Gumbinger

Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany: Wolfgang Wick

Contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Corresponding author

Correspondence to Loraine Busetto.

Ethics declarations

Competing interests

The authors declare no competing interests.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Busetto, L., Wick, W., & Gumbinger, C. (2020). How to use and assess qualitative research methods. Neurological Research and Practice, 2, 14. https://doi.org/10.1186/s42466-020-00059-z

Received: 30 January 2020. Accepted: 22 April 2020. Published: 27 May 2020.


Keywords: Qualitative research, Mixed methods, Quality assessment



Structured Questionnaires

Adam Ka Lok Cheung

Synonyms

Questionnaire, structured

Definition

A structured questionnaire is a document consisting of a set of standardized questions with a fixed scheme, which specifies the exact wording and order of the questions, used to gather information from respondents.

Description

The structured questionnaire is the primary measuring instrument in survey research, and its use is closely associated with quantitative analysis. The use of structured questionnaires in social research was pioneered by Francis Galton, and they are now very common in the collection of data in quality of life research. A typical example of a structured questionnaire is the census questionnaire, which collects demographic information from individuals. Structured questionnaires are also often used as assessment tools for psychological and psychiatric tests.

From Population Census to mini-surveys, structured questionnaires can appear in many different forms and are used in different types...



Author information

Adam Ka Lok Cheung, Asia Research Institute, National University of Singapore, Bukit Timah Campus, Tower Block #10-01, Bukit Timah Road, 259770, Singapore

Editor information

Alex C. Michalos, University of Northern British Columbia, Prince George, BC, Canada


Copyright information

© 2014 Springer Science+Business Media Dordrecht

About this entry

Cite this entry.

Cheung, A.K.L. (2014). Structured Questionnaires. In: Michalos, A.C. (eds) Encyclopedia of Quality of Life and Well-Being Research. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-0753-5_2888



Your ultimate guide to qualitative research (with methods and examples)

You may already be using qualitative research and want to check your understanding, or you may be starting from the beginning. Learn about qualitative research methods and how you can best use them for maximum effect.

What is qualitative research?

Qualitative research is a research method that collects non-numerical data. Typically, it goes beyond the information that quantitative research provides (which we will cover below) because it is used to gain an understanding of underlying reasons, opinions, and motivations.

Qualitative research methods focus on the thoughts, feelings, reasons, motivations, and values of a participant, to understand why people act the way they do.

In this way, qualitative research can be described as naturalistic research, looking at naturally-occurring social events within natural settings. So, qualitative researchers would describe their part in social research as the ‘vehicle’ for collecting the qualitative research data.

Qualitative researchers work with primary and secondary sources where data is represented in non-numerical form. This can include collecting qualitative research data types like quotes, symbols, images, and written testimonials.

These data types give qualitative researchers subjective information. While these aren’t facts in themselves, conclusions can be drawn from qualitative data that provide valuable context.

Because of this, qualitative research is typically viewed as explanatory in nature and is often used in social research, as this gives a window into the behavior and actions of people.

It can be a good research approach for health services research or clinical research projects.


Quantitative vs qualitative research

In order to compare qualitative and quantitative research methods, let’s explore what quantitative research is first, before exploring how it differs from qualitative research.

Quantitative research

Quantitative research is the research method of collecting quantitative research data – data that can be converted into numbers or numerical data, which can be easily quantified, compared, and analyzed .

Quantitative research methods deal with primary and secondary sources where data is represented in numerical form. This can include closed-question poll results, statistics, and census information or demographic data.

Quantitative research data tends to be used when researchers are interested in understanding a particular moment in time and examining data sets over time to find trends and patterns.

The difference between quantitative and qualitative research methodology

While qualitative research deals with data that supplies non-numerical information, quantitative research focuses on numerical data.

In general, if you’re interested in measuring something or testing a hypothesis, use quantitative research methods. If you want to explore ideas, thoughts, and meanings, use qualitative research methods.

While qualitative research helps you to properly define, promote and sell your products, don’t rely on qualitative research methods alone because qualitative findings can’t always be reliably repeated. Qualitative research is directional, not empirical.

The best analysis combines empirical data and human experience (quantitative research and qualitative research) to tell the full story and gain better, deeper insights, quickly.

Researchers who use one method without the other will find that it leaves them with missing answers.

For example, if a retail company wants to understand whether a new product line of shoes will perform well in the target market:

  • Qualitative research methods could be used with a sample of target customers, which would provide subjective reasons why they’d be likely to purchase or not purchase the shoes, while
  • Quantitative research methods into the historical customer sales information on shoe-related products would provide insights into the sales performance, and likely future performance of the new product range.

Approaches to qualitative research

There are five approaches to qualitative research methods:

  • Grounded theory: Grounded theory is where qualitative researchers arrive at a stronger hypothesis through induction, throughout the process of collecting qualitative research data and forming connections. After an initial question to get started, qualitative researchers delve into information that is grouped into ideas or codes, which grow and develop into larger categories as the qualitative research goes on. At the end of the qualitative research, the researcher may have a completely different hypothesis from the initial question, based on evidence and inquiry.
  • Ethnographic research : Ethnographic research is where researchers embed themselves into the environment of the participant or group in order to understand the culture and context of activities and behavior. This is dependent on the involvement of the researcher, and can be subject to researcher interpretation bias and participant observer bias . However, it remains a great way to allow researchers to experience a different ‘world’.
  • Action research: With the action research process, both researchers and participants work together to make a change. This can be through taking action, researching and reflecting on the outcomes. Through collaboration, the collective comes to a result, though the way both groups interact and how they affect each other gives insights into their critical thinking skills.
  • Phenomenological research: Researchers seek to understand the meaning of an event or behavioral phenomenon by describing and interpreting participants’ life experiences. This qualitative research process understands that people create their own structured reality (‘the social construction of reality’), based on their past experiences. So, by viewing the way people intentionally live their lives, we’re able to see the experiential meaning behind why they live as they do.
  • Narrative research: Narrative research, or narrative inquiry, is where researchers examine the way stories are told by participants, and how they explain their experiences, as a way of explaining the meaning behind their life choices and events. This qualitative research can arise from using journals, conversational stories, autobiographies or letters, as a few narrative research examples. The narrative is subjective to the participant, so we’re able to understand their views from what they’ve documented/spoken.


Qualitative research methods can use structured research instruments for data collection, like:

Surveys for individual views

A survey is a simple-to-create and easy-to-distribute qualitative research method, which helps gather information from large groups of participants quickly. Traditionally paper-based, surveys can now be run online, so costs can stay quite low.

Qualitative research questions tend to be open questions that ask for more information and provide a text box to allow for unconstrained comments.

Examples include:

  • Asking participants to keep a written or a video diary for a period of time to document their feelings and thoughts
  • In-Home-Usage tests: Buyers use your product for a period of time and report their experience

Surveys for group consensus (Delphi survey)

A Delphi survey may be used as a way to bring together participants and gain a consensus view over several rounds of questions. It differs from traditional surveys, where results go to the researcher only. Instead, results go to participants as well, so they can reflect and consider all responses before another round of questions is submitted.

This can be useful as it helps researchers see the variance among the group of participants and the process by which consensus was reached; a minimal sketch of this feedback loop follows the examples below.

  • Asking participants to act as a mock jury for a trial and revealing parts of the case over several rounds to see how opinions change. At the end, the mock jury must make a unanimous decision about the defendant on trial.
  • Asking participants to comment on the versions of a product being developed , as the changes are made and their feedback is taken onboard. At the end, participants must decide whether the product is ready to launch .
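
To make the round-trip concrete, here is a minimal Python sketch of the feedback loop behind a classic estimation-style Delphi round. The numbers and the consensus rule are invented for illustration; a qualitative Delphi would circulate written responses rather than summary statistics.

```python
import statistics

def delphi_feedback(estimates: list[float]) -> tuple[float, float]:
    """Summarize a round: the median and spread are shared back to the group."""
    return statistics.median(estimates), statistics.stdev(estimates)

# Simulated round-one answers to a numeric question (invented data).
round_one = [3.0, 7.0, 5.0, 9.0, 4.0]
median, spread = delphi_feedback(round_one)
print(f"Round 1 feedback: median={median}, spread={spread:.1f}")

# Participants see this feedback, revise their answers, and another round
# begins; rounds repeat until the spread narrows enough to call it consensus.
```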

Semi-structured interviews

Interviews are a great way to connect with participants, though they require time from the research team to set up and conduct, especially if they’re done face-to-face.

Researchers may also have issues connecting with participants in different geographical regions. The researcher uses a set of predefined open-ended questions, though more ad-hoc questions can be asked depending on participant answers.

  • Conducting a phone interview with participants to run through their feedback on a product . During the conversation, researchers can go ‘off-script’ and ask more probing questions for clarification or build on the insights.

Focus groups

Participants are brought together into a group, where a particular topic is discussed. It is researcher-led and usually occurs in-person in a mutually accessible location, to allow for easy communication between participants in focus groups.

In focus groups , the researcher uses a set of predefined open-ended questions, though more ad-hoc questions can be asked depending on participant answers.

  • Asking participants to do UX tests, which are interface usability tests to show how easily users can complete certain tasks

Direct observation

This is a form of ethnographic research where researchers will observe participants’ behavior in a naturalistic environment. This can be great for understanding the actions in the culture and context of a participant’s setting.

This qualitative research method is prone to researcher bias as it is the researcher that must interpret the actions and reactions of participants. Their findings can be impacted by their own beliefs, values, and inferences.

  • Embedding yourself in the location of your buyers to understand how a product would perform against the values and norms of that society

Qualitative data types and category types

Qualitative research methods often deliver information in the following qualitative research data types:

  • Quotes
  • Symbols
  • Images
  • Written testimonials

Through contextual analysis of the information, researchers can assign participants to category types:

  • Social class
  • Political alignment
  • Most likely to purchase a product
  • Preferred training or learning style

Advantages of qualitative research

  • Useful for complex situations: Qualitative research on its own is great when dealing with complex issues, where quantitative research alone may not be enough; providing background context with quantitative facts can still give a richer and wider understanding of the topic.
  • A window into the ‘why’: Qualitative research can give you a window into the deeper meaning behind a participant’s answer. It can help you uncover the larger ‘why’ that can’t always be seen by analyzing numerical data.
  • Can help improve customer experiences: In service industries where customers are crucial, like in private health services, gaining information about a customer’s experience through health research studies can indicate areas where services can be improved.

Disadvantages of qualitative research

  • You need to ask the right question: Doing qualitative research may require you to consider what the right question is to uncover the underlying thinking behind a behavior. This may need probing questions to go further, which may suit a focus group or face-to-face interview setting better.
  • Results are interpreted: As qualitative research data is written, spoken, and often nuanced, interpreting the data results can be difficult as they come in non-numerical formats. This might make it harder to know if you can accept or reject your hypothesis.
  • More bias: There are lower levels of control over qualitative research methods, as they can be subject to biases like confirmation bias, researcher bias, and observation bias. This can have a knock-on effect on the validity and truthfulness of the qualitative research data results.

How to use qualitative research to your business’s advantage

Qualitative methods help improve your products and marketing in many different ways:

  • Understand the emotional connections to your brand
  • Identify obstacles to purchase
  • Uncover doubts and confusion about your messaging
  • Find missing product features
  • Improve the usability of your website, app, or chatbot experience
  • Learn about how consumers talk about your product
  • See how buyers compare your brand to others in the competitive set
  • Learn how an organization’s employees evaluate and select vendors

6 steps to conducting good qualitative research

Businesses can benefit from qualitative research by using it to understand the meaning behind data types. There are several steps to this:

  • Define your problem or interest area: What do you observe is happening and is it frequent? Identify the data type/s you’re observing.
  • Create a hypothesis: Ask yourself what could be the causes for the situation with those qualitative research data types.
  • Plan your qualitative research: Use structured qualitative research instruments like surveys, focus groups, or interviews to ask questions that test your hypothesis.
  • Data Collection: Collect qualitative research data and understand what your data types are telling you. Once data is collected on different types over long time periods, you can analyze it and give insights into changing attitudes and language patterns.
  • Data analysis: Does your information support your hypothesis? (You may need to redo the qualitative research with other variables to see if the results improve)
  • Effectively present the qualitative research data: Communicate the results in a clear and concise way to help other people understand the findings.

Qualitative data analysis

Evaluating qualitative research can be tough when there are several analytics platforms to manage and lots of subjective data sources to compare.

Qualtrics provides a number of qualitative research analysis tools, like Text iQ. Powered by Qualtrics iQ, it uses machine learning and natural language processing to help you discover patterns and trends in text.

This also provides you with:

  • Sentiment analysis — a technique to help identify the underlying sentiment (positive, neutral, or negative) in qualitative research text responses
  • Topic detection/categorisation — the grouping or bucketing of similar themes that are relevant to the business and the industry (e.g. ‘Food quality’, ‘Staff efficiency’ or ‘Product availability’)
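
The mechanics behind these two techniques can be illustrated with a toy example. The Python sketch below is a minimal, keyword-based illustration, not the method Qualtrics actually uses; the lexicons and category names are invented for demonstration, and production tools rely on trained machine-learning models rather than word lists.

```python
# Toy sentiment scoring and topic tagging for survey text.
# The keyword lexicons below are invented for demonstration only.
POSITIVE = {"great", "friendly", "fast", "fresh", "helpful"}
NEGATIVE = {"slow", "rude", "stale", "broken", "unavailable"}
TOPICS = {
    "Food quality": {"fresh", "stale", "taste", "menu"},
    "Staff efficiency": {"friendly", "rude", "slow", "fast", "helpful"},
    "Product availability": {"stock", "unavailable", "sold", "shelf"},
}

def analyze(response: str) -> dict:
    words = set(response.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    topics = [name for name, keywords in TOPICS.items() if words & keywords]
    return {"sentiment": sentiment, "topics": topics}

print(analyze("The staff were friendly but the bread was stale"))
# {'sentiment': 'neutral', 'topics': ['Food quality', 'Staff efficiency']}
```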

How Qualtrics products can enhance & simplify the qualitative research process

Even in today’s data-obsessed marketplace, qualitative data is valuable – maybe even more so because it helps you establish an authentic human connection to your customers. If qualitative research doesn’t play a role to inform your product and marketing strategy, your decisions aren’t as effective as they could be.

The Qualtrics XM system gives you an all-in-one, integrated solution to help you all the way through conducting qualitative research. From survey creation and data collection to textual analysis and data reporting, it can help all your internal teams gain insights from your subjective and categorical data.

Qualitative methods are catered for through templates or advanced survey designs. While you can manually collect data and conduct data analysis in a spreadsheet program, this solution helps you automate the process of qualitative research, saving you time and administration work.

Using computational techniques helps you avoid human errors, and participant results are incorporated into the analysis in real time as they come in.

Our key tools, Text IQ™ and Driver IQ™ make analyzing subjective and categorical data easy and simple. Choose to highlight key findings based on topic, sentiment, or frequency. The choice is yours.



9 Best Examples of Research Instruments in Qualitative Research Explained

Introduction

Qualitative research is a valuable approach that allows researchers to explore complex phenomena and gain in-depth insights into the experiences and perspectives of individuals. In order to conduct qualitative research effectively, researchers often utilize various research methodologies and instruments. These methodologies and instruments serve as tools to collect and analyze data, enabling researchers to uncover rich and nuanced information.

Qualitative research instruments are tools used to gather non-numerical data, providing researchers with detailed insights into participants' experiences, emotions, and social contexts.

In this article, we will delve into the world of qualitative research instruments, specifically focusing on research instrument examples. We will explore the different types of qualitative research instruments, provide specific examples, and discuss the advantages and limitations of using these instruments in qualitative research. By the end of this article, you will have a comprehensive understanding of the role and significance of research instruments in qualitative research.

Goals of Research Instruments in Qualitative Research

Qualitative research instruments are tools that researchers use to collect and analyze data in qualitative research studies. These instruments help researchers gather rich and detailed information about a particular phenomenon or topic.

One of the main goals of qualitative research is to understand the subjective experiences and perspectives of individuals. To achieve this, researchers need to use instruments that allow for in-depth exploration and interpretation of data. Qualitative research instruments can take various forms, including interviews, questionnaires, observations, and focus groups. Each instrument has its own strengths and limitations, and researchers need to carefully select the most appropriate instrument for their study objectives.

Exploring qualitative research instruments involves understanding the characteristics and features of each instrument, as well as considering the research context and the specific research questions being addressed. Researchers also need to consider the ethical implications of using qualitative research instruments, such as ensuring informed consent and maintaining confidentiality and anonymity of participants.

Examples of Qualitative Research Instruments

Qualitative research instruments are tools used to collect data and gather information in qualitative research studies. These instruments help researchers explore and understand complex social phenomena in depth. There are several types of qualitative research instruments that can be used depending on the research objectives and the nature of the study.

Interviews

Interviews are a fundamental qualitative research instrument that allows researchers to gather in-depth and personalized information directly from participants through structured, semi-structured, or unstructured formats.

Interviews are one of the most commonly used qualitative research instruments. They involve direct communication between the researcher and the participant, allowing for in-depth exploration of the participant’s experiences, perspectives, and opinions. Interviews can be structured, semi-structured, or unstructured , depending on the level of flexibility in the questioning process. They involve researchers asking open-ended questions to participants to gather in-depth information and insights. Interviews can be conducted face-to-face, over the phone, or through video conferencing.

Focus Groups

Focus groups are a qualitative research instrument that involves guided group discussions, enabling researchers to collect diverse perspectives and explore group dynamics on a particular topic.

Focus groups are another example of a qualitative research instrument, involving a group discussion led by a researcher or moderator. Participants in a focus group share their thoughts, ideas, and experiences on a specific topic. This instrument allows for the exploration of group dynamics and the interaction between participants. It also allows researchers to gather multiple perspectives and generate rich qualitative data.

Observations

Observations are a powerful qualitative research instrument that involves systematic and careful observation of participants in their natural settings. This type of qualitative research instrument allows researchers to gather data on behavior, interactions, and social processes. Observations can be participant observations, where the researcher actively participates in the setting, or non-participant observations, where the researcher remains an observer.

Document Analysis

Document analysis is a qualitative research instrument that involves the examination, analysis, and interpretation of written or recorded materials such as documents, texts, and audio/video recordings. Researchers analyze documents to gain insights into social, cultural, or historical contexts, as well as to understand the perspectives and meanings embedded in the documents.

Visual Methods

Visual methods, such as photography, video recording, or drawings, can be used as qualitative research instruments. These methods allow participants to express their experiences and perspectives visually, providing rich and nuanced data. Visual methods can be particularly useful in studying topics related to art, culture, or visual communication.

Diaries or Journals

Diaries or journals are qualitative research instruments that allow participants to record their thoughts, experiences, and reflections over time, providing researchers with rich, longitudinal data.

Diaries or journals can be used as qualitative research instruments to collect data on participants’ thoughts, feelings, and experiences over a period of time. Participants record their daily activities, reflections, and emotions, providing valuable insights into their lived experiences.

Surveys

While surveys are commonly associated with quantitative research, they can also be used as qualitative research instruments. Qualitative surveys typically include open-ended questions that allow participants to provide detailed responses. Surveys can be administered online, through interviews, or in written form.

Case Studies

Case studies are in-depth investigations of a particular individual, group, or phenomenon. They involve collecting and analyzing qualitative data from various sources such as interviews, observations, and document analysis. Case studies provide rich and detailed insights into specific contexts or situations.

Ethnography

Ethnography is a qualitative research instrument that involves immersing researchers in a particular social or cultural group to observe and understand their behaviors, beliefs, and practices. Ethnographic research often includes participant observation, interviews, and document analysis.

These are just a few examples of qualitative research instruments. Researchers can choose the most appropriate data collection method or combination of methods based on their research objectives, the nature of the research question, and the available resources.

Advantages of Using Qualitative Research Instruments

Gathering In-Depth and Detailed Information

Qualitative research instruments offer several advantages that make them valuable tools in the research process. Firstly, qualitative research instruments allow researchers to gather in-depth and detailed information. Unlike quantitative research instruments that focus on numerical data, qualitative instruments provide rich and descriptive data about participants’ feelings, opinions, and experiences. This depth of information allows researchers to gain a comprehensive understanding of the research topic .

Flexibility and Adaptability in Qualitative Research

Another advantage of qualitative research instruments is their flexibility. Researchers can adapt their methods and questions during data collection to respond to emerging insights. This flexibility allows for a more dynamic and responsive research process, enabling researchers to explore new avenues and uncover unexpected findings.

Capturing Data in Natural Settings

Qualitative research instruments also offer the advantage of capturing data in natural settings. Unlike controlled laboratory settings often used in quantitative research, qualitative research takes place in real-world contexts. This natural setting allows researchers to observe participants’ behaviors and interactions in their natural environment, providing a more authentic and realistic representation of their experiences.

Promoting Participant Engagement and Collaboration

Furthermore, qualitative research instruments promote participant engagement and collaboration. By using methods such as interviews and focus groups, researchers can actively involve participants in the research process. This engagement fosters a sense of ownership and empowerment among participants, leading to more meaningful and insightful data.

Exploring Complex Issues Through Qualitative Research

Lastly, qualitative research instruments allow for the exploration of complex issues. Qualitative research is particularly useful when studying complex phenomena that cannot be easily quantified or measured. It allows researchers to delve into the underlying meanings, motivations, and social dynamics that shape individuals’ behaviors and experiences.

Limitations of Qualitative Research Instruments

Qualitative research instruments have several limitations that researchers need to consider when conducting their studies. In this section, we will delve into the limitations of qualitative research instruments as compared to quantitative research.

Time-Consuming Nature of Qualitative Research

One of the main drawbacks of qualitative research is that the process is time-consuming. Unlike quantitative research, which can collect data from a large sample size in a relatively short period of time, qualitative research requires in-depth interviews, observations, and analysis, which can take a significant amount of time.

Subjectivity and Potential Bias in Qualitative Research

Another limitation of qualitative research instruments is that the interpretations are subjective. Since qualitative research focuses on understanding the meaning and context of phenomena, the interpretations of the data can vary depending on the researcher’s perspective and biases. This subjectivity can introduce potential bias and affect the reliability and validity of the findings.

Complexity of Data Analysis

Additionally, qualitative research instruments often involve complex data analysis. Unlike quantitative research, which can use statistical methods to analyze data, qualitative research requires researchers to analyze textual or visual data, which can be time-consuming and challenging. The analysis process involves coding, categorizing, and interpreting the data, which requires expertise and careful attention to detail.

Challenges in Maintaining Anonymity and Privacy

Furthermore, qualitative research instruments may face challenges in maintaining anonymity. In some cases, researchers may need to collect sensitive or personal information from participants, which can raise ethical concerns . Ensuring the privacy and confidentiality of participants’ data can be challenging, and researchers need to take appropriate measures to protect the participants’ identities and maintain their trust.

Limited Generalizability of Qualitative Research Findings

Another limitation of qualitative research instruments is the limited generalizability of the findings. Qualitative research often focuses on a specific context or a small sample size, which may limit the generalizability of the findings to a larger population. While qualitative research provides rich and detailed insights into a particular phenomenon, it may not be representative of the broader population or applicable to other settings.

Difficulty in Replicating Qualitative Research Findings

Lastly, replicating findings in qualitative research can be difficult. Since qualitative research often involves in-depth exploration of a specific phenomenon, replicating the exact conditions and context of the original study can be challenging. This can make it difficult for other researchers to validate or replicate the findings, which is an essential aspect of scientific research.

Despite these limitations, qualitative research instruments offer valuable insights and understanding of complex phenomena. By acknowledging and addressing these limitations, researchers can enhance the rigor and validity of their qualitative research studies.

In conclusion, qualitative research instruments are powerful tools that enable researchers to explore and uncover the complexities of human experiences. By utilizing a range of instruments and considering their advantages and limitations, researchers can enhance the rigor and depth of their qualitative research studies.



Structured Interview | Definition, Guide & Examples

Published on January 27, 2022 by Tegan George and Julia Merkus. Revised on June 22, 2023.

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. It is one of four types of interviews .

In research, structured interviews are often quantitative in nature. They can also be used in qualitative research if the questions are open-ended, but this is less common.

While structured interviews are often associated with job interviews, they are also common in marketing, social science, survey methodology, and other research fields.

  • Semi-structured interviews : A few questions are predetermined, whereas the other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Table of contents

  • What is a structured interview?
  • When to use a structured interview
  • Advantages of structured interviews
  • Disadvantages of structured interviews
  • Structured interview questions
  • How to conduct a structured interview
  • How to analyze a structured interview
  • Presenting your results
  • Other interesting articles
  • Frequently asked questions about structured interviews

What is a structured interview?

Structured interviews are the most systematized type of interview. In contrast to semi-structured or unstructured interviews, the interviewer uses predetermined questions in a set order.

Structured interviews are often closed-ended. They can be dichotomous, which means asking participants to answer “yes” or “no” to each question, or multiple-choice. While open-ended structured interviews do exist, they are less common.

Asking set questions in a set order allows you to easily compare responses between participants in a uniform context. This can help you see patterns and highlight areas for further research, and it can be a useful explanatory or exploratory research tool.

When to use a structured interview

Structured interviews are best used when:

  • You already have a very clear understanding of your topic, so you possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

A structured interview is straightforward to conduct and analyze. Asking the same set of questions mitigates potential biases and leads to fewer ambiguities in analysis. It is an undertaking you can likely handle as an individual, provided you remain organized.

Differences between different types of interviews

Make sure to choose the type of interview that suits your research best. This table shows the most important differences between the four types.

The four types differ on four features: fixed questions, fixed order of questions, fixed number of questions, and the option to ask additional questions. Structured interviews fix the questions, their order, and their number, with no option to ask additional questions. Semi-structured interviews fix only a few questions and leave room for additional ones. Unstructured interviews fix none of the questions, so the interview consists almost entirely of additional questions. Focus group interviews can be run in a structured or semi-structured way, with the questions presented to a group.

Advantages of structured interviews

  • Reduced bias
  • Increased credibility, reliability and validity
  • Simple, cost-effective and efficient

Disadvantages of structured interviews

  • Formal in nature
  • Limited flexibility
  • Limited scope

Structured interview questions

It can be difficult to write structured interview questions that approximate exactly what you are seeking to measure. Here are a few tips for writing questions that contribute to high internal validity:

  • Define exactly what you want to discover prior to drafting your questions. This will help you write questions that really zero in on participant responses.
  • Avoid jargon, compound sentences, and complicated constructions.
  • Be as clear and concise as possible, so that participants can answer your question immediately.
For example, a study on gym habits might ask:

  • Do you think that employers should provide free gym memberships?
  • Did any of your previous employers provide free memberships?
  • Does your current employer provide a free membership?
  • How many times per week do you go to the gym? a) 1 time; b) 2 times; c) 3 times; d) 4 or more times
  • Do you enjoy going to the gym?

How to conduct a structured interview

Structured interviews are among the most straightforward research methods to conduct and analyze. Once you’ve determined that they’re the right fit for your research topic, you can proceed with the following steps.

Step 1: Set your goals and objectives

Start with brainstorming some guiding questions to help you conceptualize your research question, such as:

  • What are you trying to learn or achieve from a structured interview?
  • Why are you choosing a structured interview as opposed to a different type of interview, or another research method?

If you have satisfying reasoning for proceeding with a structured interview, you can move on to designing your questions.

Step 2: Design your questions

Pay special attention to the order and wording of your structured interview questions . Remember that in a structured interview they must remain the same. Stick to closed-ended or very simple open-ended questions.
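
Since the wording and order must stay identical for every participant, it can help to treat the question set as data rather than as notes. Below is a minimal Python sketch of this idea, reusing the gym questions above as assumed wording; it is an illustration, not part of any published protocol.

```python
# Fixed, ordered question set for a structured interview.
# Question wording and options are taken from the examples above.
QUESTIONS = [
    {"text": "Do you enjoy going to the gym?",
     "options": ["Yes", "No"]},
    {"text": "How many times per week do you go to the gym?",
     "options": ["1 time", "2 times", "3 times", "4 or more times"]},
]

def run_interview() -> list[str]:
    """Ask every question in the same fixed order; record one answer each."""
    answers = []
    for number, q in enumerate(QUESTIONS, start=1):
        print(f"Q{number}. {q['text']} ({' / '.join(q['options'])})")
        answers.append(input("> "))
    return answers
```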

Step 3: Assemble your participants

Depending on your topic, there are a few sampling methods you can use, such as:

  • Voluntary response sampling : For example, posting a flyer on campus and finding participants based on responses
  • Convenience sampling of those who are most readily accessible to you, such as fellow students at your university
  • Stratified sampling of a particular age, race, ethnicity, gender identity, or other characteristic of interest to you
  • Judgment sampling of a specific set of participants that you already know you want to include

Step 4: Decide on your medium

Determine whether you will be conducting your interviews in person or whether your interview will take pen-and-paper format. If conducted live, you need to decide if you prefer to talk with participants in person, over the phone, or via video conferencing.

Step 5: Conduct your interviews

As you conduct your interviews, be very careful that all conditions remain as constant as possible.

  • Ask your questions in the same order, and try to moderate your tone of voice and any responses to participants as much as you can.
  • Pay special attention to your body language (e.g., nodding, raising eyebrows), as this can bias responses.

How to analyze a structured interview

After you’re finished conducting your interviews, it’s time to analyze your results.

  • Assign each of your participants a number or pseudonym for organizational purposes.
  • Transcribe the recordings manually or with the help of transcription software.
  • Conduct a content or thematic analysis to look for categories or patterns of responses. In most cases, it’s also possible to conduct a statistical analysis to test your hypotheses .

Transcribing interviews

If you have audio-recorded your interviews, you will likely have to transcribe them prior to conducting your analysis. In some cases, your supervisor might ask you to add the transcriptions in the appendix of your paper.

First, you will have to decide whether to conduct verbatim transcription or intelligent verbatim transcription. Do pauses, laughter, or filler words like “umm” or “like” affect your analysis and research conclusions?

  • If so, conduct verbatim transcription and include them.
  • If not, conduct intelligent verbatim transcription, which excludes fillers and fixes any grammar issues, and is often easier to analyze.

The transcription process is a great opportunity for you to cleanse your data as well, spotting and resolving any inconsistencies or errors that come up as you listen.
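
Some of the intelligent verbatim cleanup can be automated. The sketch below strips a handful of common fillers with a regular expression; the filler list is an assumption, and naive matching will also remove legitimate uses of words like “like”, so treat it as a starting point rather than a finished pipeline.

```python
import re

# Illustrative filler list for intelligent verbatim transcription.
# Naive matching also strips legitimate uses of "like"; adjust per transcript.
FILLERS = re.compile(r",?\s*\b(?:um+|uh+|erm+|you know|like)\b,?", re.IGNORECASE)

def clean_transcript(text: str) -> str:
    """Remove filler words, then tidy the leftover whitespace."""
    text = FILLERS.sub(" ", text)
    return re.sub(r"\s{2,}", " ", text).strip()

print(clean_transcript("Um, I think, uh, the staff were, like, really helpful"))
# -> "I think the staff were really helpful"
```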

Coding and analyzing structured interviews

After transcribing, it’s time to conduct your thematic or content analysis . This often involves “coding” words, patterns, or themes, separating them into categories for more robust analysis.

Due to the closed-ended nature of many structured interviews, you will most likely be conducting content analysis, rather than thematic analysis.

  • You quantify the categories you chose in the coding stage by counting the occurrence of the words, phrases, subjects or concepts you selected.
  • After coding, you can organize and summarize the data using descriptive statistics .
  • Next, inferential statistics allows you to come to conclusions about your hypotheses and make predictions for future research. 
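
As a concrete illustration of the counting and summarizing steps, the short Python sketch below tallies hypothetical codes assigned during the coding stage; the code labels and data are invented for the example.

```python
from collections import Counter

# Hypothetical codes assigned to each participant's response during coding.
coded_responses = [
    ["cost", "access"],
    ["cost"],
    ["quality", "access"],
    ["cost", "quality"],
]

# Quantify each category by counting how often its code occurs.
counts = Counter(code for response in coded_responses for code in response)
total = sum(counts.values())

for code, n in counts.most_common():
    print(f"{code}: {n} occurrences ({n / total:.0%} of all codes)")
# cost: 3 occurrences (43% of all codes)
# access: 2 occurrences (29% of all codes)
# quality: 2 occurrences (29% of all codes)
```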

When conducting content analysis, you can take an inductive or a deductive approach. With an inductive approach, you allow the data to determine your themes. A deductive approach is the opposite, and involves investigating whether your data confirm preconceived themes or ideas.

Content analysis has a systematic procedure that can easily be replicated , yielding high reliability to your results. However, keep in mind that while this approach reduces bias, it doesn’t eliminate it. Be vigilant about remaining objective here, even if your analysis does not confirm your hypotheses .

Presenting your results

After your data analysis, the next step is to combine your findings into a research paper.

  • Your methodology section describes how you collected the data (in this case, describing your structured interview process) and explains how you justify or conceptualize your analysis.
  • Your discussion and results sections usually address each of your coded categories, describing each in turn, as well as how often they occurred.

If you conducted inferential statistics in addition to descriptive statistics, you would generally report the test statistic, p-value, and effect size in your results section. These values explain whether your results justify rejecting your null hypothesis and whether the result is practically significant.

You can then conclude with the main takeaways and avenues for further research.

Example of interview methodology for a research paper

Let’s say you are interested in healthcare on your campus. You attend a large public institution with a lot of international students, and you think there may be a difference in perceptions based on country of origin.

Specifically, you hypothesize that students coming from countries with single-payer or socialized healthcare will find US options less satisfying.

There is a large body of research available on this topic, so you decide to conduct structured interviews of your peers to see if there’s a difference between international students and local students.

You are a member of a large campus club that brings together international students and local students, and you send a message to the club to ask for volunteers.

Here are some questions you could ask:

  • Do you find healthcare options on campus to be: excellent; good; fair; average; poor?
  • Does your home country have socialized healthcare? Yes/No
  • Are you on the campus healthcare plan? Yes/No
  • Have you ever worried about your health insurance? Yes/No
  • Have you ever had a serious health condition that insurance did not cover? Yes/No
  • Have you ever been surprised or shocked by a medical bill? Yes/No

After conducting your interviews and transcribing your data, you can then conduct content analysis, coding responses into different categories. Since you began your research with the theory that international students may find US healthcare lacking, you would use the deductive approach to see if your hypotheses seem to hold true.
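
To sketch what the inferential step could look like here, the example below runs a chi-square test of independence with scipy on entirely invented counts for one of the yes/no questions; the numbers exist only to show the mechanics of reporting a test statistic, p-value, and effect size.

```python
import math
from scipy.stats import chi2_contingency

# Invented counts for "Have you ever worried about your health insurance?"
#                      Yes  No
observed = [[34, 16],       # international students
            [21, 29]]       # local students

chi2, p, dof, _expected = chi2_contingency(observed)
n = sum(map(sum, observed))
cramers_v = math.sqrt(chi2 / n)  # effect size for a 2x2 table

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}, Cramer's V = {cramers_v:.2f}")
# A p-value below your alpha level (commonly 0.05) would justify rejecting
# the null hypothesis of no association between student group and worry.
```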

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

Statistics

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
Methodology

  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about structured interviews

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, but you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

The four most common types of interviews are:

  • Structured interviews: The questions are predetermined in both topic and order.
  • Semi-structured interviews: A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews: None of the questions are predetermined.
  • Focus group interviews: The questions are presented to a group instead of one individual.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing really high-quality interview questions.


George, T. & Merkus, J. (2023, June 22). Structured Interview | Definition, Guide & Examples. Scribbr. Retrieved August 5, 2024, from https://www.scribbr.com/methodology/structured-interview/


Structured vs. unstructured interviews: A complete guide

Last updated 7 March 2023. Reviewed by Miroslav Damyanov.


Interviews can help you understand the context of a subject, eyewitness accounts of an event, people's perceptions of a product, and more.

In some instances, semi-structured or unstructured interviews can be more helpful; in others, structured interviews are the right choice to obtain the information you seek.

In some cases, structured interviews can save time, making your research more efficient. Let’s dive into everything you need to know about structured interviews.


  • What are structured interviews?

Structured interviews are also known as standardized interviews, patterned interviews, or planned interviews. They’re a research instrument that uses a standard sequence of questions to collect information about the research subject. 

Often, you’ll use structured interviews when you need data that’s easy to categorize and quantify for a statistical analysis of responses.

Structured interviews are incredibly effective at helping researchers identify patterns and trends in response data. They’re great at minimizing the time and resources necessary for data collection and analysis.

What types of questions suit structured interviews?

Often, researchers use structured interviews for quantitative research . In these cases, they usually employ close-ended questions. 

Close-ended questions have a fixed set of responses from which the interviewer can choose. Because of the limited response selection set, response data from close-ended questions is easy to aggregate and analyze.

Researchers often employ multiple-choice or dichotomous close-ended questions in interviews. 

For multiple-choice questions, interviewees may choose between three or more possible answers. The interviewer will often restrict the response set to four or five options, as interviewees will struggle to keep more than that in mind, which can slow down and complicate the interview process.

For dichotomous questions, the interviewee may choose between two possible options. Yes or no and true or false questions are examples of dichotomous questions.

Open-ended questions can also appear in structured interviews. However, researchers use them when conducting qualitative research and looking for in-depth information about the interviewee's perceptions or experiences.

These questions take longer for the interviewee to answer, and the answers take longer for the researcher to analyze. There's also a higher possibility of the researcher collecting irrelevant data. However, open-ended questions are more effective than close-ended questions in gathering in-depth information.

Sometimes, researchers use structured interviews in qualitative research. In this case, the research instrument contains open-ended questions in the same sequence. This usage is less common because it can be hard to compare feedback, especially with large sample sizes.

  • What types of structured interviews are there?

Researchers conduct structured interviews face-to-face, via telephone or videoconference, or through a survey instrument. 

Face-to-face interviews help researchers collect data and gather more detailed information. They can collect and analyze facial expressions, body language, tone, and inflection easier than they might through other interview methods . 

However, face-to-face interviews are the most resource-intensive to arrange. You'll likely need to assume travel and other related logistical costs for a face-to-face interview. 

These interviews also take more time and are more vulnerable to bias than some other formats. For these reasons, face-to-face interviews are best with a small sample size.

You can conduct interviews via an audio or video call. They are less resource-intensive than face-to-face interviews and can use a larger sample size. 

However, it can be difficult for the interviewer to engage effectively with the interviewee within this format, which can inject bias or ambiguity into the responses. This is particularly true for audio calls, especially if the interviewer and interviewee have not met before the interview. 

A video call can help the interviewer capture some data from body language and facial expressions, but less so than in a face-to-face interview. Technical issues are another thing to consider. If you’re studying a group of people that live in an area with limited Internet connectivity, this can make a video call challenging.

Survey questionnaires mirror the essential elements of structured interviews by containing a consistent sequence of standard questions. Surveys in quantitative research usually include close-ended questions. This data collection method can be beneficial if you need feedback from a large sample size.

Surveys are resource-efficient from a data administration standpoint but are more limited in the data they can gather. Further, if a survey question is ambiguous, you can’t clear up the ambiguity before someone responds. 

By contrast, in a face-to-face or tele-interview, an interviewee may ask clarifying questions or exhibit confusion when asked an unclear question, allowing the interviewer to clarify.

  • What are some common examples of structured interviews?

Structured interviews are relevant in many fields. You can find structured interviews in human resources, marketing, political science, psychology, and more. 

Academic and applied researchers commonly use them to verify insights from analyzing academic literature or responses from other interview types.

However, one of the most common structured interview applications lies outside the research realm: Human resource professionals and hiring managers commonly use these interviews to hire employees.

A hiring manager can easily compare responses and whittle down the applicant pool by posing a standard set of closed-ended interview questions to multiple applicants. 

Further, standard close-ended or open-ended questions can reduce bias and add objectivity and credibility to the hiring process.

Structured interviews are common in political polling. Candidates and political parties may conduct structured interviews with relatively small voter groups to obtain feedback. They ask questions about issues, messaging, and voting intentions to craft policies and campaigns.

  • What do you need to conduct a structured interview?

The tools you need to conduct a structured interview vary by format. But fundamentally, you will need: 

  • A participant
  • An interviewer
  • A pen and pad (or other note-taking tools)
  • A recording device
  • A consent form
  • A list of interview questions

While some interviewees may express qualms about you recording the interview, it’s challenging to conduct quality interviews while taking detailed notes. Even if you have a note-taker in the room, note-taking may introduce bias and can’t capture body language or facial expressions. 

Depending on the nature of your study, others may wish to review your sources. If they call your conclusions into question, audio recordings are additional evidence in your favor.

To record, you should ask the interviewee to sign a consent form. Check with your employer's legal counsel or institutional review board at your academic institution for guidance about obtaining consent legally in your state. 

If you're conducting a face-to-face interview, a camcorder, digital camera, or even some smartphones are sufficient for recording.

For a tele-interview, you'll find that today's leading video conferencing software applications feature a convenient recording function for data collection.

If a survey is your method of choice, you'll need the survey and a distribution and collection method. Online survey software applications allow you to create surveys by inputting the questions and distributing your survey via text or email. 

In some cases, survey companies even offer packages in which they will call those who do not respond via email or text and conduct the survey over the phone.

  • How to conduct a structured interview

If you're planning a face-to-face interview, you'll need to take a few steps to do it efficiently. 

First, prepare your questions and double-check that the structured interview format is best for your study. Make sure that they are neutral, unbiased, and close-ended. Ask a friend or colleague to test your questions pre-interview to ensure they are clear and straightforward.

Choose the setting for your interviews. Ideally, you'll select a location that is easy to get to. If you live in a city, consider addresses accessible via public transportation. 

The room where your interview takes place should be comfortable, without distraction, and quiet, so your recording device clearly captures your interviewee's audio.

If you're looking to interview people with specific characteristics, you'll need to recruit them. Some companies specialize in interview recruitment. You provide the attributes you need, and they identify a pool of candidates for a fee. Alternatively, you can advertise to participants on social media and other relevant avenues. 

If you're looking for college students in a specific region, look at student newspaper ads or affiliated social media pages. 

You'll also want to incentivize participation, as recruiting interview respondents without compensation is exceedingly difficult. It’s best to include a line or two about requiring written consent for participation and how you’ll use the interview audio.

When you have an interview participant, discuss the intent of your research and acquire their consent. Ensure your recording tools are working well, and begin your interview. 

Don't rely on the recordings alone: Note the most significant insights from your participant, as you could easily forget them when it's time to analyze your data.

You'll want to transcribe your audio at the data analysis stage. Some recording applications use AI to generate transcripts. Remove filler words and other sounds to generate a clear transcript for the best results. 
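As a toy illustration of that cleanup step, the short sketch below strips a hypothetical list of filler words from a transcript. Real transcription tools do this more carefully; the filler list here is an assumption for the example.

```python
# A toy sketch of the filler-word cleanup step described above. The filler
# list is an illustrative assumption; real transcription tools are more careful.
import re

FILLERS = ["you know", "um", "uh", "er", "ah"]

def clean_transcript(text: str) -> str:
    # Match any filler as a whole word/phrase, plus a trailing comma/space
    pattern = r"\b(?:" + "|".join(re.escape(f) for f in FILLERS) + r")\b,?\s*"
    cleaned = re.sub(pattern, "", text, flags=re.IGNORECASE)
    # Collapse any double spaces left behind
    return re.sub(r"\s{2,}", " ", cleaned).strip()

raw = "Um, I think, you know, the new process is, uh, much faster."
print(clean_transcript(raw))
# -> "I think, the new process is, much faster."
```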

A written transcript will help you analyze data and pull quotes from your audio to include in your final research paper.

  • What are other common types of interviews?

Typically, you'll find researchers using at least one of these other common interview types:

Semi-structured interviews

As the name suggests, semi-structured interviews include some elements of a structured interview. You’ll include preplanned questions, but you can deviate from those questions to explore the interviewee's answers in greater depth.

Typically, a researcher will conduct a semi-structured interview with preplanned questions and an interview guide. The guide will include topics and potential questions to ask. Sometimes, the guide may also include areas or questions to avoid asking.

Unstructured interviews

In an unstructured interview, the researchers approach the interview subjects without predetermined questions. Researchers often use this qualitative instrument to probe into personal experiences and testimony, typically toward the beginning of a research study. 

Often, you’ll validate the insights you gather during unstructured and semi-structured interviews with structured interviews, surveys, and similar quantitative research tools.

Focus group interviews

Focus group interviews differ from the other three types of interviews as you pose the questions to a small group. Focus groups are typically either structured or semi-structured. When researchers employ structured interview questions, they are typically confident in the areas they wish to explore. 

Semi-structured interviews are perfect for a researcher seeking to explore broad issues. However, you must be careful that unplanned questions are unambiguous and neutral. Otherwise, you could wind up with biased results.

What is a structured vs. an unstructured interview?

A structured interview consists of standard preplanned questions for data collection. These questions may be close-ended, open-ended, or a combination. 

By contrast, an unstructured interview includes unplanned questions. In these interviews, you’ll usually equip facilitators with an interview guide. This includes guidelines for asking questions and samples that can help them ask relevant questions.

What are the advantages of a structured interview?

Relative to other interview formats, a structured interview is usually more time-efficient. With a preplanned set of questions, your interview is less likely to go off on tangents, especially if you use close-ended questions. 

The more structure you provide to the interview, the more likely you are to generate responses that are easy to analyze. By contrast, an unstructured interview may involve a freewheeling conversation with off-topic and irrelevant feedback that lasts a long time.

What is an example of a structured question?

A structured question is any question you ask in an interview that you’ve preplanned and standardized.

For example, if you conduct five interviews and the first question you ask each participant is, "Do you believe the world is round, yes or no?", you have asked them a structured question. This is also a close-ended dichotomous question.


Perspectives in Clinical Research, volume 14(3), Jul-Sep 2023 (PMC10405529)

Designing and validating a research questionnaire - Part 1

Priya Ranganathan

Department of Anaesthesiology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India

Carlo Caduff

Department of Global Health and Social Medicine, King’s College London, London, United Kingdom

Questionnaires are often used as part of research studies to collect data from participants. However, the information obtained through a questionnaire is dependent on how it has been designed, used, and validated. In this article, we look at the types of research questionnaires, their applications and limitations, and how a new questionnaire is developed.

INTRODUCTION

In research studies, questionnaires are commonly used as data collection tools, either as the only source of information or in combination with other techniques in mixed-method studies. However, the quality and accuracy of data collected using a questionnaire depend on how it is designed, used, and validated. In this two-part series, we discuss how to design (part 1) and how to use and validate (part 2) a research questionnaire. It is important to emphasize that questionnaires seek to gather information from other people and therefore entail a social relationship between those who are doing the research and those who are being researched. This social relationship comes with an obligation to learn from others, an obligation that goes beyond the purely instrumental rationality of gathering data. In that sense, we underscore that any research method is not simply a tool but a situation, a relationship, a negotiation, and an encounter. This points to both ethical questions (what is the relationship between the researcher and the researched?) and epistemological ones (what are the conditions under which we can know something?).

At the start of any kind of research project, it is crucial to select the right methodological approach. What is the research question, what is the research object, and what can a questionnaire realistically achieve? Not every research question and not every research object are suitable to the questionnaire as a method. Questionnaires can only provide certain kinds of empirical evidence and it is thus important to be aware of the limitations that are inherent in any kind of methodology.

WHAT IS A RESEARCH QUESTIONNAIRE?

A research questionnaire can be defined as a data collection tool consisting of a series of questions or items used to collect information from respondents and thus learn about their knowledge, opinions, attitudes, beliefs, and behavior. Informed by a positivist philosophy of the natural sciences that considers methods mainly as a set of rules for the production of knowledge, questionnaires are frequently used instrumentally as a standardized and standardizing tool to ask a set of questions to participants. Outside of such a positivist philosophy, questionnaires can be seen as an encounter between the researcher and the researched, where knowledge is not simply gathered but negotiated through a distinct form of communication that is the questionnaire.

STRENGTHS AND LIMITATIONS OF QUESTIONNAIRES

A questionnaire may not always be the most appropriate way of engaging with research participants and generating knowledge that is needed for a research study. Questionnaires have advantages that have made them very popular, especially in quantitative studies driven by a positivist philosophy: they are a low-cost method for the rapid collection of large amounts of data, even from a wide sample. They are practical, can be standardized, and allow comparison between groups and locations. However, it is important to remember that a questionnaire only captures the information that the method itself (as the structured relationship between the researcher and the researched) allows for and that the respondents are willing to provide. For example, a questionnaire on diet captures what the respondents say they eat and not what they are eating. The problem of social desirability emerges precisely because the research process itself involves a social relationship. This means that respondents may often provide socially acceptable and idealized answers, particularly in relation to sensitive questions, for example, alcohol consumption, drug use, and sexual practices. Questionnaires are most useful for studies investigating knowledge, beliefs, values, self-understandings, and self-perceptions that reflect broader social, cultural, and political norms that may well diverge from actual practices.

TYPES OF RESEARCH QUESTIONNAIRES

Research questionnaires may be classified in several ways:

Depending on mode of administration

Research questionnaires may be self-administered (by the research participant) or researcher-administered. Self-administered (also known as self-reported or self-completed) questionnaires are designed to be completed by respondents without assistance from a researcher. Self-reported questionnaires may be administered to participants directly during hospital or clinic visits, mailed through the post or by email, or accessed through websites. This technique allows respondents to answer at their own pace and simplifies research costs and logistics. The anonymity offered by self-reporting may facilitate more accurate answers. However, the disadvantages are that there may be misinterpretations of questions and low response rates. Significantly, relevant context information is missing to make sense of the answers provided. Researcher-reported (or interviewer-reported) questionnaires may be administered face-to-face or through remote techniques such as telephone or videoconference and are associated with higher response rates. They allow the researcher to have a better understanding of how the data are collected and how answers are negotiated, but are more resource intensive and require more training for the researchers.

The choice between self-administered and researcher-administered questionnaires depends on various factors such as the characteristics of the target audience (e.g., literacy and comprehension level and ability to use technology), costs involved, and the need for confidentiality/privacy.

Depending on the format of the questions

Research questionnaires can have structured or semi-structured formats. Semi-structured questionnaires allow respondents to answer more freely and on their terms, with no restrictions on their responses. They allow for unusual or surprising responses and are useful to explore and discover a range of answers to determine common themes. Typically, the analysis of responses to open-ended questions is more complex and requires coding and analysis. In contrast, structured questionnaires provide a predefined set of responses for the participant to choose from. The use of standard items makes the questionnaire easier to complete and allows quick aggregation, quantification, and analysis of the data. However, structured questionnaires can be restrictive if the scope of responses is limited and may miss potential answers. They also may suggest answers that respondents may not have considered before. Respondents may be forced to fit their answers into the predetermined format and may not be able to express personal views and say what they really want to say or think. In general, this type of questionnaire can turn the research process into a mechanical, anonymous survey with little incentive for participants to feel engaged, understood, and taken seriously.

STRUCTURED QUESTIONS: FORMATS

Some examples of close-ended question formats include:

  • Multiple choice: e.g., Please indicate your marital status (choose one option from a list ending with "Prefer not to say")
  • Checklist: e.g., Describe your areas of work (circle or tick all that apply): Clinical service, Administration, etc.
  • Likert-type agreement scales: response options ranging from "Strongly agree" to "Strongly disagree"
  • Numerical scales: Please rate your current pain on a scale of 1–10, where 1 is no pain and 10 is the worst imaginable pain
  • Symbolic scales: for example, the Wong-Baker FACES scale to rate pain in older children
  • Ranking: Rank the following cities as per the quality of public health care, where 1 is the best and 5 is the worst.

A matrix questionnaire consists of a series of rows with items to be answered, with a series of columns providing the same answer options. This is an efficient way of getting the respondent to provide answers to multiple questions. The EORTC QLQ-C30 is an example of a matrix questionnaire.[1]

For a more detailed review of the types of research questions, readers are referred to a paper by Boynton and Greenhalgh.[2]

USING PRE-EXISTING QUESTIONNAIRES VERSUS DEVELOPING A NEW QUESTIONNAIRE

Before developing a questionnaire for a research study, a researcher can check whether there are any preexisting validated questionnaires that might be adapted and used for the study. The use of validated questionnaires saves the time and resources needed to design a new questionnaire and allows comparability between studies.

However, certain aspects need to be kept in mind: is the population/context/purpose for which the original questionnaire was designed similar to the new study? Is cross-cultural adaptation required? Are any permissions needed to use the questionnaire? In many situations, the development of a new questionnaire may be more appropriate, given that any research project entails both methodological and epistemological questions: what is the object of knowledge and what are the conditions under which it can be known? It is important to understand that the standardizing nature of questionnaires contributes to the standardization of objects of knowledge. Thus, the seeming similarity in the object of study across diverse locations may be an artifact of the method. Whatever method one uses, it will always operate as the ground on which the object of study is known.

DESIGNING A NEW RESEARCH QUESTIONNAIRE

Once the researcher has decided to design a new questionnaire, several steps should be considered:

Gathering content

The first step is to create a conceptual framework identifying all the relevant areas on which the questionnaire will collect information. This may require a scoping review of the published literature, appraising other questionnaires on similar topics, or the use of focus groups to identify common themes.

Create a list of questions

Questions need to be carefully formulated, with attention to language and wording, to avoid ambiguity and misinterpretation. Table 1 lists a few examples of poorly worded questions that could have been phrased in a more appropriate manner. Other important aspects to note are:

Table 1: Examples of poorly phrased questions in a research questionnaire

  • "Like most people here, do you consume a rice-based diet?" (Issue: leading question.) Rephrased: "What type of diet do you consume?"
  • "What type of alcoholic drink do you prefer?" (Issue: loaded or assumptive question; it assumes that the respondent consumes alcohol.) Rephrased: "Do you consume alcoholic drinks? If yes, what type of alcoholic drink do you prefer?"
  • "Over the past 30 days, how many hours in total have you exercised?" (Issue: information that is difficult to recall.) Rephrased: "On average, how many days in a week do you exercise? And how many hours per day?"
  • "Do you agree that not smoking is associated with no risk to health?" (Issue: double negative.) Rephrased: "Do you agree that smoking is associated with risk to health?"
  • "Was the clinic easy to locate and did you like the clinic?" (Issue: double-barreled question.) Rephrased: split into two separate questions: "Was the clinic easy to locate?" and "Did you like the clinic?"
  • "Do you eat fries regularly?" (Issue: ambiguous; the term "regularly" is open to interpretation.) Rephrased: "How often do you eat fries?"
  • Provide a brief introduction to the research study along with instructions on how to complete the questionnaire
  • Allow respondents to indicate levels of intensity in their replies, so that they are not forced into “yes” or “no” answers where intensity of feeling may be more appropriate
  • Collect specific and detailed data wherever possible – this can later be coded into categories. For example, age can be captured in years and later classified as <18 years, 18–45 years, and 46 years and above; the reverse is not possible (a minimal coding sketch follows this list)
  • Avoid technical terms, slang, and abbreviations. Tailor the reading level to the expected education level of respondents
  • The format of the questionnaire should be attractive with different sections for various subtopics. The font should be large and easy to read, especially if the questionnaire is targeted at the elderly
  • Question sequence: questions should be arranged from general to specific, from easy to difficult, and from facts to opinions, and sensitive topics should be introduced later in the questionnaire.[3] Usually, demographic details are captured first, followed by questions on other aspects
  • Use contingency questions: these are questions which need to be answered only by a subgroup of the respondents who provide a particular answer to a previous question. This ensures that participants only respond to relevant sections of the questionnaire, for example, Do you smoke? If yes, then how long have you been smoking? If not, then please go to the next section.
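To illustrate the age example above, here is a minimal sketch, assuming responses are held in a pandas DataFrame, of recoding exact ages into the categories mentioned; the reverse recoding would not be possible, which is why the detailed data are worth collecting.

```python
# A minimal sketch, assuming survey responses sit in a pandas DataFrame,
# of recoding exact ages (detailed data) into the categories named above.
import pandas as pd

responses = pd.DataFrame({"age_years": [17, 23, 45, 46, 70]})

# Bins are closed on the right, so 0-17 -> "<18", 18-45, and 46+
responses["age_group"] = pd.cut(
    responses["age_years"],
    bins=[0, 17, 45, 120],
    labels=["<18 years", "18-45 years", "46 years and above"],
)
print(responses)
```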

TESTING A QUESTIONNAIRE

A questionnaire needs to be valid and reliable, and therefore, any new questionnaire needs to be pilot tested in a small sample of respondents who are representative of the larger population. In addition to validity and reliability, pilot testing provides information on the time taken to complete the questionnaire and whether any questions are confusing or misleading and need to be rephrased. Validity indicates that the questionnaire measures what it claims to measure – this means taking into consideration the limitations that come with any questionnaire-based study. Reliability means that the questionnaire yields consistent responses when administered repeatedly even by different researchers, and any variations in the results are due to actual differences between participants and not because of problems with the interpretation of the questions or their responses. In the next article in this series, we will discuss methods to determine the reliability and validity of a questionnaire.
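Methods for determining reliability and validity are covered in part 2 of the original series; purely as an illustration of the kind of check involved, the sketch below computes Cronbach's alpha, one widely used internal-consistency statistic, on made-up pilot data.

```python
# A minimal illustration (not from the article) of one common reliability
# check, Cronbach's alpha, computed on made-up pilot-test data. Each column
# of the DataFrame is one questionnaire item scored numerically.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability of a set of numerically scored items."""
    k = items.shape[1]                          # number of items
    item_variances = items.var(axis=0, ddof=1)  # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five pilot respondents answering three 5-point Likert items
pilot = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2],
    "q2": [4, 4, 3, 5, 2],
    "q3": [5, 5, 2, 4, 3],
})
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")
```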


Conflicts of interest

There are no conflicts of interest.


Questionnaire – Definition, Types, and Examples


Definition:

A Questionnaire is a research tool or survey instrument that consists of a set of questions or prompts designed to gather information from individuals or groups of people.

It is a standardized way of collecting data from a large number of people by asking them a series of questions related to a specific topic or research objective. The questions may be open-ended or closed-ended, and the responses can be quantitative or qualitative. Questionnaires are widely used in research, marketing, social sciences, healthcare, and many other fields to collect data and insights from a target population.

History of Questionnaire

The history of questionnaires can be traced back to the ancient Greeks, who used questionnaires as a means of assessing public opinion. However, the modern history of questionnaires began in the late 19th century with the rise of social surveys.

The first social survey was conducted in the United States in 1874 by Francis A. Walker, who used a questionnaire to collect data on labor conditions. In the early 20th century, questionnaires became a popular tool for conducting social research, particularly in the fields of sociology and psychology.

One of the most influential figures in the development of the questionnaire was the psychologist Raymond Cattell, who in the 1940s and 1950s developed the personality questionnaire, a standardized instrument for measuring personality traits. Cattell’s work helped establish the questionnaire as a key tool in personality research.

In the 1960s and 1970s, the use of questionnaires expanded into other fields, including market research, public opinion polling, and health surveys. With the rise of computer technology, questionnaires became easier and more cost-effective to administer, leading to their widespread use in research and business settings.

Today, questionnaires are used in a wide range of settings, including academic research, business, healthcare, and government. They continue to evolve as a research tool, with advances in computer technology and data analysis techniques making it easier to collect and analyze data from large numbers of participants.

Types of Questionnaire

Types of Questionnaires are as follows:

Structured Questionnaire

This type of questionnaire has a fixed format with predetermined questions that the respondent must answer. The questions are usually closed-ended, which means that the respondent must select a response from a list of options.

Unstructured Questionnaire

An unstructured questionnaire does not have a fixed format or predetermined questions. Instead, the interviewer or researcher can ask open-ended questions to the respondent and let them provide their own answers.

Open-ended Questionnaire

An open-ended questionnaire allows the respondent to answer the question in their own words, without any pre-determined response options. The questions usually start with phrases like “how,” “why,” or “what,” and encourage the respondent to provide more detailed and personalized answers.

Close-ended Questionnaire

In a closed-ended questionnaire, the respondent is given a set of predetermined response options to choose from. This type of questionnaire is easier to analyze and summarize, but may not provide as much insight into the respondent’s opinions or attitudes.

Mixed Questionnaire

A mixed questionnaire is a combination of open-ended and closed-ended questions. This type of questionnaire allows for more flexibility in terms of the questions that can be asked, and can provide both quantitative and qualitative data.

Pictorial Questionnaire

In a pictorial questionnaire, instead of using words to ask questions, the questions are presented in the form of pictures, diagrams or images. This can be particularly useful for respondents who have low literacy skills, or for situations where language barriers exist. Pictorial questionnaires can also be useful in cross-cultural research where respondents may come from different language backgrounds.

Types of Questions in Questionnaire

The types of Questions in Questionnaire are as follows:

Multiple Choice Questions

These questions have several options for participants to choose from. They are useful for getting quantitative data and can be used to collect demographic information.

  • a. Red b. Blue c. Green d. Yellow

Rating Scale Questions

These questions ask participants to rate something on a scale (e.g. from 1 to 10). They are useful for measuring attitudes and opinions.

  • On a scale of 1 to 10, how likely are you to recommend this product to a friend?

Open-Ended Questions

These questions allow participants to answer in their own words and provide more in-depth and detailed responses. They are useful for getting qualitative data.

  • What do you think are the biggest challenges facing your community?

Likert Scale Questions

These questions ask participants to rate how much they agree or disagree with a statement. They are useful for measuring attitudes and opinions.

How strongly do you agree or disagree with the following statement:

“I enjoy exercising regularly.”

  • a. Strongly Agree
  • b. Agree
  • c. Neither Agree nor Disagree
  • d. Disagree
  • e. Strongly Disagree
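For analysis, Likert responses like these are commonly converted to numeric scores. The sketch below is my own illustration; the 1–5 mapping is a conventional choice, not something prescribed by this text.

```python
# A minimal sketch (my own illustration) of scoring Likert responses such as
# the example above; the 1-5 mapping is a conventional choice, not mandated here.
import pandas as pd

LIKERT_SCORES = {
    "Strongly Agree": 5,
    "Agree": 4,
    "Neither Agree nor Disagree": 3,
    "Disagree": 2,
    "Strongly Disagree": 1,
}

answers = pd.Series(["Agree", "Strongly Agree", "Disagree", "Agree"])
scores = answers.map(LIKERT_SCORES)
print(scores.mean())  # mean agreement with "I enjoy exercising regularly."
```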

Demographic Questions

These questions ask about the participant’s personal information such as age, gender, ethnicity, education level, etc. They are useful for segmenting the data and analyzing results by demographic groups.

  • What is your age?

Yes/No Questions

These questions only have two options: Yes or No. They are useful for getting simple, straightforward answers to a specific question.

Have you ever traveled outside of your home country?

Ranking Questions

These questions ask participants to rank several items in order of preference or importance. They are useful for measuring priorities or preferences.

Please rank the following factors in order of importance when choosing a restaurant:

  • a. Quality of Food
  • c. Ambiance
  • d. Location

Matrix Questions

These questions present a matrix or grid of options that participants can choose from. They are useful for getting data on multiple variables at once.

For example, respondents might rate each of the following statements on a shared scale (e.g., from Strongly agree to Strongly disagree):

  • The product is easy to use
  • The product meets my needs
  • The product is affordable

Dichotomous Questions

These questions present two options that are opposite or contradictory. They are useful for measuring binary or polarized attitudes.

Do you support the death penalty?

How to Make a Questionnaire

Step-by-Step Guide for Making a Questionnaire:

  • Define your research objectives: Before you start creating questions, you need to define the purpose of your questionnaire and what you hope to achieve from the data you collect.
  • Choose the appropriate question types: Based on your research objectives, choose the appropriate question types to collect the data you need. Refer to the types of questions mentioned earlier for guidance.
  • Develop questions: Develop clear and concise questions that are easy for participants to understand. Avoid leading or biased questions that might influence the responses.
  • Organize questions: Organize questions in a logical and coherent order, starting with demographic questions followed by general questions, and ending with specific or sensitive questions.
  • Pilot the questionnaire: Test your questionnaire on a small group of participants to identify any flaws or issues with the questions or the format.
  • Refine the questionnaire: Based on feedback from the pilot, refine and revise the questionnaire as necessary to ensure that it is valid and reliable.
  • Distribute the questionnaire: Distribute the questionnaire to your target audience using a method that is appropriate for your research objectives, such as online surveys, email, or paper surveys.
  • Collect and analyze data: Collect the completed questionnaires and analyze the data using appropriate statistical methods (a minimal sketch follows this list). Draw conclusions from the data and use them to inform decision-making or further research.
  • Report findings: Present your findings in a clear and concise report, including a summary of the research objectives, methodology, key findings, and recommendations.
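As a minimal sketch of the "collect and analyze" step referenced above, the snippet below assumes the completed questionnaires have been exported to a CSV file with one row per respondent; the file name and column names are placeholders, not part of this guide.

```python
# A minimal sketch of the analysis step, assuming completed questionnaires were
# exported to a CSV file with one row per respondent. The file name and column
# names ("satisfaction", "age_group") are placeholders, not part of the guide.
import pandas as pd

df = pd.read_csv("questionnaire_responses.csv")  # hypothetical export

# Frequency table for one closed-ended question
print(df["satisfaction"].value_counts(normalize=True).round(2))

# Cross-tabulation of a response against a demographic question
print(pd.crosstab(df["age_group"], df["satisfaction"], normalize="index"))
```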

Questionnaire Administration Modes

There are several modes of questionnaire administration. The choice of mode depends on the research objectives, sample size, and available resources. Some common modes of administration include:

  • Self-administered paper questionnaires: Participants complete the questionnaire on paper, either in person or by mail. This mode is relatively low cost and easy to administer, but it may result in lower response rates and greater potential for errors in data entry.
  • Online questionnaires: Participants complete the questionnaire on a website or through email. This mode is convenient for both researchers and participants, as it allows for fast and easy data collection. However, it may be subject to issues such as low response rates, lack of internet access, and potential for fraudulent responses.
  • Telephone surveys: Trained interviewers administer the questionnaire over the phone. This mode allows for a large sample size and can result in higher response rates, but it is also more expensive and time-consuming than other modes.
  • Face-to-face interviews: Trained interviewers administer the questionnaire in person. This mode allows for a high degree of control over the survey environment and can result in higher response rates, but it is also more expensive and time-consuming than other modes.
  • Mixed-mode surveys: Researchers use a combination of two or more modes to administer the questionnaire, such as using online questionnaires for initial screening and following up with telephone interviews for more detailed information. This mode can help overcome some of the limitations of individual modes, but it requires careful planning and coordination.

Example of Questionnaire

Title of the Survey: Customer Satisfaction Survey

Introduction:

We appreciate your business and would like to ensure that we are meeting your needs. Please take a few minutes to complete this survey so that we can better understand your experience with our products and services. Your feedback is important to us and will help us improve our offerings.

Instructions:

Please read each question carefully and select the response that best reflects your experience. If you have any additional comments or suggestions, please feel free to include them in the space provided at the end of the survey.

1. How satisfied are you with our product quality?

  • Very satisfied
  • Somewhat satisfied
  • Somewhat dissatisfied
  • Very dissatisfied

2. How satisfied are you with our customer service?

3. How satisfied are you with the price of our products?

4. How likely are you to recommend our products to others?

  • Very likely
  • Somewhat likely
  • Somewhat unlikely
  • Very unlikely

5. How easy was it to find the information you were looking for on our website?

  • Very easy
  • Somewhat easy
  • Somewhat difficult
  • Very difficult

6. How satisfied are you with the overall experience of using our products and services?

7. Is there anything that you would like to see us improve upon or change in the future?

…………………………………………………………………………………………………………………………..

Conclusion:

Thank you for taking the time to complete this survey. Your feedback is valuable to us and will help us improve our products and services. If you have any further comments or concerns, please do not hesitate to contact us.

Applications of Questionnaire

Some common applications of questionnaires include:

  • Research: Questionnaires are commonly used in research to gather information from participants about their attitudes, opinions, behaviors, and experiences. This information can then be analyzed and used to draw conclusions and make inferences.
  • Healthcare: In healthcare, questionnaires can be used to gather information about patients’ medical history, symptoms, and lifestyle habits. This information can help healthcare professionals diagnose and treat medical conditions more effectively.
  • Marketing: Questionnaires are commonly used in marketing to gather information about consumers’ preferences, buying habits, and opinions on products and services. This information can help businesses develop and market products more effectively.
  • Human Resources: Questionnaires are used in human resources to gather information from job applicants, employees, and managers about job satisfaction, performance, and workplace culture. This information can help organizations improve their hiring practices, employee retention, and organizational culture.
  • Education: Questionnaires are used in education to gather information from students, teachers, and parents about their perceptions of the educational experience. This information can help educators identify areas for improvement and develop more effective teaching strategies.

Purpose of Questionnaire

Some common purposes of questionnaires include:

  • To collect information on attitudes, opinions, and beliefs: Questionnaires can be used to gather information on people’s attitudes, opinions, and beliefs on a particular topic. For example, a questionnaire can be used to gather information on people’s opinions about a particular political issue.
  • To collect demographic information: Questionnaires can be used to collect demographic information such as age, gender, income, education level, and occupation. This information can be used to analyze trends and patterns in the data.
  • To measure behaviors or experiences: Questionnaires can be used to gather information on behaviors or experiences such as health-related behaviors or experiences, job satisfaction, or customer satisfaction.
  • To evaluate programs or interventions: Questionnaires can be used to evaluate the effectiveness of programs or interventions by gathering information on participants’ experiences, opinions, and behaviors.
  • To gather information for research: Questionnaires can be used to gather data for research purposes on a variety of topics.

When to use Questionnaire

Here are some situations when questionnaires might be used:

  • When you want to collect data from a large number of people: Questionnaires are useful when you want to collect data from a large number of people. They can be distributed to a wide audience and can be completed at the respondent’s convenience.
  • When you want to collect data on specific topics: Questionnaires are useful when you want to collect data on specific topics or research questions. They can be designed to ask specific questions and can be used to gather quantitative data that can be analyzed statistically.
  • When you want to compare responses across groups: Questionnaires are useful when you want to compare responses across different groups of people. For example, you might want to compare responses from men and women, or from people of different ages or educational backgrounds.
  • When you want to collect data anonymously: Questionnaires can be useful when you want to collect data anonymously. Respondents can complete the questionnaire without fear of judgment or repercussions, which can lead to more honest and accurate responses.
  • When you want to save time and resources: Questionnaires can be more efficient and cost-effective than other methods of data collection such as interviews or focus groups. They can be completed quickly and easily, and can be analyzed using software to save time and resources.

Characteristics of Questionnaire

Here are some of the characteristics of questionnaires:

  • Standardization: Questionnaires are standardized tools that ask the same questions in the same order to all respondents. This ensures that all respondents are answering the same questions and that the responses can be compared and analyzed.
  • Objectivity: Questionnaires are designed to be objective, meaning that they do not contain leading questions or bias that could influence the respondent’s answers.
  • Predefined responses: Questionnaires typically provide predefined response options for the respondents to choose from, which helps to standardize the responses and make them easier to analyze.
  • Quantitative data: Questionnaires are designed to collect quantitative data, meaning that they provide numerical or categorical data that can be analyzed using statistical methods.
  • Convenience: Questionnaires are convenient for both the researcher and the respondents. They can be distributed and completed at the respondent’s convenience and can be easily administered to a large number of people.
  • Anonymity: Questionnaires can be anonymous, which can encourage respondents to answer more honestly and provide more accurate data.
  • Reliability: Questionnaires are designed to be reliable, meaning that they produce consistent results when administered multiple times to the same group of people.
  • Validity: Questionnaires are designed to be valid, meaning that they measure what they are intended to measure and are not influenced by other factors.

Advantages of Questionnaire

Some advantages of questionnaires are as follows:

  • Standardization: Questionnaires allow researchers to ask the same questions to all participants in a standardized manner. This helps ensure consistency in the data collected and eliminates potential bias that might arise if questions were asked differently to different participants.
  • Efficiency: Questionnaires can be administered to a large number of people at once, making them an efficient way to collect data from a large sample.
  • Anonymity: Participants can remain anonymous when completing a questionnaire, which may make them more likely to answer honestly and openly.
  • Cost-effective: Questionnaires can be relatively inexpensive to administer compared to other research methods, such as interviews or focus groups.
  • Objectivity: Because questionnaires are typically designed to collect quantitative data, they can be analyzed objectively without the influence of the researcher’s subjective interpretation.
  • Flexibility: Questionnaires can be adapted to a wide range of research questions and can be used in various settings, including online surveys, mail surveys, or in-person interviews.

Limitations of Questionnaire

The limitations of questionnaires are as follows:

  • Limited depth: Questionnaires are typically designed to collect quantitative data, which may not provide a complete understanding of the topic being studied. Questionnaires may miss important details and nuances that could be captured through other research methods, such as interviews or observations.
  • Response bias: Participants may not always answer questions truthfully or accurately, either because they do not remember or because they want to present themselves in a particular way. This can lead to response bias, which can affect the validity and reliability of the data collected.
  • Limited flexibility: While questionnaires can be adapted to a wide range of research questions, they may not be suitable for all types of research. For example, they may not be appropriate for studying complex phenomena or for exploring participants’ experiences and perceptions in-depth.
  • Limited context: Questionnaires typically do not provide a rich contextual understanding of the topic being studied. They may not capture the broader social, cultural, or historical factors that may influence participants’ responses.
  • Limited control: Researchers may not have control over how participants complete the questionnaire, which can lead to variations in response quality or consistency.


Published: 05 October 2018

Interviews and focus groups in qualitative research: an update for the digital age

P. Gill & J. Baillie

British Dental Journal, volume 225, pages 668–672 (2018)


  • Highlights that qualitative research is used increasingly in dentistry. Interviews and focus groups remain the most common qualitative methods of data collection.
  • Suggests the advent of digital technologies has transformed how qualitative research can now be undertaken.
  • Suggests interviews and focus groups can offer significant, meaningful insight into participants' experiences, beliefs and perspectives, which can help to inform developments in dental practice.

Qualitative research is used increasingly in dentistry, due to its potential to provide meaningful, in-depth insights into participants' experiences, perspectives, beliefs and behaviours. These insights can subsequently help to inform developments in dental practice and further related research. The most common methods of data collection used in qualitative research are interviews and focus groups. While these are primarily conducted face-to-face, the ongoing evolution of digital technologies, such as video chat and online forums, has further transformed these methods of data collection. This paper therefore discusses interviews and focus groups in detail, outlines how they can be used in practice, how digital technologies can further inform the data collection process, and what these methods can offer dentistry.

Introduction

Traditionally, research in dentistry has primarily been quantitative in nature. 1 However, in recent years, there has been a growing interest in qualitative research within the profession, due to its potential to further inform developments in practice, policy, education and training. Consequently, in 2008, the British Dental Journal (BDJ) published a four paper qualitative research series, 2 , 3 , 4 , 5 to help increase awareness and understanding of this particular methodological approach.

Since the papers were originally published, two scoping reviews have demonstrated the ongoing proliferation in the use of qualitative research within the field of oral healthcare. 1 , 6 To date, the original four paper series continue to be well cited and two of the main papers remain widely accessed among the BDJ readership. 2 , 3 The potential value of well-conducted qualitative research to evidence-based practice is now also widely recognised by service providers, policy makers, funding bodies and those who commission, support and use healthcare research.

Besides increasing standalone use, qualitative methods are now also routinely incorporated into larger mixed method study designs, such as clinical trials, as they can offer additional, meaningful insights into complex problems that simply could not be provided by quantitative methods alone. Qualitative methods can also be used to further facilitate in-depth understanding of important aspects of clinical trial processes, such as recruitment. For example, Ellis et al. investigated why edentulous older patients, dissatisfied with conventional dentures, decline implant treatment, despite its established efficacy, and frequently refuse to participate in related randomised clinical trials, even when financial constraints are removed. 7 Through the use of focus groups in Canada and the UK, the authors found that fears of pain and potential complications, along with perceived embarrassment, exacerbated by age, are common reasons why older patients typically refuse dental implants. 7

The last decade has also seen further developments in qualitative research, due to the ongoing evolution of digital technologies. These developments have transformed how researchers can access and share information, communicate and collaborate, recruit and engage participants, collect and analyse data and disseminate and translate research findings. 8 Where appropriate, such technologies are therefore capable of extending and enhancing how qualitative research is undertaken. 9 For example, it is now possible to collect qualitative data via instant messaging, email or online/video chat, using appropriate online platforms.

These innovative approaches to research are therefore cost-effective, convenient, reduce geographical constraints and are often useful for accessing 'hard to reach' participants (for example, those who are immobile or socially isolated). 8 , 9 However, digital technologies are still relatively new and constantly evolving and therefore present a variety of pragmatic and methodological challenges. Furthermore, given their very nature, their use in many qualitative studies and/or with certain participant groups may be inappropriate and should therefore always be carefully considered. While it is beyond the scope of this paper to provide a detailed explication regarding the use of digital technologies in qualitative research, insight is provided into how such technologies can be used to facilitate the data collection process in interviews and focus groups.

In light of such developments, it is perhaps therefore timely to update the main paper 3 of the original BDJ series. As with the previous publications, this paper has been purposely written in an accessible style, to enhance readability, particularly for those who are new to qualitative research. While the focus remains on the most common qualitative methods of data collection – interviews and focus groups – appropriate revisions have been made to provide a novel perspective, and should therefore be helpful to those who would like to know more about qualitative research. This paper specifically focuses on undertaking qualitative research with adult participants only.

Overview of qualitative research

Qualitative research is an approach that focuses on people and their experiences, behaviours and opinions. 10 , 11 The qualitative researcher seeks to answer questions of 'how' and 'why', providing detailed insight and understanding, 11 which quantitative methods cannot reach. 12 Within qualitative research, there are distinct methodologies influencing how the researcher approaches the research question, data collection and data analysis. 13 For example, phenomenological studies focus on the lived experience of individuals, explored through their description of the phenomenon. Ethnographic studies explore the culture of a group and typically involve the use of multiple methods to uncover the issues. 14

While methodology is the 'thinking tool', the methods are the 'doing tools'; 13 the ways in which data are collected and analysed. There are multiple qualitative data collection methods, including interviews, focus groups, observations, documentary analysis, participant diaries, photography and videography. Two of the most commonly used qualitative methods are interviews and focus groups, which are explored in this article. The data generated through these methods can be analysed in one of many ways, according to the methodological approach chosen. A common approach is thematic data analysis, involving the identification of themes and subthemes across the data set. Further information on approaches to qualitative data analysis has been discussed elsewhere. 1

Qualitative research is an evolving and adaptable approach, used by different disciplines for different purposes. Traditionally, qualitative data, specifically interviews, focus groups and observations, have been collected face-to-face with participants. In more recent years, digital technologies have contributed to the ongoing evolution of qualitative research. Digital technologies offer researchers different ways of recruiting participants and collecting data, and offer participants opportunities to be involved in research that is not necessarily face-to-face.

Research interviews are a fundamental qualitative research method 15 and are utilised across methodological approaches. Interviews enable the researcher to learn in depth about the perspectives, experiences, beliefs and motivations of the participant. 3 , 16 Examples include, exploring patients' perspectives of fear/anxiety triggers in dental treatment, 17 patients' experiences of oral health and diabetes, 18 and dental students' motivations for their choice of career. 19

Interviews may be structured, semi-structured or unstructured, 3 according to the purpose of the study, with less structured interviews facilitating a more in depth and flexible interviewing approach. 20 Structured interviews are similar to verbal questionnaires and are used if the researcher requires clarification on a topic; however, they produce less in-depth data about a participant's experience. 3 Unstructured interviews may be used when little is known about a topic and involve the researcher asking an opening question; 3 the participant then leads the discussion. 20 Semi-structured interviews are commonly used in healthcare research, enabling the researcher to ask predetermined questions, 20 while ensuring the participant discusses issues they feel are important.

Interviews can be undertaken face-to-face or using digital methods when the researcher and participant are in different locations. Audio-recording the interview, with the consent of the participant, is essential for all interviews regardless of the medium as it enables accurate transcription; the process of turning the audio file into a word-for-word transcript. This transcript is the data, which the researcher then analyses according to the chosen approach.

Types of interview

Qualitative studies often utilise one-to-one, face-to-face interviews with research participants. This involves arranging a mutually convenient time and place to meet the participant, signing a consent form and audio-recording the interview. However, digital technologies have expanded the potential for interviews in research, enabling individuals to participate in qualitative research regardless of location.

Telephone interviews can be a useful alternative to face-to-face interviews and are commonly used in qualitative research. They enable participants from different geographical areas to participate and may be less onerous for participants than meeting a researcher in person. 15 A qualitative study explored patients' perspectives of dental implants and utilised telephone interviews due to the quality of the data that could be yielded. 21 The researcher needs to consider how they will audio record the interview, which can be facilitated by purchasing a recorder that connects directly to the telephone. One potential disadvantage of telephone interviews is the inability of the interviewer and participant to see each other. This is resolved using software for audio and video calls online – such as Skype – to conduct interviews with participants in qualitative studies. Advantages of this approach include being able to see the participant if video calls are used, enabling observation of non-verbal communication, and the software can be free to use. However, participants are required to have a device and internet connection, as well as being computer literate, potentially limiting who can participate in the study. One qualitative study explored the role of dental hygienists in reducing oral health disparities in Canada. 22 The researcher conducted interviews using Skype, which enabled dental hygienists from across Canada to be interviewed within the research budget, accommodating the participants' schedules. 22

A less commonly used approach to qualitative interviews is the use of social virtual worlds. A qualitative study accessed a social virtual world – Second Life – to explore the health literacy skills of individuals who use social virtual worlds to access health information. 23 The researcher created an avatar and interview room, and undertook interviews with participants using voice and text methods. 23 This approach to recruitment and data collection enables individuals from diverse geographical locations to participate, while remaining anonymous if they wish. Furthermore, for interviews conducted using text methods, transcription of the interview is not required as the researcher can save the written conversation with the participant, with the participant's consent. However, the researcher and participant need to be familiar with how the social virtual world works to engage in an interview this way.

Conducting an interview

Ensuring informed consent before any interview is a fundamental aspect of the research process. Participants in research must be afforded autonomy and respect; consent should be informed and voluntary. 24 Individuals should have the opportunity to read an information sheet about the study, ask questions, understand how their data will be stored and used, and know that they are free to withdraw at any point without reprisal. The qualitative researcher should take written consent before undertaking the interview. In a face-to-face interview, this is straightforward: the researcher and participant both sign copies of the consent form, keeping one each. However, this approach is less straightforward when the researcher and participant do not meet in person. A recent protocol paper outlined an approach for taking consent for telephone interviews, which involved: audio recording the participant agreeing to each point on the consent form; the researcher signing the consent form and keeping a copy; and posting a copy to the participant. 25 This process could be replicated in other interview studies using digital methods.

There are advantages and disadvantages of using face-to-face and digital methods for research interviews. Ultimately, for both approaches, the quality of the interview is determined by the researcher. 16 Appropriate training and preparation are thus required. Healthcare professionals can use their interpersonal communication skills when undertaking a research interview, particularly questioning, listening and conversing. 3 However, the purpose of an interview is to gain information about the study topic, 26 rather than offering help and advice. 3 The researcher therefore needs to listen attentively to participants, enabling them to describe their experience without interruption. 3 The use of active listening skills also helps to facilitate the interview. 14 Spradley outlined elements and strategies for research interviews, 27 which are a useful guide for qualitative researchers:

  • Greeting and explaining the project/interview
  • Asking descriptive (broad), structural (explore response to descriptive) and contrast (difference between) questions
  • Asymmetry between the researcher and participant talking
  • Expressing interest and cultural ignorance
  • Repeating, restating and incorporating the participant's words when asking questions
  • Creating hypothetical situations
  • Asking friendly questions
  • Knowing when to leave.

For semi-structured interviews, a topic guide (also called an interview schedule) is used to guide the content of the interview – an example of a topic guide is outlined in Box 1. The topic guide is developed by the research team, usually based on the research questions, the existing literature and, for healthcare professionals, their clinical experience. It should include open-ended questions that elicit in-depth information and offer participants the opportunity to talk about issues important to them. This is vital in qualitative research, where the researcher is interested in exploring the experiences and perspectives of participants. It can be useful for qualitative researchers to pilot the topic guide with the first participants, 10 to ensure the questions are relevant and understandable, and to amend the questions if required; a sketch of how such a guide might be stored as structured data for piloting and amendment follows Box 1.

Regardless of the medium of interview, the researcher must consider the setting of the interview. For face-to-face interviews, this could be in the participant's home, in an office or another mutually convenient location. A quiet location is preferable to promote confidentiality, enable the researcher and participant to concentrate on the conversation, and to facilitate accurate audio-recording of the interview. For interviews using digital methods the same principles apply: a quiet, private space where the researcher and participant feel comfortable and confident to participate in an interview.

Box 1: Example of a topic guide

Study focus: Parents' experiences of brushing their child's (aged 0–5) teeth

1. Can you tell me about your experience of cleaning your child's teeth?

How old was your child when you started cleaning their teeth?

Why did you start cleaning their teeth at that point?

How often do you brush their teeth?

What do you use to brush their teeth and why?

2. Could you explain how you find cleaning your child's teeth?

Do you find anything difficult?

What makes cleaning their teeth easier for you?

3. How has your experience of cleaning your child's teeth changed over time?

Has it become easier or harder?

Have you changed how often and how you clean their teeth? If so, why?

4. Could you describe how your child finds having their teeth cleaned?

What do they enjoy about having their teeth cleaned?

Is there anything they find upsetting about having their teeth cleaned?

5. Where do you look for information/advice about cleaning your child's teeth?

What did your health visitor tell you about cleaning your child's teeth? (If anything)

What has the dentist told you about caring for your child's teeth? (If visited)

Have any family members given you advice about how to clean your child's teeth? If so, what did they tell you? Did you follow their advice?

6. Is there anything else you would like to discuss about this?
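Box 1's guide maps naturally onto a simple data structure. Purely as an illustration (the classes and field names below are invented for this sketch and are not part of the study materials), a topic guide could be stored so that amendments made during piloting are tracked by version:

```python
# A minimal sketch, assuming Python 3.9+; the class and field names are
# illustrative, not from any published instrument.
from dataclasses import dataclass, field

@dataclass
class TopicGuideItem:
    question: str                                      # open-ended main question
    prompts: list[str] = field(default_factory=list)   # optional follow-up prompts

@dataclass
class TopicGuide:
    study_focus: str
    version: int                                       # incremented after piloting amendments
    items: list[TopicGuideItem] = field(default_factory=list)

guide = TopicGuide(
    study_focus="Parents' experiences of brushing their child's (aged 0-5) teeth",
    version=1,
    items=[
        TopicGuideItem(
            "Can you tell me about your experience of cleaning your child's teeth?",
            ["How old was your child when you started cleaning their teeth?",
             "Why did you start cleaning their teeth at that point?"],
        ),
        TopicGuideItem(
            "Could you explain how you find cleaning your child's teeth?",
            ["Do you find anything difficult?",
             "What makes cleaning their teeth easier for you?"],
        ),
    ],
)

# Print the guide in the numbered format used in Box 1
for number, item in enumerate(guide.items, start=1):
    print(f"{number}. {item.question}")
    for prompt in item.prompts:
        print(f"   - {prompt}")
```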

Focus groups

A focus group is a moderated group discussion on a pre-defined topic, for research purposes. 28 , 29 While not aligned to a particular qualitative methodology (for example, grounded theory or phenomenology) as such, focus groups are used increasingly in healthcare research, as they are useful for exploring collective perspectives, attitudes, behaviours and experiences. Consequently, they can yield rich, in-depth data and illuminate agreement and inconsistencies 28 within and, where appropriate, between groups. Examples include public perceptions of dental implants and subsequent impact on help-seeking and decision making, 30 and general dental practitioners' views on patient safety in dentistry. 31

Focus groups can be used alone or in conjunction with other methods, such as interviews or observations, and can therefore help to confirm, extend or enrich understanding and provide alternative insights. 28 The social interaction between participants often results in lively discussion and can therefore facilitate the collection of rich, meaningful data. However, they are complex to organise and manage, due to the number of participants, and may also be inappropriate for exploring particularly sensitive issues that many participants may feel uncomfortable about discussing in a group environment.

Focus groups are primarily undertaken face-to-face but can now also be undertaken online, using appropriate technologies such as email, bulletin boards, online research communities, chat rooms, discussion forums, social media and video conferencing. 32 Using such technologies, data collection can also be synchronous (for example, online discussions in 'real time') or, unlike traditional face-to-face focus groups, asynchronous (for example, online/email discussions in 'non-real time'). While many of the fundamental principles of focus group research are the same, regardless of how the group is conducted, a number of subtle nuances are associated with the online medium, 32 some of which are discussed in the following sections.

Focus group considerations

Some key considerations associated with face-to-face focus groups are how many participants are required, whether participants within each group should know each other, and how many focus groups are needed within a single study. These issues are much debated and there is no definitive answer. However, the number of focus groups required will largely depend on the topic area, the depth and breadth of data needed, the desired level of participation required 29 and the necessity (or not) for data saturation.

The optimum group size is around six to eight participants (excluding researchers) but can work effectively with between three and 14 participants. 3 If the group is too small, it may limit discussion, but if it is too large, it may become disorganised and difficult to manage. It is, however, prudent to over-recruit for a focus group by approximately two to three participants, to allow for potential non-attenders. For many researchers, particularly novice researchers, group size may also be informed by pragmatic considerations, such as the type of study, resources available and moderator experience. 28 Similar size and mix considerations exist for online focus groups. Typically, synchronous online focus groups will have around three to eight participants but, as the discussion does not happen simultaneously, asynchronous groups may have as many as 10–30 participants. 33

The topic area and potential group interaction should guide group composition considerations. Pre-existing groups, where participants know each other (for example, work colleagues) may be easier to recruit, have shared experiences and may enjoy a familiarity, which facilitates discussion and/or the ability to challenge each other courteously. 3 However, if there is a potential power imbalance within the group or if existing group norms and hierarchies may adversely affect the ability of participants to speak freely, then 'stranger groups' (that is, where participants do not already know each other) may be more appropriate. 34 , 35

Focus group management

Face-to-face focus groups should normally be conducted by two researchers: a moderator and an observer. 28 The moderator facilitates group discussion, while the observer typically monitors group dynamics, behaviours, non-verbal cues, seating arrangements and speaking order, which is essential for transcription and analysis. The same principles of informed consent, as discussed in the interview section, also apply to focus groups, regardless of medium. However, the consent process for online discussions will probably be managed somewhat differently. For example, while an appropriate participant information leaflet (and consent form) would still be required, the process is likely to be managed electronically (for example, via email) and would need to specifically address issues relating to technology (for example, anonymity and the use, storage of and access to online data). 32

The venue in which a face-to-face focus group is conducted should be of a suitable size, private, quiet, free from distractions and in a collectively convenient location. It should also be conducted at a time appropriate for participants, 28 as this is likely to promote attendance. As with interviews, the same ethical considerations apply (as discussed earlier). However, online focus groups may present additional ethical challenges associated with issues such as informed consent, appropriate access and secure data storage. Further guidance can be found elsewhere. 8 , 32

Before the focus group commences, the researchers should establish rapport with participants, as this will help to put them at ease and result in a more meaningful discussion. Consequently, researchers should introduce themselves, provide further clarity about the study and how the process will work in practice and outline the 'ground rules'. Ground rules are designed to assist, not hinder, group discussion and typically include: 3 , 28 , 29

Discussions within the group are confidential to the group

Only one person can speak at a time

All participants should have sufficient opportunity to contribute

There should be no unnecessary interruptions while someone is speaking

Everyone can expect to be listened to and to have their views respected

Challenging contrary opinions is appropriate, but ridiculing is not.

Moderating a focus group requires considered management and good interpersonal skills to help guide the discussion and, where appropriate, keep it sufficiently focused. Moderators should therefore avoid participating in the discussion, leading it, expressing personal opinions or correcting participants' knowledge, 3 , 28 as this may bias the process. A relaxed, interested demeanour will also help participants to feel comfortable and promote candid discourse. Moderators should also prevent the discussion being dominated by any one person, ensure differences of opinion are discussed fairly and, if required, encourage reticent participants to contribute. 3 Asking open questions, reflecting on significant issues, inviting further debate, probing responses and seeking clarification, as and where appropriate, will help to obtain sufficient depth and insight into the topic area.

Moderating online focus groups requires comparable skills, particularly if the discussion is synchronous, as it may be dominated by those who can type proficiently. 36 It is therefore important that sufficient time and respect are accorded to those who may not be able to type as quickly. Asynchronous discussions are usually less problematic in this respect, as interactions are less instant. However, moderating an asynchronous discussion presents additional challenges, particularly if participants are geographically dispersed, as they may be online at different times. Consequently, the moderator will not always be present, and the discussion may need to occur over several days, which can be difficult to manage and facilitate and invariably requires considerable flexibility. 32 It is also worth recognising that establishing rapport with participants via an online medium is often more challenging than face-to-face and may therefore require additional time, skill, effort and consideration.

As with research interviews, focus groups should be guided by an appropriate interview schedule, as discussed earlier in the paper. The schedule will usually be informed by the review of the literature and the study aims, and provides a topic guide to help inform subsequent discussions. To provide a verbatim account of the discussion, focus groups must be recorded, using an audio-recorder with a good quality multi-directional microphone. While videotaping is possible, some participants may find it obtrusive, 3 which may adversely affect group dynamics. The use (or not) of a video recorder should therefore be carefully considered.

At the end of the focus group, a few minutes should be spent rounding up and reflecting on the discussion. 28 Depending on the topic area, it is possible that some participants may have revealed deeply personal issues and may therefore require further help and support, such as a constructive debrief or possibly even referral on to a relevant third party. It is also possible that some participants may feel that the discussion did not adequately reflect their views and, consequently, may no longer wish to be associated with the study. 28 Such occurrences are likely to be uncommon, but should they arise, it is important to further discuss any concerns and, if appropriate, offer them the opportunity to withdraw (including any data relating to them) from the study. Immediately after the discussion, researchers should compile notes regarding thoughts and ideas about the focus group, which can assist with data analysis and, if appropriate, any further data collection.

Qualitative research is increasingly being utilised within dental research to explore the experiences, perspectives, motivations and beliefs of participants. Its contributions to evidence-based practice are increasingly being recognised, both as standalone research and as part of larger mixed-method studies, including clinical trials. Interviews and focus groups remain commonly used data collection methods in qualitative research, and with the advent of digital technologies, their utilisation continues to evolve. Digital methods of qualitative data collection present additional methodological, ethical and practical considerations, but they also potentially offer considerable flexibility to participants and researchers. Consequently, regardless of format, qualitative methods have significant potential to inform important areas of dental practice, policy and further related research.

References

1. Gussy M, Dickson-Swift V, Adams J. A scoping review of qualitative research in peer-reviewed dental publications. Int J Dent Hygiene 2013; 11: 174–179.

2. Burnard P, Gill P, Stewart K, Treasure E, Chadwick B. Analysing and presenting qualitative data. Br Dent J 2008; 204: 429–432.

3. Gill P, Stewart K, Treasure E, Chadwick B. Methods of data collection in qualitative research: interviews and focus groups. Br Dent J 2008; 204: 291–295.

4. Gill P, Stewart K, Treasure E, Chadwick B. Conducting qualitative interviews with school children in dental research. Br Dent J 2008; 204: 371–374.

5. Stewart K, Gill P, Chadwick B, Treasure E. Qualitative research in dentistry. Br Dent J 2008; 204: 235–239.

6. Masood M, Thaliath E, Bower E, Newton J. An appraisal of the quality of published qualitative dental research. Community Dent Oral Epidemiol 2011; 39: 193–203.

7. Ellis J, Levine A, Bedos C et al. Refusal of implant supported mandibular overdentures by elderly patients. Gerodontology 2011; 28: 62–68.

8. Macfarlane S, Bucknall T. Digital Technologies in Research. In Gerrish K, Lathlean J (editors) The Research Process in Nursing. 7th edition. pp 71–86. Oxford: Wiley Blackwell, 2015.

9. Lee R, Fielding N, Blank G. Online Research Methods in the Social Sciences: An Editorial Introduction. In Fielding N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods. pp 3–16. London: Sage Publications, 2016.

10. Creswell J. Qualitative Inquiry and Research Design: Choosing Among Five Designs. Thousand Oaks, CA: Sage, 1998.

11. Guest G, Namey E, Mitchell M. Qualitative research: defining and designing. In Guest G, Namey E, Mitchell M (editors) Collecting Qualitative Data: A Field Manual for Applied Research. pp 1–40. London: Sage Publications, 2013.

12. Pope C, Mays N. Qualitative research: reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995; 311: 42–45.

13. Giddings L, Grant B. A Trojan horse for positivism? A critique of mixed methods research. Adv Nurs Sci 2007; 30: 52–60.

14. Hammersley M, Atkinson P. Ethnography: Principles in Practice. London: Routledge, 1995.

15. Oltmann S. Qualitative interviews: a methodological discussion of the interviewer and respondent contexts. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research 2016; 17: Art. 15.

16. Patton M. Qualitative Research and Evaluation Methods. Thousand Oaks, CA: Sage, 2002.

17. Wang M, Vinall-Collier K, Csikar J, Douglas G. A qualitative study of patients' views of techniques to reduce dental anxiety. J Dent 2017; 66: 45–51.

18. Lindenmeyer A, Bowyer V, Roscoe J, Dale J, Sutcliffe P. Oral health awareness and care preferences in patients with diabetes: a qualitative study. Fam Pract 2013; 30: 113–118.

19. Gallagher J, Clarke W, Wilson N. Understanding the motivation: a qualitative study of dental students' choice of professional career. Eur J Dent Educ 2008; 12: 89–98.

20. Tod A. Interviewing. In Gerrish K, Lacey A (editors) The Research Process in Nursing. Oxford: Blackwell Publishing, 2006.

21. Grey E, Harcourt D, O'Sullivan D, Buchanan H, Kilpatrick N. A qualitative study of patients' motivations and expectations for dental implants. Br Dent J 2013; 214: doi:10.1038/sj.bdj.2012.1178.

22. Farmer J, Peressini S, Lawrence H. Exploring the role of the dental hygienist in reducing oral health disparities in Canada: a qualitative study. Int J Dent Hygiene 2017; doi:10.1111/idh.12276.

23. McElhinney E, Cheater F, Kidd L. Undertaking qualitative health research in social virtual worlds. J Adv Nurs 2013; 70: 1267–1275.

24. Health Research Authority. UK Policy Framework for Health and Social Care Research. Available at https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/ (accessed September 2017).

25. Baillie J, Gill P, Courtenay P. Knowledge, understanding and experiences of peritonitis among patients, and their families, undertaking peritoneal dialysis: a mixed methods study protocol. J Adv Nurs 2017; doi:10.1111/jan.13400.

26. Kvale S. Interviews. Thousand Oaks, CA: Sage, 1996.

27. Spradley J. The Ethnographic Interview. New York: Holt, Rinehart and Winston, 1979.

28. Goodman C, Evans C. Focus Groups. In Gerrish K, Lathlean J (editors) The Research Process in Nursing. pp 401–412. Oxford: Wiley Blackwell, 2015.

29. Shaha M, Wenzell J, Hill E. Planning and conducting focus group research with nurses. Nurse Res 2011; 18: 77–87.

30. Wang G, Gao X, Edward C. Public perception of dental implants: a qualitative study. J Dent 2015; 43: 798–805.

31. Bailey E. Contemporary views of dental practitioners on patient safety. Br Dent J 2015; 219: 535–540.

32. Abrams K, Gaiser T. Online Focus Groups. In Fielding N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods. pp 435–450. London: Sage Publications, 2016.

33. Poynter R. The Handbook of Online and Social Media Research. West Sussex: John Wiley & Sons, 2010.

34. Kevern J, Webb C. Focus groups as a tool for critical social research in nurse education. Nurse Educ Today 2001; 21: 323–333.

35. Kitzinger J, Barbour R. Introduction: the challenge and promise of focus groups. In Barbour R, Kitzinger J (editors) Developing Focus Group Research. pp 1–20. London: Sage Publications, 1999.

36. Krueger R, Casey M. Focus Groups: A Practical Guide for Applied Research. 4th edition. Thousand Oaks, CA: Sage, 2009.


Author information

Authors and affiliations: Senior Lecturer (Adult Nursing), School of Healthcare Sciences, Cardiff University; Lecturer (Adult Nursing) and RCBC Wales Postdoctoral Research Fellow, School of Healthcare Sciences, Cardiff University.

Correspondence to P. Gill.


About this article

Gill, P., Baillie, J. Interviews and focus groups in qualitative research: an update for the digital age. Br Dent J 225, 668–672 (2018). https://doi.org/10.1038/sj.bdj.2018.815

Accepted: 02 July 2018. Published: 05 October 2018. Issue date: 12 October 2018.




  • Systematic review
  • Open access
  • Published: 07 August 2024

Models and frameworks for assessing the implementation of clinical practice guidelines: a systematic review

  • Nicole Freitas de Mello (ORCID: orcid.org/0000-0002-5228-6691) 1,2,
  • Sarah Nascimento Silva (ORCID: orcid.org/0000-0002-1087-9819) 3,
  • Dalila Fernandes Gomes (ORCID: orcid.org/0000-0002-2864-0806) 1,2,
  • Juliana da Motta Girardi (ORCID: orcid.org/0000-0002-7547-7722) 4 &
  • Jorge Otávio Maia Barreto (ORCID: orcid.org/0000-0002-7648-0472) 2,4

Implementation Science volume 19, Article number: 59 (2024)

Background

The implementation of clinical practice guidelines (CPGs) is a cyclical process in which the evaluation stage can facilitate continuous improvement. Implementation science has utilized theoretical approaches, such as models and frameworks, to understand and address this process. This article aims to provide a comprehensive overview of the models and frameworks used to assess the implementation of CPGs.

Methods

A systematic review was conducted following the Cochrane methodology, with adaptations to the "selection process" due to the unique nature of this review. The findings were reported following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) reporting guidelines. Electronic databases were searched from their inception until May 15, 2023. A predetermined strategy and manual searches were conducted to identify relevant documents from health institutions worldwide. Eligible studies presented models and frameworks for assessing the implementation of CPGs. Information on the characteristics of the documents, the context in which the models were used (specific objectives, level of use, type of health service, target group), and the characteristics of each model or framework (name, domain evaluated, and model limitations) were extracted. The domains of the models were analyzed according to the key constructs: strategies, context, outcomes, fidelity, adaptation, sustainability, process, and intervention. A subgroup analysis was performed grouping models and frameworks according to their levels of use (clinical, organizational, and policy) and type of health service (community, ambulatorial, hospital, institutional). The JBI’s critical appraisal tools were utilized by two independent researchers to assess the trustworthiness, relevance, and results of the included studies.

Results

Database searches yielded 14,395 studies, of which 80 full texts were reviewed. Eight studies were included in the data analysis and four methodological guidelines were additionally included from the manual search. The risk of bias in the studies was considered non-critical for the results of this systematic review. A total of ten models/frameworks for assessing the implementation of CPGs were found. The level of use was mainly policy, the most common type of health service was institutional, and the major target group was professionals directly involved in clinical practice. The evaluated domains differed between the models and there were also differences in their conceptualization. All the models addressed the domain "Context", especially at the micro level (8/12), followed by the multilevel (7/12). The domains "Outcome" (9/12), "Intervention" (8/12), "Strategies" (7/12), and "Process" (5/12) were frequently addressed, while "Sustainability" was found only in one study, and "Fidelity/Adaptation" was not observed.

Conclusions

The use of models and frameworks for assessing the implementation of CPGs is still incipient. This systematic review may help stakeholders choose or adapt the most appropriate model or framework to assess CPGs implementation based on their specific health context.

Trial registration

PROSPERO (International Prospective Register of Systematic Reviews) registration number: CRD42022335884. Registered on June 7, 2022.


Contributions to the literature

Although the number of theoretical approaches has grown in recent years, there are still important gaps to be explored in the use of models and frameworks to assess the implementation of clinical practice guidelines (CPGs). This systematic review aims to contribute knowledge to overcome these gaps.

Despite the great advances in implementation science, evaluating the implementation of CPGs remains a challenge, and models and frameworks could support improvements in this field.

This study demonstrates that the available models and frameworks do not cover all characteristics and domains necessary for a complete evaluation of CPGs implementation.

The presented findings contribute to the field of implementation science, encouraging debate on choices and adaptations of models and frameworks for implementation research and evaluation.

Background

Substantial investments have been made in clinical research and development in recent decades, increasing the medical knowledge base and the availability of health technologies [ 1 ]. The use of clinical practice guidelines (CPGs) has increased worldwide to guide best health practices and to maximize healthcare investments. A CPG can be defined as "any formal statements systematically developed to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances" [ 2 ] and has the potential to improve patient care by promoting interventions of proven benefit and discouraging ineffective interventions. Furthermore, they can promote efficiency in resource allocation and provide support for managers and health professionals in decision-making [ 3 , 4 ].

However, having a quality CPG does not guarantee that the expected health benefits will be obtained. In fact, putting these guidelines to use still presents a challenge for most health services across distinct levels of government. In addition to being developed with high methodological rigor, recommendations need to be made available to their users (the diffusion and dissemination stages) and then used in clinical practice (implemented), which usually requires behavioral changes as well as appropriate resources and infrastructure. All these stages form an iterative and complex process called implementation, defined as the process of putting new practices within a setting into use [ 5 , 6 ].

Implementation is a cyclical process, and the evaluation is one of its key stages, which allows continuous improvement of CPGs development and implementation strategies. It consists of verifying whether clinical practice is being performed as recommended (process evaluation or formative evaluation) and whether the expected results and impact are being reached (summative evaluation) [ 7 , 8 , 9 ]. Although the importance of the implementation evaluation stage has been recognized, research on how these guidelines are implemented is scarce [ 10 ]. This paper focused on the process of assessing CPGs implementation.

To understand and improve this complex process, implementation science provides a systematic set of principles and methods to integrate research findings and other evidence-based practices into routine practice and improve the quality and effectiveness of health services and care [ 11 ]. The field of implementation science uses theoretical approaches that have varying degrees of specificity based on the current state of knowledge and are structured based on theories, models, and frameworks [ 5 , 12 , 13 ]. A "Model" is defined as "a simplified depiction of a more complex world with relatively precise assumptions about cause and effect", and a "framework" is defined as "a broad set of constructs that organize concepts and data descriptively without specifying causal relationships" [ 9 ]. Although these concepts are distinct, in this paper, their use will be interchangeable, as they are typically like checklists of factors relevant to various aspects of implementation.

There are a variety of theoretical approaches available in implementation science [ 5 , 14 ], which can make choosing the most appropriate one challenging [ 5 ]. Some models and frameworks have been categorized as "evaluation models" by providing a structure for evaluating implementation endeavors [ 15 ], even though theoretical approaches from other categories can also be applied for evaluation purposes because they specify concepts and constructs that may be operationalized and measured [ 13 ]. Two frameworks that can specify implementation aspects that should be evaluated as part of intervention studies are RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) [ 16 ] and PRECEDE-PROCEED (Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation-Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development) [ 17 ]. Although the number of theoretical approaches has grown in recent years, the use of models and frameworks to evaluate the implementation of guidelines still seems to be a challenge.

This article aims to provide a comprehensive map of the models and frameworks applied to assess the implementation of CPGs. It also aims to inform debate and choices about models and frameworks for the research and evaluation of CPG implementation processes, thereby facilitating the continued development of the field of implementation science and contributing to healthcare policy and practice.

Methods

A systematic review was conducted following the Cochrane methodology [ 18 ], with adaptations to the "selection process" due to the unique nature of this review (details can be found in the respective section). The review protocol was registered in PROSPERO (registration number: CRD42022335884) on June 7, 2022. This report adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 19 ] and a completed checklist is provided in Additional File 1.

Eligibility criteria

The SDMO approach (Types of Studies, Types of Data, Types of Methods, Outcomes) [ 20 ] was utilized in this systematic review, outlined as follows:

Types of studies

All types of studies were considered for inclusion, as the assessment of CPG implementation can benefit from a diverse range of study designs, including randomized clinical trials/experimental studies, scale/tool development, systematic reviews, opinion pieces, qualitative studies, peer-reviewed articles, books, reports, and unpublished theses.

Studies were categorized based on their methodological designs, which guided the synthesis, risk of bias assessment, and presentation of results.

Study protocols and conference abstracts were excluded due to insufficient information for this review.

Types of data

Studies that evaluated the implementation of CPGs either independently or as part of a multifaceted intervention.

Guidelines for evaluating CPG implementation.

Inclusion of CPGs related to any context, clinical area, intervention, and patient characteristics.

No restrictions were placed on publication date or language.

Exclusion criteria

General guidelines were excluded, as this review focused on 'models for evaluating clinical practice guidelines implementation' rather than the guidelines themselves.

Studies that focused solely on implementation determinants as barriers and enablers were excluded, as this review aimed to explore comprehensive models/frameworks.

Studies evaluating programs and policies were excluded.

Studies that only assessed implementation strategies (isolated actions) rather than the implementation process itself were excluded.

Studies that focused solely on the impact or results of implementation (summative evaluation) were excluded.

Types of methods

Not applicable.

Outcomes

All potential models or frameworks for assessing the implementation of CPGs (evaluation models/frameworks) were considered, as well as their characteristics: name; specific objectives; levels of use (clinical, organizational, and policy); health system (public, private, or both); type of health service (community, ambulatorial, hospital, institutional, homecare); domains or outcomes evaluated; type of recommendation evaluated; context; and limitations of the model.

Model was defined as "a deliberated simplification of a phenomenon on a specific aspect" [ 21 ].

Framework was defined as "structure, overview outline, system, or plan consisting of various descriptive categories" [ 21 ].

The following were excluded:

Models or frameworks used solely for the CPG development, dissemination, or implementation phase.

Models/frameworks used solely for assessment processes other than implementation, such as for the development or dissemination phase.

Data sources and literature search

The systematic search was conducted on July 31, 2022 (and updated on May 15, 2023) in the following electronic databases: MEDLINE/PubMed, Centre for Reviews and Dissemination (CRD), the Cochrane Library, Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, Epistemonikos, Global Health, Health Systems Evidence, PDQ-Evidence, PsycINFO, Rx for Change (Canadian Agency for Drugs and Technologies in Health, CADTH), Scopus, Web of Science and Virtual Health Library (VHL). The Google Scholar database was used for the manual selection of studies (first 10 pages).

Additionally, hand searches were performed on the lists of references included in the systematic reviews and citations of the included studies, as well as on the websites of institutions working on CPGs development and implementation: Guidelines International Networks (GIN), National Institute for Health and Care Excellence (NICE; United Kingdom), World Health Organization (WHO), Centers for Disease Control and Prevention (CDC; USA), Institute of Medicine (IOM; USA), Australian Department of Health and Aged Care (ADH), Healthcare Improvement Scotland (SIGN), National Health and Medical Research Council (NHMRC; Australia), Queensland Health, The Joanna Briggs Institute (JBI), Ministry of Health and Social Policy of Spain, Ministry of Health of Brazil and Capes Theses and Dissertations Catalog.

The search strategy combined terms related to "clinical practice guidelines" (practice guidelines, practice guidelines as topic, clinical protocols), "implementation", "assessment" (assessment, evaluation), and "models, framework". The free term "monitoring" was not used because it was regularly related to clinical monitoring and not to implementation monitoring. The search strategies adapted for the electronic databases are presented in an additional file (see Additional file 2).
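Purely as an illustration, the PubMed arm of such a search could be scripted with Biopython's Entrez utilities. The query below is an assumption assembled from the concept groups named above (the authors' exact strategies are in their Additional file 2), and the email address is a placeholder:

```python
# Illustrative sketch only: automating a PubMed search with Biopython.
# The query string is an assumption based on the concept groups in the text.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

query = (
    '("practice guidelines as topic"[MeSH Terms] OR "practice guideline*" '
    'OR "clinical protocol*") '
    "AND (implement*) "
    "AND (assess* OR evaluat*) "
    "AND (model* OR framework*)"
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"Hits: {record['Count']}")
print(record["IdList"][:10])  # first ten PubMed IDs, e.g. for a screening export
```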

Study selection process

The results of the literature search from scientific databases, excluding the CRD database, were imported into Mendeley Reference Management software to remove duplicates. They were then transferred to the Rayyan platform ( https://rayyan.qcri.org ) [ 22 ] for the screening process. Initially, studies related to the "assessment of implementation of the CPG" were selected. The titles were first screened independently by two pairs of reviewers (first selection: four reviewers, NM, JB, SS, and JG; update: a pair of reviewers, NM and DG). The title screening was broad, including all potentially relevant studies on CPG and the implementation process. Following that, the abstracts were independently screened by the same group of reviewers. The abstract screening was more focused, specifically selecting studies that addressed CPG and the evaluation of the implementation process. In the next step, full-text articles were reviewed independently by a pair of reviewers (NM, DG) to identify those that explicitly presented "models" or "frameworks" for assessing the implementation of the CPG. Disagreements regarding the eligibility of studies were resolved through discussion and consensus, and by a third reviewer (JB) when necessary. One reviewer (NM) conducted manual searches, and the inclusion of documents was discussed with the other reviewers.
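As a hedged sketch of the deduplication step (the record layout is an assumption; real RIS/CSV exports from Mendeley would first be parsed into dictionaries), duplicates can be keyed on the DOI where present and on a normalised title otherwise:

```python
# Minimal sketch: duplicate removal of merged database exports before screening.
import re

def normalise(title: str) -> str:
    """Lower-case a title and collapse punctuation/whitespace for matching."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    seen: set[str] = set()
    unique = []
    for rec in records:
        keys = {normalise(rec["title"])}
        if rec.get("doi"):
            keys.add(rec["doi"].lower())
        if keys & seen:          # matches an earlier record on DOI or title
            continue
        seen |= keys
        unique.append(rec)
    return unique

records = [
    {"title": "Models for assessing guideline implementation", "doi": "10.1000/x1"},
    {"title": "Models for Assessing Guideline Implementation.", "doi": None},
    {"title": "An unrelated study", "doi": "10.1000/x2"},
]
print(len(deduplicate(records)))  # 2 - the second record duplicates the first
```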

Risk of bias assessment of studies

The selected studies were independently classified and evaluated according to their methodological designs by two investigators (NM and JG). This review employed JBI’s critical appraisal tools to assess the trustworthiness, relevance and results of the included studies [ 23 ] and these tools are presented in additional files (see Additional file 3 and Additional file 4). Disagreements were resolved by consensus or consultation with the other reviewers. Methodological guidelines and noncomparative and before–after studies were not evaluated because JBI does not have specific tools for assessing these types of documents. Although the studies were assessed for quality, they were not excluded on this basis.

Data extraction

The data was independently extracted by two reviewers (NM, DG) using a Microsoft Excel spreadsheet. Discrepancies were discussed and resolved by consensus. The following information was extracted:

Document characteristics: author; year of publication; title; study design; instrument of evaluation; country; guideline context.

Usage context of the models: specific objectives; level of use (clinical, organizational, and policy); type of health service (community, ambulatorial, hospital, institutional); target group (guideline developers; clinicians; health professionals; health-policy decision-makers; health-care organizations; service managers).

Model and framework characteristics: name, domain evaluated, and model limitations.

The set of information to be extracted, shown in the systematic review protocol, was adjusted to improve the organization of the analysis.
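For illustration, the double-extraction spreadsheet could be reproduced with one row per document. The field names below mirror the variables listed in the text; the example values are hypothetical:

```python
# Minimal sketch of an extraction spreadsheet; the example row is hypothetical.
import csv

FIELDS = [
    # document characteristics
    "author", "year", "title", "study_design", "evaluation_instrument",
    "country", "guideline_context",
    # usage context of the model
    "specific_objectives", "level_of_use", "type_of_health_service", "target_group",
    # model/framework characteristics
    "model_name", "domains_evaluated", "model_limitations",
]

row = {
    "author": "Hypothetical et al.",
    "year": 2020,
    "study_design": "cross-sectional",
    "evaluation_instrument": "survey/questionnaire",
    "level_of_use": "organizational",
    "type_of_health_service": "hospital",
    "target_group": "health professionals",
    "model_name": "CFIR",
    "domains_evaluated": "context; outcomes; process",
}

with open("extraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS, restval="")
    writer.writeheader()
    writer.writerow(row)
```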

The "level of use" refers to the scope of the model used. "Clinical" was considered when the evaluation focused on individual practices, "organizational" when practices were within a health service institution, and "policy" when the evaluation was more systemic and covered different health services or institutions.

The "type of health service" indicated the category of health service where the model/framework was used (or can be used) to assess the implementation of the CPG, related to the complexity of healthcare. "Community" is related to primary health care; "ambulatorial" is related to secondary health care; "hospital" is related to tertiary health care; and "institutional" represented models/frameworks not specific to a particular type of health service.

The "target group" included stakeholders related to the use of the model/framework for evaluating the implementation of the CPG, such as clinicians, health professionals, guideline developers, health policy-makers, health organizations, and service managers.

The category "health system" (public, private, or both) mentioned in the systematic review protocol was not found in the literature obtained and was removed as an extraction variable. Similarly, the variables "type of recommendation evaluated" and "context" were grouped because the same information was included in the "guideline context" section of the study.

Some selected documents presented models or frameworks recognized by the scientific field, including some that were validated. However, some studies adapted the model to their context. Therefore, the domain analysis covered all model or framework domains evaluated by (or suggested for evaluation by) each document analyzed.

Data analysis and synthesis

The results were tabulated using narrative synthesis with an aggregative approach, without meta-analysis, aiming to summarize the documents descriptively for the organization, description, interpretation and explanation of the study findings [ 24 , 25 ].

The model/framework domains evaluated in each document were studied according to Nilsen et al.’s constructs: "strategies", "context", "outcomes", "fidelity", "adaptation" and "sustainability". For this study, "strategies" were described as structured and planned initiatives used to enhance the implementation of clinical practice [ 26 ].

The definition of "context" varies in the literature. Despite that, this review considered it as the set of circumstances or factors surrounding a particular implementation effort, such as organizational support, financial resources, social relations and support, leadership, and organizational culture [ 26 , 27 ]. The domain "context" was subdivided according to the level of health care into "micro" (individual perspective), "meso" (organizational perspective), "macro" (systemic perspective), and "multiple" (when there is an issue involving more than one level of health care).

The "outcomes" domain was related to the results of the implementation process (unlike clinical outcomes) and was stratified according to the following constructs: acceptability, appropriateness, feasibility, adoption, cost, and penetration. All these concepts align with the definitions of Proctor et al. (2011), although we decided to separate "fidelity" and "sustainability" as independent domains similar to Nilsen [ 26 , 28 ].

"Fidelity" and "adaptation" were considered the same domain, as they are complementary pieces of the same issue. In this study, implementation fidelity refers to how closely guidelines are followed as intended by their developers or designers. On the other hand, adaptation involves making changes to the content or delivery of a guideline to better fit the needs of a specific context. The "sustainability" domain was defined as evaluations about the continuation or permanence over time of the CPG implementation.

Additionally, the domain "process" was utilized to address issues related to the implementation process itself, rather than focusing solely on the outcomes of the implementation process, as done by Wang et al. [ 14 ]. Furthermore, the "intervention" domain was introduced to distinguish aspects related to the CPG characteristics that can impact its implementation, such as the complexity of the recommendation.

A subgroup analysis was performed with models and frameworks categorized based on their levels of use (clinical, organizational, and policy) and the type of health service (community, ambulatorial, hospital, institutional) associated with the CPG. The goal is to assist stakeholders (politicians, clinicians, researchers, or others) in selecting the most suitable model for evaluating CPG implementation based on their specific health context.
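A minimal sketch of how such a subgroup analysis and a domain tally might be computed, assuming pandas is available; the three codings below are hypothetical stand-ins, not the review's extraction data:

```python
# Illustrative only: hypothetical codings standing in for the review's data.
from collections import Counter
import pandas as pd

models = pd.DataFrame([
    {"model": "CFIR",   "level": "organizational", "service": "hospital"},
    {"model": "RE-AIM", "level": "clinical",       "service": "ambulatorial"},
    {"model": "PARiHS", "level": "organizational", "service": "hospital"},
])

# Subgroup analysis: models cross-tabulated by level of use and health service
print(pd.crosstab(models["level"], models["service"]))

# Domain tally against Nilsen et al.'s constructs plus the added domains
domains = {
    "CFIR":   {"context", "process", "intervention"},
    "RE-AIM": {"context", "outcomes", "sustainability", "intervention"},
    "PARiHS": {"context", "outcomes", "strategies", "process"},
}
coverage = Counter(d for ds in domains.values() for d in ds)
print(coverage.most_common())  # "context" appears in every model, as in the review
```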

Results

Search results

Database searches yielded 26,011 studies, of which 107 full texts were reviewed. During the full-text review, 99 articles were excluded: 41 studies did not mention a model or framework for assessing the implementation of the CPG, 31 studies evaluated only implementation strategies (isolated actions) rather than the implementation process itself, and 27 articles were not related to the implementation assessment. Therefore, eight studies were included in the data analysis. The updated search did not reveal additional relevant studies. The main reason for study exclusion was that they did not use models or frameworks to assess CPG implementation. Additionally, four methodological guidelines were included from the manual search (Fig.  1 ).

Figure 1: PRISMA diagram. Acronyms: ADH – Australian Department of Health; CINAHL – Cumulative Index to Nursing and Allied Health Literature; CDC – Centers for Disease Control and Prevention; CRD – Centre for Reviews and Dissemination; GIN – Guidelines International Networks; HSE – Health Systems Evidence; IOM – Institute of Medicine; JBI – The Joanna Briggs Institute; MHB – Ministry of Health of Brazil; NICE – National Institute for Health and Care Excellence; NHMRC – National Health and Medical Research Council; MSPS – Ministerio de Sanidad y Política Social (Spain); SIGN – Scottish Intercollegiate Guidelines Network; VHL – Virtual Health Library; WHO – World Health Organization. Reason A: the study evaluated only implementation strategies (isolated actions) rather than the implementation process itself. Reason B: the study did not mention a model or framework for assessing the implementation of the intervention. Reason C: the study was not related to the implementation assessment. Adapted from Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021; 372: n71. https://doi.org/10.1136/bmj.n71

According to the JBI’s critical appraisal tools, the overall assessment of the studies indicates their acceptance for the systematic review.

The cross-sectional studies lacked clear information regarding "confounding factors" or "strategies to address confounding factors". This was understandable given the nature of the study, where such details are not typically included. However, the reviewers did not find this lack of information to be critical, allowing the studies to be included in the review. The results of this methodological quality assessment can be found in an additional file (see Additional file 5).

In the qualitative studies, there was some ambiguity regarding the questions: "Is there a statement locating the researcher culturally or theoretically?" and "Is the influence of the researcher on the research, and vice versa, addressed?". However, the reviewers decided to include the studies and deemed the methodological quality sufficient for the analysis in this article, based on the other information analyzed. The results of this methodological quality assessment can be found in an additional file (see Additional file 6).

Document characteristics (Table 1)

The documents originated from several continents: Australia/Oceania (4/12) [ 31 , 33 , 36 , 37 ], North America (4/12) [ 30 , 32 , 38 , 39 ], Europe (2/12) [ 29 , 35 ] and Asia (2/12) [ 34 , 40 ]. The types of documents were classified as cross-sectional studies (4/12) [ 29 , 32 , 34 , 38 ], methodological guidelines (4/12) [ 33 , 35 , 36 , 37 ], mixed methods studies (3/12) [ 30 , 31 , 39 ] or noncomparative studies (1/12) [ 40 ]. In terms of the instrument of evaluation, most of the documents used a survey/questionnaire (6/12) [ 29 , 30 , 31 , 32 , 34 , 38 ], three (3/12) used qualitative instruments (interviews, group discussions) [ 30 , 31 , 39 ], one used a checklist [ 37 ], one used an audit [ 33 ] and three (3/12) did not define a specific measurement instrument [ 35 , 36 , 40 ].

Considering the clinical areas covered, most studies evaluated the implementation of nonspecific (general) clinical areas [ 29 , 33 , 35 , 36 , 37 , 40 ]. However, some studies focused on specific clinical contexts, such as mental health [ 32 , 38 ], oncology [ 39 ], fall prevention [ 31 ], spinal cord injury [ 30 ], and sexually transmitted infections [ 34 ].

Usage context of the models (Table 1)

Specific objectives

All the studies highlighted the purpose of guiding the process of evaluating the implementation of CPGs, even if they evaluated CPGs from generic or different clinical areas.

Levels of use

The most common level of use of the models/frameworks identified to assess the implementation of CPGs was policy (6/12) [ 33 , 35 , 36 , 37 , 39 , 40 ]. At this level, the model is used in a systematic way to evaluate all the processes involved in CPG implementation, and this level is primarily related to methodological guidelines. This was followed by the organizational level of use (5/12) [ 30 , 31 , 32 , 38 , 39 ], where the model is used to evaluate the implementation of CPGs in a specific institution, considering its specific environment. Finally, the clinical level of use (2/12) [ 29 , 34 ] focuses on individual practice and the factors that can influence the implementation of CPGs by professionals.

Type of health service

Institutional services were predominant (5/12) [ 33 , 35 , 36 , 37 , 40 ] and included methodological guidelines and a study of model development and validation. Hospitals were the second most common type of health service (4/12) [ 29 , 30 , 31 , 34 ], followed by ambulatorial (2/12) [ 32 , 34 ] and community health services (1/12) [ 32 ]. Two studies did not specify which type of health service the assessment addressed [ 38 , 39 ].

Target group

The focus of the target group was professionals directly involved in clinical practice (6/12) [ 29 , 31 , 32 , 34 , 38 , 40 ], namely, health professionals and clinicians. Other less related stakeholders included guideline developers (2/12) [ 39 , 40 ], health policy decision makers (1/12) [ 39 ], and healthcare organizations (1/12) [ 39 ]. The target group was not defined in the methodological guidelines, although all the mentioned stakeholders could be related to these documents.

Model and framework characteristics

Models and frameworks for assessing the implementation of CPGs

The Consolidated Framework for Implementation Research (CFIR) [ 31 , 38 ] and the Promoting Action on Research Implementation in Health Systems (PARiHS) framework [ 29 , 30 ] were the most commonly employed frameworks within the selected documents. The other models mentioned were: Goal commitment and implementation of practice guidelines framework [ 32 ]; Guideline to identify key indicators [ 35 ]; Guideline implementation checklist [ 37 ]; Guideline implementation evaluation tool [ 40 ]; JBI Implementation Framework [ 33 ]; Reach, effectiveness, adoption, implementation and maintenance (RE-AIM) framework [ 34 ]; The Guideline Implementability Framework [ 39 ] and an unnamed model [ 36 ].

Domains evaluated

The number of domains evaluated (or suggested for evaluation) by the documents varied between three and five, with the majority focusing on three domains. All the models addressed the domain "context", with a particular emphasis on the micro level of the health care context (8/12) [ 29 , 31 , 34 , 35 , 36 , 37 , 38 , 39 ], followed by the multilevel (7/12) [ 29 , 31 , 32 , 33 , 38 , 39 , 40 ], meso level (4/12) [ 30 , 35 , 39 , 40 ] and macro level (2/12) [ 37 , 39 ]. The "Outcome" domain was evaluated in nine models. Within this domain, the most frequently evaluated subdomain was "adoption" (6/12) [ 29 , 32 , 34 , 35 , 36 , 37 ], followed by "acceptability" (4/12) [ 30 , 32 , 35 , 39 ], "appropriateness" (3/12) [ 32 , 34 , 36 ], "feasibility" (3/12) [ 29 , 32 , 36 ], "cost" (1/12) [ 35 ] and "penetration" (1/12) [ 34 ]. Regarding the other domains, "Intervention" (8/12) [ 29 , 31 , 34 , 35 , 36 , 38 , 39 , 40 ], "Strategies" (7/12) [ 29 , 30 , 33 , 35 , 36 , 37 , 40 ] and "Process" (5/12) [ 29 , 31 , 32 , 33 , 38 ] were frequently addressed in the models, while "Sustainability" (1/12) [ 34 ] was only found in one model, and "Fidelity/Adaptation" was not observed. The domains presented by the models and frameworks and evaluated in the documents are shown in Table  2 .

Limitations of the models

Only two documents mentioned limitations in the use of the model or framework. Both reported limitations of CFIR: it "is complex and cumbersome and requires tailoring of the key variables to the specific context", and "this framework should be supplemented with other important factors and local features to achieve a sound basis for the planning and realization of an ongoing project" [ 31 , 38 ]. Limitations in the use of the other models or frameworks were not reported.

Subgroup analysis

Following the subgroup analysis (Table  3 ), five different models/frameworks were utilized at the policy level by institutional health services. These included the Guideline Implementation Evaluation Tool [ 40 ], the NHMRC tool (model name not defined) [ 36 ], the JBI Implementation Framework + GRiP [ 33 ], Guideline to identify key indicators [ 35 ], and the Guideline implementation checklist [ 37 ]. Additionally, the "Guideline Implementability Framework" [ 39 ] was implemented at the policy level without restrictions based on the type of health service. Regarding the organizational level, the models used varied depending on the type of service. The "Goal commitment and implementation of practice guidelines framework" [ 32 ] was applied in community and ambulatory health services, while "PARiHS" [ 29 , 30 ] and "CFIR" [ 31 , 38 ] were utilized in hospitals. In contexts where the type of health service was not defined, "CFIR" [ 31 , 38 ] and "The Guideline Implementability Framework" [ 39 ] were employed. Lastly, at the clinical level, "RE-AIM" [ 34 ] was utilized in ambulatory and hospital services, and PARiHS [ 29 , 30 ] was specifically used in hospital services.

Discussion

Key findings

This systematic review identified 10 models/frameworks used to assess the implementation of CPGs in various health system contexts. These documents shared similar objectives in utilizing models and frameworks for assessment. The primary level of use was policy, the most common type of health service was institutional, and the main target group of the documents was professionals directly involved in clinical practice. The models and frameworks presented varied analytical domains, with sometimes divergent concepts used in these domains. This study is innovative in its emphasis on the evaluation stage of CPG implementation and in summarizing aspects and domains aimed at the practical application of these models.

The small number of documents contrasts with studies that present an extensive range of models and frameworks available in implementation science. The findings suggest that the use of models and frameworks to evaluate the implementation of CPGs is still in its early stages. Among the selected documents, there was a predominance of cross-sectional studies and methodological guidelines, which strongly influenced how the implementation evaluation was conducted. This was primarily done through surveys/questionnaires, qualitative methods (interviews, group discussions), and non-specific measurement instruments. Regarding the subject areas evaluated, most studies focused on a general clinical area, while others explored different clinical areas. This suggests that the evaluation of CPG implementation has been carried out in various contexts.

The models were chosen independently of the categories proposed in the literature; models originally categorized for purposes other than implementation evaluation, such as CFIR and PARiHS, were nonetheless used for evaluation. This practice was described by Nilsen et al., who suggested that models and frameworks from other categories can also be applied for evaluation purposes because they specify concepts and constructs that may be operationalized and measured [ 14 , 15 , 42 , 43 ].

The results highlight the increased use of models and frameworks in evaluation processes at the policy level and in institutional environments, followed by the organizational level in hospital settings. This finding contradicts a review that reported the policy level as an area that was not as well studied [ 44 ]. The use of different models at the institutional level is also emphasized in the subgroup analysis. This may suggest that the greater the impact (social, financial/economic, and organizational) of implementing CPGs, the greater the interest and need to establish well-defined and robust processes. In this context, the evaluation stage stands out as crucial, and the investment of resources and efforts to structure this stage becomes even more advantageous [ 10 , 45 ]. Two studies (16.7%) evaluated the implementation of CPGs at the individual level (clinical level). These studies stand out for their potential to analyze variations in clinical practice in greater depth.

In contrast to the predominantly systemic level of use and type of health service indicated in the documents, the target group most often observed was professionals directly involved in clinical practice. This suggests an emphasis on evaluating individual behaviors. The same emphasis is observed in the analysis of the models, in which there is a predominance of evaluating the micro level of the health context and the "adoption" subdomain, in contrast with the underuse of domains such as "cost" and "process". Cassetti et al. observed the same phenomenon in their review: studies evaluating the implementation of CPGs mainly adopted a behavioral change approach to tackle those issues, without considering the influence of wider social determinants of health [ 10 ]. However, the literature widely reiterates that multiple factors impact the implementation of CPGs and that different actions are required to make them effective [ 6 , 46 , 47 ]. As a result, there is enormous potential for the development and adaptation of models and frameworks aimed at more systemic evaluation processes that consider institutional and organizational aspects.

In analyzing the model domains, most models focused on evaluating only some aspects of implementation (three domains). All models evaluated the "context", highlighting its significant influence on implementation [ 9 , 26 ]. Context is an essential effect modifier for providing research evidence to guide decisions on implementation strategies [ 48 ]. Contextualizing a guideline involves integrating research or other evidence into a specific circumstance [ 49 ]. The analysis of this domain was adjusted to include all possible contextual aspects, even if they were initially allocated to other domains. Some contextual aspects presented by the models vary in comprehensiveness, such as the assessment of the "timing and nature of stakeholder engagement" [ 39 ], which includes individual engagement by healthcare professionals and organizational involvement in CPG implementation. While the importance of context is universally recognized, its conceptualization and interpretation differ across studies and models. This divergence is also evident in other domains, consistent with existing literature [ 14 ]. Efforts to address this conceptual divergence in implementation science are ongoing, but further research and development are needed in this field [ 26 ].

The main subdomain evaluated was "adoption" within the outcome domain. This may be attributed to the ease of accessing information on the adoption of the CPG, whether through computerized system records, patient records, or self-reports from healthcare professionals or patients themselves. The "acceptability" subdomain pertains to the perception among implementation stakeholders that a particular CPG is agreeable, palatable or satisfactory. On the other hand, "appropriateness" encompasses the perceived fit, relevance or compatibility of the CPG for a specific practice setting, provider, or consumer, or its perceived fit to address a particular issue or problem [ 26 ]. Both subdomains are subjective and rely on stakeholders' interpretations and perceptions of the issue being analyzed, making them susceptible to reporting biases. Moreover, obtaining this information requires direct consultation with stakeholders, which can be challenging for some evaluation processes, particularly in institutional contexts.

The evaluation of the subdomains "feasibility" (the extent to which a CPG can be successfully used or carried out within a given agency or setting), "cost" (the cost impact of an implementation effort), and "penetration" (the extent to which an intervention or treatment is integrated within a service setting and its subsystems) [ 26 ] was rarely observed in the documents. This may be related to the greater complexity of obtaining information on these aspects, as they involve cross-cutting and multifactorial issues. In other words, it would be difficult to gather this information during evaluations with health practitioners as the target group. This highlights the need for evaluation processes of CPGs implementation involving multiple stakeholders, even if the evaluation is adjusted for each of these groups.

Although the models do not establish the "intervention" domain, we thought it pertinent in this study to delimit the issues that are intrinsic to CPGs, such as methodological quality or clarity in establishing recommendations. These issues were quite common in the models evaluated but were considered in other domains (e.g., in "context"). Studies have reported the importance of evaluating these issues intrinsic to CPGs [ 47 , 50 ] and their influence on the implementation process [ 51 ].

The models explicitly present the "strategies" domain, and its evaluation was usually included in the assessments. This is likely due to the expansion of scientific and practical studies in implementation science that involve theoretical approaches to the development and application of interventions to improve the implementation of evidence-based practices. However, these interventions themselves are not guaranteed to be effective, as reported in a previous review that found unclear results as to whether the strategies affected successful implementation [ 52 ]. Furthermore, model domains end up not covering all the complexity surrounding the strategies and their development and implementation process. For example, the ‘Guideline implementation evaluation tool’ evaluates whether guideline developers have designed and provided auxiliary tools to promote the implementation of guidelines [ 40 ], but this does not mean that these tools would work as expected.

The "process" domain was identified in the CFIR [ 31 , 38 ], JBI/GRiP [ 33 ], and PARiHS [ 29 ] frameworks. While it may be included in other domains of analysis, its distinct separation is crucial for defining operational issues when assessing the implementation process, such as determining if and how the use of the mentioned CPG was evaluated [ 3 ]. Despite its presence in multiple models, there is still limited detail in the evaluation guidelines, which makes it difficult to operationalize the concept. Further research is needed to better define the "process" domain and its connections and boundaries with other domains.

The domain of "sustainability" was only observed in the RE-AIM framework, which is categorized as an evaluation framework [ 34 ]. In its acronym, the letter M stands for "maintenance" and corresponds to the assessment of whether the user maintains use, typically longer than 6 months. The presence of this domain highlights the need for continuous evaluation of CPGs implementation in the short, medium, and long term. Although the RE-AIM framework includes this domain, it was not used in the questionnaire developed in the study. One probable reason is that the evaluation of CPGs implementation is still conducted on a one-off basis and not as a continuous improvement process. Considering that changes in clinical practices are inherent over time, evaluating and monitoring changes throughout the duration of the CPG could be an important strategy for ensuring its implementation. This is an emerging field that requires additional investment and research.

The "Fidelity/Adaptation" domain was not observed in the models. These emerging concepts involve the extent to which a CPG is being conducted exactly as planned or whether it is undergoing adjustments and adaptations. Whether or not there is fidelity or adaptation in the implementation of CPGs does not presuppose greater or lesser effectiveness; after all, some adaptations may be necessary to implement general CPGs in specific contexts. The absence of this domain in all the models and frameworks may suggest that they are not relevant aspects for evaluating implementation or that there is a lack of knowledge of these complex concepts. This may suggest difficulty in expressing concepts in specific evaluative questions. However, further studies are warranted to determine the comprehensiveness of these concepts.

It is important to note the customization of the domains of analysis: some domains presented in the models were not evaluated in the studies, while others were added as complements. This can be seen in Jeong et al. [ 34 ], where an "intervention" domain was added to the evaluation with the RE-AIM framework, reinforcing that theoretical approaches aim to guide the process rather than prescribe norms. Despite this, few limitations were reported for the models, suggesting that these studies applied the models to defined contexts without a deep critical analysis of their domains.

Limitations

This review has several limitations. First, only a few studies and methodological guidelines that explicitly present models and frameworks for assessing the implementation of CPGs were found, which means that few alternative models could be analyzed and presented in this review. Second, this review adopted multiple analytical categories (e.g., level of use, health service, target group, and domains evaluated) whose terminology varied enormously across the selected studies and documents, especially for the "domains evaluated" category. This difficulty in harmonizing the taxonomy used in the area has already been reported [ 26 ] and has significant potential to confuse. For this reason, studies and initiatives are needed to align understandings of these concepts and, as far as possible, standardize them. Third, in some studies/documents, the extracted information did not map clearly onto an analytical category. This required an in-depth interpretative process, which was conducted in pairs to avoid inappropriate interpretations.

Implications

This study contributes to the literature and clinical practice management by describing models and frameworks specifically used to assess the implementation of CPGs based on their level of use, type of health service, target group related to the CPG, and the evaluated domains. While there are existing reviews on the theories, frameworks, and models used in implementation science, this review addresses aspects not previously covered in the literature. This valuable information can assist stakeholders (such as politicians, clinicians, researchers, etc.) in selecting or adapting the most appropriate model to assess CPG implementation based on their health context. Furthermore, this study is expected to guide future research on developing or adapting models to assess the implementation of CPGs in various contexts.

The use of models and frameworks to evaluate the implementation of CPGs remains a challenge. Studies should clearly state the level of model use, the type of health service evaluated, and the target group. The domains evaluated in these models may need adaptation to specific contexts. Nevertheless, utilizing models to assess CPG implementation is crucial, as they can guide a more thorough and systematic evaluation process, aiding in the continuous improvement of CPG implementation. The findings of this systematic review offer valuable insights for stakeholders in selecting or adjusting models and frameworks for CPG evaluation, supporting future theoretical advancements and research.

Availability of data and materials

Abbreviations

Australian Department of Health and Aged Care
CADTH: Canadian Agency for Drugs and Technologies in Health
CDC: Centers for Disease Control and Prevention
CFIR: Consolidated Framework for Implementation Research
CINAHL: Cumulative Index to Nursing and Allied Health Literature
CPG: Clinical practice guideline
CRD: Centre for Reviews and Dissemination
GIN: Guidelines International Networks
GRiP: Getting Research into Practice
HSE: Health Systems Evidence
IOM: Institute of Medicine
JBI: The Joanna Briggs Institute
Ministry of Health of Brazil
Ministerio de Sanidad y Política Social
NHMRC: National Health and Medical Research Council
NICE: National Institute for Health and Care Excellence
PARiHS: Promoting action on research implementation in health systems framework
PRECEDE-PROCEED: Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation-Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PROSPERO: International Prospective Register of Systematic Reviews
RE-AIM: Reach, effectiveness, adoption, implementation, and maintenance framework
Healthcare Improvement Scotland
USA: United States of America
VHL: Virtual Health Library
WHO: World Health Organization

References

Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. 2001. Available from: http://www.nap.edu/catalog/10027 . Cited 2022 Sep 29.

Field MJ, Lohr KN. Clinical Practice Guidelines: Directions for a New Program. Washington DC: National Academy Press. 1990. Available from: https://www.nap.edu/read/1626/chapter/8 Cited 2020 Sep 2.

Dawson A, Henriksen B, Cortvriend P. Guideline Implementation in Standardized Office Workflows and Exam Types. J Prim Care Community Heal. 2019;10. Available from: https://pubmed.ncbi.nlm.nih.gov/30900500/ . Cited 2020 Jul 15.

Unverzagt S, Oemler M, Braun K, Klement A. Strategies for guideline implementation in primary care focusing on patients with cardiovascular disease: a systematic review. Fam Pract. 2014;31(3):247–66. Available from: https://academic.oup.com/fampra/article/31/3/247/608680 . Cited 2020 Nov 5.

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):1–13. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-015-0242-0 . Cited 2022 May 1.

Mangana F, Massaquoi LD, Moudachirou R, Harrison R, Kaluangila T, Mucinya G, et al. Impact of the implementation of new guidelines on the management of patients with HIV infection at an advanced HIV clinic in Kinshasa, Democratic Republic of Congo (DRC). BMC Infect Dis. 2020;20(1). Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=146325052&amp .

Browman GP, Levine MN, Mohide EA, Hayward RSA, Pritchard KI, Gafni A, et al. The practice guidelines development cycle: a conceptual tool for practice guidelines development and implementation. J Clin Oncol. 1995;13(2):502–12. https://doi.org/10.1200/JCO.1995.13.2.502 .

Killeen SL, Donnellan N, O’Reilly SL, Hanson MA, Rosser ML, Medina VP, et al. Using FIGO Nutrition Checklist counselling in pregnancy: A review to support healthcare professionals. Int J Gynecol Obstet. 2023;160(S1):10–21. Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85146194829&doi=10.1002%2Fijgo.14539&partnerID=40&md5=d0f14e1f6d77d53e719986e6f434498f .

Bauer MS, Damschroder L, Hagedorn H, Smith J, Kilbourne AM. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(1):1–12. Available from: https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-015-0089-9 . Cited 2020 Nov 5.

Cassetti V, Pola-García M, et al. An integrative review of the implementation of public health guidelines. Prev Med Rep. 2022;29:101867. Available from: http://www.epistemonikos.org/documents/7ad499d8f0eecb964fc1e2c86b11450cbe792a39 .

Eccles MP, Mittman BS. Welcome to implementation science. Implementation Science BioMed Central. 2006. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-1-1 .

Damschroder LJ. Clarity out of chaos: Use of theory in implementation research. Psychiatry Res. 2020;283:112461.

Handley MA, Gorukanti A, Cattamanchi A. Strategies for implementing implementation science: a methodological overview. Emerg Med J. 2016;33(9):660–4. Available from: https://pubmed.ncbi.nlm.nih.gov/26893401/ . Cited 2022 Mar 7.

Wang Y, Wong ELY, Nilsen P, Chung VCH, Tian Y, Yeoh EK. A scoping review of implementation science theories, models, and frameworks — an appraisal of purpose, characteristics, usability, applicability, and testability. Implement Sci. 2023;18(1):1–15. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-023-01296-x . Cited 2024 Jan 22.

Moullin JC, Dickson KS, Stadnick NA, Albers B, Nilsen P, Broder-Fingert S, et al. Ten recommendations for using implementation frameworks in research and practice. Implement Sci Commun. 2020;1(1):1–12. Available from: https://implementationsciencecomms.biomedcentral.com/articles/10.1186/s43058-020-00023-7 . Cited 2022 May 20.

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322. Available from: /pmc/articles/PMC1508772/?report=abstract . Cited 2022 May 22.

Asada Y, Lin S, Siegel L, Kong A. Facilitators and Barriers to Implementation and Sustainability of Nutrition and Physical Activity Interventions in Early Childcare Settings: a Systematic Review. Prev Sci. 2023;24(1):64–83. Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85139519721&doi=10.1007%2Fs11121-022-01436-7&partnerID=40&md5=b3c395fdd2b8235182eee518542ebf2b .

Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions. version 6. Cochrane; 2022. Available from: https://training.cochrane.org/handbook. Cited 2022 May 23.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372. Available from: https://www.bmj.com/content/372/bmj.n71 . Cited 2021 Nov 18.

Higgins JP, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions, Version 5: Appendix A, Guide to the contents of a Cochrane Methodology protocol and review. 2011.

Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):1–8. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-019-0957-4 . Cited 2024 Jan 22.

Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):1–10. Available from: https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-016-0384-4 . Cited 2022 May 20.

JBI. JBI’s Tools Assess Trust, Relevance & Results of Published Papers: Enhancing Evidence Synthesis. Available from: https://jbi.global/critical-appraisal-tools . Cited 2023 Jun 13.

Drisko JW. Qualitative research synthesis: An appreciative and critical introduction. Qual Soc Work. 2020;19(4):736–53.

Pope C, Mays N, Popay J. Synthesising qualitative and quantitative health evidence: A guide to methods. 2007. Available from: https://books.google.com.br/books?hl=pt-PT&lr=&id=L3fbE6oio8kC&oi=fnd&pg=PR6&dq=synthesizing+qualitative+and+quantitative+health+evidence&ots=sfELNUoZGq&sig=bQt5wt7sPKkf7hwKUvxq2Ek-p2Q#v=onepage&q=synthesizing=qualitative=and=quantitative=health=evidence& . Cited 2022 May 22.

Nilsen P, Birken SA, editors. Handbook on Implementation Science. Edward Elgar Publishing; 2020. p. 542. Available from: https://www.e-elgar.com/shop/gbp/handbook-on-implementation-science-9781788975988.html . Cited 2023 Apr 15.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):1–15. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-4-50 . Cited 2023 Jun 13.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. Available from: https://pubmed.ncbi.nlm.nih.gov/20957426/ . Cited 2023 Jun 11.

Bahtsevani C, Willman A, Khalaf A, Östman M. Developing an instrument for evaluating implementation of clinical practice guidelines: a test-retest study. J Eval Clin Pract. 2008;14(5):839–46. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=105569473&amp . Cited 2023 Jan 18.

Balbale SN, Hill JN, Guihan M, Hogan TP, Cameron KA, Goldstein B, et al. Evaluating implementation of methicillin-resistant Staphylococcus aureus (MRSA) prevention guidelines in spinal cord injury centers using the PARIHS framework: a mixed methods study. Implement Sci. 2015;10(1):130. Available from: https://pubmed.ncbi.nlm.nih.gov/26353798/ . Cited 2023 Apr 3.

Breimaier HE, Heckemann B, Halfens RJG, Lohrmann C. The Consolidated Framework for Implementation Research (CFIR): a useful theoretical framework for guiding and evaluating a guideline implementation process in a hospital-based nursing practice. BMC Nurs. 2015;14(1):43. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=109221169&amp . Cited 2023 Apr 3.

Chou AF, Vaughn TE, McCoy KD, Doebbeling BN. Implementation of evidence-based practices: Applying a goal commitment framework. Health Care Manage Rev. 2011;36(1):4–17. Available from: https://pubmed.ncbi.nlm.nih.gov/21157225/ . Cited 2023 Apr 30.

Porritt K, McArthur A, Lockwood C, Munn Z. JBI Manual for Evidence Implementation. JBI Handbook for Evidence Implementation. JBI; 2020. Available from: https://jbi-global-wiki.refined.site/space/JHEI . Cited 2023 Apr 3.

Jeong HJ, Jo HS, Oh MK, Oh HW. Applying the RE-AIM Framework to Evaluate the Dissemination and Implementation of Clinical Practice Guidelines for Sexually Transmitted Infections. J Korean Med Sci. 2015;30(7):847–52. Available from: https://pubmed.ncbi.nlm.nih.gov/26130944/ . Cited 2023 Apr 3.

Grupo de trabajo sobre implementación de GPC. Implementación de Guías de Práctica Clínica en el Sistema Nacional de Salud. Manual Metodológico [Implementation of clinical practice guidelines in the National Health System: methodological manual]. 2009. Available from: https://portal.guiasalud.es/wp-content/uploads/2019/01/manual_implementacion.pdf . Cited 2023 Apr 3.

Commonwealth of Australia. A guide to the development, implementation and evaluation of clinical practice guidelines. National Health and Medical Research Council; 1998. Available from: https://www.health.qld.gov.au/__data/assets/pdf_file/0029/143696/nhmrc_clinprgde.pdf .

Queensland Health. Guideline implementation checklist: translating evidence into best clinical practice. 2022.

Quittner AL, Abbott J, Hussain S, Ong T, Uluer A, Hempstead S, et al. Integration of mental health screening and treatment into cystic fibrosis clinics: Evaluation of initial implementation in 84 programs across the United States. Pediatr Pulmonol. 2020;55(11):2995–3004. Available from: https://www.embase.com/search/results?subaction=viewrecord&id=L2005630887&from=export . Cited 2023 Apr 3.

Urquhart R, Woodside H, Kendell C, Porter GA. Examining the implementation of clinical practice guidelines for the management of adult cancers: A mixed methods study. J Eval Clin Pract. 2019;25(4):656–63. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=137375535&amp . Cited 2023 Apr 3.

Yinghui J, Zhihui Z, Canran H, Flute Y, Yunyun W, Siyu Y, et al. Development and validation of an evaluation tool for guideline implementation. Chinese J Evidence-Based Med. 2022;22(1):111–9. Available from: https://www.embase.com/search/results?subaction=viewrecord&id=L2016924877&from=export .

Breimaier HE, Halfens RJG, Lohrmann C. Effectiveness of multifaceted and tailored strategies to implement a fall-prevention guideline into acute care nursing practice: a before-and-after, mixed-method study using a participatory action research approach. BMC Nurs. 2015;14(1):18. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=103220991&amp .

Lai J, Maher L, Li C, Zhou C, Alelayan H, Fu J, et al. Translation and cross-cultural adaptation of the National Health Service Sustainability Model to the Chinese healthcare context. BMC Nurs. 2023;22(1). Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85153237164&doi=10.1186%2Fs12912-023-01293-x&partnerID=40&md5=0857c3163d25ce85e01363fc3a668654 .

Zhao J, Li X, Yan L, Yu Y, Hu J, Li SA, et al. The use of theories, frameworks, or models in knowledge translation studies in healthcare settings in China: a scoping review protocol. Syst Rev. 2021;10(1):13. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7792291 .

Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50. Available from: https://pubmed.ncbi.nlm.nih.gov/22898128/ . Cited 2023 Apr 4.

Phulkerd S, Lawrence M, Vandevijvere S, Sacks G, Worsley A, Tangcharoensathien V. A review of methods and tools to assess the implementation of government policies to create healthy food environments for preventing obesity and diet-related non-communicable diseases. Implement Sci. 2016;11(1):1–13. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-016-0379-5 . Cited 2022 May 1.

Buss PM, Pellegrini FA. A Saúde e seus Determinantes Sociais. PHYSIS Rev Saúde Coletiva. 2007;17(1):77–93.

Pereira VC, Silva SN, Carvalho VKSS, Zanghelini F, Barreto JOMM. Strategies for the implementation of clinical practice guidelines in public health: an overview of systematic reviews. Heal Res Policy Syst. 2022;20(1):13. Available from: https://health-policy-systems.biomedcentral.com/articles/10.1186/s12961-022-00815-4 . Cited 2022 Feb 21.

Grimshaw J, Eccles M, Tetroe J. Implementing clinical guidelines: current evidence and future implications. J Contin Educ Health Prof. 2004;24 Suppl 1:S31-7. Available from: https://pubmed.ncbi.nlm.nih.gov/15712775/ . Cited 2021 Nov 9.

Lotfi T, Stevens A, Akl EA, Falavigna M, Kredo T, Mathew JL, et al. Getting trustworthy guidelines into the hands of decision-makers and supporting their consideration of contextual factors for implementation globally: recommendation mapping of COVID-19 guidelines. J Clin Epidemiol. 2021;135:182–6. Available from: https://pubmed.ncbi.nlm.nih.gov/33836255/ . Cited 2024 Jan 25.

Lenzer J. Why we can’t trust clinical guidelines. BMJ. 2013;346(7913). Available from: https://pubmed.ncbi.nlm.nih.gov/23771225/ . Cited 2024 Jan 25.

Molino CGRC, Ribeiro E, Romano-Lieber NS, Stein AT, de Melo DO. Methodological quality and transparency of clinical practice guidelines for the pharmacological treatment of non-communicable diseases using the AGREE II instrument: A systematic review protocol. Syst Rev. 2017;6(1):1–6. Available from: https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-017-0621-5 . Cited 2024 Jan 25.

Albers B, Mildon R, Lyon AR, Shlonsky A. Implementation frameworks in child, youth and family services – Results from a scoping review. Child Youth Serv Rev. 2017;1(81):101–16.

Acknowledgements

Not applicable

Funding

This study is supported by the Fundação de Apoio à Pesquisa do Distrito Federal (FAPDF). FAPDF Award Term (TOA) nº 44/2024—FAPDF/SUCTI/COOBE (SEI/GDF – Process 00193–00000404/2024–22). The content in this article is solely the responsibility of the authors and does not necessarily represent the official views of the FAPDF.

Author information

Authors and affiliations

Department of Management and Incorporation of Health Technologies, Ministry of Health of Brazil, Brasília, Federal District, 70058-900, Brazil

Nicole Freitas de Mello & Dalila Fernandes Gomes

Postgraduate Program in Public Health, FS, University of Brasília (UnB), Brasília, Federal District, 70910-900, Brazil

Nicole Freitas de Mello, Dalila Fernandes Gomes & Jorge Otávio Maia Barreto

René Rachou Institute, Oswaldo Cruz Foundation, Belo Horizonte, Minas Gerais, 30190-002, Brazil

Sarah Nascimento Silva

Oswaldo Cruz Foundation - Brasília, Brasília, Federal District, 70904-130, Brazil

Juliana da Motta Girardi & Jorge Otávio Maia Barreto

Contributions

NFM and JOMB conceived the idea and the protocol for this study. NFM conducted the literature search. NFM, SNS, and JMG conducted the data collection, with advice and consensus gathering from JOMB. NFM and JMG assessed the quality of the studies. NFM and DFG conducted the data extraction. NFM performed the analysis and synthesis of the results, with advice and consensus gathering from JOMB. NFM drafted the manuscript. JOMB critically revised the first version of the manuscript. All the authors revised and approved the submitted version.

Corresponding author

Correspondence to Nicole Freitas de Mello.

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: PRISMA checklist. Description of data: Completed PRISMA checklist used for reporting the results of this systematic review.

Additional file 2: Literature search. Description of data: The search strategies adapted for the electronic databases.

Additional file 3: JBI’s critical appraisal tools for cross-sectional studies. Description of data: JBI’s critical appraisal tools to assess the trustworthiness, relevance, and results of the included studies. This is specific for cross-sectional studies.

Additional file 4: JBI’s critical appraisal tools for qualitative studies. Description of data: JBI’s critical appraisal tools to assess the trustworthiness, relevance, and results of the included studies. This is specific for qualitative studies.

Additional file 5: Methodological quality assessment results for cross-sectional studies. Description of data: Methodological quality assessment results for cross-sectional studies using JBI’s critical appraisal tools.

Additional file 6: Methodological quality assessment results for the qualitative studies. Description of data: Methodological quality assessment results for qualitative studies using JBI’s critical appraisal tools.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

Reprints and permissions

About this article

Cite this article.

Freitas de Mello, N., Nascimento Silva, S., Gomes, D.F. et al. Models and frameworks for assessing the implementation of clinical practice guidelines: a systematic review. Implementation Sci 19, 59 (2024). https://doi.org/10.1186/s13012-024-01389-1

Received: 06 February 2024

Accepted: 01 August 2024

Published: 07 August 2024

DOI: https://doi.org/10.1186/s13012-024-01389-1


Keywords: Implementation • Practice guideline • Evidence-Based Practice • Implementation science


Qualitative research can also make use of structured research instruments such as questionnaires.

Open access. Published: 06 August 2024.

Adaptation and validation of the evidence-based practice profile (EBP2) questionnaire in a Norwegian primary healthcare setting

Nils Gunnar Landsverk, Nina Rydland Olsen, Kristine Berg Titlestad, Are Hugo Pripp & Therese Brovold

BMC Medical Education volume 24, Article number: 841 (2024)


Access to valid and reliable instruments is essential in the field of implementation science, where the measurement of factors associated with healthcare professionals’ uptake of EBP is central. The Norwegian version of the Evidence-based practice profile questionnaire (EBP2-N) measures EBP constructs, such as EBP knowledge, confidence, attitudes, and behavior. Despite its potential utility, the EBP2-N requires further validation before being used in a cross-sectional survey targeting different healthcare professionals in Norwegian primary healthcare. This study assessed the content validity, construct validity, and internal consistency of the EBP2-N among Norwegian primary healthcare professionals.

To evaluate the content validity of the EBP2-N, we conducted qualitative individual interviews with eight healthcare professionals in primary healthcare from different disciplines. Qualitative data were analyzed using the “text summary” model, followed by panel group discussions, minor linguistic changes, and a pilot test of the revised version. To evaluate construct validity (structural validity) and internal consistency, we used data from a web-based cross-sectional survey among nurses, assistant nurses, physical therapists, occupational therapists, medical doctors, and other professionals ( n  = 313). Structural validity was tested using a confirmatory factor analysis (CFA) on the original five-factor structure, and Cronbach’s alpha was calculated to assess internal consistency.

The qualitative interviews with primary healthcare professionals indicated that the content of the EBP2-N was perceived to reflect the constructs the instrument intends to measure. However, the interviews revealed concerns regarding the formulation of some items, leading to minor linguistic revisions. In addition, several participants expressed that some of the most specific research terms in the terminology domain felt less relevant to them in clinical practice. The CFA results showed only partial alignment with the original five-factor model, with the following model fit indices: CFI = 0.749, RMSEA = 0.074, and SRMR = 0.075. Cronbach’s alphas ranged between 0.82 and 0.95 for all domains except the Sympathy domain (0.69), indicating good internal consistency in four out of five domains.

The EBP2-N is a suitable instrument for measuring Norwegian primary healthcare professionals’ EBP knowledge, attitudes, confidence, and behavior. Although the EBP2-N seems adequate in its current form, we recommend that future research focus on further assessing the factor structure, evaluating the relevance of the items, and determining the number of items needed.

Registration

Retrospectively registered (prior to data analysis) in OSF Preregistration. Registration DOI: https://doi.org/10.17605/OSF.IO/428RP .

Evidence-based practice (EBP) integrates the best available research evidence with clinical expertise, patient characteristics, and preferences [ 1 ]. The process of EBP is often described as following five steps: ask, search, appraise, integrate, and evaluate [ 1 , 2 ]. Practicing the steps of EBP requires that healthcare professionals hold a set of core competencies [ 3 , 4 ]. A lack of competencies such as EBP knowledge and skills, as well as negative attitudes towards EBP and low self-efficacy, may hinder the implementation of EBP in clinical practice [ 5 , 6 , 7 , 8 , 9 , 10 ]. Measuring EBP competencies may assist organizations in defining performance expectations and directing professional practice toward evidence-based clinical decision-making [ 11 ].

Using well-designed and appropriate measurement instruments in healthcare research is fundamental for gathering precise and pertinent data [ 12 , p. 1]. Access to valid and reliable instruments is also essential in the field of implementation science, where conducting consistent measurements of factors associated with healthcare professionals’ uptake of EBP is central [ 13 ]. Instruments measuring the uptake of EBP should be comprehensive and reflect the multidimensionality of EBP; they should be valid, reliable, and suitable for the population and setting in which they are to be used [ 14 ]. Many instruments measuring different EBP constructs are available today [ 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 ]. However, the quality of these instruments varies, and rigorous validation studies that aim to build upon and further develop existing EBP instruments are necessary [ 13 , 16 ].

The authors of this study conducted a systematic review to summarize the measurement properties of existing instruments measuring healthcare professionals’ EBP attitudes, self-efficacy, and behavior [ 16 ]. This review identified 34 instruments, five of which had been translated into Norwegian [ 23 , 24 , 25 , 26 , 27 ]. Of these five instruments, only the Evidence-based practice profile questionnaire (EBP2) was developed to measure various EBP constructs, such as EBP knowledge, confidence, attitudes, and behavior [ 28 ]. In addition, the EBP2 was developed to be trans-professional [ 28 ]. Although not all measurement properties were supported by high-quality evidence, the review authors concluded that the EBP2 was among the instruments that could be recommended for further use and adaptation across different healthcare disciplines [ 16 ].

The EBP2 was initially developed by McEvoy et al. in 2010 and validated for Australian academics, practitioners, and students from different professions (physiotherapy, podiatry, occupational therapy, medical radiation, nursing, human movement) [ 28 ]. The instrument was later translated into Chinese and Polish and tested among healthcare professionals in these countries [ 29 , 30 , 31 , 32 ]. It was also translated and cross-culturally adapted into Norwegian [ 27 ]. The authors assessed content validity, face validity, internal consistency, test-retest reliability, measurement error, discriminative validity, and structural validity among bachelor students from nursing and social education and health and social workers from a local hospital [ 27 ]. Although the authors established the content validity of the EBP2-Norwegian version (EBP2-N), they recommended further linguistic improvements. Additionally, while they found the EBP2-N valid and reliable for three subscales, the original five-factor model could not be confirmed using confirmatory factor analysis. They therefore recommended further research on the instrument’s measurement properties [ 27 ].

We recognized the need for further assessment of the measurement properties of the EBP2-N before using the instrument in a planned cross-sectional survey targeting physical therapists, occupational therapists, nurses, assistant nurses, and medical doctors working with older people in Norwegian primary healthcare [ 33 ]. As our target population differed from the population studied by Titlestad et al. [ 27 ], the EBP2-N needed to be validated again, assessing content validity, construct validity, and internal consistency [ 12 , p. 152]. The assessment of content validity evaluates whether the content of an instrument is relevant, comprehensive, and understandable for a specific population [ 34 ]. Construct validity, including structural validity and cross-cultural validity, can provide evidence on whether an instrument measures what it intends to measure [ 12 , p. 169]. Furthermore, the degree of interrelatedness among the items (internal consistency) should be assessed when evaluating how the items of a scale are combined [ 35 ]. Our objectives were to comprehensively assess the content validity, structural validity, and internal consistency of the EBP2-N among Norwegian primary healthcare professionals. We hypothesized that the EBP2-N was a valid and reliable instrument suitable for use in Norwegian primary healthcare settings.

Study design

This study was conducted in two phases: Phase 1 comprised a qualitative assessment of the content validity of the EBP2-N, followed by minor linguistic adaptations and a pilot test of the adapted version. Phase 2 comprised an assessment of the structural validity and internal consistency of the EBP2-N based on the results from a web-based cross-sectional survey.

The design and execution of this study adhered to the COSMIN Study Design checklist for patient-reported outcome measurement instruments, as well as the methodology for assessing the content validity of self-reported outcome measures [ 34 , 36 , 37 ]. Furthermore, this paper was guided by the COSMIN Reporting guidelines for studies on measurement properties of patient-reported outcome measures [ 38 ].

Participants and setting

Participants eligible for inclusion in both phases of this study were health personnel working with older people in primary healthcare in Norway, such as physical therapists, occupational therapists, nurses, assistant nurses, and medical doctors. Proficiency in reading and understanding Norwegian was a prerequisite for inclusion. This study is part of a project called FALLPREVENT, a research project that aims to bridge the gap between research and practice in fall prevention in Norway [ 39 ].

Instrument administration

The EBP2-N consists of 58 self-reported items divided into five domains: (1) Relevance (items 1–14), which refers to the value, emphasis, and importance respondents place on EBP; (2) Sympathy (items 15–21), which refers to the perceived compatibility of EBP with professional work; (3) Terminology (items 22–38), which refers to the understanding of common research terms; (4) Practice (items 39–47), which refers to the use of EBP in clinical practice; and (5) Confidence (items 48–58), which relates to respondents’ perception of their EBP skills [ 28 ]. All items are rated on a five-point Likert scale (1 to 5) (see questionnaire in Additional file 1 ). Each domain is summed, with higher scores indicating a higher degree of the construct measured by the domain in question. The items in the Sympathy domain are negatively phrased and need to be reversed before being summed. The possible ranges of summed scores (min-max) per domain are as follows: Relevance (14–70), Sympathy (7–35), Terminology (17–85), Practice (9–45), and Confidence (11–55).
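To make these scoring rules concrete, the sketch below computes the five domain sum scores, reverse-coding the negatively phrased Sympathy items. It is a minimal illustration, not part of the published instrument: the pandas DataFrame layout and the column names (item1–item58) are assumptions introduced here.

```python
import pandas as pd

# Item numbers per domain, as described above (inclusive ranges)
DOMAINS = {
    "Relevance":   range(1, 15),   # items 1-14
    "Sympathy":    range(15, 22),  # items 15-21, negatively phrased
    "Terminology": range(22, 39),  # items 22-38
    "Practice":    range(39, 48),  # items 39-47
    "Confidence":  range(48, 59),  # items 48-58
}

def score_ebp2n(responses: pd.DataFrame) -> pd.DataFrame:
    """Sum the 1-5 Likert responses per domain; Sympathy items are reversed first."""
    scores = {}
    for domain, items in DOMAINS.items():
        block = responses[[f"item{i}" for i in items]]
        if domain == "Sympathy":
            block = 6 - block  # reverse a 1-5 Likert score: 1<->5, 2<->4
        scores[domain] = block.sum(axis=1)
    return pd.DataFrame(scores)
```

With this layout, a respondent answering 3 on every item would score Relevance 42, Sympathy 21 (unchanged by reversal, since 6 − 3 = 3), Terminology 51, Practice 27, and Confidence 33, all within the min-max ranges listed above.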

Phase 1: content validity assessment

Recruitment and participant characteristics

Snowball sampling was used to recruit participants in Eastern Norway, and potentially eligible participants were contacted via managers in healthcare settings. The number of participants needed for the qualitative content validity interviews was based on the COSMIN methodology recommendations and was set to at least seven [ 34 , 37 ]. We recruited and included eight participants. All participants worked with older people in primary healthcare; they included two physical therapists, two occupational therapists, two assistant nurses, one nurse, and one medical doctor. The median age (min-max) was 35 (28–55). Two participants held upper secondary education, four held a bachelor’s degree, and two held a master’s degree. Six participants reported that they had some EBP training from their education or had attended EBP courses, and two had no EBP training.

Qualitative interviews

Before the interviews, a panel of four members (NGL, TB, NRO, and KBT) developed a semi-structured interview guide. Two panel members were EBP experts with extensive experience in EBP research and measurement (NRO and KBT). KBT obtained consent from the developer of the original EBP2 questionnaire and translated the questionnaire into Norwegian in 2013 [ 27 ].

To evaluate the content validity of the EBP2-N for use among different healthcare professionals working in primary healthcare in Norway, we conducted individual interviews with eight healthcare professionals from different disciplines. Topics in the interview guide were guided by the standards of the COSMIN study design checklist and the COSMIN criteria for good content validity, which include questions related to the following three aspects [ 34 , 37 ]: whether the items of the instrument were perceived as relevant (relevance), whether all key concepts were included (comprehensiveness), and whether the instructions, items, and response options were understandable (comprehensibility) [ 34 ]. The interview guide is presented in Additional File 2 . Interview preparations and training included a review of the interview guide and a pilot interview with a physical therapist not included in the study.

Eight interviews were conducted by the first author (NGL) in May and June 2022. All interviews took place at the participants’ workplaces and followed a “think-aloud” method [ 12 , p. 58, 40 , p. 5]. Hence, in the first part of the interview, the participants were asked to complete the questionnaire on paper while saying aloud what they were thinking as they responded. Participants also had to state their choice of answer aloud and make a pen mark on items or response options that either were difficult to understand or did not feel relevant to them. In the second part of the interview, participants were asked to elaborate on why items were marked as difficult to understand or irrelevant, focusing on relevance and comprehensibility. In addition, the participants were asked to give their overall impression of the instrument and to state whether they thought any essential items were missing (comprehensiveness). Only the second part of the interviews was audio-recorded.

Analysis and panel group meetings

After conducting the individual interviews, the first author immediately transcribed the recorded audio data. The next step involved gathering and summarizing participants’ comments into one document comprising the questionnaire instructions, items, and response options. Using the “text summary” model [ 41 , p.61], we summarized the primary “themes” and “problems” identified by participants during the interviews and aligned them with the specific item or section of the questionnaire to which the comments related. For example, comments on an item’s comprehensibility were identified as one “theme”, and the corresponding “problem” was that the item was perceived as too academically formulated or too complex to understand. Comments on an item’s relevance were another “theme”, and an example of a corresponding “problem” was that the EBP activity presented in the item was not recognized as usual practice by the participant. The document contained these specific comments and summarized the participants’ overall impression of the instrument. Additionally, it included more general comments addressing the instrument’s relevance, comprehensibility, and comprehensiveness.

Next, multiple rounds of panel group discussions took place, and the final document with a summary of participants’ comments served as the foundation for these discussions. The content validity of the items, instructions, and response options underwent thorough examinations by the panel members. Panel members discussed aspects, such as relevance, comprehensiveness, and comprehensibility, drawing upon insights from interview participants’ comments and the panel members’ extensive knowledge about EBP.

Finally, the revised questionnaire was pilot tested on 40 master’s students (physical therapists) to evaluate the time used to respond, and the students were invited to make comments in free text adjacent to each domain in the questionnaire. The pilot participants answered a web-based version of the questionnaire.

Phase 2: Assessment of structural validity and internal consistency

Recruitment and data collection for the cross-sectional survey

Snowball sampling was used to recruit participants. The invitation letter, with information about the study and consent form, was distributed via e-mail to healthcare managers in over 37 cities and municipalities representing the eastern, western, central, and northern parts of Norway. The managers forwarded the invitation to eligible employees and encouraged them to respond to the questionnaire. The respondents that consented to participation automatically received a link to the online survey. Our approach to recruitment made it impossible to keep track of the exact number of potential participants who received invitations to participate. As such, we were unable to determine a response rate.

Statistical methods

Statistical analyses were performed using STATA [ 42 ]. We tested the structural validity and internal consistency of the 58 domain items of the EBP2-N, using the same factor structure as in the initial evaluation [ 28 ] and in the study that translated the questionnaire into Norwegian [ 27 ]. Structural validity was assessed using confirmatory factor analysis with maximum likelihood estimation to test whether the data fit the predetermined original five-factor structure. Model fit was assessed by evaluating the comparative fit index (CFI), root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). Guidelines suggest that a good-fitting model should have a CFI of around 0.95 or higher, an RMSEA of around 0.06 or lower, and an SRMR of around 0.08 or lower [ 43 ]. Cronbach’s alpha was calculated for each of the five domains to evaluate whether the items within the domains were interrelated. It has been proposed that a Cronbach’s alpha between 0.70 and 0.95 can be considered good [ 44 ].
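As an illustration of the reliability criterion above, Cronbach's alpha can be computed directly from its standard formula, alpha = k/(k−1) · (1 − Σ item variances / variance of the summed score). The sketch below is a minimal Python equivalent of the calculation the authors ran in STATA, reusing the hypothetical item1–item58 layout from the earlier scoring sketch.

```python
import pandas as pd

DOMAINS = {  # same hypothetical layout as the scoring sketch
    "Relevance": range(1, 15), "Sympathy": range(15, 22),
    "Terminology": range(22, 39), "Practice": range(39, 48),
    "Confidence": range(48, 59),
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def report_alphas(responses: pd.DataFrame) -> None:
    # Flag each domain against the 0.70-0.95 rule of thumb cited above
    for domain, items in DOMAINS.items():
        alpha = cronbach_alpha(responses[[f"item{i}" for i in items]])
        verdict = "good" if 0.70 <= alpha <= 0.95 else "outside 0.70-0.95"
        print(f"{domain}: alpha = {alpha:.2f} ({verdict})")
```

Note that reversing all Sympathy items beforehand leaves their alpha unchanged, since every item in that domain is phrased in the same direction and the covariance structure is preserved.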

The required sample size for the factor analysis was set based on the COSMIN criteria for at least an “adequate” sample size, that is, at least five times the number of items and > 100 [ 45 , 46 ]. Accordingly, the sample size required in our case was > 290 respondents. Regarding missing data, respondents with over 25% missing domain items were excluded from further analysis. Respondents with over 20% missing on one domain were excluded from the analysis of that domain. Little’s MCAR test was conducted to test whether data were missing completely at random. Finally, for respondents with 20% or less missing data on one domain, the missing values were substituted with the respondent’s mean of the other items within the same domain.
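A sketch of these exclusion and person-mean imputation rules is shown below, again assuming the hypothetical item1–item58 DataFrame layout; the thresholds (25% overall, 20% per domain) follow the text, while everything else is illustrative.

```python
import numpy as np
import pandas as pd

DOMAINS = {  # same hypothetical layout as the earlier sketches
    "Relevance": range(1, 15), "Sympathy": range(15, 22),
    "Terminology": range(22, 39), "Practice": range(39, 48),
    "Confidence": range(48, 59),
}

def handle_missing(responses: pd.DataFrame) -> pd.DataFrame:
    # Exclude respondents with more than 25% of the 58 domain items missing
    df = responses[responses.isna().mean(axis=1) <= 0.25].copy()

    for domain, items in DOMAINS.items():
        cols = [f"item{i}" for i in items]
        missing_frac = df[cols].isna().mean(axis=1)

        # >20% missing within a domain: exclude the respondent from that
        # domain's analysis (here: blank out the whole domain)
        df.loc[missing_frac > 0.20, cols] = np.nan

        # <=20% missing: substitute the respondent's mean of the other
        # items within the same domain (person-mean imputation)
        impute = (missing_frac > 0) & (missing_frac <= 0.20)
        person_means = df.loc[impute, cols].mean(axis=1)
        for col in cols:
            df.loc[impute, col] = df.loc[impute, col].fillna(person_means)
    return df
```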

Ethical approval and consent to participate

The Norwegian Agency for Shared Services in Education and Research (SIKT) approved the study in March 2022 (ref: 747319). We obtained written informed consent from the participants interviewed and the cross-sectional survey participants.

The findings for Phase 1 and Phase 2 will be presented separately. Phase 1 will encompass the results of the qualitative content validity assessment, adaptations, and pilot testing of the EBP2-N. Phase 2 will encompass the results of assessing the structural validity and internal consistency of the EBP2-N.

Phase 1: Results of the content validity assessment

Comprehensiveness: whether key concepts are missing

Only a few comments were made on comprehensiveness. Notably, one participant expressed the need for additional items addressing clinical experience and user perspectives.

Relevance: whether the items are perceived relevant

Overall, the participants commented that they perceived the instrument as relevant to their context. However, several participants pointed out some items that felt less relevant. The terminology domain emerged as a specific area of concern, as most participants expressed that this subscale contained items that felt irrelevant to clinical practice. Comments such as “I do not feel it’s necessary to know all these terms to work evidence-based,” and “The more overarching terms like RCT, systematic review, clinical relevance, and meta-analysis I find relevant, but not the more specific statistical terms,” captured the participants’ perspectives on the relevance of the terminology domain.

Other comments related to the terminology domain revealed that these items could cause feelings of demotivation or inadequacy: “One can become demotivated or feel stupid because of these questions” and “Many will likely choose not to answer the rest of the form, as they would feel embarrassed not knowing”. Other comments on relevance were related to items in other subscales, for example, critical appraisal items (i.e., items 20, 42, and 55), which were considered less relevant by some participants. One participant commented: “If one follows a guideline as recommended, there is no need for critical assessment”.

Comprehensibility: whether instructions, items, and response options are understandable

All eight participants stated that they understood what the term EBP meant. The predominant theme in the participants’ comments related to the comprehensibility of the EBP2-N. Most of the comments on comprehensibility revolved around the formulation of items. Participants noted comprehensibility problems in 35 of the 58 items, due to difficulty in understanding, readability issues, the length of items, lack of clarity, or overly academic language. For instance, item 5 in the Relevance domain, “I intend to develop knowledge about EBP”, received comments expressing uncertainty about whether “EBP” referred to the five steps of EBP or to evidence-based clinical interventions/practices (e.g., practices following recommendations in evidence-based guidelines). Items perceived as overly academic included phrases such as “intend to apply”, “intend to develop”, or “convert your information needs”. For these phrases, participants suggested simpler formulations in layperson’s Norwegian. Some participants deemed the instrument “too advanced,” “on a too high level,” or “too abstract”, while others expressed that they understood most of the instrument’s content, indicating a divergence among participants.

Examples of items considered challenging to read, too complex, or overly lengthy were items 6 and 12 in the Relevance domain, 16 and 20 in the Sympathy domain, and 58 in the Confidence domain. Typical comments revealed a preference for shorter, less complex items with a clear and singular focus. In addition, some comments referred to the formulation of response options. For instance, two response options in the Confidence domain, “Reasonably confident” and “Quite confident”, were perceived as too similar in Norwegian. In the Practice subscale, a participant pointed out that the term “monthly or less” lacked precision, as it could cover any frequency from once to twelve times a year.

Panel group meetings and instrument revision

The results of the interviews were discussed during several rounds of panel group meetings. After a thorough examination of the comments, 33 items were revised during the panel meetings. These revisions primarily involved minor linguistic adjustments that preserved the original meaning of the items. For example, the Norwegian version of item 8 was considered complex and overly academically formulated and was revised: the phrase “I intend to apply” was replaced by “I want to use”, which the panel group considered easier to understand in Norwegian. Another example involved the term “framework”, which some participants found vague or difficult to understand (i.e., in item 3, “my profession uses EBP as a framework”). The term “framework” was replaced with “way of thinking and working”, considered more concrete and understandable in Norwegian. The phrase “way of thinking and working” was also added to item 5 to clarify that “EBP” referred to the five steps of EBP, not to interventions in line with evidence-based recommendations. Additionally, items that participants considered challenging to read, too complex, or overly lengthy (i.e., items 6, 12, 16, 20, and 58) were difficult to revise, as they could not be shortened without losing their original meaning. However, replacing overly academic words with simpler formulations made these items less complex and more readable.

In terms of item relevance, no items were removed, and the terminology domain was retained despite the comments regarding its relevance. Changing this domain would have precluded comparing results from future studies using this questionnaire with previous studies using the same questionnaire. Regarding comprehensiveness, the panel group reached a consensus that the domains included all essential items concerning the constructs that the original instrument intends to measure. Examples of minor linguistic changes and additional details on item revisions are reported in Additional File 3.

In the pilot test, the median time to answer the questionnaire was nine minutes, and the master’s students made no further comments on the questionnaire.

Participants’ characteristics and mean domain scores

A total of 313 respondents were included in the analysis. The respondents’ mean age (SD) was 42.7 years (11.4). The sample included 119 nurses, 74 assistant nurses, 64 physical therapists, 38 occupational therapists, three medical doctors, and 15 other professionals, mainly social educators. In total, 63.9% (n = 200) of the participants held a bachelor’s degree, 11.8% (n = 37) held a master’s degree, and 0.3% (n = 1) held a Ph.D. Moreover, 10.5% (n = 33) of the participants had completed upper secondary education, and 13.1% (n = 41) had tertiary vocational education. One hundred and eighty-five participants (59.1%) reported no formal EBP training, while among the 128 participants who had undergone formal EBP training, 31.5% had completed over 20 h of EBP training. The mean scores (SD) for the different domains were as follows: Relevance 80.2 (7.3), Sympathy 21.2 (3.6), Terminology 44.5 (15.3), Practice 22.2 (5.8), and Confidence 31.2 (9.2).

Missing data

Out of 314 respondents, one was excluded due to over 25% missing domain items, and three were excluded due to more than 20% missing data in specific domains. Twenty-six respondents had under 20% missing data on one domain, and these missing values were substituted with the respondent’s mean of the other items within the same domain. In total, 313 responses were included in the final analysis. No single item had more than 1.3% missing responses. The percentage of missing data per domain was low and relatively similar across the five domains (Relevance = 0.05%, Sympathy = 0.2%, Terminology = 0.4%, Practice = 0.6%, Confidence = 0.6%). Little’s MCAR test showed p-values higher than 0.05 for all domains, indicating that data were missing completely at random.
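The within-domain person-mean substitution described above is straightforward to reproduce. Below is a minimal sketch in Python with pandas, assuming hypothetical item column names (the actual EBP2-N variable names are not given here); respondents exceeding the missing-data thresholds would be excluded before this step.

```python
import pandas as pd

def impute_domain(df: pd.DataFrame, items: list[str], max_missing: float = 0.20) -> pd.DataFrame:
    """Person-mean imputation within one domain, as described in the text."""
    sub = df[items]
    eligible = sub.isna().mean(axis=1) < max_missing   # under 20% missing in this domain
    person_mean = sub.mean(axis=1, skipna=True)        # respondent's mean of answered items
    filled = sub.apply(lambda col: col.fillna(person_mean))
    out = df.copy()
    out.loc[eligible, items] = filled.loc[eligible]
    return out

# e.g., df = impute_domain(df, ["rel_1", "rel_2", "rel_3"])  # hypothetical Relevance items
```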

Structural validity results

A five-factor model was estimated based on the original five-factor structure (Fig. 1). The model was estimated using the maximum likelihood method. A standardized solution was estimated, constraining the variance of the latent variables to 1, and correlations among the latent variables were allowed. The CFA yielded the following model fit indices: CFI = 0.749, RMSEA = 0.074, and SRMR = 0.075. The CFI and RMSEA did not meet the criteria for a good-fitting model set a priori (CFI of around 0.95 or higher, RMSEA of around 0.06 or lower), whereas the SRMR met the criterion of around 0.08 or lower. All standardized factor loadings were 0.32 or higher, and only five items loaded under 0.5. The standardized factor loadings ranged as follows in the different domains: Relevance = 0.47–0.79; Terminology = 0.51–0.80; Practice = 0.35–0.70; Confidence = 0.43–0.86; and Sympathy = 0.32–0.65 (Fig. 1).
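The text describes this setup rather than giving code; purely as an illustration, a comparable five-factor CFA could be specified with the semopy package in Python (lavaan in R is the more common alternative). The item names below are placeholders, since the individual EBP2-N items are not listed here, and SRMR would need to be computed separately from semopy’s standard fit statistics.

```python
import pandas as pd
from semopy import Model, calc_stats

# Placeholder measurement model; the EBP2-N distributes 58 items over five domains.
desc = """
Relevance   =~ rel_1 + rel_2 + rel_3
Sympathy    =~ sym_1 + sym_2 + sym_3
Terminology =~ ter_1 + ter_2 + ter_3
Practice    =~ pra_1 + pra_2 + pra_3
Confidence  =~ con_1 + con_2 + con_3
"""

df = pd.read_csv("ebp2n_responses.csv")   # hypothetical data file
model = Model(desc)                       # correlations among latent factors are free
model.fit(df)                             # maximum likelihood estimation
print(calc_stats(model).T)                # fit indices, including CFI and RMSEA
print(model.inspect(std_est=True))        # standardized loadings and latent correlations
```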

Figure 1

Confirmatory factor analysis, standardized solution of the EBP2-N (n = 313). Note: large circles = latent variables, rectangles = measured items, small circles = residual variance

Internal consistency results

As reported in Table 1, Cronbach’s alphas ranged between 0.82 and 0.95 for all domains except the Sympathy domain, where Cronbach’s alpha was 0.69. These results indicate good internal consistency for four domains, with Sympathy falling just below the cut-off for good internal consistency (> 0.70).
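Cronbach’s alpha is simple to compute directly from the item scores. The sketch below is a minimal Python illustration (not the authors’ analysis code) for a DataFrame whose columns are the items of one domain:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / var(total score))."""
    items = items.dropna()                        # listwise deletion, for simplicity
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of the k item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed score
    return k / (k - 1) * (1 - item_vars / total_var)

# e.g., cronbach_alpha(df[sympathy_items])  # hypothetical names for the seven Sympathy items
```

Alpha tends to increase with the number of items, which is one reason the seven-item Sympathy domain can trail the longer domains, as the discussion below notes.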

Discussion

In this study, we aimed to assess the measurement properties of the EBP2-N questionnaire. The study population of interest was healthcare professionals working with older people in Norwegian primary healthcare, including physical therapists, occupational therapists, nurses, assistant nurses, and medical doctors. The study was conducted in two phases: content validity was assessed in Phase 1, and construct validity and internal consistency were assessed in Phase 2.

The findings from Phase 1 and the qualitative interviews with primary healthcare professionals indicated that the content of the EBP2-N was perceived to reflect the constructs intended to be measured by the instrument [ 28 ]. However, the interviews also revealed differing perceptions regarding the relevance and comprehensibility of certain items. Participants expressed concerns about the formulation of some items, and we decided to make minor linguistic adjustments, in line with previous recommendations to refine item wording through interviews [ 27 ]. Lack of content validity can have adverse consequences [ 34 ]: irrelevant or incomprehensible items may tire respondents, leading to potentially biased answers [ 47 , 48 , p. 139]. Our analysis of missing data suggests that possibly irrelevant or incomprehensible items did not lead to respondent fatigue, as the overall percentage of missing items was low (at most 1.3%) and the percentage of missing data did not vary across the domains. Irrelevant items may also affect other measurement properties, such as structural validity and internal consistency [ 34 ]. We believe that the minor linguistic revisions we made to some items made the questionnaire easier to understand. This assumption was supported by the pilot test with 40 master’s students, in which no further comments regarding comprehensibility were made.

The overall relevance of the instrument was perceived positively. However, several participants expressed concerns about the terminology domain, as some of the most specific research terms felt irrelevant to them in clinical practice. Still, the panel group decided to keep all items in the terminology domain to allow comparison of results across future studies using the same instrument and subscales. In addition, this decision was based on the fact that knowledge about research terminology, such as “types of data,” “measures of effect,” and “statistical significance,” is an essential competency for performing step three of the EBP process (critical appraisal) [ 3 ]. Leaving out parts of the terminology domain could, therefore, have made our assessment of the EBP constructs less comprehensive and complete [ 14 ]. However, since the relevance of some items in the terminology domain was questioned, we cannot fully confirm the content validity of this domain, and we recommend interpreting it with caution.

The confirmatory factor analysis (CFA) in Phase 2 of this study revealed that the five-factor model only partially reflected the dimensionality of the constructs measured by the instrument. The SRMR was the only model fit index that fully met the a priori criterion for a good-fitting model, yielding a value of 0.075. In contrast, the CFI of 0.749 and RMSEA of 0.074 fell short of the criteria for a good-fitting model (CFI ≥ 0.95, RMSEA ≤ 0.06). However, our model fit indices were closer to the criteria for a good-fitting model than those of Titlestad et al. (2017) [ 27 ], who reported a CFI of 0.69, an RMSEA of 0.089, and an SRMR of 0.095. This tendency toward better fit in our study may be related to the larger sample size, in agreement with established recommendations of a minimum of 100–200 participants and at least 5–10 times the number of items to ensure the precision of the model and overall model fit [ 46 , p. 380].

Although our sample size met COSMIN’s criteria for an “adequate” sample size [ 45 ], the partially adequate fit indices suggest that the original five-factor model might not be the best-fitting model. A recent study on the Chinese adaptation of the EBP2 demonstrated that item reduction and a four-factor structure improved model fit (RMSEA = 0.052, CFI = 0.932) [ 30 ]. That study removed eighteen items based on a content validity evaluation (four from relevance, seven from terminology, and seven from sympathy) [ 30 ]. In another study, in which the EBP2 was adapted for use among Chinese nurses, thirteen items (two from sympathy, eight from terminology, one from practice, and two from confidence) were removed, and an eight-factor structure was identified [ 29 ]. However, compared to our study, this did not yield a noticeably improved model fit: the fit indices of their 45-item eight-factor structure were quite similar to those found in our study (RMSEA = 0.065, SRMR = 0.077, CFI = 0.884) [ 29 ]. The results from these two studies suggest that a model with fewer items and a different factor structure could potentially have applied to our population as well. Although the five-factor model only partially reflects the constructs measured by the EBP2-N in our population, it contributes valuable insights into the instrument’s performance in a specific healthcare setting.

The Cronbach’s alpha results in this study indicate good internal consistency for four domains, all above 0.82. However, the alpha of 0.69 in the sympathy domain did not reach the pre-specified cut-off for good internal consistency (0.70) [ 44 ]. A tendency toward relatively lower Cronbach’s alpha values for the sympathy domain, compared to the other four domains, has also been identified in previous similar studies [ 27 , 28 , 31 , 32 ]. Titlestad et al. (2017) reported a Cronbach’s alpha of 0.66 for the sympathy domain and above 0.90 for the other domains [ 27 ]. McEvoy et al. (2010), Panczyk et al. (2017), and Belowska et al. (2020) reported Cronbach’s alphas of 0.76–0.80 for the sympathy domain and 0.85–0.97 for the other domains [ 28 , 31 , 32 ]. In these three cases, the alphas of the sympathy domain were all above 0.70, but the same tendency of this domain demonstrating lower alphas than the other four domains was evident. The relatively lower alpha values in the sympathy domain may be related to the negative phrasing of items [ 49 ], the low number of items in this domain compared to the others (n = 7) [ 12 , p. 84, 47 , p. 86], and possible heterogeneity in the construct measured [ 47 , p. 232]. The internal consistency results of our study indicate that the items in the sympathy domain are less interrelated than those in the other domains. However, a Cronbach’s alpha of 0.69 indicates that the items do not entirely lack interrelatedness.

Limitations

Methodological limitations that could potentially introduce bias into the results should be acknowledged. Although the eight participants involved in the qualitative content validity interviews in Phase 1 covered all healthcare disciplines and education levels intended to be included in the survey in Phase 2, it remains uncertain whether these eight participants captured all potential variation in the population of interest. It is possible that those who agreed to participate in qualitative interviews about an EBP instrument held more positive attitudes toward EBP than practitioners in general. Another possible limitation pertains to the qualitative interviews and the fact that the interviewer (NGL) had limited experience facilitating “think-aloud” interviews. To reduce the potential risk of interviewer-related bias, the panel group, which has extensive experience in EBP research, took part in the interview preparation, and a pilot interview was conducted before the interviews as training.

Furthermore, using a non-random sampling method and the unknown response rate in Phase 2 may have led to biased estimates of measurement properties and affected the representativeness of the sample included. Additionally, the characteristics of non-responders remain unknown, making it challenging to assess whether they differ from the responders and if the final sample adequately represents the variability in the construct of interest. Due to potential selection bias and non-response bias, there may be uncertainty regarding the accuracy of the measurement property assessment and whether the study sample fully represents the entire population of interest [ 50 , p. 205].

Conclusions

The EBP2-N is suitable for measuring Norwegian primary healthcare professionals’ EBP knowledge, attitudes, confidence, and behavior. Researchers can use the EBP2-N to increase their understanding of factors affecting healthcare professionals’ implementation of EBP and to guide the development of tailored strategies for implementing EBP.

This study revealed positive perceptions of the content validity of the EBP2-N, though with nuanced concerns about the relevance and comprehensibility of certain items and uncertainty regarding the five-factor structure of the EBP2-N. The minor linguistic revisions we made to some items made the questionnaire more understandable. However, when the EBP2-N is used in primary healthcare, caution should be exercised when interpreting the results of the terminology domain, as the relevance of some of its items has been questioned.

Future research should focus on further assessing the factor structure of the EBP2-N, evaluating the relevance of the items, and exploring the possibility of reducing the number of items, especially when the instrument is applied in a new setting or population. Such evaluations could further enhance our understanding of the EBP2-N and potentially lead to improvements in its measurement properties.

Data availability

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

  • EBP: Evidence-based practice
  • EBP2: The Evidence-based practice profile
  • EBP2-N: The Norwegian version of the Evidence-based practice profile questionnaire
  • COSMIN: Consensus-based Standards for the Selection of Health Measurement Instruments
  • CFA: Confirmatory factor analysis
  • CFI: Comparative fit index
  • RMSEA: Root mean square error of approximation
  • SRMR: Standardized root mean square residual
  • Sikt: The Norwegian Agency for Shared Services in Education and Research

References

Dawes M, Summerskill W, Glasziou P, Cartabellotta A, Martin J, Hopayian K, et al. Sicily statement on evidence-based practice. BMC Med Educ. 2005;5(1):1.


Straus SE, Glasziou P, Richardson WS, Haynes RB, Pattani R, Veroniki AA. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Elsevier; 2019.


Albarqouni L, Hoffmann T, Straus S, Olsen NR, Young T, Ilic D, et al. Core competencies in evidence-based practice for Health professionals: Consensus Statement based on a systematic review and Delphi Survey. JAMA Netw Open. 2018;1(2):e180281.

Straus S, Glasziou P, Richardson W, Haynes R. Evidence-based medicine: how to practice and teach EBM. 5th ed. Elsevier Health Sciences; 2019.

Paci M, Faedda G, Ugolini A, Pellicciari L. Barriers to evidence-based practice implementation in physiotherapy: a systematic review and meta-analysis. Int J Qual Health Care. 2021;33(2).

Sadeghi-Bazargani H, Tabrizi JS, Azami-Aghdash S. Barriers to evidence-based medicine: a systematic review. J Eval Clin Pract. 2014;20(6):793–802.

da Silva TM, Costa Lda C, Garcia AN, Costa LO. What do physical therapists think about evidence-based practice? A systematic review. Man Ther. 2015;20(3):388–401.

Grol R, Wensing M. What drives change? Barriers to and incentives for achieving evidence-based practice. Med J Aust. 2004;180(S6):S57–60.

Saunders H, Gallagher-Ford L, Kvist T, Vehviläinen-Julkunen K. Practicing Healthcare professionals’ evidence-based practice competencies: an overview of systematic reviews. Worldviews Evid Based Nurs. 2019;16(3):176–85.

Salbach NM, Jaglal SB, Korner-Bitensky N, Rappolt S, Davis D. Practitioner and organizational barriers to evidence-based practice of physical therapists for people with stroke. Phys Ther. 2007;87(10):1284–303.

Saunders H, Vehvilainen-Julkunen K. Key considerations for selecting instruments when evaluating healthcare professionals’ evidence-based practice competencies: a discussion paper. J Adv Nurs. 2018;74(10):2301–11.

de Vet HCW, Terwee CB, Mokkink LB, Knol DL. Measurement in medicine: a practical guide. Cambridge: Cambridge University Press; 2011.


Tilson JK, Kaplan SL, Harris JL, Hutchinson A, Ilic D, Niederman R, et al. Sicily statement on classification and development of evidence-based practice learning assessment tools. BMC Med Educ. 2011;11:78.

Roberge-Dao J, Maggio LA, Zaccagnini M, Rochette A, Shikako K, Boruff J et al. Challenges and future directions in the measurement of evidence-based practice: qualitative analysis of umbrella review findings. J Eval Clin Pract. 2022.

Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, et al. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006;296(9):1116–27.

Landsverk NG, Olsen NR, Brovold T. Instruments measuring evidence-based practice behavior, attitudes, and self-efficacy among healthcare professionals: a systematic review of measurement properties. Implement Science: IS. 2023;18(1):42.

Hoegen PA, de Bot CMA, Echteld MA, Vermeulen H. Measuring self-efficacy and outcome expectancy in evidence-based practice: a systematic review on psychometric properties. Int J Nurs Stud Adv. 2021;3:100024.

Oude Rengerink K, Zwolsman SE, Ubbink DT, Mol BW, van Dijk N, Vermeulen H. Tools to assess evidence-based practice behaviour among healthcare professionals. Evid Based Med. 2013;18(4):129–38.

Leung K, Trevena L, Waters D. Systematic review of instruments for measuring nurses’ knowledge, skills and attitudes for evidence-based practice. J Adv Nurs. 2014;70(10):2181–95.

Buchanan H, Siegfried N, Jelsma J. Survey instruments for Knowledge, skills, attitudes and Behaviour related to evidence-based practice in Occupational Therapy: a systematic review. Occup Ther Int. 2016;23(2):59–90.

Fernández-Domínguez JC, Sesé-Abad A, Morales-Asencio JM, Oliva-Pascual-Vaca A, Salinas-Bueno I, de Pedro-Gómez JE. Validity and reliability of instruments aimed at measuring evidence-based practice in physical therapy: a systematic review of the literature. J Eval Clin Pract. 2014;20(6):767–78.

Belita E, Squires JE, Yost J, Ganann R, Burnett T, Dobbins M. Measures of evidence-informed decision-making competence attributes: a psychometric systematic review. BMC Nurs. 2020;19:44.

Egeland KM, Ruud T, Ogden T, Lindstrom JC, Heiervang KS. Psychometric properties of the Norwegian version of the evidence-based practice attitude scale (EBPAS): to measure implementation readiness. Health Res Policy Syst. 2016;14(1):47.

Rye M, Torres EM, Friborg O, Skre I, Aarons GA. The evidence-based practice attitude Scale-36 (EBPAS-36): a brief and pragmatic measure of attitudes to evidence-based practice validated in US and Norwegian samples. Implement Science: IS. 2017;12(1):44.

Grønvik CKU, Ødegård A, Bjørkly S. Factor Analytical Examination of the evidence-based practice beliefs scale: indications of a two-factor structure. scirp.org; 2016.

Moore JL, Friis S, Graham ID, Gundersen ET, Nordvik JE. Reported use of evidence in clinical practice: a survey of rehabilitation practices in Norway. BMC Health Serv Res. 2018;18(1):379.

Titlestad KB, Snibsoer AK, Stromme H, Nortvedt MW, Graverholt B, Espehaug B. Translation, cross-cultural adaption and measurement properties of the evidence-based practice profile. BMC Res Notes. 2017;10(1):44.

McEvoy MP, Williams MT, Olds TS. Development and psychometric testing of a trans-professional evidence-based practice profile questionnaire. Med Teach. 2010;32(9):e373–80.

Hu MY, Wu YN, McEvoy MP, Wang YF, Cong WL, Liu LP, et al. Development and validation of the Chinese version of the evidence-based practice profile questionnaire (EBP2Q). BMC Med Educ. 2020;20(1):280.

Jia Y, Zhuang X, Zhang Y, Meng G, Qin S, Shi WX, et al. Adaptation and validation of the evidence-based Practice Profile Questionnaire (EBP(2)Q) for clinical postgraduates in a Chinese context. BMC Med Educ. 2023;23(1):588.

Panczyk M, Belowska J, Zarzeka A, Samolinski L, Zmuda-Trzebiatowska H, Gotlib J. Validation study of the Polish version of the evidence-based Practice Profile Questionnaire. BMC Med Educ. 2017;17(1):38.

Belowska J, Panczyk M, Zarzeka A, Iwanow L, Cieślak I, Gotlib J. Promoting evidence-based practice - perceived knowledge, behaviours and attitudes of Polish nurses: a cross-sectional validation study. Int J Occup Saf Ergon. 2020;26(2):397–405.

Knowledge, Attitudes, Confidence, and Behavior Related to Evidence-based Practice Among Healthcare Professionals Working in Primary Healthcare. Protocol of a Cross-sectional Survey [Internet]. OSF. 2023. Available from: https://doi.org/10.17605/OSF.IO/428RP

Terwee CB, Prinsen CAC, Chiarotto A, Westerman MJ, Patrick DL, Alonso J, et al. COSMIN methodology for evaluating the content validity of patient-reported outcome measures: a Delphi study. Qual Life Res. 2018;27(5):1159–70.

Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, de Vet HCW, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1147–57.

Mokkink LB, de Vet HCW, Prinsen CAC, Patrick DL, Alonso J, Bouter LM, et al. COSMIN Risk of Bias checklist for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1171–9.

Mokkink LB, Prinsen CA, Patrick D, Alonso J, Bouter LM, Vet HCD, et al. COSMIN study design checklist for patient-reported outcome measurement instruments [PDF]. 2019. https://www.cosmin.nl/tools/checklists-assessing-methodological-study-qualities/ . https://www.cosmin.nl/wp-content/uploads/COSMIN-study-designing-checklist_final.pdf

Gagnier JJ, Lai J, Mokkink LB, Terwee CB. COSMIN reporting guideline for studies on measurement properties of patient-reported outcome measures. Qual Life Res. 2021;30(8):2197–218.

Bjerk M, Flottorp SA, Pripp AH, Øien H, Hansen TM, Foy R, et al. Tailored implementation of national recommendations on fall prevention among older adults in municipalities in Norway (FALLPREVENT trial): a study protocol for a cluster-randomised trial. Implement Science: IS. 2024;19(1):5.

Presser S, Couper MP, Lessler JT, Martin E, Martin J, Rothgeb JM, et al. Methods for testing and evaluating survey questions. In: Methods for Testing and Evaluating Survey Questionnaires. 2004. pp. 1–22.

Willis GB. Analysis of the cognitive interview in questionnaire design. Cary: Oxford University Press; 2015.

StataCorp. Stata Statistical Software. 18 ed. College Station, TX: StataCorp; 2023.

Hu L-t, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model. 1999;6(1):1–55.

Terwee CB, Bot SD, de Boer MR, van der Windt DA, Knol DL, Dekker J, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60(1):34–42.

Mokkink LB, Prinsen CA, Patrick DL, Alonso J, Bouter LM, de Vet HC et al. COSMIN methodology for systematic reviews of Patient-Reported Outcome Measures (PROMs) – user manual. 2018. https://www.cosmin.nl/tools/guideline-conducting-systematic-review-outcome-measures/

Brown TA. Confirmatory factor analysis for applied research. New York: Guilford; 2015.

Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. New York, NY: Oxford University Press; 2015.

de Leeuw ED, Hox JJ, Dillman DA. International handbook of survey methodology. New York, NY: Taylor & Francis Group/Lawrence Erlbaum Associates; 2008.

Solís Salazar M. The dilemma of combining positive and negative items in scales. Psicothema. 2015;27(2):192–200.

Bowling A. Research methods in health: investigating health and health services. 4th ed. Maidenhead: Open University, McGraw-Hill; 2014.


Acknowledgements

The authors would like to thank all the participants of this study, and partners in the FALLPREVENT research project.

Funding

Open access funding provided by OsloMet - Oslo Metropolitan University. Internal funding was provided by OsloMet. The funding bodies had no role in the design, data collection, data analysis, interpretation of the results, or the decision to submit for publication.


Author information

Authors and Affiliations

Department of Rehabilitation Science and Health Technology, Faculty of Health Science, Oslo Metropolitan University, Oslo, Norway

Nils Gunnar Landsverk & Therese Brovold

Department of Health and Functioning, Faculty of Health and Social Sciences, Western Norway University of Applied Sciences, Bergen, Norway

Nina Rydland Olsen

Faculty of Health Sciences, OsloMet - Oslo Metropolitan University, Oslo, Norway

Are Hugo Pripp

Department of Welfare and Participation, Faculty of Health and Social Sciences, Western Norway University of Applied Sciences, Bergen, Norway

Kristine Berg Titlestad


Contributions

NGL, TB, and NRO initiated the study and contributed to the design and planning. NGL managed the data collection (qualitative interviews and the web-based survey) and conducted the data analyses. NGL, TB, NRO, and KBT formed the panel group that developed the interview guide, discussed the results of the interviews in several meetings, and made minor linguistic revisions to the items. AHP assisted in planning the cross-sectional survey, performing statistical analyses, and interpreting the results of the statistical analyses. NGL wrote the manuscript draft, and TB, NRO, and KBT reviewed and revised the text in several rounds. All authors contributed to, reviewed, and approved the final manuscript.

Corresponding author

Correspondence to Nils Gunnar Landsverk.

Ethics declarations

Ethics approval and consent to participate, and consent for publication

Not Applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1:

The EBP2-N questionnaire

Supplementary Material 2:

The interview guide

Supplementary Material 3:

Details on item revisions

Supplementary Material 4:

Reporting guideline

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Landsverk, N.G., Olsen, N.R., Titlestad, K.B. et al. Adaptation and validation of the evidence-based practice profile (EBP2) questionnaire in a Norwegian primary healthcare setting. BMC Med Educ 24, 841 (2024). https://doi.org/10.1186/s12909-024-05842-z


Received: 09 April 2024

Accepted: 30 July 2024

Published: 06 August 2024

DOI: https://doi.org/10.1186/s12909-024-05842-z


Keywords

  • Healthcare professional
  • Primary healthcare
  • Content validity
  • Construct validity
  • Structural validity
  • Internal consistency
  • Self-efficacy

BMC Medical Education

ISSN: 1472-6920


ORIGINAL RESEARCH article

Understanding users of online energy efficiency counseling: comparison to representative samples in Norway

Christian A. Klöckner*

  • 1 Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
  • 2 Department of Psychology, University of Bergen, Bergen, Norway

Introduction: To achieve substantial energy efficiency improvements in the privately owned building stock, it is important to communicate with potential renovators at the right point in time and provide them with targeted information to strengthen their renovation ambitions. The European Union recommends using one-stop-shops (OSSs), which provide information and support throughout the whole process, from planning to acquisition of funding, implementation, and evaluation as a measure to remove unnecessary barriers.

Methods: For this paper, we invited visitors of two Norwegian websites with OSS characteristics to answer an online survey about their renovation plans and energy efficiency ambitions. The participants visited the websites out of their own interest; no recruitment for the websites was conducted as part of the study ( N = 437). They also rated a range of psychological drivers, facilitators, and barriers to including energy upgrades in a renovation project. Their answers were then compared to existing data from representative samples of Norwegian households regarding home renovation in 2014, 2018, and 2023, as well as data from a sample of people who were engaged in renovation projects in 2014, which was collected by the research team with a similar online survey. Furthermore, 78 visitors completed a brief follow-up online survey one year later to report the implemented measures.

Results: We found that visitors of the websites are involved in more comprehensive renovation projects and have substantially higher ambitions for the upgrade of energy efficiency compared to the representative samples. They also perceive stronger personal and social norms, as well as have a different profile of facilitators and barriers.

Discussion: The findings suggest to policymakers that OSSs should be marketed especially to people who are motivated to upgrade energy efficiency but lack information and are unable to implement their plans alone. The construction industry might also refer interested people to such low-threshold online solutions to support informed and more ambitious decisions.

1 Introduction

Reducing energy use in the building sector by increasing energy efficiency is a key pillar of decarbonising Europe as formulated in the EU’s “Fit for 55” legislation ( Schlacke et al., 2022 , 4). On a global level, the residential sector is the third largest energy consumer, representing 27–30% of energy consumption, almost at the same level as transportation and industry ( Nejat et al., 2015 , 843; IEA, 2023 ). In Europe, too, the residential sector accounts for 26% of final energy consumption, making it the second largest consumption sector after transportation ( Tsemekidi et al., 2019 , 1). Whereas primary energy consumption in the residential sector decreased by 4.6% between 2000 and 2016 ( Tsemekidi et al., 2019 , 9), there is still substantial untapped potential for further improvement of energy efficiency in the sector, which can be realized through energy efficiency renovation of the existing building stock ( Pohoryles et al., 2020 , 11–12). Realizing this potential requires that private house owners also invest in energy efficiency measures. However, the annual rate of housing renovation in Europe is only about 1% ( Biere-Arenas and Marmolejo-Duarte, 2022 , 185), which is far too slow to reach the ambitious energy conservation targets, and not all of those renovations include energy efficiency improvements. This raises the question of how property owners make decisions about renovation and energy efficiency measures and how they can be efficiently supported in these processes. To alleviate this problem, one-stop-shops (OSSs), which are places where interested citizens can get counseling and support for the whole process of an energy retrofit, have lately gained considerable attention, including from the European Union, as a means to support citizens with energy retrofits (as reflected, for example, in recently finished EU projects such as “EUROPE one stop” or “ProRetro”).

1.1 One-stop-shops in energy counseling

Bertoldi et al. (2021 , 3–12) analysed the role of OSSs across Europe. They concluded that OSSs may be able to address some of the main barriers that households face when deciding about energy efficiency renovations. Often, these barriers can be categorized as economic (upfront costs, need for a loan, split incentives between landlords and renters/disagreement between owners), informational (information asymmetries, outcome uncertainties, incorrect beliefs), and decision-making (limited attention, social invisibility of the action, cognitive burden, loss aversion, status quo bias). Their analysis of 63 OSSs across Europe showed that the services the OSSs offer differ considerably, as do their business models. Some of them are public entities that often offer services for free; others are commercial enterprises. Their clients are usually homeowners living in relatively old buildings, and only a few of them work with social housing. Bagaini et al. (2022 , 3–4) also analysed and categorized 29 OSS initiatives around Europe and formulated five key elements on which the different OSSs differed: (a) value proposition, (b) services, (c) partnership management, (d) revenue stream, and (e) shared value. Based on these dimensions, they distilled three archetypes of OSS models: the Facilitation Model (mostly focused on providing information to homeowners, without a revenue generation model behind it), the Coordination Model (also taking on a project management role with the contractors and generating revenue through fixed fees), and the Development Model (similar to the Coordination Model but with revenue generated dynamically from the shared energy savings). Along similar lines, Pardalis et al. (2022) compared publicly and privately funded OSSs. In addition to the facilitation and coordination models, they separate the development model into “all inclusive models” (where the renovation process is fully managed by the OSS under one single contract, but energy savings are not guaranteed) and “ESCO models” (where Energy Service Companies−ESCOs−manage the whole renovation package and also guarantee energy savings). Whereas publicly funded OSSs are evaluated as providing homeowners with crucial services at the right time, privately funded OSSs struggle more with generating revenue and providing access to financing.

According to Bertoldi et al. (2021) , a key activity all of the surveyed OSSs cover is the assessment of the status quo, which is done in different ways (sometimes as a guided online self-assessment). Then, a stage of guidance toward possible measures is started, usually resulting in an individual renovation plan. In the next stage, financing is secured (either directly or indirectly, for example by supporting applications for subsidies). In the implementation stage, OSSs either manage the implementation themselves or recommend contractors who will do that. Often OSSs are involved in quality assurance of the implemented measures afterwards, sometimes certifying the result. Some OSSs also monitor the building after the energy upgrade to support the clients, often through a contract where financial benefits are shared between the OSS and the client (often in ESCO models). Finally, most OSSs also engage in campaigns for energy efficiency in buildings to increase awareness.

McGinley et al. (2020 , 355–57) formulate some key considerations for OSS design. They define OSSs as offering full-service retrofitting, including initial building evaluation and thorough analysis, proposal of retrofitting solutions, retrofit execution, and quality assurance. However, they also state that little is known about the characteristics and motivations of households that are drawn to OSSs and how household decisions are impacted by OSSs, a research gap we aim to fill with this paper.

A number of recent EU projects have addressed the issue of OSSs in detail. In particular, the “EUROPA one stop” project (europaonestop.eu) is interesting, as it created an online platform (SUNShINE−savehomesave.eu) to connect homeowners, facility managers, and contractors working on energy efficiency upgrades and to provide them with easy-access tools to diagnose their renovation potential online. This platform is structurally comparable to the platforms analysed in this paper and can be considered a concept following the facilitation model. However, to understand how homeowners may be affected by OSSs, it is important to take a look at decision-making processes.

1.2 Psychological drivers of implementing energy efficiency in renovation of privately owned dwellings

In a detailed study of decision-making about energy retrofits in Norwegian households, data from which were also used as a comparison in this study, Klöckner and Nayum (2017 , 1014) found that an extended Theory of Planned Behaviour ( Ajzen, 1991 , 182; Klöckner, 2013 , 1032) formed a viable theoretical framework for structuring these decision processes. They were able to show that personal norms, positive attitudes, and high self-efficacy were the decisive factors in forming intentions to include energy efficiency upgrades in renovation projects. Social norms were closely related to personal norms and an important trigger of them. More distal factors were problem awareness, value orientations, perceived consumer effectiveness, and innovativeness. The most central concepts are briefly introduced in the next paragraph.

In this context, personal norms are a feeling of moral obligation to invest in better energy efficiency. Positive attitudes are the overall evaluation of the pros and cons of the decision to invest, that is, how good or bad the investment would be, all things considered. Self-efficacy captures how capable one feels of implementing the investment, a factor that will most likely be directly affected by engaging with an OSS. Following the theoretical framework as outlined and tested by Klöckner and Nayum (2017 , 1014), an intention to invest will thus be formed (a) if people feel morally obliged to invest because wasting energy is a bad thing, which is more likely (b) if other people who are important to them support this view. Furthermore, (c) a positive attitude to energy efficiency investments and (d) high self-efficacy (i.e., knowing how to implement these measures and/or whom to contract to do it) also contribute. As attitudes are a combination of positive and negative beliefs about the behavioral alternatives that people choose between ( Ajzen, 1996 , 385–403), a closer look at the assumed barriers and facilitators underlying those alternatives could help in understanding the decision process further, as discussed in the next section.

1.3 Barriers and facilitators of energy efficiency measures in buildings

A number of studies have analyzed facilitators of, or barriers to, implementing energy efficiency in residential buildings from different theoretical and methodological perspectives. In his PhD thesis, Pardalis (2021 , 60) finds, based on an online survey of almost 1000 homeowners in Sweden, that house age and the time lived in a house, but also energy concern, trigger the decision to renovate. These factors are, in turn, influenced by sociodemographic characteristics of the occupants. Thus, structural aspects seem important as drivers of the retrofit decision.

Digging deeper into the decision process, Xue et al. (2022 , 5) conducted interviews with 39 professionals in the retrofit market to identify barriers to energy retrofitting from the perspectives of the public sector, the private sector, and the owners who conduct the retrofit. They found financial issues to be the most important barrier in all three groups. For owners who are supposed to implement energy efficiency measures, they further named lack of information, lack of creative models or cases, risks connected to the project, trust, and negative social influence as important barriers. Problems with reaching an agreement, time-consuming processes, limited added value, and concerns about payback time were also named.

Many of these aspects were also reflected in another qualitative study. Klöckner et al. (2013 , 406–408) interviewed 70 Norwegians on drivers and barriers regarding energy efficiency behaviour. They found that economic barriers (e.g., lack of investment money), motivational barriers (e.g., too much effort, loss of comfort, low perceived efficacy), structural barriers (e.g., building structure, ownership), and informational barriers (e.g., lack of trust, uncertainty, lack of specific information) were central.

Departing from practice theory in an ethnographic study of renovation projects, Judson and Maller (2014) interviewed 49 Australians involved in renovation projects and unraveled the process of renovation even further. They found that renovation projects, to a large degree, are shaped and reshaped by the existing or evolving practices people have within their buildings. Energy efficiency is traded off against other needs and meanings, negotiations between different household members occur, and the focus shifts dynamically. Some parts of the home have a meaning for their inhabitants as part of their daily practices and cannot simply be changed to enhance energy efficiency.

Taking a quantitative perspective, Klöckner and Nayum (2016 , 5) studied barriers in different stages of renovation processes in a representative sample of Norwegian households. Their findings indicate that facilitators like a perceived increase in comfort, anticipated better living conditions, or increased market value were important in the early stages of decision-making. Information about subsidy schemes or trustworthy information about the procedures came out as important at a later stage, when planning was more advanced. Correspondingly, some barriers, like building protection regulations, planning to move soon, or not owning the building, were relevant early in the process, before people even started thinking about an energy retrofit, whereas barriers like too much disturbance of everyday life, contractors with a lack of competence, the need to supervise contractors, or a lack of economic resources turned out to be relevant later in the process. A particularly important barrier appeared to be the feeling that “the right point in time for a larger renovation project has not come, yet”.

In an economic modeling approach comparing expected utility theory (which assumes that decision makers choose the alternative with the best possible utility for them) and cumulative prospect theory (which assumes that decisions about investments are strongly affected by specific decision biases), Ebrahimigharehbaghi et al. (2022) found that cumulative prospect theory is much better equipped than classical expected utility theory to predict homeowners’ investments in home energy efficiency in a large sample from the Netherlands. Cumulative prospect theory incorporates biases like “reference dependence” (utility changes are interpreted differently with respect to different reference points), “loss aversion” (losses weigh more heavily than gains of the same size), “diminishing sensitivity” (avoiding risk for positive outcomes but taking risks for negative outcomes), and “probability weighting” (events with low probability but more extreme outcomes are overestimated). This shows that people’s decision-making in such cases takes aspects other than economic utility into consideration to a large degree.

Studies such as the ones briefly mentioned above show that the range of aspects that can interfere with or facilitate a decision-making process about energy retrofits is plentiful, and that these aspects differ in importance depending on where in the process a decision-maker is. This makes it demanding to provide the most helpful support for decision-makers in the residential sector: it seems important to provide the right information at the right time to the right people, which underscores the need for careful targeting and timing of information provision. Flexible and interactive online counseling systems that can take people through all stages of the process, similar to OSSs, may be a way to find a good balance between the resources needed and the effects achieved in targeted energy counseling. Interestingly, Pardalis (2021 , 66) asked homeowners what would be most important for them with respect to OSSs; guarantees for costs and quality, having one contact and one contract, and a preliminary check with counseling were at the top of the list, directly addressing some of the issues identified as barriers in many of the studies above.

1.4 The present study

Summarizing what has been outlined in the introduction, energy efficiency upgrades of residential buildings are a major contributor to reaching the targets of the European Union’s energy transition. However, the private residential sector is lagging behind in this process. Renovation rates of the aging building stock are low; even when buildings are renovated, energy efficiency measures are not always implemented, and where some energy efficiency measures are included, they are often not up to the recommendable standard. One-stop-shops have been heavily promoted recently as a way of removing the burden of planning, financing, and implementing a deep renovation project from the individual house owner. Consequently, many such services have been implemented around Europe, with differing business models, financing, and mandates. However, relatively little is known about who uses these services and what effect they have on their users. In particular, it is largely unknown how interacting with a low-threshold digital OSS following a facilitation model shapes its users’ perception of barriers and facilitators of a retrofit decision, and whether it affects their motivations and ambitions for the project. This research gap is addressed by the present study. More specifically, we analyse whether visitors of energy efficiency counseling websites differ from representative samples of the population and from a sample of home renovators in their engagement in retrofits, their energy efficiency ambitions, their profile of psychological variables, and their perceived drivers and barriers.

Our study thus contributes to the literature by providing new insights into how natural users of websites with OSS characteristics differ from the general population of homeowners on a number of psychological and socio-demographic characteristics. On the one hand, this helps to identify the target group for such low-threshold website services; on the other hand, we assess whether their renovation ambitions, and especially the level to which they intend to implement energy efficiency measures in these upgrades, differ after they have visited the service. Through a one-year follow-up, we can also assess to which degree the planned measures were implemented. Taken together, the focus on primarily psychological drivers and barriers of energy efficiency investments in homes for a very specific target group, in comparison to large, representative samples of homeowners, paints a new and informative picture of who the users of these websites are, not only socio-demographically but also psychologically, what they are looking for on these websites, and to which degree the websites support them on their pathway towards more energy efficient homes. Being able to compare a relatively large sample of website users with several large, representative samples surveyed with the same methodology in the same country over the course of 10 years provides a unique opportunity to understand this target group.

2 Materials and Methods

2.1 Study design

For this study, we collected responses from users of two online energy efficiency counseling websites, which have a similar structure and might be conceptualized as OSSs following a facilitation model. These websites offer an analysis of the current energy standard of privately owned residential buildings (either as a guided self-assessment or based on data from the Norwegian building registry). They can also suggest a rough renovation plan and connect the homeowner to potential contractors who can implement energy efficiency measures. Moreover, they can provide information about costs, pay-off rates, subsidies (incl. information on how to apply), etc. Energismart.no is promoted by the environmental organization Friends of the Earth Norway, whereas energiportalen.no is promoted by Viken county. From January 2022 until January 2023, participants for the study were recruited from natural visitors of both websites through messages on the websites and pop-up windows, which promoted participation in our study and provided a link to the online questionnaire. We thus recruited people who visited the websites out of their own interest, without promoting the websites from our end. This sampling strategy was chosen to recruit an ecologically valid group of website users.

In the online survey, participants were then asked about their plans for retrofitting their homes, recently finished or ongoing retrofitting projects, the ambitions for energy efficiency upgrades as part of these retrofits, and psychological drivers and barriers of the decisions.

Since randomization of users of the websites was not possible, as people self-assigned to the websites, we chose a comparison group design, in which we compared the means and distributions of key variables in our survey against representative homeowner data collected in 2014, 2018, and 2023 ( Klöckner and Nayum, 2016 , 2017 ; Egner and Klöckner, 2021 ; Egner et al., 2021 ; Peng and Klöckner, 2024 ) with the same survey instrument (see Table 1 for an overview of the survey samples). Because of this design, we are unable to draw causal conclusions, but we can obtain indications of differences between the samples (for a deeper discussion, see the limitations section below). We were also not able to survey our participants before they entered the websites. Thus, we do not know whether the described differences existed before they used the website, or which differences were caused by the website visit. It is likely that people visit such counseling websites when they have already developed an interest in the information presented there; thus, some of the differences will have existed pre-visit. Especially some of the drivers and barriers, but also some parts of the psychological profile, might fall into that category, and it is important to keep this in mind when interpreting the results. Furthermore, we do not know how long people stayed on the websites, what they read, and how much they used the information to adapt their renovation strategy, which would have given us more insights into their user experience. However, we believe that comparing the visitors to representative homeowners from different historical points in time in the same country, surveyed with the same questionnaire, can give us relevant insights and at least input for generating new hypotheses.


Table 1. Overview of sample statistics in the different samples.

Differences between the samples were identified by comparing 95% confidence intervals for the means. Non-overlapping confidence intervals were interpreted as significant mean differences. Effect sizes for the differences are presented in Supplementary Appendix Table 1 .
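As an illustration of this decision rule (a sketch, not the authors' analysis code), two group means would be flagged as different only when their t-based 95% confidence intervals do not overlap:

```python
import numpy as np
from scipy import stats

def ci95(x: np.ndarray) -> tuple[float, float]:
    """t-based 95% confidence interval for the mean."""
    x = x[~np.isnan(x)]
    half = stats.sem(x) * stats.t.ppf(0.975, len(x) - 1)
    return x.mean() - half, x.mean() + half

def differs(a: np.ndarray, b: np.ndarray) -> bool:
    """True if the two 95% CIs do not overlap."""
    lo_a, hi_a = ci95(a)
    lo_b, hi_b = ci95(b)
    return hi_a < lo_b or hi_b < lo_a
```

Requiring non-overlap is stricter than a conventional two-sample test at the 5% level, so differences flagged this way are, if anything, conservative.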

One year after the participants answered the survey, we approached them again with a short survey asking if and which retrofitting measures had been implemented in the meantime and if not, why. The follow-up survey was sent to every participant who agreed to be contacted again.

The surveys in all the different studies compared here were collected through an online survey platform operated by the University of Oslo (Nettskjema.no). The questions used for the analyses presented in this paper comprised only part of the questionnaires; we describe only the relevant questions below. The full survey can be found in the data repository together with the dataset.

2.2.1 Sociodemographic information

In the surveys, participants were asked about their gender, age, highest education level, gross household income (in the 2023 data collection, individual gross income was recorded), the type of house they lived in, and whether they owned or rented their dwellings. The categories of these variables can be found in Table 1.

2.2.2 Deep renovation

To capture if the participants were just finished, engaged in, or planning what we refer to as a “deep renovation” project, we asked them the following questions:

(1) Within the previous three years, were you involved in a renovation project that involved (a) substantial work on the roof like replacing all tiles, (b) replacing at least 50% of the outer walls, (c) replacing at least 50% of the window area, and/or (d) substantial work on the foundation? This definition was developed for the 2014 study in a collaboration of the researchers behind the studies and the Norwegian Energy Efficiency Agency Enova and used in the same form in all data collections since. The aim of this definition was to differentiate larger renovation projects from smaller, more cosmetic renovation projects.

(2) Are you currently involved in a renovation project according to the definition above or are you planning to engage in such a renovation project within the next three years?

However, the definition does not automatically assume that energy efficiency measures are included in the deep renovation project.

The ambition level of these renovation projects was measured by how many of the four components they (are planning to) implement, and it ranges from 1 to 4.

2.2.3 Energy efficiency upgrade

If participants answered “yes” to either or both of the questions presented in the previous section, they were asked whether that renovation project included, includes, or is planned to include (a) additional insulation of the roof of at least 10 cm, (b) additional insulation of the walls of at least 5 cm, (c) energy saving windows with a U-value of 1.0 or lower, (d) at least 5 cm of additional insulation of the foundation walls, (e) installation of mechanical ventilation, and/or (f) installation of balanced ventilation. Here too, the definition of these measures was agreed upon with Enova in 2014 to represent a substantial improvement in the energy standard of the respective building component. For our analyses, we counted the number of these measures that had been or were planned to be implemented in the deep renovation project. The count could thus range from 0 to 6.
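Both ambition scores described in sections 2.2.2 and 2.2.3 are plain counts over binary indicators. A minimal sketch (with hypothetical file and column names, since the dataset's variable names are not given):

```python
import pandas as pd

# Hypothetical 0/1 indicator columns for the four deep-renovation components
# and the six energy-efficiency measures described above.
RENOVATION = ["roof_work", "walls_50pct", "windows_50pct", "foundation_work"]
UPGRADES = ["roof_insul_10cm", "wall_insul_5cm", "windows_u1_0",
            "found_insul_5cm", "mech_vent", "balanced_vent"]

df = pd.read_csv("survey.csv")                           # assumed data file
df["renovation_ambition"] = df[RENOVATION].sum(axis=1)   # 1..4 among renovators
df["energy_ambition"] = df[UPGRADES].sum(axis=1)         # 0..6
```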

2.2.4 Personal norms, social norms, attitudes, and efficiency

Based on the Theory of Planned Behaviour ( Ajzen, 1991 , 182), extended by personal norms from the Norm-Activation Model ( Schwartz and Howard, 1981 ), four psychological variables are central to understanding people’s intentions: attitudes, social norms, perceived behavioral control or behavioral efficacy, and personal norms. Each of these variables was measured by two items in the surveys, with a 7-point Likert scale from −3 to +3. Higher values indicate stronger norms, attitudes, or efficacy.

The two items to measure social norms were “People who influence my decisions think I should insulate my home” and “People who are important to me think I should retrofit my home”. The two items to measure perceived efficacy were “I know which person or company I need to contact to have my home professionally insulated” and “I know what I need to do to insulate my home”. The two items to measure personal norms were “Because of my values/principles, I feel obliged to insulate my home” and “I feel personally obliged to retrofit my home”. For each pair of items, the mean score was calculated and used in subsequent analyses.

Attitudes were measured with four semantic differentials: “Increasing the energy standard of my home would be (a) useless−useful, (b) uncomfortable−comfortable, (c) unfavorable−favorable, and (d) bad−good”. Each pair has −3 as the anchor for the negative word and +3 as the anchor for the positive word. For further analyses, the mean of the four items was calculated.
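
As a concrete illustration of the scoring, the sketch below computes such composite scores as the mean of a construct's items. It is our illustration, not the authors' code; the item values are made up.

```python
import numpy as np

# Minimal sketch (not the authors' code): each construct score is the mean of
# its items, all answered on 7-point scales coded -3..+3. Values are made up.
def construct_score(item_responses: np.ndarray) -> np.ndarray:
    """item_responses has shape (n_respondents, n_items);
    returns the mean score per respondent."""
    return item_responses.mean(axis=1)

# Two personal-norm items for three hypothetical respondents:
personal_norms = construct_score(np.array([[3, 2], [0, -1], [-2, -3]]))
print(personal_norms)  # [ 2.5 -0.5 -2.5]

# Attitudes would use the four semantic-differential items instead:
attitudes = construct_score(np.array([[3, 2, 1, 3], [-1, 0, 0, -2]]))
print(attitudes)  # [ 2.25 -0.75]
```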

All items had been used in identical form since the first study in 2014, as documented elsewhere (Klöckner and Nayum, 2016, 2017). In the 2023 data collection, different answering scales were used; therefore, the results are not comparable and are not reported here (Peng and Klöckner, 2024).

2.2.5 Barriers and facilitators

Finally, a list of potential barriers and facilitators of energy efficiency upgrades was presented to the participants in random order, asking how much they agreed with each item. The items can be found in the Supplementary Appendix. These lists were derived from a qualitative study on the reasons why Norwegians upgrade, or decide not to upgrade, the energy standards of their dwellings (Klöckner et al., 2013). In the 2023 data collection, different answering scales had been used; therefore, the results are not comparable and are not reported here.

2.3 Sample and comparison groups

The sample of counseling website users was recruited from the first week of January 2022 to the first week of January 2023. In total, 437 answers were collected. These answers were not equally distributed over the year, however, as Figure 1 shows. Relatively many responses were collected in winter and early spring 2022; interest declined in late spring and summer before surging after summer 2022 and again in winter 2023. This coincided with electricity price peaks in Norway (especially in the South) and the accompanying media discussion. Thus, a first conclusion is that interest in energy efficiency counseling websites clearly follows energy price fluctuations and the societal discussion that accompanies them.


Figure 1. Number of participants recruited for the counseling website user survey per week in 2022 (the line is the moving average).
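
The smoothing mentioned in the caption can be illustrated with a simple moving average over weekly counts. The sketch below is hypothetical: the window width is our assumption (the text does not state it), and the weekly counts are made up.

```python
import numpy as np

# Hypothetical sketch of the smoothing shown in Figure 1: a simple moving
# average over weekly recruitment counts. Window width and counts are
# assumptions for illustration only.
def moving_average(weekly_counts, window: int = 5) -> np.ndarray:
    kernel = np.ones(window) / window          # equal weights summing to 1
    return np.convolve(weekly_counts, kernel, mode="same")

weekly_counts = [12, 15, 9, 4, 3, 2, 8, 25, 30, 22]  # invented data
print(moving_average(weekly_counts).round(1))
```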

Table 1 below shows the sociodemographic statistics of the counseling website sample in comparison to the existing samples. As can be seen, the samples are comparable on most dimensions. All samples contain close to 50% males and females (with the largest deviation in the 2014 renovator sample). The average age is around 50 years in all samples, with the youngest average in the 2023 population sample and the oldest among the website users. Education varies quite strongly; the 2014 population sample is the outlier, with a far lower education level than all other samples, while participants recruited from the counseling websites had the highest education level. The median gross household income category is the same in most samples, but it is lower in the 2014 population sample and higher among the people who answered the one-year follow-up after visiting the counseling websites. Income categories of the 2023 sample cannot be compared directly, as individual gross income was recorded in that data collection; however, the corresponding average household income appears comparable to the other samples. The proportion of people living in detached houses is particularly high in the website user sample and the 2014 renovator sample, and the share of people owning their dwelling is close to 100% in these two groups and somewhat lower in all others. In conclusion, the samples are comparable on most dimensions, and the website users are most similar to the people recruited in 2014 while engaged in a renovation project: they tend to be better educated, more likely to live in a detached house, and more likely to own their dwelling than representative samples of Norwegian households.

In the following section, we present the results of comparing the counseling website users with the other available samples. To do this, we examine the 95% confidence intervals displayed in the figures for overlaps between the group of website users and the other groups. Because the data are partly stored in separate datasets, we did not calculate formal significance tests; instead, a non-overlapping 95% confidence interval is treated as indicating a significant difference between the respective groups. The numbers for the website users are always highlighted in the figures. Effect sizes are reported in Supplementary Appendix Table 1. An overview of all results can be found in Table 2.
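
The overlap heuristic can be made concrete with a short sketch. This is our illustration, not the authors' code: it uses a simple Wald-type 95% confidence interval for a group proportion (the paper does not state which interval formula was used), and the counts are invented.

```python
import math

# Sketch of the comparison heuristic described above (illustration only):
# build 95% CIs for two group proportions and read non-overlap as an
# assumed significant difference.
def prop_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)  # Wald interval half-width
    return p - half_width, p + half_width

def cis_overlap(ci_a: tuple[float, float], ci_b: tuple[float, float]) -> bool:
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

website_ci = prop_ci(210, 437)      # hypothetical: website users in renovation
population_ci = prop_ci(300, 1000)  # hypothetical: population sample
print("treated as significant:", not cis_overlap(website_ci, population_ci))
```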


Table 2. Summary of the differences between the website visitors and the representative homeowner samples from 2014, 2018, and 2023, as well as the renovator sample from 2014.

3 Results

3.1 Engagement in deep renovation

As can be seen in Figure 2, the percentage of people who were involved in a deep renovation project is higher among the counseling website users than in all three population samples. The same holds for ongoing or planned deep renovation projects, which are also more common among people visiting the energy counseling websites. Only the group specifically recruited in 2014 to contain only respondents who had just completed, were still engaged in, and/or were planning a deep renovation project in the near future shows higher numbers (which is not surprising). Interestingly, the number of finished and planned projects in the population sample is lower in 2023 than in 2018 and 2014, likely an effect of renovation saturation after the COVID years.


Figure 2. Percentage of households per group who were, are, or plan to be engaged in a deep renovation project (see definition in the text). The columns with the bold lines are the users of the counseling websites, whiskers represent 95% confidence intervals (CI), non-overlapping CI are regarded as indicating a statistically significant difference.

Among the users of the energy counseling websites, the ambition level is higher than in any other group, for finished, ongoing, and planned projects alike (see Figure 3). This means that they are engaged in slightly larger projects, involving more of the four potential measures (walls, windows, roof, foundation). Thus, these people are, or plan to be, involved in more comprehensive renovation projects.


Figure 3. Ambition of the deep renovation (how many of the four components are included: walls, windows, roof, and basement). The columns with the bold lines are the users of the counseling websites, whiskers represent 95% confidence intervals (CI), non-overlapping CI are regarded as indicating a statistically significant difference.

3.2 Energy efficiency ambitions

When looking at the level of ambition for integrating energy efficiency upgrades into the renovation projects, the picture is even more interesting (see Figure 4). Among the users of the energy counseling websites, the ambition level is substantially higher than in any other group, for finished, ongoing, and planned projects alike. As a side note, even though the total percentage of people involved in deep renovation was lower in the 2023 population than in 2014 and 2018, the degree to which energy efficiency measures are included is increasing, as can be seen in Figures 2, 4. This may be an effect of the energy crisis in Europe in 2022.


Figure 4. Ambition of the energy retrofit as part of the renovation (how many of the following energy efficiency measures are included: additional wall insulation, better windows, additional roof and basement insulation, a balanced ventilation system, and a heat pump). The columns with the bold lines are the users of the counseling websites, whiskers represent 95% confidence intervals (CI), non-overlapping CI are regarded as indicating a statistically significant difference.

3.3 Psychological drivers

When comparing the psychological profiles of the website users to the population profiles from 2014 and 2018, it can be seen that the website users have substantially higher personal norms, indicating that they feel more moral pressure to increase the energy efficiency of their dwellings (see Figure 5). They also report stronger social norms, that is, more social pressure from their peers to engage in such energy upgrades. For attitudes, the differences are smaller: attitudes are slightly more positive than in the population samples and on the same level as for the renovators in 2014. Interestingly, despite small differences, the website users have the lowest perceived self-efficacy, especially compared to the renovators in 2014; they feel less convinced that they know how to go about the renovations.


Figure 5. Means in key psychological variables driving the decision to renovate and energy upgrade. The bold black line is the sample from the counseling websites, whiskers represent 95% confidence intervals (CI), non-overlapping CI are regarded as indicating a statistically significant difference.

3.4 Facilitators and barriers of energy efficiency upgrades

Figures 6, 7 show how the website users perceive facilitators and barriers of energy efficiency upgrades of their dwellings compared to people in the other samples. For some facilitators and barriers, the differences are substantial: counseling website users expect more comfort, a cost reduction, a house that is better to live in, increased property value, and less wasted energy as a result of the renovation. However, they score lowest of all samples on availability of information, payback time, and availability of subsidies.


Figure 6. Means in key facilitators for an energy upgrade. The bold black line is the sample from the counseling websites, whiskers represent 95% confidence intervals (CI), non-overlapping CI are regarded as indicating a statistically significant difference.


Figure 7. Means in key barriers towards an energy upgrade. The bold black line is the sample from the counseling websites, whiskers represent 95% confidence intervals (CI), non-overlapping CI are regarded as indicating a statistically significant difference.

For the barriers, they score particularly high on perceptions of the renovation taking too much time, on lack of money, on difficulty finding information, on being unable to decide what to do, and on a lack of capable contractors. They score lower on perceptions of it not being the right time to act.

3.5 Implemented energy efficiency actions

In the one-year follow-up, the participants of the energy counseling website survey were contacted again and asked whether they had implemented the planned actions. In total, 201 participants (46.0% of all participants) gave permission to be contacted a year after completing the initial survey, and 78 of them (38.8% of those willing to be contacted) answered the short follow-up survey.

Of the 78 participants, 25 (32.1%) stated that they had implemented the energy efficiency upgrades they had been planning. Of these, 29.2% changed at least 50% of the outer walls, 45.8% worked on the roof, 45.8% on the windows, and 37.5% on the foundation walls.

Of the 25 who implemented measures, 15 added at least 5 cm of insulation to the walls, 13 installed highly efficient windows (U-value of 1.0 or lower), 13 installed new mechanical ventilation, 12 insulated the roof with at least 10 cm of additional insulation, 10 insulated the foundation walls with at least 5 cm of additional insulation, and 7 installed a balanced ventilation system. In addition to these measures, 11 installed heat pumps, 11 installed clean-burning wood stoves, and 5 installed solar panels on their houses. Overall, the measures taken were fairly ambitious.

The main reasons for not implementing the planned measures among the remaining follow-up participants were lack of funding (57.1%), lack of subsidies (42.9%), and the time not yet being right to start the renovation, again reflecting some of the main barriers outlined in the introduction.

4 Discussion

The study conducted with the users of two energy efficiency counseling websites had three aims: (a) finding out whether users of the websites differed from representative samples of Norwegian households in their engagement in retrofits and had higher ambitions for their renovation projects and the energy efficiency measures embedded in them, (b) finding out whether they differ in their psychological profile on central variables driving the decision-making process, and (c) finding out whether they perceive facilitators and barriers in this process differently than representative samples of households. Furthermore, a follow-up study aimed to find out how many participants implemented their ambitions up to a year later.

For all three main questions, we find substantial differences. Whereas the website users are mostly comparable to the general population of Norwegian households regarding socio-demographics (though with a higher education level and an even smaller percentage of people renting their dwelling, which reflects the drivers for renovation projects identified by Pardalis, 2021), their psychological profile differs in two important respects. Compared to all other samples (including the renovators studied in 2014), the website users have far higher levels of personal norms (they feel they really should do something about the energy standard of their homes) and also higher social norms. Considering the importance of these two factors for intentions to implement energy renovations (Klöckner and Nayum, 2017, 1014), this finding is relevant: having such high levels of these two variables makes it more likely that people will form intentions to improve the energy standard of their homes. It also indicates that people like these are a prime target group for interventions like OSSs: they are already motivated to take action because they have high energy-related moral standards and feel the social pressure of their peer groups.

Since we could not survey these people before they went to the website, we do not know whether they already had such high personal and social norm values before the visit. However, since one of the websites is promoted by the environmental organization Friends of the Earth Norway, it can be assumed that this is the case. Interestingly, users of the counseling websites had a slightly lower level of self-efficacy, especially compared to the renovators from 2014. This implies that low self-efficacy might be a barrier to implementing the intentions they form, and perhaps also a reason for visiting the websites. Again, this makes this group a very attractive target for OSS-type interventions: a well-designed OSS can alleviate low self-efficacy by reducing uncertainties, providing the requested information, and, not least, by connecting the homeowners' urge to act with the competence they lack, supplied by skilled and trustworthy contractors. This finding is very much in line with what Pardalis (2021) identified as the most important features of OSSs from the perspective of potential users.

The counseling website users also differed substantially from the other groups on some of the facilitators and barriers analysed. In particular, expected increases in comfort, expected cost reductions, and the expectation of having a better house to live in after the renovation were more important facilitators for website users than for the population samples or the renovators. Expecting an increased value of the house after the renovation also scored higher than in the population samples, though at the same level as for the renovators. Perceiving the current energy standard as a waste again stood out for the website users. This indicates that they enter the process with a different, more energy-interested perspective (or become convinced of it by visiting the website). Interestingly, counseling website users score lower on perceptions that information is easy to find and that subsidies are accessible. This may also be part of why they ended up on the websites in the first place.

Among the barriers, the website users mention the time demand of supervising the project and the lack of money far more often as main barriers. They thereby point to the need for a facilitator (or even a manager) of the renovation process, again a function OSSs typically fill. The websites we studied follow a facilitation model but still leave the management of the project to the homeowners; from their answers, we can conclude that many of them would actually prefer a more comprehensive model. Here, too, they reiterate that they consider information hard to find, that they cannot decide what to do, and that contractors lack competence; the latter three again might be reasons for being interested in the website services in the first place. The websites seem to partly satisfy their needs, as can be seen in the substantial share of website visitors who implement their renovation plans within a year. However, some still face the same lack of support and the same barriers after a year; for them, a more comprehensive OSS model with a higher degree of process management might be more appropriate. In line with the renovators from 2014, the website users are less often unsure whether the right point in time for a renovation project has come. Overall, the order of importance of renovation facilitators and barriers largely reproduces what has been found in earlier studies (Klöckner et al., 2013; Klöckner and Nayum, 2016, 2017; Bertoldi et al., 2021; Xue et al., 2022).

Most importantly, we found that the visitors of the websites had stronger ambitions for their renovation projects, and in particular for the implementation of energy efficiency measures as part of them. Of course, we do not know whether this was caused by visiting the websites or whether their ambitions were already higher before the visit; nevertheless, we can assume at least some mutual influence. People with stronger motivation who are unsure about how to implement it visit the websites, which then confirm their motivation and provide hands-on counseling to remove implementation barriers, eventually resulting in higher ambitions. This is good news for the OSS concept, even the low-threshold version of it that these websites represent (McGinley et al., 2020). However, not all visitors seem to receive from these websites what they need. For the future, it might be advisable to use low-threshold, facilitation-model OSSs like the ones studied here as an entry point, but to implement an (automated, perhaps AI-based) detection of who would benefit from more comprehensive OSS models, so that these people can be channeled to the offers that better suit their needs.

Finally, we could at least tentatively show, even if based on relatively few cases and subject to large sample attrition, that about one-third of the participants manage to implement their energy upgrade intentions. These people usually combine several measures and implement a deep renovation. For them, the websites seem to have provided a push in the right direction without much effort. As such, these websites have their niche: as gatekeepers to a deeper process for some people, and as the final push and reassurance for others.

5 Limitations and future research needs

Even if the study presented here shows some interesting results in a field where more research is needed, there are a number of limitations, mostly caused by the design we had to choose. The biggest limitation is that the participants recruited among the website users were, for obvious reasons, not randomly assigned to use the website but self-selected, and they were not surveyed before their visit to the website, a limitation already discussed in the methodology section. In addition, the users of the website fall into a narrower sociodemographic category than the population samples, though they seem rather comparable with the people engaged in renovation projects six years prior to our study. Furthermore, we do not know how long people stayed on the websites, what they read, and how much they used the information to adapt their renovation strategy.

To address these limitations, studies with more controlled experimental designs would be advisable. Assigning participants randomly to different conditions (including no OSS and different models of OSS) would give a better understanding of the effects of the OSS and of the differences people bring into the process. Such a study could also test whether different forms of OSS interact with different sociodemographic and psychological profiles of homeowners; in simple words, it might answer the question of which form of OSS works for which type of homeowner.

6 Conclusion

One-stop-shops have been promoted as a measure to overcome the inertia in energy efficiency retrofitting, especially in the privately owned residential building stock. Results from our study on users of two Norwegian energy efficiency counseling websites, which offer services in many ways similar to an OSS following a facilitator model, show that the users of these websites clearly differ from representative samples of Norwegian households surveyed with similar instruments. Their profiles were more like those of a sample of people who were at the beginning or in the middle of a larger renovation project, surveyed in 2014. However, the results also show that they score substantially lower on perceived access to information and subsidies. Regarding the psychological profiles, they were much more strongly motivated by personal and social norms than average households. Most importantly, visitors of such low-threshold websites appear to have substantially higher ambitions for their energy upgrades, which about one-third of them had implemented a year after visiting the websites. Interest in online energy efficiency counseling services seems to be affected by societal discussions about energy and/or by energy prices, as suggested by the spike in recruitment to our survey coinciding with an energy price increase during 2022 (an intriguing possibility that will need to be confirmed in future studies). From a policy perspective, the results are interesting because they indicate that low-threshold OSSs can act as gateways, capturing people who are motivated for energy efficiency upgrades but unable to make the decision for several reasons. For some of them, the services that these relatively simple online platforms offer are already enough to reduce their uncertainty and make the missing connections. For those still not satisfied after visiting these platforms, future developments should explore whether they can be automatically directed to more comprehensive forms of OSSs.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://zenodo.org/records/10453810.

Ethics statement

The studies involving humans were approved by the Norwegian Agency for Shared Services in Education and Research (SIKT). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

CK: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing–original draft, Writing–review and editing. AN: Data curation, Formal analysis, Writing–original draft, Writing–review and editing. SV: Conceptualization, Funding acquisition, Writing–original draft, Writing–review and editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This study received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 957115 as part of the ENCHANT project: www.enchant-project.eu. Data for three of the comparison groups were extracted from two previous projects funded by the Norwegian Energy Efficiency Agency, and one comparison group was extracted from data from an ongoing project funded by the Research Council of Norway (BEHAVIOUR, Project No. 308772).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2024.1364980/full#supplementary-material

Footnotes

  1. ^ https://zenodo.org/records/12605729

References

Ajzen, I. (1991). The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50, 179–211. doi: 10.1016/0749-5978(91)90020-T


Ajzen, I. (1996). “The directive influence of attitudes on behavior,” in The psychology of action: Linking cognition and motivation to behavior, eds P. M. Gollwitzer and J. A. Bargh (New York, NY: The Guilford Press), 385–403.

Bagaini, A., Croci, E., and Molteni, T. (2022). Boosting energy home renovation through innovative business models: ONE-STOP-SHOP solutions assessment. J. Clean. Prod. 331:129990. doi: 10.1016/j.jclepro.2021.129990


Bertoldi, P., Boza-Kiss, B., Valle, N. D., and Economidou, M. (2021). The role of one-stop shops in energy renovation - a comparative analysis of OSSs cases in Europe. Energy Build. 250:111273. doi: 10.1016/j.enbuild.2021.111273

Biere-Arenas, R., and Marmolejo-Duarte, C. (2022). “One stop shops on housing energy retrofit. European cases, and its recent implementation in Spain,” in Proceedings of the international conference on sustainability in energy and buildings , (Singapore: Springer Nature Singapore), 185–196.


Ebrahimigharehbaghi, S., Qian, Q. K., de Vries, G., and Visscher, H. J. (2022). Application of cumulative prospect theory in understanding energy retrofit decision: A study of homeowners in the Netherlands. Energy Build. 261:111958.

Egner, L. E., Klöckner, C. A., and Pellegrini-Masini, G. (2021). Low free-riding at the cost of subsidizing the rich. Replicating Swiss energy retrofit subsidy findings in Norway. Energy Build. 253:111542.

Egner, L. E., and Klöckner, C. A. (2021). Temporal spillover of private housing energy retrofitting: Distribution of home energy retrofits and implications for subsidy policies. Energy Policy 157:112451.

IEA (2023). Buildings. Available online at: https://www.iea.org/energy-system/buildings (accessed July 01, 2024).

Judson, E., and Maller, C. (2014). Housing renovations and energy efficiency: Insights from homeowners’ practices. Build. Res. Inform. 42, 501–511.

Klöckner, C. (2013). A comprehensive model of the psychology of environmental behaviour–A meta-analysis. Glob. Environ. Change 23, 1028–1038.

Klöckner, C., and Nayum, A. (2016). Specific barriers and drivers in different stages of decision-making about energy efficiency upgrades in private homes. Front. Psychol. 7:1362. doi: 10.3389/fpsyg.2016.01362

Klöckner, C., and Nayum, A. (2017). Psychological and structural facilitators and barriers to energy upgrades of the privately owned building stock. Energy 140, 1005–1017.

Klöckner, C., Sopha, B. M., Matthies, E., and Bjørnstad, E. (2013). Energy efficiency in Norwegian households–identifying motivators and barriers with a focus group approach. Int. J. Environ. Sustain. Dev. 12, 396–415.

McGinley, O., Moran, P., and Goggins, J. (2020). “Key considerations in the design of a one-stop-shop retrofit model,” in Civil Engineering Research in Ireland, Vol. 5. Available online at: https://sword.cit.ie/ceri/2020/13/5

Nejat, P., Jomehzadeh, F., Taheri, M. M., Gohari, M., Zaimi, M., and Majid, A. (2015). A global review of energy consumption, CO2 emissions and policy in the residential sector (with an overview of the top ten CO2 emitting countries). Renew. Sustain. Energy Rev. 43, 843–862. doi: 10.1016/j.rser.2014.11.066

Pardalis, G. (2021). Prospects for the development of a one-stop-shop business model for energy-efficiency renovations of detached houses in Sweden. Gothenburg: Linnaeus University Press.

Pardalis, G., Mahapatra, K., and Mainali, B. (2022). Comparing public-and private-driven one-stop-shops for energy renovations of residential buildings in Europe. J. Clean. Prod. 365:132683. doi: 10.1016/j.jclepro.2022.132683

Peng, Y., and Klöckner, C. A. (2024). “Factors affecting Norwegian households’ adaptive energy-efficient upgrades in response to the energy crisis,” in Poster presented at the ECEEE summer study (Lac d’Ailette).

Pohoryles, D., Maduta, C., Bournas, D. A., and Kouris, L. A. (2020). Energy performance of existing residential buildings in Europe: A novel approach combining energy with seismic retrofitting. Energy Build. 223:110024. doi: 10.1016/j.enbuild.2020.110024

Schlacke, S., Wentzien, H., Thierjung, E. M., and Köster, M. (2022). Implementing the EU Climate Law via the ‘Fit for 55’ package. Oxford Open Energy 1:oiab002. doi: 10.1093/ooenergy/oiab002

Schwartz, S. H., and Howard, J. A. (1981). “A normative decision-making model of altruism,” in Altruism and helping behavior, eds J. P. Rushton and R. M. Sorrentino (Hillsdale, NJ: Lawrence Erlbaum).

Tsemekidi Tzeiranaki, S., Bertoldi, P., Diluiso, F., Castellazzi, L., Economidou, M., Labanca, N., et al. (2019). Analysis of the EU residential energy consumption: Trends and determinants. Energies 12:1065.

Xue, Y., Temeljotov-Salaj, A., and Lindkvist, C. M. (2022). Renovating the retrofit process: People-centered business models and co-created partnerships for low-energy buildings in Norway. Energy Res. Soc. Sci. 85:102406. doi: 10.1016/j.erss.2021.102406

Keywords: energy efficiency, renovation, one-stop-shops, counseling, psychological drivers, theory of planned behaviour, personal norms, facilitators

Citation: Klöckner CA, Nayum A and Vesely S (2024) Understanding users of online energy efficiency counseling: comparison to representative samples in Norway. Front. Psychol. 15:1364980. doi: 10.3389/fpsyg.2024.1364980

Received: 03 January 2024; Accepted: 18 July 2024; Published: 06 August 2024.

Copyright © 2024 Klöckner, Nayum and Vesely. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Christian A. Klöckner, [email protected]
