
The use of recommended interviewing practices by novice engineering designers to elicit information during requirements development

Published online by Cambridge University Press:  16 May 2022

Ibrahim Mohedas
Affiliation:
Department of Mechanical Engineering, University of Michigan, 2350 Hayward St, 3405 George G. Brown, Ann Arbor, MI 48109, USA
Shanna R. Daly
Affiliation:
Department of Mechanical Engineering, University of Michigan, 2350 Hayward St, 3405 George G. Brown, Ann Arbor, MI 48109, USA
Robert P. Loweth
Affiliation:
Department of Mechanical Engineering, University of Michigan, 2350 Hayward St, 3405 George G. Brown, Ann Arbor, MI 48109, USA
Linh Huynh
Affiliation:
School of Education, University of Michigan, 610 E University Ave, Ann Arbor, MI 48109, USA
Grace L. Cravens
Affiliation:
Department of Mechanical Engineering, University of Michigan, 2350 Hayward St, 3405 George G. Brown, Ann Arbor, MI 48109, USA
Kathleen H. Sienko*
Affiliation:
Department of Mechanical Engineering, University of Michigan, 2350 Hayward St, 3405 George G. Brown, Ann Arbor, MI 48109, USA
* Corresponding author: K. H. Sienko (sienko@umich.edu)

Abstract

Effective stakeholder interviewing is a critical component of a design process. However, interviewing is a complex skill that is difficult for novice designers to learn and incorporate into their design practices. Few studies have investigated how novice designers apply recommended practices for interviewing stakeholders during the development of product requirements. In this research, we studied how novice designers elicited information to inform the development of product requirements during stakeholder interviews. Results included the establishment of a coding methodology developed from a systematic literature review of recommended interviewing practices that was used to reliably evaluate the use of recommended practices in novice designers’ interviews. A correlation existed between the use of recommended practices and the extent to which information gathered from interviews was incorporated into the requirements. Additionally, specific recommended practices, such as encouraging deep thinking and being flexible and opportunistic, differentiated performance among novice designers. The coding methodology could be adapted to guide the development of stakeholder interview protocols and assessment of design interview skills.

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

To design effectively, one must identify and deeply integrate the needs and wants of stakeholders (including both users and others impacted by the design) into design decisions (Zhang & Dong 2009). Effective stakeholder engagement is a critical component of a design process and has been shown to lead to more useful, usable, and desirable products (Steen, De Koning & Pikaart 2004; Walters 2005; Zhang & Dong 2009). Design ethnography methods facilitate information elicitation from a variety of stakeholders and comprise interviews, focus groups, surveys, observations, participatory design workshops, and co-creative partnerships (Grudin & Grinter 1995; Crabtree, Rouncefield & Tolmie 2012). Design interviews, often in combination with other methods, are prominently used to gather stakeholder information during front-end design, as they provide opportunities for in-depth conversations with individuals that facilitate hearing first-hand accounts of real experiences (Alam 2002; Vredenburg et al. 2002; Rosenthal & Capper 2006).

While stakeholder engagement has been demonstrated to improve product success, studies have documented that novice designers sometimes struggle to elicit information from stakeholders effectively (Bano et al. 2019; Loweth et al. 2020a) and to apply information from stakeholders to inform their design decisions (Sugar 2001; Mohedas, Daly & Sienko 2014; Mohedas et al. 2020). Further, research has shown that novice designers’ engagement with stakeholders lessens as their design projects progress (Lai, Honda & Yang 2010; Loweth et al. 2019, 2021). Since many design processes recommend stakeholder interviews to support design decision making, and since understanding stakeholder perspectives is central to design success, this research sought to investigate how novice designers elicit information to inform the development of product requirements during stakeholder interviews.

Given the importance of design interviewing among various design approaches, it is imperative to more fully understand the practices that precipitate success. Thus, our work sought to (a) develop a methodology to evaluate the performance of novices during stakeholder design interviews based on commonly-described recommended practices compiled from design literature, (b) apply this method to data collected during a front-end design task to evaluate the performance of novice designers, (c) determine relationships between the prevalence of recommended interviewing practices and the application to requirements development of information gathered through design interviews, and (d) identify recommended interviewing practices most associated with differentiating performance.

2. Background

2.1. Front-end design and requirements elicitation

Activities associated with front-end design phases, including problem definition, requirements and specifications development, and concept generation, are often described as ambiguous and ill-defined (Cooper 1988; Sanders & Stappers 2008; Park, Han & Childs 2021). Front-end design involves developing a deep understanding of the problem, which is accomplished through iteration of both the problem definition and potential solutions, defined as problem-solution co-evolution (Murphy & Kumar 1997; Dorst & Cross 2001; Dorst 2019). During front-end design, the iterative nature of problems and solutions is evident through evolving product requirements, which define the design problem and focus designers’ efforts. Research has tied many product failures to errors made during front-end design phases that could not be cost-effectively resolved during later design phases (Cooper 1988; Park et al. 2021). Studies have also shown that the success of new products depends in part upon how well designers execute front-end design activities during product development (Khurana & Rosenthal 1998; Markham 2013).

A key component of front-end design that impacts product success is the elicitation and development of product requirements (Cooper & Kleinschmidt 1987; Hein, Voris & Morkos 2018; Park et al. 2021). Product requirements are any function, constraint, or other property required for a designed artefact to meet the needs or wants of stakeholders (Pahl & Beitz 2007; Dieter & Schmidt 2021). These requirements are then translated into engineering specifications that are both quantifiable and measurable in order to guide engineers’ solution development processes. The ambiguous and iterative process of developing product requirements and translating these requirements into engineering specifications is a challenging undertaking in design work (Sutcliffe & Sawyer 2013; Ferrari, Spoletini & Gnesi 2016; Rosa et al. 2021). Studies have demonstrated that over half of the product requirements generated during initial problem definition are changed over the course of product development (Kobayashi & Maekawa 2001; Ramzan & Ikram 2005; Hein et al. 2018). Furthermore, mistakes made during the development and management of product requirements can lead to increased complexity, development times and development costs (Hein et al. 2018; Rosa et al. 2021).

2.2. Interviewing to elicit requirements

Stakeholder interviews provide key information to develop product requirements and engineering specifications. The importance of effective stakeholder interviews during design processes, particularly during requirements elicitation, has been documented in a range of fields including automotive systems (Islam & Omasreiter 2005), medical device development (Martin & Barnett 2012), human-computer interaction (Light 2006) and consumer product design (Rosenthal & Capper 2006). Furthermore, interviewing stakeholders is a practice that spans numerous design approaches and philosophies including socially-engaged design, human-centred design, inclusive design, participatory design, design ethnography and lead user approach (Steen, Kuijt-Evers & Klok 2007; Crabtree et al. 2012; Coleman et al. 2016; Luck 2018).

Existing resources on design interviewing have focused on describing a variety of interview question types and organisational strategies that support the flow of design interviews (Eris 2003; Ozgur 2004; Aurisicchio, Bracewell & Wallace 2006). For example, the IDEO toolkit (IDEO 2015) and interview materials from the d.school (2015) provide overarching strategies for interacting with stakeholders, along with example questions for various contexts. However, there is opportunity for more nuanced and specific resources that also leverage the broader literature from across disciplines (e.g., Moody, Blanton & Cheney 1998; Browne & Rogich 2001) to provide additional support structures for designers in preparing and conducting design interviews with stakeholders. For example, some strategies mentioned in the literature include ensuring that the most important topics are covered during a design interview (Burnay, Jureta & Faulkner 2014), that questions are appropriate and unbiased (Wetherbe 1991), that questions focus on uncovering stakeholders’ feelings and complete perspective on a given topic (Donoghue 2010), and that interviews lead to a broader social, political, or cultural understanding of the design problem (Goguen & Linde 1993; Leydens & Lucena 2009). Recent literature in the fields of human–computer interaction (e.g., Ogbonnaya-Ogburu et al. 2020) and participatory design (e.g., Harrington, Erete & Piper 2019) has also emphasised that designers should reflect on how their personal and social identities may impact the types of information that stakeholders may be able or willing to share during interviews. Numerous challenges also exist in addition to executing the interview, such as planning and preparing for interviews, gathering information from multiple stakeholders, synthesising these data, and analysing the data to make design decisions.

2.3. Challenges conducting design interviews

Previous studies have described some of the struggles of novice designers as they engage in design interviewing, such as asking open-ended questions during interviews (Bano et al. 2019; Loweth et al. 2020a) and adopting interviewee language (Luck 2007; Mohedas et al. 2014; Bano et al. 2019; Loweth et al. 2020a). For example, Bano et al. (2019) observed 110 first-year graduate students studying information technology as they conducted requirements elicitation interviews for a simulated design task. The authors identified several challenges that participants encountered while conducting their interviews, including asking poorly worded questions, asking questions in an illogical order that interrupted interview flow, using language that might be confusing for the interviewee, and treating the interview more as an interrogation than a conversation. In addition, Loweth et al. (2020a) analysed stakeholder and domain expert meetings conducted by six novice design teams in the context of a capstone design course. They identified 11 different interviewing behaviours exhibited by their participants that did not align with recommended practices, such as asking closed-ended questions that implicitly constrained interviewees’ responses and utilising overly technical language that was misaligned with interviewees’ domain knowledge.

Studies have shown that novice designers may also encounter broader difficulties related to gathering and applying information from stakeholders to inform their requirements development and/or solution development processes (Sugar 2001; Mohedas et al. 2014, 2020; Conradie, De Marez & Saldien 2017; Loweth et al. 2019, 2020b). For example, Mohedas et al. (2014), in their study of capstone design students’ use of design ethnography, found that participants struggled to identify stakeholders who could provide needed design information. Participants also struggled to navigate contradictions between stakeholders’ responses. Furthermore, Sugar (2001), in their study of novice software designers, and Conradie et al. (2017), in their study of novice engineering designers, both described cases where interviews with users had little to no impact on novice designers’ solution development outcomes. Novice engineering designers’ difficulties with gathering and applying stakeholder information may stem in part from technocentric beliefs that engineering work primarily involves designing and building technological components (e.g., as described in Khosronejad, Reimann & Markauskaite 2021). Loweth et al. (2019) and Niles et al. (2020) have shown that engineering students with more technocentric mindsets struggle to engage stakeholders effectively and integrate stakeholder perspectives into their design projects.

While novice challenges with conducting design interviews have been well documented in the literature, these challenges are not uniformly encountered by all novice designers. For instance, Loweth et al. (2020a) observed several instances of novice designers employing interviewing approaches that aligned well with recommended interview practices. The goal of our work was to explore in greater depth the variation in how novice designers approached stakeholder interviews and determine how differences in interview approaches may relate to novice designers’ subsequent use of interview data to inform their requirements development processes.

3. Methods

3.1. Research objectives

The study was guided by the following research questions:

  (i) RQ1: How can novice designer performance of stakeholder interviews be reliably assessed?

  (ii) RQ2: How does interview performance relate to the use of interview data in the subsequent development of product requirements?

  (iii) RQ3: How does the use of recommended practices during interviews vary across participants?

We posited that the breadth of recommended interviewing practices documented throughout the literature could collectively be used to assess interview performance. We also hypothesised that the use of more literature-recommended practices would yield more useful interview data, which would be evident in participants’ developed product requirements. Using the developed assessment of interview performance, we sought to identify what (if any) recommended practices were leveraged most frequently by participants whose interviews contained the most instances of recommended interviewing practices overall.

3.2. Participants

Eight students from two sections of a mechanical engineering capstone design course volunteered to be participants for the design task. Recruitment from this course ensured that all volunteers had completed or were in the final weeks of their capstone design course. Among the eight participants, five were male and three were female, and seven were mechanical engineering majors and one was a biomedical engineering major. Participants completed a pre-task survey detailing their prior design and stakeholder engagement experiences. We used the information collected through the survey and Dreyfus’ model of skill acquisition (Mohedas, Daly & Sienko 2016) to classify participants’ expertise. All participants were classified as novice designers because of their minimal prior design experience. All participants provided written informed consent, and the study was approved by the University of Michigan Institutional Review Board. Participants were compensated $16 per hour for their time.

3.3. Data collection

We developed a design task to investigate the behaviours of participants as they interviewed stakeholders as a source for developing product requirements. Participants were provided the following prompt:

You are currently working for a large toy company that specializes in toys for young children (0 to 10 years old). You’ve just received a job assignment from your boss. The executives have decided they would like to begin to develop toys that aid young children, between 1 and 5 years of age, in developing their cognitive abilities, specifically children’s ability to explore and learn about cause and effect. You have been assigned the job of investigating this idea in order to understand the design problem, develop user requirements, and translate these user requirements into engineering specifications. You will be using standardized templates that your company has developed in order to document the user requirements and engineering specifications. In the future, you and a team will design the toy based precisely on the user requirements and engineering specifications you are developing, so be sure to include as much detail as possible. For now, your boss only wants to see the user requirements and engineering specifications.

The problem was formulated to be easily understood by all participants, but did not coincide with any of the participants’ prior design experiences, a strategy consistent with other design task studies (e.g., Dally & Zhang 1993; Atman et al. 1999, 2007).

Participants had six and a half hours to work on the design task – three hours of design activity in the morning and three and a half hours in the afternoon. During the task, participants were provided with access to a broad range of resources: academic literature, books, Consumer Product Safety Commission guidelines, standards associated with toy development, example toys for the appropriate age range, the opportunity to observe and engage with children playing with toys, and the opportunity to interview various stakeholders. Participants were given standardised templates to document their product requirements and engineering specifications. They were also prompted to indicate the information sources that informed each requirement/specification. The stakeholder interviews (which are the basis for the study presented in this article) were individually scheduled by the participants. Stakeholders available included: two parents, two teachers, an education expert (PhD in education), a psychology expert (PhD in psychology) and a children’s product safety expert. Participants were allowed to schedule as many interviews within the design task period as desired, and they were given the option to interview multiple stakeholders simultaneously (i.e., in a small group format) as well as to re-interview stakeholders if they wanted. Interviews were scheduled on a first-come, first-served basis; no participants expressed lack of access to interview time slots (approximately 45% of the interview time slots were used).

Data collection included audio and video recordings of stakeholder interviews, audio and video recordings of a focus group discussion with stakeholders, audio recordings of post-task interviews with participants, stakeholders’ evaluations of each interview, and stakeholders’ evaluations of each participant’s requirements. Additionally, overhead video camera footage of the computer workstation room, screenshots (every 20 seconds) of participants’ computers, computer and network surveillance data, participants’ notebooks, and post-task information-use surveys were also collected, but not used in this analysis. For the study reported here, data analysis focused on the audio and video recordings of all stakeholder interviews conducted by participants and the final product requirements and engineering specifications developed by participants.

3.4. Data analysis

Data analysis consisted of three phases. First, we developed a coding methodology for recommended design interviewing practices based on academic literature, established its inter-rater reliability, and used this coding methodology to evaluate the transcripts of the stakeholder interviews conducted by participants. With this evaluation, we developed a generalised linear model to compare participants’ performance in stakeholder interviews, where the dependent variable was the number of recommended practices per question asked by participants.

Next, we assessed the ‘interview effect’ for each participant, which we defined as the extent to which the information elicited during interviews was reflected in the product requirements (which included detailed engineering specifications) developed by the participants. We investigated if the interview effect metric was correlated with interview performance (measured by recommended practice usage). Finally, we examined specific recommended practices used across participants. We discuss these three phases of analysis as follows.

Phase 1 analysis

A literature search was conducted (June–July 2015), drawing from literature related to stakeholder interviews within a design context. With the overarching topic of design guiding our search, we explored a range of application areas (e.g., automotive systems, medical device development, human-computer interaction, and consumer product design) and design philosophies and methods (e.g., human-centred design, participatory design, design ethnography, contextual design, and lead user approach). The search was conducted by the first author on Web of Knowledge and included combinations of the following groups of terms: (a) stakeholder*, user* and end-user*, (b) design*, engineer*, and technology*, and (c) interview*, talk*, and inquiry*. One term from each group was used for each search (e.g., Search 1: stakeholder* AND design* AND interview*). Of the 1105 articles reviewed, 188 were found to be of direct interest because they included some reference to stakeholder interviews during design (or related tasks).
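Because one term was taken from each of the three groups, the search strategy amounts to a full cross product of the groups (27 query strings in total). The following sketch, offered purely as an illustration of that enumeration using the terms listed above, shows how the query strings could be generated; the variable names are our own.

    # Minimal sketch: enumerate the 3 x 3 x 3 = 27 query strings formed by
    # taking one term from each search-term group described above.
    from itertools import product

    group_a = ["stakeholder*", "user*", "end-user*"]
    group_b = ["design*", "engineer*", "technology*"]
    group_c = ["interview*", "talk*", "inquiry*"]

    queries = [" AND ".join(terms) for terms in product(group_a, group_b, group_c)]
    print(len(queries))   # 27
    print(queries[0])     # stakeholder* AND design* AND interview*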

Interviewing requires a complex skillset, and the literature review highlighted the challenges in measuring the quality of an interview. However, we aimed to operationalise interviewing performance to include only the skills that could be identified within interview questions within a transcript of an interview to enable the development of an efficient tool for novice assessment and/or self-assessment. We acknowledge that a more comprehensive assessment would also include gesturing, posture, and other nonverbal behaviours. In the review of the 188 articles, 20 recommended practices were identified and 12 were determined suitable for a coding scheme because they: (a) could be identified by a coder by reading an interview transcript and (b) could be coded at the question level (i.e., is the recommended practice present or not present in the question asked by the participant). Table A3 shows recommended practices that we identified from the literature that were not suitable for coding. The process for determining which recommended practices from the literature to include in our coding scheme involved multiple iterations of a two-stage process. During the first stage, the first author compiled summaries of recommended practices from the literature. During the second stage, the research team discussed the appropriateness of the recommended practice for inclusion in the coding scheme. Following each discussion, the first author gathered additional recommended practice related information from the literature, which was subsequently discussed with the research team until saturation was achieved. The recommended practices in our assessment framework needed to be independent of information that was not available to a coder (e.g., did not depend upon knowing the intent of the interviewer). An example of a recommended practice found in the literature, but not included in our coding scheme was ‘prioritise questions based on the stakeholder’. This recommended practice was not included, because it would have required knowledge of what questions the participant thought were most important for particular stakeholders.

Once the recommended practices that were suitable for our research goals were established from the literature, three coders (members of the research team) participated in the next part of the analysis process. After an initial discussion to clarify the coding scheme, the coders independently reviewed a few transcripts at a time and then discussed the codes for those transcripts, resolving any disagreements, and clarifying the coding scheme as necessary. This coding process included evaluating each question posed by the participants during their stakeholder interviews for each of the 12 recommended practices, noting its presence or absence. Once the coding scheme was finalised, we assessed it for reliability using a subset of five transcripts that were randomly selected for comparison. We selected five transcripts to allow for substantive comparison. Inter-rater reliability was measured using Cohen’s kappa (Cohen Reference Coleman, Clarkson, Dong and Cassim1960); this coefficient ranged from 0.77 to 0.86 (mean 0.82) for the five interviews coded. Kappa coefficients above 0.8 are considered extremely reliable (Krippendorff Reference Lai, Honda and Yang1980; Landis & Koch Reference Lee and Lee2008), while values above 0.60 indicate substantial agreement. Then, coders evaluated the remaining interview transcripts (37 in total) separately using the final coding scheme. The transcripts were divided randomly across the coders. Additionally, portions of these transcripts were coded by multiple coders to ensure that the inter-rater reliability remained at or above the value established from the first five interviews (minimum of 0.77). Ultimately, 589 total questions were coded and the number of questions per participant across all their interviews ranged from 28 to 145.
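For readers who wish to run this kind of reliability check on their own coded transcripts, the following minimal sketch shows one way Cohen’s kappa can be computed for two coders’ question-level presence/absence codes of a single recommended practice. The coder labels and data are hypothetical, not the study’s transcripts, and the use of scikit-learn is our assumption rather than the authors’ tooling.

    # Minimal sketch (hypothetical data): Cohen's kappa for two coders' binary
    # question-level codes of one recommended practice (1 = present, 0 = absent).
    from sklearn.metrics import cohen_kappa_score

    coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # coder A, one label per interview question
    coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]  # coder B, same questions in the same order

    kappa = cohen_kappa_score(coder_a, coder_b)
    print(f"Cohen's kappa = {kappa:.2f}")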

We used this coding of recommended interviewing practices to represent interview performance. To understand how multiple factors potentially impacted interview performance, we developed a generalised linear model. This type of model is used to understand the level of variance associated with a dependent variable that can be attributed to a specific independent variable, while controlling for other confounding variables (Nelder & Wedderburn 1972; Coolican 2009). In our linear model, we investigated how the dependent variable ‘interview performance’ varied as a function of the independent variable ‘participant’ (i.e., interviewer), while controlling for the other variables of interviewee, question number (i.e., the first, second and third question asked in an interview), and interview number (i.e., the participants’ first, second and third interview, during the day-long design task). Through our linear model, we also investigated question number and interview number as independent variables predicting the dependent variable interview performance. In the model, each question asked by a participant represented the unit of analysis.

The dependent variable, ‘interview performance’ was defined as the total number of recommended practices identified by the coder per interview question asked, represented as:

(1)$$ {y}_j=\sum \limits_{i=1}^{12}{x}_{ij}, $$

where $ {x}_{ij} $ indicates the presence (1) or absence (0) of the ith recommended practice in question j, and $ {y}_j $ is the resulting interview performance score for question j. We assigned equal weights to each recommended practice when defining overall interview performance because the existing literature did not provide information to more accurately interpret the relative contributions of recommended practices to overall interview performance. For example, if on a particular question (j), a participant (a) encouraged deep thinking and (b) used a co-creative strategy (but no other recommended practices), the value of the interview performance metric ($ {y}_j $) was two. We did not aim to establish an absolute metric of interview performance (e.g., to determine the minimum number of recommended practices per question that would indicate ‘good’ performance). Rather, the generalised linear model allowed us to compare interview performance of participants in the study. The standardised metric we developed, based on recommended practices prevalent in the academic literature, allowed us to assess interview performance within the controlled laboratory design task in a consistent way across participants.
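To make the scoring and modelling steps concrete, the sketch below shows one way such a question-level analysis could be set up. The input file, column names, and the choice of a Poisson family for the count-like performance score are assumptions for illustration only; the article does not specify the software or model family used.

    # Minimal sketch (assumed data layout): one row per interview question with
    # 0/1 columns for the 12 recommended practices plus identifying variables.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    practice_cols = [f"practice_{i}" for i in range(1, 13)]
    codes = pd.read_csv("question_level_codes.csv")          # hypothetical file
    codes["performance"] = codes[practice_cols].sum(axis=1)  # y_j from Eq. (1)

    # Generalised linear model: performance per question as a function of the
    # participant and interviewee (categorical), interview number, and question number.
    model = smf.glm(
        "performance ~ C(participant) + C(interviewee) + interview_number + question_number",
        data=codes,
        family=sm.families.Poisson(),  # assumed family for count-like scores
    ).fit()
    print(model.summary())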

Phase 2 analysis

Next, we developed an interview effect metric to investigate the effect interviews had on the product requirements developed by the participants, that is, the extent to which participants gathered data from stakeholders during interviews that contributed to their product requirements. The interview effect metric was calculated using two variables: (a) the number of interviews that contributed to the development of a specific requirement, and (b) the number of times the product requirement was discussed during each interview. We chose these two variables for the metric to align with design literature, which emphasises the importance of (a) consulting diverse stakeholders to elicit and validate requirements and (b) diving deep into specific topics with individual stakeholders (Kaiya et al. 2005; Dieter & Schmidt 2021; Loweth et al. 2021).

The interview effect metric was calculated using all instances within interviews during which participants asked questions related to the subsequently developed requirement. For example, during one interview a participant asked ‘Do you think repetition is more important when learning a cognitive ability or…is variety [more important]?’ The interviewee’s response was later referenced by the participant as contributing to the product requirement: ‘Allow for multiple types of usage’. The number of instances when the product requirement was discussed during each interview was determined by analysing each interview question asked by participants. Every interview question that resulted in a discussed requirement was counted as an instance. Instances were assessed by two coders. All discrepancies were discussed at meetings with all authors, and final decisions were made.

We then represented instances of interview data informing requirements through branch diagrams. For example, in Figure 1, we represent a branch diagram from Participant 1. Participant 1 developed five requirements from 16 instances of interview questions across six unique interviews. To calculate the interview effect metric, we totaled the number of branches: 12 interview branches and 16 requirement reference branches for a total effect of 28.

Figure 1. Example of requirements reference trees with respect to stakeholder interviews for Participant 1.
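A minimal sketch of this branch counting, mirroring the structure of Figure 1 but using invented requirement names and interview sources rather than Participant 1’s actual coding, is shown below.

    # Minimal sketch (invented data): for each requirement, list the interview
    # that produced each contributing question instance.
    instances = {
        "Allow for multiple types of usage": ["parent_1", "parent_1", "teacher_2"],
        "Withstand rough handling": ["safety_expert", "teacher_1"],
        # ...one entry per requirement, one list element per question instance
    }

    interview_branches = sum(len(set(sources)) for sources in instances.values())
    reference_branches = sum(len(sources) for sources in instances.values())
    interview_effect = interview_branches + reference_branches  # Participant 1 in the article: 12 + 16 = 28
    print(interview_effect)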

We chose not to normalise the interview effect metric based on the number of interviews each participant conducted to avoid penalising participants that employed different interview strategies. For example, a participant who used initial interviews to gather background information and later interviews to focus on requirements development would have appeared less successful than a participant who used the internet to gather background information and interviews to focus on requirements development.

After calculating the interview effect for each participant, we correlated it with the participant’s overall interview performance to determine whether higher-quality design interviews led to the elicitation of more relevant information (as defined by the interview effect metric). Only one participant repeated interviews with the same stakeholders (she interviewed four stakeholders twice each). These repeated interviews were not treated differently during analysis.
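A sketch of this participant-level correlation, using placeholder values rather than the study’s data, might look like the following; the two lists are hypothetical and serve only to show the calculation.

    # Minimal sketch (placeholder values): correlating interview performance with
    # the interview effect metric across the eight participants.
    from scipy.stats import pearsonr

    performance = [1.45, 0.83, 0.95, 1.57, 1.40, 1.52, 0.90, 0.88]  # recommended practices per question
    interview_effect = [28, 10, 14, 27, 30, 34, 12, 11]             # branch totals per participant

    r, p = pearsonr(performance, interview_effect)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")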

Phase 3 analysis

To identify what (if any) recommended practices were leveraged most frequently by participants whose interviews contained the most instances of recommended interviewing practices overall, the top and bottom three interview performers as determined by the generalised linear model were compared individually across the 12 recommended practices. Fisher’s exact test was used to determine the specific recommended practices for which a statistically significant difference was observed between the participants whose interviews contained the most instances of recommended interviewing practices overall and those whose interviews contained the fewest instances of recommended practices.
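For illustration, one such per-practice comparison could be run as in the sketch below. The contingency counts are invented, and the 2x2 layout (group by practice present/absent at the question level) is an assumption about how the test might be applied rather than a description of the authors’ exact tabulation.

    # Minimal sketch (invented counts): Fisher's exact test on a 2x2 table for a
    # single recommended practice, comparing the top three and bottom three performers.
    from scipy.stats import fisher_exact

    #         practice present, practice absent (question counts)
    table = [[42, 180],   # top three performers combined
             [15, 210]]   # bottom three performers combined
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"p = {p_value:.4f}; significant at 0.05: {p_value < 0.05}")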

4. Results

In this section, we first describe the coding scheme used to measure design interview performance and the results of the generalised linear model (addressing the first research question). Second, we show the interview effect metric for various participants and display the relationship between interview performance and the interview effect metric (addressing the second research question). Third, we present comparisons of the participants with the highest and lowest scores as determined by the design interview performance coding methodology (addressing the third research question).

4.1. Interview performance assessment and generalised linear model

The literature review of recommended design interviewing practices led to the coding scheme shown in Table 1. The coding scheme consists of 12 independent recommended design interviewing practices that can each be evaluated at the question level of an interview transcript.

Table 1. Recommended practices identified within literature forming the basis for the coding structure used in later analyses.

The coding scheme (as shown in Table 1) was then applied to all interviews conducted by the eight participants and the results are presented in Table 2. Additionally, Table A4 contains examples for each code found in the data. The number of interviews conducted by participants ranged from 3 to 7, and the average number of questions asked per interview ranged from 9.3 to 22.3 across participants (interviews lasted approximately 15 minutes each). In total, 708 instances of recommended practices were identified across all 37 transcripts. Participants exhibited between 0.83 and 1.57 recommended practices per question. In addition, a cross-tabulation table (Table A1) and a table of correlation coefficients (Table A2) are shown in the Appendix; they show that no recommended practices were correlated at the question level (which would have led to redundancies in our analysis; for example, there was no evidence that some recommended practices generally came in pairs).

Table 2. Summary statistics for coding of all participants.

The results of the generalised linear regression analysis are shown in Table 3, including coefficient estimates, standard errors, and significance levels for all variables examined. The variables included: all participants (Participants 1 through 8), all interviewees (two teachers, two parents, an education PhD, a psychology PhD and a safety expert), interview number, and question number. In the model, Participant 1 and Teacher 2 were used as the reference within their respective categories. Participants 4, 5 and 6 employed a greater number of recommended practices per question than the reference (all statistically significant), whereas Participants 2, 3, 7 and 8 used fewer recommended practices per question than the reference (only Participant 8 used statistically significantly fewer recommended practices than the reference participant). The model also revealed that interviews conducted with Parent 2 produced significantly fewer recommended practices than interviews conducted with other stakeholders. Additionally, the positive and statistically significant coefficient associated with interview number indicated that the participants’ interview performance improved as they conducted more interviews.

Table 3. Estimates, standard error, and significance of the variables included within the generalised linear model.

*** p < 0.001; ** p < 0.01; * p < 0.05; p < 0.1.

4.2. Effect of interview performance on interview information use in requirements

There were key differences in how individual participants used interviews to develop requirements. For example, Participant 6 used interviews extensively to develop all of her requirements and obtained information from multiple interviewees for each requirement, as can be seen in Figure 2. In contrast, Participant 2 incorporated interview information into fewer than half of her requirements and only obtained multiple perspectives on one requirement. Additionally, while Participant 1 did not generate as many requirements from the seven interviews he conducted as Participant 6 generated, he still displayed effective behaviour by consulting multiple stakeholders to discuss each requirement instead of relying only on a single interview to develop each requirement. These representative examples demonstrate a range of behaviours with respect to how participants used interviews in requirements development.

Figure 2. Interview references to final requirements developed by participants.

Participants whose interviews contained more instances of recommended interviewing practices also gathered information that appeared more frequently in their developed requirements compared to participants whose interviews contained fewer instances of recommended practices, as seen in Figure 3. The Pearson’s correlation coefficient between interview performance (as assessed by the linear model) and the interview effect metric was 0.60, indicating a relatively high correlation. This finding suggests that a designer who is able to use a greater number of recommended interviewing practices (using the linear model results, this would include Participants 4, 5 and 6) may be able to elicit information that is more directly applicable to the development of product requirements. Alternatively, participants who used a greater number of recommended practices in their interviews may have ultimately employed stakeholder information at a higher rate to inform their requirements.

Figure 3. Relationship between participants’ performance during interviews and the interview effect metric.

4.3. Recommended practice differentiation

We performed a second analysis to identify what (if any) recommended practices were leveraged most frequently by participants whose interviews contained the most instances of recommended interviewing practices overall. In this analysis, the three highest and lowest scoring (as defined by their coefficients in the generalised linear model, Table 3) participants’ interview transcripts were examined to identify instances of each recommended practice. In Figure 4, we show how frequently participants displayed individual recommended practices when asking questions during stakeholder interviews (the average frequency of Participants 4, 5 and 6 is plotted against that of Participants 3, 7 and 8). As demonstrated by Figure 4, the three participants whose interviews contained the most instances of recommended interviewing practices overall exhibited 10 of the 12 recommended practices more frequently than the three participants whose interviews contained the fewest instances of recommended practices, of which four were statistically significant (p < 0.05): encouraging deep thinking, being flexible and opportunistic, using a co-creative interview strategy, and introducing domain knowledge. Minimal differences were observed in developing rapport with interviewee, avoiding misinterpretations, exploring contradictions, and breaking down expert tasks.

Figure 4. Results of recommended practices coding for Participants 4, 5 and 6 (participants whose interviews contained the most instances of recommended interviewing practices overall) versus Participants 3, 7 and 8 (participants whose interviews contained the fewest instances of recommended interviewing practices overall). The asterisk (*) indicates a statistically significant difference (p<0.05).

5. Discussion

5.1. Interpretations and contributions

This study assessed novice engineering designers as they performed a simulated design task. The coding scheme developed in this study enabled design interview performance to be evaluated during a requirements elicitation task. Grounding the coding methodology in the literature supports its validity, and the high inter-rater reliability (mean kappa of 0.82) demonstrated that multiple coders could apply it consistently (Jonsson & Svingby 2007). Additionally, our results included an empirical link between interview performance, as assessed using the recommended practice scheme, and information gathered within interviews that was subsequently used to support the development of requirements. Application of the coding scheme to the design task data revealed statistically significant differences in performance within a group of novice designers; specific behaviours accounted for these differences.

The synthesis of the literature into a reliable coding methodology for evaluating interview performance is applicable to design research, practice, and education. While design texts have encouraged extensive stakeholder interaction, this study augments the existing literature by (a) compiling recommended design interviewing practices, (b) establishing a reliable and systematic method for evaluating aspects of design interview performance and (c) providing an empirical foundation for pedagogical and practical tools to support aspects of design interviewing. Importantly, our evaluation method emphasises interviewing approaches that novice designers can implement effectively, and thus serves as a useful counterpoint to other evaluation methods, such as Bano et al. (2019), that focus on novice designers’ interviewing mistakes.

The positive relationship found in this study between interview performance and the elicitation of information used in the development of product requirements emphasises the tangible effect that using recommended practices can have on a design process; participants whose interviews contained more instances of recommended interviewing practices applied more information gathered from stakeholder interviews to the development of product requirements. This finding adds experimental data to support the value of design interviewing during front-end design, which has been discussed extensively within the literature, but in broader, nonspecific terms (van Rijn et al. 2011; Dym, Little & Orwin 2013; Dieter & Schmidt 2021). While it is unclear exactly why participants whose interviews contained more instances of recommended interviewing practices ultimately applied more information, one possible explanation is that this finding reflects the quality of information elicited. Prior work indicates that requirements elicitation can be challenging because stakeholders may provide ambiguous responses (Ferrari et al. 2016) and designers may struggle to gather comprehensive information about stakeholder experiences (Sutcliffe & Sawyer 2013; Rosa et al. 2021). Hypothetically, recommended interviewing practices should support designers in overcoming these challenges by enabling the elicitation of specific and diverse stakeholder data. We did not evaluate the quality of information elicited by participants. However, it logically follows that participants who successfully gathered relevant stakeholder data would be more likely to subsequently use those data to inform their requirements, while participants whose interviews were less ‘successful’ might rely less on stakeholder data.

Another possible explanation reverses the implied causality of our findings: some participants may have been planning to integrate stakeholder information into their requirements elicitation processes regardless, and thus employed interviewing practices (that aligned with recommended practices) that would enable them to elicit needed information. For instance, Zoltowski, Oakes & Cardella (2012) described how some engineering students highly value the perspectives of stakeholders for informing their design projects. Loweth et al. (2021), in their study of capstone design teams, found that novice designers who highly valued stakeholder perspectives also employed recommended interviewing practices more consistently than peers with comparable levels of interviewing experience. Participants 4, 5 and 6 in our study may similarly have valued stakeholders to inform their requirements development processes, which may have led them to employ more recommended practices. More work is needed to determine relationships between novice designers’ perspectives on stakeholder engagement, their use of recommended interviewing practices, and their use of stakeholder data to inform their design decisions.

Specific recommended practices were leveraged most frequently by participants whose interviews contained the most instances of recommended interviewing practices overall. Both groups (participants whose interviews contained the most instances of recommended practices and participants whose interviews contained the fewest instances of recommended practices) developed rapport with interviewees and avoided misinterpretations during interviews at the same rate, suggesting that these two recommended practices were executed similarly across these groups of participants, or that the differences in executing these skills require a more nuanced analysis to expose. The participants whose interviews contained the most instances of recommended interviewing practices displayed significantly greater usage of specific recommended practices such as encouraging deep thinking, being flexible and opportunistic, using a co-creative strategy, and introducing domain knowledge. While this finding does not necessarily reveal which recommended practices are more critical to interacting with stakeholders, it could suggest that there are certain recommended practices that are more natural or easier to learn while others require a more intentional application or advanced knowledge. This hypothesis aligns with findings from Loweth et al. (2021), who observed that novice designers in a capstone design course employed interview practices such as encouraging deep thinking mainly in situations where there seemed to be clear and specific information that they hoped to gather.

Beyond participants’ abilities to enact recommended practices, the generalised linear model revealed other factors that affected interview performance. A statistically significant positive effect with respect to the interview number (i.e., first, second and third interview) indicated that participants improved their interview performance as their experience increased during the design task, which has pedagogical implications. Additionally, interviews with one particular interviewee were statistically negatively correlated with participants’ interview performance, emphasising the impact that an individual stakeholder can have on the interview.

5.2. Implications

In educational and training contexts, the coding scheme can be used by instructors to evaluate students’ audio-recorded or transcribed interviews by providing a structured method of analysis for a generally unstructured process, which inherently makes interviews difficult to evaluate. Instructors can use the coding scheme to identify which recommended practices students are and are not using. Additionally, instructors can also choose to focus on a subset of the recommended practices in the coding scheme, customising their assessment of students’ work to fit the needs of their particular students. This approach would provide instructors with a method for assessing and providing feedback on design interview performance within a classroom context. Because the coding scheme can be applied retroactively to interview transcripts, instructors could provide feedback on the interviews students conduct for their design projects (in project-based courses) as opposed to classroom-based simulated interviews (e.g., providing feedback during/after a mock interview). Additionally, because the coding scheme is applied at the question level, instructors can also provide feedback to student designers as they develop their interview protocols, allowing them to improve their questions prior to conducting their first stakeholder interview.

An additional implication is the need to focus intentionally on design interviewing within design courses. The positive relationship found between interview performance and the use of elicited information in requirements development supports the importance of teaching design interviewing within engineering curricula. Further, specific recommended practices may be more challenging to learn than others and, given the constraints of adding content to engineering design curricula, these findings could guide instructors in determining where to focus instruction.

The developed coding scheme can also be used by designers to evaluate their own interview protocols prior to conducting interviews. Within both industry practice and education, this scheme can be used as an efficient and low-cost method for designers to develop their interviewing skill set by supporting peer-to-peer learning within training programs or through self-reflection exercises that can be implemented during real-world implementation of stakeholder interviews. Designers can use the recommended practices coding scheme retroactively to assess which recommended practices were or were not used effectively (e.g., Were there moments where the designer could have been more opportunistic in pursuing a line of questioning? Were there times when the designer failed to verify a conclusion or interpretation drawn from a stakeholder’s response?). The coding scheme can be applied as a tool to help designers develop higher quality interview protocols by comparing the questions they develop to the recommended practices identified and adjusting them accordingly. Whereas design interview protocols are typically improved through pilot testing, use of the recommended practices coding scheme allows designers to improve the quality of their interviewing protocols during initial development, leading to more effective initial interviews. This collection of strategies supports reflective practice, which is key in the development of design expertise (Valkenburg & Dorst 1998; Adams, Turns & Atman 2003; Davis et al. 2013). This coding scheme could be expanded to a guiding tool that includes the additional recommended practices that were not assessed in this study as well as non-question-based recommended practices (e.g., tone and body language).

These findings can also serve as a foundation for propelling additional design research initiatives aimed at investigating more nuanced interview and information gathering practices within the engineering design community. Many of the existing resources within the engineering design community provide high-level guidance on who to engage during front-end engineering design activities and how to execute the engagements. The list of practices within this scheme can serve as a starting point for additional design research aimed at explicating practitioner behaviour in greater detail to facilitate transfer of knowledge that otherwise is most commonly obtained through years of experience and trial and error. Furthermore, while this study focused on novice engineering designers, subsequent studies can be performed to evaluate the utility of the findings for engineering design practitioners and explore specific details regarding interviewing and information gathering practices.

Ultimately, the findings from this research have the long-term potential to support the success of designed products. For example, if end users’ and stakeholders’ needs are more comprehensively and accurately captured through requirements elicitation interviews by design practitioners with expert-level interview skills, there is an increased likelihood that the ensuing design stages will yield concept solutions that more appropriately address the targeted problems.

5.3. Limitations

Interviewing is a complex skill, and only a subset of interviewing behaviours was assessed in this study. We acknowledge that other behaviours, such as interview ‘mistakes’ (e.g., Bano et al. 2019), contribute to a holistic assessment of interview performance.

Additionally, in order to conduct a controlled, laboratory-based study of design interview performance, we made study design decisions that may not be representative of real-world design work. For example, participants were given access to seven different stakeholders to interview during the design task. While these stakeholders were purposefully selected to represent a broad range of perspectives (which would be desired when designing a real product), the limited number of stakeholders (and therefore limited perspectives) available to the participants is a study design constraint that would not normally exist during real-world design. Further, because of the experimental design context, we were also unable to control for potential impacts of stakeholders being interviewed multiple times, that is, earlier interviews with participants might have influenced how stakeholders answered questions in later interviews with other participants.

Another limitation was that we were not able to control or assess every stage of the process of moving from data collection during interviews to a set of developed requirements. While we established a fairly strong relationship between interview performance and the use of information gathered during interviews to support requirements development, we did not control for the multiple phases between these two activities. Beyond effective interviewing skills, developing representative requirements involves eliciting meaningful information from stakeholders, synthesising information across multiple end users and stakeholders, and navigating potentially conflicting information. In other words, asking an effective question does not automatically translate into successfully leveraging the resulting information in the development of requirements. The linear model also did not take into consideration participants’ uses of other available information sources that may have informed their interviews and requirements. Additionally, this analysis only considered interview information that appeared as a source for the product requirements; relevant information from interviewees may not have been captured within requirements. For example, stakeholder interviews can be performed to refine the target market for a product or to better understand features that should not be included in a product. A further limitation is that we assigned equal weights to each recommended practice in defining overall interview performance, because the existing literature did not provide information with which to interpret the relative contributions of recommended practices to overall interview performance. In practice, some recommended practices could be more important than others; however, without a rigorous method of evaluating the end product associated with interviews (while controlling for variability between interview contexts), establishing such a weighting was not possible.
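To make the equal-weighting assumption concrete, the sketch below (illustrative only, not the study’s analysis code) computes an overall interview performance score as an unweighted sum of recommended-practice counts and correlates it with an interview-effect metric; the participant labels, practice names, and values are hypothetical.

```python
# Illustrative sketch only (not the study's analysis code): equal-weighted
# interview performance and its Pearson correlation with an interview-effect
# metric. All names and values below are hypothetical.
from statistics import correlation  # Pearson's r; requires Python >= 3.10

# Hypothetical counts of coded recommended practices per participant.
practice_counts = {
    "P1": {"encourage_deep_thinking": 4, "be_flexible_and_opportunistic": 2, "verify_interpretations": 3},
    "P2": {"encourage_deep_thinking": 1, "be_flexible_and_opportunistic": 1, "verify_interpretations": 2},
    "P3": {"encourage_deep_thinking": 6, "be_flexible_and_opportunistic": 5, "verify_interpretations": 4},
}

# Hypothetical interview-effect metric, e.g., the share of a participant's
# requirements that referenced interview-derived information.
interview_effect = {"P1": 0.45, "P2": 0.20, "P3": 0.70}

# Equal weighting: overall performance is the unweighted sum of practice counts.
performance = {p: sum(counts.values()) for p, counts in practice_counts.items()}

participants = sorted(performance)
r = correlation(
    [performance[p] for p in participants],
    [interview_effect[p] for p in participants],
)
print(f"Pearson r between interview performance and interview effect: {r:.2f}")
```

If a defensible weighting of practices became available, only the unweighted sum would need to be replaced with a weighted one; the rest of the comparison would be unchanged.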

6. Conclusions

Design processes that emphasise early and frequent engagement with stakeholders continue to gain traction within the engineering design community because of their ability to promote deep understanding and integration of stakeholder needs. However, to fully leverage the benefits of deep stakeholder understanding, designers need to execute stakeholder engagements effectively, particularly interviews, which are an important method for eliciting and developing product requirements. While stakeholder engagement has been shown to positively affect design outcomes, studies to date have described novice designers’ challenges with implementing stakeholder engagement, including interviewing. Therefore, this research aimed to characterise how novice designers elicited information to inform the development of product requirements during stakeholder interviews. The primary contributions of this work include the identification and compilation of recommended design interviewing practices, garnered through secondary research, which formed the basis of a multifunctional coding scheme. The findings demonstrated that the coding scheme could be used to assess interview performance differences among novice designers and to elucidate specific shortcomings among novice design interviewers with respect to recommended practices, including encouraging deep thinking and being flexible and opportunistic. The developed and successfully applied recommended interviewing practices coding scheme has the potential to enable design educators to evaluate their students’ interview performance more effectively and to deliver more specific feedback aligned with recommended practices. Likewise, the coding scheme can be used to inform protocol development, evaluate interview protocols prior to conducting interviews, aid self-evaluation and reflective practice on completed interviews, and facilitate peer-to-peer feedback on this complex skill set.

Acknowledgments

The authors acknowledge the statistical analysis support provided by the University of Michigan Consulting for Statistics, Computing and Analytics Research group, and thank the study participants.

Financial support

This work was supported by the University of Michigan (I.M., Rackham Merit Fellows; K.H.S. and S.R.D., Center for Research on Learning and Teaching Investigating Student Learning); and the National Science Foundation (I.M., Graduate Research Fellowship; K.H.S. and S.R.D., Research Initiation Grants in Engineering Education 1340459 and K.H.S., CAREER 0846471).

A. Appendix

Table A1. Cross tabulation of the number of recommended practices that were coded simultaneously within the transcripts analysed.

Table A2. Correlation coefficients (Pearson) between the recommended practices coded during our analysis.

Table A3. Recommended practice codes that could not be applied to interview transcripts.

Table A4. Examples of each recommended practice found in the data.

References

Adams, R. S., Turns, J. & Atman, C. J. 2003 Educating effective engineering designers: the role of reflective practice. Design Studies 24, 275–294; doi:10.1016/S0142-694X(02)00056-X.
Agarwal, R. & Tanniru, M. R. 1990 Knowledge acquisition using structured interviewing: an empirical investigation. Journal of Management Information Systems 7, 123–140; doi:10.2307/40397939.
Alam, I. 2002 An exploratory investigation of user involvement in new service development. Journal of the Academy of Marketing Sciences 30, 250–261.
Atman, C. J., Adams, R. S., Cardella, M. E., Turns, J., Mosborg, S. & Saleem, J. 2007 Engineering design processes: a comparison of students and expert practitioners. Journal of Engineering Education 96, 359–379; doi:10.1002/j.2168-9830.2007.tb00945.x.
Atman, C. J., Chimka, J. R., Bursic, K. M. & Nachtmann, H. L. 1999 A comparison of freshman and senior engineering design processes. Design Studies 20, 131–152.
Aurisicchio, M., Bracewell, R. H. & Wallace, K. M. 2006 Characterising in detail the information requests of engineering designers. In ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, pp. 1–12. ASME.
Bano, M., Zowghi, D., Ferrari, A., Spoletini, P. & Donati, B. 2019 Teaching requirements elicitation interviews: an empirical study of learning from mistakes. Requirements Engineering 24, 259–289; doi:10.1007/s00766-019-00313-0.
Bednar, P. M. 2009 Contextual analysis. A multiperspective inquiry into emergence of complex socio-cultural systems. In Processes of Emergence of Systems and Systemic Properties, pp. 299–312. World Scientific; doi:10.1142/9789812793478_0020.
Browne, G. J. & Rogich, M. B. 2001 An empirical investigation of user requirements elicitation: comparing the effectiveness of prompting techniques. Journal of Management Information Systems 17, 223–249; doi:10.2307/40398511.
Burnay, C., Jureta, I. J. & Faulkner, S. 2014 What stakeholders will or will not say: a theoretical and empirical study of topic importance in requirements engineering elicitation interviews. Information Systems 46, 61–81; doi:10.1016/j.is.2014.05.006.
Byrd, T. A., Cossick, K. L. & Zmud, R. W. 1992 A synthesis of research on requirements analysis and knowledge acquisition techniques. Management Information Systems Quarterly 16, 117–138.
Cohen, J. 1960 A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20, 37–46.
Coleman, R., Clarkson, J., Dong, H. & Cassim, J. 2016 Design for Inclusivity: A Practical Guide to Accessible, Innovative and User-Centered Design. Routledge.
Conradie, P., De Marez, L. & Saldien, J. 2017 User consultation during the fuzzy front end: evaluating student’s design outcomes. International Journal of Technology and Design Education 27, 563–575; doi:10.1007/s10798-016-9361-4.
Coolican, H. 2009 Research Methods and Statistics in Psychology. Routledge.
Cooper, R. G. 1988 Predevelopment activities determine new product success. Industrial Marketing Management 17, 237–247; doi:10.1016/0019-8501(88)90007-7.
Cooper, R. G. & Kleinschmidt, E. J. 1987 What makes a new product a winner: success factors at the project level. R&D Management 17, 175–189; doi:10.1111/j.1467-9310.1987.tb00052.x.
Crabtree, A., Rouncefield, M. & Tolmie, P. 2012 Doing Design Ethnography, Human–Computer Interaction Series. Springer.
d.school 2015 d.school methods, online document (downloadable on November 1st 2015) http://dschool.stanford.edu/use-our-methods/.
Dally, J. W. & Zhang, G. M. 1993 A freshman engineering design course. Journal of Engineering Education 82, 83–91.
Davis, G. 1982 Strategies for information requirements determination. IBM Systems Journal 21, 4–30.
Davis, D., Trevisan, M., Leiffer, P., McCormack, J., Beyerlein, S., Khan, M. J. & Brackin, P. 2013 Reflection and metacognition in engineering practice. In Using Reflection and Metacognition to Improve Student Learning (ed. Kaplan, M., Silver, N., LaVaque-Manty, D. & Meizlish, D.). Stylus.
Dekker, S. W. A., Nyce, J. M. & Hoffman, R. R. 2003 From contextual inquiry to designable futures: what do we need to get there? IEEE Intelligent Systems 18, 74–77; doi:10.1109/MIS.2003.1193660.
Dhillon, J. S., Ramos, C., Wünsche, B. C. & Lutteroth, C. 2011 Designing a web-based telehealth system for elderly people: an interview study in New Zealand. In Proceedings - IEEE Symposium on Computer-Based Medical Systems. IEEE; doi:10.1109/CBMS.2011.5999157.
Dieter, G. E. & Schmidt, L. C. 2021 Engineering Design. 6th edn. McGraw-Hill.
Donoghue, S. 2010 Projective techniques in consumer research. Journal of Family Ecology and Consumer Sciences 28, 47–53; doi:10.4314/jfecs.v28i1.52784.
Dorst, K. 2019 Co-evolution and emergence in design. Design Studies 65, 60–77; doi:10.1016/j.destud.2019.10.005.
Dorst, K. & Cross, N. 2001 Creativity in the design process: co-evolution of problem–solution. Design Studies 22, 425–437; doi:10.1016/S0142-694X(01)00009-6.
Dray, S. M. & Siegel, D. A. 2009 Understanding users in context: an in-depth introduction to fieldwork for user centered design. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 5727 LNCS, pp. 950–951. Springer; doi:10.1007/978-3-642-03658-3_126.
Dym, C. L., Little, P. & Orwin, E. 2013 Engineering Design: A Project-Based Introduction. 4th edn. John Wiley & Sons.
Eris, O. 2003 Asking generative design questions: a fundamental cognitive mechanism in design thinking. International Conference on Engineering Design 3, 587–588.
Evnin, J. & Pries, M. 2008 Are you sure? really? a contextual approach to agile user research. In Proceedings - Agile 2008 Conference, pp. 537–542. IEEE; doi:10.1109/Agile.2008.81.
Ferrari, A., Spoletini, P. & Gnesi, S. 2016 Ambiguity and tacit knowledge in requirements elicitation interviews. Requirements Engineering 21, 333–355; doi:10.1007/s00766-016-0249-3.
Firesmith, D. 2003 Specifying good requirements. Journal of Object Technology 2, 77–87.
Gammack, J. & Anderson, A. 1990 Constructive interaction in knowledge engineering. Expert Systems 7, 19–26.
Goguen, J. A. & Linde, C. 1993 Techniques for requirements elicitation. In 1st IEEE International Symposium on Requirements Engineering (RE’93), pp. 152–164. IEEE.
Grudin, J. & Grinter, R. E. 1995 Ethnography and design. Computer Supported Cooperative Work 3, 55–59.
Hands, K., Peiris, D. R. & Gregor, P. 2004 Development of a computer-based interviewing tool to enhance the requirements gathering process. Requirements Engineering 9, 204–216; doi:10.1007/s00766-003-0185-x.
Harrington, C., Erete, S. & Piper, A. M. 2019 Deconstructing community-based collaborative design: towards more equitable participatory design engagements. Proceedings of the ACM on Human–Computer Interaction 3, 1–25; doi:10.1145/3359318.
Hein, P. H., Voris, N. & Morkos, B. 2018 Predicting requirement change propagation through investigation of physical and functional domains. Research in Engineering Design 29, 309–328; doi:10.1007/s00163-017-0271-6.
Huang, K. & Deng, Y. 2008 Social interaction design in cultural context: a case study of a traditional social activity. East 2, 81–97.
IDEO 2015 The Field Guide to Human-Centered Design. IDEO.org.
Inoue, S., Furuta, K., Nakata, K., Kanno, T., Aoyama, H. & Brown, M. 2012 Cognitive process modelling of controllers in en route air traffic control. Ergonomics 55, 450–464; doi:10.1080/00140139.2011.647093.
Islam, S. & Omasreiter, H. 2005 Systematic use case interviews for specification of automotive systems. In Proceedings - Asia-Pacific Software Engineering Conference, APSEC 2005, pp. 17–24. IEEE; doi:10.1109/APSEC.2005.102.
Jonsson, A. & Svingby, G. 2007 The use of scoring rubrics: reliability, validity and educational consequences. Educational Research Review 2, 130–144; doi:10.1016/j.edurev.2007.05.002.
Kaiya, H., Shinbara, D., Kawano, J. & Saeki, M. 2005 Improving the detection of requirements discordances among stakeholders. Requirements Engineering 10, 289–303; doi:10.1007/s00766-005-0017-2.
Khosronejad, M., Reimann, P. & Markauskaite, L. 2021 ‘We are not going to educate people’: how students negotiate engineering identities during collaborative problem solving. European Journal of Engineering Education 46, 557–574; doi:10.1080/03043797.2020.1821174.
Khurana, A. & Rosenthal, S. R. 1998 Towards holistic “front ends” in new product development. Journal of Product Innovation Management 15, 57–74.
Kobayashi, A. & Maekawa, M. 2001 Need-based requirements change management. In Proceedings. Eighth Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems - ECBS 2001, pp. 171–178. IEEE; doi:10.1109/ECBS.2001.922419.
Krippendorff, K. 1980 Content Analysis: An Introduction to its Methodology. Sage.
Lai, J., Honda, T. & Yang, M. C. 2010 A study of the role of user-centered design methods in design team projects. AI EDAM 24, 303–316; doi:10.1017/S0890060410000211.
Landis, J. R. & Koch, G. G. 2008 The measurement of observer agreement for categorical data. Biometrics 33, 159–174.
Lee, J. J. & Lee, K. P. 2009 Facilitating dynamics of focus group interviews in East Asia: evidence and tools by cross-cultural study. International Journal of Design 3, 17–28; doi:10.1080/15710880600900561.
Leifer, R., Lee, S. & Durgee, J. 1994 Deep structures: real information requirements determination. Information and Management 27, 275–285; doi:10.1016/0378-7206(94)90022-1.
Leonard, D. & Rayport, J. F. 1997 Spark innovation through empathic design. Harvard Business Review 75, 102–115.
Leydens, J. A. & Lucena, J. C. 2009 Listening as a missing dimension in engineering education: implications for sustainable community development efforts. IEEE Transactions on Professional Communication 52, 359–376; doi:10.1109/TPC.2009.2032383.
Light, A. 2006 Adding method to meaning: a technique for exploring peoples’ experience with technology. Behaviour & Information Technology 25, 175–187; doi:10.1080/01449290500331172.
Loweth, R. P., Daly, S. R., Hortop, A., Strehl, E. A. & Sienko, K. H. 2020a An in-depth investigation of student information gathering meetings with stakeholders and domain experts. International Journal of Technology and Design Education 32, 533–554; doi:10.1007/s10798-020-09595-w.
Loweth, R. P., Daly, S. R., Hortop, A., Strehl, E. A. & Sienko, K. H. 2021 A comparative analysis of information gathering meetings conducted by novice design teams across multiple design project stages. Journal of Mechanical Design 143, 092301; doi:10.1115/1.4049970.
Loweth, R. P., Daly, S. R., Sienko, K. H., Hortop, A. & Strehl, E. A. 2019 Student designers’ interactions with users in capstone design projects: a comparison across teams. In Proceedings of the 126th ASEE Annual Conference & Exposition, Tampa, FL. ASEE; doi:10.18260/1-2--33291.
Loweth, R. P., Daly, S. R., Sienko, K. H., Hortop, A. & Strehl, E. A. 2020b Novice designers’ approaches to justifying user requirements and engineering specifications. In Proceedings of the ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2020), DETC2020-22163. ASME; doi:10.1115/DETC2020-22163.
Luck, R. 2007 Learning to talk to users in participatory design situations. Design Studies 28, 217–242; doi:10.1016/j.destud.2007.02.002.
Luck, R. 2018 Inclusive design and making in practice: bringing bodily experience into closer contact with making. Design Studies 54, 96–119; doi:10.1016/j.destud.2017.11.003.
Markham, S. K. 2013 The impact of front-end innovation activities on product performance. Journal of Product Innovation Management 30, 77–92; doi:10.1111/jpim.12065.
Martin, J. L. & Barnett, J. 2012 Integrating the results of user research into medical device development: insights from a case study. BMC Medical Informatics and Decision Making 12, 74; doi:10.1186/1472-6947-12-74.
Martin, J. L., Clark, D. J., Morgan, S. P., Crowe, J. A. & Murphy, E. 2012 A user-centred approach to requirements elicitation in medical device development: a case study from an industry perspective. Applied Ergonomics 43, 184–190; doi:10.1016/j.apergo.2011.05.002.
Mohedas, I., Daly, S. R. & Sienko, K. H. 2014 Design ethnography in capstone design: investigating student use and perceptions. International Journal of Engineering Education 30, 888–900.
Mohedas, I., Daly, S. R. & Sienko, K. H. 2016 Use of skill acquisition theory to understand novice to expert development in design ethnography. International Journal of Engineering Education 32, 1364–1371.
Mohedas, I., Sienko, K. H., Daly, S. R. & Cravens, G. L. 2020 Students’ perceptions of the value of stakeholder engagement during engineering design. Journal of Engineering Education 109, 760–779; doi:10.1002/jee.20356.
Moody, J. W., Blanton, J. E. & Cheney, P. H. 1998 A theoretically grounded approach to assist memory recall during information requirements determination. Journal of Management Information Systems 15, 79–98.
Murphy, S. A. & Kumar, V. 1997 The front end of new product development: a Canadian survey. R&D Management 27, 5–15.
Nelder, J. & Wedderburn, R. 1972 Generalized linear models. Journal of the Royal Statistical Society 135, 370–384.
Nguyen, L., Carroll, J. & Swatman, P. A. 2000 Supporting and monitoring the creativity of IS personnel during the requirements engineering process. In Proceedings of the 33rd Annual Hawaii International Conference on System Sciences. IEEE; doi:10.1109/HICSS.2000.926899.
Niles, S., Contreras, S., Roudbari, S., Kaminsky, J. & Harrison, J. L. 2020 Resisting and assisting engagement with public welfare in engineering education. Journal of Engineering Education 109, 491–507; doi:10.1002/jee.20323.
Nuseibeh, B. & Easterbrook, S. 2000 Requirements engineering: a roadmap. In Conference on the Future of Software Engineering, pp. 35–46. ACM.
Ogbonnaya-Ogburu, I. F., Smith, A. D. R., To, A. & Toyama, K. 2020 Critical race theory for HCI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–16. ACM; doi:10.1145/3313831.3376392.
Ozgur, E. 2004 Effective Inquiry for Innovative Engineering Design. Kluwer Academic.
Pahl, G. & Beitz, W. 2007 Engineering Design: A Systematic Approach, 3rd edn. Springer.
Park, D., Han, J. & Childs, P. R. N. 2021 266 fuzzy front-end studies: current state and future directions for new product development. Research in Engineering Design 32, 377–409; doi:10.1007/s00163-021-00365-w.
Ramzan, S. & Ikram, N. 2005 Making decision in requirement change management. In Proceedings of 1st International Conference on Information and Communication Technology, ICICT 2005, pp. 309–312. IEEE; doi:10.1109/ICICT.2005.1598607.
Randall, D., Harper, R. & Rouncefield, M. 2007 Fieldwork for Design. Springer.
Rinkus, S., Walji, M., Johnson-Throop, K. A., Malin, J. T., Turley, J. P., Smith, J. W. & Zhang, J. 2005 Human-centered design of a distributed knowledge management system. Journal of Biomedical Informatics 38, 4–17; doi:10.1016/j.jbi.2004.11.014.
Rosa, M., Wang, W. M., Stark, R. & Rozenfeld, H. 2021 A concept map to support the planning and evaluation of artifacts in the initial phases of PSS design. Research in Engineering Design 32, 189–223; doi:10.1007/s00163-021-00358-9.
Rosenthal, S. R. & Capper, M. 2006 Ethnographies in the front end: designing for enhanced customer experiences. Journal of Product Innovation Management 23, 215–237.
Sanders, E. & Stappers, P. 2008 Co-creation and the new landscapes of design. CoDesign 4, 5–18.
Scheinholtz, L. A. & Wilmont, I. 2011 Interview patterns for requirements elicitation. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6606 LNCS, pp. 72–77. Springer; doi:10.1007/978-3-642-19858-8_9.
Spradley, J. P. 1979 Step seven: asking structural questions. In The Ethnographic Interview, pp. 120–131. Holt, Rinehart and Winston.
Steen, M., De Koning, N. & Pikaart, A. 2004 Exploring human centred approaches in market research and product development – three case studies. In Proceedings of the Conference on Dutch Directions in HCI, pp. 1–4. ACM.
Steen, M., Kuijt-Evers, L. & Klok, J. 2007 Early user involvement in research and design projects – a review of methods and practices. In 23rd EGOS Colloquium. Citeseer.
Strickland, C. 2001 Mining for information: tactics for interviewing. In IPCC 2001. Communication Dimensions. Proceedings IEEE International Professional Communication Conference, pp. 349–352. IEEE; doi:10.1109/IPCC.2001.971584.
Sugar, W. A. 2001 What is so good about user-centered design? documenting the effect of usability sessions on novice software designers. Journal of Research on Computing in Education 33, 235–250.
Sutcliffe, A. & Sawyer, P. 2013 Requirements elicitation: towards the unknown unknowns. In Proceedings of the 2013 International Requirements Engineering Conference (RE), pp. 92–104. IEEE; doi:10.1109/RE.2013.6636709.
Tsai, W. T., Mojdehbakhsh, R. & Rayadurgam, S. 1997 Experience in capturing requirements for safety-critical medical devices in an industrial environment. In High-Assurance Systems Engineering Workshop, pp. 32–36. IEEE.
Tulving, E. 1983 Elements of Episodic Memory. Oxford Psychology Series. Oxford University Press.
Valkenburg, R. & Dorst, K. 1998 The reflective practice of design teams. Design Studies 19, 249–271; doi:10.1016/S0142-694X(98)00011-8.
van Rijn, H., Sleeswijk Visser, F., Stappers, P. J. & Özakar, A. D. 2011 Achieving empathy with users: the effects of different sources of information. CoDesign 7, 65–77; doi:10.1080/15710882.2011.609889.
Vredenburg, K., Mao, J.-Y., Smith, P. W. & Carey, T. 2002 A survey of user-centered design practice. In Human Factors in Computing Systems, p. 471. ACM Press; doi:10.1145/503457.503460.
Walters, P. J. 2005 Knowledge in the Making: Prototyping and Human-Centred Design Practice. Sheffield Hallam University.
Wetherbe, B. J. C. 1991 Executive information requirements: getting it right. MIS Quarterly 15, 51–65.
Wiseman, S. & Tulving, E. 1976 Encoding specificity: relation between recall superiority and recognition failure. Journal of Experimental Psychology: Human Learning and Memory 2, 349–361.
Wooten, T. C. & Rowley, T. H. 1995 Using anthropological interview strategies to enhance knowledge acquisition. Expert Systems with Applications 9, 469–482; doi:10.1016/0957-4174(95)00017-8.
Zhang, T. & Dong, H. 2009 Human-centred design: an emergent conceptual model. In Proceedings of Include 2009, pp. 1–7. Royal College of Art.
Zoltowski, C. B., Oakes, W. C. & Cardella, M. E. 2012 Students’ ways of experiencing human-centered design. Journal of Engineering Education 101, 28–59; doi:10.1002/j.2168-9830.2012.tb00040.x.
Figure 1. Example of requirements reference trees with respect to stakeholder interviews for Participant 1.

Table 1. Recommended practices identified within literature forming the basis for the coding structure used in later analyses.

Table 2. Summary statistics for coding of all participants.

Table 3. Estimates, standard error, and significance of the variables included within the generalised linear model.

Figure 2. Interview references to final requirements developed by participants.

Figure 3. Relationship between participants’ performance during interviews and the interview effect metric.

Figure 4. Results of recommended practices coding for Participants 4, 5 and 6 (participants whose interviews contained the most instances of recommended interviewing practices overall) versus Participants 3, 7 and 8 (participants whose interviews contained the fewest instances of recommended interviewing practices overall). The asterisk (*) indicates a statistically significant difference (p<0.05).