
Advancing in-process innovation metrics: a framework for navigating uncertainty

Published online by Cambridge University Press:  27 August 2025

Tucker Marion*
Affiliation:
Northeastern University, USA
Sebastian Fixson
Affiliation:
Babson College, USA

Abstract:

Most innovation performance measurement approaches focus on ex-post outcome data, leaving decision-makers without timely guidance during the early phases of new product development (NPD). This gap is particularly critical in high-risk, high-regulation industries such as Urban Air Mobility (UAM), where long development cycles, regulatory hurdles, and uncertain user adoption demand real-time, in-process innovation metrics. In this paper, we propose a Desirability-Feasibility-Viability (DFV) framework that links key innovation phases (Discovery, Development, and Commercialization) to leading indicators that track innovation progress before market entry. Using UAM as an illustrative case study, we demonstrate how our framework enables stakeholders to navigate uncertainty, optimize resource allocation, and make data-driven innovation decisions.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2025

1. Introduction

Most performance measurement approaches look at outcome data after the fact. This is also true for many measures of innovation performance. At the project level, typical examples of such metrics are time-to-market, cost-relative-to-budget, or market share (Dziallas and Blind, 2019). Innovation processes exhibit a much higher failure rate than standardized processes like manufacturing. Depending on how the mismatch between expectations and actual outcomes is defined, the failure rates of innovation activities are substantial. For example, 25% of new products in the consumer packaged goods (CPG) category are no longer sold after one year, and 40% have disappeared from the shelves after two years (Victory et al. 2021).

Researchers have looked for explanations for these high failure rates for decades, identifying early on the innovation process as an important factor (Cooper and Kleinschmidt 1986). Since then, this research stream has investigated various process aspects such as decision gates (Cooper 2008), digital tool use (Fixson and Marion 2012), combinations with agile approaches (Cooper and Sommer 2016), and the critical importance of understanding user needs for creating unique and competitive products (Cooper 2019).

The fundamental challenge for managing any innovation process lies in the inherent uncertainty of the process. Two factors mainly drive this uncertainty. The first is the time lag between points of decision in the innovation process and the emergence of the consequences of these decisions. In other words, by the time accurate information becomes available, the window in which one had the most influence on the course of action has passed. One well-known example of this phenomenon is the pair of cost commitment and cost occurrence curves in complex engineering projects. Design decisions made early in these types of projects can have far-reaching cost consequences by committing subsequent design decisions to certain development paths, even though the occurrence of these costs, and thus their true visibility, may lie months or even years in the future (Fixson 2006; Kennedy, Sobek II, and Kennedy 2014). For innovation more generally, the lag is even more pronounced for radical innovations, because what constitutes an acceptable solution lies even further in the future.

The second factor is that the future is ultimately unknown. The future state of customer needs is never fully known ex ante. In addition, it becomes a moving target driven by accelerating and increasingly turbulent global dynamics (technological, economic, social, and political forces). All of these factors make most predictions less and less reliable.

A compelling example of an industry grappling with uncertainty is Urban Air Mobility (UAM) – the emerging field of autonomous or piloted air taxis (Airbus, 2025). Companies such as Airbus, Joby Aviation, Archer, and Volocopter are pioneering this space, yet significant technological, regulatory, and societal uncertainties remain. The long lead times for certification, infrastructure investment, and customer adoption make it an ideal case to illustrate the importance of in-process innovation metrics.

If the increasing external turbulence makes predictions through extrapolation of the past less and less useful, the next best option is to shorten the distance between decisions inside the innovation process and their later consequences. One way this shortening can be achieved is by accelerating the innovation process cycle time (Griffin 1997; Griffin, Langerak, and Eling 2019). Another way to shrink this distance is by introducing intermediate process indicators as guideposts. However, empirical research shows that many organizations use only a small share of available innovation performance measures, and those metrics that are used are almost always outcome measures (Day 2020). Despite substantial research on innovation processes, our understanding of measuring progress and, ultimately, performance inside innovation projects remains underdeveloped. Thus, in this paper, we pursue the following research question:

RQ1: How can managing the innovation process be supported through better in-process measures?

To address this research question, we develop a framework that helps translate key objectives into relevant key metric types, which can be operationalized as intermediate progress metrics. Our framework spans the entire innovation process across the phases of discovery, development, and commercialization and integrates considerations from the disciplines of design, engineering, and entrepreneurship. From a theoretical perspective, our innovation progress measurement framework addresses the multifaceted challenges of project-level innovation by advocating the inclusion of multiple disciplines and by highlighting the organizational challenge of integrating their disparate worldviews. Increasing the sampling rate of innovation management, in line with trends in design thinking and the venture community, offers a potential solution, although new measures must be contextualized across diverse project types. We illustrate our framework in the context of UAM.

2. Innovation measurements in literature and practice

The number of suggested innovation measures in the literature is sizable. Prior research identifies over 150 distinct R&D performance measures (Bican and Brem, 2020). However, most of these metrics are ex-post indicators, measuring success only after commercialization. This retrospective focus leaves a critical gap – especially in high-uncertainty industries such as UAM – where real-time, in-process metrics are needed to guide decision-making before significant investments are committed.

One classification that can help reduce these overwhelming numbers is the distinction between those aspects that are under the direct influence of a firm and those that are not. For example, Dziallas and Blind (2019) separate company-specific dimensions such as strategy, innovation culture, competence and knowledge, organizational structure, R&D activities and input, and financial performance from contextual dimensions like market, network, and environment.

But even within these classification categories, large numbers of indicators remain. For example, within the category of company-specific dimensions, the range of measures stretches from firm-wide aspects like organizational culture or knowledge to those that focus narrowly on innovation activities like number of ideas produced.

Unpacking this categorization problem further, even if the focus is placed only on measures that follow relatively closely the work directly associated with creating new products and services, the set of measures is still substantial. Several different labels have been developed to signal where the measures are applied along the innovation process. Dziallas and Blind (2019) propose to separate indicators used in the product definition and product concept stage (ex-ante in their terminology) from those used in the market launch phase (ex-post). This categorization is akin to the concept of leading and lagging indicators, often used in economics and financial management, with decades of research undertaken to link outcomes with indicators that might predict future events such as economic recession (e.g., the rise or fall of sales of existing homes or durable goods).

Among the activities in R&D are all the typical undertakings, such as studying prior research, developing new ideas, testing solutions via experiments, and reporting on the results. Measures of these activities often take the form of simple counts of hours worked, experiments run, or workshops conducted. Because R&D activities sit at the end of the spectrum of white-collar tasks characterized by creativity, discretion, learning, incentives, and technology (Hopp, Iravani, and Liu 2009), these measures are, in practice, used as simple quantitative counts rather than accounting for qualitative differences.

The output or outcome of R&D includes tangibles like product and process designs and intangibles such as abstract knowledge, some of which get codified in publications or patents. Patents, in particular, have a long history as a proxy performance measure of innovation systems (Hausman, Hall, and Griliches 1984; Griliches 1990; Jaffe and Trajtenberg 2002). The fact that patents cite other patents – similar to research publications citing other research publications – has been employed to assess, in addition to patent quantity, patent quality as an output measure (Jaffe and Trajtenberg 2002). Recent research, however, suggests that this quality measure includes substantial noise, as it can also be influenced by factors external to the firm, such as patent examiners (Alcacer, Gittelman, and Sampat 2009) or industry life cycle stages (Fixson and Lee 2012). In addition, patenting activities vary substantially across industries and technologies, and many internally employed innovations are never patented in the first place.

Other outcome measures capture an innovation system's effect on consumer demand: for example, have sales improved in absolute terms, in units or monetary value, or has market share increased vis-à-vis competitors? One specific example intended to measure the impact of R&D activity on business performance is what has been called the "Freshness Index" or R&D Innovation Index (Werner and Souder 1997), often measured as the share of revenue earned by new products or services introduced within a recent time period (typically three to five years, depending on the industry).
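As a concrete illustration of this outcome measure, the following minimal sketch computes such a revenue-share "freshness" figure for a hypothetical product portfolio; the product names, launch years, revenues, and the five-year window are assumptions for illustration, not data from the paper.

```python
# A minimal sketch (not from the paper) of the "Freshness Index": the share of
# current revenue earned by products introduced within a recent window.
# Product names, launch years, and revenues are hypothetical.
from datetime import date

def freshness_index(products, window_years=5, today=None):
    """Fraction of total revenue from products launched within the last `window_years`."""
    today = today or date.today()
    cutoff_year = today.year - window_years
    total = sum(p["revenue"] for p in products)
    fresh = sum(p["revenue"] for p in products if p["launch_year"] > cutoff_year)
    return fresh / total if total else 0.0

portfolio = [
    {"name": "Legacy platform", "launch_year": 2015, "revenue": 40_000_000},
    {"name": "New service",     "launch_year": 2023, "revenue": 10_000_000},
]
print(f"Freshness index: {freshness_index(portfolio, today=date(2025, 1, 1)):.0%}")
```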

Traditional industries may rely on well-established innovation indicators, but nascent, highly regulated industries—such as UAM—demand earlier, more predictive measures. Air taxis require long-term infrastructure investments, regulatory approvals, and technological advancements before commercialization. Lagging indicators like market share are ineffective for guiding these decisions.

Based on the insights from the literature and practice, we derive a set of requirements that set the foundation for our subsequent framework development. Measurements need to be taken inside the actual process – just as the instruments on an airplane flight deck provide real-time information about the aircraft's location, orientation, and status. "Accounting for [R&D project progress] at a more granular level would also enhance practical relevance and applicability" (Bican and Brem 2020, 280).

3. Methods and data

We took a qualitative, case-based approach (Eisenhardt 1989), following Yin's (1994) recommendations. For over ten years, we have researched innovation processes across multiple organizations, drawing on studies of university technology development, embedded participant observation at various firms, and executive education engagements. In developing and delivering executive education programs centered on innovation management with multiple firms, we conducted in-depth interviews during client intake and due-diligence meetings to build knowledge of each organization.

This study develops an analytical framework for in-process innovation metrics and illustrates its application using Urban Air Mobility (UAM) as a case example. While we have engaged with multiple firms in executive education settings, this paper does not conduct an empirical multi-case analysis. Instead, we use UAM to demonstrate the practical challenges of managing innovation uncertainty and show how our framework provides actionable insights.

4. Framework development

Based on learnings from the literature and insights from our field studies, we set out to design a framework to help construct a set of innovation progress indicators. In doing so, we pursued two primary goals: to shorten the time between decision and effect, and to integrate across disciplines.

A major goal of this work is to increase our understanding of how to develop better in-process innovation measures. The rationale for this strategy is straightforward: if the time distance between an activity or decision now and its ultimate consequence later is too large, a key question becomes how to reduce this distance. One answer is to separate the process into smaller segments, i.e., individual phases (Fixson 2023). It is common within the new product development (NPD) and innovation management literature to segment this overall process into phases or stages (Cooper 2008; Ulrich, Eppinger, and Yang 2020). Some researchers delineate five to seven phases, while others segment the process into fewer. In this research, we segment the innovation process for our framework development into three phases, namely Discovery, Development, and Commercialization (Marion et al. 2014). While the innovation management literature provides a range of labels for the different process steps, stages, or phases, some with finer delineation of the individual stages than others, the process can generally be distilled into three overall segments that contain fundamentally different activities and tasks. While we discuss the three phases sequentially below, this does not negate the possibility of iterations within or across phases. In summary, we aim to look in greater detail into each of the three phases of Discovery, Development, and Commercialization to construct a more valuable and insightful series of measures.

Every successful project in the market ultimately has to satisfy requirements along several dimensions. A helpful framework for this task is provided by design thinking. The design thinking approach has been applied to design products, services, environments, digital experiences, and organizations in a wide range of industries, from health care and finance to defense and non-profit sectors (Brown 2009; Liedtka, King, and Bennett 2013; Ben Mahmoud-Jouini, Fixson, and Boulet 2019; Schweitzer, Ben Mahmoud-Jouini, and Fixson 2023).

Design firms such as IDEO advocate assessing the outcomes of design thinking projects in ways that measure critical aspects such as customer appeal and the potential viability of the business and technology (Liedtka, 2015). In our framework, we expand this notion to align with the three overall phases of the innovation process. We propose that each innovation phase is dominated by one primary uncertainty:

1.) Discovery → Desirability: Will users and stakeholders accept this innovation?
2.) Development → Feasibility: Can the technology and regulations support it?
3.) Commercialization → Viability: Can the business model be sustained?

This framework helps reduce uncertainty by aligning key innovation questions with leading indicators rather than relying solely on lagging market outcomes.
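To make the chain from phase to primary uncertainty to leading indicators concrete, the sketch below encodes it as a simple lookup structure; the sample indicators listed are illustrative placeholders drawn from the discussion that follows, not the contents of Table 1.

```python
# Illustrative encoding of the DFV chain (phase -> primary uncertainty -> key question
# -> leading indicators). The sample indicators are hypothetical placeholders and do
# not reproduce the contents of Table 1.
DFV_FRAMEWORK = {
    "Discovery": {
        "uncertainty": "Desirability",
        "key_question": "Will users and stakeholders accept this innovation?",
        "sample_indicators": ["user interviews completed", "sacrificial-concept test results"],
    },
    "Development": {
        "uncertainty": "Feasibility",
        "key_question": "Can the technology and regulations support it?",
        "sample_indicators": ["technology readiness level", "milestones achieved vs. planned"],
    },
    "Commercialization": {
        "uncertainty": "Viability",
        "key_question": "Can the business model be sustained?",
        "sample_indicators": ["customer acquisition cost", "conversion rate"],
    },
}

def indicators_for(phase):
    """Return the leading indicators to review for a given innovation phase."""
    return DFV_FRAMEWORK[phase]["sample_indicators"]

print(indicators_for("Development"))
```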

However, while the trifecta of DFV is conceptually powerful and broadens the scope of the range of factors that should be considered and assessed during the innovation process, in practice, there is a lack of definition as to how to operationalize these criteria.

To address this issue, we propose establishing the concept of 'minimal measures' for each of these three dimensions, which can help assess progress towards answering these questions. Drawing inspiration from the iterative learning cycles of the Lean Startup methods (Ries 2011) combined with the design thinking DFV archetype, we propose minimal concepts such as a minimum desirable experience (MDE), a minimum feasible technology (MFT), and a minimum viable product (MVP). These can be thought of as intermediate output variables. While no perfect match exists between the phases of Discovery, Development, and Commercialization and the lenses of Desirability, Feasibility, and Viability, some correlation exists (Seidel and Fixson 2015).

For each of the three dimensions of the innovation project (Desirability, Feasibility, and Viability), we construct measures to help innovation managers, the pilots of our proverbial innovation exploration planes, navigate uncertainty better. Specifically, we create a chain of linkages from the overarching "minimum X" measure to the dominant phase, to key objectives, to key metric types, and finally to actual sample metrics (Table 1).

Table 1. DFV innovation measures framework

To add further specificity, we discuss each innovation phase’s potential metrics in detail.

Discovery Phase – primary innovation measure: Minimum Desirable Experience

The desirability dimension provides the key question for any innovation project. This dimension tests whether an innovation is addressing the right customer problem. If the innovation can solve customers' key pain points when trying to complete a task, the solution will pass the test for desirability. A progress measure that signals the passing of this test is needed at this early phase in the process. We propose considering layers of Minimum Desirable Experiences (MDE) as the thresholds for further investment in the innovation project. The layers should represent the increasing fidelity of the problem-solution match. For example, early customer tests deploying sacrificial concepts, often long before the complete product exists, are critical to ensure that the innovation resources are invested into the right "it" before the "it" is developed (Savoia 2019).

An Initial MDE may focus on confirming problem-user fit. In other words, a problem relevant to the envisioned customer has been identified. Borrowing data collection techniques from ethnography, design has developed a range of tools for creating this problem-user fit. Qualitative data is often collected through observation, a designer’s experience, and interviews. Design tools such as personas, journey maps, and sacrificial concepts can be quite helpful for synthesizing the data to improve understanding of problem-user fit.

Over time, innovation projects gravitate, through early prototyping, towards establishing problem-solution fit. In other words, will the envisioned solution satisfactorily address the identified user need? Early prototyping techniques are essential when trying to establish what we call a Minimum Desirable Experience. The measurement of customer response in this early phase is most likely qualitative, and the number of data points is relatively small (although the careful selection of representative customers can strengthen the results). The quality, quantity, and novelty of concepts (Shah, Smith, and Vargas-Hernandez 2003) can be assessed by users and other stakeholders.
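As one minimal, illustrative way to quantify concept novelty, the sketch below scores each concept by how rare its solution attributes are within the concept pool; this is a simplified frequency-based proxy rather than the exact Shah et al. (2003) formulation, and the concepts and attributes are hypothetical.

```python
# Minimal sketch of a frequency-based novelty score for generated concepts; this is a
# simplified proxy, not the exact Shah et al. (2003) formulation. The concept pool and
# attributes are hypothetical.
from collections import Counter

concepts = {                       # concept id -> set of solution attributes
    "C1": {"tilt-rotor", "battery"},
    "C2": {"tilt-rotor", "hydrogen"},
    "C3": {"multirotor", "battery"},
}

attribute_counts = Counter(a for attrs in concepts.values() for a in attrs)
n_concepts = len(concepts)

def novelty(attrs):
    """Average rarity of a concept's attributes: 1 minus the share of concepts using each."""
    return sum(1 - attribute_counts[a] / n_concepts for a in attrs) / len(attrs)

for cid, attrs in concepts.items():
    print(cid, round(novelty(attrs), 2))
```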

Progress measures for these activities should consist of both quantitative and qualitative aspects. Ideally, the typically qualitative results of the MDE tests are accompanied by broader, larger-scale industry analyses. These studies can capture certain societal trends that may impact how the technology could be utilized in the future but can also include competitors’ developments and the shape of the regulatory environment. For example, team leaders can keep track of the quality and quantity of outside engagements. These engagements can be with academic, industry, or partner organizations where individuals with diverse skill sets can actively contribute to the project.

Drawing from entrepreneurship and the realm of angel investors and venture capitalists, we find that many applicable measures focus on validation. These include the number of customer relationships currently established, often a key milestone for the first equity investment; subsequent rounds are then tied to the growth and maturity of these relationships. This initial investment decision includes specific data, such as whether or not initial users have been converted to paying customers. This is the purpose of what is referred to as the 'SaaS (Software as a Service) Napkin', a guide to key metrics when developing a software-as-a-service platform. New venture incubators like Techstars push for data and revenue conversion during early development to signal product-market fit. In certain instances, paying customers may not be possible, especially in longer-lead technology projects. In such cases, the validation of the MDE can rely on feedback and assurances from critical stakeholders, for example, through letters of intent (LOIs) or memoranda of understanding (MOUs).

Development Phase – primary innovation measure: Minimum Feasible Technology

While ensuring the solution addresses the underlying user need continues in this phase, the primary attention shifts to solving technical problems. We propose a layered Minimum Feasible Technology (MFT) measure for the technology dimension to measure project progress. For the layering, we borrow from the world of systems engineering, which has researched how to track innovation progress – or at least its technical aspect – as it moves through development. Maturity assessments for innovations in progress have been an area of active interest for several decades. Technology maturity assessment is also vital internally during the product development cycle and in product planning, when evaluating potential technologies to invest in or acquire. One tool for this assessment is the Technology Readiness Level (TRL) scale, developed by NASA in the 1970s. The scale comprises nine levels, each achieved by the technology passing specific testing and demonstration requirements. The stages of the scale reflect increasing degrees of technology maturity, often progressing from components to systems. TRL assessment helps in the design of prototypes and stages by creating a structure that can support ongoing progress during development (Fast-Berglund et al. 2014).
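A minimal sketch of how such a TRL scale can serve as an in-process feasibility indicator is shown below; the one-line level descriptions are paraphrased rather than NASA's official wording, and the gap calculation is an illustrative convention.

```python
# Minimal sketch of a TRL lookup used as an in-process feasibility indicator.
# Level descriptions are paraphrased, not NASA's official wording.
TRL_LEVELS = {
    1: "Basic principles observed and reported",
    2: "Technology concept formulated",
    3: "Analytical / experimental proof of concept",
    4: "Component validation in a laboratory environment",
    5: "Component validation in a relevant environment",
    6: "System prototype demonstrated in a relevant environment",
    7: "System prototype demonstrated in an operational environment",
    8: "System completed and qualified through test and demonstration",
    9: "System proven through successful operations",
}

def trl_gap(current, target):
    """Number of readiness levels still to climb before reaching the target maturity."""
    return max(target - current, 0)

print(TRL_LEVELS[6], "| gap to TRL 8:", trl_gap(6, 8))
```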

At the project level, principal investigators may produce a series of development reports in which teams communicate MFT (and MDE) progress, process maps, and the project's intended inputs and outputs. A separate technology assessment and a market/user stakeholder assessment should also be created to capture market insights and the reasoning for how the technology can be successfully diffused into the market. By involving stakeholders early in the development process, there is more communication and networking among project teams and a greater chance of commercialization (Talke and Hultink 2010; Driessen and Hillebrand 2013).

For the Development phase, it is essential to incorporate traditional project management metrics, such as milestones achieved and budget performance, into the project assessment alongside the MDE and MFT assessments. Milestones may be specific performance targets for virtual and physical prototypes, for example, the precision and accuracy of a sensor. The development team should agree upon milestones and budget signposts at the beginning of the project during planning. Agile metrics such as sprint tracking and progress tracking may also be used (Kupiainen, Mäntylä, and Itkonen 2015).

Commercialization Phase – primary innovation measure: Minimum Viable Product

While MDE and MFT developments continue, in the commercialization phase the critical assessment becomes learning about the ultimate impact of the new technology or solution. Here, we build on concepts from the world of entrepreneurship. Our proposed framework includes impact measures for multiple kinds of viability, specifically the proposed innovation's economic, societal, and environmental sustainability. The economic dimension can be addressed by appropriately modeling the underlying business models; the minimum viable product (MVP) is a valuable concept here, illustrating the minimum bundle of features a customer would be willing to pay for (Ries 2011). Willingness to pay is an important aspect of economic viability, but it should be accompanied by building appropriate business models whose fidelity increases with every layer. For example, early versions can follow approaches like discovery-driven planning (McGrath and MacMillan 2009) to build business models and test their underlying assumptions in increasing detail.
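To illustrate the logic of testing business-model assumptions with increasing fidelity, the following minimal sketch works backward in the spirit of discovery-driven planning; all figures (required profit, margin, price) are hypothetical assumptions, not data from the UAM case.

```python
# Minimal sketch of a discovery-driven "reverse" calculation with hypothetical numbers:
# start from the profit the venture must deliver and derive the operating assumptions
# (required revenue and units) that can then be validated in-process.
required_profit = 2_000_000.0   # assumed profit target for the venture to be worth pursuing
target_margin = 0.20            # assumed operating margin
price_per_unit = 150.0          # assumed price per unit of service (e.g., per flight)

required_revenue = required_profit / target_margin
required_units = required_revenue / price_per_unit

print(f"Required revenue: ${required_revenue:,.0f}")
print(f"Required units sold: {required_units:,.0f}")
# Each assumption (margin, price, unit volume) becomes a checkpoint to test as fidelity grows.
```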

The main objective of lean startup methodologies is to test, validate, and learn about the viability of the business model associated with the innovation. In other words, how does one test its underlying assumptions when the actual financial metrics do not show much, i.e., are too small in volume to be meaningful? Working backward, this step requires identifying what Ries calls pre-customer metrics (Ries 2017). These early indicators, such as customer acquisition costs, conversion rates, retention, referral, and lifetime value per customer, illustrate how the business model turns innovation into commercial results. While several of these metrics have become well known with digital products, services, and business models, the same fundamental logic can also be used to develop early innovation metrics to measure progress toward a viable business model for "brick-and-mortar" businesses (Toma and Gons 2021).
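As a simple numerical illustration of these pre-customer metrics, the sketch below derives conversion rate, customer acquisition cost, and a naive lifetime value from hypothetical pilot-program figures; the variable names and numbers are illustrative assumptions only.

```python
# Minimal sketch deriving the pre-customer metrics named above from hypothetical
# pilot-program figures: conversion rate, customer acquisition cost (CAC), and a
# naive lifetime value (LTV) that ignores margin and discounting.
marketing_spend = 50_000.0      # assumed spend to attract prospects during the pilot
signups = 400                   # assumed free sign-ups / trial users
paying_customers = 120          # assumed conversions to paying customers
avg_monthly_revenue = 90.0      # assumed revenue per paying customer per month
avg_retention_months = 18       # assumed average customer lifetime

conversion_rate = paying_customers / signups
cac = marketing_spend / paying_customers
ltv = avg_monthly_revenue * avg_retention_months

print(f"Conversion: {conversion_rate:.0%}  CAC: ${cac:,.0f}  LTV: ${ltv:,.0f}  LTV/CAC: {ltv / cac:.1f}")
```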

For specific metrics, we again can draw from the entrepreneurship ecosystem. These include Customer Lifetime Value (LTV) (Borle, Singh, and Jain 2008), Average Recurring Revenue (Mehta, Steinman, and Murphy 2016), and reducing new Customer Acquisition Costs (Kupor and Kasireddy 2014). As with the MDE metrics, many equity investments are predicated on good results and trendlines for these numbers. While the academic literature on these metrics is underdeveloped, they have become standard checkpoints for increases in valuation and the investment cycle. For example, moving from seed funding to Series A typically requires evidence of progress on these measures. Established firms should draw inspiration from these examples from the startup world.

The DFV framework applied to UAM is shown in Table 2.

Table 2. DFV innovation measures framework for UAM

The UAM-specific innovation metrics in Table 2 help track progress and reduce uncertainty at different stages of development. In the Discovery phase, the focus is on understanding public opinion, obtaining regulatory approval, and testing early market interest. The Development phase examines whether the technology is ready, whether it meets safety standards, and how well prototypes perform. Finally, in the Commercialization phase, the key questions are whether the business model works, whether cities are ready with infrastructure, and whether the cost makes sense compared to other transportation options.

A critical challenge in UAM innovation is the long and unpredictable regulatory approval process. Unlike traditional consumer products, where early user testing is feasible, air taxis require compliance with evolving safety standards. Our framework integrates regulatory engagement metrics (e.g., FAA certification progress) as leading indicators of feasibility. This ensures that technical progress aligns with policy advancements. By using these real-time indicators, companies can make smarter decisions and avoid relying on outdated or late-stage success measures.

5. Discussion

Applying our framework to Urban Air Mobility (UAM) illustrates its ability to guide innovation decisions in high-uncertainty industries. Unlike traditional product categories, UAM requires simultaneous advancements in technology, regulation, infrastructure, and public acceptance. The proposed in-process metrics enable stakeholders to track progress at each phase, even before market entry.

Our research on organizations tasked with developing advanced innovations and technologies found an insufficient supply of useful leading indicators: most existing NPD and innovation metrics are lagging, ex-post measures of project outcomes such as financial performance or market share. In response, we suggest that finer-grained metrics be applied at shorter intervals within each phase, focusing on the quality of ideas during Discovery, on technology readiness levels and outside engagement during Development, and on post-project transition and commercial, societal, and environmental impact during Commercialization.

Uncertainty is a significant challenge when measuring innovation progress because both quantitative and qualitative signals can be unclear or misleading. Quantitative metrics, such as technology readiness levels or financial forecasts, often depend on assumptions that may change with new regulations, market shifts, or unexpected breakthroughs. Qualitative measures, such as customer feedback or public sentiment, can be biased or based on too few data points, making them less reliable. To handle this, innovation managers need a mix of real-time data, past trends, and ongoing testing. In Urban Air Mobility, for example, tracking FAA and EASA regulatory discussions can help reduce uncertainty about approvals, while public sentiment tracking and pilot programs can give early clues about customer interest. Using both leading and lagging indicators helps teams make better decisions even when the future is unclear.

While this framework helps track early-stage innovation progress, it does not replace financial modeling or forecasting. It provides useful guidance but should be combined with deeper market and economic analysis as projects mature. There is also a trade-off between early qualitative insights (such as public sentiment and regulatory interest) and later quantitative validation (such as financial performance); a balanced approach that uses both can help organizations make smarter innovation decisions. Future research could improve uncertainty measurement by using machine learning to predict adoption rates, regulatory approvals, and costs.

The Urban Air Mobility case demonstrates how traditional innovation metrics fail to provide actionable insights for high-uncertainty industries. Our proposed framework bridges this gap by introducing real-time in-process metrics, allowing stakeholders to navigate uncertainty in desirability, feasibility, and viability. Future research should explore empirical validation across additional high-risk innovation sectors.

References

Alcacer, J., Gittelman, M., and Sampat, B. 2009. Applicant and examiner citations in U.S. patents: An overview and analysis. Research Policy 38: 415–27.
Ben Mahmoud-Jouini, S., Fixson, S. K., and Boulet, D. 2019. Making Design Thinking Work – Adapting an Innovation Approach to Fit a Large Technology-Driven Firm. Research-Technology Management 62 (5): 50–8.
Bican, P. M., and Brem, A. 2020. Managing innovation performance: Results from an industry-spanning explorative study on R&D key measures. Creativity and Innovation Management 29: 268–91.
Borle, S., Singh, S. S., and Jain, D. C. 2008. Customer Lifetime Value Measurement. Management Science 54 (1): 100–12.
Brouwer, E., and Kleinknecht, A. 1997. Measuring the unmeasurable: a country's non-R&D expenditure on product and service innovation. Research Policy 25 (8): 1235–42.
Brown, M. G., and Svenson, R. A. 1998. Measuring R&D productivity. Research-Technology Management 41 (6): 30–5.
Brown, T. 2009. Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. New York: HarperBusiness.
Cooper, R. G. 2008. Perspective: The Stage-Gate idea-to-launch process – update, what's new, and NexGen systems. Journal of Product Innovation Management 25 (3): 213–32.
Cooper, R. G. 2019. The drivers of success in new-product development. Industrial Marketing Management 76: 36–47.
Cooper, R. G., and Kleinschmidt, E. J. 1986. An Investigation into the New Product Process: Steps, Deficiencies, and Impact. Journal of Product Innovation Management 3 (2): 71–85.
Cooper, R. G., and Sommer, A. F. 2016. From Experience: The Agile–Stage-Gate Hybrid Model: A Promising New Approach and a New Research Opportunity. Journal of Product Innovation Management 33 (5): 513–26.
Day, G. S. 2020. Metrics for Managing Innovation – Lessons from Growth Leaders. White Paper. Wharton School at the University of Pennsylvania.
Driessen, P. H., and Hillebrand, B. 2013. Integrating Multiple Stakeholder Issues in New Product Development: An Exploration. Journal of Product Innovation Management 30 (2): 364–79.
Dziallas, M., and Blind, K. 2019. Innovation indicators throughout the innovation process: An extensive literature analysis. Technovation 80–81: 3–29.
Fast-Berglund, Å., Bligård, L.-O., Åkerman, M., and Karlsson, M. 2014. Using the TRL-methodology to design supporting ICT-tools for production operators. Procedia CIRP 17: 726–31.
Fixson, S. K. 2006. A Roadmap for Product Architecture Costing. In Product Platform and Product Family Design: Methods and Applications, ed. Simpson, T. W., Siddique, Z., and Jiao, R. J., 305–33. New York: Springer.
Fixson, S. K. 2023. An Operations Management Perspective on Design Thinking. Foundations and Trends in Technology, Information and Operations Management 17 (3): 155–234.
Fixson, S. K., and Lee, W. 2012. Shifting grounds: How industry emergence changes the effectiveness of knowledge creation strategies – The case of the U.S. automotive airbag industry. Technology Analysis & Strategic Management 24 (1): 1–19.
Fixson, S. K., and Marion, T. J. 2012. Back-loading: A Potential Side Effect of Employing Digital Design Tools in New Product Development. Journal of Product Innovation Management 29 (S1): 140–56.
Frost, B. 2007. Designing Metrics – Crafting balanced measures for managing performance. Dallas, TX: Measurement International.
Goffin, K., Åhlström, P., Bianchi, M., and Richtnér, A. 2019. State-of-the-Art: The Quality of Case Study Research in Innovation Management. Journal of Product Innovation Management 36 (5): 586–615.
Griffin, A. 1997. Modeling and measuring product development cycle time across industries. Journal of Engineering and Technology Management 14: 1–24.
Griffin, A., Langerak, F., and Eling, K. 2019. The Evolution, Status, and Research Agenda for the Future of Research in NPD Cycle Time. Journal of Product Innovation Management 36 (2): 263–80.
Hausman, J., Hall, B. H., and Griliches, Z. 1984. Econometric Models for Count Data with an Application to the Patents-R&D Relationship. Econometrica 52 (4): 909–38.
Hopp, W. J., Iravani, S. M. R., and Liu, F. 2009. Managing White-Collar Work: An Operations-Oriented Survey. Production and Operations Management 18 (1): 1–32.
Jaffe, A. B., and Trajtenberg, M. 2002. Patents, Citations and Innovations: A Window on the Knowledge Economy. Cambridge, Massachusetts: The MIT Press.
Jaruzelski, B., Chwalik, R., and Goehle, B. 2018. What the top innovators get right. strategy+business.
Kennedy, B. M., Sobek II, D. K., and Kennedy, M. N. 2014. Reducing Rework by Applying Set-Based Practices Early in the Systems Engineering Process. Systems Engineering 17 (3): 278–96.
Klomp, L., and Van Leeuwen, G. 2001. Linking Innovation and Firm Performance: A New Approach. International Journal of the Economics of Business 8 (3): 343–64.
Kupiainen, E., Mäntylä, M. V., and Itkonen, J. 2015. Using metrics in Agile and Lean Software Development – A systematic literature review of industrial studies. Information and Software Technology 62: 143–163.
Kupor, S., and Kasireddy, P. 2014. Understanding SaaS: Why the Pundits have it wrong. May 13.
Liedtka, J. 2015. Perspective: Linking design thinking with innovation outcomes through cognitive bias reduction. Journal of Product Innovation Management 32 (6): 925–938.
Liedtka, J., King, A., and Bennett, K. 2013. Solving Problems with Design Thinking: Ten Stories of What Works. New York: Columbia Business School Publishing.
Lifschitz-Assaf, H. 2018. Dismantling Knowledge Boundaries at NASA: The Critical Role of Professional Identity in Open Innovation. Administrative Science Quarterly 63 (4): 746–82.
Marion, T. J., Barczak, G., and Hultink, E. J. 2014. Do Social Media Tools Impact the Development Phase? An Exploratory Study. Journal of Product Innovation Management 31 (1): 18–29.
McGrath, R. G., and MacMillan, I. C. 2009. Discovery-driven Growth. Harvard Business School Press.
Mehta, N., Steinman, D., and Murphy, L. 2016. Customer Success. Hoboken, New Jersey: Wiley & Sons.
Ries, E. 2011. The Lean Startup. Crown Business.
Ries, E. 2017. The Startup Way. New York: Currency Books.
Savoia, A. 2019. The Right It – Why so many ideas fail and how to make sure yours succeed. New York: Harper One.
Schweitzer, J., Ben Mahmoud-Jouini, S., and Fixson, S. K. 2023. Transform with Design – Creating New Innovation Capabilities with Design Thinking. Toronto, Ontario: Rotman – Toronto University Press.
Seidel, V. P., and Fixson, S. K. 2015. Design Thinking for Non-Designers: A Guide for Team Training and Implementation. In Design Thinking – New Product Development Essentials from the PDMA, ed. Luchs, M. G., Swan, K. S., and Griffin, A., 143–55. Wiley.
Shah, J. J., Smith, S. M., and Vargas-Hernandez, N. 2003. Metrics for measuring ideation effectiveness. Design Studies 24 (2): 111–34.
Talke, K., and Hultink, E. J. 2010. Managing diffusion barriers when launching new products. Journal of Product Innovation Management 27 (4): 537–53.
Toma, D., and Gons, E. 2021. Innovation Accounting. Amsterdam: BIS Publishers.
Ulrich, K. T., Eppinger, S. D., and Yang, M. 2020. Product Design and Development. New York: McGraw Hill.
Victory, K., Nenycz-Thiel, M., Dawes, J., Tanusondjaja, A., and Corsi, A. M. 2021. How common is new product failure and when does it vary? Marketing Letters 32: 17–32.
Werner, B. M., and Souder, W. E. 1997. Measuring R&D Performance – State of the Art. Research-Technology Management 40 (2): 34–42.
Yin, R. K. 1994. Case Study Research – Design and Methods. 2nd ed. Vol. 5. Applied Social Research Methods Series. Thousand Oaks, CA: Sage Publications.