What Is a Feasibility Study and How to Conduct It? (+ Examples)

Appinio Research · 26.09.2023 · 28 min read


Are you ready to turn your project or business idea into a concrete reality but unsure about its feasibility? Whether you're a seasoned entrepreneur or a first-time project manager, understanding the intricate process of conducting a feasibility study is vital for making informed decisions and maximizing your chances of success.

This guide will equip you with the knowledge and tools to navigate the complexities of market, technical, financial, and operational feasibility studies. By the end, you'll have a clear roadmap to confidently assess, plan, and execute your project.

What is a Feasibility Study?

A feasibility study is a systematic and comprehensive analysis of a proposed project or business idea to assess its viability and potential for success. It involves evaluating various aspects such as market demand, technical feasibility, financial viability, and operational capabilities. The primary goal of a feasibility study is to provide you with valuable insights and data to make informed decisions about whether to proceed with the project.

Why is a Feasibility Study Important?

Conducting a feasibility study is a critical step in the planning process for any project or business. It helps you:

  • Minimize Risks: By identifying potential challenges and obstacles early on, you can develop strategies to mitigate risks.
  • Optimize Resource Allocation: A feasibility study helps you allocate your resources more efficiently, including time and money.
  • Enhance Decision-Making: Armed with data and insights, you can make well-informed decisions about pursuing the project or exploring alternative options.
  • Attract Stakeholders: Potential investors, lenders, and partners often require a feasibility study to assess the project's credibility and potential return on investment.

Now that you understand the importance of feasibility studies, let's explore the various types and dive deeper into each aspect.

Types of Feasibility Studies

Feasibility studies come in various forms, each designed to assess different aspects of a project's viability. Let's delve into the four primary types of feasibility studies in more detail:

1. Market Feasibility Study

Market feasibility studies are conducted to determine whether there is a demand for a product or service in a specific market or industry. This type of study focuses on understanding customer needs, market trends, and the competitive landscape. Here are the key elements of a market feasibility study:

  • Market Research and Analysis: Comprehensive research is conducted to gather data on market size, growth potential, and customer behavior. This includes both primary research (surveys, interviews) and secondary research (existing reports, data).
  • Target Audience Identification: Identifying the ideal customer base by segmenting the market based on demographics, psychographics, and behavior. Understanding your target audience is crucial for tailoring your product or service.
  • Competitive Analysis: Assessing the competition within the market, including identifying direct and indirect competitors, their strengths, weaknesses, and market share.
  • Demand and Supply Assessment: Analyzing the balance between the demand for the product or service and its supply. This helps determine whether there is room for a new entrant in the market.

2. Technical Feasibility Study

Technical feasibility studies evaluate whether the project can be developed and implemented from a technical standpoint. This assessment focuses on the project's design, technical requirements, and resource availability. Here's what it entails:

  • Project Design and Technical Requirements: Defining the technical specifications of the project, including hardware, software, and any specialized equipment. This phase outlines the technical aspects required for project execution.
  • Technology Assessment: Evaluating the chosen technology's suitability for the project and assessing its scalability and compatibility with existing systems.
  • Resource Evaluation: Assessing the availability of essential resources such as personnel, materials, and suppliers to ensure the project's technical requirements can be met.
  • Risk Analysis: Identifying potential technical risks, challenges, and obstacles that may arise during project development. Developing risk mitigation strategies is a critical part of technical feasibility.

3. Financial Feasibility Study

Financial feasibility studies aim to determine whether the project is financially viable and sustainable in the long run. This type of study involves estimating costs, projecting revenue, and conducting financial analyses. Key components include:

  • Cost Estimation: Calculating both initial and ongoing costs associated with the project, including capital expenditures, operational expenses, and contingency funds.
  • Revenue Projections: Forecasting the income the project is expected to generate, considering sales, pricing strategies, market demand, and potential revenue streams.
  • Investment Analysis: Evaluating the return on investment (ROI), payback period, and potential risks associated with financing the project.
  • Financial Viability Assessment: Analyzing the project's profitability, cash flow, and financial stability to ensure it can meet its financial obligations and sustain operations.

4. Operational Feasibility Study

Operational feasibility studies assess whether the project can be effectively implemented within the organization's existing operational framework. This study considers processes, resource planning, scalability, and operational risks. Key elements include:

  • Process and Workflow Assessment: Analyzing how the project integrates with current processes and workflows, identifying potential bottlenecks, and optimizing operations.
  • Resource Planning: Determining the human, physical, and technological resources required for successful project execution and identifying resource gaps.
  • Scalability Evaluation: Assessing the project's ability to adapt and expand to meet changing demands and growth opportunities, including capacity planning and growth strategies.
  • Operational Risk Analysis: Identifying potential operational challenges and developing strategies to mitigate them, ensuring smooth project implementation.

Each type of feasibility study serves a specific purpose in evaluating different facets of your project, collectively providing a comprehensive assessment of its viability and potential for success.

How to Prepare for a Feasibility Study?

Before you dive into the nitty-gritty details of conducting a feasibility study, it's essential to prepare thoroughly. Proper preparation will set the stage for a successful and insightful study. In this section, we'll explore the main steps involved in preparing for a feasibility study.

1. Identify the Project or Idea

Identifying and defining your project or business idea is the foundational step in the feasibility study process. This initial phase is critical because it helps you clarify your objectives and set the direction for the study.

  • Problem Identification: Start by pinpointing the problem or need your project addresses. What pain point does it solve for your target audience?
  • Project Definition: Clearly define your project or business idea. What are its core components, features, or offerings?
  • Goals and Objectives: Establish specific goals and objectives for your project. What do you aim to achieve in the short and long term?
  • Alignment with Vision: Ensure your project aligns with your overall vision and mission. How does it fit into your larger strategic plan?

Remember, the more precisely you can articulate your project or idea at this stage, the easier it will be to conduct a focused and effective feasibility study.

2. Assemble a Feasibility Study Team

Once you've defined your project, the next step is to assemble a competent and diverse feasibility study team. Your team's expertise will play a crucial role in conducting a thorough assessment of your project's viability.

  • Identify Key Roles: Determine the essential roles required for your feasibility study. These typically include experts in areas such as market research, finance, technology, and operations.
  • Select Team Members: Choose team members with the relevant skills and experience to fulfill these roles effectively. Look for individuals who have successfully conducted feasibility studies in the past.
  • Collaboration and Communication: Foster a collaborative environment within your team. Effective communication is essential to ensure everyone is aligned on objectives and timelines.
  • Project Manager: Designate a project manager responsible for coordinating the study, tracking progress, and meeting deadlines.
  • External Consultants: In some cases, you may need to engage external consultants or specialists with niche expertise to provide valuable insights.

Having the right people on your team will help you collect accurate data, analyze findings comprehensively, and make well-informed decisions based on the study's outcomes.

3. Set Clear Objectives and Scope

Before you begin the feasibility study, it's crucial to establish clear and well-defined objectives. These objectives will guide your research and analysis efforts throughout the study.

Steps to Set Clear Objectives and Scope:

  • Objective Clarity: Define the specific goals you aim to achieve through the feasibility study. What questions do you want to answer, and what decisions will the study inform?
  • Scope Definition: Determine the boundaries of your study. What aspects of the project will be included, and what will be excluded? Clarify any limitations.
  • Resource Allocation: Assess the resources needed for the study, including time, budget, and personnel. Ensure that you allocate resources appropriately based on the scope and objectives.
  • Timeline: Establish a realistic timeline for the feasibility study. Identify key milestones and deadlines for completing different phases of the study.

Clear objectives and a well-defined scope will help you stay focused and avoid scope creep during the study. They also provide a basis for measuring the study's success against its intended outcomes.

4. Gather Initial Information

Before you delve into extensive research and data collection, start by gathering any existing information and documents related to your project or industry. This initial step will help you understand the current landscape and identify gaps in your knowledge.

  • Document Review: Review any existing project documentation, market research reports, business plans, or relevant industry studies.
  • Competitor Analysis: Gather information about your competitors, including their products, pricing, market share, and strategies.
  • Regulatory and Compliance Documents: If applicable, collect information on industry regulations, permits, licenses, and compliance requirements.
  • Market Trends: Stay informed about current market trends, consumer preferences, and emerging technologies that may impact your project.
  • Stakeholder Interviews: Consider conducting initial interviews with key stakeholders, including potential customers, suppliers, and industry experts, to gather insights and feedback.

By starting with a strong foundation of existing knowledge, you'll be better prepared to identify gaps that require further investigation during the feasibility study. This proactive approach ensures that your study is comprehensive and well-informed from the outset.

How to Conduct a Market Feasibility Study?

The market feasibility study is a crucial component of your overall feasibility analysis. It focuses on assessing the potential demand for your product or service, understanding your target audience, analyzing your competition, and evaluating supply and demand dynamics within your chosen market.

Market Research and Analysis

Market research is the foundation of your market feasibility study. It involves gathering and analyzing data to gain insights into market trends, customer preferences, and the overall business landscape.

  • Data Collection: Utilize various methods such as surveys, interviews, questionnaires, and secondary research to collect data about the market. This data may include market size, growth rates, and historical trends.
  • Market Segmentation: Divide the market into segments based on factors such as demographics, psychographics, geography, and behavior. This segmentation helps you identify specific target markets.
  • Customer Needs Analysis: Understand the needs, preferences, and pain points of potential customers. Determine how your product or service can address these needs effectively.
  • Market Trends: Stay updated on current market trends, emerging technologies, and industry innovations that could impact your project.
  • SWOT Analysis: Conduct a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis to identify internal and external factors that may affect your market entry strategy.

In today's dynamic market landscape, gathering precise data for your market feasibility study is paramount. Appinio offers a versatile platform that enables you to swiftly collect valuable market insights from a diverse audience.

With Appinio, you can employ surveys, questionnaires, and in-depth analyses to refine your understanding of market trends, customer preferences, and competition.

Enhance your market research and gain a competitive edge by booking a demo with us today!


Target Audience Identification

Knowing your target audience is essential for tailoring your product or service to meet their specific needs and preferences.

  • Demographic Analysis: Define the age, gender, income level, education, and other demographic characteristics of your ideal customers.
  • Psychographic Profiling: Understand the psychographics of your target audience, including their lifestyle, values, interests, and buying behavior.
  • Market Segmentation: Refine your target audience by segmenting it further based on shared characteristics and behaviors.
  • Needs and Pain Points: Identify your target audience's unique needs, challenges, and pain points that your product or service can address.
  • Competitor's Customers: Analyze the customer base of your competitors to identify potential opportunities for capturing market share.

Competitive Analysis

Competitive analysis helps you understand the strengths and weaknesses of your competitors, positioning your project strategically within the market.

  • Competitor Identification: Identify direct and indirect competitors within your industry or market niche.
  • Competitive Advantage: Determine the unique selling points (USPs) that set your project apart from competitors. What value can you offer that others cannot?
  • SWOT Analysis for Competitors: Conduct a SWOT analysis for each competitor to assess their strengths, weaknesses, opportunities, and threats.
  • Market Share Assessment: Analyze each competitor's market share and market penetration strategies.
  • Pricing Strategies: Investigate the pricing strategies employed by competitors and consider how your pricing strategy will compare.

Leveraging the power of data collection and analysis is essential in gaining a competitive edge. With Appinio, you can efficiently gather critical insights about your competitors, their strengths, and weaknesses. Seamlessly integrate these findings into your market feasibility study, empowering your project with a strategic advantage.

Demand and Supply Assessment

Understanding supply and demand dynamics is crucial for gauging market sustainability and potential challenges.

  • Market Demand Analysis: Estimate the current and future demand for your product or service. Consider factors like seasonality and trends.
  • Supply Evaluation: Assess the availability of resources, suppliers, and distribution channels required to meet the expected demand.
  • Market Saturation: Determine whether the market is saturated with similar offerings and how this might affect your project.
  • Demand Forecasting: Use historical data and market trends to make informed projections about future demand.
  • Scalability: Consider the scalability of your project to meet increased demand or potential fluctuations.
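To make the demand-forecasting bullet concrete, here is a minimal sketch: it fits a least-squares linear trend to a few quarters of historical unit sales and extends it forward. The `forecast_demand` helper and every figure are hypothetical assumptions for illustration, not a prescribed forecasting method.

```python
# Minimal demand-forecasting sketch: fit a least-squares linear trend to
# historical quarterly unit sales and project it forward.
# All figures are hypothetical.

def forecast_demand(history, periods_ahead):
    """Project future demand by extending a least-squares linear trend."""
    n = len(history)
    mean_x = (n - 1) / 2
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history)) \
        / sum((x - mean_x) ** 2 for x in range(n))
    intercept = mean_y - slope * mean_x
    return [intercept + slope * (n + k) for k in range(periods_ahead)]

# Four quarters of past unit sales, projected two quarters ahead
history = [1000, 1100, 1250, 1400]
print([round(units) for units in forecast_demand(history, 2)])  # [1525, 1660]
```

In practice you would layer seasonality and market-trend adjustments on top of a bare trend line, but even this simple projection forces you to state your demand assumptions explicitly.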

A comprehensive market feasibility study will give you valuable insights into your potential customer base, market dynamics, and competitive landscape. This information will be pivotal in shaping your project's direction and strategy.

How to Conduct a Technical Feasibility Study?

The technical feasibility study assesses the practicality of implementing your project from a technical standpoint. It involves evaluating the project's design, technical requirements, technological feasibility, resource availability, and risk analysis. Let's delve into each aspect in more detail.

1. Project Design and Technical Requirements

The project design and technical requirements are the foundation of your technical feasibility study. This phase involves defining the technical specifications and infrastructure needed to execute your project successfully.

  • Technical Specifications: Clearly define the technical specifications of your project, including hardware, software, and any specialized equipment.
  • Infrastructure Planning: Determine the physical infrastructure requirements, such as facilities, utilities, and transportation logistics.
  • Development Workflow: Outline the workflow and processes required to design, develop, and implement the project.
  • Prototyping: Consider creating prototypes or proof-of-concept models to test and validate the technical aspects of your project.

2. Technology Assessment

A critical aspect of the technical feasibility study is assessing the technology required for your project and ensuring it aligns with your goals.

  • Technology Suitability: Evaluate the suitability of the chosen technology for your project. Is it the right fit, or are there better alternatives?
  • Scalability and Compatibility: Assess whether the chosen technology can scale as your project grows and whether it is compatible with existing systems or software.
  • Security Measures: Consider cybersecurity and data protection measures to safeguard sensitive information.
  • Technical Expertise: Ensure your team or external partners possess the technical expertise to implement and maintain the technology.

3. Resource Evaluation

Resource evaluation involves assessing the availability of the essential resources required to execute your project successfully. These resources include personnel, materials, and suppliers.

  • Human Resources: Evaluate whether you have access to skilled personnel or if additional hiring or training is necessary.
  • Material Resources: Identify the materials and supplies needed for your project and assess their availability and costs.
  • Supplier Relationships: Establish relationships with reliable suppliers and consistently assess their ability to meet your resource requirements.

4. Risk Analysis

Risk analysis is a critical component of the technical feasibility study, as it helps you anticipate and mitigate potential technical challenges and setbacks.

  • Identify Risks: Identify potential technical risks, such as hardware or software failures, technical skill gaps, or unforeseen technical obstacles.
  • Risk Mitigation Strategies: Develop strategies to mitigate identified risks, including contingency plans and resource allocation for risk management.
  • Cost Estimation for Risk Mitigation: Assess the potential costs associated with managing technical risks and incorporate them into your project budget.

By conducting a thorough technical feasibility study, you can ensure that your project is technically viable and well-prepared to overcome technical challenges. This assessment will also guide decision-making regarding technology choices, resource allocation, and risk management strategies.

How to Conduct a Financial Feasibility Study?

The financial feasibility study is a critical aspect of your overall feasibility analysis. It focuses on assessing the financial viability of your project by estimating costs, projecting revenue, conducting investment analysis, and evaluating the overall financial health of your project. Let's delve into each aspect in more detail.

1. Cost Estimation

Cost estimation is the process of calculating the expenses associated with planning, developing, and implementing your project. This involves identifying both initial and ongoing costs.

  • Initial Costs: Calculate the upfront expenses required to initiate the project, including capital expenditures, equipment purchases, and any development costs.
  • Operational Costs: Estimate the ongoing operating expenses, such as salaries, utilities, rent, marketing, and maintenance.
  • Contingency Funds: Allocate funds for unexpected expenses or contingencies to account for unforeseen challenges.
  • Depreciation: Consider the depreciation of assets over time, as it impacts your financial statements.
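As a rough illustration of the arithmetic behind these bullets, the sketch below totals hypothetical capital and operating costs, adds a 10% contingency buffer, and computes straight-line depreciation on the equipment. Every line item and figure is an assumption made up for the example.

```python
# Illustrative cost-estimation sketch: first-year project costs with a
# contingency buffer and straight-line depreciation. All figures hypothetical.

initial_costs = {"equipment": 50_000, "fit_out": 30_000, "licensing": 5_000}
monthly_operating = {"salaries": 12_000, "rent": 3_000,
                     "utilities": 800, "marketing": 1_500}

capex = sum(initial_costs.values())                 # one-off upfront spend
opex_year1 = 12 * sum(monthly_operating.values())   # ongoing running costs
contingency = 0.10 * (capex + opex_year1)           # ~10% is a common buffer

# Straight-line depreciation: spread the equipment cost over its useful life
useful_life_years = 5
annual_depreciation = initial_costs["equipment"] / useful_life_years

print(f"Capital expenditure:   {capex:>9,.0f}")
print(f"Year-1 operating cost: {opex_year1:>9,.0f}")
print(f"Contingency (10%):     {contingency:>9,.0f}")
print(f"Annual depreciation:   {annual_depreciation:>9,.0f}")
```

The point of the exercise is less the totals themselves than making each cost category and assumption visible so it can be challenged during the study.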

2. Revenue Projections

Revenue projections involve forecasting the income your project is expected to generate over a specific period. Accurate revenue projections are crucial for assessing the project's financial viability.

  • Sales Forecasts: Estimate your product or service sales based on market demand, pricing strategies, and potential growth.
  • Pricing Strategy: Determine your pricing strategy, considering factors like competition, market conditions, and customer willingness to pay.
  • Market Penetration: Analyze how quickly you can capture market share and increase sales over time.
  • Seasonal Variations: Account for any seasonal fluctuations in revenue that may impact your cash flow.
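One simple way to turn these assumptions into numbers is to project a baseline sales volume forward with a growth rate and scale each month by a seasonal multiplier. The sketch below does exactly that with entirely hypothetical inputs.

```python
# Sketch of a monthly revenue projection: baseline unit sales growing at a
# steady rate, scaled by seasonal multipliers. All figures are hypothetical.

base_units = 500         # expected units sold in month 1
monthly_growth = 0.03    # assumed 3% month-over-month growth
unit_price = 40.0
# Seasonal multipliers, e.g. a summer dip and a year-end holiday spike
seasonality = [1.0, 0.95, 1.0, 1.05, 1.1, 0.9,
               0.85, 0.9, 1.0, 1.1, 1.2, 1.3]

revenue = [
    base_units * (1 + monthly_growth) ** m * season * unit_price
    for m, season in enumerate(seasonality)
]
print(f"Projected year-1 revenue: {sum(revenue):,.0f}")
```

Because cash flow depends on when revenue arrives, keeping the monthly series (rather than only the annual total) also exposes the seasonal troughs you will need to finance.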

3. Investment Analysis

Investment analysis involves evaluating the potential return on investment (ROI) and assessing the attractiveness of your project to potential investors or stakeholders.

  • Return on Investment (ROI): Calculate the expected ROI by comparing the project's net gains against the initial investment.
  • Payback Period: Determine how long it will take for the project to generate sufficient revenue to cover its initial costs.
  • Risk Assessment: Consider the level of risk associated with the project and whether it aligns with investors' risk tolerance.
  • Sensitivity Analysis: Perform sensitivity analysis to understand how changes in key variables, such as sales or costs, affect the investment's profitability.
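The core investment metrics above reduce to a few lines of arithmetic. The sketch below computes a simple (non-discounted) ROI, a fractional payback period, and a ±10% sensitivity check on projected cash flows; the investment amount and cash flows are hypothetical.

```python
# Sketch of an investment analysis: simple ROI, payback period, and a small
# sensitivity check on cash flows. All figures are hypothetical.

initial_investment = 120_000.0
annual_cash_flows = [30_000.0, 45_000.0, 55_000.0, 60_000.0]  # projected net inflows

def roi(cash_flows, investment):
    """Total net gain over the horizon as a fraction of the investment."""
    return (sum(cash_flows) - investment) / investment

def payback_period(cash_flows, investment):
    """Years until cumulative inflows cover the investment (fractional)."""
    cumulative = 0.0
    for year, flow in enumerate(cash_flows, start=1):
        if cumulative + flow >= investment:
            return year - 1 + (investment - cumulative) / flow
        cumulative += flow
    return None  # not recovered within the projection horizon

print(f"ROI over 4 years: {roi(annual_cash_flows, initial_investment):.1%}")
print(f"Payback period:   {payback_period(annual_cash_flows, initial_investment):.2f} years")

# Sensitivity: how does a +/-10% change in cash flows move the ROI?
for factor in (0.9, 1.0, 1.1):
    scaled = [flow * factor for flow in annual_cash_flows]
    print(f"Cash flows x{factor:.1f}: ROI = {roi(scaled, initial_investment):+.1%}")
```

A fuller analysis would discount the cash flows (NPV/IRR) rather than sum them, but even this undiscounted version shows how sensitive the headline ROI is to the revenue assumptions.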

4. Financial Viability Assessment

A financial viability assessment evaluates the project's ability to sustain itself financially in the long term. It considers factors such as profitability, cash flow, and financial stability.

  • Profitability Analysis: Assess whether the project is expected to generate profits over its lifespan.
  • Cash Flow Management: Analyze the project's cash flow to ensure it can cover operating expenses, debt payments, and other financial obligations.
  • Break-Even Analysis: Determine the point at which the project's revenue covers all costs, resulting in neither profit nor loss.
  • Financial Ratios: Calculate key financial ratios, such as debt-to-equity ratio and return on equity, to evaluate the project's financial health.
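Two of these calculations, the break-even point and the basic ratios, are one-line formulas. The sketch below works them through with hypothetical figures.

```python
# Break-even sketch: the sales volume at which revenue covers fixed and
# variable costs, plus two basic ratios. All figures are hypothetical.

fixed_costs = 60_000.0        # annual rent, salaries, insurance, ...
unit_price = 25.0
variable_cost_per_unit = 10.0

# Break-even volume = fixed costs / contribution margin per unit
contribution_margin = unit_price - variable_cost_per_unit
break_even_units = fixed_costs / contribution_margin
print(f"Break-even volume: {break_even_units:,.0f} units")  # 4,000 units

# Simple leverage and profitability ratios
total_debt, total_equity = 80_000.0, 160_000.0
net_income = 24_000.0
print(f"Debt-to-equity:   {total_debt / total_equity:.2f}")
print(f"Return on equity: {net_income / total_equity:.1%}")
```

Comparing the break-even volume against the demand estimate from the market feasibility study is a quick first test of whether the numbers hang together.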

By conducting a comprehensive financial feasibility study, you can gain a clear understanding of the project's financial prospects and make informed decisions regarding its viability and potential for success.

How to Conduct an Operational Feasibility Study?

The operational feasibility study assesses whether your project can be implemented effectively within your organization's operational framework. It involves evaluating processes, resource planning, scalability, and analyzing potential operational risks.

1. Process and Workflow Assessment

The process and workflow assessment examines how the project integrates with existing processes and workflows within your organization.

  • Process Mapping: Map out current processes and workflows to identify areas of integration and potential bottlenecks.
  • Workflow Efficiency: Assess the efficiency and effectiveness of existing workflows and identify opportunities for improvement.
  • Change Management: Consider the project's impact on employees and plan for change management strategies to ensure a smooth transition.

2. Resource Planning

Resource planning involves determining the human, physical, and technological resources needed to execute the project successfully.

  • Human Resources: Assess the availability of skilled personnel and consider whether additional hiring or training is necessary.
  • Physical Resources: Identify the physical infrastructure, equipment, and materials required for the project.
  • Technology and Tools: Ensure that the necessary technology and tools are available and up to date to support project implementation.

3. Scalability Evaluation

Scalability evaluation assesses whether the project can adapt and expand to meet changing demands and growth opportunities.

  • Scalability Factors: Identify factors impacting scalability, such as market growth, customer demand, and technological advancements.
  • Capacity Planning: Plan for the scalability of resources, including personnel, infrastructure, and technology.
  • Growth Strategies: Develop strategies for scaling the project, such as geographic expansion, product diversification, or increasing production capacity.

4. Operational Risk Analysis

Operational risk analysis involves identifying potential operational challenges and developing mitigation strategies.

  • Risk Identification: Identify operational risks that could disrupt project implementation or ongoing operations.
  • Risk Mitigation: Develop risk mitigation plans and contingency strategies to address potential challenges.
  • Testing and Simulation: Consider conducting simulations or testing to evaluate how the project performs under various operational scenarios.
  • Monitoring and Adaptation: Implement monitoring and feedback mechanisms to detect and address operational issues as they arise.

Conducting a thorough operational feasibility study ensures that your project aligns with your organization's capabilities, processes, and resources. This assessment will help you plan for a successful implementation and minimize operational disruptions.

How to Write a Feasibility Study?

The feasibility study report is the culmination of your feasibility analysis. It provides a structured and comprehensive document outlining your study's findings, conclusions, and recommendations. Let's explore the key components of the feasibility study report.

1. Structure and Components

The structure of your feasibility study report should be well-organized and easy to navigate. It typically includes the following components:

  • Executive Summary: A concise summary of the study's key findings, conclusions, and recommendations.
  • Introduction: An overview of the project, the objectives of the study, and a brief outline of what the report covers.
  • Methodology: A description of the research methods, data sources, and analytical techniques used in the study.
  • Market Feasibility Study: Detailed information on market research, target audience, competitive analysis, and demand-supply assessment.
  • Technical Feasibility Study: Insights into project design, technical requirements, technology assessment, resource evaluation, and risk analysis.
  • Financial Feasibility Study: Comprehensive information on cost estimation, revenue projections, investment analysis, and financial viability assessment.
  • Operational Feasibility Study: Details on process and workflow assessment, resource planning, scalability evaluation, and operational risk analysis.
  • Conclusion: A summary of key findings and conclusions drawn from the study.
  • Recommendations: Clear and actionable recommendations based on the study's findings.

2. Write the Feasibility Study Report

When writing the feasibility study report, it's essential to maintain clarity, conciseness, and objectivity. Use clear language and provide sufficient detail to support your conclusions and recommendations.

  • Be Objective: Present findings and conclusions impartially, based on data and analysis.
  • Use Visuals: Incorporate charts, graphs, and tables to illustrate key points and make the report more accessible.
  • Cite Sources: Properly cite all data sources and references used in the study.
  • Include Appendices: Attach any supplementary information, data, or documents in appendices for reference.

3. Present Findings and Recommendations

When presenting your findings and recommendations, consider your target audience. Tailor your presentation to the needs and interests of stakeholders, whether they are investors, executives, or decision-makers.

  • Highlight Key Takeaways: Summarize the most critical findings and recommendations upfront.
  • Use Visual Aids: Create a visually engaging presentation with slides, charts, and infographics.
  • Address Questions: Be prepared to answer questions and provide additional context during the presentation.
  • Provide Supporting Data: Back up your findings and recommendations with data from the feasibility study.

4. Review and Validation

Before finalizing the feasibility study report, conducting a thorough review and validation process is crucial. This ensures the accuracy and credibility of the report.

  • Peer Review: Have colleagues or subject matter experts review the report for accuracy and completeness.
  • Data Validation: Double-check data sources and calculations to ensure they are accurate.
  • Cross-Functional Review: Involve team members from different disciplines to provide diverse perspectives.
  • Stakeholder Input: Seek input from key stakeholders to validate findings and recommendations.

By following a structured approach to creating your feasibility study report, you can effectively communicate the results of your analysis, support informed decision-making, and increase the likelihood of project success.

Feasibility Study Examples

Let's dive into some real-world examples to truly grasp the concept and application of feasibility studies. These examples will illustrate how various types of projects and businesses undergo the feasibility assessment process to ensure their viability and success.

Example 1: Local Restaurant

Imagine you're passionate about opening a new restaurant in a bustling urban area. Before investing significant capital, you'd want to conduct a thorough feasibility study. Here's how it might unfold:

  • Market Feasibility: You research the local dining scene, identify target demographics, and assess the demand for your cuisine. Market surveys reveal potential competitors, dining preferences, and pricing expectations.
  • Technical Feasibility: You design the restaurant layout, plan the kitchen setup, and assess the technical requirements for equipment and facilities. You consider factors like kitchen efficiency, safety regulations, and adherence to health codes.
  • Financial Feasibility: You estimate the initial costs for leasing or purchasing a space, kitchen equipment, staff hiring, and marketing. Revenue projections are based on expected foot traffic, menu pricing, and seasonal variations.
  • Operational Feasibility: You create kitchen and service operations workflow diagrams, considering staff roles and responsibilities. Resource planning includes hiring chefs, waitstaff, and kitchen personnel. Scalability is evaluated for potential expansion or franchising.
  • Risk Analysis: Potential operational risks are identified, such as food safety concerns, labor shortages, or location-specific challenges. Risk mitigation strategies involve staff training, quality control measures, and contingency plans for unexpected events.

Example 2: Software Development Project

Now, let's explore the feasibility study process for a software development project, such as building a mobile app:

  • Market Feasibility: You analyze the mobile app market, identify your target audience, and assess the demand for a solution in a specific niche. You gather user feedback and conduct competitor analysis to understand the competitive landscape.
  • Technical Feasibility: You define the technical requirements for the app, considering platforms (iOS, Android), development tools, and potential integrations with third-party services. You evaluate the feasibility of implementing specific features.
  • Financial Feasibility: You estimate the development costs, including hiring developers, designers, and ongoing maintenance expenses. Revenue projections are based on app pricing, potential in-app purchases, and advertising revenue.
  • Operational Feasibility: You map out the development workflow, detailing the phases from concept to deployment. Resource planning includes hiring developers with the necessary skills, setting up development environments, and establishing a testing framework.
  • Risk Analysis: Potential risks like scope creep, technical challenges, or market saturation are assessed. Mitigation strategies involve setting clear project milestones, conducting thorough testing, and having contingency plans for technical glitches.
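
The financial piece of an assessment like this can be sketched with simple arithmetic. Below is a minimal break-even model for the app example; every figure (development cost, maintenance, user count, price) is a hypothetical placeholder to be replaced with your own estimates:

```python
# Illustrative financial-feasibility sketch for the mobile app example.
# All figures are hypothetical assumptions, not real market data.

def months_to_break_even(upfront_cost, monthly_cost, monthly_revenue):
    """Return the number of months until cumulative revenue covers costs,
    or None if the project never breaks even under these assumptions."""
    if monthly_revenue <= monthly_cost:
        return None  # revenue never outpaces ongoing costs
    cumulative = -upfront_cost
    months = 0
    while cumulative < 0:
        cumulative += monthly_revenue - monthly_cost
        months += 1
    return months

# Hypothetical inputs: $120k development, $8k/month maintenance,
# 10,000 users paying $2.50/month on average.
print(months_to_break_even(120_000, 8_000, 10_000 * 2.50))  # prints 8
```

Even a toy model like this makes the feasibility conversation concrete: if the break-even horizon is longer than your funding runway, the numbers need to change before the project proceeds.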

These examples demonstrate the versatility of feasibility studies across diverse projects. Whatever type of venture or endeavor you want to embark on, a well-structured feasibility study guides you toward informed decisions and increased project success.

In conclusion, conducting a feasibility study is a crucial step in your project's journey. It helps you assess the viability and potential risks, providing a solid foundation for informed decision-making. Remember, a well-executed feasibility study not only enables you to identify challenges but also uncovers opportunities that can lead to your project's success.

By thoroughly examining market trends, technical requirements, financial aspects, and operational considerations, you are better prepared to embark on your project confidently. With this guide, you've gained the knowledge and tools needed to navigate the intricate terrain of feasibility studies.

How to Conduct a Feasibility Study in Minutes?

Speed and precision are paramount for feasibility studies, and Appinio delivers just that. As a real-time market research platform, Appinio empowers you to seamlessly conduct your market research in a matter of minutes, putting actionable insights at your fingertips.

Here's why Appinio stands out as the go-to tool for feasibility studies:

  • Rapid Insights: Appinio's intuitive platform ensures that anyone, regardless of their research background, can effortlessly navigate and conduct research, saving valuable time and resources.
  • Lightning-Fast Responses: With an average field time of under 23 minutes for 1,000 respondents, Appinio ensures that you get the answers you need when you need them, making it ideal for time-sensitive feasibility studies.
  • Global Reach: Appinio's extensive reach spans over 90 countries, allowing you to define the perfect target group from a pool of 1,200+ characteristics and gather insights from diverse markets.




How to conduct a feasibility study: Template and examples

Opportunities are everywhere. Some opportunities are small and don’t require many resources. Others are massive and need further analysis and evaluation.

One of your key responsibilities as a product manager is to evaluate the potential success of those opportunities before investing significant money, time, and resources. A feasibility study, also known as a feasibility assessment or feasibility analysis, is a critical tool that can help product managers determine whether a product idea or opportunity is viable, feasible, and profitable.

So, what is a feasibility analysis? Why should product managers use it? And how do you conduct one?

What is a feasibility study?

A feasibility study is a systematic analysis and evaluation of a product opportunity’s potential to succeed. It aims to determine whether a proposed opportunity is financially and technically viable, operationally feasible, and commercially profitable.

A feasibility study typically includes an assessment of a wide range of factors, including the technical requirements of the product, resources needed to develop and launch the product, the potential market gap and demand, the competitive landscape, and economic and financial viability.

Based on the analysis’s findings, the product manager and their product team can decide whether to proceed with the product opportunity, modify its scope, or pursue another opportunity and solve a different problem.

Conducting a feasibility study helps PMs ensure that resources are invested in opportunities that have a high likelihood of success and align with the overall objectives and goals of the product strategy.

What are feasibility analyses used for?

Feasibility studies are particularly useful when introducing entirely new products or verticals. Product managers can use the results of a feasibility study to:

  • Assess the technical feasibility of a product opportunity — Evaluate whether the proposed product idea or opportunity can be developed with the available technology, tools, resources, and expertise
  • Determine a project’s financial viability — By analyzing the costs of development, manufacturing, and distribution, a feasibility study helps you determine whether your product is financially viable and can generate a positive return on investment (ROI)
  • Evaluate customer demand and the competitive landscape — Assessing the potential market size, target audience, and competitive landscape for the product opportunity can inform decisions about the overall product positioning, marketing strategies, and pricing
  • Identify potential risks and challenges — Identify potential obstacles or challenges that could impact the success of the identified opportunity, such as regulatory hurdles, operational and legal issues, and technical limitations
  • Refine the product concept — The insights gained from a feasibility study can help you refine the product’s concept, make necessary modifications to the scope, and ultimately create a better product that is more likely to succeed in the market and meet users’ expectations

How to conduct a feasibility study

The activities involved in conducting a feasibility study differ from one organization to another. Also, the threshold, expectations, and deliverables change from role to role.

For a general set of guidelines to help you get started, here are some basic steps to conduct and report a feasibility study for major product opportunities or features.

1. Clearly define the opportunity

Imagine your user base is facing a significant problem that your product doesn’t solve. This is an opportunity. Define the opportunity clearly, support it with data, talk to your stakeholders to understand the opportunity space, and use it to define the objective.

2. Define the objective and scope

Each opportunity should be coupled with a business objective and should align with your product strategy.

Determine and clearly communicate the business goals and objectives of the opportunity. Align those objectives with company leaders to make sure everyone is on the same page. Lastly, define the scope of what you plan to build.

3. Conduct market and user research

Now that you have everyone on the same page and the objective and scope of the opportunity clearly defined, gather data and insights on the target market.

Include elements like the total addressable market (TAM), growth potential, competitors’ insights, and deep insight into users’ problems and preferences collected through techniques like interviews, surveys, observation studies, contextual inquiries, and focus groups.
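
Market sizing itself often reduces to simple top-down arithmetic. The sketch below walks from TAM to serviceable (SAM) and obtainable (SOM) markets; the segment shares and revenue per user are invented placeholders, not research findings:

```python
# Top-down market sizing sketch (TAM -> SAM -> SOM).
# All numbers are hypothetical placeholders for illustration.
tam_users = 2_000_000       # everyone who has the problem
sam_share = 0.25            # slice your product can realistically serve
som_share = 0.05            # slice of SAM you might capture early on
arpu_per_year = 60.0        # assumed average revenue per user per year

tam = tam_users * arpu_per_year   # total addressable market ($/year)
sam = tam * sam_share             # serviceable addressable market
som = sam * som_share             # serviceable obtainable market
print(f"TAM ${tam:,.0f} | SAM ${sam:,.0f} | SOM ${som:,.0f}")
```

A bottom-up estimate (building up from concrete customer segments) is usually more defensible, but a top-down pass like this is a quick sanity check on whether the opportunity is big enough to pursue.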

4. Analyze technical feasibility

Suppose your market and user research have validated the problem you are trying to solve. The next step is to work alongside your engineers to assess the technical resources and expertise needed to launch the product to market.

Dig deeper into the proposed solution and try to comprehend the technical limitations and estimated time required for the product to be in your users’ hands.

5. Assess financial viability

If your company has a product pricing team, work closely with them to determine users' willingness to pay (WTP) and devise a monetization strategy for the new feature.

Conduct a comprehensive financial analysis, including the total cost of development, revenue streams, and the expected return on investment (ROI) based on the agreed-upon monetization strategy.
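
As a sketch, the ROI math behind this analysis can look like the following. The adoption rate, price, horizon, and cost figures are placeholder assumptions, not recommendations:

```python
# Minimal ROI sketch for the financial-viability step.
# Inputs are illustrative placeholders; plug in the figures agreed
# with your pricing team.

def simple_roi(total_cost, total_revenue):
    """ROI as a fraction: (gain - cost) / cost."""
    return (total_revenue - total_cost) / total_cost

def projected_revenue(users, adoption_rate, price_per_user, months):
    """Revenue over a horizon given an assumed adoption rate and price."""
    return users * adoption_rate * price_per_user * months

# Hypothetical: 50,000 users, 12% adopt, $4/month, 24-month horizon,
# $300k total cost of development and maintenance.
revenue = projected_revenue(50_000, 0.12, 4.0, 24)
print(f"revenue = ${revenue:,.0f}, ROI = {simple_roi(300_000, revenue):.0%}")
```

In practice you would run this for several pricing models and adoption-rate scenarios (pessimistic, expected, optimistic) and report the range, not a single point estimate.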

6. Evaluate potential risks

Now that you have almost a complete picture, identify the risks associated with building and launching the opportunity. Risks may include things like regulatory hurdles, technical limitations, and any operational risks.
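
One common way to structure this step is a likelihood-by-impact matrix. The risks and 1-5 ratings below are illustrative judgment calls, not a complete risk register for any real project:

```python
# Simple likelihood x impact matrix for the risk-evaluation step.
# Entries and ratings are hypothetical examples.
risks = [
    # (risk, likelihood 1-5, impact 1-5)
    ("Regulatory hurdles", 2, 5),
    ("Technical limitations", 3, 4),
    ("Operational gaps", 4, 2),
]

# Rank by exposure (likelihood x impact) so the biggest threats surface first.
for name, likelihood, impact in sorted(risks, key=lambda r: -(r[1] * r[2])):
    print(f"{name}: exposure {likelihood * impact}")
```

The scores themselves matter less than the ranking: they force an explicit conversation about which risks deserve mitigation plans before launch.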

7. Decide, prepare, and share

Based on the steps above, you should end up with a report that can help you decide whether to pursue the opportunity or not. Either way, prepare your findings, including any recommended modifications to the product scope, and present your final findings and recommendations to your stakeholders.

Make sure to prepare an executive summary for your C-suite; they will be the most critical stakeholders and the decision-makers at the end of the meeting.

Feasibility study example

Imagine you’re a product manager at a digital software company that specializes in building project management tools.

Your team has identified a potential opportunity to expand the product offering by developing a new AI-based feature that can automatically prioritize tasks for users based on their deadlines, workload, and importance.

To assess the viability of this opportunity, you can conduct a feasibility study. Here’s how you might approach it according to the process described above:

  • Clearly define the opportunity — In this case, the opportunity is the development of an AI-based task prioritization feature within the existing project management software
  • Define the objective and scope — The business objective is to increase user productivity and satisfaction by providing an intelligent task prioritization system. The scope includes the integration of the AI-based feature within the existing software, as well as any necessary training for users to understand and use the feature effectively
  • Conduct market and user research — Investigate the demand for AI-driven task prioritization among your target audience. Collect data on competitors who may already be offering similar features and determine the unique selling points of your proposed solution. Conduct user research through interviews, surveys, and focus groups to understand users’ pain points regarding task prioritization and gauge their interest in the proposed feature
  • Analyze technical feasibility — Collaborate with your engineering team to assess the technical requirements and challenges of developing the AI-based feature. Determine whether your team has the necessary expertise to implement the feature and estimate the time and resources required for its development
  • Assess financial viability — Work with your pricing team to estimate the costs associated with developing, launching, and maintaining the AI-based feature. Analyze the potential revenue streams and calculate the expected ROI based on various pricing models and user adoption rates
  • Evaluate potential risks — Identify any risks associated with the development and implementation of the AI-based feature, such as data privacy concerns, potential biases in the AI algorithm, or the impact on the existing product’s performance
  • Decide, prepare, and share — Based on your analysis, determine whether the AI-based task prioritization feature is a viable opportunity for your company. Prepare a comprehensive report detailing your findings and recommendations, including any necessary modifications to the product scope or implementation plan. Present your findings to your stakeholders and be prepared to discuss and defend your recommendations
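
To make the prioritization example tangible, here is a toy scoring heuristic that blends deadline, workload, and importance. This is not the product's algorithm; a real AI feature would likely use a learned model, and the fields and weights here are invented for illustration:

```python
# Toy task-prioritization heuristic (illustrative only).
from datetime import date

def priority_score(deadline, effort_hours, importance, today=None,
                   weights=(0.5, 0.2, 0.3)):
    """Blend deadline urgency, workload, and importance into one score
    (higher = do first). Weights are arbitrary illustrative choices."""
    today = today or date.today()
    days_left = max((deadline - today).days, 0)
    urgency = 1 / (1 + days_left)        # sooner deadline -> more urgent
    quickness = 1 / (1 + effort_hours)   # small tasks get a mild boost
    w_u, w_q, w_i = weights
    return w_u * urgency + w_q * quickness + w_i * importance

# Hypothetical backlog: (task, deadline, estimated hours, importance 0-1)
tasks = [
    ("Ship release notes", date(2024, 6, 3), 1, 0.4),
    ("Fix login outage", date(2024, 6, 1), 3, 1.0),
    ("Refactor billing", date(2024, 6, 30), 20, 0.6),
]
today = date(2024, 6, 1)
for name, deadline, hours, importance in sorted(
        tasks, key=lambda t: -priority_score(t[1], t[2], t[3], today=today)):
    print(name)
```

Even a throwaway prototype like this is useful during the technical-feasibility step: it lets engineers and users react to concrete behavior before anyone commits to building the real model.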

Feasibility study template

The following feasibility study template is designed to help you evaluate the feasibility of a product opportunity and provide a comprehensive report to inform decision-making and guide the development process.

Remember that each study will be unique to your product and market, so you may need to adjust the template to fit your specific needs.

  • Briefly describe the product opportunity or feature you’re evaluating
  • Explain the problem it aims to solve or the value it will bring to users
  • Define the business goals and objectives for the opportunity
  • Outline the scope of the product or feature, including any key components or functionality
  • Summarize the findings from your market research, including data on the target market, competitors, and unique selling points
  • Highlight insights from user research, such as user pain points, preferences, and potential adoption rates
  • Detail the technical requirements and challenges for developing the product or feature
  • Estimate the resources and expertise needed for implementation, including any necessary software, hardware, or skills
  • Provide an overview of the costs associated with the development, launch, and maintenance of the product or feature
  • Outline potential revenue streams and calculate the expected ROI based on various pricing models and user adoption rates
  • Identify any potential risks or challenges associated with the development, implementation, or market adoption of the product or feature
  • Discuss how these risks could impact the success of the opportunity and any potential mitigation strategies
  • Based on your analysis, recommend whether to proceed with the opportunity, modify the scope, or explore other alternatives
  • Provide a rationale for your recommendation, supported by data and insights from your research
  • Summarize the key findings and recommendations from your feasibility study in a concise, easily digestible format for your stakeholders

Overcoming stakeholder management challenges

The ultimate challenge that faces most product managers when conducting a feasibility study is managing stakeholders.

Stakeholders may interfere with your analysis, jumping to the conclusion that your proposed product or feature won’t work and deeming it a waste of resources. They may even try to prioritize your backlog for you.

Here are some tips to help you deal with even the most difficult stakeholders during a feasibility study:

  • Use hard data to make your point — Never defend your opinion based on your assumptions. Always show them data and evidence based on your user research and market analysis
  • Learn to say no — You are the voice of customers, and you know their issues and how to monetize them. Don’t be afraid to say no and defend your team’s work as a product manager
  • Build stakeholder buy-in early on — Engage stakeholders from the beginning of the feasibility study process by involving them in discussions and seeking their input. This helps create a sense of ownership and ensures that their concerns and insights are considered throughout the study
  • Provide regular updates and maintain transparency — Keep stakeholders informed about the progress of the feasibility study by providing regular updates and sharing key findings. This transparency can help build trust, foster collaboration, and prevent misunderstandings or misaligned expectations
  • Leverage stakeholder expertise — Recognize and utilize the unique expertise and knowledge that stakeholders bring to the table. By involving them in specific aspects of the feasibility study where their skills and experience can add value, you can strengthen the study’s outcomes and foster a more collaborative working relationship

Final thoughts

A feasibility study is a critical tool to use right after you identify a significant opportunity. It helps you evaluate the opportunity's potential for success, surfaces potential challenges, gaps, and risks early, and grounds your decision in data-driven market insights.

By conducting a feasibility study, product teams can determine whether a product idea is profitable, viable, feasible, and thus worth investing resources into. It is a crucial step in the product development process and when considering investments in significant initiatives such as launching a completely new product or vertical.


  • Open access
  • Published: 07 September 2015

Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers

  • Alicia O’Cathain 1,
  • Pat Hoddinott 2,
  • Simon Lewin 3, 4,
  • Kate J. Thomas 1,
  • Bridget Young 5,
  • Joy Adamson 6,
  • Yvonne JFM. Jansen 7,
  • Nicola Mills 8 &
  • Jenny L. Donovan 8

Pilot and Feasibility Studies volume 1, Article number: 32 (2015)


Feasibility studies are increasingly undertaken in preparation for randomised controlled trials in order to explore uncertainties and enable trialists to optimise the intervention or the conduct of the trial. Qualitative research can be used to examine and address key uncertainties prior to a full trial. We present guidance that researchers, research funders and reviewers may wish to consider when assessing or undertaking qualitative research within feasibility studies for randomised controlled trials. The guidance consists of 16 items within five domains: research questions, data collection, analysis, teamwork and reporting. Appropriate and well conducted qualitative research can make an important contribution to feasibility studies for randomised controlled trials. This guidance may help researchers to consider the full range of contributions that qualitative research can make in relation to their particular trial. The guidance may also help researchers and others to reflect on the utility of such qualitative research in practice, so that trial teams can decide when and how best to use these approaches in future studies.


Introduction

The United Kingdom Medical Research Council (UK MRC) guidance on the development and evaluation of complex interventions recommends an early phase of assessing feasibility prior to a full evaluation [ 1 ]. In this feasibility and pilot phase, researchers can identify and address problems which might undermine the acceptability and delivery of the intervention or the conduct of the evaluation. When the outcome evaluation is a randomised controlled trial, this feasibility phase increases the chances of researchers evaluating the optimum intervention using the most appropriate and efficient recruitment practices and trial design. Alternatively, at the feasibility phase, researchers may identify fundamental problems with the intervention or trial conduct and return to the development phase rather than proceed to a full trial. The feasibility phase thus has the potential to ensure that money is not wasted on an expensive trial which produces a null result due to problems with recruitment, retention or delivery of the intervention [ 2 ].

Feasibility studies for randomised controlled trials can draw on a range of methods. Some feasibility studies use quantitative methods only. For example, researchers concerned about whether they could recruit to a trial, and whether the intervention was acceptable to health professionals and patients, undertook a pilot trial with outcomes related to recruitment and surveys to measure the acceptability of the intervention [ 3 ]. Increasingly, qualitative or mixed methods are being used within feasibility studies for randomised controlled trials. A review of 296 journal articles reporting the use of qualitative research with trials published between 2008 and 2010 identified that 28 % of articles reported qualitative research undertaken prior to the full trial [ 4 ]. Qualitative research was not only undertaken with trials of complex interventions but was also used with trials of drugs and devices where researchers recognised the complexity of the patient group receiving the intervention or the complexity of the environment in which the trial was to be undertaken [ 5 ]. Yet, there is little guidance available on how to use qualitative methods within feasibility studies for trials. Here, we offer guidance in order to help researchers maximise the opportunities of this endeavour.

Getting the language right: feasibility studies, pilot studies and pilot trials

Before offering guidance on using qualitative methods at the feasibility phase of a trial, we first need to be clear about the meaning of the term ‘feasibility study’ because the language used to describe the preparatory phase for a trial is inconsistent [ 6 ]. These types of studies can be called feasibility or pilot studies, with researchers making no clear distinction between the two when reporting their studies in journal articles [ 7 ]. The MRC guidance for developing and evaluating complex interventions describes this as the ‘feasibility and piloting’ stage. The UK funding body, the National Institute for Health Research (NIHR), offers definitions of feasibility and pilot studies, distinguishing between the two [ 8 ]. A feasibility study is undertaken to address the question ‘can the planned evaluation be done?’. In contrast, pilot studies are miniature versions of the main study. In the case of a randomised controlled trial, the pilot study is a pilot trial. A feasibility study for a randomised controlled trial does not necessarily involve a pilot randomised controlled trial [ 1 ] but may do so, and indeed, some researchers have described their studies as a ‘feasibility study and pilot trial’ in the titles of their journal articles [ 9 ]. Other terms may be used to describe a feasibility study for a trial, for example a ‘formative’ study as part of ‘evidence-based planning’ [ 10 ] or an exploratory pilot study [ 11 ] or a process evaluation with a pilot trial [ 12 ]. In this guidance, we use the term ‘feasibility study’ to describe any study that addresses the question of whether the planned evaluation trial can be done regardless of the labels other researchers might use.

The need for guidance on using qualitative methods in feasibility studies for randomised controlled trials

With the use of qualitative research in feasibility studies for randomised controlled trials becoming increasingly common, guidance on how to do this would be useful to both researchers and those commissioning and reviewing this research. Guidance is available or emerging in areas related to feasibility studies for trials. Guidance exists for undertaking quantitative pilot studies [ 13 , 14 ], and a Consolidated Standards of Reporting Trials (CONSORT) statement for reporting feasibility studies (rather than undertaking them) is under development [ 6 ]. UK MRC guidance has recently been developed for process evaluations undertaken alongside randomised controlled trials [ 15 ]. This new guidance recommends that, in most cases, it is useful to use both qualitative and quantitative methods concurrently with a pilot or full trial. It also states that as feasibility studies will usually aim to refine understanding of how the intervention works, and facilitate ongoing adaptation of intervention and evaluation design in preparation for a full trial, qualitative data will likely be of particular value at this stage. However, that guidance does not address in any depth issues specific to the use of qualitative research during the feasibility phase of a trial. There is also guidance for writing proposals for using qualitative research with trials [ 16 ] and reporting qualitative research undertaken with trials [ 5 ]. However, the feasibility phase of a trial is unique in that it may involve the ongoing adaptation of plans for conducting the trial and of the intervention in preparation for the full trial. Therefore, our guidance complements recent and upcoming guidance by focusing on the role of qualitative research specifically rather than the overall feasibility study and by addressing the iterative nature of research that may occur in feasibility studies for trials.

The focus of the guidance

This guidance focuses on how to use qualitative research within a feasibility study undertaken prior to a full randomised controlled trial where the aim is to improve the intervention or trial conduct for the full trial. Appropriate and well-conducted qualitative research can make an important contribution to feasibility studies for randomised controlled trials. The guidance presented here may help researchers to consider the full range of possible contributions that qualitative research can make in relation to their particular trial and reflect on the utility of this research in practice, so that others can decide when and how best to use qualitative research in their studies. Prior to presenting the guidance, we clarify six issues about the scope of the guidance:

A feasibility study may or may not include a pilot randomised controlled trial.

The feasibility phase follows the development phase of an intervention, in which qualitative methods may also be used [ 1 ]. Although there may be overlap between the development of the intervention and the feasibility phase of the trial, this guidance assumes that an intervention has been developed, but that it might need further modification, including assessment of its practicability in the health care setting.

Qualitative methods can be used alone or in conjunction with quantitative methods, such as modelling and surveys, in the feasibility phase [ 1 ].

The definition of qualitative research is the explicit use of both qualitative data collection and analysis methods. This is distinguished from trialists’ reflective reports on the problems that they encountered in running a feasibility study and from the use of methods that may draw on qualitative approaches but do not meet our definition. For example, some researchers report using ‘observation’ and ‘field notes’ but show no evidence of qualitative data collection or analysis in their article and do not label these as qualitative research [ 8 ]. Reflective practice by trialists and intervention deliverers is important for learning about trial conduct but is not the focus of the guidance presented here.

The guidance focuses on maximising the opportunities of qualitative research by presenting options, rather than delineating required actions. This is based on the understanding that the strengths of qualitative research are its flexibility and responsiveness to emerging issues from the field.

The guidance may be used by researchers when writing proposals and undertaking or reporting qualitative research within feasibility studies. If the feasibility study includes a pilot randomised controlled trial, reporting should follow the CONSORT statement that is currently under development [ 6 ].

Processes used to develop the guidance

This guidance is based on the experience of the authors of this paper. The authors came together in a workshop to write this guidance after meeting to discuss a study of how to maximise the value of qualitative research with randomised controlled trials which had been undertaken by two of the authors of this guidance (AOC, KJT) [ 4 , 5 ]. That study involved undertaking a systematic mapping review of journal articles reporting qualitative research undertaken with randomised controlled trials and interviews with qualitative researchers and trialists; some of these articles are referenced to illustrate points made. Towards the end of this study, the UK MRC Hubs for Trials Methodology Research funded a conference to disseminate the findings from this study and a related 1-day workshop to develop guidance for using qualitative research with trials. The nine workshop members, all of whom are authors of this guidance, were identified for their experience in using qualitative research with trials. One member had also published a review of the use of qualitative research alongside trials of complex interventions [ 17 ].

The workshop focused on feasibility studies because these were identified as an underdeveloped aspect of trial methodology. The workshop members put forward items for the guidance, based on their experience and expert knowledge. Discussion took place about the importance of items and the different viewpoints within each item. Draft guidance was produced by AOC after the workshop. Subsequent development of the guidance was undertaken by email correspondence and meetings between sub-groups of the workshop membership. A draft of the guidance was then presented at a meeting of an MRC Methodology Hub for researchers with experience in undertaking qualitative research in feasibility studies for trials. Attendees viewed the guidance as helpful, and further insights emerged from this process, particularly around the analysis domain of the guidance.

The guidance

The guidance is detailed below and summarised in Table 1. The structure follows the stages of a research project from identifying research questions to reporting findings and consists of 16 items within five domains: research questions, data collection, analysis, teamwork and reporting. Although the table presents a neat and linear process, in practice, this research is likely to be messy and iterative, with researchers moving backwards and forwards between steps as insights emerge and the priority of different research questions changes. Figure 1 shows how the guidance meshes with this more dynamic process. We illustrate some of the items in the guidance using case studies of published qualitative research undertaken within feasibility studies for trials. Some items tend not to be visible in publications, particularly those on teamworking, and therefore are not illustrated in these case studies.

Key steps for qualitative research in a feasibility study for a trial

Research questions

When designing the feasibility study, consider the full range of questions that could be addressed. Then, consider those best addressed by qualitative research.

Some researchers have produced lists of questions that could be addressed in feasibility studies for trials, focusing on the conduct of the trial and on the intervention [8]. A review of feasibility and pilot trials identified the range of questions addressed in a subset of feasibility studies that included a randomised controlled trial [18], although it was not clear which of these questions were addressed by qualitative research. Other researchers have identified frameworks or typologies of questions for feasibility studies. For example, a description of feasibility studies for cancer prevention in the USA identified a typology of the questions addressed and some of the methodologies used [19]. Qualitative research was identified as useful for issues concerning acceptability, implementation, practicality and expansion (in terms of understanding use of a known intervention in a different sub-group). There is also a framework for the work undertaken by qualitative research with trials [4]. Using the latter framework, we drew on the literature cited here and our own experience of feasibility studies to identify the range of issues qualitative research can address in a feasibility study for a trial (Table 2). Although not noted explicitly there, the context in which the intervention is delivered is relevant to a large number of the questions identified in Table 2 and should be considered during a feasibility study as well as in the full trial [15]. The important role of context within complex intervention trials was highlighted in a recent study which found that contextual threats to trial conduct were often subtle, idiosyncratic and complex [20], and therefore best explored using qualitative research.

Prioritise the questions for the qualitative research by identifying key uncertainties.

Many questions can be addressed in a feasibility study, but resource limitations require that these are prioritised. The whole team will need to identify the key uncertainties that the feasibility study should address. Thereafter, a search of the evidence base for systematic reviews (including mixed reviews based on both qualitative and quantitative research) relevant to these uncertainties may yield useful insights. Where no systematic reviews exist, and there is no resource to undertake them, studies of similar interventions or similar types of trials may be helpful. Questions on which there is currently little or no existing evidence can then be prioritised for new primary qualitative research.

Consider often overlooked questions.

Researchers commonly use qualitative research to address the acceptability and feasibility of the intervention [10, 21–24] or its perceived benefits [11, 22]. During our workshop, we identified three important questions that can be overlooked and are worth considering:

How do the intervention components and delivery processes work in the real world?

Guidance for process evaluations recommends developing a logic model or explanatory model of the intervention [ 15 ]. This logic model includes the intervention components and pathways to delivering desired outcomes. However, even if trialists, intervention deliverers, patients and the public, and qualitative researchers have been involved in developing this logic model, some aspects of the intervention in practice may be hidden or not understood, and these hidden aspects may be the key to delivering outcomes. For example, intervention deliverers may adapt the intervention in unanticipated ways in order to deliver it in their local context. Qualitative research, including non-participant observation and interviews with intervention deliverers and recipients, may be helpful in identifying how and why they have done this. This may facilitate replication of the intervention in the subsequent trial or rollout and also raise questions about the most appropriate trial design required. In addition, it may offer insights into which aspects of the intervention should be fixed or flexible in the full trial [ 25 ] and how the intervention needs to be tailored to different contexts. The wider context in which the trial operates may also affect the implementation of the intervention, the control or the trial, for example staff shortages, media scares or the economic climate. Intervention vignettes can be a helpful tool in qualitative interviews to talk potential participants through each step of an intervention in a concrete way [ 26 ].

How does the choice of comparator affect the trial?

The focus of qualitative research undertaken with trials tends to be on the intervention, but qualitative research can also help to understand the control. Interventions can be compared with active controls or usual care, and there may be issues to explore regarding the comparability of an active control and the intervention or the extent to which the trial may change usual care [ 27 ]. Such research may help the trial team to consider whether there is sufficient difference between the groups being compared in any trial. For instance, the planned intervention may not be that different from usual care in some settings and may need to be enhanced prior to use in the full trial. Differences between the intervention and usual care will have implications for the relative effectiveness of the intervention and the transferability of the trial findings to other contexts.

Understanding usual care is also important because it represents a key feature of the context in which the new intervention will be implemented. Where a new intervention represents a fundamental change from usual practice, one would perhaps expect to encounter greater challenges in implementation and would need to pay more attention to the resources and structures required to achieve change compared to where the intervention represents a more incremental change.

To what degree does equipoise exist?

Key stakeholders may not be in equipoise around the intervention [28]. These stakeholders include the trial designers, recruiters, patient and public representatives and participants, as well as health care staff who are not directly involved in the trial but will use the evidence produced by it. A lack of equipoise amongst stakeholders may lead to poor recruitment practices, low recruitment rates or a lack of utility of the evidence in the real world [29]. Considering equipoise at the feasibility phase can offer opportunities to address this, for example through education, increasing awareness and enabling open discussion of the issues, or can highlight the option of not progressing to an expensive full trial [30, 31]. This has been highlighted as a particular problem for behavioural intervention trials, with recommendations to explore the issue at the pilot stage of a trial [32].

Design and data collection

Consider the range of qualitative methods that might be used to address the key feasibility questions, including dynamic or iterative approaches which allow learning from early qualitative research findings to be implemented before further qualitative research is undertaken as part of the feasibility study.

When undertaking qualitative research in feasibility studies for trials, it is common for researchers to undertake a cross-sectional interview study with intervention deliverers and recipients and not to specify explicitly an approach or design [ 12 , 21 , 22 , 24 ]. Although sometimes it may be important to mirror closely the expected approach of the planned full trial in terms of recruitment practices, it may be helpful for the research team to take a flexible approach to the qualitative research. The team may make changes during the feasibility study itself, based on findings from the qualitative research, and then assess the impact of these changes [ 33 ]. This is sometimes called a ‘dynamic approach’. Such changes could include taking action to modify the pilot trial conduct, as well as working with intervention stakeholders to feedback and resolve difficulties in implementing the intervention. Further qualitative research can then be undertaken to inform further improvements throughout the feasibility study. This can help to optimise trial conduct or an intervention rather than simply identify problems with it. Case study 1 describes an example of this dynamic approach to data collection [ 33 ].

Other approaches suitable for feasibility studies include iterative ‘rapid ethnographic assessment’ which has been used to adapt and tailor interventions to the different contexts in which the trial was planned [ 34 ]. This approach applies a range of methods including participant observation, focus groups, interviews and social mapping [ 34 ]. Other researchers have used ‘mixed methods formative research’ at the feasibility stage [ 10 ] and action research where potential participants and practitioners are actively involved in the research to assess the feasibility of an intervention and to ensure a good intervention-context fit [ 35 , 36 ]. For instance, a participatory approach informed by the principles of action research was used to design, implement and evaluate the FEeding Support Team (FEST) intervention [ 35 , 36 ].

A dynamic or iterative approach to qualitative research in a feasibility study, where concurrent changes are made to the intervention or trial conduct, would not be suitable for a full trial, where care is taken to protect the experiment. In a full randomised controlled trial, researchers may be concerned that an excessive volume or intensity of qualitative research may contaminate the experiment by acting as an intervention [37]. Or they may be concerned that early reporting of qualitative findings could detrimentally affect staff delivering the intervention or the trial [38]. Any risks will depend on the size and type of the trial and the qualitative research, and may be far outweighed by the benefits in practice of undertaking the qualitative research throughout the full trial. These concerns are less relevant during the feasibility phase.

Select from a range of appropriate qualitative methods to address the feasibility questions and provide a rationale for the choices made; non-participant observation may be an important consideration.

Researchers need to select from a range of qualitative methods including telephone and face-to-face interviews, focus groups, non-participant observation, paper/audio/video diaries, case notes kept by health professionals and discussions in online chat rooms and social media. Decisions on data collection and analysis methods should depend on the research questions posed and the context in which data will be collected. To date, feasibility studies for trials have often tended to rely solely upon interviews or focus group discussions with participants and intervention deliverers and have not drawn on the wider range of methods available [21–24]. Researchers may also favour focus groups because they believe them to be cheap and quick when, in practice, they are challenging both to organise and to analyse. Some researchers are explicit about why focus groups are the best approach for their study. For example, in a randomised trial on the use of diaphragms to prevent sexually transmitted infection, the research team conducted 12 focus groups with women before and after they received the intervention to consider its acceptability and feasibility. This data collection approach was justified on the basis that the researchers felt focus groups would generate more open discussion [10]. However, focus groups may be problematic in a feasibility study because they tend towards consensus and can mask dissenting views, with the possibility of premature conceptual closure. It may also be the case that participants who are prepared to talk openly within a group setting differ from the target population for a trial because, in general, focus groups tend to attract more educated and confident individuals [39].

Non-participant observation, including the use of audio or video recordings of intervention delivery or recruitment sessions, can help to identify implementation constraints at the feasibility phase. Observation has also proved to be very useful when exploring recruitment practices for a full trial [ 33 , 40 ]. ‘Think aloud’ protocols may also be helpful—for example, in one feasibility study of a technology to deliver behaviour change, the approach was used to allow users to talk about the strengths and weaknesses of the technology as they attempted to use it [ 41 ].

Pay attention to diversity when sampling participants, groups, sites and stage of intervention.

All of the different approaches to sampling in qualitative research—such as purposive, key informant and snowballing—are relevant to feasibility studies. A particular challenge for sampling within the feasibility phase is the need to address the wide range of uncertainties about the full trial or the intervention within the resource limitations of the study.

It can be difficult to decide when enough has been learnt about the trial intervention or the conduct of the trial (or when data saturation has occurred) to recommend moving on to the full trial. Researchers will need to make pragmatic decisions on which emerging analysis themes warrant more data collection and where sufficient data are available. In practice, sample sizes for qualitative research in feasibility studies are usually small (typically between 5 and 20 individuals [10, 12, 22–24]). This may be reasonable, given that simulations suggest that 10 users will identify a minimum of 80% of the problems with a technology during usability testing, and 20 users will identify 95% of the problems [42]. However, sample size will depend on the study; for example, there may be therapist effects to consider and a need to sample a range of patients using different therapists, or a range of contexts.
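The usability-testing figures cited above are consistent with the standard binomial problem-discovery model, in which each participant is assumed to detect any given problem independently with probability p, so n participants are expected to uncover 1 − (1 − p)^n of the problems. A minimal sketch of that arithmetic (the per-user detection probability of 0.15 is an illustrative assumption, not a value taken from the source):

```python
def proportion_found(n_users: int, p_detect: float) -> float:
    """Expected proportion of usability problems found by n_users,
    assuming each user independently detects any given problem
    with probability p_detect (binomial discovery model)."""
    return 1 - (1 - p_detect) ** n_users

# With an illustrative per-user detection probability of 0.15,
# 10 users find roughly 80% of problems and 20 users roughly 96%,
# broadly in line with the simulation results cited in the text.
print(round(proportion_found(10, 0.15), 2))  # ~0.80
print(round(proportion_found(20, 0.15), 2))  # ~0.96
```

The model also shows why diversity of sampling matters: problems that only affect a sub-group have a much lower effective p, so a homogeneous sample can miss them entirely however many interviews are conducted.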

Diversity of sampling is probably more important at the feasibility phase than the number of interviews or focus groups conducted, and some researchers have rightly highlighted as a limitation the lack of diversity in the sampling process for their qualitative feasibility study [ 20 ]. Paying attention to the diversity of sampling needed may be important for identifying the wide range of problems likely to be faced by the group/s to which the intervention is directed. Including a diverse range of health professionals and patients (for an individual-level trial) and sites (for a cluster trial) can be beneficial. In individual-level multicentre trials, including more than one centre at the feasibility stage can reduce the chance of refining an intervention or trial that will only work within that single centre. As in other forms of qualitative research, sampling may be very broad at the start of the feasibility study, when there are lots of questions and uncertainty, with later sampling focusing on disconfirming cases to test emerging findings.

Appreciate the difference between qualitative research and public and patient involvement.

In the UK and many other settings, it is considered good practice to have public and patient involvement in health research [ 43 ]. This is highly relevant to a feasibility study where patients and the public can contribute to prioritising which key uncertainties to address and are therefore involved at an early stage of the design of the full trial. Indeed, there is guidance available on patient and public involvement in trials, showing how service users can be involved at the feasibility/pilot stage of a trial by being members of the management group, steering committee and research team and by contributing to the design, analysis and reporting of the feasibility study [ 44 ]. A potential concern is that some researchers conflate qualitative research and public and patient involvement; this may be more common during a feasibility study if the public or patient involvement group is asked to provide feedback on the intervention. Although patient and public representatives on research teams can provide helpful feedback on the intervention, this does not constitute qualitative research and may not result in sufficiently robust data to inform the appropriate development of the intervention. That is, qualitative research is likely to be necessary in conjunction with any patient and public involvement. Case study 2 describes an example of a qualitative study undertaken with patient involvement [ 45 ].

Analysis

Consider the timing of analysis, which might be in stages in a dynamic approach.

For many types of qualitative research, it is suggested that data are analysed as they are collected so that the sampling for the next round of data collection benefits from the analysis of these earlier data. If a dynamic approach is applied in a feasibility study, it is important to have available sufficient resources to analyse the data collected early in the study in order to feed findings back to the wider team and allow changes to be made to the intervention and trial conduct prior to the next set of data collection. This can be quite different from using qualitative research in the full trial, where all data might be collected prior to any formal analysis and sharing of findings with the wider team.

Many different approaches to analysis can be used, including framework, thematic and grounded theory-informed analysis.

Many different approaches can be used to analyse qualitative data in the context of a feasibility study, and the approach should be chosen based on the research question and the skills of the research team. Some researchers simply describe the steps they take within their analysis rather than citing a named approach [ 12 ]. Other researchers use combinations of known approaches such as framework analysis and grounded theory [ 36 ].

Data can cover a breadth of issues, but the analysis may focus on a few key issues.

An important challenge for analysis may be the specificity of the questions that a qualitative feasibility study needs to address in order to inform trial development. Analysis will need to focus on the questions prioritised at the beginning, or those emerging throughout the feasibility study, from the large amounts of qualitative data generated. The analysis process needs to consider ‘fatal flaws’ that may require tailoring or refining of the intervention or trial conduct, as well as the mechanisms of action of the intervention.

Teamworking

Have a qualitative researcher as part of the feasibility study design team.

Planning the feasibility study needs qualitative expertise to determine what can be done, how long it might take, how it is best done and the resources needed. It is therefore important that an expert in qualitative methods be included in both the planning and delivery teams for the feasibility study.

Consider relationships between the qualitative researchers and the wider feasibility study team.

How the qualitative researchers interact with the wider feasibility study team is an important concern. If study participants view the qualitative researchers as closely aligned with the team delivering the intervention or conducting the pilot trial, then participants may feel less able to offer honest criticisms of the intervention or trial conduct. On the other hand, where qualitative researchers work too independently from the wider team, they may not develop a deep understanding of the needs of the trial and the implications of their findings for the trial.

Qualitative researchers may identify issues that are uncomfortable for the rest of the research team. For example, they may consider that an intervention does not simply need refining but has a fundamental flaw or weakness in the context in which it is being tested. This may be particularly difficult if the intervention developer is part of the team. Indeed, some members of the team may not be in equipoise about the intervention (see earlier); they may have strong prior beliefs about its feasibility, acceptability and effectiveness and be unable to acknowledge any weaknesses. However, without openness to change, the qualitative research is unlikely to reach its potential for impact on the full trial. On the other hand, the wider team may need to challenge the findings of the qualitative research to ensure that any proposed changes are necessary. Qualitative researchers may also identify problems with the trial conduct that the rest of the team do not see as important because, for example, the recruitment statistics are adequate or because changing plans takes effort. There may also be tensions between what the trial design team need and what the qualitative researcher sees as important. For instance, the trial team may want to understand the feasibility of the intervention whilst the qualitative researcher is more interested in understanding mechanisms of action of the intervention. The team will need to discuss these differences as they plan and undertake the research. The only solution to these tensions is open communication between team members throughout the feasibility study.

Consider who will make changes to the intervention or trial conduct.

Qualitative researchers can identify strengths and weaknesses of the intervention or the conduct of the trial. However, they are usually not responsible for redesigning the intervention or trial either during the feasibility study (if a dynamic approach is taken) or at the end of the feasibility study when the full trial is being considered and planned. It is helpful to be explicit about who is responsible for making changes based on the qualitative findings and how and when they will do this.

Reporting

Publish feasibility studies where possible because they help other researchers to consider the feasibility of similar interventions or trials.

Other researchers can learn from feasibility studies, and where this is likely to be the case, we recommend publishing them in peer-reviewed journal articles. Other researchers might be willing to take forward to a full trial an intervention that the original researchers were unable or unwilling to take beyond the feasibility study. Or other researchers might learn how to develop better interventions or trials within the same field, or understand which qualitative methods are most fruitful in different contexts. Publishing what went wrong within a feasibility study can be as helpful as publishing what went right. Explicit description of how decisions were made about which research questions and uncertainties were prioritised may help others to understand how to make these types of decisions in their future feasibility studies.

Researchers may choose to publish the qualitative findings in the same article as the findings from the pilot trial or quantitative study or may publish them separately if there are detailed and different stories to tell. For example, Hoddinott and colleagues published separate articles related to the outcome evaluation and the process evaluation of a feasibility study of a breastfeeding intervention for women in disadvantaged areas [ 35 , 36 ]. Feasibility studies may generate multiple papers, each of which will need to tell one part of a coherent whole story. Regardless of how many articles are published from a single feasibility study, identifying each one as a feasibility study in the article title will help other researchers to locate them.

Describe the qualitative analysis and findings in detail.

When publishing qualitative research used with trials, researchers sometimes offer very limited description of the qualitative methods, analysis and findings or rely on limited data collection [ 5 , 17 ]. This ‘qual-lite’ approach limits the credibility of the qualitative research because other researchers and research users cannot assess the quality of the methods and interpretation. This may be due to the word limits of journal articles, especially if a range of quantitative and qualitative methods are reported in the same journal article. Electronic journals allowing longer articles, and the use of supplementary tables, can facilitate the inclusion of both more detail on the methods used and a larger number of illustrative data extracts [ 12 ]. Researchers may wish to draw on guidelines for the reporting of qualitative research [ 46 ].

Be explicit about the learning for a future trial or a similar body of interventions or generic learning for trials.

Qualitative research in a feasibility study for a trial can identify useful learning for the full trial and for researchers undertaking similar trials or evaluating similar interventions. This makes it important to be explicit about that learning in any report or article. Reporting the impact of the qualitative research on the trial, and potential learning for future trials, in the abstract of any journal article can make it easier for other researchers to learn from the qualitative research findings [12]. Examples of the impact that qualitative research in feasibility studies can have on the full trial include changes in the information given to participants in the full trial [10], recruitment procedures [21, 28], intervention content or delivery [12, 22, 24], trial design [23] or outcome measures to be used [47]. For example, in the ProtecT trial, initial expectations were that only a two-arm trial comparing radical treatments would be possible, but following the qualitative research, an acceptable active monitoring arm was developed and included in the main trial [21]. Learning from the qualitative research may be unexpected. For example, the aim of the qualitative research in one feasibility study was to explore the acceptability of the intervention, but in practice, it identified issues about the perceived benefits of the intervention which affected the future trial design [23]. See case study 3 for an example of qualitative research undertaken with a pilot trial where the learning for the full trial is explicitly reported in the published paper [47].

Once a feasibility study is complete, researchers must make the difficult decision of whether to progress to the full trial or publish why a full trial cannot be undertaken. There is guidance on how to make this decision, which encourages the systematic identification and appraisal of problems and potential solutions and improves the transparency of decision-making processes [48]. Too often, progression criteria are framed almost entirely in quantitative terms, and the extent to which qualitative data may or may not directly inform the decision on whether to proceed to a full trial is unclear. For example, if researchers fall just short of a quantitative criterion but have a sufficient qualitative understanding of why this happened and how to improve it, then it might be possible to proceed. Relatedly, qualitative research may identify potential harms at the feasibility stage; the intervention could be modified to avoid these in the full trial, or a decision could be made not to proceed to a full trial even if progression criteria were met.
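Progression guidance of the kind cited above is often expressed as traffic lights (stop / amend / go) rather than a single pass–fail cut-off. A toy sketch of how qualitative findings might enter such a rule; the thresholds, the recruitment-rate metric and the qualitative-override flag are all invented for illustration and are not taken from the source or from reference [48]:

```python
def progression_decision(recruitment_rate: float,
                         go_threshold: float = 0.7,
                         stop_threshold: float = 0.4,
                         qualitative_fix_identified: bool = False) -> str:
    """Hypothetical traffic-light progression rule for a feasibility study.

    recruitment_rate: achieved/target recruitment (e.g. 0.65 = 65% of target).
    Thresholds are illustrative only. A shortfall in the 'amend' zone can
    still support progression if qualitative research has explained the
    problem and identified a workable fix, mirroring the point in the text
    that qualitative understanding can justify proceeding despite a near-miss
    on a quantitative criterion.
    """
    if recruitment_rate >= go_threshold:
        return "go"
    if recruitment_rate < stop_threshold:
        return "stop"
    # Amend zone: the qualitative findings inform the decision.
    return "go (with amendments)" if qualitative_fix_identified else "amend"

print(progression_decision(0.65, qualitative_fix_identified=True))
# -> go (with amendments)
```

The point of the sketch is only that qualitative evidence deserves an explicit place in the decision rule, rather than being consulted informally after the quantitative criteria have already settled the matter.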

Conclusions

Exploring uncertainties before a full trial is underway can enable trialists to address problems or optimise the intervention or conduct of the trial. We present guidance that researchers, research funders and reviewers may wish to consider when assessing or undertaking qualitative research in feasibility studies. This guidance consists of 16 items framed around five research domains: research questions, data collection, analysis, teamwork and reporting. A strength of the guidance is that it is based on a combination of experiences from both published feasibility studies and researchers from eight universities in three countries. A limitation is that the guidance was not developed using consensus methods. The guidance is not meant as a straitjacket but as a way of helping researchers to reflect on their practice. A useful future exercise would be to develop worked examples of how research teams have used the guidance to plan and undertake their qualitative research within feasibility studies for trials. This would help to highlight the strengths and limitations of the guidance in different contexts. Using qualitative research with trials is still a developing area, and so, we present this guidance as a starting point for others to build on, as understanding of the importance of this vital stage of preparation for randomised controlled trials grows. Researchers may also wish to reflect on the utility of different qualitative methods and approaches within their studies to help other researchers make decisions about their future feasibility studies.

Abbreviations

MRC: Medical Research Council

NIHR: National Institute for Health Research

UK: United Kingdom


Acknowledgements

The workshop was funded by the MRC North West and ConDuCT Hubs for Trial Methodology. This work was undertaken with the support of the MRC ConDuCT-II Hub (Collaboration and innovation for Difficult and Complex randomised controlled Trials In Invasive procedures—MR/K025643/1). We would like to thank attendees at the ConDuCT-II Hub workshop on feasibility studies held at the University of Bristol in October 2014 who discussed and commented on a presentation of an earlier version of this guidance. JLD is a NIHR Senior Investigator. SL is supported by funding from the South African Medical Research Council.

Author information

Authors and affiliations

Medical Care Research Unit, School of Health and Related Research, University of Sheffield, Regent Street, Sheffield, S1 4DA, UK

Alicia O’Cathain & Kate J. Thomas

Primary Care, Nursing Midwifery and Allied Health Professionals Research Unit, University of Stirling, Stirling, FK9 4LA, Scotland, UK

Pat Hoddinott

Global Health Unit, Norwegian Knowledge Centre for the Health Services, Oslo, Norway

Simon Lewin

Health Systems Research Unit, South African Medical Research Council, Cape Town, South Africa

Institute of Psychology, Health and Society, University of Liverpool, Waterhouse Building, Block B, Brownlow Street, Liverpool, L69 3GL, UK

Bridget Young

Department of Health Sciences, University of York, Seebohm Rowntree Building, Heslington, York, YO10 5DD, UK

Joy Adamson

Behavioural and Societal Sciences, Work, Health & Care, Schoemakerstraat 97 (Gebouw A), Delft, 2628 VK, Netherlands

Yvonne JFM. Jansen

School of Social and Community Medicine, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS, UK

Nicola Mills & Jenny L. Donovan

Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement, Cardiff University, Cardiff, CF10 3XQ, UK

Graham Moore


Corresponding author

Correspondence to Alicia O’Cathain .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

AOC, JLD, KJT, BY and SL developed the idea and obtained the funding for the workshop. AOC, PH, KJT, BY, JA, YJFMJ, SL, GM and JLD attended the workshop where the core content of the guidance was developed. AOC wrote the first draft of the manuscript. AOC, PH and NM presented the guidance to the researchers engaged in this type of work to further develop the guidance. All authors commented on the drafts of the manuscript and read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

O’Cathain, A., Hoddinott, P., Lewin, S. et al. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers. Pilot Feasibility Stud 1, 32 (2015). https://doi.org/10.1186/s40814-015-0026-y

Received: 20 February 2015

Accepted: 13 August 2015

Published: 07 September 2015


Keywords

  • Randomised controlled trial
  • Feasibility studies
  • Pilot studies
  • Qualitative methods

Pilot and Feasibility Studies

ISSN: 2055-5784



Feasibility Study


A feasibility study is a detailed analysis that considers all of the critical aspects of a proposed project in order to determine the likelihood of it succeeding.

Success in business may be defined primarily by return on investment , meaning that the project will generate enough profit to justify the investment. However, many other important factors may be identified on the plus or minus side, such as community reaction and environmental impact.

Although feasibility studies can help project managers determine the risk and return of pursuing a plan of action, several steps should be considered before moving forward.

Key Takeaways

  • A company may conduct a feasibility study when it's considering launching a new business, adding a new product line, or acquiring a rival.
  • A feasibility study assesses the potential for success of the proposed plan or project by defining its expected costs and projected benefits in detail.
  • It's a good idea to have a contingency plan on hand in case the original project is found to be infeasible.


Understanding a Feasibility Study

A feasibility study is an assessment of the practicality of a proposed plan or project. It analyzes the viability of a project to determine whether the project or venture is likely to succeed. The study is also designed to identify potential issues and problems that could arise while pursuing the project.

As part of the feasibility study, project managers must determine whether they have enough of the right people, financial resources, and technology. The study must also determine the return on investment, whether this is measured as a financial gain or a benefit to society, as in the case of a nonprofit project.

The feasibility study might include a cash flow analysis, measuring the level of cash generated from revenue versus the project's operating costs . A risk assessment must also be completed to determine whether the return is enough to offset the risk of undergoing the venture.
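The cash flow comparison described above can be sketched with a couple of small helpers; the figures below are invented purely for illustration, and a real study would also discount future cash flows:

```python
# Illustrative cash flow analysis for a feasibility study.
# All figures are hypothetical examples, not data from any real project.

def net_cash_flows(revenues, operating_costs):
    """Yearly net cash generated: revenue minus operating costs."""
    return [r - c for r, c in zip(revenues, operating_costs)]

def simple_payback_period(initial_investment, flows):
    """Years until cumulative net cash flow recovers the initial investment.

    Returns None if the investment is never recovered within the horizon.
    """
    cumulative = 0.0
    for year, flow in enumerate(flows, start=1):
        cumulative += flow
        if cumulative >= initial_investment:
            return year
    return None

revenues = [120_000, 150_000, 180_000]       # projected yearly revenue
operating_costs = [80_000, 90_000, 100_000]  # projected yearly costs
flows = net_cash_flows(revenues, operating_costs)  # [40000, 60000, 80000]
print(simple_payback_period(100_000, flows))       # recovered in year 2
```

The payback period is the crudest of the standard measures; it is used here only because it maps directly onto the revenue-versus-operating-costs framing in the text.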

When doing a feasibility study, it’s always good to have a contingency plan that is ready to test as a viable alternative if the first plan fails.

Benefits of a Feasibility Study

There are several benefits to feasibility studies, including helping project managers discern the pros and cons of undertaking a project before investing a significant amount of time and capital into it.

Feasibility studies can also provide a company's management team with crucial information that could prevent them from entering into a risky business venture.

Such studies help companies determine how they will grow. They will know more about how they will operate, what the potential obstacles are, who the competition is, and what the market is.

Feasibility studies also help convince investors and bankers that investing in a particular project or business is a wise choice.

How to Conduct a Feasibility Study

The exact format of a feasibility study will depend on the type of organization that requires it. However, the same factors will be involved even if their weighting varies.

Preliminary Analysis

Although each project can have unique goals and needs, there are some best practices for conducting any feasibility study:

  • Conduct a preliminary analysis, which involves getting feedback about the new concept from the appropriate stakeholders
  • Analyze and ask questions about the data obtained in the early phase of the study to make sure that it's solid
  • Conduct a market survey or market research to identify the market demand and opportunity for pursuing the project or business
  • Write an organizational, operational, or business plan, including identifying the amount of labor needed, at what cost, and for how long
  • Prepare a projected income statement, which includes revenue, operating costs, and profit
  • Prepare an opening day balance sheet
  • Identify obstacles and any potential vulnerabilities, as well as how to deal with them
  • Make an initial "go" or "no-go" decision about moving ahead with the plan
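The projected income statement and the initial "go"/"no-go" call in the steps above can be modeled as a toy calculation. This is a hypothetical sketch: the 10% margin threshold and all figures are invented for the example, not a standard rule:

```python
# Hypothetical sketch of a projected income statement and the
# initial "go"/"no-go" screen from a preliminary feasibility analysis.

def projected_income_statement(revenue, operating_costs):
    """Return the three headline lines: revenue, costs, and profit."""
    return {
        "revenue": revenue,
        "operating_costs": operating_costs,
        "profit": revenue - operating_costs,
    }

def initial_decision(statement, required_margin=0.10):
    """'go' only if projected profit meets a minimum margin on revenue.

    The 10% default threshold is an illustrative assumption.
    """
    margin = statement["profit"] / statement["revenue"]
    return "go" if margin >= required_margin else "no-go"

stmt = projected_income_statement(revenue=500_000, operating_costs=430_000)
print(stmt["profit"])          # 70000
print(initial_decision(stmt))  # margin is 0.14 -> "go"
```

In practice the decision also weighs the obstacles and vulnerabilities identified earlier, which resist this kind of single-number reduction.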

Suggested Components

Once the initial due diligence has been completed, the real work begins. Components that are typically found in a feasibility study include the following:

  • Executive summary: Formulate a narrative describing details of the project, product, service, plan, or business.
  • Technological considerations: Ask what it will take. Do you have it? If not, can you get it? What will it cost?
  • Existing marketplace: Examine the local and broader markets for the product, service, plan, or business.
  • Marketing strategy: Describe it in detail.
  • Required staffing: What are the human capital needs for this project? Draw up an organizational chart.
  • Schedule and timeline: Include significant interim markers for the project's completion date.
  • Project financials.
  • Findings and recommendations: Break down into subsets of technology, marketing, organization, and financials.

Examples of a Feasibility Study

Below are two examples of a feasibility study. The first involves expansion plans for a university. The second is a real-world example conducted by the Washington State Department of Transportation with private contributions from Microsoft Corp.

A University Science Building

Officials at a university were concerned that the science building—built in the 1970s—was outdated. Considering the technological and scientific advances of the last 20 years, they wanted to explore the cost and benefits of upgrading and expanding the building. A feasibility study was conducted.

In the preliminary analysis, school officials explored several options, weighing the benefits and costs of expanding and updating the science building. Some school officials had concerns about the project, including the cost and possible community opposition. The new science building would be much larger, and the community board had earlier rejected similar proposals. The feasibility study would need to address these concerns and any potential legal or zoning issues.

The feasibility study also explored the technological needs of the new science facility, the benefits to the students, and the long-term viability of the college. A modernized science facility would expand the school's scientific research capabilities, improve its curriculum, and attract new students.

Financial projections showed the cost and scope of the project and how the school planned to raise the needed funds, which included issuing a bond to investors and tapping into the school's endowment . The projections also showed how the expanded facility would allow more students to be enrolled in the science programs, increasing revenue from tuition and fees.

The feasibility study demonstrated that the project was viable, paving the way to enacting the modernization and expansion plans of the science building.

Without conducting a feasibility study, the school's administrators would never have known whether the expansion plans were viable.

A High-Speed Rail Project

The Washington State Department of Transportation decided to conduct a feasibility study on a proposal to construct a high-speed rail that would connect Vancouver, British Columbia, Seattle, Washington, and Portland, Oregon. The goal was to create an environmentally responsible transportation system to enhance the competitiveness and future prosperity of the Pacific Northwest.

The preliminary analysis outlined a governance framework for future decision-making. The study involved researching the most effective governance framework by interviewing experts and stakeholders, reviewing governance structures, and learning from existing high-speed rail projects in North America. As a result, governing and coordinating entities were developed to oversee and follow the project if it was approved by the state legislature.

A strategic engagement plan involved an equitable approach with the public, elected officials, federal agencies, business leaders, advocacy groups, and indigenous communities. The engagement plan was designed to be flexible, considering the size and scope of the project and how many cities and towns would be involved. A team of executive committee members was formed to discuss strategies and lessons learned from previous projects, and met with experts to create an outreach framework.

The financial component of the feasibility study outlined the strategy for securing the project's funding, which explored obtaining funds from federal, state, and private investments. The project's cost was estimated to be between $24 billion and $42 billion. The revenue generated from the high-speed rail system was estimated to be between $160 million and $250 million.
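Taking the quoted ranges at face value, and assuming the revenue estimates are annual (the period is not stated here), a rough back-of-the-envelope comparison can be computed as follows; it deliberately ignores operating costs, financing terms, and inflation:

```python
# Back-of-the-envelope check using the ranges quoted above:
# capital cost of $24B-$42B versus estimated revenue of $160M-$250M.
# Assumes the revenue figures are annual; ignores operating costs,
# financing terms, and inflation.

cost_low, cost_high = 24e9, 42e9          # project cost range, USD
revenue_low, revenue_high = 160e6, 250e6  # estimated revenue range, USD

# Best case: lowest cost recovered at the highest revenue rate.
best_case_years = cost_low / revenue_high
# Worst case: highest cost recovered at the lowest revenue rate.
worst_case_years = cost_high / revenue_low

print(f"{best_case_years:.0f} to {worst_case_years:.0f} years of revenue")
```

Even the best case spans many decades of revenue, which is why the report's funding-versus-financing breakdown, described next, matters so much.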

The report divided the sources of money between funding and financing. Funding referred to grants, appropriations from the local or state government, and revenue. Financing referred to bonds issued by the government, loans from financial institutions, and equity investments, which are essentially loans against future revenue that must be paid back with interest.

The sources of the capital needed were expected to vary as the project moved forward. In the early stages, most of the funding would come from the government; as the project developed, funding would come from private contributions and financing measures. Private contributors included Microsoft Corp., which donated more than $570,000 to the project.

The benefits outlined in the feasibility report showed that the region would experience enhanced interconnectivity, allowing for better management of the population and increasing regional economic growth by $355 billion. The new transportation system would provide people with access to better jobs and more affordable housing. The high-speed rail system would also relieve automobile traffic in congested areas.

The timeline for the study began in 2016, when an agreement was reached with British Columbia to work together on a new technology corridor that included high-speed rail transportation. The feasibility report was submitted to the Washington State Legislature in December 2020.

What Is the Main Objective of a Feasibility Study?

A feasibility study is designed to help decision-makers determine whether or not a proposed project or investment is likely to be successful. It identifies both the known costs and the expected benefits.

In business, "successful" means that the financial return exceeds the cost. In a nonprofit, success may be measured in other ways. A project's benefit to the community it serves may be worth the cost.
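For the for-profit case, that criterion reduces to a simple return-on-investment check. A minimal sketch with invented figures:

```python
# Minimal ROI check: "successful" here means the financial return
# exceeds the cost. All figures are hypothetical.

def roi(net_return, total_cost):
    """Return on investment, expressed as a fraction of cost."""
    return (net_return - total_cost) / total_cost

project_return = 1_300_000  # total expected financial return
project_cost = 1_000_000    # total expected cost

value = roi(project_return, project_cost)  # 0.3, i.e. a 30% return
print(f"ROI: {value:.0%}")                 # ROI: 30%
print("feasible" if value > 0 else "not feasible")
```

A nonprofit would replace the numerator with some quantified measure of community benefit, which is usually far harder to estimate than revenue.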

What Are the Steps in a Feasibility Study?

A feasibility study starts with a preliminary analysis. Stakeholders are interviewed, market research is conducted, and a business plan is prepared. All of this information is analyzed to make an initial "go" or "no-go" decision.

If it's a go, the real study can begin. This includes listing the technological considerations, studying the marketplace, describing the marketing strategy, and outlining the necessary human capital, project schedule, and financing requirements.

Who Conducts a Feasibility Study?

A feasibility study may be conducted by a team of the organization's senior managers. If they lack the expertise or time to do the work internally, it may be outsourced to a consultant.

What Are the 4 Types of Feasibility?

The study considers the feasibility of four aspects of a project:

  • Technical: A list of the hardware and software needed, and the skilled labor required to make them work.
  • Financial: An estimate of the cost of the overall project and its expected return.
  • Market: An analysis of the market for the product or service, the industry, competition, consumer demand, sales forecasts, and growth projections.
  • Organizational: An outline of the business structure and the management team that will be needed.

The Bottom Line

Feasibility studies help project managers determine the viability of a project or business venture by identifying the factors that can lead to its success. The study also shows the potential return on investment and any risks to the success of the venture.

A feasibility study contains a detailed analysis of what's needed to complete the proposed project. The report may include a description of the new product or venture, a market analysis, the technology and labor needed, as well as the sources of financing and capital. The report will also include financial projections, the likelihood of success, and ultimately, a go-or-no-go decision.

Washington State Department of Transportation. " Ultra-High-Speed Rail Study ."

Washington State Department of Transportation. " Cascadia Ultra High Speed Ground Transportation Framework for the Future ."

Washington State Department of Transportation. " Ultra-High-Speed Rail Study: Outcomes ."

Washington State Department of Transportation. " Ultra-High-Speed Ground Transportation Business Case Analysis ." Page ii.




Grzesiak E , Bent B , McClain MT, et al. Assessment of the Feasibility of Using Noninvasive Wearable Biometric Monitoring Sensors to Detect Influenza and the Common Cold Before Symptom Onset. JAMA Netw Open. 2021;4(9):e2128534. doi:10.1001/jamanetworkopen.2021.28534


Assessment of the Feasibility of Using Noninvasive Wearable Biometric Monitoring Sensors to Detect Influenza and the Common Cold Before Symptom Onset

  • 1 Biomedical Engineering Department, Duke University, Durham, North Carolina
  • 2 Duke Center for Applied Genomics and Precision Medicine, Duke University Medical Center, Durham, North Carolina
  • 3 Durham Veterans Affairs Medical Center, Durham, North Carolina
  • 4 Department of Medicine, Duke Global Health Institute, Durham, North Carolina
  • 5 Department of Infectious Disease, Imperial College London, London, United Kingdom
  • 6 Department of Pediatrics, University of Virginia School of Medicine, Charlottesville
  • 7 Department of Psychiatry, Duke University School of Medicine, Durham, North Carolina
  • 8 Department of Medicine, Duke University School of Medicine, Durham, North Carolina
  • 9 Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor
  • 10 Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina

Question   Can noninvasive, wrist-worn wearable devices detect acute viral respiratory infection and predict infection severity before symptom onset?

Findings   In a cohort study of 31 participants inoculated with H1N1 influenza and 18 participants inoculated with rhinovirus, infection detection and severity prediction models trained on wearable device data distinguished infection from noninfection with 92% accuracy for H1N1 and 88% for rhinovirus, and distinguished mild from moderate infection 24 hours prior to symptom onset with 90% accuracy for H1N1 and 89% for rhinovirus.

Meaning   This study suggests that the use of wearable devices to identify individuals with presymptomatic acute viral respiratory infection is feasible; because wearable devices are common in the general population, using them for infection screening may help limit the spread of contagion.

Importance   Currently, there are no presymptomatic screening methods to identify individuals infected with a respiratory virus to prevent disease spread and to predict their trajectory for resource allocation.

Objective   To evaluate the feasibility of using noninvasive, wrist-worn wearable biometric monitoring sensors to detect presymptomatic viral infection after exposure and predict infection severity in patients exposed to H1N1 influenza or human rhinovirus.

Design, Setting, and Participants   The cohort H1N1 viral challenge study was conducted during 2018; data were collected from September 11, 2017, to May 4, 2018. The cohort rhinovirus challenge study was conducted during 2015; data were collected from September 14 to 21, 2015. A total of 39 adult participants were recruited for the H1N1 challenge study, and 24 adult participants were recruited for the rhinovirus challenge study. Exclusion criteria for both challenges included chronic respiratory illness and high levels of serum antibodies. Participants in the H1N1 challenge study were isolated in a clinic for a minimum of 8 days after inoculation. The rhinovirus challenge took place on a college campus, and participants were not isolated.

Exposures   Participants in the H1N1 challenge study were inoculated via intranasal drops of diluted influenza A/California/03/09 (H1N1) virus with a mean count of 10⁶ using the median tissue culture infectious dose (TCID50) assay. Participants in the rhinovirus challenge study were inoculated via intranasal drops of diluted human rhinovirus strain type 16 with a count of 100 using the TCID50 assay.

Main Outcomes and Measures   The primary outcome measures included cross-validated performance metrics of random forest models to screen for presymptomatic infection and predict infection severity, including accuracy, precision, sensitivity, specificity, F1 score, and area under the receiver operating characteristic curve (AUC).

Results   A total of 31 participants with H1N1 (24 men [77.4%]; mean [SD] age, 34.7 [12.3] years) and 18 participants with rhinovirus (11 men [61.1%]; mean [SD] age, 21.7 [3.1] years) were included in the analysis after data preprocessing. Separate H1N1 and rhinovirus detection models, using only wearable device data as input, were able to distinguish between infection and noninfection with accuracies of up to 92% for H1N1 (90% precision, 90% sensitivity, 93% specificity, 90% F1 score, and 0.85 [95% CI, 0.70-1.00] AUC) and 88% for rhinovirus (100% precision, 78% sensitivity, 100% specificity, 88% F1 score, and 0.96 [95% CI, 0.85-1.00] AUC). The infection severity prediction model was able to distinguish between mild and moderate infection 24 hours prior to symptom onset with an accuracy of 90% for H1N1 (88% precision, 88% sensitivity, 92% specificity, 88% F1 score, and 0.88 [95% CI, 0.72-1.00] AUC) and 89% for rhinovirus (100% precision, 75% sensitivity, 100% specificity, 86% F1 score, and 0.95 [95% CI, 0.79-1.00] AUC).

Conclusions and Relevance   This cohort study suggests that the use of a noninvasive, wrist-worn wearable device to predict an individual’s response to viral exposure prior to symptoms is feasible. Harnessing this technology would support early interventions to limit presymptomatic spread of viral respiratory infections, which is timely in the era of COVID-19.

Approximately 9% of the world's population is infected with influenza annually, resulting in 3 million to 5 million severe cases and 300 000 to 500 000 deaths per year. 1 Adults contract approximately 4 to 6 common colds per year, and children approximately 6 to 8, with more than half of infections caused by human rhinoviruses (RVs). 2 , 3 Given the highly infectious nature of respiratory viruses and their variable incubation periods, infections are often transmitted unwittingly in a manner that results in community spread, especially because no presymptomatic screening methods currently exist for respiratory viral diseases. 4 , 5 With the increasing emergence of novel viruses, such as SARS-CoV-2, 6 it is critical to quickly identify and isolate contagious carriers of a virus, including presymptomatic and asymptomatic individuals, at the population level to minimize viral spread and associated severe health outcomes.

Wearable biometric monitoring sensors (hereafter referred to as wearables) have been shown to be useful in detecting infections before symptoms occur. 7 - 9 Low-cost and accessible technologies that record physiologic measurements can empower underserved groups with new digital biomarkers. 8 , 10 - 12 Digital biomarkers are digitally collected data that are transformed into indicators of health and disease. 13 , 14 For example, resting heart rate, heart rate variability, accelerometry, electrodermal skin activity, and skin temperature can indicate a person’s infection status 8 , 9 , 15 - 27 or predict if and when a person will become infected after exposure. 7 Therefore, detecting abnormal biosignals using wearables could be the first step in identifying infections before symptom onset. 8

Here, we developed digital biomarker models for early detection of infection and severity prediction after pathogen exposure but before symptoms develop ( Figure 1 ). Our results highlight the opportunity for the identification of early presymptomatic or asymptomatic infection that may support individual treatment decisions and public health interventions to limit the spread of viral infections.

A total of 39 participants (12 women and 27 men; aged 18-55 years; mean [SD] age, 36.2 [11.8] years; 2 [5.1%] Black, 6 [15.4%] Asian, 25 [64.1%] White, 2 [5.1%] ≥2 race categories [1 (2.6%) White and Caribbean; 1 (2.6%) mixed/other category], and 4 [10.3%] did not fall into any of the ethnic groups listed, so they identified as “all other ethnic groups”) were recruited for the H1N1 influenza challenge study. Data were collected from September 11, 2017, to May 4, 2018. The influenza challenge study was reviewed and approved by the institutional review board at Duke University and the London-Fulham Research Ethics Committee. Written informed consent was obtained from all participants. A total of 24 participants (8 women and 16 men; aged 20-34 years; mean [SD] age, 22 [3.1] years; and 1 [4.2%] Black, 6 [25.0%] Asian, and 15 [62.5%] White, including 3 [12.5%] Hispanic or Latinx, 1 [4.2%] White and Black mixed, and 1 [4.2%] unknown) were recruited for the RV challenge study. Data were collected from September 14 to 21, 2015. The RV challenge study was reviewed and approved by the institutional review board at Duke University and the University of Virginia. Written informed consent was obtained from all participants. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) and the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis ( TRIPOD ) reporting guidelines.

Exclusion criteria for the influenza challenge included current pregnancy, breastfeeding, or smoking; history of chronic respiratory, allergy, or other significant illness; recent upper respiratory tract infection; nose abnormalities; or immunocompromised status. Participants were screened for high levels of serum antibodies against the challenge strain by hemagglutination inhibition assay (titers >1:10 excluded). 7 Exclusion criteria for the RV challenge included pregnancy; chronic respiratory illness; high blood pressure; history of tobacco, drug, or alcohol use; and serum antibody titers more than 1:4.

Participants in the H1N1 challenge study wore the E4 wristband (Empatica Inc) from 1 day before until 11 days after the inoculation, which occurred on the morning of day 2, through clinical discharge. The E4 wristband measures heart rate, skin temperature, electrodermal activity, and movement. Participants were inoculated via intranasal drops of the diluted influenza A/California/03/09 (H1N1) virus with a mean count of 10⁶ using the median tissue culture infectious dose (TCID50) assay in 1-mL phosphate-buffered saline and were isolated for at least 8 days after inoculation, until nasal lavage polymerase chain reaction test results were negative. 7 We defined symptoms as either observable events (fever, stuffy nose, runny nose, sneezing, coughing, shortness of breath, hoarseness, diarrhea, and wheezy chest) or unobservable events (muscle soreness, fatigue, headache, ear pain, throat discomfort, chest pain, chills, malaise, and itchy eyes). 28 Viral shedding was quantified by nasal lavage polymerase chain reaction each morning, and symptoms were self-reported twice daily.

Participants in the RV challenge study wore the E4 wristband for 4 days before and 5 days after inoculation, which occurred in the afternoon (1-5 pm) via intranasal drops of diluted human RV strain type 16 with a count of 100 using the TCID50 assay in 1 mL of lactated Ringer solution. Participants underwent daily nasal lavage, and symptoms were reported as previously described. Participants lived on a college campus and were not isolated.

We grouped individuals by infection similarity ( Figure 2 ) using data-driven methods based on infection severity (asymptomatic or noninfected [AON], mild, or moderate signs of infection) and trajectory (early, middle, or late signs of infection). Multivariate functional clustering (Bayesian information criterion loss function) was performed on 3 daily aggregate measurements: observable symptoms, unobservable symptoms, and viral shedding. 29 , 30 Clinical infection groups were determined by previous definitions of symptomatic individuals (modified Jackson symptom score >5 within the first 5 days of inoculation) and viral shedders (>2 days of shedding). 31 - 34 Participants who were positive in one criterion but not the other were excluded from further analysis in the clinical groupings. For both infection groupings, we defined symptom onset as the first day of a 2-day period in which the symptom score was at least 2 points. 32

Mean (SD) and median values of heart rate, skin temperature, and accelerometry were calculated every minute from baseline to 60 hours after inoculation. If several preinoculation days were present, then the baseline was defined as the mean value of each wearable metric at the same time of day. A total of 8 and 3 participants were removed from the H1N1 and RV analyses, respectively, owing to lack of sufficient data caused by nonwear, miswear, or device errors, which were detected following the methods of She et al. 7
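The same time-of-day baseline described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code; the function name and data layout are assumptions:

```python
from collections import defaultdict
from statistics import mean

def baseline_by_clock_time(records):
    """Per-clock-minute baseline from preinoculation data.

    records: iterable of (clock_minute, value) pairs pooled across
    the baseline day(s). The baseline for each clock minute is the
    mean of all values observed at that same minute, mirroring the
    same-time-of-day baseline definition in the text.
    """
    by_minute = defaultdict(list)
    for minute, value in records:
        by_minute[minute].append(value)
    return {m: mean(vals) for m, vals in by_minute.items()}
```

Postinoculation measurements can then be differenced against the baseline entry for the matching clock minute.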

Resting heart rate and temperature were defined by a 5-minute median accelerometer cutoff determined from the baseline day’s data. 35 For each 12-hour interval, several interbeat interval features were calculated using the 5-minute rolling mean with baseline subtraction: mean heart rate variability, median heart rate variability, number of successive N-N intervals that differ by more than 50 milliseconds, percentage of N-N intervals that differ by more than 50 milliseconds, SD of N-N intervals, and root mean square of successive R-R interval differences (eTable 1 in the Supplement ). 35 To account for circadian effects, model features were calculated as the difference between preinoculation and postinoculation summary metrics occurring at the same 1-hour clock time of day (eTable 1 and eFigure 1A in the Supplement ). 36
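The interbeat interval features listed above follow standard heart rate variability definitions. A minimal, stdlib-only sketch (the exact definitions and names here are our assumptions, not the authors' code):

```python
import math
from statistics import mean, median, stdev

def ibi_features(ibi_ms):
    """HRV summaries from interbeat (N-N) intervals in milliseconds:
    mean/median successive difference, NN50, pNN50, SDNN, and RMSSD,
    corresponding to the feature families named in the text."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    nn50 = sum(1 for d in diffs if abs(d) > 50)  # pairs differing by >50 ms
    return {
        "mean_hrv": mean(abs(d) for d in diffs),
        "median_hrv": median(abs(d) for d in diffs),
        "nn50": nn50,
        "pnn50": nn50 / len(diffs),
        "sdnn": stdev(ibi_ms),                   # SD of N-N intervals
        "rmssd": math.sqrt(mean(d * d for d in diffs)),
    }

def delta_features(post, baseline):
    """Circadian adjustment: difference postinoculation summaries
    against baseline summaries from the same clock time of day."""
    return {k: post[k] - baseline[k] for k in post}
```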

Models predicting infection further in time after inoculation included progressively more features (9 features added for each 12-hour block; eFigure 1B in the Supplement ). Performance relative to symptom onset was calculated by differencing the time after inoculation from the median symptom onset of each viral challenge. The resulting feature set consisted of 40 features calculated from 9 delta summary wearable metrics generated from five 12-hour intervals. Forward stepwise selection simultaneously tuned models and performed feature selection to prevent overfitting. 37
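Forward stepwise selection, as used above to tune models and select features simultaneously, greedily adds the feature that most improves a validation score and stops when no addition helps. A generic sketch, with the scoring function standing in for the cross-validated model score:

```python
def forward_stepwise(features, score_fn, max_features=3):
    """Greedy forward selection. score_fn(subset) -> scalar, higher
    is better (e.g., cross-validated accuracy). Stops when no
    remaining feature improves the score. Illustrative sketch."""
    selected, best = [], float("-inf")
    remaining = list(features)
    while remaining and len(selected) < max_features:
        # Score every candidate extension of the current subset.
        top_score, top_f = max((score_fn(selected + [f]), f) for f in remaining)
        if top_score <= best:
            break  # no improvement; stop to avoid overfitting
        selected.append(top_f)
        remaining.remove(top_f)
        best = top_score
    return selected, best
```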

Bootstrapped binary or multiclass random forest classifiers were built using Python Scikit-learn and validated using leave-one-person-out cross-validation (trees = 1000). 12 , 37 , 38 This procedure was repeated for every 12-hour period feature set that was added to a model (eFigure 1B in the Supplement ).
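Leave-one-person-out cross-validation pairs naturally with Scikit-learn's LeaveOneGroupOut splitter, where each participant forms one group. A condensed sketch of the training loop (hyperparameters other than the 1000 trees stated above are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut

def loocv_predictions(X, y, groups, n_trees=1000, seed=0):
    """Hold out each participant's samples in turn, train a random
    forest on everyone else, and collect out-of-fold predicted
    probabilities for the positive (infected) class."""
    X, y, groups = map(np.asarray, (X, y, groups))
    proba = np.zeros(len(y))
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        clf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        proba[test_idx] = clf.predict_proba(X[test_idx])[:, 1]
    return proba
```

The out-of-fold probabilities can then be thresholded for the binary metrics or fed directly into a receiver operating characteristic analysis.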

Evaluation metrics of the models included accuracy, precision, sensitivity, specificity, F1 score, and area under the receiver operating characteristic curve (AUC). 39 The primary metric of model success was accuracy. For multiclass models, the weighted mean value of each metric was recorded. For binary models, receiver operating characteristic curves were derived from the predicted class probabilities of an input sample, and the resulting AUC and 95% CI were reported.
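For the binary models, the listed metrics follow their standard confusion-matrix definitions; a small reference implementation for checking a single fold (not the authors' code):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, sensitivity (recall), specificity, and
    F1 score for binary labels, using standard definitions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "f1": f1,
    }
```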

The data were generated as part of 2 large challenge studies involving nasal lavage inoculation of human volunteers with either influenza (H1N1) or human RV. For the influenza prediction models, 31 participants were included in the analysis after data preprocessing (7 women and 24 men; aged 18-55 years; mean [SD] age, 34.7 [12.3] years; and 5 [16.1%] Asian, 21 [67.7%] White, 1 [3.2%] mixed/other category, and 4 [12.9%] did not fall into any of the ethnic groups listed, so they identified as “all other ethnic groups”). For the RV prediction models, 18 participants were included in the analysis after preprocessing (7 women and 11 men; aged 20-33 years; mean [SD] age, 21.7 [3.1] years; and 2 [11.1%] Asian, 2 [11.1%] Black, and 14 [77.8%] White, including 3 [16.7%] Hispanic or Latinx). The primary demographic difference between the 2 viral challenges was that the H1N1 group contained a wider age range and a higher mean age of participants (eTable 2A and B in the Supplement ).

Functional clustering indicated that there were 3 distinct classes of infection status that, on visual inspection, roughly equated to (1) AON, (2) mild, and (3) moderate ( Figure 2 ). Based on this clustering, we defined the data-driven “infected” group as the combined mild and moderate classes and the “noninfected” group as the AON class. All clinically driven labels of infected vs noninfected were perfectly replicated by the data-driven groupings for the RV challenge but not for the H1N1 challenge. 40

We developed 25 binary, random forest classification models to predict infection vs noninfection using features derived from wearables. Each model covered a different time period after inoculation or used a different definition of infected vs noninfected. For infected participants in the H1N1 challenge, the median symptom onset after inoculation was 48 hours (range, 9-96 hours). At 36 hours after inoculation, models predicting the data-driven groupings from the H1N1 challenge reached an accuracy of 89% (87% precision, 100% sensitivity, 63% specificity, 93% F1 score, and 0.84 [95% CI, 0.60-1.00] AUC). Because 7 participants were either symptomatic nonshedders (n = 6) or AON shedders (n = 1), the clinically driven H1N1 infection groupings had 7 fewer observations than the data-driven groupings. Models predicting the clinically driven groupings for H1N1 reached an accuracy of 79% (72% precision, 80% sensitivity, 79% specificity, 76% F1 score, and 0.68 [95% CI, 0.46-0.89] AUC) within 12 hours after inoculation and an accuracy of 92% (90% precision, 90% sensitivity, 93% specificity, 90% F1 score, and 0.85 [95% CI, 0.70-1.00] AUC) within 24 hours after inoculation. Regardless of whether the data-driven or clinically driven grouping method was used, we could assess whether or not a participant was infected with H1N1 between 24 and 36 hours before symptom onset ( Figure 3 A-C; Figure 4 A; and eFigure 2 and eTable 3 in the Supplement ).

The median symptom onset for RV was 36 hours after inoculation (range, 24-36 hours). The models predicting whether a participant was infected with RV achieved an early accuracy of 78% (78% precision, 78% sensitivity, 78% specificity, 78% F1 score, and 0.77 [95% CI, 0.54-0.99] AUC) at 12 hours after inoculation, corresponding to 24 hours prior to symptom onset. Model performance peaked at symptom onset, 36 hours after inoculation, with an accuracy of 88% (100% precision, 78% sensitivity, 100% specificity, 88% F1 score, and 0.96 [95% CI, 0.85-1.00] AUC) ( Figure 3 A-C; Figure 4 B; and eFigure 2 and eTable 3 in the Supplement ).

When both viral challenges were combined, models predicting the data-driven infection groupings reached an early accuracy of 78% (81% precision, 83% sensitivity, 68% specificity, 82% F1 score, and 0.66 [95% CI, 0.50-0.82] AUC) at 12 hours after inoculation. The models predicting clinically driven infection groupings reached an accuracy of 76% (76% precision, 68% sensitivity, 83% specificity, 72% F1 score, and 0.75 [95% CI, 0.60-0.90] AUC) at 24 hours after inoculation ( Figure 3 A-C; Figure 4 C; and eFigure 2 and eTable 3 in the Supplement ).

Infection severity was defined as (1) AON, (2) mild, or (3) moderate based on the data-driven functional clustering results ( Figure 2 ). We developed 66 binary and multiclass random forest models to predict class membership using features derived from the wearables for different time periods after inoculation. After automated feature selection, all 41 of the single viral challenge models included only 1 to 3 of the 9 to 45 possible features per model (eFigure 4A and B in the Supplement ). Interbeat interval features were retained in every model, and resting heart rate features were present in almost half (47.4% [9 of 19]) of the models (eTable 1 in the Supplement ).

At 12 hours after inoculation, the binary classification model predicting the future development of AON vs moderate H1N1 achieved 83% accuracy (78% precision, 88% sensitivity, 80% specificity, 82% F1 score, and 0.88 [95% CI, 0.71-1.00] AUC). For RV, the model predicting the future development of AON vs moderate infection reached 92% accuracy (80% precision, 100% sensitivity, 89% specificity, 89% F1 score, and 1.00 [95% CI, 1.00-1.00] AUC). For both viruses combined, the model predicting the future development of AON vs moderate infection peaked at 84% accuracy (77% precision, 83% sensitivity, 84% specificity, 80% F1 score, and 0.78 [95% CI, 0.61-0.94] AUC) at 12 hours after inoculation ( Figure 4 A-C; Figure 5 A-C; and eFigure 3, eTable 4, and eTable 5 in the Supplement ).

Of the binary classification models for both viral challenge studies, we found that the AON vs moderate models achieved the highest accuracy and AUC toward predicting infection severity prior to symptom onset. This finding was expected given that these were the 2 most divergent classes of infection severity. At 12 hours after inoculation, the model predicting mild vs moderate H1N1 distinguished between the 2 symptomatic groups with 81% accuracy (75% precision, 75% sensitivity, 85% specificity, 75% F1 score, and 0.86 [95% CI, 0.69-1.00] AUC). By 24 hours after inoculation, this model achieved 90% accuracy (88% precision, 88% sensitivity, 92% specificity, 88% F1 score, and 0.88 [95% CI, 0.72-1.00] AUC). After excluding H1N1 challenge participants in the mild and moderate classes who did not have an infection per the clinically driven definition, the model predicting mild vs moderate H1N1 achieved 100% accuracy (100% precision, 100% sensitivity, 100% specificity, 100% F1 score, and 1.00 [95% CI, 1.00-1.00] AUC). By 24 hours after inoculation, the infection severity prediction model was able to distinguish between mild and moderate infection with an accuracy of 89% for RV (100% precision, 75% sensitivity, 100% specificity, 86% F1 score, and 0.95 [95% CI, 0.79-1.00] AUC). The model predicting mild vs moderate illness for both viruses combined distinguished between the 2 symptomatic groups with an accuracy of 86% (90% precision, 75% sensitivity, 94% specificity, 82% F1 score, and 0.91 [95% CI, 0.80-1.00] AUC). After excluding H1N1 challenge participants in the mild and moderate classes who did not have an infection per the clinically driven definition, the model predicting mild vs moderate illness for both viruses combined reached an accuracy of 94% (100% precision, 89% sensitivity, 100% specificity, 94% F1 score, and 0.94 [95% CI, 0.82-1.00] AUC) ( Figure 4 B; Figure 5 A and C; eFigure 3, eTable 4, and eTable 5 in the Supplement ). 
Receiver operating characteristic curves for both viral challenge studies ( Figure 5 A) demonstrated that the model predicting development of AON vs moderate illness and the model predicting development of mild vs moderate illness yielded higher discriminative ability than the model predicting AON vs mild illness.

The multiclass models were built to predict both infection status and infection severity per the data-driven definitions. We found that the highest performing multiclass models (predicting development of AON vs mild vs moderate illness) reached 77% accuracy for H1N1 (24 hours after inoculation; 76% precision, 77% sensitivity, 88% specificity, and 76% F1 score) and 82% accuracy for RV (36 hours after inoculation; 85% precision, 82% sensitivity, 88% specificity, and 82% F1 score) ( Figure 4 B; Figure 5 B and C; eFigure 2 and eTable 4 in the Supplement ).

The aim of this work was to evaluate a novel and scalable approach to identify whether a person will develop an infection after virus exposure and to predict eventual disease severity using noninvasive, wrist-worn wearables. The approach was tested using 2 viral challenge studies with influenza H1N1, human RV, or both viruses combined. This study shows that it is feasible to use wearable data to predict infection status and infection severity 12 to 36 hours before symptom onset, with most of our models reaching greater than 80% accuracy. Presymptomatic detection of respiratory viral infection and infection severity prediction may enable better medical resource allocation, early quarantine, and more effective prophylactic measures. Our results show that an accuracy plateau occurred in the 12- to 24-hour period after inoculation for 24 of 25 infection detection models (96.0%) and for 64 of 66 infection severity models (97.0%). This finding indicates that the physiologic changes most critical for detecting a response to viral inoculation and for predicting pending illness severity occur within 12 to 24 hours after exposure.

Two factors associated with model accuracy are (1) knowledge of the exact time and dosage of inoculation and (2) the high-fidelity measurements of the research-grade wearable that enable intricate feature engineering, neither of which is possible in existing observational studies using consumer-grade devices. Because the outcome labeling is robust and accurate, there is a significant reduction in noise relative to what would be present in an observational study. 41 The participants in both studies experienced clinically mild disease, so the physiologic changes in patients with severe disease outcomes would likely be even more extreme and therefore easier to detect. The timing of the models’ detection and severity prediction is particularly relevant to current work aimed at early detection of COVID-19 from smartwatches, as presymptomatic and asymptomatic spread are significant contributors to the SARS-CoV-2 pandemic. 9 , 20 - 24 , 26 , 42 - 45 The most important features for predicting infection severity were resting heart rate and mean heart rate variability. Thus, our model could be extensible to commercial wearables, which are used by 21% of US adults, for population-level detection of respiratory viral infections. 46 , 47

Several factors may be associated with the higher accuracy of the RV severity models compared with the H1N1 severity models, including the longer RV baseline period (4 days vs 1 day) and the morning vs afternoon inoculation time that may include circadian effects. This possibility was addressed in part by calculating the differences between baseline and postinoculation only from measurements taken at the same times of day. The same influenza challenge data were recently used to predict viral shedding timing, with an AUC of 0.758 using heart rate during sleep as a model feature. 7 Nighttime and early morning biometric measurements are potentially more useful than daytime measurements owing to their increased consistency, which should be explored further in future studies. 8 , 36

This study has some limitations. It focuses on 2 common respiratory viruses in a fairly small population. Expanding the data set to include larger and more diverse populations and other types of viruses will be necessary to demonstrate the broad applicability of these findings. Inclusion of negative control groups (ie, participants with no pathogen exposure and those with conditions that masquerade as infections [eg, asthma or allergies]) would further improve the work.

This study suggests that routine physiologic monitoring using common wearable devices may identify impending viral infection before symptoms develop. The ability to identify individuals during this critical early phase, when many may be spreading the virus without knowing it, and when therapies (if available) and public health interventions are most likely to be efficacious, may have a wide-ranging effect. In the midst of the global SARS-CoV-2 pandemic, the need for novel approaches like this has never been more apparent, and future work to validate these findings in individuals with other respiratory infections, such as COVID-19, may be critical given the highly variable and potentially severe or even fatal presentation of SARS-CoV-2 infection. The ability to detect infection early, predict how an infection will change over time, and determine when health changes occur that require clinical care may improve resource allocation and save lives.

Accepted for Publication: July 13, 2021.

Published: September 29, 2021. doi:10.1001/jamanetworkopen.2021.28534

Open Access: This is an open access article distributed under the terms of the CC-BY License . © 2021 Grzesiak E et al. JAMA Network Open .

Corresponding Author: Jessilyn Dunn, PhD, Biomedical Engineering Department, Duke University, 2424 Erwin Rd, Durham, NC 27705 ( [email protected] ).

Author Contributions: Ms Grzesiak and Dr Dunn had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: McClain, Woods, Tsalik, Nicholson, Burke, Chiu, Doraiswamy, Ginsburg, Dunn.

Acquisition, analysis, or interpretation of data: Grzesiak, Bent, McClain, Woods, Tsalik, Veldman, Burke, Gardener, Bergstrom, Turner, Chiu, Doraiswamy, Hero, Henao, Dunn.

Drafting of the manuscript: Grzesiak, Bent, Dunn.

Critical revision of the manuscript for important intellectual content: Grzesiak, McClain, Woods, Tsalik, Nicholson, Veldman, Burke, Gardener, Bergstrom, Turner, Chiu, Doraiswamy, Hero, Henao, Ginsburg, Dunn.

Statistical analysis: Grzesiak, Bent, Hero, Dunn.

Obtained funding: McClain, Woods, Chiu.

Administrative, technical, or material support: Veldman, Burke, Gardener, Turner, Henao, Dunn.

Supervision: McClain, Woods, Burke, Turner, Chiu, Ginsburg, Dunn.

Conflict of Interest Disclosures: Dr McClain reported receiving grants from Defense Advanced Research Projects Agency (DARPA) during the conduct of the study; in addition, Dr McClain had a patent for molecular signatures of acute respiratory infections pending. Dr Tsalik reported receiving personal fees from and being the cofounder of Predigen Inc outside the submitted work. Dr Burke reported receiving grants from DARPA during the conduct of the study and serving as a consultant for Predigen Inc outside the submitted work. Dr Turner reported receiving grants from Duke University during the conduct of the study. Dr Chiu reported receiving grants from DARPA during the conduct of the study and grants from Wellcome Trust, Medical Research Council, and the European Commission outside the submitted work. Dr Doraiswamy reported receiving grants from DARPA and nonfinancial support from Lumos Labs during the conduct of the study and receiving grants from Salix, Avanir, Avid, the National Institutes of Health, Cure Alzheimer’s Fund, Karen L. Wrenn Trust, Steve Aoki Foundation, the Office of Naval Research, and the Department of Defense and personal fees from Clearview, Verily, Vitakey, Transposon, Neuroglee, Brain Forum, and Apollo outside the submitted work; in addition, Dr Doraiswamy had a patent for infection detection using wearables pending, a patent for diagnosis of Alzheimer disease pending, a patent for treatment of Alzheimer disease pending, and a patent for infection detection through cognitive variability pending. Dr Ginsburg reported being the founder of Predigen Inc outside the submitted work. No other disclosures were reported.

Funding/Support: Dr Chiu is supported by the Biomedical Research Centre award to the Imperial College Healthcare National Health Service (NHS) Trust. Infrastructure support was provided by the National Institute for Health Research (NIHR) Imperial Biomedical Research Centre and the NIHR Imperial Clinical Research Facility. The Influenza A/California/04/09 challenge virus was donated by ITS Innovation, London, UK. The influenza viral challenge was supported by grant N66001-17-2-4014 from Prometheus Program of the DARPA and the rhinovirus study was supported by grant D17AP00005 from the Biochronicity Program of DARPA.

Role of the Funder/Sponsor: The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care.


Published on 25.3.2024 in Vol 26 (2024)

Where Do Oncology Patients Seek and Share Health Information? Survey Study

Research Letter

  • Eric Freeman 1, BA;
  • Darshilmukesh Patel 2, BA;
  • Folasade Odeniyi 1, MPH, MBA;
  • Mary Pasquinelli 2, DNP;
  • Shikha Jain 2, MD

1 College of Medicine, University of Illinois at Chicago, Chicago, IL, United States

2 Department of Medicine, University of Illinois at Chicago, Chicago, IL, United States

Corresponding Author:

Eric Freeman, BA

College of Medicine

University of Illinois at Chicago

1853 West Polk Street

Chicago, IL, 60612

United States

Phone: 1 847 791 0189

Email: [email protected]

Introduction

Social media in health care has many benefits, including the dissemination of health information [ 1 ] and health promotion [ 2 ]. The COVID-19 pandemic has highlighted the benefits of the internet and social media as tools through which individuals can exchange health information. While little is known about oncology patients’ preferences for social media platforms, particularly among minority populations and those in low socioeconomic status communities, some studies have shown its use is linked to the alleviation of patient stress and loneliness, increased feelings of self-efficacy and control of care, and efficient delivery of health information from health practitioners [ 3 ]. This study aimed to assess, by surveying patients in a cancer clinic, where patients from marginalized communities receive the majority of their health care information. This study was conducted at the University of Illinois Chicago, which is a public hospital that mainly serves patients from underresourced communities.

Between March 2021 and June 2021, we administered a 16-item survey (Multimedia Appendix 1) adapted from the National Cancer Institute’s Health Information National Trends Survey (HINTS) [ 4 ] to patients scheduled for an oncology visit at the Outpatient Care Center at UI Health. The survey was administered to 145 patients via email and 161 patients in person. Respondents were asked to identify sources used to self-educate about their diagnosis, preferred information source, social media use and preferences, and demographics. We used chi-square tests to assess associations between categorical variables.
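The chi-square test of independence used here can be computed by hand for a 2x2 table. A minimal sketch with hypothetical counts (not the study's data), using the fact that a chi-square statistic with 1 degree of freedom is the square of a standard normal:

```python
import math

def chi2_2x2(a, b, c, d):
    """Chi-square test of independence for a 2x2 table
    [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 degree of freedom, chi-square is the square of a standard
    # normal, so the upper-tail p-value is erfc(sqrt(chi2 / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts (not the survey's data): social media use by sex
chi2, p = chi2_2x2(80, 60, 30, 50)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```

In practice, `scipy.stats.chi2_contingency` handles tables of any size and larger degrees of freedom; the closed-form 2x2 version above is shown only to make the computation transparent.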

Ethics Approval

This study was approved by the institutional review board at the University of Illinois Chicago and was found to meet the criteria for exemption as defined in the US Department of Health and Human Services Regulations for the Protection of Human Subjects (45 CFR 46.104(d)).

The demographics of our sample can be found in Table 1 . Respondents routinely accessed several forms of health information sources. The top three included their doctor or health care provider (n=274, 89.3%), internet search engines (n=218, 71.2%), and brochures and pamphlets (n=125, 40.7%). However, when directed to choose just one source, 207 (67.4%) chose their doctor or health care provider, while 67 (21.8%) chose internet search engines. The majority of respondents used a smartphone with the internet (n=237, 77.2%), a home desktop or laptop with the internet (n=192, 62.5%), or a tablet with the internet (n=188, 61.2%). However, approximately one-quarter of respondents indicated that they used a mobile phone without internet or a data plan.

We found that the majority of respondents had accessed social media in the past year (n=198, 64.7%). Social media use was associated with age (χ²₃=18.7; P<.001) and sex (Fisher exact P=.001). While respondents primarily used Facebook (n=69, 22.5%), YouTube (n=66, 21.5%), and Instagram (n=25, 8.1%) to receive health information, few shared health information with a medical professional (n=17, 5.5%), and those who did primarily used Facebook (n=8, 48.7%).
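The Fisher exact test reported for the sex comparison can be computed directly for a 2x2 table by enumerating hypergeometric probabilities. A self-contained sketch with made-up counts (the survey's underlying table is not reported here):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]:
    sums the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def prob(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = prob(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p for x in range(lo, hi + 1)
               if (p := prob(x)) <= p_obs * (1 + 1e-9))

# Hypothetical counts (not the survey's data)
print(fisher_exact_2x2(12, 3, 5, 10))
```

Fisher's exact test is preferred over chi-square when some expected cell counts are small, which is likely why it was used for the sex comparison here.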

Principal Findings

Understanding how patients exchange health information is important to ensure access to accurate information and promote engagement with the health care team. We found that a majority of our patients use social media to find health-related information. However, there continues to be an internet access disparity that can limit patients’ ability to improve their health literacy. As social media engagement is linked to positive patient outcomes, using social media interventions can help us improve oncology patients’ illness experience. While both oncology providers and patients are increasingly using social media as a learning and sharing tool [ 5 ], the exact information-seeking behavior of patients with cancer has yet to be fully examined, especially in disadvantaged populations. In the current climate of rampant online medical misinformation, health care workers should find innovative ways to disseminate evidence-based patient-facing information using the platforms most accessed by oncology patients. Our study highlights the need to further explore communication preferences to help develop tailored communication strategies to support underserved patients and their families.

Limitations

Our study has several limitations. It was a single-clinic, single-institution study with a relatively small sample size. Additionally, our patient population was older, which could have influenced preferred social media platforms.

Data Availability

The data sets generated or analyzed during this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

None declared.

Multimedia Appendix 1: Social media survey.

  • Moorhead SA, Hazlett DE, Harrison L, Carroll JK, Irwin A, Hoving C. A new dimension of health care: systematic review of the uses, benefits, and limitations of social media for health communication. J Med Internet Res. 2013;15(4):e85.
  • Khatri C, Chapman SJ, Glasbey J, Kelly M, Nepogodiev D, Bhangu A, et al; STARSurg Committee. Social media and internet driven study recruitment: evaluating a new model for promoting collaborator engagement and participation. PLoS One. 2015;10(3):e0118899.
  • Leist AK. Social media use of older adults: a mini-review. Gerontology. 2013;59(4):378-384.
  • National Cancer Institute. Health Information National Trends Survey. 2018. URL: https://hints.cancer.gov/ [accessed 2023-09-12]
  • Watson J. Social media use in cancer care. Semin Oncol Nurs. 2018;34(2):126-131.


Edited by A Mavragani; submitted 21.03.22; peer-reviewed by S El kefi, S Hargreavess, K Na; comments to author 17.11.22; revised version received 16.06.23; accepted 04.07.23; published 25.03.24.

©Eric Freeman, Darshilmukesh Patel, Folasade Odeniyi, Mary Pasquinelli, Shikha Jain. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 25.03.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


Doing More, but Learning Less: The Risks of AI in Research


Artificial intelligence (AI) is widely heralded for its potential to enhance productivity in scientific research. But with that promise come risks that could narrow scientists’ ability to better understand the world, according to a new paper co-authored by a Yale anthropologist.

Some future AI approaches, the authors argue, could constrict the questions researchers ask, the experiments they perform, and the perspectives that come to bear on scientific data and theories.

All told, these factors could leave people vulnerable to “illusions of understanding” in which they believe they comprehend the world better than they do.

The paper was published March 7 in Nature.

“There is a risk that scientists will use AI to produce more while understanding less,” said co-author Lisa Messeri, an anthropologist in Yale’s Faculty of Arts and Sciences. “We’re not arguing that scientists shouldn’t use AI tools, but we’re advocating for a conversation about how scientists will use them and suggesting that we shouldn’t automatically assume that all uses of the technology, or the ubiquitous use of it, will benefit science.”

The paper, co-authored by Princeton cognitive scientist M. J. Crockett, sets a framework for discussing the risks involved in using AI tools throughout the scientific research process, from study design through peer review.

“We hope this paper offers a vocabulary for talking about AI’s potential epistemic risks,” Messeri said.

Added Crockett: “To understand these risks, scientists can benefit from work in the humanities and qualitative social sciences.”

Messeri and Crockett classified proposed visions of AI spanning the scientific process that are currently creating buzz among researchers into four archetypes:

  • In study design, they argue, “AI as Oracle” tools are imagined as being able to objectively and efficiently search, evaluate, and summarize massive scientific literatures, helping researchers to formulate questions in their project’s design stage.
  • In data collection, “AI as Surrogate” applications, it is hoped, allow scientists to generate accurate stand-in data points, including as a replacement for human study participants, when data is otherwise too difficult or expensive to obtain.
  • In data analysis, “AI as Quant” tools seek to surpass the human intellect’s ability to analyze vast and complex datasets.
  • And “AI as Arbiter” applications aim to objectively evaluate scientific studies for merit and replicability, thereby replacing humans in the peer-review process.   

The authors warn against treating AI applications from these four archetypes as trusted partners, rather than simply tools, in the production of scientific knowledge. Doing so, they say, could make scientists susceptible to illusions of understanding, which can crimp their perspectives and convince them that they know more than they do.

The efficiencies and insights that AI tools promise can weaken the production of scientific knowledge by creating “monocultures of knowing,” in which researchers prioritize the questions and methods best suited to AI over other modes of inquiry, Messeri and Crockett state. A scholarly environment of that kind leaves researchers vulnerable to what they call “illusions of exploratory breadth,” where scientists wrongly believe that they are exploring all testable hypotheses, when they are only examining the narrower range of questions that can be tested through AI.

For example, “Surrogate” AI tools that seem to accurately mimic human survey responses could make experiments that require measurements of physical behavior or face-to-face interactions increasingly unpopular because they are slower and more expensive to conduct, Crockett said.

The authors also describe the possibility that AI tools become viewed as more objective and reliable than human scientists, creating a “monoculture of knowers” in which AI systems are treated as a singular, authoritative, and objective knower in place of a diverse scientific community of scientists with varied backgrounds, training, and expertise. A monoculture, they say, invites “illusions of objectivity” where scientists falsely believe that AI tools have no perspective or represent all perspectives when, in truth, they represent the standpoints of the computer scientists who developed and trained them.

“There is a belief around science that the objective observer is the ideal creator of knowledge about the world,” Messeri said. “But this is a myth. There has never been an objective ‘knower,’ there can never be one, and continuing to pursue this myth only weakens science.”

There is substantial evidence that human diversity makes science more robust and creative, the authors add.

“Acknowledging that science is a social practice that benefits from including diverse standpoints will help us realize its full potential,” Crockett said. “Replacing diverse standpoints with AI tools will set back the clock on the progress we’ve made toward including more perspectives in scientific work.”

It is important to remember AI’s social implications, which extend far beyond the laboratories where it is being used in research, Messeri said.

“ We train scientists to think about technical aspects of new technology,” she said. “We don’t train them nearly as well to consider the social aspects, which is vital to future work in this domain.”

Science & Technology

Social Sciences

Media Contact

Bess Connolly: [email protected]


More Studies by Columbia Cancer Researchers Are Retracted

The studies, pulled because of copied data, illustrate the sluggishness of scientific publishers to address serious errors, experts said.


By Benjamin Mueller

Scientists in a prominent cancer lab at Columbia University have now had four studies retracted and a stern note added to a fifth accusing it of “severe abuse of the scientific publishing system,” the latest fallout from research misconduct allegations recently leveled against several leading cancer scientists.

A scientific sleuth in Britain last year uncovered discrepancies in data published by the Columbia lab, including the reuse of photos and other images across different papers. The New York Times reported last month that a medical journal in 2022 had quietly taken down a stomach cancer study by the researchers after an internal inquiry by the journal found ethics violations.

Despite that study’s removal, the researchers — Dr. Sam Yoon, chief of a cancer surgery division at Columbia University’s medical center, and Changhwan Yoon, a more junior biologist there — continued publishing studies with suspicious data. Since 2008, the two scientists have collaborated with other researchers on 26 articles that the sleuth, Sholto David, publicly flagged for misrepresenting experiments’ results.

One of those articles was retracted last month after The Times asked publishers about the allegations. In recent weeks, medical journals have retracted three additional studies, which described new strategies for treating cancers of the stomach, head and neck. Other labs had cited the articles in roughly 90 papers.

A major scientific publisher also appended a blunt note to the article that it had originally taken down without explanation in 2022. “This reuse (and in part, misrepresentation) of data without appropriate attribution represents a severe abuse of the scientific publishing system,” it said.

Still, those measures addressed only a small fraction of the lab’s suspect papers. Experts said the episode illustrated not only the extent of unreliable research by top labs, but also the tendency of scientific publishers to respond slowly, if at all, to significant problems once they are detected. As a result, other labs keep relying on questionable work as they pour federal research money into studies, allowing errors to accumulate in the scientific record.

“For every one paper that is retracted, there are probably 10 that should be,” said Dr. Ivan Oransky, co-founder of Retraction Watch, which keeps a database of 47,000-plus retracted studies. “Journals are not particularly interested in correcting the record.”

Columbia’s medical center declined to comment on allegations facing Dr. Yoon’s lab. It said the two scientists remained at Columbia and the hospital “is fully committed to upholding the highest standards of ethics and to rigorously maintaining the integrity of our research.”

The lab’s web page was recently taken offline. Columbia declined to say why. Neither Dr. Yoon nor Changhwan Yoon could be reached for comment. (They are not related.)

Memorial Sloan Kettering Cancer Center, where the scientists worked when much of the research was done, is investigating their work.

The Columbia scientists’ retractions come amid growing attention to the suspicious data that undergirds some medical research. Since late February, medical journals have retracted seven papers by scientists at Harvard’s Dana-Farber Cancer Institute. That followed investigations into data problems publicized by Dr. David, an independent molecular biologist who looks for irregularities in published images of cells, tumors and mice, sometimes with help from A.I. software.
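Automated screening of the kind described often rests on perceptual hashing: near-duplicate images yield near-identical compact fingerprints. The toy sketch below (hypothetical 2x2 "images," not any real tool's algorithm) shows why plain hashing catches noisy duplicates but not mirrored copies, which is why screening tools also hash flipped and rotated variants:

```python
def average_hash(pixels):
    """Average-hash a grayscale image given as a 2D list of
    intensities: each bit records whether a pixel exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits; near-duplicates score near zero."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Toy 2x2 "images" (hypothetical): a duplicate with slight pixel noise
# hashes identically, while a mirrored copy does not.
img = [[10, 200], [30, 220]]
noisy_copy = [[12, 198], [29, 223]]
mirrored = [[200, 10], [220, 30]]
print(hamming(average_hash(img), average_hash(noisy_copy)))  # 0
print(hamming(average_hash(img), average_hash(mirrored)))    # 4
```

Real image-forensics workflows work on full-resolution figures, downsample before hashing, and combine several hash families, but the duplicate-detection principle is the same.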

The spate of misconduct allegations has drawn attention to the pressures on academic scientists — even those, like Dr. Yoon, who also work as doctors — to produce heaps of research.

Strong images of experiments’ results are often needed for those studies. Publishing them helps scientists win prestigious academic appointments and attract federal research grants that can pay dividends for themselves and their universities.

Dr. Yoon, a robotic surgery specialist noted for his treatment of stomach cancers, has helped bring in nearly $5 million in federal research money over his career.

The latest retractions from his lab included articles from 2020 and 2021 that Dr. David said contained glaring irregularities. Their results appeared to include identical images of tumor-stricken mice, despite those mice supposedly having been subjected to different experiments involving separate treatments and types of cancer cells.

The medical journal Cell Death & Disease retracted two of the latest studies, and Oncogene retracted the third. The journals found that the studies had also reused other images, like identical pictures of constellations of cancer cells.

The studies Dr. David flagged as containing image problems were largely overseen by the more senior Dr. Yoon. Changhwan Yoon, an associate research scientist who has worked alongside Dr. Yoon for a decade, was often a first author, which generally designates the scientist who ran the bulk of the experiments.

Kun Huang, a scientist in China who oversaw one of the recently retracted studies, a 2020 paper that did not include the more senior Dr. Yoon, attributed that study’s problematic sections to Changhwan Yoon. Dr. Huang, who made those comments this month on PubPeer, a website where scientists post about studies, did not respond to an email seeking comment.

But the more senior Dr. Yoon has long been made aware of problems in research he published alongside Changhwan Yoon: The two scientists were notified of the removal in January 2022 of their stomach cancer study that was found to have violated ethics guidelines.

Research misconduct is often pinned on the more junior researchers who conduct experiments. Other scientists, though, assign greater responsibility to the senior researchers who run labs and oversee studies, even as they juggle jobs as doctors or administrators.

“The research world’s coming to realize that with great power comes great responsibility and, in fact, you are responsible not just for what one of your direct reports in the lab has done, but for the environment you create,” Dr. Oransky said.

In their latest public retraction notices, medical journals said that they had lost faith in the results and conclusions. Imaging experts said some irregularities identified by Dr. David bore signs of deliberate manipulation, like flipped or rotated images, while others could have been sloppy copy-and-paste errors.

The little-noticed removal by a journal of the stomach cancer study in January 2022 highlighted some scientific publishers’ policy of not disclosing the reasons for withdrawing papers as long as they have not yet formally appeared in print. That study had appeared only online.

Roland Herzog, the editor of the journal Molecular Therapy, said that editors had drafted an explanation that they intended to publish at the time of the article’s removal. But Elsevier, the journal’s parent publisher, advised them that such a note was unnecessary, he said.

Only after the Times article last month did Elsevier agree to explain the article’s removal publicly with the stern note. In an editorial this week, the Molecular Therapy editors said that in the future, they would explain the removal of any articles that had been published only online.

But Elsevier said in a statement that it did not consider online articles “to be the final published articles of record.” As a result, company policy continues to advise that such articles be removed without an explanation when they are found to contain problems. The company said it allowed editors to provide additional information where needed.

Elsevier, which publishes nearly 3,000 journals and generates billions of dollars in annual revenue, has long been criticized for its opaque removals of online articles.

Articles by the Columbia scientists with data discrepancies that remain unaddressed were largely distributed by three major publishers: Elsevier, Springer Nature and the American Association for Cancer Research. Dr. David alerted many journals to the data discrepancies in October.

Each publisher said it was investigating the concerns. Springer Nature said investigations take time because they can involve consulting experts, waiting for author responses and analyzing raw data.

Dr. David has also raised concerns about studies published independently by scientists who collaborated with the Columbia researchers on some of their recently retracted papers. For example, Sandra Ryeom, an associate professor of surgical sciences at Columbia, published an article in 2003 while at Harvard that Dr. David said contained a duplicated image. As of 2021, she was married to the more senior Dr. Yoon, according to a mortgage document from that year.

A medical journal appended a formal notice to the article last week saying “appropriate editorial action will be taken” once data concerns had been resolved. Dr. Ryeom said in a statement that she was working with the paper’s senior author on “correcting the error.”

Columbia has sought to reinforce the importance of sound research practices. Hours after the Times article appeared last month, Dr. Michael Shelanski, the medical school’s senior vice dean for research, sent an email to faculty members titled “Research Fraud Accusations — How to Protect Yourself.” It warned that such allegations, whatever their merits, could take a toll on the university.

“In the months that it can take to investigate an allegation,” Dr. Shelanski wrote, “funding can be suspended, and donors can feel that their trust has been betrayed.”

Benjamin Mueller reports on health and medicine. He was previously a U.K. correspondent in London and a police reporter in New York.


Defining Feasibility and Pilot Studies in Preparation for Randomised Controlled Trials: Development of a Conceptual Framework

Sandra M. Eldridge

1 Centre for Primary Care and Public Health, Queen Mary University of London, London, United Kingdom

Gillian A. Lancaster

2 Department of Mathematics and Statistics, Lancaster University, Lancaster, Lancashire, United Kingdom

Michael J. Campbell

3 School of Health and Related Research, University of Sheffield, Sheffield, South Yorkshire, United Kingdom

Lehana Thabane

4 Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada

Sally Hopewell

5 Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, Oxfordshire, United Kingdom

Claire L. Coleman

Christine M. Bond

6 Centre of Academic Primary Care, University of Aberdeen, Aberdeen, Scotland, United Kingdom

Conceived and designed the experiments: SE GL MC LT SH CB. Performed the experiments: SE GL MC LT SH CB CC. Analyzed the data: SE GL MC LT SH CB CC. Contributed reagents/materials/analysis tools: SE GL MC LT SH CB. Wrote the paper: SE GL MC LT SH CB CC.

Associated Data

Due to a requirement by the ethics committee that the authors specified when the data will be destroyed, the authors are not able to give unlimited access to the Delphi study quantitative data. These data are available from Professor Sandra Eldridge. Data will be available upon request to all interested researchers. Qualitative data from the Delphi study are not available because the authors do not have consent from participants for wider distribution of this more sensitive data.

We describe a framework for defining pilot and feasibility studies focusing on studies conducted in preparation for a randomised controlled trial. To develop the framework, we undertook a Delphi survey; ran an open meeting at a trial methodology conference; conducted a review of definitions outside the health research context; consulted experts at an international consensus meeting; and reviewed 27 empirical pilot or feasibility studies. We initially adopted mutually exclusive definitions of pilot and feasibility studies. However, some Delphi survey respondents and the majority of open meeting attendees disagreed with the idea of mutually exclusive definitions. Their viewpoint was supported by definitions outside the health research context, the use of the terms ‘pilot’ and ‘feasibility’ in the literature, and participants at the international consensus meeting. In our framework, pilot studies are a subset of feasibility studies, rather than the two being mutually exclusive. A feasibility study asks whether something can be done, should we proceed with it, and if so, how. A pilot study asks the same questions but also has a specific design feature: in a pilot study a future study, or part of a future study, is conducted on a smaller scale. We suggest that to facilitate their identification, these studies should be clearly identified using the terms ‘feasibility’ or ‘pilot’ as appropriate. This should include feasibility studies that are largely qualitative; we found these difficult to identify in electronic searches because researchers rarely used the term ‘feasibility’ in the title or abstract of such studies. Investigators should also report appropriate objectives and methods related to feasibility; and give clear confirmation that their study is in preparation for a future randomised controlled trial designed to assess the effect of an intervention.

Introduction

There is a large and growing number of studies in the literature that authors describe as feasibility or pilot studies. In this paper we focus on feasibility and pilot studies conducted in preparation for a future definitive randomised controlled trial (RCT) that aims to assess the effect of an intervention. We are primarily concerned with stand-alone studies that are completed before the start of such a definitive RCT, and do not specifically cover internal pilot studies which are designed as the early stage of a definitive RCT; work on the conduct of internal pilot studies is currently being carried out by the UK MRC Network of Hubs for Trials Methodology Research. One motivating factor for the work reported in this paper was the inconsistent use of terms. For example, in the context of RCTs ‘pilot study’ is sometimes used to refer to a study addressing feasibility in preparation for a larger RCT, but at other times it is used to refer to a small scale, often opportunistic, RCT which assesses efficacy or effectiveness.

A second, related, motivating factor was the lack of agreement in the research community about the use of the terms ‘pilot’ and ‘feasibility’ in relation to studies conducted in preparation for a future definitive RCT. In a seminal paper in 2004 reviewing the literature in relation to pilot and feasibility studies conducted in preparation for an RCT [ 1 ], Lancaster et al reported that they could find no formal guidance as to what constituted a pilot study. In the updated UK Medical Research Council (MRC) guidance on designing and evaluating complex interventions published four years later, feasibility and pilot studies are explicitly recommended, particularly in relation to identifying problems that might occur in an ensuing RCT of a complex intervention [ 2 ]. However, while the guidance suggests possible aims of such studies, for example, testing procedures for their acceptability, estimating the likely rates of recruitment and retention of subjects, and the calculation of appropriate sample sizes, no explicit definitions of a ‘pilot study’ or ‘feasibility study’ are provided. In 2010, Thabane and colleagues presented a number of definitions of pilot studies taken from various health related websites [ 3 ]. While these definitions vary, most have in common the idea of conducting a study in advance of a larger, more comprehensive, investigation. Thabane et al also considered the relationship between pilot and feasibility, suggesting that feasibility should be the main emphasis of a pilot study and that ‘a pilot study is synonymous with a feasibility study intended to guide the planning of a large scale investigation’. 
However, at about the same time, the UK National Institute for Health Research (NIHR) developed definitions of pilot and feasibility studies that are mutually exclusive, suggesting that feasibility studies occurred slightly earlier in the research process and that pilot studies are ‘a version of the main study that is run in miniature to test whether the components of the main study can all work together’. Arain et al felt that the NIHR definitions were helpful, and showed that studies identified using the keyword ‘feasibility’ had different characteristics from those identified as ‘pilot’ studies [4]. The NIHR wording for pilot studies has been changed more recently to ‘a smaller version of the main study used to test whether the components of the main study can all work together’ (Fig 1). Nevertheless, it still contrasts with the MRC framework guidance that explicitly states: ‘A pilot study need not be a “scale model” of the planned main-stage evaluation, but should address the main uncertainties that have been identified in the development work’ [2]. These various, sometimes conflicting, approaches to the interpretation of the terms ‘pilot’ and ‘feasibility’ exemplify differences in current usage and opinion in the research community.

[Fig 1 image: pone.0150205.g001.jpg]

While lack of agreement about definitions may not necessarily affect research quality, it can become problematic when trying to develop guidance for research conduct because of the need for clarity over what the guidance applies to and therefore what it should contain. Previous research has identified weaknesses in the reporting and conduct of pilot and feasibility studies [1, 3, 4, 7], particularly in relation to studies conducted in preparation for a future definitive RCT assessing the effect of an intervention or therapy. While undertaking research to develop guidance to address some of the weaknesses in reporting these studies, we became convinced, given the current interest in this area, the lack of clarity, and the differences of opinion in the research community, that a re-evaluation of the definitions of pilot and feasibility studies was needed. This paper describes the process and results of this re-evaluation and suggests a conceptual framework within which researchers can operate when designing and reporting pilot/feasibility studies. Since our work on reporting guidelines focused specifically on pilot and feasibility studies in preparation for an RCT assessing the effect of some intervention or therapy, we restrict our re-evaluation to these types of pilot and feasibility studies.

The process of developing and validating the conceptual framework for defining pilot and feasibility studies was, to a large extent, integral to the development of our reporting guidelines, the core components of which were a large Delphi study and an international expert consensus meeting focused on developing an extension of the 2010 CONSORT statement for RCTs [8] to randomised pilot studies. The reporting guidelines, Delphi study and consensus meeting are therefore referred to in this paper. However, the reporting guidelines will be reported separately; this paper focuses on our conceptual framework.

Developing a conceptual framework—Delphi study

Following research team discussion of our previous experience with, and research on, pilot and feasibility studies we initially produced mutually exclusive definitions of pilot and feasibility studies based on, but not identical to, the definitions used by the NIHR. We drew up two draft reporting checklists based on the 2010 CONSORT statement [8], one for what we had defined as feasibility studies and one for what we had defined as pilot studies. We constructed a Delphi survey, administered on-line by Clinvivo [9], to obtain consensus on checklist items for inclusion in a reporting guideline, and views on the definitions. Following user-testing of a draft version of the survey with a purposive sample of researchers active in the field of trials and pilot studies, and a workshop at the 2013 Society for Clinical Trials Conference in Boston, we further refined the definitions, checklists and survey introduction, and added further questions.

The first round of the main Delphi survey included: a description and explanation of our definitions of pilot and feasibility studies including examples (Figs 2 and 3); questions about participants’ characteristics; 67 proposed items for the two checklists and questions about overall appropriateness of the guidelines for feasibility or pilot studies; and four questions related to the definitions of feasibility and pilot studies: How appropriate do you think our definition for a pilot study conducted in preparation for an RCT is? How appropriate do you think our definition for a feasibility study conducted in preparation for an RCT is? How appropriate is the way we have distinguished between two different types of study conducted in preparation for an RCT? How appropriate are the labels ‘pilot’ and ‘feasibility’ for the two types of study we have distinguished? Participants were asked to rate their answers to the four questions on a nine-point scale from ‘not at all appropriate’ to ‘completely appropriate’. There was also a space for open comments about the definitions. The second round included results from the first round and again asked for further comments about the definitions.

[Fig 2 image: pone.0150205.g002.jpg]

Participants for the main survey were identified as likely users of the checklist including trialists, methodologists, statisticians, funders and journal editors. Three hundred and seventy potential participants were approached by email from the project team or directly from Clinvivo. These were individuals identified based on personal networks, authors of relevant studies in the literature, and members of the Canadian Institutes of Health Research, the Biostatistics Section of the Statistical Society of Canada, and the American Statistical Association. The International Society for Clinical Biostatistics and the Society for Clinical Trials kindly forwarded our email to their entire membership. There was a link within the email to the on-line questionnaire. Each round lasted three weeks and participants were sent one reminder a week before the closure of each survey. The survey took place between August and October 2013. Ethical approval was granted by the ScHARR research ethics committee at the University of Sheffield.

Developing a conceptual framework—Open meeting and research team meetings

The results of the Delphi survey pertaining to the definitions of feasibility and pilot studies were presented to an open meeting at the 2nd UK MRC Trials Methodology Conference in Edinburgh in November 2013 [13]. Attendees chose their preferred proposition from four propositions regarding the definitions, based variously on our original definitions, the NIHR and MRC views of pilot and feasibility studies, and different views expressed in the Delphi survey. At a subsequent two-day research team meeting we collated the findings from the Delphi survey and the open meeting, and considered definitions of piloting and feasibility outside the health research context found from on-line searches using the terms ‘pilot definition’, ‘feasibility definition’, ‘pilot study definition’ and ‘feasibility study definition’ in Google. We expected all searches to give a very large number of hits and examined only the first two pages of hits from each search. From this, we developed a conceptual framework reflecting consensus about the definitions, types and roles of feasibility and pilot studies conducted in preparation for an RCT evaluating the effect of an intervention or therapy. To ensure we incorporated the views of all researchers likely to be conducting pilot/feasibility studies, two qualitative researchers joined the second day of the meeting, which focused on agreeing this framework. Throughout this process we continually referred back to examples that we had identified to check that our emerging definitions were workable.

Validating the conceptual framework—systematic review

To validate the proposed conceptual framework, we identified a selection of recently reported studies that fitted our definition of pilot and feasibility studies, and tested a number of hypotheses in relation to these studies. We expected that approximately 30 reports would be sufficient to test the hypotheses. We conducted a systematic review to identify studies that authors described as pilot or feasibility studies, by searching Medline via PubMed for studies that had the words ‘pilot’ or ‘feasibility’ in the title. To increase the likelihood that the studies would be those conducted in preparation for a randomised controlled trial of the effect of a therapy or intervention we limited our search to those that contained the word ‘trial’ in the title or abstract. For full details of the search strategy see S1 Fig.
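The screening rule just described can be sketched as a simple keyword filter. This is an illustrative reconstruction only: the actual search was run in Medline via PubMed (see S1 Fig), and the record structure and field names below are hypothetical.

```python
# Illustrative sketch of the screening rule described in the text:
# 'pilot' or 'feasibility' must appear in the title, and 'trial' must
# appear in the title or abstract. Record fields are hypothetical.

def matches_search(record: dict) -> bool:
    title = record.get("title", "").lower()
    abstract = record.get("abstract", "").lower()
    has_keyword = "pilot" in title or "feasibility" in title
    mentions_trial = "trial" in title or "trial" in abstract
    return has_keyword and mentions_trial

# Toy records to exercise the filter (not real studies).
records = [
    {"title": "A pilot randomised trial of intervention X", "abstract": ""},
    {"title": "Feasibility of intervention Y", "abstract": "A future trial is planned."},
    {"title": "Effectiveness of intervention Z", "abstract": "A large cohort study."},
]

hits = [r["title"] for r in records if matches_search(r)]
# Only the first two toy records satisfy the rule.
```

In the toy data, the third record is excluded because neither keyword appears in its title, mirroring how the real search would miss studies whose feasibility focus is not signalled in the title.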

To focus on current practice, we selected the 150 most recent studies from those identified by the electronic search. We did not exclude protocols since we were primarily interested in identifying the way researchers characterised their study and any possible future study and the relationship between them; we expected investigators to describe these aspects of their studies in a similar way in protocols and reports of findings. Two research team members independently reviewed study abstracts to assess whether each study fitted our working definition of a pilot or feasibility study in preparation for an RCT evaluating the effect of an intervention or therapy. Where reviewers disagreed, studies were classed as ‘possible inclusions’ and disagreements resolved by discussion with referral to the full text of the paper as necessary. Given the difficulty of interpreting some reports and to ensure that all research team members agreed on inclusion, the whole team then reviewed relevant extracted sections of the papers provisionally agreed for inclusion. We recognised that abstracts of some studies might not include appropriate information, and therefore that our initial abstract review could have excluded some relevant studies; we explored the extent of this potential omission of studies by reviewing the full texts of a random sample of 30 studies from the original 150. Since our prime goal was to identify a manageable number of relevant studies in order to test our hypotheses rather than identify all possible relevant studies we did not include any additional studies as a result of this exploratory study.

We postulated that the following hypotheses would support our conceptual framework:

  • The words ‘pilot’ and ‘feasibility’ are both used in the literature to describe studies undertaken in preparation for an RCT evaluating the effect of an intervention or therapy.
  • It is possible to identify a subset of studies within the literature that are RCTs conducted in preparation for a larger RCT which evaluates the effect of an intervention or therapy. Authors do not use the term ‘pilot trial’ consistently in relation to these studies.
  • Within the literature it is not possible to apply unique mutually exclusive definitions of pilot and feasibility studies in preparation for an RCT evaluating the effect of an intervention or therapy that are consistent with the way authors describe their studies.
  • Amongst feasibility studies in preparation for an RCT which evaluates the effect of an intervention or therapy it is possible to identify some studies that are not pilot studies as defined within our conceptual framework, but are studies that acquire information about the feasibility of applying an intervention in a future study.

In order to explore these hypotheses, we categorised included studies into three groups that tallied with our framework (see results for details): randomised pilot studies, non-randomised pilot studies, and feasibility studies that are not pilot studies. We also extracted data on objectives, and on the phrases that indicated that the studies were conducted in preparation for a subsequent RCT.

Validating the conceptual framework—Consensus meeting

We also took an explanation and visual representation of our framework to an international consensus meeting primarily designed to reach consensus on an extension of the 2010 CONSORT statement to randomised pilot studies. There were 19 invited participants with known expertise, experience, or interest in pilot and feasibility studies, including representatives of CONSORT, funders, journal editors, and those who had been involved in writing the NIHR definitions of pilot and feasibility studies and the MRC guidance on designing and evaluating complex interventions. This made it an ideal forum in which to discuss the framework as well. The project was not concerned with any specific disease and was methodological in design; no patients or members of the public were involved.

Ninety-three individuals, including chief investigators, statisticians, trial managers, clinicians, research assistants and a funder, participated in the first round of the Delphi survey and 79 in the second round. Over 70% of participants in the first round felt that our definitions, the way we had distinguished between pilot and feasibility studies, and the labels ‘pilot’ and ‘feasibility’ were appropriate. However, these four items had some of the lowest appropriateness ratings in the survey and there were a large number of comments, both in direct response to our four survey items related to appropriateness of definitions and in open comment boxes elsewhere in the survey. Some of these comments are presented in Fig 4. Some participants commented favourably on the definitions we had drawn up (quote 1) but others were confused by them (quote 2). Several compared our definitions to the NIHR definitions, pointing out the differences (quote 3) and suggesting this might make it particularly difficult for the research community to understand our definitions (quote 4). Some expressed their own views about the definitions (quote 5); largely these tallied with the NIHR definitions. Others noted that both the concept of feasibility and the word itself were often used in relation to studies which investigators referred to as pilot studies (quote 6). Others questioned whether it was practically and/or theoretically possible to make a distinction between pilot and feasibility studies (quote 6, quote 7), suggesting that the two terms are not mutually exclusive and that feasibility is more of an umbrella term for studies conducted prior to the main trial. Some participants felt that, using our definitions, feasibility studies would be less structured and more variable and therefore their quality would be less appropriately assessed via a checklist (quote 8).
These responses regarding definitions mirrored what we had found in the user-testing of the Delphi survey, the Society for Clinical Trials workshop, and differences of opinion already apparent in the literature. In the second round of the survey there were few comments about definitions.

[Fig 4 image: pone.0150205.g004.jpg]

There was a wide range of participants in the open meeting, including senior quantitative and qualitative methodologists and a funding body representative. The four propositions we devised to cover different views about definitions of pilot and feasibility studies are shown in Fig 5. Fourteen of the fifteen attendees who voted on these propositions preferred propositions 3 or 4, based on comments from the Delphi survey and the MRC guidance on designing and evaluating complex interventions respectively. Neither of these propositions implied mutually exclusive definitions of pilot and feasibility studies.

[Fig 5 image: pone.0150205.g005.jpg]

Definitions of feasibility outside the health research context focus on the likelihood of being able to do something. For example, the Oxford on-line dictionary defines feasibility as: ‘The state or degree of being easily or conveniently done’ [14] and a feasibility study as: ‘An assessment of the practicality of a proposed plan or method’ [15]. Some definitions also suggest that a feasibility study should help with decision making, for example [16]: ‘The feasibility study is an evaluation and analysis of the potential of a proposed project. It is based on extensive investigation and research to support the process of decision making’. Outside the health research context the word ‘pilot’ has several different meanings but definitions of pilot studies usually focus on an experiment, project or development undertaken in advance of a future wider experiment, project or development. For example the Oxford on-line dictionary describes a pilot study as: ‘Done as an experiment or test before being introduced more widely’ [17]. Several definitions carry with them ideas that the purpose of a pilot study is also to facilitate decision making, for example ‘a small-scale experiment or set of observations undertaken to decide how and whether to launch a full-scale project’ [18] and some definitions specifically mention feasibility, for example: ‘a small scale preliminary study conducted in order to evaluate feasibility’ [19].

In keeping with these definitions not directly related to the health research context, we agreed that feasibility is a concept encapsulating ideas about whether it is possible to do something, and that a feasibility study asks whether something can be done, should we proceed with it, and if so, how. While piloting is also concerned with whether something can be done and whether and how we should proceed with it, it has a further dimension; piloting is implementing something, or part of something, in the way you intend to do it in future to see whether it can be done in practice. We therefore agreed that a pilot study is a study in which a future study, or part of a future study, is conducted on a smaller scale to ask whether something can be done, should we proceed with it, and if so, how. The corollary of these definitions is that all pilot studies are feasibility studies but not all feasibility studies are pilot studies. Within the context of RCTs, the focus of our research, the ‘something’ in the definitions can be replaced with ‘a future RCT evaluating the effect of an intervention or therapy’. Studies that address the question of whether the RCT can be done, should we proceed with it and if so how, can then be classed as feasibility or pilot studies. Some of these studies may, of course, have other objectives, but if they mainly focus on the feasibility of the future RCT we would include them as feasibility studies. All three studies used as examples in our Delphi survey [10–12] satisfy the definition of a feasibility study. However, a study by Piot et al, which we encountered while developing the Delphi study, does not. This study is described as a pilot trial in the abstract, but the authors present only data on effectiveness, and although they state that their results require confirmation in a larger study it is not clear that their pilot study was conducted in preparation for such a larger study [20].
On the other hand, Palmer et al ‘performed a feasibility study to determine whether patient and surgeon opinion was permissive for a Randomised Controlled Trial (RCT) comparing operative with non-operative treatment for FAI [femoroacetabular impingement]’ [12]. Heazell et al describe the aim of their randomised study as ‘to address whether a randomised controlled trial (RCT) of the management of RFM [reduced fetal movement] was feasible’ [10]. Their study piloted many of the aspects they hoped to implement in a larger trial of RFM, making it a pilot study as well, whereas the study conducted by Palmer et al, which comprised a questionnaire to clinicians and the seeking of patient opinion, is not a pilot study but is a feasibility study.

Within our framework, some important studies conducted in advance of a future RCT to evaluate the effect of a therapy or intervention are not feasibility studies. For example, a systematic review, usually an essential pre-requisite for such an RCT, normally addresses whether the future RCT is necessary or desirable, not whether it is feasible. To reflect this, we developed a comprehensive diagrammatical representation of our framework for studies conducted in preparation for an RCT which, for completeness, includes, on the left hand side, early studies that are not pilot and feasibility studies, such as systematic reviews and, along the bottom, details of existing or planned reporting guidelines for different types of study (S2 Fig).

Validating the conceptual framework—Systematic review

From the 150 most recent studies identified by our electronic search, we identified 27 eligible reports (Fig 6). In keeping with our working definition of a pilot or feasibility study, to be included the reports had to show evidence that investigators were addressing at least some feasibility objectives and that the study was in preparation for a future RCT evaluating the effect of an intervention. Ideally we would have stipulated that the primary objective of the study should be a feasibility objective but, given the nature of the reporting of most of these studies, we felt this would be too restrictive.

[Fig 6 image: pone.0150205.g006.jpg]

The 27 studies are reported in Table 1 and results relating to the terminology that authors used are summarised in Table 2. Results in Table 2 support our first hypothesis that the words ‘pilot’ and ‘feasibility’ are both used in the literature to describe studies undertaken in preparation for a randomised controlled trial of effectiveness; 63% (17/27) used both terms somewhere in the title or abstract. The table also supports our second hypothesis that amongst the subset of feasibility studies in preparation for an RCT that are themselves RCTs, authors do not use the term ‘pilot trial’ consistently in relation to these studies; of the 18 randomised studies only eight contained the words ‘pilot’ and ‘trial’ in the title. Our third hypothesis, namely that it is not possible to apply unique mutually exclusive definitions of pilot and feasibility studies in preparation for an RCT that are consistent with the way authors describe their studies, is supported by the characteristics of studies presented in Table 1 and summarised in Table 2. We could find no design or other features (such as randomisation or presence of a control group) that distinguished between those that investigators called feasibility studies and those that they called pilot studies. However, the fourth hypothesis, that amongst studies in preparation for an RCT evaluating the effect of an intervention or therapy it is possible to identify some studies that explore the feasibility of a certain intervention or acquire related information about the feasibility of applying an intervention in a future study but are not pilot studies, was not supported; we identified no such studies amongst those reported in Table 1. Nevertheless, we had identified two prior to carrying out the review [10, 15].
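The headline figures above follow directly from the reported counts. As a quick arithmetic check (the 63% is quoted in the text; the second percentage is computed here only for illustration, since the text reports the raw count):

```python
# Counts reported in the text: 17 of the 27 included studies used both
# 'pilot' and 'feasibility' in the title or abstract, and 8 of the 18
# randomised studies had both 'pilot' and 'trial' in the title.
both_terms, included = 17, 27
pilot_trial_in_title, randomised = 8, 18

pct_both = round(100 * both_terms / included)                     # 63
pct_pilot_trial = round(100 * pilot_trial_in_title / randomised)  # 44
```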

Out of our exploratory sample of 30 study reports for which we reviewed full texts rather than only titles and abstracts, we identified 10 that could be classed as pilot or feasibility studies using our framework. We had already identified four of these in our sample reported in Table 1, but had failed to identify the other six. As expected, this was because key information identifying them as pilot or feasibility studies, such as the fact that they were in preparation for a larger RCT or that the main objectives concerned feasibility, was not included in the abstract. Thus our assumption that an initial screen using only abstracts had resulted in the omission of some pilot and feasibility studies was correct.

International consensus meeting participants agreed with the general tenets of our conceptual framework, including the ideas that all pilot studies are feasibility studies but that some feasibility studies are not pilot studies. They suggested that any definitive diagrammatic representation should more strongly reflect non-linearity in the ordering of feasibility studies. As a result of their input we produced a new, simplified, diagrammatical representation of the framework (Fig 7) which focuses on the key elements represented inside an oval shape on our original diagram, omits the wider context outside this shape, and highlights some features, including the non-linearity, more clearly.

[Fig 7 image: pone.0150205.g007.jpg]

The finalised framework

Fig 7 represents the framework. The figure indicates that where there is uncertainty about future RCT feasibility, a feasibility study is appropriate. Feasibility is thus an overarching concept within which we distinguish between three distinct types of study. Randomised pilot studies are those studies in which the future RCT, or parts of it, including the randomisation of participants, is conducted on a smaller scale (piloted) to see if it can be done. Thus randomised pilot studies can include studies that for the most part reflect the design of a future definitive trial but, if necessary due to remaining uncertainty, may involve trying out alternative strategies, for example, collecting an outcome variable via telephone for some participants and on-line for others. Within the framework randomised pilot studies could also legitimately be called randomised feasibility studies. Two-thirds of the studies presented in Table 1 are of this type.

Non-randomised pilot studies are similar to randomised pilot studies; they are studies in which all or part of the intervention to be evaluated and other processes to be undertaken in a future trial is/are carried out (piloted) but without randomisation of participants. These could also legitimately be called by the umbrella term, feasibility study. These studies cover a wide range from those that are very similar to randomised pilot studies except that the intervention and control groups have not been randomised, to those in which only the intervention, and no other trial processes, are piloted. One-third of studies presented in Table 1 are of this type.

Feasibility studies that are not pilot studies are those in which investigators attempt to answer a question about whether some element of the future trial can be done, but do not implement the intervention to be evaluated or other processes to be undertaken in a future trial, though they may be addressing intervention development in some way. Such studies are rarer than the other types of feasibility study and, in fact, none of the studies in Table 1 were of this type. Nevertheless, we include these studies within the framework because they do exist; the Palmer study [12], in which surgeons and patients were asked about the feasibility of randomisation, is one such example. Other examples might be interviews to ascertain the acceptability of an intervention, or questionnaires to assess the types of outcomes participants might think important. Within the framework these studies can be called feasibility studies but cannot be called pilot studies since no part of the future randomised controlled trial is being conducted on a smaller scale.

Investigators may conduct a number of studies to assess the feasibility of an RCT to test the effect of an intervention or therapy. While it may be most common to carry out what we have referred to as feasibility studies that are not pilot studies before non-randomised pilot studies, and non-randomised pilot studies prior to randomised pilot studies, the process of feasibility work is not necessarily linear and such studies can in fact be conducted in any order. For completeness the diagram indicates the location of internal pilot studies.
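As a compact summary, the three categories in the framework can be expressed as a classification over two features of a feasibility study: whether any part of the future trial is actually run on a smaller scale, and whether participants are randomised. This encoding is purely illustrative and is not part of the framework itself.

```python
# Illustrative encoding of the framework's three study types. A study
# enters this classification only if it addresses the feasibility of a
# future RCT, i.e. sits under the overarching 'feasibility study' umbrella.

def classify(pilots_future_trial: bool, randomised: bool) -> str:
    """Return the framework category for a feasibility study."""
    if pilots_future_trial and randomised:
        return "randomised pilot study"
    if pilots_future_trial:
        return "non-randomised pilot study"
    # e.g. surveys or interviews about the acceptability of randomisation
    return "feasibility study that is not a pilot study"

# Every pilot study is a feasibility study; the converse does not hold.
```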

There are diverse views about the definitions of pilot and feasibility studies within the research community. We reached consensus over a conceptual framework for the definitions of these studies in which feasibility is an overarching concept for studies assessing whether a future study, project or development can be done. For studies conducted in preparation for an RCT assessing the effect of a therapy or intervention, three distinct types of study come under the umbrella of feasibility studies: randomised pilot studies, non-randomised pilot studies, and feasibility studies that are not pilot studies. Thus pilot studies are a subset of feasibility studies. A review of the literature confirmed that it is not possible to apply mutually exclusive definitions of pilot and feasibility studies in preparation for such an RCT that are consistent with the way authors describe their studies. For example, Lee et al [31], Boogerd et al [22] and Wolf et al [38] all describe randomised studies exploring the feasibility of introducing new systems (a brain computer interface memory training game, an on-line interactive treatment environment, and a bed-exit alarm, respectively), but Lee et al describe their study as ‘A Randomized Control Pilot Study’, with the word ‘feasibility’ used in the abstract and text, while the study by Boogerd et al is titled ‘Teaming up: feasibility of an online treatment environment for adolescents with type 1 diabetes’, and Wolf et al describe their study as a pilot study without using the word ‘feasibility’.

Our re-evaluation of the definitions of pilot and feasibility studies was conducted over a period of time with input via a variety of media by multi-disciplinary and international researchers, publishers, editors and funders. It was to some extent a by-product of our work developing reporting guidelines for such studies. Nevertheless, we were able to gather a wide range of expert views, and the iterative nature of the development of our thinking has been an important part of obtaining consensus. Other parallel developments, including the recent establishment of the new Pilot and Feasibility Studies journal [48], suggest that our work is, indeed, timely. We encountered several difficulties in reviewing empirical study reports. Firstly, it was sometimes hard to assess whether studies were planned in preparation for an RCT or whether the authors were conducting a small study and simply commenting on the fact that a larger RCT would be useful. Secondly, objectives were sometimes unclear, and/or effectiveness objectives were often emphasised in spite of recommendations that pilot and feasibility studies should not focus on effectiveness [1, 4]. In identifying relevant studies we erred on the side of inclusiveness, acknowledging that getting these studies published is not easy and that there are, as yet, no definitive reporting guidelines for investigators to follow. Lastly, our electronic search was unable to identify any feasibility studies that were not pilot studies according to our definitions. Subsequent discussion with qualitative researchers suggested that this is because such studies are often not described as feasibility studies in the title or abstract.

Our framework is compatible with the UK MRC guidance on complex interventions, which suggests a ‘feasibility and piloting’ phase as part of the work to design and evaluate such interventions without any explicit distinction between pilot and feasibility studies. In addition, although our framework has a different underlying principle from that adopted by the UK NIHR, the NIHR definition of a pilot study is not far from the subset of studies we have described as randomised pilot studies. Although there appears to be increasing interest in pilot and feasibility studies, as far as we are aware no other funding bodies specifically address the nature of such studies. The National Institutes of Health in the USA does, however, routinely require published pilot studies before considering funding applications for certain streams, and the Canadian Institutes of Health Research routinely have calls for pilot or feasibility studies in different clinical areas to gather evidence necessary to determine the viability of new research directions determined by their strategic funding plans. These approaches highlight the need for clarity regarding what constitutes a pilot study.

There are several previous reviews of empirical pilot and feasibility studies [ 1 , 4 , 7 ]. In the most recent, reviewing studies published between 2000 and 2009 [ 7 ], the authors identified a large number of studies, described similar difficulty in identifying whether a larger study was actually being planned, and a similar lack of consistency in the way the terms ‘pilot’ and ‘feasibility’ are used. Nevertheless, in methodological work, many researchers have adopted fairly rigid definitions of pilot and feasibility studies. For example, Bugge et al., in developing the ADEPT framework, refer to the NIHR definitions and suggest that feasibility studies ask questions about ‘whether the study can be done’ while pilot trials are ‘(a miniature version of the main trial), which aim to test aspects of study design and processes for the implementation of a larger main trial in the future’ [ 49 ]. Although not explicitly stated, the text seems to suggest that pilot and feasibility studies are mutually exclusive. Our work indicates that this is neither necessary nor desirable. There is, however, general agreement in the literature about the purpose of pilot and feasibility studies. For example, pilot trials are ‘to provide sufficient assurance to enable a larger definitive trial to be undertaken’ [ 50 ], and pilot studies are ‘designed to test the performance characteristics and capabilities of study designs, measures, procedures, recruitment criteria, and operational strategies that are under consideration for use in a subsequent, often larger, study’ [ 51 ], and ‘play a pivotal role in the planning of large-scale and often expensive investigations’ [ 52 ]. Within our framework we define all studies aiming to assess whether a future RCT is do-able as ‘feasibility studies’.
Some might argue that the focus of their study in preparation for a future RCT is acceptability rather than feasibility, and indeed, in other frameworks, such as the RE-AIM framework [ 53 ], feasibility and acceptability are seen as two different concepts. However, it is perfectly possible to explore the acceptability of an intervention, of a data collection process or of randomisation in order to determine the feasibility of a putative larger RCT. Thus the use of the term ‘feasibility study’ for a study in preparation for a future RCT is not incompatible with the exploration of issues other than feasibility within the study itself.

There are numerous previous studies in which the investigators review the literature and seek the counsel of experts to develop definitions and clarify terminology. Most of these relate to clinical or physiological definitions [ 54 – 56 ]. A few explorations of definitions relate to concepts such as quality of life [ 57 ]. Implicit in much of this work is that from time to time definitions need rethinking as knowledge and practice move on. From an etymological point of view this makes sense. In fact, the use of the word ‘pilot’ to mean something that is a prototype of something else only appears to emerge in the middle of the twentieth century, and the first use of the word in relation to research design that we could find was in 1947—a pilot survey [ 58 ]. Thus we do not have to look very far back to see changes in the use of one of the words we have been dealing with in developing our conceptual framework. We hope what we are proposing here is helpful in the early twenty-first century in clarifying the use of the words ‘pilot’ and ‘feasibility’ in a health research context.

We suggest that researchers view feasibility as an overarching concept, with all studies done in preparation for a main study open to being called feasibility studies, and with pilot studies as a subset of feasibility studies. All such studies should be labelled ‘pilot’ and/or ‘feasibility’ as appropriate, preferably in the title of a report, but if not certainly in the abstract. This recommendation applies to all studies that contribute to an assessment of the feasibility of an RCT evaluating the effect of an intervention. Using either of the terms in the title will be most helpful for those conducting future electronic searches. However, we recognise that for qualitative studies, authors may find it convenient to use the terms in the abstract rather than the title. Authors also need to describe objectives and methods well, reporting clearly if their study is in preparation for a future RCT to evaluate the effect of an intervention or therapy.

Though the focus of this work was on the definitions of pilot and feasibility studies, and extensive recommendations for the conduct of these studies are outside its scope, we suggest that in choosing what type of feasibility study to conduct, investigators should pay close attention to the major uncertainties that exist in relation to the trial or intervention. A randomised pilot study may not be necessary to address these; in some cases it may not even be necessary to implement an intervention at all. Similarly, funders should look for a justification for the type of feasibility study that investigators propose. We have also highlighted the need for better reporting of these studies. The CONSORT extension for randomised pilot studies that our group has developed is important in helping to address this need and will be reported separately. Nevertheless, further work will be necessary to extend or adapt these reporting guidelines for use in non-randomised pilot studies and in feasibility studies that are not pilot studies. There is also more work to be done in developing good practice guidance for the conduct of pilot and feasibility studies.

Supporting Information

Acknowledgments

We thank Alicia O’Cathain and Pat Hoddinot for discussions about the reporting of qualitative studies, and consensus participants for their views on our developing framework. Claire Coleman was funded by a National Institute for Health Research (NIHR) Research Methods Fellowship. This article presents independent research funded by the NIHR. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.

Funding Statement

The authors received small grants from Queen Mary University of London (£7495), University of Sheffield (£8000), NIHR RDS London (£2000), NIHR RDS South East (£2400), Chief Scientist Office Scotland (£1000). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Data Availability
