The task is always to define the What, Why, and How needed to move forward.

Case Study: Validation Testing

This case study illustrates my approach to running user research, using the H5N1 validation testing project at EIT as an example. My process combines structured research questions, hands-on prototype testing, and close engagement with relevant domain experts. The focus is always on uncovering pain points, validating assumptions, and identifying opportunities for improvement that align with both user needs and business goals. The outcomes are not only product-specific insights but also broader recommendations that can strengthen future design work and reporting logic across pipelines.

Executive Summary

The study aimed to review the Influenza A prototype design and identify pain points and opportunities. It also explored how users interpret reporting outputs. Participants were clinical microbiologists and bioinformaticians, some with data science expertise.

Overall, the prototype performed well, with users completing tasks from upload to reporting. However, key areas of confusion were identified, particularly around reporting logic and data relationships. Feedback highlighted a desire for greater flexibility, control, and clarity in how results are displayed and compared.

Key recommendations emerged in three areas:

  1. Clearer communication of the relationships between data sets.

  2. Greater control over reference genomes and SNP distance metrics.

  3. More visual genomic clustering tools, such as phylogenetic trees.

These findings provide a foundation for both immediate design improvements and longer-term enhancements across the wider reporting framework.

Context & Aim

I conducted a user study on a Figma prototype of the Influenza A (H5N1) reporting workflow. The goals were to:

  • Validate whether users could successfully navigate the flow.

  • Identify pain points and opportunities in the reporting logic.

  • Gather feedback on how users want to view and interact with genomic data outputs.

Participants included clinical microbiologists, bioinformaticians, and data scientists, representing the primary end-user groups.

Process

  • Prototype testing: Users were asked to upload a sample, explore reporting features, and interpret three hypothetical output charts.

  • Method: Unmoderated testing (Useberry), with follow-up interviews to clarify results where platform issues occurred.

  • Participants: A small but targeted group of clinical and research users.

Key Findings

Navigation

  • Users successfully completed the main flow (upload to report).

  • Minor excess clicks were attributed to prototype limitations, not user confusion.

Reporting Logic

  • Confusion: All users believed reports reflected batches of samples, rather than the individual samples they were intended to show.

  • Graphs lacked keys and context, leading to misinterpretation.

Outputs & Data Control

  • Users wanted more flexibility in filtering and selecting outputs (e.g. SNP distance thresholds, reference genomes); see the sketch after this list.

  • Strong demand for visual clustering tools (such as phylogenetic trees) to support outbreak investigations.

  • Desire for customisable outputs depending on role (e.g. public health vs wet-lab).
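
To ground the SNP-distance request, here is a minimal Python sketch of threshold-based relatedness filtering. The sample IDs, sequences, and threshold value are all hypothetical, and a real pipeline would compare aligned consensus genomes against a chosen reference rather than toy strings.

```python
from itertools import combinations

# Hypothetical aligned consensus fragments, keyed by sample ID.
sequences = {
    "sample-01": "ATGCATGCAT",
    "sample-02": "ATGCATGGAT",
    "sample-03": "TTGCATGCAA",
}

def snp_distance(a: str, b: str) -> int:
    """Count positions at which two equal-length aligned sequences differ."""
    return sum(x != y for x, y in zip(a, b))

SNP_THRESHOLD = 2  # user-adjustable cut-off, as participants requested

# Flag every pair of samples as related or distinct under the threshold.
for (id_a, seq_a), (id_b, seq_b) in combinations(sequences.items(), 2):
    d = snp_distance(seq_a, seq_b)
    verdict = "related" if d <= SNP_THRESHOLD else "distinct"
    print(f"{id_a} vs {id_b}: {d} SNPs -> {verdict}")
```

Exposing the threshold as a user-editable parameter, rather than a fixed constant, is exactly the kind of control participants asked for.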

Opportunities

  • Enhanced “relatedness” features to compare data (a minimal data-model sketch follows this list):

    • Within a batch (e.g. local outbreak).

    • Between batches (e.g. longitudinal tracking).

    • Across organisations (e.g. early warning).

  • Export options (print/PDF) to be customisable by role.
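
As a rough illustration of the three comparison tiers above, the following Python sketch classifies a pair of samples by scope. The Sample fields and tier names are assumptions for illustration, not the product's actual data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sample:
    sample_id: str
    batch_id: str
    organisation: str

def relatedness_scope(a: Sample, b: Sample) -> str:
    """Classify a pair of samples into one of the three comparison tiers."""
    if a.organisation != b.organisation:
        return "cross-organisation"  # early-warning comparisons
    if a.batch_id != b.batch_id:
        return "between-batch"       # longitudinal tracking
    return "within-batch"            # local outbreak investigation

# Example: same organisation, different batches -> "between-batch".
s1 = Sample("S-001", "B-01", "LabA")
s2 = Sample("S-014", "B-03", "LabA")
print(relatedness_scope(s1, s2))
```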

Challenges

  • Prototype platform: Safari compatibility issues caused drop-offs, so moderated follow-up sessions were needed to clarify the data.

  • Sample size: The research pool was still maturing, limiting participant diversity.

Recommendations

  • Clarify reporting logic: Improve labelling and contextual keys to avoid misinterpretation.

  • Customisable outputs: Allow users to control batch views and select which data to include in reports.

  • Enhanced visualisation: Incorporate genomic clustering tools (e.g. phylogenetic trees, geographic overlays).

  • Role-specific reporting: Enable persona-based configurations (bioinformatician, public health, lab worker), as sketched below.
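
To illustrate what persona-based configuration could look like, here is a minimal Python sketch. The role names, report sections, and default behaviour are assumptions rather than the product's actual schema.

```python
# Hypothetical persona presets; section names are illustrative only.
ROLE_PRESETS = {
    "bioinformatician": {
        "sections": ["qc_metrics", "snp_matrix", "phylogenetic_tree"],
        "show_raw_distances": True,
    },
    "public_health": {
        "sections": ["cluster_summary", "geographic_overlay"],
        "show_raw_distances": False,
    },
    "lab_worker": {
        "sections": ["sample_status", "qc_metrics"],
        "show_raw_distances": False,
    },
}

def build_report_config(role: str) -> dict:
    """Return the export configuration for a persona, with a safe default."""
    return ROLE_PRESETS.get(role, ROLE_PRESETS["lab_worker"])

print(build_report_config("public_health")["sections"])
```

Keeping presets as data rather than hard-coded views would let the same export modal serve every persona.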

Outcomes & Learnings

  • The study validated the core usability of the workflow while surfacing critical improvements for reporting.

  • Feedback directly influenced design updates, including:

    • A sortable batch page for greater user control.

    • A redesigned print/export modal for flexible outputs.

  • Broader insight: Users see reporting not just as a feature, but as a decision-making tool, requiring adaptability to different research and public health contexts.

Reflection

This study reinforced the importance of:

  • Testing reporting logic early to avoid costly misinterpretations later.

  • Designing with flexibility and role-specific needs in mind.

  • Using research even at the MVP stage to guide which features are “must-have” versus future enhancements.

Despite platform and recruitment challenges, the study provided clear, actionable insights and strengthened the alignment between product, science, and user needs.