E-Tongue Sensory Decision Support Platform

A human-centered decision-support platform that combines E-Tongue machine taste data with semi-trained panel feedback to help food developers compare prototypes and make Go / Tweak / Stop decisions.

  • UX Researcher + UI Designer
  • Human Factor Product Design
  • Decision-Support UX
  • Prototyping
  • Usability Testing

Project Context

In Human Factor Product Design, our team explored how food developers could evaluate alternative product samples by combining two very different forms of evidence: E-Tongue machine taste measurements and feedback from semi-trained sensory panelists.

The design challenge was not just to display more data. We needed to help teams compare samples, understand disagreement between machine readings and human perception, and move from raw sensory evidence toward a clearer product decision.

Commercial E-Tongue taste sensing instrument used as domain reference

Understanding the Sensing System

The project began with a physical sensing system, not just a screen. E-Tongue workflows involve preparing samples, running machine measurements, and interpreting outputs that are meaningful to food-science teams but not always transparent to broader product stakeholders.

Using the device context from our course research helped us design the platform around the real handoff point: turning technical measurements into decisions that a development team can discuss, trust, and act on.

The UX Problem

Machine taste data can be precise and repeatable, but it is difficult to interpret without domain context. Human panel feedback is richer and closer to real perception, but it can be subjective, slow, and inconsistent across participants.

We framed the core UX problem as decision support: how might a platform help food developers understand what changed between samples, why it matters, and whether the next step should be Go, Tweak, or Stop?

Platform Concept: ISSF Dashboard

The ISSF Dashboard translates the mixed-method vision into a working surface. A status line communicates the core methodology — Semi-trained panel + E-Tongue + GC-O — backed by accuracy (94%), correlation (r=0.91), and sample size (n=127), giving food developers a clear read on evidence confidence before they begin comparing prototypes.

Each sample card surfaces a hedonic score at a glance, while the dashboard panels for CATA attributes, intensity ratings, and emotional profiles let developers drill into the specific evidence behind every number.
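The correlation shown on the status line implies comparing machine readings against panel ratings for the same samples. As a minimal sketch of how such an agreement statistic could be computed, the snippet below derives a Pearson r from two invented score lists; the function and all data values are illustrative assumptions, not the project's actual pipeline.

```python
# Hypothetical sketch: computing a machine-vs-panel agreement statistic
# of the kind shown on the ISSF status line. All data values are invented.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# E-Tongue bitterness readings vs. panel bitterness intensity (same samples)
machine = [2.1, 3.4, 1.8, 4.0, 2.9]
panel = [2.3, 3.1, 1.6, 4.2, 3.0]
print(round(pearson_r(machine, panel), 2))
```

A value near 1 would support showing the two evidence streams as mutually reinforcing; a low value is exactly the disagreement case the dashboard is meant to surface rather than hide.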

ISSF Dashboard showing mixed panel method with E-Tongue + semi-trained data achieving 94% accuracy

Process Timeline

The project moved from domain discovery into task analysis, concept selection, role-based workflows, early prototyping, and formative usability testing.

  1. Client & Domain Discovery

     We reviewed the sensory-evaluation workflow, E-Tongue data outputs, and the decisions food developers need to make when comparing product prototypes.

  2. Task Analysis

     We mapped how different users would configure samples, collect panelist responses, interpret machine testing, and compare evidence across products.

  3. Concept Direction

     We compared dashboard, comparison-tool, and decision-support directions, then prioritized a workflow that could translate evidence into actionable recommendations.

  4. Role-Based Prototype

     We separated the experience into panelist-facing questionnaires and food-developer-facing analysis views so each role could focus on the right level of detail.

  5. Formative Testing

     We ran think-aloud usability sessions to observe questionnaire clarity, terminology comprehension, navigation behavior, and confusion around rating scales.

Early Prototype Walkthrough

This early prototype walkthrough shows the team's first end-to-end product direction: role-based entry, sample workflows, sensory questionnaire interactions, and the beginnings of a decision-support layer for interpreting results.

We are intentionally treating this as process evidence rather than a final polished product. The value of the prototype was that it made the workflow concrete enough to critique, test, and iterate.

Designing for Hybrid Sensory Evidence

The platform concept connected panelist tasks such as CATA selection, intensity ratings, hedonic ratings, and emotional response with developer tasks such as configuring samples, reviewing machine testing, analyzing agreement, and making final product calls.

Instead of asking users to manually reconcile every chart and response, the experience emphasized interpretation: confidence, rationale, agreement patterns, off-note detection, and clear Go / Tweak / Stop recommendations.
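The Go / Tweak / Stop recommendation described above can be illustrated with a small decision rule. This is a hypothetical sketch: the thresholds, parameter names, and rationale strings are invented for illustration and do not represent the project's actual scoring model.

```python
# Hypothetical sketch of a Go / Tweak / Stop rule of the kind the platform
# surfaces. Thresholds and field names are invented for illustration, not
# the project's actual scoring model.

def recommend(hedonic, agreement, off_notes):
    """Return (recommendation, rationale) from hybrid sensory evidence.

    hedonic   -- mean panel liking score on a 1-9 scale
    agreement -- machine-panel agreement, e.g. a correlation in [0, 1]
    off_notes -- number of off-note flags from E-Tongue / GC-O screening
    """
    if off_notes > 0:
        return "Stop", f"{off_notes} off-note(s) detected; reformulate before retesting"
    if hedonic >= 7.0 and agreement >= 0.8:
        return "Go", "strong liking with machine and panel evidence in agreement"
    if hedonic >= 5.5:
        return "Tweak", "acceptable liking, but evidence is mixed or moderate"
    return "Stop", "liking below viable threshold"

print(recommend(hedonic=7.4, agreement=0.91, off_notes=0))
```

Pairing every recommendation with a rationale string mirrors the interface goal stated above: users see why a call was made, not just the call itself.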

Hybrid Evidence in Practice

Two views of the same hybrid evidence workflow. The Taste Profile radar visualizes 9-axis E-Tongue output — sourness through richness — from the Insent TS-5000Z with precise measurement conditions (2:5 dilution, 40°C, 7000 rpm). The ISSF Score view combines machine data with panelist feedback into a single actionable recommendation: Go, Tweak, or Stop — with confidence, risk level, and estimated savings.

Together, these surfaces let developers move between the raw sensory signal and the product-level decision without losing traceability back to the evidence.

Final Prototype: ISSF Platform

The final prototype materialized the ISSF platform across four core views. Machine Testing lets teams compare up to 12 product samples and 2 dairy controls across E-Tongue, GC-O aroma, and chemical composition data. Analyze Results surfaces CATA attributes, intensity ratings, hedonic scores, and emotional profiles from semi-trained panelists alongside machine readings.

Final Decision translates the hybrid evidence into Go / Tweak / Stop recommendations — each backed by ISSF confidence scores, sensory profiles, trained-panel validation deltas, and estimated cost savings. Configure Products supports administrator workflows for creating evaluation sessions, managing panelist rosters, and tracking product status through Active → Complete.

Close-up of E-Tongue sensor probes used as context for sensing workflow design

From Sensor Readings to Decision Cues

The sensing hardware made the hidden complexity of the workflow visible: multiple probes, repeated measurements, calibration expectations, and sample-level comparisons all sit behind the simple question of whether a food prototype is ready to move forward.

That complexity shaped our interface priorities. We wanted the platform to preserve technical credibility while still giving users a readable path from sensor evidence to product-level interpretation.

Testing Snapshot

Our usability testing was formative: the goal was to identify where terminology, scale design, and workflow expectations could break down before the system became more polished.

  • 3 panelist participants tested
  • 19–60 participant age range
  • 3 core questionnaire tasks observed
Terminology support: Participants found in-flow definitions helpful, especially when sensory terms were unfamiliar or technically specific.

Scale clarity: Inconsistent scoring formats and unclear emotional-response labels created confusion and became a priority for iteration.

Workflow expectation: Testing surfaced moments where users expected clearer start states, stronger progress cues, and more consistent interaction patterns.

Reflection

This project showed that complex data products do not become usable just by adding dashboards. When people need to make decisions under uncertainty, the interface has to organize evidence, surface tradeoffs, and explain why a recommendation is being made.

For our team, the most important shift was moving from data visualization toward decision-support UX: designing a shared interpretation layer between machine measurements, human perception, and the next product-development action.