
Performance Evaluation and Market Position of the MS Longitudinal Viewer
January 23rd, 2019

Oreoluwa Adesina
Alper Anik
Emmanuel Shedu
Gabriel Sinclair
Austin Wilson
Keming Xu

from the
Center for Leadership Education, Whiting School of Engineering

in collaboration with the
Technology Innovation Center, Johns Hopkins Hospital.

Contents
Executive Summary
    Internal Analysis
    Market and Customer Analysis
    Competitor Analysis & Deployment Considerations
Introduction
Internal Analysis
    Heuristic Evaluation of User Experience
    Recommendations for Usability Testing
    Physiological Measures in Usability Testing
Market and Customer Analysis
    Target Market & Customers
    Prospective Market & Customers
Competitive Analysis
    MS BioScreen (University of California San Francisco)
    MS Mosaic (Duke University)
    AnamneVis (Stony Brook University)
    Vie-Visu (University of Vienna)
    VISITORS [VISualizatIon of Time-Oriented RecordS] (Ben-Gurion University of the Negev)
    Lifelines2 (University of Maryland)
    EventFlow (University of Maryland)
    Dartmouth Atlas Project (Dartmouth College)
Key Features
Deployment Considerations
Annotated Bibliography
    Visualization - General
    Visualization - Patient-Oriented
    Visualization - Specific Tools
    Evaluation - User-Based
    Evaluation - Heuristic
    Evaluation - Physiological
Other Works Cited
Appendix 1: Patient-Neurologist Barriers to Communication
Appendix 2: Heuristic Evaluation
Appendix 3
Appendix 4
Appendix 5: American Academy of Neurology Survey Including (i) US Neurologists by Subspecialty, (ii) Neurology Member Type
Appendix 6: Price of Commercial Visualization Softwares
Appendix 7: Additional Information for Serviceable Addressable Market
Appendix 8: Top 16 NIH Funded Institutions in US with High Neurologist Populations
Appendix 9: Countries with the Highest Prevalence of MS
Appendix 10: Additional Information on Researcher Spending & Populations (Heart Disease & Prostate Cancer)
Appendix 11: AnamneVis Hierarchical Layout

Executive Summary
The Multiple Sclerosis (MS) Longitudinal Viewer visualization tool receives medical
information from an Electronic Medical Record (EMR) system and presents it in an easily
digestible visual format. This provides multiple benefits over the traditional EMR interface: it
helps clinicians identify symptoms faster, improves the effectiveness of the treatment process, and
decreases the medical institution’s expenses.

Internal Analysis
The performance of the MS Longitudinal Viewer tool can be analyzed from two
standpoints: heuristic and practical. In this report, we conduct our own heuristic analysis,
comparing the tool’s performance to a set of guidelines defined by usability researchers; we also
recommend strategies for practical usability testing with sample users in the future. We present a
set of problems identified in heuristic evaluation, sample tasks for practical usability testing, and
the possibility of heart rate variability as a metric for assessing usability in some contexts.

Market and Customer Analysis


The visualization software market and the MS treatment market have great potential to
grow and are projected to reach $5.6B and $27.4B by 2025, respectively. According to our
analysis, the market sizes for the MS Longitudinal Viewer are:

Total Addressable Market: $30M
Serviceable Addressable Market: $16.2M (54% of TAM)
Initial Target Market: $2.4M (15% of SAM)

We also identify prostate cancer and heart diseases, such as hypertrophic cardiomyopathy
(HCM) and arrhythmogenic right ventricular cardiomyopathy (ARVC), as ideal market spaces for
the MS Longitudinal Viewer to serve in the future.

Competitor Analysis & Deployment Considerations


We review products similar to the MS Longitudinal Viewer that either pose a threat or
offer insight into important features required in an MS visualization tool. There are currently two
main competitors in the MS space: MS BioScreen and MS Mosaic. MS BioScreen’s current tool
is superior to the MS Longitudinal Viewer due to features discussed in this report. We suggest
additional features to make the MS Longitudinal Viewer competitively viable, as well as four
phases in which the MS Longitudinal Viewer should be deployed to make it an unparalleled tool
in its market space.

Introduction
Today's extensive medical information systems have increased the amount and
availability of general medical knowledge and patient-specific data. And yet, information gaps at
the point of care are widening; clinicians are under increasing pressure to synthesize the best
evidence, review more patient data, and complete more clinical tasks in less time (Lesselroth &
Pieczkiewicz, 2011). In a 2014 survey of MS neurologists (Appendix 1), 47% said that they did
not have enough time to discuss all of their patients’ concerns.
While EMR systems have enhanced physicians’ productivity, the human interface to these
systems has lagged behind. For example, 37% of physicians in a previous study reported
that interacting with their EMR databases was too time-consuming (Rind et al., 2013). Another
study showed that 68% of physician time was spent on EMR documentation and paperwork and
that EMRs were designed as billing systems, not for patient care (“Our Story,” 2018).
A data visualization tool can leverage large, complicated datasets and enable physicians
to more rapidly contextualize relationships. For instance, Huber, Krishnaraj, Monaghan, &
Gaskin (2018) reported that they used Tableau to create a clinical dashboard that allowed
clinicians to analyze data quickly and identify trends more rapidly. A data
visualization tool also enables clinicians and researchers to track a patient’s disease progress
over time and compare the individual’s trajectory to that of a reference group of similar patients
(Onukwugha, Plaisant, & Shneiderman, 2016). Further, a data visualization tool enables
researchers to look at population data in novel ways, identify extreme outliers, and stratify
subgroups of people based upon data from health records, genomic tests, imaging, and disease
progression (Haynes, Yao, McDonald, Sahota, & Ackloo, 2008).
The economic benefit of visualization software is evidenced at Massachusetts General
Hospital, where doctors and nurses used Tableau to access and view data that enabled them to
reduce rates of hospital-acquired infections (Erler & Ohmann, 2015). Ultimately, they reduced
catheter-related urinary tract infections by 85% (ibid). Piedmont Healthcare in North Georgia
leveraged Tableau to help them reorganize physician schedules, freeing physicians to focus all
their attention on one patient-centered activity at a time (Heimer, 2018). They reduced heart
failure readmissions by 10% and heart attack readmissions by 12% (ibid). In total, they saved
$2M annually and increased patient satisfaction by 7% (ibid).

Internal Analysis
To better understand the market prospects for the MS Longitudinal Viewer tool, we
recommend analyzing its functionality in two standard ways: heuristic evaluation and usability
testing. We have conducted a heuristic evaluation ourselves, though we recommend repeating it
throughout the development process with evaluators of varying expertise. We have also
developed a framework and guidelines for future usability testing.
A heuristic evaluation entails a comparison of the tool’s user experience to an accepted
set of heuristics for optimal usability; this comparison is usually conducted by experts in either
usability or in the field of the tool’s use (“Heuristic Evaluations,” 2013). While the heuristic
evaluation can identify sources or categories of issues that users may experience, the problems
identified in heuristic evaluation often differ from those identified in practical usability testing,
so the two cannot be substituted for each other (ibid).
Usability testing is an evaluation of the tool’s performance with representative users and
tasks. Participants with expertise levels similar to the intended end users are assigned to
complete typical tasks, and their success rates, speed, satisfaction, and any problems encountered
are recorded by observers (“Usability Testing,” 2013). Additionally, studies have shown that
certain physiological metrics, such as changes in galvanic skin resistance or heart rate, may mark
the occurrence of problems that test users are not even consciously aware of (Wilson & Sasse,
2000). Therefore, we recommend a usability testing scheme that incorporates elements of both
human observation and biometrics.

Heuristic Evaluation of User Experience


Many different sets of heuristics have been defined in the field of human-computer
interfaces, and more recently, sets of heuristics specific to data visualization have also become
commonplace. Some are short, simple, and general, such as the Visual Information-Seeking
Mantra: “Overview first, zoom and filter, then details on demand” (Shneiderman, 2003). These
types of heuristic sets rely heavily on the expertise of evaluators to impute contextual meaning
and develop potential problem scenarios. On the other hand, some sets of heuristics are highly
rigorous and detailed; for instance, Pierotti’s expansion on the Nielsen heuristics takes the form
of a checklist totaling hundreds of questions (Tarrell et al., 2014). Virtually anyone, regardless of
prior knowledge, could explore an interface and analyze it using this checklist; however,
applying it to particularly specialized visualizations may overlook usability problems that the
checklist’s authors did not anticipate.
We have selected a set of heuristics that falls somewhere between these extremes and that
is supported by a thorough methodology and practical testing. Forsell and Johansson (2010)
conducted a meta-analysis of six common sets of heuristics by presenting study participants with
a list of known usability problems and asking them to classify how well each was explained by
each heuristic in the six sets. From this study, they compiled a set of ten heuristics that generated
the broadest coverage of usability issues (Table 1). This set of heuristics has since been
incorporated in further meta-analyses (Tarrell et al., 2014; Oliveira & Silva, 2017; Santos, Silva,
& Dias, 2018) and studied on its own (Väätäjä et al., 2016); its usage is well supported in this
context.

Heuristic Explanation
Information coding Visual elements (icons, colors, etc.) map intuitively to the data they convey.
Minimal actions The fewest possible user actions are needed to accomplish tasks.
Flexibility Goals may be accomplished in multiple ways; the interface allows customization to the user’s workflow and requirements.
Orientation and help Task support, additional information, and undo/redo are available.
Spatial organization Space is used efficiently; visual layout supports user understanding.
Consistency Similar design indicates similarity and different design indicates difference.
Recognition rather than recall Memorization on the user’s part is minimized.
Prompting User is directed to all possible functions when multiple options are available.
Remove the extraneous Extra information or visual elements do not obscure the needed data.
Data set reduction Features for data set reduction (e.g. filtering) are accessible and efficient.
Table 1: Summary of data visualization usability heuristics as refined by Forsell and Johansson (2010).

A potentially catastrophic problem we identified is the lack of correspondence between
graph color and medication regimen: the same medication regimen may be displayed in different
colors at different time spans, or spans of the same color may represent different regimens. This
could lead clinicians to draw incorrect conclusions from the visualization and thereby treat the
patient inappropriately. We also noted major usability problems pertaining to the use of
smoothed line graphs; the lack of an option to compare multiple graphs on one page; the use of
hover labels to indicate medication regimens; the lack of normative ranges on any graph; and the
absence of a function to exclude single data points that a user suspects to be erroneous. In
addition to these six catastrophic or major issues, we identified an additional 14 minor or
negligible usability issues that should also be considered in the further development of this tool
(Appendix 2).
We recommend continuing to evaluate new iterations of the tool against this set of
heuristics, as well as considering other sets of heuristics that may be more suitable at different
stages of the development process: the further along in development a visualization is, the more
low-level and specific heuristics become appropriate (Zuk, Schlesier, Neumann, Hancock, &
Carpendale, 2006). Additionally, we recommend that these heuristic evaluations be conducted by
both experts in data visualization and expert users (i.e., clinicians who are thoroughly
familiar with the data being presented), as this has been shown to be more effective than
assessments by single experts (Lin, Guerguerian, & Laussen, 2015).
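To make these findings easier to track and compare across development iterations, they can be recorded in a simple structured form. The following is a minimal sketch, assuming a four-level severity scale matching the terms used above; the heuristic names follow Table 1, and the two example findings restate issues described in this section, but the data structure itself is purely illustrative.

    from collections import Counter
    from dataclasses import dataclass

    # Severity levels used in this report, from most to least serious.
    SEVERITIES = ["catastrophic", "major", "minor", "negligible"]

    @dataclass
    class Finding:
        heuristic: str    # one of the ten heuristics in Table 1
        severity: str     # one of SEVERITIES
        description: str

    # Illustrative findings corresponding to issues noted above.
    findings = [
        Finding("Information coding", "catastrophic",
                "Graph color does not correspond consistently to medication regimen."),
        Finding("Flexibility", "major",
                "No option to compare multiple graphs on one page."),
    ]

    def summarize(findings):
        """Count findings per severity so successive evaluations can be compared."""
        counts = Counter(f.severity for f in findings)
        return {s: counts.get(s, 0) for s in SEVERITIES}

    print(summarize(findings))
    # {'catastrophic': 1, 'major': 1, 'minor': 0, 'negligible': 0}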

Recommendations for Usability Testing


Usability testing with sample users of the tool should take a multi-pronged approach.
First, users can be asked directly about the usability of the tool with the System Usability Scale,
a standardized five-point, ten-question Likert scale (Sauro, 2011):

1. I think that I would like to use this system frequently. (+)


2. I found the system unnecessarily complex. (-)
3. I thought the system was easy to use. (+)
4. I think that I would need the support of a technical person to be able to use this system. (-)
5. I found the various functions in this system were well integrated. (+)
6. I thought there was too much inconsistency in this system. (-)
7. I would imagine that most people would learn to use this system very quickly. (+)
8. I found the system very cumbersome to use. (-)
9. I felt very confident using the system. (+)
10. I needed to learn a lot of things before I could get going with this system. (-)

Users’ responses to each prompt are converted to a numerical score, which can then be
normalized to produce a percentile ranking of the interface usability (ibid). This scale covers the
attributes of usability as defined by a number of leaders in the field of usability research:
learnability, efficiency, correctness, memorability, and subjective satisfaction (Bruno & Al-
Qaimari, 2004).
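The arithmetic behind that score is fixed by the scale’s definition: odd-numbered (positively worded) items contribute their response minus one, even-numbered (negatively worded) items contribute five minus their response, and the sum is multiplied by 2.5 to give a 0-100 score, which is then compared against published norms for a percentile (Sauro, 2011). A minimal sketch, using a hypothetical respondent’s answers:

    def sus_score(responses):
        """Convert ten 1-5 Likert responses into the 0-100 System Usability Scale score."""
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("Expected ten responses on a 1-5 scale")
        raw = sum((r - 1) if i % 2 == 0 else (5 - r)  # even index i = odd-numbered item
                  for i, r in enumerate(responses))
        return raw * 2.5  # raw sum ranges from 0 to 40

    # Hypothetical respondent: generally positive, with some friction on learnability.
    print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 3]))  # 72.5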
However, a person’s responses to a Likert scale may not always correlate with the tool’s
actual performance (Douven, 2017). Therefore, we also recommend a study of representative
tasks and users. This study should include both quantitative measures - speed and correctness
of task completion - and qualitative observations of user satisfaction and any faults
encountered. Tasks should not be so specific that they hand clues to the participant, as this
would taint the results; instead, they should be open-ended enough that the path to the goal is
not immediately obvious. To this end, we have created tasks to measure effectiveness,
efficiency, satisfaction, and error rate, which together provide an assessment of the UI’s
usability.

1. Effectiveness
Objective: How often does the clinician refer to the MS Longitudinal Viewer?
Task: No questions or specific task
Observation: Observe the user; note whether the clinician generally uses the MS Longitudinal Viewer or opts to
use the hospital's traditional EMR system.
2. Efficiency
Objective: How intuitive is the UI?
Task 1: Ask a first-time user to open a patient's relapse data
Observation: Observe the user; note how long it takes the first-time user to find that
information.
Objective: How learnable is the UI?
Task 2: Ask the user to perform the same task once each week
Observation: Record how the time needed to complete the task changes from week to week

3. Satisfaction
Objective: What is the user's experience with the UI overall?
Task: Anonymous survey of the clinicians' time with the UI
Observation: Statistically analyze the results of the survey and infer what needs to be changed to improve
satisfaction
4. Error rate
Objective: How many errors are made, and how can tweaking the UI reduce them?
Task: Assign a new task to a regular user.
Observation: Note how many errors are made trying to perform this task, and note how the UI can be
modified to reduce the errors

Although user behavior and interaction might change when participants know they are being observed
(Sonderegger, 2009), the task-observation method combined with the System Usability Scale
across various users is a good practical indication of the usability of the tool. We recommend
controlling the trial in several ways: by separating trial users of the MS Longitudinal Viewer into
EMR-experienced and EMR-naive groups, in order to discern any connection between the
presentation of information in the standard EMR interface and the viewer; and by comparing task
performance on the MS Longitudinal Viewer with EMR-experienced and EMR-naive task
performance on the traditional EMR interface. These comparisons will not only indicate any
usability concerns in the interface, but also provide an estimate of time saved by using the MS
Longitudinal Viewer, which can then be incorporated into market and pricing strategies.
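A minimal sketch of how the quantitative half of this comparison could be tabulated is shown below; the group names match the design above, while the task name, field layout, and timings are hypothetical placeholders.

    from statistics import mean

    # Hypothetical observations: (group, interface, task, seconds to complete, completed?)
    observations = [
        ("EMR-experienced", "MS Longitudinal Viewer", "find relapse history",  42, True),
        ("EMR-experienced", "traditional EMR",        "find relapse history",  95, True),
        ("EMR-naive",       "MS Longitudinal Viewer", "find relapse history",  58, True),
        ("EMR-naive",       "traditional EMR",        "find relapse history", 120, False),
    ]

    def mean_time(group, interface):
        """Mean completion time (s) over successful attempts for one group on one interface."""
        times = [t for g, i, _, t, ok in observations if g == group and i == interface and ok]
        return mean(times) if times else None

    for group in ("EMR-experienced", "EMR-naive"):
        print(group,
              "viewer:", mean_time(group, "MS Longitudinal Viewer"), "s;",
              "traditional EMR:", mean_time(group, "traditional EMR"), "s")

The per-group difference between the two interfaces is the time-savings estimate referred to above.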

Physiological Measures in Usability Testing


An area of significant recent research interest is the application of biometrics to usability
testing, especially ocular, cardiological, and respiratory measures along with galvanic skin
resistance (Qu, Guo, & Duffy, 2017; Foglia, Prete, & Zanda, 2008). Studies by Wilson and Sasse
(2000) indicate that physiological responses could be used to indicate usability problems that are
below the level of conscious perception, such as a low video frame rate. Additionally, for a
variety of reasons, participants in usability testing may not give honest appraisals when asked
directly about a system’s usability; physiological metrics could point out faults that were noticed
but not disclosed by the user (ibid).
The best-supported metric from these studies is heart rate variability (HRV) as an
indicator of “mental load,” with higher HRV correlated with worse interface designs and lower
usability and satisfaction scores (Qu et al., 2017; Hercegfi, 2011). HRV monitoring would be
relatively easy to implement in a usability study and its correlation with usability metrics is well-
supported. However, HRV alone does not provide the full story: for instance, one test user may
have high HRV because they are struggling to comprehend poorly-designed graphics, while
another may also have high HRV because they are especially engaged by the information in a
particular graphic (Qu et al., 2017). Even though these studies show the former case to be more
typical, the possibility of different interpretations still necessitates pairing biometrics with
surveys and human observations to generate a complete picture for usability assessment.
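To illustrate what such a metric looks like computationally, the sketch below computes RMSSD (root mean square of successive differences), one common time-domain HRV statistic; the cited studies do not necessarily use this exact statistic, and the beat-to-beat intervals shown are invented for illustration.

    from math import sqrt

    def rmssd(rr_intervals_ms):
        """Root mean square of successive differences between consecutive RR intervals (ms),
        a common time-domain measure of heart rate variability."""
        diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
        return sqrt(sum(d * d for d in diffs) / len(diffs))

    # Invented example: a fairly steady rhythm yields a low RMSSD (about 8 ms here).
    print(round(rmssd([812, 805, 798, 810, 820, 815, 808]), 1))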

Market and Customer Analysis


The international visualization software market was $3.8B in 2016 and is projected to
grow at a compound annual growth rate (CAGR) of 4.7% to $5.6B by 2025 (Appendix 3).
Further, the international MS treatment market was valued at $16.1B in 2016 and is projected
to reach $27.4B by 2025, a CAGR of 6.3% (Appendix 4). To analyze the market
size and potential for the MS Longitudinal Viewer, we determined the software’s Total
Addressable Market (TAM), Serviceable Addressable Market (SAM), and Initial Target Market
(ITM). In addition, we identified prospective diseases that could benefit from longitudinal
visualization software and determined the potential market sizes of these spaces.

Target Market & Customers


Currently, MS affects 947,000 people in the US and 2.5M people internationally (Luxner,
2017). The TIC’s MS Longitudinal Viewer aids clinicians and researchers in contextualizing
information in order to improve the efficiency and outcome of treatment. We see two potential
routes for addressing the viewer’s target market and customers for commercialization:

1. Keep the MS Longitudinal Viewer in-house to benefit from its competitive advantage.
2. License the software to other institutions for clinical and research use.

To determine the US TAM, we first determined the number of potential customers for the
MS Longitudinal Viewer. From a 2018 survey of neurologists, we estimate that about 9,700
researchers and clinicians are actively working in the MS space (Appendix 5). Further, we
averaged the prices of commercial visualization software packages to estimate a $3,100 per
user per year price tag for the MS Longitudinal Viewer software (Appendix 6).
Using these figures, we estimate a US TAM of $30M for the MS Longitudinal Viewer. However,
the software packages we considered in estimating price are off-the-shelf solutions not customized to the
EMR or to clinical data for specific diseases. Because of this, we expect the MS Longitudinal
Viewer’s tailored functionality may command a premium above our estimated price. Thus, we
believe these market size evaluations are conservative. Future considerations include accounting
for the value proposition of the MS Longitudinal Viewer as well as the spending propensities in
the clinical and research settings of MS. We recommend additional pricing and marketing
development to more accurately determine a price point and market size for the MS Longitudinal
Viewer.
To estimate the US SAM, the portion of the market that can theoretically be reached with
current technology, we considered the EMR compatibility of the MS Longitudinal Viewer.
Because the MS Longitudinal Viewer is only compatible with Epic EMRs and 54% of US
patient records are managed in Epic (Appendix 7), we make the rough estimate that 54% of the
market is reachable (Glaze, 2015). Therefore, we estimate a US SAM of $16.2M. Further, despite
Epic’s popularity within the US, it services only 2.5% of global patients (ibid). Most global
EMR demand is fulfilled by Cerner Corp, which ranks first or second in market share in 90% of
global regions (Naughton, 2018). Thus, to serve demand internationally, the MS Longitudinal
Viewer will need to seek compatibility with other EMRs, most likely Cerner’s EMR.
Lastly, we cross-referenced the top 16 NIH-funded institutions with the domestic
distribution of neurologists to identify an ITM of twelve prospective medical institutions (2018
Insights Report, 2018; Philippidis, 2018) (Appendix 8). Although the actual target market will
most likely include institutions in the network of the Center of Excellence for MS, we used this
scenario to calculate an ITM size of $2.4M. Additional considerations when defining a
target market may include the geographic epidemiology of MS. Studies suggest that MS is most
prevalent in the northernmost regions of the US (Sadovnick & Ebers, 1993; Dilokthornsakul et
al., 2016). Further, countries farther from the equator have higher rates of MS (Appendix 9). In
addition, given the physical presence of direct competitors in California (BioScreen), we may
want to first concentrate on eastern institutions.
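The arithmetic behind these estimates is simple enough to restate as a small helper. The sketch below reproduces the MS figures used in this report (about 9,700 potential users, a $3,100 per-user-per-year price, 54% Epic coverage, and an ITM taken as the roughly 15% of SAM represented by the twelve target institutions); the same calculation, applied to the user counts given in the next section, underlies the heart disease and prostate cancer estimates.

    def market_sizes(potential_users, annual_price, epic_share=0.54, itm_fraction=0.15):
        """Return (TAM, SAM, ITM) in dollars.

        TAM = potential users x annual price; SAM = the share reachable through Epic
        (54% of US records, Appendix 7); ITM is approximated here as 15% of SAM,
        matching the Executive Summary."""
        tam = potential_users * annual_price
        sam = tam * epic_share
        itm = sam * itm_fraction
        return tam, sam, itm

    tam, sam, itm = market_sizes(9_700, 3_100)
    print(f"TAM ${tam/1e6:.1f}M, SAM ${sam/1e6:.1f}M, ITM ${itm/1e6:.1f}M")
    # TAM $30.1M, SAM $16.2M, ITM $2.4M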

Figure 1: MS Longitudinal Viewer TAM, SAM, and ITM.

Prospective Market & Customers


We identify prostate cancer and heart diseases, namely hypertrophic cardiomyopathy
(HCM) and arrhythmogenic right ventricular cardiomyopathy (ARVC), as ideal market spaces
into which to diversify the MS Longitudinal Viewer. To select these diseases, we searched for
highly complex chronic diseases for which Johns Hopkins Medical Institute has leverageable
resources in the form of a Center of Excellence (Figure 2). In the Gourraud et al. (2014) review
of BioScreen, a disease is characterized as sufficiently complex to benefit from a visualization
tool when it has etiological heterogeneity, diverse disease expression, and emergent properties.
To be etiologically heterogeneous, a disease must result from the interplay of genes and the
environment, and cannot primarily be dominated by one of these factors. Further, diseases with
diverse expression have high variability in disease course, from mild to aggressive. In addition,
emergent properties occur when underlying biology is understood by the integration of two or
more factors (ibid). Prostate cancer and cardiomyopathies fulfill these criteria, and have sizeable
markets in terms of the number of US patients, so we believe they are ideal targets for future
development. These disease selections are based on our own non-medical judgment, and we
recommend seeking the input of medical professionals in these fields in order to establish the
practical utility of a longitudinal viewer should the TIC decide to pursue these avenues.

Figure 2: Criteria for Selecting Product Development Areas (NCCDPHP, 2018; Gourraud, 2014).

To understand the size of these prospective market spaces we first estimated the number
of patients, clinicians, and researchers. In the US, there are about 651,000 cases of HCM and
162,000 cases of ARVC. In addition, about 84M people in the US suffer from some form of
heart disease (“Cardiovascular Disease Statistics,” n.d.). Although we could not segment the
number of cardiologists into those who treat HCM and ARVC, we did find that 22,000 active US
physicians clinically treat heart disease (Number of People…, 2016). Further, given that 1 in
every 9 men will be diagnosed with prostate cancer in their lifetime, nearly 18M men in the US
have or will develop prostate cancer (“Treating Prostate Cancer,” 2019; “Male to Female
Ratio…,” 2015). We estimate that 29,100 active physicians treat prostate cancer (Number of
People…, 2016; “Treating Prostate Cancer,” 2019). Moreover, to estimate the number of prostate
cancer and heart disease researchers, we analyzed the NIH’s research spending (Appendix 10).
Thus, we can approximate that there are 6,900 research and 28,900 medical (research & clinical)
professionals involved with heart disease and 1,300 research and 30,400 medical professionals
involved with prostate cancer.
Finally, we calculated the size of TAM and SAM for both heart disease and prostate
cancer. Utilizing the same price model as for MS (Appendix 6), we estimate a TAM of $90M for
heart disease (Figure 3) and $94M for prostate cancer (Figure 4). As with MS, by accounting
for the software’s compatibility with the Epic EMR, we approximate a SAM of $48M for heart
disease (Figure 3) and $52M for prostate cancer (Figure 4). We recommend further research on
these markets to determine their target addressable sizes.

Figure 3 (L): Heart Disease Longitudinal Viewer TAM, SAM.


Figure 4 (R): Prostate Cancer Longitudinal Viewer TAM, SAM.

Competitive Analysis
We reviewed products similar to the MS Longitudinal Viewer that are competitive in the
MS space or offer insight into important features required in an MS visualization tool.

MS BioScreen (University of California San Francisco)


MS BioScreen is a tablet-based visualization system personalized to an individual patient
and coupled with a cloud-based database infrastructure. This visualization system integrates
multiple dimensions of disease information, including clinical evolution; therapeutic
interventions; brain, eye, and spinal cord imaging; environmental exposures; genomics and
biomarker data. MS BioScreen’s most important feature is its individual patient timeline tracking
that collates disease status (baseline metrics, relapse history, treatment data, genomic data and
biomarkers), and its disease course (Ahmad, 2018). MS BioScreen also aggregates and visualizes
data from patients in similar cohorts in order to help an individual patient understand the nature of
their condition with respect to other patients. MS BioScreen currently offers three different
platforms:

1. Open MS BioScreen: It is available to any patient, caregiver or clinician with a web browser. It gives its
users the opportunity to enter data on their condition, obtain a richly contextualized, digestible and
actionable predictive output, free of commercial interest, and participate in a shared decision-making
process.
2. Weill BioScreen: This platform pulls data from many disparate sources, including the traditional EMR,
research studies, and patient surveys. Then it processes and visualizes the data in a single cohesive display.
3. NeuroShare: This system allows data to be seamlessly shared throughout a health system.

Of all the above platforms, Open MS BioScreen is the only one that has been launched.

MS Mosaic (Duke University)


MS Mosaic is a unique iPhone application that uses a mix of surveys and tasks to track
MS patients’ health and symptoms. This is an ongoing study that is scheduled to end in August
2025. It utilizes the sensors in smartphones to read and record physiological measures in order
to characterize the fluctuations in MS symptoms and their relationship to medications and
disease progression (Hartsell, 2018).

AnamneVis (Stony Brook University)


AnamneVis is a system where a patient’s information is represented using a radial
sunburst visualization that captures all health conditions of the past and present to serve as a
quick overview to the interrogating physician (Zhang et al., 2011). The patient’s body is
represented by a body map that can be zoomed in for further anatomical details. Nodes
representing health events such as diagnoses, symptoms, and treatments are drawn as wedges or
bars around the body map. The angle of a node represents its number of incidents whereas the
severity of a specific health event is color-coded. Three layers of nodes are used to cluster
categories: layer one stands for the highest hierarchy level, layer two represents more detailed
categories, and layer three contains the discrete incident node. Red dots on the body map encode
incident locations (Appendix 11). The main downsides of AnamneVis are the lack of multi-patient
comparison and that the overall chronological order of health events is not available (ibid).

Vie-Visu (University of Vienna)


Vie-Visu uses an interactive glyph technique for time-oriented analysis of electronic
patient records (Popow et al., 2001). It is similar to AnamneVis in that both present a
patient’s health history in a holistic view. Vie-Visu was motivated by the fact that paper-based
analysis of patient records is very hard to conduct because many parameters are involved and an
overall assessment of the patient’s situation is hard to maintain. The glyph display helps to
combine different measurements, maintain their relationships, show their development over time,
and make specific, possibly life-threatening situations easy to spot. The glyph
consists of three parts that represent circulation, respiration, and fluid balance parameters. Fifteen
different patient parameters are combined to form this glyph. Each glyph represents a one-hour
period and 24 glyphs are combined in a single screen (ibid).

VISITORS [VISualizatIon of Time-Oriented RecordS] (Ben-Gurion University of the Negev)


VISITORS is a framework for intelligent visualization of time-oriented data from multiple patients.
VISITORS combines intelligent temporal analysis and information visualization techniques. The
system includes tools for retrieval, visualization, exploration and analysis of raw time-oriented
data and derived concepts for multiple patient records (Klimov, Shahar, & Taieb-Maimon,
2010). To derive meaningful interpretations from raw time-oriented data, VISITORS uses a
method known as knowledge-based temporal abstraction. Three features, in combination,
distinguish this approach from others: the treatment of multiple records; the treatment of the
temporal dimension as a first-class citizen; and a user interface based on the temporal-abstraction
ontology, which enables navigation and exploration of semantically related raw and abstract
concepts.
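To make the idea of temporal abstraction concrete, the sketch below shows a deliberately simplified version of its core step: collapsing raw, time-stamped values into labeled intervals using predefined thresholds. The thresholds and data are invented, and VISITORS’ actual knowledge-based method is ontology-driven and far richer than this.

    def abstract_states(samples, classify):
        """Collapse chronologically ordered (time, value) samples into labeled intervals.

        classify maps a raw value to a state label; consecutive samples with the same
        label are merged into one (state, start_time, end_time) interval."""
        intervals = []
        for t, v in samples:
            state = classify(v)
            if intervals and intervals[-1][0] == state:
                intervals[-1] = (state, intervals[-1][1], t)  # extend the current interval
            else:
                intervals.append((state, t, t))               # start a new interval
        return intervals

    # Invented example: classifying a lab value as LOW / NORMAL / HIGH.
    classify = lambda v: "LOW" if v < 4.0 else ("HIGH" if v > 10.0 else "NORMAL")
    samples = [(0, 3.2), (1, 3.8), (2, 6.5), (3, 7.1), (4, 11.2), (5, 12.0)]
    print(abstract_states(samples, classify))
    # [('LOW', 0, 1), ('NORMAL', 2, 3), ('HIGH', 4, 5)]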

Lifelines2 (University of Maryland)


Lifelines2 is an interactive visualization system for searching and exploring certain
sequences in categorical temporal health record data. Lifelines2 color-codes different categories
and provides a dynamic filter to align all records by a chosen event type. Lifelines2 also
ranks a cohort of designated events by their number of occurrences. Users can search for a
certain event sequence, for example, event A followed by event B, but without event C in
between. Lifelines2 also uses histograms to show temporal distribution of selected event types
and users can build their own temporal summary with selected records and compare it with other
multiple groups (Rind et al., 2013).
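The “event A followed by event B, without event C in between” query can be expressed as a single scan over a chronologically ordered event list. The sketch below, with invented event names, illustrates the logic behind this kind of filter rather than Lifelines2’s actual implementation.

    def a_then_b_without_c(events, a, b, c):
        """True if some occurrence of event type `a` is later followed by `b`
        with no occurrence of `c` in between; `events` is in chronological order."""
        window_open = False
        for event in events:
            if event == a:
                window_open = True        # (re)start a candidate A...B window
            elif window_open and event == c:
                window_open = False       # a C interrupted the window
            elif window_open and event == b:
                return True
        return False

    # Invented single-patient record.
    record = ["diagnosis", "relapse", "steroid course", "MRI"]
    print(a_then_b_without_c(record, "relapse", "MRI", "steroid course"))   # False
    print(a_then_b_without_c(record, "relapse", "steroid course", "MRI"))   # True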

EventFlow (University of Maryland)


EventFlow and Lifelines2 are similar; however, while Lifelines2 only handles point
events, EventFlow also supports interval queries in its visualization system, providing
more detail on the time between events (Du, 2016). This brings a deeper understanding of
the nature and continuity of a patient’s medication. Another enhancement in EventFlow is its
advanced graphical search interface. In LifeLines2, users can only search for sequential
events, such as “before” and “after” relationships, while EventFlow’s search model enables
users to search for events in overlapping relationships (for example, “stroke while
taking medications A and B”). This broader set of filters gives users a clearer
understanding of the relationships between diseases and treatments.

Dartmouth Atlas Project (Dartmouth College)


The Dartmouth Atlas project uses Medicare data to provide comprehensive information
and analysis about national, regional, and local markets as well as individual hospitals and their
affiliated physicians. These reports, and the research upon which they are based, have helped
policymakers, the media, health care analysts, and others improve their understanding of the
efficiency and effectiveness of the health care system (Bärtschi, 2011). This valuable data forms
the foundation for many of the ongoing efforts to improve health and health systems across
America. The platform provides multi-patient multivariate visualization for single or multiple
regions, which is useful when studying the effects of a treatment or disease on a region, and
hence useful for research.

Key Features
In this section, we highlight requirements that are essential for an efficient MS
visualization tool. Some of these features were inspired by the tools that were discussed above.

1. Single Sign-On User Authentication: Single sign-on (SSO) is a common enterprise authentication process
that gives a user access to multiple applications with one set of login credentials. SSO user authentication
is important when dealing with applications that have the potential to grow.
2. String Search: A string search algorithm makes it easier to find topics within a complex program.
3. Timeline with (semantic) zoom and pan functionality: Semantic zoom allows objects to change their display
form or display additional information and panning allows for smooth movement of a viewing frame. It can
be used to get an overview and detailed information on patients’ longitudinal data.
4. Single-patient multivariate pattern visualization: Like the visual representation used in AnamneVis, a
holistic visualization of multiple variables for a patient allows the user to keep track of their MS patients’
progress very easily.
5. Multi-patient multivariate pattern visualization: A multivariate and multi-patient comparison can help
physicians and researchers link symptoms to specific diseases and find treatments that were successful in
helping other patients. Acquiring this data requires intelligent data acquisition, organization and
presentation.

We strongly believe that a tool meeting all the above requirements will enable clinicians to attain
both a holistic and detailed understanding of their patients. This will help them offer diagnoses and
treatment plans more quickly and accurately. In addition to the application requirements needed
for clinical use, MS researchers seeking to find new connections between variables can
benefit from the following additional features:

1. Regional multivariate pattern visualization: The pathogenesis of MS is unknown, but researchers believe that the
environment plays a significant role, so it will be important to collect and visualize health data for multiple
patients across multiple regions. Consider a visualization like the Dartmouth Atlas project, which uses Medicare
data to provide comprehensive information and analysis about national, regional, and local markets as well as
individual hospitals and their affiliated physicians.
2. Dynamic query filtering: A dynamic query filtering system enables researchers to compare different data
points and aids in discovering causes and new relationships between health variables (see the sketch below).
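A minimal sketch of such a filter follows; the record fields are hypothetical rather than the viewer’s actual schema. Criteria can be constants or small predicate functions, so queries can be composed interactively and their results passed to any of the visualizations above.

    # Hypothetical patient records; field names are illustrative only.
    patients = [
        {"id": 1, "region": "Northeast", "relapses_last_year": 2, "on_dmt": True},
        {"id": 2, "region": "Southwest", "relapses_last_year": 0, "on_dmt": False},
        {"id": 3, "region": "Northeast", "relapses_last_year": 1, "on_dmt": True},
    ]

    def dynamic_filter(records, **criteria):
        """Return records matching every criterion; a criterion value may be a constant
        (tested for equality) or a callable predicate applied to the field value."""
        def matches(record):
            for field, expected in criteria.items():
                value = record.get(field)
                ok = expected(value) if callable(expected) else value == expected
                if not ok:
                    return False
            return True
        return [r for r in records if matches(r)]

    # Example query: Northeast patients with at least one relapse in the past year.
    print(dynamic_filter(patients, region="Northeast", relapses_last_year=lambda n: n >= 1))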

Deployment Considerations
The MS Longitudinal Viewer uses HighCharts to visualize an already organized dataset.
It generates a line graph using the dataset provided by Epic SmartForms. Due to the MS
Longitudinal Viewer’s current state and possible threats from emergent health visualization
tools, we suggest that the MS Longitudinal Viewer be deployed and upgraded in multiple phases
until we can attain an unparalleled tool in the market space.

1. (Pre)Deployment Phase 1: Before deploying the longitudinal viewer, there are essential features it must
possess to make it market-worthy and secure for clinical use.
a. Single Sign-on User Authentication: We recommend the MS Longitudinal Viewer utilize single
sign-on user authentication not only to prevent unauthorized users from gaining access to sensitive
information, but also to allow for the flexibility to upgrade the system.
b. Patient-Specific Zoom and Pan Functionality: We also recommend clinicians and researchers have
the capability to enter a patient’s name, medical record number (MRN) or Epic Identity (EID) to
access patient specific information. The visualized information should also be navigable via
semantic zoom and pan functionalities.
c. Support Line Communication: In order to get feedback on the performance of the application, we
recommend the MS Longitudinal Viewer include a help functionality that will not only help the
user understand the features provided in the tool but also provide a way to collect written reports,
feedback and suggestions from the users.

2. Deployment Phase 2: Starting in Phase 2, we recommend prioritizing debugging and upgrading according to
user specifications and suggestions shared through the support communication line.

a. Holistic Single Patient Multivariate Pattern Visualization: We suggest creating a dashboard using
a single patient multivariate visualization similar to AnamneVis.
i. Each sector in the radial sunburst should represent a variable in the MS study.
ii. We also suggest using color to represent how well the patient is doing in each variable.
When a sector is clicked, it should display the corresponding information via a linear
graph.
b. String Search: To help the user get acquainted with the large amount of information that will be
displayed, we suggest adding a string search functionality.

3. Deployment Phase 3:
a. Multi-Patient Multivariate Visualization: In this phase, we recommend that the MS Longitudinal
Viewer include data from patients with similar disease profiles in its timeline view.
b. Regional Multi-Patient Multivariate Visualization: We suggest the MS Longitudinal Viewer
introduce a regional multi-patient multivariate view in order to give researchers the ability
to study the disease in a given region.
c. Dynamic Query Filtering: We also recommend adding a query filter system to give researchers the
flexibility to find new data connections.

4. Deployment Phase 4:
a. Machine Learning Algorithms: The goal of this phase is to use machine learning algorithms to
collect and organize data, and to generate clusters or features that might not be easily detectable by
the naked eye (we suggest using Principal Component Analysis or Linear Discriminant Analysis; a
minimal sketch follows below).
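As a minimal sketch of the Phase 4 idea, assuming scikit-learn would be acceptable in the analysis pipeline and using random placeholder data in place of real patient features, principal component analysis can reduce many per-patient variables to a few components before clustering:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Placeholder data: 100 patients x 15 longitudinal summary features (e.g., relapse
    # rate, lesion counts, medication adherence); real features would come from the
    # organized dataset described above.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(100, 15))

    # Project onto the first two principal components, then look for patient clusters.
    components = PCA(n_components=2).fit_transform(features)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)

    print(components.shape)     # (100, 2)
    print(np.bincount(labels))  # number of patients assigned to each cluster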

Annotated Bibliography
Visualization - General
1. West, V. L., Borland, D., & Hammond, W. E. (2014). Innovative information
visualization of electronic health record data: A systematic review. Journal of the
American Medical Informatics Association, 22(2), 330-339. doi:10.1136/amiajnl-2014-
002955
A review of the available literature 1996-2013 on innovative visualizations for medical
data, including both individual and “big data” multi-patient systems. Authors note the
specific challenges in visualization of EMR data, and conclude that few visualization
methods exist to adequately confront these challenges.
2. Aigner, W., Miksch, S., Muller, W., Schumann, H., & Tominski, C. (2008). Visual
Methods for Analyzing Time-Oriented Data. IEEE Transactions on Visualization and
Computer Graphics, 14(1), 47-60. doi:10.1109/tvcg.2007.70415
Analysis of graphical techniques specific to temporal data, including temporal
abstraction, principal component analysis, and data aggregation. Focuses on the utility
of user interaction and user-interest design.
3. Boyd, A. D., Young, C. D., Amatayakul, M., Dieter, M. G., & Pawola, L. M. (2017).
Developing Visual Thinking in the Electronic Health Record. Studies in Health
Technology and Informatics, 245, 308-312. doi:10.3233/978-1-61499-830-3-308
Overview of EMR downsides and historical utility in institutions, baseline documentation
(19 papers) of data visualization utilizing the EMR. Good visualization on correlations
between data, models, knowledge, and visualization.
4. Holzinger, A., Schwarz, M., Ofner, B., Jean-quartier, F., Calero-Valdez, A., Roecker, C.,
& Ziefle, M. (2014). Towards Interactive Visualization of Longitudinal Data to Support
Knowledge Discovery on Multi-touch Tablet Computers. Lecture Notes in Computer
Science, 124-137. doi:10.1007/978-3-319-10975-6_9

Longitudinal visualization overview, with considerations specific to touchscreen
displays.
5. Bui, A. A. T., & Hsu, W. (2009). Medical Data Visualization: Toward Integrated Clinical
Workstations. Medical Imaging Informatics, 139-193. doi:10.1007/978-1-4419-0385-3_4
Highly detailed analysis of the utility of visualization in medical settings, including types
of visualization, user modeling and assumptions, workflow, and integrated display.
Finishes with a section on patient-centric visualization and concerns of audience,
expectation, and access.
6. Hildebrand, C., Stausberg, J., Englmeier, K. H., & Kopanitsa, G. (2013). Visualization of
Medical Data Based on EHR Standards. Methods of Information in Medicine, 52(01), 43-
50. doi:10.3414/me12-01-0016
Defines goals for medical data visualization within EMRs and suggests considerations
for the development of a standard EMR data viewer, but concludes that it will be difficult
to overcome the “contradiction between a generic method and a flexible and user-
friendly data layout.”

Visualization - Patient-Oriented
1. Dolan, J. G., Veazie, P. J., & Russ, A. J. (2013). Development and initial evaluation of a
treatment decision dashboard. BMC Medical Informatics and Decision Making, 13(1).
doi:10.1186/1472-6947-13-51
Design and assessment of a visualization tool used to guide patients in choosing a
treatment method. Includes patient risks of adverse reaction and drug interactions.
Discusses the benefit and risks of altering patients’ (and doctors’) cognitive load.
2. Faisal, S., Blandford, A., & Potts, H. W. (2013). Making sense of personal health
information: Challenges for information visualization. Health Informatics Journal,19(3),
198-217. doi:10.1177/1460458212465213
Outlines the current challenges facing medical data visualization, and how to better
optimize data visualization tools for both medical professionals and patients. Also points
out data that’s important to both patients and medical professionals.
3. Wågbø, H. D. (2014). The Patient Perspective: Utilizing Information Visualization to
Present Health Records to Patients (thesis). Norwegian University of Science and
Technology.
In-depth analysis of patients’ perspectives on gaining access to their EMR data and
whether such access is beneficial to the patient, and, if so, how feasible it is to use
state-of-the-art EMR visualization techniques to present this data to patients.

Visualization - Specific Tools


1. Klimov, D., Shahar, Y., & Taieb-Maimon, M. (2010). Intelligent visualization and
exploration of time-oriented data of multiple patients. Artificial Intelligence in Medicine,
49(1), 11-31. doi:10.1016/j.artmed.2010.02.001
Design and development of the VISITORS system for visualization of raw data and
abstract concepts across multiple patients. Also contains specific information on the
assessment of the VISITORS system by clinicians and medical informaticians.
2. Milash, B., Plaisant, C., Rose, A., Widoff, S., & Shneiderman, B. (1996). LifeLines:
Visualizing Personal Histories. Conference on Human Factors in Computing Systems,
221-227. doi:10.1145/257089.257391

Description of LifeLines, a simple and generalizable visualization tool for longitudinal
viewing of a person’s lifetime, with specific examples in legal and medical fields. Special
attention paid to the implications of visual design - colors, sizes, icon selection - as well
as user feedback.
3. Wang, T. D., Wongsuphasawat, K., Plaisant, C., & Shneiderman, B. (2010). Visual
information seeking in multiple electronic health records. International Conference on
Health Informatics, 46-55. doi:10.1145/1882992.1883001
User case study results for Lifelines2, a system for visualizing temporal categorical data
across multiple patient records. Contains detailed data on users’ interactions with the
system and develops a “process model” for the manner in which users explore data.
4. Wang, T. D., Wongsuphasawat, K., Plaisant, C., & Shneiderman, B. (2011). Extracting
Insights from Electronic Health Records: Case Studies, a Visual Analytics Process
Model, and Design Recommendations. Journal of Medical Systems, 35(5), 1135-1152.
doi:10.1007/s10916-011-9718-x
Published follow-up to “Visual information seeking in multiple electronic health records.”
5. Wongsuphasawat, K., Gómez, J. A., Plaisant, C., Wang, T. D., Taieb-Maimon, M., &
Shneiderman, B. (2011). LifeFlow: Visualizing an Overview of Event Sequences.
Conference on Human Factors in Computing Systems, 1747-1756.
doi:10.1145/1978942.1979196
Development of LifeFlow, a tool for summary and visualization of sequences across
multiple records using a hierarchical system inspired by “icicle” plots and phylogenetic
trees. Includes case studies in medical and transportation fields as well as a user
evaluation study and verbal feedback.
6. Ordonez, P., Oates, T., Lombardi, M. E., Hernandez, G., Holmes, K. W., Fackler, J., &
Lehmann, C. U. (2012). Visualization of multivariate time-series data in a neonatal ICU.
IBM Journal of Research and Development, 56(5), 7:1-7:12.
doi:10.1147/jrd.2012.2200431
Design and evaluation of a system for visually presenting
the progression of an individual patient in small-scale time - hours or days. Uses spider
graphs, rather than line graphs, to display information more compactly. Allows both
customizing (user selects bounds) and personalizing (bounds extrapolated from data)
display to each patient.
7. Widanagamaachchi, W., Livnat, Y., Bremer, P.-T., Duvall, S., & Pascucci, V. (2018).
Interactive Visualization and Exploration of Patient Progression in a Hospital Setting.
Retrieved from http://www.huduser.org/Datasets/IL/IL08/in_fy2008.pdf
Overview of a visualization and analysis tool to understand patient progression over time.
This tool stands out as it is able to visualize and analyze group data. This allows medical
professionals to understand how a patient group is progressing.
8. Kopanitsa, G., Veseli, H., & Yampolsky, V. (2015). Development, implementation and
evaluation of an information model for archetype based user responsive medical data
visualization. Journal of Biomedical Informatics, 55, 196-205.
doi:10.1016/j.jbi.2015.04.009
Conceptual framework for the development and evaluation of a visualization module in
the Avrora EMR. Includes diagrams depicting the criteria and questions answered by
evaluation methods as well as results: functionality (modeling efficiency, data
accessibility), efficiency (cognitive efficiency, doctors’ performance), and usability
(learnability).

9. Rind, A., Wang, T. D., Aigner, W., Miksch, S., Wongsuphasawat, K., Plaisant, C., &
Shneiderman, B. (2013). Interactive Information Visualization to Explore and Query
Electronic Health Records. Foundations and Trends in Human-Computer Interaction,
5(3), 207-298. doi:10.1561/1100000039
Comparison of 14 visualization tools, both individual and multi-patient, with
categorization of their available functionalities, as well as briefer analysis of built-in
visualizations in commercial EMR systems. Includes details on the unconventional glyph-
based VIE-VISU system.
10. Popow, C., Unterasinger, L., & Horn, W. (2001). Support for Fast Comprehension of
ICU Data: Visualization using Metaphor Graphics. Methods of Information in Medicine,
40(5), 421-424. doi:10.1055/s-0038-1634202
Development of the VIE-VISU small multiples visualization tool for NICU care.
11. Bade, R., Schlechtweg, S., & Miksch, S. (2004). Connecting time-oriented data and
information to a coherent interactive visualization. Conference on Human Factors in
Computing Systems. doi:10.1145/985692.985706
Prototype for Midgaard semantic zoom system.

Evaluation - User-Based
1. Sauro, J. (2011, February 2). Measuring Usability with the System Usability Scale (SUS).
Retrieved from https://measuringu.com/sus/
Overview of the standard System Usability Scale and how to interpret its results.
2. Pohl, M., Wiltner, S., Rind, A., Aigner, W., Miksch, S., Turic, T., & Drexler, F. (2011).
Patient Development at a Glance: An Evaluation of a Medical Data Visualization.
Lecture Notes in Computer Science, 292-299. doi:10.1007/978-3-642-23768-3_24
Overview of a user study of nine physicians using a longitudinal data viewer for diabetic
patients.
3. Santos, B. S., & Dillenseger, J. (2005). Quality evaluation in medical visualization: Some
issues and a taxonomy of methods. Medical Imaging: Visualization, Image-Guided
Procedures, and Display, 5744, 612-620. doi:10.1117/12.594549
Highly theoretical paper on the conceptual basis for evaluating data visualization, with
some consideration of the practical implications for how such evaluations could be
conducted. Core concepts are “level of information representation, types of visualization
evaluation, and evaluation methodologies.”
4. Nykänen, P., Brender, J., Talmon, J., Keizer, N. D., Rigby, M., Beuscart-Zephir, M., &
Ammenwerth, E. (2011). Guideline for good
evaluation practice in health informatics (GEP-HI). International Journal of Medical
Informatics, 80(12), 815-827. doi:10.1016/j.ijmedinf.2011.08.004
Rigorous European guideline for every stage of a health informatics evaluation,
including a list of dozens of issues to be considered in each stage.
5. Cusack, C. M., Byrne, C. M., Hook, J. M., McGowan, J., Poon, E., & Zafar, A. (2009).
Health Information Technology Evaluation Toolkit: 2009 Update (U.S. Department of
Health and Human Services, Agency for Healthcare Research and Quality). Rockville,
MD.
United States government toolkit for conducting an evaluation of health IT tools step-by-
step. Walks through the basics of determining goals and measures of performance, survey
design, sources of data, and sample implementations.

Evaluation - Heuristic
1. Zuk, T., Schlesier, L., Neumann, P., Hancock, M. S., & Carpendale, S. (2006). Heuristics
for information visualization evaluation. Novel Evaluation Methods for Information
Visualization. doi:10.1145/1168149.1168162
Meta-analysis of several common heuristic sets for information visualization - Zuk &
Carpendale, Shneiderman, and Amar & Stasko - with discussion of the suitability of each
for different points in the design process, the supporting research, and progress towards
a more unified set of heuristics.
2. Shneiderman, B. (2003). The Eyes Have It: A Task by Data Type Taxonomy for
Information Visualizations. The Craft of Information Visualization, 364-371.
doi:10.1016/b978-155860915-0/50046-9
Classic framework of tasks and data types in order to understand the usage of a data
visualization. The origin of the visual information-seeking mantra “overview first, zoom
and filter, then details on demand.”
3. Forsell, C. (2012). Evaluation in Information Visualization: Heuristic Evaluation.
International Conference on Information Visualization, 1550-6037/12.
doi:10.1109/IV.2012.33
Analysis of heuristic evaluation and an in-depth explanation of its characteristics, with
guidance on how to apply heuristic evaluation most effectively to information visualization.
4. Forsell, C., & Johansson, J. (2010). An Heuristic Set for Evaluation in Information
Visualization. Conference on Advanced Visual Interfaces, 199-206.
Meta-analysis of six heuristic sets for information visualization based on how effectively
heuristics covered a list of common problems. Assesses the coverage of each heuristic set
and refines a set of 10 excerpted heuristics deemed most effective.
5. Lin, Y. L., Guerguerian, A., & Laussen, P. (2015). Heuristic Evaluation of Data
Integration and Visualization Software Used for Continuous Monitoring to Support
Intensive Care: A Bedside Nurse's Perspective. Journal of Nursing & Care, 4(6).
doi:10.4172/2167-1168.1000300
Overview of a heuristic evaluation of T3 ICU monitoring displays by teams of critical
care nurses and usability experts.
6. Tarrell, A.E., Forsell, C., Fruhling, A. L., Grinstein, G., Borgo, R., & Scholtz, J. (2014).
Toward Visualization-Specific Heuristic Evaluation. Interdisciplinary Informatics
Faculty Proceedings & Presentations, 1. doi:10.1145/2669557.2669580
In-depth analysis of the advantages, limitations, and disadvantages of various heuristic
evaluation methods. Also proposes solutions and complementary evaluation methods intended
to yield a more accurate, comprehensive, and community-accepted set of visualization-specific
heuristics.
7. Oliveira, M. R., & Silva, C. G. (2017). Adapting Heuristic Evaluation to Information
Visualization: A Method for Defining a Heuristic Set by Heuristic Grouping.
International Joint Conference on Computer Vision, Imaging and Computer Graphics
Theory and Applications, 225-232. doi:10.5220/0006133202250232
Clusters 62 common information visualization heuristics into a new set of 15 focus-distinct
heuristics usable for a heuristic evaluation.

8. Väätäjä, H., Varsaluoma, J., Heimonen, T., Tiitinen, K., Hakulinen, J., Turunen, M., . . .
Ihantola, P. (2016). Information Visualization Heuristics in Practical Expert Evaluation.
Novel Evaluation Methods for Visualization. doi:10.1145/2993901.2993918
Practical study and critique, by five expert participants, of the heuristics proposed by
Forsell & Johansson (2010).
9. Santos, B. S., Silva, S., & Dias, P. (2018). Heuristic Evaluation in Visualization: An
Empirical Study. Novel Evaluation Methods for Visualization.
Comparison of heuristic evaluations under
three heuristic sets - Nielsen, Forsell & Johansson, and Zuk & Carpendale - with a
usability study in order to discern which issues will be noted by either type of evaluation.
Contrary to the paper’s hypothesis, it was found that not all problems identified in
heuristic evaluations will be detected by users, even when study tasks are directed in such
a way as to expose users to the problems.
10. Gerhardt‐Powals, J. (1996). Cognitive engineering principles for enhancing human‐
computer performance. International Journal of Human-Computer Interaction, 8(2), 189-
211. doi:10.1080/10447319609526147
Development and test application of a set of ten “cognitive engineering principles,” i.e.
user-interface design heuristics, in a mock anti-submarine warfare task.
11. Scapin, D. L., & Bastien, J. M. (1997). Ergonomic criteria for evaluating the ergonomic
quality of interactive systems. Behaviour & Information Technology, 16(4-5), 220-231.
doi:10.1080/014492997119806
Design and assessment of a set of concrete heuristics for interaction with a data
visualization system; more about action than visual design.

Evaluation - Physiological
1. Qu, Q., Guo, F., & Duffy, V. G. (2017). Effective use of human physiological metrics to
evaluate website usability. Aslib Journal of Information Management, 69(4), 370-388.
doi:10.1108/ajim-09-2016-0155
Chinese study evaluating the correlation of eye fixation duration, fixation count, blink
rate, and heart rate variability with satisfaction, efficiency, effectiveness, learnability,
and memorability of an interface. Found strong support for most hypothesized
connections.
2. Foglia, P., Prete, C. A., & Zanda, M. (2008). Relating GSR Signals to Traditional
Usability Metrics: Case Study with an anthropomorphic Web Assistant. Instrumentation
and Measurement Technology Conference. doi:10.1109/imtc.2008.4547339
Study evaluating the correlation of heart rate, respiration rate, and galvanic skin
resistance with ease of use and approval of a web interface. Found increased respiration
rate correlated with approval of the interface.
3. Hercegfi, K. (2011). Heart Rate Variability Monitoring during Human-Computer
Interaction. Acta Polytechnica Hungarica, 8(5), 205-224.
Study of the correlation between the mid-frequency (MF) power of heart rate variability
and mental effort; found tentative confirmation of the hypothesis. MF monitoring offers
potentially very high temporal resolution (down to 6.2 seconds in this study) and is valuable
for continuous monitoring during usability evaluation (a computational sketch follows at the
end of this list).
4. Wilson, G. M., & Sasse, M. A. (2000). Do Users Always Know What’s Good For Them?
Utilising Physiological Responses to Assess Media Quality. People and Computers, 14,
327-339. doi:10.1007/978-1-4471-0515-2_22
Study evaluating galvanic skin resistance, heart rate, and blood volume pulse as
measurements of stress induced by low-frame-rate video tasks.
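
As a companion to item 3 above, the following minimal Python sketch — ours, not drawn from
Hercegfi (2011) — shows one common way to estimate heart rate variability band power from
recorded RR intervals. The 0.07-0.14 Hz mid-frequency band and the 4 Hz resampling rate are
assumptions for illustration, not values taken from the cited study.

import numpy as np
from scipy.signal import welch

def mf_power(rr_intervals_s, band=(0.07, 0.14), fs_resample=4.0):
    """Estimate heart rate variability power in a mid-frequency band from RR intervals (s)."""
    beat_times = np.cumsum(rr_intervals_s)                      # time of each beat
    t_uniform = np.arange(beat_times[0], beat_times[-1], 1.0 / fs_resample)
    rr_uniform = np.interp(t_uniform, beat_times, rr_intervals_s)  # evenly sampled tachogram
    f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs_resample,
                   nperseg=min(256, len(rr_uniform)))           # Welch power spectral density
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(psd[mask], f[mask])                         # integrate PSD over the band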

Other Works Cited


1. 2018 Insights Report (Rep.). (2018). American Academy of Neurology.
2. Ahmad, S. (2018). Precision medicine in multiple sclerosis. Precision Medicine in Cancers and Non-
Communicable Diseases, 269-278. doi:10.1201/9781315154749-15
3. American Academy of Neurology. (2013, April 17). The Doctor Won't See You Now? Study: US Facing a
Neurologist Shortage [Press release]. Retrieved January 23, 2019, from
https://www.aan.com/PressRoom/Home/PressRelease/1178
4. Average Total Costs of Research Project Grants (Chart). (2015). In NIH Data Book. Retrieved January 23,
2019, from
https://report.nih.gov/NIHDatabook/Charts/Default.aspx?sid=1&index=0&catId=2&chartId=155
5. Bärtschi, M. (2011). Health Data Visualization - A Review. Collaborative Data Visualization.
6. Best Hospitals in the U.S. (2018). Retrieved January 23, 2019, from https://health.usnews.com/best-
hospitals/rankings
7. Bruno, V. & Al-Qaimari, G. (2004). Usability attributes: an initial step toward effective user-centred
development. Australian Computer Human Interaction Conference. Wollongong.
8. Cardiovascular Disease Statistics. (n.d.). Retrieved January 23, 2019, from
https://www.hopkinsmedicine.org/healthlibrary/conditions/cardiovascular_diseases/cardovascular_disease_
statistics_85,P00243
9. Dilokthornsakul, P., Valuck, R. J., Nair, K. V., Corboy, J. R., Allen, R. R., & Campbell, J. D. (2016).
Multiple sclerosis prevalence in the United States commercially insured population. Neurology, 86(11),
1014-1021. doi:10.1212/wnl.0000000000002469
10. Douven, I. (2017). A Bayesian perspective on Likert scales and central tendency. Psychonomic Bulletin &
Review, 25(3), 1203-1211. doi:10.3758/s13423-017-1344-2
11. Du, F. (2016). EventFlow: Visual Analysis of Temporal Event Sequences and Advanced Strategies for
Healthcare Discovery. Retrieved from http://hcil.umd.edu/eventflow/
12. Erler, D., & Ohmann, A. (2015, July 21). Big Data and Visualization 101: Saving money, saving lives.
Retrieved January 23, 2019, from https://www.himss.org/news/big-data-and-visualization-101-saving-
money-saving-lives
13. Everything You Need to Know About DOMO's New Pricing Page. (2018, September 22). Retrieved
January 23, 2019, from https://www.yurbi.com/blog/everything-need-know-domos-new-pricing-page/
14. Glaze, Jeff. (2015, January 6). Epic Systems draws on literature greats for its next expansion.
Madison.com. Retrieved January 23, 2019, from https://madison.com/news/local/govt-and-politics/epic-
systems-draws-on-literature-greats-for-its-next-expansion/article_4d1cf67c-2abf-5cfd-8ce1-
2da60ed84194.html
15. Groman, J. (2017, March 27). Data Visualization: A Tool for Effective Health Communication. Retrieved
January 15, 2019, from https://healthcommcapacity.org/data-visualization-a-tool-for-effective-health-
communication/
16. Hartsell, F. L. (2018, June 11). MS Mosaic: A Longitudinal Research Study on Multiple Sclerosis.
Retrieved January 14, 2019, from https://clinicaltrials.gov/ct2/show/NCT02845635
17. Haynes, R. B., Yao, X., McDonald, H. P., Sahota, N., & Ackloo, E. (2008). Interventions for enhancing
medication adherence. Cochrane Database of Systematic Reviews, 16(2).
doi:10.1002/14651858.CD000011.pub3
18. Heimer, M. (2018, March 19). How Software Maker Tableau Helps Doctors Tame Data. Retrieved January
23, 2019, from http://fortune.com/2018/03/19/software-maker-tableau-doctors-health-data/
19. Heuristic Evaluations and Expert Reviews. (2013, October 09). Retrieved January 14, 2019, from
https://www.usability.gov/how-to-and-tools/methods/heuristic-evaluation.html
20. How Much Does Zoho Analytics Cost? (2018). Retrieved January 23, 2019, from
https://help.zoho.com/portal/kb/articles/how-much-does-zoho-analytics-cost
21. Huber, T. C., Krishnaraj, A., Monaghan, D., & Gaskin, C. M. (2018). Developing an Interactive Data
Visualization Tool to Assess the Impact of Decision Support on Clinical Operations. Journal of Digital
Imaging, 31(5), 640-645. doi:10.1007/s10278-018-0065-z

22. Koskie, B. (2018, June 20). Multiple Sclerosis: Facts, Statistics, and You (S. Kim MD, Ed.). Retrieved
January 23, 2019, from https://www.healthline.com/health/multiple-sclerosis/facts-statistics-infographic#1
23. Looker vs. Tableau: Pricing and Features Comparison. (2018, September 21). Retrieved January 23, 2019,
from https://www.betterbuys.com/bi/looker-vs-tableau/
24. Lorang, N. (2016, November 03). Let's Chart: Stop those lying line charts. Retrieved January 15, 2019,
from https://m.signalvnoise.com/lets-chart-stop-those-lying-line-charts-60020e299829
25. Luxner, L. (2017, November 20). Nearly 1 Million Americans Have Multiple Sclerosis, NMSS Prevalence
Study Finds. Multiple Sclerosis News Today. Retrieved January 23, 2019, from
https://multiplesclerosisnewstoday.com/2017/11/20/nearly-1-million-americans-have-multiple-sclerosis-
nmss-prevalence-study-finds/
26. Male to Female Ratio of the Total Population. (2015). Retrieved January 23, 2019, from
https://knoema.com/atlas/United-States-of-America/topics/Demographics/Population/Male-to-female-ratio
27. Naughton, Marc. (2018). HIMSS Investment Community Meeting. Presentation, Las Vegas. Retrieved
January 23, 2019 from https://cernercorporation.gcs-web.com/static-files/c92a1999-be90-4964-a6de-
fc0945c22280
28. National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP). (2018, November 19).
Retrieved January 23, 2019, from https://www.cdc.gov/chronicdisease/about/index.htm
29. Number of People per Active Physician by Specialty (Rep.). (2016, April). Retrieved January 23, 2019,
from Association of American Medical Colleges website:
https://www.aamc.org/data/workforce/reports/458490/1-2-chart.html
30. Onukwugha, E., Plaisant, C., & Shneiderman, B. (2016). Data Visualization Tools for Investigating Health
Services Utilization Among Cancer Patients. Oncology Informatics,207-229. doi:10.1016/b978-0-12-
802115-6.00011-2
31. Peddie, J. (2019). Computer Graphics Software Market Worldwide Segments, 2013-2021. Retrieved
January 23, 2019, from https://www.statista.com/statistics/269250/computer-graphics-application-software-
market-volume-worldwide-by-segment/
32. Philippidis, A. (2018, June 4). Top 50 NIH-Funded Institutions of 2018. Genetic Engineering &
Biotechnology News. Retrieved January 23, 2019, from https://www.genengnews.com/a-lists/top-50-nih-
funded-institutions-of-2018/
33. Usability Testing. (2013, November 13). Retrieved January 14, 2019, from https://www.usability.gov/how-
to-and-tools/methods/usability-testing.html
34. Sadovnick, A. D., & Ebers, G. C. (1993). Epidemiology of Multiple Sclerosis: A Critical Overview.
Canadian Journal of Neurological Sciences,20(1), 17-29. doi:10.1017/s0317167100047351
35. Sonderegger, A., & Sauer, J. (2009). The influence of laboratory set-up in usability tests: Effects on user
performance, subjective ratings and physiological measures. Ergonomics, 52(11), 1350-1361.
doi:10.1080/00140130903067797
36. State of MS: Global Survey Fact Sheet (Rep. No. FCH-1009120). (2014). State of MS Consortium.
37. Straight Talk: Review of Sisense; The Pros and Cons. (2018, September 19). Retrieved January 23, 2019,
from https://www.yurbi.com/blog/straight-talk-review-of-sisense-the-pros-and-cons/
38. Treating Prostate Cancer. (2019). Retrieved January 23, 2019, from
https://www.cancer.org/cancer/prostate-cancer/treating.html
39. Vaidya, A. (2017, May 2). Epic, Cerner hold 50% of hospital EHR market share: 8 things to know.
Becker's Hospital Review. Retrieved January 23, 2019, from
https://www.beckershospitalreview.com/healthcare-information-technology/epic-cerner-hold-50-of-
hospital-ehr-market-share-8-things-to-know.html
40. Who Uses Epic? (2019). Retrieved January 23, 2019, from https://www.epic.com/community#NIH
41. Wood, L. (2018, January 29). Global $27.3 Billion Multiple Sclerosis Drugs Market 2017-2025. Retrieved
January 23, 2019, from https://www.businesswire.com/news/home/20180129005741/en/Global-27.3-
Billion-Multiple-Sclerosis-Drugs-Market
42. Yellowfin BI Pricing. (2019). Retrieved January 23, 2019, from
https://www.g2crowd.com/products/yellowfin-bi/pricing
43. Zhang, Z., Ahmed, F., Mittal, A., Ramakrishnan, I., Zhao, R., Viccellio, A., & Mueller, K. (2011).
AnamneVis: A Framework for the Visualization of Patient History and Medical Diagnostics Chains.
Proceedings of the IEEE VisWeek Workshop on Visual Analytics in Health Care.

Appendix 1: Patient-Neurologist Barriers to Communication

Source: (State of MS…, 2014)

Appendix 2: Heuristic Evaluation


Severity | Explanation
Catastrophic | The problem is major and is likely to directly interfere with patient care.
Major | The problem may prevent task completion or lead to unnoticed inaccuracies in conclusions.
Minor | The problem may cause inconvenience but will not prevent the user from completing tasks.
Negligible | The problem will not interfere with the user’s ability to complete tasks (e.g. aesthetic issues).
Table 1: Explanation of usability problem severity rankings.

Severity | Usability Problem | Heuristic(s)
Catastrophic | Medication regimen colors not consistent | Information coding, consistency
Catastrophic | Smoothed line graphs may misrepresent data (Lorang, 2016) | Information coding
Catastrophic | No option to display multiple stats on one page | Recognition rather than recall, minimal actions
Major | Medication regimens not displayed by default (must hover) | Recognition rather than recall, minimal actions
Major | No normative ranges | Recognition rather than recall
Major | Cannot exclude individual data points | Data set reduction
Major | No axis numbering in “Neuro QOL” graph | Orientation and help
Major | Data labels (e.g. “hscore,” “lscore”) are not easily comprehensible | Orientation and help
Major | Hard to pan and zoom on timeline without affecting vertical axes | Data set reduction
Major | Lack of error reporting or support contact functionality | Orientation and help
Minor | Relapse hover labels do not contain any details | Orientation and help
Minor | No units on vertical axes | Orientation and help
Minor | No way to note most important metrics for a clinician or patient | Data set reduction, orientation and help
Minor | No help buttons or functionality explanations | Orientation and help
Negligible | “MRI” label repeated on every point despite legend | Remove the extraneous
Negligible | Stacked MRI icons move on hover | Remove the extraneous
Negligible | “No new lesions found” hover labels unnecessary when icon for no-lesion MRI is already distinct | Remove the extraneous
Negligible | Single icon for medication regimen implies one-time event; not clear that colored graph sections represent medication regimen | Information coding, spatial organization
Negligible | Date format different between medication regimens and events | Consistency
Negligible | Spacing between rows of events is off, some overlap | Spatial organization
Table 2: Usability problems identified.

Appendix 3:

Source: (Peddie, 2019)

Appendix 4:

Source: (Wood, 2018)

Appendix 5: American Academy of Neurology Survey Including (i) US Neurologists by
Subspecialty, (ii) Neurology Member Type

Source: (2018 Insights Report, 2018)

Appendix 5 (cont’d.)
(i) In the previous survey, 34.6% of neurologists listed general neurology as their primary
subspecialty and 4.2% listed Neuroimmunology and MS as their subspecialty. Since patients
with MS see both general neurologists and MS specialists, this survey implies that up to 38.8%
of neurologists work with MS patients.

Source: (2018 Insights Report, 2018)


(ii) This survey from the American Academy of Neurology estimated 24,953 active
professionals in the neurology space, including 17,308 neurologists, 1,056 advanced practice
providers, 851 neurology researchers, and 5,738 students and interns. Applying the 38.8% rate
of neurologists working in the MS space to the 24,953 active professionals yields an estimate of
~9,700 researchers and clinicians working in the MS space (2018 Insights Report, 2018;
American Academy of Neurology, 2013).
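
The arithmetic behind this estimate, reproduced as a minimal Python sketch using only the
figures quoted above:

# Figures quoted above, from the 2018 Insights Report.
active_professionals = 17_308 + 1_056 + 851 + 5_738   # = 24,953
ms_share = 0.346 + 0.042                               # general neurology + Neuroimmunology/MS = 38.8%
print(round(active_professionals * ms_share))          # ~9,682, reported above as ~9,700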

Appendix 6: Price of Commercial Visualization Software

Source: (“Straight Talk…,” 2018; “How Much Does Zoho Analytics Cost?,” 2018; “Yellowfin BI Pricing,” 2019;
“Everything You Need to Know…,” 2018; “Looker vs. Tableau…,” 2018)

Appendix 7: Additional Information for Serviceable Addressable Market


1. According to Epic systems, 54% of US patients use their EMR (Glaze, 2015).
2. This claim is validated by the facts that large, well-funded institutions such as the top 20 US hospitals, 8 of
the top 10 Pediatric hospitals, and the top 16 NIH funded institutions all use Epic (“Best Hospitals in the
U.S.,” 2018; “Who Uses Epic?,” 2019).
3. Cerner has typically been the most utilized EMR among small hospitals in the US (ibid).

Appendix 8: Top 16 NIH Funded Institutions in US with High Neurologist Populations

Funding amount: $5.985B of total NIH $37.5B = ~16%


Source: (2018 Insights Report, 2018; Philippidis, 2018)

Appendix 9: Countries with the Highest Prevalence of MS

Source: (Koskie, 2018)

Appendix 10: Additional Information on Researcher Spending & Populations (Heart
Disease & Prostate Cancer)
1. In 2018, the NIH spent $1.9B on heart disease research, $233M on prostate cancer research, and
$37.3B in total (Philippidis, 2018; “Cardiovascular Disease Statistics,” n.d.). An illustrative
combination of these figures appears after this list.
2. We assumed some redundancy between physicians who work both clinically and in research.
3. Additionally, we considered that the NIH issued about 50,000 grants to 300,000 researchers in 2018 and the
average 2017 grant size was about $520,000 (Philippidis, 2018; “Average Total Costs…,” 2015).
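
For illustration only, a minimal Python sketch showing one way the figures quoted above can be
combined; this is our own sketch and not necessarily the exact calculation used in the body of
the report.

# Approximate grant-equivalents per disease area from the quoted figures.
avg_grant_cost = 520_000             # average NIH research project grant, 2017 (~$520k)
heart_disease_funding = 1.9e9        # 2018 NIH heart disease research spending
prostate_cancer_funding = 233e6      # 2018 NIH prostate cancer research spending
researchers_per_grant = 300_000 / 50_000   # ~6 researchers per grant, NIH-wide

print(round(heart_disease_funding / avg_grant_cost))    # ~3,654 grant-equivalents
print(round(prostate_cancer_funding / avg_grant_cost))  # ~448 grant-equivalents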

Appendix 11: AnamneVis Hierarchical Layout

Source: (Zhang, 2011)

