
The Laryngoscope
© 2016 The American Laryngological, Rhinological and Otological Society, Inc.

Validation of a Self-Administered Audiometry Application: An Equivalence Study

Jonathon P. Whitton, AuD; Kenneth E. Hancock, PhD; Jeffrey M. Shannon, AuD; Daniel B. Polley, PhD

Objectives/Hypothesis: To compare hearing measurements made at home using self-administered audiometric software against audiological tests performed on the same subjects in a clinical setting.
Study Design: Prospective, crossover equivalence study.
Methods: In experiment 1, adults with varying degrees of hearing loss (N = 19) performed air-conduction audiometry, frequency discrimination, and speech recognition in noise testing twice at home with an automated tablet application and twice in sound-treated clinical booths with an audiologist. The accuracy and reliability of computer-guided home hearing tests were compared to audiologist-administered tests. In experiment 2, the reliability and accuracy of pure-tone audiometric results were examined in a separate cohort across a variety of clinical settings (N = 21).
Results: Remote, automated audiograms were statistically equivalent to manual, clinic-based testing from 500 to 8,000 Hz (P ≤ .02); however, 250 Hz thresholds were elevated when collected at home. Remote and sound-treated booth testing of frequency discrimination and speech recognition thresholds were equivalent (P ≤ 5 × 10⁻⁵). In the second experiment, remote testing was equivalent to manual sound-booth testing from 500 to 8,000 Hz (P ≤ .02) for a different cohort who received clinic-based testing in a variety of settings.
Conclusion: These data provide a proof of concept that several self-administered, automated hearing measurements are statistically equivalent to manual measurements made by an audiologist in the clinic. The demonstration of statistical equivalency for these basic behavioral hearing tests points toward the eventual feasibility of monitoring progressive or fluctuant hearing disorders outside of the clinic to increase the efficiency of clinical information collection.
Key Words: Telehealth, mobile health, audiology, automation, diagnostics, audiometry, speech, auditory processing.
Level of Evidence: 2b.
Laryngoscope, 00:000–000, 2016

INTRODUCTION
Diagnosing and treating hearing loss (HL) represents a formidable public health challenge that is expected to worsen in the near future. An estimated 48 million individuals in the United States currently live with HL, and this number is expected to nearly double over the next 20 years due to changing population demographics.1,2 Audiological service providers in the United States are already in high demand, and the mismatch between the capacity of audiologists and the number of patients who need assessment and treatment is expected to grow to unsustainable levels in the coming decade.2,3 Widening the lens beyond the United States, of the estimated 360 million people worldwide who live with disabling HL, most live in regions where audiological services are scarce.4

To address the supply and demand mismatch in the U.S. hearing services market, the efficiency or size of the current cadre of hearing healthcare providers would have to increase by a factor of two over the next two decades.2,3 There is no indication that this gap in patient care will be filled by a doubling of trained personnel.2,3 Logically, this points toward the central importance of increasing clinical efficiency as the most plausible means of solving the impending supply and demand dilemma. Some savings are likely to be found in developing more efficient standards for the diagnosis and management of HL. Audiologic behavioral tests represent a direct measure of hearing ability and currently require substantial time costs from doctorally trained clinicians because they are performed manually, making this form of information collection a reasonable target for increased efficiency.

Audiologic behavioral tests can be divided into three classes: 1) absolute detection thresholds, 2) feature discrimination thresholds, and 3) speech recognition testing. As the gold standard test of hearing sensitivity, measurements of absolute detection thresholds (i.e., the pure tone audiogram) are the most commonly performed

Additional supporting information may be found in the online version of this article.
From the Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary (J.P.W., K.E.H., D.B.P.); the Department of Otology and Laryngology, Harvard Medical School (K.E.H., D.B.P.), Boston, Massachusetts; the Program in Speech Hearing Bioscience and Technology, Harvard–MIT Division of Health Sciences and Technology (J.P.W.), Cambridge, Massachusetts; and the Hudson Valley Audiology Center (J.M.S.), Pomona, New York.
Editor's Note: This Manuscript was accepted for publication February 25, 2016.
Financial Support: Human research was supported by a Mass. Eye and Ear Curing Kids Award and a One Fund Boston research award (D.B.P.). The Bose Corporation donated the headphones that were used in this study and Microsoft Corporation donated the tablets. The authors have no other funding, financial relationships, or conflicts of interest to disclose.
Send correspondence to Dr. Jonathon Whitton, EPL, Mass Eye and Ear Infirmary, 243 Charles St., Boston, MA 02114. E-mail: Jonathon_Whitton@meei.harvard.edu
DOI: 10.1002/lary.25988

Laryngoscope 00: Month 2016 Whitton et al.: Equivalence of Mobile and Clinic-Based Tests
1
Fig. 1. (A, left) Air-conduction thresholds from 19 subjects collected in the clinic (red = right ear, blue = left ear). (A, right) Distribution of PTAs (.5–2 kHz) in the sample plotted according to American Academy of Otolaryngology–Head and Neck Surgery recommendations.48 (B, top) Study design. (B, bottom) Average ambient noise measurements made at the subjects' homes (gray line) and in the sound-treated clinical booth (black line) with the tablet computer. Actual sound-treated clinical booth noise (black broken line) measured with a high-quality microphone and signal analyzer revealed that the measurement noise floor of the tablet exceeded all clinic and some home ambient-noise measurements (see Supporting Methods for details). HL = hearing level; PTA = pure tone average; SPL = sound pressure level.

audiological behavioral tests. The latter two classes of behavioral tests are employed in some cases to obtain supplementary information to pure tone audiograms (e.g., multilevel word recognition testing5,6) but also when characterizing disorders that have little or no correlation with basic audibility, such as auditory neuropathy spectrum disorder and auditory processing disorder.7–14 The administration procedures for all three classes of behavioral tests are standardized to ensure comparable results across testing facilities, making them amenable to automation and improved clinical efficiency.3,15,16

Margolis and Morgan3 estimated that by automating 80% of pure tone audiograms currently performed by clinicians, each audiologist could see an additional 139 patients annually (1.5 million patients total), which would only partially close the gap between supply and demand in the United States for basic hearing services. But the Margolis and Morgan capacity estimates were based on the need for testing to be performed in available sound-treated booths with specialized equipment. The processing power of the smartphones and tablet computers that are now ubiquitous in both developed and emerging nations17,18 makes it feasible to distribute applications that could perform sophisticated, automated audiological testing with consumer-grade hardware outside of sound-treated booths. If remote and clinic-based audiology tests were ever to be considered interchangeable for the eventual purposes of differential diagnosis or monitoring response to treatment, it must first be established whether remote and clinic-based tests are statistically equivalent. This level of rigorous quantitative scrutiny has not been applied to automated, remote hearing assessment.19–26 This was the purpose of the present study.

We programmed an interactive application for tablet computers (Surface Pro 2, Windows 8.1 operating system; Microsoft Corporation, Redmond, WA). Using consumer-grade headphones (model AE2i; Bose Corporation, Framingham, MA), this application provided a means to measure pure-tone audiograms, frequency discrimination thresholds, and speech-in-noise recognition scores from participants of varying ages and hearing abilities. We chose these three measures because each test represents one of the three classes of audiological behavioral tests (absolute detection, feature discrimination, and speech recognition) and thus affords a proof of concept for automated, remote testing across the behavioral test battery. The accuracy of these remote tests was assessed by comparing home-based results to measurements made in the same subjects by an audiologist in a clinical sound booth. Our aim was to test whether the results of home-based, self-administered audiology tests were equivalent to the same tests administered by an audiologist in a sound-treated booth.

MATERIALS AND METHODS

Participants
All procedures were approved by the Human Studies Committee at Massachusetts Eye and Ear Infirmary and the Committee on the Use of Humans as Experimental Subjects at the Massachusetts Institute of Technology in Cambridge, Massachusetts. Informed consent was obtained from each participant. For experiment 1, 19 individuals (7 female) participated in the study, presenting with various degrees of hearing loss (Fig. 1A), ranging in age from 25 to 82 years, and reporting a variety of auditory complaints (Supp. Fig. S1). Two subjects only completed the first two time points of the study. For experiment 2, we recruited 21 additional subjects to test the repeatability and generalization of the audiometric findings when clinic-based testing was performed by various ear, nose, and throat (ENT) and audiology clinics (see Supporting Methods for more detail). Most of the participants in experiment 2 reported chronic tinnitus and presented with hearing thresholds that ranged from normal hearing to profound HL.

Procedures
For experiment 1, participants were asked to complete pure-tone audiometric, frequency discrimination, and speech-in-noise recognition testing in a sound-treated room under the care of a licensed audiologist on two separate visits (clinic visits 1 and 2). Between clinic visits, participants were given a tablet

computer and headphones, shown how to log in and open the application, and asked to complete testing from home on two occasions (home visits 1 and 2). Explanations for how to use the software were communicated through text and images in the software; subjects were not provided with any detailed instructions by the experimenters. The tablet microphone was used to measure noise levels before each session. Our experimental design allowed us to characterize the accuracy of automated home testing twice for each subject (home test 1 vs. clinic test 1 and home test 2 vs. clinic test 2). After establishing accuracy using statistical equivalence testing,27 we asked whether the reliability of home testing was significantly different from clinic-based testing by comparing the means and variances of test–retest differences (home 2 vs. home 1 and clinic 2 vs. clinic 1). The means and standard deviations (SD) for all difference testing described in this study can be found in Supporting Tables SI and SII. In experiment 2, we compared the accuracy of remote audiometric tests in a separate sample of patients to manual audiograms that they had received under the care of various U.S. audiologists, providing a test of the generalizability of the remote testing approach and a replication of the main findings in experiment 1. Tablet-based software was developed with the Unity game engine (Unity Technologies, Bellevue, WA) and scripted in C# (Microsoft).

Behavioral Measurements
Pure-Tone Audiometry. For clinic-based testing, a licensed audiologist measured detection thresholds following standard clinical procedures28 in a sound-treated booth. Contralateral masking was employed according to the optimized masking approach29 when detection thresholds between ears differed by 35 dB or greater, based on the reported interaural attenuation for the TDH-39 headphone (Telephonics; Farmingdale, NY).30 For remote testing, participants interacted with an automated tablet audiogram application using the Bose AE2i consumer-grade circumaural headphones (Framingham, MA) (see Supporting Methods and Supp. Fig. S2 for a description of equipment calibration). These headphones do not provide active noise cancellation. Although the equipment and reliance on computer guidance were different from the standard approach for threshold measurements, the testing algorithm28 followed the same rules as a clinician. Specifically, tones were presented for 1 second with an interstimulus interval of 3 to 7 seconds. Responses (indicated by virtual button press) were considered hits if they occurred within a response window of 2.5 seconds. The tone level was decreased by 10 dB following hits and increased by 5 dB following misses. Threshold was defined as the lowest level that evoked a hit response on two of three ascending runs, or three of six runs if there was no concordance after three runs.

Frequency Discrimination and Speech-in-Noise Thresholds. For both clinic and remote testing, frequency discrimination thresholds were measured diotically through an interactive two-alternative forced-choice software interface. The center frequency was roved around 2,000 Hz, and the threshold was adaptively measured using standard psychophysical procedures (see Supporting Methods for details).31

Speech-in-noise thresholds were estimated using two lists of 35 monosyllabic words from the Northwestern University 6 lists32 presented diotically at 70 dB HL. Our software allowed participants to initiate trials wherein they heard a female speaker produce monosyllabic words while six-talker babble played in the background. Participants were then cued to respond, and their voices were recorded via the tablet microphone. Recorded responses were transmitted to Massachusetts Eye and Ear Infirmary computer servers and scored offline. For clinic-based testing, an audiologist listened to the responses through the talk-back on the audiometer and scored whether or not each word was correct. Thresholds were computed using the Spearman-Kärber equation.

Statistical Testing
We quantified the accuracy of automated home-based hearing assessments by performing statistical equivalency tests using the two one-sided testing (TOST) procedure.27 The TOST requires a defined clinical equivalency margin (i.e., a difference in measurement that would not be clinically significant). The shorthand equivalency margin for clinic-based pure tone audiometry is considered ±10 dB,33–36 based on American National Standards Institute specifications for equipment variation and the testing resolution used with most audiometers.35,37 The clinical margin of equivalence can also be empirically defined from the test–retest difference for any diagnostic measure. Using this approach, we conservatively defined equivalency as an audiometric difference value that fell within the measurement margin of error (80% confidence), as defined by a previous audiometric reliability study38 and our own clinical test–retest dataset for speech and frequency discrimination. Cases where the 90% confidence interval for accuracy (clinic vs. home) falls within the clinical equivalency margin provide a visual proxy for the TOST equivalence hypothesis. P values for all TOST statistics are listed in Supporting Table SIII. We tested for differences in reliability between remote and clinic-based testing by comparing test–retest differences for each condition using mixed-model analysis of variance for multiple comparisons and two-sample t tests for paired comparisons. We also compared test–retest variances for remote and clinic-based measurements with two-sample F tests. Data were log transformed for statistical testing when they were not normally distributed (normality tested with one-sample Kolmogorov-Smirnov tests). A post hoc power analysis indicated the study was adequately powered (β = 0.2) to test for equivalence (α = 0.025) for all reported comparisons except for the ≤250 Hz and ≥12,500 Hz pure tone thresholds. Importantly, a result was only considered statistically significant if the criterion significance level (P ≤ 0.025) was met for all measurement conditions. For example, a home-based audiometric threshold was not considered equivalent to a clinic-based threshold unless significance was met for both left and right ears during both the first and second home-to-clinic comparisons. We viewed this method as the most conservative approach to data analysis.

RESULTS

Automated, Remote Pure-Tone Audiometry Is Largely Equivalent to Clinic-Based Testing
We measured pure tone thresholds at home and in the clinic from subjects with a wide range of hearing thresholds (Fig. 1A), ages (Supp. Fig. S1A), histories of auditory impairment, and computer tablet experience (Supp. Fig. S1B–C). The mean differences between home and clinic testing were small and fell within the clinical equivalency margin from 500 Hz to 8,000 Hz (Fig. 2A and Supp. Table SIV, N = 19 participants). This finding was repeatable, with the same pattern emerging from the home versus clinic comparisons at tests 1 and 2 in both left and right ear comparisons. However, very low-frequency thresholds (≤250 Hz) were slightly but consistently elevated when collected at home. Low-frequency threshold elevation could be attributed to elevated levels of low-frequency background noise in the home environment (Fig. 1B, bottom) and was only observed in subjects with thresholds in the normal range (Supp. Fig. S3). We next compared the test–retest reliability of home versus clinic audiogram measurements. The difference scores for each testing environment are plotted


Fig. 2. (A) Means and 90% confidence intervals are plotted for differences between home and clinic testing. Difference scores were measured twice (home 1–clinic 1 indicated by broken lines and home 2–clinic 2 indicated by darker solid lines). The clinical equivalence margin defines the upper and lower bounds of the dark gray area. Values above zero reflect home thresholds that were elevated above clinic-based thresholds (H>C) and vice versa. (B) Means and standard deviations are plotted for differences between testing sessions 1 and 2 made at home (gray) and by an audiologist at the clinic (black). Measurements made for the right (A & B, top) and left (A & B, bottom) ears were plotted and analyzed separately. Black cross and gray home icons represent test points described in Fig. 1B.

in Figure 2B and demonstrate considerable overlap between measurements, with no repeatable significant differences in means (P ≥ 0.44, group and group × frequency interaction terms) or variances (P ≥ 0.09, Supp. Table SI).

Automated, Remote Measurement of Frequency Discrimination and Word Recognition in Noise Is Equivalent to Clinic-Based Measurements
We did not expect a slight threshold elevation at very low frequencies to have any influence on discrimination and recognition abilities that are measured at sound levels well above ambient room noise. Indeed, we observed that home tests were statistically equivalent to the clinic-based measurements for suprathreshold tests of frequency discrimination (Fig. 3A) and speech recognition in noise (Fig. 3B) across both testing repetitions (home 1 vs. clinic 1 and home 2 vs. clinic 2, N = 19 participants). We next compared the test–retest reliability of home versus clinic frequency discrimination and speech-recognition-in-noise threshold measurements. The difference scores for each testing environment are plotted in Figure 3A–B (right side) and reveal considerable overlap between measurements, with no statistical difference in means or variances (frequency discrimination, P = 0.6 means and P = 0.6 variances; speech recognition in noise, P = 0.09 means and P = 0.5 variances).

Age Predicts Absolute Perceptual Scores But Not Remote Testing Accuracy
Hearing loss is most prevalent in middle-aged and older adults.1 Thus any method that is intended to

Fig. 3. (A, left) Differences between home and clinic frequency discrimination testing are plotted for individual subjects (gray circles). The 90% confidence intervals for differences in home and clinic testing for tests 1 and 2 are indicated by gray boxes. The clinical equivalence margin defines the upper and lower bounds of the dark gray area. (A, right) Measurement differences between tests 1 and 2 made by subjects at home (gray circles) and by an audiologist at the clinic (black circles). Mean differences ± standard deviation are indicated by boxes for each comparison. (B) The same plotting conventions described for (A) are used to depict the accuracy and reliability of speech-in-noise testing. Directions of threshold change for accuracy and reliability comparisons match the convention used in Fig. 2. SNR = signal-to-noise ratio. Black cross and gray home icons represent test points described in Fig. 1B.

increase the efficiency and accessibility of hearing assessment must be amenable to implementation in this population. While older adults are adopting mobile technologies at higher rates, they still lag behind younger adults.39 Based on this discrepancy, one could speculate that older adults might be less able to accurately self-administer hearing measures using mobile devices and applications. We enrolled a diverse sample in order to assess the predictive power of participant age on test results (Supp. Fig. S1A). Subject age could explain a significant amount of the variability in audiometric thresholds and speech recognition in noise thresholds (accounting for 38% and 52% of the variability, respectively; P = 0.005 for both). However, neither age nor degree of HL was a significant factor when comparing measurements made in clinical settings versus unsupervised testing at home (R² ≤ 0.17, P ≥ 0.12 for all associations). Furthermore, accuracy was not worse for any of the audiologic tests when individuals who did not own tablets were compared to tablet owners (P ≥ 0.67 for all comparisons). Within our sample, there was no evidence that participant age, hearing status, or tablet ownership conferred a disadvantage in diagnostic accuracy on any of the automated, remote audiological tests that were administered.
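The equivalence criterion used throughout these comparisons (declaring two paired measurement sets equivalent when the 90% confidence interval of the mean paired difference lies entirely inside the clinical margin, the visual proxy for the TOST procedure described in Statistical Testing) can be sketched as follows. The function name, the critical t value, and the paired data are illustrative assumptions for this sketch, not the study's analysis code or data:

```python
from statistics import mean, stdev

def tost_equivalent(home, clinic, margin=10.0, t_crit=1.734):
    """Proxy for the TOST procedure: return True when the 90% CI of the
    mean paired (home - clinic) difference lies inside +/- margin.
    t_crit is the upper 5% point of Student's t with n - 1 degrees of
    freedom (1.734 for n = 19); +/-10 dB is the audiometric margin
    cited in the Methods."""
    diffs = [h - c for h, c in zip(home, clinic)]
    n = len(diffs)
    half_width = t_crit * stdev(diffs) / n ** 0.5
    lo, hi = mean(diffs) - half_width, mean(diffs) + half_width
    return -margin < lo and hi < margin

# illustrative paired thresholds in dB HL (not the study's data)
home   = [10, 15, 20, 5, 25, 30, 10, 15, 45, 50, 20, 5, 10, 35, 40, 15, 20, 25, 30]
clinic = [10, 10, 20, 5, 20, 30, 15, 15, 45, 55, 20, 5, 10, 30, 40, 15, 25, 25, 30]
print(tost_equivalent(home, clinic))  # True: the 90% CI sits inside +/-10 dB
```

A 90% interval is used, rather than 95%, because containment of the 90% CI within the margin corresponds to rejecting both one-sided hypotheses of the TOST at the 5% level.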

Generalization of the Home Testing Approach to a Different Sample
As a final step, we conducted a second experiment wherein we tested the generalizability of remote testing in a separate sample of patients (N = 21) who had previously received clinical testing from other audiology and ENT clinics (see Supporting Methods). As an example, Figure 4A shows that an audiogram collected at a Midwestern ENT clinic from a patient who presented with sudden sensorineural HL in her left ear (solid lines) was captured with a high degree of accuracy using tablet-based software at that patient's home 2 days later (broken lines). When comparing clinic-based audiograms from all patients' medical records to automated, remote test results, we observed equivalency from 500 Hz to 8,000 Hz, replicating the results from experiment 1 (Fig. 4B).

Fig. 4. (A) Audiograms for a patient diagnosed with sudden sensorineural HL in the left ear. The solid lines (red = right ear, blue = left ear) represent measurements made by an audiologist at a Midwestern ear, nose, and throat clinic, and broken lines indicate automated measurements made with a tablet from the patient's home 2 days later. (B) Means and 90% confidence intervals are plotted for differences between automated home testing and clinical audiograms from 21 patients' medical records. The left ear and right ear data were analyzed separately but are plotted on the same figure (left = blue, right = red). The clinical equivalence margin defines the upper and lower bounds of the dark gray area, as per Figure 2A. HL = hearing loss.
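The automated threshold search described in Behavioral Measurements (10 dB down after each hit, 5 dB up after each miss, with threshold taken as the lowest level that evokes a hit on two of three ascending runs, or three of six when there is no concordance after three) can be sketched as a minimal staircase. The function, its default levels, and the deterministic listener model are illustrative assumptions, not the tablet application's implementation:

```python
def hughson_westlake(hears, start=40, floor=-10, ceiling=120, max_trials=200):
    """Modified Hughson-Westlake staircase sketch: drop 10 dB after each
    hit, rise 5 dB after each miss, and return the lowest level producing
    hits on 2 of 3 ascending presentations (3 of 6 if no concordance).
    `hears(level)` models the listener, returning True when the tone is
    detected at that level (dB HL)."""
    level, ascending = start, False
    runs = {}                              # level -> (hits, presentations) on ascending runs
    for _ in range(max_trials):            # safety cap for this sketch
        heard = hears(level)
        if ascending:
            hits, total = runs.get(level, (0, 0))
            hits, total = hits + int(heard), total + 1
            runs[level] = (hits, total)
            if (hits == 2 and total <= 3) or (hits == 3 and total <= 6):
                return level               # threshold criterion met
        if heard:
            level, ascending = max(floor, level - 10), False
        else:
            level, ascending = min(ceiling, level + 5), True
    return None                            # no reliable threshold found

# deterministic listener with a true threshold of 35 dB HL
print(hughson_westlake(lambda level: level >= 35))  # 35
```

With a deterministic listener the staircase converges on the first ascending level at which responses recur; a real implementation would also randomize interstimulus intervals and handle false-positive responses.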

DISCUSSION
There are three classes of behavioral audiologic tests that are used for hearing assessment (absolute detection, feature discrimination, and speech recognition). We chose a single test from each class and assessed whether measurements made from home with an automated tablet application using consumer-grade headphones were statistically equivalent to manual measurements made by an audiologist in the clinic. Absolute detection thresholds collected remotely were equivalent to clinic-based measures across frequencies that convey speech information (500–8,000 Hz). Frequency discrimination and speech-recognition thresholds were equivalent when measured remotely and in the clinic.

Validation studies for automated, remote hearing testing have predominantly focused on the sensitivity and specificity of audiograms or speech-in-noise recognition tests to identify individuals with elevated detection thresholds.19–26 One exception was a study that reported the differences between tone detection thresholds obtained with an iOS application in patients' homes versus manual measurements made in the clinic.40 Although statistical equivalency was not tested in that study, the findings were qualitatively similar to the data described here.

Whereas the behavioral tests examined in this study were as accurate when performed remotely by an application as when an audiologist manually collected them in the clinic, several obstacles prevent the realization of interchangeable remote and clinic-based test results: 1) When hearing sensitivity was normal, accuracy of remote audiogram measurements was reduced for 250 Hz tones, likely as a consequence of

environmental noise contamination. 2) The results of air-conducted tests must be interpreted with caution in the absence of an initial otoscopic examination. 3) Without coupling bone-conduction to air-conduction measurements, the data from remote audiograms do not afford clinicians the necessary information to characterize the type in addition to the severity of HL. We do not believe that these obstacles are insurmountable. Active41 and passive42 noise-reduction techniques reduce low-frequency threshold elevations measured outside of sound-treated booths and are incorporated in consumer-grade circumaural and in-ear headphones. Additionally, smartphone-compatible equipment exists to remotely obtain otoscopic images and transmit these data to clinicians for review.43 Although the widespread release of Google Glass (Mountain View, CA) has been suspended, it served as an example of consumer-grade hardware that stimulated the cochlea via bone-conduction pathways.44,45 The experiments reported here suggest that, without further hardware solutions, the remote testing approach could serve as a means to monitor patients with known pathology or as an initial screening wherein normal scores would be interpretable, but measured loss would provide only screening-level information. If remote testing were ever to move beyond screening to provide true diagnostic-grade measurements, combined hardware and software solutions will need to reduce ambient low-frequency noise levels below approximately 0 dB HL at the tympanic membrane46,47 and provide information concerning external and middle ear transmission.

CONCLUSION
Automated, unsupervised audiometry is a feasible, accurate, and reliable approach for the measurement of tone detection thresholds, frequency discrimination abilities, and word recognition in noise scores. These three behavioral tests represent each major class of audiologic behavioral measures. This study provides a proof of concept that automated, remote testing in relatively quiet environments can provide equivalent accuracy and reliability to clinic-based measures across a battery of audiometric behavioral tests.

Acknowledgment
We thank M. Ravicz for assistance in performing acoustic measurements. We thank Drs. A. Remenschneider and E. Kozin for providing feedback on an earlier version of this article.

AUTHOR CONTRIBUTIONS
J.W., K.H., and D.P. designed the experiments; K.H. programmed the software; J.W. and J.S. performed the behavioral experiments; J.W. analyzed the data; and J.W. and D.P. wrote the article.

BIBLIOGRAPHY
3. Margolis RH, Morgan DE. Automated pure-tone audiometry: an analysis of capacity, need, and benefit. Am J Audiol 2008;17:109–113.
4. WHO. The global burden of disease: 2004 update. Geneva, Switzerland: World Health Organization; 2008.
5. Jerger J, Jerger S. Psychoacoustic comparison of cochlear and VIIIth nerve disorders. J Speech Lang Hear Res 1967;10:659–688.
6. Jerger J, Jerger S. Diagnostic significance of PB word functions. Arch Otolaryngol 1971;93:573–580.
7. Ruggles D, Bharadwaj H, Shinn-Cunningham BG. Why middle-aged listeners have trouble hearing in everyday settings. Curr Biol 2012;22:1417–1422.
8. Ruggles D, Bharadwaj H, Shinn-Cunningham BG. Normal hearing is not enough to guarantee robust encoding of suprathreshold features important in everyday communication. Proc Natl Acad Sci U S A 2011;108:15516–15521.
9. Strelcyk O, Dau T. Relations between frequency selectivity, temporal fine-structure processing, and speech reception in impaired hearing. J Acoust Soc Am 2009;125:3328–3345.
10. Bharadwaj HM, Masud S, Mehraei G, Verhulst S, Shinn-Cunningham BG. Individual differences reveal correlates of hidden hearing deficits. J Neurosci 2015;35:2161–2172.
11. Narne VK. Temporal processing and speech perception in noise by listeners with auditory neuropathy. PLoS One 2013;8:e55995.
12. Musiek FE, Baran JA, Bellis TJ, et al. Diagnosis, Treatment and Management of Children and Adults With Central Auditory Processing Disorder. Reston, VA: American Academy of Audiology; 2010.
13. American Speech-Language-Hearing Association. (Central) auditory processing disorders [Technical Report]. Rockville, MD: American Speech-Language-Hearing Association; 2005.
14. Whitton JP, Polley DB. Evaluating the perceptual and pathophysiological consequences of auditory deprivation in early postnatal life: a comparison of basic and clinical studies. J Assoc Res Otolaryngol 2011;12:535–547.
15. Mahomed F, Swanepoel DW, Eikelboom RH, Soer M. Validity of automated threshold audiometry: a systematic review and meta-analysis. Ear Hear 2013;34:745–752.
16. Rudmose W. Automatic audiometry. In: Jerger J, ed. Modern Developments in Audiology. New York, NY: Academic Press; 1963:30–75.
17. Smith A, McGeeney K, Duggan M, Rainie L, Keeter S. The smartphone difference. Washington, DC: Pew Research Center; 2015. Available at: http://www.pewinternet.org/2015/04/01/us-smartphone-use-in-2015/. Accessed October 15, 2015.
18. Wike R, Simmons K, Poushter J, et al. Emerging nations embrace internet, mobile technology. Washington, DC: Pew Research Center; 2014. Available at: http://www.pewglobal.org/2014/02/13/emerging-nations-embrace-internet-mobile-technology/. Accessed October 15, 2015.
19. Abu-Ghanem S, Handzel O, Ness L, Ben-Artzi-Blima M, Fait-Ghelbendorf K, Himmelfarb M. Smartphone-based audiometric test for screening hearing loss in the elderly. Eur Arch Otorhinolaryngol 2016;273:333–339. doi: 10.1007/s00405-015-3533-9. Epub 2015.
20. Bexelius C, Honeth L, Ekman A, et al. Evaluation of an internet-based hearing test—comparison with established methods for detection of hearing loss. J Med Internet Res 2008;10:e32.
21. Smits C, Kapteyn T, Houtgast T. Development and validation of an automatic speech-in-noise screening test by telephone. Int J Audiol 2004;43:15–28.
22. Smits C, Merkus P, Houtgast T. How we do it: the Dutch functional hearing-screening tests by telephone and internet. Clin Otolaryngol 2006;31:436–440.
23. Watson CS, Kidd GR, Miller JD, Smits C, Humes LE. Telephone screening tests for functionally impaired hearing: current use in seven countries and development of a US version. J Am Acad Audiol 2012;23:757–767.
24. Jansen S, Luts H, Wagener KC, Frachet B, Wouters J. The French digit triplet test: a hearing screening tool for speech intelligibility in noise. Int J Audiol 2010;49:378–387.
25. Meyer C, Hickson L, Khan A, Hartley D, Dillon H, Seymour J. Investigation of the actions taken by adults who failed a telephone-based hearing screen. Ear Hear 2011;32:720–731.
26. Vlaming MSMG, Mackinnon RC, Jansen M, Moore DR. Automated screening for high-frequency hearing loss. Ear Hear 2014;35:667–679.
27. Walker E, Nowacki AS. Understanding equivalence and noninferiority testing. J Gen Intern Med 2011;26:192–196.
28. Carhart R, Jerger J. A preferred method for clinical determination of pure-tone thresholds. J Speech Hear Disord 1959;24:330–345.
29. Turner RG. Masking redux. I: an optimized masking method. J Am Acad Audiol 2004;15:17–28.
30. Killion MC, Wilber LA, Gudmundsen GI. Insert earphones for more interaural attenuation. Hear Instr 1985;36:1–2.
31. Levitt H. Transformed up-down methods in psychoacoustics. J Acoust Soc Am 1971;49:467–477.
32. Wilson RH, Burks CA. Use of 35 words for evaluation of hearing loss in signal-to-babble ratio: a clinic protocol. J Rehabil Res Dev 2005;42:839–852.
33. Occupational Safety and Health Administration. 1910.95 CFR Occupa-
BIBLIOGRAPHY tional Noise Exposure: Hearing Conservation Amendment (Final Rule)
1. Lin FR, Niparko JK, Ferrucci L. Hearing loss prevalence in the United Federal Register. 1983:9738–9785.
States. Arch Intern Med 2011;171:1851–1852. 34. National Institute for Occupational Safety and Health. Compendium of
2. Windmill IM, Freeman BA. Demand for audiology services: 30-yr projec- materials for noise control. Cincinnati, OH: NIOSH; 1980:80–116.
tions and impact on academic programs. J Am Acad Audiol 2013;24: 35. Eikelboom R, Swanepoel DW, Motakef S, Upson G. Clinical validation of
407–416. the AMTAS automated audiometer. Int J Audiol 2013;52:342–349.
Laryngoscope 00: Month 2016 Whitton et al.: Equivalence of Mobile and Clinic-Based Tests
6
36. Lemkens N, Vermeire K, Brokx JPL, Fransen E, Van Camp G, Van De Heyning PH. Interpretation of pure-tone thresholds in sensorineural hearing loss (SNHL): a review of measurement variability and age-specific references. Acta Otorhinolaryngol Belg 2002;56:341–352.
37. American National Standards Institute. American National Standard Specification for Audiometers (ANSI S3.6-2010). New York, NY: ANSI; 2010.
38. Landry JA, Green WB. Pure-tone audiometric threshold test-retest variability in young and elderly adults. Can J Speech Lang Pathol Audiol 1999;23:74–80.
39. Smith A. Older adults and technology use. Washington, DC: Pew Research Center; 2014. Available at: http://www.pewinternet.org/2014/04/03/older-adults-and-technology-use/. Accessed October 15, 2015.
40. Foulad A, Bui P, Djalilian H. Automated audiometry using Apple iOS-based application technology. Otolaryngol Head Neck Surg 2013;149:700–706.
41. Bromwich MA, Parsa V, Lanthier N, Yoo J, Parnes LS. Active noise reduction audiometry: a prospective analysis of a new approach to noise management in audiometric testing. Laryngoscope 2008;118:104–109.
42. Maclennan-Smith F, Swanepoel DW, Hall JW. Validity of diagnostic pure-tone audiometry without a sound-treated environment in older adults. Int J Audiol 2013;52:66–73.
43. CellScope company website. CellScope Inc. Available at: https://www.cellscope.com/. Accessed April 9, 2015.
44. Kupershmidt H, Blanka L, inventors. Indication of quality for placement of bone conduction transducers. US patent 20140363002 A1. June 9, 2014.
45. Heiman A, Haiut M, Yehuday U. Equalization and power control of bone conduction elements. US patent 20140363033 A1. December 30, 2013.
46. Frank T, Williams DL. Effects of background noise on earphone thresholds. J Am Acad Audiol 1993;4:201–212.
47. Hawkins JE, Stevens SS. The masking of pure tones and of speech by white noise. J Acoust Soc Am 1950;22:6–13.
48. Gurgel RK, Jackler RK, Dobie RA, Popelka GR. A new standardized format for reporting hearing outcome in clinical trials. Otolaryngol Head Neck Surg 2012;147:803–807.