ubiquitous BAL sensing, which is why we propose drunk
user interfaces that can work on an unmodified smartphone.
Mobile Software for Measuring Alcohol Consumption
Because smartphones are ubiquitous, researchers have
explored ways that mobile devices can be used to curb
alcohol abuse without supplemental hardware. One area
where smartphones have been used is education. Hundreds
of publicly available apps, such as BAC Calculator
and
IntelliDrink PRO
, allow users to log their drinking
behavior. Using demographic information (e.g., height,
weight) and data on the drinks themselves (e.g., proof,
frequency, quantity), these apps estimate the users’ BAL;
however, a study by Weaver et al. [47] found that the
estimates reported by 98 such apps were inaccurate
compared to a breathalyzer. Of course, these apps also rely
on self-report, which is prone to error.
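Such estimators are typically variants of the Widmark formula. The sketch below uses textbook average values for the body-water distribution ratio and elimination rate; the exact coefficients and inputs any given app uses are assumptions here, not a description of those apps:

```python
def estimate_bac(alcohol_grams, body_weight_kg, is_male, hours_elapsed):
    """Rough Widmark-style estimate of blood alcohol level (% by volume).

    r (distribution ratio) and beta (elimination rate per hour) are
    population averages; real apps would tune these per user.
    """
    r = 0.68 if is_male else 0.55   # Widmark distribution ratio
    beta = 0.015                    # % BAC eliminated per hour
    bac = alcohol_grams / (body_weight_kg * 1000 * r) * 100
    return max(0.0, bac - beta * hours_elapsed)

# One US standard drink contains roughly 14 g of ethanol,
# so 42 g is about three drinks.
print(estimate_bac(alcohol_grams=42, body_weight_kg=80,
                   is_male=True, hours_elapsed=2))
```

Because such formulas ignore food intake, metabolism differences, and timing of consumption, their outputs can deviate substantially from breathalyzer readings, consistent with Weaver et al.'s findings.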
Shifting to more automatic means of sensing inebriation,
Hossain et al. [22] mined geotagged tweets to determine
whether or not people were drunk. They assumed that
tweets with words like “hangover” and “drunk” came from
drunk individuals. They then propagated that inference to
tweets that were posted by the same person near that time.
One of the most common tasks explored by the HCI and
ubicomp communities for predicting inebriation is gait
analysis. The vision of these projects is an app that
continuously processes the smartphone’s accelerometer
data for features such as step amplitude and cadence
variation [2,26]. BreathalEyes [5] reports a BAL estimate
by detecting nystagmus, or involuntary eye movement,
during horizontal gaze shifts. To the best of our knowledge,
there is no publicly available study that describes
BreathalEyes’ accuracy. Our work is most similar to that of
Bae et al. [3], who detected heavy drinking episodes in a
study involving the collection of mobile sensor data and
experience sampling methods for ground truth. Their sensor
data included location, network usage, and motion data.
Unlike our work, Bae et al. did not use human performance
data. They also made a categorical assessment (sober, tipsy,
or drunk), not a continuous-scale BAL estimate as we do.
THE DESIGN OF DUI
The DUI app comprises five different drunk user interfaces:
(1) typing, (2) swiping, (3) balancing+heart rate, (4) simple
reaction, and (5) choice reaction. For each task, we cite a subset of the clinical experiments that informed it, describe how it was adapted for use on a mobile device, and list some of the features calculated from human performance and sensor data.
Unfortunately, limitations of space preclude a complete listing of every feature used for each task; a more detailed listing can be found on the project's webpage at https://atm15.github.io/extra/DUI_feature_list.csv. We then describe how those features are processed and analyzed to produce a final BAL estimate.

BAC Calculator: https://play.google.com/store/apps/details?id=com.simonm.bloodalcoholcontentcalculator
IntelliDrink PRO: https://itunes.apple.com/us/app/intellidrink-pro-blood-alcohol-content-bac-calculator/id440759306
(1) Typing Task
DUI’s typing task is intended to measure the user’s fine
motor coordination abilities and cognition as they text.
Anecdotal evidence suggests that texting is more difficult
while a person is inebriated; to the best of our knowledge,
though, there has been no work that has quantitatively
analyzed the effect of alcohol on smartphone touchscreen
typing. However, research in medicine and psychology has
examined similar tasks that require small, controlled
movements, such as the Purdue Pegboard Test [6].
For DUI’s typing task, the user is presented with a random
phrase from the MacKenzie-Soukoreff phrase set [33] and
asked to type the phrase "as quickly and accurately as possible," relying on their own internal speed-accuracy
tradeoff. Auto-correct is disabled, and no cursor is provided
for the user to jump back to make corrections; if the user
makes a mistake, they must decide for themselves whether
or not to remedy the mistake with a backspace or to leave it.
We imposed these restrictions in keeping with standard text
entry evaluation methodology [52].
There are two levels of features that emerge from this task.
At a high level, DUI utilizes the error rate analysis
proposed by Soukoreff and MacKenzie for text entry
analysis [42]. In such an analysis, each character is
classified into one of four categories: “correct” (C), “fix”
(F), “incorrect fixed” (IF), and “incorrect not fixed” (INF).
DUI calculates different text entry metrics involving these
character categories that not only measure how often the
user made mistakes, but also how often they decided to
correct those mistakes. Other quantities that can be calculated include "utilized bandwidth" (i.e., the fraction of keystrokes that were correct, C / (C + INF + IF + F)) and "participant conscientiousness" (i.e., the fraction of mistakes corrected, IF / (IF + INF)).
At a lower level, DUI examines the mechanics of the user’s
typing through the touchscreen, accelerometer, and
gyroscope, similar to how Goel et al. [16] used those
sensors to compensate for typing errors that were made
while walking. DUI’s typing task uses a custom keyboard,
similar in appearance to the smartphone’s default keyboard,
which records the precise position and radius of each touch.
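Given per-touch positions from such a keyboard, per-keystroke offset features can be sketched as follows; the data structures and key geometry here are hypothetical, not DUI's internal representation:

```python
import math

def touch_offset_features(touches, key_centers):
    """Offsets between each touch point and the center of the key it
    selected. `touches` is a list of (key, x, y) tuples and
    `key_centers` maps key -> (center_x, center_y), both in pixels.
    """
    dists = []
    for key, x, y in touches:
        cx, cy = key_centers[key]
        dists.append(math.hypot(x - cx, y - cy))  # Euclidean distance
    mean = sum(dists) / len(dists)
    variance = sum((d - mean) ** 2 for d in dists) / len(dists)
    return {"mean_offset": mean, "offset_variance": variance}

centers = {"a": (10.0, 50.0), "b": (30.0, 50.0)}
feats = touch_offset_features(
    [("a", 13.0, 54.0), ("b", 30.0, 50.0)], centers)
print(feats)
```

Larger and more variable offsets would be consistent with degraded fine motor control, which is the signal this task is designed to capture.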
From this data, DUI calculates features like the Euclidean
distance between the center of the selected key and the
user’s touch position. Motion sensor features include the
peak acceleration before a touch and variation in phone
orientation during the task. One interesting hypothesis
within this task is that people could have different reactions
to mistakes that could be detected through sensor data. If a
person is drunk, they could overreact to the mistake and