Hi! I'm Florian, a joint PhD candidate
at the University of Glasgow / University of Edinburgh.

My research spans HCI, Security, Privacy, and VR.

About Me


As of October 2019, I am a PhD candidate in the Glasgow Interactive SysTems group (GIST) at the University of Glasgow, UK, and a member of the Technology Usability Lab in Privacy and Security (TULiPS) at the University of Edinburgh, UK. My research focuses on Human-Computer Interaction (HCI) and Human-Centred Security (HCS).

I hold a bachelor's degree in Media Informatics and Human-Computer Interaction (first class, GPA: 4.0) and a master's degree in Human-Computer Interaction (first class, GPA: 4.0) from LMU Munich, Germany.



GazeWheels: Comparing Dwell-time Feedback and Methods for Gaze Input

Misahael Fernández Moncayo, Florian Mathis, Mohamed Khamis
In Proceedings of the Nordic forum for Human-Computer Interaction (NordiCHI), 14.7% acceptance rate
Tallinn, Estonia, October 2020 (NordiCHI 2020)

We present an evaluation and comparison of GazeWheels: techniques for dwell-time gaze input and feedback. In GazeWheels, visual feedback is shown to the user in the form of a wheel that fills up; when completely filled, a selection is made where the user is gazing. We compare three methods for responding to the user gazing away from the target: Resetting GazeWheel, Pause-and-Resume GazeWheel, and Infinite GazeWheel. We also compare the position of the GazeWheel; Co-located Feedback: shown on the target being gazed at, and Remote Feedback: shown at the top of the interface. To this end, we report on results of a user study (N=19) that investigates the benefits and drawbacks of each method at different dwell times: 500 ms, 800 ms, and 1000 ms. Results show that Infinite GazeWheel and Pause-and-Resume GazeWheel are more error prone but significantly faster than Resetting GazeWheel when using 800-1000 ms dwell time, even when including the time for correcting errors.

Assessing Social Text Placement in Mixed Reality TV
Best Late-Breaking-Work Award (top 7.14% of accepted submissions)

Florian Mathis, Xuesong Zhang, Mark McGill, Adalberto L. Simeone, Mohamed Khamis
In Proceedings of the International Conference on Interactive Media Experiences
Barcelona, Spain, June 2020 (IMX 2020)

TV experiences are often social, be it at-a-distance (through text) or in-person (through speech). Mixed Reality (MR) headsets offer new opportunities to enhance social communication during TV viewing by placing social artifacts (e.g. text) anywhere the viewer wishes, rather than being constrained to a smartphone or TV display. In this paper, we use VR as a test-bed to evaluate different text locations for MR TV specifically. We introduce the concepts of wall messages, below-screen messages, and egocentric messages in addition to state-of-the-art on-screen messages (i.e., subtitles) and controller messages (i.e., reading text messages on the mobile device) to convey messages to users during TV viewing experiences. Our results suggest that a) future MR systems that aim to improve viewers’ experience need to consider the integration of a communication channel that does not interfere with viewers’ primary task, that is watching TV, and b) independent of the location of text messages, users prefer to be in full control of them, especially when reading and responding to them. Our findings pave the way for further investigations towards social at-a-distance communication in Mixed Reality.

Augmenting TV Viewing using Acoustically Transparent Auditory Headsets

Mark McGill, Florian Mathis, Julie Williamson, Mohamed Khamis
In Proceedings of the International Conference on Interactive Media Experiences
Barcelona, Spain, June 2020 (IMX 2020)

This paper explores how acoustically transparent auditory headsets could be exploited to improve the TV viewing experience. We do so by intermixing headset and TV audio, facilitating personal, private auditory enhancements and augmentations of TV content whilst avoiding significant occlusion of the sounds of other viewers or altering their experiences. In a user study we evaluate the impact of synchronously mirroring select audio channels from the 5.1 mix (dialogue, environmental sounds, and the full mix), and selectively augmenting TV viewing with additional speech content (e.g. Audio Description, Directors Commentary, and Alternate Language). For TV content, auditory headsets enable better spatialization and more immersive, enjoyable viewing; the intermixing of TV and headset audio creates unique listening experiences; and private augmentations offer new ways to (re)watch content with others. Finally, we reflect on how these headsets might facilitate more immersive augmented TV viewing experiences within reach of consumers.

Shared and Synchronous Mixed Reality Experiences

Mark McGill, Florian Mathis, Stephen Brewster
In CHI 2020 Workshop on SocialVR 2020: Social Virtual Reality Workshop
Honolulu, Hawaiʻi, USA, April 2020 (CHI 2020)

Our position for the Social VR workshop is that the remit should be expanded to more broadly consider Mixed Reality (MR) -- (a)synchronous communication at-a-distance is not exclusively limited to visually-oriented telepresence delivered through VR HMDs. Rather, there is a space within which we might facilitate shared MR-driven experiences visually (using traditional AR/VR/MR headsets) and aurally (e.g. using auditory headsets), synchronously (same time) and asynchronously (different times), in the same place or at-a-distance, and asymmetrically (e.g. with mixed headset types), with a variety of permutations of these factors -- and perhaps the most impactful permutations may not be grounded in VR headset-driven experiences.

RubikAuth: Fast and Secure Authentication in Virtual Reality

Florian Mathis, John H Williamson, Kami Vaniea, Mohamed Khamis
In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems
Honolulu, Hawaiʻi, USA, April 2020 (CHI 2020)

There is a growing need for usable and secure authentication in virtual reality (VR). Established concepts (e.g., 2D graphical PINs) are vulnerable to observation attacks, and proposed alternatives are relatively slow. We present RubikAuth, a novel authentication scheme for VR where users authenticate quickly by selecting digits from a virtual 3D cube that is manipulated with a handheld controller. We report two studies comparing how pointing using gaze, head pose, and controller tapping impacts RubikAuth's usability and observation resistance under three realistic threat models. Entering a four-symbol RubikAuth password is fast: 1.69 s to 3.5 s using controller tapping, 2.35 s to 4.68 s using head pose, and 2.39 s to 4.92 s using gaze. It is also highly resilient to observation: 97.78% to 100% of observation attacks were unsuccessful. Our results suggest that providing attackers with support material contributes to more realistic security evaluations.

Knowledge-driven Biometric Authentication in Virtual Reality

Florian Mathis, Hassan Ismail Fawaz, Mohamed Khamis
In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems
Honolulu, Hawaiʻi, USA, April 2020 (CHI 2020)

With the increasing adoption of virtual reality (VR) in public spaces, protecting users from observation attacks is becoming essential to prevent attackers from accessing context-sensitive data or performing malicious payment transactions in VR. In this work, we propose RubikBiom, a knowledge-driven behavioural biometric authentication scheme for authentication in VR. We show that hand movement patterns performed during interactions with a knowledge-based authentication scheme (e.g., when entering a PIN) can be leveraged to establish an additional security layer. Based on a dataset gathered in a lab study with 23 participants, we show that knowledge-driven behavioural biometric authentication increases security in an unobtrusive way. We achieve an accuracy of up to 98.91% by applying a Fully Convolutional Network (FCN) on 32 authentications per subject. Our results pave the way for further investigations towards knowledge-driven behavioural biometric authentication in VR.

Privacy, Security and Safety Concerns of using HMDs in Public and Semi-Public Spaces

Florian Mathis, Mohamed Khamis
In Proceedings of the CHI 2019 Workshop on Challenges Using Head-Mounted Displays in Shared and Social Spaces
Glasgow, Scotland, UK, May 2019 (CHI 2019)

Head-Mounted Displays (HMDs) are increasingly used in public and semi-public spaces nowadays. However, this development comes with implications on the privacy, security, and safety of the HMD user. Based on prior work on interaction in public space, usable privacy and security, and Head-Mounted Displays, this position paper discusses the implications of HMD usage in public on the user’s privacy, security and safety. We provide examples of said threats and present potential solutions that are promising for future work.

Can Privacy-Aware Lifelogs Alter Our Memories?

Passant ElAgroudy, Mohamed Khamis, Florian Mathis, Diana Irmscher, Andreas Bulling, Albrecht Schmidt
In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
Glasgow, Scotland, UK, May 2019 (CHI 2019)

The abundance of automatically-triggered lifelogging cameras is a privacy threat to bystanders. Countering this by deleting photos limits relevant memory cues and the informative content of lifelogs. An alternative is to obfuscate bystanders, but it is not clear how this impacts the lifelogger's recall of memories. We report on a study in which we compare the effect on memory recall of viewing 1) unaltered photos, 2) photos with blurred people, and 3) a subset of the photos after deleting private ones. Findings show that obfuscated content helps users recall a lot of content, but it also results in recalling less accurate details, which can sometimes mislead the user. Our work informs the design of privacy-aware lifelogging systems that maximize recall, and steers discussion about ubiquitous technologies that could alter human memories.

The Bird is the Word: A Usability Evaluation of Emojis inside Text Passwords
Honourable Mention Award

Tobias Seitz, Florian Mathis, Heinrich Hussmann
In Proceedings of the 29th Australian Conference on Human-Computer Interaction
Brisbane, QLD, Australia, November 2017 (OzCHI 2017)

Passwords still represent an annoying burden for millions of Internet users. Helping people create memorable and secure credentials is therefore an important goal for web-service providers to satisfy user needs. Due to the good memorability of pictures, emojis may be a suitable tool to create memorable and secure passwords. These small pictograms have seen an enormous rise in recent years, but their usage in regular passwords has not been explored for the Web. In a two-part user study with 40 participants we investigated if and how emojis are suitable in this context. We asked users to create passwords that contained both regular alphanumeric characters and emojis. The study shows that users’ primary selection strategy was to create meaningful relationships between the emoji and the rest of the password. We also found that platform dependent renderings of emojis do not necessarily reduce usability, if the object represented by the emoji is distinctive enough. As websites are already starting to allow emojis in passwords, it is important to evaluate this step carefully. Our results can inform this decision and provide pointers to the usability implications.

For an up-to-date record, please also refer to my Google Scholar page.

Professional Services

I am an external reviewer for a variety of human-computer interaction conferences and journals, with a focus on usable security and privacy, and virtual reality (VR). For example, I have reviewed for ACM TVX 2019/IMX 2020, ACM EICS 2019, ACM IDC 2019/2020, ACM CHI 2019/2020, ACM ETRA 2020, IEEE VR 2020, ACM MobileHCI 2020, ACM UIST 2020, and ACM UbiComp 2020, in the research areas of Usable Security and Privacy, Virtual Reality, and User Experience and Usability, among others.

I received a special recognition award for providing outstanding reviews at CHI 2020.

Reviews 2021 (1 Journal)


Reviews 2020 (7 Conferences, 1 Journal)

  • CHI 2020
  • IEEE VR 2020
  • ETRA 2020
  • IDC 2020
  • IMX 2020
  • MobileHCI 2020
  • UbiComp 2020
  • UIST 2020

Reviews 2019 (4 Conferences)

  • CHI 2019
  • TVX 2019
  • IDC 2019
  • EICS 2019

Student Volunteering and Co-Organised Events (2019-2020)

  • [SV] NordiCHI 2020 (Tallinn, Estonia)
  • [SV] MobileHCI 2020 (Oldenburg, Germany)
  • [Co-Organiser] Scottish SICSA Pre-CHI (Glasgow, UK)
  • [SV] IMX 2020 (Barcelona, Spain)
  • [SV] TVX 2019 (Manchester, UK)

Teaching and Supervision

I am supervised by Dr. Mohamed Khamis (University of Glasgow) and Dr. Kami Vaniea (University of Edinburgh).

If you are a researcher or practitioner interested in a collaboration, or an undergraduate or postgraduate student interested in a research internship in Human-Computer Interaction (HCI), Usable Security and Privacy, or Virtual Reality (VR), please do not hesitate to get in touch: florian.mathis(at)glasgow.ac.uk.