Thesis/Project Final Defense Schedule

Join us as the School of STEM master’s degree candidates present their culminating thesis and project work. The schedule is updated throughout the quarter; check back for new defenses.

View previous quarter schedules

Select a master’s program to navigate to candidates:

Master of Science in Computer Science & Software Engineering

AUTUMN 2025

Thursday, November 20

JAMES TRUONG

Chair: Dr. Kelvin Sung
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; Join James Truong’s Online Defense
Project: Realistic Illumination in AR

Augmented Reality (AR) creates a unique user experience by superimposing digital content onto real-world environments. By applying realistic visual cues, AR objects can be more convincingly integrated into a user’s real-world environment, achieving higher immersion and believability. Unfortunately, accurately reproducing real-world illumination and spatial conditions remains technically challenging: real environments vary widely in complexity and visual conditions.

Two primary approaches exist to address this challenge: machine learning and image-based methods. Machine learning-based methods can simulate real-world illumination on AR objects by recognizing illumination patterns and features in images, but they are reliant on the quality, quantity, and diversity of their training data. In contrast, image-based approaches offer advantages in flexibility and usability, since a balance between image quality and hardware performance can still provide a satisfactory result. However, image-based approaches remain sensitive to hardware limitations and image quality, which can significantly affect performance and outcome.

This project focuses on the latter, emphasizing the development of a practical and accessible solution for realistic AR lighting on common consumer devices. The implemented system utilizes two mobile devices: one device gathers real-world environmental data, and the other establishes and renders the AR object. This configuration allows realistic illumination of AR objects in near real-time under dynamic indoor lighting conditions. The resulting system provides a practical and accessible solution for integrating digital objects into real indoor environments allowing them to appear consistent with real physical objects in terms of lighting, shadow, and reflection. However, the performance is limited in complex or rapidly changing environments where noise is more prominent.

Friday, November 21

SIQIAO YE

Chair: Dr. Michael Stiber
Candidate: Master of Science in Computer Science & Software Engineering
3:30 P.M.; Join Siqiao Ye’s Online Defense
Project: Incident Classification of Emergency Services Communication System (ESCS)

In the context of next-generation 911 (NG911), Emergency Services Communication Systems (ESCS) are becoming increasingly complex, requiring operators to answer calls and then categorize events based on the content of the conversation. This study explores metadata-based event classification methods and how their performance depends on the spatiotemporal structure of incoming calls, providing operators with an initial pre-classification before answering. We synthesized 911-like datasets using a graph-based clustering point process simulator, built a dual-input Long Short-Term Memory (LSTM) network classifier, and compared it with classic non-sequential baseline models. We generated three spatiotemporal separation modes: temporally overlapping but spatially separable; spatially overlapping but temporally separable; and simultaneously spatiotemporally overlapping. The results reveal clear patterns: when spatial cues are ambiguous, temporal memory becomes crucial and the LSTM model performs better. On a representative test set, the overall accuracy and macro F1 score typically exceed 85%, demonstrating robust performance and highlighting the value of sequence modeling in metadata-only ESCS classification.
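
A minimal sketch (PyTorch) of the kind of dual-input LSTM classifier described above; the branch layout, feature dimensions, sequence length, and class count are illustrative assumptions rather than the candidate’s exact model.

import torch
import torch.nn as nn

class DualInputLSTMClassifier(nn.Module):
    def __init__(self, seq_feat_dim=6, static_dim=4, hidden=64, n_classes=3):
        super().__init__()
        # Sequence branch: LSTM over the metadata of recent incoming calls
        self.lstm = nn.LSTM(seq_feat_dim, hidden, batch_first=True)
        # Static branch: features of the current call (e.g., location, time of day)
        self.static_mlp = nn.Sequential(nn.Linear(static_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden * 2, n_classes)

    def forward(self, call_sequence, static_features):
        _, (h_n, _) = self.lstm(call_sequence)   # last hidden state summarizes the history
        fused = torch.cat([h_n[-1], self.static_mlp(static_features)], dim=-1)
        return self.head(fused)                  # logits over incident classes

model = DualInputLSTMClassifier()
logits = model(torch.randn(8, 20, 6), torch.randn(8, 4))  # 8 synthetic calls, 20-step history
print(logits.shape)  # torch.Size([8, 3])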

Tuesday, November 25

POULAMI DAS GHOSH

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; Join Poulami Das Ghosh’s Online Defense
Project: Visual Question Answering System Using Gated Bidirectional Cross-Attention

This capstone project presents a cost-effective and reliable Visual Question Answering (VQA) system designed to improve accessibility for visually impaired users. VQA is a multimodal AI task that requires joint reasoning over image and text inputs to answer open-ended questions. The delivered solution leverages a novel gated bidirectional Cross-Attention mechanism that integrates BERT-based text embeddings with spatial and global image representations from ResNet-50 to generate answers. In bidirectional Cross-Attention, the text attends to visual features and vice versa, enabling a richer multimodal understanding. Meanwhile, the gating mechanism lets the model filter out irrelevant information by regulating the flow of data between attended and original features, thereby reducing the overall computational overhead of the VQA system. Using this approach, we achieved an accuracy of 58% on the VQA v2 validation set, a 10% improvement over the hierarchical co-attention-based baseline work. Furthermore, the system is deployed as a user-friendly and intuitive web application that supports both voice and text input/output for enhanced usability.
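
A minimal sketch (PyTorch) of a gated bidirectional cross-attention block in the spirit described above, assuming illustrative dimensions (BERT-style 768-d text tokens and ResNet-50 grid features projected to the same width); it is not the candidate’s exact architecture.

import torch
import torch.nn as nn

class GatedBiCrossAttention(nn.Module):
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.txt2img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img2txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Gates regulate how much attended information replaces the original features
        self.gate_t = nn.Sequential(nn.Linear(dim * 2, dim), nn.Sigmoid())
        self.gate_v = nn.Sequential(nn.Linear(dim * 2, dim), nn.Sigmoid())

    def forward(self, text, image):
        t_att, _ = self.txt2img(text, image, image)   # text attends to visual features
        v_att, _ = self.img2txt(image, text, text)    # image attends to text features
        g_t = self.gate_t(torch.cat([text, t_att], dim=-1))
        g_v = self.gate_v(torch.cat([image, v_att], dim=-1))
        return g_t * t_att + (1 - g_t) * text, g_v * v_att + (1 - g_v) * image

block = GatedBiCrossAttention()
text = torch.randn(2, 20, 768)    # 20 question tokens
image = torch.randn(2, 49, 768)   # 7x7 grid of projected ResNet-50 features
fused_text, fused_image = block(text, image)
print(fused_text.shape, fused_image.shape)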

Wednesday, November 26

DHURKA ROHINI MADASAMY

Chair: Dr. Wooyoung Kim
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; Join Dhurka Rohini Madasamy’s Online Defense
Thesis: Predictive Robustness and Biomarker Discovery in Rare Cancers: An Interpretable Machine Learning Approach

Rare cancers such as Glioblastoma Multiforme (GBM, a rare brain cancer) pose persistent challenges in computational oncology due to severe data scarcity, biological noise, and the difficulty of identifying disease-specific molecular signatures. This thesis develops an interpretable machine learning framework to evaluate the predictive robustness of GBM gene expression signals and to isolate biologically meaningful biomarkers under extreme class imbalance. Across four classification settings and multiple resampling strategies, the models achieve consistently strong performance (F1 and MCC values often above 0.95), demonstrating that the GBM transcriptomic signature remains highly separable even under challenging experimental conditions. To refine this signal, a novel Cascade Learning approach systematically removes common cancer pathways and reveals biomarkers specific to the rare cancer. SHAP-based interpretability identifies a stable set of genes aligned with known glioma biology, and a complementary Tab2Image analysis offers visual validation of class separability through structured image-based embeddings. Alongside these methodological results, the thesis also addresses ethical considerations inherent to rare disease research by emphasizing fairness, representation, and accountability when working with structurally imbalanced datasets. Overall, the study provides a robust, transparent, and ethically grounded pathway for biomarker discovery in rare cancers.
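
A minimal sketch (Python, shap and scikit-learn) of the kind of SHAP-based gene ranking described above; the classifier, data, and shapes are synthetic and illustrative, not the thesis pipeline.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))              # 200 samples x 50 "genes"
y = (X[:, 3] + X[:, 17] > 0).astype(int)    # signal carried by two genes

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Depending on the shap version this is a list (one array per class) or a 3-D array
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Mean absolute SHAP value per gene gives an importance ranking
importance = np.abs(sv).mean(axis=0)
print("Top candidate biomarker indices:", np.argsort(importance)[::-1][:10])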


ARAVIND TALLAPRAGADA

Chair: Dr. Erika Parsons
Candidate: Master of Science in Computer Science & Software Engineering
3:30 P.M.; Join Aravind Tallapragada’s Online Defense
Project: Fish Species Identification System for Enhanced Fishing Experience

The goal of this project is to research and lay the foundations for an ML-based fish species identification system that assists recreational anglers while supporting marine research and species preservation in Washington State. The system employs a ResNet-50 convolutional neural network fine-tuned on 2291 images of eight fish species commonly found in Washington’s coastal waters. The project explores several experimental setups and configurations, leading to a 94.25% classification accuracy.
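
A minimal sketch (PyTorch/torchvision) of a standard ResNet-50 fine-tuning setup for an eight-class problem like the one described above; the frozen layers, hyperparameters, and dummy batch are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 8
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)   # replace the classifier head

# Freeze the earliest layers and fine-tune the rest
for name, param in model.named_parameters():
    param.requires_grad = not name.startswith(("conv1", "bn1", "layer1"))

optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_SPECIES, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()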

The model is integrated into a full-stack web application that aims to enable users to upload fish images and receive prompt identification (under 5 seconds). The system also captures and stores environmental metadata including location, timestamp, and historical weather conditions via the OpenWeatherMap API. Species-specific information is retrieved through the iNaturalist API, while conservation status is provided via the IUCN Red List API.

By providing a platform for accessible species identification along with taxonomic and environmental context, this system aims to deliver a practical tool for recreational fishers and an infrastructure for citizen science contributions to ecological monitoring. This project demonstrates the practical application of machine learning and cloud technologies to create a functional application that supports ecological conservation research while serving as a tool for more conscientious recreational fishing.

Monday, December 1

NICOLAS POSEY

Chair: Dr. Michael Stiber
Candidate: Master of Science in Computer Science & Software Engineering
8:45 A.M.; Join Nicolas Posey’s Online Defense
Project: Implementing a GPU Model for Simulating Emergency Services Communication Systems

Graphitti is a general-purpose, high-performance, graph-based simulator designed to significantly speed up the simulation of network event models, such as biological neural simulations, using NVIDIA GPUs. Graphitti also provides an architecture designed to allow for the validation of GPU simulation results against single-threaded CPU simulation results. This enables researchers and other users of Graphitti to be confident in the correctness of their parallel implementations. Previous work at the University of Washington Bothell’s Intelligent Networks Laboratory (INL) succeeded in creating and validating a single-threaded CPU implementation of a Next Generation 911 (NG911) Emergency Services Communication System (ESCS) model. This Master’s capstone project builds on that work by creating an NG911 GPU implementation to support the simulation of larger networks. Two kernels, advance911VerticesDevice and maybeTakeCallFromEdge, were implemented using the CUDA programming language to parallelize the most computationally intensive components of the NG911 model. The runtimes of these kernels and of a simulation consisting of 1932 vertices, 34,119 calls, and 288,000 timesteps were measured to assess the performance of the implementation. Runtimes of the GPU implementation were slower than those of the CPU implementation: 2.63x for advance911VerticesDevice, 1.84x for maybeTakeCallFromEdge, and 2.08x for the total simulation time. Future work will be needed to achieve performance improvements for the GPU implementation by examining possible causes such as the complexity of the advance911VerticesDevice kernel and the inability to keep that kernel sufficiently busy during the simulation. Because this is the first instance in the INL of creating a GPU implementation from an existing CPU implementation, the difficulty of the creation process using the existing architecture is analyzed to understand whether architectural changes should be made to improve this process for future models. A normalized functional code change (NFCC) metric is computed to measure this difficulty. An NFCC value of 0.1359 across all files changed during the GPU implementation demonstrates that the process for creating a GPU implementation from an existing CPU implementation is not difficult.


JEREMY NEWTON

Chair: Dr. Wooyoung Kim
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; UW2 (Commons Hall) Room 327
Thesis: Interpretable Machine Learning for Biomarker Identification in RNA Seq Cancer Data

Existing research on RNA Seq gene expression biomarkers has provided various methods to select a small list of genes as cancer biomarkers from a large volume of gene expression data. Previous methods for identifying potential gene expression cancer biomarkers have focused on statistical analysis, but other methods have incorporated machine learning, often including Interpretable Machine Learning (iML) techniques. On 16 cancer type cohorts of TCGA data, we used inherently interpretable machine learning models (Logistic Regression, Random Forest, and Linear Support Vector Machine) to narrow down lists of potential biomarker genes using the trained models’ feature importance rankings. We subsequently applied the model-agnostic iML techniques Shapley Additive Explanations (SHAP) and Permutation Importance to narrow down the subsets even further. We compared classification performance between machine learning models trained on features selected by these methods, features selected by statistical methods, and biomarkers from external research. We cross-checked the highly scored potential biomarkers against biomedical annotations and gene pathway analysis. We found that these biomarker selection methods lead to comparable or better classification performance on these datasets than biomarkers from outside research or from statistical analysis alone. Mutual Information estimation was a surprisingly useful technique for initial feature selection, and iML techniques improved the selection of short lists of biomarkers for classification.
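
A minimal sketch (scikit-learn) of the two-stage selection idea described above, mutual-information screening followed by permutation importance on an interpretable model; the data are synthetic and all sizes are illustrative.

import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 500))               # 300 samples x 500 genes
y = (X[:, 42] - X[:, 7] > 0).astype(int)

# Stage 1: mutual information keeps the 50 most informative genes
mi = mutual_info_classif(X, y, random_state=1)
keep = np.argsort(mi)[::-1][:50]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], y, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Stage 2: permutation importance on held-out data refines the shortlist
pi = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=1)
shortlist = keep[np.argsort(pi.importances_mean)[::-1][:10]]
print("Candidate biomarker indices:", shortlist)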

Tuesday, December 2

AKBARBEK RAKHMATULLAEV

Chair: Dr. Munehiro Fukuda
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; UW2 (Commons Hall) Room 327
Thesis: IVF Semantical: Agent-Based Implementation of Vector Search on GPU

Vector search plays a crucial role in large-scale similarity search applications, with IVF (Inverted File Index) being a widely used indexing method due to its balance between accuracy and efficiency. However, traditional vector search algorithms that use IVF as an indexing method, such as IVF Flat and IVFPQ, yield results by brute-force searching within each cluster/list. This thesis introduces a new IVF-based vector search algorithm, called IVF Semantical, which searches within each cluster/list through a different arrangement of the data and traversal using binary search. To accelerate the development phase, the author used MASS CUDA to handle the searching part, which allowed the implementation to stay at a higher level of abstraction. We evaluated IVF Semantical, implemented for GPUs using MASS CUDA, against two other algorithms, IVF Flat and IVFPQ, demonstrating the significant speed efficiency of the approach. The findings suggest IVF Semantical can make vector search more efficient and robust on infrastructure that requires immediate response, such as navigation systems or robots.
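
For context, a minimal sketch (NumPy/scikit-learn) of the IVF Flat baseline the thesis compares against: a coarse k-means quantizer assigns vectors to inverted lists, and a query is answered by brute-force search within the closest lists. The within-list data arrangement and binary-search traversal that distinguish IVF Semantical are not shown; dimensions and list counts are illustrative.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
base = rng.normal(size=(10_000, 64)).astype(np.float32)

n_lists, n_probe = 64, 4
kmeans = KMeans(n_clusters=n_lists, n_init=10, random_state=0).fit(base)
inverted_lists = {c: np.where(kmeans.labels_ == c)[0] for c in range(n_lists)}

def ivf_flat_search(query, k=5):
    # Probe the n_probe closest lists, then brute-force within them
    dist_to_centroids = np.linalg.norm(kmeans.cluster_centers_ - query, axis=1)
    candidates = np.concatenate(
        [inverted_lists[c] for c in np.argsort(dist_to_centroids)[:n_probe]])
    dists = np.linalg.norm(base[candidates] - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

print(ivf_flat_search(rng.normal(size=64).astype(np.float32)))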


SARAH MARTEL

Chair: Dr. Annuska Zolyomi
Candidate: Master of Science in Computer Science & Software Engineering
3:30 P.M.; Join Sarah Martel’s Online Defense
Project: Sail the Ship, Steer the Self: Using Social Accountability and Gamification for Goal Attainment in Users with Attention Difficulties

People with attention difficulties, such as those with Attention-Deficit/Hyperactivity Disorder (ADHD), often face challenges achieving their goals because they struggle to break goals into tasks, overcome procrastination, and plan for and remember important deadlines.

This project explores a novel approach for motivating people with attention difficulties to set and achieve goals using two motivating factors identified in the literature: social accountability and gamification. Social accountability takes the form of a body-doubling simulator, while gamification comes from a visual journey map of the user’s goals and milestones that makes progress more tangible.

In the first stage of a user-centered design process, a prototype of the app was designed using the design tool, Figma. Usability testing and interviews with four users who self-identified as having attention difficulties revealed key findings regarding engagement and ease of use. The findings revealed the importance of gamification and novelty in users with attention difficulties, the strength of peer accountability, and the importance of a clear, distraction-free UI. Building on these insights, an Android app, “Sail the Ship”, was developed. This second stage, where six participants used the app, showed that peer accountability through body doubling increased task initiation and feelings of social pressure for most users. Additionally, the journey map supported engagement through milestone completion and gamification, with participants requesting even more gamified features. This work contributes a novel app design that helps people with attention difficulties venture forward and achieve their goals.

Wednesday, December 3

MARINA ROSENWALD

Chair: Dr. Michael Stiber
Candidate: Master of Science in Computer Science & Software Engineering
8:45 A.M.; Join Marina Rosenwald’s Online Defense
Thesis: Application of Graph Neural Networks in Spike-Time-Dependent-Plasticity Graphitti Simulations

Spike-timing-dependent plasticity (STDP) is a core mechanism that drives how biological neural networks’ connectivity changes over time, but understanding the structural changes for large networks is still difficult. Graphitti, the simulator used in the Intelligent Networks Laboratory, can generate large, densely-interconnected STDP networks, yet interpreting what emerges from those simulations is not straightforward. This thesis uses Graph Neural Networks (GNNs) to analyze these networks and to uncover patterns that are hard to see through traditional tools.

The main goal of this work is to apply different GNN architectures, most notably Graph Attention Networks (GATs), to Graphitti-generated networks in order to identify key structural and behavioral changes. The focus of this thesis is to identify key neurons as STDP unfolds. By treating each simulation as a series of graphs over time, we can explore how connectivity shifts, which nodes receive higher attention weights in a GNN, and whether these “key neurons” lead to the expected graph structure. Alongside this analysis, an equally important objective is to improve our understanding and implementation of GNNs for large, fully connected neural graphs. We document the full workflow, challenges, and model choices, and contribute these findings for future work.
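
A minimal sketch (PyTorch Geometric) of how attention weights from a GAT layer can be read out to flag potentially influential nodes; the node features, synthetic connectivity, and the simple incoming-attention score are illustrative assumptions, not the thesis pipeline.

import torch
from torch_geometric.nn import GATConv

num_neurons, in_dim = 100, 8
x = torch.randn(num_neurons, in_dim)                   # per-neuron features
edge_index = torch.randint(0, num_neurons, (2, 1000))  # synthetic connectivity

gat = GATConv(in_dim, 16, heads=4)
out, (edges, alpha) = gat(x, edge_index, return_attention_weights=True)

# Sum incoming attention per target neuron as a crude "influence" score
influence = torch.zeros(num_neurons)
influence.index_add_(0, edges[1], alpha.detach().mean(dim=1))
print("Most-attended neurons:", influence.topk(5).indices.tolist())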

Overall, this thesis links large scale STDP simulation with modern graph learning methods, using attention mechanisms to highlight influential nodes and offering new tools and insights for analyzing complex neural systems.


COLLEEN LEMAK

Chair: Dr. Geethapriya Thamilarasu
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; DISC (Discovery Hall) Room 464
Thesis: Hybrid Static-Dynamic Feature-Weighted Analysis for IoT Botnet Malware Detection

As the Internet of Things (IoT) domain continues to evolve, IoT devices face escalating security challenges. Recent waves of IoT botnets have exploited device vulnerabilities to launch dangerous large-scale Distributed Denial of Service (DDoS) attacks from compromised, resource-constrained devices. These networks of infected devices pose a unique threat to modern infrastructure, placing homes, schools, medical facilities, and transportation systems at heightened risk of malicious exploitation.

This paper proposes a novel hybrid framework that combines static and dynamic analysis techniques for IoT botnet malware detection without relying on complex Machine Learning (ML) models. By extracting and weighing the importance of key features from malware binaries based on their relevance to DDoS behavior, the framework maintains statistical adaptability to observed data while avoiding large memory usage and opaque black-box decision processes common in ML. Designed for interpretability and efficiency, this malware detection framework bridges code-level structure and runtime behavior, offering a transparent and practical botnet detection strategy for diverse resource-constrained IoT ecosystems.
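
A minimal sketch of a feature-weighted scoring rule in the spirit described above; the feature names, weights, and threshold are purely illustrative assumptions, not the candidate’s actual framework.

STATIC_WEIGHTS = {"contains_flood_strings": 0.30, "suspicious_imports": 0.20}
DYNAMIC_WEIGHTS = {"outbound_conn_rate": 0.35, "cnc_beaconing": 0.15}
THRESHOLD = 0.5

def botnet_score(static_features: dict, dynamic_features: dict) -> float:
    # Weighted sum of static (binary-level) and dynamic (runtime) indicators
    score = sum(w * static_features.get(name, 0.0) for name, w in STATIC_WEIGHTS.items())
    score += sum(w * dynamic_features.get(name, 0.0) for name, w in DYNAMIC_WEIGHTS.items())
    return score

sample = botnet_score({"contains_flood_strings": 1.0, "suspicious_imports": 0.5},
                      {"outbound_conn_rate": 0.9, "cnc_beaconing": 0.0})
print(sample, "-> flagged" if sample >= THRESHOLD else "-> benign")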


LOGAN CHOI

Chair: Dr. Wooyoung Kim
Candidate: Master of Science in Computer Science & Software Engineering
1:15 P.M.; Join Logan Choi’s Online Defense
Thesis: Benchmarking TenSEAL’s Homomorphic Encryption Through Predicting Encrypted RNA Sequencing Data

This study addresses the growing need to protect sensitive healthcare data as digital technologies and cloud-based analytics become integral to modern medical research and care delivery. Healthcare data, such as clinical or genomic information, holds immense potential to enhance disease understanding and improve diagnostics through machine learning models; however, adopting third-party cloud technologies increases the risks of data breaches and noncompliance with regulations such as the Health Insurance Portability and Accountability Act (HIPAA). To address these concerns, this research investigates homomorphic encryption, a cryptographic method that allows computations on encrypted data without exposing sensitive information. The study benchmarks the TenSEAL library to evaluate its performance in encrypting healthcare test datasets and executing predictions through a pre-trained machine learning model, while also evaluating memory utilization and encryption time. The findings show that TenSEAL’s CKKS encryption scheme effectively enables data encryption and secure machine learning inference on genomic datasets for breast, lung, and prostate cancers, achieving an average accuracy of 90% across all datasets. On the other hand, our results also highlight a key trade-off: as encryption strength and dataset size increase, computational overhead rises sharply. Thus, medical professionals and data scientists must carefully balance the need for security with the practical deployment in real-world healthcare systems.
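
A minimal sketch (TenSEAL) of CKKS-encrypted inference on a toy linear model; the encryption parameters, model weights, and input are illustrative and much simpler than the study’s benchmark setup.

import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

weights, bias = [0.12, -0.7, 0.05, 0.3], 0.1   # pre-trained (plaintext) toy model
sample = [2.1, 0.4, 5.6, 1.2]                  # one patient's expression values

enc_sample = ts.ckks_vector(context, sample)   # encrypt the sensitive input
enc_score = enc_sample.dot(weights)            # inference runs on encrypted data
print(enc_score.decrypt()[0] + bias)           # only the key holder can decrypt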


KAMERON VUONG

Chair: Dr. David Socha
Candidate: Master of Science in Computer Science & Software Engineering
3:30 P.M.; Join Kameron Vuong’s Online Defense
Project: Understanding Software Engineering Principles by Evolving Luna mHealth’s Support for PowerPoint

Luna mHealth is a mobile health education platform designed to deliver culturally relevant, offline-accessible health education content to resource-constrained communities. Luna mHealth is composed of two parts: Luna authoring system and Luna mobile. Luna authoring system is a desktop application that converts PowerPoint-based health content into structured, interactive learning modules called Luna content modules. Luna mobile is a mobile application that readers use to view Luna content modules on their mobile phone without the need for an active Internet connection.

In this project, I evolved Luna mHealth’s support for the core PowerPoint features needed for Luna, such as connection shapes (aka lines), textbox shapes, picture shapes, hyperlinks, and placeholder shapes from slide layout templates. For Luna mobile, I evolved the rendering logic to render the extracted text and images, ensuring that visuals appear consistently and accurately, and extended the navigation system to support inter-page hyperlinks, allowing readers to navigate to different pages of a Luna content module. I also did enough work to demonstrate that we could extend this system to support LibreOffice Impress presentation files.
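
For illustration only, a minimal sketch (python-pptx, not the Luna authoring system’s actual implementation) of the kind of shape, text, and hyperlink extraction described above; the file name is a placeholder.

from pptx import Presentation
from pptx.enum.shapes import MSO_SHAPE_TYPE

prs = Presentation("health_module.pptx")       # hypothetical input deck
for slide_no, slide in enumerate(prs.slides, start=1):
    for shape in slide.shapes:
        if shape.shape_type == MSO_SHAPE_TYPE.PICTURE:
            print(slide_no, "picture:", shape.name)
        elif shape.has_text_frame:
            for para in shape.text_frame.paragraphs:
                for run in para.runs:
                    link = run.hyperlink.address   # None if the run has no hyperlink
                    print(slide_no, "text:", run.text, "| link:", link or "-")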

Through practices like Test-Driven Development and principles like Single Responsibility and You Aren’t Gonna Need It, I ensured the code I contributed to the Luna mHealth’s codebase was clean, simple, and maintainable.


WENTAO GAO

Chair: Dr. Erika Parsons
Candidate: Master of Science in Computer Science & Software Engineering
5:45 P.M.; Join Wentao Gao’s Online Defense
Project: Virtual Reality and sEMG Integration for Phantom Limb Pain Therapy

Phantom limb pain (PLP) affects a substantial proportion of amputees and remains challenging to treat despite decades of research. Virtual reality (VR) has emerged as a promising non-pharmacological intervention, offering immersive visual feedback and controlled movement experiences. However, existing VR systems face significant barriers to widespread adoption to treat PLP: high costs, limited adaptability to individual differences due to reliance on pre-trained models and fixed control schemes, and restriction to specific upper or lower limb applications. Systematic reviews highlight the need for more affordable, customizable, and versatile rehabilitation technologies.

This project presents the design and preliminary evaluation of a novel, low-cost VR system capable of supporting both below-elbow and below-knee amputation scenarios through a self-calibrating machine learning pipeline. The system integrates consumer-grade VR hardware (Meta Quest 2), a surface electromyography (sEMG) armband (8 channels total), and AprilTag visual markers to provide real-time, personalized limb control in immersive game environments. For limb control, the system classifies three discrete gestures (clench, extension, rest) combined with positional tracking and rotation state, which drive a rigged virtual limb model through animation blending to achieve naturalistic movement suitable for amputee rehabilitation. The system features an automated, guided data collection and training workflow that allows users to record EMG signatures and train accurate custom Random Forest gesture recognition models in approximately 5 minutes, without requiring technical expertise or pre-trained models.
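
A minimal sketch (scikit-learn) of gesture classification from windowed 8-channel sEMG with simple per-channel features; the window length, feature set, and synthetic data are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

GESTURES = {0: "rest", 1: "clench", 2: "extension"}

def window_features(window: np.ndarray) -> np.ndarray:
    # window: (samples, 8 channels) -> mean absolute value and RMS per channel
    mav = np.abs(window).mean(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    return np.concatenate([mav, rms])

rng = np.random.default_rng(0)
windows = rng.normal(size=(300, 200, 8))       # 300 synthetic EMG windows
labels = rng.integers(0, 3, size=300)
X = np.array([window_features(w) for w in windows])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(GESTURES[int(clf.predict(X[:1])[0])])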

System feasibility is validated through a pilot study with able-bodied participants who complete both upper and lower limb training sessions and interact with purpose-designed VR games. Preliminary results demonstrate successful gesture classification (mean accuracy >85%), acceptable system latency (<50ms end-to-end), and positive user experience metrics. The system addresses a critical gap in accessible rehabilitation technology by providing a single platform adaptable to multiple amputation types while maintaining low implementation costs.

Thursday, December 4

YU WEN

Chair: Dr. Annuska Zolyomi
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; Join Yu Wen’s Online Defense
Project: ConverSwim: A Visual and Emotion-Aware Platform to Support Reflective Conversation for Neurodivergent Users

Neurodivergent individuals—including those with autism and ADHD—often face challenges in conversations, such as interpreting social cues, recognizing emotional signals, and maintaining mutual understanding. These difficulties can hinder effective communication and self-reflection. To address these issues, we developed ConverSwim, a web-based platform that supports users in reflecting on conversational dynamics through an integrated and interactive canvas. This canvas combines real-time chat, NLP-driven emotion detection, and a structured swimlane diagram that chronologically maps each user’s internal emotional state and external verbal expression. Swimlanes within the canvas visualize the conversation across five columns: each participant has two parallel lanes—Inside (representing emotions and internal states) and Outside (representing spoken or written expressions)—while the central Common Ground lane captures moments of shared understanding as they emerge. By unifying interaction, emotional analysis, and visual structure in a single space, ConverSwim is designed to enable users to explore communication patterns and observe emotional shifts as conversations unfold. The platform includes a step-by-step tutorial to guide users through its features and reduce cognitive load. Through iterative, user-centered design, ConverSwim promotes emotional awareness, improves conversational reflection, and facilitates more empathetic, collaborative dialogue. Preliminary user sessions with neurodivergent participants evaluate usability, emotional awareness, and collaborative reflection. Early feedback indicates that participants found the system intuitive to use, helpful for noticing emotional cues, and supportive of reflecting on conversational flow.


BREANNA POWELL

Chair: Dr. Afra Mashhadi
Candidate: Master of Science in Computer Science & Software Engineering
1:15 P.M.; DISC (Discovery Hall) Room 464
Project: Accelerating Climate Forecasts: Vision Transformer Models for Arctic Sea Ice Concentration Prediction with Novel Tokenization Techniques

Reliable forecasting of Arctic Sea Ice Concentration (SIC) is critical for safe shipping, sustainable settlement planning for indigenous communities, and effective wildlife conservation. Yet, accurate SIC prediction remains difficult: observational data are sparse, models often fail at extremes, and climate science depends heavily on computationally expensive physics-based simulations.

This work introduces IceForecastTransformer, a Vision Transformer–based architecture trained on synthetic data from the U.S. Department of Energy’s Energy Exascale Earth System Model (E3SM) Version 3. By incorporating Freeboard—a satellite-observable metric derived from ice and snow thickness—alongside flexible tokenization and positional encoding strategies, the model provides a possible bridge between simulation outputs and real-world observations. IceForecastTransformer is trained on 163 years of monthly SIC and Freeboard data, achieving 0.16 RMSE and 0.32 JSD for 12-month SIC forecasts, and 0.84 F1 Score with 0.98 AUC for Sea Ice Extent (SIE), all with training times under 15 minutes.
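
A minimal sketch (PyTorch) of the patch-tokenization step a Vision Transformer applies to gridded inputs; the two-channel layout (SIC plus Freeboard), grid size, and patch size are illustrative assumptions about the kind of data described above, not the IceForecastTransformer configuration.

import torch
import torch.nn as nn

patch, d_model = 8, 128
fields = torch.randn(1, 2, 64, 64)   # two channels: sea-ice concentration and Freeboard

# A strided convolution turns each 8x8 patch into one token embedding
to_tokens = nn.Conv2d(2, d_model, kernel_size=patch, stride=patch)
tokens = to_tokens(fields).flatten(2).transpose(1, 2)   # (1, 64 tokens, 128)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
print(encoder(tokens).shape)   # torch.Size([1, 64, 128])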

These results demonstrate that transformer-based models can efficiently scale down complex Earth system data while maintaining high accuracy, offering a practical complement to physics-based simulations. The approach provides a pathway toward operational Arctic forecasting, with potential for fine-tuning on satellite data and broader applications in climate research and planning.

Friday, December 5

JONATHAN CHO

Chair: Dr. Wooyoung Kim
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; Join Jonathan Cho’s Online Defense
Project: Interpretable Machine Learning Studio Platform Web Application

Machine learning models are often treated as black boxes, creating challenges in domains such as healthcare where interpretability is critical. This project enhanced a Machine Learning Studio Platform (MLSP) originally developed at the University of Washington by improving reliability, expanding supervised and unsupervised model support, and redesigning the interface for usability. Interpretability methods including SHapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), Partial Dependence Plot (PDP), Permutation Feature Importance (PFI), and Exploratory Data Analysis (EDA) were integrated and tested on benchmark datasets such as the Breast Cancer Wisconsin dataset and others of varying size and complexity. The enhanced system produced consistent metrics and visual explanations, enabling users to evaluate models more effectively and approach results with greater trust and understanding. These contributions demonstrate how interpretability features can make machine learning workflows more transparent and accessible for students, researchers, and healthcare professionals.
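
A minimal sketch (scikit-learn and lime) of one interpretability path mentioned above, a LIME explanation on the Breast Cancer Wisconsin dataset; the model choice and parameters are illustrative, not the MLSP’s configuration.

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

explainer = LimeTabularExplainer(X_tr, feature_names=list(data.feature_names),
                                 class_names=list(data.target_names), mode="classification")
explanation = explainer.explain_instance(X_te[0], clf.predict_proba, num_features=5)
print(explanation.as_list())   # top local features behind this single prediction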


XIU ZHOU

Chair: Dr. Geethapriya Thamilarasu
Candidate: Master of Science in Computer Science & Software Engineering
1:15 P.M.; DISC (Discovery Hall) Room 464
Project: Privacy Information Detection Using Imagined Speech EEG-to-Text Analysis

Brain–Computer Interface (BCI) devices are rapidly entering consumer markets and offer several benefits, such as hands-free interaction and improved accessibility. However, their passive recording capability introduces a severe privacy threat, as malicious actors could intercept neural signals and apply machine learning techniques to infer sensitive information. Among neural modalities, imagined speech—the internal process of thinking words or characters without vocalization—is especially concerning, as it may reveal private content such as passwords or Personal Identification Numbers (PINs).

To assess this risk, we investigate whether passwords can be decoded purely from imagined-speech EEG without prior knowledge of candidates. Moving beyond prior work limited to classifying among known passwords, we propose a novel hierarchical Binary-Digit-Letter (BDL) system for character-level reconstruction. Using a commercial headset, our method achieves an unprecedented end-to-end accuracy exceeding 60% on the full alphanumeric set. This performance markedly reduces the effective entropy of real-world passwords (e.g., from 62⁸ to roughly 2⁸ candidates for an 8-character password).

We further present a systematic comparison of two deep neural architectures (CNN+BiLSTM and CNN+Transformer) across four experimental conditions to evaluate their password decoding performance and robustness. The results offer strong evidence that sensitive information can be extracted from passively collected EEG and establish key architectural baselines for future security analyses and privacy-preserving BCI design.
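
A minimal sketch (PyTorch) of a CNN+BiLSTM classifier of the kind compared above; the EEG channel count, window length, and class count (36 case-insensitive alphanumeric characters) are illustrative assumptions.

import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, channels=14, n_classes=36, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2))
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(hidden * 2, n_classes)

    def forward(self, eeg):                     # eeg: (batch, channels, samples)
        feats = self.cnn(eeg).transpose(1, 2)   # (batch, time, 32)
        _, (h_n, _) = self.lstm(feats)
        return self.head(torch.cat([h_n[-2], h_n[-1]], dim=-1))

model = CNNBiLSTM()
print(model(torch.randn(4, 14, 256)).shape)     # torch.Size([4, 36])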


NEELIMA CHOWDHURY

Chair: Dr. Annuska Zolyomi
Candidate: Master of Science in Computer Science & Software Engineering
3:30 P.M.; Join Neelima Chowdhury’s Online Defense
Project: HealthMate: Designing a Senior Centered Digital Twin App for Health Monitoring and Predictive Insights

This project introduces HealthMate, a senior centered mobile health app that uses the concept of a digital twin to present personal health information and future health predictions in a simple format. Current commercially available fitness apps often depict numbers in complex data dashboards designed for mainstream users, not older adults. HealthMate is designed from the beginning to be accessible for older adults and to provide meaningful data visualization and health predictions for health areas that are a priority for them. Due to the popularity of the Apple ecosystem with older adults, HealthMate was chosen to be developed as an iOS app that is compatible with the Apple Watch.

HealthMate’s key features are: a Digital Twin interface that lets seniors access their daily health data by tapping on body parts of the Digital Twin, or avatar; predictive insights based on the past week’s statistics; customized notifications for daily health alerts; and an intuitive, clear interface with minimal visual clutter and focused views. This is achieved by the app reading user data from wearable devices like the Apple Watch, analyzing the past week’s data and statistics, and displaying likely trends or potential issues in areas the user wants to focus on. Currently the data shown reflects a user’s heart rate consistency, step count consistency, and sleep habit consistency. Future enhancements aim to show anxiety or high stress propensity, fall detection, cardiovascular anomaly, dizziness propensity, and irregular heart rhythm. The predictive insights module analyzes last week’s data and forecasts the current week’s activity, heart rate, and sleep patterns, and may indicate possible incidents such as anxiety, falls, or heart palpitations linked to inadequate sleep. The prework for this module uses generated synthetic datasets that are similar to typical wearable device data. This supports model development and safe investigation of uncommon problems without exposing actual users. The objective is to create a trustworthy and understandable companion that assists senior citizens in identifying changes and helps inspire small improvements to daily routines for better and healthier living.

Master of Science in Cybersecurity Engineering

AUTUMN 2025

Master of Science in Electrical & Computer Engineering

AUTUMN 2025