{"id":22905,"date":"2022-09-30T14:17:30","date_gmt":"2022-09-30T14:17:30","guid":{"rendered":"http:\/\/www.uwb.edu\/?p=22905"},"modified":"2026-04-27T10:40:56","modified_gmt":"2026-04-27T17:40:56","slug":"archive","status":"publish","type":"page","link":"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive","title":{"rendered":"Previous Quarters"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\" id=\"top\">Previous final examination defense schedule<\/h2>\n\n\n\n<p>An archive of some previous School of STEM master\u2019s degree thesis\/projects.<\/p>\n\n\n\n<p><strong>Select a master&#8217;s program to navigate to candidates:<\/strong><\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p><a href=\"#mscsse\">Computer Science &amp; Software Engineering<\/a><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p><a href=\"#mscyber\">Cybersecurity Engineering<\/a><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p><a href=\"#msece\">Electrical &amp; Computer Engineering<\/a><\/p>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading has-text-align-center\" id=\"mscsse\">Master of Science in Computer Science &amp; Software Engineering<\/h2>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">WINTER 2026<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, February 25<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">DEBOSHRI DAS<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; <a href=\"https:\/\/washington.zoom.us\/j\/96541693519\">Join Deboshri Das&#8217; Online Defense<\/a><br><strong>Project:<\/strong> Blackfoot &#8211; Play and Learn: Designing a Game-Based Language Learning Application<\/p>\n\n\n\n<p>Blackfoot (Niits\u00ed\u02bcpowahsin) is a culturally and historically significant Indigenous language of the Blackfoot Confederacy and a primary medium through which community knowledge, identity, and oral traditions are sustained. As with many Indigenous languages, Blackfoot is experiencing ongoing endangerment, highlighting the need for accessible learning tools and culturally grounded resources that support intergenerational transmission and revitalization.<\/p>\n\n\n\n<p>This paper presents the design and implementation of Blackfoot &#8211; Play and Learn, a web-based, game-driven language-learning application that integrates cultural context with structured, practice-oriented activities. The application is organized as a suite of modular learning experiences, including a Blackfoot History module and several interactive activities: Flashcards for incremental vocabulary acquisition, Blackfoot Builder for scaffolded construction of meaning through guided word selection, Voices of the Blackfoot for listening-centered engagement with narrative content and dialect identification, and a Quiz module for rapid recall and self-assessment. A persistent leaderboard provides lightweight gamification to encourage continued engagement and make progress visible across activities. Because Blackfoot is spoken across multiple dialects, the system incorporates dialect awareness throughout its modules rather than assuming a single standardized variety. 
The system is implemented as a React-based web application using client-side routing and reusable, modular UI components.<\/p>\n\n\n\n<p>The project follows Agile development practices with iterative feature delivery and periodic review feedback. Emphasis was placed on code quality, modular design, reusable utilities, and backend best practices such as normalized answer checking, debounced availability validation, provider-aware reauthentication, and guarded asynchronous data access. The resulting system demonstrates a scalable, maintainable architecture for culturally informed, game-based language learning, and provides a foundation for future adaptive and community-driven extensions.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, March 6<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">BENJAMIN V. SCHIPUNOV<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Clark Olson<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; <a href=\"https:\/\/washington.zoom.us\/j\/96571731630\">Join Benjamin V. Schipunov&#8217;s Online Defense<\/a><br><strong>Project:<\/strong> AROfficeNav: A Scalable Indoor Mapping and Augmented Reality Navigation System<\/p>\n\n\n\n<p>Indoor navigation has emerged as a growing challenge in large, complex buildings such as university campuses, hospitals, shopping malls, and corporate offices, where traditional signage is not always sufficient to provide consistent, accessible guidance. Outdoor Global Navigation Satellite Systems (GNSS) are unreliable indoors due to signal attenuation and lack of line-of-sight to satellites, while existing positioning systems such as QR-code markers and dedicated infrastructure introduce installation and maintenance overhead that scales poorly over time. 
These limitations motivate the development of scalable indoor positioning systems that leverage advances in computer vision, sensor fusion, and modern mobile computing. This project presents the design, implementation, and evaluation of an augmented reality (AR)-based indoor building navigation system that integrates 3D spatial modeling to enable scalable, user-friendly indoor guidance.<\/p>\n\n\n\n<p>The system uses a server-backed architecture and a modular reconstruction pipeline to create building-scale 3D maps from phone-captured scans, organized into workspace-based reconstructions to support iterative refinement and long-term data management. For mapping, the pipeline uses learned features and robust pose estimation to produce a sparse 3D reconstruction. For navigation, the system performs real-time visual localization with rolling-window multi-frame matching against a pre-built 2D-3D descriptor database, estimating camera pose using Perspective-n-Point (PnP) with Random Sample Consensus (RANSAC). A coordinate alignment transform maps reconstructed building coordinates into the user\u2019s live AR session coordinates, enabling stable AR labeling and route visualization. The overall design prioritizes minimal installation and maintenance compared to infrastructure-heavy alternatives such as fiducials and beacons, while supporting multi-floor buildings and scalable deployment.<\/p>\n\n\n\n<p>The project is evaluated in UW Bothell\u2019s Innovation Hall, an evaluation that involves scanning hallways, open spaces, and staircases. After the scanning process, all rooms, exits, elevators, restrooms, and scan connections are labeled using a label projection system that operates directly on the captured scan images. The scans are then merged into a unified building model, from which a navigation graph is generated and prepared for publication and consumer use. 
The resulting platform demonstrates the feasibility of scalable, infrastructure-light indoor navigation while highlighting practical challenges in visual positioning under descriptor ambiguity and perceptual aliasing.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, March 11<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SAMMAN TYATA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; <a href=\"https:\/\/washington.zoom.us\/j\/97526972895\">Join Samman Tyata&#8217;s Online Defense<\/a><br><strong>Project:<\/strong> Design Decisions and Architectural Rationale in the Luna mHealth Mobile Application: UI Enhancements and Module Page Navigation<\/p>\n\n\n\n<p>This paper presents a critical analysis of design decisions and technical contributions made during the development of the Luna mHealth Mobile application, a Flutter-based mobile health education platform. The work encompasses four primary areas of the user interface: the module homepage, the module reading experience, peer-to-peer module sharing, and a breadcrumb-based navigation system with stack-based history. Each area is evaluated against foundational software engineering principles such as SOLID design, DRY, and Clean Code practices, illustrating how these principles shape architectural decisions in real-world Flutter development. 
Rather than cataloging all activities, this paper focuses on the reasoning behind significant architectural choices, the iterative thought processes that led to final implementations, and the broader lessons learned about mobile application development in a production environment.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, March 13<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">PAUL OLADELE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.; <a href=\"https:\/\/washington.zoom.us\/j\/98989199038\">Join Paul Oladele&#8217;s Online Defense<\/a><br><strong>Project:<\/strong> Offline Health Module Sharing in Luna mHealth: A Flutter-Based Peer-to-Peer Approach with Google Nearby Connections<\/p>\n\n\n\n<p>This capstone project presents the design and implementation of a peer-to-peer content sharing system for the Luna mHealth mobile application, an offline-first platform that delivers health education in low-connectivity environments. Luna distributes structured, audio-visual health modules on low-cost smartphones, where internet access cannot be assumed. To support local content dissemination, this work implements a device-to-device transfer system using Google Nearby Connections within a Flutter architecture. The project evaluates prior Bluetooth Low Energy approaches and motivates the selection of a higher-level framework to reduce protocol complexity and improve reliability. The system is modeled using finite state machines to provide predictable connection lifecycles, failure recovery, and multi-peer coordination. A module transfer pipeline with metadata exchange, file streaming, and integrity validation ensures safe ingestion and resharing. 
The result is a module sharing system that strengthens Luna\u2019s offline distribution model and supports community-driven propagation of health education content.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">AUTUMN 2025<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, November 20<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JAMES TRUONG<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Kelvin Sung<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project:<\/strong> Realistic Illumination in AR<\/p>\n\n\n\n<p>Augmented Reality (AR) creates a unique user experience by superimposing digital content onto real-world environments. By applying realistic visual cues, AR objects can be more convincingly integrated into a user\u2019s real-world environment, achieving higher immersion and believability. Unfortunately, accurately applying realistic illumination and spatial conditions remains a technically challenging task, as real-world environments and lighting phenomena vary widely in complexity and visual character.<\/p>\n\n\n\n<p>Two primary approaches exist to address this challenge: machine learning and image-based methods. Machine learning-based methods can simulate real-world illumination on AR objects by recognizing illumination patterns and features in images, but rely on the quality, quantity, and diversity of their training data. In contrast, image-based approaches offer advantages in flexibility and usability, where a balance in image quality and hardware performance can still provide a satisfactory result. 
Unfortunately, image-based approaches remain sensitive to hardware limitations and image quality, which can significantly affect the performance and outcome.<\/p>\n\n\n\n<p>This project focuses on the latter, emphasizing the development of a practical and accessible solution for realistic AR lighting on common consumer devices. The implemented system utilizes two mobile devices: one device gathers real-world environmental data, and the other establishes and renders the AR object. This configuration allows realistic illumination of AR objects in near real-time under dynamic indoor lighting conditions. The resulting system provides a practical and accessible solution for integrating digital objects into real indoor environments, allowing them to appear consistent with real physical objects in terms of lighting, shadow, and reflection. However, the performance is limited in complex or rapidly changing environments where noise is more prominent.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, November 21<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SIQIAO YE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Michael Stiber<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project:<\/strong> Incident Classification of Emergency Services Communication System (ESCS)<\/p>\n\n\n\n<p>In the context of next-generation 911 (NG911), Emergency Service Communications Systems (ESCS) are becoming increasingly complex, requiring operators to answer calls and then categorize events based on the content of the conversation. This study explores metadata-based event classification methods and how their performance depends on the spatiotemporal structure of incoming calls, providing operators with an initial pre-classification before answering. 
We synthesized 911-like datasets using a graph-based clustering point process simulator, built a dual-input Long Short-Term Memory (LSTM) network classifier, and compared it with classic non-sequential baseline models. We generated three spatiotemporal separation modes: temporally overlapping but spatially separable; spatially overlapping but temporally separable; and simultaneously spatiotemporally overlapping. The results reveal clear patterns: when spatial cues are ambiguous, temporal memory becomes crucial and the LSTM model performs better. On a representative test set, the overall accuracy and macro F1 score typically exceed 85%, demonstrating robust performance and highlighting the value of sequence modeling in metadata-only ESCS classification.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, November 25<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">POULAMI DAS GHOSH<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project:<\/strong> Visual Question Answering System Using Gated Bidirectional Cross-Attention<\/p>\n\n\n\n<p>This capstone project presents a cost-effective and reliable Visual Question Answering (VQA) system designed to improve accessibility for visually impaired users. VQA is a multimodal AI task that requires joint reasoning over image and text inputs to answer open-ended questions. The delivered solution leverages a novel gated bidirectional Cross-Attention mechanism that integrates BERT-based text embeddings with spatial and global image representations from ResNet-50 to generate answers. In bidirectional Cross-Attention, the model allows the text to attend to visual features and vice versa, enabling a richer multimodal understanding. 
Meanwhile, the gating algorithm enables the model to filter out irrelevant information by regulating the flow of data between attended and original features, thereby reducing the overall computational overhead of the VQA system. Using this approach, we achieved an accuracy of 58% on the VQA v2 validation set, a 10% improvement over the hierarchical co-attention-based baseline work. Furthermore, the system is deployed as a user-friendly and intuitive web application that supports both voice and text input\/output for enhanced usability.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, November 26<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">DHURKA ROHINI MADASAMY<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Wooyoung Kim<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Thesis:<\/strong> Predictive Robustness and Biomarker Discovery in Rare Cancers: An Interpretable Machine Learning Approach<\/p>\n\n\n\n<p>Rare cancers such as Glioblastoma Multiforme (GBM, a rare brain cancer) pose persistent challenges in computational oncology due to severe data scarcity, biological noise, and the difficulty of identifying disease-specific molecular signatures. This thesis develops an interpretable machine learning framework to evaluate the predictive robustness of GBM gene expression signals and to isolate biologically meaningful biomarkers under extreme class imbalance. Across four classification settings and multiple resampling strategies, the models achieve consistently strong performance (F1 and MCC values often above 0.95), demonstrating that the GBM transcriptomic signature remains highly separable even under challenging experimental conditions. To refine this signal, a novel Cascade Learning approach systematically removes common cancer pathways and reveals biomarkers specific to the rare cancer. 
SHAP-based interpretability identifies a stable set of genes aligned with known glioma biology, and a complementary Tab2Image analysis offers visual validation of class separability through structured image-based embeddings. Alongside these methodological results, the thesis also addresses ethical considerations inherent to rare disease research by emphasizing fairness, representation, and accountability when working with structurally imbalanced datasets. Overall, the study provides a robust, transparent, and ethically grounded pathway for biomarker discovery in rare cancers.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ARAVIND TALLAPRAGADA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Erika Parsons<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project:<\/strong> Fish Species Identification System for Enhanced Fishing Experience<\/p>\n\n\n\n<p>The goal of this project is to research and set the foundations to develop an ML-based fish species identification system to assist recreational anglers while supporting marine research and species preservation in Washington State. The system employs a ResNet-50 convolutional neural network fine-tuned on 2,291 images of eight fish species commonly found in Washington\u2019s coastal waters. The project explores several experimental setups and configurations, leading to a 94.25% classification accuracy.<\/p>\n\n\n\n<p>The model is integrated into a full-stack web application that aims to enable users to upload fish images and receive prompt identification (under 5 seconds). The system also captures and stores environmental metadata, including location, timestamp, and historical weather conditions via the OpenWeatherMap API. 
Species-specific information is retrieved through the iNaturalist API, while conservation status is provided via the IUCN Red List API.<\/p>\n\n\n\n<p>By providing a platform for accessible species identification along with taxonomic and environmental context, this system aims to deliver a practical tool for recreational fishers and an infrastructure for citizen-science contributions to ecological monitoring. This project demonstrates the practical application of machine learning and cloud technologies to create a functional application that helps with ecological conservation research while serving as a tool for more conscientious recreational fishing.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, December 1<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">NICOLAS POSEY<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Michael Stiber<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Project:<\/strong> Implementing a GPU Model for Simulating Emergency Services Communication Systems<\/p>\n\n\n\n<p>Graphitti is a general-purpose, high-performance, graph-based simulator designed to significantly speed up the simulation of network event models, such as biological neural simulations, using NVIDIA GPUs. Graphitti also provides an architecture designed to allow for the validation of GPU simulation results against single-threaded CPU simulation results. This enables researchers and other users of Graphitti to be confident in the correctness of their parallel implementations. Previous work at the University of Washington, Bothell\u2019s Intelligent Networks Laboratory (INL) has succeeded in creating and validating a single-threaded CPU implementation of a Next Generation 911 (NG911) Emergency Services Communication System (ESCS) model. 
This Master\u2019s capstone project builds on this work by creating an NG911 GPU implementation to support the simulation of larger networks. Two kernels, advance911VerticesDevice and maybeTakeCallFromEdge, were implemented using the CUDA programming language to parallelize the most computationally intensive components of the NG911 model. The runtimes of these kernels and of a simulation consisting of 1,932 vertices, 34,119 calls, and 288,000 timesteps were measured to assess the performance of the implementation. Runtimes of the GPU implementation were slower than those of the CPU implementation: 2.63x for advance911VerticesDevice, 1.84x for maybeTakeCallFromEdge, and 2.08x for the total simulation time. Future work will be needed to achieve performance improvements for the GPU implementation by examining possible causes such as the complexity of the advance911VerticesDevice kernel and the inability to keep the advance911VerticesDevice kernel sufficiently busy during the simulation. Because this is the first instance in the INL of creating a GPU implementation from an existing CPU implementation, the difficulty of the creation process using the existing architecture is analyzed to understand whether architectural changes should be made to improve this process for future models. A normalized functional code change (NFCC) metric is computed to measure the difficulty. An NFCC value for all files changed during the GPU implementation was found to be 0.1359, demonstrating that the process for creating a GPU implementation from an existing CPU implementation is not difficult.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JEREMY NEWTON<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Wooyoung Kim<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; UW2 (Commons Hall) Room 327<br><strong>Thesis:<\/strong> Interpretable Machine Learning for Biomarker Identification in RNA Seq Cancer Data<\/p>\n\n\n\n<p>Existing research on RNA Seq gene expression biomarkers has provided various methods to select a small list of genes as cancer biomarkers from large volumes of gene expression data. Previous methods for identifying potential gene expression cancer biomarkers have focused on statistical analysis, but other methods have incorporated machine learning, often including Interpretable Machine Learning (iML) techniques. On 16 cancer-type cohorts of TCGA data, we used inherently interpretable machine learning models (Logistic Regression, Random Forest, and Linear Support Vector Machine) to narrow down lists of potential genes as biomarkers using the trained model&#8217;s feature importance rankings. We subsequently applied the model-agnostic iML techniques Shapley Additive Explanations (SHAP) and Permutation Importance to narrow down the subsets even further. We compared the classification performance of machine learning models trained on features selected by these methods with that of models trained on features selected by statistical methods and on biomarkers from external research. We cross-checked the highly scored potential biomarkers with biomedical annotations and gene pathway analysis. We found that these biomarker selection methods lead to comparable or better classification performance on these datasets than the biomarkers from external research or from statistical analysis alone. 
Mutual Information estimation was a surprisingly useful technique for initial feature selection, and iML techniques improved the selection of short lists of biomarkers for classification.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, December 2<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">AKBARBEK RAKHMATULLAEV<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; UW2 (Commons Hall) Room 327<br><strong>Thesis:<\/strong> IVF Semantical: Agent-Based Implementation of Vector Search on GPU<\/p>\n\n\n\n<p>Vector search plays a crucial role in large-scale similarity search applications, with IVF (Inverted File Index) being a widely used indexing method due to its balance between accuracy and efficiency. However, traditional vector search algorithms that use IVF as an indexing method, such as IVF Flat and IVFPQ, yield results by brute-force searching within each cluster\/list. This paper introduces a new IVF-based vector search algorithm, called IVF Semantical, which performs the search within each cluster\/list through a different arrangement of the data and traversal using binary search. In order to accelerate the development phase, the author used MASS CUDA to handle the searching part, which made it possible to leverage a higher level of abstraction in the code. We evaluated our IVF Semantical, implemented for GPUs using MASS CUDA, against two other algorithms, IVF Flat and IVFPQ, demonstrating the significant speed efficiency of the approach. 
The findings suggest IVF Semantical can make vector search more efficient and robust on infrastructure that requires immediate response, such as navigation systems or robots.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SARAH MARTEL<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Annuska Zolyomi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project:<\/strong> Sail the Ship, Steer the Self: Using Social Accountability and Gamification for Goal Attainment in Users with Attention Difficulties<\/p>\n\n\n\n<p>People with attention difficulties, such as those with Attention-Deficit\/Hyperactivity Disorder (ADHD), tend to face challenges achieving their goals, struggling to break goals into tasks, overcome procrastination, and plan for and remember important deadlines.<\/p>\n\n\n\n<p>This project explores a novel approach for motivating people with attention difficulties to set and achieve goals using two motivating factors identified in the literature: social accountability and gamification. Social accountability occurs in the form of a body doubling simulator, while a visual journey map of the user\u2019s goals and milestones adds gamification by making their progress more meaningful.<\/p>\n\n\n\n<p>In the first stage of a user-centered design process, a prototype of the app was designed using the design tool Figma. Usability testing and interviews with four users who self-identified as having attention difficulties revealed key findings regarding engagement and ease of use: the importance of gamification and novelty for users with attention difficulties, the strength of peer accountability, and the need for a clear, distraction-free UI. Building on these insights, an Android app, &#8220;Sail the Ship&#8221;, was developed. 
This second stage, where six participants used the app, showed that peer accountability through body doubling increased task initiation and feelings of social pressure for most users. Additionally, the journey map supported engagement through milestone completion and gamification, with participants requesting even more gamified features. This work contributes a novel app design that helps people with attention difficulties venture forward and achieve their goals.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, December 3<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">MARINA ROSENWALD<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Michael Stiber<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Thesis:<\/strong> Application of Graph Neural Networks in Spike-Time-Dependent-Plasticity Graphitti Simulations<\/p>\n\n\n\n<p>Spike-timing-dependent plasticity (STDP) is a core mechanism that drives how biological neural networks&#8217; connectivity changes over time, but understanding the structural changes for large networks is still difficult. Graphitti, the simulator used in the Intelligent Networks Laboratory, can generate large, densely-interconnected STDP networks, yet interpreting what emerges from those simulations is not straightforward. This thesis uses Graph Neural Networks (GNNs) to analyze these networks and to uncover patterns that are hard to see through traditional tools.<\/p>\n\n\n\n<p>The main goal of this work is to apply different GNN architectures, most notably Graph Attention Networks (GATs), to Graphitti-generated networks in order to identify key structural and behavioral changes. The focus of this thesis is to identify key neurons as STDP unfolds. 
By treating each simulation as a series of graphs over time, we can apply Graph Neural Networks to explore how connectivity shifts, which nodes receive higher attention weights, and whether these \u201ckey neurons\u201d lead to the expected graph structure. Alongside this analysis, an equally important objective is to improve our understanding and implementation of GNNs for large, fully connected neural graphs. We document the full workflow, challenges, and model choices, and contribute these findings for future work.<\/p>\n\n\n\n<p>Overall, this thesis links large-scale STDP simulation with modern graph learning methods, using attention mechanisms to highlight influential nodes and offering new tools and insights for analyzing complex neural systems.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">COLLEEN LEMAK<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Geethapriya Thamilarasu<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; DISC (Discovery Hall) Room 464<br><strong>Thesis:<\/strong> Hybrid Static-Dynamic Feature-Weighted Analysis for IoT Botnet Malware Detection<\/p>\n\n\n\n<p>As the Internet of Things (IoT) domain continues to evolve, IoT devices face escalating security challenges. Recent waves of IoT botnets have exploited device vulnerabilities to launch dangerous large-scale Distributed Denial of Service (DDoS) attacks from compromised, resource-constrained devices. These networks of infected devices pose a unique threat to modern infrastructure, placing homes, schools, medical facilities, and transportation systems at heightened risk of malicious exploitation.<\/p>\n\n\n\n<p>This paper proposes a novel hybrid framework that combines static and dynamic analysis techniques for IoT botnet malware detection without relying on complex Machine Learning (ML) models. 
By extracting and weighing the importance of key features from malware binaries based on their relevance to DDoS behavior, the framework maintains statistical adaptability to observed data while avoiding large memory usage and opaque black-box decision processes common in ML. Designed for interpretability and efficiency, this malware detection framework bridges code-level structure and runtime behavior, offering a transparent and practical botnet detection strategy for diverse resource-constrained IoT ecosystems.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">LOGAN CHOI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Wooyoung Kim<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Thesis:<\/strong> Benchmarking TenSEAL\u2019s Homomorphic Encryption Through Predicting Encrypted RNA Sequencing Data<\/p>\n\n\n\n<p>This study addresses the growing need to protect sensitive healthcare data as digital technologies and cloud-based analytics become integral to modern medical research and care delivery. Healthcare data, such as clinical or genomic information, holds immense potential to enhance disease understanding and improve diagnostics through machine learning models; however, adopting third-party cloud technologies increases the risks of data breaches and noncompliance with regulations such as the Health Insurance Portability and Accountability Act (HIPAA). To address these concerns, this research investigates homomorphic encryption, a cryptographic method that allows computations on encrypted data without exposing sensitive information. The study benchmarks the TenSEAL library to evaluate its performance in encrypting healthcare test datasets and executing predictions through a pre-trained machine learning model, while also evaluating memory utilization and encryption time. 
The findings show that TenSEAL\u2019s CKKS encryption scheme effectively enables data encryption and secure machine learning inference on genomic datasets for breast, lung, and prostate cancers, achieving an average accuracy of 90% across all datasets. However, our results also highlight a key trade-off: as encryption strength and dataset size increase, computational overhead rises sharply. Thus, medical professionals and data scientists must carefully balance the need for security against the practicalities of deployment in real-world healthcare systems.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">KAMERON VUONG<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project:<\/strong> Understanding Software Engineering Principles by Evolving Luna mHealth\u2019s Support for PowerPoint<\/p>\n\n\n\n<p>Luna mHealth is a mobile health education platform designed to deliver culturally relevant, offline-accessible health education content to resource-constrained communities. Luna mHealth is composed of two parts: Luna authoring system and Luna mobile. Luna authoring system is a desktop application that converts PowerPoint-based health content into structured, interactive learning modules called Luna content modules. Luna mobile is a mobile application that readers use to view Luna content modules on their mobile phone without the need for an active Internet connection.<\/p>\n\n\n\n<p>In this project, I evolved Luna mHealth\u2019s support for the core PowerPoint features needed for Luna, such as connection shapes (aka lines), textbox shapes, picture shapes, hyperlinks, and placeholder shapes from slide layout templates. 
For Luna mobile, I evolved its rendering logic to render the extracted text and images, ensuring that visuals appear consistently and accurately. I also evolved the navigation system to support inter-page hyperlinks, allowing readers to navigate to different pages of a Luna content module. Finally, I did enough work to demonstrate that we could extend this system to support LibreOffice Impress presentation files.<\/p>\n\n\n\n<p>Through practices like Test-Driven Development and principles like Single Responsibility and You Aren\u2019t Gonna Need It, I ensured the code I contributed to the Luna mHealth codebase was clean, simple, and maintainable.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">WENTAO GAO<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Erika Parsons<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>5:45 P.M.<br><strong>Project:<\/strong> Virtual Reality and sEMG Integration for Phantom Limb Pain Therapy<\/p>\n\n\n\n<p>Phantom limb pain (PLP) affects a substantial proportion of amputees and remains challenging to treat despite decades of research. Virtual reality (VR) has emerged as a promising non-pharmacological intervention, offering immersive visual feedback and controlled movement experiences. However, existing VR systems face significant barriers to widespread adoption to treat PLP: high costs, limited adaptability to individual differences due to reliance on pre-trained models and fixed control schemes, and restriction to specific upper or lower limb applications. 
Systematic reviews highlight the need for more affordable, customizable, and versatile rehabilitation technologies.<\/p>\n\n\n\n<p>This project presents the design and preliminary evaluation of a novel, low-cost VR system capable of supporting both below-elbow and below-knee amputation scenarios through a self-calibrating machine learning pipeline. The system integrates consumer-grade VR hardware (Meta Quest 2), a surface electromyography (sEMG) armband (8 channels total), and AprilTag visual markers to provide real-time, personalized limb control in immersive game environments. For limb control, the system classifies three discrete gestures (clench, extension, rest) combined with positional tracking and rotation state, which drive a rigged virtual limb model through animation blending to achieve naturalistic movement suitable for amputee rehabilitation. The system features an automated guided data collection and training workflow that allows users to record EMG signatures and train accurate custom Random Forest gesture recognition models in approximately 5 minutes, without requiring technical expertise or pre-trained models.<\/p>\n\n\n\n<p>System feasibility is validated through a pilot study with able-bodied participants who complete both upper and lower limb training sessions and interact with purpose-designed VR games. Preliminary results demonstrate successful gesture classification (mean accuracy &gt;85%), acceptable system latency (&lt;50ms end-to-end), and positive user experience metrics. The system addresses a critical gap in accessible rehabilitation technology by providing a single platform adaptable to multiple amputation types while maintaining low implementation costs.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, December 4<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">YU WEN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Annuska Zolyomi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project:<\/strong> ConverSwim: A Visual and Emotion-Aware Platform to Support Reflective Conversation for Neurodivergent Users<\/p>\n\n\n\n<p>Neurodivergent individuals\u2014including those with autism and ADHD\u2014often face challenges in conversations, such as interpreting social cues, recognizing emotional signals, and maintaining mutual understanding. These difficulties can hinder effective communication and self-reflection. To address these issues, we developed ConverSwim, a web-based platform that supports users in reflecting on conversational dynamics through an integrated and interactive canvas. This canvas combines real-time chat, NLP-driven emotion detection, and a structured swimlane diagram that chronologically maps each user\u2019s internal emotional state and external verbal expression. Swimlanes within the canvas visualize the conversation across five columns: each participant has two parallel lanes\u2014Inside (representing emotions and internal states) and Outside (representing spoken or written expressions)\u2014while the central Common Ground lane captures moments of shared understanding as they emerge. By unifying interaction, emotional analysis, and visual structure in a single space, ConverSwim is designed to enable users to explore communication patterns and observe emotional shifts as conversations unfold. The platform includes a step-by-step tutorial to guide users through its features and reduce cognitive load. Through iterative, user-centered design, ConverSwim promotes emotional awareness, improves conversational reflection, and facilitates more empathetic, collaborative dialogue. Preliminary user sessions with neurodivergent participants evaluate usability, emotional awareness, and collaborative reflection. 
Early feedback indicates that participants found the system intuitive to use, helpful for noticing emotional cues, and supportive of reflecting on conversational flow.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">BREANNA POWELL<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Afra Mashhadi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; DISC (Discovery Hall) Room 464<br><strong>Project:<\/strong> Accelerating Climate Forecasts: Vision Transformer Models for Arctic Sea Ice Concentration Prediction with Novel Tokenization Techniques<\/p>\n\n\n\n<p>Reliable forecasting of Arctic Sea Ice Concentration (SIC) is critical for safe shipping, sustainable settlement planning for indigenous communities, and effective wildlife conservation. Yet, accurate SIC prediction remains difficult: observational data are sparse, models often fail at extremes, and climate science depends heavily on computationally expensive physics-based simulations.<\/p>\n\n\n\n<p>This work introduces IceForecastTransformer, a Vision Transformer\u2013based architecture trained on synthetic data from the U.S. Department of Energy\u2019s Energy Exascale Earth System Model (E3SM) Version 3. By incorporating Freeboard\u2014a satellite-observable metric derived from ice and snow thickness\u2014alongside flexible tokenization and positional encoding strategies, the model provides a possible bridge between simulation outputs and real-world observations. 
IceForecastTransformer is trained on 163 years of monthly SIC and Freeboard data, achieving 0.16 RMSE and 0.32 JSD for 12-month SIC forecasts, and 0.84 F1 Score with 0.98 AUC for Sea Ice Extent (SIE), all with training times under 15 minutes.<\/p>\n\n\n\n<p>These results demonstrate that transformer-based models can efficiently scale down complex Earth system data while maintaining high accuracy, offering a practical complement to physics-based simulations. The approach provides a pathway toward operational Arctic forecasting, with potential for fine-tuning on satellite data and broader applications in climate research and planning.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Friday, December 5<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JONATHAN CHO<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Wooyoung Kim<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project:<\/strong> Interpretable Machine Learning Studio Platform Web Application<\/p>\n\n\n\n<p>Machine learning models are often treated as black boxes, creating challenges in domains such as healthcare where interpretability is critical. This project enhanced a Machine Learning Studio Platform (MLSP) originally developed at the University of Washington by improving reliability, expanding supervised and unsupervised model support, and redesigning the interface for usability. Interpretability methods including SHapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), Partial Dependence Plot (PDP), Permutation Feature Importance (PFI), and Exploratory Data Analysis (EDA) were integrated and tested on benchmark datasets such as the Breast Cancer Wisconsin dataset and others of varying size and complexity. The enhanced system produced consistent metrics and visual explanations, enabling users to evaluate models more effectively and approach results with greater trust and understanding. 
These contributions demonstrate how interpretability features can make machine learning workflows more transparent and accessible for students, researchers, and healthcare professionals.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">XIU ZHOU<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Geethapriya Thamilarasu<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; DISC (Discovery Hall) Room 464<br><strong>Project:<\/strong> Privacy Information Detection Using Imagined Speech EEG-to-Text Analysis<\/p>\n\n\n\n<p>Brain\u2013Computer Interface (BCI) devices are rapidly entering consumer markets and offer several benefits, such as hands-free interaction and improved accessibility. However, their passive recording capability introduces a severe privacy threat, as malicious actors could intercept neural signals and apply machine learning techniques to infer sensitive information. Among neural modalities, imagined speech\u2014the internal process of thinking words or characters without vocalization\u2014is especially concerning, as it may reveal private content such as passwords or Personal Identification Numbers (PINs).<\/p>\n\n\n\n<p>To assess this risk, we investigate whether passwords can be decoded purely from imagined-speech EEG without prior knowledge of candidates. Moving beyond prior work limited to classifying among known passwords, we propose a novel hierarchical Binary-Digit-Letter (BDL) system for character-level reconstruction. Using a commercial headset, our method achieves an unprecedented end-to-end accuracy exceeding 60% on the full alphanumeric set. 
This performance markedly reduces the effective entropy of real-world passwords (e.g., from 62<sup>8<\/sup> to roughly 2<sup>8<\/sup> candidates for an 8-character password).<\/p>\n\n\n\n<p>We further present a systematic comparison of two deep neural architectures (CNN+BiLSTM and CNN+Transformer) across four experimental conditions to evaluate their password decoding performance and robustness. The results offer strong evidence that sensitive information can be extracted from passively collected EEG and establish key architectural baselines for future security analyses and privacy-preserving BCI design.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading\">NEELIMA CHOWDHURY<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Annuska Zolyomi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project:<\/strong> HealthMate: Designing a Senior Centered Digital Twin App for Health Monitoring and Predictive Insights<\/p>\n\n\n\n<p>This project introduces HealthMate, a senior centered mobile health app that uses the concept of a digital twin to present personal health information and future health predictions in a simple format. Current commercially available fitness apps often depict numbers in complex data dashboards designed for mainstream users, not older adults. HealthMate is designed from the beginning to be accessible for older adults and to provide meaningful data visualization and health predictions for health areas that are a priority for them. 
Due to the popularity of the Apple ecosystem among older adults, HealthMate was developed as an iOS app compatible with the Apple Watch.<\/p>\n\n\n\n<p>The key features of HealthMate are to provide seniors with a Digital Twin interface to access their daily health data by tapping on body parts of the Digital Twin or avatar, provide predictive insights based on past week statistics, enable customized notifications for daily health alerts, and provide an intuitive and clear interface with minimal visual clutter and focused views. This is achieved by the app reading user data from wearable devices like Apple Watch, analyzing the past week\u2019s data and statistics, and displaying likely trends or potential issues in areas the user wants to focus on. Currently the data shown reflects a user\u2019s most consistent heart rate, step count consistency, and sleep habit consistency. Future enhancements aim to show anxiety or high stress propensity, fall detection, cardiovascular anomaly, dizziness propensity, and irregular heart rhythm. The predictive insights module analyzes last week\u2019s data and forecasts the current week\u2019s activity, heart rate, and sleep patterns, and may indicate possible incidents such as anxiety, falls, or heart palpitations linked to inadequate sleep. The prework for this module uses generated synthetic datasets that are similar to typical wearable device data. This supports model development and safe investigation of uncommon problems without exposing actual users. 
The objective is to create a trustworthy and understandable companion that assists senior citizens in identifying changes and helps inspire small improvements to daily routines for better and healthier living.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SUMMER 2025<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, June 25<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ARJUN TANEJA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Michael Stiber<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project:<\/strong> Spatiotemporal Analysis of Neuronal Avalanches in Self-Organized Criticality<\/p>\n\n\n\n<p>Neural spike activity forms the fundamental basis of information processing and communication in the brain, exhibiting complex spatiotemporal dynamics. Recent advances in computational neuroscience have enabled high-resolution numerical simulations of large-scale neural networks, generating spike-time and location data that capture emergent phenomena such as neuronal avalanches and whole-network bursting. In the case of neuronal avalanches, prior analysis has almost exclusively focused on temporal information, ignoring the spatial information associated with spike data. This project presents an efficient algorithm that incorporates both spatial and temporal constraints for avalanche classification, moving beyond conventional spike-train analyses. Through systematic comparison of temporal-only and spatiotemporal methods, we investigate how the inclusion of spatial information affects avalanche identification and characterization. 
Finally, we examine how varying spatiotemporal constraints influence the detection and properties of whole-network bursting events, providing insight into the large-scale organization of neural activity.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, August 7<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">PANKTI BHASKAR SHAH<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Professor Mark Kochanski<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project:<\/strong> Design and Implementation of a Unified Appointment Scheduling System for Canvas LMS<\/p>\n\n\n\n<p>This project explores how instructor-student appointment scheduling can be integrated into a university\u2019s Learning Management System (LMS), specifically Canvas, while enabling synchronization with external calendars such as Google and Outlook. Although Canvas centralizes academic workflows, it lacks built-in support for course-aware, role-sensitive scheduling. As a result, instructors rely on disjoint third-party tools that lack integration with course data and often lead to scheduling conflicts and increased administrative burden. This project implements a unified scheduling system that uses OAuth authentication and Canvas API integration, and supports external calendar providers for event synchronization. Users can create and manage availability programs, manage appointments, and seamlessly sync events across their preferred calendars. The system was developed using agile practices, offering practical experience in scalable software design, modular architecture, and API integration. 
This solution addresses a clear gap in academic infrastructure and offers a scalable approach to improved user experience and efficient scheduling for students, instructors, and administrators.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, August 8<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">AATMAN RAJESHKUMAR PRAJAPATI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project:<\/strong> An Enhancement of Distributed Graph Queries in an Agent-Based Graph Database<\/p>\n\n\n\n<p>Graph databases are increasingly employed in domains that demand efficient data-modelling and querying of highly interconnected data. This project enhances an agent-based graph database system built on the MASS (Multi-Agent Spatial Simulation) Java library, which adopts the property graph model to support dynamic schemas and rich relationship semantics in distributed environments. Unlike traditional graph systems, this implementation leverages autonomous agents to navigate and manipulate in-memory data across multiple computing nodes, enabling scalable and parallel graph operations.<\/p>\n\n\n\n<p>This work focuses on extending the system&#8217;s query capabilities by integrating support for the Cypher WHERE clause, a critical feature for filtering and refining data retrieval. A modular approach was adopted to implement this functionality\u2014beginning with an abstract syntax tree (AST) for parsing, followed by a dynamic evaluation mechanism for efficient constraint resolution. The enhancements improve the expressiveness and performance of read operations, while preserving the system&#8217;s core agent-based execution model. 
This development not only broadens the practical utility of the system but also establishes a foundation for future support of more complex query patterns and mutation operations.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, August 11<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">RAVITEJA TANIKELLA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Dong Si<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Thesis:<\/strong> Emotionally Intelligent Voice Language Models For Mental Health Therapy<\/p>\n\n\n\n<p>Conversational assistants for mental health therapy primarily rely on text-based models or cascaded architectures that first transcribe speech to text, a process that discards crucial paralinguistic information. This information bottleneck limits the AI&#8217;s ability to perceive critical emotional cues and broader psychological states, hindering its capacity to reason over human voice and provide effective therapy. This thesis details the development of an emotionally intelligent voice language model designed to overcome these limitations. The process began with a systematic evaluation of heuristic-based approaches and end-to-end model architectures, where automated benchmarking and a double-blind human study confirmed that large audio language models provided the most effective foundation for voice understanding and reasoning. Building on these findings, we propose and implement policy optimization methods to fine-tune this base voice language model on therapy data. The resulting aligned model demonstrated improved performance in benchmark evaluations, exhibiting emotional intelligence and generating therapeutically relevant responses. 
By presenting a complete development framework, from architectural validation to targeted alignment, this research establishes a clear and proven roadmap for creating the next generation of efficient and adaptive voice language models for mental health therapy and multimodal conversational AI.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">KAUSHIK REDDY MITTA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Marc Dupuis<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project:<\/strong> Demographic Disparities in Cybersecurity: Analyzing Victimization, Tool Adoption, and Awareness Across Ethnicity and Region<\/p>\n\n\n\n<p>Cybersecurity disparities across different demographic groups and regions have become a growing concern, as certain populations face disproportionate risks in terms of cyber victimization, tool adoption, and security awareness. These disparities are often overlooked in conventional cybersecurity research, which tends to simplify the relationship between demographic factors and cyber preparedness. This study analyzes these disparities by examining how ethnicity and geographic region influence cybersecurity outcomes in the United States. Using survey data from 470 participants collected by Professor Marc Dupuis, the research employs comprehensive statistical methods including correlation analysis, ANOVA, Kruskal-Wallis tests, and logistic regression to explore patterns across ethnic groups and urban, suburban, and rural settings. The study focuses on four key areas: cyber victimization, cybersecurity tool adoption, cyber incident experiences, and levels of cybersecurity awareness. 
The findings reveal significant disparities that challenge conventional assumptions about cybersecurity preparedness and risk exposure, highlighting the complex interactions between technology adoption, socioeconomic factors, and the strategies of threat actors targeting specific groups. The study calls for culturally competent interventions to address the unique vulnerability profiles of different demographic groups.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">MANPREET KAUR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Dong Si<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project:<\/strong> A Multimodal AI System for Mental Health Support Using Emotion Recognition<\/p>\n\n\n\n<p>Emotion\u2011aware conversational agents have the potential to transform mental health support by providing accessible, empathetic and context\u2011aware interactions. This project enhances the iCare Carebot by expanding it beyond a text\u2011only interface to a multimodal system that integrates text, audio and visual cues to better infer and respond to user emotions. We develop dedicated pipelines for each modality and benchmark both unimodal and multimodal emotion\u2011recognition models, fusing their outputs through cutting\u2011edge foundation models. For visual emotion recognition, we evaluate several vision\u2011language models including Phi\u20114, Qwen2.5\u2011VL\u20117B and Qwen2.5\u2011Omni\u20117B and find that Qwen2.5\u2011VL\u20117B offers the best balance of accuracy and inference efficiency. Across multimodal tasks, instruction\u2011tuned models consistently outperform zero\u2011shot and supervised\u2011fine\u2011tuned baselines. 
The system\u2019s performance is assessed via the Multimodal Language Analysis benchmark and complemented by a user survey, which indicates broad receptiveness to a multimodal chatbot provided privacy concerns are addressed. Comparative evaluation on this benchmark reveals that, among these models, Phi-4-multimodal-instruct provides the most balanced and effective performance across tasks. This work delivers a complete end-to-end framework from emotion signal extraction to context-aware response generation and marks a significant step toward making iCare a more empathetic and emotionally perceptive mental health support tool.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, August 12<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SWANAND WAGH<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project:<\/strong> Designing a User-Centered Digital Experience for the Entomon Institute<\/p>\n\n\n\n<p>In the digital era, a strong online presence is essential for educational institutions to effectively engage with their audiences. However, typical institute websites for Zoology often suffer from poor accessibility, suboptimal UI\/UX design, cluttered and excessive resources, and inadequate adaptability, hindering their effectiveness. This project asks: how can I design a digital platform that meaningfully enhances public interaction with zoology &amp; entomology?<\/p>\n\n\n\n<p>To answer this, I partnered with the Entomon Institute to develop a cutting-edge, user-centric web platform to address these issues. 
The envisioned website serves as a dynamic resource hub, offering personalized materials on zoology and entomology, facilitating collaboration among institutes and research labs, and managing events such as insect hunts and ant colony explorations. Key features include interactive blogs, streamlined event management, and dedicated user and admin dashboards. By leveraging modern web technologies, the platform aims to enhance educational outreach for zoology &amp; entomology, foster an engaging community, and provide up-to-date information in an accessible and interactive format.<\/p>\n\n\n\n<p>At its core, this project is about making an institutional website feel less like a brochure and more like a space people actually want to visit. The result is a fast, accessible, and visually engaging platform that finally brings the Entomon Institute into the modern digital era.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">HARI PRIYA DHANASEKARAN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Michael Stiber<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Thesis:<\/strong> Understanding Burst Patterns Through Graph Neural Network Explainability in Simulated Neuronal Networks<\/p>\n\n\n\n<p>Spontaneous bursting activity in neural networks represents a fundamental mode of information processing in the brain, yet the mechanisms triggering these synchronized events remain poorly understood. While graph-based representations of neural networks are established, discovering the specific connectivity and activity patterns that predict burst initiation remains a significant challenge. 
This work leverages graph neural networks (GNNs) to classify and explain burst initiation in simulated cortical networks generated by the Graphitti simulator, advancing beyond conventional spike-train analysis to incorporate the rich relational structure of neural connectivity. By representing neural populations as graphs where nodes encode individual neurons with their temporal firing statistics and edges capture the synaptic connections, this work naturally integrates both activity patterns and network architecture. However, accurate prediction alone provides limited scientific insight. To move beyond black-box classification, we applied GNNExplainer to identify the minimal neural connectivity patterns driving model predictions. This explainability analysis revealed which specific neurons and synaptic connections the model deemed most critical for each prediction. This work demonstrates how explainable AI can transform our understanding of complex neural dynamics, providing insights that pure predictive modeling cannot offer. By combining the representation power of graph neural networks with explainability techniques, we bridge the gap between prediction and understanding. Our findings challenge prevailing views of burst initiation as a localized phenomenon, instead revealing the role of distributed precursor patterns in driving network-wide synchronization. This methodology opens new avenues for investigating emergent behaviors in complex networks.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">VICTOR LONG<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project:<\/strong> Building Continuous Integration Tools for Luna mHealth<\/p>\n\n\n\n<p>The Luna mHealth group is a graduate research group building a medical education application for use in low-connectivity areas of the world. Developing high-quality code in a large code base alongside other developers is a difficult, complex task. Industry professionals turn to toolsets and processes, such as continuous integration, to provide a robust framework for teams to develop quality code. This project implements a continuous integration and continuous delivery toolset to automate a variety of quality gates that help propel Luna developers toward best practices in every part of the development cycle. By using automated testing and linting tools, developers\u2019 code is checked for test failures and best-practice deviations before being merged into the main project. This project enhanced the student development experience and team consistency by providing a containerized environment, which allows students to develop on the same versions of tools and dependencies. By developing with these continuous integration and continuous delivery tools, students are introduced to the tooling used in many industry settings and experience best practices for building quality code; as a result, code quality has risen dramatically.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, August 15<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">RUOXI SU<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Dong Si<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project:<\/strong> Emotion-Adaptive Retrieval-Augmented Chatbot: Elevating Mindfulness-Based Mental-Health Conversations with Lightweight LLM Reasoning<\/p>\n\n\n\n<p>Large-language models (LLMs) have begun to transform mental-health support, yet most systems still ignore the user\u2019s moment-to-moment affect. We present an emotion-adaptive retrieval-augmented generation (RAG) pipeline that grounds counseling responses in a curated mindfulness corpus while dynamically tailoring both retrieval and language style to the client\u2019s emotional state. At each dialogue turn, a few-shot, lightweight LLM classifier assigns one of 13 clinically salient emotions and produces a \u2264 25-word situation summary; this lightweight front-end attains 83% accuracy on a balanced 100-scenario test set. The two signals seed a MiniLM-L6 vector index built over 600-token chunks from 10 expert-reviewed articles, and the top-4 passages are merged with the chat history before generation. This design contracts the average context to roughly 800 tokens and trims retrieval latency to about 1800 ms per turn while preserving semantic completeness.<\/p>\n\n\n\n<p>We benchmark the system against an otherwise identical, emotion-agnostic RAG baseline. On a 100-dialogue test set, the emotion-aware front-end consistently improves response quality. With GPT-4o-mini, human ratings rise for Empathy (+0.61 on a 0\u20134 scale; 2.35\u21922.96), Topical Relevance (+0.40 on 0\u20133; 1.98\u21922.38), and Helpfulness (+0.19 on 1\u20133; 2.07\u21922.26), while Safety is unchanged (2.70\u21922.73; 0\/100 unsafe). With Llama-3 8B, gains are smaller but consistent\u2014Empathy +0.06 (3.53\u21923.59), Relevance +0.17 (2.39\u21922.56), Helpfulness +0.22 (2.38\u21922.60)\u2014with Safety stable at 2.95. 
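<\/p>\n\n\n\n<p>The top-4 passage retrieval step described above can be illustrated in miniature. A minimal sketch, assuming toy 3-dimensional vectors in place of real MiniLM-L6 embeddings; all names and data here are invented for illustration:<\/p>\n\n\n\n

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, chunks, k=4):
    # chunks: list of (chunk_text, embedding) pairs; returns the k best texts.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy 3-d "embeddings" standing in for MiniLM-L6 vectors over 600-token chunks.
corpus = [("breathing exercise", [1.0, 0.1, 0.0]),
          ("body scan", [0.9, 0.2, 0.1]),
          ("gratitude journal", [0.0, 1.0, 0.3]),
          ("sleep hygiene", [0.1, 0.0, 1.0])]
print(top_k([1.0, 0.0, 0.0], corpus, k=2))  # ['breathing exercise', 'body scan']
```

\n\n\n\n<p>In the actual pipeline the query vector would be derived from the emotion label and situation summary, and the retrieved passages would be merged with the chat history before generation.<\/p>\n\n\n\n<p>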
The emotion classifier itself achieves Macro-F1 = 0.81 (AUROC \u2248 0.91) without fine-tuning. Together, these results indicate that a lightweight emotion-classification and situation-summary layer can sharpen topical fit and increase empathic\/helpful tone without compromising safety.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SHRISTI SRIVASTAVA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Dong Si<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Thesis:<\/strong> AutismCarebot: An Empathy-Enhanced and Retrieval-Augmented Chatbot for Autistic Users<\/p>\n\n\n\n<p>Autism Spectrum Disorder (ASD) typically requires specialized communication strategies, tailored emotional understanding, and personalized resources &#8211; areas commonly overlooked by general mental health chatbots. This thesis introduces AutismCarebot, an empathy-enhanced, retrieval-augmented conversational AI built upon the LLaMA-3.2 (3B) Instruct model, designed explicitly for autistic users. Initially fine-tuned on empathetic dialogue datasets (Empathetic Dialogues, Amod Mental Health Counseling Conversations, Psych8k, ExTES), AutismCarebot was further specialized through fine-tuning on autism-specific data (TASD), enabling nuanced recognition and appropriate responses to autistic communication styles. The chatbot employs keyword-based heuristics to detect emotional cues such as anxiety, sadness, and overwhelm, responding with clear, empathetic validation and tailored reassurance. Retrieval-Augmented Generation (RAG) ensures accuracy by embedding evidence-based advice from credible autism-related resources, transparently cited within responses. Additionally, proactive crisis detection identifies self-harm indicators, immediately connecting users to global crisis support resources. 
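<\/p>\n\n\n\n<p>The keyword-based cue detection mentioned above admits a very small sketch. The lexicon below is hypothetical; the actual AutismCarebot keyword lists are not reproduced here:<\/p>\n\n\n\n

```python
# Hypothetical cue lexicon (illustrative only, not the project's actual lists).
CUE_LEXICON = {
    "anxiety": {"anxious", "worried", "nervous", "panic"},
    "sadness": {"sad", "down", "crying", "hopeless"},
    "overwhelm": {"overwhelmed", "too much", "can't cope"},
}

def detect_cues(message):
    # Return the set of emotional cues whose keywords appear in the message.
    text = message.lower()
    return {cue for cue, words in CUE_LEXICON.items()
            if any(w in text for w in words)}

print(sorted(detect_cues("I feel so anxious and overwhelmed today")))
```

\n\n\n\n<p>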
Usability testing and qualitative feedback from autistic individuals, caregivers, and educators demonstrated favorable usability, enhanced empathy, emotional responsiveness, and increased user trust. AutismCarebot provides a replicable, ethical, and effective approach to enhancing emotional support and crisis safety for autistic users, highlighting its practical value in therapeutic, educational, and peer-support environments.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, August 19<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JANYA BYSANI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Geethapriya Thamilarasu<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>12:00 P.M.<br><strong>Project:<\/strong> Low-Latency Packet-Processing Pipeline for Anomaly Detection in IoMT Networks Using<\/p>\n\n\n\n<p>The adoption of Internet-of-Medical-Things (IoMT) devices has turned IoMT networks into dense, latency-sensitive fabrics that relay vast quantities of protected health information. To safeguard this traffic, this project develops an edge-based intrusion prevention system that combines a Data Plane Development Kit (DPDK) fast path with a machine learning inference engine. Packets are captured in zero-copy, kernel-bypass mode and streamed through a feature extraction stage before being classified by a LightGBM model that delivers sub-500 \u00b5s inference latency.<\/p>\n\n\n\n<p>This pipeline further introduces a privacy-preserving adaptation loop that uses federated learning. Each gateway retrains the classifier on its local traffic and transmits the model to a custom aggregator. 
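<\/p>\n\n\n\n<p>One way to sketch this aggregation policy, with toy stand-ins for the models and the loss evaluation (all names are invented; the real candidates would be serialized LightGBM classifiers scored on held-out traffic):<\/p>\n\n\n\n

```python
def select_global_model(candidates, evaluate):
    # candidates: mapping gateway_id -> uploaded model; evaluate(model) -> loss.
    # Pick the uploaded model with the lowest loss as the new global model.
    best_id = min(candidates, key=lambda gid: evaluate(candidates[gid]))
    return best_id, candidates[best_id]

# Toy stand-ins: "models" are plain numbers, loss is distance from an ideal value.
uploads = {"gw-a": 0.45, "gw-b": 0.38, "gw-c": 0.55}
gateway, model = select_global_model(uploads, evaluate=lambda m: abs(m - 0.40))
print(gateway)  # gw-b
```

\n\n\n\n<p>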
The aggregator evaluates incoming models and selects the candidate with the lowest loss as the new global model, which is then redistributed to all gateways.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SPRING 2025<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, May 19<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SAVANNA (SAV) WHEELER<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Marc Dupuis<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; UW2 (Commons Hall) 327<br><strong>Thesis:<\/strong> Investigating The Relationships Between SNS Usage, Personal Information Disclosure, And Cybersecurity<\/p>\n\n\n\n<p>As social networking sites (SNSs) become ubiquitous for daily activities, users and regulators have raised concerns about the state of digital privacy and security. Cyberattacks on SNSs have exposed private data of millions of users, and cybersecurity threats propagate through social engineering over SNSs. To determine whether frequent SNS users who disclose personal information are at greater risk of cyberattacks, I conducted a hybrid survey-interview study measuring the correlations between personal information disclosure, SNS usage, cybersecurity practices, and past experiences with cybersecurity threats. The survey findings (n = 276) suggest that SNS usage frequency and usage for MNPS (meeting new people and socializing) or MEPO (make, express, or present more popular oneself) purposes have positive correlations with personal information disclosure. SNS usage frequency and personal information disclosure also had positive correlations with experienced cybersecurity threats, but little correlation with cybersecurity behavior. 
Interview responses highlighted how subjects experienced cybersecurity threats within SNSs.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, May 22<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ELIAS MARTIN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Afra Mashhadi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: AuditAI &#8211; An Interactive Platform to Audit Question-Answer (QA) Pairs from AI and LLM Agents<\/p>\n\n\n\n<p>AuditAI is an interactive platform that helps users audit a question-answer pair they received from conversations with an artificial intelligence (AI) agent.<\/p>\n\n\n\n<p>This tool is especially timely, as large language models (LLMs) and other AI agents become more prevalent across many contexts globally. In my capstone, I built a framework that helps users gain insight into whether their question-answer pair is accurate and appropriate, taking into account machine learning (ML) models combined with other sources of content from the internet as a ground truth (using Google Search to fact-check answers).<\/p>\n\n\n\n<p>The goal of AuditAI is not to definitively say whether a question-answer pair being audited is correct or incorrect, but to give the user ready access to the information needed to come to their own conclusion about the material they are trying to audit.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">PRATHAMESH PRAKASH BHALANGE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project<\/strong>: Peer-to-Peer Experience Synchronization System: Enhancing Communication and Control through Chat, Video Streaming, and Admin Features<\/p>\n\n\n\n<p>As demand for real-time collaboration and media-sharing tools has grown over the past decade, the development of communication platforms capable of delivering a seamless real-time experience has accelerated. Traditional systems rely heavily on centralized infrastructures to maintain coordination across distributed clients. While centralized architecture is effective for smaller-scale deployments, it brings significant challenges: limited scalability, performance overhead, higher operational costs, and infrastructure complexity.<\/p>\n\n\n\n<p>This project presents a Peer-to-Peer Experience Synchronization System, implemented as a browser extension, that leverages decentralized communication using WebRTC to enable scalable, real-time synchronization without depending on persistent server connections. The system handles session creation and connection establishment through a lightweight signaling server, which minimizes asset exchange cycles and reduces the server communication burden.<\/p>\n\n\n\n<p>Beyond synchronization, the system integrates a robust communication framework designed to improve user interaction and control. Core features include real-time chat, two-person video conferencing with synchronized audio, and admin access-control mechanisms embedded within the browser extension UI.<br>By extending the communication mechanisms of WebRTC and unifying these features within a single browser extension, this project demonstrates a holistic approach to decentralized streaming and communication. 
The system not only reduces infrastructure dependency but also enriches user experience through integrated tools for interaction and control.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">VIVEKANANDA REDDY LENKALA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. William Erdly<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>5:00 P.M.<br><strong>Project<\/strong>: BuzzMagnet: A Bidding Ecosystem for Democratizing Access Between Social Media Promoters and Marketing Opportunities<\/p>\n\n\n\n<p>In today&#8217;s digital landscape, social media usage has grown exponentially, creating opportunities for influencers to reach massive audiences through innovative approaches. However, many influencers struggle to establish sustainable partnerships with businesses that would allow them to monetize their content creation efforts. Simultaneously, businesses face challenges identifying appropriate influencers whose audience and style align with their brand values and marketing objectives. BuzzMagnet addresses this market inefficiency by creating a transparent bidding marketplace where businesses post advertising opportunities and influencers can bid to secure these campaigns. This reverse-auction approach empowers businesses to evaluate multiple potential influencers based on their proposals, while giving influencers of all sizes equal opportunity to showcase their value beyond mere follower counts. The platform provides comprehensive analytics to business owners, enabling data-driven decisions when selecting influencers based on relevant metrics such as engagement rates and previous campaign performance. By democratizing access to advertising opportunities, BuzzMagnet particularly benefits emerging influencers who might otherwise be overlooked in traditional influencer marketing approaches. 
For businesses, the platform offers the advantage of discovering high-potential content creators at earlier stages of their careers, potentially securing more authentic promotion at competitive rates. Through this innovative bidding system, BuzzMagnet aims to transform the influencer marketing ecosystem by facilitating more transparent, merit-based connections between businesses seeking promotion and the diverse community of digital content creators.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, May 23<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">KEVIN JOSEPH GRAFF<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Wooyoung Kim<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: Dual-Modality Alzheimer\u2019s Detection with Interpretable Models<\/p>\n\n\n\n<p>Early diagnosis of Alzheimer\u2019s disease (AD) is critical for enabling timely intervention and improving patient outcomes. However, traditional diagnostic methods often face limitations in accessibility, accuracy, efficiency, and, especially, interpretability. This capstone project explores the application of both Machine Learning (ML) models and Deep Learning (DL) techniques to address these limitations, with a particular focus on interpretability, in detecting Alzheimer\u2019s disease using demographic tabular data and 3D structural brain MRI scans from the OASIS-2 (Open Access Series of Imaging Studies) dataset. It investigates classical iML models including Decision Trees, Random Forests, and Logistic Regression, applied to demographic tabular data, with an emphasis on interpretability and transparency in addition to predictive performance. Evaluation metrics such as accuracy, precision, recall, and F1-score are used to assess model quality. 
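<\/p>\n\n\n\n<p>These metrics can be computed directly from predictions. A minimal, self-contained sketch with toy labels (1 = demented, 0 = non-demented; in practice a library such as scikit-learn would typically be used):<\/p>\n\n\n\n

```python
def prf1(y_true, y_pred, positive=1):
    # Accuracy, plus precision, recall, and F1 for the positive class.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Toy labels, purely for illustration.
acc, prec, rec, f1 = prf1([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
print(round(acc, 2), round(prec, 2), round(rec, 2), round(f1, 2))
```

\n\n\n\n<p>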
To further interpret model behavior, global and local model-agnostic explanation techniques are applied, including SHAP (SHapley Additive exPlanations), Permutation Feature Importance (PFI), and Partial Dependence Plots (PDPs). In parallel, a 3D Convolutional Neural Network (CNN), as a DL model, is trained to classify patients as either \u201cDemented\u201d or \u201cNon-Demented\u201d based on medical imaging of entire brain volumes. Explainability techniques such as Grad-CAM and saliency maps are employed to generate spatial visualizations that highlight key regions of the brain contributing to each prediction. These visual explanations help bridge the gap between model output and clinical insight, potentially increasing trust in ML-assisted diagnostics. This dual approach provides both structured insights from tabular data and spatial insights from medical imaging, illustrating how iML and DL can complement one another in the context of Alzheimer\u2019s detection.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">WONWHOO (ANDREW) NAH<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project<\/strong>: Understanding Software Engineering Principles: Developing a PowerPoint Builder Framework for Luna mHealth\u2019s Authoring System<\/p>\n\n\n\n<p>Luna mHealth has evolved over time, but its architecture required refactoring to improve extensibility and maintainability. This project explores how software engineering principles can transform PowerPoint-based educational content into a structured format in the Luna authoring system. 
At its core, PptxTreeBuilder is a modular framework built upon test-driven development that extracts and organizes essential elements from .pptx files into Luna content modules, ensuring efficiency, readability, maintainability, and scalability. Through this project, I learned how to build a clean architecture and apply lean development principles. Beyond its technical contributions, this system establishes a sustainable, extensible authoring platform, empowering future developers to refine and expand its capabilities.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, May 28<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">PRAGATI AMOL DODE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Erika Parsons<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; UW1 (Founders Hall) 370<br><strong>Thesis<\/strong>: Exploring Classification Methods for Motor Imagery and Execution EEG Signal Fluctuations<\/p>\n\n\n\n<p>Brain-Computer Interfaces (BCIs) offer promising applications in neurological rehabilitation through motor imagery (MI)-based training, which is the &#8220;intent&#8221; of performing an action. This research addresses the challenge of accurately classifying MI and motor execution (ME) based on Electroencephalography (EEG) signals. This kind of data is often limited by subject variability, non-stationarity, environmental noise during data collection, EEG device quality, and small dataset sizes. For our study, we propose to make use of a large external dataset, including data from 103 subjects (compared to the 9\u201312 subject datasets used in prior work). One of the main goals of this research is to integrate multiple feature extraction techniques spanning time, frequency, and spatial domains. 
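<\/p>\n\n\n\n<p>As one example of a time-domain feature family often used in EEG pipelines, the Hjorth parameters can be computed in a few lines; whether this study uses Hjorth features specifically is an assumption made here purely for illustration, and the signals below are synthetic:<\/p>\n\n\n\n

```python
import math

def hjorth_activity_mobility(x):
    # Hjorth parameters: two classic time-domain EEG features.
    # Activity = variance of the signal; Mobility = sqrt(var(dx) / var(x)).
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    dx = [b - a for a, b in zip(x, x[1:])]
    dmean = sum(dx) / len(dx)
    dvar = sum((v - dmean) ** 2 for v in dx) / len(dx)
    return var, math.sqrt(dvar / var)

# Synthetic "EEG": a 10 Hz wave vs. a 40 Hz wave, sampled at 250 Hz for 2 s.
fs = 250
slow = [math.sin(2 * math.pi * 10 * t / fs) for t in range(500)]
fast = [math.sin(2 * math.pi * 40 * t / fs) for t in range(500)]
_, mob_slow = hjorth_activity_mobility(slow)
_, mob_fast = hjorth_activity_mobility(fast)
print(mob_fast > mob_slow)  # faster oscillations yield higher mobility
```

\n\n\n\n<p>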
Effective EEG channel selection was guided by fMRI studies identifying MI- and ME-relevant Brodmann areas, combined with EEG-based statistical analysis, resulting in a refined set of 12 informative electrodes. Several machine learning models (SVM, RF, KNN, XGBoost, MLP) are evaluated, achieving up to 80% accuracy with improved robustness across subjects. These findings demonstrate enhanced generalizability and support the development of more reliable BCI applications for real-world rehabilitation scenarios.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">YUZE WANG<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. William Erdly<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: The QuickCheck Vision Screening Application: Multi-platform Mobile Development (iOS and Android) and Preparation for Clinical Trials<\/p>\n\n\n\n<p>According to the World Health Organization, roughly 2.2 billion people worldwide have some visual impairment. The key challenge in treating many forms of visual impairment is detection. Visual impairment is often gradual, meaning that people with it may not notice the changes in their perceptions. Also, children who do not initially develop normal visual abilities can lack a baseline of \u201cgood\u201d vision to compare their experiences against, and because of that may not realize they have vision problems. These impairments can affect individuals\u2019 independence, educational achievements, and mobility, and may also lead to physical injuries and mental health challenges. Traditionally, these vision conditions are expected to be detected in school eye exams. 
However, eye exams only take place a few times throughout the school year.<\/p>\n\n\n\n<p>QuickCheck is a quick and easy-to-use vision screening tool for mobile devices that can perform vision tests to determine if a patient is likely to have any specific visual conditions as the first step toward diagnosis and treatment.<\/p>\n\n\n\n<p>Throughout the project, functions were restored in QuickCheck. By adding functions for generating and sending result reports, QuickCheck became a standalone application. Also, support for iOS devices was added to QuickCheck during this project, bringing it to a wider user group.<\/p>\n\n\n\n<p>Keywords: Vision Screening, Mobile Application Development, Unity, Xcode<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">MICHAEL LEE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project<\/strong>: Learning Software Engineering Principles: Developing a Navigation System for Luna mHealth<\/p>\n\n\n\n<p>Luna mHealth is a mobile health education platform designed to deliver culturally relevant, offline-accessible content to underserved and low-connectivity communities. Luna transforms PowerPoint-based health materials into structured, interactive learning modules. The system supports both guided and exploratory navigation through sequences of content, using a lightweight mobile interface optimized for usability in resource-constrained settings.<\/p>\n\n\n\n<p>This project focuses specifically on the design and implementation of Luna\u2019s navigation system. Navigation is a core component of the user experience, as it determines how users interact with and progress through health education materials. 
Foundational software engineering principles were applied throughout the development process, including test-driven development (TDD), the SOLID principles, and the KISS (Keep It Simple, Stupid) and YAGNI (You Aren\u2019t Gonna Need It) philosophies. These practices helped ensure a clean, maintainable architecture focused on simplicity, scalability, and long-term extensibility. The goal was to design a streamlined, intuitive application that enhances the user experience while providing a strong architectural foundation for future development.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JASWANTH SRIVAN LAGADAPATI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>5:45 P.M.; <a href=\"https:\/\/washington.zoom.us\/j\/99302556006\">Join Jaswanth Srivan Lagadapati&#8217;s Online Defense<\/a><br><strong>Project<\/strong>: Understanding Software Engineering Principles: Building and Evolving the Architecture of the Luna mHealth Project<\/p>\n\n\n\n<p>Luna mHealth is a mobile health education platform designed to work offline in underserved communities. As the project grew, it became clear that its architecture needed to evolve. What began as a fast-moving prototype required a more deliberate structure, one that future contributors could understand, maintain, and build upon. In this project, I applied core software engineering principles to rework key areas of the Luna system: the authoring pipeline, the domain model, and the mobile rendering layer. I redesigned the Luna domain model to introduce clearer abstractions like LineComponent, Dimension, and SequenceOfPages, reflecting how real content is structured and consumed. I replaced a monolithic module generator with a modular builder-based framework, allowing content to be constructed in clean, testable steps. 
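<\/p>\n\n\n\n<p>The builder-based construction described above can be illustrated with a deliberately simplified sketch; the class and field names here are hypothetical, not the actual Luna types:<\/p>\n\n\n\n

```python
class ModuleBuilder:
    # Hypothetical, simplified stand-in for a builder-based framework:
    # each step contributes one piece of a content module and can be
    # unit-tested in isolation.
    def __init__(self, title):
        self.module = {"title": title, "pages": []}

    def add_page(self, text):
        self.module["pages"].append({"text": text})
        return self  # fluent chaining

    def build(self):
        return self.module

module = (ModuleBuilder("Hand washing")
          .add_page("Wet hands with clean water.")
          .add_page("Scrub for 20 seconds.")
          .build())
print(len(module["pages"]))  # 2
```

\n\n\n\n<p>Because each step is a small, independent method, test-driven development can target every builder step separately, which is the maintainability property described above.<\/p>\n\n\n\n<p>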
On the mobile side, I implemented logic to render line components directly from PowerPoint data, ensuring that visuals appeared consistently and accurately. Through practices like test-driven development, I gained firsthand experience with principles like Single Responsibility and Open\/Closed. More than just improving code, this work laid the foundation for a maintainable and extensible system, one that can support Luna\u2019s mission for years to come.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, May 29<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">STEPHANIE ANNE MURRAY<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Erika Parsons<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Thesis<\/strong>: Exploring Quantum Machine Learning-Enhanced Models for EEG Data Classification<\/p>\n\n\n\n<p>Electroencephalography (EEG) records brain activity linked to both executed and imagined movements, but separating real hand movement from the surrounding noise in high-dimensional EEG remains difficult. Reliable classifiers are therefore vital for accurately tracking patient progress over time. This work is part of a larger initiative, dubbed Smart NeuroRehab Ecosystem, which has two main goals: 1) to propose innovative physical-rehabilitation strategies for certain neurologic conditions (e.g., stroke) using cutting-edge technologies, making therapy more accessible, and 2) collecting EEG data for analysis and building machine-learning (ML) models to classify brain signals.<\/p>\n\n\n\n<p>EEG data are complex and difficult to analyze and classify. In this research we explore a quantum-machine-learning (QML) strategy for that purpose. 
Compared with traditional ML approaches, qubits and quantum circuits offer a different way to represent and process features, which could lead to more efficient classification.<\/p>\n\n\n\n<p>We implement and analyze a ten-qubit Variational Quantum Classifier (VQC) and compare its performance against a \u201ctraditional\u201d classification method\u2014a tuned Random Forest baseline. An existing publicly available data set is used for the experiments, which involve discriminating between specific motor tasks based on signals collected from a 64-channel EEG device.<\/p>\n\n\n\n<p>Across 40 preliminary runs, the VQC achieves macro-F1 \u2248 0.75, accuracy \u2248 0.76, and AUROC \u2248 0.83, outperforming the Random Forest (macro-F1 \u2248 0.71, AUROC \u2248 0.79). The majority of experiments were performed on a quantum simulator, with a subset of jobs submitted to a cloud-based quantum computer for testing.<\/p>\n\n\n\n<p>These findings show that hybrid quantum-classical models can match and occasionally surpass strong classical pipelines without expanding the computational footprint, offering a practical path toward longitudinal EEG monitoring within the scope of the Smart NeuroRehab project. Moreover, because quantum computing is still emerging, this work may open avenues for new advancements in the field.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">MANU HEGDE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Erika Parsons<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Project TLDR: Standalone desktop application for question answering and summarization using resource-efficient LLMs<\/p>\n\n\n\n<p>This project presents the design and development of a standalone desktop application for offline question answering and summarization over a user-provided document corpus, using resource-efficient large language models (LLMs). Targeted for Apple\u2019s M1\/M2 hardware, the application leverages on-device computation via the Apple Neural Engine (ANE) and Metal shaders, exploring the use of the NPU beyond traditional CoreML applications.<\/p>\n\n\n\n<p>The application addresses key concerns around data privacy, resource efficiency, and accessibility. Unlike cloud-based services that require constant internet access and raise privacy risks, this application offers a secure, local alternative optimized for researchers and students. It features a graphical interface and supports retrieval-augmented generation (RAG) over the user&#8217;s corpus, all while utilizing only a fraction of system resources to support seamless multitasking.<\/p>\n\n\n\n<p>Evaluation is conducted using both functional metrics (e.g., BERTScore against ChatGPT outputs) and non-functional metrics (e.g., memory and CPU usage). The result is a practical, efficient application that enables interaction with large academic corpora while preserving system responsiveness and data confidentiality.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, May 30<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">VEDANG PARASNIS<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Geethapriya Thamilarasu<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; Discovery Hall 464<br><strong>Project<\/strong>: Linux Kernel Integrated Security Framework for Real-Time Prevention of DNS Data Exfiltration in Distributed Environments<\/p>\n\n\n\n<p>Data exfiltration remains one of the most persistent and sophisticated threats in cybersecurity, with DNS frequently exploited as a covert channel for tunneling and command-and-control (C2) operations. Advanced persistent threat (APT) malware leverages DNS not only for stealthy data exfiltration but also for remote exploitation across multiple compromised nodes using techniques such as remote code execution, port forwarding, reverse tunneling, and process side-channel attacks. These threats are especially critical for hyperscalers managing large-scale server data planes running on Linux, where data sovereignty and integrity are paramount. DNS\u2019s ubiquity, inherent security flaws, and mandated unblocked availability on firewalls make it a high-risk vector for stealthy breaches and C2 attacks\u2014particularly difficult to prevent in real time across distributed environments.<\/p>\n\n\n\n<p>Existing solutions for DNS exfiltration security primarily rely on passive traffic analysis from centralized locations, using anomaly detection based on statistical heuristics and machine learning models trained on DNS traffic anomalies. Others depend on domain reputation scoring and static blacklists\u2014all operating in userspace. However, these approaches are inherently reactive, slow to respond, increase latency, lack security enforcement at the endpoint, and are often ineffective against stealthy, adaptive C2 implants that utilize domain generation algorithms (DGAs) to obfuscate attacker infrastructure. As a result, they offer no guarantees of preventing data loss before detection and are inefficient at combating C2 attacks across distributed environments. 
By the time an alert is triggered, malicious commands may have already been executed and significant damage inflicted\u2014especially in the case of advanced C2 botnet vectors.<\/p>\n\n\n\n<p>This project develops a novel, scalable framework for real-time prevention of DNS-based data exfiltration across distributed environments. It uses an agent-based, endpoint-centric approach, embedding security code throughout the Linux kernel network stack, the mandatory access control layer, and other core kernel subsystems. The framework performs deep packet inspection for advanced DNS payload analysis by parsing the DNS protocol inside the Linux kernel via eBPF, supported by real-time inference from a deep learning model trained on diverse data obfuscation techniques to detect malicious exfiltrated DNS payloads. Malicious C2 implants are terminated at the source on the endpoint, with detailed observability metrics exported to characterize implant behavior\u2014minimizing dwell time, accelerating incident response, and containing threats immediately. Additionally, the framework supports event streaming, dynamic domain blacklisting on DNS servers, in-kernel network policy enforcement, and cross-endpoint security policies for scalability. Experimental results demonstrate horizontal scalability, deep learning model accuracy near 99%, and sub-second response and containment of sophisticated DNS C2 attacks\u2014all with negligible or no data loss. This approach enhances real-time visibility, accelerates threat containment, and fortifies endpoint defense against emerging threats across distributed environments.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">BRENDA SUGEY VEGA CONTRERAS<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Geethapriya Thamilarasu<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; Discovery Hall 464<br><strong>Project<\/strong>: SAFE-MD: A Secure Architecture Framework for Enhanced Mobile Development<\/p>\n\n\n\n<p>The use of mobile applications has increased over the years since the introduction of smartphones and the creation of centralized repositories like Google Play and the App Store, where mobile health (mHealth) apps have gained particular traction. Their user base is expected to continue to grow, making them a target for attackers, as they carry out critical tasks and contain sensitive patient information. Different vulnerabilities have been identified, with multiple contributing factors. These often arise from a combination of insufficient knowledge of security best practices, limited expertise in implementation, and practical challenges, such as the need to meet development deadlines and manage financial limitations.<\/p>\n\n\n\n<p>Research has proposed conceptual and prototype frameworks to mitigate these security risks. However, most implementations focus exclusively on a single platform, primarily Android. Therefore, it is crucial to develop solutions that enable mHealth app developers to reduce the likelihood of user data breaches across multiple mobile platforms.<\/p>\n\n\n\n<p>This project presents SAFE-MD, a comprehensive security framework with a platform-agnostic approach that facilitates the integration of security mechanisms into mHealth applications at any time during the app development lifecycle. To ensure multiplatform compatibility, the security framework is built using Kotlin Multiplatform, with an initial focus on Android and iOS devices. 
Different platform-agnostic and platform-specific security mechanisms were integrated to protect users, such as password strength validation and biometric authentication.<\/p>\n\n\n\n<p>To validate the effectiveness and portability of the security framework across different mobile development environments, two mock mHealth apps were built using Kotlin Multiplatform and Flutter, running on two of the most attractive platforms for attackers, Android and iOS. Preliminary findings from the implementation phase indicate that SAFE-MD\u2019s platform-agnostic security mechanisms integrate smoothly into the Android and iOS mock apps built with both Kotlin Multiplatform and Flutter. However, the availability of platform-specific security mechanisms in the security framework may vary across platforms due to architectural or implementation differences.<\/p>\n\n\n\n<p>SAFE-MD simplifies the integration of security mechanisms in mHealth apps for multiple platforms and lays the foundation for future enhancements, including broader platform support, the improvement of current security mechanisms, and the integration of new ones to protect users.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, June 2<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ATUL AHIRE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Project:<\/strong> MASS Java Library Towards Its Use for a Graph Database System<\/p>\n\n\n\n<p>Distributed graph databases are foundational to modern applications such as social networks, recommendation systems, and transportation platforms. 
The MASS (Multi-Agent Spatial Simulation) framework\u2014an agent-based parallel computing library for distributed systems\u2014is designed to simulate spatial computations by deploying mobile agents over distributed data structures such as graphs. MASS\u2019s agent-based graph computing model enables intuitive programming and parallel traversal, making it well-suited for scalable graph database applications. This research addresses two core limitations in its graph database model: inefficient vertex placement and high communication overhead during agent migration. To improve data locality and reduce inter-node communication during graph traversal, we explored and integrated a graph partitioning strategy that minimizes edge cuts and better aligns with graph topology. This approach significantly reduced inter-node communication for graph benchmarks like Triangle Counting and Articulation Points by localizing vertex adjacency across fewer computing nodes.<\/p>\n\n\n\n<p>The second challenge targeted the messaging inefficiencies caused by TCP-based agent migration. We redesigned the MASS communication infrastructure using Aeron, a high-performance UDP messaging library, to support low-latency, high-throughput data exchange. This refactoring introduced a dual communication pattern: a structured master-worker protocol for control synchronization and a peer-to-peer mesh for direct agent exchanges. By combining optimized graph partitioning with Aeron-based messaging, the enhanced MASS system reduced communication latency and increased agent migration efficiency across large, distributed graphs.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">VANESSA ARNDORFER<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Michael Stiber<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Thesis<\/strong>: Network Behavior Analysis of Spike Timing Dependent Plasticity (STDP) in Simulated Neural Networks<\/p>\n\n\n\n<p>The machine learning landscape is rapidly evolving, with researchers often turning toward nature for inspiration. Understanding the development of neural networks in vivo contributes significant transferable insight for advancing both neuroscience and computational research. This project applies a multiplicative Spike Timing Dependent Plasticity (STDP) model to the weighted graph output from neural growth simulations and analyzes the resulting spike and weight changes over time. This preliminary investigation establishes a baseline process for understanding the effects of STDP on a neural network and provides a framework for defining the resulting network behavior. Through rigorous data analysis, we characterize bursting behavior during the refinement phase, analyze the progressive effects of STDP on synapse weights, and compare how the network behavior changes between the growth and refinement phases of neural development.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, June 3<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">AVIKANT WADHWA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Michael Stiber<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Thesis<\/strong>: Abstractions for Code Migration from CPU to GPU in Simulation Domain<\/p>\n\n\n\n<p>Large-scale simulations are crucial in science, enabling the modeling of complex phenomena that are difficult to study empirically. As they scale, they demand greater performance and efficiency. 
To meet this need, computing has shifted toward heterogeneous architectures that combine CPUs and GPUs. While effective, this shift introduces software engineering challenges, making abstraction an increasingly important solution. Abstractions hide low-level implementation details behind clean interfaces, improving clarity and reducing complexity.<\/p>\n\n\n\n<p>This thesis reviews existing abstractions, analyzing their integration effort, performance trade-offs, and limitations. It presents the design and implementation of DeviceVector, a lightweight abstraction that unifies host and device memory management in Graphitti, a high-performance graph-based simulation platform. DeviceVector reduces code duplication, introduces a CPU-GPU data relationship, and abstracts CUDA boilerplate through an interface that closely mirrors a standard C++ container. The thesis also discusses design approaches for extending support in the future to object hierarchies and general function-level abstractions, further minimizing logic duplication between host and device code. Overall, this work highlights how thoughtful abstraction design can bridge the usability-performance gap in heterogeneous computing systems.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">WILLIAM D. SELKE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Wooyoung Kim<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Thesis<\/strong>: Pairwise Normalization Reveals Predictive Immunosignatures in CAR-T Cell Therapy<\/p>\n\n\n\n<p>Immunoprecipitation (IP) assays are widely used to quantify protein-protein interactions (PPIs), yet their susceptibility to variability in protein concentration and experimental conditions often obscures meaningful biological signals. 
This variability poses a significant challenge for downstream analyses, particularly machine learning (ML) models that require well-separated feature spaces. To address this, we introduce Pair Vector Centralization (PVC), a normalization strategy that eliminates concentration-dependent noise while preserving biologically relevant variation. PVC operates by pairing each altered-state sample (e.g., Cytokine Release Syndrome, Neurotoxicity) with a corresponding baseline (Optimal Response) under similar experimental conditions, then transforming these pairs into vectors of change. A midpoint centering step further removes global shifts unrelated to the underlying phenotype.<\/p>\n\n\n\n<p>We applied PVC to a high-dimensional, high-variability dataset comprising 121 PPIs from 98 CAR-T therapy patients across 27 independent experiments. Compared to standard normalization techniques, including Z-score, quantile normalization, and PCA, PVC substantially improved both clustering quality and predictive performance. Notably, the F1 score of a baseline Random Forest classifier improved from 75.71% to 95.24% post-normalization. Heatmap comparisons of multiple normalization methods and ML classifiers further demonstrate PVC\u2019s consistent superiority across classification and unsupervised clustering metrics.<\/p>\n\n\n\n<p>These results underscore PVC\u2019s potential as a robust preprocessing tool for biological datasets affected by batch effects and concentration variability. Its applicability extends to other domains involving biological state transitions, including drug perturbation studies and disease progression modeling. Future work will explore its integration with broader computational pipelines and validation across additional high-throughput datasets.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">NEERAJ BAIPUREDDY<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Arkady Retik<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project<\/strong>: Smart Umpire: AI-Powered Decision Support and Analytics for Grassroots Cricket<\/p>\n\n\n\n<p>While professional cricket benefits from advanced technologies like the Decision Review System (DRS), local and amateur-level matches often lack reliable tools for decision-making. The few existing solutions for grassroots cricket are typically limited to LBW detection and do not support other important decisions such as run-outs and no-balls. They are also costly and not accessible to most local teams.<\/p>\n\n\n\n<p>Smart Umpire aims to provide a more complete and affordable alternative by using computer vision and machine learning to bring real-time decision support to local matches. The system analyzes video footage to detect LBWs, run-outs, and no-balls, as well as delivery patterns such as bounce location, deviation after pitching, and length classification. All models are developed in Python using custom datasets and are trained and tested on the Roboflow platform.<\/p>\n\n\n\n<p>This project establishes the foundation for a mobile application that will offer decision support, player analytics, and a subscription-based access model\u2014making professional-level insights available to the everyday game.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, June 4<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">PHAT TIEN TRAN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Wooyoung Kim<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Thesis<\/strong>: StackBERT-Enhancer: A Dual-Layer BERT-Based Framework for Enhancer Identification and Strength Classification in Genomic Data<\/p>\n\n\n\n<p>Enhancers are crucial regulatory DNA sequences that control gene expression and play essential roles in cellular differentiation, development, and disease. Despite their significance, accurately identifying and classifying enhancers remains challenging due to their context-dependent activity, complex sequence features, and lack of consistent structural patterns. Traditional computational approaches, such as sequence heuristics and shallow machine learning models, often fail to capture the long-range dependencies and multi-scale contextual information embedded in genomic sequences. Moreover, they typically lack interpretability, making it difficult to extract relevant biological understanding.<\/p>\n\n\n\n<p>This thesis introduces StackBERT-Enhancer, a novel deep learning framework designed to overcome these limitations. It addresses two core tasks: distinguishing enhancer sequences from non-enhancer sequences and classifying identified enhancers based on their functional strength. The framework leverages multiple transformer-based language models, each trained independently on DNA sequences tokenized with different k-mer sizes. This multi-k-mer strategy enables the capture of sequence dependencies and context across various scales. These individual models then serve as base learners within a stacking ensemble architecture. This ensemble approach significantly enhances classification accuracy, robustness, and generalization performance across both tasks, achieving state-of-the-art results.<\/p>\n\n\n\n<p>To handle the computational demands of large genomic datasets, the framework employs distributed multi-GPU systems for efficient model training and hyperparameter optimization. 
In addition, to bridge the gap between model predictions and biological insight, interpretability techniques are incorporated. SHapley Additive exPlanations (SHAP) are used to evaluate feature importance, while attention score analysis supports sequence motif discovery.<\/p>\n\n\n\n<p>Overall, StackBERT-Enhancer combines advanced machine learning with biological insight to create a robust and interpretable framework for enhancer identification and strength classification. By uncovering complex sequence patterns, it holds strong potential for applications in disease modeling and broader biomedical research.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">HOLT OGDEN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Multi-GPU Parallelized Agent Based Modeling<\/p>\n\n\n\n<p>Agent based modeling (ABM) is a method for simulating emergent behaviors in complex systems by modeling the interactions of individual agents. These simulations often require substantial computational resources, necessitating increased parallelization and spatial scalability to produce usable results. One approach is to use a computer\u2019s Graphics Processing Unit (GPU) to allow increased parallelization by utilizing the greater number of threads available on the GPU compared to the Central Processing Unit (CPU). The Multi-Agent Spatial Simulation (MASS) library for NVIDIA\u2019s Compute Unified Device Architecture (CUDA) provides a platform that allows users to write ABM programs to run on the GPU. However, these programs are limited in their spatial scalability by the memory available on a single graphics card. 
In this project, we improved the MASS CUDA library\u2019s Place object implementation by extending it to function over multiple GPUs connected via NVIDIA NVLink, increasing the potential simulation size of MASS CUDA programs. This required splitting ABM data between multiple graphics cards, ensuring memory synchronization between the cards, and recombining result data at the end of the simulation. These changes improved the runtime of ABM simulations by 35% and increased the maximum simulation size by 75%. In addition, these changes were designed to be abstracted from the user so that minimal changes are required by ABM programs written for previous versions of MASS CUDA. Overall, this project significantly expands the computational resources available to the MASS CUDA library, allowing the running of larger and more complex ABM programs.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">AZMEEN KAUSAR MOHAMMAD<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Dong Si<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Thesis<\/strong>: Enhancing Coach Bot Features through Adaptive Learning using NLP and Fine Tuning<\/p>\n\n\n\n<p>Artificial Intelligence (AI) is revolutionizing mental health care by providing innovative, scalable, and adaptive training solutions for caregivers. Caregivers often face challenges in adapting to diverse mental health scenarios and require tools that support personalized and effective training. To address this, Coach Bot was developed to train caregivers in empathetic communication through scenario-based learning. However, to meet the diverse needs of caregivers and make learning more effective, it is essential for the chatbot to adapt dynamically to individual user preferences and provide real-time, actionable feedback. 
This project focuses on developing an advanced AI-powered Coach Bot that incorporates adaptive learning paths and personalized feedback mechanisms. A detailed review of existing literature on conversational agents and applications of NLP in mental health was conducted, along with an analysis of classifying data into difficulty levels and evaluating caregiver responses. The overarching goal of this work is to enhance the design and functionality of adaptive chatbots to improve caregiver training, provide users with detailed feedback alongside EPITOME framework scores, and ultimately support better mental health outcomes.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">AMALAYE OYAKE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Erika Parsons<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>5:45 P.M.; Discovery Hall 464<br><strong>Thesis<\/strong>: Resilient State Convergence in Interplanetary and Edge Networks Using Conflict-Free Replicated Data Types (CRDTs)<\/p>\n\n\n\n<p>Embedded devices operating in harsh environments, such as outer space or remote locations, present significant challenges for system design, mainly because they must function under conditions that disrupt traditional design assumptions. Advancements in computing power and the miniaturization of modern electronics have enabled the implementation of distributed computing systems in isolated terrestrial locations as well as in space. The most extreme environments, including polar regions, outer space, and places characterized by resource limitations such as power or network connectivity, stand to benefit from these advancements in networking, computing, and Artificial Intelligence (AI).<\/p>\n\n\n\n<p>Space presents particular challenges, including high communication latency, intermittent connectivity, obscured signals, and bodies in motion. 
Deploying enterprise software or conventional IT approaches in these extreme conditions is ineffective. Conventional enterprise software solutions adhere to design principles that are tested to their limits, often leading to fragile and unreliable systems.<\/p>\n\n\n\n<p>Disruption-tolerant networking (DTN) is a relatively new set of Internet standards designed to address the challenges networked systems face in highly remote locations. It enables communication despite delays, disruptions, and disconnections. One limitation of the DTN standards is the lack of a standardized mechanism for exchanging participant contact information and validating the updated contact information. This exchange is crucial for maintaining a coherent global state across a DTN network, an essential capability for real-world applications such as Edge Networks and satellite communications.<\/p>\n\n\n\n<p>This research explores theoretical methods for exchanging contact information among participants and applies these theories to DTN. Specifically, it examines Eventual Consistency Theory in conjunction with established Distributed Snapshot algorithms such as Chandy-Lamport, Lai-Yang, Spezialetti-Kearns, and Epidemic Model Gossip Protocols, assessing how prior research in distributed systems can be applied to extremely remote systems. The study proposes suitable algorithms and combined approaches, evaluates their advantages and disadvantages, and presents quantitative results demonstrating system performance.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, June 5<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SUMIT HOTCHANDANI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Project<\/strong>: Link Prediction in Agent-Based Graph Database System<\/p>\n\n\n\n<p>This work introduces a comprehensive link prediction framework within an agent-based graph database system built on the Multi-Agent Spatial Simulation (MASS) library. Extending prior enhancements to the MASS framework\u2014including support for property graphs and distributed shared graph infrastructure\u2014the system integrates both topological and embedding-based approaches to infer potential links in complex networks. Classical topological methods such as Adamic-Adar, Resource Allocation, and Preferential Attachment are implemented alongside Fast Random Projection (FastRP), a dimensionality reduction technique used to generate node embeddings. These embeddings are subsequently leveraged in a k-Nearest Neighbors (kNN) classifier to identify probable future connections. The FastRP algorithm is realized using a novel agent-based propagation mechanism that exploits MASS\u2019s parallel and distributed architecture.<\/p>\n\n\n\n<p>By embedding these capabilities into the core MASS library, the proposed framework enables scalable, interpretable, and extensible link prediction for applications in domains such as social networks, biological systems, and recommendation engines.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SURBHI GUPTA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Dong Si<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: Developing Culturally Aware and Multilingual AI for Mental Health Support<\/p>\n\n\n\n<p>This project focuses on the development of multilingual, culturally adaptive AI-based chatbots to facilitate mental health intervention. The chatbot provides its users with faith-based reassurance, emotional support, and practical guidance while remaining sensitive to cultural nuances through the fine-tuning of its natural language processing models. It achieves culturally relevant, semantically accurate responses by applying techniques such as prompt engineering, translation evaluation, and answer strategy alignment. To validate these capabilities, the following metrics were used for evaluation: BLEU scores to analyze translation consistency, recall-based accuracy for strategy containment, and a Cultural Alignment Score (CAS) as a measure of faith, family, and affective sensitivity within the chatbot. Human evaluation was also conducted to ensure the responses\u2019 relevance, empathy, and cultural appropriateness. The system aligns well with therapeutic strategies. Currently, the chatbot serves four cultural groups, and future developments will enable coverage of more diverse communities. Furthermore, emphasis will be placed on user personalization and increased human feedback input to help grow the system&#8217;s adaptability and effectiveness.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SHARAN VAITHEESWARAN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Kelvin Sung<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Data-Driven Washington College Grant Prediction<\/p>\n\n\n\n<p>The Washington State Grant (WSG) is one of the state\u2019s largest sources of need-based financial aid, yet students often struggle to anticipate how much they will receive. Existing tools offer limited personalization and often overlook the complexity of eligibility rules, leaving students with uncertainty during college planning. This project addresses that gap by developing a machine learning\u2013based system to predict individualized WSG award amounts using a combination of institutional and macroeconomic data.<\/p>\n\n\n\n<p>We built on the College Affordability Model (CAM), a curated dataset of approximately 5,000 student-year records from Washington\u2019s public universities spanning 2008 to 2022. The dataset includes features such as household income range, dependency status, institution group, and actual WSG awards. To provide broader context, we enriched this data with external variables including GDP per capita, Consumer Price Index (CPI), graduation rates, and state education appropriations. These additions were informed by expert input and policy review, and aimed to capture systemic factors influencing annual funding patterns.<\/p>\n\n\n\n<p>After experimenting with several modeling approaches, we selected gradient-boosted decision trees (GBDTs) for their strong performance on tabular data. We compared CatBoost, LightGBM, and XGBoost, training each model using a consistent pipeline that included 10-fold cross-validation, hyperparameter tuning, and lag feature integration. 
CatBoost emerged as the top-performing model, offering high accuracy with minimal preprocessing.<\/p>\n\n\n\n<p>The final CatBoost model achieved an R\u00b2 of 0.94 and a Root Mean Square Error (RMSE) of $670 against a mean award value of $4,490, demonstrating strong predictive performance. A Shapley Additive Explanations (SHAP) analysis confirmed the model&#8217;s interpretability, highlighting key features such as income range, dependency status, institution group, and prior-year funding metrics.<\/p>\n\n\n\n<p>This work shows that combining student-level and macroeconomic data can produce reliable, individualized predictions of financial aid outcomes. The resulting tool has the potential to help Washington students make more informed decisions about college affordability with greater confidence and clarity.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JACOB (JAKE) KEMPLE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Thesis<\/strong>: Evaluating Vision-Language-Action Models in Robotic Manipulation: Performance, Implementation, and Comparison with Deterministic Systems<\/p>\n\n\n\n<p>Robotic manipulation systems typically use deterministic policies for perception, decision-making, and task planning, which achieve millimeter-level precision but require extensive specialized development and cannot easily generalize to new tasks. Emerging vision-language-action (VLA) foundation models promise to reduce this specialized effort and inflexibility through learned multimodal reasoning. However, their practicality in the real world and associated development costs remain largely unknown.<\/p>\n\n\n\n<p>This thesis presents a real-world comparison of a strong open-source VLA foundation model (OpenVLA-7B) against a fine-tuned deterministic control system. 
Both systems are evaluated on identical hardware consisting of a WidowX 250 6-DoF robotic arm, an Intel RealSense D415 camera, an NVIDIA Jetson AGX Orin edge computer, and the ROS 2 (Robot Operating System 2) framework. Each system repeatedly executes a pick-and-place robotic task under randomized initial conditions. Performance is measured using goal-oriented, object-centric metrics of accuracy, repeatability, and cycle time, adapted from ISO 9283 standards. Additionally, a qualitative analysis examines the installation effort and configuration challenges associated with each system.<\/p>\n\n\n\n<p>The primary contributions of this research include: (i) a comparative evaluation of performance and setup complexity between robotic systems utilizing a VLA-based control policy and conventional deterministic control logic; (ii) documentation of hardware, software, and configuration challenges encountered during VLA system implementation; and (iii) qualitative insights from real-world deployment, emphasizing usability and adaptability.<\/p>\n\n\n\n<p>Results indicate that current VLA foundation models underperform compared to deterministic control systems in terms of accuracy, repeatability, and cycle time, limiting their immediate viability for production-level robotic tasks. However, the inherent flexibility of VLA models suggests strong potential as future replacements for deterministic approaches, contingent upon improvements through fine-tuning, future optimizations, enhanced integration frameworks, and better overall performance metrics. 
These findings offer practical insights and set realistic expectations for developers considering transitioning from deterministic robotics systems to VLA-based implementations.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, June 6<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">PRATIK NARSING JADHAV<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Dong Si<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Project<\/strong>: Multilingual Carebot and Coachbot<\/p>\n\n\n\n<p>This capstone enhances the iCare project by developing Multilingual Carebot and Coachbot\u2014two conversational AI agents designed to provide mental health support and caregiver training in both English and Spanish. To address disparities in access for non-English speakers, the project fine-tunes advanced LLaMA-based large language models using domain-specific datasets and introduces a multilingual framework that dynamically adapts responses based on user-selected language while preserving therapeutic alignment. Carebot delivers empathetic support using psychologically informed dialogue strategies, while Coachbot generates and evaluates caregiving scenarios across affective and interpretive dimensions. The system is evaluated through traditional NLP metrics (BLEU, ROUGE, Perplexity), human feedback, and LLM-as-a-Judge assessments, along with techniques like Active Listening and Socratic questioning. Results show that the LLaMA 3.1 8B Instruct model consistently outperforms others in empathy and coherence, demonstrating that instruction-tuned LLMs can be effectively adapted for high-quality, multilingual mental health support.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JEREMY GAO<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Kelvin Sung<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: Real Time Snow Deformation<\/p>\n\n\n\n<p>In modern interactive 3D applications, realistic snow deformation plays a vital role in enhancing environmental immersion and visual fidelity. This project introduces a real-time solution for simulating dynamic snow surface deformation in response to object interaction. By combining orthographic depth capture, GPU tessellation, and heightmap-based vertex displacement in a unified rendering approach, the system delivers high visual fidelity and remains efficient across different object types and interaction scenarios.<\/p>\n\n\n\n<p>The solution consists of a pipeline with three core stages: pre-processing, geometric processing, and illumination. In the pre-processing stage, an orthographic camera positioned beneath the snow surface captures depth information from deforming objects and updates a global heightmap in real time. The geometric processing stage adaptively tessellates and dynamically refines mesh resolution in regions requiring high detail, enabling smooth and continuous deformations. The final illumination stage recalculates surface normals based on the deformed geometry and enhances them using reoriented normal mapping. This allows lighting to reflect both the physical contours of the displaced surface and subtle variations introduced by the deformation, resulting in a more cohesive and dynamic visual effect. Together, these three stages form a coherent pipeline that enables real-time snow deformation with high visual fidelity and computational efficiency, supporting a broad range of artistic and gameplay requirements.<\/p>\n\n\n\n<p>Extensive testing demonstrates that the system is capable of handling a diverse set of object shapes\u2014from sharp-edged cubes to organically irregular models\u2014with high fidelity. 
The delivered solution achieves a balance between realism and performance, operating efficiently under interactive conditions while supporting detailed visual effects such as footprint trails and rim lifting. Although current limitations include terrain flatness and lack of force-based deformation, the framework provides a solid foundation for future extensions toward physically grounded and terrain-adaptive snow simulation.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">AMULYA HOLENARASIPURA NARAYANA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Enhancing a Content Management System for Language Learning: Performance, Usability, and Engagement (MeTILDA)<\/p>\n\n\n\n<p>The extinction of languages is a global concern, with over 1,500 languages at risk of disappearing by the end of this century. Melodic Transcription in Language Documentation and Analysis (MeTILDA) is a web-based toolset designed to support the documentation, preservation, and education of endangered languages. As part of this initiative, the project focuses on enhancing the Content Management System (CMS) of MeTILDA, an e-learning platform specifically developed to facilitate language preservation.<\/p>\n\n\n\n<p>The platform is designed to engage Indigenous communities, linguists, and educators specializing in language preservation by providing an interactive and user-centric learning experience. To improve the usability and effectiveness of the system, several key features are introduced to enhance the teaching and learning experience. These include a teacher dashboard for visualizing student performance, grading assignments, creating announcements, and sending notifications. 
Additionally, students can access their test and assignment grades and utilize the Play and Learn feature for an engaging language learning experience.<\/p>\n\n\n\n<p>The platform leverages a modern web technology stack to ensure scalability, responsiveness, and maintainability. The frontend is developed using React.js, providing a dynamic and intuitive user interface that enhances the overall learning experience. The backend services are powered by Firebase, offering real-time database capabilities and authentication for seamless user interactions. PostgreSQL serves as the primary relational database, ensuring robust data management and efficient querying for storing and retrieving linguistic and user-related data.<\/p>\n\n\n\n<p>To enhance system performance and minimize latency, API call optimization techniques such as connection pooling and data caching were implemented, effectively reducing redundant database queries and improving response times. The middleware, built using Python Flask, acts as an intermediary between the frontend and backend, handling data processing, request validation, and business logic execution. By integrating these technologies, the system effectively supports endangered language education, making learning more accessible and impactful.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">WINTER 2025<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, February 24<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SHIVAM GANESH PAWAR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; UW2 (Commons Hall) Room 327<br><strong>Project<\/strong>: Spy-Shield: Hidden Wi-Fi Camera Detection through Light Manipulation and Traffic Analysis<\/p>\n\n\n\n<p>Wi-Fi cameras are optical devices that can record\/stream a video of a scene to a remote device. 
The term \u2018Wi-Fi\u2019 indicates that they can be accessed directly over the internet. Advancements in the Internet of Things (IoT) have made these devices small and affordable. They are wireless, easy to install, and can stream video in real time, making them popular for security and monitoring purposes. However, these cameras have raised serious concerns about privacy infringement. These discreet devices facilitate clandestine surveillance and are difficult to detect manually. This raises privacy concerns for third-party occupants, such as a guest in a hotel room who may not be aware of deployed spy cameras. Current methods that detect hidden cameras by analyzing responsive Wi-Fi traffic flow achieve a detection accuracy of approximately 97%-99%. However, Wi-Fi cameras can intentionally delay data transmission to avoid real-time correlations between the environment and the Wi-Fi transmission. To address this challenge specifically, Spy-Shield presents a delay-proof method for detecting Wi-Fi cameras. Spy-Shield builds on the research of Dr. Brent Lagesse, &#8220;Detecting Streaming Wireless Cameras with Timing Analysis&#8221;. Using environmental light manipulation and machine learning algorithms, Spy-Shield looks for a causal relationship between patterns in observable Wi-Fi traffic and a predetermined pattern of light flashing (a signature). The proposed detection methodology can successfully discover hidden Wi-Fi cameras with an accuracy rate of 97.5%, even with a stream delay. Notably, this approach can detect cameras without requiring a connection to the same Wi-Fi network or prior knowledge of the camera\u2019s location.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, February 26<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">Austin M. Yao<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Kelvin Sung<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>5:45 P.M.; Online Defense<br><strong>Project<\/strong>: Infinite 2D Map Generation with Nested Wave Function Collapse<\/p>\n\n\n\n<p>Procedural Content Generation (PCG) enables scalable, dynamic virtual worlds by automating the creation of game assets such as textures, maps, and environments. Among PCG techniques, 2D map generation, particularly for expansive, continuously generated worlds, stands out as a critical application, balancing functional design and visual coherence to shape gameplay experiences. The Wave Function Collapse (WFC) algorithm is particularly suited for this task, offering strengths in constraint satisfaction and the ability to expand small input samples into diverse, coherent outputs. This project investigated variations of WFC to identify those most suitable for real-time scalability and result coherency. It implements a hybrid WFC design that combines a lightweight process developed by \u201cGame Dev Garnet\u201d for efficient runtime, a hierarchical structure from Y. Nie et al. for on-demand generation, dynamic weighting as implemented by \u201cChrisHanna\u201d for map content control, and important visual variables as used by M. Kleineberg for map aesthetics.<\/p>\n\n\n\n<p>Additionally, the system integrates an efficient User-Controlled Travel System, enabling dynamic content assembly through weighted tile controls and real-time adjustments. The results from this project provide four key accomplishments: 1) supporting efficient infinite 2D map generation in real-time, 2) enabling designers to influence map design through weight control on tiles, 3) showcasing the impact of tile variation and decoration on map aesthetics, and 4) providing guidelines for tile and fallback tile design to maintain map coherence. 
Although the project demonstrates WFC\u2019s ability to create seamless and diverse maps, challenges encountered included backtracking inefficiencies and computational overhead. These challenges serve as a reference for future investigations into 2D infinite map generation.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, February 27<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JOHN FISCHER<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Kelvin Sung<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.; Online Defense<br><strong>Project<\/strong>: A Reusable Inventory System<\/p>\n\n\n\n<p>Video games commonly use an Inventory System to satisfy specific requirements for a set of mechanics. These systems can be complex due to the mechanics, their item data management, and the nuances of the game engine that they are developed for. The common game mechanic functionality of adding, removing, and swapping items, along with data management for them, is shared across many different Inventory Systems. Identifying these similarities highlights the potential for reuse. The focus of this work was to create a unified and coherent interface to reduce the inherent complexity of Inventory Systems by providing a generic software module that can perform the common game mechanics and manage related item data.<\/p>\n\n\n\n<p>Numerical, Allocated, and Spatial (NAS) categorizations were adopted for Inventory Systems, which allowed common vocabulary, game mechanics, and logical requirements to be established across the multitude of available implementations. Design guidelines were established to focus the implementation effort, which was organized through an outlined scope. 
The initial approach was to create a backend independent of game engine functions, which would be consumed by a frontend to instantiate the display and manage user input.<\/p>\n\n\n\n<p>The interface was verified throughout the development process by explicit demonstrations. Item data management, common NAS functionality, and unique restrictions were used to confirm a working backend. The frontend game engine portion was verified by having visual displays that a user could interact with for each of the NAS categorizations as well as a combination of them. The generic software module, intended as the foundation of an Inventory System, was successfully created using the backend and frontend-specific API approach. Future work considerations could include incorporating 3D Spatial configurations, providing specific examples of how to add niche mechanics, and generalizing the logic implementation requirements for any game engine frontend.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, March 3<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">WILLIAM SHEN HAO<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; Online Defense<br><strong>Project<\/strong>: Extending the Algorithm and Enhancing the User Interface of a Rhyme Detection Application for English Rap Lyrics<\/p>\n\n\n\n<p>Rhyme is a key component in hip-hop music. Over the course of the genre, it has evolved from simple patterns such as end rhymes to more complex internal and multisyllabic rhymes. Although it is possible to highlight rhymes in lyrics by hand, this project demonstrates that an automated system for highlighting rhymes is useful for helping people understand the role of rhyme in rap music. 
Survey results also indicate that automatic rhyme detection aids in lyric composition. Most existing literature focuses on simpler rhyme patterns, whereas research on more complex rhyme types tends to focus on lyric generation instead of detection. This project uses one of the few published works that detect more complex rhyme patterns and extends the application\u2019s algorithm to detect a different rhyme type. It also makes improvements to usability, primarily by migrating the application from a Java GUI to a deployed website with a Spring Boot server and a React frontend. A usability survey was conducted with participants with varying levels of background knowledge of the hip-hop genre, and responses were generally favorable. Further improvements can be made to the usability of the website, including increasing the load that the website is able to handle. Future work also includes highlighting identical patterns with the same color and making another extension to the algorithm.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">NIDHI KUMAR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; Online Defense<br><strong>Project<\/strong>: Rhythm Art in Melodic Transcription for Indigenous Language Documentation and Application (MeTILDA)<\/p>\n\n\n\n<p>Linguistic diversity is under significant threat, with over 1,500 languages projected to become extinct by 2100. One such language is Blackfoot, spoken in Alberta, Canada, and Montana, U.S.A. Blackfoot is a pitch-accent language, where variations in pitch can change word meanings, posing challenges for documentation and teaching.<\/p>\n\n\n\n<p>This project enhances MeTILDA (Melodic Transcription in Language Documentation and Application), a cloud-based system for analyzing Blackfoot word pronunciation and generating Pitch Art. 
MeTILDA is a collaborative effort between researchers from the University of Washington Bothell and the University of Montana, combining expertise in linguistics, computer science, and language documentation.<\/p>\n\n\n\n<p>Key improvements include the development of a Rhythmic Visualization tool for better representation of linguistic rhythm, a redesigned Learn page for improved usability, and a more modular codebase to facilitate long-term maintainability. These enhancements support ongoing efforts to document and revitalize the Blackfoot language while contributing to broader linguistic preservation efforts.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, March 7<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">RITHI AFRA JERALD JOTHI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Geethapriya Thamilarasu<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; Discovery Hall Room 464<br><strong>Project<\/strong>: A Hybrid Noise Distribution Framework for Privacy Protection in Mobile Environments<\/p>\n\n\n\n<p>The proliferation of mobile devices has resulted in the generation of huge amounts of data. While Differential Privacy (DP) has become the standard for protecting sensitive information on mobile platforms, existing noise mechanisms struggle with the privacy-utility tradeoff. In mobile environments, mechanisms with zero-centered probability density and sharp kurtosis, such as Laplace, are computationally efficient and provide good utility but are ineffective against outliers due to their light tails. This makes mobile device users&#8217; data vulnerable to adversarial reconstruction attacks. The Pareto distribution\u2019s heavy tails enhance outlier privacy by injecting amplified noise, but its steep central truncation degrades utility for non-outliers. 
The Exponential distribution provides smooth, consistent perturbation, preserving statistical continuity, yet its rapid tail decay weakens privacy for extreme values, making it vulnerable to membership and attribute inference attacks. These issues are particularly pronounced in mobile environments, where high utility and sensitivity are required for effective data processing.<\/p>\n\n\n\n<p>To address these challenges, this research introduces a Hybrid Noise Distribution (HND) framework that combines the strengths of the Laplace, Exponential, and Pareto mechanisms. The HND framework integrates the sharp central peak of Laplace, the heavy tails of Pareto, and the smooth decay of Exponential. This hybrid approach ensures optimal privacy and utility in mobile environments, where both data utility and privacy are critical. The proposed framework has been implemented and tested in mobile environments using Kotlin Multiplatform, enabling seamless integration across Android and iOS devices. Empirical evaluation demonstrates that the hybrid mechanism outperforms the Laplace, Exponential, and Pareto mechanisms in the privacy-utility tradeoff. Future work will focus on adaptive noise scaling techniques and integration with federated learning architectures.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, March 10<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">RAJ DIPESH PAREKH<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Geethapriya Thamilarasu<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; Discovery Hall Room 464<br><strong>Project<\/strong>: ExplainFed: An Explainable Gradient-based Framework for Detecting and Characterizing Poisoning Attacks in Federated Learning<\/p>\n\n\n\n<p>Federated learning systems are increasingly vulnerable to poisoning attacks, yet existing defense mechanisms often lack interpretability, making it difficult for system administrators to trust and act upon their detection decisions. We present a novel gradient-based detection framework that not only identifies poisoning attacks but also provides comprehensive explanations for its decisions. Our approach combines temporal gradient analysis, layer-wise examination, and pattern recognition to detect three types of attacks: label flipping, distributed backdoor, and model poisoning with activation functions (MPAF). Experiments on the MNIST and Fashion-MNIST datasets demonstrate that our method achieves superior detection rates (&gt;90% TPR) while maintaining low false positive rates (&lt;4%) across all attack types. The integrated explanation system generates detailed insights that reduce administrator response time by 62.4% and improve resolution accuracy by 15.1%. Our framework maintains high accuracy (&gt;91% on MNIST, &gt;83% on Fashion-MNIST) under attack conditions, significantly outperforming existing defenses while providing actionable explanations for detected anomalies.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ALAN LAI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.; Online Defense<br><strong>Project<\/strong>: Learning Software Engineering Principles Through Designing a Validation Framework for Luna mHealth&#8217;s Authoring System<\/p>\n\n\n\n<p>This paper describes two main aspects of my capstone project: designing a validator framework for Luna mHealth\u2019s authoring system, and the software engineering principles learned along the way. Luna mHealth is a mobile health education framework that empowers communities to access essential educational information in an offline setting. Luna content modules are converted from PowerPoint files created by authors.<\/p>\n\n\n\n<p>I created a validator framework to alert content module authors to potential issues with their PowerPoint files. I designed and implemented validator and validation issue interfaces, and created an issue renderer class to render an issue in text for authors to read.<\/p>\n\n\n\n<p>Along the way, I learned about my fast and ambiguous coding habits. The Luna project helped me learn about the importance of targeted refactoring to unify naming conventions, reduce duplication, and separate responsibilities. This led to a codebase that was more comprehensible and maintainable. I applied key improvements to the Luna codebase such as splitting multi-responsibility classes and systematically renaming vague variables to clarify intent. I created an onboarding document and a pull request checklist to promote iterative, test-driven development and enforce modular, reliable coding practices.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, March 12<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SIDDHANT THAKUR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; Online Defense<br><strong>Project<\/strong>: Peer-to-Peer Experience Synchronization System: Discovery, Connection, and State Synchronization<\/p>\n\n\n\n<p>The rapid evolution of digital collaboration platforms has spurred innovative approaches to synchronized streaming, a domain traditionally dominated by centralized architectures such as WebSockets. While existing systems effectively coordinate media playback across distributed clients, they suffer from scalability limitations and high operational costs due to their reliance on persistent server-mediated channels. These constraints hinder the efficiency of large-scale deployments, necessitating alternative solutions that reduce dependence on centralized infrastructure.<\/p>\n\n\n\n<p>This project presents a decentralized streaming synchronization system that minimizes reliance on central servers for synchronization event transmission by leveraging peer-to-peer (P2P) communication via WebRTC. The proposed system optimizes the WebRTC P2P connection establishment workflow, reducing dependency on centralized signaling components by decreasing the number of connection asset exchange cycles required for successful P2P connectivity. This enhancement significantly streamlines connection initialization, improving overall system efficiency and reducing server load.<\/p>\n\n\n\n<p>Furthermore, the system reduces the number of persistent bidirectional communication links with the signaling server, resulting in an N-fold improvement in serving capacity and the ability to handle higher concurrent loads. 
By alleviating server-side constraints, the approach enhances scalability and enables seamless synchronization across distributed clients.<\/p>\n\n\n\n<p>The project includes a comprehensive evaluation of the system\u2019s scalability and user experience improvements through load and end-to-end integration tests. These tests reveal an average response time of 2 milliseconds at concurrent loads of up to 800 requests per second, with a maximum observed end-to-end connection latency of 8 seconds, demonstrating the viability of a decentralized approach to optimizing real-time streaming experience synchronization.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, March 13<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">HARI KRISHNAN RAJ KUMAR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; Online Defense<br><strong>Project<\/strong>: Rhythm Art in Melodic Transcription for Indigenous Language Documentation and Application (MeTILDA)<\/p>\n\n\n\n<p>Linguistic diversity is under increasing threat, with over 1,500 languages expected to become extinct by 2100. Blackfoot, a pitch-accent language spoken in Alberta, Canada, and Montana, U.S.A., is one such language at risk. In Blackfoot, variations in pitch can alter the meanings of words, making documentation and teaching a complex challenge.<\/p>\n\n\n\n<p>This project aims to enhance MeTILDA (Melodic Transcription in Language Documentation and Application), a cloud-based platform developed to analyze Blackfoot word pronunciation and generate Pitch Art. 
MeTILDA is a collaborative initiative between the University of Washington Bothell and the University of Montana, bringing together expertise in linguistics, computer science, and language preservation to address the unique challenges of documenting and revitalizing the Blackfoot language.<\/p>\n\n\n\n<p>Key improvements include the development of a Rhythmic Visualization tool for more effective representation of linguistic rhythm and a more modular codebase that supports long-term maintainability. Furthermore, the entire infrastructure has been rebuilt to improve scalability and performance. Another addition is the implementation of a robust feedback system, enabling users to provide real-time input and submit suggestions for continuous improvement. The feedback feature is designed to ensure that the system evolves in line with user needs and remains flexible in addressing the dynamic requirements of linguists, language learners, and the broader community.<\/p>\n\n\n\n<p>These updates play a crucial role in supporting the ongoing efforts to document and revitalize the Blackfoot language while contributing to broader initiatives aimed at preserving linguistic diversity.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">THUAN TRAN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; Online Defense<br><strong>Project<\/strong>: Calendar Retrieval and Organization Using Smart Home Assistant<\/p>\n\n\n\n<p>For calendar users, existing processes that rely on manual interaction with physical devices to manage digital schedules are inefficient and inaccessible. 
Event organizers are inconsistent in how they create and distribute schedules, which makes schedules difficult for users to keep track of.<\/p>\n\n\n\n<p>This project presents the development of an Alexa Skill voice assistant designed to make querying schedules more efficient and accessible, along with a set of APIs for event organizers to create and manage such schedules. Users can manage their personalized schedules with voice commands, which is ideal for visually impaired users or those who want hands-free interaction. The system is backed by a back-end infrastructure where event organizers can create, modify, and manage schedules via API.<\/p>\n\n\n\n<p>By having a common platform for both organizers and users to manage schedules, this project significantly helps reduce duplication of effort and ensures a consistent, improved experience for both organizers and users when interacting with schedules. Using this platform allows organizers to update their schedules when needed, and users to receive the latest schedule details.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Tuesday, March 18<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">BOYAN HRISTOV<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Yang Peng<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>5:45 P.M.; Online Defense<br><strong>Project<\/strong>: Modular and Interactive Simulation Tool (MIST) for Vehicular Edge Computing: A Benchmarking Platform for Dynamic Task Offloading<\/p>\n\n\n\n<p>As the demand for processing more and more data from the Internet of Things grows, edge computing systems face increasing challenges in producing results in real time. During peak times or emergencies, conventional static edge servers often struggle to meet the low-latency requirements of time-critical tasks. 
This report presents MIST, a novel simulation tool designed for vehicular edge computing scenarios, in which a mobile vehicular computing unit dynamically assists edge servers that are unable to meet their task-processing requirements. MIST is built on top of the Unity engine with a modular architecture that decouples its key components (task generation, server monitoring, vehicle movement, and scheduling logic). This design allows MIST to serve both as a real-time interactive visualization of the offloading process and as a benchmarking platform. In its current version, the simulator implements several heuristic scheduling algorithms (Greedy, Closest Overloaded, and Center-of-Load-Based), which serve as baselines for benchmarking various dynamic task offloading algorithms. The platform is designed to facilitate the rapid prototyping and analysis of future, more advanced scheduling algorithms developed by researchers or other users interested in the topic.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">AUTUMN 2024<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, October 14<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">YANG YU<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Yang Peng<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>9:00 A.M.; Online Defense<br><strong>Project<\/strong>: Design and Development of a Scalable Communication Plane for Efficient TinyML Model Deployment on IoT Devices<\/p>\n\n\n\n<p>This project focuses on designing and developing an efficient model deployment service for TinyML on IoT devices. The system addresses the critical challenges of scalability, end-to-end communication, and model deployment in resource-constrained environments. 
The proposed solution provides flexibility for various IoT use cases by incorporating both device-as-client and device-as-server operation modes. The device-as-client mode allows edge devices to periodically check with a central server for new models. In contrast, the device-as-server mode enables real-time model deployment via a lightweight HTTP server running on the edge devices.<\/p>\n\n\n\n<p>A key feature of the system is the Over-The-Air (OTA) update mechanism, which allows models to be updated remotely, reducing the need for physical device access. The combination of HTTP-based request processing, heartbeat monitoring, and Redis-based data management enhances the system&#8217;s robustness and ensures efficient communication between devices and servers. Performance evaluations have shown that the system can handle large-scale deployments while maintaining low-latency, energy-efficient operation. Future work may include further optimizations for scalability, model versioning, and enhanced security protocols to better support more extensive and complex IoT systems.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, October 25<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SHENG WANG<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Yang Peng<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:00 P.M.; Online Defense<br><strong>Project<\/strong>: TinyML Deployment Optimizer: A Management System Solution for IoT and Embedded Ecosystems<\/p>\n\n\n\n<p>With the rapid advancement of machine learning and the widespread adoption of Internet of Things (IoT) devices, deploying artificial intelligence on resource-constrained devices has become an increasingly important topic. 
This paper addresses the critical challenge of efficiently deploying and managing machine learning models on small, resource-constrained Internet of Things and embedded devices, a practice known as Tiny Machine Learning (TinyML). It presents a novel management system that streamlines the deployment process and ensures compatibility between models and devices in resource-limited environments. The platform integrates model conversion, compatibility checking, and deployment tracking; provides tools for converting standard machine learning models to formats suitable for embedded devices; and prepares device firmware for over-the-air updates. The platform&#8217;s architecture, built on a microservices framework and utilizing in-memory data storage, ensures scalability and real-time responsiveness. The system demonstrates improvements in deployment efficiency and management capabilities in rigorous simulations. It shows significant success in model-device compatibility matching, considerably reducing model deployment time compared to manual methods. The results demonstrate the robustness of the methodology in tackling critical challenges related to the deployment and management of TinyML systems. This work contributes to TinyML by providing a scalable, user-friendly solution that addresses the complexities of managing machine learning deployments on resource-constrained devices, paving the way for widespread adoption of TinyML and enabling applications ranging from smart home devices to advanced industrial sensors.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, November 27<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JACOB CHESNUT<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Kelvin Sung<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.; Online Defense<br><strong>Project<\/strong>: Achieving Real Time Ray Traced Rendering for a Virtual Reality Headset Using Foveated Rendering<\/p>\n\n\n\n<p>Virtual reality (VR) is a system that supports immersive observation of, and interaction with, 3D virtual scenes. Ray tracing is a rendering method capable of producing high-quality images of 3D virtual scenes. Ray tracing in VR allows realistic lighting effects to be combined with immersive 3D. However, rendering to VR hardware with a ray tracer is expensive, making it especially challenging to meet the performance requirements of real-time rendering. This is because of the high-resolution screens of most VR headsets and the high per-pixel computation cost of ray tracing.<\/p>\n\n\n\n<p>Foveated rendering, which uses the user&#8217;s gaze position to inform the distribution of rendering resources across the screen, is a technique that can reduce the cost of rendering and allow the combined use of VR and ray tracing. This method has its own challenges, including the need for eye-tracking hardware, the need to balance quality drop-off and performance, and the need to adjust to each user. Modern VR systems have eye-tracking capabilities, and as such are a good choice for supporting foveated rendering systems.<\/p>\n\n\n\n<p>This project explores the use of adaptive-resolution foveated rendering to provide real-time ray tracing for VR systems. To better address differences between users, my system supports user calibration and postprocessing effects to overcome common quality issues resulting from foveated rendering. The final system is capable of real-time ray tracing of a moderately complex environment to a VR device. The final image is of high quality, with virtually unnoticeable effects from foveated rendering. 
Future work includes improvements to handling of dynamic objects and fine tuning of runtime performance by balancing CPU and GPU processing.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, December 2<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">HAIHAN JIANG<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Annuska Zolyomi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; Online Defense<br><strong>Project<\/strong>: Development and Evaluation of a User-Centric Online Pediatric Portal for the Developmental Disability Community<\/p>\n\n\n\n<p>This project centers on the redesign of the Developmental and Behavioral Pediatrics (DBP) online portal, aiming to address the limited accessibility and user engagement of its current static design. Guided by principles of user-centered design (UCD) and human-computer interaction (HCI), the project seeks to create a dynamic, responsive web platform tailored to the diverse needs of clinicians, educators, and families. Key objectives include improving navigation, enhancing accessibility, and introducing features like personalized content and advanced search functionality.<\/p>\n\n\n\n<p>The redesigned portal will leverage modern web technologies, such as React, to transform static resources into an interactive and intuitive platform. The project integrates iterative stakeholder feedback to refine its architecture and user interface, emphasizing content organization, usability, and stakeholder satisfaction. It also aspires to set benchmarks in pediatric health information dissemination, balancing technical excellence with user engagement.<\/p>\n\n\n\n<p>Deliverables include a responsive portal design, user role-based access paths, and robust administrative features, ensuring content management and sustainability. 
By evaluating the portal&#8217;s impact on user trust, accessibility, and information retrieval efficiency, this project contributes significantly to advancing digital resources for the neurodevelopmental disability community.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, December 4<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SIRI CHANDANA PRIYA BADDULA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.; Online Defense<br><strong>Project<\/strong>: Enhancing Hidden Device Localization and Room Layout Estimation<\/p>\n\n\n\n<p>The widespread adoption of IoT devices has brought significant privacy concerns, particularly with hidden streaming devices misused in private spaces such as homes, Airbnb rentals, and hotel rooms. Traditional localization methods require time-intensive physical traversal and network traffic analysis, often taking 5 to 30 minutes. This work introduces an innovative system that eliminates the need for physical traversal, leveraging LiDAR technology to capture environmental dimensions and density, combined with Channel State Information (CSI) for enhanced accuracy. Processed through a Transformer Neural Network (TNN), the system achieves an average localization precision of around 70% with a reduced localization time of just 30 seconds. Tested across varied environments, the system demonstrates adaptability and precision, though challenges remain in dense spaces. This novel approach sets a new benchmark for efficient, accessible, and privacy-conscious localization of hidden devices, offering a significant advancement in safeguarding personal spaces.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SRAVYA KODI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Yang Peng<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; Online Defense<br><strong>Project<\/strong>: Latency Reduction for Data Processing in Vehicular Edge Computing using Reverse Offloading<\/p>\n\n\n\n<p>Vehicular Edge Computing (VEC) has emerged as a transformative paradigm to address the growing demand for low-latency and high-throughput data processing in connected and autonomous vehicular networks. As vehicular applications become increasingly sophisticated, efficient data processing at the network edge becomes imperative. This project implements a reverse offloading-based scheme to reduce data processing latency in VEC environments. The proposed scheme strategically redistributes computational tasks from roadside units (RSU) to nearby vehicles with sufficient computing power and capacity. By intelligently offloading processing tasks closer to the data source, this approach avoids the latency caused by RSU overloading, and it further reduces the communication delay associated with sending data to remote cloud servers during RSU overloading time. To validate the effectiveness of reverse offloading, we have conducted extensive simulations in various scenarios, considering factors such as communication overhead, vehicle resource availability, and varying network conditions. The obtained results indicate a significant reduction in end-to-end latency, showcasing the practical viability of the proposed approach. This project contributes to the ongoing efforts in optimizing data processing in VEC environments, paving the way for enhanced real-time applications and services in connected vehicular ecosystems.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, December 5<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">LUO LENG<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; Online Defense<br><strong>Project<\/strong>: An Incremental Enhancement of MASS GUI<\/p>\n\n\n\n<p>This research enhances the Multi-Agent Spatial Simulation (MASS) framework&#8217;s application usability and development workflow, addressing the challenge of fragmented workflows in distributed computing simulations. The project transforms multiple separate interfaces\u2014InMASS for simulation execution, Cytoscape for visualization, and a web-based monitoring interface\u2014into a unified platform, reducing operational complexity by consolidating three separate tools into one integrated system. Key contributions include enhanced agent visualization in quadtree-based 2D space, implementation of an octree data structure for 3D space simulation, and development of interactive MASS simulation capabilities within Cytoscape. The integration leverages Cytoscape&#8217;s OSGi framework and JShell&#8217;s REPL features to encapsulate web browser and CLI functionalities as plugins, enabling seamless interaction between components while reducing development complexity and the learning curve. Notable technical implementations include comprehensive agent tracking functionality, SSH-tunneled remote access capabilities, and real-time cluster monitoring features. Compared to existing open-source agent-based modeling systems, our enhanced MASS framework offers distributed computing capabilities while maintaining an intuitive, integrated development environment. 
The results show that time spent on common simulation tasks (modifying agent behaviors, adjusting input, and configuring output) was reduced by 60%, while the user experience of developing and monitoring distributed agent-based simulations was enhanced.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, December 6<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">TYSON HEO<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Erika Parsons<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; Online Defense<br><strong>Project<\/strong>: EMG Dance: Proof of Concept VR Rhythm Game for Phantom Limb Pain Rehabilitation<\/p>\n\n\n\n<p>Patients undergoing Physical Therapy (PT) face various challenges that may prolong recovery time, for instance, lack of patient adherence, access limitations, geographical location, and financial barriers. These issues can become particularly problematic when PT exercises are monotonous and patients feel bored and\/or are not encouraged to complete the exercise routines. Our work focuses on lower-limb amputation patients and aims to be part of a larger initiative to apply new technologies to the gamification of PT, which can potentially help improve the recovery of patients suffering from neurological conditions. The larger umbrella project is dubbed the &#8220;Smart Neurorehab Ecosystem&#8221; and it is an ongoing effort involving the University of Washington (UW) Bothell CSSE, UW Seattle Neuroscience, and Rehabilitation Medicine at Harborview Medical Center (UWHM).<\/p>\n\n\n\n<p>This capstone project aims to develop a gaming platform that allows for the creation of strategies to help health professionals start to address some of the aforementioned issues. 
At the same time, this software platform will allow for the collection of data that can be used to understand: 1) how to improve the game design to better serve amputee patients and 2) how to track changes that could reflect a patient&#8217;s progress in their rehabilitation. The game is first being tested with healthy subjects, whose sessions can be used to debug the software and evaluate functionality that would be relevant in a real patient setting. This information can be used in the future to better tailor such a game to amputee patients.<\/p>\n\n\n\n<p>This capstone introduces Electromyography (EMG) Dance, a prototype Phantom Limb Pain (PLP) rehabilitation game designed to alleviate the effects of below-the-knee PLP. The game proposes to use at-home technologies such as portable EMG sensors and Virtual Reality (VR) to create a rhythm game that could motivate patients to complete their PT exercises through daily &#8220;dancing&#8221; sessions from the comfort of their home. The evaluation of such a software application is limited without access to its target audience (lower-limb amputee patients); hence, we have focused on defining and setting up practical metrics that can guide future development of this and other related projects. In addition, preliminary testing was carried out with healthy subjects and showed some encouraging results, particularly regarding game usability and immersion. Finally, this capstone seeks to lay the foundation for future research and development of this application for amputee patients, as well as to provide a framework for other projects in the NeuroRehab effort.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, December 13<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">REZA RAHIMIDERIMI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Hazeline Asuncion<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>9:00 A.M.; Online Defense<br><strong>Project<\/strong>: Enhancing Security Vulnerability Prediction in Software Projects Using Large Language Models and Deep Learning Techniques<\/p>\n\n\n\n<p>Security vulnerabilities pose significant risks to software products, leading to financial losses, data breaches, and reputational damage for organizations. Traditional vulnerability detection methods often occur post-production, resulting in increased costs and delayed mitigation. This project aims to predict the severity levels of software vulnerabilities, as defined by the Common Vulnerability Scoring System (CVSS), based on software product specification documents and vulnerability descriptions. By leveraging deep learning models and Large Language Models (LLMs) such as DistilBERT, this research seeks to enable early identification of high-severity vulnerabilities during the development lifecycle, allowing organizations to address security issues proactively.<\/p>\n\n\n\n<p>The methodology involves a dual approach of training models on both software specification documents and Common Vulnerabilities and Exposures (CVE) descriptions to predict CVSS severity levels. This comparative analysis determines which source provides better predictive performance. Baseline machine learning models were implemented for initial assessments, including Support Vector Machines and Random Forests. Advanced deep learning architectures, such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory Networks (LSTMs), were employed to capture complex patterns and sequential dependencies in textual data. 
Feature extraction techniques utilizing LLMs were explored to enhance model performance by leveraging contextual embeddings and semantic understanding of the text.<\/p>\n\n\n\n<p>Results indicate no noticeable improvements in model performance for deep learning and large language models (LLMs) compared to black box classifiers. After hyperparameter tuning and training on the vulnerability summary data, the Random Forest model achieved the highest accuracy among black box classifiers, reaching 79%. Among deep learning models, the convolutional neural network (CNN) recorded an accuracy of 79%, while DistilBERT, representing the LLMs, achieved 72%. Moreover, when we compared these results against the model performance on document specifications, we observed approximately a 15 to 20% decline in accuracy. Specifically, the results were as follows: Random Forest had 52% accuracy, CNN scored 67%, and DistilBERT attained 60% accuracy.<\/p>\n\n\n\n<p>The analysis illustrated the differences, revealing that training on software specification documents has the potential for early-stage predictions in vulnerability assessment.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SUMMER 2024<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, July 24<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">TASNIM BASHAR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Clark Olson<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Project<\/strong>: Enhancing Image Inpainting with a Novel Deep Learning Fusion<\/p>\n\n\n\n<p>Image inpainting, the art of seamlessly filling missing or damaged parts of an image, has seen remarkable progress with deep learning. However, achieving consistently high-quality restorations across diverse image content remains a challenge. 
This project presents a novel image inpainting framework that harnesses the strengths of partial convolutions, Generative Adversarial Networks, and self-attention mechanisms. Our approach utilizes partial convolutions to effectively address irregular holes and preserve intricate image details. A Generative Adversarial Network architecture is incorporated to encourage the generation of realistic and visually plausible image content. Furthermore, integrating self-attention enables the model to capture long-range dependencies and contextual information within the image, leading to more coherent and higher-quality reconstructions. Evaluations using established image quality metrics demonstrate that our framework achieves superior performance compared to existing state-of-the-art methods, confirming the effectiveness of this innovative fusion in significantly enhancing image quality and restoration precision.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">GREESHMA SREE PARIMI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: Enhancing Collaborative Language Analysis and Study in MeTILDA<\/p>\n\n\n\n<p>The &#8220;Global Predictors of Language Endangerment and the Future of Linguistic Diversity&#8221; study predicts that over 1500 languages will become extinct by 2100. Blackfoot is one such endangered language, spoken mainly in Alberta, Canada, and Montana, U.S.A. Blackfoot is a pitch accent language in which the meaning of a word varies depending on the pitch used when speaking it. This makes the language very challenging to document and teach. 
To address this, researchers from the University of Washington Bothell (UW) and the University of Montana (UM) have collaborated on an interdisciplinary project named MeTILDA (Melodic Transcription in Language Documentation and Application). It is a cloud-based system that analyzes the pronunciation of individual Blackfoot words, generates Pitch Art, and assists in documenting, teaching, and learning Blackfoot.<\/p>\n\n\n\n<p>This capstone project primarily aims to enhance the MeTILDA application&#8217;s analysis, collaboration, and security features. To improve analytical capabilities, we developed the Pitch Art Version Control System, allowing users to work on and save multiple versions of the same Pitch Art. For improved collaboration, we integrated a communication mode within the application, enabling users to interact with both MeTILDA and non-MeTILDA users. To strengthen security, we implemented role-based access control for all features of the MeTILDA application and introduced email verification for new user registrations. Furthermore, substantial improvements were made to the project documentation, particularly in the areas of front-end components and database information.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">HARPREET KOUR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Wooyoung Kim<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Predictive Modelling of Substance Abuse: Analysing Key Features with Machine Learning<\/p>\n\n\n\n<p>Substance abuse remains a critical public health challenge with multifaceted implications for individuals, families, and communities worldwide. This project explores how machine learning techniques can predict and classify substance abuse behaviours, focusing on alcohol, tobacco, marijuana, and nicotine vaping. 
Leveraging 2021-2022 data from the Behavioural Risk Factor Surveillance System (BRFSS), we aim to identify the key predictors of substance abuse and groups at higher risk, taking into account factors like adverse childhood experiences, financial conditions, and mental health status.<\/p>\n\n\n\n<p>Prediction models were developed using four types of machine learning algorithms: Linear Models (Logistic Regression &amp; SVM), Tree-based Models (Random Forest, XGBoost &amp; AdaBoost), Neural Networks (MLP &amp; CNN), and a clustering algorithm. Respondents were randomly divided into training and testing samples. The performance of all the models was compared using accuracy, precision, recall, AUC, and false positive rate. The study included 31,060 respondents, of whom 5,867 (19%) were found to be substance abusers. Of the respondents who reported substance abuse, 62.93% were between the ages of 18-64, 60.61% were male, and 84.76% were non-Hispanic Whites. Random Forest was the best-performing model with an AUC of 0.86, followed by XGBoost (AUC 0.85). The most important factors for substance abuse were BMI, male sex, lower income levels, young adult age group, lower education levels, poor mental health, and adverse childhood experiences. Data mining methods were useful in examining patterns across demographics, health conditions, and lifestyle behaviours so as to understand the co-morbidities associated with substance abuse.<\/p>\n\n\n\n<p>Another goal of this project was to highlight the importance of collaboration between domain experts and machine learning practitioners and to assess the impact it has on the results compared to when domain experts are not involved. Their contributions in feature selection and data interpretability solutions were instrumental in achieving this enhancement. We prioritised model interpretability to foster trust and refine understanding. 
Additionally, our project introduces a novel approach to interpretability by analysing misclassified data, offering insights into substance abuse dynamics. These results can be used to generate further hypotheses for research, increase public awareness, and help provide targeted substance abuse education.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, July 26<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JEFFREY MCCREA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: Enhancement of Agent Performance with Q-Learning<\/p>\n\n\n\n<p>Graphs store data from various domains, including social networks, biological networks, transportation systems, and computer networks. As these graphs grow in size and complexity, single-machine solutions become impractical due to limitations in computational resources. Distributed graph computing addresses these challenges by leveraging multiple machines to process and analyze large-scale graphs collaboratively.<\/p>\n\n\n\n<p>This capstone project investigates the enhancement of distributed graph computing performance in the Multi-Agent Spatial Simulation (MASS) library by integrating Q-learning for computing shortest paths, closeness centrality, and betweenness centrality on distributed large-scale dynamic graphs. This approach is compared to traditional and agent-based graph computing algorithms. Previous approaches in the MASS framework relied on large populations of unintelligent agents to exhaustively traverse graphs to compute solutions, making them inefficient when faced with dynamic graph data. 
By leveraging Q-learning and the distributed agent-based graph capabilities of MASS, we aim to optimize the decision-making processes of distributed agents, thus improving computational efficiency and accuracy.<\/p>\n\n\n\n<p>Experimental results demonstrate that the adaptive learning mechanism of Q-learning, coupled with the MASS library, allows agents to dynamically adjust to changing graph structures, leading to a more robust and scalable distributed graph computing solution. This research contributes to the field of distributed systems and artificial intelligence by providing an innovative approach to enhancing multi-agent intelligence for graph computing tasks.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, July 29<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SAURAV JAYAKUMAR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Dong Si<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Thesis<\/strong>: GraphConv: Geometric Deep Learning for Multiple Conformation Generation from Electron Density Images<\/p>\n\n\n\n<p>In the realm of cryo-electron microscopy (cryo-EM) structural analysis, the precise prediction of molecular conformations within datasets stands as a fundamental endeavor. Despite strides made in deep learning methodologies, existing solutions often yield volumes of suboptimal quality. Addressing this critical limitation, our research introduces GraphConv, an innovative encoder model designed to embed particle images into a latent space, thereby supplanting the conventional encoder utilized by CryoDRGN. This novel approach employs a Graph Neural Network (GNN) architecture featuring multiple GraphConv and Convolutional layers, aimed at capturing richer information from particle images and faithfully reconstructing corresponding 3D volumes. 
Rigorous testing across two authentic datasets and three simulated datasets underscores the efficacy of our model, showcasing marked enhancements in reconstruction quality. Notably, our findings reveal improvements in resolution of up to 20% compared to CryoDRGN. By harnessing the power of GNNs, our methodology heralds significant advancements in the fidelity and accuracy of output volumes, thereby contributing to the ongoing refinement of cryo-EM structural analysis methodologies.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, July 30<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SUGAM JAISWAL<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Dong Si<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Thesis<\/strong>: Development of Personality Adaptive Conversational AI For Mental Health Therapy Using LLMs<\/p>\n\n\n\n<p>Many individuals with mental health issues cannot access professional help due to reasons such as lack of awareness, limited availability, and high costs. Conversational agents present a viable alternative to deliver mental health support that is accessible, affordable, and scalable. However, the effectiveness of these agents can vary among users, as different users have different personality traits, such as extroversion and agreeableness, which influence how they interact with chatbots. Therefore, it is important to develop therapy chatbots that adapt to individual personalities. In this study, we highlight the significant role of Personality Adaptive Conversational Agents (PACAs) in mental healthcare. We designed an architecture around traditional ML models and open-source LLMs to build a PACA for mental health (based on the existing iCare project). 
We built a functional prototype based on it and conducted a user study, which concluded that personality adaptability is a critical feature for mental health chatbots.<\/p>\n\n\n\n<p>During this research, we were able to build a personality classifier that achieved an average F1-score of 0.96 across the Big Five personality dimensions &#8211; Agreeableness, Extraversion, Openness, Conscientiousness, and Neuroticism &#8211; and successfully integrated it with an open-source LLM to generate adapted responses. The remote user study demonstrated that 95% of the test users found the responses from the adaptive chatbot relevant to their situation compared to 30% for the non-adaptive chatbot, and 55% of users agreed that the responses felt suited to their personality as opposed to 15% for the non-adaptive chatbot. With this study, we have shown that it is feasible to create free, accessible, and personalized mental healthcare solutions and that the adoption of PACAs could represent a pivotal step toward making mental healthcare more personalized and widely available.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ANI AVETIAN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Annuska Zolyomi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: The Views Application: Introducing Augmented Reality into Travel Applications<\/p>\n\n\n\n<p>The travel and tourism industry is growing worldwide and experiencing a resurgence following the COVID-19 global pandemic travel restrictions. When people travel to unfamiliar places, they need to orient themselves to their surroundings, find their way around, and gain information about the local culture. Traditional travel applications, like websites and smartphone apps, are for arranging travel and providing information. 
However, the information and services provided by websites and apps are presented out-of-context with the real-world surroundings of the traveler. Emerging technology, specifically Augmented Reality (AR), presents an opportunity to enhance the traveler&#8217;s experience with real-time information that is easy to access and contextually relevant.<\/p>\n\n\n\n<p>Here we introduce a novel AR travel application called &#8220;Views.&#8221; This app uses image detection to allow users to easily learn about the landmarks they visit. Once detection is complete, users will have an AR environment where they can interact with their surroundings. A fact will be displayed in AR with the help of Artificial Intelligence (AI) working in the background. This provides an experience that a user can effortlessly incorporate into their travel plans and activities. Additionally, this project entailed identifying potential cybersecurity risks that come with these kinds of applications and developing a potential solution to them. We examined clickjacking attacks in AR environments and developed a solution to combat these attacks with image detection. As we envision a future where wearable devices may become the norm, applications like this will be at the forefront of these innovations. Using this application as a stepping stone, we begin to understand the important aspects of AR travel user experience and implementation approaches to build usable and secure AR features.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, August 1<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">HARIKA CHADALAVADA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: I&#8217;poy\u00edt &#8211; Blackfoot Language Learning Application<\/p>\n\n\n\n<p>The Blackfoot language, a vital part of the cultural heritage of the Blackfoot tribes of the Great Plains in North America, is facing imminent risk of extinction with only about 3,000 fluent speakers remaining. This paper details the development of an application aimed at revitalizing the Blackfoot language by leveraging modern technology to make learning accessible and engaging. The application was developed using the versatile Flutter framework, ensuring seamless cross-platform compatibility.<\/p>\n\n\n\n<p>The primary goal of this application was to build a comprehensive solution that addresses and fills the gaps identified in existing Blackfoot language learning applications and provides a more effective and engaging learning experience. To achieve this goal, the application features distinct student and admin login interfaces, with Firebase serving as the backend to manage user data and interactions efficiently. The application provides a comprehensive suite of interactive learning modules that include vocabulary exercises with flashcards and audio pronunciation guides, along with phrase modules that facilitate practical language use in everyday contexts. To enhance the learning experience, the application incorporates gamified elements such as experience points, badges for progress, and quizzes that test and reinforce language skills. A leaderboard and a discussion forum are integrated to foster a community of learners who motivate each other through shared achievements and discussions. Significantly, the content within the application is curated by a linguistics expert known for her extensive research in Blackfoot phonology to ensure that the educational material is authentic and culturally resonant. 
This project not only offers a practical solution to the preservation of the Blackfoot language but also serves as a model for preserving other endangered languages, ultimately demonstrating how digital technology can be harnessed to safeguard linguistic diversity.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ARSHEYA RAJ<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Erika Parsons<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Thesis<\/strong>: NeuroGaming: Innovative Rehabilitation for Upper Limb Neurologic Conditions Using Mixed-Reality Simulation Games and EEG\/EMG Biofeedback<\/p>\n\n\n\n<p>Recent advances in Augmented Reality (AR), Mixed Reality (MR), Electroencephalogram (EEG), and Electromyogram (EMG) offer significant opportunities in medicine and neuroscience. This research aims to use these technologies to aid stroke patients with upper limb extremity weakness. This work extends the Edge Computing Ecosystem for Neuroscience Patients&#8217; Rehabilitation, part of the &#8216;Stroke Rehabilitation Project&#8217; by University of Washington Bothell Engineering, University of Washington Seattle Neuroscience, and Rehabilitation Medicine at Harborview Medical Center (UWHM). Recently, UW Bothell&#8217;s CSSE has also contributed, focusing on solutions for stroke patient rehabilitation. Traditional motor rehabilitation is costly, resource-intensive, and often monotonous, reducing patient engagement. We introduce &#8220;NeuroGaming&#8221;, an interactive approach that engages elements of AR \/ MR games to enhance rehabilitation programs and improve patient outcomes. Using augmented and mixed reality technologies, interactive environments can be created on mobile devices, providing engaging and motivating experiences for patients. 
AR simulates real-world scenarios, offering a safe and fun way to practice tasks and aid in rehabilitation.<\/p>\n\n\n\n<p>We used EEG and EMG sensors to conduct experiments and to collect data in a controlled environment targeting a reduced set of representative, relevant motor tasks. The data were processed using various signal processing and statistical techniques, which in combination with the MR \/ AR game can be used to build a novel feedback and guidance system. This system is a building block of our &#8220;NeuroRehab&#8221; ecosystem, which will use various ML models and algorithms for sequential prediction, with the aim of guiding patients through an optimal rehabilitation path within the game environment.<\/p>\n\n\n\n<p>Results showed that the combination of Frequency Filtering, ICA, and ERP with FNN and SVM models yields, so far, the best accuracy for classifying the motor tasks in EEG and EMG data. These findings contribute to the field of stroke rehabilitation for upper limb extremity weakness. They also contribute to a larger project that aims at better understanding and rehabilitation of other neurological ailments by offering insights into different hand gestures using EMG and EEG data and creating a framework for data processing and feedback systems.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, August 2<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">NOURA ALROOMI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Afra Mashhadi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: Automated Expansion of Sketch Datasets<\/p>\n\n\n\n<p>Free-hand sketches offer a unique reflection of human perception and creativity, playing a crucial role in various computer vision and machine learning applications. 
These sketches are used in image and sketch recognition algorithms, sketch-based retrieval systems, and generative art neural networks, enhancing our understanding of artistic expression. However, the limited variety of object categories in existing sketch datasets restricts their applications and the development of robust machine-learning models. This limitation is primarily due to the manual curation required for these datasets, which is time-consuming and slows the addition of new sketches. Automated systems using advanced computer vision techniques present a solution to enhance the diversity and quality of sketch datasets efficiently.<\/p>\n\n\n\n<p>In this capstone project, we introduce an Automated Dataset Expansion System designed to streamline and automate the process of adding new categories to sketch datasets. Our system employs a web-based platform that integrates user-generated sketches with an advanced auto-expansion pipeline. This pipeline consists of two phases: the first phase involves synthetically generating baseline sketches from photos using a pre-trained Photo-Sketching model, effectively capturing the structural attributes of simpler objects. The second phase evaluates the similarity of user-generated sketches to these baselines using both Structural Similarity Index Measure (SSIM)-based and Convolutional Neural Network (CNN)-based encoder similarity metric pipelines. Our findings indicate that the CNN-based encoder with the Cosine Similarity measure provides consistent and reliable performance across categories, achieving the highest precision and true positive rates. This reliable method was subsequently implemented in our system. 
The enriched datasets produced by this system can support innovative machine learning and computer vision applications, fostering advancements in these fields.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, August 5<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">AISHWARYA PANI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: I\u2019poy\u00edt &#8211; Platform for Learning Blackfoot Language<\/p>\n\n\n\n<p>Preserving endangered languages is essential for maintaining cultural identity. Blackfoot is categorized as endangered, with nearly 2,900 speakers remaining. Existing solutions for preserving Blackfoot lack engagement, accessibility, community learning, and gamified experiences. To address this, we developed &#8220;I\u2019poy\u00edt \u2013 Blackfoot Language Learning Application,&#8221; a cross-platform app designed to facilitate learning and preservation. In collaboration with the University of Montana, the application features content from researchers and linguists.<\/p>\n\n\n\n<p>The primary objective of our development was to establish a codebase that is both maintainable and robust. This culturally sensitive and interactive application utilizes the Flutter framework for cross-platform compatibility and Firebase for backend services, which provide real-time updates and data storage. State management was handled using Riverpod, ensuring optimization across the application. This approach led to enhanced performance by decoupling UI components from business logic. Loading performance was optimized through asynchronous programming, ensuring minimal delays and seamless navigation. The real-time database capabilities of Firebase provided instant synchronization and updating of user data. 
Flutter&#8217;s integration with Firebase enabled users to upload audio files, which were then instantly accessible to all users. The features were designed to be easily extendable, allowing new functionality to be added with minimal changes to existing code.<\/p>\n\n\n\n<p>The key features of the application include vocabulary flashcards, a phrases learning module with audio translations from native speakers, and customizable quizzes with detailed visual analytics to provide an immersive experience. Gamified elements, such as XP points, a leaderboard, and a robust notification system, ensure daily study engagement. User profile management and discussion forums stimulate community engagement. A content management system was designed for efficient phrase management, batch uploading, and fault-tolerant multimedia integration. &#8220;I\u2019poy\u00edt&#8221; demonstrates the potential for cross-platform accessibility to improve language learning and preserve the Blackfoot language. By combining modern technology with cultural insights, the application offers an engaging and effective learning experience, highlighting the potential for similar solutions to support other endangered languages.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">GNANA SANJANA KILLI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>5:45 P.M.<br><strong>Project<\/strong>: Facilitating Endangered Language Revitalization: An E-Learning Platform for Blackfoot<\/p>\n\n\n\n<p>The preservation and revitalization of endangered languages are crucial for maintaining cultural diversity and heritage. This project introduces an innovative e-learning platform specifically designed for the revitalization of endangered languages, with a particular focus on the Blackfoot language, which is currently at risk of extinction. 
The platform aims to engage indigenous communities, educators specializing in language revitalization, and linguists through interactive language learning facilitated by a user-centric design.<\/p>\n\n\n\n<p>The development of the platform was driven by the need for accessible, engaging, and pedagogically sound language learning tools that overcome the limitations of traditional methods. Key features of the platform include secure account creation and management, seamless access to courses, lessons, and assignments, an intuitive and adaptive user interface for various devices, developed using React.js, and robust data management capabilities using SQL databases ensuring the security and integrity of user data. The system&#8217;s architecture supports multimedia content managed through AWS, accommodating diverse teaching methods that appeal to different learning styles.<\/p>\n\n\n\n<p>Throughout its development, the platform has evolved through continuous stakeholder feedback, leading to significant enhancements such as updated administrative controls for user management, enhanced security measures, and enriched interactive content. This adaptive approach has significantly improved the platform&#8217;s effectiveness. Future enhancements will focus on expanding the range of languages offered, integrating adaptive learning technologies, and enhancing peer-to-peer interaction features. This project not only contributes to the field of language revitalization but also serves as a significant model for the further development of educational technologies that respect and promote cultural heritage.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, August 6<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SONAL YADAV<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Afra Mashhadi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Project<\/strong>: Clustered Federated Learning for Next-Point Prediction in Mobility Datasets<\/p>\n\n\n\n<p>Individual trajectories vary significantly, making the task of next-point prediction challenging due to privacy concerns and potential convergence issues in machine learning models. To address these challenges, I added functionality to the Inflorescence framework, developed by Professor Mashhadi\u2019s Lab, enabling trajectory datasets to leverage clustered federated learning (CFL) strategies. This project evaluates the performance of CFL on state-of-the-art mobility datasets, GeoLife and MDC, demonstrating its robustness compared to traditional federated learning. Additionally, the study analyzes which types of clients benefit the most from CFL, finding that high-entropy clients, characterized by more non-identical and random datasets, experience the greatest advantages. This enhanced functionality is now part of the open Python package Inflorescence, which extends Flower.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JOSHUA MEDVINSKY<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Marc Dupuis<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project<\/strong>: Exploring the Influence of Human Factors on the Effective Utilization of Password Managers<\/p>\n\n\n\n<p>In today&#8217;s digital realm, password managers are important tools for securely managing login credentials. 
Amidst an array of options like LastPass, RoboForm, Dashlane, 1Password, and Bitwarden, these tools not only handle online account credentials but also extend their utility to desktop application credentials.<\/p>\n\n\n\n<p>Yet despite their proven effectiveness, password managers remain underused, which raises questions about online security given the growing threat of password attacks. This capstone project proposes an exploration of the impact of user education on the effective utilization of password managers. By addressing this challenge, the aim is to improve security practices and bridge existing gaps in cybersecurity knowledge.<\/p>\n\n\n\n<p>Through an extensive literature review, the project identifies a notable research gap regarding the influence of targeted user education on password manager effectiveness. By exploring user motivations and concerns, this study seeks to contribute valuable insights to the cybersecurity field.<\/p>\n\n\n\n<p>The project&#8217;s methodology entails in-person interviews with students from the University of Washington Bothell, dividing them into groups receiving user education and a control group. Data collection suggests that targeted education may significantly enhance password manager adoption and usage.<\/p>\n\n\n\n<p>This project shows significant improvements among participants who receive targeted education. It emphasizes the critical role of organizations in ensuring their users understand password manager benefits and their practical application, which improves cybersecurity measures and mitigates data breach risks. 
Moving forward, the project aims to highlight the importance of exploring long-term educational intervention effects and scalability within organizational contexts to improve cybersecurity defenses effectively.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, August 8<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SUNJIL GAHATRAJ<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Erika Parsons<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Thesis<\/strong>: Enhancing Neurological Rehabilitation: Combining Gaming, Robotics and Machine Learning in an Edge Computing Environment<\/p>\n\n\n\n<p>Recent advancements in Electroencephalography (EEG) technology have made it more accessible, opening new avenues for collecting and studying EEG data. In the context of neurological rehabilitation, it enables us to develop new strategies to enhance this experience for patients recovering from conditions such as stroke, spine surgery and nerve damage. On the other hand, the continued evolution of hardware and rapid onset of High Performance Computing (HPC) has allowed for smaller devices to become more computationally powerful, making Edge Computing (EC) environments ubiquitous and suitable for many applications.<\/p>\n\n\n\n<p>The goal of this research is to explore the combined use of Computer Vision (CV), Gaming, Machine Learning (ML) and robotics technologies to develop improved rehabilitation approaches for neurology patients. Specifically, our target is the gamification of some aspects of this area of rehabilitation to make it more engaging to help patients achieve their goals. Our approach is to investigate the feasibility of this strategy by studying EEG data collected from healthy subjects to analyze brain signals associated with specific hand movements. 
The EEG data is then processed and ML models are used to predict these hand movements, which then can be used to guide a Raspberry Pi robot. In addition, a camera is used to supervise hand gestures. In the future, together with medical expert input, these experiments can help train such a robot to follow optimal paths based on patient hand exercises. In our experiments, we used a reduced set of hand movements and healthy subjects to serve as a proxy for real patients. Initial results using CNN and FNN models are encouraging, showing high precision and accuracy in predicting hand movements, indicating that our research serves as a proof of concept and continued exploration is justifiable.<\/p>\n\n\n\n<p>This research is novel interdisciplinary work spanning computer science, engineering, and neuroscience, carried out by the Computing and Software Systems (CSS) division at the University of Washington Bothell in collaboration with the University of Washington Seattle Neuroscience and Rehabilitation Medicine at Harborview Medical Center (UWHM).<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">HONGYANG LIU<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Kelvin Sung<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Thesis<\/strong>: A Comprehensive Categorization Framework for Interactive Fiction Games<\/p>\n\n\n\n<p>Interactive Fiction (IF) games are digital experiences that merge storytelling with interactive gameplay, allowing players to navigate and influence story-driven adventures. These games have evolved significantly, integrating advanced visual and interactive elements alongside traditional textual narratives, making them an intriguing area of study. 
However, there are currently few structured frameworks designed for the systematic classification of IF games, and it can be challenging to analyze these games holistically.<\/p>\n\n\n\n<p>This thesis presents a comprehensive categorization framework for IF games, designed to facilitate systematic classification and analysis. Based on features derived from a human-computer interface, story genre, game mechanics, and business model, the framework supports the classification of IF games into distinct categories. This structured approach allows feature-based examination and facilitates the holistic analysis of IF games and their evolution.<\/p>\n\n\n\n<p>Validation for the proposed framework involved three rounds of sampling and categorizing IF games. The first round sampled popular IF games developed based on well-established game engines to demonstrate the fundamental robustness of the framework. The second round sampled popular IF games over time for insights into potential trends as IF games continue to develop and evolve. The third round was based on popular IF game series and traditional action-adventure series to examine potential similarities between the two genres.<\/p>\n\n\n\n<p>The three rounds of sampling and categorizing reveal potential patterns and trends that enhance our understanding of IF games. Key findings include the trend from text-only to image-based or even animation-based, the trend from single-defined endings to multiple-defined endings, the trend from no or little towards more sophisticated support for stats and resource management, and the potential overlapping and merging of IF and action-adventure games.<\/p>\n\n\n\n<p>These findings demonstrate that the proposed framework is an effective tool for systematic analysis that can offer valuable insights into the development and trends of IF games. 
Since classification involves subjectivity, future work should repeat the process based on stakeholders with distinct backgrounds, e.g., publishers, developers, and gamers. Additionally, the proposed framework is but a first step and should be continuously reviewed and refined.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ESHA GAVALI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Dong Si<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Thesis<\/strong>: Enhancing the Performance of GNN and Utilizing 3D Instance Segmentation for Ligand Binding Site Prediction<\/p>\n\n\n\n<p>This study addresses the challenge of accurately predicting ligand binding sites (LBS) on proteins, a critical aspect of structure-based drug design. Ligand binding site prediction is crucial for designing effective drugs and understanding protein functions, benefiting pharmaceutical companies, biotechnologists, and researchers by accelerating drug discovery and improving therapeutic interventions. We employ and improve Graph Neural Networks (GNNs) and innovative 3D point cloud instance segmentation to refine and advance LBS prediction methods. This research demonstrates significant enhancements in predictive accuracy by evaluating these methods on widely used datasets. Our novel clustering algorithm, which combines density-based and fuzzy clustering, notably improves the definition and identification of ligand binding sites without prior knowledge of the number of clusters. This methodology allows for more precise predictions, effectively managing binding sites&#8217; overlapping nature. Implementing instance segmentation further delineates individual binding pockets, offering a more granular understanding of ligand-protein interactions. 
The results illustrate that our approaches meet the current state-of-the-art for ligand binding site prediction and support their potential utility in real-world pharmaceutical applications. Future work will focus on refining these methods and extending their application to molecular docking studies.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, August 9<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">KARAN BHATT<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Marc Dupuis<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: Privacy Concerns and User Perception in Targeted Digital Advertising: A Survey Project<\/p>\n\n\n\n<p>This study investigates the relationship between users&#8217; perceptions of privacy control and transparency in data usage practices and their subsequent trust and acceptance of targeted advertising. The purpose of this research is to understand how perceived control over personal data and transparency in data usage influence user trust and acceptance of targeted ads, thereby aiding in the design of more user-friendly and trustworthy advertising strategies. Utilizing an extensive literature review and a comprehensive survey, data were collected on users&#8217; awareness of data collection practices, perceived control over personal data, trust in advertisers, and acceptance of targeted advertisements. Additional questions addressed concerns regarding data privacy, the effectiveness of privacy-enhancing technologies, and user preferences for regulatory measures. 
The primary hypothesis posits that users&#8217; perception of control over their data significantly influences their acceptance and trust in targeted digital advertising, while the secondary hypothesis examines how transparency in data usage and collection practices affects users&#8217; trust and comfort with targeted advertising. The survey results highlight critical factors influencing user acceptance and trust in targeted advertising. The findings of this study contribute to the body of knowledge on digital marketing and privacy, offering practical recommendations for advertisers on designing more user-friendly and trustworthy advertising strategies. By understanding the importance of perceived control and transparency, advertisers can improve their practices to better align with user expectations, thereby enhancing the overall effectiveness and acceptance of targeted advertising.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">NIKHITA TITHI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Wooyoung Kim<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project<\/strong>: Interpretable AutoML Web Application<\/p>\n\n\n\n<p>The adoption of automated machine learning (AutoML) platforms across interdisciplinary fields, such as bioinformatics, has been limited despite their potential to broaden access to machine learning tools and accelerate scientific discovery. This paper investigates the specific challenges and barriers that hinder the widespread utilization of AutoML platforms among bioinformatics researchers. The iBioML framework, developed by Dr. Wooyoung Kim, aims to enhance the effectiveness and efficiency of machine learning applications in bioinformatics by providing an interactive and interpretable machine learning infrastructure. 
Despite the promise of AutoML to simplify and accelerate data analysis, its adoption remains constrained due to the complexity of setup, registration processes, subscription models of commercial platforms, and the steep learning curve associated with their advanced features.<\/p>\n\n\n\n<p>Our research delves into the current state of AutoML applications, exploring the factors influencing researchers&#8217; decisions to use or not use these platforms. Through a comprehensive analysis of commercial and open-source solutions such as Google Cloud AI Platform, Amazon SageMaker, Microsoft Azure ML, DataRobot, and H2O.ai, we evaluate their features, user experience, and interpretable capabilities. A survey was conducted among bioinformatics researchers to understand their needs and the barriers they face. Our findings revealed significant challenges related to the usability and accessibility of these platforms, with non-technical users particularly struggling with the initial setup, unclear subscription models, and the high cost of commercial platforms. Additionally, while commercial platforms offer advanced data analysis and visualization features, they often require extensive resources and commitment, which can be a deterrent. Non-commercial platforms like H2O and Orange provide essential AutoML functionalities but are limited by their user interface complexities and performance issues with large datasets.<\/p>\n\n\n\n<p>To address these challenges, we developed a new application within the iBioML framework, integrating enhanced AutoML and interpretability features based on user feedback and comprehensive analysis. Our new application simplifies the setup process, offers clear usage guidelines, and provides robust data visualization tools, built-in AutoML capabilities, and extensive interpretability. By improving these aspects, we aim to empower bioinformatics researchers to leverage AutoML for more accurate, reliable, and interpretable models. 
This research contributes to the advancement of AutoML not only in bioinformatics but also in other fields, offering targeted strategies to enhance its accessibility, usability, and effectiveness.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SPRING 2024<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, May 14<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">NAIMA NOOR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Afra Mashhadi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; Discovery Hall 464<br><strong>Thesis<\/strong>: Fairness in Continual Federated Learning<\/p>\n\n\n\n<p>Continual Federated Learning (CFL) is a distributed machine learning technique that enables multiple clients to collaboratively train a shared model without sharing their data, while also adapting to new classes without forgetting previously learned ones. Currently, there are limited evaluation models and metrics for measuring fairness in CFL, and ensuring fairness over time can be challenging as the system evolves. To address this, our study explores temporal fairness in CFL, examining how the fairness of the model can be influenced by the selection and participation of clients over time.<\/p>\n\n\n\n<p>We introduce novel fairness metrics\u2014Delta Accuracy Fairness (DAF) and Delta Forgetting Fairness (DFF)\u2014specifically designed to ensure temporal fairness in a CFL context. Additionally, we propose a set of client selection strategies that enhance the temporal fairness of the CFL model by addressing disparities in knowledge retention. Through comprehensive analysis, we demonstrate that while no single strategy guarantees perfect temporal fairness, the Low Participation and Low Average strategies consistently outperform others in terms of stability and equity. 
Furthermore, our findings underscore the adaptability of the Dynamic strategy, which shows significant promise in certain tasks. These insights pave the way for refining client selection strategies, enhancing CFL&#8217;s fairness, and fostering more equitable learning environments.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, May 15<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SHENYAN CAO<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.; Discovery Hall 464<br><strong>Project<\/strong>: An Incremental Enhancement of Agent-Based Graph Database<\/p>\n\n\n\n<p>In the domain of big data analytics, graph databases (DBs) are vital for managing complex data structures. This project focuses on enhancing an existing agent-based graph DB within the MASS Java framework. Motivated by the limitations of the existing agent-based graph DB, this project aims to enrich its capability to handle data with more detailed property information, aligning with the Property Graph Model. Through a comparative analysis of popular industry graph DBs such as Neo4j, RedisGraph, JanusGraph, and ArangoDB, this project establishes design principles focusing on the adoption of the Property Graph Model, Cypher query language, in-memory distributed graph structures, and agent utilization. The project provides detailed insights into the design and implementation processes, including parsing Cypher queries into an Abstract Syntax Tree (AST), planning execution strategies, and comprehensive testing to ensure system functionality and reliability. 
Overall, the project demonstrates the successful extension of the agent-based graph DB to handle complex and interconnected data structures and the accurate execution of CREATE and MATCH Cypher queries, and it outlines plans for future development.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">VEDANTI PAWAR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Adversarial Defense: Implementing and Evaluating Multi-Layered Strategies Against Adversarial Attacks<\/p>\n\n\n\n<p>Deep learning (DL) has become a cornerstone in image classification tasks across various industries, notably in the development of autonomous driving systems, where it significantly enhances vehicle perception and decision-making capabilities. However, reliance on single defense mechanisms often falls short in safeguarding these models against sophisticated adversarial attacks. This research investigates the potential of combining various defense strategies to enhance the robustness of DL models, focusing on the ResNet34 and ResNet50 architectures. By employing widely-used attack methods, this study simulates real-world threats to assess whether these combined defenses can improve model accuracy and security. Testing these strategies on different network architectures across various datasets, the analysis determines the impact of each defense combination along with their computational costs. 
The findings provide valuable insights into which strategies are most effective in different settings, guiding the development of more resilient DL systems against sophisticated attacks.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, May 16<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">WUBE ALEMAYEHU TUFFA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. William Erdly<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project<\/strong>: Transfer Learning in Neural Machine Translation for Low-Resource Languages<\/p>\n\n\n\n<p>This project paper explores the impact of transfer learning and pre-trained models on improving Neural Machine Translation (NMT) between the low-resource language, Amharic, and the resource-rich language, English. Given the unique challenges associated with NMT for low-resource languages, this study proposes to use two innovative architectures: the Concerted Training NMT (CTNMT) and a Bert-fused NMT model, aimed at improving translation quality. These models are evaluated against a conventional transformer model to determine their ability to effectively leverage pre-trained knowledge for language translation tasks.<\/p>\n\n\n\n<p>The experimental approach employs the fairseq and neurST toolkits to conduct controlled experiments, with translation accuracy assessed through BLEU scores. The research consolidates two smaller corpora into an expanded Amharic-English dataset, ensuring robustness and integrity for model training and evaluation while safeguarding against data leakage into the test set. The CTNMT architecture utilizes rate-scheduling and a dynamic switch to maximize learning from BERT through sophisticated training methodologies. 
Meanwhile, the Bert-fused model leverages BERT&#8217;s capabilities by embedding it within a custom-built sequence-to-sequence encoder-decoder framework.<\/p>\n\n\n\n<p>The results suggest that both innovative models are effective, with the Bert-fused model achieving higher BLEU scores in both Amharic-English and English-Amharic translations compared to the baseline transformer. While the CTNMT model performed well in English-Amharic translation, it was not applicable for the opposite direction. These findings highlight the potential of pre-trained models to improve the quality of Neural Machine Translation (NMT), especially for languages with limited linguistic resources. In particular, the success of these models validates the hypothesis that integrating deep bidirectional language understanding can substantially enhance translation quality, presenting a notable advancement in the field of machine translation.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, May 20<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">YUAN MA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Project<\/strong>: An Implementation of Multi-User Distributed Shared Graph<\/p>\n\n\n\n<p>Many real-world applications such as social or biological networks can be modeled as graphs. With the increasing size of graphs, graph databases have also become a popular research area. Graph databases usually require maintaining the original structure of the graph over distributed disks or preferably over distributed memory to function. Compared to those popular data-streaming tools that need to disassemble the graph into texts before processing, it\u2019s reasonable to introduce agent-based graph computing in which we deploy agents to graphs without modifying the original shape. 
In this research, we introduce the use of the Multi-Agent Spatial Simulation (MASS) library for graph computing. Currently, most agent-based modeling (ABM) libraries including MASS focus on parallelization of ABM simulation programs. However, database systems need to accept, handle, and protect many queries from different users simultaneously, while MASS has not provided users with this capability. Therefore, this project aims to implement a high-performance multi-user distributed shared graph and to add this feature to the MASS library. We have conducted research on many popular data streaming tools and distributed caches. By addressing the challenges they present for programming graph applications, we proposed and implemented a high-performance distributed shared graph structure within the MASS library. Through the performance and programmability comparison between MASS and Hazelcast (which has a distributed HashMap data structure, thus enabling distributed graph construction), we demonstrated that MASS GraphPlaces processes graph queries faster while offering an easier way to program graph applications such as Triangle Counting.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">KENNETH TRAN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Thesis<\/strong>: LSTAR Framework: Lightweight Framework for Standardizing Tests for Adversarial Robustness<\/p>\n\n\n\n<p>The role of neural networks in various tasks has exploded in recent years, becoming prevalent in many safety-critical applications. 
However, improving neural network robustness has become a challenge due to the existence of adversarial examples\u2014imperceptible perturbations to the inputs of machine learning models that mislead classifiers into producing incorrect outputs. While there have been numerous advancements in crafting adversarial attacks and defenses, research on the basis of adversarial examples has notably lagged behind, largely due to the computational difficulty of analyzing high-dimensional spaces. This inherent difficulty has led researchers to construct models for understanding adversarial examples divergent from conventional paradigms, with some relying on commonly used frameworks while others utilize their own tailored frameworks to meet their unique needs. Consequently, replicating and building upon research in this field presents a significant challenge.<\/p>\n\n\n\n<p>In this paper, we present a modular, lightweight framework to assist researchers in addressing these challenges by providing a comprehensive approach to evaluating machine learning models through a standardized experimentation platform. We present several potential hypotheses regarding the basis of adversarial examples and utilize our framework to verify them more robustly under complex attacks and datasets through controlled experiments. Our experimental results indicate that geometric causes directly affect the robustness of machine learning models, while statistical factors amplify the effects of adversarial attacks. These findings provide a baseline for further studies to better understand the phenomenon of adversarial examples, allowing researchers to design more robust machine learning models.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, May 21<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">UTKARSH DARBARI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Afra Mashhadi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.; Discovery Hall 464<br><strong>Project<\/strong>: Analyzing and Optimizing Fairness in Spatial Temporal Predictive Privacy Models<\/p>\n\n\n\n<p>Sharing spatial-temporal data, which includes individuals\u2019 locations and movements over time, requires careful privacy considerations to prevent re-identification from unique trajectories. This study addresses this challenge by integrating fairness principles into privacy-preserving models for such data. Existing models prioritize the balance between privacy and utility (predictive accuracy), but often neglect the impact on different user groups. This can lead to potential discrimination against users based on their mobility patterns.<\/p>\n\n\n\n<p>We propose FairMoPAE, a method that incorporates fairness metrics into the Mo-PAE model, a framework known for anonymizing spatial-temporal data. FairMoPAE leverages techniques to evaluate and improve fairness during anonymization. These techniques analyze the entropy difference between original and anonymized trajectories, ensuring a more balanced trade-off between privacy and utility fairness for all users. By incorporating fairness metrics and optimizing hyperparameters, FairMoPAE aims to mitigate potential biases in the Mo-PAE model and contribute to the development of more equitable and socially responsible practices for spatial-temporal data analysis.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, May 22<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ASHISH NAGAR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Project<\/strong>: Empowering Mobile Learning in Resource-Constrained Communities: A Technical Exploration of Luna mHealth&#8217;s Development Process and Mobile UI Solutions<\/p>\n\n\n\n<p>The rapid proliferation of mobile phones in resource-constrained settings presents a unique opportunity to leverage mobile health (mHealth) applications to improve healthcare accessibility. This research addresses the critical question of how to optimize mobile learning in areas such as the Comarca Ng\u00e4be-Bugl\u00e9 in Panama, where barriers to healthcare and education are pronounced due to low literacy levels, limited economic access to services, and the scarcity of healthcare facilities. The vision is to create a tailored mobile application framework that can enhance the delivery of health education in these underserved areas, supporting interactive and multimedia content even in offline environments.<\/p>\n\n\n\n<p>The imperative for this study stems from the need to bridge the digital divide and extend health education to remote and marginalized communities, where traditional healthcare delivery models fail to meet local needs. In the Comarca Ng\u00e4be-Bugl\u00e9, for example, the maternal mortality rate is 58 times higher than the national average, and preventable diseases remain the leading causes of death. This region&#8217;s challenges underscore the potential impact of accessible and culturally relevant health education.<\/p>\n\n\n\n<p>My contributions employed a variety of modern software development techniques, including a component-driven architecture, efficient image file handling, and content module parsing. I utilized JSON Schema for structured data interchange and worked on UI rendering for the Luna mobile app. I followed SOLID design principles to ensure robustness and scalability. 
Through iterative testing, we have developed a robust foundation for the Luna mHealth framework that will, when complete, empower content creators, regardless of their technical expertise, to develop interactive mobile learning modules. This approach will help to democratize the development process and ensure that the modules are adaptable to the unique cultural and learning contexts of the target user base.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">PURVA AVINASH PATIL<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: Exploring the Dynamics of \u2018Show and Discuss\u2019 Sessions: Visualizing Interactive Dialogues and Rapid Feedback Cycles in a Software Engineering Course<\/p>\n\n\n\n<p>The badge challenges in the Software Engineering Studio undergraduate course at the University of Washington Bothell are done in a \u2018Show and Discuss\u2019 format that provides a dynamic platform for collaborative learning. The interactive format in which students show their ongoing work and engage in adaptive and emergent discussions with professors allows for rapid feedback cycles that are known to be beneficial pedagogically and in other design contexts such as software engineering. Our research aimed to visualize and quantify the iterative and adaptive nature of those sessions to be better able to explore the cognitive work occurring in the \u2018Show and Discuss\u2019 sessions. Using Transana software, we established a categorization system of 24 types of dialogues found in these interactions between professors and students. We then used this categorization system to code four video recordings of badge challenges for each of three different badge levels, covering a total of 7 hours and 23 minutes across the twelve sessions. 
We then created a visualization to help uncover interaction patterns within and across the challenge sessions. In particular, the visualizations make apparent the highly interactive and probing nature of these \u2018Show and Discuss\u2019 sessions.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SHAHRUZ MANNAN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Analysis and Improvement of MASS-based GIS<\/p>\n\n\n\n<p>Geographical Information Systems (GIS) are important in several sectors due to their ability to perform the basic functions needed to capture, manage, analyze, and visualize spatial data. However, the increasing complexity and volume of geospatial data pose a challenge to traditional GIS processing techniques. Therefore, enhanced computational strategies should be investigated to meet demanding requirements for timely and scalable analysis. This project focuses on improving the existing integration of the Multi-Agent Spatial Simulation (MASS) library with GIS, with particular attention to computational geometry problems used within queries. The work includes a comprehensive analysis of the existing MASS-GIS system, identifying its inefficiencies. It proposes strategies for improvement; implements and benchmarks existing and newly identified computational geometry problems, including range search, convex hull, largest empty circle, and Euclidean shortest path, using both the Message Passing Interface (MPI) and MASS for parallel processing; and conducts performance evaluations assessing CPU scalability, spatial scalability, and execution efficiency. 
While the MASS implementations enhance the organization and execution of spatial queries, they present challenges in CPU scalability compared to traditional MPI-based systems. Notably, the MASS framework demonstrated substantial improvements in managing the existing computational geometry problems with enhanced CPU and spatial scalability compared to previous implementations in the MASS-GIS system.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, May 23<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">THOMAS PINKAVA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Afra Mashhadi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.; Discovery Hall 464<br><strong>Thesis<\/strong>: Deep Reinforcement Learning for Data-Agnostic Post-Training Debiasing of Black-Box Machine Learning Models<\/p>\n\n\n\n<p>As reliance on Machine Learning systems in real-world decision-making processes grows, ensuring these systems are free of bias against sensitive demographic groups is of increasing importance. Existing techniques for automatically debiasing ML models generally require access to either the models\u2019 internal architectures, the models\u2019 training datasets, or both. In this paper we outline the reasons why such requirements are disadvantageous, and present an alternative novel debiasing system that is both data- and model-agnostic. We implement this system as a Reinforcement Learning Agent and employ it to debias four target ML model architectures over three datasets. Our results show performance comparable to data- and\/or model-gnostic state-of-the-art debiasers.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">WARREN LIU<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: Programmability and Performance Enhancement of MASS CUDA<\/p>\n\n\n\n<p>Agent-based modeling (ABM) has proven valuable across various fields for capturing the intricacies and heterogeneity of real-world systems. However, as ABM simulations become more sophisticated and larger in scale, the need for efficient parallelization arises. Graphics Processing Units (GPUs) have emerged as a compelling alternative for parallelizing ABM simulations, offering high computational power and parallelism. In this project, we aimed to enhance the MASS CUDA library, a GPU-accelerated ABM framework, by improving its programmability and performance. We implemented essential agent functions, redesigned data structures to enable coalesced memory access, and introduced a dynamic attribute setting mechanism. These enhancements led to significant improvements in programmability and performance, as demonstrated through benchmarking against the previous version of MASS CUDA and a competing library, FLAME GPU 2, using five diverse applications. The evaluation showcased MASS CUDA&#8217;s effectiveness in terms of programmability, performance, and scalability. The improved programmability and performance of MASS CUDA enable users to focus on the modeling aspects of their simulations while harnessing the computational capabilities of GPUs. By offering a scalable and accessible framework for GPU-accelerated ABM, MASS CUDA has the potential to accelerate scientific discovery and decision-making processes in numerous fields.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">MICHELLE DEA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Munehiro Fukuda<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; Discovery Hall 464<br><strong>Project<\/strong>: An Agent-Based Graph Database Benchmarking Program<\/p>\n\n\n\n<p>Graphs can be used to represent complex relationships between different entities. These are stored as edges and nodes in a graph database. This data model makes it easier and more flexible to query connected data items and to identify insights from those relationships. Two common graph database systems are Neo4j and ArangoDB. These systems store graph data differently: Neo4j is a native graph database, while ArangoDB is a multi-model database. Both database systems rely on storing data on disk, which can make data retrieval slow when the data is not in memory. A graph database using the Multi-Agent Spatial Simulation (MASS) library is being implemented to pursue CPU and spatial scalability by leveraging distributed memory to store graph data. This project aims to provide a benchmarking protocol for performance testing of MASS compared to Neo4j and ArangoDB. This will identify the current strengths and weaknesses of MASS, and provide a standard benchmarking tool for future researchers to use. The work includes using data pulled from real-world applications as the foundation of a random graph generator that takes user input regarding the topology and size of the graph that will be generated, a standard set of queries both in Cypher and AQL, a manual for testing in Neo4j, and a script for testing in ArangoDB. The graph sizes used in testing are 1K nodes, 10K nodes, 20K nodes, and 30K nodes for spatial scalability evaluation via graph traversal. CPU scalability evaluation of MASS is performed on a cluster of eight computing nodes using a 10K node graph.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">MINGJUN MA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Yang Peng<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>5:45 P.M.<br><strong>Project<\/strong>: Optimizing Continuity of Applications with Parallel Machine Learning Models During Edge Server Handovers<\/p>\n\n\n\n<p>The development of deep learning has continually benefited various fields, such as autonomous driving and real-time video applications. As deep learning models become increasingly complex, applications composed of deep learning models often require substantial computational resources. These computing resources can be allocated within an edge computing network. However, due to the mobility of end devices, handovers can occur when an end device moves from one signal zone to another. This transition can interrupt the inference process of deep learning applications, leading to temporary service disruptions. Frequent handovers can increase the latency of services, affecting the overall user experience. This issue is critical for real-time applications, where timely and accurate data is essential for making immediate decisions. To improve inference quality during handovers, we designed a solution and a corresponding prototype system to address this challenge. Our objective is to optimize the scheduling algorithm for non-handover and handover scenarios. For non-handover scenarios, we have optimized the system by reducing inference times. During handovers, we focus on maximizing the benefits of inference. We evaluated our design against a greedy solution and found that our approach saves more inference time and yields greater benefits. 
The results demonstrate that our solution improves inference quality in both handover and non-handover scenarios.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, May 28<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ABDUL-MUIZZ IMTIAZ<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Afra Mashhadi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>8:45 A.M.<br><strong>Project<\/strong>: Detecting Toxic Emotes Across Twitch Channels<\/p>\n\n\n\n<p>Twitch is a popular live-streaming platform with a niche language that is very different from traditional English. The language used on Twitch is marked by frequent grammatical errors as well as by the abundant use of emotes, which are emoji-like icons that can be used to express different emotions. This means that someone unfamiliar with the language used on Twitch may not comprehend the content of chat messages.<\/p>\n\n\n\n<p>Pioneering research in the field of natural language processing (NLP) on Twitch proposed different techniques for sentiment analysis of Twitch comments. This project extends that work by extracting toxic emotes in a Twitch channel from chat logs and then using an embedding space of emotes created with Word2Vec to detect toxic emotes in other popular channels.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">PARKER FORD<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Kelvin Sung<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: Real-Time Rendering of Atmospheric Clouds<\/p>\n\n\n\n<p>Rendering realistic clouds is an important aspect of creating believable virtual worlds. The detailed shapes and complex light interactions present in clouds make this a daunting task to complete in a real-time application. 
Our solution, based on Schneider\u2019s volumetric rendering and noise generation framework for low-altitude cloudscapes, supports increased realism and performance in cloud rendering. For efficient approximations of radiance measurements, we adopt Hillaire\u2019s energy-conserving integration method for light scattering. To simulate the effect of multiple light scattering, we follow Wrenninge&#8217;s approach for computing the multi-bounce diffusion of light within a volume. To capture the details of light interreflection off microscopic water droplets, the complex behavior of Mie scattering is approximated with Jendersie and d\u2019Eon\u2019s phase function modeling technique. To capture these details at nominal computational cost, we introduce a temporal anti-aliasing strategy that unifies pixel and volumetric sampling. The pixel area sampling integrates a blue-noise distribution with an n-rooks offset, while volumetric samples follow a stratification strategy, amortizing results over n frames.<\/p>\n\n\n\n<p>The resulting system is capable of rendering scenes consisting of expansive cloudscapes well within real-time requirements, achieving frame times between 2 and 3 milliseconds on a standard laptop. Users can adjust parameters to control various types of low-altitude cloud formations and weather conditions, with presets available for easily transitioning between settings. Our unique combination of techniques adopted in the volumetric rendering process enhances both efficiency and visual fidelity, where the novel approach to volumetric temporal anti-aliasing efficiently and effectively unifies the sampling of pixel areas and volumetric intervals. Looking forward, this technique could be adapted for real-time applications such as video games or flight simulations. Further improvements could refine the cloud modeling system, incorporating procedural generation for high-altitude clouds, thus broadening the range of cloudscapes that can be represented. 
Additionally, recent work by Schneider has shown the potential for voxel-based cloud modeling. This modeling approach could be paired with our volumetric rendering method to further improve the appearance of the clouds.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">GURKIRAT SINGH GULIANI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Min Chen<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: MeTILDA and MultiLinguify for Language Learning<\/p>\n\n\n\n<p>The capstone project focuses on the development and enhancement of language learning applications including MeTILDA (Melodic Transcription in Language Documentation and Application) and MultiLinguify.<\/p>\n\n\n\n<p>MeTILDA is an ongoing project in our research group. It\u2019s a web-based application for endangered language documentation and education. My work focused on enhancing the user experience, resolving major bugs, and adopting Azure DevOps for project management. In addition, significant collaborative efforts were invested in developing comprehensive project documentation that provides detailed insights into the development process, feature implementations, and project management strategies.<\/p>\n\n\n\n<p>The capstone also developed MultiLinguify, a cross-platform mobile application built from scratch. MultiLinguify supports the learning of different languages and enables multiple learning modes such as speaking and writing. With the target users being children aged 5-11, the UI has been kept simple and intuitive. To promote self-learning, learners can draw characters, practice their pronunciation, and get real-time feedback. 
With scalability in mind, the current design accommodates future enhancements such as gamification, notifications, and additional regional languages.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">AGUSTIN CASTILLO<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Erika Parsons<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>5:45 P.M.; Discovery Hall 464<br><strong>Thesis<\/strong>: Evaluating the Effectiveness of the Convolutional LSTM Neural Network for Simulations in Computational Fluid Dynamics<\/p>\n\n\n\n<p>Computational Fluid Dynamics (CFD) is an important part of engineering design, with applications in diverse areas, such as aeronautics, civil engineering, and medicine. Although its practical application is widespread, critical challenges hinder its utilization. These challenges are mainly related to the computing resource requirements and long execution times of fluid flow simulations. Recently, Machine Learning has achieved success in multiple domains, and Deep Learning (DL) techniques have been applied to enhance CFD. However, most of these techniques address a small fraction of the CFD process, for instance, approximating parameters for the algorithm to reduce error or focusing only on principal component analysis of the fluid flow. Compared with DL applications in other fields like Computer Vision, this area is relatively unexplored, and little research has been done on complete end-to-end models to simulate the fluid flow evolution when interacting with an obstacle. This research evaluates the effectiveness of the Convolutional LSTM (ConvLSTM) neural network for Computational Fluid Dynamics simulations when creating Reduced Order Models (ROM) and simulating turbulent fluid flows interacting with an obstacle.
We propose a novel end-to-end Artificial Neural Network (ANN) model architecture based entirely on ConvLSTM that can successfully predict the spatiotemporal evolution of a fluid flow. This data-driven approach achieves similar results to a classical CFD method with direct numerical simulation in a quarter of its execution time. The model can potentially accelerate CFD simulations, leading to a faster engineering development process. By providing rapid preliminary results for prototype testing, engineers can explore more design ideas without having to wait for days or even weeks for simulation results.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, May 29<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">GAGNEET SACHDEVA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Professor Mark Kochanski<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Strengthening the Teaching Tools Platform through CI\/CD deployment<\/p>\n\n\n\n<p>The Software Engineering Studio at the University of Washington Bothell serves as an innovative platform where students engage in live software projects, fostering real-world experience in software development. The &#8220;Teaching Tools&#8221; website, a product of this studio, is designed as a full-stack solution that significantly enhances collaboration between students and faculty. It focuses on refining grading processes and optimising the exchange of feedback, directly addressing inefficiencies within the existing Canvas Learning Management System. 
This capstone project aims to further improve the educational experience at the University by identifying and resolving critical gaps in Canvas, specifically targeting issues that hinder user efficiency and complicate routine tasks.<\/p>\n\n\n\n<p>My aim for this project is to construct a robust Continuous Integration (CI) and Continuous Deployment (CD) pipeline for the Teaching Tools website. This involves automating the build and testing processes and ensuring reliable deployment of the website to Azure cloud services, thereby extending its accessibility to a broader audience. The approach incorporates CI\/CD best practices leveraged through the Azure DevOps platform to enhance deployment efficiency. A strategic branching model supports this framework, maintaining a stable main branch for production releases while facilitating ongoing development in feature branches. This pipeline will not only streamline updates and feature integrations but also enable quicker releases, ensuring that enhancements and bug fixes improve user experiences in a timely fashion. By automating tests and deployments, the project reduces manual errors and increases the productivity of the development team. This allows them to focus more on creating innovative features and less on the mechanics of the deployment process. Ultimately, this infrastructure supports a dynamic educational tool, adapting quickly to the evolving needs of educators and students at the University of Washington Bothell, making it an indispensable asset in the educational landscape.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SHAUN STANGLER<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
David Socha<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>3:30 P.M.<br><strong>Project<\/strong>: Building the Foundations for Extending Local First Software Techniques with Luna mHealth<\/p>\n\n\n\n<p>The Luna mHealth project aims to bridge the healthcare information divide by developing a framework of content authoring and rendering applications tailored for remote and underserved populations. Our focus lies on addressing challenges in accessing medical educational content in areas with limited technology, connectivity, and literacy levels.<\/p>\n\n\n\n<p>In regions like the Comarca reservation in Panama, standard mobile application systems face hurdles in distribution and functionality due to limited internet connectivity and access to app stores. To mitigate these issues, we employ local-first software techniques, prioritizing data sovereignty and offline functionality. Our approach extends to building the foundations for local peer-to-peer distributed file transfer applications using Bluetooth networking for seamless operation in low-connectivity environments with content distribution and cloud telemetry propagation.<\/p>\n\n\n\n<p>To achieve these goals, we performed a complete redesign of Luna architecture and built foundational components such as base storage systems, logging and telemetry libraries, core object model, and presentation parsing. We also built a mobile Bluetooth-based file transfer proof-of-concept application to showcase low-touch configuration-based device pairing and application-level Bluetooth service and characteristic control using custom native Flutter device plugins.<\/p>\n\n\n\n<p>The Luna project is driven by a commitment to innovation and inclusivity in healthcare technology, aiming to empower remote communities with tailored and localized self-service medical education content. 
Through partnerships with organizations providing outreach services, Luna seeks to supplement health education in underserved communities, leveraging mobile applications to enhance training content for providers and empower families with self-education opportunities.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, May 30<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SHUBHAM SHANTARAM PATIL<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Afra Mashhadi<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.; Discovery Hall 464<br><strong>Project<\/strong>: Performance Enhanced Drowsiness Detection for Drivers Using Inception V3 and Haar Cascade.<\/p>\n\n\n\n<p>This work on a performance-enhanced driver drowsiness detection system aims to establish a driving system for drivers around the globe who are at risk of falling into sleepy or drowsy states, which severely compromise road safety and contribute to numerous road accidents in America every year. A study published by the NHTSA (National Highway Traffic Safety Administration), U.S. Department of Transportation, estimates that in 2017, 91,000 police-reported crashes involved drowsy drivers. These crashes led to an estimated 50,000 people injured and nearly 800 deaths. For this capstone project, we propose a drowsiness detection system that helps overcome challenges in the previous studies mentioned in related works. The implemented system employs hybrid state-of-the-art algorithms, the Inception V3 deep CNN and Haar Cascade, which effectively analyze facial and behavioral positions as regions of interest.
For training, a large dataset consisting of 100K images of closed and open eyes from different people (human subjects), captured under various lighting conditions while driving, was used so that the system could become more accurate at detecting such states. As a result, our system achieved an accuracy of 92.35% on tests. The model&#8217;s performance can be evaluated by examining its Receiver Operating Characteristic (ROC) curve. The ROC curve generated by our model has an Area Under the Curve (AUC) value of 0.70, which implies that our system performs better than random guessing; values in the 0.7-0.8 range typically represent good performance, although the model may have limitations in certain contexts.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, May 31<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">KARAN CHOPRA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Professor Mark Kochanski<br><strong>Candidate<\/strong>: Master of Science in Computer Science &amp; Software Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Applying Software Engineering To Develop Features In The Teaching Tool<\/p>\n\n\n\n<p>Canvas is an all-inclusive learning management system (LMS) that runs on the web and is intended to help with digital learning by giving institutions the ability to efficiently oversee online instruction. It gives teachers the resources they need to design, present, and evaluate online courses, and it gives students the chance to engage in classes, monitor their skill growth, and get feedback on their academic achievement. Canvas is a key platform in the field of digital education, with features designed to meet the various needs of educational institutions and their learning communities.
Even with its extensive capabilities, there is still room for improvement to raise the bar for both teacher and student productivity and learning outcomes.<\/p>\n\n\n\n<p>Seeing this room for growth, this capstone project takes the form of an effort to create a full-stack application that runs in the browser. Through feature enhancements and a comparative study of current functionalities, this project aims to deliver new features that follow strict software engineering principles. The development of multiple stand-alone features for the Teaching Tools program is the main goal of this project. These features include importing and exporting quizzes, refactoring previous code to make it more comprehensible, redesigning the architecture of the application, and working on the UI development of new features. These elements are intended to integrate smoothly with Canvas, enhancing the University of Washington Bothell&#8217;s virtual learning environment. Additionally, acting as a mentor to undergraduate students and fostering communication and collaboration are key components of this initiative. The project intends to give all users a more dynamic and engaging learning experience by directly integrating these advancements into Canvas. In addition, this project is dedicated to applying good software engineering principles, guaranteeing efficient system design and execution that places a premium on a clear and captivating user interface.
With this project, the hope is to raise the bar for digital learning platforms and create a setting where education and technology meet to improve the educational experience.<\/p>\n\n\n\n<p><a href=\"#top\">Back to top<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading has-text-align-center\" id=\"mscyber\">Master of Science in Cybersecurity Engineering<\/h2>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">WINTER 2026<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, March 5<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SIDDHA MEHTA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Geetha Thamilarasu<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>11:00 A.M.; UW2 (Commons Hall) Room 327<br><strong>Project:<\/strong> Adaptive Multi-Vector BLE Attack Framework: A Context Aware Approach<\/p>\n\n\n\n<p>Bluetooth Low Energy (BLE) is the standard for modern IoT infrastructure, yet current security tools struggle to test these devices effectively because they lack context. Standard scanners treat every target, whether it be a smart lock or a heart monitor, as a generic black box, applying the same random testing strategies to all of them. This &#8220;one size fits all&#8221; approach routinely misses logic-specific vulnerabilities and fails to detect silent failures, where a device&#8217;s software crashes while its radio connection stays active.<\/p>\n\n\n\n<p>This project presents &#8220;Adaptive Multi-Vector BLE Attack Framework: A Context Aware Approach&#8221;, an automated framework that moves beyond blind fuzzing to intelligent, context-aware testing. The framework uses a Large Language Model (LLM) to plan attacks based on three dimensions: semantic context (inferring what the device is from its data), temporal context (tracking how the device behaves over time), and operational context (mapping its available attack surface).
To confirm the presence of vulnerabilities, we introduce a Differential State Engine that compares the device&#8217;s state before and after an attack. This allows the system to prove that a flaw exists by detecting persistent changes, rather than relying on simple connection errors.<\/p>\n\n\n\n<p>The framework was tested against a diverse set of simulated ESP32-based targets, which included medical, industrial, and smart home profiles. The testing confirmed that the system was able to successfully identify complex logic-based vulnerabilities, such as authentication bypasses, which are often invisible to context-blind tools. These results demonstrate the feasibility and effectiveness of using an LLM for planning and decision-making with device context to enhance the depth of automated BLE security testing.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SUMMER 2025<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, August 14<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">BENARD BIRUNDU<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Marc Dupuis<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>11:00 A.M.<br><strong>Project:<\/strong> Exploring the Impact of Previous Experience and Threat Awareness on Regular Information Backup Among University Students<\/p>\n\n\n\n<p>Cybersecurity has gained momentum in the past decade as cyberthreats have grown in sophistication and adversaries have deployed artificial intelligence (AI) to launch cyberattacks. Technical solutions have long been the first line of defense in cybersecurity, but the focus has lately shifted to human-centered security solutions as cyberattacks target individuals to achieve their goals. Moreover, humans are widely considered the weakest link in cybersecurity, which positions university students among the most vulnerable due to their constant use of the internet.
This study explored the impact of threat awareness and past experience on regular information backup among university students in the USA. The study was grounded in Protection Motivation Theory (PMT), and a survey questionnaire was used to collect data from respondents. Correlational statistical analysis was used to examine the relationships between variables. The findings will be instrumental in improving cybersecurity among students and in the university computing environment.<br>Keywords: Cybersecurity, Protection Motivation Theory, threat awareness, past experience, information backup, university students<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SPRING 2025<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Friday, May 30<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">CHRISTIAN BERGH<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Marc Dupuis<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>3:30 P.M.<br><strong>Thesis:<\/strong> Cybersecurity is Stressful: The Impact of Stress on Identifying Phishing Attacks<\/p>\n\n\n\n<p>Stress is an emotion that impacts everyone; it can have profound physical impacts on health and cognitive functions. The responsibilities placed on average workers in many fields can increase workplace stress. As technology continues to be integrated into all facets of life and work, many industries have become de facto technology companies with large or valuable technical datasets. Many of these industries are facing an onslaught of cyber-attacks in an attempt to gain access to those large or valuable datasets. One of the most common cyber-attacks, often found to be the entryway for many data breaches, remains the phishing attack. This combination of factors places a large amount of responsibility for a business\u2019 digital security on the shoulders of every employee.
However, security is often not the primary responsibility of these employees. Malicious actors know that humans are the most vulnerable part of any security network, and they understand that employees who do not know how their company\u2019s digital network functions or the value of access credentials to a malicious actor can be the easiest to target. Phishing attacks are designed to elicit stress, emotions, and a sense of urgency from the target to trick that target into clicking a link, downloading a file, or entering their credentials somewhere they should not. This study uses the framework and core concepts developed in the Trier Social Stress Test (TSST) in a novel application to reveal the potential impacts of acute stress on identifying common cybersecurity attacks. The TSST creates an acute instance of increased stress in participants. Participants are then shown several screenshots of emails and asked to determine if the email shown is a phishing attack. By analyzing the participant&#8217;s stress levels before and after the phishing attack identification test, along with their performance on the test, we can determine if there is a significant impact on a participant\u2019s ability to identify phishing attacks under increased stress.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, June 2<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">NIMISH SRIVASTAV<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>1:15 P.M.<br><strong>Project:<\/strong> Privacy-Preserving Malware Detection Framework<\/p>\n\n\n\n<p>Malware attacks are becoming increasingly sophisticated, exploiting the centralized architectures of traditional detection systems to compromise both security and user privacy.
To address these challenges, we propose a privacy-preserving malware detection framework that combines federated learning with additive secret sharing, enabling multiple clients to collaboratively train neural models\u2014Dense, Deep, and Residual\u2014implemented in TensorFlow and PyTorch via the Flower platform, without exposing raw data. Our lightweight secret-sharing protocol masks client updates to achieve a differential privacy guarantee of \u03f5 = 0.5, while allowing noise cancellation in aggregate and avoiding the computational overhead of homomorphic encryption. Evaluated on the BODMAS dataset, our approach achieves over 99.2% accuracy and F1-score within five communication rounds, and under adversarial Projected Gradient Descent (PGD) attacks retains up to 17% more accuracy than standard federated learning. In contrast, centralized baselines such as Logistic Regression, Random Forest, and Gradient Boosting suffer catastrophic performance drops under adversarial conditions, highlighting the necessity of privacy-aware decentralization. Privacy analyses confirm strong protection with moderate reconstruction risk, illustrating a practical balance between robustness and privacy for sensitive domains like healthcare, finance, and critical infrastructure.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, June 3<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JUI ANIKET BANGALI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>11:00 A.M.<br><strong>Thesis:<\/strong> Evaluating the Impact of Responsible AI as a Security Control in Machine Learning<\/p>\n\n\n\n<p>Responsible Artificial Intelligence (RAI) is an approach to developing and deploying AI systems in a manner that is ethical, trustworthy, and safe.
While RAI is often framed in terms of fairness, transparency, and social responsibility, its potential role in improving the security and robustness of machine learning (ML) models remains underexplored. This research proposes that integrating RAI principles during the development lifecycle of ML models can serve not only as a foundation for ethical AI but also as a proactive security control. Using Microsoft\u2019s Responsible AI framework, this study examines whether models built with responsible development practices are more resilient to adversarial attacks compared to those developed without them. The findings confirm that RAI practices significantly improve model robustness, with the improved model reducing the average accuracy drop due to adversarial attacks by 46.23% compared to the baseline model. Notably, this result was achieved without applying any additional security-specific defenses, demonstrating that RAI alone can serve as an effective and independent layer of protection. This positions RAI not only as an ethical imperative but also as a practical, adaptable defense mechanism that can complement existing security techniques, offering valuable guidance for AI practitioners building trustworthy and secure systems.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">AUTUMN 2024<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, December 2<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">JEFFREY MURRAY JR.<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>11:00 A.M.; Online Defense<br><strong>Project<\/strong>: GhostPeerShare<\/p>\n\n\n\n<p>Fully Homomorphic Encryption (FHE) schemes allow computations over encrypted data without access to the decryption key. 
Microsoft&#8217;s Simple Encrypted Arithmetic Library (SEAL) provides a state-of-the-art C++ library that enables addition, subtraction, and multiplication on encrypted integers or real numbers. SEAL is difficult to implement on Android due to its outdated documentation. Furthermore, it requires expertise in C++, Kotlin, and Gradle. This project presents a natively compiled Dart plugin that abstracts the underlying C\/C++ library. The FHEL plugin enables developers to access SEAL&#8217;s full functionality within other Dart plugins and Flutter applications. To evaluate the versatility of the plugin, we employ two implementations: 1) the Distance Measure Dart plugin and 2) the GhostPeerShare Flutter application. The Distance Measure plugin implements Kullback-Leibler Divergence, Bhattacharyya Coefficient, and Cramer Distance, along with their FHE counterparts. GhostPeerShare adapts the implementation from Proof of Presence Share (Pop-Share), a mobile application that can detect similar videos using FHE. GhostPeerShare achieved a precision score of 0.9614 and an F1 score of 0.9709 for identifying similar videos. Although video pre-processing was considerably slower, the average FHE computation of each distance measure matched that of Pop-Share. These results demonstrate that GhostPeerShare represents a significant advance in the accessibility of FHE within the mobile community.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SUMMER 2024<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Monday, August 5<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">KRISHNA PAUDEL<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr.
Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>8:45 A.M.<br><strong>Project<\/strong>: MalBuster: An Integrated Approach to Securing Systems through Port Monitoring and Malware Detection<\/p>\n\n\n\n<p>The importance of computer security in today&#8217;s digital world cannot be overstated. As cyber threats become more sophisticated and widespread, strong security measures are essential to protect information and ensure smooth operations. This is where traditional anti-malware solutions come in, but our approach takes a different route through the continuous monitoring of network ports for any signs of malicious activity. While most malware scanners merely examine files, MalBuster detects malware that tries to communicate with external systems by monitoring system ports in real time and identifying suspicious processes.<\/p>\n\n\n\n<p>MalBuster works by continuously watching the local machine\u2019s network ports for inbound and outbound connections, capturing data packets in real time, linking them to the corresponding processes, and classifying the processes using both static and predictive malware analysis techniques. The tool also features a user interface where users can access the generated reports and act on suspected malware. The tool is implemented using various Python libraries to perform static and predictive ML analysis on the processes. The data for analysis and testing were collected from real network traffic, malware sample repositories, and normal software applications. Benchmarking various KPIs, our results show that MalBuster is effective at detecting threats accurately, processing data quickly, and using minimal system resources across various test scenarios. These results underline its capabilities in providing proactive threat detection and mitigation, improving the accuracy and reliability of current security measures.
There is scope for improvement through further research to fine-tune the algorithms and expand MalBuster&#8217;s capabilities, ensuring even better threat detection and protection.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ANURODH ACHARYA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Marc Dupuis<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>3:30 P.M.<br><strong>Project<\/strong>: Examining the Interplay of Psychological Factors in Remote Workers\u2019 Cybersecurity Practices<\/p>\n\n\n\n<p>This research examines the interplay of psychological factors influencing the cybersecurity practices of remote workers, focusing on how perceived severity, perceived vulnerability, self-efficacy, response efficacy, and response cost can impact both protection intention and protection behavior. A survey conducted with a diverse sample of remote workers showed that perceived severity and perceived vulnerability significantly enhance protection intentions, which in turn strongly predict protection behaviours. Self-efficacy and response efficacy positively influence protection intentions, while perceived response cost impacts protection intention but not the actual protective behaviours.<\/p>\n\n\n\n<p>This study also identifies common cybersecurity challenges, including network and device security, and highlights prevalent issues such as malware attacks and data breaches. Survey participants showed a high level of caution in dealing with suspicious emails, often opting for verification and reporting to their IT departments. Emotional responses to cybersecurity roles also varied, with many of the respondents feeling either positive or neutral, showing a general sense of awareness and responsibility.
The findings also highlight the important role that psychological factors play in shaping cybersecurity behaviours among remote workers and further suggest that targeted interventions can greatly improve cybersecurity practices in remote work environments.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, August 7<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">GAUTAM KUMAR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>8:45 A.M.<br><strong>Project<\/strong>: Foodereum: Revolutionizing Food Supply Chains with Blockchain Authentication<\/p>\n\n\n\n<p>Blockchain technology has the potential to revolutionize supply chain management by enhancing transparency, traceability, and security. The proposed work suggests adopting a private Ethereum-based blockchain system to record, track, and verify food products throughout the supply chain. The immutability of blockchain ensures that transactions cannot be altered or concealed, as they are recorded in a distributed ledger accessible to all authorized parties. This transparency allows receivers to access the entire history of a product on the blockchain network, improving food quality and safety. By implementing this blockchain-based solution, the food industry aims to increase consumer appeal and overcome challenges such as centralized systems, third-party involvement, high costs, food waste due to spoilage and contamination, lack of accountability, and communication gaps between supply chain partners.<\/p>\n\n\n\n<p>However, the system faces security threats like cross-site scripting (XSS) attacks, sensitive information exposure, and hardcoded credentials.
To mitigate these risks, the project incorporates robust input validation, output encoding, secure storage, and strong authentication mechanisms, including two-factor authentication and strong password policies. Regular security audits, penetration testing, and software updates are also implemented to identify and address vulnerabilities. These security enhancements, combined with blockchain&#8217;s inherent benefits, create a more robust and trustworthy system for food supply chain management, increasing transparency and efficiency and providing a significantly more secure environment for all stakeholders involved in the food supply chain.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">MANUEL M. DUARTE<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Professor Mark Kochanski<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Analysis of Enhancing Security Measures in Docker Containers: A Threat Model Approach<\/p>\n\n\n\n<p>This project aims to explore new mitigation strategies and to understand the risks associated with deploying Docker as an architectural solution. We researched various security issues with Docker container images and reviewed Docker\u2019s architecture and its application usage. In addition to this analysis, we investigated industry best practices to understand what can mitigate these threats effectively. We created a Threat Model specifically for Docker components and analyzed threats using STRIDE analysis.<\/p>\n\n\n\n<p>Based on our findings, we present a few recommendations for enhancing the secure use of Docker in application deployment, e.g.
when using Docker Hub, always use a security scanner tool before deploying any images, and disable unused services in a Dockerfile; we also mention other good security practices that should be applied in the early stages of development. Our future work will involve further experiments focusing on real-time ML anomaly detection algorithms such as support vector machines (SVM) and Isolation Forest to detect vulnerabilities in SSL protocols and attacks on root access early on, as well as on using machine learning to improve certificate management. Continued research in this area is essential to deepen our understanding of Docker container security and to develop robust defenses against emerging threats.<\/p>\n\n\n\n<p>Keywords: Docker, Dockerfile, Anomaly detection, Encryption, STRIDE, DREAD.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SPRING 2024<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-purple-300-color has-text-color has-link-color wp-elements-b93dcbdc3839348686fe77069d427ce5\">Wednesday, May 15<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">TIMOTHY LUM<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. William Erdly<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>3:30 P.M.; Discovery Hall 464<br><strong>Project<\/strong>: The Howdu (&#8220;How do I\u2026?&#8221;) Project: A Knowledge Management System to Support Cybersecurity Implementation<\/p>\n\n\n\n<p>Knowledge capture and distribution have been a perennial human endeavor since prehistory. In the modern era, the internet has facilitated exponential growth in the volume of information available; however, it is a sub-ideal resource for providing consistent, structured instruction on how best to implement effective cyber defenses.<\/p>\n\n\n\n<p>In this capstone, we evaluate the technical pressures and missing linkages that have prevented guidance from being presented to mitigate cyberattacks. 
We then build a cyber defense knowledge repository that allows users to create a cumulative snapshot of their experiences and insights.<\/p>\n\n\n\n<p>In limited testing, Howdu facilitated a 61% speedup of an arbitrary and partially obfuscated task for those performing it (Practitioners). It further altered the subjective perception of this task from &#8220;Confusing&#8221;, &#8220;Frustrating&#8221;, and &#8220;Ambiguous&#8221; to a more positive outlook of &#8220;Easy&#8221;, &#8220;Comprehensible&#8221;, and &#8220;Fun&#8221;. These initial results suggest that the application can improve network defense by aiding defender efficiency, decreasing stress, and reducing burnout.<\/p>\n\n\n\n<p>Future work includes integration of a system for grading information &#8211; called the Trust Index &#8211; and implementation of a system for translating knowledge articles across languages &#8211; called the Gnosetta. Other long-term goals include containerization of the application and reductions in third-party service reliance.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-purple-300-color has-text-color has-link-color wp-elements-f0697be92b586d6edcbf96b30af083e6\">Monday, May 20<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">KEVIN HUANG<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Marc Dupuis<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>1:15 P.M.<br><strong>Project<\/strong>: Targeted Cybersecurity Awareness and Training Programs for Two Asiatic Minority Groups in the U.S.<\/p>\n\n\n\n<p>Cybersecurity education provides the knowledge and skills necessary for consumers to navigate the digital world safely. This knowledge is not equally accessible to all consumers. There is currently a lack of cybersecurity education available for Asian minority groups in the U.S. Language and cultural barriers make it difficult for consumers in these groups to receive the knowledge that they need. 
The goal of our project was to create effective cybersecurity training programs for the Chinese and Vietnamese minority groups. We accomplished this by first creating education materials that incorporate the latest research and guidelines for password security and social engineering prevention. We then found participants to attend our training sessions with the help of nonprofit organizations. After conducting the training sessions, our results confirmed that there is a demand from the Chinese and Vietnamese minority groups for cybersecurity education. Our analysis also showed that the training sessions made an impact on the participants&#8217; intentions to improve their password hygiene and be more vigilant against social engineering attempts. The insight gained from this project can be used to expand the research and development of cybersecurity education to different Asian minority groups, additional cybersecurity topics, and additional cities across the U.S.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-purple-300-color has-text-color has-link-color wp-elements-9aa2da45f1c08f7bc9fe42f5b100de88\">Thursday, May 23<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">CHALERMWAT PUAPOLTHEP<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>3:30 P.M.<br><strong>Project<\/strong>: Privacy-Preserving Source Code Similarity Detection<\/p>\n\n\n\n<p>Reusing or sharing source code can potentially violate license agreements and lead to lawsuits for an organization. These infringements can be identified using source code similarity detection tools. However, existing methods lack sufficient protection for the privacy of the source code. Due to privacy concerns, the software creator might be hesitant to share their source code for investigation. 
Therefore, we propose a source code similarity detection technique that preserves the privacy of the source code by implementing a normalizing process to standardize the source code before hashing it. Our comparison algorithm employs fuzzy hashing, which enables us to evaluate similarity based on the hash values. Our proof-of-concept application efficiently detects Type I and II code clones with minor time trade-offs. To quantify the trade-off, we ran experiments comparing the time usage with and without the normalizing process. Additionally, the accuracy of Type II code clone detection is investigated. The results showed an improvement in precision and recall, with an average time only 0.0005 seconds longer than the process without normalizing.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-purple-300-color has-text-color has-link-color wp-elements-887ebd2bebbebfe750c852541611a881\">Friday, May 24<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">CHHEANG DUONG<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>8:45 A.M.<br><strong>Project<\/strong>: Generating Synthetic Data and Evaluating its Privacy<\/p>\n\n\n\n<p>The field of Generative AI has grown in popularity in recent years as organizations seek new ways to leverage synthetic data to optimize their processes. This technology is especially useful in areas like healthcare, where sharing data for research must balance keeping information private with ensuring the data remains useful. Organizations are increasingly considering deploying generative AI for data anonymization prior to public release. Synthetic data offers the benefit of preserving privacy while maintaining the utility of the data for research purposes. 
However, the assumption that synthetic data does not contain real data can create a false sense of security if the underlying machine learning models are inadequately configured. Thus, organizations must be able to assess the privacy risk of the synthetic data.<\/p>\n\n\n\n<p>This study sought to establish the foundations for a comprehensive framework that evaluates the privacy risks associated with synthetic data. Our approach incorporates previous works on synthetic data generation and privacy assessment into a single workflow consisting of three phases. First, our Data Synthesis module generates synthetic data utilizing Conditional Tabular Generative Adversarial Networks (CTGAN) or Differentially Private CTGAN (DP-CTGAN). Next, our Privacy Attack module runs Membership Inference Attacks (MIAs) against the synthetic data to identify potential privacy leaks. As one of the three types of privacy attack strategies, MIAs aim to reverse engineer the machine learning model and deduce whether a data sample is part of the original data. Finally, our Privacy Evaluation module analyzes the Data Utility and Privacy Defense of the synthetic datasets by translating the results of the Privacy Attack module into quantitative metrics.<\/p>\n\n\n\n<p>The results of our testing provide valuable insights for future research in optimizing and adapting our workflow. For example, current literature on synthetic data privacy recommends leveraging Differential Privacy to reduce privacy risk, but does not validate data utility. Low data utility would essentially render the synthetic data useless. Our results found that while Differential Privacy significantly reduced the probability of a successful privacy attack, data utility also decreased considerably. This finding supports the need for further optimization of our Data Synthesis module to assist users in generating statistically similar synthetic data. 
In the future, we may be able to build a robust framework that would allow organizations to confidently generate and release synthetic datasets for public consumption without compromising individual privacy.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SAJA ALSULAMI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Marc Dupuis<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>11:00 A.M.<br><strong>Thesis<\/strong>: A Study on the Effectiveness of Education and Fear Appeal to Prevent Spear Phishing of Online Users<\/p>\n\n\n\n<p>A spear phishing attack is considered one of the most elaborate attacks in social engineering. In it, an attacker designs a scam to obtain the personal information of specific users from their social media accounts. It requires a preliminary analysis of targeted users and their online behaviors in order to persuade them that a malicious link or attachment was sent by a trusted person. This attack relies on human beings being the weakest link within a security system, whose vulnerabilities can be exploited. The most detrimental consequences following spear phishing attacks are financial losses, network compromises, loss of login credentials, and malware installation.<\/p>\n\n\n\n<p>This quantitative study aims to examine the impact of education and fear appeals on users\u2019 knowledge and abilities to identify spear phishing attacks. Three interventions were implemented: an educational intervention, a fear appeal intervention, and a combined educational-fear appeal intervention. The control group was used for comparison purposes. This study was conducted as an online experiment with 726 participants, who were randomly assigned to four groups; after the interventions, a test evaluated their knowledge and abilities to identify spear phishing attacks. 
The test was administered to compare the efficacy of every intervention group (educational, fear appeal, and combined educational-fear appeal) to the control group. The experiment findings revealed no statistically significant differences in mean test scores across these four groups. The study results indicate that further research is needed to develop an effective intervention program that would considerably enhance users\u2019 knowledge of spear phishing attacks and their resilience to them.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-purple-300-color has-text-color has-link-color wp-elements-58c35240b33b646a8f422f044b8cf9d4\">Tuesday, May 28<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">ANTHONY JESUS BUSTAMANATE SUAREZ<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Brent Lagesse<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>3:30 P.M.; Discovery Hall 464<br><strong>Thesis<\/strong>: Advancing Deep Packet Inspection in SDNs: A Comparative Analysis of P4 and OpenFlow Programmability<\/p>\n\n\n\n<p>This thesis undertakes a critical examination of Deep Packet Inspection (DPI) capabilities within Software-Defined Networking (SDN) frameworks, emphasizing the comparative efficacy of the P4 programming language against the conventional OpenFlow protocol.<\/p>\n\n\n\n<p>OpenFlow, while foundational in SDN\u2019s evolution, exhibits notable constraints in the DPI domain, primarily due to its limited packet inspection depth, confined largely to the transport layer (Layer 4). In contrast, this research advocates for the adoption of P4 for its unparalleled flexibility and programmability, extending DPI functionalities to the application layer (Layer 7), thereby addressing and potentially surpassing OpenFlow\u2019s limitations.<\/p>\n\n\n\n<p>Employing a methodical approach, this study harnesses Open vSwitch and BMv2 (Behavioral Model version 2) switches to simulate real-world network scenarios. 
These simulations facilitate a head-to-head comparison of OpenFlow and P4 in executing DPI tasks, particularly focusing on HTTP and SQL protocols \u2014 common vectors for network threats. Through a comprehensive suite of protocols including OpenFlow, gRPC (Google Remote Procedure Call), and P4Runtime, the research crafts a robust DPI framework, further complemented by a custom-developed controller designed for the BMv2 and P4 ecosystem. This controller\u2019s introduction marks a significant step in demonstrating DPI\u2019s operational viability and efficiency within an SDN environment, leveraging application-layer traffic management. The research culminates in three different implementations of Deep Packet Inspection within SDN, benchmarking each of them to measure their advantages and disadvantages. With these implementations and benchmarks, we not only aim to validate P4\u2019s superiority over OpenFlow in managing DPI tasks but also seek to dynamically adapt packet-processing techniques to the ever-evolving landscape of network threats. By advancing SDN functionalities beyond traditional layer boundaries, this thesis contributes significantly to the discourse on network security, management, and optimization, paving the way for future innovations in increasingly complex network environments.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-purple-300-color has-text-color has-link-color wp-elements-c8c523fc632f00e71b69d9841de282b1\">Wednesday, May 29<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">EDUARD RASKIN<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Marc Dupuis<br><strong>Candidate<\/strong>: Master of Science in Cybersecurity Engineering<br>11:00 A.M.<br><strong>Project<\/strong>: American Automobile Association Penetration Test<\/p>\n\n\n\n<p>Penetration testing simulates attacks against systems, networks, and applications. 
Such audits are a cornerstone of successful attack surface management programs since they help organizations identify and close security gaps and meet industry-specific security standards and requirements. This document outlines the learning, preparation, technical work, and results of a black box security review conducted on American Automobile Association of Washington&#8217;s publicly accessible network infrastructure and company website. The goal of the assessment was to identify vulnerabilities that an outside attacker could leverage to gain unauthorized access and assess the overall security posture of AAAWA\u2019s external perimeter. The test analyzed AAAWA\u2019s digital footprint, enumerated publicly accessible servers and authentication portals, and examined infrastructure and web application vulnerabilities. The work included an authentication, access control, and user management assessment. The key deliverable was a detailed penetration test report that outlined findings and provided recommendations based on risks.<\/p>\n\n\n\n<p><a href=\"#top\">Back to top<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading has-text-align-center\" id=\"msece\">Master of Science in Electrical &amp; Computer Engineering<\/h2>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SUMMER 2025<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Wednesday, August 13<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">SRIVAISHNAVI JAMALAPURAPU<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Seungkeun Choi<br><strong>Candidate<\/strong>: Master of Science in Electrical &amp; Computer Engineering<br>8:45 A.M.; Discovery Hall Room 464<br><strong>Thesis:<\/strong> Characterization of a Sneak Path Current Effect in a PEDOT: PSS-based ReRAM Crossbar Array<\/p>\n\n\n\n<p>This thesis presents an experimental characterization of Cu\/PEDOT: PSS\/Ag crossbar ReRAM devices fabricated on glass substrates, with a particular focus on understanding the impact of sneak-path currents on device performance. Resistive memory cells, particularly in crossbar configurations, are widely considered a promising memory technology due to their scalability, high density, non-volatility, and low power consumption. However, despite these advantages, they are prone to sneak-path currents\u2014unintended leakage through unselected cells\u2014which pose a significant challenge to performance and reliability in dense memory arrays.<\/p>\n\n\n\n<p>In this work, we investigate how a ReRAM cell in a crossbar array responds to prolonged electrical stress by applying low-voltage cycling corresponding to sneak-path conduction. The device has a PEDOT: PSS switching layer sandwiched between a Cu bottom electrode and an Ag top electrode. The width of the electrodes is 10 \u00b5m and the crossbar array spacing is 60 \u00b5m. This research elaborates on how a resistive switching memory cell in a crossbar array degrades in performance under various biasing conditions.<\/p>\n\n\n\n<p>Three categories of electrical testing were conducted. First, cycling measurements under repeated \u00b12 V sweeps were used to assess endurance until hysteresis degradation. Devices showed stable switching behavior for up to 121 cycles, with well-defined LRS and HRS and consistent Vset and Vreset values before breakdown. 
Second, voltage sweep experiments were designed to compare switching behaviors during an initial \u00b12 V sweep versus a final \u00b12 V sweep, following intermediate low-voltage steps from \u00b10.5 V to \u00b11.9 V. The results demonstrated that cumulative sneak-path conduction led to subtle hysteresis collapse and reduced switching margins.<br>Finally, narrow-to-broad sweep testing evaluated whether limited low-voltage cycling could condition the device and influence future switching dynamics. Devices exposed only to narrow ranges remained non-switching, but subsequent full-range sweeps revealed altered I\u2013V characteristics, confirming that sneak-path effects accumulate even in the absence of complete switching events.<\/p>\n\n\n\n<p>Overall, the findings highlight that while selector-less resistive memory cells offer structural simplicity and potential scalability, their performance degrades under repeated low-voltage stress because of sneak-path currents. This study provides experimental insights into how cumulative leakage conduction paths affect long-term device reliability, informing future strategies for robust ReRAM integration in compute-in-memory and neuromorphic architectures.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SPRING 2025<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Thursday, June 5<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">AUSTEN CLAIR<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Walter Charczenko<br><strong>Candidate<\/strong>: Master of Science in Electrical Engineering<br>5:45 P.M.; Discovery Hall 464<br><strong>Thesis<\/strong>: Signal Integrity of Coplanar Waveguides versus Microstrip Interconnects for Clock Distribution<\/p>\n\n\n\n<p>This thesis investigates the use of Grounded Coplanar Waveguide (GCPW) interconnects, commonly employed in monolithic millimeter-wave circuits, versus microstrip interconnects for distributing digital clock signals on printed circuit boards (PCBs) with the goal of improving electromagnetic compatibility (EMC) and signal integrity (SI). Two prototype PCBs, one using microstrip traces and the other using GCPW traces, were designed and fabricated using identical materials and layout constraints. Both designs were simulated and measured in the frequency and time domain to evaluate their performance in clock signal splitting and isolation from adjacent transmission lines. Results show that the GCPW-based PCB outperforms the microstrip design in the frequency range from 300 kHz to at least 6 GHz. The GCPW circuit demonstrated superior suppression of electromagnetic coupling and crosstalk to adjacent nets, reduced clock skew, lower reflection, and minimized signal attenuation. After propagating through the clock distribution network, clock signals at the output ports of the GCPW PCB displayed greater signal integrity and higher isolation between output ports than the microstrip PCB throughout the measured spectrum. Crosstalk measurements taken on a net adjacent to the clock distribution circuit, while clearly quantifiable on the microstrip PCB, were nearly undetectable on the GCPW PCB. 
This study supports the use of GCPW structures over microstrip for high-performance digital clock distribution to improve EMC and SI.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">SPRING 2024<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left is-style-default has-purple-300-color has-text-color\">Tuesday, May 14<\/h4>\n\n\n\n<h5 class=\"wp-block-heading is-style-default\">CHERYL KUNG<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Walter Charczenko<br><strong>Candidate<\/strong>: Master of Science in Electrical Engineering<br>3:30 P.M.; Discovery Hall 464<br><strong>Thesis<\/strong>: Low-Cost 3-D Printed Helical Antenna with Dielectric Support<\/p>\n\n\n\n<p>Satellite-based internet connection requires high directivity, millimeter wave phased array antennas to be able to receive and transmit signals effectively. Phased array antennas for millimeter waves have historically been very expensive to manufacture. Exploring low-cost methods for manufacturing high directivity antennas may bring down the costs of these systems, allowing more equitable access to the internet.<\/p>\n\n\n\n<p>Helical antennas are a type of high directivity antenna that can be used for these purposes. However, helical antennas are difficult to manufacture and scale due to the three-dimensional (3-D) shape of the helix conductor. New 3-D printing technology allows the creation of a dielectric support for the helical antenna element. This adds mechanical rigidity to the antenna and is feasible for high-volume manufacturing at a lower cost.<\/p>\n\n\n\n<p>This thesis explores the design of a low-cost helical antenna using a 3-D printed dielectric core for mechanical support. The research in this thesis concludes that it is possible to design a helical antenna using low-cost dielectric materials with high relative permittivity at microwave frequencies. 
As a proof of principle, a 5 GHz helical antenna embedded in a solid dielectric was designed and modeled using electromagnetic field simulation software. At 5 GHz, the software simulations can be compared to helical antennas that are manufactured on conventional 3-D printers with commonly used resin dielectrics. The conclusion and results of the computer simulations show that helical antennas with dielectric support will radiate in the axial mode with high directivity and circular polarization.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">Summer 2023<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-purple-300-color has-text-color\">Friday, August 11<\/h4>\n\n\n\n<h5 class=\"wp-block-heading\">PRITAM BHANDARI<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. Seungkeun Choi<br><strong>Candidate<\/strong>: Master of Science in Electrical Engineering<br>11:00 A.M.<br><strong>Thesis<\/strong>: Characterization of Nafion Based Resistive RAM Devices<\/p>\n\n\n\n<p>The development of computers in the modern era has escalated the race towards the development of powerful and efficient memory devices. In the past hundred years, we have gone from using punch cards for a mere kilobyte of storage to an enormous capability of storing terabytes of data. This progress has been mainly possible due to advancements in the field of non-volatile memory devices and fabrication technologies. Through the use of advanced miniaturization techniques and new materials, we have been able to dramatically reduce the size of memory devices while increasing storage capacity and computing performance.<\/p>\n\n\n\n<p>Despite this remarkable feat, we are reaching a point of slower growth in the computing performance of MOSFET-based non-volatile memory devices. It has become very difficult to further decrease the size of memory devices. 
Hence, a next-generation memory technology must have the following features to meet the high computing performance demands of the era of artificial intelligence: low-power consumption, fast switching, non-volatility, and high-density fabrication. Resistive Random-Access Memory (ReRAM) devices meet all of those requirements and are therefore considered among the best candidates for next-generation memory technologies.<\/p>\n\n\n\n<p>In this research, a ReRAM device with Nafion as a switching layer was fabricated. To characterize the resistive switching performance, Nafion was annealed at three different temperatures: 30\u00b0C, 60\u00b0C, and 90\u00b0C. In order to study the effect of different electrodes, we used two different bottom electrodes (Au and Cu) and Al as a top electrode. The devices with Cu as a bottom electrode exhibit good resistive switching properties, while the devices with Au as a bottom electrode show little or negligible switching performance. We found that switching performance is best when Nafion is annealed at 60\u00b0C. However, the experiment shows a wide variation in device performance even on the same substrate, indicating the importance of uniform film thickness and quality of the Nafion.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">Summer 2022<\/h3>\n\n\n\n<h4 class=\"wp-block-heading has-purple-300-color has-text-color\">Tuesday, August 9<\/h4>\n\n\n\n<h5 class=\"wp-block-heading\">MOOSA RAZA<\/h5>\n\n\n\n<p><strong>Chair<\/strong>: Dr. 
Seungkeun Choi<br><strong>Candidate<\/strong>: Master of Science in Electrical Engineering<br>8:45 A.M.<br><strong>Thesis<\/strong>: Multilevel Resistive Switching in a Metal Oxide Semiconductor based on MoO<sub>3<\/sub><\/p>\n\n\n\n<p>Over the years, resistive random-access memory (ReRAM) has received much attention among many emerging memory technologies due to its simple structure and fabrication process, cost-effective development, low-power consumption, scalability, high throughput, and other attractive memory characteristics.<\/p>\n\n\n\n<p>Multilevel switching operation of the stack ReRAM device based on MoO<sub>3<\/sub> has been investigated by the compliance current control method, where the device exhibited 2-bit per cell memory storage density. The device realized a bipolar resistive switching mode with varying levels of the high resistive state (HRS) between 11.7 \u03a9 and 90 \u03a9 and the low resistive state (LRS) between 3.89 \u03a9 and 47 \u03a9, read at 0.01 V during the endurance characterization, exhibiting a variable OFF\/ON ratio between 1.6 and 15. The device also showed insignificant variations in the switching voltages, with the set voltage (V<sub>set<\/sub>) between 0.22 V and 0.27 V and the reset voltage (V<sub>reset<\/sub>) between 0.15 V and 0.30 V over 11 resistive switching cycles when swept between -0.5 V and +0.5 V.<\/p>\n\n\n\n<p>Also, the unique resistive switching behavior of the novel lateral ReRAM device based on MoO<sub>3<\/sub> has been reported, showing multiple set and reset voltages in both the positive and negative voltage regimes while maintaining consistency across the switching voltages, i.e., V<sub>set<\/sub> A, V<sub>set<\/sub> B, V<sub>set<\/sub> C, V<sub>reset<\/sub> A, and V<sub>reset<\/sub> B were observed around -40 V, 40 V, -10 V, 40 V, and -40 V, respectively, throughout the 105 switching cycles. 
The device also exhibits a self-compliance property at much smaller currents of around a few microamperes (\u2245 0.9 \u03bcA), making it suitable for a wide range of power applications. Further investigation is required to determine the plausible applications of the unique resistive switching properties achieved from the lateral ReRAM device.<\/p>\n\n\n\n<p><a href=\"#top\">Back to top<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Previous final examination defense schedule An archive of some previous School of STEM master\u2019s degree thesis\/projects. Select a master&#8217;s program to navigate to candidates: Computer Science &amp; Software Engineering Cybersecurity Engineering Electrical &amp; Computer Engineering Master of Science in Computer Science &amp; Software Engineering WINTER 2026 Wednesday, February 25 DEBOSHRI DAS Chair: Dr. Min ChenCandidate:&#8230;<\/p>\n","protected":false},"author":6,"featured_media":0,"parent":22903,"menu_order":16,"comment_status":"open","ping_status":"open","template":"","meta":{"_acf_changed":false,"_is_archived":false,"_archived_contact_email":"","footnotes":""},"class_list":["post-22905","page","type-page","status-publish","hentry"],"acf":{"related_links":{"toggle_visibility":false,"link_1":"","link_2":"","link_3":"","link_4":"","link_5":""},"highlight_box":{"toggle_visibility":false,"title":"","content":"","button":"","button_style":"angled-purple-button","button_screen_reader_text":""},"contact_type_1":{"toggle_visibility":true,"contact_title":"School of Science, Technology, Engineering &amp; 
Mathematics","email":"stemgrad@uw.edu","phone":"425.352.5490","box":"","address_line_1":"","address_line_2":"","location":""},"contact_type_2":{"toggle_visibility":false,"contact_title":"","email":"","phone":"","box":"","address_line_1":"","address_line_2":"","location":""},"social_media":{"toggle_visibility":false,"facebook_url":"","instagram_url":"","linkedin_url":"","twitter_url":"","youtube_url":""},"blog_archive_sidebar_visibility":false},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.0 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Previous Quarters - School of Science, Technology, Engineering &amp; Mathematics<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Previous Quarters - School of Science, Technology, Engineering &amp; Mathematics\" \/>\n<meta property=\"og:description\" content=\"Previous final examination defense schedule An archive of some previous School of STEM master\u2019s degree thesis\/projects. Select a master&#8217;s program to navigate to candidates: Computer Science &amp; Software Engineering Cybersecurity Engineering Electrical &amp; Computer Engineering Master of Science in Computer Science &amp; Software Engineering WINTER 2026 Wednesday, February 25 DEBOSHRI DAS Chair: Dr. 
Min ChenCandidate:...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive\" \/>\n<meta property=\"og:site_name\" content=\"School of Science, Technology, Engineering &amp; Mathematics\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-27T17:40:56+00:00\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"169 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive\",\"url\":\"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive\",\"name\":\"Previous Quarters - School of Science, Technology, Engineering &amp; Mathematics\",\"isPartOf\":{\"@id\":\"\/#website\"},\"datePublished\":\"2022-09-30T14:17:30+00:00\",\"dateModified\":\"2026-04-27T17:40:56+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.uwb.edu\/stem\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Graduate Programs\",\"item\":\"https:\/\/www.uwb.edu\/stem\/graduate\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Thesis\/Project Final Defense Schedule\",\"item\":\"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\"},{\"@type\":\"ListItem\",\"position\":4,\"name\":\"Previous Quarters\"}]},{\"@type\":\"WebSite\",\"@id\":\"\/#website\",\"url\":\"\/\",\"name\":\"School of Science, 
Technology, Engineering &amp; Mathematics\",\"description\":\"Just another UW Bothell site\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Previous Quarters - School of Science, Technology, Engineering &amp; Mathematics","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive","og_locale":"en_US","og_type":"article","og_title":"Previous Quarters - School of Science, Technology, Engineering &amp; Mathematics","og_description":"Previous final examination defense schedule An archive of some previous School of STEM master\u2019s degree thesis\/projects. Select a master&#8217;s program to navigate to candidates: Computer Science &amp; Software Engineering Cybersecurity Engineering Electrical &amp; Computer Engineering Master of Science in Computer Science &amp; Software Engineering WINTER 2026 Wednesday, February 25 DEBOSHRI DAS Chair: Dr. Min ChenCandidate:...","og_url":"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive","og_site_name":"School of Science, Technology, Engineering &amp; Mathematics","article_modified_time":"2026-04-27T17:40:56+00:00","twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"169 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive","url":"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive","name":"Previous Quarters - School of Science, Technology, Engineering &amp; Mathematics","isPartOf":{"@id":"\/#website"},"datePublished":"2022-09-30T14:17:30+00:00","dateModified":"2026-04-27T17:40:56+00:00","breadcrumb":{"@id":"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule\/archive#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.uwb.edu\/stem\/"},{"@type":"ListItem","position":2,"name":"Graduate Programs","item":"https:\/\/www.uwb.edu\/stem\/graduate"},{"@type":"ListItem","position":3,"name":"Thesis\/Project Final Defense Schedule","item":"https:\/\/www.uwb.edu\/stem\/graduate\/defense-schedule"},{"@type":"ListItem","position":4,"name":"Previous Quarters"}]},{"@type":"WebSite","@id":"\/#website","url":"\/","name":"School of Science, Technology, Engineering &amp; Mathematics","description":"Just another UW Bothell 
site","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/www.uwb.edu\/stem\/wp-json\/wp\/v2\/pages\/22905","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.uwb.edu\/stem\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.uwb.edu\/stem\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.uwb.edu\/stem\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.uwb.edu\/stem\/wp-json\/wp\/v2\/comments?post=22905"}],"version-history":[{"count":44,"href":"https:\/\/www.uwb.edu\/stem\/wp-json\/wp\/v2\/pages\/22905\/revisions"}],"predecessor-version":[{"id":37322,"href":"https:\/\/www.uwb.edu\/stem\/wp-json\/wp\/v2\/pages\/22905\/revisions\/37322"}],"up":[{"embeddable":true,"href":"https:\/\/www.uwb.edu\/stem\/wp-json\/wp\/v2\/pages\/22903"}],"wp:attachment":[{"href":"https:\/\/www.uwb.edu\/stem\/wp-json\/wp\/v2\/media?parent=22905"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}