Thesis/Project Final Exam Schedule

 

Final Examination Schedule

PLEASE JOIN US AS THE FOLLOWING CANDIDATES PRESENT THEIR CULMINATING WORK.

Summer 2021

For summer quarter 2021, no Final Examinations or Defenses will be held in person due to public health guidelines. For a link to attend a candidate's online defense, please contact our office at stemgrad@uw.edu.

Thursday, July 29

Victoria Salvatore

Chair: Dr. Michael Stiber
Candidate: Master of Science in Computer Science & Software Engineering

12:00 P.M. (noon); Online
Thesis: Demonstrating Software Reusability: Simulating Emergency Response Network Agility with a Graph-Based Neural Simulator

This research validates the re-engineering of a neural network simulator to support other graph-based scenarios. Most components were abstracted to increase reusability and maintainability through strategic refactoring decisions. This paper demonstrates how the simulator, developed at the University of Washington Bothell, can be applied to another graph-based problem: the resilience of the US's Next-Generation 911 (NG-911) system in the face of a crisis. This research focuses on separating the neuro-specific components from the architecture of the simulator and verifying its functionality as reusable software. It also includes first-person interviews, literature reviews, data analyses, and NG-911 system research to establish the system requirements for the NG-911 test-bed. Initial results demonstrate that when a crisis destroys critical parts of emergency response infrastructure, the NG-911 test-bed can reroute calls. This can support future work investigating the patterns that emerge from the interconnected events of a regional emergency response network. By applying previous research findings on the self-organizing behavior observed in both neural networks and emergency response networks during catastrophic events, this research will also contribute to the demonstration of self-organized criticality in complex networks. The NG-911 implementation of the simulator is intended to model the resilience of emergency response infrastructure at varying levels of network connectivity.
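
To illustrate the rerouting behavior described above, the sketch below models an emergency response network as a graph, removes a failed node, and checks whether a call can still reach a Public Safety Answering Point (PSAP). The topology, node names, and use of the networkx library are illustrative assumptions, not the simulator's actual code.

```python
# Hypothetical sketch: reroute an emergency call when part of the network fails.
import networkx as nx

def build_network():
    g = nx.Graph()
    g.add_edges_from([
        ("caller", "router_a"), ("caller", "router_b"),
        ("router_a", "psap_1"), ("router_b", "psap_2"),
        ("psap_1", "psap_2"),
    ])
    return g

def route_call(g, source="caller"):
    """Return the nearest reachable PSAP, or None if the call cannot be routed."""
    psaps = [n for n in g.nodes if n.startswith("psap")]
    reachable = [(nx.shortest_path_length(g, source, p), p)
                 for p in psaps if nx.has_path(g, source, p)]
    return min(reachable)[1] if reachable else None

g = build_network()
print(route_call(g))           # e.g. 'psap_1'
g.remove_node("psap_1")        # simulate a crisis destroying one answering point
print(route_call(g))           # the call is rerouted to 'psap_2'
```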

Monday, August 2

Khadouj Fikry

Chair: Dr. Clark Olson
Candidate: Master of Science in Computer Science & Software Engineering

3:30 P.M.; Online
Project: Photo Mosaic Reassembling Images & Image Matching Techniques

Art often produces an artifact that is appreciated for its richness, meaning, message, and emotional power. A mosaic is an artwork created by setting small tiles together into a larger piece. An image captures a moment in time and results in an artifact that can be printed or digitized. A photomosaic is an engineering solution that combines the techniques of art, mosaics, and images. Given an input image and a library of images, the algorithm mimics mosaic techniques to regenerate the input image by stitching together images from the library. Selecting the right image to stitch at the right position, while maintaining the original image's features and look, requires intensive image analysis and matching techniques. This project focuses on producing quality photomosaics by honing the different image analysis techniques. A combination of weighted BGR values and a large color histogram to extract image statistics produces a quality photomosaic that is very similar to the original image. Numerous image analysis techniques are articulated in this project to help quantify and conclude the effectiveness of the proposed solution.
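
As a rough illustration of the tile-matching step described above, the sketch below scores each candidate library image by a weighted mean BGR value combined with a coarse color histogram and picks the closest match for a block of the input image. The weights, histogram size, and function names are assumptions, not the project's exact parameters.

```python
# Illustrative tile matching for a photomosaic: weighted BGR mean + color histogram.
import cv2
import numpy as np

def descriptor(img, bins=8, weights=(1.0, 1.2, 1.0)):
    """Weighted mean BGR plus a coarse 3D color histogram (illustrative values)."""
    mean_bgr = img.reshape(-1, 3).mean(axis=0) * np.array(weights)
    hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3, [0, 256] * 3).flatten()
    hist /= hist.sum() + 1e-9
    return np.concatenate([mean_bgr / 255.0, hist])

def best_tile(block, library_descriptors):
    """Index of the library image whose statistics are closest to this block."""
    d = descriptor(block)
    dists = [np.linalg.norm(d - ld) for ld in library_descriptors]
    return int(np.argmin(dists))

# Usage sketch: precompute descriptors once, then match every block of the input.
# library_descriptors = [descriptor(img) for img in library_images]
# tile_index = best_tile(input_block, library_descriptors)
```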


Tuesday, August 3

Marcel Okello

Chair: Dr. Marc Dupuis
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; Online
Project: Password Feedback Meters: The Development and Validation of a Testbed to Examine the Efficacy of Peer-Feedback and Other Password Meter Types

Passwords have become entrenched in front-end authentication. Over the years, system administrators and security experts have stressed that users should create strong passwords, while providing little information on how to actually accomplish this. Many websites and apps provide a password strength meter to gauge whether the user is complying with the password policies of the given site or app. While these meters have proven effective at forcing users to create stronger passwords, they have not encouraged users to intrinsically want to create a strong password. With many users feeling that creating a password is a cumbersome, secondary task, we wanted to see if this could be improved. Based on the concept of social influence, we developed a password strength meter testbed to determine whether a password feedback meter that uses peer feedback is more efficacious than traditional password feedback meters, no password feedback meter, and a novelty password feedback meter. We evaluate whether this helps users not only create stronger passwords because of mandatory requirements, but also want to create stronger passwords intrinsically. The testbed itself is adaptable and designed so that other types of password feedback meters may be employed for testing. Our results are interesting, and our next steps would be to improve the peer-feedback meter to provide targeted feedback to each user based on their password selection.
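
As a purely hypothetical illustration of the peer-feedback idea, the sketch below scores a candidate password with a simple heuristic and reports how it compares to scores previously observed from peers. The scoring rule, peer data, and feedback wording are invented for illustration and are not the testbed's actual meter.

```python
# Hypothetical peer-feedback meter: compare a password's score to peer scores.
import string

def heuristic_score(password):
    """Toy strength heuristic: length times the number of character classes used."""
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    variety = sum(any(c in cls for c in password) for cls in classes)
    return len(password) * variety

def peer_feedback(password, peer_scores):
    score = heuristic_score(password)
    weaker = sum(1 for s in peer_scores if s < score)
    percentile = 100 * weaker / len(peer_scores)
    return f"Your password is stronger than {percentile:.0f}% of your peers' passwords."

peers = [24, 30, 36, 48, 60, 72]            # hypothetical peer scores
print(peer_feedback("Tr0ub4dor&3", peers))  # "...stronger than 50% of your peers' passwords."
```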

Wednesday, August 4

Dylan Shallow

Chair: Dr. Erika Parsons
Candidate: Master of Science in Computer Science & Software Engineering

3:30 P.M.; Online
Project: Document Clustering with Knowledge Graphs

Modern problems steadily require the management, processing, and storage of increasingly large datasets, which in turn requires harnessing larger computational resources. An example of such a problem is clustering documents using knowledge graphs. Representing documents with knowledge graphs will enable scholarly article databases to better serve researchers by providing accurate and interpretable information about the relationships between documents, which can be used to present better search results and suggestions for related works. In this work, we propose the use of knowledge graphs for this purpose. Knowledge graph representation and clustering of data may also be applied to other areas of research, such as data provenance, search results clustering, and abstract synthesis. Performing the necessary operations on large graphs requires tools like MPI, which can conveniently scale up to meet our needs. Because the problem at hand is computationally expensive, we introduce the use of a compute cluster, Purdue Scholar, to accommodate larger computing resource needs; access to it made it possible to make strides in this project. Researchers and developers need access to these resources as well as the expertise and tooling to write efficient and correct software that can take advantage of them. As part of this project, we also share experiences and lessons learned, with the goal of paving the road for future UWB students and encouraging more focus on high performance computing (HPC) and remote development in this and other projects, opening up collaboration opportunities. Quantitatively, our results show how we are able to use MPI to harness the available performance of Purdue Scholar, scale up our system, and process our datasets in a reasonable amount of time compared to previous efforts.
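
As a rough sketch of the MPI-based scaling idea described above (not the project's actual code), the example below uses mpi4py to distribute a pairwise document-similarity computation across ranks and gather the results on rank 0. The placeholder document vectors, and the omission of the knowledge-graph representation and the Purdue Scholar job setup, are deliberate simplifications.

```python
# Illustrative MPI scaling sketch. Run with: mpiexec -n 4 python similarity_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(0)          # same seed -> identical placeholder data on every rank
docs = rng.random((1000, 64))           # placeholder document embeddings

# Each rank handles a strided slice of the document rows.
rows = range(rank, docs.shape[0], size)
local = {i: docs @ docs[i] for i in rows}   # similarity of document i to all documents

gathered = comm.gather(local, root=0)
if rank == 0:
    similarities = {i: v for part in gathered for i, v in part.items()}
    print(f"computed similarity rows for {len(similarities)} documents")
```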


Thursday, August 5

Joe Leung

Chair: Dr. Kelvin Sung
Candidate: Master of Science in Computer Science & Software Engineering

1:00 P.M.; Online
Thesis: A User-Programmable, System-Agnostic Real-Time Ray-Tracing Framework

Ray tracing is a well-understood rendering technique that can produce photorealistic or complex visual effects. Until recently, customizing ray tracing systems posed challenges because of the lack of infrastructure for programming the rendering pipeline. To address this customizability issue, industry vendors have proposed their own ray tracing frameworks that integrate user programs. However, most of the existing solutions target proprietary platforms, which limits access to ray tracing for the general public. In this project, we propose a platform-agnostic ray tracing framework based on the general programming capability found in commodity graphics processing units (GPUs). The framework supports six types of customizable user programs, including custom geometries, shading, and lighting. The framework also provides developers with utilities such as a built-in acceleration structure, secondary ray generation, and material instancing. We have demonstrated that our framework is able to fulfill the common functional requirements found in existing commercial products without their platform limitations. We also identified tradeoffs in developing a flexible, cross-platform ray tracing framework running on GPUs. Our results provide insights for the future development of similar ray tracing frameworks.
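
As a CPU-only illustration of the "user program" idea described above, the sketch below traces rays against a single sphere and calls a user-supplied closest-hit function to shade each hit. The real framework runs on commodity GPUs; the function names and scene here are hypothetical.

```python
# Illustrative pluggable closest-hit "program" in a tiny CPU ray tracer.
import numpy as np

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t, or None if the ray misses the sphere."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c              # direction is assumed normalized (a == 1)
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def lambert_shader(hit_point, normal):
    """User-supplied closest-hit program: simple diffuse lighting."""
    light_dir = np.array([0.0, 0.0, 1.0])   # light coming from the camera direction
    return max(np.dot(normal, light_dir), 0.0) * np.array([1.0, 0.3, 0.3])

def trace(origin, direction, closest_hit=lambert_shader):
    center, radius = np.array([0.0, 0.0, -3.0]), 1.0
    t = hit_sphere(origin, direction, center, radius)
    if t is None:
        return np.zeros(3)                  # miss "program": background color
    p = origin + t * direction
    return closest_hit(p, (p - center) / radius)

print(trace(np.zeros(3), np.array([0.0, 0.0, -1.0])))   # shaded hit color
```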

Friday, August 6

Benjamin D. Pittman

Chair: Dr. Munehiro Fukuda
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; Online
Project: Multi-Agent Spatial Simulation (MASS) in a Multi-GPU Environment with NVIDIA CUDA

Agent-based models are used to simulate real-world random and pseudo-random systems. These models leverage many autonomous agents and require high computation capabilities. The Multi-Agent Spatial Simulation (MASS) library grew from this need and is now implemented independently in Java, C++, and CUDA. MASS implements Agents and Places that act among and with each other, where Places remain resident and Agents may migrate to other Places.

The cloud computing infrastructure, its ever-expanding compute capabilities, and the increasing demands on these resources to solve challenging problems demand that MASS CUDA grow to leverage multiple graphics processing units (GPUs). This research accomplishes this by refactoring the existing code base to initialize models on multiple GPU devices using parallel computing algorithms and strategies. These improvements include (1) ghost spacing in the globally distributed Place array, (2) GPU-direct communication, and (3) an application-specific garbage collection and provisioning scheme for Agents.
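
As a conceptual, NumPy-only illustration of the ghost-spacing idea in (1), the sketch below pads each device's slice of a globally distributed Place array with copies of its neighbor's boundary elements, so that local neighborhood reads never have to cross devices. This is not MASS CUDA code; the names and sizes are illustrative.

```python
# Conceptual ghost-space sketch for a 1D distributed Place array.
import numpy as np

places = np.arange(12)                      # the globally distributed Place array
halves = np.array_split(places, 2)          # pretend each half lives on its own GPU

def with_ghosts(own, left_neighbor, right_neighbor):
    """Pad a partition with copies of its neighbors' boundary elements."""
    left = left_neighbor[-1:] if left_neighbor is not None else own[:1]
    right = right_neighbor[:1] if right_neighbor is not None else own[-1:]
    return np.concatenate([left, own, right])

dev0 = with_ghosts(halves[0], None, halves[1])
dev1 = with_ghosts(halves[1], halves[0], None)
print(dev0)   # [ 0  0  1  2  3  4  5  6]  (last element mirrors dev1's first Place)
print(dev1)   # [ 5  6  7  8  9 10 11 11]  (first element mirrors dev0's last Place)
```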

This MASS CUDA implementation allows larger model sizes and faster processing times for simulations larger than 10,000 Place objects, and more than doubles the model sizes supported by the previous implementation. This research also implemented a simplified BrainGrid neural signal simulation, which is not yet finished. The simplified BrainGrid implementation requires Agents to be added to the simulation throughout processing, so this research implemented Agent termination and refactored Agent spawning to facilitate this. The dynamic Agents implementation allows for a greater number of simulations using the MASS CUDA library.

MASS CUDA Multiple GPU (MGPU) is an efficient choice for large simulation sizes. Future research should test different data apportioning schemes over devices; test sharing and splitting computations amongst host and devices; and test related strategies for coalescing memory in the system and with like computations.


Thursday, August 12

Qiaoyu (Iris) Zheng

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; Online
Project: Semi-Automatically Style Landscape Site Plans With Machine Learning

A landscape site plan is a graphic representation that shows the arrangement of landscape elements such as trees, buildings, ground, grassland, and water from an aerial view. Restyling a site plan includes adjusting the colors, textures, and other artistic customizations for the various elements in the plan. This is a common task for contemporary landscape architects, who spend most of their time working on computers. Our research develops a web application that semi-automatically restyles landscape site plans based on machine learning. The system a) receives the user's uploaded site plan image; b) recognizes the semantic meaning of the image; c) visualizes the recognition results; d) provides an interface for modifying and customizing the detected results and the style settings; and e) styles the uploaded image based on the detection results and the user's modifications and customizations. This report presents our recent approach of combining two machine learning models to detect elements on a landscape site plan: a Mask R-CNN model to detect single elements such as trees and buildings, and a U-Net model to detect surface elements such as grassland, ground, and water. This paper also describes the training process and the app's interface design. Additional tests are conducted to further evaluate the application's performance, including IoU (intersection over union) scores on single elements, IoU scores on surface elements, time consumption, user-experience satisfaction survey scores, and restyle quality satisfaction survey scores.
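
For reference, the sketch below shows one common way to compute the IoU (intersection over union) metric mentioned above on binary masks such as those produced by segmentation models like Mask R-CNN and U-Net; the mask shapes and values are illustrative.

```python
# IoU between a predicted mask and a ground-truth mask.
import numpy as np

def iou(pred_mask, true_mask):
    """IoU between two boolean masks of the same shape."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return intersection / union if union else 1.0

pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True   # 4 predicted pixels
true = np.zeros((4, 4), dtype=bool); true[1:4, 1:4] = True   # 9 ground-truth pixels
print(iou(pred, true))    # 4 / 9 ≈ 0.44
```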

Eric Hulderson

Chair: Dr. Brent Lagesse
Candidate: Master of Science in Cybersecurity Engineering

1:15 P.M.; Online
Thesis: Adversarial Example Resistant Hyperparameters In Deep Learning Networks

Deep learning has continued to make significant progress across many modalities of machine learning, and life-critical systems and applications are increasingly its beneficiaries. Companies from the IT industry such as Amazon and Google, as well as auto manufacturers like Mercedes and Tesla, are heavily utilizing deep learning to enrich autonomous vehicle capabilities. Further, deep learning has made its way into the healthcare industry, with applications in imaging analytics and diagnostics, electronic health records analysis, and the development of precision medication. With deep learning at the center of these life-critical applications, safety and security must be a foundational component of the technology as it moves towards mainstream adoption. Research has shown that adversarial training can improve the robustness of deep learning models, but at the cost of accuracy. Moreover, it has been shown that hyperparameter selection can influence the resiliency of neural networks, although data supporting this is limited. In this work, we expand the research on intelligent hyperparameter selection by incorporating the deep learning architectures ResNet50 and VGG16 while also examining the relationship between robustness and accuracy.
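
As background, the sketch below shows the mechanics of the fast gradient sign method (FGSM), one standard way of generating adversarial examples, applied to a toy logistic-regression model in NumPy. The thesis's actual experiments involve ResNet50 and VGG16; this toy model and its parameters are assumptions for illustration only.

```python
# FGSM on a toy logistic-regression "network": perturb the input in the
# direction that most increases the loss, then observe the confidence drop.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    """Return x perturbed by eps in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w            # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0   # toy model weights (illustrative)
x, y = np.array([0.5, 0.2]), 1.0    # input and its true label
x_adv = fgsm(x, y, w, b)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))   # confidence drops after the attack
```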

Nuoya (Nora) Wang

Chair: Dr. Dong Si
Candidate: Master of Science in Computer Science & Software Engineering

3:30 P.M.; Online
Project: Developing a health therapy chatbot using Natural Language Processing

Family caregivers experience a significant caregiving burden that negatively impacts their physical and mental health. The current support system for family caregivers is fragmented, with limited access and high cost. In this project, we aim to build a practical health therapy dialog system based on Hybrid Code Networks (HCNs) and Problem-Solving Therapy (PST) to help caregivers schedule tasks, manage emotional needs, and alleviate their symptoms. We developed a Wizard of Oz system to conduct therapy sessions for data collection. We processed the collected data with entity annotation, utterance augmentation, and embedding. We created a new domain-specific PST module within HCNs, then trained and compared our system with the original HCNs and other end-to-end task-oriented dialogue systems. Results show that our system achieved high dialogue accuracy and outperformed the other systems. Finally, we designed a web interface and API that allow users to interact with the chatbot we built.
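
As a hypothetical illustration of the entity-annotation step mentioned above, the sketch below replaces domain entities in a caregiver utterance with placeholder tokens before the utterance would be embedded. The entity lists and regular expressions are invented for illustration and are not the project's actual annotation scheme.

```python
# Hypothetical entity annotation (delexicalization) for caregiver utterances.
import re

ENTITY_PATTERNS = {
    "<task>": re.compile(r"\b(medication|doctor appointment|grocery shopping)\b", re.I),
    "<time>": re.compile(r"\b(\d{1,2}(:\d{2})?\s?(am|pm))\b", re.I),
}

def annotate(utterance):
    """Replace recognized entities with placeholder tokens; return both pieces."""
    entities = {}
    for token, pattern in ENTITY_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            entities[token] = match.group(0)
            utterance = pattern.sub(token, utterance)
    return utterance, entities

print(annotate("I need to schedule a doctor appointment at 3 pm"))
# ('I need to schedule a <task> at <time>', {'<task>': 'doctor appointment', '<time>': '3 pm'})
```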


Friday, August 13

Jessica Frierson

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering

1:15 P.M.; Online
Project: A Textual Data Extraction Application to Generate Events from Mobile Camera Images

Pictoevents is an application inspired by the complexity found within parenting plan schedules. Because of the sensitivity of such schedules, data is not openly available, so this project cannot directly accommodate parenting plan schedules. Instead, we propose to solve the problem of manually entering dates for an event as an entry point to addressing parenting plans. Most calendar applications require the user to manually add an event, apart from peripheral tools such as voice commands that can create an event. To simplify the process of creating a new event on a mobile device's calendar, this project aims to create a simple application that can create a calendar event with minimal user action. To accomplish this, computer vision techniques are applied through APIs to process an image taken by the user and extract text via optical character recognition. The extracted text is parsed using regular expressions and added to a template data structure that helps populate the data needed to create an event on an Android mobile device. A title for the event is also generated using natural language processing to identify the most relevant words. This is achieved using a pipeline of techniques that identifies subjects, verbs, and nouns, along with noun chunking, to find an event title. The result of this project is an application that can identify an event from images and conveniently add it to the device's calendar.
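
As a simplified illustration of the parsing step described above, the sketch below pulls a date and time out of OCR'd text with regular expressions and fills a small event template. The patterns and template fields are assumptions, not the app's exact implementation.

```python
# Simplified date/time extraction from OCR'd text into an event template.
import re

DATE_RE = re.compile(r"\b(\d{1,2}/\d{1,2}/\d{2,4})\b")
TIME_RE = re.compile(r"\b(\d{1,2}:\d{2}\s?(?:AM|PM)?)\b", re.I)

def extract_event(ocr_text, default_title="Scanned event"):
    """Build a minimal event dict from OCR output; missing fields stay None."""
    date = DATE_RE.search(ocr_text)
    time = TIME_RE.search(ocr_text)
    return {
        "title": default_title,
        "date": date.group(1) if date else None,
        "time": time.group(1) if time else None,
    }

print(extract_event("Parent-teacher conference 10/14/2021 at 5:30 PM, Room 12"))
# {'title': 'Scanned event', 'date': '10/14/2021', 'time': '5:30 PM'}
```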


Questions: Please email cssgrad@uw.edu