Thesis/Project Final Exam Schedule

 


PLEASE JOIN US AS THE FOLLOWING CANDIDATES PRESENT THEIR CULMINATING WORK.

Autumn 2021

For a link to attend a candidate's online defense, please contact our office at stemgrad@uw.edu.

Thursday, December 2

Apoorva Bhatnagar

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering

1:15 P.M.; Online
Project: GIF Image Descriptor

Social media platforms feature short animations known as GIFs, but these are inaccessible to people with vision impairments, who rely heavily on accessibility tools such as screen readers to understand the context of an image. In this paper, we present a project that aims to improve the accessibility of web pages by auto-injecting machine-generated but accurate commentaries of GIF image content into the alt text tag of the image on the web. We use machine learning and a web browser extension to create a software system, the “GIF Image Descriptor,” to solve an often-overlooked issue: web accessibility for non-static images. The system uses a CNN-LSTM architecture to generate descriptive texts for GIF images, because CNNs excel at encoding images and videos while LSTMs work well for sequence generation. We examine two very popular model architectures, 2D CNNs and 3D CNNs, for our first neural network, and settle on a hybrid 2D CNN model that takes multiple frames of a single image to create vector embeddings, which then become the input to our LSTM neural network. This approach yields substantial improvements in accuracy metrics such as BLEU (7% improvement), ROUGE (6% improvement), and METEOR (10% improvement) when compared to the existing TGIF model that generates captions for GIF images. A usability study performed with a focus group of individuals with and without vision disabilities further demonstrates that the system performs well for its target audience.
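The abstract's hybrid 2D CNN approach, where per-frame features are fused into a single clip-level embedding before the LSTM decoder, might be sketched as follows. This is a toy illustration only: the mean-pooling fusion, the frame count, and the 512-dimensional feature size are assumptions, not the project's actual design.

```python
import numpy as np

def pool_frame_embeddings(frame_features):
    """Collapse per-frame 2D-CNN features (F x D) into one clip-level
    embedding (D,) by mean pooling, so the multi-frame GIF can seed a
    sequence decoder (e.g. an LSTM) with a single vector."""
    frame_features = np.asarray(frame_features, dtype=float)
    return frame_features.mean(axis=0)

# Toy example: 8 frames, each encoded as a 512-dim feature vector.
frame_feats = np.ones((8, 512))
clip_embedding = pool_frame_embeddings(frame_feats)
print(clip_embedding.shape)  # (512,)
```

In a real pipeline the per-frame features would come from a CNN backbone, and the pooled vector would initialize the LSTM's hidden state or serve as its first input token.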

Friday, December 3

Pinki Patel

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; Online
Project: Yoga Pose Correction and Classification

Yoga is an ancient physical science that originated in India over 5,000 years ago and has been popularized in the West due to its various benefits. Many people practice yoga on their own by watching TV or videos; however, it is not easy for novices to find the incorrect parts of their yoga poses by themselves. Therefore, in this paper we introduce a web application, YogaAndPoses, which provides two-fold benefits to users: Yoga Pose Correction and Yoga Pose Classification.

The goal of Yoga Pose Correction is to use pose estimation technology to let a person practice different yoga poses in front of their webcam and receive feedback on those poses. Since pose estimation is a well-explored area in computer vision and deep learning, we leverage this public knowledge in our application. Our pose correction uses CMU's open-source pose estimator, OpenPose, as a pre-processing module to detect the joints of the human body and extract their keypoints. We use these keypoints to calculate the angle difference between the yoga pose of an instructor/expert and that of a novice, and suggest a correction if the angle difference is larger than a given threshold. For evaluation, we applied Yoga Pose Correction to three people and confirmed that it found the incorrect parts of their yoga poses.
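The angle comparison described above can be sketched with plain geometry on the extracted keypoints. This is a minimal illustration under assumptions: the function names, the 15-degree threshold, and the toy coordinates are hypothetical, and a real system would consume OpenPose's keypoint output rather than hand-written points.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (in degrees) formed by keypoints a-b-c,
    e.g. the elbow angle from shoulder, elbow, and wrist coordinates."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def needs_correction(expert_kpts, novice_kpts, threshold=15.0):
    """Flag the joint if the expert/novice angles differ by more than
    `threshold` degrees (threshold value is an assumption here)."""
    diff = abs(joint_angle(*expert_kpts) - joint_angle(*novice_kpts))
    return bool(diff > threshold), float(diff)

# Toy 2D keypoints: expert elbow at 90 degrees, novice at ~120 degrees.
flag, diff = needs_correction(
    [(0, 1), (0, 0), (1, 0)],        # expert: right-angle joint
    [(0, 1), (0, 0), (1, -0.577)],   # novice: joint opened ~30 degrees more
)
print(flag, round(diff))  # True 30
```

The same comparison would be repeated per joint, and the joints whose differences exceed the threshold are the ones reported back to the user.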

Yoga Pose Classification classifies 82 yoga poses, a challenging task given the wide variety of viewpoints from which poses are captured. It allows a user to upload an image of a yoga pose, or capture one with the web camera, and classifies the pose using the pretrained DenseNet-201 model. Before sending an image to DenseNet-201, we perform various pre-processing steps such as resizing, augmentation, segmentation, and converting the image to RGB if it is grayscale. We obtained 64% top-1 accuracy and 78% top-3 accuracy over the 82 yoga poses, whereas prior state-of-the-art yoga pose classifiers achieved similar accuracy on at most 20 poses.
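Two of the pre-processing steps mentioned above, grayscale-to-RGB conversion and resizing to the model's input size, can be sketched with numpy alone. This is an illustrative stand-in: the nearest-neighbour resize and the 224x224 target (DenseNet-201's standard input size) are assumptions, and a real pipeline would use a library resize plus the model's own preprocessing.

```python
import numpy as np

def to_rgb(img):
    """If `img` is single-channel grayscale (H x W), stack it into three
    identical channels so a pretrained RGB model accepts it."""
    img = np.asarray(img)
    if img.ndim == 2:
        img = np.stack([img] * 3, axis=-1)
    return img

def resize_nearest(img, size):
    """Toy nearest-neighbour resize to a square `size` x `size` image;
    real code would use an image library's resize instead."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

gray = np.zeros((100, 80))           # a grayscale input image
prepped = resize_nearest(to_rgb(gray), 224)
print(prepped.shape)  # (224, 224, 3)
```

After these steps the image tensor has the shape the pretrained network expects, and augmentation/segmentation would be applied in the same pipeline before inference or training.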


Monday, December 6

Sukriti Tiwari

Chair: Dr. David Socha
Candidate: Master of Science in Computer Science & Software Engineering

3:30 P.M.; Online
Project: Art Website: UX Design and Development

The aim of this undertaking is to design and develop an art website for an individual artist, the author's mother, Mrs. Kavita Tiwari. The website will act as an online portfolio, helping showcase her skills and advertise her offerings. Artists often find it difficult to market their products, as they are engrossed in perfecting their craft and lack the necessary marketing prowess. Creating a website is a concrete step toward establishing an online presence for streamlined sales.

The methods chosen for reaching the goal of a full-fledged artist website include determining the scope of the project through semi-structured user interviews and competitive analysis of existing websites; identifying and segmenting the types of users of the website; understanding those users' needs; selecting the features to be implemented; designing and creating a prototype using Figma; and finally developing and testing the website. The author wishes to use this opportunity to help her mother with sales as well as to learn full-stack web development.

Tuesday, December 7

Tejaswini Madam 

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering

8:45 A.M.; Online
Project: MealFit - Health Monitoring Mobile Application

Obesity is one of the major problems many people face today, and the last few decades have seen a rapid increase in dietary ailments caused by unhealthy eating routines. Mobile dietary assessment systems that record meal consumption and physical activity can be very handy and can improve dietary habits. This paper proposes a novel system that allows users to capture images of the meals they consume and upload them into a mobile application. The application uses a pre-trained convolutional neural network model, MobileNet, to classify the meal images. To improve accuracy, pre-processing is performed using the Keras image data generator, which takes a batch of images and applies random transformations to each image in the batch; in this way, multiple variants of each image are generated for the training process. Users can monitor consumption details via the mobile application and receive regular alerts on the percentage of calories consumed relative to the assigned daily calorie limit. Additional features for managing daily calorie consumption, such as access to different levels of exercises, nutrition tips, and calendar modules, are included to motivate users. Experimental results demonstrate that the application recognizes food items accurately, with over 87% validation accuracy. According to our usability studies, the application is user friendly and generates meaningful notifications, giving users clear insight into healthy dieting and guiding their daily consumption to improve body health and wellness.
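The random-transformation step that the Keras image data generator performs can be illustrated with a small numpy stand-in: each pass over an image yields a slightly different copy, so the classifier never sees exactly the same training sample twice. The flip-and-shift transformations below are assumptions chosen for illustration; the actual generator supports rotations, zooms, rescaling, and more.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, rng):
    """Toy stand-in for Keras-style augmentation: randomly flip the image
    horizontally and shift it by a few pixels, producing a new variant."""
    img = np.asarray(img)
    if rng.random() < 0.5:
        img = img[:, ::-1]            # random horizontal flip
    shift = int(rng.integers(-2, 3))  # small random horizontal shift
    return np.roll(img, shift, axis=1)

# A "batch" of two tiny 4x4 images; each gets its own random transform.
batch = np.arange(2 * 4 * 4).reshape(2, 4, 4)
augmented = np.stack([augment(im, rng) for im in batch])
print(augmented.shape)  # (2, 4, 4)
```

Applied across epochs, this effectively multiplies the training set without collecting new meal photos, which is the purpose the abstract describes.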


Thursday, December 9

Nhu Ly

Chair: Dr. Marc Dupuis
Candidate: Master of Science in Cybersecurity Engineering

11:00 A.M.; Online
Project: The impact of cultural factors, trait affect, and risk perception on the spread of COVID-19 misinformation on social media

In the information age, social media has created a favorable environment for spreading misinformation. This has become a serious concern for public health, especially during the unpredictable developments of COVID-19 and its variants. Misinformation related to COVID-19 has spread faster than ever on social media, emerging in numerous forms, from claims about its origin and treatment methods to claims about vaccines. This leads social media users to make dangerous health decisions and impedes authorities' efforts to control the pandemic. Despite warnings from accredited organizations about the risks of misinformation, a majority of social media users still trust such misinformation and spread it without verifying the reliability of the content. To tackle the spread of COVID-19 misinformation, this paper explores what factors influence users' belief in such misinformation and their tendency to share it on social media. We conduct a survey to assess users' reactions and behaviors toward COVID-19 misinformation; memes and additional measures are included in the survey to evaluate the roles of culture, trait affect, and risk perception in the spread of misinformation.

 

Friday, December 10

Carla Colaianne 

Chair: Dr. Erika Parsons
Candidate: Master of Science in Computer Science & Software Engineering

1:15 P.M.; Online
Project: Developing Autonomous Mining Through a Simulation and Reinforcement Learning

Fully autonomous mining could provide many opportunities to mine materials safely and efficiently on Earth and in extraterrestrial environments. Because of the complexity of autonomous mining, a method for testing various solutions rapidly and safely is beneficial, which motivates the development of simulations that accurately represent a real excavator and the environment it interacts with. In addition to simulation, reinforcement learning is a promising approach for developing autonomous algorithms for the complex task of mining material, as it offers the key advantage of being able to interact with and learn from a changing environment. This project creates a simulation that models a robotic excavator and its interaction with lunar simulant soil. It then uses reinforcement learning to create an autonomous mining algorithm capable of reaching a certain depth in the soil and producing motions that can overcome any soil resistance the excavator may face.
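The learn-by-interaction loop that makes reinforcement learning attractive here can be sketched with a deliberately tiny example: tabular Q-learning on a one-dimensional digging task, where each step either digs one unit deeper or retracts, and the episode ends at a target depth. Everything in this sketch (the state/action space, rewards, and hyperparameters) is an assumption for illustration; the project's actual excavator simulation and learning algorithm are far richer.

```python
import numpy as np

# Toy digging environment: depths 0..TARGET; actions 0=retract, 1=dig.
rng = np.random.default_rng(1)
TARGET, ACTIONS = 5, 2
Q = np.zeros((TARGET + 1, ACTIONS))     # Q-value table: state x action
alpha, gamma, eps = 0.5, 0.9, 0.1       # learning rate, discount, exploration

for episode in range(200):
    depth = 0
    while depth < TARGET:
        # epsilon-greedy action selection
        a = int(rng.integers(ACTIONS)) if rng.random() < eps else int(Q[depth].argmax())
        nxt = min(depth + 1, TARGET) if a == 1 else max(depth - 1, 0)
        r = 1.0 if nxt == TARGET else -0.1   # reward only at target depth
        # standard Q-learning update
        Q[depth, a] += alpha * (r + gamma * Q[nxt].max() - Q[depth, a])
        depth = nxt

# Greedy policy after training: dig at every depth above the target.
policy = Q.argmax(axis=1)[:TARGET]
print(policy)
```

The same structure (observe state, act, receive reward, update a value estimate) carries over to the continuous, physics-based excavator setting, with the tabular Q replaced by a function approximator.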


Questions: Please email cssgrad@uw.edu