Thesis/Project Final Exam Schedule


Final Examination Schedule

PLEASE JOIN US AS THE FOLLOWING CANDIDATES PRESENT THEIR CULMINATING WORK.

Winter 2020

Monday, March 2

Neha Kumari

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering

1:15 P.M.; DISC 464
Piracy and Malware Detection in Android Applications

Over the last decade, there has been a manifold growth in the number of smartphone users, which has resulted in an increase in demand for feature-rich smartphone applications. Android is one of the major mobile platforms and has been widely adopted. Amidst the exploding growth in the number of applications, software piracy and malware incidents are major concerns that must be addressed in order to build a healthy marketplace. Manual detection of pirated and malware-infected applications is error-prone and does not scale with the rising demand. Thus, there is an urgent need for a robust system that can automate the analysis of Android applications. In this work, we compare and contrast two disparate malware detection systems. To do so, we first re-implement Juxtapp, a system developed at UC Berkeley that performs code similarity analysis among Android applications to discover instances of code reuse based on the underlying opcodes. In contrast to Juxtapp, we also implement a permission-based malware detection system that identifies malware-infected applications using the permissions an application requests. We evaluate the performance of both approaches on the AndroZoo and KuafuDet datasets, consisting of a total of 20,047 Android applications, including original, pirated, and malware-infected applications. Our results show that Juxtapp can be used effectively to identify software piracy and, in conjunction with a permission-based malware detection system, to accurately identify malware-infected applications.
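
As a rough, hypothetical sketch of how a permission-based detector of this kind might look (the toy data, permission names, and classifier choice below are illustrative assumptions, not the candidate's implementation):

```python
# Hypothetical sketch of a permission-based malware classifier. In practice
# the permission lists would be pulled from each APK's AndroidManifest.xml.
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy permission sets for a few apps (illustrative data only).
apps = [
    ["INTERNET", "READ_CONTACTS", "SEND_SMS"],        # malware-like
    ["INTERNET", "ACCESS_FINE_LOCATION"],             # benign-like
    ["SEND_SMS", "RECEIVE_SMS", "READ_PHONE_STATE"],  # malware-like
    ["INTERNET", "CAMERA"],                           # benign-like
]
labels = [1, 0, 1, 0]  # 1 = malware, 0 = benign

# One-hot encode each app's permission set into a fixed-length feature vector.
encoder = MultiLabelBinarizer()
X = encoder.fit_transform(apps)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Classify a new, unseen app by the permissions it requests.
new_app = encoder.transform([["SEND_SMS", "READ_CONTACTS", "INTERNET"]])
print("malware" if clf.predict(new_app)[0] else "benign")
```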

Friday, March 6

Kaihua Hu

Chair: Dr. Kelvin Sung
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; DISC 464
Real-Time Ray Tracing in Unity

Ray tracing is an important rendering algorithm that naturally supports advanced effects such as realistic reflections, refractions, and shadows. It can synthesize images with striking realism, comparable to photographs or videos captured in the real world. However, due to intensive computation requirements and the limitations of traditional hardware, ray-traced image generation has largely been limited to off-line batch processing. Recently, with increasing hardware performance, the latest Graphics Processing Units (GPUs) have become capable of meeting the computational requirements of ray tracing in real time.

In this project, we implemented a real-time ray tracing system based on the Unity game engine platform. Our system delivers typical rendering features via a ray tracing pipeline, including camera manipulation, anti-aliasing, environment mapping, and support for customized mesh models, materials, light sources, Phong illumination, reflection, and shadows. Our system also provides a simple user interface (UI) that lets Unity users configure the rendering process in real time. For example, a user can define a skybox, increase the number of anti-aliasing samples, and even modify material properties and the number of generations of reflection rays, all while the ray tracing system is running.
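
For readers unfamiliar with the anti-aliasing setting mentioned above, a toy sketch of jittered supersampling follows; the project itself is built in Unity, so trace_ray and the camera mapping here are hypothetical Python stand-ins:

```python
# Toy sketch of supersampled anti-aliasing: average several randomly
# jittered rays per pixel. More samples (user-configurable in the project)
# means smoother edges at higher cost.
import random

def trace_ray(u, v):
    """Placeholder shader: returns a color for normalized image coords."""
    return (u, v, 0.5)

def render_pixel(px, py, width, height, samples=4):
    r = g = b = 0.0
    for _ in range(samples):
        # Jitter the sample position within the pixel's footprint.
        u = (px + random.random()) / width
        v = (py + random.random()) / height
        cr, cg, cb = trace_ray(u, v)
        r, g, b = r + cr, g + cg, b + cb
    return (r / samples, g / samples, b / samples)

print(render_pixel(10, 20, 640, 480, samples=8))
```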

We carefully examined the performance of our real-time ray tracing system and identified its bottlenecks. Somewhat surprisingly, the results indicated that communication between the CPU and GPU was not a limiting factor. Instead, as expected in all ray tracing systems, the number of ray intersection computations was the main issue; the vast proportion of frame time was spent in the GPU computing intersections. This bottleneck was relieved with a KD-tree spatial acceleration structure. With the acceleration structure in place, the bottleneck shifted to KD-tree construction on the CPU, which we addressed by applying a lazy updating strategy: the tree is rebuilt only when the scene changes. With these optimizations, based on our benchmark scenes, the resulting system achieved over a 100-fold speedup, going from rendering 144 triangles to 21,500 triangles in real time.
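
A minimal Python sketch of the lazy updating idea, assuming a toy median-split builder; every name here is a hypothetical stand-in for the project's Unity-side code:

```python
# Lazy KD-tree update: defer (re)construction until the tree is actually
# queried, and rebuild only if geometry changed since the last build.
import random

def build_kdtree(points, depth=0):
    """Toy median-split KD-tree over 3D points (stand-ins for centroids)."""
    if len(points) <= 2:
        return {"leaf": points}
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "axis": axis,
        "split": points[mid][axis],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid:], depth + 1),
    }

class LazyKDTree:
    def __init__(self, points):
        self.points = points
        self.dirty = True      # construction deferred until first use
        self.root = None

    def mark_dirty(self):
        """Called whenever scene geometry moves."""
        self.dirty = True

    def get(self):
        if self.dirty:         # at most one rebuild per change
            self.root = build_kdtree(self.points)
            self.dirty = False
        return self.root

scene = [(random.random(), random.random(), random.random()) for _ in range(100)]
tree = LazyKDTree(scene)
root = tree.get()              # built on demand
root_again = tree.get()        # cached: static frames pay no rebuild cost
```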

Monday, March 9

Rajasri Nanduri

Chair: Dr. David Socha
Candidate: Master of Science in Computer Science & Software Engineering

5:00 P.M.; DISC 464
Studying How User Research Informs Incremental Iterative Development: A Case Study of a Location-Based Services Mobile Application

Incremental iterative development has been gaining acceptance in the mainstream software development community. Such development, in conjunction with user research, can yield products with higher user acceptance and usability. The goal of this project is to study a development process of regular incremental iterations grounded in user research. To do this, I chose to develop a location-based mobile application. There are applications that provide services using device location, but most of them focus on a single vertical, often limited to certain content types such as weather, hotels, or restaurants. This provided the project's initial scope: to develop a news vertical using maps. Following an incremental iterative development process with regular user feedback sessions, the final application stands apart from other location-based applications by catering to a wider range of content for a given location. This is a direct result of the user research conducted, which gave the application the direction users desired. This document reports on how this project helped me understand the impact of user research and incremental iterative development on a product. It also describes ways the application could be elaborated further.

Wednesday, March 11

Praveena Avula

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; DISC 464
Learning and Analysis Platform for Melodic Transcription in Language Documentation and Application (MeTILDA)

Language endangerment is one of the most urgent problems facing the humanities, with roughly one language disappearing every two weeks. Blackfoot is one such language that needs to be documented, analyzed, and preserved. Blackfoot is challenging to learn and teach because it is a pitch accent language whose words take on different meanings when pitch changes. Existing tools such as Praat can capture the nuances of pitch changes by creating pitch graphs, but they require time-consuming work across multiple applications. To close this gap, earlier work in Dr. Min Chen's group developed an application called MeTILDA (Melodic Transcription in Language Documentation and Application). It automates the creation of pitch graphs and enables linguistics researchers to analyze and compare multiple Blackfoot words. It also benefits learners of the language by letting them record their own pronunciations of words and compare them with those of native speakers. The goal of this project is to scale, enhance, and improve the usability of the initial version of MeTILDA. Specifically, this project develops a dynamic content management system that supports uploading and analyzing any audio file within the system. Pitch art analyses and images of the audio files can also be saved, allowing linguistic researchers to compare analyses of multiple speakers with features such as 'View History' and 'My Files'. Learners can also save their own pronunciations and listen to and share their previous pronunciations with other users. The project also provides security through authentication and authorization features that allow users with different roles to perform specific functions within the application. While the project currently uses Blackfoot as a testbed, it can be applied to other pitch accent languages in the future.
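
As an illustration of the kind of pitch-track extraction that underlies a pitch graph (a sketch only; MeTILDA's actual pipeline is not shown, and the file name is a placeholder):

```python
# Extract and plot a fundamental-frequency (pitch) contour from a recording,
# the raw material of a pitch graph. Requires librosa and matplotlib.
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("blackfoot_word.wav")  # hypothetical recording

# Probabilistic YIN pitch estimation; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

times = librosa.times_like(f0, sr=sr)
plt.plot(times, f0)                          # the pitch contour over time
plt.xlabel("time (s)")
plt.ylabel("f0 (Hz)")
plt.title("Pitch graph (sketch)")
plt.show()
```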

Ruiwen Xing

Chair: Dr. Dong Si (Acting Chair: Clark Olson)
Candidate: Master of Science in Computer Science & Software Engineering

3:30 P.M.; DISC 464
Deep Learning Based CT Image Reconstruction

As a common medical imaging method, Computed Tomography (CT) creates tomographic images from X-ray data acquired around the human body. However, high-quality, adequately sampled X-ray measurement data are not always available, and in such cases the tomographic images created by conventional reconstruction algorithms are noisy or contain unreal artifacts. The goal of our study is to reconstruct high-quality tomographic images from noisy or incomplete scan data, including low-dose, sparse-view, and limited-angle scenarios, by utilizing novel deep learning techniques. In this project, we trained a Generative Adversarial Network (GAN) and used it as a signal prior in the Simultaneous Algebraic Reconstruction Technique (SART) iterative reconstruction algorithm. The GAN includes a self-attention block to model long-range dependencies in the scan data. Compared with CIRCLE GAN, a state-of-the-art denoising cycle GAN, and a conventional mathematical reconstruction algorithm incorporating total variation minimization, our self-attention GAN for CT image reconstruction produces competitive results on limited-angle reconstruction problems.
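
For context, a toy NumPy sketch of a plain SART iteration follows; the system matrix and measurements are random stand-ins, and the project's GAN prior step is only indicated in a comment:

```python
# Plain SART: repeatedly correct the image estimate by back-projecting the
# normalized projection error. Toy sizes and random data, not real CT.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_rays = 64, 128
A = rng.random((n_rays, n_pixels))          # toy system (projection) matrix
x_true = rng.random(n_pixels)
b = A @ x_true                              # simulated sinogram measurements

row_sums = A.sum(axis=1)                    # per-ray normalization
col_sums = A.sum(axis=0)                    # per-pixel normalization
x = np.zeros(n_pixels)
lam = 0.5                                   # relaxation factor

for _ in range(200):
    residual = (b - A @ x) / row_sums       # normalized projection error
    x += lam * (A.T @ residual) / col_sums  # SART update step
    # The project interleaves steps like this with a learned GAN prior that
    # pulls x toward the manifold of realistic CT images (not shown).

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```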

Thursday, March 12

Safina Chaudhry

Chair: Dr. Dong Si
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; UW1 260
Automated Cognitive Decline Analysis Using Voice Features and Clinical Test Results

The ability to detect cognitive decline at an early stage can help physicians and patients start treatment early and make future treatment plans. It would also help reduce the costs associated with detecting and treating cognitive decline. The aim of this study is to identify the most predictive features, those that give the best results in detecting the disease. We developed an automated system that evaluates speech audio recordings of neuropsychological examinations of 135 subjects, along with their clinical test results, from the Framingham Heart Study (FHS) dataset. We extracted participant voice samples based on the cognitive task assessments performed with them, and a total of 89 acoustic voice features were extracted across all tests. The FHS dataset was divided into three smaller feature sets: neuropsychological test measures (NPT), magnetic resonance imaging (MRI) data, and a combined NPT and acoustic voice feature set. Different numbers of features were used for different subsets of the data to train Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models to detect cognitive decline and to identify the most useful and powerful features. Comparing the results of the two models shows that the combined NPT and acoustic voice feature set gave a good accuracy rate with both. The CNN model performed slightly better than the LSTM model, with an accuracy of 86% versus 83%. Because the difference between the two models' accuracies was small, we used training and testing times as the deciding metric, and on that basis the CNN was selected for future work on this project.
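
A hedged PyTorch sketch contrasting the two model families compared in the study; the layer sizes, sequence length, and framing of the 89 acoustic features as per-frame inputs are assumptions, and the real FHS data cannot be shown:

```python
# Two ways to classify a sequence of acoustic feature vectors: an LSTM that
# summarizes the sequence in its final hidden state, and a 1D CNN that
# convolves over time and pools. Both map to a binary decline/no-decline label.
import torch
import torch.nn as nn

n_features, seq_len, n_classes = 89, 20, 2   # 89 acoustic features per frame

class LSTMClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, seq_len, features)
        _, (h, _) = self.lstm(x)             # final hidden state summarizes
        return self.fc(h[-1])

class CNNClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),         # pool over the time axis
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, seq_len, features)
        z = self.conv(x.transpose(1, 2))     # Conv1d wants (batch, ch, time)
        return self.fc(z.squeeze(-1))

batch = torch.randn(8, seq_len, n_features)  # random stand-in inputs
print(LSTMClassifier()(batch).shape, CNNClassifier()(batch).shape)
```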


Questions: Please email cssgrad@uw.edu