Thesis/Project Final Exam Schedule


Final Examination Schedule

PLEASE JOIN US AS THE FOLLOWING CANDIDATES PRESENT THEIR CULMINATING WORK.

Spring 2019

Monday, May 20

Maxfield Strange

Chair: Dr. Michael Stiber
Candidate: Master of Science in Computer Science & Software Engineering
3:30 P.M.; DISC 464
A Biologically Plausible Mechanism for Phonology Acquisition in Human Infants

Infants go through a serial developmental process in language acquisition, observable as specific linguistic milestones: by around six weeks of age, infants coo; by about six months they begin to babble, entering the reduplicated babbling stage at about eight months and non-reduplicated babbling at 11 months, with the first word not long after that. There are several theories of how infants accomplish this, but few of them are end-to-end testable, either falling short of covering the full process or being too vague in certain details. This thesis presents a new theory of phonology acquisition, the Evolving Signal Chain (ESC) theory of speech acquisition, which is extensible to lexical acquisition and is end-to-end testable by means of computer simulation. Such a simulation is described, and results from key portions of it are given. Key results include an analysis of deep convolutional autoencoders as a plausible means of learning an unsupervised embedding of raw speech, and the use of that embedding space in learning to reproduce the speech found in it. Results show that continuous natural speech can be embedded on at least two time scales and that these embedding spaces can be used for learning to babble.

Wednesday, May 22

Yang Zhang

Chair: Dr. Kelvin Sung
Candidate: Master of Science in Computer Science & Software Engineering
1:15 P.M.; DISC 464
A Web Framework for Interactive Data Visualization System

The College Affordability Model system began as an idea-prototyping exploration tool and has evolved over the past three years into an actual product for legislators and policymakers. It helps users explore how much students and their families can draw from the most common sources of funds to pay for college, and the potential debt they would assume.

Because of the project's prototyping origins, the system went through a phase of poor modularization and encapsulation, in which developers had to parse sparsely documented and redundant code when maintaining and extending it. This problem was first addressed in the backend, which was re-architected in previous projects. The frontend, however, remains challenging for developers: both its architecture and its UI need to be replaced. This project aims to design and develop a web framework for interactive data visualization systems, and to use that framework to revamp the front-end architecture of the College Affordability Model project, improving both software development efficiency and the front-end user experience of the website.

By adopting concepts from existing Business Intelligence (BI) systems, the framework models Pages, Visualizations, Data Sources, and Parameters using Object-Oriented Design (OOD). Additionally, the framework supports user-friendly interface extensions that provide more advanced features, including draggable charts, a URL parameter binder, and a spreadsheet-like table editor. The underlying architecture of the framework is based on the MVVM design pattern, which modularizes UI elements into components and provides a Single-Page Application (SPA) user experience. The pub/sub messaging and action/reducer patterns are adopted to simplify communication between loosely coupled components and to formalize the messages and state transitions of the application.
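The two patterns named above can be sketched minimally (Python is used here for brevity; the actual framework is an Angular module, and the `Bus`, topic, and action names below are illustrative, not the framework's API). A pub/sub bus decouples components, while a pure reducer function formalizes every state transition:

```python
from collections import defaultdict

class Bus:
    """Minimal publish/subscribe bus: components never call each other."""
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, topic, fn):
        self.subs[topic].append(fn)
    def publish(self, topic, payload):
        for fn in self.subs[topic]:
            fn(payload)

def reducer(state, action):
    # Pure function: (state, action) -> new state; never mutates in place.
    if action["type"] == "SET_PARAMETER":
        return {**state, action["name"]: action["value"]}
    return state

bus = Bus()
state = {"tuition": 10000}

def on_action(action):
    global state
    state = reducer(state, action)

bus.subscribe("actions", on_action)
bus.publish("actions", {"type": "SET_PARAMETER", "name": "grants", "value": 2500})
print(state)  # state now includes the dispatched parameter
```

Because components only ever publish actions and read the resulting state, any visualization can react to a parameter change without knowing which component dispatched it.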

The project implementation followed an agile model for managing the development lifecycle and went through two iterations of development, testing, release, and feedback. The data visualization framework is delivered as an Angular module with unit tests and documentation, and is ESLint compliant. The framework is applied and verified in the College Affordability Model website, where the front-end system architecture was refactored and new features were deployed. In general, the framework simplifies and accelerates the process of building any web-based interactive data visualization system, especially for researchers who do not have web programming experience.

Friday, May 24

Anjal Doshi

Chair: Dr. Kelvin Sung
Candidate: Master of Science in Computer Science & Software Engineering
1:15 P.M.; DISC 464
Learning Modern Computer Graphics using Vulkan API

OpenGL is a graphics Application Programming Interface (API) built for rendering 2D and 3D graphics. It evolved from the IRIS Graphics Library, based on Silicon Graphics Inc.'s Geometry Engine, a special-purpose VLSI processor for computer graphics from the 1980s [1][2]. With the rapid and continuous improvement of the Graphics Processing Unit (GPU) over the past decades, the discrepancies between OpenGL's assumptions about the underlying hardware and the capabilities of the actual modern GPU have become increasingly significant.

Vulkan is a modern, low-level graphics and compute API that provides an up-to-date abstraction of modern graphics hardware. The explicit nature of the API makes the developer responsible for managing hardware resources, resulting in a more efficient approach in which the system driver does not perform resource-management tasks behind the scenes. Unfortunately, the added responsibilities and the associated sophistication of the underlying architecture mean that the Vulkan API can be a significant hurdle for new users. This is especially challenging because it is a new API with little community support.

The primary goals of this project are to design and implement a conceptual layer, in the form of a library, between Vulkan and users' applications, together with a set of demonstration tutorials. The library and the tutorials assist new users of Vulkan in understanding, learning, and working with the API. The project approaches library creation by building multiple simple, typical single-goal applications, and then abstracting their common layers into a unified interface that supports all of the applications. The library is then modularized into components based on well-established computer graphics concepts such as renderable, mesh, camera, texture, and input system. Along with the applications, tutorials are developed to demonstrate how to work with the library. Extensive documentation is also included.

The usability of the library is verified with two moderately large-scale, interactive applications involving typical modern interactive graphical capabilities, including geometric model loading and display, camera controls, multiple user views, an illumination model, planar reflection, shadow-casting light sources, and tessellation. Tutorials included with the library demonstrate its usage by explaining the user-side source code in detail. The internals of the library, such as resource creation, renderable creation, and API classes, are described in the documentation. This library allows users to rapidly prototype Vulkan-based graphics applications in which the low-level, complex details of the Vulkan API are hidden. As users gain familiarity, they can then gradually proceed to working with the Vulkan API directly.
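The layering idea can be sketched in a language-agnostic way (Python is used here for brevity; the class and method names are illustrative, not the library's actual interface): applications talk only to high-level components such as mesh, camera, and renderable, while the library alone issues the low-level API calls.

```python
class Mesh:
    def __init__(self, vertices):
        self.vertices = vertices

class Camera:
    def __init__(self, position):
        self.position = position

class Renderable:
    """High-level component the application sees; hides low-level calls."""
    def __init__(self, mesh):
        self.mesh = mesh
    def draw(self, api_calls):
        # In the real library this would record Vulkan commands; here we
        # just log the conceptual call sequence the abstraction hides.
        api_calls.append(("bind_vertices", len(self.mesh.vertices)))
        api_calls.append(("draw", len(self.mesh.vertices) // 3))

calls = []
Renderable(Mesh(vertices=[0.0] * 9)).draw(calls)
print(calls)  # [('bind_vertices', 9), ('draw', 3)]
```

The application never sees the two low-level calls; it only constructs a `Renderable` and asks it to draw, which is the kind of surface a new user can learn before facing Vulkan directly.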

References:
[1] “History of OpenGL - OpenGL Wiki.” [Online].
[2] J. H. Clark, “The Geometry Engine: A VLSI Geometry System for Graphics,” in Proceedings of the 9th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 1982, pp. 127–133.

Tuesday, May 28

Erin Choate

Chair: Dr. Geethapriya Thamilarasu
Candidate: Master of Science in Cybersecurity Engineering
11:00 A.M.; DISC 464
Blockchain Based Authentication for Transport Layer Security in IoT

In this project, a blockchain-based authentication mechanism for IoT was developed that can be integrated into the traditional transport layer security protocols, Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS). This mechanism is an alternative to the traditional Certificate Authority (CA)-based Public Key Infrastructure (PKI) that relies on X.509 certificates. The developed solution replaces the Certificate Request and Certificate Verify stages of the TLS handshake with state verification via a device registration blockchain. The blockchain state provides the information needed to assert that: a) a device has been properly registered on the blockchain; b) the key/device has not been revoked by its rightful owner due to theft; c) ownership data is consistent between the device and the blockchain. Transparency is inherent in the use of the blockchain as the transaction record: any owner or user can query the blockchain to see the transactions associated with their device and ensure there has been no malicious activity related to ownership or state.
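The three state assertions above can be sketched as follows (a minimal Python sketch; the dict stands in for queried blockchain state, and the device and owner names are invented for illustration):

```python
# Queried blockchain state for registered devices (illustrative stand-in).
ledger = {
    "device-42": {"registered": True, "revoked": False, "owner": "alice"},
}

def verify_device(device_id, claimed_owner, ledger):
    """Replay of checks (a)-(c) performed in place of certificate verify."""
    entry = ledger.get(device_id)
    if entry is None or not entry["registered"]:
        return False, "not registered"          # check (a)
    if entry["revoked"]:
        return False, "key revoked by owner"    # check (b)
    if entry["owner"] != claimed_owner:
        return False, "ownership mismatch"      # check (c)
    return True, "ok"

print(verify_device("device-42", "alice", ledger))    # (True, 'ok')
print(verify_device("device-42", "mallory", ledger))  # fails check (c)
```

In the modified handshake these lookups replace parsing and validating an X.509 chain, which is where the memory savings described below come from.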

Blockchain technology provides a solution to many of the challenges of applying CA-based PKI to IoT, namely scalability and transparency. A barrier to employing traditional TLS/DTLS on IoT devices is the memory and processing requirements associated with the storage and processing of X.509 certificates and CA chains, since device memory is typically limited. The developed mechanism lessens those requirements in order to make the modified TLS/DTLS solution a viable option for devices where minimizing memory utilization is critical. Data collected in experiments on the blockchain-based authentication mechanism show that dynamic memory usage can be reduced by up to 20%, while only minimally increasing application image size and the execution time of the TLS/DTLS handshake.

Wednesday, May 29

Rizky Ramdhani

Chair: Dr. Dong Si
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; DISC 464
Microservice-based Architecture Design and Deployment for Juvenile Idiopathic Arthritis Screening Service

Juvenile Idiopathic Arthritis (JIA) is a chronic rheumatic disease that affects 300,000 children in the United States [1].  If not properly diagnosed and treated, the disease may lead to alteration of joint function and structure [2].  Studies have shown that it is possible to detect JIA using skin surface temperatures of inflamed joints [3].  An online screening service that reads infrared (IR) images of a patient's body to obtain a screening result would dramatically reduce the cost for families to obtain screenings for their children by eliminating potential travel.  The goals of this project are to design a system architecture, introduce development workflows, utilize existing open-source technologies to create and maintain an online JIA screening service, and lastly implement a screening microservice based on region-of-interest IR imaging analysis.

The criteria for the JIA screening service are accuracy, scalability, updatability, and recoverability.  The design that meets these criteria is a 3-tier microservice architecture that divides the screening service into a client layer, a middle service layer, and a persistent storage/data layer.  This architecture allows each component to be developed and updated independently of the other components in the service, and allows individual components to be scaled depending on the bottlenecks of the system.

Applications in the JIA screening service will be containerized with Docker and deployed on a managed Kubernetes cluster on Azure cloud services to meet the scalability and recoverability criteria.  In addition, CosmosDB will be used as a managed NoSQL document store to store and access the data.  Contributors to the project will use DevOps principles, mainly Continuous Integration/Continuous Delivery (CI/CD), to quickly deploy new code to a live service so clients can experience new features and fixes in a timely manner.

The screening microservice was developed using Python and the Flask microservice framework.  A template screening microservice was developed to help team members easily deploy new screening models.  This same template was used to develop a region-of-interest (ROI) based screening algorithm that can serve patients as well as technicians of an ongoing JIA IR study at Seattle Children's Hospital (SCH).  The results of the ROI-based screening microservice are close to those of an existing monolithic MATLAB application currently in use by the technicians at SCH.  Discrepancies between the two applications can be attributed to the underlying BLAS (Basic Linear Algebra Subprograms) implementations of the computation libraries, as well as to differences between the results of a team-developed parser for IS2 (a proprietary Fluke IR thermal imaging file format) and those of the proprietary Fluke-developed SmartView program.
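The "template microservice" idea can be sketched as a template-method pattern (the Flask routing layer is omitted for brevity, and the class names, ROI layout, and temperature threshold below are invented for illustration, not the project's actual model):

```python
class ScreeningTemplate:
    """Fixed workflow: every screening model extracts features, then predicts."""
    def screen(self, ir_image):
        features = self.extract(ir_image)
        return {"score": self.predict(features)}
    def extract(self, ir_image):
        raise NotImplementedError
    def predict(self, features):
        raise NotImplementedError

class RoiScreening(ScreeningTemplate):
    """One concrete model: mean temperature per region of interest."""
    def __init__(self, rois, threshold):
        self.rois, self.threshold = rois, threshold
    def extract(self, ir_image):
        # Mean surface temperature inside each ROI (lists of (row, col)).
        return [sum(ir_image[r][c] for r, c in roi) / len(roi)
                for roi in self.rois]
    def predict(self, features):
        # Count ROIs whose mean temperature exceeds the threshold.
        return sum(f > self.threshold for f in features)

image = [[30.0, 30.5], [34.2, 34.8]]   # toy 2x2 IR frame (deg C)
svc = RoiScreening(rois=[[(0, 0), (0, 1)], [(1, 0), (1, 1)]], threshold=33.0)
print(svc.screen(image))  # {'score': 1}: one warm-looking ROI
```

A new screening model only needs to subclass the template and fill in `extract` and `predict`; the request/response workflow stays identical across microservices.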

Because many of the components of the JIA screening service are still in the design stage, the applications are not yet deployed on the managed Kubernetes cluster.  The hope is that in the future the service can be expanded not only to provide screenings for JIA, but also to screen for other medical conditions using models developed and trained with the anonymized data collected by the screening service.

[1] “Juvenile Arthritis.” [Online]. Available: https://www.rheumatology.org/I-Am-A/Patient-Caregiver/Diseases-Conditions/Juvenile-Arthritis. [Accessed: 28-Jul-2018].
[2] F. C. Arnett et al., “The American Rheumatism Association 1987 revised criteria for the classification of rheumatoid arthritis,” Arthritis Rheum., vol. 31, no. 3, pp. 315–324, Mar. 1988.
[3] R. Lasannen et al., “Thermal imaging in screening of joint inflammation and rheumatoid arthritis in children,” Clin. Phys. Physiol. Meas., vol. 36, no. 2, 2015.

Azra Bandukwala

Chair: Dr. William Erdly
Candidate: Master of Science in Computer Science & Software Engineering
1:15 P.M.; DISC 464
Using an Alexa Skill to administer a vision screening tool: The Convergence Insufficiency Symptoms Survey (CISS)

The EYE (Educating Young Eyes) Center for Children's Vision Learning & Technology is a university-sponsored non-profit organization dedicated to the research, development, and education of technologies. It aims to help increase awareness of the importance of functional vision in children's learning process and to provide apps and tools accessible to all communities. Many projects, including mobile applications, games, web applications, and back-end systems, have been developed by the team at the EYE Center to generate awareness of the existence of near vision issues among children. The CISS Alexa Skill is one of those projects; it aims to ease the process of taking the CISS survey for people suffering from near vision disorders.

The Convergence Insufficiency Symptom Survey (CISS) was developed by the CITT group to determine whether a person is suffering from issues related to near vision.  This well-researched survey comprises fifteen diagnostic questions, from which a summary score is calculated indicating whether near vision issues are present.  The current version of the survey requires a person to read the questions either by themselves or with the assistance of another person reading the questions for them. In both cases, the survey becomes less accessible for the person taking it: either they are dependent on someone else, or they may be facing a near vision disorder that makes reading difficult. This project focuses on developing an audio-automated tool using an Alexa device (an "Alexa skill") that makes the CISS survey readily available and accessible to individuals outside of the traditional optometry (or school nurse) office. The project further focuses on specific design and interaction challenges related to vision screening for young children (grades K-6). Initial functional and usability tests show that administration of the CISS via an Alexa skill is a viable alternative to traditional methods. Further enhancements, such as providing multilingual capabilities, improving the understanding of various dialogue forms (e.g., additional intents), and embedding real-time usability metrics, can further improve the efficiency and efficacy of this approach to screening for near vision problems in young children.
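The scoring step the skill automates can be sketched simply: each spoken answer maps to a frequency score and the fifteen scores are summed against a cutoff. The five-point answer scale below reflects the survey's structure, but the cutoff value shown is a placeholder for illustration, not clinical guidance:

```python
# Answer scale: spoken frequency word -> score contribution.
SCALE = {"never": 0, "infrequently": 1, "sometimes": 2,
         "fairly often": 3, "always": 4}

def ciss_score(answers, cutoff=16):
    """Sum fifteen answer scores; flag if the total meets the cutoff."""
    assert len(answers) == 15, "CISS has fifteen questions"
    total = sum(SCALE[a] for a in answers)
    return total, total >= cutoff

answers = ["sometimes"] * 10 + ["never"] * 5
print(ciss_score(answers))  # (20, True) with the placeholder cutoff
```

An Alexa skill's job is then mostly dialogue management: speaking each question, recognizing one of the five frequency words as an intent, and reporting the summary result.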

Sri Venkata Bhavani Likitha Vijjapu

Chair: Dr. Erika Parsons
Candidate: Master of Science in Computer Science & Software Engineering
3:30 P.M.; DISC 464
Machine Learning based Recommendations to aid Educational Planning and Academic Advising through the Virtual Academic Advisor System

The processes of educational planning and academic advising are critical for supporting student retention and on-time graduation. Faculty-based advising has been widely adopted by community colleges because it results in a better advising experience for students. However, due to a lack of resources, outdated technologies, and a growing, diverse student population, the process increases the workload on already overwhelmed faculty. Technology is slowly being adopted in different areas of the education sector, for instance to provide services such as online classes, registration, and classwork maintenance; despite these advancements, it is not being used for advising. In this thesis, we discuss how machine learning (ML) and other technologies can be used to assist with faculty-based academic advising in higher education planning. The Virtual Academic Advisor (VAA) is an initial software solution to address this problem. Various ML-based techniques, such as supervised learning, natural text processing, collaborative filtering, and sequence classification, are explored to provide new functionality for the VAA. Collaborative filtering is used to recommend study plans based on a set of input parameters. Sequence classification is used to predict suitable college majors based on the coursework a student has completed. In addition, we propose a strategy for generating synthetic data, necessary because it is nearly impossible to collect the copious amounts of real observations needed to properly train ML-based modules. These approaches will be integrated into the VAA system to help faculty with the advising process, saving time for more meaningful conversations with students and giving students the ability to explore different educational paths.
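The collaborative-filtering idea can be sketched with a toy example: recommend courses for a study plan from what similar students completed. The students, courses, and similarity measure below are invented for illustration; the VAA's actual parameters and models may differ.

```python
# Synthetic completed-course sets for three students.
plans = {
    "s1": {"CS101", "CS201", "MATH120"},
    "s2": {"CS101", "CS201", "CS301"},
    "s3": {"ART100", "HIST210"},
}

def jaccard(a, b):
    """Set similarity: shared courses over all courses either student took."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(completed, plans):
    # Score each course a similar student took that this student has not,
    # weighting by how similar that student's record is.
    scores = {}
    for other in plans.values():
        sim = jaccard(completed, other)
        for course in other - completed:
            scores[course] = scores.get(course, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend({"CS101", "MATH120"}, plans))  # CS201 ranks first
```

Real systems replace the Jaccard score with learned similarity over richer features, but the structure (score candidate items by neighbor similarity, rank, recommend) is the same.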

Thursday, May 30

Gayathry Radhakrishna Pillai

Chair: Dr. Yang Peng
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; DISC 464
An Application-level Routing Scheme for Efficient Device-Edge-Cloud Communications

With the advance of the Internet of Things (IoT), numerous controllable IoT devices are brought into our daily lives. These controllable devices, unlike traditional monitoring-oriented devices, not only produce sensor data but also consume data via device-to-device communications. In a typical IoT system with a three-tier architecture, end IoT devices, the edge (gateway), and the cloud may continuously exchange data and control decisions for autonomous and intelligent control. The communications between nodes in such an architecture are usually enabled by message brokers, which can tolerate intermittent network connections and facilitate event-driven asynchronous operations. However, merely connecting message brokers is insufficient to support typical unicast (one-to-one) and broadcast (one-to-many) communications across the whole architecture. Application-level routing support is desired for timely and reliable delivery of heterogeneous data and critical control decisions across IoT devices, edge, and cloud. Existing application-layer protocols for IoT, such as CoAP, AMQP, and MQTT, are incapable of fulfilling this demand because of the two-tier design paradigm embedded in them. To address this limitation, this project proposes a generic application-level routing scheme that supports one-to-one and one-to-many messaging between all connected points in a three-tier IoT system. The implementation of this scheme, utilizing serverless computing functions and publish/subscribe message brokers, also helps separate the application logic of IoT applications from the core communication and efficiently utilizes the computing and storage resources in the edge and cloud. This scheme is anticipated to benefit emerging IoT systems such as smart buildings and smart cities, where heterogeneous IoT devices may continuously interact with each other over edge and cloud. The scheme has been implemented and evaluated on AWS, and the evaluation results demonstrate its efficiency and agility for message routing in various scenarios.
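The unicast/broadcast distinction at the heart of the scheme can be sketched with a toy in-memory broker (the topic layout, group names, and `Broker` interface below are illustrative only; the project's implementation runs over real pub/sub brokers and serverless functions):

```python
from collections import defaultdict

class Broker:
    """Toy pub/sub broker with per-node inboxes and named groups."""
    def __init__(self):
        self.inboxes = defaultdict(list)
        self.groups = defaultdict(set)
    def join(self, node, group):
        self.groups[group].add(node)
    def unicast(self, node, msg):          # one-to-one
        self.inboxes[node].append(msg)
    def broadcast(self, group, msg):       # one-to-many
        for node in self.groups[group]:
            self.inboxes[node].append(msg)

broker = Broker()
for device in ("lamp-1", "lamp-2"):
    broker.join(device, "building-7/lights")

broker.unicast("lamp-1", {"cmd": "dim", "level": 30})   # targets one device
broker.broadcast("building-7/lights", {"cmd": "off"})   # fans out to the group
print(broker.inboxes["lamp-1"], broker.inboxes["lamp-2"])
```

The routing scheme's contribution is making both delivery modes work uniformly across all three tiers, rather than only between a device and its directly connected broker.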

Mitchell Lee

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering
1:15 P.M.; DISC 464
Melodic Transcription in Language Documentation and Application

The Blackfoot language is an endangered language that needs to be documented, analyzed, and preserved. As a pitch accent language, Blackfoot is challenging to learn and teach: words that are spelled the same take on different meanings when their pitch changes. Linguistics researchers are working to create visual aids, called Pitch Art, to teach the nuances of these pitch changes. However, the existing techniques used to create Pitch Art require time-consuming work across multiple applications, and new forms of audio analysis are required to create visuals that accurately indicate changes in pitch. To support the preservation of this language, this project proposes a system that provides new forms of audio analysis and automates the process of creating Pitch Art.

Using agile design cycles, we have created a solution that provides a consolidated, interactive toolset for creating Pitch Art. This application provides value to a variety of stakeholders in this field. Linguistics researchers are provided with tools to analyze and compare Blackfoot speakers. Teachers are given collections of words and recordings from native speakers from which to teach students. Students are given the ability to compare their own pronunciation of Blackfoot words to native speakers. We also discuss a new form of audio analysis, called a perceptual scale, that is used to provide more effective visuals of perceived changes in pitch movement.
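The idea behind a perceptual scale can be illustrated with one common mapping: plotting pitch in semitones (log frequency) rather than raw Hz, since equal semitone steps correspond more closely to equal perceived pitch steps. This is a generic sketch of the concept; the project's exact scale may differ, and the reference frequency below is arbitrary.

```python
import math

def semitones(f_hz, ref_hz=100.0):
    """Map a fundamental frequency to semitones above a reference."""
    return 12 * math.log2(f_hz / ref_hz)

# Two equal 50 Hz steps in a contour...
contour_hz = [100, 150, 200]
# ...are unequal perceptually: ~7.02 semitones, then ~4.98.
print([round(semitones(f), 2) for f in contour_hz])
```

Drawing Pitch Art on an axis like this keeps visually equal rises looking like perceptually equal rises, which matters when students compare their own contours to a native speaker's.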

By collaborating with domain experts in this field, we have validated the effectiveness of using this system to create Pitch Art with the perceptual scale. This analysis can help reveal patterns in Blackfoot words and speech, which in turn helps analyze and preserve the language. With this system, we provide a platform to support future development and enable further support of Blackfoot language preservation.

Friday, May 31

Ashish Goel

Chair: Dr. William Erdly
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; DISC 464
Cloud-based application management services for remote medical screening data: Enabling the EYE Center ecosystem

The EYE (Educating Young Eyes) Center for Children's Vision Learning & Technology is a University of Washington Bothell-sponsored non-profit organization dedicated to the research and development of technologies that help increase awareness of the importance of functional vision in children's learning process. As part of the EYE Center, many applications have been developed to provide clinical screening tools that give parents, schools, and children important data about the propensity for vision-related problems and learning disorders. The proliferation of these independent applications has led to the need for a centralized backend for secure data collection, management, and access to reporting and research. This capstone project presents the Application Management Service (AMS), a microservice within the EYE Data Service. This cloud-based microservice provides a reliable and secure way for screening applications, vision therapy games, and other external services to access, share, and manage their data with the EYE Center. It provides a "No-Code" interface (e.g., a simple JSON file) that helps application developers define their database schema along with the operations they need, ensuring their application data can be stored and shared among the various user types (e.g., clinicians, parents, teachers, researchers). The microservice exposes multiple APIs that enable the creation of a workflow for application and data management, which can then be used to store and retrieve data. These APIs can also be used to develop the various views of the EYE Center Web portal, again requiring only a "No-Code" backend interface. The service works in conjunction with the User Security/Management microservice to allow only authenticated users to access the data. Recommendations for further development, deployment, and system monitoring are provided.
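The "No-Code" idea can be sketched as follows: a developer supplies a JSON description of their collection, and the service derives storage and basic operations from it. The field names, schema shape, and operation list below are invented for illustration; AMS's actual schema format is its own.

```python
import json

# A hypothetical developer-supplied schema file.
schema_json = """
{"collection": "screenings",
 "fields": {"child_id": "string", "score": "number"},
 "operations": ["create", "read"]}
"""

def build_service(schema_json):
    """Turn a JSON schema into a dict of ready-to-use operations."""
    schema = json.loads(schema_json)
    store = {}
    def create(record_id, record):
        assert set(record) == set(schema["fields"]), "schema mismatch"
        store[record_id] = record
    def read(record_id):
        return store.get(record_id)
    ops = {"create": create, "read": read}
    # Expose only the operations the schema asked for.
    return {name: ops[name] for name in schema["operations"]}

svc = build_service(schema_json)
svc["create"]("r1", {"child_id": "c9", "score": 12})
print(svc["read"]("r1"))
```

The developer never writes backend code: adding a field or an operation is a one-line change to the JSON file, and the service regenerates the corresponding endpoints.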

Medha Srivastava

Chair: Dr. Geethapriya Thamilarasu
Candidate: Master of Science in Computer Science & Software Engineering
3:30 P.M.; DISC 464
MSF: A comprehensive security framework for mHealth apps

Mobile health (mHealth) applications have seen increased popularity in recent years because of their affordability, ease of use, and effectiveness in delivering healthcare services to everyday users of mobile devices. This has led to a steady rise in cyber-attacks targeting mHealth apps, as indicated by various security research studies. Security and privacy risks are compounded in mHealth apps because they handle sensitive medical data and personally identifiable information of their users. Although popular mobile platforms have their own security issues, research studies have established that the lack of security cognizance among app developers is the primary cause of most security vulnerabilities found in mHealth applications. In this project, we propose a security framework that provides authentication, authorization, secure storage, and secure transmission of the sensitive medical data and workflows commonly found in mHealth apps. The proposed framework can be easily imported into new and existing mHealth apps to mitigate their security and privacy vulnerabilities. Finally, to demonstrate the framework's effectiveness, we built a dummy mHealth app on top of it and confirmed that the framework can be easily integrated into an mHealth app and reduce its security and privacy risks without compromising user experience. These results confirm that the functionality offered by the framework relieves app developers from worrying about the security and privacy aspects of their mHealth apps.
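One of the concerns such a framework wraps, tamper-evident storage of a record, can be sketched with standard-library primitives (this is a generic illustration, not MSF's actual design: a complete framework would also handle encryption, key management, authentication, and authorization):

```python
import hashlib
import hmac
import os

# Secret key the framework would manage on the developer's behalf.
KEY = os.urandom(32)

def protect(record):
    """Store a record alongside an HMAC-SHA256 integrity tag."""
    tag = hmac.new(KEY, record, hashlib.sha256).digest()
    return record, tag

def verify(record, tag):
    """Constant-time check that the record was not modified."""
    expected = hmac.new(KEY, record, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

record, tag = protect(b'{"patient": "p1", "bp": "120/80"}')
print(verify(record, tag), verify(b"tampered", tag))  # True False
```

The point of the framework is that an app developer calls something like `protect`/`verify` and never touches key handling or digest comparison directly, which is exactly where security-unaware code tends to go wrong.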

Monday, June 3

Tran Khanh Trang Quang

Chair: Dr. Yang Peng
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; DISC 464
Device-driven On-demand Deployment of Serverless Computing Functions

Due to its elasticity and scalability, serverless computing has been accepted as a fundamental technology block when building the edge-cloud back end of IoT systems. Advanced data analytics jobs such as machine learning inference and training can be programmed as serverless computing functions and executed in containers at the edge, reducing latency and avoiding disconnections in communications with the cloud. Though existing edge computing frameworks such as AWS Greengrass and Azure IoT Edge have enabled the deployment of serverless computing functions to the edge, the deployment process must be started by the user in the cloud. Such a user-driven, push-model approach demands heavy user involvement and monitoring, which makes it less efficient and sometimes unable to promptly react to changes in dynamic IoT environments. Considering these limitations, this project proposes a novel scheme to enable device-driven on-demand deployment of serverless computing functions, a more flexible and responsive solution for IoT devices to efficiently utilize edge and cloud resources. This scheme not only reduces user involvement in the deployment process but also opens a flexible channel for IoT devices to adapt and offload their computational tasks based on real-time needs. Additionally, the scheme enables IoT devices to specify where the requested functions shall execute, either at the edge or in the cloud, based on privacy and security requirements. Extensive evaluations have been conducted on AWS, and the measured end-to-end and step-wise operation latencies demonstrate that the proposed scheme can successfully enable the desired on-demand deployment with minimal overhead in various scenarios.
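The placement decision a device-driven request triggers can be sketched as a small policy function (the request fields, placement labels, and capacity model below are invented for illustration; the actual scheme's protocol and scheduler are more involved):

```python
def place_function(request, edge_slots):
    """Decide where a requested function runs.

    A device may pin a function to the edge for privacy; otherwise the
    scheduler prefers the edge when capacity exists and falls back to cloud.
    """
    if request.get("privacy") == "local-only":
        return "edge" if edge_slots > 0 else "rejected"
    return "edge" if edge_slots > 0 else "cloud"

print(place_function({"fn": "infer", "privacy": "local-only"}, edge_slots=1))
print(place_function({"fn": "train"}, edge_slots=0))
```

Because the device itself issues the request, no user has to push a deployment from the cloud console when workloads or privacy needs change.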

Kulsoom Mansoor

Chair: Dr. Clark Olson
Candidate: Master of Science in Computer Science & Software Engineering
1:15 P.M.; DISC 464
Translating Text in Images Recognized by a Convolutional Neural Network

Automatic text recognition is very beneficial in the present day, as it allows computers to detect and recognize text in images and videos without human intervention. Studies explain how multimedia database indexing and retrieval is an enormous and difficult task that can be made easier with text detection [1].  Several algorithms exist that try to tackle the problem of accurately finding and extracting text in natural scenes [2]. Text recognition is especially tricky when different text formats and conditions are involved, such as varied fonts, orientation, color, complex backgrounds, and low-quality images. This project uses a novel combination of previously developed techniques: it combines two detection algorithms and feeds the detected text into a Convolutional Neural Network (CNN) model built from scratch, which then classifies each character within the text. Results show that both the proposed detection algorithm and the machine learning model succeed with clear text, but the system has a hard time detecting text in complex and low-resolution images, as well as parsing words whose characters are connected.
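The core operation of a CNN character classifier can be illustrated in isolation: sliding a small kernel over a glyph image to produce a feature map that responds to a stroke pattern. The glyph and kernel below are toy inputs; a real model learns many kernels and stacks layers of them.

```python
def conv2d(image, kernel):
    """Valid 2D convolution (no padding) of a 2D list by a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

glyph = [[0, 1, 0],
         [0, 1, 0],
         [0, 1, 0]]                  # a tiny vertical stroke
vertical_edge = [[1], [1], [1]]      # 3x1 kernel responding to such strokes
print(conv2d(glyph, vertical_edge))  # strongest response over the stroke
```

The "connected characters" failure mode noted in the results happens upstream of this step: if segmentation hands the classifier a crop containing parts of two glyphs, no single-character feature map matches well.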

Andrew Williams

Chair: Dr. Brent Lagesse
Candidate: Master of Science in Cybersecurity Engineering
3:30 P.M.; DISC 464
Penetration Testing for Privacy: A Framework for Carrying Out Attacks and Analysis on Anonymized Data

Modern computing and crowd sensing technologies are allowing researchers and companies to collect larger amounts of data than ever before. This data provides great opportunities for study, often for purposes which may be only tangentially related to the original goal of the data collector. Medical data can give new insights into epidemiology, public health, and the most effective ways to devote limited funding and resources. Crowdsensing data, collected from thousands of smartphones, can help us design and implement smarter cities.

When collected data is made public or provided to third parties, it may provide a great deal of economic and academic opportunity, in both the private and public sectors. It can improve civil planning, economic opportunities, public health, and more. However, such data cannot be released unless efforts are made to ensure privacy. In this context, privacy means that the people in the original dataset cannot be re-identified, and that the data is anonymous.

Our purpose here is to build a framework and proof of concept for a privacy penetration testing tool that data holders can use to help determine whether data is safe to release to third parties or to the public.
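One attack such a tool might automate is a simple linkage test: join the "anonymized" release to public auxiliary data on quasi-identifiers and count uniquely re-identified rows. The records and quasi-identifier columns below are synthetic and illustrative only.

```python
# Synthetic "anonymized" release and public auxiliary data.
released = [{"zip": "98011", "age": 34, "dx": "flu"},
            {"zip": "98011", "age": 34, "dx": "ok"},
            {"zip": "98052", "age": 61, "dx": "jia"}]
public = [{"name": "Ann", "zip": "98011", "age": 34},
          {"name": "Bob", "zip": "98052", "age": 61}]

def reidentified(released, public, keys=("zip", "age")):
    """Return (name, sensitive value) pairs linked uniquely on the keys."""
    hits = []
    for row in released:
        matches = [p for p in public
                   if all(p[k] == row[k] for k in keys)]
        if len(matches) != 1:
            continue  # ambiguous in the public data
        candidates = [r for r in released
                      if all(r[k] == row[k] for k in keys)]
        if len(candidates) == 1:  # unique on both sides: re-identified
            hits.append((matches[0]["name"], row["dx"]))
    return hits

print(reidentified(released, public))  # only Bob's record links uniquely
```

The two 98011 rows survive because they are indistinguishable on the quasi-identifiers (a k-anonymity of 2), while Bob's unique combination leaks his sensitive value: exactly the kind of finding a privacy penetration test should surface before release.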

Dong Liu

Chair: Dr. Erika Parsons
Candidate: Master of Science in Computer Science & Software Engineering
5:45 P.M.; DISC 464
Exploring Data Provenance Reconstruction via a Neural-Network Approach

The use of Artificial Neural Networks (ANN) to solve problems in diverse areas has become ubiquitous in the present day. At the same time, with a growing interest in data analysis, it is natural to search for innovative ways to improve existing approaches to processing, interpreting, and using data. This project is motivated by the Data Provenance problem, in the context of tracing where data comes from based on the sentences that make up a document.

In our work, we propose an ANN approach to predicting causal relations between sentences in order to ultimately find their origins. A causal relation between two sentences exists if the occurrence of the first sentence causes the other to exist. For instance, given an article A made up of sentences, the effects are those sentences cited by other articles, while the surrounding un-cited sentences in article A are causes. Identifying causal relations can thus be seen as a classification problem: sentence pairs are the input data (features), and the output (label) is the causal relation itself, so supervised learning approaches such as ANNs can be used.

Our experiments would ideally require an extremely large dataset, in principle all the documents in the world; since this is implausible given our limitations, we scope our work down to the articles in Wikipedia. In this dataset, for each source article (e.g., a paper), its un-cited sentences are causes and its cited sentences are effects. These sentences and their causal relations are collected to create the dataset for an ANN model. Using Wikipedia as the scope of our dataset for experimentation, we found that an ANN model can achieve around 68% classification accuracy. However, our experiments indicate that the ANN model has the potential to learn the logic between sentences, even when treated as a black box. This behavior could be used in a different application context: an ANN model might work as a basic module for Natural Language Understanding, based on the notion that a combination of cited sentences can be used as a synthetic abstract for an article. This approach might be useful for automatically generating abstracts or summaries, which can in turn be useful in the area of Data Provenance.
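The supervised formulation above (sentence pair in, causal label out) can be sketched in a few lines. The feature extraction and model below are deliberately simple stand-ins (hashed bag-of-words plus a logistic-regression layer trained by gradient descent), not the thesis's actual ANN, and the labelled pairs are invented toy data:

```python
# Sketch of sentence-pair causal classification: featurize a (cause, effect)
# pair, then train a single-layer classifier. Illustrative stand-in only;
# the thesis's actual ANN architecture and Wikipedia dataset are not shown.
import numpy as np

DIM = 64  # hashed bag-of-words dimensionality (illustrative choice)

def featurize(cause: str, effect: str) -> np.ndarray:
    """Represent a sentence pair as one concatenated feature vector."""
    vec = np.zeros(2 * DIM)
    for offset, sentence in ((0, cause), (DIM, effect)):
        for word in sentence.lower().split():
            vec[offset + hash(word) % DIM] += 1.0
    return vec

# Toy labelled pairs: label 1 = causal relation holds, 0 = no relation.
pairs = [
    ("the river flooded the town", "residents were evacuated", 1),
    ("the company cut its prices", "sales rose sharply", 1),
    ("the sky is blue today", "the train uses diesel fuel", 0),
    ("she plays the violin", "the server restarted at noon", 0),
]
X = np.array([featurize(a, b) for a, b, _ in pairs])
y = np.array([lbl for _, _, lbl in pairs], dtype=float)

# One weight vector trained with the cross-entropy gradient.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.01, size=X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted P(causal)
    w -= 0.1 * X.T @ (p - y) / len(y)  # gradient descent step

train_acc = float(np.mean((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == (y == 1)))
```

A real implementation would replace the hashed bag-of-words with learned sentence embeddings and the single layer with a deeper network, but the input/output contract is the same classification problem described above.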

Tuesday, June 4

Mukambika Hegde

Chair: Dr. Brent Lagesse
Candidate: Master of Science in Computer Science & Software Engineering
3:30 P.M.; DISC 464
SensePert: A Mobile Crowdsensing Platform for Data Collection and Analysis

Mobile crowdsensing (MCS) is a new sensing paradigm that leverages the ubiquitous nature of mobile devices and user participation to collect information. Mobile crowdsensing has been used in the past for monitoring environmental pollution, understanding traffic dynamics, and studying dietary patterns and the chronic diseases associated with them. It is also currently being used to report and discuss local problems in cities. Unfortunately, large-scale adoption of MCS is limited by the high technical barrier the research community faces in creating and deploying crowdsensing tasks. In this project, we present SensePert, a mobile crowdsensing platform for data collection and analysis. SensePert allows experimenters to create crowdsensing campaigns by specifying the parameters and constraints for data collection through a user-friendly web client. The client is implemented using Vue.js, a progressive JavaScript framework for building user interfaces. Experiments created by campaign designers are then made available on the Android mobile app for data collection. The web application also provides an in-house data visualization tool for quick analysis of the collected data. The backend of SensePert is implemented using the Spring Boot framework; the project uses a MySQL database for managing user information, MongoDB for storing experiment data, and D3.js for data visualization. An evaluation of the platform shows that users can create and deploy experiments without prior experience with an MCS platform and without learning any cloud deployment technologies.
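To make the campaign-creation workflow concrete, here is a hypothetical sketch of the kind of campaign definition such a web client might produce and validate before publishing to the mobile app. All field names are invented for illustration; the abstract does not specify SensePert's actual schema:

```python
# Hypothetical crowdsensing campaign definition and a minimal validation
# step of the kind a platform like SensePert might perform before a
# campaign is published to the Android app. Field names are invented.
campaign = {
    "title": "Noise levels near campus",
    "sensors": ["microphone", "gps"],
    "constraints": {
        "sample_interval_s": 60,  # collect one reading per minute
        "region": {"lat": 47.76, "lon": -122.19, "radius_km": 2.0},
        "end_date": "2019-06-30",
    },
}

def validate_campaign(c):
    """Minimal sanity checks before a campaign is made available for collection."""
    required = {"title", "sensors", "constraints"}
    missing = required - c.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not c["sensors"]:
        raise ValueError("campaign must request at least one sensor")
    return True

ok = validate_campaign(campaign)
```

In the described architecture, a document like this would be stored in MongoDB alongside the experiment data it later accumulates.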

Wednesday, June 5

Mohamed Logman

Chair: Dr. Michael Stiber
Candidate: Master of Science in Computer Science & Software Engineering
11:00 A.M.; DISC 464
Long Short-Term Memory (LSTM) and Automatic Dependent Surveillance-Broadcast (ADS-B) for Aircraft Flight-Path Anomaly Detection

Globally, the popularity of air travel has increased significantly over the decades, and it is only predicted to grow in the coming years. Over 100,000 registered flights occur daily, and that number is anticipated to double by the year 2035. As such, each operating aircraft must be tracked for safety and security purposes. Automatic Dependent Surveillance-Broadcast (ADS-B) and radar provide air traffic control (ATC) with the information needed to track an operational aircraft. The increased number of aircraft can make it difficult for ATC to identify and react to every operational aircraft's anomalies; therefore, in this paper, we propose an automated method for detecting anomalies using Long Short-Term Memory (LSTM) networks and ADS-B data on selected routes. This method learns the normal conditions of the selected ADS-B features for a selected route in order to identify when an anomaly is observed. With this method, we can detect injected anomalies across all selected ADS-B features.
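The "learn normal, flag deviations" scheme above amounts to prediction-error thresholding on a time series. In the sketch below a simple moving-average predictor stands in for the trained LSTM so the thresholding logic stays self-contained and runnable; the trace and injected anomaly are synthetic, not the paper's data:

```python
# Sketch of prediction-error anomaly detection on one ADS-B feature
# (e.g., altitude). A trained LSTM would supply the prediction in a real
# implementation; a moving-average predictor stands in here.
import statistics

def detect_anomalies(series, window=5, k=4.0):
    """Flag indices whose prediction error exceeds k standard deviations
    of the errors observed so far."""
    errors, flags = [], []
    for i in range(window, len(series)):
        predicted = sum(series[i - window:i]) / window  # stand-in for LSTM output
        err = abs(series[i] - predicted)
        if len(errors) >= window and err > k * (statistics.pstdev(errors) + 1e-9):
            flags.append(i)
        errors.append(err)
    return flags

# Synthetic "normal" altitude trace with one injected anomaly at index 30.
altitude = [10000 + (i % 3) for i in range(60)]
altitude[30] = 14000

flagged = detect_anomalies(altitude)
```

Because the detector learns only the normal trace's error statistics, the injected spike stands out, mirroring the abstract's approach of modeling normal route conditions and flagging departures from them.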

Kalyani Katariya

Chair: Dr. William Erdly
Candidate: Master of Science in Computer Science & Software Engineering
3:30 P.M.; DISC 464
User-Centered Design and Usability Study of Android-Based Vision Screening Tools: The Quick Check Application

The availability of smartphones with increased computing capabilities has opened new doors for researchers and medical practitioners to reach out to patients. Usability remains the crucial factor in mobile applications that support patient and health-care provider interactions. mHealth, medical practice supported by mobile devices, is now used to assess and monitor conditions such as autism, dementia, and Parkinson's disease.

A series of applications is being developed by the EYE Center for Children's Vision and Technology, a University of Washington Bothell non-profit organization dedicated to the research, development, and communication of technologies to assess and treat functional vision problems that impact learning. QuickCheck, a new Android-based vision screening mobile app, is the subject of this usability study.

Five users participated in the usability study. We used the ISO/IEC 25010:2011 (systems and software quality models) and ISO 9241-11:2018 (usability: definitions and concepts) standards to define objective measurements for the QuickCheck app. The Mobile App Rating Scale (MARS) questionnaire, interviews, and user observations were used to measure users' overall satisfaction with the app. The results of the usability test demonstrated that the overall quality of the QuickCheck app meets the general acceptance criteria for the identified functional and non-functional requirements. However, several areas of concern and improvement were also noted, such as task flow, improvements to the CISS questions, and the time required to complete the CISS test. Qualitative findings include the need for feedback to users and a better format for the help instructions. Further recommendations for full-scale clinical testing are provided.


Questions: Please email cssgrad@uw.edu