Getting Results from Testing
Monday, May 12, 2014
North Creek Events Center
Can you afford to release software that could bring down the store, leave phone calls hanging, or route two airplanes into the same airspace? Can you prevent software bugs that "should be" caught in testing from eluding your testing team and coming back to bite you?
Testing cannot prove that a system behaves correctly under all circumstances. However, it remains the most cost-effective means of demonstrating system dependability, and as such it is a crucial software development activity. Yet the typical testing process is such a human-intensive activity that it is by nature error-prone and unproductive. Put simply, testing is all too often inadequately done.
The automation and formalization of testing techniques can enhance productivity of software developers and reduce errors in the development process. Formalization entails the use of formal methods in specifying and reasoning about product requirements and also in defining requirements of the testing process.
This talk will describe the application to testing of a specific formal method based on temporal logic. (A temporal logic is a formal logic that has been extended with a notion of ordering in time. Classical logic lacks such a notion, which is important for expressing requirements of many embedded, real-time systems, e.g., phone switches and avionics.)
The talk will describe a logic in which to specify temporal properties of real-time systems, called Graphical Interval Logic (GIL), as well as a method for constructing "oracles" from GIL specifications. GIL oracles check for temporal faults in test executions, ensuring that the testing team does not overlook executions that exhibit subtle timing and ordering faults. The visually intuitive representation of GIL specifications makes them easier to develop and to understand than specifications written in more traditional temporal logics. Additionally, when a test execution violates a GIL specification, the oracle provides information about a failure. This information can be displayed visually, together with the execution, to help the system developer see where in the execution a failure was detected and the nature of the failure.
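To make the oracle idea concrete, here is a minimal sketch of an oracle checking a bounded-response property ("every request is answered within a deadline") over a timed test execution. This is not GIL itself, whose notation is graphical; the event names, trace format, and deadline are invented for illustration.

```python
# Illustrative test oracle for a bounded-response temporal property:
# "every 'request' is followed by a 'response' within DEADLINE time units."
# A GIL oracle plays the same role, but is generated from a graphical
# interval-logic specification rather than hand-coded.

DEADLINE = 5.0

def check_bounded_response(trace, deadline=DEADLINE):
    """trace: list of (timestamp, event) pairs, sorted by timestamp.
    Returns None if the property holds, or a failure report locating the
    violating interval (as a GIL oracle would report to the developer)."""
    for t_req, ev in trace:
        if ev != "request":
            continue
        # Look for a response inside the interval [t_req, t_req + deadline].
        if not any(e == "response" and t_req <= t <= t_req + deadline
                   for t, e in trace):
            return {"fault": "missing response",
                    "interval": (t_req, t_req + deadline)}
    return None

ok_trace = [(0.0, "request"), (3.0, "response")]
bad_trace = [(0.0, "request"), (9.0, "response")]
print(check_bounded_response(ok_trace))   # None: property holds
print(check_bounded_response(bad_trace))  # failure report with interval
```

The returned interval is what a visual display would highlight against the execution, showing the developer where the fault was detected.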
About Laura Dillon
Laura Dillon is a professor and past Chair of Computer Science at Michigan State University (MSU). Before joining MSU, she was on the faculty of Computer Science at the University of California, Santa Barbara. Her research centers on formal methods in software engineering, emphasizing specification and analysis of concurrent software systems.
An ACM Distinguished Scientist, Laura has served on numerous editorial boards, program committees, funding panels, and professional advisory committees. She has enjoyed mentoring students and new faculty in CRA-W workshops and ACM SIGSOFT mentoring events since the early 1990s. She served as Program Co-Chair and General Co-Chair of the Grace Hopper Celebration of Women in Computing in 2011 and 2012, respectively.
Currently, she is Vice Chair of ACM SIGSOFT, Chair of the 2016 International Conference on Software Engineering, Academic Co-Chair of the Advisory Board of the Anita Borg Institute, and a member of the Executive Committee of the National Center for Women & Information Technology's Academic Alliance. Laura was awarded the 2012 MSU College of Engineering Withrow Exceptional Service Award and the 2013 University of Massachusetts Amherst Outstanding Achievement and Advocacy Award in Education.
Algorithmic Game Theory of eBay's Buyer-Seller Matching
Friday, April 4, 2014
At the broadest level, eBay is an intermediary in the business of matching a buyer and a seller so that they can complete a mutually beneficial transaction. eBay makes two important strategic choices. First, how, and whom, does eBay charge a fee for its services? Second, how does eBay rank the possible product choices for a buyer? In this presentation, we view eBay as part of the intermediation industry; Google, for example, is in the same business, matching buyers and sellers (searchers and advertisers, in their world). We compare some of the popular intermediation business models in the industry. We then ask which business model would be profit-maximizing if the intermediary could choose any. We also propose some algorithms to implement these business models.
VPN Gate: A Volunteer-Organized Public VPN Relay System with Blocking Resistance for Bypassing Government Censorship Firewalls
Tuesday, April 1, 2014
VPN Gate is a public VPN relay service designed to achieve blocking resistance against censorship firewalls such as the Great Firewall (GFW) of China. To achieve such resistance, we organize many volunteers to provide a VPN relay service, with many changing IP addresses. To block VPN Gate with their firewalls, censorship authorities must find the IP addresses of all the volunteers. To prevent this, we adopted two techniques to improve blocking resistance. The first technique is to mix a number of innocent IP addresses into the relay server list provided to the public. The second technique is collaborative spy detection: the volunteer servers work together to create a list of spies, meaning the computers used by censorship authorities to probe the volunteer servers. Using this list, each volunteer server ignores packets from spies. We launched VPN Gate on March 8, 2013. By the end of August 2013, it had about 3,000 daily volunteers using 6,300 unique IP addresses to facilitate 464,000 VPN connections from users worldwide, including 45,000 connections and 9,000 unique IP addresses from China. At the time, VPN Gate kept about 70% of its volunteer VPN servers unblocked by the GFW.
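As a rough illustration of the first technique, here is a hypothetical sketch of mixing innocent (decoy) addresses into the published relay list, so that a censor who blocks every listed address inflicts collateral damage. The function name, list sizes, and decoy ratio are assumptions for illustration, not VPN Gate's actual implementation.

```python
# Hypothetical sketch of the "mix innocent IP addresses into the public
# list" blocking-resistance technique described above. A censor scraping
# this list cannot tell relays from decoys without probing each address.
import random

def build_public_list(volunteer_ips, decoy_ips, decoy_fraction=0.5):
    """Return a shuffled list in which roughly `decoy_fraction` of the
    entries are innocent (non-relay) addresses."""
    # Number of decoys needed so decoys make up decoy_fraction of the total.
    n_decoys = int(len(volunteer_ips) * decoy_fraction / (1 - decoy_fraction))
    mixed = list(volunteer_ips) + random.sample(decoy_ips,
                                                min(n_decoys, len(decoy_ips)))
    random.shuffle(mixed)  # hide any ordering that distinguishes relays
    return mixed
```

The second technique, collaborative spy detection, would complement this by filtering out probes from known censor IP addresses before they can confirm which entries are real relays.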
About Daiyuu Nobori
Daiyuu Nobori is a software engineer and an entrepreneur. His development and research interests include systems software such as Virtual Private Networks (VPNs), distributed systems, and security. He entered the University of Tsukuba in 2003 and founded a company, SoftEther Corporation, in 2004. He received his master's degree in Engineering from the University of Tsukuba in 2014 and is now a Ph.D. student in its Department of Computer Science. At SoftEther Corporation, he developed SoftEther VPN, a cross-platform, multi-protocol VPN program, released it for free, and open-sourced it in 2013. SoftEther VPN has attracted more than 100,000 users around the world since March 2013. He aims to reach 1,000,000 SoftEther VPN users in the next three years.
Seizure Prediction & Machine Learning
Tuesday, March 11, 2014
Dr. Jeff Howbert
Epilepsy is a common neurological disorder, affecting 1% of the world’s population. A major source of disability among epileptic patients is uncertainty around when their next seizure will occur, which often leads to anxiety and self-imposed limitations on activities. This talk describes the first medical device designed for long-term, ambulatory monitoring of brain EEG activity in epileptic patients. The device includes a real-time advisory system that can predict increases in seizure likelihood up to hours in advance of a seizure. The latter part of the talk focuses on the speaker’s work developing a second-generation algorithm for the advisory system, using techniques from signal processing and machine learning to create new predictive features and a simple but robust predictive model. Long-standing issues with the statistical validation of such models will be discussed.
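To give a flavor of the signal-processing side, the sketch below computes classic EEG band-power features from a window of samples via the FFT; such features are a common input to seizure-likelihood models. The sampling rate, band edges, and window length here are assumptions for illustration; the actual device algorithm is not described in this abstract.

```python
# Illustrative band-power feature extraction from an EEG window.
# All parameters (sampling rate, band edges) are assumed values.
import numpy as np

FS = 400  # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_powers(window):
    """window: 1-D array of EEG samples. Returns total spectral power
    in each frequency band, a typical predictive feature vector."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window)) ** 2  # periodogram (unnormalized)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}
```

A downstream model would consume these features per channel and per window, with the validation caveats the talk raises (e.g., evaluating prediction against chance over long recordings).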
About Jeff Howbert
Jeff Howbert received a BA in English from Stanford Univ. in 1977 and a PhD in Synthetic Organic Chemistry from Harvard Univ. in 1983. Over the ensuing 25 years, he led medicinal chemistry and drug discovery efforts at a large pharmaceutical company and several small biotech companies. He holds 45 US patents and is responsible for the entry of 6 compounds into clinical development. After earning an MS in Computer Science from the Univ. of Washington in 2008, he began a second career in computational biology, with an emphasis on machine learning. He has worked in several labs on building predictive models for diverse biomedical problems, including seizure risk, cardiovascular biomarker discovery, and proteomic analysis. He is also currently teaching a course at UW Bothell on machine learning.
Implementing the Security Development Lifecycle in the Real World
Thursday, March 6, 2014
Senior Security Researcher
The Microsoft Security Development Lifecycle (SDL) was originally designed to codify security lessons learned from building Microsoft Windows and Office. We’re going to talk about real-world experiences implementing the SDL in a very different engineering environment.
About Don Ankney
Don Ankney is a Senior Security Researcher in Microsoft’s Applications and Services Group where he focuses on application and cloud security in a continuous integration/dev ops environment.
Privacy at Google - Life of a Privacy Analysis Engineer
Wednesday, February 19, 2014
This talk, by a Google Privacy Analysis Engineer, will overview some of the privacy-focused efforts Google is engaged in and how they affect users, as well as touch on aspects of privacy-related research.
About Ivan Medvedev
Ivan Medvedev leads a privacy engineering team in Google's Kirkland office. Ivan graduated from the Moscow State University in Russia in 1998 with a degree in Computer Science, and for the last 15 years has worked for software industry leaders Microsoft and Google, focusing on security and privacy.
Security in Emerging Environments
Thursday, February 6, 2014
Dr. Brent Lagesse
Many new systems are emerging beyond traditional desktop computing such as sensor systems, mobile systems, vehicular network systems, and wearable computing systems. These emerging environments exhibit properties that are not accounted for in traditional security models. For example, these systems are often diverse, mobile, ad-hoc, and consist of resource constrained devices. This talk will describe several areas of research in emerging environments along with potential solutions that I am actively exploring alongside several student researchers.
About Brent Lagesse
Dr. Lagesse received his Ph.D. in computer science from the University of Texas at Arlington in 2009. Prior to coming to UW Bothell, he held positions as a research scientist in the cyber security research groups at Oak Ridge National Laboratory and BBN Technologies.
Dr. Lagesse's expertise is in the areas of cyber security and pervasive systems. His dissertation research focused on developing trust in pervasive systems, such as mobile peer-to-peer networks and dynamic service composition. Through game theoretic techniques, he enabled more reliable and secure access to services and resources in pervasive computing environments. Further, he established a framework for increasing code and information reuse in distributed trust mechanisms and for easing the deployment of these mechanisms. His current research focuses on formal methods, internet voting, device and wireless privacy, and secure online machine learning.
Big Data - A Relative Term
Thursday, November 14, 2013
3:45-4:45pm Presentation
President & COO
While ‘big data’ is all the hype now, what matters is not how much data you have but what you do with it to enhance your operations.
About Mr. Ganzarski
Prior to joining BoldIQ, Roei spent 13 years with the Boeing family of companies, most recently serving as Chief Customer Officer of Boeing’s Flight Services division, which provides the world’s airlines with products and services to operate commercial airplanes safely and efficiently. In that role, Roei was responsible for leading all customer- and market-facing activity worldwide, including business development, communications, customer service, marketing, sales, sales operations, and strategy. Additionally, Roei led the customer engagement culture transformation for the business. Roei’s other key positions with Boeing (including the formerly acquired Alteon and FSB) included Chief Customer Officer (Training), Vice President of Sales, Director of Marketing, Director of New Ventures, and Director of Sales and Business Development for Asia-Pacific.
Data Management for Clinical Trials of Medical Devices
Tuesday, October 15, 2013
3:15-3:45pm Refreshments / Networking
3:45-4:45pm Presentation with Q&A
Vice President Global Clinical Affairs
Clinical Data Management saves lives!
Properly conducted clinical trials ensure that medical devices used in clinics throughout the world are safe and effective. Sound clinical data management helps ensure that this research is performed accurately and safely.
This talk will give an overview of clinical research and its regulatory environment, then offer a brief history of data management, a deep dive into clinical trial operations processes and the workflows that use clinical data management, examples of Case Report Forms (CRFs), and a typical system architecture. Afterwards, you should have a good idea of why and how data management is performed at medical device innovators and manufacturers.
About Mr. Sweeney
Mr. Terrence Sweeney’s career in regulatory affairs began in 1974 as a field biologist for the EPA. He then held the position of Radiological Health and Medical Device Specialist with the FDA. He has directed regulatory affairs departments for Johnson and Johnson, Quantum Medical Systems, Advanced Technology Laboratories, and Philips Healthcare. He is now Vice President Global Clinical Affairs for Philips Healthcare. He has worked with such medical devices as X-ray, MR and CT scanners, nuclear gamma cameras, diagnostic ultrasound equipment, patient monitors and external defibrillators.
He is an Executive Board member of the WBBA and serves on the Advisory Board of the University of Washington College of Medical Sciences, where he’s an instructor for a Master’s program in regulatory and clinical affairs. He is on the Board of the Bothell Biomedical Device Innovation Zone. He represents the U.S. medical device industry on the Steering Committee of the Global Harmonization Task Force of international regulatory authorities.
About Mr. Zydel
Andrew Zydel is a Senior Manager at Philips Healthcare where he has data management responsibilities for clinical trials of medical devices. Prior to working at Philips, Andrew spent nearly 10 years at Merck where he developed custom software for using clinical data in a variety of applications. He also has experience providing clinical data software solutions to hospitals and clinics.