2008 - 2009 Archived Speaker Series

Inside the Video Game Industry

Jason Bay
Griptonite Games

What's it like to work in this exciting and dynamic field? Who are all the different 'players' that create and sell video games? What goes into creating a game, from concept to programming to shipping?

Join us for a special lecture by Jason Bay, head of programming at Griptonite, as he offers an inside look at the video game industry. A jack of all trades, Jason has served as Lead Tester, Designer/Writer, Programmer and Lead Programmer at multiple game companies. A significant portion of this lecture will be reserved for Q&A with the audience.

Jason's Background

After a five-year stint as a partner in a small software startup in Seattle, Jason joined Amaze in 2001 with a knack for making great games even better. He served as Lead Tester, Designer/Writer, Programmer and Lead Programmer, with 15 shipped titles under his belt, before becoming head of Griptonite's programming department. He is motivated by his passion for programming and technology, as well as a drive to nurture a professional and innovative environment in which to build exciting, high-quality games.

Tuesday, June 2nd
5:50 PM
CC1-011

Trouble in Cyberspace!

Understanding the Evolving Threats to our Electronic Lives

Kirk Bailey
Chief Information Security Officer
University of Washington

Organized Cybercrime? Botnets? Domain hijacking? Fast flux networking? Polymorphic malware? Cyber-underground? Supply chain compromises? Wicked attack vectors? Stealthy wartime reserves? Cyber-extortion? Cyber-espionage?

The Information Age is still in its infancy and, in the electronic ether where it thrives, there is trouble brewing. All the promises - convenience, cost savings, innovation, and the re-invention of our lives in cyberspace - are now challenged by significant unintended consequences. Come and hear about the challenges that cyber-security professionals are facing in their work. The Web may not be exactly what you believe it to be.

When:  Tuesday, February 24th, 3:30 PM
Where: UW2-131

The Importance of Context in Computing Education

Dr. Richard LeBlanc
Seattle University

Concerns about shrinking enrollments in computing courses in recent years have drawn considerable attention to making the way we teach more attractive to students.  Those of us who have spent many years in the discipline tend to think that computing concepts are inherently interesting.  Most students have a vastly different opinion!
 
The notion of teaching computing in an engaging context has been a key idea behind recent successful efforts to increase student interest in computing courses.  The proliferation of courses and degree programs emphasizing computer games is a prominent example.  This presentation will examine a number of ways that the "computing in context" notion is influencing computing education.  Drawing heavily on my experiences at Georgia Tech, it will examine the use of context in introductory courses, the interdisciplinary Computational Media degree program, the Threads approach to the computer science degree, and other innovations.  The collection of computing degree programs included in the ACM/IEEE Computer Society Computing Curricula volumes will also be discussed, with a focus on their connection to the recent emphasis on contexts.

Tuesday, February 17th
1:30 - 3:00
UW1 041

From Computational Complexity to Constraints on the Brain's Visual System to a Predictive Model of Human Vision and Attention

Dr. John Tsotsos
Dept. of Computer Science & Engineering and
Centre for Vision Research
York University

The general problem of visual search can be shown to be computationally intractable in a formal complexity-theoretic sense, yet visual search is widely involved in everyday perception, and biological systems manage to perform it remarkably well. Complexity-level analysis resolves this contradiction. Visual search can be reshaped into tractability through approximations and by optimizing the resources devoted to visual processing. Architectural constraints can be derived that rule out a large class of potential solutions, and the evidence speaks strongly against bottom-up approaches to vision. More importantly, the brain is not performing so-called general-purpose vision; the vision problem the brain is solving is reshaped using the constraints arising from the complexity analysis. In particular, the constraints argue for an attentional mechanism that exploits knowledge of the specific problem being solved. This analysis of visual search performance, in terms of attentional influences on visual information processing and complexity satisfaction, allows a large body of neurophysiological and psychological evidence to be tied together, leading to a predictive model of vision and attention.

The Selective Tuning Model is a proposal for explaining visual attention in humans and other primates at the computational and behavioral levels. Key characteristics of the model include a top-down coarse-to-fine winner-take-all selection process, a unique competition formulation with provable convergence properties, a task-relevant inhibitory bias mechanism, and selective inhibition in both spatial and feature dimensions to eliminate signal interference, which leads to a suppressive surround for attended items. An extensive set of predictions arises, many of which have now been supported by experiment. This presentation is an example of how a purely theoretical computational analysis can lead to the derivation of a model (without data-fitting or learning) that can have a deep impact on brain science.
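For readers unfamiliar with the term, the short Python sketch below illustrates the basic winner-take-all idea mentioned above: repeatedly select the most active location in a map and suppress a surround around the attended location. This is only a toy illustration under stated assumptions, not the Selective Tuning Model itself; the map size, surround radius, and suppression factor are arbitrary.

    # Toy sketch of winner-take-all selection with a suppressive surround.
    # NOT the Selective Tuning Model -- just an illustration of the kind of
    # attentional selection described in the abstract. Parameter values are
    # arbitrary assumptions.
    import numpy as np

    def winner_take_all(saliency, n_fixations=3, surround_radius=2, suppression=0.2):
        """Repeatedly pick the most active location and suppress its surround."""
        s = saliency.astype(float).copy()
        winners = []
        for _ in range(n_fixations):
            # Global winner: the single most active location in the map.
            y, x = np.unravel_index(np.argmax(s), s.shape)
            winners.append((y, x))
            # Suppressive surround: attenuate activity near the attended item
            # so the next iteration selects a different region.
            y0, y1 = max(0, y - surround_radius), min(s.shape[0], y + surround_radius + 1)
            x0, x1 = max(0, x - surround_radius), min(s.shape[1], x + surround_radius + 1)
            s[y0:y1, x0:x1] *= suppression
        return winners

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        saliency_map = rng.random((8, 8))      # toy "feature activation" map
        print(winner_take_all(saliency_map))   # three attended locations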

You can visit Dr. Tsotsos's website to download the tutorial slides used during his presentation.

Wednesday, January 28th
3:30 - 4:30
UW1 010

Robotics and Autonomous Systems

Dr. Michael Jenkin
CSE & Centre for Vision Research
York University, Canada

Mobile robotic systems have found a wide range of applications, from sensing and survey tasks in dangerous environments, to environmental exploration, to carpet cleaning.  This lecture considers the three fundamental tasks underlying mobile robots: sensing, planning and locomotion.  It surveys typical solutions to these problems and reviews some recent robotic systems that have been successfully deployed on Earth and elsewhere.

Dr. Jenkin is a Professor of Computer Science and Engineering and a member of the Centre for Vision Research at York University, Canada.  Working in the fields of visually guided autonomous robots and virtual reality, he has published over 150 research papers including co-authoring Computational Principles of Mobile Robotics with Gregory Dudek and a series of co-edited books on human and machine vision with Laurence Harris.

Michael Jenkin's current research interests include work on sensing strategies for AQUA, an amphibious autonomous robot being developed as a collaboration between Dalhousie University, McGill University and York University; the development of tools and techniques to support crime scene investigation; and the understanding of the perception of self-motion and orientation in unusual environments.

Wednesday, October 15
5:40 PM
UW2 221
