Michael Stiber, Ph.D.
Dr. Stiber received a B.S. in Computer Science and a B.S. in Electrical Engineering from Washington University in St. Louis, and his M.S. and Ph.D. in Computer Science from the University of California, Los Angeles. He has held industry positions with McDonnell Douglas (St. Charles, Missouri), Texas Instruments (Dallas, Texas), Philips (Eindhoven, the Netherlands), and the IBM Los Angeles Scientific Center. He was an Assistant Professor in the Department of Computer Science at the Hong Kong University of Science & Technology and a Research Assistant Professor in the Department of Molecular and Cell Biology at the University of California, Berkeley. Dr. Stiber is a frequent visitor to the Department of Biophysical Engineering at Osaka University (Japan). His research interests include computational neuroscience, biocomputing, neuroinformatics, simulation, scientific computing, neural networks, scientific data management and visualization, autonomous systems, nonlinear dynamics, and complex systems.
My research focuses on developing an understanding of biological computation by examining simulations of nervous systems. I am currently working with students to develop large-scale neural simulations using GPU (Graphics Processing Unit) hardware, which harnesses the number-crunching capability behind video games and animated movies for general-purpose computation. We apply this to models of neural development, the formation of network connectivity, and the behaviors that subsequently emerge.
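To give a flavor of what "large-scale" and "data-parallel" mean here, the sketch below advances a population of leaky integrate-and-fire neurons one timestep at a time, updating every neuron simultaneously. This is an illustrative toy, not code from our simulator: on a GPU each neuron's update would run in its own thread, and NumPy vectorization stands in for that parallelism; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                      # number of neurons (illustrative)
dt, tau = 0.1, 10.0             # timestep and membrane time constant
v_th, v_reset = 1.0, 0.0        # spike threshold and reset potential

v = np.zeros(n)                 # membrane potentials, one per neuron
spiked = np.zeros(n, dtype=bool)

for step in range(100):
    i_ext = rng.uniform(0.0, 0.15, n)   # random external drive
    v += dt / tau * (-v) + i_ext        # leaky integration, all neurons at once
    spiked = v >= v_th                  # detect threshold crossings in parallel
    v[spiked] = v_reset                 # reset the neurons that fired
```

The key point is that the inner update contains no per-neuron loop: every operation acts on the whole population array, which is exactly the structure a GPU exploits.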
In an additional project, I have been working with a colleague at the University of Quebec at Montreal on what we've termed "Sapien Systems". The motivation is a simple observation about the growth in computer performance over time. Between 1990 and 2010, the cost of one megabyte of disk storage went from $9 to $0.00015. Had the human population followed a similar growth curve, there would be roughly 300 trillion people on Earth. Growth in computer performance will clearly continue to far outstrip human population growth. One could conclude that any algorithm whose input size is tied to the human population should not be evaluated in terms of asymptotic complexity; in effect, it is constant-time. What kinds of problems are like this (for example, perhaps all problems relating to the web)? Which classes of problems are larger? Assuming that most real-world problems are those of sapien systems, what are the implications of computing capability that renders them constant-time?
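The back-of-the-envelope arithmetic above is easy to check. The per-megabyte costs come from the paragraph itself; the 1990 world population figure (about 5.3 billion) is an assumption I add here for the analogy.

```python
# Factor by which disk storage cost fell, 1990 to 2010
cost_1990 = 9.0          # dollars per MB, 1990 (from the text)
cost_2010 = 0.00015      # dollars per MB, 2010 (from the text)
factor = cost_1990 / cost_2010          # a factor of 60,000

# Apply the same growth factor to the 1990 world population
pop_1990 = 5.3e9         # ~5.3 billion people in 1990 (assumed figure)
pop_scaled = pop_1990 * factor          # ~3.2e14, i.e. ~300 trillion
```

So a 60,000-fold improvement applied to the 1990 population lands at roughly 300 trillion, matching the figure in the text.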
University of California, Los Angeles
Los Angeles, CA
- Ph.D. Computer Science
- M.S. Computer Science
Washington University in St. Louis
Saint Louis, MO
- B.S. Computer Science
- B.S. Electrical Engineering