Location: Intel Corp SC 12 Building, 3600 Juliette Lane, Santa Clara, CA 95054

Time: Monday, Sep 26, 2016, 1:30pm - 4:30pm

Guest Host:
Senior Research Scientist, Intel Labs

Ruzena Bajcsy

Professor EECS, UC Berkeley

Bio: Ruzena Bajcsy (LF’08) received the Master’s and Ph.D. degrees in electrical engineering from Slovak Technical University, Bratislava, Slovak Republic, in 1957 and 1967, respectively, and the Ph.D. in computer science from Stanford University, Stanford, CA, in 1972. She is a Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley, and Director Emeritus of the Center for Information Technology Research in the Interest of Society (CITRIS). Prior to joining Berkeley, she headed the Computer and Information Science and Engineering Directorate at the National Science Foundation. Dr. Bajcsy is a member of the National Academy of Engineering and the National Academy of Sciences Institute of Medicine, as well as a Fellow of the Association for Computing Machinery (ACM) and the American Association for Artificial Intelligence. In 2001, she received the ACM/Association for the Advancement of Artificial Intelligence Allen Newell Award, and was named one of the 50 most important women in science in the November 2002 issue of Discover Magazine. She is the recipient of the Benjamin Franklin Medal for Computer and Cognitive Sciences (2009) and the IEEE Robotics and Automation Award (2013) for her contributions to the field of robotics and automation.

Title: Tele-immersion and Augmented Reality at UC Berkeley (Video)

Abstract: In this presentation we shall review our efforts and the technical challenges in tele-immersion and augmented reality over the past 10 years at the University of California, Berkeley. We will discuss the issues of calibrating many cameras in order to capture and reconstruct 3D information, and to display and transmit the data over ordinary network bandwidth across distances. We will show many applications from health, training of physical activities, archeology, and arts/dance.

Finally, we will show the affordances of the latest technology for interaction via augmented reality.
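To make the calibration and reconstruction challenge concrete, here is a minimal sketch (not from the talk; the camera parameters and triangulation routine are illustrative assumptions) of how two calibrated cameras can recover a 3D point from its 2D projections, the core operation behind multi-camera 3D capture:

```python
import numpy as np

# Two hypothetical calibrated cameras: shared intrinsics K, poses [R | t].
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera 1 at the origin
t2 = np.array([[-0.5], [0.0], [0.0]])               # camera 2 offset 0.5 m
P2 = K @ np.hstack([np.eye(3), t2])

def project(P, X):
    """Project a 3D point X (length 3) through a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation of one point seen in two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

X_true = np.array([0.2, -0.1, 3.0])                 # a point 3 m in front
x1, x2 = project(P1, X_true), project(P2, X_true)   # its two 2D observations
X_est = triangulate(P1, x1, P2, x2)                 # recovered 3D position
```

Real tele-immersion systems repeat this across dozens of cameras and thousands of points per frame, which is why calibration accuracy and bandwidth become the dominant engineering challenges.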

Hanlin Tang

Senior Algorithms Engineer, Intel

Bio: Hanlin Tang is a Senior Algorithms Engineer at Intel, having joined from the deep learning startup Nervana Systems. At Nervana, he worked on developing the open-source framework neon and on building deep learning models for computer vision. He received his PhD from Harvard University, where he investigated the role of recurrent neural networks in the human cortex. His main interests lie at the intersection of neuroscience and deep learning. Earlier, he was at the RAND Corporation, where he performed research on national security issues.

Title: Building the Deep Learning Stack (Video)

Abstract: As deep learning moves from homebrew solutions in academic labs to production deployments in industry, Nervana has spent the last few years building a full-stack deep learning solution, from custom silicon to a specialized cloud. In this talk, I will present a technical overview of our approach, from the specialized deep learning processor to neon, our deep learning framework, and the Nervana Cloud. Our platform is engineered for speed and scale, and is currently used by a diverse set of organizations, from small teams to large enterprises. I will also discuss challenges and lessons learned from our experience.

Roger Zimmermann

Associate Professor of Computer Science at National University of Singapore

Bio: Roger Zimmermann is an associate professor with the Computer Science Department at the National University of Singapore (NUS). He is also a deputy director with the Interactive and Digital Media Institute (IDMI) at NUS and co-director of the Centre of Social Media Innovations for Communities (COSMIC). Earlier, he held the position of Research Area Director with the Integrated Media Systems Center (IMSC) at the University of Southern California (USC). Among his research interests are mobile video management, streaming media architectures, distributed systems, spatio-temporal data management, and location-based services. He has co-authored six patents and more than two hundred peer-reviewed articles in the aforementioned areas. Roger is on the editorial boards of the IEEE Multimedia Communications Technical Committee (MMTC) R-Letter and the Springer International Journal of Multimedia Tools and Applications (MTAP). Additionally, he is an associate editor for the ACM Transactions on Multimedia Computing, Communications and Applications journal (TOMM), and he is currently serving as the secretary of ACM SIGSPATIAL. He has served on the program committees of many leading conferences and as a reviewer for many journals. He received his Ph.D. and M.S. degrees from USC. Further details can be found on his website.

Title: DASH Streaming and Software Defined Networking (Video)

Abstract: HTTP adaptive streaming (HAS) is being adopted with increasing frequency and is becoming the de facto standard for video streaming. However, the client-driven, on-off adaptation behavior of HAS results in uneven bandwidth competition, which is exacerbated when a large number of clients share the same bottleneck network link and compete for the available bandwidth. With HAS, each client independently strives to maximize its individual share of the available bandwidth, which leads to bandwidth competition and a decrease in end-user quality of experience (QoE). The competition causes scalability issues: quality instability, unfair bandwidth sharing, and network resource underutilization. In this talk I will present our proposal of a new software defined networking (SDN) based dynamic resource allocation and management architecture for HAS systems, which aims to alleviate these scalability issues and improve per-client QoE.
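To illustrate the client-driven behavior the abstract describes, here is a minimal, hypothetical sketch (the bitrate ladder, safety margin, and smoothing constant are illustrative assumptions, not the speaker's system) of the throughput-based adaptation loop that each HAS client runs independently:

```python
# Each HAS client estimates its own throughput from recent segment
# downloads and picks the highest rendition that fits. With no
# coordination among clients, these loops compete at the bottleneck.

BITRATE_LADDER = [350, 600, 1000, 2000, 4000]  # available renditions, kbps

def choose_rendition(estimated_throughput_kbps, safety=0.8):
    """Pick the highest rendition whose bitrate fits under a safety margin."""
    budget = estimated_throughput_kbps * safety
    viable = [b for b in BITRATE_LADDER if b <= budget]
    return viable[-1] if viable else BITRATE_LADDER[0]

def ewma_throughput(samples, alpha=0.3):
    """Smooth noisy per-segment throughput samples with an EWMA."""
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est
    return est

# Throughput measured over four downloaded segments (kbps); the dips are
# what competing clients on a shared bottleneck inflict on each other.
samples = [3000, 1200, 2500, 900]
est = ewma_throughput(samples)
rendition = choose_rendition(est)
```

Because every client runs such a loop with no coordination, each one's throughput estimate reacts to the others' downloads, producing the oscillation, unfairness, and underutilization that the proposed SDN-based resource manager is designed to mitigate.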


What's BAMMF?

BAMMF is a Bay Area Multimedia Forum series. Experts from both academia and industry are invited to exchange ideas and information through talks, tutorials, posters, panel discussions, and networking sessions. Topics of the forum include, but are not limited to, emerging areas in vision, audio, touch, speech, text, sensors, human-computer interaction, natural language processing, machine learning, media-related signal processing, communication, and cross-media analysis. Talks at the event may cover advances in algorithms and development, demonstrations of new inventions, product innovation, business opportunities, etc. If you are interested in giving a presentation at the forum, please contact us.

Our Sponsors:
PARC, a Xerox Company

Hewlett Packard