Steve Simske (Director of Systems and Services Lab, HP Labs)
Steve spent his first five years at HP in the Imaging and Printing Group, where he worked on image processing, image analysis, and document-understanding technologies that were later incorporated into HP Labs projects for automatic book digitization, document understanding, speech recognition, and other classification and analytics programs. Developing these technologies helped hone the toolset for architecting massive intelligent systems, now known as meta-algorithmics, and led Steve to write the book Meta-Algorithmics (Wiley & Sons, 2013).
Steve Simske is Director of the Systems and Services Lab in HP Labs, which performs R&D on document security and understanding solutions, 3D printing, brand protection, education, and other printing and personalized systems research.
The author of roughly 120 US patents and nearly 400 peer-reviewed publications, Simske is a member of the World Economic Forum Global Agenda Councils on Illicit Economy and the Future of Electronics, a participant in several GS1 standards committees, and an IS&T Fellow and President-Elect of Imaging.org. Steve was recently named an Honorary Professor at the University of Nottingham.
Prior to HP, Steve designed and developed animal life-support hardware, performed experiments on a dozen US Space Shuttle missions, wrote the first optimal reconstruction system for impedance tomography, and co-invented "lifetime" orthopedic implants. Steve has been a faculty member at the University of Colorado, Regis University, Colorado School of Mines, and Colorado State University. He holds a BS in biomedical engineering from Marquette University, an MS in biomedical engineering from Rensselaer Polytechnic Institute, and a PhD in electrical engineering from the University of Colorado, where he was also a postdoctoral fellow in aerospace engineering.
Title: Multimedia and the Future of Knowledge
Abstract: In this talk, I will discuss the dual transformation of the world's information: from analog to digital, and from linear to non-linear. Multimedia and electronic publishing are at the forefront of the analog-to-digital conversion of the world's information. Our communications, our finances, our social interactions, our education, and effectively our entire culture are now online. This provides a large number of advantages, but also a wide range of threats to privacy, security, and data integrity. However, the transformation from linear reading, learning, and thinking to the non-linearity of electronic content ingestion, particularly that associated with multimedia content, is potentially of even greater concern. Will it change the human mind from a holder of content to a holder of context? From an integrated thinking machine to a referential one? From a holder of facts to a holder of URLs?
Next, we consider how the media for knowledge affect the way people think and the way society functions. Does multimedia inevitably drive us to distraction? Or can it be used to create better content than it currently provides? In this talk, I focus on the great opportunity, and the great responsibility, of multimedia experts to craft the past, present, and future of content in a way that both preserves its value and allows its value to be appreciated by people of different ages, genders, cultures, and purposes. The future of knowledge, and thus the future of culture and society, depends on how multimedia content is crafted, accessed, and preserved. This talk will try to outline some plans for doing this right.
Bio: Dr. C.-C. Jay Kuo received his Ph.D. degree from the Massachusetts Institute of Technology in 1987. He is now with the University of Southern California (USC) as Director of the Media Communications Laboratory and Dean's Professor in Electrical Engineering-Systems. His research interests are in the areas of digital media processing, compression, communication, and networking technologies. Dr. Kuo was the Editor-in-Chief of the IEEE Transactions on Information Forensics and Security in 2012-2014. He was the Editor-in-Chief of the Journal of Visual Communication and Image Representation in 1997-2011, and has served as an editor for 10 other international journals. Dr. Kuo received the National Science Foundation Young Investigator Award (NYI) and Presidential Faculty Fellow (PFF) Award in 1992 and 1993, respectively. He was an IEEE Signal Processing Society Distinguished Lecturer in 2006, the recipient of the Electronic Imaging Scientist of the Year Award in 2010, and the holder of the 2010-2011 Fulbright-Nokia Distinguished Chair in Information and Communications Technologies. Dr. Kuo is a Fellow of AAAS, IEEE, and SPIE. He has guided 130 students to their Ph.D. degrees and supervised 23 postdoctoral research fellows. He is a co-author of about 230 journal papers, 870 conference papers, and 13 books.
Title: Perceptual Coding: Hype or Hope?
Abstract: There has been significant progress in image/video coding in the last 50 years, and many visual coding standards have been established in the last three decades, including JPEG, MPEG-1, MPEG-2, H.264/AVC, and H.265. The visual coding research field has reached a mature stage, and the question "is there anything left for image/video coding?" has arisen in recent years. One emerging R&D topic is "perceptual coding"; that is, we may leverage the characteristics of the human visual system (HVS) to achieve a higher coding gain. For example, we may change the traditional quality/distortion measure (i.e., PSNR/MSE) to a new perceptual quality/distortion measure and take visual saliency and spatial-temporal masking effects into account. Recent developments in this area will be reviewed first. However, is this modification sufficient to keep visual coding research vibrant and prosperous for another decade? The answer is probably not. Instead, I will present a new HVS-centric coding framework that is dramatically different from the past. This framework is centered on two key concepts: the stair quality function (SQF) and the just-noticeable difference (JND). It will lead to numerous new R&D opportunities and revolutionize coding research with modern machine learning tools.
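As background, the traditional PSNR/MSE distortion measure that the abstract contrasts with perceptual quality can be computed in a few lines. The sketch below (a minimal NumPy illustration, not code from the talk) makes the standard definition concrete:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images.

    PSNR = 10 * log10(peak^2 / MSE), where MSE is the mean squared
    pixel error. 'peak' is the maximum possible pixel value
    (255 for 8-bit images).
    """
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Perceptual measures of the kind the talk advocates replace this uniform pixel-wise error with models of the HVS, so that distortions below the just-noticeable difference do not count against quality.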
Title: Intel RealSense™ Technology
Bio: As Executive Director, Technology Strategy at Dolby Laboratories, Patrick Griffis is charged with helping define the company's future technology strategy, which includes identifying and tracking key technical trends, performing technical due diligence, and supporting advanced technology initiatives for the company.
Before joining Dolby, Pat spent 10 years at Microsoft leading global digital media standards strategy, including adoption of the Digital Living Network Alliance as a baseline media sharing standard in Windows 7 and standardization of Windows Media Video technology as an international SMPTE standard. Prior to Microsoft, Pat spent 15 years at Panasonic in senior management positions, including Vice President of Strategic Product Development at Panasonic Broadcast. Pat started his career at RCA, earning eight patents in TV product design.
Pat has served two terms as President of the IEEE Consumer Electronics Society. A SMPTE Fellow, he serves on the SMPTE Executive Committee as Vice President, Education. He serves on the Board of the UHD Forum and is Dolby's Board alternate in the UHD Alliance, as well as Chair of its Compliance and Certification Working Group. Pat is a member of the IBC Council, an industry executive advisory group, and of the Academy of Digital TV Pioneers. Pat holds a BSEE degree from Tufts University and an MSEE from Purdue University.
Title: Next Generation Entertainment: More, Faster, Better and Perceptually Quantized Pixels
Abstract: The next generation of entertainment imaging technology can be characterized as a function of higher spatial resolution, higher temporal resolution, and improved dynamic range and color rendition. In this presentation, Pat Griffis, Vice President of Education for the Society of Motion Picture and Television Engineers and Executive Director, Technology Strategy at Dolby Labs, will present an overview of the current state of thinking on these topics and some observations about the practical impact on the next generation of entertainment imaging.
Bio: Dr. Gerald Friedland is the Director of the Audio and Multimedia lab at the International Computer Science Institute, a private non-profit research organization affiliated with the University of California, Berkeley. He leads a group of researchers currently focusing on acoustic analysis methods for large-scale video retrieval, as well as on related privacy concerns and privacy education. Dr. Friedland has published more than 200 peer-reviewed articles in conferences, journals, and books. He authored a new textbook on multimedia computing published by Cambridge University Press. He is an associate editor for ACM Transactions on Multimedia and IEEE Multimedia Magazine, and is the recipient of several research and industry awards, among them the European Academic Software Award and the Multimedia Entrepreneur Award from the German Federal Department of Economics. Dr. Friedland received his master's degree and doctorate (summa cum laude) in computer science from Freie Universitaet Berlin, Germany, in 2002 and 2006, respectively.
Title: Deriving Knowledge from Environmental Audio
Abstract: Today's world is filled not only with cameras, but also with microphones, listening to us and to the environment we live in. This talk presents results and lessons learned from my research on extracting information from environmental audio and video data using scalable acoustic recognition methods. The research I will present is mainly focused on multimedia retrieval, but the underlying environmental audio recognition methods are being applied to robotics, autonomous vehicles, and cell phones.
BAMMF is a Bay Area Multimedia Forum series. Experts from both academia and industry are invited to exchange ideas and information through talks, tutorials, posters, panel discussions, and networking sessions. Topics of the forum include, but are not limited to, emerging areas in vision, audio, touch, speech, text, sensors, human-computer interaction, natural language processing, machine learning, media-related signal processing, communication, and cross-media analysis. Talks at the event may cover advances in algorithms and development, demonstrations of new inventions, product innovation, business opportunities, and more. If you are interested in giving a presentation at the forum, please contact us.
8th BAMMF Event Agenda - Nov 5, 2015 (Thur)
We are pleased to announce the agenda for the 8th BAMMF event on Nov 5, 2015: Please note the new venue, Prysm Theater (180 Baytech Drive, Suite 200, San Jose ...
7th BAMMF Event Agenda - Aug 21, 2015 (FRI)
Here is the agenda for the 7th BAMMF event on Aug 21, 2015:
1:00pm - 1:25pm Check-In, Networking, and Short Announcements
1:30pm - 2:15pm Zhengyou Zhang (Microsoft ...
Speakers for August 21, 2015 BAMMF
We're looking forward to having Susie Wee (Cisco), Zhengyou Zhang (Microsoft Research), and Bo Begole (Huawei) at the upcoming BAMMF event. Please mark your calendar for August 21, 2015 ...
BAMMF @HP Auditorium on May 15 (FRI) 1:30pm - 4:30pm
The first BAMMF event for 2015 will be held at HP Auditorium on May 15 (Friday) from 1:30pm - 4:30pm. Please RSVP. This event is hosted by Dr. Tong ...