B.S. Manjunath, Director of Bioinformatics Center, University of California, Santa Barbara
Title: Scalable and Reproducible Scientific Image Informatics
Abstract: Recent advances in imaging sciences enable the generation of very large amounts of complex scientific data, in some cases exceeding a terabyte in a single experiment. Sharing, collaborating on, and manipulating such complex data is becoming a critical issue in several disciplines, from the life sciences to materials science to remote sensing. In this talk I will describe the BisQue image informatics platform - a collaborative ecosystem that can be easily deployed in a cloud computing environment for large-scale, distributed image analytics. BisQue enables large groups of scientists to easily share and work with complex 2D/3D/4D/5D imaging data through a standard web browser interface. It can manage very large data sets and render visualizations within the browser on the fly. A unique feature of BisQue is the integration of image analysis modules and machine learning within a database framework in which the provenance of the data is maintained - scientists are thus able to create reproducible results. Current research is focused on supporting advanced high-dimensional search and indexing, and deep learning tools for the detection and classification of salient image regions. BisQue is distributed as open source (http://bioimage.ucsb.edu) and also as a core service through the CyVerse infrastructure (http://cyverse.org).
CV: B.S. Manjunath directs the NSF/ITR-funded Bio-Image Informatics Center and was the Principal Investigator for the NSF/IGERT program on Interactive Digital Multimedia. He has published about 250 articles in journals and peer-reviewed conferences, and his publications have been cited extensively. His research interests include: image/video analysis (including texture and shape analysis, segmentation, and registration), multimedia databases and data mining (feature extraction, content-based access, high-dimensional indexing, and similarity search), steganography (data hiding in images and video, and its detection), and signal/image processing for bioinformatics.
Jiri Matas, Professor, Center for Machine Perception, CTU Prague, Czech Republic
CV: Jiri Matas received his MSc degree (with honours) in technical cybernetics from the Czech Technical University in Prague, and his PhD degree in 1995 from the University of Surrey (advisor: Prof. J. Kittler). He was a Visiting Professor at EPFL Lausanne, Switzerland, in 2007 and an Associate Professor at the Center for Machine Perception, CTU Prague, Czech Republic, from 2006 to 2010. He is currently a full Professor at the Center for Machine Perception, CTU Prague. His research interests include visual recognition, tracking, image retrieval, sequential decision-making, pattern recognition, wide-baseline matching, RANSAC, face detection and recognition, biometric authentication, colour-based recognition, and the Hough transform.
Bernardino Romera-Paredes, Research Scientist, Google DeepMind
Title: Learning to segment and count leaves (and other objects) sequentially
Abstract: In this talk I will present deep learning methods for inferring sequential visual content. I will focus on instance segmentation, the problem of detecting and delineating each distinct object of interest appearing in an image. Traditional instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here I will present an instance segmentation paradigm consisting of an end-to-end method that learns to segment instances sequentially. The model is based on a recurrent neural network that finds objects and their segmentations one at a time. We developed this approach on the Computer Vision Problems in Plant Phenotyping dataset, where it achieved state-of-the-art results on leaf counting. Encouraged by those results, we successfully applied the approach to the problem of multiple-person segmentation, surpassing previous methods.
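The sequential find-one-object-per-step loop described in the abstract can be illustrated with a toy sketch. Note the assumption: here a simple flood fill stands in for the learned recurrent step, purely for illustration - in the actual work a recurrent neural network predicts each mask and a stop signal. Only the loop structure (emit one instance mask per step until a stop signal; the object count is the number of steps) mirrors the paradigm.

```python
from collections import deque


def next_instance(image, remaining):
    """One 'recurrent step': find one not-yet-segmented object and return
    its binary mask. Flood fill is a hand-crafted stand-in for the learned
    recurrent network of the talk; returning None plays the stop signal."""
    H, W = len(image), len(image[0])
    seed = next(((r, c) for r in range(H) for c in range(W)
                 if (r, c) in remaining), None)
    if seed is None:
        return None  # stop: no foreground pixels left to segment
    mask = [[0] * W for _ in range(H)]
    queue = deque([seed])
    remaining.discard(seed)
    mask[seed[0]][seed[1]] = 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (nr, nc) in remaining:
                remaining.discard((nr, nc))
                mask[nr][nc] = 1
                queue.append((nr, nc))
    return mask


def segment_sequentially(image):
    """Sequential paradigm: call the step repeatedly, collecting one
    instance mask per iteration; len(masks) is the instance count."""
    H, W = len(image), len(image[0])
    remaining = {(r, c) for r in range(H) for c in range(W) if image[r][c]}
    masks = []
    while (m := next_instance(image, remaining)) is not None:
        masks.append(m)
    return masks


# Toy binary image with two connected foreground objects.
image = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
masks = segment_sequentially(image)
print(len(masks))  # prints 2: one mask per object, so the count falls out of the loop
```

Counting (e.g. of leaves) is a by-product of this formulation: no separate counting head is needed, since the number of steps before the stop signal is the count.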
CV: Bernardino Romera-Paredes is currently a research scientist at Google DeepMind in London. Previously he was a postdoc at the University of Oxford with Philip Torr, and he received his PhD from University College London (UCL). His work focuses on building user-dependent models for affect and expression recognition by combining and extending multilinear and multi-task learning methods. These models have been successfully applied to a variety of scenarios beyond affect recognition in which data sets can be arranged into meaningful multi-modal structures. This research has led to publications in top-tier machine learning conferences such as NIPS, ICML, and AISTATS, as well as interaction-focused conferences such as Affective Computing and Intelligent Interaction (ACII) and Face and Gesture (FG). He received the Best Paper Award at ACII 2013 and the Best Paper Runner-up Prize at ICML 2013, among others.