Speakers

Prof. Dorit Aharonov

Professor, The Hebrew University of Jerusalem

Bio:

Dorit Aharonov is considered a leader in the field of quantum computation, where she has made major contributions in a variety of areas, including quantum error correction, algorithms, cryptography, and verification.

Many of her works can be viewed as building a bridge between physics and computer science, studying fundamental questions in physics using the language of computation.

She was educated at the Hebrew University (BSc in Math and Physics, Ph.D. in Computer Science and Physics) and then continued to a postdoc at IAS Princeton (Mathematics) and UC Berkeley (Computer Science).

In 2001 Aharonov joined the faculty of the computer science department of the Hebrew University of Jerusalem.

In 2005 she was featured by the journal Nature as one of four theoreticians making waves in their field; in 2006 she won the Krill Prize, and in 2014 she was awarded the prestigious Michael Bruno award.

Title:

The Second Quantum Revolution: Towards Quantum Computers

Abstract:

What are quantum computers, and why should I be interested in them? 

Will they have machine learning applications, soon, or ever?   

And what is entanglement, anyway? I will try to partially answer these questions and raise a few more in this very introductory talk, which will assume almost nothing except for some lack of common sense. 

Dr. Ayelet Akselrod-Ballin

Director of AI, Zebra Medical Vision

Bio:

Ayelet Akselrod-Ballin is the Director of AI at Zebra Medical Vision, where she leads an excellent team of CV-ML researchers and data scientists. Her work focuses on developing novel technologies for computer vision, machine learning, deep learning, and biomedical image analysis. Ayelet did her postdoctoral research as a fellow in the Computational Radiology Laboratory at Harvard Medical School, Children’s Hospital (Boston), and she holds a Ph.D. in Applied Mathematics and Computer Science from the Weizmann Institute of Science. Prior to joining Zebra Medical Vision, Ayelet led the medical imaging research technology at IBM Research and led the Computer Vision & Algorithms team at MOD.

Title:

Opportunities in Radiology with AI

Abstract:

The advances in deep learning algorithms, together with the continuous increase in the volume of digital information, provide a unique opportunity in the healthcare domain. Recent deep learning studies have established remarkable performance in complex diagnostic tasks. Nevertheless, challenges remain, including the need for interpretability to ease human-machine collaboration, and the integration of imaging data with other types of data, such as text, which is complex, noisy, heterogeneous, and sometimes missing. In this talk we discuss several key applications spanning neuro CT, chest X-ray, and spinal analysis with CT, and demonstrate results at scale on a large multi-domain dataset.

Eran Avidan

Senior Software Engineer, Intel

Bio:

Eran Avidan is a senior software engineer in Intel’s Advanced Analytics Department. Eran enjoys everything distributed, from Spark and Kafka to Kubernetes and TensorFlow. He holds an MS in computer science from the Hebrew University of Jerusalem.

Title:

Real-Time Deep Learning on Video Streams

Abstract:

Deep learning has recently become a prevalent technology for analyzing video data. However, the increasing resolution and frame rates of videos make real-time analysis a remarkably challenging task.

We present an overview of a novel architecture based on Redis, Docker, and TensorFlow that enables real-time analysis of high-resolution streaming video. The solution can serve advanced deep learning algorithms at subsecond rates and appears fully synchronous to the user despite containing an asynchronous backend. The talk offers a demo using visual inspection and shares results that highlight the solution’s applicability to real-time neural network processing of videos. The approach is generalizable and can be applied to diverse domains that require video analytics.
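The "appears fully synchronous to the user despite an asynchronous backend" pattern the abstract describes can be sketched with an in-process queue and a stub standing in for Redis and the TensorFlow model (the queue, the `infer` stub, and all names here are illustrative assumptions, not the speaker's implementation):

```python
import queue
import threading

def infer(frame):
    # Stand-in for a neural-network forward pass (assumption).
    return sum(frame)

frames_in = queue.Queue()
results = {}
done = {}

def worker():
    # Asynchronous backend: consumes frames as they arrive.
    while True:
        frame_id, frame = frames_in.get()
        if frame_id is None:
            break
        results[frame_id] = infer(frame)
        done[frame_id].set()

def analyze(frame_id, frame):
    # Synchronous facade: the caller blocks until its result is ready.
    done[frame_id] = threading.Event()
    frames_in.put((frame_id, frame))
    done[frame_id].wait()
    return results[frame_id]

threading.Thread(target=worker, daemon=True).start()
print(analyze(0, [1, 2, 3]))  # → 6
frames_in.put((None, None))   # shut the worker down
```

In a production system, Redis would replace the in-process queue so that producers, workers, and consumers can live in separate Docker containers.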

Sagie Benaim

PhD Candidate, Tel Aviv University

Bio:

Sagie Benaim is a PhD candidate under the supervision of Prof. Lior Wolf at Tel Aviv University. He has several publications on GANs and related models, which are his main domain of expertise.

Title:

Generative Adversarial Networks for Image to Image Translation

Abstract:

The talk will focus on recent advances in Generative Adversarial Networks (GANs) for image-to-image translation. I will begin by describing the problem and the distinction between supervised and unsupervised approaches, focusing on pix2pix, DTN, CycleGAN, UNIT, and DistanceGAN. I will then describe recent approaches that allow for variation in the target domain as well as "one-shot" translation. Lastly, I will talk about the application of these techniques in other domains, such as audio and text.
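One ingredient shared by several of the unsupervised approaches named above (e.g. CycleGAN) is a cycle-consistency loss: translating to the other domain and back should reproduce the input. A minimal numeric sketch, using toy invertible linear maps in place of learned generators (an assumption for illustration):

```python
import numpy as np

# Toy "generators": G maps domain X to Y, F maps back.
# In CycleGAN these are convolutional networks trained adversarially.
G = lambda x: 2.0 * x + 1.0
F = lambda y: (y - 1.0) / 2.0

def cycle_consistency_loss(x):
    # L_cyc = E[ |F(G(x)) - x|_1 ]: penalizes round trips that
    # fail to reconstruct the input, enabling unpaired training.
    return np.mean(np.abs(F(G(x)) - x))

x = np.array([0.5, -1.0, 2.0])
print(cycle_consistency_loss(x))  # → 0.0, since F inverts G exactly
```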

Ofir Bibi

Director of Research, Lightricks

Bio:

Ofir Bibi is the director of research at Lightricks. His research is in the fields of machine learning, statistical signal processing, computational photography, and computer graphics. He has held leading research positions building systems for estimation and prediction of various signals, from weather, electricity consumption, and financial markets to photos, videos, and auditory signals. Currently he is focused on ways to produce consistent results from neural networks in a resource-constrained environment.

Title:

Sky Replacement in Video

Abstract:

Who hasn't gotten an image or video with some dull skies taking up most of the space?

Professionals may edit these videos using advanced and time-consuming tools, to replace the sky with a more expressive or imaginative sky. 

In this work, we propose an algorithm for automatic replacement of the sky region in a video with a different sky.

The method is fast, achieving close to real-time performance on mobile devices and can be fully automatic.

Joint work with Tavi Halperin, Harel Cain and Michael Werman. To be presented at Eurographics 2019.

Gal Chechik

Director of AI Research, Bar-Ilan University & NVIDIA

Bio:

Gal Chechik is an Assoc. Prof at the Gonda Brain Institute at Bar-Ilan University and a director of AI research at NVIDIA. His research spans learning in brains and machines, including large-scale learning algorithms for machine perception, and analysis of representation and changes of mammalian brains.

In 2018 Gal joined NVIDIA as a director of AI research, leading NVIDIA's research in Israel. Prior to that, Gal was a staff research scientist at Google Brain and Google Research, developing large-scale algorithms for machine perception used by millions daily. Gal earned his Ph.D. in 2004 from the Hebrew University, developing computational methods to study neural coding, and did his postdoctoral training at the Stanford CS department. Since 2009, he has headed the computational neurobiology lab at the Gonda center of Bar-Ilan University. Gal has authored ~75 refereed publications, including publications in Nature Biotechnology, Cell, and PNAS.

Title:

Understanding Images by Learning Operable Representations

Abstract:

What does it mean to understand a visual scene? Deep models of visual recognition successfully exploit statistical regularities to recognize and localize objects, relations, and actions, but there is strong evidence that their "understanding" is limited. I will describe a series of studies, both from our lab and from NVIDIA research, aimed at learning intermediate representations that can be operated on, allowing models to generalize to new categories.

Yonatan Cohen

CTO, Quantum Machines

Bio:

I completed my PhD in Physics at the Weizmann Institute in Professor Moty Heiblum's group, where I investigated quantum electronic devices and topological quantum states.

A year ago, together with Itamar Sivan and Nissim Ofek, I co-founded Quantum Machines (QM), the first Israeli startup in quantum computing. At QM we develop control and operation systems for quantum computers. With a multidisciplinary team of highly motivated physicists and engineers, all passionate about solving the challenges of quantum control, we hope to help make the dream of a large-scale quantum computer become a reality.

Title:

Hello Quantum World

Abstract:

Quantum mechanics, which was born at the beginning of the 20th century and revolutionized our understanding of nature, implies that nature is far more counterintuitive and far richer than anyone could imagine. In the 1980s, physicists understood that this richness of nature can be used to construct new computers that can outperform any classical computer. In the decades that followed, physicists made huge progress in designing and controlling quantum systems, and today quantum computers seem closer than ever. What are the basic principles of quantum computers? How do they allow them to beat classical computers? What problems will they be best suited for?

Dr. Moti Freiman

Staff Research Scientist, Global Advanced Technology, CT/AMI, Philips Healthcare

Bio:

Moti Freiman is a staff research scientist at Philips Healthcare where he is developing advanced algorithms with the aim of improving the capacity of medical imaging devices to provide clinically meaningful information by leveraging machine learning, computer vision, and image processing algorithms.

Prior to Philips, Dr. Freiman was an Instructor in Radiology at Harvard Medical School where he developed advanced algorithms for quantitative analysis of diffusion-weighted MRI data.

Dr. Freiman is the recipient of an NIH R01 research grant and the 2012 Crohn's and Colitis Foundation of America research fellow award. He is the author or co-author of more than 40 journal and full-length conference papers and holds several patents and patent applications.

Title:

Unsupervised Medical Abnormality Detection through Mixed Structure Regularization (MSR) in Deep Sparse Autoencoders

Abstract:

Deep sparse auto-encoders with mixed structure regularization (MSR), in addition to an explicit sparsity regularization term and stochastic corruption of the input data with Gaussian noise, have the potential to improve unsupervised abnormality detection. Unsupervised abnormality detection based on identifying outliers using deep sparse auto-encoders is a very appealing approach for medical computer-aided detection systems, as it requires only healthy data for training rather than expert-annotated abnormalities.

In the task of detecting coronary artery disease from Coronary Computed Tomography Angiography (CCTA), our results suggest that MSR has the potential to improve overall performance by 20-30% compared to deep sparse and denoising auto-encoders.
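As a rough illustration of the kind of objective described above (not the paper's exact MSR formulation), a sparse denoising auto-encoder combines corruption of the input, reconstruction of the clean input, and an explicit sparsity penalty; a tiny fixed-weight sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer auto-encoder with random, fixed weights: this sketches
# only the objective, not the paper's architecture or training procedure.
W_enc = rng.normal(size=(8, 4))
W_dec = rng.normal(size=(4, 8))

def loss(x, noise_std=0.1, sparsity_weight=0.01):
    x_noisy = x + rng.normal(scale=noise_std, size=x.shape)  # denoising corruption
    h = np.maximum(0.0, x_noisy @ W_enc)                     # sparse ReLU code
    x_hat = h @ W_dec                                        # reconstruction
    recon = np.mean((x_hat - x) ** 2)                        # reconstruct the CLEAN input
    sparse = sparsity_weight * np.mean(np.abs(h))            # explicit sparsity term
    return recon + sparse

x = rng.normal(size=(16, 8))
print(loss(x) > 0)
```

After training on healthy data only, a sample's reconstruction error serves as its abnormality score: outliers reconstruct poorly.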

Itamar Friedman

Director, Alibaba Israel Machine Vision Lab

Bio:

Itamar (一塔) Friedman is a director at Alibaba Israel Machine Vision Lab, leading teams to create innovations for Alibaba’s diverse ecosystems. Itamar was the CTO of Visualead, a startup developing cutting-edge machine vision O2O technologies acquired by Alibaba. Itamar holds a BSc in Electrical Engineering (Summa Cum Laude) and an MSc in Computer Vision and Machine Learning from the Technion.

Title:

AutoML - Towards "CV as a Service"

Abstract:

Manual network architecture search and hyper-parameter tuning of neural-network-based algorithms can be an interesting task, but also an arduous one. In this talk, we will present two novel methods developed in our lab for automating the network architecture search. With these methods, we are able to reach state-of-the-art results on various classification datasets within one GPU-day per dataset. Are we heading towards a "CV as a Service" era?

Adham Ghazali

CEO & Co-Founder, Imagry

Bio:

Adham Ghazali is the CEO of Imagry. He spent the last 10 years working on various machine learning problems, including large-scale computer vision, brain-computer interfacing, and bio-inspired facial recognition. He is interested in the intersection between biology and computer science. At his current post, he is responsible for strategic R&D and business development.

Prior to cofounding Imagry, Adham was a brain researcher focusing on the study of the visual system in infants. 

Title:

Autonomous Driving in Unknown Areas

Abstract:

Autonomous driving has come a long way, although its success is limited to areas where a high-definition map has been pre-built and is known. Inspired by recent achievements in Artificial Intelligence, particularly methods that combine trees and DNNs (e.g., AlphaGo Zero), this talk demonstrates how to effectively combine deep learning and conventional path planning to drive in unknown areas.

Jacob Gildenblat

Co-Founder and CTO, DeePathology.ai

Bio:

Jacob is Co-Founder and CTO of DeePathology.ai.

DeePathology.ai develops digital pathology products for diagnostics and for pharma research.

Before DeePathology, Jacob was the leader of the deep learning group at SagivTech.

Jacob has a BSc and MSc in Electrical Engineering from Tel Aviv University.

Title:

Active Learning for Fast and Efficient Annotation of Medical Images

Abstract:

Common problems in the process of developing AI solutions for the medical field are highly unbalanced datasets on one hand and limited annotation resources on the other hand.

The use of Active Learning can dramatically help with both issues.

The task of Cell Detection is very important in digital pathology. For example, analyzing the quantity and density of immune cells can provide important indications on the progress of cancer.

This is a tedious task when manually done by pathologists and thus, automating this process is desirable.

Automating cell detection requires annotating large amounts of data, which is usually very unbalanced.

The DeePathology.ai Cell Detection Studio is a do-it-yourself tool for pathologists to train deep learning cell detection algorithms on their own data.

Using this tool, deep learning cell detection solutions can be easily created by the pathologist very quickly.

In the talk we will use the example of the DeePathology.ai Cell Detection Studio to demonstrate how Active Learning can be used for medical imaging annotation.

We will also present our approach for using active learning with unbalanced datasets.
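A common active-learning building block consistent with this setting is uncertainty sampling: have the pathologist annotate the unlabeled samples the current model is least sure about. A minimal sketch with hypothetical model outputs (the probabilities below are made up for illustration):

```python
import numpy as np

def entropy(p):
    # Predictive entropy of per-class probabilities; higher = less certain.
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def query_batch(probs, k):
    # Uncertainty sampling: pick the k samples with the highest entropy
    # and send them to the annotator.
    return np.argsort(-entropy(probs))[:k]

# Hypothetical model outputs for 4 unlabeled patches (cell / not cell).
probs = np.array([[0.99, 0.01],
                  [0.55, 0.45],
                  [0.90, 0.10],
                  [0.50, 0.50]])
print(query_batch(probs, 2))  # → [3 1]: the most ambiguous patches
```

For unbalanced data, the same loop can be biased toward samples the model scores near the rare class, so annotation effort concentrates where positives are likely to hide.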

Tzofnat Greenberg-Toledo

Ph.D. Student, Technion

Bio:

Tzofnat Greenberg-Toledo is a Ph.D. student at the Andrew and Erna Viterbi Faculty of Electrical Engineering, Technion – Israel Institute of Technology. She received her B.Sc. degree from the same faculty in 2015. From 2014 to 2016 she worked as a logic design engineer at Intel Corp., and since 2016 she has been a graduate student in Electrical Engineering at the Technion. Her current research focuses on computer architecture and accelerators for Deep Neural Networks using memristors.

Title:

Accelerating DNN Applications with Emerging Memory Technologies

Abstract:

Deep Neural Networks (DNNs) are usually executed by commodity hardware, mostly FPGA and GPU platforms, and accelerators, such as Google's TPU. However, when executing DNN algorithms, the conventional von Neumann architectures, where the memory and computation are separated, pose significant limitations on performance and energy efficiency, as DNN algorithms are compute and memory intensive. Emerging memory technologies, known as memristors, enable in-place, highly parallel, and energy-efficient analog multiply-accumulate operations. This is known as processing-near-memory (PNM). This talk will present the potential and opportunities of integrating memristors into DNN accelerator design.
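The analog multiply-accumulate at the heart of this idea is easy to state: with weights stored as conductances, applying input voltages to the crossbar rows yields output currents equal to a matrix-vector product (Ohm's and Kirchhoff's laws). A numeric sketch, with made-up conductance values:

```python
import numpy as np

# A memristor crossbar stores a weight matrix as conductances G (siemens).
# Driving the rows with voltages v produces column currents i = v @ G:
# an analog multiply-accumulate computed in a single step.
G = np.array([[0.1, 0.3],
              [0.2, 0.5],
              [0.4, 0.1]])   # 3 input rows x 2 output columns
v = np.array([1.0, 0.5, 2.0])

i = v @ G  # what the crossbar computes "for free" in the analog domain
print(i)   # identical to a digital matrix-vector product
```

Real designs must also handle DACs/ADCs, device noise, and limited conductance precision, which is where much of the architecture research lies.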

Rana Hanocka

Ph.D. Student, Tel Aviv University

Bio:

Rana Hanocka is a Ph.D. student under the supervision of Daniel Cohen-Or and Raja Giryes at Tel Aviv University. Her research is focused on applying convolutional neural networks on irregular data. 

Title:

Applying CNNs on Triangular Meshes

Abstract:

Polygonal meshes provide an efficient representation for 3D shapes. They explicitly capture both shape surface and topology, and leverage non-uniformity to represent large flat regions as well as sharp, intricate features. This non-uniformity and irregularity, however, inhibits mesh analysis efforts using neural networks that combine convolution and pooling operations. In this work, we utilize the unique properties of the mesh for a direct analysis of 3D shapes using MeshCNN, a convolutional neural network designed specifically for triangular meshes. Analogous to classic CNNs, MeshCNN combines specialized convolution and pooling layers that operate on the mesh edges, by leveraging their intrinsic geodesic connections. Convolutions are applied on edges and the four edges of their incident triangles, and pooling is applied via an edge collapse operation that retains surface topology, thereby generating new mesh connectivity for the subsequent convolutions. MeshCNN learns which edges to collapse, thus forming a task-driven process where the network exposes and expands the important features while discarding the redundant ones. We demonstrate the effectiveness of our task-driven pooling on various learning tasks applied to 3D meshes.
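A heavily simplified sketch of the task-driven pooling idea: rank edges by feature strength and collapse the weakest. The real MeshCNN also updates mesh connectivity via edge collapse and merges neighboring features; that bookkeeping is omitted here, so this is only a toy illustration:

```python
import numpy as np

def edge_pool(edge_features, n_keep):
    # Rank edges by feature magnitude and keep the strongest n_keep,
    # discarding (collapsing) the rest. In MeshCNN the ranking itself is
    # learned, making pooling task-driven rather than geometry-driven.
    norms = np.linalg.norm(edge_features, axis=1)
    keep = np.sort(np.argsort(-norms)[:n_keep])
    return edge_features[keep]

feats = np.array([[0.1, 0.0],   # weak edge: collapsed
                  [1.0, 2.0],
                  [0.0, 0.05],  # weak edge: collapsed
                  [3.0, 1.0]])
print(edge_pool(feats, 2).shape)  # → (2, 2)
```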

Ilan Kadar

Director of Deep Learning, Nexar

Bio:

Ilan Kadar is the Director of Deep Learning at Nexar. Ilan is responsible for leading the deep learning team and the effort to leverage Nexar's large-scale datasets of real-world driving environments for automotive safety applications. Prior to Nexar, Ilan led the image recognition group at Cortica and was responsible for building the company's machine vision technology. Ilan received his BSc, MSc, and PhD degrees in computer science from the Ben-Gurion University of the Negev, Israel, in 2006, 2008, and 2012 respectively (Summa Cum Laude). His research thesis focused on machine learning algorithms for scene recognition and image retrieval and was published in leading conferences and journals in the area of machine vision.

Title:

Continuous Deep Learning at the Edge

Abstract:

The robustness of end-to-end driving policy models depends on having access to the largest possible training dataset, exposing the true diversity of the 10 trillion miles that humans drive every year in the real world. However, current approaches are limited to models trained using homogenous data from a small number of vehicles running in controlled environments or in simulation, which fail to perform adequately in real-world dangerous corner cases. Safe driving requires continuously resolving a long tail of those corner cases. The only possible way to train a robust driving policy model is therefore to continuously capture as many of these cases as possible. The capture of driving data is unfortunately constrained by the reduced compute capabilities of the devices running at the edge and the limited network connectivity to the cloud, making the task of building robust end-to-end driving policies very complex.

In this talk, I will give an overview of a network of connected devices deployed at the edge running deep learning models that continuously capture, select, and transfer to the cloud “interesting” monocular camera observations, vehicle motion, and driver actions. The collected data is used to train an end-to-end vehicle driving policy, which also guarantees that the information gain of the learned model is monotonically increasing, effectively becoming progressively more selective of the data captured by the edge devices as it walks down the tail of corner cases.

Dr. Leonid Karlinsky

DL Team Lead, CVAR Group, IBM Research AI

Bio:

Dr. Karlinsky leads the CV & DL research team in the Computer Vision and Augmented Reality (CVAR) group at IBM Research AI. He is a computer vision and machine learning expert with years of hands-on experience. He actively publishes research papers in leading CV and ML venues such as ECCV, CVPR, and NIPS, and has been reviewing for these conferences for the past 10 years. Dr. Karlinsky holds a PhD degree in CV from the Weizmann Institute of Science, supervised by Prof. Shimon Ullman.

Title:

Few-Shot Object X, or How Can We Train A DL Model with Only Few Examples

Abstract:

Learning to classify and localize instances of objects that belong to new categories, while training on just one or very few examples, is a long-standing challenge in modern computer vision. This problem is generally referred to as 'few-shot learning'. It is particularly challenging for modern deep-learning based methods, which tend to be notoriously hungry for training data.  In this talk I will cover several of our recent research papers offering advances on these problems using example synthesis (hallucination) and metric learning techniques and achieving state-of-the-art results on known and new few-shot benchmarks. In addition to covering the relatively well studied few-shot classification task, I will show how our approaches can address the yet under-studied few-shot localization and multi-label few-shot classification tasks. In addition to this talk, a detailed tutorial covering the few-shot learning field in general will be given by Dr. Joseph Shtok from my team.
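As background for the metric-learning family mentioned above (in the spirit of prototypical networks, not necessarily the speaker's exact method), a class can be represented by the mean of its few support embeddings, and a query assigned to the nearest prototype. A toy 2-way 2-shot episode with hand-made 2-D "embeddings":

```python
import numpy as np

def nearest_prototype(support, labels, query):
    # Each class prototype is the mean of its (few) support embeddings;
    # a query is classified by the nearest prototype in embedding space.
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(protos - query, axis=1)
    return classes[np.argmin(dists)]

support = np.array([[0.0, 0.1], [0.1, 0.0],    # class 0 support set
                    [1.0, 1.1], [0.9, 1.0]])   # class 1 support set
labels = np.array([0, 0, 1, 1])
print(nearest_prototype(support, labels, np.array([0.8, 0.9])))  # → 1
```

In practice the embeddings come from a deep network trained episodically; example synthesis (hallucination) augments the support set before the prototypes are computed.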

Dr. Laurence Keselbrener

Site Leader and Vice President R&D, Medtronic

Bio:

Laurence has over 20 years of experience in biomedical startups and corporates. She held several roles including algorithm engineer, product manager and VP R&D for startups. She joined General Electric as Site and Engineering Manager of the HCIT Herzlia site and was then promoted to General Manager of the Respiratory Value Segment (Versamed).  She joined Given Imaging in July 2014. She holds a Ph.D. in Medical Physics from Tel Aviv University.

Laurence is part of the Board of MindUp, an IIA incubator for Digital Health. She is also part of the Directors Unit of the governmental companies and serves as a board member of Rotem, the governmental company of the Center for Nuclear Research in Dimona. 

Title:

AI for Capsule Endoscopy – the Second Revolution of Endoscopy

Abstract:

Given Imaging revolutionized the GI world with the invention of capsule endoscopy 20 years ago. Capsule endoscopy was the first device able to visualize the small bowel and has already improved the lives of more than 3 million patients worldwide. The PillCam is the state of the art in capsule endoscopy, holds the largest market share, and continues to bring breakthrough innovations.

Today Medtronic, which now owns the technology, keeps developing additional applications for small bowel, colon, and pan-enteric visualization for a variety of diagnosis and monitoring tasks. These technologies use the most advanced AI, vision, and machine learning algorithms to revolutionize the field once again and create the Intelligent Capsule Endoscopy.

In this presentation, we will cover some of the unique technical achievements and challenges. Our deep learning based pathology detectors achieve expert human-level performance with only a few hundred unique examples. Furthermore, this technology enables accurate localization along the GI tract without any need for additional inputs aside from the images.

Samah Khawaled

Technion

Bio:

Samah Khawaled graduated from the Technion's Viterbi Faculty of Electrical Engineering in 2017. She is currently a graduate student working on her research towards an M.Sc. degree. Her research is concerned with modeling Natural Stochastic Textures (NST) that also incorporate structural information, and with applying the model to image analysis and classification. Samah supervises image-processing-related projects and is involved as a T.A. in various courses. Previously she worked at Intel and served as a tutor in the Landa (equal opportunities) project. Samah is the recipient of the Israeli Ministry of Science and Technology Fellowship for 2017-2019.

Title:

On the Interplay of Structure and Texture in Natural Images

Abstract:

Natural Stochastic Texture (NST) images exhibit self-similarity and Gaussianity, the two main properties characteristic of Fractional Brownian Motion (fBm) processes. We consider non-pure NST images that also contain structural information. The latter is characterized by profound local phase information, whereas the former is characterized by random spatial phase. In this meeting, we address primarily the fractal-based layer of the model and its implementation on the NST component. We also discuss applications where the approach can be used, with special emphasis on medical images. Examples of mammography and bone X-ray images are presented.

* Joint work with Prof. Yehoshua Y. Zeevi

Eldad Klaiman

Principal Scientist, Roche Innovation Center Munich

Bio:

Eldad Klaiman holds a B.Sc. and an M.Sc. in electrical engineering from the Technion IIT in Haifa. For the past four years he has been working as a principal scientist at the Roche Innovation Center near Munich, Germany, in the Oncology division of Roche Pharma Research and Early Development. During this period he has been responsible for the development and application of machine learning algorithms, mostly in the field of pathology tissue analytics, leveraging Roche's internal datasets to develop novel technologies and techniques for informing and supporting oncology drug development.

Title:

Novel Predictive Technologies Supporting Personalized Healthcare and Early Decisions in Pharma Research

Abstract:

Modern-day pharma organizations face mounting challenges in the field of drug research and development due to the competitive pharma landscape, the complexity of treatment methods, and the increasing number of drug combination studies. At Roche we are pursuing a Personalized Healthcare (PHC) approach to drug development, which is meant to match the best medicine to each specific patient. This approach has the potential to drastically improve drug efficacy for our patients, but it also poses the challenge of patient-drug selection. The amount of data available in the drug development process, from histology and clinical imaging, from genomics and sequencing, from real-world data and from diagnostic methods, presents a unique opportunity for the extraction of novel insights and the development of new technologies. These novel machine learning techniques are helping us address these challenges and leverage this information to assist scientists in the research process, supporting decision making and bringing better drugs to patients, faster.

Shmoolik Mangan

Algorithms Development Manager, VayaVision

Bio:

Shmoolik Mangan leads the algorithms development of VayaVision, a startup focused on algorithmic solutions for autonomous driving.

He holds a Ph.D. from the Weizmann Institute and has 27 years of experience and leadership in the fields of algorithms, physics, optics, and the development of multi-disciplinary systems. He has developed metrology, inspection, detection, and classification products for the semiconductor and electronics manufacturing industries, and has worked at Applied Materials and Orbotech.

Title:

The Inherent Redundancy Advantage of Low-Level Sensor Fusion for Autonomous Vehicles

Abstract:

The foundations of autonomous driving (AD) are advanced sensors and algorithms for environmental perception, especially for road and obstacle detection. The sensor set for AD is expected to contain a combination of low- and high-resolution image and distance sensors, including cameras, radars, and lidars. Low-level sensor fusion uses all sensors to generate a high-density, pixel-level, joint image-distance HD-RGBd model through upsampling, and the environmental perception is generated by processing this joint HD-RGBd model through unified algorithms. While this may seem vulnerable to the loss of one sensor in the set, we have developed built-in redundancy mechanisms that compensate for the loss of a sensor through alternative low-level fusion and detection using all remaining sensors, with a known loss of accuracy, allowing the driving system to continue driving at lower speed.
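A toy illustration of the upsampling step in low-level fusion (an assumed simplification, not VayaVision's pipeline): densify sparse lidar returns onto the camera grid and stack depth as a fourth channel next to RGB:

```python
import numpy as np

def fuse_rgbd(rgb, sparse_depth):
    # Fill each missing depth pixel (0 = no lidar return) with the nearest
    # measured value along its row, then stack depth as a 4th channel.
    # Nearest-neighbor fill is a deliberately crude stand-in for real
    # guided upsampling.
    depth = sparse_depth.astype(float).copy()
    h, w = depth.shape
    for y in range(h):
        cols = np.nonzero(depth[y])[0]          # columns with lidar returns
        if cols.size:
            for x in range(w):
                depth[y, x] = depth[y, cols[np.argmin(np.abs(cols - x))]]
    return np.dstack([rgb, depth])              # dense RGBd model

rgb = np.zeros((2, 4, 3))
lidar = np.array([[0, 5, 0, 0],
                  [0, 0, 0, 9]])
print(fuse_rgbd(rgb, lidar)[..., 3])  # depth channel, now dense
```

Redundancy then amounts to recomputing this dense model from whichever sensors remain, accepting the accuracy loss.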

Soliman Nasser

Lead Research and Development Engineer, Blink Technologies Inc.

Bio:

Soliman Nasser is a computer scientist and researcher, working as a Lead Research and Development Engineer at Blink Technologies Inc. Previously, Soliman was part of the Algorithms team at Intel RealSense, developing core 3D algorithms for depth cameras. Soliman completed his B.Sc. in tandem with high school studies as part of the "Etgar" program, and his M.Sc. in Computer Science under the supervision of Dr. Dan Feldman at the Robotics and Big Data Lab, both at the University of Haifa. Soliman's research interests include: Computer Vision, Multiple View Geometry, {Machine, Deep}-Learning, Vision for Robotics, SLAM, depth cameras, eye-tracking, and more.

Title:

Eye Tracking: Theory and Applications

Abstract:

The eyes play a vital role in the perception of our evolving surroundings as well as in communication with other human beings. Apart from behavioral and scientific research in medical diagnosis and psychological studies, estimating gaze direction is also important in mixed reality, AI assistant devices, automotive, human-computer and robot interaction, user authentication, and many other applications.

In this talk, I will give a short introduction to eye-tracking research, compare the geometric (model-based) and data-driven (appearance-based) methods, with emphasis on the advantages and disadvantages of deep learning in this domain, and present some interesting usages of this technology.

Oron Nir

Senior Data Scientist, Microsoft Media AI Research Team

Bio:

Oron is a founding member of Microsoft Video Indexer, a media analytics cloud service that recently reached general availability, and a Senior Data Scientist on the Microsoft Media AI research team. He acquired his B.Sc. and M.Sc. degrees at the Ben-Gurion University of the Negev, with a thesis in the field of Computer Vision.

Title:

Multimodal Topics Inference from Video

Abstract:

Media companies accumulate large archives and struggle to transform them into business value. Media is unstructured and therefore hard to manage at scale. Content categorization by topics is an intuitive approach that makes it easier for people to search. Many organizations resort to tagging their content manually, which is expensive and isn't scalable. To automate this process, we apply multimodal topic detection and tracking (TDT). The information communicated through speech, OCR, and identified celebrities is detected using Named Entity Recognition, clustered by concepts into similarity graphs, and named using a Wikipedia Categories ranker in an interpretable manner, with evolvement over time.

Yuval Nirkin

PhD Student, Bar-Ilan University

Bio:

Yuval is co-founder and CTO of DeepNen Ltd and a PhD student at Bar-Ilan University under the supervision of Prof. Yosi Keller and Prof. Tal Hassner. Yuval has a BSc in Computer Engineering from the Technion and an MSc in Computer Science from the Open University of Israel, and he was an academic officer in one of the biggest technology units in the IDF. Yuval's research is focused on deep-learning-based face synthesis.

Title:

Introduction to Face Swapping

Abstract:

Face swapping is one of the most important face synthesis problems. It has numerous practical applications, such as hairstyle replacement, face spoofing, and data augmentation for machine learning. One of the most prominent face swapping methods is DeepFakes, which has recently attracted a lot of attention from media around the world by making face swapping more accessible to regular users. The rapid progress in face synthesis should raise concerns about its implications for society. What will happen when these methods become easily accessible and harder to distinguish from real images?

Karina Odinaev

Founder, COO & Chief IP Officer, Cortica

Bio:

Karina brings with her an extensive background in computational neuroscience and brain research. Karina pursued her award-winning research in the area of non-linear dynamical computational systems at the Technion, Israel Institute of Technology, under the direction of Prof. Josh Zeevi.

Karina served as COO and VP Product of Cortica for 8 years. Karina has vast experience in product characterization and development for high-dimensional, high-volume data processing systems. Prior to founding CORTICA, Karina was COO and VP Product of LCB Ltd., developing high-end systems for real-time voice recognition.

As Chief IP Officer, Karina’s accomplishments have helped Cortica generate an IP portfolio of over 200 patents and inventions and maintain its position as a top patent holder in Artificial Intelligence.

Karina served in an elite intelligence unit in the IDF, leading a data production and analysis team. Karina is part of a forum dealing with Israeli strategic homeland security issues.

Karina was featured in Israel's "Forty under 40" list in 2017, and was named one of the "top 20 leading entrepreneurs" in 2018.

Title:

Does Deep Learning Pave the Way to Full Autonomy?

Abstract:

The road to fully autonomous vehicles has been filled with challenges. When will cars drive themselves? What will it take from a technological standpoint to get us there? During this presentation, we will discuss the industry and technological challenges surrounding self-driving vehicles. We will illuminate how these hurdles can be overcome by moving from a deep learning model to a completely new paradigm of machine learning technology: Autonomous AI.

Dolev Pomeranz

Chief Architect and Head of Research, Trax

Bio:

Dolev Pomeranz is Chief Architect and Head of Research at Trax, a startup in the retail industry aiming to digitize the physical world of retail. There he works on both algorithmic and engineering challenges. He holds an M.Sc. from Ben-Gurion University; his thesis was in the field of computational jigsaw puzzle solving.

Title:

Retail Innovation with Augmented Reality in Reality

Abstract:

Augmented Reality is the combination of Computer Vision and Computer Graphics. As the technology matures and becomes commoditized, new innovation can emerge. In this talk, we will share our story of Augmented Reality @ Trax: how we use it for multi-perspective geometry, indoor mapping and navigation, and even rethinking the shopping experience.

Yael Pritch Knaan

Google Research, Google

Bio:

Yael Pritch Knaan received a PhD in Computer Science from the Hebrew University of Jerusalem and completed her postdoc at Disney Research Zurich. Her research is in the area of computational photography for videos and images. She co-founded two startup companies: one in panoramic stereo imaging (HumanEyes) and another in summarization of surveillance video (BriefCam). She joined Google X in 2013 and is now part of Google AI/Perception, where she leads a research team developing computational photography and machine learning technologies for Google mobile cameras and other Google products.

Title:

Computational Photography on Google's Smartphones

Abstract:

Mobile photography has been transformed by software. While sensors and lens design have improved over time, the mobile phone industry relies increasingly on software to mitigate physical limits and the constraints imposed by industrial design. In this talk, I'll present the technology behind two recent projects we’ve developed for Google Pixel Phones: Synthetic Depth-of-Field with a Single-Camera (also known as Portrait Mode) and key algorithms for the recently released Night Sight mode.

Elad Richardson

Researcher, Defence Community

Bio:

Elad Richardson is a Computer Vision enthusiast, focusing on the application of Deep Learning methods for a variety of problems.

Elad completed his M.Sc in Computer Science under the supervision of Prof. Ron Kimmel at the GIP Lab, Technion. His research focused on the applications of neural networks for learning 3D facial reconstructions and was presented at various international conferences. Currently, Elad is a researcher at the Defence Community.

Title:

You Only Scale Once - Efficient Text Detection using Adaptive Scaling

Abstract:

Text detection and recognition systems have gained a significant amount of attention in recent years. Current state-of-the-art text detection algorithms tackle challenging text instances in natural images. In particular, the problem of detecting multi-scale text in a single image still presents a challenge. A common paradigm for dealing with that challenge is simply to re-run a single-scale text detection algorithm on different rescaled versions of the original image. While this approach usually achieves a boost in results, it is wasteful and significantly increases runtime.

In our work, we present an approach that bypasses the need to re-run the same detection algorithm on multiple scales. We show that using a simple plug-and-play change in the architecture, we are able to transform a text segmentation Convolutional Neural Network to also detect text scales. Knowing the text scales allows us to adaptively re-scale text regions, and aggregate them into a compact image, which enables our network to detect the smaller text using only one additional pass. We present some qualitative and quantitative results on the ICDAR benchmark, showing that our approach offers a good trade-off between runtime and accuracy.
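The adaptive-scaling idea can be illustrated with a small sketch (all names are hypothetical and the logic is heavily simplified: the real network predicts text scales densely, whereas here each region arrives with a single predicted height):

```python
TARGET_HEIGHT = 32  # canonical text height the detector is assumed to handle best

def rescale_factor(predicted_height, target=TARGET_HEIGHT):
    # Factor that brings a detected text region to the canonical height.
    return target / float(predicted_height)

def rescale(region, factor):
    # Nearest-neighbour resize of a 2-D list of pixels.
    h, w = len(region), len(region[0])
    nh, nw = max(1, round(h * factor)), max(1, round(w * factor))
    return [[region[min(int(y / factor), h - 1)][min(int(x / factor), w - 1)]
             for x in range(nw)] for y in range(nh)]

def pack_regions(regions, pad=0):
    # Stack rescaled regions into one compact image so the detector
    # needs only a single additional pass (naive vertical packing).
    widest = max(len(r[0]) for r in regions)
    return [row + [pad] * (widest - len(row)) for r in regions for row in r]
```

With the small text brought to the canonical height and packed into one compact image, a single extra forward pass replaces the full multi-scale pyramid.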

Ortal Senouf

Researcher at VISTA Lab, Technion & MaxQ-AI

Bio:

Ms. Ortal Senouf is a recent MSc graduate of the Department of Electrical Engineering at the Technion – Israel Institute of Technology. During her master's studies she divided her time between her research at the VISTA lab of the Department of Computer Science at the Technion and the algorithm research team of MaxQ-AI, a medical AI start-up she joined as its first employee soon after completing her B.Sc. in 2013 at the Department of Bio-Medical Engineering. Her research interests include medical image analysis and acquisition, machine learning, and computer vision.

Title:

Learning Beamforming in Ultrasound Imaging

Abstract:

Viewing ultrasound imaging (US), or any other medical imaging modality for that matter, as an inverse problem in which a latent image is reconstructed from a set of measurements, current research focuses mostly on learning the inverse operator that produces an image from the measurements. The scope of our work differs sharply in that we propose to learn the parameters of the forward model, specifically the transmitted beam patterns (Tx), together with the receive beamforming (Rx). We demonstrate a significant improvement in image quality compared to the standard patterns used in fast US acquisition settings.
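For context, the classical receive beamformer that the learned parameters replace is delay-and-sum; a minimal sketch follows (fixed, hand-set delays and apodization weights here, whereas in the work above the Tx patterns and Rx parameters are optimized end-to-end):

```python
def delay_and_sum(channels, delays, weights):
    # Classical receive beamforming: delay each transducer channel so the
    # echoes align, weight it (apodization), and sum across channels.
    n = len(channels[0])
    out = [0.0] * n
    for ch, d, w in zip(channels, delays, weights):
        for t in range(n):
            if 0 <= t - d < n:
                out[t] += w * ch[t - d]
    return out
```

When the per-channel delays match the echo geometry, the aligned contributions add coherently while noise does not.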

Stav Shapiro

EE MSc Student, Technion

Bio:

Stav Shapiro is a CTO at an AI research branch in the Defense Community. He double majored in Physics and Electrical Engineering at the Technion, and is currently completing his thesis under the supervision of Prof. Michael Elad. His hobbies include playing video games, reading sci-fi novels, and eating his wife Maya's amazing food.

Title:

Improving Patch-Matching using Order-Preserving Deep-Learned Context-Features

Abstract:

Patch matching (PM) is a key ingredient in many image processing and computer vision applications. Learning-based approaches for patch matching have been shown to be successful, but were tailored towards specific tasks. A more general approach for improving the PM engine was recently introduced by Romano et al., showing the potential improvement of using their modified similarity measure on several PM-based tasks. In our research, we investigate a deep-learning based strategy that preserves the order between distances, and show that the order-preserving approach improves upon previous work by a significant margin.

Asi Shefer

Lead Data Scientist, LogMeIn

Bio:

Asi Shefer is a lead data scientist at LogMeIn's AI research center. His research focuses on natural language processing and computer vision, with a special interest in multimodal frameworks where different modalities represent similar objects. Asi holds an M.Sc. from Ben-Gurion University.

Title:

Cross Domain Normalization: Natural Language in the Visual World

Abstract:

Multi-domain problems like grounding text in the visual world are crucial for many real-world applications, yet remain mostly unsolved.

We examine the effects of combining visual and linguistic representations, revealing their fundamentally different nature, which yields an imbalanced co-adaptation.

We introduce Cross Domain Normalization (CDN), which dramatically stabilizes learning, reduces overfitting, and speeds up training (up to 19x faster than Batch Normalization) through manipulation of cross-domain statistics. With CDN, our extremely simple model significantly outperforms today's state-of-the-art models. The insights gained by investigating the linguistic and visual co-adapted parameters can be utilized for other multi-domain tasks and for co-adaptation in general.
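The underlying intuition, balancing each domain's feature statistics before the two are fused, can be sketched roughly as follows (illustrative names and a drastically simplified, non-learned version of the normalization):

```python
from statistics import mean, pstdev

def domain_normalize(features, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize one domain's features to zero mean / unit variance, then
    # apply a domain-specific scale (gamma) and shift (beta).
    mu, sigma = mean(features), pstdev(features)
    return [gamma * (x - mu) / (sigma + eps) + beta for x in features]

def cross_domain_normalize(visual, textual):
    # Bring both domains to comparable statistics so that neither
    # dominates the co-adaptation when their features are combined.
    return domain_normalize(visual), domain_normalize(textual)
```

After normalization, visual features on the order of tens and linguistic features on the order of tenths land on the same scale before fusion.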

Prof. Shmuel Peleg

The Hebrew University & Briefcam

Bio:

Shmuel Peleg has been a Professor at the Hebrew University since 1981, and is a co-founder and Chief Scientist of Briefcam. He received his Ph.D. from the University of Maryland in 1979. Shmuel has published over 150 papers in computer vision and holds 25 patents. He served as general co-chair of ICPR 1994, CVPR 2011, ICCP 2013, and CVPR 2018. His technologies are used by several companies, and his latest company, Briefcam, was acquired by Canon in 2018.

Title:

The Benefits of Combining Sight and Sound

Abstract:

In traditional computer vision research, we were careful to strip and ignore the soundtrack. In the same manner, audio analysis traditionally ignored the visual information available in a video.
I will describe how better scene understanding can be gained by using both sight and sound in video.
One example will show how speech can be enhanced using a video showing the face of the speaker.
Video surveillance cases will also be shown where sound can enhance the understanding of scene activity.
Wide deployment of joint sight and sound analysis is limited by wiretapping and eavesdropping laws, which prevent most surveillance and many wearable cameras from recording audio. Some possible approaches to overcome these restrictions will be discussed.

Assaf Shocher

Ph.D. Candidate, Weizmann Institute of Science

Bio:

Assaf Shocher is a Ph.D. candidate at the Weizmann Institute of Science, in the Department of Computer Science and Applied Mathematics, under the supervision of Prof. Michal Irani. His research focuses on Deep-Learning and Computer Vision, especially on Unsupervised Methods for Neural Networks. 
He received his MSc from the Weizmann Institute, with the Dean's Prize for outstanding students, and a B.Sc. in Physics and in Electrical Engineering from Ben-Gurion University. Assaf has co-founded a fintech startup and worked as a machine learning team leader and as a data scientist in several startups. Currently, Assaf is also a lecturer at the Deep Learning Academy TLV, teaching Machine Learning and Deep Learning courses.

Title:

Deep Internal Learning

Abstract:

Deep Learning has always been divided into two phases: training and inference. Deep networks are mostly used with large data-sets, under both supervised (classification, regression, etc.) and unsupervised (autoencoders, GANs) regimes. Such networks are only applicable to the type of data they were trained for and do not exploit the internal statistics of a single datum.
We introduce Deep Internal Learning: we train a signal-specific network, at test time, on the test input only, in an unsupervised manner (no labels or ground truth). In this regime, training is part of the inference; no additional data or prior training is involved. This is possible because one single instance (be it image, video, or audio) actually contains a lot of data when its internal statistics are exploited. In a series of papers from the last year, which will be reviewed throughout the talk, I will demonstrate how we applied this framework to various challenges: super-resolution, segmentation, dehazing, transparency separation, and watermark removal. I will also show how this approach can be incorporated into Generative Adversarial Networks by training a GAN on a single image for the challenge of retargeting.
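The key trick, extracting a training set from the test input itself, can be sketched for the super-resolution case (hypothetical helper names; the image here is a plain 2-D list):

```python
def downscale(img, s=2):
    # Average-pool a 2-D image by factor s, discarding ragged borders.
    h, w = len(img) // s * s, len(img[0]) // s * s
    return [[sum(img[y * s + dy][x * s + dx]
                 for dy in range(s) for dx in range(s)) / (s * s)
             for x in range(w // s)] for y in range(h // s)]

def internal_training_pairs(test_img, scales=(2, 3)):
    # The single test image plays the role of the high-resolution ground
    # truth; its downscaled copies are the network's inputs. A small
    # signal-specific network is then trained on these pairs at test time.
    return [(downscale(test_img, s), test_img) for s in scales]
```

No external dataset appears anywhere: every training example is derived from the one input the network is about to process.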

Dr. Joseph Shtok

Research Staff Member, Computer Vision and Augmented Reality Team, IBM Research AI

Bio:

Joseph Shtok holds an M.Sc. degree in Mathematics and a Ph.D. degree in Computer Science from the Technion – Israel Institute of Technology. His areas of expertise include machine learning, computer vision with a focus on object recognition, image processing, medical imaging, and depth sensors. Joseph is one of the leading research scientists of the IBM Research AI team, specializing in object recognition and 3D algorithms. His research papers have been published in leading computer vision and signal processing conferences such as CVPR, NIPS, ISBI, and ICASSP.

Title:

Few-shot Learning – State of the Art

Abstract:

Few-shot learning is a rapidly evolving field in computer vision, where the standard tasks of classification and detection are addressed in a challenging scenario with only few examples per class available for training. Using the approaches of meta-learning, metric learning, synthesis (hallucination) and data augmentation, substantial progress has been made in this field during the recent years. We will provide an overview of the state of the art in few-shot learning, highlighting and describing the main methods and results in the aforementioned directions.
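As a concrete taste of the metric-learning direction, classification by nearest class prototype (in the spirit of prototypical networks) can be sketched as follows, with embeddings as plain lists and purely illustrative names:

```python
def prototypes(support):
    # `support` maps each class label to its few example embeddings;
    # the prototype is the mean embedding of the class.
    return {c: [sum(dim) / len(vecs) for dim in zip(*vecs)]
            for c, vecs in support.items()}

def classify(query, protos):
    # Assign the query embedding to the nearest prototype (squared L2).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda c: dist(query, protos[c]))
```

In the full method the embeddings come from a network meta-trained so that this nearest-prototype rule generalizes from only a handful of examples per class.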

Guy Tsafrir

VP R&D, Cognata

Bio:

Guy Tsafrir is the VP R&D of Cognata and has a strong track record in designing complex multidisciplinary systems.

Before joining Cognata, he held the position of digital health team leader at the Samsung Strategy and Innovation Center in Israel.

During his time at GE Healthcare, he served in various positions including software team leader, control & algorithms team leader, software & system manager, and R&D manager. Before joining GE Healthcare, he developed software and algorithms at Versamed and worked as a software engineer at Verint. Guy holds a BSc in Physics from Tel Aviv University.

Title:

Pushing the Boundaries - Simulation vs. the Real World

Abstract:

The greatest challenge regarding simulation is finding the metric that compares it to real life.

In this session, we will present a mathematical metric that defines this relation, show how a proper simulation can be constructed based on deep learning techniques, and give live examples of the Cognata simulation platform.

Dr. Aya Soffer

Vice President, AI Tech, IBM Research AI

Bio:

Dr. Aya Soffer is VP of AI Tech for the IBM Research AI organization, whose mission is to create world-class foundational AI technologies for business while advancing state-of-the-art AI capabilities. Dr. Soffer sets the strategy, works with IBM scientists around the world to help shape their ideas into new AI technology, and works with IBM's product groups and customers to drive Research innovation into the market. In her 19 years at IBM, Dr. Soffer has led several strategic initiatives that grew into successful IBM products and solutions. Her team worked on the original Watson system, and more recently on Project Debater, the first computer to engage in a full debate with humans.

Title:

Video Comprehension and the Challenges it Poses

Abstract:

As video becomes the most popular form of communication and information sharing, automatic video comprehension is becoming a key requirement in many use cases. Fully understanding video poses many challenges that are being addressed by a variety of AI techniques. In this talk, I will describe some of these challenges and present several ongoing projects at IBM Research AI in the space of video comprehension. These include segmentation of video into semantic scenes, recognizing objects and human actions and activities, identifying highlights in videos, and few-shot learning for object recognition. I will also present the Moments in Time public data set created by IBM Research AI, with its millions of video clips, to support the research and development of state-of-the-art AI algorithms for the comprehension of subtle actions and activities in videos.

Shani Toledano

Founder & CEO, HT BioImaging

Bio:

Title:

A Novel Image Modality for the Early Detection and Diagnosis of Cancer

Abstract:

Prof. Naftali Tishby

The Hebrew University of Jerusalem

Bio:

Naftali Tishby is a professor of Computer Science and Computational Neuroscience at the Hebrew University. He is one of the founders of machine learning research in Israel. His research is at the interfaces between physics, biology, and computer science, and he is known for his integration of information theory, control, and learning theory. Among his best-known contributions are the information bottleneck method, information-constrained reinforcement learning, and the information theory of deep neural networks.

Title:

The Information Theory of Deep Learning

Abstract:

While a comprehensive theory of deep learning is still to be developed, one of the most interesting existing theoretical frameworks is the information bottleneck theory of deep neural networks. In this framework we consider the information flow from the input layer through the layers, which form a cascade of filters of the information that is irrelevant to the desired label. The theory explains how stochastic gradient descent can achieve optimal internal representations, layer by layer, and what each layer represents. Moreover, it explains the computational benefits of the hidden layers and the specific information encoded by the weights of each layer in the deep network.

Daniel Urieli

Senior Research Scientist, GM Research

Bio:

Daniel Urieli is a senior research scientist at GM Research, applying reinforcement learning to challenging problems in the world of transportation. Daniel received his Ph.D. in artificial intelligence from The University of Texas at Austin, where he was advised by Prof. Peter Stone. During his Ph.D. studies Daniel won several awards, including: 1st place in the international RoboCup competition, 1st place in the international Power Trading Agent Competition, the Best Contribution Award at the NIPS workshop on Machine Learning for Sustainability, and an NSF IGERT Fellowship. Daniel received his M.Sc. and B.Sc. in computer science and mathematics from Tel-Aviv University.

Title:

Reinforcement Learning: from Foundations to State-of-the-Art

Abstract:

Reinforcement learning (RL) is a branch of machine learning concerned with using experience gained through interacting with the world, together with evaluative feedback, to improve a system's ability to make behavioral decisions [Littman, Nature 2015]. In recent years RL has achieved some major successes, including super-human Go playing, execution of challenging robotic manipulation tasks, super-human computer game playing, and autonomous helicopter flight. This tutorial will start from RL's foundations, continue with some of RL's core techniques, and cover some of RL's recent successes. The tutorial assumes no prior background in RL; a general CS/math background will suffice.
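One of the core techniques such a tutorial builds on is tabular Q-learning, which fits in a few lines (an illustrative sketch, not any particular system; `step` is a caller-supplied environment returning `(next_state, reward, done)`):

```python
import random

def q_learning(n_states, n_actions, step, episodes=200,
               alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    # Learn a table of action values Q[s][a] from sampled transitions,
    # acting epsilon-greedily and bootstrapping from the next state.
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (rng.randrange(n_actions) if rng.random() < eps
                 else max(range(n_actions), key=Q[s].__getitem__))
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

On a toy chain environment, the learned values quickly come to prefer the actions that move toward the reward, which is the evaluative-feedback loop the abstract describes.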

Dr. Refael Vivanti

Researcher, Rafael

Bio:

Dr. Refael Vivanti is a researcher at the Computer Vision for Autonomous Systems department at Rafael Advanced Defense Systems Ltd. Dr. Vivanti holds a Ph.D. degree in medical computer vision from the Hebrew University in Jerusalem, Israel, supervised by Prof. Leo Joskowicz. His research areas are deep reinforcement learning, deep learning for autonomous systems and SLAM. 

Title:

Self Learning Self Driving Car

Abstract:

We succeeded in training an agent to drive in a specific, previously unvisited area using only a realistic 3D model of the area, built from aerial images alone. The training system is end-to-end: the inputs are realistically rendered images from the 3D model from the agent's perspective, and the outputs are driving commands to the moving platform. During training, the system received only sparse, weak supervision from collision detections with the 3D model. Our main novelty is the separate treatment of two linked challenges: navigation and obstacle avoidance, which both affect the platform's location and pose but have different reward distributions and are thus hard to learn simultaneously. The two are approached using different methods and combined in the final agent behavior. We hope to use the trained agent to drive a real vehicle inside the modeled area.

Liat Zakay

CEO & Founder, Donde Search

Bio:

Liat Zakay is the CEO and founder of Donde Search, whose award-winning AI computer vision and NLP technology mimics the way people think about products, helping retailers read their customers' minds. Liat holds a BSc in Computer Science and Economics from Ben-Gurion University of the Negev. Liat is a tech entrepreneur: she led Magshimim, a cyber security training program in Israel's peripheral areas that became a national success, and was head of a Cyber & Computer Networks course. Prior to her academic studies, Liat was a team leader and officer in an elite technological unit of Israeli Intelligence, where she managed projects using cutting-edge technologies with high significance and impact.

Title:

Deep Learning in E-Commerce: Problems and Approaches to Solve Them

Abstract:

Donde Search uses Computer Vision and Natural Language Processing to analyse product pages and extract meaningful information to improve navigation, merchandising, personalization, and search across e-commerce platforms.

In this talk we will present some of Donde's solutions and discuss their underlying technologies. We will show how Donde's technology transforms a challenging, and sometimes frustrating, search experience into a natural and engaging visual process that allows humans to navigate their way through a database containing hundreds of thousands of items. We will discuss the gap between state-of-the-art deep-learning algorithms and real working solutions for enterprises, including some of the most common problems in turning working algorithms into a product, such as handling structured data, combining text and images, and using unsupervised methods for data augmentation and for better training and learning.