March 18, 2019 Pavilion 10, EXPO Tel Aviv
Professor, The Hebrew University of Jerusalem
Dorit Aharonov is considered a leader in the field of quantum computation, where she has made major contributions in a variety of areas including quantum error correction, algorithms, cryptography, and verification.
Many of her works can be viewed as creating a bridge between physics and computer science, attempting to study fundamental physics questions using computation language.
She was educated at the Hebrew University (BSc in Math and Physics, Ph.D. in Computer Science and Physics) and then continued to a postdoc at IAS Princeton (Mathematics) and UC Berkeley (Computer Science).
In 2001 Aharonov joined the faculty of the computer science department of the Hebrew University of Jerusalem.
In 2005 she was featured by the journal Nature as one of four theoreticians making waves in their field; in 2006 she won the Krill Prize, and in 2014 she was awarded the prestigious Michael Bruno Award.
The Second Quantum Revolution: Towards Quantum Computers
What are quantum computers, and why should I be interested in them?
Will they have machine learning applications, soon, or ever?
And what is entanglement, anyway? I will try to partially answer these questions and raise a few more in this very introductory talk, which will assume almost nothing except for some lack of common sense.
Director of AI, Zebra Medical Vision
Ayelet Akselrod-Ballin is the director of AI at Zebra Medical Vision, where she leads an excellent team of CV-ML researchers and data scientists. Her work focuses on developing novel technologies for computer vision, machine learning, deep learning, and biomedical image analysis. Ayelet did her postdoctoral research as a fellow in the Computational Radiology Laboratory at Harvard Medical School, Children’s Hospital (Boston), and she holds a Ph.D. in Applied Mathematics and Computer Science from the Weizmann Institute of Science. Prior to joining Zebra Medical Vision, Ayelet led medical imaging research technology at IBM Research and led the Computer Vision & Algorithms team at MOD.
Opportunities in Radiology with AI
The advances in deep learning algorithms, together with the continuous increase in digital information volume, provide a unique opportunity in the healthcare domain. Recent deep learning studies have established remarkable performance in complex diagnostic tasks. Nevertheless, challenges remain, including the need for interpretability to ease human-machine collaboration, and the integration of imaging data with other data types such as text, which is complex, noisy, heterogeneous, and sometimes missing. In this talk we discuss several key applications spanning neuro CT, chest X-ray, and spinal analysis with CT, and demonstrate results at scale on large multi-domain datasets.
Senior Software Engineer, Intel
Real-Time Deep Learning on Video Streams
Deep learning has recently become a prevalent technology for analyzing video data. However, the increasing resolution and frame rates of videos make real-time analysis a remarkably challenging task.
We present an overview of a novel architecture based on Redis, Docker, and TensorFlow that enables real-time analysis of high-resolution streaming video. The solution can serve advanced deep learning algorithms at subsecond rates and appears fully synchronous to the user despite containing an asynchronous backend. The talk offers a demo using visual inspection and shares results that highlight the solution’s applicability to real-time neural network processing of videos. The approach is generalizable and can be applied to diverse domains that require video analytics.
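The producer/worker pattern behind such a pipeline can be sketched with Python's standard library, with an in-process queue standing in for Redis and a plain function standing in for a TensorFlow model call (both are simplifications of the actual architecture described above):

```python
import queue
import threading

def run_pipeline(frames, infer, num_workers=2):
    """Asynchronous frame-processing sketch: the main thread enqueues
    frames, worker threads run inference, and results are collected by
    frame id so the caller sees an ordered, synchronous-looking output."""
    in_q = queue.Queue()
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            item = in_q.get()
            if item is None:          # poison pill: shut this worker down
                in_q.task_done()
                return
            frame_id, frame = item
            out = infer(frame)        # stands in for a model forward pass
            with lock:
                results[frame_id] = out
            in_q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for i, f in enumerate(frames):
        in_q.put((i, f))
    for _ in threads:
        in_q.put(None)
    for t in threads:
        t.join()
    # Re-order results so the asynchronous backend appears synchronous
    return [results[i] for i in range(len(frames))]
```

In the real system the queue would be a Redis stream shared between Docker containers, but the ordering trick that hides the asynchronous backend is the same.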
PhD Candidate, Tel Aviv University
Generative Adversarial Networks for Image to Image Translation
Director of Research, Lightricks
Sky Replacement in Video
Who hasn't gotten an image or video with some dull skies taking up most of the space?
Professionals may edit these videos using advanced, time-consuming tools to replace the sky with a more expressive or imaginative one.
In this work, we propose an algorithm for automatic replacement of the sky region in a video with a different sky.
The method is fast, achieving close to real-time performance on mobile devices, and can be fully automatic.
Joint work with Tavi Halperin, Harel Cain and Michael Werman. To be presented at Eurographics 2019.
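Once a sky mask is available, the final compositing step is standard alpha blending. A minimal sketch, assuming the per-pixel sky alpha is already produced by a segmentation step (the paper's actual pipeline is more involved):

```python
import numpy as np

def replace_sky(frame, new_sky, sky_alpha):
    """Composite a new sky into a frame by alpha blending.

    frame, new_sky: float arrays of shape (H, W, 3) with values in [0, 1]
    sky_alpha:      float array of shape (H, W), 1.0 where sky is detected
                    (assumed given; in practice it comes from segmentation)
    """
    alpha = sky_alpha[..., None]              # broadcast over RGB channels
    return alpha * new_sky + (1.0 - alpha) * frame
```

A soft (fractional) alpha near the skyline avoids hard seams between the original frame and the replacement sky.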
Director of AI Research, Bar-Ilan University & NVIDIA
Gal Chechik is an Assoc. Prof. at the Gonda Brain Institute at Bar-Ilan University and a director of AI research at NVIDIA. His research spans learning in brains and machines, including large-scale learning algorithms for machine perception and the analysis of representations and changes in mammalian brains.
In 2018 Gal joined NVIDIA as a director of AI research, leading NVIDIA's research in Israel. Prior to that, Gal was a staff research scientist at Google Brain and Google Research, developing large-scale algorithms for machine perception used by millions daily. Gal earned his Ph.D. in 2004 from the Hebrew University, developing computational methods to study neural coding, and did his postdoctoral training at Stanford's CS department. Since 2009, he has headed the computational neurobiology lab at the Gonda center of Bar-Ilan University. Gal has authored ~75 refereed publications, including publications in Nature Biotechnology, Cell, and PNAS.
Understanding Images by Learning Operable Representations
CTO, Quantum Machines
I completed my PhD in Physics at the Weizmann Institute in professor Moty Heiblum's group, where I investigated quantum electronics devices and topological quantum states.
A year ago, together with Itamar Sivan and Nissim Ofek, I co-founded Quantum Machines (QM), the first Israeli startup in quantum computing. At QM we develop control and operation systems for quantum computers. With a multidisciplinary team of highly motivated physicists and engineers, all passionate about solving the challenges of quantum control, we hope to help make the dream of a large-scale quantum computer a reality.
Hello Quantum World
Staff Research Scientist, Global Advanced Technology, CT/AMI, Philips Healthcare
Moti Freiman is a staff research scientist at Philips Healthcare, where he develops advanced algorithms that improve the capacity of medical imaging devices to provide clinically meaningful information, leveraging machine learning, computer vision, and image processing.
Prior to Philips, Dr. Freiman was an Instructor in Radiology at Harvard Medical School where he developed advanced algorithms for quantitative analysis of diffusion-weighted MRI data.
Dr. Freiman is the recipient of an NIH R01 research grant and the 2012 Crohn's and Colitis Foundation of America research fellow award. He is the author and co-author of more than 40 journal and full-length conference papers and holds several patents and patent applications.
Unsupervised Medical Abnormality Detection through Mixed Structure Regularization (MSR) in Deep Sparse Autoencoders
Deep sparse auto-encoders with mixed structure regularization (MSR), in addition to an explicit sparsity regularization term and stochastic corruption of the input data with Gaussian noise, have the potential to improve unsupervised abnormality detection. Unsupervised abnormality detection that identifies outliers using deep sparse auto-encoders is a very appealing approach for medical computer-aided detection systems, as it requires only healthy data for training rather than expert-annotated abnormalities.
In the task of detecting coronary artery disease from Coronary Computed Tomography Angiography (CCTA), our results suggest that MSR has the potential to improve overall performance by 20-30% compared to deep sparse and denoising auto-encoders.
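The two ingredients the abstract names explicitly, Gaussian input corruption and an L1 sparsity penalty on the hidden code, combine with reconstruction error as sketched below; the MSR term itself is not specified in the abstract, so it is omitted here. The test-time abnormality score is simply the reconstruction error, since the model is trained on healthy data only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_denoising_loss(x, encode, decode,
                          sparsity_weight=1e-3, noise_std=0.1):
    """Training loss sketch for a denoising sparse auto-encoder:
    reconstruct the clean input from a Gaussian-corrupted copy, plus
    an explicit L1 sparsity penalty on the hidden activations."""
    x_noisy = x + rng.normal(0.0, noise_std, size=x.shape)
    h = encode(x_noisy)
    x_hat = decode(h)
    reconstruction = np.mean((x - x_hat) ** 2)
    sparsity = sparsity_weight * np.mean(np.abs(h))
    return reconstruction + sparsity

def abnormality_score(x, encode, decode):
    """Test-time outlier score: high reconstruction error flags an
    abnormality, because only healthy data was seen during training."""
    x_hat = decode(encode(x))
    return float(np.mean((x - x_hat) ** 2))
```

Here `encode` and `decode` are placeholders for the trained network halves; the weighting of the terms is illustrative.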
Director, Alibaba Israel Machine Vision Lab.
AutoML - Towards "CV as a Service"
CEO & Co-Founder, Imagry
Adham Ghazali is the CEO of Imagry. He spent the last 10 years working on various machine learning problems, including large-scale computer vision, brain-computer interfacing, and bio-inspired facial recognition. He is interested in the intersection between biology and computer science. At his current post, he is responsible for strategic R&D and business development.
Prior to cofounding Imagry, Adham was a brain researcher focusing on the study of the visual system in infants.
Autonomous Driving in Unknown Areas
Co-Founder and CTO, DeePathology.ai
Jacob is Co-Founder and CTO of DeePathology.ai.
DeePathology.ai develops digital pathology products for diagnostics and for pharma research.
Before DeePathology, Jacob was the leader of the deep learning group at SagivTech.
Jacob holds a BSc and MSc in Electrical Engineering from Tel Aviv University.
Active Learning for Fast and Efficient Annotation of Medical Images
Common problems in developing AI solutions for the medical field are highly unbalanced datasets on the one hand and limited annotation resources on the other.
The use of Active Learning can dramatically help with both issues.
The task of Cell Detection is very important in digital pathology. For example, analyzing the quantity and density of immune cells can provide important indications on the progress of cancer.
This is a tedious task when manually done by pathologists and thus, automating this process is desirable.
Automating cell detection requires annotating large amounts of data, which is usually very unbalanced.
The DeePathology.ai Cell Detection Studio is a do-it-yourself tool that lets pathologists train deep learning cell detection algorithms on their own data.
Using this tool, pathologists can create deep learning cell detection solutions easily and quickly.
In the talk we will use the example of the DeePathology.ai Cell Detection Studio to demonstrate how Active Learning can be used for medical imaging annotation.
We will also present our approach for using active learning with unbalanced datasets.
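One simple way to combine uncertainty sampling with class-imbalance awareness (a sketch of the general idea, not necessarily DeePathology.ai's specific method) is to boost each sample's uncertainty score by the rarity of its predicted class:

```python
import numpy as np

def select_for_annotation(probs, pseudo_labels, label_counts, k):
    """Pick the k unlabeled samples whose annotation should help most.

    probs:         (N, C) predicted class probabilities on the unlabeled pool
    pseudo_labels: (N,) argmax class per sample
    label_counts:  (C,) how many annotated examples each class already has
    k:             number of samples to send to the annotator

    Score = predictive entropy (uncertainty) weighted by the inverse
    frequency of the predicted class, so rare classes are prioritised.
    """
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    rarity = 1.0 / (label_counts[pseudo_labels] + 1.0)
    score = entropy * rarity
    return np.argsort(score)[::-1][:k]
```

After each annotation round, the model is retrained and `label_counts` updated, so the selection drifts toward whatever the model is still uncertain about.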
Ph.D. Student, Technion
Tzofnat Greenberg-Toledo is a Ph.D. student at the Andrew and Erna Viterbi Faculty of Electrical Engineering, Technion – Israel Institute of Technology. She received her B.Sc. degree from the Andrew and Erna Viterbi Faculty of Electrical Engineering at the Technion in 2015. From 2014 to 2016 she worked as a logic design engineer at Intel Corp. Since 2016 she has been a graduate student in Electrical Engineering at the Technion. Her current research focuses on computer architecture and accelerators for deep neural networks using memristors.
Accelerating DNN Applications with Emerging Memory Technologies
Deep Neural Networks (DNNs) are usually executed by commodity hardware, mostly FPGA and GPU platforms, and accelerators, such as Google's TPU. However, when executing DNN algorithms, the conventional von Neumann architectures, where the memory and computation are separated, pose significant limitations on performance and energy efficiency, as DNN algorithms are compute and memory intensive. Emerging memory technologies, known as memristors, enable in-place, highly parallel, and energy efficient analog multiply-accumulate operations. This is known as processing-near-memory (PNM). This talk will present the potential and opportunities of integrating memristors into DNN accelerators design.
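The analog multiply-accumulate at the heart of a memristive crossbar can be sketched in a few lines: weights are mapped to a limited set of stable conductance levels, inputs are applied as voltages, and each column current is a dot product by Ohm's and Kirchhoff's laws. This is an idealized simulation that ignores device non-idealities such as noise, drift, and wire resistance:

```python
import numpy as np

def crossbar_matvec(weights, voltages, g_min=1e-6, g_max=1e-4, levels=16):
    """Simulate an analog multiply-accumulate on a memristive crossbar.

    Weights are normalised to [0, 1] and quantised to the discrete
    conductance levels a memristor device can hold, then each column
    current is I = G^T V: the crossbar computes the matrix-vector
    product in place, in a single analog step.
    """
    w = np.asarray(weights, dtype=float)
    w_min, w_max = w.min(), w.max()
    w_norm = (w - w_min) / (w_max - w_min + 1e-12)
    w_quant = np.round(w_norm * (levels - 1)) / (levels - 1)
    conductance = g_min + w_quant * (g_max - g_min)
    currents = conductance.T @ np.asarray(voltages, dtype=float)
    return currents, conductance
```

The limited number of `levels` is one reason memristor-based accelerators pair naturally with quantization-aware DNN training.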
Ph.D. Student, Tel Aviv University
Applying CNNs on Triangular Meshes
Director of Deep Learning, Nexar
Ilan Kadar is the Director of Deep Learning at Nexar. Ilan leads the deep learning team and the effort to leverage Nexar's large-scale datasets of real-world driving environments for automotive safety applications. Prior to Nexar, Ilan led the image recognition group at Cortica and was responsible for building the company's machine vision technology. Ilan received his BSc, MSc, and PhD degrees in computer science from the Ben-Gurion University of the Negev, Israel, in 2006, 2008, and 2012 respectively (Summa Cum Laude). His research thesis focused on machine learning algorithms for scene recognition and image retrieval and was published in leading conferences and journals in the area of machine vision.
Continuous Deep Learning at the Edge
The robustness of end-to-end driving policy models depends on having access to the largest possible training dataset, exposing the true diversity of the 10 trillion miles that humans drive every year in the real world. However, current approaches are limited to models trained using homogeneous data from a small number of vehicles running in controlled environments or in simulation, which fail to perform adequately in real-world dangerous corner cases. Safe driving requires continuously resolving a long tail of those corner cases. The only possible way to train a robust driving policy model is therefore to continuously capture as many of these cases as possible. The capture of driving data is unfortunately constrained by the reduced compute capabilities of the devices running at the edge and the limited network connectivity to the cloud, making the task of building robust end-to-end driving policies very complex.
In this talk, I will give an overview of a network of connected devices deployed at the edge running deep learning models that continuously capture, select, and transfer to the cloud “interesting” monocular camera observations, vehicle motion, and driver actions. The collected data is used to train an end-to-end vehicle driving policy, which also guarantees that the information gain of the learned model is monotonically increasing, effectively becoming progressively more selective of the data captured by the edge devices as it walks down the tail of corner cases.
DL Team Lead, CVAR Group, IBM Research AI
Few-Shot Object X, or How Can We Train A DL Model with Only Few Examples
Site Leader and Vice President R&D, Medtronic
Laurence has over 20 years of experience in biomedical startups and corporates. She held several roles including algorithm engineer, product manager and VP R&D for startups. She joined General Electric as Site and Engineering Manager of the HCIT Herzlia site and was then promoted to General Manager of the Respiratory Value Segment (Versamed). She joined Given Imaging in July 2014. She holds a Ph.D. in Medical Physics from Tel Aviv University.
Laurence is part of the Board of MindUp, an IIA incubator for Digital Health. She is also part of the Directors Unit of the governmental companies and serves as a board member of Rotem, the governmental company of the Center for Nuclear Research in Dimona.
AI for Capsule Endoscopy – the Second Revolution of Endoscopy
Given Imaging revolutionized the GI world with the invention of capsule endoscopy 20 years ago. The capsule endoscope was the first device able to visualize the small bowel, and it has already improved the lives of more than 3 million patients worldwide. The PillCam is the state of the art in capsule endoscopy, holds the largest market share, and continues to bring breakthrough innovations.
Today Medtronic, which now owns the technology, continues to develop additional applications for small bowel, colon, and pan-enteric visualization, for a variety of diagnosis and monitoring tasks. These use the most advanced AI, vision, and machine learning algorithms to revolutionize this field once again and create the intelligent capsule endoscopy.
In our presentation, we will present some of the unique technical achievements and challenges. Our deep learning based pathology detectors achieve expert human-level performance with only a few hundred unique examples. Furthermore, this technology enables accurate localization in the GI tract without any need for additional inputs aside from the images.
Samah Khawaled graduated from the Technion's Viterbi Faculty of Electrical Engineering in 2017. She is currently a graduate student working towards her M.Sc. degree. Her research concerns modeling Natural Stochastic Textures (NST) that also incorporate structural information, and applying the model to the analysis and classification of images. Samah supervises image-processing-related projects and is a TA in various courses. Previously she worked at Intel and served as a tutor in the Landa (equal opportunities) project. Samah is the recipient of the Israeli Ministry of Science and Technology Fellowship for 2017-2019.
On the Interplay of Structure and Texture in Natural Images
Natural Stochastic Texture (NST) images exhibit self-similarity and Gaussianity, the two main properties characteristic of fractional Brownian motion (fBm) processes. We consider non-pure NST images that also contain structural information. The latter is characterized by profound local phase information, whereas the former is characterized by random spatial phase. In this meeting, we address primarily the fractal-based layer of the model and its implementation on the NST component. We also discuss applications where the approach can be used, with special emphasis on medical images. Examples of mammography and bone X-ray images are presented.
* Joint work with Prof. Yehoshua Y. Zeevi
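A standard way to synthesise an fBm-like stochastic texture, sketched below, is spectral shaping: filter white Gaussian noise so its power spectrum falls off as f^-(2H+2), the spectrum associated with 2D fBm of Hurst exponent H, while leaving the spatial phase random (the hallmark of NST):

```python
import numpy as np

def fbm_texture(size, hurst, seed=0):
    """Synthesise a 2D fBm-like texture by spectral synthesis.

    White Gaussian noise is shaped so |F(f)| ~ f^-(H+1), i.e. power
    ~ f^-(2H+2); the phase stays random, so the result is a pure
    stochastic texture with self-similar (fractal) statistics."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(size, size))
    spectrum = np.fft.fft2(noise)
    fx = np.fft.fftfreq(size)
    f = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
    f[0, 0] = 1.0                      # avoid division by zero at DC
    amplitude = f ** -(hurst + 1.0)
    amplitude[0, 0] = 0.0              # zero out DC for a zero-mean field
    texture = np.real(np.fft.ifft2(spectrum * amplitude))
    return (texture - texture.mean()) / (texture.std() + 1e-12)
```

Larger `hurst` values (closer to 1) give smoother, more correlated textures; smaller values give rougher ones, which is the single knob the fBm layer of such a model exposes.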
Principal Scientist, Roche Innovation Center Munich
Novel Predictive Technologies Supporting Personalized Healthcare and Early Decisions in Pharma Research
Algorithms Development Manager, VayaVision
Shmoolik Mangan leads algorithm development at VayaVision, a startup focused on algorithmic solutions for autonomous driving.
He holds a Ph.D. from the Weizmann Institute and has 27 years of experience and leadership in algorithms, physics, optics, and the development of multidisciplinary systems. He has developed metrology, inspection, detection, and classification products for the semiconductor and electronics manufacturing industries, and has worked at Applied Materials and Orbotech.
The Inherent Redundancy Advantage of Low-Level Sensor Fusion for Autonomous Vehicles
Lead Research and Development Engineer, Blink Technologies Inc.
Eye Tracking: Theory and Applications
The eyes play a vital role in the perception of our evolving surroundings as well as in communication with other human beings. Apart from behavioral and scientific research in medical diagnosis and psychological studies, estimating gaze direction is also important in mixed reality, AI assistant devices, automotive, human-computer and robot interaction, user authentication, and many other applications.
In this talk, I will give a short introduction to eye-tracking research, compare the geometric (model-based) and data-driven (appearance-based) methods, with emphasis on the advantages and disadvantages of deep learning in this domain, and present some interesting uses of this technology.
Senior Data Scientist, Microsoft Media AI Research Team
Multimodal Topics Inference from Video
PhD Student, Bar-Ilan University
Introduction to Face Swapping
Face swapping is one of the most important face synthesis problems. It has numerous practical applications such as hairstyle replacement, face spoofing, and data augmentation for machine learning. One of the most prominent face swapping methods is DeepFakes, which has recently attracted a lot of attention from media around the world by making face swapping more accessible to regular users. The rapid progress in face synthesis should raise concerns about its implications for society. What will happen when these methods become easily accessible and harder to distinguish from real images?
Founder, COO, Chief IP Officer, Cortica
Karina brings with her an extensive background in computational neuroscience and brain research. Karina pursued her award-winning research in the area of non-linear dynamical computational systems at the Technion, Israel Institute of Technology, under the direction of Prof. Josh Zeevi.
Karina served as COO and VP Product of Cortica for 8 years. Karina has vast experience in product characterization and development for high-dimensional, high-volume data processing systems. Prior to founding Cortica, Karina was COO and VP Product of LCB Ltd., developing high-end systems for real-time voice recognition.
As Chief IP Officer, Karina’s accomplishments have helped Cortica generate an IP portfolio of over 200 patents and inventions and maintain its position as a top patent holder in Artificial Intelligence.
Karina served in an elite intelligence unit in the IDF, leading a data production and analysis team. Karina is part of a forum dealing with Israeli strategic homeland security issues.
Does Deep Learning Pave the Way to Full Autonomy?
The road to fully autonomous vehicles has been filled with challenges. When will cars drive themselves? What will it take from a technological standpoint to get us there? During this presentation, we will discuss the industry and technological challenges surrounding self-driving vehicles. We will illuminate how these hurdles can be overcome by moving from a deep learning model into a completely new paradigm of machine learning technology: Autonomous AI.
Chief Architect and Head of Research, Trax
Dolev Pomeranz is Chief Architect and Head of Research at Trax, a startup in the retail industry aiming to digitize the physical world of retail. There he works on both algorithmic and engineering challenges. He holds an M.Sc. from Ben-Gurion University; his thesis was in the field of computational jigsaw-puzzle solving.
Retail Innovation with Augmented Reality in Reality
Google Research, Google
Yael Pritch Knaan received a PhD in Computer Science from the Hebrew University of Jerusalem and did her postdoc at Disney Research Zurich. Her research is in the area of computational photography for videos and images. She co-founded two startup companies: one in panoramic stereo imaging (HumanEyes) and another in summarization of surveillance video (BriefCam). She joined Google X in 2013 and is now part of Google AI/Perception, where she leads a research team developing computational photography and machine learning technologies for Google mobile cameras and other Google products.
Computational Photography on Google’s Smartphones
Mobile photography has been transformed by software. While sensors and lens design have improved over time, the mobile phone industry relies increasingly on software to mitigate physical limits and the constraints imposed by industrial design. In this talk, I'll present the technology behind two recent projects we’ve developed for Google Pixel Phones: Synthetic Depth-of-Field with a Single-Camera (also known as Portrait Mode) and key algorithms for the recently released Night Sight mode.
Researcher, Defence Community
Elad Richardson is a Computer Vision enthusiast, focusing on the application of Deep Learning methods for a variety of problems.
Elad completed his M.Sc in Computer Science under the supervision of Prof. Ron Kimmel at the GIP Lab, Technion. His research focused on the applications of neural networks for learning 3D facial reconstructions and was presented at various international conferences. Currently, Elad is a researcher at the Defence Community.
You Only Scale Once - Efficient Text Detection using Adaptive Scaling
Text detection and recognition systems have gained a significant amount of attention in recent years. Current state-of-the-art text detection algorithms tackle challenging text instances in natural images. In particular, the problem of detecting multi-scale text in a single image still presents a challenge. A common paradigm for dealing with that challenge is simply, given a single-scale text detection algorithm, to re-run that algorithm on different rescaled versions of the original image. While this approach usually boosts results, it is wasteful and significantly increases runtime.
In our work, we present an approach that bypasses the need to re-run the same detection algorithm on multiple scales. We show that using a simple plug-and-play change in the architecture, we are able to transform a text segmentation Convolutional Neural Network to also detect text scales. Knowing the text scales allows us to adaptively re-scale text regions, and aggregate them into a compact image, which enables our network to detect the smaller text using only one additional pass. We present some qualitative and quantitative results on the ICDAR benchmark, showing that our approach offers a good trade-off between runtime and accuracy.
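The core adaptive-scaling step, rescaling each region by the scale the network predicts instead of running a full image pyramid, can be sketched as follows. The `target_height` value and the nearest-neighbour resize are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def adaptive_rescale(region, predicted_text_height, target_height=32):
    """Rescale an image region so its text reaches a height the
    single-scale detector handles well, using the scale predicted by
    the network rather than trying every pyramid level."""
    zoom = target_height / float(predicted_text_height)
    h, w = region.shape[:2]
    new_h = max(1, int(round(h * zoom)))
    new_w = max(1, int(round(w * zoom)))
    # nearest-neighbour resize with pure NumPy index arithmetic
    rows = (np.arange(new_h) / zoom).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / zoom).astype(int).clip(0, w - 1)
    return region[rows[:, None], cols]
```

Rescaled regions are then packed into one compact image, so all the small text is detected with a single additional forward pass instead of one pass per pyramid level.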
Researcher at VISTA Lab, Technion & MaxQ-ai
Ms. Ortal Senouf is a recent MSc graduate of the Department of Electrical Engineering of the Technion – Israel Institute of Technology. During her master's studies she divided her time between her research at the VISTA Lab of the Department of Computer Science at the Technion and the algorithm research team of MaxQ-AI, a medical AI start-up she joined as its first employee soon after completing her B.Sc. in bio-medical engineering in 2013. Ms. Senouf's research interests include medical image analysis and acquisition, machine learning, and computer vision.
Learning Beamforming in Ultrasound Imaging
Viewing ultrasound (US) imaging, or any other medical imaging modality for that matter, as an inverse problem in which a latent image is reconstructed from a set of measurements, current research focuses mostly on learning the inverse operator that produces an image from the measurements. Our work differs sharply: we propose to learn the parameters of the forward model, specifically the transmitted beam patterns (Tx), together with the receive beamforming (Rx). We demonstrate a significant improvement in image quality compared to the patterns used in standard fast US acquisition settings.
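For context, the classic fixed Rx pipeline that learned beamforming replaces is delay-and-sum: align each transducer element's recorded trace by its geometric delay and sum coherently. A minimal sketch with integer sample delays (real systems interpolate fractional delays and apply apodization weights):

```python
import numpy as np

def delay_and_sum(element_signals, delays_samples):
    """Receive (Rx) delay-and-sum beamforming.

    element_signals: (E, T) array, one recorded trace per element
    delays_samples:  (E,) integer arrival delay per element, in samples

    Each trace is advanced by its delay so echoes from the focal point
    line up, then the traces are summed coherently.
    """
    E, T = element_signals.shape
    out = np.zeros(T)
    for e in range(E):
        d = int(delays_samples[e])
        if d >= 0:
            out[: T - d] += element_signals[e, d:]
        else:
            out[-d:] += element_signals[e, : T + d]
    return out
```

In the learned setting described above, both the Tx patterns that generate these traces and the Rx combination are optimized end-to-end instead of being fixed by geometry.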
EE M.Sc. Student, Technion
Stav Shapiro is a CTO at an AI research branch in the Defense Community. He double majored in Physics and Electrical Engineering at the Technion and is currently completing his thesis under the supervision of Prof. Michael Elad. His hobbies include playing video games, reading sci-fi novels, and eating the amazing food of his wife, Maya.
Improving Patch-Matching using Order-Preserving Deep-Learned Context-Features
Patch matching is a key ingredient in many image processing and computer vision applications. Learning-based approaches for patch matching have been shown to be successful but were tailored to specific tasks. A more general approach for improving the PM engine was recently introduced by Romano et al., showing the potential improvement of using their modified similarity measure on several PM-based tasks. In our research, we investigate a deep-learning-based strategy that preserves the order between distances, and we show that the order-preserving approach can improve upon previous work by a significant margin.
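The order-preserving property is easy to state concretely: any strictly increasing transform of the patch distances changes their values but not their ranking, so the nearest-neighbour match is unchanged. A sketch, with `feature` as a placeholder for learned context-features (the identity here, a deep embedding in the work above):

```python
import numpy as np

def nearest_patch(query, candidates, feature=lambda p: p):
    """Nearest-neighbour patch match under squared L2 distance in a
    (possibly learned) feature space. Because argmin is invariant to
    any strictly increasing transform of the distances, an
    order-preserving distance substitute returns the same match."""
    q = feature(query).ravel()
    dists = np.array([np.sum((feature(c).ravel() - q) ** 2)
                      for c in candidates])
    return int(np.argmin(dists)), dists
```

This is why an order-preserving learned distance can be dropped into an existing PM engine: every decision the engine makes depends only on distance rankings, which are left intact.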
Lead Data Scientist, LogMeIn
Cross Domain Normalization: Natural Language in the Visual World
Multi-domain problems such as grounding text in the visual world are crucial for many real-world applications, yet remain mostly unsolved.
We examine the effects of combining visual and linguistic representations, revealing their fundamentally different nature, which yields imbalanced co-adaptation.
We introduce Cross Domain Normalization (CDN), which dramatically stabilizes learning, reduces overfitting, and speeds up training (up to 19x faster than Batch Normalization) through manipulation of cross-domain statistics. With CDN, our extremely simple model significantly outperforms all of today's SOTA models. The insights gained by investigating the linguistic and visual co-adapted parameters can be utilized for other multi-domain tasks and for co-adaptation in general.
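The underlying idea of manipulating per-domain statistics can be sketched as normalizing each modality with its own batch statistics and rescaling it independently, so neither modality's magnitude dominates the joint representation. The exact parameterisation below is an assumption for illustration; the talk's CDN may differ in detail:

```python
import numpy as np

def cross_domain_normalize(visual, text, visual_scale=1.0, text_scale=1.0):
    """Normalise each domain's features with its *own* mean and std,
    then apply a per-domain scale. The two scales are the knobs that
    control the cross-domain statistics before the modalities are fused."""
    eps = 1e-6

    def norm(x, scale):
        x = np.asarray(x, dtype=float)
        mu = x.mean(axis=0, keepdims=True)
        sd = x.std(axis=0, keepdims=True)
        return scale * (x - mu) / (sd + eps)

    return norm(visual, visual_scale), norm(text, text_scale)
```

Contrast this with plain Batch Normalization applied to the concatenated features, which shares one set of statistics across domains and so cannot rebalance their relative magnitudes.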
The Benefits of Combining Sight and Sound
Ph.D. Candidate, Weizmann Institute of Science
Deep Internal Learning
Deep learning has always been divided into two phases: training and inference. Deep networks are mostly used with large datasets, in supervised (classification, regression, etc.) or unsupervised (autoencoders, GANs) regimes. Such networks are applicable only to the type of data they were trained on and do not exploit the internal statistics of a single datum. We introduce Deep Internal Learning: we train a signal-specific network at test time, on the test input only, in an unsupervised manner (no labels or ground truth). In this regime, training is part of the inference; no additional data or prior training takes place. This is possible because a single instance (be it image, video, or audio) actually contains a lot of data when its internal statistics are exploited. In a series of papers from the last year, reviewed throughout the talk, I will demonstrate how we applied this framework to various challenges: super-resolution, segmentation, dehazing, transparency separation, and watermark removal. I will also show how this approach can be incorporated into generative adversarial networks by training a GAN on a single image for the challenge of retargeting.
Research Staff Member, Computer Vision and Augmented Reality Team, IBM Research AI
Few-shot Learning – State of the Art
VP R&D, Cognata
Guy Tsafrir is the VP R&D of Cognata and has a strong track record in designing complex multidisciplinary systems.
Before joining Cognata, he held the position of digital health team leader at the Samsung Strategy and Innovation Center in Israel.
During his time at GE Healthcare, he served in various positions including software team leader, control & algorithms team leader, software & system manager, and R&D manager. Before joining GE Healthcare, he developed software and algorithms at Versamed and worked as a software engineer at Verint. Guy holds a BSc in Physics from Tel Aviv University.
Pushing the Boundaries - Simulation vs. the Real World
The greatest challenge in simulation is finding the metric that compares it to real life.
In this session, Guy will present a mathematical metric that defines this relation and show how a proper simulation can be constructed based on deep learning techniques. He will also show live examples of the Cognata simulation platform.
Vice President AI Tech, IBM Research AI
Video Comprehension and the Challenges it Poses
Founder & CEO, HT BioImaging
A Novel Image Modality for the Early Detection and Diagnosis of Cancer
The Information Theory of Deep Learning
Senior Research Scientist, GM Research
Reinforcement Learning: from Foundations to State-of-the-Art
Researcher, Rafael
Self Learning Self Driving Car
We succeeded in training an agent to drive in a specific, previously unvisited area using only a realistic 3D model of the area, built merely from aerial images. The training system is end-to-end: the inputs are realistically rendered images from the 3D model from the agent's perspective, and the outputs are driving commands to the moving platform. During training, the system received only sparse, weak supervision from collision detections with the 3D model. Our main novelty is the separate treatment of two linked challenges: navigation and obstacle avoidance, which both affect the platform's location and position but have different reward distributions and are thus hard to learn simultaneously. The two are approached using different methods and combined in the final agent behavior. We hope to use the trained agent to drive a real vehicle inside the modeled area.
CEO & Founder, Donde Search
Liat is the CEO and Founder of Donde Search. Donde's award-winning AI computer vision and NLP technology mimics the way people think about products, helping retailers read their customers' minds. Liat holds a BSc in Computer Science and Economics from Ben-Gurion University of the Negev. A tech entrepreneur, she led Magshimim, a cyber-security training program in Israel's peripheral areas that became a national success, and headed a Cyber & Computer Networks course. Prior to her academic studies, Liat was a team leader and served as an officer in an elite technological unit in Israeli Intelligence, where she managed projects using cutting-edge technologies of high significance and impact.
Deep Learning in E-Commerce: Problems and Approaches to Solve Them
Donde Search uses Computer Vision and Natural Language Processing to analyse product pages and extract meaningful information to improve navigation, merchandising, personalization, and search across e-commerce platforms.
In this talk we will present some of Donde's solutions and discuss their underlying technologies. We will show how Donde's technology transforms a challenging and sometimes frustrating search experience into a natural and engaging visual process that allows humans to navigate their way in a database of hundreds of thousands of items. We will discuss the gap between state-of-the-art deep-learning algorithms and real working solutions for enterprises, including some of the most common problems in turning working algorithms into products: handling structured data, combining texts and images, and using unsupervised methods for data augmentation to allow better training and learning.