CLAP Seminar - 18/11/21 - Machine learning compilers and MLIR

You are cordially invited to attend the CLAP seminar on 18 November 2021, from 10:15 to 11:40. The seminar will take place online via Zoom (connection details below). You are also invited to a CLAP coffee break from 10:00 to 10:15 on gather.town (https://gather.town/app/7giEyr3CTKAwe3Or/cafeCLAP). The talks will be given in English.

Programme:

  • 10:20-11:00: TVM: a machine learning compiler for STM32 AI on the edge, by Arthur Stoutchinin
This talk is a (hopefully) friendly introduction to ML compilers on the edge. While companies are racing to develop specialized edge devices optimized for different ML use cases, two questions arise: how do we make an ML model built with an arbitrary framework run on arbitrary hardware, and how do we avoid spending the months traditionally needed to port an ML model onto specialized hardware? This is where ML compilers such as TVM come into the picture: TVM works with a wide range of frameworks (including TensorFlow, MXNet, PyTorch, and Keras) and a wide range of hardware backends (including x86 and Arm CPUs, server and mobile GPUs, and FPGA-based accelerators).
 

STMicroelectronics is developing an STM32 (microcontroller) based ML platform that aims to balance power and performance for low-cost edge computing applications. The computing hardware of this edge platform is limited in compute, memory, and power resources. ST has adopted Apache TVM as the compiler for this platform because of its ability to support a variety of machine learning frameworks and its active open-source community. We collaborate with the TVM community and with companies such as Arm and OctoML on efficient deep learning inference for resource-constrained edge devices. I will talk about the TVM compiler, our work on TVM to support the STM32 ML platform, the main challenges we are facing, and our current research directions.
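
To give a purely illustrative taste of such a compilation flow (this is a minimal sketch, not the exact STM32 workflow that will be presented), the hypothetical Python snippet below imports an ONNX export into TVM's Relay IR and builds it for a generic CPU target. The model file name, input name, and input shape are assumptions; retargeting the same model to another backend amounts to changing the target string.

    # Illustrative sketch only: compile an ONNX model with TVM's Relay frontend.
    import onnx
    import tvm
    from tvm import relay

    # Load a trained model exported from any supported framework (ONNX here);
    # "model.onnx" and the input name/shape below are hypothetical.
    onnx_model = onnx.load("model.onnx")
    shape_dict = {"input": (1, 3, 224, 224)}

    # Import the model into Relay, TVM's high-level intermediate representation.
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

    # Build for a chosen hardware target; swapping the target string
    # (e.g. "llvm", "cuda", or a C backend for bare-metal devices)
    # retargets the same model without touching the framework code.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)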

  • 11:00-11:40: MLIR, a novel approach to building reusable and extensible compiler infrastructure, by Oleksandr Zinenko
This talk presents MLIR, a novel approach to building reusable and extensible compiler infrastructure. MLIR addresses software fragmentation and compilation for heterogeneous hardware, significantly reducing the cost of building domain-specific compilers and of connecting existing compilers together. MLIR facilitates the design and implementation of code generators, translators, and optimizers at different levels of abstraction and across application domains, hardware targets, and execution environments. The talk will discuss the original design principles, structures, and semantics of MLIR and present it as a generalized infrastructure that reduces the cost of building compilers, describing diverse use cases to show research and educational opportunities for future programming languages, compilers, execution environments, and computer architectures, with a particular focus on GPUs and accelerators.
 
Bio:
Oleksandr Zinenko is a senior research engineer at Google Brain, based in Paris, France, working on compiler technology for machine learning. Previously, he was a research engineer at Inria (the French national institute for research in computer science and applied mathematics) and at École Normale Supérieure in Paris, as a member of the Parkas group. Oleksandr obtained his PhD from Université Paris-Saclay (Paris-Sud XI) for his work on “Interactive Program Restructuring”. His research interests span compilation, high-performance systems, and interactive software visualization, united by the common goal of making it effective to program efficient programs.
 

Kevin Martin is inviting you to a scheduled Zoom meeting.
 
Topic: CLAP Seminar 18/11/21
Time: Nov 18, 2021 10:15 AM Paris
 
Join Zoom Meeting
https://cnrs.zoom.us/j/95718437641?pwd=UTNkNUxsb3NiVUZmUTdmaCs2Z0Z3Zz09
 
Meeting ID: 957 1843 7641
Passcode: 39casj

Date:
Thursday, 18 November 2021