Courses
- AMS-X02: Advanced numerical methods and high-performance computing for simulating complex phenomena (Marc Massot, 5 ECTS)
In a growing number of scientific and industrial applications, numerical simulation plays a key role in understanding and analyzing complex physical phenomena. It also makes it possible to predict the behavior of devices such as aeronautical combustion chambers, with a view to advanced design. The complexity of the systems and the size of multi-dimensional simulations make the use of high-performance computing necessary. This course first presents the challenges that the modeling of complex systems raises for numerical methods and simulation, together with a state of the art of new computing architectures and parallel programming models. After recalling the basics of the numerical analysis of PDEs for multi-scale problems, we explore several advanced numerical methods designed to handle the stiffness present in these complex models while taking full advantage of new computing architectures. These methods rely on an effective combination of numerical analysis, modeling and scientific computing. Hands-on sessions on machines, in connection with a computing mesocenter, will be offered.
Contents:
- Mathematical modeling of complex multi-scale systems.
- Definition of the notion of high-performance computing and an overview of new computing architectures and parallel programming models.
- Numerical analysis of multi-scale PDEs (domain decomposition, operator splitting, ...).
- Presentation and analysis of advanced numerical methods (adaptive multiresolution and operator splitting with time/space adaptation, the parareal algorithm, asymptotic-preserving methods, ...); operator splitting is illustrated in the sketch after this list.
- Hands-on sessions on a parallel machine, with example computational codes provided for each method.
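As a small illustration of one technique named above (operator splitting), the following Python sketch applies Strang splitting to a toy stiff ODE; the equation and coefficients are purely illustrative and not taken from the course material.

```python
# A minimal sketch of Strang (symmetric) operator splitting on a toy stiff ODE
#   du/dt = -lam * u + a
# split into a stiff linear part f1(u) = -lam*u and a constant source f2(u) = a.
# Both sub-flows are integrated exactly, so the only error is the splitting error.
# All coefficients here are illustrative, not taken from the course material.
import numpy as np

lam, a = 50.0, 1.0          # stiff relaxation rate and source term
u0, T = 2.0, 1.0            # initial condition and final time

def exact(t):
    # Exact solution of the unsplit ODE, used as a reference
    return (u0 - a / lam) * np.exp(-lam * t) + a / lam

def strang_step(u, dt):
    u = u * np.exp(-lam * dt / 2)   # half step of the stiff part (exact flow)
    u = u + a * dt                  # full step of the source part (exact flow)
    u = u * np.exp(-lam * dt / 2)   # second half step of the stiff part
    return u

for n_steps in (20, 40, 80):
    dt, u = T / n_steps, u0
    for _ in range(n_steps):
        u = strang_step(u, dt)
    print(f"n={n_steps:3d}  error={abs(u - exact(T)):.2e}")
# The error drops roughly by a factor of 4 when dt is halved (second-order splitting).
```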
- AMS301: Parallel Scientific Computing (Axel Modave, 5 ECTS)
Nowadays, parallel scientific computing is an essential tool in academia and industry for solving a wide range of engineering problems. This course deals with the efficient parallel solution of structured and unstructured problems (e.g., PDE problems discretized with finite differences or finite elements). In particular, we focus on parallel computing with distributed memory. The theoretical part of this course is composed of two topics: parallel algorithms for solving structured and unstructured mathematical problems; and the solution of large linear systems (direct/iterative methods, the conjugate gradient method, Krylov methods, GMRES, preconditioning, domain decomposition).
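As a small illustration of the distributed-memory model used throughout the course, the following Python sketch computes a dot product in parallel with mpi4py; mpi4py is only chosen here for brevity, and the course itself may well use C with MPI.

```python
# A minimal distributed-memory sketch with mpi4py: parallel dot product.
# Run with, e.g.,  mpirun -n 4 python dot.py
# (Illustrative only; the same pattern applies to C + MPI.)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1_000_000                      # global vector length (illustrative)
chunk = n // size                  # assume n is divisible by size for simplicity
lo, hi = rank * chunk, (rank + 1) * chunk

# Each process builds only its own slice of the two vectors.
x = np.arange(lo, hi, dtype=np.float64)
y = np.ones(chunk, dtype=np.float64)

local_dot = float(x @ y)                              # local partial result
global_dot = comm.allreduce(local_dot, op=MPI.SUM)    # collective reduction

if rank == 0:
    print("dot(x, y) =", global_dot, "(expected", n * (n - 1) / 2, ")")
```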
- AMS302: Modeling and simulation of neutral particle transport (François Fevotte, 5 ECTS)
This course gives an overview of the simulation of neutral particle transport phenomena. We start by modeling particle transport phenomena using Partial Differential Equations (PDEs). Then we study how to discretize and solve such equations. Some of the resolution methods will actually be implemented in C++ during hands-on sessions. On a simple model problem, we will thus be able to study the relative merits of Monte Carlo and deterministic methods. In the last part, we present "real-world" applications of the simulation of neutron transport, mainly from the field of nuclear reactor physics.
This course is only available in French.
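As a small illustration of the kind of model problem mentioned above, the following Python sketch compares a Monte Carlo estimate of transmission through a purely absorbing 1D slab with the exact attenuation law; the cross-section and slab width are illustrative values, not course data.

```python
# A minimal sketch comparing Monte Carlo and the analytic (deterministic) answer
# on a toy transport problem: transmission through a purely absorbing 1D slab.
# The cross-section and slab width below are illustrative values.
import numpy as np

sigma_t = 2.0      # total macroscopic cross-section [1/cm]
width = 1.5        # slab width [cm]
n_particles = 1_000_000

rng = np.random.default_rng(0)
# Sample free-path lengths from an exponential distribution with mean 1/sigma_t.
free_paths = rng.exponential(scale=1.0 / sigma_t, size=n_particles)
mc_transmission = np.mean(free_paths > width)

exact_transmission = np.exp(-sigma_t * width)   # Beer-Lambert attenuation
print(f"Monte Carlo : {mc_transmission:.5f}")
print(f"Exact       : {exact_transmission:.5f}")
# The statistical error decreases like 1/sqrt(n_particles).
```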
- AMS304: Numerical methods and modern algorithms for solving integral equations (Stéphanie Chaillat, 5 ECTS)
Waves that propagate in our environment are both a tool to investigate the world around us (non-destructive testing, radar, telescopes) and a means to transmit information (music, radio). In the first part of the course, we establish the acoustic equations; we then describe the use of elementary solutions for the representation of fields and derive the corresponding boundary integral equations. In the second part, we study the various methods to solve boundary integral equations. In particular, modern algorithms for the fast solution of these systems will be presented.
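As a small illustration of how an integral equation can be discretized, the following Python sketch applies a basic Nyström (quadrature collocation) method to a generic Fredholm equation of the second kind; the kernel and manufactured solution are illustrative and unrelated to the acoustic kernels studied in the course.

```python
# A minimal Nystrom-method sketch for a Fredholm integral equation of the 2nd kind:
#     u(x) - int_0^1 K(x, y) u(y) dy = f(x)
# Kernel and manufactured solution are illustrative, not the acoustic kernels of the course.
import numpy as np

def K(x, y):
    return 0.5 * np.exp(-np.abs(x - y))

u_true = lambda x: np.cos(np.pi * x)          # manufactured solution

n = 200
x = np.linspace(0.0, 1.0, n)                  # quadrature / collocation nodes
w = np.full(n, 1.0 / (n - 1))                 # trapezoidal weights
w[0] = w[-1] = 0.5 / (n - 1)

# Build f(x_i) = u_true(x_i) - int K(x_i, y) u_true(y) dy with a fine quadrature.
yy = np.linspace(0.0, 1.0, 4001)
f = u_true(x) - np.trapz(K(x[:, None], yy[None, :]) * u_true(yy), yy, axis=1)

# Nystrom discretization: (I - K_ij * w_j) u_j = f_i, a dense linear system.
A = np.eye(n) - K(x[:, None], x[None, :]) * w[None, :]
u = np.linalg.solve(A, f)

print("max error vs manufactured solution:", np.max(np.abs(u - u_true(x))))
# Fast algorithms (e.g., fast multipole or H-matrices) avoid forming A densely.
```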
- CSC4508: Operating systems (François Trahay and Gaël Thomas, 5 ECTS)
Web page: http://www-inf.telecom-sudparis.eu/COURS/CSC4508/Supports/index_ipparis.php
This course presents the design principles of modern operating systems. In this course you will learn:
- how applications interact with the operating system, and how the operating system interacts with the hardware (see the sketch after this list)
- the main internal mechanisms of an operating system (memory manager, I/O subsystem, scheduler)
- how these mechanisms are implemented in a real operating system through the study of the XV6 operating system
- how to develop parallel applications and parallel operating systems with threads
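As a small illustration of the first point above (how applications interact with the operating system through system calls), the following Python sketch uses the POSIX fork/exec/wait pattern via the standard os module; it is a toy example, not course material, and assumes a Unix-like system.

```python
# A minimal fork/exec/wait sketch: how a user program asks the OS to create a
# process and run another program. Assumes a Unix-like system (os.fork is POSIX-only).
import os
import sys

pid = os.fork()                      # system call: duplicate the current process
if pid == 0:
    # Child process: replace its image with /bin/echo (exec family of system calls).
    os.execv("/bin/echo", ["echo", "hello from the child process"])
    sys.exit(1)                      # only reached if execv fails
else:
    # Parent process: wait for the child to terminate and inspect its status.
    _, status = os.waitpid(pid, 0)
    print(f"child {pid} exited with status {os.waitstatus_to_exitcode(status)}")
```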
- CSC5001: High performance runtimes (Élisabeth Brunet, François Trahay and Gaël Thomas, 5 ECTS)
Web page: http://www-inf.telecom-sudparis.eu/COURS/CSC5001/new_site/Supports/
Calendar: http://www-inf.telecom-sudparis.eu/COURS/masteripparis/hpda/?page=..%2Fcommon%2Fcourses&genics=CSC5001
With the advent of multicore processors (and now many-core processors with several dozens of execution units), expressing parallelism is mandatory to achieve high performance in different kinds of applications (scientific computing, big data...). In this context, this course details multiple parallel programming paradigms that help exploit such a large number of cores on different target architectures (regular CPUs and GPUs). The course introduces the distributed-memory model (MPI), the shared-memory model (OpenMP) and the heterogeneous model (CUDA). All these approaches allow leveraging the performance of different computers (from small servers to the large supercomputers listed in the Top500).
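As a small illustration of the distributed-memory model listed above, the following Python sketch passes a token around a ring of MPI processes with mpi4py; the course labs may well use C with MPI/OpenMP/CUDA, and mpi4py is only used here for brevity.

```python
# A minimal point-to-point MPI sketch with mpi4py: pass a token around a ring.
# Run with, e.g.,  mpirun -n 4 python ring.py
# (Illustrative only; the same pattern applies to C + MPI in typical HPC codes.)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

left = (rank - 1) % size
right = (rank + 1) % size

if rank == 0:
    comm.send(0, dest=right)             # rank 0 injects the token
    token = comm.recv(source=left)       # ...and receives it after a full loop
    print(f"token returned to rank 0 with value {token} (incremented once per other rank)")
else:
    token = comm.recv(source=left)       # wait for the token from the left neighbor
    comm.send(token + 1, dest=right)     # increment it and forward to the right
```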
- CSC5004: Cloud infrastructures (Pierre Sutra and Mathieu Bacou, 5 ECTS)
Web page: https://github.com/otrack/cloud-computing-infrastructures
Calendar: http://www-inf.telecom-sudparis.eu/COURS/masteripparis/hpda/?page=..%2Fcommon%2Fcourses&genics=CSC5004
This course presents cloud infrastructures in order to:
- acquire an overview of Cloud computing (e.g., data centers, everything-as-a-service, on-demand computing, cloud economy model)
- apprehend the fundamental notions in Cloud computing (e.g., fault-tolerance, elasticity, scalability, load balancing)
- understand how virtualization works (VM, container)
- deconstruct and classify a distributed data store
- recognize data consistency problems and know common solutions
In detail, a student will learn how to:
- deploy and maintain IaaS
- construct basic data storage services (e.g., key-value store, coordination kernels; see the sketch after this list)
- construct and deploy a micro-service architecture
- design for dependability and scalability
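As a small illustration of the kind of basic storage service mentioned in the list above, the following Python sketch exposes an in-memory key-value store over HTTP using only the standard library; it is a single-node toy with none of the replication, fault-tolerance, or consistency mechanisms studied in the course.

```python
# A minimal single-node key-value store over HTTP (standard library only).
# GET /key  -> returns the stored value or 404
# PUT /key  -> stores the request body as the value
# Toy illustration: no replication, persistence, or concurrency control.
from http.server import BaseHTTPRequestHandler, HTTPServer

store = {}  # the whole "database": an in-memory dict

class KVHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        key = self.path.lstrip("/")
        if key in store:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(store[key])
        else:
            self.send_response(404)
            self.end_headers()

    def do_PUT(self):
        key = self.path.lstrip("/")
        length = int(self.headers.get("Content-Length", 0))
        store[key] = self.rfile.read(length)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), KVHandler).serve_forever()
```

It can be exercised with, e.g., `curl -X PUT -d hello http://localhost:8080/greeting` followed by `curl http://localhost:8080/greeting`.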
- CSC5101: Advanced programming of multi-core architectures (Gaël Thomas, 5 ECTS)
Web page: http://www-inf.telecom-sudparis.eu/COURS/chps/paam/
Calendar: http://www-inf.telecom-sudparis.eu/COURS/masteripparis/hpda/?page=..%2Fcommon%2Fcourses&genics=CSC5101
This course presents advanced programming techniques for multi-core architectures: lock-free algorithms, transactional memory, virtualization techniques, and techniques to mitigate the effects of non-uniform memory architectures (NUMA). This module presents the theoretical concepts underlying these systems and their practical implementation.
- CSC7256: Big Data Processing (Louis Jachiet, 2,5 ECTS)
This module presents the basics of architectures and algorithms for big-data processing at very large scale. It covers MapReduce, Apache Spark, and the Lambda and Kappa architectures.
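As a small illustration of the MapReduce model mentioned above, the following Python sketch runs a word count by explicitly separating the map, shuffle, and reduce phases; a real engine (e.g., Hadoop or Spark) distributes these phases over a cluster, which this toy obviously does not.

```python
# A minimal word count written in MapReduce style: map -> shuffle -> reduce.
# A real engine (Hadoop, Spark) distributes each phase over a cluster; this toy
# just makes the three phases explicit on a single machine.
from collections import defaultdict

documents = [
    "the quick brown fox",
    "the lazy dog",
    "the quick dog barks",
]

# Map phase: each document independently emits (word, 1) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group all values by key (the network-heavy step in practice).
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce phase: each key is reduced independently (hence in parallel in principle).
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)   # {'the': 3, 'quick': 2, ...}
```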
- CSC7239: Architecture for big data (Ioana Manolescu, 2,5 ECTS)
Mediator systems, P2P systems, structured data management in massively parallel settings
- IA307-master: Advanced GPU programming (Goran Frehse and Élisabeth Brunet, 2,5 ECTS)
Web page: https://sites.google.com/site/frehseg/teaching/ia307
Calendar: http://www-inf.telecom-sudparis.eu/COURS/masteripparis/hpda/?page=..%2Fcommon%2Fcourses&genics=IA307-master
The aim of this course is to give an overview of the algorithms behind modern machine learning libraries for neural networks and of their implementations. In particular, the use of specialized hardware, such as graphics cards, to improve performance is at the heart of these libraries. It is important to understand how the computations are shared between this specialized hardware and the CPU.
- IA317: Large scale machine learning (Thomas Bonald, 2,5 ECTS)
This course considers the problem of scaling up machine learning. The goal is to understand and learn to implement the main approaches for numerically solving supervised statistical learning problems. Several angles will be covered: dimensionality reduction and feature selection, the use of suitable optimization algorithms, and the use of distributed computing tools to run the computations on a cluster.
- INF504: Machine learning and deep learning (Mounim El Yacoubi, 2,5 ECTS)
- CSC_51052_EP (INF552): Data Visualization (Emmanuel Pietriga, 5 ECTS)
- CSC_51053_EP (INF553): Database Management Systems (Ioana Manolescu, 5 ECTS)
Course content:
- Data modeling: entity-relationship model, relational model
- Relational algebra, relational calculus
- The query language of relational databases: SQL (illustrated in the sketch after this list)
- Quality of relational schemas, normal forms
- Storage subsystem of relational databases: disks, files, buffers
- Indexing in databases: tree structures, array structures
- Evaluation of relational operators
- SQL query optimization
- Brief introduction to NoSQL databases
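As a small illustration of the SQL material listed above, the following Python sketch uses the standard sqlite3 module to create a relational table, add an index, and run a declarative query; the schema and data are illustrative, not taken from the course.

```python
# A minimal relational-database sketch with Python's built-in sqlite3 module:
# create a table, insert rows, build an index, and run a declarative SQL query.
# Schema and data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
cur = conn.cursor()

cur.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, year INTEGER)")
cur.executemany(
    "INSERT INTO student (name, year) VALUES (?, ?)",
    [("Alice", 1), ("Bob", 2), ("Chloe", 2)],
)
cur.execute("CREATE INDEX idx_student_year ON student(year)")  # speeds up filters on year

# The query optimizer decides how to evaluate this (e.g., whether to use the index).
cur.execute("SELECT year, COUNT(*) FROM student GROUP BY year ORDER BY year")
print(cur.fetchall())                       # [(1, 1), (2, 2)]
conn.close()
```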
- CSC_51054_EP (INF554): Machine and Deep learning (Michalis Vazirgiannis and Jessee Read, 5 ECTS)
We have entered the Artificial Intelligence era. The explosion of available data in a wide range of application domains gives rise to new challenges and opportunities in a plethora of disciplines, ranging from science and engineering to business and society in general. A major challenge is how to take advantage of this unprecedented scale of data in order to acquire further insights and knowledge and improve the quality of the offered services. This is where machine and deep learning come in, capitalizing on techniques and methodologies from data exploration (statistical profiling, visualization) to identify patterns, correlations and groupings, to build models and to make predictions. In recent years, deep learning has become a very important element for solving large-scale prediction problems. (A minimal supervised-learning sketch follows the syllabus below.)
Syllabus of the course:
- General Introduction to Machine Learning
- Supervised Learning
- Unsupervised Learning
- Advanced Machine Learning Concepts
- Kernels
- Neural Networks
- Deep Learning I
- Deep Learning II
- Machine & Deep Learning for Graphs
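As a small illustration of the supervised-learning part of the syllabus, the following Python sketch trains and evaluates a scikit-learn classifier on a built-in dataset; it is a generic example, not course material.

```python
# A minimal supervised-learning sketch with scikit-learn: fit a classifier on a
# built-in dataset and measure its accuracy on held-out data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                       # 150 samples, 4 features, 3 classes
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0                 # hold out 30% for evaluation
)

model = LogisticRegression(max_iter=1000)               # a simple linear classifier
model.fit(X_train, y_train)                             # supervised training step

y_pred = model.predict(X_test)
print("test accuracy:", accuracy_score(y_test, y_pred))
```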
- CSC_51059_EP (INF559): A Programmer’s Introduction to Computer Architectures and Operating Systems (Francesco Zappa Nardelli, Timothy Bourke and Théophile Bastian, 5 ECTS)
We will explain the enduring concepts underlying all computer systems, and show the concrete ways that these ideas affect the correctness, performance, and utility of any application program.
This course serves as an introduction for students who go on to implement systems hardware and software. It also pushes students towards becoming the rare programmers who know how things work and how to fix them when they break.
This course will cover most of the key interfaces between user programs and the bare hardware, including:
- The representation and manipulation of information
- Machine-level representation of programs
- Processor architecture
- The memory hierarchy
- Exceptional Control Flow
- Virtual memory
- CSC_52060_EP (INF560): High performance runtimes (Patrick Carribault, 5 ECTS)
With the advent of multicore processors (and now many-core processors with several dozens of execution units), expressing parallelism is mandatory to achieve high performance in different kinds of applications (scientific computing, big data...). In this context, this course details multiple parallel programming paradigms that help exploit such a large number of cores on different target architectures (regular CPUs and GPUs). It includes the distributed-memory model (MPI), the shared-memory model (OpenMP) and the heterogeneous model (CUDA). All these approaches allow leveraging the performance of different computers (from small servers to the large supercomputers listed in the Top500).
- CSC_52064_EP (INF564): Compilation (Jean-Christophe Filliatre and Georges-Axel Jaloyan, 5 ECTS)
This course is an introduction to compilation. It explains the techniques and tools used in the different phases of a compiler, up to the production of optimized assembly code. A compiler from a fragment of the C language to x86-64 assembly is implemented during the tutorial sessions.
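As a small illustration of the idea of compilation (unrelated to the course's C-to-x86-64 project), the following Python sketch compiles arithmetic expressions, given as nested tuples, into instructions for a tiny stack machine and then executes them.

```python
# A toy compiler sketch: translate arithmetic expressions (nested tuples) into
# instructions for a small stack machine, then run them. Purely illustrative;
# the course project targets real x86-64 assembly for a fragment of C.

def compile_expr(expr):
    """Return a list of stack-machine instructions for the expression."""
    if isinstance(expr, (int, float)):
        return [("PUSH", expr)]
    op, lhs, rhs = expr                      # e.g., ("+", 1, ("*", 2, 3))
    return compile_expr(lhs) + compile_expr(rhs) + [(op,)]

def run(program):
    """Execute the instruction list on a stack machine and return the result."""
    stack = []
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    for instr in program:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()  # operands come off in reverse order
            stack.append(ops[instr[0]](a, b))
    return stack.pop()

expr = ("+", 1, ("*", 2, 3))                 # represents 1 + 2 * 3
code = compile_expr(expr)
print(code)                                  # [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('*',), ('+',)]
print(run(code))                             # 7
```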
- CSC_51071_EP (INF571): Distributed Data Structures, with a Spotlight on Blockchains (Constantin Enea and Daniel Augot, 5 ECTS)
Distributed systems are composed of several computational units, classically called processes, that run concurrently and independently, without any central control. Additional difficulties are introduced by asynchrony (processes and channels operate at different speeds) and by limited local knowledge (each process has only a local view of the system and has a limited amount of information). Distributed algorithms are algorithms designed to run in this quite challenging setting. They arise in a wide range of applications, including telecommunications, internet, peer-to-peer computing, blockchain technology...
This course aims at giving a comprehensive introduction to the field of distributed algorithms. A collection of significant algorithms will be presented for asynchronous networked systems, with a particular emphasis on their correctness proofs. Algorithms will be analyzed according to various measures of interest (e.g., time and space complexities, communication costs). We will also present some "negative" results, i.e., impossibility theorems and lower bounds, as they play a useful role for a system designer in determining what problems are solvable and at what cost.
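As a small illustration of one data structure in the course's spotlight, the following Python sketch builds a toy hash-chained ledger in which each block commits to its predecessor via SHA-256, so tampering with any block invalidates all later links; it deliberately omits consensus, signatures, and networking.

```python
# A toy hash-chained ledger: each block stores the hash of its predecessor, so
# modifying any block breaks verification of every later block. No consensus,
# signatures, or networking; purely an illustration of the data structure.
import hashlib
import json

def block_hash(block):
    # Hash a canonical JSON encoding of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev_hash})

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
for payload in ["alice pays bob 3", "bob pays carol 1", "carol pays dave 2"]:
    append_block(chain, payload)

print("valid chain:", verify(chain))     # True
chain[1]["data"] = "bob pays carol 100"  # tamper with an old block...
print("after tampering:", verify(chain)) # False: block 2's prev_hash no longer matches
```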
- CSC_52083_EP (INF583): Systems for big data (Angelos Anadiotis, 5 ECTS)
This course covers the design principles and algorithmic foundations of influential software systems for Big Data analytics. The course begins with the design of large enterprise data warehouses, Online Analytical Processing (OLAP), and data mining over data warehouses. The course then examines fundamental architectural changes to scale data processing and analysis to a shared-nothing compute cluster, including parallel databases, MapReduce, column stores, and the support of batch processing, stream processing, iterative algorithms, machine learning, and interactive analytics in this new context.
- MAP553: Foundation of Machine Learning (Erwan Le Pennec, 5 ECTS)
Web page: https://moodle.polytechnique.fr/course/info.php?name=MAP553-2022
Calendar: http://www-inf.telecom-sudparis.eu/COURS/masteripparis/hpda/?page=..%2Fcommon%2Fcourses&genics=MAP553
Machine learning is a scientific discipline concerned with the design and development of algorithms that allow computers to learn from data. A major focus of machine learning is to automatically learn complex patterns and to make intelligent decisions based on them.
This course focuses on the methodology underlying supervised and unsupervised learning, with a particular emphasis on the mathematical formulation of algorithms and the way they can be implemented and used in practice. We will therefore describe some necessary tools from optimization theory and explain how to use them for machine learning. A glimpse of theoretical guarantees, such as upper bounds on the generalization error, is provided during the last lecture.
The methodology will be the main focus of the lectures, while some proofs will be done during the exercise sessions (PCs). Practical work will be done through a challenge.
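As a small illustration of the optimization tools mentioned above, the following Python sketch minimizes a least-squares objective with plain gradient descent and compares the result to the closed-form solution; the data are synthetic and the step size is purely illustrative.

```python
# A minimal gradient-descent sketch for least-squares regression:
#   minimize f(w) = (1/2n) * ||X w - y||^2,  gradient = (1/n) * X^T (X w - y).
# Synthetic data and an illustrative step size; compared against the closed form.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)    # noisy linear model

w = np.zeros(d)
step = 0.1                                   # fixed step size (learning rate)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n             # gradient of the least-squares loss
    w -= step * grad

w_closed = np.linalg.lstsq(X, y, rcond=None)[0]   # closed-form reference solution
print("gradient descent:", np.round(w, 3))
print("closed form     :", np.round(w_closed, 3))
```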
- MAP569: Machine learning 2 (Stephane Canu, 5 ECTS)
- MAP572: Implementation of numerical methods (Lucas Gerin, 5 ECTS)
- MAP583: Deep learning: from theory to practice (Andreï Bursuc and Marc Lelarge, 5 ECTS)
Web page: https://moodle.polytechnique.fr/course/info.php?name=MAP583-2022
Recent developments in neural network approaches (now better known as "deep learning") have dramatically changed the landscape of several research fields such as image classification, object detection, speech recognition, machine translation, self-driving cars and many more. Due to its promise of leveraging large (and sometimes even small) amounts of data in an end-to-end manner, i.e. training a model to extract features by itself and to learn from them, deep learning is increasingly appealing to other fields as well: medicine, time series analysis, biology, simulation...
This course is a deep dive into practical details of deep learning architectures, in which we attempt to demystify deep learning and kick start you into using it in your own field of interest. During this course, you will gain a better understanding of the basis of deep learning and get familiar with its applications. We will show how to set up, train, debug and visualize your own neural network. Along the way, we will be providing practical engineering tricks for training or adapting neural networks to new tasks.
By the end of this class, you will have an overview of the deep learning landscape and its applications to traditional fields, but also some ideas for applying it to new ones. You should also be able to train a multi-million-parameter deep neural network by yourself. For the implementations we will be using the PyTorch library in Python.
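As a small illustration of the PyTorch workflow mentioned above, the following sketch trains a tiny fully connected network on synthetic data with the usual forward / loss / backward / step loop; the architecture and hyperparameters are illustrative, not those used in the course.

```python
# A minimal PyTorch training loop on synthetic data: forward pass, loss,
# backpropagation, and an optimizer step. Architecture and hyperparameters
# are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 10)                       # 256 samples, 10 features
y = X.sum(dim=1, keepdim=True)                 # simple synthetic regression target

model = nn.Sequential(                         # a tiny multilayer perceptron
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(200):
    optimizer.zero_grad()                      # reset accumulated gradients
    loss = criterion(model(X), y)              # forward pass + loss
    loss.backward()                            # backpropagation
    optimizer.step()                           # gradient descent update
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  loss {loss.item():.4f}")
```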
- MAP584: Effective implementation of the finite element method (François Alouges, Aline Lefebvre and Flore Nabet, 5 ECTS)
- NET7212: Safe System Programming (Stefano Zacchiroli and Samuel Tardieu, 5 ECTS)
Description
In this course you will learn how to build system-level applications that, by construction, avoid memory-safety issues and data races by relying on modern type systems. You will be introduced to Rust as an example of a programming language that realizes this approach and has significant industry adoption.
Syllabus
- Memory safety
- How to detect memory-safety issues in C/C++
- The Rust memory model
- NULL references and how to avoid the "billion-dollar mistake"
- Rust language basics
- Race conditions
- Avoiding multiprocessing (security) pitfalls
- Data races
- Avoiding multithreading (security) pitfalls
Meta
- Site: https://ssp-rs.telecom-paris.fr/
- Field: System programming and Software security
- Keywords: programming, security, statictyping, memorysafety, multiprocessing, multithreading, rust
- Evaluation: exam + project
- Prerequisites:
- operating systems fundamentals
- C programming (C++ would be a plus)
- POSIX programming
- some experience with multithreading/multiprocessing programming
- ROB306: Hardware accelerators and programming (Omar Hammami, 2,5 ECTS)
The objectives of this course are twofold: 1. mastering digital circuit modeling techniques based on high-level hardware description languages (C/C++/SystemC), as well as the flows that transform such models into a physical circuit; 2. mastering reprogrammable digital circuit technologies of the FPGA type. These circuits, which have seen spectacular growth in recent years, are very widely used in embedded applications, in particular for their ability to accelerate computations. Together, these two points enable the design and implementation of circuits with multiple functionalities on reprogrammable components. FPGAs are also heavily used in the verification of electronic systems by emulation.
- ROB307: MPSoC: multiprocessor systems on chip (Omar Hammami, 2,5 ECTS)
The design of embedded systems produces complete systems made of inseparable software and hardware parts that are designed jointly. The resulting systems almost systematically end up residing on a single chip, hence the name systems on chip. System-on-Chip (SoC) design methodologies are an indispensable tool for an engineer who has to design an embedded system, in order to determine what the technology makes possible for building the system under study within the specified constraints. The course introduces SoC design methodologies and their application to industrial examples, with a focus on MPSoCs (Multiprocessor Systems on Chip) and NoCs (Networks on Chip).
- CSC_4SD01_TP (SD201): Data Mining (Mauro Sozio, 2,5 ECTS)
- FLE1: French courses for foreign students (M1/S1) (Nicoline Lagel, 2,5 ECTS)
- FLE2: French courses for foreign students (M1/S2) (Nicoline Lagel)
- FLE3: French courses for foreign students (M2/S1) (Nicoline Lagel)
- English: English (2,5 ECTS)
- Free ECTS: Free ECTS (7,5 ECTS)
A student can choose 7.5 ECTS from any track of the master's degree in computer science (including their main track).
- Internship: 6-month M2 research internship (30 ECTS)
- M1 HPDA Project: M1 HPDA research projects (20 ECTS)
Web page: ?page=../common/research-projects-2023-2024
During the master, students learn research by doing research. Over the two years of the master, a student will thus spend one to two days each week in a research group, working on research projects with professors and PhD students of IP Paris.
- M1 Seminar: M1 Seminar (2,5 ECTS)
Web page: https://www.inf.telecom-sudparis.eu/pds/seminars/
Calendar: http://www-inf.telecom-sudparis.eu/COURS/masteripparis/hpda/?page=..%2Fcommon%2Fcourses&genics=M1 Seminar
The seminar consists of presentations of ongoing research work, both by students, on papers from conferences or journals, and by professors from IP Paris and other universities.
- M2 HPDA Project: M2 HPDA research projects (12,5 ECTS)
Web page: ?page=../common/research-projects-2023-2024
During the master, students learn research by doing research. Over the two years of the master, a student will thus spend one to two days each week in a research group, working on research projects with professors and PhD students of IP Paris.
- M2 Seminar: M2 Seminar (2,5 ECTS)
Web page: https://www.inf.telecom-sudparis.eu/pds/seminars/
Calendar: http://www-inf.telecom-sudparis.eu/COURS/masteripparis/hpda/?page=..%2Fcommon%2Fcourses&genics=M2 Seminar
The seminar consists of presentations of ongoing research work, both by students, on papers from conferences or journals, and by professors from IP Paris and other universities.