Verifiable Independence in Decentralized Collaborative Learning: Information-Theoretic Foundations and Cryptographic Proofs for Privacy-Preserving Cybersecurity

17 Mar 2026

Job Information

Organisation/Company
IMT Atlantique
Department
Doctoral division
Research Field
Computer science » Informatics
Researcher Profile
First Stage Researcher (R1)
Positions
PhD Positions
Application Deadline
Country
France
Type of Contract
Temporary
Job Status
Full-time
Offer Starting Date
Is the job funded through the EU Research Framework Programme?
Not funded by an EU programme
Is the Job related to staff position within a Research Infrastructure?
No

Offer Description

Context

IMT Atlantique, internationally recognised for the quality of its research, is a leading general engineering school under the aegis of the Ministry of Industry and Digital Technology, ranked in the three main international rankings (THE, Shanghai, QS).
Located on three campuses (Brest, Nantes and Rennes), IMT Atlantique aims to combine digital technology and energy to transform society and industry through training, research and innovation, and to become the leading French higher education and research institution in this field on an international scale. With 290 researchers and permanent lecturers, 1,000 publications and €18M of contracts per year, it supervises 2,300 students annually; its training courses are based on cutting-edge research carried out within six joint research units: GEPEA, IRISA, LATIM, LABSTICC, LS2N and SUBATECH. The proposed thesis is part of the research activities of the SOTERN team, within the IRISA laboratory and the SRCD department, whose scientific activities cover networked systems, cybersecurity and digital law.

The thesis is embedded in the CyberCNI Chair (Cybersecurity for Critical National Infrastructures), one of the largest academic–industrial cybersecurity initiatives in France, bringing together major industrial stakeholders and academic experts to address real-world cybersecurity challenges.
The candidate will join a dynamic, international, and interdisciplinary research environment at the intersection of:
• machine learning
• cybersecurity
• distributed systems
• privacy-enhancing technologies
The research will be carried out within the (team name) at LS2N, focusing on trustworthy AI and cybersecurity for critical infrastructures.
The PhD student will benefit from:
• close collaboration with industrial partners
• access to real-world cybersecurity datasets and challenges
• opportunities to publish in top-tier conferences and journals (NeurIPS, IEEE S&P, CCS, NDSS, etc.)
• an excellent research atmosphere and supervision

 

Scientific context

Modern cybersecurity is shifting from isolated defenses toward collaborative, distributed intelligence. Techniques such as Federated Learning (FL), Swarm Learning (SL), and Transfer Learning enable organizations to jointly train models without sharing raw sensitive data.
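For illustration, the core idea behind these approaches can be sketched as one federated averaging round in the style of FedAvg: each client computes a local update on its own data and only the updates, never the raw data, reach the server. The client datasets and the toy "training" step below are made-up stand-ins, not part of the thesis description.

```python
# Minimal federated-averaging sketch: clients share updates, not data.
def local_update(weights, data, lr=0.1):
    # toy gradient step: pull each weight toward the client's data mean
    mean = sum(data) / len(data)
    return [w - lr * (w - mean) for w in weights]

def fedavg(global_weights, client_datasets):
    # the server only sees the per-client updated weights
    updates = [local_update(global_weights, d) for d in client_datasets]
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(global_weights))]

clients = [[1.0, 2.0, 3.0], [10.0, 11.0], [5.0]]  # raw data stays local
w = [0.0, 0.0]
for _ in range(5):
    w = fedavg(w, clients)
print(w)  # global weights drift toward the average of client means
```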
However, these approaches rely on a critical and largely unverified assumption: that trained models do not leak sensitive information about their training data.
Recent research has shown that this assumption is often violated through:
• gradient inversion attacks
• membership inference attacks
• unintended memorization
These attacks demonstrate that models can retain and expose sensitive data, raising fundamental concerns for applications in critical infrastructures, where privacy and confidentiality are paramount.
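As a toy illustration of the simplest such attack, a loss-threshold membership inference attack exploits the fact that memorized training points tend to have lower loss than unseen points. The loss distributions below are synthetic assumptions chosen for illustration, not results from any real model.

```python
# Loss-threshold membership inference sketch: predict "member" iff
# the per-example loss falls below a threshold. Synthetic losses only.
import random

random.seed(0)

# Assumed distributions: members were memorized (low loss),
# non-members were not (higher, noisier loss).
member_losses = [random.gauss(0.2, 0.1) for _ in range(1000)]
nonmember_losses = [random.gauss(0.8, 0.3) for _ in range(1000)]

def attack(loss, threshold=0.5):
    """Guess membership from the loss alone."""
    return loss < threshold

correct = sum(attack(l) for l in member_losses) \
        + sum(not attack(l) for l in nonmember_losses)
accuracy = correct / 2000
print(f"attack accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```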
At the same time, emerging techniques such as:
• differential privacy
• machine unlearning
• information-theoretic generalization bounds
offer partial solutions but lack formal guarantees and verifiability in decentralized environments.
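The best-known of these partial solutions, differential privacy for training, amounts to clipping per-example gradients and adding calibrated Gaussian noise before aggregation (the DP-SGD recipe). The sketch below shows only that mechanical step, with illustrative constants and without any privacy accounting.

```python
# DP-SGD aggregation step, sketched: clip each per-example gradient
# to L2 norm C, sum, add Gaussian noise scaled by C, then average.
import math
import random

random.seed(0)

def clip(grad, C=1.0):
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, C / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_average(per_example_grads, C=1.0, noise_multiplier=1.1):
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    clipped = [clip(g, C) for g in per_example_grads]
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    # noise scale ties the released statistic's sensitivity to C
    noisy = [s + random.gauss(0.0, noise_multiplier * C) for s in summed]
    return [x / n for x in noisy]

grads = [[random.uniform(-2, 2) for _ in range(4)] for _ in range(8)]
update = dp_average(grads)
print(update)
```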
This thesis addresses a central open problem:
How can we define, measure, achieve, and prove independence between training data and learned models in collaborative learning systems?

Expected contributions of the Thesis

The thesis aims to establish foundations for verifiable privacy in collaborative machine learning, with contributions at the intersection of theory, systems, and cybersecurity.
Key expected contributions include:

1. Theoretical Foundations
• Formal definition of data–model independence
• Information-theoretic bounds (mutual information, stability)
• Analysis of privacy–utility trade-offs
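To make the mutual-information viewpoint concrete, a plug-in estimate of I(S; M) between a sensitive bit S of the training data and a discretised model output M can be computed directly from joint samples. The sample counts below are synthetic and chosen only to show a clearly non-zero leakage.

```python
# Plug-in mutual information estimate from joint (S, M) samples.
from collections import Counter
from math import log2

# Synthetic joint samples: S and M are strongly correlated here.
samples = [(0, 0)] * 40 + [(0, 1)] * 10 + [(1, 0)] * 10 + [(1, 1)] * 40

n = len(samples)
joint = Counter(samples)
ps = Counter(s for s, _ in samples)
pm = Counter(m for _, m in samples)

# I(S; M) = sum over (s, m) of p(s, m) * log2( p(s, m) / (p(s) p(m)) )
mi = sum(
    (c / n) * log2((c / n) / ((ps[s] / n) * (pm[m] / n)))
    for (s, m), c in joint.items()
)
print(f"I(S; M) ~ {mi:.3f} bits")  # > 0: the model output leaks about S
```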

2. Attack Modeling and Empirical Analysis
• Implementation and extension of: gradient inversion attacks, membership inference attacks, memorization metrics
• Identification of failure modes in real cybersecurity datasets

3. Quantification of Dependence
• Metrics based on: influence functions, Shapley values, gradient-based signals
• Detection of high-risk contributors in collaborative learning
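Of these metrics, Shapley values are the easiest to illustrate exactly on a small example: each client's score is its marginal contribution to model quality, averaged over all join orders. The three-client `utility` function below, including its synergy term, is entirely made up for illustration.

```python
# Exact Shapley values for scoring client contributions.
from itertools import permutations

clients = ["A", "B", "C"]

def utility(coalition):
    # hypothetical model-quality function over a set of clients
    base = {"A": 0.5, "B": 0.3, "C": 0.1}
    v = sum(base[c] for c in coalition)
    if "A" in coalition and "B" in coalition:  # assumed A-B synergy
        v += 0.1
    return v

def shapley(clients, utility):
    values = {c: 0.0 for c in clients}
    perms = list(permutations(clients))
    for order in perms:
        seen = set()
        for c in order:  # marginal gain of c given who joined before it
            before = utility(seen)
            seen.add(c)
            values[c] += (utility(seen) - before) / len(perms)
    return values

sv = shapley(clients, utility)
print(sv)  # scores sum to the full-coalition utility (efficiency)
```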

4. Verifiable Privacy Mechanisms
• Design of machine unlearning protocols
• Integration of cryptographic proofs (e.g., zero-knowledge proofs)
• Formal guarantees of data removal and independence
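As a deliberately simplified sketch of what "verifiable removal" could mean, consider unlearning by retraining on the retained data, with a hash commitment a verifier can recompute. This hypothetical construction stands in for the much heavier zero-knowledge machinery mentioned above.

```python
# Unlearning-by-retraining with a recomputable dataset commitment.
import hashlib

def commit(dataset):
    # order-independent commitment to the dataset contents
    h = hashlib.sha256()
    for record in sorted(dataset):
        h.update(record.encode())
    return h.hexdigest()

def unlearn(dataset, record):
    retained = [r for r in dataset if r != record]
    return retained, commit(retained)

data = ["alice", "bob", "carol"]
retained, proof = unlearn(data, "bob")
# a verifier holding the retained records recomputes the commitment
assert proof == commit(["carol", "alice"])
print("removal verified")
```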

5. Systems and Cybersecurity Evaluation
• Implementation in federated/swarm learning environments
• Evaluation on intrusion detection and cybersecurity datasets
• Comparison with differential privacy baselines

Required skills

We are looking for an exceptional and highly motivated candidate with:

Required:
• MSc (or equivalent) in: Computer Science, Cybersecurity, Machine Learning, or related field
• Strong background in: machine learning / deep learning, mathematics (probability, linear algebra)
• Solid programming skills (Python, PyTorch or similar)

Highly valued:
• Knowledge of one or more of: federated learning / distributed systems, privacy-enhancing technologies, cryptography, cybersecurity (e.g., IDS, malware analysis)
• Experience with research (internships, publications)

Personal qualities:
• curiosity and scientific rigor
• autonomy and initiative
• ability to work in an international environment
• strong communication skills in English

Work Plan

The PhD is structured over 36 months:

Year 1: Foundations and attack modeling
• literature review
• implementation of baseline attacks
• initial theoretical formulation

Year 2: Metrics and theoretical development
• dependence quantification (IF, Shapley, MI)
• first publications

Year 3: Verifiable unlearning and system integration
• cryptographic proof mechanisms
• full system evaluation
• thesis writing and publications

Additional Information

• Application deadline: at the earliest, e.g. 1/5/2026
• Start date: at the earliest, e.g. 1/7/2026
• Contract duration: 36 months
• Location: Rennes
• Contact(s): Marc-Oliver.Pahl@imt-atlantique.fr

Where to apply

E-mail
marc-oliver.pahl@imt-atlantique.fr

Requirements

Research Field
Computer science » Informatics
Education Level
Master Degree or equivalent
Skills/Qualifications

See the "Required skills" section above.

Languages
ENGLISH
Level
Good
Internal Application form(s) needed
Verifiable Independence in Decentralized Collaborative Learning.pdf
English
(322.63 KB - PDF)

Additional Information

Work Location(s)

Number of offers available
1
Company/Institute
IMT Atlantique Bretagne - Pays de la Loire
Country
France
City
Rennes
Postal Code
35576
Street
2, rue de la Châtaigneraie
Contact

City
Brest, Nantes, Rennes
E-Mail
marc-oliver.pahl@imt-atlantique.fr
