Federated Learning with Layer Skipping: Efficient Training of Large Language Models for Healthcare NLP

Publication date: Apr 13, 2025

Federated learning (FL) enables collaborative model training across organizations without sharing raw data, addressing crucial privacy concerns in healthcare natural language processing (NLP). However, training large language models (LLMs) in federated settings faces significant challenges, including communication overhead and data heterogeneity. We propose Layer-Skipping Federated Learning, where only selected layers of a pre-trained LLM are fine-tuned across clients while others remain frozen. Applied to LLaMA 3.2-1B, our approach reduces communication costs by approximately 70% while maintaining performance within 2% of centralized training. We evaluate our method on clinical named entity recognition (NER) and classification tasks using the i2b2 and MIMIC-III datasets. Our experiments demonstrate that Layer-Skipping FL outperforms competitive baselines, handles non-IID clinical data distributions effectively, and shows robustness when combined with differential privacy. This approach represents a practical solution for privacy-preserving collaborative learning in healthcare NLP.
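
The layer-skipping mechanism described in the abstract can be sketched in a few dozen lines of PyTorch. The sketch below is an illustrative assumption, not the paper's implementation: the helper names (freeze_except_layers, local_update, fedavg), the toy stand-in model, the hyperparameters, and the Hugging Face-style parameter naming ("layers.{i}.") used to select transformer blocks are all hypothetical; the paper itself applies the idea to LLaMA 3.2-1B on clinical tasks.

```python
# Hypothetical sketch of Layer-Skipping Federated Learning: clients fine-tune only
# selected transformer blocks and exchange only those weights with the server.
from copy import deepcopy
import torch
import torch.nn as nn

def freeze_except_layers(model: nn.Module, trainable_layers: set) -> None:
    """Freeze every parameter except those in the selected blocks
    (assumes Hugging Face-style names such as 'model.layers.3.self_attn.q_proj.weight')."""
    for name, param in model.named_parameters():
        param.requires_grad = any(f"layers.{i}." in name for i in trainable_layers)

def trainable_state(model: nn.Module) -> dict:
    """Only the unfrozen parameters are communicated, cutting bandwidth roughly in
    proportion to the fraction of layers that stay frozen."""
    return {n: p.detach().clone() for n, p in model.named_parameters() if p.requires_grad}

def local_update(model: nn.Module, batches, lr: float = 1e-4, device: str = "cpu") -> dict:
    """One round of local fine-tuning on a client; returns only the trainable weights."""
    model.to(device).train()
    opt = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in batches:
        opt.zero_grad()
        loss = loss_fn(model(x.to(device)), y.to(device))
        loss.backward()
        opt.step()
    return trainable_state(model)

def fedavg(client_states: list, client_sizes: list) -> dict:
    """Server-side weighted average over the communicated (trainable) parameters only."""
    total = sum(client_sizes)
    return {
        name: torch.stack([s[name] * (n / total) for s, n in zip(client_states, client_sizes)]).sum(dim=0)
        for name in client_states[0]
    }

class ToyBlockModel(nn.Module):
    """Tiny stand-in with a 'layers' ModuleList; a real run would use LLaMA 3.2-1B."""
    def __init__(self, dim: int = 32, n_layers: int = 8, n_classes: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))
        self.head = nn.Linear(dim, n_classes)
    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return self.head(x)

if __name__ == "__main__":
    global_model = ToyBlockModel()
    freeze_except_layers(global_model, trainable_layers={0, 3, 7})  # fine-tune a few blocks

    # Two simulated clients with small random local datasets (stand-in for non-IID clinical data).
    clients = [[(torch.randn(16, 32), torch.randint(0, 4, (16,))) for _ in range(5)]
               for _ in range(2)]
    states = [local_update(deepcopy(global_model), data) for data in clients]
    new_global = fedavg(states, [len(c) for c in clients])
    global_model.load_state_dict(new_global, strict=False)  # merge only the shared layers
    print(f"communicated tensors per round: {len(new_global)}")
```

Because each round uploads only the tensors of the unfrozen blocks, per-round communication scales with the fraction of layers selected for fine-tuning, which is the lever behind the reported ~70% reduction relative to full-model federated fine-tuning.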

Concepts and Keywords

Concepts: Cardiology, Dj, Efficient, Harvard
Keywords: Communication, Federated, Fine, Healthcare, Layer, Layers, Learning, Models, Parameters, Privacy, Server, Skipping, Trainable, Training, Tuning

Semantics

Type     Source    Name
disease  MESH      privacy
drug     DRUGBANK  Aspartame
drug     DRUGBANK  Coenzyme M
drug     DRUGBANK  Chlorhexadol
drug     DRUGBANK  Flunarizine
drug     DRUGBANK  Esomeprazole
