Fatema Siddika
she/her/hers
Iowa State University
Computer Science
Biography
Fatema Siddika is a Ph.D. student in Computer Science at Iowa State University, working in the Laboratory for Software Analytics and Pervasive Parallelism (SwAPP) under the supervision of Dr. Ali Jannesari. Her research focuses on advancing Federated Learning (FL) in heterogeneous environments through representation learning, sparse aggregation, and efficient fine-tuning. She aims to align heterogeneous models and enhance global learning while ensuring privacy, minimizing information loss during aggregation, and reducing communication overhead. Her recent work explores parameter-efficient techniques such as LoReFT, DP-SGD, and LoRA to align heterogeneous modules with varying ranks and to optimize foundation models for FL settings. Fatema also served as a Givens Research Associate at Argonne National Laboratory, where she applied fine-tuning and test-time scaling methods to improve large language model alignment and reasoning in scientific domains. Previously, she was a Lecturer at Jagannath University and BRAC University in Bangladesh. Her honors include the Prime Minister Gold Medal, the IEEE WISC Scholarship, the Google CS Research Mentorship Fellowship, and the WiCyS Scholarship. Her long-term goal is to design scalable, privacy-preserving, and communication-efficient learning frameworks for distributed intelligent systems.
Academic Status
Ph.D. Student, 5th year
Research Area/Department
Computer Science; Data Science; Engineering; Machine Learning/AI
Major/Specialty
Computer Science
Degrees Earned or in Progress
Ph.D. in Computer Science, in progress, expected 2026
Academic Preparation
I have completed several advanced Computer Science courses that have strengthened my expertise in distributed systems, machine learning, and data privacy, all of which are directly relevant to my research and internship goals. Key courses include:
- COM S 554: Distributed Systems and COM S 652: Advanced Topics in Distributed Systems, which provided in-depth knowledge of distributed coordination, fault tolerance, and scalability, all fundamental to Federated Learning.
- COM S 553: Privacy-Preserving Algorithms and Data Security and COM S 559: Security and Privacy in Cloud Computing, which deepened my understanding of differential privacy, secure aggregation, and data confidentiality.
- COM S 574: Introduction to Machine Learning and COM S 579: Natural Language Processing, where I gained experience with model optimization, fine-tuning, and representation learning.
- COM S 511: Advanced Algorithms, COM S 531: Theory of Computation, and COM S 527: Concurrent Systems, which enhanced my analytical, algorithmic, and concurrency reasoning skills.
- COM S 5980: Graduate Internship, which provided hands-on research and professional experience in applying large-scale model optimization and privacy-preserving techniques to real-world distributed AI systems.
These courses collectively prepared me to contribute to large-scale, privacy-preserving AI research and distributed learning projects, bridging theoretical foundations with practical implementation.
Research/Publications
My research has been conducted in the Laboratory for Software Analytics and Pervasive Parallelism (SwAPP) at Iowa State University, under the supervision of Dr. Ali Jannesari. I focus on advancing Federated Learning (FL) through representation learning, sparse aggregation, and efficient fine-tuning. I also served as a Givens Research Associate at Argonne National Laboratory, where I worked on large language model (LLM) optimization, post-training adaptation (LoRA, DPO), and test-time scaling for scientific applications. My work has led to multiple publications and submissions, including:
- Fair Bandwidth Allocation at Edge Servers for Hierarchical, Distributed, and Concurrent FL Processes (FMEC 2025, published).
- FedReFT: Federated Representation Fine-Tuning with All-But-Me Aggregation (under review, 2025).
- FedProtoKD: Dual Knowledge Distillation with Adaptive Prototype Margin for Heterogeneous FL (under review, 2025).
- TASA: Task-Aware Sparse Aggregation for Efficient Multi-Task Tuning (under review, 2025).
These experiences strengthened my ability to bridge theoretical innovation with scalable, privacy-preserving, and communication-efficient learning in distributed AI systems.
Research/Academic Interests
My research interests lie at the intersection of Federated Learning (FL), Representation Learning, and Efficient Fine-Tuning of Foundation Models. I focus on developing communication-efficient and privacy-preserving techniques for heterogeneous and large-scale distributed systems. A central theme of my work is addressing the challenge of model and data heterogeneity: specifically, how to align diverse client representations while minimizing information loss during aggregation. I am particularly interested in sparse and adaptive aggregation, parameter-efficient fine-tuning, and representation alignment across heterogeneous models to improve generalization in non-IID environments. My recent work explores techniques such as LoReFT, DP-SGD, and LoRA to align modules of varying ranks, enabling scalable and efficient optimization of foundation models in FL settings. Broadly, my goal is to design trustworthy, efficient, and adaptive learning frameworks that bridge theoretical rigor with practical deployment in real-world, privacy-constrained distributed AI systems.
Computational and Data Science Areas
Computer Science
Motivation
I am deeply motivated to participate in the Sustainable Research Pathways program because it represents the kind of community I believe research should be built on — one that values collaboration, mentorship, and inclusion. My research in Federated Learning and distributed AI often reminds me that progress depends on connection: diverse systems, people, and ideas all working together to achieve something larger. Through this program, I hope to grow both as a researcher and as a mentor. Working with the NAIRR projects would give me the opportunity to apply my technical skills in privacy-preserving learning to real scientific challenges, while also learning from faculty and peers with different perspectives. I am especially drawn to the program’s focus on sustainable mentorship and community building. I want to be part of a research culture that not only advances technology but also ensures that everyone’s contribution is valued and that innovation truly reflects collective effort.
Lightning Talk Title
Adaptive Sparse Fine-Tuning and Multi-Task Continual Learning in Federated LLMs
Keywords (Maximum 20 words)
Parameter-Efficient Fine-Tuning; Federated Representation Learning; Continual Learning; Sparse Fine-Tuning; Federated Learning; Large Language Models; Mixture-of-Experts; Knowledge Retention; Task Heterogeneity