SHI Collaboration Profiles

Profile pages for Sustainable Horizons Institute SRP 2025-2026 Students and Faculty


Jianfeng Zhu

Kent State University

Computer Science

Biography

Jianfeng (Sally) Zhu is a Ph.D. candidate in Computer Science at Kent State University, working under the guidance of Dr. Hailong Jiang. Her current research explores the intersection of artificial intelligence and high-performance computing (HPC), focusing on whether large language models (LLMs) can understand and optimize compiler intermediate representations (IRs). She collaborates with Dr. Jiang on developing IR-driven learning frameworks to improve program resilience, fault prediction, and automatic optimization within HPC environments.

Academic Information

Status: PhD Student

Year in Program: 5th

Major/Specialty: Computer Science – Artificial Intelligence and High-Performance Computing

Degrees:

Ph.D. in Computer Science, Kent State University – In Progress (Expected 2026)

M.A. in Digital Sciences, Kent State University – 2019

Ph.D. & M.A. in Information Science, Wuhan University – 2014

B.A. in Computer Science, Wuhan University – 2000

Research Areas

Computer Science; Machine Learning/AI

Research Interests

My research interests lie at the intersection of artificial intelligence, high-performance computing (HPC), and compiler optimization. I am particularly focused on understanding how large language models (LLMs) can analyze and optimize compiler intermediate representations (IRs) to improve program resilience, scalability, and efficiency in HPC environments.

Working with Dr. Hailong Jiang, I have contributed to projects such as eHAPPA and HAPPA, which apply parameter-efficient tuning methods (e.g., LoRA) and representation learning to predict fault resilience and enhance code reliability. Our recent work, Can Large Language Models Understand Intermediate Representations?, investigates LLMs' ability to reason over structured program semantics, a step toward integrating AI-driven reasoning into software optimization pipelines.

Beyond systems-level AI, I also explore human-centered applications, using LLMs to model personality, emotion, and mental health from real-world language data. My long-term goal is to bridge computational systems and behavioral modeling, advancing both the theoretical understanding and practical deployment of trustworthy, adaptive AI across domains.

Topical Areas

Applied Computer Science; Artificial Intelligence and Intelligent Systems; Computer Science; Informatics, Analytics and Information Science; Performance Evaluation and Benchmarking

Relevant Coursework

I have completed core courses in Machine Learning, Data Mining, and Compiler Principles, which provided strong foundations in system architecture, memory management, and code optimization, all essential for my current research on large language models and compiler intermediate representations (IRs). In addition, I have taken High-Performance Computing and Big Data Analytics, which strengthened my ability to apply AI and optimization techniques to large-scale computational systems. Together, these courses have prepared me well for research in AI-driven compiler optimization and HPC system analysis.

Publications & Research Projects

1. H. Jiang, J. Zhu, B. Fang, K. Barker, C. Chen, R. Jin, and Q. Guan, "eHAPPA: Efficient and Scalable Resilience Prediction in HPC Applications with Low-Rank Adaptation," TechRxiv Preprint, pp. 1–12, Jun. 2025.

2. H. Jiang, J. Zhu, B. Fang, K. Barker, C. Chen, R. Jin, and Q. Guan, "HAPPA: A Modular Platform for HPC Application Resilience Analysis with LLMs Embedded," in Proceedings of the 43rd IEEE International Symposium on Reliable Distributed Systems (SRDS 2024), pp. 40–51, Oct. 2024.

3. H. Jiang, J. Zhu, Y. Wan, B. Fang, H. Zhang, R. Jin, and Q. Guan, "Can Large Language Models Understand Intermediate Representations?" in Proceedings of the 42nd International Conference on Machine Learning (ICML 2025), 2025.

Faculty Mentor

Hailong Jiang