Sai Karthik Navuluru
he/him/his
University of Texas at Dallas
Computer Engineering
Biography
Sai Karthik Navuluru is a PhD student in Computer Engineering at the University of Texas at Dallas. His research focuses on scientific machine learning with an emphasis on Graph Neural Networks, high-performance computing, and constrained optimization for large-scale, multimodal scientific data. His prior work spans computational chemistry workflows (NWChem), online change-point detection, and action recognition, supported by experience across academia and industry (including Deloitte). Karthik holds an M.S. in Data Science from the University of New Haven and a B.Tech in Electronics and Communication from SRM University. His publications include work on HPC-informed dimensionality reduction, parallelization strategies for performance bounds, and computer vision for cognitive assessment. He is motivated by building scalable, reliable ML methods that bridge theory and practice in scientific computing.
Academic Information
Status: PhD Student
Year in Program: 2nd
Major/Specialty: I am a PhD student in Computer Engineering at the University of Texas at Dallas, specializing in scientific machine learning with an emphasis on Graph Neural Networks (GNNs), High-Performance Computing (HPC), and constrained optimization. My research centers on scalable methods for multimodal data integration and large-scale optimization, including DOE-aligned work on HPC-driven modeling and performance analysis on GPU supercomputers. I bring a strong applied foundation from an M.S. in Data Science (University of New Haven), focused on machine learning, deep learning, and NLP, and a mathematically rigorous B.Tech in Electronics and Communication Engineering (SRM University), which built depth in calculus, probability, and signal processing. Across projects and publications, I integrate statistical modeling with HPC and graph-based methods to advance data-driven discovery in scientific computing and computational engineering.
Degrees:
Ph.D. / Computer Engineering / In Progress (Aug 2024 – Present) – University of Texas at Dallas
M.S. / Data Science / 2023 – University of New Haven
B.Tech. / Electronics and Communication Engineering / 2021 – SRM University
Research Areas
Applied Mathematics; Computer Science; Data Science; Machine Learning/AI; Mathematics
Research Interests
I am interested in building scalable, reliable ML systems for scientific computation. Methodologically, I focus on graph learning (message passing, spectral methods, geometric priors), constrained and stochastic optimization, and Bayesian inference for principled uncertainty quantification. On the systems side, I work on parallel and distributed training, data layout and graph partitioning, GPU utilization and throughput, and algorithm–architecture co-design on HPC platforms. Application threads include computational chemistry (structure–property prediction, surrogate modeling for simulations), multimodal data integration across experiments and simulations, and online change detection in scientific data streams. I am particularly motivated by problems where physical constraints and scaling limits matter: designing models that respect conservation and feasibility constraints while achieving end-to-end performance on supercomputers.
Topical Areas
Applied Computer Science; Applied Mathematics; Computer Science; Performance Evaluation and Benchmarking; Statistics and Probability; Training; Visualization and Human-Computer Systems
Relevant Coursework
I have completed rigorous coursework and research that directly support a summer internship in scientific computing and ML. Core CS and systems classes built strong fundamentals, while advanced ML/statistics and math courses developed my modeling and analysis skills for large-scale, multimodal data.
Graph Theory (PhD, UT Dallas): Graph algorithms, connectivity, spectral properties; foundation for GNN design and analysis.
Bayesian Data Analysis (PhD, UT Dallas): Probabilistic modeling, inference, and uncertainty quantification for data-driven decision-making.
Computer Architecture (PhD, UT Dallas): Memory hierarchies, parallelism, GPU/CPU performance considerations; useful for HPC workflows.
Machine Learning (M.S., Univ. of New Haven): Supervised/unsupervised methods, model evaluation, pipelines.
Deep Learning (M.S., Univ. of New Haven): CNNs/RNNs/transformers, optimization, regularization; hands-on PyTorch/TensorFlow.
Natural Language Processing (M.S., Univ. of New Haven): Sequence models, embeddings, transformers; transferable to multimodal ML.
Mathematics (B.Tech, SRM): Calculus, Advanced Calculus, Complex Analysis, Probability & Random Processes; continuous and discrete math for ML and optimization.
Publications & Research Projects
1. AI-Driven Dimensionality Reduction for Scientific Computing with HPC. MSDS 2024.
2. A Systematic Study of Parallelization Strategies for Optimizing Scientific Computing Performance Bounds. SOCC 2024.
3. Mind in Action: Cognitive Assessment Using Action Recognition. DEXA 2023 (LNCS).