Vivek Sivaraman Narayanaswamy

  • Title
    Machine Learning Research Scientist
  • Email
    narayanaswam1@llnl.gov
  • Organization
    COMP-CASC DIV-CENTER FOR APPLIED SCIENTIFIC COMPUTING DIVISION

Vivek Sivaraman Narayanaswamy is a Machine Learning Research Scientist with the Machine Intelligence Group at the Center for Applied Scientific Computing (CASC), specializing in deep learning with applications in computer vision and scientific machine learning. In this role, he has led the development of novel neural network surrogates for scientific simulations, incorporating built-in uncertainty quantification to improve reliability. He has also designed novel training and inference protocols to enhance model extrapolation, calibration, and anomaly/outlier detection in AI systems. His research interests encompass generative modeling, inverse problems, explainable AI, robustness, and out-of-distribution (OOD) generalization, with a focus on building trustworthy AI models for real-world deployment.

Dr. Narayanaswamy earned his Ph.D. in Electrical Engineering from Arizona State University in January 2023, completing a dissertation on methods to improve the fidelity and reliability of deep learning models. He received his Bachelor of Engineering in Electronics and Communication from Anna University, India, in 2017. During his doctoral studies, he interned at LLNL for three consecutive summers (2019–2021), tackling challenges in inverse problems, explainable AI, and surrogate modeling. He has published extensively at top venues such as NeurIPS, ICML, ECCV, AAAI, Interspeech, and ICASSP. His major contributions include advancing uncertainty quantification techniques for deep neural networks, improving anomaly rejection in medical imaging, and leveraging foundation models (vision-language and large language models) to design trustworthy AI systems. He has also co-mentored students on cutting-edge topics such as foundation models, conformal prediction, and inverse problems, reflecting his commitment to collaborative research and mentorship.

Education

Ph.D. Electrical Engineering, Arizona State University, Tempe, AZ - January 2023

M.S. Electrical Engineering, Arizona State University, Tempe, AZ - May 2021

B.E. Electronics and Communication Engineering, Anna University, Chennai, India - May 2017

Journals

  • Narayanaswamy, V., Ayyanar, R., Tepedelenlioglu, C., Srinivasan, D., & Spanias, A. (2023): “Optimizing Solar Power Using Array Topology Reconfiguration With Regularized Deep Neural Networks.” IEEE Access, 11, 7461–7470. (Journal article presenting a deep learning approach for reconfiguring solar array topology to maximize power output under varying conditions.)
  • Rao, S., Narayanaswamy, V., Esposito, M., Thiagarajan, J. J., & Spanias, A. (2021): “COVID-19 detection using cough sound analysis and deep learning algorithms.” Intelligent Decision Technologies, 15(4), 655–665. (Demonstrates a machine learning framework for detecting COVID-19 from cough audio, including hyperparameter tuning for robust performance.)
  • Rajkumar, S., Sivaraman, N. V., Murali, S., & Selvan, K. T. (2017): “Heptaband swastik arm antenna for MIMO applications.” IET Microwaves, Antennas & Propagation, 11(9), 1255–1261. (Early work on antenna design leveraging novel multi-band configurations.)
  • Sattigeri, P., Thiagarajan, J. J., Ramamurthy, K., Spanias, A., Banavar, M., Dixit, A., Fan, J., Rao, S., Shanthamallu, U. S., Narayanaswamy, V., & Katoch, S. (2021): “Instruction Tools for Signal Processing and Machine Learning for Ion-Channel Sensors.” International Journal of Virtual and Personal Learning Environments (IJVPLE), 12(1), Article 12 (July 2021), 15 pages. (Describes educational tools integrating DSP, machine learning, and sensor data for remote learning.)

Conference Papers (Peer-Reviewed)

  • Thopalli, K., Narayanaswamy, V., & Thiagarajan, J. J. (2025): “Group Conformal Prediction.” In Uncertainty Quantification for Computer Vision Workshop (UnCV 2025) at CVPR 2025. (Accepted workshop paper proposing a conformal prediction approach for group-conditioned uncertainty calibration in computer vision.)
  • Narayanaswamy, V., Thopalli, K., Anirudh, R., Mubarka, Y., Sakla, S., & Thiagarajan, J. J. (2024): “On the Use of Anchoring for Training Vision Models.” In Advances in Neural Information Processing Systems (NeurIPS 2024). (Spotlight presentation; proposes an anchoring technique to improve vision model training stability and robustness.)
  • Narayanaswamy, V., Thopalli, K., Subramanyam, R., & Thiagarajan, J. J. (2024): “DECIDER: Leveraging Foundation Model Priors for Improved Model Failure Detection and Explanation.” In European Conference on Computer Vision (ECCV 2024). (Introduces a method using foundation model priors to better detect and explain failures in vision models.)
  • Thiagarajan, J. J., Narayanaswamy, V., Trivedi, P., & Anirudh, R. (2024): “PAGER: A Framework for Failure Analysis of Deep Regression Models.” In International Conference on Machine Learning (ICML 2024). (Develops a framework for analyzing and understanding failures in deep regression models.)
  • Narayanaswamy, V., Anirudh, R., & Thiagarajan, J. J. (2024): “The Double-Edged Sword of AI Safety: Balancing Anomaly Detection and OOD Generalization via Model Anchoring.” In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024). (Examines how an anchoring-based approach can improve anomaly detection without sacrificing generalization to out-of-distribution data.)
  • Narayanaswamy, V., Mubarka, Y., Anirudh, R., Rajan, D., Spanias, A., & Thiagarajan, J. J. (2023): “Know Your Space: Inlier and Outlier Construction for Calibrating Medical OOD Detectors.” In Medical Imaging with Deep Learning (MIDL 2023). (Presents a data augmentation strategy to construct inliers/outliers that significantly improves out-of-distribution detection in medical image classifiers.)
  • Thiagarajan, J. J., Narayanaswamy, V., Rajan, D., Liang, J., Chaudhari, A., & Spanias, A. (2021): “Designing counterfactual generators using deep model inversion.” In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pp. 16873–16884. (Develops a technique to generate counterfactual examples for model interpretability by inverting deep models.)
  • Thiagarajan, J. J., Anirudh, R., Narayanaswamy, V., & Bremer, P. T. (2021): “Accurate and robust feature importance estimation under distribution shifts.” In Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), pp. 7891–7898 (AAAI 2021). (Proposes a robust method for estimating feature importance that remains reliable even when data distribution shifts from training to deployment.)
  • Thiagarajan, J. J., Anirudh, R., Narayanaswamy, V., & Bremer, P. T. (2022): “Single Model Uncertainty Estimation via Stochastic Data Centering.” In Advances in Neural Information Processing Systems (NeurIPS 2022). (Introduces a technique for estimating prediction uncertainty from a single trained model by using stochastic alterations of input data.)

Awards & Honors

  • Spotlight Presentation – NeurIPS 2024: Received a spotlight (top 15% of accepted papers) for the paper “On the Use of Anchoring for Training Vision Models” at NeurIPS 2024.
  • Computing Research SLAM Award (2022): Won 3rd Place in an elevator pitch research competition (SLAM) for presenting innovative research in computing.
  • LLNL Director’s Award for Publication (2022): Awarded the DDS&T Excellence in Publication Award at LLNL for the NeurIPS 2021 paper “Designing Counterfactual Generators Using Deep Model Inversion”.
  • Spotlight Presentation – NeurIPS 2022: Earned a spotlight presentation (top 15% of papers) for “Single Model Uncertainty Estimation via Stochastic Data Centering” at NeurIPS 2022.
  • Travel Grant – Interspeech 2020: Received a competitive travel grant from ISCA to attend and present at the Interspeech 2020 conference.
  • Travel Grant – ICASSP 2019: Received an IEEE travel grant to present research at the ICASSP 2019 conference.

Google Scholar - https://scholar.google.com/citations?user=7h2Ui6YAAAAJ&hl=en
Github - https://github.com/vivsivaraman