Research
What if robustness, generalization, and interpretability are governed by the same geometric property?
My research investigates how the spectral properties of neural network weight matrices govern trustworthiness across deployment contexts. The central question: if we constrain the geometry of learned representations, can we address robustness failures, generalization failures, and opacity simultaneously, as manifestations of the same underlying pathology?
Forthcoming publication, 2026. Patent filed.
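As a toy illustration of the kind of spectral constraint at issue, one can cap the largest singular value of a weight matrix by projecting onto a spectral-norm ball. This is a generic sketch, not the method from the forthcoming paper; the function name and the bound of 1.0 are illustrative choices.

```python
import numpy as np

def clip_spectral_norm(W, max_sigma=1.0):
    """Project W onto {W : sigma_max(W) <= max_sigma} by clipping its
    singular values (a generic spectral constraint, for illustration)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.minimum(s, max_sigma)
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))      # toy weight matrix
W_c = clip_spectral_norm(W)        # largest singular value now at most 1.0
```

Clipping the singular values bounds the layer's Lipschitz constant, which is one standard route connecting spectral geometry to robustness.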
Selected Work
Computational Cognitive Modeling of Human Emotion
RoBERTa-large fine-tuning on GoEmotions (27 labels) with multi-GPU training and layerwise representation analysis
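GoEmotions labels are non-exclusive, so a fine-tuned head for this task uses a sigmoid per label rather than a single softmax. A minimal numpy sketch of the resulting multi-label loss, with toy 3-label logits standing in for the real 27-label model outputs:

```python
import numpy as np

def multilabel_bce(logits, targets):
    """Binary cross-entropy averaged over labels -- the standard loss for
    multi-label classification, where each label gets its own sigmoid."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(targets * np.log(probs + eps)
                    + (1 - targets) * np.log(1 - probs + eps))

logits = np.array([[2.0, -1.0, 0.5]])   # toy example with 3 labels
targets = np.array([[1.0, 0.0, 1.0]])   # multi-hot: labels 0 and 2 active
loss = multilabel_bce(logits, targets)
```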
Attention-Enhanced Interpretability in VGG16 for Object Recognition
Modified VGG16 with per-layer attention masks compared against saliency maps via correlation/IoU/SSIM/KL metrics
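Of the agreement metrics listed, IoU is the simplest to state: binarize both attribution maps and measure overlap. A hedged sketch with made-up toy maps (the 0.5 threshold is an assumption, not the project's actual setting):

```python
import numpy as np

def mask_iou(a, b, thresh=0.5):
    """Intersection-over-union between two binarized attribution maps,
    e.g. an attention mask and a saliency map over the same image."""
    A, B = a >= thresh, b >= thresh
    inter = np.logical_and(A, B).sum()
    union = np.logical_or(A, B).sum()
    return inter / union if union else 1.0  # identical empty masks agree fully

a = np.array([[1.0, 1.0], [0.0, 0.0]])  # toy attention mask
b = np.array([[1.0, 0.0], [0.0, 0.0]])  # toy saliency map
iou_val = mask_iou(a, b)
```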
Spotify Song Analysis
Exploratory data analysis and machine-learning models on Spotify track features to predict popularity and surface patterns in the data
Inductive-Bias Study: CNN vs FCNN on MNIST (2-D Latent Evolution)
Equal-capacity CNN and FCNN, each constrained to a 2-D latent space, with videos of embedding evolution during training that visualize the CNN's inductive bias
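Matching capacity between a CNN and an FCNN comes down to counting parameters per layer. A small arithmetic sketch with hypothetical layer sizes (the study's actual architectures may differ):

```python
def conv_params(c_in, c_out, k):
    """Parameters in a conv layer: one k x k x c_in filter plus bias per output channel."""
    return c_out * (c_in * k * k + 1)

def fc_params(n_in, n_out):
    """Parameters in a fully connected layer: weights plus one bias per output unit."""
    return n_out * (n_in + 1)

# Hypothetical first layers on 28x28 MNIST: a conv layer shares its small
# filters across positions, while an FC layer pays for every input pixel.
conv_first = conv_params(1, 8, 3)   # 8 filters of 3x3 on 1 input channel
fc_first = fc_params(784, 8)        # 8 units fully connected to 784 pixels
```

The gap (80 vs 6,280 parameters for the same output width) is why equal-capacity comparisons have to be budgeted deliberately rather than by matching layer widths.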
Fairness Audit of Jigsaw Toxicity Classifier
BERT, LSTM, and GPT-2 models evaluated with subgroup AUCs, demographic and error parity, SHAP explainability, and a custom SHarP fairness metric
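Subgroup AUC restricts the ordinary Mann-Whitney AUC to examples that mention a given identity group. A small numpy sketch with toy scores (not the audit's real data or its full metric suite):

```python
import numpy as np

def auc(scores, labels):
    """Mann-Whitney AUC: probability that a random positive example
    outranks a random negative one, counting ties as half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def subgroup_auc(scores, labels, group_mask):
    """AUC computed only over examples flagged as mentioning one subgroup."""
    return auc(scores[group_mask], labels[group_mask])

scores = np.array([0.9, 0.8, 0.3, 0.1])        # toy toxicity scores
labels = np.array([1, 1, 0, 0])                # toy ground truth
group = np.array([True, False, True, False])   # toy subgroup membership
```

Comparing subgroup AUCs against the overall AUC is what reveals whether a classifier ranks toxic vs. non-toxic comments worse for comments mentioning particular identities.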
Deep Q-Learning for Atari Boxing-v5 (FML Course)
DQN-based reinforcement learning agent trained on Atari Boxing-v5 with evaluation results
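The heart of DQN is the one-step temporal-difference target. A minimal numpy sketch of that update rule (generic DQN, with gamma=0.99 as an assumed discount; the agent's real training loop adds replay, target networks, and epsilon-greedy exploration on top):

```python
import numpy as np

def dqn_target(reward, q_next, done, gamma=0.99):
    """One-step TD target: r + gamma * max_a' Q(s', a'), with the
    bootstrap term zeroed when the episode has terminated."""
    return reward + gamma * (1.0 - done) * np.max(q_next, axis=-1)

# Toy transition: reward 1.0, next-state Q-values for two actions.
target = dqn_target(1.0, np.array([0.5, 2.0]), done=0.0)       # bootstraps
terminal = dqn_target(1.0, np.array([0.5, 2.0]), done=1.0)     # reward only
```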