Advancing State-of-the-Art Vector Search with Advanced Computing Architectures
Graduate Researcher specializing in Hierarchical Navigable Small World graphs, Production-Scale RAG Systems, and FPGA and Parallel Computing
M.S. Computer Engineering @ Arizona State University
From honors undergraduate to doctoral researcher, pioneering efficient ML systems
Starting August 2026 | Advisor: Dr. Jeff Zhang
Graduating May 2026
Graduated May 2025 | Summa Cum Laude, GPA: 3.86
Advancing research in efficient machine learning systems
Why: Manual neural network design is inefficient and often suboptimal for specific hardware
Automating discovery of optimal neural architectures for resource-constrained environments. My NEXUS-NAS framework uses evolutionary algorithms and reinforcement learning for Pareto-optimal architectures.
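For context, here is a minimal sketch of the Pareto-filtering step such a multi-objective search relies on; the candidate names and accuracy/latency numbers below are made up for illustration and are not from NEXUS-NAS itself.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str          # hypothetical architecture identifier
    accuracy: float    # validation accuracy (higher is better)
    latency_ms: float  # measured on-target latency (lower is better)

def pareto_front(candidates):
    """Keep candidates that no other candidate dominates on both objectives."""
    front = []
    for c in candidates:
        dominated = any(
            o.accuracy >= c.accuracy and o.latency_ms <= c.latency_ms
            and (o.accuracy > c.accuracy or o.latency_ms < c.latency_ms)
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

# Toy population; a real search loop would mutate and re-evaluate candidates.
population = [
    Candidate("conv-small", 0.91, 3.2),
    Candidate("conv-wide", 0.93, 7.8),
    Candidate("mbnet-like", 0.92, 2.9),
    Candidate("vit-tiny", 0.90, 6.1),
]
for c in pareto_front(population):
    print(c)
```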
Why: Classical computing faces fundamental limits in solving certain optimization problems
Exploring quantum algorithms for similarity search and optimization in high-dimensional spaces. Investigating QAOA for large-scale recommendation systems and quantum-enhanced HNSW.
Why: Software-only solutions cannot meet ultra-low latency requirements for real-time AI
Developing FPGA implementations of HNSW and NAS algorithms. Targeting sub-microsecond query latency through custom datapaths and memory hierarchies for edge deployment.
Contributing to the advancement of efficient ML systems
Design Automation Conference (DAC) 2026
Novel density-aware adaptive quantization achieving 4× compression while preserving distance relationships. Demonstrates 1.8-2.5× higher QPS than state-of-the-art HNSW implementations.
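As a rough illustration of the general idea (not the published method), the toy quantizer below codes each vector relative to its nearest centroid with a step size set by that cluster's spread, so tighter, denser regions get finer resolution; all sizes and helper names are assumptions.

```python
import numpy as np

def density_aware_quantize(x, n_clusters=16, bits=8, iters=10):
    """Toy residual quantization: code each vector against its nearest centroid,
    with a per-cluster step derived from the cluster's residual spread."""
    rng = np.random.default_rng(0)
    centroids = x[rng.choice(len(x), n_clusters, replace=False)].copy()
    for _ in range(iters):  # plain k-means to find local regions
        assign = np.argmin(((x[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            members = x[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    resid = x - centroids[assign]
    qmax = 2 ** (bits - 1) - 1
    # Tight (dense) clusters have small residual spread, hence a finer step.
    step = np.array([
        max(np.abs(resid[assign == c]).max(initial=1e-8) / qmax, 1e-8)
        for c in range(n_clusters)
    ])
    codes = np.round(resid / step[assign, None]).astype(np.int8)
    return codes, assign, centroids, step

rng = np.random.default_rng(1)
vecs = rng.standard_normal((512, 32)).astype(np.float32)
codes, assign, centroids, step = density_aware_quantize(vecs)
recon = centroids[assign] + codes * step[assign, None]
print("mean abs reconstruction error:", float(np.abs(vecs - recon).mean()))
```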
Master's Thesis | Target: IEEE FCCM Reconfigurable Computing Challenge (RCC 2026)
Developing FPGA-based acceleration of the HNSW algorithm, targeting sub-microsecond query latency. Custom datapath optimizations for billion-scale deployments.
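For reference, a plain-Python sketch of the single-layer HNSW best-first search loop that such a datapath accelerates; the graph here is just a toy k-NN neighbor list, and `ef` is the beam width.

```python
import heapq
import numpy as np

def greedy_layer_search(query, vectors, neighbors, entry, ef=16):
    """Best-first search over one HNSW layer (the kernel an accelerator targets)."""
    dist = lambda i: float(np.linalg.norm(vectors[i] - query))
    visited = {entry}
    candidates = [(dist(entry), entry)]   # min-heap of nodes to expand
    best = [(-candidates[0][0], entry)]   # max-heap (negated) of current results
    while candidates:
        d, node = heapq.heappop(candidates)
        if d > -best[0][0] and len(best) >= ef:
            break                         # nothing closer remains to expand
        for nb in neighbors.get(node, []):
            if nb in visited:
                continue
            visited.add(nb)
            d_nb = dist(nb)
            if len(best) < ef or d_nb < -best[0][0]:
                heapq.heappush(candidates, (d_nb, nb))
                heapq.heappush(best, (-d_nb, nb))
                if len(best) > ef:
                    heapq.heappop(best)
    return sorted((-d, i) for d, i in best)

# Toy graph: random vectors with a k-NN neighbor list per node.
rng = np.random.default_rng(0)
pts = rng.standard_normal((200, 16)).astype(np.float32)
knn = {i: list(np.argsort(((pts - pts[i]) ** 2).sum(1))[1:9]) for i in range(len(pts))}
print(greedy_layer_search(pts[0] + 0.01, pts, knn, entry=5, ef=8)[:3])
```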
Target: NeurIPS 2026
Automated neural architecture discovery with hardware constraints. Leveraging GNNs for architecture encoding and Bayesian optimization.
Target: International Conference on Supercomputing (ICS) 2026
A system enabling adaptive Graph RAG through graph-aware query profiling that prunes 98% of the configuration space, dynamic selection among hybrid retrieval modes, and multi-source autoscaling across CPU and GPU resources.
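A minimal sketch of what profiler-driven mode selection could look like; the modes, regexes, and decision rule below are illustrative stand-ins, not the system's actual logic.

```python
import re

def profile_query(query: str) -> dict:
    """Cheap lexical profile used to prune the retrieval configuration space up front."""
    body = query[:1].lower() + query[1:]                       # ignore sentence-initial caps
    entities = set(re.findall(r"\b[A-Z][A-Za-z]+\b", body))    # crude named-entity proxy
    multi_hop = bool(re.search(r"\b(relate|connect|between|path|cause)\w*",
                               query, re.IGNORECASE))
    return {"n_entities": len(entities), "multi_hop": multi_hop}

def select_mode(profile: dict) -> str:
    """Toy decision rule standing in for the profiler-driven mode selector."""
    if profile["multi_hop"] and profile["n_entities"] >= 2:
        return "graph_traversal"   # relational question spanning several entities
    if profile["n_entities"] >= 1:
        return "hybrid"            # entity lookup plus dense retrieval
    return "vector_only"           # open-ended semantic question

for q in ["How does Tempe relate to Phoenix economically?",
          "Summarize recent work on vector quantization.",
          "Who advises the NEXUS project at ASU?"]:
    print(q, "->", select_mode(profile_query(q)))
```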
Mentoring the next generation while advancing industry applications
MyEdMaster Inc.
Aug 2024 – May 2025
Capstone Project
Fine-tuned transformer models using spaCy and PyTorch on scholastic corpora, improving classification accuracy by 20% and achieving an F1-score of 0.91. Deployed ML models to production, serving 10,000+ active students.
School of Computing & AI, ASU
Jan 2025 – May 2025
Supported 80+ undergraduate students in Embedded C and hardware systems projects, achieving a 25% improvement in project outcomes. Delivered focused lectures on IoT systems and Edge AI integration.
School of Mathematical & Statistical Sciences, ASU
Aug 2023 – July 2025
Guided students through Calculus and advanced Statistics coursework. Assessed assignments using structured rubrics, providing constructive feedback to foster academic growth.
From academic research to production deployments
Leading development of a production RAG system integrating LLMs with HNSW-optimized vector search. Achieved 30% lower latency and 40% memory savings across 10M+ embeddings.
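A minimal sketch of the indexing and query path, assuming an hnswlib-style index (the deployed stack may differ); sizes here are scaled down for illustration, whereas the production corpus holds 10M+ embeddings.

```python
import hnswlib
import numpy as np

dim, n = 384, 10_000  # illustrative sizes only

# Build an HNSW index over embedding vectors (cosine similarity).
index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=n, ef_construction=200, M=16)

embeddings = np.random.rand(n, dim).astype(np.float32)
index.add_items(embeddings, np.arange(n))

index.set_ef(64)  # query-time accuracy/latency knob

# Retrieve the nearest chunks for a query embedding, then hand them to the LLM.
query = np.random.rand(1, dim).astype(np.float32)
labels, distances = index.knn_query(query, k=5)
print(labels[0], distances[0])
```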
Developed advanced ML models for financial time series prediction, exploring LSTM networks, ensemble methods, and feature engineering techniques for market analysis.
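A compact PyTorch example of the sequence-to-one LSTM setup such exploration typically starts from, trained here on a synthetic sine series rather than market data; the model and variable names are illustrative.

```python
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    """Minimal sequence-to-one forecaster: predict the next value from a window."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # regress from the last hidden state

# Toy data: sliding windows over a synthetic sine series (stand-in for prices).
torch.manual_seed(0)
series = torch.sin(torch.linspace(0, 30, 500))
window = 20
X = torch.stack([series[i:i + window]
                 for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = PriceLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(5):                      # a handful of steps, purely illustrative
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("training loss:", float(loss))
```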
Fine-tuned transformer models on scholastic corpora, improving classification accuracy by 20% with an F1-score of 0.91. Deployed to production serving 10,000+ students.
Comprehensive skillset spanning ML, systems, and hardware
Open to collaborations and opportunities in ML systems research
Email: ganapat0706@gmail.com
Location: Tempe, Arizona, USA