A creative, sharp-thinking problem-solver with broad system vision, strong teamwork, and reliable on-time execution.
A unified evaluation platform for ML models across multiple benchmarks

Mentored by: Applied Materials
Data Science Bootcamp 2025 (Data)
Responsibilities:
Architected the overall evaluation flow, separating runners, dataset adapters, metric computation, and persistence into clear layers to keep the system scalable and easy to extend with new models and benchmarks (see the interface sketch after this list).
Set up a Docker-based environment that packages backend services, the database, and supporting tools into a reproducible, one-command setup for local development and demos.
Implemented the runner layer that orchestrates the full evaluation flow: loading models, running inference, aggregating results, and computing detection/classification/segmentation metrics across benchmarks (see the orchestration sketch after this list).
Designed the PostgreSQL schema, built the data-ingestion flows, and authored Alembic migrations, using MinIO for persistent object storage (see the migration sketch after this list).
Implemented a robust model-to-benchmark class-ID mapping layer to align dataset labels with canonical classes and ensure consistent evaluation across datasets and experiments (see the mapping sketch after this list).
Fine-tuned and compared YOLO-based detection models on project datasets using transfer learning (freezing the backbone, training the detection heads, then gradually unfreezing layers), and evaluated their performance and trade-offs with PyQt-based experiment visualizations (see the training sketch after this list).
Research: Investigated large-language-model evaluation metrics and delivered a presentation comparing different approaches and their implications for real-world use.
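
To illustrate the layered split described above, here is a minimal Python sketch of how the layers could be expressed as interfaces; all class and method names (DatasetAdapter, ModelRunner, MetricComputer, ResultStore) are illustrative assumptions, not the project's actual API.

```python
# Illustrative interfaces for the layered design; names are hypothetical.
from abc import ABC, abstractmethod
from typing import Iterable


class DatasetAdapter(ABC):
    """Normalizes one benchmark's samples and labels into a common format."""

    @abstractmethod
    def samples(self) -> Iterable[dict]: ...


class ModelRunner(ABC):
    """Loads a model and produces predictions for normalized samples."""

    @abstractmethod
    def predict(self, sample: dict) -> dict: ...


class MetricComputer(ABC):
    """Aggregates (prediction, ground-truth) pairs into benchmark metrics."""

    @abstractmethod
    def update(self, prediction: dict, target: dict) -> None: ...

    @abstractmethod
    def compute(self) -> dict[str, float]: ...


class ResultStore(ABC):
    """Persists computed metrics (e.g. rows in PostgreSQL, artifacts in MinIO)."""

    @abstractmethod
    def save(self, run_id: str, metrics: dict[str, float]) -> None: ...
```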
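Building on those hypothetical interfaces, the next sketch shows the kind of orchestration loop the runner layer could implement: load data, run inference, aggregate metrics, persist results. The function name and the sample/target layout are assumptions, not the project's real runner.

```python
# Hypothetical orchestration loop over the interfaces sketched above.
def run_evaluation(runner, adapter, metric, store, run_id):
    """Load data, run inference, aggregate results, persist metrics."""
    for sample in adapter.samples():
        prediction = runner.predict(sample)          # inference
        metric.update(prediction, sample["target"])  # per-sample aggregation
    results = metric.compute()                       # e.g. mAP, accuracy, mIoU
    store.save(run_id, results)                      # persistence layer
    return results
```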
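A sketch of what an Alembic migration for a results table might look like under the described PostgreSQL + MinIO setup; the table layout, column names, and revision identifiers are assumptions, not the project's actual schema.

```python
"""Illustrative Alembic migration for an evaluation-results table."""
from alembic import op
import sqlalchemy as sa

# Revision identifiers are placeholders.
revision = "0001_evaluation_results"
down_revision = None
branch_labels = None
depends_on = None


def upgrade() -> None:
    op.create_table(
        "evaluation_results",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("run_id", sa.String(64), nullable=False),
        sa.Column("benchmark", sa.String(128), nullable=False),
        sa.Column("metric_name", sa.String(128), nullable=False),
        sa.Column("metric_value", sa.Float, nullable=False),
        # Large artifacts (raw predictions, plots) go to MinIO; only the
        # object key is stored in PostgreSQL.
        sa.Column("artifact_key", sa.String(512), nullable=True),
        sa.Column("created_at", sa.DateTime, server_default=sa.func.now()),
    )
    op.create_index("ix_evaluation_results_run_id", "evaluation_results", ["run_id"])


def downgrade() -> None:
    op.drop_index("ix_evaluation_results_run_id", table_name="evaluation_results")
    op.drop_table("evaluation_results")
```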
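The class-ID mapping layer could be as simple as a per-dataset lookup onto a canonical class list, as in the sketch below; the class names and IDs shown are invented for illustration.

```python
# Hypothetical class-ID mapping: remaps each dataset's local label IDs onto a
# shared canonical class list so metrics compare like with like.
from typing import Optional

CANONICAL_CLASSES = {"person": 0, "car": 1, "bicycle": 2}

# Per-dataset map from local label ID to canonical class name.
DATASET_LABEL_MAPS = {
    "benchmark_a": {0: "person", 1: "bicycle", 2: "car"},
    "benchmark_b": {1: "car", 3: "person"},  # IDs differ per dataset
}


def to_canonical(dataset: str, local_id: int) -> Optional[int]:
    """Return the canonical class ID, or None if the label is not evaluated."""
    name = DATASET_LABEL_MAPS.get(dataset, {}).get(local_id)
    return CANONICAL_CLASSES.get(name) if name is not None else None
```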
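Finally, a sketch of the two-stage transfer-learning recipe described above, assuming the Ultralytics YOLO API; the dataset path, frozen-layer count, epochs, and learning rates are illustrative assumptions rather than the project's actual settings.

```python
# Two-stage transfer-learning sketch using the Ultralytics YOLO API.
from ultralytics import YOLO

# Stage 1: freeze the backbone (roughly the first 10 layers in YOLOv8) and
# train only the detection head on the project dataset.
model = YOLO("yolov8n.pt")
model.train(data="project_dataset.yaml", epochs=20, freeze=10, lr0=1e-3)

# Stage 2: unfreeze everything and fine-tune the whole network at a lower
# learning rate, starting from the stage-1 weights.
model = YOLO("runs/detect/train/weights/best.pt")  # default output path, may differ
model.train(data="project_dataset.yaml", epochs=30, freeze=0, lr0=1e-4)

# Evaluate on the validation split (mAP50-95, precision, recall).
metrics = model.val(data="project_dataset.yaml")
print(metrics.box.map)  # mean AP averaged over IoU thresholds 0.5-0.95
```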
Client Project for ASTBooks: Development of an Advanced Learning Interface | React, Node.js
Advanced E-Commerce Website | ASP.NET MVC, Web API, C#, ADO.NET, SQL
Spring MVC–Based Management System with SQL Server Integration
Fluent