Hrishikesh Pawar
Email | GitHub | LinkedIn | Resume
I am a master's student in the Robotics Engineering Program at Worcester Polytechnic Institute (WPI). My interests lie at the intersection of Computer Vision and Machine Learning, especially in the context of autonomous applications.
I currently serve as a Graduate Student Researcher in the Perception and Autonomous Robotics (PeAR) Group, where my work centers on compute-efficient autonomous quadrotor navigation using neuromorphic vision sensors (event cameras). Recently, I developed a Depth-from-Defocus framework that extracts sharpness-based depth cues from event data for obstacle segmentation, enabling low-power, compute-efficient navigation without relying on traditional depth estimation techniques. I am now extending this work toward a learning-based navigation pipeline for low-light conditions that combines structured lighting with event cameras.
Prior to grad school, I spent three years at Adagrad AI, a dynamic startup delivering production-grade Computer Vision solutions to a global clientele. There, I tackled real-time video analytics challenges involving OCR, license plate recognition, pose estimation, person fall detection, and semantic segmentation.
I am currently looking for full-time opportunities starting in May 2025.
Worcester Polytechnic Institute
Master's in Robotics Engineering
Relevant Coursework: CS-RBE549: Computer Vision, CS541: Deep Learning, RBE595: Vision-Based Robotic Manipulation, RBE550: Motion Planning
Affiliations: PeAR Lab, MER Lab
Smt. Kashibai Navale College of Engineering
Bachelor's in Mechanical Engineering
I spent my undergraduate days working on the Combustion and Driverless Formula Student team, designing, building, and racing Formula-3 prototype racecars at various national and international competitions. My four-year stint on the team allowed me to apply several concepts from my coursework, building a solid foundation in Mechanical Engineering fundamentals.
Nobi Smart Lights
AI Software Intern | May 2024 - August 2024
As an AI Software Intern at Nobi, I led the development of smart ceiling lamps focused on real-time fall detection and emergency response for elderly care. I developed a rotation-aware detection model using Swin Transformer backbones, enhancing the model's performance in real-world scenarios. I also worked with vision-language models like LLaVA and CLIP, using LoRA for fine-tuning to improve task generalization. Additionally, I automated the entire deployment pipeline using Jenkins, Kubernetes, and Docker, streamlining the process from model training to real-world application.
Adagrad AI
Computer Vision Engineer | November 2020 - July 2023
Developed hardware-accelerated Computer Vision products addressing crucial real-world problems.
Gate-Guard: The focus was on creating an edge-based boom-barrier system leveraging Automatic License Plate Recognition (ALPR) for vehicle access control. I was involved in developing data collection pipelines, model training, and deployment tailored to lightweight object detection models such as YOLOX and YOLOv5, optimized for the NVIDIA Jetson TX2. Beyond model implementation, I developed interactive analytics and monitoring services using Django, Azure, WebSockets, Kafka, Celery, and Redis to ensure real-time data processing and system scalability.
PeAR Lab
Graduate Student Researcher
As part of my ongoing research at the PeAR Lab, WPI, I developed a Depth-from-Defocus approach using event cameras that extracts sharpness-based depth cues for efficient obstacle segmentation, enabling lightweight navigation in resource-constrained settings. This work was published at the EVEGEN Workshop at WACV 2025, where we demonstrated significant improvements in both obstacle segmentation and efficiency over state-of-the-art methods (Depth-Pro, MiDaS, RAFT).
In parallel, I've built a custom 3D Gaussian Splats simulator to generate high frame-rate, photorealistic frames and events, supporting Hardware-In-The-Loop (HITL) testing and more realistic evaluation of vision pipelines. I'm currently extending this work towards learning-based navigation in low-light environments, leveraging structured lighting with event cameras, with a strong focus on sim-to-real transfer for real-world deployment.
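As a rough illustration of the event-generation side of such a simulator (not the lab's actual code), the sketch below converts a sequence of rendered grayscale frames into events using the standard log-intensity contrast-threshold model; the function name and the threshold value are my own assumptions.

```python
import numpy as np

def frames_to_events(frames, timestamps, contrast=0.2, eps=1e-3):
    """Convert rendered grayscale frames (float32 in [0, 1]) into events using the
    standard contrast-threshold model: an event fires at a pixel when its log
    intensity changes by more than `contrast` since the last event at that pixel."""
    events = []                                    # list of (t, x, y, polarity)
    log_ref = np.log(frames[0] + eps)              # per-pixel reference log intensity
    for img, t in zip(frames[1:], timestamps[1:]):
        log_img = np.log(img + eps)
        diff = log_img - log_ref
        for polarity, fired in ((+1, diff >= contrast), (-1, diff <= -contrast)):
            ys, xs = np.nonzero(fired)
            events.extend((t, x, y, polarity) for x, y in zip(xs, ys))
        changed = np.abs(diff) >= contrast
        log_ref = np.where(changed, log_img, log_ref)   # reset reference only where events fired
    return events
```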
MER Lab
Graduate Student Researcher
Using depth images and point clouds to manipulate the waste stream with a robotic arm and uncover occluded and covered objects.
Project Link
Relevant Projects and Publications
Blurring For Clarity: Passive Computation for Defocus-Driven Parsimonious Navigation using a Monocular Event Camera
Hrishikesh Pawar, Deepak Singh, Nitin J. Sanket
WACV 2025 Workshops, pp. 912–916
Publication / Project Page
Passive computation enables efficient aerial robot navigation in cluttered environments by leveraging wave physics to extract depth cues from defocus, eliminating costly explicit depth computation. Using a large-aperture lens on a monocular event camera, our approach optically blurs out irrelevant regions, reducing computational demands and achieving 62x savings over state-of-the-art methods, with promising real-world performance.
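The heavy lifting here is done by the optics, not by software, but the intuition can be sketched in a few lines: objects near the focal plane stay sharp and produce high local sharpness energy, while the defocused background is optically suppressed. The snippet below is a generic, hypothetical illustration of thresholding such a focus measure on an accumulated event frame; it is not the published pipeline, and the function name and threshold rule are my own.

```python
import cv2
import numpy as np

def sharpness_obstacle_mask(event_frame, win=15, thresh=None):
    """Generic focus-measure sketch: compute local Laplacian energy of an
    accumulated event image (float32, HxW) and threshold it to keep only the
    sharp, in-focus regions (candidate obstacles)."""
    lap = cv2.Laplacian(event_frame, cv2.CV_32F, ksize=3)
    energy = cv2.boxFilter(lap * lap, -1, (win, win))       # local sharpness energy
    if thresh is None:
        thresh = energy.mean() + energy.std()                # crude adaptive threshold (assumption)
    mask = (energy > thresh).astype(np.uint8)
    # light morphological clean-up of the obstacle mask
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```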
Neural Radiance Field (NeRF)
GitHub
Implemented the Neural Radiance Fields (NeRF) technique for synthesizing novel views of scenes. The project uses a deep neural network to model the volumetric scene function, encoding density as a function of 3D location and color as a function of location and viewing direction.
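For reference, the volume-rendering step NeRF uses to turn per-sample densities and colors into a pixel color can be written compactly; this is a minimal NumPy sketch, and the function name and depth approximation are mine.

```python
import numpy as np

def composite_along_ray(sigmas, rgbs, deltas):
    """Alpha-composite samples along a ray: `sigmas` (N,) are densities, `rgbs`
    (N, 3) are per-sample colors, `deltas` (N,) are distances between samples."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                     # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # transmittance T_i
    weights = trans * alphas                                    # contribution of each sample
    color = (weights[:, None] * rgbs).sum(axis=0)               # expected ray color
    depth = (weights * np.cumsum(deltas)).sum()                 # approximate expected depth
    return color, depth, weights
```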
Classical Structure from Motion
GitHub
Developed a Classical Structure from Motion (SfM) pipeline to reconstruct 3D structures from sequences of 2D images. This project integrates key techniques such as feature detection, matching, motion recovery, and 3D reconstruction.
Utilized essential matrix computation, bundle adjustment, and triangulation methods to accurately estimate 3D points and camera positions, demonstrating the core principles of SfM in computer vision.
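A minimal two-view version of such a pipeline can be sketched with OpenCV, assuming calibrated images with a known intrinsic matrix K (function name is mine; the full project adds multi-view registration and bundle adjustment on top).

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    """Match SIFT features, estimate the essential matrix with RANSAC, recover the
    relative pose, and triangulate 3D points. K is the 3x3 intrinsic matrix."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    E, inliers = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])       # first camera at the origin
    P2 = K @ np.hstack([R, t])                               # second camera from recovered pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)    # homogeneous 4xN points
    pts3d = (pts4d[:3] / pts4d[3]).T                         # Euclidean Nx3 points
    return R, t, pts3d
```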
Camera Calibration
GitHub
Implemented Zhengyou Zhang's seminal camera calibration method from scratch to estimate the camera intrinsics and distortion parameters.
Used SVD for the closed-form initialization and maximum-likelihood estimation (non-linear refinement of the reprojection error) to recover the calibration parameters.
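The heart of the closed-form step is stacking two constraints per view from the checkerboard homographies and solving the resulting system with SVD; a minimal sketch (variable and function names are mine):

```python
import numpy as np

def vij(H, i, j):
    """Constraint vector v_ij from Zhang's method; i, j are 0-based column indices of H."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def estimate_b(homographies):
    """Stack h1^T B h2 = 0 and h1^T B h1 = h2^T B h2 for every view, then take the
    right singular vector with the smallest singular value as b, the vectorized
    image of the absolute conic B = K^{-T} K^{-1}."""
    V = []
    for H in homographies:
        V.append(vij(H, 0, 1))
        V.append(vij(H, 0, 0) - vij(H, 1, 1))
    _, _, Vt = np.linalg.svd(np.asarray(V))
    return Vt[-1]   # intrinsics K follow from b via Zhang's closed-form formulas,
                    # then all parameters are refined by minimizing reprojection error (MLE)
```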
Panorama Stitching
GitHub
Implemented feature detection followed by Adaptive Non-Maximal Suppression (ANMS) to ensure an even distribution of corners across images, improving panorama stitching accuracy.
Matched features and estimated a robust homography with RANSAC to reject outlier correspondences, then employed image blending strategies such as alpha blending and Poisson blending.
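A minimal sketch of the ANMS step (a brute-force O(N²) version; the function name and default parameters are mine):

```python
import numpy as np

def adaptive_nms(corners, scores, n_best=500, c_robust=0.9):
    """For each corner, compute the distance to the nearest corner with a clearly
    stronger response; keeping the corners with the largest such radii spreads the
    selected features evenly over the image.
    corners: (N, 2) float array of pixel coordinates, scores: (N,) corner strengths."""
    n = len(corners)
    radii = np.full(n, np.inf)
    for i in range(n):
        stronger = scores > scores[i] / c_robust     # neighbors that dominate corner i
        if np.any(stronger):
            d2 = np.sum((corners[stronger] - corners[i]) ** 2, axis=1)
            radii[i] = d2.min()
    keep = np.argsort(-radii)[:n_best]               # largest suppression radii first
    return corners[keep]
```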
Probability-based Edge Detection
GitHub
Implemented an edge detector that searches for texture, color, and intensity discontinuities across multiple scales.
Essentially, it is a simplified implementation of Pablo Arbeláez's probability-of-boundary approach.
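The boundary cue can be sketched as half-disc histogram comparisons: for each orientation, the label histograms of the two halves of a disc around every pixel are compared with the chi-square distance, computed efficiently by convolving per-bin indicator images with half-disc masks. This is a simplified, hypothetical version, not the exact project code.

```python
import cv2
import numpy as np

def halfdisc_gradient(label_map, n_bins, radius=8, n_orient=8):
    """Chi-square distance between half-disc label histograms at every pixel; large
    values indicate texture/color/intensity boundaries. `label_map` holds integer
    bin labels (e.g. texton or color-cluster ids)."""
    h, w = label_map.shape
    grad = np.zeros((h, w), np.float32)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = (xx ** 2 + yy ** 2) <= radius ** 2
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        left = disc & (xx * np.cos(theta) + yy * np.sin(theta) >= 0)
        right = disc & ~left
        chi_sq = np.zeros((h, w), np.float32)
        for b in range(n_bins):
            indicator = (label_map == b).astype(np.float32)
            # convolving the per-bin indicator with a half-disc mask gives the
            # histogram count of that bin inside each half-disc, at every pixel
            g_left = cv2.filter2D(indicator, -1, left.astype(np.float32))
            g_right = cv2.filter2D(indicator, -1, right.astype(np.float32))
            chi_sq += 0.5 * (g_left - g_right) ** 2 / (g_left + g_right + 1e-6)
        grad = np.maximum(grad, chi_sq)       # keep the strongest orientation response
    return grad
```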
Zero-Shot Semantic Style Transfer
GitHub
Implemented AdaAttN for diverse style application on images, combined with text-prompted image segmentation using CLIPSeg to select the regions to stylize.
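The text-prompted segmentation half can be sketched with the public Hugging Face CLIPSeg checkpoint; this is a minimal sketch following the model card, with placeholder file names and prompts, and with the AdaAttN stylization step omitted.

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# Text-prompted segmentation with CLIPSeg: the resulting masks select which image
# regions a style is applied to (the stylization itself is not shown here).
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("content.jpg")          # placeholder input image
prompts = ["the sky", "a person"]          # regions to stylize, described in text

inputs = processor(text=prompts, images=[image] * len(prompts),
                   padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits         # (num_prompts, 352, 352) low-resolution maps
masks = torch.sigmoid(logits) > 0.5         # boolean masks; upsample to image size before use
```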