Chaitanya Chawla
Email: cchawla [at] cs [dot] cmu [dot] edu
Hi! I'm a Graduate Student Researcher at the Robotics Institute at Carnegie Mellon University, where I am fortunate to be advised by Prof. Guanya Shi.
I have also worked with Prof. Jean Oh as a visiting researcher. Previously, I graduated from the Technical University of Munich as a four-time recipient of the German National Scholarship.
My research focuses on learning-based robotic manipulation. In particular, I am interested in incorporating human priors by learning skill representations shared across humans and robots.
When I'm not in the lab, I am either playing badminton 🏸 or travelling.
CV  / 
Google Scholar  / 
GitHub  / 
LinkedIn
Humanoid Policy ~ Human Policy
Ri-Zhao Qiu, Shiqi Yang, Xuxin Cheng, Chaitanya Chawla,
Jialong Li, Tairan He, Ge Yan, David J. Yoon, Ryan Hoque, Jian Zhang, Sha Yi, Guanya Shi, Xiaolong Wang
arXiv 2025
Paper /
Website /
Code /
Dataset
Egocentric human demonstrations as a data source for humanoid manipulation.
Translating Agent-Environment Interactions across Humans and Robots
Tanmay Shankar, Chaitanya Chawla, Almut Wakel, Jean Oh
International Conference on Intelligent Robots and Systems, IROS 2024
Paper /
Website /
Code /
Video
We developed an unsupervised approach to learning temporal abstractions of skills that incorporate agent-environment interactions. The goal is to learn representations of patterns of object motion in the environment, or patterns of state change. Despite being completely unsupervised, our approach learns semantically meaningful skill segments across both robot and human demonstrations.
Robot-Agnostic Framework for One-Shot Intrinsic Feature Extraction
Chaitanya Chawla,
Andrei Costinescu,
Darius Burschka
In Preparation for IEEE Transactions on Knowledge and Data Engineering
Report /
Presentation /
Code
We developed an algorithmic framework to extract different intrinsic features from human demonstrations. We are studying various features, including interactions with objects along the trajectory, analyzing the environment for interactions with the background (e.g., wiping or writing), and classifying the type of motion within a trajectory segment (e.g., shaking, rotating, or transporting).
Visual Teleoperation using Dynamic Movement Primitives
Chaitanya Chawla,
Dongheui Lee
Independent Research Project
Report /
Presentation /
Code
We presented a method to learn human motions using a learning-from-demonstration approach.
Using Dynamic Movement Primitives, we teleoperated a Franka Panda arm along the learned trajectories.
Projects
Self-supervised fine-tuning Pre-Grasps through 3D Object Generation
Chaitanya Chawla,
Almut Wakel, Eyob Dagnachew
10-623: Carnegie Mellon University
Report /
Code /
Presentation
Learning Dexterous Manipulation from Human Video Pretraining using 3D Point Tracks
Chaitanya Chawla,
Sungjae Park, Lucas Wu, Junkai Huang, Yanbo Xu
16-831: Carnegie Mellon University
Report /
Presentation
We proposed a pipeline to benchmark pre-training methods using different state representations.
Our method extracts sensorimotor information from videos by lifting the human hand and the manipulated object into a
shared 3D space in simulation (IsaacGym), i.e., either 3D point tracks or 3D meshes.
We then retarget the hand trajectories to a Franka arm with a Shadow Hand,
and finally fine-tune on various tasks.
Face Recognition using Autoencoders and PCA
Chaitanya Chawla,
Katherine Brenner
7-835: Technical University of Munich
Report
Comparing different methods, including autoencoders and PCA, for feature representation in face reconstruction.
Candy Throwing Robot for Halloween 2023!
C. Chawla
A 2-hour project on Halloween Eve, Bot Intelligence Group
Distributing candies during Halloween at the Robotics Institute, Carnegie Mellon University
Work Experience
Roboverse Reply | July 2023 - April 2024
- Developed a perception pipeline for detecting and reporting measurements from analog gauges. Set up data annotation, post-processing, and real-time inference of gauge measurements. Dockerized the application and integrated it with Boston Dynamics' Spot to autonomously collect data in a factory environment.
- Created a WebRTC pipeline using gRPC to stream point cloud data from Spot's LiDAR sensor to an Oculus VR headset, enabling a remote user to observe Spot's immediate surroundings in real time.
- Migrated the company's robotics framework from ROS to ROS 2.
Achievements
German National Scholarship (Deutschlandstipendium), 2021-2024
Heinrich and Lotte Muhlfenzl Scholarship - undergraduate research scholarship, 2023
TUM PROMOS - merit scholarship for research abroad, 2023
Max Weber Program - nominated by the university, 2022
Academic Service
Introduction to Robot Learning (16-831), Carnegie Mellon University, 2025
Robotic Control Laboratory (6-931), Technical University of Munich, 2023
Mathematical Analysis (9-411), Technical University of Munich, 2022