Research
I am broadly interested in deep learning methods applied to robotics. I hope to build intelligent agents that can generalize to a diverse range of real-world tasks.
We developed an unsupervised approach for learning temporal abstractions of skills that incorporates agent-environment interactions. The aim is to learn representations of how objects in the environment move, or how their state changes, over time. Despite being completely unsupervised, the approach learns semantically meaningful skill segments across both robot and human demonstrations.
We developed an algorithmic framework to extract different intrinsic features from human demonstrations. The features under study include interactions with objects along the trajectory, interactions with the background environment (e.g., wiping or writing), and the type of motion within a trajectory segment (e.g., shaking, rotating, or transporting); a rough illustrative sketch of the last of these follows below.
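The framework's actual features and classifiers are not detailed here. Purely as an illustration of segment-level motion classification, the sketch below labels a trajectory segment from simple handcrafted statistics; the labels, thresholds, and function name are hypothetical, not taken from the project.

```python
import numpy as np

def classify_motion(positions, orientations):
    """Toy motion-type classifier for one trajectory segment.

    positions:    (T, 3) end-effector positions in metres
    orientations: (T, 3) end-effector orientation (e.g., Euler angles, radians)
    Thresholds and labels are illustrative placeholders only.
    """
    # Net displacement vs. total path length captures back-and-forth motion.
    disp = np.linalg.norm(positions[-1] - positions[0])
    path = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    # Total accumulated orientation change over the segment.
    rot = np.sum(np.abs(np.diff(orientations, axis=0)))

    # Lots of path but little net displacement -> oscillatory "shaking".
    if path > 3.0 * max(disp, 1e-6) and disp < 0.05:
        return "shaking"
    # Large orientation change with little translation -> "rotating".
    if rot > 1.0 and path < 0.1:
        return "rotating"
    # Otherwise the segment mostly moves the object somewhere -> "transporting".
    return "transporting"
```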
We presented a method for learning human motions through a Learning-from-Demonstration approach.
Using Dynamic Movement Primitives (DMPs), we teleoperated a Franka Panda arm with the learned trajectories; a minimal DMP sketch follows below.
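The project's own implementation is not reproduced here. As an illustration of the underlying technique, below is a minimal single-degree-of-freedom discrete DMP sketch (the class name, gains, and toy demonstration are my own choices): it learns a forcing term from one demonstrated trajectory and integrates the system to replay, or re-target, the motion.

```python
import numpy as np

class DMP1D:
    """Minimal discrete Dynamic Movement Primitive for one degree of freedom."""

    def __init__(self, n_basis=30, alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
        self.n_basis = n_basis
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
        # Basis-function centres spaced evenly in canonical (phase) space.
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
        self.h = 1.0 / np.gradient(self.c) ** 2
        self.w = np.zeros(n_basis)

    def _psi(self, x):
        return np.exp(-self.h * (x - self.c) ** 2)

    def fit(self, y_demo, dt):
        """Learn forcing-term weights from one demonstrated trajectory."""
        T = len(y_demo)
        self.tau = (T - 1) * dt
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(T) * dt / self.tau)  # phase
        # Invert the transformation system to get the target forcing term.
        f_target = (self.tau ** 2 * ydd
                    - self.alpha_z * (self.beta_z * (self.g - y_demo) - self.tau * yd))
        s = x * (self.g - self.y0)
        # Locally weighted regression, one weight per basis function.
        for i in range(self.n_basis):
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = np.sum(s * psi * f_target) / (np.sum(s ** 2 * psi) + 1e-10)

    def rollout(self, dt, g=None):
        """Integrate the DMP to reproduce (or re-target) the learned motion."""
        g = self.g if g is None else g
        y, z, x = self.y0, 0.0, 1.0
        Y = []
        for _ in range(int(self.tau / dt)):
            psi = self._psi(x)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - self.y0)
            zd = (self.alpha_z * (self.beta_z * (g - y) - z) + f) / self.tau
            z += zd * dt
            y += (z / self.tau) * dt
            x += (-self.alpha_x * x / self.tau) * dt
            Y.append(y)
        return np.array(Y)

# Example: learn a toy 1-D reach and replay it toward a new goal.
t = np.linspace(0, 1, 200)
demo = np.sin(np.pi * t / 2)
dmp = DMP1D()
dmp.fit(demo, dt=t[1] - t[0])
replay = dmp.rollout(dt=t[1] - t[0], g=1.5)
```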
Projects
Face Recognition using Autoencoders and PCA
C. Chawla, K. Brenner
Course Project
Paper
Comparing different methods, including autoencoders and PCA, for feature representation in face reconstruction; a minimal PCA sketch follows below.
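As a loose sketch of the PCA ("eigenfaces") half of such a comparison, the snippet below uses scikit-learn with the Olivetti faces dataset as a stand-in; the project's actual dataset, models, and component count are not specified here.

```python
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA

# Stand-in dataset: 400 Olivetti faces, 64x64 grayscale, flattened to vectors.
faces = fetch_olivetti_faces().data            # shape (400, 4096)

# Fit PCA ("eigenfaces") and reconstruct from a reduced number of components.
pca = PCA(n_components=100).fit(faces)
codes = pca.transform(faces)                   # low-dimensional representation
recon = pca.inverse_transform(codes)           # reconstructed faces

# Reconstruction error indicates how much facial detail 100 components retain.
mse = np.mean((faces - recon) ** 2)
print(f"Mean reconstruction MSE with 100 components: {mse:.5f}")
```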
Candy Throwing Robot for Halloween 2023!
C. Chawla
Two-hour project on Halloween Eve, Bot Intelligence Group
Distributing candies during Halloween at the Robotics Institute, Carnegie Mellon University
Teaching Assistantships
At TU Munich: