Joey Hejna

(Donald Joseph Hejna III)

I'm a fourth-year student at UC Berkeley studying electrical engineering and computer science. Currently, I work with Professors Lerrel Pinto and Pieter Abbeel on research related to reinforcement learning and robotics. My legacy website can be found here.

jhejna @ berkeley.edu  /  Resume  /  Github  /  LinkedIn

News
  • Our paper on unsupervised morphology optimization was accepted at ICLR 2021.
  • I received an honorable mention for the CRA Outstanding Undergraduate Researcher Award.
Research

I'm broadly interested in learning for intelligent systems.

Hierarchically Decoupled Imitation for Morphological Transfer
Donald J. Hejna III, Pieter Abbeel, Lerrel Pinto
Accepted to ICML 2020
paper / website / code / talk

We propose transferring RL policies across agents using a hierarchical framework. Then, to remedy poor zero-shot transfer performance, we introduce two additional imitation objectives.

Task-Agnostic Morphology Evolution
Donald J. Hejna III, Pieter Abbeel, Lerrel Pinto
Accepted to ICLR 2021
paper / website / code

Better robot structures hold the promise of better performance. We propose a new algorithm, TAME, that evolves morphologies without any task specification. This is accomplished using an information-theoretic objective that efficiently ranks morphologies based on their ability to explore and control their environment.

Improving Latent Representations via Explicit Disentanglement
Donald J. Hejna III*, Ashwin Vangipuram*, Kara Liu*
Course Project, CS 294-158 Unsupervised Learning, Spring 2020
paper

We examine and compare three methods for explicitly disentangling learned latent representations in VAE models.

Industry
Intern, Citadel Global Quantitative Strategies
Summer 2019

Developed C++ trading APIs and monitoring systems. Worked on optimizing memory usage for large-scale model training.

Intern, Intel Artificial Intelligence Group
Summer 2018
blog post

Worked on demo systems for Intel's OpenVINO model-optimization toolkit on the AWS DeepLens. Explored gradient-based explanations of deep networks.

Teaching
UC Berkeley EECS Department

Teaching Assistant, EECS 127: Optimization Models, Fall 2020

Teaching Assistant, EECS 189: Machine Learning, Spring 2020

Teaching Assistant, CS 70: Discrete Math and Probability Theory, Fall 2019
Public Resources

Introductory ML Notes

Deep Learning Workshop

Reinforcement Learning Workshop

Website source taken from here.