
Rohin is a sixth-year PhD student in computer science at the Center for Human-Compatible AI (CHAI) at UC Berkeley. He began his PhD working on program synthesis, but became convinced of the importance of building safe, aligned AI and moved to CHAI at the start of his fourth year. He now works on specifying good behavior for AI systems in ways other than reward functions, especially ones that require little human effort.

He is best known for the Alignment Newsletter, a weekly publication covering recent content relevant to AI alignment, which has over 2,100 subscribers.