Hao Liu


I'm a research scientist at Google DeepMind.

I'm an incoming Assistant Professor at Carnegie Mellon University.

Previously, I was a PhD student in EECS at Berkeley, advised by Pieter Abbeel, and spent two years part-time at Google Brain.

I'm interested in building general superintelligence. Toward this goal, I research scalable training objectives and architectures, with work spanning world models, reasoning, large-scale language models, and reinforcement learning.




News:

  • ElasticTok: adaptively allocates computation to represent images and videos based on their information content (paper'24).

  • Large World Models enable modeling of text and video sequences millions of tokens long (code).

  • Agentic Transformer for learning decision-making from trial-and-error sequences at scale (paper'23).

  • Blockwise Transformer and RingAttention (paper'24, paper'23, code).

  • Open language models Koala and OpenLLaMA (blog'23, models).

  • Reinforcement learning from large-scale unsupervised exploration experience (paper'22, paper'21, paper'21).



  • Publications
    Teaching
    email: haoxyliu@gmail.com