Hao Liu
haoxyliu@gmail.com
Github | GScholar | Twitter

I'm a research scientist at Google DeepMind. Previously, I completed my Ph.D. in computer science at UC Berkeley (BAIR), advised by Professor Pieter Abbeel. I will be joining the Machine Learning Department at Carnegie Mellon University as an Assistant Professor.

Research

I'm interested in machine learning and neural networks, including general reasoning, world models, large-scale language models, and reinforcement learning.

Publications
See Google Scholar page for a complete list.
  • World Model on Million-Length Video And Language With Blockwise RingAttention
    Hao Liu*, Wilson Yan*, Matei Zaharia, Pieter Abbeel
arXiv, 2024
    bib | paper | code | project | tl;dr
  • Ring Attention with Blockwise Transformers for Near-Infinite Context
    Hao Liu, Matei Zaharia, Pieter Abbeel
International Conference on Learning Representations (ICLR), 2024
    bib | paper | code | media | tl;dr
  • Blockwise Parallel Transformer for Large Context Models
    Hao Liu, Pieter Abbeel
Advances in Neural Information Processing Systems (NeurIPS) (Spotlight Presentation), 2023
    bib | paper | code | tl;dr
  • Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment
    Hao Liu, Wilson Yan, Pieter Abbeel
Advances in Neural Information Processing Systems (NeurIPS), 2023
    bib | paper | code | tl;dr
  • Chain of Hindsight Aligns Language Models with Feedback
    Hao Liu, Carmelo Sferrazza, Pieter Abbeel
International Conference on Learning Representations (ICLR), 2024
    bib | paper | code | tl;dr
  • Emergent Agentic Transformer from Chain of Hindsight Experience
    Hao Liu, Pieter Abbeel
International Conference on Machine Learning (ICML), 2023
    bib | paper | tl;dr
  • Masked Autoencoding for Scalable and Generalizable Decision Making
    Fangchen Liu*, Hao Liu*, Aditya Grover, Pieter Abbeel
Advances in Neural Information Processing Systems (NeurIPS), 2022
bib | paper | code | tl;dr
  • Palm up: Playing in the Latent Manifold for Unsupervised Pretraining
    Hao Liu, Tom Zahavy, Volodymyr Mnih, Satinder Singh
Advances in Neural Information Processing Systems (NeurIPS), 2022
    bib | paper | tl;dr
  • URLB: Unsupervised Reinforcement Learning Benchmark
    Michael Laskin, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang,
    Lerrel Pinto, Pieter Abbeel
    NeurIPS 2021 Track Datasets and Benchmarks, 2021
    bib | paper | code | tl;dr
  • APS: Active Pre-Training with Successor Features
    Hao Liu, Pieter Abbeel
International Conference on Machine Learning (ICML) (Long Oral Presentation), 2021
    bib | paper | code
  • Behavior From the Void: Unsupervised Active Pre-Training
    Hao Liu, Pieter Abbeel
Advances in Neural Information Processing Systems (NeurIPS) (Spotlight Presentation), 2021
    bib | paper | code | tl;dr
Education / Experience

Teaching and Service