# Hello!
I’m Michael:
- 5th-year Computer Science PhD student advised by Chris Ré.
- Member of HazyResearch, the Stanford AI Lab, and the Stanford Machine Learning Group.
I currently work on deep learning architectures for expressive and efficient long-sequence modeling, and on using these advances to enable learning from new tasks and data types.
I also care about deep learning robustness and personalization.
Before the COVID times, I received my A.B. in Statistics and Computer Science from Harvard in 2020. I'm grateful to have worked with Serena Yeung, Susan Murphy, and Alex D'Amour on computer vision and reinforcement learning in healthcare.
# Research
Effectively Modeling Time Series with Simple Discrete State Spaces
Michael Zhang*, Khaled Saab*, Michael Poli, Tri Dao, Karan Goel, and Christopher Ré
ICLR 2023
Contrastive Adapters for Foundation Model Group Robustness
Michael Zhang and Christopher Ré
NeurIPS 2022
Correct-N-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations
Michael Zhang, Nimit S. Sohoni, Hongyang R. Zhang, Chelsea Finn, and Christopher Ré
ICML 2022 [Long Talk]
Perfectly Balanced: Improving Transfer and Robustness of Supervised Contrastive Learning
Mayee F. Chen*, Daniel Y. Fu*, Avanika Narayan, Michael Zhang, Zhao Song, Kayvon Fatahalian, and Christopher Ré
ICML 2022
Shoring Up the Foundations: Fusing Model Embeddings and Weak Supervision
Mayee F. Chen*, Daniel Y. Fu*, Dyah Adila, Michael Zhang, Frederic Sala, Kayvon Fatahalian, and Christopher Ré
UAI 2022 [Oral]
Personalized Federated Learning with First Order Model Optimization
Michael Zhang, Karan Sapra, Sanja Fidler, Serena Yeung, and José M. Álvarez
ICLR 2021