Tyler Griggs

I am a second-year Ph.D. student in Computer Science in the UC Berkeley Sky Computing Lab, advised by Ion Stoica and Matei Zaharia. Previously, I worked in Network Infrastructure at Google Cloud. Before that, I graduated from Harvard with a BA in Computer Science, where I was advised by James Mickens.

My research interests are in designing and building efficient systems, especially for machine learning workloads. My current research focuses primarily on model post-training, reasoning, and reinforcement learning. Along with several wonderful collaborators, I am a co-lead of the NovaSky team. Our work has been featured in The New York Times, The Wall Street Journal, and The Information.

Projects

LLMs Can Easily Learn to Reason from Demonstrations: Structure, not content, is what matters!
Dacheng Li*, Shiyi Cao*, Tyler Griggs*, Shu Liu*, Xiangxi Mo, Eric Tang, Sumanth Hegde, Kourosh Hakhamaneshi, Shishir G. Patil, Matei Zaharia, Joseph E. Gonzalez, Ion Stoica
arXiv preprint [Paper]
MoE-Lightning: High-Throughput MoE Inference on Memory-constrained GPUs
Shiyi Cao, Shu Liu, Tyler Griggs, Peter Schafhalter, Xiaoxuan Liu, Ying Sheng, Joseph E. Gonzalez, Matei Zaharia, Ion Stoica
ASPLOS 2025 [Paper]
SkyServe: Serving AI Models across Regions and Clouds with Spot Instances
Ziming Mao, Tian Xia, Zhanghao Wu, Wei-Lin Chiang, Tyler Griggs, Romil Bhardwaj, Zongheng Yang, Scott Shenker, Ion Stoica
EuroSys 2025 [Paper] [Code]
Mélange: Cost Efficient Large Language Model Serving by Exploiting GPU Heterogeneity
Tyler Griggs, Xiaoxuan Liu, Jiaxiang Yu, Doyoung Kim, Wei-Lin Chiang, Alvin Cheung, Ion Stoica
arXiv preprint [Paper] [Code]