PhD Student in AI, focusing on Efficient LLM Training and Inference
Published in CVPR 2026
Proposes UniMERNet, a universal network for real-world mathematical expression recognition that achieves high-precision formula parsing across diverse and complex scenarios. As co-first author, I designed the model architecture, ran most of the experiments, and introduced the R-S attention mechanism as a core architectural innovation.
Recommended citation: Zhuangcheng Gu, Guang Liang, Bin Wang, Chao Xu, Bo Zhang, Botian Shi, Conghui He. UniMERNet: A Universal Network for Real-World Mathematical Expression Recognition. CVPR 2026. https://arxiv.org/abs/2404.15254
Published in Technical Report, 2025
A 1.2B-parameter document parsing model that achieves state-of-the-art recognition accuracy on complex formulas, tables, and dense text. Contributed as a co-author within the MinerU Team.
Recommended citation: MinerU Team. MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing. Technical Report, 2025. https://arxiv.org/abs/2509.22186
Published in NeurIPS 2025
Proposes an "activation-first" quantization-aware training (QAT) framework for Vision Transformers that is roughly 100x faster than traditional QAT methods while achieving state-of-the-art generalization performance.
Recommended citation: Guang Liang, Xinyao Liu, Jianxin Wu. GPLQ: A General, Practical, and Lightning QAT Method for Vision Transformers. NeurIPS 2025. https://arxiv.org/abs/2506.11784
Published in CVPR 2026
Systematically addresses the outlier problem in Transformer activations, improving stability and training speed in general-purpose model pretraining while reducing quantization loss. Demonstrates strong cross-domain generalization, with significant throughput and inference gains on both vision and language foundation models. Validated on thousand-GPU clusters and multi-billion-parameter pretraining runs.
Recommended citation: Guang Liang, Jie Shao, Ningyuan Tang, Xinyao Liu, Jianxin Wu. TWEO: Transformers Without Extreme Outliers Enables FP8 Training And Quantization For Dummies. CVPR 2026. https://arxiv.org/abs/2511.23225