Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2 
Published in CVPR 2026
Proposes a universal network for real-world mathematical expression recognition that achieves high-precision formula parsing and recognition across diverse, complex scenarios. As co-first author, I designed the model architecture, conducted most of the experiments, and introduced the R-S attention mechanism as a core architectural innovation. A minimal inference sketch follows the citation below.
| Download paper here | Code |
Recommended citation: Zhuangcheng Gu, Guang Liang, Bin Wang, Chao Xu, Bo Zhang, Botian Shi, Conghui He. UniMERNet: A Universal Network for Real-World Mathematical Expression Recognition. CVPR 2026. https://arxiv.org/abs/2404.15254
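Below is a minimal, hedged sketch of the greedy image-to-LaTeX decoding loop that recognizers of this kind run at inference time. All names (`encoder`, `decoder`, `tokenizer`, `recognize_formula`) are hypothetical placeholders rather than the released UniMERNet API, and the R-S attention internals are assumed to live inside `decoder` rather than being reproduced here.

```python
# Hedged sketch of a greedy image-to-LaTeX decoding loop; `encoder`,
# `decoder`, and `tokenizer` are hypothetical placeholders, NOT the
# released UniMERNet API.
import torch

@torch.no_grad()
def recognize_formula(image, encoder, decoder, tokenizer, max_len=256):
    """Greedy decoding: visual features condition a token-by-token decoder."""
    memory = encoder(image)                    # image -> visual feature memory
    tokens = [tokenizer.bos_id]                # start-of-sequence token
    for _ in range(max_len):
        logits = decoder(torch.tensor([tokens]), memory)  # (1, t, vocab)
        next_id = int(logits[0, -1].argmax())  # most likely next token
        if next_id == tokenizer.eos_id:
            break
        tokens.append(next_id)
    return tokenizer.decode(tokens[1:])        # LaTeX string without BOS
```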
Published in Technical Report, 2025
A 1.2B-parameter document parsing model that achieves SOTA-level recognition accuracy on complex formulas, tables, and dense text. I contributed as a co-author as part of the MinerU Team. A hedged sketch of the decoupled two-stage idea follows the citation below.
Recommended citation: MinerU Team. MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing. Technical Report, 2025. https://arxiv.org/abs/2509.22186
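As a rough illustration of the decoupled idea, here is a hedged Python sketch of a two-stage parse: a cheap pass locates regions on a downsampled page, then each region is recognized from a native-resolution crop. Every callable here (`detect_layout`, `recognize`, `downscale`, `crop`) is a hypothetical stand-in, not the MinerU2.5 API.

```python
# Hedged sketch of a decoupled two-stage document parse; all callables
# are hypothetical stand-ins, NOT the MinerU2.5 API.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1) in native-resolution pixels

@dataclass
class Region:
    box: Box
    kind: str        # e.g. "text", "table", "formula"
    content: str = ""

def parse_page(page, detect_layout: Callable, recognize: Callable,
               downscale: Callable, crop: Callable) -> List[Region]:
    """Stage 1: cheap global layout on a downsampled page.
    Stage 2: per-region recognition on native-resolution crops."""
    small = downscale(page)
    # detect_layout is assumed to map boxes back to native coordinates.
    regions = [Region(box=b, kind=k) for b, k in detect_layout(small)]
    for r in regions:
        r.content = recognize(crop(page, r.box), r.kind)
    return regions
```

Decoupling lets the expensive recognizer see only small, high-resolution crops instead of the full high-resolution page, which is where the efficiency claim comes from.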
Published in NeurIPS 2025
Proposes an “activation-first” ViT quantization framework that is 100x faster than traditional QAT pipelines while achieving SOTA-level generalization performance. A toy sketch of the activation-first idea follows the citation below.
Recommended citation: Guang Liang, Xinyao Liu, Jianxin Wu. GPLQ: A General, Practical, and Lightning QAT Method for Vision Transformers. NeurIPS 2025. https://arxiv.org/abs/2506.11784
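To make the “activation-first” order concrete, here is a toy PyTorch sketch (my own illustration, not the GPLQ code): activations pass through a fake-quantizer with a straight-through estimator while weights stay in floating point, so the network adapts to activation quantization before any weight quantization is applied.

```python
# Toy "activation-first" fake-quant wrapper (illustrative only, not GPLQ):
# activations are quantized to int8 ranges first while weights stay in FP.
import torch
import torch.nn as nn

class ActFakeQuant(nn.Module):
    """Simulate uniform int8 activation quantization in the forward pass."""
    def __init__(self, n_bits: int = 8):
        super().__init__()
        self.qmax = 2 ** (n_bits - 1) - 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = x.detach().abs().amax().clamp(min=1e-8) / self.qmax
        # Straight-through estimator: round in forward, identity in backward.
        q = (x / scale).round().clamp(-self.qmax - 1, self.qmax) * scale
        return x + (q - x).detach()

def quantize_activations_first(model: nn.Module) -> nn.Module:
    """Insert activation fake-quant after every Linear; weights untouched."""
    for name, mod in model.named_children():
        if isinstance(mod, nn.Linear):
            setattr(model, name, nn.Sequential(mod, ActFakeQuant()))
        else:
            quantize_activations_first(mod)
    return model
```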
Published in CVPR 2026
Systematically addresses the outlier problem in Transformer activations, significantly improving stability and training speed in general model pretraining while reducing quantization loss. The method generalizes well across domains, delivering significant throughput improvements and inference gains on both vision and language foundation models, and has been validated on thousand-GPU clusters and multi-billion-parameter pretraining runs. A small diagnostic sketch of the outlier problem follows the citation below.
Recommended citation: Guang Liang, Jie Shao, Ningyuan Tang, Xinyao Liu, Jianxin Wu. TWEO: Transformers Without Extreme Outliers Enables FP8 Training And Quantization For Dummies. CVPR 2026. https://arxiv.org/abs/2511.23225
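As a small diagnostic of the problem the paper targets, the hedged sketch below (an assumed setup, not the paper's code) uses forward hooks to score per-layer activation outliers. A single extreme value inflates the tensor-wide scale and wastes FP8's narrow dynamic range, which is the failure mode that removing outliers avoids.

```python
# Hedged diagnostic sketch (assumed setup, not the TWEO code): log a crude
# per-layer outlier score with forward hooks.
import torch
import torch.nn as nn

def log_activation_outliers(model: nn.Module, x: torch.Tensor) -> dict:
    stats = {}

    def make_hook(name):
        def hook(_mod, _inp, out):
            if isinstance(out, torch.Tensor):
                a = out.detach().float().abs()
                # Ratio of max to median magnitude: a crude outlier score.
                stats[name] = (a.max() / a.median().clamp(min=1e-8)).item()
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if isinstance(m, nn.Linear)]
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return stats
```

Layers whose score sits orders of magnitude above the rest are the ones whose tensor-wide FP8 scale gets dominated by a handful of values.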
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the `type` field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.