A final-year Ph.D. candidate @ National University of Singapore
Incoming Assistant Professor @ School of Artificial Intelligence, Shanghai Jiao Tong University
Education Background:
2021.08 - Now Ph.D. @ National University of Singapore
2017.09 - 2021.06 B.S. @ Nanjing University
Internship:
2020: Rutgers University
Supervisor: Prof. Hao Wang
2021: Baidu Inc.
Supervisor: Tianwei Lin
Research Interests: Model Editing for Artificial General Intelligence
Model Level
Architectural Design
Efficiency & Scalability
Versatility & Generalizability
Data Level
Data Efficiency
Data Privacy
Data Interpretability
Application Level
Visual Generation
xML
Machine Learning for a spectrum of domains, including computer vision, natural language processing, speech, multimedia, science, and engineering
If you are interested in working with me, just drop me an email! We are always actively looking for self-motivated students.
Contact: songhua.liu@u.nus.edu
Last Update: 2025.02
[2025.02] 🔥We release a benchmark for large-scale dataset compression. Try it here!🚀
[2024.12] 🔥We release CLEAR, a simple-yet-effective strategy to linearize the complexity of pre-trained diffusion transformers, such as FLUX and SD3. Try it here!🚀
[2024.12] 🔥We release OminiControl, a minimal yet powerful universal control framework for Diffusion Transformer models like FLUX. Try it here!🚀
* denotes equal contribution.
Ultra-Resolution Adaptation with Ease
Ruonan Yu*, Songhua Liu*, Zhenxiong Tan, and Xinchao Wang
[Efficiency & Scalability -> Guidelines on adapting pre-trained diffusion models from their native resolutions to higher scales]
CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up
Songhua Liu, Zhenxiong Tan, and Xinchao Wang
[Efficiency & Scalability & Architectural Design -> A simple-yet-effective strategy to linearize the complexity of pre-trained diffusion transformers, such as FLUX and SD3]
LinFusion: 1 GPU, 1 Minute, 16K Image
Songhua Liu, Weihao Yu, Zhenxiong Tan, and Xinchao Wang
[Efficiency & Scalability & Architectural Design -> The first text-to-image diffusion model with linear-complexity structure]
StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization
Songhua Liu, Xin Jin, Xingyi Yang, Jingwen Ye, and Xinchao Wang
[Versatility & Generalizability -> Why to destyle, how to destyle, and where to destyle for single-source domain generalization]
Deep Model Reassembly
Xingyi Yang, Daquan Zhou, Songhua Liu, Jingwen Ye, and Xinchao Wang
[Paper] [Code] NeurIPS 2022, Paper Award
[Architectural Design -> A framework to extract and reassemble modular building blocks of pre-trained models]
One-Shot Federated Learning via Synthetic Distiller-Distillate Communication
Junyuan Zhang, Songhua Liu, and Xinchao Wang
[Data Privacy & Data Efficiency -> A framework based on dataset distillation for one-shot federated learning]
Distilled Datamodel with Reverse Gradient Matching
Jingwen Ye, Ruonan Yu, Songhua Liu, and Xinchao Wang
[Paper] CVPR 2024
[Data Interpretability -> A framework based on dataset distillation to investigate the impact of changes in training data on a pre-trained model]
Dataset Distillation: A Comprehensive Review
Ruonan Yu*, Songhua Liu*, and Xinchao Wang
[Paper] TPAMI 2023
[Data Efficiency -> A comprehensive taxonomy and review of recent works on dataset distillation]
MGDD: A Meta Generator for Fast Dataset Distillation
Songhua Liu and Xinchao Wang
[Paper] NeurIPS 2023, Spotlight
[Data Efficiency -> A framework to obtain distilled datasets via adapting a meta generator]
Few-Shot Dataset Distillation via Translative Pre-training
Songhua Liu and Xinchao Wang
[Paper] ICCV 2023
[Data Efficiency -> A framework to obtain distilled datasets via adapting a pre-trained translator in few shots]
Slimmable Dataset Condensation
Songhua Liu, Jingwen Ye, Runpeng Yu, and Xinchao Wang
[Paper] CVPR 2023, Highlight
[Data Efficiency -> A framework enabling a distilled dataset to be compressible successively by multiple times]
Dataset Distillation via Factorization
Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, and Xinchao Wang
OminiControl: Minimal and Universal Control for Diffusion Transformer
Zhenxiong Tan, Songhua Liu, Xingyi Yang, Qiaochu Xue, and Xinchao Wang
[Visual Generation -> A minimal yet powerful universal control framework for Diffusion Transformer models like FLUX]
MindBridge: A Cross-Subject Brain Decoding Framework
Shizun Wang, Songhua Liu, Zhenxiong Tan, and Xinchao Wang
[Machine Learning for Science -> The first cross-subject framework to reconstruct visual stimuli from brain fMRI signals]
Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer
Hao Tang*, Songhua Liu*, Tianwei Lin, Shaoli Huang, Fu Li, Dongliang He, and Xinchao Wang
[Paper] CVPR 2023
[Visual Generation -> A Transformer architecture tailored for controllable zero-shot and few-shot style transfer]
DynaST: Dynamic Sparse Transformer for Exemplar-Guided Image Generation
Songhua Liu, Jingwen Ye, Sucheng Ren, and Xinchao Wang
[Visual Generation -> A Transformer architecture with a dynamically sparse mechanism supporting attention computation at full resolution scales for exemplar-guided image generation]
Paint Transformer: Feed Forward Neural Painting with Stroke Prediction
Songhua Liu*, Tianwei Lin*, Dongliang He, Fu Li, Ruifeng Deng, Xin Li, Errui Ding, and Hao Wang
[Visual Generation -> A feed-forward neural painting framework trained with synthetic data only]
AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer
Songhua Liu, Tianwei Lin, Dongliang He, Fu Li, Mengling Wang, Xin Li, Zhengxing Sun, Qian Li, and Errui Ding
[Visual Generation -> An attention mechanism tailored for high-quality arbitrary style transfer]
Stable Video Style Transfer Based on Partial Convolution with Depth-Aware Supervision
Songhua Liu, Hao Wu, Shoutong Luo, and Zhengxing Sun
[Visual Generation -> A method addressing the trade-off between coherence and realism in video style transfer]
Speak in a way that warms others, act with a clear conscience, and remain absolutely objective in self-reflection.