Jiang Liu

I am a fourth-year PhD student in the Department of Electrical and Computer Engineering at Johns Hopkins University (JHU), advised by Prof. Rama Chellappa. I am a member of the AIEM lab and CIS. I received my MSE degree from JHU in 2021 and my BSE degree from the Department of Automation, Tsinghua University, in 2019, advised by Prof. Jianjiang Feng and Prof. Jie Zhou.

In summer 2022, I worked as an Applied Scientist Intern at Amazon AWS AI on vision-language problems, mentored by Dr. Hui Ding, Dr. Zhaowei Cai, and Dr. Yuting Zhang. I have also worked as a Deep Learning Research Scientist Intern at Subtle Medical, developing novel Transformer-based magnetic resonance imaging (MRI) algorithms.

Email  /  Google Scholar  /  Github

profile photo

My research focuses on building trustworthy AI systems that benefit people. A major thread of my work develops principled algorithms for defending AI systems against adversarial attacks. Beyond adversarial robustness, I am also interested in applying computer vision and machine learning to healthcare, particularly medical imaging.

Selected Publications
Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses
Chun Pong Lau, Jiang Liu, Hossein Souri, Wei-An Lin, Soheil Feizi, Rama Chellappa
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
IEEE / arXiv / bibtex

We propose a novel threat model, the Joint Space Threat Model (JSTM), which exploits the underlying manifold information with a Normalizing Flow, ensuring that the exact manifold assumption holds. Under JSTM, we develop novel adversarial attacks and defenses. Furthermore, we propose the Robust Mixup strategy, in which we maximize the adversity of the interpolated images to gain robustness while preventing overfitting.
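The interpolation underlying Robust Mixup can be illustrated with the standard mixup step; this is only a sketch of the interpolation itself, since in the paper the interpolation is additionally chosen to maximize adversity:

```python
import numpy as np

def mixup(x1, x2, lam):
    # Plain mixup: convex combination of two inputs with weight lam.
    # (Robust Mixup additionally searches over the interpolation to
    # maximize adversity -- that search is not shown here.)
    return lam * x1 + (1.0 - lam) * x2

a = np.zeros((2, 2))
b = np.ones((2, 2))
mixed = mixup(a, b, 0.25)  # every entry is 0.75
```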

One Model to Synthesize Them All: Multi-contrast Multi-scale Transformer for Missing Data Imputation
Jiang Liu*, Srivathsa Pasumarthi*, Ben Duffy, Enhao Gong, Keshav Datta, Greg Zaharchuk (*equal contribution)
IEEE Transactions on Medical Imaging (TMI), 2022
IEEE / arXiv / bibtex

In this paper, we formulate missing data imputation as a sequence-to-sequence learning problem and propose the multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize the missing ones. MMT efficiently captures intra- and inter-contrast dependencies for accurate image synthesis. Moreover, it is inherently interpretable: analyzing the built-in attention maps of the MMT decoder reveals the importance of each input contrast in different regions.

PolyFormer: Referring Image Segmentation as Sequential Polygon Generation
Jiang Liu*, Hui Ding*, Zhaowei Cai, Yuting Zhang, Ravi Kumar Satzoda, Vijay Mahadevan, R. Manmatha (*equal contribution)
CVPR, 2023
Project Page / arXiv / code / bibtex

In this work, instead of directly predicting pixel-level segmentation masks, we formulate referring image segmentation as sequential polygon generation; the predicted polygons can later be converted into segmentation masks. This is enabled by a new sequence-to-sequence framework, the Polygon Transformer (PolyFormer), which takes a sequence of image patches and text query tokens as input and autoregressively outputs a sequence of polygon vertices.
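The final polygon-to-mask conversion mentioned above can be sketched with simple even-odd ray casting; this is an illustrative rasterizer, not the implementation used in the paper:

```python
import numpy as np

def polygon_to_mask(vertices, h, w):
    """Rasterize a polygon (list of (x, y) vertices) into a binary mask
    via even-odd ray casting: a pixel is inside if a horizontal ray
    from it crosses the boundary an odd number of times."""
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Rows this edge spans (half-open to count shared vertices once).
        cond = (ys >= min(y1, y2)) & (ys < max(y1, y2))
        with np.errstate(divide="ignore", invalid="ignore"):
            # x-coordinate where the edge crosses each row.
            xc = x1 + (ys - y1) * (x2 - x1) / (y2 - y1)
        mask ^= cond & (xs < xc)
    return mask

# A 2x2 square with corners (1, 1) and (3, 3) inside a 5x5 grid.
m = polygon_to_mask([(1, 1), (3, 1), (3, 3), (1, 3)], 5, 5)
```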

Segment and Complete: Defending Object Detectors Against Adversarial Patch Attacks With Robust Patch Detection
Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, Soheil Feizi
CVPR, 2022
PDF / Supp / arXiv / bibtex / code / Apricot-Mask Dataset

In this paper, we propose Segment and Complete defense (SAC), a general framework for defending object detectors against patch attacks through detection and removal of adversarial patches. SAC achieves superior robustness even under strong adaptive attacks with no reduction in performance on clean images, and generalizes well to unseen patch shapes, attack budgets, and unseen attack methods.
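A minimal sketch of the detect-and-remove idea follows; the score map and the bounding-box "completion" are illustrative assumptions (SAC learns patch segmentation with a robust segmenter and uses a shape-completion step, neither of which is reproduced here):

```python
import numpy as np

def detect_and_remove(image, score_map, thresh=0.5):
    """Threshold a (hypothetical) per-pixel patch score map, complete
    the detection to its bounding box, and zero out that region so a
    downstream object detector never sees the adversarial patch."""
    mask = score_map > thresh
    if not mask.any():
        return image  # nothing detected; clean image passes through
    ys, xs = np.where(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    out = image.copy()
    out[y0:y1, x0:x1] = 0
    return out

img = np.ones((4, 4))
scores = np.zeros((4, 4))
scores[1, 1] = scores[2, 2] = 0.9  # two high-score patch pixels
cleaned = detect_and_remove(img, scores)
```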

Mutual Adversarial Training: Learning together is better than going alone
Jiang Liu, Chun Pong Lau, Hossein Souri, Soheil Feizi, Rama Chellappa
IEEE Transactions on Information Forensics and Security (TIFS), 2022
IEEE / arXiv / bibtex

In this paper, we propose mutual adversarial training (MAT), in which multiple models are trained together and share the knowledge of adversarial examples to achieve improved robustness. MAT allows robust models to explore a larger space of adversarial samples, and find more robust feature spaces and decision boundaries. We show that MAT can improve model robustness for both single and multiple perturbations.
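The knowledge-sharing ingredient can be sketched as a symmetric KL term between two models' predictive distributions; this is only illustrative of the mutual-learning idea (in MAT the models share knowledge of adversarial examples during adversarial training, which is not reproduced here):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mutual_kl(logits_a, logits_b):
    """Symmetric KL divergence between two models' output distributions,
    averaged over the batch -- a typical knowledge-sharing penalty that
    pulls the models' predictions toward each other."""
    p, q = softmax(logits_a), softmax(logits_b)
    kl_pq = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    kl_qp = np.sum(q * (np.log(q) - np.log(p)), axis=-1)
    return float((kl_pq + kl_qp).mean())

z = np.array([[1.0, 2.0, 3.0]])
same = mutual_kl(z, z)                                   # zero: identical predictions
diff = mutual_kl(np.array([[0.0, 1.0]]), np.array([[1.0, 0.0]]))  # positive
```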

Source code credit to Dr. Jon Barron.