Research
My current research interests include large language models, vision-language models, and trustworthy AI.
We are hiring research interns in all areas of generative AI! Feel free to drop me an email with your CV if interested.
Instruct2Attack: Language-Guided Semantic Adversarial Attacks
Jiang Liu, Chen Wei, Yuxiang Guo, Heng Yu, Alan Yuille, Soheil Feizi, Chun Pong Lau, Rama Chellappa
Under Submission, 2024
arXiv /
bibtex
We propose Instruct2Attack (I2A), a language-guided semantic attack that generates semantically meaningful perturbations according to free-form language instructions.
We show that I2A successfully breaks state-of-the-art deep neural networks even under strong adversarial defenses, and that the attack transfers well across a variety of network architectures.
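As a rough illustration of the general recipe this suggests (optimizing a latent so a conditioned generator's output fools a victim model), here is a minimal PyTorch sketch. The instruction conditioning is elided, and the toy linear modules are stand-ins for the paper's actual generator and victim network; nothing below is the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins: I2A conditions a generative model on a text
# instruction; toy linear modules keep this sketch runnable.
generator = torch.nn.Linear(64, 3 * 32 * 32)   # latent -> image (toy)
classifier = torch.nn.Linear(3 * 32 * 32, 10)  # victim model (toy)
true_label = torch.tensor([3])                 # label the attack should break

z = torch.randn(1, 64, requires_grad=True)     # latent behind the semantic edit
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(100):
    image = generator(z)                       # semantically edited image
    logits = classifier(image)
    loss = -F.cross_entropy(logits, true_label)  # maximize the victim's loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```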
DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection
Jiang Liu*, Chun Pong Lau*, Yuxiang Guo, Zhaoyang Wang, Rama Chellappa (*equal contribution)
Under Submission, 2023
arXiv /
bibtex /
code
We propose DiffProtect, which utilizes a diffusion autoencoder to generate semantically meaningful perturbations against face recognition (FR) systems. Extensive experiments demonstrate that DiffProtect produces more natural-looking encrypted images than state-of-the-art methods while achieving significantly higher attack success rates.
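A heavily simplified sketch of the latent-space optimization this implies: push the protected image's identity embedding away from the original while keeping the image close to the source. Toy linear modules stand in for the diffusion autoencoder's decoder and the FR model, and the naturalness weight is an assumption, not the paper's setting.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins: DiffProtect perturbs the *semantic latent* of a diffusion
# autoencoder; linear modules keep this sketch runnable.
decoder = torch.nn.Linear(128, 3 * 64 * 64)      # semantic latent -> face image (toy)
fr_model = torch.nn.Linear(3 * 64 * 64, 512)     # face recognizer -> embedding (toy)

face = torch.randn(1, 3 * 64 * 64)               # image to protect
z_sem = torch.randn(1, 128, requires_grad=True)  # semantic latent to optimize
identity = F.normalize(fr_model(face), dim=-1).detach()

opt = torch.optim.Adam([z_sem], lr=0.01)
for _ in range(50):
    protected = decoder(z_sem)
    emb = F.normalize(fr_model(protected), dim=-1)
    # Move away from the original identity embedding, but stay close to the
    # source image so the result still looks natural (0.1 is illustrative).
    loss = F.cosine_similarity(emb, identity).mean() + 0.1 * F.mse_loss(protected, face)
    opt.zero_grad()
    loss.backward()
    opt.step()
```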
Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses
Chun Pong Lau, Jiang Liu, Hossein Souri, Wei-An Lin, Soheil Feizi, Rama Chellappa
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
IEEE /
arXiv /
bibtex
We propose a novel threat model, the Joint Space Threat Model (JSTM), which exploits the underlying manifold information with a normalizing flow, ensuring that the exact-manifold assumption holds. Under JSTM, we develop novel adversarial attacks and defenses. Furthermore, we propose a Robust Mixup strategy that maximizes the adversity of the interpolated images, improving robustness while preventing overfitting.
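A minimal sketch of the Robust Mixup selection step: among candidate interpolation coefficients, keep the mixed sample the current model finds hardest. Plain pixel-space mixup is used here for brevity (the paper interpolates in the flow's latent space), and the candidate grid is illustrative.

```python
import torch
import torch.nn.functional as F

def robust_mixup(model, x1, y1, x2, y2, lambdas=(0.2, 0.4, 0.6, 0.8)):
    """Return the most adversarial interpolation of (x1, y1) and (x2, y2)."""
    worst_loss, worst_x, worst_lam = None, None, None
    for lam in lambdas:
        x_mix = lam * x1 + (1 - lam) * x2
        with torch.no_grad():
            logits = model(x_mix)
            # Mixup-style loss: labels weighted by the same coefficient.
            loss = lam * F.cross_entropy(logits, y1) \
                 + (1 - lam) * F.cross_entropy(logits, y2)
        if worst_loss is None or loss > worst_loss:
            worst_loss, worst_x, worst_lam = loss, x_mix, lam
    return worst_x, worst_lam  # train on the hardest interpolation
```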
One Model to Synthesize Them All: Multi-contrast Multi-scale Transformer for Missing Data Imputation
Jiang Liu*, Srivathsa Pasumarthi*, Ben Duffy, Enhao Gong, Keshav Datta, Greg Zaharchuk (*equal contribution)
IEEE Transactions on Medical Imaging (TMI), 2023
IEEE /
arXiv /
bibtex
In this paper, we formulate missing data imputation as a sequence-to-sequence learning problem and
propose the multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and
synthesize those that are missing. It efficiently captures intra- and inter-contrast dependencies for accurate image synthesis.
Moreover, MMT is inherently interpretable: the built-in attention maps of the MMT decoder reveal
the importance of each input contrast in different regions.
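A minimal sketch of the any-subset-in, missing-subset-out training setup described above. A single linear layer stands in for the multi-contrast multi-scale Transformer, and the four example contrasts, shapes, and names are illustrative assumptions, not the paper's implementation.

```python
import random
import torch
import torch.nn.functional as F

C, D = 4, 256                              # four MRI contrasts, D pixels each (toy)
model = torch.nn.Linear(C * D + C, C * D)  # inputs + availability mask -> all contrasts
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

scans = torch.randn(8, C, D)               # toy batch of co-registered contrasts
for _ in range(10):
    # Randomly drop a non-empty proper subset of contrasts each step.
    missing = torch.zeros(C)
    missing[random.sample(range(C), k=random.randint(1, C - 1))] = 1.0
    inputs = scans * (1 - missing).view(1, C, 1)   # zero out missing contrasts
    mask = missing.expand(8, C)
    pred = model(torch.cat([inputs.flatten(1), mask], dim=1)).view(8, C, D)
    # Supervise only the contrasts the model had to synthesize.
    loss = (F.mse_loss(pred, scans, reduction="none") * missing.view(1, C, 1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```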
PolyFormer: Referring Image Segmentation as Sequential Polygon Generation
Jiang Liu*, Hui Ding*, Zhaowei Cai, Yuting Zhang, Ravi Kumar Satzoda, Vijay Mahadevan, R. Manmatha (*equal contribution)
CVPR, 2023
Project Page /
arXiv /
code /
bibtex
In this work, instead of directly predicting pixel-level segmentation masks, the problem of referring
image segmentation is formulated as sequential polygon generation, and the predicted polygons can later
be converted into segmentation masks. This is enabled by a new sequence-to-sequence framework, the Polygon
Transformer (PolyFormer), which takes a sequence of image patches and text query tokens as input
and outputs a sequence of polygon vertices autoregressively.
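A rough sketch of that autoregressive decoding loop: vertices come out one 2-D coordinate at a time until a stop flag fires. A GRU cell stands in for the Transformer decoder, and the stop mechanics, module names, and length cap are simplifying assumptions.

```python
import torch

decoder = torch.nn.GRUCell(2, 64)       # previous vertex -> hidden state (toy)
to_vertex = torch.nn.Linear(64, 3)      # hidden -> (x, y, stop-logit)

h = torch.zeros(1, 64)                  # would be seeded from image+text features
vertex = torch.zeros(1, 2)              # begin-of-sequence vertex
polygon = []
for _ in range(50):                     # cap on polygon length
    h = decoder(vertex, h)
    out = to_vertex(h)
    vertex, stop_logit = out[:, :2], out[:, 2]
    polygon.append(vertex.squeeze(0).tolist())
    if torch.sigmoid(stop_logit).item() > 0.5:
        break                           # model predicts the polygon is closed
# `polygon` can then be rasterized into a segmentation mask.
```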
Segment and Complete: Defending Object Detectors Against Adversarial Patch Attacks With Robust Patch Detection
Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, Soheil Feizi
CVPR, 2022
PDF /
Supp /
arXiv /
bibtex /
code /
Apricot-Mask Dataset
In this paper, we propose the Segment and Complete defense (SAC),
a general framework for defending object detectors against patch attacks
through detection and removal of adversarial patches. SAC achieves superior robustness even
under strong adaptive attacks, with no reduction in performance on clean images, and generalizes well to
unseen patch shapes, attack budgets, and attack methods.
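A minimal sketch of the inference path this describes: segment a patch mask, binarize it, and blank the suspected pixels before running any off-the-shelf detector. A 1x1 convolution stands in for the trained patch segmenter, the shape-completion stage is omitted, and the threshold is illustrative.

```python
import torch

patch_segmenter = torch.nn.Conv2d(3, 1, kernel_size=1)  # toy stand-in

def sac_preprocess(image, threshold=0.5):
    mask = torch.sigmoid(patch_segmenter(image))  # per-pixel patch score
    mask = (mask > threshold).float()             # binarized patch mask
    return image * (1 - mask)                     # remove suspected patch pixels

image = torch.rand(1, 3, 128, 128)
clean = sac_preprocess(image)  # feed `clean` to the object detector
```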
Mutual Adversarial Training: Learning together is better than going alone
Jiang Liu, Chun Pong Lau, Hossein Souri, Soheil Feizi, Rama Chellappa
IEEE Transactions on Information Forensics and Security (TIFS), 2022
IEEE /
arXiv /
bibtex
In this paper, we propose mutual adversarial training (MAT), in which multiple models are trained
together and share knowledge of adversarial examples to achieve improved robustness.
MAT allows robust models to explore a larger space of adversarial samples and to
find more robust feature spaces and decision boundaries. We show that MAT improves model robustness under
both single and multiple perturbation types.
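A toy sketch of the knowledge-sharing idea, assuming FGSM as the attack and two linear models: each model crafts its own adversarial examples, and every model trains on the union. The paper's actual losses and architectures differ; everything below is illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step attack used here as a simple stand-in."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

models = [torch.nn.Linear(784, 10), torch.nn.Linear(784, 10)]
opts = [torch.optim.SGD(m.parameters(), lr=0.1) for m in models]

x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
adv = [fgsm(m, x, y) for m in models]  # one attack per model
for m, opt in zip(models, opts):
    # Each model learns from *all* models' adversarial examples.
    loss = sum(F.cross_entropy(m(x_adv), y) for x_adv in adv)
    opt.zero_grad()
    loss.backward()
    opt.step()
```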