ManiTrans: Entity-Level Text-Guided Image Manipulation via Token-wise Semantic Alignment and Generation


Fudan University     Huawei Noah’s Ark Lab



Abstract

Existing text-guided image manipulation methods aim to modify the appearance of an image or to edit a few objects in a virtual or simple scenario, which is far from practical application. In this work, we study a novel task of text-guided image manipulation at the entity level in the real world. The task imposes three basic requirements: (1) edit the entity consistently with the text description, (2) preserve text-irrelevant regions, and (3) merge the manipulated entity into the image naturally. To this end, we propose a new transformer-based framework built on the two-stage image synthesis paradigm, namely ManiTrans, which can not only edit the appearance of entities but also generate new entities corresponding to the text guidance. Our framework incorporates a semantic alignment module to locate the image regions to be manipulated, and a semantic loss to help align vision and language. We conduct extensive experiments on three real-world datasets, CUB, Oxford, and COCO, to verify that our method can distinguish relevant from irrelevant regions and achieve more precise and flexible manipulation than baseline methods.
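To illustrate the idea of token-wise semantic alignment described above, the following is a minimal sketch of how text-relevant image regions could be located: each image token is scored against the text tokens by cosine similarity, and tokens scoring above a threshold form the region to manipulate. This is an illustrative simplification, not the paper's exact implementation; the function name, token shapes, and threshold value are all assumptions.

```python
import numpy as np

def locate_relevant_tokens(image_tokens, text_tokens, threshold=0.5):
    """Sketch of token-wise semantic alignment (hypothetical helper).

    image_tokens: (num_image_tokens, dim) array of image token embeddings.
    text_tokens:  (num_text_tokens, dim) array of text token embeddings.
    Returns a boolean mask over image tokens marking text-relevant ones.
    """
    # L2-normalize both token sets so dot products are cosine similarities.
    img = image_tokens / np.linalg.norm(image_tokens, axis=-1, keepdims=True)
    txt = text_tokens / np.linalg.norm(text_tokens, axis=-1, keepdims=True)
    # (num_image_tokens, num_text_tokens) similarity matrix.
    sim = img @ txt.T
    # An image token counts as text-relevant if it matches any text token strongly.
    scores = sim.max(axis=-1)
    return scores > threshold

# Toy usage with random embeddings: 256 image tokens, 8 text tokens, dim 64.
mask = locate_relevant_tokens(np.random.randn(256, 64), np.random.randn(8, 64))
```

In the full framework the mask would gate which image tokens the transformer regenerates from the text, while the remaining tokens are kept to preserve text-irrelevant regions.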




ManiTrans



ManiTrans Framework.



Paper and Code

ManiTrans: Entity-Level Text-Guided Image Manipulation via Token-wise Semantic Alignment and Generation

Jianan Wang, Guansong Lu, Hang Xu, Zhenguo Li, Chunjing Xu, Yanwei Fu

[Paper] [GitHub]



Results



Multiple entities manipulation.




Manipulation across categories on COCO.




Manipulation from bird to flower and from flower to bird on CUB and Oxford.



Acknowledgements

The corresponding authors are Yanwei Fu and Hang Xu.
This work was supported in part by NSFC Project (62176061) and SMSTM Projects (2018SHZDZX01 and 2021SHZDZX0103).
