Chinese Academy of Sciences Institutional Repositories Grid
DeltaEdit: Exploring Text-free Training for Text-Driven Image Manipulation

Document Type: Conference Paper

Authors: Yueming Lyu; Tianwei Lin; Fu Li; Dongliang He; Jing Dong; Tieniu Tan
Publication Date: 2023
Conference Date: 2023-06
Conference Venue: Vancouver, Canada
Abstract

Text-driven image manipulation remains challenging in terms of training and inference flexibility. Conditional generative models depend heavily on expensive annotated training data, while recent frameworks that leverage pre-trained vision-language models are limited by either per-text-prompt optimization or inference-time hyperparameter tuning. In this work, we propose a novel framework named DeltaEdit to address these problems. Our key idea is to investigate and identify a space, namely the delta image and text space, in which the distribution of CLIP visual-feature differences between two images is well aligned with that of CLIP textual-embedding differences between source and target texts. Based on this CLIP delta space, the DeltaEdit network is designed to map CLIP visual-feature differences to StyleGAN editing directions during the training phase. Then, at inference, DeltaEdit predicts StyleGAN editing directions from the differences of CLIP textual features. In this way, DeltaEdit is trained in a text-free manner. Once trained, it generalizes well to various text prompts for zero-shot inference without bells and whistles. Code is available at https://github.com/Yueming6568/DeltaEdit.
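The abstract's two-phase mechanism (fit a mapper on CLIP visual-feature deltas from image pairs, then reuse it on CLIP textual deltas at inference) can be caricatured with a purely linear toy model. Everything below is an illustrative assumption, not the paper's actual network: the dimensions, the synthetic "ground-truth" edit directions, and the least-squares fit stand in for DeltaEdit's learned neural mapper over StyleGAN's style space.

```python
import numpy as np

rng = np.random.default_rng(0)
CLIP_DIM, STYLE_DIM = 512, 512  # illustrative sizes, not the paper's

# Pretend "ground-truth" linear relation between CLIP feature deltas and
# StyleGAN edit directions; it stands in for the alignment the paper exploits.
W_true = rng.normal(size=(CLIP_DIM, STYLE_DIM)) / np.sqrt(CLIP_DIM)

# --- Text-free training: only image pairs are needed ---
# X: normalized CLIP visual-feature differences of image pairs.
X = rng.normal(size=(1000, CLIP_DIM))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = X @ W_true  # synthetic StyleGAN edit directions for those pairs

# Fit a linear mapper from the delta space to edit directions (least squares).
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# --- Zero-shot inference: feed a CLIP *text* delta into the same mapper ---
t_src = rng.normal(size=CLIP_DIM)  # e.g. CLIP features of a source prompt
t_tgt = rng.normal(size=CLIP_DIM)  # e.g. CLIP features of a target prompt
text_delta = (t_tgt - t_src) / np.linalg.norm(t_tgt - t_src)
edit_direction = text_delta @ W_hat  # would be applied to StyleGAN latents
print(edit_direction.shape)  # (512,)
```

Because the delta spaces are assumed aligned, the mapper never sees text during training yet accepts a text delta at inference unchanged, which is the sense in which the training is "text-free".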

Source URL: http://ir.ia.ac.cn/handle/173211/56616
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Corresponding Author: Jing Dong
Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
3. Baidu Inc
4. Nanjing University
Recommended Citation (GB/T 7714):
Yueming Lyu, Tianwei Lin, Fu Li, et al. DeltaEdit: Exploring Text-free Training for Text-Driven Image Manipulation[C]. In: Vancouver, Canada. 2023-06.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.