DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization
Document type: Journal article
Authors | Huang, Nisha3,4; Zhang, Yuxin3,4; Tang, Fan2; Ma, Chongyang1; Huang, Haibin3; Dong, Weiming3,4; Xu, Changsheng3,4
Journal | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Publication date | 2024-01-10
Pages | 14
Keywords | Arbitrary image stylization; diffusion; textual guidance; neural network applications
ISSN | 2162-237X
DOI | 10.1109/TNNLS.2023.3342645
Abstract | Despite the impressive results of arbitrary image-guided style transfer methods, text-driven image stylization has recently been proposed for transferring a natural image into a stylized one according to textual descriptions of the target style provided by the user. Unlike previous image-to-image transfer approaches, the text-guided stylization process provides users with a more precise and intuitive way to express the desired style. However, the huge discrepancy between cross-modal inputs/outputs makes it challenging to conduct text-driven image stylization in a typical feed-forward CNN pipeline. In this article, we present DiffStyler, a dual diffusion processing architecture that controls the balance between the content and style of the diffused results. The cross-modal style information can be easily integrated as guidance during the diffusion process step by step. Furthermore, we propose a content image-based learnable noise on which the reverse denoising process is based, enabling the stylization results to better preserve the structure information of the content image. We validate that the proposed DiffStyler outperforms baseline methods through extensive qualitative and quantitative experiments. The code is available at https://github.com/haha-lisa/Diffstyler.
Funding | National Science Foundation of China
WOS research areas | Computer Science; Engineering
Language | English
WOS accession number | WOS:001173965600001
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Source URL | [http://119.78.100.204/handle/2XEOYT63/38846]
Collection | Institute of Computing Technology, Chinese Academy of Sciences — Journal Papers (English)
Corresponding author | Dong, Weiming
Affiliations | 1.Kuaishou Technol, Beijing 100085, Peoples R China 2.Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China 3.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China 4.Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
Recommended citation (GB/T 7714) | Huang, Nisha, Zhang, Yuxin, Tang, Fan, et al. DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024: 14.
APA | Huang, Nisha., Zhang, Yuxin., Tang, Fan., Ma, Chongyang., Huang, Haibin., ... & Xu, Changsheng. (2024). DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 14.
MLA | Huang, Nisha, et al. "DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2024): 14.
Ingestion method: OAI harvesting
Source: Institute of Computing Technology