|
| Authors | Gao, Chen1; Liu, Si2; Zhu, Defa1; Liu, Quan2; Cao, Jie3; He, Haoqian1; He, Ran3; Yan, Shuicheng4
|
| Publication Date | 2020
|
| Conference Dates | October 12–16, 2020
|
| Conference Location | Seattle, USA
|
| Abstract (English) | Compared with the widely studied Human-Object Interaction DETection (HOI-DET), no effort has been devoted to its inverse problem, i.e., generating an HOI scene image according to a given relationship triplet, to the best of our knowledge. We term this new task "Human-Object Interaction Image Generation" (HOI-IG). HOI-IG is a research-worthy task with great application prospects, such as online shopping, film production, and interactive entertainment. In this work, we introduce an InteractGAN to solve this challenging task. Our method is composed of two stages: (1) manipulating the posture of a given human image conditioned on a predicate; (2) merging the transformed human image and the object image into one realistic scene image while satisfying their expected relative position and ratio. Moreover, to address the large spatial misalignment issue that arises when fusing the content of two images into a reasonable spatial layout, we propose a Relation-based Spatial Transformer Network (RSTN) to adaptively process the images conditioned on their interaction. Extensive experiments on two challenging datasets demonstrate the effectiveness and superiority of our approach. We advocate for the image generation community to draw more attention to the new Human-Object Interaction Image Generation problem. To facilitate future research, our project will be released at: http://colalab.org/projects/InteractGAN. |
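The abstract's second stage relies on a spatial transformer that warps an image according to the relation between human and object. The paper's RSTN internals are not given here, so the following is only a minimal NumPy sketch of the generic spatial-transformer idea it builds on: a relation-dependent 2×3 affine matrix defines a sampling grid, and the image is resampled through that grid. The `theta` values (shrink and shift, loosely evoking how an object might be placed for a "ride"-like predicate) are invented for illustration.

```python
import numpy as np

def affine_grid(theta, h, w):
    """Build an (h, w, 2) sampling grid in normalized [-1, 1] coordinates,
    transformed by the 2x3 affine matrix theta (as in a spatial transformer)."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h),
                         np.linspace(-1, 1, w), indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (h, w, 3)
    return coords @ theta.T                                  # (h, w, 2)

def sample_nearest(img, grid):
    """Resample img at the grid's normalized locations (nearest neighbor)."""
    h, w = img.shape[:2]
    xs = np.clip(((grid[..., 0] + 1) * 0.5 * (w - 1)).round().astype(int), 0, w - 1)
    ys = np.clip(((grid[..., 1] + 1) * 0.5 * (h - 1)).round().astype(int), 0, h - 1)
    return img[ys, xs]

# Hypothetical relation-conditioned transform: scale down and shift the
# sampling window toward the lower half of the image. Values are made up.
theta = np.array([[0.5, 0.0, 0.0],
                  [0.0, 0.5, 0.5]])

img = np.arange(64, dtype=float).reshape(8, 8)
warped = sample_nearest(img, affine_grid(theta, 8, 8))
print(warped.shape)
```

In the actual RSTN, `theta` would be predicted by a network from the interaction (the predicate and the two images) rather than hard-coded; the differentiable bilinear version of `sample_nearest` is what allows end-to-end training.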
| Language | English
|
| Source URL | http://ir.ia.ac.cn/handle/173211/44728 |
| Collection | Institute of Automation — Center for Research on Intelligent Perception and Computing
|
| Corresponding Author | Liu, Si |
| Author Affiliations | 1. Institute of Information Engineering, Chinese Academy of Sciences 2. Beihang University 3. Institute of Automation, Chinese Academy of Sciences 4. Yitu Technology
|
Recommended Citation (GB/T 7714) |
Gao, Chen, Liu, Si, Zhu, Defa, et al. InteractGAN: Learning to Generate Human-Object Interaction[C]. In: . Seattle, USA. October 12–16, 2020.
|