Structurally Comparative Hinge Loss for Dependency-Based Neural Text Representation
Document Type: Journal Article
Authors | Wang KX (Wang Kexin)1,3; Zhou Y (Zhou Yu)1,2,3; Zhang JJ (Zhang Jiajun)1,3; Wang SN (Wang Shaonan)1,3; Zong CQ (Zong Chengqing)1,3
Journal | ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING
Publication Date | 2020-05
Issue | 4
Pages | 19
Keywords | Text representation; graph convolutional networks; loss function
ISSN | 2375-4699
DOI | 10.1145/3387633
Document Subtype | Journal Article
Abstract | Dependency-based graph convolutional networks (DepGCNs) are proven helpful for text representation in many natural language tasks. Almost all previous models are trained with cross-entropy (CE) loss, which maximizes the posterior likelihood directly. However, the contribution of dependency structures is not well considered by CE loss. As a result, the performance improvement gained from structure information can be limited, because the model fails to learn to rely on it. To address this challenge, we propose the novel structurally comparative hinge (SCH) loss function for DepGCNs. SCH loss aims to enlarge the margin gained by structural representations over non-structural ones. From the perspective of information theory, this is equivalent to increasing the conditional mutual information between the model's decision and the structure information, given the text. Our experimental results on both English and Chinese datasets show that by substituting SCH loss for CE loss on various tasks, for both induced structures and structures from an external parser, performance is improved without additional learnable parameters. Furthermore, the extent to which certain types of examples rely on the dependency structure can be measured directly by the learned margin, which results in better interpretability. In addition, through detailed analysis, we show that this structure margin has a positive correlation with task performance and structure induction of DepGCNs, and that SCH loss helps the model focus more on the shortest dependency path between entities. We achieve new state-of-the-art results on the TACRED, IMDB, and Zh. Literature datasets, even compared with ensemble and BERT baselines.
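The abstract describes the SCH loss only at a high level: a hinge that enlarges the margin of the structural (DepGCN) representation over a non-structural one on the gold label. Below is a minimal PyTorch sketch of one plausible form; the function name `sch_loss`, the choice of gold-label log-probabilities from paired classifiers as "scores", and the `margin` hyperparameter are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def sch_loss(struct_logits: torch.Tensor,
             plain_logits: torch.Tensor,
             labels: torch.LongTensor,
             margin: float = 1.0) -> torch.Tensor:
    """Hinge loss encouraging the structure-aware branch to beat the
    structure-agnostic branch on the gold label by at least `margin`.

    struct_logits: (batch, num_classes) from the DepGCN encoder
    plain_logits:  (batch, num_classes) from a non-structural encoder
    labels:        (batch,) gold class indices
    """
    # Gold-label log-probability under each branch (one plausible
    # notion of "score"; the paper may define it differently).
    struct_score = F.log_softmax(struct_logits, dim=-1).gather(
        1, labels.unsqueeze(1)).squeeze(1)
    plain_score = F.log_softmax(plain_logits, dim=-1).gather(
        1, labels.unsqueeze(1)).squeeze(1)
    # Penalize examples whose structural margin falls short of `margin`.
    return F.relu(margin - (struct_score - plain_score)).mean()
```

Per the abstract, SCH loss is substituted for CE loss during training, so a term like this would replace (rather than supplement) the usual cross-entropy objective on the structural branch; the per-example margin it induces is what the authors report using as an interpretability signal.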
Language | English
Source URL | http://ir.ia.ac.cn/handle/173211/39115
Collection | National Laboratory of Pattern Recognition_Natural Language Processing
Corresponding Author | Wang KX (Wang Kexin)
Affiliations | 1. University of Chinese Academy of Sciences, Beijing 100049, P. R. China; 2. Beijing Fanyu Technology Co., Ltd.; 3. National Laboratory of Pattern Recognition, Institute of Automation, CAS
Recommended Citation (GB/T 7714) | Wang KX, Zhou Y, Zhang JJ, et al. Structurally Comparative Hinge Loss for Dependency-Based Neural Text Representation[J]. ACM Transactions on Asian and Low-Resource Language Information Processing, 2020(4): 19.
APA | Wang, K. X., Zhou, Y., Zhang, J. J., Wang, S. N., & Zong, C. Q. (2020). Structurally comparative hinge loss for dependency-based neural text representation. ACM Transactions on Asian and Low-Resource Language Information Processing, (4), 19.
MLA | Wang, KX, et al. "Structurally Comparative Hinge Loss for Dependency-Based Neural Text Representation." ACM Transactions on Asian and Low-Resource Language Information Processing 4 (2020): 19.
Deposit Method: OAI harvesting
Source: Institute of Automation