AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS*
Document type | Journal article
Authors | Yan, Liang (1,2); Zhou, Tao (3)
Journal | JOURNAL OF COMPUTATIONAL MATHEMATICS
Publication date | 2021
Volume | 39
Issue | 6
Pages | 848-864
Keywords | Bayesian inverse problems; Deep neural network; Markov chain Monte Carlo
ISSN | 0254-9409
DOI | 10.4208/jcm.2102-m2020-0339
Abstract | Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repetitive evaluations of the expensive forward model and its gradient. In this work, we present a novel goal-oriented deep neural network (DNN) surrogate approach to substantially reduce the computational burden of RTO. In particular, we propose to draw the training points for the DNN surrogate from a local approximate posterior distribution, yielding a flexible and efficient sampling algorithm that converges to the direct RTO approach. We present a Bayesian inverse problem governed by elliptic PDEs to demonstrate the computational accuracy and efficiency of our DNN-RTO approach, which shows that DNN-RTO can significantly outperform the traditional RTO.
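Since the record contains no algorithmic detail beyond the abstract, the following is a minimal illustrative sketch of the plain RTO independence sampler that the paper accelerates, written for a toy nonlinear forward model under a whitened Gaussian prior and Gaussian noise. The forward map `F`, the dimensions, and all variable names are assumptions for illustration and do not reproduce the paper's PDE-governed problem or its DNN surrogate; in the DNN-RTO approach, the evaluations of `H` and its Jacobian inside the optimization loop would instead be served by a cheap network surrogate trained on points drawn from a local approximate posterior.

```python
# Illustrative sketch of the standard RTO independence sampler (not the paper's
# DNN-accelerated variant). All model choices below are assumptions for the example.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

n, m = 2, 5                       # parameter and data dimensions
sigma = 0.1                       # observation noise standard deviation
x_obs = np.linspace(0.0, 1.0, m)  # observation locations

def F(u):
    # toy nonlinear forward model: exponential decay observed at x_obs
    return u[0] * np.exp(-u[1] * x_obs)

u_true = np.array([1.0, 2.0])
y = F(u_true) + sigma * rng.standard_normal(m)

def H(u):
    # stacked residual (whitened data misfit and whitened prior), so that the
    # posterior density is proportional to exp(-0.5 * ||H(u)||^2)
    return np.concatenate([(F(u) - y) / sigma, u])

def jac_H(u, eps=1e-6):
    # central finite-difference Jacobian of H; in the accelerated scheme a DNN
    # surrogate would supply this (and H itself) cheaply
    J = np.empty((m + n, n))
    for j in range(n):
        du = np.zeros(n)
        du[j] = eps
        J[:, j] = (H(u + du) - H(u - du)) / (2.0 * eps)
    return J

# linearization point: MAP estimate, and Q from a thin QR of the Jacobian there
u_map = least_squares(H, np.ones(n)).x
Q, _ = np.linalg.qr(jac_H(u_map))

def rto_sample():
    # randomize: perturb the stacked residual with a standard normal draw;
    # optimize: solve the projected nonlinear least-squares problem
    xi = rng.standard_normal(m + n)
    u_star = least_squares(lambda u: Q.T @ (H(u) - xi), u_map).x
    # log weight correcting the RTO proposal toward the true posterior
    Hs = H(u_star)
    _, logdet = np.linalg.slogdet(Q.T @ jac_H(u_star))
    log_w = -logdet - 0.5 * Hs @ Hs + 0.5 * np.sum((Q.T @ Hs) ** 2)
    return u_star, log_w

# Metropolis-Hastings independence sampler driven by RTO proposals
samples = []
u_cur, lw_cur = rto_sample()
for _ in range(2000):
    u_prop, lw_prop = rto_sample()
    if np.log(rng.uniform()) < lw_prop - lw_cur:
        u_cur, lw_cur = u_prop, lw_prop
    samples.append(u_cur)

print("posterior mean estimate:", np.mean(samples, axis=0))
```

In this sketch, each proposal costs one nonlinear least-squares solve with repeated evaluations of `H` and `jac_H`, which is the expense the goal-oriented DNN surrogate of the paper is designed to remove.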
Funding | NSF of China [11771081]; NSF of China [11822111]; NSF of China [11688101]; NSF of China [11731006]; Science Challenge Project, China [TZ2018001]; Zhishan Young Scholar Program of SEU, China; National Key R&D Program of China [2020YFA0712000]; Strategic Priority Research Program of Chinese Academy of Sciences [XDA25000404]; Youth Innovation Promotion Association (CAS), China
WOS Research Area | Mathematics
Language | English
WOS Accession Number | WOS:000711024000003
Publisher | GLOBAL SCIENCE PRESS
Source URL | http://ir.amss.ac.cn/handle/2S8OKBNM/59477
Collection | Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Corresponding author | Zhou, Tao
Author affiliations | 1. Southeast Univ, Sch Math, Nanjing 210096, Peoples R China; 2. Nanjing Ctr Appl Math, Nanjing 211135, Peoples R China; 3. Chinese Acad Sci, Acad Math & Syst Sci, LSEC, Inst Computat Math & Sci Engn Comp, Beijing 100190, Peoples R China
Recommended citation (GB/T 7714) | Yan, Liang, Zhou, Tao. AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS*[J]. JOURNAL OF COMPUTATIONAL MATHEMATICS, 2021, 39(6): 848-864.
APA | Yan, Liang, & Zhou, Tao. (2021). AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS*. JOURNAL OF COMPUTATIONAL MATHEMATICS, 39(6), 848-864.
MLA | Yan, Liang, et al. "AN ACCELERATION STRATEGY FOR RANDOMIZE-THEN-OPTIMIZE SAMPLING VIA DEEP NEURAL NETWORKS*". JOURNAL OF COMPUTATIONAL MATHEMATICS 39.6 (2021): 848-864.
Deposit method: OAI harvesting
Source: Academy of Mathematics and Systems Science