『Simplified-Chinese Book』Joint Training for Neural Machine Translation (English Edition)

Store Item Code: 3549558
Category: Simplified-Chinese Books → Mainland Books → Textbooks
Author: Yong Cheng (程勇)
ISBN: 9787302561491
Publisher: Tsinghua University Press
Publication Date: 2020-08-01

Pages/Word Count: /
Format: 16mo; Binding: paperback

Price: HK$ 91.1


Editor's Recommendation:
Conventional neural machine translation uses a single directed model to translate from one language into another. Compared with such traditional training methods, the joint training presented in this book lets the independent directed models share information and interact with each other, so that the resulting models perform better. The book is suitable for researchers and students in computer science at universities and research institutes, as well as for engineers in machine intelligence and related fields.
Synopsis:
A standard neural machine translation system builds a single model that translates from the source language into the target language. In this book we propose several methods for jointly training two neural machine translation models, covering the following topics: 1. improving the attention model; 2. incorporating monolingual corpora; 3. improving pivot-based translation; 4. integrating bidirectional dependencies.
About the Author:
Yong Cheng graduated from Beijing Jiaotong University in 2012 and received his Ph.D. in engineering from Tsinghua University in 2017, when he joined Tencent as a senior researcher. His main research area is machine translation, and he has published more than ten papers at major international conferences such as ACL, IJCAI, and AAAI.
Contents
1 Neural Machine Translation 1
1.1 Introduction 1
1.2 Neural Machine Translation 4
References 8

2 Agreement-Based Joint Training for Bidirectional Attention-Based
Neural Machine Translation 11
2.1 Introduction 11
2.2 Agreement-Based Joint Training 12
2.3 Experiments 16
2.3.1 Setup 16
2.3.2 Comparison of Loss Functions 17
2.3.3 Results on Chinese-English Translation 18
2.3.4 Results on Chinese-English Alignment 18
2.3.5 Analysis of Alignment Matrices 19
2.3.6 Results on English-to-French Translation 21
2.4 Summary 22
References 22

3 Semi-supervised Learning for Neural Machine Translation 25
3.1 Introduction 25
3.2 Semi-supervised Learning for Neural Machine Translation 27
3.2.1 Supervised Learning 27
3.2.2 Autoencoders on Monolingual Corpora 27
3.2.3 Semi-supervised Learning 29
3.2.4 Training 30
3.3 Experiments 31
3.3.1 Setup 31
3.3.2 Effect of Sample Size k 32
3.3.3 Effect of OOV Ratio 34
3.3.4 Comparison with SMT 35
3.3.5 Comparison with Previous Work 36
3.4 Summary 38
References 39

4 Joint Training for Pivot-Based Neural Machine Translation 41
4.1 Introduction 41
4.2 Pivot-Based NMT 42
4.3 Joint Training for Pivot-Based NMT 45
4.3.1 Training Objective 45
4.3.2 Connection Terms 45
4.3.3 Training 46
4.4 Experiments 48
4.4.1 Setup 48
4.4.2 Results on the Europarl Corpus 49
4.4.3 Results on the WMT Corpus 50
4.4.4 Effect of Bridging Corpora 52
4.5 Summary 53
References 53

5 Joint Modeling for Bidirectional Neural Machine Translation with Contrastive Learning 55
5.1 Introduction 55
5.2 Unidirectional Neural Machine Translation 57
5.3 Bidirectional Neural Machine Translation 57
5.4 Decoding Strategies 61
5.5 Experiments 61
5.5.1 Setup 61
5.5.2 Effect of Translation Strategies 62
5.5.3 Comparison with SMT and Standard NMT 63
5.5.4 BLEU Scores Over Sentence Length 64
5.5.5 Comparison of Learning Curves 65
5.5.6 Analysis of Expected Embeddings 66
5.5.7 Results on English-German Translation 66
5.6 Summary 67
References 67

6 Related Work 69
6.1 Attentional Mechanisms in Neural Machine Translation 69
6.2 Capturing Bidirectional Dependencies 70
6.2.1 Capturing Bidirectional Dependencies 70
6.2.2 Agreement-Based Learning 70
6.3 Incorporating Additional Data Resources 71
6.3.1 Exploiting Monolingual Corpora for Machine Translation 71
6.3.2 Autoencoders in Unsupervised and Semi-supervised Learning 71
6.3.3 Machine Translation with Pivot Languages 72
6.4 Contrastive Learning 72
References 72

7 Conclusion 75
7.1 Conclusion 75
7.2 Future Directions 76
7.2.1 Joint Modeling 76
7.2.2 Joint Training 77
7.2.3 More Tasks 78
References 78
Excerpt
Machine translation has achieved great success in the past few decades. The emergence and development of neural machine translation (NMT) have pushed the performance and practicality of machine translation to new heights. Although NMT has obtained state-of-the-art results as a new paradigm, it still suffers from many drawbacks, some introduced by the new framework and some inherent in machine translation itself.
A standard NMT system builds a single translation model from the source language to the target language, and its modeling and training procedures are carried out independently, without interaction with other NMT models such as the inverse translation model. In this book, we propose approaches to jointly training two directional NMT models, covering the following topics:
1. Improving the attentional mechanism: The attentional mechanism has proved effective in capturing long-distance dependencies in NMT. However, due to the intricate structural divergence between natural languages, unidirectional attention-based models may capture only partial aspects of attentional regularities. We propose agreement-based joint training, which encourages the two complementary directional models to agree on the word alignment matrices they produce for the same training data.
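An illustrative sketch of what such an agreement-based objective can take as its form (the precise loss functions are compared in Sect. 2.3.2; the trade-off weight \lambda and disagreement measure \Delta below are hypothetical notation, not taken from the book):

```latex
% Joint log-likelihood of the two directional models on the parallel
% corpus D, minus a penalty for disagreement between their word
% alignment (attention) matrices on the same sentence pair.
J(\theta_{x \to y}, \theta_{y \to x}) =
    \sum_{(x,y) \in D} \Bigl[ \log P(y \mid x; \theta_{x \to y})
                            + \log P(x \mid y; \theta_{y \to x}) \Bigr]
  - \lambda \sum_{(x,y) \in D}
      \Delta\bigl( A_{x \to y}(x, y),\, A_{y \to x}(x, y)^{\top} \bigr)
```

Here $A_{x \to y}$ and $A_{y \to x}$ denote the alignment matrices of the two models; maximizing $J$ rewards likelihood on the parallel data while penalizing alignment disagreement.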
2. Incorporating monolingual corpora: NMT systems rely heavily on parallel corpora for parameter estimation. Since parallel corpora are usually limited in quantity, quality, and coverage, especially for low-resource languages, it is appealing to exploit monolingual corpora to improve NMT. We propose a semi-supervised approach that trains NMT models on the concatenation of labeled parallel corpora and unlabeled monolingual corpora. It uses an autoencoder to reconstruct monolingual sentences, in which the source-to-target and target-to-source translation models serve as the encoder and decoder, respectively.
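For a monolingual source sentence x, the autoencoder view described above reconstructs x through a latent target-language translation; a minimal sketch, with notation chosen here for illustration, is:

```latex
% Reconstruction probability of a monolingual source sentence x:
% the source-to-target model (encoder) maps x to a latent target
% sentence y, and the target-to-source model (decoder) maps y back
% to a reconstruction x'.
P(x' \mid x; \theta_{x \to y}, \theta_{y \to x})
  = \sum_{y} P(y \mid x; \theta_{x \to y}) \, P(x' \mid y; \theta_{y \to x})
```

The sum over all target sentences y is intractable, which is presumably why the experiments study the effect of a sample size k (Sect. 3.3.2).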
3. Improving pivot-based translation: NMT systems suffer from data scarcity for resource-scarce language pairs. Although this problem can be alleviated by using a pivot language to bridge the source and target languages, the source-to-pivot and pivot-to-target translation models are usually trained independently. In this work, we introduce a joint training algorithm for pivot-based NMT that connects the two models closely and enables them to interact with each other.
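A plausible shape for such a joint objective, writing z for the pivot language (the weight \lambda and connection term R are placeholders of my own; the actual connection terms are the subject of Sect. 4.3.2):

```latex
% Log-likelihoods of the source-to-pivot and pivot-to-target models
% on their separate parallel corpora, plus a connection term R that
% couples the two models through the pivot language z.
J(\theta_{x \to z}, \theta_{z \to y}) =
    \sum_{(x,z) \in D_{x,z}} \log P(z \mid x; \theta_{x \to z})
  + \sum_{(z,y) \in D_{z,y}} \log P(y \mid z; \theta_{z \to y})
  + \lambda \, R(\theta_{x \to z}, \theta_{z \to y})
```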
4. Integrating bidirectional dependencies: Standard NMT captures only unidirectional dependencies, modeling the translation procedure from source to target. Nevertheless, the inverse direction carries information that can reinforce the confidence of the translation process. We propose an end-to-end bidirectional NMT model that connects the source-to-target and target-to-source translation models, opening up the interaction of parameters between the two directional models. A contrastive learning approach is also adopted to further enhance information sharing.
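The contrastive component is described only at a high level here; a generic max-margin contrastive term of the kind commonly used in such settings (all notation below is hypothetical, not the book's) would look like:

```latex
% Generic max-margin contrastive term: the model score s of a true
% sentence pair (x, y) should exceed the score of a corrupted pair
% (x, y^-) by at least a margin \eta.
L_{\mathrm{ctr}} = \sum_{(x,y) \in D}
    \max\bigl( 0,\; \eta - s(x, y; \theta) + s(x, y^{-}; \theta) \bigr)
```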
This book not only presents four research works built on the idea of combining multiple directional NMT models but also covers the basic techniques of NMT and some potential research directions. It can help novice researchers enter the NMT field quickly and broaden their view of advanced developments in NMT.
Beijing, China Dr. Yong Cheng
June 2019

 

 

megBook.com.hk
Copyright © 2013 - 2024 (香港)大書城有限公司  All Rights Reserved.