九尾天使

100 must-read papers for NLP research

2 answers

王小金Fighting

Accepted answer

In recent years deep learning research has gone deeper and deeper, with breakthrough progress in many fields, and neural networks built on the attention mechanism have become one of the hot topics in neural network research. Attention was first proposed in the visual domain — the idea probably dates back to the 1990s — but what really made it take off was the Google DeepMind paper "Recurrent Models of Visual Attention" [14], which added an attention mechanism to an RNN for image classification. Soon after, Bahdanau et al., in "Neural Machine Translation by Jointly Learning to Align and Translate" [1], used an attention-like mechanism to perform translation and alignment jointly on a machine translation task; their work is generally considered the first to bring attention into NLP. Similar attention-based RNN models then spread to all kinds of NLP tasks, and more recently, how to use attention inside CNNs has become a research hotspot as well.

Before turning to attention in NLP, let me briefly describe the idea of attention on images, using the representative paper "Recurrent Models of Visual Attention" [14]. Its motivation comes from human attention: when people look at an image, they do not take in every pixel at every position in one pass; mostly they focus on the specific parts they need, and they learn from what they have already seen where to look next. The paper's core model adds an attention component to a conventional RNN: at each step, the model uses the location l learned from the previous state, together with the current input image, to process only the attended patch of pixels instead of the whole image. The benefit is that far fewer pixels need processing, which reduces the complexity of the task. Attention applied to images is thus very similar to human attention. Now let us look at attention in NLP.

Bahdanau et al.'s paper [1] is usually counted as the first to use an attention mechanism in NLP. They applied it to neural machine translation (NMT). NMT is a typical sequence-to-sequence — that is, encoder-decoder — model: traditional NMT uses two RNNs, one that encodes the source sentence into a fixed-dimensional intermediate vector, and another that decodes from that vector into the target language. Their attention-based NMT instead connects the learned representation of every source word (traditional NMT keeps only the representation left after the last word) to the word currently being predicted; the connection is made by the attention module they designed. (In the decoder diagram I drew for the original post I only drew the connections for the first two target words — the later words all follow the same pattern.) Once the model is trained, the attention matrix directly yields an alignment matrix between the source and target languages. Concretely, they use a perceptron-style formula to relate the target word to each source word, then normalize the scores with a softmax function into a probability distribution, which is the attention matrix (a minimal code sketch of this step follows at the end of this answer). Judging from the results, the attention NMT (RNNsearch in their experiments) improves considerably over traditional NMT (RNNenc); its most distinctive strengths are that the alignment can be visualized and that long sentences are handled better.

The next landmark paper — Luong et al.'s "Effective Approaches to Attention-based Neural Machine Translation" — showed how attention can be extended inside RNNs and strongly promoted the later attention-based models in NLP. It proposes two attention mechanisms, a global one and a local one. The global mechanism follows the same idea as Bahdanau et al.'s — it processes all the words on the source side — but proposes several simple extended variants for computing the attention scores; in their final experiments the "general" scoring method worked best (a sketch of the dot/general/concat variants also follows below). The local version is designed to cut the cost of computing attention: rather than considering every source word, a prediction function first predicts the source position p_t to align with at the current decoding step, and only the words inside a context window around it are considered. The paper gives two prediction methods, local-m and local-p, and when computing the final attention matrix it multiplies the original weights by a Gaussian distribution centred on p_t (see the local-p sketch below). In the authors' experiments local attention beat global attention. To me the paper's biggest contributions are showing how the attention computation can be varied, plus the local attention method itself.

After that, attention-based RNN models became ubiquitous in NLP — not just in sequence-to-sequence problems but in all sorts of classification problems too. So can CNNs, every bit as popular in deep learning as RNNs, also use attention? "ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs" [13] proposes three ways of using attention in a CNN and is among the earlier exploratory work on the question. A traditional CNN for sentence-pair modelling processes one sentence per channel, learns a representation of each sentence, and feeds both into a classifier; before the classifier the two sentences never interact, so the authors designed attention mechanisms that connect the sentence pair across the CNN channels. The first method, ABCNN-1, applies attention before convolution: an attention matrix is used to compute attention feature maps for the sentence pair, which are fed into the convolution layer together with the original feature maps (a sketch of this construction closes this answer). The second, ABCNN-2, applies attention at pooling time: the post-convolution representations are re-weighted by attention before being pooled. The third simply combines the two. This paper gave us a blueprint for using attention in CNNs, and quite a lot of attention-based CNN work with good results has appeared since.

To sum up: attention in NLP can, I think, be viewed as a form of automatic weighting — it links any two modules you want to connect by weighting one against the other. The mainstream score functions all share the same shape: design a function that relates the target module m_t to the source module m_s, then normalize with a softmax function to obtain a probability distribution. Attention is now used very widely in NLP, and one big advantage is that the attention matrix can be visualized to show which parts the network was attending to while doing its task. Still, attention in NLP differs from human attention: it generally must compute over every candidate object and store the weights in an extra matrix, which actually adds overhead, rather than simply ignoring what it does not care about and processing only what it attends to, the way humans do.
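To make the perceptron-plus-softmax step concrete, here is a minimal NumPy sketch of Bahdanau-style additive attention. The parameter names and shapes (enc_states, dec_state, W_a, U_a, v_a) are my own illustrative assumptions, not code from the paper.

```python
# Bahdanau-style additive attention: score each encoder state against the
# current decoder state with a one-hidden-layer perceptron, softmax-normalize,
# and take the weighted sum as the context vector.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def additive_attention(dec_state, enc_states, W_a, U_a, v_a):
    # e_i = v_a . tanh(W_a s + U_a h_i)  -- one score per source position
    scores = np.array([v_a @ np.tanh(W_a @ dec_state + U_a @ h) for h in enc_states])
    weights = softmax(scores)           # one row of the attention matrix
    context = weights @ enc_states      # weighted sum of encoder states
    return weights, context

# toy example: 5 source positions, hidden size 8, attention size 16
rng = np.random.default_rng(0)
enc_states = rng.normal(size=(5, 8))
dec_state = rng.normal(size=8)
W_a, U_a, v_a = rng.normal(size=(16, 8)), rng.normal(size=(16, 8)), rng.normal(size=16)
weights, context = additive_attention(dec_state, enc_states, W_a, U_a, v_a)
print(weights.round(3), weights.sum())  # non-negative weights summing to 1
```

Stacking one such row per target word gives the source-target alignment matrix discussed above.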
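Likewise, a small sketch of the three global scoring variants (dot, general, concat) from Luong et al.; again the names and shapes are assumptions chosen for illustration.

```python
# The three "global" attention score functions: dot, general, concat.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def score(h_t, H_s, variant, W_a=None, v_a=None):
    """Return one unnormalized score per source state in H_s (S x d)."""
    if variant == "dot":      # score(h_t, h_s) = h_t . h_s
        return H_s @ h_t
    if variant == "general":  # score(h_t, h_s) = h_t . (W_a h_s)
        return H_s @ W_a.T @ h_t
    if variant == "concat":   # score(h_t, h_s) = v_a . tanh(W_a [h_t; h_s])
        return np.array([v_a @ np.tanh(W_a @ np.concatenate([h_t, h_s]))
                         for h_s in H_s])
    raise ValueError(variant)

rng = np.random.default_rng(1)
H_s, h_t = rng.normal(size=(5, 8)), rng.normal(size=8)  # 5 source states, dim 8
for variant, kw in [("dot", {}),
                    ("general", {"W_a": rng.normal(size=(8, 8))}),
                    ("concat", {"W_a": rng.normal(size=(16, 16)),
                                "v_a": rng.normal(size=16)})]:
    print(variant, softmax(score(h_t, H_s, variant, **kw)).round(3))
```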
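And a sketch of the local-p idea: predict the aligned source position p_t from the decoder state, compute ordinary attention weights, then damp them with a Gaussian centred on p_t (the paper sets sigma = D/2 for window half-width D). The toy prediction parameters W_p and v_p stand in for the paper's learned prediction network.

```python
# Luong-style local-p attention with Gaussian re-weighting around p_t.
import numpy as np

def local_p_attention(h_t, H_s, W_p, v_p, D=2):
    S = H_s.shape[0]
    # p_t = S * sigmoid(v_p . tanh(W_p h_t)): a real-valued source position
    p_t = S / (1.0 + np.exp(-(v_p @ np.tanh(W_p @ h_t))))
    scores = H_s @ h_t                  # dot scores, as in global attention
    e = np.exp(scores - scores.max())
    a = e / e.sum()
    sigma = D / 2.0                     # Gaussian width tied to the window size
    a *= np.exp(-((np.arange(S) - p_t) ** 2) / (2 * sigma ** 2))
    context = a @ H_s                   # context from the re-weighted states
    return p_t, a, context

rng = np.random.default_rng(2)
H_s, h_t = rng.normal(size=(10, 8)), rng.normal(size=8)
W_p, v_p = rng.normal(size=(8, 8)), rng.normal(size=8)
p_t, a, context = local_p_attention(h_t, H_s, W_p, v_p)
print(round(float(p_t), 2), a.round(3))  # weights fall off away from p_t
```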
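Finally, a sketch of the ABCNN-1 construction: an attention matrix between the two sentences' feature maps is projected into extra "attention feature map" channels before convolution. The 1/(1 + distance) match score follows the paper; the projection matrices W0 and W1 and all shapes are my reading of it, so treat this as a sketch rather than a reference implementation.

```python
# ABCNN-1: attention feature maps computed before the convolution layer.
import numpy as np

def match_score(x, y):
    # similarity used in the ABCNN paper: 1 / (1 + Euclidean distance)
    return 1.0 / (1.0 + np.linalg.norm(x - y))

def abcnn1_attention(F0, F1, W0, W1):
    """F0: d x s0 feature map of sentence 0; F1: d x s1 of sentence 1."""
    s0, s1 = F0.shape[1], F1.shape[1]
    A = np.array([[match_score(F0[:, i], F1[:, j]) for j in range(s1)]
                  for i in range(s0)])   # attention matrix, s0 x s1
    F0_att = W0 @ A.T                    # d x s0 attention feature map
    F1_att = W1 @ A                      # d x s1 attention feature map
    return A, F0_att, F1_att

rng = np.random.default_rng(3)
d, s0, s1 = 4, 6, 7
F0, F1 = rng.normal(size=(d, s0)), rng.normal(size=(d, s1))
W0, W1 = rng.normal(size=(d, s1)), rng.normal(size=(d, s0))
A, F0_att, F1_att = abcnn1_attention(F0, F1, W0, W1)
# Each attention map is stacked with its original map as a second input channel.
print(A.shape, F0_att.shape, F1_att.shape)
```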


catcat654321

Let me recommend the most important papers in the NLP field, based on the list produced by 学术范's standard evaluation system:

1. Deep contextualized word representations

Abstract: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.

Full text: Deep contextualized word representations — 学术范

2. GloVe: Global Vectors for Word Representation

Abstract: Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.

(The paper's training objective is reproduced after this list.)

Full text: GloVe: Global Vectors for Word Representation — 学术范

3. SQuAD: 100,000+ Questions for Machine Comprehension of Text

Abstract: We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL

Full text: SQuAD: 100,000+ Questions for Machine Comprehension of Text — 学术范

4. Sequence to Sequence Learning with Neural Networks

Abstract: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.

Full text: Sequence to Sequence Learning with Neural Networks — 学术范

5. The Stanford CoreNLP Natural Language Processing Toolkit

Abstract: We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage.

Full text: The Stanford CoreNLP Natural Language Processing Toolkit — 学术范

6. Distributed Representations of Words and Phrases and their Compositionality

Abstract: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.

(The negative-sampling objective is also reproduced after this list.)

Full text: Distributed Representations of Words and Phrases and their Compositionality — 学术范

7. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank

Abstract: Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.

Full text: Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank — 学术范

I hope this helps. 学术范 is a newly launched one-stop academic discussion community, with a wealth of computer-science literature and up-to-date research news, handy tools for reading and managing references, and countless like-minded students and researchers to hold lively, high-quality academic discussions with. Come join us!
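As promised above, two short formula notes. First, GloVe (#2): the abstract calls the model a "global log-bilinear regression"; concretely, the paper minimizes a weighted least-squares objective over the nonzero co-occurrence counts X_ij, where w_i and \tilde{w}_j are word and context vectors, b_i and \tilde{b}_j their biases, and V the vocabulary size:

```latex
J = \sum_{i,j=1}^{V} f\!\left(X_{ij}\right)
    \left( w_i^{\top}\tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^{2},
\qquad
f(x) = \begin{cases}
  \left( x / x_{\max} \right)^{\alpha} & \text{if } x < x_{\max} \\
  1 & \text{otherwise.}
\end{cases}
```

The weighting f(x) is what lets training touch only the nonzero co-occurrence entries, as the abstract claims.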
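Second, the negative-sampling objective from the skip-gram paper (#6): for a centre word w_I and an observed context word w_O, each training pair maximizes

```latex
\log \sigma\!\left( {v'}_{w_O}^{\top} v_{w_I} \right)
+ \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}
  \left[ \log \sigma\!\left( -{v'}_{w_i}^{\top} v_{w_I} \right) \right],
```

where k negative words w_i are drawn from a noise distribution P_n(w) (the paper uses the unigram distribution raised to the 3/4 power), replacing the full hierarchical softmax with k + 1 binary classifications.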

