      Natural Language Processing and Information Retrieval Sharing Platform

      Pay Less Attention with Lightweight and Dynamic Convolutions

      NLPIR SEMINAR Y2019#10

      INTRO

      In the new semester, our lab, the Web Search Mining and Security Lab, will hold an academic seminar every Monday. Each week, a keynote speaker will share his or her understanding of papers related to his or her research.

      Arrangement

      This week’s seminar is organized as follows:

      1. The seminar takes place at 1 p.m. on Monday at Zhongguancun Technology Park, Building 5, Room 1306.
      2. The lecturer is Zhaoyou Liu, and the paper's title is Pay Less Attention with Lightweight and Dynamic Convolutions.
      3. The seminar will be hosted by Gang Wang.
      4. The paper is attached to this announcement; please download it in advance.

      Everyone interested in this topic is welcome to join us. The abstract of this week’s paper follows.

      Pay Less Attention with Lightweight and Dynamic Convolutions

      Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin, Michael Auli

      Abstract

      Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results. Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT’14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU.
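
      The mechanism described above is simple enough to sketch directly. The following is a minimal, single-head NumPy illustration of a causal dynamic convolution, assuming a hypothetical projection matrix W_k: a kernel of width k is predicted from the current time step alone, softmax-normalized, and applied to a local window of the input, so the number of operations grows linearly with the sequence length. It is only a sketch of the idea in the abstract, not the authors' implementation, which uses multi-head depthwise convolutions with weight sharing.

      import numpy as np

      def softmax(x, axis=-1):
          e = np.exp(x - x.max(axis=axis, keepdims=True))
          return e / e.sum(axis=axis, keepdims=True)

      def dynamic_convolution(x, W_k):
          """Single-head causal dynamic convolution (illustrative sketch).

          x   : (T, d) input sequence, T time steps, d channels
          W_k : (d, k) hypothetical projection that predicts a width-k kernel
                from the current time step only
          Returns a (T, d) output; each step mixes a fixed window of k past
          positions, so the cost is O(T * k * d), linear in T.
          """
          T, d = x.shape
          k = W_k.shape[1]
          x_pad = np.concatenate([np.zeros((k - 1, d)), x], axis=0)  # causal left-padding
          out = np.empty_like(x)
          for t in range(T):
              kernel = softmax(x[t] @ W_k)      # kernel depends only on x_t, normalized over its width
              window = x_pad[t:t + k]           # the k most recent positions, including x_t
              out[t] = kernel @ window          # weighted sum over the local context
          return out

      # Toy usage
      rng = np.random.default_rng(0)
      x = rng.standard_normal((10, 8))          # T=10 steps, d=8 channels
      W_k = 0.1 * rng.standard_normal((8, 3))   # predicts width-3 kernels
      print(dynamic_convolution(x, W_k).shape)  # (10, 8)

      Replacing the pairwise comparison between every context element and the current time step with this input-dependent but fixed-width kernel is what removes the quadratic term in the sequence length.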
