The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show that the Transformer is superior in quality while being more parallelizable and requiring significantly less time to train.
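For readers unfamiliar with the mechanism the abstract refers to, the sketch below illustrates scaled dot-product attention, the core operation the Transformer relies on in place of recurrence and convolutions. It is a minimal, illustrative NumPy rendering of Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, not the paper's reference implementation; the function name and toy dimensions are chosen here for exposition.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Illustrative sketch: Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    # Similarity scores between queries and keys, scaled by sqrt(d_k)
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted sum of the value vectors
    return weights @ V

# Toy usage (hypothetical shapes): 4 positions, model dimension 8, self-attention
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

Because every position attends to every other position in a single matrix operation, the computation parallelizes across the whole sequence, which is the property the abstract credits for the reduced training time relative to recurrent models.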