Future-Guided Incremental Transformer for Simultaneous Translation

Simultaneous translation is a type of machine translation in which output is generated while the source sentence is still being read. It can be used for live subtitling or simultaneous interpretation.

However, current policies suffer from low training speed and lack guidance from future source information. A recently proposed method, the Future-Guided Incremental Transformer, addresses both weaknesses. The widely used wait-k policy, for instance, first reads k source tokens and then alternates between writing one target token and reading one more source token, as sketched below.
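To make the baseline concrete, here is a minimal sketch of the wait-k read/write schedule in Python. The `emit_next_token` callback is a hypothetical stand-in for the underlying NMT decoder; the real policy drives a trained model rather than a generic function.

```python
def wait_k_translate(source_stream, k, emit_next_token, eos="</s>"):
    """Minimal sketch of the wait-k policy: READ the first k source
    tokens, then alternate WRITE (emit one target token) and READ
    (consume one more source token) until the source is exhausted,
    after which the model finishes writing the translation."""
    prefix, target = [], []
    stream = iter(source_stream)
    source_done = False

    # READ the first k source tokens.
    for _ in range(k):
        try:
            prefix.append(next(stream))
        except StopIteration:
            source_done = True
            break

    # Alternate WRITE / READ until the decoder emits end-of-sentence.
    while True:
        token = emit_next_token(prefix, target)  # WRITE one target token
        if token == eos:
            break
        target.append(token)
        if not source_done:
            try:
                prefix.append(next(stream))      # READ one more source token
            except StopIteration:
                source_done = True
    return target
```

Note that at decoding step t the model has only consumed roughly k + t source tokens, which is exactly why it never sees future source context during training.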

Image credit: Pxhere, CC0 Public Domain

It uses an average embedding layer to summarize the consumed source information and avoid time-consuming recalculation of hidden states. The model's predictive ability is enhanced by implicitly embedding future information through knowledge distillation from a full-sentence teacher. The results show that training is accelerated about 28 times on average compared to the wait-k baseline, with improved translation quality on Chinese-English and German-English simultaneous translation tasks.
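As an illustration of the first idea, the sketch below shows how an average-embedding summary of the consumed source prefix might look in PyTorch. The class name, tensor shapes, and the O(1) incremental update are assumptions made for exposition; the paper's actual AEL sits inside an incremental Transformer and may differ in detail.

```python
import torch

class AverageEmbeddingLayer(torch.nn.Module):
    """Hypothetical sketch of an average embedding layer (AEL): the
    consumed source prefix is summarized by the running mean of its
    token embeddings, so extending the prefix by one token is a cheap
    update rather than a full recomputation of encoder states for
    every prefix length during training."""

    def __init__(self, vocab_size, d_model):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, d_model)

    def forward(self, prefix_ids):
        # prefix_ids: (batch, prefix_len) token ids of the consumed source.
        emb = self.embed(prefix_ids)   # (batch, prefix_len, d_model)
        return emb.mean(dim=1)         # (batch, d_model) prefix summary

    @staticmethod
    def extend(avg, n, new_token_emb):
        # Incremental O(1) update: mean after appending one token embedding.
        return (avg * n + new_token_emb) / (n + 1)
```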

Simultaneous translation (ST) starts translating synchronously while reading source sentences and is used in many online scenarios. The previous wait-k policy is concise and achieves good results in ST. However, the wait-k policy faces two weaknesses: low training speed caused by the recalculation of hidden states and a lack of future source information to guide training. To address the low training speed, we propose an incremental Transformer with an average embedding layer (AEL) to accelerate the calculation of the hidden states during training. For future-guided training, we propose a conventional Transformer as the teacher of the incremental Transformer and try to invisibly embed some future information in the model through knowledge distillation. We conducted experiments on Chinese-English and German-English simultaneous translation tasks and compared with the wait-k policy to evaluate the proposed method. Our method effectively increases the training speed by about 28 times on average at different k and implicitly embeds some predictive ability in the model, achieving better translation quality than the wait-k baseline.
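The future-guided part of training can be pictured as knowledge distillation on the output distributions: a full-sentence Transformer teacher, which sees the whole source, supervises the incremental student. The loss below is a generic distillation term in PyTorch; the temperature and any loss weighting are assumptions, not the paper's exact recipe.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Generic knowledge-distillation term: KL divergence between the
    full-sentence teacher's output distribution and the incremental
    student's, softened by a temperature. This sketches how future
    source information can be implicitly transferred to the student."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL(teacher || student), scaled by t^2 as in standard distillation.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)
```

In training, this term would be combined with the usual cross-entropy loss on the reference translation, so the student both fits the data and mimics the teacher's future-aware predictions.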

Link: https://arxiv.org/abs/2012.12465

