Teacher Forcing
Williams and Zipser, 1989
In teacher forcing, the ground-truth output from the previous time step is fed as the input at the current time step during RNN training. This way the RNN learns to generate the next output in a sequence conditioned on the true previous value. That is, during LSTM training, the ground truth is used as the input at every step. This training scheme gives rise to the exposure bias problem.
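The input-selection logic can be sketched as follows. This is a minimal illustration, not a real RNN: `step` is a hypothetical stand-in for one forward pass of a decoder cell, and tokens are plain integers.

```python
def decode(step, target, start_token):
    """Teacher-forced decoding: feed the ground-truth token from the
    previous time step as the current input, regardless of what the
    model actually predicted."""
    inputs, outputs = start_token, []
    for t in range(len(target)):
        pred = step(inputs)   # model's prediction at this step
        outputs.append(pred)
        inputs = target[t]    # teacher forcing: use ground truth as next input
    return outputs

# toy "model" that predicts its input plus one
step = lambda tok: tok + 1
print(decode(step, target=[5, 6, 7], start_token=4))  # [5, 6, 7]
```

Note that the model's own predictions are computed but never fed back; only the ground-truth sequence drives the recurrence.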
Professor Forcing
Lamb et al., 2016
Professor Forcing uses adversarial training to encourage the free-running dynamics of the RNN to match its dynamics when trained conditioned on the previous ground-truth words.
Exposure Bias
Ranzato et al., 2016; Wiseman and Rush, 2016; Gu et al., 2017a
Specifically, the sentence decoder is trained to predict each word given the previous ground-truth words, while at test time caption generation is performed by greedy search or beam search, which predicts the next word based on the previously generated words, a regime that differs from training. Since the model has never been exposed to its own predictions, errors accumulate at test time.
It happens when a model is trained to maximize the likelihood given ground truth words but follows its own predictions during test inference.
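The test-time regime can be sketched in the same toy setting as above (`step` is again a hypothetical stand-in for a decoder cell). Here each input is the model's own previous prediction, so any systematic error compounds step by step:

```python
def free_running_decode(step, length, start_token):
    """Test-time greedy decoding: each input is the model's own previous
    prediction, so an early error propagates to every later step."""
    inputs, outputs = start_token, []
    for _ in range(length):
        pred = step(inputs)
        outputs.append(pred)
        inputs = pred  # feed the model's own output back in
    return outputs

# toy model with a systematic error: it predicts tok + 2 where the
# "true" successor would be tok + 1; the drift grows at every step
step = lambda tok: tok + 2
print(free_running_decode(step, 3, start_token=0))  # [2, 4, 6]
```

Under teacher forcing the same model would be corrected by the ground truth at every step; free-running, its error accumulates.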
Scheduled Sampling
Bengio et al., 2015
To address the exposure bias problem, scheduled sampling, i.e., randomly choosing between the previous ground-truth word and the previously generated word as the next input, has become the dominant training procedure for RNN-based models. However, it only mitigates exposure bias and does not largely solve it.
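The random selection can be sketched as a small change to the teacher-forced loop above (again with a hypothetical `step` function standing in for a decoder cell):

```python
import random

def scheduled_decode(step, target, start_token, p_truth):
    """Scheduled sampling: at each step, feed the ground-truth token with
    probability p_truth, otherwise feed back the model's own prediction."""
    inputs, outputs = start_token, []
    for t in range(len(target)):
        pred = step(inputs)
        outputs.append(pred)
        # coin flip between teacher forcing and free running
        inputs = target[t] if random.random() < p_truth else pred
    return outputs
```

With p_truth = 1.0 this reduces to teacher forcing, and with p_truth = 0.0 to fully free-running decoding; in Bengio et al.'s schedule, p_truth is decayed from 1 toward 0 over the course of training so the model is gradually exposed to its own predictions.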
Loss-Evaluation Mismatch
Ranzato et al., 2016
Specifically, language models are usually trained to minimize the cross-entropy loss at each time step, while at test time the generated captions are evaluated with sentence-level metrics, e.g., BLEU-n, CIDEr, SPICE, etc., which are non-differentiable and cannot be used directly as a training loss.
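The mismatch can be made concrete with two toy objectives. The per-step training loss is a function of the model's continuous probabilities, while the sentence-level metric (a crude unigram-precision stand-in for BLEU-1, simplified for illustration) is computed on discrete tokens and therefore has no gradient with respect to those probabilities:

```python
import math

def cross_entropy(probs, target_ids):
    """Per-time-step training loss: mean negative log-likelihood of each
    ground-truth token id under the model's distribution at that step."""
    return -sum(math.log(p[t]) for p, t in zip(probs, target_ids)) / len(target_ids)

def bleu1(candidate, reference):
    """Sentence-level unigram precision (a crude stand-in for BLEU-1):
    operates on discrete tokens, so it is non-differentiable."""
    matches = sum(1 for tok in candidate if tok in reference)
    return matches / len(candidate)

# differentiable in the probabilities:
print(cross_entropy([[0.5, 0.5]], [0]))        # 0.693... = log 2
# a function of discrete strings only:
print(bleu1(["a", "b", "c"], ["a", "c", "d"]))  # 0.666...
```

This gap is what motivates training with reinforcement-learning-style objectives (e.g., Ranzato et al., 2016) that optimize the sentence-level metric directly.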