The Transformer model was first introduced by a Google team in the 2017 paper "Attention Is All You Need" and fundamentally reshaped the field of natural language processing. The architecture's biggest innovation is that it abandons recurrent neural networks (RNNs) and convolutional neural networks (CNNs) entirely, relying solely on attention mechanisms to process sequence data.
In traditional sequence models, RNNs suffer from vanishing gradients and are difficult to parallelize, while CNNs struggle to capture long-range dependencies. The Transformer addresses both problems through self-attention: it can efficiently model the relationship between any two positions in a sequence, and it makes full use of the parallel computing power of modern GPUs.
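At the heart of self-attention is scaled dot-product attention. As a point of reference before we look at the full implementation, a minimal standalone sketch (the function name and tensor shapes here are illustrative assumptions, not part of the original code) looks like this:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: [batch, heads, seq_len, head_dim]
    d_k = q.size(-1)
    # Similarity between every query and every key, scaled by sqrt(d_k)
    scores = torch.matmul(q, k.transpose(-2, -1)) / (d_k ** 0.5)
    if mask is not None:
        # Positions where mask == 0 are excluded from attention
        scores = scores.masked_fill(mask == 0, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    # Weighted sum of the value vectors
    return torch.matmul(attn, v), attn
```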
Multi-head attention is the Transformer's core innovation: it lets the model attend to different positions of the input sequence simultaneously and learn several distinct attention patterns. In the implementation, we first define the MultiHeadAttention class:
```python
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, hid_dim, n_heads):
        super(MultiHeadAttention, self).__init__()
        self.hid_dim = hid_dim
        self.n_heads = n_heads
        # Each head works on an equal slice of the hidden dimension
        assert hid_dim % n_heads == 0
        # Linear projections for queries, keys and values
        self.w_q = nn.Linear(hid_dim, hid_dim)
        self.w_k = nn.Linear(hid_dim, hid_dim)
        self.w_v = nn.Linear(hid_dim, hid_dim)
        self.fc = nn.Linear(hid_dim, hid_dim)  # output projection
        # sqrt(d_k), used to scale the attention scores
        self.scale = torch.sqrt(torch.FloatTensor([hid_dim // n_heads]))
```
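The class above only defines the constructor. A minimal sketch of how the forward pass is typically completed, splitting the projections into heads, applying scaled dot-product attention, then concatenating and projecting (the exact wiring is an assumption consistent with the constructor, not the original code):

```python
    def forward(self, query, key, value, mask=None):
        # query/key/value: [batch, seq_len, hid_dim]
        bsz = query.size(0)
        head_dim = self.hid_dim // self.n_heads

        # Project and split into heads: [batch, n_heads, seq_len, head_dim]
        q = self.w_q(query).view(bsz, -1, self.n_heads, head_dim).transpose(1, 2)
        k = self.w_k(key).view(bsz, -1, self.n_heads, head_dim).transpose(1, 2)
        v = self.w_v(value).view(bsz, -1, self.n_heads, head_dim).transpose(1, 2)

        # Scaled dot-product attention
        scores = torch.matmul(q, k.transpose(-2, -1)) / self.scale.to(q.device)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)

        # Concatenate the heads and apply the output projection
        out = torch.matmul(attn, v).transpose(1, 2).contiguous().view(bsz, -1, self.hid_dim)
        return self.fc(out)
```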
A few key points to note here:
- hid_dim is the dimension of each token embedding, and n_heads is the number of attention heads.
- hid_dim must be divisible by n_heads so that every attention head operates on the same number of dimensions.
- The scale factor rescales the dot-product attention scores, which keeps the softmax inputs from growing too large and prevents vanishing gradients.

Each Transformer layer also contains a feed-forward network (Feed Forward Network), consisting of two linear transformations and a ReLU activation:
```python
class Feedforward(nn.Module):
    def __init__(self, d_model, d_ff, dropout=0.1):
        super(Feedforward, self).__init__()
        # Expand from d_model to the larger inner dimension d_ff
        self.linear1 = nn.Linear(d_model, d_ff)
        self.dropout = nn.Dropout(dropout)
        # Project back down to d_model
        self.linear2 = nn.Linear(d_ff, d_model)
```
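The forward method is omitted above; a minimal sketch under the usual convention (linear, ReLU, dropout, linear) would be:

```python
    def forward(self, x):
        # Position-wise: the same two-layer MLP is applied at every sequence position
        return self.linear2(self.dropout(torch.relu(self.linear1(x))))
```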
This feed-forward network has a few notable properties:
- The first linear layer expands d_model to d_ff, which is usually much larger.
- The second linear layer projects d_ff back down to d_model.

Because the Transformer has no recurrent structure, it needs additional positional information to understand the order of elements in a sequence. We use sine and cosine functions to generate the positional encodings:
```python
import math

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        # Precompute the encodings for up to max_len positions
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))
        # Sine on even dimensions, cosine on odd dimensions
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)  # add a batch dimension: [1, max_len, d_model]
        # Register as a buffer: saved with the model, but not a trainable parameter
        self.register_buffer("pe", pe)
```
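The forward pass (not shown above) typically just adds the precomputed encodings to the input embeddings; a minimal sketch:

```python
    def forward(self, x):
        # x: [batch, seq_len, d_model]; add the encoding for the first seq_len positions
        x = x + self.pe[:, : x.size(1)]
        return self.dropout(x)
```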
Key properties of this positional encoding:
- It is fixed rather than learned, so it adds no trainable parameters.
- Each dimension corresponds to a sinusoid with a different wavelength, so every position receives a unique encoding.
- The sinusoidal form makes it easy for the model to reason about relative positions.
The encoder is a stack of identical layers, each containing a multi-head self-attention mechanism and a feed-forward network:
```python
class EncoderLayer(nn.Module):
    def __init__(self, d_model, n_heads, d_ff, dropout=0.1):
        super(EncoderLayer, self).__init__()
        self.self_attn = MultiHeadAttention(d_model, n_heads)
        self.feedforward = Feedforward(d_model, d_ff, dropout)
        # One LayerNorm per sub-layer (self-attention and feed-forward)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)
```
The key design of the encoder layer: each sub-layer (self-attention and feed-forward) is wrapped in a residual connection followed by layer normalization, with dropout applied to the sub-layer output, as sketched in the forward pass below.
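The forward pass is not included in the class above; a minimal sketch using residual connections and post-norm (this wiring is an assumption consistent with the modules defined in the constructor) could look like:

```python
    def forward(self, src, src_mask=None):
        # Sub-layer 1: self-attention with a residual connection and LayerNorm
        attn_out = self.self_attn(src, src, src, src_mask)
        src = self.norm1(src + self.dropout(attn_out))
        # Sub-layer 2: feed-forward network with a residual connection and LayerNorm
        ff_out = self.feedforward(src)
        src = self.norm2(src + self.dropout(ff_out))
        return src
```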
The decoder layer is more complex and contains three kinds of sub-layers:
```python
class DecoderLayer(nn.Module):
    def __init__(self, d_model, n_heads, d_ff, dropout=0.1):
        super(DecoderLayer, self).__init__()
        # Masked self-attention over the target sequence
        self.self_attn = MultiHeadAttention(d_model, n_heads)
        # Encoder-decoder (cross) attention over the encoder output
        self.enc_attn = MultiHeadAttention(d_model, n_heads)
        self.feedforward = Feedforward(d_model, d_ff, dropout)
        # One LayerNorm per sub-layer
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)
```
What makes the decoder special: its self-attention must be masked so that each position can only attend to earlier target positions, and the additional encoder-decoder attention lets every decoder position attend to the full encoder output. Each of the three sub-layers again gets its own residual connection and LayerNorm, as in the sketch below.
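A minimal sketch of the decoder layer's forward pass (again an assumption consistent with the constructor; its signature matches how the full model calls it later):

```python
    def forward(self, trg, enc_src, trg_mask=None, src_mask=None):
        # Sub-layer 1: masked self-attention over the target sequence
        attn_out = self.self_attn(trg, trg, trg, trg_mask)
        trg = self.norm1(trg + self.dropout(attn_out))
        # Sub-layer 2: cross-attention; queries come from the decoder,
        # keys and values from the encoder output
        cross_out = self.enc_attn(trg, enc_src, enc_src, src_mask)
        trg = self.norm2(trg + self.dropout(cross_out))
        # Sub-layer 3: feed-forward network
        ff_out = self.feedforward(trg)
        trg = self.norm3(trg + self.dropout(ff_out))
        return trg
```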
Finally, we combine all of the components into the complete Transformer model:
```python
class Transformer(nn.Module):
    def __init__(self, vocab_size, d_model, n_heads, n_encoder_layers,
                 n_decoder_layers, d_ff, dropout=0.1):
        super(Transformer, self).__init__()
        # Shared token embedding for source and target
        self.embedding = nn.Embedding(vocab_size, d_model)
        self.positional_encoding = PositionalEncoding(d_model)
        self.encoder_layers = nn.ModuleList(
            [EncoderLayer(d_model, n_heads, d_ff, dropout) for _ in range(n_encoder_layers)])
        self.decoder_layers = nn.ModuleList(
            [DecoderLayer(d_model, n_heads, d_ff, dropout) for _ in range(n_decoder_layers)])
        # Project the decoder output back to vocabulary logits
        self.fc_out = nn.Linear(d_model, vocab_size)
        self.dropout = nn.Dropout(dropout)
```
Key parameters:
- vocab_size: vocabulary size
- d_model: model dimension (token embedding dimension)
- n_heads: number of attention heads
- n_encoder_layers: number of encoder layers
- n_decoder_layers: number of decoder layers
- d_ff: hidden dimension of the feed-forward network

The model's forward pass proceeds in several steps:
```python
    # forward method of the Transformer class defined above
    def forward(self, src, trg, src_mask, trg_mask):
        # Token embedding and positional encoding
        src = self.embedding(src)
        src = self.positional_encoding(src)
        trg = self.embedding(trg)
        trg = self.positional_encoding(trg)
        # Encoder stack
        for layer in self.encoder_layers:
            src = layer(src, src_mask)
        # Decoder stack (attends to the encoder output)
        for layer in self.decoder_layers:
            trg = layer(trg, src, trg_mask, src_mask)
        # Project to vocabulary logits
        output = self.fc_out(trg)
        return output
```
Key points about the forward pass: the source and target sequences share the same embedding and positional encoding; the encoder output is passed to every decoder layer through encoder-decoder attention; and the final linear layer produces raw logits over the vocabulary for each target position (a softmax or cross-entropy loss is applied outside the model).
We can create a Transformer instance with the following code:
```python
vocab_size = 10000
d_model = 128
n_heads = 8
n_encoder_layers = 6
n_decoder_layers = 6
d_ff = 2048
dropout = 0.1

transformer_model = Transformer(vocab_size, d_model, n_heads,
                                n_encoder_layers, n_decoder_layers,
                                d_ff, dropout)
```
The Transformer has to handle sequences of different lengths, so masks are required:
```python
# Define the inputs
src = torch.randint(0, vocab_size, (32, 10))  # source-language sentences
trg = torch.randint(0, vocab_size, (32, 20))  # target-language sentences

# Create the masks (token id 0 is treated as padding)
src_mask = (src != 0).unsqueeze(1).unsqueeze(2)
trg_mask = (trg != 0).unsqueeze(1).unsqueeze(2)
```
The role of the masks: src_mask marks the padding positions in the source so that attention ignores them, and trg_mask does the same for the target. Note that a padding mask alone is not enough on the decoder side; during training it is normally combined with a look-ahead mask so that each position cannot attend to future tokens, as sketched below.
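A sketch of combining the target padding mask with a look-ahead mask, running the model, and computing a training loss (the shift-by-one target alignment and ignore_index=0 are standard conventions assumed here, not shown in the original code):

```python
# Lower-triangular look-ahead mask: position i may only attend to positions <= i
trg_len = trg.size(1)
look_ahead = torch.tril(torch.ones(trg_len, trg_len)).bool()
# Broadcasts with the padding mask to [batch, 1, trg_len, trg_len]
trg_mask = trg_mask & look_ahead

# Forward pass: output has shape [batch, trg_len, vocab_size]
output = transformer_model(src, trg, src_mask, trg_mask)

# Cross-entropy loss: predict token t+1 from tokens <= t, ignoring padding
criterion = nn.CrossEntropyLoss(ignore_index=0)
logits = output[:, :-1, :].reshape(-1, vocab_size)
labels = trg[:, 1:].reshape(-1)
loss = criterion(logits, labels)
```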
There are several practical points to keep in mind when training a Transformer.
Memory consumption grows rapidly with long sequences, because self-attention scales quadratically with sequence length. Common mitigations include truncating or splitting long sequences, gradient checkpointing, mixed-precision training, and sparse or otherwise more efficient attention variants.
Training can be unstable in the early stages; learning-rate warmup, gradient clipping, and a smaller initial learning rate are the usual remedies.
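As one concrete example, a simple linear warmup schedule using PyTorch's LambdaLR (the optimizer settings and the 4000-step warmup length are illustrative choices, not from the original text):

```python
optimizer = torch.optim.Adam(transformer_model.parameters(),
                             lr=1e-4, betas=(0.9, 0.98), eps=1e-9)

warmup_steps = 4000

def lr_lambda(step):
    # Linearly ramp the learning rate up over the first warmup_steps updates
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)

# Inside the training loop, call scheduler.step() after each optimizer.step()
```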
If the model performs well on the training set but poorly on the validation set, it is overfitting: increase dropout, add weight decay, train on more data, or apply early stopping.
For inference, the model can also be compiled with torch.jit.script. Many later models build on the original Transformer, including encoder-only designs such as BERT, decoder-only designs such as GPT, and encoder-decoder designs such as T5.
Beyond text, the Transformer has also been applied to other domains, such as computer vision (the Vision Transformer), speech recognition, and protein sequence modeling.