A TensorFlow (skip-gram) implementation of word2vec

忘是亡心i · 2023-06-02 09:12

For background on word2vec, I recommend the article https://www.cnblogs.com/guoyaohua/p/9240336.html

The code is based on https://github.com/eecrazy/word2vec_chinese_annotation

I fixed the parts that were wrong and added some comments.

The code runs in a Jupyter notebook.

```python
from __future__ import print_function  # use the Python 3 print function regardless of the Python version
import collections
import math
import numpy as np
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from sklearn.manifold import TSNE
%matplotlib inline
```

Download the file text8.zip, which contains a large number of words. The official address is http://mattmahoney.net/dc/text8.zip.
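If the file is not already present locally, it can be fetched directly from that address. A minimal sketch using only the standard library (not in the original post; the official file is roughly 31 MB):

```python
import os
import urllib.request  # on Python 2 this would be urllib instead

url = 'http://mattmahoney.net/dc/text8.zip'
filename = 'text8.zip'
if not os.path.exists(filename):
    # download the archive once and keep it next to the notebook
    filename, _ = urllib.request.urlretrieve(url, filename)
print('%s: %d bytes' % (filename, os.stat(filename).st_size))
```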

```python
filename = 'text8.zip'

def read_data(filename):
    """Extract the first file enclosed in a zip file as a list of words"""
    with zipfile.ZipFile(filename) as f:
        # the archive contains a single file, text8, holding all the words
        # f.read returns bytes; tf.compat.as_str converts the bytes to a string
        # data is the full list of words
        data = tf.compat.as_str(f.read(f.namelist()[0])).split()
    return data

# words contains every word of the corpus
words = read_data(filename)
print('Data size %d' % len(words))
```

Build the word-to-index and index-to-word dictionaries and convert the words into dictionary indices. The vocabulary size is set to 50,000, which still leaves more than 400,000 word tokens marked as unknown. (A quick sanity check on the resulting dictionaries follows the code below.)

```python
# vocabulary size
vocabulary_size = 50000

def build_dataset(words):
    # 'UNK' stands for unknown, i.e. words outside the vocabulary; a list is used
    # here instead of a tuple because the UNK count is assigned later
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    # word -> index map
    dictionary = dict()
    for word, _ in count:
        # each new word gets index len(dictionary); indices start at 0
        dictionary[word] = len(dictionary)
    # the whole text8 corpus represented as indices
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0  # dictionary['UNK']
            unk_count = unk_count + 1
        data.append(index)
    count[0][1] = unk_count
    # index -> word map
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
# delete to free memory
del words  # Hint to reduce memory.
```
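As a quick sanity check on the dictionaries (not in the original post): the most frequent real word should sit at index 1, right after 'UNK' at index 0, and the two dictionaries should be inverses of each other.

```python
most_common_word, most_common_count = count[1]   # on text8 this is almost certainly 'the'
idx = dictionary[most_common_word]
print(most_common_word, most_common_count, idx)       # expected index: 1
print(reverse_dictionary[idx] == most_common_word)    # True
print(reverse_dictionary[data[0]])                    # first word of the corpus ('anarchism')
```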

The function that generates training batches:

```python
data_index = 0

# num_skips is the total number of context words drawn from the window on both sides;
# it can be smaller than 2*skip_window
# the span is [ skip_window target skip_window ]
# typically num_skips = 2*skip_window
def generate_batch(batch_size, num_skips, skip_window):
    global data_index
    # two sanity checks
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    # initialize batch and labels, both integer arrays
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)  # note the shape of labels
    span = 2 * skip_window + 1  # [ skip_window target skip_window ]
    # buffer is very handy: it always holds the current span of words and keeps sliding
    # forward, and buffer[skip_window] is always the center word
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    # number of center words needed: each center word yields num_skips samples, e.g. with
    # num_skips=2 a target word w produces the two samples (w, left_w) and (w, right_w),
    # which together describe the context of w
    center_words_count = batch_size // num_skips
    for i in range(center_words_count):
        # skip_window is exactly the position of the center word inside buffer
        target = skip_window  # target label at the center of the buffer
        targets_to_avoid = [skip_window]
        for j in range(num_skips):
            # pick a position in the span that is neither the center nor one already chosen
            # (t is used here so the comprehension does not shadow the outer loop variable i)
            target = random.choice([t for t in range(0, span) if t not in targets_to_avoid])
            targets_to_avoid.append(target)
            # the center word is repeated num_skips times in batch
            batch[i * num_skips + j] = buffer[skip_window]
            # the same center word is paired with num_skips context words
            labels[i * num_skips + j, 0] = buffer[target]
        # slide the buffer forward by one word
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    return batch, labels

# print the first ten words of the corpus
print('data:', [reverse_dictionary[di] for di in data[:10]])

for num_skips, skip_window in [(2, 1), (4, 2)]:
    data_index = 0
    batch, labels = generate_batch(batch_size=16, num_skips=num_skips, skip_window=skip_window)
    print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
    print('    batch:', [reverse_dictionary[bi] for bi in batch])
    print('    labels:', [reverse_dictionary[li] for li in labels.reshape(16)])
```

The output I get is shown below. You can see the relationship between batch and labels: each target word appears several times, once for each of its context words (the small check after the sample output confirms this).

```
data: ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first', 'used', 'against']

with num_skips = 2 and skip_window = 1:
    batch: ['originated', 'originated', 'as', 'as', 'a', 'a', 'term', 'term', 'of', 'of', 'abuse', 'abuse', 'first', 'first', 'used', 'used']
    labels: ['as', 'anarchism', 'originated', 'a', 'term', 'as', 'of', 'a', 'term', 'abuse', 'of', 'first', 'abuse', 'used', 'against', 'first']

with num_skips = 4 and skip_window = 2:
    batch: ['as', 'as', 'as', 'as', 'a', 'a', 'a', 'a', 'term', 'term', 'term', 'term', 'of', 'of', 'of', 'of']
    labels: ['anarchism', 'originated', 'a', 'term', 'originated', 'of', 'as', 'term', 'of', 'a', 'abuse', 'as', 'a', 'term', 'first', 'abuse']
```
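The pairing can also be verified programmatically. Here is a small check (not in the original post) that every sampled (center, context) pair really comes from the same window of the corpus:

```python
data_index = 0
skip_window = 1
num_skips = 2
batch, labels = generate_batch(batch_size=16, num_skips=num_skips, skip_window=skip_window)
for i in range(len(batch)):
    center_pos = skip_window + i // num_skips   # position in `data` of this sample's center word
    assert batch[i] == data[center_pos]
    window = data[center_pos - skip_window : center_pos + skip_window + 1]
    assert labels[i, 0] in window
print('all (center, context) pairs lie inside their window')
```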

Build the model and define the loss:

```python
batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.
valid_size = 16       # Random set of words to evaluate similarity on.
valid_window = 100    # Only pick dev samples in the head of the distribution.
# randomly pick a set of words as the validation set; valid_examples (the valid_dataset
# below) is a 1-D ndarray
valid_examples = np.array(random.sample(range(valid_window), valid_size))
# trick: number of negative samples
num_sampled = 64  # Number of negative examples to sample.

graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
    # training inputs and labels, plus the validation set (note it is a constant)
    train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
    # define and initialize the embedding layer
    embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    softmax_weights = tf.Variable(
        tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0 / math.sqrt(embedding_size)))
    softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
    # Model.
    # embedding_lookup maps train_dataset (a 1-D batch of word indices) to dense vectors
    embed = tf.nn.embedding_lookup(embeddings, train_dataset)
    # Compute the softmax loss, using a sample of the negative labels each time,
    # with tf.reduce_mean and tf.nn.sampled_softmax_loss
    loss = tf.reduce_mean(
        tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed,
                                   labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))
    # Optimizer. It will update the embeddings as well.
    # Note: The optimizer will optimize the softmax_weights AND the embeddings.
    # This is because the embeddings are defined as a variable quantity and the
    # optimizer's `minimize` method will by default modify all variable quantities
    # that contribute to the tensor it is passed.
    # See docs on `tf.train.Optimizer.minimize()` for more details.
    optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
    # the model itself ends here; what follows evaluates it on the validation set
    # Compute the similarity between minibatch examples and all embeddings.
    # We use cosine distance: first normalize the embeddings (a NumPy equivalent follows this block).
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    # similarity between each validation word and every word in the vocabulary
    similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
```
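For intuition, the similarity node above is nothing more than row-normalization followed by a matrix product. A toy NumPy equivalent (illustrative only, with made-up sizes in place of the real embedding matrix):

```python
import numpy as np

emb = np.random.uniform(-1.0, 1.0, size=(10, 4))              # toy stand-in for `embeddings`
norm = np.sqrt(np.sum(np.square(emb), axis=1, keepdims=True))
normalized = emb / norm                                        # every row now has unit length
valid_ids = np.array([0, 3])                                   # toy stand-in for `valid_examples`
sim = normalized[valid_ids] @ normalized.T                     # cosine similarities, shape (2, 10)
print(np.allclose([sim[0, 0], sim[1, 3]], 1.0))                # each word is maximally similar to itself
```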

Start training:

```python
num_steps = 40001

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    print('Initialized')
    average_loss = 0
    for step in range(num_steps):
        batch_data, batch_labels = generate_batch(batch_size, num_skips, skip_window)
        feed_dict = {train_dataset: batch_data, train_labels: batch_labels}
        _, this_loss = session.run([optimizer, loss], feed_dict=feed_dict)
        average_loss += this_loss
        # report the average loss every 2000 steps
        if step % 2000 == 0:
            if step > 0:
                average_loss = average_loss / 2000
            # The average loss is an estimate of the loss over the last 2000 batches.
            print('Average loss at step %d: %f' % (step, average_loss))
            average_loss = 0
        # note that this is expensive (~20% slowdown if computed every 500 steps)
        if step % 10000 == 0:
            sim = similarity.eval()
            for i in range(valid_size):
                valid_word = reverse_dictionary[valid_examples[i]]
                top_k = 8  # number of nearest neighbors
                # nearest = (-sim[i, :]).argsort()[1:top_k+1]
                nearest = (-sim[i, :]).argsort()[0:top_k+1]  # keep the word itself, just to see
                log = 'Nearest to %s:' % valid_word
                for k in range(top_k):
                    close_word = reverse_dictionary[nearest[k]]
                    log = '%s %s,' % (log, close_word)
                print(log)
    # when training is finished, normalize all embeddings once more to obtain the final embeddings
    final_embeddings = normalized_embeddings.eval()
```

We can watch how the validation words evolve during training, for example the nearest neighbours computed for the word "many":

At the start,

```
Nearest to many: many, originator, jeddah, maxwell, laurent, distress, interpret, bucharest,
```

After 10,000 steps,

```
Nearest to many: many, some, several, jeddah, originator, neurath, distress, songs,
```

After 40,000 steps,

```
Nearest to many: many, some, several, these, various, such, other, most,
```

You can see that by this point the nearest neighbours really are similar words.
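Once final_embeddings is available, the same kind of query can be run for any in-vocabulary word. A small helper (not in the original post; the rows of final_embeddings are already unit-normalized, so a dot product gives the cosine similarity):

```python
def nearest_words(word, top_k=8):
    idx = dictionary[word]                             # the word must be inside the 50,000-word vocabulary
    sim = final_embeddings @ final_embeddings[idx]     # cosine similarity to every word
    nearest = (-sim).argsort()[1:top_k + 1]            # skip position 0, which is the word itself
    return [reverse_dictionary[i] for i in nearest]

print(nearest_words('many'))
```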

Finally, we reduce the dimensionality and visualize the word similarities:

```python
num_points = 400

# dimensionality reduction with t-SNE (initialized with PCA)
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])

def plot(embeddings, labels):
    assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
    pylab.figure(figsize=(15, 15))  # in inches
    for i, label in enumerate(labels):
        x, y = embeddings[i, :]
        pylab.scatter(x, y)
        pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
                       ha='right', va='bottom')
    pylab.show()

words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
```

The result is shown below. To pick a few examples at random: university is close to college, take is close to took, and one, two, three, etc. cluster together.

(Figure: t-SNE visualization of the first 400 word embeddings)


To sum up: the original word2vec is written in C; here we use Python together with TensorFlow. This code has some issues. First, the words are fed in as indices, whereas they should be fed in as one-hot vectors. Second, the negative-sampling ratio is very small: the vocabulary has 50,000 words, yet only 64 negative samples are drawn per batch for the softmax. Third, the other trick is not used here either (of course, since one-hot vectors are not used at all, that trick does not even apply, and I suspect negative sampling is not strictly needed here): organizing the words into a binary tree (going from the one-hot dimension down to a binary-tree encoding such as a Huffman tree), which acts as a kind of dimensionality reduction. Even so, this crude model works reasonably well; as long as the direction is right, even a drunkard finds his way home.
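As a side note on the Huffman-tree idea mentioned above: hierarchical softmax replaces the flat 50,000-way softmax with a walk down a binary tree in which frequent words get short codes. A minimal, illustrative sketch of the coding step on toy counts (not part of this model or of word2vec's actual implementation):

```python
import heapq
import itertools

def huffman_codes(word_counts):
    tie = itertools.count()  # tie-breaker so heapq never has to compare tree nodes directly
    heap = [(cnt, next(tie), word) for word, cnt in word_counts]
    heapq.heapify(heap)
    while len(heap) > 1:
        c1, _, left = heapq.heappop(heap)
        c2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (c1 + c2, next(tie), (left, right)))
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):      # internal node: recurse into both children
            walk(node[0], code + '0')
            walk(node[1], code + '1')
        else:                            # leaf: a word
            codes[node] = code
    walk(heap[0][2], '')
    return codes

# toy counts; on the real data this would be huffman_codes(count[1:])
print(huffman_codes([('the', 50), ('of', 30), ('and', 20), ('cat', 5), ('zebra', 1)]))
```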

Reposted from: https://www.cnblogs.com/lunge-blog/p/11603983.html
