SpaCy: how to load Google news word2vec vectors?

For spaCy 1.x, load the Google News vectors into gensim and convert them to a new format (each line of the .txt contains a single vector: word followed by its values):

from gensim.models import KeyedVectors

# Load the pretrained binary vectors and re-save them in the plain-text word2vec format
model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
model.save_word2vec_format('googlenews.txt')

Remove the first line of the .txt (the header containing the vocabulary size and vector dimension):

tail -n +2 googlenews.txt > googlenews.new && mv -f googlenews.new googlenews.txt
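
If you are not on a system with tail (e.g. Windows), a small Python sketch can do the same thing; the filenames match the step above:

import os

# Strip the header line (vocab size and dimension) without loading the whole file into memory,
# then replace the original file so the later steps can keep using googlenews.txt.
with open('googlenews.txt', encoding='utf-8') as src, \
        open('googlenews.new', 'w', encoding='utf-8') as dst:
    next(src)            # skip the header line
    for line in src:
        dst.write(line)

os.replace('googlenews.new', 'googlenews.txt')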

Compress the txt as .bz2:

bzip2 googlenews.txt

Create a SpaCy compatible binary file:

spacy.vocab.write_binary_vectors('googlenews.txt.bz2','googlenews.bin')

Move googlenews.bin to /lib/python/site-packages/spacy/data/en_google-1.0.0/vocab/googlenews.bin in your Python environment.

Then load the word vectors:

import spacy
nlp = spacy.load('en', vectors='en_google')

or load them later:

nlp.vocab.load_vectors_from_bin_loc('googlenews.bin')
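
A quick sanity check that the vectors are available (this assumes the spaCy 1.x lexeme API, where has_vector and vector are exposed on vocabulary entries; the word is just an example):

apple = nlp.vocab['apple']
print(apple.has_vector)      # True if the Google News vectors were picked up
print(apple.vector.shape)    # expected: (300,)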

It is much easier to use the gensim API for downloading the compressed word2vec model released by Google; it will be stored in /home/<your_username>/gensim-data/word2vec-google-news-300/. Load the vectors and play ball. I have 16 GB of RAM, which is more than enough to handle the model.

import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # download the model and return it ready for use
word_vectors = model  # the returned object is already a KeyedVectors instance, so no .wv attribute is needed
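
Once loaded, the object behaves like any gensim KeyedVectors, so you can sanity-check it directly (the query word is just an example):

# Quick checks against the gensim KeyedVectors API
print(word_vectors['king'].shape)                  # (300,)
print(word_vectors.most_similar('king', topn=3))   # nearest neighbours by cosine similarity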

I know this question has already been answered, but I am going to offer a simpler solution, which loads the Google News vectors into a blank spaCy nlp object.

import gensim
import spacy

# Path to the Google News vectors (raw string so the backslashes are not treated as escapes)
google_news_path = r"path\to\google\news\GoogleNews-vectors-negative300.bin.gz"

# Load the Google News vectors in gensim
model = gensim.models.KeyedVectors.load_word2vec_format(google_news_path, binary=True)

# Init blank english spacy nlp object
nlp = spacy.blank('en')

# Loop through range of all indexes, get words associated with each index.
# The words in the keys list will correspond to the order of the google embed matrix
keys = []
for idx in range(3000000):
    keys.append(model.index2word[idx])

# Set the vectors for our nlp object to the google news vectors
nlp.vocab.vectors = spacy.vocab.Vectors(data=model.syn0, keys=keys)

>>> nlp.vocab.vectors.shape
(3000000, 300)
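
After that assignment, the blank pipeline can serve vectors and similarities through the usual spaCy API; a minimal check (the words are just examples):

doc = nlp("dog cat")
print(doc[0].vector.shape)        # (300,), looked up from the Google News vectors
print(doc[0].similarity(doc[1]))  # cosine similarity between "dog" and "cat"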

I am using spaCy v2.0.10.

Create a SpaCy compatible binary file:

spacy.vocab.write_binary_vectors('googlenews.txt.bz2','googlenews.bin')

I want to highlight that the specific code in the accepted answer no longer works. I encountered an "AttributeError: ..." when I ran it.

This changed in spaCy v2: write_binary_vectors was removed. According to the spaCy documentation, the current way to do this is as follows:

$ python -m spacy init-model en /path/to/output -v /path/to/vectors.bin.tar.gz
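
The output directory produced by init-model is itself a loadable model, so you can verify that the vectors made it in (the path is the one you passed above):

import spacy

# Load the model created by `spacy init-model` and confirm the vectors are attached
nlp = spacy.load('/path/to/output')
print(nlp.vocab.vectors.shape)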