Is there a bigram or trigram feature in spaCy?

spaCy allows the detection of noun chunks. So to parse your noun phrases as single entities, do this:

  1. Detect the noun chunks https://spacy.io/usage/linguistic-features#noun-chunks

  2. Merge the noun chunks

  3. Run the dependency parse again; it will now treat "cloud computing" as a single entity.

>>> import spacy
>>> nlp = spacy.load('en')
>>> doc = nlp("Cloud computing is benefiting major manufacturing companies")
>>> list(doc.noun_chunks)
[Cloud computing, major manufacturing companies]
>>> for noun_phrase in list(doc.noun_chunks):
...     noun_phrase.merge(noun_phrase.root.tag_, noun_phrase.root.lemma_, noun_phrase.root.ent_type_)
... 
Cloud computing
major manufacturing companies
>>> [(token.text,token.pos_) for token in doc]
[('Cloud computing', 'NOUN'), ('is', 'VERB'), ('benefiting', 'VERB'), ('major manufacturing companies', 'NOUN')]
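
Note that Span.merge() and the 'en' model shortcut were removed in spaCy 3.x. A roughly equivalent sketch for newer versions uses Doc.retokenize() instead (assuming the en_core_web_sm model is installed):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Cloud computing is benefiting major manufacturing companies")

# Merge each noun chunk into a single token in place;
# the merges are applied when the context manager exits.
with doc.retokenize() as retokenizer:
    for noun_phrase in doc.noun_chunks:
        retokenizer.merge(noun_phrase)

print([(token.text, token.pos_) for token in doc])
# noun chunks such as "Cloud computing" are now single tokens

Alternatively, spaCy ships a built-in merge_noun_chunks pipeline component (nlp.add_pipe("merge_noun_chunks")) that performs the same merge for you.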

If you have a spaCy Doc, you can also pass it to textacy to extract n-grams directly:

ngrams = list(textacy.extract.basics.ngrams(doc, 2, min_freq=2))
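
A minimal end-to-end sketch, assuming textacy is installed (the two-sentence text here is invented so that min_freq=2 has repeated bigrams to find):

import spacy
import textacy.extract.basics

nlp = spacy.load("en_core_web_sm")
doc = nlp("Cloud computing is benefiting major manufacturing companies. "
          "Cloud computing also helps smaller manufacturing companies.")

# All bigrams that occur at least twice in the doc.
bigrams = list(textacy.extract.basics.ngrams(doc, 2, min_freq=2))
print(bigrams)

Note that textacy applies some filtering by default (see the filter_stops and filter_punct arguments), so the results may be sparser than the raw bigram counts.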