Text tokenization with Stanford NLP: filter out unwanted words and characters

For Stanford CoreNLP there is a stopword-removal annotator (registered as a custom annotator) that removes the standard stopwords. You can also define your own custom stopwords there as needed (e.g. ---, <, ., etc.).

You can see an example here:

   import java.util.List;
   import java.util.Properties;
   import edu.stanford.nlp.ling.CoreAnnotations;
   import edu.stanford.nlp.ling.CoreLabel;
   import edu.stanford.nlp.pipeline.Annotation;
   import edu.stanford.nlp.pipeline.StanfordCoreNLP;

   Properties props = new Properties();
   props.setProperty("annotators", "tokenize, ssplit, stopword");
   props.setProperty("customAnnotatorClass.stopword", "intoxicant.analytics.coreNlp.StopwordAnnotator");

   StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
   Annotation document = new Annotation(example);  // 'example' is the text you want to process
   pipeline.annotate(document);
   List<CoreLabel> tokens = document.get(CoreAnnotations.TokensAnnotation.class);

In the example above, "tokenize, ssplit, stopword" is the list of annotators for the pipeline, and the "stopword" entry is mapped to the custom StopwordAnnotator class that handles the stopword removal.
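
Once the pipeline has run, you can read the tokens back from the annotation in the usual way; exactly how stopwords end up flagged or dropped depends on the StopwordAnnotator implementation you register. A minimal sketch:

   // Print the surface form of each token returned by the pipeline
   for (CoreLabel token : tokens) {
       System.out.println(token.word());
   }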

Hope this helps!


This is a very domain-specific task that we don't perform for you in CoreNLP. You should be able to make this work with a regular expression filter and a stopword filter on top of the CoreNLP tokenizer.

Here's an example list of English stopwords.
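
A minimal sketch of that approach, assuming you only need the CoreNLP tokenizer plus your own filters. The class name, the regular expression, and the tiny stopword set below are illustrative placeholders; swap in a full English stopword list and a pattern suited to your domain:

   import java.util.Arrays;
   import java.util.HashSet;
   import java.util.Properties;
   import java.util.Set;
   import java.util.regex.Pattern;
   import edu.stanford.nlp.ling.CoreAnnotations;
   import edu.stanford.nlp.ling.CoreLabel;
   import edu.stanford.nlp.pipeline.Annotation;
   import edu.stanford.nlp.pipeline.StanfordCoreNLP;

   public class TokenFilterExample {
       // Placeholder stopword set; replace with a complete English stopword list
       private static final Set<String> STOPWORDS = new HashSet<>(
               Arrays.asList("a", "an", "the", "of", "and", "or", "to", "in"));

       // Keep only tokens made of letters or digits (drops "---", "<", ".", etc.)
       private static final Pattern WORD = Pattern.compile("[\\p{L}\\p{N}]+");

       public static void main(String[] args) {
           // Tokenizer and sentence splitter only; filtering happens below
           Properties props = new Properties();
           props.setProperty("annotators", "tokenize, ssplit");
           StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

           Annotation document = new Annotation("The quick --- brown fox, < and the lazy dog.");
           pipeline.annotate(document);

           // Apply the regex filter and the stopword filter on top of the tokenizer output
           for (CoreLabel token : document.get(CoreAnnotations.TokensAnnotation.class)) {
               String word = token.word();
               if (WORD.matcher(word).matches() && !STOPWORDS.contains(word.toLowerCase())) {
                   System.out.println(word);
               }
           }
       }
   }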