Features
* (Jointly) visualize
o syntactic dependency graphs
o semantic dependency graphs (a la CoNLL 2008)
o chunks (such as syntactic chunks, NER chunks, and SRL chunks)
* Compare gold-standard trees to your generated trees (e.g. highlight false-positive and false-negative dependency edges)
* Filter trees and visualize only what's necessary, for example
o only dependency edges with certain labels
o only the edges between certain tokens
* Search corpora for sentences with certain attributes using powerful search expressions, for example
o search for all sentences that contain the word "vantage" and the POS tag sequence DT NN
o search for all sentences that contain false positive edges and the word "vantage"
* Reads
o CoNLL 2000, 2002, 2003, 2004, 2006, and 2008 formats
o Lisp S-expressions
o Malt-Tab format
o Markov thebeast format
* Export to EPS
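At its core, the gold-vs-predicted comparison above reduces to set differences over dependency edges. A minimal sketch of the idea (not the tool's actual implementation; the triple representation and function name are assumptions for illustration):

```python
# Sketch of comparing gold-standard and predicted dependency trees.
# An edge is represented as a (head, dependent, label) triple; the
# representation and function name are illustrative, not the tool's API.

def compare_edges(gold, predicted):
    """Return (false_positives, false_negatives) between two edge sets."""
    gold, predicted = set(gold), set(predicted)
    false_positives = predicted - gold   # edges the parser invented
    false_negatives = gold - predicted   # edges the parser missed
    return false_positives, false_negatives

gold = {(2, 1, "det"), (0, 2, "root"), (2, 3, "amod")}
pred = {(2, 1, "det"), (0, 2, "root"), (3, 2, "amod")}

fp, fn = compare_edges(gold, pred)
print(fp)  # {(3, 2, 'amod')}
print(fn)  # {(2, 3, 'amod')}
```

A visualizer can then highlight the two result sets in different colors, which is exactly the false-positive/false-negative view described above.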
Stanford CoreNLP provides a set of natural language analysis tools. It can give the base forms of words and their parts of speech; recognize names of companies, people, and so on; normalize dates, times, and numeric quantities; mark up the structure of sentences in terms of phrases and word dependencies; indicate which noun phrases refer to the same entities; indicate sentiment; and extract open-class relations between mentions.
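These analyses are run as an annotator pipeline configured through a properties file. A typical configuration covering the capabilities above might look like the following (annotator names come from the CoreNLP documentation; the exact set you need depends on your task):

```properties
# Tokenize and sentence-split first, then run tagging, lemmatization,
# and the downstream annotators for the analyses described above.
annotators = tokenize, ssplit, pos, lemma, ner, parse, depparse, coref, sentiment
```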
Chomsky bot written in Ruby. A funny little thing which generates random paragraphs of text from a set of sentence building blocks. It combines four kinds of phrases (introduction phrases, subject phrases, verb phrases, and object phrases) into a sentence. The sentences this simple construction can create are amazing: they are syntactically correct and "hover on the edge of understandability".
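The four-slot construction is simple enough to sketch in a few lines. The original bot is in Ruby; this is a minimal Python sketch of the same idea, and the phrase lists are illustrative placeholders rather than the bot's actual building blocks:

```python
import random

# Minimal sketch of the four-slot sentence construction described above.
# The phrase lists are illustrative placeholders, not the bot's real data.
INTROS = ["To characterize a linguistic level L,",
          "On the other hand,"]
SUBJECTS = ["the notion of level of grammaticalness",
            "a case of semigrammaticalness of a different sort"]
VERBS = ["delimits", "is not quite equivalent to"]
OBJECTS = ["a corpus of utterance tokens.",
           "the system of base rules."]

def chomsky_sentence(rng=random):
    """Pick one phrase from each slot and join them into a sentence."""
    return " ".join(rng.choice(slot)
                    for slot in (INTROS, SUBJECTS, VERBS, OBJECTS))

print(chomsky_sentence())
```

Because each slot is grammatically self-contained, any combination parses as an English sentence, which is what keeps the output syntactically correct while the meaning drifts.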
NGramJ is a Java-based library containing two types of n-gram-based applications. Its major focus is to provide robust, state-of-the-art language recognition.
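N-gram language recognition of this kind typically builds a character n-gram profile per language and classifies new text by profile similarity. A rough sketch of the technique in Python (this is the general approach, not NGramJ's actual API; the similarity measure here is a deliberately crude overlap count):

```python
from collections import Counter

# Sketch of character-n-gram language recognition: build an n-gram
# frequency profile per language, then classify text by overlap with
# each profile.  Not NGramJ's API; a generic illustration.

def ngrams(text, n=3):
    """Character n-gram counts, with padding spaces at the edges."""
    text = f" {text.lower()} "
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(a, b):
    # Shared n-gram occurrences -- a crude but serviceable overlap score.
    return sum(min(a[g], b[g]) for g in a)

def recognize(text, profiles):
    """Return the language whose profile best matches the text."""
    grams = ngrams(text)
    return max(profiles, key=lambda lang: similarity(grams, profiles[lang]))

profiles = {
    "en": ngrams("the quick brown fox jumps over the lazy dog and the cat"),
    "de": ngrams("der schnelle braune fuchs springt ueber den faulen hund"),
}
print(recognize("the dog and the fox", profiles))  # prints "en"
```

Real systems train the profiles on much larger corpora and use a rank-based or probabilistic distance, but the structure is the same.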
The Natural Programming Project is working on making programming languages and environments easier to learn, more effective, and less error-prone. We are taking a human-centered approach, first studying how people perform their tasks and then designing languages and environments around people's natural tendencies. We focus on all kinds of programmers, including professional programmers, novice programmers who are trying to learn to be experts, and end users, who program to support other jobs or hobbies, such as multimedia authoring, simulations, teaching, prototyping, and other activities supported by computing.
P. Xia, S. Wu, and B. Van Durme. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7516-7533. Association for Computational Linguistics, November 2020.
J. Otterbacher, G. Erkan, and D. Radev. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 915-922, Vancouver, British Columbia, Canada. Association for Computational Linguistics, October 2005.