Article

Knowledge Representation in Sanskrit and Artificial Intelligence

AI Magazine, 6 (1): 32 (1985)
DOI: 10.1609/aimag.v6i1.466

Abstract

In the past twenty years, much time, effort, and money has been expended on designing an unambiguous representation of natural languages to make them accessible to computer processing. These efforts have centered around creating schemata designed to parallel logical relations with relations expressed by the syntax and semantics of natural languages, which are clearly cumbersome and ambiguous in their function as vehicles for the transmission of logical data. Understandably, there is a widespread belief that natural languages are unsuitable for the transmission of many ideas that artificial languages can render with great precision and mathematical rigor. But this dichotomy, which has served as a premise underlying much work in the areas of linguistics and artificial intelligence, is a false one. There is at least one language, Sanskrit, which for the duration of almost 1000 years was a living spoken language with a considerable literature of its own. Besides works of literary value, there was a long philosophical and grammatical tradition that has continued to exist with undiminished vigor until the present century. Among the accomplishments of the grammarians can be reckoned a method for paraphrasing Sanskrit in a manner that is identical not only in essence but in form with current work in Artificial Intelligence. This article demonstrates that a natural language can serve as an artificial language also, and that much work in AI has been reinventing a wheel millennia old. First, a typical Knowledge Representation Scheme (using Semantic Nets) will be laid out, followed by an outline of the method used by the ancient Indian grammarians to analyze sentences unambiguously. Finally, the clear parallelism between the two will be demonstrated, and the theoretical implications of this equivalence will be given.
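To make the kind of semantic-net representation the abstract refers to concrete, the following is a minimal sketch in Python. The sentence, the role labels, and the data structure are illustrative assumptions, not the paper's own notation; they merely show how a sentence can be decomposed into an event node with explicit, unambiguous role arcs, loosely analogous to the karaka relations used by the Sanskrit grammarians.

    # Illustrative sketch only: a toy semantic net built from labelled arcs.
    # None of the names below are taken from the article itself.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Arc:
        """One labelled arc of a semantic net: (event node) --role--> (filler node)."""
        event: str
        role: str
        filler: str

    def paraphrase(arcs):
        """Render the net as an explicit, unambiguous list of role assignments."""
        return [f"{a.event}: {a.role} = {a.filler}" for a in arcs]

    # "John gave the ball to Mary" as an event node with case-like roles.
    sentence = [
        Arc("give-1", "agent", "John"),
        Arc("give-1", "object", "ball-1"),
        Arc("give-1", "recipient", "Mary"),
        Arc("give-1", "tense", "past"),
    ]

    for line in paraphrase(sentence):
        print(line)

Running the sketch prints one line per arc, which is the sense in which such a net is "unambiguous": every semantic relation in the sentence is stated explicitly rather than left to word order or case endings.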
