Natural Language Processing (NLP)
Introduction
Natural language processing,[1][2] commonly abbreviated NLP,[3][4] is a field of computer science, artificial intelligence, and linguistics that studies the interactions between computers and human language, as well as the computational aspects of natural languages. It deals with the formulation and investigation of computationally efficient mechanisms for communication between people and machines through natural language, that is, the languages of the world. The goal is not communication through natural languages in the abstract, but the design of communication mechanisms that are computationally efficient, meaning that they can be carried out by programs that execute or simulate communication.

The models applied focus not only on the understanding of language itself, but also on general human cognitive aspects and the organization of memory; natural language serves only as a means of studying these phenomena.

Until the 1980s, most NLP systems were based on complex sets of hand-designed rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing.[5][6]
History
The history of NLP generally starts in the 1950s, although earlier work can be found. In 1950, Alan Turing published "Computing Machinery and Intelligence", which proposed what is now called the Turing test as a criterion of intelligence. In 1954, the Georgetown experiment involved the automatic translation of more than sixty sentences from Russian into English. The authors claimed that within three to five years machine translation would be a solved problem. Real progress was much slower, and in 1966 the ALPAC report found that the research had failed to meet expectations. Little further research in machine translation was conducted until the late 1980s, when the first statistical machine translation systems were developed. This was due both to the steady increase in computational power resulting from Moore's law and to the gradual decline in the dominance of Chomskyan theories of linguistics (e.g., transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine learning approach to language processing. Some of the earliest machine learning algorithms used, such as decision trees, produced systems of hard if-then rules similar to existing handwritten rules.[7][8]
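The observation that a decision tree amounts to a learned set of if-then rules can be made concrete with a small sketch. The following Python example, which is illustrative and not taken from any historical system, trains a tiny decision tree with scikit-learn on invented word-shape features for a toy part-of-speech tagging task; the words, features, and tags are all assumptions made for the demonstration.

```python
# Illustrative sketch: a decision tree learns if-then rules for a toy
# part-of-speech tagging task. All data here is invented for illustration.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

def features(word):
    # Simple hand-crafted surface features of a word.
    return {
        "ends_in_ly": word.endswith("ly"),
        "ends_in_ed": word.endswith("ed"),
        "ends_in_s": word.endswith("s"),
    }

words = ["quickly", "jumped", "dogs", "slowly", "walked", "cats"]
tags = ["ADV", "VERB", "NOUN", "ADV", "VERB", "NOUN"]

# Turn the feature dictionaries into a numeric matrix.
vec = DictVectorizer(sparse=False)
X = vec.fit_transform([features(w) for w in words])

tree = DecisionTreeClassifier().fit(X, tags)

# Print the learned tree: it is literally a set of nested if-then rules,
# comparable in form to the handwritten rules of earlier systems.
print(export_text(tree, feature_names=list(vec.get_feature_names_out())))
```

Running the sketch prints a nested rule listing of the form "if ends_in_ly > 0.5 then ADV, else if ends_in_ed > 0.5 then VERB, ...", which makes the kinship between learned trees and handwritten rule systems visible.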