

NLP is a technique used to bridge the communication gap between computer and human, and it has its origins in the early ideas of machine translation (MT) that were born during World War II. The Georgetown experiment in 1954 involved the automatic translation of more than sixty Russian sentences into English in a collaboration between IBM and Georgetown University. The authors claimed that within three to five years, machine translation would be a solved problem.

[Figure: a flowchart of part of IBM’s dictionary lookup procedures.]

However, actual progress was much slower, and after the ALPAC report in 1966, which found that ten years of research had not lived up to expectations, funding for machine translation was drastically reduced. Some notably successful natural language processing systems developed in the 1960s were SHRDLU (whose acronym has no particular meaning), written by Terry Winograd at MIT, and ELIZA, written by Joseph Weizenbaum between 1964 and 1966. SHRDLU was a natural language system that worked with “blocks” in a restricted-vocabulary framework: one could give it instructions in colloquial language such as “Can you put the red cone on top of the green block?” and the program would understand and execute the action. ELIZA simulated a psychotherapist in the style of Carl Rogers. With no information about human thought or emotion, ELIZA provided a surprisingly human-like interaction: when the “patient” exceeded its very small knowledge base, ELIZA could give a generic response, for example answering “My head hurts” with “Why do you say your head hurts?”. ELIZA was perhaps the first antecedent of the conversational bot and one of the first programs capable of attempting the Turing test.
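
The reflection behavior described above can be illustrated with a few pattern-matching rules. The Python sketch below is not Weizenbaum’s implementation (the original used ranked keywords with decomposition and reassembly rules); it is a minimal, hypothetical illustration of the same idea: match a pattern, reuse part of the input in the reply, and fall back to a generic response when the input exceeds the tiny rule base.

```python
import re

# Hypothetical ELIZA-style rules: (pattern, response template).
# The first captured group of the pattern is substituted into the template.
RULES = [
    (re.compile(r"my (.+) hurts", re.IGNORECASE),
     "Why do you say your {0} hurts?"),
    (re.compile(r"i am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
]

# Generic fallbacks, used when no rule matches the input.
GENERIC_RESPONSES = [
    "Please tell me more.",
    "Why do you say that?",
]

def respond(utterance: str, turn: int = 0) -> str:
    """Return a rule-based reply, or a generic response if no rule matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    # The input falls outside the tiny "knowledge base": answer generically.
    return GENERIC_RESPONSES[turn % len(GENERIC_RESPONSES)]

print(respond("My head hurts"))    # -> Why do you say your head hurts?
print(respond("The sky is blue"))  # -> Please tell me more.
```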

…a little later (1970–1990)

During the 1970s, many programmers began writing “conceptual systems,” which structured real-world information into computer-understandable data, and by the 1980s, most natural language processing systems were based on complex sets of handwritten rules.
