Given the popularity and apparent sophistication of modern chatbots, many people assume there must be some real intelligence behind them. In reality, all chatbots are computer programs with no genuine ability to comprehend what they're saying. They're instead based on one of two broad types of algorithms that can mimic authentic human conversation when deployed under the right circumstances.
Script-based systems represent the first of these approaches. Back in 1966, a computer scientist named Joseph Weizenbaum showed off a chatbot named Eliza at the Massachusetts Institute of Technology. The software mimicked the style of a Rogerian psychotherapist by matching keywords in the user's input against a script of pattern-and-response rules and weaving fragments of the user's own words into its canned replies. This provides a reasonable simulation of a conversation between a therapist and patient, provided that the exchange doesn't go too far off the rails.
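The pattern-and-response idea can be sketched in a few lines. This is a toy illustration, not Weizenbaum's actual code: the rules and replies below are invented, but the mechanism of matching a keyword pattern and reassembling the user's own words into a canned reply is the same.

```python
import re

# Each rule pairs a pattern with a reply template; {0} is filled in
# with whatever the user said after the matched keyword.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no pattern matches

def reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Echo the user's own words back inside the canned sentence.
            return template.format(match.group(1).strip(" .!?"))
    return DEFAULT
```

For example, `reply("I feel sad today")` echoes the user's words back as a question, while input that matches nothing gets the neutral default, which is what keeps the illusion going only as long as the conversation stays on script.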
Students who experimented with the software were surprised at how lifelike it superficially seemed, even though Eliza's other scripts were less successful than the mock-psychiatric one. Thanks to the massive increases in computer memory and storage since that time, it's now possible to build truly enormous scripts that include automatic responses for a wide array of potential situations. Simple search plugins let these applications fetch data over a network and slot it into sentences that a human wrote when the program was first developed.
Such add-ons give these programs a great deal of utility, and they're often deployed alongside live chat software that lets customers reach actual human representatives. A chatbot may include a simple rule that recommends human assistance whenever a question appears to require an answer outside the limited scope of the original script. Those who want to provide an even more human-like experience may start to experiment with data-trained algorithms.
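The escalation rule described above can be sketched as follows. The topics, canned answers, and handoff message are all invented for illustration; the point is simply that anything the script cannot match gets routed to a person.

```python
# Invented example topics a scripted support bot might cover.
SCRIPTED_ANSWERS = {
    "hours": "We are open 9am to 5pm, Monday through Friday.",
    "returns": "You can return any item within 30 days.",
}

def answer(question: str) -> str:
    # Check the question against every scripted topic keyword.
    for keyword, canned in SCRIPTED_ANSWERS.items():
        if keyword in question.lower():
            return canned
    # Out of scope: escalate to a live agent instead of guessing.
    return "Let me connect you with a human representative."
```

A question like "What are your hours?" gets the canned reply, while an unanticipated one triggers the handoff, which is exactly the behavior that makes these bots safe to pair with live chat software.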
Large language model construction starts by gathering a huge amount of written text and feeding it into a training system. The software breaks the text into tokens and adjusts numerical weights that record how likely each token is to follow a given sequence of others. When the trained model encounters a phrase, it calculates the probability of each possible continuation. From this, generative AI programs are capable of putting together readable text one token at a time.
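A drastically simplified version of this idea can be shown with word counts. Real LLMs learn billions of neural-network weights over subword tokens, but the toy model below captures the core notion: the training text determines weights (here, raw counts) that predict a likely next word.

```python
from collections import Counter, defaultdict

# A tiny invented "corpus" standing in for the huge text collections
# real models are trained on.
corpus = "the cat sat on the mat and the cat slept".split()

# "Training": count how often each word follows each other word.
follows: defaultdict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_likely_next(word: str) -> str:
    # "Generation": pick the continuation with the highest weight.
    return follows[word].most_common(1)[0][0]
```

In this corpus, "cat" follows "the" twice while "mat" follows it once, so `most_likely_next("the")` returns "cat". Chaining such predictions together is, in vastly scaled-up form, how generative programs assemble readable text.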
Unlike classic chatbots, however, these programs can produce seemingly new ideas that were never spelled out in their original scripts. Algorithms that incorporate natural language processing allow for smoother communication between people and LLM-based software, which makes these systems popular with engineers developing sophisticated chat applications. Systems trained on data sets measuring in the hundreds of tebibytes have assembled enough weights to reply to countless types of queries. They sometimes produce confident but incorrect or bizarre answers, however, which is why all output from LLM-based software needs to be carefully checked.