ChatGPT and generative AI have become buzzwords in recent times, but they are not a new concept. In 1966, Joseph Weizenbaum, a computer scientist at MIT, introduced Eliza, the world’s first chatbot. Users typed messages into a text box, and the program matched the input against simple keyword patterns and echoed back transformed fragments of what the user had written, creating the illusion of a conversation between a human and a machine. Weizenbaum built it in part to demonstrate how shallow communication between humans and computers really was.
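To make the mechanism concrete, here is a minimal sketch of Eliza-style pattern matching in Python. The rules and responses below are invented for illustration; Weizenbaum’s actual DOCTOR script was far larger, with ranked keywords, memory, and more elaborate reassembly rules.

```python
import re
import random

# A few illustrative rules in the spirit of the DOCTOR script.
# Each rule pairs a keyword pattern with response templates that
# reuse a fragment of the user's own words.
RULES = [
    (re.compile(r"\bi need (.*)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (.*)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]

# Content-free prompts used when no keyword matches,
# keeping the conversation going without understanding anything.
FALLBACKS = ["Please go on.", "I see.", "Can you elaborate on that?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    swaps = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}
    return " ".join(swaps.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    """Return a reply by matching rules in order and echoing a fragment."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    # e.g. "How long have you been worried about your future?"
    print(respond("I am worried about my future"))
```

Even these few rules show why the illusion works: the program understands nothing, yet by reflecting the user’s own words back as questions it appears attentive and engaged.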
Weizenbaum wrote the program to show that while machines could mimic human conversation, the apparent understanding was an illusion. Even so, people were entranced by Eliza, despite knowing it was just a program. After Weizenbaum published his paper on Eliza, some people, including practicing psychiatrists, began to question whether human psychotherapists were still needed. Weizenbaum was horrified and became a lifelong opponent of technological determinism. His 1976 book, “Computer Power and Human Reason,” remains relevant today as a record of a technological insider’s reservations about the direction of automation.
The intriguing aspect of Eliza and ChatGPT is that people consider them magical even when they know how they work. Some have dismissed ChatGPT as a “stochastic parrot” or a “hi-tech plagiarism machine.” Yet the real questions about the underlying technology remain unanswered. The CO2 emissions incurred in training these language models, and the carbon footprint of each interaction with them, are not widely known. The business model behind these tools, and their exploitation of the creative work of millions of people on the web, also remain unclear.
In his lectures, Weizenbaum pointed out that we are continuously striking Faustian bargains with technology: both sides get something, but by the time we decide the trade-off is not working, it is too late to undo. The same can be said of generative AI. Are we making this trade-off with our eyes open, or are we simply repeating history?