Is This Our “GUI Moment”? — Comparing AI Interfaces to Computing History

The debate is still on: is the current wave of generative AI models, such as OpenAI’s GPT, Google’s Bard, and Meta’s LLaMA, really the start of a new era, or just another hype wave?

It all started at the end of last year, when OpenAI introduced ChatGPT, built on GPT-3.5. Many people tried it, and it turned out to be very good. Then OpenAI released GPT-4 with an 8k-token context window, and the world went crazy. Now everyone seems to be writing an analysis of AI’s future, so I thought I’d give it a try, too.

Separating Chatbots and LLMs

Let’s separate two things happening now. First, the revival of chatbots from years past. Second, the new generation of large language models using transformer tech. Together, they enable remarkably natural conversation.

Before, most chatbots were scripted dialog trees, much like phone menus. With large language models (LLMs) powering them, engineers can make chatbots seem truly intelligent: given the proper context, an LLM can produce meaningful responses, and supplying that context is the real engineering challenge. This is how ChatGPT, Perplexity.ai, and other LLM-powered tools work.

There are also new custom apps built for specific purposes. They track the question-and-answer history, can search the web or call APIs, and use the LLM to “decide” on responses. It’s not real thinking, just tools and prompts that leverage the LLM: the LLM suggests which tool to use to gather context, the tool runs, and its output loops back into the LLM. That’s the essence of agent-based architecture; a minimal sketch of the loop follows below.
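To make that loop concrete, here’s a minimal sketch in Python. Nothing in it is a real framework: `call_llm` is a placeholder for whatever model API you use, and the tool names and prompt format are invented for illustration. The only point is the shape of the loop, where the LLM picks a tool, the tool runs, and its output is fed back as context.

```python
# Minimal agent-loop sketch. call_llm() is a hypothetical stand-in for any
# chat-completion API; the tools and the prompt format are illustrative only.
from typing import Callable, Dict


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion endpoint)."""
    raise NotImplementedError("Plug in your model provider here.")


# A 'tool' is just a function the agent may choose to run.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(search results for: {query})",
    "calendar": lambda query: f"(calendar entries matching: {query})",
}


def run_agent(question: str, max_steps: int = 5) -> str:
    context = ""
    for _ in range(max_steps):
        decision = call_llm(
            f"Question: {question}\n"
            f"Context so far: {context}\n"
            f"Available tools: {', '.join(TOOLS)}\n"
            "Reply with either 'TOOL <name> <input>' or 'ANSWER <text>'."
        )
        if decision.startswith("ANSWER"):
            return decision[len("ANSWER"):].strip()
        # Run the chosen tool and feed its output back in as context.
        _, name, tool_input = decision.split(" ", 2)
        context += f"\n{name}: {TOOLS[name](tool_input)}"
    return "No answer within the step budget."
```

Agent frameworks essentially wrap this same loop with more robust output parsing, retries, and a richer tool registry.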

LLMs for Business Insights

LLMs are not tied to chatbot interfaces; they have much broader uses. Businesses collect insane amounts of data every day: call transcripts, meeting notes, and documents. It’s far too much for humans to process, but LLMs can index and categorize it all.

The impact would be tremendous. Imagine asking, “What was the latest on X?” and getting an answer that cites the source documents. With LangChain, that’s possible out of the box: process the docs, create embeddings, and add them to a vector database. Any engineer can do it, as the sketch below shows. And this is just one use case; it’s the beginning of a new era.
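As a rough sketch of that pipeline, here’s what it looked like with the 2023-era LangChain API. Module paths and class names have moved around in later releases; `meeting_notes.txt`, the FAISS store, and the OpenAI models are placeholder choices, and an OpenAI API key is assumed to be configured.

```python
# Document Q&A sketch with 2023-era LangChain. Assumes OPENAI_API_KEY is set
# and faiss-cpu is installed; meeting_notes.txt is a placeholder document.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# 1. Process the docs: load them and split them into chunks.
docs = TextLoader("meeting_notes.txt").load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Create embeddings and add them to a vector DB (FAISS here).
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 3. Ask a question; the chain retrieves relevant chunks and returns the sources.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=index.as_retriever(),
    return_source_documents=True,
)
result = qa({"query": "What was the latest on X?"})
print(result["result"])
for doc in result["source_documents"]:
    print("source:", doc.metadata.get("source"))
```

Swap in your own loaders, embedding model, and vector store as needed; the three steps stay the same: process the docs, embed them, and query the index through a retriever.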

Remember the sci-fi characters accompanied by robot advisors that held all knowledge? It’s real now. Autonomous drones can film your hike. Voice chatbots combine speech-to-text, LLMs, and text-to-speech to answer questions; soon Siri and Alexa will be as smart as ChatGPT, with access to the internet and your personal data. The best minds are working on this.

We’ll soon see AI “assistants” move beyond phones and desktops into the real world. Adoption will be slow, though: voice assistants in cars are still poor, despite smartphones being mainstream for over a decade.

Evolution of AI Interfaces

What’s happening now reminds me of the emergence of graphical user interfaces (GUIs) in the 1970s. Before GUIs, people interacted with computers through the command line. That was faster for experts but harder to learn: you had to memorize commands and the system’s structure.

When GUIs took over in the 80s, even non-techies could use computers. Moving a mouse to click buttons and switch windows worked better than typing commands. Early GUIs didn’t require a mouse, since you could navigate with the keyboard, but most people found that less intuitive. GUIs are what made home computers popular in the 90s.

Before GUIs, human-computer interaction was much harder still. Early interfaces used punch cards; my grandpa had stacks of them. You punched holes that encoded the 1s and 0s of your commands. Output came back on punched cards at first, and later on printed paper.

Later, people interacted through teletype keyboards: you typed commands and got printed output back. Screen output came next, enabling the command line as we know it.


LLMs and the new chatbots are the next phase in the evolution of human-computer interaction. Chatbots are the new GUI. LLMs represent a new abstraction layer, much like the emergence of high-level languages such as C, PHP, Ruby, and Python. This will change everything, but the change will happen gradually, and people and businesses have time to adapt.


Originally published on Medium.com