
What’s wrong with the chat interface?
While chat interfaces with LLMs like ChatGPT are powerful for open-ended queries, their true potential lies in being embedded in everyday apps. This transforms AI from a standalone assistant into an intelligent layer within existing workflows.
A grainy surveillance video plays on an old monitor. A younger officer sits at the terminal – serious, focused, sleeves rolled up. The senior detective hovers behind him, coffee mug in hand, barking orders: “Zoom in. Enhance. Track that plate.”
Instead of clicking with a mouse, the rookie's fingers dance across the keyboard – no menus, no toolbars, just raw command-line precision. A few keystrokes later, the blurry plate sharpens, faces resolve, and the room goes quiet. There’s something magical about that moment. Not just the impossible enhancement, but the myth of total control – that the real pros don’t point and click. They type commands, speak the system’s native tongue, and bend the machine to their will. It’s a myth rooted in truth.
For decades, the most powerful interactions between humans and computers weren’t visual – they were text-based, command-driven, and unforgiving. The mouse was for civilians.
When talking to machines was still science fiction
The romantic idea of talking to machines (and getting a response) is not new. In his short stories from the ‘50s, Isaac Asimov wrote about the supercomputer Multivac, which responds intelligently to human queries in plain language. In that day and age, computers still understood only rigid punch cards. These cards were fed into a card reader, which acted as the main input device for machines such as the IBM System/360.
Computer technology advanced quickly. By the mid-1960s, systems such as the DEC PDP-10 began to introduce teletype terminals, CRT displays, and keyboards. People could actually talk to (or rather write to) computers, but the computers were still far from understanding human language.
Rather, it was the other way around: people had to learn computer commands. But the terminals were interactive, meaning they responded to prompts, even though it would take a real stretch of imagination to call that interaction a “chat”. Yet the dream of chats not with mere computers, but with intelligent machines, persisted in SF. The cult movie 2001: A Space Odyssey (1968), based on Arthur C. Clarke's novel, even envisioned an AI’s independent decision-making and hallucinations. In ‘60s science fiction, supercomputers were powerful, but also unpredictable and scary.
User-friendly but picky
Despite being powerful, the command-line interface had its drawbacks. It required a person to know and memorise the commands. Everything relied on recall rather than recognition: users had to already know what was possible – or constantly refer to documentation – because there were no visual cues (like menus, docks, or tooltips) to guide them.
But innovation never sleeps. In 1968, Douglas Engelbart of the Stanford Research Institute (SRI) demonstrated an early interactive system that included a mouse (his invention), windows, hypertext, text editing, and collaborative tools. However, it was not until 1973 that the first real GUI system – the Xerox Alto – was introduced.
It goes without saying that GUI systems continued to evolve and soon gained far more popularity than CLIs. They were known for being user-friendly, even though those in the know claimed that CLIs are user-friendly too, just picky about their friendships. CLIs continued to rule among computer professionals and enthusiasts, but the majority of everyday computing by the general population was – and still is – performed on the GUI operating systems we all know and love to this day.
The emergence of voice assistants
Speaking to a machine and being understood was not only the subject of SF novels. It was also the subject of decades of research in speech recognition and natural language processing. The emergence of Siri (2010), and later Alexa and Google Assistant, represented a major leap in human-computer interaction. Finally, computers (and mobile phones and other “smart” devices) understood not only commands but also human language. Yet however impressive that understanding may be, voice assistants are still limited in what they can do. They depend on rule-based systems with hardcoded intents and are limited to predefined tasks (“set alarm”, “play music”). In short, their interface is still based more on commands than on an actual understanding of language.
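To make that limitation concrete, here is a deliberately simplified, purely illustrative sketch of how a rule-based intent system behaves – the intents and phrases are hypothetical, not taken from any real assistant. Anything outside the hardcoded list simply fails:

```python
# Hypothetical illustration of a rule-based voice assistant: every supported
# request must match a hardcoded intent; anything else falls through.
INTENTS = {
    "set alarm": lambda slots: f"Alarm set for {slots.get('time', '7:00')}",
    "play music": lambda slots: f"Playing {slots.get('genre', 'your playlist')}",
}

def handle_utterance(text: str) -> str:
    for phrase, action in INTENTS.items():
        if phrase in text.lower():  # crude keyword match, no real understanding
            return action({})
    return "Sorry, I can't help with that."  # everything outside the predefined tasks fails

print(handle_utterance("Please set alarm for tomorrow"))   # matches an intent
print(handle_utterance("Wake me before my first meeting")) # same goal, but no match
```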
The wait to really chat with machines turned out to be shorter than expected. With the recent advent of LLMs like ChatGPT, general-purpose language models have been trained on vast datasets to understand and generate human-like language across countless topics. While voice assistants reset with each interaction, LLMs can maintain conversational context, adapt to user intent, and handle ambiguity with surprising fluidity. Where traditional assistants follow orders, LLMs can engage in dialogue, reason through problems, and even suggest actions you didn’t think to ask for. Undeniably powerful – and yet it sometimes feels as if something is wrong.
So, what’s wrong with the chat interface?
There is nothing wrong with chatting. It’s just that chat, even with a supercomputer or an LLM, is simply not the way most work gets done. While large language models (LLMs) gained traction through chat interfaces and still help a great deal with generating texts, summarising texts, or answering questions, their true potential is only now beginning to unfold – moving beyond the chatbox into deeper, more embedded roles within everyday applications. Chat is still a powerful entry point, especially for exploration and open-ended queries, but it’s by no means the final destination.
Tools like Microsoft Copilot and Google’s “Ask Gemini” exemplify this shift: LLMs are now woven directly into productivity apps, allowing users to generate summaries in Word, automate spreadsheet tasks in Excel, or ask contextual questions within Docs and Gmail. This evolution makes the AI feel less like a separate assistant and more like an intelligent layer built into the tools people already use. Instead of switching contexts or copying and pasting, users can interact with AI directly inside their workflows – making decisions faster, reducing friction, and turning passive software into an active collaborator.
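What does that “intelligent layer” look like from a developer’s point of view? Below is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name and the editor hook are illustrative, not a reference implementation of Copilot or Gemini. The point is the placement: the model call happens inside the user’s existing workflow, and the result lands where the work already lives – no chatbox required.

```python
# A minimal sketch of an "intelligent layer" inside an app rather than a chat
# window: the user clicks a (hypothetical) "Summarise" button and the app calls
# an LLM behind the scenes. Assumes the OpenAI Python SDK (`pip install openai`)
# and an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def summarise_document(document_text: str, max_words: int = 80) -> str:
    """Called from the editor's 'Summarise' action – no chatbox involved."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"Summarise the user's document in at most {max_words} words."},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content

# In the application this would be wired to a button or menu item, e.g.:
# editor.on_action("summarise", lambda: show_panel(summarise_document(editor.text())))
```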
Hey, you! What do you think?
They say knowledge has power only if you pass it on – we hope our blog post gave you valuable insight.
If you're looking to operationalize LLMs beyond chat, contact us to see how we integrate intelligent AI layers directly into your business workflows.