ZDNET Highlights
- The end of Classic UI is near.
- Salesforce, a bellwether, goes straight to agents without a browser UI.
- With AI, viewable UI can be delivered to users "just in time."
Salesforce recently announced "Headless 360," which exposes the Salesforce, AgentForce, and Slack platforms as APIs, MCP, and CLI so that agents can directly access data, workflows, and tasks without the need for a browser user interface (UI).
Also: I created two apps with just my voice and a mouse – are IDEs already obsolete?
Of course, Salesforce is a bellwether. The future of UI is increasingly focused on meeting agents' needs in a way that doesn't require compelling graphics, clickable buttons, or entry points. "We are exiting the UI era," said Michael Grinich, founder of WorkOS, who presented his observations and predictions at the TypeScript AI Demo Day in San Francisco in April.
Disposable interfaces created on demand
UIs are evolving from the fixed, static screens we've seen for decades to "just-in-time" projection layers that appear as simple text boxes, Grinich said. In many cases, people will no longer interact with the UI directly – applications will provide AI output or results through APIs connected to agents. He explained that the interfaces users see will be "disposable – a one-time-use interface that is generated on demand and then discarded. And when you need a new interface, you just create a new one."
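The disposable, generated-on-demand interface Grinich describes can be sketched roughly as follows. This is a hypothetical illustration, not a real product API: `synthesizeUI` stands in for a model call that would turn a user's intent into a one-off UI spec, and all names here are invented for the example.

```typescript
// A one-off UI spec an LLM might synthesize from a user's intent.
type UISpec = { title: string; fields: string[] };

// Stand-in for a model call; a real system would ask an LLM to
// produce the spec from the intent and available context.
function synthesizeUI(intent: string): UISpec {
  return { title: `Form: ${intent}`, fields: ["confirm", "cancel"] };
}

// Generate the interface just in time, render it once, then discard it.
// Nothing is cached: a new request produces a fresh interface.
function renderOnce(intent: string): string {
  const spec = synthesizeUI(intent);
  return (
    `<h1>${spec.title}</h1>` +
    spec.fields.map((f) => `<button>${f}</button>`).join("")
  );
}
```

The key design point is that nothing persists between requests – the "screen" is a throwaway projection of the model's output, regenerated whenever the user's intent changes.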
This opens a new phase of software development – the solutions of today and tomorrow are becoming more self-directed and autonomous. "Software is moving from interfaces that you operate to something that produces results," he said. "The user expresses an intention, a suggestion, an idea; you send that to the model, and the model itself creates the UI and the actions."
Also: How the rise of AI-native software can give SMBs enterprise-level power
In the process, AI is reshaping the human-computer interface – and, ironically, making computing more human-centered. Generative AI, one of the fastest-growing technologies of all time, presents a simple text box that asks, "What do you want?" he explained.
The UI has progressed "from switches to commands, pointers, cursors to touch, and now language," he said. "Because of language models, we've reached the point where UIs are now synthesized. They're created just in time for you, according to your request. They're context-aware. They have immediate information about what you're trying to accomplish and the world around you."
4 Ways to Prepare for Change
This also means a change from the user's perspective. "Here the role of the user has changed from operator – someone who simply uses the software – to collaborator and, ultimately, director of AI agents," Grinich explained.
Grinich offered four pieces of advice for technology professionals to make this transition:
Also: New rules for AI-assisted code in the Linux kernel: What every developer needs to know
- UI is no longer a product. Product capabilities, models and data are brought together. “The UI is really a projection layer of all of that. It’s a way to represent this output,” Grinich said.
- The components still matter. "The UI is no longer assembled by hand; it's no longer lovingly crafted by people," he explained. "You're giving elements to the model, and the model is figuring out what to do with them. It's a very different interaction paradigm from building a UI, because you don't really know what will be shown to the user. You have to provide the right kind of elements in the right context for the LLM (large language model) to make decisions."
- The API becomes the actual surface you’re building on. “UI is no longer a product – it’s an API,” Grinich said. “Agents don’t actually click buttons; they’re more like an API.”
- The model is the interface. "The interface has been turned into an API, a data layer," Grinich said. "The idea is to reduce and reduce and reduce, and try to simplify things for people, so that there is less cognitive overload." Grinich compared this to the ongoing evolution of cars, which have minimized buttons and switches on the dashboard in favor of digital controls and, ultimately, are becoming more autonomous. "You don't really care about driving. You care about getting to your destination."
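The third point – that the API, not the screen, becomes the surface you build on – can be illustrated with a minimal sketch. Everything here (`payInvoice`, the `Invoice` shape) is hypothetical: the idea is simply that the same capability a human reaches through a rendered button is, for an agent, a plain callable endpoint.

```typescript
// A hypothetical capability exposed as a plain function/endpoint.
// A human UI might render this as a "Pay" button; an agent skips
// the button entirely and invokes the surface directly.
type Invoice = { id: string; paid: boolean };

const invoices: Invoice[] = [{ id: "inv-1", paid: false }];

function payInvoice(id: string): Invoice | undefined {
  const invoice = invoices.find((i) => i.id === id);
  if (invoice) invoice.paid = true; // the action, with no UI in between
  return invoice;
}
```

In this framing, any button a human sees is just one disposable projection of the underlying call – agents consume the call itself.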
Y Combinator, the Silicon Valley-based business incubator, gives its startups a classic one-line instruction: "Make something people want," Grinich related. "I might edit it a bit: 'Make something the agents want.' Agents will work for the people. If you want to serve people, you have to serve their agents, too."
