
XME.digital Tests Agentic UI Capabilities

  • Writer: Anatoliy Medvedchuk
  • 1 day ago
  • 5 min read
How Agentic UI Composes Interfaces from DSL and Component Libraries

We built a UI framework, put a visual builder on top of it, and shipped a Generative Customer Portal. Now we're testing whether an AI agent can drive that same framework at runtime, composing interfaces for individual users instead of serving ones a configurator pre-built.


The foundation: interfaces as configuration

A few years back we made a decision that turned out to matter more than we expected at the time. Instead of building customer portal interfaces as code, we built them as configuration: a domain-specific language (DSL) that describes which components appear, in what structure, and under what conditions. Our rendering layer reads that DSL and produces the actual UI.
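To make the idea concrete, here is a hedged sketch of what a DSL-driven page description could look like. The component names ("Page", "PlanList", "Banner"), fields, and condition syntax are invented for illustration; they are not the actual XME DSL.

```python
# Illustrative only: a made-up DSL config for one page. Component names
# and the "condition" field are assumptions, not the real XME DSL.
page_config = {
    "type": "Page",
    "title": "My Tariffs",
    "children": [
        {"type": "PlanList", "source": "tariffs.active"},
        {
            "type": "Banner",
            "text": "Upgrade available",
            "condition": "user.eligible_for_upgrade",  # rendered conditionally
        },
    ],
}

def collect_components(node):
    """Walk the config tree and list every component type it references."""
    types = [node["type"]]
    for child in node.get("children", []):
        types.extend(collect_components(child))
    return types
```

A rendering layer in this model walks the same tree, instantiating the real component for each node.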

For each tenant on the XME Digital Service Platform, we can assemble a completely different portal, with its own dashboards, flows, and logic, without touching application code. The interface is just another thing you configure alongside business rules and data models.


We can let configurators work without developers

Once the framework existed, we built a visual builder on top of it. Someone who knows the product and the customer, but doesn't write code, can open the builder, drag and drop components, define logic, connect data, and publish a new interface. No engineering ticket required for routine changes.

It works well. But it has an obvious ceiling: the interface a user sees is the interface someone decided to build for them. A configurator made choices in advance. Those choices get served to every user in that tenant the same way. The portal is flexible at setup time and fixed at runtime.


What the Generative Customer Portal does today

The Generative Customer Portal is our solution built on top of the XME Digital Service Platform. In its current form, a customer logs in, types a request like "I need roaming in Brazil", and gets a composed interface back: the relevant tariff plans, structured for action.

Before this existed, the customer either navigated the portal themselves, called support, or got a text response from an AI chatbot. That last option was an improvement, but text is a limited medium for something the customer needs to act on. We made a bet that a structured UI is easier to work with than a paragraph of information, and that bet has held up.

What the portal serves today is still configuration-driven, though. The agent identifies the right pre-built page for the request and surfaces it. That page was designed by a configurator in advance, and it looks the same for everyone.
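In this mode the agent's job is closer to retrieval than composition: map a request to an existing page. A toy sketch, where the intent labels and page identifiers are invented and not real platform values:

```python
# Hypothetical intent-to-page routing for the configuration-driven mode.
# The intents and page ids below are illustrative assumptions.
PAGES = {
    "roaming": "page_roaming_plans",
    "billing": "page_invoices",
}

def route_request(intent):
    """Surface the pre-built page for a recognized intent, else a fallback."""
    return PAGES.get(intent, "page_fallback")
```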


Where we're going: the agent drives the framework

The question we're now testing is whether an AI agent can replace the configurator at runtime.

The idea is this: a user logs in, takes an action or enters a request, and instead of a pre-built page loading, a backend agent reads the context, selects components from the tenant's library, composes a DSL configuration, and our rendering layer turns that into a live interface. The user sees something built for their specific situation, not a page that was designed for a generalized version of it.
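A minimal sketch of that loop, assuming a hypothetical agent callable and renderer; none of these function or field names come from the platform:

```python
# Sketch of the runtime flow: read context, ask the agent for a DSL config
# limited to the tenant's library, hand the result to the rendering layer.
# All names here are assumptions for illustration.
def serve_request(user_context, tenant_library, agent, renderer):
    prompt = {
        "context": user_context,               # who the user is, what they did
        "components": sorted(tenant_library),  # the bounded vocabulary
    }
    dsl_config = agent(prompt)   # the agent composes a config, not code
    return renderer(dsl_config)  # the rendering layer turns DSL into live UI
```

The important design choice is that the agent's output is a configuration in the same DSL a human configurator would have produced, so the existing rendering layer needs no changes.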

A button click, in this model, is a structured prompt. So is a text input. Both feed the agent, which decides what to render next. The flow becomes genuinely adaptive: an interface that follows the user rather than asking the user to find their way.
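One way to picture "a button click is a structured prompt": normalize every UI event into the same shape before it reaches the agent. The event kinds and field names below are assumptions for illustration:

```python
# Hypothetical event normalization: clicks and free text both become
# structured prompts for the agent. Field names are illustrative.
def event_to_prompt(event):
    if event["kind"] == "click":
        return {"intent": event["action"], "params": event.get("params", {})}
    if event["kind"] == "text":
        return {"intent": "free_text", "params": {"query": event["value"]}}
    raise ValueError(f"unsupported event kind: {event['kind']!r}")
```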

Why the agent composes components

This is the part I want to be clear about, because it's where a lot of generative UI approaches go wrong.

Our agent won't generate JavaScript. It won't write HTML. It will work within the DSL and the component library defined for a given tenant. For a telecom self-service domain, that library might hold 50 to 100 components: atomic elements, composite blocks, comparison tables, and action flows, each written and tested by engineers.

The agent's job is composition within that bounded set. The constraint is intentional: an agent composing from a defined, tested library has a far narrower failure surface than one generating arbitrary interface code. The DSL specification goes into the system prompt, and the component library is the vocabulary the agent is allowed to use. We think this is how you keep hallucination manageable: by limiting what the agent can produce.
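A hedged sketch of what that guardrail could look like: reject any composed config that references a component outside the tenant's tested library. The DSL shape and function name are assumptions:

```python
# Illustrative validation pass over an agent-composed config. Any component
# type outside the allowed set is reported instead of being rendered.
def unknown_components(node, allowed):
    """Return component types in the config tree that are not in `allowed`."""
    bad = [] if node["type"] in allowed else [node["type"]]
    for child in node.get("children", []):
        bad.extend(unknown_components(child, allowed))
    return bad
```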

Before and after: what changes with the agentic layer

How the two modes compare across the dimensions that matter most to us right now.

| Dimension | Configuration-Driven Portal | Agentic Portal |
| --- | --- | --- |
| Interface logic | Pre-configured per tenant by a developer or configurator. Same for all users within a tenant. | Composed in real time by an AI agent, based on what the specific user is doing and asking. |
| Configurator role | Defines all screens and flows in advance via the visual builder. | Sets up the component library and tenant configuration; the agent handles runtime composition. |
| Edge cases | Limited to pre-built paths. Unusual requests escalate or require new configuration work. | Agent works through non-standard scenarios on the fly, within the DSL and component constraints. |
| User experience | Navigation-driven: users browse menus or type queries that map to predefined pages. | Intent-driven: the portal assembles the right interface around what the user is actually trying to do. |
| Output format | Static UI from stored configuration; same page structure for all users in a tenant. | Dynamic UI generated per interaction; the interface evolves with each user action or input. |
| Support load | Non-standard requests frequently reach live agents. | Agent covers a wider range of scenarios autonomously; fewer escalations expected. |
| Compute cost | No LLM inference at runtime. | LLM inference per interaction; a tiered model architecture is planned to control cost. |
| Maturity | Production-ready. Running across multiple tenants. | Hypothesis under active R&D. First demos in progress. |

What we still need to discover 

Our team is currently evaluating backend agentic frameworks, designing the agent loop, and working through the prompting architecture. The first thing we're looking for in early demos is coherence: does the agent compose interfaces that make sense for the context, without needing constant correction?

Token cost is the other open question. LLM inference isn't free, and we're not going to hand that cost off to operators without understanding it. Our working assumption is a tiered model setup — heavier models for complex reasoning, lighter ones for execution.
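A sketch of the tiered idea, assuming an invented complexity heuristic and model names; the real routing policy is still being designed:

```python
# Hypothetical model-tier routing: heavier model for complex reasoning,
# lighter model for routine composition. Names and signals are assumptions.
def pick_model(task):
    complex_signals = (
        "multi_step" in task.get("flags", []),  # e.g. multi-screen flows
        len(task.get("history", [])) > 5,       # long interaction history
    )
    return "heavy-reasoning-model" if any(complex_signals) else "light-exec-model"
```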


Where this goes

The shift we're testing is from pre-composed to reasoned: an interface that works out what the user needs next and builds it.

For businesses, that means a customer portal that handles a wider range of scenarios without configuration work or live-agent escalation. The menu-based portal stays, since not everyone wants to interact through natural language or adaptive flows, but it coexists with a layer that can reason.

We'll publish what we find from the proof of concept as it develops. If the first demos confirm the hypothesis technically, the next questions are about scale, cost, and where this creates the most value by domain. 

 
 
 
