AI set for post-transformer shift to memory-led tools
Artificial intelligence vendors are predicting a shift away from today's general-purpose large language models towards architectures that place memory and domain-specific workflows at the centre of enterprise deployments.
Executives at Pathway and Alteryx expect 2026 to mark a turning point both in the technical foundations of AI models and in how businesses seek to apply them to day-to-day operations.
Post-transformer focus
Zuzanna Stamirowska, CEO and co-founder of Pathway, said current transformer-based models face structural constraints that will push the industry to explore alternative designs over the next year.
"2026 will be the first year of the post-transformer era. Transformer-based models are hitting fundamental mathematical limits that can no longer be ignored. Today's models are mathematically deprived of memory and time concepts. They always wake up in the same state, as they're incapable of learning continuously or adapting. This limits their usefulness in enterprise settings where context and accuracy are crucial to sophisticated work processes. This is why we expect to see memory emerging as the #1 investment vector in the months ahead. The problem with today's models is fundamental; it isn't scale, it's the underlying math," said Stamirowska.
Her comments reflect a growing debate in the AI research community and among vendors about how far transformer architectures can be pushed, despite recent gains from larger training runs and improved fine-tuning techniques.
Memory-led architectures
Stamirowska expects that attention will move toward architectures that can maintain richer context over time and adapt to continuously changing data in enterprise environments.
"That's why the coming months will see even more momentum for alternatives to the transformer architecture. Architectures and frontier models that address the fundamental memory issue in transformers will gain traction as new AI use cases are unlocked with models that boast continuous learning, infinite context reasoning, and real-time adaptation - all while demanding significantly less compute," said Stamirowska.
"Providing models with intrinsic, contextualised memory offers a quicker route to AGI. The enterprise adoption of models built using post-transformer architecture is a key trend to watch in 2026," said Stamirowska.
Vendors and researchers have been experimenting with approaches such as retrieval-augmented generation, external memory modules and hybrid systems that combine symbolic methods with neural networks, seeking to address some of the context and state limitations associated with traditional transformer models.
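As an illustration of the retrieval-augmented approach mentioned above, the sketch below shows the core pattern: rank stored documents against a query, then prepend the best matches to the prompt before calling a language model. This is a minimal, self-contained toy using bag-of-words cosine similarity; the corpus, query and prompt format are illustrative assumptions, and a production system would use a vector index and learned embeddings instead.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = Counter(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

# Illustrative corpus: in practice this is an indexed document store.
corpus = [
    "Invoice INV-104 was approved by finance on 3 March.",
    "The support team escalated ticket 881 to engineering.",
    "Supplier payment terms are net 30 days from invoice date.",
]

context = retrieve("when is the invoice payment due", corpus)
prompt = ("Context:\n" + "\n".join(context) +
          "\n\nQuestion: when is the invoice payment due?")
# The assembled prompt is then sent to the language model,
# supplying the external "memory" the base model lacks.
```

The retrieval step, not the model, carries the state here, which is why such systems are often described as bolting memory onto an otherwise stateless transformer.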
Targeted automation
Alongside architectural shifts, Alteryx CEO Andy MacMillan expects enterprises to narrow their ambitions for AI deployment and prioritise operational use cases that can be anchored in specific workflows.
"The push to build massive, all-knowing agents capable of handling a wide sweeping set of enterprise tasks isn't working in its current form. In 2026, those projects will start to slow down. Instead, we'll see a renewed focus on automation with purpose rather than AI for AI's sake. The spotlight will shift to operational use cases such as finance workflows or customer support, augmented with targeted, domain-specific agents. Rather than try to build all-knowing bots, companies will focus on embedding AI capabilities into existing, proven workflows to drive meaningful impact," said MacMillan.
Many large organisations have spent the past two years piloting broad, generalist AI assistants. However, concerns around accuracy, governance, integration complexity and return on investment have led some firms to redirect budgets to narrower tools tailored to specific roles, data sets and compliance requirements.
Enterprise priorities
The two sets of predictions share a common theme: enterprises are placing a higher premium on context, memory and integration with existing systems than on general versatility.
Vendors building so-called post-transformer models are likely to position them around continuous learning on live data streams and closer coupling with operational databases, while companies pursuing domain-specific agents are expected to focus on defined tasks such as financial reconciliation, invoice processing, customer query handling or supply chain monitoring.
While there is little consensus on which technical approach will dominate, suppliers expect AI investments in 2026 to be scrutinised more closely for business impact, data handling and computational efficiency than in earlier phases of experimentation.