Designing for Meaning in an AI-Driven Stack
As AI systems increasingly reason across products and domains, harmonised cross-product context is shifting from a downstream data concern to a design-time architectural consideration. Much of the current focus is on rapid delivery with AI: integrating APIs, prototyping assistants, accelerating feature velocity, compressing the path from concept to product.
That matters, but something more structural is happening alongside this.
As industry leaders have repeatedly noted, there is no AI strategy without a data strategy (a line often attributed to Frank Slootman, former CEO of Snowflake).
What’s changing is not only the tooling we use; previously separate streams are converging at design level.
Before:
- Software is optimised for feature delivery and localised product context.
- Data is optimised for structure, shared meaning and harmonised cross-product context.
AI operates across both.
- It reasons over relationships.
- It surfaces inconsistencies.
- It spans domain boundaries.
This means that semantic clarity (entity definitions, lifecycle modelling, domain boundaries) is no longer a downstream refinement. It becomes an operational concern.
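As an illustrative sketch of what an explicit entity definition with lifecycle modelling can look like (all names, fields and stages here are hypothetical, not drawn from any particular system):

```python
from dataclasses import dataclass
from enum import Enum

class CustomerLifecycle(Enum):
    """Explicit lifecycle stages, agreed once and shared across products."""
    PROSPECT = "prospect"
    ACTIVE = "active"
    LAPSED = "lapsed"

@dataclass(frozen=True)
class Customer:
    """Canonical customer entity: one shared definition, not one per product."""
    customer_id: str
    lifecycle: CustomerLifecycle

def from_crm(record: dict) -> Customer:
    """Each product maps its local fields into the canonical shape."""
    stage = (CustomerLifecycle.PROSPECT
             if record["stage"] == "prospect"
             else CustomerLifecycle.ACTIVE)
    return Customer(customer_id=str(record["id"]), lifecycle=stage)
```

The point is not the specific types but that the definition and lifecycle are stated once, explicitly, where every product (and every AI reasoning over them) can rely on the same meaning.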

Without harmonised cross-product context, locally valid semantics can diverge when interpreted at system scale.
Code is becoming easier to generate.
Meaning is not.
Leveraging AI for behavioural delivery at pace also increases the long-term cost of unresolved context.
That’s where I believe the next structural shift is emerging.
I don’t think the future stack is data instead of software; it’s data-centric by necessity. AI doesn’t remove the need for data-centric discipline and modelling; it makes their absence visible.
This is where I live, and where I am most interested in working: the convergence point where delivery, domain meaning and data architecture intersect.
Optimising for Feature Delivery vs Optimising for Meaning
Software (and increasingly product) delivery evolved around:
- Defined sprint cadence
- Scoped functions
- Deployable increments
Data (and data product) delivery evolved around:
- Shared definitions
- Canonical entities
- Lifecycle thinking
- Cross-functional consistency
Neither approach is better; they were solving different problems.
AI spans both; it does not sit neatly inside a function, feature or product.

The consideration now is design awareness. As AI increasingly reasons over, and has access to, both code and data, product design decisions benefit from considering how localised context will interact with broader cross-product context. This doesn’t mean every product needs to solve cross-domain semantics up front. It does mean that ignoring this interaction entirely becomes a conscious trade-off rather than an accidental one.
Semantic Weakness Is Amplified at Organisational Scale
This principle applies across functions, services, products and sub-products. From an enterprise data perspective, AI is not fragile in the way traditional systems are; it is sensitive in a different way.
If “customer” means:
- A prospect in one system
- A contract holder in another
- A debtor in another
- A location in spatial data
- A session in a product log
then this is no longer just a reporting issue.
It introduces:
- Prompt complexity
- Embedded inconsistency
- Increased token cost
- Reduced trust in outputs
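To make the risk concrete, here is a minimal sketch (the system names, records and definitions are invented for illustration) of one question, “how many customers do we have?”, answered against two systems that each hold a locally valid but different definition:

```python
# Hypothetical records from two systems; names and fields are illustrative.
crm = [
    {"id": 1, "stage": "prospect"},
    {"id": 2, "stage": "prospect"},
    {"id": 3, "stage": "signed"},
]
billing = [
    {"id": 3, "contract_holder": True},
    {"id": 4, "contract_holder": True},
]

def customers_in_crm(records):
    # CRM definition: anyone in the pipeline, prospects included.
    return len(records)

def customers_in_billing(records):
    # Billing definition: contract holders only.
    return sum(1 for r in records if r["contract_holder"])

answers = {
    "crm": customers_in_crm(crm),
    "billing": customers_in_billing(billing),
}
# Each answer is locally valid, yet they disagree; an AI reasoning across
# both systems must either reconcile the definitions or surface the conflict.
print(answers)  # {'crm': 3, 'billing': 2}
```

Every prompt that touches both systems now has to carry the reconciliation logic, which is exactly where the prompt complexity, embedded inconsistency and token cost above come from.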

Suddenly, “just ship it” introduces risk at scale.
These risks are not new. They have always existed where systems interact across domains.
What is changing now is that unresolved semantic differences surface more quickly, and at larger scale. Semantic clarity becomes an architectural and operational concern as well as a data concern.
Start-ups vs Mature Organisations
Early-stage startups often optimise for speed, validation and runway; that makes sense.
More mature organisations already operate with cross-product, cross-system context; they optimise for consistency, scale and cross-domain integrity.
The timeline between the start-up and sustained-growth stages is being compressed by AI. You can delay semantic clarity for a while, but the more cross-domain reasoning you expect, the sooner that clarity becomes necessary. This isn’t about slowing down; it’s about understanding when and why you put your foot on the accelerator.
AI does not create semantic complexity. It reveals it, and increasingly requires us to design for it.
