Ponderings of a PM
2026-02-25
I've been reading about how AI is reshaping enterprise software. Two arguments, slightly in tension:
First: AI disrupts traditional moats. Yes, agents will automate manual work (moving money, filing reports, initiating trades). But the argument goes further: traditional moats that kept enterprise incumbents safe — high switching costs, data lock-in, complex UIs that take months to learn — are all weakening.
BVP's "System of Action" thesis isn't that you stop needing a system of record; it's that switching to a "better" or "new" one becomes dramatically cheaper. Companies are already automating SoR migration and integration end-to-end (AI-enabled services platforms, agents that write vendor-specific integration scripts), turning what used to be a 12-month system integrator project into a fraction of that. The moat was never the data or the logic — it was the pain of moving. When that pain drops, incumbents that relied on being hard to leave (rather than being good) are exposed, provided the migration can be executed with guaranteed correctness.
Second: Not all software is equally disruptable. There's a split between probabilistic systems (where approximation is fine: content generation, basic support) and deterministic systems (where the outcome is binary and errors cascade). You can't approximately process payroll. You can't mostly-correctly track a covenant. AI can process, capture, and even generate this data — that's not the issue. The issue is that someone has to guarantee it's right, enforce the rules around it, and be accountable when it's wrong. That's a system with governance, not "a model with access." These systems become more important the more you automate the stack around them. The more you automate decisions and execution, the more the underlying data and rules have to be bulletproof.
Both are probably right — but for different domains. The "switching becomes cheap" thesis will likely play out fastest where AI can passively capture data from existing workflows and where the sources are easily accessible: CRM data from sales emails, support tickets from chat, public web data.
Treasury is different on both fronts. First, the accuracy bar: an AI can read a loan agreement, but you need a guarantee that it extracted the right covenant threshold, not 95% confidence. Second, the access problem: treasury runs on live data feeds — bank API integrations, SWIFT connections, host-to-host setups. That's not something an AI agent can replicate by scraping a website or taking a snapshot of Kyriba's database and then ditching Kyriba. Unless the agent can take a physical bank dongle and log into your portal each morning, it needs real infrastructure underneath.
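To make the accuracy-bar point concrete, here's a minimal sketch of what "guarantee, don't trust" could look like around an extraction model. Everything here is hypothetical (the field names, the verification rule are invented for illustration): an extracted covenant value is accepted only if its claimed source text appears verbatim in the agreement and contains the value; anything else is routed to human review rather than written to the system of record.

```python
from dataclasses import dataclass

@dataclass
class Extraction:
    field: str
    value: str
    source_span: str  # the text the model claims it read the value from

def verify_covenant(extraction: Extraction, document_text: str) -> bool:
    """Deterministic check on a probabilistic extraction: the claimed span
    must appear verbatim in the agreement, and the value must appear in
    that span. A failed check means human review, not a confident write."""
    return (
        extraction.source_span in document_text
        and extraction.value in extraction.source_span
    )

doc = "... the Borrower shall maintain a Net Leverage Ratio not to exceed 3.5x ..."
good = Extraction("net_leverage_max", "3.5x",
                  "Net Leverage Ratio not to exceed 3.5x")
bad = Extraction("net_leverage_max", "4.0x",
                 "Net Leverage Ratio not to exceed 4.0x")  # hallucinated span

print(verify_covenant(good, doc))  # True  -> safe to record
print(verify_covenant(bad, doc))   # False -> route to a reviewer
```

The check itself is trivial; the point is structural. The model proposes, a deterministic layer disposes, and the system — not the model — is what's accountable.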
The obvious implication is that generic "AI-powered insights" on top of a TMS or database is a feature, not a company. If the only value you add is "we put AI over your data," that's something an incumbent can ship as a feature update, or that people can already approximate themselves with ChatGPT. Yes, incumbents might carry architectural debt, but they already have the underlying data.
The question I keep coming back to: what kind of intelligence is not generic? I don't know yet, but I think it requires three things: exact data that must be right every time (instruments, policies, covenants, transactions); sophisticated domain- and customer-specific prediction and recommendations (e.g., our forecasting architecture, not just generic "LLM on bank data"); and institutional knowledge that compounds per customer over time, enabling hyper-customization in SaaS.
Worth reading: https://b.capital/insights/where-ai-value-will-be-built-next/ — the model isn't the moat; the value is in the execution environment around it (workflow integration, production learning loops, organizational context).
Which means the near-term strategic question might be simpler: how do we collect AND generate as much high-value data as possible, as fast as possible? Not just collecting raw data — we need to be enhancing it. We already do this: we take bank statements and turn them into categorized transactions, and we turn transaction history into bottom-up cash forecasts. Each transformation makes the data more valuable and "intelligent" than what TMS systems currently hold. On top of that, we need to start accumulating data that doesn't exist anywhere else: policy rules, covenant thresholds, investment strategy ideas, and compounding institutional knowledge from every customer interaction (corrections, learned patterns, decision history). If switching becomes cheap in 2-3 years, the defensible position might belong to whoever built the deepest layer of enhanced, customer-specific knowledge before that happens.
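A toy sketch of that enhancement chain: each step turns raw data into something more structured than what a plain statement feed holds. The rules and field names here are invented for illustration; in practice categorization would be learned per customer, with human corrections feeding back into the rules.

```python
# raw bank statement line -> categorized transaction -> forecast line item

RULES = {  # toy keyword rules; real ones would be learned per customer
    "PAYROLL": "payroll",
    "AWS": "cloud_infrastructure",
}

def categorize(description: str) -> str:
    for keyword, category in RULES.items():
        if keyword in description.upper():
            return category
    return "uncategorized"  # a candidate for a human correction we keep

statements = [
    {"desc": "ACH PAYROLL RUN 0225", "amount": -182_000},
    {"desc": "AWS EMEA SARL", "amount": -9_400},
]

# Step 1: enrich raw lines with a category.
enriched = [{**s, "category": categorize(s["desc"])} for s in statements]

# Step 2: recurring categorized outflows become bottom-up forecast inputs.
forecast = {e["category"]: e["amount"]
            for e in enriched if e["category"] != "uncategorized"}
print(forecast)  # {'payroll': -182000, 'cloud_infrastructure': -9400}
```

Each layer (category, recurrence, forecast) is data that exists nowhere upstream, which is exactly the kind of enhancement that's hard to migrate away from.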