Why Most Fabric Projects Stall by Month Three (And What Reinsurance Teams Can Do About It)

  • 4 days ago
  • 6 min read

Based on a fireside chat with Paul McLeod, 20+ year veteran in reinsurance business intelligence and Microsoft Fabric advisory.



It starts the same way every time.


Someone on the technology side drops an Excel file into Fabric, builds a Power BI report on top, maybe even runs Copilot to generate a summary. Leadership sees it and thinks: this is simple, let's go.


Three months later, the project is stuck. Budgets are climbing. The team is rewriting work they already did. And the CFO is asking a question nobody can answer clearly: what does Fabric actually cost us?


Paul McLeod has seen this pattern repeat across reinsurance organizations in Bermuda and beyond. In a recent conversation, he walked through the real risks, the regulatory pressure points, and the practical steps that separate successful Fabric adoptions from expensive false starts.


The Cost Question Nobody Can Answer Cleanly


When a CFO asks for the cost of Fabric, the honest answer is uncomfortable: it depends, and it's genuinely hard to pin down.


There are two cost factors at play. First, the licensing. Fabric subscriptions range from $500/month at the lowest tier to $40,000 or $50,000/month at the top. And the jumps between tiers aren't incremental. Moving up can mean a 25% to 50% increase, not the 5% to 10% most teams expect.
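The tier-jump arithmetic is worth making concrete. A minimal sketch, using hypothetical placeholder prices (not actual Fabric SKU rates), of what stepping up a capacity tier does to a monthly budget:

```python
# Sketch: cost impact of moving up a capacity tier.
# Prices are hypothetical placeholders, NOT real Fabric SKU rates --
# check current Microsoft pricing before budgeting.
tiers = [500, 700, 1_000, 1_400, 2_000]  # illustrative monthly cost per tier

def jump_pct(tiers):
    """Percent increase when moving from each tier to the next one up."""
    return [round(100 * (b - a) / a) for a, b in zip(tiers, tiers[1:])]

print(jump_pct(tiers))  # [40, 43, 40, 43] -- each step is a 40%+ jump, not 5-10%
```

Whatever the real rates are when you price it out, the point stands: budget for step changes, not smooth increments.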


Second, and often larger, is the human cost. Experienced Fabric professionals are rare and expensive. Internal teams learning on the job will hit walls repeatedly. As Paul put it, there's an enormous amount to learn, and the stops and starts add up fast, in both billable hours and delayed timelines.


The root problem is that most organizations going into Fabric have no prior experience with the platform. They can't accurately estimate subscription needs because they don't yet know how much processing they'll require, how many concurrent users will access the system, or how frequently data will need to be reprocessed.


Why Reinsurance Data Is Different


Outside consultants frequently underestimate what makes reinsurance data uniquely complex. A reinsurance company is, at its core, a finance company. It insures other insurance companies, which means the entire business runs on financial flows, estimates, and constant revaluation.


Every contract requires monthly or quarterly reassessment. The finance team is perpetually comparing what they projected against what actually happened, then re-forecasting forward. Income is based on what they think the insurance company's income will be. Losses are based on what they think claims will look like. And both are shifting constantly.


This isn't a reporting cycle where numbers stabilize. It's a continuous recalculation engine. Any platform implementation that doesn't account for this level of financial variability is going to break under pressure.


The Regulator Question You Don't Want to Fumble


For teams reporting to the BMA (Bermuda Monetary Authority) or CIMA (Cayman Islands Monetary Authority), the stakes around data governance aren't theoretical. These regulators are monitoring solvency, tracking performance against capital models, and comparing organizations to their peers.


If a regulator notices a variance in your submissions and your team's answer is "we had data quality issues" or "our reporting platform produced errors," you've opened a serious problem. At that point, every prior submission comes into question. The organization may need to restate financials and faces intense scrutiny going forward.


The ideal response? "We have a fully governed environment that passes audits consistently. Our data is reliable. The variance reflects a legitimate business decision."


That answer requires confidence in your data stack, and that confidence has to be built before the question is ever asked.


The Month-Three Wall


Paul calls it the enthusiasm trap. Managers hear about Fabric everywhere and want the capabilities it promises. Technology teams get excited because the tooling genuinely is impressive: machine learning, PySpark notebooks, integrated analytics. Both sides agree to move forward.


The first few weeks feel productive. Quick wins come fast. Then the team hits the complex problems: building proper governance, automating data pipelines at enterprise scale, ensuring auditability.


They realize that the early work needs to be redone. Complexity compounds. And the project begins to stall.


The pattern is predictable because the root cause is consistent: teams underestimate the gap between a Fabric demo and an enterprise-grade, audit-ready solution.


AI That Passes the Audit vs. AI That Demos Well


With agentic AI on the horizon, the conversation is shifting from "should we use AI?" to "how do we govern AI when it starts making decisions?" And for reinsurance, the distinction between impressive demos and production-ready AI is critical.


Consider something as fundamental as premium.


A single reinsurance organization might use 30 or 40 different premium classifications: earned premium, written premium, bound premium, estimated premium, premium by line of business.


If an AI tool is asked to forecast premium without carefully curated data, it might double-count gross and net premium, combine figures that are subsets of one another, and produce a confident, well-formatted answer that is completely wrong.
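The double-counting failure is easy to demonstrate. A minimal sketch, using an invented contract record (the field names are illustrative, not a real schema): net premium is derived from gross, so any aggregation that sums every premium-named field counts the same money twice.

```python
# Sketch: why naively summing premium classifications double-counts.
# Field names and figures are illustrative, not a real reinsurance schema.
contract = {
    "gross_written_premium": 1_000_000,
    "ceding_commission": 150_000,
    "net_written_premium": 850_000,  # derived: gross minus commission
}

# An ungoverned aggregation treats every "*premium*" field as additive:
naive_total = sum(v for k, v in contract.items() if "premium" in k)
print(naive_total)  # 1_850_000 -- confidently formatted, completely wrong

# A governed model knows net is a subset of gross and sums one basis only:
governed_total = contract["gross_written_premium"]
print(governed_total)  # 1_000_000
```

The wrong answer here is off by 85%, and nothing about its formatting would warn a reader.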


AI that passes the audit does three things differently.


  • First, it operates on curated, governed data where definitions are clear and consistent.

  • Second, it provides auditable detail showing exactly what queries it ran, what data it used, and what assumptions it made.

  • Third, it gives finance teams the ability to validate results before they go anywhere near a regulatory submission.


The boards of reinsurance companies need to be thinking about this now. Not because agentic AI is fully mature in this industry yet, but because the governance frameworks need to be in place before the technology arrives.


As Paul noted, a year ago almost nobody was using ChatGPT for daily work. Today, almost everyone is. Agentic AI will follow the same adoption curve, but with far higher stakes.


Breaking the Quarterly Fire Drill


Every reinsurance finance team knows the quarterly reporting scramble. Despite investments in new technology, the fire drill persists.


The cause is structural.


Underwriting teams make mid-quarter decisions (reallocating contracts between divisions, for example) that create financial reporting distortions.


The finance team catches these at quarter-end and makes manual adjustments to ensure submissions accurately reflect business activity.


Under time pressure, those adjustments get made outside the governed data environment: in spreadsheets, in side calculations, in workarounds that don't flow back into the warehouse.


Next quarter, the same adjustments have to be made again.


Over time, these manual processes compound into a tangle of nested spreadsheets that give auditors headaches and create real governance risk.


The fix requires a team willing to look beyond the current quarter and invest in pushing corrections back into the governed data estate, automating recurring adjustments, and selecting tools that track modifications in an auditable way.
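What "pushing corrections back into the governed data estate" can look like in practice: a minimal sketch, with hypothetical structures (not a real Fabric API), of an append-only adjustment ledger that replays automatically each quarter instead of being rebuilt in a spreadsheet.

```python
# Sketch: recording quarter-end adjustments inside the governed store,
# so they replay automatically next quarter and leave an audit trail.
# All names and structures here are hypothetical, not a real Fabric API.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Adjustment:
    contract_id: str
    field: str
    delta: float
    reason: str      # why the adjustment exists -- what an auditor reads
    author: str
    recorded: date

def apply_adjustments(figures: dict, ledger: list[Adjustment]) -> dict:
    """Replay every recorded adjustment against the raw figures."""
    adjusted = dict(figures)
    for adj in ledger:
        adjusted[adj.field] = adjusted.get(adj.field, 0.0) + adj.delta
    return adjusted

# Append-only ledger: entries are added, never edited in place.
ledger = [
    Adjustment("C-001", "written_premium", -25_000.0,
               "contract reallocated between divisions mid-quarter",
               "finance-team", date(2025, 3, 31)),
]

raw = {"written_premium": 1_000_000.0}
print(apply_adjustments(raw, ledger))  # {'written_premium': 975000.0}
```

The design choice that matters is the append-only ledger with a `reason` on every entry: next quarter the same correction replays without manual work, and an auditor can see exactly who changed what, when, and why.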


Technology can solve this problem, but only if the organization treats it as a process improvement initiative, not just a platform migration.


Making the Go/No-Go Decision With Eyes Wide Open


For any organization evaluating Fabric, Paul recommends a structured framework built around four questions:


  1. What do we expect to get? If Fabric simply replicates what you already have, the investment may not be justified. Be specific about where Fabric adds value beyond your current capabilities.

  2. What does the end state look like? Build a clear vision of your reporting environment in Fabric, then measure the gap between that vision and where you are today.

  3. What legacy issues do we need to fix along the way? Moving to a new platform is an opportunity to resolve existing data quality and governance problems. Simply migrating current issues to Fabric is not an upgrade.

  4. How will we manage implementation risk? The right automation tools can dramatically reduce complexity, governance gaps, and time to production. Choosing how to build matters as much as choosing what to build.


That evaluation process takes focus and discipline. But it's far less expensive than discovering the answers three months into a stalled implementation.


Ready to Make That Decision With Confidence?


The Bespoke Fabric Readiness Sprint is a focused, 2-week engagement that delivers exactly what your leadership team needs: a business case, a risk register, an architecture recommendation, and a 90-day implementation plan.


No guesswork.

No month-three surprises.

Just a clear-eyed assessment of what Fabric will take for your organization.



Bespoke Analytics specializes in reinsurance reporting, data governance, and governed AI for Bermuda and Cayman-based organizations.


Reach Paul McLeod on LinkedIn or visit bespoke.bm
