Legacy Core Banking Migration War Stories: Real Timelines & Problems [2026] - CoreFi
CoreFi · 12 min read
The vendor says 9 months. The consultant says 12-18. The reality? Most core banking migrations take 18-36 months, cost 2-3x the original budget, and involve at least one moment where leadership seriously considers canceling the whole project.
This article tells the stories nobody puts in their case studies: the ugly parts, the surprises, the near-failures, and the lessons that only come from living through a migration.
Why Migrations Fail (The Real Reasons)
Before the war stories, let's name the actual killers. Core banking migration projects don't fail because of technology. They fail because of:
1. Data Quality Discovery
Every institution believes their data is clean until they try to migrate it. Then they discover:
- Customer records with addresses from 1997 that were never updated
- Duplicate accounts created by branch staff who couldn't find existing records
- Transaction histories with missing reference numbers
- Product codes that no one can map to the new system because the person who set them up retired in 2015
- Accounts in "intermediate states" that shouldn't exist but do
Data cleaning typically consumes 30-40% of total migration effort. No vendor estimate accounts for this because they've never seen your data.
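The anomalies listed above are exactly what a pre-migration data audit should surface. As a minimal sketch (the record shape, field names, and staleness threshold are all invented for illustration), a profiling pass might count stale addresses, likely duplicates, and missing references before anyone commits to an estimate:

```python
from collections import Counter
from datetime import date

# Hypothetical record shape; field names are invented, not any real schema.
records = [
    {"id": 1, "name": "A. Smith", "address_updated": date(1997, 3, 1), "ref": "TX-001"},
    {"id": 2, "name": "A. Smith", "address_updated": date(2024, 5, 9), "ref": "TX-002"},
    {"id": 3, "name": "B. Jones", "address_updated": date(2023, 1, 2), "ref": None},
]

def profile(records, stale_before=date(2005, 1, 1)):
    """Count common data-quality anomalies before estimating migration effort."""
    name_counts = Counter(r["name"] for r in records)
    return {
        "stale_addresses": sum(1 for r in records if r["address_updated"] < stale_before),
        "possible_duplicates": sum(c - 1 for c in name_counts.values() if c > 1),
        "missing_refs": sum(1 for r in records if not r["ref"]),
    }

report = profile(records)
print(report)  # {'stale_addresses': 1, 'possible_duplicates': 1, 'missing_refs': 1}
```

Running a pass like this on the real extract, table by table, is what turns "our data is clean" into a defensible effort estimate.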
2. Undocumented Business Logic
Legacy systems accumulate decades of business rules that exist only in code. No documentation. No specification. Just COBOL (or Java, or RPG) that implements complex interest calculations, fee structures, and exception handling that the business depends on but nobody fully understands.
When you migrate, you have three options:
- Reverse-engineer everything (expensive, slow, often incomplete)
- Replicate behavior through testing (run old and new in parallel, compare outputs)
- Accept that some edge cases will break (politically difficult, but sometimes the only realistic path)
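The second option, replicating behavior through testing, boils down to feeding the same inputs through both systems and diffing the outputs per account. A minimal comparison harness might look like this (account IDs, values, and the zero tolerance are illustrative assumptions):

```python
def compare_outputs(legacy_results, new_results, tolerance=0.0):
    """Compare per-account outputs from the old and new systems.

    Returns a list of (account_id, legacy_value, new_value) discrepancies;
    a new_value of None means the account is missing from the new system.
    """
    discrepancies = []
    for account_id, legacy_value in legacy_results.items():
        new_value = new_results.get(account_id)
        if new_value is None:
            discrepancies.append((account_id, legacy_value, None))
        elif abs(legacy_value - new_value) > tolerance:
            discrepancies.append((account_id, legacy_value, new_value))
    return discrepancies

legacy = {"ACC-1": 102.50, "ACC-2": 99.99, "ACC-3": 15.00}
new    = {"ACC-1": 102.50, "ACC-2": 100.01, "ACC-3": 15.00}
print(compare_outputs(legacy, new))  # [('ACC-2', 99.99, 100.01)]
```

In practice each flagged account becomes an investigation ticket, which is why this approach is accurate but slow.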
3. The Parallel Running Trap
"We'll run both systems in parallel for 3 months to validate." This sounds reasonable. In practice:
- Parallel running requires feeding every transaction to both systems simultaneously
- Any discrepancy must be investigated, root-caused, and resolved before go-live
- 3 months becomes 6 months because discrepancies keep appearing
- Staff are maintaining two systems, doubling their operational burden
- The old system keeps accumulating data that needs to be re-migrated
The parallel running phase is where projects go to die. It's also where budgets explode.
4. Staff Resistance
The humans who run the current system feel threatened by the migration. They know the old system intimately: its quirks, its workarounds, its hidden features. The new system makes their expertise obsolete.
Resistance manifests as:
- Finding increasingly obscure edge cases that "must work" before go-live
- Slow adoption of new workflows during training
- Flagging every minor difference as a "regression"
- Key staff taking leave during critical migration phases
This is a people problem, not a technology problem. And it's the one most project plans ignore completely.
War Story 1: The Never-Ending Data Migration
Institution: Mid-size European lending institution, €2B loan book
Legacy system: 15-year-old custom-built system (Java, Oracle DB)
Target system: Modern cloud-native core banking platform
Planned duration: 12 months
Actual duration: 28 months
What Happened
The data migration was initially estimated at 8 weeks. The team built ETL pipelines, mapped 47 tables to the new schema, and ran a test migration on a Saturday.
Week 1: Test migration completed. 94% of records migrated successfully. The team celebrated.
Week 2: Business users started validating. They found that 6% of loan accounts had incorrect interest accrual calculations. Investigation revealed the legacy system had 23 different interest calculation methods, not the 8 that were documented.
Week 4: The team discovered that 12,000 loan accounts had been restructured over the years using a manual process that modified database records directly. These accounts had no audit trail and their current state couldn't be derived from the transaction history.
Month 3: A batch of 800 accounts was found to have negative balances that were actually positive: a sign-inversion bug from a system patch in 2018 that had been manually compensated for in the old reporting system.
Month 6: After three failed migration dry runs, the team abandoned automated migration for the problematic accounts and built a manual review workflow. 200 accounts per week, reviewed by a team of 4 analysts.
Month 12: The planned go-live date was missed. The board extended the deadline by 6 months.
Month 18: A second go-live attempt failed when the parallel running phase revealed that end-of-day settlement processes in the new system produced different rounding results than the legacy system. The difference was fractions of a cent per account, but across 200K accounts, it created a material discrepancy.
Month 24: The rounding issue was resolved by implementing the exact same rounding logic as the legacy system (which was technically wrong per accounting standards, but changing it would have required notifying every customer).
Month 28: Successful go-live.
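Fraction-of-a-cent discrepancies like the one in this story usually come down to rounding mode. A small sketch using Python's decimal module shows how half-up rounding (common in older systems) and half-even ("banker's rounding") diverge on the same accruals; the amounts are invented for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

CENT = Decimal("0.01")
# Invented daily interest accruals for three accounts, before rounding to cents.
accruals = [Decimal("0.125"), Decimal("0.135"), Decimal("0.125")]

# Legacy system: round half away from zero. New system: round half to even.
legacy_total = sum(a.quantize(CENT, rounding=ROUND_HALF_UP) for a in accruals)
new_total = sum(a.quantize(CENT, rounding=ROUND_HALF_EVEN) for a in accruals)

print(legacy_total, new_total)  # 0.40 0.38 -- two cents of drift from three postings
```

Scale that drift to 200K accounts settling daily and you get exactly the kind of material discrepancy that stalled this go-live.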
Lessons
- Never trust the documented data model. Extract and analyze the actual data before estimating migration effort.
- Budget 3x the data migration estimate. And then add contingency.
- Staff a dedicated data quality team. Not engineers, but business analysts who understand the product.
War Story 2: The Integration Cascade
Institution: Digital bank, 500K+ customer accounts
Legacy system: Monolithic platform (vendor-provided)
Target system: Composable cloud-native core
Planned duration: 9 months
Actual duration: 22 months
What Happened
The core banking migration itself went relatively smoothly: the modern platform ingested the data well, and the basic banking functions worked as expected within 4 months.
The problem was everything connected to core banking.
The institution had 34 integrations: payment processors, card schemes, KYC providers, credit bureaus, regulatory reporting systems, accounting platforms, CRM, mobile app backends, internet banking, and internal analytics.
Each integration had to be rebuilt or adapted. The team estimated 2 weeks per integration. Reality:
- Card processor integration: 3 months. The legacy system sent card authorization requests in a proprietary format. The card processor refused to support a new format until their next release cycle (6 months away). The team built a translation layer.
- Regulatory reporting: 4 months. The reporting system pulled data directly from legacy database views. Those views didn't exist in the new system. Every report had to be rebuilt.
- Mobile app: 5 months. The mobile app had 200+ API calls to the legacy system. 60% could be mapped to new APIs. 40% required new endpoints because the data model had fundamentally changed.
- Accounting system: 2 months. The general ledger integration broke because the new system used different account codes. The finance team refused to change their chart of accounts. The integration team built a mapping layer with 847 rules.
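A mapping layer like the one the integration team built can be sketched as a rule table keyed on transaction attributes, with unmapped combinations routed to manual review rather than silently posted. All event types, segments, and GL codes below are invented for illustration:

```python
# Hypothetical rule table: (event type, customer segment) -> legacy GL code.
# The real layer in the story had 847 such rules.
MAPPING_RULES = {
    ("LOAN_INTEREST", "RETAIL"): "4101",
    ("LOAN_INTEREST", "SME"): "4102",
    ("FEE_INCOME", "RETAIL"): "4201",
}

def to_legacy_gl(event_type, segment):
    """Translate a new-core posting to the finance team's legacy chart of accounts."""
    try:
        return MAPPING_RULES[(event_type, segment)]
    except KeyError:
        # Unmapped combinations must never post silently; escalate instead.
        raise ValueError(f"No GL mapping for {event_type}/{segment}; route to manual review")

print(to_legacy_gl("LOAN_INTEREST", "SME"))  # 4102
```

The design choice that matters here is failing loudly on unmapped combinations; a default or fallback code is how reconciliation breaks quietly at month-end.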
The Cascade Effect
Each delayed integration pushed other integrations back. The mobile app couldn't launch until the card integration was ready. Regulatory reporting couldn't be validated until all transaction types were flowing through the new system. The project timeline kept extending.
The actual go-live sequence took 8 months instead of the planned 2 months: a rolling migration where different services were cut over one by one, with extensive rollback plans for each.
Lessons
- Map every integration before you start. Every. Single. One. Including the ones "nobody uses" (someone does).
- Budget 4-6 weeks per complex integration, not 2 weeks.
- Plan a rolling cutover, not a big-bang go-live. The risk of everything working simultaneously on day one is too high.
- Integration is 60% of the project. Not core banking functionality, but integrations.
War Story 3: The Cultural War
Institution: Traditional bank, 100+ years old, 2,000 employees
Legacy system: IBM AS/400, COBOL, 30+ years of customization
Target system: Modern SaaS core banking
Planned duration: 24 months
Actual duration: 36 months (and counting)
What Happened
The technology migration was the easy part. The hard part was the people.
Month 1-6: Project planning went well. Executive sponsorship was strong. Budget was approved. The vendor was engaged. Architecture was designed.
Month 7: The first training sessions revealed that 40% of branch staff couldn't use the new system's web interface efficiently. They'd spent 20+ years with green-screen terminal interfaces and had developed muscle memory for keyboard shortcuts that didn't exist in the new system.
Month 9: The IT department pushed back on cloud hosting. "We've managed our own servers for 30 years. Why would we trust someone else?" This wasn't a technical argument β it was a fear-of-obsolescence argument. Three senior infrastructure engineers resigned within 2 months.
Month 12: The operations team discovered that 200+ daily processes they ran were actually custom scripts written by a single developer who had left the bank 5 years earlier. Nobody understood these scripts, but the bank depended on them. Each script had to be analyzed, documented, and replicated.
Month 18: A pilot branch went live. Complaints flooded in. Not about bugs β about change. "The old system was faster." "I can't find the customer lookup." "Why do I need to click three times instead of one?" Every complaint was escalated as a "critical defect."
Month 24: The planned full rollout was postponed. Management hired a change management consultant (€200K). Training was rebuilt from scratch with branch staff input. A "super user" program was created to embed champions in each branch.
Month 30: Second pilot, this time with 5 branches. The super users made the difference. Complaints dropped 80%. Staff started discovering features the old system didn't have.
Month 36: Full rollout began, branch by branch, over 6 months. Still ongoing.
Lessons
- Change management is not optional. Budget 10-15% of project cost for dedicated change management.
- Start training 6 months before go-live, not 6 weeks.
- Super user programs work. One champion per 20 users, trained deeply, empowered to support peers.
- Never underestimate muscle memory. 20 years of green-screen commands don't disappear with a training video.
The Universal Migration Playbook
Based on these and dozens of other migrations, here's what actually works:
Pre-Migration (3-6 months)
- Data audit: Extract and analyze every table, every field, every anomaly
- Integration inventory: Map every system that touches core banking
- Business logic extraction: Document every calculation, fee, and exception
- Staff assessment: Identify resistance hotspots and change champions
- Realistic timeline: Take the vendor's estimate, multiply by 2.5
Migration Phase (12-24 months)
- Wave-based approach: Migrate by product or customer segment, not big-bang
- Data migration factory: Dedicated team doing iterative migration runs
- Integration rebuild: Start with the most complex integrations first
- Parallel running: Limit to 4-6 weeks for each wave, with clear go/no-go criteria
- Continuous communication: Weekly updates to all stakeholders
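The go/no-go criteria for ending a parallel-running wave work best when they are explicit and machine-checkable rather than argued in a steering committee. A minimal sketch of such a gate (the thresholds and criteria here are illustrative assumptions, not recommendations):

```python
def go_no_go(discrepancy_count, total_accounts, open_criticals,
             max_discrepancy_rate=0.001):
    """Gate a parallel-running wave: every criterion must pass to cut over.

    Returns (decision, per-check detail) so failures are attributable.
    """
    checks = {
        "discrepancy_rate_ok": discrepancy_count / total_accounts <= max_discrepancy_rate,
        "no_open_criticals": open_criticals == 0,
    }
    return all(checks.values()), checks

# 12 unresolved discrepancies across 200K accounts, no open critical defects
decision, detail = go_no_go(12, 200_000, 0)
print(decision)  # True
```

Agreeing on the thresholds before parallel running starts is the point; otherwise every wave ends in a debate about whether "close enough" is close enough.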
Post-Migration (6-12 months)
- Hypercare: Dedicated support team for 3 months post go-live
- Legacy decommission: Don't rush; keep the old system in read-only for 6-12 months
- Optimization: Now that you're on the new platform, start leveraging features that weren't possible before
- Retrospective: Document everything for future migrations (you'll do this again in 10-15 years)
Why Your Platform Choice Matters
The migration experience varies dramatically based on the target platform:
API-first platforms (like CoreFi) reduce integration rebuild time by 40-60% because integrations connect through standardized REST APIs rather than custom interfaces.
Modular platforms allow wave-based migration (move lending first, then deposits, then payments) rather than requiring everything to move simultaneously.
Cloud-native platforms eliminate the infrastructure arguments and let the team focus on business logic and data migration.
Platforms with migration tooling (data import APIs, schema mapping tools, parallel running frameworks) can cut data migration time by 50%.
The platform won't solve the people problems or the data quality problems. But it can make the technical problems dramatically smaller.
Planning a core banking migration? CoreFi's modular, API-first architecture is designed to minimize migration risk, with standardized data import, wave-based deployment support, and integration templates. Talk to our migration specialists to assess your project realistically.