BLOG – September 2023

Why Migrating Data Is So Painful for Financial Institutions and What They Can Do About It

by Nick Thacker | Sep 21, 2023

Pain points from bad data
Data migration difficulties create huge risks for financial institutions any time they seek to switch or upgrade their IT systems. And as we noted in a previous blog, the resulting fear of undertaking any technology updates is constraining firms’ ability to modernise and grow.

But why is migrating data so difficult?

Lack of golden source data

Migration pain stems in large part from an almost universal problem: financial institutions lack sufficient structure and control around their data.

True enterprise-wide golden source datasets that are centrally managed and rigorously protected are, in our experience, almost unheard of. Data instead tends to be siloed across teams, business lines and geographies, and replicated in multiple places.

Those replications are rarely uniform or controlled either.

A data point may sit in one field or format in one system or business unit, but in a different field or format elsewhere. Spare fields are often used by different people for different purposes. Staff across the organisation will also enter the same piece of data in multiple places.

No wonder IT implementations frequently end up with important data in the wrong place, or with parts missing, once the migration is complete. And that leaves staff unable to carry out the tasks that data previously supported when the new system goes live.
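
To make that concrete, here is a minimal sketch in Python (with entirely invented field names, formats and values) of how the same client attributes can sit under different names and conventions in two systems, and why each source needs its own translation onto a single target layout before the data can move cleanly:

```python
from datetime import datetime

# Hypothetical illustration: the same client record as exported from two systems,
# with the same attributes held under different field names and formats.
system_a_record = {"client_ref": "C-10492", "dob": "04/07/1981", "fee_pct": "0.75"}
system_b_record = {"CUST_ID": 10492, "BirthDate": "1981-07-04", "MgmtFee": 75}  # fee in basis points

def normalise_a(rec):
    """Translate System A's layout onto the agreed target schema."""
    return {
        "client_id": rec["client_ref"].removeprefix("C-"),
        "date_of_birth": datetime.strptime(rec["dob"], "%d/%m/%Y").date().isoformat(),
        "fee_bps": round(float(rec["fee_pct"]) * 100),
    }

def normalise_b(rec):
    """Translate System B's layout onto the same target schema."""
    return {
        "client_id": str(rec["CUST_ID"]),
        "date_of_birth": rec["BirthDate"],
        "fee_bps": int(rec["MgmtFee"]),
    }

# Both sources should collapse to one consistent record before migration.
assert normalise_a(system_a_record) == normalise_b(system_b_record)
```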

Common data migration issues

Take fee rates, a common problem area. We see it all the time – an employee will create a fee rate in the system, but when the next employee wants to use that rate, they often can’t find it because so many already exist. To save time, they create another, almost identical fee rate. With each addition, the jumble grows – and so does the difficulty staff have in finding what they’re looking for.

And the older the data and the systems are, the messier they will be. This proliferation of garbage data points will then be transferred to the new system during the migration, perpetuating the problem.
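
A rough sketch of how that jumble might be surfaced before it is copied across: the Python snippet below (using made-up records and fields) groups fee rates by their economic terms so that near-identical entries can be collapsed rather than migrated.

```python
from collections import defaultdict

# Made-up fee-rate records of the kind that accumulate in a system over the years.
fee_rates = [
    {"id": "FR001", "name": "Std Mgmt Fee", "basis_points": 75, "min_fee": 500.0},
    {"id": "FR214", "name": "Standard management fee", "basis_points": 75, "min_fee": 500.0},
    {"id": "FR305", "name": "Custody fee - tier 1", "basis_points": 10, "min_fee": 0.0},
]

# Group rates by their economic terms; any group with more than one entry is a set of
# duplicates that can be reviewed and collapsed instead of being copied to the new system.
groups = defaultdict(list)
for rate in fee_rates:
    terms = (rate["basis_points"], rate["min_fee"])
    groups[terms].append(rate["id"])

duplicates = {terms: ids for terms, ids in groups.items() if len(ids) > 1}
print(duplicates)  # {(75, 500.0): ['FR001', 'FR214']}
```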

Another issue is that different systems handle processes in different ways. One platform might calculate corporate actions using a particular workflow, while another adopts an alternative approach, making it hard to translate processing from one system to the other.

All of which feeds into huge mapping headaches. Many migrations fall over because firms try to build comprehensive rules to map the data from their existing system to the new one. But because there are so many data variations, the mapping becomes enormous as they try to cater for every possible situation. And with the volume of data growing exponentially, that job is getting progressively harder.
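
A crude illustration of why, using invented legacy codes: every undocumented variation in the source data demands its own mapping rule, and anything the rules miss falls out as an exception that someone has to resolve by hand.

```python
# Hypothetical mapping from a legacy system's instruction codes to the new system's
# transaction types. Each newly discovered variant needs its own rule, which is how
# mapping tables balloon as teams try to cater for every possible situation.
LEGACY_TO_TARGET = {
    "DIV": "CASH_DIVIDEND",
    "DIVIDEND": "CASH_DIVIDEND",
    "Div (cash)": "CASH_DIVIDEND",
    "SCRIP": "STOCK_DIVIDEND",
    "SCRIP DIV": "STOCK_DIVIDEND",
    # ...one line per variant found in the source data, and counting.
}

def map_code(legacy_code, exceptions):
    """Translate a legacy code, logging anything the rules don't yet cover."""
    target = LEGACY_TO_TARGET.get(legacy_code.strip())
    if target is None:
        exceptions.append(legacy_code)  # feeds an exception report for manual review
    return target

exceptions = []
for code in ["DIV", "Div (cash)", "SCRIP DIVIDEND", "DVD"]:
    map_code(code, exceptions)
print(exceptions)  # ['SCRIP DIVIDEND', 'DVD'] – each one means another rule or a clean-up
```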

The result: massive delays and cost.

To resolve the problems, firms will have to employ staff or pay consultants to tidy up and harmonise the data that has been migrated – work that may take months to complete. Any residual issues will leave institutions carrying a tail of garbage data that acts as a drag on their business by antagonising customers, exacerbating regulatory and reputational risk, impairing decision making, and hindering innovation and critical business initiatives, including moves to automate, digitalise and adopt AI solutions.

Minimise the migration pain

So what can financial institutions do to combat the data migration difficulties they face, and take the pain out of their IT upgrade projects?

The answer lies in starting with clean, uniform data – ensuring it is of sufficient quality, integrity and consistency before attempting to move it from point A to point B. That will minimise the data errors and duplication carried across during the transition, and the remediation effort needed afterwards.
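
As a simple illustration only (not a description of any particular methodology, and with invented field names and rules), checks along the lines of the sketch below can be run over the source extract so that only records passing every rule are queued for migration, while the rest are sent back for remediation first:

```python
import re

def validate_client(rec):
    """Return the list of quality issues for one source record; an empty list means clean."""
    issues = []
    if not rec.get("client_id"):
        issues.append("missing client_id")
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", rec.get("date_of_birth", "")):
        issues.append("date_of_birth not in ISO format")
    if not 0 <= rec.get("fee_bps", -1) <= 10_000:
        issues.append("fee_bps outside plausible range")
    return issues

source_extract = [
    {"client_id": "10492", "date_of_birth": "1981-07-04", "fee_bps": 75},
    {"client_id": "", "date_of_birth": "04/07/1981", "fee_bps": 75},
]

# Only records with no issues are ready to move; everything else goes to remediation.
report = [(rec["client_id"] or "<blank>", validate_client(rec)) for rec in source_extract]
print(report)
```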

Our next blog will explore how best to do it.
