ENTERPRISE DATA MANAGEMENT - FINDING OUR NIRVANA from Feroz Ali's blog

Manual data collection is proving a painful thorn in the side of financial services firms seeking to implement enterprise data management (EDM) strategies, and continued reliance on it could scupper the industry's drive towards EDM altogether.

 

While data projects remain high on the agenda for the industry as firms continue to centralise data functions and manage costs, many are struggling to deal with the sheer volume of manually collected data that flows through their operations.

 

The financial services industry has dedicated considerable thought and resources to achieving the nirvana of a single, complete and correct version of a core dataset. What is not often discussed is how to deal with the exceptions and the non-vendor-based content. However, there is an increasing awareness of the risks associated with manual data collection and its contribution to valuation errors, missed deadlines, overstretched resources, scalability constraints and operational risk.

 

The changing regulatory environment and international accounting standards are further adding to the need for greater transparency.

 

Eradicating manual data collection can help resolve all these issues simultaneously. The biggest challenges lie with illiquid fixed income and over-the-counter (OTC) derivatives, where the structured nature of the assets makes data less transparent and therefore harder to collect. But organisations are also struggling to capture complete and accurate records for instruments such as American depositary receipts and contracts for difference, where an underlying security can add confusion, as well as for all variants of funds. Even mainstream activities such as unit trust pricing can prove troublesome. These challenges are not limited to pricing data but extend to income and capital events as well as asset identification and static data.

 

There are numerous systems on the market that can help organisations construct their own data management platforms. These can add value in managing the bulk collection, storage and processing of readily available data.

 

Building a process for capturing and processing the existing data feeds may improve clarity and transparency. However, this is far from an all-encompassing data initiative. Many organisations can achieve good levels of automated processing for the bulk of their data, but manually collected data often remains untouched by the EDM strategy.

 

If manual data that is input to a central data platform is not subjected to the same stringent routines as readily available data, it will create background noise and confusion. If multiple sources are used to validate listed content but only a single manually input entry exists for other datasets, consistency can never be achieved.
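The point about consistency can be sketched in code. The following is a minimal, hypothetical illustration (the function name, tolerance and data are my own, not from any vendor platform): a manually keyed price is treated as just another source and run through the same cross-source check as vendor feeds, so that single-source entries are flagged rather than silently accepted.

```python
from statistics import median

# Illustrative tolerance: flag anything more than 0.5% from consensus.
TOLERANCE = 0.005

def validate_price(instrument, quotes):
    """Validate a price against the consensus of all available sources.

    `quotes` maps source name -> price; a manually keyed price is just
    another source and faces the same test as any vendor feed.
    """
    if len(quotes) < 2:
        # Single-source data (often manual) cannot be cross-checked:
        # flag it for review rather than silently accepting it.
        return {"instrument": instrument, "status": "unverified"}
    consensus = median(quotes.values())
    outliers = {src: p for src, p in quotes.items()
                if abs(p - consensus) / consensus > TOLERANCE}
    status = "exception" if outliers else "clean"
    return {"instrument": instrument, "status": status,
            "consensus": consensus, "outliers": outliers}

# A manual entry that disagrees with two vendor feeds becomes an
# exception; a lone manual entry is merely "unverified".
print(validate_price("XYZ", {"vendor_a": 100.10, "vendor_b": 100.05,
                             "manual": 104.00}))
print(validate_price("ABC", {"manual": 52.00}))
```

The design choice here is the important part: the validation routine has no idea which input was manual, which is exactly the consistency the paragraph above argues for.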

 

Building solutions

 

It is widely acknowledged that an entity's overall performance is limited by its weakest component; in data management terms, that is the human element. Manual data collection will still exist once all of the readily available content has been automated, and it will continue to cause problems, eroding quality, inflating costs, stretching resources and management, and damaging reputations.

 

The challenge facing the industry is finding a way to collect, store, normalise, reconcile and validate all data, even the labour-intensive and risk-prone manually collected content. The core principle of automating data collection is surely the correct approach.
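The collect, store, normalise, reconcile and validate steps can be sketched as a simple pipeline. This is a hedged illustration only: the field names, the canonical schema and the two-source reconciliation rule are assumptions of mine, not a description of any real platform.

```python
# A minimal sketch of the collect -> store -> normalise -> reconcile ->
# validate flow; field names and the two-source rule are illustrative.
def normalise(record):
    """Map each source's field names onto one canonical schema."""
    return {"isin": record.get("ISIN") or record.get("isin"),
            "price": float(record["price"]),
            "source": record["source"]}

def run_pipeline(raw_records):
    """Store normalised records centrally, then flag reconciliation gaps."""
    store = {}  # central store keyed by identifier
    for rec in (normalise(r) for r in raw_records):
        store.setdefault(rec["isin"], []).append(rec)
    # Validate: an identifier backed by fewer than two independent
    # sources cannot be cross-checked and becomes an exception.
    exceptions = [isin for isin, recs in store.items()
                  if len({r["source"] for r in recs}) < 2]
    return store, exceptions

feeds = [{"ISIN": "GB0001", "price": "99.5", "source": "vendor_a"},
         {"isin": "GB0001", "price": "99.6", "source": "vendor_b"},
         {"isin": "GB0002", "price": "12.0", "source": "manual"}]
store, exceptions = run_pipeline(feeds)
```

Run against the sample feeds, the vendor-covered instrument reconciles cleanly while the manually sourced one surfaces as an exception, which is precisely where an EDM strategy tends to stop short today.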

 

Automation is the key to removing the weakest link in the data management chain: human error. Computers don't care whether they are performing mundane tasks, nor do they feel unfulfilled when, as a senior machine, they are given junior work. Credit crunch worries or deciding what to have for lunch fail to distract them.

 

Add complexity to those mundane tasks, such as having to collect data from numerous sources via emails, websites, extranets, terminals and internal departments (whose primary function is not to provide data to other teams) and, ultimately, having to contact somebody at another organisation, and it's easy to see why data management is such a complex, labour-intensive and fragmented process.

 

The data, once gathered, typically resides on an array of spreadsheets, with colour coding and bold fonts to stop the various users falling through the cracks, and locked cells to stop one user deleting another user's macros, all without audit trails to assist with unravelling queries or problems.
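The missing audit trail need not be elaborate. As a purely illustrative sketch (the function and field names are hypothetical), an append-only log written on every change is enough to answer the "who changed what, when, and from what" questions that shared spreadsheets cannot.

```python
import datetime

# Append-only change log: the audit trail shared spreadsheets lack.
audit_log = []

def set_value(dataset, key, value, user):
    """Update a field and record who changed what, when, from what."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "key": key,
        "old": dataset.get(key),  # previous value, None if newly created
        "new": value,
    }
    dataset[key] = value
    audit_log.append(entry)
    return entry

prices = {}
set_value(prices, "FUND-123", 10.42, "alice")
set_value(prices, "FUND-123", 10.48, "bob")  # supersedes, history kept
```

With this in place, an overwritten entry is no longer lost: the log preserves both versions and the users behind them, which is exactly what unravelling a pricing query requires.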

 

