Valuing Financial Data: How Can Big Data Be Measured?
Prof. Laura Veldkamp of Columbia University gave the keynote speech at the SFI Research Days in June on the subject of her paper with Maryam Farboodi (MIT Sloan), Dhruv Singal (Columbia University), and Venky Venkateswaran (NYU Stern), "Valuing Financial Data" (winner of the 2022 SFI Outstanding Paper Award).
Data is one of the hardest assets to measure and price. This raises a whole agenda of questions that touch on every aspect of finance: Is data interest-rate sensitive? How much should an investment firm be willing to pay for a financial data stream? How should a young firm be valued? Are large troves of data entry barriers for new firms? In her keynote, Prof. Veldkamp plunged into the broader world of data valuation to examine one key question: How can big data be measured, taking into account past and current practice as well as the opportunities offered by new technologies?
Big data is digitized information often generated from search histories, traffic patterns, purchases, etc. When a consumer downloads a new "free" app, the firm offering that app values the data this transaction generates more highly than the cash that can be collected. In such cases, consumers are usually aware that their data is part of the price (along with advertisements, etc.) and make an informed decision on whether to share their data or not.
In other cases, however, consumers are unaware that their data forms part of the purchase price. Firms whose main asset is data achieve higher quality and efficiency by obtaining even more data. This extra data enables them to choose better products, reduce inventory and transportation costs, and target advertising to the most promising customers, hence strengthening their market power and growing their profits. More data also lets them predict better and thereby reduce their risk significantly, increasing expected returns. But how do these firms attract more transactions and, therefore, more data? They adopt a price-discount policy to create more demand. Consumers thus end up paying part money, part data for their purchases without knowing it.
Data is, hence, extremely hard to value. Factors such as the type of data (ranging from completely raw data to structured data and knowledge, depending on the level of transformation carried out by data analysts), data leakage, and depreciation need to be considered before deciding which valuation method(s) to apply. In their research, Prof. Veldkamp and her coauthors examine whether data depreciates, and find that it does. Moreover, it depreciates even faster when data is abundant and when the environment features volatile innovations.
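The finding that data depreciates faster when it is abundant and when the environment is volatile can be illustrated with a simple exponential-decay sketch. This is a hypothetical illustration, not the authors' model: the function names and the way `abundance` and `volatility` feed into the decay rate `delta` are assumptions made for the example.

```python
import math

def depreciation_rate(base: float, abundance: float, volatility: float) -> float:
    """Toy decay rate: more abundant data and a more volatile
    environment both speed up depreciation (assumed functional form)."""
    return base * (1.0 + abundance) * (1.0 + volatility)

def data_value(initial_value: float, delta: float, years: float) -> float:
    """Remaining value of a data asset after exponential decay at rate delta."""
    return initial_value * math.exp(-delta * years)

# Two regimes for the same data asset, initially worth $1m.
calm  = depreciation_rate(base=0.10, abundance=0.2, volatility=0.1)
noisy = depreciation_rate(base=0.10, abundance=1.0, volatility=0.8)

# After three years, far less value survives in the noisy, data-rich regime.
print(round(data_value(1_000_000, calm, 3), 2))
print(round(data_value(1_000_000, noisy, 3), 2))
```

The point of the sketch is only qualitative: whatever the true decay process, a higher effective depreciation rate means a data asset must pay for itself sooner.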
Prof. Veldkamp briefly explained the six approaches that can be used to measure and value financial data:
- Cost accounting
- Complementary inputs
- Value functions
- Revenue
- Choice covariance
- Market prices
Prof. Veldkamp highlighted the strengths and weaknesses of each approach, concluding that different approaches suit different situations and uses, and that a combination of approaches is often best. The value one firm assigns to its data depends on how that data is used, and it may differ significantly from the value a different firm would assign to the same data.
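The idea that the same data is worth different amounts to different firms can be sketched with a toy calculation. This is my own illustration, not one of the six approaches above: it uses a standard CARA-normal value-of-information result, under which a signal that shrinks forecast variance is worth a certainty-equivalent amount that depends on the firm's risk aversion `gamma` (an assumed parameter).

```python
import math

def data_stream_value(gamma: float, var_before: float, var_after: float) -> float:
    """Certainty-equivalent value of a data stream that reduces forecast
    variance from var_before to var_after, for a firm with CARA risk
    aversion gamma (standard CARA-normal value-of-information formula):
        value = ln(var_before / var_after) / (2 * gamma)
    """
    return math.log(var_before / var_after) / (2.0 * gamma)

# The same data stream halves forecast variance for both firms...
var_before, var_after = 0.04, 0.02

# ...but a less risk-averse firm trades more aggressively on the
# improved forecast, so the identical data is worth more to it.
aggressive = data_stream_value(gamma=1.0, var_before=var_before, var_after=var_after)
cautious   = data_stream_value(gamma=5.0, var_before=var_before, var_after=var_after)
print(aggressive, cautious)
```

The takeaway matches the keynote: a valuation method that ignores how, and by whom, the data will be used cannot produce a single "correct" price.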
Prof. Veldkamp and her coauthors have developed a tool that investors and financial firms can use to value their existing data, or potential streams of data that they are considering acquiring. The researchers are continuing to explore this extensive subject.
Further details can be found in the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3947931