IT architects face a dilemma when selecting the best data infrastructure solution.
The traditional data warehouse, designed to turn raw operational data into useful information, has limited agility and is slow to process data.
The more agile alternative, self-service business intelligence (BI) for ad hoc analytics built on data lakes for example, promises much faster delivery of potentially more insightful information. However, because data processing is not standardized, the resulting information is often inconsistent, leading to conflicting answers and insights.
Data warehouse or data lake? Must you choose between speed and reliability? Or is there a happy middle ground? Data Vault is a method of data warehousing that follows agile, data-driven concepts such as schema-on-read.
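To make the schema-on-read concept concrete, here is a minimal sketch in Python. The field names and records are purely illustrative, not from any real Data Vault implementation: raw records are stored as-is, and a schema (field selection and type coercion) is applied only at read time, so loading never fails on missing or untyped fields.

```python
import json

# Illustrative raw records, stored exactly as they arrived (schema-on-read:
# no schema is enforced at load time).
raw_events = [
    '{"customer": "C001", "amount": "19.99", "ts": "2023-05-01"}',
    '{"customer": "C002", "amount": "5.00"}',  # missing "ts" is tolerated
]

def read_with_schema(raw_records):
    """Apply a schema only when the data is read for analysis."""
    for record in raw_records:
        data = json.loads(record)
        yield {
            "customer": data["customer"],
            "amount": float(data["amount"]),  # typed now, at read time
            "ts": data.get("ts"),             # optional field, no load failure
        }

rows = list(read_with_schema(raw_events))
```

In a schema-on-write warehouse, the second record would be rejected or force a schema change at load time; here the decision is deferred to the consumer of the data.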
This whitepaper outlines your options to help you make the right choice.
Data lake vs data warehouse. Is there a middle ground?
About the author
Michael Olschimke co-authored the book “Building a Scalable Data Warehouse with Data Vault 2.0,” which explains the concepts of Data Vault 2.0, a methodology for delivering high-performance, next-generation data warehouses. He is a co-founder and one of the CEOs of Scalefree, a leading IT consultancy specializing in data warehousing, business intelligence, and big data.