This is the beginning of the unbundled database era

Thanks to the cloud, the amount of data being generated and stored has exploded in scale and volume.

Every aspect of the enterprise is being instrumented for data, and new operations are being built on that data, driving every company to become a data company.

One of the most profound and perhaps least obvious shifts driving this is the emergence of the cloud database. Services such as Amazon S3, Google BigQuery, Snowflake and Databricks have solved computing on large volumes of data and made it easy to store data from every available source.

Enterprises want to store everything they can in the hope of being able to deliver improved customer experiences and new market capabilities.

It’s a good time to be a database company

Database companies have raised more than $8.7 billion over the past 10 years, with nearly half of that, $4.1 billion, just in the past 24 months, according to CB Insights.

Not surprising given the sky-high valuations of Snowflake and Databricks. The market has doubled over the past four years to nearly $90 billion and is expected to double again over the next four years. It’s safe to say there is a big market opportunity to go after.

See here for a solid list of database funding in 2021.

Database growth is driving spending in the enterprise. Image credits: Venrock

20 years ago, you had one choice: a relational database

Today, thanks to the cloud, microservices, distributed applications, global scale, real-time data and deep learning, new database architectures have emerged to solve for new performance requirements.

We now have different systems for fast reads and fast writes. There are also systems purpose-built for ad hoc analytics; for data that is unstructured, semi-structured, transactional, relational, graph or time-series; as well as for cached data and for search based on indexes, events and more.

It may come as a surprise, but there are still billions of dollars’ worth of Oracle instances running critical applications today, and they probably aren’t going anywhere.

Each system comes with different performance needs, including high availability, horizontal scaling, distributed consistency, failover protection, partition tolerance, serverless operation and full management.

As a result, organizations, on average, store data across seven or more different databases. For example, you might have Snowflake as your data warehouse, ClickHouse for ad hoc analytics, Timescale for time-series data, Elastic for search, S3 for logs, Postgres for transactions, Redis for caching or application data, Cassandra for complex workloads and Dgraph* for relationship data or dynamic schemas.
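
To make that fragmentation concrete, here is a minimal sketch of what the application layer can end up looking like when each workload gets its own purpose-built store. The hostnames, table names and read-through-cache helper are illustrative assumptions, not a prescribed stack:

    # Hypothetical polyglot-persistence sketch: one app, several specialized stores.
    import json
    import boto3                              # S3: cheap log/object storage
    import psycopg2                           # Postgres: transactional system of record
    import redis                              # Redis: caching / application data
    from elasticsearch import Elasticsearch  # Elastic: search

    pg = psycopg2.connect(host="pg.internal", dbname="appdb")
    cache = redis.Redis(host="cache.internal")
    search = Elasticsearch("http://search.internal:9200")
    s3 = boto3.client("s3")

    def get_order(order_id: str) -> dict:
        """Read-through cache: try Redis first, fall back to Postgres."""
        hit = cache.get(f"order:{order_id}")
        if hit:
            return json.loads(hit)
        with pg.cursor() as cur:
            cur.execute("SELECT row_to_json(o) FROM orders o WHERE id = %s", (order_id,))
            order = cur.fetchone()[0]
        cache.set(f"order:{order_id}", json.dumps(order), ex=300)  # 5-minute TTL
        return order

Each additional store for time-series, graph or analytics workloads adds another client, another schema and another failure mode to this picture.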

This is all assuming you’ve consolidated onto a single cloud and that you’ve built your data stack from scratch.

The level of performance and guarantees from these services and platforms is on a completely different level compared with what we had five to 10 years ago. At the same time, the proliferation and fragmentation of the database layer are increasingly creating new challenges.

For example: synchronizing across different schemas and systems; writing new ETL jobs to bridge workloads across multiple databases; constant maintenance and connectivity issues; the overhead of managing active clusters across many different systems; and moving data when new clusters or systems come online. Each has different requirements for scaling, branching, propagation, sharding and resources.
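
As a rough illustration of that bridging overhead, here is a minimal, hypothetical ETL sketch that copies recently changed transactional rows from Postgres into a warehouse table. The connection details, table names and watermark handling are assumptions for the example; every extra database in the stack tends to need a job like this, and it breaks whenever either schema drifts:

    # Hypothetical ETL sketch: bridge a transactional store to a warehouse.
    import os
    import psycopg2
    import snowflake.connector  # assumes the Snowflake Python connector

    src = psycopg2.connect(host="pg.internal", dbname="appdb")
    dst = snowflake.connector.connect(
        account="acme", user="etl",
        password=os.environ["SNOWFLAKE_PASSWORD"], warehouse="LOAD_WH",
    )

    # In a real job the watermark would be persisted between runs.
    last_watermark = "2021-01-01 00:00:00"

    with src.cursor() as cur:
        # Extract: only rows changed since the last run.
        cur.execute(
            "SELECT id, total, updated_at FROM orders WHERE updated_at > %s",
            (last_watermark,),
        )
        rows = cur.fetchall()

    with dst.cursor() as cur:
        # Load: the target schema must be kept in sync with the source by hand.
        cur.executemany(
            "INSERT INTO analytics.orders (id, total, updated_at) VALUES (%s, %s, %s)",
            rows,
        )
    dst.commit()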

On top of that, new databases emerge every month, each aimed at solving the next enterprise-scale challenge.

The new-age database

So the question is, will the future of databases continue to look the way it does today?