
Snowflake Cloud Data Warehouse Architecture

Chandan Gaur | 30 August 2024


Introduction to Snowflake Data Warehouse

Snowflake is a cloud-based analytical data warehouse provided as Software-as-a-Service (SaaS). It is a ready-to-use solution: users can start working with it directly, without worrying about installation, deployment, or startup. Snowflake cannot be run on private infrastructure, as it relies entirely on public cloud infrastructure. It provisions instances for managing computation and persists data through a cloud storage service.


Because it requires no installation or deployment, users need not worry about updates and maintenance; Snowflake handles these automatically for all types of users, including end users, business analysts, and data scientists.

What is the Architecture of the Snowflake Data Warehouse?

Snowflake is a pay-as-you-go service that charges users per second of computation or query processing. Some data warehouses use either a shared-disk or a shared-nothing architecture, but Snowflake uses a hybrid architecture that combines elements of both.

A traditional shared-disk architecture has multiple cluster nodes that all access a single storage node, while a traditional shared-nothing architecture has nothing in common between nodes, i.e., a separate storage node for each cluster node (CPU). Snowflake's hybrid architecture has three layers -

  • Cloud Services Layer
  • Query Processing Layer
  • Database Storage Layer

Database Storage Layer

  • It uses scalable cloud blob storage for structured and semi-structured data (including JSON, Avro, and Parquet).
  • The storage layer contains tables, schemas, databases, and diverse data. Tables can store petabytes of data, and their partitions are managed effectively.
  • It optimizes and compresses data internally to simplify metadata extraction and query processing.
  • Data objects in this layer are hidden from users and are accessible only via SQL queries through the compute layer.
  • Storage is made up of multiple micro-partitions that scale automatically when required.
  • All micro-partitions are encrypted, and metadata is generated for them to provide more efficiency and performance in data retrieval.

Query Processing Layer

  • This layer is the compute layer of the architecture. It contains multiple virtual warehouses, and every query runs on one virtual warehouse.
  • Virtual warehouses are Snowflake's compute engines and provide scalable compute capacity.
  • The compute layer retrieves data from the storage layer and caches it locally on compute resources to improve future query responses, i.e., each virtual warehouse has its own cache.
  • Multiple virtual warehouses can operate simultaneously; they maintain ACID properties and use massively parallel processing on the data.
  • Read and write query integrity is maintained by the cloud services layer, which routes work to the required virtual warehouses and compute nodes.
  • Virtual warehouses can auto-resume and auto-suspend, are easily scalable, and have built-in auto-scaling (when defined); see the sketch after this list.
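
As a minimal sketch of provisioning such a virtual warehouse, the snippet below uses the snowflake-connector-python package; the credentials, warehouse name, and settings are placeholders rather than values from this article.

```python
# Sketch: creating a virtual warehouse with auto-suspend and auto-resume.
# Account credentials and object names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

# Each virtual warehouse is an independent compute cluster with its own cache.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS reporting_wh
      WAREHOUSE_SIZE = 'XSMALL'
      AUTO_SUSPEND   = 300   -- suspend after 300 idle seconds
      AUTO_RESUME    = TRUE  -- wake up when a query arrives
""")
cur.execute("USE WAREHOUSE reporting_wh")
```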

Cloud Services Layer

  • The services layer is the “brain” of Snowflake. It maintains and optimizes data, manages transactions, and provides data security, metadata management, and data sharing.
  • Cloud services are stateless compute resources that run across multiple availability zones and use a highly available metadata store.
  • The services layer provides the SQL client interface for DDL and DML operations on data.
  • Like the other layers, this layer can be scaled independently.

Why is the Snowflake Data Warehouse important?

The importance of Snowflake is outlined below:

Warehouse Usage

Snowflake data warehouses come in different sizes, from X-Small to 4X-Large. Each warehouse size provides a different number of cluster nodes and is billed differently; the bigger the warehouse, the more workers it has. Each warehouse must be chosen with care: it must be able to handle the query load and must provide enough clusters or servers. Increasing the warehouse size is not the only factor in improving query performance. Data must also be stored in fully optimized, normalized partitions so that fewer files need to be processed to produce results.

Snowflake can auto-suspend and auto-resume a warehouse when configured to do so: the warehouse stops automatically after being inactive for some time and wakes up automatically when a query is submitted to it. Multiple queries can run simultaneously, and a concurrency mechanism manages them and preserves integrity among them. For this, Snowflake warehouses use a multi-cluster architecture to reserve and allocate the required compute resources, as sketched below. Snowflake also manages caches to make queries more efficient. Consider whether the cache is actually needed: if it is (i.e., the data is large and query processing takes a long time), think twice before enabling auto-suspend and auto-resume, because every time the warehouse restarts its local cache is lost and queries require more processing again.
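
A hedged sketch of those multi-cluster and suspension settings, again via snowflake-connector-python; the warehouse name, cluster counts, and timings are assumptions (multi-cluster warehouses also require the appropriate Snowflake edition).

```python
# Sketch: tuning multi-cluster concurrency and suspension for a warehouse.
# All names and values are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

# Let the warehouse add clusters under concurrent load and drop them when idle.
cur.execute("""
    ALTER WAREHOUSE reporting_wh SET
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 3         -- scale out for concurrent queries
      SCALING_POLICY    = 'STANDARD'
      AUTO_SUSPEND      = 600       -- a longer idle window keeps the cache warm
""")
```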

Roles to Access Data

Access to data in Snowflake is controlled by defining roles. A database administrator has the privilege to create as many roles as needed, with granted access to particular data objects such as tables, schemas, and databases. The DBA can also grant read or write access to the specific roles that need it. A further advantage is that one role can be granted to another role, making it possible to maintain a hierarchy of data-access privileges, as sketched below.
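
A minimal sketch of such a role hierarchy; the role, database, and schema names here are hypothetical.

```python
# Sketch: granting object privileges to a role and building a role hierarchy.
# Role and object names are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

cur.execute("CREATE ROLE IF NOT EXISTS analyst")
cur.execute("GRANT USAGE ON DATABASE sales_db TO ROLE analyst")
cur.execute("GRANT USAGE ON SCHEMA sales_db.public TO ROLE analyst")
cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA sales_db.public TO ROLE analyst")

# One role can be granted to another to form a privilege hierarchy.
cur.execute("CREATE ROLE IF NOT EXISTS analyst_lead")
cur.execute("GRANT ROLE analyst TO ROLE analyst_lead")
```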

Partition Blocks and Updates

Data in Snowflake is stored in the form of micro-partitions, whose size may vary between 50 and 500 MB. Snowflake automates the process of sizing these partitions, and users also have the privilege to apply operations that reorganize the micro-partition blocks. The effectiveness of partitioning shows when retrieving data: when partitions are appropriately managed and tables carry useful clustering information, query performance increases automatically and queries take less time to generate results from stored data.

When no clustering information is saved with a table and queries access the data by columns, performance degrades: queries may cost more and process irrelevant data. Suppose a table contains terabytes of data and we then recluster it. In this case, the data is rearranged into the newly defined layout so that the required clustering information can be extracted, as sketched below.
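
A hedged sketch of defining a clustering key and checking how well the micro-partitions line up with it; the table and column names are assumptions.

```python
# Sketch: reclustering a table and inspecting clustering quality.
# Table and column names are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

# Cluster a large table around the columns that queries filter on most.
cur.execute("ALTER TABLE sales_db.public.orders CLUSTER BY (order_date, region)")

# SYSTEM$CLUSTERING_INFORMATION reports how well the micro-partitions
# match the clustering key.
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('sales_db.public.orders')")
print(cur.fetchone()[0])
```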

Time Travel

Snowflake provides a Time Travel feature for viewing data as it existed at points in the past, within a specified interval. By default, every new table has one day of time travel. This is most valuable when data needs to be restored from a specific point in time, or when data changes need to be mapped to measure what volume of data must be operated on or processed again. Another benefit for enterprises is that they can evaluate how data has been updated over time. Time Travel helps restore tables, schemas, and databases within the configured retention period (from 1 day up to 90 days); a sketch follows.
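
A minimal sketch of Time Travel usage; the table name, offset, and retention values are placeholders, and the maximum retention depends on the Snowflake edition.

```python
# Sketch: querying the past with Time Travel and adjusting retention.
# Table names and values are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

# Read the table as it looked one hour (3600 seconds) ago.
cur.execute("SELECT COUNT(*) FROM sales_db.public.orders AT (OFFSET => -3600)")
print(cur.fetchone()[0])

# Extend the retention window beyond the 1-day default.
cur.execute(
    "ALTER TABLE sales_db.public.orders SET DATA_RETENTION_TIME_IN_DAYS = 30"
)

# A dropped table can be restored within the retention period:
# cur.execute("UNDROP TABLE sales_db.public.orders")
```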

Worksheets

Snowflake provides worksheets as a way to manage work on data. Users can create as many worksheets as they require, with the commenting feature enabled. This means one can compute statistics on data and keep a worksheet of those statistics, which makes it possible to work on the go without managing and saving local reports. Worksheets can be kept for as long as they are needed and updated at any time.

Analysis of Queries in Snowflake

The performance of queries in Snowflake is available as the Query Profile in its web interface. When a user selects a query they have performed, the interface displays graphical and statistical information about it. The interface also provides a history of queries run in the past; selecting a query from the history and moving to its Profile tab displays the query profile, which contains a DAG of the query plan. Reviewing this plan makes it easier to eliminate unused parts and write productive queries that process less data and generate only the required output. Queries containing joins and aggregations are costly, as Snowflake is not meant to perform these operations on the fly; in some cases, though, it keeps track of some basic aggregates on the go while saving data internally.
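
Long-running queries can also be found programmatically; here is a hedged sketch using the QUERY_HISTORY table function (the threshold and selected columns are chosen for illustration).

```python
# Sketch: listing the slowest recent queries via INFORMATION_SCHEMA.QUERY_HISTORY.
# The 60-second threshold is an arbitrary illustrative cutoff.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

cur.execute("""
    SELECT query_id, query_text, total_elapsed_time
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
    WHERE total_elapsed_time > 60000   -- elapsed time is in milliseconds
    ORDER BY total_elapsed_time DESC
    LIMIT 10
""")
for query_id, query_text, elapsed_ms in cur.fetchall():
    print(query_id, elapsed_ms, query_text[:80])
```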

What are the Best Practices for the Snowflake Data Warehouse?

The best practices for Snowflake are mentioned below:

Best practices for Schema Design

  • Most tables contain a timestamp or date field at the required granularity. Snowflake produces results more efficiently on TIMESTAMP and DATE types than on VARCHAR.
  • Using a clustering key is good practice, but if a table contains a small amount of data and aggregated data is required all the time, clustering may result in the generation of many small cluster files.
  • Always keep column data lengths in mind, as mismatched storage lengths may produce erroneous data on output.
  • Snowflake stores timestamp and date values as strings when the column type is VARIANT, so in this case it is recommended to flatten the structure and use the appropriate types; retrieving a string is slower than retrieving a timestamp (see the sketch after this list).
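
A minimal sketch of flattening a VARIANT column into typed columns; the table and field names are hypothetical.

```python
# Sketch: materializing typed columns out of a VARIANT field so dates are
# stored as TIMESTAMP rather than strings. Names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

cur.execute("""
    CREATE OR REPLACE TABLE events_flat AS
    SELECT
        raw:event_id::STRING                      AS event_id,
        TRY_TO_TIMESTAMP(raw:created_at::STRING)  AS created_at  -- typed, not VARCHAR
    FROM raw_events
""")
```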

Best Practices for Query Optimisation

  • Clustering helps store data in cluster storage files with named partitions. Queries against tables with effective, active clustering can be bounded to those clusters, for example by using WHERE conditions on the clustering columns.
  • Warehouse suspension and auto-resume settings also affect queries. By default, the warehouse suspension time is 10 minutes. A segmented workload should be handled by a dedicated virtual warehouse, shared by multiple users if required; this helps reuse the result cache Snowflake generates and hence provides instant results for repeated queries on the same data.
  • If more concurrency is needed, Snowflake can auto-scale the warehouse, and this can be configured from the user interface. Use this feature together with optimized queries to obtain the best results; see the sketch after this list.
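
A hedged sketch of both practices: bounding a query by the clustering columns so micro-partitions can be pruned, and repeating the identical query text so it becomes eligible for the result cache. All names and values are assumptions.

```python
# Sketch: partition pruning via the clustering column, plus result-cache reuse.
# Table, column, and date values are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

bounded_query = """
    SELECT region, SUM(amount)
    FROM sales_db.public.orders
    WHERE order_date BETWEEN '2024-01-01' AND '2024-01-31'  -- clustering column
    GROUP BY region
"""
cur.execute(bounded_query)  # scans only the micro-partitions that match
cur.execute(bounded_query)  # identical text: can be served from the result cache
```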

Best practices for Efficient Reads

Reading from Snowflake is quite slow and expensive if we access data with aggregations directly through the reader. So, in the Spark-Snowflake connector, use the “query” option instead of “dbtable” when the data has limits and conditions but no aggregation, as sketched below.
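
A minimal PySpark sketch of that read pattern; the connection options, table, and query text are placeholders.

```python
# Sketch: reading from Snowflake in Spark using the "query" option so the
# filter and limit are pushed down. All option values are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-read").getOrCreate()

sf_options = {
    "sfURL": "my_account.snowflakecomputing.com",
    "sfUser": "my_user",
    "sfPassword": "my_password",
    "sfDatabase": "SALES_DB",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "REPORTING_WH",
}

df = (
    spark.read.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    # Filters and limits only; no aggregation in the pushed-down query.
    .option("query", "SELECT id, amount FROM orders WHERE region = 'EU' LIMIT 100000")
    .load()
)
df.show()
```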

Best practices for Efficient Writes

If you are writing data to a Snowflake table from Spark and that table already exists, try to select the columns in the same sequence/order as they appear in the Snowflake table. A workaround is to set the “column_mapping” option to “name” while writing, so that columns are matched by name instead of position; note that the “column_mapping” -> “name” approach is CASE SENSITIVE. A sketch follows.
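
A minimal PySpark sketch of that write path; the options, table name, and toy DataFrame are assumptions.

```python
# Sketch: appending to an existing Snowflake table with name-based column
# mapping (case sensitive). All option values are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-write").getOrCreate()
df = spark.createDataFrame([(1, 9.99)], ["ID", "AMOUNT"])  # toy stand-in data

sf_options = {
    "sfURL": "my_account.snowflakecomputing.com",
    "sfUser": "my_user",
    "sfPassword": "my_password",
    "sfDatabase": "SALES_DB",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "REPORTING_WH",
}

(
    df.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS_COPY")
    .option("column_mapping", "name")  # match columns by name, CASE SENSITIVE
    .mode("append")
    .save()
)
```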


Snowflake Pricing and Best Practices for Optimizing Cost

Snowflake charges users for average monthly storage per TB plus per-second compute billing. Compute cost depends on three other factors as well, such as -
  • The number of clusters used in a multi-cluster environment
  • The number of servers used in each cluster
  • The running time of each server in each cluster

Optimizing and segmenting workloads along these three factors helps reduce cost: using a specific warehouse size for a particular workload and particular users, with a suitable cache retention time (or warehouse suspension time), helps reuse the cache and hence retrieves results faster. Less complex queries keep billing low, and complexity can be removed from queries by applying cluster-access conditions and removing massive joins.

Joins cause more shuffle operations than direct loading, so it is often recommended to load the data into tools such as Spark or Looker and perform activities such as joins and aggregations there. Cost can also be reduced by identifying long-running queries in the Query Profile and then simplifying the schema. In addition, roles can be defined for users to prevent irrelevant queries and unauthorized access to data. Snowflake also allows the creation of transient tables, which do not preserve history, so the average monthly storage cost can be reduced on the go; a sketch follows.
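
A minimal sketch of a transient staging table; the table name and columns are hypothetical.

```python
# Sketch: a transient table skips the Fail-safe history that permanent
# tables keep, trimming storage cost. Names are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

cur.execute("""
    CREATE TRANSIENT TABLE IF NOT EXISTS staging_orders (
        id     NUMBER,
        amount NUMBER(10, 2)
    )
    DATA_RETENTION_TIME_IN_DAYS = 0  -- keep no Time Travel history either
""")
```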


A Holistic Approach

With Snowflake Cloud Data Warehouse offerings, enterprises can enable elasticity, reduce I/O for better performance per query, and achieve faster overall performance. To learn more about how these solutions are transforming industries, get in touch with our data warehouse design specialists.