Snowflake Data Warehouse Top Commands
Commands
1. Selecting the current user — SELECT CURRENT_USER();
10. Describe a masking policy — DESC MASKING POLICY <name>;
11. Use a role — USE ROLE <role_name>;
12. Show grants assigned to a particular role — SHOW GRANTS TO ROLE <role>;
13. Grant a role usage access on a warehouse — GRANT USAGE ON WAREHOUSE <warehouse_name> TO ROLE <role>;
14. Show grants that have been assigned to a user — SHOW GRANTS ON USER <username>;
You will be able to find the username in the top-right corner of the screen after logging in (Classic Console).
15. Grant a role usage access on a database — GRANT USAGE ON DATABASE <database_name> TO ROLE <role>;
15. Altering table in Snowflake — ALTER TABLE IF EXISTS <table_name> SET
<property_name> = __;
16. Altering view in Snowflake — ALTER VIEW IF EXISTS <view_name> SET
<property_name> = __;
17. Altering warehouse in Snowflake — ALTER WAREHOUSE IF EXISTS
<datawarehouse_name> SET <property_name> = __;
18. Use a particular database — USE DATABASE <database_name>;
19. Use a particular warehouse — USE WAREHOUSE <warehouse_name>;
20. Showing the regions in Snowflake — SHOW REGIONS;
The command above lists all the regions and the clouds (AWS, Azure, GCP) Snowflake is hosted on. It is not needed often, but it is handy for reference.
21. Show streams — SHOW STREAMS;
22. Deleting all rows from a table works just like standard SQL — DELETE FROM <table_name>;
23. To delete only certain rows from a table — DELETE FROM <table_name> WHERE
<condition>;
24. Truncate a table — TRUNCATE TABLE <table_name>;
25. Drop a stage — DROP STAGE <stage_name>;
26. Alter a stage — ALTER STAGE <stage_name> SET <parameter_name> = new_value;
27. Create a stage — always remember that while creating a stage you have to give the fully qualified database_name.schema_name.stage_name and then give the URL from where you want to fetch the data.
CREATE STAGE <database_name>.<schema_name>.<stage_name>
url='<url>';
NOTE: If the URL is an open one, then you can simply copy-paste the URL and it will work. But if you are fetching data from a private or protected place, then you might also want to add the aws_key_id and aws_secret_key as below. Also, at the time of writing this post, Snowflake supports fetching data from the storage providers AWS, Azure and GCP.
CREATE STAGE <database_name>.<schema_name>.<stage_name>
url='s3://bucketsnowflakes3'
credentials=(aws_key_id='<string>' aws_secret_key='<string>');
A COPY from such a stage then looks like this:
COPY INTO <table_name>
FROM @<stage_name>
file_format = (type = csv
field_delimiter = ','
skip_header = 1);
32. ON_ERROR (CONTINUE) — this parameter acts as error handling: we tell Snowflake what to do when an error is encountered. In the code below we tell Snowflake to keep copying data even if an error is found. Of course, the rows that cause errors will not be loaded, while the rest of the rows are copied successfully.
COPY INTO <table_name>
FROM @<stage_name>
files = ('<file_name>.csv')
ON_ERROR = 'CONTINUE';
33. ON_ERROR (ABORT_STATEMENT) — aborts the entire COPY statement as soon as the first error is found (this is also the default behaviour).
COPY INTO <table_name>
FROM @<stage_name>
ON_ERROR = 'ABORT_STATEMENT';
34. ON_ERROR (SKIP_FILE) — skips any file in which an error is found and continues with the remaining files.
COPY INTO <table_name>
FROM @<stage_name>
files = ('<file_1>.csv','<file_2>.csv')
ON_ERROR = 'SKIP_FILE';
35. ON_ERROR (SKIP_FILE_number) — skips a file only once the given number of errors is found in it.
COPY INTO <table_name>
FROM @<stage_name>
files = ('<file_1>.csv')
ON_ERROR = 'SKIP_FILE_3';
Question: Can you guess what will happen if we specify SKIP_FILE_4 or SKIP_FILE_2? (Hint: a file is skipped only when its error count reaches the given number.)
36. ON_ERROR (SKIP_FILE_number%) — this works exactly like the "SKIP_FILE_number" option, except that instead of an absolute number we specify a percentage. Let's look at the code below and understand it.
COPY INTO <table_name>
FROM @<stage_name>
ON_ERROR = 'SKIP_FILE_3%';
Explanation — SKIP_FILE_3% means the error threshold is 3% of the number of records in the file. Loading continues until that threshold is crossed; once it is, loading stops and the file is skipped. For example, if there are 500 rows in file_1, then 3% of 500 is 15, so loading continues up to 15 errors. As soon as the 15th error is reached, loading stops and the file is skipped.
37. VALIDATION_MODE and SIZE_LIMIT — validation mode validates the data before loading it. It is like a dummy copy where no data is actually loaded, but we get the errors an actual load would have produced. The advantage is that with huge data we can validate first instead of hitting errors during the actual copy.
SIZE_LIMIT is a related COPY option: we specify a size in bytes to be copied, and once that size is reached, copying stops.
NOTE: One thing to note here is that the first file will always be loaded regardless of the size limit. What does this mean?
COPY INTO <table_name>
FROM @aws_stage_copy
files = ('<file_1.csv>','<file_2.csv>')
SIZE_LIMIT = 60000;
Explanation: In the above code, file_1.csv will be loaded irrespective of the 60000-byte size limit. The size limit only applies to file_2.csv and any further files.
38. RETURN_FAILED_ONLY — this option returns only the names of the files that failed, along with their errors. The pattern parameter simply states: load all the files having the word "order" somewhere in the name, irrespective of what the file name has at its start and end. So it acts like the pattern matching we use in SQL.
COPY INTO <table_name>
FROM @aws_stage_copy
pattern = '.*order.*'
ON_ERROR = CONTINUE
RETURN_FAILED_ONLY = TRUE;
Explanation — the above command returns only the names of the files that have errors. Why? Because we have set RETURN_FAILED_ONLY = TRUE.
39. VALIDATION_MODE (RETURN_n_ROWS) — this validation mode returns the first n rows (here 10) if no errors are found in them.
COPY INTO <table_name>
FROM @aws_stage_copy
files = ('file_name.csv')
VALIDATION_MODE = RETURN_10_ROWS;
NOTE: If there is an error in the 12th row — whether you know about it or not — and you use VALIDATION_MODE = RETURN_10_ROWS, you will get the first 10 rows back without any errors. The drawback is that the error in the 12th row is still there, and you might face challenges while actually copying the data.
40. VALIDATION_MODE (RETURN_ERRORS) — this validation mode returns errors only if any are found. If there are no errors, you will get nothing in the output, which is a good sign :)
COPY INTO <table_name>
FROM @aws_stage_copy
files = ('file_name.csv')
VALIDATION_MODE = RETURN_ERRORS;
41. JSON — JSON stands for JavaScript Object Notation. It is a data format, like CSV. Below is the code to handle a JSON file.
//Creating a file format for JSON
CREATE OR REPLACE FILE FORMAT MANAGE_DB.FILE_FORMATS.JSONFORMAT
TYPE = JSON;
//Creating a stage pointing at the JSON data
CREATE OR REPLACE STAGE <stage_name>
url='s3://bucketsnowflake-jsondemo';
//Creating our table where we will push the data we pull from a stage
CREATE OR REPLACE TABLE JSON_RAW (
raw_file variant);
//Copying the JSON file into the table
COPY INTO JSON_RAW
FROM @<stage_name>
file_format = MANAGE_DB.FILE_FORMATS.JSONFORMAT
files = ('HR_data.json');
Explanation: The whole gist of the above code is to copy a JSON file into the table we created. I have also shown the complete steps so that you are not lost anywhere while reading or preparing for the exam.
One thing to notice is that while copying data from the stage we are using a column raw_file with the data type variant. Why? Because the variant data type handles unstructured data very well.
Now you must naturally be thinking: we loaded the JSON data into a table with a variant column that handles unstructured data very well, but how can I query it?
Below is the answer.
// Selecting an attribute from the variant column
SELECT RAW_FILE:city FROM JSON_RAW;
Explanation: The first select query is very straightforward. It tells Snowflake to select the attribute city from the column RAW_FILE in the JSON_RAW table.
Instead of giving the column name, we can also give the column position, written as $1; the rest of the query stays the same:
SELECT $1:city FROM JSON_RAW;
If we find the data to be messy or of mixed data types, we can cast it (change the data types). This can be seen in the query below.
SELECT
RAW_FILE:id::int as id,
RAW_FILE:first_name::STRING as first_name,
RAW_FILE:last_name::STRING as last_name
FROM JSON_RAW;
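If an attribute holds a JSON array, it can be expanded into one row per element with LATERAL FLATTEN. A minimal sketch, assuming a hypothetical languages array inside each record:
SELECT
RAW_FILE:first_name::STRING as first_name,
f.value::STRING as language
FROM JSON_RAW,
LATERAL FLATTEN(input => RAW_FILE:languages) f;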
42. Time travel — time travel is a feature in Snowflake to retrieve data that we have updated or deleted accidentally. Essentially, we can travel back in time and see how the data looked before we updated/deleted it. There are two methods for time travel: one goes back a certain number of seconds using an offset, and the other goes back to a point in time using a timestamp. Let's look at both methods.
Explanation: The minus sign tells Snowflake to go back 90 seconds and return the data as it was at that time.
If you want to know your current timestamp, you can use the commands below.
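A minimal sketch of both methods plus the current-timestamp lookup (table name and timestamp value are placeholders):
SELECT * FROM <table_name> AT(OFFSET => -90);
SELECT * FROM <table_name> AT(TIMESTAMP => '2021-06-10 10:00:00'::timestamp);
SELECT CURRENT_TIMESTAMP();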
43. Truncate columns — this Snowflake option gives the additional functionality of capping the length of strings on the fly while copying. So if the source file has a column value of length 20 and the destination table defines the column with length 10, we can truncate the values to 10 characters using the command below.
COPY INTO <table_name>
from @<stage_name>
files = ('customers.csv')
TRUNCATECOLUMNS = TRUE;
Explanation: Now you must be wondering where to specify the truncate limit. The answer is that the limit comes from the column data type of the destination table. Suppose the destination table defines a column last_name varchar(10); then, with TRUNCATECOLUMNS = TRUE in the copy command above, only 10 characters will be inserted in the final table.
44. Force — FORCE = TRUE tells Snowflake to load all the files regardless of whether they have been loaded previously and have not changed since they were loaded.
NOTE: This option reloads files and can duplicate data in a table.
COPY INTO <table_name>
from @<stage_name>
files = ('customers.csv')
FORCE = TRUE;
45. Load History — this feature enables us to view the history of data loaded into tables using the COPY INTO <table_name> command.
There are two ways of retrieving the load history in Snowflake: one is through the UI and the other is by running a SQL command.
To view it through the UI, use the left-hand side panel and go to SNOWFLAKE -> ACCOUNT_USAGE -> LOAD_HISTORY.
The other way is through a SQL command, which is very simple, as follows.
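For example (the ACCOUNT_USAGE view covers the whole account, while each database also exposes its own INFORMATION_SCHEMA view):
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.LOAD_HISTORY;
SELECT * FROM <database_name>.INFORMATION_SCHEMA.LOAD_HISTORY;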
46. Creating a stream — Creates a new stream in the current/specified schema or replaces
an existing stream. A stream records data manipulation language (DML) changes made to a
table, directory table, external table, or the underlying tables in a view (including secure
views). The object for which changes are recorded is called the source object.
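For example, a minimal stream on a table (names are placeholders):
CREATE OR REPLACE STREAM <stream_name> ON TABLE <table_name>;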
47. Dynamic data masking — Dynamic Data Masking is a Column-level Security feature
that uses masking policies to selectively mask data at query time that was previously loaded
in plain-text into Snowflake.
CREATE MASKING POLICY phone_mask AS (val varchar) RETURNS varchar ->
case when current_role() in ('ANALYST_FULL','ACCOUNTADMIN') then val
else '##-###-###' end;
ALTER TABLE <table_name> MODIFY COLUMN phone SET MASKING POLICY phone_mask;
Explanation: As discussed above, we apply the policy to a column; phone is the column name. We are telling Snowflake to return unmasked values only if the role is ANALYST_FULL or ACCOUNTADMIN. If any other role tries to view the phone column, ##-###-### is returned instead. The (val varchar) is the data type of the original column, and the varchar in RETURNS varchar is the data type of the returned value.
As seen in the code above, we first create a masking policy and then apply it to a particular column.
48. Drop stream — DROP STREAM <stream_name>;
49. Describe a stream — DESC STREAM <stream_name>;
50. Show external tables — In a typical table, the data is stored in the database; however,
in an external table, the data is stored in files in an external stage. External tables store file-
level metadata about the data files, such as the filename, a version identifier and related
properties.
SHOW EXTERNAL TABLES;
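For reference, a minimal sketch of creating one (stage path and file type are placeholder assumptions):
CREATE EXTERNAL TABLE <external_table_name>
LOCATION = @<stage_name>/
FILE_FORMAT = (TYPE = PARQUET)
AUTO_REFRESH = FALSE;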
51. Show grants — Lists all access control privileges that have been explicitly granted to
roles, users, and shares.
SHOW GRANTS;
52. Show functions — Lists all the native (i.e. system-defined/built-in) scalar functions provided by Snowflake, as well as any user-defined functions (UDFs) or external functions that have been created for your account.
SHOW FUNCTIONS;
53. Show file formats — A file format is a set of rules or instructions that tells Snowflake how to parse a file while copying it into a table.
SHOW FILE FORMATS;
54. Show integrations — Lists the integrations in your account.
SHOW INTEGRATIONS; //The command output provides properties and metadata in the following columns: name | type | category | enabled | comment | created_on
55. Show managed accounts — Lists the managed accounts created for your account. Currently used by data providers to create reader accounts for their consumers.
SHOW MANAGED ACCOUNTS;
56. Show locks — Lists all running transactions that have locks on resources. The command can show locks for the current user in all of the user's sessions, or for all users in the account.
SHOW LOCKS;
57. Show masking policies — A masking policy is basically a way to hide certain columns from certain roles. The hidden data appears as -*-*** or in whichever format you want it to appear.
SHOW MASKING POLICIES;
58. Show materialized views — A materialized view is like a normal view, with the main difference that its results are precomputed and automatically kept up to date by Snowflake in the background. MVs are used for query optimization.
SHOW MATERIALIZED VIEWS;
59. Show network policies — Lists all network policies defined in the system.
SHOW NETWORK POLICIES;
60. Create a managed account — Create a managed (reader) account in Snowflake.
CREATE MANAGED ACCOUNT <account_name>
ADMIN_NAME = <admin_login_name>, ADMIN_PASSWORD = '<password>',
TYPE = READER
[ COMMENT = '<string_literal>' ];
Object / command quick reference:
❄️ Database — CREATE, ALTER, DROP, DESCRIBE, SHOW
❄️ Schema — CREATE, ALTER, DROP, DESCRIBE, SHOW
❄️ Table — CREATE, ALTER, DROP, DESCRIBE, SHOW, TRUNCATE
CREATE [ OR REPLACE ]
[ { TEMPORARY | TRANSIENT } ] TABLE [ IF NOT EXISTS ] <table_name>
( <col_name> <col_type> )
[ CLUSTER BY ( <expr> [ , <expr> , ... ] ) ]
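For instance, a concrete version of the syntax above (table and column names are made up):
CREATE OR REPLACE TRANSIENT TABLE demo_events (
event_id INT )
CLUSTER BY (event_id);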
❄️ View — CREATE, ALTER, DROP, DESCRIBE, SHOW
❄️ Materialized View — CREATE, ALTER, DROP, DESCRIBE, SHOW
❄️ Dynamic Table — CREATE, ALTER, DROP, DESCRIBE, SHOW
❄️ Task — CREATE, ALTER, DROP, DESCRIBE, SHOW
❄️ Stream — CREATE, ALTER, DROP, DESCRIBE, SHOW
❄️ Alert — CREATE, ALTER, DROP, DESCRIBE, SHOW, HISTORY
❄️ Stage — CREATE, ALTER, DROP, DESCRIBE, SHOW, LIST
❄️ Loading Data — COPY INTO (standard), COPY INTO (with transformation)
❄️ Data Manipulation — INSERT, UPDATE, MERGE, DELETE
❄️ Function (UDF) — CREATE, ALTER, DROP, DESCRIBE, SHOW (languages: Python, SQL, JavaScript, Scala)
❄️ Procedure — CREATE, ALTER, DROP, DESCRIBE, SHOW (languages: Python, SQL, JavaScript, Scala; handler in-line or on stage)
❄️ Pipe — CREATE, ALTER, DROP, DESCRIBE, SHOW, PIPE_STATUS (sources: Amazon S3, Microsoft Azure, Google Cloud Storage)
Basic Commands:
1. Login (via the SnowSQL CLI):
snowsql -a <account_name> -u <username> -w <warehouse_name> -d <database_name> -s <schema_name>
2. Switch Database:
USE DATABASE <database_name>;
3. Switch Schema:
USE SCHEMA <schema_name>;
4. List Warehouses:
SHOW WAREHOUSES;
5. List Databases:
SHOW DATABASES;
6. List Schemas:
SHOW SCHEMAS;
Snowflake Administration:
1. Create User:
CREATE USER <username> PASSWORD = '<password>';
2. Create Role:
CREATE ROLE <role_name>;
3. Grant Role to User:
GRANT ROLE <role_name> TO USER <username>;
4. List Users:
SHOW USERS;
5. List Roles:
SHOW ROLES;
Transaction Control:
1. Begin Transaction:
BEGIN;
2. Commit Transaction:
COMMIT;
3. Rollback Transaction:
ROLLBACK;
Snowflake Functions:
1. Date Functions:
SELECT CURRENT_DATE(), DATEADD(DAY, 7, CURRENT_DATE());
2. String Functions:
SELECT UPPER(column), LOWER(column), SUBSTRING(column, start, length);
3. Math Functions:
SELECT ABS(column), ROUND(column, 2), POWER(column, 2);
4. Conditional Functions:
SELECT CASE WHEN condition THEN result ELSE else_result END;
External Functions:
1. Create External Function (requires an API integration to the proxy service):
CREATE EXTERNAL FUNCTION <function_name>(<arg_name> <arg_type>)
RETURNS <return_type>
API_INTEGRATION = <integration_name>
AS '<proxy_service_url>';
2. List External Functions:
SHOW EXTERNAL FUNCTIONS;
3. Drop External Function (dropped like any function, with its argument types):
DROP FUNCTION <function_name>(<arg_type>);
Stages Commands:
1. Create Stage:
CREATE STAGE <stage_name>;
2. List Stages:
SHOW STAGES;
3. List Files in Stage:
LIST @<stage_name>;
4. Put Data into Stage (from a local machine, via SnowSQL):
PUT file://<local_file_path> @<stage_name>;
5. Get Data from Stage:
GET @<stage_name> file://<local_file_path>;
6. Copy Data from Stage to Table:
COPY INTO <table_name> FROM @<stage_name> FILE_FORMAT = (TYPE = CSV);
7. Copy Data from Table to Stage:
COPY INTO @<stage_name> FROM <table_name> FILE_FORMAT = (TYPE = CSV);
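Putting those together, a minimal end-to-end sketch (file, stage, and table names are made up; PUT runs from SnowSQL, not the web UI):
CREATE OR REPLACE STAGE demo_stage;
//upload a local file (SnowSQL)
PUT file:///tmp/customers.csv @demo_stage;
LIST @demo_stage;
COPY INTO customers
FROM @demo_stage
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);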
Snowpipe Commands:
1. Create Snowpipe:
CREATE PIPE <pipe_name> AUTO_INGEST = TRUE AS COPY INTO <table_name> FROM @<stage_name>;
2. Resume Snowpipe:
ALTER PIPE <pipe_name> SET PIPE_EXECUTION_PAUSED = FALSE;
3. Pause Snowpipe:
ALTER PIPE <pipe_name> SET PIPE_EXECUTION_PAUSED = TRUE;
4. List Snowpipes:
SHOW PIPES;
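To check whether a pipe is running and what it has queued, the status function helps (pipe name is a placeholder):
SELECT SYSTEM$PIPE_STATUS('<pipe_name>');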
Let's expand the cheat sheet to include more commands related to tables,
user management, and additional stage operations:
Table Operations:
1. Truncate Table:
TRUNCATE TABLE <table_name>;
2. Drop Table:
DROP TABLE <table_name>;
3. Create View:
CREATE VIEW <view_name> AS SELECT column1, column2 FROM <table_name> WHERE condition;
4. Drop View:
DROP VIEW <view_name>;
5. Clone Table:
CREATE TABLE <new_table_name> CLONE <existing_table_name>;
User and Role Management:
1. Create User:
CREATE USER <username> PASSWORD = '<password>';
2. Create Role:
CREATE ROLE <role_name>;
3. Grant Role to User:
GRANT ROLE <role_name> TO USER <username>;
4. Revoke Role from User:
REVOKE ROLE <role_name> FROM USER <username>;
5. List Users:
SHOW USERS;
6. List Roles:
SHOW ROLES;
Stage Operations:
1. Copy Data from Stage to Table (with Pattern):
COPY INTO <table_name> FROM @<stage_name> PATTERN='.*\.csv';
2. Remove File from Stage:
REMOVE @<stage_name>/file_name.csv;
3. Remove All Files from Stage:
REMOVE @<stage_name>;
4. Set Data Retention (Time Travel) — note this is a table property, not a stage property:
ALTER TABLE <table_name> SET DATA_RETENTION_TIME_IN_DAYS = 7;
5. List Tables (the "kind" column distinguishes transient tables):
SHOW TABLES;
Data Sharing:
1. Share a Database (grant it to a share, then add consumer accounts):
CREATE SHARE <share_name>;
GRANT USAGE ON DATABASE <database_name> TO SHARE <share_name>;
ALTER SHARE <share_name> ADD ACCOUNTS = <account_identifier>;
2. List Shares:
SHOW SHARES;
3. Revoke Share:
REVOKE USAGE ON DATABASE <database_name> FROM SHARE <share_name>;
External Tables:
1. Create External Table:
CREATE EXTERNAL TABLE <external_table_name>
LOCATION = @<stage_name>/
FILE_FORMAT = (TYPE = CSV);
2. List External Tables:
SHOW EXTERNAL TABLES;
3. Refresh External Table Metadata:
ALTER EXTERNAL TABLE <external_table_name> REFRESH;
Let's delve deeper into the different types of stages in Snowflake: user stages, table stages, named stages, and external stages.
User Stages:
1. Every user automatically gets a stage, referenced as @~; it cannot be created or dropped.
2. List Files in Your User Stage:
LIST @~;
3. Copy Data to Your User Stage:
PUT file://<local_file_path> @~;
4. Copy Data from Your User Stage:
GET @~/<file_name> file://<local_file_path>;
Table Stages:
1. Every table automatically gets a stage, referenced as @%<table_name>; it cannot be created or dropped.
2. List Files in a Table Stage:
LIST @%<table_name>;
3. Copy Data to a Table Stage:
PUT file://<local_file_path> @%<table_name>;
4. Copy Data from a Table Stage:
GET @%<table_name>/<file_name> file://<local_file_path>;
Named Stages:
1. Create Named (Internal) Stage:
CREATE STAGE <named_stage_name>;
2. List Stages (there is no separate SHOW command per stage type):
SHOW STAGES;
3. Copy Data to a Named Stage:
PUT file://<local_file_path> @<named_stage_name>;
4. Copy Data from a Named Stage:
GET @<named_stage_name>/<file_name> file://<local_file_path>;
External Stages:
1. Create External Stage (a named stage with a URL; the files live in cloud storage, so PUT/GET do not apply):
CREATE STAGE <external_stage_name> URL = 's3://<bucket>/<path>'
CREDENTIALS = (aws_key_id='<string>' aws_secret_key='<string>');
2. List Files in an External Stage:
LIST @<external_stage_name>;
This section covers the different types of stages in Snowflake. Adjust the commands based on your specific use cases and replace placeholders like <table_name>, <named_stage_name>, etc., with your actual values.
Let's expand the cheat sheet to include more advanced commands related to
Snowflake features like streams, tasks, MERGE statements, and stored
procedures:
Streams:
1. Create Stream (a stream is always created on a source object, e.g. a table):
CREATE OR REPLACE STREAM <stream_name> ON TABLE <table_name>;
2. List Streams:
SHOW STREAMS;
3. Describe Stream:
DESCRIBE STREAM <stream_name>;
4. Query a Stream (shows the pending changes plus METADATA$ columns):
SELECT * FROM <stream_name>;
5. Consume a Stream (using it in a DML statement advances its offset):
INSERT INTO <target_table> SELECT column1, column2 FROM <stream_name>;
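A small worked sketch of that change-capture loop (all table and column names are made up):
CREATE OR REPLACE TABLE orders (id INT, status STRING);
CREATE OR REPLACE TABLE orders_audit (id INT, status STRING);
CREATE OR REPLACE STREAM orders_stream ON TABLE orders;
//new rows land in the source table
INSERT INTO orders VALUES (1, 'shipped');
//consuming the stream in DML advances its offset
INSERT INTO orders_audit SELECT id, status FROM orders_stream;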
Tasks:
1. Create Task:
CREATE OR REPLACE TASK <task_name> WAREHOUSE = <warehouse_name> SCHEDULE = 'USING CRON 0 0 * * * UTC' AS INSERT INTO <target_table> SELECT * FROM <source_table>;
2. List Tasks:
SHOW TASKS;
3. Run Task Manually:
EXECUTE TASK <task_name>;
4. Suspend/Resume Task:
ALTER TASK <task_name> SUSPEND;
ALTER TASK <task_name> RESUME;
5. Describe Task:
DESCRIBE TASK <task_name>;
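Worth remembering: tasks are created in a suspended state, so nothing runs on schedule until you issue the resume shown above (ALTER TASK <task_name> RESUME;).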
MERGE Statement:
1. MERGE Statement:
MERGE INTO <target_table> USING <source_table>
ON <target_table>.<column> = <source_table>.<column>
WHEN MATCHED THEN UPDATE SET <target_table>.<column> = <source_table>.<column>
WHEN NOT MATCHED THEN INSERT (<column1>, <column2>) VALUES (<value1>, <value2>);
2. MERGE with DELETE (Snowflake deletes via a MATCHED clause; there is no NOT MATCHED BY SOURCE clause):
MERGE INTO <target_table> USING <source_table>
ON <target_table>.<column> = <source_table>.<column>
WHEN MATCHED AND <delete_condition> THEN DELETE
WHEN MATCHED THEN UPDATE SET <target_table>.<column> = <source_table>.<column>
WHEN NOT MATCHED THEN INSERT (<column1>, <column2>) VALUES (<value1>, <value2>);
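A concrete sketch with made-up tables, assuming both have columns (id, amount):
MERGE INTO orders t USING orders_staging s
ON t.id = s.id
WHEN MATCHED AND s.amount = 0 THEN DELETE
WHEN MATCHED THEN UPDATE SET t.amount = s.amount
WHEN NOT MATCHED THEN INSERT (id, amount) VALUES (s.id, s.amount);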
Stored Procedures:
1. Create Stored Procedure:
CREATE OR REPLACE PROCEDURE <procedure_name>()
RETURNS STRING
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS $$ // JavaScript code $$;
2. Call Stored Procedure:
CALL <procedure_name>();
3. List Procedures:
SHOW PROCEDURES;
4. Alter Stored Procedure (the argument types are part of its identity):
ALTER PROCEDURE <procedure_name>(<arg_types>) SET COMMENT = '<string>';
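A minimal runnable version of the pattern above (the name is made up):
CREATE OR REPLACE PROCEDURE hello()
RETURNS STRING
LANGUAGE JAVASCRIPT
AS $$ return 'Hello from Snowflake'; $$;
CALL hello();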
Let's include more commands and features in the Snowflake cheat sheet:
Materialized Views:
1. Create Materialized View:
CREATE MATERIALIZED VIEW <view_name> AS SELECT * FROM <table_name>;
2. Suspend/Resume Maintenance (Snowflake refreshes MVs automatically; there is no manual REFRESH command):
ALTER MATERIALIZED VIEW <view_name> SUSPEND;
ALTER MATERIALIZED VIEW <view_name> RESUME;
3. List Materialized Views:
SHOW MATERIALIZED VIEWS;
Network Policies:
1. Create Network Policy:
CREATE NETWORK POLICY <policy_name> ALLOWED_IP_LIST = ('192.168.1.0/24', '10.0.0.1') BLOCKED_IP_LIST = ('192.168.1.5', '10.0.0.2');
2. List Network Policies:
SHOW NETWORK POLICIES;
3. Drop Network Policy:
DROP NETWORK POLICY <policy_name>;
Query Profiling:
1. The Query Profile lives in the web UI: open the query under History in Snowsight and click its Profile tab (there is no SET PROFILE command).
2. Re-read the Result of the Last Query:
SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));
3. Inspect Operator Statistics for a Finished Query:
SELECT * FROM TABLE(GET_QUERY_OPERATOR_STATS('<query_id>'));
Object Dependencies (there is no SHOW OBJECTS ... 'DEPENDS ON' syntax; the supported route is the OBJECT_DEPENDENCIES view):
1. Show Dependencies on a Table:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.OBJECT_DEPENDENCIES WHERE REFERENCED_OBJECT_NAME = '<table_name>';
2. Show What a View Depends On:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.OBJECT_DEPENDENCIES WHERE REFERENCING_OBJECT_NAME = '<view_name>';
External Functions:
1. Create External Function:
CREATE EXTERNAL FUNCTION <function_name>(<arg_name> <arg_data_type>)
RETURNS <data_type>
API_INTEGRATION = <integration_name>
AS '<proxy_service_url>';
2. List External Functions:
SHOW EXTERNAL FUNCTIONS;
3. Drop External Function:
DROP FUNCTION <function_name>(<arg_data_type>);
These additional commands cover features like materialized views, network policies, query profiling, object dependencies, and external functions in Snowflake. Adjust the commands based on your specific use cases and replace placeholders like <table_name>, <view_name>, <function_name>, etc., with your actual values.
Task Dependencies:
1. Create Task with Dependencies:
CREATE OR REPLACE TASK <task_name> WAREHOUSE = <warehouse_name> AFTER <dependent_task_name> AS INSERT INTO <target_table> SELECT * FROM <source_table>;
2. List Task Dependencies (see the predecessors column):
SHOW TASKS;
Data Masking:
1. Create Data Masking Policy:
CREATE MASKING POLICY <masking_policy_name> AS (val STRING) RETURNS STRING ->
CASE WHEN CURRENT_ROLE() IN ('<role_name>') THEN val ELSE '*****' END;
2. Apply Data Masking Policy:
ALTER TABLE <table_name> MODIFY COLUMN <column_name> SET MASKING POLICY <masking_policy_name>;
3. List Data Masking Policies:
SHOW MASKING POLICIES;
Data Classification (there is no CLASSIFY AS clause; classification is done with system functions and tags):
1. Analyze Columns for Semantic/Privacy Categories:
SELECT EXTRACT_SEMANTIC_CATEGORIES('<table_name>');
2. Record Classification with a Tag:
ALTER TABLE <table_name> MODIFY COLUMN <column_name> SET TAG <tag_name> = '<value>';
3. View Applied Tags:
SELECT * FROM TABLE(INFORMATION_SCHEMA.TAG_REFERENCES('<table_name>', 'TABLE'));
Resource Monitors:
1. Create Resource Monitor:
CREATE RESOURCE MONITOR <monitor_name> WITH CREDIT_QUOTA = 100 TRIGGERS ON 80 PERCENT DO NOTIFY;
2. Assign Resource Monitor to Warehouse:
ALTER WAREHOUSE <warehouse_name> SET RESOURCE_MONITOR = <monitor_name>;
3. List Resource Monitors:
SHOW RESOURCE MONITORS;
Object Tags:
1. Create a Tag, then Tag a Table (tags carry a string value):
CREATE TAG <tag_name>;
ALTER TABLE <table_name> SET TAG <tag_name> = '<tag_value>';
2. Tag a View:
ALTER VIEW <view_name> SET TAG <tag_name> = '<tag_value>';
3. List Object Tags:
SHOW TAGS;
Data Replication:
1. Enable Replication for a Database (run in the source account):
ALTER DATABASE <database_name> ENABLE REPLICATION TO ACCOUNTS <org_name>.<account_name>;
2. Create the Replica (run in the target account):
CREATE DATABASE <database_name> AS REPLICA OF <org_name>.<account_name>.<database_name>;
3. Refresh the Replica:
ALTER DATABASE <database_name> REFRESH;
4. List Replication Databases:
SHOW REPLICATION DATABASES;
These advanced commands cover features like task dependencies, data masking, data classification, resource monitors, object tags, and data replication in Snowflake. Adjust the commands based on your specific use cases and replace placeholders like <table_name>, <tag_name>, etc., with your actual values.
Data Sampling:
1. Sample Data from a Table (10% of rows):
SELECT * FROM <table_name> SAMPLE (10);
2. Bernoulli Sampling / Fixed-Size Sample:
SELECT * FROM <table_name> TABLESAMPLE BERNOULLI (10);
SELECT * FROM <table_name> SAMPLE (100 ROWS);
Query Caching:
1. Enable Query Result Caching (the default):
ALTER SESSION SET USE_CACHED_RESULT = TRUE;
2. Disable Query Result Caching:
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
3. Note: the result cache cannot be cleared manually; cached results simply expire (after 24 hours).
Let's look a little closer at cache management in Snowflake:
Cache Management:
1. Result caching is controlled per session (see USE_CACHED_RESULT above); there are no CLEAR RESULT_CACHE, SHOW RESULT_CACHE, or NO_RESULT_CACHE commands.
2. Skip the Result Cache for a Session:
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
3. The warehouse (local disk) cache is dropped when a warehouse is suspended:
ALTER WAREHOUSE <warehouse_name> SUSPEND;
Time Travel:
1. Extend Time Travel for a Table (retention in days):
ALTER TABLE <table_name> SET DATA_RETENTION_TIME_IN_DAYS = 7;
2. Disable Time Travel for a Table:
ALTER TABLE <table_name> SET DATA_RETENTION_TIME_IN_DAYS = 0;
3. Time Travel Query (AT a timestamp):
SELECT * FROM <table_name> AT(TIMESTAMP => '<timestamp>'::timestamp);
4. Time Travel Query (BEFORE a statement):
SELECT * FROM <table_name> BEFORE(STATEMENT => '<query_id>');
5. Check a Table's Retention Setting:
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE <table_name>;
6. Restore a Table to a Specific Time (via a clone; there is no RESTORE TABLE command):
CREATE TABLE <table_name>_restored CLONE <table_name> AT(TIMESTAMP => '<timestamp>'::timestamp);
7. Recover a Dropped Table:
UNDROP TABLE <table_name>;
Let's also look at retrieving past results by Query ID (QID):
Working with Query IDs:
1. Get the Last Query ID:
SELECT LAST_QUERY_ID();
2. Re-read the Result of a Specific Query (within the result-retention window):
SELECT * FROM TABLE(RESULT_SCAN('<query_id>'));
3. Note: there are no SHOW CACHED QUERIES, INVALIDATE METADATA, or PURGE cache commands in Snowflake.
Query Hints:
Snowflake's optimizer does not accept hints; comments like /*+ MATERIALIZE */ or /*+ HASH_AJ */ are ignored as plain comments. Performance is tuned instead through warehouse sizing, clustering keys, and result caching, covered elsewhere in this sheet.
Data Sharing:
1. Create Data Share:
CREATE SHARE <share_name> COMMENT = 'Description';
2. Grant Usage on Share:
GRANT USAGE ON DATABASE <database_name> TO SHARE <share_name>;
3. Show Data Shares:
SHOW SHARES;
4. Revoke Usage on Share:
REVOKE USAGE ON DATABASE <database_name> FROM SHARE <share_name>;
Task Monitoring:
1. Monitor Task Runs:
SELECT * FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY()) WHERE NAME = '<task_name>';
2. List Tasks and Their States:
SHOW TASKS;
3. View Run Details for One Task:
SELECT * FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(TASK_NAME => '<task_name>'));
Result Sets:
1. Export Query Result to CSV:
COPY INTO @<stage_name> FROM (SELECT * FROM <table_name>) FILE_FORMAT = (TYPE = CSV);
2. Export Query Result to Parquet:
COPY INTO @<stage_name> FROM (SELECT * FROM <table_name>) FILE_FORMAT = (TYPE = PARQUET);
3. Download Query Result to Local File (via SnowSQL):
GET @<stage_name>/<file_name> file://<local_file_path>;
Miscellaneous:
1. Generate Series of Numbers:
SELECT SEQ4() AS series FROM TABLE(GENERATOR(ROWCOUNT => 10));
2. Check Session Details:
SELECT CURRENT_SESSION(), CURRENT_ROLE(), CURRENT_WAREHOUSE(), CURRENT_DATABASE();
3. Show Resource Monitors:
SHOW RESOURCE MONITORS;
4. List Object Dependencies:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.OBJECT_DEPENDENCIES WHERE REFERENCED_OBJECT_NAME = '<object_name>';
5. Alter Object Comment:
COMMENT ON <object_type> <object_name> IS 'New Comment';
6. Set Query Timeout:
ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 300;
7. Tag Queries for Later Filtering (a session parameter, not a hint):
ALTER SESSION SET QUERY_TAG = 'tag_value';
8. Scale a Multi-Cluster Warehouse (there is no per-query OPTION clause):
ALTER WAREHOUSE <warehouse_name> SET MIN_CLUSTER_COUNT = 2;
9. List Scheduled Tasks:
SHOW TASKS;
10. Cancel a Running Query:
SELECT SYSTEM$CANCEL_QUERY('<query_id>');
11. Show Object Grants:
SHOW GRANTS ON <object_type> <object_name>;
Let's explore more commands and features in Snowflake, including zero-copy clone:
Zero-Copy Clone:
1. Create Zero-Copy Clone:
CREATE TABLE <clone_table_name> CLONE <source_table_name>;
CREATE DATABASE <clone_database_name> CLONE <source_database_name>;
2. Clone As-Of a Point in Time (combining clone with time travel):
CREATE TABLE <clone_table_name> CLONE <source_table_name> AT(OFFSET => -3600);
3. Drop a Clone (a clone is a regular object):
DROP TABLE <clone_table_name>;
4. Set Retention on a Clone:
ALTER TABLE <clone_table_name> SET DATA_RETENTION_TIME_IN_DAYS = 7;
Multi-Cluster Warehouses (Snowflake has no "resource pool" object; scaling is configured on the warehouse itself):
1. Create Multi-Cluster Warehouse:
CREATE WAREHOUSE <warehouse_name> WITH MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 10;
2. List Warehouses:
SHOW WAREHOUSES;
3. Adjust Cluster Counts:
ALTER WAREHOUSE <warehouse_name> SET MAX_CLUSTER_COUNT = 5;
These additional commands cover features like zero-copy clone, Snowpipe for auto-ingest, multi-cluster warehouses, and JSON data handling. Adjust the commands based on your specific use cases and replace placeholders like <clone_table_name>, <pipe_name>, etc., with your actual values.
Time Travel:
1. Time Travel Query (to a specific timestamp):
SELECT * FROM <table_name> AT(TIMESTAMP => 'YYYY-MM-DD HH:MI:SS'::timestamp);
2. Time Travel Query (using OFFSET, in seconds):
SELECT * FROM <table_name> AT(OFFSET => -10);
3. Show Dropped (Historical) Versions of Tables:
SHOW TABLES HISTORY LIKE '<table_name>';
4. Restore a Table to a Specific Version (via a clone):
CREATE TABLE <table_name>_restored CLONE <table_name> AT(TIMESTAMP => 'YYYY-MM-DD HH:MI:SS'::timestamp);
Query History:
1. Show Query History for a Warehouse:
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY_BY_WAREHOUSE(WAREHOUSE_NAME => '<warehouse_name>'));
2. Query History by User:
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY_BY_USER(USER_NAME => '<user_name>'));
Metadata Management:
1. Show Tables in a Schema:
SHOW TABLES IN SCHEMA <schema_name>;
2. Show Columns of a Table:
SHOW COLUMNS IN TABLE <table_name>;
3. Show Table Constraints:
SHOW PRIMARY KEYS IN TABLE <table_name>;
SHOW IMPORTED KEYS IN TABLE <table_name>;
Let's explore more commands related to query history, pipe history, and INFORMATION_SCHEMA built-in table functions in Snowflake:
Query History:
1. Show Recent Query History:
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY());
2. Filter Query History by User:
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) WHERE USER_NAME = 'username';
3. Retrieve Query History for a Specific Query ID:
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) WHERE QUERY_ID = '<query_id>';
4. Query History with Execution Time Aggregation:
SELECT USER_NAME, COUNT(*), MIN(START_TIME), MAX(END_TIME) FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) GROUP BY USER_NAME;
Pipe History:
1. Show Snowpipe Usage History:
SELECT * FROM TABLE(INFORMATION_SCHEMA.PIPE_USAGE_HISTORY());
These commands provide insights into querying the query history, pipe history (Snowpipe), and using INFORMATION_SCHEMA built-in functions for metadata exploration in Snowflake. Adjust the commands based on your specific use cases and replace placeholders like <pipe_name>, <table_name>, <schema_name>, etc., with your actual values.
Dynamic SQL:
1. Dynamic SQL Using EXECUTE IMMEDIATE (inside a Snowflake Scripting block, where table_name is a variable):
EXECUTE IMMEDIATE 'CREATE TABLE ' || table_name || ' (column1 INT, column2 STRING)';
2. Dynamic SQL with Bind Parameters:
EXECUTE IMMEDIATE 'INSERT INTO ' || table_name || ' VALUES (?, ?)' USING (value1, value2);
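A self-contained Snowflake Scripting sketch tying this together (table name and value are made up):
EXECUTE IMMEDIATE $$
DECLARE
  tbl STRING DEFAULT 'demo_dyn';
  v INT DEFAULT 42;
BEGIN
  //build and run the DDL, then a bound INSERT
  EXECUTE IMMEDIATE 'CREATE OR REPLACE TABLE ' || tbl || ' (id INT)';
  EXECUTE IMMEDIATE 'INSERT INTO ' || tbl || ' VALUES (?)' USING (v);
  RETURN 'done';
END;
$$;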
Storage Utilization (there is no SHOW ... STORAGE UTILIZATION command; use the metrics views):
1. Show Storage Utilization for a Table:
SELECT * FROM <database_name>.INFORMATION_SCHEMA.TABLE_STORAGE_METRICS WHERE TABLE_NAME = '<table_name>';
2. Show Storage Utilization per Database:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.DATABASE_STORAGE_USAGE_HISTORY;
Table Functions (UDTFs):
1. Create Table Function:
CREATE OR REPLACE FUNCTION <function_name>(arg1 INT)
RETURNS TABLE (column1 INT, column2 STRING)
AS $$ SELECT column1, column2 FROM <table_name> WHERE column1 = arg1 $$;
2. Call Table Function:
SELECT * FROM TABLE(<function_name>(arg1_value));
Automated Clustering:
1. Set a Clustering Key on a Table:
ALTER TABLE <table_name> CLUSTER BY (column1, column2);
2. Drop the Clustering Key:
ALTER TABLE <table_name> DROP CLUSTERING KEY;
3. Show Clustering Information for a Table:
SELECT SYSTEM$CLUSTERING_INFORMATION('<table_name>');
https://docs.snowflake.com/en/sql-reference/sql-all
https://snow-flake-cheat-sheet.streamlit.app/