
Snowflake Data Warehouse Top Commands
1. Selecting the current user — SELECT CURRENT_USER();

2. Selecting the current role — SELECT CURRENT_ROLE();

3. Selecting the current warehouse — SELECT CURRENT_WAREHOUSE();

4. Selecting the current database — SELECT CURRENT_DATABASE();

5. Show tables — SHOW TABLES;

6. Show pipes — SHOW PIPES;

7. Describe a pipe you have created — DESC PIPE <pipe_name>;

8. Show views that have been created — SHOW VIEWS;

9. Show roles that have been created — SHOW ROLES;

10. Describe a masking policy — DESC MASKING POLICY <name>;

11. Show masking policies — SHOW MASKING POLICIES;

12. Show the warehouses present — SHOW WAREHOUSES;

13. Show schemas — SHOW SCHEMAS;

14. Show grants that have been assigned to a user — SHOW GRANTS ON USER <username>;
You will be able to find the username in the top-right corner of the screen after logging in (Classic Console).
15. Use a role — USE ROLE <role_name>;
16. Show grants assigned to a particular role — SHOW GRANTS TO ROLE <role>;
17. Grant a role access to a warehouse — GRANT USAGE ON WAREHOUSE <warehouse_name> TO ROLE <role>;
18. Grant a role access to a database — GRANT USAGE ON DATABASE <database_name> TO ROLE <role>;
19. Altering a table in Snowflake — ALTER TABLE IF EXISTS <table_name> SET <property_name> = __;
20. Altering a view in Snowflake — ALTER VIEW IF EXISTS <view_name> SET <property_name> = __;
21. Altering a warehouse in Snowflake — ALTER WAREHOUSE IF EXISTS <warehouse_name> SET <property_name> = __;
22. Use a particular database — USE DATABASE <database_name>;
23. Use a particular warehouse — USE WAREHOUSE <warehouse_name>;
24. Showing the regions in Snowflake — SHOW REGIONS;
The command above lists all the regions and the clouds (AWS, Azure, GCP) Snowflake is hosted on. Not a command you will need often, but occasionally useful for information.
25. Show streams — SHOW STREAMS;
26. Delete all rows from a table (similar to standard SQL) — DELETE FROM <table_name>;
27. Delete only certain rows from a table — DELETE FROM <table_name> WHERE <condition>;
28. Truncate a table — TRUNCATE TABLE <table_name>;
29. Drop a stage — DROP STAGE <stage_name>;
30. Alter a stage — ALTER STAGE <stage_name> SET <parameter_name> = new_value;
31. Create a stage — Always remember, while creating a stage, to give the fully qualified name (database_name.schema_name.stage_name) and then the URL from which you want to fetch the data.

CREATE OR REPLACE STAGE MANAGE_DB.external_stages.aws_stage

url='';

NOTE: If your URL is a public one, you can simply copy-paste the URL and it will work. But if you are fetching data from a private or protected location, you may also want to add the aws_key_id and aws_secret_key as below. Also, at the time of writing this post, Snowflake supports fetching data from the storage providers AWS, Azure, and GCP.

CREATE OR REPLACE STAGE MANAGE_DB.external_stages.aws_stage

url='s3://bucketsnowflakes3'

credentials=(aws_key_id='<string>' aws_secret_key='<string>');

32. Show stages — SHOW STAGES;

33. Describe a stage — DESC STAGE <stage_name>;
34. List all the files in a stage — LIST @<stage_name>;
NOTE: If you cannot run the above command, you may need to set the context by selecting the correct warehouse, database, and schema in the top-right corner of your Classic Console screen.
35. COPY command — The COPY command can be used to copy data into a table from a stage.

COPY INTO <database_name>.<schema_name>.<table_name>

FROM @<stage_name>

file_format = (type = csv

field_delimiter = ','

skip_header = 1);

36. ON_ERROR (CONTINUE) — This parameter acts as error handling: we tell Snowflake what to do if an error is encountered. See the code below. Here we tell Snowflake to keep copying the data even if it finds errors. Of course, the rows that cause errors will be missing, while the rest of the rows are copied successfully.

COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS_EX

FROM @<stage_name>

file_format= (type = csv field_delimiter=',' skip_header=1)

files = ('<file_name>.csv')

ON_ERROR = 'CONTINUE';

37. ON_ERROR (ABORT_STATEMENT, the default) — This parameter acts like exception handling: we tell Snowflake what to do if an error is encountered. See the code below. Here we tell Snowflake to abort copying the data if it finds any error.

COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS_EX

FROM @<stage_name>

file_format= (type = csv field_delimiter=',' skip_header=1)


files = ('<file_name>.csv')

ON_ERROR = 'ABORT_STATEMENT';

38. ON_ERROR (SKIP_FILE) — Here we tell Snowflake that if it encounters an error in a file, it should skip that file and move on to the next one. See the code below: if there is any error in file_1, it is skipped and copying continues with file_2.

COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS_EX

FROM @<stage_name>

file_format= (type = csv field_delimiter=',' skip_header=1)

files = ('<file_1>.csv','<file_2>.csv')

ON_ERROR = 'SKIP_FILE';

39. ON_ERROR (SKIP_FILE_number) — Consider the example below. Here we tell Snowflake to keep loading until the error limit is reached; in the code below the limit is 3. If 3 errors are reached, the file is skipped; with fewer than 3 errors, loading continues.

COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS_EX

FROM @<stage_name>

file_format= (type = csv field_delimiter=',' skip_header=1)

files = ('<file_1>.csv')

ON_ERROR = 'SKIP_FILE_3';

Question: Can you guess what will happen if we specify SKIP_FILE_4 or SKIP_FILE_2?
40. ON_ERROR (SKIP_FILE_number%) — This works exactly like the SKIP_FILE_number option, except that instead of an absolute number we specify a percentage. Let's look at the example below and understand it.

COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS_EX

FROM @<stage_name>

file_format= (type = csv field_delimiter=',' skip_header=1)


files = ('<file_1>.csv')

ON_ERROR = 'SKIP_FILE_3%';

Explanation — SKIP_FILE_3% means loading continues until the number of errors reaches 3% of the records in file_1; if that limit is crossed, loading stops and the file is skipped. For example, if there are 500 rows in file_1, then 3% of 500 is 15, so loading continues up to 15 errors. As soon as the 15th error is reached, loading stops and the file is skipped.
41. SIZE_LIMIT — Validation mode (used in the next items) validates the data before loading it: it is like a dummy copy where the data is not loaded, but it gives us the actual errors we would have hit if the data had been loaded. The advantage is that if we have huge data to copy, we can validate it first before encountering errors in the actual copy. SIZE_LIMIT, in contrast, is a copy option: we specify the size in bytes to be copied, and once that size is reached, copying stops.
NOTE: One thing to note here is that the first file will always be loaded regardless of the size limit. What does this mean?

//Load data using copy command

COPY INTO COPY_DB.PUBLIC.ORDERS

FROM @aws_stage_copy

file_format= (type = csv field_delimiter=',' skip_header=1)

files = ('<file_1>.csv','<file_2>.csv')

SIZE_LIMIT=60000;

Explanation: In the above code, file_1.csv will be loaded irrespective of the 60000-byte size limit. The size limit applies to file_2.csv and any further files.
42. RETURN_FAILED_ONLY — This copy option returns only the names of the files that had errors. The PATTERN parameter simply says: load all the files having the word "Order" in the middle, irrespective of what the file name has at its start and end. It basically acts like the pattern matching we use in SQL (here, a regular expression).

COPY INTO COPY_DB.PUBLIC.ORDERS

FROM @aws_stage_copy

file_format= (type = csv field_delimiter=',' skip_header=1)


pattern='.*Order.*'

ON_ERROR = CONTINUE

RETURN_FAILED_ONLY = TRUE;

Explanation — The above command/code will return only the names of the files that have
errors. Why? Because we have set RETURN_FAILED_ONLY=TRUE .
43. VALIDATION_MODE (RETURN_n_ROWS) — This validation mode returns the first n rows (here, 10) if no errors are found.

//Load data using copy command

COPY INTO COPY_DB.PUBLIC.ORDERS

FROM @aws_stage_copy

file_format= (type = csv field_delimiter=',' skip_header=1)

files=('file_name.csv')

VALIDATION_MODE = RETURN_10_ROWS;

NOTE: Suppose there is an error in the 12th row, whether you know about it or not, and you use VALIDATION_MODE = RETURN_10_ROWS: you will get the first 10 rows back without any errors. The drawback is that the error in the 12th row is still there, and you may face problems when actually copying the data.
44. VALIDATION_MODE (RETURN_ERRORS) — This validation mode returns errors only if any are found. If there are no errors, you get nothing in the output, which is a good sign :)

//Load data using copy command

COPY INTO COPY_DB.PUBLIC.ORDERS

FROM @aws_stage_copy

file_format= (type = csv field_delimiter=',' skip_header=1)

files=('file_name.csv')

VALIDATION_MODE = RETURN_ERRORS;
45. JSON — JSON stands for JavaScript Object Notation. It is a data format, like CSV. Below is the code to handle a JSON file.

//Creating a stage and getting the data.

CREATE OR REPLACE stage MANAGE_DB.EXTERNAL_STAGES.JSONSTAGE

url='s3://bucketsnowflake-jsondemo';

//Creating a json file format as the default file format is csv.

CREATE OR REPLACE file format MANAGE_DB.FILE_FORMATS.JSONFORMAT

TYPE = JSON;

//Creating our table where we will push the data we pull from a stage

CREATE OR REPLACE table OUR_FIRST_DB.PUBLIC.JSON_RAW (

raw_file variant);

COPY INTO OUR_FIRST_DB.PUBLIC.JSON_RAW

FROM @<stage_name>

file_format= MANAGE_DB.FILE_FORMATS.JSONFORMAT

files = ('HR_data.json');

SELECT * FROM OUR_FIRST_DB.PUBLIC.JSON_RAW;

Explanation: The whole gist of the above code is to copy a JSON file into the table we created. I have also shown the complete steps so that you are not lost anywhere while reading or preparing for the exam.
One thing to notice is that while copying data from the stage we use a column raw_file with the data type variant. Why? Because the variant data type handles semi-structured data very well.
Now you are naturally thinking: we loaded the JSON data into a table with a variant column that handles semi-structured data very well, but how can I query it?
Below is the answer.

// Selecting an attribute/column

SELECT RAW_FILE:city FROM OUR_FIRST_DB.PUBLIC.JSON_RAW;

SELECT $1:first_name FROM OUR_FIRST_DB.PUBLIC.JSON_RAW;

Explanation: The first select query is very straightforward. It tells Snowflake to select the attribute city from the column RAW_FILE in the JSON_RAW table.
Instead of giving the column name, we can also give the column position, written as $1; the rest of the query is the same.
If we find the data to be messy or of different data types, we can cast it (change the data types). This can be seen in the query below.

// Selecting attribute/column - formatted

SELECT

RAW_FILE:id::int as id,

RAW_FILE:first_name::STRING as first_name,

RAW_FILE:last_name::STRING as last_name,

RAW_FILE:gender::STRING as gender

FROM OUR_FIRST_DB.PUBLIC.JSON_RAW;
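If an attribute holds an array, each element can be expanded into its own row with the FLATTEN table function. A minimal sketch, assuming the raw file contains a hypothetical prev_company array:

// Flattening an array attribute into one row per element
SELECT
RAW_FILE:first_name::STRING as first_name,
f.value::STRING as prev_company
FROM OUR_FIRST_DB.PUBLIC.JSON_RAW,
TABLE(FLATTEN(input => RAW_FILE:prev_company)) f;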

46. Time travel — Time travel is a feature in Snowflake to retrieve data that we have updated or deleted accidentally. Essentially, we can travel back in time and see how the data looked before we updated/deleted it. There are two methods for time travel: one goes back a certain number of seconds using an offset, and the other goes back to a point in time (up to days) using a timestamp. Let's look at both methods.

SELECT * FROM OUR_FIRST_DB.public.test AT (OFFSET => -60*1.5);

Explanation: The minus sign tells Snowflake to go back 90 seconds (60 × 1.5) and return the data as it was at that time.
If you want to know your current timestamp, you can use the commands below.

ALTER SESSION SET TIMEZONE = 'UTC';
SELECT CURRENT_TIMESTAMP;


NOTE: Standard time travel retention is 1 day; extended retention is up to 90 days in the Enterprise edition and above. Also note that time travel and restoring the data are two different concepts: with time travel we are just looking at how the data was, NOT restoring it.
The method below can travel back up to days, depending on the edition you are using.

SELECT * FROM OUR_FIRST_DB.public.test BEFORE (TIMESTAMP => '2021-04-15 17:47:50.581'::timestamp);
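Time travel pairs well with zero-copy cloning when you do want to restore: clone the table as it was at the offset or timestamp, then swap it in. A minimal sketch, assuming the same test table (test_restored is a hypothetical name):

// Restore by cloning the table as it was 90 seconds ago
CREATE OR REPLACE TABLE OUR_FIRST_DB.public.test_restored
CLONE OUR_FIRST_DB.public.test AT (OFFSET => -60*1.5);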

47. Truncate columns — This copy option gives us on-the-fly control over string lengths while copying. If the source data has strings of length 20 and the destination column has length 10, we can truncate the values to 10 characters using the command below.

COPY INTO OUR_FIRST_DB.public.test

from @<stage_name>

files = ('customers.csv')

TRUNCATECOLUMNS = TRUE | FALSE;

Explanation: You must be wondering where the truncate limit is specified. The answer is that it comes from the column data type of the destination table. Suppose the destination table has a column last_name varchar(10); if we write TRUNCATECOLUMNS = TRUE in the copy command (above code), then only the first 10 characters will be inserted into the final table.
48. Force — FORCE = TRUE tells Snowflake to load all the files, regardless of whether they have been loaded previously and have not changed since they were loaded.
NOTE: This option reloads files, potentially duplicating the data in a table.

COPY INTO OUR_FIRST_DB.public.test

from @<stage_name>

files = ('customers.csv')

FORCE = TRUE | FALSE;

49. Load History — This feature enables us to view the history of data loaded into tables using the COPY INTO <table_name> command.
There are two ways of retrieving the load history in Snowflake. One is through the UI: in the database browser, navigate to SNOWFLAKE -> ACCOUNT_USAGE -> LOAD_HISTORY.
The other is through a SQL command, which is very simple, as follows.

SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.LOAD_HISTORY;

50. Creating a stream — Creates a new stream in the current/specified schema or replaces an existing stream. A stream records data manipulation language (DML) changes made to a table, directory table, external table, or the underlying tables in a view (including secure views). The object for which changes are recorded is called the source object.
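A minimal sketch, assuming hypothetical ORDERS and ORDERS_AUDIT tables: create the stream, then read the captured changes. Consuming the stream in a DML statement advances its offset.

// Create a stream on the ORDERS table
CREATE OR REPLACE STREAM orders_stream ON TABLE OUR_FIRST_DB.PUBLIC.ORDERS;

// Inspect pending changes (includes METADATA$ACTION and METADATA$ISUPDATE columns)
SELECT * FROM orders_stream;

// Consuming the stream in a DML statement moves the offset forward
INSERT INTO OUR_FIRST_DB.PUBLIC.ORDERS_AUDIT SELECT * FROM orders_stream;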
51. Dynamic data masking — Dynamic Data Masking is a column-level security feature that uses masking policies to selectively mask data at query time that was previously loaded in plain text into Snowflake.

// Creating a masking policy.

create or replace masking policy PHONE

as (val varchar) returns varchar ->

case

when current_role() in ('ANALYST_FULL', 'ACCOUNTADMIN') then val

else '###-###-####'

end;

// Applying the masking policy to a column.

ALTER TABLE IF EXISTS <table_name> MODIFY COLUMN <column_name>

SET MASKING POLICY PHONE;

Explanation: As discussed above, we create a policy and then apply it to a column; here PHONE is the policy name. We are telling Snowflake to return unmasked values only if the role is ANALYST_FULL or ACCOUNTADMIN. If any other role tries to view the phone column, '###-###-####' is returned instead. The (val varchar) is the data type of the original column, and the varchar in returns varchar is the data type of the returned value.
As seen in the code above, we first create a masking policy and then apply it to a particular column.
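If the policy needs to be changed or dropped later, it must first be detached from the column. A minimal sketch:

// Detach the masking policy from the column
ALTER TABLE IF EXISTS <table_name> MODIFY COLUMN <column_name>
UNSET MASKING POLICY;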
52. Drop a stream — DROP STREAM <stream_name>;
53. Describe a stream — DESC STREAM <stream_name>;
54. Show external tables — In a typical table, the data is stored in the database; in an external table, the data is stored in files in an external stage. External tables store file-level metadata about the data files, such as the filename, a version identifier, and related properties.
SHOW EXTERNAL TABLES;
55. Show grants — Lists all access control privileges that have been explicitly granted to roles, users, and shares.
SHOW GRANTS;
56. Show functions — Lists all the native (i.e. system-defined/built-in) scalar functions provided by Snowflake, as well as any user-defined functions (UDFs) or external functions that have been created for your account.
SHOW FUNCTIONS;
57. Show file formats — A file format is a set of rules or instructions that tells Snowflake how to interpret a file while copying it into a table.
SHOW FILE FORMATS;
58. Show integrations — Lists the integrations in your account.

//List all the integrations in snowflake.

SHOW INTEGRATIONS;

//The command output provides properties and metadata in the following columns: | name | type | category | enabled | comment | created_on |

59. Show managed accounts — Lists the managed accounts created for your account. Currently used by data providers to create reader accounts for their consumers.
SHOW MANAGED ACCOUNTS;
60. Show locks — Lists all running transactions that have locks on resources. The command can be used to show locks for the current user in all the user's sessions or for all users in the account.
SHOW LOCKS;
61. Show masking policies — A masking policy is basically a way to hide certain columns from certain roles. The hidden data appears as ###-###-#### or in whichever format you want it to appear.
SHOW MASKING POLICIES;
62. Show materialized views — A materialized view is like a normal view, with the main difference that its results are precomputed and automatically maintained by Snowflake. MVs are used for query optimization.
SHOW MATERIALIZED VIEWS;
63. Show network policies — Lists all network policies defined in the system.
SHOW NETWORK POLICIES;
64. Create a managed account — Creates a managed account in Snowflake.

CREATE MANAGED ACCOUNT <name>

ADMIN_NAME = <username> , ADMIN_PASSWORD = <user_password> ,

TYPE = READER ,

[ COMMENT = '<string_literal>' ]

65. Drop a managed account — Drops a managed account.

DROP MANAGED ACCOUNT <name>;

Database: CREATE | ALTER | DROP | DESCRIBE | SHOW

☁️ create or replace an existing database

CREATE [ OR REPLACE ] [ TRANSIENT ] DATABASE [ IF NOT EXISTS ] <database_name>


[ CLONE <source_db>
[ { AT | BEFORE } ( { TIMESTAMP => <timestamp> | OFFSET => <time_difference> } ) ] ]
[ COMMENT = '<string_literal>' ]
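For instance, a zero-copy clone of a database as it was ten minutes ago; a sketch, with demo_db as a hypothetical database name:

CREATE DATABASE demo_db_clone CLONE demo_db
AT (OFFSET => -600);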

Schema: CREATE | ALTER | DROP | DESCRIBE | SHOW

☁️ create or replace an existing schema

CREATE [ OR REPLACE ] [ TRANSIENT ] SCHEMA [ IF NOT EXISTS ] <schema_name>


[ CLONE <source_schema>
[ { AT | BEFORE } ( { TIMESTAMP => <timestamp> | OFFSET => <time_difference> } ) ] ]
[ COMMENT = '<string_literal>' ]

Table: CREATE | ALTER | DROP | DESCRIBE | SHOW | TRUNCATE

☁️ create or replace an existing table

CREATE [ OR REPLACE ]
[ { TEMPORARY | TRANSIENT } ] TABLE [ IF NOT EXISTS ] <table_name>
( <col_name> <col_type> )
[ CLUSTER BY ( <expr> [ , <expr> , ... ] ) ]
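A concrete instance of the template, with hypothetical names and a clustering key:

CREATE OR REPLACE TABLE customers (
id INT,
first_name STRING,
created_at TIMESTAMP_NTZ
)
CLUSTER BY (id);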

View: CREATE | ALTER | DROP | DESCRIBE | SHOW

☁️ create or replace an existing view

CREATE [ OR REPLACE ] [ TEMPORARY ] VIEW [ IF NOT EXISTS ] <view_name>


[ ( <column_list> ) ]
AS <select_statement>

Materialized View: CREATE | ALTER | DROP | DESCRIBE | SHOW

☁️ create or replace an existing materialized view

CREATE [ OR REPLACE ] [ TEMPORARY ] MATERIALIZED VIEW [ IF NOT EXISTS ] <mat_view_name>


[ ( <column_list> ) ]
[ CLUSTER BY ( <expr1> [, <expr2> ... ] ) ]
AS <select_statement>

Dynamic Table: CREATE | ALTER | DROP | DESCRIBE | SHOW

☁️ create or replace an existing dynamic table

CREATE [ OR REPLACE ] DYNAMIC TABLE <dyn_table_name>


TARGET_LAG = { '<num> { seconds | minutes | hours | days }' | DOWNSTREAM }
WAREHOUSE = <warehouse_name>
AS <query>
[ COMMENT = '<string_literal>' ]

Task: CREATE | ALTER | DROP | DESCRIBE | SHOW

☁️ create or replace an existing task

CREATE [ OR REPLACE ] TASK [ IF NOT EXISTS ] <task_name>


[ WAREHOUSE = <string> ]
[ SCHEDULE = '{ <num> MINUTE | USING CRON <expr> <time_zone> }' ]
[ ALLOW_OVERLAPPING_EXECUTION = TRUE | FALSE ]
[ <session_parameter> = <value> [ , <session_parameter> = <value> ... ] ]
[ USER_TASK_TIMEOUT_MS = <num> ]
[ SUSPEND_TASK_AFTER_NUM_FAILURES = <num> ]
[ COMMENT = '<string_literal>' ]
[ AFTER <string> [ , <string> , ... ] ]
[ WHEN <boolean_expr> ]
AS
{ <sql> | <stored_procedure> }
CRON expression format: <minute> <hour> <day of month> <month> <day of week>
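A concrete sketch of a task that runs every day at 06:00 UTC; the warehouse and table names are hypothetical:

CREATE OR REPLACE TASK refresh_orders_summary
WAREHOUSE = compute_wh
SCHEDULE = 'USING CRON 0 6 * * * UTC'
AS
INSERT INTO orders_summary
SELECT order_date, COUNT(*) FROM orders GROUP BY order_date;

Newly created tasks start suspended; run ALTER TASK refresh_orders_summary RESUME; to activate the schedule.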

Stream: CREATE | ALTER | DROP | DESCRIBE | SHOW

☁️ create or replace an existing stream on a table

CREATE [ OR REPLACE ] STREAM [IF NOT EXISTS] <stream_name>


ON TABLE <table_name>
[ APPEND_ONLY = TRUE | FALSE ]
[ SHOW_INITIAL_ROWS = TRUE | FALSE ]
[ COMMENT = '<string_literal>' ]

☁️ create or replace an existing stream on a directory table

CREATE [ OR REPLACE ] STREAM [IF NOT EXISTS] <stream_name>


ON STAGE <stage_name>
[ COMMENT = '<string_literal>' ]

☁️ create or replace an existing stream on a view

CREATE [ OR REPLACE ] STREAM [IF NOT EXISTS] <stream_name>


ON VIEW <view_name>
[ APPEND_ONLY = TRUE | FALSE ]
[ SHOW_INITIAL_ROWS = TRUE | FALSE ]
[ COMMENT = '<string_literal>' ]

Alert: CREATE | ALTER | DROP | DESCRIBE | SHOW | HISTORY

☁️ create or replace an existing alert

CREATE [ OR REPLACE ] ALERT [ IF NOT EXISTS ] <alert_name>


WAREHOUSE = <warehouse_name>
SCHEDULE = '{ <num> MINUTE | USING CRON <expr> <time_zone> }'
COMMENT = '<string_literal>'
IF( EXISTS(
<condition>
))
THEN
<action>
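A concrete sketch of an alert; the warehouse, tables, and action are hypothetical:

CREATE OR REPLACE ALERT low_stock_alert
WAREHOUSE = compute_wh
SCHEDULE = '60 MINUTE'
IF( EXISTS(
SELECT 1 FROM products WHERE stock < 10
))
THEN
INSERT INTO alert_log SELECT CURRENT_TIMESTAMP(), 'low stock detected';

Like tasks, alerts are created suspended; run ALTER ALERT low_stock_alert RESUME; to activate them.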

Stage: CREATE | ALTER | DROP | DESCRIBE | SHOW | LIST

☁️ create or replace an internal stage

CREATE [ OR REPLACE ] [ TEMPORARY ] STAGE [ IF NOT EXISTS ] <internal_stage_name>


[ DIRECTORY = ( ENABLE = { TRUE | FALSE }
[ REFRESH_ON_CREATE = { TRUE | FALSE } ] ) ]
[ FILE_FORMAT = ( TYPE = { CSV | JSON | AVRO | ORC | PARQUET | XML } ) ]
[ COPY_OPTIONS = ( ON_ERROR = { CONTINUE | SKIP_FILE | ABORT_STATEMENT }) ]
Cloud providers: Amazon S3 | Microsoft Azure | Google Cloud Storage

☁️ create or replace an external stage for Amazon S3

CREATE [ OR REPLACE ] [ TEMPORARY ] STAGE [ IF NOT EXISTS ] <external_stage_name>

URL = 's3://<bucket>[/<path>/]'
[ STORAGE_INTEGRATION = <integration_name> ]
[ DIRECTORY = ( ENABLE = { TRUE | FALSE }
[ REFRESH_ON_CREATE = { TRUE | FALSE } ]
[ AUTO_REFRESH = { TRUE | FALSE } ] ) ]
[ FILE_FORMAT = ( TYPE = { CSV | JSON | AVRO | ORC | PARQUET | XML } ) ]
[ COPY_OPTIONS = ( ON_ERROR = { CONTINUE | SKIP_FILE | ABORT_STATEMENT }) ]

Loading Data: COPY INTO (standard) | COPY INTO (with transformation)

☁️ copy data from internal or external stage

COPY INTO [<namespace>.]<table_name>


FROM {@[<namespace>.]<stage_name>[/<path>]
| @[<namespace>.]%<table_name>[/<path>]
| @~[/<path>]
}
[ FILES = ( '<file_name>' [ , '<file_name>' ] [ , ... ] ) ]
[ PATTERN = '<regex_pattern>' ]
[ FILE_FORMAT = ( TYPE = { CSV | JSON | AVRO | ORC | PARQUET | XML } ) ]
[ ON_ERROR = { CONTINUE | SKIP_FILE | ABORT_STATEMENT } ]
[ FORCE = { TRUE | FALSE } ]
[ VALIDATION_MODE = RETURN_<n>_ROWS | RETURN_ERRORS | RETURN_ALL_ERRORS ]
Cloud providers: Amazon S3 | Microsoft Azure | Google Cloud Storage

☁️ copy data from Amazon S3 directly

COPY INTO [<namespace>.]<table_name>


FROM 's3://<bucket>[/<path>]'
[ { STORAGE_INTEGRATION = <integration_name> } | { CREDENTIALS = ( { { AWS_KEY_ID =
'<string>' AWS_SECRET_KEY = '<string>' [ AWS_TOKEN = '<string>' ] } } ) } ]
[ ENCRYPTION = ( [ TYPE = 'AWS_CSE' ] [ MASTER_KEY = '<string>' ] |
[ TYPE = 'AWS_SSE_S3' ] |
[ TYPE = 'AWS_SSE_KMS' [ KMS_KEY_ID = '<string>' ] ] |
[ TYPE = 'NONE' ] ) ]
[ FILES = ( '<file_name>' [ , '<file_name>' ] [ , ... ] ) ]
[ PATTERN = '<regex_pattern>' ]
[ FILE_FORMAT = ( TYPE = { CSV | JSON | AVRO | ORC | PARQUET | XML } ) ]
[ ON_ERROR = { CONTINUE | SKIP_FILE | ABORT_STATEMENT } ]
[ FORCE = { TRUE | FALSE } ]
[ VALIDATION_MODE = RETURN_<n>_ROWS | RETURN_ERRORS | RETURN_ALL_ERRORS ]
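The "with transformation" variant selects from the staged files, so columns can be reordered, cast, or derived during the load. A minimal sketch, assuming a hypothetical CSV stage and table:

COPY INTO mytable (id, amount)
FROM (SELECT $1::INT, $3::NUMBER(10,2) FROM @my_stage)
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 1);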

Data Manipulation: INSERT | UPDATE | MERGE | DELETE

☁️ insert or replace data into a table from explicit values

INSERT [ OVERWRITE ] INTO <target_table> [ ( <target_col_name> [ , ... ] ) ]


VALUES ( { <value> | DEFAULT | NULL } [ , ... ] ) [ , ( ... ) ]

☁️ insert or replace data into a table from a select query

INSERT [ OVERWRITE ] INTO <target_table> [ ( <target_col_name> [ , ... ] ) ]


< query >

Function (UDF): CREATE | ALTER | DROP | DESCRIBE | SHOW
Languages: Python | SQL | JavaScript | Scala

☁️ create or replace an existing Python UDF (user-defined function)

CREATE [ OR REPLACE ] [ TEMPORARY ] FUNCTION [ IF NOT EXISTS ] <func_name> ( [ <arg_name>


<arg_data_type> ] [ , ... ] )
RETURNS { <result_data_type> | TABLE ( <col_name> <col_data_type> [ , ... ] ) }
[ [ NOT ] NULL ]
LANGUAGE PYTHON
[ { CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT } ]
RUNTIME_VERSION = <python_version>
[ COMMENT = '<string_literal>' ]
[ IMPORTS = ( '<stage_path_and_file_name_to_read>' [ , ... ] ) ]
[ PACKAGES = ( '<package_name>[==<version>]' [ , ... ] ) ]
HANDLER = '<function_name>'
AS '<function_definition>'
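A concrete instance of the template; the function simply doubles its input, and the runtime version is an assumption (use one your account supports):

CREATE OR REPLACE FUNCTION double_it(x INT)
RETURNS INT
LANGUAGE PYTHON
RUNTIME_VERSION = '3.9'
HANDLER = 'double_it'
AS
$$
def double_it(x):
    return x * 2
$$;

SELECT double_it(21);  -- returns 42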

Procedure: CREATE | ALTER | DROP | DESCRIBE | SHOW
Languages: Python | SQL | JavaScript | Scala (in-line or on stage)

☁️ create or replace an existing in-line Python stored procedure

CREATE [ OR REPLACE ] PROCEDURE <procedure_name> ( [ <arg_name> <arg_data_type> ] [ , ... ] )


RETURNS { <result_data_type> [ [ NOT ] NULL ] | TABLE ( [ <col_name> <col_data_type> [ , ...
] ] ) }
LANGUAGE PYTHON
RUNTIME_VERSION = '<python_version>'
PACKAGES = ( 'snowflake-snowpark-python[==<version>]'[, '<package_name>[==<version>]' ... ])
[ IMPORTS = ( '<stage_path_and_file_name_to_read>' [, '<stage_path_and_file_name_to_read>'
...] ) ]
HANDLER = '<function_name>'
[ COMMENT = '<string_literal>' ]
AS '<procedure_definition>'

Pipe: CREATE | ALTER | DROP | DESCRIBE | SHOW | PIPE_STATUS
Cloud providers: Amazon S3 | Microsoft Azure | Google Cloud Storage

☁️ create or replace a pipe from Amazon S3

CREATE [ OR REPLACE ] PIPE [ IF NOT EXISTS ] <pipe_name>


[ AUTO_INGEST = [ TRUE | FALSE ] ]
[ ERROR_INTEGRATION = <integration_name> ]
[ AWS_SNS_TOPIC = '<string>' ]
[ COMMENT = '<string_literal>' ]
AS <copy_statement>
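A concrete sketch that reuses the external stage created earlier (the pipes schema and target table are hypothetical; AUTO_INGEST assumes event notifications are configured on the bucket):

CREATE OR REPLACE PIPE MANAGE_DB.pipes.orders_pipe
AUTO_INGEST = TRUE
AS
COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS
FROM @MANAGE_DB.external_stages.aws_stage
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 1);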

Snowflake main commands: quick-guide cheat sheet

Snowflake is a cloud-based data warehousing platform; here is a quick-guide cheat sheet for some of its main commands:

Basic Commands:
1. Log in (SnowSQL CLI):
snowsql -a <account_name> -u <username> -w <warehouse_name> -r <role_name> -d <database_name> -s <schema_name>
2. Switch Database:
USE DATABASE <database_name>;
3. Switch Schema:
USE SCHEMA <schema_name>;
4. List Warehouses:
SHOW WAREHOUSES;
5. List Databases:
SHOW DATABASES;
6. List Schemas:
SHOW SCHEMAS;

Data Definition Language (DDL):
1. Create Table:
CREATE TABLE <table_name> (column1 datatype, column2 datatype, ...);
2. Alter Table (Add Column):
ALTER TABLE <table_name> ADD COLUMN <column_name> datatype;
3. Describe Table:
DESCRIBE TABLE <table_name>;
4. Create Database:
CREATE DATABASE <database_name>;
5. Create Schema:
CREATE SCHEMA <schema_name>;

Data Manipulation Language (DML):
1. Insert Data:
INSERT INTO <table_name> (column1, column2, ...) VALUES (value1, value2, ...);
2. Select Data:
SELECT column1, column2, ... FROM <table_name> WHERE condition;
3. Update Data:
UPDATE <table_name> SET column1 = value1 WHERE condition;
4. Delete Data:
DELETE FROM <table_name> WHERE condition;

Data Querying and Analysis:
1. Aggregate Functions:
SELECT AVG(column), SUM(column), COUNT(*), MAX(column), MIN(column) FROM <table_name>;
2. Group By:
SELECT column1, AVG(column2) FROM <table_name> GROUP BY column1;
3. Order By:
SELECT * FROM <table_name> ORDER BY column ASC/DESC;
4. Joins:
SELECT * FROM table1 JOIN table2 ON table1.column = table2.column;

Security and Access Control:
1. Grant Privileges:
GRANT <privilege> ON <object> TO <user_or_role>;
2. Revoke Privileges:
REVOKE <privilege> ON <object> FROM <user_or_role>;
3. Show Grants:
SHOW GRANTS TO <user_or_role>;

Data Loading and Unloading:
1. Copy Data In:
COPY INTO <table_name> FROM <location> FILE_FORMAT = (FORMAT_NAME = 'csv');
2. Copy Data Out:
COPY INTO <location> FROM <table_name> FILE_FORMAT = (FORMAT_NAME = 'csv');
3. Unload Query Results to a Stage:
COPY INTO @<stage_name> FROM (SELECT * FROM <table_name>);

These are some essential Snowflake commands to get started. Remember to replace placeholders like <username>, <account_name>, <database_name>, <table_name>, etc., with your actual values.
Here are some more Snowflake commands for various tasks:

Snowflake Administration:
1. Create User:
CREATE USER <username> PASSWORD = '<password>';
2. Create Role:
CREATE ROLE <role_name>;
3. Grant Role to User:
GRANT ROLE <role_name> TO USER <username>;
4. List Users:
SHOW USERS;
5. List Roles:
SHOW ROLES;

Transaction Control:
1. Begin Transaction:
BEGIN;
2. Commit Transaction:
COMMIT;
3. Rollback Transaction:
ROLLBACK;

Metadata and System Information:
1. Show Tables:
SHOW TABLES;
2. Show Views:
SHOW VIEWS;
3. Information Schema:
SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = '<schema_name>';
4. Current Date and Time:
SELECT CURRENT_TIMESTAMP();

Snowflake Storage Management:
1. Show Storage Utilization:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.TABLE_STORAGE_METRICS;
2. Alter Table (Set Clustering Key):
ALTER TABLE <table_name> CLUSTER BY (<column_name>);
3. Clone Table:
CREATE TABLE <new_table_name> CLONE <existing_table_name>;

Snowflake Functions:
1. Date Functions:
SELECT CURRENT_DATE(), DATEADD(DAY, 7, CURRENT_DATE());
2. String Functions:
SELECT UPPER(column), LOWER(column), SUBSTRING(column, start, length);
3. Math Functions:
SELECT ABS(column), ROUND(column, 2), POWER(column, 2);
4. Conditional Functions:
SELECT CASE WHEN condition THEN result ELSE else_result END;

External Functions:
1. Create External Function (calls a remote service through an API integration):
CREATE EXTERNAL FUNCTION <function_name>(<arg_name> <arg_data_type>) RETURNS <return_type> API_INTEGRATION = <integration_name> AS '<url_of_proxy_and_resource>';
2. List External Functions:
SHOW EXTERNAL FUNCTIONS;
3. Drop External Function:
DROP FUNCTION <function_name>(<arg_data_types>);

These commands cover a broad range of Snowflake functionalities. Remember to refer to the official Snowflake documentation for more detailed information and examples.

Snowflake provides additional commands for working with stages, including PUT and GET commands for data loading and unloading. Here's an extension to the cheat sheet:

Stages Commands:
1. Create Stage:
CREATE STAGE <stage_name>;
2. List Stages:
SHOW STAGES;
3. List Files in Stage:
LIST @<stage_name>;
4. Put Data into Stage:
PUT file://<local_file_path> @<stage_name>;
5. Get Data from Stage:
GET @<stage_name> file://<local_file_path>;
6. Copy Data from Stage to Table:
COPY INTO <table_name> FROM @<stage_name> FILE_FORMAT = (FORMAT_NAME = 'csv');
7. Copy Data from Table to Stage:
COPY INTO @<stage_name> FROM <table_name> FILE_FORMAT = (FORMAT_NAME = 'csv');

Snowpipe Commands:
1. Create Snowpipe:
CREATE PIPE <pipe_name> AUTO_INGEST = TRUE AS COPY INTO <table_name> FROM @<stage_name>;
2. Resume Snowpipe:
ALTER PIPE <pipe_name> SET PIPE_EXECUTION_PAUSED = FALSE;
3. Pause Snowpipe:
ALTER PIPE <pipe_name> SET PIPE_EXECUTION_PAUSED = TRUE;
4. List Snowpipes:
SHOW PIPES;

Data Definition in Stages:
1. Create File Format (CSV):
CREATE FILE FORMAT <format_name> TYPE = 'CSV' FIELD_OPTIONALLY_ENCLOSED_BY = '"';
2. Create File Format (JSON):
CREATE FILE FORMAT <format_name> TYPE = 'JSON';
3. Create File Format (Parquet):
CREATE FILE FORMAT <format_name> TYPE = 'PARQUET';
4. Create File Format (Avro):
CREATE FILE FORMAT <format_name> TYPE = 'AVRO';

Other Useful Commands:
1. Show Query History:
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY());
2. Show Storage Usage:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.STORAGE_USAGE;
3. Show Table Metadata:
SHOW TABLES LIKE '<table_name>';
4. Show Current Session:
SELECT CURRENT_SESSION();
5. Show Information about a Function:
DESC FUNCTION <function_name>;

Remember to replace placeholders like <stage_name>, <pipe_name>, <local_file_path>, etc., with your actual values. This extended cheat sheet covers more stages, file formats, and Snowpipe commands to enhance your usage of Snowflake.

Let's expand the cheat sheet to include more commands related to tables,
user management, and additional stage operations:

Table Operations:
1. Truncate Table:
TRUNCATE TABLE <table_name>;
2. Drop Table:
DROP TABLE <table_name>;
3. Create View:
CREATE VIEW <view_name> AS SELECT column1, column2 FROM <table_name> WHERE condition;
4. Drop View:
DROP VIEW <view_name>;
5. Clone Table:
CREATE TABLE <new_table_name> CLONE <existing_table_name>;
User and Role Management:
1. Create User:
CREATE USER <username> PASSWORD = '<password>';
2. Create Role:
CREATE ROLE <role_name>;
3. Grant Role to User:
GRANT ROLE <role_name> TO USER <username>;
4. Revoke Role from User:
REVOKE ROLE <role_name> FROM USER <username>;
5. List Users:
SHOW USERS;
6. List Roles:
SHOW ROLES;

Stage Operations:
1. Copy Data from Stage to Table (with Pattern):
COPY INTO <table_name> FROM @<stage_name> PATTERN='.*\.csv';
2. Remove File from Stage:
REMOVE @<stage_name>/file_name.csv;
3. Remove All Files from Stage:
REMOVE @<stage_name>;
4. Set Time Travel Retention (on a table; stages have no time travel):
ALTER TABLE <table_name> SET DATA_RETENTION_TIME_IN_DAYS = 7;
5. List Tables (the kind column shows TRANSIENT tables):
SHOW TABLES;

Data Sharing:
1. Share a Database (sharing goes through a share object):
GRANT USAGE ON DATABASE <database_name> TO SHARE <share_name>;
ALTER SHARE <share_name> ADD ACCOUNTS = <account_identifier>;
2. List Shares:
SHOW SHARES;
3. Revoke Share Access:
REVOKE USAGE ON DATABASE <database_name> FROM SHARE <share_name>;

External Tables:
1. Create External Table (over files in an external stage):
CREATE EXTERNAL TABLE <external_table_name> LOCATION = @<stage_name> FILE_FORMAT = (TYPE = PARQUET);
2. List External Tables:
SHOW EXTERNAL TABLES;
3. Refresh External Table Metadata:
ALTER EXTERNAL TABLE <external_table_name> REFRESH;

This extended cheat sheet includes commands related to table management, user and role operations, additional stage operations, and data sharing in Snowflake. As always, replace placeholders like <username>, <table_name>, <role_name>, etc., with your actual values.

Let's delve deeper into different types of stages in Snowflake, including user
stages, named stages, table stages, and external stages:

User Stages (every user has a built-in stage, referenced as @~; it is not created explicitly):
1. List Files in Your User Stage:
LIST @~;
2. Copy Data to Your User Stage:
PUT file://<local_file_path> @~;
3. Copy Data from Your User Stage:
GET @~ file://<local_file_path>;

Named Stages:
1. Create Named Stage:
CREATE STAGE <named_stage_name>;
2. List Stages:
SHOW STAGES;
3. Copy Data to Named Stage:
PUT file://<local_file_path> @<named_stage_name>;
4. Copy Data from Named Stage:
GET @<named_stage_name> file://<local_file_path>;

Table Stages (every table has a built-in stage, referenced as @%<table_name>):
1. List Files in a Table Stage:
LIST @%<table_name>;
2. Copy Data to a Table Stage:
PUT file://<local_file_path> @%<table_name>;
3. Copy Data from a Table Stage:
GET @%<table_name> file://<local_file_path>;

External Stages:
1. Create External Stage:
CREATE STAGE <external_stage_name> URL = 's3://<bucket>/<path>' STORAGE_INTEGRATION = <integration_name>;
2. List Stages (the type column shows EXTERNAL):
SHOW STAGES;
Note: PUT and GET work only with internal stages; files in external stages are managed through the cloud provider's own tooling.

Additional Stage Operations:
1. Remove Files Matching a Pattern from a Stage:
REMOVE @<stage_name> PATTERN='.*\.csv';
2. Get Stage Information:
DESCRIBE STAGE <stage_name>;
3. Unload Data to a Named Stage:
COPY INTO @<named_stage_name> FROM <table_name>;

This section covers various types of stages in Snowflake, including user stages, named stages, table stages, and external stages. Adjust the commands based on your specific use cases and replace placeholders like <username>, <table_name>, <named_stage_name>, etc., with your actual values.

Let's expand the cheat sheet to include more advanced commands related to
Snowflake features like streams, tasks, MERGE statements, and stored
procedures:
Streams:
1. Create Stream:
CREATE OR REPLACE STREAM <stream_name> ON TABLE <table_name>;
2. List Streams:
SHOW STREAMS;
3. Describe Stream:
DESCRIBE STREAM <stream_name>;
4. Alter Stream:
ALTER STREAM <stream_name> SET COMMENT = '<comment>';
5. Read from Stream (without consuming it):
SELECT * FROM <stream_name>;
6. Consume Stream (using it in a DML statement advances the offset):
INSERT INTO <target_table> SELECT * FROM <stream_name>;

Tasks:
1. Create Task:
CREATE OR REPLACE TASK <task_name> WAREHOUSE = <warehouse_name> SCHEDULE = 'USING CRON 0 0 * * * UTC' AS INSERT INTO <target_table> SELECT * FROM <source_table>;
2. List Tasks:
SHOW TASKS;
3. Run Task Manually:
EXECUTE TASK <task_name>;
4. Suspend/Resume Task:
ALTER TASK <task_name> SUSPEND;
ALTER TASK <task_name> RESUME;
5. Describe Task:
DESCRIBE TASK <task_name>;
MERGE Statement:
1. MERGE Statement (upsert):
MERGE INTO <target_table> USING <source_table> ON <target_table.column> = <source_table.column> WHEN MATCHED THEN UPDATE SET <target_table.column> = <source_table.column> WHEN NOT MATCHED THEN INSERT (<column1, column2, ...>) VALUES (<value1, value2, ...>);
2. MERGE with DELETE (Snowflake deletes via a matched clause; there is no NOT MATCHED BY SOURCE):
MERGE INTO <target_table> USING <source_table> ON <target_table.column> = <source_table.column> WHEN MATCHED AND <condition> THEN DELETE WHEN NOT MATCHED THEN INSERT (<column1, column2, ...>) VALUES (<value1, value2, ...>);
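A concrete sketch of the upsert pattern, with hypothetical customers and customers_staging tables:

MERGE INTO customers c
USING customers_staging s
ON c.id = s.id
WHEN MATCHED THEN UPDATE SET c.name = s.name
WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name);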

Stored Procedures:
1. Create Stored Procedure:
CREATE OR REPLACE PROCEDURE <procedure_name>() RETURNS STRING LANGUAGE JAVASCRIPT EXECUTE AS CALLER AS $$ // JavaScript code $$;
2. Call Stored Procedure:
CALL <procedure_name>();
3. List Procedures:
SHOW PROCEDURES;
4. Alter Stored Procedure (comment or rename; to change the body, use CREATE OR REPLACE):
ALTER PROCEDURE <procedure_name>(<arg_data_types>) SET COMMENT = '<comment>';

Other Advanced Commands:
1. Set Session Parameters:
ALTER SESSION SET <parameter_name> = <new_value>;
2. Clone Database:
CREATE DATABASE <new_database_name> CLONE <existing_database_name>;
3. Clone Schema:
CREATE SCHEMA <new_schema_name> CLONE <existing_schema_name>;
4. Set Table Data Retention (Time Travel):
ALTER TABLE <table_name> SET DATA_RETENTION_TIME_IN_DAYS = 1;
5. Avoid the Fail-safe Period (Fail-safe cannot be switched off on a permanent table; use a transient table instead):
CREATE TRANSIENT TABLE <table_name> (column1 datatype);

This advanced section includes commands related to Snowflake features such as streams, tasks, MERGE statements, stored procedures, and additional advanced configurations. Replace placeholders like <stream_name>, <task_name>, <procedure_name>, etc., with your actual values.

Let's include more commands and features in the Snowflake cheat sheet:

Time Travel and Versioning:
1. Time Travel Query (to a specific timestamp):
SELECT * FROM <table_name> AT(TIMESTAMP => 'YYYY-MM-DD HH:MI:SS'::timestamp);
2. Time Travel Query (using OFFSET, in seconds):
SELECT * FROM <table_name> AT(OFFSET => -10);
3. Show Dropped Versions of Tables:
SHOW TABLES HISTORY;
4. Restore a Dropped Table:
UNDROP TABLE <table_name>;
(To restore a table to an earlier state, clone it AT a timestamp and swap the clone in.)

Materialized Views:
1. Create Materialized View:
CREATE MATERIALIZED VIEW <view_name> AS SELECT * FROM <table_name>;
2. Suspend/Resume Maintenance (Snowflake refreshes MVs automatically; there is no manual REFRESH command):
ALTER MATERIALIZED VIEW <view_name> SUSPEND;
ALTER MATERIALIZED VIEW <view_name> RESUME;
3. List Materialized Views:
SHOW MATERIALIZED VIEWS;

Row Access Policies:
1. Create Row Access Policy (a boolean expression evaluated per row):
CREATE ROW ACCESS POLICY <policy_name> AS (val VARCHAR) RETURNS BOOLEAN -> CURRENT_ROLE() = 'ANALYST' OR val = '<condition_value>';
2. Attach Policy to a Table:
ALTER TABLE <table_name> ADD ROW ACCESS POLICY <policy_name> ON (<column_name>);
3. List Row Access Policies:
SHOW ROW ACCESS POLICIES;
4. Drop Row Access Policy (detach it from all tables first):
DROP ROW ACCESS POLICY <policy_name>;

Network Policies:
1. Create Network Policy:
CREATE NETWORK POLICY <policy_name> ALLOWED_IP_LIST = ('192.168.1.0/24', '10.0.0.1') BLOCKED_IP_LIST = ('192.168.1.5', '10.0.0.2');
2. List Network Policies:
SHOW NETWORK POLICIES;
3. Drop Network Policy:
DROP NETWORK POLICY <policy_name>;

Query Profiling:
1. Find the Last Query ID:
SELECT LAST_QUERY_ID();
2. Re-read a Query's Result Set:
SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));
3. Review Query History (timings, bytes scanned):
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) ORDER BY start_time DESC;
(The detailed Query Profile is viewed per query in the web UI.)
Object Dependencies:
1. Show Dependencies on an Object:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.OBJECT_DEPENDENCIES WHERE REFERENCED_OBJECT_NAME = '<object_name>';

External Functions:
1. Create External Function:
CREATE EXTERNAL FUNCTION <function_name>(<arg_name> <arg_data_type>) RETURNS <data_type> API_INTEGRATION = <integration_name> AS '<url_of_proxy_and_resource>';
2. List External Functions:
SHOW EXTERNAL FUNCTIONS;
3. Drop External Function:
DROP FUNCTION <function_name>(<arg_data_types>);

These additional commands cover features like time travel, materialized views, row access policies, network policies, query profiling, object dependencies, and external functions in Snowflake. Adjust the commands based on your specific use cases and replace placeholders like <table_name>, <view_name>, <function_name>, etc., with your actual values.

Let's continue with more advanced commands and features in Snowflake:

Security and Key Management:
Snowflake encrypts all data automatically and manages the key hierarchy itself, so there are no user-facing CREATE KEY or per-column ENCRYPT commands. What you can control:
1. Enable Periodic Rekeying of Data (account level):
ALTER ACCOUNT SET PERIODIC_DATA_REKEYING = TRUE;
2. Protect Individual Columns with Masking Policies (see the Data Masking section below).

Task Dependencies:
1. Create Task with Dependencies:
CREATE OR REPLACE TASK <task_name> WAREHOUSE = <warehouse_name> AFTER <dependent_task_name> AS INSERT INTO <target_table> SELECT * FROM <source_table>;
2. List Task Dependencies:
SHOW TASKS;

Data Masking:
1. Create Data Masking Policy:
CREATE MASKING POLICY <masking_policy_name> AS (val STRING) RETURNS STRING -> CASE WHEN CURRENT_ROLE() = '<privileged_role>' THEN val ELSE '***MASKED***' END;
2. Apply Data Masking Policy:
ALTER TABLE <table_name> MODIFY COLUMN <column_name> SET MASKING POLICY <masking_policy_name>;
3. List Data Masking Policies:
SHOW MASKING POLICIES;

Data Classification:
1. Classify a Table's Columns (Snowflake's built-in classification, which applies semantic-category tags):
CALL SYSTEM$CLASSIFY('<database>.<schema>.<table_name>', {'auto_tag': true});
2. View Classification Tags on a Table's Columns:
SELECT * FROM TABLE(INFORMATION_SCHEMA.TAG_REFERENCES_ALL_COLUMNS('<database>.<schema>.<table_name>', 'table'));

Resource Monitors:
1. Create Resource Monitor:
CREATE RESOURCE MONITOR <monitor_name> WITH CREDIT_QUOTA = 20;
2. Assign Resource Monitor to Warehouse:
ALTER WAREHOUSE <warehouse_name> SET RESOURCE_MONITOR = <monitor_name>;
3. List Resource Monitors:
SHOW RESOURCE MONITORS;

Object Tags:
1. Create Tag:
CREATE TAG <tag_name>;
2. Tag Table:
ALTER TABLE <table_name> SET TAG <tag_name> = '<tag_value>';
3. Tag View:
ALTER VIEW <view_name> SET TAG <tag_name> = '<tag_value>';
4. List Object Tags:
SHOW TAGS;

Row Level Security:
1. Create Row Level Security Policy:
CREATE ROW ACCESS POLICY <policy_name> AS (role_col VARCHAR) RETURNS BOOLEAN -> role_col = CURRENT_ROLE();
2. Alter Row Level Security Policy (replace its body):
ALTER ROW ACCESS POLICY <policy_name> SET BODY -> role_col = CURRENT_ROLE();
3. List Policies Attached to a Table:
SELECT * FROM TABLE(INFORMATION_SCHEMA.POLICY_REFERENCES(REF_ENTITY_NAME => '<table_name>', REF_ENTITY_DOMAIN => 'table'));

Data Replication:
1. Enable Replication for a Database (run in the primary account):
ALTER DATABASE <database_name> ENABLE REPLICATION TO ACCOUNTS <org_name>.<account_name>;
2. Create a Secondary (Replica) Database (run in the target account):
CREATE DATABASE <database_name> AS REPLICA OF <org_name>.<account_name>.<database_name>;
3. Refresh the Replica:
ALTER DATABASE <database_name> REFRESH;
4. List Replication Databases:
SHOW REPLICATION DATABASES;

These advanced commands cover features like security and key management, task dependencies, data masking, data classification, resource monitors, object tags, row-level security, and data replication in Snowflake. Adjust the commands based on your specific use cases and replace placeholders like <table_name>, <tag_name>, <database_name>, etc., with your actual values.

Let's include commands related to User-Defined Functions (UDFs) in Snowflake:

User-Defined Functions (UDFs):
1. Create JavaScript UDF:
CREATE OR REPLACE FUNCTION <udf_name>(arg1 INT, arg2 STRING) RETURNS STRING LANGUAGE JAVASCRIPT AS $$ /* JavaScript code */ return ARG2; $$;
2. Create SQL UDF (the body is a SQL expression):
CREATE OR REPLACE FUNCTION <udf_name>(arg1 INT, arg2 STRING) RETURNS STRING AS $$ <sql_expression> $$;
3. List UDFs:
SHOW USER FUNCTIONS;
4. Call UDF:
SELECT <udf_name>(column1, column2) FROM <table_name>;
5. Drop UDF (identified by its argument types):
DROP FUNCTION IF EXISTS <udf_name>(INT, STRING);
6. Alter UDF (rename or comment; to change the body, use CREATE OR REPLACE):
ALTER FUNCTION <udf_name>(INT, STRING) RENAME TO <new_udf_name>;
7. Show UDF Definition:
DESC FUNCTION <udf_name>(INT, STRING);

Data Sampling:
1. Sample Data from Table (10% of rows):
SELECT * FROM <table_name> SAMPLE (10);
2. Bernoulli (Row) Sampling:
SELECT * FROM <table_name> TABLESAMPLE BERNOULLI (10);
3. Block Sampling:
SELECT * FROM <table_name> TABLESAMPLE SYSTEM (10);

Object Storage Integration:
1. List External Tables in a Schema:
SHOW EXTERNAL TABLES IN SCHEMA <schema_name>;
2. Unload Data to an External Stage:
COPY INTO @<external_stage_name> FROM <table_name> FILE_FORMAT = (FORMAT_NAME = 'csv');
3. Load Data from an External Stage into a Table:
COPY INTO <table_name> FROM @<external_stage_name> FILE_FORMAT = (FORMAT_NAME = 'csv');

Query Caching:
1. Enable Query Result Caching:
ALTER SESSION SET USE_CACHED_RESULT = TRUE;
2. Disable Query Result Caching:
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
3. Note: cached results cannot be cleared manually; they expire automatically (24 hours after last use).

Data Size Estimation:
1. Table Sizes in a Schema (see the bytes and rows columns):
SHOW TABLES IN SCHEMA <schema_name>;
2. Detailed Table Storage Metrics:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.TABLE_STORAGE_METRICS WHERE TABLE_NAME = '<table_name>';
3. Account-wide Storage Usage:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.STORAGE_USAGE;

Object Ownership and Permissions:
1. Transfer Object Ownership:
GRANT OWNERSHIP ON TABLE <table_name> TO ROLE <role_name>;
2. Grant Ownership of Schema:
GRANT OWNERSHIP ON SCHEMA <schema_name> TO ROLE <role_name>;
3. Transfer Ownership and Drop the Previous Role's Grants:
GRANT OWNERSHIP ON TABLE <table_name> TO ROLE <role_name> REVOKE CURRENT GRANTS;
These commands cover various aspects of Snowflake, including User-Defined Functions (UDFs), data sampling, object storage integration, query caching, data size estimation, and object ownership and permissions. Adjust the commands based on your specific use cases and replace placeholders like <udf_name>, <table_name>, <external_stage_name>, etc., with your actual values.

Let's include more commands related to cache management and time travel in Snowflake:

Cache Management:
1. Disable Use of the Query Result Cache (per session):
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
2. Re-enable the Query Result Cache:
ALTER SESSION SET USE_CACHED_RESULT = TRUE;
3. Clear a Warehouse's Local Disk Cache (by suspending it):
ALTER WAREHOUSE <warehouse_name> SUSPEND;
4. Check the Current Caching Setting:
SHOW PARAMETERS LIKE 'USE_CACHED_RESULT';

Time Travel:
1. Enable Time Travel for a Table (set retention above zero):
ALTER TABLE <table_name> SET DATA_RETENTION_TIME_IN_DAYS = 1;
2. Disable Time Travel for a Table:
ALTER TABLE <table_name> SET DATA_RETENTION_TIME_IN_DAYS = 0;
3. Time Travel Query (at a timestamp):
SELECT * FROM <table_name> AT(TIMESTAMP => 'YYYY-MM-DD HH:MI:SS'::timestamp);
4. Time Travel Query (before a given statement ran):
SELECT * FROM <table_name> BEFORE(STATEMENT => '<query_id>');
5. Show Dropped Tables (restorable within the retention window):
SHOW TABLES HISTORY;
6. Restore a Dropped Table:
UNDROP TABLE <table_name>;
7. Restore a Table to a Specific Time (via clone):
CREATE TABLE <restored_table_name> CLONE <table_name> AT(TIMESTAMP => 'YYYY-MM-DD HH:MI:SS'::timestamp);
8. Show a Table's Retention Setting:
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE <table_name>;

Advanced Query Performance:
1. Suspend/Resume Materialized View Maintenance:
ALTER MATERIALIZED VIEW <view_name> SUSPEND;
ALTER MATERIALIZED VIEW <view_name> RESUME;
2. Enable/Disable Result Caching for a Session:
ALTER SESSION SET USE_CACHED_RESULT = TRUE;
3. Limit How Long Statements May Wait in the Warehouse Queue:
ALTER SESSION SET STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 300;
4. Limit Statement Execution Time:
ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 3600;

Snowflake Information Schema:
1. Show Columns of a Table:
SHOW COLUMNS IN TABLE <table_name>;
2. Show Databases:
SHOW DATABASES;
3. Show Schemas:
SHOW SCHEMAS;
4. Show Warehouse Details:
SHOW WAREHOUSES LIKE '<warehouse_name>';
5. Show Roles:
SHOW ROLES;
6. Query the Information Schema Views:
SELECT * FROM INFORMATION_SCHEMA.TABLES;

These commands cover additional aspects of Snowflake, including cache management, advanced query performance, and queries against the Snowflake Information Schema. Adjust the commands based on your specific use cases and replace placeholders like <table_name>, <warehouse_name>, etc., with your actual values.

Let's include more commands related to validating loads by Query ID (QID):

Load Validation by Query ID:
1. Find the Query ID of the Last Statement:
SELECT LAST_QUERY_ID();
2. Return the Rows Rejected by a Previous COPY (by Query ID):
SELECT * FROM TABLE(VALIDATE(<table_name>, JOB_ID => '<query_id>'));
3. Validate the Most Recent COPY into a Table:
SELECT * FROM TABLE(VALIDATE(<table_name>, JOB_ID => '_last'));

Time Travel (recap):
1. Enable Time Travel for a Table:
ALTER TABLE <table_name> SET DATA_RETENTION_TIME_IN_DAYS = 1;
2. Disable Time Travel for a Table:
ALTER TABLE <table_name> SET DATA_RETENTION_TIME_IN_DAYS = 0;
3. Show Table History (Time Travel):
SELECT * FROM <table_name> AT(TIMESTAMP => 'YYYY-MM-DD HH:MI:SS'::timestamp);
4. Restore Table to a Specific Timestamp (via clone):
CREATE OR REPLACE TABLE <table_name>_restored CLONE <table_name> AT(TIMESTAMP => 'YYYY-MM-DD HH:MI:SS'::timestamp);

Pre-Validation and Post-Validation:
1. Pre-validate a Load Without Loading Anything (dry run):
COPY INTO <table_name> FROM @<stage_name> VALIDATION_MODE = 'RETURN_ERRORS';
2. Post-validate a Completed Load by Query ID:
SELECT * FROM TABLE(VALIDATE(<table_name>, JOB_ID => '<query_id>'));
3. Post-validate the Most Recent Load:
SELECT * FROM TABLE(VALIDATE(<table_name>, JOB_ID => '_last'));

Query Tuning Notes:
Snowflake's optimizer does not accept per-query hints (/*+ ... */ comments are ignored). The tuning levers that do exist:
1. Tag Queries for Later Identification:
ALTER SESSION SET QUERY_TAG = '<tag_value>';
2. Size the Warehouse for the Workload:
ALTER WAREHOUSE <warehouse_name> SET WAREHOUSE_SIZE = 'LARGE';
3. Define a Clustering Key for Large Tables:
ALTER TABLE <table_name> CLUSTER BY (<column_name>);

These additional commands cover cache management, time travel, and load validation by Query ID, along with the tuning levers Snowflake actually exposes. Adjust the commands based on your specific use cases and replace placeholders like <table_name>, <query_id>, etc., with your actual values.

Let's look more closely at pre-validation with VALIDATION_MODE (a COPY INTO option, not a session parameter):

Pre-Validation with VALIDATION_MODE:
1. Return the First N Rows That Would Be Loaded:
COPY INTO <table_name> FROM @<stage_name> VALIDATION_MODE = 'RETURN_10_ROWS';
2. Return the Errors the Load Would Produce:
COPY INTO <table_name> FROM @<stage_name> VALIDATION_MODE = 'RETURN_ERRORS';
3. Return All Errors Across Files, Including Files Loaded Earlier:
COPY INTO <table_name> FROM @<stage_name> VALIDATION_MODE = 'RETURN_ALL_ERRORS';
4. After a Real Load, Inspect Rejected Rows by Query ID:
SELECT * FROM TABLE(VALIDATE(<table_name>, JOB_ID => '<query_id>'));
5. Find the Query IDs of Recent COPY Statements:
SELECT QUERY_ID, QUERY_TEXT FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) WHERE QUERY_TEXT ILIKE 'COPY INTO%';

These commands cover pre-validation with VALIDATION_MODE (dry-run loads that return sample rows or errors without loading anything) and post-load validation with the VALIDATE table function. Adjust the commands for your specific use cases and replace placeholders like <query_id>.

Here are more Snowflake commands covering various features and functionalities:

Data Sharing:
1. Create Data Share:
CREATE SHARE <share_name> COMMENT = 'Description';
2. Grant Usage on Share:
GRANT USAGE ON DATABASE <database_name> TO SHARE <share_name>;
3. Show Data Shares:
SHOW SHARES;
4. Revoke Usage on Share:
REVOKE USAGE ON DATABASE <database_name> FROM SHARE <share_name>;

Task Monitoring:
1. View Task Run History:
SELECT * FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY());
2. Check Task State and Schedule:
SHOW TASKS;
3. View Runs of a Specific Task:
SELECT * FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(TASK_NAME => '<task_name>'));

Result Sets:
1. Export Query Result to CSV:
COPY INTO @<stage_name> FROM (SELECT * FROM <table_name>) FILE_FORMAT = (FORMAT_NAME = 'csv');
2. Export Query Result to Parquet:
COPY INTO @<stage_name> FROM (SELECT * FROM <table_name>) FILE_FORMAT = (FORMAT_NAME = 'parquet');
3. Download Query Result to Local File:
GET @<stage_name>/<file_name> file://<local_file_path>;

Miscellaneous:
1. Generate Series of Numbers:
SELECT SEQ4() AS series FROM TABLE(GENERATOR(ROWCOUNT => 10));
2. Check Session Details:
SELECT CURRENT_SESSION(), CURRENT_ROLE(), CURRENT_WAREHOUSE(), CURRENT_DATABASE();
3. Show Resource Monitors:
SHOW RESOURCE MONITORS;
4. List Object Dependencies (via the Account Usage view; there is no SHOW syntax for this):
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.OBJECT_DEPENDENCIES WHERE REFERENCED_OBJECT_NAME = '<object_name>';
5. Alter Object Comment:
COMMENT ON <object_type> <object_name> IS 'New Comment';
6. Set Query Timeout:
ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 300;
7. Tag Queries for Later Filtering in Query History (there is no /*+ TAG */ hint):
ALTER SESSION SET QUERY_TAG = 'tag_value';
8. Scale a Multi-Cluster Warehouse (Snowflake has no per-statement OPTION clause; cluster counts are warehouse properties):
ALTER WAREHOUSE <warehouse_name> SET MIN_CLUSTER_COUNT = 2;
9. List Scheduled Tasks:
SHOW TASKS;
10. Cancel a Running Query (there is no ABORT STATEMENT command):
SELECT SYSTEM$CANCEL_QUERY('<query_id>');
11. Show Object Grants:
SHOW GRANTS ON <object_type> <object_name>;

Remember to replace placeholders like <share_name>, <table_name>, <task_name>, etc., with your actual values. These commands cover a wide range of Snowflake features and functionalities.

Let's explore more commands and features in Snowflake, including zero-copy cloning:

Zero-Copy Clone:
1. Clone a Table (zero-copy):
CREATE TABLE <clone_table_name> CLONE <source_table_name>;
2. Clone a Database (zero-copy):
CREATE DATABASE <clone_database_name> CLONE <source_database_name>;
3. Drop a Clone (a clone is an ordinary object; there is no SHOW CLONES command):
DROP TABLE <clone_table_name>;
4. Clone a Table as of a Point in Time (Time Travel):
CREATE TABLE <clone_table_name> CLONE <source_table_name> AT(OFFSET => -3600);
5. Set Retention on the Clone:
ALTER TABLE <clone_table_name> SET DATA_RETENTION_TIME_IN_DAYS = 7;

Snowpipe (Auto Ingest):
1. Create a Snowpipe Pipe (a pipe always wraps a COPY statement; a full worked setup follows this list):
CREATE PIPE <pipe_name> AUTO_INGEST = TRUE AS COPY INTO <table_name> FROM @<stage_name>;
2. List Snowpipe Pipes:
SHOW PIPES;
3. Stage a File for Loading (from SnowSQL; files are PUT to the stage, not the pipe):
PUT file://<local_file_path> @<stage_name>;
4. Check Pipe Status:
SELECT SYSTEM$PIPE_STATUS('<pipe_name>');
5. Monitor Snowpipe Loading History:
SELECT * FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(TABLE_NAME => '<table_name>', START_TIME => DATEADD(hour, -24, CURRENT_TIMESTAMP())));

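Tying it together, here is a minimal auto-ingest sketch over an external S3 stage. All object names are illustrative, and the S3 bucket's event notifications must be pointed at the ARN shown in the pipe's notification_channel column:

CREATE OR REPLACE STAGE raw_stage
  URL = 's3://<bucket>/incoming/'
  STORAGE_INTEGRATION = <integration_name>;

CREATE OR REPLACE TABLE raw_events (payload VARIANT);

CREATE OR REPLACE PIPE events_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_events FROM @raw_stage FILE_FORMAT = (TYPE = JSON);

SHOW PIPES;  -- copy the notification_channel ARN into the bucket's event configuration
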
Stream and Change Tracking:
1. Enable Change Tracking on a Table:
ALTER TABLE <table_name> SET CHANGE_TRACKING = TRUE;
2. Check Which Tables Have Change Tracking (see the change_tracking column in the output):
SHOW TABLES;
3. Query Changes in a Table (via the CHANGES clause; a stream-based example follows below):
SELECT * FROM <table_name> CHANGES(INFORMATION => DEFAULT) AT(OFFSET => -3600);

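Streams build on change tracking and are the usual way to consume changes incrementally; a sketch with illustrative table names:

CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- After inserts/updates on orders, the stream shows the delta rows along with
-- METADATA$ACTION, METADATA$ISUPDATE, and METADATA$ROW_ID columns:
SELECT * FROM orders_stream;

-- Reading a stream inside DML consumes it (advances its offset):
INSERT INTO orders_audit SELECT order_id, amount FROM orders_stream;
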
Python UDFs:
1. Create a Python UDF (note this is an in-database UDF, not an external function; Python UDFs require a runtime version and a handler):
CREATE OR REPLACE FUNCTION <udf_name>(arg1 INT, arg2 STRING)
RETURNS STRING
LANGUAGE PYTHON
RUNTIME_VERSION = '3.8'
HANDLER = 'main'
AS $$
def main(arg1, arg2):
    return f'{arg1}-{arg2}'
$$;
2. List User-Defined Functions:
SHOW USER FUNCTIONS;

Multi-Cluster Warehouses:
Snowflake has no "resource pool" object; concurrency scaling is configured directly on a multi-cluster warehouse.
1. Create a Multi-Cluster Warehouse:
CREATE WAREHOUSE <warehouse_name> WITH WAREHOUSE_SIZE = 'MEDIUM' MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 10;
2. List Warehouses:
SHOW WAREHOUSES;
3. Adjust Cluster Limits on an Existing Warehouse:
ALTER WAREHOUSE <warehouse_name> SET MAX_CLUSTER_COUNT = 5;

Object Storage Integration (AWS S3):
1. Create Storage Integration for AWS S3 (an IAM role ARN is required for S3):
CREATE STORAGE INTEGRATION <integration_name>
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = '<iam_role_arn>'
  STORAGE_ALLOWED_LOCATIONS = ('s3://<bucket>/');
2. List Storage Integrations:
SHOW STORAGE INTEGRATIONS;
3. Unload Data to an S3 Stage (COPY INTO a stage takes a table or query as its source, not another stage):
COPY INTO @<s3_stage_name> FROM <table_name> FILE_FORMAT = (TYPE = PARQUET);

JSON Data Handling:
1. Parse JSON Text into a VARIANT:
SELECT PARSE_JSON(column_name) AS json_data FROM <table_name>;
2. Flatten Nested JSON (FLATTEN is a table function and is used with LATERAL; a worked example follows below):
SELECT f.value FROM <table_name>, LATERAL FLATTEN(INPUT => PARSE_JSON(column_name)) f;
3. Create a JSON Array:
SELECT TO_VARIANT(ARRAY_CONSTRUCT('value1', 'value2')) AS json_array;

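To make the FLATTEN pattern concrete, here is a self-contained query you can paste into a worksheet; the JSON literal and aliases are illustrative:

SELECT t.id, f.value:name::STRING AS item_name
FROM (SELECT 1 AS id,
             PARSE_JSON('[{"name":"a"},{"name":"b"}]') AS payload) t,
     LATERAL FLATTEN(INPUT => t.payload) f;
-- Returns two rows, (1, 'a') and (1, 'b'); f.value holds each array element.
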
These additional commands cover zero-copy cloning, Snowpipe auto-ingest, change tracking and streams, Python UDFs, multi-cluster warehouses, object storage integration with AWS S3, and JSON data handling. Adjust the commands based on your specific use cases and replace placeholders like <clone_table_name>, <pipe_name>, <udf_name>, etc., with your actual values.

Here are more commands and features in Snowflake:

Data Masking:
1. Create a Data Masking Policy (a usage sketch follows below):
CREATE MASKING POLICY <masking_policy_name> AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('ANALYST') THEN val ELSE '*** MASKED ***' END;
2. Apply a Data Masking Policy to a Column:
ALTER TABLE <table_name> MODIFY COLUMN <column_name> SET MASKING POLICY <masking_policy_name>;
3. List Data Masking Policies:
SHOW MASKING POLICIES;

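A quick end-to-end sketch (the table and column names are illustrative; the policy is the one defined above):

CREATE TABLE employees (email STRING);
ALTER TABLE employees MODIFY COLUMN email SET MASKING POLICY <masking_policy_name>;
-- Roles other than ANALYST now see '*** MASKED ***' when selecting email.
ALTER TABLE employees MODIFY COLUMN email UNSET MASKING POLICY;  -- detach it again
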
Query Profiling:
Snowflake profiles every query automatically; there is no SET PROFILE session parameter.
1. Run a Query:
SELECT * FROM <table_name>;
2. View Operator-Level Statistics for the Last Query:
SELECT * FROM TABLE(GET_QUERY_OPERATOR_STATS(LAST_QUERY_ID()));
3. For a graphical view, open the query's entry under Query History in Snowsight and click Query Profile.

Materialized Views:
1. Create Materialized View (an aggregate example follows below):
CREATE MATERIALIZED VIEW <view_name> AS SELECT * FROM <table_name>;
2. Suspend or Resume Automatic Maintenance (Snowflake refreshes materialized views in the background; there is no manual REFRESH command):
ALTER MATERIALIZED VIEW <view_name> SUSPEND;
ALTER MATERIALIZED VIEW <view_name> RESUME;
3. List Materialized Views:
SHOW MATERIALIZED VIEWS;

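A more typical materialized view precomputes an aggregate over a single table (names are illustrative):

CREATE MATERIALIZED VIEW daily_sales_mv AS
SELECT sale_date, SUM(amount) AS total_amount, COUNT(*) AS num_sales
FROM sales
GROUP BY sale_date;
-- Snowflake keeps daily_sales_mv in sync with sales automatically.
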
Task Dependencies:
1. Create Task with a Predecessor:
CREATE OR REPLACE TASK <task_name> WAREHOUSE = <warehouse_name> AFTER <dependent_task_name> AS INSERT INTO <target_table> SELECT * FROM <source_table>;
2. List Tasks (the predecessors column in the output shows dependencies):
SHOW TASKS;

Time Travel:
1. Time Travel Query (to a specific timestamp):
SELECT * FROM <table_name> AT(TIMESTAMP => 'YYYY-MM-DD HH:MI:SS'::TIMESTAMP_LTZ);
2. Time Travel Query (using OFFSET, in seconds):
SELECT * FROM <table_name> AT(OFFSET => -10);
3. Show Dropped Versions of Tables:
SHOW TABLES HISTORY LIKE '<table_name>';
4. Restore a Dropped Table:
UNDROP TABLE <table_name>;
5. Restore a Table to a Specific Point in Time (there is no RESTORE TABLE command; clone the old state instead; a full recovery sketch follows below):
CREATE TABLE <table_name>_restored CLONE <table_name> AT(TIMESTAMP => 'YYYY-MM-DD HH:MI:SS'::TIMESTAMP_LTZ);

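A common point-in-time recovery pattern clones the old state and swaps it in atomically; the table name is illustrative:

CREATE TABLE orders_restored CLONE orders AT(OFFSET => -3600);  -- state one hour ago
ALTER TABLE orders_restored SWAP WITH orders;  -- orders now holds the old state
DROP TABLE orders_restored;  -- this now holds the bad state and can be dropped
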
Snowflake Stored Procedures:
1. Create Stored Procedure (a fuller sketch follows below):
CREATE OR REPLACE PROCEDURE <procedure_name>() RETURNS STRING LANGUAGE JAVASCRIPT EXECUTE AS CALLER AS $$ return 'Done'; $$;
2. Call Stored Procedure:
CALL <procedure_name>();
3. List Procedures:
SHOW PROCEDURES;

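For something closer to real use, here is a sketch of a JavaScript procedure that runs a dynamic statement and returns the result. All names are illustrative, and in production you would validate the table name rather than concatenate it directly:

CREATE OR REPLACE PROCEDURE row_count(TABLE_NAME STRING)
RETURNS STRING
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
  // Build and run a COUNT(*) against the table name passed in.
  var stmt = snowflake.createStatement({
    sqlText: "SELECT COUNT(*) FROM " + TABLE_NAME
  });
  var rs = stmt.execute();
  rs.next();
  return "Rows: " + rs.getColumnValue(1);
$$;

CALL row_count('MY_TABLE');
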
These commands cover additional features in Snowflake, such as data masking, query profiling, materialized views, task dependencies, time travel, and stored procedures. Adjust the commands based on your specific use cases and replace placeholders like <view_name>, <task_name>, <procedure_name>, etc., with your actual values.

Here are more commands and features in Snowflake:

Query History:
1. Show Query History for a Warehouse:
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY_BY_WAREHOUSE(WAREHOUSE_NAME => '<warehouse_name>'));
2. Query History by User:
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY_BY_USER(USER_NAME => '<user_name>'));

Resource Monitors:
1. Create Resource Monitor (the quota is a number of credits, not a percentage):
CREATE RESOURCE MONITOR <monitor_name> WITH CREDIT_QUOTA = 100 TRIGGERS ON 80 PERCENT DO NOTIFY ON 100 PERCENT DO SUSPEND;
2. Assign Resource Monitor to Warehouse:
ALTER WAREHOUSE <warehouse_name> SET RESOURCE_MONITOR = <monitor_name>;
3. List Resource Monitors:
SHOW RESOURCE MONITORS;

Metadata Management:
1. Show Tables in a Schema:
SHOW TABLES IN SCHEMA <schema_name>;
2. Show Columns of a Table:
SHOW COLUMNS IN TABLE <table_name>;
3. Show Table Constraints (there is no SHOW CONSTRAINTS command):
SHOW PRIMARY KEYS IN TABLE <table_name>;
SHOW UNIQUE KEYS IN TABLE <table_name>;

Query Compilation and Execution:
1. Show Compilation vs. Execution Time (there is no SHOW COMPILE_HISTORY command; the timings are columns of QUERY_HISTORY):
SELECT QUERY_ID, COMPILATION_TIME, EXECUTION_TIME FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY());
2. Show Query Execution Plan:
EXPLAIN SELECT * FROM <table_name>;

Snowflake Data Marketplace:
Marketplace listings are browsed and acquired through Snowsight (Data Products > Marketplace) rather than through SHOW commands. Once a listing is obtained, it appears as a database that you query like any other:
SELECT * FROM <marketplace_database>.<schema_name>.<table_name>;

Java UDFs:
1. Create a Java UDF (this is an in-database UDF; true external functions are covered later):
CREATE OR REPLACE FUNCTION <udf_name>(arg1 INT, arg2 STRING)
RETURNS STRING
LANGUAGE JAVA
HANDLER = 'MyClass.myMethod'
AS $$
class MyClass {
    public static String myMethod(int arg1, String arg2) {
        return arg1 + "-" + arg2;
    }
}
$$;
2. List User-Defined Functions:
SHOW USER FUNCTIONS;
These commands cover additional features in Snowflake, such as query history, resource monitors, metadata management, query compilation and execution, the Snowflake Data Marketplace, and Java UDFs. Adjust the commands based on your specific use cases and replace placeholders like <monitor_name>, <table_name>, etc., with your actual values.

Let's explore more commands related to query history, pipe history, and the INFORMATION_SCHEMA built-in views in Snowflake:

Query History:
1. Show Recent Query History:
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY());
2. Filter Query History by User:
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) WHERE USER_NAME = '<username>';
3. Retrieve Query History for a Specific Query ID (the function takes time-range arguments, not a query ID, so filter on the output):
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) WHERE QUERY_ID = '<query_id>';
4. Query History with Execution Time Aggregation (a ranking example follows below):
SELECT USER_NAME, COUNT(*), MIN(START_TIME), MAX(END_TIME) FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) GROUP BY USER_NAME;

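A handy variation ranks recent queries by elapsed time (the limit and columns chosen here are illustrative):

SELECT QUERY_ID, QUERY_TEXT, TOTAL_ELAPSED_TIME / 1000 AS elapsed_seconds
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 1000))
ORDER BY TOTAL_ELAPSED_TIME DESC
LIMIT 10;
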
Pipe History (Snowpipe):
Note that LOAD_HISTORY is an Information Schema view keyed by table and does not cover Snowpipe loads; use COPY_HISTORY and PIPE_USAGE_HISTORY instead.
1. Show Recent Load History for a Pipe's Target Table:
SELECT * FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(TABLE_NAME => '<table_name>', START_TIME => DATEADD(hour, -24, CURRENT_TIMESTAMP())));
2. Filter Load History by Status:
SELECT * FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(TABLE_NAME => '<table_name>', START_TIME => DATEADD(hour, -24, CURRENT_TIMESTAMP()))) WHERE STATUS = 'Loaded';
3. Check the Current Status of a Pipe:
SELECT SYSTEM$PIPE_STATUS('<pipe_name>');
4. Aggregate Snowpipe Credit Usage by Pipe:
SELECT PIPE_NAME, SUM(CREDITS_USED) FROM TABLE(INFORMATION_SCHEMA.PIPE_USAGE_HISTORY(DATE_RANGE_START => DATEADD(day, -7, CURRENT_DATE()))) GROUP BY PIPE_NAME;

INFORMATION_SCHEMA Built-In Views:
1. List Tables in a Database:
SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = '<schema_name>';
2. Show Columns of a Table:
SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '<table_name>';
3. Query for Primary Keys (constraint types live in TABLE_CONSTRAINTS, not KEY_COLUMN_USAGE):
SELECT CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'PRIMARY KEY' AND TABLE_NAME = '<table_name>';
4. List Databases:
SELECT * FROM INFORMATION_SCHEMA.DATABASES;
5. Show View Definitions (views are listed under TABLE_NAME, not VIEW_NAME):
SELECT TABLE_NAME, VIEW_DEFINITION FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_SCHEMA = '<schema_name>';
6. Retrieve Function Metadata (Snowflake exposes FUNCTIONS and PROCEDURES views rather than ROUTINES):
SELECT FUNCTION_NAME, DATA_TYPE, ARGUMENT_SIGNATURE FROM INFORMATION_SCHEMA.FUNCTIONS WHERE FUNCTION_SCHEMA = '<schema_name>';
7. List External Tables:
SELECT * FROM INFORMATION_SCHEMA.EXTERNAL_TABLES WHERE TABLE_SCHEMA = '<schema_name>';
8. Show Schemas in a Database:
SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE CATALOG_NAME = '<database_name>';
9. Explore Constraints on a Table (a combined data-dictionary example follows below):
SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE TABLE_NAME = '<table_name>';

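These views compose well; for example, a quick data dictionary for one schema (the schema name is illustrative):

SELECT t.TABLE_NAME, c.COLUMN_NAME, c.DATA_TYPE
FROM INFORMATION_SCHEMA.TABLES t
JOIN INFORMATION_SCHEMA.COLUMNS c
  ON c.TABLE_SCHEMA = t.TABLE_SCHEMA AND c.TABLE_NAME = t.TABLE_NAME
WHERE t.TABLE_SCHEMA = 'PUBLIC'
ORDER BY t.TABLE_NAME, c.ORDINAL_POSITION;
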
These commands provide insights into query history, pipe history (Snowpipe), and the INFORMATION_SCHEMA views used for metadata exploration in Snowflake. Adjust the commands based on your specific use cases and replace placeholders like <pipe_name>, <table_name>, <schema_name>, etc., with your actual values.

Let's cover more Snowflake commands and features:

Dynamic SQL:
These forms run inside a Snowflake Scripting block, where table_name, value1, and value2 are declared variables (a complete block follows below):
1. Dynamic SQL Using EXECUTE IMMEDIATE:
EXECUTE IMMEDIATE 'CREATE TABLE ' || table_name || ' (column1 INT, column2 STRING)';
2. Dynamic SQL with Bind Parameters:
EXECUTE IMMEDIATE 'INSERT INTO ' || table_name || ' VALUES (?, ?)' USING (value1, value2);

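Here is a minimal, self-contained Snowflake Scripting block tying both forms together; all names and values are illustrative:

DECLARE
  table_name STRING DEFAULT 'DEMO_T';
  v1 INT DEFAULT 1;
  v2 STRING DEFAULT 'hello';
BEGIN
  -- DDL identifiers cannot be bound with ?, so build them by concatenation.
  EXECUTE IMMEDIATE 'CREATE OR REPLACE TABLE ' || table_name || ' (column1 INT, column2 STRING)';
  -- Values can be bound positionally with ? and USING.
  EXECUTE IMMEDIATE 'INSERT INTO ' || table_name || ' VALUES (?, ?)' USING (v1, v2);
  RETURN 'Loaded ' || table_name;
END;

In Snowsight this block runs as written; in SnowSQL or the classic console, wrap it in EXECUTE IMMEDIATE $$ ... $$.
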
Querying External Data:
1. Create External Table for CSV (a typed-column sketch follows below):
CREATE EXTERNAL TABLE <external_table_name> LOCATION = @<external_stage_name> FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' FIELD_OPTIONALLY_ENCLOSED_BY = '"');
2. Query External Table (rows arrive in the VALUE variant column):
SELECT * FROM <external_table_name>;
3. Unload Data to External Stage:
COPY INTO @<external_stage_name> FROM <table_name> FILE_FORMAT = (TYPE = CSV);

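External table columns are usually declared as expressions over VALUE; a sketch with illustrative names (for CSV, VALUE:c1 is the first field and VALUE:c2 the second):

CREATE OR REPLACE EXTERNAL TABLE sales_ext (
  sale_date DATE AS (VALUE:c1::DATE),
  amount NUMBER AS (VALUE:c2::NUMBER)
)
LOCATION = @<external_stage_name>
FILE_FORMAT = (TYPE = CSV)
AUTO_REFRESH = FALSE;
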
User and Role Management:
1. Create Role:
CREATE ROLE <role_name>;
2. Grant Role to User:
GRANT ROLE <role_name> TO USER <username>;
3. Revoke Role from User:
REVOKE ROLE <role_name> FROM USER <username>;
4. List Roles Granted to a User:
SHOW GRANTS TO USER <username>;

Storage Utilization:
There are no SHOW ... STORAGE UTILIZATION commands; storage metrics come from Information Schema views and table functions:
1. Show Storage Metrics for a Table:
SELECT * FROM INFORMATION_SCHEMA.TABLE_STORAGE_METRICS WHERE TABLE_NAME = '<table_name>';
2. Show Storage Usage History for the Current Database:
SELECT * FROM TABLE(INFORMATION_SCHEMA.DATABASE_STORAGE_USAGE_HISTORY(DATE_RANGE_START => DATEADD(day, -7, CURRENT_DATE())));

Data Sharing (Receive):
1. Grant Usage on Database for Data Sharing (provider side):
GRANT USAGE ON DATABASE <database_name> TO SHARE <share_name>;
2. Create a Database from an Inbound Share (consumer side; the share is qualified with the provider's account):
CREATE DATABASE <database_name> FROM SHARE <provider_account>.<share_name>;

Streams:
Change tracking is enabled per table or view, not per database, and changes are usually consumed through a stream:
1. Create a Stream on a Table:
CREATE OR REPLACE STREAM <stream_name> ON TABLE <table_name>;
2. Query Pending Changes:
SELECT * FROM <stream_name>;
3. Consume the Stream (reading it inside DML advances its offset):
INSERT INTO <target_table> SELECT <columns> FROM <stream_name>;

Row Access Policies:
1. Create a Row Access Policy (there is no table-level ROW_ACCESS_CONTROL switch; attaching a policy activates it; a mapping-table sketch follows below):
CREATE ROW ACCESS POLICY <policy_name> AS (val STRING) RETURNS BOOLEAN ->
  CURRENT_ROLE() = 'ADMIN' OR val = CURRENT_ROLE();
2. Attach the Policy to a Table Column:
ALTER TABLE <table_name> ADD ROW ACCESS POLICY <policy_name> ON (<column_name>);
3. Show Row Access Policies:
SHOW ROW ACCESS POLICIES;

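A common production pattern drives the policy from a mapping table; a sketch with illustrative names:

CREATE TABLE region_mapping (role_name STRING, region STRING);

CREATE OR REPLACE ROW ACCESS POLICY region_policy AS (region STRING) RETURNS BOOLEAN ->
  EXISTS (
    SELECT 1 FROM region_mapping m
    WHERE m.role_name = CURRENT_ROLE() AND m.region = region
  );

ALTER TABLE sales ADD ROW ACCESS POLICY region_policy ON (region);
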
Organization Accounts:
There is no "managed private Snowflake" object; new accounts are created within an organization by a user with the ORGADMIN role. A sketch of the real commands (parameter values are illustrative):
1. Create an Account in Your Organization:
CREATE ACCOUNT <account_name> ADMIN_NAME = '<admin_user>' ADMIN_PASSWORD = '<password>' EMAIL = '<email>' EDITION = ENTERPRISE;
2. List Accounts in the Organization:
SHOW ORGANIZATION ACCOUNTS;
3. Set an Account-Level Parameter (as ACCOUNTADMIN):
ALTER ACCOUNT SET STATEMENT_TIMEOUT_IN_SECONDS = 86400;

Table Functions (UDTFs):
1. Create a SQL Table Function (the syntax is CREATE FUNCTION ... RETURNS TABLE, not CREATE TABLE FUNCTION; a worked example follows below):
CREATE OR REPLACE FUNCTION <function_name>(arg1 INT)
RETURNS TABLE (column1 INT, column2 STRING)
AS $$ SELECT arg1, 'constant' $$;
2. Call a Table Function:
SELECT * FROM TABLE(<function_name>(arg1_value));

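A worked UDTF sketch with illustrative names, including lateral correlation against another table:

CREATE OR REPLACE FUNCTION orders_for(cust_id INT)
RETURNS TABLE (order_id INT, amount NUMBER)
AS $$
  SELECT order_id, amount FROM orders WHERE customer_id = cust_id
$$;

-- One row per matching order, correlated on c.id:
SELECT c.name, o.order_id, o.amount
FROM customers c, TABLE(orders_for(c.id)) o;
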
These commands cover a diverse set of Snowflake features including dynamic SQL, querying external data, user and role management, storage utilization, data sharing, streams, row access policies, organization accounts, and table functions. Adjust the commands based on your specific use cases and replace placeholders like <role_name>, <external_table_name>, <username>, etc., with your actual values.

Let's explore more Snowflake commands and features:

Snowflake Data Exchange:
A private Data Exchange is provisioned by Snowflake for your organization rather than created with DDL, and its listings are managed in Snowsight. From SQL you can still inspect what has been shared with you:
SHOW SHARES;
DESC SHARE <provider_account>.<share_name>;

Row Sampling:
There is no PROFILE_DYNAMIC_SAMPLING parameter; what Snowflake does provide is the SAMPLE clause for querying a random subset of rows:
1. Sample a Percentage of Rows:
SELECT * FROM <table_name> SAMPLE (10);
2. Sample a Fixed Number of Rows:
SELECT * FROM <table_name> SAMPLE (100 ROWS);

Security Integration (OAuth):
1. Create an External OAuth Security Integration (the command is CREATE SECURITY INTEGRATION; the exact parameters depend on your identity provider, so treat this as a sketch):
CREATE SECURITY INTEGRATION <oauth_integration_name>
  TYPE = EXTERNAL_OAUTH
  ENABLED = TRUE
  EXTERNAL_OAUTH_TYPE = CUSTOM
  EXTERNAL_OAUTH_ISSUER = 'https://oauth-provider.com'
  EXTERNAL_OAUTH_JWS_KEYS_URL = 'https://oauth-provider.com/keys'
  EXTERNAL_OAUTH_TOKEN_USER_MAPPING_CLAIM = 'upn'
  EXTERNAL_OAUTH_SNOWFLAKE_USER_MAPPING_ATTRIBUTE = 'LOGIN_NAME';
2. List Security Integrations:
SHOW SECURITY INTEGRATIONS;

External Functions (Remote Services):
Snowflake UDFs do not support R; a true external function instead calls a remote service through an API integration:
1. Create an External Function:
CREATE OR REPLACE EXTERNAL FUNCTION <udf_name>(arg1 INT, arg2 STRING)
RETURNS STRING
API_INTEGRATION = <api_integration_name>
AS 'https://<api_gateway_url>/<path>';
2. List External Functions:
SHOW EXTERNAL FUNCTIONS;

Automatic Clustering:
1. Define a Clustering Key on a Table (there is no AUTO_CLUSTER_BY property):
ALTER TABLE <table_name> CLUSTER BY (column1, column2);
2. Drop the Clustering Key:
ALTER TABLE <table_name> DROP CLUSTERING KEY;
3. Suspend or Resume Automatic Reclustering:
ALTER TABLE <table_name> SUSPEND RECLUSTER;
ALTER TABLE <table_name> RESUME RECLUSTER;
4. Show Clustering Information for a Table (a candidate-key example follows below):
SELECT SYSTEM$CLUSTERING_INFORMATION('<table_name>');

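SYSTEM$CLUSTERING_INFORMATION also accepts a candidate column list, which is useful for evaluating a key before defining it (the table and columns are illustrative):

SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date, region)');
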
These commands cover additional Snowflake features such as the Data Exchange, row sampling, External OAuth security integrations, external functions backed by remote services, and automatic clustering. Adjust the commands based on your specific use cases and replace placeholders like <oauth_integration_name>, <api_integration_name>, <table_name>, etc., with your actual values.

https://docs.snowflake.com/en/sql-reference/sql-all

https://snow-flake-cheat-sheet.streamlit.app/
