Cloud storage must be linked for an external stage object; no cloud storage is needed for an internal stage
Snowflake's data warehouse is not built on top of any existing data platform
Micro-partitions – enable horizontal and vertical query pruning and are the physical data files that comprise Snowflake's logical tables
Cloning objects – external named stages are cloned, and tables are cloned along with their internal (table) stages. Internal named stages are not cloned when a database or schema is cloned
Table types – temporary, transient, permanent, and external (external tables have no Time Travel)
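A quick sketch of the three internal table types (hypothetical names; an external table would also need a stage/location):

CREATE TEMPORARY TABLE t_temp (id INT);   -- session-scoped, no Fail-safe
CREATE TRANSIENT TABLE t_tran (id INT);   -- persists, 0-1 day Time Travel, no Fail-safe
CREATE TABLE t_perm (id INT);             -- permanent: Time Travel plus Fail-safe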
True or false: You can point Snowflake at any S3 bucket to directly query
the files in that bucket as long as the files are in Parquet or ORC format.
FALSE – the files can be in any supported file format
How often does Snowflake release new features? Weekly
The FLATTEN command will parse nested objects into separate rows. One
version of the FLATTEN command uses a join and the other uses an object
keyword. Select the two words that represent the options used with the
FLATTEN command: TABLE, LATERAL
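A sketch of both forms, assuming a hypothetical table T with a VARIANT column V holding {"items":[1,2,3]}:

SELECT f.value
FROM t, LATERAL FLATTEN(input => t.v:items) f;    -- join (LATERAL) form

SELECT f.value
FROM TABLE(FLATTEN(input => PARSE_JSON('{"items":[1,2,3]}'):items)) f;    -- object (TABLE) form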
Cloud services layer does: user authentication, infrastructure management, metadata management, query parsing and optimization, and access control
Standard Snowflake edition does not have elastic (multi-cluster) warehouses.
Partner Connect:
Includes automated role, user and staging database set up
Can be connected from within the WebUI
Includes a streamlined Partner Trial Account Signup
Can be connected to Snowflake using a streamlined wizard
Scaling up and down is not an automated process
Different editions require different accounts
Here's why:
Cloning and grants: When cloning a table in Snowflake, the COPY GRANTS option is specifically used to transfer the explicit access privileges defined on the original table to the cloned object.
Default behavior: If COPY GRANTS is not specified during the cloning process, the cloned table inherits no explicit access privileges from the original table; it does, however, inherit any future grants defined for tables in the schema.
With COPY GRANTS: The clone inherits any explicit access privileges granted on the original table, but does not inherit future grants defined for the table type in the schema.
Therefore, statement B is FALSE. Cloning a table without COPY GRANTS creates a separate object with no inherited explicit access privileges; you would need to explicitly grant access to the cloned table (or rely on future grants) for users to interact with it.
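A sketch with hypothetical table names showing both behaviors:

-- No COPY GRANTS: no explicit privileges copied; schema-level
-- future grants for tables apply to the clone.
CREATE TABLE sales_clone CLONE sales;

-- COPY GRANTS: explicit privileges on SALES are copied;
-- future grants do not apply to the clone.
CREATE TABLE sales_clone2 CLONE sales COPY GRANTS;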
Each Snowflake account comes with two shared databases. One is a set of
sample data and the other contains Account Usage information. Check all
true statements about these shared databases.
SNOWFLAKE_SAMPLE_DATA contains several schemas from
TPC (tpc.org)
You do not specify availability zones when setting up a Snowflake account
Providers cannot list data on the Marketplace
Which of the following conditions are required for sharing data in
Snowflake?
Data providers with ACCOUNTADMIN role can set up shares
Consumer accounts must be in the same Snowflake region as the provider account
External stage = cloud storage credentials, cloud storage location, and a stage object in Snowflake
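A minimal sketch of creating an external stage, assuming a hypothetical S3 bucket (a storage integration is the usual alternative to inline credentials):

CREATE STAGE my_ext_stage
  URL = 's3://my-bucket/data/'
  CREDENTIALS = (AWS_KEY_ID = '<key>' AWS_SECRET_KEY = '<secret>');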
What command (partial) is used to convert a SECURE VIEW to a "regular" VIEW?
UNSET SECURE
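For example, with a hypothetical view name:

ALTER VIEW my_view UNSET SECURE;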
When adding Consumers to Outbound Shares, what types of Consumer
Accounts can be selected?
Full
Reader
The Query Profiler view is only available for completed queries.
FALSE
Which of the following terms are associated with the Cloud Services
Layer?
Query Planning
Query Compilation
Query Optimization
Warehouse size being populated (e.g., in the History page) is a good indicator that the query was actually executed on a warehouse
Snowflake caches are automatically invalidated if the underlying data
changes. TRUE
If a table is cloned with COPY GRANTS option specified, then the clone
object inherits any explicit access privileges granted on the original table,
but does not inherit any future grants defined for the table in the schema
– TRUE
Scale out = increase the number of clusters; scale up = increase the server size of the warehouse
PUT command cannot be executed from the Web UI – TRUE
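For example, from SnowSQL (hypothetical file path and table):

PUT file:///tmp/data.csv @%my_table;   -- upload to a table stage
PUT file:///tmp/data.csv @~;           -- upload to the user stage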
Maximum number of consumer accounts per share object – unlimited
Which of the following statements are true about Snowflake Data sharing?
Select all that apply.
Data Sharing is integrated with role-based access controls
Consumers can query shared tables in the same query as
their own tables
Shared data cannot be cloned by the consumer; it can only be queried.
Data listed on the Marketplace must be compliant with legal standards (e.g., HIPAA); it must be real data that is legally distributable
Cost – Size of warehouse, Amount of time warehouse has run.
Snowflake cannot run on private, on-premises, or hosted infrastructure.
Only tables, external tables, secure views, secure materialized views, and secure UDFs can be SHARED. REGULAR VIEWS cannot be SHARED
Unlimited shares can be created on a Snowflake account, and you can add an unlimited number of consumer accounts to a share.
If data is shared with an existing Snowflake customer, the compute is charged to the consumer
https://AB12345.snowflakecomputing.com identifies the ACCOUNT (the top-level container)
Query statement encryption is offered by the Enterprise for Sensitive Data (ESD, now Business Critical) edition
Looker is a Business Intelligence partner.
Which statements about Data Integration Tech Partners are true?
Data Integration Tech Partner software can be used to
extract data from other systems.
Data Integration Tech Partner software can be used to carry
out transformations.
Snowflake can carry out transformations after loading files
staged by partner software (ELT).
Data Integration Tech Partner software should be used to
deliver data to stages; Snowflake is then used to load the
data
Snowflake-hosted accounts (on Amazon or Azure cloud infrastructure) are an option for running Snowflake
The Snowflake Data Marketplace has two types of listings. These are STANDARD and PERSONALIZED
An account hosted on AWS can access files staged in GCP or Azure
File formats for data loading: PARQUET, CSV, ORC, AVRO, XML, JSON
USER STAGE and TABLE STAGE are automatically created by Snowflake for the user
Stage types: USER STAGE, TABLE STAGE, NAMED INTERNAL STAGE, EXTERNAL STAGE
Recommended file size for parallel operations on data files is 100–250 MB COMPRESSED
VARIANT individual row maximum is 16 MB compressed
Snowflake instances in different regions require separate accounts
Snowflake view types – SECURE, MATERIALIZED, STANDARD
CONCAT('%', CONCAT(MY_COL, '%')) and '%' || MY_COL || '%' both result in the value of column MY_COL being sandwiched between two percent signs
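Both forms, assuming a hypothetical table T:

SELECT CONCAT('%', CONCAT(my_col, '%')) FROM t;   -- nested CONCAT
SELECT '%' || my_col || '%' FROM t;               -- || concatenation operator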
Snowpipe charges 0.06 credits per 1,000 files queued
External stages require a cloud storage provider
ODBC and JDBC drivers allow tools to connect to Snowflake
VARIANT data – optimized storage based on repeated elements; can be queried using JSON path notation
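A sketch of JSON path notation, assuming a hypothetical table T with a VARIANT column V holding {"customer":{"name":"Ann"}}:

SELECT v:customer.name::STRING FROM t;   -- dot path notation plus a cast
SELECT v['customer']['name'] FROM t;     -- equivalent bracket notation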
FAIL SAFE cannot be disabled on a table
SNOWPIPE auto-ingest only works with external stages, but the SNOWPIPE REST method works with both internal and external stages
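A minimal auto-ingest pipe sketch with hypothetical names (the REST method instead calls the insertFiles endpoint against a pipe):

CREATE PIPE my_pipe AUTO_INGEST = TRUE AS
  COPY INTO my_table FROM @my_ext_stage FILE_FORMAT = (TYPE = 'CSV');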
SNOWSQL IS THE COMMAND LINE INTERFACE
FEDERATED AUTHENTICATION FOR ALL SNOWFLAKE EDITIONS
COLUMN LEVEL SECURITY = DYNAMIC DATA MASKING, EXTERNAL TOKENIZATION
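A minimal Dynamic Data Masking sketch (policy, role, and table names hypothetical):

CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('ANALYST') THEN val ELSE '*****' END;

ALTER TABLE users MODIFY COLUMN email SET MASKING POLICY email_mask;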
SHOW WAREHOUSES = SQL command to list all warehouses in an
account
The shared SNOWFLAKE database contains an ACCOUNT_USAGE schema with many secure views
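For example, querying one of its secure views:

SELECT query_id, warehouse_name, total_elapsed_time
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
ORDER BY start_time DESC
LIMIT 10;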
Order clustering key columns from lowest cardinality to highest cardinality.
Snowflake UI ribbon = WAREHOUSES (SECURITYADMIN), HISTORY (SECURITYADMIN), ACCOUNT (SECURITYADMIN), NOTIFICATIONS (ACCOUNTADMIN)
SHOW PIPES = command to see the pipes for which you have access privileges
Last in, first out (LIFO) for removing servers when a warehouse scales down
All files stored in stages ARE ENCRYPTED
Compliance: HIPAA, FedRAMP, PCI DSS, SOC 1, SOC 2 Type 2 (not "Cloud GBDQ")
The clustering depth for a populated table measures the average depth (1 or greater) of the overlapping micro-partitions for specified columns in a table. The smaller the average depth, the better clustered the table is with regard to the specified column.
It can be used to determine whether a large table would
benefit from explicitly defining a clustering key
The depth of the overlapping micro-partitions
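The system functions that report these metrics (table and column names hypothetical):

SELECT SYSTEM$CLUSTERING_DEPTH('my_table', '(my_col)');
SELECT SYSTEM$CLUSTERING_INFORMATION('my_table', '(my_col)');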
To change the warehouse that will be used to run a SQL command, run a SQL command like "USE WAREHOUSE LARGE_WH;" or update the warehouse field in the context menu located above the worksheet
A micro-partition is between 50 MB and 500 MB uncompressed
Snowflake is good for OLAP (online analytical processing), not for OLTP (online transaction processing)
Semi-structured data is natively supported by Snowflake
Results cache persists 24 hours; each re-execution of the query resets the clock, up to a maximum of 31 days
Metadata is retained for 64 days
You can use the USE command for role, warehouse, database, or schema, or use the context menu dropdown
UNDROP uses Time Travel
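For example (hypothetical table; UNDROP also works for SCHEMA and DATABASE):

DROP TABLE my_table;
UNDROP TABLE my_table;   -- restores the most recent version within the Time Travel window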
Increasing the size of a warehouse does not improve data loading performance; file size and the number of files do
6 minutes is the amount of query load that must stay backed up before another cluster is started (Economy scaling policy).
Snowflake minimizes the amount of storage required for historical data by
maintaining only the information required to restore the individual table
rows that were updated or deleted. As a result, storage usage is
calculated as a percentage of the table that changed. Full copies of tables
are only maintained when tables are dropped or truncated.