Which statement describes how Snowflake supports reader accounts?
A reader account can consume data from the provider account that created it and combine it with its own data.
A consumer needs to become a licensed Snowflake customer as data sharing is only supported between Snowflake accounts.
The users in a reader account can query data that has been shared with the reader account and can perform DML tasks.
The SHOW MANAGED ACCOUNTS command will view all the reader accounts that have been created for an account.
Snowflake supports reader accounts, which allow data providers to share data with consumers who are not Snowflake customers. A reader account is created and managed by the provider; its users can query data that has been shared with the account but cannot perform DML tasks or combine the shared data with their own. The SHOW MANAGED ACCOUNTS command lists all the reader accounts that have been created for an account. References: Introduction to Secure Data Sharing | Snowflake Documentation.
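As an illustration, a provider can list the reader accounts it has created with the following command (run with a role that has the appropriate privileges):

```sql
-- Lists all managed (reader) accounts created by the current account,
-- including account name, URL, and creation time
SHOW MANAGED ACCOUNTS;
```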
What role is required to use Partner Connect?
ACCOUNTADMIN
ORGADMIN
SECURITYADMIN
SYSADMIN
To use Partner Connect, the ACCOUNTADMIN role is required. Partner Connect allows account administrators to easily create trial accounts with selected Snowflake business partners and integrate these accounts with Snowflake.
Which features could be used to improve the performance of queries that return a small subset of rows from a large table? (Select TWO).
Search optimization service
Automatic clustering
Row access policies
Multi-cluster virtual warehouses
Secure views
The search optimization service and automatic clustering are features that can improve the performance of queries returning a small subset of rows from a large table. The search optimization service is designed for low-latency point lookup queries, while automatic clustering organizes data in micro-partitions based on specific dimensions to reduce the amount of data scanned during queries.
Using variables in Snowflake is denoted by using which SQL character?
@
&
$
#
In Snowflake, variables are denoted by a dollar sign ($). Variables can be used in SQL statements where a literal constant is allowed, and they must be prefixed with a $ sign to distinguish them from bind values and column names.
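A minimal illustration of session variables (the variable and table names are hypothetical):

```sql
-- Define a session variable
SET my_threshold = 100;

-- Reference it with the $ prefix wherever a literal constant is allowed
SELECT $my_threshold;
SELECT * FROM orders WHERE amount > $my_threshold;
```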
Which commands should be used to grant the privilege allowing a role to select data from all current tables and any tables that will be created later in a schema? (Choose two.)
grant USAGE on all tables in schema DB1.SCHEMA to role MYROLE;
grant USAGE on future tables in schema DB1.SCHEMA to role MYROLE;
grant SELECT on all tables in schema DB1.SCHEMA to role MYROLE;
grant SELECT on future tables in schema DB1.SCHEMA to role MYROLE;
grant SELECT on all tables in database DB1 to role MYROLE;
grant SELECT on future tables in database DB1 to role MYROLE;
To grant a role the privilege to select data from all current and future tables in a schema, two separate commands are needed. The first command grants the SELECT privilege on all existing tables within the schema, and the second command grants the SELECT privilege on all tables that will be created in the future within the same schema.
How can a user change which columns are referenced in a view?
Modify the columns in the underlying table
Use the ALTER VIEW command to update the view
Recreate the view with the required changes
Materialize the view to perform the changes
In Snowflake, to change the columns referenced in a view, the view must be recreated with the required changes. The ALTER VIEW command does not allow changing the definition of a view; it can only be used to rename a view, convert it to or from a secure view, or add, overwrite, or remove a comment on a view. Therefore, the correct approach is to recreate the view with the desired column references.
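In practice this is typically done with CREATE OR REPLACE VIEW, which drops and recreates the view in one statement (view and column names here are hypothetical):

```sql
-- Recreate the view with a different column list, e.g. adding REGION
CREATE OR REPLACE VIEW sales_v AS
    SELECT order_id, region, amount
    FROM sales;
```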
How often are the Account and Table master keys automatically rotated by Snowflake?
30 Days
60 Days
90 Days
365 Days
Snowflake automatically rotates the Account and Table master keys when they are more than 30 days old. Active keys are retired, and new keys are created, ensuring robust security through frequent key changes.
A Snowflake user has been granted the create data EXCHANGE listing privilege with their role.
Which tasks can this user now perform on the Data Exchange? (Select TWO).
Rename listings.
Delete provider profiles.
Modify listings properties.
Modify incoming listing access requests.
Submit listings for approval/publishing.
With the CREATE DATA EXCHANGE LISTING privilege, a Snowflake user can modify the properties of listings and submit them for approval or publishing on the Data Exchange. This allows them to manage and share data sets with consumers effectively.
User INQUISITIVE_PERSON has been granted the role DATA_SCIENCE. The role DATA_SCIENCE has privileges OWNERSHIP on the schema MARKETING of the database ANALYTICS_DW.
Which command will show all privileges granted to that schema?
SHOW GRANTS ON ROLE DATA_SCIENCE
SHOW GRANTS ON SCHEMA ANALYTICS_DW.MARKETING
SHOW GRANTS TO USER INQUISITIVE_PERSON
SHOW GRANTS OF ROLE DATA_SCIENCE
To show all privileges granted on a specific schema, use the command SHOW GRANTS ON SCHEMA ANALYTICS_DW.MARKETING. This lists every privilege that has been granted on the schema, along with the roles that hold them.
The bulk data load history that is available upon completion of the COPY statement is stored where and for how long?
In the metadata of the target table for 14 days
In the metadata of the pipe for 14 days
In the metadata of the target table for 64 days
In the metadata of the pipe for 64 days
The bulk data load history available after a COPY statement is stored in the metadata of the target table and is retained for 64 days. (The 14-day retention in pipe metadata applies to Snowpipe loads, not to bulk COPY loads.)
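This load history can be queried through the COPY_HISTORY table function, which returns load metadata for a given table (the table name here is hypothetical):

```sql
-- Load activity for MY_TABLE over the last 24 hours
SELECT file_name, last_load_time, row_count, status
FROM TABLE(information_schema.copy_history(
    TABLE_NAME => 'MY_TABLE',
    START_TIME => DATEADD('hour', -24, CURRENT_TIMESTAMP())));
```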
How can a user improve the performance of a single large complex query in Snowflake?
Scale up the virtual warehouse.
Scale out the virtual warehouse.
Enable standard warehouse scaling.
Enable economy warehouse scaling.
Scaling up a virtual warehouse in Snowflake increases the compute resources available to that single warehouse, which can improve the performance of a large, complex query by providing more CPU and memory. Scaling out (adding clusters) helps with concurrency, not with a single query.
What privilege should a user be granted to change permissions for new objects in a managed access schema?
Grant the OWNERSHIP privilege on the schema.
Grant the OWNERSHIP privilege on the database.
Grant the MANAGE GRANTS global privilege.
Grant ALL privileges on the schema.
To change permissions for new objects in a managed access schema, a user should be granted the MANAGE GRANTS global privilege. This privilege allows the user to manage access control through grants on all securable objects within Snowflake. References: [COF-C02] SnowPro Core Certification Exam Study Guide
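The privilege is granted at the account level (the role name is hypothetical):

```sql
-- Allow a role to manage grants on objects in managed access schemas
GRANT MANAGE GRANTS ON ACCOUNT TO ROLE my_grants_admin;
```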
What are advantages clones have over tables created with CREATE TABLE AS SELECT statement? (Choose two.)
The clone always stays in sync with the original table.
The clone has better query performance.
The clone is created almost instantly.
The clone will have time travel history from the original table.
The clone saves space by not duplicating storage.
Clones in Snowflake have the advantage of being created almost instantly and saving space by not duplicating storage. This is due to Snowflake’s zero-copy cloning feature, which allows for the creation of object clones without the additional storage costs typically associated with data duplication. Clones are independent of the original table and do not stay in sync with it, nor do they inherently have better query performance. However, they do inherit the Time Travel history from the original table at the time of cloning.
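A minimal sketch of zero-copy cloning (table names are hypothetical):

```sql
-- Instant, zero-copy clone of a table
CREATE TABLE orders_dev CLONE orders;

-- Clone the table as it existed 24 hours ago, using Time Travel
CREATE TABLE orders_yesterday CLONE orders AT (OFFSET => -60*60*24);
```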
For non-materialized views, what column in Information Schema and Account Usage identifies whether a view is secure or not?
CHECK_OPTION
IS_SECURE
IS_UPDATEABLE
TABLE_NAME
In the Information Schema and Account Usage, the column that identifies whether a view is secure or not is IS_SECURE.
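For example, the column can be inspected in a database's Information Schema (database and schema names are hypothetical):

```sql
-- List views in a schema and whether each is secure
SELECT table_name, is_secure
FROM my_db.information_schema.views
WHERE table_schema = 'PUBLIC';
```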
Which feature is integrated to support Multi-Factor Authentication (MFA) at Snowflake?
Authy
Duo Security
One Login
RSA SecurID Access
Snowflake integrates Duo Security to support Multi-Factor Authentication (MFA). This feature provides increased login security for users connecting to Snowflake, and it is managed completely by Snowflake without the need for users to sign up separately with Duo.
Which statement MOST accurately describes clustering in Snowflake?
The database ACCOUNTADMIN must define the clustering methodology for each Snowflake table.
Clustering is the way data is grouped together and stored within Snowflake micro-partitions.
The clustering key must be included in the COPY command when loading data into Snowflake.
Clustering can be disabled within a Snowflake account.
Clustering in Snowflake refers to the organization of data within micro-partitions, which are contiguous units of storage within Snowflake tables. Clustering keys can be defined to co-locate similar rows in the same micro-partitions, improving scan efficiency and query performance12.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is the MAXIMUM Time Travel retention period for a transient table?
0 days
1 day
7 days
90 days
The maximum Time Travel retention period for a transient table in Snowflake is 1 day. This is the default and maximum duration for which Snowflake maintains the historical data for transient tables, allowing users to query data as it appeared at any point within the past 24 hours.
A user has a standard multi-cluster warehouse auto-scaling policy in place.
Which condition will trigger a cluster to shut-down?
When after 2-3 consecutive checks the system determines that the load on the most-loaded cluster could be redistributed.
When after 5-6 consecutive checks the system determines that the load on the most-loaded cluster could be redistributed.
When after 5-6 consecutive checks the system determines that the load on the least-loaded cluster could be redistributed.
When after 2-3 consecutive checks the system determines that the load on the least-loaded cluster could be redistributed.
In a standard multi-cluster warehouse with auto-scaling, a cluster will shut down when, after 2-3 consecutive checks, the system determines that the load on the least-loaded cluster could be redistributed to other clusters. This ensures efficient resource utilization and cost management. References: [COF-C02] SnowPro Core Certification Exam Study Guide
How can a Snowflake user optimize query performance in Snowflake? (Select TWO).
Create a view.
Cluster a table.
Enable the search optimization service.
Enable Time Travel.
Index a table.
To optimize query performance in Snowflake, users can cluster a table, which organizes the data in a way that minimizes the amount of data scanned during queries. Additionally, enabling the search optimization service can improve the performance of selective point lookup queries on large tables.
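Both features are enabled with ALTER TABLE (table and column names are hypothetical):

```sql
-- Define a clustering key to co-locate rows by date
ALTER TABLE events CLUSTER BY (event_date);

-- Enable the search optimization service for point lookups
ALTER TABLE events ADD SEARCH OPTIMIZATION;
```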
What does Snowflake recommend regarding database object ownership? (Select TWO).
Create objects with ACCOUNTADMIN and do not reassign ownership.
Create objects with SYSADMIN.
Create objects with SECURITYADMIN to ease granting of privileges later.
Create objects with a custom role and grant this role to SYSADMIN.
Use only MANAGED ACCESS SCHEMAS for objects owned by ACCOUNTADMIN.
Snowflake recommends creating objects with the SYSADMIN role, or with a custom role that is then granted to SYSADMIN. This keeps object ownership within the SYSADMIN role hierarchy, provides granular control, and adheres to the principle of least privilege, avoiding unnecessary use of ACCOUNTADMIN for object creation.
Which statement accurately describes a characteristic of a materialized view?
A materialized view can query only a single table.
Data accessed through materialized views can be stale.
Materialized view refreshes need to be maintained by the user.
Querying a materialized view is slower than executing a query against the base table of the view.
A characteristic of a materialized view is that the data accessed through it can be stale. This is because the data in a materialized view may not reflect the latest changes in the base tables until the view is refreshed.
Which activities are included in the Cloud Services layer? (Select TWO).
Data storage
Dynamic data masking
Partition scanning
User authentication
Infrastructure management
The Cloud Services layer in Snowflake includes activities such as user authentication and infrastructure management. This layer coordinates activities across Snowflake, including security enforcement, query compilation and optimization, and more.
What is the minimum Snowflake edition needed for database failover and fail-back between Snowflake accounts for business continuity and disaster recovery?
Standard
Enterprise
Business Critical
Virtual Private Snowflake
The minimum Snowflake edition required for database failover and fail-back between Snowflake accounts for business continuity and disaster recovery is the Business Critical edition. References: Snowflake Documentation.
Which task privilege does a Snowflake role need in order to suspend or resume a task?
USAGE
OPERATE
MONITOR
OWNERSHIP
In Snowflake, the OPERATE privilege is required for a role to suspend or resume a task. This privilege allows the role to perform operational actions on the task, including suspending and resuming it.
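A minimal sketch (task and role names are hypothetical):

```sql
-- Allow a role to suspend and resume a task
GRANT OPERATE ON TASK my_db.my_schema.nightly_load TO ROLE etl_operator;

-- With that privilege, the role can run:
ALTER TASK my_db.my_schema.nightly_load SUSPEND;
ALTER TASK my_db.my_schema.nightly_load RESUME;
```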
How do Snowflake data providers share data that resides in different databases?
External tables
Secure views
Materialized views
User-Defined Functions (UDFs)
Snowflake data providers can share data residing in different databases through secure views. Secure views allow for the referencing of objects such as schemas, tables, and other views contained in one or more databases, as long as those databases belong to the same account. This enables providers to share data securely and efficiently with consumers. References: [COF-C02] SnowPro Core Certification Exam Study Guide
A user needs to create a materialized view in the schema MYDB.MYSCHEMA. Which statements will provide this access?
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE;
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER USER1;
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER1;
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO MYROLE;
To provide a user with the necessary access to create a materialized view in a schema, the user must be granted a role that has the CREATE MATERIALIZED VIEW privilege on that schema. First, the role is granted to the user, and then the privilege is granted to the role.
Where is Snowflake metadata stored?
Within the data files
In the virtual warehouse layer
In the cloud services layer
In the remote storage layer
Snowflake’s architecture is divided into three layers: database storage, query processing, and cloud services. The metadata, which includes information about the structure of the data, the SQL operations performed, and the service-level policies, is stored in the cloud services layer. This layer acts as the brain of the Snowflake environment, managing metadata, query optimization, and transaction coordination.
Which parameter prevents streams on tables from becoming stale?
MAX_DATA_EXTENSION_TIME_IN_DAYS
MIN_DATA_RETENTION_TIME_IN_DAYS
LOCK_TIMEOUT
STALE_AFTER
The parameter that prevents streams on tables from becoming stale is MAX_DATA_EXTENSION_TIME_IN_DAYS. This parameter specifies the maximum number of days for which Snowflake can extend the data retention period for the table in order to prevent streams on the table from becoming stale.
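The parameter can be set at the table level (object names are hypothetical):

```sql
-- Allow Snowflake to extend retention up to 30 days to keep streams fresh
ALTER TABLE my_table SET MAX_DATA_EXTENSION_TIME_IN_DAYS = 30;

-- SHOW STREAMS reports a STALE_AFTER timestamp for each stream
SHOW STREAMS LIKE 'my_stream';
```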
A tabular User-Defined Function (UDF) is defined by specifying a return clause that contains which keyword?
ROW_NUMBER
TABLE
TABULAR
VALUES
In Snowflake, a tabular User-Defined Function (UDF) is defined with a return clause that includes the keyword “TABLE.” This indicates that the UDF will return a set of rows, which can be used in the FROM clause of a query.
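A minimal sketch of a SQL tabular UDF (function, table, and column names are hypothetical):

```sql
-- RETURNS TABLE (...) marks this as a tabular UDF
CREATE OR REPLACE FUNCTION orders_for_region(r VARCHAR)
    RETURNS TABLE (order_id NUMBER, amount NUMBER)
    AS $$ SELECT order_id, amount FROM orders WHERE region = r $$;

-- Used in the FROM clause via TABLE(...)
SELECT * FROM TABLE(orders_for_region('EMEA'));
```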
What can a Snowflake user do with the information included in the details section of a Query Profile?
Determine the total duration of the query.
Determine the role of the user who ran the query.
Determine the source system that the queried table is from.
Determine if the query was on structured or semi-structured data.
The details section of a Query Profile in Snowflake provides users with various statistics and information about the execution of a query. One of the key pieces of information that can be determined from this section is the total duration of the query, which helps in understanding the performance and identifying potential bottlenecks. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which role has the ability to create and manage users and roles?
ORGADMIN
USERADMIN
SYSADMIN
SECURITYADMIN
The USERADMIN role in Snowflake has the ability to create and manage users and roles within the Snowflake environment. This role is specifically dedicated to user and role management and creation.
Which Snowflake technique can be used to improve the performance of a query?
Clustering
Indexing
Fragmenting
Using INDEX_HINTS
Clustering is a technique used in Snowflake to improve the performance of queries. It involves organizing the data in a table into micro-partitions based on the values of one or more columns. This organization allows Snowflake to efficiently prune non-relevant micro-partitions during a query, which reduces the amount of data scanned and improves query performance.
What happens when a virtual warehouse is resized?
When increasing the size of an active warehouse the compute resource for all running and queued queries on the warehouse are affected
When reducing the size of a warehouse the compute resources are removed only when they are no longer being used to execute any current statements.
The warehouse will be suspended while the new compute resource is provisioned and will resume automatically once provisioning is complete.
Users who are trying to use the warehouse will receive an error message until the resizing is complete
When a virtual warehouse in Snowflake is resized, specifically when it is increased in size, the additional compute resources become immediately available to all running and queued queries. This means that the performance of these queries can improve due to the increased resources. Conversely, when the size of a warehouse is reduced, the compute resources are not removed until they are no longer being used by any current operations.
Which semi-structured file formats are supported when unloading data from a table? (Select TWO).
ORC
XML
Avro
Parquet
JSON
Snowflake supports unloading data in several semi-structured file formats, including Parquet and JSON. These formats allow for efficient storage and querying of semi-structured data, which can be loaded directly into Snowflake tables without requiring a predefined schema.
Which stage type can be altered and dropped?
Database stage
External stage
Table stage
User stage
External stages can be altered and dropped in Snowflake. An external stage points to an external location, such as an S3 bucket, where data files are stored. Users can modify the stage’s definition or drop it entirely if it’s no longer needed. This is in contrast to table stages, which are tied to specific tables and cannot be altered or dropped independently.
What is a limitation of a Materialized View?
A Materialized View cannot support any aggregate functions
A Materialized View can only reference up to two tables
A Materialized View cannot be joined with other tables
A Materialized View cannot be defined with a JOIN
Materialized Views in Snowflake are designed to store the result of a query and can be refreshed to maintain up-to-date data. However, they have certain limitations, one of which is that they cannot be defined using a JOIN clause. This means that a Materialized View can only be created based on a single source table and cannot combine data from multiple tables using JOIN operations.
What are the default Time Travel and Fail-safe retention periods for transient tables?
Time Travel - 1 day. Fail-safe - 1 day
Time Travel - 0 days. Fail-safe - 1 day
Time Travel - 1 day. Fail-safe - 0 days
Transient tables are retained in neither Fail-safe nor Time Travel
Transient tables in Snowflake have a default Time Travel retention period of 1 day, which allows users to access historical data within the last 24 hours. However, transient tables do not have a Fail-safe period. Fail-safe is an additional layer of data protection that retains data beyond the Time Travel period for recovery purposes in case of extreme data loss. Since transient tables are designed for temporary or intermediate workloads with no requirement for long-term durability, they do not include a Fail-safe period.
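A minimal sketch of creating a transient table (table and column names are hypothetical):

```sql
-- Transient tables support DATA_RETENTION_TIME_IN_DAYS of 0 or 1 only,
-- and have no Fail-safe period
CREATE TRANSIENT TABLE staging_events (id NUMBER, payload VARIANT)
    DATA_RETENTION_TIME_IN_DAYS = 1;
```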
True or False: Fail-safe can be disabled within a Snowflake account.
True
False
Fail-safe cannot be disabled. It is a standard, non-configurable feature of Snowflake that provides a 7-day period for permanent tables, after the Time Travel retention period ends, during which historical data may be recoverable by Snowflake.
What is a responsibility of Snowflake's virtual warehouses?
Infrastructure management
Metadata management
Query execution
Query parsing and optimization
Management of the storage layer
The primary responsibility of Snowflake’s virtual warehouses is to execute queries. Virtual warehouses are one of the key components of Snowflake’s architecture, providing the compute power required to perform data processing tasks such as running SQL queries, performing joins, aggregations, and other data manipulations.
The Information Schema and Account Usage Share provide storage information for which of the following objects? (Choose three.)
Users
Tables
Databases
Internal Stages
The Information Schema and Account Usage Share in Snowflake provide metadata and historical usage data for various objects within a Snowflake account. Specifically, they offer storage information for Tables, Databases, and Internal Stages. These schemas contain views and table functions that allow users to query object metadata and usage metrics, such as the amount of data stored and historical activity.
References: The information is verified according to the SnowPro Core Certification Study Guide and Snowflake documentation
Which command is used to unload data from a Snowflake table into a file in a stage?
COPY INTO
GET
WRITE
EXTRACT INTO
The COPY INTO command is used in Snowflake to unload data from a table into a file in a stage. This command allows for the export of data from Snowflake tables into flat files, which can then be used for further analysis, processing, or storage in external systems.
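A minimal unload sketch (stage, path, and table names are hypothetical):

```sql
-- Unload a table into gzipped CSV files in a named stage
COPY INTO @my_stage/unload/orders_
    FROM orders
    FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP');
```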
What is a key feature of Snowflake architecture?
Zero-copy cloning creates a mirror copy of a database that updates with the original
Software updates are automatically applied on a quarterly basis
Snowflake eliminates resource contention with its virtual warehouse implementation
Multi-cluster warehouses allow users to run a query that spans across multiple clusters
Snowflake automatically sorts DATE columns during ingest for fast retrieval by date
One of the key features of Snowflake’s architecture is its unique approach to eliminating resource contention through the use of virtual warehouses. This is achieved by separating storage and compute resources, allowing multiple virtual warehouses to operate independently on the same data without affecting each other. This means that different workloads, such as loading data, running queries, or performing complex analytics, can be processed simultaneously without any performance degradation due to resource contention.
What data is stored in the Snowflake storage layer? (Select TWO).
Snowflake parameters
Micro-partitions
Query history
Persisted query results
Standard and secure view results
The Snowflake storage layer is responsible for storing data in an optimized, compressed, columnar format. This includes micro-partitions, which are the fundamental storage units that contain the actual data stored in Snowflake. Additionally, persisted query results, which are the results of queries that have been materialized and stored for future use, are also kept within this layer. This design allows for efficient data retrieval and management within the Snowflake architecture.
What feature can be used to reorganize a very large table on one or more columns?
Micro-partitions
Clustering keys
Key partitions
Clustered partitions
Clustering keys in Snowflake are used to reorganize large tables based on one or more columns. This feature optimizes the arrangement of data within micro-partitions to improve query performance, especially for large tables where efficient data retrieval is crucial. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is the purpose of an External Function?
To call code that executes outside of Snowflake
To run a function in another Snowflake database
To share data in Snowflake with external parties
To ingest data from on-premises data sources
The purpose of an External Function in Snowflake is to call code that executes outside of the Snowflake environment. This allows Snowflake to interact with external services and leverage functionalities that are not natively available within Snowflake, such as calling APIs or running custom code hosted on cloud services.
What transformations are supported in a CREATE PIPE ... AS COPY ... FROM (....) statement? (Select TWO.)
Data can be filtered by an optional where clause
Incoming data can be joined with other tables
Columns can be reordered
Columns can be omitted
Row level access can be defined
In a CREATE PIPE ... AS COPY ... FROM (....) statement, the supported transformations include reordering columns and omitting columns. The COPY transformation syntax supports only a limited subset of SELECT: columns can be selected in a different order than they appear in the staged files, and unneeded columns can be left out, but joins with other tables and filtering with a WHERE clause are not supported.
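A minimal sketch of a pipe with a transforming COPY (pipe, table, stage, and column names are hypothetical):

```sql
-- $1 and $3 are the first and third columns of the staged files;
-- the second file column is omitted, and columns are mapped explicitly
CREATE PIPE my_pipe AS
    COPY INTO target_table (id, event_ts)
    FROM (SELECT t.$1, t.$3 FROM @my_stage/events/ t);
```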
Which of the following describes how clustering keys work in Snowflake?
Clustering keys update the micro-partitions in place with a full sort, and impact the DML operations.
Clustering keys sort the designated columns over time, without blocking DML operations
Clustering keys create a distributed, parallel data structure of pointers to a table's rows and columns
Clustering keys establish a hashed key on each node of a virtual warehouse to optimize joins at run-time
Clustering keys in Snowflake work by sorting the designated columns over time. This process is done in the background and does not block data manipulation language (DML) operations, allowing normal database operations to continue without interruption. The purpose of clustering keys is to organize the data within micro-partitions to optimize query performance.
What Snowflake role must be granted for a user to create and manage accounts?
ACCOUNTADMIN
ORGADMIN
SECURITYADMIN
SYSADMIN
The ACCOUNTADMIN role is required for a user to create and manage accounts in Snowflake. This role has the highest level of privileges and is responsible for managing all aspects of the Snowflake account, including the ability to create and manage other user accounts.
What happens when an external or an internal stage is dropped? (Select TWO).
When dropping an external stage, the files are not removed and only the stage is dropped
When dropping an external stage, both the stage and the files within the stage are removed
When dropping an internal stage, the files are deleted with the stage and the files are recoverable
When dropping an internal stage, the files are deleted with the stage and the files are not recoverable
When dropping an internal stage, only selected files are deleted with the stage and are not recoverable
When an external stage is dropped in Snowflake, the reference to the external storage location is removed, but the actual files within the external storage (like Amazon S3, Google Cloud Storage, or Microsoft Azure) are not deleted. This means that the data remains intact in the external storage location, and only the stage object in Snowflake is removed.
On the other hand, when an internal stage is dropped, any files that were uploaded to the stage are deleted along with the stage itself. These files are not recoverable once the internal stage is dropped, as they are permanently removed from Snowflake’s storage.
What tasks can be completed using the copy command? (Select TWO)
Columns can be aggregated
Columns can be joined with an existing table
Columns can be reordered
Columns can be omitted
Data can be loaded without the need to spin up a virtual warehouse
The COPY command in Snowflake allows for the reordering of columns as they are loaded into a table, and it also permits the omission of columns from the source file during the load process. This provides flexibility in handling the schema of the data being ingested. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which of the following Snowflake capabilities are available in all Snowflake editions? (Select TWO)
Customer-managed encryption keys through Tri-Secret Secure
Automatic encryption of all data
Up to 90 days of data recovery through Time Travel
Object-level access control
Column-level security to apply data masking policies to tables and views
In all Snowflake editions, two key capabilities are universally available: automatic encryption of all data and object-level access control.
These features are part of Snowflake’s commitment to security and governance, and they are included in every edition of the Snowflake Data Cloud.
What features does Snowflake Time Travel enable?
Querying data-related objects that were created within the past 365 days
Restoring data-related objects that have been deleted within the past 90 days
Conducting point-in-time analysis for BI reporting
Analyzing data usage/manipulation over all periods of time
Snowflake Time Travel is a powerful feature that allows users to access historical data within a defined retention period. It enables two key capabilities: querying data as it existed at a point in time within the retention period (for example, point-in-time analysis for BI reporting), and restoring tables, schemas, and databases that have been dropped within that period.
While Time Travel does allow querying of past data, it is limited to the retention period set for the Snowflake account, which is typically 1 day for standard accounts and can be extended up to 90 days for enterprise accounts. It does not enable querying or restoring objects created or deleted beyond the retention period, nor does it provide analysis over all periods of time.
Which of the following are best practice recommendations that should be considered when loading data into Snowflake? (Select TWO).
Load files that are approximately 25 MB or smaller.
Remove all dates and timestamps.
Load files that are approximately 100-250 MB (or larger)
Avoid using embedded characters such as commas for numeric data types
Remove semi-structured data types
When loading data into Snowflake, it is recommended to produce files that are approximately 100-250 MB (or larger) compressed, and to avoid using embedded characters such as commas within numeric data.
These best practices are designed to optimize the data loading process, ensuring that data is loaded quickly and accurately into Snowflake.
What happens to the underlying table data when a CLUSTER BY clause is added to a Snowflake table?
Data is hashed by the cluster key to facilitate fast searches for common data values
Larger micro-partitions are created for common data values to reduce the number of partitions that must be scanned
Smaller micro-partitions are created for common data values to allow for more parallelism
Data may be colocated by the cluster key within the micro-partitions to improve pruning performance
When a CLUSTER BY clause is added to a Snowflake table, it specifies one or more columns to organize the data within the table’s micro-partitions. This clustering aims to colocate data with similar values in the same or adjacent micro-partitions. By doing so, it enhances the efficiency of query pruning, where the Snowflake query optimizer can skip over irrelevant micro-partitions that do not contain the data relevant to the query, thereby improving performance.
Which of the following are benefits of micro-partitioning? (Select TWO)
Micro-partitions cannot overlap in their range of values
Micro-partitions are immutable objects that support the use of Time Travel.
Micro-partitions can reduce the amount of I/O from object storage to virtual warehouses
Rows are automatically stored in sorted order within micro-partitions
Micro-partitions can be defined on a schema-by-schema basis
Micro-partitions in Snowflake are immutable objects, which means once they are written, they cannot be modified. This immutability supports the use of Time Travel, allowing users to access historical data within a defined period. Additionally, micro-partitions can significantly reduce the amount of I/O from object storage to virtual warehouses. This is because Snowflake’s query optimizer can skip over micro-partitions that do not contain relevant data for a query, thus reducing the amount of data that needs to be scanned and transferred.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake partner specializes in data catalog solutions?
Alation
DataRobot
dbt
Tableau
Alation is known for specializing in data catalog solutions and is a partner of Snowflake. Data catalog solutions are essential for organizations to effectively manage their metadata and make it easily accessible and understandable for users, which aligns with the capabilities provided by Alation.
A company's security audit requires generating a report listing all Snowflake logins (e.g., date and user) within the last 90 days. Which of the following statements will return the required information?
SELECT LAST_SUCCESS_LOGIN, LOGIN_NAME
FROM ACCOUNT_USAGE.USERS;
SELECT EVENT_TIMESTAMP, USER_NAME
FROM table(information_schema.login_history_by_user())
SELECT EVENT_TIMESTAMP, USER_NAME
FROM ACCOUNT_USAGE.ACCESS_HISTORY;
SELECT EVENT_TIMESTAMP, USER_NAME
FROM ACCOUNT_USAGE.LOGIN_HISTORY;
To generate a report listing all Snowflake logins within the last 90 days, the ACCOUNT_USAGE.LOGIN_HISTORY view should be used. This view provides information about login attempts, including successful and unsuccessful logins, and is suitable for security audits.
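To restrict the report to the last 90 days, the query from answer D can be extended with a filter on EVENT_TIMESTAMP (a sketch, assuming access to the shared SNOWFLAKE database):

```sql
-- LOGIN_HISTORY retains 365 days of login events, so a 90-day window is covered.
SELECT EVENT_TIMESTAMP, USER_NAME
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE EVENT_TIMESTAMP >= DATEADD(day, -90, CURRENT_TIMESTAMP());
```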
True or False: Reader Accounts are able to extract data from shared data objects for use outside of Snowflake.
True
False
Reader accounts in Snowflake are designed to allow users to read data shared with them but do not have the capability to extract data for use outside of Snowflake. They are intended for consuming shared data within the Snowflake environment only.
True or False: A Virtual Warehouse can be resized while suspended.
True
False
Virtual Warehouses in Snowflake can indeed be resized while they are suspended. Resizing a warehouse involves changing the number of compute resources (servers) allocated to it, which can be done to adjust performance and cost. When a warehouse is suspended, it is not currently running any queries, but its definition and metadata remain intact, allowing for modifications like resizing.
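For example, a suspended warehouse (the name my_wh is hypothetical) can be resized with:

```sql
-- The new size takes effect when the warehouse is next resumed.
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';
```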
True or False: A 4X-Large Warehouse may, at times, take longer to provision than an X-Small Warehouse.
True
False
Provisioning time can vary based on the size of the warehouse. A 4X-Large Warehouse has far more compute resources and may, at times, take longer to provision than an X-Small Warehouse, which has fewer resources and can generally be provisioned almost immediately.
Which feature is only available in the Enterprise or higher editions of Snowflake?
Column-level security
SOC 2 type II certification
Multi-factor Authentication (MFA)
Object-level access control
Column-level security is a feature that allows fine-grained control over access to specific columns within a table, implemented in Snowflake through Dynamic Data Masking and External Tokenization. This is particularly useful for managing sensitive data and ensuring that only authorized users can view or manipulate certain pieces of information. Column-level security requires Enterprise Edition or higher, whereas SOC 2 Type II certification, Multi-factor Authentication, and object-level access control are available in all editions.
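Column-level security is applied by attaching a masking policy to a column. A minimal sketch, with hypothetical role, table, and column names:

```sql
-- Only the HR_ADMIN role sees plaintext email addresses; everyone else sees a mask.
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() = 'HR_ADMIN' THEN val ELSE '*** MASKED ***' END;

ALTER TABLE employees MODIFY COLUMN email SET MASKING POLICY email_mask;
```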
Which services does the Snowflake Cloud Services layer manage? (Select TWO).
Compute resources
Query execution
Authentication
Data storage
Metadata
The Snowflake Cloud Services layer manages a variety of services that are crucial for the operation of the Snowflake platform. Among these services, Authentication and Metadata management are key components. Authentication is essential for controlling access to the Snowflake environment, ensuring that only authorized users can perform actions within the platform. Metadata management involves handling all the metadata related to objects within Snowflake, such as tables, views, and databases, which is vital for the organization and retrieval of data.
Which object type is granted permissions for reading a table?
User
Role
Attribute
Schema
In Snowflake, permissions for accessing database objects, including tables, are not granted directly to users but rather to roles. A role encapsulates a collection of privileges on various Snowflake objects. Users are then granted roles, and through those roles, they inherit the permissions necessary to read a table or perform other actions. This approach adheres to the principle of least privilege, allowing for granular control over database access and simplifying the management of user permissions.
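A minimal sketch of this model, with hypothetical role, table, and user names:

```sql
-- Privileges are granted to a role, and the role is granted to a user.
GRANT SELECT ON TABLE sales TO ROLE analyst;
GRANT ROLE analyst TO USER jsmith;
```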
What does the Activity area of Snowsight allow users to do? (Select TWO).
Schedule automated data backups.
Explore each step of an executed query.
Monitor queries executed by users in an account.
Create and manage user roles and permissions.
Access Snowflake Marketplace to find and integrate datasets.
The Activity area of Snowsight, Snowflake's web interface, allows users to perform several important tasks related to query management and performance analysis. Among the options provided, the correct ones are exploring each step of an executed query through the Query Profile, and monitoring the queries executed by users in an account through the Query History page.
These features are crucial for effective query performance tuning and ensuring efficient use of Snowflake's resources.
The following settings are configured:
MIN_DATA_RETENTION_TIME_IN_DAYS = 5 (set at the account level)
DATA_RETENTION_TIME_IN_DAYS = 2 (set at the object level)
For how many days will the data be retained at the object level?
2
3
5
7
The settings configure data retention at two different levels: at the account level, MIN_DATA_RETENTION_TIME_IN_DAYS is set to 5 days, and at the object level, DATA_RETENTION_TIME_IN_DAYS is set to 2 days. MIN_DATA_RETENTION_TIME_IN_DAYS acts as a floor, not an override: the effective retention period for an object is MAX(DATA_RETENTION_TIME_IN_DAYS, MIN_DATA_RETENTION_TIME_IN_DAYS). The data will therefore be retained for 5 days at the object level.References: Snowflake Documentation on Data Retention Policies
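A sketch of how these settings could be applied (the table name is hypothetical):

```sql
-- Account-level retention floor (requires the ACCOUNTADMIN role).
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 5;

-- Object-level setting; the effective retention is MAX(2, 5) = 5 days.
ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 2;
```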
How should a Snowflake user configure a virtual warehouse to be in Maximized mode?
Set the WAREHOUSE_SIZE to 6XL.
Set the STATEMENT_TIMEOUT_IN_SECONDS to 0.
Set the MAX_CONCURRENCY_LEVEL to a value of 12 or larger.
Set the same value for both MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT.
In Snowflake, a multi-cluster virtual warehouse runs in Maximized mode when it starts all of its clusters whenever it runs, rather than scaling the number of clusters up and down as in Auto-scale mode. This ensures that the warehouse has a consistent amount of compute resources available, enhancing performance for workloads that require a high level of parallel processing or for handling high query volumes.
To configure a virtual warehouse in maximized mode, you should set the same value for both MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT. This configuration ensures that the warehouse operates with a fixed number of clusters, thereby providing a stable and maximized level of compute resources.
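For example (the warehouse name is hypothetical; multi-cluster warehouses require Enterprise Edition or higher):

```sql
-- Three clusters always run while the warehouse is running: Maximized mode.
ALTER WAREHOUSE my_wh SET MIN_CLUSTER_COUNT = 3 MAX_CLUSTER_COUNT = 3;
```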
A user needs to MINIMIZE the cost of large tables that are used to store transitory data. The data does not need to be protected against failures, because the data can be reconstructed outside of Snowflake.
What table type should be used?
Permanent
Transient
Temporary
External
For minimizing the cost of large tables that are used to store transitory data, which does not need to be protected against failures because it can be reconstructed outside of Snowflake, the best table type to use is Transient. Transient tables in Snowflake are designed for temporary or transitory data storage and offer reduced storage costs compared to permanent tables. However, unlike temporary tables, they persist across sessions until explicitly dropped.
CREATE TRANSIENT TABLE my_transient_table (...);
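Expanding the skeleton above into a fuller sketch (the column names are hypothetical):

```sql
-- Transient tables have no Fail-safe period; setting retention to 0
-- also disables Time Travel, further minimizing storage cost.
CREATE TRANSIENT TABLE staging_events (
    event_id  NUMBER,
    payload   VARIANT,
    loaded_at TIMESTAMP_NTZ
) DATA_RETENTION_TIME_IN_DAYS = 0;
```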
Based on Snowflake recommendations, when creating a hierarchy of custom roles, the top-most custom role should be assigned to which role?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
USERADMIN
Based on Snowflake recommendations, when creating a hierarchy of custom roles, the top-most custom role should be granted to the SYSADMIN role. Assigning custom roles to SYSADMIN ensures that system administrators retain the ability to manage all objects (such as warehouses, databases, and tables) created by users holding those custom roles. The ACCOUNTADMIN role, by contrast, should be reserved for account-level administration and granted to a very limited number of users, rather than used as the parent of the custom role hierarchy.
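A sketch of such a hierarchy, with hypothetical custom role names:

```sql
CREATE ROLE analyst;
CREATE ROLE senior_analyst;
GRANT ROLE analyst TO ROLE senior_analyst;
-- The top-most custom role rolls up to SYSADMIN.
GRANT ROLE senior_analyst TO ROLE SYSADMIN;
```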
What is the PRIMARY factor that determines the cost of using a virtual warehouse in Snowflake?
The type of SQL statements executed
The number of tables or databases queried
The amount of data stored in the warehouse
The length of time the compute resources in each cluster run
The primary factor that determines the cost of using a virtual warehouse in Snowflake is the length of time the compute resources in each cluster run. Snowflake's pricing model for compute usage is based on Snowflake credits, which are consumed according to how long virtual warehouses run and the size of those warehouses, with per-second billing and a 60-second minimum each time a warehouse starts. The longer the compute resources run, the more credits are consumed, which directly impacts the cost.
Given the statement template below, which database objects can be added to a share?(Select TWO).
GRANT
Secure functions
Stored procedures
Streams
Tables
Tasks
In Snowflake, shares are used to share data across different Snowflake accounts securely. When you create a share, you can include various database objects that you want to share with consumers. According to Snowflake's documentation, the types of objects that can be added to a share include tables, external tables, secure views, secure materialized views, and secure functions (secure UDFs). Stored procedures, streams, and tasks cannot be added to a share. Therefore, the correct answers are secure functions (A) and tables (D).
To add an object to a share, you use the GRANT statement to grant privileges on the object to the share. The syntax involves specifying the type of object, the object name, and the share to which you are granting access. For example:
GRANT SELECT ON TABLE my_table TO SHARE my_share;
GRANT USAGE ON FUNCTION my_secure_udf(NUMBER) TO SHARE my_share;
These commands grant the SELECT privilege on a table named my_table and the USAGE privilege on a secure function named my_secure_udf to a share named my_share. This enables the consumer of the share to access these objects according to the granted privileges.
Which Snowflake data governance feature can support auditing when a user query reads column data?
Access History
Data classification
Column-level security
Object dependencies
Access History in Snowflake is a feature designed to support auditing by tracking access to data within Snowflake, including when a user's query reads column data. It provides detailed information on queries executed, including the user who ran the query, the query text, and the objects (e.g., tables, views) accessed by the query. This feature is instrumental for auditing purposes, helping organizations to monitor and audit data access for security and compliance.
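For example, a sketch of an audit query over the last 7 days (the window is arbitrary):

```sql
-- BASE_OBJECTS_ACCESSED lists the underlying tables and columns each query read.
SELECT query_id, user_name, query_start_time, base_objects_accessed
FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY
WHERE query_start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP());
```

Note that Access History requires Enterprise Edition or higher.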
A clustering key was defined on a table, but it is no longer needed. How can the key be removed?
ALTER TABLE