Recipes¶
Data Sharing¶
Provider¶
- `create share <share-name>`
- `grant <privilege> on <object> to share <share-name>`
    - `select` for tables/views, `usage` for UDFs/database, `reference_usage` for views referencing multiple databases
- `alter share <share-name> add accounts = <consumer-account>`
- Optionally create a reader account: `create managed account`
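Put together, the provider side might look like this (share, database, schema, view, and account names here are all hypothetical):

```sql
-- hypothetical names throughout
create share sales_share;
grant usage on database sales_db to share sales_share;
grant usage on schema sales_db.public to share sales_share;
grant select on view sales_db.public.orders_v to share sales_share;
alter share sales_share add accounts = xy12345;
```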
Consumer¶
- `create database <x> from share <provider account>.<share-name>`
- `grant imported privileges to role ...`
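On the consumer side, a minimal sketch (database, account, share, and role names are hypothetical):

```sql
create database sales from share provider_acct.sales_share;
grant imported privileges on database sales to role analyst;
```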
Replication¶
Group based¶
- primary
- secondary
legacy¶
- primary: `alter database ... enable replication to accounts ...`
- secondary: `create database ... as replica of <account>.<primary-db>`
- secondary: promote to primary: `alter database ... primary`
    - the previous primary then becomes RO and the new primary becomes RW
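A sketch of the legacy flow, assuming hypothetical organization, account, and database names:

```sql
-- on the primary account
alter database sales_db enable replication to accounts myorg.acct2;

-- on the secondary account
create database sales_db as replica of myorg.acct1.sales_db;
alter database sales_db refresh;

-- failover: promote the secondary; the old primary becomes read-only
alter database sales_db primary;
```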
S3 Storage Integration¶
- Create an IAM policy, and in the JSON tab add a policy doc that allows:
    - `GetObject` and `GetObjectVersion` on all objects under the desired prefix
    - `ListBucket` and `GetBucketLocation` for the bucket, with a condition that includes the desired prefix
- Create an IAM Role with "Another AWS Account" as the trusted entity type
- set Account ID and External ID to temporary values for now
- Create storage integration, using the Role ARN from the previous step
- Use `DESC INTEGRATION <integration_name>` to retrieve `STORAGE_AWS_IAM_USER_ARN` and `STORAGE_AWS_EXTERNAL_ID`
- Update the IAM Role via Edit trust relationship and modify the JSON doc to set the `Principal` and `sts:ExternalId` values
- Optional (not a best practice): use one IAM Role's trust policy to allow all external stages in your account (the default is one role per storage integration)
    - The External ID has the format `<snowflake_account>_SFCRole=<snowflake_role_id>_<random-id>`. Replace the `<random-id>` with `*`
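A storage integration built from the Role ARN above might look like this (integration name, ARN, and bucket are hypothetical):

```sql
create storage integration s3_int
  type = external_stage
  storage_provider = 'S3'
  enabled = true
  storage_aws_role_arn = 'arn:aws:iam::123456789012:role/snowflake-access'
  storage_allowed_locations = ('s3://my-bucket/my-prefix/');

-- retrieve the values to paste into the IAM trust policy
desc integration s3_int;
```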
S3 Notifications¶
- Used for Auto-Ingest, Directory Stage, and External Table
- If there is no other S3 event notification configured on the bucket, configure SQS; otherwise use SNS
Using SQS¶
- `create pipe ... auto_ingest=true as copy into ...`
- Get the SQS ARN:
    - for pipes: `show pipes` and note the `notification_channel` column
    - for directory stages: `desc stage ...` and note `directory_notification_channel`
    - for external tables: `show external tables` and note `notification_channel`
- For the S3 bucket, configure SQS using:
    - Events: `ObjectCreate (All)` and `ObjectRemoved` (optionally filter by path/extension)
    - Add the SQS Queue ARN from the previous step
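An auto-ingest pipe along these lines (pipe, table, and stage names are hypothetical):

```sql
create pipe raw.events_pipe auto_ingest = true as
  copy into raw.events
  from @raw.s3_stage/events/
  file_format = (type = 'JSON');

-- notification_channel holds the SQS ARN to configure on the bucket
show pipes;
```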
Using SNS¶
- AWS Console -> SNS -> Topics -> locate the topic for the S3 bucket -> note the Topic ARN
- Get the Snowflake account's SNS policy statement using `system$get_aws_sns_iam_policy('<sns_topic_arn>')`
- Edit the Access policy for the SNS Topic and add the following policy statements:
    - subscribe the Snowflake SQS queue using the statement from the previous step
    - a policy allowing the S3 service the `SNS:Publish` action
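With SNS, the pipe references the topic directly; a sketch with a hypothetical topic ARN and names:

```sql
-- statement to merge into the SNS topic's access policy
select system$get_aws_sns_iam_policy(
  'arn:aws:sns:us-east-1:123456789012:s3-events');

create pipe raw.events_pipe
  auto_ingest = true
  aws_sns_topic = 'arn:aws:sns:us-east-1:123456789012:s3-events'
  as copy into raw.events from @raw.s3_stage/events/;
```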
Auto-refresh directory tables¶
- Configure secure access to S3
- `create stage ... directory = (enable = true auto_refresh = true [aws_sns_topic = <s3-sns-topic>])`
- Configure the event notification
- Manually refresh the directory table metadata once
- Configure security:
    - `usage` on database, schema, named stage, and named file format, and `read` on the named stage
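A sketch, assuming the storage integration from earlier and hypothetical stage/bucket names:

```sql
create stage docs_stage
  url = 's3://my-bucket/docs/'
  storage_integration = s3_int
  directory = (enable = true auto_refresh = true);

alter stage docs_stage refresh;        -- one-time manual metadata sync
select * from directory(@docs_stage);  -- query the directory table
```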
External table¶
- Configure secure access to S3
- `create external table ... with location = @s3_stage/tpch_sf100/orders [auto_refresh = true]`
- Manually sync the metadata once using `alter external table <table-name> refresh`
- Configure security:
    - `usage` on database, schema, named stage, and named file format; `read` on the named stage; `select` on the external table
- Optional: remove obsolete files using `alter external table <table> remove files (<file1>, ...)`
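For example (table name and file format are hypothetical; a `file_format` is normally required):

```sql
create external table orders_ext
  with location = @s3_stage/tpch_sf100/orders
  file_format = (type = parquet)
  auto_refresh = true;

alter external table orders_ext refresh;  -- one-time manual metadata sync
```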
SCIM¶
- `create role provisioner_role`
- `grant create user, create role on account to role provisioner_role`
- `create security integration okta_provisioning type = scim scim_client = 'okta' run_as_role = 'PROVISIONER_ROLE'`
- `select system$generate_scim_access_token('okta_provisioning')`
- Okta:
    - Snowflake Application -> Settings -> Integration -> Enable API Integration
    - For the API token, enter the value generated by the Snowflake function
    - Optional: For App -> Provisioning Features, enable `defaultRole`, `defaultSecondaryRoles` and `defaultWarehouse` attribute values/expressions
- Optional: to enable Snowflake-initiated SSO: `alter account set sso_login_page = true`
Private Link¶
- Contact support with the (12-digit) VPC account IDs and the Snowflake account URLs
- Snowflake Support supplies the VPCE address (`com.amazonaws.vpce.<region_id>.vpce-svc-xxxxxxxxxxxxxxxxx`), or use `SYSTEM$GET_PRIVATELINK_CONFIG()`
- Create an endpoint for the VPCE address
    - authorize a VPCE security group to allow inbound `443` (Snowflake) and `80` (OCSP)
    - authorize the SG of services that connect to Snowflake to allow outgoing `443` and `80` to the VPCE CIDR
- Configure your VPC: create CNAME records for
    - locator and regionless: `privatelink-account-url` and `regionless-privatelink-account-url` (`SYSTEM$GET_PRIVATELINK_CONFIG()`)
    - OCSP: `privatelink-ocsp-url` (`SYSTEM$GET_PRIVATELINK_CONFIG()`)
    - Snowsight (and Data Marketplace): `SNOWSIGHT_DEPLOYMENT` from `SYSTEM$WHITELIST_PRIVATELINK()`
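The CNAME targets can be pulled out of the config function, e.g. by flattening its JSON output:

```sql
select f.key, f.value
from table(flatten(input => parse_json(system$get_privatelink_config()))) f;
```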
Private access for S3¶
- Configure either a VPC interface endpoint (recommended) for S3 or one or more gateway endpoints
- When using an S3 gateway endpoint, allow the following hosts:
    - `sfc-*-stage.s3.amazonaws.com:443`
    - `sfc-repo.snowflakecomputing.com:443` (for auto-updating SnowSQL)
    - `sfc-*-stage.s3{-,.}<region_id>.amazonaws.com:443` (except US East)
- When using a VPC interface endpoint for internal stages:
    - `alter account set enable_internal_stages_privatelink = true;` record `privatelink_internal_stage` (`system$get_privatelink_config()`)
    - create a VPC endpoint for AWS PrivateLink for S3; record the VPCE DNS Name
    - create a DNS entry to resolve `<bucket_name>.s3.<region>.amazonaws.com` to the endpoint from the previous step, replacing the wildcard `*` with the bucket name
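The internal-stage value to point that DNS entry at can be read back from the same config function (inspect the full JSON rather than guessing the key name, which may vary by account):

```sql
alter account set enable_internal_stages_privatelink = true;

-- inspect the full config and pick out the internal stage entry
select parse_json(system$get_privatelink_config()) as cfg;
```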