Recipes
- Fault-tolerant AD (Active Directory)
- Create two Windows Server instances in different regions to act as primary and secondary DCs
- Create a VPC with two subnets in different regions, and allow traffic between them
gcloud compute networks create ${vpc_name} --subnet-mode custom
gcloud compute networks subnets create private-ad-zone-1 --network ${vpc_name} --range 10.1.0.0/24 --region ${region1}
gcloud compute networks subnets create private-ad-zone-2 --network ${vpc_name} --range 10.2.0.0/24 --region ${region2}
gcloud compute firewall-rules create allow-internal-ports-private-ad --network ${vpc_name} \
--allow tcp:1-65535,udp:1-65535,icmp --source-ranges 10.1.0.0/24,10.2.0.0/24
- Storage infrastructure
- create bucket for uploads:
gsutil mb gs://$BUCKET
- create upload notification:
gsutil notification create -t new-doc -f json -e OBJECT_FINALIZE gs://$BUCKET
- set up Cloud Run application:
- build and deploy:
gcloud builds submit --tag gcr.io/$PROJECT/pdf-converter
gcloud beta run deploy pdf-converter \
--image gcr.io/$PROJECT/pdf-converter \
--platform managed --region us-central1 --memory=2Gi \
--set-env-vars PDF_BUCKET=$PROJECT-processed \
--no-allow-unauthenticated
- Setup permissions
- create app service account and grant permission to run the app:
gcloud iam service-accounts create app-runner --display-name "PubSub Cloud Run Invoker"
- allow service account to invoke the application
gcloud beta run services add-iam-policy-binding pdf-converter \
--member=serviceAccount:app-runner@$PROJECT.iam.gserviceaccount.com --role=roles/run.invoker
- allow project to create pubsub authentication token
gcloud projects add-iam-policy-binding $PROJECT \
--member=serviceAccount:service-$PROJECT_NUMBER@gcp-sa-pubsub.iam.gserviceaccount.com \
--role=roles/iam.serviceAccountTokenCreator
- pubsub
- create a push subscription
gcloud beta pubsub subscriptions create pdf-conv-sub \
--topic new-doc \
--push-endpoint=$SERVICE_URL \
--push-auth-service-account=app-runner@$PROJECT.iam.gserviceaccount.com
Connect two networks using VPN
- Reserve a static external IP address for each gateway (two in total)
- create a VPN gateway and a tunnel (forwarding rules are created automatically if using the wizard), specifying
- the reserved static IP address
- the remote VPN router's static IP address
- for route-based VPNs, the other network's CIDR range
- repeat the above steps for the other VPN gateway
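The steps above can be sketched with gcloud for a classic (policy-based) VPN; the names gw1/tunnel1 and the ${...} variables are placeholders, not from the original notes:

```shell
# reserve a static IP for the gateway, then create the gateway itself
gcloud compute addresses create vpn-ip-1 --region ${region1}
gcloud compute target-vpn-gateways create gw1 --network ${vpc_name} --region ${region1}
# forwarding rules for ESP and UDP 500/4500 (the console wizard creates these for you)
gcloud compute forwarding-rules create gw1-esp --region ${region1} \
    --ip-protocol ESP --address ${static_ip} --target-vpn-gateway gw1
gcloud compute forwarding-rules create gw1-udp500 --region ${region1} \
    --ip-protocol UDP --ports 500 --address ${static_ip} --target-vpn-gateway gw1
gcloud compute forwarding-rules create gw1-udp4500 --region ${region1} \
    --ip-protocol UDP --ports 4500 --address ${static_ip} --target-vpn-gateway gw1
# tunnel to the remote gateway's static IP, plus a route for the remote CIDR
gcloud compute vpn-tunnels create tunnel1 --region ${region1} \
    --peer-address ${peer_ip} --shared-secret ${secret} --target-vpn-gateway gw1
gcloud compute routes create vpn-route1 --network ${vpc_name} \
    --destination-range ${remote_cidr} --next-hop-vpn-tunnel tunnel1 \
    --next-hop-vpn-tunnel-region ${region1}
```

Repeat the same commands on the other network, swapping the peer address and CIDR.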
Create L3 LB
- create a target pool providing a single access point to multiple VMs behind the load balancer
- create a template that will be used to instantiate multiple nodes
- create instance group specifying template, target pool and number of nodes
- create forwarding rule specifying target pool and port
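A gcloud sketch of those four steps; the names (www-pool, www-template, www-group) and the machine type/image are illustrative assumptions:

```shell
# target pool (optionally backed by an HTTP health check)
gcloud compute http-health-checks create basic-check
gcloud compute target-pools create www-pool --region ${region} --http-health-check basic-check
# template used to stamp out identical nodes
gcloud compute instance-templates create www-template --machine-type e2-small \
    --image-family debian-11 --image-project debian-cloud --tags http-server
# managed instance group from the template, attached to the target pool
gcloud compute instance-groups managed create www-group --zone ${zone} \
    --template www-template --size 3
gcloud compute instance-groups managed set-target-pools www-group \
    --zone ${zone} --target-pools www-pool
# forwarding rule: external IP/port 80 -> target pool
gcloud compute forwarding-rules create www-rule --region ${region} \
    --ports 80 --target-pool www-pool
```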
Create Cloud NAT
- Create Gateway (pick region, network, specific subnets if needed)
- Select/create Cloud Router
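The same two steps in gcloud (nat-router/nat-config are placeholder names):

```shell
# Cloud NAT needs a Cloud Router in the same region and network
gcloud compute routers create nat-router --network ${vpc_name} --region ${region}
# NAT gateway: auto-allocated external IPs, covering all subnets in the region
gcloud compute routers nats create nat-config --router nat-router --region ${region} \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```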
Create Custom VM image
- Create a VM
- Under Disk, uncheck "Delete boot disk when instance is deleted"
- Customize the VM, e.g. install apache2
sudo apt-get install -y apache2; sudo update-rc.d apache2 enable
- Delete the VM
- Create Custom Image, choose source as disk
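The same flow in gcloud (base-vm/apache-image are assumed names):

```shell
# keep the boot disk around when the VM goes away
gcloud compute instances create base-vm --zone ${zone} --no-boot-disk-auto-delete
# ...customize (e.g. install apache2), then delete the VM but keep the boot disk
gcloud compute instances delete base-vm --zone ${zone} --keep-disks boot
# create the custom image from the orphaned disk
gcloud compute images create apache-image \
    --source-disk base-vm --source-disk-zone ${zone}
```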
Create HTTPS LB
- Create firewall rules to allow HealthCheck probes from
130.211.0.0/22 and 35.191.0.0/16
- Create managed instance group, with
- Autoscaling Metric type as HTTP load balance utilization
- Healthcheck with TCP/80 (to enable proactive deletion and recreation of unhealthy instances)
- Configure HTTP LB (Pick HTTP LB, From Internet to My VMs)
- Backend Configuration: Create Backend service and add one or more managed instance groups as backends, for each backend
- pick HTTP/80 as port, managed instance group
- Balancing mode as: RPS or Utilization
- pick health-check created earlier
- Host and path rules (defaults to all paths)
- Front-end configuration: add IP4 (and IP6 if needed) and pick protocol (HTTP or HTTPS)
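The firewall and health-check pieces of the recipe above can be created from the CLI; the rule name, target tag, and port 80 are assumptions:

```shell
# allow Google's health-check probe ranges to reach the backends
gcloud compute firewall-rules create allow-lb-health-checks --network ${vpc_name} \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --allow tcp:80 --target-tags http-server
# TCP/80 health check, usable both by the backend service and for MIG autohealing
gcloud compute health-checks create tcp http-basic-check --port 80
```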
Setup a VM to act as a NAT
- use a VM as a NAT gateway by enabling IP forwarding (run the following as root on the NAT VM)
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/20-natgw.conf
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
gcloud compute routes create natroute1 \
--network example-vpc \
--destination-range 0.0.0.0/0 \
--tags no-ip \
--priority 800 \
--next-hop-instance-zone us-east1-b \
--next-hop-instance $nat_1_instance
Application Development
Datastore Insert
const {Datastore} = require('@google-cloud/datastore');
const ds = new Datastore({projectId: config.get('GCLOUD_PROJECT')});
function create({v1, v2, v3}) {
const key = ds.key('TableName'); // incomplete key: Datastore generates the id
return ds.save({
key,
data: [{name: 'attr1', value: v1}, {name: 'attr2', value: v2}, {name: 'attr3', value: v3}]
}); // returns a promise
}
function query() {
const query = ds.createQuery('Company').limit(10);
return ds.runQuery(query); // promise resolving to [entities, queryInfo]
}
Cloud Storage
Cross project sharing
- recipe:
- create a service account in the host project
- set appropriate role (Object Viewer/Object Admin etc)
- Create/upload key Add Key -> Create New Key (download .json private key file)
- On the guest project's VM upload private key file
- activate host service account
gcloud auth activate-service-account --key-file credentials.json
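End to end, the recipe looks roughly like this; cross-proj-reader, ${host_project}, and ${host_bucket} are placeholder names:

```shell
# in the host project: service account + storage role + downloadable key
gcloud iam service-accounts create cross-proj-reader --project ${host_project}
gcloud projects add-iam-policy-binding ${host_project} \
    --member serviceAccount:cross-proj-reader@${host_project}.iam.gserviceaccount.com \
    --role roles/storage.objectViewer
gcloud iam service-accounts keys create credentials.json \
    --iam-account cross-proj-reader@${host_project}.iam.gserviceaccount.com
# on the guest project's VM, after uploading credentials.json:
gcloud auth activate-service-account --key-file credentials.json
gsutil ls gs://${host_bucket}
```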
CSEK Rotate keys
- To use CSEK, set a value for encryption_key in the ~/.boto file
- generate a new .boto file: gsutil config -n
- rotate keys by:
- retaining the current encryption key as decryption_key1
- generating a new key and setting it as the new value for encryption_key in ~/.boto
- re-encrypting objects:
gsutil rewrite -k gs://$BUCKET_NAME_1/sample.txt
- removing the value for decryption_key1 after all objects have been re-encrypted
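During rotation the relevant ~/.boto entries look roughly as below (the key values are placeholders for base64-encoded AES-256 keys); the recursive rewrite re-encrypts a whole bucket under the new encryption_key:

```shell
# ~/.boto while both keys are live:
#   [GSUtil]
#   encryption_key = <new_base64_key>
#   decryption_key1 = <old_base64_key>
# re-encrypt every object, then remove decryption_key1 from ~/.boto
gsutil rewrite -k -r gs://${BUCKET_NAME}
```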