
Google Professional-Cloud-DevOps-Engineer: Google Cloud Certified - Professional Cloud DevOps Engineer Exam Practice Test

Google Cloud Certified - Professional Cloud DevOps Engineer Exam Questions and Answers

Question 1

You need to define Service Level Objectives (SLOs) for a high-traffic multi-region web application. Customers expect the application to always be available and have fast response times. Customers are currently happy with the application performance and availability. Based on current measurement, you observe that the 90th percentile of latency is 120ms and the 95th percentile of latency is 275ms over a 28-day window. What latency SLO would you recommend to the team to publish?

Options:

A.

90th percentile – 100 ms; 95th percentile – 250 ms

B.

90th percentile – 120 ms; 95th percentile – 275 ms

C.

90th percentile – 150 ms; 95th percentile – 300 ms

D.

90th percentile – 250 ms; 95th percentile – 400 ms
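
As background for the percentile reasoning in this question, the sketch below computes observed p90/p95 latencies from a sample set and compares them against candidate SLO targets. The sample data and the 150 ms / 300 ms targets are illustrative assumptions, not values from an answer key.

```python
import numpy as np

# Hypothetical latency samples (milliseconds) gathered over a 28-day window.
latencies_ms = np.random.lognormal(mean=4.0, sigma=0.5, size=100_000)

observed_p90 = np.percentile(latencies_ms, 90)
observed_p95 = np.percentile(latencies_ms, 95)
print(f"Observed p90: {observed_p90:.0f} ms, p95: {observed_p95:.0f} ms")

# Candidate SLO targets set looser than observed performance, leaving an
# error budget so normal variation does not immediately trigger a violation.
slo_p90_ms, slo_p95_ms = 150, 300
print("p90 within target:", observed_p90 <= slo_p90_ms)
print("p95 within target:", observed_p95 <= slo_p95_ms)
```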

Question 2

You are the on-call Site Reliability Engineer for a microservice that is deployed to a Google Kubernetes Engine (GKE) Autopilot cluster. Your company runs an online store that publishes order messages to Pub/Sub, and a microservice receives these messages and updates stock information in the warehousing system. A sales event caused an increase in orders, and the stock information is not being updated quickly enough. This is causing a large number of orders to be accepted for products that are out of stock. You check the metrics for the microservice and compare them to typical levels.

You need to ensure that the warehouse system accurately reflects product inventory at the time orders are placed and minimize the impact on customers. What should you do?

Options:

A.

Decrease the acknowledgment deadline on the subscription

B.

Add a virtual queue to the online store that allows typical traffic levels

C.

Increase the number of Pod replicas

D.

Increase the Pod CPU and memory limits
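
As a companion to this scenario, the sketch below checks the Pub/Sub backlog that typically signals the subscriber is falling behind, by querying the `pubsub.googleapis.com/subscription/num_undelivered_messages` metric through the Cloud Monitoring API. The project and subscription names are hypothetical.

```python
import time
from google.cloud import monitoring_v3

project_id = "my-project"        # hypothetical
subscription_id = "orders-sub"   # hypothetical

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 600}}
)
filter_str = (
    'metric.type = "pubsub.googleapis.com/subscription/num_undelivered_messages" '
    f'AND resource.labels.subscription_id = "{subscription_id}"'
)
results = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        "filter": filter_str,
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    # Points are returned newest first; a growing value indicates a backlog.
    print("Undelivered messages:", series.points[0].value.int64_value)
```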

Question 3

You are creating a CI/CD pipeline in Cloud Build to build an application container image. The application code is stored in GitHub. Your company requires that production image builds are only run against the main branch and that the change control team approves all pushes to the main branch. You want the image build to be as automated as possible. What should you do?

Choose 2 answers

Options:

A.

Create a trigger on the Cloud Build job. Set the repository event setting to 'Pull request'.

B.

Add the owners file to the Included files filter on the trigger

C.

Create a trigger on the Cloud Build job. Set the repository event setting to 'Push to a branch'.

D.

Configure a branch protection rule for the main branch on the repository

E.

Enable the Approval option on the trigger

Question 4

You are the Site Reliability Engineer responsible for managing your company's data services and products. You regularly navigate operational challenges, such as unpredictable data volume and high cost, with your company's data ingestion processes. You recently learned that a new data ingestion product will be developed in Google Cloud. You need to collaborate with the product development team to provide operational input on the new product. What should you do?

Options:

A.

Deploy the prototype product in a test environment, run a load test, and share the results with the product development team.

B.

When the initial product version passes the quality assurance phase and compliance assessments, deploy the product to a staging environment. Share error logs and performance metrics with the product development team.

C.

When the new product is used by at least one internal customer in production, share error logs and monitoring metrics with the product development team.

D.

Review the design of the product with the product development team to provide feedback early in the design phase.

Question 5

You are using Terraform to manage infrastructure as code within a CI/CD pipeline. You notice that multiple copies of the entire infrastructure stack exist in your Google Cloud project, and a new copy is created each time a change to the existing infrastructure is made. You need to optimize your cloud spend by ensuring that only a single instance of your infrastructure stack exists at a time. You want to follow Google-recommended practices. What should you do?

Options:

A.

Create a new pipeline to delete old infrastructure stacks when they are no longer needed

B.

Confirm that the pipeline is storing and retrieving the terraform.tfstate file from Cloud Storage with the Terraform gcs backend.

C.

Verify that the pipeline is storing and retrieving the terraform.tfstate file from source control.

D.

Update the pipeline to remove any existing infrastructure before you apply the latest configuration

Question 6

You support a stateless web-based API that is deployed on a single Compute Engine instance in the europe-west2-a zone. The Service Level Indicator (SLI) for service availability is below the specified Service Level Objective (SLO). A postmortem has revealed that requests to the API regularly time out. The timeouts are due to the API receiving a high number of requests and running out of memory. You want to improve service availability. What should you do?

Options:

A.

Change the specified SLO to match the measured SLI.

B.

Move the service to higher-specification compute instances with more memory.

C.

Set up additional service instances in other zones and load balance the traffic between all instances.

D.

Set up additional service instances in other zones and use them as a failover in case the primary instance is unavailable.

Question 7

You are deploying an application to Cloud Run. The application requires a password to start. Your organization requires that all passwords are rotated every 24 hours, and your application must have the latest password. You need to deploy the application with no downtime. What should you do?

Options:

A.

Store the password in Secret Manager and send the secret to the application by using environment variables.

B.

Store the password in Secret Manager and mount the secret as a volume within the application.

C.

Use Cloud Build to add your password into the application container at build time. Ensure that Artifact Registry is secured from public access.

D.

Store the password directly in the code. Use Cloud Build to rebuild and deploy the application each time the password changes.
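
To illustrate the mechanics behind option B, here is a minimal sketch of an application reading a Secret Manager secret that has been mounted as a volume, assuming a hypothetical mount path of /secrets/db-password. Reading the file at request time lets a rotated secret version be picked up without redeploying the service.

```python
from pathlib import Path

# Hypothetical mount path configured on the Cloud Run service
# (a Secret Manager secret mounted as a volume under /secrets).
SECRET_PATH = Path("/secrets/db-password")

def get_password() -> str:
    # Read the file on each use so a rotated secret value can be picked up
    # without redeploying the service.
    return SECRET_PATH.read_text().strip()
```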

Question 8

You have migrated an e-commerce application to Google Cloud Platform (GCP). You want to prepare the application for the upcoming busy season. What should you do first to prepare for the busy season?

Options:

A.

Load test the application to profile its performance for scaling.

B.

Enable autoscaling on the production clusters, in case there is growth.

C.

Pre-provision double the compute power used last season, expecting growth.

D.

Create a runbook on inflating the disaster recovery (DR) environment if there is growth.

Question 9

You recently noticed that one of your services has exceeded the error budget for the current rolling window period. Your company's product team is about to launch a new feature. You want to follow Site Reliability Engineering (SRE) practices.

What should you do?

Options:

A.

Notify the team that their error budget is used up. Negotiate with the team for a launch freeze or tolerate a slightly worse user experience.

B.

Look through other metrics related to the product and find SLOs with remaining error budget. Reallocate the error budgets and allow the feature launch.

C.

Escalate the situation and request additional error budget.

D.

Notify the team about the lack of error budget and ensure that all their tests are successful so the launch will not further risk the error budget.

Question 10

Your team has recently deployed an NGINX-based application into Google Kubernetes Engine (GKE) and has exposed it to the public via an HTTP Google Cloud Load Balancer (GCLB) ingress. You want to scale the deployment of the application's frontend using an appropriate Service Level Indicator (SLI). What should you do?

Options:

A.

Configure the horizontal pod autoscaler to use the average response time from the Liveness and Readiness probes.

B.

Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster as pods expand.

C.

Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the number of requests provided by the GCLB.

D.

Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request metrics exposed by the NGINX deployment.

Question 11

You manage several production systems that run on Compute Engine in the same Google Cloud Platform (GCP) project. Each system has its own set of dedicated Compute Engine instances. You want to know how much it costs to run each of the systems. What should you do?

Options:

A.

In the Google Cloud Platform Console, use the Cost Breakdown section to visualize the costs per system.

B.

Assign all instances a label specific to the system they run. Configure BigQuery billing export and query costs per label.

C.

Enrich all instances with metadata specific to the system they run. Configure Stackdriver Logging to export to BigQuery, and query costs based on the metadata.

D.

Name each virtual machine (VM) after the system it runs. Set up a usage report export to a Cloud Storage bucket. Configure the bucket as a source in BigQuery to query costs based on VM name.
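
To show how the label-based approach in option B can be queried, the sketch below runs a per-label cost query against a Cloud Billing BigQuery export. The table name and the label key "system" are hypothetical placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical billing export table; the standard export schema exposes
# labels as an ARRAY<STRUCT<key, value>> alongside the cost column.
query = """
SELECT
  (SELECT l.value FROM UNNEST(labels) AS l WHERE l.key = 'system') AS system,
  SUM(cost) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
GROUP BY system
ORDER BY total_cost DESC
"""

for row in client.query(query).result():
    print(row.system, row.total_cost)
```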

Question 12

You support a high-traffic web application and want to ensure that the home page loads in a timely manner. As a first step, you decide to implement a Service Level Indicator (SLI) to represent home page request latency with an acceptable page load time set to 100 ms. What is the Google-recommended way of calculating this SLI?

Options:

A.

Bucketize the request latencies into ranges, and then compute the percentile at 100 ms.

B.

Bucketize the request latencies into ranges, and then compute the median and 90th percentiles.

C.

Count the number of home page requests that load in under 100 ms, and then divide by the total number of home page requests.

D.

Count the number of home page requests that load in under 100 ms, and then divide by the total number of all web application requests.
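
The good-events-over-total-events calculation described in option C can be expressed directly; the request latencies below are made-up sample values.

```python
# Hypothetical per-request home page latencies in milliseconds.
request_latencies_ms = [45, 80, 95, 102, 110, 60, 310, 99]

THRESHOLD_MS = 100
good_events = sum(1 for latency in request_latencies_ms if latency < THRESHOLD_MS)
total_events = len(request_latencies_ms)

# SLI expressed as the proportion of good events to total events.
sli = good_events / total_events
print(f"Home page latency SLI: {sli:.2%}")
```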

Question 13

You need to reduce the cost of virtual machines (VMs) for your organization. After reviewing different options, you decide to leverage preemptible VM instances. Which application is suitable for preemptible VMs?

Options:

A.

A scalable in-memory caching system

B.

The organization's public-facing website

C.

A distributed, eventually consistent NoSQL database cluster with sufficient quorum

D.

A GPU-accelerated video rendering platform that retrieves and stores videos in a storage bucket

Question 14

You support a user-facing web application. When analyzing the application's error budget over the previous six months, you notice that the application never consumed more than 5% of its error budget. You hold an SLO review with business stakeholders and confirm that the SLO is set appropriately. You want your application's reliability to more closely reflect its SLO. What steps can you take to further that goal while balancing velocity, reliability, and business needs?

Choose 2 answers

Options:

A.

Add more serving capacity to all of your application's zones

B.

Implement and measure all other available SLIs for the application

C.

Announce planned downtime to consume more error budget and ensure that users are not depending on a tighter SLO

D.

Have more frequent or potentially risky application releases

E.

Tighten the SLO to match the application's observed reliability

Question 15

You are developing a strategy for monitoring your Google Cloud Platform (GCP) projects in production using Stackdriver Workspaces. One of the requirements is to be able to quickly identify and react to production environment issues without false alerts from development and staging projects. You want to ensure that you adhere to the principle of least privilege when providing relevant team members with access to Stackdriver Workspaces. What should you do?

Options:

A.

Grant relevant team members read access to all GCP production projects. Create Stackdriver workspaces inside each project.

B.

Grant relevant team members the Project Viewer IAM role on all GCP production projects. Create Stackdriver workspaces inside each project.

C.

Choose an existing GCP production project to host the monitoring workspace. Attach the production projects to this workspace. Grant relevant team members read access to the Stackdriver Workspace.

D.

Create a new GCP monitoring project, and create a Stackdriver Workspace inside it. Attach the production projects to this workspace. Grant relevant team members read access to the Stackdriver Workspace.

Question 16

You are configuring the frontend tier of an application deployed in Google Cloud. The frontend tier is hosted on nginx and deployed using a managed instance group with an Envoy-based external HTTP(S) load balancer in front. The application is deployed entirely within the europe-west2 region and only serves users based in the United Kingdom. You need to choose the most cost-effective network tier and load balancing configuration. What should you use?

Options:

A.

Premium Tier with a global load balancer

B.

Premium Tier with a regional load balancer

C.

Standard Tier with a global load balancer

D.

Standard Tier with a regional load balancer

Question 17

Your organization has a containerized web application that runs on-premises. As part of the migration plan to Google Cloud, you need to select a deployment strategy and platform that meets the following acceptance criteria:

1. The platform must be able to direct traffic from Android devices to an Android-specific microservice.

2. The platform must allow for arbitrary percentage-based traffic splitting.

3. The deployment strategy must allow for continuous testing of multiple versions of any microservice.

What should you do?

Options:

A.

Deploy the canary release of the application to Cloud Run. Use traffic splitting to direct 10% of user traffic to the canary release based on the revision tag.

B.

Deploy the canary release of the application to App Engine. Use traffic splitting to direct a subset of user traffic to the new version based on the IP address.

C.

Deploy the canary release of the application to Compute Engine. Use Anthos Service Mesh with Compute Engine to direct 10% of user traffic to the canary release by configuring the virtual service.

D.

Deploy the canary release to Google Kubernetes Engine with Anthos Service Mesh. Use traffic splitting to direct 10% of user traffic to the new version based on the user-agent header configured in the virtual service.

Question 18

Your team deploys applications to three Google Kubernetes Engine (GKE) environments: development, staging, and production. You use GitHub repositories as your source of truth. You need to ensure that the three environments are consistent. You want to follow Google-recommended practices to enforce and install network policies and a logging DaemonSet on all the GKE clusters in those environments. What should you do?

Options:

A.

Use Google Cloud Deploy to deploy the network policies and the DaemonSet. Use Cloud Monitoring to trigger an alert if the network policies and DaemonSet drift from your source in the repository.

B.

Use Google Cloud Deploy to deploy the DaemonSet, and use Policy Controller to configure the network policies. Use Cloud Monitoring to detect drift from the source in the repository and Cloud Functions to correct the drift.

C.

Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Config Sync to sync the configurations for the three environments.

D.

Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Policy Controller to enforce the configurations for the three environments.

Question 19

Your company wants to implement a CD pipeline in Cloud Deploy for a web service deployed to GKE. The web service currently does not have any automated testing. The Quality Assurance team must manually verify any new releases of the web service before any production traffic is processed. You need to design the CD pipeline. What should you do?

Options:

A.

Create two pipeline stages, and use a canary deployment strategy.

B.

Create a single pipeline stage, and use a standard deployment strategy.

C.

Create a single pipeline stage, and use a canary deployment strategy.

D.

Create two pipeline stages, and use a standard deployment strategy.

Question 20

Your company stores a large volume of infrequently used data in Cloud Storage. The projects in your company's CustomerService folder access Cloud Storage frequently, but store very little data. You want to enable Data Access audit logging across the company to identify data usage patterns. You need to exclude the CustomerService folder projects from Data Access audit logging. What should you do?

Options:

A.

Enable Data Access audit logging for Cloud Storage for all projects and folders, and configure exempted principals to include users of the CustomerService folder.

B.

Enable Data Access audit logging for Cloud Storage at the organization level, with no additional configuration.

C.

Enable Data Access audit logging for Cloud Storage at the organization level, and configure exempted principals to include users of the CustomerService folder.

D.

Enable Data Access audit logging for Cloud Storage for all projects and folders other than the CustomerService folder.

Question 21

You recently created a Cloud Build pipeline for deploying Terraform code stored in a GitHub repository. You make Terraform code changes in short-lived branches and sometimes use tags during development. You tag releases with a semantic version when they are ready for deployment. You require your pipeline to apply the Terraform code whenever there is a new release, and you need to minimize operational overhead. What should you do?

Options:

A.

Create a build trigger with the * branch pattern.

B.

Create a build trigger with the \d+\.\d+\.\d* tag pattern.

C.

Create a build trigger with the .* tag pattern.

D.

Create a build trigger with the \d*\.\d+\.\d* branch pattern.
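
To see how a semantic-version tag pattern behaves, the short demo below applies the regular expression from option B to a few example refs using Python's re module for illustration; the example tag and branch names are invented.

```python
import re

# Tag pattern from option B, interpreted here with Python's re module.
pattern = re.compile(r"\d+\.\d+\.\d*")

candidates = ["1.4.2", "10.0.0", "2.0", "feature/login-fix"]
for ref in candidates:
    verdict = "matches" if pattern.search(ref) else "does not match"
    print(f"{ref}: {verdict}")
# Release tags such as 1.4.2 match, while short-lived branch names do not,
# so only tagged releases would start a build.
```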

Question 22

You are configuring a CI pipeline. The build step for your CI pipeline integration testing requires access to APIs inside your private VPC network. Your security team requires that you do not expose API traffic publicly. You need to implement a solution that minimizes management overhead. What should you do?

Options:

A.

Use Cloud Build private pools to connect to the private VPC.

B.

Use Cloud Build to create a Compute Engine instance in the private VPC. Run the integration tests on the VM by using a startup script.

C.

Use Cloud Build as a pipeline runner. Configure a cross-region internal Application Load Balancer for API access.

D.

Use Cloud Build as a pipeline runner. Configure a global external Application Load Balancer with a Google Cloud Armor policy for API access.

Question 23

You need to introduce postmortems into your organization during the holiday shopping season. You are expecting your web application to receive a large volume of traffic in a short period. You need to prepare your application for potential failures during the event. What should you do?

Choose 2 answers

Options:

A.

Monitor latency of your services for average percentile latency.

B.

Review your increased capacity requirements and plan for the required quota management.

C.

Create alerts in Cloud Monitoring for all common failures that your application experiences.

D.

Ensure that relevant system metrics are being captured with Cloud Monitoring and create alerts at levels of interest.

E.

Configure Anthos Service Mesh on the application to identify issues on the topology map.

Question 24

Your application’s performance in Google Cloud has degraded since the last release. You suspect that downstream dependencies might be causing some requests to take longer to complete. You need to investigate the issue with your application to determine the cause. What should you do?

Options:

A.

Configure Cloud Trace in your application.

B.

Configure Error Reporting in your application.

C.

Configure Cloud Profiler in your application.

D.

Configure Google Cloud Managed Service for Prometheus in your application.
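
As an illustration of instrumenting an application for Cloud Trace, here is a minimal sketch using the OpenTelemetry SDK with the Google Cloud Trace exporter. The package choice and span names are assumptions for the example, not something specified in the question.

```python
# Requires: pip install opentelemetry-sdk opentelemetry-exporter-gcp-trace
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request():
    # Each downstream call gets its own span, so a slow dependency shows up
    # as a long child span in the Cloud Trace waterfall view.
    with tracer.start_as_current_span("call-downstream-service"):
        pass  # call the dependency here
```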

Question 25

You use Cloud Build to build and deploy your application. You want to securely incorporate database credentials and other application secrets into the build pipeline. You also want to minimize the development effort. What should you do?

Options:

A.

Create a Cloud Storage bucket and use the built-in encryption at rest. Store the secrets in the bucket and grant Cloud Build access to the bucket.

B.

Encrypt the secrets and store them in the application repository. Store a decryption key in a separate repository and grant Cloud Build access to the repository.

C.

Use client-side encryption to encrypt the secrets and store them in a Cloud Storage bucket. Store a decryption key in the bucket and grant Cloud Build access to the bucket.

D.

Use Cloud Key Management Service (Cloud KMS) to encrypt the secrets and include them in your Cloud Build deployment configuration. Grant Cloud Build access to the KeyRing.
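
To illustrate the mechanics behind option D, the sketch below encrypts and decrypts a secret with Cloud KMS from Python. The project, key ring, and key names are hypothetical.

```python
# Requires: pip install google-cloud-kms
from google.cloud import kms

client = kms.KeyManagementServiceClient()

# Hypothetical key resource names.
key_name = client.crypto_key_path("my-project", "global", "build-keyring", "build-key")

# Encrypt the secret so only the ciphertext is checked into the build config.
encrypt_response = client.encrypt(request={"name": key_name, "plaintext": b"db-password"})
ciphertext = encrypt_response.ciphertext

# At build time, a step with access to the key ring can decrypt it.
decrypt_response = client.decrypt(request={"name": key_name, "ciphertext": ciphertext})
print(decrypt_response.plaintext.decode("utf-8"))
```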

Question 26

You use Spinnaker to deploy your application and have created a canary deployment stage in the pipeline. Your application has an in-memory cache that loads objects at start time. You want to automate the comparison of the canary version against the production version. How should you configure the canary analysis?

Options:

A.

Compare the canary with a new deployment of the current production version.

B.

Compare the canary with a new deployment of the previous production version.

C.

Compare the canary with the existing deployment of the current production version.

D.

Compare the canary with the average performance of a sliding window of previous production versions.

Question 27

You are managing an application that exposes an HTTP endpoint without using a load balancer. The latency of the HTTP responses is important for the user experience. You want to understand what HTTP latencies all of your users are experiencing. You use Stackdriver Monitoring. What should you do?

Options:

A.

• In your application, create a metric with a metricKind set to DELTA and a valueType set to DOUBLE. • In Stackdriver's Metrics Explorer, use a Stacked Bar graph to visualize the metric.

B.

• In your application, create a metric with a metricKind set to CUMULATIVE and a valueType set to DOUBLE. • In Stackdriver's Metrics Explorer, use a Line graph to visualize the metric.

C.

• In your application, create a metric with a metricKind set to GAUGE and a valueType set to DISTRIBUTION. • In Stackdriver's Metrics Explorer, use a Heatmap graph to visualize the metric.

D.

• In your application, create a metric with a metricKind set to METRIC_KIND_UNSPECIFIED and a valueType set to INT64. • In Stackdriver's Metrics Explorer, use a Stacked Area graph to visualize the metric.
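
For reference, creating a custom metric descriptor with a GAUGE kind and DISTRIBUTION value type (as described in option C) might look like the sketch below; the metric type and project ID are placeholders.

```python
# Requires: pip install google-cloud-monitoring
from google.api import metric_pb2 as ga_metric
from google.cloud import monitoring_v3

project_id = "my-project"  # hypothetical project ID
client = monitoring_v3.MetricServiceClient()

descriptor = ga_metric.MetricDescriptor()
descriptor.type = "custom.googleapis.com/http/response_latency"  # illustrative name
descriptor.metric_kind = ga_metric.MetricDescriptor.MetricKind.GAUGE
descriptor.value_type = ga_metric.MetricDescriptor.ValueType.DISTRIBUTION
descriptor.unit = "ms"
descriptor.description = "Distribution of HTTP response latencies."

created = client.create_metric_descriptor(
    name=f"projects/{project_id}", metric_descriptor=descriptor
)
print("Created", created.name)
```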

Question 28

You support a Node.js application running on Google Kubernetes Engine (GKE) in production. The application makes several HTTP requests to dependent applications. You want to anticipate which dependent applications might cause performance issues. What should you do?

Options:

A.

Instrument all applications with Stackdriver Profiler.

B.

Instrument all applications with Stackdriver Trace and review inter-service HTTP requests.

C.

Use Stackdriver Debugger to review the execution of logic within each application to instrument all applications.

D.

Modify the Node.js application to log HTTP request and response times to dependent applications. Use Stackdriver Logging to find dependent applications that are performing poorly.

Question 29

You need to run a business-critical workload on a fixed set of Compute Engine instances for several months. The workload is stable with the exact amount of resources allocated to it. You want to lower the costs for this workload without any performance implications. What should you do?

Options:

A.

Purchase Committed Use Discounts.

B.

Migrate the instances to a Managed Instance Group.

C.

Convert the instances to preemptible virtual machines.

D.

Create an Unmanaged Instance Group for the instances used to run the workload.

Question 30

You are running a real-time gaming application on Compute Engine that has a production and testing environment. Each environment has its own Virtual Private Cloud (VPC) network. The application frontend and backend servers are located on different subnets in the environment's VPC. You suspect there is a malicious process communicating intermittently in your production frontend servers. You want to ensure that network traffic is captured for analysis. What should you do?

Options:

A.

Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a sample volume scale of 0.5.

B.

Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a sample volume scale of 1.0.

C.

Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets with a volume scale of 0.5. Apply changes in testing before production.

D.

Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets with a volume scale of 1.0. Apply changes in testing before production.

Question 31

Your Cloud Run application writes unstructured logs as text strings to Cloud Logging. You want to convert the unstructured logs to JSON-based structured logs. What should you do?

Options:

A.

Install a Fluent Bit sidecar container, and use a JSON parser.

B.

Install the log agent in the Cloud Run container image, and use the log agent to forward logs to Cloud Logging.

C.

Configure the log agent to convert log text payload to JSON payload.

D.

Modify the application to use the Cloud Logging software development kit (SDK), and send log entries with a jsonPayload field.
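
One common way to produce structured logs from a Cloud Run service, related to option D, is to write single-line JSON to stdout, which Cloud Run's logging agent parses into the entry's jsonPayload. A minimal sketch with illustrative field names:

```python
import json
import sys

def log_structured(message: str, severity: str = "INFO", **fields):
    # Single-line JSON written to stdout becomes structured (jsonPayload) in
    # Cloud Logging; "severity" is promoted to the log entry's severity.
    entry = {"severity": severity, "message": message, **fields}
    print(json.dumps(entry), file=sys.stdout, flush=True)

log_structured("Order processed", severity="INFO", order_id="12345", latency_ms=87)
```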

Question 32

Your company follows Site Reliability Engineering practices. You are the Incident Commander for a new, customer-impacting incident. You need to immediately assign two incident management roles to assist you in an effective incident response. What roles should you assign?

Choose 2 answers

Options:

A.

Operations Lead

B.

Engineering Lead

C.

Communications Lead

D.

Customer Impact Assessor

E.

External Customer Communications Lead

Question 33

You are troubleshooting a failed deployment in your CI/CD pipeline. The deployment logs indicate that the application container failed to start due to a missing environment variable. You need to identify the root cause and implement a solution within your CI/CD workflow to prevent this issue from recurring. What should you do?

Options:

A.

Run integration tests in the CI pipeline.

B.

Implement static code analysis in the CI pipeline.

C.

Use a canary deployment strategy.

D.

Enable Cloud Audit Logs for the deployment.

Question 34

You encountered a major service outage that affected all users of the service for multiple hours. After several hours of incident management, the service returned to normal, and user access was restored. You need to provide an incident summary to relevant stakeholders following the Site Reliability Engineering recommended practices. What should you do first?

Options:

A.

Call individual stakeholders to explain what happened.

B.

Develop a post-mortem to be distributed to stakeholders.

C.

Send the Incident State Document to all the stakeholders.

D.

Require the engineer responsible to write an apology email to all stakeholders.

Question 35

You are creating a CI/CD pipeline to perform Terraform deployments of Google Cloud resources. Your CI/CD tooling is running in Google Kubernetes Engine (GKE) and uses an ephemeral Pod for each pipeline run. You must ensure that the pipelines that run in the Pods have the appropriate Identity and Access Management (IAM) permissions to perform the Terraform deployments. You want to follow Google-recommended practices for identity management. What should you do?

Choose 2 answers

Options:

A.

Create a new Kubernetes service account, and assign the service account to the Pods. Use Workload Identity to authenticate as the Google service account.

B.

Create a new JSON service account key for the Google service account, store the key as a Kubernetes Secret, inject the key into the Pods, and set the GOOGLE_APPLICATION_CREDENTIALS environment variable.

C.

Create a new Google service account, and assign the appropriate IAM permissions.

D.

Create a new JSON service account key for the Google service account, store the key in the secret management store for the CI/CD tool, and configure Terraform to use this key for authentication.

E.

Assign the appropriate IAM permissions to the Google service account associated with the Compute Engine VM instances that run the Pods.
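
To show what option A looks like from inside the pipeline Pod, the sketch below obtains Application Default Credentials; with Workload Identity configured, these resolve to the bound Google service account without any JSON key file. The printed attributes are illustrative.

```python
import google.auth
from google.auth.transport.requests import Request

# With Workload Identity, Application Default Credentials resolve to the
# Google service account bound to the Pod's Kubernetes service account,
# so no exported key file is needed inside the pipeline Pod.
credentials, project_id = google.auth.default()
credentials.refresh(Request())
identity = getattr(credentials, "service_account_email", "ADC identity")
print(f"Authenticated to project {project_id} as {identity}")
```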

Question 36

You are performing a semi-annual capacity planning exercise for your flagship service. You expect a service user growth rate of 10% month-over-month for the next six months. Your service is fully containerized and runs on a Google Kubernetes Engine (GKE) standard cluster across three zones with cluster autoscaling enabled. You currently consume about 30% of your total deployed CPU capacity, and you require resilience against the failure of a zone. You want to ensure that your users experience minimal negative impact as a result of this growth or as a result of zone failure, while you avoid unnecessary costs. How should you prepare to handle the predicted growth?

Options:

A.

Verify the maximum node pool size, enable a Horizontal Pod Autoscaler, and then perform a load test to verify your expected resource needs.

B.

Because you deployed the service on GKE and are using a cluster autoscaler, your GKE cluster will scale automatically regardless of growth rate.

C.

Because you are only using 30% of deployed CPU capacity, there is significant headroom, and you do not need to add any additional capacity for this rate of growth.

D.

Proactively add 80% more node capacity to account for six months of 10% growth, and then perform a load test to ensure that you have enough capacity.
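
A quick back-of-the-envelope calculation helps frame these options; the figures come from the question itself (30% utilization, 10% monthly growth, three zones).

```python
current_utilization = 0.30   # share of deployed CPU capacity in use today
monthly_growth = 1.10        # 10% month-over-month
months = 6

projected = current_utilization * monthly_growth ** months   # ~0.53 after six months
# If one of the three zones fails, the two surviving zones absorb all traffic:
after_zone_loss = projected * 3 / 2                          # ~0.80
print(f"Projected utilization: {projected:.0%}, after losing a zone: {after_zone_loss:.0%}")
```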

Question 37

You encounter a large number of outages in the production systems you support. You receive alerts for all the outages that wake you up at night. The alerts are due to unhealthy systems that are automatically restarted within a minute. You want to set up a process that would prevent staff burnout while following Site Reliability Engineering practices. What should you do?

Options:

A.

Eliminate unactionable alerts.

B.

Create an incident report for each of the alerts.

C.

Distribute the alerts to engineers in different time zones.

D.

Redefine the related Service Level Objective so that the error budget is not exhausted.

Question 38

You are running an application on Compute Engine and collecting logs through Stackdriver. You discover that some personally identifiable information (PII) is leaking into certain log entry fields. You want to prevent these fields from being written in new log entries as quickly as possible. What should you do?

Options:

A.

Use the filter-record-transformer Fluentd filter plugin to remove the fields from the log entries in flight.

B.

Use the fluent-plugin-record-reformer Fluentd output plugin to remove the fields from the log entries in flight.

C.

Wait for the application developers to patch the application, and then verify that the log entries are no longer exposing PII.

D.

Stage log entries to Cloud Storage, and then trigger a Cloud Function to remove the fields and write the entries to Stackdriver via the Stackdriver Logging API.

Question 39

You recently noticed that one of your services has exceeded the error budget for the current rolling window period. Your company’s product team is about to launch a new feature. You want to follow Site Reliability Engineering (SRE) practices. What should you do?

Options:

A.

Notify the team about the lack of error budget and ensure that all their tests are successful so the launch will not further risk the error budget.

B.

Notify the team that their error budget is used up. Negotiate with the team for a launch freeze or tolerate a slightly worse user experience.

C.

Escalate the situation and request additional error budget.

D.

Look through other metrics related to the product and find SLOs with remaining error budget. Reallocate the error budgets and allow the feature launch.

Question 40

Your team uses Cloud Build for all CI/CD pipelines. You want to use the kubectl builder for Cloud Build to deploy new images to Google Kubernetes Engine (GKE). You need to authenticate to GKE while minimizing development effort. What should you do?

Options:

A.

Assign the Container Developer role to the Cloud Build service account.

B.

Specify the Container Developer role for Cloud Build in the cloudbuild.yaml file.

C.

Create a new service account with the Container Developer role and use it to run Cloud Build.

D.

Create a separate step in Cloud Build to retrieve service account credentials and pass these to kubectl.

Question 41

You have a pool of application servers running on Compute Engine. You need to provide a secure solution that requires the least amount of configuration and allows developers to easily access application logs for troubleshooting. How would you implement the solution on GCP?

Options:

A.

• Deploy the Stackdriver logging agent to the application servers. • Give the developers the IAM Logs Viewer role to access Stackdriver and view logs.

B.

• Deploy the Stackdriver logging agent to the application servers. • Give the developers the IAM Private Logs Viewer role to access Stackdriver and view logs.

C.

• Deploy the Stackdriver monitoring agent to the application servers. • Give the developers the IAM Monitoring Viewer role to access Stackdriver and view metrics.

D.

• Install the gsutil command line tool on your application servers. • Write a script using gsutil to upload your application logs to a Cloud Storage bucket, and then schedule it to run via cron every 5 minutes. • Give the developers IAM Object Viewer access to view the logs in the specified bucket.

Question 42

You are running an experiment to see whether your users like a new feature of a web application. Shortly after deploying the feature as a canary release, you receive a spike in the number of 500 errors sent to users, and your monitoring reports show increased latency. You want to quickly minimize the negative impact on users. What should you do first?

Options:

A.

Roll back the experimental canary release.

B.

Start monitoring latency, traffic, errors, and saturation.

C.

Record data for the postmortem document of the incident.

D.

Trace the origin of 500 errors and the root cause of increased latency.

Question 43

You are running a web application that connects to an AlloyDB cluster by using a private IP address in your default VPC. You need to run a database schema migration in your CI/CD pipeline by using Cloud Build before deploying a new version of your application. You want to follow Google-recommended security practices. What should you do?  

Options:

A.

Set up a Cloud Build private pool to access the database through a static external IP address. Configure the database to only allow connections from this IP address. Execute the schema migration script in the private pool.

B.

Create a service account that has permission to access the database. Configure Cloud Build to use this service account and execute the schema migration script in a private pool.

C.

Add the database username and encrypted password to the application configuration file. Use these credentials in Cloud Build to execute the schema migration script.

D.

Add the database username and password to Secret Manager. When running the schema migration script, retrieve the username and password from Secret Manager.
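
To illustrate the mechanics of option D (without implying it is the keyed answer), a migration step could fetch the credentials from Secret Manager as sketched below; the project and secret names are hypothetical.

```python
# Requires: pip install google-cloud-secret-manager
from google.cloud import secretmanager

project_id = "my-project"          # hypothetical
secret_id = "alloydb-password"     # hypothetical

client = secretmanager.SecretManagerServiceClient()
name = f"projects/{project_id}/secrets/{secret_id}/versions/latest"
response = client.access_secret_version(request={"name": name})

# Use the decoded value to connect to AlloyDB and run the schema migration.
db_password = response.payload.data.decode("UTF-8")
```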

Question 44

You are monitoring a service that uses n2-standard-2 Compute Engine instances that serve large files. Users have reported that downloads are slow. Your Cloud Monitoring dashboard shows that your VMs are running at peak network throughput. You want to improve the network throughput performance. What should you do?

Options:

A.

Deploy a Cloud NAT gateway and attach the gateway to the subnet of the VMs.

B.

Add additional network interface controllers (NICs) to your VMs.

C.

Change the machine type for your VMs to n2-standard-8.

D.

Deploy the Ops Agent to export additional monitoring metrics.

Question 45

You are the Operations Lead for an ongoing incident with one of your services. The service usually runs at around 70% capacity. You notice that one node is returning 5xx errors for all requests. There has also been a noticeable increase in support cases from customers. You need to remove the offending node from the load balancer pool so that you can isolate and investigate the node. You want to follow Google-recommended practices to manage the incident and reduce the impact on users. What should you do?

Options:

A.

1. Communicate your intent to the incident team. 2. Perform a load analysis to determine if the remaining nodes can handle the increase in traffic offloaded from the removed node, and scale appropriately. 3. When any new nodes report healthy, drain traffic from the unhealthy node, and remove the unhealthy node from service.

B.

1. Communicate your intent to the incident team. 2. Add a new node to the pool, and wait for the new node to report as healthy. 3. When traffic is being served on the new node, drain traffic from the unhealthy node, and remove the old node from service.

C.

1. Drain traffic from the unhealthy node and remove the node from service. 2. Monitor traffic to ensure that the error is resolved and that the other nodes in the pool are handling the traffic appropriately. 3. Scale the pool as necessary to handle the new load. 4. Communicate your actions to the incident team.

D.

1. Drain traffic from the unhealthy node and remove the old node from service. 2. Add a new node to the pool, wait for the new node to report as healthy, and then serve traffic to the new node. 3. Monitor traffic to ensure that the pool is healthy and is handling traffic appropriately. 4. Communicate your actions to the incident team.

Question 46

You created a Stackdriver chart for CPU utilization in a dashboard within your workspace project. You want to share the chart with your Site Reliability Engineering (SRE) team only. You want to ensure you follow the principle of least privilege. What should you do?

Options:

A.

Share the workspace Project ID with the SRE team. Assign the SRE team the Monitoring Viewer IAM role in the workspace project.

B.

Share the workspace Project ID with the SRE team. Assign the SRE team the Dashboard Viewer IAM role in the workspace project.

C.

Click "Share chart by URL" and provide the URL to the SRE team. Assign the SRE team the Monitoring Viewer IAM role in the workspace project.

D.

Click "Share chart by URL" and provide the URL to the SRE team. Assign the SRE team the Dashboard Viewer IAM role in the workspace project.

Question 47

You have an application that runs on Cloud Run. You want to use live production traffic to test a new version of the application while you let the quality assurance team perform manual testing. You want to limit the potential impact of any issues while testing the new version, and you must be able to roll back to a previous version of the application if needed. How should you deploy the new version?

Choose 2 answers

Options:

A.

Deploy the application as a new Cloud Run service.

B.

Deploy a new Cloud Run revision with a tag and use the --no-traffic option.

C.

Deploy a new Cloud Run revision without a tag and use the --no-traffic option.

D.

Deploy the new application version and use the --no-traffic option. Route production traffic to the revision's URL.

E.

Deploy the new application version and split traffic to the new version.

Question 48

Your organization recently adopted a container-based workflow for application development. Your team develops numerous applications that are deployed continuously through an automated build pipeline to a Kubernetes cluster in the production environment. The security auditor is concerned that developers or operators could circumvent automated testing and push code changes to production without approval. What should you do to enforce approvals?

Options:

A.

Configure the build system with protected branches that require pull request approval.

B.

Use an Admission Controller to verify that incoming requests originate from approved sources.

C.

Leverage Kubernetes Role-Based Access Control (RBAC) to restrict access to only approved users.

D.

Enable binary authorization inside the Kubernetes cluster and configure the build pipeline as an attestor.

Question 49

You support a web application that runs on App Engine and uses Cloud SQL and Cloud Storage for data storage. After a short spike in website traffic, you notice a big increase in latency for all user requests, an increase in CPU use, and an increase in the number of processes running the application. Initial troubleshooting reveals:

After the initial spike in traffic, load levels returned to normal but users still experience high latency.

Requests for content from the Cloud SQL database and images from Cloud Storage show the same high latency.

No changes were made to the website around the time the latency increased.

There is no increase in the number of errors to the users.

You expect another spike in website traffic in the coming days and want to make sure users don’t experience latency. What should you do?

Options:

A.

Upgrade the GCS buckets to Multi-Regional.

B.

Enable high availability on the Cloud SQL instances.

C.

Move the application from App Engine to Compute Engine.

D.

Modify the App Engine configuration to have additional idle instances.

Question 50

You are investigating issues in your production application that runs on Google Kubernetes Engine (GKE). You determined that the source of the issue is a recently updated container image, although the exact change in code was not identified. The deployment is currently pointing to the latest tag. You need to update your cluster to run a version of the container that functions as intended. What should you do?

Options:

A.

Create a new tag called stable that points to the previously working container, and change the deployment to point to the new tag.

B.

Apply the latest tag to the previous container image, and do a rolling update on the deployment.

C.

Build a new container from a previous Git tag, and do a rolling update on the deployment to the new container.

D.

Alter the deployment to point to the sha256 digest of the previously working container.
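
To illustrate option D, the sketch below patches a Deployment so its container image is pinned to an immutable digest. It uses the Kubernetes Python client; the deployment name, container name, and digest value are placeholders.

```python
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
apps = client.AppsV1Api()

# Hypothetical deployment, container, and image digest.
image_by_digest = "gcr.io/my-project/my-app@sha256:<previously-working-digest>"
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": "my-app", "image": image_by_digest}]
            }
        }
    }
}
# Strategic-merge patch: matching the container by name swaps only its image,
# which triggers a rolling update to the pinned, immutable digest.
apps.patch_namespaced_deployment(name="my-app", namespace="default", body=patch)
```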

Question 51

You have a CI/CD pipeline that uses Cloud Build to build new Docker images and push them to Docker Hub. You use Git for code versioning. After making a change in the Cloud Build YAML configuration, you notice that no new artifacts are being built by the pipeline. You need to resolve the issue following Site Reliability Engineering practices. What should you do?

Options:

A.

Disable the CI pipeline and revert to manually building and pushing the artifacts.

B.

Change the CI pipeline to push the artifacts to Container Registry instead of Docker Hub.

C.

Upload the configuration YAML file to Cloud Storage and use Error Reporting to identify and fix the issue.

D.

Run a Git compare between the previous and current Cloud Build Configuration files to find and fix the bug.

Question 52

You are designing a system with three different environments: development, quality assurance (QA), and production.

Each environment will be deployed with Terraform and has a Google Kubernetes Engine (GKE) cluster created so that application teams can deploy their applications. Anthos Config Management will be used and templated to deploy

infrastructure-level resources in each GKE cluster. All users (for example, infrastructure operators and application owners) will use GitOps. How should you structure your source control repositories for both Infrastructure as Code (IaC) and application code?

Options:

A.

Cloud Infrastructure (Terraform) repository is shared: different directories are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments. Application (app source code) repositories are separated: different branches are different features.

B.

Cloud Infrastructure (Terraform) repository is shared: different directories are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different branches are different environments. Application (app source code) repositories are separated: different branches are different features.

C.

Cloud Infrastructure (Terraform) repository is shared: different branches are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments. Application (app source code) repository is shared: different directories are different features.

D.

Cloud Infrastructure (Terraform) repositories are separated: different branches are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different overlay directories are different environments. Application (app source code) repositories are separated: different branches are different features.

Question 53

You deploy a new release of an internal application during a weekend maintenance window when there is minimal user traffic. After the window ends, you learn that one of the new features isn't working as expected in the production environment. After an extended outage, you roll back the new release and deploy a fix. You want to modify your release process to reduce the mean time to recovery so you can avoid extended outages in the future. What should you do?

Choose 2 answers

Options:

A.

Before merging new code, require 2 different peers to review the code changes.

B.

Adopt the blue/green deployment strategy when releasing new code via a CD server.

C.

Integrate a code linting tool to validate coding standards before any code is accepted into the repository.

D.

Require developers to run automated integration tests on their local development environments before release.

E.

Configure a CI server. Add a suite of unit tests to your code, and have your CI server run them on commit and verify any changes.

Question 54

You need to enforce several constraint templates across your Google Kubernetes Engine (GKE) clusters. The constraints include policy parameters, such as restricting the Kubernetes API. You must ensure that the policy parameters are stored in a GitHub repository and automatically applied when changes occur. What should you do?

Options:

A.

Set up a GitHub action to trigger Cloud Build when there is a parameter change. In Cloud Build, run a gcloud CLI command to apply the change.

B.

When there is a change in GitHub, use a web hook to send a request to Anthos Service Mesh, and apply the change.

C.

Configure Anthos Config Management with the GitHub repository. When there is a change in the repository, use Anthos Config Management to apply the change.

D.

Configure Config Connector with the GitHub repository. When there is a change in the repository, use Config Connector to apply the change.

Question 55

Your application images are built and pushed to Google Container Registry (GCR). You want to build an automated pipeline that deploys the application when the image is updated while minimizing the development effort. What should you do?

Options:

A.

Use Cloud Build to trigger a Spinnaker pipeline.

B.

Use Cloud Pub/Sub to trigger a Spinnaker pipeline.

C.

Use a custom builder in Cloud Build to trigger a Jenkins pipeline.

D.

Use Cloud Pub/Sub to trigger a custom deployment service running in Google Kubernetes Engine (GKE).

Question 56

You are designing a deployment technique for your applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for new versions of your applications. You need to test against the full production load before your applications are launched. What should you do?

Options:

A.

Use A/B testing with blue/green deployment.

B.

Use shadow testing with continuous deployment.

C.

Use canary testing with continuous deployment.

D.

Use canary testing with a rolling update deployment.

Question 57

You manage an application that is writing logs to Stackdriver Logging. You need to give some team members the ability to export logs. What should you do?

Options:

A.

Grant the team members the IAM role of logging.configWriter on Cloud IAM.

B.

Configure Access Context Manager to allow only these members to export logs.

C.

Create and grant a custom IAM role with the permissions logging.sinks.list and logging.sinks.get.

D.

Create an Organizational Policy in Cloud IAM to allow only these members to create log exports.

Question 58

Your company runs applications in Google Kubernetes Engine (GKE) that are deployed following a GitOps methodology.

Application developers frequently create cloud resources to support their applications. You want to give developers the ability to manage infrastructure as code, while ensuring that you follow Google-recommended practices. You need to ensure that infrastructure as code reconciles periodically to avoid configuration drift. What should you do?

Options:

A.

Install and configure Config Connector in Google Kubernetes Engine (GKE).

B.

Configure Cloud Build with a Terraform builder to execute plan and apply commands.

C.

Create a Pod resource with a Terraform docker image to execute terraform plan and terraform apply commands.

D.

Create a Job resource with a Terraform docker image to execute terraform plan and terraform apply commands.