
Google Professional-Cloud-Developer Google Certified Professional - Cloud Developer Exam Practice Test

Google Certified Professional - Cloud Developer Questions and Answers

Question 1

Your company's development teams want to use Cloud Build in their projects to build and push Docker images to Container Registry. The operations team requires all Docker images to be published to a centralized, securely managed Docker registry that the operations team manages.

What should you do?

Options:

A.

Use Container Registry to create a registry in each development team's project. Configure the Cloud Build build to push the Docker image to the project's registry. Grant the operations team access to each development team's registry.

B.

Create a separate project for the operations team that has Container Registry configured. Assign appropriate permissions to the Cloud Build service account in each development team's project to allow access to the operations team's registry.

C.

Create a separate project for the operations team that has Container Registry configured. Create a service account for each development team and assign the appropriate permissions to allow it access to the operations team's registry. Store the service account key file in the source code repository and use it to authenticate against the operations team's registry.

D.

Create a separate project for the operations team that has the open source Docker Registry deployed on a Compute Engine virtual machine instance. Create a username and password for each development team. Store the username and password in the source code repository and use it to authenticate against the operations team's Docker registry.

Question 2

You have deployed an HTTP(s) Load Balancer with the gcloud commands shown below.

Health checks to port 80 on the Compute Engine virtual machine instance are failing and no traffic is sent to your instances. You want to resolve the problem.

Which commands should you run?

Options:

A.

gcloud compute instances add-access-config ${NAME}-backend-instance-1

B.

gcloud compute instances add-tags ${NAME}-backend-instance-1 --tags http-server

C.

gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --source-ranges 130.211.0.0/22,35.191.0.0/16 --direction INGRESS

D.

gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --destination-ranges 130.211.0.0/22,35.191.0.0/16 --direction EGRESS

Question 3

You support an application that uses the Cloud Storage API. You review the logs and discover multiple HTTP 503 Service Unavailable error responses from the API. Your application logs the error and does not take any further action. You want to implement Google-recommended retry logic to improve success rates. Which approach should you take?

Options:

A.

Retry the failures in batch after a set number of failures is logged.

B.

Retry each failure at a set time interval up to a maximum number of times.

C.

Retry each failure at increasing time intervals up to a maximum number of tries.

D.

Retry each failure at decreasing time intervals up to a maximum number of tries.
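Option C describes truncated exponential backoff, the pattern Google recommends for retrying transient errors such as HTTP 503. A minimal Python sketch, assuming a hypothetical `call_with_backoff` wrapper around a stand-in `operation` callable (not any real client-library API):

```python
import random
import time

def call_with_backoff(operation, max_tries=5, base_delay=1.0, max_delay=32.0):
    """Retry `operation` with truncated exponential backoff and full jitter.

    `operation` is any callable that raises on a transient failure
    (for example, an HTTP 503 from the Cloud Storage API).
    """
    for attempt in range(max_tries):
        try:
            return operation()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of retries; surface the last error
            # The delay doubles each attempt and is capped at max_delay;
            # random jitter keeps many clients from retrying in lockstep.
            delay = min(base_delay * 2 ** attempt, max_delay)
            time.sleep(random.uniform(0, delay))
```

In practice the official Cloud Storage client libraries implement this retry policy for you; the sketch only illustrates why the intervals increase rather than stay fixed or shrink.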

Question 4

You are building a highly available and globally accessible application that will serve static content to users. You need to configure the storage and serving components. You want to minimize management overhead and latency while maximizing reliability for users. What should you do?

Options:

A.

1) Create a managed instance group. Replicate the static content across the virtual machines (VMs)

2) Create an external HTTP(S) load balancer.

3) Enable Cloud CDN, and send traffic to the managed instance group.

B.

1) Create an unmanaged instance group. Replicate the static content across the VMs.

2) Create an external HTTP(S) load balancer

3) Enable Cloud CDN, and send traffic to the unmanaged instance group.

C.

1) Create a Standard storage class, regional Cloud Storage bucket. Put the static content in the bucket

2) Reserve an external IP address, and create an external HTTP(S) load balancer

3) Enable Cloud CDN, and send traffic to your backend bucket

D.

1) Create a Standard storage class, multi-regional Cloud Storage bucket. Put the static content in the bucket.

2) Reserve an external IP address, and create an external HTTP(S) load balancer.

3) Enable Cloud CDN, and send traffic to your backend bucket.

Question 5

Your team develops services that run on Google Kubernetes Engine. You need to standardize their log data using Google-recommended practices and make the data more useful in the fewest number of steps. What should you do? (Choose two.)

Options:

A.

Create aggregated exports on application logs to BigQuery to facilitate log analytics.

B.

Create aggregated exports on application logs to Cloud Storage to facilitate log analytics.

C.

Write log output to standard output (stdout) as single-line JSON to be ingested into Cloud Logging as structured logs.

D.

Mandate the use of the Logging API in the application code to write structured logs to Cloud Logging.

E.

Mandate the use of the Pub/Sub API to write structured data to Pub/Sub and create a Dataflow streaming pipeline to normalize logs and write them to BigQuery for analytics.
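The pattern in option C (single-line JSON written to stdout) can be illustrated with a short sketch. The helper name `log_structured` is made up for this example; on GKE the logging agent parses JSON lines from stdout into structured log entries:

```python
import json
import sys

def log_structured(message, severity="INFO", **fields):
    """Emit one log entry as single-line JSON on stdout.

    On GKE, the logging agent parses JSON lines written to stdout,
    promotes recognized keys such as "severity" into the log entry,
    and keeps the remaining keys as structured payload fields.
    """
    line = json.dumps({"message": message, "severity": severity, **fields})
    sys.stdout.write(line + "\n")
    return line

log_structured("checkout complete", severity="NOTICE", order_id="A-1001")
```

Because the entry is one line of valid JSON, no application-side Logging API calls are needed, which is what keeps the step count low.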

Question 6

Your team has created an application that is hosted on a Google Kubernetes Engine (GKE) cluster. You need to connect the application to a legacy REST service that is deployed in two GKE clusters in two different regions. You want to connect your application to the legacy service in a way that is resilient and requires the fewest number of steps. You also want to be able to run probe-based health checks on the legacy service on a separate port. How should you set up the connection?

Options:

A.

Use Traffic Director with a sidecar proxy to connect the application to the service.

B.

Use a proxyless Traffic Director configuration to connect the application to the service.

C.

Configure the legacy service's firewall to allow health checks originating from the proxy.

D.

Configure the legacy service's firewall to allow health checks originating from the application.

E.

Configure the legacy service's firewall to allow health checks originating from the Traffic Director control plane.

Question 7

You have containerized a legacy application that stores its configuration on an NFS share. You need to deploy this application to Google Kubernetes Engine (GKE) and do not want the application serving traffic until after the configuration has been retrieved. What should you do?

Options:

A.

Use the gsutil utility to copy files from within the Docker container at startup, and start the service using an ENTRYPOINT script.

B.

Create a PersistentVolumeClaim on the GKE cluster. Access the configuration files from the volume, and start the service using an ENTRYPOINT script.

C.

Use the COPY statement in the Dockerfile to load the configuration into the container image. Verify that the configuration is available, and start the service using an ENTRYPOINT script.

D.

Add a startup script to the GKE instance group to mount the NFS share at node startup. Copy the configuration files into the container, and start the service using an ENTRYPOINT script.

Question 8

Your development team has built several Cloud Functions using Java along with corresponding integration and service tests. You are building and deploying the functions and launching the tests using Cloud Build. Your Cloud Build job is reporting deployment failures immediately after successfully validating the code. What should you do?

Options:

A.

Check the maximum number of Cloud Function instances.

B.

Verify that your Cloud Build trigger has the correct build parameters.

C.

Retry the tests using the truncated exponential backoff polling strategy.

D.

Verify that the Cloud Build service account is assigned the Cloud Functions Developer role.

Question 9

You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. Your application can scale horizontally, and each instance of your application needs to have a stable network identity and its own persistent disk.

Which GKE object should you use?

Options:

A.

Deployment

B.

StatefulSet

C.

ReplicaSet

D.

ReplicaController

Question 10

You have an ecommerce application hosted in Google Kubernetes Engine (GKE) that receives external requests and forwards them to third-party APIs external to Google Cloud. The third-party APIs are responsible for credit card processing, shipping, and inventory management using the process shown in the diagram.

Your customers are reporting that the ecommerce application is running slowly at unpredictable times. The application doesn't report any metrics. You need to determine the cause of the inconsistent performance. What should you do?

Options:

A.

Install the Ops Agent inside your container and configure it to gather application metrics.

B.

Install the OpenTelemetry library for your respective language, and instrument your application.

C.

Modify your application to read and forward the X-Cloud-Trace-Context header when it calls the downstream services.

D.

Enable Managed Service for Prometheus on the GKE cluster to gather application metrics.

Question 11

Your team manages a Google Kubernetes Engine (GKE) cluster where an application is running. A different team is planning to integrate with this application. Before they start the integration, you need to ensure that the other team cannot make changes to your application, but they can deploy the integration on GKE. What should you do?

Options:

A.

Using Identity and Access Management (IAM), grant the Viewer IAM role on the cluster project to the other team.

B.

Create a new GKE cluster. Using Identity and Access Management (IAM), grant the Editor role on the cluster project to the other team.

C.

Create a new namespace in the existing cluster. Using Identity and Access Management (IAM), grant the Editor role on the cluster project to the other team.

D.

Create a new namespace in the existing cluster. Using Kubernetes role-based access control (RBAC), grant the Admin role on the new namespace to the other team.

Question 12

You are building a new API. You want to minimize the cost of storing and reduce the latency of serving images.

Which architecture should you use?

Options:

A.

App Engine backed by Cloud Storage

B.

Compute Engine backed by Persistent Disk

C.

Transfer Appliance backed by Cloud Filestore

D.

Cloud Content Delivery Network (CDN) backed by Cloud Storage

Question 13

You are a developer working on an internal application for payroll processing. You are building a component of the application that allows an employee to submit a timesheet, which then initiates several steps:

• An email is sent to the employee and manager, notifying them that the timesheet was submitted.

• A timesheet is sent to payroll processing for the vendor's API.

• A timesheet is sent to the data warehouse for headcount planning.

These steps are not dependent on each other and can be completed in any order. New steps are being considered and will be implemented by different development teams. Each development team will implement the error handling specific to their step. What should you do?

Options:

A.

Deploy a Cloud Function for each step that calls the corresponding downstream system to complete the required action.

B.

Create a Pub/Sub topic for each step. Create a subscription for each downstream development team to subscribe to their step's topic.

C.

Create a Pub/Sub topic for timesheet submissions. Create a subscription for each downstream development team to subscribe to the topic.

D.

Create a timesheet microservice deployed to Google Kubernetes Engine. The microservice calls each downstream step and waits for a successful response before calling the next step.
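The fan-out pattern in option C (one topic, one subscription per consumer) is easy to see in miniature. Since the real google-cloud-pubsub client needs a live project, this is only a toy in-memory model; the `Topic` class and queue names are invented for illustration:

```python
from collections import deque

class Topic:
    """Toy model of Pub/Sub fan-out: every subscription attached to the
    topic receives its own copy of each published message, so consumers
    never compete for one another's work and new steps can be added by
    simply attaching another subscription."""

    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name):
        queue = deque()
        self.subscriptions[name] = queue
        return queue

    def publish(self, message):
        for queue in self.subscriptions.values():
            queue.append(message)

# One topic for timesheet submissions; each downstream team owns a subscription.
timesheets = Topic()
email_queue = timesheets.subscribe("email-notifications")
payroll_queue = timesheets.subscribe("payroll-vendor")
warehouse_queue = timesheets.subscribe("data-warehouse")

timesheets.publish({"employee": "e-123", "hours": 40})
```

Each team handles errors inside its own subscriber, so one failing step never blocks the others.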

Question 14

You are deploying your applications on Compute Engine. One of your Compute Engine instances failed to launch. What should you do? (Choose two.)

Options:

A.

Determine whether your file system is corrupted.

B.

Access Compute Engine as a different SSH user.

C.

Troubleshoot firewall rules or routes on an instance.

D.

Check whether your instance boot disk is completely full.

E.

Check whether network traffic to or from your instance is being dropped.

Question 15

You need to redesign the ingestion of audit events from your authentication service to allow it to handle a large increase in traffic. Currently, the audit service and the authentication system run in the same Compute Engine virtual machine. You plan to use the following Google Cloud tools in the new architecture:

• Multiple Compute Engine machines, each running an instance of the authentication service

• Multiple Compute Engine machines, each running an instance of the audit service

• Pub/Sub to send the events from the authentication services

How should you set up the topics and subscriptions to ensure that the system can handle a large volume of messages and can scale efficiently?

Options:

A.

Create one Pub/Sub topic. Create one pull subscription to allow the audit services to share the messages.

B.

Create one Pub/Sub topic. Create one pull subscription per audit service instance to allow the services to share the messages.

C.

Create one Pub/Sub topic. Create one push subscription with the endpoint pointing to a load balancer in front of the audit services.

D.

Create one Pub/Sub topic per authentication service. Create one pull subscription per topic to be used by one audit service.

E.

Create one Pub/Sub topic per authentication service. Create one push subscription per topic, with the endpoint pointing to one audit service.

Question 16

You want to re-architect a monolithic application so that it follows a microservices model. You want to accomplish this efficiently while minimizing the impact of this change to the business.

Which approach should you take?

Options:

A.

Deploy the application to Compute Engine and turn on autoscaling.

B.

Replace the application's features with appropriate microservices in phases.

C.

Refactor the monolithic application with appropriate microservices in a single effort and deploy it.

D.

Build a new application with the appropriate microservices separate from the monolith and replace it when it is complete.

Question 17

Your application is deployed in a Google Kubernetes Engine (GKE) cluster. You want to expose this application publicly behind a Cloud Load Balancing HTTP(S) load balancer. What should you do?

Options:

A.

Configure a GKE Ingress resource.

B.

Configure a GKE Service resource.

C.

Configure a GKE Ingress resource with type: LoadBalancer.

D.

Configure a GKE Service resource with type: LoadBalancer.

Question 18

You are parsing a log file that contains three columns: a timestamp, an account number (a string), and a transaction amount (a number). You want to calculate the sum of all transaction amounts for each unique account number efficiently.

Which data structure should you use?

Options:

A.

A linked list

B.

A hash table

C.

A two-dimensional array

D.

A comma-delimited string
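The hash-table approach (option B) maps naturally onto a dict keyed by account number. A short sketch, with the function name `sum_by_account` and the row layout invented to match the question:

```python
def sum_by_account(rows):
    """Aggregate transaction amounts per account with a hash table.

    Each row is (timestamp, account_number, amount). A dict gives
    amortized O(1) insert/lookup per row, so one pass over n rows
    runs in O(n) overall, versus O(n^2) for scanning a linked list
    or array for each account on every row.
    """
    totals = {}
    for _timestamp, account, amount in rows:
        totals[account] = totals.get(account, 0.0) + amount
    return totals
```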

Question 19

Your team develops services that run on Google Cloud. You want to process messages sent to a Pub/Sub topic, and then store them. Each message must be processed exactly once to avoid duplication of data and any data conflicts. You need to use the cheapest and most simple solution. What should you do?

Options:

A.

Process the messages with a Dataproc job, and write the output to storage.

B.

Process the messages with a Dataflow streaming pipeline using Apache Beam's PubSubIO package, and write the output to storage.

C.

Process the messages with a Cloud Function, and write the results to a BigQuery location where you can run a job to deduplicate the data.

D.

Retrieve the messages with a Dataflow streaming pipeline, store them in Cloud Bigtable, and use another Dataflow streaming pipeline to deduplicate messages.
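The core difficulty in this question is that Pub/Sub delivers at least once, so "exactly once" has to come from deduplication somewhere. This toy sketch (the `DedupingSubscriber` class is invented for illustration) shows the consumer-side bookkeeping that a Dataflow pipeline using Apache Beam's PubsubIO handles for you:

```python
class DedupingSubscriber:
    """Illustrative consumer-side dedup: because Pub/Sub may redeliver a
    message, the consumer tracks processed message IDs and skips replays,
    giving effectively-once processing. A managed streaming pipeline does
    this bookkeeping (and scales it) for you."""

    def __init__(self, handler):
        self.handler = handler      # called once per unique message
        self.seen_ids = set()       # IDs already processed

    def process(self, message_id, payload):
        if message_id in self.seen_ids:
            return False  # duplicate delivery; skip it
        self.handler(payload)
        self.seen_ids.add(message_id)
        return True
```

In a real system the seen-ID set would need durable, shared storage, which is exactly the operational burden the managed option removes.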

Question 20

You are porting an existing Apache/MySQL/PHP application stack from a single machine to Google Kubernetes Engine. You need to determine how to containerize the application. Your approach should follow Google-recommended best practices for availability. What should you do?

Options:

A.

Package each component in a separate container. Implement readiness and liveness probes.

B.

Package the application in a single container. Use a process management tool to manage each component.

C.

Package each component in a separate container. Use a script to orchestrate the launch of the components.

D.

Package the application in a single container. Use a bash script as an entrypoint to the container, and then spawn each component as a background job.

Question 21

You are deploying a microservices application to Google Kubernetes Engine (GKE). The application will receive daily updates. You expect to deploy a large number of distinct containers that will run on the Linux operating system (OS). You want to be alerted to any known OS vulnerabilities in the new containers. You want to follow Google-recommended best practices. What should you do?

Options:

A.

Use the gcloud CLI to call Container Analysis to scan new container images. Review the vulnerability results before each deployment.

B.

Enable Container Analysis, and upload new container images to Artifact Registry. Review the vulnerability results before each deployment.

C.

Enable Container Analysis, and upload new container images to Artifact Registry. Review the critical vulnerability results before each deployment.

D.

Use the Container Analysis REST API to call Container Analysis to scan new container images. Review the vulnerability results before each deployment.

Question 22

Your company has a BigQuery data mart that provides analytics information to hundreds of employees. One user wants to run jobs without interrupting important workloads. This user isn't concerned about the time it takes to run these jobs. You want to fulfill this request while minimizing cost to the company and the effort required on your part.

What should you do?

Options:

A.

Ask the user to run the jobs as batch jobs.

B.

Create a separate project for the user to run jobs.

C.

Add the user as a job.user role in the existing project.

D.

Allow the user to run jobs when important workloads are not running.

Question 23

Your team is writing a backend application to implement the business logic for an interactive voice response (IVR) system that will support a payroll application. The IVR system has the following technical characteristics:

• Each customer phone call is associated with a unique IVR session.

• The IVR system creates a separate persistent gRPC connection to the backend for each session.

• If the connection is interrupted, the IVR system establishes a new connection, causing a slight latency for that call.

You need to determine which compute environment should be used to deploy the backend application. Using current call data, you determine that:

• Call duration ranges from 1 to 30 minutes.

• Calls are typically made during business hours.

• There are significant spikes of calls around certain known dates (e.g., pay days), or when large payroll changes occur.

You want to minimize cost, effort, and operational overhead. Where should you deploy the backend application?

Options:

A.

Compute Engine

B.

Google Kubernetes Engine cluster in Standard mode

C.

Cloud Functions

D.

Cloud Run

Question 24

You are developing an event-driven application. You have created a topic to receive messages sent to Pub/Sub. You want those messages to be processed in real time. You need the application to be independent from any other system and only incur compute costs when new messages arrive. You want to configure the simplest and most efficient architecture. What should you do?

Options:

A.

Deploy your code on Cloud Functions. Use a Pub/Sub trigger to invoke the Cloud Function. Use the Pub/Sub API to create a pull subscription to the Pub/Sub topic and read messages from it.

B.

Deploy your code on Cloud Functions. Use a Pub/Sub trigger to handle new messages in the topic.

C.

Deploy the application on Google Kubernetes Engine. Use the Pub/Sub API to create a pull subscription to the Pub/Sub topic and read messages from it.

D.

Deploy the application on Compute Engine. Use a Pub/Sub push subscription to process new messages in the topic.

Question 25

You migrated some of your applications to Google Cloud. You are using a legacy monitoring platform deployed on-premises for both on-premises and cloud-deployed applications. You discover that your notification system is responding slowly to time-critical problems in the cloud applications. What should you do?

Options:

A.

Replace your monitoring platform with Cloud Monitoring.

B.

Install the Cloud Monitoring agent on your Compute Engine instances.

C.

Migrate some traffic back to your old platform. Perform A/B testing on the two platforms concurrently.

D.

Use Cloud Logging and Cloud Monitoring to capture logs, monitor, and send alerts. Send them to your existing platform.

Question 26

You are designing an application that will subscribe to and receive messages from a single Pub/Sub topic and insert corresponding rows into a database. Your application runs on Linux and leverages preemptible virtual machines to reduce costs. You need to create a shutdown script that will initiate a graceful shutdown. What should you do?

Options:

A.

Write a shutdown script that uses inter-process signals to notify the application process to disconnect from the database.

B.

Write a shutdown script that broadcasts a message to all signed-in users that the Compute Engine instance is going down and instructs them to save current work and sign out.

C.

Write a shutdown script that writes a file in a location that is being polled by the application once every five minutes. After the file is read, the application disconnects from the database.

D.

Write a shutdown script that publishes a message to the Pub/Sub topic announcing that a shutdown is in progress. After the application reads the message, it disconnects from the database.

Question 27

You are designing a resource-sharing policy for applications used by different teams in a Google Kubernetes Engine cluster. You need to ensure that all applications can access the resources needed to run. What should you do? (Choose two.)

Options:

A.

Specify the resource limits and requests in the object specifications.

B.

Create a namespace for each team, and attach resource quotas to each namespace.

C.

Create a LimitRange to specify the default compute resource requirements for each namespace.

D.

Create a Kubernetes service account (KSA) for each application, and assign each KSA to the namespace.

E.

Use the Anthos Policy Controller to enforce label annotations on all namespaces. Use taints and tolerations to allow resource sharing for namespaces.

Question 28

You are developing an ecommerce web application that uses App Engine standard environment and Memorystore for Redis. When a user logs into the app, the application caches the user’s information (e.g., session, name, address, preferences), which is stored for quick retrieval during checkout.

While testing your application in a browser, you get a 502 Bad Gateway error. You have determined that the application is not connecting to Memorystore. What is the reason for this error?

Options:

A.

Your Memorystore for Redis instance was deployed without a public IP address.

B.

You configured your Serverless VPC Access connector in a different region than your App Engine instance.

C.

The firewall rule allowing a connection between App Engine and Memorystore was removed during an infrastructure update by the DevOps team.

D.

You configured your application to use a Serverless VPC Access connector on a different subnet in a different availability zone than your App Engine instance.

Question 29

You developed a JavaScript web application that needs to access Google Drive’s API and obtain permission from users to store files in their Google Drives. You need to select an authorization approach for your application. What should you do?

Options:

A.

Create an API key.

B.

Create a SAML token.

C.

Create a service account.

D.

Create an OAuth Client ID.

Question 30

You are developing a new web application using Cloud Run and committing code to Cloud Source Repositories. You want to deploy new code in the most efficient way possible. You have already created a Cloud Build YAML file that builds a container and runs the following command: gcloud run deploy. What should you do next?

Options:

A.

Create a Pub/Sub topic to be notified when code is pushed to the repository. Create a Pub/Sub trigger that runs the build file when an event is published to the topic.

B.

Create a build trigger that runs the build file in response to a repository code being pushed to the development branch.

C.

Create a webhook build trigger that runs the build file in response to HTTP POST calls to the webhook URL.

D.

Create a Cron job that runs the following command every 24 hours: gcloud builds submit.

Question 31

You have an application deployed in Google Kubernetes Engine (GKE). You need to update the application to make authorized requests to Google Cloud managed services. You want this to be a one-time setup, and you need to follow security best practices of auto-rotating your security keys and storing them in an encrypted store. You already created a service account with appropriate access to the Google Cloud service. What should you do next?

Options:

A.

Assign the Google Cloud service account to your GKE Pod using Workload Identity.

B.

Export the Google Cloud service account, and share it with the Pod as a Kubernetes Secret.

C.

Export the Google Cloud service account, and embed it in the source code of the application.

D.

Export the Google Cloud service account, and upload it to HashiCorp Vault to generate a dynamic service account for your application.

Question 32

You have written a Cloud Function that accesses other Google Cloud resources. You want to secure the environment using the principle of least privilege. What should you do?

Options:

A.

Create a new service account that has Editor authority to access the resources. The deployer is given permission to get the access token.

B.

Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to get the access token.

C.

Create a new service account that has Editor authority to access the resources. The deployer is given permission to act as the new service account.

D.

Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to act as the new service account.

Question 33

You are developing a microservice-based application that will be deployed on a Google Kubernetes Engine cluster. The application needs to read and write to a Spanner database. You want to follow security best practices while minimizing code changes. How should you configure your application to retrieve Spanner credentials?

Options:

A.

Configure the appropriate service accounts, and use Workload Identity to run the pods.

B.

Store the application credentials as Kubernetes Secrets, and expose them as environment variables.

C.

Configure the appropriate routing rules, and use a VPC-native cluster to directly connect to the database.

D.

Store the application credentials using Cloud Key Management Service, and retrieve them whenever a database connection is made.

Question 34

You recently joined a new team that has a Cloud Spanner database instance running in production. Your manager has asked you to optimize the Spanner instance to reduce cost while maintaining high reliability and availability of the database. What should you do?

Options:

A.

Use Cloud Logging to check for error logs, and reduce Spanner processing units by small increments until you find the minimum capacity required.

B.

Use Cloud Trace to monitor the requests per sec of incoming requests to Spanner, and reduce Spanner processing units by small increments until you find the minimum capacity required.

C.

Use Cloud Monitoring to monitor the CPU utilization, and reduce Spanner processing units by small increments until you find the minimum capacity required.

D.

Use Snapshot Debugger to check for application errors, and reduce Spanner processing units by small increments until you find the minimum capacity required.

Question 35

Your company stores their source code in a Cloud Source Repositories repository. Your company wants to build and test their code on each source code commit to the repository and requires a solution that is managed and has minimal operations overhead.

Which method should they use?

Options:

A.

Use Cloud Build with a trigger configured for each source code commit.

B.

Use Jenkins deployed via the Google Cloud Platform Marketplace, configured to watch for source code commits.

C.

Use a Compute Engine virtual machine instance with an open source continuous integration tool, configured to watch for source code commits.

D.

Use a source code commit trigger to push a message to a Cloud Pub/Sub topic that triggers an App Engine service to build the source code.

Question 36

You need to migrate a standalone Java application running in an on-premises Linux virtual machine (VM) to Google Cloud in a cost-effective manner. You decide not to take the lift-and-shift approach, and instead you plan to modernize the application by converting it to a container. How should you accomplish this task?

Options:

A.

Use Migrate for Anthos to migrate the VM to your Google Kubernetes Engine (GKE) cluster as a container.

B.

Export the VM as a raw disk and import it as an image. Create a Compute Engine instance from the imported image.

C.

Use Migrate for Compute Engine to migrate the VM to a Compute Engine instance, and use Cloud Build to convert it to a container.

D.

Use Jib to build a Docker image from your source code, and upload it to Artifact Registry. Deploy the application in a GKE cluster, and test the application.

Question 37

You are building a CI/CD pipeline that consists of a version control system, Cloud Build, and Container Registry. Each time a new tag is pushed to the repository, a Cloud Build job is triggered, which runs unit tests on the new code, builds a new Docker container image, and pushes it into Container Registry. The last step of your pipeline should deploy the new container to your production Google Kubernetes Engine (GKE) cluster. You need to select a tool and deployment strategy that meets the following requirements:

• Zero downtime is incurred

• Testing is fully automated

• Allows for testing before being rolled out to users

• Can quickly roll back if needed

What should you do?

Options:

A.

Trigger a Spinnaker pipeline configured as an A/B test of your new code and, if it is successful, deploy the container to production.

B.

Trigger a Spinnaker pipeline configured as a canary test of your new code and, if it is successful, deploy the container to production.

C.

Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can perform a canary test.

D.

Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can perform a shadow test.

Question 38

You are building an API that will be used by Android and iOS apps The API must:

• Support HTTPS

• Minimize bandwidth cost

• Integrate easily with mobile apps

Which API architecture should you use?

Options:

A.

RESTful APIs

B.

MQTT for APIs

C.

gRPC-based APIs

D.

SOAP-based APIs
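The bandwidth consideration behind option C can be demonstrated locally: gRPC serializes messages with Protocol Buffers, a binary format that is typically much smaller than the equivalent JSON. The sketch below uses Python's stdlib `struct` as a rough stand-in for protobuf encoding (an assumption for illustration only; it is not the actual protobuf wire format):

```python
import json
import struct

# The same record encoded as JSON (REST-style) and as packed binary,
# a rough stand-in for the compact encoding gRPC gets from protobuf.
record = {"user_id": 42, "latitude": 52.52, "longitude": 13.405}

json_bytes = json.dumps(record).encode("utf-8")
# "<qdd" packs one 64-bit int and two 64-bit floats: 24 bytes total.
binary_bytes = struct.pack("<qdd", record["user_id"], record["latitude"], record["longitude"])

# Binary packing is substantially smaller than the JSON text.
assert len(binary_bytes) < len(json_bytes)
```

Field names travel in the schema rather than in every payload, which is where most of the savings come from on mobile connections.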

Question 39

In order to meet their business requirements, how should HipLocal store their application state?

Options:

A.

Use local SSDs to store state.

B.

Put a memcache layer in front of MySQL.

C.

Move the state storage to Cloud Spanner.

D.

Replace the MySQL instance with Cloud SQL.

Question 40

HipLocal is configuring their access controls.

Which firewall configuration should they implement?

Options:

A.

Block all traffic on port 443.

B.

Allow all traffic into the network.

C.

Allow traffic on port 443 for a specific tag.

D.

Allow all traffic on port 443 into the network.

Question 41

HipLocal's .NET-based auth service fails under intermittent load.

What should they do?

Options:

A.

Use App Engine for autoscaling.

B.

Use Cloud Functions for autoscaling.

C.

Use a Compute Engine cluster for the service.

D.

Use a dedicated Compute Engine virtual machine instance for the service.

Question 42

HipLocal's APIs are showing occasional failures, but they cannot find a pattern. They want to collect some metrics to help them troubleshoot.

What should they do?

Options:

A.

Take frequent snapshots of all of the VMs.

B.

Install the Stackdriver Logging agent on the VMs.

C.

Install the Stackdriver Monitoring agent on the VMs.

D.

Use Stackdriver Trace to look for performance bottlenecks.

Question 43

HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks.

Which IP strategy should they use?

Options:

A.

Create manual subnets.

B.

Create an auto mode subnet.

C.

Create multiple peered VPCs.

D.

Provision a single instance for NAT.

Question 44

For this question, refer to the HipLocal case study.

Which Google Cloud product addresses HipLocal’s business requirements for service level indicators and objectives?

Options:

A.

Cloud Profiler

B.

Cloud Monitoring

C.

Cloud Trace

D.

Cloud Logging
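Cloud Monitoring (option B) is the product where SLIs and SLOs are defined; the underlying arithmetic is simple. A hedged sketch of the availability case (the request counts and 99.9% target are assumed examples, not figures from the case study):

```python
def availability_sli(good_requests: int, total_requests: int) -> float:
    """SLI: the fraction of requests that were served successfully."""
    return good_requests / total_requests

def meets_slo(sli: float, slo_target: float) -> bool:
    """An SLO is met when the measured SLI is at or above its target."""
    return sli >= slo_target

# Example: 99,950 good requests out of 100,000 measured against a 99.9% target.
sli = availability_sli(99_950, 100_000)
```

Cloud Monitoring automates exactly this bookkeeping: it computes the ratio over a rolling window and alerts when the error budget (1 minus the target) is being burned too fast.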

Question 45

For this question, refer to the HipLocal case study.

HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to read traffic using those replicas. How should they further reduce latency for all database interactions with the least amount of effort?

Options:

A.

Migrate the database to Bigtable and use it to serve all global user traffic.

B.

Migrate the database to Cloud Spanner and use it to serve all global user traffic.

C.

Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic.

D.

Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.

Question 46

For this question, refer to the HipLocal case study.

HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal needs to configure authentication and authorization in the Cloud Client Libraries to implement least privileged access for the application. What should they do?

Options:

A.

Create an API key. Use the API key to interact with Google Cloud.

B.

Use the default compute service account to interact with Google Cloud.

C.

Create a service account for the application. Export and deploy the private key for the application. Use the service account to interact with Google Cloud.

D.

Create a service account for the application and for each Google Cloud API used by the application. Export and deploy the private keys used by the application. Use the service account with one Google Cloud API to interact with Google Cloud.

Question 47

For this question, refer to the HipLocal case study.

How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users?

Options:

A.

Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node.

B.

Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

C.

Use Memorystore to store session information and Cloud SQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

D.

Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state information.

Question 48

HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.

Which two services should they choose? (Choose two.)

Options:

A.

Use Google App Engine services.

B.

Use serverless Google Cloud Functions.

C.

Use Knative to build and deploy serverless applications.

D.

Use Google Kubernetes Engine for automated deployments.

E.

Use a large Google Compute Engine cluster for deployments.

Question 49

In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to?

Options:

A.

Cloud Spanner

B.

Cloud Datastore

C.

Cloud Memorystore as a cache

D.

Separate Cloud SQL clusters for each region

Question 50

HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements.

Which configuration should they choose?

Options:

A.

Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on Compute Engine.

B.

Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an external master configuration.

C.

Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.

D.

Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy without further configuration.

Question 51

HipLocal’s data science team wants to analyze user reviews.

How should they prepare the data?

Options:

A.

Use the Cloud Data Loss Prevention API for redaction of the review dataset.

B.

Use the Cloud Data Loss Prevention API for de-identification of the review dataset.

C.

Use the Cloud Natural Language Processing API for redaction of the review dataset.

D.

Use the Cloud Natural Language Processing API for de-identification of the review dataset.
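De-identification (option B) transforms PII so the reviews remain useful for analysis without exposing user identities. The real Cloud DLP API does this with managed infoType detectors via `deidentify_content`; the regex sketch below is only a local stand-in to illustrate the concept (the patterns and placeholder tokens are assumptions, far cruder than DLP's detectors):

```python
import re

# Local stand-in for DLP-style de-identification: mask common PII patterns
# in review text. Cloud DLP does this with managed infoType detectors and
# configurable transformations; this regex sketch only shows the idea.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def deidentify(text: str) -> str:
    """Replace detected identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```

Unlike plain redaction (deleting the match outright), de-identification can preserve structure, e.g. consistent tokenization or format-preserving encryption, so downstream analysis still works.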

Question 52

Which service should HipLocal use for their public APIs?

Options:

A.

Cloud Armor

B.

Cloud Functions

C.

Cloud Endpoints

D.

Shielded Virtual Machines

Question 53

For this question, refer to the HipLocal case study.

HipLocal is expanding into new locations. They must capture additional data each time the application is launched in a new European country. This is causing delays in the development process due to constant schema changes and a lack of environments for conducting testing on the application changes. How should they resolve the issue while meeting the business requirements?

Options:

A.

Create new Cloud SQL instances in Europe and North America for testing and deployment. Provide developers with local MySQL instances to conduct testing on the application changes.

B.

Migrate data to Bigtable. Instruct the development teams to use the Cloud SDK to emulate a local Bigtable development environment.

C.

Move from Cloud SQL to MySQL hosted on Compute Engine. Replicate hosts across regions in the Americas and Europe. Provide developers with local MySQL instances to conduct testing on the application changes.

D.

Migrate data to Firestore in Native mode and set up instan

Question 54

Which service should HipLocal use to enable access to internal apps?

Options:

A.

Cloud VPN

B.

Cloud Armor

C.

Virtual Private Cloud

D.

Cloud Identity-Aware Proxy

Question 55

Which database should HipLocal use for storing user activity?

Options:

A.

BigQuery

B.

Cloud SQL

C.

Cloud Spanner

D.

Cloud Datastore

Question 56

For this question, refer to the HipLocal case study.

A recent security audit discovers that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do?

Options:

A.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database credentials.

B.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials.

C.

Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate this account and authenticate using the Cloud SQL Proxy.

D.

Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with the Secret Manager API.
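Option D replaces plain-text credentials on disk with a fetch from Secret Manager at startup. A minimal sketch of what that looks like with the Python client library; the project and secret names (`hiplocal-prod`, `mysql-password`) are assumed placeholders, not values from the case study:

```python
# Sketch of option D: read database credentials from Secret Manager at
# startup instead of storing them in plain text on persistent disk.

def secret_version_name(project: str, secret: str, version: str = "latest") -> str:
    """Build the fully qualified resource name Secret Manager expects."""
    return f"projects/{project}/secrets/{secret}/versions/{version}"

# With the google-cloud-secret-manager package installed, and the Compute
# Engine service account granted roles/secretmanager.secretAccessor, the
# fetch looks roughly like this (names are placeholders):
#
#   from google.cloud import secretmanager
#   client = secretmanager.SecretManagerServiceClient()
#   name = secret_version_name("hiplocal-prod", "mysql-password")
#   response = client.access_secret_version(request={"name": name})
#   db_password = response.payload.data.decode("utf-8")
```

The credential now lives only in memory on the instance, access is gated by IAM, and every read is captured in Cloud Audit Logs.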

Question 57

For this question, refer to the HipLocal case study.

How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets feature requirements?

Options:

A.

Include unit tests in their code, and prevent deployments to QA until all tests have a passing status.

B.

Include performance tests in their code, and prevent deployments to QA until all tests have a passing status.

C.

Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy.

D.

Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found.