
Google Professional-Cloud-Architect: Google Cloud Certified - Professional Cloud Architect Exam Practice Test

Google Cloud Certified - Professional Cloud Architect Exam Questions and Answers

Question 1

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:

• Services are deployed redundantly across multiple regions in the US and Europe.

• Only frontend services are exposed on the public internet.

• They can provide a single frontend IP for their fleet of services.

• Deployment artifacts are immutable.

Which set of products should they use?

Options:

A.

Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine

B.

Google Cloud Storage, Google App Engine, Google Network Load Balancer

C.

Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer

D.

Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager

Question 2

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

Options:

A.

Container Engine, Cloud Pub/Sub, and Cloud SQL

B.

Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery

C.

Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D.

Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow

E.

Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

Question 3

For this question, refer to the Mountkirk Games case study.

Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

Options:

A.

Create a scalable environment in GCP for simulating production load.

B.

Use the existing infrastructure to test the GCP-based backend at scale.

C.

Build stress tests into each component of your application using resources internal to GCP to simulate load.

D.

Create a set of static environments in GCP to test different levels of load — for example, high, medium, and low.

Question 4

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

Options:

A.

Tests should scale well beyond the prior approaches.

B.

Unit tests are no longer required, only end-to-end tests.

C.

Tests should be applied after the release is in the production environment.

D.

Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.

Question 5

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

Options:

A.

Migrate the web application layer to App Engine, and MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.

B.

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.

C.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.

D.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.

Question 6

For this question, refer to the Dress4Win case study. You want to ensure that your on-premises architecture meets business requirements before you migrate your solution.

What change in the on-premises architecture should you make?

Options:

A.

Replace RabbitMQ with Google Pub/Sub.

B.

Downgrade MySQL to v5.7, which is supported by Cloud SQL for MySQL.

C.

Resize compute resources to match predefined Compute Engine machine types.

D.

Containerize the micro services and host them in Google Kubernetes Engine.

Question 7

For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?

Options:

A.

Web applications deployed using App Engine standard environment

B.

RabbitMQ deployed using an unmanaged instance group

C.

Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode

D.

Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types

Question 8

For this question, refer to the Dress4Win case study. You are responsible for the security of data stored in Cloud Storage for your company, Dress4Win. You have already created a set of Google Groups and assigned the appropriate users to those groups. You should use Google best practices and implement the simplest design to meet the requirements.

Considering Dress4Win’s business and technical requirements, what should you do?

Options:

A.

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements.

Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.

B.

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements.

Enable default storage encryption before storing files in Cloud Storage.

C.

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements.

Utilize Google’s default encryption at rest when storing files in Cloud Storage.

D.

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.

Question 9

For this question, refer to the Dress4Win case study.

As part of their new application experience, Dress4Win allows customers to upload images of themselves. The customer has exclusive control over who may view these images. Customers should be able to upload images with minimal latency and also be shown their images quickly on the main application page when they log in. Which configuration should Dress4Win use?

Options:

A.

Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that maps each customer's ID and their image files.

B.

Store image files in a Google Cloud Storage bucket. Add custom metadata to the uploaded images in Cloud Storage that contains the customer's unique ID.

C.

Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Assign each customer a unique ID, which sets each file's owner attribute, ensuring privacy of images.

D.

Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Use a Google Cloud SQL database to maintain metadata that maps each customer's ID to their image files.

Question 10

For this question, refer to the Dress4Win case study.

The Dress4Win security team has disabled external SSH access into production virtual machines (VMs) on Google Cloud Platform (GCP). The operations team needs to remotely manage the VMs, build and push Docker containers, and manage Google Cloud Storage objects. What can they do?

Options:

A.

Grant the operations engineers access to use Google Cloud Shell.

B.

Configure a VPN connection to GCP to allow SSH access to the cloud VMs.

C.

Develop a new access request process that grants temporary SSH access to cloud VMs when an operations engineer needs to perform a task.

D.

Have the development team build an API service that allows the operations team to execute specific remote procedure calls to accomplish their tasks.

Question 11

For this question, refer to the Dress4Win case study.

As part of Dress4Win's plans to migrate to the cloud, they want to be able to set up a managed logging and monitoring system so they can handle spikes in their traffic load. They want to ensure that:

• The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of usage throughout the day

• Their administrators are notified automatically when their application reports errors.

• They can filter their aggregated logs down in order to debug one piece of the application across many hosts

Which Google Stackdriver features should they use?

Options:

A.

Logging, Alerts, Insights, Debug

B.

Monitoring, Trace, Debug, Logging

C.

Monitoring, Logging, Alerts, Error Reporting

D.

Monitoring, Logging, Debug, Error Report

Question 12

For this question, refer to the Dress4Win case study.

You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years. Cost optimization is your top priority. Which cloud services should you choose?

Options:

A.

Google Cloud Storage Coldline to store the data, and gsutil to access the data.

B.

Google Cloud Storage Nearline to store the data, and gsutil to access the data.

C.

Google Bigtable with US or EU as the location to store the data, and gcloud to access the data.

D.

BigQuery to store the data, and a web server cluster in a managed instance group to access the data.

E.

Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.

Question 13

For this question, refer to the JencoMart case study.

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. What three steps should you take to diagnose the problem? Choose 3 answers

Options:

A.

Delete the virtual machine (VM) and disks and create a new one.

B.

Delete the instance, attach the disk to a new VM, and investigate.

C.

Take a snapshot of the disk and connect to a new machine to investigate.

D.

Check inbound firewall rules for the network the machine is connected to.

E.

Connect the machine to another network with very simple firewall rules and investigate.

F.

Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

Question 14

For this question, refer to the JencoMart case study.

JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data. What service account key-management strategy should you recommend?

Options:

A.

Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).

B.

Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.

C.

Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs.

D.

Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.

Question 15

For this question, refer to the JencoMart case study.

JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals. Which metrics should you track?

Options:

A.

Error rates for requests from Asia

B.

Latency difference between US and Asia

C.

Total visits, error rates, and latency from Asia

D.

Total visits and average latency for users in Asia

E.

The number of character sets present in the database

Question 16

For this question, refer to the JencoMart case study.

JencoMart wants to move their User Profiles database to Google Cloud Platform. Which Google Database should they use?

Options:

A.

Cloud Spanner

B.

Google BigQuery

C.

Google Cloud SQL

D.

Google Cloud Datastore

Question 17

For this question, refer to the TerramEarth case study

Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this vehicle event data. You want to support delegated authorization against this data. What should you do?

Options:

A.

Build or leverage an OAuth-compatible access control system.

B.

Build SAML 2.0 SSO compatibility into your authentication system.

C.

Restrict data access based on the source IP address of the partner systems.

D.

Create secondary credentials for each dealer that can be given to the trusted third party.
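The key idea behind delegated authorization (option A) is that the API checks a scoped access token presented by the third-party tool, rather than the partner's identity, shared credentials, or source IP. A minimal Python sketch of that scope check follows; the token store, token strings, and scope names are all illustrative, not part of any real OAuth library.

```python
# Toy sketch of delegated authorization: a third-party dealership tool
# presents an access token scoped to vehicle-event reads, and the API
# checks the scope on each request. Token contents are illustrative.
TOKENS = {
    "tok-dealer-123": {"scopes": {"vehicle.events.read"}, "dealer": "123"},
}

def authorize(token: str, required_scope: str) -> bool:
    """Return True only if the token exists and grants the required scope."""
    grant = TOKENS.get(token)
    return grant is not None and required_scope in grant["scopes"]

print(authorize("tok-dealer-123", "vehicle.events.read"))   # True
print(authorize("tok-dealer-123", "vehicle.events.write"))  # False
```

In a real OAuth 2.0 flow the token would be issued by an authorization server after the dealer consents, but the per-request scope check looks much like this.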

Question 18

Your agricultural division is experimenting with fully autonomous vehicles.

You want your architecture to promote strong security during vehicle operation.

Which two architectures should you consider?

Choose 2 answers:

Options:

A.

Treat every micro service call between modules on the vehicle as untrusted.

B.

Require IPv6 for connectivity to ensure a secure address space.

C.

Use a trusted platform module (TPM) and verify firmware and binaries on boot.

D.

Use a functional programming language to isolate code execution cycles.

E.

Use multiple connectivity subsystems for redundancy.

F.

Enclose the vehicle's drive electronics in a Faraday cage to isolate chips.

Question 19

For this question, refer to the TerramEarth case study.

TerramEarth has equipped unconnected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs. What should they do?

Options:

A.

Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.

B.

Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Google BigQuery.

C.

Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Cloud Bigtable.

D.

Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.

Question 20

For this question, refer to the TerramEarth case study.

Which of TerramEarth's legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

Options:

A.

Opex/capex allocation, LAN changes, capacity planning

B.

Capacity planning, TCO calculations, opex/capex allocation

C.

Capacity planning, utilization measurement, data center expansion

D.

Data Center expansion, TCO calculations, utilization measurement

Question 21

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google's AI Platform so HRL can understand and interpret the predictions. What should you do?

Options:

A.

Use Explainable AI.

B.

Use Vision AI.

C.

Use Google Cloud’s operations suite.

D.

Use Jupyter Notebooks.

Question 22

For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:

• It must provide low latency at minimal cost.

• It must be able to identify duplicate credit cards and must not store plaintext card numbers.

• It should support annual key rotation.

Which storage approach should you adopt for your tokenization service?

Options:

A.

Store the card data in Secret Manager after running a query to identify duplicates.

B.

Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.

C.

Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.

D.

Use column-level encryption to store the data in Cloud SQL.
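The "deterministic algorithm" the options refer to is what makes duplicate detection possible without plaintext: the same card number always maps to the same token. A minimal sketch of that property, using an HMAC with a versioned key (the key material and version label here are illustrative; in practice the key would live in a KMS and rotate annually):

```python
import hmac
import hashlib

def tokenize_card(card_number: str, key: bytes, key_version: str = "v1") -> str:
    """Deterministically tokenize a card number with a keyed hash (HMAC-SHA256).

    The same card number always yields the same token under a given key, so
    duplicates can be found by comparing tokens, and no plaintext number is
    stored. Prefixing the key version supports annual key rotation.
    """
    digest = hmac.new(key, card_number.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{key_version}:{digest}"

key = b"example-key-material"  # illustrative; a real key would come from a KMS
t1 = tokenize_card("4111111111111111", key)
t2 = tokenize_card("4111111111111111", key)
t3 = tokenize_card("5500005555555559", key)
print(t1 == t2)  # True: duplicate cards map to the same token
print(t1 == t3)  # False: different cards, different tokens
```

A reversible deterministic cipher would be used where the original number must be recoverable; a keyed hash as above suffices when only equality checks are needed.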

Question 23

For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?

Options:

A.

gcloud compute firewall-rules update hlr-policy \
--priority 1000 \
--target-tags sourceiplist-fastly \
--allow tcp:443

B.

gcloud compute security-policies rules update 1000 \
--security-policy hlr-policy \
--expression "evaluatePreconfiguredExpr('sourceiplist-fastly')" \
--action "allow"

C.

gcloud compute firewall-rules update sourceiplist-fastly \
--priority 1000 \
--allow tcp:443

D.

gcloud compute priority-policies rules update 1000 \
--security-policy from-fastly \
--src-ip-ranges \
--action "allow"

Question 24

For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?

Options:

A.

Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.

B.

Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.

C.

Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.

D.

Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.

Question 25

TerramEarth has a legacy web application that you cannot migrate to the cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution at minimum cost.

What should you do?

Options:

A.

Create a scheduled job in Cloud Run to invoke a container every minute. The container checks the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

B.

Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

C.

Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.

D.

Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
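The uptime-check flow in option C ends in a Pub/Sub-triggered Cloud Function. A rough sketch of that function's handler is below; the payload fields (`incident`, `state`, `resource_name`) and the failover action are assumptions for illustration, not the exact Cloud Monitoring schema.

```python
import base64
import json

def handle_uptime_alert(event: dict) -> str:
    """Sketch of a Pub/Sub-triggered handler for a Cloud Monitoring alert.

    Pub/Sub delivers the notification as base64-encoded JSON in event["data"].
    The incident field names used here are illustrative.
    """
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    incident = payload.get("incident", {})
    if incident.get("state") == "open":
        # Here the function would repoint the URL to the "Site is
        # unavailable" page (e.g. update DNS or the load balancer)
        # and notify the Ops channel.
        return f"failover triggered for {incident.get('resource_name', 'unknown')}"
    return "no action"

# Simulated Pub/Sub event, shaped the way Cloud Functions would deliver it
sample = {"data": base64.b64encode(json.dumps(
    {"incident": {"state": "open", "resource_name": "legacy-web-app"}}
).encode("utf-8"))}
print(handle_uptime_alert(sample))  # failover triggered for legacy-web-app
```

This pairing (managed uptime check plus a tiny function) is what keeps the solution both reliable and low-cost compared with running a polling VM.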

Question 26

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost.

What should you do?

Options:

A.

Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.

B.

Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.

C.

Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.

D.

Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.

Question 27

For this question, refer to the TerramEarth case study. Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?

Options:

A.

Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.

B.

Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.

C.

Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Cloud Storage bucket. Upload this data into BigQuery using gcloud. Use Google Data Studio for analysis and reporting.

D.

Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze the data.

Question 28

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?

Options:

A.

Replace the existing data warehouse with BigQuery. Use table partitioning.

B.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.

C.

Replace the existing data warehouse with BigQuery. Use federated data sources.

D.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine pre-emptible instance with 32 CPUs.

Question 29

For this question, refer to the EHR Healthcare case study. You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

Options:

A.

Add a new Dedicated Interconnect connection.

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G.

C.

Add three new Cloud VPN connections.

D.

Add a new Carrier Peering connection.

Question 30

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?

Options:

A.

Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.

B.

Revoke the compute.networkAdmin role from all users in the project with frontend instances.

C.

Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.

D.

Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.

Question 31

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed using Google Cloud services. What should you do? (Choose two.)

Options:

A.

Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.

B.

Configure Jenkins to utilize Kritis to cryptographically sign a container as part of a CI/CD pipeline.

C.

Configure Container Registry to only allow trusted service accounts to create and deploy containers from the registry.

D.

Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before deploying the workload.

Question 32

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?

Options:

A.

Use a private cluster with a private endpoint with master authorized networks configured.

B.

Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.

C.

Use a private cluster with a public endpoint with master authorized networks configured.

D.

Use a public cluster with master authorized networks enabled and firewall rules.

Question 33

Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. The data from the motion detector includes only a sensor ID and several different discrete items of information. Analysts will use this data, together with information about account owners and office locations. Which database type should you use?

Options:

A.

Flat file

B.

NoSQL

C.

Relational

D.

Blobstore
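What distinguishes the relational choice here is the need to join sensor events against account-owner and office-location tables. A small sqlite3 sketch of that join (the schema and table names are hypothetical, invented for illustration):

```python
import sqlite3

# Hypothetical schema: offices contain rooms; each room has one sensor;
# events reference sensors. Analysts join events back to office metadata.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE offices (office_id INTEGER PRIMARY KEY, city TEXT);
CREATE TABLE rooms   (room_id INTEGER PRIMARY KEY, office_id INTEGER,
                      sensor_id TEXT);
CREATE TABLE events  (sensor_id TEXT, ts INTEGER, occupied INTEGER);
""")
conn.execute("INSERT INTO offices VALUES (1, 'London')")
conn.execute("INSERT INTO rooms VALUES (10, 1, 'sensor-abc')")
conn.execute("INSERT INTO events VALUES ('sensor-abc', 1700000000, 1)")

# "Which offices had occupied rooms?" is a natural relational join.
row = conn.execute("""
    SELECT o.city, e.occupied
    FROM events e
    JOIN rooms r ON r.sensor_id = e.sensor_id
    JOIN offices o ON o.office_id = r.office_id
""").fetchone()
print(row)  # ('London', 1)
```

A NoSQL store would handle the raw event volume, but this kind of ad-hoc cross-entity join is where a relational database earns its keep.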

Question 34

Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?

Options:

A.

Configure a new load balancer for the new version of the API.

B.

Reconfigure old clients to use a new endpoint for the new API.

C.

Have the old API forward traffic to the new API based on the path.

D.

Use separate backend pools for each API path behind the load balancer.
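Option D relies on path-based routing: one hostname and SSL certificate, with the load balancer's URL map dispatching `/v1/` and `/v2/` requests to different backend pools. A toy sketch of that dispatch logic (the path prefixes and pool names are made up for illustration):

```python
# Minimal sketch of URL-map-style path routing: one entry point,
# two backend pools keyed by path prefix. Names are illustrative.
PATH_RULES = {
    "/v1/": "api-v1-backend-pool",
    "/v2/": "api-v2-backend-pool",
}

def route(path: str, default: str = "api-v1-backend-pool") -> str:
    """Return the backend pool whose prefix matches the request path."""
    for prefix, backend in PATH_RULES.items():
        if path.startswith(prefix):
            return backend
    return default

print(route("/v1/users"))  # api-v1-backend-pool
print(route("/v2/users"))  # api-v2-backend-pool
```

Because routing happens behind the load balancer, the SSL certificate and DNS records never change while both API versions stay deployable.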

Question 35

The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud. Which three practices should you recommend? Choose 3 answers

Options:

A.

Port the application code to run on Google App Engine.

B.

Integrate Cloud Dataflow into the application to capture real-time metrics.

C.

Instrument the application with a monitoring tool like Stackdriver Debugger.

D.

Select an automation framework to reliably provision the cloud infrastructure.

E.

Deploy a continuous integration tool with automated testing in a staging environment.

F.

Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable.

Question 36

Your company has a networking team and a development team. The development team runs applications on Compute Engine instances that contain sensitive data. The development team requires administrative permissions for Compute Engine. Your company requires all network resources to be managed by the networking team. The development team does not want the networking team to have access to the sensitive data on the instances. What should you do?

Options:

A.

1. Create a project with a standalone VPC and assign the Network Admin role to the networking team.

2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team.

3. Use Cloud VPN to join the two VPCs.

B.

1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team.

C.

1. Create a project with a Shared VPC and assign the Network Admin role to the networking team.

2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team.

D.

1. Create a project with a standalone VPC and assign the Network Admin role to the networking team.

2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team.

3. Use VPC Peering to join the two VPCs.

Question 37

You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic access to a legacy game's Firestore database. Access should be as restricted as possible. What should you do?

Options:

A.

Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects

B.

Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin role to both SAs

C.

Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's project.

D.

Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects.

Question 38

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform.

Which two steps should be part of their migration plan? (Choose two.)

Options:

A.

Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.

B.

Write a schema migration plan to denormalize data for better performance in BigQuery.

C.

Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.

D.

Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.

E.

Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.

Question 39

Your development teams release new versions of games running on Google Kubernetes Engine (GKE) daily. You want to create service level indicators (SLIs) to evaluate the quality of the new versions from the user's perspective. What should you do?

Options:

A.

Create CPU Utilization and Request Latency as service level indicators.

B.

Create GKE CPU Utilization and Memory Utilization as service level indicators.

C.

Create Request Latency and Error Rate as service level indicators.

D.

Create Server Uptime and Error Rate as service level indicators.
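User-facing SLIs such as request latency and error rate can be computed directly from a window of request records, unlike CPU or memory utilization, which say nothing about what users experience. A sketch of that computation (record fields and the simple percentile index are illustrative):

```python
# Sketch: computing two user-facing SLIs (error rate, p95 latency)
# from a window of request records. Field names are illustrative.
requests = [
    {"status": 200, "latency_ms": 120},
    {"status": 200, "latency_ms": 90},
    {"status": 500, "latency_ms": 300},
    {"status": 200, "latency_ms": 110},
]

# Error rate: fraction of requests that failed from the user's perspective.
error_rate = sum(r["status"] >= 500 for r in requests) / len(requests)

# p95 latency via a simple sorted-index approximation.
latencies = sorted(r["latency_ms"] for r in requests)
p95_index = min(len(latencies) - 1, int(0.95 * len(latencies)))
p95_latency = latencies[p95_index]

print(error_rate)   # 0.25
print(p95_latency)  # 300
```

In practice these would be measured continuously (e.g. by the serving infrastructure) and compared against an SLO target after each daily release.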