
Amazon Web Services DBS-C01 AWS Certified Database - Specialty Exam Practice Test

Total 324 questions

AWS Certified Database - Specialty Questions and Answers

Question 1

A company is using AWS CloudFormation to provision and manage infrastructure resources, including a production database. During a recent CloudFormation stack update, a database specialist observed that changes were made to a database resource that is named ProductionDatabase. The company wants to prevent changes to only ProductionDatabase during future stack updates.

Which stack policy will meet this requirement?

Options:

A.

(Stack policy shown as an image in the original; not reproduced here.)

B.

(Stack policy shown as an image in the original; not reproduced here.)

C.

(Stack policy shown as an image in the original; not reproduced here.)

D.

(Stack policy shown as an image in the original; not reproduced here.)
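Because the lettered options are only available as images, here is a minimal sketch of the stack policy format this question is testing: deny all Update actions on the resource whose logical ID is ProductionDatabase while allowing updates to everything else. This is the general stack policy structure, not a reproduction of any of the options above.

```json
{
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "LogicalResourceId/ProductionDatabase"
    },
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    }
  ]
}
```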

Question 2

A company is releasing a new mobile game featuring a team play mode. As a group of mobile device users play together, an item containing their statuses is updated in an Amazon DynamoDB table. Periodically, the other users’ devices read the latest statuses of their teammates from the table using the BatchGetItem operation.

Prior to launch, some testers submitted bug reports claiming that the status data they were seeing in the game was not up-to-date. The developers are unable to replicate this issue and have asked a database specialist for a recommendation.

Which recommendation would resolve this issue?

Options:

A.

Ensure the DynamoDB table is configured to be always consistent.

B.

Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to false.

C.

Enable a stream on the DynamoDB table and subscribe each device to the stream to ensure all devices receive up-to-date status information.

D.

Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to true.
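For context, a strongly consistent BatchGetItem call in Python (boto3) might look like the sketch below. The table name and key attribute are hypothetical placeholders, not from the question.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical table and key names for illustration only.
response = dynamodb.batch_get_item(
    RequestItems={
        "TeamStatus": {
            "Keys": [{"team_id": {"S": "team-42"}}],
            # ConsistentRead=True requests strongly consistent reads,
            # so devices see the most recently written statuses.
            "ConsistentRead": True,
        }
    }
)
items = response["Responses"]["TeamStatus"]
```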

Question 3

A large retail company recently migrated its three-tier ecommerce applications to AWS. The company’s backend database is hosted on Amazon Aurora PostgreSQL. During peak times, users complain about longer page load times. A database specialist reviewed Amazon RDS Performance Insights and found a spike in IO:XactSync wait events. The SQL statements associated with the wait events are all single INSERT statements.

How should this issue be resolved?

Options:

A.

Modify the application to commit transactions in batches

B.

Add a new Aurora Replica to the Aurora DB cluster.

C.

Add an Amazon ElastiCache for Redis cluster and change the application to write through.

D.

Change the Aurora DB cluster storage to Provisioned IOPS (PIOPS).

Question 4

An ecommerce company is using Amazon DynamoDB as the backend for its order-processing application. The steady increase in the number of orders is resulting in increased DynamoDB costs. Order verification and reporting perform many repeated GetItem functions that pull similar datasets, and this read activity is contributing to the increased costs. The company wants to control these costs without significant development efforts.

How should a Database Specialist address these requirements?

Options:

A.

Use AWS DMS to migrate data from DynamoDB to Amazon DocumentDB

B.

Use Amazon DynamoDB Streams and Amazon Kinesis Data Firehose to push the data into Amazon Redshift

C.

Use an Amazon ElastiCache for Redis in front of DynamoDB to boost read performance

D.

Use DynamoDB Accelerator to offload the reads

Question 5

An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database should be designed to support the following use cases:

  • Update scores in real time whenever a player is playing the game.
  • Retrieve a player’s score details for a specific game session.

A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.

Which choice of keys is recommended for the DynamoDB table?

Options:

A.

Create a global secondary index with game_id as the partition key

B.

Create a global secondary index with user_id as the partition key

C.

Create a composite primary key with game_id as the partition key and user_id as the sort key

D.

Create a composite primary key with user_id as the partition key and game_id as the sort key
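As an illustration of how one of the composite-key options would be defined, here is a boto3 sketch that creates a table with user_id as the partition key and game_id as the sort key. The table name and billing mode are assumptions for the example.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Composite primary key: user_id as partition key, game_id as sort key.
dynamodb.create_table(
    TableName="PlayerScores",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "game_id", "KeyType": "RANGE"},  # sort key
    ],
    BillingMode="PAY_PER_REQUEST",
)
```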

Question 6

A database specialist needs to reduce the cost of an application's database. The database is running on a Multi-AZ deployment of an Amazon RDS for Microsoft SQL Server DB instance. The application requires the database to support stored procedures, SQL Server Wire Protocol (TDS), and T-SQL. The database must also be highly available. The database specialist is using AWS Database Migration Service (AWS DMS) to migrate the database to a new data store.

Which solution will reduce the cost of the database with the LEAST effort?

Options:

A.

Use AWS Database Migration Service (DMS) to migrate to an RDS for MySQL Multi-AZ database. Update the application code to use the features of MySQL that correspond to SQL Server. Update the application to use the MySQL port.

B.

Use AWS Database Migration Service (AWS DMS) to migrate to an RDS for PostgreSQL Multi-AZ database. Turn on the SQL_COMPAT optional extension within the database to allow the required features. Update the application to use the PostgreSQL port.

C.

Use AWS Database Migration Service (AWS DMS) to migrate to an RDS for SQL Server Single-AZ database. Update the application to use the new database endpoint.

D.

Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish for Aurora PostgreSQL. Update the application to use the Babelfish TDS port.

Question 7

A database specialist needs to delete user data and sensor data 1 year after it was loaded in an Amazon DynamoDB table. TTL is enabled on one of the attributes. The database specialist monitors TTL rates on the Amazon CloudWatch metrics for the table and observes that items are not being deleted as expected.

What is the MOST likely reason that the items are not being deleted?

Options:

A.

The TTL attribute's value is set as a Number data type.

B.

The TTL attribute's value is set as a Binary data type.

C.

The TTL attribute's value is a timestamp in the Unix epoch time format in seconds.

D.

The TTL attribute's value is set with an expiration of 1 year.
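For background, DynamoDB TTL only deletes items whose TTL attribute is a Number holding a Unix epoch timestamp in seconds. The sketch below shows enabling TTL and writing such an item; the table name and attribute name are hypothetical.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on a hypothetical attribute named "expires_at".
dynamodb.update_time_to_live(
    TableName="SensorData",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# The TTL value must be a Number: Unix epoch seconds, here one year out.
one_year_from_now = int(time.time()) + 365 * 24 * 60 * 60
dynamodb.put_item(
    TableName="SensorData",
    Item={
        "sensor_id": {"S": "sensor-001"},
        "expires_at": {"N": str(one_year_from_now)},
    },
)
```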

Question 8

A company's development team needs to have production data restored in a staging AWS account. The production database is running on an Amazon RDS for

PostgreSQL Multi-AZ DB instance, which has AWS KMS encryption enabled using the default KMS key. A database specialist planned to share the most recent automated snapshot with the staging account, but discovered that the option to share snapshots is disabled in the AWS Management Console.

What should the database specialist do to resolve this?

Options:

A.

Disable automated backups in the DB instance. Share both the automated snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account and enable automated backups.

B.

Copy the automated snapshot specifying a custom KMS encryption key. Share both the copied snapshot and the custom KMS encryption key with the staging account. Restore the snapshot to the staging account within the same Region.

C.

Modify the DB instance to use a custom KMS encryption key. Share both the automated snapshot and the custom KMS encryption key with the staging account. Restore the snapshot in the staging account.

D.

Copy the automated snapshot while keeping the default KMS key. Share both the snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account.
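A rough boto3 sketch of the copy-and-share flow discussed in the options follows. The snapshot identifiers, KMS key ARN, and account IDs are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Copy the automated snapshot, re-encrypting it with a custom KMS key
# (snapshots encrypted with the default KMS key cannot be shared directly).
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:production-2024-01-01-00-00",  # placeholder
    TargetDBSnapshotIdentifier="production-copy-for-staging",
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/example",     # custom key
)

# Share the copied snapshot with the staging account (placeholder account ID).
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="production-copy-for-staging",
    AttributeName="restore",
    ValuesToAdd=["222222222222"],
)
```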

Question 9

A business uses Amazon DynamoDB global tables to power an online game played by gamers around the globe. As the game grew in popularity, the number of queries to DynamoDB rose substantially. Recently, gamers have complained that the game state is inconsistent between countries. A database professional notices that the ReplicationLatency metric for many replica tables is abnormally high.

Which strategy will resolve the issue?

Options:

A.

Configure all replica tables to use DynamoDB auto scaling.

B.

Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas.

C.

Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity.

D.

Configure the table-level write throughput limit service quota to a higher value.

Question 10

A company is running its customer feedback application on Amazon Aurora MySQL. The company runs a report every day to extract customer feedback, and a team reads the feedback to determine if the customer comments are positive or negative. It sometimes takes days before the company can contact unhappy customers and take corrective measures. The company wants to use machine learning to automate this workflow.

Which solution meets this requirement with the LEAST amount of effort?

Options:

A.

Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon Comprehend to run sentiment analysis on the exported files.

B.

Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon SageMaker to run sentiment analysis on the exported files.

C.

Set up Aurora native integration with Amazon Comprehend. Use SQL functions to extract sentiment analysis.

D.

Set up Aurora native integration with Amazon SageMaker. Use SQL functions to extract sentiment analysis.
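For context, Aurora's native integration with Amazon Comprehend exposes sentiment analysis as SQL functions. The snippet below is a sketch that assumes an Aurora MySQL cluster with the ML integration configured and a hypothetical feedback table; treat the exact function names as an assumption to verify against the documentation for the engine version in use.

```sql
-- Sketch only: hypothetical "feedback" table with a "comment" column.
SELECT
    comment,
    aws_comprehend_detect_sentiment(comment, 'en')            AS sentiment,
    aws_comprehend_detect_sentiment_confidence(comment, 'en') AS confidence
FROM feedback
WHERE created_at >= CURDATE();
```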

Question 11

A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.

Which solution meets these requirements?

Options:

A.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.

B.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.

C.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.

D.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Question 12

A financial services company runs an on-premises MySQL database for a critical application. The company is dissatisfied with its current database disaster recovery (DR) solution. The application experiences a significant amount of downtime whenever the database fails over to its DR facility. The application also experiences slower response times when reports are processed on the same database. To minimize the downtime in DR situations, the company has decided to migrate the database to AWS. The company requires a solution that is highly available and the most cost-effective.

Which solution meets these requirements?

Options:

A.

Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the replica instance endpoint and report queries to reference the primary DB instance endpoint.

B.

Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint.

C.

Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the cluster endpoint and report queries to reference the reader endpoint.

D.

Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint.

Question 13

A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements.

Only certain on-premises corporate network IPs should connect to the DB instance. Connectivity is allowed from the corporate network only.

Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)

Options:

A.

Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.

B.

Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.

C.

Move the DB instance to a private subnet using AWS DMS.

D.

Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.

E.

Disable the publicly accessible setting.

F.

Connect to the DB instance using private IPs and a VPN.

Question 14

A financial institution uses AWS to host its online application. Amazon RDS for MySQL, with automated backups enabled, is used to host the application's database.

The application has logically corrupted the database, resulting in the application becoming unresponsive. The exact moment the corruption occurred has been determined, and it is within the backup retention period.

How should a database professional restore the database to its state just before the corruption occurred?

Options:

A.

Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.

B.

Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.

C.

Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.

D.

Restore using the appropriate automated backup. No changes to the application connection string are required.

Question 15

A worldwide gaming company's development team is experimenting with using Amazon DynamoDB to store in-game events for three mobile titles. The most popular game has a maximum of 500,000 concurrent users, while the least popular has 10,000. The typical event is 20 KB in size, and the average user session generates one event each second. Each event is assigned a millisecond timestamp and a globally unique identifier.

The lead developer created a single DynamoDB table for the events with the following structure:

  • Partition key: game name
  • Sort key: event identifier
  • Local secondary index: player identifier
  • Event time

In a small-scale development setting, the tests were successful. When the application was deployed to production, however, new events were not being added to the table, and the logs showed DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design modification should a database professional recommend to the development team?

Options:

A.

Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

B.

Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

C.

Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

D.

Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.

Question 16

A company has an Amazon RDS Multi-AZ DB instance that is 200 GB in size with an RPO of 6 hours. To meet the company’s disaster recovery policies, the database backup needs to be copied into another Region. The company requires the solution to be cost-effective and operationally efficient.

What should a Database Specialist do to copy the database backup into a different Region?

Options:

A.

Use Amazon RDS automated snapshots and use AWS Lambda to copy the snapshot into another Region

B.

Use Amazon RDS automated snapshots every 6 hours and use Amazon S3 cross-Region replication to copy the snapshot into another Region

C.

Create an AWS Lambda function to take an Amazon RDS snapshot every 6 hours and use a second Lambda function to copy the snapshot into another Region

D.

Create a cross-Region read replica for Amazon RDS in another Region and take an automated snapshot of the read replica

Question 17

A development team asks a database specialist to create a copy of a production Amazon RDS for MySQL DB instance every morning. The development team will use the copied DB instance as a testing environment for development. The original DB instance and the copy will be hosted in different VPCs of the same AWS account. The development team wants the copy to be available by 6 AM each day and wants to use the same endpoint address each day.

Which combination of steps should the database specialist take to meet these requirements MOST cost-effectively? (Choose three.)

Options:

A.

Create a snapshot of the production database each day before the 6 AM deadline.

B.

Create an RDS for MySQL DB instance from the snapshot. Select the desired DB instance size.

C.

Update a defined Amazon Route 53 CNAME record to point to the copied DB instance.

D.

Set up an AWS Database Migration Service (AWS DMS) migration task to copy the snapshot to the copied DB instance.

E.

Use the CopySnapshot action on the production DB instance to create a snapshot before 6 AM.

F.

Update a defined Amazon Route 53 alias record to point to the copied DB instance.

Question 18

A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account. Testing results in minimal changes to the test data. The Development team wants each environment refreshed nightly so each test database contains fresh production data every day.

Which migration approach will be the fastest and most cost-effective to implement?

Options:

A.

Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

B.

Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.

C.

Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.

D.

Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

Question 19

A company has an AWS CloudFormation stack that defines an Amazon RDS DB instance. The company accidentally deletes the stack and loses recent data from the DB instance. A database specialist must change the CloudFormation template for the RDS resource to reduce the chance of accidental data loss from the DB instance in the future.

Which combination of actions should the database specialist take to meet this requirement? (Choose three.)

Options:

A.

Set the DeletionProtection property to True.

B.

Set the MultiAZ property to True.

C.

Set the TerminationProtection property to True.

D.

Set the DeleteAutomatedBackups property to False.

E.

Set the DeletionPolicy attribute to No.

F.

Set the DeletionPolicy attribute to Retain.
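To show where the properties named in the options actually live, here is a minimal CloudFormation sketch of an RDS resource with deletion safeguards. The logical ID and the engine/size properties are hypothetical; a real template would also need credentials and networking settings.

```yaml
Resources:
  ProductionDatabase:          # hypothetical logical ID
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain     # keep the DB instance if the stack is deleted
    Properties:
      DeletionProtection: true         # block deletion of the DB instance itself
      DeleteAutomatedBackups: false    # keep automated backups after instance deletion
      DBInstanceClass: db.m5.large
      Engine: mysql
      AllocatedStorage: "100"
```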

Question 20

A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS for Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.

Where should the AWS DMS replication instance be placed for the MOST optimal performance?

Options:

A.

In the same Region and VPC of the source DB instance

B.

In the same Region and VPC as the target DB instance

C.

In the same VPC and Availability Zone as the target DB instance

D.

In the same VPC and Availability Zone as the source DB instance

Question 21

A global digital advertising company captures browsing metadata to contextually display relevant images, pages, and links to targeted users. A single page load can generate multiple events that need to be stored individually. The maximum size of an event is 200 KB and the average size is 10 KB. Each page load must query the user’s browsing history to provide targeting recommendations. The advertising company expects over 1 billion page visits per day from users in the United States, Europe, Hong Kong, and India. The structure of the metadata varies depending on the event. Additionally, the browsing metadata must be written and read with very low latency to ensure a good viewing experience for the users.

Which database solution meets these requirements?

Options:

A.

Amazon DocumentDB

B.

Amazon RDS Multi-AZ deployment

C.

Amazon DynamoDB global table

D.

Amazon Aurora Global Database

Question 22

A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level.

How can the Database Specialist meet these requirements?

Options:

A.

Use AWS IAM database authentication and restrict access to the tables using an IAM policy.

B.

Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.

C.

Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.

D.

Define access privileges to the tables containing sensitive data in the pg_hba.conf file.
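Table-level access control in PostgreSQL is handled with standard GRANT and REVOKE statements; a short sketch with hypothetical role and table names:

```sql
-- Hypothetical role and table names for illustration.
REVOKE ALL ON employees_salary FROM PUBLIC;
GRANT SELECT ON employees_salary TO hr_readonly;
GRANT SELECT, INSERT, UPDATE ON employees_salary TO hr_admin;
```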

Question 23

An ecommerce company uses Amazon DynamoDB as the backend for its payments system. A new regulation requires the company to log all data access requests for financial audits. For this purpose, the company plans to use AWS logging and save the logs to Amazon S3.

How can a database specialist activate logging on the database?

Options:

A.

Use AWS CloudTrail to monitor DynamoDB control-plane operations. Create a DynamoDB stream to monitor data-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.

B.

Use AWS CloudTrail to monitor DynamoDB data-plane operations. Create a DynamoDB stream to monitor control-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.

C.

Create two trails in AWS CloudTrail. Use Trail1 to monitor DynamoDB control-plane operations. Use Trail2 to monitor DynamoDB data-plane operations.

D.

Use AWS CloudTrail to monitor DynamoDB data-plane and control-plane operations.

Question 24

A gaming company is evaluating Amazon ElastiCache as a solution to manage player leaderboards. Millions of players around the world will compete in annual tournaments. The company wants to implement an architecture that is highly available. The company also wants to ensure that maintenance activities have minimal impact on the availability of the gaming platform.

Which combination of steps should the company take to meet these requirements? (Choose two.)

Options:

A.

Deploy an ElastiCache for Redis cluster with read replicas and Multi-AZ enabled.

B.

Deploy an ElastiCache for Memcached global datastore.

C.

Deploy a single-node ElastiCache for Redis cluster with automatic backups enabled. In the event of a failure, create a new cluster and restore data from the most recent backup.

D.

Use the default maintenance window to apply any required system changes and mandatory updates as soon as they are available.

E.

Choose a preferred maintenance window at the time of lowest usage to apply any required changes and mandatory updates.

Question 25

A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.

Which action will allow AWS DMS to perform the replication?

Options:

A.

Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.

B.

Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.

C.

Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.

D.

Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.

Question 26

A company is running a business-critical application on premises by using Microsoft SQL Server. A database specialist is planning to migrate the instance with several databases to the AWS Cloud. The database specialist will use SQL Server Standard edition hosted on Amazon EC2 Windows instances. The solution must provide high availability and must avoid a single point of failure in the SQL Server deployment architecture.

Which solution will meet these requirements?

Options:

A.

Create Amazon RDS for SQL Server Multi-AZ DB instances. Use Amazon S3 as a shared storage option to host the databases.

B.

Set up Always On Failover Cluster Instances as a single SQL Server instance. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

C.

Set up Always On availability groups to group one or more user databases that fail over together across multiple SQL Server instances. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

D.

Create an Application Load Balancer to distribute database traffic across multiple EC2 instances in multiple Availability Zones. Use Amazon S3 as a shared storage option to host the databases.

Question 27

A large company has a variety of Amazon DB clusters. Each of these clusters has various configurations that adhere to various requirements. Depending on the team and use case, these configurations can be organized into broader categories.

A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.

Which AWS service or feature will help automate and achieve this objective?

Options:

A.

AWS Systems Manager Parameter Store

B.

DB parameter group

C.

AWS Config

D.

AWS Secrets Manager

Question 28

A business just transitioned from an on-premises Oracle database to Amazon Aurora PostgreSQL. Following the move, the organization observed that every day around 3:00 PM, the application's response time is substantially slower. The firm has determined that the problem is with the database, not the application.

Which set of steps should the Database Specialist take to locate the problematic PostgreSQL query most efficiently?

Options:

A.

Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.

B.

Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.

C.

Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.

D.

Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Question 29

A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered there is a period of time every day around 3:00 PM where the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.

Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?

Options:

A.

Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.

B.

Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.

C.

Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.

D.

Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Question 30

A business's production databases are housed on a 3 TB Amazon Aurora MySQL DB cluster deployed in the us-east-1 Region. For disaster recovery (DR) requirements, the company's database expert needs to be able to quickly deploy the DB cluster in another AWS Region to handle the production load, with an RTO of less than 2 hours.

Which approach is the MOST operationally effective in meeting these requirements?

Options:

A.

Implement an AWS Lambda function to take a snapshot of the production DB cluster every 2 hours, and copy that snapshot to an Amazon S3 bucket in the DR Region. Restore the snapshot to an appropriately sized DB cluster in the DR Region.

B.

Add a cross-Region read replica in the DR Region with the same instance type as the current primary instance. If the read replica in the DR Region needs to be used for production, promote the read replica to become a standalone DB cluster.

C.

Create a smaller DB cluster in the DR Region. Configure an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) enabled to replicate data from the current production DB cluster to the DB cluster in the DR Region.

D.

Create an Aurora global database that spans two Regions. Use AWS Database Migration Service (AWS DMS) to migrate the existing database to the new global database.

Question 31

After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?

Options:

A.

The restored DB instance does not have Enhanced Monitoring enabled

B.

The production DB instance is using a custom parameter group

C.

The restored DB instance is using the default security group

D.

The production DB instance is using a custom option group

Question 32

A company is migrating its on-premises database workloads to the AWS Cloud. A database specialist performing the move has chosen AWS DMS to migrate an Oracle database with a large table to Amazon RDS. The database specialist notices that AWS DMS is taking significant time to migrate the data.

Which actions would improve the data migration speed? (Choose three.)

Options:

A.

Create multiple AWS DMS tasks to migrate the large table.

B.

Configure the AWS DMS replication instance with Multi-AZ.

C.

Increase the capacity of the AWS DMS replication server.

D.

Establish an AWS Direct Connect connection between the on-premises data center and AWS.

E.

Enable an Amazon RDS Multi-AZ configuration.

F.

Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.

Question 33

A software company is conducting a security audit of its three-node Amazon Aurora MySQL DB cluster.

Which finding is a security concern that needs to be addressed?

Options:

A.

The AWS account root user does not have the minimum privileges required for client applications.

B.

Encryption in transit is not configured for all Aurora native backup processes.

C.

Each Aurora DB cluster node is not in a separate private VPC with restricted access.

D.

The IAM credentials used by the application are not rotated regularly.

Question 34

A database specialist is planning to migrate a 4 TB Microsoft SQL Server DB instance from on premises to Amazon RDS for SQL Server. The database is primarily used for nightly batch processing.

Which RDS storage option meets these requirements MOST cost-effectively?

Options:

A.

General Purpose SSD storage

B.

Provisioned IOPS storage

C.

Magnetic storage

D.

Throughput Optimized hard disk drives (HDD)

Question 35

A company hosts a 2 TB Oracle database in its on-premises data center. A database specialist is migrating the database from on premises to an Amazon Aurora PostgreSQL database on AWS.

The database specialist identifies a compatibility problem: Oracle stores metadata in its data dictionary in uppercase, but PostgreSQL stores the metadata in lowercase. The database specialist must resolve this problem to complete the migration.

What is the MOST operationally efficient solution that meets these requirements?

Options:

A.

Override the default uppercase format of Oracle schema by encasing object names in quotation marks during creation.

B.

Use AWS Database Migration Service (AWS DMS) mapping rules with rule-action as convert-lowercase.

C.

Use the AWS Schema Conversion Tool conversion agent to convert the metadata from uppercase to lowercase.

D.

Use an AWS Glue job that is attached to an AWS Database Migration Service (AWS DMS) replication task to convert the metadata from uppercase to lowercase.
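A DMS table-mapping transformation rule that uses the convert-lowercase rule action looks roughly like the sketch below; the schema name is a placeholder, and similar rules with rule-target set to schema or column would cover the other object names.

```json
{
  "rules": [
    {
      "rule-type": "transformation",
      "rule-id": "1",
      "rule-name": "lowercase-table-names",
      "rule-action": "convert-lowercase",
      "rule-target": "table",
      "object-locator": {
        "schema-name": "HR",
        "table-name": "%"
      }
    }
  ]
}
```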

Question 36

A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.

Which solution meets these requirements?

Options:

A.

Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.

B.

Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.

C.

Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.

D.

Change the DB clusters to the burstable instance family.

Question 37

A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora.

Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?

Options:

A.

Stop the DB cluster and analyze how the website responds

B.

Use Aurora fault injection to crash the master DB instance

C.

Remove the DB cluster endpoint to simulate a master DB instance failure

D.

Use Aurora Backtrack to crash the DB cluster

Question 38

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company’s network bandwidth is available.

How should the company perform this data load?

Options:

A.

Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

B.

Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

C.

Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

D.

Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
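The Neptune bulk load referenced in the options is started by posting a load request to the cluster's loader endpoint from inside the VPC. The Python sketch below uses placeholder values for the endpoint, S3 source, and IAM role ARN.

```python
import requests  # third-party HTTP client

# Placeholder endpoint, S3 location, and IAM role ARN.
loader_endpoint = (
    "https://my-neptune-cluster.cluster-abc123.us-east-1"
    ".neptune.amazonaws.com:8182/loader"
)

payload = {
    "source": "s3://fraud-data-bucket/bulk-load/",   # data staged in S3
    "format": "csv",                                  # property-graph CSV format
    "iamRoleArn": "arn:aws:iam::111111111111:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "FALSE",
}

# Must be issued from within the VPC; the data itself is read through the S3 VPC endpoint.
response = requests.post(loader_endpoint, json=payload)
print(response.json())  # returns a loadId that can be polled for status
```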

Question 39

A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records.

This architecture has two major challenges. First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table.

The database specialist must design a solution that prevents modification of the historical records. The solution also must maximize the speed of the queries.

Which solution will meet these requirements?

Options:

A.

Migrate the current solution to an Amazon DynamoDB table. Use DynamoDB Streams to keep track of changes. Use DynamoDB Accelerator (DAX) to improve query performance.

B.

Write employee record history to Amazon Quantum Ledger Database (Amazon QLDB) for historical records and to an Amazon OpenSearch Service domain for queries.

C.

Use Amazon Aurora PostgreSQL to store employee record history in a single table. Use Aurora Auto Scaling to provision more capacity.

D.

Build a solution that uses an Amazon Redshift cluster for historical records. Query the Redshift cluster directly as needed.

Question 40

A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times.

What could be causing these slow response times?

Options:

A.

New volumes created from snapshots load lazily in the background

B.

Long-running statements on the master

C.

Insufficient resources on the master

D.

Overload of a single replication thread by excessive writes on the master

Question 41

Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready and the user credentials are given to the developers. The developers indicate that their copy jobs fail with the following error message:

“Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied.”

The developers need to load this data soon, so a database specialist must act quickly to solve this issue.

What is the MOST secure solution?

Options:

A.

Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action.

B.

Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role.

C.

Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message.

D.

Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created.
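Once a role with S3 read access is attached to the Redshift cluster, the COPY command references it directly; a sketch with placeholder table, bucket, and role names:

```sql
-- Placeholder table, bucket, and role ARN.
COPY marketing_events
FROM 's3://third-party-marketing-data/events/'
IAM_ROLE 'arn:aws:iam::111111111111:role/RedshiftS3ReadOnly'
FORMAT AS CSV
IGNOREHEADER 1;
```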

Question 42

A database expert is responsible for building a highly available online transaction processing (OLTP) solution that makes use of Amazon RDS for MySQL production databases. Disaster recovery criteria include a cross-Region deployment, an RPO of 5 minutes, and an RTO of 30 minutes.

What should the database professional do to ensure that the database meets the criteria for high availability and disaster recovery?

Options:

A.

Use a Multi-AZ deployment in each Region.

B.

Use read replica deployments in all Availability Zones of the secondary Region.

C.

Use Multi-AZ and read replica deployments within a Region.

D.

Use Multi-AZ and deploy a read replica in a secondary Region.

Question 43

A database specialist is building a system that uses a static vendor dataset of postal codes and related territory information that is less than 1 GB in size. The dataset is loaded into the application’s cache at startup. The company needs to store this data in a way that provides the lowest cost with a low application startup time.

Which approach will meet these requirements?

Options:

A.

Use an Amazon RDS DB instance. Shut down the instance once the data has been read.

B.

Use Amazon Aurora Serverless. Allow the service to spin resources up and down, as needed.

C.

Use Amazon DynamoDB in on-demand capacity mode.

D.

Use Amazon S3 and load the data from flat files.

Question 44

A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.

The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company.

Which solution will meet these requirements with minimal effort?

Options:

A.

Create an Amazon Cloudwatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

B.

Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.

C.

Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.

D.

Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

Question 45

A worldwide digital advertising corporation collects browser information in order to provide targeted visitors with contextually relevant pictures, websites, and links. A single page load may create many events, each of which must be stored separately. A single event has a maximum size of 200 KB and an average size of 10 KB. Each page load requires a query of the user's browsing history in order to deliver suggestions for targeted advertising. The advertising corporation anticipates daily page views of more than 1 billion from users in the United States, Europe, Hong Kong, and India. The information structure differs according to the event. Additionally, browsing information must be written and read with very low latency to guarantee that consumers have a positive viewing experience.

Which database solution satisfies these criteria?

Options:

A.

Amazon DocumentDB

B.

Amazon RDS Multi-AZ deployment

C.

Amazon DynamoDB global table

D.

Amazon Aurora Global Database

Question 46

A financial company is hosting its web application on AWS. The application's database is hosted on Amazon RDS for MySQL with automated backups enabled.

The application has caused a logical corruption of the database, which is causing the application to become unresponsive. The specific time of the corruption has been identified, and it was within the backup retention period.

How should a database specialist recover the database to the most recent point before corruption?

Options:

A.

Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.

B.

Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.

C.

Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.

D.

Restore using the appropriate automated backup. No changes to the application connection string are required.

Question 47

A database professional is tasked with migrating 25 GB of data files from an on-premises storage system to an Amazon Neptune database.

Which method of data loading is the FASTEST?

Options:

A.

Upload the data to Amazon S3 and use the Loader command to load the data from Amazon S3 into the Neptune database.

B.

Write a utility to read the data from the on-premises storage and run INSERT statements in a loop to load the data into the Neptune database.

C.

Use the AWS CLI to load the data directly from the on-premises storage into the Neptune database.

D.

Use AWS DataSync to load the data directly from the on-premises storage into the Neptune database.

Question 48

A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas.

In the event of a primary failure, what will occur?

Options:

A.

Aurora will promote an Aurora Replica that is of the same size as the primary instance

B.

Aurora will promote an arbitrary Aurora Replica

C.

Aurora will promote the largest-sized Aurora Replica

D.

Aurora will not promote an Aurora Replica

Question 49

A bank intends to utilize Amazon RDS to host a MySQL database instance. The database should be able to handle high-volume read requests with extremely few repeated queries.

Which solution satisfies these criteria?

Options:

A.

Create an Amazon ElastiCache cluster. Use a write-through strategy to populate the cache.

B.

Create an Amazon ElastiCache cluster. Use a lazy loading strategy to populate the cache.

C.

Change the DB instance to Multi-AZ with a standby instance in another AWS Region.

D.

Create a read replica of the DB instance. Use the read replica to distribute the read traffic.

Question 50

A retail company is about to migrate its online and mobile store to AWS. The company’s CEO has strategic plans to grow the brand globally. A Database Specialist has been challenged to provide predictable read and write database performance with minimal operational overhead.

What should the Database Specialist do to meet these requirements?

Options:

A.

Use Amazon DynamoDB global tables to synchronize transactions

B.

Use Amazon EMR to copy the orders table data across Regions

C.

Use Amazon Aurora Global Database to synchronize all transactions

D.

Use Amazon DynamoDB Streams to replicate all DynamoDB transactions and sync them

Question 51

A company has multiple applications serving data from a secure on-premises database. The company is migrating all applications and databases to the AWS Cloud. The IT Risk and Compliance department requires that auditing be enabled on all secure databases to capture all logins, logouts, failed logins, permission changes, and database schema changes. A Database Specialist has recommended Amazon Aurora MySQL as the migration target, and leveraging the Advanced Auditing feature in Aurora.

Which events need to be specified in the Advanced Auditing configuration to satisfy the minimum auditing requirements? (Choose three.)

Options:

A.

CONNECT

B.

QUERY_DCL

C.

QUERY_DDL

D.

QUERY_DML

E.

TABLE

F.

QUERY
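In Aurora MySQL, Advanced Auditing events are supplied through the server_audit_events cluster parameter alongside server_audit_logging. The boto3 sketch below uses a hypothetical custom cluster parameter group; the event list shown is one way to cover connections, permission changes, and schema changes, and should be checked against the auditing requirements.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical custom DB cluster parameter group.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-audit-params",
    Parameters=[
        {
            "ParameterName": "server_audit_logging",
            "ParameterValue": "1",
            "ApplyMethod": "immediate",
        },
        {
            # Logins/logouts/failed logins, permission changes, schema changes.
            "ParameterName": "server_audit_events",
            "ParameterValue": "CONNECT,QUERY_DCL,QUERY_DDL",
            "ApplyMethod": "immediate",
        },
    ],
)
```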

Question 52

A company has a web application that uses Amazon API Gateway to route HTTPS requests to AWS Lambda functions. The application uses an Amazon Aurora MySQL database for its data storage. The application has experienced unpredictable surges in traffic that overwhelm the database with too many connection requests. The company needs to implement a scalable solution that is more resilient to database failures as quickly as possible.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Migrate the Aurora MySQL database to Amazon Aurora Serverless by restoring a snapshot. Change the endpoint in the Lambda functions to use the new database.

B.

Migrate the Aurora MySQL database to Amazon DynamoDB tables by using AWS Database Migration Service (AWS DMS). Change the endpoint in the Lambda functions to use the new database.

C.

Create an Amazon EventBridge rule that invokes a Lambda function. Code the function to iterate over all existing connections and to call MySQL queries to end any connections in the sleep state.

D.

Increase the instance class for the Aurora database with more memory. Set a larger value for the max_connections parameter.

Question 53

A company has an on-premises production Microsoft SQL Server with 250 GB of data in one database. A database specialist needs to migrate this on-premises SQL Server to Amazon RDS for SQL Server. The nightly native SQL Server backup file is approximately 120 GB in size. The application can be down for an extended period of time to complete the migration. Connectivity between the on-premises environment and AWS can be initiated from on-premises only.

How can the database be migrated from on-premises to Amazon RDS with the LEAST amount of effort?

Options:

A.

Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Download the backup files on an Amazon EC2 instance and restore them from the EC2 instance into the new production RDS instance.

B.

Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Restore the backup files from the S3 bucket into the new production RDS instance.

C.

Provision and configure AWS DMS. Set up replication between the on-premises SQL Server environment to replicate the database to the new production RDS instance.

D.

Back up the SQL Server database using AWS Backup. Once the backup is complete, restore the completed backup to an Amazon EC2 instance and move it to the new production RDS instance.
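For reference, RDS for SQL Server restores native backups from S3 through the rds_restore_database stored procedure, which requires the SQLSERVER_BACKUP_RESTORE option and an IAM role granting the instance access to the bucket. The database name and S3 ARN below are placeholders.

```sql
-- Placeholder database name and S3 ARN.
exec msdb.dbo.rds_restore_database
    @restore_db_name = 'ProductionDB',
    @s3_arn_to_restore_from = 'arn:aws:s3:::onprem-sql-backups/ProductionDB.bak';

-- Check progress of the restore task.
exec msdb.dbo.rds_task_status @db_name = 'ProductionDB';
```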

Question 54

For the first time, a database professional is establishing a test graph database on Amazon Neptune. The database professional must load millions of rows of test observations from a .csv file in Amazon S3 into the Neptune DB instance through a series of API calls.

Which sequence of actions enables the database professional to upload the data most quickly? (Select three.)

Options:

A.

Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.

B.

Ensure the vertices and edges are specified in different .csv files with proper header column formatting.

C.

Use AWS DMS to move data from Amazon S3 to the Neptune Loader.

D.

Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.

E.

Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.

F.

Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.

Question 55

A company is loading sensitive data into an Amazon Aurora MySQL database. To meet compliance requirements, the company needs to enable audit logging on the Aurora MySQL DB cluster to audit database activity. This logging will include events such as connections, disconnections, queries, and tables queried. The company also needs to publish the DB logs to Amazon CloudWatch to perform real-time data analysis.

Which solution meets these requirements?

Options:

A.

Modify the default option group parameters to enable Advanced Auditing. Restart the database for the changes to take effect.

B.

Create a custom DB cluster parameter group. Modify the parameters for Advanced Auditing. Modify the cluster to associate the new custom DB parameter group with the Aurora MySQL DB cluster.

C.

Take a snapshot of the database. Create a new DB instance, and enable custom auditing and logging to CloudWatch. Deactivate the DB instance that has no logging.

D.

Enable AWS CloudTrail for the DB instance. Create a filter that provides only connections, disconnections, queries, and tables queried.

Question 56

A company recently migrated its line-of-business (LOB) application to AWS. The application uses an Amazon RDS for SQL Server DB instance as its database engine.

The company must set up cross-Region disaster recovery for the application. The company needs a solution with the lowest possible RPO and RTO.

Which solution will meet these requirements?

Options:

A.

Create a cross-Region read replica of the DB instance. Promote the read replica at the time of failover.

B.

Set up SQL replication from the DB instance to an Amazon EC2 instance in the disaster recovery Region. Promote the EC2 instance as the primary server.

C.

Use AWS Database Migration Service (AWS DMS) for ongoing replication of the DB instance in the disaster recovery Region.

D.

Take manual snapshots of the DB instance in the primary Region. Copy the snapshots to the disaster recovery Region.

Question 57

A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.

How can the Database Specialists accomplish this?

Options:

A.

Enable the option to push all database logs to Amazon CloudWatch for advanced analysis

B.

Create appropriate Amazon CloudWatch dashboards to contain specific periods of time

C.

Enable Amazon RDS Performance Insights and review the appropriate dashboard

D.

Enable Enhanced Monitoring with the appropriate settings

Question 58

On AWS, a business is developing a web application. The application requires a database that supports concurrent read and write activity in several AWS Regions. Additionally, the database must propagate data changes across Regions as they occur. The application must be highly available with a latency of less than a few hundred milliseconds.

Which solution satisfies these criteria?

Options:

A.

Amazon DynamoDB global tables

B.

Amazon DynamoDB streams with AWS Lambda to replicate the data

C.

An Amazon ElastiCache for Redis cluster with cluster mode enabled and multiple shards

D.

An Amazon Aurora global database

Question 59

A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.

Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?

Options:

A.

Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.

B.

Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.

C.

Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.

D.

Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.
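
For illustration, a hedged boto3 sketch of the snapshot copy grant approach: a grant is created for a KMS key in the destination Region, and cross-Region snapshot copy is then enabled on the cluster. The cluster name, Regions, and key ARN are placeholder assumptions:

    import boto3

    # 1) In the destination Region, create a snapshot copy grant for a KMS key there.
    redshift_dest = boto3.client("redshift", region_name="us-west-2")
    redshift_dest.create_snapshot_copy_grant(
        SnapshotCopyGrantName="example-copy-grant",
        KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/example-key-id",
    )

    # 2) In the source Region, enable cross-Region snapshot copy using that grant.
    redshift_src = boto3.client("redshift", region_name="us-east-1")
    redshift_src.enable_snapshot_copy(
        ClusterIdentifier="example-cluster",
        DestinationRegion="us-west-2",
        RetentionPeriod=7,
        SnapshotCopyGrantName="example-copy-grant",
    )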

Question 60

A company wants to build a new invoicing service for its cloud-native application on AWS. The company has a small development team and wants to focus on service feature development and minimize operations and maintenance as much as possible. The company expects the service to handle billions of requests and millions of new records every day. The service feature requirements, including data access patterns, are well-defined. The service has an availability target of 99.99% with a millisecond latency requirement. The database for the service will be the system of record for invoicing data.

Which database solution meets these requirements at the LOWEST cost?

Options:

A.

Amazon Neptune

B.

Amazon Aurora PostgreSQL Serverless

C.

Amazon RDS for PostgreSQL

D.

Amazon DynamoDB

Question 61

A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.

Which approach has the least risk and the highest likelihood of a successful data transfer?

Options:

A.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.

B.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.

C.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of dedicated, encrypted 10 TB drives with the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data into Amazon Redshift.

D.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp command with multipart upload to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data into Amazon Redshift using AWS Glue.

Question 62

A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group.

What should the Database Specialist do to automatically collect the database logs for the Administrator?

Options:

A.

Enable DocumentDB to export the logs to Amazon CloudWatch Logs

B.

Enable DocumentDB to export the logs to AWS CloudTrail

C.

Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs

D.

Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3
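
For illustration, a hedged boto3 sketch of exporting DocumentDB audit logs to Amazon CloudWatch Logs after the audit_logs cluster parameter has been enabled. The cluster identifier is a placeholder:

    import boto3

    docdb = boto3.client("docdb")

    # Export the audit log type to CloudWatch Logs for the cluster.
    docdb.modify_db_cluster(
        DBClusterIdentifier="example-docdb-cluster",
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
        ApplyImmediately=True,
    )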

Question 63

A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window.

What is the MOST cost-effective action that should be taken to avoid downtime?

Options:

A.

Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB

B.

Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down

C.

Enable a read replica and direct read traffic to it when Amazon RDS is down

D.

Enable an Amazon RDS for MySQL Multi-AZ configuration

Question 64

A database specialist needs to move an Amazon RDS DB instance from one AWS account to another AWS account.

Which solution will meet this requirement with the LEAST operational effort?

Options:

A.

Use AWS Database Migration Service (AWS DMS) to migrate the DB instance from the source AWS account to the destination AWS account.

B.

Create a DB snapshot of the DB instance. Share the snapshot with the destination AWS account. Create a new DB instance by restoring the snapshot in the destination AWS account.

C.

Create a Multi-AZ deployment for the DB instance. Create a read replica for the DB instance in the source AWS account. Use the read replica to replicate the data into the DB instance in the destination AWS account.

D.

Use AWS DataSync to back up the DB instance in the source AWS account. Use AWS Resource Access Manager (AWS RAM) to restore the backup in the destination AWS account.
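
For illustration, a hedged boto3 sketch of the snapshot-sharing approach (option B): create a manual snapshot, share it with the destination account, then restore it there. Identifiers and the account ID are placeholders, and the snapshot must be available before it can be shared:

    import boto3

    rds = boto3.client("rds")

    # 1) Create a manual snapshot of the source DB instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="example-source-db",
        DBSnapshotIdentifier="example-share-snapshot",
    )

    # 2) Once the snapshot is available, share it with the destination AWS account.
    rds.modify_db_snapshot_attribute(
        DBSnapshotIdentifier="example-share-snapshot",
        AttributeName="restore",
        ValuesToAdd=["123456789012"],  # destination account ID (placeholder)
    )

    # 3) In the destination account, restore a new DB instance from the shared snapshot:
    # rds.restore_db_instance_from_db_snapshot(
    #     DBInstanceIdentifier="example-new-db",
    #     DBSnapshotIdentifier="arn:aws:rds:us-east-1:111122223333:snapshot:example-share-snapshot",
    # )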

Question 65

A pharmaceutical company uses Amazon Quantum Ledger Database (Amazon QLDB) to store its clinical trial data records. The company has an application that runs as AWS Lambda functions. The application is hosted in the private subnet in a VPC.

The application does not have internet access and needs to read some of the clinical data records. The company is concerned that traffic between the QLDB ledger and the VPC could leave the AWS network. The company needs to secure access to the QLDB ledger and allow the VPC traffic to have read-only access.

Which security strategy should a database specialist implement to meet these requirements?

Options:

A.

Move the QLDB ledger into a private database subnet inside the VPC. Run the Lambda functions inside the same VPC in an application private subnet. Ensure that the VPC route table allows read-only flow from the application subnet to the database subnet.

B.

Create an AWS PrivateLink VPC endpoint for the QLDB ledger. Attach a VPC policy to the VPC endpoint to allow read-only traffic for the Lambda functions that run inside the VPC.

C.

Add a security group to the QLDB ledger to allow access from the private subnets inside the VPC where the Lambda functions that access the QLDB ledger are running.

D.

Create a VPN connection to ensure pairing of the private subnet where the Lambda functions are running with the private subnet where the QLDB ledger is deployed.

Question 66

A financial company is running an Amazon Redshift cluster for one of its data warehouse solutions. The company needs to generate connection logs, user logs, and user activity logs. The company also must make these logs available for future analysis.

Which combination of steps should a database specialist take to meet these requirements? (Choose two.)

Options:

A.

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified log group in Amazon CloudWatch Logs.

B.

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified Amazon S3 bucket

C.

Modify the cluster by enabling continuous delivery of AWS CloudTrail logs to Amazon S3.

D.

Create a new parameter group with the enable_user_activity_logging parameter set to true. Configure the cluster to use the new parameter group.

E.

Modify the system table to enable logging for each user.
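
For illustration, a hedged boto3 sketch combining S3 audit logging with the enable_user_activity_logging parameter (options B and D). Bucket, cluster, and parameter group names are placeholders, and the bucket is assumed to already carry the bucket policy that Redshift logging requires:

    import boto3

    redshift = boto3.client("redshift")

    # Deliver connection, user, and user activity logs to an S3 bucket.
    redshift.enable_logging(
        ClusterIdentifier="example-cluster",
        BucketName="example-redshift-audit-logs",
        S3KeyPrefix="audit/",
    )

    # User activity logging also requires this parameter to be true.
    redshift.create_cluster_parameter_group(
        ParameterGroupName="example-audit-params",
        ParameterGroupFamily="redshift-1.0",
        Description="Audit logging parameters",
    )
    redshift.modify_cluster_parameter_group(
        ParameterGroupName="example-audit-params",
        Parameters=[{"ParameterName": "enable_user_activity_logging", "ParameterValue": "true"}],
    )
    redshift.modify_cluster(
        ClusterIdentifier="example-cluster",
        ClusterParameterGroupName="example-audit-params",
    )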

Question 67

A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to have the reporting workload for the human resources department be directed to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster.

Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?

Options:

A.

Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.

B.

Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.

C.

Create additional readers to cater to the different scenarios.

D.

Use custom endpoints to satisfy the different workloads.
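
For illustration, a hedged boto3 sketch of creating a custom endpoint (option D) that groups only the two small reader instances for the reporting workload. Cluster, endpoint, and instance identifiers are placeholder assumptions:

    import boto3

    rds = boto3.client("rds")

    # Custom endpoint that routes connections only to the two small reporting readers.
    rds.create_db_cluster_endpoint(
        DBClusterIdentifier="example-aurora-cluster",
        DBClusterEndpointIdentifier="hr-reporting",
        EndpointType="READER",
        StaticMembers=["example-reader-small-1", "example-reader-small-2"],
    )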

Question 68

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key.

While importing the data, a database specialist receives ProvisionedThroughputExceededException errors. After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.

What should the database specialist do to address the issue?

Options:

A.

Change the data model to avoid hot partitions in the global secondary index.

B.

Enable auto scaling for the table to automatically increase write capacity during bulk imports.

C.

Modify the table to use on-demand capacity instead of provisioned capacity.

D.

Increase the number of retries on the bulk loading application.

Question 69

A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.

Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.

Which approach should the Database Specialist take to reduce downtime?

Options:

A.

Deploy multiple read replicas and have the team members make changes to separate replica instances

B.

Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot

C.

Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature

D.

Enable the Amazon RDS for MySQL Backtrack feature

Question 70

A financial institution recently created a portfolio management service. The application's backend is powered by Amazon Aurora MySQL.

The firm requires a recovery time objective (RTO) of five minutes and a recovery point objective (RPO) of five minutes. A database professional must design a disaster recovery solution that is efficient and has low replication latency.

How should the database professional tackle these requirements?

Options:

A.

Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.

B.

Configure an Amazon Aurora global database and add a different AWS Region.

C.

Configure a binlog and create a replica in a different AWS Region.

D.

Configure a cross-Region read replica.

Question 71

A corporation intends to migrate a 500-GB Oracle database to Amazon Aurora PostgreSQL by using the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS). The database does not have any stored procedures but does contain several huge or partitioned tables. Because the application is critical to the company, the migration should occur with minimal downtime.

Which measures should a database professional perform in combination to expedite the transfer process? (Select three.)

Options:

A.

Use the AWS SCT data extraction agent to migrate the schema from Oracle to Aurora PostgreSQL.

B.

For the large tables, change the setting for the maximum number of tables to load in parallel and perform a full load using AWS DMS.

C.

For the large tables, create a table settings rule with a parallel load option in AWS DMS, then perform a full load using DMS.

D.

Use AWS DMS to set up change data capture (CDC) for continuous replication until the cutover date.

E.

Use AWS SCT to convert the schema from Oracle to Aurora PostgreSQL.

F.

Use AWS DMS to convert the schema from Oracle to Aurora PostgreSQL and for continuous replication.

Question 72

A database specialist must create nightly backups of an Amazon DynamoDB table in a mission-critical workload as part of a disaster recovery strategy.

Which backup methodology should the database specialist use to MINIMIZE management overhead?

Options:

A.

Install the AWS CLI on an Amazon EC2 instance. Write a CLI command that creates a backup of the DynamoDB table. Create a scheduled job or task that executes the command on a nightly basis.

B.

Create an AWS Lambda function that creates a backup of the DynamoDB table. Create an Amazon CloudWatch Events rule that executes the Lambda function on a nightly basis.

C.

Create a backup plan using AWS Backup, specify a backup frequency of every 24 hours, and give the plan a nightly backup window.

D.

Configure DynamoDB backup and restore for an on-demand backup frequency of every 24 hours.
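
For illustration, a hedged boto3 sketch of the AWS Backup approach (option C): a backup plan with a nightly schedule and a selection that targets the DynamoDB table. Names, ARNs, and the cron window are placeholder assumptions:

    import boto3

    backup = boto3.client("backup")

    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "dynamodb-nightly",
            "Rules": [
                {
                    "RuleName": "nightly",
                    "TargetBackupVaultName": "Default",
                    "ScheduleExpression": "cron(0 5 * * ? *)",  # placeholder nightly window (UTC)
                    "Lifecycle": {"DeleteAfterDays": 35},
                }
            ],
        }
    )

    # Assign the mission-critical table to the plan.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "critical-table",
            "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
            "Resources": ["arn:aws:dynamodb:us-east-1:111122223333:table/example-table"],
        },
    )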

Question 73

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration.

Which solution will MOST improve the performance of the data migration?

Options:

A.

Increase the number of tables that are loaded in parallel.

B.

Drop all indexes on the source tables.

C.

Change the processing mode from the batch optimized apply option to transactional mode.

D.

Enable Multi-AZ on the target database while the full load task is in progress.

Question 74

A retail company uses Amazon Redshift Spectrum to run complex analytical queries on objects that are stored in an Amazon S3 bucket. The objects are joined with multiple dimension tables that are stored in an Amazon Redshift database. The company uses the database to create monthly and quarterly aggregated reports. Users who attempt to run queries are reporting the following error message: error: Spectrum Scan Error: Access throttled

Which solution will resolve this error?

Options:

A.

Check file sizes of fact tables in Amazon S3, and look for large files. Break up large files into smaller files of equal size between 100 MB and 1 GB.

B.

Reduce the number of queries that users can run in parallel.

C.

Check file sizes of fact tables in Amazon S3, and look for small files. Merge the small files into larger files of at least 64 MB in size.

D.

Review and optimize queries that submit a large aggregation step to Redshift Spectrum.

Question 75

A company needs to deploy an Amazon Aurora PostgreSQL DB instance into multiple accounts. The company will initiate each DB instance from an existing Aurora PostgreSQL DB instance that runs in a shared account. The company wants the process to be repeatable in case the company adds additional accounts in the future. The company also wants to be able to verify if manual changes have been made to the DB instance configurations after the company deploys the DB instances.

A database specialist has determined that the company needs to create an AWS CloudFormation template with the necessary configuration to create a DB instance in an account by using a snapshot of the existing DB instance to initialize the DB instance. The company will also use the CloudFormation template's parameters to provide key values for the DB instance creation (account ID, etc.).

Which final step will meet these requirements in the MOST operationally efficient way?

Options:

A.

Create a bash script to compare the configuration to the current DB instance configuration and to report any changes.

B.

Use the CloudFormation drift detection feature to check if the DB instance configurations have changed.

C.

Set up CloudFormation to use drift detection to send notifications if the DB instance configurations have been changed.

D.

Create an AWS Lambda function to compare the configuration to the current DB instance configuration and to report any changes.
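
For illustration, a hedged boto3 sketch of CloudFormation drift detection, the API that options B and C rely on. The stack name is a placeholder:

    import boto3
    import time

    cfn = boto3.client("cloudformation")

    # Start drift detection on the deployed stack.
    detection_id = cfn.detect_stack_drift(StackName="example-db-stack")["StackDriftDetectionId"]

    # Poll until detection completes.
    while True:
        status = cfn.describe_stack_drift_detection_status(StackDriftDetectionId=detection_id)
        if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
            break
        time.sleep(5)

    # Report any resources whose live configuration no longer matches the template.
    drifts = cfn.describe_stack_resource_drifts(
        StackName="example-db-stack",
        StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
    )
    for drift in drifts["StackResourceDrifts"]:
        print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])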

Question 76

A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on- premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely.

Which approach should the Database Specialist take to securely manage the database credentials?

Options:

A.

Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.

B.

Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.

C.

Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.

D.

Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.
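
For illustration, a hedged boto3 sketch of the Secrets Manager approach (option C): the application reads the credentials at startup, and rotation is configured for every 60 days. The secret name and rotation Lambda ARN are placeholder assumptions:

    import boto3
    import json

    secrets = boto3.client("secretsmanager")

    # Application startup: fetch the current database credentials.
    secret = json.loads(
        secrets.get_secret_value(SecretId="prod/app/postgres")["SecretString"]
    )
    db_user, db_password = secret["username"], secret["password"]

    # One-time setup: rotate the secret automatically every 60 days.
    secrets.rotate_secret(
        SecretId="prod/app/postgres",
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:example-rotation-fn",
        RotationRules={"AutomaticallyAfterDays": 60},
    )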

Question 77

A company has a 20 TB production Amazon Aurora DB cluster. The company runs a large batch job overnight to load data into the Aurora DB cluster. To ensure the company’s development team has the most up-to-date data for testing, a copy of the DB cluster must be available in the shortest possible time after the batch job completes.

How should this be accomplished?

Options:

A.

Use the AWS CLI to schedule a manual snapshot of the DB cluster. Restore the snapshot to a new DB cluster using the AWS CLI.

B.

Create a dump file from the DB cluster. Load the dump file into a new DB cluster.

C.

Schedule a job to create a clone of the DB cluster at the end of the overnight batch process.

D.

Set up a new daily AWS DMS task that will use cloning and change data capture (CDC) on the DB cluster to copy the data to a new DB cluster. Set up a time for the AWS DMS stream to stop when the new cluster is current.
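
For illustration, a hedged boto3 sketch of Aurora fast cloning (option C), which uses copy-on-write storage so even a 20 TB cluster can be copied quickly. Cluster identifiers, engine, and instance class are placeholder assumptions:

    import boto3

    rds = boto3.client("rds")

    # Clone the production cluster using copy-on-write at the latest restorable time.
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="example-dev-clone",
        SourceDBClusterIdentifier="example-prod-cluster",
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
    )

    # A clone has no instances by default; add one so the development team can connect.
    rds.create_db_instance(
        DBInstanceIdentifier="example-dev-clone-instance-1",
        DBClusterIdentifier="example-dev-clone",
        Engine="aurora-mysql",
        DBInstanceClass="db.r6g.large",
    )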

Question 78

Application developers have reported that an application is running slower as more users are added. The application database is running on an Amazon Aurora DB cluster with an Aurora Replica. The application is written to take advantage of read scaling through reader endpoints. A database specialist looks at the performance metrics of the database and determines that, as new users were added to the database, the primary instance CPU utilization steadily increased while the Aurora Replica CPU utilization remained steady.

How can the database specialist improve database performance while ensuring minimal downtime?

Options:

A.

Modify the Aurora DB cluster to add more replicas until the overall load stabilizes. Then, reduce the number of replicas once the application meets service level objectives.

B.

Modify the primary instance to a larger instance size that offers more CPU capacity.

C.

Modify a replica to a larger instance size that has more CPU capacity. Then, promote the modified replica.

D.

Restore the Aurora DB cluster to one that has an instance size with more CPU capacity. Then, swap the names of the old and new DB clusters.

Question 79

A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.

Which solution meets these requirements?

Options:

A.

Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.

B.

Use reader endpoints for both the read-only workload applications.

C.

Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.

D.

Use custom endpoints for the two read-only applications.

Question 80

An online shopping company has a large inflow of shopping requests daily. As a result, there is a consistent load on the company’s Amazon RDS database. A database specialist needs to ensure the database is up and running at all times. The database specialist wants an automatic notification system for issues that may cause database downtime or for configuration changes made to the database.

What should the database specialist do to achieve this? (Choose two.)

Options:

A.

Create an Amazon CloudWatch Events event to send a notification using Amazon SNS on every API call logged in AWS CloudTrail.

B.

Subscribe to an RDS event subscription and configure it to use an Amazon SNS topic to send notifications.

C.

Use Amazon SES to send notifications based on configured Amazon CloudWatch Events events.

D.

Configure Amazon CloudWatch alarms on various metrics, such as FreeStorageSpace for the RDS instance.

E.

Enable email notifications for AWS Trusted Advisor.
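
For illustration, a hedged boto3 sketch of options B and D together: an RDS event subscription that publishes to an SNS topic, plus a CloudWatch alarm on FreeStorageSpace. Topic ARN, instance identifier, and threshold are placeholder assumptions:

    import boto3

    rds = boto3.client("rds")
    cloudwatch = boto3.client("cloudwatch")

    sns_topic_arn = "arn:aws:sns:us-east-1:111122223333:rds-notifications"  # placeholder

    # RDS event subscription for failure, failover, configuration change, and maintenance events.
    rds.create_event_subscription(
        SubscriptionName="example-db-events",
        SnsTopicArn=sns_topic_arn,
        SourceType="db-instance",
        SourceIds=["example-shopping-db"],
        EventCategories=["failure", "failover", "configuration change", "maintenance"],
        Enabled=True,
    )

    # Alarm when free storage drops below roughly 5 GiB.
    cloudwatch.put_metric_alarm(
        AlarmName="example-shopping-db-low-storage",
        Namespace="AWS/RDS",
        MetricName="FreeStorageSpace",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "example-shopping-db"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=5 * 1024 ** 3,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=[sns_topic_arn],
    )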

Question 81

A company has a hybrid environment in which a VPC connects to an on-premises network through an AWS Site-to-Site VPN connection. The VPC contains an application that is hosted on Amazon EC2 instances. The EC2 instances run in private subnets behind an Application Load Balancer (ALB) that is associated with multiple public subnets. The EC2 instances need to securely access an Amazon DynamoDB table.

Which solution will meet these requirements?

Options:

A.

Use the internet gateway of the VPC to access the DynamoDB table. Use the ALB to route the traffic to the EC2 instances.

B.

Add a NAT gateway in one of the public subnets of the VPC. Configure the security groups of the EC2 instances to access the DynamoDB table through the NAT gateway.

C.

Use the Site-to-Site VPN connection to route all DynamoDB network traffic through the on-premises network infrastructure to access the EC2 instances.

D.

Create a VPC endpoint for DynamoDB. Assign the endpoint to the route table of the private subnets that contain the EC2 instances.
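
For illustration, a hedged boto3 sketch of option D: a gateway VPC endpoint for DynamoDB associated with the private-subnet route tables. The VPC ID, route table IDs, and Region are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Gateway endpoint so the EC2 instances reach DynamoDB without leaving the AWS network.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0aaa1111bbbb2222c", "rtb-0ddd3333eeee4444f"],
    )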

Question 82

A database specialist has been entrusted by an ecommerce firm with designing a reporting dashboard that visualizes crucial business KPIs derived from the company's primary production database running on Amazon Aurora. The dashboard should be able to read data within 100 milliseconds after an update.

The Database Specialist must conduct an audit of the Aurora DB cluster's present setup and provide a cost-effective alternative. The solution must support the unexpected read demand generated by the reporting dashboard without impairing the DB cluster's write availability and performance.

Which solution satisfies these criteria?

Options:

A.

Turn on the serverless option in the DB cluster so it can automatically scale based on demand.

B.

Provision a clone of the existing DB cluster for the new Application team.

C.

Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).

D.

Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

Question 83

A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in CPU utilization was not determined using the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to determine what caused the CPU spike.

Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)

Options:

A.

Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.

B.

Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.

C.

Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.

D.

Use Amazon QuickSight to view the SQL statement being run.

E.

Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.

Question 84

A company's application team needs to select an AWS managed database service to store application and user data. The application team is familiar with MySQL but is open to new solutions. The application and user data are stored in 10 tables and are denormalized. The application will access this data through an API layer using a unique ID in each table. The company expects the traffic to be light at first, but the traffic will increase to thousands of transactions each second within the first year. The database service must support active reads and writes in multiple AWS Regions at the same time. Query response times need to be less than 100 ms.

Which AWS database solution will meet these requirements?

Options:

A.

Deploy an Amazon RDS for MySQL environment in each Region and leverage AWS Database Migration Service (AWS DMS) to set up a multi-Region bidirectional replication

B.

Deploy an Amazon Aurora MySQL global database with write forwarding turned on

C.

Deploy an Amazon DynamoDB database with global tables

D.

Deploy an Amazon DocumentDB global cluster across multiple Regions.

Question 85

A company has a reporting application that runs on an Amazon EC2 instance in an isolated developer account on AWS. The application needs to retrieve data during non-peak company hours from an Amazon Aurora PostgreSQL database that runs in the company's production account. The company's security team requires that access to production resources complies with AWS security best practices.

A database administrator needs to provide the reporting application with access to the production database. The company has already configured VPC peering between the production account and the developer account. The company has also updated the route tables in both accounts with the necessary entries to correctly set up VPC peering.

What must the database administrator do to finish providing connectivity to the reporting application?

Options:

A.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

B.

Add an outbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

C.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on all TCP ports. Add an inbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

D.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on all TCP ports.
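
For illustration, a hedged boto3 sketch in the spirit of option A: an inbound rule on the database security group for the developer VPC CIDR on port 5432, and an outbound rule on the EC2 security group toward the production VPC CIDR. Group IDs and CIDR blocks are placeholder assumptions:

    import boto3

    # Production account: allow inbound PostgreSQL traffic from the developer VPC CIDR.
    ec2_prod = boto3.client("ec2")
    ec2_prod.authorize_security_group_ingress(
        GroupId="sg-0prodexample000000",  # placeholder database security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "IpRanges": [{"CidrIp": "10.10.0.0/16", "Description": "developer VPC"}],
        }],
    )

    # Developer account: allow outbound traffic to the production VPC CIDR on port 5432.
    ec2_dev = boto3.client("ec2")
    ec2_dev.authorize_security_group_egress(
        GroupId="sg-0devexample0000000",  # placeholder EC2 security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "IpRanges": [{"CidrIp": "10.20.0.0/16", "Description": "production VPC"}],
        }],
    )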

Question 86

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi- AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs.

What should the company do to address this space constraint issue?

Options:

A.

Log in to the host and run the rm $PGDATA/pg_logs/* command

B.

Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted

C.

Create a ticket with AWS Support to have the logs deleted

D.

Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs
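
For illustration, a hedged boto3 sketch of option B: setting rds.log_retention_period to 1440 minutes (24 hours) so older database logs are removed automatically. The parameter group name is a placeholder, and the instance is assumed to already use a custom DB parameter group:

    import boto3

    rds = boto3.client("rds")

    # Shorten PostgreSQL log retention to 1440 minutes (24 hours).
    rds.modify_db_parameter_group(
        DBParameterGroupName="example-postgres-params",
        Parameters=[{
            "ParameterName": "rds.log_retention_period",
            "ParameterValue": "1440",
            "ApplyMethod": "immediate",  # dynamic parameter, no reboot required
        }],
    )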

Question 87

A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company's data center. The company's Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine.

Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid, and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses.

What should the Database Specialist do to correct the Data Analysts’ inability to connect?

Options:

A.

Restart the DB cluster to apply the SSL change.

B.

Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.

C.

Add explicit mappings between the Data Analysts’ IP addresses and the instance in the security group assigned to the DB cluster.

D.

Modify the Data Analysts’ local client firewall to allow network traffic to AWS.

Question 88

A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.

How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?

Options:

A.

Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.

B.

Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.

C.

Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.

D.

Create the maintenance job using the Amazon CloudWatch job scheduling plugin.
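
For illustration, a hedged boto3 sketch of option C: a CloudWatch Events (EventBridge) schedule rule that invokes a maintenance Lambda function nightly. The rule name, cron expression, and function ARN are placeholder assumptions, and the Lambda function must also grant invoke permission to the rule:

    import boto3

    events = boto3.client("events")
    lambda_arn = "arn:aws:lambda:us-east-1:111122223333:function:example-purge-job"  # placeholder

    # Scheduled rule: run the maintenance job every night at 03:00 UTC.
    events.put_rule(
        Name="nightly-db-maintenance",
        ScheduleExpression="cron(0 3 * * ? *)",
        State="ENABLED",
    )

    # Point the rule at the Lambda function that performs the purge/export work.
    events.put_targets(
        Rule="nightly-db-maintenance",
        Targets=[{"Id": "maintenance-lambda", "Arn": lambda_arn}],
    )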

Question 89

A company is using Amazon Redshift as its data warehouse solution. The Redshift cluster handles the following types of workloads:

* Real-time inserts through Amazon Kinesis Data Firehose

* Bulk inserts through COPY commands from Amazon S3

* Analytics through SQL queries

Recently, the cluster has started to experience performance issues.

Which combination of actions should a database specialist take to improve the cluster's performance? (Choose three.)

Options:

A.

Modify the Kinesis Data Firehose delivery stream to stream the data to Amazon S3 with a high buffer size and to load the data into Amazon Redshift by using the COPY command.

B.

Stream real-time data into Redshift temporary tables before loading the data into permanent tables.

C.

For bulk inserts, split input files on Amazon S3 into multiple files to match the number of slices on Amazon Redshift. Then use the COPY command to load data into Amazon Redshift.

D.

For bulk inserts, use the parallel parameter in the COPY command to enable multi-threading.

E.

Optimize analytics SQL queries to use sort keys.

F.

Avoid using temporary tables in analytics SQL queries.

Question 90

A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost- effective and able to handle unpredictable application traffic.

What should a Database Specialist recommend for this user?

Options:

A.

Create an Amazon DynamoDB table with provisioned capacity mode

B.

Create an Amazon DocumentDB cluster

C.

Create an Amazon DynamoDB table with on-demand capacity mode

D.

Create an Amazon Aurora Serverless DB cluster

Question 91

A company has a production environment running on Amazon RDS for SQL Server with an in-house web application as the front end. During the last application maintenance window, new functionality was added to the web application to enhance the reporting capabilities for management. Since the update, the application is slow to respond to some reporting queries.

How should the company identify the source of the problem?

Options:

A.

Install and configure Amazon CloudWatch Application Insights for Microsoft .NET and Microsoft SQL Server. Use a CloudWatch dashboard to identify the root cause.

B.

Enable RDS Performance Insights and determine which query is creating the problem. Request changes to the query to address the problem.

C.

Use AWS X-Ray deployed with Amazon RDS to track query system traces.

D.

Create a support request and work with AWS Support to identify the source of the issue.

Question 92

The Development team recently executed a database script containing several data definition language (DDL) and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release accidentally deleted thousands of rows from an important table and broke some application functionality. This was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a DELETE command in the script with an incorrect WHERE clause filtering the wrong set of rows.

The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to the correct state as quickly as possible to resume full application functionality. Data loss must be minimal.

How can the Database Specialist accomplish this?

Options:

A.

Quickly rewind the DB cluster to a point in time before the release using Backtrack.

B.

Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.

C.

Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.

D.

Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.
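
For illustration, a hedged boto3 sketch of rewinding the cluster with Backtrack (option A), which is an in-place rewind rather than a restore to a new cluster. The cluster identifier and timestamp are placeholders:

    import boto3
    from datetime import datetime, timezone

    rds = boto3.client("rds")

    # Rewind the cluster to just before the faulty release (placeholder timestamp, UTC).
    rds.backtrack_db_cluster(
        DBClusterIdentifier="example-aurora-mysql-cluster",
        BacktrackTo=datetime(2024, 1, 15, 2, 55, tzinfo=timezone.utc),
        UseEarliestTimeOnPointInTimeUnavailable=True,
    )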

Question 93

A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.

The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.

How can a Database Specialist address these requirements with minimal user involvement?

Options:

A.

Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.

B.

Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster nodes is at an acceptable level. Adjust the number of instances, if necessary.

C.

Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.

D.

Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

Question 94

An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update.

The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.

Which solution meets these requirements?

Options:

A.

Turn on the serverless option in the DB cluster so it can automatically scale based on demand.

B.

Provision a clone of the existing DB cluster for the new Application team.

C.

Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).

D.

Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.
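
For illustration, a hedged boto3 sketch of option D: Application Auto Scaling on the Aurora Replica count, driven by average reader CPU utilization. The cluster name, capacity limits, and target value are placeholder assumptions:

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Register the Aurora Replica count of the cluster as a scalable target.
    autoscaling.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:example-aurora-cluster",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=5,
    )

    # Target-tracking policy on average reader CPU utilization.
    autoscaling.put_scaling_policy(
        PolicyName="reporting-read-scaling",
        ServiceNamespace="rds",
        ResourceId="cluster:example-aurora-cluster",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
            },
        },
    )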

Question 95

A company is launching a new Amazon RDS for MySQL Multi-AZ DB instance to be used as a data store for a custom-built application. After a series of tests with point-in-time recovery disabled, the company decides that it must have point-in-time recovery reenabled before using the DB instance to store production data.

What should a database specialist do so that point-in-time recovery can be successful?

Options:

A.

Enable binary logging in the DB parameter group used by the DB instance.

B.

Modify the DB instance and enable audit logs to be pushed to Amazon CloudWatch Logs.

C.

Modify the DB instance and configure a backup retention period

D.

Set up a scheduled job to create manual DB instance snapshots.
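
For illustration, a hedged boto3 sketch of option C: point-in-time recovery for RDS depends on automated backups, so setting a nonzero backup retention period turns it back on. The instance identifier and retention value are placeholders:

    import boto3

    rds = boto3.client("rds")

    # A retention period greater than 0 re-enables automated backups (and PITR).
    rds.modify_db_instance(
        DBInstanceIdentifier="example-mysql-instance",
        BackupRetentionPeriod=7,
        ApplyImmediately=True,
    )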

Question 96

A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures.

The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort.

Which solution will meet these requirements?

Options:

A.

Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.

B.

Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for query processing.

C.

Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.

D.

Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.

Question 97

A large IT hardware manufacturing company wants to deploy a MySQL database solution in the AWS Cloud. The solution should quickly create copies of the company's production databases for test purposes. The solution must deploy the test databases in minutes, and the test data should match the latest production data as closely as possible. Developers must also be able to make changes in the test database and delete the instances afterward.

Which solution meets these requirements?

Options:

A.

Leverage Amazon RDS for MySQL with write-enabled replicas running on Amazon EC2. Create the test copies by taking a mysqldump backup from the RDS for MySQL DB instances and importing it into the new EC2 instances.

B.

Leverage Amazon Aurora MySQL. Use database cloning to create multiple test copies of the production DB clusters.

C.

Leverage Amazon Aurora MySQL. Restore previous production DB instance snapshots into new test copies of Aurora MySQL DB clusters to allow them to make changes.

D.

Leverage Amazon RDS for MySQL. Use database cloning to create multiple developer copies of the production DB instance.
