The Leading Guide To DBS-C01 Study Guides

All that matters here is passing the Amazon Web Services DBS-C01 exam. All you need is a high score on the DBS-C01 AWS Certified Database - Specialty exam. The only thing you need to do is download the Examcollection DBS-C01 exam study guides now. We will not let you down, and we back that with our money-back guarantee.

Online Amazon Web Services DBS-C01 free dumps demo below:

NEW QUESTION 1
A Database Specialist migrated an existing production MySQL database from on-premises to an Amazon RDS for MySQL DB instance. However, after the migration, the database needed to be encrypted at rest using AWS KMS. Due to the size of the database, reloading the data into an encrypted database would be too time-consuming, so it is not an option.
How should the Database Specialist satisfy this new requirement?

  • A. Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.
  • B. Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.
  • C. Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.
  • D. Create an encrypted read replica of the RDS DB instance. Promote it to be the master.

Answer: A
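
For reference, here is a minimal boto3 sketch of the snapshot, encrypted-copy, and restore sequence described in option A. All identifiers (instance names, snapshot names, and the KMS key alias) are illustrative placeholders, not values from the question.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# 1. Snapshot the unencrypted source instance.
rds.create_db_snapshot(
    DBSnapshotIdentifier="mydb-unencrypted-snap",
    DBInstanceIdentifier="mydb",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydb-unencrypted-snap"
)

# 2. Copy the snapshot; the copy is encrypted with the given KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mydb-unencrypted-snap",
    TargetDBSnapshotIdentifier="mydb-encrypted-snap",
    KmsKeyId="alias/my-rds-key",  # placeholder KMS key alias
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydb-encrypted-snap"
)

# 3. Restore a new, encrypted instance from the encrypted copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted",
    DBSnapshotIdentifier="mydb-encrypted-snap",
)
```

Once the new encrypted instance is verified, the application endpoint can be switched over and the unencrypted instance retired.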

NEW QUESTION 2
A company is looking to move an on-premises IBM Db2 database running on AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.
What is the quickest way for the company to gather data on the migration compatibility?

  • A. Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.
  • B. Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.
  • C. Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.
  • D. Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.

Answer: D

NEW QUESTION 3
A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest.
Which step will provide additional security?

  • A. Set up NACLs that allow the entire EC2 subnet to access the DB instance
  • B. Disable the master user account
  • C. Set up a security group that blocks SSH to the DB instance
  • D. Set up RDS to use SSL for data in transit

Answer: D
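
As a hedged illustration of option D: on recent RDS for MySQL engine versions, unencrypted connections can be rejected outright via the require_secure_transport parameter in the instance's custom parameter group. The group name below is a placeholder and must already be attached to the DB instance.

```python
import boto3

rds = boto3.client("rds")

# Reject any client connection that does not use TLS.
rds.modify_db_parameter_group(
    DBParameterGroupName="finance-mysql-params",  # placeholder group name
    Parameters=[
        {
            "ParameterName": "require_secure_transport",
            "ParameterValue": "1",
            "ApplyMethod": "immediate",  # dynamic parameter, no reboot
        }
    ],
)
```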

NEW QUESTION 4
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low. Which solution meets these requirements?

  • A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
  • B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
  • C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
  • D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Answer: C
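
Concurrency Scaling is enabled through the cluster parameter group; a minimal boto3 sketch follows. The parameter group name and the limit of 4 transient clusters are placeholders, and eligible queries are routed according to the cluster's WLM queue configuration.

```python
import boto3

redshift = boto3.client("redshift")

# Allow Redshift to spin up transient concurrency-scaling clusters
# when queries start to queue.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="dw-params",  # placeholder group name
    Parameters=[
        {
            "ParameterName": "max_concurrency_scaling_clusters",
            "ParameterValue": "4",
        }
    ],
)
```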

NEW QUESTION 5
An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency, and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.
What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?

  • A. Increase the size of the DB instance storage
  • B. Change the underlying EBS storage type to General Purpose SSD (gp2)
  • C. Disable EBS optimization on the DB instance
  • D. Change the DB instance to an instance class with a higher maximum bandwidth

Answer: D
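
Since the instance saturates its throughput cap while IOPS, CPU, and RAM sit well under their limits, the bottleneck is the instance-level bandwidth, so the fix is an instance class change rather than a storage change. A sketch with placeholder names; the right target class depends on the throughput the workload actually needs:

```python
import boto3

rds = boto3.client("rds")

# Move to an instance class with more dedicated EBS bandwidth.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",          # placeholder instance name
    DBInstanceClass="db.r5.4xlarge",      # placeholder target class
    ApplyImmediately=True,
)
```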

NEW QUESTION 6
A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a replication group deployed with two additional replicas. The company is planning for a worldwide gaming event and is anticipating a higher write load than what the current cluster can handle.
Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?

  • A. Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted Set across all nodes in the cluster.
  • B. Increase the size of the ElastiCache cluster nodes to a larger instance size.
  • C. Create an additional ElastiCache cluster and load-balance traffic between the two clusters.
  • D. Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.

Answer: B
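
A single Sorted Set key lives on one shard, so enabling cluster mode would not spread its writes; scaling the node up is the available lever. A minimal sketch, with the replication group ID and node type as placeholders:

```python
import boto3

elasticache = boto3.client("elasticache")

# Scale the whole replication group up to a larger node type online.
elasticache.modify_replication_group(
    ReplicationGroupId="leaderboard",     # placeholder group ID
    CacheNodeType="cache.r5.2xlarge",     # placeholder node type
    ApplyImmediately=True,
)
```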

NEW QUESTION 7
A gaming company is designing a mobile gaming app that will be accessed by many users across the globe. The company wants to have replication and full support for multi-master writes. The company also wants to ensure low latency and consistent performance for app users.
Which solution meets these requirements?

  • A. Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling
  • B. Use Amazon Aurora for storage and enable cross-Region Aurora Replicas
  • C. Use Amazon Aurora for storage and cache the user content with Amazon ElastiCache
  • D. Use Amazon Neptune for storage

Answer: A

NEW QUESTION 8
A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.
How should a Database Specialist ensure DynamoDB can handle the increased traffic?

  • A. Ensure the table is always provisioned to meet peak needs
  • B. Allow burst capacity to handle the additional load
  • C. Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic
  • D. Preprovision additional capacity for the known peaks and then reduce the capacity after the event

Answer: D
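
For a known, short-lived peak, capacity can be raised ahead of time and lowered afterwards. A minimal sketch; the table name and capacity figures are placeholders that would come from load testing. Note that DynamoDB limits how many times provisioned capacity can be decreased per day, so the scale-down should be planned.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Raise provisioned throughput before the event; run the same call with
# lower values after the event to reduce cost.
dynamodb.update_table(
    TableName="transactions",  # placeholder table name
    ProvisionedThroughput={
        "ReadCapacityUnits": 10000,
        "WriteCapacityUnits": 10000,
    },
)
```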

NEW QUESTION 9
A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.
Which step should be taken to troubleshoot this issue?

  • A. Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine’s IP address
  • B. Ensure that the RDS DB instance’s subnet group includes a public subnet to allow the Developer to connect
  • C. Ensure that the RDS DB instance has not reached its maximum connections limit
  • D. Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections

Answer: D

NEW QUESTION 10
A company has multiple applications serving data from a secure on-premises database. The company is migrating all applications and databases to the AWS Cloud. The IT Risk and Compliance department requires that auditing be enabled on all secure databases to capture all logins, logouts, failed logins, permission changes, and database schema changes. A Database Specialist has recommended Amazon Aurora MySQL as the migration target and suggested leveraging the Advanced Auditing feature in Aurora.
Which events need to be specified in the Advanced Auditing configuration to satisfy the minimum auditing requirements? (Choose three.)

  • A. CONNECT
  • B. QUERY_DCL
  • C. QUERY_DDL
  • D. QUERY_DML
  • E. TABLE
  • F. QUERY

Answer: ABC
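
CONNECT covers logins, logouts, and failed logins; QUERY_DCL covers permission changes; QUERY_DDL covers schema changes. These map to the server_audit_events cluster parameter, as sketched below with a placeholder parameter group name:

```python
import boto3

rds = boto3.client("rds")

# Enable Advanced Auditing on the Aurora MySQL cluster parameter group
# and capture connection, DCL, and DDL events.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-audit-params",  # placeholder
    Parameters=[
        {"ParameterName": "server_audit_logging",
         "ParameterValue": "1", "ApplyMethod": "immediate"},
        {"ParameterName": "server_audit_events",
         "ParameterValue": "CONNECT,QUERY_DCL,QUERY_DDL",
         "ApplyMethod": "immediate"},
    ],
)
```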

NEW QUESTION 11
A company’s Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement.
Which combination of actions should the Database Specialist take? (Choose three.)

  • A. Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.
  • B. Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.
  • C. Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.
  • D. Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.
  • E. Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.
  • F. Configure the AWS Managed Microsoft AD domain controller Security Group.

Answer: CDF

NEW QUESTION 12
A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.
The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.
How can a Database Specialist address these requirements with minimal user involvement?

  • A. Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.
  • B. Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster nodes is at an acceptable level. Adjust the number of instances, if necessary.
  • C. Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.
  • D. Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

Answer: D
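
Aurora Replica auto scaling is configured through Application Auto Scaling. A minimal sketch; the cluster name, replica counts, and CPU target are placeholders:

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the cluster's read-replica count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora",  # placeholder cluster name
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Track average reader CPU; replicas are added/removed to hold the target.
aas.put_scaling_policy(
    PolicyName="reporting-read-scaling",
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # placeholder CPU utilization target (%)
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```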

NEW QUESTION 13
A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort.
What should the Database Specialist do to meet these requirements?

  • A. Restore a snapshot from the production cluster into test clusters
  • B. Create logical dumps of the production cluster and restore them into new test clusters
  • C. Use database cloning to create clones of the production cluster
  • D. Add an additional read replica to the production cluster and use that node for testing

Answer: C
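
Aurora cloning uses copy-on-write storage, so a clone is available in minutes and only changed pages consume new space. A minimal sketch; cluster and instance identifiers and the instance class are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Clone the production cluster using copy-on-write storage.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="test-clone-1",          # placeholder clone name
    SourceDBClusterIdentifier="prod-aurora",     # placeholder source
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# A fresh clone has no instances; add one so the test team can connect.
rds.create_db_instance(
    DBInstanceIdentifier="test-clone-1-writer",
    DBClusterIdentifier="test-clone-1",
    DBInstanceClass="db.r5.large",               # placeholder class
    Engine="aurora-mysql",
)
```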

NEW QUESTION 14
A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:
ERROR: could not write block 7507718 of temporary file: No space left on device
What is the cause of this error and what should the Database Specialist do to resolve this issue?

  • A. The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.
  • B. The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.
  • C. The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.
  • D. The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.

Answer: C

NEW QUESTION 15
A company has an Amazon RDS Multi-AZ DB instance that is 200 GB in size with an RPO of 6 hours. To meet the company’s disaster recovery policies, the database backup needs to be copied into another Region. The company requires the solution to be cost-effective and operationally efficient.
What should a Database Specialist do to copy the database backup into a different Region?

  • A. Use Amazon RDS automated snapshots and use AWS Lambda to copy the snapshot into another Region
  • B. Use Amazon RDS automated snapshots every 6 hours and use Amazon S3 cross-Region replication to copy the snapshot into another Region
  • C. Create an AWS Lambda function to take an Amazon RDS snapshot every 6 hours and use a second Lambda function to copy the snapshot into another Region
  • D. Create a cross-Region read replica for Amazon RDS in another Region and take an automated snapshot of the read replica

Answer: C
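
A rough sketch of the two Lambda handlers, both assumed to be triggered by EventBridge schedules. All names, Regions, and the account ID are placeholders:

```python
import time
import boto3


def take_snapshot(event, context):
    """Runs every 6 hours: take a manual snapshot of the instance."""
    rds = boto3.client("rds", region_name="us-east-1")
    rds.create_db_snapshot(
        DBSnapshotIdentifier=f"mydb-{int(time.time())}",  # placeholder
        DBInstanceIdentifier="mydb",                      # placeholder
    )


def copy_snapshot_to_dr(event, context):
    """Runs after each snapshot: copy the newest one to the DR Region."""
    src = boto3.client("rds", region_name="us-east-1")
    dst = boto3.client("rds", region_name="us-west-2")
    snaps = src.describe_db_snapshots(
        DBInstanceIdentifier="mydb", SnapshotType="manual"
    )["DBSnapshots"]
    latest = max(snaps, key=lambda s: s["SnapshotCreateTime"])
    dst.copy_db_snapshot(
        SourceDBSnapshotIdentifier=latest["DBSnapshotArn"],
        TargetDBSnapshotIdentifier=latest["DBSnapshotIdentifier"] + "-dr",
        SourceRegion="us-east-1",
    )
```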

NEW QUESTION 16
A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.
Which approach has the least risk and the highest likelihood of a successful data transfer?

  • A. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.
  • B. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.
  • C. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.
  • D. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp multipart upload command to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.

Answer: B

NEW QUESTION 17
A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.
Which solution meets these requirements?

  • A. Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.
  • B. Use reader endpoints for both the read-only workload applications.
  • C. Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.
  • D. Use custom endpoints for the two read-only applications.

Answer: D
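
Custom endpoints group specific replicas behind one DNS name, giving each application its own dedicated set with built-in load balancing and failover within that set. A minimal sketch; the cluster, endpoint, and replica names are placeholders, and more replicas can be added per endpoint:

```python
import boto3

rds = boto3.client("rds")

# One custom reader endpoint per application, each backed by its own
# static set of replicas.
for endpoint_id, members in {
    "app1-reader": ["prod-aurora-replica-1"],
    "app2-reader": ["prod-aurora-replica-2"],
}.items():
    rds.create_db_cluster_endpoint(
        DBClusterIdentifier="prod-aurora",      # placeholder cluster
        DBClusterEndpointIdentifier=endpoint_id,
        EndpointType="READER",
        StaticMembers=members,
    )
```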

NEW QUESTION 18
An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update.
The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.
Which solution meets these requirements?

  • A. Turn on the serverless option in the DB cluster so it can automatically scale based on demand.
  • B. Provision a clone of the existing DB cluster for the new Application team.
  • C. Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).
  • D. Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

Answer: D

NEW QUESTION 19
A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL.
The Database Specialist wrote and executed a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop.
How should the Database Specialist edit the script to fix this issue?

  • A. Stop the source instances before stopping their read replicas
  • B. Delete each read replica before stopping its corresponding source instance
  • C. Stop the read replicas before stopping their source instances
  • D. Use the AWS CLI to stop each read replica and source instance at the same time

Answer: B
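
RDS will not stop an instance that has read replicas (or that is itself a read replica), so the replicas have to be removed before stopping their sources. A rough sketch of the corrected script logic; it assumes replicas are acceptable to delete without a final snapshot because they can be re-created after the closure:

```python
import boto3

rds = boto3.client("rds")

for db in rds.describe_db_instances()["DBInstances"]:
    # Delete this instance's read replicas first; replicas block
    # StopDBInstance on the source.
    for replica_id in db.get("ReadReplicaDBInstanceIdentifiers", []):
        rds.delete_db_instance(
            DBInstanceIdentifier=replica_id,
            SkipFinalSnapshot=True,  # replicas are re-created later
        )
        rds.get_waiter("db_instance_deleted").wait(
            DBInstanceIdentifier=replica_id
        )
    # Stop only instances that are not themselves replicas.
    if not db.get("ReadReplicaSourceDBInstanceIdentifier"):
        rds.stop_db_instance(
            DBInstanceIdentifier=db["DBInstanceIdentifier"]
        )
```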

NEW QUESTION 20
After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?

  • A. The restored DB instance does not have Enhanced Monitoring enabled
  • B. The production DB instance is using a custom parameter group
  • C. The restored DB instance is using the default security group
  • D. The production DB instance is using a custom option group

Answer: C

NEW QUESTION 21
A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in CPU utilization was not determined using the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to determine what caused the CPU spike.
Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)

  • A. Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.
  • B. Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.
  • C. Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.
  • D. Use Amazon QuickSight to view the SQL statement being run.
  • E. Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.

Answer: BE
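
Both features can be switched on with a single instance modification. A minimal sketch; the instance name, account ID, and monitoring role ARN are placeholders, and Enhanced Monitoring requires an IAM role that RDS can assume:

```python
import boto3

rds = boto3.client("rds")

# Enable Enhanced Monitoring (OS-level process metrics) and
# Performance Insights (per-query database load, filterable by waits,
# SQL, hosts, and users).
rds.modify_db_instance(
    DBInstanceIdentifier="finance-sqlserver",  # placeholder name
    MonitoringInterval=1,  # seconds between Enhanced Monitoring samples
    MonitoringRoleArn=(
        "arn:aws:iam::123456789012:role/rds-monitoring-role"  # placeholder
    ),
    EnablePerformanceInsights=True,
    ApplyImmediately=True,
)
```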

NEW QUESTION 22
An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database should be designed to support the following use cases:
  • Update scores in real time whenever a player is playing the game.
  • Retrieve a player’s score details for a specific game session.
A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.
Which choice of keys is recommended for the DynamoDB table?

  • A. Create a global secondary index with game_id as the partition key
  • B. Create a global secondary index with user_id as the partition key
  • C. Create a composite primary key with game_id as the partition key and user_id as the sort key
  • D. Create a composite primary key with user_id as the partition key and game_id as the sort key

Answer: D
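
With user_id as the partition key and game_id as the sort key, a single item addresses both access patterns: writing a score for a (player, game) pair and reading one player's score for a specific game session. A minimal sketch with a placeholder table name:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Composite primary key: user_id (partition) + game_id (sort).
dynamodb.create_table(
    TableName="player_scores",  # placeholder table name
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},
        {"AttributeName": "game_id", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```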

NEW QUESTION 23
......

Recommended! Get the full DBS-C01 dumps in VCE and PDF from DumpSolutions.com. Welcome to download: https://www.dumpsolutions.com/DBS-C01-dumps/ (New 85 Q&As Version)