The Secret Of Amazon-Web-Services SAA-C03 Testing Bible

Guaranteed SAA-C03 test questions and preparation materials for the Amazon Web Services certification for IT learners. Real success guaranteed with updated SAA-C03 PDF and VCE dump materials. 100% pass the AWS Certified Solutions Architect - Associate (SAA-C03) exam today!

Check SAA-C03 free dumps before getting the full version:

NEW QUESTION 1
A gaming company has a web application that displays scores. The application runs on Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon RDS for MySQL database. Users are starting to experience long delays and interruptions that are caused by database read performance. The company wants to improve the user experience while minimizing changes to the application's architecture.
What should a solutions architect do to meet these requirements?

  • A. Use Amazon ElastiCache in front of the database.
  • B. Use RDS Proxy between the application and the database.
  • C. Migrate the application from EC2 instances to AWS Lambda.
  • D. Migrate the database from Amazon RDS for MySQL to Amazon DynamoDB.

Answer: A
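
For context on option A: a cache in front of RDS absorbs repeated reads without changing the rest of the architecture, unlike migrating the compute tier to Lambda. A minimal cache-aside sketch, assuming the redis-py and PyMySQL client libraries; the endpoints, credentials, table, and query are placeholders:

    import json

    import pymysql          # MySQL client for the existing RDS database
    import redis            # client for the ElastiCache (Redis) cluster

    cache = redis.Redis(host="scores-cache.xxxxxx.cache.amazonaws.com", port=6379)
    db = pymysql.connect(host="scores-db.xxxxxx.us-east-1.rds.amazonaws.com",
                         user="app", password="example-password", database="game")

    def get_top_scores(limit=10):
        key = f"top_scores:{limit}"
        cached = cache.get(key)
        if cached:                                   # serve hot reads from the cache
            return json.loads(cached)
        with db.cursor() as cur:                     # fall back to RDS on a cache miss
            cur.execute("SELECT player, score FROM scores "
                        "ORDER BY score DESC LIMIT %s", (limit,))
            rows = [list(row) for row in cur.fetchall()]
        cache.setex(key, 60, json.dumps(rows))       # keep the result for 60 seconds
        return rows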

NEW QUESTION 2
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?

  • A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
  • B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
  • C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
  • D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.

Answer: B
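
One common way to implement option B is a CloudWatch alarm on the queue's ApproximateNumberOfMessagesVisible metric that triggers a scaling policy on the workers' Auto Scaling group. A minimal boto3 sketch; the queue name, group name, and thresholds are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Simple scaling policy: add two workers when the alarm fires.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="job-workers-asg",
        PolicyName="scale-out-on-queue-depth",
        PolicyType="SimpleScaling",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,
        Cooldown=120,
    )

    # Alarm on the SQS backlog that invokes the scaling policy.
    cloudwatch.put_metric_alarm(
        AlarmName="job-queue-backlog",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "job-queue"}],
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=100,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )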

NEW QUESTION 3
A company stores data in an Amazon Aurora PostgreSQL DB cluster. The company must store all the data for 5 years and must delete all the data after 5 years. The company also must indefinitely keep audit logs of actions that are performed within the database. Currently, the company has automated backups configured for Aurora.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

  • A. Take a manual snapshot of the DB cluster.
  • B. Create a lifecycle policy for the automated backups.
  • C. Configure automated backup retention for 5 years.
  • D. Configure an Amazon CloudWatch Logs export for the DB cluster.
  • E. Use AWS Backup to take the backups and to keep the backups for 5 years.

Answer: AD
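
Assuming the selected options (A and D), the two actions are a manual cluster snapshot, which is retained until it is explicitly deleted, and exporting the database logs to CloudWatch Logs, where retention can be kept indefinitely. A minimal boto3 sketch with placeholder identifiers; for Aurora PostgreSQL the exportable log type is postgresql:

    import boto3

    rds = boto3.client("rds")

    # A: manual snapshot, retained until it is explicitly deleted (e.g., after 5 years).
    rds.create_db_cluster_snapshot(
        DBClusterSnapshotIdentifier="aurora-pg-5yr-snapshot",
        DBClusterIdentifier="aurora-pg-cluster",
    )

    # D: export database logs to CloudWatch Logs, where they can be retained indefinitely.
    rds.modify_db_cluster(
        DBClusterIdentifier="aurora-pg-cluster",
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
        ApplyImmediately=True,
    )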

NEW QUESTION 4
A company needs to retain application log files for a critical application for 10 years. The application team regularly accesses logs from the past month for troubleshooting, but logs older than 1 month are rarely accessed. The application generates more than 10 TB of logs per month.
Which storage option meets these requirements MOST cost-effectively?

  • A. Store the logs in Amazon S3. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
  • B. Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.
  • C. Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
  • D. Store the logs in Amazon CloudWatch Logs. Use Amazon S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.

Answer: B
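
For option B, a single S3 Lifecycle configuration covers both the transition after one month and the expiration at the end of the 10-year retention period. A minimal boto3 sketch; the bucket name and prefix are placeholders:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="app-critical-logs",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-then-expire",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "logs/"},
                    # Rarely accessed after a month: move to the cheapest archive tier.
                    "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
                    # Retention requirement is 10 years (~3650 days).
                    "Expiration": {"Days": 3650},
                }
            ]
        },
    )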

NEW QUESTION 5
A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
  • B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption.
  • C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
  • D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS KMS keys (SSE-KMS). Configure replication between the S3 buckets.

Answer: B

Explanation:
From https://docs.aws.amazon.com/kms/latest/developerguide/custom-key-store-overview.html: For most users, the default AWS KMS key store, which is protected by FIPS 140-2 validated cryptographic modules, fulfills their security requirements. There is no need to add an extra layer of maintenance responsibility or a dependency on an additional service. However, you might consider creating a custom key store if your organization has any of the following requirements: Key material cannot be stored in a shared environment. Key material must be subject to a secondary, independent audit path. The HSMs that generate and store key material must be certified at FIPS 140-2 Level 3.
https://docs.aws.amazon.com/kms/latest/developerguide/custom-key-store-overview.html
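
Assuming option B, a multi-Region primary key is created once and then replicated into the second Region; the replica shares the same key material, so objects can be encrypted and decrypted with the same key in both Regions. A minimal boto3 sketch; the Regions and alias are placeholders:

    import boto3

    kms_east = boto3.client("kms", region_name="us-east-1")

    # Create the multi-Region primary key in the first Region.
    primary = kms_east.create_key(
        Description="shared data key for both Regions",
        MultiRegion=True,
    )
    key_id = primary["KeyMetadata"]["KeyId"]
    kms_east.create_alias(AliasName="alias/shared-data-key", TargetKeyId=key_id)

    # Replicate the key into the second Region; the replica has the same key material.
    kms_east.replicate_key(KeyId=key_id, ReplicaRegion="us-west-2")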

NEW QUESTION 6
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications.
Which action should the solutions architect take?

  • A. Configure a CloudFront signed URL
  • B. Configure a CloudFront signed cookie.
  • C. Configure a CloudFront field-level encryption profile
  • D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy

Answer: C

NEW QUESTION 7
An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24 hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the distributions. Store the order data in Amazon S3.
  • B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.
  • C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for MySQL.
  • D. Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.

Answer: D
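
Assuming option D, the backend API can be a small Lambda function behind API Gateway that reads the current deal from DynamoDB. A minimal handler sketch; the table name and key attribute are placeholders:

    import json
    from datetime import datetime, timezone

    import boto3

    table = boto3.resource("dynamodb").Table("daily-deals")

    def handler(event, context):
        # The partition key is the UTC date, so there is exactly one item per day.
        today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
        item = table.get_item(Key={"deal_date": today}).get("Item")
        return {
            "statusCode": 200 if item else 404,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(item or {"message": "No deal today"}, default=str),
        }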

NEW QUESTION 8
A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with two attached AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the applications.
What should a solutions architect do to mitigate any single point of failure in this architecture?

  • A. Add a set of VPNs between the Management and Production VPCs
  • B. Add a second virtual private gateway and attach it to the Management VPC.
  • C. Add a second set of VPNs to the Management VPC from a second customer gateway device
  • D. Add a second VPC peering connection between the Management VPC and the Production VPC.

Answer: C

Explanation:
https://docs.aws.amazon.com/vpn/latest/s2svpn/images/Multiple_Gateways_diagram.png
"To protect against a loss of connectivity in case your customer gateway device becomes unavailable, you can set up a second Site-to-Site VPN connection to your VPC and virtual private gateway by using a second customer gateway device." https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-redundant-connection.html

NEW QUESTION 9
A solutions architect is creating a new VPC design. There are two public subnets for the load balancer, two private subnets for web servers, and two private subnets for MySQL. The web servers use only HTTPS. The solutions architect has already created a security group for the load balancer allowing port 443 from 0.0.0.0/0.
Company policy requires that each resource has the least access required to still be able to perform its tasks. Which additional configuration strategy should the solutions architect use to meet these requirements?

  • A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
  • B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
  • C. Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
  • D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.

Answer: C
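
Assuming option C, least privilege is achieved by chaining security groups: each tier references the upstream tier's security group instead of a CIDR range. A minimal boto3 sketch; the security group IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    ALB_SG = "sg-0aaa1111bbbb2222c"   # already allows 443 from 0.0.0.0/0
    WEB_SG = "sg-0ccc3333dddd4444e"
    DB_SG = "sg-0eee5555ffff6666a"

    # Web servers: accept HTTPS only from the load balancer's security group.
    ec2.authorize_security_group_ingress(
        GroupId=WEB_SG,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "UserIdGroupPairs": [{"GroupId": ALB_SG}],
        }],
    )

    # MySQL servers: accept 3306 only from the web servers' security group.
    ec2.authorize_security_group_ingress(
        GroupId=DB_SG,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": WEB_SG}],
        }],
    )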

NEW QUESTION 10
A company has chosen to rehost its application on Amazon EC2 instances. The application occasionally experiences errors that affect parts of its functionality. The company was unaware of this issue until users reported the errors. The company wants to address this problem during the migration and reduce the time it takes to detect issues with the application. Log files for the application are stored on the local disk.
A solutions architect needs to design a solution that will alert staff if there are errors in the application after the application is migrated to AWS. The solution must not require additional changes to the application code.
What is the MOST operationally efficient solution that meets these requirements?

  • A. Configure the application to generate custom metrics for the errors. Send these metric data points to Amazon CloudWatch by using the PutMetricData API call. Create a CloudWatch alarm that is based on the custom metrics.
  • B. Create an hourly cron job on the instances to copy the application log data to an Amazon S3 bucket. Configure an AWS Lambda function to scan the log file and publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to alert staff if errors are detected.
  • C. Install the Amazon CloudWatch agent on the instances. Configure the CloudWatch agent to stream the application log file to Amazon CloudWatch Logs. Run a CloudWatch Logs Insights query to search for the relevant pattern in the log file. Create a CloudWatch alarm that is based on the query output.
  • D. Install the Amazon CloudWatch agent on the instances. Configure the CloudWatch agent to stream the application log file to Amazon CloudWatch Logs. Create a metric filter for the relevant log group. Define the filter pattern that is required to determine that there are errors in the application. Create a CloudWatch alarm that is based on the resulting metric.

Answer: D
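
Assuming option D, once the CloudWatch agent ships the log file to a log group, a metric filter turns error lines into a metric and an alarm notifies staff (here through an SNS topic). A minimal boto3 sketch; the log group name, filter pattern, and topic ARN are placeholders:

    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    # Turn matching log lines into a custom metric.
    logs.put_metric_filter(
        logGroupName="/app/production",
        filterName="application-errors",
        filterPattern="?ERROR ?Exception",
        metricTransformations=[{
            "metricName": "ApplicationErrors",
            "metricNamespace": "App/Migrated",
            "metricValue": "1",
            "defaultValue": 0,
        }],
    )

    # Alert staff when any error appears within a 5-minute window.
    cloudwatch.put_metric_alarm(
        AlarmName="application-errors",
        Namespace="App/Migrated",
        MetricName="ApplicationErrors",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
    )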

NEW QUESTION 11
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.
What should a solutions architect do to accomplish this goal?

  • A. Use AWS Secrets Manager. Turn on automatic rotation.
  • B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
  • C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
  • D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.

Answer: A
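
Assuming option A, the credentials move from the local file into a Secrets Manager secret, and rotation runs on a schedule through a rotation Lambda function (AWS provides rotation function templates for RDS and Aurora). A minimal boto3 sketch; the secret contents and ARNs are placeholders:

    import json

    import boto3

    secrets = boto3.client("secretsmanager")

    # Store the database credentials as a secret instead of a local file.
    secret = secrets.create_secret(
        Name="prod/app/aurora-credentials",
        SecretString=json.dumps({
            "username": "appuser",
            "password": "initial-password",
            "host": "app-cluster.cluster-xxxxxx.us-east-1.rds.amazonaws.com",
            "port": 3306,
        }),
    )

    # Enable automatic rotation every 30 days using a rotation Lambda function.
    secrets.rotate_secret(
        SecretId=secret["ARN"],
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRotation",
        RotationRules={"AutomaticallyAfterDays": 30},
    )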

NEW QUESTION 12
A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The database tier uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The EC2 instances require internet access to complete payment processing of orders through a third-party web service. The application must be highly available.
Which combination of configuration options will meet these requirements? (Choose two.)

  • A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
  • B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.
  • C. Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance in private subnets.
  • D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnet.
  • E. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnets.

Answer: AE

Explanation:
Before you begin: Decide which two Availability Zones you will use for your EC2 instances. Configure your virtual private cloud (VPC) with at least one public subnet in each of these Availability Zones. These public subnets are used to configure the load balancer. You can launch your EC2 instances in other subnets of these Availability Zones instead.

NEW QUESTION 13
A company is using a SQL database to store movie data that is publicly accessible. The database runs on an Amazon RDS Single-AZ DB instance. A script runs queries at random intervals each day to record the number of new movies that have been added to the database. The script must report a final total during business hours. The company's development team notices that the database performance is inadequate for development tasks when the script is running. A solutions architect must recommend a solution to resolve this issue. Which solution will meet this requirement with the LEAST operational overhead?

  • A. Modify the DB instance to be a Multi-AZ deployment
  • B. Create a read replica of the database Configure the script to query only the read replica
  • C. Instruct the development team to manually export the entries in the database at the end of each day
  • D. Use Amazon ElastiCache to cache the common queries that the script runs against the database

Answer: B
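
Assuming option B, the read replica is created once and the script then connects to the replica endpoint instead of the primary. A minimal boto3 sketch; the instance identifiers and class are placeholders:

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of the existing Single-AZ instance.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="movies-db-replica",
        SourceDBInstanceIdentifier="movies-db",
        DBInstanceClass="db.t3.medium",
    )

    # Wait until the replica is available, then point the reporting script at its endpoint.
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="movies-db-replica")
    info = rds.describe_db_instances(DBInstanceIdentifier="movies-db-replica")
    print(info["DBInstances"][0]["Endpoint"]["Address"])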

NEW QUESTION 14
A solutions architect is designing a new hybrid architecture to extend a company's on-premises infrastructure to AWS. The company requires a highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?

  • A. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails.
  • B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails.
  • C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails.
  • D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails.

Answer: A

NEW QUESTION 15
An online photo application lets users upload photos and perform image editing operations. The application offers two classes of service: free and paid. Photos submitted by paid users are processed before those submitted by free users. Photos are uploaded to Amazon S3, and the job information is sent to Amazon SQS.
Which configuration should a solutions architect recommend?

  • A. Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first.
  • B. Use two SQS FIFO queues: one for paid and one for free. Set the free queue to use short polling and the paid queue to use long polling.
  • C. Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.
  • D. Use one SQS standard queue. Set the visibility timeout of the paid photos to zero. Configure Amazon EC2 instances to prioritize visibility settings so paid photos are processed first.

Answer: C

Explanation:
Priority: Use separate queues to provide prioritization of work.
https://aws.amazon.com/sqs/features/
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.
https://acloud.guru/forums/guru-of-the-week/discussion/-L7Be8rOao3InQxdQcXj/
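
Assuming option C, a worker always drains the paid queue first and polls the free queue only when no paid jobs are waiting. A minimal boto3 sketch; the queue URLs and the processing step are placeholders:

    import boto3

    sqs = boto3.client("sqs")

    PAID_QUEUE = "https://sqs.us-east-1.amazonaws.com/111122223333/paid-photo-jobs"
    FREE_QUEUE = "https://sqs.us-east-1.amazonaws.com/111122223333/free-photo-jobs"

    def process_photo_job(body):
        # Placeholder for the real image-processing work.
        print("processing job:", body)

    def poll_once():
        # Check the paid queue first; fall through to the free queue only if it is empty.
        for queue_url in (PAID_QUEUE, FREE_QUEUE):
            resp = sqs.receive_message(QueueUrl=queue_url,
                                       MaxNumberOfMessages=1,
                                       WaitTimeSeconds=2)
            for msg in resp.get("Messages", []):
                process_photo_job(msg["Body"])
                sqs.delete_message(QueueUrl=queue_url,
                                   ReceiptHandle=msg["ReceiptHandle"])
                return True        # handled a job; re-check the paid queue next time
        return False

    if __name__ == "__main__":
        while True:                # simple long-running worker loop
            poll_once()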

NEW QUESTION 16
A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.
What should a solutions architect do to accomplish this goal?

  • A. Turn on AWS Config with the appropriate rules.
  • B. Turn on AWS Trusted Advisor with the appropriate checks.
  • C. Turn on Amazon Inspector with the appropriate assessment template.
  • D. Turn on Amazon S3 server access logging. Configure Amazon EventBridge (Amazon CloudWatch Events).

Answer: A
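
Assuming option A, AWS Config continuously evaluates bucket configuration against rules; this sketch adds one AWS managed rule and assumes the configuration recorder and delivery channel already exist. The rule name is a placeholder:

    import boto3

    config = boto3.client("config")

    # Flag S3 buckets that allow public read access as NON_COMPLIANT.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "s3-no-public-read",
            "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
            },
        }
    )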

NEW QUESTION 17
A company collects data from thousands of remote devices by using a RESTful web services application that runs on an Amazon EC2 instance. The EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket. The number of remote devices will increase into the millions soon. The company needs a highly scalable solution that minimizes operational overhead.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

  • A. Use AWS Glue to process the raw data in Amazon S3.
  • B. Use Amazon Route 53 to route traffic to different EC2 instances.
  • C. Add more EC2 instances to accommodate the increasing amount of incoming data.
  • D. Send the raw data to Amazon Simple Queue Service (Amazon SQS). Use EC2 instances to process the data.
  • E. Use Amazon API Gateway to send the raw data to an Amazon Kinesis data stream. Configure Amazon Kinesis Data Firehose to use the data stream as a source to deliver the data to Amazon S3.

Answer: AE

NEW QUESTION 18
A company is running an ASP.NET MVC application on a single Amazon EC2 instance. A recent increase in application traffic is causing slow response times for users during lunch hours. The company needs to resolve this concern with the least amount of configuration.
What should a solutions architect recommend to meet these requirements?

  • A. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling and time-based scaling to handle scaling during lunch hours.
  • B. Move the application to Amazon Elastic Container Service (Amazon ECS). Create an AWS Lambda function to handle scaling during lunch hours.
  • C. Move the application to Amazon Elastic Container Service (Amazon ECS). Configure scheduled scaling for AWS Application Auto Scaling during lunch hours.
  • D. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling, and create an AWS Lambda function to handle scaling during lunch hours.

Answer: A

Explanation:
Scheduled scaling is the solution here, and it must be delivered with the least amount of configuration. Comparing Elastic Beanstalk with a move to ECS: ECS requires more configuration (task and service definitions, configuring the ECS container agent) than Elastic Beanstalk, where you only upload the application code.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-autoscaling-scheduledactions.html
Elastic Beanstalk supports time-based scaling, which fits an application whose performance slows down during lunch hours.
https://aws.amazon.com/about-aws/whats-new/2015/05/aws-elastic-beanstalk-supports-time-based-scaling/
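
Assuming option A, a time-based (scheduled) scaling action can be attached to the Elastic Beanstalk environment through the aws:autoscaling:scheduledaction option namespace, as described in the linked documentation. A minimal boto3 sketch; the environment name, capacities, and cron expressions (UTC) are placeholders:

    import boto3

    eb = boto3.client("elasticbeanstalk")

    def scheduled_option(action_name, option, value):
        # ResourceName identifies the scheduled action the option belongs to.
        return {
            "Namespace": "aws:autoscaling:scheduledaction",
            "ResourceName": action_name,
            "OptionName": option,
            "Value": value,
        }

    # Scale out shortly before lunch and back in afterwards.
    eb.update_environment(
        EnvironmentName="mvc-app-prod",
        OptionSettings=[
            scheduled_option("lunch-scale-out", "MinSize", "4"),
            scheduled_option("lunch-scale-out", "MaxSize", "8"),
            scheduled_option("lunch-scale-out", "Recurrence", "30 11 * * *"),
            scheduled_option("lunch-scale-in", "MinSize", "1"),
            scheduled_option("lunch-scale-in", "MaxSize", "2"),
            scheduled_option("lunch-scale-in", "Recurrence", "0 14 * * *"),
        ],
    )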

NEW QUESTION 19
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?

  • A. Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.
  • B. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.
  • C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
  • D. Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store the photos and metadata.

Answer: C

NEW QUESTION 20
......

P.S. Easily pass the SAA-C03 exam with the Certshared dumps and PDF version. Welcome to download the newest Certshared SAA-C03 dumps: https://www.certshared.com/exam/SAA-C03/