How Many Questions Of SAA-C03 Free Practice Exam

We provide real SAA-C03 exam questions and answers braindumps in two formats: downloadable PDF and practice tests. Pass the Amazon Web Services SAA-C03 exam quickly and easily. The SAA-C03 PDF format is suitable for reading and printing, so you can print it and practice as many times as you like. With the help of our Amazon Web Services SAA-C03 PDF and VCE products and material, you can easily pass the SAA-C03 exam.

Online SAA-C03 free questions and answers of New Version:

NEW QUESTION 1
A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?

  • A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
  • B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
  • C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
  • D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.

Answer: C

Explanation:
Amazon Athena can query JSON data directly in Amazon S3, with no infrastructure to manage.
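As an illustrative sketch (the table name, column names, and result bucket below are hypothetical, and an Athena table must already be defined over the JSON logs, for example via a JSON SerDe or an AWS Glue crawler), a simple on-demand query could be started with boto3:

```python
# Sketch only: the table name, column names, and result bucket are
# hypothetical placeholders.
LOGS_TABLE = "app_logs"                            # hypothetical
OUTPUT_LOCATION = "s3://example-athena-results/"   # hypothetical

def build_query(table, level):
    """Compose a simple on-demand query over the JSON log records."""
    return f"SELECT ts, message FROM {table} WHERE level = '{level}' LIMIT 100"

def run_query(level):
    """Start the query in Athena and return its execution ID
    (requires AWS credentials)."""
    import boto3  # AWS SDK; only needed when the call is actually made
    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString=build_query(LOGS_TABLE, level),
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )
    return response["QueryExecutionId"]
```

Because Athena is serverless and billed per query, nothing needs to be provisioned or torn down, which is what makes option C the lowest-overhead choice.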

NEW QUESTION 2
A company is creating a new application that will store a large amount of data. The data will be analyzed hourly and will be modified by several Amazon EC2 Linux instances that are deployed across multiple Availability Zones. The needed amount of storage space will continue to grow for the next 6 months.
Which storage solution should a solutions architect recommend to meet these requirements?

  • A. Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application instances.
  • B. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances.
  • C. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.
  • D. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the application instances.

Answer: C

NEW QUESTION 3
A company is planning to build a high performance computing (HPC) workload-as-a-service solution that is hosted on AWS. A group of 16 Amazon EC2 Linux instances requires the lowest possible latency for node-to-node communication. The instances also need a shared block device volume for high-performing storage.
Which solution will meet these requirements?

  • A. Use a cluster placement group. Attach a single Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume to all the instances by using Amazon EBS Multi-Attach.
  • B. Use a cluster placement group. Create shared file systems across the instances by using Amazon Elastic File System (Amazon EFS).
  • C. Use a partition placement group. Create shared file systems across the instances by using Amazon Elastic File System (Amazon EFS).
  • D. Use a spread placement group. Attach a single Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume to all the instances by using Amazon EBS Multi-Attach.

Answer: A

NEW QUESTION 4
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed only once?

  • A. Use the CreateQueue API call to create a new queue.
  • B. Use the AddPermission API call to add appropriate permissions.
  • C. Use the ReceiveMessage API call to set an appropriate wait time.
  • D. Use the ChangeMessageVisibility API call to increase the visibility timeout.

Answer: D

Explanation:
The visibility timeout begins when Amazon SQS returns a message. During this time, the consumer processes and deletes the message. However, if the consumer fails before deleting the message and your system doesn't call the DeleteMessage action for that message before the visibility timeout expires, the message becomes visible to other consumers and is received again. If a message must be received only once, your consumer should delete it within the duration of the visibility timeout. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
Keyword: the consumer writes to an Amazon RDS table after reading from the queue. Option D therefore fits best, and the other options are ruled out (Option A only creates a new queue, Option B only manages permissions, and Option C only affects how messages are retrieved). FIFO queues are designed to never introduce duplicate messages. However, your message producer might introduce duplicates in certain scenarios: for example, if the producer sends a message, does not receive a response, and then resends the same message. Amazon SQS APIs provide deduplication functionality that prevents your message producer from sending duplicates; any duplicates introduced by the producer are removed within a 5-minute deduplication interval. For standard queues, you might occasionally receive a duplicate copy of a message (at-least-once delivery). If you use a standard queue, you must design your applications to be idempotent (that is, they must not be affected adversely by processing the same message more than once).
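To make the fix concrete, here is a minimal boto3 sketch (the queue URL is hypothetical, and the sizing heuristic is an assumption rather than an AWS recommendation) that raises a message's visibility timeout so the consumer can finish the RDS write and delete the message before SQS re-delivers it:

```python
# Sketch only: the queue URL is hypothetical, and the timeout heuristic is
# an assumption, not an AWS recommendation.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical

def visibility_timeout_for(avg_processing_seconds, safety_factor=6):
    """Pick a visibility timeout several times the average processing time,
    capped at the SQS maximum of 43,200 seconds (12 hours)."""
    return min(avg_processing_seconds * safety_factor, 43_200)

def extend_visibility(receipt_handle, avg_processing_seconds):
    """Call ChangeMessageVisibility so the message stays hidden until the
    consumer has written to RDS and deleted the message."""
    import boto3  # AWS SDK; only needed when the call is actually made
    sqs = boto3.client("sqs")
    sqs.change_message_visibility(
        QueueUrl=QUEUE_URL,
        ReceiptHandle=receipt_handle,
        VisibilityTimeout=visibility_timeout_for(avg_processing_seconds),
    )
```

Even with a longer timeout, a standard queue still guarantees only at-least-once delivery, so the RDS insert itself should also be made idempotent.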

NEW QUESTION 5
A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web application consists of a web tier and an application tier that stores and retrieves user data in Amazon DynamoDB tables. The web and application tiers are hosted on Amazon EC2 instances, and the database tier is not publicly accessible. The application EC2 instances need to access the DynamoDB tables without exposing API credentials in the template.
What should the solutions architect do to meet the requirements?

  • A. Create an IAM role to read the DynamoDB tables. Associate the role with the application instances by referencing an instance profile.
  • B. Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the role to the EC2 instance profile, and associate the instance profile with the application instances.
  • C. Use the parameters section in the AWS CloudFormation template to have the user input access and secret keys from an already-created IAM user that has the required permissions to read and write from the DynamoDB tables.
  • D. Create an IAM user in the AWS CloudFormation template that has the required permissions to read and write from the DynamoDB tables. Use the GetAtt function to retrieve the access and secret keys, and pass them to the application instances through the user data.

Answer: B

NEW QUESTION 6
A gaming company is moving its public scoreboard from a data center to the AWS Cloud. The company uses Amazon EC2 Windows Server instances behind an Application Load Balancer to host its dynamic application. The company needs a highly available storage solution for the application. The application consists of static files and dynamic server-side code.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

  • A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge.
  • B. Store the static files on Amazon S3. Use Amazon ElastiCache to cache objects at the edge.
  • C. Store the server-side code on Amazon Elastic File System (Amazon EFS). Mount the EFS volume on each EC2 instance to share the files.
  • D. Store the server-side code on Amazon FSx for Windows File Server. Mount the FSx for Windows File Server volume on each EC2 instance to share the files.
  • E. Store the server-side code on a General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on each EC2 instance to share the files.

Answer: AD

Explanation:
Amazon CloudFront caches static S3 content at the edge, and Amazon FSx for Windows File Server provides highly available shared storage for Windows Server instances. A gp2 EBS volume cannot be shared across multiple instances.

NEW QUESTION 7
A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The migrated database must maintain compatibility with the company's applications that use the database. The migrated database also must scale automatically during periods of increased demand.
Which migration solution will meet these requirements?

  • A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.
  • B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster.
  • C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling.
  • D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.

Answer: C

NEW QUESTION 8
An online retail company needs to run near-real-time analytics on website traffic to analyze top-selling products across different locations. The product purchase data and the user location details are sent to a third-party application that runs on premises. The application processes the data and moves the data into the company's analytics engine.
The company needs to implement a cloud-based solution to make the data available for near-real-time analytics.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use Amazon Kinesis Data Streams to ingest the data. Use AWS Lambda to transform the data. Configure Lambda to write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).
  • B. Configure Amazon Kinesis Data Streams to write the data to an Amazon S3 bucket. Schedule an AWS Glue crawler job to enrich the data and update the AWS Glue Data Catalog. Use Amazon Athena for analytics.
  • C. Configure Amazon Kinesis Data Streams to write the data to an Amazon S3 bucket. Add an Apache Spark job on Amazon EMR to enrich the data in the S3 bucket and write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).
  • D. Use Amazon Kinesis Data Firehose to ingest the data. Enable Kinesis Data Firehose data transformation with AWS Lambda. Configure Kinesis Data Firehose to write the data to Amazon OpenSearch Service (Amazon Elasticsearch Service).

Answer: D

Explanation:
Kinesis Data Firehose is fully managed, supports in-flight transformation with AWS Lambda, and delivers directly to Amazon OpenSearch Service, so it requires the least operational overhead. The EMR-based option requires provisioning and managing a cluster.

NEW QUESTION 9
A company is running a publicly accessible serverless application that uses Amazon API Gateway and AWS Lambda. The application’s traffic recently spiked due to fraudulent requests from botnets.
Which steps should a solutions architect take to block requests from unauthorized users? (Select TWO.)

  • A. Create a usage plan with an API key that is shared with genuine users only.
  • B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses.
  • C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
  • D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint.
  • E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call.

Answer: AC

Explanation:
A usage plan with API keys restricts access to known users, and an AWS WAF rule can filter out malicious requests. Converting the public API to a private API would block genuine users as well.

NEW QUESTION 10
A company's web application consists of multiple Amazon EC2 instances that run behind an Application Load Balancer in a VPC. An Amazon RDS for MySQL DB instance contains the data. The company needs the ability to automatically detect and respond to suspicious or unexpected behavior in its AWS environment. The company already has added AWS WAF to its architecture.
What should a solutions architect do next to protect against threats?

  • A. Use Amazon GuardDuty to perform threat detection. Configure Amazon EventBridge (Amazon CloudWatch Events) to filter for GuardDuty findings and to invoke an AWS Lambda function to adjust the AWS WAF rules.
  • B. Use AWS Firewall Manager to perform threat detection. Configure Amazon EventBridge (Amazon CloudWatch Events) to filter for Firewall Manager findings and to invoke an AWS Lambda function to adjust the AWS WAF web ACL.
  • C. Use Amazon Inspector to perform threat detection and to update the AWS WAF rules. Create a VPC network ACL to limit access to the web application.
  • D. Use Amazon Macie to perform threat detection and to update the AWS WAF rules. Create a VPC network ACL to limit access to the web application.

Answer: A

NEW QUESTION 11
A company has an AWS Glue extract, transform, and load (ETL) job that runs every day at the same time. The job processes XML data that is in an Amazon S3 bucket.
New data is added to the S3 bucket every day. A solutions architect notices that AWS Glue is processing all the data during each run.
What should the solutions architect do to prevent AWS Glue from reprocessing old data?

  • A. Edit the job to use job bookmarks.
  • B. Edit the job to delete data after the data is processed.
  • C. Edit the job by setting the NumberOfWorkers field to 1.
  • D. Use a FindMatches machine learning (ML) transform.

Answer: A

Explanation:
AWS Glue job bookmarks track data that has already been processed, so each run processes only newly added data.

NEW QUESTION 12
A company's ecommerce website has unpredictable traffic and uses AWS Lambda functions to directly access a private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database performance and ensure that the Lambda invocations do not overload the database with too many connections.
What should a solutions architect do to meet these requirements?

  • A. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.
  • B. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
  • C. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.
  • D. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions outside a VPC.

Answer: B

NEW QUESTION 13
A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week.
What should the company do to guarantee the EC2 capacity?

  • A. Purchase Reserved Instances that specify the Region needed.
  • B. Create an On-Demand Capacity Reservation that specifies the Region needed.
  • C. Purchase Reserved Instances that specify the Region and three Availability Zones needed.
  • D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.

Answer: D

Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html: "When you create a Capacity Reservation, you specify: The Availability Zone in which to reserve the capacity."
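A minimal boto3 sketch of that approach, assuming a hypothetical instance type and hypothetical Availability Zone names: one On-Demand Capacity Reservation per Availability Zone, set to expire automatically after the one-week event.

```python
# Sketch only: the instance type, platform, and Availability Zone names
# are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

def reservation_requests(azs, count_per_az):
    """Build one On-Demand Capacity Reservation request per Availability
    Zone, expiring automatically after the one-week event."""
    end = datetime.now(timezone.utc) + timedelta(weeks=1)
    return [
        {
            "InstanceType": "m5.large",        # hypothetical
            "InstancePlatform": "Linux/UNIX",
            "AvailabilityZone": az,            # capacity is reserved per AZ
            "InstanceCount": count_per_az,
            "EndDateType": "limited",          # release the capacity at EndDate
            "EndDate": end,
        }
        for az in azs
    ]

def reserve(azs, count_per_az):
    """Create the reservations (requires AWS credentials)."""
    import boto3  # AWS SDK; only needed when the calls are actually made
    ec2 = boto3.client("ec2")
    for request in reservation_requests(azs, count_per_az):
        ec2.create_capacity_reservation(**request)
```

Reserved Instances (options A and C) are a billing discount, not a capacity guarantee for a one-week window, which is why the Capacity Reservation is the right tool here.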

NEW QUESTION 14
A company wants to run applications in containers in the AWS Cloud. The applications are stateless and can tolerate disruptions.
What should a solutions architect do to meet these requirements?

  • A. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers.
  • B. Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
  • C. Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers.
  • D. Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.

Answer: A

NEW QUESTION 15
A company has two AWS accounts in the same AWS Region. One account is a publisher account, and the other account is a subscriber account. Each account has its own Amazon S3 bucket.
An application puts media objects into the publisher account's S3 bucket. The objects are encrypted with server-side encryption with customer-provided encryption keys (SSE-C). The company needs a solution that will automatically copy the objects to the subscriber account's S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Enable S3 Versioning on the publisher account's S3 bucket. Configure S3 Same-Region Replication of the objects to the subscriber account's S3 bucket.
  • B. Create an AWS Lambda function that is invoked when objects are published in the publisher account's S3 bucket. Configure the Lambda function to copy the objects to the subscriber account's S3 bucket.
  • C. Configure Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function when objects are published in the publisher account's S3 bucket. Configure the Lambda function to copy the objects to the subscriber account's S3 bucket.
  • D. Configure Amazon EventBridge (Amazon CloudWatch Events) to publish Amazon Simple Notification Service (Amazon SNS) notifications when objects are published in the publisher account's S3 bucket. When notifications are received, use the S3 console to copy the objects to the subscriber account's S3 bucket.

Answer: B

NEW QUESTION 16
A company has an application that processes customer orders. The company hosts the application on an Amazon EC2 instance that saves the orders to an Amazon Aurora database. Occasionally when traffic is high, the workload does not process orders fast enough.
What should a solutions architect do to write the orders reliably to the database as quickly as possible?

  • A. Increase the instance size of the EC2 instance when traffic is high. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic.
  • B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database.
  • C. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SNS topic.
  • D. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue when the EC2 instance reaches the CPU threshold limit. Use scheduled scaling of EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database.

Answer: B

NEW QUESTION 17
A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options.
What should a solutions architect propose to improve the performance of the workload?

  • A. Choose a cluster placement group while launching Amazon EC2 instances.
  • B. Choose dedicated instance tenancy while launching Amazon EC2 instances.
  • C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
  • D. Choose the required capacity reservation while launching Amazon EC2 instances.

Answer: A

Explanation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-placementgroup.html "A cluster placement group is a logical grouping of instances within a single Availability Zone that benefit from low network latency, high network throughput"
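A minimal boto3 sketch of the fix, assuming a hypothetical group name, AMI ID, and instance type: create the cluster placement group first, then launch all the nodes into it.

```python
# Sketch only: the group name, AMI ID, and instance type are hypothetical.
GROUP_NAME = "hpc-cluster-pg"  # hypothetical placement group name

def launch_params(count):
    """Build RunInstances parameters that place every node in the same
    cluster placement group for low-latency node-to-node networking."""
    return {
        "ImageId": "ami-0123456789abcdef0",   # hypothetical AMI
        "InstanceType": "c5n.18xlarge",        # hypothetical network-optimized type
        "MinCount": count,
        "MaxCount": count,
        "Placement": {"GroupName": GROUP_NAME},
    }

def launch_cluster(count):
    """Create the placement group, then launch the nodes into it
    (requires AWS credentials)."""
    import boto3  # AWS SDK; only needed when the calls are actually made
    ec2 = boto3.client("ec2")
    ec2.create_placement_group(GroupName=GROUP_NAME, Strategy="cluster")
    ec2.run_instances(**launch_params(count))
```

Launching all instances in a single request, as above, also improves the odds that EC2 can place the whole group on closely located hardware.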

NEW QUESTION 18
A company wants to migrate its existing on-premises monolithic application to AWS.
The company wants to keep as much of the front-end code and the backend code as possible. However, the company wants to break the application into smaller applications. A different team will manage each application. The company needs a highly scalable solution that minimizes operational overhead.
Which solution will meet these requirements?

  • A. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.
  • B. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is integrated with AWS Lambda.
  • C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in an Auto Scaling group as targets.
  • D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the target.

Answer: D

Explanation:
Containers on Amazon ECS let the company reuse most of its existing code, split the monolith into smaller services owned by different teams, and scale behind an Application Load Balancer with minimal operational overhead.

NEW QUESTION 19
A company wants to run a gaming application on Amazon EC2 instances that are part of an Auto Scaling group in the AWS Cloud. The application will transmit data by using UDP packets. The company wants to ensure that the application can scale out and in as traffic increases and decreases.
What should a solutions architect do to meet these requirements?

  • A. Attach a Network Load Balancer to the Auto Scaling group.
  • B. Attach an Application Load Balancer to the Auto Scaling group.
  • C. Deploy an Amazon Route 53 record set with a weighted policy to route traffic appropriately.
  • D. Deploy a NAT instance that is configured with port forwarding to the EC2 instances in the Auto Scaling group.

Answer: A

Explanation:
Network Load Balancers support UDP listeners; Application Load Balancers support only HTTP and HTTPS traffic.

NEW QUESTION 20
......

Thanks for reading the newest SAA-C03 exam dumps! We recommend you to try the PREMIUM Surepassexam SAA-C03 dumps in VCE and PDF here: https://www.surepassexam.com/SAA-C03-exam-dumps.html (0 Q&As Dumps)