A company wants to provide its data analysts with uninterrupted access to the data in its Amazon Redshift cluster. All data is streamed to an Amazon S3 bucket with Amazon Kinesis Data Firehose. An AWS Glue job that is scheduled to run every 5 minutes issues a COPY command to move the data into Amazon Redshift. The amount of data delivered is uneven throughout the day, and cluster utilization is high during certain periods. The COPY command usually completes within a couple of seconds. However, when load spikes occur, locks can exist and data can be missed. Currently, the AWS Glue job is configured with 0 retries, a timeout of 5 minutes, and a job concurrency of 1. How should a data analytics specialist configure the AWS Glue job to optimize fault tolerance and improve data availability in the Amazon Redshift cluster?
A) Increase the number of retries. Decrease the timeout value. Increase the job concurrency.
B) Keep the number of retries at 0. Decrease the timeout value. Increase the job concurrency.
C) Keep the number of retries at 0. Decrease the timeout value. Keep the job concurrency at 1.
D) Keep the number of retries at 0. Increase the timeout value. Keep the job concurrency at 1.
Correct Answer:
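For context, the three settings the options vary (retries, timeout, and job concurrency) are plain fields on the Glue job definition. The sketch below is a minimal illustration, assuming boto3 with configured credentials and a hypothetical job name; the specific values are placeholders, not an endorsement of any option. Because Glue's UpdateJob call overwrites the whole job definition, the existing role and command are copied from the current definition first.

import boto3

glue = boto3.client("glue")

JOB_NAME = "redshift-copy-job"  # hypothetical job name

# UpdateJob overwrites the entire job definition, so carry over the
# existing role and command rather than dropping them.
current = glue.get_job(JobName=JOB_NAME)["Job"]

glue.update_job(
    JobName=JOB_NAME,
    JobUpdate={
        "Role": current["Role"],
        "Command": current["Command"],
        "MaxRetries": 0,                                # number of retries (placeholder)
        "Timeout": 10,                                  # timeout, in minutes (placeholder)
        "ExecutionProperty": {"MaxConcurrentRuns": 1},  # job concurrency (placeholder)
    },
)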