NEW QUESTION 1
Due to compliance regulations, management has asked you to provide a system that allows for cost-effective long-term storage of your application logs and provides a way for support staff to view the logs more quickly. Currently your log system archives logs automatically to Amazon S3 every hour, and support staff must wait for these logs to appear in Amazon S3, because they do not currently have access to the systems to view live logs. What method should you use to become compliant while also providing a faster way for support staff to have access to logs?
A. Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier, and add a new policy to push all log entries to Amazon SQS for ingestion by the support team.
B. Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier, and use or write a service to also stream your application logs to CloudWatch Logs.
C. Update Amazon Glacier lifecycle policies to pull new logs from Amazon S3, and in the Amazon EC2 console, enable the CloudWatch Logs Agent on all of your application servers.
D. Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier. Enable Amazon S3 partial uploads on your Amazon S3 bucket, and trigger an Amazon SNS notification when a partial upload occurs.
E. Use or write a service to stream your application logs to CloudWatch Logs. Use an Amazon Elastic MapReduce cluster to live stream your logs from CloudWatch Logs for ingestion by the support team, and create a Hadoop job to push the logs to S3 in five-minute chunks.
Answer: B
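For concreteness, here is a minimal sketch of option B's two halves using boto3 (the AWS SDK for Python). The bucket, log group, and stream names are hypothetical placeholders, the retention age should follow your compliance rules, and the log group and stream are assumed to already exist.

```python
# Minimal sketch of option B, assuming boto3 with configured credentials.
import time
import boto3

s3 = boto3.client("s3")
logs = boto3.client("logs")

# Compliance half: a lifecycle rule moves the hourly archives to Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-logs",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)

# Support half: stream log lines to CloudWatch Logs for near-real-time
# viewing (log group and stream assumed to already exist).
logs.put_log_events(
    logGroupName="/app/production",  # hypothetical
    logStreamName="web-1",           # hypothetical
    logEvents=[{"timestamp": int(time.time() * 1000),
                "message": "request handled"}],
)
```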
NEW QUESTION 2
You want to securely distribute credentials for your Amazon RDS instance to your fleet of web server instances. The credentials are stored in a file that is controlled by a configuration management system. How do you securely deploy the credentials in an automated manner across the fleet of web server instances, which can number in the hundreds, while retaining the ability to roll back if needed?
A. Store your credential files in an Amazon S3 bucket.
Use Amazon S3 server-side encryption on the credential files.
Have a scheduled job that pulls down the credential files into the instances every 10 minutes.
B. Store the credential files in your version-controlled repository with the rest of your code.
Have a post-commit action in version control that kicks off a job in your continuous integration system which securely copies the new credential files to all web server instances.
C. Insert credential files into user data and use an instance lifecycle policy to periodically refresh the file from the user data.
D. Keep credential files as a binary blob in an Amazon RDS MySQL DB instance, and have a script on each Amazon EC2 instance that pulls the files down from the RDS instance.
E. Store the credential files in your version-controlled repository with the rest of your code.
Use a parallel file copy program to send the credential files from your local machine to the Amazon EC2 instances.
Answer: A
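As a rough illustration of option A's mechanics, the boto3 sketch below shows the publish side and the instance-side pull. The bucket name, object key, and local paths are hypothetical, and the instances are assumed to have a role granting s3:GetObject on the bucket.

```python
# Sketch of option A, assuming boto3 and an instance role with s3:GetObject.
import boto3

s3 = boto3.client("s3")

# Publisher side: the configuration management system pushes a new
# credential file, encrypted at rest via S3 server-side encryption.
with open("db-credentials.json", "rb") as f:
    s3.put_object(
        Bucket="example-credential-store",   # hypothetical
        Key="web/db-credentials.json",
        Body=f,
        ServerSideEncryption="AES256",
    )

# Instance side: a cron job (e.g. */10 * * * *) pulls the current file.
# Rolling back is just re-uploading the previous version of the file.
s3.download_file("example-credential-store", "web/db-credentials.json",
                 "/etc/app/db-credentials.json")
```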
NEW QUESTION 3
What is web identity federation?
A. Use of an identity provider like Google or Facebook to become an AWS IAM User.
B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
C. Use of AWS IAM User tokens to log in as a Google or Facebook user.
D. Use of AWS STS Tokens to log in as a Google or Facebook user.
Answer: B
Explanation:
... users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon,
Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token,
and then exchange that token for temporary security credentials in AWS that map to an IAM role with
permissions to use the resources in your AWS account.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
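A hedged sketch of this exchange with boto3's STS client is below. The role ARN is a placeholder, and the web identity token would come from the IdP's sign-in flow rather than being hard-coded as shown.

```python
# Sketch of the web identity token exchange, assuming boto3.
import boto3

sts = boto3.client("sts")

resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppFederatedRole",  # hypothetical
    RoleSessionName="web-user-session",
    WebIdentityToken="<OIDC id_token from Google/Facebook/etc.>",  # placeholder
    DurationSeconds=3600,
)

creds = resp["Credentials"]
# Temporary credentials scoped to the role's permissions:
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```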
NEW QUESTION 4
Which of these techniques enables the fastest possible rollback times in the event of a failed
deployment?
A. Rolling; Immutable
B. Rolling; Mutable
C. Canary or A/B
D. Blue-Green
Answer: D
Explanation:
AWS specifically recommends Blue-Green for fast, zero-downtime deployments, and therefore fast rollbacks: rolling back simply means shifting traffic back to the old (blue) stack, with no redeploy of old code required.
You use various strategies to migrate the traffic from your current application stack (blue) to a new version
of the application (green). This is a popular technique for deploying applications with zero downtime.
Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
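To illustrate why the rollback is fast, here is a minimal boto3 sketch in which the cutover, and equally the rollback, is a single Route53 record change rather than a redeploy. The hosted zone ID, record name, and ELB DNS names are assumptions.

```python
# Blue-Green traffic switch sketch, assuming boto3 and an existing zone.
import boto3

r53 = boto3.client("route53")

def point_traffic_at(elb_dns_name):
    r53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # hypothetical
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": elb_dns_name}],
                },
            }]
        },
    )

point_traffic_at("green-elb-1234.us-east-1.elb.amazonaws.com")  # deploy
point_traffic_at("blue-elb-5678.us-east-1.elb.amazonaws.com")   # rollback
```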
NEW QUESTION 5
Your system automatically provisions EIPs to EC2 instances in a VPC on boot. The system provisions the
whole VPC and stack at once, allocating two EIPs per VPC. On your new AWS account, your attempt
to create a Development environment failed, after you successfully created Staging and Production environments in the same region. What happened?
A. You didn't choose the Development version of the AMI you are using.
B. You didn't set the Development flag to true when deploying EC2 instances.
C. You hit the soft limit of 5 EIPs per region and requested a 6th.
D. You hit the soft limit of 2 VPCs per region and requested a 3rd.
Answer: C
Explanation:
There is a soft limit of 5 EIPs per Region for VPC on new accounts. The third environment could not
allocate the 6th EIP.
Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_vpc
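The failure mode can be reproduced with a short boto3 sketch: allocating a sixth VPC EIP on a fresh account raises an AddressLimitExceeded error.

```python
# Sketch of hitting the EIP soft limit, assuming boto3 default credentials.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

try:
    addr = ec2.allocate_address(Domain="vpc")
    print("Allocated", addr["PublicIp"])
except ClientError as err:
    if err.response["Error"]["Code"] == "AddressLimitExceeded":
        print("Hit the soft EIP limit; request an increase via AWS Support.")
    else:
        raise
```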
NEW QUESTION 6
When thinking of AWS Elastic Beanstalk, which statement is true?
A. Worker tiers pull jobs from SNS.
B. Worker tiers pull jobs from HTTP.
C. Worker tiers pull jobs from JSON.
D. Worker tiers pull jobs from SQS.
Answer: D
Explanation:
Elastic Beanstalk installs a daemon on each Amazon EC2 instance in the Auto Scaling group to process Amazon SQS messages in the worker environment. The daemon pulls data off the Amazon SQS queue, inserts it into the message body of an HTTP POST request, and sends it to a user-configurable URL path on the local host. The content type for the message body within an HTTP POST request is application/json by default.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html
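A minimal sketch of the receiving end, assuming the Flask microframework, is shown below; the URL path is whatever you configure for the worker environment. Returning 200 acknowledges the message so the daemon deletes it from the queue; any other status makes it visible again for retry.

```python
# Worker-tier endpoint sketch, assuming Flask is available. The Elastic
# Beanstalk daemon pulls a message from SQS and POSTs its body (JSON by
# default) to this path on localhost.
from flask import Flask, request

app = Flask(__name__)

@app.route("/worker/queue", methods=["POST"])  # path is user-configurable
def handle_job():
    job = request.get_json(force=True)
    print("processing job:", job)  # real work goes here
    return "", 200                 # 200 deletes the SQS message

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```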
NEW QUESTION 7
You need to grant a vendor access to your AWS account. They need to be able to read protected
messages in a private S3 bucket at their leisure. They also use AWS. What is the best way to accomplish
this?
A. Create an IAM User with API Access Keys. Grant the User permissions to access the bucket. Give
the vendor the AWS Access Key ID and AWS Secret Access Key for the User.
B. Create an EC2 Instance Profile on your account. Grant the associated IAM role full access to the
bucket. Start an EC2 instance with this Profile and give SSH access to the instance to the vendor.
C. Create a cross-account IAM Role with permission to access the bucket, and grant permission to use
the Role to the vendor AWS account.
D. Generate a signed S3 PUT URL and a signed S3 GET URL, both with wildcard values and 2-year
durations. Pass the URLs to the vendor.
Answer: C
Explanation:
When third parties require access to your organization's AWS resources, you can use roles to delegate
access to them. For example, a third party might provide a service for managing your AWS resources.
With IAM roles, you can grant these third parties access to your AWS resources without sharing your
AWS security credentials. Instead, the third party can access your AWS resources by assuming a role
that you create in your AWS account.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
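From the vendor's side, option C boils down to a single sts:AssumeRole call. The sketch below assumes boto3; the role ARN, external ID, and bucket name are placeholders.

```python
# Cross-account role assumption sketch, assuming boto3.
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/VendorS3ReadRole",  # hypothetical
    RoleSessionName="vendor-audit",
    ExternalId="vendor-shared-secret",  # recommended for third-party access
)

creds = resp["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
body = s3.get_object(Bucket="example-private-bucket",
                     Key="messages/msg-001.txt")["Body"].read()
```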
NEW QUESTION 8
You were just hired as a DevOps Engineer for a startup. Your startup uses AWS for 100% of their infrastructure. They currently have no automation at all for deployment, and they have had many failures while trying to deploy to production. The company has told you deployment process risk mitigation is the most important thing now, and you have a lot of budget for tools and AWS resources.
Their stack:
2-tier API
Data stored in DynamoDB or S3, depending on type
Compute layer is EC2 in Auto Scaling Groups
They use Route53 for DNS pointing to an ELB
An ELB balances load across the EC2 instances
The scaling group properly varies between 4 and 12 EC2 servers.
Which of the following approaches, given this company's stack and their priorities, best meets the
company's needs?
A. Model the stack in AWS Elastic Beanstalk as a single Application with multiple Environments. Use Elastic Beanstalk's Rolling Deploy option to progressively roll out application code changes when promoting across environments.
B. Model the stack in 3 CloudFormation templates: Data layer, compute layer, and networking layer.
Write stack deployment and integration testing automation following Blue-Green methodologies.
C. Model the stack in AWS OpsWorks as a single Stack, with 1 compute layer and its associated ELB.
Use Chef and App Deployments to automate Rolling Deployment.
D. Model the stack in 1 CloudFormation template, to ensure consistency and dependency graph
resolution. Write deployment and integration testing automation following Rolling Deployment
methodologies.
Answer: B
Explanation:
AWS recommends Blue-Green for zero-downtime deployments. Since the stack stores data in DynamoDB, and neither AWS OpsWorks nor AWS Elastic Beanstalk can model DynamoDB resources directly, the option combining CloudFormation with Blue-Green methodologies is correct.
You use various strategies to migrate the traffic from your current application stack (blue) to a new
version of the application (green). This is a popular technique for deploying applications with zero downtime.
The deployment services like AWS Elastic Beanstalk, AWS CloudFormation, or AWS OpsWorks are
particularly useful as they provide a simple way to clone your running application stack. You can set
up a new version of your application (green) by simply cloning the current version of the application (blue).
Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
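A sketch of the clone step with CloudFormation and boto3 follows. The template file, stack names, and parameters are hypothetical; the DNS cutover afterward would look like the Route53 sketch under Question 4.

```python
# Blue-Green clone step sketch, assuming boto3 and an existing template.
import boto3

cfn = boto3.client("cloudformation")

with open("compute-layer.yaml") as f:  # hypothetical template file
    template = f.read()

cfn.create_stack(
    StackName="api-compute-green",
    TemplateBody=template,
    Parameters=[{"ParameterKey": "AppVersion",
                 "ParameterValue": "v2"}],  # hypothetical parameter
    Capabilities=["CAPABILITY_IAM"],
)
cfn.get_waiter("stack_create_complete").wait(StackName="api-compute-green")
# ...run integration tests against the green ELB, then shift DNS;
# rollback is deleting the green stack and leaving blue untouched.
```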
NEW QUESTION 9
Your system uses a multi-master, multi-region DynamoDB configuration spanning two regions to achieve high availability. For the first time since launching your system, one of the AWS Regions in which you operate went down for 3 hours, and the failover worked correctly. However, after recovery, your users are experiencing strange bugs, in which users on different sides of the globe see different data.
What is a likely design issue that was not accounted for when launching?
A. The system does not have Lambda Functor Repair Automatons to perform table scans and check for
corrupted partition blocks inside the Table in the recovered Region.
B. The system did not implement DynamoDB Table Defragmentation for restoring partition performance in
the Region that experienced an outage, so data is served stale.
C. The system did not include repair logic and request replay buffering logic for post-failure, to re-synchronize data to the Region that was unavailable for a number of hours.
D. The system did not use DynamoDB Consistent Read requests, so the requests in different areas are not utilizing consensus across Regions at runtime.
Answer: C
Explanation:
When using multi-region DynamoDB systems, it is of paramount importance to make sure that all requests made to one Region are replicated to the other. Under normal operation, the system in question would correctly perform write replays into the other Region. If a whole Region went down, the system would be unable to perform these writes for the period of downtime. Without buffering write requests somehow, there would be no way for the system to replay dropped cross-region writes, and the requests would be serviced differently depending on the Region from which they were served after recovery.
Reference:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html
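One way to picture the missing piece is a small buffering layer, sketched below with boto3 under assumed table, queue, and Region names: cross-region writes that fail are parked in SQS and replayed once the peer Region recovers.

```python
# Write-replay buffering sketch, assuming boto3; all names are hypothetical.
import json
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

remote = boto3.resource("dynamodb", region_name="eu-west-1").Table("users")
buffer_q = boto3.resource("sqs").get_queue_by_name(QueueName="replay-buffer")

def replicate(item):
    try:
        remote.put_item(Item=item)
    except (ClientError, EndpointConnectionError):
        # Peer Region unreachable: buffer the write for later replay.
        buffer_q.send_message(MessageBody=json.dumps(item))

def replay_buffered():
    # Drain the buffer once the Region is healthy again.
    for msg in buffer_q.receive_messages(MaxNumberOfMessages=10):
        remote.put_item(Item=json.loads(msg.body))
        msg.delete()
```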
NEW QUESTION 10
You run a clustered NoSQL database on AWS EC2 using AWS EBS. You need to reduce latency for database response times. Performance is the most important concern, not availability. You did not perform the initial setup; someone without much AWS knowledge did, so you are not sure whether everything was configured optimally. Which of the following is NOT likely to be an issue contributing to increased latency?
A. The EC2 instances are not EBS Optimized.
B. The database and requesting system are both in the wrong Availability Zone.
C. The EBS Volumes are not using PIOPS.
D. The database is not running in a placement group.
Answer: B
Explanation:
For the highest possible performance, all instances in a clustered database like this one should be in a single Availability Zone in a placement group, using EBS optimized instances, and using PIOPS SSD EBS Volumes. The particular Availability Zone the system is running in should not be important, as long as it is the same as the requesting resources.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
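Pulling the three real factors together, a latency-oriented launch might look like the boto3 sketch below. The AMI ID, instance type, volume sizing, and placement group name are placeholders, and the placement group is assumed to already exist.

```python
# Latency-tuned cluster launch sketch, assuming boto3.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # hypothetical AMI
    InstanceType="r5.2xlarge",              # placeholder type
    MinCount=3, MaxCount=3,                 # cluster nodes
    EbsOptimized=True,                      # dedicated EBS bandwidth
    Placement={"GroupName": "db-cluster"},  # cluster placement group, one AZ
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sdf",
        # Provisioned IOPS SSD for consistent storage latency.
        "Ebs": {"VolumeSize": 500, "VolumeType": "io1", "Iops": 10000},
    }],
)
```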