AWS Certification

AWS exam questions, including recent ones; sharing a selection here.

1.

A Solutions Architect must design a highly available, stateless REST service. The service will require multiple persistent storage layers for service object meta information and for the delivery of content. Each request needs to be authenticated and securely processed. Costs must be kept as low as possible.

How can these requirements be met?

A. Use AWS Fargate to host a container that runs a self-contained REST service. Set up an Amazon ECS service that is fronted by an Application Load Balancer (ALB). Use a custom authenticator to control access to the API. Store request meta information in Amazon DynamoDB with Auto Scaling and static content in a secured S3 bucket. Make secure signed requests for Amazon S3 objects and proxy the data through the REST service interface.

B. Use AWS Fargate to host a container that runs a self-contained REST service. Set up an ECS service that is fronted by a cross-zone ALB. Use an Amazon Cognito user pool to control access to the API. Store request meta information in DynamoDB with Auto Scaling and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.

C. Set up Amazon API Gateway and create the required API resources and methods. Use an Amazon Cognito user pool to control access to the API. Configure the methods to use AWS Lambda proxy integrations, and process each resource with a unique AWS Lambda function. Store request meta information in DynamoDB with Auto Scaling and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.

D. Set up Amazon API Gateway and create the required API resources and methods. Use an Amazon API Gateway custom authorizer to control access to the API. Configure the methods to use AWS Lambda custom integrations, and process each resource with a unique Lambda function. Store request meta information in an Amazon ElastiCache Multi-AZ cluster and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.

C is the correct answer.

Although both donathon and moon agree, the considerations in this question are worth additional discussion.

    A. A custom authenticator is not the best option. Using Fargate is fine, but you need a better way to front it than an ALB/ELB, which has no security integrated into it.

    B. Similar issue as with A, although it is an improvement since it uses Cognito.

    C. This answer nails all the requirements and is my choice for the best answer. HOWEVER, D is preferable in some ways. Arguably, AWS Lambda custom integration (D) is preferable to AWS Lambda proxy integration. The other key part of this question is the “multiple persistent storage layers” requirement. Donathon and Moon stated that ElastiCache is NOT persistent; this is NOT necessarily true. ElastiCache offers two engine options: Memcached and Redis. Memcached does not provide persistent storage, but Redis DOES. As such, the use of ElastiCache in D does not by itself make D incorrect (see https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html: “Q: Does Amazon ElastiCache for Redis support Redis persistence? Yes, you can achieve persistence by snapshotting your Redis data using the Backup and Restore feature.”). ElastiCache can be a better fit here than DynamoDB. However, DynamoDB (answer C) would also work, and there is no ambiguity about it, versus the open question of which ElastiCache engine is being used (Memcached vs. Redis).

I initially felt strongly that “D” was the best answer, but weighing all these factors, I am now leaning towards “C”, and that would be my selection at this point.

    D. This answer meets all the requirements and is in some ways better than answer “C”. However, I have still selected “C”, per my explanation and my comments under “C”. One key difference between “C” and “D” is that “C” uses AWS Lambda proxy integration and “D” uses AWS Lambda custom integration. Advantages and disadvantages are listed in the following link: https://medium.com/@lakshmanLD/lambda-proxy-vs-lambda-integration-in-aws-api-gateway-3a9397af0e6d.

In a nutshell, custom integrations are more powerful, easier to document, and less prone to human error. The downside is that they are more work to implement. Since implementation time is not explicitly weighted in the question, this arguably makes D a better answer than “C”.
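The presigned-URL idea in answers B through D can be sketched with a stdlib-only toy. This is NOT real S3 Signature Version 4 signing (in practice the service would call an SDK helper to generate the URL); the signing key, bucket name, and URL layout below are made up for illustration.

```python
import hashlib
import hmac
import time

# Illustrative signing key; a real presigned URL embeds an AWS SigV4
# signature derived from IAM credentials, not a shared secret like this.
SECRET = b"demo-signing-key"
BASE_URL = "https://example-bucket.s3.amazonaws.com"  # hypothetical bucket

def presign(object_key, expires_in, now=None):
    # Sign "key:expiry" so the URL grants access only until the expiry time.
    expires_at = int((now if now is not None else time.time()) + expires_in)
    payload = f"{object_key}:{expires_at}".encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{BASE_URL}/{object_key}?Expires={expires_at}&Signature={signature}"

def verify(url, now=None):
    # What the storage side would do: check expiry, re-sign, and compare.
    path, _, query = url.partition("?")
    object_key = path.rsplit("/", 1)[1]
    params = dict(p.split("=", 1) for p in query.split("&"))
    if (now if now is not None else time.time()) > int(params["Expires"]):
        return False  # link has expired
    payload = f"{object_key}:{params['Expires']}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["Signature"])

url = presign("report.pdf", expires_in=300, now=1000.0)
assert verify(url, now=1100.0)      # still within the expiry window
assert not verify(url, now=2000.0)  # expired
```

The design point is that the client gets time-limited access to the object directly from storage, so the REST service never has to proxy the content bytes (the weakness of option A).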

2.

A company with several AWS accounts is using AWS Organizations and service control policies (SCPs). An Administrator created the following SCP and attached it to an organizational unit (OU) that contains AWS account 1111-1111-1111:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowsAllActions",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    },
    {
      "Sid": "DenyCloudTrail",
      "Effect": "Deny",
      "Action": "cloudtrail:*",
      "Resource": "*"
    }
  ]
}

Developers working in account 1111-1111-1111 complain that they cannot create Amazon S3 buckets. How should the Administrator address this problem?

A. Add s3:CreateBucket with “Allow” effect to the SCP.

B. Remove the account from the OU, and attach the SCP directly to account 1111-1111-1111

C. Instruct the Developers to add Amazon S3 permissions to their IAM entities.

D. Remove the SCP from account 1111-1111-1111

Answer: C

An SCP acts as a guardrail: it is combined with IAM policies using AND logic to determine the effective permissions. The SCP here allows all actions (except CloudTrail), so the Developers still need an IAM policy granting S3 permissions for bucket creation to work.

SCPs are necessary but not sufficient for granting access in the accounts in your organization. Attaching an SCP to the organization root or an organizational unit (OU) defines a guardrail for what actions accounts within the organization root or OU can do. You still need to attach IAM policies to users and roles in your organization's accounts to actually grant permissions to them. With an SCP attached to those accounts, identity-based and resource-based policies grant permissions to entities only if those policies and the SCP allow the action. If both a permissions boundary (an advanced IAM feature) and an SCP are present, then the boundary, the SCP, and the identity-based policy must all allow the action. For more information, see Policy Evaluation Logic in the IAM User Guide.
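The guardrail logic described above can be simulated with a short stdlib-only sketch. This is a deliberately simplified model: it ignores Resource and Condition elements, resource-based policies, and permissions boundaries, and only models the Allow/Deny intersection of an SCP with an identity-based policy.

```python
from fnmatch import fnmatch

def policy_decision(statements, action):
    """Deny-overrides evaluation of (effect, action_pattern) statements."""
    matched = [effect for effect, pattern in statements if fnmatch(action, pattern)]
    if "Deny" in matched:
        return "Deny"
    return "Allow" if "Allow" in matched else "ImplicitDeny"

def effective_access(scp, iam_policy, action):
    # An action succeeds only when BOTH the SCP and the identity-based
    # IAM policy allow it; either side can veto.
    return (policy_decision(scp, action) == "Allow"
            and policy_decision(iam_policy, action) == "Allow")

scp = [("Allow", "*"), ("Deny", "cloudtrail:*")]

# No IAM policy attached: nothing is allowed, even though the SCP allows "*".
assert not effective_access(scp, [], "s3:CreateBucket")

# Once an IAM policy grants s3:CreateBucket, the SCP lets it through.
iam = [("Allow", "s3:CreateBucket")]
assert effective_access(scp, iam, "s3:CreateBucket")

# CloudTrail actions stay blocked by the SCP's explicit Deny,
# regardless of what the IAM policy says.
iam_ct = [("Allow", "cloudtrail:StartLogging")]
assert not effective_access(scp, iam_ct, "cloudtrail:StartLogging")
```

This is why answer C is correct: the fix belongs on the IAM side, not in the SCP, which already allows S3 actions.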

3.

A Development team is deploying new APIs as serverless applications within a company. The team is currently using the AWS Management Console to provision Amazon API Gateway, AWS Lambda, and Amazon DynamoDB resources. A Solutions Architect has been tasked with automating the future deployments of these serverless APIs. How can this be accomplished?

    A. Use AWS CloudFormation with a Lambda-backed custom resource to provision API Gateway. Use the AWS::DynamoDB::Table and AWS::Lambda::Function resources to create the Amazon DynamoDB table and Lambda functions. Write a script to automate the deployment of the CloudFormation template.

    B. Use the AWS Serverless Application Model to define the resources. Upload a YAML template and application files to the code repository. Use AWS CodePipeline to connect to the code repository and to create an action to build using AWS CodeBuild. Use the AWS CloudFormation deployment provider in CodePipeline to deploy the solution.

    C. Use AWS CloudFormation to define the serverless application. Implement versioning on the Lambda functions and create aliases to point to the versions. When deploying, configure weights to implement shifting traffic to the newest version, and gradually update the weights as traffic moves over.

    D. Commit the application code to the AWS CodeCommit code repository. Use AWS CodePipeline and connect to the CodeCommit code repository. Use AWS CodeBuild to build and deploy the Lambda functions using AWS CodeDeploy. Specify the deployment preference type in CodeDeploy to gradually shift traffic over to the new version.

Answer: B

https://aws-quickstart.s3.amazonaws.com/quickstart-trek10-serverless-enterprise-cicd/doc/serverless-cicd-for-the-enterprise-on-the-aws-cloud.pdf

https://aws.amazon.com/quickstart/architecture/serverless-cicd-for-enterprise/

4.

A software engineer has chosen API Gateway with Lambda non-proxy (custom) integration to implement an application. The application is a data analysis tool that returns statistical results when its HTTP endpoint is called. The Lambda function communicates with back-end data services such as Keen.io, and errors can occur, such as wrong data being requested or communication failures. The function is written in Java and may throw two exceptions: BadRequestException and InternalErrorException. What should the software engineer do to map these two exceptions to proper HTTP status codes in API Gateway, for example, mapping BadRequestException to 400 and InternalErrorException to 500? (Select TWO.)

    A. Add the corresponding error codes (400 and 500) on the Integration Response in API Gateway.

    B. Add the corresponding error codes (400 and 500) on the Method Response in API Gateway.

    C. Put the mapping logic into the Lambda function itself so that when an exception happens, error codes are returned at the same time in a JSON body.

    D. Add Integration Responses where regular expression patterns such as BadRequest or InternalError are set. Associate them with HTTP status codes.

    E. Add Method Responses where regular expression patterns such as BadRequest or InternalError are set. Associate them with HTTP status codes 400 and 500.

Method Request and Method Response are the API's interface with the frontend (a client), whereas Integration Request and Integration Response are the API's interface with the backend; in this case, the backend is a Lambda function. To map exceptions coming from Lambda, the Integration Response is the correct place to configure the mapping. However, the corresponding error code (400) must be created on the Method Response first; otherwise, API Gateway throws an invalid configuration error at runtime.

Answer:BD

    Option A is incorrect: HTTP error codes are defined first in the Method Response, not the Integration Response.

    Option B is CORRECT: HTTP error codes are defined first in the Method Response (the same reason A is incorrect).

    Option C is incorrect: the Integration Response in API Gateway should be used instead. Refer to https://docs.aws.amazon.com/apigateway/latest/developerguide/handle-errors-in-lambda-integration.html on how to handle Lambda errors in API Gateway.

    Option D is CORRECT: BadRequest and InternalError should be mapped to 400 and 500 in the Integration Response settings.

    Option E is incorrect: the Method Response is the interface with the frontend; it does not deal with mapping responses from the Lambda backend.
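The Integration Response mechanism in option D can be simulated: API Gateway matches a selection pattern (a regex) against the error message returned by Lambda and picks the status code of the first Integration Response that fits. The patterns and table below are illustrative stand-ins for the console configuration, not exact values.

```python
import re

# Hypothetical selection patterns, mirroring the "Lambda error regex"
# configured per Integration Response in the API Gateway console.
SELECTION_PATTERNS = [
    (r".*BadRequestException.*", 400),
    (r".*InternalErrorException.*", 500),
]
DEFAULT_STATUS = 200  # the default integration response

def map_lambda_error(error_message):
    # API Gateway matches the patterns against the errorMessage field of
    # the Lambda error and uses the matching Integration Response's code.
    for pattern, status in SELECTION_PATTERNS:
        if re.match(pattern, error_message):
            return status
    return DEFAULT_STATUS

assert map_lambda_error("BadRequestException: wrong data requested") == 400
assert map_lambda_error("InternalErrorException: Keen.io unreachable") == 500
assert map_lambda_error('{"result": "ok"}') == 200  # normal responses pass through
```

Remember the option B half of the answer: the 400 and 500 codes must already exist on the Method Response, or API Gateway rejects the configuration at runtime.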

5.

A large company is migrating its entire IT portfolio to AWS. Each business unit in the company has a standalone AWS account that supports both development and test environments. New accounts to support production workloads will be needed soon.

The Finance department requires a centralized method for payment but must maintain visibility into each group's spending to allocate costs. The Security team requires a centralized mechanism to control IAM usage in all the company's accounts. Which combination of the following options meets the company's needs with the LEAST effort? (Choose two.)

    A. Use a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account. Require all new and existing accounts to launch the appropriate stacks to enforce the least privilege model.

    B. Use AWS Organizations to create a new organization from a chosen payer account and define an organizational unit hierarchy. Invite the existing accounts to join the organization and create new accounts using Organizations.

    C. Require each business unit to use its own AWS accounts. Tag each AWS account appropriately and enable Cost Explorer to administer chargebacks.

    D. Enable all features of AWS Organizations and establish appropriate service control policies that filter IAM permissions for sub-accounts.

    E. Consolidate all of the company's AWS accounts into a single AWS account. Use tags for billing purposes and IAM's Access Advisor feature to enforce the least privilege model.

Answer: BD

A: While CloudFormation is a good start, remember this does not prevent changes after the stack has been deployed.

B: This looks likely.

C: This does not allow Finance to view the bill in a centralized manner, which is a requirement.

D: This is the best way to meet the security requirements. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines.

E: It’s best to use different accounts for dev/test and prod.

6.

A company has a legacy application running on servers on premises. To increase the application's reliability, the company wants to gain actionable insights from application logs. A Solutions Architect has been given the following requirements for the solution:

✑ Aggregate logs using AWS.

✑ Automate log analysis for errors.

✑ Notify the Operations team when errors go beyond a specified threshold.

What solution meets the requirements?

A. Install Amazon Kinesis Agent on servers, send logs to Amazon Kinesis Data Streams and use Amazon Kinesis Data Analytics to identify errors, create an Amazon CloudWatch alarm to notify the Operations team of errors

B. Install an AWS X-Ray agent on servers, send logs to AWS Lambda and analyze them to identify errors, use Amazon CloudWatch Events to notify the Operations team of errors.

C. Install Logstash on servers, send logs to Amazon S3 and use Amazon Athena to identify errors, use sendmail to notify the Operations team of errors.

D. Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use metric filters to identify errors, create a CloudWatch alarm to notify the Operations team of errors.

Answer: A

https://docs.aws.amazon.com/kinesis-agent-windows/latest/userguide/what-is-kinesis-agent-windows.html

https://medium.com/@khandelwal12nidhi/build-log-analytic-solution-on-aws-cc62a70057b2
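Whichever ingestion path is chosen, the heart of the requirement is automated analysis: count error events per evaluation period and alert when a threshold is crossed. A stdlib-only simulation of that loop (the filter pattern and threshold below are made up, not real configuration values):

```python
# Hypothetical filter pattern and alarm threshold; in a real deployment
# these live in the log-analysis layer and the CloudWatch alarm.
FILTER_PATTERN = "ERROR"
THRESHOLD = 3  # alert when the per-period error count exceeds this

def error_count(log_events):
    # Count the log events in one period that match the pattern.
    return sum(1 for line in log_events if FILTER_PATTERN in line)

def alarm_state(count, threshold=THRESHOLD):
    # The alarm notifies the Operations team (e.g. via SNS) when breached.
    return "ALARM" if count > threshold else "OK"

period_logs = [
    "INFO request handled",
    "ERROR db timeout",
    "ERROR db timeout",
    "ERROR retry exhausted",
    "ERROR giving up",
]
count = error_count(period_logs)
assert count == 4
assert alarm_state(count) == "ALARM"
```

The same count-and-threshold logic sits behind both option A (Kinesis Data Analytics feeding a CloudWatch alarm) and option D (CloudWatch Logs metric filters feeding a CloudWatch alarm).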

7. A Solutions Architect is working with a company that operates a standard three-tier web application in AWS. The web and application tiers run on Amazon EC2 and the database tier runs on Amazon RDS. The company is redesigning the web and application tiers to use Amazon API Gateway and AWS Lambda, and the company intends to deploy the new application within 6 months. The IT Manager has asked the Solutions Architect to reduce costs in the interim.

Which solution will be MOST cost effective while maintaining reliability?

A. Use Spot Instances for the web tier, On-Demand Instances for the application tier, and Reserved Instances for the database tier.

B. Use On-Demand Instances for the web and application tiers, and Reserved Instances for the database tier.

C. Use Spot Instances for the web and application tiers, and Reserved Instances for the database tier.

D. Use Reserved Instances for the web, application, and database tiers.

Answer: B

B is the most cost-effective option that maintains reliability. Spot Instances (A and C) can be interrupted at any time, which is unacceptable for the web and application tiers of a production system. Reserved Instances require at least a one-year commitment, but the web and application tiers will be replaced by API Gateway and Lambda within 6 months, so reserving them (D) would waste money. The database tier will continue running on Amazon RDS after the redesign, so a Reserved Instance there reduces cost without sacrificing reliability.
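The trade-off behind answer B can be checked with back-of-the-envelope arithmetic. The hourly rates below are made up for illustration; real prices depend on instance type, Region, and RI payment option.

```python
# Hypothetical rates, not real AWS pricing.
HOURS_PER_MONTH = 730
ON_DEMAND_RATE = 0.40      # $/hour, made up
RI_EFFECTIVE_RATE = 0.25   # $/hour effective for a 1-year RI, made up

def on_demand_cost(months):
    # On-Demand: pay only for the months actually used.
    return ON_DEMAND_RATE * HOURS_PER_MONTH * months

def reserved_cost(months_used, term_months=12):
    # A 1-year Reserved Instance commits you to the full term,
    # even if the workload is retired early.
    return RI_EFFECTIVE_RATE * HOURS_PER_MONTH * term_months

# Web/application tier: retired in 6 months, so On-Demand is cheaper.
assert on_demand_cost(6) < reserved_cost(6)

# Database tier: keeps running past the redesign, so the RI wins.
assert reserved_cost(12) < on_demand_cost(12)
```

With these numbers, six months of On-Demand costs less than a one-year RI commitment, while a tier that runs the full year is cheaper reserved, which is exactly the split answer B makes.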

8. A company runs a legacy system on a single m4.2xlarge Amazon EC2 instance with Amazon EBS storage. The EC2 instance runs both the web server and a self-managed Oracle database. A snapshot is made of the EBS volume every 12 hours, and an AMI was created from the fully configured EC2 instance. A recent event that terminated the EC2 instance led to several hours of downtime. The application was successfully launched from the AMI, but the age of the EBS snapshot and the repair of the database resulted in the loss of 8 hours of data. The system was also down for 4 hours while the Systems Operators manually performed these processes.

What architectural changes will minimize downtime and reduce the chance of lost data?

    A. Create an Amazon CloudWatch alarm to automatically recover the instance. Create a script that will check and repair the database upon reboot. Subscribe the Operations team to the Amazon SNS message generated by the CloudWatch alarm.

      B. Run the application on m4.xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of two. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.

    C. Run the application on m4.2xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of one. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.

  D. Increase the web server instance count to two m4.xlarge instances and use Amazon Route 53 round-robin load balancing to spread the load. Enable Route 53 health checks on the web servers. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.

Answer: B is correct.

A: Does not address a loss of data since the last backup.

B: Ensures that there are at least two EC2 instances, each in a different AZ. It also ensures that the database spans multiple AZs. Hence this meets all the criteria.

C: Having Auto Scaling set to a minimum instance count of one means that if the single instance has a problem, it must be restarted, causing an outage during that restart. As such, B is a better answer.

D: Does not indicate that the two EC2 instances will be in different Availability Zones. If they are in the same AZ, that entire zone could theoretically have an outage. Given that, I would select B instead of D. Apart from that consideration, D does the trick.
