Download AWS Certified Solutions Architect PDF
Use the Amazon Cognito wizard to create an identity pool, which is a container that Amazon Cognito uses to keep end-user identities organized for your apps.

You can share identity pools between apps. When you set up an identity pool, Amazon Cognito creates one or two IAM roles (one for authenticated identities, and one for unauthenticated "guest" identities) that define permissions for Amazon Cognito users. When your app accesses an AWS resource, pass the credentials provider instance to the client object, which passes temporary security credentials to the client. The permissions for those credentials are based on the role or roles that you defined earlier.
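As a sketch of the two-role model described above, here is a toy Python function that picks a role for an identity and returns placeholder credentials. The role ARNs, account ID, and the shape of the credentials dict are invented for illustration; a real app obtains temporary credentials from the AWS SDK's Cognito credentials provider rather than building them by hand.

```python
# Toy model of an identity pool's role selection (not the real Cognito API).
# Role names and the credentials dict are hypothetical placeholders.
POOL_ROLES = {
    "authenticated": "arn:aws:iam::123456789012:role/MyAppAuthRole",
    "unauthenticated": "arn:aws:iam::123456789012:role/MyAppGuestRole",
}

def temporary_credentials(is_authenticated):
    # Cognito picks the role based on whether the identity is authenticated,
    # then exchanges the identity for short-lived keys scoped to that role.
    role = POOL_ROLES["authenticated" if is_authenticated else "unauthenticated"]
    return {"role_arn": role, "access_key_id": "ASIA...", "expires_in": 3600}

guest_creds = temporary_credentials(is_authenticated=False)
```

The point of the sketch is only that the permissions your app ends up with are entirely determined by which of the two pool roles the identity maps to.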

You anticipate a large and undetermined amount of traffic that will create many database writes. Which service should you use to be certain that you do not drop any writes to a database hosted on AWS? The candidate answers include Amazon ElastiCache to store the writes until they are committed to the database, and Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput. By using Amazon SQS, developers can simply move data between distributed application components performing different tasks, without losing messages or requiring each component to be always available.

Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while waiting for a computer to process them. This allows you to quickly build message queuing applications that can be run on any computer on the internet. Since Amazon SQS is highly scalable and you only pay for what you use, you can start small and grow your application as you wish, with no compromise on performance or reliability.

This lets you focus on building sophisticated message-based applications, without worrying about how the messages are stored and managed. You can use Amazon SQS with software applications in various ways.

Use Amazon SQS to create a queue of work where each message is a task that needs to be completed by a process. One or many computers can read tasks from the queue and perform them. Build a microservices architecture, using queues to connect your microservices. Keep notifications of significant events in a business process in an Amazon SQS queue. Each event can have a corresponding message in a queue, and applications that need to be aware of the event can read and process the messages.
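The work-queue pattern above can be sketched with a small in-memory stand-in for SQS. The class below is a simplified model, not the real SQS API: it mimics the send/receive/delete lifecycle and the visibility timeout, which is what lets one of many consumers claim a task without other consumers seeing it, while undeleted (i.e., unfinished) tasks eventually become visible again.

```python
import time

class ToyQueue:
    """In-memory stand-in for an SQS queue: a message stays in the queue
    until a consumer explicitly deletes it, modeling at-least-once delivery."""

    def __init__(self, visibility_timeout=30.0):
        self.visibility_timeout = visibility_timeout
        self._messages = {}   # handle -> [body, invisible_until]
        self._next_handle = 0

    def send_message(self, body):
        handle = str(self._next_handle)
        self._next_handle += 1
        self._messages[handle] = [body, 0.0]
        return handle

    def receive_message(self, now=None):
        now = time.monotonic() if now is None else now
        for handle, entry in self._messages.items():
            if entry[1] <= now:
                # Hide the message from other consumers for the timeout window.
                entry[1] = now + self.visibility_timeout
                return handle, entry[0]
        return None

    def delete_message(self, handle):
        self._messages.pop(handle, None)

q = ToyQueue(visibility_timeout=30.0)
q.send_message("resize image 42")
handle, task = q.receive_message(now=0.0)
# A second consumer polling during the visibility window sees nothing.
assert q.receive_message(now=1.0) is None
q.delete_message(handle)  # task completed; message removed for good
```

If the worker crashes before calling `delete_message`, the message reappears after the visibility timeout and another worker picks it up, which is exactly the "no lost work" property the question is after.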

What is the problem and a valid solution? Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and file system to use 64KB blocks to increase throughput. Answer: E

Question: 8
You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of sensors for 3 months; each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS.

During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The pilot is considered a success, and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements.

To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements? Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS. Answer: C

Question: 9
Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health-trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform, ensuring the following requirements are met.

Provide the ability for real-time analytics of the inbound biometric data. Ensure processing of the biometric data is highly durable, elastic, and parallel. The results of the analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?

Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline, and save the results to a Redshift cluster. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to a Redshift cluster using EMR.

Question: 10
Call duration is mostly in the minutes timeframe. Each traced call can be either active or terminated. Historical data is periodically archived to files. Cost saving is a priority for this project. Which database implementation would better fit this scenario, keeping costs as low as possible? Use DynamoDB with a "Calls" table and a global secondary index on a "State" attribute that can equal "active" or "terminated"; in this way the global secondary index can be used for all items in the table.
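To illustrate why a global secondary index on "State" helps, here is a toy in-memory model of a "Calls" table with a hand-maintained index. Real DynamoDB maintains GSIs automatically, and the item attribute names here are assumed for illustration; the point is that a query by state touches only the indexed call IDs instead of scanning the whole table.

```python
# Toy model of a "Calls" table plus a global secondary index on "State".
calls = {}         # primary key (call_id) -> item
state_index = {}   # GSI: State value -> set of call_ids

def put_call(call_id, state, duration_s):
    old = calls.get(call_id)
    if old:
        # Keep the index consistent when an item's State changes.
        state_index[old["State"]].discard(call_id)
    calls[call_id] = {"CallId": call_id, "State": state,
                      "DurationSeconds": duration_s}
    state_index.setdefault(state, set()).add(call_id)

def query_by_state(state):
    # Equivalent of a GSI Query: no full-table scan, just an index lookup.
    return [calls[cid] for cid in state_index.get(state, ())]

put_call("c1", "active", 40)
put_call("c2", "terminated", 180)
put_call("c1", "terminated", 95)   # state transition updates the index too
```

Because every item carries a "State" value, every item appears in the index, which is what the answer means by "the global secondary index can be used for all items in the table".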

Answer: A

Question: 11
A web design company currently runs several FTP servers that their customers use to upload and download large graphic files. They wish to move this system to AWS to make it more scalable, but they wish to maintain customer privacy and keep costs to a minimum. What AWS architecture would you recommend?

Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer. Create an auto-scaling group of FTP servers with a scaling policy to automatically scale in when network traffic on the auto-scaling group falls below a given threshold.

Load a central list of FTP users from S3 as part of the User Data startup script on each instance. Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer. Answer: A

Question: 12
You have been asked to design the storage layer for an application. The application requires disk performance of at least IOPS; in addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB. Which of the following designs will meet these objectives? Instantiate a c3 instance and ensure that EBS snapshots are performed every 15 minutes. Instantiate an i2 instance and configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume. Attach the volume to the instance and configure synchronous, block-level replication to an identically configured instance in us-east-1b.

Answer: C

Question: 13
You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? Choose 2 answers: A. Route 53 Record Sets, B. IAM Roles, C. EC2 Key Pairs, E. Launch configurations. When deploying this application in a region with three Availability Zones (AZs), which architecture provides high availability?

If a storage volume on your primary instance fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. To improve performance, you recently added a single-node Memcached ElastiCache cluster to cache frequent DB query results. Do you need to change anything in the architecture to maintain the high availability of the application with the anticipated additional load?

No, if the cache node fails you can always get the same data from the DB without having any availability impact. No, if the cache node fails, the automated ElastiCache node recovery feature will prevent any availability impact. Answer: A

Explanation: ElastiCache for Memcached. The primary goal of caching is typically to offload reads from your database or other primary data source.

In most apps, you have hot spots of data that are regularly queried, but only updated periodically. Think of the front page of a blog or news site, or the top leaderboard in an online game.

In this type of case, your app can receive dozens, hundreds, or even thousands of requests for the same data before it's updated again. Having your caching layer handle these queries has several advantages. First, it's considerably cheaper to add an in-memory cache than to scale up to a larger database cluster. Second, an in-memory cache is also easier to scale out, because it's easier to distribute an in-memory cache horizontally than a relational database.

Last, a caching layer provides a request buffer in the event of a sudden spike in usage. If your app or game ends up on the front page of Reddit or the App Store, it's not unheard of to see a spike of 10 times or more your normal application load. Even if you autoscale your application instances, a 10x request spike will likely make your database very unhappy. Let's focus on ElastiCache for Memcached first, because it is the best fit for a caching-focused solution.

We'll revisit Redis later in the paper, and weigh its advantages and disadvantages.
