We provide AWS Certified Solutions Architect Professional exam dumps, which are the best for clearing the AWS-Certified-Solutions-Architect-Professional test and for getting certified by Amazon in AWS-Certified-Solutions-Architect-Professional. The AWS Certified Solutions Architect Professional exam dumps cover all the knowledge points of the real AWS-Certified-Solutions-Architect-Professional exam. Crack your Amazon AWS-Certified-Solutions-Architect-Professional exam with the latest dumps, guaranteed!

We also have free AWS-Certified-Solutions-Architect-Professional dumps questions for you:

NEW QUESTION 1
You want to use Amazon Redshift and you are planning to deploy dw1.8xlarge nodes. What is the minimum number of nodes that you need to deploy with this kind of configuration?

  • A. 1
  • B. 4
  • C. 3
  • D. 2

Answer: D

Explanation: For a single-node configuration in Amazon Redshift, the only option available is the smaller of the two node sizes. The 8XL extra-large nodes are available only in a multi-node configuration, which requires a minimum of two nodes.
Reference: http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html

NEW QUESTION 2
Your customer wishes to deploy an enterprise application to AWS which will consist of several web servers, several application servers and a small (50GB) Oracle database. Information is stored both in the database and in the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database.
Which backup architecture will meet these requirements?

  • A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs, and supplement with file-level backup to S3 using traditional enterprise backup software to provide file-level restore.
  • B. Backup RDS using a Multi-AZ Deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file-level restore.
  • C. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file-level restore.
  • D. Backup the RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.

Answer: A

NEW QUESTION 3
You have been asked to design the storage layer for an application. The application requires disk performance of at least 100,000 IOPS. In addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB. Which of the following designs will meet these objectives?

  • A. Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume.
  • B. Ensure that EBS snapshots are performed every 15 minutes.
  • C. Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume.
  • D. Ensure that EBS snapshots are performed every 15 minutes.
  • E. Instantiate an i2.8xlarge instance in us-east-1.
  • F. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance.
  • G. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume.
  • H. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume.
  • I. Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS.
  • J. Attach the volume to the instance.
  • K. Instantiate an i2.8xlarge instance in us-east-1.
  • L. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance.
  • M. Configure synchronous, block-level replication to an identically configured instance in us-east-1b.

Answer: C

NEW QUESTION 4
Can Provisioned IOPS be used on RDS instances launched in a VPC?

  • A. Yes, they can be used only with Oracle based instances.
  • B. Yes, they can be used for all RDS instances.
  • C. No
  • D. Yes, they can be used only with MySQL based instances.

Answer: B

Explanation: The basic building block of Amazon RDS is the DB instance. DB instance storage comes in three types: Magnetic, General Purpose (SSD), and Provisioned IOPS (SSD). When you buy a server, you get CPU, memory, storage, and IOPS, all bundled together. With Amazon RDS, these are split apart so that you can scale them independently. So, for example, if you need more CPU, less IOPS, or more storage, you can easily allocate them.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/RDSFAQ.PIOPS.html
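The "scale them independently" point can be sketched as the parameters such a request would carry. The shape below mirrors the keyword arguments of boto3's `rds.create_db_instance`; the identifier, instance class, and sizes are illustrative assumptions, not values from the question:

```python
# Illustrative RDS parameters showing compute, storage, and IOPS chosen
# independently (mirrors boto3's rds.create_db_instance keywords; all
# concrete values here are made-up examples).
db_params = {
    "DBInstanceIdentifier": "example-mysql-db",  # hypothetical name
    "Engine": "mysql",
    "DBInstanceClass": "db.m3.xlarge",  # CPU/memory picked on its own
    "AllocatedStorage": 100,            # GB of storage, scaled separately
    "StorageType": "io1",               # Provisioned IOPS (SSD)
    "Iops": 1000,                       # IOPS provisioned separately
}

# RDS enforces an IOPS-to-storage ratio; a quick sanity check:
iops_per_gb = db_params["Iops"] / db_params["AllocatedStorage"]
print(iops_per_gb)  # 10.0
```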

NEW QUESTION 5
A user has created a VPC with public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.0.0/24. The NAT instance ID is i-a12345. Which of the below mentioned entries is required in the main route table attached to the private subnet to allow instances to connect to the internet?

  • A. Destination: 20.0.0.0/0 and Target: 80
  • B. Destination: 20.0.0.0/0 and Target: i-a12345
  • C. Destination: 20.0.0.0/24 and Target: i-a12345
  • D. Destination: 0.0.0.0/0 and Target: i-a12345

Answer: D

Explanation: A user can create a subnet within a VPC and launch instances inside that subnet. If the user has created public and private subnets, the instances in the public subnet can receive inbound traffic directly from the Internet, whereas the instances in the private subnet cannot. If these subnets are created with the wizard, AWS will create two route tables and attach them to the subnets. The main route table will have the entry "Destination: 0.0.0.0/0 and Target: i-a12345", which allows all the instances in the private subnet to connect to the internet using the NAT instance.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
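The routing behaviour can be sketched with a tiny longest-prefix-match model. The CIDRs and instance ID come from the question; the function itself is only an illustration of how VPC route selection works, not AWS code:

```python
import ipaddress

# The two entries in the private subnet's main route table (the "local"
# route is implicit in every VPC; the NAT instance ID is from the question).
routes = [
    {"Destination": "20.0.0.0/16", "Target": "local"},      # intra-VPC traffic
    {"Destination": "0.0.0.0/0",   "Target": "i-a12345"},   # default route -> NAT
]

def next_hop(ip):
    """Pick the most specific matching route, as VPC routing does."""
    addr = ipaddress.ip_address(ip)
    matching = [r for r in routes
                if addr in ipaddress.ip_network(r["Destination"])]
    best = max(matching,
               key=lambda r: ipaddress.ip_network(r["Destination"]).prefixlen)
    return best["Target"]

print(next_hop("20.0.0.25"))     # local    (stays inside the VPC)
print(next_hop("93.184.216.34")) # i-a12345 (goes out via the NAT instance)
```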

NEW QUESTION 6
Identify a true statement about the statement ID (Sid) in IAM.

  • A. You cannot expose the Sid in the IAM API.
  • B. You cannot use a Sid value as a sub-ID for a policy document's ID for services provided by SQS and SNS.
  • C. You can expose the Sid in the IAM API.
  • D. You cannot assign a Sid value to each statement in a statement array.

Answer: A

Explanation: The Sid (statement ID) is an optional identifier that you provide for the policy statement. You can assign a Sid value to each statement in a statement array. In IAM, the Sid is not exposed in the IAM API; you can't retrieve a particular statement based on this ID.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Sid
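As a sketch, a policy with per-statement Sids looks like the dictionary below. The action names are real EC2 read actions, but the Sid labels are arbitrary examples chosen here for illustration:

```python
import json

# Minimal policy sketch showing the optional Sid, one per statement.
# The Sid values are arbitrary labels; only the overall shape matters.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowDescribeInstances",  # optional human-readable label
         "Effect": "Allow",
         "Action": "ec2:DescribeInstances",
         "Resource": "*"},
        {"Sid": "AllowDescribeVpcs",
         "Effect": "Allow",
         "Action": "ec2:DescribeVpcs",
         "Resource": "*"},
    ],
}

# Sids must be unique within a policy, but IAM offers no API call to look a
# statement up by Sid -- you can only read back the whole document.
sids = [s["Sid"] for s in policy["Statement"]]
assert len(sids) == len(set(sids))  # uniqueness check
print(json.dumps(sids))
```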

NEW QUESTION 7
You've been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data, and then archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access.
Which approach provides a cost-effective, scalable mitigation to this kind of attack?

  • A. Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic in a hardware Web Application Firewall (WAF), and then pass the traffic through the Direct Connect connection into their application running in their VPC.
  • B. Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet.
  • C. Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.
  • D. Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.

Answer: C

NEW QUESTION 8
Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform ensuring the following requirements are met:
Provide the ability for real-time analytics of the inbound biometric data. Ensure processing of the biometric data is highly durable, elastic and parallel. The results of the analytic processing should be persisted for data mining.
Which architecture outlined below will meet the initial requirements for the collection platform?

  • A. Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline, and save the results to a Redshift cluster.
  • B. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to a Redshift cluster using EMR.
  • C. Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis, and save the results to a Microsoft SQL Server RDS instance.
  • D. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis, and save the results to DynamoDB.

Answer: B
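A quick back-of-envelope check of the ingest rate helps explain why a streaming service fits here. The 30KB/2s figures come from the question; the 1 MB/s figure is Kinesis's documented per-shard write limit:

```python
# Per-collar throughput from the question: 30 KB pushed every 2 seconds.
kb_per_push = 30
push_interval_s = 2
per_collar_kbps = kb_per_push / push_interval_s  # 15 KB/s per collar

# A single Kinesis shard accepts up to 1 MB/s of writes, so one shard
# covers roughly this many collars (scale out by adding shards):
collars_per_shard = int(1000 // per_collar_kbps)
print(per_collar_kbps, collars_per_shard)  # 15.0 66
```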

NEW QUESTION 9
You are running a successful multi-tier web application on AWS and your marketing department has asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application's database. You are currently running a Multi-AZ RDS MySQL instance for the database tier. You have also implemented ElastiCache as a database caching layer between the application tier and database tier. Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database.

  • A. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte range requests.
  • B. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ.
  • C. Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica.
  • D. Generate the reports by querying the ElastiCache database caching tier.

Answer: C

NEW QUESTION 10
How many g2.2xlarge on-demand instances can a user run in one region without taking any limit increase approval from AWS?

  • A. 20
  • B. 2
  • C. 5
  • D. 10

Answer: C

Explanation: Generally, AWS EC2 allows running 20 on-demand instances and 100 spot instances at a time. This limit can be increased by making a request at https://aws.amazon.com/contact-us/ec2-request. For certain instance types, the limit is lower than the general limit mentioned above. For g2.2xlarge, the user can run only 5 on-demand instances at a time.
Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ec2

NEW QUESTION 11
You want to use AWS CodeDeploy to deploy an application to Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC). What criterion must be met for this to be possible?

  • A. The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access only the public AWS CodeDeploy endpoint.
  • B. The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access only the public Amazon S3 service endpoint.
  • C. The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access the public AWS CodeDeploy and Amazon S3 service endpoints.
  • D. It is not currently possible to use AWS CodeDeploy to deploy an application to Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC).

Answer: C

Explanation: You can use AWS CodeDeploy to deploy an application to Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC). However, the AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access the public AWS CodeDeploy and Amazon S3 service endpoints.
Reference: http://aws.amazon.com/codedeploy/faqs/

NEW QUESTION 12
You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50KB in size. Additionally, 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable and secure. How would you design a solution to meet the above requirements?

  • A. Setup an RDS MySQL instance in 2 availability zones to store the user preference data.
  • B. Deploy a public-facing application on a server in front of the database to manage security and access credentials.
  • C. Setup a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences.
  • D. The mobile application will query the user preferences directly from the DynamoDB table.
  • E. Utilize STS,
  • F. Web Identity Federation, and DynamoDB Fine Grained Access Control to authenticate and authorize access.
  • G. Setup an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will query the user preferences from the read replicas.
  • H. Leverage the MySQL user management and access privilege system to manage security and access credentials.
  • I. Store the user preference data in S3. Setup a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object.
  • J. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.

Answer: B
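As a quick sanity check, the numbers in the question work out to a data volume that DynamoDB handles comfortably (decimal units assumed):

```python
# Back-of-envelope sizing from the question: 5 million users x 50 KB each.
users = 5_000_000
item_kb = 50
total_gb = users * item_kb / 1_000_000  # KB -> GB (decimal units)
print(total_gb)  # 250.0 GB of preference data in total
```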

NEW QUESTION 13
A user is trying to create a vault in AWS Glacier. The user wants to enable notifications. In which of the below mentioned options can the user enable the notifications from the AWS console?

  • A. Glacier does not support the AWS console
  • B. Archival Upload Complete
  • C. Vault Upload Job Complete
  • D. Vault Inventory Retrieval Job Complete

Answer: D

Explanation: From the AWS console the user can configure notifications to be sent to Amazon Simple Notification Service (SNS). The user can select specific jobs that, on completion, will trigger the notifications, such as Vault Inventory Retrieval Job Complete and Archive Retrieval Job Complete.
Reference: http://docs.aws.amazon.com/amazonglacier/latest/dev/configuring-notifications-console.html
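The same configuration can be set programmatically. The dictionary below follows the shape boto3's `glacier.set_vault_notifications` expects, with the two event names Glacier supports; the SNS topic ARN is a made-up placeholder:

```python
# Sketch of a Glacier vault notification configuration (topic ARN is a
# placeholder; the two event names are the ones Glacier supports).
notification_config = {
    "SNSTopic": "arn:aws:sns:us-east-1:123456789012:example-vault-events",
    "Events": [
        "ArchiveRetrievalCompleted",    # an archive retrieval job finished
        "InventoryRetrievalCompleted",  # a vault inventory job finished
    ],
}

# With boto3 this would be passed as, e.g.:
# glacier.set_vault_notifications(vaultName="my-vault",
#                                 vaultNotificationConfig=notification_config)
print(len(notification_config["Events"]))  # 2
```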

NEW QUESTION 14
An organization is undergoing a security audit. The auditor wants to view the AWS VPC configurations as the organization has hosted all the applications in the AWS VPC. The auditor is from a remote place and wants to have access to AWS to view all the VPC records.
How can the organization meet the expectations of the auditor without compromising on the security of their AWS infrastructure?

  • A. The organization should not accept the request as sharing the credentials means compromising on security.
  • B. Create an IAM role which will have read only access to all EC2 services including VPC and assign that role to the auditor.
  • C. Create an IAM user who will have read only access to the AWS VPC and share those credentials with the auditor.
  • D. The organization should create an IAM user with VPC full access but set a condition that disallows modification if the request comes from any IP other than the organization's data center.

Answer: C

Explanation: A Virtual Private Cloud (VPC) is a virtual network dedicated to the user’s AWS account. The user can create subnets as per the requirement within a VPC. The VPC also works with IAM and the organization can create IAM users who have access to various VPC services.
If an auditor wants to have access to the AWS VPC to verify the rules, the organization should be careful before sharing any data which can allow making updates to the AWS infrastructure. In this scenario it is recommended that the organization create an IAM user who will have read-only access to the VPC, and share those credentials with the auditor, as read-only access cannot harm the organization. A sample policy is given below:
{
  "Effect": "Allow",
  "Action": [
    "ec2:DescribeVpcs",
    "ec2:DescribeSubnets",
    "ec2:DescribeInternetGateways",
    "ec2:DescribeCustomerGateways",
    "ec2:DescribeVpnGateways",
    "ec2:DescribeVpnConnections",
    "ec2:DescribeRouteTables",
    "ec2:DescribeAddresses",
    "ec2:DescribeSecurityGroups",
    "ec2:DescribeNetworkAcls",
    "ec2:DescribeDhcpOptions",
    "ec2:DescribeTags",
    "ec2:DescribeInstances"
  ],
  "Resource": "*"
}
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_IAM.html

NEW QUESTION 15
With Amazon Elastic MapReduce (Amazon EMR) you can analyze and process vast amounts of data. The cluster is managed using an open-source framework called Hadoop.
You have set up an application to run Hadoop jobs. The application reads data from DynamoDB and generates a temporary file of 100 TBs.
The whole process runs for 30 minutes and the output of the job is stored to S3. Which of the below mentioned options is the most cost-effective solution in this case?

  • A. Use Spot Instances to run Hadoop jobs and configure them with EBS volumes for persistent data storage.
  • B. Use Spot Instances to run Hadoop jobs and configure them with ephemeral storage for output file storage.
  • C. Use an on demand instance to run Hadoop jobs and configure them with EBS volumes for persistent storage.
  • D. Use an on demand instance to run Hadoop jobs and configure them with ephemeral storage for output file storage.

Answer: B

Explanation: AWS EC2 Spot Instances allow the user to quote his own price for the EC2 computing capacity. The user can simply bid on the spare Amazon EC2 instances and run them whenever his bid exceeds the current Spot Price. The Spot Instance pricing model complements the On-Demand and Reserved Instance pricing models, providing potentially the most cost-effective option for obtaining compute capacity, depending on the application. The only challenge with a Spot Instance is data persistence, as the instance can be terminated whenever the spot price exceeds the bid price.
In the current scenario a Hadoop job is a temporary job and does not run for a long period. It fetches data from a persistent DynamoDB table. Thus, even if the instance gets terminated there will be no data loss and the job can be re-run. As the output files are large temporary files, it is cost-effective to store them on ephemeral storage.
Reference: http://aws.amazon.com/ec2/purchasing-options/spot-instances/

NEW QUESTION 16
You create a VPN connection, and your VPN device supports Border Gateway Protocol (BGP). Which of the following should be specified to configure the VPN connection?

  • A. Classless routing
  • B. Classful routing
  • C. Dynamic routing
  • D. Static routing

Answer: C

Explanation: If you create a VPN connection, you must specify the type of routing that you plan to use, which will depend on the make and model of your VPN devices. If your VPN device supports Border Gateway Protocol (BGP), you need to specify dynamic routing when you configure your VPN connection. If your device does not support BGP, you should specify static routing.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html
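In the API this choice surfaces as a single flag. The sketch below mirrors the `Options` parameter of boto3's `ec2.create_vpn_connection`; the gateway IDs are made-up placeholders:

```python
# The routing choice maps to one flag on the VPN connection request
# (shape mirrors boto3's ec2.create_vpn_connection; IDs are placeholders).
def vpn_connection_params(device_supports_bgp):
    return {
        "Type": "ipsec.1",
        "CustomerGatewayId": "cgw-0example",  # hypothetical ID
        "VpnGatewayId": "vgw-0example",       # hypothetical ID
        # BGP-capable device -> dynamic routing -> StaticRoutesOnly = False
        "Options": {"StaticRoutesOnly": not device_supports_bgp},
    }

print(vpn_connection_params(True)["Options"])   # {'StaticRoutesOnly': False}
print(vpn_connection_params(False)["Options"])  # {'StaticRoutesOnly': True}
```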

NEW QUESTION 17
In Amazon ElastiCache, the failure of a single cache node can have an impact on the availability of your application and the load on your back-end database while ElastiCache provisions a replacement for the failed cache node and it gets repopulated. Which of the following is a solution to reduce this potential availability impact?

  • A. Spread your memory and compute capacity over a smaller number of cache nodes, each with smaller capacity.
  • B. Spread your memory and compute capacity over a larger number of cache nodes, each with smaller capacity.
  • C. Include a smaller number of high-capacity nodes.
  • D. Include a larger number of cache nodes, each with high capacity.

Answer: B

Explanation: In Amazon ElastiCache, the number of cache nodes in the cluster is a key factor in the availability of your cluster running Memcached. The failure of a single cache node can have an impact on the availability of your application and the load on your back-end database while ElastiCache provisions a replacement for the failed cache node and it gets repopulated. You can reduce this potential availability impact by spreading your memory and compute capacity over a larger number of cache nodes, each with smaller capacity, rather than using a smaller number of high-capacity nodes.
Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheNode.Memcached.html
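The reasoning can be sketched with a rough model: losing one node of N invalidates about 1/N of the cached keys, assuming keys are evenly distributed across nodes (an assumption of this sketch, not a guarantee):

```python
# Rough model of why more, smaller nodes reduce failure impact: losing one
# node of N invalidates about 1/N of the cache (assuming even distribution).
def cache_lost_fraction(node_count):
    return 1 / node_count

few_large = cache_lost_fraction(2)    # 50% of the cache gone on one failure
many_small = cache_lost_fraction(20)  # only 5% gone on one failure
print(few_large, many_small)  # 0.5 0.05
```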

NEW QUESTION 18
Mike is appointed as Cloud Consultant at ExamKiller.com. ExamKiller has the following VPCs set up in the US East Region:
A VPC with CIDR block 10.10.0.0/16, a subnet in that VPC with CIDR block 10.10.1.0/24. A VPC with CIDR block 10.40.0.0/16, a subnet in that VPC with CIDR block 10.40.1.0/24.
ExamKiller.com is trying to establish a network connection between two subnets, a subnet with CIDR block 10.10.1.0/24 and another subnet with CIDR block 10.40.1.0/24. Which one of the following solutions should Mike recommend to ExamKiller.com?

  • A. Create 2 Virtual Private Gateways and configure one with each VPC.
  • B. Create 2 Internet Gateways, and attach one to each VPC.
  • C. Create a VPC Peering connection between both VPCs.
  • D. Create one EC2 instance in each subnet, assign Elastic IPs to both instances, and set up a Site-to-Site VPN connection between both EC2 instances.

Answer: C

Explanation: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. EC2 instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region.
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does not rely on a separate piece of physical hardware.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
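One practical precondition worth checking: peering requires the two VPC CIDR blocks not to overlap, which holds for the CIDRs in the question. The VPC IDs in the request sketch are made-up placeholders, and the parameter names mirror boto3's `ec2.create_vpc_peering_connection`:

```python
import ipaddress

# Peering only works when the VPC CIDR blocks don't overlap; the two
# CIDRs below come from the question.
vpc_a = ipaddress.ip_network("10.10.0.0/16")
vpc_b = ipaddress.ip_network("10.40.0.0/16")
print(vpc_a.overlaps(vpc_b))  # False -> peering is possible

# The request itself is just the two VPC IDs (placeholders here), as in
# boto3's ec2.create_vpc_peering_connection:
peering_params = {"VpcId": "vpc-0aaa", "PeerVpcId": "vpc-0bbb"}
```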

P.S. Certleader now are offering 100% pass ensure AWS-Certified-Solutions-Architect-Professional dumps! All AWS-Certified-Solutions-Architect-Professional exam questions have been updated with correct answers: https://www.certleader.com/AWS-Certified-Solutions-Architect-Professional-dumps.html (272 New Questions)