My journey to AWS Solution Architect Exam — Test 4

MayBeMan
11 min read · Apr 2, 2024

Question 33:

What does this AWS CloudFormation snippet do? (Select three)

A security group acts as a virtual firewall that controls the traffic for one or more instances. The following are the characteristics of security group rules:

  • By default, security groups allow all outbound traffic.
  • Security group rules are always permissive; you can’t create rules that deny access.
  • Security groups are stateful.

AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS and third-party resources.

My journey to AWS Solution Architect Exam — Part 6 — Amazon Elastic Compute Cloud EC2 (2/4) | by MayBeMan | Medium

Considering the given snippet:

0.0.0.0/0 means any IP, not the single IP 0.0.0.0. Ingress means traffic going into your instance (and Security Groups are distinct from network ACLs). Each "-" in our security group rule starts a different rule (YAML list syntax).

Therefore the AWS CloudFormation snippet creates two Security Group inbound rules: one that allows any IP to reach the instance on the HTTP port, and one that allows traffic from a single source IP (192.168.1.1) on port 22.
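Since the snippet itself is not reproduced here, the following is a sketch of what a CloudFormation security group matching that description could look like; the logical ID and description are assumptions, only the rule values come from the question:

```yaml
Resources:
  WebServerSecurityGroup:            # hypothetical logical ID
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP from anywhere and SSH from one host
      SecurityGroupIngress:
        # Rule 1: any IP (0.0.0.0/0) may reach the instance on the HTTP port
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        # Rule 2: a single source IP may reach the instance on port 22 (SSH)
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 192.168.1.1/32
```

Each "-" under SecurityGroupIngress begins a separate rule, which is the YAML-syntax point made above.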

Question 34:

A company has recently created a new department to handle their services workload. An IT team has been asked to create a custom VPC to isolate the resources created in this new department. They have set up the public subnet and internet gateway (IGW). However, they are not able to ping the Amazon EC2 instances with elastic IP address (EIP) launched in the newly created VPC. As a Solutions Architect, the team has requested your help. How will you troubleshoot this scenario? (Select two)

An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. An internet gateway serves two purposes:

  • to provide a target in your VPC route tables for internet-routable traffic
  • to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses

To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:

  1. Attach an internet gateway to your VPC.
  2. Add a route to your subnet’s route table that directs internet-bound traffic to the internet gateway.
  3. Ensure that instances in your subnet have a globally unique IP address.
  4. Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.

Additionally, ensure the security group allows the ICMP protocol for ping requests.
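Steps 1, 2, and the ICMP rule can be sketched in CloudFormation; the logical IDs and the referenced VPC, route table, and security group resources are assumptions for illustration:

```yaml
Resources:
  InternetGateway:
    Type: AWS::EC2::InternetGateway

  # Step 1: attach the internet gateway to the VPC
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref MyVPC                     # assumed VPC resource
      InternetGatewayId: !Ref InternetGateway

  # Step 2: route internet-bound traffic (0.0.0.0/0) to the gateway
  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: GatewayAttachment
    Properties:
      RouteTableId: !Ref PublicRouteTable   # assumed route table resource
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  # Step 4, for ping specifically: allow ICMP in the security group
  IcmpIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref InstanceSecurityGroup   # assumed security group
      IpProtocol: icmp
      FromPort: -1                          # -1 means all ICMP types/codes
      ToPort: -1
      CidrIp: 0.0.0.0/0
```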

Question 35:

A junior developer has downloaded a sample Amazon S3 bucket policy to make changes to it based on new company-wide access policies. He has requested your help in understanding this bucket policy. As a Solutions Architect, which of the following would you identify as the correct description for the given policy?

A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, AWS Organizations SCPs, ACLs, and session policies.

My journey to AWS Solution Architect Exam — Part 19 — Amazon S3 (1/3) | by MayBeMan | Medium

Let’s analyze the bucket policy one step at a time:

  • The snippet "Effect": "Allow" implies an allow effect.
  • The snippet "Principal": "*" implies any Principal.
  • The snippet "Action": "s3:*" implies any Amazon S3 API.
  • The snippet "Resource": "arn:aws:s3:::examplebucket/*" implies that the policy applies to all objects inside the bucket examplebucket (the trailing /* matches object keys, not the bucket itself).

Consider the last snippet of the given bucket policy: "Condition": { "IpAddress": {"aws:SourceIp": "54.240.143.0/24"}, "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"} }. This condition means a request is allowed only if its source IP falls within the CIDR block 54.240.143.0/24 (i.e., 54.240.143.0 - 54.240.143.255) and is not 54.240.143.188/32 (a /32 denotes a single address). In other words, the whole /24 range can access examplebucket's contents, with that one IP address carved out of the allow.
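Put together, the policy being described would look roughly like this; the bucket name, CIDRs, and statement values come from the question, while the Sid is an assumption:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "54.240.143.0/24"},
        "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"}
      }
    }
  ]
}
```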

Question 36:

A development team has configured Elastic Load Balancing for host-based routing. The idea is to support multiple subdomains and different top-level domains. The rule *.example.com matches which of the following?

You can use host conditions to define rules that route requests based on the hostname in the host header (also known as host-based routing). This enables you to support multiple subdomains and different top-level domains using a single load balancer. A hostname is not case-sensitive, can be up to 128 characters in length, and can contain any of the following characters:

  • A–Z, a–z, 0–9
  • - and .
  • * (matches 0 or more characters)
  • ? (matches exactly 1 character)

You must include at least one "." character. You can include only alphabetical characters after the final "." character.
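These wildcard semantics can be sketched in Python with the standard-library `fnmatch` module, whose `*` (0 or more characters) and `?` (exactly one character) behave the same way for this purpose; this is an approximation of the rule evaluation, not the ALB implementation:

```python
from fnmatch import fnmatchcase

def host_matches(hostname: str, rule: str) -> bool:
    """Approximate ALB host-condition matching. Hostnames are
    not case-sensitive, so compare both sides in lowercase."""
    return fnmatchcase(hostname.lower(), rule.lower())

# "*.example.com" requires at least the "." before "example.com"
print(host_matches("test.example.com", "*.example.com"))       # True
print(host_matches("shop.test.example.com", "*.example.com"))  # True: * spans dots
print(host_matches("example.com", "*.example.com"))            # False
print(host_matches("test.example.org", "*.example.com"))       # False
```

So the rule *.example.com matches test.example.com (and deeper subdomains), but not the bare example.com or a different top-level domain.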

My journey to AWS Solution Architect Exam — Part 22 — High Availability & Scalability For EC2 (ELB) | by MayBeMan | AWS Tip (medium.com)

Question 37:

A media company uses Amazon ElastiCache Redis to enhance the performance of its Amazon RDS database layer. The company wants a robust disaster recovery strategy for its caching layer that guarantees minimal downtime as well as minimal data loss while ensuring good application performance. Which of the following solutions will you recommend to address the given use-case?

Multi-AZ is the best option when data retention, minimal downtime, and application performance are a priority:

  • Data-loss potential — Low. Multi-AZ provides fault tolerance for every scenario, including hardware-related issues.
  • Performance impact — Low. Of the available options, Multi-AZ provides the fastest time to recovery, because there is no manual procedure to follow once it is set up.
  • Cost — Low to high. Use Multi-AZ when you can’t risk losing data because of hardware failure or you can’t afford the downtime required by other options in your response to an outage.
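In CloudFormation terms, enabling Multi-AZ for a Redis replication group comes down to two properties; the logical ID, description, and sizing values below are assumptions:

```yaml
Resources:
  CacheReplicationGroup:
    Type: AWS::ElastiCache::ReplicationGroup
    Properties:
      ReplicationGroupDescription: Redis cache with Multi-AZ failover
      Engine: redis
      CacheNodeType: cache.t3.medium    # assumed instance size
      NumCacheClusters: 2               # a primary plus at least one replica
      AutomaticFailoverEnabled: true    # promote a replica if the primary fails
      MultiAZEnabled: true              # place the replica in another AZ
```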

Question 38:

An enterprise has decided to move its secondary workloads such as backups and archives to AWS cloud. The CTO wishes to move the data stored on physical tapes to Cloud, without changing their current tape backup workflows. The company holds petabytes of data on tapes and needs a cost-optimized solution to move this data to cloud. What is an optimal solution that meets these requirements while keeping the costs to a minimum?

Tape Gateway enables you to replace physical tapes on premises with virtual tapes in AWS without changing existing backup workflows. It compresses and stores archived virtual tapes in the lowest-cost Amazon S3 storage classes, Amazon S3 Glacier and Amazon S3 Glacier Deep Archive, which makes it feasible to retain long-term data in the AWS Cloud at a very low cost. With Tape Gateway, you only pay for what you consume, with no minimum commitments and no upfront fees. It integrates with all leading backup applications, allowing you to start using cloud storage for on-premises backup and archive without any changes to your backup and archive workflows.

My journey to AWS Solution Architect Exam — Part 47 — Hybrid Cloud for storage & file transfers | by MayBeMan | Mar, 2024 | Medium

Tape Gateway Overview:

https://aws.amazon.com/storagegateway/vtl/

Incorrect options:

  • AWS DataSync transfers data from NFS and SMB file shares, not tape-based backups, and hence is not the right choice for the given use case.
  • AWS Direct Connect is used when customers need to retain their on-premises infrastructure because of compliance reasons and have moved the rest of the architecture to AWS Cloud. The given use-case needs a cost-optimized solution, and the company has no ongoing requirement for high-bandwidth connectivity.
  • Amazon EFS is a managed file system by AWS and cannot be used for archiving on-premises tape data onto AWS Cloud.

Question 39:

You have an Amazon S3 bucket that contains files in two different folders — s3://my-bucket/images and s3://my-bucket/thumbnails. When an image is first uploaded, it is viewed several times. But analytics show that after 45 days image files are on average rarely requested, while the thumbnails still are. After 180 days, you would like to archive both the image files and the thumbnails. Overall you would like the solution to remain highly available, able to withstand the loss of a whole Availability Zone (AZ). How can you implement an efficient cost strategy for your Amazon S3 bucket? (Select two)

An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:

  • Transition actions — Define when objects transition to another storage class.
  • Expiration actions — Define when objects expire. Amazon S3 deletes expired objects on your behalf.

Create a Lifecycle Policy to transition objects to Amazon S3 Standard-IA using a prefix after 45 days.

Amazon S3 Standard-IA is for data that is accessed less frequently but requires rapid access when needed. The use-case mentions that after 45 days image files are rarely requested, but the thumbnails still are. So you need to use a prefix while configuring the Lifecycle Policy so that only the objects in s3://my-bucket/images are transitioned to Standard-IA, and not all the objects in the bucket.

Create a Lifecycle Policy to transition all objects to Amazon S3 Glacier after 180 days.

Amazon S3 Glacier and S3 Glacier Deep Archive are secure, durable, and extremely low-cost Amazon S3 cloud storage classes for data archiving and long-term backup. They are designed to deliver 99.999999999% durability, and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.
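The two rules together can be expressed as a single S3 Lifecycle configuration; the rule IDs are assumptions, and note the images/ prefix on the first rule only:

```json
{
  "Rules": [
    {
      "ID": "images-to-standard-ia",
      "Status": "Enabled",
      "Filter": {"Prefix": "images/"},
      "Transitions": [
        {"Days": 45, "StorageClass": "STANDARD_IA"}
      ]
    },
    {
      "ID": "archive-everything",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 180, "StorageClass": "GLACIER"}
      ]
    }
  ]
}
```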

Question 40:

A company wants to adopt a hybrid cloud infrastructure where it uses some AWS services, such as Amazon S3, alongside its on-premises data center. The company wants a dedicated private connection between the on-premises data center and AWS. In case of failure, though, the company needs to guarantee uptime and is willing to use the public internet for an encrypted connection. What do you recommend? (Select two)

Use AWS Direct Connect connection as a primary connection

AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. AWS Direct Connect does not involve the Internet; instead, it uses dedicated, private network connections between your intranet and Amazon VPC.

Use AWS Site-to-Site VPN as a backup connection

AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend your data center or branch office network to the cloud with an AWS Site-to-Site VPN connection. Such a connection utilizes IPsec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN connections can be configured in minutes and are a good solution if you have an immediate need, have low-to-modest bandwidth requirements, and can tolerate the inherent variability of Internet-based connectivity.

My journey to AWS Solution Architect Exam — Part 12 –Site-to-Site VPN, VPN CloudHub, Direct Connect | by MayBeMan | Medium

Question 41:

A mobile gaming company is experiencing heavy read traffic to its Amazon RDS database that retrieves players’ scores and stats. The company is using an Amazon RDS database instance type that is not cost-effective for their budget. The company would like to implement a strategy to deal with the high volume of read traffic, reduce latency, and also downsize the instance to cut costs. Which of the following solutions do you recommend?

Amazon ElastiCache is an ideal front-end for data stores such as Amazon RDS, providing a high-performance middle tier for applications with extremely high request rates and/or low latency requirements. The best part of caching is that it’s minimally invasive to implement and by doing so, your application performance regarding both scale and speed is dramatically improved.
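The usual way to put ElastiCache in front of RDS is the cache-aside pattern: check the cache first and fall back to the database only on a miss. A minimal sketch, with a plain dict standing in for the Redis cache and a hypothetical `fetch_score_from_rds` function standing in for the database query:

```python
cache = {}  # stands in for ElastiCache (e.g., a Redis client)

def fetch_score_from_rds(player_id: str) -> int:
    # Hypothetical stand-in for the expensive RDS query.
    return 42

def get_player_score(player_id: str) -> int:
    """Cache-aside read: serve from the cache when possible,
    otherwise read from the database and populate the cache."""
    key = f"score:{player_id}"
    if key in cache:                            # cache hit: no database round-trip
        return cache[key]
    score = fetch_score_from_rds(player_id)     # cache miss: go to RDS
    cache[key] = score                          # with Redis you'd also set a TTL
    return score

print(get_player_score("p1"))  # miss: reads from the database
print(get_player_score("p1"))  # hit: served from the cache
```

Because repeated reads are absorbed by the cache, the RDS instance sees far less read traffic, which is what makes downsizing it feasible.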

My journey to AWS Solution Architect Exam — Part 36 — Databases in AWS | by MayBeMan | Feb, 2024 | Medium

Incorrect options:

  • Setup Amazon RDS Read Replicas — Adding read replicas would further add to the database costs and would not reduce latency as much as a caching solution. So this option is ruled out.
  • Move to Amazon Redshift — Amazon Redshift is optimized for datasets ranging from a few hundred gigabytes to a petabyte or more. If the company is looking at cost-cutting, moving to Amazon Redshift from Amazon RDS is not an option.
  • Switch application code to AWS Lambda for better performance — AWS Lambda can help in running data-processing workflows, but the data still needs to be read from RDS, so we need a solution that speeds up the reads themselves, not the processing before or after them.

Question 42:

A company’s business logic is built on several microservices that are running in the on-premises data center. They currently communicate using a message broker that supports the MQTT protocol. The company is looking at migrating these applications and the message broker to AWS Cloud without changing the application logic. Which technology allows you to get a managed message broker that supports the MQTT protocol?

Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Message brokers allow different software systems (often written in different programming languages, on different platforms) to communicate and exchange information. If an organization is using messaging with existing applications and wants to move the messaging service to the cloud quickly and easily, AWS recommends Amazon MQ for such a use case.

My journey to AWS Solution Architect Exam — Part 28 — Integration & Messaging (Kinesis & Amazon MQ ) | by MayBeMan | Feb, 2024 | Medium
