
Are AWS Certifications worth it? : AWS Solutions Architect - Professional (SAP) Certification 2


Written by Minhyeok Cha



Continuing from our last discussion, we further explore AWS certifications, focusing on the Solutions Architect - Professional (SAP) exam, specifically how its questions relate to practical use in consoles or architectural structures.


 

Question 1.

A company is running a two-tier web-based application in its on-premises data center. The application layer consists of a single server running a stateful application, connected to a PostgreSQL database running on a separate server. Anticipating significant growth in the user base, the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing.


Which solution provides a consistent user experience while allowing scalability for the application and database layers?



ⓐ Enable Aurora Auto Scaling for Aurora replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.


ⓑ Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with a round-robin routing algorithm and sticky sessions enabled.


ⓒ Enable Aurora Auto Scaling for Aurora replicas. Use an Application Load Balancer with round-robin routing and sticky sessions enabled.


ⓓ Enable Aurora Auto Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.



Solutions

In this question, the answer is apparent just by looking at the options.

RDS Aurora Auto Scaling is a feature intended for replicas, not writers. Therefore, options B and D are eliminated.


Aurora Auto Scaling adjusts the number of Aurora replicas in an Aurora DB cluster using scaling policies.

The routing algorithm is also key. A Network Load Balancer does not use the least outstanding requests routing algorithm (it uses a flow hash algorithm), so option A is eliminated, leaving C as the correct answer.


Answer: C



💡 Load balancer nodes receiving connections in a Network Load Balancer use the following process:
1. Use a flow hash algorithm to select a target from the target group for the default rule. The hash is based on:
    ◦ Protocol
    ◦ Source IP address and port
    ◦ Destination IP address and port
    ◦ TCP sequence number
2. Individual TCP connections are routed to a single target for the life of the connection. TCP connections from the same client can be routed to different targets because the source port and sequence number differ.
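
To make the flow hash idea concrete, here is a conceptual Python sketch. This is not AWS's actual implementation; the field list and hash function are illustrative only, but it shows why one TCP connection always lands on the same target while a new connection (new source port and sequence number) may not.

import hashlib

def pick_target(targets, protocol, src_ip, src_port, dst_ip, dst_port, tcp_seq):
    # The same connection tuple always hashes to the same index, so the same target.
    flow = f"{protocol}|{src_ip}:{src_port}|{dst_ip}:{dst_port}|{tcp_seq}"
    digest = hashlib.sha256(flow.encode()).hexdigest()
    return targets[int(digest, 16) % len(targets)]

targets = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]
# A second connection from the same client with a different source port may pick a different target.
print(pick_target(targets, "TCP", "203.0.113.5", 50311, "10.0.0.7", 443, 102938))
print(pick_target(targets, "TCP", "203.0.113.5", 50312, "10.0.0.7", 443, 884201))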
    

However, since this blog's main focus is on practical usage, let's delve into the architecture and console settings based on the content of this question.


The problem describes a traditional two-tier web application, common in low-traffic scenarios, in which the client talks to an application server that directly uses the database.


Reading further, the customer is expected to grow significantly, so from a Solutions Architect's perspective, transitioning to a three-tier architecture is necessary. The actual migration services mentioned can be implemented as follows:



Since the question does not specify weights, the round-robin weights are set at a 50:50 ratio. Let's now check the console operations together.


Application Load Balancer Operations: These settings are configured under LB - Target Group - Properties.

  • Round-robin settings


  • Sticky session settings


Sticky sessions use cookies to bind a client's traffic to a specific target. Load balancer-generated cookies are the default; application-based cookies are generated by the application running on the targets behind the load balancer.
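
If you prefer to script these target group settings rather than click through the console, a minimal boto3 sketch could look like the following. The target group ARN is a placeholder, and this assumes the ALB target group already exists.

import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN for the application-tier target group
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:ap-northeast-2:111122223333:targetgroup/app-tg/0123456789abcdef"

elbv2.modify_target_group_attributes(
    TargetGroupArn=TARGET_GROUP_ARN,
    Attributes=[
        {"Key": "load_balancing.algorithm.type", "Value": "round_robin"},
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},  # load balancer-generated cookie
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)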


Aurora Auto Scaling Operations:

Use the "Add Auto Scaling" option for replicas in RDS to create a leader instance.

Before creation, configure the Auto Scaling policy by clicking the button as shown above.

Note that even if multiple policies are applied, Scale Out is triggered upon satisfying any one policy.
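
The same replica auto scaling setup can also be registered through the Application Auto Scaling API. Below is a minimal boto3 sketch; the cluster name my-aurora-cluster and the capacity limits are assumptions for illustration.

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the cluster's replica count as a scalable target (hypothetical cluster name)
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Target-tracking policy: add or remove Aurora Replicas to keep average reader CPU near 60%
autoscaling.put_scaling_policy(
    PolicyName="aurora-replica-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)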



※ cf. Routing algorithms for each ELB type:

For Application Load Balancers, load balancer nodes receiving requests use the following process:

  1. Evaluate listener rules based on priority to determine applicable rules.

  2. Select targets from the target group for the rule action using the configured routing algorithm. The default routing algorithm is round-robin. Even if targets are registered in multiple target groups, routing is performed independently for each target group.

For Network Load Balancers, load balancer nodes receiving connections use the following process:

  1. Use a flow hash algorithm to select targets from the target group for the default rule based on:

    1. Protocol

    2. Source IP address and port

    3. Destination IP address and port

    4. TCP sequence number

  2. Individual TCP connections are routed to a single target throughout the connection's life. TCP connections from clients can be routed to different targets due to differing source ports and sequence numbers.

For Classic Load Balancers, load balancer nodes receiving requests select registered instances using:

  • Round-robin routing algorithm for TCP listeners

  • Least outstanding requests routing algorithm for HTTP and HTTPS listeners

Weighted settings

Though not mentioned in the question, traffic weighting is a key feature of load balancers.
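
For example, a 50:50 split between two target groups can be configured as a weighted forward action on an ALB listener. The sketch below uses boto3 with placeholder ARNs.

import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs for the listener and the two target groups
LISTENER_ARN = "arn:aws:elasticloadbalancing:ap-northeast-2:111122223333:listener/app/web-alb/aaa/bbb"
TG_BLUE = "arn:aws:elasticloadbalancing:ap-northeast-2:111122223333:targetgroup/blue-tg/111"
TG_GREEN = "arn:aws:elasticloadbalancing:ap-northeast-2:111122223333:targetgroup/green-tg/222"

elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[
        {
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": TG_BLUE, "Weight": 50},
                    {"TargetGroupArn": TG_GREEN, "Weight": 50},
                ]
            },
        }
    ],
)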


 

Question 2.

A retail company must provide a series of data files to another company, its business partner. These files are stored in an Amazon S3 bucket belonging to Account A of the retail company. The business partner wants one of their IAM users, User_DataProcessor, from their own AWS account (Account B) to access the files.


What combination of steps should the company perform to enable User_DataProcessor to successfully access the S3 bucket? (Select two.)


ⓐ Enable CORS (Cross-Origin Resource Sharing) for the S3 bucket in Account A.


ⓑ Set the S3 bucket policy in Account A as follows:

{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:ListBucket"
    ],
    "Resource": "arn:aws:s3:::AccountABucketName/*"
}

ⓒ Set the S3 bucket policy in Account A as follows:

{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor"
    },
    "Action": [
        "s3:GetObject",
        "s3:ListBucket"
    ],
    "Resource": [
        "arn:aws:s3:::AccountABucketName/*"
    ]
}

ⓓ Set the permissions for User_DataProcessor in Account B as follows:

{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:ListBucket"
    ],
    "Resource": "arn:aws:s3:::AccountABucketName/*"
}

ⓔ Set the permissions for User_DataProcessor in Account B as follows:

{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor"
    },
    "Action": [
        "s3:GetObject",
        "s3:ListBucket",
    ],
    "Resource": [
        "arn:aws:s3:::AccountABucketName/*"
    ]
}

Solutions



This question revolves around how an IAM user in Account B should be granted access to files in a bucket owned by Account A. Amazon S3 allows a bucket owner to grant users from other AWS accounts permission to access the objects it owns.


The Account B user does not need to log in to Account A's console; it only needs to read the bucket's resources. In an identity-based IAM policy attached to User_DataProcessor, a Principal element is not used (the principal is implicitly the user the policy is attached to), which rules out option E.


On the bucket side, however, the S3 bucket policy must open access to the external account.


Therefore, option C, the S3 bucket policy that grants the S3 permissions and names the Account B user as the Principal,

{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor"
     },
     "Action": [
         "s3:GetObject",
         "s3:ListBucket"
     ],
     "Resource": [
         "arn:aws:s3:::AccountABucketName/*"
     ]
}

and option D, the IAM policy in Account B that specifies the S3 permissions and the Account A bucket as the resource, are the correct answers:

{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:ListBucket"
    ],
    "Resource": "arn:aws:s3:::AccountABucketName/*"
}

Answer: C, D
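
To reproduce this setup outside the console, the two policies could be applied with a minimal boto3 sketch like the one below. The bucket name and account IDs follow the placeholders used in the question; note that s3:ListBucket technically applies to the bucket ARN itself, so the sketch lists both ARNs in Resource.

import json
import boto3

BUCKET = "AccountABucketName"  # placeholder bucket name from the question

# Run with Account A credentials: bucket policy naming the Account B user as Principal
s3 = boto3.client("s3")
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::AccountB:user/User_DataProcessor"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))

# Run with Account B credentials: identity policy attached to User_DataProcessor
iam = boto3.client("iam")
user_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}
iam.put_user_policy(
    UserName="User_DataProcessor",
    PolicyName="AccountA-bucket-access",
    PolicyDocument=json.dumps(user_policy),
)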


※ cf.

Depending on the type of access you want to provide, permissions can be granted as follows:

  1. IAM policies and resource-based bucket policies

  2. IAM policies and resource-based ACLs

  3. Cross-account IAM roles
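
Method 3, a cross-account IAM role, is not what this question asks for, but for completeness here is a hedged sketch of how Account A could create a role that Account B is allowed to assume. The role and policy names are made up for illustration.

import json
import boto3

iam = boto3.client("iam")  # run with Account A credentials

# Trust policy: allow principals in Account B to assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::AccountB:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="CrossAccountS3ReadRole",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permissions policy on the role: read access to the Account A bucket
iam.put_role_policy(
    RoleName="CrossAccountS3ReadRole",
    PolicyName="AllowBucketRead",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::AccountABucketName",
                "arn:aws:s3:::AccountABucketName/*",
            ],
        }],
    }),
)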

 

Question 3.

A company is running an existing web application on Amazon EC2 instances and needs to refactor the application into microservices running in containers. Separate application versions exist for two different environments, Production and Testing. The application load is variable, but the minimum and maximum loads are known. The solution architect must design the updated application in a serverless architecture while minimizing operational complexity.


Which solution most cost-effectively meets these requirements?



ⓐ Upload container images as functions to AWS Lambda. Configure concurrency limits for the attached Lambda functions to handle the anticipated maximum load. Configure two separate Lambda integrations within Amazon API Gateway, one for Production and another for Testing.


ⓑ Upload container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto-scaled Amazon Elastic Container Service (Amazon ECS) clusters with Fargate launch type to handle the expected load. Deploy tasks from ECR images. Configure two separate Application Load Balancers to route traffic to ECS clusters.


ⓒ Upload container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto-scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with Fargate launch type to handle the expected load. Deploy tasks from ECR images. Configure two separate Application Load Balancers to route traffic to EKS clusters.


ⓓ Create separate environments and deployments for Production and Testing in AWS Elastic Beanstalk. Configure two separate Application Load Balancers to route traffic to the Elastic Beanstalk deployments.



Solutions

The task here is to refactor an application currently running on EC2 instances into container-based microservices, essentially a service migration. Before proceeding, we will focus on four key requirements: containers, microservices, serverless architecture, and cost efficiency.


Option A, AWS Lambda, is serverless, but it is not well suited to running this existing web application as long-running containers, so it is eliminated. Option D, AWS Elastic Beanstalk, can run containers (Docker images) but is a PaaS offering rather than a serverless one, so it is also eliminated. This leaves option B, ECS, and option C, EKS. On the final criterion of cost efficiency, EKS adds a per-cluster management charge while ECS does not, so ECS is the more affordable choice, making B the correct answer.


Answer: B


This problem is about constructing a simple architectural solution, so we will skip the process of working in the console.
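
That said, for readers who want a concrete starting point, here is a minimal boto3 sketch of running the ECR image as a Fargate service behind an ALB target group. The cluster, task definition, subnet, security group, and ARN values are all hypothetical; a second set would exist for the Testing environment.

import boto3

ecs = boto3.client("ecs")

ecs.create_cluster(clusterName="prod-cluster")  # hypothetical Production cluster

ecs.create_service(
    cluster="prod-cluster",
    serviceName="web-service",
    taskDefinition="web-task:1",  # task definition referencing the ECR image
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:ap-northeast-2:111122223333:targetgroup/prod-tg/333",
        "containerName": "web",
        "containerPort": 80,
    }],
)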



Conclusion

I hope the AWS SA certification questions we covered today have been helpful to you. If you have any questions about the solutions, notice any errors, or have additional queries, please feel free to contact us anytime at partner@smileshark.kr.
