
  • Are AWS Certifications worth it? : AWS SA-Professional 3

Are AWS Certifications worth it? : AWS Solutions Architect - Professional (SAP) Certification 3 Written by Minhyeok Cha

It's been a while since I've written an AWS certification post, so let's get started.

Question 1. A company has many AWS accounts and uses AWS Organizations to manage all of them. A solutions architect must implement a solution that the company can use to share a common network across multiple accounts. The company's infrastructure team has a dedicated infrastructure account that has a VPC. The infrastructure team must use this account to manage the network. Individual accounts cannot have the ability to manage their own network. However, individual accounts must be able to create AWS resources within the subnet. What combination of actions should the solutions architect perform to meet these requirements? (Choose two.)

ⓐ Create a transit gateway in the infrastructure account.
ⓑ Enable resource sharing from the AWS Organizations management account.
ⓒ Create VPCs in each AWS account within the organization in AWS Organizations. Configure the VPCs to share the same CIDR range and subnets as the VPC in the infrastructure account. Peer the VPCs in each individual account with the VPC in the infrastructure account.
ⓓ Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share.
ⓔ Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each prefix list to associate with the resource share.

Solutions

This question is about how you want to manage multiple AWS accounts. For example, in the picture above, we have two ordinary accounts and one dedicated to infrastructure. The requirements in the question are:

The infrastructure account must be used to manage the network.
Individual accounts cannot manage their own network.
Individual accounts need to be able to create AWS resources within the subnet.

Since the individual accounts are not allowed to manage the network themselves, you can see that the intent is for the infrastructure account to share its subnets with accounts 1 and 2 so that they can create resources in them.

A is wrong - the architecture in the problem mentions only one VPC in one account. A Transit Gateway, as you know, is a service that connects multiple VPCs, so it is not a good fit for this problem.
C is wrong - building the same environment in each account is duplication, not sharing.
E is wrong - sharing prefix lists through RAM does not give the other accounts the ability to create resources in the subnets.
D is correct - it directly shares the subnets.

Therefore, the remaining answers, B & D, are correct, and the requirement can be met with AWS Resource Access Manager (RAM).

Correct Answers: B, D

💡 B. Enable resource sharing in the AWS Organizations management account.
💡 D. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU for which you want to use the shared network. Select each subnet that you want to associate with the resource share.

Question 2. A company wants to use a third-party software-as-a-service (SaaS) application. The third-party SaaS application is consumed through several API calls. The third-party SaaS application also runs on AWS inside a VPC. The company will consume the third-party SaaS application from inside a VPC. The company has internal security policies that mandate the use of private connectivity that does not traverse the internet. No resources that run in the company VPC are allowed to be accessed from outside the company's VPC. All permissions must conform to the principles of least privilege. Which solution meets these requirements?

ⓐ Create an AWS PrivateLink interface VPC endpoint.
Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint. Associate the security group with the endpoint.
ⓑ Create an AWS Site-to-Site VPN connection between the third-party SaaS application and the company VPC. Configure network ACLs to limit access across the VPN tunnels.
ⓒ Create a VPC peering connection between the third-party SaaS application and the company VPC. Update route tables by adding the needed routes for the peering connection.
ⓓ Create an AWS PrivateLink endpoint service. Ask the third-party SaaS provider to create an interface VPC endpoint for this endpoint service. Grant permissions for the endpoint service to the specific account of the third-party SaaS provider.

Solutions

The question says "does not traverse the internet," so we eliminate B and C: a Site-to-Site VPN runs over the internet, and VPC peering opens up the whole VPC rather than a single service, which conflicts with least privilege. That leaves the PrivateLink options.

The correct answer is A, because the question is asked from the consumer's point of view, not the provider's. Granting account permissions on the endpoint service, as in D, is the provider's responsibility.

Correct Answer: A

We need to set up both a consumer and a provider to follow the solution above, so I've built the following architecture. On the provider VPC side, where the third-party SaaS application runs, you must first create an endpoint service.

💡 An endpoint service must be fronted by a Network Load Balancer or a Gateway Load Balancer, and the load balancer's health checks should be passing. The result is shown in the picture below; since we created the NLB in advance, we'll skip its creation process.

1. Provider account - Create an endpoint service
2. Provider account - Add the consumer's IAM ARN
3. Consumer account - Enter the name of the endpoint service created in the provider account and send a connection request
4. Provider account - Accept the connection request
5. Consumer account - Check status

Question 3.
A security engineer determined that an existing application retrieves credentials to an Amazon RDS for MySQL database from an encrypted file in Amazon S3. For the next version of the application, the security engineer wants to implement the following application design changes to improve security:
✑ The database must use strong, randomly generated passwords stored in a secure AWS managed service.
✑ The application resources must be deployed through AWS CloudFormation.
✑ The application must rotate credentials for the database every 90 days.
A solutions architect will generate a CloudFormation template to deploy the application. Which resources specified in the CloudFormation template will meet the security engineer's requirements with the LEAST amount of operational overhead?

ⓐ Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.
ⓑ Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Create an AWS Lambda function resource to rotate the database password. Specify a Parameter Store RotationSchedule resource to rotate the database password every 90 days.
ⓒ Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Create an Amazon EventBridge scheduled rule resource to trigger the Lambda function password rotation every 90 days.
ⓓ Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Specify an AWS AppSync DataSource resource to automatically rotate the database password every 90 days.

Solutions

This question is looking for a managed AWS service that stores secrets such as database credentials.
As you can see, there are AWS Secrets Manager and AWS Systems Manager Parameter Store, both of which store key-value secrets. We need to know about each service before we can solve the problem, but as with any other problem, we can start from the customer's needs in the question:

The database must use strong, randomly generated passwords stored in a secure AWS managed service.
Application resources must be deployed through AWS CloudFormation.
The application needs to rotate the database credentials every 90 days.
The solutions architect generates a CloudFormation template to deploy the application.

B and D are excluded here because periodic credential rotation is a feature of AWS Secrets Manager, and, additionally, they use resources that are not supported by CloudFormation.

💡 A Parameter Store RotationSchedule resource does not exist; checking the documentation shows that RotationSchedule belongs to AWS Secrets Manager.
💡 AWS CloudFormation does not currently support creating a SecureString parameter type.

That leaves A and C, which really just differ in how the rotation is triggered. The answer is A, because Secrets Manager has its own rotation scheduling, so we don't have to use Amazon EventBridge.

Correct Answer: A

AWS Secrets Manager rotation cycle

💡 These days, IaC tools need to be general purpose, so I don't use CloudFormation very often, but I included it just in case a CloudFormation question shows up on the AWS exam.

Conclusion

I hope the AWS SA certification questions we covered today have been helpful to you. If you have any questions about the solutions, notice any errors, or have additional queries, please feel free to contact us anytime at
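As a footnote to Question 3 above, the resources from answer A can be sketched in CloudFormation roughly as follows. This is a minimal sketch, not the question's full template: the rotation Lambda (RotationFunction) and all names are placeholder assumptions.

```yaml
Resources:
  DatabaseSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: app/database-credentials          # placeholder secret name
      GenerateSecretString:                   # strong, randomly generated password
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        PasswordLength: 32
        ExcludeCharacters: '"@/\'

  DatabaseSecretRotation:
    Type: AWS::SecretsManager::RotationSchedule
    Properties:
      SecretId: !Ref DatabaseSecret
      RotationLambdaARN: !GetAtt RotationFunction.Arn  # rotation Lambda assumed defined elsewhere
      RotationRules:
        AutomaticallyAfterDays: 90            # rotate every 90 days, per the requirement
```

With this in place the 90-day cycle is handled by Secrets Manager itself, which is why no EventBridge rule (option C) is needed.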

  • AWS Case Study - TRIBONS

How did TRIBONS provide uninterrupted shopping mall services to their customers? SmileShark's CloudOps Service

TRIBONS Inc.

As an affiliate of LF (formerly LG Fashion), TRIBONS owns famous brands such as DAKS SHIRTS, the industry leader in men's shirts, as well as Notig, Bobcat, and Benovero. TRIBONS is also successfully operating FOMEL CAMELE, a fashion accessories brand targeting women in their twenties and thirties. TRIBONS also has a strong presence in children's apparel: through its "PastelMall" subsidiary, TRIBONS offers premium children's apparel brands such as Daks kids, Hazzys kids, PETIT BATEAU, BonTon and K.I.D.S. These brands are available in Korea's major department stores, and also online through Pastel Mall. TRIBONS is constantly striving to provide customers with quality products.

Name: TRIBONS Inc.
Area: Shirt and blouse manufacturing
Established: Jan 2008
Site:

Anomalous Service Failures in a Shopping Mall System

Challenges

SmileShark: When did the need for SmileShark come up in TRIBONS, and what were the challenges at the time?

Hyunsoo Jang: We had previously been using an AWS cloud environment through a different partner. However, in 2022, we began to experience difficulties running our shopping mall in the cloud. As the number of customers increased, we were facing anomalous service failures. We were also considering expanding additional services as the system developed.

SmileShark: You mentioned that TRIBONS experienced some unusual service failures; can you tell us what they were?

Hyunsoo Jang: Certain events, such as the real-time live commerce show 'Parabang', were usually broadcast only on our own mall, but sometimes we had to broadcast simultaneously on other live commerce platforms. In such cases, traffic was about 10 times the usual inflow.
On top of this, we also brought in customers through marketing channels such as promotional texts and KakaoTalk Plus Friends, and inflow rose to about 5 times the usual level. Therefore, we aimed for a more stable service.

Why TRIBONS Chose SmileShark

SmileShark: Why did you choose SmileShark's CloudOps service?

Hyunsoo Jang: To solve the problems we were facing, we needed a partner that could be agile and flexible, and we found SmileShark through a referral. SmileShark being recognized as an AWS Rising Star of the Year, and meeting with SmileShark's CEO and engineers, built trust and convinced us that they could empathize with our problem and commit to supporting us.

SmileShark: What did you find frustrating about your previous partner?

Hyunsoo Jang: As mentioned above, we were facing various issues during the operation of the shopping mall system, and there were many complicated parts that had not been explained well, so we were very disappointed with the previous partner's service. Changing server settings in AWS was not easy due to the absence of internal manpower, and communication was also difficult due to the difference in work areas between developers and system engineers. Therefore, what we most anticipated from a new partner was smooth communication and proactive measures. With the previous partner, issues were not shared, which led to confusion from server reboots, checks, and policy changes during business hours, and there were many unnecessary procedures for responding to issues, so it was important to us to see whether this could be improved.

Stabilizing the Infrastructure and a Successful Digital Transformation

A collaborative partner, not just a request-and-response vendor

SmileShark: We've heard that TRIBONS' infrastructure issues have been dramatically stabilized since implementing SmileShark's CloudOps. What has it really been like?
Hyunsoo Jang: In the year or so since we have been with SmileShark, we have seen a lot of improvements. We have been able to connect system issue alerts to the collaboration tools we use, so we can respond to issues quickly. From time to time, AWS sends out an announcement saying, "There's an issue with a service or a region, and you may experience downtime." The emails go to our contacts within TRIBONS, but they are also sent to our MSP. It would be nice if the MSP partner we work with shared these with us when we miss them, but unfortunately that little detail wasn't handled by the previous partner. The shopping mall was supposed to be an uninterrupted system, yet we were often getting server error pages (503). SmileShark has provided us with AWS announcements months in advance so that we can plan ahead and say, "We need to address these issues around this time." It also relays urgent announcements in the middle of the day so that we don't miss any issues. TRIBONS doesn't have any outages now; before SmileShark, we used to have four or five per quarter.

SmileShark: What do you think makes SmileShark's CloudOps service different from previous monitoring and operations support and from other MSPs?

Hyunsoo Jang: When an issue arises, they analyze the cause of the problem and explain it in detail in an email, and then again on the phone, so I know exactly what the issue is. They also share their technical opinions and areas for improvement, which is very helpful. Furthermore, in the event of a failure, we are notified within one minute on average, receive prompt feedback from the person in charge, and communicate in real time through a separate channel. As a result, we were able to obtain the certification mark just one year after the start of the ISMS certification audit project.

SmileShark: How did SmileShark help TRIBONS with the ISMS certification audit?
Hyunsoo Jang: During the ISMS audit, there was a part of the architecture that needed to be changed. SmileShark told us that it was a security violation to have the private and development areas in the same zone, so we had to separate them. We discussed this closely with Hosang Kwak, CloudOps team lead at SmileShark, and proceeded with as little disruption to the shopping mall as possible. In fact, even while we changed the architecture, the shopping mall service was not interrupted and the system operated stably. When I asked how to configure the application servers, such as Tomcat running on EC2, in addition to the AWS structure, he responded promptly and took practical measures.

SmileShark: In addition to running a stable infrastructure, we've heard that communication with developers has improved.

Hyunsoo Jang: Yes. Organizations without system engineer positions end up lacking knowledge such as log analysis and per-server settings. Communication with MSP partners was also a challenge due to the lack of communication between the teams. This was always a big concern for me because of the different job backgrounds, but I think SmileShark was the only one that worked out well in terms of communication.

AWS Cost and Operations Optimization Consulting

SmileShark: So, how was SmileShark's AWS consulting?

Hyunsoo Jang: We had a cost issue with the CDN service we were using. We thought the fees charged under the contract were excessive, so we were considering other CDN services, and we consulted with SmileShark about AWS CloudFront (CDN), which can be used at a reasonable price without a contract. We confirmed its cost-effectiveness and are considering switching this year. Also, we were having frequent issues with our software configuration management server, so we consulted with SmileShark about AWS's software configuration management services.
I told them that I would like to be able to deploy or build servers automatically, and SmileShark told me that AWS has a structure that can automate software configuration management. I thought this would reduce manpower risk and help stabilize the servers. However, the software configuration management server can be critical, so we are still considering it. Consulting with SmileShark helped us make the decision because we were able to put our situation into perspective.

SmileShark: Thank you. Do you have any comments that might be helpful to customers considering SmileShark?

Hyunsoo Jang: I would highly recommend SmileShark's CloudOps service to any company or team that doesn't yet have an expert in systems engineering, as SmileShark provides personalized support. SmileShark also helps build, manage, and optimize cloud infrastructure, making it especially useful for teams that don't have the knowledge or manpower to manage the cloud in-house. I would recommend SmileShark as the best AWS partner for building infrastructure, not only for the technical help, but also because SmileShark provides guidance on optimizing costs and increasing operational efficiency. Beyond the numbers, there's something else I've noticed a lot lately, and that's the trust SmileShark shows in its work. SmileShark is always consistent in its guidance and proactive in its solutions, and that's a big deal to me as a service user. At a time when we felt overwhelmed by the complexity of the AWS environment, SmileShark reached out and put us at ease, like seeing a lighthouse in a storm.

Building an Enhanced Security and Gifting System

TRIBONS' Future Plan

It has been four years since Pastel Mall (our shopping mall) was launched, and the influx of customers has allowed us to grow the service functionally.
While we previously aimed to improve the service level, this year we are focusing on server strengthening and security to maintain a stable system. Therefore, we are aiming to obtain the enhanced ISMS-P certification.

SmileShark: Can you tell us about the new service TRIBONS recently launched, Gifting?

Hyunsoo Jang: The Pastel Mall Gifting Service is now open. It is a mobile-only service that allows customers to send DAKS shirts and other Pastel Mall products to their loved ones. Gifts can be sent from existing Pastel Mall customers to non-members, any customer can browse a variety of products matching a theme in the dedicated gift shop, and gifts can be sent with a message card carrying a small sentiment, so we hope you enjoy it.

Detailed Services Applied to TRIBONS

What is SmileShark's CloudOps?

SmileShark: Which of SmileShark's CloudOps services did TRIBONS adopt, and what was the collaboration process like?

Hosang Kwak, CloudOps team lead of SmileShark: CloudOps doesn't just alert customers when something goes wrong with their system; it also analyzes the problem. It's important for us to analyze, find solutions, and provide them to our customers so that they can improve their systems and prevent the same problems from happening again. CloudOps is a collaborative MSP service that doesn't solve all problems at once, but rather works with customers to solve them and grow together. Hyunsoo Jang, TRIBONS' online platform team leader, also has a good understanding of CloudOps, so he authorized us to run various tests over time. Also, when we suggested a solution, he agreed to give it a try, and to repay that trust we are still working well together toward the common goal of uninterrupted service.

TRIBONS Architecture

What is Shark-Mon?

Shark-Mon is a monitoring tool that enables applications and services to operate around the clock without interruption, rather than being monitored by humans in the legacy way.
Developed in-house by SmileShark, Shark-Mon provides functions necessary for cloud operations, including basic protocol monitoring (HTTP, TCP, SSH, DNS, ICMP, gRPC, TLS), an AWS resource usage view, and Kubernetes monitoring, which is emerging as a global trend. It is currently in closed beta for select customers.

  • Are AWS Certifications worth it? : AWS SA-Professional 2

Are AWS Certifications worth it? : AWS Solutions Architect - Professional (SAP) Certification 2 Written by Minhyeok Cha

Continuing from our last discussion, we further explore AWS certifications, focusing on the Solutions Architect - Professional (SAP) exam, specifically how its questions relate to practical use in consoles or architectural structures.

Question 1. A company is running a two-tier web-based application in its on-premises data center. The application layer consists of a single server running a stateful application, connected to a PostgreSQL database running on a separate server. Anticipating significant growth in the user base, the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing. Which solution provides a consistent user experience while allowing scalability for the application and database layers?

ⓐ Enable Aurora Auto Scaling for Aurora replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
ⓑ Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with a round-robin routing algorithm and sticky sessions enabled.
ⓒ Enable Aurora Auto Scaling for Aurora replicas. Use an Application Load Balancer with round-robin routing and sticky sessions enabled.
ⓓ Enable Aurora Auto Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.

Solutions

In this question, the answer is apparent just by looking at the options. Aurora Auto Scaling is a feature intended for replicas, not writers, so options B and D are eliminated. Aurora Auto Scaling adjusts the number of Aurora replicas in an Aurora DB cluster using scaling policies. The routing algorithm is also key.
A Network Load Balancer does not use the least outstanding requests routing algorithm mentioned in A, which eliminates option A and leaves C as the correct answer.

Answer: C

💡 Load balancer nodes in a Network Load Balancer use the following process when receiving connections:
1. Use a flow hash algorithm to select a target from the target group for the default rule. The algorithm is based on:
◦ Protocol
◦ Source IP address and port
◦ Destination IP address and port
◦ TCP sequence number
2. Individual TCP connections are routed to a single target for the duration of the connection. Different TCP connections from a client can be routed to different targets, because their source ports and sequence numbers differ.

However, since this blog's main focus is on practical usage, let's delve into the architecture and console settings based on the content of this question.

The problem describes a traditional two-tier web-based application, commonly used in low-traffic scenarios, in which a client talks to a server that directly uses a database. Reading further, the customer is expected to grow significantly, so from a Solutions Architect's perspective, transitioning to a three-tier architecture is necessary. The migration mentioned in the question can be implemented as follows. The round-robin weights are set at a 50:50 ratio, as the question does not specify otherwise.

Let's now check the console operations together.

Application Load Balancer operations: These settings are configured under LB - Target Group - Attributes.
Round-robin settings
Sticky session settings
Sticky sessions use cookies to bind traffic to specific servers. Load balancer-generated cookies are the default; application-based cookies are set by the servers behind the load balancer.

Aurora Auto Scaling operations: Use the "Add Auto Scaling" option for replicas in RDS to create a reader instance. Before creation, configure the Auto Scaling policy by clicking the button as shown above.
Note that even if multiple policies are applied, scale-out is triggered when any one policy is satisfied.

※ cf. Routing algorithms for each ELB type:

For Application Load Balancers, load balancer nodes receiving requests use the following process:
Evaluate listener rules in priority order to determine the applicable rule.
Select a target from the target group for the rule action using the configured routing algorithm. The default routing algorithm is round robin.
Even if a target is registered in multiple target groups, routing is performed independently for each target group.

For Network Load Balancers, load balancer nodes receiving connections use the following process:
Use a flow hash algorithm to select a target from the target group for the default rule, based on:
Protocol
Source IP address and port
Destination IP address and port
TCP sequence number
Individual TCP connections are routed to a single target throughout the connection's life.
TCP connections from clients can be routed to different targets, because their source ports and sequence numbers differ.

For Classic Load Balancers, load balancer nodes receiving requests select registered instances using:
the round-robin routing algorithm for TCP listeners
the least outstanding requests routing algorithm for HTTP and HTTPS listeners

Weighted settings: Though not mentioned in the question, traffic weighting is a key feature of load balancers.

Question 2. A retail company must provide a series of data files to another company, its business partner. These files are stored in an Amazon S3 bucket belonging to Account A of the retail company. The business partner wants one of their IAM users, User_DataProcessor, from their own AWS account (Account B) to access the files. What combination of steps should the company perform to enable User_DataProcessor to successfully access the S3 bucket? (Select two.)

ⓐ Enable CORS (Cross-Origin Resource Sharing) for the S3 bucket in Account A.
ⓑ Set the S3 bucket policy in Account A as follows:
{ "Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": "arn:aws:s3:::AccountABucketName/*" }
ⓒ Set the S3 bucket policy in Account A as follows:
{ "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor" }, "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::AccountABucketName/*" ] }
ⓓ Set the permissions for User_DataProcessor in Account B as follows:
{ "Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": "arn:aws:s3:::AccountABucketName/*" }
ⓔ Set the permissions for User_DataProcessor in Account B as follows:
{ "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor" }, "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::AccountABucketName/*" ] }

Solutions

This question revolves around how the IAM user in Account B should be granted access to files in a bucket in Account A. Amazon S3 allows an account to grant users from other accounts access to objects it owns. Account B does not need access to Account A's console; only access to the resource is necessary. In an identity-based policy attached to the IAM user, a Principal element is unnecessary (the principal is the user the policy is attached to). Instead, the S3 bucket must be opened to the external account. Therefore, the correct answers are:

Option C - the S3 bucket policy that grants Account B's user the S3 permissions via a Principal:
{ "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor" }, "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::AccountABucketName/*" ] }

Option D - the IAM policy specifying the S3 permissions and the resource (Account A's bucket):
{ "Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": "arn:aws:s3:::AccountABucketName/*" }

Answer: C, D

※ cf.
Depending on the type of access you want to provide, permissions can be granted with:
IAM policies and resource-based bucket policies
IAM policies and resource-based ACLs
Cross-account IAM roles

Question 3. A company is running an existing web application on Amazon EC2 instances and needs to refactor the application into microservices running in containers. Separate application versions exist for two different environments, Production and Testing. The application load is variable, but the minimum and maximum loads are known. The solutions architect must design the updated application in a serverless architecture while minimizing operational complexity. Which solution most cost-effectively meets these requirements?

ⓐ Upload container images as functions to AWS Lambda. Configure concurrency limits for the attached Lambda functions to handle the anticipated maximum load. Configure two separate Lambda integrations within Amazon API Gateway, one for Production and another for Testing.
ⓑ Upload container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto-scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to route traffic to the ECS clusters.
ⓒ Upload container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto-scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with Fargate to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to route traffic to the EKS clusters.
ⓓ In AWS Elastic Beanstalk, create separate environments and deployments for Production and Testing. Configure two separate Application Load Balancers to route traffic to the Elastic Beanstalk deployments.

Solutions

The issue here involves refactoring the application on the existing EC2 instances into containerized microservices, essentially a service migration.
In this instance, we will focus on four key areas before proceeding: containers, microservices, serverless architecture, and cost efficiency. Option A, AWS Lambda, is indeed serverless, but it is not a container service suited to this refactoring, so it's eliminated. Option D, AWS Elastic Beanstalk, can run containers (Docker images) but is categorized as PaaS, not precisely serverless, so it's also eliminated. This leaves Option B, ECS, and Option C, EKS. Considering the last criterion, cost efficiency, ECS is more affordable, making B the correct answer.

Answer: B

This problem is about constructing a simple architectural solution, so we will skip the console walkthrough.

Conclusion

I hope the AWS SA certification questions we covered today have been helpful to you. If you have any questions about the solutions, notice any errors, or have additional queries, please feel free to contact us anytime at
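As a footnote to Question 2 above, the policy fragments only validate once wrapped in full policy documents. Below is a sketch of the two correct policies, with a placeholder account ID (111122223333) and the bucket name from the question. Note one detail the exam options gloss over: s3:ListBucket applies to the bucket ARN itself (without /*), so a working policy needs both ARNs in Resource.

Bucket policy in Account A:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/User_DataProcessor" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::AccountABucketName",
        "arn:aws:s3:::AccountABucketName/*"
      ]
    }
  ]
}
```

Identity policy attached to User_DataProcessor in Account B:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::AccountABucketName",
        "arn:aws:s3:::AccountABucketName/*"
      ]
    }
  ]
}
```

Both sides must allow the access: the bucket policy opens the bucket to the external principal, and the identity policy grants that user the matching permissions.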

  • ETL? Made Easy with AWS Glue! (Difficulty: Very Easy)

    ETL? Made Easy with AWS Glue! (Difficulty: Very Easy) - A First Taste of AWS Glue Studio Written by Minhyeok Cha Recently my team lead has taken a keen interest in data pipelines, but whenever we talk about automating data transformation, he says it is hard to follow how it all works. We have also received feedback from customers that even when we recommend AWS Glue, it seems difficult and they are reluctant to use it. Hearing this, I wrote this post so that AWS users would stop thinking of Glue as something so heavy. Introduction to the AWS Glue Service Why Choose AWS Glue AWS Glue Studio and a Quick Taste Preparation Putting a Dataset into S3 Creating the Data Catalog and Crawlers Glue Studio Results Conclusion Introduction to the AWS Glue Service AWS Glue is, simply put, an ETL service. People outside the field tend to assume that anything called a data pipeline must be difficult; I would say that is half right and half wrong. AWS Glue visualizes the script, making configuration simple and convenient. Because it is an AWS service, it also integrates well with other AWS services and can ingest a variety of data sources such as S3, RDS, and DynamoDB. Why Choose AWS Glue When a customer needs ETL functionality and builds the ETL process with open-source tools, they have to architect everything from scratch, and it takes a long time to learn how to use those tools. As an example, here is an ETL structure built with the Elastic Stack. Data pipelines differ depending on the user's goals, so treat this as reference only. Write an API or crawling script to collect the data. Pull the data into Logstash on a schedule, consolidate it, and store it in Elasticsearch. For analysis, fetch the data through the Elasticsearch API, analyze it with a Python script, and export it again. Looking at the Elastic Stack flow, the component really comparable to Glue is Logstash. Logstash requires users to manage and configure the server themselves; the drawback is, quite literally, that you must stand up a server and do the initial setup. You also have to study the filter plugins and how to use them, so it can be hard to use if you are not familiar with data pipelines. AWS Glue, on the other hand, is a managed service: there is no server or infrastructure to manage yourself, and the filter stage is fairly standardized. ※ Beyond the drawbacks described above there are also many advantages, and the right tool depends on how, and for what purpose, you use it. AWS Glue Studio and a Quick Taste Before using AWS Glue, you need to know about something called Glue Studio. It is the alpha and the omega: you use Glue for ETL, and AWS Glue Studio standardizes the entire installation and setup process so that you can simply use it. Still, there are a few things to know before using its features, so let's take a very quick look. Preparation 1. Putting a Dataset into S3 These are the buckets for the Data Source and the Target Data. Insert the dataset into the Raw bucket. (The data used here is a set of video games rated by users.) 2. Creating the Data Catalog and Crawlers A Glue Crawler explores storage or databases, proposes a schema for the data, and records the related data as metadata in the Data Catalog. Create a database for the Data Catalog. For now it is just an empty shell of a DB. Create a crawler and point it at the S3 bucket and database prepared earlier.
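The crawler setup described above can also be expressed programmatically. Below is a sketch of the equivalent boto3 request; the bucket path, database, role ARN, and crawler name are hypothetical placeholders (the post itself does this in the console), and the AWS call is commented out since it requires credentials.

```python
# Hypothetical crawler configuration mirroring the console steps above:
# point a crawler at the raw S3 bucket and the empty Data Catalog database.
crawler_config = {
    "Name": "raw-games-crawler",                                   # hypothetical name
    "Role": "arn:aws:iam::123456789012:role/hypothetical-glue-role",
    "DatabaseName": "games_catalog_db",                            # the "empty shell" DB
    "Targets": {"S3Targets": [{"Path": "s3://raw-bucket/dataset/"}]},
}

# With credentials in place, the crawler would be created and started like this:
# import boto3
# glue = boto3.client("glue")
# glue.create_crawler(**crawler_config)
# glue.start_crawler(Name=crawler_config["Name"])

print(crawler_config["Targets"]["S3Targets"][0]["Path"])  # → s3://raw-bucket/dataset/
```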
    After creating and running the crawler, it crawls the dataset in the S3 bucket and loads the metadata into the database. Glue Studio With the preparation done, let's go see the real Glue. Build the ETL using nodes. Specify the source and target endpoints. Build the data pipeline without writing complex code for data transformation and ETL jobs. Fill in the node names, endpoint values, transformation details, and so on. Review the ETL workflow, then save and run it. Every step performed above is recorded as an Apache Spark script, and transformations not available as visual nodes can additionally be written in code. Results The transformations used in the workflow above are as follows. Convert the DynamicFrame to a DataFrame → remove missing (null) values from the DataFrame. (This is not supported as a visual node, so it was done by writing PySpark code in a code block.) Use a Filter to output only the games with a user score of 8 or higher. Out of the original 200 games, removing null values cut the list down to 162, and querying for games rated 8 or higher found a total of 30. Check the Target bucket. Conclusion As mentioned once above, Glue is built on Apache Spark and performs its data processing in memory. How you use ETL varies with your purpose. For example, suppose you have several datasets and try to join them column by column: things start to get tedious. I wanted to show that part in detail as well, but the data file I brought was a single dataset and it was hard to find data to join with it, so I skipped it. The concept of this post was a quick taste, so I intended to introduce only Glue's easy-to-use features, but knowing Apache Spark's RDD, DataFrame, and Dataset concepts seemed useful enough that I worked them into some of the steps.
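To recap, the two transformations used in the workflow above (drop records with null values, then keep games rated 8 or higher) can be sketched in plain Python. Note this is only the logic for illustration; the actual job runs it as PySpark inside a Glue Studio code block, and the field names ("title", "user_score") are hypothetical.

```python
# Plain-Python sketch of the Glue job's two transformations; sample data only.
games = [
    {"title": "Game A", "user_score": 9.1},
    {"title": "Game B", "user_score": None},   # dropped: contains a null
    {"title": "Game C", "user_score": 7.4},    # dropped: score below 8
    {"title": "Game D", "user_score": 8.3},
]

# Step 1: remove records containing null values
non_null = [g for g in games if all(v is not None for v in g.values())]

# Step 2: keep only games with a user score of 8 or higher
high_rated = [g for g in non_null if g["user_score"] >= 8]

print([g["title"] for g in high_rated])  # → ['Game A', 'Game D']
```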

  • AWS Case Study - INUC

    How did INUC leverage AWS to minimize development time by over 35% and quickly deploy SaaS? INUC Inc. INUC is a B2B media platform software development company that specializes in video content management system (CMS) services. INUC’s video CMS solution features live scheduling, VOD archiving, menu organization and management, as well as web interfaces for each type of content that media managers need. INUC provides various editions (templates) for video meeting minutes, in-house broadcasting, and live commerce, so media managers can simply select the screen they want. The media managers also have the option to choose the appropriate license (Basic/Standard/Enterprise) and cloud service according to each customer's system policy and service scale. Name INUC Inc. Area Software development and supply Estab. Nov, 2010 Site Migration of B2B On-premises Solution to the Cloud Challenges INUC had been providing on-premises-based solutions; however, with changes in the market and growing customer demand, the need for cloud adoption became apparent. As its potential customer base expands from the public sector and enterprises to healthcare and commerce, demand for cloud services in the form of SaaS is growing, and INUC expects this to create opportunities for global expansion. In addition, all of INUC’s media services were built on Docker, consisting of containers for the API, streaming server, chat, web, storage, and more. The migration to the SaaS model was relatively easy, given that the cloud environment was already prepared. Why SmileShark? SmileShark's wide experience and expertise were key attractions. Specifically, SmileShark's solutions and the various suggestions made during meetings helped INUC make quick decisions. INUC expected that SmileShark's experience with various Kubernetes deployments and container operations would facilitate the achievement of its goal of transforming its CMS services into SaaS within a tight time frame.
    In fact, SmileShark's prompt technical support helped INUC migrate smoothly to the cloud. "Due to the nature of media services, we believe that it is reasonable to have a hybrid form that operates both existing on-premises servers and the cloud from a TCO (total cost of ownership) perspective," and "the cloud-based B2B SaaS model can be thought of as a content store operating an independent brand," explained Jason Shin, CEO of INUC, Inc. Safe and Swift Migration by Adopting ECS INUC reliably adopted the Amazon Web Services (AWS) cloud through SmileShark, experiencing flexibility and scalability beyond that of the traditional on-premises model. During the AWS architecture design process, Elastic Load Balancers (ELBs) and multiple Availability Zones were employed to enhance business continuity and customer satisfaction. Network traffic coming into the ELB is automatically distributed across multiple servers, preventing the load from being concentrated on one server and ensuring that even if a problem occurs in one server, the entire service is not affected. Additionally, by distributing the infrastructure across two or more Availability Zones, INUC can continue operating without service interruption even if a problem occurs in one Availability Zone. To mitigate data leakage and security risks, INUC organized its infrastructure into public and private subnets, placing critical data and systems in the private subnet, shielded from external threats. This approach has bolstered customer satisfaction and protected the brand value of INUC in the long run. INUC adopted ECS (Elastic Container Service) in its AWS environment to simplify and enhance the efficiency of deploying, managing, and scaling Docker container-based applications.
    ECS significantly shortened time to market by streamlining the process of deploying and managing applications and allowing developers to concentrate on developing a higher-quality service. To ensure consistent service during traffic spikes, INUC implemented an Auto Scaling group, dynamically managing resources based on usage. Additionally, INUC set the ECS service type to Replica to keep a specified number of tasks running continuously, thereby ensuring the scalability and resilience of the tasks, and configured it to adjust automatically to workload demands. Managed services such as ElastiCache, Aurora, and S3 have helped INUC reduce hardware and software maintenance costs, allowing them to focus more on core business activities. INUC established a fast and efficient development process through AWS services. Supported by AWS and SmileShark, developers quickly acquired new skills and developed cloud-optimized solutions, significantly accelerating INUC's technological innovation. Upcoming Development of Intelligent Services Based on STT INUC Next Step INUC is currently improving SEDN v2 with communication features and incorporating AI applications based on deep-learning algorithms into the cloud. Upcoming intelligent services include video scene analysis based on STT (Speech-to-Text), timestamp extraction, highlight generation, and video keyword search. INUC is improving its media user experience (MX) and aims to create business opportunities with more content IP operators and strengthen its global market presence. ※ Click the image above to sign up for the SEDN beta service. Used AWS Services Amazon Elastic Container Service(ECS) Amazon Simple Storage Service(S3) Amazon ElastiCache Amazon Aurora Introduced SmileShark Services SmileShark BuildUp | Accurate infra suggestion / Rapid deployment support SmileShark Migration | SmileShark guides you through the entire migration to AWS SmileShark Tech Support | Get expert guidance and assistance achieving your objectives

  • What is AWS Config?

    What is AWS Config? AWS Config is a service provided by AWS (Amazon Web Services) that lets you discover existing AWS resources, record the configuration of third-party resources, export a complete inventory of your resources with all configuration details, and determine how a resource was configured at any point in time. These capabilities can be used for compliance auditing, security analysis, resource change tracking, and troubleshooting. Overview AWS Config gives you a detailed view of the configuration of the AWS resources in your AWS account. More specifically, it monitors settings and tells you whether they match the desired state or potential compliance requirements. This includes how resources are related to one another and how they were configured in the past, so you can see how configurations and relationships change over time. How AWS Config Works AWS Config Features When setting up AWS Config, you can do the following: Resource management Specify the resource types you want AWS Config to record. Set up an Amazon S3 bucket to receive configuration snapshots on request and configuration history. Set up Amazon SNS to send configuration stream notifications. Grant AWS Config the permissions it needs to access the Amazon S3 bucket and the Amazon SNS topic. Rules and conformance packs Specify the rules AWS Config uses to evaluate compliance information for the recorded resource types. Use conformance packs, collections of AWS Config rules and remediation actions that can be deployed and monitored as a single entity in an AWS account. Aggregators Use an aggregator to get a centralized view of your resource inventory and compliance. An aggregator is an AWS Config resource type that collects AWS Config configuration and compliance data from multiple AWS accounts and AWS Regions into a single account and Region. Advanced queries This feature lets AWS users effectively manage and monitor the configuration of resources spanning multiple accounts and Regions, using complex queries to obtain the information they need quickly and accurately. Use one of the sample queries, or write your own by referring to the configuration schema of the AWS resource. How to Use AWS Config When you run applications on AWS, you typically use AWS resources, and these resources must be created and managed as a whole. As demand for your application keeps growing, so does the need to keep track of your AWS resources. AWS Config is designed to help you oversee your application resources in the following scenarios: Resource management To strengthen governance over resource configurations and detect resource misconfigurations, you need fine-grained visibility, at any time, into what resources exist and how they are configured. You can use AWS Config to be notified whenever resources are created, modified, or deleted, without having to monitor these changes by polling calls to each individual resource. You can use AWS Config rules to evaluate the configuration settings of your AWS resources. When AWS Config detects that a resource violates the conditions of one of your rules, it flags the resource as noncompliant and sends a notification. AWS Config continuously evaluates your resources as they are created, changed, or deleted. Audit and compliance With AWS Config you can access the configuration history of your resources and correlate configuration changes with the AWS CloudTrail events that caused them. This information gives you the full picture, from details such as who made the change and from which IP address, down to the effect of that change on AWS resources and related resources.
    You can use this information to generate reports over time that help with audits and compliance assessments. Managing and troubleshooting configuration changes When you use multiple AWS resources that depend on one another, a configuration change to one resource can have unintended consequences for related resources. With AWS Config you can see how the resource you intend to modify is related to other resources and assess the impact of the change. You can also use the historical resource configurations that AWS Config provides to troubleshoot issues and access the last known good version of a problematic resource. Security analysis To analyze potential security weaknesses, you need detailed historical information about your AWS resource configurations, such as the AWS IAM permissions granted to your users or the Amazon EC2 security group rules that control access to your resources. While AWS Config is recording, you can view the IAM policies assigned to a user, group, or role at any time, so you can determine the permissions a user held at a specific point in time. You can also use AWS Config to view the configuration of your EC2 security groups, including the port rules that were open at a specific time. This information lets you determine whether a security group blocked incoming TCP traffic to a specific port. Related Links AWS Config Features
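As a small illustration of the advanced query feature described above, the snippet below builds a query in AWS Config's SQL-like query language and shows (commented out, since it needs AWS credentials and an active configuration recorder) how it would be issued with boto3's `select_resource_config`. The query itself simply lists all recorded EC2 instances.

```python
# AWS Config advanced query sketch: list recorded EC2 instances and their types.
import json

query = (
    "SELECT resourceId, resourceType, configuration.instanceType "
    "WHERE resourceType = 'AWS::EC2::Instance'"
)

# With credentials and a recorder in place, the query would run like this:
# import boto3
# config = boto3.client("config")
# results = config.select_resource_config(Expression=query)
# for row in results["Results"]:       # each result row is a JSON string
#     print(json.loads(row))

print(query)
```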

  • AWS Lambda: The Ultimate Guide for Beginners 2/2

    Everything About AWS Lambda: The Ultimate Guide for Beginners 2/2 - Creating Lambda Functions in the Console, Setting Triggers, and Calculating Pricing Written by Hyojung Yoon Hello! Today, we will continue to delve deeper into AWS Lambda. Especially in this part, we will practice creating Lambda functions and setting Lambda triggers using the AWS Console. We will also understand the pricing policy of AWS Lambda and learn how to calculate actual costs. Let’s begin! Start AWS Lambda Creating Lambda Functions in the Console Writing Lambda Function Code Configuring Lambda Functions Executing Lambda Functions Setting Lambda Trigger Lambda Trigger + S3 AWS Lambda Pricing Lambda Pricing Policy Calculating Lambda Prices Interpreting Lambda Pricing Calculation Conclusion Start AWS Lambda 1. Creating Lambda Functions in the Console You can create your first function using the AWS Console. Select Lambda within the AWS Console. Press the [ Create function ] button to create a Lambda function. You will be presented with three options at the top. Create from scratch: Start building a function from the ground up. Use a blueprint: Utilize AWS-provided templates that can be customized with sample code. Container image: Specifically for Docker containers. After making your selection, add a new function name and choose the desired runtime¹. ¹Runtime: Options for the programming language you want to write your Lambda in, such as Node.js, Python, Go, etc. Permissions specify the rights that will be granted to the Lambda function. Click [ Change default execution role ] to create a new role with the standard Lambda permissions. 2. Writing Lambda Function Code Review the function you created, here named hjLambda. Scroll down to the function code section. Here, you can select a template or design your own. 3. Executing Lambda Functions Before running the Lambda function, we will first perform a test. Select [ Configure test events ] from the test dropdown menu, which opens a code editor for test event configuration.
    Select create new event, and enter an event name like MyEvent. Keep the event visibility settings private as default. From the template list, select hello-world and then click [ Save ]. Click the [ Test ] button and check the console for successful execution. In the execution result tab, confirm if the execution was successful. The function log section displays logs created by the Lambda function execution and key information reported in the log output. If the test went well, click the [ Deploy ] button to make it executable. 4. Setting Lambda Trigger 1) Lambda Trigger + S3 We will implement logic using an AWS Lambda function to copy files from one Amazon S3 bucket to another. ※ Cf: How can I use a Lambda function to copy files from one Amazon S3 bucket to another? Step 1: Create the source and destination Amazon S3 buckets. Open the Amazon S3 console and select create bucket. Create both the source and destination buckets. Here, the name of the source bucket is set to [ hjtestbucket ] and the destination bucket to [ hjtestbucket02 ]. Step 2: Create a Lambda Function Open the functions page in the Lambda console and create a function. Select the runtime dropdown and choose Python 3.9, then create a Lambda function like the one shown in the picture. Select the code tab and paste the following Python code.
    import boto3
    import botocore
    import json
    import os
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    s3 = boto3.resource('s3')

    def lambda_handler(event, context):
        logger.info("New files uploaded to the source bucket.")
        key = event['Records'][0]['s3']['object']['key']
        source_bucket = event['Records'][0]['s3']['bucket']['name']
        destination_bucket = "destination_bucket"
        source = {'Bucket': source_bucket, 'Key': key}
        try:
            response = s3.meta.client.copy(source, destination_bucket, key)
            logger.info("File copied to the destination bucket successfully!")
        except botocore.exceptions.ClientError as error:
            logger.error("There was an error copying the file to the destination bucket")
            print('Error Message: {}'.format(error))
        except botocore.exceptions.ParamValidationError as error:
            logger.error("Missing required parameters while calling the API.")
            print('Error Message: {}'.format(error))

    After pasting the code, select [ Deploy ]. Step 3: Create an Amazon S3 Trigger for the Lambda Function Open the function page in the Lambda console and select [ Add trigger ] from the function overview. Select S3 from the trigger configuration dropdown. Enter the name of the source bucket and select All object create events for the event type. Acknowledge that using the same S3 bucket for both input and output is not recommended, then select Add. Step 4: Provide AWS IAM Permissions for the Lambda Function's Execution Role Like the following resource-based policy, add IAM permissions to the Lambda function's execution role to copy files to the destination S3 bucket. Open the functions page in the Lambda console and click the role name under configuration - execution role. In the IAM console, select [ Add permissions ] and then [ Create inline policy ]. Choose the [ JSON ] option and paste the JSON policy document below. ※ Note Replace destination-s3-bucket with your S3 destination bucket and source-s3-bucket with your S3 source bucket.
    Change the /* at the end of the resource ARN to the prefix value needed for your environment to restrict permissions. It is best to grant only the minimum permissions necessary to perform the action. For more details, refer to Granting least privilege.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "putObject",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::destination-s3-bucket/*"
                ]
            },
            {
                "Sid": "getObject",
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject"
                ],
                "Resource": [
                    "arn:aws:s3:::source-s3-bucket/*"
                ]
            }
        ]
    }

    Select [ Create policy ] to save the new policy. Step 5: Check if the Lambda Function is Executing Properly Now, to check if the Lambda trigger is working correctly, upload a file to the source S3 bucket. Click [ Upload ] and check the upload status. Go into the destination S3 bucket and verify that the file has been copied. If the same file is stored, you can tell the function is working properly. AWS Lambda Pricing 1. Lambda Pricing Policy Lambda costs are determined by three main factors: the number of requests, execution time, and memory size. Lambda offers 1 million free requests and 400,000 GB-seconds of free computing time per month, which allows small projects or those in the testing phase to use Lambda without additional costs. 2. Calculating Lambda Prices You can easily calculate Lambda prices using the AWS pricing calculator website. Let's calculate the AWS Lambda fees for 3,000,000 executions per month, each running for 1 second, with 512MB of memory (0.5 GB). Scroll down to [ Show Details ] to see how the pricing is determined. This calculation only considers the base costs, so additional costs may occur. Prices are subject to change, so it's best to check the latest information on the AWS official website. Conclusion Through this guide, you have learned how to create Lambda functions in the AWS console.
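Before wrapping up, the arithmetic behind the pricing example above (3,000,000 requests a month, 1 s average duration, 512 MB memory) can be sketched as follows. The rates used are assumptions (us-east-1, x86 architecture, at the time of writing): $0.20 per 1M requests and $0.0000166667 per GB-second, with the monthly free tier of 1M requests and 400,000 GB-seconds mentioned above.

```python
# Sketch of the Lambda bill for the example in the text; rates are assumptions.
requests = 3_000_000
duration_s = 1.0
memory_gb = 0.5          # 512 MB

free_requests = 1_000_000
free_gb_seconds = 400_000
price_per_million_requests = 0.20       # assumed us-east-1 rate
price_per_gb_second = 0.0000166667      # assumed us-east-1 rate

# Request charge: only requests beyond the free tier are billed.
request_cost = max(requests - free_requests, 0) / 1_000_000 * price_per_million_requests

# Compute charge: GB-seconds consumed, minus the free tier.
gb_seconds = requests * duration_s * memory_gb
compute_cost = max(gb_seconds - free_gb_seconds, 0) * price_per_gb_second

print(f"request cost: ${request_cost:.2f}")                 # → $0.40
print(f"compute cost: ${compute_cost:.2f}")                 # → $18.33
print(f"total:        ${request_cost + compute_cost:.2f}")  # → $18.73
```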
Additionally, this series has introduced you to Lambda’s pricing policy and calculation methods, providing you with the basic steps needed to apply this knowledge to real business scenarios. I hope this experience will be beneficial as you design a variety of cloud services utilizing AWS Lambda. Links Copy S3 files to another S3 bucket with Lambda function | AWS re:Post Invoking Lambda functions - AWS Lambda Serverless Computing - AWS Lambda Pricing - Amazon Web Services

  • Are AWS Certifications worth it? : AWS SA-Professional

    Are AWS Certifications worth it? : AWS Solutions Architect - Professional (SAP) Certification 1 Written by Minhyeok Cha Today, I've organized the AWS Solutions Architect - Professional (SAP) certification exam questions in terms of real-world console or architectural structures. Question 1. A company needs to design a hybrid DNS solution. This solution uses Amazon Route 53 private hosted zones for the domain for the resources stored in its VPCs. The company has the following DNS resolution requirements: On-premises systems must be able to resolve and connect to All VPCs should be able to resolve There is already an AWS Direct Connect connection between the on-premises corporate network and the AWS Transit Gateway. What architecture should the company use to meet these requirements with the best performance? ⓐ Associate the private hosted zone with all VPCs. Create a Route 53 inbound resolver in a shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules on the on-premises DNS server pointing to the inbound resolver. ⓑ Associate the private hosted zone with all VPCs. Deploy Amazon EC2 conditional forwarders in a shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules on the on-premises DNS server pointing to the conditional forwarders. ⓒ Associate the private hosted zone with the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules on the on-premises DNS server pointing to the outbound resolver. ⓓ Associate the private hosted zone with the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Attach the shared services VPC to the transit gateway and create forwarding rules on the on-premises DNS server pointing to the inbound resolver. Solutions The key to this question is how to centrally manage DNS for a hybrid cloud using AWS services.
    Combining the company's requirements, the answer is A. Let's examine this one by one. Answer: A Breaking down the DNS requirements in the question: First, associating the private hosted zone with all VPCs is configured as follows. This setting allows traffic routing by directly associating the private hosted zone with the VPCs. As seen in the blue box, to use this function, you need to set enableDnsHostnames and enableDnsSupport to true in the VPC settings. Second, establish a connection to the inbound resolver endpoint's IP address via Direct Connect or VPN. This allows on-premises systems to resolve and connect to Assuming DX and VPN are set up, implementing the Route 53 Resolver endpoints results in the following architecture. Using this architecture, you can create inbound and outbound endpoints (specified per VPC) and create a Route 53 private hosted zone for the designated endpoints using the first method. By completing this task, you can verify that all VPCs (though they need to be specified separately) and on-premises systems can resolve the domain through the AWS Transit Gateway and DX (or VPN). ※ cf. You can simply check the connected domain using the following commands. Use the telnet command to confirm a port 53 connection to the inbound resolver endpoint's IP address: telnet 53. To check the validity of domain resolution, complete a domain name lookup from the on-premises DNS server or local host. For Windows: nslookup For Linux or macOS: dig If the previous command fails to return records, you can bypass the on-premises DNS server. Use the following command to send a DNS query directly to the inbound resolver endpoint IP address. For Windows: nslookup @ For Linux or macOS: dig @ Question 2 A company provides weather data to multiple customers through a REST-based API. The API is hosted in Amazon API Gateway and integrates with various AWS Lambda functions for each API operation.
    The company uses Amazon Route 53 for DNS and has created a resource record for The company stores data for the API in an Amazon DynamoDB table. The company needs a solution to provide failover capability for the API to another AWS region. Which solution meets these requirements? ⓐ Deploy a new set of Lambda functions in a new region. Update the API Gateway API to use an edge-optimized API endpoint targeting Lambda functions in both regions. Convert the DynamoDB table into a global table. ⓑ Deploy a new API Gateway API and Lambda functions in a different region. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the record. Enable health check monitoring. Convert the DynamoDB table into a global table. ⓒ Deploy a new API Gateway API and Lambda functions in a different region. Change the Route 53 DNS record to a failover record. Enable health check monitoring. Convert the DynamoDB table into a global table. ⓓ Deploy a new API Gateway API in a new region. Change the Lambda functions to global functions. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the record. Enable health check monitoring. Convert the DynamoDB table into a global table. Solutions Question 2 involves a frequently used combination of AWS services, API Gateway - Lambda - DynamoDB, with DNS handled by Route 53 records. The question asks for a combination that can fail the API over to another region in case of an outage. Many might pick C based solely on the "Change the Route 53 DNS record to a failover record" option; and, with no trap hiding here, C is indeed the answer. Answer: C For DNS, handling failover to another region in case of an outage requires the following configuration: Create API resources in the main region (domain). Create API resources in the sub-region (domain). Map the created APIs to a custom domain. Create a Route 53 DNS failover record.
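The failover record from the last step above can be sketched as a Route 53 change batch. Everything here is a hypothetical placeholder (domain names, health check ID, hosted zone ID); with credentials in place, the batch would be submitted through boto3's `route53.change_resource_record_sets`, which is shown commented out.

```python
# Sketch of PRIMARY/SECONDARY failover records for the custom API domains.
failover_change_batch = {
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "",              # hypothetical domain
                "Type": "CNAME",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",                      # main region
                "TTL": 60,
                "HealthCheckId": "hypothetical-health-check-id",
                "ResourceRecords": [{"Value": ""}],
            },
        },
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "",
                "Type": "CNAME",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",                    # sub-region
                "TTL": 60,
                "ResourceRecords": [{"Value": ""}],
            },
        },
    ]
}

# import boto3
# route53 = boto3.client("route53")
# route53.change_resource_record_sets(
#     HostedZoneId="ZHYPOTHETICAL", ChangeBatch=failover_change_batch)

print(failover_change_batch["Changes"][0]["ResourceRecordSet"]["Failover"])  # → PRIMARY
```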
    Additionally, as you continue reading the problem, you'll find health check monitoring activation and the DynamoDB global table. Completing these steps results in the following architecture. This problem mainly requires building a solution for disaster recovery, but this time we will also work through the API design. 1. Create APIs for both the main and sub-regions. (Configure separate regions.) It's easy to create an API Gateway, but we need a domain name. AWS API Gateway has a custom domain creation feature. It's easy to set up, but note that a TLS certificate, i.e., an ACM certificate, is required. Perform the same task in the sub-region as well. 2. Create a Route 53 health check. First, use the domain of the API created above in the main region. This step sets up an alarm to switch to the sub-region in case of an outage. 3. Routing policy - configure failover. Route 53 offers several routing policy types; among them, we need the failover policy. Add records using primary (main region) and secondary (sub-region) entries, each pointing to its created API domain as the record value. 4. DynamoDB global table There is a separate section for creating global table replicas, so it's easy to find. Conclusion I hope the problems solved today help you with your certification preparation. Look forward to more in-depth problem explanations and key strategies in the next post!

  • AWS Lambda: The Ultimate Guide for Beginners 1/2

    Everything About AWS Lambda: The Ultimate Guide for Beginners 1/2 Written by Hyojung Yoon Today, we will learn about AWS Lambda, a key player in various IT environments. AWS Lambda enables the provision of services with high availability and scalability, thus enhancing performance and stability in cloud environments like AWS. In this blog, we'll delve into AWS Lambda, covering its basic concepts, advantages and disadvantages, and real-life use cases. Additionally, we'll compare AWS Lambda with EC2 to understand when to use each service. Let's get started! What is AWS Lambda? What is Serverless Computing? AWS Lambda How AWS Lambda Works Pros and Cons of AWS Lambda Advantages Serverless Architecture Cost-Effective Integration with AWS Services Disadvantages Execution Time Limit Stateless Cold Start Concurrency Limit Use Cases of AWS Lambda Automation of System Operations Web Applications Serverless Batch Processing Others Differences between AWS Lambda and EC2 When to Use AWS Lambda? When to Use AWS EC2? Conclusion What is AWS Lambda? 1. What is Serverless¹ Computing? AWS Lambda is a serverless computing service. Serverless computing is a cloud computing execution model that allows the operation of backend services without managing servers. Here, you can focus solely on writing code, while AWS manages the infrastructure. This model enables developers to develop and deploy applications more quickly and efficiently. ¹Serverless? A cloud-native development model where developers don't need to provision servers or manage application scaling. Essentially, cloud providers manage the server infrastructure, freeing developers to focus more on the actual functionality they need to implement. 2. AWS Lambda AWS Lambda is an event-driven serverless computing service that enables code execution for a variety of applications and backend services without the need to provision or manage servers.
    Users simply provide code in a supported language runtime (Lambda supports Python, C#, Node.js, Ruby, Java, PowerShell, and Go). The code is structured as Lambda functions, which users can write and use as needed. AWS Lambda offers an automatically triggered code execution environment, ideal for event-based architectures and powerful backend solutions. For example, code is executed when a file is uploaded to an S3 bucket or when a new record is added to DynamoDB. 3. How AWS Lambda Works Lambda Functions These are resources in Lambda that execute code in response to events or triggers from other AWS services. Functions include code to process events or other AWS service events that are passed to them. Event Triggers (Event Sources) AWS Lambda runs function instances to process events. These can be invoked directly using the Lambda API or triggered by various AWS services and resources. AWS Lambda functions are triggered by various events, like HTTP requests, data state transitions, file uploads, etc. How Lambda Works You create a function, add basic information, write code in the Lambda editor or upload it, and AWS handles scaling, patching, and infrastructure management. Pros and Cons of AWS Lambda Using AWS Lambda allows developers to focus on development without the burden of server management, similar to renting a car where you only drive, and maintenance is handled by the rental company. However, Lambda functions are stateless, so additional configuration is necessary for state management. Also, the 'cold start' phenomenon can slow initial response times, like a computer waking from sleep. 1. Advantages 1) Serverless Architecture Developers can focus on development without worrying about server management, akin to renting and driving a car while maintenance is handled by the rental company. 2) Cost-Effective Pay only for the computing resources actually used.
    Functions are called and processed only when needed, so you don't need to keep servers running all the time, making it cost-effective. Lambda charges based on the number of requests and the execution time of the Lambda code, so no charges apply when code is not running. 3) Integration with AWS Services Allows seamless integration and programmatic interactions with other AWS services. Lambda functions also allow programmatic interactions with other AWS services using one of the AWS software development kits (SDKs). 2. Disadvantages 1) Execution Time Limit Lambda has a maximum execution time of 15 minutes (900 seconds) and a maximum memory limit of 10 GB (10,240 MB). Thus, it is not suitable for long-running processes that exceed 15 minutes. 2) Stateless³ Not suitable for maintaining state or DB connections. - ³Stateless? Means that data is not stored between interactions, allowing multiple tasks to be performed at once or rapidly scaled without waiting for a task to complete. 3) Cold Start As a serverless service designed for efficient resource use, Lambda turns off computing power if it is not used for a long time. When a function is then first called, additional setup is needed to run the Lambda function, leading to a delay known as a cold start. The cold start phenomenon varies depending on the language used and the memory settings. This initial delay can affect performance by delaying responses. 4) Concurrency⁴ Limit By default, Lambda limits the number of Lambda functions that can execute simultaneously to 1,000 per region. Requests exceeding this number can prevent Lambda from running. - ⁴Concurrency? The number of requests a Lambda function is processing at the same time. As concurrency increases, Lambda provisions more execution environment instances to meet the demand. Use Cases of AWS Lambda Lambda is ideal for applications that need to scale up rapidly and scale down to zero when there's no demand. For example, Lambda can be used for purposes like: 1.
Automation of System Operations 🎬 Set up CloudWatch Alarms for all resources. When resources are in poor condition, such as Memory Full or a sudden CPU spike, CloudWatch Alarms trigger a Lambda Function. The Lambda Function notifies the team or relevant parties via Email or Slack Notification. Combine Lambda Function with Ansible for automated recovery in case of failure, such as resetting memory on a local instance or replacing resources when Memory Full occurs. 2. Web Applications 🎬 Store Static Contents (like images) in S3 when clients connect. Use CloudFront in front of S3 for fast serving globally. Separately use Cognito for authentication. For Dynamic Contents and programmatic tasks, use Lambda and API Gateway to provide services, with DynamoDB as the backend database. 3. Serverless Batch Processing 🎬 When an object enters S3, a Lambda Splitter distributes tasks to Mappers, and the Mappers save the completed tasks in DynamoDB. Lambda Reducer outputs back to S3. 4. Other Cases 1) Real-Time Lambda Data Processing Triggered by Amazon S3 Uploads. [Example] Thumbnail creation for S3 source images. 2) Stream Processing Use Lambda and Amazon Kinesis for real-time streaming data processing for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, IoT device data telemetry, etc. 3) IoT Backend Build a serverless backend using Lambda to handle web, mobile, IoT, and third-party API requests. 4) Mobile Backend Build a backend using Lambda and Amazon API Gateway to authenticate and process API requests. Integrate easily with iOS, Android, web, and React Native frontends using AWS Amplify. Differences Between AWS Lambda & EC2 AWS Lambda is serverless and event-driven, suitable for low-complexity, fast execution tasks, and infrequent traffic. 
EC2, on the other hand, is ideal for high-performance computing, disaster recovery, DevOps, development and testing, and offers a secure environment. 1. When Should I Use AWS Lambda? Low-Complexity Code: Lambda is the perfect choice for running code with minimal variables and third-party dependencies. It simplifies the handling of easy tasks with low-complexity code. Fast Execution Time: Lambda is ideal for tasks that occur infrequently and need to be executed within minutes. Infrequent Traffic: Businesses dislike having idle servers while still paying for them. A pay-per-use model can significantly reduce computing costs. Real-Time Processing: Lambda, when used with Amazon Kinesis, is best suited for real-time batch processing. Scheduled CRON Jobs: AWS Lambda functions are well-suited for ensuring scheduled events are triggered at their set times. 2. When Should I Use AWS EC2? High-Performance Computing: Using multiple EC2 instances, businesses can create virtual servers tailored to their needs, making EC2 perfect for handling complex tasks. Disaster Recovery: EC2 is used as a medium for disaster recovery in both active and passive environments. It can be quickly activated in emergencies, minimizing downtime. DevOps: DevOps processes and tooling have been comprehensively developed around EC2. Development and Testing: EC2 provides on-demand computing resources, enabling companies to deploy large-scale testing environments without upfront hardware investments. Secure Environment: EC2 is renowned for its strong security controls. Conclusion This guide provided an in-depth understanding of AWS Lambda, which plays a significant role in building event-driven, serverless architectures in the AWS environment. In the next session, we will explore accessing the console, creating and executing Lambda functions, and understanding fee calculations. We hope this guide helps you in starting and utilizing AWS Lambda, as you embark on your journey into the expansive serverless world!
Links A Deep Dive into AWS Lambda - Sungyeol Cho, System Engineer (AWS Managed Services) - YouTube What is AWS Lambda? - AWS Lambda Troubleshoot Lambda function cold start issues | AWS re:Post

  • What is Amazon Lightsail : EC2 vs Lightsail comparison

What is Amazon Lightsail : EC2 vs Lightsail comparison written by Hyojung Yoon Hello everyone. Today, let's take some time to explore Amazon's cloud service called Lightsail. Understanding both Amazon Lightsail and Amazon EC2, two key cloud computing services, is essential. These two services are part of AWS's major cloud solutions, each with its unique features and advantages. In this post, we'll delve into each service, especially focusing on the key features of Amazon Lightsail and when it's suitable. So, let's dive right in! What is Amazon Lightsail? Amazon Lightsail What is a VPS? Components of Lightsail Features of Lightsail Advantages of Lightsail Disadvantages of Lightsail EC2 vs Lightsail Differences between Amazon Lightsail and EC2 Which one should you use? Conclusion What is Amazon Lightsail? 1. Amazon Lightsail Amazon Lightsail is a Virtual Private Server (VPS) service created by AWS. It includes everything you need to quickly launch your project, such as instances, container services, managed databases, CDN distribution, load balancers, SSD-based block storage, static IP addresses, DNS management for registered domains, resource snapshots (backups), and more. It's specialized in making it easy and fast to build websites or web applications. 2. What is a VPS? VPS stands for Virtual Private Server, which means taking a physical server and dividing it into multiple virtual servers. These segmented virtual servers are shared among various clients. While you share a physical server with others, each client has its own private server space. However, since everyone shares computing resources on one server, a user monopolizing too many resources can affect others in terms of RAM, CPU, etc. 3.
Components of Lightsail Instances Containers Databases Networking Static IP Load Balancer (ELB) Distribution (CDN) DNS Zone: Domain & sub-domain management Storage (S3, EBS): Additional capacity available if instances run out of space Snapshots (AMI): Scheduled for automatic backups Features of Lightsail 1. Advantages of Lightsail AWS Lightsail allows for intuitive instance creation, which is less complex than EC2. With pre-configured bundles, users can swiftly deploy applications, websites, and development environments without a deep understanding of cloud architecture. Its user-friendly interface allows easy creation of containers, storage, and databases. This makes it ideal for beginners and smaller projects. 2. Disadvantages of Lightsail However, the advantages mentioned above can become limitations of Lightsail. It may not be suitable for applications expecting rapid increases in traffic or resource demands, and pre-configured bundles can limit detailed settings. Additionally, integrating with other AWS services may require migration. Other limitations include: Up to 20 instances per account 5 static IP addresses per account Up to 6 DNS zones per account Up to 20 TB of attached block storage (disks) in total 5 load balancers per account Up to 20 certificates EC2 vs Lightsail 1. Differences Between Amazon Lightsail and EC2 1) Cost Generally, Amazon Lightsail is cheaper. At 2GB memory, it charges $10 per month, inclusive of a 60GB SSD EBS volume and traffic costs. In contrast, EC2 charges $11.37 per month on a 3-year commitment (without upfront payment) for t3.small with 60GB EBS, and traffic costs are extra. Therefore, Lightsail is more economical for continuous usage. However, if you only run EC2 for the time you actually need, it can still be cost-effective: EC2 charges are based on actual usage, making it a more flexible option for cost management. 2) Features While EC2 offers advanced features not available in Lightsail, Lightsail omits many detailed configuration options.
Features not available in Lightsail include: Limited VPC-related functions Instance type changes Scheduled snapshot creation Detailed security group settings IAM role assignment Various load balancer options 2. Which one should you use? 1) Amazon EC2 (Elastic Compute Cloud) Powerful and flexible cloud computing platform offered by AWS Customizable on-demand computing performance for all application needs Scalable resources for anything from websites to high-performance scientific simulations Seamlessly integrates with other AWS services Ideal for businesses with infrastructure managers capable of managing virtual servers, networks, security groups, etc. It's particularly beneficial for CPU-intensive operations and on-demand functionalities, allowing for efficient cost management. 2) Amazon Lightsail Simplifies the cloud experience Offers virtual servers, storage, and networking in easy-to-understand packages Ideal for simpler applications like personal websites, blogs, or small web apps Fixed pricing model simplifies budgeting Ideal for individuals looking for swift web service hosting without dedicated infrastructure management. It's more suitable for services emphasizing network traffic rather than CPU-intensive tasks. Conclusion Understanding the differences between Amazon EC2 and Lightsail is the first step toward harnessing cloud computing. EC2 offers high scalability and customization, while Lightsail provides a simple and intuitive cloud experience. By selecting the most appropriate service based on your requirements, technical expertise, and project complexity, you can ensure success in the digital landscape. Both have unique advantages, so choose according to your needs and expertise. So, enjoy your cloud surfing! ⛵⛵ Links Virtual Private Server and Web Hosting - Amazon Lightsail - Amazon Web Services Virtual Private Server and Web Hosting - Amazon Lightsail FAQs - Amazon Web Services

  • What is a Load Balancer? : A Comprehensive Guide to AWS Load Balancer

Written by Hyojung Yoon Hello, everyone! Today, we will delve into the fascinating world of Load Balancers and Load Balancing – pivotal technologies that enable web services to remain stable even in high-traffic situations, especially in cloud environments like AWS. These technologies enhance a service's performance, stability, and scalability. Let's begin our journey through the basic concepts of Load Balancers and Load Balancing to the types of AWS Load Balancers in this blog. What is a Load Balancer? Load Balancer Scale Up and Scale Out What is Load Balancing? Load Balancing Benefits of Load Balancing Load Balancing Algorithms Static Load Balancing Round Robin Method Weighted Round Robin Method IP Hash Method Dynamic Load Balancing Least Connection Method Least Response Time Method Types of AWS Load Balancer ALB (Application Load Balancer) NLB (Network Load Balancer) ELB (Elastic Load Balancer) Conclusion What is a Load Balancer? 1. Load Balancer Load balancers sit between the client and a group of servers, distributing traffic evenly across multiple servers and thereby mitigating the load on any particular server. When there is excessive traffic to a single server, it may not handle the load, leading to downtime. To address this issue, either a Scale Up or Scale Out approach is employed. 2. Scale Up and Scale Out Scale Up improves the existing server's performance, including tasks like upgrading CPU or memory, while Scale Out distributes traffic or workload across multiple computers or servers. Each method has its advantages and disadvantages, and choosing the more appropriate one is crucial. In the case of Scale Out, load balancing is essential to evenly distribute the load among multiple servers. The primary purpose of load balancing is to prevent any single server from being overwhelmed by distributing incoming web traffic across multiple servers, thus enhancing server performance and stability. What is Load Balancing?
1. Load Balancing Load Balancing refers to the technology that distributes tasks evenly across multiple servers or computing resources, preventing service interruption due to excessive traffic and ensuring tasks are processed without delay. 2. Benefits of Load Balancing 1) Application Availability Server failures or maintenance can increase application downtime, rendering the application unusable for visitors. A load balancer automatically detects server issues and redirects client traffic to available servers, enhancing system fault tolerance. With load balancing, it is more manageable to: Undertake application server maintenance or upgrades without application downtime Facilitate automatic disaster recovery to your backup site Conduct health checks and circumvent issues leading to downtime 2) Application Scalability A load balancer can intelligently route network traffic between multiple servers. This allows your application to accommodate thousands of client requests, enabling you to: Circumvent traffic bottlenecks on individual servers Gauge application traffic to adaptively add or remove servers as required Integrate redundancy into your system for coordinated and worry-free operation 3) Application Security Load balancers, equipped with inbuilt security features, add an extra security layer to your Internet applications. They are invaluable for managing distributed denial-of-service attacks, where an attacker overwhelms an application server with concurrent requests, causing server failure. Additionally, a load balancer can: Monitor traffic and block malicious content Reduce impact by dispersing attack traffic across multiple backend servers Direct traffic through network firewall groups for reinforced security 4) Application Performance Load balancers enhance application performance by optimizing response times and minimizing network latency. 
They facilitate several crucial tasks to: Elevate application performance by equalizing load across servers Lower latency by routing client requests to proximate servers Guarantee reliability and performance of both physical and virtual computing resources Load Balancing Algorithms Various algorithms, such as Round Robin, Weighted Round Robin, and Least Connections, are employed for load balancing, each serving different purposes and scenarios. 1. Static Load Balancing 1) Round Robin Method This method allocates client requests across servers in turn. It is apt when servers share identical specifications and the connections (sessions) with the server are transient. Example: For servers A, B, and C, the rotation order is A → B → C → A. 2) Weighted Round Robin Method This assigns weights to each server and prioritizes the server with the highest weight. When servers have varied specifications, this method increases traffic throughput by assigning higher weights to more capable servers. Example: Server A's weight=8; Server B's weight=2; Server C's weight=3. Hence, out of every 13 requests, 8 are assigned to Server A, 2 to Server B, and 3 to Server C. 3) IP Hash Method Here, the load balancer hashes the client IP address, converting IP addresses to numbers and mapping them to distinct servers. This method assures users are consistently directed to the same server. 2. Dynamic Load Balancing 1) Least Connection Method This method directs traffic to the server with the fewest active connections, presuming each connection demands identical processing power across all servers. 2) Least Response Time Method This considers both the current connection status and server response time, steering traffic to the server with the minimal response time. It is suitable when servers have disparate available resources, performance levels, and processing data volumes. Among servers that meet the criteria, the one with the shorter response time is chosen, even over a server that is merely idle.
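The selection rules above can be sketched in a few lines of Python. This is a toy illustration of each algorithm's logic only; the server names and metrics are invented, and real load balancers (including AWS's) are far more sophisticated:

```python
from itertools import cycle

# Round robin: hand out servers in a fixed rotation (A → B → C → A ...)
servers = ["A", "B", "C"]
rotation = cycle(servers)

def round_robin():
    return next(rotation)

# Weighted round robin: repeat each server according to its weight
weights = {"A": 8, "B": 2, "C": 3}
weighted_pool = cycle([s for s, w in weights.items() for _ in range(w)])

def weighted_round_robin():
    return next(weighted_pool)

# IP hash: the same client IP always maps to the same server
def ip_hash(client_ip):
    return servers[sum(int(p) for p in client_ip.split(".")) % len(servers)]

# Least connection: pick the server with the fewest active connections
def least_connection(active):
    return min(active, key=active.get)

# Least response time: pick the server currently answering fastest
def least_response_time(response_ms):
    return min(response_ms, key=response_ms.get)

first_three = [round_robin() for _ in range(3)]                   # ['A', 'B', 'C']
sticky = ip_hash("203.0.113.7")                                   # 'C' for this IP
busiest_avoided = least_connection({"A": 12, "B": 3, "C": 7})     # 'B'
fastest = least_response_time({"A": 40.0, "B": 25.5, "C": 31.2})  # 'B'
```

With the example weights 8/2/3, thirteen calls to `weighted_round_robin()` yield exactly 8 A's, 2 B's, and 3 C's, matching the weighted round robin example above.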
The least response time algorithm is employed by the load balancer to ensure prompt service for all users. Types of AWS Load Balancer 1. ALB (Application Load Balancer) Complex modern applications often operate on server farms, each composed of multiple servers assigned to specific application functions. An Application Load Balancer (ALB) redirects traffic after examining the request content, such as HTTP headers or SSL session IDs. For instance, consider an e-commerce application with a product directory, shopping cart, and checkout functionality. Coupled with an ALB, it can dispense content like images and videos without a sustained user connection: when a user searches for a product, the ALB directs the search request to a server where maintaining the user connection is not mandatory. Conversely, the shopping cart, which necessitates maintaining multiple client connections, transmits the request to a server capable of long-term data storage. The ALB facilitates application-level load balancing, apt for HTTP/HTTPS traffic. It is an L7-based load balancer and supports SSL. 2. NLB (Network Load Balancer) A Network Load Balancer (NLB) operates by analyzing IP addresses and other network data to efficiently direct traffic. It allows you to trace the origin of your application traffic and allocate static IP addresses to multiple servers. The NLB uses both static and dynamic load balancing methods to distribute server load effectively. It's an ideal solution for scenarios demanding high performance, capable of managing millions of requests per second while maintaining low latency. It's especially adept at handling abrupt increases and fluctuations in traffic, making it particularly useful for real-time streaming services, video conferencing, and chat applications where establishing and maintaining a smart, optimized connection is crucial. In such cases, utilizing an NLB ensures effective management of connections and maintenance of session persistence.
It conducts network-level load balancing, suitable for TCP/UDP traffic. It is an L4-based load balancer. 3. ELB (Elastic Load Balancer) Elastic Load Balancing (ELB) automatically distributes incoming traffic among various targets, such as EC2 instances, containers, and IP addresses, across multiple Availability Zones. With ELB, the load on both L4 and L7 can be controlled. If the primary address of your server changes, a new load balancer must be created and a target group assigned to a single address, making the process more complex and cost-intensive as the number of targets increases. It encompasses the four types of load balancers provided by AWS. It extends substantial scalability and adaptability to cater to diverse needs and environments. Conclusion We have delved into the intricate domains of load balancers and load balancing, recognizing the indispensable role a load balancer plays in moderating website and application traffic and allocating server load to bolster service performance and stability. Particularly within cloud environments like AWS, a plethora of load balancing options and functionalities are available, allowing the implementation of the most suited solution for your services and applications. Such technological advancements empower us to offer quicker and more reliable services, culminating in enhanced user experience and customer contentment, thus forging the path to business success. Links What is Load Balancing? - Load Balancing Algorithm Explained - AWS Load Balancer - Amazon Elastic Load Balancer (ELB) - AWS What is an Application Load Balancer? - Elastic Load Balancing What is a Network Load Balancer? - Elastic Load Balancing What is Elastic Load Balancing? - Elastic Load Balancing

  • How to use AWS Pricing Calculator in 10 minutes

Written by Hyojung Yoon You may be wondering how much it will cost you to move to the AWS cloud, or you may be hesitant to move because you're afraid you'll make a mistake and end up paying more. Well, listen up, because we've got the answer to those worries. Over the next 10 minutes, I'm going to show you how to estimate your AWS costs with the AWS Pricing Calculator. First, let's take a quick look at AWS's pricing models, and then we'll walk through how to calculate your costs, so let's get started! Benefits and Features AWS Pricing Model On-Demand Reserved Instances Spot Instances Savings Plans How to use AWS Pricing Calculator AWS Pricing Calculator Q&A Ways to save money on AWS Conclusion 1. Benefits and Features 1) Transparent pricing See the math behind the price for your service configurations. View prices per service or per group of services to analyze your architecture costs. 2) Share your estimate Save each estimate's unique link to share or revisit directly through your browser. Estimates are saved to AWS public servers. 3) Hierarchical estimates See and analyze service costs grouped by different parts of your architecture. 4) Estimate exports Export your estimate to a .csv, .pdf, or .json file to quickly share and analyze your proposed architecture spend. 2. AWS Pricing Models 1) On-Demand The most basic pricing option: you pay for what you use. Users use it when they need it and are only charged for what they use, making your business more elastic. Choose this if you're primarily using resources on a temporary basis or for testing purposes. 📌Pros: Great for unpredictable workloads, flexible resource management 📌Cons: Most expensive pricing option, costs can add up quickly 2) Reserved Instances (RIs) Reserved Instances, also known as RIs, are an option to pay for capacity upfront and receive a discount by committing to use it for one or three years.
When you reserve an instance, you pay the committed amount regardless of usage, and of the three payment options, the full upfront payment results in the largest discount. With discounts of up to 75% off the same on-demand capacity, reserved instances are available for EC2, RDS, and ElastiCache. 📌Pros: Cost savings, easier to set up and maintain than spot instances. 📌Cons: You pay for reserved capacity regardless of usage, and RIs can be difficult to manage if you have a lot of instances, so many companies use AWS partners to manage their RIs. (If you reserve 100 instances through RI but only use 70 due to service scaling or other reasons, you'll still have to pay for 100 instances for the duration of your contract.) 3) Spot Instances AWS provisions more resources than customers need, so there is usually spare compute capacity available. Users can purchase these extra instances at a discounted hourly rate with no prior commitment, at up to 90% off. However, this method is less reliable: when AWS needs the capacity back, the spot instances in use will be terminated. Therefore, it is recommended for performing specific jobs rather than hosting a database or server. 📌Pros: Maximum cost savings, and the spare compute capacity allows you to scale quickly. 📌Cons: Very unreliable, as instances can be terminated at any time (but you will be warned 2 minutes before the instance is terminated) 4) Savings Plans (SP) A model that allows you to reduce your payment by up to 72% compared to the on-demand price. You commit for 1 or 3 years based on your hourly usage. The commitment applies regardless of instance class or size, and if you go over your commitment, you'll be charged the on-demand rate for the excess. 📌Pros: You get flexibility and convenience, regardless of instance class or size.
📌Cons: You have to pay a fixed amount regardless of usage, and you can't cancel, refund, or change after purchase. ※ LifeHack: It is pretty much impossible to refund or change RI or SP after purchase, so we strongly recommend you to purchase very carefully. 3. How to use AWS Pricing Calculator 1) Create estimate You can change the language setting in the top right corner, and the [Create estimate] Button shows you how to calculate your costs. 2) Add Services Search and add AWS services that you need. In this case, we'll calculate the cost of Amazon EC2, the most common AWS service. 3) Configure Services ① Choose a Region AWS has slightly different costs for each region, so choose the region where your service will be deployed. ※ It's a good idea to write a description (ex : dev_ if this is a development server) because it's convenient to see a summary of estimated costs later. ② Configure Specifications Choose the one that fits your operating system. Let's use Linux as an example, as it's the most popular. Let's arbitrarily put in that we're using 2 instances and calculate. ③ Choose an Instance Type Next, choose an instance type. We'll choose t3.medium, which is the most popular type. ④ Select a Pricing Option The pricing option is set to Savings Plan by default, so we'll change it to On-Demand. ⑤ Amazon EBS Select General Purpose SSD (gp3) from Amazon EBS. General-purpose SSD (gp3) is the most recent version and is less expensive than gp2. IOPS and throughput are set to the default values of 3000 and 125, and the storage size is arbitrary. For this article, we set it to 100 GB. When you select [Save and view summary], you can see the estimated cost of the service, as shown below. In my estimate, you can see the upfront cost, monthly cost, and 12-month cost. If you are using multiple services together, you can add additional services such as EC2, ELB, etc. to be included in the service through the [Add Service] button. 
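The summary the calculator produces for a walkthrough like the one above can be approximated by hand. The sketch below assumes illustrative rates (an on-demand hourly price of $0.0416 for t3.medium and $0.08 per GB-month for gp3, roughly the us-east-1 figures at the time of writing, and ignoring data transfer); check the calculator or the EC2 pricing page for current prices:

```python
HOURS_PER_MONTH = 730  # the calculator's normalized month: (365 * 24) / 12

def ec2_monthly_estimate(instances, hourly_rate, ebs_gb_per_instance, gb_month_rate):
    """Rough monthly cost: compute hours plus gp3 storage, no data transfer."""
    compute = instances * HOURS_PER_MONTH * hourly_rate
    storage = instances * ebs_gb_per_instance * gb_month_rate
    return round(compute + storage, 2)

# 2 x t3.medium (On-Demand, Linux) with 100 GB gp3 each, as configured above
estimate = ec2_monthly_estimate(2, 0.0416, 100, 0.08)  # ≈ 76.74 USD
```

The same helper reproduces the calculator's 730-hour convention: one instance at $0.10/hour with no storage comes out to $73.00 per month, the figure the official FAQ uses.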
4) Export My Estimate You can save it as a CSV or PDF file, or you can share it via the [Share] button with a URL, just like you would a saved estimate form. Another option is to save the estimate file before the service. 4. AWS Pricing Calculator Q&A Q1: Why is my estimate different than my actual bill? A1: The AWS Pricing Calculator estimates service costs based on a normalized monthly time frame. The Calculator assumes there are 730 hours in a month ((365 days * 24 hours) / 12 months in a year), which may be less or more than the actual hours in the current billing period. For example, if you use an On-Demand EC2 instance that costs 0.10 USD an hour, you will see the following variances between your estimated cost and your actual monthly costs: Monthly cost estimated by the AWS Pricing Calculator: 730 hours x 0.10 USD = 73.00 USD Actual cost in February of a non-leap year: 28 days x 24 hours x 0.10 USD = 67.20 USD Actual cost in February of a leap year: 29 days x 24 hours x 0.10 USD = 69.60 USD Actual cost in November: 30 days x 24 hours x 0.10 USD = 72.00 USD Actual cost in October: 31 days x 24 hours x 0.10 USD = 74.40 USD If you use the same On-Demand EC2 instance for a year, your estimated and actual costs are the same: 12-month total cost estimated by the AWS Pricing Calculator: 730 hours x 12 months x 0.10 USD = 876.00 USD Actual total for a non-leap year: 8760 hours x 0.10 USD = 876.00 USD Q2. If I purchased EC2 RI with the full upfront payment option, why do I still have monthly payments? A2. This is due to Amazon EBS. Amazon EBS is a service that provides block-level storage volumes that can be used with EC2 instances. EBS is not eligible for RI, so even if you pay for the full EC2 RI upfront, you will still have monthly payments for EBS. 5. AWS Cost Optimization with SmileShark You can easily optimize your AWS cost with SmileShark.
▶ Talk to SmileShark sales experts: ▶ Learn about SmileShark's CloudOps service: ▶ How to lower AWS cost through SmileShark: Conclusion In this blog, we've discussed AWS's pricing model and cost calculator, and we hope you'll use the AWS Pricing Calculator to help you estimate your costs, one of the most important factors in your AWS cloud journey. Links AWS Pricing Calculator Amazon EC2 Spot - Save up-to 90% on On-Demand Prices Reserved Instances Cloud Cost Savings - Savings Plans - Amazon Web Services Calculator FAQ

bottom of page