
  • AWS Case Study - Happy Moonday

    How was Happy Moonday able to expand their business while focusing on customer satisfaction?

    Happy Moonday Inc.

    Happy Moonday is a women's healthcare startup that aims to "help more women live healthier lives." They recognize the menstrual cycle as one of the important vital signs of women's health and a good starting point for healthcare, and they offer a variety of ways to make managing it easier and healthier. Happy Moonday Inc. runs HappyMoonday, a brand that eases menstrual periods with products made with women in mind, and Heymoon, an all-in-one women's health app. Happy Moonday Inc. is a femtech leader that provides excellent services, commerce, and content in the women's health field and is leading the market with unrivaled competitiveness.

    Name: Happy Moonday Inc.
    Area: Women's healthcare
    Estab.: July 2017
    Site: https://join.happymoonday.com/

    Various Needs Arise as the Business Grows

    Challenges

    Q. What were the challenges before meeting SmileShark?
    A. Happy Moonday Inc. runs HappyMoonday, a brand that provides high-quality menstrual products that make periods easier, and Heymoon, an all-in-one women's health app. Because we run both a tangible and an intangible product brand ourselves, our infrastructure has grown increasingly complex. As the user base and offerings of both brands grew together, servers and databases needed to be added and expanded, and the volume and frequency of content delivery, such as marketing images, increased. For Heymoon in particular, as the company expanded from accurately predicting menstrual cycles with its unique algorithm to offering shopping for women's health and wellness products, it also needed to support commerce technology, including product discovery, payments, and promotions. As the business grew to embrace broader features, so did its technology environment, all of which led to increased costs.

    Why Happy Moonday Chose SmileShark

    Q. Then what made you choose SmileShark?
    A. We wanted to run a PoC to ensure we were providing a quality service to our customers, so we decided to adopt the AWS Proof of Concept (PoC) program, which helped us discover the value of the solution, including credit offerings, and we compared several consulting partners. SmileShark was a high priority for us because they offered AWS credits and also a deep discount on CloudFront, which had been cost prohibitive for us. SmileShark's experience working with startups was also a key reason we chose them. We partnered with SmileShark because we knew they could help us solve our AWS challenges, and because they have an in-depth understanding of how startups work and could closely support our needs.

    Successful Cost and Operational Optimization through Keen Support from SmileShark

    Rational Decision-Making through the AWS PoC Program

    Q. Have you seen any improvements through the AWS PoC program?
    A. First, as mentioned earlier, the AWS credits funded through the PoC program allowed us to run an internal learning environment where we experimented with ways to better support various services. Advanced testing of EKS in particular comes with technical resource and cost burdens. Thanks to AWS and SmileShark, we were able to test EKS on our own. The test showed that ECS was the right service for Happy Moonday, and we ultimately decided to adopt ECS.

    Q. How was SmileShark's technical support?
    A. SmileShark's technical support was fast and accurate. As a startup, it's often difficult to afford a separate support plan. As an AWS consulting partner, SmileShark has been very responsive whenever we needed technical support from AWS.
    AWS Cost and Operational Optimization

    Q. You talked about the CloudFront discount above; have you seen any cost optimization?
    A. In fact, the biggest savings we're seeing is in CloudFront costs. Comparing similar traffic volumes before and after the SmileShark collaboration, we've been able to reduce our costs by about 65%, which has significantly lowered our content delivery burden and allowed us to focus on other opportunities to engage and connect with our customers. Additionally, SmileShark provided an optimization report within just two weeks of signing the partnership. They suggested cost-saving tactics applicable to our existing infrastructure as well as complementary measures worth introducing, and based on that advice, we were able to further reduce costs by upgrading the database minor version and moving Amazon EBS volumes to the latest generation (a sketch of what such an upgrade can look like follows at the end of this case study).

    Customized Support and Flexible Partnerships

    Q. Lastly, do you have any comments that might be helpful to customers considering SmileShark?
    A. We would like to recommend SmileShark to anyone looking for a partner who stays close to the company and can help with AWS concerns. If you are hesitant because you have no experience collaborating with AWS partners, that means you need a partner who can maintain close dialogue and move flexibly, and SmileShark is the perfect fit. If you are a startup in particular, I recommend working with SmileShark.

    Happy Moonday's Future Plan

    Happy Moonday developed the Heymoon Indonesia Edition and Global Edition, with menstrual cycle prediction as an essential feature, and launched them on the Apple App Store in Indonesia and Singapore, respectively, in March 2024. Heymoon has rapidly grown into a popular app, with more than one in three women aged 15-24 in South Korea signing up, and it is now bringing its benefits to women overseas. We plan to make Heymoon Shopping, a women's health and wellness store that was previously available only in-app in Korea, accessible on the web to increase accessibility. We expect this to strengthen our competitiveness by organically weaving commerce and service into an integrated experience. As the services and infrastructure environments we operate are diverse, we also plan to keep investing in maintaining the technology well.

    Detailed Services Applied to Happy Moonday

    Happy Moonday Architecture

    Introduced SmileShark Services
    SmileShark CloudFront | Fast and secure developer-friendly content delivery network
    SmileShark Tech Support | Get expert guidance and assistance achieving your objectives
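    For readers wondering what the EBS generation upgrade mentioned above looks like in practice, here is a minimal boto3 sketch; the volume ID and the gp3 target are illustrative assumptions, since the case study does not name the actual volumes or types involved.

    # Hypothetical sketch: moving an EBS volume to the current gp3 generation.
    # The volume ID is a placeholder; modify_volume changes the type in place,
    # without detaching the volume.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-northeast-2")

    response = ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",  # placeholder ID
        VolumeType="gp3",                  # current-generation general purpose SSD
    )
    print(response["VolumeModification"]["ModificationState"])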

  • AWS Certification Types & Tiers : Updated AWS Certification in 2024

    AWS Certification Types & Tiers: Updated AWS Certifications - Retirements and New Arrivals

    Written by Hyojung Yoon

    Hello! Today we're going to learn about AWS certifications. Amazon Web Services (AWS) is a cloud computing platform provided by Amazon and one of the most popular cloud service providers in the world. According to Michael Yu, Client Market Leader of Skillsoft's Technology and Developer Portfolio, "The skyrocketing value of cloud-related certifications is not a new phenomenon," indicating that more companies are using cloud computing platforms. Among cloud-related certifications, AWS Certification is known to validate the technical skills and cloud expertise needed to advance your career and scale your business. There are a few changes to AWS certifications this year, so we've put together a blog to help guide you. Let's get started!

    AWS Certification Overview

    1. What are AWS Certifications?
    AWS certifications are programs that allow you to demonstrate your knowledge and expertise in using the Amazon Web Services (AWS) cloud computing platform. AWS certifications focus on a variety of areas, including cloud architecture, development, and operations, and are organized into different levels. Certification exams are administered in multiple languages at testing centers around the world.

    2. Types of AWS Certification
    AWS offers granular certifications for different roles and skill levels. These certifications are divided into four tiers: Foundational, Associate, Professional, and Specialty.

    3. Certification Validity
    Certifications are valid for three years from the date they are earned, so be sure to keep them up to date before they expire. For Foundational and Associate level certifications, you can fulfill the renewal requirements by passing a higher-level exam or by renewing the certification itself.

    4. Changes for 2024

    New Certification
    AWS Certified Data Engineer - Associate, which has been available in beta since November of last year, will be available in its standard version starting in April 2024. This certification validates your skills and knowledge of core data-related AWS services, your ability to implement data pipelines, monitor and resolve issues, and optimize cost and performance according to best practices.

    Retiring Certifications
    Three certifications are retiring this year. AWS Certified Data Analytics - Specialty will not be available for examination after April 9, 2024, and AWS Certified Database - Specialty and AWS Certified SAP on AWS - Specialty will not be available for examination after April 30, 2024. If you hold these certifications, they remain valid for three years from the date you earned them, and you can renew them by retaking the exams on or before April 8, 2024 and April 29, 2024, respectively.

    Tiers of AWS Certification

    Foundational AWS Certification

    1. Cloud Practitioner (CLF)
    Target Candidates
    - Individuals with a basic understanding of the AWS cloud platform
    - Individuals with no IT or cloud background transitioning into a cloud career
    Exam Overview
    - Cloud Concepts (24%), Security and Compliance (30%), Cloud Technology and Services (34%), Billing, Pricing, and Support (12%)
    - Cost $100 | 65 questions | 90 minutes

    Associate AWS Certifications

    1. SysOps Administrator (SOA)
    Target Candidates
    - 1 year of experience with deployment, management, networking, and security on AWS
    Exam Overview
    - Note: the AWS Certified SysOps Administrator - Associate exam will not include exam labs until further notice.
    - Monitoring, Logging, and Remediation (20%), Reliability and Business Continuity (16%), Deployment, Provisioning, and Automation (18%), Security and Compliance (16%), Networking and Content Delivery (18%), Cost and Performance Optimization (12%)
    - Cost $150 | 65 questions | 130 minutes

    2. Developer (DVA)
    Target Candidates
    - 1+ years of hands-on experience in developing and maintaining applications by using AWS services
    Exam Overview
    - Development with AWS Services (32%), Security (26%), Deployment (24%), Troubleshooting and Optimization (18%)
    - Cost $150 | 65 questions | 130 minutes

    3. Solutions Architect (SAA)
    Target Candidates
    - 1+ years of hands-on experience designing cloud solutions that use AWS services
    Exam Overview
    - Design Secure Architectures (30%), Design Resilient Architectures (26%), Design High-Performing Architectures (24%), Design Cost-Optimized Architectures (20%)
    - Cost $150 | 65 questions | 130 minutes

    4. Data Engineer (DEA)
    Target Candidates
    - 2+ years of experience in data engineering and 1+ years of hands-on experience with AWS services
    Exam Overview
    - Demand for data engineer roles increased by 42% year over year, per a Dice tech jobs report
    - Data Ingestion and Transformation (34%), Data Store Management (26%), Data Operations and Support (22%), Data Security and Governance (18%)
    - Cost $75* | 85 questions | 170 minutes
      *Beta exams are offered at a 50% discount from standard exam pricing

    Professional AWS Certifications

    1. Solutions Architect (SAP)
    Target Candidates
    - 2+ years of experience in using AWS services to design and implement cloud solutions
    Exam Overview
    - Design Solutions for Organizational Complexity (26%), Design for New Solutions (29%), Continuous Improvement for Existing Solutions (25%), Accelerate Workload Migration and Modernization (20%)
    - Cost $300 | 75 questions | 180 minutes

    2. DevOps Engineer (DOP)
    Target Candidates
    - 2+ years of experience in provisioning, operating, and managing AWS environments
    - Experience with the software development lifecycle and programming and/or scripting
    Exam Overview
    - Job listings requiring this certification increased by 52% between Oct 2021 and Sept 2022 (source: Lightcast™, September 2022)
    - SDLC Automation (22%), Configuration Management and IaC (17%), Resilient Cloud Solutions (15%), Monitoring and Logging (15%), Incident and Event Response (14%), Security and Compliance (17%)
    - Cost $300 | 75 questions | 180 minutes

    Specialty AWS Certifications

    1. Advanced Networking (ANS)
    Target Candidates
    - 5+ years of networking experience with 2+ years of cloud and hybrid networking experience
    Exam Overview
    - Network Design (30%), Network Implementation (26%), Network Management and Operation (20%), Network Security, Compliance, and Governance (24%)
    - Cost $300 | 65 questions | 170 minutes

    2. Database (DBS) : Retired on April 30, 2024
    Target Candidates
    - Minimum of 5 years of experience with common database technologies
    - Minimum of 2 years of hands-on experience working on AWS
    Exam Overview
    - This certification will be retired on April 30, 2024. The last day to take this exam is April 29, 2024.
    - Workload-Specific Database Design (26%), Deployment and Migration (20%), Management and Operations (18%), Monitoring and Troubleshooting (18%), Database Security (18%)
    - Cost $300 | 65 questions | 180 minutes

    3. SAP on AWS (PAS) : Retired on April 30, 2024
    Target Candidates
    - 5+ years of SAP experience and 1+ years of experience working with SAP on AWS
    Exam Overview
    - This certification will be retired on April 30, 2024.
    - The last day to take this exam is April 29, 2024.
    - Design of SAP Workloads on AWS (30%), Implementation of SAP Workloads on AWS (24%), Migration of SAP Workloads to AWS (26%), Operation and Maintenance of SAP Workloads on AWS (20%)
    - Cost $300 | 65 questions | 170 minutes

    4. Machine Learning (MLS)
    Target Candidates
    - 2+ years of experience developing, architecting, and running ML or deep learning workloads in the AWS Cloud
    Exam Overview
    - Data Engineering (20%), Exploratory Data Analysis (24%), Modeling (36%), Machine Learning Implementation and Operations (20%)
    - Cost $300 | 65 questions | 180 minutes

    5. Security (SCS)
    Target Candidates
    - 2+ years of hands-on experience in securing AWS workloads
    - 3-5+ years of experience in designing and implementing security solutions
    Exam Overview
    - Threat Detection and Incident Response (14%), Security Logging and Monitoring (18%), Infrastructure Security (20%), Identity and Access Management (16%), Data Protection (18%), Management and Security Governance (14%)
    - Cost $300 | 65 questions | 170 minutes

    6. Data Analytics (DAS) : Retired on April 9, 2024
    Target Candidates
    - 5+ years of experience with common data analytics technologies and 2+ years of hands-on experience working with AWS services to design, build, secure, and maintain analytics solutions
    Exam Overview
    - This certification will be retired on April 9, 2024. The last day to take this exam is April 8, 2024.
    - Collection (18%), Storage and Data Management (22%), Processing (24%), Analysis and Visualization (18%), Security (18%)
    - Cost $300 | 65 questions | 180 minutes

    AWS Certification Paths

    *Zoom in on the image to see the AWS certification paths.

    Above are the top cloud jobs, their responsibilities, and the AWS Certification paths that fit those roles. Choose the job that interests you and start your AWS Certification journey to achieve your career goals!

    ※ Which AWS Certification should I start with?

    New to IT and Cloud: From a non-IT background, switching to a cloud career? Start with AWS Certified Cloud Practitioner, which validates foundational knowledge of the AWS Cloud and its terminology.
    Line-of-Business Roles: In sales, marketing, or another business role? Start with AWS Certified Cloud Practitioner, which validates foundational knowledge of the AWS Cloud and its terminology.
    IT Professionals: Do you have 1-3 years of IT or STEM background? Start with an Associate-level AWS certification that aligns with your role or interests.

    5 AWS Certifications Expected to Be in Higher Demand in 2024

    Surveys and job market analysis on platforms like LinkedIn, Glassdoor, and Indeed show a steady increase in job listings that name AWS certification as a preferred qualification. Based on analysis of IT and cloud computing industry trends, market trends, and demand, the following AWS certifications are expected to be in high demand in 2024.

    1. AWS Solutions Architect - Associate
    AWS Solutions Architect - Associate is a highly regarded certification that is widely applicable in AWS. Holders of the SAA have seen their average salary grow from $155,200 to $160,052 in less than a year, demonstrating the growing recognition and value of the certification in the industry.

    2. AWS Solutions Architect - Professional
    Following the Associate level, the Solutions Architect - Professional certification signifies expertise in complex AWS solutions. Certified professionals report an average salary increase of 27%, further emphasizing the demand and the industry's high regard for the SAP certification.
    SAP is highlighted in industry salary surveys such as Robert Half's and PayScale's as correlating with higher earnings and job titles.

    3. AWS DevOps Engineer - Professional
    Demand for DevOps engineers is expected to increase as organizations adopt DevOps practices to improve efficiency and deployment frequency. They are also in high demand as DevOps becomes more integrated with cloud infrastructure management.

    4. AWS Machine Learning - Specialty
    The explosive growth of AI/ML is driving demand for people skilled in designing, implementing, and maintaining AI/ML solutions. However, due to the AI/ML skills gap, many organizations report difficulty hiring experts. As a result, we expect AWS Machine Learning - Specialty to be more popular than ever before.

    5. AWS Certified Security - Specialty
    According to the Mandiant Cyber Security Forecast 2024, attacks targeting cloud environments will be prevalent this year. In addition, data protection and privacy compliance are becoming increasingly important around the world, including GDPR in Europe and CCPA in California. With the growing need for cloud security skills, the average salary for security professionals with AWS Certified Security - Specialty has increased by about 11% over the past year. (Source: Skillsoft)

    Conclusion

    The AWS certifications introduced in this article demonstrate cloud expertise. Earning certifications from AWS, the leading cloud service provider in the world, is a good way to improve your competitiveness. If you're looking to demonstrate your AWS knowledge in the ever-evolving, fast-paced world of cloud technology, get AWS certified.

    Links
    The 20 Top-Paying IT Certifications going into 2024 - Skillsoft
    AWS Certification - Validate AWS Cloud Skills - Get AWS Certified
    Top 5 Highest-Paying AWS Certifications - Skillsoft

  • Are AWS Certifications worth it? : AWS SA-Professional 3

    Are AWS Certifications Worth It? : AWS Solutions Architect - Professional (SAP) Certification 3

    Written by Minhyeok Cha

    It's been a while since I've written an AWS certification post, so let's get started.

    Question 1. A company has many AWS accounts and uses AWS Organizations to manage all of them. A solutions architect must implement a solution that the company can use to share a common network across multiple accounts. The company's infrastructure team has a dedicated infrastructure account that has a VPC. The infrastructure team must use this account to manage the network. Individual accounts cannot have the ability to manage their own network. However, individual accounts must be able to create AWS resources within the subnet. What combination of actions should the solutions architect perform to meet these requirements? (Choose two.)

    ⓐ Create a transit gateway in the infrastructure account.
    ⓑ Enable resource sharing from the AWS Organizations management account.
    ⓒ Create VPCs in each AWS account within the organization in AWS Organizations. Configure the VPCs to share the same CIDR range and subnets as the VPC in the infrastructure account. Peer the VPCs in each individual account with the VPC in the infrastructure account.
    ⓓ Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share.
    ⓔ Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each prefix list to associate with the resource share.

    Solutions

    This question is about how to manage a shared network across multiple AWS accounts. For example, in the picture above, we have two ordinary accounts and one dedicated to infrastructure. The requirements in the question are:
    - The infrastructure account must be used to manage the network.
    - Individual accounts cannot manage their own network.
    - Individual accounts must be able to create AWS resources within the subnet.

    Since the individual accounts are not allowed to manage the network themselves, the intent is to share the infrastructure account's subnets with accounts 1 and 2 so that they can create resources in them.

    A is wrong: as the architecture in the question shows, there is only one VPC in one account. A transit gateway, as you know, is a service that interconnects multiple VPCs, so it is not a good fit here.
    C is wrong: building the same environment in each account is duplication, not sharing.
    E is wrong: sharing prefix lists via RAM does not give the other accounts the subnets themselves, so they still could not create resources in them.
    D is correct: it directly shares the subnets.

    Therefore, the remaining answers, B and D, are correct, and the scenario can be implemented with AWS Resource Access Manager (RAM).

    Correct Answers: B, D
    💡 B. Enable resource sharing in the AWS Organizations management account.
    💡 D. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU for which you want to use the shared network. Select each subnet that you want to associate with the resource share.
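    To make answer D concrete, a resource share like the one described can be created with a couple of AWS RAM API calls. The following boto3 sketch is illustrative only; the account ID, OU, and subnet ARNs are placeholders.

    # Hypothetical sketch of answer D: sharing subnets from the infrastructure
    # account through AWS Resource Access Manager. All ARNs are placeholders.
    # Note: sharing with AWS Organizations must first be enabled from the
    # management account (that is answer B).
    import boto3

    ram = boto3.client("ram", region_name="us-east-1")

    share = ram.create_resource_share(
        name="shared-network",
        resourceArns=[
            "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0123456789abcdef0",
        ],
        principals=[
            # The OU that should consume the shared subnets.
            "arn:aws:organizations::111111111111:ou/o-exampleorgid/ou-exampleouid",
        ],
    )
    print(share["resourceShare"]["status"])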
    Question 2. A company wants to use a third-party software-as-a-service (SaaS) application. The third-party SaaS application is consumed through several API calls. The third-party SaaS application also runs on AWS inside a VPC. The company will consume the third-party SaaS application from inside a VPC. The company has internal security policies that mandate the use of private connectivity that does not traverse the internet. No resources that run in the company VPC are allowed to be accessed from outside the company's VPC. All permissions must conform to the principles of least privilege. Which solution meets these requirements?

    ⓐ Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint. Associate the security group with the endpoint.
    ⓑ Create an AWS Site-to-Site VPN connection between the third-party SaaS application and the company VPC. Configure network ACLs to limit access across the VPN tunnels.
    ⓒ Create a VPC peering connection between the third-party SaaS application and the company VPC. Update route tables by adding the needed routes for the peering connection.
    ⓓ Create an AWS PrivateLink endpoint service. Ask the third-party SaaS provider to create an interface VPC endpoint for this endpoint service. Grant permissions for the endpoint service to the specific account of the third-party SaaS provider.

    Solutions

    The question specifies connectivity that "does not traverse the internet," which points to PrivateLink, so we can eliminate B and C. Between the two PrivateLink options, the correct answer is A, because we are consulting from the consumer's point of view, not the provider's. Authorizing accounts for the endpoint service, as in D, is the provider's responsibility.

    Correct Answer: A

    We need to set up both the consumer and the provider side for the solution above, so I've set up the following architecture. On the provider VPC side, where the third-party SaaS application runs, you must first create an endpoint service.

    💡 An endpoint service must be fronted by a Network Load Balancer or a Gateway Load Balancer, and the load balancer's health checks should be passing. The output is shown in the picture below; since we created the NLB in advance, we'll skip its creation process.

    1. Provider account - Create an endpoint service
    2. Provider account - Add the consumer IAM ARN as an allowed principal
    3. Consumer account - Enter the name of the endpoint service created in the provider account and send a connection request
    4. Provider account - Accept the connection request
    5. Consumer account - Check the status
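    On the consumer side, step 3 above boils down to creating an interface VPC endpoint that points at the provider's endpoint service, as in answer A. Here is a minimal boto3 sketch; the service name, VPC, subnet, and security group IDs are placeholders.

    # Hypothetical sketch of answer A from the consumer side. The security
    # group limits access to the endpoint, matching the option text.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    endpoint = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )
    # The endpoint stays pending until the provider accepts the request (step 4).
    print(endpoint["VpcEndpoint"]["State"])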
    Question 3. A security engineer determined that an existing application retrieves credentials to an Amazon RDS for MySQL database from an encrypted file in Amazon S3. For the next version of the application, the security engineer wants to implement the following application design changes to improve security:
    ✑ The database must use strong, randomly generated passwords stored in a secure AWS managed service.
    ✑ The application resources must be deployed through AWS CloudFormation.
    ✑ The application must rotate credentials for the database every 90 days.
    A solutions architect will generate a CloudFormation template to deploy the application. Which resources specified in the CloudFormation template will meet the security engineer's requirements with the LEAST amount of operational overhead?

    ⓐ Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.
    ⓑ Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Create an AWS Lambda function resource to rotate the database password. Specify a Parameter Store RotationSchedule resource to rotate the database password every 90 days.
    ⓒ Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Create an Amazon EventBridge scheduled rule resource to trigger the Lambda function password rotation every 90 days.
    ⓓ Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Specify an AWS AppSync DataSource resource to automatically rotate the database password every 90 days.

    Solutions

    This question is looking for a managed service that stores secrets in AWS. The candidates are AWS Secrets Manager and AWS Systems Manager Parameter Store, both of which store key-value data. We need to know both services to solve the problem, but as with any other question, we can start from the customer's requirements:
    - The database must use strong, randomly generated passwords stored in a secure AWS managed service.
    - Application resources must be deployed through AWS CloudFormation.
    - The application needs to rotate the database credentials every 90 days.
    - The solutions architect generates a CloudFormation template to deploy the application.

    B and D are excluded here because periodic credential rotation is a feature of AWS Secrets Manager, and additionally they use resources that are not supported by CloudFormation.

    💡 A Parameter Store RotationSchedule resource does not exist; checking the documentation shows RotationSchedule only under AWS Secrets Manager.
    💡 AWS CloudFormation does not currently support creating a SecureString parameter type.

    That leaves A and C, which really just differ in how the rotation cycle is triggered. The answer is A: we don't need Amazon EventBridge, because Secrets Manager has its own rotation capability.

    Correct Answer: A

    AWS Secrets Manager rotation cycle

    💡 These days IaC tools need to be general purpose, so we don't use CloudFormation very often, but I included this in case there's a surprise CloudFormation question on the AWS exam.

    Conclusion

    I hope the AWS SA certification questions we covered today have been helpful to you. If you have any questions about the solutions, notice any errors, or have additional queries, please feel free to contact us anytime at partner@smileshark.kr.

  • AWS Case Study - TRIBONS

    How did TRIBONS provide uninterrupted shopping mall services to their customers? SmileShark's CloudOps Service

    TRIBONS Inc.

    As an affiliate of LF (formerly LG Fashion), TRIBONS owns famous brands such as DAKS SHIRTS, the industry leader in men's shirts, as well as Notig, Bobcat, and Benovero. TRIBONS is also successfully operating FOMEL CAMELE, a fashion accessories brand targeting women in their twenties and thirties. TRIBONS also has a strong presence in children's apparel; through its "PastelMall" subsidiary, it offers premium children's apparel brands such as Daks kids, Hazzys kids, PETIT BATEAU, BonTon, and K.I.D.S. These brands are available in Korea's major department stores as well as online through Pastel Mall. TRIBONS is constantly striving to provide its customers with quality products.

    Name: TRIBONS Inc.
    Area: Shirt and blouse manufacturing
    Estab.: Jan 2008
    Site: https://www.pastelmall.com/

    Anomalous Service Failures in a Shopping Mall System

    Challenges

    SmileShark: When did the need for SmileShark come up at TRIBONS, and what were the challenges at the time?

    Hyunsoo Jang: We had previously been using an AWS cloud environment through a different partner. However, in 2022, we began to experience difficulties running our shopping mall in the cloud. As the number of customers increased, we were facing anomalous service failures. We were also considering expanding additional services as the system evolved.

    SmileShark: You mentioned that TRIBONS experienced some unusual service failures. Can you tell us what they were?

    Hyunsoo Jang: Certain events, such as the real-time live commerce show 'Parabang', were usually broadcast only on our own mall, but sometimes we had to broadcast simultaneously on other live commerce platforms. In those cases, traffic was about 10 times the usual inflow. On top of that, we also brought in customers through marketing channels such as marketing texts and KakaoTalk Plus Friends, which increased the inflow to about 5 times the usual level. So we aimed for a more stable service.

    Why TRIBONS Chose SmileShark

    SmileShark: Why did you choose SmileShark's CloudOps service?

    Hyunsoo Jang: To solve the problems we were facing, we needed a partner that could be agile and flexible, and we found SmileShark through a referral. SmileShark being recognized as an AWS Rising Star of the Year, and our meetings with SmileShark's CEO and engineers, built trust and convinced us that they could empathize with our problem and commit to supporting us.

    SmileShark: What did you find frustrating about your previous partner?

    Hyunsoo Jang: As mentioned above, we were facing various issues while operating the shopping mall system, and there were many complicated parts that had never been explained well, so we were very disappointed with the previous partner's service. Changing server settings in AWS was not easy due to the absence of internal manpower, and communication was also difficult due to the gap between developers' and system engineers' areas of work. So the things we most hoped for from a new partner were smooth communication and proactive measures. With the previous partner, issues were not shared, which led to confusion from server reboots, maintenance checks, and policy changes during business hours, and there were many unnecessary procedures for responding to issues, so it was important for us to see whether we could improve on that.
    Stabilizing the Infrastructure and a Successful Digital Transformation

    A collaborative partner, not just a request-and-response contractor

    SmileShark: We've heard that TRIBONS' infrastructure issues have stabilized dramatically since implementing SmileShark's CloudOps. What has it really been like?

    Hyunsoo Jang: In the year or so since we have been with SmileShark, we have seen a lot of improvements. We have been able to connect system issue alerts to the collaboration tools we use so we can respond to issues quickly. From time to time, AWS sends out an announcement saying, "There's an issue with a service or a region, and you may experience downtime." The emails go to our contacts within TRIBONS, but they also go to our MSP. It would be nice if the MSP partner we work with shared these with us when we miss them, but unfortunately that little detail was never handled by the previous partner. The shopping mall was supposed to be an uninterrupted system, but we were often getting server error pages (503). SmileShark has provided us with AWS announcements months in advance so that we can plan ahead and say, "We need to address these issues around this time." They also relay urgent announcements in the middle of the day so that we don't miss any issues. TRIBONS doesn't have any outages now; before SmileShark we used to have four to five per quarter.

    SmileShark: What do you think makes SmileShark's CloudOps service different from previous monitoring and operations support and other MSPs?

    Hyunsoo Jang: When an issue arises, they analyze the cause of the problem and explain it in detail in an email, and then again on the phone, so I know exactly what the issue is, and they share their technical opinions and areas for improvement, which is very helpful. Furthermore, in the event of a failure, we are notified within one minute on average, receive prompt feedback from the person in charge, and communicate in real time through a separate communication channel. As a result, we were able to obtain the certification mark just one year after starting the ISMS certification audit project.

    SmileShark: How did SmileShark help TRIBONS with the ISMS certification audit?

    Hyunsoo Jang: During the ISMS audit, there was a part of the architecture that needed to be changed. SmileShark told us that it was a security violation to have the private and development areas in the same zone, so we had to separate them. We discussed this closely with Hosang Kwak, SmileShark's CloudOps team lead, and proceeded with as little disruption to the shopping mall as possible. In fact, even when we changed the architecture, the shopping mall service was not interrupted and the system operated stably. When I asked how to configure the application servers running in EC2, such as Tomcat, in addition to the AWS structure, he responded promptly and took practical measures.

    SmileShark: In addition to running a stable infrastructure, we've heard that communication between developers has improved.

    Hyunsoo Jang: Yes, organizations without system engineer positions end up lacking knowledge such as log analysis and per-server configuration. Communication with MSP partners was also a challenge due to the lack of communication between the teams. This was always a big concern for me because of the different job backgrounds, but I think SmileShark was the only one that worked out well in terms of communication.
    AWS Cost and Operations Optimization Consulting

    SmileShark: So, how was SmileShark's AWS consulting?

    Hyunsoo Jang: We had a cost issue with the CDN service we were using; we thought the fees imposed by the contract were excessive, so we were considering other CDN services, and we consulted with SmileShark about CloudFront, the CDN service provided by AWS, which can be used at a reasonable price without a long-term contract. We confirmed its cost-effectiveness and are considering switching to it this year. We were also having frequent issues with our software configuration management server, so we consulted with SmileShark about AWS services for software configuration management. I told them I wanted to be able to deploy or build servers automatically, and SmileShark explained that AWS has a structure that can automate software configuration management. I expect this would reduce staffing risk and help stabilize the servers. However, the software configuration management server can be critical, so we are still considering it. Consulting with SmileShark helped us make decisions because we were able to put our situation into perspective.

    SmileShark: Thank you. Do you have any comments that might be helpful to customers considering SmileShark?

    Hyunsoo Jang: I would highly recommend SmileShark's CloudOps service to any company or team that doesn't yet have an expert in systems engineering, as SmileShark provides personalized support. SmileShark also helps build, manage, and optimize cloud infrastructure, making it especially useful for teams that don't have the knowledge or manpower to manage the cloud in-house. I would recommend SmileShark as the best AWS partner for building infrastructure, not only for the technical help, but also because SmileShark provides guidance on optimizing costs and increasing operational efficiency. Beyond the numbers, there's something else I've noticed a lot lately, and that's the trust SmileShark shows in its work. SmileShark is always consistent in its guidance and proactive in its solutions, and that matters a great deal to me as a service provider. At a time when we felt overwhelmed by the complexity of the AWS environment, SmileShark reached out and put us at ease, like seeing a lighthouse in a storm.

    Building an Enhanced Security and Gifting System

    TRIBONS' Future Plan

    It has been four years since Pastel Mall (the shopping mall) was launched, and the influx of customers has allowed us to grow the service functionally. While we previously focused on improving the service level, this year we are working on server reinforcement and security to maintain a stable system. Accordingly, we are aiming to obtain the enhanced ISMS-P certification.

    SmileShark: Can you tell us about Gifting, the new service TRIBONS recently launched?

    Hyunsoo Jang: The Pastel Mall Gifting Service is now open. It is a mobile-only service that allows customers to send DAKS shirts and other Pastel Mall products to their loved ones. Gifts can be sent from existing Pastel Mall customers to non-members, any customer can browse a variety of themed products in the dedicated gift shop, and gifts can include a message card with a small sentiment. We hope you enjoy it.

    Detailed Services Applied to TRIBONS

    What is SmileShark's CloudOps?
    SmileShark: Which of SmileShark's CloudOps services did TRIBONS adopt, and what was the collaboration process like?

    Hosang Kwak, CloudOps team lead of SmileShark: CloudOps doesn't just alert customers when something goes wrong with their system; it also analyzes the problem. It's important for us to analyze, find solutions, and provide them to our customers so that they can improve their systems and prevent the same problems from happening again. CloudOps is a collaborative MSP service that doesn't solve every problem at once, but rather works with customers to solve them and grow together. Hyunsoo Jang, TRIBONS' online platform team leader, also has a good understanding of CloudOps, so he authorized us to run various tests over time. And whenever we suggested a solution, he agreed to give it a try, so we are still working well together with the common goal of uninterrupted service.

    TRIBONS Architecture

    What is Shark-Mon?

    Shark-Mon is a monitoring tool that keeps applications and services operating around the clock without interruption, rather than relying on humans watching dashboards in the legacy way. Developed in-house by SmileShark, SharkMon provides the functions needed for cloud operations, including basic protocol monitoring (HTTP, TCP, SSH, DNS, ICMP, gRPC, TLS), an AWS resource usage view, and Kubernetes monitoring, which is emerging as a global trend. It is currently in closed beta for select customers.
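    To give a rough sense of what protocol-level monitoring means in practice, here is a generic Python illustration of TCP and HTTP checks. This is not SharkMon's implementation, just a minimal sketch of the idea.

    # Generic illustration of protocol monitoring: a TCP reachability check
    # plus an HTTP status check. Hosts and URLs are placeholders.
    import socket
    import urllib.request

    def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def check_http(url: str, timeout: float = 3.0) -> bool:
        """Return True if the URL answers with a non-error HTTP status."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400
        except OSError:  # also covers URLError / HTTPError
            return False

    print(check_tcp("example.com", 443), check_http("https://example.com"))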

  • AWS Case Study - INUC

    How did INUC leverage AWS to reduce development time by over 35% and quickly deploy SaaS?

    INUC Inc.

    INUC is a B2B media platform software development company that specializes in video content management system (CMS) services. INUC's video CMS solution features live scheduling, VOD archiving, menu organization and management, as well as web interfaces for each type of content that media managers need. INUC provides various editions (templates) for video meeting minutes, in-house broadcasting, and live commerce, so media managers can simply select the screen they want. Media managers can also choose the appropriate license (Basic/Standard/Enterprise) and cloud service according to each customer's system policy and service scale.

    Name: INUC Inc.
    Area: Software development and supply
    Estab.: Nov 2010
    Site: https://sedn.software/

    Migration of a B2B On-premises Solution to the Cloud

    Challenges

    INUC had been providing on-premises solutions; however, with changes in the market and growing customer demand, the need for cloud adoption became apparent. As its potential customer base expanded from the public sector and enterprises to healthcare and commerce, demand for cloud services in the form of SaaS grew, and INUC expects this to create opportunities for global expansion. In addition, all of INUC's media services were built on Docker, consisting of containers for the API, streaming server, chat, web, storage, and so on. Since a container environment was already in place, the migration to the SaaS model was relatively easy.

    Why SmileShark?

    SmileShark's wide experience and expertise were key attractions. Specifically, SmileShark's solutions and the various suggestions made during meetings helped INUC make quick decisions. INUC expected that SmileShark's experience in Kubernetes deployments and container operations would help it achieve its goal of transforming its CMS services into SaaS within a tight time frame. In fact, SmileShark's prompt technical support helped INUC migrate to the cloud smoothly.

    "Due to the nature of media services, we believe that it is reasonable to have a hybrid form that operates both existing on-premises servers and the cloud from a TCO (total cost of ownership) perspective. The cloud-based B2B SaaS model can be thought of as a content store that carries an independent brand," explained Jason Shin, CEO of INUC Inc.

    Safe and Swift Migration by Adopting ECS

    INUC reliably adopted the Amazon Web Services (AWS) cloud through SmileShark, gaining flexibility and scalability beyond the traditional on-premises model. During the AWS architecture design process, Elastic Load Balancers (ELBs) and multiple Availability Zones were employed to enhance business continuity and customer satisfaction. Network traffic coming into the ELB is automatically distributed across multiple servers, preventing the load from concentrating on one server and ensuring that a problem on one server does not affect the entire service. Additionally, by distributing the infrastructure across two or more Availability Zones, INUC can continue operations without service interruption even if a problem occurs in one Availability Zone. To mitigate data leakage and security risks, INUC organized its infrastructure with public and private subnets, placing critical data and systems in the private subnets, shielded from external threats. This approach has bolstered customer satisfaction and protected INUC's brand value in the long run.
    INUC adopted Amazon Elastic Container Service (ECS) in its AWS environment to simplify and improve the efficiency of deploying, managing, and scaling its Docker container-based applications. ECS significantly shortened time to market by streamlining application deployment and management and allowing developers to concentrate on developing a higher-quality service. To ensure consistent service during traffic spikes, INUC implemented an Auto Scaling group, dynamically managing resources based on usage. Additionally, INUC set the ECS service type to Replica to keep a specified number of tasks running continuously, ensuring the scalability and resilience of the tasks, and configured it to adjust automatically to workload demands (a minimal sketch of such a service definition follows at the end of this case study). Managed services such as ElastiCache, Aurora, and S3 have helped INUC reduce hardware and software maintenance costs, allowing the team to focus more on core business activities. INUC established a fast and efficient development process through AWS services. Supported by AWS and SmileShark, its developers quickly acquired new skills and built cloud-optimized solutions, significantly accelerating INUC's technological innovation.

    Upcoming Development of Intelligent Services Based on STT

    INUC's Next Step

    INUC is currently improving SEDN v2 with communication features and incorporating AI applications based on deep learning algorithms into the cloud. Upcoming intelligent services include video scene analysis based on STT (Speech-to-Text), timestamp extraction and highlight generation, and video keyword search. INUC is improving its media user experience (MX) and aims to create business opportunities with more content IP operators and strengthen its global market presence.

    ※ Click the image above to sign up for the SEDN beta service.

    Used AWS Services
    Amazon Elastic Container Service (ECS)
    Amazon Simple Storage Service (S3)
    Amazon ElastiCache
    Amazon Aurora

    Introduced SmileShark Services
    SmileShark BuildUp | Accurate infra suggestions / Rapid deployment support
    SmileShark Migration | SmileShark guides you through the entire migration to AWS
    SmileShark Tech Support | Get expert guidance and assistance achieving your objectives
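    As a rough illustration of the Replica service type mentioned above, here is a minimal boto3 sketch of such a service definition. The cluster, task definition, and network IDs are placeholders, not INUC's actual configuration.

    # Hypothetical sketch of a REPLICA-type ECS service on Fargate.
    import boto3

    ecs = boto3.client("ecs", region_name="ap-northeast-2")

    service = ecs.create_service(
        cluster="media-cluster",
        serviceName="cms-web",
        taskDefinition="cms-web:1",
        launchType="FARGATE",
        schedulingStrategy="REPLICA",  # keep a fixed number of tasks running
        desiredCount=2,                # ECS replaces tasks that stop or fail
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
            }
        },
    )
    print(service["service"]["status"])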

  • What is AWS Config?

    What is AWS Config?

    AWS Config is a service from Amazon Web Services (AWS) that lets you discover existing AWS resources, record the configuration of third-party resources, export a complete inventory of your resources with all configuration details, and determine how a resource was configured at any point in time. These capabilities can be used for compliance auditing, security analysis, resource change tracking, and troubleshooting.

    Overview

    AWS Config provides a detailed view of the configuration of the AWS resources in your AWS account. More specifically, it monitors settings and tells you whether they match your desired state or potential compliance requirements. This includes how resources are related to each other and how they were configured in the past, so you can see how configurations and relationships change over time.

    How AWS Config Works

    AWS Config Features

    When setting up AWS Config, you can complete the following:

    Resource management
    - Specify the resource types you want AWS Config to record.
    - Set up an Amazon S3 bucket to receive configuration snapshots and configuration history on request.
    - Set up Amazon SNS to send configuration stream notifications.
    - Grant AWS Config the permissions it needs to access the Amazon S3 bucket and the Amazon SNS topic.

    Rules and conformance packs
    - Specify the rules that AWS Config should use to evaluate compliance information for the recorded resource types.
    - Use conformance packs, collections of AWS Config rules and remediation actions that can be deployed and monitored as a single entity in an AWS account.

    Aggregators
    - Use aggregators to get a centralized view of your resource inventory and compliance. An aggregator is an AWS Config resource type that collects AWS Config configuration and compliance data from multiple AWS accounts and AWS Regions into a single account and Region.

    Advanced queries
    - This feature lets AWS users effectively manage and monitor the configuration of resources across multiple accounts and Regions, using complex queries to get the information they need quickly and accurately. Use one of the sample queries, or write your own by referring to the configuration schema of the AWS resource (a minimal query sketch follows at the end of this post).

    How to Use AWS Config

    When you run applications on AWS, you typically use AWS resources, which you must create and manage collectively. As the demand for your application keeps growing, so does the need to keep track of your AWS resources. AWS Config is designed to help you oversee your application resources in the following scenarios:

    Resource management
    To strengthen governance over resource configurations and detect resource misconfigurations, you need fine-grained visibility at any time into what resources exist and how they are configured. You can use AWS Config to be notified whenever resources are created, modified, or deleted, without having to monitor these changes by polling calls to each resource. You can use AWS Config rules to evaluate the configuration settings of your AWS resources. When AWS Config detects that a resource violates the conditions of one of your rules, it flags the resource as noncompliant and sends a notification. AWS Config continuously evaluates resources as they are created, changed, or deleted.

    Auditing and compliance
    With AWS Config, you can access the configuration history of your resources. You can connect configuration changes to the AWS CloudTrail events that caused them. This information gives you the full picture, from details such as who made the change and from which IP address, to the effect of the change on AWS resources and related resources. You can use this information to generate reports over time to help with auditing and compliance assessments.

    Managing and troubleshooting configuration changes
    When you use multiple AWS resources that depend on one another, a configuration change to one resource can have unintended consequences for related resources. With AWS Config, you can see how a resource you intend to modify is related to other resources and assess the impact of the change. You can also use the historical configurations of your resources provided by AWS Config to troubleshoot issues and restore access to the most recent known-good version of a problematic resource.

    Security analysis
    To analyze potential security weaknesses, you need detailed historical information about your AWS resource configurations, such as the AWS IAM permissions granted to your users or the Amazon EC2 security group rules that control access to your resources. While AWS Config is recording, you can view the IAM policies assigned to a user, group, or role at any time. This information lets you determine the permissions that were granted to a user at a specific point in time. You can also use AWS Config to view the configuration of your EC2 security groups, including the port rules that were open at a specific time. This information can help you determine whether a security group blocked incoming TCP traffic to a specific port.

    Related Links
    AWS Config Features
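    As promised above, here is a minimal advanced-query sketch using boto3; the query lists recorded EC2 instances and can be adapted to other resource types and properties.

    # Minimal AWS Config advanced-query sketch: list running EC2 instances
    # recorded by AWS Config. The SELECT expression uses the documented
    # Config query syntax.
    import boto3

    config = boto3.client("config")

    result = config.select_resource_config(
        Expression=(
            "SELECT resourceId, resourceType, configuration.instanceType "
            "WHERE resourceType = 'AWS::EC2::Instance'"
        )
    )
    for row in result["Results"]:
        print(row)  # each result row is returned as a JSON string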

  • Are AWS Certifications worth it? : AWS SA-Professional 2

    Are AWS Certifications Worth It? : AWS Solutions Architect - Professional (SAP) Certification 2

    Written by Minhyeok Cha

    Continuing from our last discussion, we further explore AWS certifications, focusing on the Solutions Architect - Professional (SAP) exam, specifically how its questions relate to practical use in the console and in architectural structures.

    Question 1. A company is running a two-tier web-based application in its on-premises data center. The application layer consists of a single server running a stateful application, connected to a PostgreSQL database running on a separate server. Anticipating significant growth in the user base, the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing. Which solution provides a consistent user experience while allowing scalability for the application and database layers?

    ⓐ Enable Aurora Auto Scaling for Aurora replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
    ⓑ Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with a round-robin routing algorithm and sticky sessions enabled.
    ⓒ Enable Aurora Auto Scaling for Aurora replicas. Use an Application Load Balancer with round-robin routing and sticky sessions enabled.
    ⓓ Enable Aurora Auto Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.

    Solutions

    In this question, the answer is apparent just by looking at the options. Aurora Auto Scaling is a feature intended for replicas, not writers, so options B and D are eliminated. Aurora Auto Scaling adjusts the number of Aurora replicas in an Aurora DB cluster using scaling policies. The routing algorithm is also key: a Network Load Balancer does not use a least outstanding requests routing algorithm, which eliminates option A and leaves C as the correct answer.

    Answer: C

    💡 Load balancer nodes receiving connections in a Network Load Balancer use the following process:
    1. Use a flow hash algorithm to select a target from the target group for the default rule. The algorithm is based on:
    - Protocol
    - Source IP address and port
    - Destination IP address and port
    - TCP sequence number
    2. Individual TCP connections are routed to a single target for the duration of the connection. TCP connections from the same client can be routed to different targets, because the source port and sequence number differ.

    However, since this blog's main focus is practical usage, let's delve into the architecture and console settings based on the content of this question. The problem describes a traditional two-tier web-based application, commonly used in low-traffic scenarios, in which a client talks to a server that uses a database directly. Reading further, the customer is expected to grow significantly, so from a solutions architect's perspective, transitioning to a three-tier architecture is necessary. The migration mentioned can be implemented as follows. The round-robin weights are set at a 50:50 ratio, since the question does not specify them.

    Let's now check the console operations together.

    Application Load Balancer operations: These settings are configured under Load Balancer - Target Group - Attributes.
    - Round-robin settings
    - Sticky session settings: sticky sessions use cookies to bind traffic to specific servers. Load-balancer-generated cookies are the default; application-based cookies are set by the servers behind the load balancer.
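    The same target group settings can also be applied through the API. Here is a minimal boto3 sketch with a placeholder target group ARN; the cookie duration is an illustrative assumption.

    # Hypothetical sketch of the console settings above as API calls:
    # round-robin routing plus load-balancer-generated-cookie stickiness.
    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    elbv2.modify_target_group_attributes(
        TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/web/0123456789abcdef",
        Attributes=[
            {"Key": "load_balancing.algorithm.type", "Value": "round_robin"},
            {"Key": "stickiness.enabled", "Value": "true"},
            {"Key": "stickiness.type", "Value": "lb_cookie"},
            {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
        ],
    )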
    Aurora Auto Scaling operations: Use the "Add Auto Scaling" option on the replicas in RDS to create a reader instance. Before creation, configure the Auto Scaling policy by clicking the button as shown above. Note that even if multiple policies are applied, a scale-out is triggered as soon as any one policy is satisfied.

    ※ cf. Routing algorithms for each ELB type:

    For Application Load Balancers, load balancer nodes receiving requests use the following process:
    1. Evaluate listener rules in priority order to determine the applicable rule.
    2. Select a target from the target group for the rule action using the configured routing algorithm. The default routing algorithm is round robin. Even if a target is registered with multiple target groups, routing is performed independently for each target group.

    For Network Load Balancers, load balancer nodes receiving connections use the following process:
    1. Use a flow hash algorithm to select a target from the target group for the default rule, based on: protocol, source IP address and port, destination IP address and port, and TCP sequence number.
    2. Route individual TCP connections to a single target for the life of the connection. TCP connections from the same client can be routed to different targets, because the source ports and sequence numbers differ.

    For Classic Load Balancers, load balancer nodes receiving requests select registered instances using:
    - The round-robin routing algorithm for TCP listeners
    - The least outstanding requests routing algorithm for HTTP and HTTPS listeners

    Weighted settings: Though not mentioned in the question, traffic weighting is a key feature of load balancers.
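    Under the hood, Aurora replica Auto Scaling is driven by Application Auto Scaling. Here is a minimal boto3 sketch of registering a cluster and attaching a target-tracking policy; the cluster name and capacity limits are placeholders.

    # Hypothetical sketch of Aurora replica Auto Scaling via Application
    # Auto Scaling. As noted above, satisfying any one policy is enough
    # to trigger a scale-out.
    import boto3

    autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

    autoscaling.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:my-aurora-cluster",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=4,
    )

    autoscaling.put_scaling_policy(
        PolicyName="aurora-replica-cpu",
        ServiceNamespace="rds",
        ResourceId="cluster:my-aurora-cluster",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,  # scale out above 70% average reader CPU
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
            },
        },
    )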
    Question 2. A retail company must provide a series of data files to another company, its business partner. These files are stored in an Amazon S3 bucket belonging to Account A of the retail company. The business partner wants one of their IAM users, User_DataProcessor, from their own AWS account (Account B) to access the files. What combination of steps should the company perform to enable User_DataProcessor to successfully access the S3 bucket? (Select two.)

    ⓐ Enable CORS (Cross-Origin Resource Sharing) for the S3 bucket in Account A.

    ⓑ Set the S3 bucket policy in Account A as follows:
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::AccountABucketName/*"
    }

    ⓒ Set the S3 bucket policy in Account A as follows:
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor"
      },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::AccountABucketName/*"
      ]
    }

    ⓓ Set the permissions for User_DataProcessor in Account B as follows:
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::AccountABucketName/*"
    }

    ⓔ Set the permissions for User_DataProcessor in Account B as follows:
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor"
      },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::AccountABucketName/*"
      ]
    }

    Solutions

    This question revolves around how the IAM user in Account B should be granted policies to access files in a bucket in Account A. The S3 service allows object owners to grant users from other accounts permission to access their objects. There's no need for Account B to access Account A's console; only resource access is necessary, so the IAM policy in Account B does not need a Principal element. Instead, the S3 bucket must be opened to external account access. Therefore, the correct answers are the S3 bucket policy that grants the S3 permissions to Account B's user via a Principal:

    Option C
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor"
      },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::AccountABucketName/*"
      ]
    }

    and the IAM policy in Account B specifying the S3 permissions and the resource (Account A's bucket):

    Option D
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::AccountABucketName/*"
    }

    Answer: C, D

    ※ cf. Depending on the type of access you want to provide, permissions can be granted as follows:
    - IAM policies and resource-based bucket policies
    - IAM policies and resource-based ACLs
    - Cross-account IAM roles

    Question 3. A company is running an existing web application on Amazon EC2 instances and needs to refactor the application into microservices running in containers. Separate application versions exist for two different environments, Production and Testing. The application load is variable, but the minimum and maximum loads are known. The solutions architect must design the updated application in a serverless architecture while minimizing operational complexity. Which solution most cost-effectively meets these requirements?

    ⓐ Upload container images as functions to AWS Lambda. Configure concurrency limits for the attached Lambda functions to handle the anticipated maximum load. Configure two separate Lambda integrations within Amazon API Gateway, one for Production and another for Testing.
    ⓑ Upload container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto-scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to route traffic to the ECS clusters.
    ⓒ Upload container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto-scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to route traffic to the EKS clusters.
    ⓓ Create separate environments and deployments for Production and Testing in AWS Elastic Beanstalk. Configure two separate Application Load Balancers to route traffic to the Elastic Beanstalk deployments.

    Solutions

    The scenario involves refactoring an application on existing EC2 instances into container-based microservices, essentially a service migration. We will focus on four key criteria: containers, microservices, serverless architecture, and cost efficiency. Option A, AWS Lambda, is indeed serverless but is not a container service, so it is eliminated. Option D, AWS Elastic Beanstalk, can use containers (Docker images) but is categorized as PaaS rather than serverless, so it is also eliminated. This leaves Option B, ECS, and Option C, EKS. Considering the final criterion of cost efficiency, ECS is the more affordable choice, making B the correct answer.

    Answer: B

    This problem is about constructing a simple architectural solution, so we will skip the console walkthrough.

    Conclusion

    I hope the AWS SA certification questions we covered today have been helpful to you. If you have any questions about the solutions, notice any errors, or have additional queries, please feel free to contact us anytime at partner@smileshark.kr.

  • AWS Lambda: The Ultimate Guide for Beginners 2/2

Everything About AWS Lambda: The Ultimate Guide for Beginners 2/2 - Creating Lambda Functions in the Console, Setting Triggers, and Calculating Pricing

Written by Hyojung Yoon

Hello! Today, we will continue to delve deeper into AWS Lambda. In this part, we will practice creating Lambda functions and setting Lambda triggers using the AWS Console. We will also look at the pricing policy of AWS Lambda and learn how to calculate actual costs. Let's begin!

Start AWS Lambda
Creating Lambda Functions in the Console
Writing Lambda Function Code
Executing Lambda Functions
Setting Lambda Trigger
Lambda Trigger + S3
AWS Lambda Pricing
Lambda Pricing Policy
Calculating Lambda Prices
Interpreting Lambda Pricing Calculation
Conclusion

Start AWS Lambda

1. Creating Lambda Functions in the Console
You can create your first function using the AWS Console. Select Lambda within the AWS Console. Press the [ Create function ] button to create a Lambda function. You will be presented with three options at the top.
Author from scratch: Start building a function from the ground up.
Use a blueprint: Utilize AWS-provided templates that can be customized with sample code.
Container image: Specifically for Docker containers.
After making your selection, add a new function name and choose the desired runtime¹.
¹Runtime: The programming language you want to write your Lambda in, such as Node.js, Python, Go, etc.
Permissions specify the rights that will be granted to the Lambda function. Click [ Change default execution role ] to create a new role with the standard Lambda permissions.

2. Writing Lambda Function Code
Review the function you created, here named hjLambda. Scroll down to the function code section. Here, you can select a template or write your own code.

3. Executing Lambda Functions
Before running the Lambda function, we will first perform a test. Select [ Configure test events ] from the test dropdown menu, which opens a code editor for test event configuration. Select create new event, and enter an event name like MyEvent. Keep the default private event sharing setting. From the template list, select hello-world and then click [ Save ]. Click the [ Test ] button and check the console for successful execution. In the execution result tab, confirm that the execution was successful. The function log section displays the logs created by the Lambda function execution and key information reported in the log output. If the test went well, click the [ Deploy ] button to make the function executable.

4. Setting Lambda Trigger
1) Lambda Trigger + S3
We will implement logic using an AWS Lambda function to copy files from one Amazon S3 bucket to another.
※ cf. How can I use a Lambda function to copy files from one Amazon S3 bucket to another?

Step 1: Create the source and destination Amazon S3 buckets.
Open the Amazon S3 console and select create bucket. Create both the source and destination buckets. Here, the name of the source bucket is set to [ hjtestbucket ] and the destination bucket to [ hjtestbucket02 ].

Step 2: Create a Lambda Function
Open the functions page in the Lambda console and create a function. Select the runtime dropdown and choose Python 3.9, then create a Lambda function like the one shown in the picture. Select the code tab and paste the following Python code.
import boto3
import botocore
import json
import os
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3 = boto3.resource('s3')

def lambda_handler(event, context):
    logger.info("New files uploaded to the source bucket.")

    # Extract the object key and source bucket name from the S3 event record
    key = event['Records'][0]['s3']['object']['key']
    source_bucket = event['Records'][0]['s3']['bucket']['name']

    # Replace with the name of your destination bucket (hjtestbucket02 in this example)
    destination_bucket = "destination_bucket"

    source = {'Bucket': source_bucket, 'Key': key}
    try:
        # Copy the uploaded object to the destination bucket under the same key
        response = s3.meta.client.copy(source, destination_bucket, key)
        logger.info("File copied to the destination bucket successfully!")
    except botocore.exceptions.ClientError as error:
        logger.error("There was an error copying the file to the destination bucket")
        print('Error Message: {}'.format(error))
    except botocore.exceptions.ParamValidationError as error:
        logger.error("Missing required parameters while calling the API.")
        print('Error Message: {}'.format(error))

After pasting the code, select [ Deploy ].

Step 3: Create an Amazon S3 Trigger for the Lambda Function
Open the function page in the Lambda console and select [ Add trigger ] from the function overview. Select S3 from the trigger configuration dropdown. Enter the name of the source bucket and select All object create events for the event type. Acknowledge that using the same S3 bucket for both input and output is not recommended, then select Add.

Step 4: Provide AWS IAM Permissions for the Lambda Function's Execution Role
Add IAM permissions like the following policy to the Lambda function's execution role so that it can copy files to the destination S3 bucket. Open the functions page in the Lambda console and click the role name under configuration - execution role. In the IAM console, select [ Add permissions ] and then [ Create inline policy ]. Choose the [ JSON ] option and paste the JSON policy document below.
※ Note
Replace destination-s3-bucket with your S3 destination bucket and source-s3-bucket with your S3 source bucket. Change the /* at the end of the resource ARN to the prefix value needed for your environment to restrict permissions. It is best to grant only the minimum permissions necessary to perform the action. For more details, refer to Granting least privilege.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "putObject",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::destination-s3-bucket/*"
      ]
    },
    {
      "Sid": "getObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::source-s3-bucket/*"
      ]
    }
  ]
}

Select [ Create policy ] to save the new policy.

Step 5: Check if the Lambda Function is Executing Properly
Now, to check if the Lambda trigger is working correctly, upload a file to the source S3 bucket. Click [ Upload ] and check the upload status. Go into the destination S3 bucket and verify that the file has been copied. If the same file is stored there, the function is working properly.

AWS Lambda Pricing

1. Lambda Pricing Policy
Lambda costs are determined by three main factors: the number of requests, execution time, and memory size. Lambda offers 1 million free requests and 400,000 GB-seconds of free compute time per month, which allows small projects or those in the testing phase to use Lambda without additional costs.

2. Calculating Lambda Prices
You can easily calculate Lambda prices using the AWS pricing calculator website. Let's calculate the AWS Lambda fees for 3,000,000 executions per month, each running for 1 second, with 512 MB of memory (0.5 GB).
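Before opening the calculator, it helps to see the arithmetic it performs. A minimal sketch, assuming the us-east-1 x86 on-demand rates of $0.20 per million requests and $0.0000166667 per GB-second (rates vary by region and architecture) and applying the monthly free tier:

# Sketch of the Lambda cost arithmetic; the rates below are assumptions
# (us-east-1, x86, on-demand) and may differ in your region.
requests = 3_000_000        # executions per month
duration_s = 1.0            # seconds per execution
memory_gb = 0.5             # 512 MB

free_requests = 1_000_000   # monthly free tier
free_gb_seconds = 400_000

price_per_request = 0.20 / 1_000_000  # USD
price_per_gb_second = 0.0000166667    # USD

gb_seconds = requests * duration_s * memory_gb  # 1,500,000 GB-seconds
compute_cost = max(gb_seconds - free_gb_seconds, 0) * price_per_gb_second
request_cost = max(requests - free_requests, 0) * price_per_request

print(f"Compute: ${compute_cost:.2f}")               # Compute: $18.33
print(f"Requests: ${request_cost:.2f}")              # Requests: $0.40
print(f"Total: ${compute_cost + request_cost:.2f}")  # Total: $18.73

The AWS pricing calculator walks through the same arithmetic.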
Scroll down to [ Show Details ] in the calculator to see how the pricing is determined. This calculation covers only the base costs, so additional charges may apply, and prices are subject to change; it's best to check the latest information on the AWS official website.

Conclusion
Through this guide, you have learned how to create Lambda functions in the AWS console. This series has also introduced Lambda's pricing policy and calculation methods, giving you the basic steps needed to apply this knowledge to real business scenarios. I hope this experience will be beneficial as you design a variety of cloud services utilizing AWS Lambda.

Links
Copy S3 files to another S3 bucket with Lambda function | AWS re:Post
Invoking Lambda functions - AWS Lambda
Serverless Computing - AWS Lambda Pricing - Amazon Web Services

  • Are AWS Certifications worth it? : AWS SA-Professional

Are AWS Certifications worth it? : AWS Solutions Architect - Professional (SAP) Certification 1

Written by Minhyeok Cha

Today, I've organized some AWS Solutions Architect - Professional (SAP) certification exam questions and worked through them in terms of real-world console operations and architectural structures.

Question 1. A company needs to design a hybrid DNS solution. This solution uses an Amazon Route 53 private hosted zone for the cloud.example.com domain for resources in its VPCs. The company has the following DNS resolution requirements:
On-premises systems must be able to resolve and connect to cloud.example.com.
All VPCs must be able to resolve cloud.example.com.
There is already an AWS Direct Connect connection between the on-premises corporate network and the AWS Transit Gateway.
Which architecture should the company use to meet these requirements with the best performance?

ⓐ Associate the private hosted zone with all VPCs. Create a Route 53 inbound resolver in a shared services VPC. Connect all VPCs to the transit gateway and create forwarding rules on the on-premises DNS server for cloud.example.com pointing to the inbound resolver.
ⓑ Associate the private hosted zone with all VPCs. Deploy Amazon EC2 conditional forwarders in a shared services VPC. Connect all VPCs to the transit gateway and create forwarding rules on the on-premises DNS server for cloud.example.com pointing to the conditional forwarders.
ⓒ Associate the private hosted zone with the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Connect all VPCs to the transit gateway and create forwarding rules on the on-premises DNS server for cloud.example.com pointing to the outbound resolver.
ⓓ Associate the private hosted zone with the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Connect the shared services VPC to the transit gateway and create forwarding rules on the on-premises DNS server for cloud.example.com pointing to the inbound resolver.

Solutions
The key to this question is how to centrally manage DNS for a hybrid cloud using AWS services. Combining the company's requirements, the answer is A. Let's examine it step by step.

Answer: A

Breaking down the DNS requirements in the question:

First, associating the private hosted zone with all VPCs is configured as follows. This association allows each VPC to resolve records in the private hosted zone directly. As seen in the blue box, to use this function you need to set enableDnsHostnames and enableDnsSupport to true in the VPC settings.

Second, establish a connection to the inbound resolver endpoint's IP address via Direct Connect or VPN. This allows on-premises systems to resolve and connect to cloud.example.com. Assuming DX or VPN is set up, implementing the Route 53 Resolver endpoints results in the following architecture.

Using this architecture, you can create inbound and outbound endpoints (specified per VPC) and associate the designated VPCs with the Route 53 private hosted zone using the first method. By completing this task, you can verify that all VPCs (though each needs to be associated separately) and the on-premises network can resolve the domain through the AWS Transit Gateway and DX (or VPN).

※ cf. You can simply check the connected domain using the following commands. Use the telnet command to confirm port 53 connectivity to the inbound resolver endpoint's IP address: telnet <inbound-endpoint-IP> 53. To check the validity of domain resolution, complete a domain name lookup from the on-premises DNS server or local host.
For Windows: nslookup cloud.example.com
For Linux or macOS: dig cloud.example.com
If the previous command fails to return records, bypass the on-premises DNS server and send a DNS query directly to the inbound resolver endpoint's IP address.
For Windows: nslookup cloud.example.com <inbound-endpoint-IP>
For Linux or macOS: dig @<inbound-endpoint-IP> cloud.example.com

Question 2. A company provides weather data to multiple customers through a REST-based API. The API is hosted in Amazon API Gateway and integrates with various AWS Lambda functions for each API operation. The company uses Amazon Route 53 for DNS and has created a resource record for weather.example.com. The company stores the data for the API in an Amazon DynamoDB table. The company needs a solution that provides failover capability for the API to another AWS region. Which solution meets these requirements?

ⓐ Deploy a new set of Lambda functions in a new region. Update the API Gateway API to use an edge-optimized API endpoint targeting Lambda functions in both regions. Convert the DynamoDB table into a global table.
ⓑ Deploy a new API Gateway API and Lambda functions in a different region. Change the Route 53 DNS record to a multi-value answer. Add both API Gateway APIs to the answer. Enable health check monitoring. Convert the DynamoDB table into a global table.
ⓒ Deploy a new API Gateway API and Lambda functions in a different region. Change the Route 53 DNS record to a failover record. Enable health check monitoring. Convert the DynamoDB table into a global table.
ⓓ Deploy a new API Gateway API in a new region. Change the Lambda functions to global functions. Change the Route 53 DNS record to a multi-value answer. Add both API Gateway APIs to the answer. Enable health check monitoring. Convert the DynamoDB table into a global table.

Solutions
Question 2 involves a frequently used combination of AWS services (API Gateway, Lambda, and DynamoDB), with DNS handled by Route 53 records. The question asks for a combination that can fail the API over to another region in case of an outage.

Many might pick C simply by spotting the "Change the Route 53 DNS record to a failover record" option, and in this case that instinct is right: the answer is indeed C.

Answer: C

For DNS-based failover to another region in case of an outage, the following configuration is necessary:
Create API resources in the main region (domain).
Create API resources in the sub-region (domain).
Map the created APIs to a custom domain.
Create a Route 53 DNS failover record.
Reading further into the problem, you will also find health check monitoring and the DynamoDB global table. Completing these steps results in the following architecture.

This problem mainly requires building a solution for disaster recovery, but this time we will also walk through the API design.

1. Create APIs for both the main and sub-regions. (Configure separate regions.)
Creating an API Gateway is easy, but we also need a domain name. AWS API Gateway has a custom domain creation feature. It is easy to set up, but note that a TLS certificate (an ACM certificate) is required. Perform the same task in the sub-region as well.

2. Create a Route 53 health check.
First, use the domain of the API in the main region created above. This step sets up an alarm so traffic can switch to the sub-region in case of an outage.

3. Routing Policy - Configure failover.
Route 53 supports various record routing policies; among them, we need the failover method, as the sketch below illustrates.
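As a reference, here is a hedged boto3 sketch of the pair of failover records (the hosted zone ID, health check ID, and regional API domain names are hypothetical placeholders; in practice you might use alias records to the API Gateway custom domains instead of plain CNAMEs):

import boto3

route53 = boto3.client("route53")

# Hypothetical placeholders -- substitute your own values.
HOSTED_ZONE_ID = "Z0000000000000000000"
HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"

def failover_change(role, target, health_check_id=None):
    record = {
        "Name": "weather.example.com",
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": role.lower(),
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "ResourceRecords": [{"Value": target}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id  # attach to the primary record
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        failover_change("PRIMARY", "api-main.example.com", HEALTH_CHECK_ID),
        failover_change("SECONDARY", "api-sub.example.com"),
    ]},
)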
In the console, add two records, primary (main region) and secondary (sub-region), each pointing to the corresponding API's custom domain with the appropriate record type.

4. DynamoDB Global Table
The console has a dedicated section for creating global table replicas, so it is easy to find.

Conclusion
I hope the problems solved today help you with your certification preparation. Look forward to more in-depth problem explanations and key strategies in the next post!

  • AWS Lambda: The Ultimate Guide for Beginners 1/2

Everything About AWS Lambda: The Ultimate Guide for Beginners 1/2

Written by Hyojung Yoon

Today, we will learn about AWS Lambda, a key player in various IT environments. AWS Lambda enables the provision of services with high availability and scalability, thus enhancing performance and stability in cloud environments like AWS. In this blog, we'll delve into AWS Lambda, covering its basic concepts, advantages and disadvantages, and real-life use cases. Additionally, we'll compare AWS Lambda with EC2 to understand when to use each service. Let's get started!

What is AWS Lambda?
What is Serverless Computing?
AWS Lambda
How AWS Lambda Works
Pros and Cons of AWS Lambda
Advantages
Serverless Architecture
Cost-Effective
Integration with AWS Services
Disadvantages
Execution Time Limit
Stateless
Cold Start
Concurrency Limit
Use Cases of AWS Lambda
Automation of System Operations
Web Applications
Serverless Batch Processing
Others
Differences between AWS Lambda and EC2
When to Use AWS Lambda?
When to Use AWS EC2?
Conclusion

What is AWS Lambda?

1. What is Serverless¹ Computing?
AWS Lambda is a serverless computing service. Serverless computing is a cloud computing execution model that allows the operation of backend services without managing servers. Here, you can focus solely on writing code, while AWS manages the infrastructure. This model enables developers to develop and deploy applications more quickly and efficiently.
¹Serverless? A cloud-native development model where developers don't need to provision servers or manage application scaling. Essentially, cloud providers manage the server infrastructure, freeing developers to focus more on the actual functionality they need to implement.

2. AWS Lambda
AWS Lambda is an event-driven serverless computing service that enables code execution for a variety of applications and backend services without the need to provision or manage servers. Users simply provide code in one of the supported language runtimes (Lambda supports Python, C#, Node.js, Ruby, Java, PowerShell, and Go). The code is structured as Lambda functions, which users can write and use as needed. AWS Lambda offers an automatically triggered code execution environment, ideal for event-based architectures and powerful backend solutions. For example, code can be executed when a file is uploaded to an S3 bucket or when a new record is added to DynamoDB.

3. How AWS Lambda Works
Lambda Functions
These are the resources in Lambda that execute code in response to events or triggers from other AWS services. A function contains the code that processes the events passed to it.
Event Triggers (Event Sources)
AWS Lambda runs function instances to process events. Functions can be called directly using the Lambda API or triggered by various AWS services and resources. AWS Lambda functions are triggered by various events, like HTTP requests, data state transitions, file uploads, etc.
How Lambda Works
You create a function, add basic information, and write code in the Lambda editor or upload it; AWS handles scaling, patching, and infrastructure management.

Pros and Cons of AWS Lambda
Using AWS Lambda allows developers to focus on development without the burden of server management, similar to renting a car where you only drive, and maintenance is handled by the rental company. However, Lambda functions are stateless, so additional configuration is necessary for state management. Also, the 'cold start' phenomenon can slow initial response times, like a computer waking from sleep.
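Before weighing the pros and cons, here is the minimal shape of a Lambda function in Python, a sketch that follows Lambda's standard handler convention (the return payload is just an example):

import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (an S3 notification, an API request, ...),
    # and 'context' exposes runtime metadata such as the remaining execution time.
    print("Received event:", json.dumps(event))
    return {"statusCode": 200, "body": "Hello from Lambda!"}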
1. Advantages
1) Serverless Architecture
Developers can focus on development without worrying about server management, akin to renting and driving a car while maintenance is handled by the rental company.
2) Cost-Effective
Pay only for the computing resources actually used. Functions are called and processed only when needed, so you don't need to keep servers running all the time, making it cost-effective. Lambda charges based on the number of requests and the execution time of the Lambda code, so no charges apply when code is not running.
3) Integration with AWS Services
Lambda allows seamless integration and programmatic interactions with other AWS services, using one of the AWS software development kits (SDKs).

2. Disadvantages
1) Execution Time Limit
Lambda has a maximum execution time of 15 minutes (900 seconds) and a maximum memory limit of 10 GB (10,240 MB). Thus, it is not suitable for long-running processes that exceed 15 minutes.
2) Stateless³
Not suitable for maintaining state or DB connections.
³Stateless? Means that data is not stored between interactions, allowing multiple tasks to be performed at once or rapidly scaled without waiting for a task to complete.
3) Cold Start
As a serverless service designed for efficient resource use, Lambda shuts down compute capacity if a function is not used for a long time. When the function is then called, additional setup is needed to run it, leading to a delay known as a Cold Start. The cold start phenomenon varies depending on the language used and the memory settings. This initial delay can affect performance by delaying responses.
4) Concurrency⁴ Limit
By default, Lambda limits the number of concurrent executions to 1,000 per region. Requests beyond this limit are throttled.
⁴Concurrency? The number of requests a Lambda function is processing at the same time. As concurrency increases, Lambda provisions more execution environment instances to meet the demand.

Use Cases of AWS Lambda
Lambda is ideal for applications that need to scale up rapidly and scale down to zero when there's no demand. For example, Lambda can be used for purposes like:

1. Automation of System Operations
🎬 Set up CloudWatch Alarms for all resources. When resources are in poor condition, such as full memory or a sudden CPU spike, the CloudWatch Alarm triggers a Lambda function. The Lambda function notifies the team or relevant parties via email or Slack notification. Combine the Lambda function with Ansible for automated recovery in case of failure, such as resetting memory on an instance or replacing resources when memory fills up.

2. Web Applications
🎬 Store static contents (like images) in S3. Use CloudFront in front of S3 for fast serving globally. Use Cognito separately for authentication. For dynamic contents and programmatic tasks, use Lambda and API Gateway to provide services, with DynamoDB as the backend database.

3. Serverless Batch Processing
🎬 When an object enters S3, a Lambda splitter distributes tasks to mappers, and the mappers save the completed tasks in DynamoDB. A Lambda reducer outputs the results back to S3.

4. Other Cases
1) Real-Time Data Processing Triggered by Amazon S3 Uploads
[Example] Thumbnail creation for S3 source images.
2) Stream Processing
Use Lambda and Amazon Kinesis for real-time streaming data processing: application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, IoT device data telemetry, etc.
3) IoT Backend
Build a serverless backend using Lambda to handle web, mobile, IoT, and third-party API requests.
4) Mobile Backend
Build a backend using Lambda and Amazon API Gateway to authenticate and process API requests. Integrate easily with iOS, Android, web, and React Native frontends using AWS Amplify.

Differences Between AWS Lambda & EC2
AWS Lambda is serverless and event-driven, suitable for low-complexity, fast-execution tasks and infrequent traffic. EC2, on the other hand, is ideal for high-performance computing, disaster recovery, DevOps, development and testing, and offers a secure environment.

1. When Should I Use AWS Lambda?
Low-Complexity Code: Lambda is the perfect choice for running code with minimal variables and third-party dependencies. It simplifies the handling of easy tasks with low-complexity code.
Fast Execution Time: Lambda is ideal for tasks that occur infrequently and need to be executed within minutes.
Infrequent Traffic: Businesses dislike having idle servers while still paying for them. A pay-per-use model can significantly reduce computing costs.
Real-Time Processing: Lambda, when used with AWS Kinesis, is well suited for real-time batch processing.
Scheduled CRON Jobs: AWS Lambda functions are well suited for ensuring scheduled events are triggered at their set times.

2. When Should I Use AWS EC2?
High-Performance Computing: Using multiple EC2 instances, businesses can create virtual servers tailored to their needs, making EC2 perfect for handling complex tasks.
Disaster Recovery: EC2 is used as a medium for disaster recovery in both active and passive environments. It can be quickly activated in emergencies, minimizing downtime.
DevOps: DevOps processes have been comprehensively developed around EC2.
Development and Testing: EC2 provides on-demand computing resources, enabling companies to deploy large-scale testing environments without upfront hardware investments.
Secure Environment: EC2 is renowned for its excellent security.

Conclusion
This guide provided an in-depth understanding of AWS Lambda, which plays a significant role in event-driven, serverless computing in the AWS environment. In the next session, we will explore accessing the console, creating and executing Lambda functions, and understanding fee calculations. We hope this guide helps you in starting and utilizing AWS Lambda, as you embark on your journey into the expansive serverless world!

Links
A Deep Dive into AWS Lambda - Sungyeol Cho, System Engineer (AWS Managed Services) - YouTube
What is AWS Lambda? - AWS Lambda
Troubleshoot Lambda function cold start issues | AWS re:Post

  • What is Amazon Lightsail : EC2 vs Lightsail comparison

What is Amazon Lightsail : EC2 vs Lightsail comparison

Written by Hyojung Yoon

Hello everyone. Today, let's take some time to explore Amazon's cloud service called Lightsail. Understanding both Amazon Lightsail and Amazon EC2, two key cloud computing services, is essential. These two services are part of AWS's major cloud solutions, each with its unique features and advantages. In this post, we'll delve into each service, especially focusing on the key features of Amazon Lightsail and when it's suitable. So, let's dive right in!

What is Amazon Lightsail?
Amazon Lightsail
What is a VPS?
Components of Lightsail
Features of Lightsail
Advantages of Lightsail
Disadvantages of Lightsail
EC2 vs Lightsail
Differences between Amazon Lightsail and EC2
Which one should you use?
Conclusion

What is Amazon Lightsail?

1. Amazon Lightsail
Amazon Lightsail is a Virtual Private Server (VPS) service created by AWS. It includes everything you need to quickly launch a project: instances, container services, managed databases, CDN distributions, load balancers, SSD-based block storage, static IP addresses, DNS management for registered domains, resource snapshots (backups), and more. It specializes in making it easy and fast to build websites or web applications.

2. What is a VPS?
A VPS stands for Virtual Private Server, which means taking a physical server and dividing it into multiple virtual servers. These segmented virtual servers are shared among various clients. While you share a physical server with others, each client has its own private server space. However, since everyone shares the computing resources of one server, a user monopolizing too many resources can affect others in terms of RAM, CPU, etc.

3. Components of Lightsail
Instances
Containers
Databases
Networking
Static IP
Load Balancer (ELB)
Distribution (CDN)
DNS Zone: domain and sub-domain management
Storage (S3, EBS): additional capacity available if instances run out of space
Snapshots (AMI): can be scheduled for automatic backups

Features of Lightsail

1. Advantages of Lightsail
Amazon Lightsail allows for intuitive instance creation, which is less complex than EC2. With pre-configured bundles, users can swiftly deploy applications, websites, and development environments without a deep understanding of cloud architecture. Its user-friendly interface allows easy creation of containers, storage, and databases. This makes it ideal for beginners and smaller projects.

2. Disadvantages of Lightsail
However, the advantages mentioned above can also become limitations. Lightsail may not be suitable for applications expecting rapid increases in traffic or resource demands, and pre-configured bundles can limit detailed settings. Additionally, integrating with other AWS services may require migration. Other limitations include:
Up to 20 instances per account
5 static IP addresses per account
Up to 6 DNS zones per account
Up to 20 TB of attached block storage (disks) in total
5 load balancers per account
Up to 20 certificates

EC2 vs Lightsail

1. Differences Between Amazon Lightsail and EC2
1) Cost
Generally, Amazon Lightsail is cheaper. The 2 GB-memory bundle costs $10, inclusive of a 60 GB SSD volume and traffic costs. In contrast, EC2 charges $11.37 for a t3.small with 60 GB of EBS on a 3-year commitment (without upfront payment), with traffic costs extra. Therefore, Lightsail is more economical for continuous usage. However, if you run EC2 only for the time you need it, EC2 can also be cost-effective.
EC2 charges are based on actual usage, making it a more flexible option for cost management.

2) Features
EC2 offers advanced features that Lightsail lacks, while Lightsail omits some detailed options. Compared with EC2, Lightsail provides only limited VPC-related functionality and does not offer:
Instance type changes
Scheduled snapshot creation
Detailed security group settings
IAM role assignment
The full range of load balancer options

2. Which one should you use?
1) Amazon EC2 (Elastic Compute Cloud)
A powerful and flexible cloud computing platform offered by AWS
Customizable, on-demand computing performance for all application needs
Scalable resources for anything from websites to high-performance scientific simulations
Seamlessly integrates with other AWS services
Ideal for businesses with infrastructure managers capable of managing virtual servers, networks, security groups, etc. It is particularly beneficial for CPU-intensive workloads and on-demand use, allowing for efficient cost management.

2) Amazon Lightsail
Simplifies the cloud experience
Offers virtual servers, storage, and networking in easy-to-understand packages
Ideal for simpler applications like personal websites, blogs, or small web apps
A fixed pricing model that simplifies budgeting
Ideal for individuals looking for swift web service hosting without dedicated infrastructure management. It is more suitable for services emphasizing network traffic rather than CPU-intensive tasks.

Conclusion
Understanding the differences between Amazon EC2 and Lightsail is the first step toward harnessing cloud computing. EC2 offers high scalability and customization, while Lightsail provides a simple and intuitive cloud experience. By selecting the most appropriate service based on your requirements, technical expertise, and project complexity, you can ensure success in the digital landscape. Both have unique advantages, so choose according to your needs and expertise. So, enjoy your cloud surfing! ⛵⛵

Links
Virtual Private Server and Web Hosting - Amazon Lightsail - Amazon Web Services
Virtual Private Server and Web Hosting - Amazon Lightsail FAQs - Amazon Web Services

  • What is a Load Balancer? : A Comprehensive Guide to AWS Load Balancer

Written by Hyojung Yoon

Hello, everyone! Today, we will delve into the fascinating world of Load Balancers and Load Balancing, pivotal technologies that enable web services to maintain stability even in high-traffic situations, especially in cloud environments like AWS. These technologies enhance a service's performance, stability, and scalability. Let's begin our journey through the basic concepts of Load Balancers and Load Balancing to the types of AWS Load Balancers in this blog.

What is a Load Balancer?
Load Balancer
Scale Up and Scale Out
What is Load Balancing?
Load Balancing
Benefits of Load Balancing
Load Balancing Algorithms
Static Load Balancing
Round Robin Method
Weighted Round Robin Method
IP Hash Method
Dynamic Load Balancing
Least Connection Method
Least Response Time Method
Types of AWS Load Balancer
ALB (Application Load Balancer)
NLB (Network Load Balancer)
ELB (Elastic Load Balancer)
Conclusion

What is a Load Balancer?

1. Load Balancer
Load Balancers sit between the client and a group of servers, distributing traffic evenly across multiple servers and thereby mitigating the load on any particular server. When there is excessive traffic to a single server, it may fail to handle the load, leading to downtime. To address this issue, either a Scale Up or a Scale Out approach is employed.

2. Scale Up and Scale Out
Scale Up improves the existing server's performance, through tasks like upgrading CPU or memory, while Scale Out distributes traffic or workload across multiple computers or servers. Each method has its advantages and disadvantages, and choosing the more appropriate one is crucial. In the case of Scale Out, Load Balancing is essential to distribute the load evenly among multiple servers. The primary purpose of Load Balancing is to prevent any single server from being overwhelmed by distributing incoming web traffic across multiple servers, thus enhancing server performance and stability.

What is Load Balancing?

1. Load Balancing
Load Balancing refers to the technology that distributes tasks evenly across multiple servers or computing resources, preventing service interruption due to excessive traffic and ensuring tasks are processed without delay.

2. Benefits of Load Balancing
1) Application Availability
Server failures or maintenance can increase application downtime, rendering the application unusable for visitors. A load balancer automatically detects server issues and redirects client traffic to available servers, enhancing system fault tolerance. With load balancing, it is more manageable to:
Undertake application server maintenance or upgrades without application downtime
Facilitate automatic disaster recovery to your backup site
Conduct health checks and circumvent issues leading to downtime

2) Application Scalability
A load balancer can intelligently route network traffic between multiple servers. This allows your application to accommodate thousands of client requests, enabling you to:
Circumvent traffic bottlenecks on individual servers
Gauge application traffic to adaptively add or remove servers as required
Integrate redundancy into your system for coordinated and worry-free operation

3) Application Security
Load balancers, equipped with inbuilt security features, add an extra security layer to your Internet applications. They are invaluable for managing distributed denial-of-service (DDoS) attacks, where an attacker overwhelms an application server with concurrent requests, causing server failure.
Additionally, a load balancer can:
Monitor traffic and block malicious content
Reduce impact by dispersing attack traffic across multiple backend servers
Direct traffic through network firewall groups for reinforced security

4) Application Performance
Load balancers enhance application performance by optimizing response times and minimizing network latency. They facilitate several crucial tasks to:
Elevate application performance by equalizing load across servers
Lower latency by routing client requests to nearby servers
Guarantee reliability and performance of both physical and virtual computing resources

Load Balancing Algorithms
Various algorithms, such as Round Robin, Weighted Round Robin, and Least Connections, are employed for load balancing, each serving different purposes and scenarios.

1. Static Load Balancing
1) Round Robin Method
This method allocates client requests across servers in turn. It is apt when servers share identical specifications and the connections (sessions) with the server are transient. Example: For servers A, B, and C, the rotation order is A → B → C → A.

2) Weighted Round Robin Method
This assigns a weight to each server and prioritizes servers with higher weights. When servers have varied specifications, this method increases traffic throughput by assigning higher weights to more capable servers. Example: Server A's weight is 8, Server B's weight is 2, and Server C's weight is 3; accordingly, 8 requests are assigned to Server A, 2 to Server B, and 3 to Server C.

3) IP Hash Method
Here, the load balancer hashes the client IP address, converting it to a number that maps to a specific server. This method assures that a given user is consistently directed to the same server.

2. Dynamic Load Balancing
1) Least Connection Method
This method directs traffic to the server with the fewest active connections, presuming each connection demands identical processing power across all servers.

2) Least Response Time Method
This considers both the current connection count and server response time, steering traffic to the server with the minimal response time. It is suitable when servers have disparate available resources, performance levels, and data volumes to process. A server that best meets both criteria is prioritized, even over a server that is currently idle. The load balancer employs this algorithm to ensure prompt service for all users.

Types of AWS Load Balancer

1. ALB (Application Load Balancer)
Complex modern applications often operate on server farms, each composed of multiple servers assigned to specific application functions. An Application Load Balancer (ALB) redirects traffic after examining the request content, such as HTTP headers or SSL session IDs. For instance, in an e-commerce application with features like a product directory, shopping cart, and checkout, an ALB can serve content like images and videos without requiring a sustained user connection. When a user searches for a product, the ALB directs the search request to a server where maintaining the user connection is not mandatory. Conversely, the shopping cart, which requires maintaining multiple client connections, sends the request to a server capable of long-term data storage. The ALB provides application-level load balancing, apt for HTTP/HTTPS traffic. It is an L7 load balancer and can enforce SSL.
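As an illustration of this content-based routing, here is a hedged boto3 sketch of two ALB listener rules that send shopping-cart and search traffic to different target groups (the listener and target group ARNs are hypothetical placeholders):

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical placeholder ARNs -- substitute your own listener and target groups.
LISTENER_ARN = "arn:aws:elasticloadbalancing:region:account:listener/app/demo/xxx/yyy"
CART_TG_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/cart/zzz"
SEARCH_TG_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/search/www"

# Requests to /cart/* go to the fleet that keeps client sessions...
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/cart/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": CART_TG_ARN}],
)

# ...while /search/* goes to stateless search servers.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/search/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": SEARCH_TG_ARN}],
)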
2. NLB (Network Load Balancer)
A Network Load Balancer (NLB) operates by analyzing IP addresses and other network-level data to direct traffic efficiently. It allows you to trace the origin of your application traffic and allocate static IP addresses to multiple servers. The NLB uses both static and dynamic load balancing methods to distribute server load effectively. It is an ideal solution for scenarios demanding high performance, capable of managing millions of requests per second while maintaining low latency. It is especially adept at handling abrupt increases and fluctuations in traffic, making it particularly useful for real-time streaming services, video conferencing, and chat applications, where establishing and maintaining an optimized connection is crucial. In such cases, an NLB ensures effective management of connections and maintenance of session persistence. It conducts network-level load balancing, suitable for TCP/UDP traffic, and is an L4 load balancer.

3. ELB (Elastic Load Balancer)
Elastic Load Balancing (ELB) automatically distributes incoming traffic among various targets, such as EC2 instances, containers, and IP addresses, across multiple Availability Zones. With ELB, the load can be controlled at both L4 and L7. Should the primary address of your server change, a new load balancer must be created and a target group assigned to a single address, making the process more complex and costly as the number of targets increases. ELB is the umbrella service encompassing the four types of load balancers provided by AWS, and it extends substantial scalability and adaptability to cater to diverse needs and environments.

Conclusion
We have delved into the domains of load balancers and load balancing, recognizing the indispensable role a load balancer plays in moderating website and application traffic and allocating server load to bolster service performance and stability. Particularly within cloud environments like AWS, a variety of load balancing options and features are available, allowing you to implement the solution best suited to your services and applications. Such technology empowers us to offer quicker and more reliable services, culminating in an enhanced user experience and customer satisfaction, thus forging the path to business success.

Links
What is Load Balancing? - Load Balancing Algorithm Explained - AWS
Load Balancer - Amazon Elastic Load Balancer (ELB) - AWS
What is an Application Load Balancer? - Elastic Load Balancing
What is a Network Load Balancer? - Elastic Load Balancing
What is Elastic Load Balancing? - Elastic Load Balancing
