
  • Using MCP on AWS: Now Available with Amazon Q

    Written by Minhyeok Cha

    Two pieces of Amazon Q news came out recently: one is multilingual support (including Korean), and the other is the CLI + Q + MCP combination. MCP is quite the hot topic these days, and as someone working in IT I figured I should find out, however belatedly, what it actually does, which is why I wrote this post. Most people install MCP servers in tools like Claude or Cursor, but as an employee of an AWS partner I wanted to try it in connection with AWS.

    Contents: What is the Model Context Protocol (MCP)? / Installing Amazon Q and signing up for SSO / Applying an MCP server / What should I have MCP do? / A bonus part, because I wasn't satisfied / Wrap-up

    What is the Model Context Protocol (MCP)? You may already know, but to give a brief explanation (and fill some space): the Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need. Many MCP servers are already published on GitHub, so you can pick one up and use it, or implement your own if the feature you want doesn't exist. For the server repository, see https://github.com/modelcontextprotocol/servers. The MCP flow we'll use looks roughly like this, and as mentioned earlier, we'll proceed with the CLI and Q.

    Installing Amazon Q and signing up for SSO: 1. First, complete the SSO sign-up: [Q] How to subscribe to Amazon Q Developer Pro. 2. Follow the sign-up steps. Note that if you use AWS Organizations and test with an account under it, it is treated as an account instance and the following Q restrictions apply. 3. Next, install Amazon Q in the CLI, following these steps → [링크]. 4. Afterwards, open a terminal and run "q chat", and you'll see a screen like the one below. For someone as uneasy with English as I am, Korean support was a must, and fortunately it's available. : )

    Applying an MCP server: There are many MCP servers out there, but for this post I looked for the one that would be easiest to test, least fiddly, and most clearly visible, and settled on the Puppeteer MCP server. It's described as a Model Context Protocol server that provides browser automation capabilities using Puppeteer, allowing LLMs to interact with web pages in a real browser environment, take screenshots, and execute JavaScript; what its output actually looks like, we'll confirm in the test results below. On the GitHub repository linked earlier, each server comes with setup instructions, and they're very simple. In your terminal, go to the Amazon Q configuration directory (~/.aws/amazonq), create an "mcp.json" file, and paste in each server's configuration values. Then restart the CLI Q you opened earlier. 💡 To restart, exit with /quit and run it once more. If "Puppeteer loaded" appears at the top, the MCP server has been applied.

    What should I have MCP do? After a few runs with Puppeteer, I found I could use it with simple prompts for things like capturing web page screenshots, fetching web page content, and interacting with web pages. As a simple test, I passed in a link to one of the blog posts that helped me study MCP and asked for a summary, and it produced the output below. Once you grant the tool permission, the MCP server reads the link and summarizes it for you. Summarizing an MCP-related blog post... Hmm, that felt a bit anticlimactic, though.
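    For reference, the mcp.json entry described above typically looks like the following. This is a sketch based on the Puppeteer server's README in the modelcontextprotocol/servers repository; exact fields may vary by version, so check the server's own documentation.

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```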
    So this time, instead of a plain URL, I gave it a PDF on personal data protection that I had put together a while back and asked for an analysis related to ISMS-P, and the results were far better than expected.

    A bonus part, because I wasn't satisfied: since this MCP server works with URLs, it first returned an error saying it couldn't read content opened as a PDF, likely a problem with rendering the PDF directly in the browser. It then recognized the issue and used the curl command to download the PDF file locally. It ran ls -la to check that the downloaded PDF existed, confirmed the download had succeeded, and tried to read the PDF with Python's PyPDF2 library, which turned out not to be installed. So it created a Python virtual environment, installed PyPDF2, and then successfully read the PDF and extracted the text. What follows is confidential, so I can't show it here, but I think the CLI + Q capabilities this post set out to examine have been sufficiently demonstrated.

    Wrap-up: After finishing this post, I discovered that the official AWS YouTube channel already had a video on using the Developer Q CLI with MCP servers. Had I hurried a little more, I could have published on Q CLI and MCP first; a bit of a shame. MCP opens up a truly wide range of possibilities. By connecting MCP to existing servers, AI models can access the endpoints they need with appropriate permissions, and that felt like a step change in development automation and efficiency. 💡 Of course, adopting MCP calls for careful consideration: the scope of permissions granted to MCP servers, and the logging and patch management that come with operating your own server, are being discussed as important issues in the security community as well. Using the Developer Q CLI for the first time was remarkable in its own right. Even without MCP's extra capabilities, its intelligent support, suggesting and executing the library installs or version patches a situation calls for, completely changed my development workflow. In the past, when a runtime error occurred, I had to read the logs and search for each fix one by one; now I can solve problems far more efficiently without that hassle.

  • Recap of AWS Summit Japan 2025

    AWS Summit Japan 2025 Recap: Revisiting the Summit Japan Experience with an AWS Ambassador

    Written by MinHyeok Cha

    Contents: AWS Summit Japan 2025 event overview / On-site atmosphere at AWS Summit Japan / Major partner booths / Technical sessions and exhibits / Closing

    AWS Summit Seoul 2025 was held in Seoul last May, and on June 25 and 26, AWS Summit Japan 2025 was held over two days in Tokyo. The reason I went to Japan was a meeting with a Japanese customer; the customer happened to be attending the Summit as well, so we arranged to meet at Makuhari Messe (幕張メッセ) in Chiba, where the Japan Summit is hosted.

    AWS Summit Japan 2025 event overview: AWS Summit Tokyo is the largest AWS event in Japan. As in Korea, it's an event where you can gain knowledge and have fun through AWS technical sessions, GameDay, the EXPO, and more. The venue, Makuhari Messe, is easiest to picture as Japan's equivalent of COEX in Seoul. The weather was classic Japanese summer, hot, humid, and rainy, unpleasant on all three counts, but since it was an indoor event I put up with it and went in.

    On-site atmosphere at AWS Summit Japan: There were a lot of people waiting, but there were also a huge number of staff issuing badges at the front, even more than at AWS re:Invent, it seemed, so I got inside quickly without wasting time. As some of you may know, AWS launched a region in Japan before Korea. Perhaps reflecting that, local interest in AWS seems high, and the crowds were definitely noticeable. And as a partner, there's the obvious question: which AWS partners operate in Japan? I got to find out.

    Major partner booths: I went around a number of booths, but everything was written in Japanese, so I hovered around the well-known companies' booths and did my best to look like I understood. First, Snowflake. Their slides had the only content I could read: a powerful AI service supporting code assistants and document reading; conversational access to data, with search and text-to-SQL features that make BI and AI chatbots easy to build; and easy access to top-tier models integrated with Snowflake (per machine translation). Come to think of it, while AI certainly came up a lot at the Japan Summit, it didn't have the same "the great AI era is here!" energy as in Korea or the US. Maybe Japan really is a country that values the analog. At the FORTINET booth there was a spot to take photos with a cute mascot(?); I didn't have the courage to ask for one. Classmethod, well known in Korea too, was there; I heard they had a cat mascot at last year's summit, which I was sorry to miss. There was also a company called iret; a quick search suggests they're a general IT solutions provider, and their booth was the same size as Classmethod's, so I could guess they're a Diamond partner.

    Technical sessions and exhibits: The session timeline was the kind I had also seen at AWS re:Invent. My flight landed at 11, so I missed the keynote, but watching speakers from various companies and from AWS give their talks, I felt it's much the same everywhere in the world. Next was what appeared to be a joint creation by Sony and Honda. Again, I couldn't follow the explanation, but an AWS architecture related to the car was on display, so I got a feel for it.

    Closing: Working at SmileShark ended up taking me to the Japan Summit as well. It wasn't my main purpose, so I couldn't attend both days, but it was a good experience. With some Japanese study beforehand, it would be even more fun to look around. That wraps up my unplanned AWS Summit Japan experience. Thank you!

  • What is Amazon Workspaces? A Complete Guide to Remote Work Cost Reduction and Implementation in 2025

    Written by Hyojung Yoon

    Hello, this is Hyojung Yoon, Brand Team Lead, returning with a long-awaited blog post.

    The way we work has changed significantly since the COVID-19 pandemic. According to Maeil Business News' analysis of the August 2024 Statistics Korea Economically Active Population Survey (Supplementary Survey by Employment Type), telecommuters in Korea number approximately 683,000, representing 3.1% of all workers. Source: Gartner Newsroom, March 1, 2023. Additionally, Gartner predicted that 39% of global knowledge workers would work in a hybrid manner by the end of 2023, and this trend continues in 2025.

    💭 Companies now face new challenges. "How can we ensure employees work securely from anywhere?" "How can we reduce the substantial costs of building remote work infrastructure?" If these questions resonate with you, Amazon WorkSpaces could be the solution you're looking for.

    Contents: Why Do We Need Amazon WorkSpaces Now? / What is Amazon WorkSpaces? / VDI vs. DaaS: Differences / Real Implementation Cases and Results / Key Features of Amazon WorkSpaces / Amazon WorkSpaces Implementation Guide / Pricing and Cost Optimization / Implementation Considerations / Conclusion

    Why Do We Need Amazon WorkSpaces Now? Challenges in the Remote Work Era: More companies than ever are transitioning to remote work or allowing hybrid work arrangements. However, this change has simultaneously presented several challenges. First, security issues: security threats have increased as employees access company data from personal devices. Second, infrastructure costs: providing laptops or computers to all employees and establishing VPNs requires enormous expense. Third, management complexity: managing and updating distributed devices is a significant burden for IT teams.
    The Solution Amazon WorkSpaces Offers: Cloud-based virtual desktop services have emerged to solve these problems. Among them, Amazon WorkSpaces provides the following benefits. Source: Amazon WorkSpaces Customers | Persistent Desktop Virtualization

    💡 Amazon Internal Case Study: Supporting 25,000 global contract and remote workers while saving $17 million annually = $680 savings per employee per year (*the $680-per-person figure is a simple division). This cost reduction was achieved not simply by cutting hardware purchase costs, but through improved operational efficiency and enhanced security.

    What is Amazon WorkSpaces? Amazon WorkSpaces is a fully managed desktop virtualization service (DaaS, Desktop-as-a-Service) delivered via the cloud. Users can remotely access their familiar desktop environment from anywhere, at any time, through various devices, all running on AWS's secure infrastructure. Whether you're at home, in a café, or traveling abroad for business, as long as you have an internet connection you can work as if you were at your office PC.

    VDI vs. DaaS: Differences. Many people confuse VDI and DaaS, so let me clarify the key differences. VDI (Virtual Desktop Infrastructure) is where companies build and manage virtual desktop infrastructure in their own data centers; think of it like purchasing a car and handling all maintenance yourself. In contrast, DaaS (Desktop-as-a-Service) is a model where a cloud service provider hosts and manages the virtual desktop infrastructure and delivers it as a service, similar to leasing a car: the leasing company handles maintenance while you simply enjoy driving it.
    A side-by-side comparison:
    Initial investment: VDI requires high upfront hardware costs; DaaS is a monthly subscription fee.
    Management burden: VDI is managed directly by the internal IT team; DaaS is managed by the service provider.
    Scalability: VDI is limited; DaaS scales faster and more flexibly.
    Cost structure: VDI is CapEx (capital expenditure); DaaS is OpEx (operational expenditure).

    💡 Note: The market uses 'VDI' in two different senses. In the narrow sense it refers only to on-premises virtual desktops; in the broad sense it refers to all virtual desktop technologies (VDI + DaaS). Therefore, when people cite the VDI market size, they usually mean the entire virtual desktop market including DaaS. While Amazon WorkSpaces is clearly a DaaS service, it is classified as part of the VDI market in this broader sense.

    Real Implementation Cases and Results: Let's examine the actual results companies have achieved with Amazon WorkSpaces rather than focusing on theory.

    1) Ferrari Case Study. Challenge: Ferrari was working with over 500 external partners and needed to protect important intellectual property, such as the latest automotive design drawings, while collaborating efficiently. Solution: through Amazon WorkSpaces, they provided isolated virtual desktop environments to each partner; data remained under Ferrari's control while partners could perform the work they needed. Results: 90% reduction in deployment time (from several days to 1 hour); eliminated the risk of design-data leakage through enhanced security; simplified the partner onboarding process. Ferrari WorkSpaces Case Study (Go to AWS Official Case Study)

    2) Kyowa Kirin Case Study. Challenge: pharmaceutical companies must comply with strict regulations like HIPAA; at the same time, researchers needed to access data from anywhere.
    Results: deployed over 1,600 WorkSpaces in Japan and the United States; 30% cost reduction compared to on-premises VDI; reduced audit response time through compliance automation; improved employee satisfaction. Kyowa Kirin Case Study (Go to AWS re:Invent Presentation)

    3) Emergency Response Cases During the Pandemic. Amazon WorkSpaces' capabilities shone even brighter during the COVID-19 pandemic. Fox Corporation established a remote work environment for all 5,000 employees; MRS BPO transitioned 700 call center employees to remote work in just 2 days. This agility can be a significant competitive advantage in rapidly changing business environments. FOX Case Study (Go to AWS Official Case Study); MRS BPO Case Study (Go to AWS Official Blog Post)

    Key Features of Amazon WorkSpaces. Supported Environments: Amazon WorkSpaces supports a wide range of environments, and you can access it from virtually any device. Operating system support: Windows Server 2016, 2019, 2022; Windows 10 and Windows 11; Amazon Linux 2; Ubuntu 22.04 LTS; Rocky Linux 8. Client device support: Windows PC, Mac, and Linux computers; Chromebook; iPad, Fire, and Android tablets; web browsers (accessible without separate installation).

    Strengths of WorkSpaces: Cloud-native architecture, designed for the cloud from the beginning, enabling rapid deployment. A transparent and predictable cost structure (pay-as-you-go). Tight integration with the AWS ecosystem (EC2, Lambda, S3, etc.). Global infrastructure providing reliable service anywhere in the world.

    ⚠️ However, like all services, Amazon WorkSpaces has some limitations: the lack of multi-session functionality prevents multiple users from sharing one WorkSpace simultaneously; Microsoft-focused organizations should compare it with Azure Virtual Desktop to determine which fits better; and large organizations with 5,000+ employees need to calculate and compare TCO against on-premises VDI.

    WorkSpaces Types: Amazon WorkSpaces offers two types.
    Each has different purposes, so choose the one that fits your business.

    💡 Pro Tip: For first-time deployments, I recommend using a mix of Personal and Pools. Running full-time employees on Personal and interns or project staff on Pools maximizes cost efficiency.

    WorkSpaces Personal vs. WorkSpaces Pools:
    Description: Personal is a dedicated virtual desktop for an individual user; Pools is a virtual desktop pool shared by multiple users.
    Data retention: Personal persists user settings and data; Pools resets to its initial state upon logout.
    Suitable for: Personal suits general office workers and developers; Pools suits call centers, training rooms, and temporary staff.
    User experience: Personal feels like using your own PC; Pools is like a PC café, where anyone can use a machine and it resets afterwards.

    Performance Bundle Options: Amazon WorkSpaces offers various performance options tailored to different work requirements. It's like choosing between a compact car and a luxury sedan: there's an option for every need. *The tables below are not official AWS categories but are organized by GPU memory usage for easier understanding.

    1) General Purpose Bundles (Non-GPU): General purpose bundles are designed for a range of tasks including office work, business applications, and development. These bundles do not include GPU memory and are optimized for CPU- and memory-intensive workloads such as office productivity, development, and data analysis.
    General purpose bundle specs (vCPU / memory / root volume / user volume / recommended use case):
    Value: 1 vCPU, 2GB, 80GB-100GB root, 10GB-100GB user. Basic tasks, email.
    Standard: 2 vCPU, 4GB, 80GB-175GB root, 10GB-100GB user. General business tasks.
    Performance: 2 vCPU, 8GB, 80GB-175GB root, 10GB-100GB user. Large file processing.
    Power: 4 vCPU, 16GB, 80GB-175GB root, 10GB-100GB user. Data analysis, development.
    PowerPro: 8 vCPU, 32GB, 80GB-175GB root, 10GB-100GB user. High-performance computing.
    GeneralPurpose.4xlarge: 16 vCPU, 64GB, 175GB root, 100GB user. Large-scale compilation.
    GeneralPurpose.8xlarge: 32 vCPU, 128GB, 175GB root, 100GB user. High performance.

    2) GPU-Enabled Bundles: GPU-enabled bundles leverage NVIDIA GPUs and are optimized for high-performance graphics and computational workloads such as graphics work, 3D rendering, and media production. Specs (vCPU / memory / GPU memory / local storage / recommended use case):
    Graphics.g4dn: 4 vCPU, 16GB, 16GB GPU, 125GB NVMe. CAD, design, architecture.
    GraphicsPro.g4dn: 16 vCPU, 64GB, 16GB GPU, 225GB NVMe. Media production, 3D rendering, ML.

    Management Features. 1) Self-Service Features: users can perform certain tasks independently without IT assistance, including WorkSpace restarts, volume expansions, and compute type changes. These self-service capabilities significantly reduce IT workload. 2) Integration and Connectivity: Amazon WorkSpaces seamlessly integrates with existing IT infrastructure: Active Directory integration (use existing user accounts as-is), Microsoft 365 application support, and smooth integration with AWS services (S3, EC2, etc.).

    Recent Feature Updates: Amazon WorkSpaces continuously rolls out new features to enhance its platform. 1) Amazon WorkSpaces Core: WorkSpaces Core provides EC2-based Windows desktops using your own Microsoft Windows licenses (BYOL, Bring Your Own License), without AWS Directory Services. This approach significantly improves cost efficiency and operational simplicity for organizations with existing Microsoft licensing agreements. WorkSpaces Core also integrates with various VDI partner solutions, enabling seamless connectivity with your existing infrastructure.
    2) Amazon WorkSpaces Thin Client: Introduced in 2023, the Amazon WorkSpaces Thin Client is a cost-effective dedicated terminal offering rapid deployment, centralized management, and robust security features. With no local data storage or app installation capabilities, it provides enhanced security and is particularly well-suited for remote work and call center environments.

    Amazon WorkSpaces Implementation Guide. Now let's walk through the step-by-step process of implementing Amazon WorkSpaces.

    Step 1: Current State Analysis and Requirements Definition. First, review your current IT infrastructure and analyze the number of users and their work patterns. Create a list of necessary applications and understand requirements by department. In this phase, you need to answer questions like: "How many employees will work remotely?" "What applications are primarily used?" "What are the data security requirements?"

    Step 2: PoC (Proof of Concept) Execution. Select a pilot group of 10-20 people and conduct test operations for 2 weeks. During this period, test performance in actual work environments and collect user feedback to derive improvements. It's important to test various bundle types in the PoC phase to find the optimal configuration for each department.

    Step 3: Full-Scale Implementation. Create an AWS account and set up the WorkSpaces directory. Determine appropriate bundle types for each user and configure security policies and networks. This phase involves intensive technical work such as Active Directory integration, VPN setup, and security group configuration.

    Step 4: User Training and Deployment. Guide users on how to install the client apps for each device and train them on initial login and setup. Create and distribute user manuals, and operate a help desk to respond to initial inquiries.

    Step 5: Operations and Optimization. Set up monitoring through CloudWatch and establish backup policies.
    Regularly analyze usage patterns to optimize costs and make continuous improvements based on user feedback.

    Pricing and Cost Optimization. Amazon WorkSpaces Pricing Plans: Amazon WorkSpaces offers two running modes with corresponding billing methods.

    1) AlwaysOn (Monthly Subscription): fixed monthly charges regardless of usage, with predictable monthly costs. Suitable for full-time users; AlwaysOn is advantageous if the WorkSpace is used more than about 4 hours per day. For example, Standard Ubuntu (2 vCPU, 4GB RAM, 80GB root volume, 10GB user volume) costs $44 per month with WorkSpaces Personal (July 2025, Asia Pacific Seoul region).

    2) AutoStop (Hourly Billing): charged hourly only while the WorkSpace is running. It automatically stops after a period of inactivity, avoiding compute costs; storage costs are charged monthly regardless of WorkSpace state. Suitable for part-time users; recommended for interns or short-term project personnel. Generally more economical when monthly usage is less than 80 hours. *This 80-hour threshold is commonly referenced in AWS official blogs as a typical break-even point, but the actual value may vary by usage pattern; for an accurate calculation, use the AWS pricing calculator.

    Amazon WorkSpaces Cost Optimization Strategies: 1) Utilize the Cost Optimizer tool: automatic analysis of usage patterns, recommendations for the optimal pricing plan, and average savings of 20-30%. 2) Right-sizing: monitor actual resource usage per user, adjust over-specified bundles to appropriate levels, and review and optimize regularly.

    Implementation Considerations. Network Requirements: stable network connectivity is essential for smooth Amazon WorkSpaces usage.
    AWS recommends the following: minimum bandwidth of 1Mbps (basic tasks); recommended bandwidth of 2-5Mbps (general work); 10Mbps or higher for graphics-intensive work; and a round-trip time (RTT) under 100ms.

    Compliance and Data Sovereignty: In certain industries or regions, the physical location where data is stored can be important. Since Amazon WorkSpaces provides services in multiple regions worldwide, select a region that meets your data sovereignty requirements.

    Security is one of the biggest obstacles and concerns in remote work. Amazon WorkSpaces provides the following security features to address these concerns: storage data encryption through AWS Key Management Service (KMS); encryption in transit using TLS 1.2; network isolation through VPC security groups; and various security standards and compliance certifications (SOC, ISO 27001, PCI DSS, HIPAA, GDPR, etc.). It can be used with confidence even in industries that must comply with strict security regulations, such as finance, healthcare, and government agencies.

    Change Management and User Training: The greatest challenge when implementing a new system often isn't technical; it's user resistance. Here's our recommended approach: phased implementation (start with a pilot group and gradually expand); sufficient training (provide user manuals and conduct training sessions); ongoing support (operate a help desk after initial implementation).

    Conclusion. Amazon WorkSpaces is a robust solution for adapting to evolving work environments. It enables rapid deployment without significant upfront investment, delivers enterprise-grade security with flexible scalability, and provides access from virtually any device. We've seen how global enterprises like Ferrari and Kyowa Kirin have achieved cost savings and improved operational efficiency with Amazon WorkSpaces.
Your organization can also leverage Amazon WorkSpaces to maintain productivity while offering employees a flexible work environment. The 2025 releases of WorkSpaces Core further expand options for diverse work environments and requirements. 🚀 Are you considering implementing Amazon WorkSpaces? We will review your remote work infrastructure and build the optimal WorkSpaces environment for your company. Request WorkSpaces Implementation Consultation → Amazon WorkSpaces implementation goes beyond technical deployment— it's a transformation of how your organization works. This project requires expertise spanning technical evaluation, deployment, and operational optimization. Partnering with SmileShark, an official AWS Partner, helps minimize trial and error while establishing a stable remote work environment more efficiently.
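    As a footnote to the pricing section above: the roughly 80-hour AutoStop break-even can be reproduced with simple arithmetic. In the sketch below, the $44 monthly fee comes from this post, but the hourly and storage rates are hypothetical placeholders, not actual AWS prices; use the AWS pricing calculator for real figures.

```python
# Rough AlwaysOn vs. AutoStop break-even check.
# monthly_fee: AlwaysOn price ($44/month for Standard Ubuntu per this post,
#   Seoul region, July 2025). hourly_rate and monthly_storage_fee are
#   hypothetical placeholders -- look up real rates before deciding.
def break_even_hours(monthly_fee, hourly_rate, monthly_storage_fee):
    """Hours per month above which AlwaysOn becomes cheaper than AutoStop.

    AutoStop cost = monthly_storage_fee + hourly_rate * hours
    AlwaysOn cost = monthly_fee
    """
    return (monthly_fee - monthly_storage_fee) / hourly_rate

# With the $44 monthly fee and hypothetical $0.43/hour usage plus $9.75/month
# storage, the break-even lands right around the ~80-hour mark cited above.
hours = break_even_hours(44.0, 0.43, 9.75)
print(round(hours))  # prints 80
```

    If your users' expected monthly hours fall clearly below this number, AutoStop is the safer default; above it, switch those WorkSpaces to AlwaysOn.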

  • How to Build RAG with Llama3 and AWS Bedrock Knowledge Base using LangChain

    Building a RAG Chatbot in Minutes: Llama3 + Bedrock KB + LangChain

    Written by Hyeonmin Kim

    When implementing chatbots, the RAG pattern has become essential rather than optional. Implementing the RAG pattern yourself means building a series of components: vector embedding, a vector database, and document retrieval. This can be challenging without domain knowledge. Today, we'll implement it with minimal effort, a few clicks and a little code, using Bedrock Knowledge Bases, SageMaker JumpStart, and LangChain. We will: set up Amazon Bedrock Knowledge Bases (referred to as Bedrock KB); deploy the newly added Llama3 using the JumpStart feature; and use LangChain to search documents and generate messages. The steps: Data Preparation → Bedrock KB Setup → Llama3 Deployment (SageMaker JumpStart) → Inference through LangChain.

    ⚠️ Important Note: Currently, Bedrock KB functionality is not available in the Seoul region, so we'll use US East (N. Virginia). Please make sure to check this.

    Data Preparation. When using Bedrock KB, you don't need to embed the data and store it in a vector database yourself. Simply store the data in S3, and KB internally uses embedding models and a vector database to index it. From the user's perspective, you only need to create an S3 bucket and store the desired documents. Navigate to the S3 console, create an S3 bucket to store documents, and upload them; in this example, we'll upload a Korean-language document. Supported data formats are as follows, and each file must not exceed 50MB: plain text (.txt), Markdown (.md), HTML (.html), Microsoft Word (.doc/.docx), CSV (.csv), Microsoft Excel (.xls/.xlsx), and PDF (.pdf).

    Bedrock KB Setup. When setting up Bedrock KB, you can select an embedding model. We'll use Cohere's Embed Multilingual v3 model, which supports Korean, and Amazon OpenSearch Serverless as the vector database.
    Navigate to the Bedrock console, go to Knowledge Base under the Orchestration tab, and create a knowledge base. Set the knowledge base name, then set the data source name and the S3 location; we'll specify the previously created bucket. For the embedding model, we'll use Cohere's Embed Multilingual v3 model, which supports Korean and performs well, and OpenSearch Serverless as the vector database for quick creation. Once everything is filled in, create the knowledge base; this takes several minutes. Once creation is complete, proceed with synchronization. The storage side of the RAG pattern is now in place.

    Llama3 Deployment (SageMaker JumpStart). Since Meta is a SageMaker model provider, you can deploy its models with just a few clicks using the JumpStart feature. We'll use the Llama 3 8B Instruct model on g5.2xlarge; you can use models with more parameters, or Inferentia instance types, according to your situation. Access the SageMaker Studio environment, select the JumpStart feature, search for the model, navigate to Meta-Llama-3-8B-Instruct, select deployment, configure the deployment settings, and proceed with deployment.

    Inference through LangChain. Now let's integrate with LangChain and run QnA against the resources we created. First, import the necessary libraries. Since LangChain supports SageMaker, you can easily import the resources:

    from langchain_community.llms.sagemaker_endpoint import SagemakerEndpoint
    from langchain_community.llms.sagemaker_endpoint import LLMContentHandler
    from langchain_community.retrievers import AmazonKnowledgeBasesRetriever
    from langchain_core.prompts import PromptTemplate
    from langchain.chains import RetrievalQA
    from typing import Dict
    import json

    Set the previously configured endpoint name and region.
    The AWS-related config must be set; we enter the region explicitly because we're using the Virginia region instead of the default:

    endpoint_name = "hmkim-llama3"
    region_name = "us-east-1"

    Set up a handler that configures the format and content type of input/output data when communicating with the SageMaker endpoint:

    class CustomContentHandler(LLMContentHandler):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
            input_str = json.dumps({"inputs": prompt, **model_kwargs})
            return input_str.encode('utf-8')

        def transform_output(self, output: bytes) -> str:
            response_json = json.loads(output.read().decode('utf-8'))
            return response_json["generated_text"]

    The difference from Llama 2 is that the response is no longer wrapped in an array, so the index lookup must be removed.

    Set up the SageMaker endpoint and configure the model, using the handler defined above:

    llm = SagemakerEndpoint(
        endpoint_name=endpoint_name,
        region_name=region_name,
        model_kwargs={"parameters": {
            "max_new_tokens": 1024,
            "top_p": 0.9,
            "temperature": 0.1,
            "stop": "<|eot_id|>"
        }},
        content_handler=CustomContentHandler(),
    )

    Declare the retriever. We'll set the created Bedrock Knowledge Base as the retriever, using its ID:

    retriever = AmazonKnowledgeBasesRetriever(
        knowledge_base_id="enter your Bedrock KB ID here",
        retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
        region_name="us-east-1"
    )

    Set a question and pass it to the retriever to test whether it retrieves appropriate data:

    question = "길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고싶은데, 가능할까?"
    query = question
    retriever.get_relevant_documents(query=query)

    You can confirm that it retrieves documents like the following, and they are appropriate: [Document(page_content='<개정 1995. 12. 29.> 제308조(사자의 명예훼손) 공연히 허위의 사실을 적시하여 사자의 명예를 훼손한 자 는 2년 이하의 징역이나 금고 또는 500만 원 이하의 벌금에 처한다. <개정 1995. 12.
29.> 제309조(출판물 등에 의한 명예훼손) ① 사람을 비방할 목적으로 신문, 잡지 또는 라디오 기타 출판물에 의하여 제307조제1 항의 죄를 범한 자는 3년 이하의 징역이나 금고 또는 700만원 이하의 벌금에 처한다. <개정 1995. 12. 29.> ② 제1항의 방법으로 제307조제2항의 죄를 범한 자는 7년 이하의 징역, 10년 이 하의 자격정지 또는 1천500만원 이하의 벌 금에 처한다. <개정 1995. 12. 29.> 제310조(위법성의 조각) 제307조제1항의 행위가 진실한 사실로서 오로지 공공의 이 익에 관한 때에는 처벌하지 아니한다. 제311조(모욕) 공연히 사람을 모욕한 자는 1년 이하의 징역이나 금고 또는 200만원 이 하의 벌금에 처한다. <개정 1995. 12. 29.> 제312조(고소와 피해자의 의사) ① 제308 조와 제311조의 죄는 고소가 있어야 공소 를 제기할 수 있다. <개정 1995. 12. 29.> ② 제307조와 제309조의 죄는 피해자의 명시한 의사에 반하여 공소를 제기할 수 없 다.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.58440983}), Document(page_content='이 경우 검사는 송부받은 날부터 90일 이내에 사법경찰관에게 반환하 여야 한다. [본조신설 2020. 2. 4.] 제245조의6(고소인 등에 대한 송부통지) 사법경찰관은 제245조의5제2호의 경우에 는 그 송부한 날부터 7일 이내에 서면으로 고소인ᆞ고발인ᆞ피해자 또는 그 법정대리 인(피해자가 사망한 경우에는 그 배우자ᆞ 형사소송법 - 215 - 직계친족ᆞ형제자매를 포함한다)에게 사건 을 검사에게 송치하지 아니하는 취지와 그 이유를 통지하여야 한다. [본조신설 2020. 2. 4.] 제245조의7(고소인 등의 이의신청) ① 제 245조의6의 통지를 받은 사람은 해당 사법 경찰관의 소속 관서의 장에게 이의를 신청 할 수 있다. ② 사법경찰관은 제1항의 신청이 있는 때에는 지체 없이 검사에게 사건을 송치하 고 관계 서류와 증거물을 송부하여야 하며, 처리결과와 그 이유를 제1항의 신청인에게 통지하여야 한다. [본조신설 2020. 2. 4.] 제245조의8(재수사요청 등) ① 검사는 제 245조의5제2호의 경우에 사법경찰관이 사 건을 송치하지 아니한 것이 위법 또는 부당 한 때에는 그 이유를 문서로 명시하여 사법 경찰관에게 재수사를 요청할 수 있다. ② 사법경찰관은 제1항의 요청이 있는 때에는 사건을 재수사하여야 한다. [본조신설 2020. 2. 4.]', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.53611267}), Document(page_content='이 경우 피의자가 이의를 제기하였던 부분 은 읽을 수 있도록 남겨두어야 한다. <개정 2007. 6. 1.> ③ 피의자가 조서에 대하여 이의나 의견 이 없음을 진술한 때에는 피의자로 하여금 그 취지를 자필로 기재하게 하고 조서에 간 인한 후 기명날인 또는 서명하게 한다. <개 정 2007. 6. 1.> 제244조의2(피의자진술의 영상녹화) ① 피 의자의 진술은 영상녹화할 수 있다. 이 경 우 미리 영상녹화사실을 알려주어야 하며, 조사의 개시부터 종료까지의 전 과정 및 객 관적 정황을 영상녹화하여야 한다. ② 제1항에 따른 영상녹화가 완료된 때 에는 피의자 또는 변호인 앞에서 지체 없이 그 원본을 봉인하고 피의자로 하여금 기명 날인 또는 서명하게 하여야 한다. ③ 제2항의 경우에 피의자 또는 변호인 의 요구가 있는 때에는 영상녹화물을 재생 하여 시청하게 하여야 한다. 이 경우 그 내 용에 대하여 이의를 진술하는 때에는 그 취 지를 기재한 서면을 첨부하여야 한다. [본조신설 2007. 6. 1.] 
제244조의3(진술거부권 등의 고지) ① 검 사 또는 사법경찰관은 피의자를 신문하기 전에 다음 각 호의 사항을 알려주어야 한 다. 1. 일체의 진술을 하지 아니하거나 개개 의 질문에 대하여 진술을 하지 아니 할 수 있다는 것 2. 진술을 하지 아니하더라도 불이익을 받지 아니한다는 것 3.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.5332642}), Document(page_content='<개정 1995. 12. 29.> ③ 전2항의 청구에 응하지 아니한 때에는 그 공판조서를 유죄의 증거로 할 수 없다. 제56조(공판조서의 증명력) 공판기일의 소 송절차로서 공판조서에 기재된 것은 그 조 서만으로써 증명한다. 제56조의2(공판정에서의 속기·녹음 및 영 상녹화) ① 법원은 검사, 피고인 또는 변호 인의 신청이 있는 때에는 특별한 사정이 없 는 한 공판정에서의 심리의 전부 또는 일부 를 속기사로 하여금 속기하게 하거나 녹음 장치 또는 영상녹화장치를 사용하여 녹음 또는 영상녹화(녹음이 포함된 것을 말한다. 이하 같다)하여야 하며, 필요하다고 인정하 는 때에는 직권으로 이를 명할 수 있다. ② 법원은 속기록ᆞ녹음물 또는 영상녹화 물을 공판조서와 별도로 보관하여야 한다. ③ 검사, 피고인 또는 변호인은 비용을 부담하고 제2항에 따른 속기록ᆞ녹음물 또 는 영상녹화물의 사본을 청구할 수 있다. [전문개정 2007. 6. 1.] 제57조(공무원의 서류) ① 공무원이 작성 하는 서류에는 법률에 다른 규정이 없는 때 에는 작성 연월일과 소속공무소를 기재하고 기명날인 또는 서명하여야 한다. <개정 2007. 6. 1.> ② 서류에는 간인하거나 이에 준하는 조 치를 하여야 한다.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.52974534})] Set up the prompt. We assigned the role of a competent lawyer, asked to refer to documents, and set it to answer in Korean without emojis. When modifying the prompt template, be careful as it follows Llama3's prompt template: system_template = """You are a competent lawyer. Please answer the question using the documents provided. 
Always answer without emojis in Korean."""

prompt_template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_template}, 문서 : {context}<|eot_id|><|start_header_id|>user<|end_header_id|>
질문: {question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""

prompt = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "question"],
    partial_variables={"system_template": system_template}
)

Create a RetrievalQA chain using the template we just created:

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    return_source_documents=True,
    chain_type="stuff",
    chain_type_kwargs={"prompt": prompt}
)

When you run the request and check the response, you can see which documents were referenced and what the answer is: {'query': '길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고싶은데, 가능할까?', 'result': '제308조(사자의 명예훼손)에 따르면, 공연히 허위의 사실을 적시하여 사자의 명예를 훼손한 자는 2년 이하의 징역이나 금고 또는 500만 원 이하의 벌금에 처한다.\n\n이 경우, 길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고 싶은 경우, 다음을 고려해야 합니다.\n\n1. 욕설이 허위의 사실인지 확인해야 합니다. 욕설이 허위의 사실이 아니라면, 명예훼손죄가 적용되지 않을 수 있습니다.\n2. 욕설이 공연히 이루어졌는지 확인해야 합니다. 욕설이 공연히 이루어지지 않았다면, 명예훼손죄가 적용되지 않을 수 있습니다.\n3. 피해자의 명시한 의사에 반하여 공소를 제기할 수 없습니다. 피해자가 명시한 의사에 반하여 공소를 제기하면, 공소가 제기되지 않을 수 있습니다.\n\n따라서, 길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고 싶은 경우, 위의 고려 사항을 확인하고, 피해자의 명시한 의사에 반하여 공소를 제기하지 않도록 주의해야 합니다.', 'source_documents': [Document(page_content='<개정 1995. 12. 29.> 제308조(사자의 명예훼손) 공연히 허위의 사실을 적시하여 사자의 명예를 훼손한 자 는 2년 이하의 징역이나 금고 또는 500만 원 이하의 벌금에 처한다. <개정 1995. 12. 29.> 제309조(출판물 등에 의한 명예훼손) ① 사람을 비방할 목적으로 신문, 잡지 또는 라디오 기타 출판물에 의하여 제307조제1 항의 죄를 범한 자는 3년 이하의 징역이나 금고 또는 700만원 이하의 벌금에 처한다. <개정 1995. 12. 29.> ② 제1항의 방법으로 제307조제2항의 죄를 범한 자는 7년 이하의 징역, 10년 이 하의 자격정지 또는 1천500만원 이하의 벌 금에 처한다. <개정 1995. 12. 29.> 제310조(위법성의 조각) 제307조제1항의 행위가 진실한 사실로서 오로지 공공의 이 익에 관한 때에는 처벌하지 아니한다. 제311조(모욕) 공연히 사람을 모욕한 자는 1년 이하의 징역이나 금고 또는 200만원 이 하의 벌금에 처한다. <개정 1995. 12. 29.> 제312조(고소와 피해자의 의사) ① 제308 조와 제311조의 죄는 고소가 있어야 공소 를 제기할 수 있다. <개정 1995. 12.
29.> ② 제307조와 제309조의 죄는 피해자의 명시한 의사에 반하여 공소를 제기할 수 없 다.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.58440983}), Document(page_content='이 경우 검사는 송부받은 날부터 90일 이내에 사법경찰관에게 반환하 여야 한다. [본조신설 2020. 2. 4.] 제245조의6(고소인 등에 대한 송부통지) 사법경찰관은 제245조의5제2호의 경우에 는 그 송부한 날부터 7일 이내에 서면으로 고소인ᆞ고발인ᆞ피해자 또는 그 법정대리 인(피해자가 사망한 경우에는 그 배우자ᆞ 형사소송법 - 215 - 직계친족ᆞ형제자매를 포함한다)에게 사건 을 검사에게 송치하지 아니하는 취지와 그 이유를 통지하여야 한다. [본조신설 2020. 2. 4.] 제245조의7(고소인 등의 이의신청) ① 제 245조의6의 통지를 받은 사람은 해당 사법 경찰관의 소속 관서의 장에게 이의를 신청 할 수 있다. ② 사법경찰관은 제1항의 신청이 있는 때에는 지체 없이 검사에게 사건을 송치하 고 관계 서류와 증거물을 송부하여야 하며, 처리결과와 그 이유를 제1항의 신청인에게 통지하여야 한다. [본조신설 2020. 2. 4.] 제245조의8(재수사요청 등) ① 검사는 제 245조의5제2호의 경우에 사법경찰관이 사 건을 송치하지 아니한 것이 위법 또는 부당 한 때에는 그 이유를 문서로 명시하여 사법 경찰관에게 재수사를 요청할 수 있다. ② 사법경찰관은 제1항의 요청이 있는 때에는 사건을 재수사하여야 한다. [본조신설 2020. 2. 4.]', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.53611267}), Document(page_content='이 경우 피의자가 이의를 제기하였던 부분 은 읽을 수 있도록 남겨두어야 한다. <개정 2007. 6. 1.> ③ 피의자가 조서에 대하여 이의나 의견 이 없음을 진술한 때에는 피의자로 하여금 그 취지를 자필로 기재하게 하고 조서에 간 인한 후 기명날인 또는 서명하게 한다. <개 정 2007. 6. 1.> 제244조의2(피의자진술의 영상녹화) ① 피 의자의 진술은 영상녹화할 수 있다. 이 경 우 미리 영상녹화사실을 알려주어야 하며, 조사의 개시부터 종료까지의 전 과정 및 객 관적 정황을 영상녹화하여야 한다. ② 제1항에 따른 영상녹화가 완료된 때 에는 피의자 또는 변호인 앞에서 지체 없이 그 원본을 봉인하고 피의자로 하여금 기명 날인 또는 서명하게 하여야 한다. ③ 제2항의 경우에 피의자 또는 변호인 의 요구가 있는 때에는 영상녹화물을 재생 하여 시청하게 하여야 한다. 이 경우 그 내 용에 대하여 이의를 진술하는 때에는 그 취 지를 기재한 서면을 첨부하여야 한다. [본조신설 2007. 6. 1.] 제244조의3(진술거부권 등의 고지) ① 검 사 또는 사법경찰관은 피의자를 신문하기 전에 다음 각 호의 사항을 알려주어야 한 다. 1. 일체의 진술을 하지 아니하거나 개개 의 질문에 대하여 진술을 하지 아니 할 수 있다는 것 2. 진술을 하지 아니하더라도 불이익을 받지 아니한다는 것 3.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.5332642}), Document(page_content='<개정 1995. 12. 29.> ③ 전2항의 청구에 응하지 아니한 때에는 그 공판조서를 유죄의 증거로 할 수 없다. 제56조(공판조서의 증명력) 공판기일의 소 송절차로서 공판조서에 기재된 것은 그 조 서만으로써 증명한다. 
제56조의2(공판정에서의 속기·녹음 및 영 상녹화) ① 법원은 검사, 피고인 또는 변호 인의 신청이 있는 때에는 특별한 사정이 없 는 한 공판정에서의 심리의 전부 또는 일부 를 속기사로 하여금 속기하게 하거나 녹음 장치 또는 영상녹화장치를 사용하여 녹음 또는 영상녹화(녹음이 포함된 것을 말한다. 이하 같다)하여야 하며, 필요하다고 인정하 는 때에는 직권으로 이를 명할 수 있다. ② 법원은 속기록ᆞ녹음물 또는 영상녹화 물을 공판조서와 별도로 보관하여야 한다. ③ 검사, 피고인 또는 변호인은 비용을 부담하고 제2항에 따른 속기록ᆞ녹음물 또 는 영상녹화물의 사본을 청구할 수 있다. [전문개정 2007. 6. 1.] 제57조(공무원의 서류) ① 공무원이 작성 하는 서류에는 법률에 다른 규정이 없는 때 에는 작성 연월일과 소속공무소를 기재하고 기명날인 또는 서명하여야 한다. <개정 2007. 6. 1.> ② 서류에는 간인하거나 이에 준하는 조 치를 하여야 한다.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.52974534})]} Looking at the output answer, you can see very accurate results were produced: 제308조(사자의 명예훼손)에 따르면, 공연히 허위의 사실을 적시하여 사자의 명예를 훼손한 자는 2년 이하의 징역이나 금고 또는 500만 원 이하의 벌금에 처한다. 이 경우, 길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고 싶은 경우, 다음을 고려해야 합니다. 1. 욕설이 허위의 사실인지 확인해야 합니다. 욕설이 허위의 사실이 아니라면, 명예훼손죄가 적용되지 않을 수 있습니다. 2. 욕설이 공연히 이루어졌는지 확인해야 합니다. 욕설이 공연히 이루어지지 않았다면, 명예훼손죄가 적용되지 않을 수 있습니다. 3. 피해자의 명시한 의사에 반하여 공소를 제기할 수 없습니다. 피해자가 명시한 의사에 반하여 공소를 제기하면, 공소가 제기되지 않을 수 있습니다. 따라서, 길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고 싶은 경우, 위의 고려 사항을 확인하고, 피해자의 명시한 의사에 반하여 공소를 제기하지 않도록 주의해야 합니다. Today, we implemented the RAG pattern with minimal effort using the Llama3 model, Bedrock KB retriever, and LangChain, a framework that helps utilize these tools. Since no separate domain knowledge is required, it should be easy to utilize even without much specialized knowledge. This concludes our post.
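One practical footnote: the Llama3 chat template used above is easy to get subtly wrong. The sketch below reproduces the same template assembly in plain Python, with no LangChain or AWS access needed, so you can eyeball the final string before sending it to the model. The helper name `render_prompt` is ours for illustration, not part of the original code:

```python
# Minimal reproduction of the Llama3 chat template used in this post.
# render_prompt is an illustrative helper, not part of the original code.

SYSTEM_TEMPLATE = (
    "You are a competent lawyer. Please answer the question using the "
    "documents provided. Always answer without emojis in Korean."
)

PROMPT_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "{system_template}, 문서 : {context}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n"
    "질문: {question}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n"
)


def render_prompt(context: str, question: str) -> str:
    """Fill the Llama3 template the same way PromptTemplate would."""
    return PROMPT_TEMPLATE.format(
        system_template=SYSTEM_TEMPLATE, context=context, question=question
    )


if __name__ == "__main__":
    # Print the fully rendered prompt for a quick visual check.
    print(render_prompt("제311조(모욕) ...", "길에서 욕설을 들었는데 신고할 수 있나요?"))
```

Checking the rendered string like this makes it obvious whether every `<|eot_id|>` and header token is in place before you spend tokens on a real Bedrock call.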

  • Celebrating the General Availability of Amazon SageMaker Unified Studio with a Sneak Peek

    Celebrating the General Availability of Amazon SageMaker Unified Studio
with a Sneak Peek Written by Minhyeok Cha Introduction Amazon SageMaker Unified Studio became generally available in mid-March, so I thought I'd take a look at what it is and what it can do for you as a partner and as an AWS cloud user. Amazon SageMaker Unified Studio is a new IDE environment that integrates the AI/ML and data analytics services that exist within AWS. These services include Amazon Athena, Amazon EMR, AWS Glue, Amazon Redshift, Amazon Managed Workflows for Apache Airflow (Amazon MWAA), and the capabilities and tools of the various standalone “studios”, query editors, and visual tools found in the existing SageMaker Studio. We were able to locate and access all of the data in our AWS organization and provide a single development environment for our practitioners, which allowed us to minimize access control management and focus on AI application development. Additionally, Amazon Q Developer is included, which provides a chatbot interface like ChatGPT and accelerates tasks such as writing SQL queries, building ETL jobs, troubleshooting, and generating real-time code suggestions. Contents From Amazon SageMaker Unified Studio domains to project creation What else has been added since the general release? Wrapping up From Amazon SageMaker Unified Studio domains to project creation So let's create a domain directly from the console to see what features are available after the general release. It says to create a domain first, so let's go ahead and do that. First, the good news: with the general release, we can now create a domain in the Seoul region. When creating a domain, you need at least three Availability Zones to deploy to. You don't need to grant permissions for the Glue database yourself; they are granted automatically when you create the domain, and AWS Lake Formation registers it for you. When you're done, you'll be greeted by the studio as shown in the photo. In the center, click Create Project to create a space to work in.
    During creation, you'll see the following items, where you'll name each of the DBs you'll use in the project. Since we're just experimenting, we'll go with the default values. What else has been added since the general release? First up is Amazon Bedrock, where all the FM models are available for selection once access has been granted. I only have one model access request approved on my test account, which is a bit embarrassing. In addition to selecting FM models, one of the main features, building a knowledge base, is now possible in the studio itself. When sourcing documents from S3, local data, or web crawling within a project and creating a Knowledge Base, you can work with the FM model selection as above. Based on the Knowledge Base you created, you can create a canvas that looks like this. Connect each node to build your own Bedrock configuration. Next up is the Amazon Q Developer integration mentioned above. Now you can write query statements with Q. It's the free version, and it works well. Amazon Q Developer automatically adds a free subscription to Amazon SageMaker Unified Studio when you create it, so you don't need to touch it further. Here is the integration of the DBs mentioned when creating the project. SageMaker Lakehouse unifies data lakes, data warehouses, and data sources for easier management. Wrapping up We never properly covered the core features during the preview stage or at AWS re:Invent 2024. What we got from this official launch is one word: “Convenient.” AWS has so many services that the existing SageMaker was difficult to use, but with the release of SageMaker Unified Studio, simply building a domain automatically assigns permissions and integrates each DB for easy management. Of course, when importing an existing DB into SageMaker Unified Studio, it is still cumbersome to assign permissions to each service and register Lake Formation permissions for the data. However, for someone starting a project for the first time, I think this is enough to reduce the burden of access.

  • Manage Your Amazon S3 Objects with Amazon S3 Metadata!

    Manage Your Amazon S3 Objects with Amazon S3 Metadata! Written by Minhyeok Cha How do you manage your Amazon S3 objects? Do you search for them directly in the console? Use CLI or SDK? Or maybe you rely on Glue crawling? Recently, my company noticed that S3 costs were gradually piling up, so I started looking for ways to reduce them. Initially, I thought, "We can just move unused data to Glacier, and that’s it." However, managing the massive amount of data accumulated over about six years in a single bucket turned out to be a bit tricky. That’s when I noticed the "table bucket" feature and thought, “Why not give the relatively new S3 Metadata a try?” Fortunately, it worked out well, and I’d like to share my experience. Table of Contents What is Amazon S3 Metadata? What is AWS Lake Formation? Demo S3 Cost Optimization Strategy Conclusion What is Amazon S3 Metadata? (Source: AWS) You can find an introduction to Amazon S3 Metadata in an article I previously wrote, titled A Summary of Key Announcements from AWS re:Invent in 10 Minutes. In that article, I mentioned that S3 Metadata can be integrated with AWS Glue Data Catalog. However, in this post, I’ll explore using AWS Lake Formation instead. Initially, I planned to use AWS Glue’s crawling feature, but decided to experiment with the officially released table bucket and Amazon S3 Metadata, which came out earlier this year. What is AWS Lake Formation? So, what exactly is AWS Lake Formation? AWS Lake Formation simplifies and automates the complex and time-consuming tasks involved in building a data lake. These tasks include collecting, cleaning, moving, cataloging data, and ensuring secure access for analytics and machine learning. It also provides its own permission management model based on AWS Identity and Access Management (IAM). This centralized permission management model allows for fine-grained access control to the data lake through a simple grant/revoke mechanism.
Permissions in AWS Lake Formation can be applied at the table and column levels for all datasets in the data lake. Services integrated with this permission management include AWS Glue, Amazon Athena, Amazon Redshift Spectrum, and Amazon QuickSight. However, our primary goal is to access S3 objects for querying without crawling, so we’ll be using Lake Formation mainly as a connection pathway. Demo Since my company account has restricted permissions, this demo will be conducted using a test account. 💡 Table Buckets and Amazon S3 Metadata are only available in the Ohio and Northern Virginia regions. Step 1: Create an S3 Table Bucket Step 2: Generate Metadata for the S3 Bucket to Test That completes the connection between S3 and the table bucket. Step 3: Check with Amazon Athena However, if you try accessing Athena without cataloging, nothing will show up. In fact, you need to create a catalog through AWS Glue. Fortunately, a new feature in Lake Formation now allows for automatic alignment of S3 tables, making the setup process smoother. Step 4: Enable S3 Table Integration in AWS Lake Formation When integrating, make sure to specify a role with S3 access permissions. Once the integration is successful, the catalog will be displayed as shown below. Go into the catalog and proceed with policy settings. In the Permissions  section, click Grant  to continue. If you followed the steps correctly, go to Athena to check if the S3 data appears as expected. Step 5: Successful Amazon Athena Query! The data appeared without using AWS Glue, and the query executed successfully. S3 Cost Optimization Strategy The optimization process was straightforward. I created queries as shown below, downloaded the result as a CSV, and used the CLI to move the objects identified by the query to the Glacier storage class. S3 Lifecycle Management Following that, I configured S3 Lifecycle policies to automatically move data to Glacier over time. 
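To make that optimization step reproducible, here is a hedged sketch of it in Python rather than the raw CLI: it parses the downloaded Athena result CSV (assumed here to have a `key` column — adjust to your actual query's output) and re-copies each object onto itself with the GLACIER storage class. The bucket name and CSV layout are placeholders, not values from the demo:

```python
# Sketch: move objects listed in an Athena result CSV to the GLACIER
# storage class. The CSV layout ('key' column) and bucket name are
# assumptions for illustration; adapt them to your own query output.
import csv
import io


def keys_from_csv(csv_text: str, column: str = "key") -> list[str]:
    """Extract object keys from the downloaded Athena result CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row[column] for row in reader if row.get(column)]


def move_to_glacier(bucket: str, keys: list[str]) -> None:
    """Re-copy each object onto itself with StorageClass=GLACIER."""
    import boto3  # lazy import so the parsing helper stays dependency-free

    s3 = boto3.client("s3")
    for key in keys:
        s3.copy_object(
            Bucket=bucket,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
            StorageClass="GLACIER",
            MetadataDirective="COPY",
        )


if __name__ == "__main__":
    sample = "key,size\nlogs/2019/app.log,1024\nlogs/2020/app.log,2048\n"
    print(keys_from_csv(sample))  # parsing only; no AWS call is made
```

Note that `copy_object` only handles objects up to 5 GB; larger objects need a multipart copy, or the CLI's `aws s3 cp` with `--storage-class GLACIER`, and a lifecycle rule remains the better fit for ongoing transitions.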
Conclusion I decided to try out AWS’s new features and finally got around to it in March 2025. I had heard countless times about S3 cost optimization, but trying it out myself instead of relying on consulting felt quite refreshing. For those who haven’t managed their S3 buckets before, I think this new method is definitely worth considering. It’s simpler to use than setting up Glue, which I found particularly appealing. However, I did find AWS Lake Formation’s setup a bit tricky initially. Still, if you need to manage data in your buckets, it might be worth giving it a try. ✅ Note: Deleting S3 table buckets can only be done via CLI or SDK, so keep that in mind.
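Since that deletion note is easy to trip over, here is a hedged boto3 sketch of it. The `s3tables` client, its `delete_table_bucket(tableBucketARN=...)` call, and the ARN format reflect the S3 Tables API as we understand it — verify them against current AWS documentation, and note that a table bucket must be emptied of tables and namespaces before it can be deleted. The helper function and all names are ours:

```python
# Sketch: delete an S3 table bucket via the SDK (the console cannot do this).
# The ARN format and the boto3 's3tables' call are stated to the best of our
# knowledge; double-check both against the current AWS documentation.


def table_bucket_arn(region: str, account_id: str, bucket_name: str) -> str:
    """Build the ARN that S3 Tables API calls expect for a table bucket."""
    return f"arn:aws:s3tables:{region}:{account_id}:bucket/{bucket_name}"


def delete_table_bucket(arn: str) -> None:
    """Issue the actual deletion; requires credentials and an empty bucket."""
    import boto3  # lazy import so the ARN helper stays dependency-free

    client = boto3.client("s3tables")
    client.delete_table_bucket(tableBucketARN=arn)  # assumed parameter casing


if __name__ == "__main__":
    arn = table_bucket_arn("us-east-2", "123456789012", "my-table-bucket")
    print(arn)  # no AWS call is made here
```

The CLI equivalent would be `aws s3tables delete-table-bucket` with the same ARN; either way, delete the tables and namespaces inside first.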

  • AWS Case Study - Quantit

    Quantit: Pioneering a New Era in Financial Investment Through Cloud-Based Automation Context Need for Cloud Transition Challenges Before Implementation Meeting SmileShark Journey of Technological Innovation Implemented AWS Services Key Improvements SmileShark's Support Differentiated Technical Support Cost Optimization Results Preparing for a Greater Leap Quantit's Future Plans Detailed services Applied to Quantit Quantit's Architecture Quantit Inc. Quantit is an innovative startup that bridges finance and IT, offering a new dimension of investment automation. With a mission to make financial solutions more accessible to the general public, Quantit leverages data-driven technology and AI to streamline complex financial processes. Quantit's flagship platform, 'Finter', contributes to the innovation of financial services by designing and automating big data-based investment models. Additionally, Quantit operates 'Olly', a robo-advisory service that provides asset management solutions tailored for both individuals and institutions. Name Quantit Inc. Area AI-based Investment Design Platform Estb. January 2019 Site https://quantit.io Need for Cloud Transition Challenges Before Implementation Quantit makes financial services easy and convenient with financial-IT convergence: Click the image to go to the article Q. What prompted you to consider adopting the cloud? A. Initially, we were running our infrastructure using desktops in the office, but it was unstable and constantly encountering problems. The time constraints were a significant challenge, especially considering Quantit's focus on financial data. We needed computing resource management and automation to complete all model calculations by 9 AM when the market opened, as the previous day's data arrived at 6 AM. Meeting SmileShark Q. How did you find out about SmileShark? A. We were introduced to SmileShark through an AWS manager.
    While we had basic knowledge of the cloud and had completed the porting process, we had limited access to information about new AWS technologies and features. Q. What were the deciding factors in choosing SmileShark? A. No other MSP was as proactive in considering and addressing our requirements as SmileShark. In particular, our account manager went beyond simply signing a contract and engaged in in-depth communication to understand our mutual needs. The fact that SmileShark was also a startup was a major reason for our choice. We believed that a collaborative partnership, where both of us could grow together, would be a significant asset in the future. Journey of Technological Innovation Implemented AWS Services Tae-ho Kim, Director of Technology, being interviewed Q. What AWS services did you implement? A. We transitioned from an EC2-based on-premises approach to a container-based approach using ECS. This greatly improved the freedom of our development environment. Previously, we were limited by only being able to use the environment installed on EC2, but with the introduction of containers, modelers can now freely use their desired Python versions and libraries. Q. What changes resulted from implementing Step Functions? A. We automated model execution and pipelines through Step Functions. We were able to configure processes, such as reading information from the database, pushing images to ECR via CodeBuild, and preparing the appropriate environment based on the CPU/GPU model, in a graphical format. The biggest advantage is that we can now see and manage the flow of execution results at a glance. Key Improvements Customized development environment for each modeler through the container-based transition Automated data processing pipeline with the introduction of Step Functions Cost-efficient operation achieved by adjusting computing power based on time of day SmileShark's Support Differentiated Technical Support In-chul Shin, Researcher, being interviewed Q.
How was SmileShark's technical support? A. The technical support through the tech support system was very fast and efficient. Recently, we inquired about an issue that occurred when requesting m7i or c7i instances (on-demand or spot) in the Seoul region, and they provided a detailed explanation along with reference documents. We were particularly impressed by their meticulousness and willingness to thoroughly guide us, without overlooking any supplemental information that could be included in the inquiry. Q. What about AWS-related information or training support? A. SmileShark treats us not simply as customers to whom they provide services, but as partners with whom they grow together. They actively inform us of growth opportunities such as ECS training and AWS re:Invent participation, and also share how to utilize PoC credits, which has been very helpful in verifying new services. Cost Optimization Results Q. What were your cost savings? A. Operating a single server used to cost us around $315k ~ $420k per month, but after moving to the cloud, the unit changed to the $7 range. The flexibility of the cloud has been very helpful, especially given the nature of our services, which require different computing power depending on the time of day. We have been able to operate efficiently by scaling up resources when intensive computation is needed during the early morning hours, and scaling down during other times. SmileShark understood the characteristics of our services, such as concentrated computing processing in the early morning and different computing power needs at different times, and proposed an optimized cloud environment. - Kim Tae-ho, Technical Director at Quantit Preparing for a Greater Leap Quantit’s Future Plans Financial investment robo-advisor ' Olly ' : Click the image to go to Olly homepage Q. What are your future plans? A. 
Quantit is preparing to expand into Southeast Asia alongside the growth of the domestic IRP (Individual Retirement Pension) market. We anticipate a high demand for automated investment solutions in markets like Vietnam and Indonesia, where there is a shortage of financial experts. To support this, we plan to leverage AWS’s global regions to scale our services and are preparing region-specific data management in compliance with each country’s data protection regulations. Q. Lastly, do you have any advice for those considering SmileShark? A. If you’re a startup, I highly recommend them. Although we spend a significant amount on AWS cloud services, other MSPs treated us as a small client. SmileShark was different. If you’re familiar with AWS technology but need advice on optimization or new services, or if you’re looking for a partner to grow with, SmileShark will be an excellent choice. Detailed services Applied to Quantit Quantit's Architecture * Configuration diagram for illustration purposes only and does not represent the actual architecture. [Quantit Related Article] Innovative Financial Services Appointed by Quantit Investment Advisory to Provide Robo-Advisor Service for Retirement Plans [SBA Global] Quantit 'Building easy and convenient financial services through financial-IT convergence'

  • Business Post MSP Case Study

    About the Customer Business Post focuses on providing in-depth analysis and forecasts of economic phenomena and economic activities in the real economy and financial economy. Unlike general newspapers that focus on breaking news, our articles focus on digging into the background of events and accidents, pointing out key points, and predicting the future. Company : Business Post   Industry : Newspaper Publishing Establishment : 2013.11 Website : https://www.businesspost.co.kr Customer Challenges Need an AWS expert partner to achieve performance, security, high availability, and cost optimization based on the AWS Well-Architected Framework Requires a monitoring solution to check the usage of each service Requires redundancy and security services for high security and availability Needs a service that blocks aggressive traffic Requires database migration from on-premise MySQL to RDS MySQL Aurora Proposed Solution & Architecture SmileShark provides a self-developed monitoring tool, "Shark-Mon," ensuring 24/7 operation of applications and services on behalf of the customer. "Shark-Mon" supports protocol monitoring (HTTP, TCP, SSH, DNS, ICMP, gRPC, TLS), AWS resource overview, and Kubernetes monitoring, essential for modern cloud operations. SmileShark also set up CloudWatch to collect and visualize real-time logs, metrics, and event data. SmileShark operates the customer’s services reliably according to SmileShark's devised cloud standard operating procedures (monitoring/failure/change/SLA) and operation service guide.
    Configured CloudWatch to collect and visualize real-time logs, metrics, and event data SmileShark provided guidance to the customer on enabling Multi-Factor Authentication (MFA) to enhance account security SmileShark introduced and configured AWS WAF to block aggressive traffic SmileShark configured RDS MySQL Aurora to modernize the customer database SmileShark configured Auto Scaling on behalf of the customer to respond to increased traffic SmileShark built a Multi-AZ architecture for high availability Outcomes of Project & Success Metrics Reduced average time to discover infrastructure failures through CloudWatch Protected account resources from abuse by enabling MFA Reduced the number of self-managed databases by migrating to RDS Continuous service during failures with the Multi-AZ configuration ※All contents provided by SmileShark are protected by related laws. Civil and criminal responsibilities may follow if you copy, distribute, sell, display, or modify SmileShark content without prior permission. If you have any questions regarding the use of content, please contact us by phone (☎: 0507-1485-2028) or email ( contact@smileshark.kr ).

  • AWS Case Study - TRIBONS

    How did TRIBONS provide uninterrupted shopping mall services to their customers? SmileShark's CloudOps Service Context Anomalous Service Failures in a Shopping Mall System Challenges Why TRIBONS Chose SmileShark Stabilizing the infrastructure and a successful digital transformation As a collaborative partner, not just a request and responder AWS Cost and Operations Optimization Consulting Building an Enhanced Security and Gifting System TRIBONS' Future Plan Detailed Services Applied to TRIBONS What is SmileShark's CloudOps? TRIBONS Architecture What is Shark-Mon? TRIBONS Inc. As an affiliate of LF (formerly LG Fashion), TRIBONS owns famous brands such as DAKS SHIRTS, the industry leader in men's shirts, as well as Notig, Bobcat, and Benovero. TRIBONS is also successfully operating FOMEL CAMELE, a fashion miscellaneous goods brand targeting women in their twenties and thirties. TRIBONS also has a strong presence in children's apparel, and through its "PastelMall" subsidiary, TRIBONS offers premium children's apparel brands such as Daks kids, Hazzys kids, PETIT BATEAU, BonTon and K.I.D.S. These brands are available in Korea's major department stores, and are also available online through Pastel Mall. TRIBONS is constantly striving to provide the customers with quality products. Name TRIBONS Inc. Area Shirt and blouse manufacturing Estab.   Jan, 2008 Site https://www.pastelmall.com/ Anomalous Service Failures in a Shopping Mall System Challenges SmileShark  When did the need for SmileShark come up in TRIBONS , and what were the challenges at the time? Hyunsoo Jang   We had previously been using an AWS cloud environment through a different partner. However, in 2022, we began to experience difficulties running its shopping mall in the cloud. As the number of customers increased, we were facing anomalous service failures. We were also considering expanding additional services due to system development. 
SmileShark  You mentioned that TRIBONS experienced some unusual service failures, can you tell us what it was? Hyunsoo Jang  Certain events, such as the real-time live commerce 'Parabang', were only exposed on our own mall, but sometimes we had to broadcast simultaneously on other live commerce platforms. In such cases, the difference from the usual inflow was about 10 times. In addition to this inflow, we also received customers through advertising marketing such as marketing texts and KakaoTalk Plus friends, and we could see that the inflow increased by about 5 times compared to the usual inflow. Therefore, we aimed for a more stable service. Interviewing with Hyunsoo Jang, TRIBONS online platform team leader Why TRIBONS Chose SmileShark SmileShark  Why did you choose SmileShark's CloudOps service? Hyunsoo Jang  To solve the problems we were facing, we needed a partner that could be agile and flexible, and we found SmileShark through a referral. Being recognized as an AWS Rising Star of the Year, meeting with SmileShark's CEO and engineers built trust, it convinced us that they could empathize with our problem and promise to support us. SmileShark What did you find frustrating about your previous partner? Hyunsoo Jang  As mentioned above, we were facing various issues during the operation of the shopping mall system, and there were many complicated parts that had not been explained well, so we were very disappointed with the previous partner's service provision. Changing server settings in AWS was not easy due to the absence of internal manpower, and communication was also difficult due to the difference in work areas between developers and system engineers. Therefore, the most anticipated aspect of the new partner introduction was smooth communication and proactive measures. 
When we used previous partners' services, issues were not shared, which led to confusion due to server reboots, checks, and policy changes during business hours, and there were many unnecessary procedures to respond to issues, so it was important to us to see if we could improve this. "TRIBONS went from having 4 ~ 5 times service outages per quarter to none with SmileShark." - Hyunsoo Jang, TRIBONS online platform team leader Stabilizing the infrastructure and a successful digital transformation As a collaborative partner, not just a request and responder SmileShark  We've heard that TRIBONS ' infrastructure issues have been dramatically stabilized since implementing SmileShark’s CloudOps, but what's it really like? Hyunsoo Jang In the year or so since we have been with SmileShark, we have seen a lot of improvements. We have been able to connect the system issue alerts to the collaboration solutions we use so we can respond to issues quickly. From time to time, AWS would send out an announcement saying, "There's an issue with a service or a region, and you may experience downtime." The emails are sent to our contacts within TRIBONS , but they are also sent to our MSP. It would be nice if the MSP partners we work with could share this with us when we miss something like this, but unfortunately this little detail hasn't been done before with the previous partner. The shopping mall was supposed to be an uninterrupted system, but we were often getting server error pages (503). SmileShark has provided us with AWS announcements months in advance so that we can plan ahead and say, "We need to address these issues around this time." It also sends out urgent announcements in the middle of the day so that we don't miss any issues.  TRIBONS doesn't have any outages now, which we used to have four to five per quarter before SmileShark. 
SmileShark's Announcement Emails SmileShark  What do you think makes SmileShark's CloudOps service different from other previous monitoring and operations support and MSPs? Hyunsoo Jang  When an issue arises, they analyze the cause of the problem and explain it in detail in an email, and then again on the phone, so I know exactly what the issue is, and they share their technical opinions and areas for improvement, which is very helpful. Furthermore, in the event of a failure, we are notified within one minute on average and receive prompt feedback from the person in charge, and we communicate in real time through a separate communication channel. As a result, we were able to successfully obtain the certification mark just one year after the start of the ISMS certification audit project. SmileShark CloudOps' Troubleshooting Process SmileShark How did SmileShark help TRIBONS with the ISMS certification audit? Hyunsoo Jang  During the ISMS audit, there was a part of the architecture that needed to be changed. SmileShark told us that it was a security violation to have the private and development areas in the same zone, so we had to separate them. We discussed this closely with Hosang Kwak, CloudOps team lead of SmileShark and proceeded with as little disruption to the shopping mall as possible. In fact, even when we changed the architecture structure, the shopping mall service was not interrupted and the system operated stably. When I asked how to configure the application servers such as tomcat, which are in EC2 in addition to the AWS structure, he promptly responded and took practical measures. SmileShark  In addition to running a stable infrastructure, we've heard that communication between developers has improved. Hyunsoo Jang Yes, organizations without system engineer positions end up lacking knowledge such as log analysis and server settings for each server. Communication with MSP partners was also a challenge due to the lack of communication between the teams. 
This was always a big concern for me due to the different job background, but I think SmileShark was the only one that worked out well in terms of communication.  AWS Cost and Operations Optimization Consulting SmileShark So, how was SmileShark's AWS consulting experience? Hyunsoo Jang  We had a cost issue with the CDN service we were using, and we thought that the fees charged due to the contract were excessive, so we were considering other CDN services, and we consulted with SmileShark about the CloudFront (CDN) service provided by AWS, which can be used at a reasonable price without a contract. We confirmed the cost-effective part of the service and are considering switching to it this year. Also, we were having frequent issues with the software configuration management server, so we consulted with SmileShark about AWS software configuration management service. I told them that I would like to be able to deploy or build servers automatically, and SmileShark told me that AWS has a structure that can automate the software configuration management. I thought that this would reduce the risk of manpower and server stabilization. However, the software configuration management server can be critical, so we are still considering it. Consulting with SmileShark helped us make the decision because we were able to put our situation into perspective. SmileShark's Monitoring Service, Shark-Mon SmileShark Thank you. Do you have any comments that might be helpful to any customer considering SmileShark? Hyunsoo Jang  I would highly recommend SmileShark's CloudOps service to any company or team that doesn't yet have an expert in the field of systems engineering, as SmileShark provides personalized support. SmileShark also helps build, manage, and optimize cloud infrastructure, making it especially useful for teams that don't have the knowledge or manpower to manage cloud in-house. 
I would recommend SmileShark as the best AWS partner for building infrastructure, not only for the technical help but also because SmileShark provides guidance on optimizing costs and increasing operational efficiency. Beyond the numbers, something else I've noticed a lot lately is the trust SmileShark shows in its work. SmileShark is always consistent in its guidance and proactive in its solutions, and that matters a great deal to me as a service provider. At a time when we felt overwhelmed by the complexity of the AWS environment, SmileShark reached out to us and put us at ease, just like seeing a lighthouse in a storm.

"At a time when we felt overwhelmed by the complexity of the AWS environment, SmileShark reached out to us and made us feel comfortable, just like seeing a lighthouse in the storm."
- Hyunsoo Jang, TRIBONS online platform team leader

Building an Enhanced Security and Gifting System

TRIBONS' Future Plan

It has been four years since Pastel Mall (our shopping mall) launched, and the influx of customers has allowed the service to grow functionally. While we previously focused on improving the service level, this year we are focusing on server hardening and security to keep the system stable. To that end, we are aiming to obtain the enhanced ISMS-P certification.

Pastel Mall Mobile Gifting (Link)

SmileShark
Can you tell us about Gifting, the new service TRIBONS recently launched?

Hyunsoo Jang
The Pastel Mall Gifting Service is now open: a mobile-only service that lets customers send DAKS shirts and other Pastel Mall products to their loved ones. Gifts can be sent from existing Pastel Mall customers to non-members, any customer can browse a variety of themed products in the dedicated gift shop, and every gift can include a message card with a small sentiment. We hope you enjoy it.
Detailed Services Applied to TRIBONS

What is SmileShark's CloudOps?

SmileShark
Which of SmileShark's CloudOps services did TRIBONS adopt, and what was the collaboration process like?

Hosang Kwak, CloudOps team lead at SmileShark
CloudOps doesn't just alert customers when something goes wrong with their system; it also analyzes the problem. It's important for us to analyze, find solutions, and provide them to our customers so they can improve their systems and prevent the same problems from happening again. CloudOps is a collaborative MSP service that doesn't solve every problem at once, but works with customers to solve them and grow together. Hyunsoo Jang, TRIBONS online platform team leader, also has a good understanding of CloudOps, so he authorized us to run various tests over time. And when we suggested a solution, he agreed to give it a try; in return, we are still working well together with the common goal of uninterrupted service.

TRIBONS Architecture
*Configuration diagram for illustration purposes only; it does not represent the actual architecture.

What is Shark-Mon?

Shark-Mon is a monitoring tool that keeps applications and services operating around the clock without interruption, rather than relying on humans to watch them the legacy way. Developed in-house by SmileShark, Shark-Mon provides the functions needed for cloud operations, including basic protocol monitoring (HTTP, TCP, SSH, DNS, ICMP, gRPC, TLS), an AWS resource usage view, and Kubernetes monitoring, which is emerging as a global trend. It is currently in closed beta for select customers.
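To give a feel for what "protocol monitoring" means in practice, here is a minimal sketch of two such checks — a TCP reachability probe and an HTTP status classifier. This is illustrative only and is not Shark-Mon's code; the function names and the healthy/degraded/down verdicts are our own assumptions.

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unreachable -- all count as a failed check.
        return False

def classify_http_status(status: int) -> str:
    """Map an HTTP response status code to a simple health verdict."""
    if 200 <= status < 400:
        return "healthy"     # success and redirects
    if 400 <= status < 500:
        return "degraded"    # server up, but the request path is broken
    return "down"            # 5xx or anything unexpected
```

A real monitor would run such probes on a schedule per endpoint and raise an alert (as in the one-minute notification described above) when a check fails repeatedly.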

  • AWS Case Study - Opensurvey

How did Opensurvey reach 1 million+ survey responses collected per month through AWS migration?

Opensurvey

'The new future of data': Opensurvey redefines the value of surveys with IT technology and opens the era of customer experience management. With its research products, Opensurvey collects data easily and analyzes customer thoughts and behaviors in detail, helping companies make the right decisions based on data. Opensurvey is also positioning itself as a business partner that supports companies' continuous growth by managing the survey experience of customers, users, and employees.

Name: Opensurvey | Area: Customer data business | Estab.: Feb 2011 | Site: https://www.opensurvey.co.kr/

Difficulties in IDC and Cloud Environments

Challenges

Opensurvey frequently ran into hardware problems while using an IDC and a local cloud. There was also the hassle of merging data distributed across the two sites, and complex VM-based processes. So Opensurvey decided to adopt Kubernetes to fix these problems and considered migrating from the IDC to a cloud environment.

Park Hyunmin, Opensurvey backend chapter lead (Interview)

Why SmileShark?

Opensurvey used an IDC and a local cloud at the same time to protect against data loss, but encountered various operational difficulties with both. As a result, they needed to migrate their services to AWS for flexible resource scaling and cost optimization, to provide better services for their customers. The key question, however, was how to migrate the more than 1 million survey responses collected and analyzed every month without losing any data. So they chose to migrate safely to AWS with technical support from SmileShark, an AWS Premier consulting partner specializing in migration.

"SmileShark is a great partner to start AWS with, especially for those who have used an IDC or a different cloud service platform."
- Park Hyunmin, Opensurvey backend chapter lead

Adopting Kubernetes to Increase Organizational Flexibility

Increased development flexibility

Opensurvey leverages various AWS services to increase reliability. In particular, by migrating workloads to Kubernetes, Opensurvey was able to scale up and down flexibly and simplified its deployment process, which improved developer productivity and made efficient use of resources. This allowed Opensurvey to handle unexpected traffic stably, even when multiple enterprise customers collected large amounts of data in a short period. And because Opensurvey runs on Kubernetes, Spot instances are easy to apply: when developing new services, Spot instances cut instance costs by more than 70%, so teams use them without hesitation and new development moves faster.

Increased business flexibility

Opensurvey successfully migrated from its IDC to AWS through AWS Premier Partner SmileShark. The partner also recommended instance types suited to the platform's characteristics, which made it possible to respond to issues more flexibly than in the existing IDC. This increased flexibility lets Opensurvey accommodate its enterprise customers' requirements, enabling business and industry expansion. Park Hyunmin, Opensurvey backend chapter lead, explains, "As Opensurvey grows, it has become difficult to operate the service stably in the existing on-premises environment, so we adopted AWS services for greater flexibility." Park adds, "We also use AWS managed services to minimize the developers' operational load and run the service reliably."

Opensurvey's Services image

Transition from a Consumer Data Platform to an Experience Management Platform

Opensurvey Next Step

Opensurvey is opening a new future of data by connecting companies with consumers, users, and customers based on survey data.
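The Spot-instance setup described above maps onto EKS managed node groups with a `SPOT` capacity type. The sketch below builds the kind of parameter set that the AWS API's `CreateNodegroup` call accepts; all concrete values (cluster name, instance types, sizes) are hypothetical and are not Opensurvey's actual configuration.

```python
def spot_nodegroup_params(cluster: str, node_role_arn: str, subnets: list[str]) -> dict:
    """Build example parameters for an EKS managed node group backed by Spot capacity."""
    return {
        "clusterName": cluster,
        "nodegroupName": f"{cluster}-spot-workers",
        # SPOT nodes are billed at Spot prices -- the source of the ~70% savings cited.
        "capacityType": "SPOT",
        # Listing several interchangeable types improves the odds of Spot availability.
        "instanceTypes": ["m5.large", "m5a.large", "m5d.large"],
        "scalingConfig": {"minSize": 1, "maxSize": 10, "desiredSize": 2},
        "subnets": subnets,
        "nodeRole": node_role_arn,
    }
```

In practice these parameters would be passed to an EKS client (e.g. `boto3`'s `eks.create_nodegroup(**params)`); stateless or interruptible workloads are the natural fit, since Spot nodes can be reclaimed with short notice.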
Since the launch of 'Opensurvey' in 2011, they have developed survey-based data collection and analysis products, and in 2022 they launched Feedback.io to extend the product so that not only customer and user experience but also employee experience can be accumulated and managed through survey-based data. With the need for continuous user data collection and experience management growing, especially at digital companies, they plan to keep developing data collection and analysis products as a business partner for mutual growth.

Customer Experience Management Service 'Feedback.io'

Used AWS Services
Amazon Elastic Kubernetes Service (EKS)
Amazon Elastic Container Registry (ECR)
Amazon Relational Database Service (RDS)
Amazon Aurora MySQL

Introduced SmileShark Services
SmileShark BuildUp | Accurate infra suggestions / rapid deployment support
SmileShark Migration | SmileShark guides you through the entire migration to AWS
SmileShark Tech Support | Get expert guidance and assistance achieving your objectives

  • AWS Case Study - INUC

How did INUC leverage AWS to cut development time by over 35% and quickly deploy SaaS?

INUC Inc.

INUC is a B2B media platform software development company specializing in video content management system (CMS) services. INUC's video CMS solution features live scheduling, VOD archiving, menu organization and management, and web interfaces for each type of content that media managers need. INUC provides various editions (templates) for video meeting minutes, in-house broadcasting, and live commerce, so media managers can simply select the screen they want. Media managers can also choose the appropriate license (Basic/Standard/Enterprise) and cloud service according to each customer's system policy and service scale.

Name: INUC Inc. | Area: Software development and supply | Estab.: Nov 2010 | Site: https://sedn.software/

Migration of a B2B On-premises Solution to the Cloud

Challenges

INUC had been providing on-premises solutions; however, with changes in the market and growing customer demand, the need for cloud adoption became apparent. As INUC's potential customer base expands from the public sector and enterprises to healthcare and commerce, demand for cloud services in the form of SaaS is growing, and INUC expects this to create opportunities for global expansion. In addition, all of INUC's media services were built on Docker, consisting of containers for the API, streaming server, chat, web, storage, and more, so the migration to a SaaS model was relatively easy: the cloud-ready environment was already in place.

INUC Inc.

Why SmileShark?

SmileShark's wide experience and expertise were key attractions. In particular, SmileShark's solutions and suggestions during meetings helped INUC make quick decisions. INUC expected that SmileShark's experience with various Kubernetes deployments and container operations would help it reach its goal of transforming its CMS service into SaaS within a tight time frame.
In fact, SmileShark's prompt technical support helped INUC migrate smoothly to the cloud.

"I would recommend SmileShark to startups or companies looking to migrate from on-premises solutions to SaaS"
- Jason Shin, INUC CEO

"Due to the nature of media services, we believe a hybrid setup that operates both the existing on-premises servers and the cloud is reasonable from a TCO (total cost of ownership) perspective. The cloud-based B2B SaaS model can be thought of as a content store operating an independent brand," explained Jason Shin, CEO of INUC Inc.

Safe and Swift Migration by Adopting ECS

INUC's SaaS Service - AWS architecture

INUC reliably adopted the Amazon Web Services (AWS) cloud through SmileShark, gaining flexibility and scalability beyond the traditional on-premises model. During the AWS architecture design, Elastic Load Balancers (ELBs) and multiple Availability Zones were employed to enhance business continuity and customer satisfaction. Network traffic arriving at the ELB is automatically distributed across multiple servers, preventing the load from concentrating on one server and ensuring that a problem on one server does not affect the entire service. By distributing the infrastructure across two or more Availability Zones, INUC can continue operating without service interruption even if a problem occurs in one Availability Zone. To mitigate data leakage and security risks, INUC organized its infrastructure into public and private subnets, placing critical data and systems in the private subnets, shielded from external threats. This approach has bolstered customer satisfaction and protects INUC's brand value in the long run. INUC adopted Amazon ECS (Elastic Container Service) to simplify and streamline deploying, managing, and scaling its Docker container-based applications.
ECS significantly shortened time to market by streamlining application deployment and management, letting developers concentrate on building a higher-quality service. To keep the service consistent during traffic spikes, INUC implemented an Auto Scaling group that manages resources dynamically based on usage. INUC also set the ECS service type to Replica, which keeps a specified number of tasks running continuously, ensuring the tasks' scalability and resilience, and configured it to adjust automatically to workload demand. Managed services such as ElastiCache, Aurora, and S3 have helped INUC reduce hardware and software maintenance costs, allowing the team to focus more on core business activities. INUC established a fast and efficient development process on AWS: supported by AWS and SmileShark, developers quickly acquired new skills and built cloud-optimized solutions, significantly accelerating INUC's technological innovation.

Upcoming Development of Intelligent Services Based on STT

INUC Next Step

(Left) ITS Screen (Right) Scheduling

INUC is currently improving SEDN v2, including its communication features, and incorporating AI applications based on deep-learning algorithms into the cloud. Upcoming intelligent services include STT (Speech-to-Text)-based video scene analysis, timestamp extraction, highlight generation, and video keyword search.

INUC's Media Package Solution SEDN Beta Service (Link)

INUC is improving its media user experience (MX) and aims to create business opportunities with more content IP operators and strengthen its global market presence.

※ Click the image above to sign up for the SEDN beta service.
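The Replica scheduling setup described above can be sketched as the parameter set an ECS `CreateService` call would take. This is a hypothetical illustration, not INUC's actual configuration: the service name, task definition, desired count, and container details are all invented for the example.

```python
def replica_service_params(cluster: str, task_definition: str, target_group_arn: str) -> dict:
    """Build example parameters for an ECS service using the REPLICA scheduling strategy."""
    return {
        "cluster": cluster,
        "serviceName": "sedn-web",          # hypothetical service name
        "taskDefinition": task_definition,  # e.g. "sedn-web:7"
        # REPLICA tells ECS to keep exactly desiredCount tasks running,
        # replacing any task that stops -- the resilience described above.
        "desiredCount": 3,
        "schedulingStrategy": "REPLICA",
        # Register tasks with the load balancer so traffic is spread across them.
        "loadBalancers": [{
            "targetGroupArn": target_group_arn,
            "containerName": "web",
            "containerPort": 80,
        }],
    }
```

These parameters would be passed to an ECS client (e.g. `boto3`'s `ecs.create_service(**params)`); separate Application Auto Scaling policies can then raise or lower `desiredCount` with load, matching the traffic-spike behavior described above.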
Used AWS Services
Amazon Elastic Container Service (ECS)
Amazon Simple Storage Service (S3)
Amazon ElastiCache
Amazon Aurora

Introduced SmileShark Services
SmileShark BuildUp | Accurate infra suggestions / rapid deployment support
SmileShark Migration | SmileShark guides you through the entire migration to AWS
SmileShark Tech Support | Get expert guidance and assistance achieving your objectives
