
Search Results


  • AI Makes Daily Publishing Possible | MK AX

AI Automation finally allowed us to focus on the work that truly requires human judgment. - Yoo Young-hoon, Head of AI TF, MK AX

[ 💡 Key Results ]
1. Daily content publishing - Reduced an expected 2-3 day podcast workflow to an automated 7 AM daily release
2. Significant resource savings - Multi-step manual editing reduced to final review only
3. Launched Korea's first conversational AI news podcast by a media company

About MK AX
Company   MK AX Co., Ltd.
Business   News・Internet financial information services
Founded   October 1999
Website   https://www.mk.co.kr
MK AX is a digital innovation subsidiary of Maeil Business Newspaper, Korea's leading economic newspaper. Leveraging Maeil Business Media's extensive news, data, and broadcast assets, MK AX delivers Korea's leading economic information services. Recently, Maeil Business Media established an AI Task Force to accelerate its AI Transformation (AX) strategy. The MAI Morning Briefing project became the first major outcome of this company-wide initiative.

Evolving Reader Habits, and a New Way to Deliver News
MK AX had long focused on one central question: "How can we deliver news to readers more effectively?" As consumption formats diversified — text, video, audio — publishers needed new approaches. The audio market grew rapidly in the 2020s as listeners preferred formats that enabled multitasking. MK AX had already operated an audio service summarizing key news highlights. However, as the AI TF Lead recalls: "The entire workflow depended on manual effort — writing scripts, editing broadcast clips, or outsourcing production. Every path required significant resources." Outsourced conversational podcasts required 2-3 days from planning to recording and editing. The team realized a new approach was needed.

Yoo Young-hoon, Head of the AI TF at MK AX, during an interview with SmileShark

The Turning Point: AI Transformation Begins
In 2024, a major shift occurred — MK AX launched its company-wide AI TF. Among various ideas, one stood out: "What if we built a fully AI-generated, conversational podcast?" Around that time, MK AX was introduced to AWS partner SmileShark, and the teams began a joint Proof of Concept (PoC). SmileShark's technical expertise and AWS Cloud infrastructure significantly accelerated development, producing results that exceeded initial expectations.

Preventing hallucination was our biggest technical challenge
Q. How did you ensure news accuracy?
A. Preventing hallucination was the top priority. Accuracy is fundamental in news, and any AI-generated content must not introduce information not present in the article. To address this, the team designed a three-stage validation workflow separating fact extraction and style transformation.
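The article does not disclose how the three-stage workflow is implemented, so the following is only a minimal sketch of the idea it describes: extract facts first, apply style transformation second, and validate grounding last. The `invoke_llm` helper, the prompts, and the yes/no grounding check are hypothetical placeholders, not MK AX's actual pipeline.

```python
# Hypothetical sketch of a three-stage "extract facts -> rewrite style -> validate" pipeline.
# `invoke_llm` stands in for whatever model endpoint the production system actually calls.
from typing import List

def invoke_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted LLM endpoint (assumption, not a real API)."""
    raise NotImplementedError

def extract_facts(article_text: str) -> List[str]:
    # Stage 1: pull out only statements supported by the article itself.
    response = invoke_llm(
        "List the factual statements in this article. Use only information "
        "present in the text, one fact per line:\n\n" + article_text
    )
    return [line.strip() for line in response.splitlines() if line.strip()]

def to_dialogue(facts: List[str]) -> str:
    # Stage 2: style transformation only — turn the approved facts into a two-host script.
    return invoke_llm(
        "Rewrite these facts as a natural two-person news dialogue. "
        "Do not add any information that is not in the list:\n" + "\n".join(facts)
    )

def grounded(script: str, article_text: str) -> bool:
    # Stage 3: validation — flag any claim in the script that the article does not support.
    verdict = invoke_llm(
        "Answer YES or NO: does this script contain any claim not supported "
        f"by the article?\n\nARTICLE:\n{article_text}\n\nSCRIPT:\n{script}"
    )
    return verdict.strip().upper().startswith("NO")

def build_script(article_text: str) -> str:
    facts = extract_facts(article_text)
    script = to_dialogue(facts)
    if not grounded(script, article_text):
        raise ValueError("Validation failed: script may contain ungrounded claims")
    return script
```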
Q. What guided your voice technology choice?
A. Because the format was conversational, we needed more natural speech. After reviewing several technologies, ElevenLabs best matched the conversational quality we required. The goal was not simple text-to-speech — listeners needed to feel like two people were actually talking.

A Structured Collaboration from Day One
"This wasn't just a proof of concept — we aimed to build a system capable of daily operation, and the collaboration reflected that level of completeness." MK AX led service design and content planning, while SmileShark handled the AWS-based implementation. SmileShark translated requirements quickly and focused on practical, operational workflows rather than theoretical models.

MAI Morning Briefing, currently available on Maeil Business Newspaper — click to visit the service page

Core Technology: Balancing Accuracy and Naturalness
Reliability through hallucination prevention
The system strictly uses only the original article text. Each processing stage includes validation. "Human edits are now rare — occasionally we adjust a phrase, but the base script is accurate enough to use as is."

Voice Post-Processing for Natural Conversation
To enhance audio quality, the team didn't simply use raw TTS output. Contextual pauses were inserted, and natural breathing rhythms were added between utterances. "AI-generated speech can vary in tone from sentence to sentence. It might start calm, then suddenly become intense — causing volume to spike 2-3x. So after assembling all audio segments, we normalize the volume to ensure consistency throughout."
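The article does not name the audio toolchain, so the following is only a minimal sketch of the post-processing step it describes, assuming pydub: join segments with short pauses and gain-adjust each one toward a common loudness target so no single line spikes. The target loudness and pause length are assumed values.

```python
# Hypothetical post-processing sketch using pydub (not MK AX's actual toolchain).
from pydub import AudioSegment

TARGET_DBFS = -16.0                         # assumed loudness target
PAUSE = AudioSegment.silent(duration=350)   # ~0.35 s breathing pause between utterances

def normalize(segment: AudioSegment, target_dbfs: float = TARGET_DBFS) -> AudioSegment:
    """Apply gain so the segment's average loudness matches the target."""
    return segment.apply_gain(target_dbfs - segment.dBFS)

def assemble(segment_paths: list[str], out_path: str) -> None:
    episode = AudioSegment.empty()
    for path in segment_paths:
        line = normalize(AudioSegment.from_file(path))
        episode += line + PAUSE             # insert a contextual pause after each line
    normalize(episode).export(out_path, format="mp3")  # final pass for overall consistency

# assemble(["host_a_01.mp3", "host_b_01.mp3"], "mai_morning_briefing.mp3")
```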
Serverless was far better than expected
Q. How was the AWS serverless architecture applied?
MK AX initially considered ECS or server-based options, but SmileShark proposed a serverless design. The result was more flexible than expected — easy to modify, easy to extend, and ideal for long-term evolution.
Q. What impressed you about AWS Step Functions?
The visual workflow was intuitive for any developer. We could test individual modules — even within a 15-minute end-to-end process — without affecting the rest. This modularity made future expansion far easier.

Chu Gyo-yoon, AI TF Associate at MK AX, during an interview with SmileShark

Automation That Transformed Daily Operations
Production time: 2-3 days → automated daily at 7 AM (full automation)
Human resources: reporter / external studio → final review only (~90% reduction)
Publishing cycle: irregular → daily on time (stable schedule)
Quality consistency: dependent on producer → uniform quality (standardized)

"Employee feedback has been extremely positive. Even other media companies have asked about the service. The consistent daily schedule is especially well received." Automation didn't only reduce effort — it improved the reader experience.

A Fundamental Shift in How the Team Works
The biggest impact was focus. The team moved away from repetitive production tasks and spent more time on planning and high-impact work. The success also increased confidence — team members became more proactive in exploring AI projects.

Just start small — that's the best way
Q. Advice for companies considering AI adoption?
You only learn by trying. Start small, test hands-on, and set clear goals — PoC or full service. Aim for the right level of completeness.
Q. What was the most important factor for the project's success?
Setting clear performance metrics was critical. Rather than trying to change everything at once, we focused on starting small and improving continuously. Most importantly, we kept expectations realistic. AI doesn't solve every problem, so success depended on defining the roles of people and AI and building a collaboration model around that.

Lee Dong-woo, AI TF Manager at MK AX, during an interview with SmileShark

SmileShark is the right partner when quality matters
We highly recommend SmileShark to companies that are newly adopting AWS or considering a serverless architecture. In particular, organizations looking for practical and production-ready AI use cases will find strong value in working with SmileShark. Their strengths are most evident in projects with clearly defined goals, where execution quality matters as much as speed. SmileShark is an ideal partner for teams seeking fast results while maintaining a stable system from a long-term perspective. Another key advantage is their continued technical support even after project completion, which helps build a reliable and lasting partnership.

The Journey Continues: Ongoing AI Innovation at MK AX
"SmileShark paved the path. Our job now is to enhance and evolve the service." "As we've said, this isn't the end — we're only about halfway there." "There's still a long way to go, but I'm truly grateful for the milestones we've achieved together."

A Realistic Yet Proactive View on AI
"I don't believe AI will replace everything. But it clearly helps people focus on what truly matters." MK AX takes a realistic yet proactive stance on the future of AI. "Through this project, we learned how to collaborate with AI — and we plan to apply it across many more areas."

Expanding AI Applications
MK AX's AI journey is just beginning. Building on the success of MAI Morning Briefing, the company plans to expand AI collaboration across more domains — improving script quality, enhancing voice naturalness, and exploring new content formats and topics. The company also aims to deepen internal AI expertise and build in-house development capabilities. Cross-departmental AI projects are planned to accelerate company-wide AI transformation.

"With SmileShark, AI becomes easier." These were Yoo Young-hoon's closing words. With continued partnership, MK AX's AI journey continues.

▼ Learn more about MK AX's story featured on the AWS Technical Blog below.

  • AWS Certification Types and Levels - 2025 Updates You Must Know

AWS Certification Types and Levels - Updated for 2025
Written by Hyojung Yoon

Amazon Web Services (AWS), the world's leading cloud computing platform, offers certifications that validate your expertise and skills in cloud technology. Whether you're advancing your career or helping your business grow, AWS certifications are a trusted way to showcase your abilities in a rapidly evolving industry. In this blog, we'll explore the latest updates for 2025, including new certifications, updated question formats, and the AWS Retake2025 promotion. Let's dive in!

Contents
Overview of AWS Certifications
1. What are AWS Certifications?
2. Types of AWS Certifications
3. Certification Validity Period
4. Question Types
5. Global Retake Promotion
AWS Certification Levels
Foundational
1. Cloud Practitioner (CLF-C02)
2. AI Practitioner (AIF-C01)
Associate
1. Solutions Architect (SAA-C03)
2. Machine Learning Engineer (MLA-C01)
3. Developer (DVA-C02)
4. CloudOps Engineer (SOA-C03) *formerly known as SysOps Administrator
5. Data Engineer (DEA-C01)
Professional
1. Solutions Architect (SAP-C02)
2. DevOps Engineer (DOP-C02)
3. Generative AI Developer (AIP-C01)
Specialty
1. Advanced Networking (ANS-C01)
2. Machine Learning (MLS-C01)
3. Security (SCS-C02)
AWS Certification Paths
Conclusion

Overview of AWS Certifications

1. What are AWS Certifications?
AWS certifications are globally recognized credentials that demonstrate your expertise in using Amazon Web Services (AWS). They cover various domains, including cloud architecture, development, and operations, and are structured into multiple levels to align with different career paths. Certification exams are offered in multiple languages and are available at testing centers globally.

2. Types of AWS Certifications
AWS offers certifications tailored to different roles and skill levels, categorized into four levels: Foundational, Associate, Professional, and Specialty.

3. Certification Validity Period
AWS certifications are valid for three years from the date of acquisition. To maintain the validity of your certification, you need to renew it before it expires. For Foundational and Associate level certifications, you can meet the renewal requirements either by passing a higher-level exam or by renewing your current certification.

4. Question Types
New question types have been introduced for the AWS Certified AI Practitioner and AWS Certified Machine Learning Engineer - Associate exams, in addition to the traditional multiple-choice and multiple-response formats. These new question types include ordering, matching, and case study questions, designed to assess practical, real-world skills. These updates do not affect the total number of questions or the exam duration.

Explore question types
Ordering questions: Ordering questions require you to arrange 3-6 steps or responses in the correct logical sequence to complete a specific task. Use the drop-down menus provided to select the correct order for each step.
Matching questions: Matching questions present 3-6 prompts alongside a list of possible responses. Your task is to match each prompt with its correct response. Use the drop-down menus to connect the correct responses with each prompt.
Case study questions: Case study questions provide a single scenario and require you to answer two or more questions related to that scenario. The scenario remains the same for all related questions, but each question is evaluated independently.
5. Global Retake Promotion
AWS Global Retake Promotion (Click the image to learn more)
AWS is running a Global Retake Promotion to help certification candidates prepare for 2026. If you join this campaign, you can register for any AWS Certification exam at a 25% discount, and if you don't pass on your first attempt, you'll have one free retake of the same exam. Whether you're stepping into AWS certification for the first time or aiming for a higher-level credential, this is your chance to take the leap with less worry.
How to participate:
Register for the promotion and you will receive a promo code via email.
When you book your exam, enter the code at checkout to get the 25% discount.
Take your first attempt between 10 November 2025, 00:01 (PST) and 15 February 2026, 23:59 (PST).
If you don't pass, you can retake the same exam for free until 31 March 2026, 23:59 (PST).

AWS Certification Levels

Foundational
1. Cloud Practitioner (CLF-C02)
Target Candidates
Individuals with a basic understanding of the AWS Cloud platform
Individuals with no IT or cloud background transitioning to a cloud career
Exam Overview
Topic: Cloud Concepts(24%), Security and Compliance(30%), Cloud Technology and Services(34%), Billing, Pricing and Support(12%)
Cost: $100 | Format: 65 questions | Duration: 90 minutes

2. AI Practitioner (AIF-C01)
Target Candidates
Individuals who are familiar with, but do not necessarily build, solutions using AI/ML technologies on AWS
Exam Overview
Topic: Fundamentals of AI and ML(20%), Fundamentals of Generative AI(24%), Applications of Foundation Models(28%), Guidelines for Responsible AI(14%), Security, Compliance, and Governance for AI Solutions(14%)
Cost: $100 | Format: 65 questions | Duration: 90 minutes

Associate
1. Solutions Architect (SAA-C03)
Target Candidates
1+ years of hands-on experience designing cloud solutions that use AWS Services
Exam Overview
Topic: Design Secure Architectures(30%), Design Resilient Architectures(26%), Design High-Performing Architectures(24%), Design Cost-Optimized Architectures(20%)
Cost: $150 | Format: 65 questions | Duration: 130 minutes

2. Machine Learning Engineer (MLA-C01)
Target Candidates
Individuals with at least 1 year of experience using Amazon SageMaker and other ML engineering AWS Services
Exam Overview
Topic: Data Preparation for Machine Learning (ML)(28%), ML Model Development(26%), Deployment and Orchestration of ML Workflows(22%), ML Solution Monitoring, Maintenance, and Security(24%)
Cost: $150 | Format: 65 questions | Duration: 130 minutes

3. Developer (DVA-C02)
Target Candidates
1+ years of hands-on experience in developing and maintaining applications by using AWS services
Exam Overview
Topic: Development with AWS Services(32%), Security(26%), Deployment(24%), Troubleshooting and Optimization(18%)
Cost: $150 | Format: 65 questions | Duration: 130 minutes

4. CloudOps Engineer (SOA-C03) *formerly known as SysOps Administrator
Target Candidates
1 year of experience with deployment, management, networking, and security on AWS
Exam Overview
The AWS Certified SysOps Administrator - Associate exam currently excludes lab-based tasks until further notice.
Topic: Monitoring, Logging and Remediation(20%), Reliability and Business Continuity(16%), Deployment, Provisioning, and Automation(18%), Security and Compliance(16%), Networking and Content Delivery(18%), Cost and Performance Optimization(12%)
Cost: $150 | Format: 65 questions | Duration: 130 minutes
5. Data Engineer (DEA-C01)
Target Candidates
2+ years of experience in data engineering
1+ years of hands-on experience with AWS Services
Exam Overview
Demand for data engineer roles increased by 42% year over year per a Dice tech jobs report (source: AWS website)
Topic: Data Ingestion and Transformation(34%), Data Store Management(26%), Data Operations and Support(22%), Data Security and Governance(18%)
Cost: $150 | Format: 65 questions | Duration: 130 minutes

Professional
1. Solutions Architect (SAP-C02)
Target Candidates
2+ years of experience in using AWS Services to design and implement cloud solutions
Exam Overview
Topic: Design Solutions for Organizational Complexity(26%), Design for New Solutions(29%), Continuous Improvement for Existing Solutions(25%), Accelerate Workload Migration and Modernization(20%)
Cost: $300 | Format: 75 questions | Duration: 180 minutes

2. DevOps Engineer (DOP-C02)
Target Candidates
2+ years of experience in provisioning, operating, and managing AWS environments
Experience with the software development lifecycle and programming and/or scripting
Exam Overview
Job listings requiring this certification increased by 52% between Oct 2021 and Sept 2022 (source: Lightcast™, September 2022)
Topic: SDLC Automation(22%), Configuration Management and IaC(17%), Resilient Cloud Solutions(15%), Monitoring and Logging(15%), Incident and Event Response(14%), Security and Compliance(17%)
Cost: $300 | Format: 75 questions | Duration: 180 minutes

3. Generative AI Developer (AIP-C01)
Target Candidates
2+ years of experience building production-grade applications on AWS or with open-source technologies, plus general AI/ML or data engineering experience
1 year of hands-on experience implementing generative AI solutions
Exam Overview
Registration for the beta exam opens November 18, 2025
Beta participants receive a special Early Adopter badge upon passing.
Beta Exam Cost: $150 | Format: 85 questions | Duration: 205 minutes

Specialty
1. Advanced Networking (ANS-C01)
Target Candidates
5+ years of networking experience with 2+ years of cloud and hybrid networking experience
Exam Overview
Topic: Network Design(30%), Network Implementation(26%), Network Management and Operation(20%), Network Security, Compliance, and Governance(24%)
Cost: $300 | Format: 65 questions | Duration: 170 minutes

2. Machine Learning (MLS-C01)
Target Candidates
2+ years of experience developing, architecting, and running ML or deep learning workloads in the AWS Cloud
Exam Overview
This certification is being retired. The last day to take this exam is March 31, 2026.
Topic: Data Engineering(20%), Exploratory Data Analysis(24%), Modeling(36%), Machine Learning Implementation and Operations(20%)
Cost: $300 | Format: 65 questions | Duration: 180 minutes

3. Security (SCS-C02)
Target Candidates
2+ years of hands-on experience in securing AWS workloads
3-5+ years of experience in designing and implementing security solutions
Exam Overview
Topic: Threat Detection and Incident Response(14%), Security Logging and Monitoring(18%), Infrastructure Security(20%), Identity and Access Management(16%), Data Protection(18%), Management and Security Governance(14%)
Cost: $300 | Format: 65 questions | Duration: 170 minutes

AWS Certification Paths
*Zoom in on the image to see the AWS Certification Paths.
Above are the top cloud job roles, role responsibilities, and AWS Certification paths aligned with those roles.
Select the role(s) you are interested in and get started or continue your AWS Certification journey to achieve your career goals!
*Note: You are not required to follow these paths. They are recommended pathways. AWS Certified Cloud Practitioner is an optional step for candidates with an IT or STEM background. AWS Certified AI Practitioner is recommended for IT and non-IT professionals looking to leverage AI.

※ Which AWS Certification should I start with?
New to IT and Cloud - From a non-IT background, switching to a cloud career? Start with AWS Certified Cloud Practitioner to validate foundational AWS Cloud knowledge, then earn AWS Certified AI Practitioner to showcase AI knowledge.
Line-of-Business Roles - In sales, marketing, or other business roles? Start with AWS Certified Cloud Practitioner to validate foundational AWS Cloud knowledge, then earn AWS Certified AI Practitioner to showcase AI knowledge.
IT Professionals - Do you have 1-3 years of IT or STEM background? Start with an Associate-level AWS Certification that aligns with your role. AWS Certified AI Practitioner is recommended to validate conceptual AI knowledge.

Conclusion
The AWS certifications introduced in this blog demonstrate cloud expertise. Earning AWS certifications is an excellent way to enhance your competitiveness in the cloud industry. As AWS remains the global leader in cloud computing, these certifications validate your skills and open doors to new career opportunities in this fast-paced and ever-evolving field.

Links
AWS Certification - Validate AWS Cloud Skills - Get AWS Certified
AWS Certification: Addition of new exam question types | AWS Training and Certification Blog
AWS Foundational Certification Exam Retake
Become an AI/ML Early Adopter with AWS Certification | AWS Training and Certification Blog

  • AI Cuts Newsletter Production Time from 3.5 Hours to 40 Minutes | MBlock Company

AI doesn't replace editors — it makes them more efficient. - Seong-ah Jeon, Manager at Mblock Company

[ 💡 Summary ]
1. Mblock Company cut newsletter editing time by 81% - reducing a 3.5-hour workflow to just 40 minutes through full AI-driven automation.
2. Google Spreadsheet-based intuitive automation - by integrating the Naver News API, Google APIs, and Amazon Bedrock, the system automatically collects, analyzes, and ranks about 100 news articles each day, adding two related articles per item to enrich the content.
3. SmileShark, the AI innovation partner behind Mblock's transformation - provided technical expertise for multi-API integration and workflow design, enabling Mblock's media operations to achieve digital transformation through rapid communication and professional execution.

Company Overview
Name    Mblock Company Co., Ltd.
Areas    Blockchain Data Validation・NFT・Media・Conferences
Founded   April 2022
Website   https://m-block.io/
Mblock Company, founded in 2022 by the Maeil Business Newspaper Group — one of South Korea's leading media organizations — is a blockchain-focused subsidiary. As a blockchain and digital-asset media outlet, it publishes Mblock Letter, a newsletter distributed every Wednesday and Friday to approximately 10,000 subscribers. Since introducing its AI-powered newsletter automation system in 2025, Mblock Company has reduced editing time by 81%, achieving measurable efficiency gains. As Jeon Seong-ah describes it, "With just one click, the article appears in three seconds." That moment captured how automation directly translated into tangible productivity improvements.

Inside the 24/7 Digital-Asset Market — A Day in the Life of an Editor
Every morning at 9 a.m., Mblock Company's manager Seong-ah Jeon begins her day reviewing news from the overnight digital-asset market. In this 24-hour industry, information shifts rapidly; what's trending in the morning can be outdated by the time the newsletter draft is ready. During major events — such as the market volatility surrounding Donald Trump's election — the domestic and global digital-asset landscape would change hour by hour as regulatory discussions intensified. Within Mblock's newsroom, a "daily news-clipping" culture exists: five key articles are summarized and shared with links each morning. However, beyond articles, faster updates flow through X (formerly Twitter), Telegram, and Discord, forcing the team to spend over 10 hours a week just tracking market movements.

Seong-ah Jeon, Manager of Strategic Planning at Mblock Company, during an interview with SmileShark

"Honestly, it felt like I was wandering around like a hyena looking for newsletter topics." Her remark captures the reality faced by editors in the fast-moving digital-asset market — constant monitoring, manual curation, and unrelenting time pressure.

AI Adoption — But New Challenges Ahead
While Mblock Letter was among the first in its field to experiment with AI, existing tools such as ChatGPT, Gemini, and Perplexity quickly exposed critical limitations.

(Top) Wednesday edition - Editor's market analysis article, (Bottom) Friday edition - AI-powered weekly news curation

The biggest issue was accurate article retrieval. When asked to "find five Korean digital-asset news articles published between October 7 and October 14, 2025," only two or three results were actually valid. Many links led to unreliable sources, promotional blog posts, or outdated content — sometimes from the previous year — demonstrating how hard it was to achieve precision.
Tone and consistency were also problematic. Even when prompted to use a formal "-입니다" style, the AI would randomly shift to a conversational tone halfway through. Hallucinated links were another recurring problem. When an AI-generated newsletter needed post-publication edits, locating the original prompts or data sources became extremely time-consuming due to the conversational interface. The result? More time spent entering prompts, adjusting parameters, and waiting for responses. As Jeon put it, "At some point, I couldn't tell whether I was training the AI or working as its assistant." It became clear that Mblock needed a structured, system-based approach aligned with its editorial standards — beyond one-off chat-based tools.

The Decision to Automate
Q. We heard you initially tried building automation yourself.
Yes. After seeing similar examples shared in a marketing community, I tried building one on my own for two weeks using the ChatGPT API and Google Sheets. But due to my lack of development background, I kept running into errors. Eventually, I realized this was beyond what I could handle alone.
Q. What made you decide to pursue full automation?
After that failed attempt, I was ready to give up and just wait for AI technology to mature. But then, through our AWS Partner Manager, I was introduced to SmileShark, a team that could provide both infrastructure and technical support. That's when the real development started.

Building the System with SmileShark
When Jeon shared her initial project brief with SmileShark — including all the pain points from her earlier attempts — she received precise feedback. The SmileShark team explained which parts were technically feasible, which were not, and why, allowing Mblock to refine the entire plan. They began by testing various integration methods. The first prototype used Slack as the base, but after comparing multiple environments, Google Sheets proved most effective for archiving and collaboration. Whenever an experiment failed, SmileShark immediately proposed alternatives, enabling the project to move forward without delay. One of the key breakthroughs was solving the filtering problem that Jeon couldn't overcome alone. For example, searching for "coin" used to return irrelevant results such as "K-pop idols appearing at the Coin Festival." SmileShark helped formalize Mblock's filtering standards into a News Clipping Criteria Table — categorizing sources by media credibility, relevance to digital assets, and timeliness. This system allowed the AI to automatically assign scores and rank trustworthy sources across both traditional and emerging media outlets.

Workflow diagram of Mblock Company's AI-powered newsletter automation system

The Core Principle: 'From One Click to Full Draft'
At the heart of the completed system lies simplicity — "from one click to the full article." During the news-clipping process, editorial conditions defined by Mblock are automatically applied: publication recency, media credibility, and whether the source is an official Naver News content provider. Each article receives a weighted score on a 100-point scale, and about 100 articles per day are automatically collected and ranked by score. The system then enriches each main article with two related articles to generate summaries and links automatically, reducing duplication and copyright risk. Editors simply review the list, check the boxes for the articles they approve, and the system instantly generates the body text in three styles — newsroom, conversational, and formal.
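The actual criteria table and weights are internal to Mblock, so the sketch below only illustrates the shape of such a pipeline: fetch candidates from the Naver News search API and assign a weighted score out of 100 for recency, source credibility, and Naver content-provider status. The weights, the `TRUSTED_SOURCES` list, and the credential variable names are assumptions for illustration, not the production configuration.

```python
# Illustrative sketch only: weighted 100-point scoring of Naver News search results.
import os
import requests
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

TRUSTED_SOURCES = ("mk.co.kr", "hankyung.com", "yna.co.kr")  # hypothetical allow-list

def fetch_news(query: str, display: int = 100) -> list[dict]:
    """Query the Naver News search API (requires Naver API credentials)."""
    resp = requests.get(
        "https://openapi.naver.com/v1/search/news.json",
        headers={
            "X-Naver-Client-Id": os.environ["NAVER_CLIENT_ID"],
            "X-Naver-Client-Secret": os.environ["NAVER_CLIENT_SECRET"],
        },
        params={"query": query, "display": display, "sort": "date"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["items"]

def score(item: dict) -> int:
    """Assumed split: recency (40), source credibility (40), official Naver provider (20)."""
    points = 0
    published = parsedate_to_datetime(item["pubDate"])
    age_hours = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    points += 40 if age_hours <= 24 else 20 if age_hours <= 72 else 0
    if any(src in item.get("originallink", "") for src in TRUSTED_SOURCES):
        points += 40
    if "news.naver.com" in item.get("link", ""):  # served through Naver News
        points += 20
    return points

# ranked = sorted(fetch_news("가상자산"), key=score, reverse=True)[:5]
```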
"Now all I do is review and poliush the content", Jeon said. "What used to take hours is now literally one click. After Implementation — What Changed Q. What was the most memorable moment? When I clicked the checkbox for the first time and saw the article appear on screen about three seconds later , that was the moment I realized — I'll never have to wrestle with ChatGPT again. Traditional AI tools like ChatGOT or Claude could generate one article at a time, each requiring long waiting periods. In contrast, the new system processes multiple selections at once. "All I do now is decide which pieces to include," she added. "The system does the rest." Q. What did you learn form the process? The biggest lesson was that there's no such thing as a perfect first attempt. We started with a Slack-based version, switched to Google Sheets, and adjusted our filtering criteria several times. Every failure led us closer to a better system. Another thing I noticed during live operations, was that AI-selected articles often got slightly higher engagement than those picked by our editors. That really showed how far AI has come in understanding what readers care about. From 3.5 Hours to 40 Minutes — An 81% Reduction Comparison of Mblock Company's newsletter production before and after AI automation The numbers tell the story best. Before automation, each Mblock Letter issue took about 3.5 hours to produce. Immediately after implementation, that dropped to 1.5 hours, and now — including image preparation and platform upload — the entire process takes just 40 minutes. That's an 81% reduction in production time. But the most signmificant improvement wasn't just time saved — it was the psychological relief that came with it. Before automation, Jeon handled the daily morning news clipping manually. When workloads piled up, she sometimes missed deadlines — not because of external pressure, but from the sense of personal responsibility. Now with automated clipping, I can wrap up that task in under 10 minutes. It gives me more time and evergy to focus on higher-value work. That's the biggest change — not just efficiency, but mental clarity. Practical Tips for AI Newsletter Automation "AI doesn't replace editors — it makes them more efficient." Jeon offered several actionable tips for other editors and media teams exploring automation: Define your internal standards clearly. Mblock built a 100-point evaluation model based on media credibility, publication recency, and whether the outlet is a Naver News content provider. Converting subjective quality into measurable metrics was key to system reliability. Keep your tools simple. After testing multiple approaches, Google Sheets proved to be the best for record-keeping and collaboration. "Simplicity beats complexity every time," Jeon noted. Design for a one-click experience. Automation should feel effrotless — a single checkbox should trigger a full draft. "The less friction editors feel, the more likely they';; embrace the system." Work with a specialized partner. Multi-API integration and workflow design require technical depth. Collaborating with an experienced partner like SmileShark was crucial to the project's success. Create a separate development account. Using dedicated development credentials simplifies collaboration and enhances security. Dont aim for perfection — start first. We're still improving the system every week, but the key is to start. In an industry changing this fast, hesitation costs more than imperfection. 
Mblock Letter - Mblock Company's newsletter connecting the blockchain ecosystem with the public (click to view)

Scalability Beyond Mblock — Expanding the Newsletter Model
The automation framework Mblock built can easily be applied to other newsletters with only minor keyword adjustments. Jeon believes that industries such as finance and economics, which rely heavily on news clipping and summarization, could adopt a similar model to achieve the same level of efficiency. She emphasized that the system Mblock created can be scaled across multiple domains. If companies can clearly define their selection logic — such as how to score or filter reliable news sources — the rest of the process can be fully automated. The system's modular structure makes it flexible enough to handle new datasets, APIs, or workflows, providing a foundation for future growth across different media verticals.

More Than a Technical Partner — A True AI Innovation Ally
Q. What stood out the most about working with SmileShark?
The communication speed. In the past, working with freelance developers often meant waiting weeks between feature requests and deliverables. But with SmileShark, communication was incredibly fast. I still remember — our assigned Solutions Architect was coordinating with us the same day he came back from military reserve training. What also impressed me was how SmileShark handled limitations. When something wasn't technically feasible, they didn't just say no — they clearly explained why and offered alternatives. That made the entire collaboration process stress-free. Honestly, this project wouldn't have been possible without our SA, Byung-joo. He truly played a major role in making our workflow automation a success.
Q. How would you recommend SmileShark to other companies?
Many startups don't have in-house developers. But if you know what you want and can communicate your goals clearly, SmileShark can turn that vision into an actual tool. When our AWS Partner Manager asked how the collaboration went, we told them we'd absolutely recommend SmileShark — especially Manager Jun-hong, who was amazing to work with, and the SA team for their responsiveness and expertise. They're a partner we truly trust and recommend to others.

Empowering Media Companies to Focus on What Matters
SmileShark helps media organizations like Mblock Company overcome technical barriers and stay focused on innovation.

  • Recap of AWS Summit Japan 2025

AWS Summit Japan 2025 Recap: Revisiting the Summit Japan Experience with an AWS Ambassador
Written by MinHyeok Cha

Contents
AWS Summit Japan 2025 Event Overview
AWS Summit Japan On-Site Atmosphere
Key Partner Booth Introductions
Technical Sessions and Exhibition Content
Closing Remarks

Last May, AWS Summit Seoul 2025 was held in Seoul. On June 25 and 26, AWS Summit Japan 2025 took place in Tokyo over two days. The reason for my trip to Japan was customer meetings with Japanese clients. Since the client was also attending the Summit, we arranged to meet at Makuhari Messe in Chiba, the venue for the Japan Summit.

AWS Summit Japan 2025 Event Overview
AWS Summit Tokyo is Japan's largest AWS event. Similar to Korea, it's an event where you can gain knowledge and enjoy sessions related to AWS technology, GameDay, EXPO, and more. The venue is 'Makuhari Messe'. You can think of it as similar to COEX in Seoul for easier understanding. The weather, true to summer in Japan, was hot, humid, and rainy — a trifecta of discomfort. But since it was an indoor event, we braved it and went in.

AWS Summit Japan On-Site Atmosphere
There were many people waiting, but there were also an incredible number of staff issuing tickets up front — it seemed even busier than AWS re:Invent. So we didn't waste time and got inside quickly. As those in the know are aware, AWS established its region in Japan before Korea. Perhaps reflecting the strong local interest in AWS, the sheer crowd size was palpable. For partners, this is another point of curiosity: we could see which AWS partners are active in Japan.

Key Partner Booth Introductions
I visited several booths, but everything was written in Japanese. I just lingered around the booths of well-known companies, pretending to understand everything. First up is Snowflake. The only things I could read on their slides:
Powerful AI Services - Powerful AI services supporting code assistance and document reading
Conversations with Data - Search functionality and text-to-SQL capabilities that make it easy to build BI or AI chatbots
Models - Easy access to top-tier models integrated with Snowflake
(Translation reference)
Thinking back, while AI was certainly mentioned a lot at the Japan Summit, it didn't quite have that same "We're in the Age of AI!" feeling you get in Korea or the US. It makes you wonder if it's a country that values analog things more. I went to the FORTINET booth, and there was a spot to take pictures with a cute mascot(?) up front. I didn't have the courage to ask someone to take my picture. This is Classmethod, a company famous even in Korea. I heard they had a cat mascot at last year's summit too, but I missed it, which is a shame. Next is a company called iret. I was curious and looked it up; it seems to be a general IT solutions provider. Their booth was the same size as Classmethod's, so I guessed they were a Diamond Partner.

Technical Sessions and Exhibition Content
This is the session timeline I also saw at AWS re:Invent. My flight arrived at 11 AM, so I missed the keynote, but I felt that the format of companies or AWS people giving presentations is similar no matter where in the world you go. This next one seems to be a joint creation by Sony and Honda. Similarly, I couldn't understand the explanation, but since an AWS architecture related to cars was set up, I could only get a sense of what it felt like.
Closing Remarks
Working at SmileShark led me to attend the Japan Summit too. Since it wasn't my main goal, I couldn't participate for both days, but it was a great experience. Studying Japanese would probably make exploring even more fun. That wraps up my unplanned AWS Summit Japan experience. Thank you!

  • Using MCP on AWS: Now Available with Amazon Q

Written by Minhyeok Cha

There have been a couple of new developments regarding Q. One is multilingual support (including Korean), and the other is the combination of CLI + Q + MCP. MCP is hot again these days. As someone working in IT, I felt I should at least check what this thing is about, albeit late, so I wrote this article. While it's typically installed and used with things like Claude or Cursor, as an employee of an AWS partner company, I'll try to integrate it with AWS.

Table of Contents
What is the Model Context Protocol (MCP)?
Installing Amazon Q and SSO Sign-Up
Applying the MCP Server
What can we make MCP do?
An extra part I'm adding because I feel it's lacking
Wrap-up

What is the Model Context Protocol (MCP)?
You might already know this, but let's briefly explain it and fill out the content anyway. The Model Context Protocol (MCP) is an open protocol that supports seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs to the necessary context. In fact, various MCP servers are already available on GitHub, so you can simply grab one and use it. If you find one lacks desired features, implementing your own is also an option. Refer to the following link for GitHub servers → https://github.com/modelcontextprotocol/servers
The MCP flow we'll use looks roughly like this. As mentioned earlier, we'll proceed using the CLI and Q.

Installing Amazon Q and SSO Sign-Up
1. First, sign up for SSO. [Q] How to Subscribe to Amazon Q Developer Pro
2. Follow these steps to sign up. Note: If you are using an AWS Organization and testing with an account under it, this is treated as an account instance and incurs the following Q limitations.
3. Next is installing Amazon Q on the CLI. Follow these steps for installation → [Link]
4. After that, open the terminal and type "q chat". The screen should display as follows. Korean support was essential for me, who struggles with English, and fortunately, it is supported. :)

Applying the MCP Server
While there are various MCP servers, for this article, I sought the easiest, least troublesome, and most visually clear server for testing and selected the Puppeteer MCP server. It's a Model Context Protocol server that uses Puppeteer to provide browser automation capabilities. While it's described as an MCP server that lets LLMs interact with web pages, take screenshots, and execute JavaScript in a real browser environment, let's see what kind of output it produces in the test results below. Visit the GitHub repository I posted earlier. For your desired server, you'll find the server configuration method, which is also very simple.

~/.aws/amazonq
Navigate to the amazonq configuration directory in your terminal, create an "mcp.json" file, and simply paste in each server's configuration values. Then, reboot the CLI Q you started earlier.
💡To reboot, exit with /quit and then run it again.
If "Puppeteer loaded" appears above, the MCP server has been applied successfully.
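The post doesn't reproduce the configuration values themselves, so here is a minimal sketch of what that step can look like, assuming the Puppeteer server's commonly documented npx-based entry. The exact directory path and JSON layout should be checked against the Amazon Q CLI documentation and the server's README before relying on this.

```python
# Writes an mcp.json for the Puppeteer MCP server into the Amazon Q CLI config directory.
# The "mcpServers"/"npx" entry follows the server's commonly documented setup; verify
# against the README, as this is an assumption rather than the article's exact file.
import json
from pathlib import Path

config = {
    "mcpServers": {
        "puppeteer": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-puppeteer"],
        }
    }
}

config_dir = Path.home() / ".aws" / "amazonq"
config_dir.mkdir(parents=True, exist_ok=True)
(config_dir / "mcp.json").write_text(json.dumps(config, indent=2))
print("Wrote", config_dir / "mcp.json")
```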
What can we make MCP do?
After running Puppeteer a few times, I found I could easily ask it to do things like this:
Capture web page screenshots
Retrieve web page content
Interact with web pages
I asked it to summarize one of the blog links that helped me study MCP, using one of these simple commands. It output the following content. By granting permissions to this tool, the MCP server reads and organizes the link for you.

Summarize MCP-related blog content

Hmm... But it feels a bit anticlimactic. So this time, instead of just a URL, I fed it a PDF I'd previously organized about personal information and requested an analysis related to ISMS-P. The results were surprisingly good.

An extra part I'm adding because I feel it's lacking
Since this MCP server only uses URLs, it threw an error saying it couldn't find content opened as a PDF. It seems to be an issue with the browser directly rendering the PDF. It then recognized the problem and used the curl command to download the PDF file locally. I used the ls -la command to verify the downloaded PDF file existed and confirmed it downloaded correctly. I attempted to read the PDF using Python's PyPDF2 library, but it wasn't installed. After creating a Python virtual environment and installing the PyPDF2 library, I successfully read the PDF file and extracted the text. While the subsequent details are confidential and cannot be shared, the CLI + Q functionality on this blog appears to be sufficiently verified.

Wrap-up
Upon finishing this post, I noticed that AWS's official YouTube channel already has a video about using the Amazon Q Developer CLI and MCP servers. It's a bit disappointing that, had I acted a little faster, I could have posted about the Q CLI and MCP first. MCP truly unlocks diverse possibilities. Integrating MCP with existing servers enabled AI models to access required endpoints with appropriate permissions, marking a significant leap forward in development automation and efficiency.
💡Of course, careful consideration is required when adopting MCP. Issues like the scope of permissions granted to the MCP server, logging, and patch management associated with operating your own server are currently significant topics of discussion within the security community.
My first experience using the Amazon Q Developer CLI was astonishing in itself. Even without MCP's additional features, its intelligent support — proposing and executing necessary library installations or version patches based on the situation — completely transformed my development workflow. In the past, when execution errors occurred, I had to check logs and manually search for solutions one by one. Now, I can resolve issues much more efficiently without that hassle.

  • What Sets AI Apart from AI Agents?: From Core Concepts to Real-World Applications

What Sets AI Apart from AI Agents?: From Core Concepts to Real-World Applications
Written by Eunmin Jeon

Hello, I'm Eunmin Jeon from the Brand Team. Recently, terms such as AI agents and agentic AI have been appearing almost as frequently as "AI" itself. Although these names sound similar, their meanings differ in significant ways. In this article, we take a closer look at the concept that is attracting the most attention today — the AI agent.
Note: In this article, AI refers to Generative AI (GenAI), and AI agent refers to an agent built on Generative AI.

Contents
Why Are AI Agents Gaining Attention?
Defining the AI Agent
AI Agent vs. AI Assistant vs. Chatbot
Core Characteristics of AI Agents
The Operational Loop: Think → Act → Observe
Representative AI Agent Use Cases
In conclusion

Why Are AI Agents Gaining Attention?
As AI technology evolves, tools like ChatGPT have become part of everyday life, streamlining everything from work tasks to personal errands. Now a new concept — the AI agent — is pushing these capabilities even further. For example, there are demonstrations of an AI agent ordering a pizza end-to-end, including making the call and posting a review. Others show agents logging into an app and placing a hamburger order without human intervention. Industry analysts agree this is more than a trend. Gartner's Top 10 Strategic Technology Trends for 2025 lists nine AI-related technologies. OpenAI CEO Sam Altman described AI agents not as mere question-and-answer tools but as "virtual colleagues and work partners," stating: "The future of computing will be less about using apps and more about telling agents what to do." In short, AI agents are more than a passing trend — they represent a shift in how we will work. But what distinguishes an AI agent from conventional AI? Let's explore the core concepts and practical use cases.

Defining the AI Agent
AI vs. AI agent
An AI agent is an autonomous AI system capable of processing data, making decisions, and executing tasks without direct human intervention. It interacts with its environment to accomplish user-defined goals. The essential attribute is autonomy: once the goal is understood, the agent plans and executes the necessary steps on its own. How does this differ from traditional AI? The comparison below highlights key differences.
Definition - AI: predictive model performing a specific task | AI Agent: goal-driven system that acts autonomously
Interface - AI: prompt | AI Agent: goal / task
Operation - AI: input → output | AI Agent: continuous task execution
Autonomy - AI: none | AI Agent: full (decides subsequent actions)
Example - AI: document summarization, image generation | AI Agent: business-trip planning, automated customer service

Traditional AI follows a "prompt in, output out" pattern. For example, if you ask an AI model to "Reserve a Korean barbecue restaurant near Gangnam Station for 7 p.m. tonight," it cannot complete the reservation on its own. You would instead need to request: "Recommend popular barbecue restaurants," then manually click a booking link.
AI Operation Screen
An AI agent, by contrast, can interpret the user's intent from a single instruction and execute the entire workflow:
Identify the required subtasks (search, comparison, reservation, confirmation).
Gather candidate restaurants and analyze details such as price, menu, and reviews.
Verify availability and book the table.
Provide a final confirmation (time, location, reservation ID).
AI Agent Operation Screen
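To make the contrast concrete, here is a minimal sketch of the loop such an agent might run for the reservation request above. The tool functions (`search_restaurants`, `check_availability`, `book_table`) and the selection logic are hypothetical stubs, not a real booking API; a production agent would typically let an LLM plan the sequence.

```python
# Hypothetical agent-style workflow for the restaurant reservation example.
from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    rating: float

def search_restaurants(area: str, cuisine: str) -> list[Restaurant]:
    """Stub tool: return candidate restaurants for the area/cuisine."""
    return [Restaurant("Gangnam Galbi", 4.6), Restaurant("Seoul BBQ House", 4.3)]

def check_availability(r: Restaurant, time: str) -> bool:
    """Stub tool: would query the venue's booking system."""
    return True

def book_table(r: Restaurant, time: str, party: int) -> str:
    """Stub tool: would place the booking and return a reservation ID."""
    return f"RSV-{r.name[:3].upper()}-001"

def reserve(area: str, cuisine: str, time: str, party: int) -> str:
    # Think: decompose the goal into search -> compare -> reserve -> confirm.
    candidates = search_restaurants(area, cuisine)            # Act
    candidates.sort(key=lambda r: r.rating, reverse=True)     # Observe and compare
    for r in candidates:
        if check_availability(r, time):                       # Act / Observe
            rid = book_table(r, time, party)                  # Act
            return f"Booked {r.name} at {time} (reservation {rid})"  # Confirm
    return "No availability found; the next step would be to try alternative times."

print(reserve("Gangnam Station", "Korean barbecue", "19:00", 2))
```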
AI Agent vs. AI Assistant vs. Chatbot
How do AI agents compare with familiar tools such as chatbots or AI assistants?
Purpose - Chatbot: provides predefined answers (FAQ) | AI Assistant: responds flexibly to user queries | AI Agent: plans and acts to achieve a goal
Intelligence - Chatbot: low (rule-based) | AI Assistant: medium (LLM-driven natural language understanding) | AI Agent: high (planning, reasoning, action)
Initiative - Chatbot: none (reactive only) | AI Assistant: partial (primarily response-based) | AI Agent: high (actively completes multi-step tasks)
Examples - Chatbot: bank customer-service bots, parcel tracking | AI Assistant: Siri, Google Assistant, Samsung Bixby | AI Agent: Auto-GPT, Rabbit R1, Devin, Microsoft Copilot
Action Range - Chatbot: text interactions only | AI Assistant: app integrations (weather, alarms) | AI Agent: executes tools, web navigation, app automation
Autonomy - Chatbot: none | AI Assistant: limited | AI Agent: full

A chatbot delivers fixed responses to predefined inputs — for example, returning a shipping date when given a tracking number. An AI assistant (think Apple's Siri) can respond dynamically — "Hey Siri, what's the weather tomorrow?" — but still relies on direct user prompts. An AI agent, however, can orchestrate a complete workflow and take action without continuous user guidance, making it ideal for repetitive or automated tasks.

Core Characteristics of AI Agents
Autonomy
Unlike conventional AI systems that require continuous human involvement, an AI agent can act independently once a goal is defined. Traditional AI may provide information, but it typically relies on the user to complete the task. By contrast, an AI agent can schedule meetings, summarize files, or send messages without direct human intervention.
Goal Orientation
An AI agent does more than simply respond to input. It executes a sequence of tasks to accomplish a clearly defined objective. For example, when instructed to "schedule a meeting," the agent can coordinate participants' availability, send email invitations, and update the calendar automatically.
Tool Interaction
AI agents can access and integrate with a variety of tools — such as APIs, databases, or sensors — to solve problems. A common example is connecting to Google Calendar to check a user's availability and reserve a time slot.
Adaptability
Rather than following fixed rules, an AI agent refines its approach as conditions change. It continuously learns from data and user feedback. If a morning calendar is fully booked, for instance, it will propose afternoon meeting times instead.

The Operational Loop: Think → Act → Observe
AI agents process tasks through the following workflow:
Think - The agent analyzes the current objective and determines the optimal sequence of actions. For example, it may consider whether to gather information first or perform a calculation before proceeding.
Act - Based on the plan, the agent executes the required tasks — invoking appropriate tools or sending requests to external systems (for instance, a payment gateway) to carry out the action.
Observe - The agent evaluates the outcome of its actions, reviews the results, and organizes the findings to inform the next step.
AI agents repeat this cycle until the specified goal is achieved. Over time, they incorporate user feedback, continuously refining their strategies to deliver a more tailored and effective response. The following example illustrates how an AI agent can generate a personalized email based on user data.
Source: AWS
Step 1. User → Agent
When a user initiates a specific action — such as requesting a movie recommendation email — the AI agent recognizes the request and begins processing.
Step 2.
User Profile Verification (Feature Store) Multiple customers exist, each with stored preferences for movie genres. This information is maintained in a Feature Store  and leveraged to generate personalized recommendations. The AI agent accesses this data to prepare the newsletter content. Step 3. Movie Information Aggregation (Metadata Store) The Metadata Store  contains metadata such as movie titles, descriptions, genres, and release years. The AI agent organizes this information to select recommendations appropriate for each individual user. Step 4. Language Model Execution The language model generates a personalized recommendation email using the following inputs: The curated movie list and descriptions User profile information (genre preferences) Step 5. Email Generation and Delivery Finally, the personalized message created by the language model is sent to the user via email. Example Email: 🔥 Immerse Yourself in the Thrill of Sci-Fi and Action! Buckle up for a ride through time, space, and alternate realities with our top sci-fi action recommendations just for you. In this manner, when the user requests “Create a personalized email for my customers,” the AI agent completes the entire workflow—from consolidating customer and movie data, to drafting the personalized email, and sending it. Leveraging both user data and contextual understanding, the AI agent autonomously plans and executes each step, continuously delivering increasingly tailored experiences. Representative AI Agent Use Cases Many companies are now integrating AI agents into their services. The following examples highlight some of the most prominent implementations: OpenAI “Operator” OpenAI’s Operator  is a service that enables AI to interact directly with web interfaces. It leverages the Computer Use Agent (CUA) model , combining GPT-4o’s vision capabilities with reinforcement learning. This autonomous AI agent can engage with graphical user interfaces (GUIs), including buttons, menus, and text fields, enabling it to perform tasks that traditionally required human input. Source: OpenAI Anthropic “Computer Use” Anthropic’s Computer Use  is a task automation tool integrated with the Claude 3.5 language model. It allows Claude to operate like a human, observing computer screens, moving the mouse cursor, and clicking buttons to perform tasks. Currently, Computer Use is deployed across various platforms, including the mobile task management app Asana , the graphic design platform Canva , and the food delivery service DoorDash . Source: Anthropic Microsoft “Copilot” Microsoft’s Copilot  suite includes a range of AI agents designed to automate business processes, particularly document-related tasks: SharePoint Agent : Assists users in quickly locating information by linking to specific sites, files, or folders. Employee Self-Service Agent : Automates tasks such as leave requests, payroll and benefits inquiries, and equipment requests. Translation Agent : Provides real-time translation of conversations across nine languages during video conferences. Project Manager Agent : Automates the entire project lifecycle, from planning to execution. Source: Microsoft Butterfly Effect “Manus ai” Manus , the AI agent used in earlier examples, is an autonomous AI agent developed by the Chinese startup Butterfly Effect . What sets Manus apart is its ability to autonomously execute a wide range of tasks, including resume organization, stock data analysis, and New York real estate recommendations. 
In addition, Manus can operate entirely in the cloud, providing flexible, scalable deployment. Source: Manus In conclusion The era when AI functioned merely as a tool  has passed. AI agents now stand at the core of digital transformation, driving operational efficiency and 24/7 scalability. Unlike conventional AI that simply provides answers, AI agents take action —integrating with internal systems to execute real business processes. “We are entering an era in which AI agents, integrated with internal systems, can execute actual business operations.” For organizations, the question is no longer “Should we adopt AI agents?”  but rather “How should we deploy them effectively?” References Sam Altman's Blog, “Three Observations” , (2025.02.10), https://blog.samaltman.com/three-observations AWS, “What Are AI Agents?” , https://aws.amazon.com/ko/what-is/ai-agents/ Korea Local Information Development Institute, “Trends in AI Agent Technologies and Service Development Abroad” , Vol. 145, http://klidwz.or.kr/webzine/vol145/sub_2_4.html Tech42, “Anthropic Launches AI Agent Capable of ‘Using Computers Like Humans’” , (2024.10.23), https://www.tech42.co.kr/%EC%95% AI Times, “The Next Stage of Artificial Intelligence: The Era of AI Agents – Building AI Agents Through MS Copilot Studio” , (2024.11.20), https://www.aitimes.kr/news/articleView.html?idxno=32885 Chosun Ilbo, “What Is ‘Manus’… Called the World’s First General-Purpose AI” , (2025.07.27), https://www.chosun.com/economy/weeklybiz/2025/07/24/CNF44UEWCVHW5PAUBMI4V2U5UY/

  • AWS Case Study - Blynx

    Just one month after launch, Blynx created 300 chatbots How did they set their sights on global expansion? Blynx Co., Ltd. Blynx operates I’m , an AI-powered platform that enables anyone to create customized chatbots for both work and life. Unlike ChatGPT, I’m  leverages users’ proprietary data to deliver more accurate and domain-specific chatbot experiences.  The platform is already being applied across diverse scenarios: interactive replacements for traditional business cards, professional service promotion and consultation (for hospitals, law firms, and tax offices), customer support and community management, and entertainment use cases such as celebrity fan club bots and character-based chatbots. This versatility provides unlimited scalability for users. Company    Blynx Corporation Industry    AI-based Custom Chatbot Services Estab.     February 2024 Website    https://blynxlab.com From B2B SI to B2C AI Platform Initially focused on system integration (SI), Blynx faced structural limitations in a market with low margins, intense competition, and limited global potential. Given these constraints, the company decisively shifted from a B2B systems integration model to a consumer-oriented AI platform, opening up broader opportunities for global growth .  The development team embraced the shift, motivated by the opportunity to apply cutting-edge LLM technologies and high-performance data processing. While this transition presented challenges - especially the need to continuously learn and implement emerging technologies - Blynx was able to overcome them effectively with technical guidance from SmileShark and support from AWS. Choosing AWS and Partnering with SmileShark Blynx selected AWS as its cloud provider for its strong presence in the Korean market and its robust technical support system. AWS consistently delivered faster and more accurate responses than other cloud vendors, enabling the development team to move quickly. At the same time, SmileShark distinguished itself as a true technology partner. While large MSPs often struggle to provide personalized support to early-stage startups, SmileShark specializes in assisting startups and SMBs. Beyond surface-level guidance, SmileShark provided detailed benchmarks, hands-on technical insights, and practical security validation, helping Blynx accelerate development and strengthen its infrastructure. Key Milestones & Achievements Development of I’m began on April 8, 2024. Within just three months, on July 2, the company successfully launched its beta version. More than 90% of the AI technology stack involved new and unfamiliar domains, but with SmileShark’s expertise and AWS’s robust services, Blynx shortened its learning curve and adapted quickly. Remarkably, within the first month of launch, over 300 chatbots had been created without dedicated marketing efforts. Organic adoption continued to grow, attracting a diverse customer base that included startups, KOSDAQ-listed enterprises, hospitals, and law firms. “Although more than 90% of our AI tech stack involved unfamiliar technologies, with SmileShark’s expertise and AWS services, we were able to launch our beta service in just three months.” - Jaeyoo Cho, CEO of Blynx Hong Seong-bin, AI Developer, and Cho Jae-yoo, CEO, during the interview Driving Innovation with AI Over 90% of the AI stack represented new technology areas, requiring a different approach than traditional development. 
The Blynx team leveraged AI tools such as ChatGPT and Cursor to accelerate prototyping, while SmileShark provided professional technical validation. This combination allowed the company to innovate quickly while maintaining stability and security. From this experience, Blynx emphasizes three key principles for companies pursuing AI-driven business models:

1. Realistic expectation management – Understand what AI can and cannot do today to avoid poor decision-making and ensure effective resource allocation.
2. Data-centric strategies – Clearly define what data is available, how it should be refined, and what outcomes are expected before starting.
3. Collaboration with professional partners – Work with experienced partners like SmileShark, who bring extensive client experience and practical feasibility checks.

Looking Ahead: Global Expansion

Blynx’s long-term vision is to redefine platform businesses around LLM technologies. A recent example demonstrated this potential when a customer launched a “self-introduction chatbot” on a dating platform. In just 24 hours, the chatbot generated over 3,000 messages and facilitated 10 successful real-life matches, showcasing the transformative power of conversational AI in social networking. Building on this momentum, Blynx is preparing to launch globally. Starting as early as October 2024, the company plans to release multilingual services in English and Japanese. To support this growth, Blynx is expanding its AWS infrastructure beyond the Seoul Region to Virginia and Singapore, while also implementing international payment systems such as PayPal.

Chatbots created with I’m for various purposes and languages

Blynx’s AI innovation journey continues. To learn more about Blynx, visit their website, read additional case studies, and explore related articles about their innovative AI solutions.

‘AI Business Card’ Blynx Wins ‘Good Start-up Award’
How Startups Survive and Grow in the Era of AI Transformation
Blynx Launches ‘I’m’, a Chatbot You Can Build in a Few Clicks... “AI Like a Business Card”

  • AWS Summit Japan 2025 Recap

    AWS Summit Japan 2025 Recap: Revisiting the Summit Japan Venue with an Ambassador

Written by MinHyeok Cha

Contents
AWS Summit Japan 2025 Overview
On-Site Atmosphere at AWS Summit Japan
Major Partner Booths
Technical Sessions and Exhibits
Wrapping Up

AWS Summit Seoul 2025 was held in Seoul this past May. Then, on June 25 and 26, AWS Summit Japan 2025 took place over two days in Tokyo. The reason for this trip to Japan was a meeting with a Japanese customer. As it happened, the customer was also attending the Summit, so we arranged to meet at Makuhari Messe (幕張メッセ) in Chiba, the venue hosting the Japan Summit.

AWS Summit Japan 2025 Overview

AWS Summit Japan is Japan's largest AWS event. As in Korea, it is an event where you can gain knowledge and have fun through AWS technical sessions, GameDay, the EXPO, and more. The venue, Makuhari Messe, is easiest to picture as Japan's equivalent of COEX in Seoul. The weather was classic Japanese summer: hot, humid, and rainy, an unpleasant trio, but since everything took place indoors I put up with it and headed in.

On-Site Atmosphere at AWS Summit Japan

There were plenty of people waiting in line, but there were also a great many staff issuing badges at the entrance, even more than at AWS re:Invent, it seemed, so I was able to get inside quickly without losing any time. As some of you may know, AWS opened a region in Japan before Korea. Perhaps that is why local interest in AWS runs so high; you could clearly feel the size of the crowd. And for a partner there was another point of curiosity: I could see which AWS partner companies are active in Japan.

Major Partner Booths

I walked around quite a few booths, but everything was written in Japanese, so I just lingered around the booths of well-known companies and did my best to look like I understood. First up is Snowflake. Here is the only part of their slides I could actually read (with the help of a translator):

• Powerful AI services: AI services that support code assistants, document reading, and more
• Conversation with your data: search and text-to-SQL features that make it easy to build BI and AI chatbots
• Models: easy access to best-in-class models integrated with Snowflake

Come to think of it, AI certainly came up a lot at the Japan Summit, but it did not have quite the "this is the great age of AI!" feel you get in Korea or the US. Perhaps that reflects how much Japan still values the analog.

I also stopped by the Fortinet booth, which had a spot where you could take a photo with a cute mascot. I did not have the courage to ask for one.

Next is Classmethod, a company well known in Korea as well. I heard they had a cat mascot at last year's Summit, so it was a shame not to see it this time.

There was also a company called iret; a quick search suggested they are a general IT solutions provider. Their booth was the same size as Classmethod's, so I could guess they are likewise a Diamond-tier partner.

Technical Sessions and Exhibits

Here is the session timeline, much like the one at AWS re:Invent. My flight landed at 11, so I missed the keynote, but watching speakers from each company or from AWS take the stage, I felt that these events look much the same wherever you go in the world.

Next appears to be a joint creation by Sony and Honda. Again, I could not follow the explanation, but an AWS architecture related to the vehicle was on display, so I at least got a feel for what it was about.

Wrapping Up

Working at SmileShark ended up taking me to the Japan Summit as well. Since it was not the main purpose of the trip, I could not attend both days, but it was a good experience. I suspect it would be even more fun to look around after studying some Japanese. That concludes this unplanned AWS Summit Japan report. Thank you!

  • What is Amazon WorkSpaces? A Complete Guide to Remote Work Cost Reduction and Implementation in 2025

    What is Amazon WorkSpaces? A Complete Guide to Remote Work Cost Reduction and Implementation in 2025

Written by Hyojung Yoon

Hello, this is Hyojung Yoon, Brand Team Lead, returning with a long-awaited blog post. The way we work has changed significantly since the COVID-19 pandemic. According to Maeil Business News' analysis of the August 2024 Statistics Korea Economically Active Population Survey Supplementary Survey by Employment Type, telecommuters in Korea number approximately 683,000, representing 3.1% of all workers.

Source: Gartner Newsroom, March 1, 2023

Additionally, Gartner predicted that 39% of global knowledge workers would work in a hybrid manner by the end of 2023, and this trend continues in 2025.

💭 Companies now face new challenges. "How can we ensure employees work securely from anywhere?" "How can we reduce the substantial costs of building remote work infrastructure?" If these questions resonate with you, Amazon WorkSpaces could be the solution you're looking for.

Contents
Why Do We Need Amazon WorkSpaces Now?
What is Amazon WorkSpaces?
VDI vs. DaaS: Differences
Real Implementation Cases and Results
Key Features of Amazon WorkSpaces
Amazon WorkSpaces Implementation Guide
Pricing and Cost Optimization
Implementation Considerations
Conclusion

Why Do We Need Amazon WorkSpaces Now?

Challenges in the Remote Work Era

More companies than ever are transitioning to remote work or allowing hybrid work arrangements. However, this change has simultaneously presented several challenges for companies:

First, security issues. Security threats have increased as employees access company data from personal devices.
Second, infrastructure costs. Providing laptops or computers to all employees and establishing VPNs requires enormous expenses.
Third, management complexity. Managing and updating distributed devices is a significant burden for IT teams.

The Solution Amazon WorkSpaces Offers

Cloud-based virtual desktop services have emerged to solve these problems. Among them, Amazon WorkSpaces provides the following benefits.

Source: Amazon WorkSpaces Customers | Persistent Desktop Virtualization

💡 Amazon Internal Case Study
Supporting 25,000 global contract and remote workers while saving $17 million annually = $680 savings per employee per year
(*The $680 per-person figure is a simple calculation.)
This cost reduction was achieved not simply by reducing hardware purchase costs, but through improved operational efficiency and enhanced security.

What is Amazon WorkSpaces?

Amazon WorkSpaces is a fully managed desktop virtualization service (DaaS, Desktop-as-a-Service) delivered via the cloud. Users can remotely access their familiar desktop environment from anywhere, at any time, through various devices—all running on AWS's secure infrastructure. Whether you're at home, in a café, or traveling abroad for business, as long as you have an internet connection, you can work as if you were at your office PC.

VDI vs. DaaS: Differences

Many people confuse VDI and DaaS, so let me clarify the key differences. VDI (Virtual Desktop Infrastructure) is where companies build and manage virtual desktop infrastructure in their own data centers. Think of it like purchasing a car and handling all maintenance yourself. In contrast, DaaS (Desktop-as-a-Service) is a model where cloud service providers host and manage the virtual desktop infrastructure, delivering it as a service. This is similar to leasing a car - the leasing company handles maintenance while you simply enjoy using it.
VDI vs. DaaS at a glance:
• Initial Investment: VDI requires high initial hardware investment costs; DaaS is a monthly subscription fee
• Management Burden: VDI is managed directly by the internal IT team; DaaS is managed by the service provider
• Scalability: VDI scaling is limited; DaaS scales faster and more flexibly
• Cost Structure: VDI is CapEx (Capital Expenditure); DaaS is OpEx (Operational Expenditure)

💡 Note: The market uses 'VDI' in two different meanings:
• Narrow meaning: Refers only to on-premises virtual desktops
• Broad meaning: Refers to all virtual desktop technologies (VDI + DaaS)
Therefore, when referring to the VDI market size, it usually means the entire virtual desktop market including DaaS. While Amazon WorkSpaces is clearly a DaaS service, it is classified as part of the VDI market in the broader context.

Real Implementation Cases and Results

Let's examine the actual results companies have achieved with Amazon WorkSpaces rather than focusing on theory.

1) Ferrari Case Study

Challenge: Ferrari was working with over 500 external partners. They needed to protect important intellectual property like the latest automotive design drawings while collaborating efficiently.
Solution: Through Amazon WorkSpaces, they provided isolated virtual desktop environments to each partner. Data remained under Ferrari's control while partners could perform necessary work.
Results:
• 90% reduction in deployment time (from several days to 1 hour)
• Eliminated risk of design data leakage through enhanced security
• Simplified partner onboarding process
Ferrari WorkSpaces Case Study (Go to AWS Official Case Study)

2) Kyowa Kirin Case Study

Challenge: Pharmaceutical companies must comply with strict regulations like HIPAA. At the same time, researchers needed to access data from anywhere.
Results:
• Deployed over 1,600 WorkSpaces in Japan and the United States
• 30% cost reduction compared to on-premises VDI
• Reduced audit response time through compliance automation
• Improved employee satisfaction
Kyowa Kirin Case Study (Go to AWS re:Invent Presentation)

3) Emergency Response Cases During the Pandemic

Amazon WorkSpaces' capabilities shone even brighter during the COVID-19 pandemic.
• Fox Corporation: Established a remote work environment for all 5,000 employees
• MRS BPO: Transitioned 700 call center employees to remote work in just 2 days
This agility can be a significant competitive advantage in rapidly changing business environments.
FOX Case Study (Go to AWS Official Case Study)
MRS BPO Case Study (Go to AWS Official Blog Post)

Key Features of Amazon WorkSpaces

Supported Environments

Amazon WorkSpaces supports a wide range of environments. You can access it from virtually any device.

Operating System Support:
• Windows Server 2016, 2019, 2022
• Windows 10, Windows 11
• Amazon Linux 2
• Ubuntu 22.04 LTS
• Rocky Linux 8

Client Device Support:
• Windows PC, Mac, Linux computers
• Chromebook
• iPad, Fire, Android tablets
• Web browsers (accessible without separate installation)

Strengths of WorkSpaces
• Cloud-native architecture: Designed for the cloud from the beginning, enabling rapid deployment
• Transparent and predictable cost structure: Predictable pricing (pay-as-you-go)
• Tight integration with the AWS ecosystem: Seamless integration with EC2, Lambda, S3, etc.
• Global infrastructure: Reliable service anywhere in the world

⚠️ However, like all services, Amazon WorkSpaces has some limitations:
• Lack of multi-session functionality prevents multiple users from using one WorkSpace simultaneously
• Microsoft-focused organizations should compare with Azure Virtual Desktop to determine which is the better fit
• Large organizations with 5,000+ employees need to calculate and compare TCO with on-premises VDI

WorkSpaces Types

Amazon WorkSpaces offers two types. Each has different purposes, so choose the one that fits your business.

💡 Pro Tip
For first-time deployments, I recommend using a mix of Personal and Pools. Running full-time employees on Personal and interns or project staff on Pools maximizes cost efficiency.

WorkSpaces Personal vs. WorkSpaces Pools:
• Description: Personal is a dedicated virtual desktop for individual users; Pools is a virtual desktop pool shared by multiple users
• Data Retention: Personal persists user settings and data; Pools resets to its initial state upon logout
• Suitable For: Personal suits general office workers and developers; Pools suits call centers, training rooms, and temporary staff
• User Experience: Personal feels like using your own PC; Pools is like a PC cafe where anyone can use a machine and it resets afterward

Performance Bundle Options

Amazon WorkSpaces offers various performance options tailored to different work requirements. It's like choosing between a compact car and a luxury sedan—there's an option for every need.
*The groupings below are not official AWS categories; bundles are grouped by whether they include a GPU, for easier understanding.

1) General Purpose Bundles (Non-GPU)

General purpose bundles are designed for various tasks including office work, business applications, and development. These bundles do not include GPU memory and are optimized for CPU- and memory-intensive workloads such as office productivity, development, and data analysis.

• Value: 1 vCPU, 2GB memory, 80GB–100GB root volume, 10GB–100GB user volume; recommended for basic tasks and email
• Standard: 2 vCPU, 4GB memory, 80GB–175GB root volume, 10GB–100GB user volume; recommended for general business tasks
• Performance: 2 vCPU, 8GB memory, 80GB–175GB root volume, 10GB–100GB user volume; recommended for large file processing
• Power: 4 vCPU, 16GB memory, 80GB–175GB root volume, 10GB–100GB user volume; recommended for data analysis and development
• PowerPro: 8 vCPU, 32GB memory, 80GB–175GB root volume, 10GB–100GB user volume; recommended for high-performance computing
• GeneralPurpose.4xlarge: 16 vCPU, 64GB memory, 175GB root volume, 100GB user volume; recommended for large-scale compilation
• GeneralPurpose.8xlarge: 32 vCPU, 128GB memory, 175GB root volume, 100GB user volume; recommended for high-performance workloads

2) GPU-Enabled Bundles

GPU-enabled bundles leverage NVIDIA GPUs and are optimized for high-performance graphics and computational workloads such as graphics work, 3D rendering, and media production.

• Graphics.g4dn: 4 vCPU, 16GB memory, 16GB GPU memory, 125GB NVMe local storage; recommended for CAD, design, and architecture
• GraphicsPro.g4dn: 16 vCPU, 64GB memory, 16GB GPU memory, 225GB NVMe local storage; recommended for media production, 3D rendering, and ML

Management Features

1) Self-Service Features
Users can perform certain tasks independently without IT assistance, including WorkSpace restarts, volume expansions, and compute type changes. These self-service capabilities significantly reduce IT workload.

2) Integration and Connectivity
Amazon WorkSpaces seamlessly integrates with existing IT infrastructure:
• Active Directory integration (use existing user accounts as-is)
• Microsoft 365 application support
• Smooth integration with AWS services (S3, EC2, etc.)

Recent Feature Updates

Amazon WorkSpaces continuously rolls out new features to enhance its platform.

1) Amazon WorkSpaces Core
WorkSpaces Core provides EC2-based Windows desktops using your own Microsoft Windows licenses (BYOL - Bring Your Own License), without AWS Directory Services.
This approach significantly improves cost efficiency and operational simplicity for organizations with existing Microsoft licensing agreements. WorkSpaces Core also integrates with various VDI partner solutions, enabling seamless connectivity with your existing infrastructure.

2) Amazon WorkSpaces Thin Client
Introduced in 2023, the Amazon WorkSpaces Thin Client is a cost-effective dedicated terminal offering rapid deployment, centralized management, and robust security features. With no local data storage or app installation capabilities, it provides enhanced security and is particularly well-suited for remote work and call center environments.

Amazon WorkSpaces Implementation Guide

Now let's walk through the step-by-step process of implementing Amazon WorkSpaces.

Step 1: Current State Analysis and Requirements Definition
First, you need to review your current IT infrastructure and analyze the number of users and work patterns. Create a list of necessary applications and understand requirements by department. In this phase, you need to answer questions like: "How many employees will work remotely?" "What applications are primarily used?" "What are the data security requirements?"

Step 2: PoC (Proof of Concept) Execution
Select a pilot group of 10-20 people and conduct test operations for 2 weeks. During this period, test performance in actual work environments and collect user feedback to derive improvements. It's important to test various bundle types in the PoC phase to find the optimal configuration for each department.

Step 3: Full-Scale Implementation
Create an AWS account and set up the WorkSpaces directory. Determine appropriate bundle types for each user and configure security policies and networks. This phase involves intensive technical work such as Active Directory integration, VPN setup, and security group configuration.

Step 4: User Training and Deployment
Guide users on how to install client apps for each device and train them on initial login and setup processes. Create and distribute user manuals, and operate a help desk to respond to initial inquiries.

Step 5: Operations and Optimization
Set up monitoring through CloudWatch and establish backup policies. Regularly analyze usage patterns to optimize costs and make continuous improvements based on user feedback.

Pricing and Cost Optimization

Amazon WorkSpaces Pricing Plans

Amazon WorkSpaces offers two running modes with corresponding billing methods.

1) AlwaysOn (Monthly Subscription)
• Fixed monthly charges regardless of usage
• Suitable for full-time users; the AlwaysOn option is advantageous if the WorkSpace is used more than about 4 hours per day
• Predictable monthly costs
• $44 per month for the Standard Ubuntu bundle (2 vCPU, 4GB RAM, 80GB root volume, 10GB user volume) when using WorkSpaces Personal (July 2025, Asia Pacific Seoul region)

2) AutoStop (Hourly Billing)
• Charged hourly only while the WorkSpace is running
• Automatically stops after a period of inactivity, preventing compute costs
• Storage costs are charged monthly regardless of WorkSpace state
• Suitable for part-time users; recommended for interns or short-term project personnel
• Generally more economical when monthly usage is less than 80 hours

* This 80-hour threshold is commonly referenced in AWS official blogs as a typical break-even point, but the actual value may vary by usage pattern. For the most accurate calculation, please use the AWS pricing calculator.
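To make the AlwaysOn vs. AutoStop comparison concrete, here is a minimal break-even sketch in Python. Only the $44/month AlwaysOn figure comes from the example above; the AutoStop hourly rate and fixed monthly storage fee are hypothetical placeholders, so substitute the current rates for your bundle and region from the AWS pricing calculator before drawing conclusions.

# Rough AlwaysOn vs. AutoStop comparison for a single WorkSpace.
# ALWAYSON_MONTHLY comes from the Standard Ubuntu example above;
# the AutoStop rates below are illustrative placeholders only.
ALWAYSON_MONTHLY = 44.00      # USD per month (example above)
AUTOSTOP_HOURLY = 0.30        # USD per running hour (placeholder)
AUTOSTOP_MONTHLY_FEE = 10.00  # USD per month for storage/infrastructure (placeholder)

def autostop_cost(hours_used: float) -> float:
    """Estimated monthly cost of an AutoStop WorkSpace for a given number of running hours."""
    return AUTOSTOP_MONTHLY_FEE + AUTOSTOP_HOURLY * hours_used

def breakeven_hours() -> float:
    """Running hours per month at which AutoStop and AlwaysOn cost the same."""
    return (ALWAYSON_MONTHLY - AUTOSTOP_MONTHLY_FEE) / AUTOSTOP_HOURLY

if __name__ == "__main__":
    for hours in (40, 80, 160):
        print(f"{hours:>3} h/month -> AutoStop ~ ${autostop_cost(hours):.2f} vs AlwaysOn ${ALWAYSON_MONTHLY:.2f}")
    print(f"Break-even at roughly {breakeven_hours():.0f} hours/month with these assumed rates")

With different assumed rates the break-even point shifts, which is why the 80-hour figure above should be treated as a rule of thumb rather than a constant.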
Amazon WorkSpaces Cost Optimization Strategies 1) Utilize Cost Optimizer Tool Automatic analysis of user patterns Recommendations for optimal pricing plans Average 20-30% cost savings possible 2) Right-sizing Strategy Monitor actual resource usage by user Adjust over-specified bundles to appropriate levels Regular review and optimization Implementation Considerations Network Requirements Stable network connectivity is essential for smooth Amazon WorkSpaces usage. AWS recommends the following minimum requirements: Minimum bandwidth:  1Mbps (basic tasks) Recommended bandwidth:  2-5Mbps (general work) Graphics-intensive work:  10Mbps or higher recommended Round-trip time (RTT):  Less than 100ms recommended Compliance and Data Sovereignty In certain industries or regions, the physical location where data is stored can be important. Since Amazon WorkSpaces provides services in multiple regions worldwide, you need to select an appropriate region that meets data sovereignty requirements. Security is one of the biggest obstacles and concerns in remote work. Amazon WorkSpaces provides the following security features to address these concerns: Storage data encryption through AWS Key Management Service (KMS) Encryption in transit using TLS 1.2 Network isolation through VPC security groups Various security standards and compliance certifications (SOC, ISO 27001, PCI DSS, HIPAA, GDPR, etc.) It can be used with confidence even in industries that must comply with strict security regulations, such as finance, healthcare, and government agencies. Change Management and User Training The greatest challenge when implementing a new system often isn't technical—it's user resistance. Here's our recommended approach: Phased implementation:  Start with a pilot group and gradually expand Sufficient training:  Provide user manuals and conduct training sessions Ongoing support:  Operate a help desk after initial implementation Conclusion Amazon WorkSpaces is a robust solution for adapting to evolving work environments. It enables rapid deployment without significant upfront investment, delivers enterprise-grade security with flexible scalability, and provides access from virtually any device. We've seen how global enterprises like Ferrari and Kyowa Kirin have achieved cost savings and improved operational efficiency with Amazon WorkSpaces. Your organization can also leverage Amazon WorkSpaces to maintain productivity while offering employees a flexible work environment. The 2025 releases of WorkSpaces Core further expand options for diverse work environments and requirements. 🚀 Are you considering implementing Amazon WorkSpaces? We will review your remote work infrastructure and build the optimal WorkSpaces environment for your company. Request WorkSpaces Implementation Consultation → Amazon WorkSpaces implementation goes beyond technical deployment— it's a transformation of how your organization works. This project requires expertise spanning technical evaluation, deployment, and operational optimization. Partnering with SmileShark, an official AWS Partner, helps minimize trial and error while establishing a stable remote work environment more efficiently.

  • How to Build RAG with Llama3 and AWS Bedrock Knowledge Base using LangChain

    Building a RAG Chatbot in Minutes: Llama3 + Bedrock KB + LangChain

Written by Hyeonmin Kim

When implementing chatbots, the RAG pattern has become essential rather than optional. To implement the RAG pattern directly, you need to build a series of components including vector embedding, a vector database, and document retrieval. However, this process can be challenging without domain knowledge. Today, we'll implement it with minimal effort - just a few clicks and minimal code - using Bedrock Knowledge Bases, SageMaker JumpStart, and LangChain.

We will:
• Set up Amazon Bedrock Knowledge Bases (referred to as Bedrock KB)
• Deploy the newly added Llama3 using the JumpStart feature
• Use LangChain to search documents and generate messages

Contents
Data Preparation
Bedrock KB Setup
Llama3 Deployment (SageMaker JumpStart)
Inference through LangChain

⚠️ Important Note: Currently, Bedrock KB functionality is not available in the Seoul region. Therefore, we'll use US East (N. Virginia). Please make sure to check this.

Data Preparation

When using Bedrock KB, users don't need to directly embed data and store it in a vector database. Simply store data in S3, and KB internally uses embedding models and vector databases to index the data. From the user's perspective, you only need to create an S3 bucket and store the desired documents.

Navigate to the S3 console and create an S3 bucket to store documents, then upload the desired documents. In this example, we'll upload a Korean-language legal code PDF. Supported data formats are as follows, and each file must not exceed 50MB:

Supported Data Formats:
• Plain text (.txt), Markdown (.md), HyperText Markup Language (.html)
• Microsoft Word document (.doc/.docx), Comma-separated values (.csv)
• Microsoft Excel spreadsheet (.xls, .xlsx), Portable Document Format (.pdf)

Bedrock KB Setup

When setting up Bedrock KB, you can select an embedding model. We'll use Cohere's Embed Multilingual v3 model, which supports Korean, and Amazon OpenSearch Serverless as the vector database.

Navigate to the Bedrock console, go to Knowledge Bases under the Orchestration tab, and create a knowledge base. Set the knowledge base name. You need to set the data source name and the S3 location; we'll specify the previously created bucket. For the embedding model, we'll use Cohere's Embed Multilingual v3 model, which supports Korean and has high performance, and OpenSearch Serverless as the vector database for quick creation. Once all content is complete, create the knowledge base. This process takes several minutes. Once creation is complete, proceed with synchronization. Now the storage part of the RAG pattern has been implemented.

Llama3 Deployment (SageMaker JumpStart)

Since Meta is a SageMaker JumpStart model provider, you can deploy its models with just a few clicks using the JumpStart feature. For this walkthrough, we'll use the Llama 3 8B Instruct model on a g5.2xlarge instance. You can use models with more parameters or Inferentia-based instance types according to your situation.

Access the SageMaker Studio environment and select the JumpStart feature. Search for the model and navigate to Meta-Llama-3-8B-Instruct. Select deployment, configure the deployment settings, and proceed with deployment.

Inference through LangChain

Now let's integrate with LangChain and run QnA using the resources we created. First, import the necessary libraries.
Since LangChain supports SageMaker, you can easily import resources: from langchain_community.llms.sagemaker_endpoint import SagemakerEndpoint from langchain_community.llms.sagemaker_endpoint import LLMContentHandler from langchain_community.retrievers import AmazonKnowledgeBasesRetriever from langchain_core.prompts import PromptTemplate from langchain.chains import RetrievalQA from typing import Dict import json Set the previously configured endpoint name and region. AWS-related config must be set, and the reason for entering the region is because we're using the Virginia region instead of the default region: endpoint_name = "hmkim-llama3" region_name = "us-east-1" Set up a handler that configures the format and content type of input/output data when communicating with the SageMaker Endpoint: class CustomContentHandler(LLMContentHandler): content_type = "application/json" accepts = "application/json" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes: input_str = json.dumps({"inputs": prompt, **model_kwargs}) return input_str.encode('utf-8') def transform_output(self, output: bytes) -> str: response_json = json.loads(output.read().decode('utf-8')) return response_json["generated_text"] The difference from Llama 2 is that the received response is not composed of arrays, so the index part must be removed. Set up the SageMaker endpoint and configure the model. Use the previously configured handler as the handler: llm = SagemakerEndpoint( endpoint_name=endpoint_name, region_name=region_name, model_kwargs={"parameters": { "max_new_tokens": 1024, "top_p": 0.9, "temperature": 0.1, "stop": "<|eot_id|>" }}, content_handler=CustomContentHandler(), ) Declare the retriever. We'll set the created Bedrock Knowledge Bases as the Retriever. Set the ID of the created Bedrock KB: retriever = AmazonKnowledgeBasesRetriever( knowledge_base_id="여기에 Bedrock KB id를 입력해주세요", retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}}, region_name="us-east-1" ) Set a question and pass it to the retriever to test if it brings appropriate data: question = "길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고싶은데, 가능할까?" query = question retriever.get_relevant_documents(query=query) You can confirm that it brings documents like the following. These are appropriate documents. [Document(page_content='<개정 1995. 12. 29.> 제308조(사자의 명예훼손) 공연히 허위의 사실을 적시하여 사자의 명예를 훼손한 자 는 2년 이하의 징역이나 금고 또는 500만 원 이하의 벌금에 처한다. <개정 1995. 12. 29.> 제309조(출판물 등에 의한 명예훼손) ① 사람을 비방할 목적으로 신문, 잡지 또는 라디오 기타 출판물에 의하여 제307조제1 항의 죄를 범한 자는 3년 이하의 징역이나 금고 또는 700만원 이하의 벌금에 처한다. <개정 1995. 12. 29.> ② 제1항의 방법으로 제307조제2항의 죄를 범한 자는 7년 이하의 징역, 10년 이 하의 자격정지 또는 1천500만원 이하의 벌 금에 처한다. <개정 1995. 12. 29.> 제310조(위법성의 조각) 제307조제1항의 행위가 진실한 사실로서 오로지 공공의 이 익에 관한 때에는 처벌하지 아니한다. 제311조(모욕) 공연히 사람을 모욕한 자는 1년 이하의 징역이나 금고 또는 200만원 이 하의 벌금에 처한다. <개정 1995. 12. 29.> 제312조(고소와 피해자의 의사) ① 제308 조와 제311조의 죄는 고소가 있어야 공소 를 제기할 수 있다. <개정 1995. 12. 29.> ② 제307조와 제309조의 죄는 피해자의 명시한 의사에 반하여 공소를 제기할 수 없 다.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.58440983}), Document(page_content='이 경우 검사는 송부받은 날부터 90일 이내에 사법경찰관에게 반환하 여야 한다. [본조신설 2020. 2. 4.] 제245조의6(고소인 등에 대한 송부통지) 사법경찰관은 제245조의5제2호의 경우에 는 그 송부한 날부터 7일 이내에 서면으로 고소인ᆞ고발인ᆞ피해자 또는 그 법정대리 인(피해자가 사망한 경우에는 그 배우자ᆞ 형사소송법 - 215 - 직계친족ᆞ형제자매를 포함한다)에게 사건 을 검사에게 송치하지 아니하는 취지와 그 이유를 통지하여야 한다. [본조신설 2020. 2. 4.] 제245조의7(고소인 등의 이의신청) ① 제 245조의6의 통지를 받은 사람은 해당 사법 경찰관의 소속 관서의 장에게 이의를 신청 할 수 있다. 
② 사법경찰관은 제1항의 신청이 있는 때에는 지체 없이 검사에게 사건을 송치하 고 관계 서류와 증거물을 송부하여야 하며, 처리결과와 그 이유를 제1항의 신청인에게 통지하여야 한다. [본조신설 2020. 2. 4.] 제245조의8(재수사요청 등) ① 검사는 제 245조의5제2호의 경우에 사법경찰관이 사 건을 송치하지 아니한 것이 위법 또는 부당 한 때에는 그 이유를 문서로 명시하여 사법 경찰관에게 재수사를 요청할 수 있다. ② 사법경찰관은 제1항의 요청이 있는 때에는 사건을 재수사하여야 한다. [본조신설 2020. 2. 4.]', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.53611267}), Document(page_content='이 경우 피의자가 이의를 제기하였던 부분 은 읽을 수 있도록 남겨두어야 한다. <개정 2007. 6. 1.> ③ 피의자가 조서에 대하여 이의나 의견 이 없음을 진술한 때에는 피의자로 하여금 그 취지를 자필로 기재하게 하고 조서에 간 인한 후 기명날인 또는 서명하게 한다. <개 정 2007. 6. 1.> 제244조의2(피의자진술의 영상녹화) ① 피 의자의 진술은 영상녹화할 수 있다. 이 경 우 미리 영상녹화사실을 알려주어야 하며, 조사의 개시부터 종료까지의 전 과정 및 객 관적 정황을 영상녹화하여야 한다. ② 제1항에 따른 영상녹화가 완료된 때 에는 피의자 또는 변호인 앞에서 지체 없이 그 원본을 봉인하고 피의자로 하여금 기명 날인 또는 서명하게 하여야 한다. ③ 제2항의 경우에 피의자 또는 변호인 의 요구가 있는 때에는 영상녹화물을 재생 하여 시청하게 하여야 한다. 이 경우 그 내 용에 대하여 이의를 진술하는 때에는 그 취 지를 기재한 서면을 첨부하여야 한다. [본조신설 2007. 6. 1.] 제244조의3(진술거부권 등의 고지) ① 검 사 또는 사법경찰관은 피의자를 신문하기 전에 다음 각 호의 사항을 알려주어야 한 다. 1. 일체의 진술을 하지 아니하거나 개개 의 질문에 대하여 진술을 하지 아니 할 수 있다는 것 2. 진술을 하지 아니하더라도 불이익을 받지 아니한다는 것 3.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.5332642}), Document(page_content='<개정 1995. 12. 29.> ③ 전2항의 청구에 응하지 아니한 때에는 그 공판조서를 유죄의 증거로 할 수 없다. 제56조(공판조서의 증명력) 공판기일의 소 송절차로서 공판조서에 기재된 것은 그 조 서만으로써 증명한다. 제56조의2(공판정에서의 속기·녹음 및 영 상녹화) ① 법원은 검사, 피고인 또는 변호 인의 신청이 있는 때에는 특별한 사정이 없 는 한 공판정에서의 심리의 전부 또는 일부 를 속기사로 하여금 속기하게 하거나 녹음 장치 또는 영상녹화장치를 사용하여 녹음 또는 영상녹화(녹음이 포함된 것을 말한다. 이하 같다)하여야 하며, 필요하다고 인정하 는 때에는 직권으로 이를 명할 수 있다. ② 법원은 속기록ᆞ녹음물 또는 영상녹화 물을 공판조서와 별도로 보관하여야 한다. ③ 검사, 피고인 또는 변호인은 비용을 부담하고 제2항에 따른 속기록ᆞ녹음물 또 는 영상녹화물의 사본을 청구할 수 있다. [전문개정 2007. 6. 1.] 제57조(공무원의 서류) ① 공무원이 작성 하는 서류에는 법률에 다른 규정이 없는 때 에는 작성 연월일과 소속공무소를 기재하고 기명날인 또는 서명하여야 한다. <개정 2007. 6. 1.> ② 서류에는 간인하거나 이에 준하는 조 치를 하여야 한다.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.52974534})] Set up the prompt. We assigned the role of a competent lawyer, asked to refer to documents, and set it to answer in Korean without emojis. When modifying the prompt template, be careful as it follows Llama3's prompt template: system_template = """You are a competent lawyer. Please answer the question using the documents provided. Always answer without emojis in Korean.""" prompt_template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_template}, 문서 : {context}<|eot_id|><|start_header_id|>user<|end_header_id|> 질문: {question}<|eot_id|><|start_header_id|>assistant<|end_header_id|> """ prompt = PromptTemplate( template=prompt_template, input_variables=["context", "question"], partial_variables={"system_template": system_template} ) Create a Retrieval QA proposal using the created template: qa = RetrievalQA.from_chain_type( llm=llm, retriever=retriever, return_source_documents=True, chain_type="stuff", chain_type_kwargs={"prompt": prompt} ) When you proceed with the request and check the response, you can see which documents were referenced and what the answer is: {'query': '길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고싶은데, 가능할까?', 'result': '제308조(사자의 명예훼손)에 따르면, 공연히 허위의 사실을 적시하여 사자의 명예를 훼손한 자는 2년 이하의 징역이나 금고 또는 500만 원 이하의 벌금에 처한다.\n\n이 경우, 길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고 싶은 경우, 다음을 고려해야 합니다.\n\n1. 욕설이 허위의 사실인지 확인해야 합니다. 욕설이 허위의 사실이 아니라면, 명예훼손죄가 적용되지 않을 수 있습니다.\n2. 욕설이 공연히 이루어졌는지 확인해야 합니다. 욕설이 공연히 이루어지지 않았다면, 명예훼손죄가 적용되지 않을 수 있습니다.\n3. 피해자의 명시한 의사에 반하여 공소를 제기할 수 없습니다. 
피해자가 명시한 의사에 반하여 공소를 제기하면, 공소가 제기되지 않을 수 있습니다.\n\n따라서, 길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고 싶은 경우, 위의 고려 사항을 확인하고, 피해자의 명시한 의사에 반하여 공소를 제기하지 않도록 주의해야 합니다.', 'source_documents': [Document(page_content='<개정 1995. 12. 29.> 제308조(사자의 명예훼손) 공연히 허위의 사실을 적시하여 사자의 명예를 훼손한 자 는 2년 이하의 징역이나 금고 또는 500만 원 이하의 벌금에 처한다. <개정 1995. 12. 29.> 제309조(출판물 등에 의한 명예훼손) ① 사람을 비방할 목적으로 신문, 잡지 또는 라디오 기타 출판물에 의하여 제307조제1 항의 죄를 범한 자는 3년 이하의 징역이나 금고 또는 700만원 이하의 벌금에 처한다. <개정 1995. 12. 29.> ② 제1항의 방법으로 제307조제2항의 죄를 범한 자는 7년 이하의 징역, 10년 이 하의 자격정지 또는 1천500만원 이하의 벌 금에 처한다. <개정 1995. 12. 29.> 제310조(위법성의 조각) 제307조제1항의 행위가 진실한 사실로서 오로지 공공의 이 익에 관한 때에는 처벌하지 아니한다. 제311조(모욕) 공연히 사람을 모욕한 자는 1년 이하의 징역이나 금고 또는 200만원 이 하의 벌금에 처한다. <개정 1995. 12. 29.> 제312조(고소와 피해자의 의사) ① 제308 조와 제311조의 죄는 고소가 있어야 공소 를 제기할 수 있다. <개정 1995. 12. 29.> ② 제307조와 제309조의 죄는 피해자의 명시한 의사에 반하여 공소를 제기할 수 없 다.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.58440983}), Document(page_content='이 경우 검사는 송부받은 날부터 90일 이내에 사법경찰관에게 반환하 여야 한다. [본조신설 2020. 2. 4.] 제245조의6(고소인 등에 대한 송부통지) 사법경찰관은 제245조의5제2호의 경우에 는 그 송부한 날부터 7일 이내에 서면으로 고소인ᆞ고발인ᆞ피해자 또는 그 법정대리 인(피해자가 사망한 경우에는 그 배우자ᆞ 형사소송법 - 215 - 직계친족ᆞ형제자매를 포함한다)에게 사건 을 검사에게 송치하지 아니하는 취지와 그 이유를 통지하여야 한다. [본조신설 2020. 2. 4.] 제245조의7(고소인 등의 이의신청) ① 제 245조의6의 통지를 받은 사람은 해당 사법 경찰관의 소속 관서의 장에게 이의를 신청 할 수 있다. ② 사법경찰관은 제1항의 신청이 있는 때에는 지체 없이 검사에게 사건을 송치하 고 관계 서류와 증거물을 송부하여야 하며, 처리결과와 그 이유를 제1항의 신청인에게 통지하여야 한다. [본조신설 2020. 2. 4.] 제245조의8(재수사요청 등) ① 검사는 제 245조의5제2호의 경우에 사법경찰관이 사 건을 송치하지 아니한 것이 위법 또는 부당 한 때에는 그 이유를 문서로 명시하여 사법 경찰관에게 재수사를 요청할 수 있다. ② 사법경찰관은 제1항의 요청이 있는 때에는 사건을 재수사하여야 한다. [본조신설 2020. 2. 4.]', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.53611267}), Document(page_content='이 경우 피의자가 이의를 제기하였던 부분 은 읽을 수 있도록 남겨두어야 한다. <개정 2007. 6. 1.> ③ 피의자가 조서에 대하여 이의나 의견 이 없음을 진술한 때에는 피의자로 하여금 그 취지를 자필로 기재하게 하고 조서에 간 인한 후 기명날인 또는 서명하게 한다. <개 정 2007. 6. 1.> 제244조의2(피의자진술의 영상녹화) ① 피 의자의 진술은 영상녹화할 수 있다. 이 경 우 미리 영상녹화사실을 알려주어야 하며, 조사의 개시부터 종료까지의 전 과정 및 객 관적 정황을 영상녹화하여야 한다. ② 제1항에 따른 영상녹화가 완료된 때 에는 피의자 또는 변호인 앞에서 지체 없이 그 원본을 봉인하고 피의자로 하여금 기명 날인 또는 서명하게 하여야 한다. ③ 제2항의 경우에 피의자 또는 변호인 의 요구가 있는 때에는 영상녹화물을 재생 하여 시청하게 하여야 한다. 이 경우 그 내 용에 대하여 이의를 진술하는 때에는 그 취 지를 기재한 서면을 첨부하여야 한다. [본조신설 2007. 6. 1.] 제244조의3(진술거부권 등의 고지) ① 검 사 또는 사법경찰관은 피의자를 신문하기 전에 다음 각 호의 사항을 알려주어야 한 다. 1. 일체의 진술을 하지 아니하거나 개개 의 질문에 대하여 진술을 하지 아니 할 수 있다는 것 2. 진술을 하지 아니하더라도 불이익을 받지 아니한다는 것 3.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.5332642}), Document(page_content='<개정 1995. 12. 29.> ③ 전2항의 청구에 응하지 아니한 때에는 그 공판조서를 유죄의 증거로 할 수 없다. 제56조(공판조서의 증명력) 공판기일의 소 송절차로서 공판조서에 기재된 것은 그 조 서만으로써 증명한다. 제56조의2(공판정에서의 속기·녹음 및 영 상녹화) ① 법원은 검사, 피고인 또는 변호 인의 신청이 있는 때에는 특별한 사정이 없 는 한 공판정에서의 심리의 전부 또는 일부 를 속기사로 하여금 속기하게 하거나 녹음 장치 또는 영상녹화장치를 사용하여 녹음 또는 영상녹화(녹음이 포함된 것을 말한다. 이하 같다)하여야 하며, 필요하다고 인정하 는 때에는 직권으로 이를 명할 수 있다. ② 법원은 속기록ᆞ녹음물 또는 영상녹화 물을 공판조서와 별도로 보관하여야 한다. ③ 검사, 피고인 또는 변호인은 비용을 부담하고 제2항에 따른 속기록ᆞ녹음물 또 는 영상녹화물의 사본을 청구할 수 있다. [전문개정 2007. 6. 1.] 제57조(공무원의 서류) ① 공무원이 작성 하는 서류에는 법률에 다른 규정이 없는 때 에는 작성 연월일과 소속공무소를 기재하고 기명날인 또는 서명하여야 한다. <개정 2007. 6. 1.> ② 서류에는 간인하거나 이에 준하는 조 치를 하여야 한다.', metadata={'location': {'s3Location': {'uri': 's3://hmkim-bedrock-kb-example/법전.pdf'}, 'type': 'S3'}, 'score': 0.52974534})]} Looking at the output answer, you can see very accurate results were produced: 제308조(사자의 명예훼손)에 따르면, 공연히 허위의 사실을 적시하여 사자의 명예를 훼손한 자는 2년 이하의 징역이나 금고 또는 500만 원 이하의 벌금에 처한다. 
이 경우, 길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고 싶은 경우, 다음을 고려해야 합니다. 1. 욕설이 허위의 사실인지 확인해야 합니다. 욕설이 허위의 사실이 아니라면, 명예훼손죄가 적용되지 않을 수 있습니다. 2. 욕설이 공연히 이루어졌는지 확인해야 합니다. 욕설이 공연히 이루어지지 않았다면, 명예훼손죄가 적용되지 않을 수 있습니다. 3. 피해자의 명시한 의사에 반하여 공소를 제기할 수 없습니다. 피해자가 명시한 의사에 반하여 공소를 제기하면, 공소가 제기되지 않을 수 있습니다. 따라서, 길을 가다가 심한 욕을 들어서 명예훼손으로 신고하고 싶은 경우, 위의 고려 사항을 확인하고, 피해자의 명시한 의사에 반하여 공소를 제기하지 않도록 주의해야 합니다. Today, we implemented the RAG pattern with minimal effort using the Llama3 model, Bedrock KB retriever, and LangChain, a framework that helps utilize these tools. Since no separate domain knowledge is required, it should be easy to utilize even without much specialized knowledge. This concludes our post.

  • Celebrating the General Availability of Amazon SageMaker Unified Studio with a Sneak Peek

    Celebrating the General Availability of Amazon SageMaker Unified Studio with a Sneak Peek

Written by Minhyeok Cha

Introduction

Amazon SageMaker Unified Studio became generally available in mid-March, so I thought I'd take a look at what it is and what it can do for you as a partner and as an AWS cloud user.

Amazon SageMaker Unified Studio is a new IDE environment that integrates the AWS services related to AI/ML and data analytics. These include Amazon Athena, Amazon EMR, AWS Glue, Amazon Redshift, Amazon Managed Workflows for Apache Airflow (Amazon MWAA), and the capabilities and tools of the various standalone “studios”, query editors, and visual tools found in the existing SageMaker Studio. It allowed us to locate and access all of the data in our AWS organization and gave our practitioners a single development environment, minimizing access control management so we could focus on AI application development.

Additionally, Amazon Q Developer is included, which provides a chatbot interface like ChatGPT and accelerates tasks such as writing SQL queries, building ETL jobs, troubleshooting, and generating real-time code suggestions.

Contents
From Amazon SageMaker Unified Studio domains to project creation
What else has been added since the general release?
Wrapping up

From Amazon SageMaker Unified Studio domains to project creation

So let's create a domain directly from the console to see what features are available after the general release. The console says to create a domain first, so let's go ahead and do that. First, the good news: with the general release, domains can now be created in the Seoul region.

When creating a domain, you need at least three Availability Zones to deploy to. You don't need to grant permission to add it to the Glue database; that permission is granted automatically when you create the domain, and AWS Lake Formation registers it for you. When you're done, you'll land on the studio home screen.

In the center, click Create Project to create a space to work in. During creation, you'll see the following items, where you'll name each of the DBs you'll use in the project. Since we're just experimenting, we'll go with the default values.

What else has been added since the general release?

First up is Amazon Bedrock. All of the FM models are available for selection here; on my test account I had only requested access to one model, which is a bit embarrassing. In addition to selecting FM models, one of the main features, building a knowledge base, is now possible in the studio itself. When sourcing documents from S3, local data, or web crawling within a project and creating a Knowledge Base, you can work with the FM model selection as above. Based on the Knowledge Base you created, you can build a canvas like the one shown: connect each node to assemble your own configured Bedrock flow.

Next up is the Amazon Q Developer integration mentioned above. You can now write query statements with Q; it's the free tier, and it works well. A free Amazon Q Developer subscription is added automatically when you create Amazon SageMaker Unified Studio, so you don't need to touch it further.

Here is the integration of the DBs mentioned when creating the project. SageMaker Lakehouse unifies data lakes, data warehouses, and data sources for easier management.

Wrapping up

We didn't revisit the core features already covered during the preview stage and at AWS re:Invent 2024. What I took away from this official launch is one word: "convenient." AWS has so many services that the existing SageMaker could be difficult to use, but with the release of SageMaker Unified Studio, simply building a domain automatically assigns permissions and integrates each DB for easy management. Of course, when importing existing DBs into SageMaker Unified Studio, it is still cumbersome to assign permissions to each service and register Lake Formation permissions for the data. However, for someone starting a project for the first time, I think this is enough to reduce the burden of access management.

  • Manage Your Amazon S3 Objects with Amazon S3 Metadata!

    Manage Your Amazon S3 Objects with Amazon S3 Metadata!

Written by Minhyeok Cha

How do you manage your Amazon S3 objects? Do you search for them directly in the console? Use the CLI or an SDK? Or maybe you rely on Glue crawling?

Recently, my company noticed that S3 costs were gradually piling up, so I started looking for ways to reduce them. Initially, I thought, "We can just move unused data to Glacier, and that’s it." However, managing the massive amount of data accumulated over about six years in a single bucket turned out to be a bit tricky. That’s when I noticed the "table bucket" feature and thought, “Why not give the relatively new S3 Metadata a try?” Fortunately, it worked out well, and I’d like to share my experience.

Table of Contents
What is Amazon S3 Metadata?
What is AWS Lake Formation?
Demo
S3 Cost Optimization Strategy
Conclusion

What is Amazon S3 Metadata? (Source: AWS)

You can find an introduction to Amazon S3 Metadata in an article I previously wrote, titled A Summary of Key Announcements from AWS re:Invent in 10 Minutes. In that article, I mentioned that S3 Metadata can be integrated with the AWS Glue Data Catalog. However, in this post, I’ll explore using AWS Lake Formation instead. Initially, I planned to use AWS Glue’s crawling feature, but decided to experiment with the officially released table bucket and Amazon S3 Metadata, which came out earlier this year.

What is AWS Lake Formation?

So, what exactly is AWS Lake Formation? AWS Lake Formation simplifies and automates the complex and time-consuming tasks involved in building a data lake. These tasks include collecting, cleaning, moving, and cataloging data, and ensuring secure access for analytics and machine learning.

It also provides its own permission management model based on AWS Identity and Access Management (IAM). This centralized permission management model allows for fine-grained access control to the data lake through a simple grant/revoke mechanism. Permissions in AWS Lake Formation can be applied at the table and column levels for all datasets in the data lake. Services integrated with this permission management include AWS Glue, Amazon Athena, Amazon Redshift Spectrum, and Amazon QuickSight.

However, our primary goal is to query S3 objects without crawling, so we’ll be using Lake Formation mainly as a connection pathway.

Demo

Since my company account has restricted permissions, this demo will be conducted using a test account.

💡 Table Buckets and Amazon S3 Metadata are only available in the Ohio and Northern Virginia regions.

Step 1: Create an S3 Table Bucket

Step 2: Generate Metadata for the S3 Bucket to Test

That completes the connection between S3 and the table bucket.

Step 3: Check with Amazon Athena

However, if you try accessing Athena without cataloging, nothing will show up. In fact, you need to create a catalog through AWS Glue. Fortunately, a new feature in Lake Formation now allows automatic integration of S3 Tables, making the setup process smoother.

Step 4: Enable S3 Table Integration in AWS Lake Formation

When integrating, make sure to specify a role with S3 access permissions. Once the integration is successful, the catalog will be displayed as shown below. Go into the catalog and proceed with the policy settings. In the Permissions section, click Grant to continue. If you followed the steps correctly, go to Athena to check if the S3 data appears as expected.

Step 5: Successful Amazon Athena Query!

The data appeared without using AWS Glue, and the query executed successfully.
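For reference, here is a minimal sketch of the kind of query used in this step, run through Athena with boto3. The catalog, database, table, and result-bucket names are placeholders (use whatever the Lake Formation integration registered for your own table bucket), and the column names (key, size, last_modified_date, storage_class, record_type) follow the S3 Metadata table schema as documented at the time of writing, so verify them against your table before running it.

import boto3

# Placeholder names - replace with the catalog/database/table that the
# Lake Formation integration registered for your metadata table bucket,
# and with an S3 bucket you own for Athena query results.
ATHENA_CATALOG = "s3tablescatalog"
ATHENA_DATABASE = "aws_s3_metadata"
RESULT_LOCATION = "s3://my-athena-results-bucket/"

# Objects still in STANDARD that haven't been modified in ~3 years are
# good candidates for moving to Glacier.
QUERY = """
SELECT key, size, last_modified_date, storage_class
FROM my_bucket_metadata
WHERE record_type = 'CREATE'
  AND storage_class = 'STANDARD'
  AND last_modified_date < current_timestamp - interval '3' year
ORDER BY size DESC
"""

athena = boto3.client("athena", region_name="us-east-1")
response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Catalog": ATHENA_CATALOG, "Database": ATHENA_DATABASE},
    ResultConfiguration={"OutputLocation": RESULT_LOCATION},
)
print("Started query:", response["QueryExecutionId"])

The result set can then be downloaded as a CSV from the Athena console (or fetched with get_query_results) and fed into the cost optimization step described next.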
S3 Cost Optimization Strategy

The optimization process was straightforward. I wrote queries along the lines of the example above, downloaded the results as a CSV, and used the CLI to move the objects identified by the query to the Glacier storage class.

S3 Lifecycle Management

Following that, I configured S3 Lifecycle policies to automatically move data to Glacier over time.

Conclusion

I decided to try out AWS’s new features and finally got around to it in March 2025. I had heard countless times about S3 cost optimization, but trying it out myself instead of relying on consulting felt quite refreshing. For those who haven’t managed their S3 buckets before, I think this new approach is definitely worth considering. It’s simpler to use than setting up Glue, which I found particularly appealing. However, I did find AWS Lake Formation’s setup a bit tricky at first. Still, if you need to manage the data in your buckets, it might be worth giving it a try.

✅ Note: Deleting S3 table buckets can only be done via the CLI or SDK, so keep that in mind.

bottom of page