What Is the AI-DLC (AI-Driven Development Lifecycle)?

Written by Minhyeok Cha
Introduction
We are now living in an era where AI can seemingly do everything for us. With tools capable of generating code automatically, an important question naturally arises: what should developers focus on next?
The AI generated working code… but can this really be deployed to production?
Today, generating code with tools such as GitHub Copilot, Claude, and ChatGPT has become commonplace. However, building production-grade software requires far more than simply generating code.
- Are the requirements clearly defined?
- Is the architecture scalable?
- Have security considerations been addressed?
- Has the software been sufficiently tested?
There is a significant difference between AI generating code effectively and using AI to build high-quality software systems.
I occasionally experiment with AI-assisted vibe coding myself. While the initial results may appear impressive, the outcome often includes disorganized documentation and spaghetti code that quickly becomes difficult to maintain.
During an AWS Summit and workshop, I was introduced to a methodology called AI-DLC, which aims to address exactly these challenges.
What Is AI-DLC?
AI-DLC (AI-Driven Development Lifecycle) is a methodology that systematically integrates AI throughout the entire software development lifecycle.
While traditional SDLC processes are primarily human-driven, AI-DLC enables AI to actively participate in each phase while humans remain responsible for decision-making and validation.
AI-DLC is broadly divided into three stages.
1. Inception (Planning): What should be built?
- Establishing context from the existing codebase
- Clarifying intent through user stories
- Creating implementation plans at the task level
2. Construction (Implementation): How should it be built?
- Domain modeling (component modeling)
- Code and test generation
- Architecture component implementation
- Deployment through IaC and testing
3. Operations: How should it be operated?
- Production deployment using IaC
- Incident management
💡 In this article, we will focus primarily on the Inception phase to better understand the overall workflow.
Why Use the AI-DLC?
Without a structured methodology such as AI-DLC, AI-assisted vibe coding can often lead to development challenges. Teams may use AI-generated code without sufficient review, or later discover that the codebase has become so disorganized that rebuilding from scratch appears easier than maintaining the existing implementation.
These are not the only problems that arise when AI coding tools are used without a structured process.
Lack of Context — AI generates code without understanding business requirements
Inconsistency — Different coding styles and implementation patterns emerge across generated outputs
Lack of Validation — Generated code may not fully satisfy functional or technical requirements
Lack of Traceability — There is no clear record explaining implementation or architectural decisions
AI-DLC addresses these challenges through a structured development process.
Example Scenario
If the concept still feels abstract, let us consider what typically happens when a user asks AI to “create a login feature.”
When asked to build a login feature, AI can often produce a working result quickly.
However, additional requirements may emerge later, such as social login, password policies, session expiration, or MFA registration. At that point, developers may need to modify the previously generated code.
In many cases, however, the existing implementation may be difficult to maintain, leading teams to rebuild the feature from scratch.
This is not necessarily a failure of AI. Rather, it is a fundamental issue caused by insufficiently communicating the necessary requirements to AI before implementation begins.
The Inception phase of AI-DLC is designed to address this problem. Before asking AI to generate code, developers first work with AI to clarify what should be built.
Let us compare two different approaches to the same login feature.
Approach 1: Code Generation

💡 Development begins immediately using commonly adopted implementation patterns.
Approach 2: AI-DLC Requirement Analysis

💡 The AI asks clarifying questions to better understand the user's exact objectives.
(Any chatbot-related references shown in the example originated from another project context and can be ignored.)
Just as in the Inception phase, the Construction and Operations stages also rely on iterative interactions with AI to produce more complete and maintainable outcomes.
Applying the AI-DLC Methodology to Kiro
🔗 awslabs/aidlc-workflows https://github.com/awslabs/aidlc-workflows?tab=readme-ov-file
From the repository above, download the ai-dlc-rules-v<release-number>.zip package and extract it within your project directory.
The archive contains two rule-set folders, which should be added to the Kiro steering configuration using the following commands.
macOS / Linux:
mkdir -p .kiro/steering
cp -R ~/Downloads/aidlc-rules/aws-aidlc-rules .kiro/steering/
cp -R ~/Downloads/aidlc-rules/aws-aidlc-rule-details .kiro/
Windows (PowerShell):
New-Item -ItemType Directory -Force -Path ".kiro\steering"
Copy-Item -Recurse "$env:USERPROFILE\Downloads\aidlc-rules\aws-aidlc-rules" ".kiro\steering\"
Copy-Item -Recurse "$env:USERPROFILE\Downloads\aidlc-rules\aws-aidlc-rule-details" ".kiro\"
<project-root>/
└── .kiro/
    ├── steering/
    │   └── aws-aidlc-rules/
    └── aws-aidlc-rule-details/
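If you want to sanity-check the layout from a terminal before opening the IDE, a minimal shell sketch like the following can help. Note that it simulates the copy step with `mkdir -p` for illustration only; in a real setup these directories come from copying the extracted rule sets, and the paths assume you are in the project root.

```shell
# Simulate the expected layout (illustrative only; in practice these
# directories are created by copying the extracted rule sets).
mkdir -p .kiro/steering/aws-aidlc-rules
mkdir -p .kiro/aws-aidlc-rule-details

# Verify that both rule-set directories are where Kiro expects them.
for d in .kiro/steering/aws-aidlc-rules .kiro/aws-aidlc-rule-details; do
  if [ -d "$d" ]; then
    echo "OK: $d"
  else
    echo "MISSING: $d"
  fi
done
```

Both lines should print `OK`; a `MISSING` line means the corresponding copy step did not land in the right place.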
Please verify that the aws-aidlc-rules directory is correctly configured within the Kiro IDE steering panel, as shown below.

💡 Only one of the two rule sets, aws-aidlc-rules, is continuously loaded because keeping the entire aws-aidlc-rule-details directory in context would consume too many tokens.
The statement “When performing any phase, you MUST read and use relevant content from rule detail files” means that the AI directly reads the necessary files during workflow execution using tools such as fs_read.
Conclusion
I first came across the AI-DLC methodology around the fourth quarter of 2025.
At the time, the industry was flooded with AI-related content, and AI-DLC initially appeared to be just another topic among many similar discussions.
However, recent discussion surrounding Andrej Karpathy’s observations on the limitations of LLM-based coding has attracted significant attention. A related GitHub repository built around these ideas quickly surpassed 100,000 stars, drawing widespread interest from the developer community.
🔗 forrestchang/andrej-karpathy-skills https://github.com/forrestchang/andrej-karpathy-skills
The core idea behind the project is straightforward. By providing AI systems with rule files, developers can guide the model to follow principles such as “think before coding,” “keep implementations simple,” and “verify before proceeding.”
What stood out to me was how closely this approach aligns with the AI-DLC methodology I encountered last year.
Both approaches guide AI agent behavior by injecting structured rules into the model’s working context. The difference is that while Karpathy Skills primarily focuses on improving coding behavior itself through a small set of core principles, AI-DLC incorporates those principles while systematizing the entire software development lifecycle — from planning and architecture to implementation, testing, and operations.
The fact that so many developers resonated with the idea that “providing rules to AI changes the quality of the outcome” signals an important shift in the industry. We are moving beyond simply using AI coding tools toward learning how to use them effectively.
Through this article, I hope AI-DLC can serve as one possible approach for organizations and developers looking to adopt AI more effectively within their development workflows.