Revolutionize Your Codebase Updates with o3-mini from ChatGPT: The Future of Automated Development
In this post, I explore how OpenAI's o3-mini LLM, used through ChatGPT, is transforming the way I update my codebase. This approach streamlines bug fixes and new features across Python, JavaScript, and JSON configuration files in a unified and semi-automated manner.
Introducing o3-mini
One of the key strengths of o3-mini is its large context size, which allows it to process entire codebases effectively. o3-mini has a context window of 200,000 tokens and a maximum output of 100,000 tokens (see the official o3-mini API launch here). This means o3-mini can accommodate multiple source files in a single prompt, enabling comprehensive code analysis and modifications across interdependent files. This makes it ideal for large-scale codebase updates without requiring fragmented inputs.
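As a quick sanity check before pasting a snapshot, it helps to estimate its token footprint. The sketch below is a heuristic, not part of the workflow scripts: the four-characters-per-token ratio is a rough approximation for English text and code, and the file list in the usage comment is hypothetical.

CHARS_PER_TOKEN = 4        # rough approximation; not an exact tokenizer
CONTEXT_WINDOW = 200_000   # o3-mini context size in tokens

def estimate_prompt_tokens(paths):
    """Estimate the token footprint of the files that will go into the prompt."""
    total_chars = 0
    for path in paths:
        with open(path, "r", encoding="utf-8", errors="ignore") as f:
            total_chars += len(f.read())
    return total_chars // CHARS_PER_TOKEN

# Hypothetical usage:
# tokens = estimate_prompt_tokens(["app/page.tsx", "components/Header.tsx"])
# print(f"~{tokens} of {CONTEXT_WINDOW} tokens")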
The o3-mini model is a lightweight yet powerful language model available through the ChatGPT web interface. It excels at interpreting natural language instructions, making it well suited to automating complex codebase modifications. To learn more about the capabilities of ChatGPT and its models, visit ChatGPT.
My Use Case: Using o3-mini to Automate Codebase Changes
My projects, primarily built in Python and JavaScript with some JSON configuration files, needed a reliable method to handle code modifications efficiently. Here’s how I achieved that:
- Bug Fixes and Feature Additions: o3-mini analyzes my entire codebase, ensuring consistency across files when addressing bugs or adding new functionality.
- Comprehensive Updates: Instead of updating files one by one, I use full-codebase analysis, allowing the model to propose interdependent changes in a consistent and unified manner.
The Approach
My approach involves creating a prompt that includes the instruction and the complete project codebase in a structured format. I then instruct the model to generate the modified files in JSON format. The o3-mini model processes this prompt, and its response is captured to apply the changes to the codebase.
This approach is supported by two scripts:
- generate_codebase_prompt.py: Automates the creation of a structured prompt that includes the full codebase.
- apply_changes.py: Parses the model's output and updates the corresponding files in the codebase.
These Python scripts can be found at the end of this post.
This method provides a clear, organized, and scalable way to apply code changes consistently across the entire project.
The Workflow
The following diagram illustrates my workflow for instructing and implementing changes to my codebase using o3-mini:

The sections below describe each step of the workflow in detail.
1. Creating a Codebase Snapshot
I created a codebase.txt file in the project root that lists all filenames (with relative paths) on separate lines. This snapshot is essential for identifying the files to be processed.
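For illustration, a codebase.txt for a project like this might look as follows (the paths are hypothetical; lines starting with # are treated as comments and skipped by the prompt generator):

# configuration
./next.config.js
./data/siteMetadata.js
# application code
./app/[locale]/page.tsx
./components/navigation/Header.tsx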
2. Generating a Unified Prompt
I use a Python script called generate_codebase_prompt.py to automate this step. It works as follows:
- Reads codebase.txt, ignoring lines starting with #, to collect file paths.
- Reads the full content of each file and handles errors gracefully if a file cannot be read.
- Constructs a JSON array where each entry contains the filename (with path) and file contents, then appends it to a fixed prompt text.
This prompt instructs o3-mini to output only the modified files, maintaining the same filename (with path) and updated content. It instructs the model to output the results in JSON format with a given structure. Here’s the generic structure of the prompt:
You are an AI assistant tasked with modifying a codebase based on the instruction provided above. The full codebase is provided below as a JSON array, where each object represents a file with its relative path and complete contents. Your job is to apply the instruction and return a pure JSON object that includes only the modified files along with a brief explanation of the changes.
The JSON object must have exactly two keys:
- "files": an array of objects, each with:
- "path": the relative file path
- "content": the full content of the modified file
- "explanation": a concise explanation of the modifications made
Do not include any additional text, markdown formatting, or commentary outside this JSON object.
Codebase:
...
To quickly copy the generated prompt into ChatGPT, I use the following command:
python3 generate_codebase_prompt.py ./codebase.txt | xclip -selection clipboard
Using xclip ensures that the output is placed directly into the clipboard, allowing me to paste it into ChatGPT without additional steps. To install xclip on Ubuntu, run the following command:
sudo apt update && sudo apt install xclip
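On macOS, the built-in pbcopy command plays the same role (untested here, but the pipeline is identical):
python3 generate_codebase_prompt.py ./codebase.txt | pbcopy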
3. Interacting with ChatGPT
Using the generated prompt, I interact with ChatGPT by providing a clear instruction, such as:
"Implement the new feature ..."
or
"Fix this error ..."
After typing my instruction, I paste the codebase prompt generated in the previous step, which is already in my clipboard. The screenshot below shows the interaction with ChatGPT.

4. Running o3-mini and Capturing the Output
I run o3-mini with the prompt and capture its output into a text file. This output contains the JSON-formatted changes, listing only the modified files. I copy the result using ChatGPT's Copy button and paste it into a text file that I use in the next step. The screenshot below shows the output of o3-mini and the Copy button.
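For reference, the captured file is a JSON object with the two keys described in the prompt. Below is a trimmed, illustrative example (file contents shortened; not the actual model output):

{
  "files": [
    {
      "path": "./components/navigation/Header.tsx",
      "content": "... full updated file content ..."
    }
  ],
  "explanation": "Added the Top menu entry to the header."
}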

5. Updating the Codebase
I use the Python script apply_changes.py to apply the changes generated by ChatGPT. This script processes a JSON file containing the modified files and their new contents, then updates or creates the files accordingly. It ensures that necessary directories exist, writes the new content, and logs the changes.
To run the script, I use the following command, where inputs.txt is the file where I captured the output of ChatGPT in the previous step:
python3 apply_changes.py ./inputs.txt
Example output of the update script:
Updated: ./app/[locale]/top/page.tsx
Updated: ./components/navigation/Header.tsx
Total files updated: 2
6. Verifying Changes and Testing
After applying the changes, I run git diff to review all modifications and confirm they align with the intended updates. Finally, I run the project's tests to confirm that everything still works.
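The exact checks depend on the project; for a Next.js project like this one, the verification is roughly the following (the npm script names are assumptions and may differ per project):

git diff
npm run lint
npm run build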
Practical Cases: Next.js Blog Project
The project I evaluate for codebase updates in this post is a Next.js blog project. Here are some key metrics about the project:
| Metric | Value |
| --- | --- |
| Files | ~130 |
| LOC | ~13,000 |
| Languages | TS, JS, JSON |
| Frameworks | Next.js, React, Tailwind CSS |
Below are several practical cases where I apply these automated updates to this Next.js blog project.
Instruction 1
To enhance the user experience, I decided to add a new menu entry called Top to the left of "Blog" in the header of the front page.
In the header of the front page, add a new menu entry to the left of "Blog", called Top (with a capital T). Consider that this is a multilingual blog, and all labels in the top menu have translations per language. Implement something equivalent for Top such that in all languages the label remains Top. When pressing Top, the Featured post entries will be shown.
This change introduced an error, which I address in the next step.
Instruction 2
After implementing the new menu entry, I encountered an error related to the dynamic route handling in Next.js. I instructed the model to fix the error.
When pressing Top I get this error. Fix it.

Error: Route "/[locale]/top" used params.locale. params should be awaited before using its properties. Learn more: https://nextjs.org/docs/messages/sync-dynamic-apis
    at locale (app/[locale]/top/page.tsx:13:10)
  11 |
  12 | export default async function TopPage({ params }: PageProps) {
  13 |   const { locale } = params
     |          ^
  14 |   const sortedPosts = sortPosts(allBlogs)
  15 |   const posts = allCoreContent(sortedPosts)
  16 |   const featuredPosts = posts.filter((p) => p.language === locale && p.featured === true)
At this point, the new function worked as intended.
Instruction 3
To improve the user experience, I decided to remove the limitation of showing only two Featured posts when selecting Top.
When selecting Top all Featured posts will be shown. Currently there is a limitation of two. However, on the main page you will keep only a maximum of two Featured posts as implemented now.
The change worked as intended.
Instruction 4
During the build process, I encountered a formatting error related to Prettier. I instructed the model to fix the error.
Fix this error

Linting and checking validity of types ..
⚠ TypeScript project references are not fully supported. Attempting to build in incremental mode.
Failed to compile.

./app/[locale]/top/page.tsx
18:38  Error: Replace p with (p)  prettier/prettier

./components/navigation/Header.tsx
59:53  Error: Replace {link.title·===·'Top'·?·'Top'·:·t(link.title.toLowerCase())} with ⏎······················{link.title·===·'Top'·?·'Top'·:·t(link.title.toLowerCase())}⏎····················  prettier/prettier

info  - Need to disable some ESLint rules? Learn more here: https://nextjs.org/docs/app/api-reference/config/eslint#disabling-rules
The formatting error was fixed.
Conclusion
Utilizing o3-mini to analyze and update an entire codebase offers several advantages:
- Comprehensive Insight: The LLM considers the big picture, proposing changes that account for dependencies across multiple files.
- Consistency: Automated updates ensure that interdependent modifications are applied uniformly.
- Efficiency: While processing time depends on the size of the codebase, the overall workflow is quick and effective.
- Enhanced Development Workflow: This method minimizes manual intervention, reduces errors, and accelerates both bug fixes and feature implementations.
By giving the model access to a full codebase, I have streamlined my development process and demonstrated that automated, unified updates can be smarter and more coherent than handling files individually.
Models with expanded context windows and increased output limits, such as o3-mini, are what make this approach possible.
Next Steps
I have not used o3-mini-high for these examples, but the approach remains valid. In the future, I plan to assess this model as well to compare its effectiveness and performance in automating codebase updates.
The next natural step will be to build a tool that automates this workflow using the o3-mini API. This tool would follow the structured approach outlined in this post, enabling seamless codebase modifications while integrating directly with a Git repository to commit updates efficiently. By using o3-mini via API, the process can be fully automated, reducing manual intervention and ensuring version-controlled, structured improvements across projects.
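As a starting point, the API call could look like the minimal sketch below. It assumes the official OpenAI Python SDK and reuses the two scripts from this post; the model identifier, parameters, and helper wiring are assumptions to verify against the current API documentation, not a finished tool.

import subprocess
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Reuse generate_codebase_prompt.py to build the codebase portion of the prompt.
codebase_prompt = subprocess.run(
    ["python3", "generate_codebase_prompt.py", "./codebase.txt"],
    capture_output=True, text=True, check=True,
).stdout

instruction = "Fix this error ..."  # the manual instruction placed before the codebase

response = client.chat.completions.create(
    model="o3-mini",  # assumption: verify the exact model identifier
    messages=[{"role": "user", "content": instruction + "\n\n" + codebase_prompt}],
)

# Save the JSON output so apply_changes.py can apply it, as in step 5.
with open("inputs.txt", "w", encoding="utf-8") as f:
    f.write(response.choices[0].message.content)

From there, apply_changes.py can be run on inputs.txt exactly as in step 5, and the result reviewed and committed with Git.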
Python Scripts
Below are the Python scripts used in this workflow. These scripts automate the process of generating prompts and applying changes to the codebase, making the workflow efficient and scalable.
generate_codebase_prompt.py
This script reads a list of file paths, extracts their content, and formats them into a structured prompt that can be used with an LLM to analyze and modify the codebase.
"""
generate_codebase_prompt.py
================
Description:
This script reads a file containing file paths (one per line), ignoring any lines
that start with the '#' character. For each valid file path, it reads the full file
contents and constructs a JSON array where each element is an object containing the file's
"path" and "content". This JSON array is appended to a fixed prompt text which is intended
for use with a Language Model (LLM) to modify the codebase based on a manually added instruction.
Usage:
python generate_codebase_prompt.py <file_with_paths>
Where:
<file_with_paths> is a text file containing a list of file paths (one per line).
Example:
Given an input file 'paths.txt' with the following content:
/path/to/file1.txt
# /path/to/file2.txt
Running:
python generate_codebase_prompt.py paths.txt
Produces output (on standard output) similar to:
=== PROMPT ===
You are an AI assistant tasked with modifying a codebase based on the instruction provided above.
The full codebase is provided below as a JSON array, where each object represents a file with its
relative path and complete contents. Your job is to apply the instruction and return a pure JSON
object that includes only the modified files along with a brief explanation of the changes.
The JSON object must have exactly two keys:
- "files": an array of objects, each with:
- "path": the relative file path
- "content": the full content of the modified file
- "explanation": a concise explanation of the modifications made
Do not include any additional text, markdown formatting, or commentary outside this JSON object.
Codebase:
[
{
"path": "/path/to/file1.txt",
"content": "full contents of file1..."
},
...
]
"""
import sys
import json


def generate_codebase_json(file_with_paths):
    """
    Reads an input file containing file paths and generates a list of dictionaries,
    each representing a file with its "path" and full "content".

    The function ignores empty lines and lines starting with '#' (treated as comments).
    If an error occurs while reading any file, the error message is stored in the "content"
    field of the corresponding dictionary.

    Args:
        file_with_paths (str): Path to the text file containing file paths.

    Returns:
        list: A list of dictionaries with keys "path" and "content".

    Exits:
        Exits the script with a status code 1 if the input file cannot be read.
    """
    codebase = []
    try:
        with open(file_with_paths, 'r') as f:
            for line in f:
                filepath = line.strip()
                # Skip empty lines and commented lines
                if not filepath or filepath.startswith("#"):
                    continue
                try:
                    with open(filepath, 'r', encoding="utf-8") as file_content:
                        content = file_content.read()
                        codebase.append({"path": filepath, "content": content})
                except Exception as e:
                    # If there is an error reading the file, record the error message as content
                    codebase.append({"path": filepath, "content": f"Error reading file: {e}"})
        return codebase
    except Exception as e:
        print(f"Error reading input file {file_with_paths}: {e}")
        sys.exit(1)


def print_prompt_with_codebase(codebase):
    """
    Constructs and prints a complete prompt intended for an LLM. The prompt consists of
    a fixed text instruction followed by a JSON array of the codebase.

    The fixed prompt instructs the LLM to modify the codebase based on a manually added
    instruction and to output a JSON object with two keys: "files" and "explanation".

    Args:
        codebase (list): A list of dictionaries where each dictionary contains the "path"
            and "content" of a file.
    """
    # Fixed prompt text (the manual instruction is expected to be added before this text)
    prompt_text = (
        "You are an AI assistant tasked with modifying a codebase based on the instruction provided above. "
        "The full codebase is provided below as a JSON array, where each object represents a file with its relative path "
        "and complete contents. Your job is to apply the instruction and return a pure JSON object that includes only the "
        "modified files along with a brief explanation of the changes.\n\n"
        "The JSON object must have exactly two keys:\n"
        " - \"files\": an array of objects, each with:\n"
        "   - \"path\": the relative file path\n"
        "   - \"content\": the full content of the modified file\n"
        " - \"explanation\": a concise explanation of the modifications made\n\n"
        "Do not include any additional text, markdown formatting, or commentary outside this JSON object.\n\n"
        "Codebase:"
    )
    # Convert the codebase list to a JSON string with indentation for better readability
    codebase_json = json.dumps(codebase, indent=2)
    # Output the complete prompt
    print(prompt_text)
    print("\n" + codebase_json)


def main():
    """
    Main function that handles command-line arguments and executes the script functionality.
    """
    if len(sys.argv) != 2:
        print("Usage: python generate_codebase_prompt.py <file_with_paths>")
        sys.exit(1)

    file_with_paths = sys.argv[1]
    codebase = generate_codebase_json(file_with_paths)
    print_prompt_with_codebase(codebase)


if __name__ == "__main__":
    main()
apply_changes.py
This script processes the JSON output from the LLM and updates the corresponding files in the codebase, creating any missing directories so the project's structure is preserved.
"""
apply_changes.py
===============
Description:
This script reads a JSON file provided as a command-line argument. The JSON file must contain
an object with a key "files", which is an array of objects. Each object in the array should
have the following keys:
- "path": the relative or absolute path of the file to update
- "content": the full content to write into that file
The script updates or creates each file as specified in the JSON file. Necessary directories
are created if they do not exist.
Usage:
python apply_changes.py /path/to/input.json
Example JSON file structure:
{
"files": [
{
"path": "example/file1.txt",
"content": "This is the updated content for file1."
},
{
"path": "example/file2.txt",
"content": "This is the updated content for file2."
}
]
}
"""
import os
import sys
import json


def update_files_from_json(json_file_path):
    """
    Reads a JSON file and updates or creates files as specified in the JSON object.

    The JSON file must contain an object with a key "files", which is an array of objects.
    Each object must have:
        - "path": the relative or absolute path of the file to update
        - "content": the full content to write into that file

    The function creates any necessary directories for the file paths if they do not exist.
    It prints a message for each file that is updated and a final summary of the total files updated.

    Args:
        json_file_path (str): The path to the JSON file containing file update information.

    Exits:
        The script exits with status code 1 if the JSON file cannot be read or if it does not
        contain the required "files" key.
    """
    try:
        with open(json_file_path, 'r', encoding='utf-8') as f:
            data = json.load(f)
    except Exception as e:
        print("Error reading JSON file '{}': {}".format(json_file_path, e))
        sys.exit(1)

    if 'files' not in data:
        print("Invalid JSON format: missing 'files' key.")
        sys.exit(1)

    files_updated = 0
    for file_obj in data['files']:
        try:
            file_path = file_obj['path']
            content = file_obj['content']
            # Create necessary directories if they do not exist.
            directory = os.path.dirname(file_path)
            if directory and not os.path.exists(directory):
                os.makedirs(directory, exist_ok=True)
            # Write the content to the file.
            with open(file_path, 'w', encoding='utf-8') as f:
                f.write(content)
            print("Updated: {}".format(file_path))
            files_updated += 1
        except Exception as e:
            print("Failed to update '{}': {}".format(file_obj.get('path', 'unknown'), e))

    print("Total files updated: {}".format(files_updated))


def main():
    """
    Main function that handles command-line arguments and triggers the file update process.
    """
    if len(sys.argv) != 2:
        print("Usage: python apply_changes.py /path/to/input.json")
        sys.exit(1)

    json_file_path = sys.argv[1]
    update_files_from_json(json_file_path)


if __name__ == '__main__':
    main()
Enjoyed this post? Found it helpful? Feel free to leave a comment below to share your thoughts or ask questions. A GitHub account is required to join the discussion.