# codex-settings

OpenAI Codex CLI settings, configurations, skills and prompts for vibe coding, by feiskyer.
A curated collection of configurations, skills and custom prompts for OpenAI Codex CLI, designed to enhance your development workflow with various model providers and reusable prompt templates.
For Claude Code settings, skills, agents and custom commands, please refer to feiskyer/claude-code-settings.
This repository is designed to be used as your `~/.codex` directory. To install:
```shell
# Backup existing Codex configuration (if any)
mv ~/.codex ~/.codex.bak

# Clone this repository to ~/.codex
git clone https://github.com/feiskyer/codex-settings.git ~/.codex

# Or symlink if you prefer to keep it elsewhere
ln -s /path/to/codex-settings ~/.codex
```
Alternatively, `npx skills` can be used to install only the skills into your AI coding tools:
```shell
# List skills
npx -y skills add -l feiskyer/codex-settings

# Install all skills
npx -y skills add --all feiskyer/codex-settings

# Manually select a list of skills to install
npx -y skills add feiskyer/codex-settings
```
The default `config.toml` uses LiteLLM as a gateway. To use it:
Install LiteLLM and Codex CLI:

```shell
pip install -U 'litellm[proxy]'
npm install -g @openai/codex
```
Create a LiteLLM config file (full example: `litellm_config.yaml`):
```yaml
general_settings:
  master_key: sk-dummy

litellm_settings:
  drop_params: true

model_list:
  - model_name: gpt-5.1-codex-max
    model_info:
      mode: responses
      supports_vision: true
    litellm_params:
      model: github_copilot/gpt-5.1-codex-max
      drop_params: true
      extra_headers:
        editor-version: "vscode/1.95.0"
        editor-plugin-version: "copilot-chat/0.26.7"
  - model_name: claude-opus-4.5
    litellm_params:
      model: github_copilot/claude-opus-4.5
      drop_params: true
      extra_headers:
        editor-version: "vscode/1.95.0"
        editor-plugin-version: "copilot-chat/0.26.7"
  - model_name: "*"
    litellm_params:
      model: "github_copilot/*"
      extra_headers:
        editor-version: "vscode/1.95.0"
        editor-plugin-version: "copilot-chat/0.26.7"
```
Start the LiteLLM proxy:

```shell
litellm --config ~/.codex/litellm_config.yaml
# Runs on http://localhost:4000 by default
```
Run Codex:

```shell
codex
```
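For reference, the wiring between Codex and a local gateway can be sketched in `config.toml` as follows. This is a minimal sketch, not the repository's shipped config (which uses `model_provider = "github"`); the provider id `litellm` and the `LITELLM_API_KEY` variable name are illustrative:

```toml
# Default model served through the LiteLLM gateway
model = "gpt-5.1-codex-max"
model_provider = "litellm"

# Custom provider pointing at the local LiteLLM proxy
[model_providers.litellm]
name = "LiteLLM"
base_url = "http://localhost:4000/v1"
env_key = "LITELLM_API_KEY"   # export LITELLM_API_KEY=sk-dummy to match master_key
wire_api = "responses"        # matches `mode: responses` in the LiteLLM model_info
```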
The default configuration uses:

- Model: `gpt-5` via `model_provider = "github"` (Copilot proxy on http://localhost:4000)
- Approvals: on-request; reasoning summary: detailed; reasoning effort: high; raw agent reasoning visible
- MCP servers: claude (local), exa (hosted), chrome (DevTools over npx)

Alternative configurations are located in the configs/ directory:
To use an alternative config:
```shell
# Take ChatGPT for example
cp ~/.codex/configs/chatgpt.toml ~/.codex/config.toml
codex
```
Custom prompts are stored in the `prompts/` directory. Access them via the `/prompts:` slash menu in Codex.
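As an illustration of the argument placeholders Codex prompts support, a hypothetical prompt file at `~/.codex/prompts/issue-triage.md` (not shipped in this repository) might look like:

```markdown
Triage GitHub issue #$1 with priority $2.

Full request as typed by the user: $ARGUMENTS

(Use $$ to emit a literal dollar sign, e.g. a $$100 budget.)
```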
- `/prompts:deep-reflector` - Analyze development sessions to extract learnings, patterns, and improvements for future interactions.
- `/prompts:insight-documenter [breakthrough]` - Capture and document significant technical breakthroughs into reusable knowledge assets.
- `/prompts:instruction-reflector` - Analyze and improve Codex instructions in AGENTS.md based on conversation history.
- `/prompts:github-issue-fixer [issue-number]` - Systematically analyze, plan, and implement fixes for GitHub issues with PR creation.
- `/prompts:github-pr-reviewer [pr-number]` - Perform thorough GitHub pull request code analysis and review.
- `/prompts:ui-engineer [requirements]` - Create production-ready frontend solutions with modern UI/UX standards.
- `/prompts:prompt-creator [requirements]` - Create Codex custom prompts with proper structure and best practices.

Each prompt is a `.md` file in `~/.codex/prompts/`. Supported argument placeholders:

- `$1` to `$9`: Positional arguments
- `$ARGUMENTS`: All arguments joined by spaces
- `$$`: Literal dollar sign

Skills are reusable instruction bundles that Codex automatically discovers at startup. Each skill has a name, description, and detailed instructions stored on disk. Codex injects only metadata (name, description, path) into context - the body stays on disk until needed.
Skills are automatically loaded when Codex starts. To use a skill:
List all skills with the /skills command:

```
/skills
```

Invoke a skill with `$<skill-name> [prompt]`:

```
$kiro-skill Create a feature spec for user authentication
$nanobanana-skill Generate an image of a sunset over mountains
```
Skills are stored in `~/.codex/skills/**/SKILL.md`. Only files named exactly `SKILL.md` are recognized.
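For illustration, a `SKILL.md` commonly follows a frontmatter-plus-instructions layout, which matches the metadata-only loading described above; the skill below is hypothetical:

```markdown
---
name: changelog-skill
description: Draft a CHANGELOG entry from recent git history.
---

# Instructions

1. Run `git log --oneline <last-tag>..HEAD` to collect commits.
2. Group them into Added / Changed / Fixed sections.
3. Write the entry in Keep a Changelog style.
```

Only the name, description, and path would be injected into context at startup; the numbered instructions stay on disk until the skill is invoked.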
Non-interactive automation mode for hands-off task execution using Claude Code. Use when you want to leverage Claude Code to implement features or review code.
Key Features:
Requirements: Claude Code CLI installed (npm install -g @anthropic-ai/claude-code)
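Following the `$<skill-name> [prompt]` pattern shown earlier, an invocation might look like this (the skill name and prompt are illustrative, not confirmed by the repository):

```
$claude-skill Implement the retry logic described in docs/design.md and open a PR
```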
Execute complex, long-running tasks across multiple sessions using a dual-agent pattern (Initializer + Executor) with automatic session continuation.
The runner inherits model selection from your active Codex config/profile and pins unattended execution through config overrides, rather than hardcoding a model or relying on `--full-auto`.
Key Features:

- Per-task workspace under `.autonomous/<task-name>/`
- Progress tracked via `task_list.md` and `progress.md`

Usage:
```shell
# Start a new autonomous task
~/.codex/skills/autonomous-skill/scripts/run-session.sh "Build a REST API for todo app"

# Continue an existing task
~/.codex/skills/autonomous-skill/scripts/run-session.sh --task-name build-rest-api-todo --continue

# List all tasks
~/.codex/skills/autonomous-skill/scripts/run-session.sh --list
```
Multi-instance (multi-agent) orchestration workflow for deep research tasks. Breaks down research objectives into parallelizable sub-goals, runs child processes via `codex exec`, and aggregates results into polished reports.
Key Features:
- Runs child agents via `codex exec` in sandboxed environments
- Retrieval fallback chain: firecrawl → tavily → direct fetch

Use Cases:
Workflow:
Output: All artifacts saved to .research/<name>/ directory including logs, raw data, and final polished report.
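Following the skill-invocation pattern shown earlier, a research run might be started like this (the skill name and topic are illustrative):

```
$deep-research-skill Compare self-hosted LLM gateways for small teams
```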
Generate or edit images using Google Gemini API via nanobanana. Use when creating, generating, or editing images.
Key Features:
Requirements:
- `GEMINI_API_KEY` configured in `~/.nanobanana.env`

Extract subtitles/transcripts from a YouTube video URL and save as a local file.