The Problem
Hardcoded prompts create deployment bottlenecks
Without Promptodex:
- Prompts hardcoded in source code
- A small tweak requires a full deploy
- Waiting on the CI/CD pipeline
- Prompts scattered across services
- Versioning via Git is clunky

With Promptodex:
- Prompts hosted on Promptodex
- Update prompts instantly
- No code changes needed
- Single source of truth
- Built-in versioning & rollback
See it in action
import { pod } from "promptodex";
import { openai } from "@/lib/openai";

export async function POST(request: Request) {
  const { content } = await request.json();

  // Fetch prompt at runtime - update anytime without redeploying
  const prompt = await pod("summarize-article", {
    content,
    maxLength: "200 words"
  });

  const response = await openai.chat.completions.create({
    model: "gpt-4.1",
    messages: [{ role: "user", content: prompt }]
  });

  return Response.json({ summary: response.choices[0].message.content });
}

The summarize-article prompt lives on Promptodex. Update it anytime; your API uses the new version immediately.
Built for real workflows
Three patterns that make teams more productive
Prompt Hosting
Host prompts on Promptodex and fetch them at runtime. Your prompt engineers can iterate independently without waiting on engineering deploys.
const prompt = await pod("customer-support-reply", {
  customerName: ticket.name,
  issue: ticket.description,
  tone: "professional"
});

Shared Fragments
Use the same prompt across multiple services. Web app, mobile backend, internal tools—all pulling from one source of truth.
// service-a
const prompt = await pod("company/extract-entities", { text });

// service-b
const prompt = await pod("company/extract-entities", { text });

// Update once, and both services get the change

Version Pinning
Pin production to tested versions while iterating on new ones. Roll back instantly if something goes wrong.
// Production: pinned to version 5
const prompt = await pod("code-review@5", { code, language });

// Staging: always latest
const prompt = await pod("code-review", { code, language });

Simple API
Three functions, zero configuration
pod(slug, variables?, options?)

Fetch and render a prompt in one call. The most common way to use promptodex.
Returns: Promise<string>
const prompt = await pod("greeting", { name: "World" });
// "Hello World, welcome to Promptodex!"

fetchPrompt(slug, options?)

Fetch the raw template without rendering. Useful when you need the template content itself.
Returns: Promise<PromptResponse>
const { content } = await fetchPrompt("greeting");
// "Hello {{name}}, welcome to Promptodex!"

renderPrompt(template, variables?)

Render a template locally. No network call; useful when you already have the template.
Returns: string
const rendered = renderPrompt("Hello {{name}}!", { name: "World" });
// "Hello World!"

Features
Everything you need to manage prompts at runtime
Runtime fetching
Prompts are fetched from Promptodex at runtime, not bundled at build time
Template rendering
Use {{variable}} syntax to inject dynamic values into your prompts
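As a concrete illustration, {{variable}} substitution can be sketched locally in a few lines. This is an assumption about the rendering behavior inferred from the documented examples, not the module's actual implementation:

```typescript
type Variables = Record<string, string>;

// Sketch of {{variable}} rendering (assumed behavior): replace each
// {{name}} placeholder with its value; leave unknown placeholders intact.
function renderTemplate(template: string, variables: Variables = {}): string {
  return template.replace(/\{\{(\w+)\}\}/g, (placeholder, name) =>
    name in variables ? variables[name] : placeholder
  );
}
```

With this sketch, renderTemplate("Hello {{name}}!", { name: "World" }) yields "Hello World!", matching the renderPrompt example later on this page.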
Version support
Pin to specific versions with @1, @2, etc. or always get the latest
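The @N suffix is plain string convention on the slug. As a hedged sketch (the module's real parsing may differ), splitting a pinned slug could look like this:

```typescript
// Sketch of the "slug@version" convention (assumed from the documented
// "code-review@5" style; not promptodex's actual parser).
// A missing or malformed suffix falls back to "latest".
function parseSlug(slug: string): { name: string; version: number | "latest" } {
  const at = slug.lastIndexOf("@");
  if (at > 0) {
    const version = Number(slug.slice(at + 1));
    if (Number.isInteger(version) && version > 0) {
      return { name: slug.slice(0, at), version };
    }
  }
  return { name: slug, version: "latest" };
}
```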
Private prompts
Access private prompts securely with API key authentication
Zero dependencies
Under 200 lines, built on native fetch. No bloat and no third-party dependencies to audit
TypeScript ready
Full type definitions included. Great editor support out of the box
Private prompts for teams
Keep proprietary prompts secure while still getting the benefits of runtime hosting
import { pod } from "promptodex";

// Store your API key securely
const PROMPTODEX_KEY = process.env.PROMPTODEX_API_KEY;

// Access private prompts with your API key
const prompt = await pod(
  "internal/sales-email-template",
  {
    customerName: lead.name,
    product: lead.interest,
    repName: rep.name
  },
  { apiKey: PROMPTODEX_KEY }
);

Ready to decouple your prompts?
Install the promptodex module and start hosting prompts at runtime.