
Test Prompts in Your Browser with Your Own API Keys

Introducing Preview—a new way to test prompts directly in your browser using your own API keys. Your keys stay in your browser, never touching our servers.

Promptodex Team

We've heard a common request from our community: "I want to test a prompt before using it, but I don't want to copy-paste it somewhere else."

Today, we're excited to introduce Preview—a new feature that lets you test any prompt directly in your browser using your own API keys.

Why Preview?

Until now, testing a prompt on Promptodex meant:

  1. Copying the prompt with your filled-in variables
  2. Opening ChatGPT, Claude, or another AI interface
  3. Pasting the prompt and running it
  4. Coming back to Promptodex to bookmark or fork it if you liked the results

That's a lot of context switching just to see if a prompt works for your use case.

With Preview, you can run a prompt against your preferred AI model without leaving the page. Fill in the variables, click Run, and see the response instantly.

Your Keys, Your Browser

Here's the most important part: your API keys never leave your browser.

When you configure Preview, your keys are stored in your browser's localStorage—the same place websites store preferences and settings. When you run a prompt:

  1. The prompt is sent directly from your browser to the AI provider (OpenAI, Anthropic, Google, or xAI)
  2. The response comes back directly to your browser
  3. Promptodex's servers are never involved in the API call

We believe trust is earned, not assumed. That's why we designed Preview to work entirely client-side. You can verify this yourself—open your browser's Network tab and you'll see the requests going directly to the AI provider's API.
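To make the flow above concrete, here is a minimal sketch of how a client-side call to a provider could be assembled. The endpoint and headers match OpenAI's public chat completions API; the function name and the localStorage key shown in the usage comment are illustrative assumptions, not Promptodex's actual implementation.

```typescript
// Hypothetical sketch: build a request that goes straight from the
// browser to the provider. No Promptodex server is in the path.
interface ProviderRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

function buildOpenAIRequest(apiKey: string, prompt: string, model: string): ProviderRequest {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // The key is read locally and sent only to the provider.
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
    },
  };
}

// In the browser (storage key name is a guess):
//   const key = localStorage.getItem("promptodex.openai.key")!;
//   const { url, init } = buildOpenAIRequest(key, "Hello", "gpt-4.1");
//   const res = await fetch(url, init);
```

This is exactly what you would see in the Network tab: a single POST to the provider's domain, with your key in the Authorization header and nothing routed through Promptodex.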

How It Works

1. Configure Your Provider

Click the Configure button on any prompt to set up your API keys. You can add keys for multiple providers:

  • OpenAI — GPT-4.1, GPT-4 Turbo, and more
  • Anthropic — Claude Sonnet, Opus, Haiku
  • Google — Gemini Pro, Ultra
  • xAI — Grok models

For each provider, you'll enter your API key and select your preferred model. If you've added multiple providers, you can choose which one to use by default.
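A multi-provider setup with a default might look something like the sketch below. The type and field names are assumptions for illustration, not the actual stored shape.

```typescript
// Illustrative configuration shape for multiple providers plus a default.
type Provider = "openai" | "anthropic" | "google" | "xai";

interface ProviderConfig {
  apiKey: string;
  model: string;
}

interface PreviewConfig {
  providers: Partial<Record<Provider, ProviderConfig>>;
  defaultProvider: Provider;
}

// Resolve which provider a run should use: the configured default, if set up.
function activeProvider(cfg: PreviewConfig): ProviderConfig | undefined {
  return cfg.providers[cfg.defaultProvider];
}
```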

2. Fill In Variables

Most prompts have variables like {{topic}} or {{content}}. Fill these in using the prompt's interactive editor—click on any highlighted variable to enter your value.

The Run button stays disabled until all required variables are filled. This prevents accidental API calls with incomplete prompts.
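The substitution and the "all variables filled" gate can be sketched as follows; the function names are illustrative, but the `{{variable}}` syntax is the one used in prompts.

```typescript
// Replace {{name}} placeholders with the user's values; unfilled
// placeholders are left intact.
function renderPrompt(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => values[name] ?? `{{${name}}}`);
}

// List variables that are still missing or blank. The Run button stays
// disabled while this returns a non-empty array.
function missingVariables(template: string, values: Record<string, string>): string[] {
  const names = [...template.matchAll(/\{\{(\w+)\}\}/g)].map((m) => m[1]);
  return [...new Set(names)].filter((n) => !(n in values) || values[n].trim() === "");
}
```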

3. Run and Review

Click Run to execute the prompt. You'll see:

  • The AI's response in a clean modal
  • Token usage (input and output tokens)
  • Response latency in milliseconds
  • A copy button to grab the response

4. Track Your History

Preview keeps track of your last 5 runs. Open the configuration modal and switch to the Recent Runs tab to see your history. Click any previous run to view the response again—useful when comparing different variable values or models.
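A capped, newest-first run history like this one can be sketched in a few lines; the record fields mirror what the run modal shows, though the exact names here are assumptions.

```typescript
// Minimal sketch of a run history capped at the last 5 entries.
interface RunRecord {
  prompt: string;
  model: string;
  response: string;
  inputTokens: number;
  outputTokens: number;
  latencyMs: number;
}

const MAX_RUNS = 5;

// Prepend the newest run and drop anything beyond the cap.
function addRun(history: RunRecord[], run: RunRecord): RunRecord[] {
  return [run, ...history].slice(0, MAX_RUNS);
}
```

Because the list is stored locally (e.g. in localStorage alongside your keys), clearing your browser data also clears the history.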

Model Recommendations

When a prompt author recommends a specific model (like "Claude Sonnet 4 for best results"), Preview will let you know if you're about to run with a different model. You can:

  • Use your configured model — If you prefer your default
  • Switch to the recommended model — If you have that provider configured
  • Continue anyway — The choice is always yours

Privacy First

We want to be completely transparent about how Preview handles your data:

  • API keys: Stored only in your browser's localStorage. Never sent to our servers.
  • Prompts: Rendered locally using your variable values.
  • Responses: Returned directly from the AI provider to your browser.
  • History: Stored locally in your browser. We don't track what you run.

You can clear all Preview data at any time using the "Clear all data" button in the configuration modal.

Getting Started

Preview is available now on every prompt page. Look for the Configure or Run button in the prompt header. If you're creating or editing a prompt, you'll see it in the preview area too.

We designed Preview to be as unobtrusive as possible. If you never configure it, you won't see any difference in your Promptodex experience. But when you need to quickly test a prompt, it's there waiting for you.

We'd Love Your Feedback

Preview is our first step toward making Promptodex a more complete prompt development environment. We're already thinking about:

  • Support for additional AI providers
  • Streaming responses for longer outputs
  • Saved response templates for common workflows
  • Integration with the pod CLI

But we want to hear from you. What would make Preview more useful for your workflow? What's missing? What's confusing?

Drop us feedback at https://x.com/promptodex or open an issue on our GitHub. We read everything.

Thank you for being part of the Promptodex community. Now go test some prompts!