PM / DASHBOARD
CONNECTING
SYS: DASHBOARD
loading workspace...
TOTAL_PROMPTS
—
this environment
LIVE_IN_PROD
—
approved versions
PENDING_REVIEW
—
awaiting engineer
TEAM_MEMBERS
—
active
RECENT PROMPTS
PENDING APPROVALS
QUICK_START
✦
CREATE PROMPT
Name your prompt, write the content, save as draft
⌘
GET API KEY
Create a serve key for your agents and workflows
⊕
CONNECT AGENT
See integration code for n8n, Python, Node, curl
▦ PROMPT_REGISTRY
all prompts in this environment · click to open editor
✦ PROMPT_EDITOR
select a prompt from Registry
DETECTED VARIABLES
Write {{variable}} in your prompt to define dynamic values. They appear here automatically.
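Automatic detection of `{{variable}}` placeholders can be sketched with a short regex scan. This is an illustrative client-side sketch, not the editor's actual implementation; the function name `detect_variables` is hypothetical.

```python
import re

def detect_variables(prompt_text):
    """Return the unique {{variable}} names in a prompt, in order of appearance."""
    names = re.findall(r"\{\{\s*(\w+)\s*\}\}", prompt_text)
    seen, ordered = set(), []
    for n in names:
        if n not in seen:
            seen.add(n)
            ordered.append(n)
    return ordered

detect_variables("Write a {{tone}} email to {{name}} about {{topic}}, keep the {{tone}}.")
# → ['tone', 'name', 'topic']
```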
VERSION HISTORY
Select a prompt to see version history
◉ APPROVAL_QUEUE
versions awaiting your review · engineer+ role required to approve
◈ EVALUATIONS
LLM-as-judge · test your prompts before approving
RUN EVAL
SAVED TEAM KEYS
No saved keys. Add one to run evals without pasting keys each time.
⑂ VERSION_HISTORY
complete audit trail · every version ever proposed
⌘ API_KEYS
serve keys for your agents · each key tied to one environment
HOW AGENTS USE THESE KEYS
# Python / any HTTP client
# Replace pm_live_xxxx with your actual key
import requests

r = requests.get(
    "https://api.promptmatrix.io/pm/serve/your.prompt.key",
    headers={"Authorization": "Bearer pm_live_xxxx"},
    timeout=2,
)
system_prompt = r.text  # returns the approved content
◌ ENVIRONMENTS
dev → staging → production · each environment has its own prompts and keys
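Because each environment has its own serve keys, switching an agent between dev, staging, and production can be as simple as switching which key it sends. A minimal sketch, assuming one key per environment selected by a `PM_ENV` variable (the key values and variable name are placeholders, not real credentials):

```python
import os

# Hypothetical per-environment serve keys — placeholders, not real keys.
KEYS = {
    "dev": "pm_dev_xxxx",
    "staging": "pm_staging_xxxx",
    "production": "pm_live_xxxx",
}

def serve_headers(env=None):
    """Auth headers for the serve endpoint, picked by deployment environment."""
    env = env or os.environ.get("PM_ENV", "dev")
    return {"Authorization": f"Bearer {KEYS[env]}"}

# Usage: the same prompt key, served from whichever environment the key belongs to:
#   requests.get("https://api.promptmatrix.io/pm/serve/assistant.system",
#                headers=serve_headers(), timeout=2)
```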
≡ AUDIT_LOG
every action · append-only · who did what and when
◬ PERFORMANCE_TELEMETRY
prompt scores · call logs · cost · version history · submitted by agents or humans
TOTAL_EVALS
—
all time · this env
AVG_SCORE
—
across all prompts
PASS_RATE
—
threshold: 7.0
AGENT_SUBMISSIONS
—
via API key · automated
SCORE_DISTRIBUTION
loading...
AVG_SCORE_TREND — last 14 evals
—
◬ AGENT_TELEMETRY — submit performance data from any agent or product
Agents call POST /api/v1/agent/evals after each LLM call to log prompt performance.
No LLM key needed — your agent already ran the call and has the result. Just report back what happened.
# After your LLM call — report the outcome back to PromptMatrix
import requests, time

# 1. Fetch prompt (you already do this)
t_start = time.time()
prompt = requests.get(
    "https://api.promptmatrix.io/pm/serve/agent.radar.system",
    headers={"Authorization": "Bearer YOUR_KEY"}
).text

# 2. Run your LLM call (your existing code — unchanged)
response = your_llm.call(prompt, user_input)
latency_ms = int((time.time() - t_start) * 1000)

# 3. Report outcome — ONE extra call, no LLM key needed
requests.post(
    "https://api.promptmatrix.io/api/v1/agent/evals",
    headers={"Authorization": "Bearer YOUR_KEY"},
    json={
        "prompt_key": "agent.radar.system",   # which prompt
        "score": 8.4,                         # your eval score (or omit for auto)
        "latency_ms": latency_ms,             # how long the LLM took
        "tokens_in": response.usage.input,    # input tokens
        "tokens_out": response.usage.output,  # output tokens
        "model": "claude-sonnet-4-6",         # model used
        "success": True,                      # did it succeed?
        "source": "agent",                    # "agent" | "product" | "test"
        # optional: "rationale": "response was coherent, on-topic"
        # optional: "session_id": "run-abc123" — group related calls
    }
)

# That's it. PromptMatrix logs it, scores it, shows it in the Performance tab.
# If score < threshold → auto-flags for your Monday approval queue.
✓ Works with any LLM — Claude, GPT, Gemini, Groq, Mistral, local
✓ Works from n8n, Python, Node, curl, any HTTP client
✓ score is optional — rule-based auto-scores if omitted
✓ Same API key as serve endpoint — no new auth
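The "rule-based auto-scores if omitted" fallback could look something like the sketch below. The real scoring rules are server-side and not documented here; the function name and thresholds are purely illustrative.

```python
def auto_score(success, latency_ms, tokens_out):
    """Hypothetical rule-based fallback score (0-10) when no score is submitted.
    Thresholds are illustrative assumptions, not PromptMatrix's actual rules."""
    if not success:
        return 0.0
    score = 10.0
    if latency_ms > 5000:  # very slow responses lose points
        score -= 2.0
    if tokens_out == 0:    # empty output is suspicious
        score -= 5.0
    return max(score, 0.0)
```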
⊕ INTEGRATIONS
connect your agents · copy integration code
PYTHON
import requests

def get_prompt(key, vars=None):
    url = f"https://api.promptmatrix.io/pm/serve/{key}"
    if vars:
        url += "?vars=" + ",".join(
            f"{k}={v}" for k, v in vars.items()
        )
    r = requests.get(url, headers={
        "Authorization": "Bearer YOUR_KEY"
    }, timeout=2)
    return r.text if r.ok else "[fallback prompt]"

# Usage
system = get_prompt("assistant.system")
system = get_prompt("email.writer", {"tone": "formal"})
n8n / MAKE / ZAPIER
Add an HTTP Request node before your AI node:

Method: GET
URL: https://api.promptmatrix.io/pm/serve/{{$vars.promptKey}}
Headers:
  Authorization: Bearer YOUR_KEY

# Output maps to the system prompt input
# of your OpenAI / Claude / Gemini node
JAVASCRIPT / NODE
const getPrompt = async (key, vars = {}) => {
  const q = Object.entries(vars)
    .map(([k, v]) => `${k}=${v}`).join(",")
  const url = `https://api.promptmatrix.io/pm/serve/${key}${q ? "?vars=" + q : ""}`
  const r = await fetch(url, {
    headers: { Authorization: "Bearer YOUR_KEY" }
  })
  return r.ok ? r.text() : "[fallback]"
}
CURL
# Plain text response
curl https://api.promptmatrix.io/pm/serve/assistant.system \
  -H "Authorization: Bearer YOUR_KEY"

# JSON response (with metadata)
curl "...?format=json" \
  -H "Authorization: Bearer YOUR_KEY"

# With variable substitution
curl "...?vars=tone=formal,name=Alex" \
  -H "Authorization: Bearer YOUR_KEY"
◫ TEAM
members · roles · invite
ROLE PERMISSIONS
owner — full control, billing, delete org ·
admin — everything + invite members + promote to production ·
engineer — approve/reject versions, create API keys ·
editor — propose versions, cannot approve ·
viewer — read-only
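The roles above form a strict hierarchy, so a check like the "engineer+ role required to approve" rule reduces to a rank comparison. A minimal sketch; `ROLES` and `at_least` are illustrative names, not the product's API:

```python
ROLES = ["viewer", "editor", "engineer", "admin", "owner"]  # least → most privileged

def at_least(role, required):
    """True if `role` meets or exceeds `required` (e.g. the 'engineer+' approval check)."""
    return ROLES.index(role) >= ROLES.index(required)

# at_least("admin", "engineer")  → True  (admins can approve)
# at_least("editor", "engineer") → False (editors propose, cannot approve)
```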
⚙ SETTINGS
workspace · account · danger zone
WORKSPACE
Org: —
Plan: —
Slug: —
SERVE ENDPOINT
Your agents call this endpoint. Replace {key} with your prompt key.
loading...
ACCOUNT
DANGER ZONE