# Compatibility
This guide explains compatibility considerations for the Prompt Decorators framework, including LLM provider compatibility, decorator conflicts, and integration options.
## LLM Provider Compatibility

The Prompt Decorators framework is designed to work with any LLM provider: it transforms decorators into natural-language instructions that any model can understand. That said, each provider has its own considerations.
### OpenAI
OpenAI models work well with Prompt Decorators.
```python
import openai

from prompt_decorators import apply_dynamic_decorators

# Create a decorated prompt
prompt = """
+++StepByStep(numbered=true)
+++Audience(level="beginner")
Explain quantum computing.
"""

transformed_prompt = apply_dynamic_decorators(prompt)

# Send to OpenAI
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": transformed_prompt},
    ],
)
```
#### Considerations

- `gpt-4o` and newer models have the best understanding of complex instructions
- Add an explicit `system` message to set baseline behavior
### Anthropic Claude
Claude models work well with Prompt Decorators' natural language transformations.
```python
import anthropic

from prompt_decorators import apply_dynamic_decorators

# Set up Anthropic client
client = anthropic.Anthropic(api_key="your-anthropic-api-key")

# Create a decorated prompt
prompt = """
+++Reasoning(depth="comprehensive")
+++OutputFormat(format="markdown")
What are the implications of quantum computing for cryptography?
"""

transformed_prompt = apply_dynamic_decorators(prompt)

# Send to Anthropic Claude
message = client.messages.create(
    model="claude-3-7-sonnet-latest",
    max_tokens=1000,
    messages=[
        {"role": "user", "content": transformed_prompt},
    ],
)
```
#### Considerations

- Claude models excel at following structured instructions
- Use the Claude Desktop integration for a seamless experience
### Other Providers
The framework is compatible with many other LLM providers:
- **Hugging Face models**: work well with the explicit instructions generated by decorators
- **Google Gemini**: compatible with the framework's decorator transformations
- **Mistral AI**: works with instructions generated by the framework
- **Llama/Llama 2/Llama 3**: compatible with properly transformed prompts
- **Local models**: decorators work with any locally deployed model, as in the sketch below
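Because decorators are applied client-side, before the prompt ever reaches a provider, local deployments need no special support. A minimal sketch, assuming a locally running OpenAI-compatible server (such as vLLM or llama.cpp's server); the base URL and model name are placeholders, not part of the framework:

```python
from openai import OpenAI

from prompt_decorators import apply_dynamic_decorators

# Point the client at a local OpenAI-compatible server;
# the URL and model name below are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

prompt = """
+++StepByStep(numbered=true)
Explain how HTTPS works.
"""

# The transformation is plain text, so it is model-agnostic
response = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": apply_dynamic_decorators(prompt)}],
)
print(response.choices[0].message.content)
```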
## Decorator Compatibility and Conflicts
Some decorators may have incompatible behaviors. The framework handles conflicts according to these rules:
- **Precedence rule**: when decorators have fundamentally incompatible requirements, the later decorator in the sequence takes precedence (see the example below)
- **Parameter conflicts**: when two decorators set conflicting parameters, the parameter in the later decorator takes precedence
- **Graceful degradation**: if a model can't fully implement a decorator's behavior, it degrades gracefully with a partial implementation
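For example, when `Concise` and `Detailed` appear in the same prompt, the later one takes effect. A minimal sketch; the exact instruction text produced depends on the registered decorator definitions:

```python
from prompt_decorators import apply_dynamic_decorators

# Detailed comes after Concise, so under the precedence rule its
# verbosity instruction wins when the prompt is transformed.
prompt = """
+++Concise
+++Detailed
Explain the CAP theorem.
"""

print(apply_dynamic_decorators(prompt))
```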
### Known Decorator Conflicts
| Decorator | Incompatible With | Reason | Resolution |
|---|---|---|---|
| `ELI5` | `Technical`, `Academic` | Contradictory audience adaptation | Last decorator wins |
| `Concise` | `Detailed` | Contradictory verbosity goals | Last decorator wins |
| `Inductive` | `Deductive` | Contradictory reasoning methods | Last decorator wins |
| `Bullet` | `OutputFormat(format=json)` | Structural conflict | Last decorator wins |
### Compatibility Checking
You can programmatically check for known conflicts:
```python
from prompt_decorators import get_available_decorators

# Get decorator definitions
decorators = get_available_decorators()

def check_compatibility(decorator1_name, decorator2_name):
    """Check two decorators against the table of known conflicts."""
    dec1 = next((d for d in decorators if d.name == decorator1_name), None)
    dec2 = next((d for d in decorators if d.name == decorator2_name), None)

    if not dec1 or not dec2:
        return "One or both decorators not found"

    # Known conflicts from the table above
    known_conflicts = {
        "ELI5": ["Technical", "Academic"],
        "Concise": ["Detailed"],
        "Inductive": ["Deductive"],
        "Bullet": [],  # Bullet only conflicts with OutputFormat(format=json), handled below
    }

    if dec2.name in known_conflicts.get(dec1.name, []):
        return f"Known conflict: {dec1.name} conflicts with {dec2.name}"
    if dec1.name in known_conflicts.get(dec2.name, []):
        return f"Known conflict: {dec2.name} conflicts with {dec1.name}"

    # Special case: an OutputFormat definition whose declared default
    # format is json conflicts with Bullet
    if dec1.name == "OutputFormat" and dec2.name == "Bullet":
        if hasattr(dec1, "parameters") and any(
            p.get("name") == "format" and p.get("default") == "json"
            for p in dec1.parameters
        ):
            return "Conflict: OutputFormat(format=json) conflicts with Bullet"

    return "No known conflicts"

# Example usage
print(check_compatibility("ELI5", "Technical"))       # Known conflict
print(check_compatibility("StepByStep", "Reasoning"))  # No conflict
```
## Integration Options
### Direct Integration
The simplest integration method is to apply decorators before sending prompts to an LLM:
```python
from prompt_decorators import apply_dynamic_decorators, create_decorator_instance

from your_llm_client import LLMClient

client = LLMClient()

def enhanced_prompt(prompt_text, decorators=None):
    decorated_text = prompt_text

    # Apply inline decorators from the text
    if "+++" in prompt_text:
        decorated_text = apply_dynamic_decorators(prompt_text)

    # Apply additional decorators programmatically
    if decorators:
        for decorator_name, params in decorators.items():
            decorator = create_decorator_instance(decorator_name, **params)
            decorated_text = decorator(decorated_text)

    # Send to LLM
    return client.generate(decorated_text)
```
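A quick usage sketch, assuming the `LLMClient` stub above; the decorator names are the ones used elsewhere in this guide:

```python
# Combine an inline decorator with one applied programmatically
result = enhanced_prompt(
    "+++StepByStep(numbered=true)\nExplain DNS resolution.",
    decorators={"Audience": {"level": "beginner"}},
)
```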
### MCP Integration
For Anthropic Claude and other compatible clients, use the Model Context Protocol (MCP) integration:
```bash
# Run the MCP server (general use)
python -m prompt_decorators.integrations.mcp

# For Claude Desktop integration
python -m prompt_decorators.integrations.mcp.claude_desktop
```
This exposes decorator functionality through a set of tools that can be used by any MCP-compatible client.
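For Claude Desktop, the server is registered in the app's `claude_desktop_config.json`. A sketch using the standard MCP server-entry format; the exact file location, server name, and Python invocation depend on your installation:

```json
{
  "mcpServers": {
    "prompt-decorators": {
      "command": "python",
      "args": ["-m", "prompt_decorators.integrations.mcp.claude_desktop"]
    }
  }
}
```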
### Middleware Integration
For web applications or API services, you can implement decorators as middleware:
```python
from fastapi import FastAPI, Request, Depends

from prompt_decorators import apply_dynamic_decorators

app = FastAPI()

# llm_client is assumed to be your LLM client instance, defined elsewhere

async def prompt_decorator_middleware(request: Request):
    # Extract the prompt from the request body and transform it
    body = await request.json()
    if "prompt" in body:
        body["prompt"] = apply_dynamic_decorators(body["prompt"])
    return body

@app.post("/generate")
async def generate(data: dict = Depends(prompt_decorator_middleware)):
    # The prompt has already been transformed by the dependency
    return {"result": llm_client.generate(data["prompt"])}
```
### Model-Specific Optimization
Different models may respond better to slightly different instruction formats. You can optimize for specific models:
```python
from prompt_decorators import (
    DecoratorDefinition,
    create_decorator_instance,
    register_decorator,
)

# Define a model-specific version of a decorator
gpt4_reasoning_decorator = DecoratorDefinition(
    name="GPT4_Reasoning",
    description="Reasoning decorator optimized for gpt-4o",
    category="ModelSpecific",
    parameters=[
        {
            "name": "depth",
            "type": "enum",
            "description": "Depth of reasoning",
            "enum": ["basic", "moderate", "comprehensive"],
            "default": "moderate",
        }
    ],
    transform_function="""
    let instruction = "I want you to think through this step by step, showing your full reasoning process. ";
    if (depth === "basic") {
        instruction += "Focus on the key logical steps.";
    } else if (depth === "moderate") {
        instruction += "Provide a balanced explanation with important details and connections.";
    } else if (depth === "comprehensive") {
        instruction += "Show a detailed, thorough analysis considering multiple angles and perspectives.";
    }
    return instruction + "\\n\\n" + text;
    """,
)

# Register the model-specific decorator
register_decorator(gpt4_reasoning_decorator)

# Choose decorators based on the target model
def optimize_for_model(prompt, model_name):
    if model_name.startswith("gpt-4o"):
        # Use the gpt-4o-optimized decorator defined above
        reasoning = create_decorator_instance("GPT4_Reasoning", depth="comprehensive")
        return reasoning(prompt)
    elif model_name.startswith("claude"):
        # Placeholder: swap in a Claude-optimized decorator here if you define one
        reasoning = create_decorator_instance("Reasoning", depth="comprehensive")
        return reasoning(prompt)
    else:
        # Use the default decorators for all other models
        reasoning = create_decorator_instance("Reasoning", depth="comprehensive")
        return reasoning(prompt)
```
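Usage is then a single call per prompt:

```python
# The same prompt gets model-appropriate reasoning instructions
print(optimize_for_model("Why is the sky blue?", "gpt-4o"))
print(optimize_for_model("Why is the sky blue?", "claude-3-7-sonnet-latest"))
```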
### Browser Extension Integration
For services without direct API access, you can build browser extensions:
```javascript
// JavaScript example for a browser extension
function applyDecorators(text) {
  // Simple regex-based parser for browser extension use
  const decoratorPattern = /\+\+\+([A-Za-z]+)(?:\(([^)]+)\))?/g;
  let transformed = text;

  // Extract decorators
  const decorators = [];
  let match;
  while ((match = decoratorPattern.exec(text)) !== null) {
    const name = match[1];
    const paramsStr = match[2] || "";

    // Parse parameters (stripping surrounding quotes from values)
    const params = {};
    if (paramsStr) {
      paramsStr.split(',').forEach(pair => {
        const [key, value] = pair.split('=').map(s => s.trim());
        params[key] = value.replace(/^["']|["']$/g, '');
      });
    }

    decorators.push({ name, params, fullMatch: match[0] });
  }

  // Apply transformations (simplified)
  decorators.forEach(dec => {
    transformed = transformed.replace(dec.fullMatch, "");
    if (dec.name === "StepByStep") {
      const numbered = dec.params.numbered === "true";
      transformed = `Please break down your response into ${numbered ? "numbered" : "clear"} steps.\n\n${transformed}`;
    } else if (dec.name === "Reasoning") {
      const depth = dec.params.depth || "moderate";
      transformed = `Please provide ${depth} reasoning in your response.\n\n${transformed}`;
    }
    // Add more decorator implementations as needed
  });

  return transformed;
}

// Hook into the page's prompt input
document.querySelector('#prompt-textarea').addEventListener('keydown', function (e) {
  if (e.key === 'Enter' && !e.shiftKey) {
    const text = this.value;
    if (text.includes('+++')) {
      e.preventDefault();
      this.value = applyDecorators(text);
      // Then submit the form programmatically
      document.querySelector('form').submit();
    }
  }
});
```
## Next Steps

- Explore the MCP integration for Claude and other LLMs
- Learn about creating custom decorators optimized for specific models
- Check the tutorials for examples of compatible decorator combinations