# Contributing to SIARE
Thank you for your interest in contributing to SIARE! This guide will help you get started.
## Quick Start

```bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/siare.git
cd siare

# Set up development environment
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -e ".[dev]"

# Verify setup
pytest

# Create a branch for your work
git checkout -b feature/your-feature-name
```
## Development Setup

### Prerequisites
- Python 3.10+
- Git
- An OpenAI API key (for integration tests)
### Full Setup

```bash
# Clone your fork
git clone https://github.com/YOUR_USERNAME/siare.git
cd siare

# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install with dev dependencies
pip install -e ".[dev]"

# Set up pre-commit hooks
pre-commit install

# Set environment variables
export OPENAI_API_KEY=your_key_here
```
### Verify Installation

```bash
# Run all tests
pytest

# Run linting
ruff check .

# Run type checking
mypy siare/

# Format code
ruff format .
```
## Code Quality Standards

### Style Guide

We use:

- `ruff` for linting and formatting
- `mypy` for type checking
- `pytest` for testing

```bash
# Check everything
ruff check .
mypy siare/
pytest
```
### Key Principles

**Type hints everywhere.**

```python
# Good
def execute(self, task: Task) -> ExecutionTrace: ...

# Bad
def execute(self, task): ...
```

**Models in `models.py`.** All Pydantic data structures go in `siare/core/models.py`; no data structures scattered across modules.

**Dependency injection in the API.**

```python
# Good
@app.get("/sops")
def get_sops(gene_pool: GenePool = Depends(get_gene_pool)):
    return gene_pool.list_sops()

# Bad - global state
_gene_pool = GenePool()

@app.get("/sops")
def get_sops():
    return _gene_pool.list_sops()
```

**No silent fallbacks to mocks.**

```python
# Bad - silent mock
def evaluate(self):
    if self.provider:
        return self.provider.call()
    return {"score": 0.85}  # Silent fake!

# Good - fail loudly
def evaluate(self):
    if not self.provider:
        raise RuntimeError("LLM provider required")
    return self.provider.call()
```

**Validate constraints before mutations.**

```python
errors = director.validate_constraints(sop, constraints)
if errors:
    raise ValueError(f"Constraint violations: {errors}")
```
## Testing

### Test Organization

```text
tests/
├── unit/          # Single component, mocked dependencies
├── integration/   # Multiple components, real interactions
└── conftest.py    # Shared fixtures
```
### Running Tests

```bash
# All tests
pytest

# Single file
pytest tests/unit/test_director.py -v

# Specific test
pytest tests/unit/test_director.py::test_validate_prompt_constraints -v

# With coverage
pytest --cov=siare --cov-report=html
```
### Writing Tests

Name tests as `test_<component>_<scenario>_<expected_outcome>`:

```python
def test_director_diagnose_identifies_weak_metric():
    """Director correctly identifies the weakest metric in evaluation."""
    # Arrange
    evaluation = EvaluationVector(
        metrics={"accuracy": 0.9, "latency": 0.3, "cost": 0.8}
    )

    # Act
    diagnosis = director.diagnose(evaluation)

    # Assert
    assert diagnosis.weakest_metric == "latency"
    assert "latency" in diagnosis.mutation_targets
```
### Integration Tests

Mark tests that need external services:

```python
@pytest.mark.integration
@pytest.mark.skipif(
    not os.environ.get("OPENAI_API_KEY"),
    reason="OPENAI_API_KEY required",
)
def test_execution_engine_with_real_llm():
    ...
```
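Custom marks like `integration` should also be registered, so pytest doesn't emit unknown-mark warnings; a sketch assuming the project's pytest configuration lives in `pyproject.toml`:

```toml
[tool.pytest.ini_options]
markers = [
    "integration: tests that require external services",
]
```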
## Pull Request Process

### Before Submitting

1. Run all checks:

   ```bash
   ruff check .
   ruff format .
   mypy siare/
   pytest
   ```

2. Update documentation if adding features.
3. Add tests for new functionality.
4. Write a clear commit message:

   ```text
   feat(director): add TextGrad prompt optimization strategy

   - Implement TextGrad strategy for prompt evolution
   - Add gradient-based optimization using LLM critiques
   - Include unit tests for edge cases
   ```
### Commit Message Format

```text
<type>(<scope>): <short description>

<body - what and why>

<footer - breaking changes, issue refs>
```

Types:

- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation only
- `refactor`: Code change that neither fixes a bug nor adds a feature
- `test`: Adding missing tests
- `chore`: Build process or auxiliary tools
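A commit header in this format can be checked mechanically. The sketch below mirrors the type list above; it is an illustration, not an existing SIARE lint script:

```python
# Sketch of a commit-header check mirroring the types above; this is
# an illustration, not an existing SIARE lint script.
import re

COMMIT_TYPES = ("feat", "fix", "docs", "refactor", "test", "chore")
HEADER_RE = re.compile(rf"^(?:{'|'.join(COMMIT_TYPES)})(?:\([a-z0-9_-]+\))?: \S.*$")


def valid_header(line: str) -> bool:
    """Return True if the first commit line matches <type>(<scope>): <desc>."""
    return HEADER_RE.match(line) is not None
```

Such a check could run as a `commit-msg` hook, rejecting headers with an unknown type or a missing description.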
### PR Template
Your PR should include:
- Summary: What does this change?
- Motivation: Why is this change needed?
- Test plan: How was this tested?
- Breaking changes: Does this break existing APIs?
## Adding New Features

### Adding a New Mutation Type

1. Add to the enum in `siare/core/models.py`:

   ```python
   class MutationType(str, Enum):
       PROMPT_CHANGE = "prompt_change"
       # ... existing types
       NEW_TYPE = "new_type"  # Add your type
   ```

2. Implement a handler in `siare/services/director.py`:

   ```python
   def _apply_new_type_mutation(self, sop: ProcessConfig, target: str) -> ProcessConfig:
       """Apply NEW_TYPE mutation."""
       # Implementation
       return mutated_sop
   ```

3. Add to the dispatch in `mutate_sop()`:

   ```python
   if mutation.type == MutationType.NEW_TYPE:
       return self._apply_new_type_mutation(sop, mutation.target)
   ```

4. Write tests in `tests/unit/test_director.py`.
5. Document in `docs/reference/mutation-operators.md`.
### Adding a New Tool Adapter

1. Create an adapter in `siare/adapters/`:

   ```python
   from siare.adapters.base import ToolAdapter, register_adapter

   @register_adapter("my_tool")
   class MyToolAdapter(ToolAdapter):
       def initialize(self) -> None:
           self.is_initialized = True

       def execute(self, inputs: dict) -> dict:
           return {"result": "..."}

       def validate_inputs(self, inputs: dict) -> list[str]:
           return []
   ```

2. Add tests in `tests/unit/test_adapters.py`.
3. Document in `docs/guides/custom-extensions.md`.
### Adding a New Metric

1. Define a metric function:

   ```python
   def my_metric(trace: ExecutionTrace, task_data: dict) -> float:
       return 0.85  # Score 0-1
   ```

2. Register it with the evaluation service:

   ```python
   evaluation_service.register_metric_function("my_metric", my_metric)
   ```

3. Add tests.
4. Document in `docs/guides/custom-extensions.md`.
### Adding a Prompt Evolution Strategy

1. Extend the base class in `siare/services/prompt_evolution/strategies/`:

   ```python
   from siare.services.prompt_evolution.strategies.base import PromptEvolutionStrategy

   class MyStrategy(PromptEvolutionStrategy):
       def optimize(self, prompt: str, failure_context: dict) -> str:
           improved_prompt = self._apply_improvements(prompt, failure_context)
           return improved_prompt

       def select_strategy(self, failure_patterns: list[str]) -> bool:
           return "my_pattern" in failure_patterns
   ```

2. Register it with the factory:

   ```python
   factory = PromptOptimizationFactory()
   factory.register_strategy("my_strategy", MyStrategy)
   ```

3. Add tests and documentation.
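To see the registration flow in isolation, here is a minimal stand-in for the factory; the real `PromptOptimizationFactory` lives in SIARE's source, and the `create()` method below is invented for illustration:

```python
# Minimal stand-in for the strategy factory, showing registration end
# to end. The real PromptOptimizationFactory lives in SIARE's source;
# the create() method here is invented for illustration.
class PromptOptimizationFactory:
    def __init__(self) -> None:
        self._strategies: dict[str, type] = {}

    def register_strategy(self, name: str, cls: type) -> None:
        self._strategies[name] = cls

    def create(self, name: str):
        return self._strategies[name]()


class MyStrategy:
    def optimize(self, prompt: str, failure_context: dict) -> str:
        return prompt + "\nBe concise."


factory = PromptOptimizationFactory()
factory.register_strategy("my_strategy", MyStrategy)
strategy = factory.create("my_strategy")
```

Registering classes (not instances) lets the factory construct a fresh strategy per evolution run.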
## Documentation

### Writing Documentation
- Use clear, concise language
- Include code examples
- Add cross-references with “See Also” sections
- Update the `docs/README.md` navigation hub
### Documentation Structure

```text
docs/
├── README.md            # Navigation hub
├── QUICKSTART.md        # Getting started
├── CONFIGURATION.md     # Complete reference
├── TROUBLESHOOTING.md   # Common issues
├── DEPLOYMENT.md        # Production deployment
├── concepts/            # How things work
├── guides/              # Step-by-step tutorials
├── reference/           # API and operator reference
├── production/          # Security, cost, monitoring
└── examples/            # Worked examples
```
### Building Docs Locally

```bash
# If using MkDocs (optional)
pip install mkdocs-material
mkdocs serve
```
## Getting Help

### Questions
- Open a GitHub issue with the “question” label
- Check existing issues and docs first
### Reporting Bugs
Include:
- SIARE version
- Python version
- Steps to reproduce
- Expected vs actual behavior
- Error messages/logs
### Feature Requests
Include:
- What you’re trying to accomplish
- Current workaround (if any)
- Proposed solution
- Alternatives considered
## Code of Conduct

### Our Standards
- Be respectful and inclusive
- Accept constructive criticism gracefully
- Focus on what’s best for the community
- Show empathy towards others
### Enforcement
Unacceptable behavior may be reported to the maintainers. All complaints will be reviewed and investigated.
## License
By contributing, you agree that your contributions will be licensed under the project’s MIT License.
## Recognition
Contributors are recognized in:
- GitHub contributors page
- Release notes for significant contributions
Thank you for contributing to SIARE!