first commit

2026-04-22 18:05:20 -05:00
commit 24cfd85ac2
41 changed files with 5791 additions and 0 deletions


@@ -0,0 +1,11 @@
{
"ai": "copilot",
"ai_commands_dir": null,
"ai_skills": false,
"branch_numbering": "sequential",
"here": true,
"offline": false,
"preset": null,
"script": "sh",
"speckit_version": "0.4.3"
}


@@ -0,0 +1,810 @@
<!--
Sync Impact Report:
Version: 1.19.0 → 1.20.0 (MINOR: Literal function body pattern for global variables)
Modified Principles:
- Section I.II: Updated adapterHelper.js pattern to use literal function body with return statement
- Section I.V: Updated to document function body wrapping pattern in loadGlobalVariables()
- Overall: Established clear pattern where files contain function bodies, server handles wrapping
Architecture Changes:
- adapterHelper.js: Changed from object literal to literal function body with return
- server.js: Wraps loaded .js code in IIFE: (function() { <code> })()
- Pattern applies to ALL .js files in globalVariables/ directory
Code Changes:
- adapterHelper.js:
* Changed from `({...})` pattern to `return {...}` pattern
* File now contains LITERAL BODY of a function (NOT valid standalone JS)
* Updated header comment to explain literal function body pattern
* Has bare return statement (intentional - wrapped by server)
- server.js: loadGlobalVariables():
* Wraps code: `const wrappedCode = '(function() {\n' + code + '\n})()'`
* Creates IIFE that executes function body
* Captures returned object from function execution
* Pattern centralizes wrapping logic in server
Modified Sections:
- Section I.II: Updated adapterHelper.js pattern documentation
- Section I.V: Updated to show literal function body loading pattern
- Throughout: Clarified that .js files contain function bodies, not complete modules
Rationale:
- Clear separation: file content (function body) vs execution wrapper (server)
- Files represent what goes INSIDE a function
- Explicit return statement shows what's exported
- Wrapping logic centralized in one place (server.js)
- Consistent pattern for all global variable functions
Benefits:
- Files clearly show their purpose (function bodies)
- Explicit return statement documents exports
- Easier to understand: file = body, server = wrapper
- Single place to modify wrapping behavior
- Consistent pattern across all .js files in globalVariables/
Templates Status:
✅ All templates - No changes needed (implementation pattern only)
Previous Version:
Version: 1.18.0 → 1.19.0 (MINOR: Code simplification and optimization)
Modified Principles:
- Section I.V: Updated to document unified loadGlobalVariables() function
- Section I.II: Updated adapterHelper.js pattern to reflect simplified approach
- Overall: Documented 52% reduction in codebase size through systematic simplification
Architecture Changes:
- server.js: Unified loadGlobalObjects() + loadGlobalVariables() → loadGlobalVariables()
- server.js: Single-pass file scanning with immediate categorization
- server.js: Immutable config merging pattern
- server.js: Natural error bubbling (reduced try-catch blocks)
- proxy.js: Inlined JWT creation and token exchange (removed wrapper functions)
- proxy.js: Added named constants (TOKEN_EXPIRY_MS, TOKEN_BUFFER_MS)
- proxy.js: Reduced verbose debug logging (production-ready)
- proxy.js: Natural error handling (removed unnecessary try-catch)
Code Changes:
- server.js: 258 → 234 → 183 lines (29% reduction)
* Combined loader functions into single loadGlobalVariables()
* Simplified loadConfig() with immutable spreading
* Streamlined validateConfig() to single check
* Removed redundant comments and temp variables
- proxy.js: 752 → 493 → 298 lines (60% reduction)
* Auth section: 135 → 62 lines (inlined helpers, added constants)
* RequestQueue: 85 → 39 lines (removed debug logging)
* Drive API: 109 → 52 lines (natural error bubbling)
* Request handling: 124 → 88 lines (simplified comments)
- Combined savings: 529 lines removed (52% overall reduction)
Modified Sections:
- Section I.V: Updated loadGlobalVariables() pattern (unified loader)
- Section I.II: Updated adapterHelper.js documentation for simplified approach
- Throughout: Removed references to separate loadGlobalObjects/loadGlobalVariableFunctions
Rationale:
- Unified loader is simpler API (one function instead of two)
- Single-pass scanning is more efficient
- Named constants improve readability (no magic numbers)
- Production-ready logging (info/error only, removed debug noise)
- Natural error bubbling provides better stack traces
- Cleaner code is easier to maintain and understand
Benefits:
- 52% fewer lines to read, test, and maintain
- Faster execution (single-pass file scanning)
- Better debuggability (natural error propagation)
- Improved onboarding (less code to understand)
- Production-ready (appropriate logging levels)
Templates Status:
✅ All templates - No changes needed (implementation optimization)
Previous Version:
Version: 1.17.0 → 1.18.0 (MINOR: Updated function loading pattern documentation)
Modified Principles:
- Section I.V: Updated VM context injection pattern to reflect loadGlobalVariableFunctions
- Section I.V: Documented generic .js file loading from globalVariables/
- Section I.V: Updated tempContext to use full globalVMContext + globalVariableContext
- Section I.II: Updated adapterHelper.js documentation to reflect generic pattern
Architecture Changes:
- Renamed loadHelpers() → loadGlobalVariables()
- Generic pattern: loads ALL .js files from globalVariables/
- Filename determines key: adapterHelper.js → globalVariableContext.adapterHelper
- tempContext now includes full VM globals and previously loaded data
- Function modules can access all dependencies (axios, jwt, etc.)
Code Changes:
- server.js: loadGlobalVariables() loads all .js files generically
- server.js: tempContext uses {...globalVMContext, ...globalVariableContext}
- Pattern matches loadGlobalObjects() structure for consistency
Modified Sections:
- Section I.II: Updated file structure and adapterHelper.js pattern documentation
- Section I.V: Complete rewrite of function module loading documentation
- Section I.V: Updated context injection pattern code example
Rationale:
- Generic pattern supports multiple function modules without code changes
- Consistent with loadGlobalObjects() pattern (JSON files)
- Function modules can depend on full set of VM globals
- Enables complex function modules that need access to libraries
Templates Status:
✅ All templates - No changes needed (implementation detail)
Previous Version:
Version: 1.16.0 → 1.17.0 (MINOR: Helper functions extraction pattern)
Modified Principles:
- Section I: Updated to allow helper functions in src/globalVariables/adapterHelper.js
- Section I.II: Added adapterHelper.js to allowed files list
- Section I.V: Added helpers object to global context injection
- New Pattern: Pure utility functions can be extracted to adapterHelper.js and loaded via vm.Script
Architecture Changes:
- Created src/globalVariables/adapterHelper.js (315 lines)
- Extracted 11 helper functions from proxy.js
- Reduced proxy.js from 752 to 493 lines (35% reduction)
- Helper functions loaded via vm.Script and injected as 'helpers' global object
- proxy.js now uses helpers.functionName() pattern
Code Changes:
- Created: src/globalVariables/adapterHelper.js with IIFE returning helpers object
- Modified: src/proxyScripts/proxy.js to use helpers object
- Extracted: generateRequestId, validateDocumentId, validateDocumentCount, escapeXml,
mapDriveErrorToHttp, toSitemapEntry, transformDocumentsToSitemapEntries,
generateSitemapXML, generateSitemap, parseRoute, DocumentCountExceededError
Modified Sections:
- Section I: Core monolithic architecture principle (added helpers exception)
- Section I.I: What must be in proxy.js (clarified helpers can be external)
- Section I.II: What can be separate files (added adapterHelper.js)
- Section I.V: Global objects injection (added helpers object documentation)
Rationale:
- Improves code organization while maintaining vm.Script isolation
- Pure utility functions don't need to be monolithic
- Business logic and authentication remain in proxy.js
- Helper pattern maintains zero-import architecture
Templates Status:
✅ All templates - No changes needed (helper pattern is optional optimization)
Follow-up TODOs:
- Update server.js to load adapterHelper.js via vm.Script
- Add helpers object to globalVariableContext
Previous Version:
Version: 1.15.0 → 1.16.0 (PATCH: Directory relocation and documentation update)
Modified Principles:
- Section I: Updated all references to global/ directory to src/globalVariables/
- Section I.II: Updated file structure to reflect global/ → src/globalVariables/
- Section I.IV: Updated configuration section to reference src/globalVariables/
- Section I.V: Updated global objects documentation to reference src/globalVariables/
Documentation Changes:
- Updated all references to global/ directory to src/globalVariables/
- Clarified that src/globalVariables/ contains JSON data files loaded at startup
- Updated file structure documentation to show correct hierarchy
Code Changes:
- Relocated: global/ → src/globalVariables/
- No logic changes to any files
Modified Sections:
- Section I: Multiple references to global directory location
- Section I.II: File structure documentation
- Section I.IV: Configuration section
- Section I.V: Global objects injection section
Templates Status:
✅ All templates - No changes needed (directory relocation only)
Follow-up TODOs:
- Update server.js to load global objects from new location (src/globalVariables/)
Previous Version:
Version: 1.14.0 → 1.15.0 (PATCH: File relocation and documentation update)
Modified Principles:
- Section I.II: Updated file structure to reflect proxy.js relocation to src/proxyScripts/
- Section I.III: Updated enforcement rules to reflect new proxy.js location
Documentation Changes:
- Updated all references to proxy.js to reflect new location at src/proxyScripts/proxy.js
- Clarified that src/proxyScripts/ contains the main proxy script (singular, not plural scripts)
- Updated file structure documentation to show correct hierarchy
Code Changes:
- Relocated: src/proxy.js → src/proxyScripts/proxy.js
- Created directory: src/proxyScripts/
- No logic changes to proxy.js itself
Modified Sections:
- Section I.II: File structure documentation
- Section I.III: Enforcement rules
- All references to proxy.js location throughout document
Templates Status:
✅ All templates - No changes needed (file relocation only)
Follow-up TODOs:
- Update server.js to load proxy.js from new location (src/proxyScripts/proxy.js)
Previous Version:
Version: 1.13.0 → 1.14.0 (PATCH: Documentation corrections to match actual implementation)
Modified Principles:
- Section I.V: Removed config from context injection (proxy.js doesn't need infrastructure settings)
- Section I.V: Documented spread operator pattern (globalVMContext and globalVariableContext)
- Section I.V: Clarified direct variable access pattern for proxy.js
- Section I.V: Updated injection pattern to show actual loadGlobalObjects() → globalVariableContext flow
- Section I.V: Renumbered items after removing config (was 12 items, now 11)
- Section I.V: Enhanced rationale to explain spread operator benefits
Documentation Changes:
- Removed config: global.config from context injection example (not actually injected)
- Added explicit code example showing globalVMContext and globalVariableContext definitions
- Clarified that proxy.js uses direct variable access: `const x = adapter_settings;`
- Documented spread operator pattern: `{ ...globalVMContext, ...globalVariableContext, req, res }`
- Added note about URL availability (not in globalVMContext but available via Node.js)
No Code Changes:
- This is purely a documentation update to match existing implementation
- No JavaScript files modified
- Constitution now accurately reflects server.js and proxy.js patterns
Modified Sections:
- Section I.V: Complete rewrite of VM Context Injection Pattern section
- Section I.V: Rationale updated to explain spread operator benefits
Templates Status:
✅ All templates - No changes needed (documentation-only update)
Follow-up TODOs:
- Consider updating proxy.js to use direct variable access instead of globalThis["name"]
-->
# Proxy Scripts Constitution
## Core Principles
### I. Monolithic Architecture (NON-NEGOTIABLE)
**ALL business logic, data processing, authentication, and request handling MUST exist within the `src/proxyScripts/proxy.js` file.** Pure utility/helper functions MAY be extracted to `src/globalVariables/adapterHelper.js` if they improve code organization. The `server.js` file should ONLY handle:
- HTTP server setup
- Configuration loading
- Global object injection into isolated context
- Loading src/proxyScripts/proxy.js via `vm.Script` and `vm.createContext`
- Loading src/globalVariables/adapterHelper.js via `vm.Script` (optional)
- Per-request context creation with all necessary globals
**Implementation via vm.Script**:
`src/proxyScripts/proxy.js` MUST be loaded using Node.js `vm.Script` and executed in isolated contexts created per-request with `vm.createContext`. This ensures:
- Complete isolation from server.js module system
- All dependencies provided explicitly through context objects
- Zero ability to import/export modules
- Pure functional execution with injected dependencies
**Rationale**: Monolithic architecture enables simple packaging as a single IVA Studio proxy script and prevents fragmentation of business logic across multiple files. Using `vm.Script` enforces architectural boundaries at runtime, making it impossible for `src/proxyScripts/proxy.js` to access Node.js module system or file system, ensuring ALL functionality exists in one isolated, dependency-injected file.
**Helper Functions Pattern**: Pure utility functions (XML escaping, validation, formatting, routing) MAY be extracted to `src/globalVariables/adapterHelper.js` to improve readability and maintainability. The helpers module:
- MUST be loaded via `vm.Script` (same isolation as proxy.js)
- MUST evaluate to a single JavaScript object with all helper functions
- MUST have ZERO imports/exports
- Is injected as `adapterHelper` global object into VM context
- Contains ONLY pure utilities, NO business logic or state
### I. Zero External Imports or Exports from `src/proxyScripts/proxy.js` (NON-NEGOTIABLE)
`src/proxyScripts/proxy.js` MUST have **ZERO import statements**. All dependencies MUST be provided through `vm.createContext` by server.js.
`src/proxyScripts/proxy.js` MUST have **ZERO export statements**. The file MUST be pure JavaScript code executed in an isolated VM context.
**File system access** from `src/proxyScripts/proxy.js` is **ABSOLUTELY PROHIBITED** under any circumstances. The `fs` module MUST NOT be accessible.
**External libraries** (axios, jwt, googleapis, etc.) MUST NOT be imported. Dependencies are injected through VM context by server.js.
**Rationale**: Using `vm.Script` and `vm.createContext` enforces architectural boundaries at the VM level. src/proxyScripts/proxy.js runs in an isolated context with NO access to Node.js module system, file system, or process globals. ALL dependencies must be explicitly injected per-request through the context object, ensuring src/proxyScripts/proxy.js contains ONLY pure business logic with zero capability for I/O operations.
**For data files that src/proxyScripts/proxy.js needs** (service account keys, certificates, secrets):
1. Place JSON files in `src/globalVariables/` directory
2. server.js loads them at startup using `loadGlobalVariables()`
3. server.js injects them into VM context per-request via `vm.createContext`
4. src/proxyScripts/proxy.js accesses them as simple variables in context (e.g., `adapter_settings`)
**Example**:
- File: `src/globalVariables/adapter_settings.json`
- Loading: server.js reads and assigns to `globalVariableContext.adapter_settings`
- Injection: server.js adds `adapter_settings: globalVariableContext.adapter_settings` to context
- Access in src/proxyScripts/proxy.js: Direct variable access `adapter_settings.serviceAccount`
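To make step 4 concrete, here is a hedged sketch of how `src/proxyScripts/proxy.js` might build a service-account JWT from the injected objects; the `client_email`/`private_key` field names assume a standard Google service-account key file and are not mandated by this constitution:
```javascript
// Inside src/proxyScripts/proxy.js - no imports; jwt and adapter_settings are injected.
// Field names below assume a standard Google service-account key file (illustrative).
const { serviceAccount } = adapter_settings;        // direct variable access, no globalThis
const assertion = jwt.sign(
  {
    iss: serviceAccount.client_email,               // service account identity
    scope: adapter_settings.scopes.join(' '),       // scopes from injected settings
    aud: 'https://oauth2.googleapis.com/token',
    iat: Math.floor(Date.now() / 1000),
    exp: Math.floor(Date.now() / 1000) + 3600,      // 1 hour expiry
  },
  serviceAccount.private_key,
  { algorithm: 'RS256' }
);
```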
**Enforcement**:
- src/proxyScripts/proxy.js MUST have NO `import` statements (file should start with comments, then code)
- src/proxyScripts/proxy.js MUST have NO `export` statements (no module.exports, no export keyword)
- Any `import` or `export` in src/proxyScripts/proxy.js MUST be rejected immediately
- server.js MUST load src/proxyScripts/proxy.js using `vm.Script` constructor
- server.js MUST execute via `script.runInContext(context)` with fresh context per request
- All dependencies injected through `vm.createContext({ ... })` context object
- VM isolation prevents access to require(), import(), fs, process, and Node.js globals
#### I.0 Forbidden Globals in proxy.js (NON-NEGOTIABLE)
`src/proxyScripts/proxy.js` MUST NOT access ANY infrastructure configuration globals. The following are **ABSOLUTELY PROHIBITED**:
- `config` - Infrastructure settings (server port, proxy paths, logging level)
- `global.config` - Global configuration object
- `process.env` - Environment variables (these are server concerns, not business logic)
**ONLY the following globals are permitted** in `src/proxyScripts/proxy.js`:
- `console` - Custom logger (injected by server.js)
- `crypto` - Web Crypto API for randomUUID()
- `axios` - HTTP client for API calls
- `jwt` - JSON Web Token library for authentication
- `xmlBuilder` - XML document builder
- `uuidv4` - UUID generator
- `adapterHelper` - Helper functions (loaded from src/globalVariables/)
- `adapter_settings` - Business data only (service account, Drive query, sitemap settings)
- `req` - HTTP request object (includes req.params with routing metadata)
- `res` - HTTP response object
**Rationale**: Infrastructure configuration (server ports, proxy routing, deployment settings) is the responsibility of server.js, NOT business logic. proxy.js implements document export logic - it should NOT know about HTTP server configuration, proxy path prefixes, or deployment details. These are injected via `req.params` when needed for routing.
**If routing information is needed** (e.g., proxy path prefix for route parsing):
1. server.js MUST parse the incoming request URL
2. server.js MUST extract routing metadata (workspaceId, branch, routeName)
3. server.js MUST add this to `req.params` before invoking proxy.js
4. proxy.js accesses routing info via `req.params`, NOT via `config`
**Example of correct routing metadata injection**:
```javascript
// server.js - BEFORE invoking proxy.js
if (global.config.proxy) {
  const { pathPrefix, workspaceId, branch, routeName } = global.config.proxy;
  const fullPrefix = `${pathPrefix.replace(/\/$/, '')}/${workspaceId}/${branch}/${routeName}`;
  if (req.url.startsWith(fullPrefix)) {
    req.params = {
      "0": req.url,        // Original path
      workspaceId,         // Extracted from config
      branch,              // Extracted from config
      route: routeName     // Extracted from config (renamed to 'route')
    };
  }
}
```
**Enforcement**:
- Any reference to `config` in proxy.js MUST be rejected
- Any reference to `global.config` in proxy.js MUST be rejected
- Any reference to `process.env` in proxy.js MUST be rejected
- Routing metadata MUST be passed via `req.params`, never via `config`
- Code reviews MUST verify zero infrastructure globals in proxy.js
#### I.I What MUST Be in src/proxyScripts/proxy.js
The following MUST be implemented in `src/proxyScripts/proxy.js` (or extracted to adapterHelper.js if pure utilities):
1. **Authentication**: Service Account JWT, OAuth flows, token management (MUST be in proxy.js)
2. **Business Logic**: All request handling, routing, and processing (MUST be in proxy.js)
3. **Data Transformation**: Document parsing, XML generation, data mapping (MUST be in proxy.js or adapterHelper.js)
4. **API Integration**: Drive API queries, error mapping, response handling (MUST be in proxy.js)
5. **Request Queue**: FIFO queue for sequential processing (MUST be in proxy.js)
6. **Utility Functions**: Request ID generation, validation, XML escaping, date formatting (MAY be in adapterHelper.js)
7. **Error Handling**: All error mapping and HTTP status code logic (MAY be in adapterHelper.js)
**Helper Extraction Guidelines**:
- **CAN extract**: Pure functions, validators, formatters, XML utilities, error mappers, route parsers
- **MUST NOT extract**: Authentication, API calls, request queue, cached state, business decisions
**NO EXCEPTIONS for business logic** - Even complex authentication (OAuth 2.0, JWT) must be in proxy.js.
#### I.II What Can Be Separate Files
ONLY the following infrastructure modules may exist outside `src/proxyScripts/proxy.js`:
1. **src/logger.js**: Structured logging with console replacement (ONLY logging, no business logic)
2. **src/server.js**: HTTP server bootstrap and configuration (ONLY server setup, no business logic)
3. **config/**: JSON configuration files (data files, not code)
4. **src/globalVariables/**: JSON data files AND adapterHelper.js module
- `*.json`: Runtime data loaded at startup (credentials, settings)
- `adapterHelper.js`: Pure utility functions loaded via vm.Script (OPTIONAL)
5. **src/proxyScripts/**: Directory containing the main proxy script (proxy.js)
**Test files are exempt** - Test utilities may exist solely for test compatibility if needed, but MUST NOT be imported by production code.
**File Structure**:
```
src/
├── proxyScripts/
│   └── proxy.js           # Main business logic (authentication, API, queue)
├── globalVariables/
│   ├── *.json             # Data files for VM context
│   └── adapterHelper.js   # Pure utility functions (OPTIONAL)
├── logger.js              # Structured logging
└── server.js              # HTTP server bootstrap
config/
└── default.json           # Infrastructure settings
```
**adapterHelper.js Pattern (Literal Function Body)**:
- MUST be loaded using `vm.Script` (same isolation as proxy.js)
- MUST contain LITERAL FUNCTION BODY with `return` statement (NOT valid standalone JS)
- server.js wraps it: `(function() { <file contents> })()`
- Function body returns a single JavaScript object containing all exports
- MUST have ZERO imports/exports (pure vm.Script execution)
- Loaded by `loadGlobalVariables()` which scans for both JSON and JS files
- Filename determines global key: `adapterHelper.js` → `globalVariableContext.adapterHelper`
- Injected as `adapterHelper` global object into VM context
- Contains ONLY pure utilities: validators, formatters, XML, error mappers
- MUST NOT contain: authentication, API calls, state, business decisions
- Executed in context with full access to globalVMContext and globalVariableContext
#### I.III Enforcement
During code review and planning:
- ANY file in `src/proxyScripts/` besides `proxy.js` MUST be challenged
- ANY file in `src/globalVariables/` besides `adapterHelper.js` and `*.json` MUST be challenged
- ANY file in `src/` besides `proxyScripts/`, `globalVariables/`, `logger.js`, `server.js` MUST be challenged
- Authentication, even if complex, MUST be in `src/proxyScripts/proxy.js` (never in adapterHelper.js)
- Business logic MUST be in `src/proxyScripts/proxy.js` (never in adapterHelper.js)
- Exceptions require explicit constitutional justification with measurable trade-offs
- When in doubt about helpers extraction, keep it in `src/proxyScripts/proxy.js`
**RED FLAGS to reject immediately:**
- Separate files for: auth, database, utilities, helpers, services, controllers, models
- Any file containing business logic or domain knowledge
- Multiple files "organizing" the codebase
#### I.IV Configuration
- Configuration for the Node.js web server infrastructure should be stored as JSON in `config/default.json`.
- `config/default.json` MUST contain ONLY infrastructure settings: server (host, port), logging level
- `config/default.json` MUST NOT contain authentication credentials, secrets, API keys, or behavioral configuration
- Authentication credentials, secrets, and ALL behavioral configuration MUST be stored in `src/globalVariables/` directory as JSON files
- Global JSON files are automatically loaded by server.js and made available as global objects
- server.js should validate both configuration from `config/default.json` AND global objects from `src/globalVariables/` directory
#### I.V Global Objects Provided by server.js
The `server.js` file MUST inject the following objects into VM context for use by `src/proxyScripts/proxy.js`:
**VM Context Injection Pattern:**
server.js uses a spread operator pattern for cleaner context creation:
```javascript
// Define static VM context (libraries and built-ins)
const globalVMContext = {
  URLSearchParams,
  URL,
  console: logger,
  crypto,
  axios,
  uuidv4,
  jwt,
  xmlBuilder,
};

// Load dynamic data from src/globalVariables/ directory
let globalVariableContext = {}; // Populated by loadGlobalVariables()

// Load all global variables (JSON data + JS function modules) at startup
loadGlobalVariables();
// Pattern:
// 1. Single-pass scan of globalVariables/ for *.json and *.js files (excluding *.example.*)
// 2. Categorize into jsonFiles and jsFiles arrays
// 3. Load JSON files first (data) → globalVariableContext[filename]
// 4. Load JS files second (functions) → globalVariableContext[filename]
// 5. JS files execute in context with {...globalVMContext, ...globalVariableContext}
// Example: adapterHelper.js returns object → globalVariableContext.adapterHelper = object
// Example: adapter_settings.json → globalVariableContext.adapter_settings = data

// Per-request: Create fresh context with all dependencies
const context = vm.createContext({
  ...globalVMContext,        // Spread static dependencies
  ...globalVariableContext,  // Spread dynamic data (JSON + function modules)
  req,                       // Fresh request object
  res                        // Fresh response object
});
script.runInContext(context);
```
**Note:** src/proxyScripts/proxy.js accesses these as direct variables (e.g., `adapter_settings`, not `globalThis["adapter_settings"]`). The VM context makes all properties available as top-level variables.
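The loader itself is not reproduced in this document, so the following is a minimal sketch of a `loadGlobalVariables()` consistent with the commented pattern above; the directory location, synchronous I/O, and hyphen-to-underscore key normalization (suggested by the `api-keys.json` → `api_keys` example later in this section) are assumptions:
```javascript
// Minimal sketch only - illustrative, not the canonical server.js implementation.
const fs = require('node:fs');
const path = require('node:path');
const vm = require('node:vm');

function loadGlobalVariables() {
  const dir = path.join(__dirname, 'globalVariables'); // assumed location
  // 1-2. Single-pass scan with immediate categorization (skip *.example.* files)
  const jsonFiles = [];
  const jsFiles = [];
  for (const file of fs.readdirSync(dir)) {
    if (file.includes('.example.')) continue;
    if (file.endsWith('.json')) jsonFiles.push(file);
    else if (file.endsWith('.js')) jsFiles.push(file);
  }
  // Filename determines the context key; hyphens assumed to normalize to underscores.
  const keyFor = (file, ext) => path.basename(file, ext).replace(/-/g, '_');
  // 3. JSON data first
  for (const file of jsonFiles) {
    const raw = fs.readFileSync(path.join(dir, file), 'utf8');
    globalVariableContext[keyFor(file, '.json')] = JSON.parse(raw);
  }
  // 4-5. JS function bodies second: wrap in an IIFE, run with full context, capture the return
  for (const file of jsFiles) {
    const code = fs.readFileSync(path.join(dir, file), 'utf8');
    const wrappedCode = `(function() {\n${code}\n})()`;
    const script = new vm.Script(wrappedCode, { filename: file });
    const context = vm.createContext({ ...globalVMContext, ...globalVariableContext });
    globalVariableContext[keyFor(file, '.js')] = script.runInContext(context);
  }
}
```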
**Core Infrastructure Context Variables:**
1. **console** - Custom logger from `logger.js`
- Purpose: Structured JSON logging
- Usage: `console.info()`, `console.debug()`, `console.error()`
- Injected from: `globalVMContext.console` (set to `logger`)
2. **crypto** - Web Crypto API (built-in)
- Purpose: UUID generation, cryptographic operations
- Usage: `crypto.randomUUID()`, etc.
- Injected from: `globalVMContext.crypto`
- Note: Web Crypto API available by default in Node.js
3. **axios** - HTTP client library
- Purpose: Making HTTP requests to external APIs
- Usage: `axios.get(url)`, `axios.post(url, data)`
- Package: `axios`
- Injected from: `globalVMContext.axios`
4. **uuidv4** - UUID v4 generator
- Purpose: Generate RFC4122 compliant UUIDs
- Usage: `uuidv4()` returns string like "110ec58a-a0f2-4ac4-8393-c866d813b8d1"
- Package: `uuid` (v4 function only)
- Injected from: `globalVMContext.uuidv4`
5. **jwt** - JSON Web Token library
- Purpose: Creating and verifying JWTs for authentication
- Usage: `jwt.sign(payload, secret)`, `jwt.verify(token, secret)`
- Package: `jsonwebtoken`
- Injected from: `globalVMContext.jwt`
6. **xmlBuilder** - XML builder/generator
- Purpose: Constructing XML documents programmatically
- Usage: `xmlBuilder({ root: { child: 'value' } })`
- Package: `xmlbuilder2` (create function)
- Injected from: `globalVMContext.xmlBuilder`
**Built-in Web APIs:**
7. **URLSearchParams** - URL query string parser (built-in)
- Purpose: Parse and manipulate URL query strings
- Usage: `new URLSearchParams(queryString)`
- Injected from: `globalVMContext.URLSearchParams`
8. **URL** - URL parser (built-in)
- Purpose: Parse and manipulate URLs
- Usage: `new URL(urlString)`
- Injected from: `globalVMContext.URL`
- Note: Available by default in Node.js; injected explicitly via globalVMContext for clarity
**Dynamic Data Context Variables:**
9. **Dynamic JSON objects from src/globalVariables/ directory**
- Purpose: Authentication credentials, secrets, API keys, and behavioral configuration
- Pattern: Each `src/globalVariables/filename.json` loaded by server.js → added to `globalVariableContext` → spread into VM context
- Examples:
- `src/globalVariables/adapter_settings.json` → context variable `adapter_settings` (consolidated service account, scopes, drive query, sitemap config)
- `src/globalVariables/api-keys.json` → context variable `api_keys` (API keys and secrets)
- `src/globalVariables/custom-config.json` → context variable `custom_config` (behavioral settings)
- Usage in src/proxyScripts/proxy.js: Direct variable access `const settings = adapter_settings;`
- Loading: By server.js at startup using the `loadGlobalVariables()` function
- Injection: Via spread operator `...globalVariableContext` in `vm.createContext()`
- **Note**: ALL authentication, secrets, and behavioral configuration MUST be in src/globalVariables/, NEVER in config/default.json
**Helper Functions Module:**
10. **adapterHelper** - Pure utility functions object (OPTIONAL)
- Purpose: Extracted helper functions for code organization
- Source: `src/globalVariables/adapterHelper.js` loaded via `vm.Script`
- Loading: server.js loads via `loadGlobalVariables()` at startup
- **Literal Function Body Pattern**:
- File contains LITERAL BODY of a function (NOT valid standalone JavaScript)
- Uses bare `return {...}` statement to export object
- server.js wraps it: `const wrappedCode = '(function() {\n' + code + '\n})()'`
- Creates IIFE that executes function body and captures returned object
- Pattern separates content (file) from execution wrapper (server)
- Example file structure:
```javascript
// File: src/globalVariables/adapterHelper.js
// This is a LITERAL FUNCTION BODY (not valid standalone JS)
class DocumentCountExceededError extends Error {
  constructor(message) {
    super(message);
    this.name = 'DocumentCountExceededError';
  }
}

function generateRequestId() {
  return `req_${crypto.randomUUID()}`;
}

// ... more functions ...

return {
  DocumentCountExceededError,
  generateRequestId,
  // ... all exports
};
```
- server.js wrapping logic:
```javascript
const wrappedCode = `(function() {\n${code}\n})()`;
const script = new vm.Script(wrappedCode, { filename: file });
const context = vm.createContext({...globalVMContext, ...globalVariableContext});
const returnedObject = script.runInContext(context);
globalVariableContext[varName] = returnedObject;
```
- Generic Loading Pattern:
- All .js files in globalVariables/ are loaded automatically
- Filename determines key: `adapterHelper.js` → `globalVariableContext.adapterHelper`
- Executed in tempContext with `{...globalVMContext, ...globalVariableContext}`
- Can access all VM globals: axios, jwt, crypto, console, etc.
- Can access previously loaded JSON data and function modules
- Injection: Spread into VM context via `...globalVariableContext`
- Usage in src/proxyScripts/proxy.js: `adapterHelper.functionName()` (e.g., `adapterHelper.generateRequestId()`)
- Contains: Pure utilities only (validators, formatters, XML, error mappers, route parsers)
- MUST NOT contain: Authentication, API calls, state, business logic
- Example functions:
- `adapterHelper.generateRequestId()` - UUID generation
- `adapterHelper.validateDocumentId(id)` - Document ID validation
- `adapterHelper.escapeXml(str)` - XML character escaping
- `adapterHelper.generateSitemap(docs, baseUrl)` - Sitemap generation
- `adapterHelper.mapDriveErrorToHttp(error)` - Error mapping
- `adapterHelper.parseRoute(method, url)` - Route parsing
- `adapterHelper.DocumentCountExceededError` - Custom error class
- Generic Pattern Note: You can add more .js files (e.g., `utils.js`, `validators.js`)
and they will be automatically loaded as `globalVariableContext.utils`, etc.
**Request/Response Objects:**
11. **req** - HTTP IncomingMessage
- Purpose: Access request data (URL, method, headers, body)
- Injected fresh: Per-request from `http.createServer((req, res) => ...)`
12. **res** - HTTP ServerResponse
- Purpose: Send response to client
- Injected fresh: Per-request from `http.createServer((req, res) => ...)`
**Rationale**: Using `vm.createContext` with spread operator pattern for dependency injection achieves:
- **Runtime-enforced isolation** - src/proxyScripts/proxy.js physically cannot access Node.js module system or file system
- **Zero imports possible** - VM context has no `require()` or `import()` capability
- **Explicit dependencies** - All available objects must be explicitly listed in globalVMContext or globalVariableContext
- **Clean organization** - Static dependencies (globalVMContext) separated from dynamic data (globalVariableContext)
- **Per-request isolation** - Fresh context per request prevents cross-request state leakage
- **Testing simplicity** - Mock entire context object instead of individual module imports
- **Clear contracts** - Context spread pattern documents every dependency src/proxyScripts/proxy.js uses
- **Security boundaries** - VM sandbox prevents escape to underlying system
- **DRY principle** - Spread operators eliminate repetitive property declarations
#### I.VI Logging
server.js MUST replace the global `console` object with the `logger` export from `logger.js`, so that all `console.log`, `console.info`, and `console.error` calls throughout the application use the custom logger.
The `logger.js` module MUST expose the following functions:
- `log` - defaults to the `info` function
- `info` - writes to stdout
- `debug` - prefixes the output with "[DEBUG]" in red and writes to stdout
- `error` - prefixes the output with "[ERROR]" in red and writes to stderr
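A minimal sketch of a `logger.js` meeting these requirements follows; the ANSI escape code for red is an assumption, and the structured-JSON formatting mentioned in Section I.V is omitted for brevity:
```javascript
// logger.js - minimal sketch (illustrative). '\x1b[31m' is the ANSI code for red.
const RED = '\x1b[31m';
const RESET = '\x1b[0m';

function info(...args) {
  process.stdout.write(args.join(' ') + '\n');                  // plain output to stdout
}

function debug(...args) {
  process.stdout.write(`${RED}[DEBUG]${RESET} ${args.join(' ')}\n`); // red prefix, stdout
}

function error(...args) {
  process.stderr.write(`${RED}[ERROR]${RESET} ${args.join(' ')}\n`); // red prefix, stderr
}

// 'log' defaults to the 'info' function
module.exports = { log: info, info, debug, error };
```
server.js can then install it over the global object, e.g. `global.console = require('./logger');`.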
### II. API-First Design
Every feature MUST expose a clear, documented API before implementation begins. APIs MUST follow RESTful principles where applicable, use consistent naming conventions, and include comprehensive error handling with meaningful status codes and messages.
**Rationale**: API-first design ensures contracts are stable, enables parallel front-end/back-end work, facilitates integration testing, and produces naturally documented systems.
### III. Test-First Development (NON-NEGOTIABLE)
Test-Driven Development is MANDATORY for all production code. The cycle MUST be:
1. Write failing tests
2. Obtain user approval of test scenarios
3. Implement minimum code to pass tests
4. Refactor while maintaining green tests
Unit tests MUST achieve minimum 80% code coverage. Integration tests MUST cover all API contracts and critical user flows.
**Rationale**: TDD catches defects early, documents expected behavior, enables confident refactoring, and ensures all code paths are exercised.
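As a concrete illustration of step 1 using the mandated native test runner (`node:test`, see Technology Stack), the sketch below shows a deliberately failing test written before any implementation exists; `escapeXml` is a hypothetical subject, not a required API:
```javascript
// Step 1 of the TDD cycle: write a failing test before the implementation exists.
// node:test and node:assert are built-ins; escapeXml is a hypothetical subject.
const test = require('node:test');
const assert = require('node:assert');

// Deliberately unimplemented stub - the test below MUST fail first.
function escapeXml(str) {
  throw new Error('not implemented yet');
}

test('escapeXml escapes the five XML special characters', () => {
  assert.strictEqual(escapeXml(`<a & "b's">`), '&lt;a &amp; &quot;b&apos;s&quot;&gt;');
});
```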
### IV. Security & Privacy by Default
All user data MUST be treated as sensitive. OAuth tokens, credentials, and personal information MUST be encrypted at rest and in transit. The principle of least privilege MUST govern all access controls. Audit logging MUST track all data access and modifications.
**Rationale**: Privacy violations damage trust and carry legal liability. Security must be foundational, not retrofitted.
### V. Observability & Debuggability
All operations MUST emit structured logs with appropriate severity levels (DEBUG, INFO, WARN, ERROR). Errors MUST include context (request IDs, user IDs, operation names) sufficient for diagnosis. Performance-critical paths MUST expose metrics (latency, throughput, error rates).
**Rationale**: Production issues are inevitable. Observable systems reduce mean time to resolution and enable proactive problem detection.
### VI. Semantic Versioning & Change Management
All public APIs MUST follow semantic versioning (MAJOR.MINOR.PATCH):
- MAJOR: Breaking changes that require consumer updates
- MINOR: Backward-compatible feature additions
- PATCH: Backward-compatible bug fixes
Breaking changes MUST include migration guides and deprecation notices for at least one MINOR version before removal.
**Rationale**: Clear versioning communicates impact, enables safe upgrades, and respects downstream consumers' need for stability.
### VII. Simplicity, Minimal Dependencies & YAGNI
Implement only features with demonstrated need. Choose the simplest solution that satisfies current requirements. Reject premature optimization and speculative features. Complexity MUST be explicitly justified with measurable benefits.
**Dependency Minimization**: Prefer Node.js built-in modules over external npm packages. Each external dependency MUST be justified by:
- Significant functionality that would take >2 days to implement correctly
- Active maintenance and security track record
- Clear, documented benefit that outweighs maintenance risk
Prohibited without explicit approval:
- Utility libraries for functionality Node.js provides natively (fs, path, crypto, http, etc.)
- Heavy framework dependencies when lightweight alternatives exist
- Multiple packages solving the same problem
**Rationale**: External dependencies introduce supply chain risk, increase bundle size, complicate auditing, and create maintenance burden. Node.js built-ins are stable, well-tested, and maintained by the platform.
## API Design Standards
All external APIs MUST:
- Accept and return JSON for structured data
- Use standard HTTP methods (GET, POST, PUT, DELETE, PATCH) semantically
- Return appropriate HTTP status codes (2xx success, 4xx client errors, 5xx server errors)
- Include rate limiting headers where applicable
- Version endpoints explicitly (e.g., /v1/, /v2/)
- Document all parameters, responses, and error codes using OpenAPI/Swagger
Response formats MUST be consistent and include:
- Timestamp of response generation
- Request correlation ID for tracing
- Pagination metadata for list operations
- Clear error messages with actionable guidance
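A hypothetical envelope meeting these requirements might look like the following; every field name is an illustrative assumption rather than a mandated schema:
```javascript
// Hypothetical list-response envelope; all field names here are illustrative.
const exampleListResponse = {
  timestamp: '2026-03-07T12:00:00Z',                      // when the response was generated
  requestId: '110ec58a-a0f2-4ac4-8393-c866d813b8d1',      // correlation ID for tracing
  data: [/* list items */],
  pagination: { page: 1, pageSize: 50, totalItems: 123 }, // list-operation metadata
  error: null, // on failure: { status: 404, message: 'Document not found; verify the document ID' }
};
```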
## Security & Data Protection
Authentication & Authorization MUST:
- Never log or expose credentials, tokens, or API keys
- Validate all input to prevent injection attacks
- Apply rate limiting to prevent abuse
Data Handling MUST:
- Minimize data retention—delete temporary files promptly
- Encrypt sensitive data using industry-standard algorithms (AES-256 or equivalent)
- Sanitize all user-supplied content before processing
- Implement CSRF protection for web interfaces
## Development Workflow
Code Reviews MUST:
- Verify alignment with all constitutional principles
- Check test coverage meets minimum thresholds
- Validate API contracts match documentation
- Confirm security best practices are followed
Quality Gates (ALL must pass before merge):
- All tests pass (unit, integration, end-to-end)
- Code coverage ≥ 80%
- No critical security vulnerabilities (use automated scanning)
- Documentation updated for API/behavior changes
- Performance regression checks pass
Deployment MUST:
- Use automated CI/CD pipelines
- Include smoke tests post-deployment
- Support rollback within 5 minutes
- Include release notes documenting all changes
## Technology Stack
**Platform**: Node.js (LTS version or later)
**Mandatory Baseline**:
- Use Node.js built-in modules as first choice (fs, path, crypto, http, https, stream, util, url, querystring, etc.)
- **DO NOT use 'events' EventEmitter** - implement simple patterns directly (e.g., Promise-based queues)
- Plain JavaScript (ES2022+) without TypeScript
- JSDoc comments for type documentation where needed
- JavaScript tooling (ESLint, Prettier) does not count against dependency budget
- Native test runner (node:test) or minimal test framework
**Dependency Approval Process**:
Any external npm package (excluding JavaScript tooling like ESLint and Prettier) MUST be justified in the feature specification with:
1. **Functionality gap**: What Node.js built-ins cannot do
2. **Implementation cost**: Estimated effort to build vs. maintain dependency
3. **Risk assessment**: Package security, maintenance history, download stats
4. **Alternatives considered**: Why alternatives were rejected
**Examples of acceptable dependencies** (when justified):
- xmlbuilder2
- axios
- uuid
**Examples of prohibited dependencies** (use Node.js built-ins or inline implementations instead):
- lodash/underscore (use native Array/Object methods)
- moment/date-fns (use native Date, Intl.DateTimeFormat)
- rimraf (use fs.rm with recursive: true)
- mkdirp (use fs.mkdir with recursive: true)
- **EventEmitter from 'events'** (implement simple queue classes directly - no need for event system)
- express/fastify (use native http/https for simple servers)
**Node.js built-in modules to prefer:**
- Use 'node:' prefix for clarity: `import crypto from 'node:crypto'`
- Acceptable built-ins: fs, path, crypto, http, https, stream, util, url, querystring, etc.
- NOT acceptable: 'events' EventEmitter - implement patterns directly without an event system (see the queue sketch below)
IMPORTANT: Any dependency outside the acceptable list MUST be approved when running the plan and task agents.
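To make the EventEmitter prohibition concrete, here is a minimal sketch of the kind of Promise-based queue this baseline has in mind; the class and handler names are illustrative assumptions, not the mandated RequestQueue from proxy.js:
```javascript
// Minimal Promise-based FIFO queue - no 'events' EventEmitter required.
// Each enqueued task runs only after all previously enqueued tasks settle.
class PromiseQueue {
  constructor() {
    this.tail = Promise.resolve(); // chain of all queued work
  }

  enqueue(task) {
    // Chain the task onto the tail; return a promise for this task's result.
    const result = this.tail.then(() => task());
    // Advance the tail, swallowing errors so one failure doesn't poison the queue.
    this.tail = result.catch(() => {});
    return result;
  }
}

// Usage: requests are processed strictly in arrival order.
const queue = new PromiseQueue();
queue.enqueue(() => handleRequest(req1)); // hypothetical handlers
queue.enqueue(() => handleRequest(req2));
```
Chaining on a shared tail promise gives strict FIFO ordering without any event system, and catching on the tail keeps one rejected task from blocking later ones.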
## Governance
This constitution supersedes all other development practices and guidelines. When conflicts arise between this document and team conventions, the constitution takes precedence.
Amendments to this constitution require:
1. Documented justification explaining the need for change
2. Impact analysis of affected systems and workflows
3. Approval from project maintainers
4. Migration plan for any breaking changes
5. Update of version number following semantic versioning rules
All pull requests, code reviews, and design discussions MUST verify compliance with constitutional principles. Exceptions MUST be rare, explicitly justified with measurable trade-offs, and documented in the relevant specification or plan.
For runtime development guidance, refer to `.github/prompts/` and `.github/agents/` files which operationalize these principles into agent workflows.
**Version**: 1.20.0 | **Ratified**: 2026-03-05 | **Last Amended**: 2026-03-07


@@ -0,0 +1,190 @@
#!/usr/bin/env bash
# Consolidated prerequisite checking script
#
# This script provides unified prerequisite checking for Spec-Driven Development workflow.
# It replaces the functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.sh [OPTIONS]
#
# OPTIONS:
#   --json            Output in JSON format
#   --require-tasks   Require tasks.md to exist (for implementation phase)
#   --include-tasks   Include tasks.md in AVAILABLE_DOCS list
#   --paths-only      Only output path variables (no validation)
#   --help, -h        Show help message
#
# OUTPUTS:
#   JSON mode:  {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
#   Text mode:  FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
#   Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.
set -e
# Parse command line arguments
JSON_MODE=false
REQUIRE_TASKS=false
INCLUDE_TASKS=false
PATHS_ONLY=false
for arg in "$@"; do
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --require-tasks)
            REQUIRE_TASKS=true
            ;;
        --include-tasks)
            INCLUDE_TASKS=true
            ;;
        --paths-only)
            PATHS_ONLY=true
            ;;
        --help|-h)
            cat << 'EOF'
Usage: check-prerequisites.sh [OPTIONS]

Consolidated prerequisite checking for Spec-Driven Development workflow.

OPTIONS:
  --json            Output in JSON format
  --require-tasks   Require tasks.md to exist (for implementation phase)
  --include-tasks   Include tasks.md in AVAILABLE_DOCS list
  --paths-only      Only output path variables (no prerequisite validation)
  --help, -h        Show this help message

EXAMPLES:
  # Check task prerequisites (plan.md required)
  ./check-prerequisites.sh --json

  # Check implementation prerequisites (plan.md + tasks.md required)
  ./check-prerequisites.sh --json --require-tasks --include-tasks

  # Get feature paths only (no validation)
  ./check-prerequisites.sh --paths-only
EOF
            exit 0
            ;;
        *)
            echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
            exit 1
            ;;
    esac
done
# Source common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get feature paths and validate branch
_paths_output=$(get_feature_paths) || { echo "ERROR: Failed to resolve feature paths" >&2; exit 1; }
eval "$_paths_output"
unset _paths_output
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# If paths-only mode, output paths and exit (support JSON + paths-only combined)
if $PATHS_ONLY; then
    if $JSON_MODE; then
        # Minimal JSON paths payload (no validation performed)
        if has_jq; then
            jq -cn \
                --arg repo_root "$REPO_ROOT" \
                --arg branch "$CURRENT_BRANCH" \
                --arg feature_dir "$FEATURE_DIR" \
                --arg feature_spec "$FEATURE_SPEC" \
                --arg impl_plan "$IMPL_PLAN" \
                --arg tasks "$TASKS" \
                '{REPO_ROOT:$repo_root,BRANCH:$branch,FEATURE_DIR:$feature_dir,FEATURE_SPEC:$feature_spec,IMPL_PLAN:$impl_plan,TASKS:$tasks}'
        else
            printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
                "$(json_escape "$REPO_ROOT")" "$(json_escape "$CURRENT_BRANCH")" "$(json_escape "$FEATURE_DIR")" "$(json_escape "$FEATURE_SPEC")" "$(json_escape "$IMPL_PLAN")" "$(json_escape "$TASKS")"
        fi
    else
        echo "REPO_ROOT: $REPO_ROOT"
        echo "BRANCH: $CURRENT_BRANCH"
        echo "FEATURE_DIR: $FEATURE_DIR"
        echo "FEATURE_SPEC: $FEATURE_SPEC"
        echo "IMPL_PLAN: $IMPL_PLAN"
        echo "TASKS: $TASKS"
    fi
    exit 0
fi
# Validate required directories and files
if [[ ! -d "$FEATURE_DIR" ]]; then
echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
echo "Run /speckit.specify first to create the feature structure." >&2
exit 1
fi
if [[ ! -f "$IMPL_PLAN" ]]; then
echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.plan first to create the implementation plan." >&2
exit 1
fi
# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.tasks first to create the task list." >&2
exit 1
fi
# Build list of available documents
docs=()
# Always check these optional docs
[[ -f "$RESEARCH" ]] && docs+=("research.md")
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")
# Check contracts directory (only if it exists and has files)
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
docs+=("contracts/")
fi
[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
# Include tasks.md if requested and it exists
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
docs+=("tasks.md")
fi
# Output results
if $JSON_MODE; then
    # Build JSON array of documents
    if has_jq; then
        if [[ ${#docs[@]} -eq 0 ]]; then
            json_docs="[]"
        else
            json_docs=$(printf '%s\n' "${docs[@]}" | jq -R . | jq -s .)
        fi
        jq -cn \
            --arg feature_dir "$FEATURE_DIR" \
            --argjson docs "$json_docs" \
            '{FEATURE_DIR:$feature_dir,AVAILABLE_DOCS:$docs}'
    else
        if [[ ${#docs[@]} -eq 0 ]]; then
            json_docs="[]"
        else
            json_docs=$(for d in "${docs[@]}"; do printf '"%s",' "$(json_escape "$d")"; done)
            json_docs="[${json_docs%,}]"
        fi
        printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$(json_escape "$FEATURE_DIR")" "$json_docs"
    fi
else
    # Text output
    echo "FEATURE_DIR:$FEATURE_DIR"
    echo "AVAILABLE_DOCS:"

    # Show status of each potential document
    check_file "$RESEARCH" "research.md"
    check_file "$DATA_MODEL" "data-model.md"
    check_dir "$CONTRACTS_DIR" "contracts/"
    check_file "$QUICKSTART" "quickstart.md"

    if $INCLUDE_TASKS; then
        check_file "$TASKS" "tasks.md"
    fi
fi

.specify/scripts/bash/common.sh Executable file

@@ -0,0 +1,330 @@
#!/usr/bin/env bash
# Common functions and variables for all scripts
# Find repository root by searching upward for .specify directory
# This is the primary marker for spec-kit projects
find_specify_root() {
    local dir="${1:-$(pwd)}"
    # Normalize to absolute path to prevent infinite loop with relative paths
    # Use -- to handle paths starting with - (e.g., -P, -L)
    dir="$(cd -- "$dir" 2>/dev/null && pwd)" || return 1
    local prev_dir=""
    while true; do
        if [ -d "$dir/.specify" ]; then
            echo "$dir"
            return 0
        fi
        # Stop if we've reached filesystem root or dirname stops changing
        if [ "$dir" = "/" ] || [ "$dir" = "$prev_dir" ]; then
            break
        fi
        prev_dir="$dir"
        dir="$(dirname "$dir")"
    done
    return 1
}
# Get repository root, prioritizing .specify directory over git
# This prevents using a parent git repo when spec-kit is initialized in a subdirectory
get_repo_root() {
    # First, look for .specify directory (spec-kit's own marker)
    local specify_root
    if specify_root=$(find_specify_root); then
        echo "$specify_root"
        return
    fi
    # Fallback to git if no .specify found
    if git rev-parse --show-toplevel >/dev/null 2>&1; then
        git rev-parse --show-toplevel
        return
    fi
    # Final fallback to script location for non-git repos
    local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    (cd "$script_dir/../../.." && pwd)
}
# Get current branch, with fallback for non-git repositories
get_current_branch() {
    # First check if SPECIFY_FEATURE environment variable is set
    if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
        echo "$SPECIFY_FEATURE"
        return
    fi
    # Then check git if available at the spec-kit root (not parent)
    local repo_root=$(get_repo_root)
    if has_git; then
        git -C "$repo_root" rev-parse --abbrev-ref HEAD
        return
    fi
    # For non-git repos, try to find the latest feature directory
    local specs_dir="$repo_root/specs"
    if [[ -d "$specs_dir" ]]; then
        local latest_feature=""
        local highest=0
        local latest_timestamp=""
        for dir in "$specs_dir"/*; do
            if [[ -d "$dir" ]]; then
                local dirname=$(basename "$dir")
                if [[ "$dirname" =~ ^([0-9]{8}-[0-9]{6})- ]]; then
                    # Timestamp-based branch: compare lexicographically
                    local ts="${BASH_REMATCH[1]}"
                    if [[ "$ts" > "$latest_timestamp" ]]; then
                        latest_timestamp="$ts"
                        latest_feature=$dirname
                    fi
                elif [[ "$dirname" =~ ^([0-9]{3})- ]]; then
                    local number=${BASH_REMATCH[1]}
                    number=$((10#$number))
                    if [[ "$number" -gt "$highest" ]]; then
                        highest=$number
                        # Only update if no timestamp branch found yet
                        if [[ -z "$latest_timestamp" ]]; then
                            latest_feature=$dirname
                        fi
                    fi
                fi
            fi
        done
        if [[ -n "$latest_feature" ]]; then
            echo "$latest_feature"
            return
        fi
    fi
    echo "main" # Final fallback
}
# Check if we have git available at the spec-kit root level
# Returns true only if git is installed and the repo root is inside a git work tree
# Handles both regular repos (.git directory) and worktrees/submodules (.git file)
has_git() {
    # First check if git command is available (before calling get_repo_root which may use git)
    command -v git >/dev/null 2>&1 || return 1
    local repo_root=$(get_repo_root)
    # Check if .git exists (directory or file for worktrees/submodules)
    [ -e "$repo_root/.git" ] || return 1
    # Verify it's actually a valid git work tree
    git -C "$repo_root" rev-parse --is-inside-work-tree >/dev/null 2>&1
}
check_feature_branch() {
    local branch="$1"
    local has_git_repo="$2"
    # For non-git repos, we can't enforce branch naming but still provide output
    if [[ "$has_git_repo" != "true" ]]; then
        echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
        return 0
    fi
    if [[ ! "$branch" =~ ^[0-9]{3}- ]] && [[ ! "$branch" =~ ^[0-9]{8}-[0-9]{6}- ]]; then
        echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
        echo "Feature branches should be named like: 001-feature-name or 20260319-143022-feature-name" >&2
        return 1
    fi
    return 0
}
get_feature_dir() { echo "$1/specs/$2"; }
# Find feature directory by numeric prefix instead of exact branch match
# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature)
find_feature_dir_by_prefix() {
    local repo_root="$1"
    local branch_name="$2"
    local specs_dir="$repo_root/specs"
    # Extract prefix from branch (e.g., "004" from "004-whatever" or "20260319-143022" from timestamp branches)
    local prefix=""
    if [[ "$branch_name" =~ ^([0-9]{8}-[0-9]{6})- ]]; then
        prefix="${BASH_REMATCH[1]}"
    elif [[ "$branch_name" =~ ^([0-9]{3})- ]]; then
        prefix="${BASH_REMATCH[1]}"
    else
        # If branch doesn't have a recognized prefix, fall back to exact match
        echo "$specs_dir/$branch_name"
        return
    fi
    # Search for directories in specs/ that start with this prefix
    local matches=()
    if [[ -d "$specs_dir" ]]; then
        for dir in "$specs_dir"/"$prefix"-*; do
            if [[ -d "$dir" ]]; then
                matches+=("$(basename "$dir")")
            fi
        done
    fi
    # Handle results
    if [[ ${#matches[@]} -eq 0 ]]; then
        # No match found - return the branch name path (will fail later with clear error)
        echo "$specs_dir/$branch_name"
    elif [[ ${#matches[@]} -eq 1 ]]; then
        # Exactly one match - perfect!
        echo "$specs_dir/${matches[0]}"
    else
        # Multiple matches - this shouldn't happen with proper naming convention
        echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
        echo "Please ensure only one spec directory exists per prefix." >&2
        return 1
    fi
}
get_feature_paths() {
    local repo_root=$(get_repo_root)
    local current_branch=$(get_current_branch)
    local has_git_repo="false"
    if has_git; then
        has_git_repo="true"
    fi
    # Use prefix-based lookup to support multiple branches per spec
    local feature_dir
    if ! feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch"); then
        echo "ERROR: Failed to resolve feature directory" >&2
        return 1
    fi
    # Use printf '%q' to safely quote values, preventing shell injection
    # via crafted branch names or paths containing special characters
    printf 'REPO_ROOT=%q\n' "$repo_root"
    printf 'CURRENT_BRANCH=%q\n' "$current_branch"
    printf 'HAS_GIT=%q\n' "$has_git_repo"
    printf 'FEATURE_DIR=%q\n' "$feature_dir"
    printf 'FEATURE_SPEC=%q\n' "$feature_dir/spec.md"
    printf 'IMPL_PLAN=%q\n' "$feature_dir/plan.md"
    printf 'TASKS=%q\n' "$feature_dir/tasks.md"
    printf 'RESEARCH=%q\n' "$feature_dir/research.md"
    printf 'DATA_MODEL=%q\n' "$feature_dir/data-model.md"
    printf 'QUICKSTART=%q\n' "$feature_dir/quickstart.md"
    printf 'CONTRACTS_DIR=%q\n' "$feature_dir/contracts"
}
# Check if jq is available for safe JSON construction
has_jq() {
    command -v jq >/dev/null 2>&1
}
# Escape a string for safe embedding in a JSON value (fallback when jq is unavailable).
# Handles backslash, double-quote, and JSON-required control character escapes (RFC 8259).
json_escape() {
    local s="$1"
    s="${s//\\/\\\\}"
    s="${s//\"/\\\"}"
    s="${s//$'\n'/\\n}"
    s="${s//$'\t'/\\t}"
    s="${s//$'\r'/\\r}"
    s="${s//$'\b'/\\b}"
    s="${s//$'\f'/\\f}"
    # Escape any remaining U+0001-U+001F control characters as \uXXXX.
    # (U+0000/NUL cannot appear in bash strings and is excluded.)
    # LC_ALL=C ensures ${#s} counts bytes and ${s:$i:1} yields single bytes,
    # so multi-byte UTF-8 sequences (first byte >= 0xC0) pass through intact.
    local LC_ALL=C
    local i char code
    for (( i=0; i<${#s}; i++ )); do
        char="${s:$i:1}"
        printf -v code '%d' "'$char" 2>/dev/null || code=256
        if (( code >= 1 && code <= 31 )); then
            printf '\\u%04x' "$code"
        else
            printf '%s' "$char"
        fi
    done
}
check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
# Resolve a template name to a file path using the priority stack:
# 1. .specify/templates/overrides/
# 2. .specify/presets/<preset-id>/templates/ (sorted by priority from .registry)
# 3. .specify/extensions/<ext-id>/templates/
# 4. .specify/templates/ (core)
resolve_template() {
    local template_name="$1"
    local repo_root="$2"
    local base="$repo_root/.specify/templates"
    # Priority 1: Project overrides
    local override="$base/overrides/${template_name}.md"
    [ -f "$override" ] && echo "$override" && return 0
    # Priority 2: Installed presets (sorted by priority from .registry)
    local presets_dir="$repo_root/.specify/presets"
    if [ -d "$presets_dir" ]; then
        local registry_file="$presets_dir/.registry"
        if [ -f "$registry_file" ] && command -v python3 >/dev/null 2>&1; then
            # Read preset IDs sorted by priority (lower number = higher precedence).
            # The python3 call is wrapped in an if-condition so that set -e does not
            # abort the function when python3 exits non-zero (e.g. invalid JSON).
            local sorted_presets=""
            if sorted_presets=$(SPECKIT_REGISTRY="$registry_file" python3 -c "
import json, sys, os
try:
    with open(os.environ['SPECKIT_REGISTRY']) as f:
        data = json.load(f)
    presets = data.get('presets', {})
    for pid, meta in sorted(presets.items(), key=lambda x: x[1].get('priority', 10)):
        print(pid)
except Exception:
    sys.exit(1)
" 2>/dev/null); then
                if [ -n "$sorted_presets" ]; then
                    # python3 succeeded and returned preset IDs — search in priority order
                    while IFS= read -r preset_id; do
                        local candidate="$presets_dir/$preset_id/templates/${template_name}.md"
                        [ -f "$candidate" ] && echo "$candidate" && return 0
                    done <<< "$sorted_presets"
                fi
                # python3 succeeded but registry has no presets — nothing to search
            else
                # python3 failed (missing, or registry parse error) — fall back to unordered directory scan
                for preset in "$presets_dir"/*/; do
                    [ -d "$preset" ] || continue
                    local candidate="$preset/templates/${template_name}.md"
                    [ -f "$candidate" ] && echo "$candidate" && return 0
                done
            fi
        else
            # Fallback: alphabetical directory order (no python3 available)
            for preset in "$presets_dir"/*/; do
                [ -d "$preset" ] || continue
                local candidate="$preset/templates/${template_name}.md"
                [ -f "$candidate" ] && echo "$candidate" && return 0
            done
        fi
    fi
    # Priority 3: Extension-provided templates
    local ext_dir="$repo_root/.specify/extensions"
    if [ -d "$ext_dir" ]; then
        for ext in "$ext_dir"/*/; do
            [ -d "$ext" ] || continue
            # Skip hidden directories (e.g. .backup, .cache)
            case "$(basename "$ext")" in .*) continue;; esac
            local candidate="$ext/templates/${template_name}.md"
            [ -f "$candidate" ] && echo "$candidate" && return 0
        done
    fi
    # Priority 4: Core templates
    local core="$base/${template_name}.md"
    [ -f "$core" ] && echo "$core" && return 0
    # Template not found in any location.
    # Return 1 so callers can distinguish "not found" from "found".
    # Callers running under set -e should use: TEMPLATE=$(resolve_template ...) || true
    return 1
}

View File

@@ -0,0 +1,335 @@
#!/usr/bin/env bash
set -e
JSON_MODE=false
SHORT_NAME=""
BRANCH_NUMBER=""
USE_TIMESTAMP=false
ARGS=()
i=1
while [ $i -le $# ]; do
arg="${!i}"
case "$arg" in
--json)
JSON_MODE=true
;;
--short-name)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
# Check if the next argument is another option (starts with --)
if [[ "$next_arg" == --* ]]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
SHORT_NAME="$next_arg"
;;
--number)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
if [[ "$next_arg" == --* ]]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
BRANCH_NUMBER="$next_arg"
;;
--timestamp)
USE_TIMESTAMP=true
;;
--help|-h)
echo "Usage: $0 [--json] [--short-name <name>] [--number N] [--timestamp] <feature_description>"
echo ""
echo "Options:"
echo " --json Output in JSON format"
echo " --short-name <name> Provide a custom short name (2-4 words) for the branch"
echo " --number N Specify branch number manually (overrides auto-detection)"
echo " --timestamp Use timestamp prefix (YYYYMMDD-HHMMSS) instead of sequential numbering"
echo " --help, -h Show this help message"
echo ""
echo "Examples:"
echo " $0 'Add user authentication system' --short-name 'user-auth'"
echo " $0 'Implement OAuth2 integration for API' --number 5"
echo " $0 --timestamp --short-name 'user-auth' 'Add user authentication'"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
i=$((i + 1))
done
FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
echo "Usage: $0 [--json] [--short-name <name>] [--number N] [--timestamp] <feature_description>" >&2
exit 1
fi
# Trim surrounding whitespace with pure parameter expansion (xargs would choke
# on quotes or backslashes in the description); emptiness is validated below.
FEATURE_DESCRIPTION="${FEATURE_DESCRIPTION#"${FEATURE_DESCRIPTION%%[![:space:]]*}"}"
FEATURE_DESCRIPTION="${FEATURE_DESCRIPTION%"${FEATURE_DESCRIPTION##*[![:space:]]}"}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
echo "Error: Feature description cannot be empty or contain only whitespace" >&2
exit 1
fi
# Function to get highest number from specs directory
get_highest_from_specs() {
local specs_dir="$1"
local highest=0
if [ -d "$specs_dir" ]; then
for dir in "$specs_dir"/*; do
[ -d "$dir" ] || continue
dirname=$(basename "$dir")
# Match sequential prefixes (>=3 digits), but skip timestamp dirs.
if echo "$dirname" | grep -Eq '^[0-9]{3,}-' && ! echo "$dirname" | grep -Eq '^[0-9]{8}-[0-9]{6}-'; then
number=$(echo "$dirname" | grep -Eo '^[0-9]+')
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
fi
done
fi
echo "$highest"
}
# Function to get highest number from git branches
get_highest_from_branches() {
local highest=0
# Get all branches (local and remote)
branches=$(git branch -a 2>/dev/null || echo "")
if [ -n "$branches" ]; then
while IFS= read -r branch; do
# Clean branch name: remove leading markers and remote prefixes
clean_branch=$(echo "$branch" | sed 's/^[* ]*//; s|^remotes/[^/]*/||')
# Extract sequential feature number (>=3 digits), skip timestamp branches.
if echo "$clean_branch" | grep -Eq '^[0-9]{3,}-' && ! echo "$clean_branch" | grep -Eq '^[0-9]{8}-[0-9]{6}-'; then
number=$(echo "$clean_branch" | grep -Eo '^[0-9]+' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
fi
done <<< "$branches"
fi
echo "$highest"
}
# Function to check existing branches (local and remote) and return next available number
check_existing_branches() {
local specs_dir="$1"
# Fetch all remotes to get latest branch info (suppress errors if no remotes)
git fetch --all --prune >/dev/null 2>&1 || true
# Get highest number from ALL branches (not just matching short name)
local highest_branch=$(get_highest_from_branches)
# Get highest number from ALL specs (not just matching short name)
local highest_spec=$(get_highest_from_specs "$specs_dir")
# Take the maximum of both
local max_num=$highest_branch
if [ "$highest_spec" -gt "$max_num" ]; then
max_num=$highest_spec
fi
# Return next number
echo $((max_num + 1))
}
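# Illustrative outcome (hypothetical state): if the highest branch is
# "012-payments" and the highest spec dir is "014-reports", this echoes 15.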
# Function to clean and format a branch name
clean_branch_name() {
local name="$1"
echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/--*/-/g' | sed 's/^-//' | sed 's/-$//'
}
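# Illustrative results (assumed inputs):
#   clean_branch_name "User Auth!!"          -> "user-auth"
#   clean_branch_name "--OAuth2  Support--"  -> "oauth2-support"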
# Resolve repository root using common.sh functions which prioritize .specify over git
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
REPO_ROOT=$(get_repo_root)
# Check if git is available at this repo root (not a parent)
if has_git; then
HAS_GIT=true
else
HAS_GIT=false
fi
cd "$REPO_ROOT"
SPECS_DIR="$REPO_ROOT/specs"
mkdir -p "$SPECS_DIR"
# Function to generate branch name with stop word filtering and length filtering
generate_branch_name() {
local description="$1"
# Common stop words to filter out
local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"
# Convert to lowercase and split into words
local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')
# Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
local meaningful_words=()
for word in $clean_name; do
# Skip empty words
[ -z "$word" ] && continue
# Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
if ! echo "$word" | grep -qiE "$stop_words"; then
if [ ${#word} -ge 3 ]; then
meaningful_words+=("$word")
elif echo "$description" | grep -q "\b$(echo "$word" | tr '[:lower:]' '[:upper:]')\b"; then
# Keep short words if they appear as uppercase in original (likely acronyms)
meaningful_words+=("$word")
fi
fi
done
# If we have meaningful words, use first 3-4 of them
if [ ${#meaningful_words[@]} -gt 0 ]; then
local max_words=3
if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi
local result=""
local count=0
for word in "${meaningful_words[@]}"; do
if [ $count -ge $max_words ]; then break; fi
if [ -n "$result" ]; then result="$result-"; fi
result="$result$word"
count=$((count + 1))
done
echo "$result"
else
# Fallback to original logic if no meaningful words found
local cleaned=$(clean_branch_name "$description")
echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
fi
}
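# Worked example (illustrative): "Add user authentication system"
#   -> lowercased words: add user authentication system
#   -> "add" is a stop word and is dropped
#   -> first 3 meaningful words joined: user-authentication-system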
# Generate branch name
if [ -n "$SHORT_NAME" ]; then
# Use provided short name, just clean it up
BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME")
else
# Generate from description with smart filtering
BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi
# Warn if --number and --timestamp are both specified
if [ "$USE_TIMESTAMP" = true ] && [ -n "$BRANCH_NUMBER" ]; then
>&2 echo "[specify] Warning: --number is ignored when --timestamp is used"
BRANCH_NUMBER=""
fi
# Determine branch prefix
if [ "$USE_TIMESTAMP" = true ]; then
FEATURE_NUM=$(date +%Y%m%d-%H%M%S)
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
else
# Determine branch number
if [ -z "$BRANCH_NUMBER" ]; then
if [ "$HAS_GIT" = true ]; then
# Check existing branches on remotes
BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR")
else
# Fall back to local directory check
HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
BRANCH_NUMBER=$((HIGHEST + 1))
fi
fi
# Force base-10 interpretation to prevent octal conversion (e.g., 010 → 8 in octal, but should be 10 in decimal)
FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
fi
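# Illustrative branch names (hypothetical values):
#   --timestamp mode          -> 20250102-093045-user-auth
#   sequential, next number 7 -> 007-user-auth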
# GitHub enforces a 244-byte limit on branch names
# Validate and truncate if necessary
MAX_BRANCH_LENGTH=244
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
# Calculate how much we need to trim from suffix
# Account for prefix length: timestamp (15) + hyphen (1) = 16, or sequential (3) + hyphen (1) = 4
PREFIX_LENGTH=$(( ${#FEATURE_NUM} + 1 ))
MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - PREFIX_LENGTH))
# Truncate the suffix to fit the remaining budget (cut at character position)
TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
# Remove trailing hyphen if truncation created one
TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
>&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
>&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
fi
if [ "$HAS_GIT" = true ]; then
if ! git checkout -b "$BRANCH_NAME" 2>/dev/null; then
# Check if branch already exists
if git branch --list "$BRANCH_NAME" | grep -q .; then
if [ "$USE_TIMESTAMP" = true ]; then
>&2 echo "Error: Branch '$BRANCH_NAME' already exists. Rerun to get a new timestamp or use a different --short-name."
else
>&2 echo "Error: Branch '$BRANCH_NAME' already exists. Please use a different feature name or specify a different number with --number."
fi
exit 1
else
>&2 echo "Error: Failed to create git branch '$BRANCH_NAME'. Please check your git configuration and try again."
exit 1
fi
fi
else
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi
FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
mkdir -p "$FEATURE_DIR"
TEMPLATE=$(resolve_template "spec-template" "$REPO_ROOT") || true
SPEC_FILE="$FEATURE_DIR/spec.md"
if [ -n "$TEMPLATE" ] && [ -f "$TEMPLATE" ]; then
cp "$TEMPLATE" "$SPEC_FILE"
else
echo "Warning: Spec template not found; created empty spec file" >&2
touch "$SPEC_FILE"
fi
# Inform the user how to persist the feature variable in their own shell
printf '# To persist: export SPECIFY_FEATURE=%q\n' "$BRANCH_NAME" >&2
if $JSON_MODE; then
if command -v jq >/dev/null 2>&1; then
jq -cn \
--arg branch_name "$BRANCH_NAME" \
--arg spec_file "$SPEC_FILE" \
--arg feature_num "$FEATURE_NUM" \
'{BRANCH_NAME:$branch_name,SPEC_FILE:$spec_file,FEATURE_NUM:$feature_num}'
else
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$(json_escape "$BRANCH_NAME")" "$(json_escape "$SPEC_FILE")" "$(json_escape "$FEATURE_NUM")"
fi
else
echo "BRANCH_NAME: $BRANCH_NAME"
echo "SPEC_FILE: $SPEC_FILE"
echo "FEATURE_NUM: $FEATURE_NUM"
printf '# To persist in your shell: export SPECIFY_FEATURE=%q\n' "$BRANCH_NAME"
fi

View File

@@ -0,0 +1,73 @@
#!/usr/bin/env bash
set -e
# Parse command line arguments
JSON_MODE=false
ARGS=()
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--help|-h)
echo "Usage: $0 [--json]"
echo " --json Output results in JSON format"
echo " --help Show this help message"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
done
# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
_paths_output=$(get_feature_paths) || { echo "ERROR: Failed to resolve feature paths" >&2; exit 1; }
eval "$_paths_output"
unset _paths_output
# Check if we're on a proper feature branch (only for git repos)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"
# Copy plan template if it exists
TEMPLATE=$(resolve_template "plan-template" "$REPO_ROOT") || true
if [[ -n "$TEMPLATE" ]] && [[ -f "$TEMPLATE" ]]; then
cp "$TEMPLATE" "$IMPL_PLAN"
# Status messages go to stderr so --json output on stdout stays parseable
echo "Copied plan template to $IMPL_PLAN" >&2
else
echo "Warning: Plan template not found" >&2
# Create a basic plan file if template doesn't exist
touch "$IMPL_PLAN"
fi
# Output results
if $JSON_MODE; then
if has_jq; then
jq -cn \
--arg feature_spec "$FEATURE_SPEC" \
--arg impl_plan "$IMPL_PLAN" \
--arg specs_dir "$FEATURE_DIR" \
--arg branch "$CURRENT_BRANCH" \
--arg has_git "$HAS_GIT" \
'{FEATURE_SPEC:$feature_spec,IMPL_PLAN:$impl_plan,SPECS_DIR:$specs_dir,BRANCH:$branch,HAS_GIT:$has_git}'
else
printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
"$(json_escape "$FEATURE_SPEC")" "$(json_escape "$IMPL_PLAN")" "$(json_escape "$FEATURE_DIR")" "$(json_escape "$CURRENT_BRANCH")" "$(json_escape "$HAS_GIT")"
fi
else
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "SPECS_DIR: $FEATURE_DIR"
echo "BRANCH: $CURRENT_BRANCH"
echo "HAS_GIT: $HAS_GIT"
fi

View File

@@ -0,0 +1,837 @@
#!/usr/bin/env bash
# Update agent context files with information from plan.md
#
# This script maintains AI agent context files by parsing feature specifications
# and updating agent-specific configuration files with project information.
#
# MAIN FUNCTIONS:
# 1. Environment Validation
# - Verifies git repository structure and branch information
# - Checks for required plan.md files and templates
# - Validates file permissions and accessibility
#
# 2. Plan Data Extraction
# - Parses plan.md files to extract project metadata
# - Identifies language/version, frameworks, databases, and project types
# - Handles missing or incomplete specification data gracefully
#
# 3. Agent File Management
# - Creates new agent context files from templates when needed
# - Updates existing agent files with new project information
# - Preserves manual additions and custom configurations
# - Supports multiple AI agent formats and directory structures
#
# 4. Content Generation
# - Generates language-specific build/test commands
# - Creates appropriate project directory structures
# - Updates technology stacks and recent changes sections
# - Maintains consistent formatting and timestamps
#
# 5. Multi-Agent Support
# - Handles agent-specific file paths and naming conventions
# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Junie, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, Tabnine CLI, Kiro CLI, IBM Bob, Mistral Vibe, Kimi Code, Trae, Pi Coding Agent, iFlow CLI, Antigravity, or Generic
# - Can update single agents or all existing agent files
# - Creates default Claude file if no agent files exist
#
# Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|generic
# Leave empty to update all existing agent files
set -e
# Enable strict error handling
set -u
set -o pipefail
#==============================================================================
# Configuration and Global Variables
#==============================================================================
# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
_paths_output=$(get_feature_paths) || { echo "ERROR: Failed to resolve feature paths" >&2; exit 1; }
eval "$_paths_output"
unset _paths_output
NEW_PLAN="$IMPL_PLAN" # Alias for compatibility with existing code
AGENT_TYPE="${1:-}"
# Agent-specific file paths
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
COPILOT_FILE="$REPO_ROOT/.github/agents/copilot-instructions.md"
CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
QWEN_FILE="$REPO_ROOT/QWEN.md"
AGENTS_FILE="$REPO_ROOT/AGENTS.md"
WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
JUNIE_FILE="$REPO_ROOT/.junie/AGENTS.md"
KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
QODER_FILE="$REPO_ROOT/QODER.md"
# Amp, Kiro CLI, IBM Bob, and Pi all share AGENTS.md — use AGENTS_FILE to avoid
# updating the same file multiple times.
AMP_FILE="$AGENTS_FILE"
SHAI_FILE="$REPO_ROOT/SHAI.md"
TABNINE_FILE="$REPO_ROOT/TABNINE.md"
KIRO_FILE="$AGENTS_FILE"
AGY_FILE="$REPO_ROOT/.agent/rules/specify-rules.md"
BOB_FILE="$AGENTS_FILE"
VIBE_FILE="$REPO_ROOT/.vibe/agents/specify-agents.md"
KIMI_FILE="$REPO_ROOT/KIMI.md"
TRAE_FILE="$REPO_ROOT/.trae/rules/AGENTS.md"
IFLOW_FILE="$REPO_ROOT/IFLOW.md"
# Template file
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"
# Global variables for parsed plan data
NEW_LANG=""
NEW_FRAMEWORK=""
NEW_DB=""
NEW_PROJECT_TYPE=""
#==============================================================================
# Utility Functions
#==============================================================================
log_info() {
echo "INFO: $1"
}
log_success() {
echo "$1"
}
log_error() {
echo "ERROR: $1" >&2
}
log_warning() {
echo "WARNING: $1" >&2
}
# Cleanup function for temporary files
cleanup() {
local exit_code=$?
# Disarm traps to prevent re-entrant loop
trap - EXIT INT TERM
rm -f /tmp/agent_update_*_$$
rm -f /tmp/manual_additions_$$
exit $exit_code
}
# Set up cleanup trap
trap cleanup EXIT INT TERM
#==============================================================================
# Validation Functions
#==============================================================================
validate_environment() {
# Check if we have a current branch/feature (git or non-git)
if [[ -z "$CURRENT_BRANCH" ]]; then
log_error "Unable to determine current feature"
if [[ "$HAS_GIT" == "true" ]]; then
log_info "Make sure you're on a feature branch"
else
log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
fi
exit 1
fi
# Check if plan.md exists
if [[ ! -f "$NEW_PLAN" ]]; then
log_error "No plan.md found at $NEW_PLAN"
log_info "Make sure you're working on a feature with a corresponding spec directory"
if [[ "$HAS_GIT" != "true" ]]; then
log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
fi
exit 1
fi
# Check if template exists (needed for new files)
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_warning "Template file not found at $TEMPLATE_FILE"
log_warning "Creating new agent files will fail"
fi
}
#==============================================================================
# Plan Parsing Functions
#==============================================================================
extract_plan_field() {
local field_pattern="$1"
local plan_file="$2"
grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
head -1 | \
sed "s|^\*\*${field_pattern}\*\*: ||" | \
sed 's/^[ \t]*//;s/[ \t]*$//' | \
grep -v "NEEDS CLARIFICATION" | \
grep -v "^N/A$" || echo ""
}
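# Illustrative behaviour (hypothetical plan.md line):
#   **Language/Version**: Python 3.11
# extract_plan_field "Language/Version" plan.md -> "Python 3.11"
# Values containing "NEEDS CLARIFICATION" or exactly "N/A" yield "".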
parse_plan_data() {
local plan_file="$1"
if [[ ! -f "$plan_file" ]]; then
log_error "Plan file not found: $plan_file"
return 1
fi
if [[ ! -r "$plan_file" ]]; then
log_error "Plan file is not readable: $plan_file"
return 1
fi
log_info "Parsing plan data from $plan_file"
NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
NEW_DB=$(extract_plan_field "Storage" "$plan_file")
NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
# Log what we found
if [[ -n "$NEW_LANG" ]]; then
log_info "Found language: $NEW_LANG"
else
log_warning "No language information found in plan"
fi
if [[ -n "$NEW_FRAMEWORK" ]]; then
log_info "Found framework: $NEW_FRAMEWORK"
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
log_info "Found database: $NEW_DB"
fi
if [[ -n "$NEW_PROJECT_TYPE" ]]; then
log_info "Found project type: $NEW_PROJECT_TYPE"
fi
}
format_technology_stack() {
local lang="$1"
local framework="$2"
local parts=()
# Add non-empty parts
[[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
[[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
# Join with proper formatting
if [[ ${#parts[@]} -eq 0 ]]; then
echo ""
elif [[ ${#parts[@]} -eq 1 ]]; then
echo "${parts[0]}"
else
# Join multiple parts with " + "
local result="${parts[0]}"
for ((i=1; i<${#parts[@]}; i++)); do
result="$result + ${parts[i]}"
done
echo "$result"
fi
}
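# Illustrative results (hypothetical inputs):
#   format_technology_stack "Python 3.11" "FastAPI" -> "Python 3.11 + FastAPI"
#   format_technology_stack "Rust 1.75" ""          -> "Rust 1.75"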
#==============================================================================
# Template and Content Generation Functions
#==============================================================================
get_project_structure() {
local project_type="$1"
if [[ "$project_type" == *"web"* ]]; then
echo "backend/\\nfrontend/\\ntests/"
else
echo "src/\\ntests/"
fi
}
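# Illustrative: a project type containing "web" expands, after the \n
# conversion pass in create_new_agent_file, to:
#   backend/
#   frontend/
#   tests/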
get_commands_for_language() {
local lang="$1"
# These values are injected into sed replacement strings below, where a bare
# "&" expands to the matched pattern; emit it as \& so it stays literal.
case "$lang" in
*"Python"*)
echo "cd src \\&\\& pytest \\&\\& ruff check ."
;;
*"Rust"*)
echo "cargo test \\&\\& cargo clippy"
;;
*"JavaScript"*|*"TypeScript"*)
echo "npm test \\&\\& npm run lint"
;;
*)
echo "# Add commands for $lang"
;;
esac
}
get_language_conventions() {
local lang="$1"
echo "$lang: Follow standard conventions"
}
create_new_agent_file() {
local target_file="$1"
local temp_file="$2"
local project_name="$3"
local current_date="$4"
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_error "Template not found at $TEMPLATE_FILE"
return 1
fi
if [[ ! -r "$TEMPLATE_FILE" ]]; then
log_error "Template file is not readable: $TEMPLATE_FILE"
return 1
fi
log_info "Creating new agent context file from template..."
if ! cp "$TEMPLATE_FILE" "$temp_file"; then
log_error "Failed to copy template file"
return 1
fi
# Replace template placeholders
local project_structure
project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")
local commands
commands=$(get_commands_for_language "$NEW_LANG")
local language_conventions
language_conventions=$(get_language_conventions "$NEW_LANG")
# Perform substitutions with error checking using a safer approach:
# escape characters that are special in a sed replacement (backslash, the
# "|" delimiter, and "&") plus common regex metacharacters.
local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|&]/\\&/g')
local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|&]/\\&/g')
local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|&]/\\&/g')
# Build technology stack and recent change strings conditionally
local tech_stack
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
elif [[ -n "$escaped_lang" ]]; then
tech_stack="- $escaped_lang ($escaped_branch)"
elif [[ -n "$escaped_framework" ]]; then
tech_stack="- $escaped_framework ($escaped_branch)"
else
tech_stack="- ($escaped_branch)"
fi
local recent_change
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
elif [[ -n "$escaped_lang" ]]; then
recent_change="- $escaped_branch: Added $escaped_lang"
elif [[ -n "$escaped_framework" ]]; then
recent_change="- $escaped_branch: Added $escaped_framework"
else
recent_change="- $escaped_branch: Added"
fi
local substitutions=(
"s|\[PROJECT NAME\]|$project_name|"
"s|\[DATE\]|$current_date|"
"s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
"s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
"s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
"s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
"s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
)
for substitution in "${substitutions[@]}"; do
if ! sed -i.bak -e "$substitution" "$temp_file"; then
log_error "Failed to perform substitution: $substitution"
rm -f "$temp_file" "$temp_file.bak"
return 1
fi
done
# Convert \n sequences to actual newlines.
# Note: $'\n' is required here; $(printf '\n') strips trailing newlines and
# would yield an empty string. The replacement newline must be backslash-
# escaped for sed.
newline=$'\n'
sed -i.bak2 "s/\\\\n/\\${newline}/g" "$temp_file"
# Clean up backup files
rm -f "$temp_file.bak" "$temp_file.bak2"
# Prepend Cursor frontmatter for .mdc files so rules are auto-included
if [[ "$target_file" == *.mdc ]]; then
local frontmatter_file
frontmatter_file=$(mktemp) || return 1
printf '%s\n' "---" "description: Project Development Guidelines" "globs: [\"**/*\"]" "alwaysApply: true" "---" "" > "$frontmatter_file"
cat "$temp_file" >> "$frontmatter_file"
mv "$frontmatter_file" "$temp_file"
fi
return 0
}
update_existing_agent_file() {
local target_file="$1"
local current_date="$2"
log_info "Updating existing agent context file..."
# Use a single temporary file for atomic update
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
# Process the file in one pass
local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
local new_tech_entries=()
local new_change_entry=""
# Prepare new technology entries (grep -F: match as a fixed string, since the
# extracted values may contain regex metacharacters like "+" or ".")
if [[ -n "$tech_stack" ]] && ! grep -qF "$tech_stack" "$target_file"; then
new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -qF "$NEW_DB" "$target_file"; then
new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
fi
# Prepare new change entry
if [[ -n "$tech_stack" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
fi
# Check if sections exist in the file
local has_active_technologies=0
local has_recent_changes=0
if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
has_active_technologies=1
fi
if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
has_recent_changes=1
fi
# Process file line by line
local in_tech_section=false
local in_changes_section=false
local tech_entries_added=false
local changes_entries_added=false
local existing_changes_count=0
local file_ended=false
while IFS= read -r line || [[ -n "$line" ]]; do
# Handle Active Technologies section
if [[ "$line" == "## Active Technologies" ]]; then
echo "$line" >> "$temp_file"
in_tech_section=true
continue
elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
# Add new tech entries before closing the section
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
echo "$line" >> "$temp_file"
in_tech_section=false
continue
elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
# Add new tech entries before empty line in tech section
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
echo "$line" >> "$temp_file"
continue
fi
# Handle Recent Changes section
if [[ "$line" == "## Recent Changes" ]]; then
echo "$line" >> "$temp_file"
# Add new change entry right after the heading
if [[ -n "$new_change_entry" ]]; then
echo "$new_change_entry" >> "$temp_file"
fi
in_changes_section=true
changes_entries_added=true
continue
elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
echo "$line" >> "$temp_file"
in_changes_section=false
continue
elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
# Keep only first 2 existing changes
if [[ $existing_changes_count -lt 2 ]]; then
echo "$line" >> "$temp_file"
((existing_changes_count++))
fi
continue
fi
# Update timestamp
if [[ "$line" =~ (\*\*)?Last\ updated(\*\*)?:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
else
echo "$line" >> "$temp_file"
fi
done < "$target_file"
# Post-loop check: if we're still in the Active Technologies section and haven't added new entries
if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
# If sections don't exist, add them at the end of the file
if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
echo "" >> "$temp_file"
echo "## Active Technologies" >> "$temp_file"
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
echo "" >> "$temp_file"
echo "## Recent Changes" >> "$temp_file"
echo "$new_change_entry" >> "$temp_file"
changes_entries_added=true
fi
# Ensure Cursor .mdc files have YAML frontmatter for auto-inclusion
if [[ "$target_file" == *.mdc ]]; then
if ! head -1 "$temp_file" | grep -q '^---'; then
local frontmatter_file
frontmatter_file=$(mktemp) || { rm -f "$temp_file"; return 1; }
printf '%s\n' "---" "description: Project Development Guidelines" "globs: [\"**/*\"]" "alwaysApply: true" "---" "" > "$frontmatter_file"
cat "$temp_file" >> "$frontmatter_file"
mv "$frontmatter_file" "$temp_file"
fi
fi
# Move temp file to target atomically
if ! mv "$temp_file" "$target_file"; then
log_error "Failed to update target file"
rm -f "$temp_file"
return 1
fi
return 0
}
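# Illustrative effect (hypothetical file state): given an existing agent file
# with "## Active Technologies" and "## Recent Changes" sections, an update for
# branch 007-user-auth with Python 3.11 + FastAPI appends
#   - Python 3.11 + FastAPI (007-user-auth)
# to Active Technologies, prepends
#   - 007-user-auth: Added Python 3.11 + FastAPI
# under Recent Changes, and keeps only the two most recent prior change lines.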
#==============================================================================
# Main Agent File Update Function
#==============================================================================
update_agent_file() {
local target_file="$1"
local agent_name="$2"
if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
log_error "update_agent_file requires target_file and agent_name parameters"
return 1
fi
log_info "Updating $agent_name context file: $target_file"
local project_name
project_name=$(basename "$REPO_ROOT")
local current_date
current_date=$(date +%Y-%m-%d)
# Create directory if it doesn't exist
local target_dir
target_dir=$(dirname "$target_file")
if [[ ! -d "$target_dir" ]]; then
if ! mkdir -p "$target_dir"; then
log_error "Failed to create directory: $target_dir"
return 1
fi
fi
if [[ ! -f "$target_file" ]]; then
# Create new file from template
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
if mv "$temp_file" "$target_file"; then
log_success "Created new $agent_name context file"
else
log_error "Failed to move temporary file to $target_file"
rm -f "$temp_file"
return 1
fi
else
log_error "Failed to create new agent file"
rm -f "$temp_file"
return 1
fi
else
# Update existing file
if [[ ! -r "$target_file" ]]; then
log_error "Cannot read existing file: $target_file"
return 1
fi
if [[ ! -w "$target_file" ]]; then
log_error "Cannot write to existing file: $target_file"
return 1
fi
if update_existing_agent_file "$target_file" "$current_date"; then
log_success "Updated existing $agent_name context file"
else
log_error "Failed to update existing agent file"
return 1
fi
fi
return 0
}
#==============================================================================
# Agent Selection and Processing
#==============================================================================
update_specific_agent() {
local agent_type="$1"
case "$agent_type" in
claude)
update_agent_file "$CLAUDE_FILE" "Claude Code" || return 1
;;
gemini)
update_agent_file "$GEMINI_FILE" "Gemini CLI" || return 1
;;
copilot)
update_agent_file "$COPILOT_FILE" "GitHub Copilot" || return 1
;;
cursor-agent)
update_agent_file "$CURSOR_FILE" "Cursor IDE" || return 1
;;
qwen)
update_agent_file "$QWEN_FILE" "Qwen Code" || return 1
;;
opencode)
update_agent_file "$AGENTS_FILE" "opencode" || return 1
;;
codex)
update_agent_file "$AGENTS_FILE" "Codex CLI" || return 1
;;
windsurf)
update_agent_file "$WINDSURF_FILE" "Windsurf" || return 1
;;
junie)
update_agent_file "$JUNIE_FILE" "Junie" || return 1
;;
kilocode)
update_agent_file "$KILOCODE_FILE" "Kilo Code" || return 1
;;
auggie)
update_agent_file "$AUGGIE_FILE" "Auggie CLI" || return 1
;;
roo)
update_agent_file "$ROO_FILE" "Roo Code" || return 1
;;
codebuddy)
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI" || return 1
;;
qodercli)
update_agent_file "$QODER_FILE" "Qoder CLI" || return 1
;;
amp)
update_agent_file "$AMP_FILE" "Amp" || return 1
;;
shai)
update_agent_file "$SHAI_FILE" "SHAI" || return 1
;;
tabnine)
update_agent_file "$TABNINE_FILE" "Tabnine CLI" || return 1
;;
kiro-cli)
update_agent_file "$KIRO_FILE" "Kiro CLI" || return 1
;;
agy)
update_agent_file "$AGY_FILE" "Antigravity" || return 1
;;
bob)
update_agent_file "$BOB_FILE" "IBM Bob" || return 1
;;
vibe)
update_agent_file "$VIBE_FILE" "Mistral Vibe" || return 1
;;
kimi)
update_agent_file "$KIMI_FILE" "Kimi Code" || return 1
;;
trae)
update_agent_file "$TRAE_FILE" "Trae" || return 1
;;
pi)
update_agent_file "$AGENTS_FILE" "Pi Coding Agent" || return 1
;;
iflow)
update_agent_file "$IFLOW_FILE" "iFlow CLI" || return 1
;;
generic)
log_info "Generic agent: no predefined context file. Use the agent-specific update script for your agent."
;;
*)
log_error "Unknown agent type '$agent_type'"
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|generic"
exit 1
;;
esac
}
# Helper: skip non-existent files and files already updated (dedup by
# realpath so that variables pointing to the same file — e.g. AMP_FILE,
# KIRO_FILE, BOB_FILE all resolving to AGENTS_FILE — are only written once).
# Uses a linear array instead of associative array for bash 3.2 compatibility.
# Note: defined at top level because bash 3.2 does not support true
# nested/local functions. _updated_paths, _found_agent, and _all_ok are
# initialised exclusively inside update_all_existing_agents so that
# sourcing this script has no side effects on the caller's environment.
_update_if_new() {
local file="$1" name="$2"
[[ -f "$file" ]] || return 0
local real_path
real_path=$(realpath "$file" 2>/dev/null || echo "$file")
local p
if [[ ${#_updated_paths[@]} -gt 0 ]]; then
for p in "${_updated_paths[@]}"; do
[[ "$p" == "$real_path" ]] && return 0
done
fi
# Record the file as seen before attempting the update so that:
# (a) aliases pointing to the same path are not retried on failure
# (b) _found_agent reflects file existence, not update success
_updated_paths+=("$real_path")
_found_agent=true
update_agent_file "$file" "$name"
}
update_all_existing_agents() {
_found_agent=false
_updated_paths=()
local _all_ok=true
_update_if_new "$CLAUDE_FILE" "Claude Code" || _all_ok=false
_update_if_new "$GEMINI_FILE" "Gemini CLI" || _all_ok=false
_update_if_new "$COPILOT_FILE" "GitHub Copilot" || _all_ok=false
_update_if_new "$CURSOR_FILE" "Cursor IDE" || _all_ok=false
_update_if_new "$QWEN_FILE" "Qwen Code" || _all_ok=false
_update_if_new "$AGENTS_FILE" "Codex/opencode" || _all_ok=false
_update_if_new "$AMP_FILE" "Amp" || _all_ok=false
_update_if_new "$KIRO_FILE" "Kiro CLI" || _all_ok=false
_update_if_new "$BOB_FILE" "IBM Bob" || _all_ok=false
_update_if_new "$WINDSURF_FILE" "Windsurf" || _all_ok=false
_update_if_new "$JUNIE_FILE" "Junie" || _all_ok=false
_update_if_new "$KILOCODE_FILE" "Kilo Code" || _all_ok=false
_update_if_new "$AUGGIE_FILE" "Auggie CLI" || _all_ok=false
_update_if_new "$ROO_FILE" "Roo Code" || _all_ok=false
_update_if_new "$CODEBUDDY_FILE" "CodeBuddy CLI" || _all_ok=false
_update_if_new "$SHAI_FILE" "SHAI" || _all_ok=false
_update_if_new "$TABNINE_FILE" "Tabnine CLI" || _all_ok=false
_update_if_new "$QODER_FILE" "Qoder CLI" || _all_ok=false
_update_if_new "$AGY_FILE" "Antigravity" || _all_ok=false
_update_if_new "$VIBE_FILE" "Mistral Vibe" || _all_ok=false
_update_if_new "$KIMI_FILE" "Kimi Code" || _all_ok=false
_update_if_new "$TRAE_FILE" "Trae" || _all_ok=false
_update_if_new "$IFLOW_FILE" "iFlow CLI" || _all_ok=false
# If no agent files exist, create a default Claude file
if [[ "$_found_agent" == false ]]; then
log_info "No existing agent files found, creating default Claude file..."
update_agent_file "$CLAUDE_FILE" "Claude Code" || return 1
fi
[[ "$_all_ok" == true ]]
}
print_summary() {
echo
log_info "Summary of changes:"
if [[ -n "$NEW_LANG" ]]; then
echo " - Added language: $NEW_LANG"
fi
if [[ -n "$NEW_FRAMEWORK" ]]; then
echo " - Added framework: $NEW_FRAMEWORK"
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
echo " - Added database: $NEW_DB"
fi
echo
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|generic]"
}
#==============================================================================
# Main Execution
#==============================================================================
main() {
# Validate environment before proceeding
validate_environment
log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
# Parse the plan file to extract project information
if ! parse_plan_data "$NEW_PLAN"; then
log_error "Failed to parse plan data"
exit 1
fi
# Process based on agent type argument
local success=true
if [[ -z "$AGENT_TYPE" ]]; then
# No specific agent provided - update all existing agent files
log_info "No agent specified, updating all existing agent files..."
if ! update_all_existing_agents; then
success=false
fi
else
# Specific agent provided - update only that agent
log_info "Updating specific agent: $AGENT_TYPE"
if ! update_specific_agent "$AGENT_TYPE"; then
success=false
fi
fi
# Print summary
print_summary
if [[ "$success" == true ]]; then
log_success "Agent context update completed successfully"
exit 0
else
log_error "Agent context update completed with errors"
exit 1
fi
}
# Execute main function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi

View File

@@ -0,0 +1,28 @@
# [PROJECT NAME] Development Guidelines
Auto-generated from all feature plans. Last updated: [DATE]
## Active Technologies
[EXTRACTED FROM ALL PLAN.MD FILES]
## Project Structure
```text
[ACTUAL STRUCTURE FROM PLANS]
```
## Commands
[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]
## Code Style
[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]
## Recent Changes
[LAST 3 FEATURES AND WHAT THEY ADDED]
<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->

View File

@@ -0,0 +1,40 @@
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]
**Purpose**: [Brief description of what this checklist covers]
**Created**: [DATE]
**Feature**: [Link to spec.md or relevant documentation]
**Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements.
<!--
============================================================================
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.
The /speckit.checklist command MUST replace these with actual items based on:
- User's specific checklist request
- Feature requirements from spec.md
- Technical context from plan.md
- Implementation details from tasks.md
DO NOT keep these sample items in the generated checklist file.
============================================================================
-->
## [Category 1]
- [ ] CHK001 First checklist item with clear action
- [ ] CHK002 Second checklist item
- [ ] CHK003 Third checklist item
## [Category 2]
- [ ] CHK004 Another category item
- [ ] CHK005 Item with specific criteria
- [ ] CHK006 Final item in this category
## Notes
- Check items off as completed: `[x]`
- Add comments or findings inline
- Link to relevant resources or documentation
- Items are numbered sequentially for easy reference

View File

@@ -0,0 +1,50 @@
# [PROJECT_NAME] Constitution
<!-- Example: Spec Constitution, TaskFlow Constitution, etc. -->
## Core Principles
### [PRINCIPLE_1_NAME]
<!-- Example: I. Library-First -->
[PRINCIPLE_1_DESCRIPTION]
<!-- Example: Every feature starts as a standalone library; Libraries must be self-contained, independently testable, documented; Clear purpose required - no organizational-only libraries -->
### [PRINCIPLE_2_NAME]
<!-- Example: II. CLI Interface -->
[PRINCIPLE_2_DESCRIPTION]
<!-- Example: Every library exposes functionality via CLI; Text in/out protocol: stdin/args → stdout, errors → stderr; Support JSON + human-readable formats -->
### [PRINCIPLE_3_NAME]
<!-- Example: III. Test-First (NON-NEGOTIABLE) -->
[PRINCIPLE_3_DESCRIPTION]
<!-- Example: TDD mandatory: Tests written → User approved → Tests fail → Then implement; Red-Green-Refactor cycle strictly enforced -->
### [PRINCIPLE_4_NAME]
<!-- Example: IV. Integration Testing -->
[PRINCIPLE_4_DESCRIPTION]
<!-- Example: Focus areas requiring integration tests: New library contract tests, Contract changes, Inter-service communication, Shared schemas -->
### [PRINCIPLE_5_NAME]
<!-- Example: V. Observability, VI. Versioning & Breaking Changes, VII. Simplicity -->
[PRINCIPLE_5_DESCRIPTION]
<!-- Example: Text I/O ensures debuggability; Structured logging required; Or: MAJOR.MINOR.BUILD format; Or: Start simple, YAGNI principles -->
## [SECTION_2_NAME]
<!-- Example: Additional Constraints, Security Requirements, Performance Standards, etc. -->
[SECTION_2_CONTENT]
<!-- Example: Technology stack requirements, compliance standards, deployment policies, etc. -->
## [SECTION_3_NAME]
<!-- Example: Development Workflow, Review Process, Quality Gates, etc. -->
[SECTION_3_CONTENT]
<!-- Example: Code review requirements, testing gates, deployment approval process, etc. -->
## Governance
<!-- Example: Constitution supersedes all other practices; Amendments require documentation, approval, migration plan -->
[GOVERNANCE_RULES]
<!-- Example: All PRs/reviews must verify compliance; Complexity must be justified; Use [GUIDANCE_FILE] for runtime development guidance -->
**Version**: [CONSTITUTION_VERSION] | **Ratified**: [RATIFICATION_DATE] | **Last Amended**: [LAST_AMENDED_DATE]
<!-- Example: Version: 2.1.1 | Ratified: 2025-06-13 | Last Amended: 2025-07-16 -->

View File

@@ -0,0 +1,104 @@
# Implementation Plan: [FEATURE]
**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/plan-template.md` for the execution workflow.
## Summary
[Extract from feature spec: primary requirement + technical approach from research]
## Technical Context
<!--
ACTION REQUIRED: Replace the content in this section with the technical details
for the project. The structure here is presented in advisory capacity to guide
the iteration process.
-->
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [e.g., library/cli/web-service/mobile-app/compiler/desktop-app or NEEDS CLARIFICATION]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
[Gates determined based on constitution file]
## Project Structure
### Documentation (this feature)
```text
specs/[###-feature]/
├── plan.md # This file (/speckit.plan command output)
├── research.md # Phase 0 output (/speckit.plan command)
├── data-model.md # Phase 1 output (/speckit.plan command)
├── quickstart.md # Phase 1 output (/speckit.plan command)
├── contracts/ # Phase 1 output (/speckit.plan command)
└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```
### Source Code (repository root)
<!--
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
for this feature. Delete unused options and expand the chosen structure with
real paths (e.g., apps/admin, packages/something). The delivered plan must
not include Option labels.
-->
```text
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/
tests/
├── contract/
├── integration/
└── unit/
# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│ ├── models/
│ ├── services/
│ └── api/
└── tests/
frontend/
├── src/
│ ├── components/
│ ├── pages/
│ └── services/
└── tests/
# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]
ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```
**Structure Decision**: [Document the selected structure and reference the real
directories captured above]
## Complexity Tracking
> **Fill ONLY if Constitution Check has violations that must be justified**
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |

View File

@@ -0,0 +1,128 @@
# Feature Specification: [FEATURE NAME]
**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
**Input**: User description: "$ARGUMENTS"
## User Scenarios & Testing *(mandatory)*
<!--
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
you should still have a viable MVP (Minimum Viable Product) that delivers value.
Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
Think of each story as a standalone slice of functionality that can be:
- Developed independently
- Tested independently
- Deployed independently
- Demonstrated to users independently
-->
### User Story 1 - [Brief Title] (Priority: P1)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
2. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 2 - [Brief Title] (Priority: P2)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 3 - [Brief Title] (Priority: P3)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
[Add more user stories as needed, each with an assigned priority]
### Edge Cases
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right edge cases.
-->
- What happens when [boundary condition]?
- How does system handle [error scenario]?
## Requirements *(mandatory)*
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right functional requirements.
-->
### Functional Requirements
- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]
*Example of marking unclear requirements:*
- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
### Key Entities *(include if feature involves data)*
- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]
## Success Criteria *(mandatory)*
<!--
ACTION REQUIRED: Define measurable success criteria.
These must be technology-agnostic and measurable.
-->
### Measurable Outcomes
- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
## Assumptions
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right assumptions based on reasonable defaults
chosen when the feature description did not specify certain details.
-->
- [Assumption about target users, e.g., "Users have stable internet connectivity"]
- [Assumption about scope boundaries, e.g., "Mobile support is out of scope for v1"]
- [Assumption about data/environment, e.g., "Existing authentication system will be reused"]
- [Dependency on existing system/service, e.g., "Requires access to the existing user profile API"]

View File

@@ -0,0 +1,251 @@
---
description: "Task list template for feature implementation"
---
# Tasks: [FEATURE NAME]
**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
## Format: `[ID] [P?] [Story] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
## Path Conventions
- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure
<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.
The /speckit.tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/
Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment
DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->
## Phase 1: Setup (Shared Infrastructure)
**Purpose**: Project initialization and basic structure
- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools
---
## Phase 2: Foundational (Blocking Prerequisites)
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
Examples of foundational tasks (adjust based on your project):
- [ ] T004 Setup database schema and migrations framework
- [ ] T005 [P] Implement authentication/authorization framework
- [ ] T006 [P] Setup API routing and middleware structure
- [ ] T007 Create base models/entities that all stories depend on
- [ ] T008 Configure error handling and logging infrastructure
- [ ] T009 Setup environment configuration management
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
---
## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 1
- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T016 [US1] Add validation and error handling
- [ ] T017 [US1] Add logging for user story 1 operations
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
---
## Phase 4: User Story 2 - [Title] (Priority: P2)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️
- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 2
- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
- [ ] T021 [US2] Implement [Service] in src/services/[service].py
- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T023 [US2] Integrate with User Story 1 components (if needed)
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
---
## Phase 5: User Story 3 - [Title] (Priority: P3)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️
- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 3
- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
- [ ] T027 [US3] Implement [Service] in src/services/[service].py
- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py
**Checkpoint**: All user stories should now be independently functional
---
[Add more user story phases as needed, following the same pattern]
---
## Phase N: Polish & Cross-Cutting Concerns
**Purpose**: Improvements that affect multiple user stories
- [ ] TXXX [P] Documentation updates in docs/
- [ ] TXXX Code cleanup and refactoring
- [ ] TXXX Performance optimization across all stories
- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
- [ ] TXXX Security hardening
- [ ] TXXX Run quickstart.md validation
---
## Dependencies & Execution Order
### Phase Dependencies
- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
- User stories can then proceed in parallel (if staffed)
- Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete
### User Story Dependencies
- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable
### Within Each User Story
- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority
### Parallel Opportunities
- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members
---
## Parallel Example: User Story 1
```bash
# Launch all tests for User Story 1 together (if tests requested):
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
Task: "Integration test for [user journey] in tests/integration/test_[name].py"
# Launch all models for User Story 1 together:
Task: "Create [Entity1] model in src/models/[entity1].py"
Task: "Create [Entity2] model in src/models/[entity2].py"
```
---
## Implementation Strategy
### MVP First (User Story 1 Only)
1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready
### Incremental Delivery
1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories
### Parallel Team Strategy
With multiple developers:
1. Team completes Setup + Foundational together
2. Once Foundational is done:
- Developer A: User Story 1
- Developer B: User Story 2
- Developer C: User Story 3
3. Stories complete and integrate independently
---
## Notes
- [P] tasks = different files, no dependencies
- [Story] label maps task to specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence