# Technical Research: Mock GDS MCP Server
**Branch**: `001-mock-gds-server` | **Date**: 2026-04-07
## Phase 0: Technology Decisions
### Decision 1: MCP SDK and Protocol Implementation
**Decision**: Use `@modelcontextprotocol/sdk` official Node.js SDK
**Rationale**:
- Official SDK ensures MCP protocol compliance (Constitution Principle I)
- Handles JSON-RPC 2.0 message format automatically
- Provides TypeScript types for type safety
- Active maintenance by Anthropic
- Simplified tool registration and schema management
**Alternatives Considered**:
- **Custom MCP implementation**: Rejected - high risk of protocol non-compliance, significant development effort
- **Python MCP SDK**: Rejected - requirement specifies Node.js with minimal dependencies
**Implementation Notes**:
- SDK provides `Server` class for initialization
- Tool handlers registered via `server.setRequestHandler`
- Automatic capability negotiation during handshake
- Built-in error handling with standard MCP error codes
### Decision 2: Valkey Client Library
**Decision**: Use `ioredis` v5.x as Valkey client
**Rationale**:
- Valkey is Redis protocol-compatible, ioredis is the most mature Node.js Redis client
- Full support for Redis/Valkey commands (GET, SET, HSET, EXPIRE, etc.)
- Connection pooling and automatic reconnection
- Cluster and sentinel support (for future scaling)
- Pipeline and transaction support
- Active maintenance and TypeScript support
**Alternatives Considered**:
- **node-redis**: Rejected - ioredis has better TypeScript support and more features
- **Custom Valkey protocol**: Rejected - unnecessarily complex
- **No persistence (memory-only)**: Rejected - requirement specifies Valkey for persistence
**Implementation Notes**:
- Use Redis-compatible commands only (avoid Redis-specific extensions)
- Session data stored with TTL (e.g., 1 hour default)
- Key naming: `gds:session:{sessionId}:bookings:{pnr}`
- Use hash structures for complex objects (bookings, searches)
- Enable Valkey RDB/AOF persistence via docker-compose configuration
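A minimal sketch of this key layout, assuming the client is injected (any ioredis-compatible object) so unit tests can substitute a stub; the function name and TTL default are illustrative, not part of the spec:

```javascript
// Hypothetical helper following the key-naming and hash-structure notes above.
// `client` is any ioredis-compatible object (hset/expire), so tests can stub it.
export async function storeBooking(client, sessionId, pnr, booking, ttlSeconds = 3600) {
  const key = `gds:session:${sessionId}:bookings:${pnr}`;
  await client.hset(key, booking);      // complex object stored as a hash
  await client.expire(key, ttlSeconds); // session-default TTL (1 hour)
  return key;
}
```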
### Decision 3: Minimal Dependencies Strategy
**Decision**: Limit external dependencies to essentials only
**Core Dependencies** (production):
1. `@modelcontextprotocol/sdk` - MCP protocol (required)
2. `ioredis` - Valkey client (required for persistence)
3. `pino` - Structured logging (minimal, fast, constitution requires observability)
**Development Dependencies**:
- Node.js native test runner (`node:test`) - no jest/mocha overhead
- `c8` - code coverage (lightweight)
**Rationale**:
- Aligns with "minimal libraries" requirement
- Reduces attack surface and dependency maintenance burden
- Faster container builds and smaller images
- Native Node.js features (test runner, assert) are mature and sufficient
**Explicitly Avoided**:
- ❌ Express/Fastify/Koa - no HTTP framework needed; stdio covers local use and `node:http` suffices for remote transport
- ❌ TypeORM/Prisma - direct Valkey commands sufficient for key-value storage
- ❌ Jest/Mocha - native test runner adequate
- ❌ Lodash/Ramda - native JS methods sufficient
- ❌ Moment.js/date-fns - native Date and Temporal API (when available)
### Decision 4: Docker Build Strategy
**Decision**: Use Docker Buildx with `docker-bake.hcl` for multi-platform builds
**Rationale**:
- `docker buildx bake` supports complex build configurations
- Multi-platform builds (linux/amd64, linux/arm64) in single command
- Build matrix for multiple tags (latest, version, dev)
- Better caching and parallelization than traditional Dockerfile
- Aligns with requirement specification
**Build Configuration**:
```hcl
# docker-bake.hcl
target "default" {
  dockerfile = "docker/Dockerfile"
  tags       = ["gds-mock-mcp:latest"]
  platforms  = ["linux/amd64", "linux/arm64"]
  cache-from = ["type=registry,ref=gds-mock-mcp:buildcache"]
  cache-to   = ["type=registry,ref=gds-mock-mcp:buildcache,mode=max"]
}
```
**Dockerfile Strategy**:
- Multi-stage build: builder → production
- Builder stage: install dependencies, run tests
- Production stage: copy only production deps and source
- Use Node.js 20 Alpine for minimal image size
- Non-root user for security
- Health check via MCP ping or valkey connection test
**Alternatives Considered**:
- **Traditional Dockerfile only**: Rejected - buildx bake provides better DX and multi-platform support
- **Docker Compose build**: Rejected - less flexible than buildx, no multi-platform
- **Podman**: Rejected - Docker specified in requirements
### Decision 5: Mock Data Architecture
**Decision**: Embed realistic GDS data in JavaScript modules with deterministic generation
**Data Structure**:
- **Airports**: ~100 major airports with IATA codes (JFK, LAX, ORD, etc.)
- **Airlines**: Major carriers with IATA codes (AA, DL, UA, BA, etc.)
- **Hotels**: 50+ chains/properties across major cities
- **Car Rentals**: Major companies (Hertz, Avis, Enterprise) with vehicle types
- **Flight Routes**: Pre-defined routes with realistic times and prices
- **Pricing Tiers**: Economy ($200-$600 domestic), Business ($800-$2000), First Class ($2500+)
**Generation Strategy**:
- Deterministic: Same search inputs produce same results (for testing reproducibility)
- Controlled Randomness: Optional seed parameter for demo variety
- Rule-Based Pricing: Distance-based pricing with time-of-day adjustments
- Availability Simulation: Random sold-out scenarios (10% of flights)
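The strategy above can be sketched as follows; the hashing scheme and pricing function are assumptions used for illustration, not the project's actual data module:

```javascript
import { createHash } from 'node:crypto';

// Deterministic "randomness": hash the search inputs into a stable value,
// then map it into a pricing band. Same inputs always yield the same output.
function seededFraction(...inputs) {
  const digest = createHash('sha256').update(inputs.join('|')).digest();
  return digest.readUInt32BE(0) / 0xffffffff; // stable value in [0, 1]
}

export function economyFare(origin, destination, date) {
  const f = seededFraction('economy', origin, destination, date);
  return 200 + Math.round(f * 400); // $200–$600 domestic economy band
}

export function isSoldOut(flightId, date) {
  return seededFraction('soldout', flightId, date) < 0.1; // ~10% of flights
}
```

Because the fraction is derived from a hash rather than `Math.random()`, integration tests can assert on exact fares.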
**Rationale**:
- Embedded data = no external dependencies (fast startup)
- Deterministic = reliable integration tests
- Realistic codes = constitution compliance (Principle II)
- Pre-computed routes = sub-2s response times
**Alternatives Considered**:
- **External API (Skyscanner, etc.)**: Rejected - violates "no external connections" (Constitution Principle III)
- **Database seeding**: Rejected - overhead, embedded data sufficient for mock scope
- **Fully random data**: Rejected - testing requires deterministic outputs
### Decision 6: Testing Strategy
**Decision**: Use Node.js native test runner with three-tier test structure
**Test Tiers**:
1. **Unit Tests**: Individual tool handlers, data generators, validators
- Fast (<100ms total), isolated, no external dependencies
- Mock Valkey client for session tests
2. **Integration Tests**: Full MCP workflows with real Valkey (test container)
- End-to-end booking flows (search → book → retrieve → cancel)
- Multi-service workflows (flight + hotel + car)
- Concurrent session isolation tests
- Use docker-compose test profile for Valkey
3. **Contract Tests**: MCP protocol compliance validation
- Verify JSON-RPC 2.0 format
- Tool schema validation
- Error response structure
**Test Execution**:
```bash
npm test # All tests
npm run test:unit # Fast unit tests only
npm run test:integration # Requires Valkey
npm run test:coverage # Coverage report with c8
```
**Rationale**:
- Native test runner = minimal dependencies
- Three tiers = appropriate test coverage
- Docker test containers = realistic integration tests
- Fast unit tests = quick feedback loop
### Decision 7: Configuration Management
**Decision**: Environment variables with secure defaults
**Configuration Variables**:
```bash
# MCP Server
MCP_TRANSPORT=stdio # stdio or sse
MCP_SESSION_TIMEOUT=3600 # 1 hour session TTL
# Valkey
VALKEY_HOST=localhost
VALKEY_PORT=6379
VALKEY_PASSWORD= # Empty for dev, required for prod
VALKEY_DB=0
VALKEY_KEY_PREFIX=gds:
# Logging
LOG_LEVEL=info # silent, error, warn, info, debug, trace
LOG_PRETTY=false # Pretty print for dev
# Mock Data
MOCK_DATA_SEED=fixed # fixed or random
MOCK_RESPONSE_DELAY=0 # Artificial delay (ms) for demo purposes
```
**Security Defaults**:
- No production credentials in code or .env.example
- Configuration validation on startup
- Reject production-like patterns (Constitution Principle III)
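A hedged sketch of startup validation; the pattern list is an assumption about what "production-like" means for this mock, and the function shape is illustrative:

```javascript
// Hypothetical startup check: refuse to boot against hosts that look production-like.
const PRODUCTION_PATTERNS = [/prod/i, /live/i, /\.internal\b/i]; // assumed patterns

export function validateConfig(env) {
  const host = env.VALKEY_HOST || 'localhost';
  for (const pattern of PRODUCTION_PATTERNS) {
    if (pattern.test(host)) {
      throw new Error(`Refusing to start: VALKEY_HOST "${host}" matches production pattern ${pattern}`);
    }
  }
  return { host, port: Number(env.VALKEY_PORT || 6379) };
}
```

Failing fast at startup, rather than at first request, keeps misconfiguration errors close to their cause.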
**Rationale**:
- Standard practice for containerized apps
- Easy to override in docker-compose
- Secure defaults prevent accidents
### Decision 8: PNR Generation Strategy
**Decision**: Deterministic PNR generation with TEST prefix
**Format**: `TEST-{BASE32}` where BASE32 is 6 characters
**Example**: `TEST-A2B3C4` (the base32 alphabet excludes the digits 0 and 1)
**Generation Algorithm**:
1. Generate session-scoped sequence number
2. Combine with booking timestamp
3. Hash with SHA-256
4. Take first 6 characters of base32 encoding
5. Prefix with "TEST-"
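The five steps above can be sketched as follows; the exact framing of the hash input is an assumption:

```javascript
import { createHash } from 'node:crypto';

// RFC 4648 base32 alphabet: digits 0 and 1 are excluded, avoiding 0/O and 1/I confusion.
const BASE32 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';

export function generatePNR(sessionId, sequence, timestamp) {
  const digest = createHash('sha256')
    .update(`${sessionId}:${sequence}:${timestamp}`) // steps 1–3: combine and hash
    .digest();
  let code = '';
  for (let i = 0; i < 6; i++) code += BASE32[digest[i] % 32]; // step 4: 6 base32 chars
  return `TEST-${code}`; // step 5: mock-safety prefix
}
```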
**Rationale**:
- "TEST-" prefix = clear mock indicator (Constitution Principle III)
- Base32 = human-readable, unambiguous (no 0/O, 1/I confusion)
- 6 characters = 32^6 ≈ 1.07 billion unique combinations (sufficient for testing)
- Deterministic = reproducible test scenarios
- Session-scoped = prevents conflicts
**Alternatives Considered**:
- **Random UUID**: Rejected - too long, not human-friendly
- **Sequential numbers**: Rejected - predictable, not realistic
- **No prefix**: Rejected - violates safety requirement
## Technology Stack Summary
| Component | Technology | Version | Rationale |
|-----------|-----------|---------|-----------|
| Runtime | Node.js | 20 LTS | Current stable, long-term support |
| MCP SDK | @modelcontextprotocol/sdk | Latest | Official SDK, protocol compliance |
| Persistence | Valkey | 8.0+ | Redis-compatible, requirement specified |
| Valkey Client | ioredis | 5.x | Mature, feature-rich, TypeScript support |
| Logging | Pino | Latest | Fast, structured, minimal overhead |
| Testing | node:test | Built-in | Native, zero dependencies |
| Coverage | c8 | Latest | V8 coverage, lightweight |
| Container | Docker | 24+ | Buildx support, multi-platform |
| Orchestration | docker-compose | 2.x | Development environment |
## Performance Considerations
### Expected Performance Profile
- **Search Operations**: <500ms (data generation + Valkey lookup)
- **Booking Operations**: <200ms (validation + Valkey write)
- **Retrieval Operations**: <100ms (Valkey read)
- **Concurrent Sessions**: 50+ (limited by Valkey and Node.js event loop)
- **Memory Footprint**: <100MB per server instance
- **Container Image Size**: <50MB (Alpine-based)
### Optimization Strategies
1. **Caching**: Pre-compute common search results in Valkey
2. **Connection Pooling**: ioredis maintains persistent Valkey connections
3. **Lazy Loading**: Load mock data modules on-demand
4. **Batch Operations**: Use Valkey pipelines for multi-key operations
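The batch-operation pattern can be sketched as below; the client is injected so a stub can stand in for ioredis in tests, and the function name is illustrative:

```javascript
// Fetch several PNRs in one round trip using an ioredis-style pipeline.
export async function getManyPNRs(client, pnrCodes) {
  const pipeline = client.pipeline();
  for (const code of pnrCodes) pipeline.get(`pnr:${code}`);
  const results = await pipeline.exec(); // ioredis returns [[err, value], ...]
  return results.map(([err, value]) => (err || !value ? null : JSON.parse(value)));
}
```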
## Security Considerations
### Mock Data Safety
- ✅ No real API keys or credentials stored
- ✅ Configuration validation rejects production patterns
- ✅ All PNRs prefixed with "TEST-"
- ✅ No external network calls (except to local Valkey)
- ✅ Non-root container user
### Docker Security
- Use official Node.js Alpine base images
- Run as non-root user (node:node)
- Minimal attack surface (no shell, no dev tools in prod image)
- Regular security updates via base image updates
## Open Questions (Resolved)
All technical unknowns from initial planning have been resolved through research above. No blocking issues identified.
## Next Steps
Proceed to Phase 1: Design & Contracts
- Create data-model.md (data structures)
- Define MCP tool contracts in contracts/
- Generate quickstart.md with usage examples
- Update agent context with technology decisions
---
## Remote Access Research (Added 2026-04-07)
This section documents research findings for remote MCP access requirements based on clarifications received after initial planning.
### Decision 9: Streamable HTTP Transport Implementation (MCP Specification Compliant)
**Decision**: Use MCP SDK's `StreamableHTTPServerTransport` over HTTP/1.1 with Server-Sent Events (SSE)
**Question**: How to implement remote transport for MCP SDK per official specification?
**Investigation Findings**:
The MCP specification (2025-11-25) defines **Streamable HTTP** as the standard remote transport. The MCP SDK (@modelcontextprotocol/sdk v1.0.4) provides official transport implementations:
1. **StdioServerTransport** - stdio for local process communication
2. **StreamableHTTPServerTransport** - Remote HTTP/1.1 using SSE (spec-compliant)
3. **WebStandardStreamableHTTPServerTransport** - Platform-agnostic HTTP
4. **SSEServerTransport** - Deprecated legacy transport
**Key Finding**: MCP's **Streamable HTTP** transport uses HTTP/1.1 with Server-Sent Events (SSE), NOT HTTP/2. The specification requires:
- Single endpoint supporting POST, GET, and DELETE methods
- POST for client→server messages (returns SSE stream or 202)
- GET for server→client message stream (optional)
- DELETE for explicit session termination
- SSE for server→client streaming
- Session management via `Mcp-Session-Id` header
- Protocol version via `MCP-Protocol-Version` header (REQUIRED per clarification 2026-04-08)
- Origin header validation for security
- SSE polling pattern: the server sends an initial event with an ID and empty data and MAY close the connection after responding; clients reconnect using `Last-Event-ID`, and the server sends a `retry` field before closing
**Integration Approaches Evaluated**:
**Option A: Native HTTP/1.1 + SSE (MCP Spec Compliant)** ⭐ SELECTED
```
Client → Node.js MCP Server (HTTP/1.1 + SSE via StreamableHTTPServerTransport)
```
- ✅ Direct implementation per MCP specification
- ✅ Zero code complexity - use SDK's `StreamableHTTPServerTransport` directly
- ✅ Single process deployment (no reverse proxy required for spec compliance)
- ✅ Simplified debugging and local development
- ✅ Meets all MCP security requirements (Origin validation, localhost binding)
- ⚠️ Optional: Can add Nginx/Caddy for TLS termination and HTTP/2 upgrade (production enhancement)
**Option B: Reverse Proxy with HTTP/2 Upgrade**
```
Client (HTTP/2) → Nginx/Caddy (HTTP/2 → HTTP/1.1) → Node.js MCP Server (HTTP/1.1+SSE)
```
- ✅ Adds HTTP/2 multiplexing for client connections
- ✅ TLS termination in reverse proxy
- ⚠️ Adds deployment complexity
- ⚠️ Not required for MCP specification compliance
**Rationale for Selection**:
- MCP specification explicitly defines Streamable HTTP as HTTP/1.1 + SSE
- HTTP/2 is an optional enhancement, not a requirement
- Simpler deployment path (single Node.js process)
**Implementation Strategy**:
```javascript
// src/transports/streamable-http.js
import http from 'node:http';
import { randomUUID } from 'node:crypto';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';

export class HTTPTransport {
  constructor(options = {}) {
    this.port = options.port || 3000;
    this.host = options.host || '127.0.0.1'; // loopback by default; front with a proxy to expose
    this.mcpTransport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => randomUUID(),
      enableJsonResponse: false // use SSE streaming responses
    });
    this.server = http.createServer(async (req, res) => {
      // CORS, rate limiting, and health check middleware run here,
      // then delegate to this.mcpTransport.handleRequest(req, res)
    });
  }
}
```
**Docker Configuration**:
```yaml
# docker-compose.yaml
services:
  mcp-server:
    environment:
      TRANSPORT_MODE: http
      HTTP_PORT: 3000
      HTTP_HOST: 0.0.0.0   # bind container interface; port not published, only nginx reaches it
  nginx:
    image: nginx:alpine
    ports:
      - "8080:8080"        # external HTTP/2 port
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - mcp-server
```
**Nginx Configuration** (nginx/nginx.conf):
```nginx
server {
    listen 8080 ssl http2;
    server_name localhost;

    ssl_certificate     /etc/nginx/certs/cert.pem;
    ssl_certificate_key /etc/nginx/certs/key.pem;

    location /mcp {
        proxy_pass http://mcp-server:3000;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;

        # SSE support: disable buffering so events flush immediately
        proxy_buffering off;
        proxy_cache off;
        chunked_transfer_encoding on;
    }

    location /health {
        proxy_pass http://mcp-server:3000/health;
    }
}
```
**Alternatives**: Future enhancement can add native HTTP/2 option for single-binary deployment.
---
### Decision 10: MCP Session Management for Remote Access
**Decision**: Stateful sessions with Valkey backing, MCP Session ID mapped to Valkey session
**Question**: How should connection lifecycle and session management work for remote MCP?
**Key Distinction**: HTTP request ≠ MCP session
- **HTTP Request**: A single request/response cycle (or one SSE stream) over the transport
- **MCP Session**: Persistent identifier (`Mcp-Session-Id`) spanning multiple HTTP requests
- **Valkey Session**: Business-logic session tracking PNRs, searches, and user context
**Session Lifecycle**:
1. **Initialize**: Client connects → Transport generates session ID (UUID) → Create Valkey session
2. **Active**: Client sends requests with the `Mcp-Session-Id` header → Refresh session TTL
3. **Idle**: No requests for N minutes → Session remains in Valkey (TTL not yet expired)
4. **Expired**: TTL reaches zero → Valkey auto-deletes session data
5. **Reconnect**: Client can resume by presenting the same `Mcp-Session-Id` header
**Storage Pattern**:
```
session:{sessionId}:metadata → { createdAt, lastActivityAt, transportType, remoteIP }
session:{sessionId}:searches → Recent search results (optional caching)
session:{sessionId}:pnrs → Set of PNR codes created in this session
pnr:{pnr} → Global PNR storage (not session-namespaced)
```
**Implementation**:
```javascript
// src/session/manager.js
async function handleToolCall(request, sessionId) {
  if (!sessionId) throw new Error('Session ID required');

  let valkeySession = await sessionManager.getSession(sessionId);
  if (!valkeySession) {
    valkeySession = await sessionManager.createSession(sessionId);
  }
  await sessionManager.updateActivity(sessionId);
  return await toolHandler(request.params, sessionId);
}
```
**Session Isolation**: Each session has isolated Valkey namespace. PNRs stored globally (separate from sessions).
---
### Decision 11: IP-Based Rate Limiting Algorithm
**Decision**: Sliding Window Counter (Hybrid Approach)
**Question**: What rate limiting algorithm works for IP-based tracking without authentication?
**Algorithms Evaluated**:
1. **Fixed Window**: Simple but has burst problem (200 req in 1 second across window boundary)
2. **Sliding Window Log**: Accurate but high memory (stores timestamp per request)
3. **Sliding Window Counter**: Approximation balancing accuracy and performance ⭐ SELECTED
4. **Token Bucket**: Good for burst allowance but complex state management
**Selected Algorithm**: Sliding Window Counter
- ✅ Prevents large bursts (unlike fixed window)
- ✅ Low memory (2 counters per IP)
- ✅ Simple implementation (no Lua scripts required)
- ✅ Accuracy within 1-2% of perfect sliding window
**Implementation**:
```javascript
// src/remote/ratelimit.js
async function checkRateLimit(clientIP, limit = 100, windowSeconds = 60) {
  const now = Math.floor(Date.now() / 1000);
  const currentWindow = Math.floor(now / windowSeconds);
  const previousWindow = currentWindow - 1;
  const currentKey = `ratelimit:${clientIP}:${currentWindow}`;
  const previousKey = `ratelimit:${clientIP}:${previousWindow}`;

  const [currentCount, previousRaw] = await Promise.all([
    storage.incr(currentKey),
    storage.get(previousKey)
  ]);
  const previousCount = Number(previousRaw) || 0; // GET returns a string or null

  if (currentCount === 1) {
    await storage.expire(currentKey, windowSeconds * 2);
  }

  const elapsedInWindow = now % windowSeconds;
  const previousWeight = 1 - elapsedInWindow / windowSeconds;
  const estimatedCount = previousCount * previousWeight + currentCount;

  if (estimatedCount > limit) {
    throw new RateLimitError({ limit, current: Math.floor(estimatedCount) });
  }
  return { allowed: true, remaining: limit - Math.floor(estimatedCount) };
}
```
**Performance**: ~3 Valkey ops per request, ~100 bytes per IP, within 1-2% of perfect sliding window.
**HTTP Headers**:
```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1712486460
Retry-After: 15
```
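The headers above can be derived from a `checkRateLimit`-style result; the input field names here are assumptions mirroring the earlier example:

```javascript
// Sketch: compute rate-limit response headers for the current window.
export function rateLimitHeaders({ limit, remaining, windowSeconds, nowMs = Date.now() }) {
  const nowSec = Math.floor(nowMs / 1000);
  const windowEnd = (Math.floor(nowSec / windowSeconds) + 1) * windowSeconds;
  return {
    'X-RateLimit-Limit': String(limit),
    'X-RateLimit-Remaining': String(Math.max(0, remaining)),
    'X-RateLimit-Reset': String(windowEnd),   // epoch seconds when the window rolls over
    'Retry-After': String(windowEnd - nowSec) // seconds until the next window
  };
}
```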
**Client IP Extraction**:
```javascript
function getClientIP(request) {
  const forwarded = request.headers['x-forwarded-for'];
  if (forwarded) return forwarded.split(',')[0].trim();
  const realIP = request.headers['x-real-ip'];
  if (realIP) return realIP;
  return request.socket.remoteAddress;
}
```
**Configuration**:
```bash
RATE_LIMIT_ENABLED=true
RATE_LIMIT_PER_MINUTE=100
RATE_LIMIT_WINDOW_SECONDS=60
```
---
### Decision 12: Global PNR Storage with TTL
**Decision**: Global namespace with SETEX, independent PNR lifecycle from sessions
**Question**: How to implement global PNR retrieval across sessions with configurable expiration?
**Requirements**:
1. PNRs globally retrievable (any session can retrieve any PNR)
2. PNRs expire after TTL (default 1 hour)
3. PNR creation session logged but doesn't restrict retrieval
4. Session expiration doesn't delete PNRs
**Storage Pattern**:
**Global PNR** (not session-scoped):
```
pnr:TEST-ABC123 → { pnr, status, segments, passengers, createdAt, expiresAt, creatingSessionId }
```
**Session Reference** (for listBookings tool):
```
session:{sessionId}:pnrs → Set<pnr> // PNRs created in this session
```
**Implementation**:
```javascript
// Create PNR
async function createPNR(pnrData, ttlHours = 1) {
  const pnr = generatePNR(); // TEST-XXXXXX
  const ttlSeconds = ttlHours * 3600;
  const pnrRecord = {
    pnr,
    status: 'confirmed',
    createdAt: new Date().toISOString(),
    expiresAt: new Date(Date.now() + ttlSeconds * 1000).toISOString(),
    creatingSessionId: pnrData.sessionId, // for logging only
    ...pnrData
  };

  // Store globally with TTL
  await storage.setex(`pnr:${pnr}`, ttlSeconds, JSON.stringify(pnrRecord));
  // Add to the session's created-PNRs set
  await storage.sadd(`session:${pnrData.sessionId}:pnrs`, pnr);
  return pnrRecord;
}

// Retrieve PNR (global, any session)
async function retrieveBooking({ pnr }, sessionId) {
  const pnrData = await storage.get(`pnr:${pnr}`);
  if (!pnrData) {
    throw new NotFoundError(`PNR ${pnr} not found or expired`);
  }
  return JSON.parse(pnrData);
}

// List PNRs created in this session
async function listBookings({ limit = 10 }, sessionId) {
  const pnrCodes = await storage.smembers(`session:${sessionId}:pnrs`);
  const pnrs = await Promise.all(
    pnrCodes.map(async (code) => {
      const data = await storage.get(`pnr:${code}`);
      return data ? JSON.parse(data) : null;
    })
  );
  return pnrs.filter(Boolean).slice(0, limit);
}
```
**Edge Cases**:
1. **PNR Expired**: `retrieveBooking` returns "PNR not found or expired"
2. **Session Expires Before PNR**: PNR remains globally retrievable
3. **List After Session Expiry**: Returns empty (session reference deleted)
**Configuration**:
```bash
PNR_TTL_HOURS=1
SESSION_TTL_HOURS=24
```
**Storage Efficiency**: ~2KB per PNR, 1000 PNRs = 2MB, auto-cleanup via Valkey TTL.
---
### Decision 13: CORS Configuration
**Decision**: Permissive Wildcard CORS with Network-Level Access Control
**Question**: How to configure CORS for web-based MCP clients?
**CORS Policy**: `Access-Control-Allow-Origin: *` (wildcard)
**Rationale**:
- ✅ Maximum compatibility - any web client can connect from any domain
- ✅ Simplifies development - no origin whitelist configuration
- ✅ Enables browser tools, Chrome extensions, web IDEs
- ⚠️ Requires network-level security (firewall, VPN, private network)
- ⚠️ Only acceptable for trusted development/testing environments
**Security Implications**:
1. **No Credentials**: Wildcard incompatible with `Access-Control-Allow-Credentials: true` (acceptable - we have no auth)
2. **Public Data**: Any website can make requests (acceptable - test data only)
3. **CSRF Potential**: Limited risk (no authentication, state changes require valid PNR)
4. **Network Security**: Deploy within private networks, use firewall rules
**Implementation**:
```javascript
// src/remote/cors.js
export function applyCORS(req, res) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Credentials', 'false');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, DELETE, OPTIONS'); // DELETE per spec
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Mcp-Session-Id, MCP-Protocol-Version');
  res.setHeader('Access-Control-Expose-Headers', 'Mcp-Session-Id, X-RateLimit-Limit, X-RateLimit-Remaining');
  res.setHeader('Access-Control-Max-Age', '86400');

  if (req.method === 'OPTIONS') {
    res.writeHead(204);
    res.end();
    return true; // preflight handled
  }
  return false; // continue to MCP transport
}
```
**Nginx Alternative**:
```nginx
location /mcp {
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin * always;
        add_header Access-Control-Allow-Methods 'GET, POST, DELETE, OPTIONS';
        return 204;
    }
    add_header Access-Control-Allow-Origin * always;
    proxy_pass http://mcp-server:3000;
}
```
**Security Posture**: Wildcard CORS acceptable for mock server because:
1. Contains only test data (no sensitive information)
2. No authentication (no credentials to steal)
3. Network-level access controls provide security boundary
4. Maximizes developer flexibility for ad-hoc tooling
**Configuration**:
```bash
CORS_ENABLED=true
CORS_ORIGINS=*
CORS_MAX_AGE=86400
```
---
## Remote Access Technology Summary
| Component | Technology | Decision |
|-----------|-----------|----------|
| **HTTP/2 (optional)** | Nginx reverse proxy | Optional TLS/HTTP-2 termination in front of the HTTP/1.1 server |
| **MCP Transport** | StreamableHTTPServerTransport | Over HTTP/1.1 (proxied) |
| **Rate Limiting** | Sliding window counter | Valkey-backed, IP-based |
| **PNR Storage** | Global with TTL | Valkey SETEX, independent lifecycle |
| **CORS** | Wildcard policy | `Access-Control-Allow-Origin: *` |
| **Health Check** | Unauthenticated endpoint | `/health` returning JSON status |
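The health check row above can be sketched as a plain `node:http` handler; the response shape (`status`, `uptimeSeconds`) and the injected `checkValkey` probe are assumptions, not a contract from this document:

```javascript
// Hedged sketch of the unauthenticated /health endpoint.
// `checkValkey` is any async probe resolving truthy when Valkey is reachable.
export function createHealthHandler(checkValkey) {
  return async (req, res) => {
    const valkeyOk = await checkValkey().catch(() => false);
    const body = JSON.stringify({
      status: valkeyOk ? 'ok' : 'degraded',
      uptimeSeconds: Math.floor(process.uptime())
    });
    res.writeHead(valkeyOk ? 200 : 503, { 'Content-Type': 'application/json' });
    res.end(body);
  };
}
```

Returning 503 when Valkey is unreachable lets container orchestrators restart or route around the instance.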
## Environment Variables (Remote Mode)
```bash
# Transport
TRANSPORT_MODE=stdio|http|both
HTTP_PORT=3000
HTTP_HOST=127.0.0.1
# Rate Limiting
RATE_LIMIT_ENABLED=true
RATE_LIMIT_PER_MINUTE=100
RATE_LIMIT_WINDOW_SECONDS=60
# PNR/Session
PNR_TTL_HOURS=1
SESSION_TTL_HOURS=24
# CORS
CORS_ENABLED=true
CORS_ORIGINS=*
CORS_MAX_AGE=86400
```
## Next Steps
All remote access research complete. Proceed to update:
1. ✅ data-model.md - Add RemoteConnection, RateLimitRecord, HealthStatus entities
2. ✅ contracts/ - Add health endpoint contract
3. ✅ quickstart.md - Add remote access setup instructions