# A/B Testing Service Advanced A/B testing and experimentation service for the Telegram Marketing Intelligence Agent system. ## Overview The A/B Testing service provides comprehensive experiment management, traffic allocation, and statistical analysis capabilities for optimizing marketing campaigns. ## Features ### Experiment Management - Multiple experiment types (A/B, multivariate, bandit) - Flexible variant configuration - Target audience filtering - Scheduled experiments - Early stopping support ### Traffic Allocation Algorithms - **Random**: Fixed percentage allocation - **Epsilon-Greedy**: Balance exploration and exploitation - **UCB (Upper Confidence Bound)**: Optimistic exploration strategy - **Thompson Sampling**: Bayesian approach for optimal allocation ### Statistical Analysis - Frequentist hypothesis testing - Bayesian analysis - Confidence intervals - Power analysis - Multiple testing correction ### Real-time Features - Live metrics tracking - Dynamic allocation updates - Real-time results visualization - Automated winner detection ## Architecture ``` ┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐ │ API Gateway │────▶│ A/B Testing Service│────▶│ MongoDB │ └─────────────────┘ └──────────────────┘ └─────────────────┘ │ │ ├───────────────────────────┤ ▼ ▼ ┌─────────────┐ ┌──────────────┐ │ Redis │ │ RabbitMQ │ └─────────────┘ └──────────────┘ ``` ## API Endpoints ### Experiments - `GET /api/experiments` - List experiments - `POST /api/experiments` - Create experiment - `GET /api/experiments/:id` - Get experiment details - `PUT /api/experiments/:id` - Update experiment - `DELETE /api/experiments/:id` - Delete experiment - `POST /api/experiments/:id/start` - Start experiment - `POST /api/experiments/:id/pause` - Pause experiment - `POST /api/experiments/:id/complete` - Complete experiment ### Allocations - `POST /api/allocations/allocate` - Allocate user to variant - `GET /api/allocations/allocation/:experimentId/:userId` - Get user allocation - `POST /api/allocations/conversion` - Record conversion - `POST /api/allocations/event` - Record custom event - `POST /api/allocations/batch/allocate` - Batch allocate users - `GET /api/allocations/stats/:experimentId` - Get allocation statistics ### Results - `GET /api/results/:experimentId` - Get experiment results - `GET /api/results/:experimentId/metrics` - Get real-time metrics - `GET /api/results/:experimentId/segments` - Get segment analysis - `GET /api/results/:experimentId/funnel` - Get funnel analysis - `GET /api/results/:experimentId/export` - Export results ## Configuration ### Environment Variables - `PORT` - Service port (default: 3005) - `MONGODB_URI` - MongoDB connection string - `REDIS_URL` - Redis connection URL - `RABBITMQ_URL` - RabbitMQ connection URL - `JWT_SECRET` - JWT signing secret ### Experiment Configuration - `DEFAULT_EXPERIMENT_DURATION` - Default duration in ms - `MIN_SAMPLE_SIZE` - Minimum sample size per variant - `CONFIDENCE_LEVEL` - Statistical confidence level - `MDE` - Minimum detectable effect ### Allocation Configuration - `ALLOCATION_ALGORITHM` - Default algorithm - `EPSILON` - Epsilon for epsilon-greedy - `UCB_C` - Exploration parameter for UCB ## Usage Examples ### Create an A/B Test ```javascript const experiment = { name: "New CTA Button Test", type: "ab", targetMetric: { name: "conversion_rate", type: "conversion", goalDirection: "increase" }, variants: [ { variantId: "control", name: "Current Button", config: { buttonText: "Sign Up" }, allocation: { percentage: 50 } }, { variantId: "variant_a", name: "New Button", config: { buttonText: "Get Started Free" }, allocation: { percentage: 50 } } ], control: "control", allocation: { method: "random" } }; const response = await abTestingClient.createExperiment(experiment); ``` ### Allocate User ```javascript const allocation = await abTestingClient.allocate({ experimentId: "exp_123", userId: "user_456", context: { deviceType: "mobile", platform: "iOS", location: { country: "US", region: "CA" } } }); // Use variant configuration if (allocation.variantId === "variant_a") { showNewButton(); } else { showCurrentButton(); } ``` ### Record Conversion ```javascript await abTestingClient.recordConversion({ experimentId: "exp_123", userId: "user_456", value: 1, metadata: { revenue: 99.99, itemId: "premium_plan" } }); ``` ### Get Results ```javascript const results = await abTestingClient.getResults("exp_123"); // Check winner if (results.analysis.summary.winner) { console.log(`Winner: ${results.analysis.summary.winner.name}`); console.log(`Improvement: ${results.analysis.summary.winner.improvement}%`); } ``` ## Adaptive Allocation ### Epsilon-Greedy ```javascript const experiment = { allocation: { method: "epsilon-greedy", parameters: { epsilon: 0.1 // 10% exploration } } }; ``` ### Thompson Sampling ```javascript const experiment = { allocation: { method: "thompson" } }; ``` ## Statistical Analysis ### Power Analysis The service automatically calculates required sample sizes based on: - Baseline conversion rate - Minimum detectable effect - Statistical power (default: 80%) - Confidence level (default: 95%) ### Multiple Testing Correction When running experiments with multiple variants: - Bonferroni correction - Benjamini-Hochberg procedure ## Best Practices 1. **Sample Size Planning** - Use power analysis to determine duration - Don't stop experiments too early - Account for weekly/seasonal patterns 2. **Metric Selection** - Choose primary metrics aligned with business goals - Monitor guardrail metrics - Consider long-term effects 3. **Audience Targeting** - Use consistent targeting criteria - Ensure sufficient traffic in each segment - Consider interaction effects 4. **Statistical Rigor** - Pre-register hypotheses - Avoid peeking at results - Use appropriate statistical tests ## Monitoring ### Health Check ```bash curl http://localhost:3005/health ``` ### Metrics - Prometheus metrics at `/metrics` - Key metrics: - Active experiments count - Allocation latency - Conversion rates by variant - Algorithm performance ## Development ### Setup ```bash npm install cp .env.example .env npm run dev ``` ### Testing ```bash npm test npm run test:integration npm run test:statistical ``` ### Docker ```bash docker build -t ab-testing-service . docker run -p 3005:3005 --env-file .env ab-testing-service ``` ## Performance - Allocation latency: <10ms p99 - Results calculation: <100ms for 100K users - Real-time updates: <50ms latency - Supports 10K allocations/second ## Security - JWT authentication required - Experiment isolation by account - Rate limiting per account - Audit logging for all changes ## Support For issues and questions: - Review the statistical methodology guide - Check the troubleshooting section - Contact the development team