# Analytics Service

Real-time analytics and reporting service for the Telegram Marketing Intelligence Agent system.

## Overview

The Analytics service provides comprehensive event tracking, metrics processing, real-time analytics, report generation, and alert management capabilities.

## Features

### Event Tracking
- Real-time event ingestion
- Batch event processing
- Event validation and enrichment
- Multi-dimensional event storage

### Metrics Processing
- Custom metric definitions
- Real-time metric calculation
- Aggregation and rollups
- Time-series data management

### Report Generation
- Scheduled report generation
- Custom report templates
- Multiple export formats (PDF, Excel, CSV)
- Report distribution

### Real-time Analytics
- WebSocket streaming
- Live dashboards
- Real-time alerts
- Performance monitoring

### Alert Management
- Threshold-based alerts
- Anomaly detection
- Multi-channel notifications
- Alert history tracking
## Architecture

```
┌─────────────────┐      ┌───────────────────┐      ┌─────────────────┐
│   API Gateway   │─────▶│ Analytics Service │─────▶│  Elasticsearch  │
└─────────────────┘      └───────────────────┘      └─────────────────┘
                                   │
                      ┌────────────┴────────────┐
                      ▼                         ▼
               ┌─────────────┐          ┌──────────────┐
               │   MongoDB   │          │  ClickHouse  │
               └─────────────┘          └──────────────┘
```
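The diagram above does not show how the service attaches to its stores. Below is a minimal bootstrap sketch, assuming the official Node.js clients (`mongodb`, `@elastic/elasticsearch`, `@clickhouse/client`) and the connection variables listed under Configuration; the actual wiring in the service may differ.

```javascript
// Hypothetical bootstrap sketch - client libraries and store roles are assumptions.
const { MongoClient } = require('mongodb');
const { Client: ElasticClient } = require('@elastic/elasticsearch');
const { createClient: createClickHouse } = require('@clickhouse/client');

async function connectStores() {
  // Document store for metric/report/alert definitions
  const mongo = new MongoClient(process.env.MONGODB_URI);
  await mongo.connect();

  // Event search and log storage
  const elastic = new ElasticClient({ node: process.env.ELASTICSEARCH_NODE });

  // Column store for time-series aggregations
  // (newer @clickhouse/client versions take `url`; older ones take `host`)
  const clickhouse = createClickHouse({ url: process.env.CLICKHOUSE_HOST });

  return { mongo, elastic, clickhouse };
}

module.exports = { connectStores };
```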
## API Endpoints

### Events
- `POST /api/events` - Track single event
- `POST /api/events/batch` - Track multiple events (see the example below)
- `GET /api/events/search` - Search events
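As a hedged illustration (not taken from the service code), the batch endpoint can be called directly with Node 18+ global `fetch`, assuming the default port 3004 and the JWT authentication described under Security; the payload envelope and token variable are assumptions.

```javascript
// Illustrative only - the request envelope and token variable are assumptions.
const events = [
  { eventType: 'message', eventName: 'message_sent', properties: { campaignId: '123' } },
  { eventType: 'message', eventName: 'message_delivered', properties: { campaignId: '123' } }
];

const res = await fetch('http://localhost:3004/api/events/batch', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.ANALYTICS_API_TOKEN}` // assumed variable name
  },
  body: JSON.stringify({ events }) // envelope key is an assumption
});

if (!res.ok) throw new Error(`Batch tracking failed: ${res.status}`);
```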
### Metrics
- `GET /api/metrics` - List all metrics
- `POST /api/metrics` - Create custom metric
- `GET /api/metrics/:id` - Get metric details
- `GET /api/metrics/:id/data` - Get metric data (see the example below)
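A similarly hedged sketch for reading metric data, using the same assumed base URL and token as above; the `from`/`to` query parameters are illustrative guesses, not documented parameters.

```javascript
// Illustrative only - the time-range query parameters are assumptions.
const metricId = 'message_delivery_rate'; // placeholder id
const query = new URLSearchParams({
  from: '2024-01-01T00:00:00Z',
  to: '2024-01-31T23:59:59Z'
});

const res = await fetch(
  `http://localhost:3004/api/metrics/${metricId}/data?${query}`,
  { headers: { Authorization: `Bearer ${process.env.ANALYTICS_API_TOKEN}` } }
);
const series = await res.json();
console.log(series);
```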
### Reports
- `GET /api/reports` - List reports
- `POST /api/reports` - Generate report
- `GET /api/reports/:id` - Get report details
- `GET /api/reports/:id/download` - Download report

### Alerts
- `GET /api/alerts` - List alerts
- `POST /api/alerts` - Create alert
- `PUT /api/alerts/:id` - Update alert
- `DELETE /api/alerts/:id` - Delete alert
- `GET /api/alerts/:id/history` - Get alert history

### Real-time
- `WS /ws/analytics` - Real-time analytics stream (see the example below)
- `WS /ws/metrics/:id` - Metric-specific stream
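A minimal consumer sketch for the analytics stream, assuming a browser or Node 22+ global `WebSocket` (older Node versions would need a package such as `ws`) and JSON-encoded messages; the message shape is not documented here and is assumed.

```javascript
// Illustrative only - the payload format of the stream is an assumption.
const ws = new WebSocket('ws://localhost:3004/ws/analytics');

ws.onopen = () => console.log('analytics stream connected');

ws.onmessage = (msg) => {
  const update = JSON.parse(msg.data); // assumed JSON payload
  console.log(update);
};

ws.onerror = (err) => console.error('analytics stream error', err);
```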
## Data Models

### Event Schema
```javascript
{
  eventId: String,
  accountId: String,
  sessionId: String,
  eventType: String,
  eventName: String,
  timestamp: Date,
  properties: Object,
  context: {
    ip: String,
    userAgent: String,
    locale: String
  }
}
```

### Metric Schema
```javascript
{
  metricId: String,
  accountId: String,
  name: String,
  type: String, // 'counter', 'gauge', 'histogram'
  unit: String,
  formula: String,
  aggregation: String,
  filters: Array,
  dimensions: Array
}
```

### Report Schema
```javascript
{
  reportId: String,
  accountId: String,
  name: String,
  template: String,
  schedule: String,
  parameters: Object,
  recipients: Array,
  format: String
}
```

### Alert Schema
```javascript
{
  alertId: String,
  accountId: String,
  name: String,
  metric: String,
  condition: Object,
  threshold: Number,
  severity: String,
  channels: Array,
  cooldown: Number
}
```
## Configuration

### Environment Variables
- `PORT` - Service port (default: 3004)
- `MONGODB_URI` - MongoDB connection string
- `ELASTICSEARCH_NODE` - Elasticsearch URL
- `CLICKHOUSE_HOST` - ClickHouse host
- `REDIS_URL` - Redis connection URL
- `RABBITMQ_URL` - RabbitMQ connection URL

### Storage Configuration
- `REPORTS_DIR` - Report storage directory
- `EXPORTS_DIR` - Export storage directory
- `RETENTION_DAYS` - Data retention period (days)

### Processing Configuration
- `BATCH_SIZE` - Event batch size
- `PROCESSING_INTERVAL` - Processing interval (ms)
- `STREAM_BUFFER_SIZE` - Real-time buffer size
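A minimal sketch of loading these settings at startup; apart from `PORT` (documented default 3004), the fallback values shown are assumptions, not documented defaults.

```javascript
// Hypothetical config loader - only PORT's default (3004) comes from this README.
const config = {
  port: Number(process.env.PORT) || 3004,
  mongodbUri: process.env.MONGODB_URI,
  elasticsearchNode: process.env.ELASTICSEARCH_NODE,
  clickhouseHost: process.env.CLICKHOUSE_HOST,
  redisUrl: process.env.REDIS_URL,
  rabbitmqUrl: process.env.RABBITMQ_URL,

  reportsDir: process.env.REPORTS_DIR || './storage/reports',   // assumed default
  exportsDir: process.env.EXPORTS_DIR || './storage/exports',   // assumed default
  retentionDays: Number(process.env.RETENTION_DAYS) || 90,      // assumed default

  batchSize: Number(process.env.BATCH_SIZE) || 500,                      // assumed default
  processingIntervalMs: Number(process.env.PROCESSING_INTERVAL) || 5000, // assumed default
  streamBufferSize: Number(process.env.STREAM_BUFFER_SIZE) || 1000       // assumed default
};

module.exports = config;
```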
## Deployment

### Docker
```bash
docker build -t analytics-service .
docker run -p 3004:3004 --env-file .env analytics-service
```

### Kubernetes
```bash
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
```
## Usage Examples

### Track Event
```javascript
const event = {
  eventType: 'message',
  eventName: 'message_sent',
  properties: {
    campaignId: '123',
    groupId: '456',
    messageType: 'text',
    charactersCount: 150
  }
};

await analyticsClient.trackEvent(event);
```

### Create Custom Metric
```javascript
const metric = {
  name: 'Message Delivery Rate',
  type: 'gauge',
  formula: '(delivered / sent) * 100',
  unit: 'percentage',
  aggregation: 'avg',
  dimensions: ['campaignId', 'groupId']
};

await analyticsClient.createMetric(metric);
```

### Generate Report
```javascript
const report = {
  template: 'campaign_performance',
  parameters: {
    campaignId: '123',
    dateRange: {
      start: '2024-01-01',
      end: '2024-01-31'
    }
  },
  format: 'pdf'
};

const result = await analyticsClient.generateReport(report);
```

### Create Alert
```javascript
const alert = {
  name: 'High Error Rate',
  metric: 'error_rate',
  condition: 'greater_than',
  threshold: 5,
  severity: 'critical',
  channels: ['email', 'slack']
};

await analyticsClient.createAlert(alert);
```
## Monitoring

### Health Check
```bash
curl http://localhost:3004/health
```

### Metrics
- Prometheus metrics available at `/metrics`
- Grafana dashboards included in `/dashboards`

### Logging
- Structured JSON logging
- Log levels: error, warn, info, debug
- Logs shipped to Elasticsearch
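The README does not name a logging library; as a sketch of the structured-JSON behaviour described above, assuming winston:

```javascript
// Hypothetical logger setup - winston is an assumption, not a documented dependency.
const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',        // error | warn | info | debug
  format: winston.format.json(),                 // structured JSON output
  defaultMeta: { service: 'analytics' },
  transports: [new winston.transports.Console()]
});

logger.info('report_generated', { reportId: 'abc123', durationMs: 1840 });
```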
## Development

### Setup
```bash
npm install
cp .env.example .env
npm run dev
```

### Testing
```bash
npm test
npm run test:integration
npm run test:e2e
```

### Code Quality
```bash
npm run lint
npm run format
npm run type-check
```
## Best Practices

1. **Event Tracking**
   - Use consistent event naming
   - Include relevant context
   - Batch events when possible
   - Validate event schema

2. **Metrics Design**
   - Keep metrics simple and focused
   - Use appropriate aggregations
   - Consider cardinality
   - Plan for scale

3. **Report Generation**
   - Schedule during off-peak hours
   - Use caching for common reports
   - Optimize queries
   - Monitor generation time

4. **Alert Configuration**
   - Set appropriate thresholds
   - Use severity levels wisely
   - Configure cooldown periods
   - Test alert channels

## Security

- JWT authentication for API access
- Field-level encryption for sensitive data
- Rate limiting per account
- Audit logging for all operations
- RBAC for multi-tenant access

## Performance

- Event ingestion: 10K events/second
- Query response: <100ms p99
- Report generation: <30s for standard reports
- Real-time latency: <50ms

## Support

For issues and questions:
- Check the documentation
- Review common issues in the FAQ
- Contact the development team