# Logging Service

Centralized logging, monitoring, and alerting service for the Telegram Marketing Agent System.

## Features

- **Log Collection**: Centralized log collection from all microservices
- **Real-time Analysis**: Pattern detection and anomaly analysis
- **Alert Management**: Multi-channel alerting (Email, Slack, Webhook)
- **Dashboard**: Real-time monitoring dashboard
- **Log Storage**: Elasticsearch-based storage with retention policies
- **Performance Metrics**: System and application performance tracking
## Architecture

```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Microservices  │────▶│  Log Collector  │────▶│  Elasticsearch  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 │                        │
                                 ▼                        ▼
                        ┌─────────────────┐     ┌─────────────────┐
                        │  Log Analyzer   │────▶│  Alert Manager  │
                        └─────────────────┘     └─────────────────┘
                                                         │
                                                         ▼
                                                  ┌──────────────┐
                                                  │   Channels   │
                                                  ├──────────────┤
                                                  │    Email     │
                                                  │    Slack     │
                                                  │   Webhook    │
                                                  └──────────────┘
```
## Prerequisites

- Node.js 18+
- Elasticsearch 8.x
- Redis 6.x
## Installation

1. Install dependencies:

```bash
cd services/logging
npm install
```

2. Configure environment variables:

```bash
cp .env.example .env
# Edit .env with your configuration
```

3. Start Elasticsearch:

```bash
docker run -d --name elasticsearch \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  -p 9200:9200 \
  elasticsearch:8.12.0
```

4. Start Redis:

```bash
docker run -d --name redis \
  -p 6379:6379 \
  redis:latest
```

5. Start the service:

```bash
npm start
```
## Configuration

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| PORT | Service port | 3010 |
| ELASTICSEARCH_NODE | Elasticsearch URL | http://localhost:9200 |
| ELASTICSEARCH_USERNAME | Elasticsearch username | elastic |
| ELASTICSEARCH_PASSWORD | Elasticsearch password | changeme |
| REDIS_HOST | Redis host | localhost |
| REDIS_PORT | Redis port | 6379 |
| ALERT_EMAIL_ENABLED | Enable email alerts | false |
| ALERT_SLACK_ENABLED | Enable Slack alerts | false |
| ALERT_WEBHOOK_ENABLED | Enable webhook alerts | false |
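Putting the defaults together, a minimal `.env` might look like this (values are placeholders matching the defaults above — set real credentials in production):

```bash
PORT=3010
ELASTICSEARCH_NODE=http://localhost:9200
ELASTICSEARCH_USERNAME=elastic
ELASTICSEARCH_PASSWORD=changeme
REDIS_HOST=localhost
REDIS_PORT=6379
ALERT_EMAIL_ENABLED=false
ALERT_SLACK_ENABLED=false
ALERT_WEBHOOK_ENABLED=false
```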
### Alert Thresholds

Configure alert thresholds in `config/index.js`:

```javascript
alerts: {
  rules: {
    errorRate: {
      threshold: 0.05,  // 5% error rate
      window: 300000    // 5 minutes
    },
    responseTime: {
      threshold: 1000,  // 1 second
      window: 300000    // 5 minutes
    },
    systemResources: {
      cpu: 80,     // 80% CPU usage
      memory: 85,  // 85% memory usage
      disk: 90     // 90% disk usage
    }
  }
}
```
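To illustrate how a windowed rule such as `errorRate` is applied, here is a minimal sketch — the function and counter shape are illustrative, not the service's actual implementation:

```javascript
// Rule taken from the config above: alert when the error rate observed
// within the window exceeds the threshold.
const errorRateRule = { threshold: 0.05, window: 300000 };

// counters: { total, errors } collected during the last `rule.window` ms.
function shouldAlert(rule, counters) {
  if (counters.total === 0) return false;        // no traffic, no alert
  const rate = counters.errors / counters.total; // observed error rate
  return rate > rule.threshold;                  // fire when above threshold
}

console.log(shouldAlert(errorRateRule, { total: 1000, errors: 12 })); // 1.2% -> false
console.log(shouldAlert(errorRateRule, { total: 1000, errors: 80 })); // 8%   -> true
```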
## API Endpoints

### Log Management

- `GET /api/logs/search` - Search logs
- `GET /api/logs/stats` - Get log statistics
- `GET /api/logs/metrics` - Get aggregated metrics
- `GET /api/logs/stream` - Stream logs in real-time
- `DELETE /api/logs/cleanup` - Delete old logs

### Alert Management

- `GET /api/alerts/history` - Get alert history
- `GET /api/alerts/active` - Get active alerts
- `POST /api/alerts/:id/acknowledge` - Acknowledge an alert
- `DELETE /api/alerts/:id` - Clear an alert
- `POST /api/alerts/test` - Send test alert
- `GET /api/alerts/config` - Get alert configuration

### Dashboard

- `GET /api/dashboard/overview` - Get dashboard overview
- `GET /api/dashboard/health` - Get service health
- `GET /api/dashboard/trends` - Get performance trends
- `GET /api/dashboard/top-errors` - Get top errors
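As an example, a client could query `/api/logs/search` like this. The `service` and `level` query parameters mirror the ones the streaming endpoint accepts elsewhere in this README; any other parameters would be assumptions:

```javascript
// Build a search request URL for the endpoint above.
const params = new URLSearchParams({ service: 'api-gateway', level: 'error' });
const url = `http://localhost:3010/api/logs/search?${params}`;

console.log(url); // http://localhost:3010/api/logs/search?service=api-gateway&level=error

// With Node 18+ built-in fetch:
// const logs = await (await fetch(url)).json();
```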
## Log Format

### Application Logs

```json
{
  "@timestamp": "2024-01-15T10:30:00.000Z",
  "service": "api-gateway",
  "level": "error",
  "message": "Request failed",
  "userId": "user123",
  "requestId": "req-456",
  "method": "POST",
  "path": "/api/campaigns",
  "statusCode": 500,
  "responseTime": 234.5,
  "error": {
    "type": "ValidationError",
    "message": "Invalid campaign data",
    "stack": "..."
  }
}
```
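A small helper can keep entries consistent with this shape. This is an illustrative sketch, not part of the service:

```javascript
// Illustrative helper producing entries matching the application-log
// shape above; extra fields (userId, statusCode, error, ...) are spread in.
function makeLogEntry(service, level, message, fields = {}) {
  return {
    '@timestamp': new Date().toISOString(), // ISO 8601, as in the example
    service,
    level,
    message,
    ...fields,
  };
}

const entry = makeLogEntry('api-gateway', 'error', 'Request failed', {
  statusCode: 500,
  responseTime: 234.5,
});
console.log(entry.level, entry.statusCode); // error 500
```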
### Metrics

```json
{
  "@timestamp": "2024-01-15T10:30:00.000Z",
  "service": "api-gateway",
  "metric": "response_time",
  "value": 234.5,
  "unit": "ms",
  "dimensions": {
    "endpoint": "/api/campaigns",
    "method": "POST",
    "status": "success"
  }
}
```
## Integration

### Sending Logs from Services

1. Install the logging client:

```bash
npm install winston winston-elasticsearch
```

2. Configure the winston logger:

```javascript
import winston from 'winston';
import { ElasticsearchTransport } from 'winston-elasticsearch';

const logger = winston.createLogger({
  transports: [
    new ElasticsearchTransport({
      level: 'info',
      clientOpts: {
        // clientOpts is passed to the Elasticsearch client, so point it
        // at the Elasticsearch node, not the logging service itself
        node: 'http://localhost:9200',
        auth: { username: 'elastic', password: 'changeme' }
      },
      index: 'marketing-agent-logs'
    })
  ]
});
```
3. Send logs:

```javascript
logger.info('Campaign created', {
  service: 'campaign-service',
  userId: 'user123',
  campaignId: 'camp456',
  action: 'create'
});
```

### Real-time Log Streaming

```javascript
const eventSource = new EventSource('/api/logs/stream?service=api-gateway&level=error');

eventSource.onmessage = (event) => {
  const log = JSON.parse(event.data);
  console.log('New log:', log);
};
```
## Monitoring

### Health Check

```bash
curl http://localhost:3010/health
```

### Queue Statistics

```bash
curl http://localhost:3010/api/dashboard/overview
```

### Service Health

```bash
curl http://localhost:3010/api/dashboard/health
```
## Maintenance

### Log Retention

Logs are automatically deleted based on retention policies:

- Application logs: 30 days
- Metrics: 90 days
- Error logs: 60 days
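These retention windows translate into simple date cutoffs. A sketch of the idea (the policy table restates the list above; the function is illustrative, not the cleanup job's actual code):

```javascript
// Retention windows from the policies above, in days.
const retentionDays = { application: 30, metrics: 90, errors: 60 };

// Oldest @timestamp that should survive cleanup for a given log type.
function retentionCutoff(type, now = new Date()) {
  return new Date(now.getTime() - retentionDays[type] * 24 * 60 * 60 * 1000);
}

const cutoff = retentionCutoff('application', new Date('2024-01-31T00:00:00Z'));
console.log(cutoff.toISOString()); // 2024-01-01T00:00:00.000Z
```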
### Manual Cleanup

```bash
curl -X DELETE http://localhost:3010/api/logs/cleanup
```

### Index Management

The service automatically creates and manages Elasticsearch indices with:

- Lifecycle policies for automatic rollover
- Retention policies for automatic deletion
- Optimized mappings for different log types
## Troubleshooting

### Common Issues

1. **Elasticsearch Connection Failed**
   - Check Elasticsearch is running: `curl http://localhost:9200`
   - Verify credentials in the `.env` file
   - Check network connectivity

2. **High Memory Usage**
   - Adjust the batch size in config: `collection.batchSize`
   - Reduce the flush interval: `collection.flushInterval`
   - Check the Elasticsearch heap size

3. **Alerts Not Sending**
   - Verify the alert channel configuration
   - Check SMTP settings for email
   - Test webhook URL accessibility

### Debug Mode

Enable debug logging:

```bash
LOG_LEVEL=debug npm start
```
## Development

### Running Tests

```bash
npm test
```

### Development Mode

```bash
npm run dev
```

### Adding New Alert Channels

1. Create a channel handler in `services/alertManager.js`:

```javascript
async sendCustomAlert(alert) {
  // Implement channel logic
}
```

2. Add the handler to the alert sending pipeline
3. Update the configuration schema
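Step 2 might look like the following fan-out loop. The channel map, enable flags, and send functions are hypothetical examples, not the service's actual code:

```javascript
// Hypothetical channel registry: each entry has an enable flag and a
// send function, mirroring the Email/Slack/Webhook channels.
const channels = {
  email:   { enabled: false, send: async (a) => `email: ${a.message}` },
  slack:   { enabled: true,  send: async (a) => `slack: ${a.message}` },
  webhook: { enabled: true,  send: async (a) => `webhook: ${a.message}` },
};

// Deliver an alert on every enabled channel and collect the results.
async function dispatchAlert(alert) {
  const results = [];
  for (const channel of Object.values(channels)) {
    if (!channel.enabled) continue;          // honor per-channel enable flags
    results.push(await channel.send(alert));
  }
  return results;
}

dispatchAlert({ message: 'error rate above 5%' }).then(console.log);
// [ 'slack: error rate above 5%', 'webhook: error rate above 5%' ]
```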
## Performance Tuning

### Elasticsearch Optimization

- Increase shards for high-volume indices
- Configure index lifecycle management
- Use bulk operations for better throughput

### Redis Optimization

- Configure the maxmemory policy
- Use Redis clustering for high availability
- Monitor queue sizes

### Application Optimization

- Adjust batch sizes based on load
- Configure worker concurrency
- Use connection pooling
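Bulk operations and batch-size tuning both cut per-request overhead. A minimal sketch of the batching idea (illustrative, not the collector's actual code):

```javascript
// Split a stream of log entries into fixed-size batches so each batch can
// be sent as one bulk request instead of one request per entry.
function toBatches(entries, batchSize) {
  const batches = [];
  for (let i = 0; i < entries.length; i += batchSize) {
    batches.push(entries.slice(i, i + batchSize));
  }
  return batches;
}

console.log(toBatches(['a', 'b', 'c', 'd', 'e'], 2));
// [ [ 'a', 'b' ], [ 'c', 'd' ], [ 'e' ] ]
```

Larger batches amortize request overhead at the cost of memory and latency, which is why `collection.batchSize` is worth tuning per load profile.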