# Database Configuration

Alita Robot uses PostgreSQL as its primary database with GORM as the ORM layer. This guide covers database setup, migrations, connection pooling, and schema design.
## Requirements

- PostgreSQL 14 or higher
- UTF-8 encoding support
- Minimum 512MB RAM for the database
## Connection String

Configure the database URL in your environment:

```bash
# Format
DATABASE_URL=postgres://username:password@host:port/database?sslmode=disable

# Example (local development)
DATABASE_URL=postgres://postgres:password@localhost:5432/alita_robot?sslmode=disable

# Example (Docker Compose)
DATABASE_URL=postgresql://alita:alita@postgres:5432/alita

# Example (production with SSL)
DATABASE_URL=postgres://user:pass@db.example.com:5432/alita?sslmode=require
```

### SSL Modes
| Mode | Description |
|---|---|
| `disable` | No SSL (development only) |
| `require` | Require SSL but don't verify the certificate |
| `verify-ca` | Require SSL and verify the server certificate |
| `verify-full` | Require SSL, verify the certificate and hostname |
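Because the `sslmode` setting lives in the URL's query string, it can be inspected with nothing but the standard library. The sketch below (our own illustrative helper, not part of the bot) parses a `DATABASE_URL` and falls back to `require` when `sslmode` is absent, which matches the `lib/pq` driver default:

```go
package main

import (
	"fmt"
	"net/url"
)

// ParseDatabaseURL extracts the host, database name, and sslmode from a
// DATABASE_URL-style connection string.
func ParseDatabaseURL(raw string) (host, dbname, sslmode string, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", "", "", err
	}
	sslmode = u.Query().Get("sslmode")
	if sslmode == "" {
		sslmode = "require" // lib/pq requires SSL when sslmode is unset
	}
	return u.Host, u.Path[1:], sslmode, nil
}

func main() {
	host, db, ssl, err := ParseDatabaseURL(
		"postgres://postgres:password@localhost:5432/alita_robot?sslmode=disable")
	if err != nil {
		panic(err)
	}
	fmt.Println(host, db, ssl) // prints: localhost:5432 alita_robot disable
}
```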
## Automatic Migrations

Alita Robot supports automatic database migrations on startup, eliminating the need to run migration commands manually.
### Enabling Auto-Migration

```bash
# Enable automatic migrations
AUTO_MIGRATE=true

# Optional: Continue running even if migrations fail (not recommended for production)
AUTO_MIGRATE_SILENT_FAIL=false

# Optional: Custom migration directory
MIGRATIONS_PATH=migrations
```

### How Auto-Migration Works
- Migration Files: SQL migrations are stored in the `migrations/` directory
- Version Tracking: Applied migrations are tracked in the `schema_migrations` table
- Idempotent: Migrations are applied only once; safe to run multiple times
- Transactional: Each migration runs in a transaction for atomicity
- Auto-Cleaning: Supabase-specific SQL (GRANT statements, RLS policies) is automatically removed
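The ordered, track-and-skip behavior described above can be sketched in Go. This is an illustrative in-memory model, not the bot's actual implementation; the `applied` map stands in for the `schema_migrations` table:

```go
package main

import (
	"fmt"
	"sort"
)

// Migrator models idempotent migration application: filenames are applied
// in alphabetical order and recorded, so a second run skips them all.
type Migrator struct {
	applied map[string]bool // stands in for the schema_migrations table
}

func NewMigrator() *Migrator {
	return &Migrator{applied: make(map[string]bool)}
}

// Run applies pending migrations and returns (applied, skipped) counts.
func (m *Migrator) Run(files []string) (int, int) {
	sort.Strings(files) // alphabetical order by filename
	applied, skipped := 0, 0
	for _, f := range files {
		if m.applied[f] {
			skipped++
			continue
		}
		// A real implementation would execute the SQL inside a transaction here.
		m.applied[f] = true
		applied++
	}
	return applied, skipped
}

func main() {
	m := NewMigrator()
	files := []string{"20240301_add_notes.sql", "20240101_initial.sql"}
	a, s := m.Run(files)
	fmt.Println(a, s) // prints: 2 0  (first run applies both)
	a, s = m.Run(files)
	fmt.Println(a, s) // prints: 0 2  (second run skips both)
}
```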
### Migration Process

When `AUTO_MIGRATE=true`, the bot will:

- Check for pending migrations in `migrations/`
- Clean any Supabase-specific SQL commands automatically
- Apply migrations in alphabetical order (by filename)
- Track applied migrations in the `schema_migrations` table
- Log migration status and any errors
Example log output:

```
[Migrations] Starting automatic database migration...
[Migrations] Found 15 migration files
[Migrations] Applying 20240101_initial_schema.sql...
[Migrations] Successfully applied 20240101_initial_schema.sql
[Migrations] Migration complete - Applied: 5, Skipped: 10
```

## Manual Migration Commands

If you prefer manual control over migrations, use the Makefile targets:
### Apply Migrations

```bash
# Set required environment variables
export PSQL_DB_HOST=localhost
export PSQL_DB_NAME=alita
export PSQL_DB_USER=postgres
export PSQL_DB_PASSWORD=password
export PSQL_DB_PORT=5432  # Optional, defaults to 5432

# Apply all pending migrations
make psql-migrate
```

### Check Migration Status
```bash
make psql-status
```

Output:

```
          version           |        executed_at
----------------------------+----------------------------
 20240315_add_captcha.sql   | 2024-03-15 10:30:00.000000
 20240301_add_notes.sql     | 2024-03-01 09:15:00.000000
 20240101_initial.sql       | 2024-01-01 00:00:00.000000
```

### Reset Database (DANGEROUS)
This will drop all tables and recreate the schema:

```bash
make psql-reset
```

You will be prompted to confirm with `yes` before proceeding.
## Connection Pool Configuration

Optimize database performance with connection pooling:

```bash
# Maximum idle connections in the pool
# Default: 10, Recommended: 30-80 depending on deployment size
DB_MAX_IDLE_CONNS=50

# Maximum open connections to the database
# Default: 100, Recommended: 150-400 depending on deployment size
DB_MAX_OPEN_CONNS=200

# Maximum connection lifetime in minutes
# Default: 60, Recommended: 120-480 minutes
DB_CONN_MAX_LIFETIME_MIN=240

# Maximum idle time in minutes
# Default: 10, Recommended: 30-120 minutes
DB_CONN_MAX_IDLE_TIME_MIN=60
```

### Sizing Guidelines
| Deployment Size | MAX_IDLE_CONNS | MAX_OPEN_CONNS | Use Case |
|---|---|---|---|
| Small | 10-30 | 100 | < 50 groups |
| Medium | 30-50 | 200 | 50-500 groups |
| Large | 50-80 | 300-400 | 500+ groups |
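As a rough illustration, the sizing table can be encoded as a small helper that picks pool settings from the group count. The function and type names are our own, not part of the bot; the resulting values are what you would feed into `DB_MAX_IDLE_CONNS` / `DB_MAX_OPEN_CONNS` (or, in code, `sql.DB.SetMaxIdleConns` / `SetMaxOpenConns`):

```go
package main

import "fmt"

// PoolSettings holds the two connection-pool knobs from the table above.
type PoolSettings struct {
	MaxIdleConns int
	MaxOpenConns int
}

// RecommendedPool returns pool settings for a given number of groups,
// using the upper end of each range in the sizing guidelines.
func RecommendedPool(groups int) PoolSettings {
	switch {
	case groups < 50: // Small deployment
		return PoolSettings{MaxIdleConns: 30, MaxOpenConns: 100}
	case groups <= 500: // Medium deployment
		return PoolSettings{MaxIdleConns: 50, MaxOpenConns: 200}
	default: // Large deployment
		return PoolSettings{MaxIdleConns: 80, MaxOpenConns: 400}
	}
}

func main() {
	p := RecommendedPool(120)
	fmt.Println(p.MaxIdleConns, p.MaxOpenConns) // prints: 50 200
}
```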
## Schema Design Patterns

Alita Robot uses a surrogate key pattern for all database tables:

### Primary Keys

Each table has an auto-incremented `id` field as the primary key (internal identifier):

```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,          -- Internal ID
    user_id BIGINT NOT NULL UNIQUE, -- Telegram user ID
    username VARCHAR(255),
    name VARCHAR(255),
    language VARCHAR(10) DEFAULT 'en',
    created_at TIMESTAMP,
    updated_at TIMESTAMP
);
```

### Benefits
- Decoupling: Internal schema is independent of external systems (Telegram IDs)
- Stability: If external IDs change or new platforms are added, internal references remain stable
- Performance: Integer primary keys are faster for joins and indexing
- GORM Compatibility: Consistent integer primary keys simplify ORM operations
### Business Keys

External identifiers like `user_id` (Telegram user ID) and `chat_id` (Telegram chat ID) are stored with unique constraints:

```sql
user_id BIGINT NOT NULL UNIQUE -- Prevents duplicates
chat_id BIGINT NOT NULL UNIQUE -- Ensures one row per chat
```

### Exception: Join Tables
The `chat_users` join table uses a composite primary key:

```sql
CREATE TABLE chat_users (
    chat_id BIGINT NOT NULL,
    user_id BIGINT NOT NULL,
    PRIMARY KEY (chat_id, user_id)
);
```

## Database Tables
Alita Robot creates the following tables:
| Table | Purpose |
|---|---|
| `users` | Telegram user data |
| `chats` | Chat/group information |
| `chat_users` | User-chat membership |
| `warns_settings` | Warning configuration per chat |
| `warns_users` | User warning records |
| `greetings` | Welcome/goodbye messages |
| `filters` | Chat filters |
| `notes` | Saved notes |
| `notes_settings` | Notes configuration |
| `rules` | Chat rules |
| `blacklists` | Blacklisted words |
| `locks` | Lock settings |
| `pins` | Pin settings |
| `admin` | Admin settings |
| `antiflood_settings` | Anti-flood configuration |
| `connection` | Connection settings |
| `connection_settings` | Chat connection config |
| `disable` | Disabled commands |
| `disable_chat_settings` | Per-chat disable settings |
| `report_chat_settings` | Report configuration |
| `report_user_settings` | User report preferences |
| `devs` | Developer settings |
| `channels` | Linked channels |
| `captcha_settings` | Captcha configuration |
| `captcha_attempts` | Active captcha attempts |
| `captcha_muted_users` | Users muted due to captcha failure |
| `stored_messages` | Messages stored during captcha |
| `schema_migrations` | Migration tracking |
## Backup and Restore

### Backup

```bash
# Using pg_dump
pg_dump -h localhost -U postgres -d alita > backup.sql

# Compressed backup
pg_dump -h localhost -U postgres -d alita | gzip > backup.sql.gz

# Docker
docker-compose exec -T postgres pg_dump -U alita -d alita > backup.sql
```

### Restore
```bash
# From SQL file
psql -h localhost -U postgres -d alita < backup.sql

# From compressed file
gunzip -c backup.sql.gz | psql -h localhost -U postgres -d alita

# Docker
docker-compose exec -T postgres psql -U alita -d alita < backup.sql
```

## Troubleshooting

### Connection refused
```
Failed to connect to database: connection refused
```

- Verify PostgreSQL is running
- Check the host and port in `DATABASE_URL`
- Ensure the firewall allows connections on port 5432
### Authentication failed

```
password authentication failed for user
```

- Verify the username and password in the connection string
- Check `pg_hba.conf` authentication settings
### Too many connections

```
too many connections for role
```

- Reduce `DB_MAX_OPEN_CONNS`
- Increase PostgreSQL `max_connections` in `postgresql.conf`
- Consider using connection pooling (PgBouncer)
### Migration failed

```
Migration failed: column already exists
```

This usually means the migration was partially applied. Options:

- Set `AUTO_MIGRATE_SILENT_FAIL=true` and let the bot continue
- Manually fix the schema and mark the migration as applied:

```sql
INSERT INTO schema_migrations (version, executed_at)
VALUES ('problematic_migration.sql', NOW());
```
### Slow queries

Enable query logging in PostgreSQL:

```sql
ALTER SYSTEM SET log_min_duration_statement = 1000; -- Log queries > 1 second
SELECT pg_reload_conf();
```

Or enable debug mode in the bot:

```bash
DEBUG=true
```