feat: Introduce production Docker and CI/CD setup, removing internal documentation and agent workflows.

This commit is contained in:
syntaxbullet
2026-01-30 13:43:59 +01:00
parent 3a620a84c5
commit 1a2bbb011c
16 changed files with 613 additions and 896 deletions

View File

@@ -1,63 +0,0 @@
---
description: Converts conversational brain dumps into structured, metric-driven Markdown tickets in the ./tickets directory.
---
# WORKFLOW: PRAGMATIC ARCHITECT TICKET GENERATOR
## 1. High-Level Goal
Transform informal user "brain dumps" into high-precision, metric-driven engineering tickets stored as Markdown files in the `./tickets/` directory. The workflow enforces a quality gate via targeted inquiry before any file persistence occurs, ensuring all tasks are observable, measurable, and actionable.
## 2. Assumptions & Clarifications
- **Assumptions:** The agent has write access to the `./tickets/` and `/temp/` directories. The current date is accessible for naming conventions. "Metrics" refer to quantifiable constraints (latency, line counts, status codes).
- **Ambiguities:** If the user provides a second brain dump while a ticket is in progress, the agent will prioritize the current workflow until completion or explicit cancellation.
## 3. Stage Breakdown
### Stage 1: Discovery & Quality Gate
- **Stage Name:** Requirement Analysis
- **Purpose:** Analyze input for vagueness and enforce the "Quality Gate" by extracting metrics.
- **Inputs:** Raw user brain dump (text).
- **Actions:**
  1. Identify "Known Unknowns" (vague terms like "fast," "better," "clean").
  2. Formulate exactly three (3) targeted questions to convert vague goals into comparable metrics.
  3. Check the request for logical inconsistencies.
- **Outputs:** Three questions presented to the user.
- **Persistence Strategy:** Save the original brain dump and the three questions to `/temp/pending_ticket_state.json`.
### Stage 2: Drafting & Refinement
- **Stage Name:** Ticket Drafting
- **Purpose:** Synthesize the original dump and user answers into a structured Markdown draft.
- **Inputs:** User responses to the three questions; `/temp/pending_ticket_state.json`.
- **Actions:**
  1. Construct a Markdown draft using the provided template.
  2. Generate a slug-based filename: `YYYYMMDD-slug.md`.
  3. Present the draft and filename to the user for review.
- **Outputs:** Formatted Markdown text and suggested filename displayed in the chat.
- **Persistence Strategy:** Update `/temp/pending_ticket_state.json` with the full Markdown content and the proposed filename.
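Stage 2's filename rule can be sketched as a small helper. Only the `YYYYMMDD-slug.md` convention comes from the workflow; the function name and input shape are assumptions:

```typescript
// Sketch: derive a ticket filename from a title per the YYYYMMDD-slug.md rule.
// Uses the UTC date; a real implementation might prefer local time.
function ticketFilename(title: string, date: Date = new Date()): string {
  const ymd = date.toISOString().slice(0, 10).replace(/-/g, "");
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics into hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `${ymd}-${slug}.md`;
}
```

For example, a brain dump titled "Auth Refactor!" on 2024-05-20 would yield `20240520-auth-refactor.md`, matching the example in the Quality Bar.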
### Stage 3: Execution & Persistence
- **Stage Name:** Finalization
- **Purpose:** Commit the approved ticket to the permanent `./tickets/` directory.
- **Inputs:** User confirmation (e.g., "Go," "Approved"); `/temp/pending_ticket_state.json`.
- **Actions:**
  1. Write the finalized Markdown content to `./tickets/[filename]`.
  2. Delete the temporary state file in `/temp/`.
- **Outputs:** Confirmation message containing the relative path to the new file.
- **Persistence Strategy:** Permanent write to `./tickets/`.
## 4. Data & File Contracts
- **State File:** `/temp/pending_ticket_state.json`
- Schema: `{ "original_input": string, "questions": string[], "answers": string[], "draft_content": string, "filename": string, "step": integer }`
- **Output File:** `./tickets/YYYYMMDD-[slug].md`
- Format: Markdown
- Sections: `# Title`, `## Context`, `## Acceptance Criteria`, `## Suggested Affected Files`, `## Technical Constraints`.
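The state-file schema above can be enforced with a minimal runtime guard. This is a sketch; the field names come from the contract, while the function name is an assumption:

```typescript
// Shape of /temp/pending_ticket_state.json, per the contract above.
interface PendingTicketState {
  original_input: string;
  questions: string[];
  answers: string[];
  draft_content: string;
  filename: string;
  step: number;
}

// Minimal structural check before resuming a workflow from saved state.
function isPendingTicketState(v: unknown): v is PendingTicketState {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.original_input === "string" &&
    Array.isArray(o.questions) &&
    Array.isArray(o.answers) &&
    typeof o.draft_content === "string" &&
    typeof o.filename === "string" &&
    Number.isInteger(o.step)
  );
}
```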
## 5. Failure & Recovery Handling
- **Incomplete Inputs:** If the user fails to answer the 3 questions, the agent must politely restate that metrics are required for high-precision engineering and repeat the questions.
- **Inconsistencies:** If the user's answers contradict the original dump, the agent must flag the contradiction and ask for a tie-break before drafting.
- **Missing Directory:** If `./tickets/` does not exist during Stage 3, the agent must attempt to create it before writing the file.
## 6. Final Deliverable Specification
- **Format:** A valid Markdown file in the `./tickets/` folder.
- **Quality Bar:**
- Zero fluff in the Context section.
- All Acceptance Criteria must be binary (pass/fail) or metric-based.
- Filename must strictly follow `YYYYMMDD-slug.md` (e.g., `20240520-auth-refactor.md`).
- No "Status" or "Priority" fields.

View File

@@ -1,89 +0,0 @@
---
description: Analyzes the codebase to find dependencies and side effects related to a specific ticket.
---
# WORKFLOW: Dependency Architect & Blast Radius Analysis
## 1. High-Level Goal
Perform a deterministic "Blast Radius" analysis for a code change defined in a Jira/Linear-style ticket. The agent will identify direct consumers, side effects, and relevant test suites, then append a structured "Impact Analysis" section to the original ticket file to guide developers and ensure high-velocity execution without regressions.
## 2. Assumptions & Clarifications
- **Location:** Tickets are stored in the `./tickets/` directory as Markdown files.
- **Code Access:** The agent has full read access to the project root and subdirectories.
- **Scope:** Dependency tracing is limited to "one level deep" (direct imports/references) unless a global configuration or core database schema change is detected.
- **Ambiguity Handling:** If "Suggested Affected Files" are missing from the ticket, the agent will attempt to infer them from the "Acceptance Criteria" logic; if inference is impossible, the agent will halt and request the file list.
## 3. Stage Breakdown
### Stage 1: Ticket Parsing & Context Extraction
- **Purpose:** Extract the specific files and logic constraints requiring analysis.
- **Inputs:** A specific ticket filename (e.g., `./tickets/TASK-123.md`).
- **Actions:**
1. Read the ticket file.
2. Extract the list of "Suggested Affected Files".
3. Extract keywords and logic from the "Acceptance Criteria".
4. Validate that all "Suggested Affected Files" exist in the current codebase.
- **Outputs:** A JSON object containing the target file list and key logic requirements.
- **Persistence Strategy:** Save extracted data to `/temp/context.json`.
### Stage 2: Recursive Dependency Mapping
- **Purpose:** Identify which external modules rely on the target files.
- **Inputs:** `/temp/context.json`.
- **Actions:**
1. For each file in the target list, perform a search (e.g., `grep` or AST walk) for import statements or references in the rest of the codebase.
2. Filter out internal references within the same module (focus on external consumers).
3. Detect if the change involves shared utilities (e.g., `utils/`, `common/`) or database schemas (e.g., `prisma/schema.prisma`).
- **Outputs:** A list of unique consumer file paths and their specific usage context.
- **Persistence Strategy:** Save findings to `/temp/dependencies.json`.
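Stage 2's `grep`-based fallback might look like the following self-contained sketch; the sandbox files and the `utils/auth` target module are purely illustrative:

```shell
#!/bin/sh
set -eu
# Build a tiny sandbox: one external consumer and one unrelated file.
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
printf 'import { login } from "../utils/auth";\n' > "$tmp/src/consumer.ts"
printf 'const unrelated = 1;\n' > "$tmp/src/other.ts"
# Stage 2 style search: which *.ts files reference the target module?
consumers=$(grep -rl --include='*.ts' 'utils/auth' "$tmp/src")
echo "$consumers"
```

An AST walk gives more precise results (it can distinguish imports from string literals), which is why the workflow treats `grep` as the fallback rather than the primary tool.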
### Stage 3: Test Suite Identification
- **Purpose:** Locate the specific test files required to validate the change.
- **Inputs:** `/temp/context.json` and `/temp/dependencies.json`.
- **Actions:**
1. Search for files following patterns: `[filename].test.ts`, `[filename].spec.js`, or within `__tests__` folders related to affected files.
2. Identify integration or E2E tests that cover the consumer paths identified in Stage 2.
- **Outputs:** A list of relevant test file paths.
- **Persistence Strategy:** Save findings to `/temp/tests.json`.
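Stage 3's pattern search can be sketched with `find`; the file names below are illustrative fixtures, not project files:

```shell
#!/bin/sh
set -eu
# Fixture tree with a source file, a unit test, and a __tests__ spec.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/__tests__"
touch "$tmp/src/auth.ts" "$tmp/src/auth.test.ts" "$tmp/src/__tests__/flow.spec.js"
# Stage 3 style discovery: *.test.ts, *.spec.js, or anything under __tests__/
tests=$(find "$tmp" -type f \( -name '*.test.ts' -o -name '*.spec.js' -o -path '*/__tests__/*' \))
echo "$tests"
```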
### Stage 4: Risk Hotspot Synthesis
- **Purpose:** Interpret raw dependency data into actionable risk warnings.
- **Inputs:** All files in `/temp/`.
- **Actions:**
1. Analyze the volume of consumers; if a file has >5 consumers, flag it as a "High Impact Hotspot."
2. Check for breaking contract changes (e.g., interface modifications) based on the "Acceptance Criteria".
3. Formulate specific "Risk Hotspot" warnings (e.g., "Changing Auth interface affects 12 files; consider a wrapper.").
- **Outputs:** A structured Markdown-ready report object.
- **Persistence Strategy:** Save final report data to `/temp/final_analysis.json`.
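The ">5 consumers" rule from Stage 4 is mechanical enough to sketch directly. The threshold comes from the workflow; the type and function names are assumptions:

```typescript
// A consumer entry as described in the dependency contract.
interface Consumer { path: string; context: string }

// Flag any file whose external consumer count exceeds the hotspot threshold.
function riskHotspots(consumersByFile: Record<string, Consumer[]>): string[] {
  return Object.entries(consumersByFile)
    .filter(([, consumers]) => consumers.length > 5)
    .map(([file, consumers]) =>
      `${file}: High Impact Hotspot (${consumers.length} consumers)`);
}
```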
### Stage 5: Ticket Augmentation & Finalization
- **Purpose:** Update the physical ticket file with findings.
- **Inputs:** Original ticket file and `/temp/final_analysis.json`.
- **Actions:**
1. Read the current content of the ticket file.
2. Generate a Markdown section titled `## Impact Analysis (Generated: [current date])`.
3. Append the Direct Consumers, Test Coverage, and Risk Hotspots sections.
4. Write the combined content back to the original file path.
- **Outputs:** Updated Markdown ticket.
- **Persistence Strategy:** None (Final Action).
## 4. Data & File Contracts
- **State Files:** The stage outputs in `/temp/` (`context.json`, `dependencies.json`, `tests.json`, `final_analysis.json`) collectively carry:
  - `affected_files`: string[]
  - `consumers`: { path: string, context: string }[]
  - `tests`: string[]
  - `risks`: string[]
- **File Format:** All `/temp` files must be valid JSON.
- **Ticket Format:** Standard Markdown. Use `###` for sub-headers in the generated section.
## 5. Failure & Recovery Handling
- **Missing Ticket:** If the ticket path is invalid, exit immediately with error: "TICKET_NOT_FOUND".
- **Zero Consumers Found:** If no external consumers are found, state "No external dependencies detected" in the report; do not fail.
- **Broken Imports:** If AST parsing fails due to syntax errors in the codebase, fall back to `grep` for string-based matching.
- **Write Permission:** If the ticket file is read-only, output the final Markdown to the console and provide a warning.
## 6. Final Deliverable Specification
- **Format:** The original ticket file must be modified in-place.
- **Content:**
- **Direct Consumers:** Bulleted list of `[File Path]: [Usage description]`.
- **Test Coverage:** Bulleted list of `[File Path]`.
- **Risk Hotspots:** Clear, one-sentence warnings for high-risk areas.
- **Quality Bar:** No hallucinations. Every file path listed must exist in the repository. No deletions of original ticket content.

View File

@@ -1,72 +0,0 @@
---
description: Performs a high-intensity, "hostile" technical audit of the provided code.
---
# WORKFLOW: HOSTILE TECHNICAL AUDIT & SECURITY REVIEW
## 1. High-Level Goal
Execute a multi-pass, hyper-critical technical audit of provided source code to identify fatal logic flaws, security vulnerabilities, and architectural debt. The agent acts as a hostile reviewer with a "guilty until proven innocent" mindset, aiming to justify a REJECTED verdict unless the code demonstrates exceptional robustness and simplicity.
## 2. Assumptions & Clarifications
- **Assumption:** The user will provide either raw code snippets or paths to files within the agent's accessible environment.
- **Assumption:** The agent has access to `/temp/` for multi-stage state persistence.
- **Clarification:** If a "ticket description" or "requirement" is not provided, the agent will infer intent from the code but must flag "Lack of Context" as a potential risk.
- **Clarification:** "Hostile" refers to a rigorous, zero-tolerance standard, not unprofessional language.
## 3. Stage Breakdown
### Stage 1: Contextual Ingestion & Dependency Mapping
- **Purpose:** Map the attack surface and understand the logical flow before the audit.
- **Inputs:** Target source code files.
- **Actions:**
  - Identify all external dependencies and entry points.
  - Map data flow from input to storage/output.
  - Identify "High-Risk Zones" (e.g., auth logic, DB queries, memory management).
- **Outputs:** A structured map of the code's architecture.
- **Persistence Strategy:** Save `audit_map.json` to `/temp/` containing the file list and identified High-Risk Zones.
### Stage 2: Security & Logic Stress Test (The "Hostile" Pass)
- **Purpose:** Identify reasons to reject the code based on security and logical integrity.
- **Inputs:** `/temp/audit_map.json` and source code.
- **Actions:**
- Scan for injection, race conditions, and improper state handling.
- Simulate edge cases: null inputs, buffer overflows, and malformed data.
- Evaluate "Silent Failures": Does the code swallow exceptions or fail to log critical errors?
- **Outputs:** List of fatal flaws and security risks.
- **Persistence Strategy:** Save `vulnerabilities.json` to `/temp/`.
### Stage 3: Performance & Velocity Debt Assessment
- **Purpose:** Evaluate the "Pragmatic Performance" and maintainability of the implementation.
- **Inputs:** Source code and `/temp/vulnerabilities.json`.
- **Actions:**
- Identify redundant API calls or unnecessary allocations.
- Flag "Over-Engineering" (unnecessary abstractions) vs. "Lazy Code" (hardcoded values).
- Identify missing unit test scenarios for identified edge cases.
- **Outputs:** List of optimization debt and missing test scenarios.
- **Persistence Strategy:** Save `debt_and_tests.json` to `/temp/`.
### Stage 4: Synthesis & Verdict Generation
- **Purpose:** Compile all findings into the final "Hostile Audit" report.
- **Inputs:** `/temp/vulnerabilities.json` and `/temp/debt_and_tests.json`.
- **Actions:**
- Consolidate all findings into the mandated "Response Format."
- Apply the "Burden of Proof" rule: If any Fatal Flaws or Security Risks exist, the verdict is REJECTED.
- Ensure no sycophantic language is present.
- **Outputs:** Final Audit Report.
- **Persistence Strategy:** Final output is delivered to the user; `/temp/` files may be purged.
## 4. Data & File Contracts
- **Filename:** `/temp/audit_map.json` | **Schema:** `{ "high_risk_zones": [], "entry_points": [] }`
- **Filenames:** `/temp/vulnerabilities.json` and `/temp/debt_and_tests.json` | **Combined schema:** `{ "fatal_flaws": [], "security_risks": [], "debt": [], "missing_tests": [] }`
- **Final Report Format:** Markdown with specific headers: `## 🛑 FATAL FLAWS`, `## ⚠️ SECURITY & VULNERABILITIES`, `## 📉 VELOCITY DEBT`, `## 🧪 MISSING TESTS`, and `### VERDICT`.
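The "Burden of Proof" rule in Stage 4 reduces to a simple predicate over the findings schema above. This is a sketch; the type and function names are assumptions:

```typescript
// Findings shape, matching the combined schema in the file contracts.
interface Findings {
  fatal_flaws: string[];
  security_risks: string[];
  debt: string[];
  missing_tests: string[];
}

// Burden of Proof: any fatal flaw or security risk forces REJECTED.
function verdict(f: Findings): "REJECTED" | "APPROVED" {
  return f.fatal_flaws.length > 0 || f.security_risks.length > 0
    ? "REJECTED"
    : "APPROVED";
}
```

Note that velocity debt and missing tests alone do not flip the verdict; they are reported but non-blocking under this rule.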
## 5. Failure & Recovery Handling
- **Incomplete Input:** If the code is snippet-based and missing context, the agent must assume the worst-case scenario for the missing parts and flag them as "Critical Unknowns."
- **Stage Failure:** If a specific file cannot be parsed, log the error in the `findings.json` and proceed with the remaining files.
- **Clarification:** The agent will NOT ask for clarification mid-audit. It will make a "hostile assumption" and document it as a risk.
## 6. Final Deliverable Specification
- **Tone:** Senior Security Auditor. Clinical, critical, and direct.
- **Acceptance Criteria:**
  - No "Good job" or introductory filler.
  - Every flaw must include [Why it fails] and [How to fix it].
  - Verdict must be REJECTED unless the code is "solid" (simple, robust, and secure).
  - Must identify at least one specific edge case for the "Missing Tests" section.

View File

@@ -1,99 +0,0 @@
---
description: Work on a ticket
---
# WORKFLOW: Automated Feature Implementation and Review Cycle
## 1. High-Level Goal
The objective of this workflow is to autonomously ingest a task from a local `/tickets` directory, establish a dedicated development environment via Git branching, implement the requested changes with incremental commits, validate the work through an internal review process, and finalize the lifecycle by cleaning up ticket artifacts and seeking user authorization for the final merge.
---
## 2. Assumptions & Clarifications
- **Assumptions:**
- The `/tickets` directory contains one or more files representing tasks (e.g., `.md` or `.txt`).
- The agent has authenticated access to the local Git repository.
- A "Review Workflow" exists as an executable command or internal process.
- The branch naming convention is `feature/[ticket-filename-slug]`.
- **Ambiguities:**
- If multiple tickets exist, the agent will select the one with the earliest "Last Modified" timestamp.
- "Regular commits" are defined as committing after every logically complete file change or functional milestone.
---
## 3. Stage Breakdown
### Stage 1: Ticket Selection and Branch Initialization
- **Purpose:** Identify the next task and prepare the workspace.
- **Inputs:** Contents of the `/tickets` directory.
- **Actions:**
1. Scan `/tickets` and select the oldest file.
2. Parse the ticket content to understand requirements.
3. Ensure the current working directory is a Git repository.
4. Create and switch to a new branch: `feature/[ticket-id]`.
- **Outputs:** Active feature branch.
- **Persistence Strategy:** Save `state.json` to `/temp` containing `ticket_path`, `branch_name`, and `start_time`.
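Stage 1's oldest-ticket selection and branch naming can be sketched as follows; the fixture tickets and timestamps are illustrative:

```shell
#!/bin/sh
set -eu
# Fixture: two tickets with different "Last Modified" timestamps.
tmp=$(mktemp -d)
mkdir "$tmp/tickets"
touch -t 202401010000 "$tmp/tickets/old-task.md"
touch -t 202406010000 "$tmp/tickets/new-task.md"
# ls -t sorts newest-first; -r reverses, so the oldest ticket comes first.
ticket=$(ls -tr "$tmp/tickets" | head -n 1)
branch="feature/$(basename "$ticket" .md)"
echo "$branch"
```

In the real workflow the next action would be `git checkout -b "$branch"`, omitted here so the sketch runs outside a repository.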
### Stage 2: Implementation and Incremental Committing
- **Purpose:** Execute the technical requirements of the ticket.
- **Inputs:** `/temp/state.json`, Ticket requirements.
- **Actions:**
1. Modify codebase according to requirements.
2. For every distinct file change or logical unit of work:
- Run basic syntax checks.
- Execute `git add [file]`.
- Execute `git commit -m "feat: [brief description of change]"`
3. Repeat until the feature is complete.
- **Outputs:** Committed code changes on the feature branch.
- **Persistence Strategy:** Update `state.json` with `implementation_complete: true` and a list of `modified_files`.
### Stage 3: Review Workflow Execution
- **Purpose:** Validate the implementation against quality standards.
- **Inputs:** `/temp/state.json`, Modified codebase.
- **Actions:**
1. Trigger the "Review Workflow" (static analysis, tests, or linter).
2. If errors are found:
- Log errors to `/temp/review_log.txt`.
- Re-enter Stage 2 to apply fixes and commit.
3. If review passes:
- Proceed to Stage 4.
- **Outputs:** Review results/logs.
- **Persistence Strategy:** Update `state.json` with `review_passed: true`.
### Stage 4: Cleanup and User Handoff
- **Purpose:** Finalize the ticket lifecycle and request merge permission.
- **Inputs:** `/temp/state.json`.
- **Actions:**
1. Delete the ticket file from `/tickets` using the path stored in `state.json`.
2. Format a summary of changes and a request for merge.
- **Outputs:** Deletion of the ticket file; user-facing summary.
- **Persistence Strategy:** Clear `/temp/state.json` upon successful completion.
---
## 4. Data & File Contracts
- **State File:** `/temp/state.json`
- Format: JSON
- Schema: `{ "ticket_path": string, "branch_name": string, "implementation_complete": boolean, "review_passed": boolean }`
- **Ticket Files:** Located in `/tickets/*` (Markdown or Plain Text).
- **Logs:** `/temp/review_log.txt` (Plain Text) for capturing stderr from review tools.
---
## 5. Failure & Recovery Handling
- **Empty Ticket Directory:** If no files are found in `/tickets`, the agent will output "NO_TICKETS_FOUND" and terminate the workflow.
- **Commit Failures:** If a commit fails (e.g., pre-commit hooks), the agent must resolve the hook violation before retrying the commit.
- **Review Failure Loop:** If the review fails more than 3 times for the same issue, the agent must halt and output a "BLOCKER_REPORT" detailing the persistent errors to the user.
- **State Recovery:** On context reset, the agent must check `/temp/state.json` to resume the workflow from the last recorded stage.
---
## 6. Final Deliverable Specification
- **Final Output:** A clear message to the user in the following format:
> **Task Completed:** [Ticket Name]
> **Branch:** [Branch Name]
> **Changes:** [Brief list of modified files]
> **Review Status:** Passed
> **Cleanup:** Ticket file removed from /tickets.
> **Action Required:** Would you like me to merge [Branch Name] into `main`? (Yes/No)
- **Quality Bar:** Code must be committed with descriptive messages; the ticket file must be successfully deleted; the workspace must be left on the feature branch awaiting the merge command.

View File

@@ -1,12 +1,26 @@
# =============================================================================
# Aurora Environment Configuration
# =============================================================================
# Copy this file to .env and update with your values
# For production, see .env.prod.example with security recommendations
# =============================================================================
# Database
# For production: use a strong password (openssl rand -base64 32)
DB_USER=aurora
DB_PASSWORD=aurora
DB_NAME=aurora
DB_PORT=5432
DB_HOST=db
DATABASE_URL=postgres://aurora:aurora@db:5432/aurora

# Discord
# Get from: https://discord.com/developers/applications
DISCORD_BOT_TOKEN=your-discord-bot-token
DISCORD_CLIENT_ID=your-discord-client-id
DISCORD_GUILD_ID=your-discord-guild-id

# Server (for remote access scripts)
# Use a non-root user (see shared/scripts/setup-server.sh)
VPS_USER=deploy
VPS_HOST=your-vps-ip

.env.prod.example Normal file
View File

@@ -0,0 +1,38 @@
# =============================================================================
# Aurora Production Environment Template
# =============================================================================
# Copy this file to .env and fill in the values
# IMPORTANT: Use strong, unique passwords in production!
# =============================================================================
# -----------------------------------------------------------------------------
# Database Configuration
# -----------------------------------------------------------------------------
# Generate strong password: openssl rand -base64 32
DB_USER=aurora_prod
DB_PASSWORD=CHANGE_ME_USE_STRONG_PASSWORD
DB_NAME=aurora_prod
DB_PORT=5432
DB_HOST=localhost
# Constructed database URL (used by Drizzle)
DATABASE_URL=postgres://${DB_USER}:${DB_PASSWORD}@localhost:${DB_PORT}/${DB_NAME}
# -----------------------------------------------------------------------------
# Discord Configuration
# -----------------------------------------------------------------------------
# Get these from Discord Developer Portal: https://discord.com/developers
DISCORD_BOT_TOKEN=your_bot_token_here
DISCORD_CLIENT_ID=your_client_id_here
DISCORD_GUILD_ID=your_guild_id_here
# -----------------------------------------------------------------------------
# Server Configuration (for SSH deployment scripts)
# -----------------------------------------------------------------------------
# Use a non-root user for security!
VPS_USER=deploy
VPS_HOST=your_server_ip_here
# Optional: Custom ports for remote access
# DASHBOARD_PORT=3000
# STUDIO_PORT=4983
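The template's password advice can be exercised directly. This assumes `openssl` is installed, as the comment in the file itself suggests:

```shell
#!/bin/sh
set -eu
# Generate a strong random password, as recommended for DB_PASSWORD above.
pw=$(openssl rand -base64 32)
echo "$pw"
```

Base64-encoding 32 random bytes yields a 44-character string, comfortably beyond typical password-strength minimums.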

.github/workflows/deploy.yml vendored Normal file
View File

@@ -0,0 +1,136 @@
# Aurora CI/CD Pipeline
# Builds, tests, and deploys to production server
name: Deploy to Production
on:
push:
branches: [main]
workflow_dispatch: # Allow manual trigger
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
# ==========================================================================
# Test Job
# ==========================================================================
test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Install Dependencies
run: bun install --frozen-lockfile
- name: Run Tests
run: bun test
# ==========================================================================
# Build Job
# ==========================================================================
build:
runs-on: ubuntu-latest
needs: test
permissions:
contents: read
packages: write
outputs:
image_tag: ${{ steps.meta.outputs.tags }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=sha,prefix=
type=raw,value=latest
- name: Build and Push Docker Image
uses: docker/build-push-action@v5
with:
context: .
file: ./Dockerfile.prod
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
# ==========================================================================
# Deploy Job
# ==========================================================================
deploy:
runs-on: ubuntu-latest
needs: build
environment: production
steps:
- name: Deploy to Production Server
uses: appleboy/ssh-action@v1.0.3
with:
host: ${{ secrets.VPS_HOST }}
username: ${{ secrets.VPS_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
cd ~/Aurora
# Pull latest code
git pull origin main
# Pull latest Docker image
docker compose -f docker-compose.prod.yml pull 2>/dev/null || true
# Build and restart containers
docker compose -f docker-compose.prod.yml build --no-cache
docker compose -f docker-compose.prod.yml down
docker compose -f docker-compose.prod.yml up -d
# Wait for health checks
sleep 15
# Verify deployment
docker ps | grep aurora
# Cleanup old images
docker image prune -f
- name: Verify Deployment
uses: appleboy/ssh-action@v1.0.3
with:
host: ${{ secrets.VPS_HOST }}
username: ${{ secrets.VPS_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
# Check if app container is healthy
if docker ps | grep -q "aurora_app.*healthy"; then
echo "✅ Deployment successful - aurora_app is healthy"
exit 0
else
echo "⚠️ Health check pending, checking container status..."
docker ps | grep aurora
docker logs aurora_app --tail 20
exit 0
fi

Dockerfile.prod Normal file
View File

@@ -0,0 +1,54 @@
# =============================================================================
# Stage 1: Dependencies & Build
# =============================================================================
FROM oven/bun:latest AS builder
WORKDIR /app
# Install system dependencies needed for build
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install root project dependencies
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile
# Install web project dependencies
COPY web/package.json web/bun.lock ./web/
RUN cd web && bun install --frozen-lockfile
# Copy source code
COPY . .
# Build web assets for production
RUN cd web && bun run build
# =============================================================================
# Stage 2: Production Runtime
# =============================================================================
FROM oven/bun:latest AS production
WORKDIR /app
# Create non-root user for security
RUN groupadd --system appgroup && useradd --system --gid appgroup appuser
# Copy only what's needed for production
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appgroup /app/web/node_modules ./web/node_modules
COPY --from=builder --chown=appuser:appgroup /app/web/dist ./web/dist
COPY --from=builder --chown=appuser:appgroup /app/bot ./bot
COPY --from=builder --chown=appuser:appgroup /app/shared ./shared
COPY --from=builder --chown=appuser:appgroup /app/package.json .
COPY --from=builder --chown=appuser:appgroup /app/drizzle.config.ts .
COPY --from=builder --chown=appuser:appgroup /app/tsconfig.json .
# Switch to non-root user
USER appuser
# Expose web dashboard port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
CMD bun -e "fetch('http://localhost:3000/api/health').then(r => r.ok ? process.exit(0) : process.exit(1)).catch(() => process.exit(1))"
# Run in production mode
CMD ["bun", "run", "bot/index.ts"]

docker-compose.prod.yml Normal file
View File

@@ -0,0 +1,78 @@
# Production Docker Compose Configuration
# Usage: docker compose -f docker-compose.prod.yml up -d
#
# IMPORTANT: Database data is preserved in ./shared/db/data volume
services:
db:
image: postgres:17-alpine
container_name: aurora_db
restart: unless-stopped
environment:
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
- POSTGRES_DB=${DB_NAME}
volumes:
# Database data - persisted across container rebuilds
- ./shared/db/data:/var/lib/postgresql/data
- ./shared/db/log:/var/log/postgresql
networks:
- internal
healthcheck:
test: [ "CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}" ]
interval: 10s
timeout: 5s
retries: 5
# Security: limit resources
deploy:
resources:
limits:
memory: 512M
app:
container_name: aurora_app
restart: unless-stopped
build:
context: .
dockerfile: Dockerfile.prod
target: production
image: aurora-app:latest
ports:
- "127.0.0.1:3000:3000"
# NO source code volumes - production image is self-contained
environment:
- NODE_ENV=production
- HOST=0.0.0.0
- DB_USER=${DB_USER}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME}
- DB_PORT=5432
- DB_HOST=db
- DISCORD_BOT_TOKEN=${DISCORD_BOT_TOKEN}
- DISCORD_GUILD_ID=${DISCORD_GUILD_ID}
- DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
- DATABASE_URL=postgresql://${DB_USER}:${DB_PASSWORD}@db:5432/${DB_NAME}
depends_on:
db:
condition: service_healthy
networks:
- internal
- web
# Security: limit resources
deploy:
resources:
limits:
memory: 1G
# Logging configuration
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
networks:
internal:
driver: bridge
internal: true # No external access - DB isolated
web:
driver: bridge # App accessible from host (via reverse proxy)

View File

@@ -1,63 +0,0 @@
# Command Reference
This document lists all available slash commands in Aurora, categorized by their function.
## Economy
| Command | Description | Options | Permissions |
|---|---|---|---|
| `/balance` | View your or another user's balance. | `user` (Optional): The user to check. | Everyone |
| `/daily` | Claim your daily currency reward and streak bonus. | None | Everyone |
| `/pay` | Transfer currency to another user. | `user` (Required): Recipient.<br>`amount` (Required): Amount to send. | Everyone |
| `/trade` | Start a trade session with another user. | `user` (Required): The user to trade with. | Everyone |
| `/exam` | Take your weekly exam to earn rewards based on XP gain. | None | Everyone |
## Inventory & Items
| Command | Description | Options | Permissions |
|---|---|---|---|
| `/inventory` | View your or another user's inventory. | `user` (Optional): The user to check. | Everyone |
| `/use` | Use an item from your inventory. | `item` (Required): The item to use (Autocomplete). | Everyone |
## User & Social
| Command | Description | Options | Permissions |
|---|---|---|---|
| `/profile` | View your or another user's Student ID card. | `user` (Optional): The user to view. | Everyone |
| `/leaderboard` | View top players. | `type` (Required): 'Level / XP' or 'Balance'. | Everyone |
| `/feedback` | Submit feedback, bug reports, or suggestions. | None | Everyone |
| `/quests` | View your active quests. | None | Everyone |
## Admin
> [!IMPORTANT]
> These commands require Administrator permissions or specific roles as configured.
### General Management
| Command | Description | Options |
|---|---|---|
| `/config` | Manage bot configuration. | `group` (Req): Section.<br>`key` (Req): Setting.<br>`value` (Req): New value. |
| `/refresh` | Refresh commands or configuration cache. | `type`: 'Commands' or 'Config'. |
| `/update` | Update the bot from the repository. | None |
| `/features` | Enable/Disable system features. | `feature` (Req): Feature name.<br>`enabled` (Req): True/False. |
| `/webhook` | Send a message via webhook. | `payload` (Req): JSON payload. |
### Moderation
| Command | Description | Options |
|---|---|---|
| `/warn` | Warn a user. | `user` (Req): Target.<br>`reason` (Req): Reason. |
| `/warnings` | View active warnings for a user. | `user` (Req): Target. |
| `/clearwarning`| Clear a specific warning. | `case_id` (Req): Case ID. |
| `/case` | View details of a specific moderation case. | `case_id` (Req): Case ID. |
| `/cases` | View moderation history for a user. | `user` (Req): Target. |
| `/note` | Add a note to a user. | `user` (Req): Target.<br>`note` (Req): Content. |
| `/notes` | View notes for a user. | `user` (Req): Target. |
| `/prune` | Bulk delete messages. | `amount` (Req): Number (1-100). |
### Game Admin
| Command | Description | Options |
|---|---|---|
| `/create_item` | Create a new item in the database. | (Modal based interaction) |
| `/create_color` | Create a new color role. | `name` (Req): Role name.<br>`hex` (Req): Hex color code. |
| `/listing` | Manage shop listings (Admin view). | None (Context sensitive?) |
| `/terminal` | Control the terminal display channel. | `action`: 'setup', 'update', 'clear'. |

# Configuration Guide
This document outlines the structure and available options for the `config/config.json` file. The configuration is validated using Zod schemas at runtime (see `src/lib/config.ts`).
## Core Structure
### Leveling
Configuration for the XP and leveling system.
| Field | Type | Description |
|-------|------|-------------|
| `base` | `number` | The base XP required for the first level. |
| `exponent` | `number` | The exponent used to calculate XP curves. |
| `chat.cooldownMs` | `number` | Time in milliseconds between XP gains from chat. |
| `chat.minXp` | `number` | Minimum XP awarded per message. |
| `chat.maxXp` | `number` | Maximum XP awarded per message. |
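Read together, `base` and `exponent` define a power curve. A minimal sketch of one plausible formula (the exact curve lives in the leveling service; `xpForLevel` and the `Math.floor` rounding are assumptions for illustration):

```typescript
// Hypothetical XP curve: total XP required for a level is assumed to be
// base * level^exponent, rounded down. Verify against the leveling service.
function xpForLevel(base: number, exponent: number, level: number): number {
  return Math.floor(base * Math.pow(level, exponent));
}

// With the example config below (base: 100, exponent: 1.5):
xpForLevel(100, 1.5, 1); // 100
xpForLevel(100, 1.5, 4); // 800
```

A larger `exponent` steepens the curve, so high levels cost disproportionately more XP.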
### Economy
Settings for currency, rewards, and transfers.
#### Daily
| Field | Type | Description |
|-------|------|-------------|
| `amount` | `integer` | Base amount granted by `/daily`. |
| `streakBonus` | `integer` | Bonus amount per streak day. |
| `weeklyBonus` | `integer` | Bonus amount for a 7-day streak. |
| `cooldownMs` | `number` | Cooldown period for the command (usually 24h). |
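One plausible reading of how these fields combine (the actual payout logic lives in the economy service; `dailyPayout` and the every-seventh-day rule for `weeklyBonus` are assumptions for illustration):

```typescript
// Hypothetical /daily payout: base amount, plus a per-day streak bonus,
// plus the weekly bonus on every 7th consecutive day. Assumed, not verified.
function dailyPayout(
  amount: number,
  streakBonus: number,
  weeklyBonus: number,
  streak: number,
): number {
  const weekly = streak > 0 && streak % 7 === 0 ? weeklyBonus : 0;
  return amount + streakBonus * streak + weekly;
}

dailyPayout(100, 10, 500, 3); // 130
dailyPayout(100, 10, 500, 7); // 670
```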
#### Transfers
| Field | Type | Description |
|-------|------|-------------|
| `allowSelfTransfer` | `boolean` | Whether users can transfer money to themselves. |
| `minAmount` | `integer` | Minimum amount required for a transfer. |
#### Exam
| Field | Type | Description |
|-------|------|-------------|
| `multMin` | `number` | Minimum multiplier for exam rewards. |
| `multMax` | `number` | Maximum multiplier for exam rewards. |
### Inventory
| Field | Type | Description |
|-------|------|-------------|
| `maxStackSize` | `integer` | Maximum count of a single item in one slot. |
| `maxSlots` | `number` | Total number of inventory slots available. |
### Lootdrop
Settings for the random chat loot drop events.
| Field | Type | Description |
|-------|------|-------------|
| `activityWindowMs` | `number` | Time window to track activity for spawning drops. |
| `minMessages` | `number` | Minimum messages required in window to trigger drop. |
| `spawnChance` | `number` | Probability (0-1) of a drop spawning when conditions met. |
| `cooldownMs` | `number` | Minimum time between loot drops. |
| `reward.min` | `number` | Minimum currency reward. |
| `reward.max` | `number` | Maximum currency reward. |
| `reward.currency` | `string` | The currency ID/Symbol used for rewards. |
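Read together, these fields suggest a spawn gate like the following (a sketch only; `shouldSpawnDrop` is an invented name, and the real service also enforces `cooldownMs` between drops):

```typescript
interface LootdropConfig {
  minMessages: number;
  spawnChance: number; // probability in [0, 1]
}

// Hypothetical spawn check: enough messages inside the activity window,
// then a single weighted coin flip. `roll` is injectable for testing.
function shouldSpawnDrop(
  messagesInWindow: number,
  cfg: LootdropConfig,
  roll: number = Math.random(),
): boolean {
  return messagesInWindow >= cfg.minMessages && roll < cfg.spawnChance;
}

const cfg = { minMessages: 10, spawnChance: 0.05 };
shouldSpawnDrop(12, cfg, 0.01); // true
shouldSpawnDrop(5, cfg, 0.01);  // false
```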
### Roles
| Field | Type | Description |
|-------|------|-------------|
| `studentRole` | `string` | Discord Role ID for students. |
| `visitorRole` | `string` | Discord Role ID for visitors. |
| `colorRoles` | `string[]` | List of Discord Role IDs available as color roles. |
### Moderation
Automated moderation settings.
#### Prune
| Field | Type | Description |
|-------|------|-------------|
| `maxAmount` | `number` | Maximum messages to delete in one go. |
| `confirmThreshold` | `number` | Amount above which confirmation is required. |
| `batchSize` | `number` | Size of delete batches. |
| `batchDelayMs` | `number` | Delay between batches. |
#### Cases
| Field | Type | Description |
|-------|------|-------------|
| `dmOnWarn` | `boolean` | Whether to DM users when they are warned. |
| `logChannelId` | `string` | (Optional) Channel ID for moderation logs. |
| `autoTimeoutThreshold` | `number` | (Optional) Warn count to trigger auto-timeout. |
### System & Misc
| Field | Type | Description |
|-------|------|-------------|
| `commands` | `Object` | Map of command names (keys) to boolean (values) to enable/disable them. |
| `welcomeChannelId` | `string` | (Optional) Channel ID for welcome messages. |
| `welcomeMessage` | `string` | (Optional) Custom welcome message text. |
| `feedbackChannelId` | `string` | (Optional) Channel ID where feedback is posted. |
| `terminal.channelId` | `string` | (Optional) Channel ID for terminal display. |
| `terminal.messageId` | `string` | (Optional) Message ID for terminal display. |
## Example Config
```json
{
"leveling": {
"base": 100,
"exponent": 1.5,
"chat": {
"cooldownMs": 60000,
"minXp": 15,
"maxXp": 25
}
},
"economy": {
"daily": {
"amount": "100",
"streakBonus": "10",
"weeklyBonus": "500",
"cooldownMs": 86400000
},
"transfers": {
"allowSelfTransfer": false,
"minAmount": "10"
},
"exam": {
"multMin": 1.0,
"multMax": 2.0
}
},
"inventory": {
"maxStackSize": "99",
"maxSlots": 20
},
"lootdrop": {
"activityWindowMs": 300000,
"minMessages": 10,
"spawnChance": 0.05,
"cooldownMs": 3600000,
"reward": {
"min": 50,
"max": 150,
"currency": "CREDITS"
}
},
"commands": {
"example": true
},
"studentRole": "123456789012345678",
"visitorRole": "123456789012345678",
"colorRoles": [],
"moderation": {
"prune": {
"maxAmount": 100,
"confirmThreshold": 50,
"batchSize": 100,
"batchDelayMs": 1000
},
"cases": {
"dmOnWarn": true
}
}
}
```
> [!NOTE]
> Fields typed as `integer` or `bigint` can be provided as strings in the JSON (as in the example above) to preserve precision for large values; the config loader parses them automatically.
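For illustration, such values can be coerced to `bigint` regardless of representation (a sketch; the actual parsing is handled by the Zod schemas in `src/lib/config.ts`, and `toPreciseInt` is an invented name):

```typescript
// Accepts "100", 100, or 100n and normalizes to bigint.
// Note: BigInt(number) throws on non-integers, which surfaces bad config early.
function toPreciseInt(value: string | number | bigint): bigint {
  return BigInt(value);
}

toPreciseInt("100") === toPreciseInt(100); // true
```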

# Database Schema
This document outlines the database schema for the Aurora project. The database is PostgreSQL, managed via Drizzle ORM.
## Tables
### Users (`users`)
Stores user data, economy, and progression.
| Column | Type | Description |
|---|---|---|
| `id` | `bigint` | Primary Key. Discord User ID. |
| `class_id` | `bigint` | Foreign Key -> `classes.id`. |
| `username` | `varchar(255)` | User's Discord username. |
| `is_active` | `boolean` | Whether the user is active (default: true). |
| `balance` | `bigint` | User's currency balance. |
| `xp` | `bigint` | User's experience points. |
| `level` | `integer` | User's level. |
| `daily_streak` | `integer` | Current streak of daily command usage. |
| `settings` | `jsonb` | User-specific settings. |
| `created_at` | `timestamp` | Record creation time. |
| `updated_at` | `timestamp` | Last update time. |
### Classes (`classes`)
Available character classes.
| Column | Type | Description |
|---|---|---|
| `id` | `bigint` | Primary Key. Custom ID. |
| `name` | `varchar(255)` | Class name (Unique). |
| `balance` | `bigint` | Class bank balance (shared/flavor). |
| `role_id` | `varchar(255)` | Discord Role ID associated with the class. |
### Items (`items`)
Definitions of items available in the game.
| Column | Type | Description |
|---|---|---|
| `id` | `serial` | Primary Key. Auto-incrementing ID. |
| `name` | `varchar(255)` | Item name (Unique). |
| `description` | `text` | Item description. |
| `rarity` | `varchar(20)` | Common, Rare, etc. Default: 'Common'. |
| `type` | `varchar(50)` | MATERIAL, CONSUMABLE, EQUIPMENT, etc. |
| `usage_data` | `jsonb` | Effect data for consumables/usables. |
| `price` | `bigint` | Base value of the item. |
| `icon_url` | `text` | URL for the item's icon. |
| `image_url` | `text` | URL for the item's large image. |
### Inventory (`inventory`)
Items held by users.
| Column | Type | Description |
|---|---|---|
| `user_id` | `bigint` | PK/FK -> `users.id`. |
| `item_id` | `integer` | PK/FK -> `items.id`. |
| `quantity` | `bigint` | Amount held. Must be > 0. |
### Transactions (`transactions`)
Currency transaction history.
| Column | Type | Description |
|---|---|---|
| `id` | `bigserial` | Primary Key. |
| `user_id` | `bigint` | FK -> `users.id`. The user affecting the balance. |
| `related_user_id` | `bigint` | FK -> `users.id`. The other party (if any). |
| `amount` | `bigint` | Amount transferred. |
| `type` | `varchar(50)` | Transaction type identifier. |
| `description` | `text` | Human-readable description. |
| `created_at` | `timestamp` | Time of transaction. |
### Item Transactions (`item_transactions`)
Item flow history.
| Column | Type | Description |
|---|---|---|
| `id` | `bigserial` | Primary Key. |
| `user_id` | `bigint` | FK -> `users.id`. |
| `related_user_id` | `bigint` | FK -> `users.id`. |
| `item_id` | `integer` | FK -> `items.id`. |
| `quantity` | `bigint` | Amount gained (+) or lost (-). |
| `type` | `varchar(50)` | TRADE, SHOP_BUY, DROP, etc. |
| `description` | `text` | Description. |
| `created_at` | `timestamp` | Time of transaction. |
### Quests (`quests`)
Quest definitions.
| Column | Type | Description |
|---|---|---|
| `id` | `serial` | Primary Key. |
| `name` | `varchar(255)` | Quest title. |
| `description` | `text` | Quest text. |
| `trigger_event` | `varchar(50)` | Event that triggers progress checks. |
| `requirements` | `jsonb` | Completion criteria. |
| `rewards` | `jsonb` | Rewards for completion. |
### User Quests (`user_quests`)
User progress on quests.
| Column | Type | Description |
|---|---|---|
| `user_id` | `bigint` | PK/FK -> `users.id`. |
| `quest_id` | `integer` | PK/FK -> `quests.id`. |
| `progress` | `integer` | Current progress value. |
| `completed_at` | `timestamp` | Completion time (null if active). |
### User Timers (`user_timers`)
Generic timers for cooldowns, temporary effects, etc.
| Column | Type | Description |
|---|---|---|
| `user_id` | `bigint` | PK/FK -> `users.id`. |
| `type` | `varchar(50)` | PK. Timer type (COOLDOWN, EFFECT, ACCESS). |
| `key` | `varchar(100)` | PK. Specific identifier (e.g. 'daily'). |
| `expires_at` | `timestamp` | When the timer expires. |
| `metadata` | `jsonb` | Extra data. |
### Lootdrops (`lootdrops`)
Active chat loot drop events.
| Column | Type | Description |
|---|---|---|
| `message_id` | `varchar(255)` | Primary Key. Discord Message ID. |
| `channel_id` | `varchar(255)` | Discord Channel ID. |
| `reward_amount` | `integer` | Currency amount. |
| `currency` | `varchar(50)` | Currency type constant. |
| `claimed_by` | `bigint` | FK -> `users.id`. Null if unclaimed. |
| `created_at` | `timestamp` | Spawn time. |
| `expires_at` | `timestamp` | Despawn time. |
### Moderation Cases (`moderation_cases`)
History of moderation actions.
| Column | Type | Description |
|---|---|---|
| `id` | `bigserial` | Primary Key. |
| `case_id` | `varchar(50)` | Unique friendly ID. |
| `type` | `varchar(20)` | warn, timeout, kick, ban, etc. |
| `user_id` | `bigint` | Target user ID. |
| `username` | `varchar(255)` | Target username snapshot. |
| `moderator_id` | `bigint` | Acting moderator ID. |
| `moderator_name` | `varchar(255)` | Moderator username snapshot. |
| `reason` | `text` | Reason for action. |
| `metadata` | `jsonb` | Extra data. |
| `active` | `boolean` | Is this case active? |
| `created_at` | `timestamp` | Creation time. |
| `resolved_at` | `timestamp` | Resolution/Expiration time. |
| `resolved_by` | `bigint` | User ID who resolved it. |
| `resolved_reason` | `text` | Reason for resolution. |

# Lootbox Creation Guide
Currently, the Item Wizard does not support creating **Lootbox** items directly. Instead, they must be inserted manually into the database. This guide details the required JSON structure for the `LOOTBOX` effect.
## Item Structure
To create a lootbox, you need to insert a row into the `items` table. The critical part is the `usageData` JSON column.
```json
{
"consume": true,
"effects": [
{
"type": "LOOTBOX",
"pool": [ ... ]
}
]
}
```
## Loot Table Structure
The `pool` property is an array of `LootTableItem` objects. One entry is selected at random, with probability proportional to its `weight` relative to the sum of all weights in the pool.
| Field | Type | Description |
|-------|------|-------------|
| `type` | `string` | One of: `CURRENCY`, `ITEM`, `XP`, `NOTHING`. |
| `weight` | `number` | The relative probability weight of this outcome. |
| `message` | `string` | (Optional) Custom message to display when this outcome is selected. |
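Weighted selection over such a pool can be sketched as follows (illustrative only; `rollLoot` is an invented name, and the deployed roller may differ):

```typescript
interface LootEntry {
  type: string;
  weight: number;
}

// Pick one entry with probability weight / totalWeight.
// `rand` in [0, 1) is injectable so rolls can be tested deterministically.
function rollLoot<T extends LootEntry>(pool: T[], rand: number = Math.random()): T {
  const total = pool.reduce((sum, entry) => sum + entry.weight, 0);
  let threshold = rand * total;
  for (const entry of pool) {
    threshold -= entry.weight;
    if (threshold < 0) return entry;
  }
  return pool[pool.length - 1]; // guard against floating-point edge cases
}

const pool = [
  { type: "CURRENCY", weight: 50 },
  { type: "XP", weight: 30 },
  { type: "NOTHING", weight: 20 },
];
rollLoot(pool, 0.25).type; // "CURRENCY"
rollLoot(pool, 0.95).type; // "NOTHING"
```

With `rand` left at its default, roughly half the rolls in this example land on the currency entry.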
### Outcome Types
#### 1. Currency
Gives the user coins.
```json
{
"type": "CURRENCY",
"weight": 50,
"amount": 100, // Fixed amount OR
"minAmount": 50, // Minimum random amount
"maxAmount": 150 // Maximum random amount
}
```
#### 2. XP
Gives the user Experience Points.
```json
{
"type": "XP",
"weight": 30,
"amount": 500 // Fixed amount OR range (minAmount/maxAmount)
}
```
#### 3. Item
Gives the user another item (by ID).
```json
{
"type": "ITEM",
"weight": 10,
"itemId": 42, // The ID of the item to give
"amount": 1 // (Optional) Quantity to give, default 1
}
```
#### 4. Nothing
An empty roll.
```json
{
"type": "NOTHING",
"weight": 10,
"message": "The box was empty! Better luck next time."
}
```
## Complete Example
Here is a complete example for a "Basic Lootbox" (insert the row with any SQL client or Drizzle Studio):
**Name**: Basic Lootbox
**Type**: CONSUMABLE
**Effect**:
- 50% chance for 100-200 Coins
- 30% chance for 500 XP
- 10% chance for Item ID 5 (e.g. Rare Gem)
- 10% chance for Nothing
**JSON for `usageData`**:
```json
{
"consume": true,
"effects": [
{
"type": "LOOTBOX",
"pool": [
{
"type": "CURRENCY",
"weight": 50,
"minAmount": 100,
"maxAmount": 200
},
{
"type": "XP",
"weight": 30,
"amount": 500
},
{
"type": "ITEM",
"weight": 10,
"itemId": 5,
"amount": 1,
"message": "Startstruck! You found a Rare Gem!"
},
{
"type": "NOTHING",
"weight": 10,
"message": "It's empty..."
}
]
}
]
}
```

# Aurora Module Structure Guide
This guide documents the standard module organization patterns used in the Aurora codebase. Following these patterns ensures consistency, maintainability, and clear separation of concerns.
## Module Anatomy
A typical module in `@modules/` is organized into several files, each with a specific responsibility.
Example: `trade` module
- `trade.service.ts`: Business logic and data access.
- `trade.view.ts`: Discord UI components (embeds, modals, select menus).
- `trade.interaction.ts`: Handler for interaction events (buttons, modals, etc.).
- `trade.types.ts`: TypeScript interfaces and types.
- `trade.service.test.ts`: Unit tests for the service logic.
## File Responsibilities
### 1. Service (`*.service.ts`)
The core of the module. It contains the business logic, database interactions (using Drizzle), and state management.
- **Rules**:
- Export a singleton instance: `export const tradeService = new TradeService();`
- Should not contain Discord-specific rendering logic (return data, not embeds).
- Throw `UserError` for validation issues that should be shown to the user.
### 2. View (`*.view.ts`)
Handles the creation of Discord-specific UI elements like `EmbedBuilder`, `ActionRowBuilder`, and `ModalBuilder`.
- **Rules**:
- Focus on formatting and presentation.
- Takes raw data (from services) and returns Discord components.
### 3. Interaction Handler (`*.interaction.ts`)
The entry point for Discord component interactions (buttons, select menus, modals).
- **Rules**:
- Export a single handler function: `export async function handleTradeInteraction(interaction: Interaction) { ... }`
- Routes internal `customId` patterns to specific logic.
- Relies on `ComponentInteractionHandler` for centralized error handling.
- **No local try-catch** for standard validation errors; let them bubble up as `UserError`.
### 4. Types (`*.types.ts`)
Central location for module-specific TypeScript types and constants.
- **Rules**:
- Define interfaces for complex data structures.
- Use enums or literal types for states and custom IDs.
## Interaction Routing
All interaction handlers must be registered in `src/lib/interaction.routes.ts`.
```typescript
{
predicate: (i) => i.customId.startsWith("module_"),
handler: () => import("@/modules/module/module.interaction"),
method: 'handleModuleInteraction'
}
```
## Error Handling Standards
Aurora uses a centralized error handling pattern in `ComponentInteractionHandler`.
1. **UserError**: Use this for validation errors or issues the user can fix (e.g., "Insufficient funds").
- `throw new UserError("You need more coins!");`
2. **SystemError / Generic Error**: Use this for unexpected system failures.
- These are logged to the console/logger and show a generic "Unexpected error" message to the user.
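The pattern can be sketched like this (a simplified stand-in for `ComponentInteractionHandler`; the real handler replies to the Discord interaction rather than returning a string):

```typescript
// Validation errors the user can act on; everything else is a system failure.
class UserError extends Error {}

async function runWithErrorHandling(
  action: () => Promise<string>,
): Promise<string> {
  try {
    return await action();
  } catch (err) {
    if (err instanceof UserError) {
      return `⚠️ ${err.message}`; // shown verbatim to the user
    }
    console.error(err); // logged; the user sees only a generic message
    return "An unexpected error occurred.";
  }
}

runWithErrorHandling(async () => {
  throw new UserError("You need more coins!");
}); // resolves to "⚠️ You need more coins!"
```

Because the handler discriminates on the error class, module code can simply `throw new UserError(...)` at any depth without local try-catch blocks.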
## Naming Conventions
- **Directory Name**: Lowercase, singular (e.g., `trade`, `inventory`).
- **File Names**: `moduleName.type.ts` (e.g., `trade.service.ts`).
- **Class Names**: PascalCase (e.g., `TradeService`).
- **Service Instances**: camelCase (e.g., `tradeService`).
- **Interaction Method**: `handle[ModuleName]Interaction`.

shared/scripts/deploy.sh Normal file
#!/bin/bash
# =============================================================================
# Aurora Production Deployment Script
# =============================================================================
# Run this script to deploy the latest version of Aurora
# Usage: bash deploy.sh
# =============================================================================
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
echo -e "${GREEN}╔══════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║ Aurora Deployment Script ║${NC}"
echo -e "${GREEN}╚══════════════════════════════════════════╝${NC}"
echo ""
cd "$PROJECT_DIR"
# =============================================================================
# Pre-flight Checks
# =============================================================================
echo -e "${YELLOW}[1/5] Running pre-flight checks...${NC}"
# Check if .env exists
if [ ! -f .env ]; then
echo -e "${RED}Error: .env file not found${NC}"
exit 1
fi
# Load environment variables (used below for database backup defaults)
set -a
source .env
set +a
# Check if Docker is running
if ! docker info &>/dev/null; then
echo -e "${RED}Error: Docker is not running${NC}"
exit 1
fi
echo -e " ${GREEN}${NC} Pre-flight checks passed"
# =============================================================================
# Backup Database (optional but recommended)
# =============================================================================
echo -e "${YELLOW}[2/5] Creating database backup...${NC}"
BACKUP_DIR="$PROJECT_DIR/shared/db/backups"
mkdir -p "$BACKUP_DIR"
if docker ps | grep -q aurora_db; then
BACKUP_FILE="$BACKUP_DIR/backup_$(date +%Y%m%d_%H%M%S).sql"
docker exec aurora_db pg_dump -U "${DB_USER:-auroradev}" "${DB_NAME:-auroradev}" > "$BACKUP_FILE" 2>/dev/null || true
if [ -f "$BACKUP_FILE" ] && [ -s "$BACKUP_FILE" ]; then
echo -e " ${GREEN}${NC} Database backed up to: $BACKUP_FILE"
else
echo -e " ${YELLOW}${NC} Database backup skipped (container not running or empty)"
rm -f "$BACKUP_FILE"
fi
else
echo -e " ${YELLOW}${NC} Database backup skipped (container not running)"
fi
# =============================================================================
# Pull Latest Code (if using git)
# =============================================================================
echo -e "${YELLOW}[3/5] Pulling latest code...${NC}"
if [ -d .git ]; then
    git pull origin main 2>/dev/null || git pull origin master 2>/dev/null || echo " Skipping git pull"
    echo -e " ${GREEN}✓${NC} Code updated"
else
    echo -e " ${YELLOW}⚠${NC} Not a git repository, skipping pull"
fi
# =============================================================================
# Build and Deploy
# =============================================================================
echo -e "${YELLOW}[4/5] Building and deploying containers...${NC}"
# Build the new image
docker compose -f docker-compose.prod.yml build --no-cache
# Stop and remove old containers, start new ones
docker compose -f docker-compose.prod.yml down
docker compose -f docker-compose.prod.yml up -d
echo -e " ${GREEN}${NC} Containers deployed"
# =============================================================================
# Health Check
# =============================================================================
echo -e "${YELLOW}[5/5] Waiting for health checks...${NC}"
sleep 10
# Check container status
if docker ps | grep -q "aurora_app.*healthy"; then
echo -e " ${GREEN}${NC} aurora_app is healthy"
else
echo -e " ${YELLOW}${NC} aurora_app health check pending (may take up to 60s)"
fi
if docker ps | grep -q "aurora_db.*healthy"; then
echo -e " ${GREEN}${NC} aurora_db is healthy"
else
echo -e " ${YELLOW}${NC} aurora_db health check pending"
fi
# =============================================================================
# Cleanup
# =============================================================================
echo ""
echo -e "${YELLOW}Cleaning up old Docker images...${NC}"
docker image prune -f
# =============================================================================
# Summary
# =============================================================================
echo ""
echo -e "${GREEN}╔══════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║ Deployment Complete! 🚀 ║${NC}"
echo -e "${GREEN}╚══════════════════════════════════════════╝${NC}"
echo ""
echo -e "Container Status:"
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep aurora || true # don't let set -e abort if nothing matches
echo ""
echo -e "View logs with: ${YELLOW}docker logs -f aurora_app${NC}"

#!/bin/bash
# =============================================================================
# Server Setup Script for Aurora Production Deployment
# =============================================================================
# Run this script ONCE on a fresh server to configure security settings.
# Usage: sudo bash setup-server.sh
# =============================================================================
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
echo -e "${GREEN}╔══════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║ Aurora Server Security Setup Script ║${NC}"
echo -e "${GREEN}╚══════════════════════════════════════════╝${NC}"
echo ""
# Check if running as root
if [ "$EUID" -ne 0 ]; then
echo -e "${RED}Error: Please run as root (sudo)${NC}"
exit 1
fi
# =============================================================================
# 1. Create Deploy User
# =============================================================================
echo -e "${YELLOW}[1/5] Creating deploy user...${NC}"
DEPLOY_USER="deploy"
if id "$DEPLOY_USER" &>/dev/null; then
echo -e " User '$DEPLOY_USER' already exists, skipping..."
else
adduser --disabled-password --gecos "" $DEPLOY_USER
echo -e " ${GREEN}${NC} Created user '$DEPLOY_USER'"
fi
# Add to docker group
# Ensure the docker group exists, then add the deploy user to it
getent group docker >/dev/null || groupadd docker
usermod -aG docker "$DEPLOY_USER"
echo -e " ${GREEN}✓${NC} Added '$DEPLOY_USER' to docker group"
# Add to sudo group (optional - remove if you don't want sudo access)
usermod -aG sudo "$DEPLOY_USER"
echo -e " ${GREEN}✓${NC} Added '$DEPLOY_USER' to sudo group"
# Copy SSH keys from root to deploy user
if [ -d /root/.ssh ]; then
mkdir -p /home/$DEPLOY_USER/.ssh
cp /root/.ssh/authorized_keys /home/$DEPLOY_USER/.ssh/ 2>/dev/null || true
chown -R $DEPLOY_USER:$DEPLOY_USER /home/$DEPLOY_USER/.ssh
chmod 700 /home/$DEPLOY_USER/.ssh
chmod 600 /home/$DEPLOY_USER/.ssh/authorized_keys 2>/dev/null || true
echo -e " ${GREEN}${NC} Copied SSH keys to '$DEPLOY_USER'"
fi
# =============================================================================
# 2. Configure UFW Firewall
# =============================================================================
echo -e "${YELLOW}[2/5] Configuring UFW firewall...${NC}"
apt-get update -qq
apt-get install -y -qq ufw
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
# Add more rules as needed:
# ufw allow 80/tcp # HTTP
# ufw allow 443/tcp # HTTPS
# Enable UFW (non-interactive)
echo "y" | ufw enable
echo -e " ${GREEN}${NC} UFW firewall enabled and configured"
# =============================================================================
# 3. Install and Configure Fail2ban
# =============================================================================
echo -e "${YELLOW}[3/5] Installing fail2ban...${NC}"
apt-get install -y -qq fail2ban
# Create local jail configuration
cat > /etc/fail2ban/jail.local << 'EOF'
[DEFAULT]
bantime = 1h
findtime = 10m
maxretry = 5
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 24h
EOF
systemctl enable fail2ban
systemctl restart fail2ban
echo -e " ${GREEN}${NC} Fail2ban installed and configured"
# =============================================================================
# 4. Harden SSH Configuration
# =============================================================================
echo -e "${YELLOW}[4/5] Hardening SSH configuration...${NC}"
SSHD_CONFIG="/etc/ssh/sshd_config"
# Backup original config
cp $SSHD_CONFIG ${SSHD_CONFIG}.backup
# Apply hardening settings
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' $SSHD_CONFIG
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' $SSHD_CONFIG
sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/' $SSHD_CONFIG
sed -i 's/^#\?X11Forwarding.*/X11Forwarding no/' $SSHD_CONFIG
sed -i 's/^#\?MaxAuthTries.*/MaxAuthTries 3/' $SSHD_CONFIG
# Validate SSH config before restarting
if sshd -t; then
    systemctl reload sshd
    echo -e " ${GREEN}✓${NC} SSH hardened (root login disabled, password auth disabled)"
else
    echo -e " ${RED}✗${NC} SSH config validation failed, restoring backup..."
    cp ${SSHD_CONFIG}.backup $SSHD_CONFIG
fi
# =============================================================================
# 5. System Updates
# =============================================================================
echo -e "${YELLOW}[5/5] Installing system updates...${NC}"
apt-get upgrade -y -qq
apt-get autoremove -y -qq
echo -e " ${GREEN}${NC} System updated"
# =============================================================================
# Summary
# =============================================================================
echo ""
echo -e "${GREEN}╔══════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║ Setup Complete! ║${NC}"
echo -e "${GREEN}╚══════════════════════════════════════════╝${NC}"
echo ""
echo -e "Next steps:"
echo -e " 1. Update your local .env file:"
echo -e " ${YELLOW}VPS_USER=deploy${NC}"
echo -e ""
echo -e " 2. Test SSH access with the new user:"
echo -e " ${YELLOW}ssh deploy@<your-server-ip>${NC}"
echo -e ""
echo -e " 3. Deploy the application:"
echo -e " ${YELLOW}cd /home/deploy/Aurora && docker compose -f docker-compose.prod.yml up -d${NC}"
echo ""
echo -e "${RED}⚠️ IMPORTANT: Test SSH access with 'deploy' user BEFORE logging out!${NC}"
echo -e "${RED} Keep this root session open until you confirm 'deploy' user works.${NC}"