forked from syntaxbullet/aurorabot

Compare commits: 10 commits (`3a620a84c5` ... `f8436e9755`)

| SHA1 |
|---|
| f8436e9755 |
| 194a032c7f |
| 94a5a183d0 |
| c7730b9355 |
| 1e20a5a7a0 |
| 54944283a3 |
| f79ee6fbc7 |
| 915f1bc4ad |
| 4af2690bab |
| 6e57ab07e4 |
@@ -1,63 +0,0 @@
---
description: Converts conversational brain dumps into structured, metric-driven Markdown tickets in the ./tickets directory.
---

# WORKFLOW: PRAGMATIC ARCHITECT TICKET GENERATOR

## 1. High-Level Goal

Transform informal user "brain dumps" into high-precision, metric-driven engineering tickets stored as Markdown files in the `./tickets/` directory. The workflow enforces a quality gate via targeted inquiry before any file persistence occurs, ensuring all tasks are observable, measurable, and actionable.

## 2. Assumptions & Clarifications

- **Assumptions:** The agent has write access to the `./tickets/` and `/temp/` directories. The current date is accessible for naming conventions. "Metrics" refer to quantifiable constraints (latency, line counts, status codes).
- **Ambiguities:** If the user provides a second brain dump while a ticket is in progress, the agent will prioritize the current workflow until completion or explicit cancellation.

## 3. Stage Breakdown

### Stage 1: Discovery & Quality Gate

- **Stage Name:** Requirement Analysis
- **Purpose:** Analyze input for vagueness and enforce the "Quality Gate" by extracting metrics.
- **Inputs:** Raw user brain dump (text).
- **Actions:**
  1. Identify "Known Unknowns" (vague terms like "fast," "better," "clean").
  2. Formulate exactly three (3) targeted questions to convert vague goals into comparable metrics.
  3. Check for logical inconsistencies in the request.
- **Outputs:** Three questions presented to the user.
- **Persistence Strategy:** Save the original brain dump and the three questions to `/temp/pending_ticket_state.json`.

### Stage 2: Drafting & Refinement

- **Stage Name:** Ticket Drafting
- **Purpose:** Synthesize the original dump and user answers into a structured Markdown draft.
- **Inputs:** User responses to the three questions; `/temp/pending_ticket_state.json`.
- **Actions:**
  1. Construct a Markdown draft using the provided template.
  2. Generate a slug-based filename: `YYYYMMDD-slug.md`.
  3. Present the draft and filename to the user for review.
- **Outputs:** Formatted Markdown text and suggested filename displayed in the chat.
- **Persistence Strategy:** Update `/temp/pending_ticket_state.json` with the full Markdown content and the proposed filename.
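The `YYYYMMDD-slug.md` convention from Stage 2, Action 2 can be sketched in TypeScript. This is a hypothetical helper; the workflow specifies the naming rule, not this implementation.

```typescript
// Hypothetical helper illustrating the YYYYMMDD-slug.md naming convention.
function ticketFilename(title: string, date: Date = new Date()): string {
  const yyyymmdd =
    date.getFullYear().toString() +
    String(date.getMonth() + 1).padStart(2, "0") +
    String(date.getDate()).padStart(2, "0");
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs into hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `${yyyymmdd}-${slug}.md`;
}

// e.g. ticketFilename("Auth Refactor!", new Date(2024, 4, 20)) → "20240520-auth-refactor.md"
```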

### Stage 3: Execution & Persistence

- **Stage Name:** Finalization
- **Purpose:** Commit the approved ticket to the permanent `./tickets/` directory.
- **Inputs:** User confirmation (e.g., "Go," "Approved"); `/temp/pending_ticket_state.json`.
- **Actions:**
  1. Write the finalized Markdown content to `./tickets/[filename]`.
  2. Delete the temporary state file in `/temp/`.
- **Outputs:** Confirmation message containing the relative path to the new file.
- **Persistence Strategy:** Permanent write to `./tickets/`.

## 4. Data & File Contracts

- **State File:** `/temp/pending_ticket_state.json`
  - Schema: `{ "original_input": string, "questions": string[], "answers": string[], "draft_content": string, "filename": string, "step": integer }`
- **Output File:** `./tickets/YYYYMMDD-[slug].md`
  - Format: Markdown
  - Sections: `# Title`, `## Context`, `## Acceptance Criteria`, `## Suggested Affected Files`, `## Technical Constraints`.
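As an illustration of the section contract above, a hypothetical renderer might assemble the mandated headings like this (the function and parameter names are assumptions, not part of the spec):

```typescript
// Hypothetical sketch: assemble a ticket body with the mandated sections.
// Acceptance criteria are rendered as binary (pass/fail) checkboxes.
function renderTicket(
  title: string,
  context: string,
  criteria: string[],
  affectedFiles: string[],
  constraints: string[]
): string {
  const bullets = (items: string[]) => items.map((i) => `- ${i}`).join("\n");
  return [
    `# ${title}`,
    `## Context`,
    context,
    `## Acceptance Criteria`,
    criteria.map((c) => `- [ ] ${c}`).join("\n"),
    `## Suggested Affected Files`,
    bullets(affectedFiles),
    `## Technical Constraints`,
    bullets(constraints),
  ].join("\n\n");
}
```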

## 5. Failure & Recovery Handling

- **Incomplete Inputs:** If the user fails to answer the 3 questions, the agent must politely restate that metrics are required for high-precision engineering and repeat the questions.
- **Inconsistencies:** If the user's answers contradict the original dump, the agent must flag the contradiction and ask for a tie-break before drafting.
- **Missing Directory:** If `./tickets/` does not exist during Stage 3, the agent must attempt to create it before writing the file.

## 6. Final Deliverable Specification

- **Format:** A valid Markdown file in the `./tickets/` folder.
- **Quality Bar:**
  - Zero fluff in the Context section.
  - All Acceptance Criteria must be binary (pass/fail) or metric-based.
  - Filename must strictly follow `YYYYMMDD-slug.md` (e.g., `20240520-auth-refactor.md`).
  - No "Status" or "Priority" fields.
@@ -1,89 +0,0 @@
---
description: Analyzes the codebase to find dependencies and side effects related to a specific ticket.
---

# WORKFLOW: Dependency Architect & Blast Radius Analysis

## 1. High-Level Goal

Perform a deterministic "Blast Radius" analysis for a code change defined in a Jira/Linear-style ticket. The agent will identify direct consumers, side effects, and relevant test suites, then append a structured "Impact Analysis" section to the original ticket file to guide developers and ensure high-velocity execution without regressions.

## 2. Assumptions & Clarifications

- **Location:** Tickets are stored in the `./tickets/` directory as Markdown files.
- **Code Access:** The agent has full read access to the project root and subdirectories.
- **Scope:** Dependency tracing is limited to "one level deep" (direct imports/references) unless a global configuration or core database schema change is detected.
- **Ambiguity Handling:** If "Suggested Affected Files" are missing from the ticket, the agent will attempt to infer them from the "Acceptance Criteria" logic; if inference is impossible, the agent will halt and request the file list.

## 3. Stage Breakdown

### Stage 1: Ticket Parsing & Context Extraction

- **Purpose:** Extract the specific files and logic constraints requiring analysis.
- **Inputs:** A specific ticket filename (e.g., `./tickets/TASK-123.md`).
- **Actions:**
  1. Read the ticket file.
  2. Extract the list of "Suggested Affected Files".
  3. Extract keywords and logic from the "Acceptance Criteria".
  4. Validate that all "Suggested Affected Files" exist in the current codebase.
- **Outputs:** A JSON object containing the target file list and key logic requirements.
- **Persistence Strategy:** Save extracted data to `/temp/context.json`.

### Stage 2: Recursive Dependency Mapping

- **Purpose:** Identify which external modules rely on the target files.
- **Inputs:** `/temp/context.json`.
- **Actions:**
  1. For each file in the target list, perform a search (e.g., `grep` or AST walk) for import statements or references in the rest of the codebase.
  2. Filter out internal references within the same module (focus on external consumers).
  3. Detect if the change involves shared utilities (e.g., `utils/`, `common/`) or database schemas (e.g., `prisma/schema.prisma`).
- **Outputs:** A list of unique consumer file paths and their specific usage context.
- **Persistence Strategy:** Save findings to `/temp/dependencies.json`.
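A minimal sketch of the Stage 2 consumer search, assuming a plain regex pass over already-read file contents (an AST walk would be more precise; all names here are illustrative):

```typescript
// Hypothetical sketch: find files whose import statements reference a module.
// `contents` maps file path → file text; a real run would read these from disk.
function findConsumers(
  contents: Record<string, string>,
  targetModule: string
): string[] {
  // Matches e.g. `import { x } from "@shared/lib/errors"` for target "lib/errors".
  const pattern = new RegExp(`from\\s+["'][^"']*${targetModule}["']`);
  return Object.entries(contents)
    .filter(([, text]) => pattern.test(text))
    .map(([path]) => path)
    .sort();
}
```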

### Stage 3: Test Suite Identification

- **Purpose:** Locate the specific test files required to validate the change.
- **Inputs:** `/temp/context.json` and `/temp/dependencies.json`.
- **Actions:**
  1. Search for files following patterns: `[filename].test.ts`, `[filename].spec.js`, or within `__tests__` folders related to affected files.
  2. Identify integration or E2E tests that cover the consumer paths identified in Stage 2.
- **Outputs:** A list of relevant test file paths.
- **Persistence Strategy:** Save findings to `/temp/tests.json`.
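The search patterns in Action 1 can be expanded into concrete candidate paths; a hypothetical helper covering only the three patterns named above:

```typescript
// Hypothetical sketch: derive candidate test-file paths for a source file,
// following the patterns named in Stage 3.
function candidateTestPaths(sourcePath: string): string[] {
  const dot = sourcePath.lastIndexOf(".");
  const base = sourcePath.slice(0, dot); // "src/exam" from "src/exam.ts"
  const ext = sourcePath.slice(dot);     // ".ts"
  const dir = sourcePath.slice(0, sourcePath.lastIndexOf("/") + 1);
  const file = sourcePath.slice(dir.length);
  return [
    `${base}.test${ext}`,
    `${base}.spec${ext}`,
    `${dir}__tests__/${file}`,
  ];
}
```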

### Stage 4: Risk Hotspot Synthesis

- **Purpose:** Interpret raw dependency data into actionable risk warnings.
- **Inputs:** All files in `/temp/`.
- **Actions:**
  1. Analyze the volume of consumers; if a file has >5 consumers, flag it as a "High Impact Hotspot."
  2. Check for breaking contract changes (e.g., interface modifications) based on the "Acceptance Criteria".
  3. Formulate specific "Risk Hotspot" warnings (e.g., "Changing Auth interface affects 12 files; consider a wrapper.").
- **Outputs:** A structured Markdown-ready report object.
- **Persistence Strategy:** Save final report data to `/temp/final_analysis.json`.
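The >5-consumers rule from Action 1 reduces to a small predicate, sketched here with hypothetical names:

```typescript
// Hypothetical sketch of the >5-consumers rule from Stage 4.
function riskHotspots(consumersByFile: Record<string, string[]>): string[] {
  return Object.entries(consumersByFile)
    .filter(([, consumers]) => consumers.length > 5)
    .map(
      ([file, consumers]) =>
        `${file} is a High Impact Hotspot: ${consumers.length} consumers; consider a wrapper.`
    );
}
```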

### Stage 5: Ticket Augmentation & Finalization

- **Purpose:** Update the physical ticket file with findings.
- **Inputs:** Original ticket file and `/temp/final_analysis.json`.
- **Actions:**
  1. Read the current content of the ticket file.
  2. Generate a Markdown section titled `## Impact Analysis (Generated: [current date])`.
  3. Append the Direct Consumers, Test Coverage, and Risk Hotspots sections.
  4. Write the combined content back to the original file path.
- **Outputs:** Updated Markdown ticket.
- **Persistence Strategy:** None (Final Action).
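The append step can be sketched as a pure function over the ticket text (hypothetical helper; note it only appends and never deletes original content, in line with the workflow's no-deletion rule):

```typescript
// Hypothetical sketch: append the generated Impact Analysis section
// to the existing ticket text without touching the original content.
function appendImpactAnalysis(
  ticket: string,
  generatedOn: string,
  body: string
): string {
  const section = `## Impact Analysis (Generated: ${generatedOn})\n\n${body}`;
  return `${ticket.trimEnd()}\n\n${section}\n`;
}
```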

## 4. Data & File Contracts

- **State File (`/temp/state.json`):**
  - `affected_files`: string[]
  - `consumers`: { path: string, context: string }[]
  - `tests`: string[]
  - `risks`: string[]
- **File Format:** All `/temp` files must be valid JSON.
- **Ticket Format:** Standard Markdown. Use `###` for sub-headers in the generated section.

## 5. Failure & Recovery Handling

- **Missing Ticket:** If the ticket path is invalid, exit immediately with the error "TICKET_NOT_FOUND".
- **Zero Consumers Found:** If no external consumers are found, state "No external dependencies detected" in the report; do not fail.
- **Broken Imports:** If AST parsing fails due to syntax errors in the codebase, fall back to `grep` for string-based matching.
- **Write Permission:** If the ticket file is read-only, output the final Markdown to the console and provide a warning.

## 6. Final Deliverable Specification

- **Format:** The original ticket file must be modified in-place.
- **Content:**
  - **Direct Consumers:** Bulleted list of `[File Path]: [Usage description]`.
  - **Test Coverage:** Bulleted list of `[File Path]`.
  - **Risk Hotspots:** Clear, one-sentence warnings for high-risk areas.
- **Quality Bar:** No hallucinations. Every file path listed must exist in the repository. No deletions of original ticket content.
@@ -1,72 +0,0 @@
---
description: Performs a high-intensity, "hostile" technical audit of the provided code.
---

# WORKFLOW: HOSTILE TECHNICAL AUDIT & SECURITY REVIEW

## 1. High-Level Goal

Execute a multi-pass, hyper-critical technical audit of provided source code to identify fatal logic flaws, security vulnerabilities, and architectural debt. The agent acts as a hostile reviewer with a "guilty until proven innocent" mindset, aiming to justify a REJECTED verdict unless the code demonstrates exceptional robustness and simplicity.

## 2. Assumptions & Clarifications

- **Assumption:** The user will provide either raw code snippets or paths to files within the agent's accessible environment.
- **Assumption:** The agent has access to `/temp/` for multi-stage state persistence.
- **Clarification:** If a "ticket description" or "requirement" is not provided, the agent will infer intent from the code but must flag "Lack of Context" as a potential risk.
- **Clarification:** "Hostile" refers to a rigorous, zero-tolerance standard, not unprofessional language.

## 3. Stage Breakdown

### Stage 1: Contextual Ingestion & Dependency Mapping

- **Purpose:** Map the attack surface and understand the logical flow before the audit.
- **Inputs:** Target source code files.
- **Actions:**
  - Identify all external dependencies and entry points.
  - Map data flow from input to storage/output.
  - Identify "High-Risk Zones" (e.g., auth logic, DB queries, memory management).
- **Outputs:** A structured map of the code's architecture.
- **Persistence Strategy:** Save `audit_map.json` to `/temp/` containing the file list and identified High-Risk Zones.

### Stage 2: Security & Logic Stress Test (The "Hostile" Pass)

- **Purpose:** Identify reasons to reject the code based on security and logical integrity.
- **Inputs:** `/temp/audit_map.json` and source code.
- **Actions:**
  - Scan for injection, race conditions, and improper state handling.
  - Simulate edge cases: null inputs, buffer overflows, and malformed data.
  - Evaluate "Silent Failures": Does the code swallow exceptions or fail to log critical errors?
- **Outputs:** List of fatal flaws and security risks.
- **Persistence Strategy:** Save `vulnerabilities.json` to `/temp/`.

### Stage 3: Performance & Velocity Debt Assessment

- **Purpose:** Evaluate the "Pragmatic Performance" and maintainability of the implementation.
- **Inputs:** Source code and `/temp/vulnerabilities.json`.
- **Actions:**
  - Identify redundant API calls or unnecessary allocations.
  - Flag "Over-Engineering" (unnecessary abstractions) vs. "Lazy Code" (hardcoded values).
  - Identify missing unit test scenarios for identified edge cases.
- **Outputs:** List of optimization debt and missing test scenarios.
- **Persistence Strategy:** Save `debt_and_tests.json` to `/temp/`.

### Stage 4: Synthesis & Verdict Generation

- **Purpose:** Compile all findings into the final "Hostile Audit" report.
- **Inputs:** `/temp/vulnerabilities.json` and `/temp/debt_and_tests.json`.
- **Actions:**
  - Consolidate all findings into the mandated "Response Format."
  - Apply the "Burden of Proof" rule: If any Fatal Flaws or Security Risks exist, the verdict is REJECTED.
  - Ensure no sycophantic language is present.
- **Outputs:** Final Audit Report.
- **Persistence Strategy:** Final output is delivered to the user; `/temp/` files may be purged.
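The "Burden of Proof" rule is effectively a two-field check on the findings schema from section 4; a sketch (the APPROVED label is an assumption, since the workflow only names REJECTED):

```typescript
// Hypothetical sketch of the Stage 4 "Burden of Proof" rule:
// any fatal flaw or security risk forces a REJECTED verdict.
interface Findings {
  fatal_flaws: string[];
  security_risks: string[];
  debt: string[];
  missing_tests: string[];
}

function verdict(f: Findings): "REJECTED" | "APPROVED" {
  return f.fatal_flaws.length > 0 || f.security_risks.length > 0
    ? "REJECTED"
    : "APPROVED";
}
```

Velocity debt and missing tests are reported but do not by themselves trigger rejection under this reading of the rule.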

## 4. Data & File Contracts

- **Filename:** `/temp/audit_context.json` | **Schema:** `{ "high_risk_zones": [], "entry_points": [] }`
- **Filename:** `/temp/findings.json` | **Schema:** `{ "fatal_flaws": [], "security_risks": [], "debt": [], "missing_tests": [] }`
- **Final Report Format:** Markdown with specific headers: `## 🛑 FATAL FLAWS`, `## ⚠️ SECURITY & VULNERABILITIES`, `## 📉 VELOCITY DEBT`, `## 🧪 MISSING TESTS`, and `### VERDICT`.

## 5. Failure & Recovery Handling

- **Incomplete Input:** If the code is snippet-based and missing context, the agent must assume the worst-case scenario for the missing parts and flag them as "Critical Unknowns."
- **Stage Failure:** If a specific file cannot be parsed, log the error in `findings.json` and proceed with the remaining files.
- **Clarification:** The agent will NOT ask for clarification mid-audit. It will make a "hostile assumption" and document it as a risk.

## 6. Final Deliverable Specification

- **Tone:** Senior Security Auditor. Clinical, critical, and direct.
- **Acceptance Criteria:**
  - No "Good job" or introductory filler.
  - Every flaw must include [Why it fails] and [How to fix it].
  - Verdict must be REJECTED unless the code is "solid" (simple, robust, and secure).
  - Must identify at least one specific edge case for the "Missing Tests" section.
@@ -1,99 +0,0 @@
---
description: Work on a ticket
---

# WORKFLOW: Automated Feature Implementation and Review Cycle

## 1. High-Level Goal

The objective of this workflow is to autonomously ingest a task from a local `/tickets` directory, establish a dedicated development environment via Git branching, implement the requested changes with incremental commits, validate the work through an internal review process, and finalize the lifecycle by cleaning up ticket artifacts and seeking user authorization for the final merge.

---

## 2. Assumptions & Clarifications

- **Assumptions:**
  - The `/tickets` directory contains one or more files representing tasks (e.g., `.md` or `.txt`).
  - The agent has authenticated access to the local Git repository.
  - A "Review Workflow" exists as an executable command or internal process.
  - The branch naming convention is `feature/[ticket-filename-slug]`.
- **Ambiguities:**
  - If multiple tickets exist, the agent will select the one with the earliest "Last Modified" timestamp.
  - "Regular commits" are defined as committing after every logically complete file change or functional milestone.

---

## 3. Stage Breakdown

### Stage 1: Ticket Selection and Branch Initialization

- **Purpose:** Identify the next task and prepare the workspace.
- **Inputs:** Contents of the `/tickets` directory.
- **Actions:**
  1. Scan `/tickets` and select the oldest file.
  2. Parse the ticket content to understand requirements.
  3. Ensure the current working directory is a Git repository.
  4. Create and switch to a new branch: `feature/[ticket-filename-slug]`.
- **Outputs:** Active feature branch.
- **Persistence Strategy:** Save `state.json` to `/temp` containing `ticket_path`, `branch_name`, and `start_time`.
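Stage 1's selection and naming rules can be sketched as pure helpers (hypothetical; timestamps are passed in rather than read via `fs` to keep the sketch self-contained):

```typescript
// Hypothetical sketch of Stage 1: pick the ticket with the earliest
// last-modified timestamp and derive the feature branch name.
interface TicketFile {
  path: string;       // e.g. "/tickets/20240520-auth-refactor.md"
  modifiedAt: number; // epoch millis; a real run would use fs.statSync().mtimeMs
}

function selectTicket(tickets: TicketFile[]): TicketFile | undefined {
  return [...tickets].sort((a, b) => a.modifiedAt - b.modifiedAt)[0];
}

function branchName(ticketPath: string): string {
  const file = ticketPath.slice(ticketPath.lastIndexOf("/") + 1);
  const slug = file.replace(/\.[^.]+$/, ""); // drop the file extension
  return `feature/${slug}`;
}
```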

### Stage 2: Implementation and Incremental Committing

- **Purpose:** Execute the technical requirements of the ticket.
- **Inputs:** `/temp/state.json`, ticket requirements.
- **Actions:**
  1. Modify the codebase according to requirements.
  2. For every distinct file change or logical unit of work:
     - Run basic syntax checks.
     - Execute `git add [file]`.
     - Execute `git commit -m "feat: [brief description of change]"`.
  3. Repeat until the feature is complete.
- **Outputs:** Committed code changes on the feature branch.
- **Persistence Strategy:** Update `state.json` with `implementation_complete: true` and a list of `modified_files`.

### Stage 3: Review Workflow Execution

- **Purpose:** Validate the implementation against quality standards.
- **Inputs:** `/temp/state.json`, modified codebase.
- **Actions:**
  1. Trigger the "Review Workflow" (static analysis, tests, or linter).
  2. If errors are found:
     - Log errors to `/temp/review_log.txt`.
     - Re-enter Stage 2 to apply fixes and commit.
  3. If the review passes:
     - Proceed to Stage 4.
- **Outputs:** Review results/logs.
- **Persistence Strategy:** Update `state.json` with `review_passed: true`.

### Stage 4: Cleanup and User Handoff

- **Purpose:** Finalize the ticket lifecycle and request merge permission.
- **Inputs:** `/temp/state.json`.
- **Actions:**
  1. Delete the ticket file from `/tickets` using the path stored in `state.json`.
  2. Format a summary of changes and a request for merge.
- **Outputs:** Deletion of the ticket file; user-facing summary.
- **Persistence Strategy:** Clear `/temp/state.json` upon successful completion.

---

## 4. Data & File Contracts

- **State File:** `/temp/state.json`
  - Format: JSON
  - Schema: `{ "ticket_path": string, "branch_name": string, "implementation_complete": boolean, "review_passed": boolean }`
- **Ticket Files:** Located in `/tickets/*` (Markdown or Plain Text).
- **Logs:** `/temp/review_log.txt` (Plain Text) for capturing stderr from review tools.

---

## 5. Failure & Recovery Handling

- **Empty Ticket Directory:** If no files are found in `/tickets`, the agent will output "NO_TICKETS_FOUND" and terminate the workflow.
- **Commit Failures:** If a commit fails (e.g., pre-commit hooks), the agent must resolve the hook violation before retrying the commit.
- **Review Failure Loop:** If the review fails more than 3 times for the same issue, the agent must halt and output a "BLOCKER_REPORT" detailing the persistent errors to the user.
- **State Recovery:** On context reset, the agent must check `/temp/state.json` to resume the workflow from the last recorded stage.
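The recovery rule maps the persisted flags from the section 4 schema back to a stage number; a sketch (the stage numbering is an assumption drawn from the stages above):

```typescript
// Hypothetical sketch of state recovery: map persisted state-file flags
// (schema from "Data & File Contracts") to the stage to resume from.
interface WorkflowState {
  ticket_path: string;
  branch_name: string;
  implementation_complete: boolean;
  review_passed: boolean;
}

function resumeStage(state: WorkflowState | null): number {
  if (state === null) return 1;                // no state: start fresh
  if (state.review_passed) return 4;           // review done: clean up + handoff
  if (state.implementation_complete) return 3; // code done: run review
  return 2;                                    // branch exists: keep implementing
}
```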

---

## 6. Final Deliverable Specification

- **Final Output:** A clear message to the user in the following format:
  > **Task Completed:** [Ticket Name]
  > **Branch:** [Branch Name]
  > **Changes:** [Brief list of modified files]
  > **Review Status:** Passed
  > **Cleanup:** Ticket file removed from /tickets.
  > **Action Required:** Would you like me to merge [Branch Name] into `main`? (Yes/No)
- **Quality Bar:** Code must be committed with descriptive messages; the ticket file must be successfully deleted; the workspace must be left on the feature branch awaiting the merge command.
.gitignore (vendored): 1 change
@@ -1,5 +1,6 @@
.env
node_modules
docker-compose.override.yml
shared/db-logs
shared/db/data
shared/db/loga
@@ -10,7 +10,7 @@ import {
} from "discord.js";
import { inventoryService } from "@shared/modules/inventory/inventory.service";
import { createSuccessEmbed, createErrorEmbed, createBaseEmbed } from "@lib/embeds";
import { UserError } from "@/lib/errors";
import { UserError } from "@shared/lib/errors";
import { items } from "@db/schema";
import { ilike, isNotNull, and } from "drizzle-orm";
import { DrizzleClient } from "@shared/db/DrizzleClient";
@@ -65,10 +65,10 @@ export const listing = createCommand({
await interaction.editReply({ content: `✅ Listing for **${item.name}** posted in ${targetChannel}.` });
} catch (error: any) {
if (error instanceof UserError) {
await interaction.reply({ embeds: [createErrorEmbed(error.message)], ephemeral: true });
await interaction.editReply({ embeds: [createErrorEmbed(error.message)] });
} else {
console.error("Error creating listing:", error);
await interaction.reply({ embeds: [createErrorEmbed("An unexpected error occurred.")], ephemeral: true });
await interaction.editReply({ embeds: [createErrorEmbed("An unexpected error occurred.")] });
}
}
},
@@ -3,13 +3,14 @@ import { createCommand } from "@shared/lib/utils";
import { SlashCommandBuilder } from "discord.js";
import { economyService } from "@shared/modules/economy/economy.service";
import { createErrorEmbed, createSuccessEmbed } from "@lib/embeds";
import { UserError } from "@/lib/errors";
import { UserError } from "@shared/lib/errors";

export const daily = createCommand({
data: new SlashCommandBuilder()
.setName("daily")
.setDescription("Claim your daily reward"),
execute: async (interaction) => {
await interaction.deferReply();
try {
const result = await economyService.claimDaily(interaction.user.id);

@@ -21,14 +22,14 @@ export const daily = createCommand({
)
.setColor("Gold");

await interaction.reply({ embeds: [embed] });
await interaction.editReply({ embeds: [embed] });

} catch (error: any) {
if (error instanceof UserError) {
await interaction.reply({ embeds: [createErrorEmbed(error.message)], ephemeral: true });
await interaction.editReply({ embeds: [createErrorEmbed(error.message)] });
} else {
console.error("Error claiming daily:", error);
await interaction.reply({ embeds: [createErrorEmbed("An unexpected error occurred.")], ephemeral: true });
await interaction.editReply({ embeds: [createErrorEmbed("An unexpected error occurred.")] });
}
}
}
@@ -1,21 +1,7 @@
import { createCommand } from "@shared/lib/utils";
import { SlashCommandBuilder } from "discord.js";
import { userService } from "@shared/modules/user/user.service";
import { createErrorEmbed, createSuccessEmbed } from "@lib/embeds";
import { UserError } from "@/lib/errors";
import { userTimers, users } from "@db/schema";
import { eq, and, sql } from "drizzle-orm";
import { DrizzleClient } from "@shared/db/DrizzleClient";
import { config } from "@shared/lib/config";
import { TimerType } from "@shared/lib/constants";

const EXAM_TIMER_TYPE = TimerType.EXAM_SYSTEM;
const EXAM_TIMER_KEY = 'default';

interface ExamMetadata {
examDay: number;
lastXp: string;
}
import { examService, ExamStatus } from "@shared/modules/economy/exam.service";

const DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"];
@@ -25,105 +11,42 @@ export const exam = createCommand({
.setDescription("Take your weekly exam to earn rewards based on your XP progress."),
execute: async (interaction) => {
await interaction.deferReply();
const user = await userService.getOrCreateUser(interaction.user.id, interaction.user.username);
if (!user) {
await interaction.editReply({ embeds: [createErrorEmbed("Failed to retrieve user data.")] });
return;
}
const now = new Date();
const currentDay = now.getDay();

try {
// 1. Fetch existing timer/exam data
const timer = await DrizzleClient.query.userTimers.findFirst({
where: and(
eq(userTimers.userId, user.id),
eq(userTimers.type, EXAM_TIMER_TYPE),
eq(userTimers.key, EXAM_TIMER_KEY)
)
});
// First, try to take the exam or check status
const result = await examService.takeExam(interaction.user.id);

// 2. First Run Logic
if (!timer) {
// Set exam day to today
const nextExamDate = new Date(now);
nextExamDate.setDate(now.getDate() + 7);
nextExamDate.setHours(0, 0, 0, 0);
const nextExamTimestamp = Math.floor(nextExamDate.getTime() / 1000);

const metadata: ExamMetadata = {
examDay: currentDay,
lastXp: (user.xp ?? 0n).toString()
};

await DrizzleClient.insert(userTimers).values({
userId: user.id,
type: EXAM_TIMER_TYPE,
key: EXAM_TIMER_KEY,
expiresAt: nextExamDate,
metadata: metadata
});
if (result.status === ExamStatus.NOT_REGISTERED) {
// Register the user
const regResult = await examService.registerForExam(interaction.user.id, interaction.user.username);
const nextRegTimestamp = Math.floor(regResult.nextExamAt!.getTime() / 1000);

await interaction.editReply({
embeds: [createSuccessEmbed(
`You have registered for the exam! Your exam day is **${DAYS[currentDay]}** (Server Time).\n` +
`Come back on <t:${nextExamTimestamp}:D> (<t:${nextExamTimestamp}:R>) to take your first exam!`,
`You have registered for the exam! Your exam day is **${DAYS[regResult.examDay!]}** (Server Time).\n` +
`Come back on <t:${nextRegTimestamp}:D> (<t:${nextRegTimestamp}:R>) to take your first exam!`,
"Exam Registration Successful"
)]
});
return;
}

const metadata = timer.metadata as unknown as ExamMetadata;
const examDay = metadata.examDay;

// 3. Cooldown Check
const expiresAt = new Date(timer.expiresAt);
expiresAt.setHours(0, 0, 0, 0);

if (now < expiresAt) {
// Calculate time remaining
const timestamp = Math.floor(expiresAt.getTime() / 1000);
const nextExamTimestamp = Math.floor(result.nextExamAt!.getTime() / 1000);

if (result.status === ExamStatus.COOLDOWN) {
await interaction.editReply({
embeds: [createErrorEmbed(
`You have already taken your exam for this week (or are waiting for your first week to pass).\n` +
`Next exam available: <t:${timestamp}:D> (<t:${timestamp}:R>)`
`Next exam available: <t:${nextExamTimestamp}:D> (<t:${nextExamTimestamp}:R>)`
)]
});
return;
}

// 4. Day Check
if (currentDay !== examDay) {
// Calculate next correct exam day to correct the schedule
let daysUntil = (examDay - currentDay + 7) % 7;
if (daysUntil === 0) daysUntil = 7;

const nextExamDate = new Date(now);
nextExamDate.setDate(now.getDate() + daysUntil);
nextExamDate.setHours(0, 0, 0, 0);
const nextExamTimestamp = Math.floor(nextExamDate.getTime() / 1000);

const newMetadata: ExamMetadata = {
examDay: examDay,
lastXp: (user.xp ?? 0n).toString()
};

await DrizzleClient.update(userTimers)
.set({
expiresAt: nextExamDate,
metadata: newMetadata
})
.where(and(
eq(userTimers.userId, user.id),
eq(userTimers.type, EXAM_TIMER_TYPE),
eq(userTimers.key, EXAM_TIMER_KEY)
));

if (result.status === ExamStatus.MISSED) {
await interaction.editReply({
embeds: [createErrorEmbed(
`You missed your exam day! Your exam day is **${DAYS[examDay]}** (Server Time).\n` +
`You missed your exam day! Your exam day is **${DAYS[result.examDay!]}** (Server Time).\n` +
`You verify your attendance but score a **0**.\n` +
`Your next exam opportunity is: <t:${nextExamTimestamp}:D> (<t:${nextExamTimestamp}:R>)`,
"Exam Failed"
@@ -132,74 +55,21 @@ export const exam = createCommand({
|
||||
return;
|
||||
}
|
||||
|
||||
// 5. Reward Calculation
|
||||
const lastXp = BigInt(metadata.lastXp || "0"); // Fallback just in case
|
||||
const currentXp = user.xp ?? 0n;
|
||||
const diff = currentXp - lastXp;
|
||||
|
||||
// Calculate Reward
|
||||
const multMin = config.economy.exam.multMin;
|
||||
const multMax = config.economy.exam.multMax;
|
||||
const multiplier = Math.random() * (multMax - multMin) + multMin;
|
||||
|
||||
// Allow negative reward? existing description implies "difference", usually gain.
|
||||
// If diff is negative (lost XP?), reward might be 0.
|
||||
let reward = 0n;
|
||||
if (diff > 0n) {
|
||||
reward = BigInt(Math.floor(Number(diff) * multiplier));
|
||||
}
|
||||
|
||||
// 6. Update State
|
||||
const nextExamDate = new Date(now);
|
||||
nextExamDate.setDate(now.getDate() + 7);
|
||||
nextExamDate.setHours(0, 0, 0, 0);
|
||||
const nextExamTimestamp = Math.floor(nextExamDate.getTime() / 1000);
|
||||
|
||||
const newMetadata: ExamMetadata = {
|
||||
examDay: examDay,
|
||||
lastXp: currentXp.toString()
|
||||
};
|
||||
|
||||
await DrizzleClient.transaction(async (tx) => {
|
||||
// Update Timer
|
||||
await tx.update(userTimers)
|
||||
.set({
|
||||
expiresAt: nextExamDate,
|
||||
metadata: newMetadata
|
||||
})
|
||||
.where(and(
|
||||
eq(userTimers.userId, user.id),
|
||||
eq(userTimers.type, EXAM_TIMER_TYPE),
|
||||
eq(userTimers.key, EXAM_TIMER_KEY)
|
||||
));
|
||||
|
||||
// Add Currency
|
||||
if (reward > 0n) {
|
||||
await tx.update(users)
|
||||
.set({
|
||||
balance: sql`${users.balance} + ${reward}`
|
||||
})
|
||||
.where(eq(users.id, user.id));
|
||||
}
|
||||
});
|
||||
|
||||
// If it reached here with AVAILABLE, it means they passed
|
||||
await interaction.editReply({
|
||||
embeds: [createSuccessEmbed(
|
||||
`**XP Gained:** ${diff.toString()}\n` +
|
||||
`**Multiplier:** x${multiplier.toFixed(2)}\n` +
|
||||
`**Reward:** ${reward.toString()} Currency\n\n` +
|
||||
`**XP Gained:** ${result.xpDiff?.toString()}\n` +
|
||||
`**Multiplier:** x${result.multiplier?.toFixed(2)}\n` +
|
||||
`**Reward:** ${result.reward?.toString()} Currency\n\n` +
|
||||
`See you next week: <t:${nextExamTimestamp}:D>`,
|
||||
"Exam Passed!"
|
||||
)]
|
||||
});
|
||||
|
||||
} catch (error: any) {
|
||||
if (error instanceof UserError) {
|
||||
await interaction.reply({ embeds: [createErrorEmbed(error.message)], ephemeral: true });
|
||||
} else {
|
||||
console.error("Error in exam command:", error);
|
||||
await interaction.reply({ embeds: [createErrorEmbed("An unexpected error occurred.")], ephemeral: true });
|
||||
}
|
||||
console.error("Error in exam command:", error);
|
||||
await interaction.editReply({ embeds: [createErrorEmbed(error.message || "An unexpected error occurred.")] });
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
|
||||
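The reward step above multiplies a BigInt XP difference by a floating-point multiplier. BigInt and Number cannot be mixed directly in JavaScript, so the value round-trips through `Number` and is floored; a minimal standalone sketch of that arithmetic (the function name is hypothetical, not part of the commit):

```typescript
// Hypothetical standalone version of the reward arithmetic shown above.
// BigInt * float is a TypeError in JS, so the diff converts through
// Number; the conversion is only exact while the diff stays below 2^53.
function calcExamReward(xpDiff: bigint, multiplier: number): bigint {
  if (xpDiff <= 0n) return 0n; // no reward for zero or lost XP
  return BigInt(Math.floor(Number(xpDiff) * multiplier));
}
```

With `xpDiff = 500n` and a multiplier of `1.5` this yields `750n`, inside the 500n..1000n bounds asserted later in `exam.service.test.ts`.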
@@ -5,7 +5,7 @@ import { economyService } from "@shared/modules/economy/economy.service";
import { userService } from "@shared/modules/user/user.service";
import { config } from "@shared/lib/config";
import { createErrorEmbed, createSuccessEmbed } from "@lib/embeds";
import { UserError } from "@/lib/errors";
import { UserError } from "@shared/lib/errors";

export const pay = createCommand({
  data: new SlashCommandBuilder()

@@ -3,7 +3,7 @@ import { SlashCommandBuilder } from "discord.js";
import { triviaService } from "@shared/modules/trivia/trivia.service";
import { getTriviaQuestionView } from "@/modules/trivia/trivia.view";
import { createErrorEmbed } from "@lib/embeds";
import { UserError } from "@/lib/errors";
import { UserError } from "@shared/lib/errors";
import { config } from "@shared/lib/config";
import { TriviaCategory } from "@shared/lib/constants";


@@ -5,7 +5,7 @@ import { userService } from "@shared/modules/user/user.service";
import { createErrorEmbed } from "@lib/embeds";
import { getItemUseResultEmbed } from "@/modules/inventory/inventory.view";
import type { ItemUsageData } from "@shared/lib/types";
import { UserError } from "@/lib/errors";
import { UserError } from "@shared/lib/errors";
import { config } from "@shared/lib/config";

export const use = createCommand({

@@ -1,4 +1,15 @@
import { Colors, type ColorResolvable, EmbedBuilder } from "discord.js";
import { BRANDING } from "@shared/lib/constants";
import pkg from "../../package.json";

/**
 * Applies standard branding to an embed.
 */
function applyBranding(embed: EmbedBuilder): EmbedBuilder {
  return embed.setFooter({
    text: `${BRANDING.FOOTER_TEXT} v${pkg.version}`
  });
}

/**
 * Creates a standardized error embed.
@@ -7,11 +18,13 @@ import { Colors, type ColorResolvable, EmbedBuilder } from "discord.js";
 * @returns An EmbedBuilder instance configured as an error.
 */
export function createErrorEmbed(message: string, title: string = "Error"): EmbedBuilder {
  return new EmbedBuilder()
  const embed = new EmbedBuilder()
    .setTitle(`❌ ${title}`)
    .setDescription(message)
    .setColor(Colors.Red)
    .setTimestamp();

  return applyBranding(embed);
}

/**
@@ -21,11 +34,13 @@ export function createErrorEmbed(message: string, title: string = "Error"): Embe
 * @returns An EmbedBuilder instance configured as a warning.
 */
export function createWarningEmbed(message: string, title: string = "Warning"): EmbedBuilder {
  return new EmbedBuilder()
  const embed = new EmbedBuilder()
    .setTitle(`⚠️ ${title}`)
    .setDescription(message)
    .setColor(Colors.Yellow)
    .setTimestamp();

  return applyBranding(embed);
}

/**
@@ -35,11 +50,13 @@ export function createWarningEmbed(message: string, title: string = "Warning"):
 * @returns An EmbedBuilder instance configured as a success.
 */
export function createSuccessEmbed(message: string, title: string = "Success"): EmbedBuilder {
  return new EmbedBuilder()
  const embed = new EmbedBuilder()
    .setTitle(`✅ ${title}`)
    .setDescription(message)
    .setColor(Colors.Green)
    .setTimestamp();

  return applyBranding(embed);
}

/**
@@ -49,11 +66,13 @@ export function createSuccessEmbed(message: string, title: string = "Success"):
 * @returns An EmbedBuilder instance configured as info.
 */
export function createInfoEmbed(message: string, title: string = "Info"): EmbedBuilder {
  return new EmbedBuilder()
  const embed = new EmbedBuilder()
    .setTitle(`ℹ️ ${title}`)
    .setDescription(message)
    .setColor(Colors.Blue)
    .setTimestamp();

  return applyBranding(embed);
}

/**
@@ -65,11 +84,12 @@ export function createInfoEmbed(message: string, title: string = "Info"): EmbedB
 */
export function createBaseEmbed(title?: string, description?: string, color?: ColorResolvable): EmbedBuilder {
  const embed = new EmbedBuilder()
    .setTimestamp();
    .setTimestamp()
    .setColor(color ?? BRANDING.COLOR);

  if (title) embed.setTitle(title);
  if (description) embed.setDescription(description);
  if (color) embed.setColor(color);

  return embed;
  return applyBranding(embed);
}

@@ -1,18 +0,0 @@
export class ApplicationError extends Error {
  constructor(message: string) {
    super(message);
    this.name = this.constructor.name;
  }
}

export class UserError extends ApplicationError {
  constructor(message: string) {
    super(message);
  }
}

export class SystemError extends ApplicationError {
  constructor(message: string) {
    super(message);
  }
}
@@ -1,5 +1,6 @@
import { AutocompleteInteraction } from "discord.js";
import { AuroraClient } from "@/lib/BotClient";
import { logger } from "@shared/lib/logger";


/**
@@ -16,7 +17,7 @@ export class AutocompleteHandler {
    try {
      await command.autocomplete(interaction);
    } catch (error) {
      console.error(`Error handling autocomplete for ${interaction.commandName}:`, error);
      logger.error("bot", `Error handling autocomplete for ${interaction.commandName}`, error);
    }
  }
}

@@ -2,6 +2,7 @@ import { ChatInputCommandInteraction, MessageFlags } from "discord.js";
import { AuroraClient } from "@/lib/BotClient";
import { userService } from "@shared/modules/user/user.service";
import { createErrorEmbed } from "@lib/embeds";
import { logger } from "@shared/lib/logger";


/**
@@ -13,7 +14,7 @@ export class CommandHandler {
    const command = AuroraClient.commands.get(interaction.commandName);

    if (!command) {
      console.error(`No command matching ${interaction.commandName} was found.`);
      logger.error("bot", `No command matching ${interaction.commandName} was found.`);
      return;
    }

@@ -28,14 +29,14 @@
    try {
      await userService.getOrCreateUser(interaction.user.id, interaction.user.username);
    } catch (error) {
      console.error("Failed to ensure user exists:", error);
      logger.error("bot", "Failed to ensure user exists", error);
    }

    try {
      await command.execute(interaction);
      AuroraClient.lastCommandTimestamp = Date.now();
    } catch (error) {
      console.error(String(error));
      logger.error("bot", `Error executing command ${interaction.commandName}`, error);
      const errorEmbed = createErrorEmbed('There was an error while executing this command!');

      if (interaction.replied || interaction.deferred) {

@@ -1,7 +1,8 @@
import { ButtonInteraction, StringSelectMenuInteraction, ModalSubmitInteraction, MessageFlags } from "discord.js";

import { UserError } from "@lib/errors";
import { UserError } from "@shared/lib/errors";
import { createErrorEmbed } from "@lib/embeds";
import { logger } from "@shared/lib/logger";

type ComponentInteraction = ButtonInteraction | StringSelectMenuInteraction | ModalSubmitInteraction;

@@ -28,7 +29,7 @@ export class ComponentInteractionHandler {
        return;
      }
    } else {
      console.error(`Handler method ${route.method} not found in module`);
      logger.error("bot", `Handler method ${route.method} not found in module`);
    }
  }
}
@@ -52,7 +53,7 @@

    // Log system errors (non-user errors) for debugging
    if (!isUserError) {
      console.error(`Error in ${handlerName}:`, error);
      logger.error("bot", `Error in ${handlerName}`, error);
    }

    const errorEmbed = createErrorEmbed(errorMessage);
@@ -72,7 +73,7 @@
      }
    } catch (replyError) {
      // If we can't send a reply, log it
      console.error(`Failed to send error response in ${handlerName}:`, replyError);
      logger.error("bot", `Failed to send error response in ${handlerName}`, replyError);
    }
  }
}

@@ -1,6 +1,6 @@
import { ButtonInteraction } from "discord.js";
import { lootdropService } from "@shared/modules/economy/lootdrop.service";
import { UserError } from "@/lib/errors";
import { UserError } from "@shared/lib/errors";
import { getLootdropClaimedMessage } from "./lootdrop.view";

export async function handleLootdropInteraction(interaction: ButtonInteraction) {

@@ -1,7 +1,7 @@
import { ButtonInteraction, MessageFlags } from "discord.js";
import { inventoryService } from "@shared/modules/inventory/inventory.service";
import { userService } from "@shared/modules/user/user.service";
import { UserError } from "@/lib/errors";
import { UserError } from "@shared/lib/errors";

export async function handleShopInteraction(interaction: ButtonInteraction) {
  if (!interaction.customId.startsWith("shop_buy_")) return;

@@ -4,7 +4,7 @@ import { config } from "@shared/lib/config";
import { AuroraClient } from "@/lib/BotClient";
import { buildFeedbackMessage, getFeedbackModal } from "./feedback.view";
import { FEEDBACK_CUSTOM_IDS, type FeedbackType, type FeedbackData } from "./feedback.types";
import { UserError } from "@/lib/errors";
import { UserError } from "@shared/lib/errors";

export const handleFeedbackInteraction = async (interaction: Interaction) => {
  // Handle select menu for choosing feedback type

@@ -10,7 +10,7 @@ import {
import { tradeService } from "@shared/modules/trade/trade.service";
import { inventoryService } from "@shared/modules/inventory/inventory.service";
import { createErrorEmbed, createWarningEmbed, createSuccessEmbed, createInfoEmbed } from "@lib/embeds";
import { UserError } from "@lib/errors";
import { UserError } from "@shared/lib/errors";
import { getTradeDashboard, getTradeMoneyModal, getItemSelectMenu, getTradeCompletedEmbed } from "./trade.view";


@@ -1,7 +1,7 @@
import { ButtonInteraction } from "discord.js";
import { triviaService } from "@shared/modules/trivia/trivia.service";
import { getTriviaResultView, getTriviaTimeoutView } from "./trivia.view";
import { UserError } from "@/lib/errors";
import { UserError } from "@shared/lib/errors";

export async function handleTriviaInteraction(interaction: ButtonInteraction) {
  const parts = interaction.customId.split('_');

@@ -3,7 +3,7 @@ import { config } from "@shared/lib/config";
import { getEnrollmentSuccessMessage } from "./enrollment.view";
import { classService } from "@shared/modules/class/class.service";
import { userService } from "@shared/modules/user/user.service";
import { UserError } from "@/lib/errors";
import { UserError } from "@shared/lib/errors";
import { sendWebhookMessage } from "@/lib/webhookUtils";

export async function handleEnrollmentInteraction(interaction: ButtonInteraction) {

bun.lock | 2
@@ -9,12 +9,12 @@
"discord.js": "^14.25.1",
"dotenv": "^17.2.3",
"drizzle-orm": "^0.44.7",
"postgres": "^3.4.7",
"zod": "^4.1.13",
},
"devDependencies": {
"@types/bun": "latest",
"drizzle-kit": "^0.31.7",
"postgres": "^3.4.7",
},
"peerDependencies": {
"typescript": "^5",

@@ -1,5 +1,6 @@
{
  "name": "app",
  "version": "1.1.3",
  "module": "bot/index.ts",
  "type": "module",
  "private": true,

@@ -85,3 +85,9 @@ export enum TriviaCategory {
  ANIMALS = 27,
  ANIME_MANGA = 31,
}

export const BRANDING = {
  COLOR: 0x00d4ff as const,
  FOOTER_TEXT: 'Aurora' as const,
};

shared/lib/logger.test.ts | 118 (new file)
@@ -0,0 +1,118 @@
import { expect, test, describe, beforeAll, afterAll, spyOn } from "bun:test";
import { logger } from "./logger";
import { existsSync, unlinkSync, readFileSync, writeFileSync } from "fs";
import { join } from "path";

describe("Logger", () => {
  const logDir = join(process.cwd(), "logs");
  const logFile = join(logDir, "error.log");

  beforeAll(() => {
    // Cleanup if exists
    try {
      if (existsSync(logFile)) unlinkSync(logFile);
    } catch (e) {}
  });

  test("should log info messages to console with correct format", () => {
    const spy = spyOn(console, "log");
    const message = "Formatting test";
    logger.info("system", message);

    expect(spy).toHaveBeenCalled();
    const callArgs = spy.mock.calls[0]?.[0];
    expect(callArgs).toBeDefined();
    if (callArgs) {
      // Strict regex check for ISO timestamp and format
      const regex = /^\[\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z\] \[INFO\] \[SYSTEM\] Formatting test$/;
      expect(callArgs).toMatch(regex);
    }
    spy.mockRestore();
  });

  test("should write error logs to file with stack trace", async () => {
    const errorMessage = "Test error message";
    const testError = new Error("Source error");

    logger.error("system", errorMessage, testError);

    // Polling for file write instead of fixed timeout
    let content = "";
    for (let i = 0; i < 20; i++) {
      if (existsSync(logFile)) {
        content = readFileSync(logFile, "utf-8");
        if (content.includes("Source error")) break;
      }
      await new Promise(resolve => setTimeout(resolve, 50));
    }

    expect(content).toMatch(/^\[\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z\] \[ERROR\] \[SYSTEM\] Test error message: Source error/);
    expect(content).toContain("Stack Trace:");
    expect(content).toContain("Error: Source error");
    expect(content).toContain("logger.test.ts");
  });

  test("should handle log directory creation failures gracefully", async () => {
    const consoleSpy = spyOn(console, "error");

    // We trigger an error by trying to use a path that is a file where a directory should be
    const triggerFile = join(process.cwd(), "logs_fail_trigger");

    try {
      writeFileSync(triggerFile, "not a directory");

      // Manually override paths for this test instance
      const originalLogDir = (logger as any).logDir;
      const originalLogPath = (logger as any).errorLogPath;

      (logger as any).logDir = triggerFile;
      (logger as any).errorLogPath = join(triggerFile, "error.log");
      (logger as any).initialized = false;

      logger.error("system", "This should fail directory creation");

      // Wait for async initialization attempt
      await new Promise(resolve => setTimeout(resolve, 100));

      expect(consoleSpy).toHaveBeenCalled();
      expect(consoleSpy.mock.calls.some(call =>
        String(call[0]).includes("Failed to initialize logger directory")
      )).toBe(true);

      // Reset logger state
      (logger as any).logDir = originalLogDir;
      (logger as any).errorLogPath = originalLogPath;
      (logger as any).initialized = false;
    } finally {
      if (existsSync(triggerFile)) unlinkSync(triggerFile);
      consoleSpy.mockRestore();
    }
  });

  test("should include complex data objects in logs", () => {
    const spy = spyOn(console, "log");
    const data = { userId: "123", tags: ["test"] };
    logger.info("bot", "Message with data", data);

    expect(spy).toHaveBeenCalled();
    const callArgs = spy.mock.calls[0]?.[0];
    expect(callArgs).toBeDefined();
    if (callArgs) {
      expect(callArgs).toContain(` | Data: ${JSON.stringify(data)}`);
    }
    spy.mockRestore();
  });

  test("should handle circular references in data objects", () => {
    const spy = spyOn(console, "log");
    const data: any = { name: "circular" };
    data.self = data;

    logger.info("bot", "Circular test", data);

    expect(spy).toHaveBeenCalled();
    const callArgs = spy.mock.calls[0]?.[0];
    expect(callArgs).toContain("[Circular]");
    spy.mockRestore();
  });
});

shared/lib/logger.ts | 162 (new file)
@@ -0,0 +1,162 @@
import { join, resolve } from "path";
import { appendFile, mkdir, stat } from "fs/promises";
import { existsSync } from "fs";

export enum LogLevel {
  DEBUG = 0,
  INFO = 1,
  WARN = 2,
  ERROR = 3,
}

const LogLevelNames = {
  [LogLevel.DEBUG]: "DEBUG",
  [LogLevel.INFO]: "INFO",
  [LogLevel.WARN]: "WARN",
  [LogLevel.ERROR]: "ERROR",
};

export type LogSource = "bot" | "web" | "shared" | "system";

export interface LogEntry {
  timestamp: string;
  level: string;
  source: LogSource;
  message: string;
  data?: any;
  stack?: string;
}

class Logger {
  private logDir: string;
  private errorLogPath: string;
  private initialized: boolean = false;
  private initPromise: Promise<void> | null = null;

  constructor() {
    // Use resolve with __dirname or process.cwd() but make it more robust
    // Since this is in shared/lib/, we can try to find the project root
    // For now, let's stick to a resolved path from process.cwd() or a safer alternative
    this.logDir = resolve(process.cwd(), "logs");
    this.errorLogPath = join(this.logDir, "error.log");
  }

  private async ensureInitialized() {
    if (this.initialized) return;
    if (this.initPromise) return this.initPromise;

    this.initPromise = (async () => {
      try {
        await mkdir(this.logDir, { recursive: true });
        this.initialized = true;
      } catch (err: any) {
        if (err.code === "EEXIST" || err.code === "ENOTDIR") {
          try {
            const stats = await stat(this.logDir);
            if (stats.isDirectory()) {
              this.initialized = true;
              return;
            }
          } catch (statErr) {}
        }
        console.error(`[SYSTEM] Failed to initialize logger directory at ${this.logDir}:`, err);
      } finally {
        this.initPromise = null;
      }
    })();

    return this.initPromise;
  }

  private safeStringify(data: any): string {
    try {
      return JSON.stringify(data);
    } catch (err) {
      const seen = new WeakSet();
      return JSON.stringify(data, (key, value) => {
        if (typeof value === "object" && value !== null) {
          if (seen.has(value)) return "[Circular]";
          seen.add(value);
        }
        return value;
      });
    }
  }

  private formatMessage(entry: LogEntry): string {
    const dataStr = entry.data ? ` | Data: ${this.safeStringify(entry.data)}` : "";
    const stackStr = entry.stack ? `\nStack Trace:\n${entry.stack}` : "";
    return `[${entry.timestamp}] [${entry.level}] [${entry.source.toUpperCase()}] ${entry.message}${dataStr}${stackStr}`;
  }

  private async writeToErrorLog(formatted: string) {
    await this.ensureInitialized();
    try {
      await appendFile(this.errorLogPath, formatted + "\n");
    } catch (err) {
      console.error("[SYSTEM] Failed to write to error log file:", err);
    }
  }

  private log(level: LogLevel, source: LogSource, message: string, errorOrData?: any) {
    const timestamp = new Date().toISOString();
    const levelName = LogLevelNames[level];

    const entry: LogEntry = {
      timestamp,
      level: levelName,
      source,
      message,
    };

    if (level === LogLevel.ERROR && errorOrData instanceof Error) {
      entry.stack = errorOrData.stack;
      entry.message = `${message}: ${errorOrData.message}`;
    } else if (errorOrData !== undefined) {
      entry.data = errorOrData;
    }

    const formatted = this.formatMessage(entry);

    // Print to console
    switch (level) {
      case LogLevel.DEBUG:
        console.debug(formatted);
        break;
      case LogLevel.INFO:
        console.log(formatted);
        break;
      case LogLevel.WARN:
        console.warn(formatted);
        break;
      case LogLevel.ERROR:
        console.error(formatted);
        break;
    }

    // Persistent error logging
    if (level === LogLevel.ERROR) {
      this.writeToErrorLog(formatted).catch(() => {
        // Silently fail to avoid infinite loops
      });
    }
  }

  debug(source: LogSource, message: string, data?: any) {
    this.log(LogLevel.DEBUG, source, message, data);
  }

  info(source: LogSource, message: string, data?: any) {
    this.log(LogLevel.INFO, source, message, data);
  }

  warn(source: LogSource, message: string, data?: any) {
    this.log(LogLevel.WARN, source, message, data);
  }

  error(source: LogSource, message: string, error?: any) {
    this.log(LogLevel.ERROR, source, message, error);
  }
}

export const logger = new Logger();

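The `safeStringify` helper in the new logger only falls back to a replacer when a plain `JSON.stringify` throws (circular structures raise a TypeError). A standalone sketch of the same WeakSet technique:

```typescript
// Standalone version of the circular-safe serializer used by the logger:
// try plain JSON.stringify first, and only on failure re-serialize with a
// replacer that substitutes "[Circular]" for any object seen before.
function safeStringify(data: unknown): string {
  try {
    return JSON.stringify(data);
  } catch {
    const seen = new WeakSet<object>();
    return JSON.stringify(data, (_key, value) => {
      if (typeof value === "object" && value !== null) {
        if (seen.has(value)) return "[Circular]";
        seen.add(value);
      }
      return value;
    });
  }
}
```

Note the fallback marks any revisited object as `[Circular]`, so a non-circular object referenced twice is also replaced; for log output that trade-off is usually acceptable.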
@@ -87,7 +87,7 @@ export const economyService = {
});

if (cooldown && cooldown.expiresAt > now) {
  throw new UserError(`Daily already claimed today. Next claim <t:${Math.floor(cooldown.expiresAt.getTime() / 1000)}:F>`);
  throw new UserError(`You have already claimed your daily reward today.\nNext claim available: <t:${Math.floor(cooldown.expiresAt.getTime() / 1000)}:F> (<t:${Math.floor(cooldown.expiresAt.getTime() / 1000)}:R>)`);
}

// Get user for streak logic

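Both the old and new cooldown messages build Discord timestamp tokens from a `Date`; a small sketch of that conversion (the helper name is hypothetical, the commit inlines the expression):

```typescript
// Discord renders <t:UNIX:STYLE> tokens client-side; UNIX is in seconds,
// hence the floor of getTime() / 1000. Styles seen in this diff:
// F (full date-time), R (relative), D (date only).
function discordTimestamp(date: Date, style: "F" | "R" | "D"): string {
  return `<t:${Math.floor(date.getTime() / 1000)}:${style}>`;
}
```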
237
shared/modules/economy/exam.service.test.ts
Normal file
237
shared/modules/economy/exam.service.test.ts
Normal file
@@ -0,0 +1,237 @@
|
||||
import { describe, it, expect, mock, beforeEach, setSystemTime } from "bun:test";
|
||||
import { examService, ExamStatus } from "@shared/modules/economy/exam.service";
|
||||
import { users, userTimers, transactions } from "@db/schema";
|
||||
|
||||
// Define mock functions
|
||||
const mockFindFirst = mock();
|
||||
const mockInsert = mock();
|
||||
const mockUpdate = mock();
|
||||
const mockValues = mock();
|
||||
const mockReturning = mock();
|
||||
const mockSet = mock();
|
||||
const mockWhere = mock();
|
||||
|
||||
// Chainable mock setup
|
||||
mockInsert.mockReturnValue({ values: mockValues });
|
||||
mockValues.mockReturnValue({ returning: mockReturning });
|
||||
mockUpdate.mockReturnValue({ set: mockSet });
|
||||
mockSet.mockReturnValue({ where: mockWhere });
|
||||
mockWhere.mockReturnValue({ returning: mockReturning });
|
||||
|
||||
// Mock DrizzleClient
|
||||
mock.module("@shared/db/DrizzleClient", () => {
|
||||
const createMockTx = () => ({
|
||||
query: {
|
||||
users: { findFirst: mockFindFirst },
|
||||
userTimers: { findFirst: mockFindFirst },
|
||||
},
|
||||
insert: mockInsert,
|
||||
update: mockUpdate,
|
||||
});
|
||||
|
||||
return {
|
||||
DrizzleClient: {
|
||||
query: {
|
||||
users: { findFirst: mockFindFirst },
|
||||
userTimers: { findFirst: mockFindFirst },
|
||||
},
|
||||
insert: mockInsert,
|
||||
update: mockUpdate,
|
||||
transaction: async (cb: any) => {
|
||||
return cb(createMockTx());
|
||||
}
|
||||
},
|
||||
};
|
||||
});
|
||||
|
||||
// Mock withTransaction
|
||||
mock.module("@/lib/db", () => ({
|
||||
withTransaction: async (cb: any, tx?: any) => {
|
||||
if (tx) return cb(tx);
|
||||
return cb({
|
||||
query: {
|
||||
users: { findFirst: mockFindFirst },
|
||||
userTimers: { findFirst: mockFindFirst },
|
||||
},
|
||||
insert: mockInsert,
|
||||
update: mockUpdate,
|
||||
});
|
||||
}
|
||||
}));
|
||||
|
||||
// Mock Config
|
||||
mock.module("@shared/lib/config", () => ({
|
||||
config: {
|
||||
economy: {
|
||||
exam: {
|
||||
multMin: 1.0,
|
||||
multMax: 2.0,
|
||||
}
|
||||
}
|
||||
}
|
||||
}));
|
||||
|
||||
// Mock User Service
|
||||
mock.module("@shared/modules/user/user.service", () => ({
|
||||
userService: {
|
||||
getOrCreateUser: mock()
|
||||
}
|
||||
}));
|
||||
|
||||
// Mock Dashboard Service
|
||||
mock.module("@shared/modules/dashboard/dashboard.service", () => ({
|
||||
dashboardService: {
|
||||
recordEvent: mock()
|
||||
}
|
||||
}));
|
||||
|
||||
describe("ExamService", () => {
|
||||
beforeEach(() => {
|
||||
mockFindFirst.mockReset();
|
||||
mockInsert.mockClear();
|
||||
mockUpdate.mockClear();
|
||||
mockValues.mockClear();
|
||||
mockReturning.mockClear();
|
||||
mockSet.mockClear();
|
||||
mockWhere.mockClear();
|
||||
});
|
||||
|
||||
describe("getExamStatus", () => {
|
||||
it("should return NOT_REGISTERED if no timer exists", async () => {
|
||||
mockFindFirst.mockResolvedValue(undefined);
|
||||
const status = await examService.getExamStatus("1");
|
||||
expect(status.status).toBe(ExamStatus.NOT_REGISTERED);
|
||||
});
|
||||
|
||||
it("should return COOLDOWN if now < expiresAt", async () => {
|
||||
const now = new Date("2024-01-10T12:00:00Z");
|
||||
setSystemTime(now);
|
||||
const future = new Date("2024-01-11T00:00:00Z");
|
||||
|
||||
mockFindFirst.mockResolvedValue({
|
||||
expiresAt: future,
|
||||
metadata: { examDay: 3, lastXp: "100" }
|
||||
});
|
||||
|
||||
const status = await examService.getExamStatus("1");
|
||||
expect(status.status).toBe(ExamStatus.COOLDOWN);
|
||||
expect(status.nextExamAt?.getTime()).toBe(future.setHours(0,0,0,0));
|
||||
});
|
||||
|
||||
it("should return MISSED if it is the wrong day", async () => {
|
||||
const now = new Date("2024-01-15T12:00:00Z"); // Monday (1)
|
||||
setSystemTime(now);
|
||||
const past = new Date("2024-01-10T00:00:00Z"); // Wednesday (3) last week
|
||||
|
||||
mockFindFirst.mockResolvedValue({
|
||||
expiresAt: past,
|
||||
metadata: { examDay: 3, lastXp: "100" } // Registered for Wednesday
|
||||
});
|
||||
|
||||
const status = await examService.getExamStatus("1");
|
||||
expect(status.status).toBe(ExamStatus.MISSED);
|
||||
expect(status.examDay).toBe(3);
|
||||
});
|
||||
|
||||
it("should return AVAILABLE if it is the correct day", async () => {
|
||||
const now = new Date("2024-01-17T12:00:00Z"); // Wednesday (3)
|
||||
setSystemTime(now);
|
||||
const past = new Date("2024-01-10T00:00:00Z");
|
||||
|
||||
mockFindFirst.mockResolvedValue({
|
||||
expiresAt: past,
|
||||
metadata: { examDay: 3, lastXp: "100" }
|
||||
});
|
||||
|
||||
const status = await examService.getExamStatus("1");
|
||||
expect(status.status).toBe(ExamStatus.AVAILABLE);
|
||||
expect(status.examDay).toBe(3);
|
||||
expect(status.lastXp).toBe(100n);
|
||||
});
|
||||
});
|
||||
|
||||
describe("registerForExam", () => {
|
||||
it("should create user and timer correctly", async () => {
|
||||
const now = new Date("2024-01-15T12:00:00Z"); // Monday (1)
|
||||
setSystemTime(now);
|
||||
|
||||
const { userService } = await import("@shared/modules/user/user.service");
|
||||
(userService.getOrCreateUser as any).mockResolvedValue({ id: 1n, xp: 500n });
|
||||
|
||||
const result = await examService.registerForExam("1", "testuser");
|
||||
|
||||
expect(result.status).toBe(ExamStatus.NOT_REGISTERED);
|
||||
expect(result.examDay).toBe(1);
|
||||
|
||||
expect(mockInsert).toHaveBeenCalledWith(userTimers);
|
||||
expect(mockInsert).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
});
|
||||
|
||||
describe("takeExam", () => {
  it("should return NOT_REGISTERED if not registered", async () => {
    mockFindFirst.mockResolvedValueOnce({ id: 1n }) // user check
      .mockResolvedValueOnce(undefined); // timer check

    const result = await examService.takeExam("1");
    expect(result.status).toBe(ExamStatus.NOT_REGISTERED);
  });

  it("should handle missed exam and schedule for next exam day", async () => {
    const now = new Date("2024-01-15T12:00:00Z"); // Monday (1)
    setSystemTime(now);
    const past = new Date("2024-01-10T00:00:00Z");

    mockFindFirst.mockResolvedValueOnce({ id: 1n, xp: 600n }) // user
      .mockResolvedValueOnce({
        expiresAt: past,
        metadata: { examDay: 3, lastXp: "500" } // Registered for Wednesday
      }); // timer

    const result = await examService.takeExam("1");

    expect(result.status).toBe(ExamStatus.MISSED);
    expect(result.examDay).toBe(3);

    // Should set next exam to next Wednesday
    // Monday (1) + 2 days = Wednesday (3)
    const expected = new Date("2024-01-17T00:00:00Z");
    expect(result.nextExamAt!.getTime()).toBe(expected.getTime());

    expect(mockUpdate).toHaveBeenCalledWith(userTimers);
  });

  it("should calculate rewards and update state when passed", async () => {
    const now = new Date("2024-01-17T12:00:00Z"); // Wednesday (3)
    setSystemTime(now);
    const past = new Date("2024-01-10T00:00:00Z");

    mockFindFirst.mockResolvedValueOnce({ id: 1n, username: "testuser", xp: 1000n, balance: 0n }) // user
      .mockResolvedValueOnce({
        expiresAt: past,
        metadata: { examDay: 3, lastXp: "500" }
      }); // timer

    const result = await examService.takeExam("1");

    expect(result.status).toBe(ExamStatus.AVAILABLE);
    expect(result.xpDiff).toBe(500n);
    // Multiplier is between 1.0 and 2.0 based on mock config
    expect(result.multiplier).toBeGreaterThanOrEqual(1.0);
    expect(result.multiplier).toBeLessThanOrEqual(2.0);
    expect(result.reward).toBeGreaterThanOrEqual(500n);
    expect(result.reward).toBeLessThanOrEqual(1000n);

    expect(mockUpdate).toHaveBeenCalledWith(userTimers);
    expect(mockUpdate).toHaveBeenCalledWith(users);

    // Verify transaction
    expect(mockInsert).toHaveBeenCalledWith(transactions);
    expect(mockValues).toHaveBeenCalledWith(expect.objectContaining({
      amount: result.reward,
      userId: 1n,
      type: expect.anything()
    }));
  });
});
});
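The "Monday (1) + 2 days = Wednesday (3)" expectation above can be checked in isolation. This is a sketch, not project code: `daysUntil` is a hypothetical helper name mirroring the modulo expression used in `exam.service.ts` (`(examDay - currentDay + 7) % 7`, with 0 mapped to 7):

```typescript
// Days from `currentDay` until the next `examDay` (JS getDay(): 0=Sun .. 6=Sat).
// Mirrors the scheduling arithmetic in takeExam(); a same-day result rolls a full week.
function daysUntil(currentDay: number, examDay: number): number {
  const d = (examDay - currentDay + 7) % 7;
  return d === 0 ? 7 : d;
}

console.log(daysUntil(1, 3)); // Monday -> Wednesday: 2
console.log(daysUntil(5, 3)); // Friday -> next Wednesday: 5
console.log(daysUntil(3, 3)); // same weekday: 7
```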
262 shared/modules/economy/exam.service.ts Normal file
@@ -0,0 +1,262 @@
import { users, userTimers, transactions } from "@db/schema";
import { eq, and, sql } from "drizzle-orm";
import { TimerType, TransactionType } from "@shared/lib/constants";
import { config } from "@shared/lib/config";
import { withTransaction } from "@/lib/db";
import type { Transaction } from "@shared/lib/types";
import { UserError } from "@shared/lib/errors";

const EXAM_TIMER_TYPE = TimerType.EXAM_SYSTEM;
const EXAM_TIMER_KEY = 'default';

export interface ExamMetadata {
  examDay: number;
  lastXp: string;
}

export enum ExamStatus {
  NOT_REGISTERED = 'NOT_REGISTERED',
  COOLDOWN = 'COOLDOWN',
  MISSED = 'MISSED',
  AVAILABLE = 'AVAILABLE',
}

export interface ExamActionResult {
  status: ExamStatus;
  nextExamAt?: Date;
  reward?: bigint;
  xpDiff?: bigint;
  multiplier?: number;
  examDay?: number;
}

export const examService = {
  /**
   * Get the current exam status for a user.
   */
  async getExamStatus(userId: string, tx?: Transaction) {
    return await withTransaction(async (txFn) => {
      const timer = await txFn.query.userTimers.findFirst({
        where: and(
          eq(userTimers.userId, BigInt(userId)),
          eq(userTimers.type, EXAM_TIMER_TYPE),
          eq(userTimers.key, EXAM_TIMER_KEY)
        )
      });

      if (!timer) {
        return { status: ExamStatus.NOT_REGISTERED };
      }

      const now = new Date();
      const expiresAt = new Date(timer.expiresAt);
      expiresAt.setHours(0, 0, 0, 0);

      if (now < expiresAt) {
        return {
          status: ExamStatus.COOLDOWN,
          nextExamAt: expiresAt
        };
      }

      const metadata = timer.metadata as unknown as ExamMetadata;
      const currentDay = now.getDay();

      if (currentDay !== metadata.examDay) {
        return {
          status: ExamStatus.MISSED,
          nextExamAt: expiresAt,
          examDay: metadata.examDay
        };
      }

      return {
        status: ExamStatus.AVAILABLE,
        examDay: metadata.examDay,
        lastXp: BigInt(metadata.lastXp || "0")
      };
    }, tx);
  },

  /**
   * Register a user for the first time.
   */
  async registerForExam(userId: string, username: string, tx?: Transaction): Promise<ExamActionResult> {
    return await withTransaction(async (txFn) => {
      // Ensure user exists
      const { userService } = await import("@shared/modules/user/user.service");
      const user = await userService.getOrCreateUser(userId, username, txFn);
      if (!user) throw new Error("Failed to get or create user.");

      const now = new Date();
      const currentDay = now.getDay();

      // Set next exam to next week
      const nextExamDate = new Date(now);
      nextExamDate.setDate(now.getDate() + 7);
      nextExamDate.setHours(0, 0, 0, 0);

      const metadata: ExamMetadata = {
        examDay: currentDay,
        lastXp: (user.xp ?? 0n).toString()
      };

      await txFn.insert(userTimers).values({
        userId: BigInt(userId),
        type: EXAM_TIMER_TYPE,
        key: EXAM_TIMER_KEY,
        expiresAt: nextExamDate,
        metadata: metadata
      });

      return {
        status: ExamStatus.NOT_REGISTERED,
        nextExamAt: nextExamDate,
        examDay: currentDay
      };
    }, tx);
  },

  /**
   * Take the exam. Handles missed exams and reward calculations.
   */
  async takeExam(userId: string, tx?: Transaction): Promise<ExamActionResult> {
    return await withTransaction(async (txFn) => {
      const user = await txFn.query.users.findFirst({
        where: eq(users.id, BigInt(userId))
      });

      if (!user) throw new Error("User not found");

      const timer = await txFn.query.userTimers.findFirst({
        where: and(
          eq(userTimers.userId, BigInt(userId)),
          eq(userTimers.type, EXAM_TIMER_TYPE),
          eq(userTimers.key, EXAM_TIMER_KEY)
        )
      });

      if (!timer) {
        return { status: ExamStatus.NOT_REGISTERED };
      }

      const now = new Date();
      const expiresAt = new Date(timer.expiresAt);
      expiresAt.setHours(0, 0, 0, 0);

      if (now < expiresAt) {
        return {
          status: ExamStatus.COOLDOWN,
          nextExamAt: expiresAt
        };
      }

      const metadata = timer.metadata as unknown as ExamMetadata;
      const examDay = metadata.examDay;
      const currentDay = now.getDay();

      if (currentDay !== examDay) {
        // Missed exam logic
        let daysUntil = (examDay - currentDay + 7) % 7;
        if (daysUntil === 0) daysUntil = 7;

        const nextExamDate = new Date(now);
        nextExamDate.setDate(now.getDate() + daysUntil);
        nextExamDate.setHours(0, 0, 0, 0);

        const newMetadata: ExamMetadata = {
          examDay: examDay,
          lastXp: (user.xp ?? 0n).toString()
        };

        await txFn.update(userTimers)
          .set({
            expiresAt: nextExamDate,
            metadata: newMetadata
          })
          .where(and(
            eq(userTimers.userId, BigInt(userId)),
            eq(userTimers.type, EXAM_TIMER_TYPE),
            eq(userTimers.key, EXAM_TIMER_KEY)
          ));

        return {
          status: ExamStatus.MISSED,
          nextExamAt: nextExamDate,
          examDay: examDay
        };
      }

      // Reward Calculation
      const lastXp = BigInt(metadata.lastXp || "0");
      const currentXp = user.xp ?? 0n;
      const diff = currentXp - lastXp;

      const multMin = config.economy.exam.multMin;
      const multMax = config.economy.exam.multMax;
      const multiplier = Math.random() * (multMax - multMin) + multMin;

      let reward = 0n;
      if (diff > 0n) {
        // Use scaled BigInt arithmetic to avoid precision loss with large XP values
        const scaledMultiplier = BigInt(Math.round(multiplier * 10000));
        reward = (diff * scaledMultiplier) / 10000n;
      }

      const nextExamDate = new Date(now);
      nextExamDate.setDate(now.getDate() + 7);
      nextExamDate.setHours(0, 0, 0, 0);

      const newMetadata: ExamMetadata = {
        examDay: examDay,
        lastXp: currentXp.toString()
      };

      // Update Timer
      await txFn.update(userTimers)
        .set({
          expiresAt: nextExamDate,
          metadata: newMetadata
        })
        .where(and(
          eq(userTimers.userId, BigInt(userId)),
          eq(userTimers.type, EXAM_TIMER_TYPE),
          eq(userTimers.key, EXAM_TIMER_KEY)
        ));

      // Add Currency
      if (reward > 0n) {
        await txFn.update(users)
          .set({
            balance: sql`${users.balance} + ${reward}`
          })
          .where(eq(users.id, BigInt(userId)));

        // Add Transaction Record
        await txFn.insert(transactions).values({
          userId: BigInt(userId),
          amount: reward,
          type: TransactionType.EXAM_REWARD,
          description: `Weekly exam reward (XP Diff: ${diff})`,
        });
      }

      // Record dashboard event
      const { dashboardService } = await import("@shared/modules/dashboard/dashboard.service");
      await dashboardService.recordEvent({
        type: 'success',
        message: `${user.username} passed their exam: ${reward.toLocaleString()} AU`,
        icon: '🎓'
      });

      return {
        status: ExamStatus.AVAILABLE,
        nextExamAt: nextExamDate,
        reward,
        xpDiff: diff,
        multiplier,
        examDay
      };
    }, tx);
  }
};
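The reward path above multiplies a `bigint` XP delta by a float multiplier via four-decimal fixed-point scaling. A minimal sketch of just that step (`scaleReward` is a hypothetical name, not part of the service):

```typescript
// Scale a bigint by a float without routing the bigint through Number
// (which loses precision past 2^53): lift the multiplier into a
// x10000 fixed-point bigint, multiply, then divide the scale back out.
function scaleReward(diff: bigint, multiplier: number): bigint {
  const scaledMultiplier = BigInt(Math.round(multiplier * 10000));
  return (diff * scaledMultiplier) / 10000n;
}

console.log(scaleReward(500n, 1.5));                      // 750n
console.log(scaleReward(9_007_199_254_740_993n, 1.0001)); // exact, beyond Number's safe range
```

BigInt division truncates toward zero, so rewards round down, which is consistent with the inclusive bounds asserted in the tests.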
@@ -13,6 +13,7 @@
    "moduleResolution": "bundler",
    "allowImportingTsExtensions": true,
    "verbatimModuleSyntax": true,
    "resolveJsonModule": true,
    "noEmit": true,
    // Best practices
    "strict": true,
@@ -6,6 +6,7 @@

 import { serve, spawn, type Subprocess } from "bun";
 import { join, resolve, dirname } from "path";
+import { logger } from "@shared/lib/logger";

 export interface WebServerConfig {
   port?: number;

@@ -39,7 +40,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
 const isDev = process.env.NODE_ENV !== "production";

 if (isDev) {
-  console.log("🛠️ Starting Web Bundler in Watch Mode...");
+  logger.info("web", "Starting Web Bundler in Watch Mode...");
   try {
     buildProcess = spawn(["bun", "run", "build.ts", "--watch"], {
       cwd: webRoot,

@@ -47,7 +48,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
       stderr: "inherit",
     });
   } catch (error) {
-    console.error("Failed to start build process:", error);
+    logger.error("web", "Failed to start build process", error);
   }
 }

@@ -75,7 +76,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
 // Security Check: limit concurrent connections
 const currentConnections = server.pendingWebSockets;
 if (currentConnections >= MAX_CONNECTIONS) {
-  console.warn(`⚠️ [WS] Connection rejected: limit reached (${currentConnections}/${MAX_CONNECTIONS})`);
+  logger.warn("web", `Connection rejected: limit reached (${currentConnections}/${MAX_CONNECTIONS})`);
   return new Response("Connection limit reached", { status: 429 });
 }

@@ -94,7 +95,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
   const stats = await getFullDashboardStats();
   return Response.json(stats);
 } catch (error) {
-  console.error("Error fetching dashboard stats:", error);
+  logger.error("web", "Error fetching dashboard stats", error);
   return Response.json(
     { error: "Failed to fetch dashboard statistics" },
     { status: 500 }

@@ -124,7 +125,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
   const activity = await activityPromise;
   return Response.json(activity);
 } catch (error) {
-  console.error("Error fetching activity stats:", error);
+  logger.error("web", "Error fetching activity stats", error);
   return Response.json(
     { error: "Failed to fetch activity statistics" },
     { status: 500 }

@@ -160,7 +161,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
     return Response.json(result);
   }
 } catch (error) {
-  console.error("Error executing administrative action:", error);
+  logger.error("web", "Error executing administrative action", error);
   return Response.json(
     { error: "Failed to execute administrative action" },
     { status: 500 }

@@ -196,7 +197,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
     return Response.json({ success: true });
   }
 } catch (error) {
-  console.error("Settings error:", error);
+  logger.error("web", "Settings error", error);
   return Response.json(
     { error: "Failed to process settings request", details: error instanceof Error ? error.message : String(error) },
     { status: 400 }

@@ -235,7 +236,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web

   return Response.json({ roles, channels, commands });
 } catch (error) {
-  console.error("Error fetching settings meta:", error);
+  logger.error("web", "Error fetching settings meta", error);
   return Response.json(
     { error: "Failed to fetch metadata" },
     { status: 500 }

@@ -294,7 +295,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
 websocket: {
   open(ws) {
     ws.subscribe("dashboard");
-    console.log(`🔌 [WS] Client connected. Total: ${server.pendingWebSockets}`);
+    logger.debug("web", `Client connected. Total: ${server.pendingWebSockets}`);

     // Send initial stats
     getFullDashboardStats().then(stats => {

@@ -308,7 +309,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
     const stats = await getFullDashboardStats();
     server.publish("dashboard", JSON.stringify({ type: "STATS_UPDATE", data: stats }));
   } catch (error) {
-    console.error("Error in stats broadcast:", error);
+    logger.error("web", "Error in stats broadcast", error);
   }
 }, 5000);
 }

@@ -319,7 +320,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web

 // Defense-in-depth: redundant length check before parsing
 if (messageStr.length > MAX_PAYLOAD_BYTES) {
-  console.error("❌ [WS] Payload exceeded maximum limit");
+  logger.error("web", "Payload exceeded maximum limit");
   return;
 }

@@ -328,7 +329,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
 const parsed = WsMessageSchema.safeParse(rawData);

 if (!parsed.success) {
-  console.error("❌ [WS] Invalid message format:", parsed.error.issues);
+  logger.error("web", "Invalid message format", parsed.error.issues);
   return;
 }

@@ -336,12 +337,12 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
     ws.send(JSON.stringify({ type: "PONG" }));
   }
 } catch (e) {
-  console.error("❌ [WS] Failed to handle message:", e instanceof Error ? e.message : "Malformed JSON");
+  logger.error("web", "Failed to handle message", e);
 }
 },
 close(ws) {
   ws.unsubscribe("dashboard");
-  console.log(`🔌 [WS] Client disconnected. Total remaining: ${server.pendingWebSockets}`);
+  logger.debug("web", `Client disconnected. Total remaining: ${server.pendingWebSockets}`);

   // Stop broadcast interval if no clients left
   if (server.pendingWebSockets === 0 && statsBroadcastInterval) {

@@ -382,7 +383,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
 // Helper to unwrap result or return default
 const unwrap = <T>(result: PromiseSettledResult<T>, defaultValue: T, name: string): T => {
   if (result.status === 'fulfilled') return result.value;
-  console.error(`Failed to fetch ${name}:`, result.reason);
+  logger.error("web", `Failed to fetch ${name}`, result.reason);
   return defaultValue;
 };

@@ -403,7 +404,7 @@ export async function createWebServer(config: WebServerConfig = {}): Promise<Web
 const recentEvents = unwrap(results[4], [], 'recentEvents');
 const totalItems = unwrap(results[5], 0, 'totalItems');
 const activeLootdrops = unwrap(results[6], [], 'activeLootdrops');
-const leaderboards = unwrap(results[7], { topLevels: [], topWealth: [] }, 'leaderboards');
+const leaderboards = unwrap(results[7], { topLevels: [], topWealth: [], topNetWorth: [] }, 'leaderboards');
 const lootdropState = unwrap(results[8], undefined, 'lootdropState');

 return {
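The `unwrap` helper in the last hunks is a reusable pattern: run independent fetches with `Promise.allSettled` so one failure degrades to a default instead of rejecting the whole dashboard payload. A self-contained sketch (the names and sample promises here are illustrative, not from the repository; the real code logs via `logger` rather than `console`):

```typescript
// Unwrap a settled result, substituting a default (and logging) on rejection.
const unwrap = <T>(result: PromiseSettledResult<T>, defaultValue: T, name: string): T => {
  if (result.status === "fulfilled") return result.value;
  console.error(`Failed to fetch ${name}:`, result.reason);
  return defaultValue;
};

async function demo() {
  const results = await Promise.allSettled([
    Promise.resolve(42),                         // succeeds
    Promise.reject<number[]>(new Error("down")), // fails, falls back to []
  ]);
  const count = unwrap(results[0], 0, "count");
  const items = unwrap(results[1], [], "items");
  console.log(count, items.length);
}

demo();
```

The key design point is that `allSettled` never rejects, so the response shape stays stable even when one backing query is down.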