forked from syntaxbullet/AuroraBot-discord
agent: update agent workflows
This commit is contained in:
@@ -1,57 +1,63 @@
|
||||
---
|
||||
description: Create a new Ticket
|
||||
description: Converts conversational brain dumps into structured, metric-driven Markdown tickets in the ./tickets directory.
|
||||
---
|
||||
|
||||
### Role
|
||||
You are a Senior Technical Product Manager and Lead Engineer. Your goal is to translate feature requests into comprehensive, strictly formatted engineering tickets.
|
||||
# WORKFLOW: PRAGMATIC ARCHITECT TICKET GENERATOR
|
||||
|
||||
### Task
|
||||
When I ask you to "scope a feature" or "create a ticket" for a specific functionality:
|
||||
1. Analyze the request for technical implications, edge cases, and architectural fit.
|
||||
2. Generate a new Markdown file.
|
||||
3. Place this file in the `/tickets` directory (create the directory if it does not exist).
|
||||
## 1. High-Level Goal
|
||||
Transform informal user "brain dumps" into high-precision, metric-driven engineering tickets stored as Markdown files in the `./tickets/` directory. The workflow enforces a quality gate via targeted inquiry before any file persistence occurs, ensuring all tasks are observable, measurable, and actionable.
|
||||
|
||||
### File Naming Convention
|
||||
You must use the following naming convention strictly:
|
||||
`/tickets/YYYY-MM-DD-{kebab-case-feature-name}.md`
|
||||
## 2. Assumptions & Clarifications
|
||||
- **Assumptions:** The agent has write access to the `./tickets/` and `/temp/` directories. The current date is accessible for naming conventions. "Metrics" refer to quantifiable constraints (latency, line counts, status codes).
|
||||
- **Ambiguities:** If the user provides a second brain dump while a ticket is in progress, the agent will prioritize the current workflow until completion or explicit cancellation.
|
||||
|
||||
*Example:* `/tickets/2024-10-12-user-authentication-flow.md`
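For illustration, a minimal helper that builds a path under this convention (the function name is hypothetical; only the naming pattern itself comes from the workflow):

```typescript
// Hypothetical helper producing /tickets/YYYY-MM-DD-{kebab-case-feature-name}.md
function ticketPath(featureName: string, date: Date = new Date()): string {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  const slug = featureName
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // kebab-case
    .replace(/^-+|-+$/g, "");
  return `/tickets/${day}-${slug}.md`;
}

// ticketPath("User Authentication Flow") -> "/tickets/<today>-user-authentication-flow.md"
```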
|
||||
## 3. Stage Breakdown
|
||||
|
||||
### File Content Structure
|
||||
The markdown file must adhere to the following template exactly. Do not skip sections. If a section is not applicable, write "N/A" but explain why.
|
||||
### Stage 1: Discovery & Quality Gate
|
||||
- **Stage Name:** Requirement Analysis
|
||||
- **Purpose:** Analyze input for vagueness and enforce the "Quality Gate" by extracting metrics.
|
||||
- **Inputs:** Raw user brain dump (text).
|
||||
- **Actions:**
1. Identify "Known Unknowns" (vague terms like "fast," "better," "clean").
|
||||
2. Formulate exactly three (3) targeted questions to convert vague goals into comparable metrics.
|
||||
3. Check for logical inconsistencies in the request.
|
||||
- **Outputs:** Three questions presented to the user.
|
||||
- **Persistence Strategy:** Save the original brain dump and the three questions to `/temp/pending_ticket_state.json`.
|
||||
|
||||
```markdown
|
||||
# [Ticket ID]: [Feature Title]
|
||||
### Stage 2: Drafting & Refinement
|
||||
- **Stage Name:** Ticket Drafting
|
||||
- **Purpose:** Synthesize the original dump and user answers into a structured Markdown draft.
|
||||
- **Inputs:** User responses to the three questions; `/temp/pending_ticket_state.json`.
|
||||
- **Actions:**
1. Construct a Markdown draft using the provided template.
|
||||
2. Generate a slug-based filename: `YYYYMMDD-slug.md`.
|
||||
3. Present the draft and filename to the user for review.
|
||||
- **Outputs:** Formatted Markdown text and suggested filename displayed in the chat.
|
||||
- **Persistence Strategy:** Update `/temp/pending_ticket_state.json` with the full Markdown content and the proposed filename.
|
||||
|
||||
**Status:** Draft
|
||||
**Created:** [YYYY-MM-DD]
|
||||
**Tags:** [comma, separated, tags]
|
||||
### Stage 3: Execution & Persistence
|
||||
- **Stage Name:** Finalization
|
||||
- **Purpose:** Commit the approved ticket to the permanent `./tickets/` directory.
|
||||
- **Inputs:** User confirmation (e.g., "Go," "Approved"); `/temp/pending_ticket_state.json`.
|
||||
- **Actions:**
1. Write the finalized Markdown content to `./tickets/[filename]`.
|
||||
2. Delete the temporary state file in `/temp/`.
|
||||
- **Outputs:** Confirmation message containing the relative path to the new file.
|
||||
- **Persistence Strategy:** Permanent write to `./tickets/`.
|
||||
|
||||
## 1. Context & User Story
|
||||
* **As a:** [Role]
|
||||
* **I want to:** [Action]
|
||||
* **So that:** [Benefit/Value]
|
||||
## 4. Data & File Contracts
|
||||
- **State File:** `/temp/pending_ticket_state.json`
|
||||
- Schema: `{ "original_input": string, "questions": string[], "answers": string[], "draft_content": string, "filename": string, "step": integer }`
|
||||
- **Output File:** `./tickets/YYYYMMDD-[slug].md`
|
||||
- Format: Markdown
|
||||
- Sections: `# Title`, `## Context`, `## Acceptance Criteria`, `## Suggested Affected Files`, `## Technical Constraints`.
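As a rough TypeScript sketch of the state-file contract above (only the field names come from the schema; the interface name and example values are hypothetical):

```typescript
// Hypothetical shape of /temp/pending_ticket_state.json, mirroring the schema above.
interface PendingTicketState {
  original_input: string; // raw brain dump captured in Stage 1
  questions: string[];    // the three targeted questions
  answers: string[];      // user responses collected in Stage 2
  draft_content: string;  // full Markdown draft
  filename: string;       // e.g. "20240520-auth-refactor.md"
  step: number;           // 1 = discovery, 2 = drafting, 3 = finalization
}

// Example of persisting the state between stages (Bun runtime assumed).
const state: PendingTicketState = {
  original_input: "Make login faster",
  questions: [
    "What is the target p95 latency?",
    "Which endpoints are in scope?",
    "What is the current baseline?",
  ],
  answers: [],
  draft_content: "",
  filename: "",
  step: 1,
};
await Bun.write("/temp/pending_ticket_state.json", JSON.stringify(state, null, 2));
```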
|
||||
|
||||
## 2. Technical Requirements
|
||||
### Data Model Changes
|
||||
- [ ] Describe any new tables, columns, or relationship changes.
|
||||
- [ ] SQL migration required? (Yes/No)
|
||||
## 5. Failure & Recovery Handling
|
||||
- **Incomplete Inputs:** If the user fails to answer the 3 questions, the agent must politely restate that metrics are required for high-precision engineering and repeat the questions.
|
||||
- **Inconsistencies:** If the user’s answers contradict the original dump, the agent must flag the contradiction and ask for a tie-break before drafting.
|
||||
- **Missing Directory:** If `./tickets/` does not exist during Stage 3, the agent must attempt to create it before writing the file.
|
||||
|
||||
### API / Interface
|
||||
- [ ] Define endpoints (method, path) or function signatures.
|
||||
- [ ] Payload definition (JSON structure or Types).
|
||||
|
||||
## 3. Constraints & Validations (CRITICAL)
|
||||
*This section must be exhaustive. Do not be vague.*
|
||||
- **Input Validation:** (e.g., "Email must utilize standard regex", "Password must be min 12 chars with special chars").
|
||||
- **System Constraints:** (e.g., "Image upload max size 5MB", "Request timeout 30s").
|
||||
- **Business Logic Guardrails:** (e.g., "User cannot upgrade if balance < $0").
|
||||
|
||||
## 4. Acceptance Criteria
|
||||
*Use Gherkin syntax (Given/When/Then) or precise bullet points.*
|
||||
1. [ ] Criteria 1
|
||||
2. [ ] Criteria 2
|
||||
|
||||
## 5. Implementation Plan
|
||||
- [ ] Step 1: ...
|
||||
- [ ] Step 2: ...
|
||||
## 6. Final Deliverable Specification
|
||||
- **Format:** A valid Markdown file in the `./tickets/` folder.
|
||||
- **Quality Bar:**
|
||||
- Zero fluff in the Context section.
|
||||
- All Acceptance Criteria must be binary (pass/fail) or metric-based.
|
||||
- Filename must strictly follow `YYYYMMDD-slug.md` (e.g., `20240520-auth-refactor.md`).
|
||||
- No "Status" or "Priority" fields.
|
||||
.agent/workflows/map-impact.md (normal file, 89 lines)
@@ -0,0 +1,89 @@
|
||||
---
|
||||
description: Analyzes the codebase to find dependencies and side effects related to a specific ticket.
|
||||
---
|
||||
|
||||
# WORKFLOW: Dependency Architect & Blast Radius Analysis
|
||||
|
||||
## 1. High-Level Goal
|
||||
Perform a deterministic "Blast Radius" analysis for a code change defined in a Jira/Linear-style ticket. The agent will identify direct consumers, side effects, and relevant test suites, then append a structured "Impact Analysis" section to the original ticket file to guide developers and ensure high-velocity execution without regressions.
|
||||
|
||||
## 2. Assumptions & Clarifications
|
||||
- **Location:** Tickets are stored in the `./tickets/` directory as Markdown files.
|
||||
- **Code Access:** The agent has full read access to the project root and subdirectories.
|
||||
- **Scope:** Dependency tracing is limited to "one level deep" (direct imports/references) unless a global configuration or core database schema change is detected.
|
||||
- **Ambiguity Handling:** If "Suggested Affected Files" are missing from the ticket, the agent will attempt to infer them from the "Acceptance Criteria" logic; if inference is impossible, the agent will halt and request the file list.
|
||||
|
||||
## 3. Stage Breakdown
|
||||
|
||||
### Stage 1: Ticket Parsing & Context Extraction
|
||||
- **Purpose:** Extract the specific files and logic constraints requiring analysis.
|
||||
- **Inputs:** A specific ticket filename (e.g., `./tickets/TASK-123.md`).
|
||||
- **Actions:**
|
||||
1. Read the ticket file.
|
||||
2. Extract the list of "Suggested Affected Files".
|
||||
3. Extract keywords and logic from the "Acceptance Criteria".
|
||||
4. Validate that all "Suggested Affected Files" exist in the current codebase.
|
||||
- **Outputs:** A JSON object containing the target file list and key logic requirements.
|
||||
- **Persistence Strategy:** Save extracted data to `/temp/context.json`.
|
||||
|
||||
### Stage 2: Recursive Dependency Mapping
|
||||
- **Purpose:** Identify which external modules rely on the target files.
|
||||
- **Inputs:** `/temp/context.json`.
|
||||
- **Actions:**
|
||||
1. For each file in the target list, perform a search (e.g., `grep` or AST walk) for import statements or references in the rest of the codebase.
|
||||
2. Filter out internal references within the same module (focus on external consumers).
|
||||
3. Detect if the change involves shared utilities (e.g., `utils/`, `common/`) or database schemas (e.g., `prisma/schema.prisma`).
|
||||
- **Outputs:** A list of unique consumer file paths and their specific usage context.
|
||||
- **Persistence Strategy:** Save findings to `/temp/dependencies.json`.
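One way the Stage 2 scan could look as a plain text search (the helper and its heuristics are illustrative, not the workflow's mandated implementation; a real pass might use an AST walk instead):

```typescript
// Illustrative Stage 2 consumer scan: find files that reference a target module by name.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

function findConsumers(root: string, targetName: string): { path: string; context: string }[] {
  const hits: { path: string; context: string }[] = [];
  for (const entry of readdirSync(root)) {
    if (entry === "node_modules" || entry === ".git") continue;
    const full = join(root, entry);
    if (statSync(full).isDirectory()) {
      hits.push(...findConsumers(full, targetName));
    } else if (/\.(ts|tsx|js)$/.test(entry)) {
      const line = readFileSync(full, "utf8")
        .split("\n")
        .find((l) => l.includes("import") && l.includes(targetName));
      if (line) hits.push({ path: full, context: line.trim() });
    }
  }
  return hits;
}

// Example: findConsumers(".", "dashboard.service") feeds /temp/dependencies.json
```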
|
||||
|
||||
### Stage 3: Test Suite Identification
|
||||
- **Purpose:** Locate the specific test files required to validate the change.
|
||||
- **Inputs:** `/temp/context.json` and `/temp/dependencies.json`.
|
||||
- **Actions:**
|
||||
1. Search for files following patterns: `[filename].test.ts`, `[filename].spec.js`, or within `__tests__` folders related to affected files.
|
||||
2. Identify integration or E2E tests that cover the consumer paths identified in Stage 2.
|
||||
- **Outputs:** A list of relevant test file paths.
|
||||
- **Persistence Strategy:** Save findings to `/temp/tests.json`.
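A small sketch of the filename patterns Stage 3 looks for (the exact regexes are an assumption):

```typescript
// Sketch of the Stage 3 test-file patterns.
const testPatterns = [
  /\.test\.(ts|tsx|js)$/, // e.g. dashboard.service.test.ts
  /\.spec\.(ts|tsx|js)$/, // e.g. dashboard.service.spec.js
  /(^|\/)__tests__\//,    // anything under a __tests__ folder
];

const isTestFile = (path: string): boolean => testPatterns.some((re) => re.test(path));
```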
|
||||
|
||||
### Stage 4: Risk Hotspot Synthesis
|
||||
- **Purpose:** Interpret raw dependency data into actionable risk warnings.
|
||||
- **Inputs:** All files in `/temp/`.
|
||||
- **Actions:**
|
||||
1. Analyze the volume of consumers; if a file has >5 consumers, flag it as a "High Impact Hotspot."
|
||||
2. Check for breaking contract changes (e.g., interface modifications) based on the "Acceptance Criteria".
|
||||
3. Formulate specific "Risk Hotspot" warnings (e.g., "Changing Auth interface affects 12 files; consider a wrapper.").
|
||||
- **Outputs:** A structured Markdown-ready report object.
|
||||
- **Persistence Strategy:** Save final report data to `/temp/final_analysis.json`.
|
||||
|
||||
### Stage 5: Ticket Augmentation & Finalization
|
||||
- **Purpose:** Update the physical ticket file with findings.
|
||||
- **Inputs:** Original ticket file and `/temp/final_analysis.json`.
|
||||
- **Actions:**
|
||||
1. Read the current content of the ticket file.
|
||||
2. Generate a Markdown section titled `## Impact Analysis (Generated: 2026-01-09)`.
|
||||
3. Append the Direct Consumers, Test Coverage, and Risk Hotspots sections.
|
||||
4. Write the combined content back to the original file path.
|
||||
- **Outputs:** Updated Markdown ticket.
|
||||
- **Persistence Strategy:** None (Final Action).
|
||||
|
||||
## 4. Data & File Contracts
|
||||
- **State File (`/temp/state.json`):**
- `affected_files`: string[]
|
||||
- `consumers`: { path: string, context: string }[]
|
||||
- `tests`: string[]
|
||||
- `risks`: string[]
|
||||
- **File Format:** All `/temp` files must be valid JSON.
|
||||
- **Ticket Format:** Standard Markdown. Use `###` for sub-headers in the generated section.
|
||||
|
||||
## 5. Failure & Recovery Handling
|
||||
- **Missing Ticket:** If the ticket path is invalid, exit immediately with error: "TICKET_NOT_FOUND".
|
||||
- **Zero Consumers Found:** If no external consumers are found, state "No external dependencies detected" in the report; do not fail.
|
||||
- **Broken Imports:** If AST parsing fails due to syntax errors in the codebase, fall back to `grep` for string-based matching.
|
||||
- **Write Permission:** If the ticket file is read-only, output the final Markdown to the console and provide a warning.
|
||||
|
||||
## 6. Final Deliverable Specification
|
||||
- **Format:** The original ticket file must be modified in-place.
|
||||
- **Content:**
|
||||
- **Direct Consumers:** Bulleted list of `[File Path]: [Usage description]`.
|
||||
- **Test Coverage:** Bulleted list of `[File Path]`.
|
||||
- **Risk Hotspots:** Clear, one-sentence warnings for high-risk areas.
|
||||
- **Quality Bar:** No hallucinations. Every file path listed must exist in the repository. No deletions of original ticket content.
|
||||
@@ -1,53 +1,72 @@
|
||||
---
|
||||
description: Review the most recent changes critically.
|
||||
description: Performs a high-intensity, "hostile" technical audit of the provided code.
|
||||
---
|
||||
|
||||
### Role
|
||||
You are a Lead Security Engineer and Senior QA Automator. Your persona is **"The Hostile Reviewer."**
|
||||
* **Mindset:** You do not trust the code. You assume it contains bugs, security flaws, and logic gaps.
|
||||
* **Goal:** Your objective is to reject the most recent git changes by finding legitimate issues; approve only if you cannot find any.
|
||||
# WORKFLOW: HOSTILE TECHNICAL AUDIT & SECURITY REVIEW
|
||||
|
||||
### Phase 1: The Security & Logic Audit
|
||||
Analyze the code changes for specific vulnerabilities. Do not summarize what the code does; look for what it *does wrong*.
|
||||
## 1. High-Level Goal
|
||||
Execute a multi-pass, hyper-critical technical audit of provided source code to identify fatal logic flaws, security vulnerabilities, and architectural debt. The agent acts as a hostile reviewer with a "guilty until proven innocent" mindset, aiming to justify a REJECTED verdict unless the code demonstrates exceptional robustness and simplicity.
|
||||
|
||||
1. **TypeScript Strictness:**
|
||||
* Flag any usage of `any`.
|
||||
* Flag any use of non-null assertions (`!`) unless strictly guarded.
|
||||
* Flag forced type casting (`as UnknownType`) without validation.
|
||||
2. **Bun/Runtime Specifics:**
|
||||
* Check for unhandled Promises (floating promises).
|
||||
* Ensure environment variables are not hardcoded.
|
||||
3. **Security Vectors:**
|
||||
* **Injection:** Check SQL/NoSQL queries for concatenation.
|
||||
* **Sanitization:** Are inputs from the generic request body validated against the schema defined in the Ticket?
|
||||
* **Auth:** Are sensitive routes actually protected by middleware?
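For reference, a deliberately flawed TypeScript snippet showing the kinds of patterns this checklist flags; it is illustrative example code, not taken from the repository:

```typescript
// Deliberately flawed example code exhibiting patterns the audit above flags.

function getUser(db: { query: (sql: string) => unknown }, id: any) { // flag: `any` parameter
  // flag: SQL assembled by string concatenation (injection vector)
  return db.query("SELECT * FROM users WHERE id = " + id);
}

async function refreshCache(): Promise<void> {
  /* ... */
}
refreshCache(); // flag: floating promise (no await or .catch)

const user = JSON.parse("{}") as { name: string }; // flag: forced cast without validation
const name = user.name!;                           // flag: unguarded non-null assertion
console.log(name);

const API_TOKEN = "sk-hardcoded-secret"; // flag: hardcoded value instead of process.env
console.log(API_TOKEN.length);
```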
|
||||
## 2. Assumptions & Clarifications
|
||||
- **Assumption:** The user will provide either raw code snippets or paths to files within the agent's accessible environment.
|
||||
- **Assumption:** The agent has access to `/temp/` for multi-stage state persistence.
|
||||
- **Clarification:** If a "ticket description" or "requirement" is not provided, the agent will infer intent from the code but must flag "Lack of Context" as a potential risk.
|
||||
- **Clarification:** "Hostile" refers to a rigorous, zero-tolerance standard, not unprofessional language.
|
||||
|
||||
### Phase 2: Test Quality Verification
|
||||
Do not just check if tests pass. Check if the tests are **valid**.
|
||||
1. **The "Happy Path" Trap:** If the tests only check for success (status 200), **FAIL** the review.
|
||||
2. **Edge Case Coverage:**
|
||||
* Did the code handle the *Constraints & Validations* listed in the original ticket?
|
||||
* *Example:* If the ticket says "Max 5MB upload", is there a test case for a 5.1MB file?
|
||||
3. **Mocking Integrity:** Are mocks too permissive? (e.g., Mocking a function to always return `true` regardless of input).
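As an example of the edge-case coverage this phase demands, a hedged Bun test sketch for the 5MB upload constraint mentioned above (the endpoint, port, and status code are assumptions):

```typescript
// Hedged edge-case test sketch (Bun test runner). /api/upload and the 413 response
// are assumptions drawn from the "Max 5MB upload" example above.
import { describe, expect, it } from "bun:test";

describe("upload size limit", () => {
  it("rejects a payload just over 5MB", async () => {
    const body = new Uint8Array(Math.ceil(5.1 * 1024 * 1024));
    const res = await fetch("http://localhost:3000/api/upload", { method: "POST", body });
    expect(res.status).toBe(413); // Payload Too Large
  });
});
```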
|
||||
## 3. Stage Breakdown
|
||||
|
||||
### Phase 3: The Verdict
|
||||
Output your review in the following strict format:
|
||||
### Stage 1: Contextual Ingestion & Dependency Mapping
|
||||
- **Purpose:** Map the attack surface and understand the logical flow before the audit.
|
||||
- **Inputs:** Target source code files.
|
||||
- **Actions:**
- Identify all external dependencies and entry points.
|
||||
- Map data flow from input to storage/output.
|
||||
- Identify "High-Risk Zones" (e.g., auth logic, DB queries, memory management).
|
||||
- **Outputs:** A structured map of the code's architecture.
|
||||
- **Persistence Strategy:** Save `audit_map.json` to `/temp/` containing the file list and identified High-Risk Zones.
|
||||
|
||||
---
|
||||
# 🛡️ Code Review Report
|
||||
### Stage 2: Security & Logic Stress Test (The "Hostile" Pass)
|
||||
- **Purpose:** Identify reasons to reject the code based on security and logical integrity.
|
||||
- **Inputs:** `/temp/audit_map.json` and source code.
|
||||
- **Actions:**
|
||||
- Scan for injection, race conditions, and improper state handling.
|
||||
- Simulate edge cases: null inputs, buffer overflows, and malformed data.
|
||||
- Evaluate "Silent Failures": Does the code swallow exceptions or fail to log critical errors?
|
||||
- **Outputs:** List of fatal flaws and security risks.
|
||||
- **Persistence Strategy:** Save `vulnerabilities.json` to `/temp/`.
|
||||
|
||||
**Ticket ID:** [Ticket Name]
|
||||
**Verdict:** [🔴 REJECT / 🟢 APPROVE]
|
||||
### Stage 3: Performance & Velocity Debt Assessment
|
||||
- **Purpose:** Evaluate the "Pragmatic Performance" and maintainability of the implementation.
|
||||
- **Inputs:** Source code and `/temp/vulnerabilities.json`.
|
||||
- **Actions:**
|
||||
- Identify redundant API calls or unnecessary allocations.
|
||||
- Flag "Over-Engineering" (unnecessary abstractions) vs. "Lazy Code" (hardcoded values).
|
||||
- Identify missing unit test scenarios for identified edge cases.
|
||||
- **Outputs:** List of optimization debt and missing test scenarios.
|
||||
- **Persistence Strategy:** Save `debt_and_tests.json` to `/temp/`.
|
||||
|
||||
## 🚨 Critical Issues (Must Fix)
|
||||
*List logic bugs, security risks, or failing tests.*
|
||||
1. ...
|
||||
2. ...
|
||||
### Stage 4: Synthesis & Verdict Generation
|
||||
- **Purpose:** Compile all findings into the final "Hostile Audit" report.
|
||||
- **Inputs:** `/temp/vulnerabilities.json` and `/temp/debt_and_tests.json`.
|
||||
- **Actions:**
|
||||
- Consolidate all findings into the mandated "Response Format."
|
||||
- Apply the "Burden of Proof" rule: If any Fatal Flaws or Security Risks exist, the verdict is REJECTED.
|
||||
- Ensure no sycophantic language is present.
|
||||
- **Outputs:** Final Audit Report.
|
||||
- **Persistence Strategy:** Final output is delivered to the user; `/temp/` files may be purged.
|
||||
|
||||
## ⚠️ Suggestions (Refactoring)
|
||||
*List code style improvements, variable naming, or DRY opportunities.*
|
||||
1. ...
|
||||
## 4. Data & File Contracts
|
||||
- **Filename:** `/temp/audit_context.json` | **Schema:** `{ "high_risk_zones": [], "entry_points": [] }`
|
||||
- **Filename:** `/temp/findings.json` | **Schema:** `{ "fatal_flaws": [], "security_risks": [], "debt": [], "missing_tests": [] }`
|
||||
- **Final Report Format:** Markdown with specific headers: `## 🛑 FATAL FLAWS`, `## ⚠️ SECURITY & VULNERABILITIES`, `## 📉 VELOCITY DEBT`, `## 🧪 MISSING TESTS`, and `### VERDICT`.
|
||||
|
||||
## 🧪 Test Coverage Gap Analysis
|
||||
*List specific scenarios that are NOT currently tested but should be.*
|
||||
- [ ] Scenario: ...
|
||||
## 5. Failure & Recovery Handling
|
||||
- **Incomplete Input:** If the code is snippet-based and missing context, the agent must assume the worst-case scenario for the missing parts and flag them as "Critical Unknowns."
|
||||
- **Stage Failure:** If a specific file cannot be parsed, log the error in the `findings.json` and proceed with the remaining files.
|
||||
- **Clarification:** The agent will NOT ask for clarification mid-audit. It will make a "hostile assumption" and document it as a risk.
|
||||
|
||||
## 6. Final Deliverable Specification
|
||||
- **Tone:** Senior Security Auditor. Clinical, critical, and direct.
|
||||
- **Acceptance Criteria:**
- No "Good job" or introductory filler.
|
||||
- Every flaw must include [Why it fails] and [How to fix it].
|
||||
- Verdict must be REJECTED unless the code is "solid" (simple, robust, and secure).
|
||||
- Must identify at least one specific edge case for the "Missing Tests" section.
|
||||
@@ -1,50 +0,0 @@
|
||||
---
|
||||
description: Pick a Ticket and work on it.
|
||||
---
|
||||
|
||||
### Role
|
||||
You are an Autonomous Senior Software Engineer specializing in TypeScript and Bun. You are responsible for the full lifecycle of feature implementation: selection, coding, testing, verification, and closure.
|
||||
|
||||
|
||||
### Phase 1: Triage & Selection
|
||||
1. **Scan:** Read all files in the `/tickets` directory.
|
||||
2. **Filter:** Ignore tickets marked `Status: Done` or `Status: Archived`.
|
||||
3. **Prioritize:** Select a single ticket based on the following hierarchy:
|
||||
* **Tags:** `Critical` > `High Priority` > `Bug` > `Feature`.
|
||||
* **Age:** Oldest created date first (FIFO).
|
||||
4. **Announce:** Explicitly state: "I am picking ticket: [Ticket ID/Name] because [Reason]."
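A minimal sketch of this selection rule (the ticket metadata shape is an assumption; parsing the Markdown front matter happens elsewhere):

```typescript
// Sketch of the Phase 1 selection: tag hierarchy first, then FIFO by created date.
type TicketMeta = { path: string; tags: string[]; created: Date; status: string };

const TAG_PRIORITY = ["Critical", "High Priority", "Bug", "Feature"];

const rank = (t: TicketMeta): number => {
  const idx = TAG_PRIORITY.findIndex((tag) => t.tags.includes(tag));
  return idx === -1 ? TAG_PRIORITY.length : idx;
};

function pickTicket(tickets: TicketMeta[]): TicketMeta | undefined {
  return tickets
    .filter((t) => t.status !== "Done" && t.status !== "Archived")
    .sort((a, b) => rank(a) - rank(b) || a.created.getTime() - b.created.getTime())[0];
}
```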
|
||||
|
||||
### Phase 2: Setup (Non-Destructive)
|
||||
1. **Branching:** Create a new git branch based on the ticket name.
|
||||
* *Format:* `feat/{ticket-kebab-name}` or `fix/{ticket-kebab-name}`.
|
||||
* *Command:* `git checkout -b feat/user-auth-flow`.
|
||||
2. **Context:** Read the selected ticket markdown file thoroughly, paying special attention to "Constraints & Validations."
|
||||
|
||||
### Phase 3: Implementation & Testing (The Loop)
|
||||
*Iterate until the requirements are met.*
|
||||
|
||||
1. **Write Code:** Implement the feature or fix using TypeScript.
|
||||
2. **Tightened Testing:**
|
||||
* You must create or update test files (`*.test.ts` or `*.spec.ts`).
|
||||
* **Requirement:** Tests must cover happy paths AND the edge cases defined in the ticket's "Constraints" section.
|
||||
* *Mocking:* Mock external dependencies where appropriate to ensure isolation.
|
||||
3. **Type Safety Check:**
|
||||
* Run: `bun x tsc --noEmit`
|
||||
* **CRITICAL:** If there are ANY TypeScript errors, you must fix them immediately. Do not proceed.
|
||||
4. **Runtime Verification:**
|
||||
* Run: `bun test`
|
||||
* Ensure all tests pass. If a test fails, analyze the stack trace, fix the implementation, and rerun.
|
||||
|
||||
### Phase 4: Self-Review & Clean Up
|
||||
Before declaring the task finished, perform a self-review:
|
||||
1. **Linting:** Check for unused variables, any types, or console logs.
|
||||
2. **Refactor:** Ensure code is DRY (Don't Repeat Yourself) and strictly typed.
|
||||
3. **Ticket Update:**
|
||||
* Modify the Markdown ticket file.
|
||||
* Change `Status: Draft` to `Status: In Review` or `Status: Done`.
|
||||
* Add a new section at the bottom: `## Implementation Notes` listing the specific files changed.
|
||||
|
||||
### Phase 5: Handover
|
||||
Only when `bun x tsc` and `bun test` pass with 0 errors:
|
||||
1. Commit the changes with a semantic message (e.g., `feat: implement user auth logic`).
|
||||
2. Present a summary of the work done and ask for a human code review.
|
||||
.agent/workflows/work.md (normal file, 99 lines)
@@ -0,0 +1,99 @@
|
||||
---
|
||||
description: Work on a ticket
|
||||
---
|
||||
|
||||
# WORKFLOW: Automated Feature Implementation and Review Cycle
|
||||
|
||||
## 1. High-Level Goal
|
||||
The objective of this workflow is to autonomously ingest a task from a local `/tickets` directory, establish a dedicated development environment via Git branching, implement the requested changes with incremental commits, validate the work through an internal review process, and finalize the lifecycle by cleaning up ticket artifacts and seeking user authorization for the final merge.
|
||||
|
||||
---
|
||||
|
||||
## 2. Assumptions & Clarifications
|
||||
- **Assumptions:**
|
||||
- The `/tickets` directory contains one or more files representing tasks (e.g., `.md` or `.txt`).
|
||||
- The agent has authenticated access to the local Git repository.
|
||||
- A "Review Workflow" exists as an executable command or internal process.
|
||||
- The branch naming convention is `feature/[ticket-filename-slug]`.
|
||||
- **Ambiguities:**
|
||||
- If multiple tickets exist, the agent will select the one with the earliest "Last Modified" timestamp.
|
||||
- "Regular commits" are defined as committing after every logically complete file change or functional milestone.
|
||||
|
||||
---
|
||||
|
||||
## 3. Stage Breakdown
|
||||
|
||||
### Stage 1: Ticket Selection and Branch Initialization
|
||||
- **Purpose:** Identify the next task and prepare the workspace.
|
||||
- **Inputs:** Contents of the `/tickets` directory.
|
||||
- **Actions:**
|
||||
1. Scan `/tickets` and select the oldest file.
|
||||
2. Parse the ticket content to understand requirements.
|
||||
3. Ensure the current working directory is a Git repository.
|
||||
4. Create and switch to a new branch: `feature/[ticket-id]`.
|
||||
- **Outputs:** Active feature branch.
|
||||
- **Persistence Strategy:** Save `state.json` to `/temp` containing `ticket_path`, `branch_name`, and `start_time`.
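A minimal sketch of this selection step, interpreting "oldest" as earliest last-modified time per the assumption in section 2 (names are illustrative):

```typescript
// Sketch of Stage 1's "oldest ticket" selection.
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

function pickOldestTicket(dir = "./tickets"): string | null {
  const files = readdirSync(dir)
    .filter((f) => f.endsWith(".md") || f.endsWith(".txt"))
    .map((f) => join(dir, f))
    .sort((a, b) => statSync(a).mtimeMs - statSync(b).mtimeMs);
  return files[0] ?? null; // null maps to the "NO_TICKETS_FOUND" case in section 5
}
```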
|
||||
|
||||
### Stage 2: Implementation and Incremental Committing
|
||||
- **Purpose:** Execute the technical requirements of the ticket.
|
||||
- **Inputs:** `/temp/state.json`, Ticket requirements.
|
||||
- **Actions:**
|
||||
1. Modify codebase according to requirements.
|
||||
2. For every distinct file change or logical unit of work:
|
||||
- Run basic syntax checks.
|
||||
- Execute `git add [file]`.
|
||||
- Execute `git commit -m "feat: [brief description of change]"`
|
||||
3. Repeat until the feature is complete.
|
||||
- **Outputs:** Committed code changes on the feature branch.
|
||||
- **Persistence Strategy:** Update `state.json` with `implementation_complete: true` and a list of `modified_files`.
|
||||
|
||||
### Stage 3: Review Workflow Execution
|
||||
- **Purpose:** Validate the implementation against quality standards.
|
||||
- **Inputs:** `/temp/state.json`, Modified codebase.
|
||||
- **Actions:**
|
||||
1. Trigger the "Review Workflow" (static analysis, tests, or linter).
|
||||
2. If errors are found:
|
||||
- Log errors to `/temp/review_log.txt`.
|
||||
- Re-enter Stage 2 to apply fixes and commit.
|
||||
3. If review passes:
|
||||
- Proceed to Stage 4.
|
||||
- **Outputs:** Review results/logs.
|
||||
- **Persistence Strategy:** Update `state.json` with `review_passed: true`.
|
||||
|
||||
### Stage 4: Cleanup and User Handoff
|
||||
- **Purpose:** Finalize the ticket lifecycle and request merge permission.
|
||||
- **Inputs:** `/temp/state.json`.
|
||||
- **Actions:**
|
||||
1. Delete the ticket file from `/tickets` using the path stored in `state.json`.
|
||||
2. Format a summary of changes and a request for merge.
|
||||
- **Outputs:** Deletion of the ticket file; user-facing summary.
|
||||
- **Persistence Strategy:** Clear `/temp/state.json` upon successful completion.
|
||||
|
||||
---
|
||||
|
||||
## 4. Data & File Contracts
|
||||
- **State File:** `/temp/state.json`
|
||||
- Format: JSON
|
||||
- Schema: `{ "ticket_path": string, "branch_name": string, "implementation_complete": boolean, "review_passed": boolean }`
|
||||
- **Ticket Files:** Located in `/tickets/*` (Markdown or Plain Text).
|
||||
- **Logs:** `/temp/review_log.txt` (Plain Text) for capturing stderr from review tools.
|
||||
|
||||
---
|
||||
|
||||
## 5. Failure & Recovery Handling
|
||||
- **Empty Ticket Directory:** If no files are found in `/tickets`, the agent will output "NO_TICKETS_FOUND" and terminate the workflow.
|
||||
- **Commit Failures:** If a commit fails (e.g., pre-commit hooks), the agent must resolve the hook violation before retrying the commit.
|
||||
- **Review Failure Loop:** If the review fails more than 3 times for the same issue, the agent must halt and output a "BLOCKER_REPORT" detailing the persistent errors to the user.
|
||||
- **State Recovery:** On context reset, the agent must check `/temp/state.json` to resume the workflow from the last recorded stage.
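A minimal sketch of the three-strike review guard described above (the `runReview` and `applyFixes` callbacks are hypothetical stand-ins for Stage 3 and Stage 2):

```typescript
// Sketch of the section 5 retry guard: halt with a blocker report after 3 failed reviews.
async function reviewLoop(
  runReview: () => Promise<{ passed: boolean; errors: string[] }>,
  applyFixes: (errors: string[]) => Promise<void>,
  maxAttempts = 3,
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await runReview();
    if (result.passed) return; // proceed to Stage 4
    await Bun.write("/temp/review_log.txt", result.errors.join("\n"));
    if (attempt === maxAttempts) {
      throw new Error("BLOCKER_REPORT: review failed repeatedly; see /temp/review_log.txt");
    }
    await applyFixes(result.errors); // re-enter Stage 2
  }
}
```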
|
||||
|
||||
---
|
||||
|
||||
## 6. Final Deliverable Specification
|
||||
- **Final Output:** A clear message to the user in the following format:
|
||||
> **Task Completed:** [Ticket Name]
|
||||
> **Branch:** [Branch Name]
|
||||
> **Changes:** [Brief list of modified files]
|
||||
> **Review Status:** Passed
|
||||
> **Cleanup:** Ticket file removed from /tickets.
|
||||
> **Action Required:** Would you like me to merge [Branch Name] into `main`? (Yes/No)
|
||||
- **Quality Bar:** Code must be committed with descriptive messages; the ticket file must be successfully deleted; the workspace must be left on the feature branch awaiting the merge command.
|
||||
@@ -1,61 +0,0 @@
|
||||
# DASH-003: Visual Analytics & Activity Charts
|
||||
|
||||
**Status:** Done
|
||||
**Created:** 2026-01-08
|
||||
**Tags:** dashboard, analytics, charts, frontend
|
||||
|
||||
## 1. Context & User Story
|
||||
* **As a:** Bot Administrator
|
||||
* **I want to:** View a graphical representation of bot usage over the last 24 hours.
|
||||
* **So that:** I can identify peak usage times and trends in command execution.
|
||||
|
||||
## 2. Technical Requirements
|
||||
### Data Model Changes
|
||||
- [x] No new tables.
|
||||
- [x] Requires complex aggregation queries on the `transactions` table.
|
||||
|
||||
### API / Interface
|
||||
- [x] `GET /api/stats/activity`: Returns an array of data points for the last 24 hours (hourly granularity).
|
||||
- [x] Response Structure: `Array<{ hour: string, commands: number, transactions: number }>`.
|
||||
|
||||
## 3. Constraints & Validations (CRITICAL)
|
||||
- **Input Validation:** Hourly buckets must be strictly validated for the 24h window.
|
||||
- **System Constraints:**
|
||||
- Database query must be cached for at least 5 minutes as it involves heavy aggregation.
|
||||
- Chart must be responsive and handle mobile viewports.
|
||||
- **Business Logic Guardrails:**
|
||||
- If no data exists for an hour, it must return 0 rather than skipping the point.
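To make the zero-fill guardrail concrete, a small TypeScript sketch of the hourly series from the API section above (the hour-key format is an assumption):

```typescript
// Sketch of the /api/stats/activity point shape plus the zero-fill guardrail above.
// The hour-key format ("YYYY-MM-DDTHH:00") is an assumption.
type ActivityPoint = { hour: string; commands: number; transactions: number };

function fillMissingHours(rows: ActivityPoint[]): ActivityPoint[] {
  const byHour = new Map(rows.map((r) => [r.hour, r] as const));
  const series: ActivityPoint[] = [];
  for (let i = 23; i >= 0; i--) {
    const hour = new Date(Date.now() - i * 3_600_000).toISOString().slice(0, 13) + ":00";
    series.push(byHour.get(hour) ?? { hour, commands: 0, transactions: 0 }); // 0, never skipped
  }
  return series; // always 24 points, oldest first
}
```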
|
||||
|
||||
## 4. Acceptance Criteria
|
||||
1. [x] **Given** a 24-hour history of transactions, **When** the dashboard loads, **Then** a line or area chart displays the command volume over time.
|
||||
2. [x] **Given** the premium glassmorphic theme, **When** the chart is rendered, **Then** it must use the primary brand colors and gradients to match the UI.
|
||||
3. [x] **Given** a mouse hover on the chart, **When** hovering over a point, **Then** a glassmorphic tooltip shows exact counts for that hour.
|
||||
|
||||
## 5. Implementation Plan
|
||||
- [x] Step 1: Add an aggregation method to `dashboard.service.ts` to fetch hourly counts from the `transactions` table.
|
||||
- [x] Step 2: Create the `/api/stats/activity` endpoint.
|
||||
- [x] Step 3: Install a charting library (`recharts`).
|
||||
- [x] Step 4: Implement the `ActivityChart` component into the middle column of the dashboard.
|
||||
|
||||
## Implementation Notes
|
||||
|
||||
Implemented a comprehensive activity analytics system for the Aurora dashboard:
|
||||
|
||||
### Backend Changes
|
||||
- **Service Layer**: Added `getActivityAggregation` to `dashboard.service.ts`. It performs an hourly aggregation on the `transactions` table using Postgres `date_trunc` and `FILTER` clauses to differentiate between "commands" and "total transactions". Missing hours in the 24h window are automatically filled with zero values.
|
||||
- **API**: Implemented `GET /api/stats/activity` in `web/src/server.ts` with a 5-minute in-memory cache to maintain server performance.
|
||||
|
||||
### Frontend Changes
|
||||
- **Library**: Added `recharts` for high-performance SVG charting.
|
||||
- **Hooks**: Created `use-activity-stats.ts` to manage the lifecycle and polling of analytics data.
|
||||
- **Components**: Developed `ActivityChart.tsx` featuring:
|
||||
- Premium glassmorphic styling (backdrop blur, subtle borders).
|
||||
- Responsive `AreaChart` with brand-matching gradients.
|
||||
- Custom glassmorphic tooltip with precise data point values.
|
||||
- Smooth entry animations.
|
||||
- **Integration**: Placed the new analytics card prominently in the `Dashboard.tsx` layout.
|
||||
|
||||
### Verification
|
||||
- **Unit Tests**: Added comprehensive test cases to `dashboard.service.test.ts` verifying the 24-point guaranteed response and correct data mapping.
|
||||
- **Type Safety**: Passed `bun x tsc --noEmit` with zero errors.
|
||||
- **Runtime**: All tests passing.
|
||||
@@ -1,53 +0,0 @@
|
||||
# DASH-004: Administrative Control Panel
|
||||
|
||||
**Status:** Done
|
||||
**Created:** 2026-01-08
|
||||
**Tags:** dashboard, control-panel, bot-actions, operations
|
||||
|
||||
## 1. Context & User Story
|
||||
* **As a:** Bot Administrator
|
||||
* **I want to:** Execute common maintenance tasks directly from the dashboard buttons.
|
||||
* **So that:** I don't have to use terminal commands or Discord slash commands for system-level operations.
|
||||
|
||||
## 2. Technical Requirements
|
||||
### Data Model Changes
|
||||
- [ ] N/A.
|
||||
|
||||
### API / Interface
|
||||
- [ ] `POST /api/actions/reload-commands`: Triggers the bot's command loader.
|
||||
- [ ] `POST /api/actions/clear-cache`: Clears internal bot caches.
|
||||
- [ ] `POST /api/actions/maintenance-mode`: Toggles a maintenance flag for the bot.
|
||||
|
||||
## 3. Constraints & Validations (CRITICAL)
|
||||
- **Input Validation:** Standard JSON body with optional `reason` field.
|
||||
- **System Constraints:**
|
||||
- Actions must be idempotent where possible.
|
||||
- Actions must provide a response within 10 seconds.
|
||||
- **Business Logic Guardrails:**
|
||||
- **SECURITY**: These endpoints MUST require high-privilege authentication (we currently assume a single admin, but a token-based check should be planned).
|
||||
- Maintenance mode toggle must be logged to the event feed.
|
||||
|
||||
## 4. Acceptance Criteria
|
||||
1. [ ] **Given** a "Quick Actions" card, **When** the "Reload Commands" button is clicked, **Then** the bot reloads its local command files and posts a "Success" event to the feed.
|
||||
2. [ ] **Given** a running bot, **When** the "Clear Cache" button is pushed, **Then** the bot flushes its internal memory maps and the memory usage metric reflects the drop.
|
||||
|
||||
## 5. Implementation Plan
|
||||
- [x] Step 1: Create an `action.service.ts` to handle the logic of triggering bot-specific functions.
|
||||
- [x] Step 2: Implement the `/api/actions` route group.
|
||||
- [x] Step 3: Design a "Quick Actions" card with premium styled buttons in `Dashboard.tsx`.
|
||||
- [x] Step 4: Add loading states to buttons to show when an operation is "In Progress."
|
||||
|
||||
## Implementation Notes
|
||||
Successfully implemented the Administrative Control Panel with the following changes:
|
||||
- **Backend Service**: Created `shared/modules/admin/action.service.ts` to coordinate actions like reloading commands, clearing cache, and toggling maintenance mode.
|
||||
- **System Bus**: Updated `shared/lib/events.ts` with new action events.
|
||||
- **API Endpoints**: Added `POST /api/actions/*` routes to the web server in `web/src/server.ts`.
|
||||
- **Bot Integration**:
|
||||
- Updated `AuroraClient` in `bot/lib/BotClient.ts` to listen for system action events.
|
||||
- Implemented `maintenanceMode` flag in `AuroraClient`.
|
||||
- Updated `CommandHandler.ts` to respect maintenance mode, blocking user commands with a helpful error embed.
|
||||
- **Frontend UI**:
|
||||
- Created `ControlPanel.tsx` component with a premium glassmorphic design and real-time state feedback.
|
||||
- Integrated `ControlPanel` into the `Dashboard.tsx` page.
|
||||
- Updated `use-dashboard-stats` hook and shared types to include maintenance mode status.
|
||||
- **Verification**: Created 3 new test suites covering the service, the bot listener, and the command handler enforcement. All tests passing.
|
||||
@@ -1,202 +0,0 @@
|
||||
# DASH-001: Dashboard Real Data Integration
|
||||
|
||||
**Status:** In Review
|
||||
**Created:** 2026-01-08
|
||||
**Tags:** dashboard, api, discord-client, database, real-time
|
||||
|
||||
## 1. Context & User Story
|
||||
* **As a:** Bot Administrator
|
||||
* **I want to:** See real data on the dashboard instead of mock/hardcoded values
|
||||
* **So that:** I can monitor actual bot metrics, user activity, and system health in real-time
|
||||
|
||||
## 2. Technical Requirements
|
||||
|
||||
### Data Model Changes
|
||||
- [ ] No new tables required
|
||||
- [ ] SQL migration required? **No** – existing schema already has `users`, `transactions`, `moderationCases`, and other relevant tables
|
||||
|
||||
### API / Interface
|
||||
|
||||
#### New Dashboard Stats Service
|
||||
Create a new service at `shared/modules/dashboard/dashboard.service.ts`:
|
||||
|
||||
```typescript
|
||||
interface DashboardStats {
|
||||
guilds: {
|
||||
count: number;
|
||||
changeFromLastMonth?: number;
|
||||
};
|
||||
users: {
|
||||
active: number;
|
||||
changePercentFromLastMonth?: number;
|
||||
};
|
||||
commands: {
|
||||
total: number;
|
||||
changePercentFromLastMonth?: number;
|
||||
};
|
||||
ping: {
|
||||
avg: number;
|
||||
changeFromLastHour?: number;
|
||||
};
|
||||
recentEvents: RecentEvent[];
|
||||
activityOverview: ActivityDataPoint[];
|
||||
}
|
||||
|
||||
interface RecentEvent {
|
||||
type: 'success' | 'error' | 'info';
|
||||
message: string;
|
||||
timestamp: Date;
|
||||
}
|
||||
```
|
||||
|
||||
#### API Endpoints
|
||||
| Method | Path | Description |
|
||||
|--------|------|-------------|
|
||||
| `GET` | `/api/stats` | Returns `DashboardStats` object |
|
||||
| `GET` | `/api/stats/realtime` | WebSocket/SSE for live updates |
|
||||
|
||||
### Discord Client Data
|
||||
|
||||
The `AuroraClient` (exported from `bot/lib/BotClient.ts`) provides access to:
|
||||
|
||||
| Property | Data Source | Dashboard Metric |
|
||||
|----------|-------------|------------------|
|
||||
| `client.guilds.cache.size` | Discord.js | Total Servers |
|
||||
| `client.users.cache.size` | Discord.js | Active Users (approximate) |
|
||||
| `client.ws.ping` | Discord.js | Avg Ping |
|
||||
| `client.commands.size` | Bot commands | Commands Registered |
|
||||
| `client.lastCommandTimestamp` | Custom property | Last command run time |
|
||||
|
||||
### Database Data
|
||||
|
||||
Query from existing tables:
|
||||
|
||||
| Metric | Query |
|
||||
|--------|-------|
|
||||
| User count (registered) | `SELECT COUNT(*) FROM users WHERE is_active = true` |
|
||||
| Commands executed (today) | `SELECT COUNT(*) FROM transactions WHERE type = 'COMMAND_RUN' AND created_at >= NOW() - INTERVAL '1 day'` |
|
||||
| Recent moderation events | `SELECT * FROM moderation_cases ORDER BY created_at DESC LIMIT 10` |
|
||||
| Recent transactions | `SELECT * FROM transactions ORDER BY created_at DESC LIMIT 10` |
|
||||
|
||||
> [!IMPORTANT]
|
||||
> The Discord client instance (`AuroraClient`) is in the `bot` package, while the web server is in the `web` package. Need to establish cross-package communication:
|
||||
> - **Option A**: Export client reference from `bot` and import in `web` (same process, simple)
|
||||
> - **Option B**: IPC via shared memory or message queue (separate processes)
|
||||
> - **Option C**: Internal HTTP/WebSocket between bot and web (microservice pattern)
|
||||
|
||||
## 3. Constraints & Validations (CRITICAL)
|
||||
|
||||
- **Input Validation:**
|
||||
- API endpoints must not accept arbitrary query parameters
|
||||
- Rate limiting on `/api/stats` to prevent abuse (max 60 requests/minute per IP)
|
||||
|
||||
- **System Constraints:**
|
||||
- Discord API rate limits apply when fetching guild/user data
|
||||
- Cache Discord data and refresh at most every 30 seconds
|
||||
- Database queries should be optimized with existing indices
|
||||
- API response timeout: 5 seconds maximum
|
||||
|
||||
- **Business Logic Guardrails:**
|
||||
- Do not expose sensitive user data (only aggregates)
|
||||
- Do not expose Discord tokens or internal IDs in API responses
|
||||
- Activity history limited to last 24 hours to prevent performance issues
|
||||
- User counts should count only registered users, not all Discord users
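A minimal TTL-cache sketch for the 30-second Discord data refresh constraint above; the helper is illustrative, and the ticket's `clientStats.ts` presumably does something similar:

```typescript
// Cache a loader's result for ttlMs milliseconds before refreshing it.
function cached<T>(ttlMs: number, load: () => T): () => T {
  let value: T | undefined;
  let fetchedAt = 0;
  return () => {
    const now = Date.now();
    if (value === undefined || now - fetchedAt > ttlMs) {
      value = load();
      fetchedAt = now;
    }
    return value;
  };
}

// e.g. const getGuildCount = cached(30_000, () => client.guilds.cache.size);
```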
|
||||
|
||||
## 4. Acceptance Criteria
|
||||
|
||||
1. [ ] **Given** the dashboard is loaded, **When** the API `/api/stats` is called, **Then** it returns real guild count from Discord client
|
||||
2. [ ] **Given** the bot is connected to Discord, **When** viewing the dashboard, **Then** the "Total Servers" shows actual `guilds.cache.size`
|
||||
3. [ ] **Given** users are registered in the database, **When** viewing the dashboard, **Then** "Active Users" shows count from `users` table where `is_active = true`
|
||||
4. [ ] **Given** the bot is running, **When** viewing the dashboard, **Then** "Avg Ping" shows actual `client.ws.ping` value
|
||||
5. [ ] **Given** recent bot activity occurred, **When** viewing "Recent Events", **Then** events from `transactions` and `moderation_cases` tables are displayed
|
||||
6. [ ] **Given** mock data exists in components, **When** the feature is complete, **Then** all hardcoded values in `Dashboard.tsx` are replaced with API data
|
||||
|
||||
## 5. Implementation Plan
|
||||
|
||||
### Phase 1: Data Layer & Services
|
||||
- [ ] Create `shared/modules/dashboard/dashboard.service.ts` with statistics aggregation functions
|
||||
- [ ] Add helper to query active user count from database
|
||||
- [ ] Add helper to query recent transactions (as events)
|
||||
- [ ] Add helper to query moderation cases (as events)
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Discord Client Exposure
|
||||
- [ ] Create a client stats provider that exposes Discord metrics
|
||||
- [ ] Implement caching layer to avoid rate limiting (30-second TTL)
|
||||
- [ ] Export stats getter from `bot` package for `web` package consumption
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: API Implementation
|
||||
- [ ] Add `/api/stats` endpoint in `web/src/server.ts`
|
||||
- [ ] Wire up `dashboard.service.ts` functions to API
|
||||
- [ ] Add error handling and response formatting
|
||||
- [ ] Consider adding rate limiting middleware
|
||||
|
||||
---
|
||||
|
||||
### Phase 4: Frontend Integration
|
||||
- [ ] Create custom React hook `useDashboardStats()` for data fetching
|
||||
- [ ] Replace hardcoded values in `Dashboard.tsx` with hook data
|
||||
- [ ] Add loading states and error handling
|
||||
- [ ] Implement auto-refresh (poll every 30 seconds or use SSE/WebSocket)
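A hedged sketch of the Phase 4 hook; the hook name and endpoint come from this ticket, while the body is an assumed minimal polling implementation:

```typescript
// Minimal polling variant of useDashboardStats().
import { useEffect, useState } from "react";

// Stand-in for the DashboardStats interface defined earlier in this ticket.
type DashboardStats = Record<string, unknown>;

export function useDashboardStats(pollMs = 30_000) {
  const [stats, setStats] = useState<DashboardStats | null>(null);
  const [error, setError] = useState<Error | null>(null);

  useEffect(() => {
    let cancelled = false;
    const load = async () => {
      try {
        const res = await fetch("/api/stats");
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        const data = (await res.json()) as DashboardStats;
        if (!cancelled) setStats(data);
      } catch (err) {
        if (!cancelled) setError(err as Error);
      }
    };
    load();
    const id = setInterval(load, pollMs); // 30s auto-refresh per the plan above
    return () => {
      cancelled = true;
      clearInterval(id);
    };
  }, [pollMs]);

  return { stats, error, loading: stats === null && error === null };
}
```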
|
||||
|
||||
---
|
||||
|
||||
### Phase 5: Activity Overview Chart
|
||||
- [ ] Query hourly command/transaction counts for last 24 hours
|
||||
- [ ] Integrate charting library (e.g., Recharts, Chart.js)
|
||||
- [ ] Replace "Chart Placeholder" with actual chart component
|
||||
|
||||
---
|
||||
|
||||
## Architecture Decision Required
|
||||
|
||||
> [!WARNING]
|
||||
> **Key Decision: How should the web server access Discord client data?**
|
||||
>
|
||||
> The bot and web server currently run in the same process. Recommend:
|
||||
> - **Short term**: Direct import of `AuroraClient` singleton in API handlers
|
||||
> - **Long term**: Consider event bus or shared state manager if splitting to microservices
|
||||
|
||||
## Out of Scope
|
||||
|
||||
- User authentication/authorization for API endpoints
|
||||
- Historical data beyond 24 hours
|
||||
- Command execution tracking (would require new database table)
|
||||
- Guild-specific analytics (separate feature)
|
||||
|
||||
---
|
||||
|
||||
## Implementation Notes
|
||||
|
||||
**Status:** In Review
|
||||
**Implemented:** 2026-01-08
|
||||
**Branch:** `feat/dashboard-real-data-integration`
|
||||
**Commit:** `17cb70e`
|
||||
|
||||
### Files Changed
|
||||
|
||||
#### New Files Created (7)
|
||||
1. `shared/modules/dashboard/dashboard.types.ts` - TypeScript interfaces
|
||||
2. `shared/modules/dashboard/dashboard.service.ts` - Database query service
|
||||
3. `shared/modules/dashboard/dashboard.service.test.ts` - Service unit tests
|
||||
4. `bot/lib/clientStats.ts` - Discord client stats provider with caching
|
||||
5. `bot/lib/clientStats.test.ts` - Client stats unit tests
|
||||
6. `web/src/hooks/use-dashboard-stats.ts` - React hook for data fetching
|
||||
7. `tickets/2026-01-08-dashboard-real-data-integration.md` - This ticket
|
||||
|
||||
#### Modified Files (3)
|
||||
1. `web/src/server.ts` - Added `/api/stats` endpoint
|
||||
2. `web/src/pages/Dashboard.tsx` - Integrated real data with loading/error states
|
||||
3. `.gitignore` - Removed `tickets/` to track tickets in version control
|
||||
|
||||
### Test Results
|
||||
```
|
||||
✓ 11 tests passing
|
||||
✓ TypeScript check clean (bun x tsc --noEmit)
|
||||
```
|
||||
|
||||
### Architecture Decision
|
||||
Used **Option A** (direct import) for accessing `AuroraClient` from the web server, as both run in the same process. This is the simplest approach and avoids unnecessary complexity.
|
||||
@@ -1,49 +0,0 @@
|
||||
# DASH-002: Real-time Live Updates via WebSockets
|
||||
|
||||
**Status:** Done
|
||||
**Created:** 2026-01-08
|
||||
**Tags:** dashboard, websocket, real-time, performance
|
||||
|
||||
## 1. Context & User Story
|
||||
* **As a:** Bot Administrator
|
||||
* **I want to:** See metrics and events update instantly on my screen without refreshing or waiting for polling intervals.
|
||||
* **So that:** I can react immediately to errors or spikes in latency and have a dashboard that feels "alive."
|
||||
|
||||
## 2. Technical Requirements
|
||||
### Data Model Changes
|
||||
- [x] No database schema changes required.
|
||||
- [x] Created `shared/lib/events.ts` for a global system event bus.
|
||||
|
||||
### API / Interface
|
||||
- [x] Establish a WebSocket endpoint at `/ws`.
|
||||
- [x] Define the message protocol:
|
||||
- `STATS_UPDATE`: Server to client containing full `DashboardStats`.
|
||||
- `NEW_EVENT`: Server to client when a specific event is recorded.
|
||||
|
||||
## 3. Constraints & Validations (CRITICAL)
|
||||
- **Input Validation:** WS messages validated using JSON parsing and type checks.
|
||||
- **System Constraints:**
|
||||
- WebSocket broadcast interval set to 5s for metrics.
|
||||
- Automatic reconnection logic handled in the frontend hook.
|
||||
- **Business Logic Guardrails:**
|
||||
- Events are pushed immediately as they occur via the system event bus.
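Putting the protocol and constraints above together, a minimal sketch on Bun's native WebSocket support (the topic name, port, and payload placeholder are assumptions):

```typescript
// Sketch of the /ws endpoint, message protocol, and 5s broadcast interval described above.
type WsMessage =
  | { type: "STATS_UPDATE"; payload: unknown /* DashboardStats */ }
  | { type: "NEW_EVENT"; payload: { type: "success" | "error" | "info"; message: string } };

const server = Bun.serve({
  port: 3000,
  fetch(req, srv) {
    if (new URL(req.url).pathname === "/ws" && srv.upgrade(req)) return;
    return new Response("Not found", { status: 404 });
  },
  websocket: {
    open(ws) {
      ws.subscribe("dashboard");
    },
    message() {
      /* clients only receive in this sketch */
    },
  },
});

// Broadcast metrics on the 5-second interval from section 3.
setInterval(() => {
  const msg: WsMessage = { type: "STATS_UPDATE", payload: {} };
  server.publish("dashboard", JSON.stringify(msg));
}, 5000);
```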
|
||||
|
||||
## 4. Acceptance Criteria
|
||||
1. [x] **Given** the dashboard is open, **When** a command is run in Discord (e.g., Daily), **Then** the "Recent Events" list updates instantly on the web UI.
|
||||
2. [x] **Given** a changing network environment, **When** the bot's ping fluctuates, **Then** the "Avg Latency" card updates in real-time.
|
||||
3. [x] **Given** a connection loss, **When** the network returns, **Then** the client automatically reconnects to the WS room.
|
||||
|
||||
## 5. Implementation Plan
|
||||
- [x] Step 1: Integrate a WebSocket library into `web/src/server.ts` using Bun's native `websocket` support.
|
||||
- [x] Step 2: Implement a broadcast system in `dashboard.service.ts` to push events to the WS handler using `systemEvents`.
|
||||
- [x] Step 3: Create/Update `useDashboardStats` hook in the frontend to handle connection lifecycle and state merging.
|
||||
- [x] Step 4: Refactor `Dashboard.tsx` state consumption to benefit from real-time updates.
|
||||
|
||||
## Implementation Notes
|
||||
### Files Changed
|
||||
- `shared/lib/events.ts`: New event bus for the system.
|
||||
- `web/src/server.ts`: Added WebSocket handler and stats broadcast.
|
||||
- `web/src/hooks/use-dashboard-stats.ts`: Replaced polling with WebSocket + HTTP initial load.
|
||||
- `shared/modules/dashboard/dashboard.service.ts`: Added `recordEvent` helper to emit WS events.
|
||||
- `shared/modules/economy/economy.service.ts`: Integrated `recordEvent` into daily claims and transfers.
|
||||
- `shared/modules/dashboard/dashboard.service.test.ts`: Added unit tests for event emission.
|
||||