fixes and refactor
.e2e-token (new file, 1 line)
@@ -0,0 +1 @@
+eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsInR5cGUiOiJhY2Nlc3MiLCJpYXQiOjE3NTUyMDAyNzMsImV4cCI6MTc1NTIxNDY3M30.VfcV_zbhtSe50u1awNC4v2O8CU4PQ9AwhlcNeNn40cM
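The committed token is a standard HS256 JWT, so its header and claims are readable without the signing secret; the payload above decodes to `sub=admin`, `type=access`, plus `iat`/`exp` timestamps. A standard-library sketch:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT payload without verifying the signature (inspection only)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

claims = jwt_claims(open(".e2e-token").read().strip())
print(claims)  # {'sub': 'admin', 'type': 'access', 'iat': 1755200273, 'exp': 1755214673}
```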
@@ -6,7 +6,7 @@ This guide covers the complete data migration process for importing legacy Delph
 ## 🔍 Migration Status Summary

 ### ✅ **READY FOR MIGRATION**
-- **Readiness Score**: 100% (31/31 files fully mapped)
+- **Readiness Score**: 100% (31/31 files supported; several use flexible extras for non-core columns)
 - **Security**: All sensitive files excluded from git
 - **API Endpoints**: Complete import/export functionality
 - **Data Validation**: Enhanced type conversion and validation
@@ -30,8 +30,9 @@ This guide covers the complete data migration process for importing legacy Delph
 | GRUPLKUP.csv | GroupLookup | ✅ Ready | Group categories |
 | FOOTERS.csv | Footer | ✅ Ready | Statement footer templates |
 | PLANINFO.csv | PlanInfo | ✅ Ready | Retirement plan information |
-| FORM_INX.csv | FormIndex | ✅ Ready | Form templates index |
-| FORM_LST.csv | FormList | ✅ Ready | Form template content |
+| FORM_INX.csv | FormIndex | ✅ Ready | Form templates index (non-core fields stored as flexible extras) |
+| FORM_LST.csv | FormList | ✅ Ready | Form template content (non-core fields stored as flexible extras) |
+| INX_LKUP.csv | FormKeyword | ✅ Ready | Form keywords lookup |
 | PRINTERS.csv | PrinterSetup | ✅ Ready | Printer configuration |
 | SETUP.csv | SystemSetup | ✅ Ready | System configuration |
 | **Pension Sub-tables** | | | |
@@ -39,8 +40,9 @@ This guide covers the complete data migration process for importing legacy Delph
 | MARRIAGE.csv | MarriageHistory | ✅ Ready | Marriage history data |
 | DEATH.csv | DeathBenefit | ✅ Ready | Death benefit calculations |
 | SEPARATE.csv | SeparationAgreement | ✅ Ready | Separation agreements |
-| LIFETABL.csv | LifeTable | ✅ Ready | Life expectancy tables |
-| NUMBERAL.csv | NumberTable | ✅ Ready | Numerical calculation tables |
+| LIFETABL.csv | LifeTable | ✅ Ready | Life expectancy tables (simplified model; extra columns stored as flexible extras) |
+| NUMBERAL.csv | NumberTable | ✅ Ready | Numerical calculation tables (simplified model; extra columns stored as flexible extras) |
+| RESULTS.csv | PensionResult | ✅ Ready | Computed results summary |

 ### ✅ **Recently Added Files** (6/31 files)
 | File | Model | Status | Notes |
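The "flexible extras" mechanism these notes refer to lives elsewhere in the codebase; a minimal sketch of the usual shape of such a model (illustrative names: a SQLAlchemy JSON column absorbing non-core CSV columns):

```python
from sqlalchemy import Column, Integer, String, JSON
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class LifeTable(Base):
    """Simplified model: core columns are typed, everything else lands in `extras`."""
    __tablename__ = "life_tables"
    id = Column(Integer, primary_key=True)
    age = Column(Integer)
    expectancy = Column(String)
    extras = Column(JSON, default=dict)  # non-core CSV columns, keyed by header

CORE = {"age", "expectancy"}

def row_to_model(row: dict) -> LifeTable:
    """Split a parsed CSV row into typed core fields and a JSON extras bag."""
    core = {k: v for k, v in row.items() if k in CORE}
    extras = {k: v for k, v in row.items() if k not in CORE}
    return LifeTable(**core, extras=extras)
```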
README.md (74 lines changed)
@@ -21,6 +21,25 @@ Modern database system for legal practice management, financial tracking, and do
 - **Authentication**: JWT with bcrypt password hashing
 - **Validation**: Pydantic v2

+## ⚡ Search Performance (FTS + Cache)
+
+- Full-text search is enabled via SQLite FTS5 for Customers (`rolodex`), Files, Ledger, and QDRO.
+- The app creates virtual FTS tables and sync triggers at startup.
+- On engines without FTS5, search falls back to standard `ILIKE` queries.
+- Common filter columns are indexed for faster filtering: `files(status, file_type, empl_num)` and `ledger(t_type, empl_num)`.
+- Response caching (optional) uses Redis for global search and suggestions.
+- Cache TTL: ~90s for global search, ~60s for suggestions.
+- Cache is auto-invalidated on create/update/delete affecting customers, files, ledger, or QDROs.
+
+Enable cache:
+```bash
+export CACHE_ENABLED=true
+export REDIS_URL=redis://localhost:6379/0
+```
+
+Diagnostics:
+- `GET /api/search/_debug` reports whether FTS tables exist and Redis is available (requires auth).
+
 ## 📊 Database Structure
 Based on analysis of legacy Pascal system:

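The FTS5 table and trigger setup the README describes happens at app startup and is not part of this diff; a minimal sketch of what that DDL plausibly looks like for one table (table and column names here are illustrative):

```python
from sqlalchemy import text

def ensure_rolodex_fts(conn) -> bool:
    """Create an FTS5 mirror of rolodex and keep it in sync with triggers.
    Returns False when the SQLite build lacks FTS5 (callers fall back to ILIKE)."""
    try:
        conn.execute(text(
            "CREATE VIRTUAL TABLE IF NOT EXISTS rolodex_fts USING fts5("
            "first, last, city, email, content='rolodex', content_rowid='rowid')"
        ))
    except Exception:
        return False  # no FTS5 in this engine: the ILIKE fallback path is used
    # Insert trigger shown; update/delete triggers follow the same pattern.
    conn.execute(text(
        "CREATE TRIGGER IF NOT EXISTS rolodex_ai AFTER INSERT ON rolodex BEGIN "
        "INSERT INTO rolodex_fts(rowid, first, last, city, email) "
        "VALUES (new.rowid, new.first, new.last, new.city, new.email); END"
    ))
    return True
```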
@@ -134,6 +153,48 @@ delphi-database/

 ## 🔧 API Endpoints

+### Common pagination, sorting, and totals
+- Many list endpoints support the same query parameters:
+  - `skip` (int): offset for pagination. Default varies per endpoint.
+  - `limit` (int): page size. Most endpoints cap at 200–1000.
+  - `sort_by` (str): whitelisted field name per endpoint.
+  - `sort_dir` (str): `asc` or `desc`.
+  - `include_total` (bool): when `true`, the response is an object `{ items, total }`; otherwise a plain list is returned for backwards compatibility.
+- Some endpoints also support `search` (tokenized across multiple columns with AND semantics) for simple text filtering.
+
+Examples:
+```bash
+# Support tickets (admin)
+curl \
+  "http://localhost:6920/api/support/tickets?include_total=true&limit=10&sort_by=created&sort_dir=desc"
+
+# My support tickets (current user)
+curl \
+  "http://localhost:6920/api/support/my-tickets?include_total=true&limit=10&sort_by=updated&sort_dir=desc"
+
+# QDROs for a file
+curl \
+  "http://localhost:6920/api/documents/qdros/FILE-123?include_total=true&sort_by=updated&sort_dir=desc"
+
+# Ledger entries for a file
+curl \
+  "http://localhost:6920/api/financial/ledger/FILE-123?include_total=true&sort_by=date&sort_dir=desc"
+
+# Customer phones
+curl \
+  "http://localhost:6920/api/customers/CUST-1/phones?include_total=true&sort_by=location&sort_dir=asc"
+```
+
+Allowed sort fields (high level):
+- Support tickets: `created`, `updated`, `resolved`, `priority`, `status`, `subject`
+- My tickets: `created`, `updated`, `resolved`, `priority`, `status`, `subject`
+- QDROs (list and by file): `file_no`, `version`, `status`, `created`, `updated`
+- Ledger by file: `date`, `item_no`, `amount`, `billed`
+- Templates: `form_id`, `form_name`, `category`, `created`, `updated`
+- Files: `file_no`, `client`, `opened`, `closed`, `status`, `amount_owing`, `total_charges`
+- Admin users: `username`, `email`, `first_name`, `last_name`, `created`, `updated`
+- Customer phones: `location`, `phone`
+
 ### Authentication
 - `POST /api/auth/login` - User login
 - `POST /api/auth/register` - Register user (admin only)
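Because `include_total` switches the response shape, clients need to branch on it. A small illustrative helper (hypothetical, not part of the repo):

```python
import requests

def fetch_page(url: str, **params) -> tuple[list, int | None]:
    """Return (items, total); total is None when the endpoint returned a bare list."""
    body = requests.get(url, params=params, timeout=10).json()
    if isinstance(body, dict):   # include_total=true -> {"items": [...], "total": N}
        return body["items"], body["total"]
    return body, None            # legacy shape: plain list

items, total = fetch_page(
    "http://localhost:6920/api/support/tickets",
    include_total="true", limit=10, sort_by="created", sort_dir="desc",
)
```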
@@ -156,19 +217,28 @@ delphi-database/
 - `DELETE /api/files/{file_no}` - Delete file

 ### Financial (Ledger)
-- `GET /api/financial/ledger/{file_no}` - Get ledger entries
+- `GET /api/financial/ledger/{file_no}` - Get ledger entries (supports pagination, sorting, `include_total`)
 - `POST /api/financial/ledger/` - Create transaction
 - `PUT /api/financial/ledger/{id}` - Update transaction
 - `DELETE /api/financial/ledger/{id}` - Delete transaction
 - `GET /api/financial/reports/{file_no}` - Financial reports

 ### Documents (QDROs)
-- `GET /api/documents/qdros/{file_no}` - Get QDROs for file
+- `GET /api/documents/qdros/{file_no}` - Get QDROs for file (supports pagination, sorting, `include_total`)
 - `POST /api/documents/qdros/` - Create QDRO
 - `GET /api/documents/qdros/{file_no}/{id}` - Get specific QDRO
 - `PUT /api/documents/qdros/{file_no}/{id}` - Update QDRO
 - `DELETE /api/documents/qdros/{file_no}/{id}` - Delete QDRO

+### Support
+- `POST /api/support/tickets` - Create support ticket (public; auth optional)
+- `GET /api/support/tickets` - List tickets (admin; supports filters, search, pagination, sorting, `include_total`)
+- `GET /api/support/tickets/{id}` - Get ticket details (admin)
+- `PUT /api/support/tickets/{id}` - Update ticket (admin)
+- `POST /api/support/tickets/{id}/responses` - Add response (admin)
+- `GET /api/support/my-tickets` - List current user's tickets (supports status filter, search, pagination, sorting, `include_total`)
+- `GET /api/support/stats` - Ticket statistics (admin)
+
 ### Search
 - `GET /api/search/customers?q={query}` - Search customers
 - `GET /api/search/files?q={query}` - Search files
app/api/admin.py (360 lines changed)
@@ -1,7 +1,7 @@
 """
 Comprehensive Admin API endpoints - User management, system settings, audit logging
 """
-from typing import List, Dict, Any, Optional
+from typing import List, Dict, Any, Optional, Union
 from fastapi import APIRouter, Depends, HTTPException, status, UploadFile, File, Query, Body, Request
 from fastapi.responses import FileResponse
 from sqlalchemy.orm import Session, joinedload
@@ -13,24 +13,27 @@ import hashlib
 import secrets
 import shutil
 import time
-from datetime import datetime, timedelta, date
+from datetime import datetime, timedelta, date, timezone
 from pathlib import Path

 from app.database.base import get_db
+from app.api.search_highlight import build_query_tokens

 # Track application start time
 APPLICATION_START_TIME = time.time()
 from app.models import User, Rolodex, File as FileModel, Ledger, QDRO, AuditLog, LoginAttempt
-from app.models.lookups import SystemSetup, Employee, FileType, FileStatus, TransactionType, TransactionCode, State, FormIndex
+from app.models.lookups import SystemSetup, Employee, FileType, FileStatus, TransactionType, TransactionCode, State, FormIndex, PrinterSetup
 from app.auth.security import get_admin_user, get_password_hash, create_access_token
 from app.services.audit import audit_service
 from app.config import settings
+from app.services.query_utils import apply_sorting, tokenized_ilike_filter, paginate_with_total

 router = APIRouter()


 # Enhanced Admin Schemas
 from pydantic import BaseModel, Field, EmailStr
+from pydantic.config import ConfigDict

 class SystemStats(BaseModel):
     """Enhanced system statistics"""
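`apply_sorting`, `tokenized_ilike_filter`, and `paginate_with_total` come from `app/services/query_utils`, which this diff does not include. A plausible sketch of their contracts, inferred from the call sites below (signatures and behavior are assumptions):

```python
# Hypothetical sketch of app/services/query_utils (not shown in this commit).
from typing import Optional, Sequence
from sqlalchemy import asc, desc, or_, and_

def tokenized_ilike_filter(tokens: Sequence[str], columns: Sequence):
    """AND across tokens, OR across columns: every token must hit some column."""
    clauses = [or_(*[col.ilike(f"%{tok}%") for col in columns]) for tok in tokens]
    return and_(*clauses) if clauses else None

def apply_sorting(query, sort_by: Optional[str], sort_dir: Optional[str], allowed: dict):
    """Order by a whitelisted key only; unknown keys leave the query untouched."""
    cols = allowed.get((sort_by or "").lower())
    if not cols:
        return query
    direction = desc if (sort_dir or "asc").lower() == "desc" else asc
    return query.order_by(*[direction(c) for c in cols])

def paginate_with_total(query, skip: int, limit: int, include_total: bool):
    """Apply offset/limit; run the count only when the caller asked for a total."""
    total = query.count() if include_total else None
    return query.offset(skip).limit(limit).all(), total
```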
@@ -91,8 +94,7 @@ class UserResponse(BaseModel):
     created_at: Optional[datetime]
     updated_at: Optional[datetime]

-    class Config:
-        from_attributes = True
+    model_config = ConfigDict(from_attributes=True)

 class PasswordReset(BaseModel):
     """Password reset request"""
@@ -124,8 +126,7 @@ class AuditLogEntry(BaseModel):
     user_agent: Optional[str]
     timestamp: datetime

-    class Config:
-        from_attributes = True
+    model_config = ConfigDict(from_attributes=True)

 class BackupInfo(BaseModel):
     """Backup information"""
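The recurring `class Config` → `model_config = ConfigDict(from_attributes=True)` edits are the standard Pydantic v2 spelling of the same setting; the v1-style inner class is deprecated in v2. For instance:

```python
from pydantic import BaseModel, ConfigDict

class UserOut(BaseModel):
    model_config = ConfigDict(from_attributes=True)  # v2 replacement for class Config
    username: str

# Validates directly from an ORM instance:
# UserOut.model_validate(user_row)
```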
@@ -135,6 +136,132 @@ class BackupInfo(BaseModel):
     backup_type: str
     status: str

+
+class PrinterSetupBase(BaseModel):
+    """Base schema for printer setup"""
+    description: Optional[str] = None
+    driver: Optional[str] = None
+    port: Optional[str] = None
+    default_printer: Optional[bool] = None
+    active: Optional[bool] = None
+    number: Optional[int] = None
+    page_break: Optional[str] = None
+    setup_st: Optional[str] = None
+    reset_st: Optional[str] = None
+    b_underline: Optional[str] = None
+    e_underline: Optional[str] = None
+    b_bold: Optional[str] = None
+    e_bold: Optional[str] = None
+    phone_book: Optional[bool] = None
+    rolodex_info: Optional[bool] = None
+    envelope: Optional[bool] = None
+    file_cabinet: Optional[bool] = None
+    accounts: Optional[bool] = None
+    statements: Optional[bool] = None
+    calendar: Optional[bool] = None
+
+
+class PrinterSetupCreate(PrinterSetupBase):
+    printer_name: str
+
+
+class PrinterSetupUpdate(PrinterSetupBase):
+    pass
+
+
+class PrinterSetupResponse(PrinterSetupBase):
+    printer_name: str
+    created_at: Optional[datetime] = None
+    updated_at: Optional[datetime] = None
+
+    model_config = ConfigDict(from_attributes=True)
+
+
+# Printer Setup Management
+
+@router.get("/printers", response_model=List[PrinterSetupResponse])
+async def list_printers(
+    db: Session = Depends(get_db),
+    current_user: User = Depends(get_admin_user)
+):
+    printers = db.query(PrinterSetup).order_by(PrinterSetup.printer_name.asc()).all()
+    return printers
+
+
+@router.get("/printers/{printer_name}", response_model=PrinterSetupResponse)
+async def get_printer(
+    printer_name: str,
+    db: Session = Depends(get_db),
+    current_user: User = Depends(get_admin_user)
+):
+    printer = db.query(PrinterSetup).filter(PrinterSetup.printer_name == printer_name).first()
+    if not printer:
+        raise HTTPException(status_code=404, detail="Printer not found")
+    return printer
+
+
+@router.post("/printers", response_model=PrinterSetupResponse)
+async def create_printer(
+    payload: PrinterSetupCreate,
+    db: Session = Depends(get_db),
+    current_user: User = Depends(get_admin_user)
+):
+    exists = db.query(PrinterSetup).filter(PrinterSetup.printer_name == payload.printer_name).first()
+    if exists:
+        raise HTTPException(status_code=400, detail="Printer already exists")
+    data = payload.model_dump(exclude_unset=True)
+    instance = PrinterSetup(**data)
+    db.add(instance)
+    # Enforce single default printer
+    if data.get("default_printer"):
+        try:
+            db.query(PrinterSetup).filter(PrinterSetup.printer_name != instance.printer_name).update({PrinterSetup.default_printer: False})
+        except Exception:
+            pass
+    db.commit()
+    db.refresh(instance)
+    return instance
+
+
+@router.put("/printers/{printer_name}", response_model=PrinterSetupResponse)
+async def update_printer(
+    printer_name: str,
+    payload: PrinterSetupUpdate,
+    db: Session = Depends(get_db),
+    current_user: User = Depends(get_admin_user)
+):
+    instance = db.query(PrinterSetup).filter(PrinterSetup.printer_name == printer_name).first()
+    if not instance:
+        raise HTTPException(status_code=404, detail="Printer not found")
+    updates = payload.model_dump(exclude_unset=True)
+    for k, v in updates.items():
+        setattr(instance, k, v)
+    # Enforce single default printer when set true
+    if updates.get("default_printer"):
+        try:
+            db.query(PrinterSetup).filter(PrinterSetup.printer_name != instance.printer_name).update({PrinterSetup.default_printer: False})
+        except Exception:
+            pass
+    db.commit()
+    db.refresh(instance)
+    return instance
+
+
+@router.delete("/printers/{printer_name}")
+async def delete_printer(
+    printer_name: str,
+    db: Session = Depends(get_db),
+    current_user: User = Depends(get_admin_user)
+):
+    instance = db.query(PrinterSetup).filter(PrinterSetup.printer_name == printer_name).first()
+    if not instance:
+        raise HTTPException(status_code=404, detail="Printer not found")
+    db.delete(instance)
+    db.commit()
+    return {"message": "Printer deleted"}
+
+
 class LookupTableInfo(BaseModel):
     """Lookup table information"""
     table_name: str
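A usage sketch for the new printer endpoints (assuming the admin router is mounted at `/api/admin` and the host/port from the README's curl examples; the token is a placeholder):

```python
import requests

BASE = "http://localhost:6920/api/admin"
HEADERS = {"Authorization": "Bearer <admin-token>"}  # placeholder admin token

# Create a printer and mark it as the default; the endpoint clears the
# default flag on every other printer before committing.
resp = requests.post(f"{BASE}/printers", headers=HEADERS, json={
    "printer_name": "front-desk",
    "driver": "HP LaserJet",
    "default_printer": True,
})
print(resp.json()["printer_name"])

# Listing is sorted by printer_name ascending.
printers = requests.get(f"{BASE}/printers", headers=HEADERS).json()
print([p["printer_name"] for p in printers])
```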
@@ -202,7 +329,7 @@ async def system_health(
     # Count active sessions (simplified)
     try:
         active_sessions = db.query(User).filter(
-            User.last_login > datetime.now() - timedelta(hours=24)
+            User.last_login > datetime.now(timezone.utc) - timedelta(hours=24)
         ).count()
     except:
         active_sessions = 0
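A caveat that applies to all of the `datetime.now()` → `datetime.now(timezone.utc)` edits in this file: aware datetimes cannot be compared with naive ones, so stored columns such as `User.last_login` must also hold aware (or consistently UTC) values. For example:

```python
from datetime import datetime, timezone

aware = datetime.now(timezone.utc)
naive = datetime.utcnow()  # deprecated in Python 3.12; returns a naive datetime

aware > aware    # fine
# aware > naive  # raises TypeError: can't compare offset-naive and offset-aware datetimes
```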
@@ -215,7 +342,7 @@ async def system_health(
         backup_files = list(backup_dir.glob("*.db"))
         if backup_files:
             latest_backup = max(backup_files, key=lambda p: p.stat().st_mtime)
-            backup_age = datetime.now() - datetime.fromtimestamp(latest_backup.stat().st_mtime)
+            backup_age = datetime.now(timezone.utc) - datetime.fromtimestamp(latest_backup.stat().st_mtime, tz=timezone.utc)
             last_backup = latest_backup.name
             if backup_age.days > 7:
                 alerts.append(f"Last backup is {backup_age.days} days old")
@@ -257,7 +384,7 @@ async def system_statistics(

     # Count active users (logged in within last 30 days)
     total_active_users = db.query(func.count(User.id)).filter(
-        User.last_login > datetime.now() - timedelta(days=30)
+        User.last_login > datetime.now(timezone.utc) - timedelta(days=30)
     ).scalar()

     # Count admin users
@@ -308,7 +435,7 @@ async def system_statistics(
             recent_activity.append({
                 "type": "customer_added",
                 "description": f"Customer {customer.first} {customer.last} added",
-                "timestamp": datetime.now().isoformat()
+                "timestamp": datetime.now(timezone.utc).isoformat()
             })
     except:
         pass
@@ -409,7 +536,7 @@ async def export_table(
     # Create exports directory if it doesn't exist
     os.makedirs("exports", exist_ok=True)

-    filename = f"exports/{table_name}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv"
+    filename = f"exports/{table_name}_{datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')}.csv"

     try:
         if table_name.lower() == "customers" or table_name.lower() == "rolodex":
@@ -470,10 +597,10 @@ async def download_backup(
     if "sqlite" in settings.database_url:
         db_path = settings.database_url.replace("sqlite:///", "")
         if os.path.exists(db_path):
             return FileResponse(
                 db_path,
                 media_type='application/octet-stream',
-                filename=f"delphi_backup_{datetime.now().strftime('%Y%m%d_%H%M%S')}.db"
+                filename=f"delphi_backup_{datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')}.db"
             )

     raise HTTPException(
@@ -484,12 +611,20 @@ async def download_backup(

 # User Management Endpoints

-@router.get("/users", response_model=List[UserResponse])
+class PaginatedUsersResponse(BaseModel):
+    items: List[UserResponse]
+    total: int
+
+
+@router.get("/users", response_model=Union[List[UserResponse], PaginatedUsersResponse])
 async def list_users(
     skip: int = Query(0, ge=0),
     limit: int = Query(100, ge=1, le=1000),
     search: Optional[str] = Query(None),
     active_only: bool = Query(False),
+    sort_by: Optional[str] = Query(None, description="Sort by: username, email, first_name, last_name, created, updated"),
+    sort_dir: Optional[str] = Query("asc", description="Sort direction: asc or desc"),
+    include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
     db: Session = Depends(get_db),
     current_user: User = Depends(get_admin_user)
 ):
@@ -498,19 +633,38 @@ async def list_users(
     query = db.query(User)

     if search:
-        query = query.filter(
-            or_(
-                User.username.ilike(f"%{search}%"),
-                User.email.ilike(f"%{search}%"),
-                User.first_name.ilike(f"%{search}%"),
-                User.last_name.ilike(f"%{search}%")
-            )
-        )
+        # DRY: tokenize and apply case-insensitive multi-field search
+        tokens = build_query_tokens(search)
+        filter_expr = tokenized_ilike_filter(tokens, [
+            User.username,
+            User.email,
+            User.first_name,
+            User.last_name,
+        ])
+        if filter_expr is not None:
+            query = query.filter(filter_expr)

     if active_only:
         query = query.filter(User.is_active == True)

-    users = query.offset(skip).limit(limit).all()
+    # Sorting (whitelisted)
+    query = apply_sorting(
+        query,
+        sort_by,
+        sort_dir,
+        allowed={
+            "username": [User.username],
+            "email": [User.email],
+            "first_name": [User.first_name],
+            "last_name": [User.last_name],
+            "created": [User.created_at],
+            "updated": [User.updated_at],
+        },
+    )
+
+    users, total = paginate_with_total(query, skip, limit, include_total)
+    if include_total:
+        return {"items": users, "total": total or 0}
     return users

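A quick way to see both response shapes from `GET /api/admin/users`, sketched with FastAPI's TestClient (the `app.main:app` import path and the token are assumptions):

```python
from fastapi.testclient import TestClient
from app.main import app  # assumed application module

client = TestClient(app)
auth = {"Authorization": "Bearer <admin-token>"}  # placeholder

legacy = client.get("/api/admin/users?limit=5", headers=auth).json()
assert isinstance(legacy, list)                    # old callers keep working

wrapped = client.get("/api/admin/users?limit=5&include_total=true", headers=auth).json()
assert set(wrapped) == {"items", "total"}          # new shape is opt-in
```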
@@ -567,8 +721,8 @@ async def create_user(
         hashed_password=hashed_password,
         is_admin=user_data.is_admin,
         is_active=user_data.is_active,
-        created_at=datetime.now(),
-        updated_at=datetime.now()
+        created_at=datetime.now(timezone.utc),
+        updated_at=datetime.now(timezone.utc)
     )

     db.add(new_user)
@@ -659,7 +813,7 @@ async def update_user(
             changes[field] = {"from": getattr(user, field), "to": value}
             setattr(user, field, value)

-    user.updated_at = datetime.now()
+    user.updated_at = datetime.now(timezone.utc)

     db.commit()
     db.refresh(user)
@@ -702,7 +856,7 @@ async def delete_user(

     # Soft delete by deactivating
     user.is_active = False
-    user.updated_at = datetime.now()
+    user.updated_at = datetime.now(timezone.utc)

     db.commit()

@@ -744,7 +898,7 @@ async def reset_user_password(

     # Update password
     user.hashed_password = get_password_hash(password_data.new_password)
-    user.updated_at = datetime.now()
+    user.updated_at = datetime.now(timezone.utc)

     db.commit()

@@ -1046,7 +1200,7 @@ async def create_backup(
     backup_dir.mkdir(exist_ok=True)

     # Generate backup filename
-    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
     backup_filename = f"delphi_backup_{timestamp}.db"
     backup_path = backup_dir / backup_filename

@@ -1063,7 +1217,7 @@ async def create_backup(
         "backup_info": {
             "filename": backup_filename,
             "size": f"{backup_size / (1024*1024):.1f} MB",
-            "created_at": datetime.now().isoformat(),
+            "created_at": datetime.now(timezone.utc).isoformat(),
             "backup_type": "manual",
             "status": "completed"
         }
@@ -1118,45 +1272,61 @@ async def get_audit_logs(
     resource_type: Optional[str] = Query(None),
     action: Optional[str] = Query(None),
     hours_back: int = Query(168, ge=1, le=8760),  # Default 7 days, max 1 year
+    sort_by: Optional[str] = Query("timestamp", description="Sort by: timestamp, username, action, resource_type"),
+    sort_dir: Optional[str] = Query("desc", description="Sort direction: asc or desc"),
+    include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
     db: Session = Depends(get_db),
     current_user: User = Depends(get_admin_user)
 ):
-    """Get audit log entries with filtering"""
+    """Get audit log entries with filtering, sorting, and pagination"""

-    cutoff_time = datetime.now() - timedelta(hours=hours_back)
+    cutoff_time = datetime.now(timezone.utc) - timedelta(hours=hours_back)

     query = db.query(AuditLog).filter(AuditLog.timestamp >= cutoff_time)

     if user_id:
         query = query.filter(AuditLog.user_id == user_id)

     if resource_type:
         query = query.filter(AuditLog.resource_type.ilike(f"%{resource_type}%"))

     if action:
         query = query.filter(AuditLog.action.ilike(f"%{action}%"))

-    total_count = query.count()
-    logs = query.order_by(AuditLog.timestamp.desc()).offset(skip).limit(limit).all()
-
-    return {
-        "total": total_count,
-        "logs": [
-            {
-                "id": log.id,
-                "user_id": log.user_id,
-                "username": log.username,
-                "action": log.action,
-                "resource_type": log.resource_type,
-                "resource_id": log.resource_id,
-                "details": log.details,
-                "ip_address": log.ip_address,
-                "user_agent": log.user_agent,
-                "timestamp": log.timestamp.isoformat()
-            }
-            for log in logs
-        ]
-    }
+    # Sorting (whitelisted)
+    query = apply_sorting(
+        query,
+        sort_by,
+        sort_dir,
+        allowed={
+            "timestamp": [AuditLog.timestamp],
+            "username": [AuditLog.username],
+            "action": [AuditLog.action],
+            "resource_type": [AuditLog.resource_type],
+        },
+    )
+
+    logs, total = paginate_with_total(query, skip, limit, include_total)
+
+    items = [
+        {
+            "id": log.id,
+            "user_id": log.user_id,
+            "username": log.username,
+            "action": log.action,
+            "resource_type": log.resource_type,
+            "resource_id": log.resource_id,
+            "details": log.details,
+            "ip_address": log.ip_address,
+            "user_agent": log.user_agent,
+            "timestamp": log.timestamp.isoformat(),
+        }
+        for log in logs
+    ]
+
+    if include_total:
+        return {"items": items, "total": total or 0}
+    return items


 @router.get("/audit/login-attempts")
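Worth flagging for API consumers: this endpoint previously always returned `{"total": ..., "logs": [...]}`; after the refactor the default is a bare list, and `{items, total}` is opt-in via `include_total=true`. A defensive client shim (illustrative):

```python
def normalize_audit_response(body):
    """Accept all three historical shapes and return (entries, total_or_None)."""
    if isinstance(body, dict) and "logs" in body:   # pre-refactor shape
        return body["logs"], body.get("total")
    if isinstance(body, dict) and "items" in body:  # include_total=true
        return body["items"], body.get("total")
    return body, None                               # plain list (new default)
```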
@@ -1166,39 +1336,55 @@ async def get_login_attempts(
     username: Optional[str] = Query(None),
     failed_only: bool = Query(False),
     hours_back: int = Query(168, ge=1, le=8760),  # Default 7 days
+    sort_by: Optional[str] = Query("timestamp", description="Sort by: timestamp, username, ip_address, success"),
+    sort_dir: Optional[str] = Query("desc", description="Sort direction: asc or desc"),
+    include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
     db: Session = Depends(get_db),
     current_user: User = Depends(get_admin_user)
 ):
-    """Get login attempts with filtering"""
+    """Get login attempts with filtering, sorting, and pagination"""

-    cutoff_time = datetime.now() - timedelta(hours=hours_back)
+    cutoff_time = datetime.now(timezone.utc) - timedelta(hours=hours_back)

     query = db.query(LoginAttempt).filter(LoginAttempt.timestamp >= cutoff_time)

     if username:
         query = query.filter(LoginAttempt.username.ilike(f"%{username}%"))

     if failed_only:
         query = query.filter(LoginAttempt.success == 0)

-    total_count = query.count()
-    attempts = query.order_by(LoginAttempt.timestamp.desc()).offset(skip).limit(limit).all()
-
-    return {
-        "total": total_count,
-        "attempts": [
-            {
-                "id": attempt.id,
-                "username": attempt.username,
-                "ip_address": attempt.ip_address,
-                "user_agent": attempt.user_agent,
-                "success": bool(attempt.success),
-                "failure_reason": attempt.failure_reason,
-                "timestamp": attempt.timestamp.isoformat()
-            }
-            for attempt in attempts
-        ]
-    }
+    # Sorting (whitelisted)
+    query = apply_sorting(
+        query,
+        sort_by,
+        sort_dir,
+        allowed={
+            "timestamp": [LoginAttempt.timestamp],
+            "username": [LoginAttempt.username],
+            "ip_address": [LoginAttempt.ip_address],
+            "success": [LoginAttempt.success],
+        },
+    )
+
+    attempts, total = paginate_with_total(query, skip, limit, include_total)
+
+    items = [
+        {
+            "id": attempt.id,
+            "username": attempt.username,
+            "ip_address": attempt.ip_address,
+            "user_agent": attempt.user_agent,
+            "success": bool(attempt.success),
+            "failure_reason": attempt.failure_reason,
+            "timestamp": attempt.timestamp.isoformat(),
+        }
+        for attempt in attempts
+    ]
+
+    if include_total:
+        return {"items": items, "total": total or 0}
+    return items


 @router.get("/audit/user-activity/{user_id}")
@@ -1251,7 +1437,7 @@ async def get_security_alerts(
 ):
     """Get security alerts and suspicious activity"""

-    cutoff_time = datetime.now() - timedelta(hours=hours_back)
+    cutoff_time = datetime.now(timezone.utc) - timedelta(hours=hours_back)

     # Get failed login attempts
     failed_logins = db.query(LoginAttempt).filter(
@@ -1356,7 +1542,7 @@ async def get_audit_statistics(
 ):
     """Get audit statistics and metrics"""

-    cutoff_time = datetime.now() - timedelta(days=days_back)
+    cutoff_time = datetime.now(timezone.utc) - timedelta(days=days_back)

     # Total activity counts
     total_audit_entries = db.query(func.count(AuditLog.id)).filter(
app/api/auth.py
@@ -1,7 +1,7 @@
"""
|
"""
|
||||||
Authentication API endpoints
|
Authentication API endpoints
|
||||||
"""
|
"""
|
||||||
from datetime import datetime, timedelta
|
from datetime import datetime, timedelta, timezone
|
||||||
from typing import List
|
from typing import List
|
||||||
from fastapi import APIRouter, Depends, HTTPException, status, Request
|
from fastapi import APIRouter, Depends, HTTPException, status, Request
|
||||||
from fastapi.security import OAuth2PasswordRequestForm
|
from fastapi.security import OAuth2PasswordRequestForm
|
||||||
@@ -69,7 +69,7 @@ async def login(login_data: LoginRequest, request: Request, db: Session = Depend
     )

     # Update last login
-    user.last_login = datetime.utcnow()
+    user.last_login = datetime.now(timezone.utc)
     db.commit()

     access_token_expires = timedelta(minutes=settings.access_token_expire_minutes)
@@ -144,7 +144,7 @@ async def read_users_me(current_user: User = Depends(get_current_user)):
 async def refresh_token_endpoint(
     request: Request,
     db: Session = Depends(get_db),
-    body: RefreshRequest | None = None,
+    body: RefreshRequest = None,
 ):
     """Issue a new access token using a valid, non-revoked refresh token.

@@ -203,7 +203,7 @@ async def refresh_token_endpoint(
     if not user or not user.is_active:
         raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="User not found or inactive")

-    user.last_login = datetime.utcnow()
+    user.last_login = datetime.now(timezone.utc)
     db.commit()

     access_token_expires = timedelta(minutes=settings.access_token_expire_minutes)
@@ -225,7 +225,7 @@ async def list_users(


 @router.post("/logout")
-async def logout(body: RefreshRequest | None = None, db: Session = Depends(get_db)):
+async def logout(body: RefreshRequest = None, db: Session = Depends(get_db)):
     """Revoke the provided refresh token. Idempotent and safe to call multiple times.

     The client should send a JSON body: { "refresh_token": "..." }.
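Taken together with the login hunk above, the refresh/logout contract looks roughly like this from a client. A sketch: the `refresh_token` body field follows the docstring shown here, but the exact refresh route path and the login field names are assumptions:

```python
import requests

BASE = "http://localhost:6920/api/auth"

# Field names assumed; the LoginRequest schema is not part of this diff.
tokens = requests.post(f"{BASE}/login", json={"username": "admin", "password": "..."}).json()

# Trade a valid, non-revoked refresh token for a fresh access token
# (route path assumed from refresh_token_endpoint above).
refreshed = requests.post(f"{BASE}/refresh", json={"refresh_token": tokens["refresh_token"]}).json()

# Revoke it when done; the endpoint is idempotent, so repeats are harmless.
requests.post(f"{BASE}/logout", json={"refresh_token": tokens["refresh_token"]})
```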
app/api/customers.py
@@ -4,7 +4,7 @@ Customer (Rolodex) API endpoints
 from typing import List, Optional, Union
 from fastapi import APIRouter, Depends, HTTPException, status, Query
 from sqlalchemy.orm import Session, joinedload
-from sqlalchemy import or_, and_, func, asc, desc
+from sqlalchemy import func
 from fastapi.responses import StreamingResponse
 import csv
 import io
@@ -13,12 +13,16 @@ from app.database.base import get_db
 from app.models.rolodex import Rolodex, Phone
 from app.models.user import User
 from app.auth.security import get_current_user
+from app.services.cache import invalidate_search_cache
+from app.services.customers_search import apply_customer_filters, apply_customer_sorting, prepare_customer_csv_rows
+from app.services.query_utils import apply_sorting, paginate_with_total

 router = APIRouter()


 # Pydantic schemas for request/response
-from pydantic import BaseModel, EmailStr
+from pydantic import BaseModel, EmailStr, Field
+from pydantic.config import ConfigDict
 from datetime import date

@@ -32,8 +36,7 @@ class PhoneResponse(BaseModel):
     location: Optional[str]
     phone: str

-    class Config:
-        from_attributes = True
+    model_config = ConfigDict(from_attributes=True)


 class CustomerBase(BaseModel):
@@ -84,10 +87,11 @@ class CustomerUpdate(BaseModel):


 class CustomerResponse(CustomerBase):
-    phone_numbers: List[PhoneResponse] = []
+    phone_numbers: List[PhoneResponse] = Field(default_factory=list)

-    class Config:
-        from_attributes = True
+    model_config = ConfigDict(from_attributes=True)



 @router.get("/search/phone")
@@ -196,80 +200,17 @@ async def list_customers(
     try:
         base_query = db.query(Rolodex)

-        if search:
-            s = (search or "").strip()
-            s_lower = s.lower()
-            tokens = [t for t in s_lower.split() if t]
-            # Basic contains search on several fields (case-insensitive)
-            contains_any = or_(
-                func.lower(Rolodex.id).contains(s_lower),
-                func.lower(Rolodex.last).contains(s_lower),
-                func.lower(Rolodex.first).contains(s_lower),
-                func.lower(Rolodex.middle).contains(s_lower),
-                func.lower(Rolodex.city).contains(s_lower),
-                func.lower(Rolodex.email).contains(s_lower),
-            )
-            # Multi-token name support: every token must match either first, middle, or last
-            name_tokens = [
-                or_(
-                    func.lower(Rolodex.first).contains(tok),
-                    func.lower(Rolodex.middle).contains(tok),
-                    func.lower(Rolodex.last).contains(tok),
-                )
-                for tok in tokens
-            ]
-            combined = contains_any if not name_tokens else or_(contains_any, and_(*name_tokens))
-            # Comma pattern: "Last, First"
-            last_first_filter = None
-            if "," in s_lower:
-                last_part, first_part = [p.strip() for p in s_lower.split(",", 1)]
-                if last_part and first_part:
-                    last_first_filter = and_(
-                        func.lower(Rolodex.last).contains(last_part),
-                        func.lower(Rolodex.first).contains(first_part),
-                    )
-                elif last_part:
-                    last_first_filter = func.lower(Rolodex.last).contains(last_part)
-            final_filter = or_(combined, last_first_filter) if last_first_filter is not None else combined
-            base_query = base_query.filter(final_filter)
-
-        # Apply group/state filters (support single and multi-select)
-        effective_groups = [g for g in (groups or []) if g] or ([group] if group else [])
-        if effective_groups:
-            base_query = base_query.filter(Rolodex.group.in_(effective_groups))
-        effective_states = [s for s in (states or []) if s] or ([state] if state else [])
-        if effective_states:
-            base_query = base_query.filter(Rolodex.abrev.in_(effective_states))
+        base_query = apply_customer_filters(
+            base_query,
+            search=search,
+            group=group,
+            state=state,
+            groups=groups,
+            states=states,
+        )

         # Apply sorting (whitelisted fields only)
-        normalized_sort_by = (sort_by or "id").lower()
-        normalized_sort_dir = (sort_dir or "asc").lower()
-        is_desc = normalized_sort_dir == "desc"
-
-        order_columns = []
-        if normalized_sort_by == "id":
-            order_columns = [Rolodex.id]
-        elif normalized_sort_by == "name":
-            # Sort by last, then first
-            order_columns = [Rolodex.last, Rolodex.first]
-        elif normalized_sort_by == "city":
-            # Sort by city, then state abbreviation
-            order_columns = [Rolodex.city, Rolodex.abrev]
-        elif normalized_sort_by == "email":
-            order_columns = [Rolodex.email]
-        else:
-            # Fallback to id to avoid arbitrary column injection
-            order_columns = [Rolodex.id]
-
-        # Case-insensitive ordering where applicable, preserving None ordering default
-        ordered = []
-        for col in order_columns:
-            # Use lower() for string-like cols; SQLAlchemy will handle non-string safely enough for SQLite/Postgres
-            expr = func.lower(col) if col.type.python_type in (str,) else col  # type: ignore[attr-defined]
-            ordered.append(desc(expr) if is_desc else asc(expr))
-
-        if ordered:
-            base_query = base_query.order_by(*ordered)
+        base_query = apply_customer_sorting(base_query, sort_by=sort_by, sort_dir=sort_dir)

         customers = base_query.options(joinedload(Rolodex.phone_numbers)).offset(skip).limit(limit).all()
         if include_total:
@@ -304,72 +245,16 @@ async def export_customers(
     try:
         base_query = db.query(Rolodex)

-        if search:
-            s = (search or "").strip()
-            s_lower = s.lower()
-            tokens = [t for t in s_lower.split() if t]
-            contains_any = or_(
-                func.lower(Rolodex.id).contains(s_lower),
-                func.lower(Rolodex.last).contains(s_lower),
-                func.lower(Rolodex.first).contains(s_lower),
-                func.lower(Rolodex.middle).contains(s_lower),
-                func.lower(Rolodex.city).contains(s_lower),
-                func.lower(Rolodex.email).contains(s_lower),
-            )
-            name_tokens = [
-                or_(
-                    func.lower(Rolodex.first).contains(tok),
-                    func.lower(Rolodex.middle).contains(tok),
-                    func.lower(Rolodex.last).contains(tok),
-                )
-                for tok in tokens
-            ]
-            combined = contains_any if not name_tokens else or_(contains_any, and_(*name_tokens))
-            last_first_filter = None
-            if "," in s_lower:
-                last_part, first_part = [p.strip() for p in s_lower.split(",", 1)]
-                if last_part and first_part:
-                    last_first_filter = and_(
-                        func.lower(Rolodex.last).contains(last_part),
-                        func.lower(Rolodex.first).contains(first_part),
-                    )
-                elif last_part:
-                    last_first_filter = func.lower(Rolodex.last).contains(last_part)
-            final_filter = or_(combined, last_first_filter) if last_first_filter is not None else combined
-            base_query = base_query.filter(final_filter)
-
-        effective_groups = [g for g in (groups or []) if g] or ([group] if group else [])
-        if effective_groups:
-            base_query = base_query.filter(Rolodex.group.in_(effective_groups))
-        effective_states = [s for s in (states or []) if s] or ([state] if state else [])
-        if effective_states:
-            base_query = base_query.filter(Rolodex.abrev.in_(effective_states))
-
-        normalized_sort_by = (sort_by or "id").lower()
-        normalized_sort_dir = (sort_dir or "asc").lower()
-        is_desc = normalized_sort_dir == "desc"
-
-        order_columns = []
-        if normalized_sort_by == "id":
-            order_columns = [Rolodex.id]
-        elif normalized_sort_by == "name":
-            order_columns = [Rolodex.last, Rolodex.first]
-        elif normalized_sort_by == "city":
-            order_columns = [Rolodex.city, Rolodex.abrev]
-        elif normalized_sort_by == "email":
-            order_columns = [Rolodex.email]
-        else:
-            order_columns = [Rolodex.id]
-
-        ordered = []
-        for col in order_columns:
-            try:
-                expr = func.lower(col) if col.type.python_type in (str,) else col  # type: ignore[attr-defined]
-            except Exception:
-                expr = col
-            ordered.append(desc(expr) if is_desc else asc(expr))
-        if ordered:
-            base_query = base_query.order_by(*ordered)
+        base_query = apply_customer_filters(
+            base_query,
+            search=search,
+            group=group,
+            state=state,
+            groups=groups,
+            states=states,
+        )
+        base_query = apply_customer_sorting(base_query, sort_by=sort_by, sort_dir=sort_dir)

         if not export_all:
             if skip is not None:
@@ -382,39 +267,10 @@ async def export_customers(
         # Prepare CSV
         output = io.StringIO()
         writer = csv.writer(output)
-        allowed_fields_in_order = ["id", "name", "group", "city", "state", "phone", "email"]
-        header_names = {
-            "id": "Customer ID",
-            "name": "Name",
-            "group": "Group",
-            "city": "City",
-            "state": "State",
-            "phone": "Primary Phone",
-            "email": "Email",
-        }
-        requested = [f.lower() for f in (fields or []) if isinstance(f, str)]
-        selected_fields = [f for f in allowed_fields_in_order if f in requested] if requested else allowed_fields_in_order
-        if not selected_fields:
-            selected_fields = allowed_fields_in_order
-        writer.writerow([header_names[f] for f in selected_fields])
-        for c in customers:
-            full_name = f"{(c.first or '').strip()} {(c.last or '').strip()}".strip()
-            primary_phone = ""
-            try:
-                if c.phone_numbers:
-                    primary_phone = c.phone_numbers[0].phone or ""
-            except Exception:
-                primary_phone = ""
-            row_map = {
-                "id": c.id,
-                "name": full_name,
-                "group": c.group or "",
-                "city": c.city or "",
-                "state": c.abrev or "",
-                "phone": primary_phone,
-                "email": c.email or "",
-            }
-            writer.writerow([row_map[f] for f in selected_fields])
+        header_row, rows = prepare_customer_csv_rows(customers, fields)
+        writer.writerow(header_row)
+        for row in rows:
+            writer.writerow(row)

         output.seek(0)
         filename = "customers_export.csv"
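The new `app.services.customers_search` module replaces the inline logic removed above but is not itself in the diff; reconstructed from that removed code, its filter helper plausibly looks like this (a simplified sketch, not the actual file; the multi-token name matching is omitted):

```python
# Hypothetical sketch of apply_customer_filters, reconstructed from the
# inline logic this commit removes from list_customers/export_customers.
from sqlalchemy import and_, func, or_
from app.models.rolodex import Rolodex

def apply_customer_filters(query, *, search=None, group=None, state=None,
                           groups=None, states=None):
    if search and search.strip():
        s = search.strip().lower()
        cols = [Rolodex.id, Rolodex.last, Rolodex.first,
                Rolodex.middle, Rolodex.city, Rolodex.email]
        combined = or_(*[func.lower(c).contains(s) for c in cols])
        if "," in s:  # "Last, First" convenience pattern
            last, _, first = (p.strip() for p in s.partition(","))
            if last and first:
                combined = or_(combined, and_(
                    func.lower(Rolodex.last).contains(last),
                    func.lower(Rolodex.first).contains(first)))
        query = query.filter(combined)
    # Group/state filters accept single values or multi-select lists.
    effective_groups = [g for g in (groups or []) if g] or ([group] if group else [])
    if effective_groups:
        query = query.filter(Rolodex.group.in_(effective_groups))
    effective_states = [st for st in (states or []) if st] or ([state] if state else [])
    if effective_states:
        query = query.filter(Rolodex.abrev.in_(effective_states))
    return query
```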
@@ -469,6 +325,10 @@ async def create_customer(
     db.commit()
     db.refresh(customer)

+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return customer

@@ -494,7 +354,10 @@ async def update_customer(

     db.commit()
     db.refresh(customer)
+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return customer

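The same best-effort `try/except` invalidation appears after create, update, and delete, so a cache outage can never fail the write path. If silently swallowing every exception is too blunt, a slightly narrower variant might look like this (sketch; the exact exception types depend on the cache client):

```python
import logging
from app.services.cache import invalidate_search_cache

logger = logging.getLogger(__name__)

async def invalidate_quietly() -> None:
    """Best-effort cache invalidation: never fail the write, but leave a trace."""
    try:
        await invalidate_search_cache()
    except Exception as exc:  # e.g. Redis connectivity errors; client-specific
        logger.warning("search cache invalidation skipped: %s", exc)
```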
@@ -515,17 +378,30 @@ async def delete_customer(

     db.delete(customer)
     db.commit()
+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return {"message": "Customer deleted successfully"}


-@router.get("/{customer_id}/phones", response_model=List[PhoneResponse])
+class PaginatedPhonesResponse(BaseModel):
+    items: List[PhoneResponse]
+    total: int
+
+
+@router.get("/{customer_id}/phones", response_model=Union[List[PhoneResponse], PaginatedPhonesResponse])
 async def get_customer_phones(
     customer_id: str,
+    skip: int = Query(0, ge=0, description="Offset for pagination"),
+    limit: int = Query(100, ge=1, le=1000, description="Page size"),
+    sort_by: Optional[str] = Query("location", description="Sort by: location, phone"),
+    sort_dir: Optional[str] = Query("asc", description="Sort direction: asc or desc"),
+    include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
     db: Session = Depends(get_db),
     current_user: User = Depends(get_current_user)
 ):
-    """Get customer phone numbers"""
+    """Get customer phone numbers with optional sorting/pagination"""
     customer = db.query(Rolodex).filter(Rolodex.id == customer_id).first()

     if not customer:
@@ -534,7 +410,21 @@ async def get_customer_phones(
             detail="Customer not found"
         )

-    phones = db.query(Phone).filter(Phone.rolodex_id == customer_id).all()
+    query = db.query(Phone).filter(Phone.rolodex_id == customer_id)
+
+    query = apply_sorting(
+        query,
+        sort_by,
+        sort_dir,
+        allowed={
+            "location": [Phone.location, Phone.phone],
+            "phone": [Phone.phone],
+        },
+    )
+
+    phones, total = paginate_with_total(query, skip, limit, include_total)
+    if include_total:
+        return {"items": phones, "total": total or 0}
     return phones

app/api/documents.py
@@ -1,16 +1,19 @@
"""
|
"""
|
||||||
Document Management API endpoints - QDROs, Templates, and General Documents
|
Document Management API endpoints - QDROs, Templates, and General Documents
|
||||||
"""
|
"""
|
||||||
from typing import List, Optional, Dict, Any
|
from __future__ import annotations
|
||||||
|
from typing import List, Optional, Dict, Any, Union
|
||||||
from fastapi import APIRouter, Depends, HTTPException, status, Query, UploadFile, File, Form, Request
|
from fastapi import APIRouter, Depends, HTTPException, status, Query, UploadFile, File, Form, Request
|
||||||
from sqlalchemy.orm import Session, joinedload
|
from sqlalchemy.orm import Session, joinedload
|
||||||
from sqlalchemy import or_, func, and_, desc, asc, text
|
from sqlalchemy import or_, func, and_, desc, asc, text
|
||||||
from datetime import date, datetime
|
from datetime import date, datetime, timezone
|
||||||
import os
|
import os
|
||||||
import uuid
|
import uuid
|
||||||
import shutil
|
import shutil
|
||||||
|
|
||||||
from app.database.base import get_db
|
from app.database.base import get_db
|
||||||
|
from app.api.search_highlight import build_query_tokens
|
||||||
|
from app.services.query_utils import tokenized_ilike_filter, apply_pagination, apply_sorting, paginate_with_total
|
||||||
from app.models.qdro import QDRO
|
from app.models.qdro import QDRO
|
||||||
from app.models.files import File as FileModel
|
from app.models.files import File as FileModel
|
||||||
from app.models.rolodex import Rolodex
|
from app.models.rolodex import Rolodex
|
||||||
@@ -20,18 +23,20 @@ from app.auth.security import get_current_user
|
|||||||
from app.models.additional import Document
|
from app.models.additional import Document
|
||||||
from app.core.logging import get_logger
|
from app.core.logging import get_logger
|
||||||
from app.services.audit import audit_service
|
from app.services.audit import audit_service
|
||||||
|
from app.services.cache import invalidate_search_cache
|
||||||
|
|
||||||
router = APIRouter()
|
router = APIRouter()
|
||||||
|
|
||||||
|
|
||||||
# Pydantic schemas
|
# Pydantic schemas
|
||||||
from pydantic import BaseModel
|
from pydantic import BaseModel, ConfigDict
|
||||||
|
|
||||||
|
|
||||||
class QDROBase(BaseModel):
|
class QDROBase(BaseModel):
|
||||||
file_no: str
|
file_no: str
|
||||||
version: str = "01"
|
version: str = "01"
|
||||||
title: Optional[str] = None
|
title: Optional[str] = None
|
||||||
|
form_name: Optional[str] = None
|
||||||
content: Optional[str] = None
|
content: Optional[str] = None
|
||||||
status: str = "DRAFT"
|
status: str = "DRAFT"
|
||||||
created_date: Optional[date] = None
|
created_date: Optional[date] = None
|
||||||
@@ -51,6 +56,7 @@ class QDROCreate(QDROBase):
|
|||||||
class QDROUpdate(BaseModel):
|
class QDROUpdate(BaseModel):
|
||||||
version: Optional[str] = None
|
version: Optional[str] = None
|
||||||
title: Optional[str] = None
|
title: Optional[str] = None
|
||||||
|
form_name: Optional[str] = None
|
||||||
content: Optional[str] = None
|
content: Optional[str] = None
|
||||||
status: Optional[str] = None
|
status: Optional[str] = None
|
||||||
created_date: Optional[date] = None
|
created_date: Optional[date] = None
|
||||||
@@ -66,27 +72,61 @@ class QDROUpdate(BaseModel):
|
|||||||
class QDROResponse(QDROBase):
|
class QDROResponse(QDROBase):
|
||||||
id: int
|
id: int
|
||||||
|
|
||||||
class Config:
|
model_config = ConfigDict(from_attributes=True)
|
||||||
from_attributes = True
|
|
||||||
|
|
||||||
|
|
||||||
@router.get("/qdros/{file_no}", response_model=List[QDROResponse])
|
class PaginatedQDROResponse(BaseModel):
|
||||||
|
items: List[QDROResponse]
|
||||||
|
total: int
|
||||||
|
|
||||||
|
|
||||||
|
@router.get("/qdros/{file_no}", response_model=Union[List[QDROResponse], PaginatedQDROResponse])
|
||||||
async def get_file_qdros(
|
async def get_file_qdros(
|
||||||
file_no: str,
|
file_no: str,
|
||||||
|
skip: int = Query(0, ge=0, description="Offset for pagination"),
|
||||||
|
limit: int = Query(100, ge=1, le=1000, description="Page size"),
|
||||||
|
sort_by: Optional[str] = Query("updated", description="Sort by: updated, created, version, status"),
|
||||||
|
sort_dir: Optional[str] = Query("desc", description="Sort direction: asc or desc"),
|
||||||
|
include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
|
||||||
db: Session = Depends(get_db),
|
db: Session = Depends(get_db),
|
||||||
current_user: User = Depends(get_current_user)
|
current_user: User = Depends(get_current_user)
|
||||||
):
|
):
|
||||||
"""Get QDROs for specific file"""
|
"""Get QDROs for a specific file with optional sorting/pagination"""
|
||||||
qdros = db.query(QDRO).filter(QDRO.file_no == file_no).order_by(QDRO.version).all()
|
query = db.query(QDRO).filter(QDRO.file_no == file_no)
|
||||||
|
|
||||||
|
# Sorting (whitelisted)
|
||||||
|
query = apply_sorting(
|
||||||
|
query,
|
||||||
|
sort_by,
|
||||||
|
sort_dir,
|
||||||
|
allowed={
|
||||||
|
"updated": [QDRO.updated_at, QDRO.id],
|
||||||
|
"created": [QDRO.created_at, QDRO.id],
|
||||||
|
"version": [QDRO.version],
|
||||||
|
"status": [QDRO.status],
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
qdros, total = paginate_with_total(query, skip, limit, include_total)
|
||||||
|
if include_total:
|
||||||
|
return {"items": qdros, "total": total or 0}
|
||||||
return qdros
|
return qdros
|
||||||
|
|
||||||
|
|
||||||
@router.get("/qdros/", response_model=List[QDROResponse])
|
class PaginatedQDROResponse(BaseModel):
|
||||||
|
items: List[QDROResponse]
|
||||||
|
total: int
|
||||||
|
|
||||||
|
|
||||||
|
@router.get("/qdros/", response_model=Union[List[QDROResponse], PaginatedQDROResponse])
|
||||||
async def list_qdros(
|
async def list_qdros(
|
||||||
skip: int = Query(0, ge=0),
|
skip: int = Query(0, ge=0),
|
||||||
limit: int = Query(50, ge=1, le=200),
|
limit: int = Query(50, ge=1, le=200),
|
||||||
status_filter: Optional[str] = Query(None),
|
status_filter: Optional[str] = Query(None),
|
||||||
search: Optional[str] = Query(None),
|
search: Optional[str] = Query(None),
|
||||||
|
sort_by: Optional[str] = Query(None, description="Sort by: file_no, version, status, created, updated"),
|
||||||
|
sort_dir: Optional[str] = Query("asc", description="Sort direction: asc or desc"),
|
||||||
|
include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
|
||||||
db: Session = Depends(get_db),
|
db: Session = Depends(get_db),
|
||||||
current_user: User = Depends(get_current_user)
|
current_user: User = Depends(get_current_user)
|
||||||
):
|
):
|
||||||
@@ -97,17 +137,37 @@ async def list_qdros(
|
|||||||
query = query.filter(QDRO.status == status_filter)
|
query = query.filter(QDRO.status == status_filter)
|
||||||
|
|
||||||
if search:
|
if search:
|
||||||
query = query.filter(
|
# DRY: tokenize and apply case-insensitive search across common QDRO fields
|
||||||
or_(
|
tokens = build_query_tokens(search)
|
||||||
QDRO.file_no.contains(search),
|
filter_expr = tokenized_ilike_filter(tokens, [
|
||||||
QDRO.title.contains(search),
|
QDRO.file_no,
|
||||||
QDRO.participant_name.contains(search),
|
QDRO.form_name,
|
||||||
QDRO.spouse_name.contains(search),
|
QDRO.pet,
|
||||||
QDRO.plan_name.contains(search)
|
QDRO.res,
|
||||||
)
|
QDRO.case_number,
|
||||||
)
|
QDRO.notes,
|
||||||
|
QDRO.status,
|
||||||
qdros = query.offset(skip).limit(limit).all()
|
])
|
||||||
|
if filter_expr is not None:
|
||||||
|
query = query.filter(filter_expr)
|
||||||
|
|
||||||
|
# Sorting (whitelisted)
|
||||||
|
query = apply_sorting(
|
||||||
|
query,
|
||||||
|
sort_by,
|
||||||
|
sort_dir,
|
||||||
|
allowed={
|
||||||
|
"file_no": [QDRO.file_no],
|
||||||
|
"version": [QDRO.version],
|
||||||
|
"status": [QDRO.status],
|
||||||
|
"created": [QDRO.created_at],
|
||||||
|
"updated": [QDRO.updated_at],
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
qdros, total = paginate_with_total(query, skip, limit, include_total)
|
||||||
|
if include_total:
|
||||||
|
return {"items": qdros, "total": total or 0}
|
||||||
return qdros
|
return qdros
|
||||||
|
|
||||||
|
|
||||||
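`build_query_tokens` and `tokenized_ilike_filter` are likewise imported rather than defined in this diff. Assuming they follow the usual tokenized-search pattern — every whitespace token must match at least one column, case-insensitively — a sketch could read:

from typing import List, Optional

from sqlalchemy import and_, or_


def build_query_tokens(search: str) -> List[str]:
    """Split a raw search string into non-empty tokens."""
    return [token for token in (search or "").split() if token]


def tokenized_ilike_filter(tokens: List[str], columns: list) -> Optional[object]:
    """AND together per-token ORs: each token must hit at least one column."""
    if not tokens or not columns:
        return None
    per_token = [or_(*[col.ilike(f"%{token}%") for col in columns]) for token in tokens]
    return and_(*per_token)

Returning None for an empty token list is why every call site guards with `if filter_expr is not None`.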
@@ -135,6 +195,10 @@ async def create_qdro(
     db.commit()
     db.refresh(qdro)

+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return qdro


@@ -189,6 +253,10 @@ async def update_qdro(
     db.commit()
     db.refresh(qdro)

+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return qdro


@@ -213,7 +281,10 @@ async def delete_qdro(

     db.delete(qdro)
     db.commit()
+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return {"message": "QDRO deleted successfully"}

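Because the list endpoints now declare `Union[List[...], Paginated...Response]`, existing clients keep receiving a plain JSON array while pagination-aware clients opt into the `{items, total}` envelope. A quick smoke test against a running instance — base URL, route prefix, and token are placeholders, since the router mounting is not part of this diff:

import requests

BASE = "http://localhost:8000"  # placeholder; adjust to the deployment
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

# Legacy shape: plain list
resp = requests.get(f"{BASE}/qdros/", headers=HEADERS)
assert isinstance(resp.json(), list)

# Opt-in envelope with a total for pagination UIs
resp = requests.get(
    f"{BASE}/qdros/",
    params={"include_total": "true", "limit": 25, "sort_by": "updated", "sort_dir": "desc"},
    headers=HEADERS,
)
page = resp.json()
print(page["total"], len(page["items"]))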
@@ -241,8 +312,7 @@ class TemplateResponse(TemplateBase):
     active: bool = True
     created_at: Optional[datetime] = None

-    class Config:
-        from_attributes = True
+    model_config = ConfigDict(from_attributes=True)

 # Document Generation Schema
 class DocumentGenerateRequest(BaseModel):
@@ -269,13 +339,21 @@ class DocumentStats(BaseModel):
     recent_activity: List[Dict[str, Any]]


-@router.get("/templates/", response_model=List[TemplateResponse])
+class PaginatedTemplatesResponse(BaseModel):
+    items: List[TemplateResponse]
+    total: int
+
+
+@router.get("/templates/", response_model=Union[List[TemplateResponse], PaginatedTemplatesResponse])
 async def list_templates(
     skip: int = Query(0, ge=0),
     limit: int = Query(50, ge=1, le=200),
     category: Optional[str] = Query(None),
     search: Optional[str] = Query(None),
     active_only: bool = Query(True),
+    sort_by: Optional[str] = Query(None, description="Sort by: form_id, form_name, category, created, updated"),
+    sort_dir: Optional[str] = Query("asc", description="Sort direction: asc or desc"),
+    include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
     db: Session = Depends(get_db),
     current_user: User = Depends(get_current_user)
 ):
@@ -289,14 +367,31 @@ async def list_templates(
         query = query.filter(FormIndex.category == category)

     if search:
-        query = query.filter(
-            or_(
-                FormIndex.form_name.contains(search),
-                FormIndex.form_id.contains(search)
-            )
-        )
+        # DRY: tokenize and apply case-insensitive search for templates
+        tokens = build_query_tokens(search)
+        filter_expr = tokenized_ilike_filter(tokens, [
+            FormIndex.form_name,
+            FormIndex.form_id,
+            FormIndex.category,
+        ])
+        if filter_expr is not None:
+            query = query.filter(filter_expr)

-    templates = query.offset(skip).limit(limit).all()
+    # Sorting (whitelisted)
+    query = apply_sorting(
+        query,
+        sort_by,
+        sort_dir,
+        allowed={
+            "form_id": [FormIndex.form_id],
+            "form_name": [FormIndex.form_name],
+            "category": [FormIndex.category],
+            "created": [FormIndex.created_at],
+            "updated": [FormIndex.updated_at],
+        },
+    )
+
+    templates, total = paginate_with_total(query, skip, limit, include_total)

     # Enhanced response with template content
     results = []
@@ -317,6 +412,8 @@ async def list_templates(
             "variables": _extract_variables_from_content(content)
         })

+    if include_total:
+        return {"items": results, "total": total or 0}
     return results

@@ -356,6 +453,10 @@ async def create_template(

     db.commit()
     db.refresh(form_index)
+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass

     return {
         "form_id": form_index.form_id,
@@ -440,6 +541,10 @@ async def update_template(

     db.commit()
     db.refresh(template)
+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass

     # Get updated content
     template_lines = db.query(FormList).filter(
@@ -480,6 +585,10 @@ async def delete_template(
     # Delete template
     db.delete(template)
     db.commit()
+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass

     return {"message": "Template deleted successfully"}

@@ -574,7 +683,7 @@ async def generate_document(
         "file_name": file_name,
         "file_path": file_path,
         "size": file_size,
-        "created_at": datetime.now()
+        "created_at": datetime.now(timezone.utc)
     }

@@ -629,32 +738,49 @@ async def get_document_stats(
 @router.get("/file/{file_no}/documents")
 async def get_file_documents(
     file_no: str,
+    sort_by: Optional[str] = Query("updated", description="Sort by: updated, created"),
+    sort_dir: Optional[str] = Query("desc", description="Sort direction: asc or desc"),
+    skip: int = Query(0, ge=0),
+    limit: int = Query(100, ge=1, le=1000),
+    include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
     db: Session = Depends(get_db),
     current_user: User = Depends(get_current_user)
 ):
-    """Get all documents associated with a specific file"""
-    # Get QDROs for this file
-    qdros = db.query(QDRO).filter(QDRO.file_no == file_no).order_by(desc(QDRO.updated_at)).all()
+    """Get all documents associated with a specific file, with optional sorting/pagination"""
+    # Base query for QDROs tied to the file
+    query = db.query(QDRO).filter(QDRO.file_no == file_no)

-    # Format response
-    documents = [
+    # Apply sorting using shared helper (map friendly names to columns)
+    query = apply_sorting(
+        query,
+        sort_by,
+        sort_dir,
+        allowed={
+            "updated": [QDRO.updated_at, QDRO.id],
+            "created": [QDRO.created_at, QDRO.id],
+        },
+    )
+
+    qdros, total = paginate_with_total(query, skip, limit, include_total)
+
+    items = [
         {
             "id": qdro.id,
             "type": "QDRO",
             "title": f"QDRO v{qdro.version}",
             "status": qdro.status,
-            "created_date": qdro.created_date.isoformat() if qdro.created_date else None,
-            "updated_at": qdro.updated_at.isoformat() if qdro.updated_at else None,
-            "file_no": qdro.file_no
+            "created_date": qdro.created_date.isoformat() if getattr(qdro, "created_date", None) else None,
+            "updated_at": qdro.updated_at.isoformat() if getattr(qdro, "updated_at", None) else None,
+            "file_no": qdro.file_no,
         }
         for qdro in qdros
     ]

-    return {
-        "file_no": file_no,
-        "documents": documents,
-        "total_count": len(documents)
-    }
+    # Keep the historical response shape: total_count is the filtered total when
+    # include_total is set, otherwise the number of items on this page.
+    return {
+        "file_no": file_no,
+        "documents": items,
+        "total_count": (total if include_total else len(items)),
+    }


 def _extract_variables_from_content(content: str) -> Dict[str, str]:

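`get_file_documents` keeps its historical envelope rather than switching to `{items, total}`; only the meaning of `total_count` shifts with `include_total`. An illustrative response (all values made up):

# GET /file/{file_no}/documents — illustrative payload
{
    "file_no": "2024-0157",
    "documents": [
        {
            "id": 12,
            "type": "QDRO",
            "title": "QDRO v02",
            "status": "DRAFT",
            "created_date": "2024-05-01",
            "updated_at": "2024-06-11T09:30:00",
            "file_no": "2024-0157",
        }
    ],
    "total_count": 1,
}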
@@ -1,25 +1,29 @@
 """
 File Management API endpoints
 """
-from typing import List, Optional, Dict, Any
+from typing import List, Optional, Dict, Any, Union
 from fastapi import APIRouter, Depends, HTTPException, status, Query
 from sqlalchemy.orm import Session, joinedload
 from sqlalchemy import or_, func, and_, desc
 from datetime import date, datetime

 from app.database.base import get_db
+from app.api.search_highlight import build_query_tokens
+from app.services.query_utils import tokenized_ilike_filter, apply_pagination, apply_sorting, paginate_with_total
 from app.models.files import File
 from app.models.rolodex import Rolodex
 from app.models.ledger import Ledger
 from app.models.lookups import Employee, FileType, FileStatus
 from app.models.user import User
 from app.auth.security import get_current_user
+from app.services.cache import invalidate_search_cache

 router = APIRouter()


 # Pydantic schemas
 from pydantic import BaseModel
+from pydantic.config import ConfigDict


 class FileBase(BaseModel):
@@ -67,17 +71,24 @@ class FileResponse(FileBase):
     amount_owing: float = 0.0
     transferable: float = 0.0

-    class Config:
-        from_attributes = True
+    model_config = ConfigDict(from_attributes=True)


-@router.get("/", response_model=List[FileResponse])
+class PaginatedFilesResponse(BaseModel):
+    items: List[FileResponse]
+    total: int
+
+
+@router.get("/", response_model=Union[List[FileResponse], PaginatedFilesResponse])
 async def list_files(
     skip: int = Query(0, ge=0),
     limit: int = Query(50, ge=1, le=200),
     search: Optional[str] = Query(None),
     status_filter: Optional[str] = Query(None),
     employee_filter: Optional[str] = Query(None),
+    sort_by: Optional[str] = Query(None, description="Sort by: file_no, client, opened, closed, status, amount_owing, total_charges"),
+    sort_dir: Optional[str] = Query("asc", description="Sort direction: asc or desc"),
+    include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
     db: Session = Depends(get_db),
     current_user: User = Depends(get_current_user)
 ):
@@ -85,14 +96,17 @@ async def list_files(
     query = db.query(File)

     if search:
-        query = query.filter(
-            or_(
-                File.file_no.contains(search),
-                File.id.contains(search),
-                File.regarding.contains(search),
-                File.file_type.contains(search)
-            )
-        )
+        # DRY: tokenize and apply case-insensitive search consistently with search endpoints
+        tokens = build_query_tokens(search)
+        filter_expr = tokenized_ilike_filter(tokens, [
+            File.file_no,
+            File.id,
+            File.regarding,
+            File.file_type,
+            File.memo,
+        ])
+        if filter_expr is not None:
+            query = query.filter(filter_expr)

     if status_filter:
         query = query.filter(File.status == status_filter)
@@ -100,7 +114,25 @@ async def list_files(
     if employee_filter:
         query = query.filter(File.empl_num == employee_filter)

-    files = query.offset(skip).limit(limit).all()
+    # Sorting (whitelisted)
+    query = apply_sorting(
+        query,
+        sort_by,
+        sort_dir,
+        allowed={
+            "file_no": [File.file_no],
+            "client": [File.id],
+            "opened": [File.opened],
+            "closed": [File.closed],
+            "status": [File.status],
+            "amount_owing": [File.amount_owing],
+            "total_charges": [File.total_charges],
+        },
+    )
+
+    files, total = paginate_with_total(query, skip, limit, include_total)
+    if include_total:
+        return {"items": files, "total": total or 0}
     return files

@@ -142,6 +174,10 @@ async def create_file(
     db.commit()
     db.refresh(file_obj)

+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return file_obj


@@ -167,7 +203,10 @@ async def update_file(

     db.commit()
     db.refresh(file_obj)
+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return file_obj


@@ -188,7 +227,10 @@ async def delete_file(

     db.delete(file_obj)
     db.commit()
+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return {"message": "File deleted successfully"}

@@ -433,11 +475,13 @@ async def advanced_file_search(
         query = query.filter(File.file_no.contains(file_no))

     if client_name:
+        # SQLite-safe concatenation for first + last name
+        full_name_expr = (func.coalesce(Rolodex.first, '') + ' ' + func.coalesce(Rolodex.last, ''))
         query = query.join(Rolodex).filter(
             or_(
                 Rolodex.first.contains(client_name),
                 Rolodex.last.contains(client_name),
-                func.concat(Rolodex.first, ' ', Rolodex.last).contains(client_name)
+                full_name_expr.contains(client_name)
             )
         )

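The `advanced_file_search` change exists because older SQLite builds have no CONCAT() function: SQLAlchemy's `func.concat(...)` renders it literally and fails there, while the `+` operator on string expressions compiles to SQLite's `||` operator. `coalesce` also guards against NULL names, since concatenating NULL yields NULL in SQLite. A self-contained check of the rendering (a sketch assuming stock SQLAlchemy 1.4+):

from sqlalchemy import Column, String, create_engine, func, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Person(Base):
    __tablename__ = "person"
    id = Column(String, primary_key=True)
    first = Column(String)
    last = Column(String)


engine = create_engine("sqlite://")
full_name = func.coalesce(Person.first, "") + " " + func.coalesce(Person.last, "")
# Compiles with the || operator on the SQLite dialect
print(select(full_name).compile(engine))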
@@ -1,11 +1,11 @@
 """
 Financial/Ledger API endpoints
 """
-from typing import List, Optional, Dict, Any
+from typing import List, Optional, Dict, Any, Union
 from fastapi import APIRouter, Depends, HTTPException, status, Query
 from sqlalchemy.orm import Session, joinedload
 from sqlalchemy import or_, func, and_, desc, asc, text
-from datetime import date, datetime, timedelta
+from datetime import date, datetime, timedelta, timezone

 from app.database.base import get_db
 from app.models.ledger import Ledger
@@ -14,12 +14,14 @@ from app.models.rolodex import Rolodex
 from app.models.lookups import Employee, TransactionType, TransactionCode
 from app.models.user import User
 from app.auth.security import get_current_user
+from app.services.cache import invalidate_search_cache
+from app.services.query_utils import apply_sorting, paginate_with_total

 router = APIRouter()


 # Pydantic schemas
-from pydantic import BaseModel
+from pydantic import BaseModel, ConfigDict


 class LedgerBase(BaseModel):
@@ -57,8 +59,7 @@ class LedgerResponse(LedgerBase):
     id: int
     item_no: int

-    class Config:
-        from_attributes = True
+    model_config = ConfigDict(from_attributes=True)


 class FinancialSummary(BaseModel):
@@ -75,23 +76,46 @@ class FinancialSummary(BaseModel):
     billed_amount: float


-@router.get("/ledger/{file_no}", response_model=List[LedgerResponse])
+class PaginatedLedgerResponse(BaseModel):
+    items: List[LedgerResponse]
+    total: int
+
+
+@router.get("/ledger/{file_no}", response_model=Union[List[LedgerResponse], PaginatedLedgerResponse])
 async def get_file_ledger(
     file_no: str,
-    skip: int = Query(0, ge=0),
-    limit: int = Query(100, ge=1, le=500),
-    billed_only: Optional[bool] = Query(None),
+    skip: int = Query(0, ge=0, description="Offset for pagination"),
+    limit: int = Query(100, ge=1, le=500, description="Page size"),
+    billed_only: Optional[bool] = Query(None, description="Filter billed vs unbilled entries"),
+    sort_by: Optional[str] = Query("date", description="Sort by: date, item_no, amount, billed"),
+    sort_dir: Optional[str] = Query("desc", description="Sort direction: asc or desc"),
+    include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
     db: Session = Depends(get_db),
     current_user: User = Depends(get_current_user)
 ):
     """Get ledger entries for specific file"""
-    query = db.query(Ledger).filter(Ledger.file_no == file_no).order_by(Ledger.date.desc())
+    query = db.query(Ledger).filter(Ledger.file_no == file_no)

     if billed_only is not None:
         billed_filter = "Y" if billed_only else "N"
         query = query.filter(Ledger.billed == billed_filter)

-    entries = query.offset(skip).limit(limit).all()
+    # Sorting (whitelisted)
+    query = apply_sorting(
+        query,
+        sort_by,
+        sort_dir,
+        allowed={
+            "date": [Ledger.date, Ledger.item_no],
+            "item_no": [Ledger.item_no],
+            "amount": [Ledger.amount],
+            "billed": [Ledger.billed, Ledger.date],
+        },
+    )
+
+    entries, total = paginate_with_total(query, skip, limit, include_total)
+    if include_total:
+        return {"items": entries, "total": total or 0}
     return entries

@@ -127,6 +151,10 @@ async def create_ledger_entry(
     # Update file balances (simplified version)
     await _update_file_balances(file_obj, db)

+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return entry


@@ -158,6 +186,10 @@ async def update_ledger_entry(
     if file_obj:
         await _update_file_balances(file_obj, db)

+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return entry


@@ -185,6 +217,10 @@ async def delete_ledger_entry(
     if file_obj:
         await _update_file_balances(file_obj, db)

+    try:
+        await invalidate_search_cache()
+    except Exception:
+        pass
     return {"message": "Ledger entry deleted successfully"}

@@ -7,7 +7,7 @@ import re
 import os
 from pathlib import Path
 from difflib import SequenceMatcher
-from datetime import datetime, date
+from datetime import datetime, date, timezone
 from decimal import Decimal
 from typing import List, Dict, Any, Optional, Tuple
 from fastapi import APIRouter, Depends, HTTPException, UploadFile, File as UploadFileForm, Form, Query
@@ -19,8 +19,8 @@ from app.models.rolodex import Rolodex, Phone
 from app.models.files import File
 from app.models.ledger import Ledger
 from app.models.qdro import QDRO
-from app.models.pensions import Pension, PensionSchedule, MarriageHistory, DeathBenefit, SeparationAgreement, LifeTable, NumberTable
+from app.models.pensions import Pension, PensionSchedule, MarriageHistory, DeathBenefit, SeparationAgreement, LifeTable, NumberTable, PensionResult
-from app.models.lookups import Employee, FileType, FileStatus, TransactionType, TransactionCode, State, GroupLookup, Footer, PlanInfo, FormIndex, FormList, PrinterSetup, SystemSetup
+from app.models.lookups import Employee, FileType, FileStatus, TransactionType, TransactionCode, State, GroupLookup, Footer, PlanInfo, FormIndex, FormList, PrinterSetup, SystemSetup, FormKeyword
 from app.models.additional import Payment, Deposit, FileNote, FormVariable, ReportVariable
 from app.models.flexible import FlexibleImport
 from app.models.audit import ImportAudit, ImportAuditFile
@@ -28,6 +28,25 @@ from app.config import settings

 router = APIRouter(tags=["import"])

+# Common encodings to try for legacy CSV files (order matters)
+ENCODINGS = [
+    'utf-8-sig',
+    'utf-8',
+    'windows-1252',
+    'iso-8859-1',
+    'cp1252',
+]
+
+# Unified import order used across batch operations
+IMPORT_ORDER = [
+    "STATES.csv", "GRUPLKUP.csv", "EMPLOYEE.csv", "FILETYPE.csv", "FILESTAT.csv",
+    "TRNSTYPE.csv", "TRNSLKUP.csv", "FOOTERS.csv", "SETUP.csv", "PRINTERS.csv",
+    "INX_LKUP.csv",
+    "ROLODEX.csv", "PHONE.csv", "FILES.csv", "LEDGER.csv", "TRNSACTN.csv",
+    "QDROS.csv", "PENSIONS.csv", "LIFETABL.csv", "NUMBERAL.csv", "PLANINFO.csv", "RESULTS.csv", "PAYMENTS.csv", "DEPOSITS.csv",
+    "FILENOTS.csv", "FORM_INX.csv", "FORM_LST.csv", "FVARLKUP.csv", "RVARLKUP.csv"
+]
+

 # CSV to Model mapping
 CSV_MODEL_MAPPING = {
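Every import path below now shares the module-level `ENCODINGS` list instead of re-declaring it. The loop each call site runs is equivalent to this small helper — shown only as a sketch of the shared pattern, not a function the diff actually adds:

from typing import Optional


def decode_legacy_csv(content: bytes) -> Optional[str]:
    """Try each known legacy encoding in priority order; None if all fail."""
    for encoding in ENCODINGS:
        try:
            return content.decode(encoding)
        except UnicodeDecodeError:
            continue
    return None

Ordering matters: `utf-8-sig` comes first so a BOM is stripped rather than leaking into the first header name. Note also that `iso-8859-1` accepts any byte sequence, so with it in the list the loop effectively cannot fail; the None branch is defensive.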
@@ -56,7 +75,6 @@ CSV_MODEL_MAPPING = {
     "FOOTERS.csv": Footer,
     "PLANINFO.csv": PlanInfo,
     # Legacy alternate names from export directories
-    "SCHEDULE.csv": PensionSchedule,
     "FORM_INX.csv": FormIndex,
     "FORM_LST.csv": FormList,
     "PRINTERS.csv": PrinterSetup,
@@ -67,7 +85,9 @@ CSV_MODEL_MAPPING = {
     "FVARLKUP.csv": FormVariable,
     "RVARLKUP.csv": ReportVariable,
     "PAYMENTS.csv": Payment,
-    "TRNSACTN.csv": Ledger # Maps to existing Ledger model (same structure)
+    "TRNSACTN.csv": Ledger,  # Maps to existing Ledger model (same structure)
+    "INX_LKUP.csv": FormKeyword,
+    "RESULTS.csv": PensionResult
 }

 # Field mappings for CSV columns to database fields
@@ -230,8 +250,12 @@ FIELD_MAPPINGS = {
         "Default_Rate": "default_rate"
     },
     "FILESTAT.csv": {
+        "Status": "status_code",
         "Status_Code": "status_code",
+        "Definition": "description",
         "Description": "description",
+        "Send": "send",
+        "Footer_Code": "footer_code",
         "Sort_Order": "sort_order"
     },
     "FOOTERS.csv": {
@@ -253,22 +277,44 @@ FIELD_MAPPINGS = {
         "Phone": "phone",
         "Notes": "notes"
     },
+    "INX_LKUP.csv": {
+        "Keyword": "keyword",
+        "Description": "description"
+    },
     "FORM_INX.csv": {
-        "Form_Id": "form_id",
-        "Form_Name": "form_name",
-        "Category": "category"
+        "Name": "form_id",
+        "Keyword": "keyword"
     },
     "FORM_LST.csv": {
-        "Form_Id": "form_id",
-        "Line_Number": "line_number",
-        "Content": "content"
+        "Name": "form_id",
+        "Memo": "content",
+        "Status": "status"
     },
     "PRINTERS.csv": {
+        # Legacy variants
         "Printer_Name": "printer_name",
         "Description": "description",
         "Driver": "driver",
         "Port": "port",
-        "Default_Printer": "default_printer"
+        "Default_Printer": "default_printer",
+        # Observed legacy headers from export
+        "Number": "number",
+        "Name": "printer_name",
+        "Page_Break": "page_break",
+        "Setup_St": "setup_st",
+        "Reset_St": "reset_st",
+        "B_Underline": "b_underline",
+        "E_Underline": "e_underline",
+        "B_Bold": "b_bold",
+        "E_Bold": "e_bold",
+        # Optional report toggles
+        "Phone_Book": "phone_book",
+        "Rolodex_Info": "rolodex_info",
+        "Envelope": "envelope",
+        "File_Cabinet": "file_cabinet",
+        "Accounts": "accounts",
+        "Statements": "statements",
+        "Calendar": "calendar",
     },
     "SETUP.csv": {
         "Setting_Key": "setting_key",
@@ -285,32 +331,98 @@ FIELD_MAPPINGS = {
     "MARRIAGE.csv": {
         "File_No": "file_no",
         "Version": "version",
-        "Marriage_Date": "marriage_date",
-        "Separation_Date": "separation_date",
-        "Divorce_Date": "divorce_date"
+        "Married_From": "married_from",
+        "Married_To": "married_to",
+        "Married_Years": "married_years",
+        "Service_From": "service_from",
+        "Service_To": "service_to",
+        "Service_Years": "service_years",
+        "Marital_%": "marital_percent"
     },
     "DEATH.csv": {
         "File_No": "file_no",
         "Version": "version",
-        "Benefit_Type": "benefit_type",
-        "Benefit_Amount": "benefit_amount",
-        "Beneficiary": "beneficiary"
+        "Lump1": "lump1",
+        "Lump2": "lump2",
+        "Growth1": "growth1",
+        "Growth2": "growth2",
+        "Disc1": "disc1",
+        "Disc2": "disc2"
     },
     "SEPARATE.csv": {
         "File_No": "file_no",
         "Version": "version",
-        "Agreement_Date": "agreement_date",
-        "Terms": "terms"
+        "Separation_Rate": "terms"
     },
     "LIFETABL.csv": {
-        "Age": "age",
-        "Male_Mortality": "male_mortality",
-        "Female_Mortality": "female_mortality"
+        "AGE": "age",
+        "LE_AA": "le_aa",
+        "NA_AA": "na_aa",
+        "LE_AM": "le_am",
+        "NA_AM": "na_am",
+        "LE_AF": "le_af",
+        "NA_AF": "na_af",
+        "LE_WA": "le_wa",
+        "NA_WA": "na_wa",
+        "LE_WM": "le_wm",
+        "NA_WM": "na_wm",
+        "LE_WF": "le_wf",
+        "NA_WF": "na_wf",
+        "LE_BA": "le_ba",
+        "NA_BA": "na_ba",
+        "LE_BM": "le_bm",
+        "NA_BM": "na_bm",
+        "LE_BF": "le_bf",
+        "NA_BF": "na_bf",
+        "LE_HA": "le_ha",
+        "NA_HA": "na_ha",
+        "LE_HM": "le_hm",
+        "NA_HM": "na_hm",
+        "LE_HF": "le_hf",
+        "NA_HF": "na_hf"
     },
     "NUMBERAL.csv": {
-        "Table_Name": "table_name",
+        "Month": "month",
+        "NA_AA": "na_aa",
+        "NA_AM": "na_am",
+        "NA_AF": "na_af",
+        "NA_WA": "na_wa",
+        "NA_WM": "na_wm",
+        "NA_WF": "na_wf",
+        "NA_BA": "na_ba",
+        "NA_BM": "na_bm",
+        "NA_BF": "na_bf",
+        "NA_HA": "na_ha",
+        "NA_HM": "na_hm",
+        "NA_HF": "na_hf"
+    },
+    "RESULTS.csv": {
+        "Accrued": "accrued",
+        "Start_Age": "start_age",
+        "COLA": "cola",
+        "Withdrawal": "withdrawal",
+        "Pre_DR": "pre_dr",
+        "Post_DR": "post_dr",
+        "Tax_Rate": "tax_rate",
         "Age": "age",
-        "Value": "value"
+        "Years_From": "years_from",
+        "Life_Exp": "life_exp",
+        "EV_Monthly": "ev_monthly",
+        "Payments": "payments",
+        "Pay_Out": "pay_out",
+        "Fund_Value": "fund_value",
+        "PV": "pv",
+        "Mortality": "mortality",
+        "PV_AM": "pv_am",
+        "PV_AMT": "pv_amt",
+        "PV_Pre_DB": "pv_pre_db",
+        "PV_Annuity": "pv_annuity",
+        "WV_AT": "wv_at",
+        "PV_Plan": "pv_plan",
+        "Years_Married": "years_married",
+        "Years_Service": "years_service",
+        "Marr_Per": "marr_per",
+        "Marr_Amt": "marr_amt"
     },
     # Additional CSV file mappings
     "DEPOSITS.csv": {
@@ -357,7 +469,7 @@ FIELD_MAPPINGS = {
 }


-def parse_date(date_str: str) -> Optional[datetime]:
+def parse_date(date_str: str) -> Optional[date]:
     """Parse date string in various formats"""
     if not date_str or date_str.strip() == "":
         return None
@@ -612,7 +724,11 @@ def convert_value(value: str, field_name: str) -> Any:
         return parsed_date

     # Boolean fields
-    if any(word in field_name.lower() for word in ["active", "default_printer", "billed", "transferable"]):
+    if any(word in field_name.lower() for word in [
+        "active", "default_printer", "billed", "transferable", "send",
+        # PrinterSetup legacy toggles
+        "phone_book", "rolodex_info", "envelope", "file_cabinet", "accounts", "statements", "calendar"
+    ]):
         if value.lower() in ["true", "1", "yes", "y", "on", "active"]:
             return True
         elif value.lower() in ["false", "0", "no", "n", "off", "inactive"]:
@@ -621,7 +737,11 @@ def convert_value(value: str, field_name: str) -> Any:
         return None

     # Numeric fields (float)
-    if any(word in field_name.lower() for word in ["rate", "hour", "bal", "fee", "amount", "owing", "transfer", "valu", "accrued", "vested", "cola", "tax", "percent", "benefit_amount", "mortality", "value"]):
+    if any(word in field_name.lower() for word in [
+        "rate", "hour", "bal", "fee", "amount", "owing", "transfer", "valu",
+        "accrued", "vested", "cola", "tax", "percent", "benefit_amount", "mortality",
+        "value"
+    ]) or field_name.lower().startswith(("na_", "le_")):
         try:
             # Remove currency symbols and commas
             cleaned_value = value.replace("$", "").replace(",", "").replace("%", "")
@@ -630,7 +750,9 @@ def convert_value(value: str, field_name: str) -> Any:
             return 0.0

     # Integer fields
-    if any(word in field_name.lower() for word in ["item_no", "age", "start_age", "version", "line_number", "sort_order", "empl_num"]):
+    if any(word in field_name.lower() for word in [
+        "item_no", "age", "start_age", "version", "line_number", "sort_order", "empl_num", "month", "number"
+    ]):
         try:
             return int(float(value))  # Handle cases like "1.0"
         except ValueError:
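The widened keyword lists plus the new `na_`/`le_` prefix rule are what let the rich LIFETABL/NUMBERAL columns come through typed instead of landing in flexible extras. Illustrative inputs and outputs under the updated rules:

# Illustrative convert_value behavior after this change
convert_value("Y", "send")             # -> True     ("send" now a boolean keyword)
convert_value("98517", "na_am")        # -> 98517.0  (na_*/le_* prefix routes to float)
convert_value("$1,250.00", "amount")   # -> 1250.0   (currency symbols stripped)
convert_value("3", "month")            # -> 3        ("month" now an integer keyword)

Branch order matters: boolean keywords are tested before numeric ones, so a field named `send` never reaches the float parser.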
@@ -673,11 +795,18 @@ async def get_available_csv_files(current_user: User = Depends(get_current_user)
         "available_files": list(CSV_MODEL_MAPPING.keys()),
         "descriptions": {
             "ROLODEX.csv": "Customer/contact information",
+            "ROLEX_V.csv": "Customer/contact information (alias)",
             "PHONE.csv": "Phone numbers linked to customers",
             "FILES.csv": "Client files and cases",
+            "FILES_R.csv": "Client files and cases (alias)",
+            "FILES_V.csv": "Client files and cases (alias)",
             "LEDGER.csv": "Financial transactions per file",
             "QDROS.csv": "Legal documents and court orders",
             "PENSIONS.csv": "Pension calculation data",
+            "SCHEDULE.csv": "Vesting schedules for pensions",
+            "MARRIAGE.csv": "Marriage history data",
+            "DEATH.csv": "Death benefit calculations",
+            "SEPARATE.csv": "Separation agreements",
             "EMPLOYEE.csv": "Staff and employee information",
             "STATES.csv": "US States lookup table",
             "FILETYPE.csv": "File type categories",
@@ -688,7 +817,12 @@ async def get_available_csv_files(current_user: User = Depends(get_current_user)
             "FVARLKUP.csv": "Form template variables",
             "RVARLKUP.csv": "Report template variables",
             "PAYMENTS.csv": "Individual payments within deposits",
-            "TRNSACTN.csv": "Transaction details (maps to Ledger)"
+            "TRNSACTN.csv": "Transaction details (maps to Ledger)",
+            "INX_LKUP.csv": "Form keywords lookup",
+            "PLANINFO.csv": "Pension plan information",
+            "RESULTS.csv": "Pension computed results",
+            "LIFETABL.csv": "Life expectancy table by age, sex, and race (rich typed)",
+            "NUMBERAL.csv": "Monthly survivor counts by sex and race (rich typed)"
         },
         "auto_discovery": True
     }
@@ -724,7 +858,7 @@ async def import_csv_data(
     content = await file.read()

     # Try multiple encodings for legacy CSV files
-    encodings = ['utf-8', 'windows-1252', 'iso-8859-1', 'cp1252']
+    encodings = ENCODINGS
     csv_content = None
     for encoding in encodings:
         try:
@@ -736,34 +870,7 @@ async def import_csv_data(
     if csv_content is None:
         raise HTTPException(status_code=400, detail="Could not decode CSV file. Please ensure it's saved in UTF-8, Windows-1252, or ISO-8859-1 encoding.")

-    # Preprocess CSV content to fix common legacy issues
-    def preprocess_csv(content):
-        lines = content.split('\n')
-        cleaned_lines = []
-        i = 0
-
-        while i < len(lines):
-            line = lines[i]
-            # If line doesn't have the expected number of commas, it might be a broken multi-line field
-            if i == 0:  # Header line
-                cleaned_lines.append(line)
-                expected_comma_count = line.count(',')
-                i += 1
-                continue
-
-            # Check if this line has the expected number of commas
-            if line.count(',') < expected_comma_count:
-                # This might be a continuation of the previous line
-                # Try to merge with previous line
-                if cleaned_lines:
-                    cleaned_lines[-1] += " " + line.replace('\n', ' ').replace('\r', ' ')
-                else:
-                    cleaned_lines.append(line)
-            else:
-                cleaned_lines.append(line)
-            i += 1
-
-        return '\n'.join(cleaned_lines)
+    # Note: preprocess_csv helper removed as unused; robust parsing handled below

     # Custom robust parser for problematic legacy CSV files
     class MockCSVReader:
@@ -791,7 +898,7 @@ async def import_csv_data(
         header_reader = csv.reader(io.StringIO(lines[0]))
         headers = next(header_reader)
         headers = [h.strip() for h in headers]
-        print(f"DEBUG: Found {len(headers)} headers: {headers}")
+        # Debug logging removed in API path; rely on audit/logging if needed
         # Build dynamic header mapping for this file/model
         mapping_info = _build_dynamic_mapping(headers, model_class, file_type)

@@ -829,17 +936,21 @@ async def import_csv_data(
                 continue

         csv_reader = MockCSVReader(rows_data, headers)
-        print(f"SUCCESS: Parsed {len(rows_data)} rows (skipped {skipped_rows} malformed rows)")
+        # Parsing summary suppressed to avoid noisy stdout in API

     except Exception as e:
-        print(f"Custom parsing failed: {e}")
+        # Keep error minimal for client; internal logging can capture 'e'
         raise HTTPException(status_code=400, detail=f"Could not parse CSV file. The file appears to have serious formatting issues. Error: {str(e)}")

     imported_count = 0
+    created_count = 0
+    updated_count = 0
     errors = []
     flexible_saved = 0
     mapped_headers = mapping_info.get("mapped_headers", {})
     unmapped_headers = mapping_info.get("unmapped_headers", [])
+    # Special handling: assign line numbers per form for FORM_LST.csv
+    form_lst_line_counters: Dict[str, int] = {}

     # If replace_existing is True, delete all existing records and related flexible extras
     if replace_existing:
@@ -860,6 +971,16 @@ async def import_csv_data(
                 converted_value = convert_value(row[csv_field], db_field)
                 if converted_value is not None:
                     model_data[db_field] = converted_value

+            # Inject sequential line_number for FORM_LST rows grouped by form_id
+            if file_type == "FORM_LST.csv":
+                form_id_value = model_data.get("form_id")
+                if form_id_value:
+                    current = form_lst_line_counters.get(str(form_id_value), 0) + 1
+                    form_lst_line_counters[str(form_id_value)] = current
+                    # Only set if not provided
+                    if "line_number" not in model_data:
+                        model_data["line_number"] = current
+
             # Skip empty rows
             if not any(model_data.values()):
@@ -902,10 +1023,43 @@ async def import_csv_data(
             if 'file_no' not in model_data or not model_data['file_no']:
                 continue  # Skip ledger records without file number

-            # Create model instance
-            instance = model_class(**model_data)
-            db.add(instance)
-            db.flush()  # Ensure PK is available
+            # Create or update model instance
+            instance = None
+            # Upsert behavior for printers
+            if model_class == PrinterSetup:
+                # Determine primary key field name
+                _, pk_names = _get_model_columns(model_class)
+                pk_field_name_local = pk_names[0] if len(pk_names) == 1 else None
+                pk_value_local = model_data.get(pk_field_name_local) if pk_field_name_local else None
+                if pk_field_name_local and pk_value_local:
+                    existing = db.query(model_class).filter(getattr(model_class, pk_field_name_local) == pk_value_local).first()
+                    if existing:
+                        # Update mutable fields
+                        for k, v in model_data.items():
+                            if k != pk_field_name_local:
+                                setattr(existing, k, v)
+                        instance = existing
+                        updated_count += 1
+                    else:
+                        instance = model_class(**model_data)
+                        db.add(instance)
+                        created_count += 1
+                else:
+                    # Fallback to insert if PK missing
+                    instance = model_class(**model_data)
+                    db.add(instance)
+                    created_count += 1
+                db.flush()
+                # Enforce single default
+                try:
+                    if bool(model_data.get("default_printer")):
+                        db.query(model_class).filter(getattr(model_class, pk_field_name_local) != getattr(instance, pk_field_name_local)).update({model_class.default_printer: False})
+                except Exception:
+                    pass
+            else:
+                instance = model_class(**model_data)
+                db.add(instance)
+                db.flush()  # Ensure PK is available

             # Capture PK details for flexible storage linkage (single-column PKs only)
             _, pk_names = _get_model_columns(model_class)
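The printers branch turns a re-import of PRINTERS.csv into an idempotent upsert keyed on the single-column primary key, and the `default_printer` follow-up clears the flag on every other row so at most one default survives. With the breakdown added to the response further down, a second import of an unchanged file would report something like (keys and counts illustrative):

# Illustrative PRINTERS.csv import result after the upsert change
{
    "imported_count": 4,
    "created_count": 0,   # nothing new on a re-import
    "updated_count": 4,   # all rows refreshed in place
    # ...plus the usual summary fields (errors, flexible extras, warnings)
}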
@@ -980,6 +1134,10 @@ async def import_csv_data(
             "flexible_saved_rows": flexible_saved,
         },
     }
+    # Include create/update breakdown for printers
+    if file_type == "PRINTERS.csv":
+        result["created_count"] = created_count
+        result["updated_count"] = updated_count

     if errors:
         result["warning"] = f"Import completed with {len(errors)} errors"
@@ -987,9 +1145,7 @@ async def import_csv_data(
         return result

     except Exception as e:
-        print(f"IMPORT ERROR DEBUG: {type(e).__name__}: {str(e)}")
-        import traceback
-        print(f"TRACEBACK: {traceback.format_exc()}")
+        # Suppress stdout debug prints in API layer
         db.rollback()
         raise HTTPException(status_code=500, detail=f"Import failed: {str(e)}")

@@ -1071,7 +1227,7 @@ async def validate_csv_file(
     content = await file.read()

     # Try multiple encodings for legacy CSV files
-    encodings = ['utf-8', 'windows-1252', 'iso-8859-1', 'cp1252']
+    encodings = ENCODINGS
     csv_content = None
     for encoding in encodings:
         try:
@@ -1083,18 +1239,6 @@ async def validate_csv_file(
     if csv_content is None:
         raise HTTPException(status_code=400, detail="Could not decode CSV file. Please ensure it's saved in UTF-8, Windows-1252, or ISO-8859-1 encoding.")

-    # Parse CSV with fallback to robust line-by-line parsing
-    def parse_csv_with_fallback(text: str) -> Tuple[List[Dict[str, str]], List[str]]:
-        try:
-            reader = csv.DictReader(io.StringIO(text), delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
-            headers_local = reader.fieldnames or []
-            rows_local = []
-            for r in reader:
-                rows_local.append(r)
-            return rows_local, headers_local
-        except Exception:
-            return parse_csv_robust(text)
-
     rows_list, csv_headers = parse_csv_with_fallback(csv_content)
     model_class = CSV_MODEL_MAPPING[file_type]
     mapping_info = _build_dynamic_mapping(csv_headers, model_class, file_type)
@@ -1142,9 +1286,7 @@ async def validate_csv_file(
         }

     except Exception as e:
-        print(f"VALIDATION ERROR DEBUG: {type(e).__name__}: {str(e)}")
-        import traceback
-        print(f"VALIDATION TRACEBACK: {traceback.format_exc()}")
+        # Suppress stdout debug prints in API layer
         raise HTTPException(status_code=500, detail=f"Validation failed: {str(e)}")

@@ -1199,7 +1341,7 @@ async def batch_validate_csv_files(
             content = await file.read()

             # Try multiple encodings for legacy CSV files (include BOM-friendly utf-8-sig)
-            encodings = ['utf-8-sig', 'utf-8', 'windows-1252', 'iso-8859-1', 'cp1252']
+            encodings = ENCODINGS
             csv_content = None
             for encoding in encodings:
                 try:
@@ -1302,13 +1444,7 @@ async def batch_import_csv_files(
         raise HTTPException(status_code=400, detail="Maximum 25 files allowed per batch")

     # Define optimal import order based on dependencies
-    import_order = [
-        "STATES.csv", "GRUPLKUP.csv", "EMPLOYEE.csv", "FILETYPE.csv", "FILESTAT.csv",
-        "TRNSTYPE.csv", "TRNSLKUP.csv", "FOOTERS.csv", "SETUP.csv", "PRINTERS.csv",
-        "ROLODEX.csv", "PHONE.csv", "FILES.csv", "LEDGER.csv", "TRNSACTN.csv",
-        "QDROS.csv", "PENSIONS.csv", "PLANINFO.csv", "PAYMENTS.csv", "DEPOSITS.csv",
-        "FILENOTS.csv", "FORM_INX.csv", "FORM_LST.csv", "FVARLKUP.csv", "RVARLKUP.csv"
-    ]
+    import_order = IMPORT_ORDER

     # Sort uploaded files by optimal import order
     file_map = {f.filename: f for f in files}
@@ -1365,7 +1501,7 @@ async def batch_import_csv_files(
                 saved_path = str(file_path)
             except Exception:
                 saved_path = None
-            encodings = ['utf-8-sig', 'utf-8', 'windows-1252', 'iso-8859-1', 'cp1252']
+            encodings = ENCODINGS
             csv_content = None
             for encoding in encodings:
                 try:
@@ -1466,7 +1602,7 @@ async def batch_import_csv_files(
             saved_path = None

             # Try multiple encodings for legacy CSV files
-            encodings = ['utf-8-sig', 'utf-8', 'windows-1252', 'iso-8859-1', 'cp1252']
+            encodings = ENCODINGS
             csv_content = None
             for encoding in encodings:
                 try:
@@ -1505,6 +1641,8 @@ async def batch_import_csv_files(
             imported_count = 0
             errors = []
             flexible_saved = 0
+            # Special handling: assign line numbers per form for FORM_LST.csv
+            form_lst_line_counters: Dict[str, int] = {}

             # If replace_existing is True and this is the first file of this type
             if replace_existing:
@@ -1523,6 +1661,15 @@ async def batch_import_csv_files(
                     converted_value = convert_value(row[csv_field], db_field)
                     if converted_value is not None:
                         model_data[db_field] = converted_value

+                # Inject sequential line_number for FORM_LST rows grouped by form_id
+                if file_type == "FORM_LST.csv":
+                    form_id_value = model_data.get("form_id")
+                    if form_id_value:
+                        current = form_lst_line_counters.get(str(form_id_value), 0) + 1
+                        form_lst_line_counters[str(form_id_value)] = current
+                        if "line_number" not in model_data:
+                            model_data["line_number"] = current
+
                 if not any(model_data.values()):
                     continue
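
The per-form counter above is easiest to see in isolation. A minimal, standalone sketch with hypothetical rows (not part of the commit):

```python
# Standalone sketch of the per-form line numbering used above (hypothetical rows).
from typing import Dict

rows = [
    {"form_id": "QDRO1", "text": "first line"},
    {"form_id": "QDRO1", "text": "second line"},
    {"form_id": "LTR2", "text": "first line of another form"},
]

counters: Dict[str, int] = {}
for row in rows:
    form_id = str(row["form_id"])
    counters[form_id] = counters.get(form_id, 0) + 1
    # Mirrors the import loop: only fill line_number when the CSV didn't provide one
    row.setdefault("line_number", counters[form_id])

# QDRO1 rows get line_number 1 and 2; LTR2 restarts at 1.
print([(r["form_id"], r["line_number"]) for r in rows])
```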
@@ -1697,7 +1844,7 @@ async def batch_import_csv_files(
             "completed_with_errors" if summary["successful_files"] > 0 else "failed"
         )
         audit_row.message = f"Batch import completed: {audit_row.successful_files}/{audit_row.total_files} files"
-        audit_row.finished_at = datetime.utcnow()
+        audit_row.finished_at = datetime.now(timezone.utc)
         audit_row.details = {
             "files": [
                 {"file_type": r.get("file_type"), "status": r.get("status"), "imported_count": r.get("imported_count", 0), "errors": r.get("errors", 0)}
@@ -1844,13 +1991,7 @@ async def rerun_failed_files(
         raise HTTPException(status_code=400, detail="No saved files available to rerun. Upload again.")

     # Import order for sorting
-    import_order = [
-        "STATES.csv", "GRUPLKUP.csv", "EMPLOYEE.csv", "FILETYPE.csv", "FILESTAT.csv",
-        "TRNSTYPE.csv", "TRNSLKUP.csv", "FOOTERS.csv", "SETUP.csv", "PRINTERS.csv",
-        "ROLODEX.csv", "PHONE.csv", "FILES.csv", "LEDGER.csv", "TRNSACTN.csv",
-        "QDROS.csv", "PENSIONS.csv", "PLANINFO.csv", "PAYMENTS.csv", "DEPOSITS.csv",
-        "FILENOTS.csv", "FORM_INX.csv", "FORM_LST.csv", "FVARLKUP.csv", "RVARLKUP.csv"
-    ]
+    import_order = IMPORT_ORDER
     order_index = {name: i for i, name in enumerate(import_order)}
     items.sort(key=lambda x: order_index.get(x[0], len(import_order) + 1))

@@ -1898,7 +2039,7 @@ async def rerun_failed_files(

             if file_type not in CSV_MODEL_MAPPING:
                 # Flexible-only path
-                encodings = ['utf-8-sig', 'utf-8', 'windows-1252', 'iso-8859-1', 'cp1252']
+                encodings = ENCODINGS
                 csv_content = None
                 for enc in encodings:
                     try:
@@ -1964,7 +2105,7 @@ async def rerun_failed_files(

             # Known model path
             model_class = CSV_MODEL_MAPPING[file_type]
-            encodings = ['utf-8-sig', 'utf-8', 'windows-1252', 'iso-8859-1', 'cp1252']
+            encodings = ENCODINGS
             csv_content = None
             for enc in encodings:
                 try:
@@ -1996,6 +2137,8 @@ async def rerun_failed_files(
             unmapped_headers = mapping_info["unmapped_headers"]
             imported_count = 0
             errors: List[Dict[str, Any]] = []
+            # Special handling: assign line numbers per form for FORM_LST.csv
+            form_lst_line_counters: Dict[str, int] = {}

             if replace_existing:
                 db.query(model_class).delete()
@@ -2013,6 +2156,14 @@ async def rerun_failed_files(
                     converted_value = convert_value(row[csv_field], db_field)
                     if converted_value is not None:
                         model_data[db_field] = converted_value
+                # Inject sequential line_number for FORM_LST rows grouped by form_id
+                if file_type == "FORM_LST.csv":
+                    form_id_value = model_data.get("form_id")
+                    if form_id_value:
+                        current = form_lst_line_counters.get(str(form_id_value), 0) + 1
+                        form_lst_line_counters[str(form_id_value)] = current
+                        if "line_number" not in model_data:
+                            model_data["line_number"] = current
                 if not any(model_data.values()):
                     continue
                 required_fields = _get_required_fields(model_class)
@@ -2147,7 +2298,7 @@ async def rerun_failed_files(
             "completed_with_errors" if summary["successful_files"] > 0 else "failed"
         )
         rerun_audit.message = f"Rerun completed: {rerun_audit.successful_files}/{rerun_audit.total_files} files"
-        rerun_audit.finished_at = datetime.utcnow()
+        rerun_audit.finished_at = datetime.now(timezone.utc)
         rerun_audit.details = {"rerun_of": audit_id}
         db.add(rerun_audit)
         db.commit()
@@ -2183,7 +2334,7 @@ async def upload_flexible_only(
     db.commit()

     content = await file.read()
-    encodings = ["utf-8-sig", "utf-8", "windows-1252", "iso-8859-1", "cp1252"]
+    encodings = ENCODINGS
     csv_content = None
     for encoding in encodings:
         try:
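
These hunks replace repeated literals with module-level ENCODINGS and IMPORT_ORDER constants whose definitions fall outside the visible hunks. Presumably they consolidate the removed lists, roughly like this (a sketch, not a line from the commit):

```python
# Presumed module-level constants in app/api/import_data.py; the contents come
# from the lists removed above. Order matters: the BOM-friendly utf-8-sig is
# tried first, and lookup tables are imported before the tables that reference them.
ENCODINGS = ['utf-8-sig', 'utf-8', 'windows-1252', 'iso-8859-1', 'cp1252']

IMPORT_ORDER = [
    "STATES.csv", "GRUPLKUP.csv", "EMPLOYEE.csv", "FILETYPE.csv", "FILESTAT.csv",
    "TRNSTYPE.csv", "TRNSLKUP.csv", "FOOTERS.csv", "SETUP.csv", "PRINTERS.csv",
    "ROLODEX.csv", "PHONE.csv", "FILES.csv", "LEDGER.csv", "TRNSACTN.csv",
    "QDROS.csv", "PENSIONS.csv", "PLANINFO.csv", "PAYMENTS.csv", "DEPOSITS.csv",
    "FILENOTS.csv", "FORM_INX.csv", "FORM_LST.csv", "FVARLKUP.csv", "RVARLKUP.csv",
]
```

A side effect worth noting: validate_csv_file previously lacked utf-8-sig in its list, so unifying on ENCODINGS also gives it BOM handling.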

72  app/api/mortality.py  Normal file
@@ -0,0 +1,72 @@
+"""
+Mortality/Life Table API endpoints
+
+Provides read endpoints to query life tables by age and number tables by month,
+filtered by sex (M/F/A) and race (W/B/H/A).
+"""
+
+from typing import Optional
+from fastapi import APIRouter, Depends, HTTPException, Query, status, Path
+from pydantic import BaseModel, Field
+from sqlalchemy.orm import Session
+
+from app.database.base import get_db
+from app.models.user import User
+from app.auth.security import get_current_user
+from app.services.mortality import get_life_values, get_number_value, InvalidCodeError
+
+
+router = APIRouter()
+
+
+class LifeResponse(BaseModel):
+    age: int
+    sex: str = Field(description="M, F, or A (all)")
+    race: str = Field(description="W, B, H, or A (all)")
+    le: Optional[float]
+    na: Optional[float]
+
+
+class NumberResponse(BaseModel):
+    month: int
+    sex: str = Field(description="M, F, or A (all)")
+    race: str = Field(description="W, B, H, or A (all)")
+    na: Optional[float]
+
+
+@router.get("/life/{age}", response_model=LifeResponse)
+async def get_life_entry(
+    age: int = Path(..., ge=0, description="Age in years (>= 0)"),
+    sex: str = Query("A", min_length=1, max_length=1, description="M, F, or A (all)"),
+    race: str = Query("A", min_length=1, max_length=1, description="W, B, H, or A (all)"),
+    db: Session = Depends(get_db),
+    current_user: User = Depends(get_current_user),
+):
+    """Get life expectancy (LE) and number alive (NA) for an age/sex/race."""
+    try:
+        result = get_life_values(db, age=age, sex=sex, race=race)
+    except InvalidCodeError as e:
+        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))
+    if result is None:
+        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Age not found")
+    return result
+
+
+@router.get("/number/{month}", response_model=NumberResponse)
+async def get_number_entry(
+    month: int = Path(..., ge=0, description="Month index (>= 0)"),
+    sex: str = Query("A", min_length=1, max_length=1, description="M, F, or A (all)"),
+    race: str = Query("A", min_length=1, max_length=1, description="W, B, H, or A (all)"),
+    db: Session = Depends(get_db),
+    current_user: User = Depends(get_current_user),
+):
+    """Get monthly number alive (NA) for a month/sex/race."""
+    try:
+        result = get_number_value(db, month=month, sex=sex, race=race)
+    except InvalidCodeError as e:
+        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))
+    if result is None:
+        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Month not found")
+    return result
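
A hypothetical client call against the new endpoints; the base URL, token, and response values are assumptions, not part of the commit (the /api/mortality prefix is confirmed by the app/main.py hunk further down):

```python
# Illustrative request to the new mortality endpoint on a local dev server.
import httpx

BASE = "http://localhost:8000"  # assumed
headers = {"Authorization": "Bearer <access-token>"}  # assumed auth setup

resp = httpx.get(
    f"{BASE}/api/mortality/life/65",
    params={"sex": "F", "race": "W"},
    headers=headers,
)
resp.raise_for_status()
# Shaped by LifeResponse, e.g. {"age": 65, "sex": "F", "race": "W", "le": ..., "na": ...}
print(resp.json())
```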
1019  app/api/search.py  (diff suppressed because it is too large)

app/api/search_highlight.py
@@ -2,8 +2,10 @@
 Server-side highlight utilities for search results.

 These functions generate HTML snippets with <strong> around matched tokens,
-preserving the original casing of the source text. The output is intended to be
-sanitized on the client before insertion into the DOM.
+preserving the original casing of the source text. All non-HTML segments are
+HTML-escaped server-side to prevent injection. Only the <strong> tags added by
+this module are emitted as HTML; any pre-existing HTML in source text is
+escaped.
 """
 from typing import List, Tuple, Any
 import re
@@ -42,18 +44,40 @@ def _merge_ranges(ranges: List[Tuple[int, int]]) -> List[Tuple[int, int]]:


 def highlight_text(value: str, tokens: List[str]) -> str:
-    """Return `value` with case-insensitive matches of `tokens` wrapped in <strong>, preserving original casing."""
+    """Return `value` with case-insensitive matches of `tokens` wrapped in <strong>, preserving original casing.
+
+    Non-highlighted segments and the highlighted text content are HTML-escaped.
+    Only the surrounding <strong> wrappers are emitted as markup.
+    """
     if value is None:
         return ""

+    def _escape_html(text: str) -> str:
+        # Minimal, safe HTML escaping
+        if text is None:
+            return ""
+        # Replace ampersand first to avoid double-escaping
+        text = str(text)
+        text = text.replace("&", "&amp;")
+        text = text.replace("<", "&lt;")
+        text = text.replace(">", "&gt;")
+        text = text.replace('"', "&quot;")
+        text = text.replace("'", "&#x27;")
+        return text
+
     source = str(value)
     if not source or not tokens:
-        return source
+        return _escape_html(source)
     haystack = source.lower()
     ranges: List[Tuple[int, int]] = []
+    # Deduplicate tokens case-insensitively to avoid redundant scans (parity with client)
+    unique_needles = []
+    seen_needles = set()
     for t in tokens:
         needle = str(t or "").lower()
-        if not needle:
-            continue
+        if needle and needle not in seen_needles:
+            unique_needles.append(needle)
+            seen_needles.add(needle)
+    for needle in unique_needles:
         start = 0
         last_possible = max(0, len(haystack) - len(needle))
         while start <= last_possible and len(needle) > 0:
@@ -63,17 +87,17 @@ def highlight_text(value: str, tokens: List[str]) -> str:
             ranges.append((idx, idx + len(needle)))
             start = idx + 1
     if not ranges:
-        return source
+        return _escape_html(source)
     parts: List[str] = []
     merged = _merge_ranges(ranges)
     pos = 0
     for s, e in merged:
         if pos < s:
-            parts.append(source[pos:s])
-        parts.append("<strong>" + source[s:e] + "</strong>")
+            parts.append(_escape_html(source[pos:s]))
+        parts.append("<strong>" + _escape_html(source[s:e]) + "</strong>")
         pos = e
     if pos < len(source):
-        parts.append(source[pos:])
+        parts.append(_escape_html(source[pos:]))
     return "".join(parts)
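
Illustrative behavior of the hardened highlight_text under the escaping rules above (expected output, not a test taken from the commit):

```python
# Pre-existing markup in the source text is escaped; only the <strong>
# wrapper added by the function is real HTML.
from app.api.search_highlight import highlight_text

print(highlight_text('Bob <b>Smith</b> & Co', ['smith']))
# -> 'Bob &lt;b&gt;<strong>Smith</strong>&lt;/b&gt; &amp; Co'
```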

app/api/support.py
@@ -2,21 +2,24 @@
 Support ticket API endpoints
 """
 from typing import List, Optional
-from fastapi import APIRouter, Depends, HTTPException, status, Request
+from fastapi import APIRouter, Depends, HTTPException, status, Request, Query
 from sqlalchemy.orm import Session, joinedload
 from sqlalchemy import func, desc, and_, or_
-from datetime import datetime
+from datetime import datetime, timezone
 import secrets

 from app.database.base import get_db
 from app.models import User, SupportTicket, TicketResponse as TicketResponseModel, TicketStatus, TicketPriority, TicketCategory
 from app.auth.security import get_current_user, get_admin_user
 from app.services.audit import audit_service
+from app.services.query_utils import apply_sorting, paginate_with_total, tokenized_ilike_filter
+from app.api.search_highlight import build_query_tokens

 router = APIRouter()

 # Pydantic models for API
 from pydantic import BaseModel, Field, EmailStr
+from pydantic.config import ConfigDict


 class TicketCreate(BaseModel):
@@ -57,8 +60,7 @@ class TicketResponseOut(BaseModel):
     author_email: Optional[str]
     created_at: datetime

-    class Config:
-        from_attributes = True
+    model_config = ConfigDict(from_attributes=True)


 class TicketDetail(BaseModel):
@@ -81,15 +83,19 @@ class TicketDetail(BaseModel):
     assigned_to: Optional[int]
     assigned_admin_name: Optional[str]
     submitter_name: Optional[str]
-    responses: List[TicketResponseOut] = []
+    responses: List[TicketResponseOut] = Field(default_factory=list)

-    class Config:
-        from_attributes = True
+    model_config = ConfigDict(from_attributes=True)
+
+
+class PaginatedTicketsResponse(BaseModel):
+    items: List[TicketDetail]
+    total: int


 def generate_ticket_number() -> str:
     """Generate unique ticket number like ST-2024-001"""
-    year = datetime.now().year
+    year = datetime.now(timezone.utc).year
     random_suffix = secrets.token_hex(2).upper()
     return f"ST-{year}-{random_suffix}"
@@ -129,7 +135,7 @@ async def create_support_ticket(
         ip_address=client_ip,
         user_id=current_user.id if current_user else None,
         status=TicketStatus.OPEN,
-        created_at=datetime.utcnow()
+        created_at=datetime.now(timezone.utc)
     )

     db.add(new_ticket)
@@ -158,14 +164,18 @@ async def create_support_ticket(
     }


-@router.get("/tickets", response_model=List[TicketDetail])
+@router.get("/tickets", response_model=List[TicketDetail] | PaginatedTicketsResponse)
 async def list_tickets(
-    status: Optional[TicketStatus] = None,
-    priority: Optional[TicketPriority] = None,
-    category: Optional[TicketCategory] = None,
-    assigned_to_me: bool = False,
-    skip: int = 0,
-    limit: int = 50,
+    status: Optional[TicketStatus] = Query(None, description="Filter by ticket status"),
+    priority: Optional[TicketPriority] = Query(None, description="Filter by ticket priority"),
+    category: Optional[TicketCategory] = Query(None, description="Filter by ticket category"),
+    assigned_to_me: bool = Query(False, description="Only include tickets assigned to the current admin"),
+    search: Optional[str] = Query(None, description="Tokenized search across number, subject, description, contact name/email, current page, and IP"),
+    skip: int = Query(0, ge=0, description="Offset for pagination"),
+    limit: int = Query(50, ge=1, le=200, description="Page size"),
+    sort_by: Optional[str] = Query(None, description="Sort by: created, updated, resolved, priority, status, subject"),
+    sort_dir: Optional[str] = Query("desc", description="Sort direction: asc or desc"),
+    include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
     db: Session = Depends(get_db),
     current_user: User = Depends(get_admin_user)
 ):
@@ -186,8 +196,38 @@ async def list_tickets(
         query = query.filter(SupportTicket.category == category)
     if assigned_to_me:
         query = query.filter(SupportTicket.assigned_to == current_user.id)

-    tickets = query.order_by(desc(SupportTicket.created_at)).offset(skip).limit(limit).all()
+    # Search across key text fields
+    if search:
+        tokens = build_query_tokens(search)
+        filter_expr = tokenized_ilike_filter(tokens, [
+            SupportTicket.ticket_number,
+            SupportTicket.subject,
+            SupportTicket.description,
+            SupportTicket.contact_name,
+            SupportTicket.contact_email,
+            SupportTicket.current_page,
+            SupportTicket.ip_address,
+        ])
+        if filter_expr is not None:
+            query = query.filter(filter_expr)
+
+    # Sorting (whitelisted)
+    query = apply_sorting(
+        query,
+        sort_by,
+        sort_dir,
+        allowed={
+            "created": [SupportTicket.created_at],
+            "updated": [SupportTicket.updated_at],
+            "resolved": [SupportTicket.resolved_at],
+            "priority": [SupportTicket.priority],
+            "status": [SupportTicket.status],
+            "subject": [SupportTicket.subject],
+        },
+    )
+
+    tickets, total = paginate_with_total(query, skip, limit, include_total)

     # Format response
     result = []
@@ -226,6 +266,8 @@ async def list_tickets(
         }
         result.append(ticket_dict)

+    if include_total:
+        return {"items": result, "total": total or 0}
     return result

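
An illustrative admin request combining the new query parameters; the local base URL and token are assumed, the parameter names come from the endpoint above:

```python
# Search, sort, and paginate support tickets with an explicit total count.
import httpx

params = {
    "search": "billing error",
    "sort_by": "updated",
    "sort_dir": "desc",
    "skip": 0,
    "limit": 25,
    "include_total": "true",
}
resp = httpx.get(
    "http://localhost:8000/api/support/tickets",  # assumed dev server
    params=params,
    headers={"Authorization": "Bearer <admin-token>"},  # assumed auth setup
)
data = resp.json()
# With include_total=true the payload is {"items": [...], "total": N};
# without it, a plain list is returned for backward compatibility.
print(data["total"], len(data["items"]))
```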
@@ -312,10 +354,10 @@ async def update_ticket(

     # Set resolved timestamp if status changed to resolved
     if ticket_data.status == TicketStatus.RESOLVED and ticket.resolved_at is None:
-        ticket.resolved_at = datetime.utcnow()
+        ticket.resolved_at = datetime.now(timezone.utc)
         changes["resolved_at"] = {"from": None, "to": ticket.resolved_at}

-    ticket.updated_at = datetime.utcnow()
+    ticket.updated_at = datetime.now(timezone.utc)
     db.commit()

     # Audit logging (non-blocking)
@@ -358,13 +400,13 @@ async def add_response(
         message=response_data.message,
         is_internal=response_data.is_internal,
         user_id=current_user.id,
-        created_at=datetime.utcnow()
+        created_at=datetime.now(timezone.utc)
     )

     db.add(response)

     # Update ticket timestamp
-    ticket.updated_at = datetime.utcnow()
+    ticket.updated_at = datetime.now(timezone.utc)

     db.commit()
     db.refresh(response)
@@ -386,11 +428,15 @@ async def add_response(
     return {"message": "Response added successfully", "response_id": response.id}


-@router.get("/my-tickets", response_model=List[TicketDetail])
+@router.get("/my-tickets", response_model=List[TicketDetail] | PaginatedTicketsResponse)
 async def get_my_tickets(
-    status: Optional[TicketStatus] = None,
-    skip: int = 0,
-    limit: int = 20,
+    status: Optional[TicketStatus] = Query(None, description="Filter by ticket status"),
+    search: Optional[str] = Query(None, description="Tokenized search across number, subject, description"),
+    skip: int = Query(0, ge=0, description="Offset for pagination"),
+    limit: int = Query(20, ge=1, le=200, description="Page size"),
+    sort_by: Optional[str] = Query(None, description="Sort by: created, updated, resolved, priority, status, subject"),
+    sort_dir: Optional[str] = Query("desc", description="Sort direction: asc or desc"),
+    include_total: bool = Query(False, description="When true, returns {items, total} instead of a plain list"),
     db: Session = Depends(get_db),
     current_user: User = Depends(get_current_user)
 ):
@@ -403,7 +449,33 @@ async def get_my_tickets(
     if status:
         query = query.filter(SupportTicket.status == status)

-    tickets = query.order_by(desc(SupportTicket.created_at)).offset(skip).limit(limit).all()
+    # Search within user's tickets
+    if search:
+        tokens = build_query_tokens(search)
+        filter_expr = tokenized_ilike_filter(tokens, [
+            SupportTicket.ticket_number,
+            SupportTicket.subject,
+            SupportTicket.description,
+        ])
+        if filter_expr is not None:
+            query = query.filter(filter_expr)
+
+    # Sorting (whitelisted)
+    query = apply_sorting(
+        query,
+        sort_by,
+        sort_dir,
+        allowed={
+            "created": [SupportTicket.created_at],
+            "updated": [SupportTicket.updated_at],
+            "resolved": [SupportTicket.resolved_at],
+            "priority": [SupportTicket.priority],
+            "status": [SupportTicket.status],
+            "subject": [SupportTicket.subject],
+        },
+    )
+
+    tickets, total = paginate_with_total(query, skip, limit, include_total)

     # Format response (exclude internal responses for regular users)
     result = []
@@ -442,6 +514,8 @@ async def get_my_tickets(
         }
         result.append(ticket_dict)

+    if include_total:
+        return {"items": result, "total": total or 0}
     return result

@@ -473,7 +547,7 @@ async def get_ticket_stats(

     # Recent tickets (last 7 days)
     from datetime import timedelta
-    week_ago = datetime.utcnow() - timedelta(days=7)
+    week_ago = datetime.now(timezone.utc) - timedelta(days=7)
     recent_tickets = db.query(func.count(SupportTicket.id)).filter(
         SupportTicket.created_at >= week_ago
     ).scalar()

@@ -3,6 +3,7 @@ Authentication schemas
 """
 from typing import Optional
 from pydantic import BaseModel, EmailStr
+from pydantic.config import ConfigDict


 class UserBase(BaseModel):
@@ -32,8 +33,7 @@ class UserResponse(UserBase):
     is_admin: bool
     theme_preference: Optional[str] = "light"

-    class Config:
-        from_attributes = True
+    model_config = ConfigDict(from_attributes=True)


 class ThemePreferenceUpdate(BaseModel):
@@ -45,7 +45,7 @@ class Token(BaseModel):
     """Token response schema"""
     access_token: str
     token_type: str
-    refresh_token: str | None = None
+    refresh_token: Optional[str] = None


 class TokenData(BaseModel):

app/auth/security.py
@@ -1,7 +1,7 @@
 """
 Authentication and security utilities
 """
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
 from typing import Optional, Union, Tuple
 from uuid import uuid4
 from jose import JWTError, jwt
@@ -54,12 +54,12 @@ def _decode_with_rotation(token: str) -> dict:
 def create_access_token(data: dict, expires_delta: Optional[timedelta] = None) -> str:
     """Create JWT access token"""
     to_encode = data.copy()
-    expire = datetime.utcnow() + (
+    expire = datetime.now(timezone.utc) + (
         expires_delta if expires_delta else timedelta(minutes=settings.access_token_expire_minutes)
     )
     to_encode.update({
         "exp": expire,
-        "iat": datetime.utcnow(),
+        "iat": datetime.now(timezone.utc),
         "type": "access",
     })
     return _encode_with_rotation(to_encode)
@@ -68,14 +68,14 @@ def create_access_token(data: dict, expires_delta: Optional[timedelta] = None) -
 def create_refresh_token(user: User, user_agent: Optional[str], ip_address: Optional[str], db: Session) -> str:
     """Create refresh token, store its JTI in DB for revocation."""
     jti = uuid4().hex
-    expire = datetime.utcnow() + timedelta(minutes=settings.refresh_token_expire_minutes)
+    expire = datetime.now(timezone.utc) + timedelta(minutes=settings.refresh_token_expire_minutes)
     payload = {
         "sub": user.username,
         "uid": user.id,
         "jti": jti,
         "type": "refresh",
         "exp": expire,
-        "iat": datetime.utcnow(),
+        "iat": datetime.now(timezone.utc),
     }
     token = _encode_with_rotation(payload)

@@ -84,7 +84,7 @@ def create_refresh_token(user: User, user_agent: Optional[str], ip_address: Opti
         jti=jti,
         user_agent=user_agent,
         ip_address=ip_address,
-        issued_at=datetime.utcnow(),
+        issued_at=datetime.now(timezone.utc),
         expires_at=expire,
         revoked=False,
     )
@@ -93,6 +93,15 @@ def create_refresh_token(user: User, user_agent: Optional[str], ip_address: Opti
     return token


+def _to_utc_aware(dt: Optional[datetime]) -> Optional[datetime]:
+    """Convert a datetime to UTC-aware. If naive, assume it's already UTC and attach tzinfo."""
+    if dt is None:
+        return None
+    if dt.tzinfo is None:
+        return dt.replace(tzinfo=timezone.utc)
+    return dt.astimezone(timezone.utc)
+
+
 def verify_token(token: str) -> Optional[str]:
     """Verify JWT token and return username"""
     try:
@@ -122,14 +131,20 @@ def decode_refresh_token(token: str) -> Optional[dict]:

 def is_refresh_token_revoked(jti: str, db: Session) -> bool:
     token_row = db.query(RefreshToken).filter(RefreshToken.jti == jti).first()
-    return not token_row or token_row.revoked or token_row.expires_at <= datetime.utcnow()
+    if not token_row:
+        return True
+    if token_row.revoked:
+        return True
+    expires_at_utc = _to_utc_aware(token_row.expires_at)
+    now_utc = datetime.now(timezone.utc)
+    return expires_at_utc is not None and expires_at_utc <= now_utc


 def revoke_refresh_token(jti: str, db: Session) -> None:
     token_row = db.query(RefreshToken).filter(RefreshToken.jti == jti).first()
     if token_row and not token_row.revoked:
         token_row.revoked = True
-        token_row.revoked_at = datetime.utcnow()
+        token_row.revoked_at = datetime.now(timezone.utc)
         db.commit()
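
The _to_utc_aware helper guards against the classic naive-vs-aware comparison failure: columns loaded from SQLite are typically naive, while datetime.now(timezone.utc) is aware. A minimal demonstration (not from the commit):

```python
# Comparing naive and aware datetimes raises TypeError; attaching UTC tzinfo
# to the naive value (as _to_utc_aware does) makes the comparison valid.
from datetime import datetime, timezone

stored = datetime(2025, 1, 1, 12, 0, 0)   # naive, as commonly loaded from the DB
now = datetime.now(timezone.utc)           # aware

try:
    stored <= now
except TypeError as e:
    print(e)  # can't compare offset-naive and offset-aware datetimes

aware = stored.replace(tzinfo=timezone.utc)  # what _to_utc_aware does for naive input
print(aware <= now)                          # now a well-defined comparison
```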

app/config.py
@@ -57,6 +57,10 @@ class Settings(BaseSettings):
     log_rotation: str = "10 MB"
     log_retention: str = "30 days"

+    # Cache / Redis
+    cache_enabled: bool = False
+    redis_url: Optional[str] = None
+
     # pydantic-settings v2 configuration
     model_config = SettingsConfigDict(
         env_file=".env",

app/database/base.py
@@ -2,8 +2,7 @@
 Database configuration and session management
 """
 from sqlalchemy import create_engine
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker, Session
+from sqlalchemy.orm import declarative_base, sessionmaker, Session
 from typing import Generator

 from app.config import settings

248  app/database/fts.py  Normal file
@@ -0,0 +1,248 @@
+"""
+SQLite Full-Text Search (FTS5) helpers.
+
+Creates and maintains FTS virtual tables and triggers to keep them in sync
+with their content tables. Designed to be called at app startup.
+"""
+from typing import Optional
+
+from sqlalchemy.engine import Engine
+from sqlalchemy import text
+
+
+def _execute_ignore_errors(engine: Engine, sql: str) -> None:
+    """Execute SQL, ignoring operational errors (e.g., when FTS5 is unavailable)."""
+    from sqlalchemy.exc import OperationalError
+    with engine.begin() as conn:
+        try:
+            conn.execute(text(sql))
+        except OperationalError:
+            # Likely FTS5 extension not available in this SQLite build
+            pass
+
+
+def ensure_rolodex_fts(engine: Engine) -> None:
+    """Ensure the `rolodex_fts` virtual table and triggers exist and are populated.
+
+    This uses content=rolodex so the FTS table shadows the base table and is kept
+    in sync via triggers.
+    """
+    # Create virtual table (if FTS5 is available)
+    _create_table = """
+    CREATE VIRTUAL TABLE IF NOT EXISTS rolodex_fts USING fts5(
+        id,
+        first,
+        last,
+        city,
+        email,
+        memo,
+        content='rolodex',
+        content_rowid='rowid'
+    );
+    """
+    _execute_ignore_errors(engine, _create_table)
+
+    # Triggers to keep FTS in sync
+    _triggers = [
+        """
+        CREATE TRIGGER IF NOT EXISTS rolodex_ai AFTER INSERT ON rolodex BEGIN
+            INSERT INTO rolodex_fts(rowid, id, first, last, city, email, memo)
+            VALUES (new.rowid, new.id, new.first, new.last, new.city, new.email, new.memo);
+        END;
+        """,
+        """
+        CREATE TRIGGER IF NOT EXISTS rolodex_ad AFTER DELETE ON rolodex BEGIN
+            INSERT INTO rolodex_fts(rolodex_fts, rowid, id, first, last, city, email, memo)
+            VALUES ('delete', old.rowid, old.id, old.first, old.last, old.city, old.email, old.memo);
+        END;
+        """,
+        """
+        CREATE TRIGGER IF NOT EXISTS rolodex_au AFTER UPDATE ON rolodex BEGIN
+            INSERT INTO rolodex_fts(rolodex_fts, rowid, id, first, last, city, email, memo)
+            VALUES ('delete', old.rowid, old.id, old.first, old.last, old.city, old.email, old.memo);
+            INSERT INTO rolodex_fts(rowid, id, first, last, city, email, memo)
+            VALUES (new.rowid, new.id, new.first, new.last, new.city, new.email, new.memo);
+        END;
+        """,
+    ]
+    for trig in _triggers:
+        _execute_ignore_errors(engine, trig)
+
+    # Backfill if the FTS table exists but is empty
+    with engine.begin() as conn:
+        try:
+            count_fts = conn.execute(text("SELECT count(*) FROM rolodex_fts")).scalar()  # type: ignore
+            if count_fts == 0:
+                # Populate from existing rolodex rows
+                conn.execute(text(
+                    """
+                    INSERT INTO rolodex_fts(rowid, id, first, last, city, email, memo)
+                    SELECT rowid, id, first, last, city, email, memo FROM rolodex;
+                    """
+                ))
+        except Exception:
+            # If FTS table doesn't exist or any error occurs, ignore silently
+            pass
+
+
+def ensure_files_fts(engine: Engine) -> None:
+    """Ensure the `files_fts` virtual table and triggers exist and are populated."""
+    _create_table = """
+    CREATE VIRTUAL TABLE IF NOT EXISTS files_fts USING fts5(
+        file_no,
+        id,
+        regarding,
+        file_type,
+        memo,
+        content='files',
+        content_rowid='rowid'
+    );
+    """
+    _execute_ignore_errors(engine, _create_table)
+
+    _triggers = [
+        """
+        CREATE TRIGGER IF NOT EXISTS files_ai AFTER INSERT ON files BEGIN
+            INSERT INTO files_fts(rowid, file_no, id, regarding, file_type, memo)
+            VALUES (new.rowid, new.file_no, new.id, new.regarding, new.file_type, new.memo);
+        END;
+        """,
+        """
+        CREATE TRIGGER IF NOT EXISTS files_ad AFTER DELETE ON files BEGIN
+            INSERT INTO files_fts(files_fts, rowid, file_no, id, regarding, file_type, memo)
+            VALUES ('delete', old.rowid, old.file_no, old.id, old.regarding, old.file_type, old.memo);
+        END;
+        """,
+        """
+        CREATE TRIGGER IF NOT EXISTS files_au AFTER UPDATE ON files BEGIN
+            INSERT INTO files_fts(files_fts, rowid, file_no, id, regarding, file_type, memo)
+            VALUES ('delete', old.rowid, old.file_no, old.id, old.regarding, old.file_type, old.memo);
+            INSERT INTO files_fts(rowid, file_no, id, regarding, file_type, memo)
+            VALUES (new.rowid, new.file_no, new.id, new.regarding, new.file_type, new.memo);
+        END;
+        """,
+    ]
+    for trig in _triggers:
+        _execute_ignore_errors(engine, trig)
+
+    with engine.begin() as conn:
+        try:
+            count_fts = conn.execute(text("SELECT count(*) FROM files_fts")).scalar()  # type: ignore
+            if count_fts == 0:
+                conn.execute(text(
+                    """
+                    INSERT INTO files_fts(rowid, file_no, id, regarding, file_type, memo)
+                    SELECT rowid, file_no, id, regarding, file_type, memo FROM files;
+                    """
+                ))
+        except Exception:
+            pass
+
+
+def ensure_ledger_fts(engine: Engine) -> None:
+    """Ensure the `ledger_fts` virtual table and triggers exist and are populated."""
+    _create_table = """
+    CREATE VIRTUAL TABLE IF NOT EXISTS ledger_fts USING fts5(
+        file_no,
+        t_code,
+        note,
+        empl_num,
+        content='ledger',
+        content_rowid='rowid'
+    );
+    """
+    _execute_ignore_errors(engine, _create_table)
+
+    _triggers = [
+        """
+        CREATE TRIGGER IF NOT EXISTS ledger_ai AFTER INSERT ON ledger BEGIN
+            INSERT INTO ledger_fts(rowid, file_no, t_code, note, empl_num)
+            VALUES (new.rowid, new.file_no, new.t_code, new.note, new.empl_num);
+        END;
+        """,
+        """
+        CREATE TRIGGER IF NOT EXISTS ledger_ad AFTER DELETE ON ledger BEGIN
+            INSERT INTO ledger_fts(ledger_fts, rowid, file_no, t_code, note, empl_num)
+            VALUES ('delete', old.rowid, old.file_no, old.t_code, old.note, old.empl_num);
+        END;
+        """,
+        """
+        CREATE TRIGGER IF NOT EXISTS ledger_au AFTER UPDATE ON ledger BEGIN
+            INSERT INTO ledger_fts(ledger_fts, rowid, file_no, t_code, note, empl_num)
+            VALUES ('delete', old.rowid, old.file_no, old.t_code, old.note, old.empl_num);
+            INSERT INTO ledger_fts(rowid, file_no, t_code, note, empl_num)
+            VALUES (new.rowid, new.file_no, new.t_code, new.note, new.empl_num);
+        END;
+        """,
+    ]
+    for trig in _triggers:
+        _execute_ignore_errors(engine, trig)
+
+    with engine.begin() as conn:
+        try:
+            count_fts = conn.execute(text("SELECT count(*) FROM ledger_fts")).scalar()  # type: ignore
+            if count_fts == 0:
+                conn.execute(text(
+                    """
+                    INSERT INTO ledger_fts(rowid, file_no, t_code, note, empl_num)
+                    SELECT rowid, file_no, t_code, note, empl_num FROM ledger;
+                    """
+                ))
+        except Exception:
+            pass
+
+
+def ensure_qdros_fts(engine: Engine) -> None:
+    """Ensure the `qdros_fts` virtual table and triggers exist and are populated."""
+    _create_table = """
+    CREATE VIRTUAL TABLE IF NOT EXISTS qdros_fts USING fts5(
+        file_no,
+        form_name,
+        pet,
+        res,
+        case_number,
+        content='qdros',
+        content_rowid='rowid'
+    );
+    """
+    _execute_ignore_errors(engine, _create_table)
+
+    _triggers = [
+        """
+        CREATE TRIGGER IF NOT EXISTS qdros_ai AFTER INSERT ON qdros BEGIN
+            INSERT INTO qdros_fts(rowid, file_no, form_name, pet, res, case_number)
+            VALUES (new.rowid, new.file_no, new.form_name, new.pet, new.res, new.case_number);
+        END;
+        """,
+        """
+        CREATE TRIGGER IF NOT EXISTS qdros_ad AFTER DELETE ON qdros BEGIN
+            INSERT INTO qdros_fts(qdros_fts, rowid, file_no, form_name, pet, res, case_number)
+            VALUES ('delete', old.rowid, old.file_no, old.form_name, old.pet, old.res, old.case_number);
+        END;
+        """,
+        """
+        CREATE TRIGGER IF NOT EXISTS qdros_au AFTER UPDATE ON qdros BEGIN
+            INSERT INTO qdros_fts(qdros_fts, rowid, file_no, form_name, pet, res, case_number)
+            VALUES ('delete', old.rowid, old.file_no, old.form_name, old.pet, old.res, old.case_number);
+            INSERT INTO qdros_fts(rowid, file_no, form_name, pet, res, case_number)
+            VALUES (new.rowid, new.file_no, new.form_name, new.pet, new.res, new.case_number);
+        END;
+        """,
+    ]
+    for trig in _triggers:
+        _execute_ignore_errors(engine, trig)
+
+    with engine.begin() as conn:
+        try:
+            count_fts = conn.execute(text("SELECT count(*) FROM qdros_fts")).scalar()  # type: ignore
+            if count_fts == 0:
+                conn.execute(text(
+                    """
+                    INSERT INTO qdros_fts(rowid, file_no, form_name, pet, res, case_number)
+                    SELECT rowid, file_no, form_name, pet, res, case_number FROM qdros;
+                    """
+                ))
+        except Exception:
+            pass
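
Once the virtual tables exist they can be queried with FTS5 MATCH. An illustrative lookup; the database path is an assumption and the query string is an example:

```python
# Full-text search against the rolodex_fts table created above.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///./app.db")  # assumed database location
with engine.connect() as conn:
    rows = conn.execute(
        text("SELECT rowid, first, last FROM rolodex_fts WHERE rolodex_fts MATCH :q LIMIT 10"),
        {"q": "smith"},
    ).fetchall()
    for r in rows:
        print(r.rowid, r.first, r.last)
```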
31  app/database/indexes.py  Normal file
@@ -0,0 +1,31 @@
+"""
+Database secondary indexes helper.
+
+Creates small B-tree indexes for common equality filters to speed up searches.
+Uses CREATE INDEX IF NOT EXISTS so it is safe to call repeatedly at startup
+and works for existing databases without running a migration.
+"""
+from sqlalchemy.engine import Engine
+from sqlalchemy import text
+
+
+def ensure_secondary_indexes(engine: Engine) -> None:
+    statements = [
+        # Files
+        "CREATE INDEX IF NOT EXISTS idx_files_status ON files(status)",
+        "CREATE INDEX IF NOT EXISTS idx_files_file_type ON files(file_type)",
+        "CREATE INDEX IF NOT EXISTS idx_files_empl_num ON files(empl_num)",
+        # Ledger
+        "CREATE INDEX IF NOT EXISTS idx_ledger_t_type ON ledger(t_type)",
+        "CREATE INDEX IF NOT EXISTS idx_ledger_empl_num ON ledger(empl_num)",
+    ]
+    with engine.begin() as conn:
+        for stmt in statements:
+            try:
+                conn.execute(text(stmt))
+            except Exception:
+                # Ignore failures (e.g., non-SQLite engines that still support IF NOT EXISTS;
+                # if not supported, users should manage indexes via migrations)
+                pass
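
A quick, illustrative way to confirm one of the new indexes is actually used; the database path is assumed:

```python
# EXPLAIN QUERY PLAN shows whether SQLite picks up idx_files_status for an
# equality filter on files.status.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///./app.db")  # assumed database location
with engine.connect() as conn:
    plan = conn.execute(
        text("EXPLAIN QUERY PLAN SELECT * FROM files WHERE status = :s"),
        {"s": "OPEN"},
    ).fetchall()
    print(plan)  # the plan should mention idx_files_status when the index applies
```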
130  app/database/schema_updates.py  Normal file
@@ -0,0 +1,130 @@
+"""
+Lightweight, idempotent schema updates for SQLite.
+
+Adds newly introduced columns to existing tables when running on an
+already-initialized database. Safe to call multiple times.
+"""
+from typing import Dict
+from sqlalchemy.engine import Engine
+from sqlalchemy import text
+
+
+def _existing_columns(conn, table: str) -> set[str]:
+    # text() wrapping is required for raw SQL on SQLAlchemy 2.x connections
+    rows = conn.execute(text(f"PRAGMA table_info('{table}')")).fetchall()
+    return {row[1] for row in rows}  # name is column 2
+
+
+def ensure_schema_updates(engine: Engine) -> None:
+    """Ensure missing columns are added for backward-compatible updates."""
+    # Map of table -> {column: SQL type}
+    updates: Dict[str, Dict[str, str]] = {
+        # Forms
+        "form_index": {
+            "keyword": "TEXT",
+        },
+        # Richer Life/Number tables (forms & pensions harmonized)
+        "life_tables": {
+            "le_aa": "FLOAT",
+            "na_aa": "FLOAT",
+            "le_am": "FLOAT",
+            "na_am": "FLOAT",
+            "le_af": "FLOAT",
+            "na_af": "FLOAT",
+            "le_wa": "FLOAT",
+            "na_wa": "FLOAT",
+            "le_wm": "FLOAT",
+            "na_wm": "FLOAT",
+            "le_wf": "FLOAT",
+            "na_wf": "FLOAT",
+            "le_ba": "FLOAT",
+            "na_ba": "FLOAT",
+            "le_bm": "FLOAT",
+            "na_bm": "FLOAT",
+            "le_bf": "FLOAT",
+            "na_bf": "FLOAT",
+            "le_ha": "FLOAT",
+            "na_ha": "FLOAT",
+            "le_hm": "FLOAT",
+            "na_hm": "FLOAT",
+            "le_hf": "FLOAT",
+            "na_hf": "FLOAT",
+            "table_year": "INTEGER",
+            "table_type": "VARCHAR(45)",
+        },
+        "number_tables": {
+            "month": "INTEGER",
+            "na_aa": "FLOAT",
+            "na_am": "FLOAT",
+            "na_af": "FLOAT",
+            "na_wa": "FLOAT",
+            "na_wm": "FLOAT",
+            "na_wf": "FLOAT",
+            "na_ba": "FLOAT",
+            "na_bm": "FLOAT",
+            "na_bf": "FLOAT",
+            "na_ha": "FLOAT",
+            "na_hm": "FLOAT",
+            "na_hf": "FLOAT",
+            "table_type": "VARCHAR(45)",
+            "description": "TEXT",
+        },
+        "form_list": {
+            "status": "VARCHAR(45)",
+        },
+        # Printers: add advanced legacy fields
+        "printers": {
+            "number": "INTEGER",
+            "page_break": "VARCHAR(50)",
+            "setup_st": "VARCHAR(200)",
+            "reset_st": "VARCHAR(200)",
+            "b_underline": "VARCHAR(100)",
+            "e_underline": "VARCHAR(100)",
+            "b_bold": "VARCHAR(100)",
+            "e_bold": "VARCHAR(100)",
+            "phone_book": "BOOLEAN",
+            "rolodex_info": "BOOLEAN",
+            "envelope": "BOOLEAN",
+            "file_cabinet": "BOOLEAN",
+            "accounts": "BOOLEAN",
+            "statements": "BOOLEAN",
+            "calendar": "BOOLEAN",
+        },
+        # Pensions
+        "pension_schedules": {
+            "vests_on": "DATE",
+            "vests_at": "FLOAT",
+        },
+        "marriage_history": {
+            "married_from": "DATE",
+            "married_to": "DATE",
+            "married_years": "FLOAT",
+            "service_from": "DATE",
+            "service_to": "DATE",
+            "service_years": "FLOAT",
+            "marital_percent": "FLOAT",
+        },
+        "death_benefits": {
+            "lump1": "FLOAT",
+            "lump2": "FLOAT",
+            "growth1": "FLOAT",
+            "growth2": "FLOAT",
+            "disc1": "FLOAT",
+            "disc2": "FLOAT",
+        },
+    }
+
+    with engine.begin() as conn:
+        for table, cols in updates.items():
+            try:
+                existing = _existing_columns(conn, table)
+            except Exception:
+                # Table may not exist yet
+                continue
+            for col_name, col_type in cols.items():
+                if col_name not in existing:
+                    try:
+                        conn.execute(text(f"ALTER TABLE {table} ADD COLUMN {col_name} {col_type}"))
+                    except Exception:
+                        # Ignore if not applicable (other engines) or race condition
+                        pass
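
_existing_columns relies on SQLite's PRAGMA table_info, which yields one row per column as (cid, name, type, notnull, dflt_value, pk). A standalone illustration with a throwaway in-memory database:

```python
# Shape of the PRAGMA table_info output that _existing_columns consumes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE form_list (id INTEGER PRIMARY KEY, form_id TEXT)")
for row in conn.execute("PRAGMA table_info('form_list')"):
    print(row)  # e.g. (0, 'id', 'INTEGER', 0, None, 1) -- row[1] is the column name
```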
20  app/main.py
@@ -9,6 +9,9 @@ from fastapi.middleware.cors import CORSMiddleware
|
|
||||||
from app.config import settings
|
from app.config import settings
|
||||||
from app.database.base import engine
|
from app.database.base import engine
|
||||||
|
from app.database.fts import ensure_rolodex_fts, ensure_files_fts, ensure_ledger_fts, ensure_qdros_fts
|
||||||
|
from app.database.indexes import ensure_secondary_indexes
|
||||||
|
from app.database.schema_updates import ensure_schema_updates
|
||||||
from app.models import BaseModel
|
from app.models import BaseModel
|
||||||
from app.models.user import User
|
from app.models.user import User
|
||||||
from app.auth.security import get_admin_user
|
from app.auth.security import get_admin_user
|
||||||
@@ -24,6 +27,21 @@ logger = get_logger("main")
|
|||||||
logger.info("Creating database tables")
|
logger.info("Creating database tables")
|
||||||
BaseModel.metadata.create_all(bind=engine)
|
BaseModel.metadata.create_all(bind=engine)
|
||||||
|
|
||||||
|
# Initialize SQLite FTS (if available)
|
||||||
|
logger.info("Initializing FTS (if available)")
|
||||||
|
ensure_rolodex_fts(engine)
|
||||||
|
ensure_files_fts(engine)
|
||||||
|
ensure_ledger_fts(engine)
|
||||||
|
ensure_qdros_fts(engine)
|
||||||
|
|
||||||
|
# Ensure helpful secondary indexes
|
||||||
|
logger.info("Ensuring secondary indexes (status, type, employee, etc.)")
|
||||||
|
ensure_secondary_indexes(engine)
|
||||||
|
|
||||||
|
# Ensure idempotent schema updates for added columns
|
||||||
|
logger.info("Ensuring schema updates (new columns)")
|
||||||
|
ensure_schema_updates(engine)
|
||||||
|
|
||||||
# Initialize FastAPI app
|
# Initialize FastAPI app
|
||||||
logger.info("Initializing FastAPI application", version=settings.app_version, debug=settings.debug)
|
logger.info("Initializing FastAPI application", version=settings.app_version, debug=settings.debug)
|
||||||
app = FastAPI(
|
app = FastAPI(
|
||||||
@@ -71,6 +89,7 @@ from app.api.import_data import router as import_router
 from app.api.flexible import router as flexible_router
 from app.api.support import router as support_router
 from app.api.settings import router as settings_router
+from app.api.mortality import router as mortality_router
 
 logger.info("Including API routers")
 app.include_router(auth_router, prefix="/api/auth", tags=["authentication"])
@@ -84,6 +103,7 @@ app.include_router(import_router, prefix="/api/import", tags=["import"])
 app.include_router(support_router, prefix="/api/support", tags=["support"])
 app.include_router(settings_router, prefix="/api/settings", tags=["settings"])
 app.include_router(flexible_router, prefix="/api")
+app.include_router(mortality_router, prefix="/api/mortality", tags=["mortality"])
 
 
 @app.get("/", response_class=HTMLResponse)
@@ -46,6 +46,25 @@ def _get_correlation_id(request: Request) -> str:
     return str(uuid4())
 
 
+def _json_safe(value: Any) -> Any:
+    """Recursively convert non-JSON-serializable objects (like Exceptions) into strings.
+
+    Keeps overall structure intact so tests inspecting error details (e.g. 'loc', 'msg') still work.
+    """
+    # Exception -> string message
+    if isinstance(value, BaseException):
+        return str(value)
+    # Mapping types
+    if isinstance(value, dict):
+        return {k: _json_safe(v) for k, v in value.items()}
+    # Sequence types
+    if isinstance(value, (list, tuple)):
+        return [
+            _json_safe(v) for v in value
+        ]
+    return value
+
+
 def _build_error_response(
     request: Request,
     *,
@@ -66,7 +85,7 @@ def _build_error_response(
         "correlation_id": correlation_id,
     }
     if details is not None:
-        body["error"]["details"] = details
+        body["error"]["details"] = _json_safe(details)
 
     response = JSONResponse(content=body, status_code=status_code)
     response.headers[ERROR_HEADER_NAME] = correlation_id
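The effect of routing `details` through `_json_safe` is easiest to see on a nested payload (illustrative values):

details = {"loc": ["body", "file_no"], "msg": ValueError("bad value"), "rows": [(1, KeyError("k"))]}
_json_safe(details)
# -> {'loc': ['body', 'file_no'], 'msg': 'bad value', 'rows': [[1, "'k'"]]}

Tuples become lists and exceptions become their string form, so the JSON body stays serializable while keys like 'loc' and 'msg' stay intact.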
@@ -14,12 +14,12 @@ from .flexible import FlexibleImport
 from .support import SupportTicket, TicketResponse, TicketStatus, TicketPriority, TicketCategory
 from .pensions import (
     Pension, PensionSchedule, MarriageHistory, DeathBenefit,
-    SeparationAgreement, LifeTable, NumberTable
+    SeparationAgreement, LifeTable, NumberTable, PensionResult
 )
 from .lookups import (
     Employee, FileType, FileStatus, TransactionType, TransactionCode,
     State, GroupLookup, Footer, PlanInfo, FormIndex, FormList,
-    PrinterSetup, SystemSetup
+    PrinterSetup, SystemSetup, FormKeyword
 )
 
 __all__ = [
@@ -28,8 +28,8 @@ __all__ = [
     "Deposit", "Payment", "FileNote", "FormVariable", "ReportVariable", "Document", "FlexibleImport",
     "SupportTicket", "TicketResponse", "TicketStatus", "TicketPriority", "TicketCategory",
     "Pension", "PensionSchedule", "MarriageHistory", "DeathBenefit",
-    "SeparationAgreement", "LifeTable", "NumberTable",
+    "SeparationAgreement", "LifeTable", "NumberTable", "PensionResult",
     "Employee", "FileType", "FileStatus", "TransactionType", "TransactionCode",
     "State", "GroupLookup", "Footer", "PlanInfo", "FormIndex", "FormList",
-    "PrinterSetup", "SystemSetup"
+    "PrinterSetup", "SystemSetup", "FormKeyword"
 ]
@@ -3,7 +3,7 @@ Audit logging models
 """
 from sqlalchemy import Column, Integer, String, Text, DateTime, ForeignKey, JSON
 from sqlalchemy.orm import relationship
-from datetime import datetime
+from datetime import datetime, timezone
 from app.models.base import BaseModel
 
 
@@ -22,7 +22,7 @@ class AuditLog(BaseModel):
     details = Column(JSON, nullable=True)  # Additional details as JSON
     ip_address = Column(String(45), nullable=True)  # IPv4/IPv6 address
     user_agent = Column(Text, nullable=True)  # Browser/client information
-    timestamp = Column(DateTime, default=datetime.utcnow, nullable=False, index=True)
+    timestamp = Column(DateTime(timezone=True), default=lambda: datetime.now(timezone.utc), nullable=False, index=True)
 
     # Relationships
     user = relationship("User", back_populates="audit_logs")
@@ -42,7 +42,7 @@ class LoginAttempt(BaseModel):
     ip_address = Column(String(45), nullable=False)
     user_agent = Column(Text, nullable=True)
     success = Column(Integer, default=0)  # 1 for success, 0 for failure
-    timestamp = Column(DateTime, default=datetime.utcnow, nullable=False, index=True)
+    timestamp = Column(DateTime(timezone=True), default=lambda: datetime.now(timezone.utc), nullable=False, index=True)
     failure_reason = Column(String(200), nullable=True)  # Reason for failure
 
     def __repr__(self):
@@ -56,8 +56,8 @@ class ImportAudit(BaseModel):
     __tablename__ = "import_audit"
 
     id = Column(Integer, primary_key=True, autoincrement=True, index=True)
-    started_at = Column(DateTime, default=datetime.utcnow, nullable=False, index=True)
-    finished_at = Column(DateTime, nullable=True, index=True)
+    started_at = Column(DateTime(timezone=True), default=lambda: datetime.now(timezone.utc), nullable=False, index=True)
+    finished_at = Column(DateTime(timezone=True), nullable=True, index=True)
     status = Column(String(30), nullable=False, default="running", index=True)  # running|success|completed_with_errors|failed
 
     total_files = Column(Integer, nullable=False, default=0)
@@ -94,7 +94,7 @@ class ImportAuditFile(BaseModel):
     errors = Column(Integer, nullable=False, default=0)
     message = Column(String(255), nullable=True)
     details = Column(JSON, nullable=True)
-    created_at = Column(DateTime, default=datetime.utcnow, nullable=False, index=True)
+    created_at = Column(DateTime(timezone=True), default=lambda: datetime.now(timezone.utc), nullable=False, index=True)
 
     audit = relationship("ImportAudit", back_populates="files")
 
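The datetime changes across these models follow one pattern: naive `datetime.utcnow` defaults become timezone-aware `DateTime(timezone=True)` columns. The standard-library difference, for reference:

from datetime import datetime, timezone

datetime.utcnow()           # naive: tzinfo is None (deprecated since Python 3.12)
datetime.now(timezone.utc)  # aware: tzinfo=timezone.utc, safe to compare and serialize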
@@ -1,7 +1,7 @@
 """
 Authentication-related persistence models
 """
-from datetime import datetime
+from datetime import datetime, timezone
 from typing import Optional
 
 from sqlalchemy import Column, Integer, String, DateTime, Boolean, ForeignKey, UniqueConstraint
@@ -19,10 +19,10 @@ class RefreshToken(BaseModel):
     jti = Column(String(64), nullable=False, unique=True, index=True)
     user_agent = Column(String(255), nullable=True)
     ip_address = Column(String(45), nullable=True)
-    issued_at = Column(DateTime, default=datetime.utcnow, nullable=False)
-    expires_at = Column(DateTime, nullable=False, index=True)
+    issued_at = Column(DateTime(timezone=True), default=lambda: datetime.now(timezone.utc), nullable=False)
+    expires_at = Column(DateTime(timezone=True), nullable=False, index=True)
     revoked = Column(Boolean, default=False, nullable=False)
-    revoked_at = Column(DateTime, nullable=True)
+    revoked_at = Column(DateTime(timezone=True), nullable=True)
 
     # relationships
     user = relationship("User")
@@ -1,7 +1,7 @@
 """
 Lookup table models based on legacy system analysis
 """
-from sqlalchemy import Column, Integer, String, Text, Boolean, Float
+from sqlalchemy import Column, Integer, String, Text, Boolean, Float, ForeignKey
 from app.models.base import BaseModel
 
 
@@ -53,6 +53,9 @@ class FileStatus(BaseModel):
     description = Column(String(200), nullable=False)  # Description
     active = Column(Boolean, default=True)  # Is status active
     sort_order = Column(Integer, default=0)  # Display order
+    # Legacy fields for typed import support
+    send = Column(Boolean, default=True)  # Should statements print by default
+    footer_code = Column(String(45), ForeignKey("footers.footer_code"))  # Default footer
 
     def __repr__(self):
         return f"<FileStatus(code='{self.status_code}', description='{self.description}')>"
@@ -169,9 +172,10 @@ class FormIndex(BaseModel):
     """
     __tablename__ = "form_index"
 
-    form_id = Column(String(45), primary_key=True, index=True)  # Form identifier
+    form_id = Column(String(45), primary_key=True, index=True)  # Form identifier (maps to Name)
     form_name = Column(String(200), nullable=False)  # Form name
     category = Column(String(45))  # Form category
+    keyword = Column(String(200))  # Legacy FORM_INX Name/Keyword pair
     active = Column(Boolean, default=True)  # Is form active
 
     def __repr__(self):
@@ -189,6 +193,7 @@ class FormList(BaseModel):
     form_id = Column(String(45), nullable=False)  # Form identifier
     line_number = Column(Integer, nullable=False)  # Line number in form
     content = Column(Text)  # Line content
+    status = Column(String(45))  # Legacy FORM_LST Status
 
     def __repr__(self):
         return f"<FormList(form_id='{self.form_id}', line={self.line_number})>"
@@ -201,12 +206,34 @@ class PrinterSetup(BaseModel):
     """
     __tablename__ = "printers"
 
+    # Core identity and basic configuration
     printer_name = Column(String(100), primary_key=True, index=True)  # Printer name
     description = Column(String(200))  # Description
     driver = Column(String(100))  # Print driver
     port = Column(String(20))  # Port/connection
     default_printer = Column(Boolean, default=False)  # Is default printer
     active = Column(Boolean, default=True)  # Is printer active
 
+    # Legacy numeric printer number (from PRINTERS.csv "Number")
+    number = Column(Integer)
+
+    # Legacy control sequences and formatting (from PRINTERS.csv)
+    page_break = Column(String(50))
+    setup_st = Column(String(200))
+    reset_st = Column(String(200))
+    b_underline = Column(String(100))
+    e_underline = Column(String(100))
+    b_bold = Column(String(100))
+    e_bold = Column(String(100))
+
+    # Optional report configuration toggles (legacy flags)
+    phone_book = Column(Boolean, default=False)
+    rolodex_info = Column(Boolean, default=False)
+    envelope = Column(Boolean, default=False)
+    file_cabinet = Column(Boolean, default=False)
+    accounts = Column(Boolean, default=False)
+    statements = Column(Boolean, default=False)
+    calendar = Column(Boolean, default=False)
+
     def __repr__(self):
         return f"<Printer(name='{self.printer_name}', description='{self.description}')>"
@@ -225,4 +252,19 @@ class SystemSetup(BaseModel):
     setting_type = Column(String(20), default="STRING")  # Data type (STRING, INTEGER, FLOAT, BOOLEAN)
 
     def __repr__(self):
         return f"<SystemSetup(key='{self.setting_key}', value='{self.setting_value}')>"
+
+
+class FormKeyword(BaseModel):
+    """
+    Form keyword lookup
+    Corresponds to INX_LKUP table in legacy system
+    """
+    __tablename__ = "form_keywords"
+
+    keyword = Column(String(200), primary_key=True, index=True)
+    description = Column(String(200))
+    active = Column(Boolean, default=True)
+
+    def __repr__(self):
+        return f"<FormKeyword(keyword='{self.keyword}')>"
@@ -60,11 +60,13 @@ class PensionSchedule(BaseModel):
     file_no = Column(String(45), ForeignKey("files.file_no"), nullable=False)
     version = Column(String(10), default="01")
 
-    # Schedule details
+    # Schedule details (legacy vesting fields)
     start_date = Column(Date)  # Start date for payments
     end_date = Column(Date)  # End date for payments
     payment_amount = Column(Float, default=0.0)  # Payment amount
     frequency = Column(String(20))  # Monthly, quarterly, etc.
+    vests_on = Column(Date)  # Legacy SCHEDULE.csv Vests_On
+    vests_at = Column(Float, default=0.0)  # Legacy SCHEDULE.csv Vests_At (percent)
 
     # Relationships
     file = relationship("File", back_populates="pension_schedules")
@@ -85,6 +87,15 @@ class MarriageHistory(BaseModel):
     divorce_date = Column(Date)  # Date of divorce/separation
     spouse_name = Column(String(100))  # Spouse name
     notes = Column(Text)  # Additional notes
+
+    # Legacy MARRIAGE.csv fields
+    married_from = Column(Date)
+    married_to = Column(Date)
+    married_years = Column(Float, default=0.0)
+    service_from = Column(Date)
+    service_to = Column(Date)
+    service_years = Column(Float, default=0.0)
+    marital_percent = Column(Float, default=0.0)
 
     # Relationships
     file = relationship("File", back_populates="marriage_history")
@@ -105,6 +116,14 @@ class DeathBenefit(BaseModel):
     benefit_amount = Column(Float, default=0.0)  # Benefit amount
     benefit_type = Column(String(45))  # Type of death benefit
     notes = Column(Text)  # Additional notes
+
+    # Legacy DEATH.csv fields
+    lump1 = Column(Float, default=0.0)
+    lump2 = Column(Float, default=0.0)
+    growth1 = Column(Float, default=0.0)
+    growth2 = Column(Float, default=0.0)
+    disc1 = Column(Float, default=0.0)
+    disc2 = Column(Float, default=0.0)
 
     # Relationships
     file = relationship("File", back_populates="death_benefits")
@@ -138,10 +157,36 @@ class LifeTable(BaseModel):
 
     id = Column(Integer, primary_key=True, autoincrement=True)
     age = Column(Integer, nullable=False)  # Age
-    male_expectancy = Column(Float)  # Male life expectancy
-    female_expectancy = Column(Float)  # Female life expectancy
-    table_year = Column(Integer)  # Year of table (e.g., 2023)
-    table_type = Column(String(45))  # Type of table
+    # Rich typed columns reflecting legacy LIFETABL.csv headers
+    # LE_* = Life Expectancy, NA_* = Number Alive/Survivors
+    le_aa = Column(Float)
+    na_aa = Column(Float)
+    le_am = Column(Float)
+    na_am = Column(Float)
+    le_af = Column(Float)
+    na_af = Column(Float)
+    le_wa = Column(Float)
+    na_wa = Column(Float)
+    le_wm = Column(Float)
+    na_wm = Column(Float)
+    le_wf = Column(Float)
+    na_wf = Column(Float)
+    le_ba = Column(Float)
+    na_ba = Column(Float)
+    le_bm = Column(Float)
+    na_bm = Column(Float)
+    le_bf = Column(Float)
+    na_bf = Column(Float)
+    le_ha = Column(Float)
+    na_ha = Column(Float)
+    le_hm = Column(Float)
+    na_hm = Column(Float)
+    le_hf = Column(Float)
+    na_hf = Column(Float)
+
+    # Optional metadata retained for future variations
+    table_year = Column(Integer)  # Year/version of table if known
+    table_type = Column(String(45))  # Source/type of table (optional)
 
 
 class NumberTable(BaseModel):
@@ -152,7 +197,63 @@ class NumberTable(BaseModel):
     __tablename__ = "number_tables"
 
     id = Column(Integer, primary_key=True, autoincrement=True)
-    table_type = Column(String(45), nullable=False)  # Type of table
-    key_value = Column(String(45), nullable=False)  # Key identifier
-    numeric_value = Column(Float)  # Numeric value
-    description = Column(Text)  # Description
+    month = Column(Integer, nullable=False)
+    # Rich typed NA_* columns reflecting legacy NUMBERAL.csv headers
+    na_aa = Column(Float)
+    na_am = Column(Float)
+    na_af = Column(Float)
+    na_wa = Column(Float)
+    na_wm = Column(Float)
+    na_wf = Column(Float)
+    na_ba = Column(Float)
+    na_bm = Column(Float)
+    na_bf = Column(Float)
+    na_ha = Column(Float)
+    na_hm = Column(Float)
+    na_hf = Column(Float)
+
+    # Optional metadata retained for future variations
+    table_type = Column(String(45))
+    description = Column(Text)
+
+
+class PensionResult(BaseModel):
+    """
+    Computed pension results summary
+    Corresponds to RESULTS table in legacy system
+    """
+    __tablename__ = "pension_results"
+
+    id = Column(Integer, primary_key=True, autoincrement=True)
+
+    # Optional linkage if present in future exports
+    file_no = Column(String(45))
+    version = Column(String(10))
+
+    # Columns observed in legacy RESULTS.csv header
+    accrued = Column(Float)
+    start_age = Column(Integer)
+    cola = Column(Float)
+    withdrawal = Column(String(45))
+    pre_dr = Column(Float)
+    post_dr = Column(Float)
+    tax_rate = Column(Float)
+    age = Column(Integer)
+    years_from = Column(Float)
+    life_exp = Column(Float)
+    ev_monthly = Column(Float)
+    payments = Column(Float)
+    pay_out = Column(Float)
+    fund_value = Column(Float)
+    pv = Column(Float)
+    mortality = Column(Float)
+    pv_am = Column(Float)
+    pv_amt = Column(Float)
+    pv_pre_db = Column(Float)
+    pv_annuity = Column(Float)
+    wv_at = Column(Float)
+    pv_plan = Column(Float)
+    years_married = Column(Float)
+    years_service = Column(Float)
+    marr_per = Column(Float)
+    marr_amt = Column(Float)
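The `le_*`/`na_*` column names encode race and sex in the suffix ({race}{sex}), which the mortality service added below resolves dynamically. An illustrative read, assuming an open `db` session:

# 'na_wm' = Number Alive, White Male; see app/services/mortality.py.
row = db.query(NumberTable).filter(NumberTable.month == 120).first()
na_white_male = getattr(row, "na_wm", None) if row else None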
@@ -3,7 +3,7 @@ Support ticket models for help desk functionality
 """
 from sqlalchemy import Column, Integer, String, Text, DateTime, Boolean, ForeignKey, Enum
 from sqlalchemy.orm import relationship
-from datetime import datetime
+from datetime import datetime, timezone
 import enum
 
 from app.models.base import BaseModel
@@ -63,9 +63,9 @@ class SupportTicket(BaseModel):
     ip_address = Column(String(45))  # IP address
 
     # Timestamps
-    created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
-    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
-    resolved_at = Column(DateTime)
+    created_at = Column(DateTime(timezone=True), default=lambda: datetime.now(timezone.utc), nullable=False)
+    updated_at = Column(DateTime(timezone=True), default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
+    resolved_at = Column(DateTime(timezone=True))
 
     # Admin assignment
     assigned_to = Column(Integer, ForeignKey("users.id"))
@@ -95,7 +95,7 @@ class TicketResponse(BaseModel):
     author_email = Column(String(100))  # For non-user responses
 
     # Timestamps
-    created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
+    created_at = Column(DateTime(timezone=True), default=lambda: datetime.now(timezone.utc), nullable=False)
 
     # Relationships
     ticket = relationship("SupportTicket", back_populates="responses")
@@ -3,7 +3,7 @@ Audit logging service
 """
 import json
 from typing import Dict, Any, Optional
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
 from sqlalchemy.orm import Session
 from fastapi import Request
 
@@ -65,7 +65,7 @@ class AuditService:
             details=details,
             ip_address=ip_address,
             user_agent=user_agent,
-            timestamp=datetime.utcnow()
+            timestamp=datetime.now(timezone.utc)
         )
 
         try:
@@ -76,7 +76,7 @@ class AuditService:
         except Exception as e:
             db.rollback()
             # Log the error but don't fail the main operation
-            logger.error("Failed to log audit entry", error=str(e), action=action, user_id=user_id)
+            logger.error("Failed to log audit entry", error=str(e), action=action)
         return audit_log
 
     @staticmethod
@@ -119,7 +119,7 @@ class AuditService:
             ip_address=ip_address or "unknown",
             user_agent=user_agent,
             success=1 if success else 0,
-            timestamp=datetime.utcnow(),
+            timestamp=datetime.now(timezone.utc),
             failure_reason=failure_reason if not success else None
         )
 
@@ -252,7 +252,7 @@ class AuditService:
         Returns:
             List of failed login attempts
         """
-        cutoff_time = datetime.utcnow() - timedelta(hours=hours)
+        cutoff_time = datetime.now(timezone.utc) - timedelta(hours=hours)
         query = db.query(LoginAttempt).filter(
             LoginAttempt.success == 0,
             LoginAttempt.timestamp >= cutoff_time

app/services/cache.py (new file, 98 lines)
@@ -0,0 +1,98 @@
+"""
+Cache utilities with optional Redis backend.
+
+If Redis is not configured or unavailable, all functions degrade to no-ops.
+"""
+from __future__ import annotations
+
+import asyncio
+import json
+import hashlib
+from typing import Any, Optional
+
+try:
+    import redis.asyncio as redis  # type: ignore
+except Exception:  # pragma: no cover - allow running without redis installed
+    redis = None  # type: ignore
+
+from app.config import settings
+
+
+_client: Optional["redis.Redis"] = None  # type: ignore
+_lock = asyncio.Lock()
+
+
+async def _get_client() -> Optional["redis.Redis"]:  # type: ignore
+    """Lazily initialize and return a shared Redis client if enabled."""
+    global _client
+    if not getattr(settings, "redis_url", None) or not getattr(settings, "cache_enabled", False):
+        return None
+    if redis is None:
+        return None
+    if _client is not None:
+        return _client
+    async with _lock:
+        if _client is None:
+            try:
+                _client = redis.from_url(settings.redis_url, decode_responses=True)  # type: ignore
+            except Exception:
+                _client = None
+    return _client
+
+
+def _stable_hash(obj: Any) -> str:
+    data = json.dumps(obj, sort_keys=True, separators=(",", ":"))
+    return hashlib.sha1(data.encode("utf-8")).hexdigest()
+
+
+def build_key(kind: str, user_id: Optional[str], parts: dict) -> str:
+    payload = {"u": user_id or "anon", "p": parts}
+    return f"search:{kind}:v1:{_stable_hash(payload)}"
+
+
+async def cache_get_json(kind: str, user_id: Optional[str], parts: dict) -> Optional[Any]:
+    client = await _get_client()
+    if client is None:
+        return None
+    key = build_key(kind, user_id, parts)
+    try:
+        raw = await client.get(key)
+        if raw is None:
+            return None
+        return json.loads(raw)
+    except Exception:
+        return None
+
+
+async def cache_set_json(kind: str, user_id: Optional[str], parts: dict, value: Any, ttl_seconds: int) -> None:
+    client = await _get_client()
+    if client is None:
+        return
+    key = build_key(kind, user_id, parts)
+    try:
+        await client.set(key, json.dumps(value, separators=(",", ":")), ex=ttl_seconds)
+    except Exception:
+        return
+
+
+async def invalidate_prefix(prefix: str) -> None:
+    client = await _get_client()
+    if client is None:
+        return
+    try:
+        # Use SCAN to avoid blocking Redis
+        async for key in client.scan_iter(match=f"{prefix}*"):
+            try:
+                await client.delete(key)
+            except Exception:
+                pass
+    except Exception:
+        return
+
+
+async def invalidate_search_cache() -> None:
+    # Wipe both global search and suggestions namespaces
+    await invalidate_prefix("search:global:")
+    await invalidate_prefix("search:suggestions:")
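Because `build_key` namespaces every key as `search:{kind}:v1:<sha1>`, `invalidate_search_cache()` can clear whole namespaces by prefix. A sketch of how an endpoint might use the helpers (the endpoint name, `run_search`, and the 60-second TTL are assumptions, not from this module):

async def search_global(query: dict, user_id: str):
    parts = {"q": query}
    cached = await cache_get_json("global", user_id, parts)
    if cached is not None:
        return cached  # served from Redis
    results = run_search(query)  # hypothetical search call
    await cache_set_json("global", user_id, parts, results, ttl_seconds=60)
    return results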
app/services/customers_search.py (new file, 141 lines)
@@ -0,0 +1,141 @@
+from typing import Optional, List
+from sqlalchemy import or_, and_, func, asc, desc
+
+from app.models.rolodex import Rolodex
+
+
+def apply_customer_filters(base_query, search: Optional[str], group: Optional[str], state: Optional[str], groups: Optional[List[str]], states: Optional[List[str]]):
+    """Apply shared search and group/state filters to the provided base_query.
+
+    This helper is used by both list and export endpoints to keep logic in sync.
+    """
+    s = (search or "").strip()
+    if s:
+        s_lower = s.lower()
+        tokens = [t for t in s_lower.split() if t]
+        contains_any = or_(
+            func.lower(Rolodex.id).contains(s_lower),
+            func.lower(Rolodex.last).contains(s_lower),
+            func.lower(Rolodex.first).contains(s_lower),
+            func.lower(Rolodex.middle).contains(s_lower),
+            func.lower(Rolodex.city).contains(s_lower),
+            func.lower(Rolodex.email).contains(s_lower),
+        )
+        name_tokens = [
+            or_(
+                func.lower(Rolodex.first).contains(tok),
+                func.lower(Rolodex.middle).contains(tok),
+                func.lower(Rolodex.last).contains(tok),
+            )
+            for tok in tokens
+        ]
+        combined = contains_any if not name_tokens else or_(contains_any, and_(*name_tokens))
+
+        last_first_filter = None
+        if "," in s_lower:
+            last_part, first_part = [p.strip() for p in s_lower.split(",", 1)]
+            if last_part and first_part:
+                last_first_filter = and_(
+                    func.lower(Rolodex.last).contains(last_part),
+                    func.lower(Rolodex.first).contains(first_part),
+                )
+            elif last_part:
+                last_first_filter = func.lower(Rolodex.last).contains(last_part)
+
+        final_filter = or_(combined, last_first_filter) if last_first_filter is not None else combined
+        base_query = base_query.filter(final_filter)
+
+    effective_groups = [g for g in (groups or []) if g] or ([group] if group else [])
+    if effective_groups:
+        base_query = base_query.filter(Rolodex.group.in_(effective_groups))
+
+    effective_states = [s for s in (states or []) if s] or ([state] if state else [])
+    if effective_states:
+        base_query = base_query.filter(Rolodex.abrev.in_(effective_states))
+
+    return base_query
+
+
+def apply_customer_sorting(base_query, sort_by: Optional[str], sort_dir: Optional[str]):
+    """Apply shared sorting to the provided base_query.
+
+    Supported fields: id, name (last,first), city (city,state), email.
+    Unknown fields fall back to id. Sorting is case-insensitive for strings.
+    """
+    normalized_sort_by = (sort_by or "id").lower()
+    normalized_sort_dir = (sort_dir or "asc").lower()
+    is_desc = normalized_sort_dir == "desc"
+
+    order_columns = []
+    if normalized_sort_by == "id":
+        order_columns = [Rolodex.id]
+    elif normalized_sort_by == "name":
+        order_columns = [Rolodex.last, Rolodex.first]
+    elif normalized_sort_by == "city":
+        order_columns = [Rolodex.city, Rolodex.abrev]
+    elif normalized_sort_by == "email":
+        order_columns = [Rolodex.email]
+    else:
+        order_columns = [Rolodex.id]
+
+    ordered = []
+    for col in order_columns:
+        try:
+            expr = func.lower(col) if col.type.python_type in (str,) else col  # type: ignore[attr-defined]
+        except Exception:
+            expr = col
+        ordered.append(desc(expr) if is_desc else asc(expr))
+
+    if ordered:
+        base_query = base_query.order_by(*ordered)
+    return base_query
+
+
+def prepare_customer_csv_rows(customers: List[Rolodex], fields: Optional[List[str]]):
+    """Prepare CSV header and rows for the given customers and requested fields.
+
+    Returns a tuple: (header_row, rows), where header_row is a list of column
+    titles and rows is a list of row lists ready to be written by csv.writer.
+    """
+    allowed_fields_in_order = ["id", "name", "group", "city", "state", "phone", "email"]
+    header_names = {
+        "id": "Customer ID",
+        "name": "Name",
+        "group": "Group",
+        "city": "City",
+        "state": "State",
+        "phone": "Primary Phone",
+        "email": "Email",
+    }
+
+    requested = [f.lower() for f in (fields or []) if isinstance(f, str)]
+    selected_fields = [f for f in allowed_fields_in_order if f in requested] if requested else allowed_fields_in_order
+    if not selected_fields:
+        selected_fields = allowed_fields_in_order
+
+    header_row = [header_names[f] for f in selected_fields]
+
+    rows: List[List[str]] = []
+    for c in customers:
+        full_name = f"{(c.first or '').strip()} {(c.last or '').strip()}".strip()
+        primary_phone = ""
+        try:
+            if getattr(c, "phone_numbers", None):
+                primary_phone = c.phone_numbers[0].phone or ""
+        except Exception:
+            primary_phone = ""
+
+        row_map = {
+            "id": c.id,
+            "name": full_name,
+            "group": c.group or "",
+            "city": c.city or "",
+            "state": c.abrev or "",
+            "phone": primary_phone,
+            "email": c.email or "",
+        }
+        rows.append([row_map[f] for f in selected_fields])
+
+    return header_row, rows
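A sketch of how a list or export endpoint could compose these helpers (the endpoint shape and field choices are assumptions):

query = db.query(Rolodex)
query = apply_customer_filters(query, search="smith, john", group=None, state="TX", groups=None, states=None)
query = apply_customer_sorting(query, sort_by="name", sort_dir="asc")
customers = query.offset(0).limit(50).all()
header, rows = prepare_customer_csv_rows(customers, fields=["id", "name", "email"])

The comma in "smith, john" exercises the last,first branch, which is OR'd with the plain contains filters so both query styles keep working.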
app/services/mortality.py (new file, 127 lines)
@@ -0,0 +1,127 @@
+"""
+Mortality/Life table utilities.
+
+Helpers to query `life_tables` and `number_tables` by age/month and
+return values filtered by sex/race using compact codes:
+- sex: M, F, A (All)
+- race: W (White), B (Black), H (Hispanic), A (All)
+
+Column naming in tables follows the pattern:
+- LifeTable: le_{race}{sex}, na_{race}{sex}
+- NumberTable: na_{race}{sex}
+
+Examples:
+- race=W, sex=M => suffix "wm" (columns `le_wm`, `na_wm`)
+- race=A, sex=F => suffix "af" (columns `le_af`, `na_af`)
+- race=H, sex=A => suffix "ha" (columns `le_ha`, `na_ha`)
+"""
+
+from __future__ import annotations
+
+from typing import Dict, Optional, Tuple
+from sqlalchemy.orm import Session
+
+from app.models.pensions import LifeTable, NumberTable
+
+
+_RACE_MAP: Dict[str, str] = {
+    "W": "w",  # White
+    "B": "b",  # Black
+    "H": "h",  # Hispanic
+    "A": "a",  # All races
+}
+
+_SEX_MAP: Dict[str, str] = {
+    "M": "m",
+    "F": "f",
+    "A": "a",  # All sexes
+}
+
+
+class InvalidCodeError(ValueError):
+    pass
+
+
+def _normalize_codes(sex: str, race: str) -> Tuple[str, str, str]:
+    """Validate/normalize sex and race to construct the column suffix.
+
+    Returns (suffix, sex_u, race_u) where suffix is lowercase like "wm".
+    Raises InvalidCodeError on invalid inputs.
+    """
+    sex_u = (sex or "").strip().upper()
+    race_u = (race or "").strip().upper()
+    if sex_u not in _SEX_MAP:
+        raise InvalidCodeError(f"Invalid sex code '{sex}'. Expected one of: {', '.join(_SEX_MAP.keys())}")
+    if race_u not in _RACE_MAP:
+        raise InvalidCodeError(f"Invalid race code '{race}'. Expected one of: {', '.join(_RACE_MAP.keys())}")
+    return _RACE_MAP[race_u] + _SEX_MAP[sex_u], sex_u, race_u
+
+
+def get_life_values(
+    db: Session,
+    *,
+    age: int,
+    sex: str,
+    race: str,
+) -> Optional[Dict[str, Optional[float]]]:
+    """Return life table LE and NA values for a given age, sex, and race.
+
+    Returns dict: {"age": int, "sex": str, "race": str, "le": float|None, "na": float|None}
+    Returns None if the age row does not exist.
+    Raises InvalidCodeError for invalid codes.
+    """
+    suffix, sex_u, race_u = _normalize_codes(sex, race)
+    row: Optional[LifeTable] = db.query(LifeTable).filter(LifeTable.age == age).first()
+    if not row:
+        return None
+
+    le_col = f"le_{suffix}"
+    na_col = f"na_{suffix}"
+    le_val = getattr(row, le_col, None)
+    na_val = getattr(row, na_col, None)
+
+    return {
+        "age": int(age),
+        "sex": sex_u,
+        "race": race_u,
+        "le": float(le_val) if le_val is not None else None,
+        "na": float(na_val) if na_val is not None else None,
+    }
+
+
+def get_number_value(
+    db: Session,
+    *,
+    month: int,
+    sex: str,
+    race: str,
+) -> Optional[Dict[str, Optional[float]]]:
+    """Return number table NA value for a given month, sex, and race.
+
+    Returns dict: {"month": int, "sex": str, "race": str, "na": float|None}
+    Returns None if the month row does not exist.
+    Raises InvalidCodeError for invalid codes.
+    """
+    suffix, sex_u, race_u = _normalize_codes(sex, race)
+    row: Optional[NumberTable] = db.query(NumberTable).filter(NumberTable.month == month).first()
+    if not row:
+        return None
+
+    na_col = f"na_{suffix}"
+    na_val = getattr(row, na_col, None)
+
+    return {
+        "month": int(month),
+        "sex": sex_u,
+        "race": race_u,
+        "na": float(na_val) if na_val is not None else None,
+    }
+
+
+__all__ = [
+    "InvalidCodeError",
+    "get_life_values",
+    "get_number_value",
+]
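Usage follows the suffix rules spelled out in the module docstring; for example (assuming an open `db` session and seeded tables):

life = get_life_values(db, age=65, sex="M", race="W")     # reads le_wm / na_wm
num = get_number_value(db, month=780, sex="F", race="A")  # reads na_af
if life is None:
    ...  # no life-table row for that age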
app/services/query_utils.py (new file, 72 lines)
@@ -0,0 +1,72 @@
+from typing import Iterable, Optional, Sequence
+from sqlalchemy import or_, and_, asc, desc, func
+from sqlalchemy.sql.elements import BinaryExpression
+from sqlalchemy.sql.schema import Column
+
+
+def tokenized_ilike_filter(tokens: Sequence[str], columns: Sequence[Column]) -> Optional[BinaryExpression]:
+    """Build an AND-of-ORs case-insensitive LIKE filter across columns for each token.
+
+    Example: AND(OR(col1 ILIKE %t1%, col2 ILIKE %t1%), OR(col1 ILIKE %t2%, ...))
+    Returns None when tokens or columns are empty.
+    """
+    if not tokens or not columns:
+        return None
+    per_token_clauses = []
+    for term in tokens:
+        term = str(term or "").strip()
+        if not term:
+            continue
+        per_token_clauses.append(or_(*[c.ilike(f"%{term}%") for c in columns]))
+    if not per_token_clauses:
+        return None
+    return and_(*per_token_clauses)
+
+
+def apply_pagination(query, skip: int, limit: int):
+    """Apply offset/limit pagination to a SQLAlchemy query in a DRY way."""
+    return query.offset(skip).limit(limit)
+
+
+def paginate_with_total(query, skip: int, limit: int, include_total: bool):
+    """Return (items, total|None) applying pagination and optionally counting total.
+
+    This avoids duplicating count + pagination logic at each endpoint.
+    """
+    total_count = query.count() if include_total else None
+    items = apply_pagination(query, skip, limit).all()
+    return items, total_count
+
+
+def apply_sorting(query, sort_by: Optional[str], sort_dir: Optional[str], allowed: dict[str, list[Column]]):
+    """Apply case-insensitive sorting per a whitelist of allowed fields.
+
+    allowed: mapping from field name -> list of columns to sort by, in priority order.
+    For string columns, compares using lower(column) for stable ordering.
+    Unknown sort_by falls back to the first key in allowed.
+    sort_dir: "asc" or "desc" (default asc)
+    """
+    if not allowed:
+        return query
+    normalized_sort_by = (sort_by or next(iter(allowed.keys()))).lower()
+    normalized_sort_dir = (sort_dir or "asc").lower()
+    is_desc = normalized_sort_dir == "desc"
+
+    columns = allowed.get(normalized_sort_by)
+    if not columns:
+        columns = allowed.get(next(iter(allowed.keys())))
+    if not columns:
+        return query
+
+    order_exprs = []
+    for col in columns:
+        try:
+            expr = func.lower(col) if getattr(col.type, "python_type", str) is str else col
+        except Exception:
+            expr = col
+        order_exprs.append(desc(expr) if is_desc else asc(expr))
+    if order_exprs:
+        query = query.order_by(*order_exprs)
+    return query
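A sketch combining the three helpers on a hypothetical files query (the column whitelist and model fields are assumptions):

tokens = "smith pension".split()
flt = tokenized_ilike_filter(tokens, [File.regarding, File.memo])
query = db.query(File)
if flt is not None:
    query = query.filter(flt)
query = apply_sorting(query, "opened", "desc", {"opened": [File.opened], "file_no": [File.file_no]})
items, total = paginate_with_total(query, skip=0, limit=25, include_total=True)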
e2e/global-setup.js (new file, 69 lines)
@@ -0,0 +1,69 @@
+// Global setup to seed admin user before Playwright tests
+const { spawnSync } = require('child_process');
+const fs = require('fs');
+const jwt = require('jsonwebtoken');
+
+module.exports = async () => {
+  const SECRET_KEY = process.env.SECRET_KEY || 'x'.repeat(32);
+  const path = require('path');
+  const dbPath = path.resolve(__dirname, '..', '.e2e-db.sqlite');
+  const DATABASE_URL = process.env.DATABASE_URL || `sqlite:////${dbPath}`;
+
+  // Ensure a clean database for deterministic tests
+  try { fs.rmSync(dbPath, { force: true }); } catch (_) {}
+
+  const pyCode = `
+from sqlalchemy.orm import sessionmaker
+from app.database.base import engine
+from app.models import BaseModel
+from app.models.user import User
+from app.auth.security import get_password_hash
+import os
+
+# Ensure tables
+BaseModel.metadata.create_all(bind=engine)
+
+SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
+db = SessionLocal()
+try:
+    admin = db.query(User).filter(User.username=='admin').first()
+    if not admin:
+        admin = User(
+            username=os.getenv('ADMIN_USERNAME','admin'),
+            email=os.getenv('ADMIN_EMAIL','admin@delphicg.local'),
+            full_name=os.getenv('ADMIN_FULLNAME','System Administrator'),
+            hashed_password=get_password_hash(os.getenv('ADMIN_PASSWORD','admin123')),
+            is_active=True,
+            is_admin=True,
+        )
+        db.add(admin)
+        db.commit()
+        print('Seeded admin user')
+    else:
+        print('Admin user already exists')
+finally:
+    db.close()
+`;
+
+  const env = {
+    ...process.env,
+    SECRET_KEY,
+    DATABASE_URL,
+    ADMIN_EMAIL: 'admin@example.com',
+    ADMIN_USERNAME: 'admin',
+    ADMIN_PASSWORD: process.env.ADMIN_PASSWORD || 'admin123',
+  };
+  let res = spawnSync('python3', ['-c', pyCode], { env, stdio: 'inherit' });
+  if (res.error) {
+    res = spawnSync('python', ['-c', pyCode], { env, stdio: 'inherit' });
+    if (res.error) throw res.error;
+  }
+
+  // Pre-generate a valid access token to bypass login DB writes in tests
+  const token = jwt.sign({ sub: env.ADMIN_USERNAME, type: 'access' }, env.SECRET_KEY, { expiresIn: '4h' });
+  // Persist to a file for the tests to read
+  const tokenPath = path.resolve(__dirname, '..', '.e2e-token');
+  fs.writeFileSync(tokenPath, token, 'utf-8');
+};
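The token written to `.e2e-token` is a plain HS256 JWT signed with the same SECRET_KEY the backend reads, so the API accepts it without a login round-trip. A sketch of verifying it on the Python side (assuming PyJWT, which is a common choice; the app's actual decoder may differ):

import jwt  # PyJWT; an assumption, not pinned by this commit
payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
assert payload["sub"] == "admin" and payload["type"] == "access"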
e2e/search.e2e.spec.js (new file, 239 lines)
@@ -0,0 +1,239 @@
|
|||||||
|
// Playwright E2E tests for Advanced Search UI
|
||||||
|
const { test, expect } = require('@playwright/test');
|
||||||
|
|
||||||
|
async function loginAndSetTokens(page) {
|
||||||
|
// Read pre-generated access token
|
||||||
|
const fs = require('fs');
|
||||||
|
const path = require('path');
|
||||||
|
const tokenPath = path.resolve(__dirname, '..', '.e2e-token');
|
||||||
|
const access = fs.readFileSync(tokenPath, 'utf-8').trim();
|
||||||
|
const refresh = '';
|
||||||
|
await page.addInitScript((a, r) => {
|
||||||
|
try { window.localStorage.setItem('auth_token', a); } catch (_) {}
|
||||||
|
try { if (r) window.localStorage.setItem('refresh_token', r); } catch (_) {}
|
||||||
|
}, access, refresh);
|
||||||
|
return access;
|
||||||
|
}
|
||||||
|
|
||||||
|
async function apiCreateCustomer(page, payload, token) {
|
||||||
|
// Use import endpoint to avoid multiple writes and simplify schema
|
||||||
|
const req = await page.request.post('/api/import/customers', {
|
||||||
|
data: { customers: [payload] },
|
||||||
|
headers: token ? { Authorization: `Bearer ${token}` } : {},
|
||||||
|
});
|
||||||
|
expect(req.ok()).toBeTruthy();
|
||||||
|
// Return id directly
|
||||||
|
return payload.id;
|
||||||
|
}
|
||||||
|
|
||||||
|
async function apiCreateFile(page, payload, token) {
|
||||||
|
const req = await page.request.post('/api/import/files', {
|
||||||
|
data: { files: [payload] },
|
||||||
|
headers: token ? { Authorization: `Bearer ${token}` } : {},
|
||||||
|
});
|
||||||
|
expect(req.ok()).toBeTruthy();
|
||||||
|
return payload.file_no;
|
||||||
|
}
|
||||||
|
|
||||||
|
test.describe('Advanced Search UI', () => {
|
||||||
|
test.beforeEach(async ({ page }) => {
|
||||||
|
// no-op here; call per test to capture token
|
||||||
|
});
|
||||||
|
|
||||||
|
test('returns highlighted results and enforces XSS safety', async ({ page }) => {
|
||||||
|
const token = `E2E-${Date.now()}`;
|
||||||
|
const accessToken = await loginAndSetTokens(page);
|
||||||
|
const malicious = `${token} <img src=x onerror=alert(1)>`;
|
||||||
|
await apiCreateCustomer(page, {
|
||||||
|
id: `E2E-CUST-${Date.now()}`,
|
||||||
|
first: 'Alice',
|
||||||
|
last: malicious,
|
||||||
|
email: `alice.${Date.now()}@example.com`,
|
||||||
|
city: 'Austin',
|
||||||
|
abrev: 'TX',
|
||||||
|
}, accessToken);
|
||||||
|
|
||||||
|
await page.goto('/search');
|
||||||
|
await page.fill('#searchQuery', token);
|
||||||
|
await page.click('#advancedSearchForm button[type="submit"]');
|
||||||
|
await page.waitForResponse(res => res.url().includes('/api/search/advanced') && res.request().method() === 'POST');
|
||||||
|
|
||||||
|
const results = page.locator('#searchResults .search-result-item');
|
||||||
|
await expect(results.first()).toBeVisible({ timeout: 10000 });
|
||||||
|
|
||||||
|
const matchHtml = page.locator('#searchResults .search-result-item .text-sm.text-info-600');
|
||||||
|
if (await matchHtml.count()) {
|
||||||
|
const html = await matchHtml.first().innerHTML();
|
||||||
|
expect(html).toContain('<strong>');
|
||||||
|
expect(html).not.toContain('onerror');
|
||||||
|
expect(html).not.toContain('<script');
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
test('pagination works when results exceed page size', async ({ page }) => {
|
||||||
|
const token = `E2E-PAGE-${Date.now()}`;
|
||||||
|
const accessToken = await loginAndSetTokens(page);
|
||||||
|
const today = new Date().toISOString().slice(0, 10);
|
||||||
|
const ownerId = await apiCreateCustomer(page, {
|
||||||
|
id: `E2E-P-OWNER-${Date.now()}`,
|
||||||
|
first: 'Bob',
|
||||||
|
last: 'Pagination',
|
||||||
|
email: `bob.${Date.now()}@example.com`,
|
||||||
|
city: 'Austin',
|
||||||
|
abrev: 'TX',
|
||||||
|
}, accessToken);
|
||||||
|
for (let i = 0; i < 60; i++) {
|
||||||
|
await apiCreateFile(page, {
|
||||||
|
file_no: `E2E-F-${Date.now()}-${i}`,
|
||||||
|
id: ownerId,
|
||||||
|
regarding: `About ${token} #${i}`,
|
||||||
|
empl_num: 'E01',
|
||||||
|
file_type: 'CIVIL',
|
||||||
|
opened: today,
|
||||||
|
status: 'ACTIVE',
|
||||||
|
rate_per_hour: 150,
|
||||||
|
memo: 'seeded',
|
||||||
|
}, accessToken);
|
||||||
|
}
|
||||||
|
|
||||||
|
await page.goto('/search');
|
||||||
|
await page.fill('#searchQuery', token);
|
||||||
|
await page.click('#advancedSearchForm button[type="submit"]');
|
||||||
|
    await page.waitForResponse(res => res.url().includes('/api/search/advanced') && res.request().method() === 'POST');

    const pager = page.locator('#searchPagination');
    await expect(pager).toBeVisible({ timeout: 10000 });
    const firstPageActive = page.locator('#searchPagination button.bg-primary-600');
    await expect(firstPageActive).toContainText('1');

    const next = page.locator('#searchPagination button', { hasText: 'Next' });
    await Promise.all([
      page.waitForResponse((res) => res.url().includes('/api/search/advanced') && res.request().method() === 'POST'),
      next.click(),
    ]);
    const active = page.locator('#searchPagination button.bg-primary-600');
    await expect(active).not.toContainText('1');
  });

  test('suggestions dropdown renders safely and clicking populates input and triggers search', async ({ page }) => {
    const token = `E2E-SUG-${Date.now()}`;
    await loginAndSetTokens(page);

    const suggestionOne = `${token} first`;
    const suggestionTwo = `${token} second`;

    // Stub the suggestions endpoint for our token
    await page.route('**/api/search/suggestions*', async (route) => {
      try {
        const url = new URL(route.request().url());
        const q = url.searchParams.get('q') || '';
        if (q.includes(token)) {
          return route.fulfill({
            status: 200,
            contentType: 'application/json',
            body: JSON.stringify({
              suggestions: [
                { text: suggestionOne, category: 'customer', description: 'Name match' },
                { text: suggestionTwo, category: 'file', description: 'File regarding' },
              ],
            }),
          });
        }
      } catch (_) {}
      return route.fallback();
    });

    // Stub the advanced search to assert it gets triggered with the clicked suggestion
    let receivedQuery = null;
    await page.route('**/api/search/advanced', async (route) => {
      try {
        const body = route.request().postDataJSON();
        receivedQuery = body?.query || null;
      } catch (_) {}
      return route.fulfill({
        status: 200,
        contentType: 'application/json',
        body: JSON.stringify({
          total_results: 0,
          stats: { search_execution_time: 0.001 },
          facets: { customer: {}, file: {}, ledger: {}, qdro: {}, document: {}, phone: {} },
          results: [],
          page_info: { current_page: 1, total_pages: 0, has_previous: false, has_next: false },
        }),
      });
    });

    await page.goto('/search');

    // Type to trigger suggestions (debounced)
    await page.fill('#searchQuery', token);

    const dropdown = page.locator('#searchSuggestions');
    const items = dropdown.locator('a');
    await expect(items).toHaveCount(2, { timeout: 5000 });
    await expect(dropdown).toBeVisible();

    // Basic safety check: ensure no script tags ended up in suggestions markup
    const dropdownHtml = await dropdown.innerHTML();
    expect(dropdownHtml).not.toContain('<script');

    // Click the first suggestion and expect a search to be performed with that query
    await Promise.all([
      page.waitForResponse((res) => res.url().includes('/api/search/advanced') && res.request().method() === 'POST'),
      items.first().click(),
    ]);

    await expect(page.locator('#searchQuery')).toHaveValue(new RegExp(`^${suggestionOne}`));
    expect(receivedQuery || '').toContain(suggestionOne);
  });

  test('Escape hides suggestions dropdown without triggering a search', async ({ page }) => {
    const token = `E2E-ESC-${Date.now()}`;
    await loginAndSetTokens(page);

    // Track whether advanced search is called
    let calledAdvanced = false;
    await page.route('**/api/search/advanced', async (route) => {
      calledAdvanced = true;
      return route.fulfill({
        status: 200,
        contentType: 'application/json',
        body: JSON.stringify({
          total_results: 0,
          stats: { search_execution_time: 0.001 },
          facets: { customer: {}, file: {}, ledger: {}, qdro: {}, document: {}, phone: {} },
          results: [],
          page_info: { current_page: 1, total_pages: 0, has_previous: false, has_next: false },
        }),
      });
    });

    // Stub suggestions so they appear
    await page.route('**/api/search/suggestions*', async (route) => {
      return route.fulfill({
        status: 200,
        contentType: 'application/json',
        body: JSON.stringify({
          suggestions: [
            { text: `${token} foo`, category: 'customer', description: '' },
            { text: `${token} bar`, category: 'file', description: '' },
          ],
        }),
      });
    });

    await page.goto('/search');
    await page.fill('#searchQuery', token);

    const dropdown = page.locator('#searchSuggestions');
    await expect(dropdown.locator('a')).toHaveCount(2, { timeout: 5000 });
    await expect(dropdown).toBeVisible();

    // Press Escape: should hide dropdown and not trigger search
    await page.keyboard.press('Escape');
    await expect(dropdown).toHaveClass(/hidden/);
    expect(calledAdvanced).toBeFalsy();
  });
});
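Note: the stubs above repeat the same fulfill-with-JSON boilerplate three times. If more specs stub endpoints this way, a small shared helper would keep each stub to a line; a sketch (hypothetical, not part of this commit) built only on the Playwright route API already in use above:

// e2e helper sketch (hypothetical): wraps the repeated route.fulfill(...) pattern.
function fulfillJson(route, body, status = 200) {
  return route.fulfill({
    status,
    contentType: 'application/json',
    body: JSON.stringify(body),
  });
}

// Usage inside a test:
// await page.route('**/api/search/advanced', (route) => fulfillJson(route, emptyAdvancedResponse));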
211
package-lock.json
generated
@@ -10,10 +10,12 @@
       "license": "ISC",
       "devDependencies": {
         "@jest/environment": "^30.0.5",
+        "@playwright/test": "^1.45.0",
         "@tailwindcss/forms": "^0.5.10",
         "jest": "^29.7.0",
         "jest-environment-jsdom": "^30.0.5",
         "jsdom": "^22.1.0",
+        "jsonwebtoken": "^9.0.2",
         "tailwindcss": "^3.4.10"
       }
     },
@@ -1720,6 +1722,22 @@
         "node": ">=14"
       }
     },
+    "node_modules/@playwright/test": {
+      "version": "1.54.2",
+      "resolved": "https://registry.npmjs.org/@playwright/test/-/test-1.54.2.tgz",
+      "integrity": "sha512-A+znathYxPf+72riFd1r1ovOLqsIIB0jKIoPjyK2kqEIe30/6jF6BC7QNluHuwUmsD2tv1XZVugN8GqfTMOxsA==",
+      "dev": true,
+      "license": "Apache-2.0",
+      "dependencies": {
+        "playwright": "1.54.2"
+      },
+      "bin": {
+        "playwright": "cli.js"
+      },
+      "engines": {
+        "node": ">=18"
+      }
+    },
     "node_modules/@sinclair/typebox": {
       "version": "0.34.38",
       "resolved": "https://registry.npmjs.org/@sinclair/typebox/-/typebox-0.34.38.tgz",
@@ -2215,6 +2233,13 @@
         "node-int64": "^0.4.0"
       }
     },
+    "node_modules/buffer-equal-constant-time": {
+      "version": "1.0.1",
+      "resolved": "https://registry.npmjs.org/buffer-equal-constant-time/-/buffer-equal-constant-time-1.0.1.tgz",
+      "integrity": "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA==",
+      "dev": true,
+      "license": "BSD-3-Clause"
+    },
     "node_modules/buffer-from": {
       "version": "1.1.2",
       "resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz",
@@ -2824,6 +2849,16 @@
       "dev": true,
       "license": "MIT"
     },
+    "node_modules/ecdsa-sig-formatter": {
+      "version": "1.0.11",
+      "resolved": "https://registry.npmjs.org/ecdsa-sig-formatter/-/ecdsa-sig-formatter-1.0.11.tgz",
+      "integrity": "sha512-nagl3RYrbNv6kQkeJIpt6NJZy8twLB/2vtz6yN9Z4vRKHN4/QZJIEbqohALSgwKdnksuY3k5Addp5lg8sVoVcQ==",
+      "dev": true,
+      "license": "Apache-2.0",
+      "dependencies": {
+        "safe-buffer": "^5.0.1"
+      }
+    },
     "node_modules/electron-to-chromium": {
       "version": "1.5.200",
       "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.5.200.tgz",
@@ -6040,6 +6075,65 @@
         "node": ">=6"
       }
     },
+    "node_modules/jsonwebtoken": {
+      "version": "9.0.2",
+      "resolved": "https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-9.0.2.tgz",
+      "integrity": "sha512-PRp66vJ865SSqOlgqS8hujT5U4AOgMfhrwYIuIhfKaoSCZcirrmASQr8CX7cUg+RMih+hgznrjp99o+W4pJLHQ==",
+      "dev": true,
+      "license": "MIT",
+      "dependencies": {
+        "jws": "^3.2.2",
+        "lodash.includes": "^4.3.0",
+        "lodash.isboolean": "^3.0.3",
+        "lodash.isinteger": "^4.0.4",
+        "lodash.isnumber": "^3.0.3",
+        "lodash.isplainobject": "^4.0.6",
+        "lodash.isstring": "^4.0.1",
+        "lodash.once": "^4.0.0",
+        "ms": "^2.1.1",
+        "semver": "^7.5.4"
+      },
+      "engines": {
+        "node": ">=12",
+        "npm": ">=6"
+      }
+    },
+    "node_modules/jsonwebtoken/node_modules/semver": {
+      "version": "7.7.2",
+      "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.2.tgz",
+      "integrity": "sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA==",
+      "dev": true,
+      "license": "ISC",
+      "bin": {
+        "semver": "bin/semver.js"
+      },
+      "engines": {
+        "node": ">=10"
+      }
+    },
+    "node_modules/jwa": {
+      "version": "1.4.2",
+      "resolved": "https://registry.npmjs.org/jwa/-/jwa-1.4.2.tgz",
+      "integrity": "sha512-eeH5JO+21J78qMvTIDdBXidBd6nG2kZjg5Ohz/1fpa28Z4CcsWUzJ1ZZyFq/3z3N17aZy+ZuBoHljASbL1WfOw==",
+      "dev": true,
+      "license": "MIT",
+      "dependencies": {
+        "buffer-equal-constant-time": "^1.0.1",
+        "ecdsa-sig-formatter": "1.0.11",
+        "safe-buffer": "^5.0.1"
+      }
+    },
+    "node_modules/jws": {
+      "version": "3.2.2",
+      "resolved": "https://registry.npmjs.org/jws/-/jws-3.2.2.tgz",
+      "integrity": "sha512-YHlZCB6lMTllWDtSPHz/ZXTsi8S00usEV6v1tjq8tOUZzw7DpSDWVXjXDre6ed1w/pd495ODpHZYSdkRTsa0HA==",
+      "dev": true,
+      "license": "MIT",
+      "dependencies": {
+        "jwa": "^1.4.1",
+        "safe-buffer": "^5.0.1"
+      }
+    },
     "node_modules/kleur": {
       "version": "3.0.3",
       "resolved": "https://registry.npmjs.org/kleur/-/kleur-3.0.3.tgz",
@@ -6090,6 +6184,55 @@
         "node": ">=8"
       }
     },
+    "node_modules/lodash.includes": {
+      "version": "4.3.0",
+      "resolved": "https://registry.npmjs.org/lodash.includes/-/lodash.includes-4.3.0.tgz",
+      "integrity": "sha512-W3Bx6mdkRTGtlJISOvVD/lbqjTlPPUDTMnlXZFnVwi9NKJ6tiAk6LVdlhZMm17VZisqhKcgzpO5Wz91PCt5b0w==",
+      "dev": true,
+      "license": "MIT"
+    },
+    "node_modules/lodash.isboolean": {
+      "version": "3.0.3",
+      "resolved": "https://registry.npmjs.org/lodash.isboolean/-/lodash.isboolean-3.0.3.tgz",
+      "integrity": "sha512-Bz5mupy2SVbPHURB98VAcw+aHh4vRV5IPNhILUCsOzRmsTmSQ17jIuqopAentWoehktxGd9e/hbIXq980/1QJg==",
+      "dev": true,
+      "license": "MIT"
+    },
+    "node_modules/lodash.isinteger": {
+      "version": "4.0.4",
+      "resolved": "https://registry.npmjs.org/lodash.isinteger/-/lodash.isinteger-4.0.4.tgz",
+      "integrity": "sha512-DBwtEWN2caHQ9/imiNeEA5ys1JoRtRfY3d7V9wkqtbycnAmTvRRmbHKDV4a0EYc678/dia0jrte4tjYwVBaZUA==",
+      "dev": true,
+      "license": "MIT"
+    },
+    "node_modules/lodash.isnumber": {
+      "version": "3.0.3",
+      "resolved": "https://registry.npmjs.org/lodash.isnumber/-/lodash.isnumber-3.0.3.tgz",
+      "integrity": "sha512-QYqzpfwO3/CWf3XP+Z+tkQsfaLL/EnUlXWVkIk5FUPc4sBdTehEqZONuyRt2P67PXAk+NXmTBcc97zw9t1FQrw==",
+      "dev": true,
+      "license": "MIT"
+    },
+    "node_modules/lodash.isplainobject": {
+      "version": "4.0.6",
+      "resolved": "https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz",
+      "integrity": "sha512-oSXzaWypCMHkPC3NvBEaPHf0KsA5mvPrOPgQWDsbg8n7orZ290M0BmC/jgRZ4vcJ6DTAhjrsSYgdsW/F+MFOBA==",
+      "dev": true,
+      "license": "MIT"
+    },
+    "node_modules/lodash.isstring": {
+      "version": "4.0.1",
+      "resolved": "https://registry.npmjs.org/lodash.isstring/-/lodash.isstring-4.0.1.tgz",
+      "integrity": "sha512-0wJxfxH1wgO3GrbuP+dTTk7op+6L41QCXbGINEmD+ny/G/eCqGzxyCsh7159S+mgDDcoarnBw6PC1PS5+wUGgw==",
+      "dev": true,
+      "license": "MIT"
+    },
+    "node_modules/lodash.once": {
+      "version": "4.1.1",
+      "resolved": "https://registry.npmjs.org/lodash.once/-/lodash.once-4.1.1.tgz",
+      "integrity": "sha512-Sb487aTOCr9drQVL8pIxOzVhafOjZN9UU54hiN8PU3uAiSV7lx1yYNpbNmex2PK6dSJoNTSJUUswT651yww3Mg==",
+      "dev": true,
+      "license": "MIT"
+    },
     "node_modules/lru-cache": {
      "version": "10.4.3",
      "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-10.4.3.tgz",
@@ -6582,6 +6725,53 @@
         "node": ">=8"
       }
     },
+    "node_modules/playwright": {
+      "version": "1.54.2",
+      "resolved": "https://registry.npmjs.org/playwright/-/playwright-1.54.2.tgz",
+      "integrity": "sha512-Hu/BMoA1NAdRUuulyvQC0pEqZ4vQbGfn8f7wPXcnqQmM+zct9UliKxsIkLNmz/ku7LElUNqmaiv1TG/aL5ACsw==",
+      "dev": true,
+      "license": "Apache-2.0",
+      "dependencies": {
+        "playwright-core": "1.54.2"
+      },
+      "bin": {
+        "playwright": "cli.js"
+      },
+      "engines": {
+        "node": ">=18"
+      },
+      "optionalDependencies": {
+        "fsevents": "2.3.2"
+      }
+    },
+    "node_modules/playwright-core": {
+      "version": "1.54.2",
+      "resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.54.2.tgz",
+      "integrity": "sha512-n5r4HFbMmWsB4twG7tJLDN9gmBUeSPcsBZiWSE4DnYz9mJMAFqr2ID7+eGC9kpEnxExJ1epttwR59LEWCk8mtA==",
+      "dev": true,
+      "license": "Apache-2.0",
+      "bin": {
+        "playwright-core": "cli.js"
+      },
+      "engines": {
+        "node": ">=18"
+      }
+    },
+    "node_modules/playwright/node_modules/fsevents": {
+      "version": "2.3.2",
+      "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
+      "integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
+      "dev": true,
+      "hasInstallScript": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "darwin"
+      ],
+      "engines": {
+        "node": "^8.16.0 || ^10.6.0 || >=11.0.0"
+      }
+    },
     "node_modules/postcss": {
       "version": "8.5.6",
       "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.6.tgz",
@@ -7018,6 +7208,27 @@
         "queue-microtask": "^1.2.2"
       }
     },
+    "node_modules/safe-buffer": {
+      "version": "5.2.1",
+      "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
+      "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==",
+      "dev": true,
+      "funding": [
+        {
+          "type": "github",
+          "url": "https://github.com/sponsors/feross"
+        },
+        {
+          "type": "patreon",
+          "url": "https://www.patreon.com/feross"
+        },
+        {
+          "type": "consulting",
+          "url": "https://feross.org/support"
+        }
+      ],
+      "license": "MIT"
+    },
     "node_modules/safer-buffer": {
       "version": "2.1.2",
       "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz",
package.json
@@ -4,7 +4,11 @@
   "description": "A modern Python web application built with FastAPI to replace the legacy Pascal-based database system. This system maintains the familiar keyboard shortcuts and workflows while providing a robust, modular backend with a clean web interface.",
   "main": "tailwind.config.js",
   "scripts": {
-    "test": "jest"
+    "test": "jest",
+    "e2e": "playwright test",
+    "e2e:headed": "playwright test --headed",
+    "e2e:debug": "PWDEBUG=1 playwright test",
+    "e2e:install": "playwright install --with-deps"
   },
   "repository": {
     "type": "git",
@@ -23,6 +27,8 @@
   "devDependencies": {
     "@jest/environment": "^30.0.5",
     "@tailwindcss/forms": "^0.5.10",
+    "@playwright/test": "^1.45.0",
+    "jsonwebtoken": "^9.0.2",
     "jest": "^29.7.0",
     "jest-environment-jsdom": "^30.0.5",
     "jsdom": "^22.1.0",
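With these scripts in place, `npm run e2e:install` downloads the Playwright browsers (and OS dependencies) once per machine, and `npm run e2e`, `npm run e2e:headed`, or `npm run e2e:debug` run the suite against the server that playwright.config.js (below) starts automatically.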
34
playwright.config.js
Normal file
@@ -0,0 +1,34 @@
// @ts-check
const { defineConfig } = require('@playwright/test');
const path = require('path');
const DB_ABS_PATH = path.resolve(__dirname, '.e2e-db.sqlite');

module.exports = defineConfig({
  testDir: './e2e',
  fullyParallel: true,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 2 : undefined,
  use: {
    baseURL: process.env.PW_BASE_URL || 'http://127.0.0.1:6123',
    trace: 'on-first-retry',
  },
  globalSetup: require.resolve('./e2e/global-setup.js'),
  webServer: {
    command: 'uvicorn app.main:app --host 127.0.0.1 --port 6123',
    env: {
      SECRET_KEY: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
      DATABASE_URL: `sqlite:////${DB_ABS_PATH}`,
      LOG_LEVEL: 'WARNING',
      DISABLE_LOG_ENQUEUE: '1',
      LOG_TO_FILE: 'False',
      ADMIN_EMAIL: 'admin@example.com',
      ADMIN_USERNAME: 'admin',
      ADMIN_PASSWORD: process.env.ADMIN_PASSWORD || 'admin123',
    },
    url: 'http://127.0.0.1:6123/health',
    reuseExistingServer: !process.env.CI,
    timeout: 60 * 1000,
  },
});
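The config resolves ./e2e/global-setup.js, which is not included in this diff. A minimal sketch of what such a setup could do, logging in once against the booted server and caching a token for the specs; the /api/auth/login path, response shape, and cache filename are assumptions, not part of this commit:

// e2e/global-setup.js (sketch; endpoint, payload shape, and filename assumed)
const fs = require('fs');

module.exports = async () => {
  const baseURL = process.env.PW_BASE_URL || 'http://127.0.0.1:6123';
  // Node 18+ global fetch; Playwright brings the webServer up before global setup runs.
  const res = await fetch(`${baseURL}/api/auth/login`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      username: process.env.ADMIN_USERNAME || 'admin',
      password: process.env.ADMIN_PASSWORD || 'admin123',
    }),
  });
  if (!res.ok) throw new Error(`global-setup login failed: ${res.status}`);
  const { access_token } = await res.json();
  // Cache the token so specs can inject it without logging in per test.
  fs.writeFileSync('.e2e-access-token', access_token, 'utf8');
};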
requirements.txt
@@ -34,4 +34,7 @@ httpx==0.28.1
 python-dotenv==1.0.1

 # Logging
 loguru==0.7.2
+
+# Caching (optional)
+redis==5.0.8
50
static/js/__tests__/search_snippet.ui.test.js
Normal file
@@ -0,0 +1,50 @@
/** @jest-environment jsdom */

// Load sanitizer and highlight utils used by the UI
require('../sanitizer.js');
require('../highlight.js');

describe('Search highlight integration (server snippet rendering)', () => {
  const { formatSnippet, highlight, buildTokens } = window.highlightUtils;

  test('formatSnippet preserves server <strong> and sanitizes dangerous HTML', () => {
    const tokens = buildTokens('alpha');
    const serverSnippet = 'Hello <strong>Alpha</strong> <img src=x onerror=alert(1)> <a href="javascript:evil()">link</a>';
    const html = formatSnippet(serverSnippet, tokens);
    // Server-provided strong is preserved
    expect(html).toContain('<strong>Alpha</strong>');
    // Dangerous attributes removed
    expect(html).not.toContain('onerror=');
    // javascript: protocol removed
    expect(html.toLowerCase()).not.toContain('href="javascript:');
    // Image tag should remain but sanitized (no onerror)
    expect(html).toContain('<img');
  });

  test('setSafeHTML inserts sanitized content into DOM safely', () => {
    const container = document.createElement('div');
    const rawHtml = '<div onclick="evil()"><script>alert(1)</script>Text <b>bold</b></div>';
    // Using global helper installed by sanitizer.js
    window.setSafeHTML(container, rawHtml);
    // Script tags removed
    expect(container.innerHTML).not.toContain('<script>');
    // Event handlers stripped
    expect(container.innerHTML).not.toContain('onclick=');
    // Harmless markup preserved
    expect(container.innerHTML).toContain('<b>bold</b>');
  });

  test('highlight then sanitize flow escapes original tags and wraps tokens', () => {
    const tokens = buildTokens('john smith');
    const out = highlight('Hello <b>John</b> Smith & Sons', tokens);
    // Original b-tags escaped
    expect(out).toContain('&lt;b&gt;');
    // Tokens wrapped with strong
    expect(out).toMatch(/<strong>John<\/strong>/);
    expect(out).toMatch(/<strong>Smith<\/strong>/);
    // Ampersand escaped
    expect(out).toContain('&amp; Sons');
  });
});
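For orientation, the render path these tests pin down is: build tokens from the query, format the server snippet (keeping its <strong> marks while stripping dangerous markup), then insert via the sanitizing setter rather than raw innerHTML. A sketch of the consuming code, with an illustrative element id:

// Sketch only; assumes the globals installed by sanitizer.js and highlight.js.
// '#resultSnippet' is an illustrative id, not one from this commit.
function renderSnippet(el, serverSnippet, rawQuery) {
  const { buildTokens, formatSnippet } = window.highlightUtils;
  const tokens = buildTokens(rawQuery);
  const html = formatSnippet(serverSnippet, tokens);   // keep <strong>, drop onerror=, javascript:
  window.setSafeHTML(el, html);                        // sanitize again on insertion
}

renderSnippet(document.querySelector('#resultSnippet'), 'Hello <strong>Alpha</strong>', 'alpha');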
@@ -5,6 +5,8 @@ let isEditing = false;
 let editingCustomerId = null;
 let selectedCustomerIds = new Set();
 let customerCompactMode = false;
+let customerFocusIndex = -1;
+let _customerNavInitialized = false;

 // Local debounce fallback to avoid dependency on main.js
 function _localDebounce(func, wait) {
@@ -49,6 +51,7 @@ function displayCustomers(customers) {
     }

     customers.forEach(customer => {
+        const rowIndex = tbody.children.length;
         const phones = Array.isArray(customer.phone_numbers) ? customer.phone_numbers : [];
         const primaryPhone = phones.length > 0 ? (phones[0].phone || '') : '';
         const phoneCount = phones.length;
@@ -58,6 +61,8 @@ function displayCustomers(customers) {

         // Store customer ID as data attribute to avoid escaping issues in onclick
         row.dataset.customerId = customer.id;
+        row.dataset.rowIndex = String(rowIndex);
+        row.setAttribute('tabindex', '-1');

         // Build clean, simple row structure with clickable rows (no inline onclick to avoid backslash issues)
         const pad = customerCompactMode ? 'px-3 py-2' : 'px-6 py-4';
@@ -122,11 +127,16 @@ function displayCustomers(customers) {
             e.stopPropagation();
             editCustomer(customer.id);
         });
+
+        // Focus management for keyboard navigation
+        row.addEventListener('mouseenter', () => setCustomerFocus(rowIndex));
+        row.addEventListener('click', () => setCustomerFocus(rowIndex));

         tbody.appendChild(row);
     });

     // No select-all
+    refreshCustomerKeyboardRows();
 }

 // Helper functions
@@ -803,12 +813,13 @@ function enhanceCustomerTableRows() {
 function initializeCustomerListEnhancer() {
     const tbody = document.getElementById('customersTableBody');
     if (!tbody || window._customerListObserver) return;
-    const debouncedEnhance = (typeof window.debounce === 'function' ? window.debounce : _localDebounce)(() => enhanceCustomerTableRows(), 10);
+    const debouncedEnhance = (typeof window.debounce === 'function' ? window.debounce : _localDebounce)(() => { enhanceCustomerTableRows(); refreshCustomerKeyboardRows(); }, 10);
     const observer = new MutationObserver(() => debouncedEnhance());
     observer.observe(tbody, { childList: true, subtree: false });
     window._customerListObserver = observer;
     // Initial pass
     enhanceCustomerTableRows();
+    initializeCustomerListKeyboardNav();
 }

 // Selection helpers
@@ -878,4 +889,79 @@ function onSelectAllChange(checked) {
 // Expose helpers
 window.initializeCustomerListState = initializeCustomerListState;
 window.toggleCompactMode = toggleCompactMode;
 window.onSelectAllChange = onSelectAllChange;
+
+// Keyboard navigation for customer list
+function initializeCustomerListKeyboardNav() {
+    if (_customerNavInitialized) return;
+    _customerNavInitialized = true;
+    document.addEventListener('keydown', (e) => {
+        const active = document.activeElement || e.target;
+        const tag = active && active.tagName ? active.tagName.toUpperCase() : '';
+        const isTyping = tag === 'INPUT' || tag === 'TEXTAREA' || tag === 'SELECT' || (active && active.isContentEditable);
+        if (isTyping) return;
+        const tbody = document.getElementById('customersTableBody');
+        if (!tbody || tbody.children.length === 0) return;
+        switch (e.key) {
+            case 'ArrowDown': e.preventDefault(); moveCustomerFocus(1); break;
+            case 'ArrowUp': e.preventDefault(); moveCustomerFocus(-1); break;
+            case 'PageDown': e.preventDefault(); moveCustomerFocus(10); break;
+            case 'PageUp': e.preventDefault(); moveCustomerFocus(-10); break;
+            case 'Home': e.preventDefault(); setCustomerFocus(0); break;
+            case 'End': e.preventDefault(); setCustomerFocus(tbody.children.length - 1); break;
+            case 'Enter': e.preventDefault(); openFocusedCustomer(); break;
+        }
+    }, { passive: false });
+}
+
+function refreshCustomerKeyboardRows() {
+    const tbody = document.getElementById('customersTableBody');
+    if (!tbody) return;
+    const rows = Array.from(tbody.querySelectorAll('tr'));
+    rows.forEach((row, idx) => {
+        row.dataset.rowIndex = String(idx);
+        if (!row.hasAttribute('tabindex')) row.setAttribute('tabindex', '-1');
+        if (!row._navBound) {
+            row.addEventListener('mouseenter', () => setCustomerFocus(idx));
+            row.addEventListener('click', () => setCustomerFocus(idx));
+            row._navBound = true;
+        }
+    });
+    if (customerFocusIndex < 0 && rows.length > 0) setCustomerFocus(0);
+}
+
+function setCustomerFocus(index) {
+    const tbody = document.getElementById('customersTableBody');
+    if (!tbody) return;
+    const rows = Array.from(tbody.querySelectorAll('tr'));
+    if (rows.length === 0) { customerFocusIndex = -1; return; }
+    const clamped = Math.max(0, Math.min(index, rows.length - 1));
+    if (clamped === customerFocusIndex) return;
+    if (customerFocusIndex >= 0 && rows[customerFocusIndex]) {
+        rows[customerFocusIndex].classList.remove('ring-2', 'ring-blue-400', 'dark:ring-blue-500', 'bg-blue-50', 'dark:bg-blue-900/30');
+    }
+    customerFocusIndex = clamped;
+    const row = rows[customerFocusIndex];
+    if (!row) return;
+    row.classList.add('ring-2', 'ring-blue-400', 'dark:ring-blue-500', 'bg-blue-50', 'dark:bg-blue-900/30');
+    try { row.scrollIntoView({ block: 'nearest' }); } catch (_) {}
+}
+
+function moveCustomerFocus(delta) {
+    const next = (customerFocusIndex < 0 ? 0 : customerFocusIndex) + delta;
+    setCustomerFocus(next);
+}
+
+function openFocusedCustomer() {
+    const tbody = document.getElementById('customersTableBody');
+    if (!tbody || customerFocusIndex < 0) return;
+    const row = tbody.querySelector(`tr[data-row-index="${customerFocusIndex}"]`) || Array.from(tbody.querySelectorAll('tr'))[customerFocusIndex];
+    const id = row && row.dataset ? row.dataset.customerId : null;
+    if (id) viewCustomer(id);
+}
+
+// Expose for external usage/debugging
+window.initializeCustomerListKeyboardNav = initializeCustomerListKeyboardNav;
+window.refreshCustomerKeyboardRows = refreshCustomerKeyboardRows;
+window.setCustomerFocus = setCustomerFocus;
+window.openFocusedCustomer = openFocusedCustomer;
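The clamping in setCustomerFocus and the ring-class bookkeeping are easy to regress silently. A hedged jsdom test sketch in the style of the repo's existing Jest suites; the require path and the module's load-time side effects are assumptions:

/** @jest-environment jsdom */
// Sketch only; '../customers.js' is an assumed path for the module above.
require('../customers.js');

test('customer focus index clamps to the last row', () => {
  document.body.innerHTML = `
    <table><tbody id="customersTableBody">
      <tr data-customer-id="a"></tr>
      <tr data-customer-id="b"></tr>
    </tbody></table>`;
  window.refreshCustomerKeyboardRows();   // focuses row 0 initially
  window.setCustomerFocus(99);            // out of range: should clamp to row 1
  const rows = document.querySelectorAll('#customersTableBody tr');
  expect(rows[1].classList.contains('ring-2')).toBe(true);
  expect(rows[0].classList.contains('ring-2')).toBe(false);
});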
@@ -2,13 +2,23 @@
 function buildTokens(rawQuery) {
   const q = (rawQuery || '').trim();
   if (!q) return [];
-  // Normalize punctuation to spaces, trim non-alphanumerics at ends, dedupe
+  // Normalize punctuation to spaces, trim non-alphanumerics at ends
   const tokens = q
     .replace(/[,_;:]+/g, ' ')
     .split(/\s+/)
     .map(t => t.replace(/^[^A-Za-z0-9]+|[^A-Za-z0-9]+$/g, ''))
     .filter(Boolean);
-  return Array.from(new Set(tokens));
+  // Case-insensitive dedupe while preserving original order and casing (parity with server)
+  const seen = new Set();
+  const result = [];
+  for (const tok of tokens) {
+    const lowered = tok.toLowerCase();
+    if (!seen.has(lowered)) {
+      seen.add(lowered);
+      result.push(tok);
+    }
+  }
+  return result;
 }

 function escapeHtml(text) {
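For reference, the new dedupe keeps the first-seen casing of each token, which the old Set-based version did not:

// Old: Array.from(new Set(tokens)) treated 'John' and 'john' as distinct tokens.
// New behavior (case-insensitive, first casing wins):
buildTokens('John, john SMITH');  // -> ['John', 'SMITH']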
@@ -137,6 +137,10 @@
             <i class="fa-solid fa-wrench"></i>
             <span>Maintenance</span>
         </button>
+        <button class="flex items-center gap-2 px-6 py-4 text-sm font-medium border-b-2 border-transparent hover:border-primary-300 text-neutral-600 dark:text-neutral-400 hover:text-primary-600 dark:hover:text-primary-400 hover:bg-neutral-50 dark:hover:bg-neutral-700/50 transition-all duration-200" id="printers-tab" data-tab-target="#printers" type="button" role="tab">
+            <i class="fa-solid fa-print"></i>
+            <span>Printers</span>
+        </button>
         <button class="flex items-center gap-2 px-6 py-4 text-sm font-medium border-b-2 border-transparent hover:border-primary-300 text-neutral-600 dark:text-neutral-400 hover:text-primary-600 dark:hover:text-primary-400 hover:bg-neutral-50 dark:hover:bg-neutral-700/50 transition-all duration-200" id="import-tab" data-tab-target="#import" type="button" role="tab">
             <i class="fa-solid fa-file-import"></i>
             <span>Import</span>
@@ -513,6 +517,110 @@
     </div>
 </div>
+
+<!-- Printers Tab -->
+<div id="printers" role="tabpanel" class="hidden">
+    <div class="grid grid-cols-1 lg:grid-cols-3 gap-6">
+        <div class="lg:col-span-1">
+            <div class="bg-white dark:bg-neutral-800 border border-neutral-200 dark:border-neutral-700 rounded-lg shadow">
+                <div class="px-4 py-3 border-b border-neutral-200 dark:border-neutral-700 flex items-center justify-between">
+                    <h5 class="m-0 font-semibold"><i class="fa-solid fa-list"></i> Printers</h5>
+                    <button type="button" class="px-3 py-1.5 bg-primary-600 hover:bg-primary-700 text-white rounded text-sm" onclick="showCreatePrinterForm()">
+                        <i class="fas fa-plus"></i> Add
+                    </button>
+                </div>
+                <div class="p-4">
+                    <ul id="printers-list" class="divide-y divide-neutral-200 dark:divide-neutral-700">
+                        <li class="py-2 text-neutral-500">Loading...</li>
+                    </ul>
+                </div>
+            </div>
+        </div>
+        <div class="lg:col-span-2">
+            <div class="bg-white dark:bg-neutral-800 border border-neutral-200 dark:border-neutral-700 rounded-lg shadow">
+                <div class="px-4 py-3 border-b border-neutral-200 dark:border-neutral-700">
+                    <h5 class="m-0 font-semibold"><i class="fa-solid fa-pen-to-square"></i> Edit Printer</h5>
+                </div>
+                <div class="p-4">
+                    <form id="printer-form" class="grid grid-cols-1 md:grid-cols-2 gap-4" onsubmit="return savePrinter(event)">
+                        <div>
+                            <label class="block text-sm font-medium mb-1">Printer Name *</label>
+                            <input id="printer_name" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg" required>
+                        </div>
+                        <div>
+                            <label class="block text-sm font-medium mb-1">Description</label>
+                            <input id="description" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg">
+                        </div>
+                        <div>
+                            <label class="block text-sm font-medium mb-1">Driver</label>
+                            <input id="driver" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg">
+                        </div>
+                        <div>
+                            <label class="block text-sm font-medium mb-1">Port</label>
+                            <input id="port" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg">
+                        </div>
+                        <div>
+                            <label class="block text-sm font-medium mb-1">Number</label>
+                            <input id="number" type="number" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg">
+                        </div>
+                        <div class="flex items-center gap-2 mt-6">
+                            <input id="default_printer" type="checkbox" class="rounded border-neutral-300 text-primary-600 focus:ring-primary-500">
+                            <label class="text-sm">Default Printer</label>
+                        </div>
+
+                        <div class="md:col-span-2 grid grid-cols-1 md:grid-cols-2 gap-4 mt-2">
+                            <div>
+                                <label class="block text-sm font-medium mb-1">Page Break</label>
+                                <input id="page_break" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg" placeholder="e.g., \f">
+                            </div>
+                            <div>
+                                <label class="block text-sm font-medium mb-1">Setup Sequence</label>
+                                <input id="setup_st" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg">
+                            </div>
+                            <div>
+                                <label class="block text-sm font-medium mb-1">Reset Sequence</label>
+                                <input id="reset_st" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg">
+                            </div>
+                            <div>
+                                <label class="block text-sm font-medium mb-1">Bold Start</label>
+                                <input id="b_bold" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg">
+                            </div>
+                            <div>
+                                <label class="block text-sm font-medium mb-1">Bold End</label>
+                                <input id="e_bold" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg">
+                            </div>
+                            <div>
+                                <label class="block text-sm font-medium mb-1">Underline Start</label>
+                                <input id="b_underline" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg">
+                            </div>
+                            <div>
+                                <label class="block text-sm font-medium mb-1">Underline End</label>
+                                <input id="e_underline" class="w-full px-3 py-2 bg-white dark:bg-neutral-800 border border-neutral-300 dark:border-neutral-600 rounded-lg">
+                            </div>
+                        </div>
+
+                        <div class="md:col-span-2 grid grid-cols-2 md:grid-cols-4 gap-3 mt-2">
+                            <label class="inline-flex items-center gap-2"><input id="phone_book" type="checkbox" class="rounded border-neutral-300 text-primary-600 focus:ring-primary-500"> Phone Book</label>
+                            <label class="inline-flex items-center gap-2"><input id="rolodex_info" type="checkbox" class="rounded border-neutral-300 text-primary-600 focus:ring-primary-500"> Rolodex Info</label>
+                            <label class="inline-flex items-center gap-2"><input id="envelope" type="checkbox" class="rounded border-neutral-300 text-primary-600 focus:ring-primary-500"> Envelope</label>
+                            <label class="inline-flex items-center gap-2"><input id="file_cabinet" type="checkbox" class="rounded border-neutral-300 text-primary-600 focus:ring-primary-500"> File Cabinet</label>
+                            <label class="inline-flex items-center gap-2"><input id="accounts" type="checkbox" class="rounded border-neutral-300 text-primary-600 focus:ring-primary-500"> Accounts</label>
+                            <label class="inline-flex items-center gap-2"><input id="statements" type="checkbox" class="rounded border-neutral-300 text-primary-600 focus:ring-primary-500"> Statements</label>
+                            <label class="inline-flex items-center gap-2"><input id="calendar" type="checkbox" class="rounded border-neutral-300 text-primary-600 focus:ring-primary-500"> Calendar</label>
+                        </div>
+
+                        <div class="md:col-span-2 flex gap-2 mt-4">
+                            <button type="submit" class="px-4 py-2 bg-primary-600 hover:bg-primary-700 text-white rounded">
+                                <i class="fa-solid fa-floppy-disk"></i> Save
+                            </button>
+                            <button type="button" class="px-4 py-2 border border-neutral-300 dark:border-neutral-600 rounded" onclick="clearPrinterForm()">Clear</button>
+                        </div>
+                    </form>
+                </div>
+            </div>
+        </div>
+    </div>
+</div>
+
 <!-- Data Import Tab -->
 <div id="import" role="tabpanel" class="hidden">
     <div class="flex flex-wrap -mx-4">
@@ -1203,6 +1311,8 @@ function onTabShown(tabName) {
         loadUsers();
     } else if (tabName === 'settings') {
         loadSettings();
+    } else if (tabName === 'printers') {
+        loadPrinters();
     }
 }

@@ -1221,6 +1331,7 @@ document.addEventListener('DOMContentLoaded', function() {
     loadSettings();
     loadLookupTables();
     loadBackups();
+    loadPrinters();
     // Tabs setup
     initializeTabs();

@@ -1720,6 +1831,157 @@ async function loadLookupTables() {
     }
 }

+// Printers Management
+async function loadPrinters() {
+    try {
+        const response = await window.http.wrappedFetch('/api/admin/printers');
+        const printers = await response.json();
+        renderPrintersList(printers);
+    } catch (err) {
+        console.error('Failed to load printers:', err);
+        const ul = document.getElementById('printers-list');
+        if (ul) ul.innerHTML = '<li class="py-2 text-danger-600 dark:text-danger-400">Failed to load printers</li>';
+    }
+}
+
+function renderPrintersList(printers) {
+    const ul = document.getElementById('printers-list');
+    if (!ul) return;
+    if (!printers || printers.length === 0) {
+        ul.innerHTML = '<li class="py-2 text-neutral-500">No printers found</li>';
+        return;
+    }
+    ul.innerHTML = printers.map(p => `
+        <li class="py-2 px-2 rounded hover:bg-neutral-50 dark:hover:bg-neutral-800/50">
+            <div class="flex justify-between items-center gap-2">
+                <div class="flex-1 cursor-pointer" onclick="selectPrinter('${encodeURIComponent(p.printer_name)}')">
+                    <div class="font-medium">${p.printer_name}</div>
+                    <div class="text-xs text-neutral-500">${p.description || ''}</div>
+                </div>
+                <div class="flex items-center gap-2">
+                    ${p.default_printer ? '<span class="text-xs px-2 py-0.5 bg-primary-100 dark:bg-primary-800 text-primary-700 dark:text-primary-200 rounded">Default</span>' : ''}
+                    <button class="px-2 py-1 border border-danger-600 text-danger-700 dark:text-danger-200 rounded text-xs hover:bg-danger-50 dark:hover:bg-danger-900/20" onclick="deletePrinter(event, '${encodeURIComponent(p.printer_name)}')" title="Delete">
+                        <i class="fas fa-trash"></i>
+                    </button>
+                </div>
+            </div>
+        </li>
+    `).join('');
+}
+
+async function selectPrinter(encodedName) {
+    const name = decodeURIComponent(encodedName);
+    try {
+        const response = await window.http.wrappedFetch('/api/admin/printers/' + encodeURIComponent(name));
+        const p = await response.json();
+        fillPrinterForm(p);
+    } catch (err) {
+        console.error('Failed to fetch printer:', err);
+        if (window.alerts) window.alerts.error('Failed to load printer details');
+    }
+}
+
+function fillPrinterForm(p) {
+    const set = (id, val) => { const el = document.getElementById(id); if (el) el.value = val == null ? '' : val; };
+    const setb = (id, val) => { const el = document.getElementById(id); if (el) el.checked = !!val; };
+    set('printer_name', p.printer_name || '');
+    set('description', p.description);
+    set('driver', p.driver);
+    set('port', p.port);
+    set('number', p.number);
+    setb('default_printer', p.default_printer);
+    set('page_break', p.page_break);
+    set('setup_st', p.setup_st);
+    set('reset_st', p.reset_st);
+    set('b_bold', p.b_bold);
+    set('e_bold', p.e_bold);
+    set('b_underline', p.b_underline);
+    set('e_underline', p.e_underline);
+    setb('phone_book', p.phone_book);
+    setb('rolodex_info', p.rolodex_info);
+    setb('envelope', p.envelope);
+    setb('file_cabinet', p.file_cabinet);
+    setb('accounts', p.accounts);
+    setb('statements', p.statements);
+    setb('calendar', p.calendar);
+}
+
+function showCreatePrinterForm() {
+    clearPrinterForm();
+    const nameEl = document.getElementById('printer_name');
+    if (nameEl) nameEl.focus();
+}
+
+function clearPrinterForm() {
+    fillPrinterForm({});
+}
+
+async function savePrinter(event) {
+    event.preventDefault();
+    const payload = {
+        printer_name: document.getElementById('printer_name').value,
+        description: document.getElementById('description').value,
+        driver: document.getElementById('driver').value,
+        port: document.getElementById('port').value,
+        number: document.getElementById('number').value ? parseInt(document.getElementById('number').value, 10) : null,
+        default_printer: document.getElementById('default_printer').checked,
+        page_break: document.getElementById('page_break').value,
+        setup_st: document.getElementById('setup_st').value,
+        reset_st: document.getElementById('reset_st').value,
+        b_bold: document.getElementById('b_bold').value,
+        e_bold: document.getElementById('e_bold').value,
+        b_underline: document.getElementById('b_underline').value,
+        e_underline: document.getElementById('e_underline').value,
+        phone_book: document.getElementById('phone_book').checked,
+        rolodex_info: document.getElementById('rolodex_info').checked,
+        envelope: document.getElementById('envelope').checked,
+        file_cabinet: document.getElementById('file_cabinet').checked,
+        accounts: document.getElementById('accounts').checked,
+        statements: document.getElementById('statements').checked,
+        calendar: document.getElementById('calendar').checked,
+    };
+    try {
+        const existsResp = await window.http.wrappedFetch('/api/admin/printers/' + encodeURIComponent(payload.printer_name));
+        if (existsResp.ok) {
+            // update
+            const resp = await window.http.wrappedFetch('/api/admin/printers/' + encodeURIComponent(payload.printer_name), {
+                method: 'PUT',
+                body: JSON.stringify(payload),
+            });
+            if (!resp.ok) throw await window.http.toError(resp, 'Failed to update printer');
+        } else {
+            // create
+            const resp = await window.http.wrappedFetch('/api/admin/printers', {
+                method: 'POST',
+                body: JSON.stringify(payload),
+            });
+            if (!resp.ok) throw await window.http.toError(resp, 'Failed to create printer');
+        }
+        if (window.alerts) window.alerts.success('Printer saved');
+        await loadPrinters();
+    } catch (err) {
+        console.error(err);
+        if (window.alerts) window.alerts.error(window.http.formatAlert(err, 'Printer save failed'));
+    }
+    return false;
+}
+
+async function deletePrinter(evt, encodedName) {
+    evt.stopPropagation();
+    const name = decodeURIComponent(encodedName);
+    if (!confirm(`Delete printer "${name}"?`)) return;
+    try {
+        const resp = await window.http.wrappedFetch('/api/admin/printers/' + encodeURIComponent(name), { method: 'DELETE' });
+        if (!resp.ok) throw await window.http.toError(resp, 'Failed to delete printer');
+        if (window.alerts) window.alerts.success('Printer deleted');
+        await loadPrinters();
+        clearPrinterForm();
+    } catch (err) {
+        console.error(err);
+        if (window.alerts) window.alerts.error(window.http.formatAlert(err, 'Delete failed'));
+    }
+}
+
 async function vacuumDatabase() {
     if (!confirm('This will optimize the database. Continue?')) return;

@@ -2576,6 +2838,17 @@ function displayAdminImportResults(result) {
         html += '</div>';
     }
+
+    // Add summary for printers
+    if (result.file_type === 'PRINTERS.csv') {
+        html += `
+            <div class="mt-2 p-2 bg-neutral-50 dark:bg-neutral-800/50 rounded border border-neutral-200 dark:border-neutral-700 text-sm">
+                <strong>Printers:</strong> ${result.created_count || 0} created, ${result.updated_count || 0} updated
+            </div>
+        `;
+        // Auto-refresh printers tab list
+        try { loadPrinters(); } catch (_) {}
+    }

     container.innerHTML = html;
     panel.style.display = 'block';
 }
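Since savePrinter decides between POST and PUT by first fetching the printer by name, the admin endpoints above behave as a name-keyed upsert. A quick console exercise of the same client wrapper; the payload is trimmed to a few fields for illustration:

// Console sketch using only the endpoints and helpers shown above; field list abbreviated.
const payload = { printer_name: 'Laser-1', description: 'Front office', default_printer: true };
const resp = await window.http.wrappedFetch('/api/admin/printers', {
  method: 'POST',
  body: JSON.stringify(payload),
});
if (!resp.ok) throw await window.http.toError(resp, 'create failed');
console.log(await (await window.http.wrappedFetch('/api/admin/printers')).json());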
@@ -230,13 +230,16 @@
 <button type="submit" class="w-full px-4 py-2 bg-primary-600 text-white hover:bg-primary-700 rounded-lg transition-colors flex items-center justify-center gap-2">
     <i class="fa-solid fa-magnifying-glass"></i> Search
 </button>
-<div class="grid grid-cols-2 gap-2">
+<div class="grid grid-cols-3 gap-2">
     <button type="button" class="w-full px-4 py-2 text-neutral-700 dark:text-neutral-300 bg-neutral-200 dark:bg-neutral-700 hover:bg-neutral-300 dark:hover:bg-neutral-600 rounded-lg transition-colors flex items-center justify-center gap-2" id="saveSearchBtn">
         <i class="fa-solid fa-bookmark"></i> Save Search
     </button>
     <button type="button" class="w-full px-4 py-2 text-neutral-700 dark:text-neutral-300 bg-neutral-200 dark:bg-neutral-700 hover:bg-neutral-300 dark:hover:bg-neutral-600 rounded-lg transition-colors flex items-center justify-center gap-2" id="resetSearchBtn">
         <i class="fa-solid fa-rotate-right"></i> Reset
     </button>
+    <button type="button" class="w-full px-4 py-2 text-neutral-700 dark:text-neutral-300 bg-neutral-200 dark:bg-neutral-700 hover:bg-neutral-300 dark:hover:bg-neutral-600 rounded-lg transition-colors flex items-center justify-center gap-2" id="restoreLastBtn">
+        <i class="fa-solid fa-clock-rotate-left"></i> Restore Last
+    </button>
 </div>
 </div>
 </form>
@@ -412,10 +415,9 @@ document.addEventListener('DOMContentLoaded', function() {
|
|||||||
setupEventHandlers();
|
setupEventHandlers();
|
||||||
setupKeyboardShortcuts();
|
setupKeyboardShortcuts();
|
||||||
|
|
||||||
// Check for URL parameters to auto-load search
|
// Apply URL parameters to form and auto-perform search if any present
|
||||||
const urlParams = new URLSearchParams(window.location.search);
|
const didApplyFromUrl = applyCriteriaFromUrl();
|
||||||
if (urlParams.get('q')) {
|
if (didApplyFromUrl) {
|
||||||
document.getElementById('searchQuery').value = urlParams.get('q');
|
|
||||||
performSearch();
|
performSearch();
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
@@ -541,6 +543,8 @@ function setupEventHandlers() {
|
|||||||
document.getElementById('confirmSaveSearch').addEventListener('click', saveCurrentSearch);
|
document.getElementById('confirmSaveSearch').addEventListener('click', saveCurrentSearch);
|
||||||
document.getElementById('savedSearchBtn').addEventListener('click', loadSavedSearches);
|
document.getElementById('savedSearchBtn').addEventListener('click', loadSavedSearches);
|
||||||
document.getElementById('clearAllBtn').addEventListener('click', clearAll);
|
document.getElementById('clearAllBtn').addEventListener('click', clearAll);
|
||||||
|
const restoreBtn = document.getElementById('restoreLastBtn');
|
||||||
|
if (restoreBtn) restoreBtn.addEventListener('click', restoreLastSearch);
|
||||||
|
|
||||||
// Sort change handlers
|
// Sort change handlers
|
||||||
document.getElementById('sortBy').addEventListener('change', () => {
|
document.getElementById('sortBy').addEventListener('change', () => {
|
||||||
@@ -553,6 +557,19 @@ function setupEventHandlers() {
|
|||||||
performSearch();
|
performSearch();
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
|
// Facet chip click handler (event delegation)
|
||||||
|
const facetsContainer = document.getElementById('facetsContainer');
|
||||||
|
facetsContainer.addEventListener('click', function(e) {
|
||||||
|
const chip = e.target.closest('.facet-chip');
|
||||||
|
if (chip && facetsContainer.contains(chip)) {
|
||||||
|
const facet = chip.getAttribute('data-facet');
|
||||||
|
const value = chip.getAttribute('data-value');
|
||||||
|
if (applyFacetFilter(facet, value)) {
|
||||||
|
performSearch(0);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
function setupKeyboardShortcuts() {
|
function setupKeyboardShortcuts() {
|
||||||
@@ -569,8 +586,14 @@ function setupKeyboardShortcuts() {
|
|||||||
performSearch();
|
performSearch();
|
||||||
}
|
}
|
||||||
|
|
||||||
// Escape to clear search
|
// Escape: hide suggestions if open; otherwise clear search
|
||||||
if (e.key === 'Escape' && document.activeElement.id === 'searchQuery') {
|
if (e.key === 'Escape' && document.activeElement.id === 'searchQuery') {
|
||||||
|
const dropdown = document.getElementById('searchSuggestions');
|
||||||
|
if (dropdown && !dropdown.classList.contains('hidden')) {
|
||||||
|
e.preventDefault();
|
||||||
|
dropdown.classList.add('hidden');
|
||||||
|
return;
|
||||||
|
}
|
||||||
clearAll();
|
clearAll();
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
@@ -631,6 +654,10 @@ async function performSearch(offset = 0) {
|
|||||||
const criteria = buildSearchCriteria();
|
const criteria = buildSearchCriteria();
|
||||||
criteria.offset = offset;
|
criteria.offset = offset;
|
||||||
currentSearchCriteria = criteria;
|
currentSearchCriteria = criteria;
|
||||||
|
// Sync URL with current criteria for shareable searches
|
||||||
|
syncUrlWithCriteria(criteria);
|
||||||
|
// Save last criteria best-effort
|
||||||
|
saveLastCriteria(criteria);
|
||||||
|
|
||||||
try {
|
try {
|
||||||
const response = await window.http.wrappedFetch('/api/search/advanced', {
|
const response = await window.http.wrappedFetch('/api/search/advanced', {
|
||||||
@@ -651,6 +678,59 @@ async function performSearch(offset = 0) {
     }
 }

+async function saveLastCriteria(criteria) {
+    try {
+        await window.http.wrappedFetch('/api/search/last_criteria', {
+            method: 'POST',
+            body: JSON.stringify(criteria)
+        });
+    } catch (e) { /* ignore */ }
+}
+
+async function restoreLastSearch() {
+    try {
+        const resp = await window.http.wrappedFetch('/api/search/last_criteria');
+        if (!resp.ok) throw new Error('failed');
+        const saved = await resp.json();
+        if (!saved || Object.keys(saved).length === 0) {
+            showAlert('No previous search found', 'info');
+            return;
+        }
+        // Populate form from saved criteria
+        document.getElementById('searchQuery').value = saved.query || '';
+        document.getElementById('exactPhrase').checked = !!saved.exact_phrase;
+        document.getElementById('caseSensitive').checked = !!saved.case_sensitive;
+        document.getElementById('wholeWords').checked = !!saved.whole_words;
+        if (Array.isArray(saved.search_types) && saved.search_types.length) {
+            document.querySelectorAll('.search-type').forEach(cb => cb.checked = saved.search_types.includes(cb.value));
+        }
+        if (saved.date_field) document.getElementById('dateField').value = saved.date_field;
+        if (saved.date_from) document.getElementById('dateFrom').value = saved.date_from;
+        if (saved.date_to) document.getElementById('dateTo').value = saved.date_to;
+        if (saved.amount_field) document.getElementById('amountField').value = saved.amount_field;
+        if (saved.amount_min != null) document.getElementById('amountMin').value = saved.amount_min;
+        if (saved.amount_max != null) document.getElementById('amountMax').value = saved.amount_max;
+        const setMulti = (id, values) => {
+            if (!values || !values.length) return;
+            const select = document.getElementById(id);
+            if (!select) return;
+            const set = new Set(values.map(String));
+            Array.from(select.options).forEach(o => { o.selected = set.has(String(o.value)); });
+        };
+        setMulti('fileTypes', saved.file_types);
+        setMulti('fileStatuses', saved.file_statuses);
+        setMulti('employees', saved.employees);
+        setMulti('transactionTypes', saved.transaction_types);
+        setMulti('states', saved.states);
+        document.getElementById('activeOnly').checked = saved.active_only !== false;
+        document.getElementById('hasBalance').checked = !!saved.has_balance;
+        document.getElementById('isBilled').checked = !!saved.is_billed;
+        performSearch(0);
+    } catch (e) {
+        showAlert('Could not restore last search', 'warning');
+    }
+}
+
 function buildSearchCriteria() {
     const searchTypes = [];
     document.querySelectorAll('.search-type:checked').forEach(checkbox => {
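The two client helpers above assume a small GET/POST pair at `/api/search/last_criteria`. A minimal FastAPI sketch of that pair, with an in-memory store keyed by user id standing in for whatever the app actually persists; only `get_current_user` is taken from the codebase, the rest is illustrative:

```python
# Sketch only: the storage and endpoint bodies are assumptions, not app code.
from fastapi import APIRouter, Depends

from app.auth.security import get_current_user

router = APIRouter(prefix="/api/search")
_last_criteria: dict = {}  # user id -> last saved search criteria


@router.post("/last_criteria")
async def save_last_criteria(criteria: dict, user=Depends(get_current_user)):
    _last_criteria[user.id] = criteria
    return {"ok": True}


@router.get("/last_criteria")
async def get_last_criteria(user=Depends(get_current_user)):
    # An empty object tells the client there is no previous search to restore.
    return _last_criteria.get(user.id, {})
```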
@@ -833,9 +913,11 @@ function displayFacets(facets) {
         facetsHTML += `
             <div class="facet-group mb-2">
                 <strong>${facetName.replace('_', ' ').toUpperCase()}:</strong>
-                ${Object.entries(facetData).map(([value, count]) =>
-                    `<span class="inline-block px-2 py-0.5 text-xs rounded bg-neutral-200 text-neutral-700 ml-1">${value} (${count})</span>`
-                ).join('')}
+                ${Object.entries(facetData).map(([value, count]) => {
+                    const isClickable = ['state','transaction_type','file_type','status','employee'].includes(facetName);
+                    const cls = isClickable ? 'facet-chip cursor-pointer hover:bg-neutral-300' : '';
+                    return `<span class="inline-block px-2 py-0.5 text-xs rounded bg-neutral-200 text-neutral-700 ml-1 ${cls}" data-facet="${facetName}" data-value="${String(value).replace(/"/g,'&quot;')}">${value} (${count})</span>`
+                }).join('')}
             </div>
         `;
     }
@@ -844,6 +926,106 @@ function displayFacets(facets) {
     container.innerHTML = facetsHTML;
 }

+// Apply a clicked facet value to the appropriate filter control
+function applyFacetFilter(facetName, value) {
+    const map = {
+        'state': 'states',
+        'transaction_type': 'transactionTypes',
+        'file_type': 'fileTypes',
+        'status': 'fileStatuses',
+        'employee': 'employees'
+    };
+    const selectId = map[facetName];
+    if (!selectId) return false;
+    const select = document.getElementById(selectId);
+    if (!select) return false;
+    const option = Array.from(select.options).find(o => String(o.value) === String(value));
+    if (!option) return false;
+    option.selected = true;
+    return true;
+}
+
+// Sync URL query params with current criteria (shareable/bookmarkable)
+function syncUrlWithCriteria(criteria) {
+    const params = new URLSearchParams();
+    if (criteria.query) params.set('q', criteria.query);
+    if (Array.isArray(criteria.search_types) && criteria.search_types.length) params.set('types', criteria.search_types.join(','));
+    if (criteria.exact_phrase) params.set('exact_phrase', '1');
+    if (criteria.case_sensitive) params.set('case_sensitive', '1');
+    if (criteria.whole_words) params.set('whole_words', '1');
+    if (criteria.sort_by) params.set('sort_by', criteria.sort_by);
+    if (criteria.sort_order) params.set('sort_order', criteria.sort_order);
+    if (criteria.date_field) params.set('date_field', criteria.date_field);
+    if (criteria.date_from) params.set('date_from', criteria.date_from);
+    if (criteria.date_to) params.set('date_to', criteria.date_to);
+    if (criteria.amount_field) params.set('amount_field', criteria.amount_field);
+    if (criteria.amount_min != null) params.set('amount_min', String(criteria.amount_min));
+    if (criteria.amount_max != null) params.set('amount_max', String(criteria.amount_max));
+    if (Array.isArray(criteria.file_types) && criteria.file_types.length) params.set('file_types', criteria.file_types.join(','));
+    if (Array.isArray(criteria.file_statuses) && criteria.file_statuses.length) params.set('file_statuses', criteria.file_statuses.join(','));
+    if (Array.isArray(criteria.employees) && criteria.employees.length) params.set('employees', criteria.employees.join(','));
+    if (Array.isArray(criteria.transaction_types) && criteria.transaction_types.length) params.set('transaction_types', criteria.transaction_types.join(','));
+    if (Array.isArray(criteria.states) && criteria.states.length) params.set('states', criteria.states.join(','));
+    if (criteria.active_only === false) params.set('active_only', '0');
+    if (criteria.has_balance === true) params.set('has_balance', '1');
+    if (criteria.is_billed === true) params.set('is_billed', '1');
+    const page = Math.floor((criteria.offset || 0) / (criteria.limit || 50)) + 1;
+    if (page > 1) params.set('page', String(page));
+    const newUrl = `${window.location.pathname}?${params.toString()}`;
+    window.history.replaceState({}, '', newUrl);
+}
+
+function applyCriteriaFromUrl() {
+    const urlParams = new URLSearchParams(window.location.search);
+    let applied = false;
+    const getBool = (key) => {
+        const v = urlParams.get(key);
+        return v === '1' || v === 'true';
+    };
+    const getCsv = (key) => {
+        const v = urlParams.get(key);
+        return v ? v.split(',').filter(Boolean) : [];
+    };
+    const q = urlParams.get('q');
+    if (q) {
+        document.getElementById('searchQuery').value = q;
+        applied = true;
+    }
+    const types = getCsv('types').length ? getCsv('types') : getCsv('search_types');
+    if (types.length) {
+        document.querySelectorAll('.search-type').forEach(cb => cb.checked = types.includes(cb.value));
+        applied = true;
+    }
+    if (urlParams.has('exact_phrase')) { document.getElementById('exactPhrase').checked = getBool('exact_phrase'); applied = true; }
+    if (urlParams.has('case_sensitive')) { document.getElementById('caseSensitive').checked = getBool('case_sensitive'); applied = true; }
+    if (urlParams.has('whole_words')) { document.getElementById('wholeWords').checked = getBool('whole_words'); applied = true; }
+    if (urlParams.has('sort_by')) { document.getElementById('sortBy').value = urlParams.get('sort_by'); applied = true; }
+    if (urlParams.has('sort_order')) { document.getElementById('sortOrder').value = urlParams.get('sort_order'); applied = true; }
+    if (urlParams.has('date_field')) { document.getElementById('dateField').value = urlParams.get('date_field'); applied = true; }
+    if (urlParams.has('date_from')) { document.getElementById('dateFrom').value = urlParams.get('date_from'); applied = true; }
+    if (urlParams.has('date_to')) { document.getElementById('dateTo').value = urlParams.get('date_to'); applied = true; }
+    if (urlParams.has('amount_field')) { document.getElementById('amountField').value = urlParams.get('amount_field'); applied = true; }
+    if (urlParams.has('amount_min')) { document.getElementById('amountMin').value = urlParams.get('amount_min'); applied = true; }
+    if (urlParams.has('amount_max')) { document.getElementById('amountMax').value = urlParams.get('amount_max'); applied = true; }
+    const setMulti = (id, values) => {
+        if (!values || !values.length) return false;
+        const select = document.getElementById(id);
+        if (!select) return false;
+        const set = new Set(values.map(String));
+        Array.from(select.options).forEach(o => { o.selected = set.has(String(o.value)); });
+        return true;
+    };
+    if (setMulti('fileTypes', getCsv('file_types'))) applied = true;
+    if (setMulti('fileStatuses', getCsv('file_statuses'))) applied = true;
+    if (setMulti('employees', getCsv('employees'))) applied = true;
+    if (setMulti('transactionTypes', getCsv('transaction_types'))) applied = true;
+    if (setMulti('states', getCsv('states'))) applied = true;
+    if (urlParams.has('active_only')) { document.getElementById('activeOnly').checked = getBool('active_only'); applied = true; }
+    if (urlParams.has('has_balance')) { document.getElementById('hasBalance').checked = getBool('has_balance'); applied = true; }
+    if (urlParams.has('is_billed')) { document.getElementById('isBilled').checked = getBool('is_billed'); applied = true; }
+    return applied;
+}
+
 function displayPagination(pageInfo) {
     const paginationContainer = document.getElementById('searchPagination');
     paginationContainer.innerHTML = '';
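For a concrete sense of what `syncUrlWithCriteria` emits: only non-default criteria become query params, and the offset is folded into a 1-based `page`. A stdlib-only Python illustration (the `/search` path is assumed):

```python
from urllib.parse import urlencode

criteria = {"query": "smith", "search_types": ["customer", "file"],
            "whole_words": True, "offset": 100, "limit": 50}

params = {"q": criteria["query"], "types": ",".join(criteria["search_types"])}
if criteria["whole_words"]:
    params["whole_words"] = "1"
page = criteria["offset"] // criteria["limit"] + 1  # offset 100, limit 50 -> page 3
if page > 1:
    params["page"] = str(page)

print("/search?" + urlencode(params))
# /search?q=smith&types=customer%2Cfile&whole_words=1&page=3
```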
4 test-results/.last-run.json Normal file
@@ -0,0 +1,4 @@
+{
+  "status": "passed",
+  "failedTests": []
+}
@@ -1,7 +1,7 @@
 """Tests for Customers API using FastAPI TestClient (no live server required)."""
 import os
 import uuid
-from datetime import datetime
+from datetime import datetime, timezone

 import pytest
 from fastapi.testclient import TestClient
@@ -88,7 +88,7 @@ def test_customers_crud_and_queries(client: TestClient):
     # Update
     resp = client.put(
         f"/api/customers/{unique_id}",
-        json={"memo": f"Updated at {datetime.utcnow().isoformat()}"},
+        json={"memo": f"Updated at {datetime.now(timezone.utc).isoformat()}"},
     )
     assert resp.status_code == 200
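The `utcnow()` swap above matters beyond style: `datetime.utcnow()` returns a naive datetime and is deprecated since Python 3.12, while `datetime.now(timezone.utc)` is timezone-aware and carries its offset into `isoformat()`:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()            # e.g. 2025-01-01T12:00:00 (no tzinfo)
aware = datetime.now(timezone.utc)   # e.g. 2025-01-01T12:00:00+00:00

assert naive.tzinfo is None
assert aware.tzinfo is not None
```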
@@ -93,9 +93,78 @@ def test_lookup_crud_file_types_and_statuses_and_audit(client_admin: TestClient)
     assert resp.status_code == 200

     # Verify audit logs endpoint is accessible and returns structure
-    resp = client_admin.get("/api/admin/audit/logs")
+    resp = client_admin.get("/api/admin/audit/logs", params={"include_total": 1})
     assert resp.status_code == 200
     body = resp.json()
-    assert set(body.keys()) == {"total", "logs"}
+    assert set(body.keys()) == {"total", "items"}
+    assert isinstance(body["items"], list)
+
+
+def test_printer_setup_crud(client_admin: TestClient):
+    # Create a printer
+    resp = client_admin.post(
+        "/api/admin/printers",
+        json={
+            "printer_name": "TestPrinter",
+            "description": "Test",
+            "driver": "Generic",
+            "port": "LPT1",
+            "default_printer": True,
+            "page_break": "\f",
+            "setup_st": "^[[0m",
+            "reset_st": "^[[0m",
+            "b_bold": "^[[1m",
+            "e_bold": "^[[22m",
+            "b_underline": "^[[4m",
+            "e_underline": "^[[24m",
+            "phone_book": True,
+            "rolodex_info": False,
+            "envelope": True,
+            "file_cabinet": True,
+            "accounts": False,
+            "statements": True,
+            "calendar": False,
+        },
+    )
+    assert resp.status_code == 200
+    printer = resp.json()
+    assert printer["printer_name"] == "TestPrinter"
+    assert printer["default_printer"] is True
+
+    # Update printer flags
+    resp = client_admin.put(
+        "/api/admin/printers/TestPrinter",
+        json={
+            "default_printer": False,
+            "statements": False,
+            "calendar": True,
+        },
+    )
+    assert resp.status_code == 200
+    updated = resp.json()
+    assert updated["default_printer"] is False
+    assert updated["statements"] is False
+    assert updated["calendar"] is True
+
+    # Get printer by name
+    resp = client_admin.get("/api/admin/printers/TestPrinter")
+    assert resp.status_code == 200
+    fetched = resp.json()
+    assert fetched["printer_name"] == "TestPrinter"
+
+    # List printers includes our printer
+    resp = client_admin.get("/api/admin/printers")
+    assert resp.status_code == 200
+    names = [p["printer_name"] for p in resp.json()]
+    assert "TestPrinter" in names
+
+    # Delete the printer
+    resp = client_admin.delete("/api/admin/printers/TestPrinter")
+    assert resp.status_code == 200
+    # Verify it's gone
+    resp = client_admin.get("/api/admin/printers")
+    assert resp.status_code == 200
+    names = [p["printer_name"] for p in resp.json()]
+    assert "TestPrinter" not in names
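For reference, the create payload exercised above as a Pydantic sketch. Field names come straight from the test; types and defaults are inferred, so the app's real schema may differ:

```python
from pydantic import BaseModel


class PrinterSetupIn(BaseModel):
    # Required identity plus optional driver/escape-sequence fields.
    printer_name: str
    description: str | None = None
    driver: str | None = None
    port: str | None = None
    default_printer: bool = False
    page_break: str | None = None
    setup_st: str | None = None
    reset_st: str | None = None
    b_bold: str | None = None
    e_bold: str | None = None
    b_underline: str | None = None
    e_underline: str | None = None
    # Feature flags for which modules may print to this device.
    phone_book: bool = False
    rolodex_info: bool = False
    envelope: bool = False
    file_cabinet: bool = False
    accounts: bool = False
    statements: bool = False
    calendar: bool = False
```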
@@ -1,7 +1,7 @@
 import os
 import sys
 from pathlib import Path
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone

 import pytest
 from jose import jwt
@@ -43,8 +43,8 @@ def test_jwt_rotation_decode(monkeypatch):
     # Sign token with old key
     payload = {
         "sub": "tester",
-        "exp": datetime.utcnow() + timedelta(minutes=5),
-        "iat": datetime.utcnow(),
+        "exp": datetime.now(timezone.utc) + timedelta(minutes=5),
+        "iat": datetime.now(timezone.utc),
         "type": "access",
     }
     token = jwt.encode(payload, old_key, algorithm=settings.algorithm)
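The hunk above signs the token with the *old* key, so the decode path must tolerate rotation. One plausible shape for that, sketched with python-jose (the helper and its key parameters are assumptions; the app's actual code may differ):

```python
from jose import JWTError, jwt


def decode_with_rotation(token: str, current_key: str, old_key: str, algorithm: str) -> dict:
    # Try the current key first, then fall back to the previous one.
    for key in (current_key, old_key):
        try:
            return jwt.decode(token, key, algorithms=[algorithm])
        except JWTError:
            continue
    raise JWTError("token not valid under any configured key")
```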
80 tests/test_customers_search_props.py Normal file
@@ -0,0 +1,80 @@
+import pytest
+
+try:
+    from hypothesis import given, strategies as st, settings
+except Exception:  # pragma: no cover
+    pytest.skip("Hypothesis not installed; skipping property-based tests.", allow_module_level=True)
+
+from app.services.customers_search import apply_customer_filters, apply_customer_sorting
+
+
+class FakeQuery:
+    def __init__(self):
+        self.filters = []
+        self.orderings = []
+
+    def filter(self, *args):
+        self.filters.extend(args)
+        return self
+
+    def order_by(self, *args):
+        self.orderings.extend(args)
+        return self
+
+
+def _expected_filter_count(search, group, groups, state, states):
+    s = (search or "").strip()
+    search_filter = 1 if s else 0
+
+    eff_groups = [g for g in (groups or []) if g] or ([group] if group else [])
+    groups_filter = 1 if eff_groups else 0
+
+    eff_states = [s for s in (states or []) if s] or ([state] if state else [])
+    states_filter = 1 if eff_states else 0
+
+    return search_filter + groups_filter + states_filter
+
+
+@settings(deadline=None, max_examples=100)
+@given(
+    search=st.text(min_size=0, max_size=200),
+    group=st.one_of(st.none(), st.text(min_size=0, max_size=20)),
+    state=st.one_of(st.none(), st.text(min_size=0, max_size=10)),
+    groups=st.one_of(
+        st.none(),
+        st.lists(st.one_of(st.none(), st.text(min_size=0, max_size=10)), max_size=5),
+    ),
+    states=st.one_of(
+        st.none(),
+        st.lists(st.one_of(st.none(), st.text(min_size=0, max_size=10)), max_size=5),
+    ),
+)
+def test_apply_customer_filters_property(search, group, groups, state, states):
+    q = FakeQuery()
+    q = apply_customer_filters(q, search=search, group=group, state=state, groups=groups, states=states)
+
+    assert len(q.filters) == _expected_filter_count(search, group, groups, state, states)
+
+
+@settings(deadline=None, max_examples=100)
+@given(
+    sort_by=st.one_of(
+        st.none(),
+        st.sampled_from(["id", "name", "city", "email", "ID", "NAME", "CITY", "EMAIL"]),
+        st.text(min_size=0, max_size=15),
+    ),
+    sort_dir=st.one_of(
+        st.none(),
+        st.sampled_from(["asc", "ASC", "desc", "DESC", ""]),
+        st.text(min_size=0, max_size=10),
+    ),
+)
+def test_apply_customer_sorting_property(sort_by, sort_dir):
+    q = FakeQuery()
+    q = apply_customer_sorting(q, sort_by=sort_by, sort_dir=sort_dir)
+
+    sb = (sort_by or "id").lower()
+    expected_order_cols = 2 if sb == "name" else 1
+    assert len(q.orderings) == expected_order_cols
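These properties pin down `apply_customer_filters` to one combined clause per active dimension (search, groups, states), with list arguments taking priority over their scalar counterparts. A sketch consistent with that contract; the `Customer` model here is a minimal stand-in and the real helper's body may differ:

```python
from sqlalchemy import Column, String, or_
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Customer(Base):  # minimal stand-in for the real model
    __tablename__ = "customers"
    id = Column(String, primary_key=True)
    first = Column(String)
    last = Column(String)
    group = Column(String)
    abrev = Column(String)


def apply_customer_filters_sketch(query, search=None, group=None, state=None, groups=None, states=None):
    s = (search or "").strip()
    if s:
        # Name search is a single combined or_ clause -> one filter.
        query = query.filter(or_(Customer.last.ilike(f"%{s}%"), Customer.first.ilike(f"%{s}%")))
    eff_groups = [g for g in (groups or []) if g] or ([group] if group else [])
    if eff_groups:
        query = query.filter(Customer.group.in_(eff_groups))
    eff_states = [x for x in (states or []) if x] or ([state] if state else [])
    if eff_states:
        query = query.filter(Customer.abrev.in_(eff_states))
    return query
```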
135 tests/test_customers_search_utils.py Normal file
@@ -0,0 +1,135 @@
+from types import SimpleNamespace
+from sqlalchemy.dialects import sqlite
+
+from app.services.customers_search import (
+    apply_customer_filters,
+    apply_customer_sorting,
+    prepare_customer_csv_rows,
+)
+
+
+class FakeQuery:
+    """Lightweight stand-in for SQLAlchemy Query that captures filters and orderings.
+
+    We only need to verify that our helper functions add the expected number of
+    filter/order_by clauses and roughly target the expected columns. We do not
+    execute any SQL.
+    """
+
+    def __init__(self):
+        self.filters = []
+        self.orderings = []
+
+    def filter(self, *args):
+        self.filters.extend(args)
+        return self
+
+    def order_by(self, *args):
+        self.orderings.extend(args)
+        return self
+
+
+def compile_sql(expr):
+    """Compile a SQLAlchemy expression to a SQLite SQL string for simple assertions."""
+    try:
+        return str(expr.compile(dialect=sqlite.dialect()))
+    except Exception:
+        return str(expr)
+
+
+def test_apply_customer_filters_search_and_comma_pattern():
+    q = FakeQuery()
+    q = apply_customer_filters(q, search="Smith, John", group=None, state=None, groups=None, states=None)
+    # One filter clause added (combined search filter)
+    assert len(q.filters) == 1
+    sql = compile_sql(q.filters[0])
+    assert "last" in sql and "first" in sql
+
+
+def test_apply_customer_filters_groups_and_states():
+    q = FakeQuery()
+    q = apply_customer_filters(q, search=None, group="A", state="NY", groups=None, states=None)
+    # Two filter clauses added: group and state
+    assert len(q.filters) == 2
+    sql_group = compile_sql(q.filters[0])
+    sql_state = compile_sql(q.filters[1])
+    assert "group" in sql_group
+    assert "abrev" in sql_state or "state" in sql_state
+
+
+def test_apply_customer_filters_multi_groups_priority():
+    q = FakeQuery()
+    q = apply_customer_filters(q, search=None, group="A", state=None, groups=["X", "Y"], states=None)
+    # Only one filter (multi-groups) should be applied for groups
+    assert len(q.filters) == 1
+    assert "IN" in compile_sql(q.filters[0])
+
+
+def test_apply_customer_sorting_fields_and_direction():
+    # name sorting => two orderings
+    q1 = FakeQuery()
+    q1 = apply_customer_sorting(q1, sort_by="name", sort_dir="asc")
+    assert len(q1.orderings) == 2
+    assert "last" in compile_sql(q1.orderings[0])
+    assert "first" in compile_sql(q1.orderings[1])
+
+    # id sorting desc => one ordering and DESC direction in SQL
+    q2 = FakeQuery()
+    q2 = apply_customer_sorting(q2, sort_by="id", sort_dir="desc")
+    assert len(q2.orderings) == 1
+    assert "DESC" in compile_sql(q2.orderings[0]).upper()
+
+    # unknown field falls back to id
+    q3 = FakeQuery()
+    q3 = apply_customer_sorting(q3, sort_by="unknown", sort_dir="asc")
+    assert len(q3.orderings) == 1
+    assert "id" in compile_sql(q3.orderings[0]).lower()
+
+
+def test_prepare_customer_csv_rows_default_and_selected_fields():
+    cust1 = SimpleNamespace(
+        id="001",
+        first="John",
+        last="Smith",
+        group="G1",
+        city="New York",
+        abrev="NY",
+        email="john@example.com",
+        phone_numbers=[SimpleNamespace(phone="123-456-7890")],
+    )
+    cust2 = SimpleNamespace(
+        id="002",
+        first="Jane",
+        last="Doe",
+        group="G2",
+        city="Boston",
+        abrev="MA",
+        email="jane@example.com",
+        phone_numbers=[],
+    )
+
+    # Default fields
+    header, rows = prepare_customer_csv_rows([cust1, cust2], fields=None)
+    assert header == [
+        "Customer ID",
+        "Name",
+        "Group",
+        "City",
+        "State",
+        "Primary Phone",
+        "Email",
+    ]
+    assert rows[0][0] == "001"
+    assert rows[0][1] == "John Smith"
+    assert rows[0][2] == "G1"
+    assert rows[0][3] == "New York"
+    assert rows[0][4] == "NY"
+    assert rows[0][5] == "123-456-7890"
+    assert rows[0][6] == "john@example.com"
+
+    # Selected subset of fields
+    header_sel, rows_sel = prepare_customer_csv_rows([cust1], fields=["id", "name", "email"])  # any case ok
+    assert header_sel == ["Customer ID", "Name", "Email"]
+    assert rows_sel[0] == ["001", "John Smith", "john@example.com"]
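The CSV assertions above fully determine the observable behavior of `prepare_customer_csv_rows`. A sketch reconstructed from them (field keys matched case-insensitively, `name` joining first and last), not the service's actual code:

```python
# Hypothetical reconstruction; the real service may structure this differently.
FIELDS = {
    "id": ("Customer ID", lambda c: c.id),
    "name": ("Name", lambda c: f"{c.first} {c.last}".strip()),
    "group": ("Group", lambda c: c.group),
    "city": ("City", lambda c: c.city),
    "state": ("State", lambda c: c.abrev),
    "phone": ("Primary Phone", lambda c: c.phone_numbers[0].phone if c.phone_numbers else ""),
    "email": ("Email", lambda c: c.email),
}


def prepare_customer_csv_rows_sketch(customers, fields=None):
    # None means "all fields in default order"; keys are case-insensitive.
    keys = [f.lower() for f in fields] if fields else list(FIELDS)
    header = [FIELDS[k][0] for k in keys]
    rows = [[FIELDS[k][1](c) for k in keys] for c in customers]
    return header, rows
```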
82 tests/test_highlight_parity.py Normal file
@@ -0,0 +1,82 @@
+import json
+import shutil
+import subprocess
+import sys
+from pathlib import Path
+
+from app.api.search_highlight import build_query_tokens, highlight_text
+
+
+def _run_node_highlight(value: str, query: str):
+    """Invoke Node to run client highlight.js and return tokens and html.
+
+    Skips DOM and sanitizer loading by providing a minimal window with an
+    escape() function that mirrors server escaping behavior.
+    """
+    node_path = shutil.which("node")
+    if not node_path:
+        return None
+
+    repo_root = Path(__file__).resolve().parents[1]
+    highlight_js_path = repo_root / "static/js/highlight.js"
+    if not highlight_js_path.exists():
+        return None
+
+    payload = json.dumps({"value": value, "query": query})
+    script = f"""
+const fs = require('fs');
+global.window = {{}};
+// Provide escape that matches server: replace &, <, >, ", '
+window.htmlSanitizer = {{
+    escape: function(text) {{
+        const str = String(text == null ? '' : text);
+        return str
+            .replace(/&/g, '&amp;')
+            .replace(/</g, '&lt;')
+            .replace(/>/g, '&gt;')
+            .replace(/"/g, '&quot;')
+            .replace(/'/g, '&#x27;');
+    }}
+}};
+require('{highlight_js_path.as_posix()}');
+// With `node -e`, the first user-supplied argument lands in process.argv[1].
+const input = JSON.parse(process.argv[1]);
+const tokens = window.highlightUtils.buildTokens(input.query);
+const html = window.highlightUtils.highlight(input.value, tokens);
+process.stdout.write(JSON.stringify({{ tokens, html }}));
+"""
+    res = subprocess.run(
+        [node_path, "-e", script, payload],
+        cwd=str(repo_root),
+        capture_output=True,
+        text=True,
+    )
+    if res.returncode != 0:
+        return None
+    return json.loads(res.stdout)
+
+
+def test_highlight_parity_with_client_when_node_available():
+    """Compare tokens and highlighted HTML between server and client implementations.
+
+    This test is skipped when Node is unavailable.
+    """
+    samples = [
+        ("Hello John Smith", "john smith"),
+        ("<b>A&B</b> and C", "a b"),
+        ("Anna and Ann went", "ann anna"),
+        ("He said \"Hello\" & it's fine", "hello"),
+        ("Case 12345", "case 123"),
+    ]
+
+    for value, query in samples:
+        client = _run_node_highlight(value, query)
+        if client is None:
+            # Skip gracefully if Node not present or script failed
+            import pytest
+            pytest.skip("Node or client highlight not available")
+        server_tokens = build_query_tokens(query)
+        server_html = highlight_text(value, server_tokens)
+        assert client["tokens"] == server_tokens
+        assert client["html"] == server_html
158 tests/test_mortality.py Normal file
@@ -0,0 +1,158 @@
+import os
+import sys
+from pathlib import Path
+import pytest
+from fastapi.testclient import TestClient
+
+# Ensure required env vars for app import/config
+os.environ.setdefault("SECRET_KEY", "x" * 32)
+os.environ.setdefault("DATABASE_URL", "sqlite:////tmp/delphi_test.sqlite")
+
+# Ensure repository root on sys.path for direct test runs
+ROOT = Path(__file__).resolve().parents[1]
+if str(ROOT) not in sys.path:
+    sys.path.insert(0, str(ROOT))
+
+from app.main import app  # noqa: E402
+from app.auth.security import get_current_user  # noqa: E402
+from tests.helpers import assert_http_error  # noqa: E402
+from app.database.base import SessionLocal  # noqa: E402
+from app.models.pensions import LifeTable, NumberTable  # noqa: E402
+from app.services.mortality import (  # noqa: E402
+    get_life_values,
+    get_number_value,
+    InvalidCodeError,
+)
+
+
+@pytest.fixture(scope="module")
+def client():
+    # Override auth to bypass JWT for these tests
+    class _User:
+        def __init__(self):
+            self.id = "test"
+            self.username = "tester"
+            self.is_admin = True
+            self.is_active = True
+
+    app.dependency_overrides[get_current_user] = lambda: _User()
+
+    try:
+        yield TestClient(app)
+    finally:
+        app.dependency_overrides.pop(get_current_user, None)
+
+
+def _seed_life_and_number():
+    db = SessionLocal()
+    try:
+        # Seed a life table row for age 65
+        db.query(LifeTable).filter(LifeTable.age == 65).delete()
+        lt = LifeTable(
+            age=65,
+            le_wm=14.5,
+            na_wm=87000.0,
+            le_af=20.1,
+            na_af=92000.0,
+            le_ha=18.2,
+            na_ha=88000.0,
+        )
+        db.add(lt)
+
+        # Seed a number table row for month 305
+        db.query(NumberTable).filter(NumberTable.month == 305).delete()
+        nt = NumberTable(
+            month=305,
+            na_wm=80000.0,
+            na_af=90000.0,
+            na_ha=85000.0,
+        )
+        db.add(nt)
+
+        db.commit()
+    finally:
+        db.close()
+
+
+def test_service_helpers_success_invalid_and_not_found():
+    _seed_life_and_number()
+    db = SessionLocal()
+    try:
+        # Success cases
+        res = get_life_values(db, age=65, sex="M", race="W")
+        assert res and res["le"] == 14.5 and res["na"] == 87000.0
+        res = get_life_values(db, age=65, sex="F", race="A")
+        assert res and res["le"] == 20.1 and res["na"] == 92000.0
+        res = get_life_values(db, age=65, sex="A", race="H")
+        assert res and res["le"] == 18.2 and res["na"] == 88000.0
+
+        nres = get_number_value(db, month=305, sex="M", race="W")
+        assert nres and nres["na"] == 80000.0
+        nres = get_number_value(db, month=305, sex="F", race="A")
+        assert nres and nres["na"] == 90000.0
+        nres = get_number_value(db, month=305, sex="A", race="H")
+        assert nres and nres["na"] == 85000.0
+
+        # Invalid codes
+        with pytest.raises(InvalidCodeError):
+            get_life_values(db, age=65, sex="X", race="W")
+        with pytest.raises(InvalidCodeError):
+            get_number_value(db, month=305, sex="M", race="Z")
+
+        # Not found
+        assert get_life_values(db, age=9999, sex="M", race="W") is None
+        assert get_number_value(db, month=99999, sex="M", race="W") is None
+    finally:
+        db.close()
+
+
+def test_api_life_valid_invalid_not_found(client: TestClient):
+    _seed_life_and_number()
+
+    # Valid lookups
+    resp = client.get("/api/mortality/life/65", params={"sex": "M", "race": "W"})
+    assert resp.status_code == 200
+    data = resp.json()
+    assert data["le"] == 14.5 and data["na"] == 87000.0
+
+    resp = client.get("/api/mortality/life/65", params={"sex": "F", "race": "A"})
+    assert resp.status_code == 200
+    assert resp.json()["le"] == 20.1
+
+    # Invalid code -> 400 wrapped error
+    resp = client.get("/api/mortality/life/65", params={"sex": "X", "race": "W"})
+    assert_http_error(resp, 400, "Invalid sex code")
+
+    # Not found -> 404 wrapped error
+    resp = client.get("/api/mortality/life/9999", params={"sex": "M", "race": "W"})
+    assert_http_error(resp, 404, "Age not found")
+
+
+def test_api_number_valid_invalid_not_found(client: TestClient):
+    _seed_life_and_number()
+
+    # Valid lookup
+    resp = client.get("/api/mortality/number/305", params={"sex": "M", "race": "W"})
+    assert resp.status_code == 200
+    assert resp.json()["na"] == 80000.0
+
+    # Invalid code -> 400
+    resp = client.get("/api/mortality/number/305", params={"sex": "M", "race": "Z"})
+    assert_http_error(resp, 400, "Invalid race code")
+
+    # Not found -> 404
+    resp = client.get("/api/mortality/number/99999", params={"sex": "M", "race": "W"})
+    assert_http_error(resp, 404, "Month not found")
+
+
+def test_api_validation_negative_inputs(client: TestClient):
+    # Negative age -> 422 validation envelope
+    resp = client.get("/api/mortality/life/-1", params={"sex": "M", "race": "W"})
+    from tests.helpers import assert_validation_error
+    assert_validation_error(resp, "age")
+
+    # Negative month -> 422 validation envelope
+    resp = client.get("/api/mortality/number/-5", params={"sex": "F", "race": "A"})
+    assert_validation_error(resp, "month")
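The seeds above suggest how codes resolve to columns: the suffix is race initial plus sex initial, so `(sex="M", race="W")` reads `le_wm`/`na_wm`. A sketch of `get_life_values` under that assumption; the real service may differ:

```python
# Inferred from the column names exercised above; not the app's actual body.
from app.models.pensions import LifeTable
from app.services.mortality import InvalidCodeError

VALID_SEX = {"M", "F", "A"}
VALID_RACE = {"W", "A", "H"}


def get_life_values_sketch(db, age: int, sex: str, race: str):
    if sex not in VALID_SEX:
        raise InvalidCodeError("Invalid sex code")
    if race not in VALID_RACE:
        raise InvalidCodeError("Invalid race code")
    row = db.query(LifeTable).filter(LifeTable.age == age).first()
    if row is None:
        return None
    suffix = (race + sex).lower()  # "W" + "M" -> "wm"
    return {"le": getattr(row, f"le_{suffix}"), "na": getattr(row, f"na_{suffix}")}
```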
201 tests/test_pagination_shapes.py Normal file
@@ -0,0 +1,201 @@
+import os
+import uuid
+from datetime import date
+
+import pytest
+from fastapi.testclient import TestClient
+
+os.environ.setdefault("SECRET_KEY", "x" * 32)
+os.environ.setdefault("DATABASE_URL", "sqlite:////tmp/delphi_test.sqlite")
+
+from app.main import app  # noqa: E402
+from app.auth.security import get_current_user, get_admin_user  # noqa: E402
+
+
+class _User:
+    def __init__(self, admin: bool = False):
+        self.id = 1
+        self.username = "tester"
+        self.is_admin = admin
+        self.is_active = True
+
+
+@pytest.fixture()
+def client():
+    app.dependency_overrides[get_current_user] = lambda: _User(True)
+    app.dependency_overrides[get_admin_user] = lambda: _User(True)
+    try:
+        yield TestClient(app)
+    finally:
+        app.dependency_overrides.pop(get_current_user, None)
+        app.dependency_overrides.pop(get_admin_user, None)
+
+
+def _create_customer(client: TestClient) -> str:
+    cid = f"PGN-{uuid.uuid4().hex[:8]}"
+    resp = client.post("/api/customers/", json={"id": cid, "last": "Paginate", "email": f"{cid}@ex.com"})
+    assert resp.status_code == 200
+    return cid
+
+
+def test_files_include_total_shape(client: TestClient):
+    owner_id = _create_customer(client)
+    for _ in range(2):
+        fno = f"P-{uuid.uuid4().hex[:6]}"
+        payload = {
+            "file_no": fno,
+            "id": owner_id,
+            "regarding": "Pagination Test",
+            "empl_num": "E01",
+            "file_type": "CIVIL",
+            "opened": date.today().isoformat(),
+            "status": "ACTIVE",
+            "rate_per_hour": 100.0,
+        }
+        resp = client.post("/api/files/", json=payload)
+        assert resp.status_code == 200
+
+    resp = client.get("/api/files/", params={"include_total": True, "limit": 1})
+    assert resp.status_code == 200
+    body = resp.json()
+    assert set(body.keys()) == {"items", "total"}
+    assert isinstance(body["items"], list)
+    assert body["total"] >= len(body["items"]) >= 1
+
+
+def test_templates_include_total_shape(client: TestClient):
+    tid = f"PGT-{uuid.uuid4().hex[:6]}"
+    resp = client.post(
+        "/api/documents/templates/",
+        json={"form_id": tid, "form_name": "TName", "category": "GENERAL", "content": "C"},
+    )
+    assert resp.status_code == 200
+    resp = client.get("/api/documents/templates/", params={"include_total": True, "limit": 1})
+    assert resp.status_code == 200
+    body = resp.json()
+    assert set(body.keys()) == {"items", "total"}
+    assert isinstance(body["items"], list)
+
+
+def test_users_include_total_shape(client: TestClient):
+    # Admin endpoint: just validate shape
+    resp = client.get("/api/admin/users", params={"include_total": True, "limit": 1})
+    assert resp.status_code == 200
+    body = resp.json()
+    assert set(body.keys()) == {"items", "total"}
+    assert isinstance(body["items"], list)
+
+
+def test_support_tickets_include_total_shape(client: TestClient):
+    # Ensure at least one ticket exists
+    payload = {
+        "subject": "Pagination test subject",
+        "description": "A sufficiently long description for validation.",
+        "category": "bug_report",
+        "priority": "medium",
+        "contact_name": "Tester",
+        "contact_email": "tester@example.com",
+    }
+    resp = client.post("/api/support/tickets", json=payload)
+    assert resp.status_code == 200
+
+    # Validate include_total shape
+    resp = client.get("/api/support/tickets", params={"include_total": True, "limit": 1})
+    assert resp.status_code == 200
+    body = resp.json()
+    assert set(body.keys()) == {"items", "total"}
+    assert isinstance(body["items"], list)
+    assert body["total"] >= len(body["items"]) >= 0
+
+
+def test_my_support_tickets_include_total_shape(client: TestClient):
+    # Even if empty, should return the same shape
+    resp = client.get("/api/support/my-tickets", params={"include_total": True, "limit": 1})
+    assert resp.status_code == 200
+    body = resp.json()
+    assert set(body.keys()) == {"items", "total"}
+    assert isinstance(body["items"], list)
+    assert body["total"] >= 0
+
+
+def test_qdros_by_file_include_total_shape(client: TestClient):
+    # Create minimal file and a qdro
+    owner_id = _create_customer(client)
+    fno = f"P-{uuid.uuid4().hex[:6]}"
+    resp = client.post(
+        "/api/files/",
+        json={
+            "file_no": fno,
+            "id": owner_id,
+            "regarding": "QDRO Pagination Test",
+            "empl_num": "E01",
+            "file_type": "CIVIL",
+            "opened": date.today().isoformat(),
+            "status": "ACTIVE",
+            "rate_per_hour": 100.0,
+        },
+    )
+    assert resp.status_code == 200
+    resp = client.post(
+        "/api/documents/qdros/",
+        json={"file_no": fno, "form_name": "FormX", "status": "DRAFT"},
+    )
+    assert resp.status_code == 200
+
+    # Validate include_total on file-specific qdros
+    resp = client.get(f"/api/documents/qdros/{fno}", params={"include_total": True, "limit": 1})
+    assert resp.status_code == 200
+    body = resp.json()
+    assert set(body.keys()) == {"items", "total"}
+    assert isinstance(body["items"], list)
+
+
+def test_ledger_by_file_include_total_shape(client: TestClient):
+    # Create minimal file and a ledger entry via financial quick helper
+    owner_id = _create_customer(client)
+    fno = f"P-{uuid.uuid4().hex[:6]}"
+    resp = client.post(
+        "/api/files/",
+        json={
+            "file_no": fno,
+            "id": owner_id,
+            "regarding": "Ledger Pagination Test",
+            "empl_num": "E01",
+            "file_type": "CIVIL",
+            "opened": date.today().isoformat(),
+            "status": "ACTIVE",
+            "rate_per_hour": 100.0,
+        },
+    )
+    assert resp.status_code == 200
+    # Quick time entry
+    resp = client.post(
+        "/api/financial/time-entry/quick",
+        params={"file_no": fno, "hours": 1.5, "description": "Work"},
+    )
+    assert resp.status_code == 200
+
+    # Validate include_total on file ledger
+    resp = client.get(f"/api/financial/ledger/{fno}", params={"include_total": True, "limit": 1})
+    assert resp.status_code == 200
+    body = resp.json()
+    assert set(body.keys()) == {"items", "total"}
+    assert isinstance(body["items"], list)
+
+
+def test_customer_phones_include_total_shape(client: TestClient):
+    # Create customer and a couple of phones
+    owner_id = _create_customer(client)
+    for ph in ["555-1000", "555-1001"]:
+        resp = client.post(f"/api/customers/{owner_id}/phones", json={"phone": ph, "location": "Home"})
+        assert resp.status_code == 200
+
+    resp = client.get(f"/api/customers/{owner_id}/phones", params={"include_total": True, "limit": 1})
+    assert resp.status_code == 200
+    body = resp.json()
+    assert set(body.keys()) == {"items", "total"}
+    assert isinstance(body["items"], list)
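Every endpoint in this file opts into the same `{"items": [...], "total": N}` envelope via `include_total`. A sketch of the shared shape, assuming a SQLAlchemy-style query object; the real endpoints may inline it:

```python
def paginate(query, limit: int, offset: int, include_total: bool):
    # Count before slicing so "total" reflects the whole result set.
    total = query.count() if include_total else None
    items = query.offset(offset).limit(limit).all()
    if total is None:
        return items  # legacy bare-list shape
    return {"items": items, "total": total}
```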
@@ -22,6 +22,7 @@ from tests.helpers import assert_validation_error  # noqa: E402
 from app.api.financial import LedgerCreate  # noqa: E402
 from app.database.base import SessionLocal  # noqa: E402
 from app.models.qdro import QDRO  # noqa: E402
+from app.config import settings  # noqa: E402


 @pytest.fixture(scope="module")
@@ -37,6 +38,8 @@ def client():
     app.dependency_overrides[get_current_user] = lambda: _User()

     try:
+        # Disable cache for search API tests unless explicitly testing caching
+        settings.cache_enabled = False
         yield TestClient(app)
     finally:
         app.dependency_overrides.pop(get_current_user, None)
@@ -284,3 +287,181 @@ def test_global_search_highlights_mixed_case_for_customer_file_qdro(client: Test
     assert q is not None and isinstance(q.get("highlight"), str)
     assert "<strong>" in q["highlight"]
     assert f"<strong>{token_mixed}</strong>" in q["highlight"]
+
+
+def test_file_search_whole_words_and_exact_phrase(client: TestClient):
+    token = f"FW-{uuid.uuid4().hex[:6]}"
+    owner_id = _create_customer(client, f"Owner-{token}")
+    f_exact = _create_file(client, owner_id, regarding_token="The apple pie is fresh")
+    f_plural = _create_file(client, owner_id, regarding_token="The apple pies are fresh")
+
+    # whole_words=True should match 'pie' but not 'pies'
+    payload = {
+        "query": "pie",
+        "search_types": ["file"],
+        "whole_words": True,
+        "limit": 50,
+    }
+    resp = client.post("/api/search/advanced", json=payload)
+    assert resp.status_code == 200
+    results = resp.json()["results"]
+    ids = {r["id"] for r in results}
+    assert f_exact in ids
+    assert f_plural not in ids
+
+    # exact_phrase should match the exact wording only
+    payload = {
+        "query": "apple pie",
+        "search_types": ["file"],
+        "exact_phrase": True,
+        "limit": 50,
+    }
+    resp = client.post("/api/search/advanced", json=payload)
+    assert resp.status_code == 200
+    results = resp.json()["results"]
+    ids = {r["id"] for r in results}
+    assert f_exact in ids
+    assert f_plural not in ids
+
+    # default (substring) matching should include both
+    payload = {
+        "query": "pie",
+        "search_types": ["file"],
+        "limit": 50,
+    }
+    resp = client.post("/api/search/advanced", json=payload)
+    assert resp.status_code == 200
+    results = resp.json()["results"]
+    ids = {r["id"] for r in results}
+    assert f_exact in ids and f_plural in ids
+
+
+def test_ledger_search_whole_words(client: TestClient):
+    token = f"LW-{uuid.uuid4().hex[:6]}"
+    # Create a file for ledger linkage
+    owner_id = _create_customer(client, f"Owner-{token}")
+    file_no = _create_file(client, owner_id, regarding_token=token)
+
+    # Ledger entries: 'retainer' vs 'retained'
+    resp = client.post(
+        "/api/financial/ledger/",
+        json=LedgerCreate(
+            file_no=file_no,
+            date=date.today().isoformat(),
+            t_code="NOTE",
+            t_type="2",
+            empl_num="E01",
+            quantity=0.0,
+            rate=0.0,
+            amount=0.0,
+            billed="N",
+            note="retainer fee approved",
+        ).model_dump(mode="json"),
+    )
+    assert resp.status_code == 200
+
+    resp = client.post(
+        "/api/financial/ledger/",
+        json=LedgerCreate(
+            file_no=file_no,
+            date=date.today().isoformat(),
+            t_code="NOTE",
+            t_type="2",
+            empl_num="E01",
+            quantity=0.0,
+            rate=0.0,
+            amount=0.0,
+            billed="N",
+            note="retained amount on file",
+        ).model_dump(mode="json"),
+    )
+    assert resp.status_code == 200
+
+    payload = {
+        "query": "retainer",
+        "search_types": ["ledger"],
+        "whole_words": True,
+        "limit": 50,
+    }
+    resp = client.post("/api/search/advanced", json=payload)
+    assert resp.status_code == 200
+    results = resp.json()["results"]
+    # Should contain the entry with 'retainer fee approved' and exclude 'retained amount on file'
+    texts = [r.get("description", "") for r in results]
+    assert any("retainer fee approved" in t for t in texts)
+    assert all("retained amount on file" not in t for t in texts)
+
+
+def test_qdro_search_whole_words_and_exact_phrase(client: TestClient):
+    token = f"QW-{uuid.uuid4().hex[:6]}"
+    owner_id = _create_customer(client, f"Owner-{token}")
+    file_no = _create_file(client, owner_id, regarding_token=token)
+    q1 = _create_qdro_with_form_name(file_no, form_name="Order for benefit under plan")
+    q2 = _create_qdro_with_form_name(file_no, form_name="Order benefiting alternate payee")
+
+    # whole_words=True should match 'benefit' but not 'benefiting'
+    payload = {
+        "query": "benefit",
+        "search_types": ["qdro"],
+        "whole_words": True,
+        "limit": 50,
+    }
+    resp = client.post("/api/search/advanced", json=payload)
+    assert resp.status_code == 200
+    results = resp.json()["results"]
+    ids = {r["id"] for r in results}
+    assert q1 in ids
+    assert q2 not in ids
+
+    # exact_phrase should only match the precise phrase
+    payload = {
+        "query": "Order for benefit",
+        "search_types": ["qdro"],
+        "exact_phrase": True,
+        "limit": 50,
+    }
+    resp = client.post("/api/search/advanced", json=payload)
+    assert resp.status_code == 200
+    results = resp.json()["results"]
+    ids = {r["id"] for r in results}
+    assert q1 in ids
+    assert q2 not in ids
+
+
+def test_advanced_facets_include_state_and_transaction_type(client: TestClient):
+    token = f"FAC-{uuid.uuid4().hex[:6]}"
+    # Ensure at least one TX customer
+    _ = _create_customer(client, f"Facet-{token}")
+    # Ensure at least one ledger with t_type '2'
+    owner_id = _create_customer(client, f"Owner-{token}")
+    file_no = _create_file(client, owner_id, regarding_token=token)
+    resp = client.post(
+        "/api/financial/ledger/",
+        json=LedgerCreate(
+            file_no=file_no,
+            date=date.today().isoformat(),
+            t_code="NOTE",
+            t_type="2",
+            empl_num="E01",
+            quantity=0.0,
+            rate=0.0,
+            amount=0.0,
+            billed="N",
+            note="Fee for facets token",
+        ).model_dump(mode="json"),
+    )
+    assert resp.status_code == 200
+
+    # Query can be empty; we'll aggregate facets across returned results
+    payload = {
+        "search_types": ["customer", "ledger"],
+        "limit": 200,
+    }
+    resp = client.post("/api/search/advanced", json=payload)
+    assert resp.status_code == 200
+    data = resp.json()
+    facets = data.get("facets", {})
+    assert "state" in facets and isinstance(facets["state"], dict)
+    assert any(k in ("TX", "Tx", "tx") for k in facets["state"].keys())
+    assert "transaction_type" in facets and isinstance(facets["transaction_type"], dict)
+    assert "2" in facets["transaction_type"] or 2 in facets["transaction_type"]
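One portable way to get the whole-word semantics these tests demand ('pie' but not 'pies') without a REGEXP extension is to pad both sides with spaces and LIKE-match the padded token. A simplified sketch that ignores punctuation boundaries; the app's actual query may well differ:

```python
from sqlalchemy import func, literal


def whole_word_clause(column, token: str):
    # " the apple pie is fresh " LIKE '% pie %' matches; " pies " does not.
    padded = literal(" ") + func.lower(column) + literal(" ")
    return padded.like(f"% {token.lower()} %")
```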
93 tests/test_search_cache.py Normal file
@@ -0,0 +1,93 @@
+import os
+import sys
+from pathlib import Path
+from time import sleep
+
+import pytest
+from fastapi.testclient import TestClient
+
+# Ensure required env vars for app import/config
+os.environ.setdefault("SECRET_KEY", "x" * 32)
+os.environ.setdefault("DATABASE_URL", "sqlite:////tmp/delphi_test.sqlite")
+
+ROOT = Path(__file__).resolve().parents[1]
+if str(ROOT) not in sys.path:
+    sys.path.insert(0, str(ROOT))
+
+from app.main import app  # noqa: E402
+from app.auth.security import get_current_user  # noqa: E402
+from app.config import settings  # noqa: E402
+from app.services.cache import invalidate_search_cache  # noqa: E402
+
+
+@pytest.fixture(scope="module")
+def client():
+    class _User:
+        def __init__(self):
+            self.id = "cache-tester"
+            self.username = "tester"
+            self.is_admin = True
+            self.is_active = True
+
+    app.dependency_overrides[get_current_user] = lambda: _User()
+
+    # Enable cache for this test module if redis is configured
+    settings.cache_enabled = True
+    yield TestClient(app)
+    app.dependency_overrides.pop(get_current_user, None)
+
+
+@pytest.mark.skipif(not settings.redis_url, reason="Redis not configured for caching tests")
+def test_advanced_search_caches_by_criteria_and_user(client: TestClient):
+    criteria = {
+        "query": "cache-token",
+        "search_types": ["customer"],
+        "limit": 10,
+        "offset": 0,
+    }
+
+    # First call: cold cache
+    r1 = client.post("/api/search/advanced", json=criteria)
+    assert r1.status_code == 200
+    d1 = r1.json()
+
+    # Second call: should be served from cache and identical
+    r2 = client.post("/api/search/advanced", json=criteria)
+    assert r2.status_code == 200
+    d2 = r2.json()
+    assert d1 == d2
+
+
+@pytest.mark.skipif(not settings.redis_url, reason="Redis not configured for caching tests")
+def test_advanced_search_cache_invalidation_on_data_change(client: TestClient):
+    criteria = {
+        "query": "invalidate-token",
+        "search_types": ["customer"],
+        "limit": 10,
+        "offset": 0,
+    }
+
+    r1 = client.post("/api/search/advanced", json=criteria)
+    assert r1.status_code == 200
+    d1 = r1.json()
+
+    # Mutate data via customers API which triggers invalidation
+    resp = client.post("/api/customers/", json={
+        "id": "CACHE-INVALIDATE-1",
+        "last": "Cache",
+        "first": "Invalidate",
+        "email": "invalidate@example.com",
+        "city": "Austin",
+        "abrev": "TX",
+    })
+    assert resp.status_code == 200
+
+    # Best-effort async invalidation; give Redis a moment if needed
+    sleep(0.05)
+
+    r2 = client.post("/api/search/advanced", json=criteria)
+    assert r2.status_code == 200
+    d2 = r2.json()
+    # total_results or results content may change; at minimum the payload should not be the same
+    assert d1 != d2
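Both tests rely on the cache key being a pure function of the criteria and the user. A sketch of one way to derive such a key (names assumed; the real logic lives in `app.services.cache`):

```python
import hashlib
import json


def search_cache_key(criteria: dict, user_id) -> str:
    # Canonical JSON so dict ordering cannot split identical criteria
    # into different keys; the user id scopes results per user.
    canonical = json.dumps(criteria, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return f"search:adv:{user_id}:{digest}"
```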
@@ -184,3 +184,26 @@ def test_create_qdro_highlight_requires_full_query_in_single_field():
     out = create_qdro_highlight(qdro, 'plan 123')
     assert out == ''
+
+
+def test_highlight_text_escapes_html_in_source_and_tokens():
+    # Source contains HTML, should be escaped, not interpreted
+    out = highlight_text('<script>alert(1)</script> Alpha & Beta', ['alpha', 'beta'])
+    # Tags are escaped; only <strong> wrappers exist
+    assert '&lt;script&gt;alert(1)&lt;/script&gt;' in out
+    assert '<strong>Alpha</strong>' in out
+    assert '<strong>Beta</strong>' in out
+    assert '<script>' not in out and '</script>' not in out
+
+
+def test_highlight_text_handles_quotes_and_apostrophes_safely():
+    out = highlight_text('He said "Hello" & it\'s fine', ['hello'])
+    # Quotes and ampersand should be escaped
+    assert '&quot;<strong>Hello</strong>&quot;' in out
+    assert '&#x27;s' in out
+    assert '&amp;' in out
+
+
+def test_highlight_text_no_tokens_returns_escaped_source():
+    out = highlight_text('<b>bold</b>', [])
+    assert out == '&lt;b&gt;bold&lt;/b&gt;'
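The assertions above encode an escape-then-wrap order: the whole source is HTML-escaped first, then escaped token matches are wrapped in `<strong>`. A rough Python sketch of the idea (naive about matches inside entities), not the module's exact code:

```python
import html
import re


def highlight_sketch(value: str, tokens: list[str]) -> str:
    escaped = html.escape(value)  # & -> &amp;, < -> &lt;, " -> &quot;, ...
    for token in tokens:
        escaped = re.sub(
            re.escape(html.escape(token)),
            lambda m: f"<strong>{m.group(0)}</strong>",  # keep original casing
            escaped,
            flags=re.IGNORECASE,
        )
    return escaped
```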
101
tests/test_search_sort_documents.py
Normal file
101
tests/test_search_sort_documents.py
Normal file
@@ -0,0 +1,101 @@
|
|||||||
|
import os
|
||||||
|
import uuid
|
||||||
|
from datetime import date
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
|
||||||
|
os.environ.setdefault("SECRET_KEY", "x" * 32)
|
||||||
|
os.environ.setdefault("DATABASE_URL", "sqlite:////tmp/delphi_test.sqlite")
|
||||||
|
|
||||||
|
from app.main import app # noqa: E402
|
||||||
|
from app.auth.security import get_current_user # noqa: E402
|
||||||
|
|
||||||
|
|
||||||
|
class _User:
|
||||||
|
def __init__(self):
|
||||||
|
self.id = 1
|
||||||
|
self.username = "tester"
|
||||||
|
self.is_admin = True
|
||||||
|
self.is_active = True
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture()
|
||||||
|
def client():
|
||||||
|
app.dependency_overrides[get_current_user] = lambda: _User()
|
||||||
|
try:
|
||||||
|
yield TestClient(app)
|
||||||
|
finally:
|
||||||
|
app.dependency_overrides.pop(get_current_user, None)
|
||||||
|
|
||||||
|
|
||||||
|
def _create_customer_and_file(client: TestClient):
|
||||||
|
cust_id = f"DOCSS-{uuid.uuid4().hex[:8]}"
|
||||||
|
resp = client.post("/api/customers/", json={"id": cust_id, "last": "DocSS", "email": "dss@example.com"})
|
||||||
|
assert resp.status_code == 200
|
||||||
|
file_no = f"D-{uuid.uuid4().hex[:6]}"
|
||||||
|
payload = {
|
||||||
|
"file_no": file_no,
|
||||||
|
"id": cust_id,
|
||||||
|
"regarding": "Doc matter",
|
||||||
|
"empl_num": "E01",
|
||||||
|
"file_type": "CIVIL",
|
||||||
|
"opened": date.today().isoformat(),
|
||||||
|
"status": "ACTIVE",
|
||||||
|
"rate_per_hour": 100.0,
|
||||||
|
}
|
||||||
|
resp = client.post("/api/files/", json=payload)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
return cust_id, file_no
|
||||||
|
|
||||||
|
|
||||||
|


def test_templates_tokenized_search_and_sort(client: TestClient):
    # Create two templates so search and sort have something to distinguish
    t1 = f"TMP-{uuid.uuid4().hex[:6]}"
    t2 = f"TMP-{uuid.uuid4().hex[:6]}"

    resp = client.post(
        "/api/documents/templates/",
        json={"form_id": t1, "form_name": "Alpha Letter", "category": "GENERAL", "content": "Hello"},
    )
    assert resp.status_code == 200
    resp = client.post(
        "/api/documents/templates/",
        json={"form_id": t2, "form_name": "Beta Memo", "category": "GENERAL", "content": "Hello"},
    )
    assert resp.status_code == 200

    # A two-token search only matches records containing both tokens
    resp = client.get("/api/documents/templates/", params={"search": "Alpha Letter"})
    assert resp.status_code == 200
    items = resp.json()
    ids = {i["form_id"] for i in items}
    assert t1 in ids and t2 not in ids

    # Sorting by form_name desc
    resp = client.get("/api/documents/templates/", params={"sort_by": "form_name", "sort_dir": "desc"})
    assert resp.status_code == 200
    items = resp.json()
    if len(items) >= 2:
        assert items[0]["form_name"] >= items[1]["form_name"]


def test_qdros_tokenized_search(client: TestClient):
    _, file_no = _create_customer_and_file(client)
    # Create QDROs: q1 has "Alpha" in form_name and "Beta" in notes; q2 has only "Beta"
    q1 = {"file_no": file_no, "version": "01", "status": "DRAFT", "form_name": "Alpha Order", "notes": "Beta token present"}
    q2 = {"file_no": file_no, "version": "02", "status": "DRAFT", "form_name": "Gamma", "notes": "Beta only"}
    resp = client.post("/api/documents/qdros/", json=q1)
    assert resp.status_code == 200
    resp = client.post("/api/documents/qdros/", json=q2)
    assert resp.status_code == 200

    # Only the QDRO containing both tokens should match
    resp = client.get("/api/documents/qdros/", params={"search": "Alpha Beta"})
    assert resp.status_code == 200
    items = resp.json()
    names = {i.get("form_name") for i in items}
    assert "Alpha Order" in names
    assert "Gamma" not in names
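
Note that the two tokens match across different QDRO fields here: "Alpha" via form_name and "Beta" via notes. A hypothetical sketch of the AND-of-OR tokenization these tests exercise (illustrative only, not the endpoint's actual code):

from sqlalchemy import or_

def apply_tokenized_search(query, search: str, columns):
    # Each whitespace-separated token must match at least one column (OR);
    # successive .filter() calls combine the tokens with AND
    for token in search.split():
        pattern = f"%{token}%"
        query = query.filter(or_(*(col.ilike(pattern) for col in columns)))
    return query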
94
tests/test_search_sort_files.py
Normal file
@@ -0,0 +1,94 @@
import os
import uuid
from datetime import date, timedelta

import pytest
from fastapi.testclient import TestClient

os.environ.setdefault("SECRET_KEY", "x" * 32)
os.environ.setdefault("DATABASE_URL", "sqlite:////tmp/delphi_test.sqlite")

from app.main import app  # noqa: E402
from app.auth.security import get_current_user  # noqa: E402


class _User:
    def __init__(self):
        self.id = "test"
        self.username = "tester"
        self.is_admin = True
        self.is_active = True


@pytest.fixture()
def client():
    # Override auth to bypass JWT for these tests
    app.dependency_overrides[get_current_user] = lambda: _User()
    try:
        yield TestClient(app)
    finally:
        app.dependency_overrides.pop(get_current_user, None)


def _create_customer(client: TestClient) -> str:
    cid = f"FSSR-{uuid.uuid4().hex[:8]}"
    resp = client.post("/api/customers/", json={"id": cid, "last": "SearchSort", "email": f"{cid}@example.com"})
    assert resp.status_code == 200
    return cid


def _create_file(client: TestClient, file_no: str, owner_id: str, regarding: str, opened: date):
    payload = {
        "file_no": file_no,
        "id": owner_id,
        "regarding": regarding,
        "empl_num": "E01",
        "file_type": "CIVIL",
        "opened": opened.isoformat(),
        "status": "ACTIVE",
        "rate_per_hour": 100.0,
        "memo": "test search/sort",
    }
    resp = client.post("/api/files/", json=payload)
    assert resp.status_code == 200


def test_files_tokenized_search_sort_and_pagination(client: TestClient):
    owner_id = _create_customer(client)
    base_day = date.today()
    f1 = f"FS-{uuid.uuid4().hex[:6]}"
    f2 = f"FS-{uuid.uuid4().hex[:6]}"

    # f1 contains both tokens in a single field
    _create_file(client, f1, owner_id, regarding="Alpha project Beta milestone", opened=base_day - timedelta(days=1))
    # f2 contains only one token
    _create_file(client, f2, owner_id, regarding="Only Alpha token here", opened=base_day)

    # Tokenized search: both tokens required (AND-of-OR across fields)
    resp = client.get("/api/files/", params={"search": "Alpha Beta"})
    assert resp.status_code == 200
    items = resp.json()
    file_nos = {it["file_no"] for it in items}
    assert f1 in file_nos and f2 not in file_nos

    # With sort_by=opened desc, f2 would come first if it matched; the
    # two-token search restricts the result set to just f1
    resp = client.get("/api/files/", params={"search": "Alpha Beta", "sort_by": "opened", "sort_dir": "desc"})
    assert resp.status_code == 200
    items = resp.json()
    assert len(items) >= 1 and items[0]["file_no"] == f1

    # Pagination over a broader single-token query to verify skip/limit
    resp = client.get(
        "/api/files/",
        params={"search": "Alpha", "sort_by": "file_no", "sort_dir": "asc", "limit": 1, "skip": 0},
    )
    assert resp.status_code == 200
    first_page = resp.json()
    assert len(first_page) == 1
    resp = client.get(
        "/api/files/",
        params={"search": "Alpha", "sort_by": "file_no", "sort_dir": "asc", "limit": 1, "skip": 1},
    )
    second_page = resp.json()
    assert len(second_page) >= 0  # may be 0 or 1 depending on other fixtures
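
A hypothetical sketch of the sort-plus-pagination handling the params above imply (illustrative; the real endpoint's resolution of the sort_by string to a column may differ):

def apply_sort_and_pagination(query, sort_col, sort_dir: str, skip: int, limit: int):
    # Order first, then page: offset/limit without a stable order_by
    # would make skip/limit pages nondeterministic
    ordering = sort_col.desc() if sort_dir == "desc" else sort_col.asc()
    return query.order_by(ordering).offset(skip).limit(limit)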
177
tests/test_search_validation.py
Normal file
@@ -0,0 +1,177 @@
import os
import sys
from pathlib import Path

import pytest
from fastapi.testclient import TestClient

# Ensure required env vars for app import/config
os.environ.setdefault("SECRET_KEY", "x" * 32)
os.environ.setdefault("DATABASE_URL", "sqlite:////tmp/delphi_test.sqlite")

# Ensure the repository root is on sys.path for direct test runs
ROOT = Path(__file__).resolve().parents[1]
if str(ROOT) not in sys.path:
    sys.path.insert(0, str(ROOT))

from app.main import app  # noqa: E402
from app.auth.security import get_current_user  # noqa: E402
from tests.helpers import assert_validation_error  # noqa: E402
from app.config import settings  # noqa: E402


@pytest.fixture(scope="module")
def client():
    # Override auth to bypass JWT for these tests
    class _User:
        def __init__(self):
            self.id = "test"
            self.username = "tester"
            self.is_admin = True
            self.is_active = True

    app.dependency_overrides[get_current_user] = lambda: _User()

    # Disable the cache to make validation tests deterministic
    settings.cache_enabled = False

    try:
        yield TestClient(app)
    finally:
        app.dependency_overrides.pop(get_current_user, None)


def test_advanced_search_invalid_search_types(client: TestClient):
    payload = {
        "query": "anything",
        "search_types": ["customer", "bogus"],
    }
    resp = client.post("/api/search/advanced", json=payload)
    assert_validation_error(resp, "search_types")


def test_advanced_search_invalid_sort_options(client: TestClient):
    # Invalid sort_by
    payload = {
        "query": "x",
        "search_types": ["customer"],
        "sort_by": "nope",
    }
    resp = client.post("/api/search/advanced", json=payload)
    assert_validation_error(resp, "sort_by")

    # Invalid sort_order
    payload = {
        "query": "x",
        "search_types": ["customer"],
        "sort_order": "sideways",
    }
    resp = client.post("/api/search/advanced", json=payload)
    assert_validation_error(resp, "sort_order")


def test_advanced_search_limit_bounds(client: TestClient):
    # Too low
    payload = {
        "query": "x",
        "search_types": ["customer"],
        "limit": 0,
    }
    resp = client.post("/api/search/advanced", json=payload)
    assert_validation_error(resp, "limit")

    # Too high
    payload["limit"] = 201
    resp = client.post("/api/search/advanced", json=payload)
    assert_validation_error(resp, "limit")
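
The accepted range is 1 to 200 inclusive. A hypothetical Pydantic sketch of bounds consistent with these two failures (the default shown is a guess):

from pydantic import BaseModel, Field

class AdvancedSearchLimitSketch(BaseModel):
    # ge/le reject 0 and 201, matching the assertions above
    limit: int = Field(default=50, ge=1, le=200)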


def test_advanced_search_conflicting_flags_exact_phrase_and_whole_words(client: TestClient):
    payload = {
        "query": "apple pie",
        "search_types": ["file"],
        "exact_phrase": True,
        "whole_words": True,
    }
    resp = client.post("/api/search/advanced", json=payload)
    # Model-level validation has no single field location, so check the
    # message text in the error details instead
    assert resp.status_code == 422
    body = resp.json()
    assert body.get("success") is False
    assert body.get("error", {}).get("code") == "validation_error"
    msgs = [d.get("msg", "") for d in body.get("error", {}).get("details", [])]
    assert any("exact_phrase and whole_words" in m for m in msgs)
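
A hypothetical model-level validator that would produce such a detail entry (class and method names are assumptions; only the "exact_phrase and whole_words" fragment is pinned by the test):

from pydantic import BaseModel, model_validator

class PhraseFlagsSketch(BaseModel):
    exact_phrase: bool = False
    whole_words: bool = False

    @model_validator(mode="after")
    def _reject_conflicting_flags(self):
        # Raised at model level, so the 422 detail carries no single
        # field location; hence the message-text check in the test
        if self.exact_phrase and self.whole_words:
            raise ValueError("exact_phrase and whole_words cannot both be set")
        return self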


def test_advanced_search_inverted_date_range(client: TestClient):
    payload = {
        "search_types": ["file"],
        "date_field": "created",
        "date_from": "2024-02-01",
        "date_to": "2024-01-31",
    }
    resp = client.post("/api/search/advanced", json=payload)
    assert resp.status_code == 422
    body = resp.json()
    assert body.get("success") is False
    assert body.get("error", {}).get("code") == "validation_error"
    msgs = [d.get("msg", "") for d in body.get("error", {}).get("details", [])]
    assert any("date_from must be less than or equal to date_to" in m for m in msgs)


def test_advanced_search_inverted_amount_range(client: TestClient):
    payload = {
        "search_types": ["file"],
        "amount_field": "amount",
        "amount_min": 100.0,
        "amount_max": 50.0,
    }
    resp = client.post("/api/search/advanced", json=payload)
    assert resp.status_code == 422
    body = resp.json()
    assert body.get("success") is False
    assert body.get("error", {}).get("code") == "validation_error"
    msgs = [d.get("msg", "") for d in body.get("error", {}).get("details", [])]
    assert any("amount_min must be less than or equal to amount_max" in m for m in msgs)
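
Both inverted-range messages follow the same pattern. A hypothetical validator sketch consistent with them (field names match the payloads above; everything else is assumed):

from datetime import date
from typing import Optional
from pydantic import BaseModel, model_validator

class RangeSketch(BaseModel):
    date_from: Optional[date] = None
    date_to: Optional[date] = None
    amount_min: Optional[float] = None
    amount_max: Optional[float] = None

    @model_validator(mode="after")
    def _check_ranges(self):
        if self.date_from and self.date_to and self.date_from > self.date_to:
            raise ValueError("date_from must be less than or equal to date_to")
        if (self.amount_min is not None and self.amount_max is not None
                and self.amount_min > self.amount_max):
            raise ValueError("amount_min must be less than or equal to amount_max")
        return self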


def test_advanced_search_date_field_supported_per_type(client: TestClient):
    # 'opened' is only valid for files
    payload = {
        "search_types": ["customer", "ledger"],
        "date_field": "opened",
        "date_from": "2024-01-01",
        "date_to": "2024-12-31",
    }
    resp = client.post("/api/search/advanced", json=payload)
    assert resp.status_code == 422
    body = resp.json()
    msgs = [d.get("msg", "") for d in body.get("error", {}).get("details", [])]
    assert any("date_field 'opened' is not supported" in m for m in msgs)

    # Valid when 'file' is included
    payload["search_types"] = ["file"]
    resp = client.post("/api/search/advanced", json=payload)
    assert resp.status_code == 200


def test_advanced_search_amount_field_supported_per_type(client: TestClient):
    # 'amount' is only valid for ledger
    payload = {
        "search_types": ["file"],
        "amount_field": "amount",
        "amount_min": 1,
        "amount_max": 10,
    }
    resp = client.post("/api/search/advanced", json=payload)
    assert resp.status_code == 422
    body = resp.json()
    msgs = [d.get("msg", "") for d in body.get("error", {}).get("details", [])]
    assert any("amount_field 'amount' is not supported" in m for m in msgs)

    # Valid when 'ledger' is included
    payload["search_types"] = ["ledger"]
    resp = client.post("/api/search/advanced", json=payload)
    assert resp.status_code == 200
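
A hypothetical shape for the per-type field support check (only 'opened'-for-file and 'amount'-for-ledger are confirmed by these tests; the other table entries are guesses):

DATE_FIELDS_BY_TYPE = {"file": {"opened", "created"}, "customer": {"created"}, "ledger": {"created"}}
AMOUNT_FIELDS_BY_TYPE = {"ledger": {"amount"}}

def check_field_supported(label: str, field: str, search_types: list, table: dict) -> None:
    # Accept the field if any selected search type supports it
    if field and not any(field in table.get(t, set()) for t in search_types):
        raise ValueError(f"{label} '{field}' is not supported for the selected search types")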
@@ -119,4 +119,9 @@ def test_ticket_lifecycle_and_404s_with_audit(client: TestClient):
    assert resp.status_code == 200
    assert isinstance(resp.json(), list)

    # Search should filter results
    resp = client.get("/api/support/tickets", params={"search": "Support issue"})
    assert resp.status_code == 200
    assert isinstance(resp.json(), list)