"The K9 Handler's Unfair Advantage." Voice-first AI-native platform for police K9 handlers, trainers, and supervisors. Court-admissible narratives, case-law verification, offline-first mirror, CJIS posture documented from project creation.
A police K9 unit produces an enormous amount of legally load-bearing documentation: care and training logs, deployment narratives, vet records, certifications, recertifications, case-specific reports, and the chain of evidence connecting a dog's training history to its admissibility as a probable-cause source in a particular case. Most of this happens on paper or in form-driven e-record apps that handlers hate, supervisors can't pull useful trends from, and trainers can't connect back to skill gaps. *Florida v. Harris* sets the modern admissibility expectation; meeting it consistently with paper and form fills is a daily challenge.
PawTrek replaces the form with the handler's voice — and then organizes everything downstream of it.
Deepgram + Grok turn raw voice into a structured narrative that aligns with the admissibility expectations from *Florida v. Harris*: dog identity, certifications relevant to the alert type, training-history grounding, environmental conditions, alert-and-confirmation sequence, handler observations. The handler talks once, in their natural cadence; the system produces a draft narrative they can edit, diff, and lock.
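The structured record behind that flow can be sketched as a typed draft. The field names and the `draftNarrative` helper below are illustrative assumptions, not PawTrek's actual schema:

```typescript
// Sketch of the structured narrative a voice pass might produce.
// Field names are illustrative assumptions, not PawTrek's real schema.
interface DeploymentNarrative {
  dogId: string;                 // dog identity
  certifications: string[];      // certs relevant to the alert type
  trainingRefs: string[];        // links into the training history
  environment: string;           // conditions at the scene
  alertSequence: string[];       // alert-and-confirmation steps, in order
  handlerObservations: string;   // free-form handler notes
  locked: boolean;               // true once the handler locks the draft
}

// Hypothetical post-ASR step: fold model output into a draft the
// handler can edit, diff, and lock.
function draftNarrative(
  dogId: string,
  fields: Partial<Omit<DeploymentNarrative, "dogId" | "locked">>
): DeploymentNarrative {
  return {
    dogId,
    certifications: fields.certifications ?? [],
    trainingRefs: fields.trainingRefs ?? [],
    environment: fields.environment ?? "",
    alertSequence: fields.alertSequence ?? [],
    handlerObservations: fields.handlerObservations ?? "",
    locked: false, // drafts always start unlocked
  };
}
```

The point of the typed draft is that "edit, diff, and lock" operates on fields, not on a blob of transcript text.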
Where a narrative implicates legal questions (e.g., scope-of-alert, reliability challenges), the system surfaces relevant case law and notes the language the narrative already uses — and the language it would need to use — to align cleanly with that precedent.
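The "language it already uses vs. the language it would need" comparison reduces to a phrase-gap check. This is a minimal sketch; the function name and example phrases are assumptions, not the product's actual precedent engine:

```typescript
// Illustrative sketch: compare a narrative against phrases a precedent
// expects to see, and report what is present vs. missing.
function languageGaps(
  narrative: string,
  expectedPhrases: string[]
): { present: string[]; missing: string[] } {
  const text = narrative.toLowerCase();
  const present = expectedPhrases.filter(p => text.includes(p.toLowerCase()));
  const missing = expectedPhrases.filter(p => !text.includes(p.toLowerCase()));
  return { present, missing };
}
```

A real system would match on meaning rather than literal substrings, but the output shape is the same: here is what the narrative says, here is what the precedent wants said.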
A K9 deployment frequently happens in places where connectivity is poor or absent. Every write goes to a local SQLite mirror first; a Firestore outbox queue with per-collection watermarks reconciles when connectivity returns. The handler never sees the network state — they just keep working. This is the part of the architecture that any officer-facing system has to get right, or it will be abandoned in the field.
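The local-mirror-plus-outbox pattern can be sketched as follows. The `Outbox` class and its method names are illustrative assumptions; a real implementation would persist entries in SQLite and push them as Firestore writes:

```typescript
// Minimal sketch of a local-first outbox with per-collection watermarks.
interface OutboxEntry {
  collection: string;   // e.g. "deployments", "trainingLogs"
  docId: string;
  payload: unknown;
  seq: number;          // monotonically increasing local sequence number
}

class Outbox {
  private entries: OutboxEntry[] = [];
  private watermarks = new Map<string, number>(); // last synced seq per collection
  private nextSeq = 1;

  // Every write lands locally first; the caller never sees network state.
  enqueue(collection: string, docId: string, payload: unknown): void {
    this.entries.push({ collection, docId, payload, seq: this.nextSeq++ });
  }

  // Entries above the collection's watermark still need to be pushed.
  pending(collection: string): OutboxEntry[] {
    const mark = this.watermarks.get(collection) ?? 0;
    return this.entries.filter(e => e.collection === collection && e.seq > mark);
  }

  // When connectivity returns, push pending entries in order and
  // advance the watermark after each successful push.
  async reconcile(
    collection: string,
    push: (e: OutboxEntry) => Promise<void>
  ): Promise<void> {
    for (const e of this.pending(collection)) {
      await push(e); // e.g. a Firestore write
      this.watermarks.set(collection, e.seq);
    }
  }
}
```

Advancing the watermark per entry (rather than per batch) means a connection drop mid-sync never re-sends already-acknowledged writes.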
Around the voice-first capture sits the rest of what a handler / trainer / supervisor triad needs:
The goal is to turn paper records and form-filling apps into a substrate-first system: the handler talks, the system organizes. Voice is the input; everything downstream — narrative, skills graph, deployment log, comms — is generated.
Like CopApp, PawTrek sits adjacent to law-enforcement data, and the architecture decisions were made under that constraint:
Turborepo monorepo with a shared TypeScript domain spine — the same domain types across mobile, web, and backend, so a handler-facing concept is defined once. The stack:
| Tier | Tech |
|---|---|
| Mobile | React Native 0.83 / Expo SDK 55 (New Architecture, React 19) |
| Web | Next.js 16 |
| Backend | Firebase Functions v2 |
| Voice / AI | Deepgram (ASR) + Grok 4.3 (narrative + legal-context reasoning) |
| Data | Firestore (CMEK), local SQLite mirror with outbox / per-collection watermarks |
| Integrity | App Check (App Attest / Play Integrity / reCAPTCHA Enterprise) |
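The "defined once" idea looks roughly like this: a shared package exports the domain type, and mobile, web, and backend all import that one definition. Package layout, type, and function names here are assumptions for illustration:

```typescript
// packages/domain/src/k9.ts (hypothetical shared spine module)
// One definition of a handler-facing concept, imported by every tier.
export interface K9Profile {
  id: string;
  name: string;
  certifications: { type: string; expires: string }[]; // ISO dates
}

// Any tier can enforce the same invariant without redefining the type.
// ISO date strings compare correctly with plain string comparison.
export function hasCurrentCert(
  dog: K9Profile,
  certType: string,
  today: string
): boolean {
  return dog.certifications.some(c => c.type === certType && c.expires >= today);
}
```

Because mobile, web, and Functions all consume the same package, a change to what "certified" means propagates by recompile rather than by hunting down three parallel definitions.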
PawTrek is where I learned what "AI-first product design" means in practice: not "we wedge a chatbot in," but the substrate of the product is voice → structured record → organized everything-else. Once that substrate is in place, the rest of the product is composable — every new feature is a new query, a new report, a new outbound communication. It also confirmed, the hard way, that offline-first is non-negotiable for any system meant to be used in the field. K9 units don't have time to wait for a network.