Archive note
December preparation flow
By the end of 2024, the important work was engineering validation, not public claims. Before telling any broader story, we needed a narrow research question, a repeatable setup protocol, and product language grounded in evidence.
The first preparation task was scope. Internal notes kept returning to one principle: begin with controlled domestic scenarios, activity intervals, and measurable environmental change, not with promises about diagnosis, identity, or emergency response.
The second task was repeatability. The system had to produce comparable logs in one real room before any discussion of wider deployment. That meant a fixed placement procedure, documented node positions, and sessions that could be reviewed under the same assumptions.
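Documented node positions only help if they are written down in the same shape every time. A minimal sketch of such a placement record follows; the field names and file layout are illustrative assumptions, not the project's actual schema.

```python
import json
from pathlib import Path

def write_placement_record(session_dir, room, nodes):
    """Save documented node positions for one session so later runs can be
    reviewed under the same assumptions.

    `nodes` is a mapping of node name to position, e.g. coordinates in
    metres from a fixed reference corner of the room (assumed convention).
    """
    session_dir = Path(session_dir)
    session_dir.mkdir(parents=True, exist_ok=True)
    record = {"room": room, "nodes": nodes}
    path = session_dir / "placement.json"  # hypothetical file name
    path.write_text(json.dumps(record, indent=2))
    return path
```

Keeping the record next to the capture outputs means a reviewer can confirm that two sessions really used the same fixed placement before comparing their logs.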
The third task was claim discipline. A product like this has to be careful not only in engineering, but also in the conclusions it allows. By the end of that phase, the direction was clearer: privacy-first sensing, restrained statements, and reports that show observable patterns without presenting unverified certainty.
What had to be prepared first
The preparation started by separating the prototype into two clear roles: TX created repeatable packet activity, while RX joined the same 2.4 GHz network, enabled radio-channel capture, and exposed a structured stream for the host laptop. This kept the experiment readable: one node stimulates the channel, the other records it.
The useful result at this stage was not a classification model. It was a controlled capture routine: prepare the receiver, verify the serial stream, start the sender, run a short capture, and save the outputs in a session folder that can be reviewed later.
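The capture routine above can be sketched as a small host-side loop. The serial source is abstracted as any iterable of text lines, so the same logic works against live hardware or a replayed raw log; the line format (records prefixed `CSI,`) and file names are assumptions for illustration, not the project's actual protocol.

```python
import json
from pathlib import Path

def run_capture(lines, session_dir, max_frames=None):
    """Consume receiver output line by line, splitting CSI records from
    parse errors, and save both into a reviewable session folder.

    `lines` may be a live serial stream or a replayed log. A record is
    assumed to be one comma-separated line starting with 'CSI,'.
    """
    session_dir = Path(session_dir)
    session_dir.mkdir(parents=True, exist_ok=True)
    frames, errors = [], []
    for line in lines:
        line = line.strip()
        if line.startswith("CSI,"):
            frames.append(line)
        elif line:
            errors.append(line)  # anything unparsed is kept as evidence
        if max_frames is not None and len(frames) >= max_frames:
            break  # short smoke run: stop at a fixed frame count
    (session_dir / "csi.csv").write_text("\n".join(frames) + "\n")
    (session_dir / "parse_errors.log").write_text("\n".join(errors) + "\n")
    (session_dir / "meta.json").write_text(
        json.dumps({"frames": len(frames), "errors": len(errors)}, indent=2))
    return len(frames), len(errors)
```

The point of the sketch is the order of operations, not the parsing: verify the stream, capture for a bounded window, and leave everything (including failures) in one session folder.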
Capture flow before ML
Each test session could include the raw receiver stream, parsed CSI CSV, structured JSONL collector events, parse errors, raw logs, summary metadata, and host diagnostics. Short smoke runs could be limited to a fixed frame count, which made early checks easier to compare without turning every trial into a large dataset.
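A review pass over a finished session can start as a simple completeness check. The file names below mirror the artifact list above but are assumed for illustration; the project's actual layout may differ.

```python
from pathlib import Path

# Assumed artifact names mirroring the session contents described above.
EXPECTED = [
    "raw_stream.log",    # raw receiver stream
    "csi.csv",           # parsed CSI records
    "events.jsonl",      # structured collector events
    "parse_errors.log",  # lines that failed parsing
    "meta.json",         # summary metadata for the run
]

def check_session(session_dir):
    """Return the expected artifacts missing from a session folder, so an
    incomplete run can be flagged before it is compared with others."""
    session_dir = Path(session_dir)
    return [name for name in EXPECTED if not (session_dir / name).exists()]
```

A check like this keeps early smoke runs honest: a session either has every artifact needed for later review, or it is flagged immediately rather than discovered missing during modeling.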
Why this matters
This gives the project a chronological foundation. By December 24, 2024, the work had moved from loose hardware checks toward a repeatable TX/RX procedure, a receiver-labeled live example, and a logging path that could preserve evidence from each run. That is the right order: setup, capture, review, and only then modeling.