WxParser Gets Smarter
Multiple Recipients, Resilience, and a Design Document
A follow-up to *From Raw Weather Data to my Inbox*
What Changed — and Why
The first version of WxParser worked well for a single location and a single recipient. But real life is messier than that. My wife’s sister lives in The Woodlands, Texas — not in Spring, which is what a postal address might suggest. I like to receive reports in both English and Spanish, which means I am configured as two recipients with different preferences. And my preferred weather station occasionally goes offline.
Turning a working prototype into something genuinely useful meant solving several problems in sequence.
Multiple Recipients, Multiple Locations
The original design fetched weather data for one location and sent a report for that location to everyone. That breaks down immediately when recipients live in different cities.
The new design gives each recipient their own location. The first time the service encounters a recipient, it geocodes their street address using the OpenStreetMap Nominatim API to get a latitude and longitude. It then queries the local weather database — which already contains observations from many stations in the region — to find the nearest METAR and TAF reporting station for that specific person.
Those results are cached back to the local configuration file so the API calls happen exactly once. From then on, each recipient gets a snapshot built from their own nearest stations. A recipient in The Woodlands gets different data than one near Houston Hobby.
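The nearest-station step is simple to illustrate. Here is a minimal Python sketch (the project itself is not Python); the station list, field names, and coordinates below are hypothetical and approximate, not taken from the actual database:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_station(lat, lon, stations):
    """Return the station closest to the recipient's geocoded coordinates."""
    return min(stations, key=lambda s: haversine_km(lat, lon, s["lat"], s["lon"]))

# Hypothetical reporting stations near Houston (approximate coordinates).
stations = [
    {"icao": "KDWH", "lat": 30.06, "lon": -95.55},  # Hooks Memorial
    {"icao": "KHOU", "lat": 29.65, "lon": -95.28},  # Houston Hobby
    {"icao": "KIAH", "lat": 29.98, "lon": -95.34},  # Bush Intercontinental
]

# A recipient geocoded to The Woodlands, TX resolves to the closest station.
print(nearest_station(30.17, -95.50, stations)["icao"])  # KDWH
```

The actual service runs this kind of lookup once per recipient and caches the result, so the distance computation and the geocoding API call are both out of the steady-state path.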
The locality name used in the report subject and body is also configurable. If the geocoder returns “Spring, TX” but the recipient prefers “The Woodlands,” that preference can be respected without any code change.
Keeping Private Data Private
The original configuration file stored a home street address. That file was tracked by git, which meant a private address was sitting in version control.
The fix was a clean separation between two kinds of configuration:
Shared config (tracked by git): fetch region, bounding box, service intervals, non-sensitive defaults.
Local config (git-ignored): SMTP credentials, API keys, recipient addresses, and the cached geocoding results.
The local config file is never committed. The service writes resolved data — coordinates, station ICAOs — back to it automatically. Sensitive information stays off the repository.
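The layering is easy to sketch. In this hypothetical Python version, file names, key names, and the recipient shape are all assumptions for illustration; the real services are not Python:

```python
import json
from pathlib import Path

def load_config(shared_path, local_path):
    """Merge shared (git-tracked) and local (git-ignored) config; local wins."""
    shared = json.loads(Path(shared_path).read_text())
    local_file = Path(local_path)
    local = json.loads(local_file.read_text()) if local_file.exists() else {}
    return {**shared, **local}

def cache_geocode(local_path, recipient_id, lat, lon, stations):
    """Write resolved coordinates and station ICAOs back to the local file only,
    so sensitive and derived data never reaches the repository."""
    path = Path(local_path)
    local = json.loads(path.read_text()) if path.exists() else {}
    rec = local.setdefault("recipients", {}).setdefault(recipient_id, {})
    rec.update({"lat": lat, "lon": lon, "stations": stations})
    path.write_text(json.dumps(local, indent=2))
```

The key property is the direction of writes: the service reads both files but only ever writes the local one.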
Resilience: What Happens When a Station Goes Offline?
Aviation weather stations occasionally stop reporting. The original code would simply find no data and skip the recipient for that cycle. With a tiered fallback system, the behavior is now more graceful:
Try each station in the recipient’s preferred list, in order.
If none have recent data, fall back to any station in the database that reported within the last three hours.
If still nothing, skip the cycle and log a warning.
The preferred station list now accepts a comma-separated string — `"KDWH, KHOU"` — so a recipient can have an ordered set of preferences. Critically, the service never updates the configuration when a fallback station is used. It logs a warning so the operator knows, but it does not silently rewrite the user’s preferences.
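The tiered logic can be sketched in a few lines of Python. The `observations` shape (ICAO mapped to the time of the latest report) and the function names are hypothetical; only the tier order and the three-hour window come from the design described above:

```python
from datetime import datetime, timedelta, timezone

FALLBACK_WINDOW = timedelta(hours=3)

def parse_station_list(raw):
    """Split a comma-separated preference string like 'KDWH, KHOU' into a list."""
    return [s.strip().upper() for s in raw.split(",") if s.strip()]

def pick_station(preferred_raw, observations, now=None):
    """Tiered fallback: preferred stations in order, then any station with a
    report in the last three hours, else None (skip the cycle and warn)."""
    now = now or datetime.now(timezone.utc)
    # Tier 1: each preferred station, in order.
    for icao in parse_station_list(preferred_raw):
        ts = observations.get(icao)
        if ts and now - ts <= FALLBACK_WINDOW:
            return icao, False  # (station, used_fallback)
    # Tier 2: the freshest station that reported recently. The caller logs a
    # warning but never writes the fallback into the recipient's preferences.
    recent = [(ts, icao) for icao, ts in observations.items()
              if now - ts <= FALLBACK_WINDOW]
    if recent:
        return max(recent)[1], True
    # Tier 3: nothing usable; the caller skips this cycle.
    return None, False
```

Returning a `used_fallback` flag rather than mutating any state is what keeps the configuration untouched when a fallback fires.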
Stable Identities for Recipients
The database tracks per-recipient state: when the last report was sent, what the weather fingerprint looked like. Originally that state was keyed on the recipient’s email address. That caused a subtle problem: two recipients sharing an email address but receiving reports in different languages were treated as the same person.
Adding a stable `Id` field — a short human-readable string like `"paul-en"` or `"paul-es"` — solved this. The ID is the key throughout the system. It is also used to match entries when writing resolved data back to the configuration file, which handles the shared-email case correctly.
Duplicate IDs are detected at startup. If two recipients share an ID, both are skipped and an error is logged rather than silently corrupting state.
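The startup check is small. A hedged Python sketch of the idea (the recipient shape here is assumed, and the real implementation is not Python):

```python
from collections import Counter

def validate_recipient_ids(recipients):
    """Return (valid, errors). All recipients sharing an Id are skipped,
    since keeping either one could silently corrupt per-recipient state."""
    counts = Counter(r["id"] for r in recipients)
    dupes = {rid for rid, n in counts.items() if n > 1}
    valid = [r for r in recipients if r["id"] not in dupes]
    errors = [f"duplicate recipient id: {rid}" for rid in sorted(dupes)]
    return valid, errors
```

Skipping every holder of a duplicated ID, rather than arbitrarily keeping the first, is the conservative choice: there is no way to know which entry the operator intended.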
A New Service: WxMonitor
The first article described a system with two services. There are now three.
WxMonitor.Svc runs independently and watches the other two. On each cycle it does two things:
Log scanning. It reads the log files of WxParser.Svc and WxReport.Svc, looking for entries at ERROR level or above that are newer than the last entry it processed. If it finds any, it sends an alert email.
Heartbeat checking. WxParser.Svc and WxReport.Svc each write a timestamp to a small file at the end of every successful cycle. WxMonitor checks whether those timestamps are fresh. If a service goes silent — stopped, crashed, or hung — the heartbeat goes stale and an alert goes out.
Alerts are rate-limited by a configurable cooldown period. A stuck service does not flood the inbox; it sends at most one alert per cooldown period (one per hour by default) until the problem is resolved.
The design uses a classic dead man’s switch pattern. The monitored services don’t report their health to a central system — they simply leave evidence that they ran. The monitor draws its own conclusions from the absence of that evidence.
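The heartbeat-and-cooldown mechanics fit in a few lines. In this Python sketch the timestamp format, file layout, and thresholds are assumptions for illustration, not the actual WxMonitor implementation:

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

STALE_AFTER = timedelta(minutes=30)   # assumed staleness threshold
COOLDOWN = timedelta(hours=1)         # assumed alert cooldown

def heartbeat_stale(path, now):
    """Each service writes an ISO-8601 timestamp after every successful cycle.
    A missing or old file means the service has gone silent."""
    p = Path(path)
    if not p.exists():
        return True
    ts = datetime.fromisoformat(p.read_text().strip())
    return now - ts > STALE_AFTER

def should_alert(last_alert, now):
    """Dead man's switch with cooldown: alert at most once per COOLDOWN."""
    return last_alert is None or now - last_alert >= COOLDOWN
```

Note that `heartbeat_stale` treats a missing file the same as an old one: the monitor reasons only from the absence of evidence, never from a health report the failing service would have to send.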
The Design Document
As the system grew from a prototype to three services and eleven projects, the need for a design document became obvious. I asked Claude to write one.
The result covered purpose, architecture, solution structure, per-service data flows, database schema, configuration reference, external dependencies, installation instructions, and known limitations — all in a single Markdown file. Diagrams were written in Mermaid, a text-based diagram language that renders natively in VS Code and on GitHub without any image files or external tools.
My role was validation. I read the document carefully against the actual code and found it accurate. I asked for one layout adjustment — the initial architecture diagram was too wide — and Claude decomposed it into a high-level overview and three focused detail diagrams, one per service.
The document is now the authoritative reference for the project and will be updated alongside the code.
That last point is crucial. It is almost axiomatic that software design documentation is out of date. Developers are in love with the process of making dumb machines do smart things; that process is called “coding”, not “documentation”. Most don’t document well unless forced to. In-code documentation comments are usually inadequate, and design documents, if they even exist, are dead letters. But with Claude Code doing the bulk of the work, a design doc can be updated as frequently as the code is changed.
The Broader Pattern
Each of these improvements came from friction with real use: a second recipient in a different city, a configuration file that was too public, a station that went quiet. The system evolved in response to actual problems rather than anticipated ones.
Claude handled all the implementation — new projects, modified classes, updated configuration, the design document itself. My contribution was knowing what problems needed solving and why, understanding the constraints that shaped the solutions, spot-checking the code, and validating the results.
The quote from the first article still holds: AI coding assistants do not replace domain expertise. They amplify it.
---
The source code is available at https://github.com/PaulHarder2/HarderWare.
