Hi! DIY Home automation system build.

  • paulca
    Automated Home Jr Member
    • Jul 2019
    • 31

    #31
    Next on the list:

    * Find a way to export and visualise the active schedules and their selected target temps.
    * Find a way to pass metadata in the "Demand" object so that it can be traced to its schedule, target temp and trigger/current temp.

    Both of these are currently only in memory in a Python microservice, so I need to work out the best way to expose them. Hoping I can just add a free-form JSON payload to the Demand object (rough sketch after the list below).

    * ThermoElectric TRVs for Bedroom and Office so I can actually start to multi-zone properly.
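
    A minimal sketch of what that free-form payload might look like (the field names here are hypothetical, not the actual Demand schema):

    import json

    # Hypothetical Demand object with a free-form "meta" payload attached,
    # carrying the schedule, target temp and trigger temp that produced it.
    demand = {
        "zone": "livingroom",
        "state": "ON",
        "meta": {
            "schedule": "weekday_evening",
            "target_temp": 21.0,
            "trigger_temp": 18.4,
        },
    }

    print(json.dumps(demand, indent=2))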

    Comment

    • paulca
      Automated Home Jr Member
      • Jul 2019
      • 31

      #32
      Long time, no update.

      But an update.

      My as-simple-as-possible data fabric... a single JSON blob for everything... has reached its limits now that things have expanded.

      Since my last update I have done the following:

      * Moved everything to Docker on a real server. Had enough of the little Pi that could... corrupt its SD card.
      * Added a feeder to send all the data to InfluxDB and put a Grafana dashboard on it.
      * Added support for electric radiator valves and multi-zone heating; I now have 3 switched valves and an "everywhere else" zone.
      * Added support for ad hoc overrides, so I can force a target temp for an amount of time in a particular zone.
      * Fixed a few bugs.
      * Bolstered the infrastructure with a new Nighthawk Wi-Fi router and custom DHCP and DNS servers, allowing dynamic control over device IPs and DNS names. There may even be options soon for sending application information to devices from the DHCP "Options" list.
      * Created a much more templated style of writing firmware using VSCode and C++ build variables.
      * Also on firmware, moved a bunch of management stuff into a base template, to provide things like OTA programming and broadcast discovery.
      * Added presence (or in-use) detection to three rooms, based on whether the smart TV (media centre) is switched on or not in that room. Works great except in the bedroom in the mornings, when the TV is obviously off.

      But the big news is... I'm rewriting the core and data fabric to utilise MQTT instead of the custom JSON dictionary blob over raw TCP. (The latter is leaking memory like a sieve.)

      I have the basics working, but still procrastinating over how to manage multiple schedules. The MQTT structure now breaks down as...

      /sensors/<measurement>/<zone> - eg: /sensors/humidity/livingroom
      /heating/targettemps/<profilename>/<zone>
      /heating/requests/<zone>
      /heating/futures/<zone>
      /heating/demands/<zone>
      /heating/controls/boiler/main
      /heating/controls/radiators/<zone>
      /heating/state/boiler/main
      /heating/state/radiators/<zone>


      It's sort of like a workflow/pipeline model, but using cascading events. So if either the target temp or the sensor value changes, a schedule will be triggered, and the resulting cascade of 5 or 6 new messages propagates through to turn the boiler and a radiator on.
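
      As an illustration of one hop in that cascade, here's a minimal sketch of a subscriber that watches a sensor topic (following the pattern above) and publishes a request when the reading drops below the target. It assumes the paho-mqtt 1.x client API; the broker address is made up and the comparison logic is deliberately simplified:

      import json
      import paho.mqtt.client as mqtt

      BROKER = "localhost"   # assumption: broker on the same box
      ZONE = "livingroom"
      TARGET = 21.0          # in reality this comes from /heating/targettemps/...

      def on_message(client, userdata, msg):
          temp = float(msg.payload.decode())
          if temp < TARGET:
              # One hop of the cascade: sensor change -> heating request for the zone.
              request = {"zone": ZONE, "target": TARGET, "current": temp}
              client.publish("/heating/requests/" + ZONE, json.dumps(request))

      client = mqtt.Client()
      client.on_message = on_message
      client.connect(BROKER, 1883)
      client.subscribe("/sensors/temperature/" + ZONE)
      client.loop_forever()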

      The new feature I added, which you can see in the topic list above, is "futures". Heating futures are created by trial-running schedules with a time period in the future. If a schedule would have raised any requests for heating in that window, these are instead published as "futures". An additional component is subscribed to these topics and will try to make a logical decision about whether the heating should be put on early, and when. This creates what I call pre-ramps, with the target temp rising towards the future target. It's currently based on a manually entered °C delta per minute per zone, but it's plausible to mine this out of InfluxDB periodically based on the heating gradients in each zone.
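
      The pre-ramp arithmetic is roughly this (a sketch only; the ramp rates and numbers below are made up, not my real per-zone values):

      from datetime import datetime, timedelta

      # Hypothetical per-zone ramp rates in degrees C per minute (manually entered for now).
      RAMP_RATE = {"bedroom": 0.05, "livingroom": 0.08}

      def preramp_start(zone, current_temp, future_target, future_time):
          """When must heating start for the zone to reach future_target by future_time?"""
          deficit = future_target - current_temp
          if deficit <= 0:
              return None  # already warm enough, no pre-ramp needed
          minutes_needed = deficit / RAMP_RATE[zone]
          return future_time - timedelta(minutes=minutes_needed)

      # e.g. bedroom at 16C, future target of 20C at 07:00 -> start roughly 80 minutes earlier
      print(preramp_start("bedroom", 16.0, 20.0, datetime(2021, 1, 26, 7, 0)))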

      The other non-intuitive feature is the differentiation between requests and demands. Whatever is raising a need for heating should not need to check whether it would conflict with an existing demand. So requests are processed as a queue, and every message is run through logic to determine whether the new request can override an existing demand. This is necessary as one schedule might raise a request for heating at a lower temp while, at exactly the same time, another schedule raises a request for a higher temp... or a longer time interval... or a change of state.

      So requests are literally just that. Most will probably be ignored as conflicts or superfluous. They are ad hoc and can be raised by anything that publishes to that topic.

      Demands are the current set of needs the system must respond to once the chaff has been filtered out, one per zone.
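
      In pseudo-Python the arbitration amounts to something like this (a sketch; my real rules also cover time intervals and state changes, and the field names are illustrative):

      def arbitrate(existing_demand, new_request):
          """Decide whether a new request should replace the current demand for its zone."""
          if existing_demand is None:
              return new_request               # nothing active, the request wins
          if new_request["target"] > existing_demand["target"]:
              return new_request               # a higher target temp overrides
          return existing_demand               # otherwise the request is ignored as chaff

      # Requests are drained from the queue one by one; the survivor becomes the demand.
      demand = None
      for request in [{"zone": "office", "target": 19.0}, {"zone": "office", "target": 21.5}]:
          demand = arbitrate(demand, request)
      print(demand)   # {'zone': 'office', 'target': 21.5}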

      A cool feature with the JSON payloads is that they nest their trigger message within them, all the way through the system. So if a radiator is showing as "ON", you can look at the JSON payload in that radiator switch's "state" message and see the history of the event that caused it to be on, right back to the temp sensor reading and the schedule that triggered it.

      The state topic might contain metadata which has the "Control" message in it. That Control message has metadata which contains the Demand message which caused it. The Demand message will contain metadata which has the Request or Future message in it. The Request/Future message will have metadata showing the target and current temps and the schedule which triggered it. That Schedule object shows a summary of that schedule's name, conditions and target temp profile.
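
      Roughly, a radiator state payload ends up nested like this (the field names are illustrative, not the exact schema):

      # Illustrative nesting only; the real field names may differ.
      state_message = {
          "radiator": "office", "state": "ON",
          "meta": {                         # the Control message that switched it
              "control": "radiators/office",
              "meta": {                     # the Demand that raised the control
                  "zone": "office", "target": 21.5,
                  "meta": {                 # the Request/Future behind the demand
                      "target": 21.5, "current": 19.2,
                      "schedule": {"name": "weekday_office", "profile": "workday"},
                  },
              },
          },
      }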

      However, a higher priority feature is required to save on heating wear and tear as well as fuel: "Sympathetic heating", or I'm almost leaning towards "Altruistic heating". If any zone triggers the heating, a dedicated process will immediately look for other zones which, while not actually triggering, might be close enough that they could use a little heating since it's already on, and put a request in on their behalf. This stops the issue of one zone running the heating for 20 minutes, only to be followed 2 minutes later by a request from another zone. It would be more efficient to predict this and heat both at the same time.
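
      A sketch of that check (the margin, field names and example numbers are invented for illustration):

      SYMPATHY_MARGIN = 0.5   # degrees C above target still counts as "close enough"

      def sympathetic_requests(triggered_zone, zones):
          """When one zone fires, request heat for any other zone near its target."""
          extra = []
          for zone, info in zones.items():
              if zone == triggered_zone:
                  continue
              # Not cold enough to trigger on its own, but close enough to piggy-back.
              if info["target"] <= info["current"] < info["target"] + SYMPATHY_MARGIN:
                  extra.append({"zone": zone, "target": info["target"], "current": info["current"]})
          return extra

      zones = {
          "livingroom": {"current": 18.0, "target": 21.0},   # this one triggered
          "office":     {"current": 19.3, "target": 19.0},   # close enough, piggy-backs
          "bedroom":    {"current": 22.0, "target": 18.0},   # warm, left alone
      }
      print(sympathetic_requests("livingroom", zones))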

      As to how likely this is to be of much use to anyone without some Python experience... well, it's getting better. MQTT opens up not just the interfacing, but the core too. So there is no reason why you couldn't throw out one of my components and put the messages through Node-RED instead. In fact you could probably rewrite my code in Node-RED, but I wouldn't envy you trying. I just want to remind you, it's free and open source.
      Last edited by paulca; 26 January 2021, 05:22 PM.

      Comment

      • paulca
        Automated Home Jr Member
        • Jul 2019
        • 31

        #33
        Oh yes.

        For Xmas I got a 10" LCD touch screen for the RPi. Hoping to use Grafana and some custom HTML to create a dashboard/interface for it.

        Comment
