HE vs ST handling of virtual switches


#1

I’ve had a piston to control my garage doors which has worked flawlessly for years. I’ve just moved it to HE, and it no longer works. The logic is rather complex to follow; however, I’ve identified a difference in how HE handles it.

The piston has a virtual switch which can be used to open/close the door. It also gets updated if the door is opened/closed manually.
When the door is closed manually, the piston changes the VS to off (it was on). The piston fires again, but on reaching the “if switch changes to off” it evaluates as false. It seems to be behaving as “if switch is off”.
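The distinction matters: a “changes to off” trigger compares the new value against the previously saved value, whereas an “is off” condition only looks at the current state. A minimal Python sketch of the two semantics (hypothetical, not actual webCoRE code):

```python
def is_off(current):
    """Plain condition: only the current value matters."""
    return current == "off"

def changes_to_off(previous, current):
    """Trigger: true only on a transition from some other value to off."""
    return previous != "off" and current == "off"

# Door closed manually; the piston sets the virtual switch from on to off.
print(is_off("off"))                 # True
print(changes_to_off("on", "off"))   # True

# If the saved "previous" value has already been overwritten to off,
# the trigger never sees a transition, even though the switch is off.
print(changes_to_off("off", "off"))  # False
```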


#2

If it’s a complex piston like you mentioned, it helps if you create a simple piston showing the issue. It will be much easier for @E_Sch to track down the bug if it is.


#3

I was thinking that too!

Here is a sample showing the issue. Both switches were off, and the first was turned on. The second also turned on but failed to display the message.


8/09/2022, 16:32:44 +237ms
+2ms ╔Received event [Test2].switch = on with a delay of 46ms, canQueue: false, calledMyself: true
+4ms ║RunTime initialize > 4 LockT > 0ms > r9T > 1ms > pistonT > 0ms (first state access 3 2 2)
+6ms ║Runtime (6638 bytes) initialized in 1ms (v0.3.114.20220822_HE)
+7ms ║╔Execution stage started
+13ms ║║Condition #2 evaluated false (4ms)
+15ms ║║Condition group #1 evaluated false (condition changed) (6ms)
+18ms ║║Comparison (enum) on changes_to (string) on = false (0ms)
+19ms ║║Condition #7 evaluated false (2ms)
+19ms ║║Condition group #6 evaluated false (condition did not change) (3ms)
+23ms ║╚Execution stage complete. (15ms)
+25ms ╚Event processed successfully (24ms)
08/09/2022, 16:32:44 +205ms
+2ms ╔Received event [Test2].switch = on with a delay of 14ms, canQueue: true, calledMyself: false
+14ms ╚Event queued (12ms)
08/09/2022, 16:32:44 +164ms
+2ms ╔Received event [Test1].switch = on with a delay of 15ms, canQueue: true, calledMyself: false
+8ms ║RunTime initialize > 7 LockT > 1ms > r9T > 1ms > pistonT > 0ms (first state access 5 3 4)
+10ms ║Runtime (6628 bytes) initialized in 1ms (v0.3.114.20220822_HE)
+11ms ║╔Execution stage started
+16ms ║║Comparison (enum) on changes_to (string) on = true (0ms)
+17ms ║║Condition #2 evaluated true (4ms)
+19ms ║║Condition group #1 evaluated true (condition changed) (5ms)
+29ms ║║Executed physical command [Test2].on() (7ms)
+34ms ║║Test1 turned on
+36ms ║║Executed virtual command [Test2].log (2ms)
+41ms ║║Condition #7 evaluated false (3ms)
+41ms ║║Condition group #6 evaluated false (condition did not change) (4ms)
+45ms ║╚Execution stage complete. (34ms)
+48ms ╚Event processed successfully (46ms)


#4

I gave that a quick try, and to my untrained eye it looks like the events are processing too quickly for the piston to catch the second turn-on event. If the same statement is in a different piston, then it logs properly.


#5

I wondered that too, as my original piston did seem to work sometimes on HE.
I also found the two wake-ups of the piston after the second switch had been turned on curious.

Do you think this is a bug, or just the way it works?


#6

I will look at this more.

My first impression is that

  • in the first execution (for test1.on), the check for test2.on is correctly assessed as “did not change by this event”; but test2 has changed, so the saved state is updated to the new value. When the event for test2.on is then processed, the saved value is already on, hence it reports “did not change”.

I will look at the code to see what can be done here.

I do agree the near-term workaround is another piston, or reversing the checks (assuming the events come in a specific order…if the order can vary, then the separate piston may be the safer workaround).

Another workaround is to put an exit() into the piston once the first one is found (i.e. there is no need to check the other values or update them). This may not always work due to timing conditions, but for the simple two-switch case it may. Separate pistons will work for sure.
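The behaviour described above can be sketched in a few lines of Python (hypothetical model, not actual webCoRE source): the saved value behind a “changes to” condition is updated on every execution, even when the event that woke the piston came from a different device.

```python
class ChangesToOn:
    """Models 'switch changes to on' with a saved previous value."""

    def __init__(self, device):
        self.device = device
        self.saved = "off"

    def evaluate(self, event_device, states):
        current = states[self.device]
        # Only the device that fired the event can register a change...
        changed = (event_device == self.device
                   and self.saved != current
                   and current == "on")
        # ...but the saved state is updated unconditionally (the bug).
        self.saved = current
        return changed

cond = ChangesToOn("Test2")
states = {"Test1": "on", "Test2": "on"}

# Execution 1: woken by Test1.on, which also turned Test2 on.
print(cond.evaluate("Test1", states))  # False -- correct, but saved is now "on"

# Execution 2: the queued Test2.on event arrives, yet the saved value
# is already "on", so the condition reports "did not change".
print(cond.evaluate("Test2", states))  # False -- the change is missed
```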

So I’ll look into this.


#7

That was the impression I was getting too. I’d rather assumed the state would only be saved if it was a test2 switch event being evaluated, precisely because this sort of thing could potentially happen, but it certainly doesn’t look that way.
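A sketch of that idea (an assumption about how a fix might look, not the actual code change): only update the saved value when the event being processed belongs to the condition’s own device.

```python
class ChangesToOnFixed:
    """'switch changes to on', saving state only for the triggering device."""

    def __init__(self, device):
        self.device = device
        self.saved = "off"

    def evaluate(self, event_device, states):
        if event_device != self.device:
            return False  # another device's event: leave saved state alone
        current = states[self.device]
        changed = self.saved != current and current == "on"
        self.saved = current
        return changed

cond = ChangesToOnFixed("Test2")
states = {"Test1": "on", "Test2": "on"}

print(cond.evaluate("Test1", states))  # False -- saved stays "off"
print(cond.evaluate("Test2", states))  # True  -- the transition is seen
```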


#8

I have pushed an update for this.

Run an HPM Repair to try it.


#9

I’ve performed the repair, but it still seems the same.

11/09/2022, 19:40:18 +555ms
+3ms ╔Received event [Test2].switch = on with a delay of 53ms, canQueue: false, calledMyself: true
+5ms ║RunTime initialize > 5 LockT > 0ms > r9T > 2ms > pistonT > 1ms (first state access 3 3 2)
+9ms ║Runtime (6643 bytes) initialized in 2ms (v0.3.114.20220822_HE)
+10ms ║╔Execution stage started
+19ms ║║Condition #2 evaluated false (6ms)
+22ms ║║Condition group #1 evaluated false (condition changed) (9ms)
+27ms ║║Comparison (enum) on changes_to (string) on = false (0ms)
+28ms ║║Condition #7 evaluated false (4ms)
+29ms ║║Condition group #6 evaluated false (condition did not change) (7ms)
+40ms ║╚Execution stage complete. (30ms)
+43ms ╚Event processed successfully (41ms)
11/09/2022, 19:40:18 +439ms
+3ms ╔Received event [Test1].switch = on with a delay of 16ms, canQueue: true, calledMyself: false
+30ms ║RunTime initialize > 29 LockT > 1ms > r9T > 21ms > pistonT > 20ms (first state access 7 4 25)
+34ms ║Runtime (6625 bytes) initialized in 21ms (v0.3.114.20220822_HE)
+36ms ║╔Execution stage started
+43ms ║║Comparison (enum) on changes_to (string) on = true (1ms)
+46ms ║║Condition #2 evaluated true (7ms)
+49ms ║║Condition group #1 evaluated true (condition changed) (10ms)
+68ms ║║Executed physical command [Test2].on() (14ms)
+78ms ║║Test1 turned on
+80ms ║║Executed virtual command [Test2].log (2ms)
+87ms ║║Condition #7 evaluated false (4ms)
+88ms ║║Condition group #6 evaluated false (condition did not change) (7ms)
+92ms ║╚Execution stage complete. (58ms)
+97ms ╚Event processed successfully (94ms)
11/09/2022, 19:40:18 +518ms
+2ms ╔Received event [Test2].switch = on with a delay of 16ms, canQueue: true, calledMyself: false
+3ms ╚Event queued (2ms)


#10

I’ve rebooted the hub and tried again (without performing another repair), and it’s worked as expected.

11/09/2022, 19:46:42 +878ms
+4ms ╔Received event [Test2].switch = on with a delay of 104ms, canQueue: false, calledMyself: true
+85ms ║RunTime initialize > 84 LockT > 0ms > r9T > 80ms > pistonT > 0ms (first state access 4 4 80)
+91ms ║Runtime (6650 bytes) initialized in 80ms (v0.3.114.20220822_HE)
+92ms ║╔Execution stage started
+104ms ║║Condition #2 evaluated false (8ms)
+108ms ║║Condition group #1 evaluated false (condition changed) (12ms)
+114ms ║║Comparison (enum) on changes_to (string) on = true (1ms)
+117ms ║║Condition #7 evaluated true (7ms)
+120ms ║║Condition group #6 evaluated true (condition changed) (10ms)
+129ms ║║test2 turned on
+132ms ║║Executed virtual command log (3ms)
+141ms ║╚Execution stage complete. (50ms)
+149ms ╚Event processed successfully (146ms)
11/09/2022, 19:46:42 +709ms
+4ms ╔Received event [Test1].switch = on with a delay of 26ms, canQueue: true, calledMyself: false
+15ms ║RunTime initialize > 13 LockT > 1ms > r9T > 2ms > pistonT > 0ms (first state access 10 5 8)
+20ms ║Runtime (6633 bytes) initialized in 2ms (v0.3.114.20220822_HE)
+23ms ║╔Execution stage started
+32ms ║║Comparison (enum) on changes_to (string) on = true (1ms)
+35ms ║║Condition #2 evaluated true (8ms)
+38ms ║║Condition group #1 evaluated true (condition changed) (11ms)
+68ms ║║Executed physical command [Test2].on() (24ms)
+76ms ║║Test1 turned on
+79ms ║║Executed virtual command [Test2].log (3ms)
+87ms ║║Condition #7 evaluated false (4ms)
+88ms ║║Condition group #6 evaluated false (condition did not change) (7ms)
+97ms ║╚Execution stage complete. (76ms)
+126ms ╚Event processed successfully (122ms)
11/09/2022, 19:46:42 +800ms
+14ms ╔Received event [Test2].switch = on with a delay of 26ms, canQueue: true, calledMyself: false
+17ms ╚Event queued (3ms)


#11

I’ve tested this today on the original garage door piston, and it’s working as it did in ST.

Thanks for the change. Will this be rolled out as a general release?


#12

It is in the release stream (you got it via HPM).

I have not forced everyone to update to it yet, simply to limit the number of forced updates.