Intelligent Echo Device Location change piston


#18

Given that Echo/Alexa does not (yet?) give programmers access to audio levels for determining presence, other means are required for now. Since I would prefer this to be automated, I've been thinking about it a bit.

How do we determine presence in a room?
If we have enough home-automation devices and know the habits of the people around us, perhaps we can use the status of those devices.

Here's an example. If my wife is in the bedroom after dinner, a particular dimming outlet is almost always ON. Additionally, the TV in that room is almost always on. And if it's after 10pm and those devices have been turned off, it means she's asleep… but still in that room.

Likewise, if there's motion in the kitchen/dining area around 6:30-7pm, odds are she's there. And in two months, when the kitchen has been redone with smart switches, outlets, and Hue lights, I could use their on/off status to say she is likely there. Or I could simply acknowledge that she cooks 70% of the suppers, so she's likely to be there during that time span.

And so any Echo Speaks messages that are for her should be directed to those areas under those conditions.
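Just to make the idea concrete, here is a rough sketch of how rules like these could be evaluated, written as plain Python rather than a piston; the function name, device inputs, and time windows below are made up for illustration and are not real SmartThings or webCoRE attributes:

    from datetime import time

    def likely_room(now, bedroom_dimmer_on, bedroom_tv_on, kitchen_motion):
        # All inputs are illustrative placeholders, not actual device attributes.
        if kitchen_motion and time(18, 30) <= now <= time(19, 0):
            return 'kitchen'    # she cooks most suppers in this window
        if bedroom_dimmer_on or bedroom_tv_on:
            return 'bedroom'    # the dimming outlet and TV are almost always on when she is there
        if now >= time(22, 0):
            return 'bedroom'    # devices off after 10pm usually means asleep, but still in that room
        return None             # no confident guess

    # Example: 10:30pm with everything off -> 'bedroom'
    print(likely_room(time(22, 30), False, False, False))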

Here's another: in our home office we have a new PC. I'd love to find a way to set it up so that whichever of us is logged on to that machine and using it at that moment, ST could recognize it and register me or her as present in that room. Likewise, I keep my business suits in the closet in that office and get dressed for work there, so any Speaks messages meant for me should play there between 6:30 and 7:00am on weekdays.


#19

Interesting discussion. The question of interest in any given home is whether you want automated devices or automation. I've found some things make sense to automate. For example, if I have either garage door open, the Echo device in the family room announces every five minutes which door has been left open and for how long. Generally, I don't want the garage left open unattended. Also, I have a tilt sensor on each garage door, and when the doors are successfully closed I receive a spoken confirmation in the family room. I voice command my garage doors and I want to know they have closed.

Another example: my wife and I tend to go in and out of the kitchen in the evenings for a few minutes here and there, always carrying something, so it is not convenient to turn on a light, nor do we require the full lighting that would be needed to cook. So we have a Philips Hue strip light on top of the cabinets that we call "Kitchen Mood". The kitchen has an EZMultiPLI that detects motion when someone enters the kitchen after sunset and before sunrise, turns "Kitchen Mood" on, leaves it on until the room has been unoccupied for 5 minutes, and then turns it off. The EZMultiPLI also has an LED light. There is a piston that turns that LED red when Smart Home Monitor is in the armed mode and green when it is in the disarmed mode. That same piston also controls a color Hue light bulb in the ceiling of the garage similarly.

A further example: in our bedroom there is a Sonos sound bar connected to the bedroom TV, two Sonos Play:1 speakers acting as surrounds, and a Sonos Sub. This system is used for both TV sound and music. After the TV has been watched, the volume level is typically much higher if you happen to switch over to music. This is a basic level mismatch between the two sources, and neither Sonos nor Vizio has an easy fix. So there is an automation that adjusts the Sonos volume down after the TV has been played.

You can see from these examples that these automations make sense. I've never felt automation makes sense for things like the internet-controlled crock pot or the ceiling fan that comes on when the room temperature reaches a certain level. The reason is that I revel in the fact that I have added smart switches to control almost all lights in the house, plus music and video selection. Experiments like turning certain lights on when I turn the TV on, or turning the ceiling fan on at a certain temperature, have not been useful for me personally. I really love controlling those devices via voice, but it is a stretch to say that having them happen automatically makes things better.

One more example of useful automation: I have both my washer and dryer plugged into smart wall plugs of the energy-monitoring type. My washer and dryer are very basic models with no smarts at all. They are in a closed utility room and I can't hear when they finish. I have a piston that announces in the living area when the washer has completed and when the dryer has completed, which works simply by monitoring when they stop drawing watts (a rough sketch of that logic is below).

I have very few lights in the house that are not automated. The light in the laundry room is not automated because whenever I go in there the switch is in a very handy position, even if I am carrying a basket of clothes. The light in the hallway is not automated because I rarely use it, and also because I do not automate the lights in closets.
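For what it's worth, here is a rough sketch of the watt-monitoring idea behind that washer/dryer piston, written as plain Python just to show the logic; the threshold and sample-count numbers are assumptions, not the actual piston values:

    IDLE_WATTS = 5        # assumed: below this, the appliance is effectively idle
    IDLE_SAMPLES = 12     # assumed: this many consecutive idle readings means the cycle is done

    def cycle_finished(recent_watts):
        # recent_watts: the most recent power readings from the smart plug, newest last.
        tail = recent_watts[-IDLE_SAMPLES:]
        return len(tail) == IDLE_SAMPLES and all(w < IDLE_WATTS for w in tail)

    # Example: washer winding down, then drawing almost nothing for a stretch
    readings = [450, 430, 12, 3] + [1] * 12
    if cycle_finished(readings):
        print('The washer has completed.')   # in the piston, this is the spoken announcement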
My thought is that unoccupied areas don’t add a lot of automation value for me personally. I have three motion detectors in the house and I use them almost entirely for Smart Home Monitor. Even if I had a reliable and personally identifiable means of determining room occupancy, I am not sure that granularity would add a lot of value to my level of automation. I would be very interested in hearing more input on what @Glen_King and I have discussed.


#20

Ok, one item taken care of. I'm using EventGhost on my PC to determine whether someone is logged on. It can't determine WHO is logged on, but it can tell whether someone is logged on, or whether the session has timed out or locked.

And from there, I can generally determine presence in my office.

I have it sending a Python command string to a webCoRE endpoint.
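For anyone who wants to try it, the command string is basically a one-liner hitting a webCoRE external URL from EventGhost's Python Command action; the URL below is a placeholder for your own piston's external URL, and EventGhost runs Python 2, hence urllib.urlopen:

    # Placeholder URL: paste your own piston's external URL from the webCoRE dashboard
    import urllib; urllib.urlopen('https://graph.api.smartthings.com/api/token/YOUR-TOKEN/execute/YOUR-PISTON-ID')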

EDIT: turns out you CAN access who is logged on. It’s called a payload. I just have to script it.


#21

I do something similar with a free program for Windows called EventGhost. It can send and receive messages both to and from webCoRE, as well as pass variables in both directions.

Here is an example that tells webCoRE if my PC is on, off, in use, or at screensaver mode.
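One way to wire that up (not necessarily exactly how mine is built) is to have each macro fire a Python Command action that hits a webCoRE external URL with a state argument. A sketch, with the URL and the 'pcstate' argument name as placeholders; the exact EventGhost events to attach the macros to will vary by system, so check your own EventGhost log:

    # Python Command action used inside each EventGhost macro.
    # Set 'state' per macro: 'on', 'off', 'in_use', or 'screensaver'.
    import urllib
    state = 'screensaver'
    urllib.urlopen('https://graph.api.smartthings.com/api/token/YOUR-TOKEN/execute/YOUR-PISTON-ID?pcstate=%s' % state)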

It looks so cool: when my screen saver kicks in, the lights in the den fade… and when I type my password to log back into my desktop, the lights return to their normal levels.

I also do a variation of this that tells webCoRE when certain apps are executed on the PC.


Edit:
I just saw your last post Glen… LOL
Great minds think alike!


#22

I got it to work as individual Python statements, but the if/elif script is not working.

My script creates a variable called pcuser then populates it, then tries to act on values held in it.

  1. pcuser = eg.event.payload
  2. print pcuser (just to ensure it's in the log)
  3. if pcuser == 'glen'
  4. import urllib; urllib.urlopen('webCoRE external URL')

Up to step 2 it's fine: it prints the username into the log.
But if that's my name, it should then execute step 4, which is identical to the Python statement that executed perfectly well on its own.

Do I have to make more than one line out of that when running in a script?


#23

I am not an expert on Python scripting, but I sometimes turn a payload into a new trigger, and let that trigger activate webCoRE. Example here

An alternative method is to simply pass the pcuser variable to webCoRE, and let webCoRE handle the logic. Example here.
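Roughly, the first approach looks like this; a sketch that only runs inside EventGhost (where the eg object is available), and the event prefix and suffix names are just examples:

    # EventGhost Python Script action: re-raise the incoming payload as a named event.
    # A separate macro listening for the new event (e.g. 'PC.User.glen') then calls webCoRE.
    user = str(eg.event.payload[0]).lower()
    eg.TriggerEvent('User.' + user, prefix='PC')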


#24

Success. I created a script that builds the URL from the contents of eg.event.payload:

  # EventGhost Python Script action (EventGhost uses Python 2, so urllib.urlopen works here)
  pcuser = eg.event.payload[0]

  # Placeholders below: substitute your own webCoRE token and piston ID
  url = 'https://graph.api.smartthings.com/api/token/your token here/execute/:your other piston id info:?user=%s' % pcuser

  import urllib; urllib.urlopen(url)

Turns the computer from an ‘occupancy’ sensor to a ‘presence’ sensor.


#25

Nicely done!!


#26

OK… I know this may seem like a rehash (which, in a sense, it may be), but it is not a moot point, at least as I read this thread so far.

I also know that this is not the main point of the thread, but since it came up and seems relevant, I thought I'd address it.

It is NOT true that Alexa knows exactly which Alexa device you're closest to at the moment you use the wake word. It actually doesn't know for sure until after it gathers the voice data from that wake-word event and does whatever calculations it does on it.

How do I know this?
I know this because, when I use the wake word while in proximity to more than one Alexa device (there is a spot where my voice can 'straddle' three of them), they all wake up, and it takes a moment for the system to figure out which of the three I'm actually closest to. Only then, after the wake word has been used, does it know which one I'm closest to and respond accordingly.

Again, I know this isn’t the main point, but since you specifically stated this twice so far in this thread, I decided it was valid enough to offer another point of view on this specific sub-point of the discussion.


#27

Yes, it takes a moment.
And once in awhile, it gets it wrong.

But in almost all instances, it gets it right; the Echo closest to where you were at the moment you said the wake word is almost always the one that responds or performs the requested action. To do that, it had to have the data available at the moment you used the wake word. Yes, it had to perform some calculations, so it took a moment before choosing which Echo to reply with… but it did it.

From a programming perspective, it could treat these announcements in a similar fashion. It would not have to be constantly calculating. It could take measurements and run the comparison once every five minutes, and make the announcement on the device or devices which most recently had the highest ambient noise levels. That would be one of many possible approaches.
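Purely to illustrate that approach (Amazon does not expose ambient noise levels to us, so the readings below are hypothetical):

    # Hypothetical: most recent ambient noise reading per Echo, sampled every five minutes.
    latest_noise = {'Kitchen Echo': 48.5, 'Bedroom Echo': 31.0, 'Office Echo': 62.3}

    # Announce on whichever device heard the most noise in the last sample window.
    target = max(latest_noise, key=latest_noise.get)
    print('Send announcement to: ' + target)    # -> Office Echo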
———————————————-

Back to the original topic of "follow me" and Alexa. Now that I have set up my PC as a presence device, I don't even need Alexa to make announcements on that thing. Variables can be passed from webCoRE to EventGhost, and EventGhost can read them out. So if someone is on the PC and SmartThings/webCoRE has messages personalized for them, or general messages they would need to hear anyway, it can be done through the PC.
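One way to handle the read-out on the Windows side, as a sketch: have webCoRE hit EventGhost's Webserver plugin, and have the resulting event trigger a Python Script action that speaks the payload with Windows SAPI. The payload handling below is an assumption and depends on how you send the text; win32com should be available in EventGhost's bundled Python, but that is worth verifying:

    # EventGhost Python Script action: speak whatever text arrived with the triggering event.
    import win32com.client
    text = str(eg.event.payload) if eg.event.payload else 'No message received'
    win32com.client.Dispatch('SAPI.SpVoice').Speak(text)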


#28

I agree…mostly…
My response had nothing to do with how long it takes for Alexa to run the algorithm, or how accurate it is.
I was simply addressing the one simple point that it does NOT know 'at the moment' we wake it; it takes a moment (however short) to gather and crunch data before it makes that decision.

Now, admittedly, in a vacuum, that point is meaningless. However…

(Again, back on topic)
If I'm right about my supposedly off-topic point, then no, Alexa wouldn't always know, or be able to use such data to help us with individualized micro-location… at least not until we wake it. And the whole point of the thread is to find a way to have individual micro-location as an 'always-on' sort of thing, so that the user doesn't have to constantly manage it with one sort of interaction or another.

This seems problematic for the Alexa-based concept…that is, unless they modify their algorithms to be always tracking individual micro-location for us…which…it sounds like some here may be afraid of anyway. :o


#29

I have read through this discussion with interest, but I am not sure I completely understand these pistons. I want to be able to identify (with webCoRE) which Echo Dot I am talking to. I want to be able to say "Alexa, play my 80's hits playlist" and then use webCoRE to turn on the speakers in that area and play my music. Currently I do this by saying "Alexa, play my 80's hits playlist in the workshop", which fires a piston that turns on my workshop speakers and tells Kodi to play my 80's hits playlist. The catch is that my wife must remember the keywords for the groups of speakers throughout our house. So, can we identify which Echo Dot hears the command?


#30

Try the piston below, which was posted in the ST forum. You could also store the result in a global variable; you can then build routines that check which device is stored there and act on that.

https://discourse-cdn-sjc1.com/smartthings/uploads/default/original/3X/4/6/4674623928dc771c96b0d9ccf74585432f647b41.png


#31

Do you remember where you got this? I need help with it. I presume that Music Player 1, 4, and 5 are Echo devices, but how do I get webCoRE to list those?


#32

I wrote the Intelligent Echo Device Location change piston. In order to access your Echo devices, you need to 1) install the "Echo Speaks" SmartApp via the Community Installer, and 2) go into the webCoRE SmartApp and authorize access to the discovered Echo devices.


#33

Echo Speaks.


#34

Yes, install the “Echo Speaks” SmartApp.


#35

A little time has passed since the last post. Any new options for Echo device location other than Echo Speaks?


#36

Echo Speaks makes your Echo Devices available to ST. I don’t know of another program that does so. Were you not able to get it to work?


#37

@Pantheon
"Location" is a utility for Echo Speaks; the idea is to move where Echo Speaks responds. Echo Speaks is a function that allows an Echo device to perform text-to-speech under webCoRE program control, and this location-change piston drives which Echo device in your home will speak. Please explain what you mean by another option for device location other than "Echo Speaks".