Intelligent Echo Device Location change piston


I do something similar with a free program for Windows called EventGhost. It can send messages to and receive messages from webCoRE, and pass variables in both directions.

Here is an example that tells webCoRE if my PC is on, off, in use, or at screensaver mode.

It looks so cool when my screen saver kicks in, my lights in the Den will fade… and when I type in my password to log into my desktop again, the lights return to the normal levels.

I also do a variation of this that tells webCoRE when certain apps are executed on the PC.

I just saw your last post Glen… LOL
Great minds think alike!


I got it to work as individual Python statements, but the if/elif script is not working.

My script creates a variable called pcuser, populates it, and then tries to act on the values it holds.

  1. pcuser = eg.event.payload
  2. Print pcuser (just to ensure it's in the log)
  3. If pcuser == 'glen'
  4. Import urllib; urllib.urlopen('webCoRE external URL')

Up to step 2 it's fine - it prints the username into the log.
But if that's my name, it should then execute step 4, which is identical to the Python statement that executed perfectly well on its own.

Do I have to make more than one line out of that when running in a script?
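For what it's worth, here is a minimal sketch of how that script might look once the usual Python gotchas are fixed — assuming the errors are the ones visible in the steps above (Python is case-sensitive, so `if`/`print`, not `If`/`Print`; each `if`/`elif` header needs a trailing colon; and the action must be indented under its branch). The URLs are placeholders, not real endpoints:

```python
# Sketch only -- the piston URLs below are placeholders for your real
# webCoRE external URLs.

def url_for_user(pcuser):
    # Case-sensitive keywords, trailing colons, and indentation are the
    # usual reasons an if/elif script fails after the one-liners worked.
    if pcuser == 'glen':
        return 'WEBCORE_EXTERNAL_URL_GLEN'
    elif pcuser == 'guest':
        return 'WEBCORE_EXTERNAL_URL_GUEST'
    return None

pcuser = 'glen'                    # stands in for eg.event.payload
url = url_for_user(pcuser)
if url is not None:
    # In EventGhost (Python 2) the call would be:
    #   import urllib; urllib.urlopen(url)
    pass
```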


I am not an expert on Python scripting, but I sometimes turn a payload into a new trigger, and let that trigger activate webCoRE. Example here

An alternative method is to simply pass the pcuser variable to webCoRE, and let webCoRE handle the logic. Example here.


Success. Created a script that concatenates the URL with the contents of eg.event.payload:

  1. pcuser = eg.event.payload[0]
  2. url = 'token here/execute/:your other piston id info:?user=%s' % pcuser
  3. import urllib; urllib.urlopen(url)

Turns the computer from an ‘occupancy’ sensor to a ‘presence’ sensor.
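The three steps above amount to formatting the payload into the piston's external URL. A sketch of the same idea, with one addition the original script didn't have — URL-encoding the username via `quote()` in case it contains spaces or other unsafe characters (the base URL is a placeholder):

```python
from urllib.parse import quote   # in EventGhost's Python 2: from urllib import quote

# Placeholder for the webCoRE external URL up through the piston id.
BASE_URL = 'token here/execute/:your other piston id info:'

def build_url(payload):
    pcuser = payload[0]          # eg.event.payload is a sequence; take the first item
    # quote() is an addition: it URL-encodes the username before appending it.
    return '%s?user=%s' % (BASE_URL, quote(pcuser))
```

In EventGhost itself the last line of the script would then be `import urllib; urllib.urlopen(build_url(eg.event.payload))`.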


Nicely done!!


OK…I know this may seem like a rehash (which it may be, in a sense), but it is not a moot point (at least as I read this thread thus far).

I also know that this is not the main point of the thread, but since it came up, and seems relevant, I thought I’d address it.

It is NOT true that Alexa knows exactly which Alexa device you’re closest to at the moment you use the wake word. It actually doesn’t know for sure until after it gathers voice data from the wake-word event and runs whatever calculations it does on that data.

How do I know this?
Because when I use the wake word while in proximity to more than one Alexa device (e.g. there is a spot where my voice can ‘straddle’ three of them), they all wake up, and it takes a moment for the system to figure out which of the three I’m actually closest to. Only then (i.e. after the wake word) does it know exactly which one I’m closest to, and respond accordingly.

Again, I know this isn’t the main point, but since you specifically stated this twice so far in this thread, I decided it was valid enough to offer another point of view on this specific sub-point of the discussion.


Yes, it takes a moment.
And once in a while, it gets it wrong.

But in almost all instances, it gets it right; the Echo closest to where you were when you said the wake word is almost always the one that responds or performs the requested action. To do that, it had to have the data available at the moment you used the wake word. Yes, it had to perform some calculations, so it took a moment before choosing which Echo to reply with… but it did it.

From a programming perspective, it could treat these announcements in similar fashion. It would not have to be constantly calculating. It could take measurements and run the comparison once every five minutes, and make the announcement to the device or devices that most recently had the highest ambient noise levels. That would be one of many possible approaches.
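That polling idea can be sketched in a few lines. Everything here is hypothetical — there is no public API for Echo ambient-noise readings — it only illustrates the selection step that would run after each five-minute sample:

```python
# Toy sketch of the approach above: poll ambient-noise readings on a fixed
# interval, then announce on whichever device was loudest in the latest
# sample. Device names and readings are made up for illustration.

def loudest_device(noise_by_device):
    """Return the device whose latest ambient-noise reading is highest."""
    return max(noise_by_device, key=noise_by_device.get)

latest_sample = {'Kitchen Echo': 42.0, 'Den Echo': 57.5, 'Office Echo': 31.2}
target = loudest_device(latest_sample)   # the announcement would go to 'Den Echo'
```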

Back to the original topic of “follow me” and Alexa. Now that I have set up my PC as a presence device, I don’t even need Alexa to make announcements on it. Variables can be passed from webCoRE to EventGhost, and EventGhost can read them out, so if someone is on the PC and SmartThings/webCoRE has messages personalized for them, or general messages they would need to hear anyway, it can be done through the PC.


I agree…mostly…
My response had nothing to do with how long it takes Alexa to run the algorithm, or how accurate it is.
I was simply addressing the one simple point that it does NOT know ‘at the moment’ we wake it, but takes a moment (however short) to gather and crunch data before it makes that decision.

Now, admittedly, in a vacuum, that point is meaningless. However…

(Again, back on topic)
If I’m right about my supposedly off-topic point, then no, it (Alexa) wouldn’t always know or be able to use such data to help us with individualized micro-location…well, at least, not until we wake it, and the whole point of the thread is to find a way of having individual micro-location as an ‘always-on’ sort of thing so that the user doesn’t have to constantly manage it with one sort of interaction, or another.

This seems problematic for the Alexa-based concept…that is, unless they modify their algorithms to be always tracking individual micro-location for us…which…it sounds like some here may be afraid of anyway. :o


I have read through this discussion with interest. But I am not sure I completely understand these pistons. I am wanting to be able to identify (with webcore) which Alexa dot I am talking to. I want to be able to say “Alexa, play my 80’s hits playlist” and then use webcore to turn on the speakers in that area and play my music. Currently I do this by saying “Alexa, play my 80’s hits playlist in the workshop” and that fires a piston that turns on my workshop speakers and tells Kodi to play my 80’s hits playlist. The catch here is that my wife must remember the keywords for the groups of speakers throughout our house. So, can we identify which Alexa dot hears the command?


Try the piston below, which was posted in the ST forum. You could also make it a global variable; you can then build routines that check which device is stored there, and act on that.


Do you remember where you got this? I need help with it. I presume that Music Player 1, 4, and 5 are Echo devices, but how do I get webCoRE to list those?


I wrote the Intelligent Echo Device Location change piston. To access your Echo devices you need to 1) install the “Echo Speaks” SmartApp via the Community Installer, and 2) go into the webCoRE SmartApp and authorize access to the discovered Echo devices.


Echo Speaks.


Yes, install the “Echo Speaks” SmartApp.


A little time has passed since the last post. Any new options for Echo device location other than Echo Speaks?


Echo Speaks makes your Echo Devices available to ST. I don’t know of another program that does so. Were you not able to get it to work?


“Location” is a utility within Echo Speaks; the idea is to move where Echo Speaks responds. Echo Speaks is a SmartApp that allows an Echo device to perform text-to-speech under webCoRE program control, and this location change piston drives which Echo device in your home will speak. Please explain what you mean by another option for device location other than “Echo Speaks”.