Why do devices with 'ColorTemperature' trait in rooms receive commands for 'ColorSpectrum'? - actions-on-google

To illustrate an example scenario that prompted this question, please consider the following:
A room, "attic", containing 3 devices: one light that supports both ColorSpectrum and ColorTemperature, one that supports only ColorTemperature, and one that supports only ColorSpectrum (all 3 also support OnOff and Brightness, but this appears to be irrelevant).
"Set the attic to warm white" will result in two of the lights receiving a temperature value in Kelvin, whereas the third (which didn't support ColorTemperature) will receive an rgb/hsv color value approximating the correct hue of white.
Conversely: "Set the attic to red" will result in all 3 lights receiving an rgb/hsv color value (including the light that does not support the ColorSpectrum trait).
We are unsure how a light that supports only ColorTemperature is supposed to respond to an rgb/hsv value. Since that light cannot execute the user's command, this final scenario left us with 3 options for our response:
Lie and respond 'SUCCESS' for all 3 lights, "Ok, changing 3 lights to red."
Omit the third light from the response entirely, "Ok, changing 3 lights to red."
Respond with "notSupported" 'ERROR' for the third light, "Ok, changing 2 lights to red. That mode isn't available for the LIGHT_3."
Option 1 is clearly undesirable: incorrect feedback is worse than no feedback at all.
Option 2 is equivalent to 1, though it seems odd that Google Home should assume that a device omitted from a response was successfully processed.
We consider option 3 less than ideal as well, as we expect the user may tire of hearing that a certain light in their room cannot change color when they may be perfectly aware of this fact. Our preference would be the response: "Ok, changing 2 lights to red." We feel this communicates clearly that one light did not change, without the potentially superfluous error message.
Our question, then, is how we might realize this?
Is the behavior listed above unintended (a bug)?
Is there some response that we are unaware of that can be used to communicate to Google Home that a device simply is not eligible for the provided execution?
Is the behavior listed above not experienced by others or the result of a mistake on our part?
Thank you for reading.

I'm going to look into this scenario, as it may be a bug that devices receive commands for traits they don't support.
Canonically, option three may be undesirable, but it is the right implementation. Trying to ignore the command would also create a bad user experience, as the user receives an incorrect reply.
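For what it's worth, option 3 maps onto the EXECUTE response format roughly as sketched below (the device IDs and color value are illustrative, not from the original question):
const executeResponse = {
  requestId: request.requestId, // echo the requestId from the EXECUTE request
  payload: {
    commands: [
      {
        ids: ['light-1', 'light-2'], // the two lights that changed
        status: 'SUCCESS',
        states: { online: true, color: { spectrumRGB: 16711680 } } // 0xFF0000, red
      },
      {
        ids: ['light-3'], // the ColorTemperature-only light
        status: 'ERROR',
        errorCode: 'notSupported'
      }
    ]
  }
};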

Related

Dialogflow, Google Assistant: Getting error "MalformedResponse 'final_response' must be set"

I have three intents, "Cold Exposure", "Poisoning" and "Frostbite". Each intent has suggestion chips to move to the next intent, i.e. the "Cold Exposure" shows a chip of "Poisoning" and "Poisoning" shows a suggestion chip of "Frostbite".
All are follow up intents of the Default Welcome Intent, so all have the "Default Welcome Intent- followup" input context.
My problem is, when I call Cold Exposure and then call Poisoning, there's no problem. However, when I call Cold Exposure, then Poisoning, and then Frostbite, I get the error "MalformedResponse 'final_response' must be set." I'm not able to call any three intents back to back, and I really don't know why this is happening. I'm using the v2 API.
This is the only error I have in my program, so it would be great if this could be solved quickly. This is a screenshot of my intents.
What is the lifespan of the "Default Welcome Intent- followup" output context in the Default Welcome Intent?
By default, when you create followup Intents, the output context lifespan set in the root Intent is 2. Each action decrements this count and the context disappears when the count reaches 0.
Try increasing the lifespan (in the grey circle next to the output context name) to something like 10 (or any other number you see fit) and remove it manually when needed in later intents (by setting it as output context with a lifespan of 0).
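If you are using webhook fulfillment rather than the console alone, the same lifespan can be set via the outputContexts field of the v2 webhook response; a minimal sketch, with placeholder project and session IDs:
const webhookResponse = {
  fulfillmentText: 'Here is the Frostbite advice...', // the reply to the user (illustrative)
  outputContexts: [{
    // the fully qualified context name required by the v2 API
    name: 'projects/<PROJECT_ID>/agent/sessions/<SESSION_ID>/contexts/default-welcome-intent-followup',
    lifespanCount: 10 // keep the context alive for 10 more conversational turns
  }]
};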
Florent.
I actually wouldn't have expected that the followup intents would work the way you're trying. They're all followups to the original intent, rather than to each other. So it could be that the lifespan of the original intent's Context has expired by the third one. In this case, it would revert to the Fallback Intent.
But the reason for the error message itself is that you're not sending back a reply. If you're using a fulfillment, it means it isn't sending a reply. If you're not, it could be that the "Frostbite" Intent doesn't have a reply set or that your Fallback Intent doesn't have a reply set.

Switch monitor position in Weston

Is it possible to indicate monitor position in Weston/Wayland?
I have two monitors and been testing the Weston compositor, but I have been unable to indicate which monitor should be the main one (or which one should show the "left part" of the screen).
Checking the weston.ini docs (http://manpages.ubuntu.com/manpages/xenial/en/man5/weston.ini.5.html) I found info about setting resolution, scaling and transform/rotation, but nothing about the position of the monitors.
I was interested in the same thing a few weeks ago and sent the question to the Wayland IRC channel.
You can have a look at:
https://people.freedesktop.org/~cbrill/dri-log/index.php?channel=wayland&highlight_names=&date=2017-12-19
https://people.freedesktop.org/~cbrill/dri-log/index.php?channel=wayland&highlight_names=&date=2017-12-20
Here is the relevant part:
22:31 maggu2810: Is there any change to change the display / output position in weston? I am using three outputs but don't know how to configure which one if left / right of the other one.
22:33 maggu2810: ... chance to change ...
22:35 maggu2810: the output sections in weston.ini currently contains the name, the mode and the scale. I didn't find any "position" option or anything that looks like an option to modify the "order" of the displays.
07:07 pq: maggu2810, yes, not implemented yet, there were indeed some WIP patches in 2016
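For reference, this is roughly what an [output] section supports today according to the weston.ini man page linked above (the output name is illustrative); note that there is no position or ordering key:
[output]
name=HDMI-A-1
mode=1920x1080
scale=1
transform=normal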
Perhaps you can post a feature request on their mailing list. I don't think they will look for feature requests on SO ;)

Anylogic - Events triggered by message

I'm new to AnyLogic and I'm trying to build an ABM SIRS model for pertussis in Italy, but I'm stuck because I want infected agents to send a message to all the agents they are connected with.
I want the message to be a number (a level of infectivity in [0,1]) rather than a string, and then the real problem is: once an agent gets this message, it should become infected with probability equal to the number in the message.
[Screenshot: sending the message]
[Screenshot: once the message is received]
Thanks!
Try this.
To understand it, put your cursor into the "Expression" code box and hover over the little light bulb in the top left corner.
Also refer to my blog post on the little light bulb, which can be a life-saver in these situations: the magic lightbulb
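A minimal sketch of the expressions involved, assuming AnyLogic's send() with the ALL_CONNECTED destination and a message of type double (the 0.3 infectivity value and the state names are illustrative):
// In the infected agent, e.g. on entering the Infected statechart state:
// broadcast the infectivity level (a double in [0,1]) to all connected agents
send(0.3, ALL_CONNECTED);

// In the receiving agent, on the Susceptible-to-Infected transition:
// trigger type: message (of type double); guard expression below, so the
// agent becomes infected with probability equal to the received value
randomTrue((double) msg)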

Why is my "waterproof" polyhedron causing "WARNING: Object may not be a valid 2-manifold and may need repair!"?

In the script
difference() {
    polyhedron(
        points = [[0, 0, 0],
                  [2, 0, 0],
                  [2, 1, 0],
                  [0, 1, 0],
                  [0, 0, 2],
                  [0, 1, 2]],
        faces = [[0, 1, 2, 3],
                 [5, 4, 1, 2],
                 [5, 4, 0, 3],
                 [0, 1, 4],
                 [2, 3, 5]]);
    cube([1, 1, 1]);
}
the polyhedron alone works fine (is rendered without warnings), but adding the cube above causes the warning WARNING: Object may not be a valid 2-manifold and may need repair! to be logged and the output to only render some parts of some surfaces.
I'm using OpenSCAD 2015.03-1 on Ubuntu 16.04.
This is because your polyhedron has some faces pointing in the wrong direction, which causes issues when calculating the difference().
See the Manual and FAQ for details.
Changing the winding order of the affected polygons fixes the polyhedron:
difference() {
    polyhedron(
        points = [[0, 0, 0],
                  [2, 0, 0],
                  [2, 1, 0],
                  [0, 1, 0],
                  [0, 0, 2],
                  [0, 1, 2]],
        faces = [[0, 1, 2, 3],
                 [2, 1, 4, 5],
                 [5, 4, 0, 3],
                 [0, 4, 1],
                 [2, 5, 3]]);
    cube([1, 1, 1]);
}
The difference is still non-manifold, as cutting the cube results in 2 prism-shaped objects touching at just one edge. That is by definition not 2-manifold, so the warning remains.
Depending on how the exported model is supposed to be used, you could choose to ignore this warning and hope the tool processing the 3d model can handle that.
To remove the issue, for example the cube could be made a bit smaller like cube([1, 1, 0.999]).
An unrelated, but still useful strategy for preventing issues later on is to always make the cutting object a bit larger to ensure that no very thin planes remain, e.g. use cube([2,3,1.999], center = true). That will also remove the display artifacts in preview mode.

Controlling light using midi inputs

I am currently using Max/MSP to create an interactive system between lights and sound.
I am using Philips Hue lighting, which I have hooked up to Max/MSP, and I now want to trigger an increase in brightness/saturation on the input of a note from a MIDI instrument. Does anyone have ideas on how this might be accomplished?
I have built this.
I used the shell object and then fed an array of parameters into it via a JavaScript file using the Hue API. There is a lag time of 1/6 of a second between commands.
Javascript file:
inlets = 1;
outlets = 1;

var bridge = "192.168.0.100"; // IP address of the Hue bridge
var hash = "newdeveloper";    // API username registered on the bridge

// default values (the list() parameters below shadow these)
var bulb = 1;
var brt = 200;
var satn = 250;
var hcolor = 10000;

// expects a Max list: bulb number, hue, brightness, saturation, transition time
function list(bulb, hcolor, brt, satn, tran) {
    execute('PUT', 'http://'+bridge+'/api/'+hash+'/lights/'+bulb+'/state', '"{\\\"on\\\":true,\\\"hue\\\":'+hcolor+', \\\"bri\\\":'+brt+',\\\"sat\\\":'+satn+',\\\"transitiontime\\\":'+tran+'}"');
}

function execute($method, $url, $message) {
    // emit a complete curl command line for the [shell] object to run
    outlet(0, "curl --request", $method, "--data", $message, $url);
}
To control Philips Hue you need to issue calls to a RESTful HTTP-based API, like so: http://www.developers.meethue.com/documentation/core-concepts, using the [jweb] or [maxweb] objects: https://cycling74.com/forums/topic/making-rest-call-from-max-6-and-saving-the-return/
Generally however, to control lights you use DMX, the standard protocol for professional lighting control. Here is a somewhat lengthy post on the topic: https://cycling74.com/forums/topic/controlling-video-and-lighting-with-max/, scroll down to my post from APRIL 11, 2014 | 3:42 AM.
How to change the bri/sat of your lights is explained at the following link (registration/login required):
http://www.developers.meethue.com/documentation/lights-api#16_set_light_state
You will need to know the IP address of your Hue bridge, which is explained here: http://www.developers.meethue.com/documentation/getting-started, along with a valid username.
Also bear in mind the performance limitations. As a general rule you can send up to 10 light state commands per second. I would recommend leaving a 100ms gap between commands to avoid flooding the bridge (and losing commands); see the sketch below.
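As an illustration (not from the original answer), one way to enforce such a gap inside a [js] object is to queue outgoing command lines and drain the queue from a Max Task, assuming the Task scheduling API:
// sketch: emit at most one queued command every 100ms
var queue = [];
var tick = new Task(flush, this);
tick.interval = 100; // milliseconds between Hue commands

function loadbang() {
    tick.repeat(); // run flush() indefinitely at the given interval
}

function flush() {
    if (queue.length > 0) {
        outlet(0, queue.shift()); // pass the next command line to [shell]
    }
}
// execute() above would then push its curl command onto queue
// instead of calling outlet() directly.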
Are you interested in details of how to map this data from a MIDI input to the Philips Hue lights within Max, or are you already familiar with Max?
Using Tommy b's JavaScript (which you could put into a [js] object), you could, for example, scale the MIDI messages you want to use with the midiin and borax objects and map them to the outputs you want using the scale object; see the sketch below. Karlheinz Essl's RTC library is a good place to start with algorithmic composition if you want to transform the data at all: http://www.essl.at/software.html
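The mapping itself is just a linear scale; a hypothetical helper for a [js] object (the function name and ranges are mine, not from the original answer):
// scale a MIDI value (0-127) into an arbitrary Hue parameter range
function scaleMidi(value, outMin, outMax) {
    return Math.round(outMin + (value / 127) * (outMax - outMin));
}
// e.g. scaleMidi(64, 0, 254) -> 128, roughly half brightness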
+1 for DMX light control via Max. There are lots of good max-to-dmx tutorials and USB-DMX hardware is getting pretty cheap. However, as someone who previously believed in dragging a bunch of computer equipment on stage just to control a light or two with an instrument, I'd recommend researching and purchasing a simple one channel "color organ" circuit kit (e.g., Velleman MK 110). Controlling a 120/240V light bulb via audio is easier than you might think; a computer for this type of application is usually overkill. Keep it simple and good luck!