Actions on Google - What is the relationship between commands/devices/executions in the Google Home EXECUTE input and response?

This question concerns the Actions on Google Smart Home documentation Create a Smart Home App, specifically the action.devices.EXECUTE section.
We are somewhat confused regarding the exact relationship between the list of 'Command' objects and their associated lists of Devices and Executions, especially regarding how these are translated to a response.
Based on the documentation, we believe that the intent is for Commands to be processed in order: top to bottom. Per Command, each Execution is processed (again, top to bottom) for each device ID in the Command.
A response, if we understand the description correctly, could include up to 4 Commands per initial Command in the input (one each for SUCCESS, PENDING, OFFLINE, and ERROR), each with a list of device IDs for which that result is appropriate.
There is no mention of Executions in the response, however. Does this mean that if 1 execution for a device fails (out of multiple) that in the response it is listed under ERROR, despite other executions for the device succeeding?
For example, suppose a command comes in to turn on a light and set its color to blue. Turning it on succeeds, but some arbitrary error prevents the color from being set. What should the response format look like?
Thank you for reading.

A commands array will contain all of the devices that are supposed to be controlled with this command. There is an additional execution array which provides the command and its parameters.
If some devices could not be controlled successfully, an error should be returned for those device ids, as shown in the documentation.
For any particular device, it may be odd to think of a scenario where one execution succeeds but another fails. In that case, you will need to pick the reason that makes the most sense, perhaps the error protocolError or unknownError.
Every command is meant to be processed simultaneously, or in parallel. If you cannot make all of the changes that the user requested, it may be more consistent if no command was executed at all. So your device could be turned on/off on its own, but if setting the color is broken, the whole command should fail when both executions are sent at the same time.
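To make that concrete, here is a minimal Python sketch of an EXECUTE response that reports one device group as SUCCESS and another as ERROR. The device ids "123"/"456" and the request id are placeholders; the status and errorCode values are from the documented set, everything else is illustrative:

    # Sketch only: "123"/"456" are placeholder device ids; the states shown are
    # examples of the documented fields, not a full implementation.
    def build_execute_response(request_id):
        return {
            "requestId": request_id,
            "payload": {
                "commands": [
                    {   # devices for which every execution succeeded
                        "ids": ["123"],
                        "status": "SUCCESS",
                        "states": {"online": True, "on": True},
                    },
                    {   # devices for which the command (or part of it) failed
                        "ids": ["456"],
                        "status": "ERROR",
                        "errorCode": "protocolError",
                    },
                ]
            },
        }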

Related

Filtering certain types of Requests logged by quarkus.http.access-log

I want to test a new REST client I am building, and I'd like to see the exact request that is being built, so I set the quarkus.http.access-log.enabled=true property. When starting Quarkus, however, I am bombarded with the logs of many scheduled requests which are happening simultaneously. The worst of those are several Elasticsearch scroll requests, which return a lot of data that is fed directly into the log.
My idea is that everything returning a response that contains _index should be filtered, since I know by now that my ES client is working properly; however, it pumps out so much data that the request I intend to log is overwritten almost instantly.
So my question: does somebody know a working (and convenient) method to effectively filter unwanted HTTP logs?
I tried setting
quarkus.http.access-log.exclude-pattern=(_index+)
in an attempt to filter out unwanted requests, but I'm not sure where to continue from there.
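For what it's worth, quarkus.http.access-log.exclude-pattern is matched against the request path rather than the response body, so a pattern along the following lines may work better; the _search/scroll path below is only an assumption about what the Elasticsearch scroll requests look like in your setup:

    quarkus.http.access-log.exclude-pattern=.*_search/scroll.*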

IBM Datastage reports failure code 262148

I realize this is a bad question, but I don't know where else to turn.
Can someone point me to where I can find the list of reports failure codes for IBM? I've tried searching for it in the IBM documentation and in a general Google search, but this particular error is unique and I've never seen it before.
I'm trying to find out what code 262148 means.
Background:
I built a datastage job that has:
ORACLE CONNECTOR --> TRANSFORMER --> HIERARCHICAL DATA
The intent is to pull data from an ORACLE table and output the response of the select statement into a JSON file. I'm using the HIERARCHICAL stage to set it up. When tested within the stage, there are no problems; I see the JSON output.
However, when I run the job, it squawks:
reports failure code 262148
then the job aborts. There are no warnings, no signs, no errors prior to this line.
Until I know what it is, I can't troubleshoot.
If someone can point me to where the list of failure codes is, I can proceed.
Thanks!
Can someone point me to where I can find the list of reports failure codes for IBM?
Here you go:
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/rzahb/rzahbsrclist.htm
While this list does not include your specific error code, it does categorize many other codes and explains how the code breakdown works. While it is not specifically for DataStage, in my experience IBM standards are generally consistent across different products. In this list, every code that starts with a 2 is a disk failure, so maybe run a disk checker. That's the best I've got as far as error codes go.
Without knowledge of the inner workings of the product, there is not much more you can do beyond checking system health in general (especially disk, network and permissions in this case). Personally, I prefer to go after internal knowledge whenever external knowledge proves insufficient. I would start with a network capture, as I'm sure there's a socket involved in the connection between the layers. Compare a capture taken when the select statement is run from the Hierarchical stage with one taken when it is run from the job. There may be clues in there, like reset or refused connections.

Is there a way to know how much of the EEPROM memory is used?

I have looked through the "logbook" and "datalogger" APIs and there is no way of telling that the data logger is almost full. I found the API call with the path "/Mem/Logbook/IsFull". If I have understood it correctly, this will notify me when the log is full and the datalogger has stopped logging.
So my question is: is there a way to know how much of the memory is currently in use, so that I can clean up old data (I need to do some calculations on it before it is deleted) before the EEPROM is full and the DataLogger stops recording?
The data memory of Logbook/DataLogger is conceptually a ring buffer. That's why /Mem/DataLogger/IsFull always returns false on the Movesense sensor (Suunto uses the same API in its watches, where the situation is different). Therefore the sensor never stops recording; it just replaces the oldest data with new.
Here are a couple of strategies that you could use:
Plan A:
Create a new log (POST /Mem/Logbook/Entries => returns the logId for it)
Start Logging (PUT /Mem/DataLogger/State: LOGGING)
Every once in a while create a new log (POST /Mem/Logbook/Entries). Note: This can be done while logging is ongoing!
When you want to know the status of the log, read /Mem/Logbook/Entries. When the oldest entry has been completely overwritten, it disappears from the list. Note: The GET /Entries is a heavy operation, so you may not want to do it while the logger is running! (A sketch of this sequence follows below.)
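A minimal sketch of Plan A in Python; the sensor object and its post/put/get methods below are hypothetical helpers that issue these resource requests to the device (for example over a BLE bridge), since the transport is not part of the Logbook/DataLogger API itself:

    # Sketch only: `sensor.post/put/get` are hypothetical helpers that send
    # Movesense resource requests and return the decoded payload.
    # `logging_state` is the LOGGING value from the DataLogger state enum.
    def rotate_and_inspect(sensor, logging_state):
        # Create a new log; the response carries its logId.
        log_id = sensor.post("/Mem/Logbook/Entries")
        # Start logging.
        sensor.put("/Mem/DataLogger/State", logging_state)
        # Every once in a while, create another entry; this can be done while
        # logging is ongoing and effectively rotates the log.
        next_log_id = sensor.post("/Mem/Logbook/Entries")
        # To see what still exists, list the entries. Entries that the ring
        # buffer has completely overwritten no longer appear in this list.
        entries = sensor.get("/Mem/Logbook/Entries")
        return log_id, next_log_id, entries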
Plan B
Every now and then start a new log and process the previous one. That way the log never overwrites something you have not processed.
Plan C
(Note: This is low level and may break with some future Movesense sensor release)
GET the first 256 bytes of EEPROM chip #0 using the /Component/EEPROM API. This area contains a number of ExtflashChunkStorage::StorageHeader structs (see: ExtflashChunkStorage.h); the rest is filled with 0xFF. The last StorageHeader before the 0xFF is the current one. With that StorageHeader one can see where the ring buffer starts (firstChunk) and where the next data is written (cursor). The difference of the two is the memory in use. (Note: Since it is a ring buffer the difference can be negative. In that case add "size of the Logbook area - 256" to it.) A sketch of this arithmetic follows below.
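A minimal sketch of that calculation, assuming you have already parsed firstChunk and cursor out of the current StorageHeader and know the size of the Logbook area on your sensor:

    HEADER_AREA = 256  # first 256 bytes are reserved for the StorageHeader structs

    def used_logbook_bytes(first_chunk, cursor, logbook_area_size):
        # cursor is where the next data will be written, first_chunk is where
        # the ring buffer currently starts; their difference is the data in use.
        used = cursor - first_chunk
        if used < 0:
            # The ring buffer has wrapped around: add the usable area size.
            used += logbook_area_size - HEADER_AREA
        return used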
Full disclosure: I work for the Movesense team.

Google Analytics Core Reporting API query for exits and entrances metrics - entrance values incorrectly exactly the same as exits

I'm using GA's Core Reporting API to create a report that shows the top exit pages alongside some behavioural metrics for each page. The dimension is ga:exitPagePath, and the metrics I want are:
ga:exits
ga:pageviews
ga:entrances
ga:avgTimeOnPage
ga:bounceRate
ga:exitRate
I'm sorting by -ga:exits. I'm not using any filters or segments.
The query appears to work fine and doesn't return an error; however, the entrance values it returns are incorrect and exactly match the exit values for each page. Other queries for ga:entrances without ga:exits give the correct entrance values.
I may have overlooked it, but I can't find anything in the documentation indicating that these metrics can't be used together. I also tested creating a custom report within the GA interface with these two metrics and found the same result: no error or indication that I can't create a report with both metrics, but entrances incorrectly reported and exactly matching the exit values. I also get the same result in GA's Query Explorer.
Would love to work this out - it seems perfectly logical to me to want to view entrances alongside exits for exit pages :)
A better-late-than-never response.
It makes sense, because all users that have visited your site (entrances) have also left it (exits).
It becomes meaningful when you use these metrics along with a page dimension (ga:pagePath, for example).
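A minimal sketch of such a query with the v3 Core Reporting API through the Python client library; the view ID and date range are placeholders, and building the authorized analytics service object is assumed to have happened elsewhere:

    # Assumes `analytics` was created with build('analytics', 'v3', credentials=...)
    # from googleapiclient.discovery.
    def entrances_and_exits_by_page(analytics, view_id):
        return analytics.data().ga().get(
            ids='ga:' + view_id,                     # placeholder view ID
            start_date='30daysAgo',
            end_date='today',
            metrics='ga:entrances,ga:exits,ga:pageviews',
            dimensions='ga:pagePath',                # page dimension rather than ga:exitPagePath
            sort='-ga:exits',
        ).execute()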

Can watchman send why a file changed?

Is watchman capable of passing to the configured command the reason why it is sending a file to that command?
For example:
a file that is new to a folder would possibly be sent with a FILE_CREATE flag;
a file that is deleted would send the FILE_DELETE flag to the command;
a file that is modified would send a FILE_MOD flag, etc.
Perhaps even a folder that gets deleted (and therefore the files under it) would send a FOLDER_DELETE parameter naming the folder, as well as a FILE_DELETE/FOLDER_DELETE for the files and folders underneath it.
Is there such a thing?
No, it can't do that. The reasons why are pretty fundamental to its design.
The TL;DR is that it is a lot more complicated than you might think for a client to correctly process those individual events and in almost all cases you don't really want them.
Most file watching systems are abstractions that simply translate from the system-specific notification information into some common form. They don't deal, either very well or at all, with the notification queue being overflowed, and they don't provide their clients with a way to reliably respond to that situation.
In addition to this, the filesystem can be subject to many and varied changes in a very short amount of time, and from multiple concurrent threads or processes. This makes this area extremely prone to TOCTOU issues that are difficult to manage. For example, creating and writing to a file typically results in a series of notifications about the file and its containing directory. If the file is removed immediately after this sequence (perhaps it was an intermediate file in a build step), by the time you see the notifications about the file creation there is a good chance that it has already been deleted.
Watchman takes the input stream of notifications and feeds it into its internal model of the filesystem: an ordered list of observed files. Each time a notification is received watchman treats it as a signal that it should go and look at the file that was reported as changed and then move the entry for that file to the most recent end of the ordered list.
When you ask Watchman for information about the filesystem it is possible or even likely that there may be pending notifications still due from the kernel. To minimize TOCTOU and ensure that its state is current, watchman generates a synchronization cookie and waits for that notification to be visible before it responds to your query.
The combination of the two things above mean that watchman result data has two important properties:
You are guaranteed to have observed all notifications that happened before your query
You receive the most recent information for any given file only once in your query results (the change results are coalesced together)
Let's talk about the overflow case. If your system is unable to keep up with the rate at which files are changing (eg: you have a big project and are very quickly creating and deleting files and the system is heavily loaded), the OS can't fit all of the pending notifications in the buffer resources allocated to the watches. When that happens, it blows those buffers and sends an overflow signal. What that means is that the client of the watching API has missed some number of events and is no longer in synchronization with the state of the filesystem. If that client maintains state about the filesystem, that state is no longer valid.
Watchman addresses this situation by re-examining the watched tree and synthetically marking all of the files as being changed. This causes the next query from the client to see everything in the tree. We call this a fresh instance result set because it is the same view you'd get when you are querying for the first time. We set a flag in the result so that the client knows that this has happened and can take appropriate steps to repair its own state. You can configure this behavior through query parameters.
In these fresh instance result sets, we don't know whether any given file really changed or not (it's possible that it changed in such a way that we can't detect via lstat) and even if we can see that its metadata changed, we don't know the cause of that change.
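To make the client side concrete, here is a rough sketch of how a consumer might use the pywatchman bindings; the field selection and the shape of the local state dictionary are just examples (and relative_path handling is ignored for brevity). On a fresh instance result the client rebuilds its state rather than applying the results incrementally:

    import pywatchman

    def poll_changes(root, last_clock, state):
        client = pywatchman.client()
        watch = client.query('watch-project', root)['watch']
        query = {'fields': ['name', 'exists', 'new']}  # coalesced per-file view, not an event log
        if last_clock:
            query['since'] = last_clock                # clock token from the previous query
        result = client.query('query', watch, query)
        if result.get('is_fresh_instance'):
            # Watchman could not answer incrementally (first query or overflow);
            # throw away local state and rebuild it from this full view.
            state.clear()
        for f in result['files']:
            if f['exists']:
                state[f['name']] = f
            else:
                state.pop(f['name'], None)
        return result['clock'], state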
There can be multiple events that contribute to why a given file appears in the results delivered by watchman. We don't record them individually because we can't track them with unbounded history; imagine a file that is incrementally being written once every second all day long. Do we keep 86400 change entries for it per day on hand and deliver those to our clients? What if there are hundreds of thousands of files like this? We'd have to truncate that data, and at that point the loss in the data reduces how well you can reason about it.
At the end of all of this, it is very rare for a client to do much more than try to read a file or look at its metadata, and generally speaking, they want to do that only when the file has stopped changing. For this use case, watchman-wait, watchman-make and trigger all have the concept of a settle period that causes the change notifications to be delayed in delivery until after the filesystem has stopped changing.