For each media response we receive a duplicate MEDIA_FINISHED in MEDIA_STATUS. This causes every user to skip one file per media session. The requests carry no identifier either, so we can't simply ignore one of the duplicates.
This is an existing Actions SDK app that was working fine and broke recently.
This is my response builder, which includes suggestion chips as well:
```java
.add(audioString)
.add(new MediaResponse()
    .setMediaObjects(new ArrayList<MediaObject>(Arrays.asList(
        new MediaObject()
            .setName(mediaObject.getString("name"))
            .setDescription(audioString)
            .setContentUrl(mediaObject.getString("contentUrl"))
            .setIcon(new Image()
                .setUrl("https://www.somehost.com/blog/email-img/badge-108.png")
                .setAccessibilityText("Logo")))))
    .setMediaType("AUDIO"))
.addSuggestions(suggesstionArray);
```
We managed to implement a workaround by storing a timestamp in our conversation state. When the Actions SDK sent the duplicate MEDIA_FINISHED intent, we replayed the last audio: if a MEDIA_FINISHED intent arrived while the timestamp in the state was less than 10 seconds old, we treated it as the duplicate. It was still a bad experience, but we had no response from Google, so we eventually had to take the workaround route.
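For reference, the core of that workaround looks like the following minimal sketch (TypeScript for brevity, although our app is Java; the state shape, the `'advance' | 'replay'` outcome, and the handler wiring are illustrative, not Actions SDK API):

```typescript
// Hypothetical dedupe logic for duplicate MEDIA_FINISHED intents.
interface MediaState {
  lastFinishedAt?: number; // epoch millis of the last MEDIA_FINISHED we handled
}

const DUPLICATE_WINDOW_MS = 10_000; // the 10-second window from our workaround

// Decides what the webhook should do when a MEDIA_FINISHED intent arrives.
function onMediaFinished(state: MediaState, now: number): 'advance' | 'replay' {
  if (state.lastFinishedAt !== undefined &&
      now - state.lastFinishedAt < DUPLICATE_WINDOW_MS) {
    // A second MEDIA_FINISHED inside the window: treat it as the duplicate
    // and replay the last audio instead of skipping to the next file.
    return 'replay';
  }
  state.lastFinishedAt = now; // remember when we handled the genuine event
  return 'advance';
}
```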
Eventually this was fixed by Google (not sure if accidentally) last week, and we are no longer receiving duplicate MEDIA_FINISHED intents.
I'm having trouble finding documentation on best practices for handling activity data emitted from a real-time subscription to a getStream feed.
Our current setup mimics what react-activity-feed appears to do: (1) subscribe to the feed and listen for new activities; (2) when a new activity is emitted, display a button at the top of the feed announcing it; (3) when the announcement button is selected, make a new feed.get() call to retrieve the most recent feed data.
The problem we are considering is how to avoid making a new call to feed.get() every time a new activity is emitted (it seems wasteful). We would rather store the original response data from feed.get() in a state variable and insert new activities into that object.
This doesn't seem possible, however, as we get the following error whenever we try to append to the nested array inside the feed.get() response object: `TypeError: Cannot assign to read only property 'activities'`
I would greatly appreciate any advice on how others have handled new activities emitted from a feed.
I discovered that react-activity-feed does NOT make a call to feed.get() to retrieve the updated feed data after a new event is emitted to the feed.
feed.get() is only called when a feed is first visited or when a pagination event triggers a request for more feed data. Otherwise, a copy of the feed activity object is created (using an Immutable.js Map) and newly emitted events are pushed onto that copy, which is used to display content in the feed (see object definition here). The same is done for managing reactions to activities as well as notifications.
So for anyone curious how to manage new events emitted from Stream's real-time subscription: manage feed state in your app by making a copy of the object initially returned by feed.get() and manipulating that copy in your UI. This limits the number of requests made to getStream and improves the performance of the feed as well.
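As a concrete starting point, here is a minimal sketch (TypeScript, assuming the getstream JS client; the credentials and feed names are placeholders, and the exact payload shape -- `results` on the get response, `new` on the real-time payload -- is worth verifying against your SDK version):

```typescript
import { connect } from 'getstream';

const client = connect('apiKey', 'userToken', 'appId'); // placeholder credentials
const feed = client.feed('timeline', 'user-123');       // hypothetical feed group/id

// Our own mutable copy of the feed state, instead of mutating the
// read-only response object returned by feed.get().
let activities: unknown[] = [];

async function loadFeed(): Promise<void> {
  const response = await feed.get({ limit: 25 });
  activities = [...response.results]; // copy, don't alias
}

feed.subscribe((data: any) => {
  // Prepend newly emitted activities to our copy; no extra feed.get() call.
  activities = [...(data.new ?? []), ...activities];
});
```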
I have been using Net::SSLeay for 20 years to post data to authorize.net and receive the reply, as shown below:
```perl
($reply_data, $reply_type, %reply_headers) =
    post_https($host, $port, $script, '', $form_data);

# authorize.net replies with a comma-delimited string of fields
@data = split(/,/, $reply_data);

$FORM{'x_response_code'}        = $data[0];
$FORM{'x_response_reason_text'} = $data[3];
$FORM{'x_auth_code'}            = $data[4];
$FORM{'x_amount'}               = $data[9];

if ($FORM{'x_response_code'} != 1) { ...
```
authorize.net received the data and processed the payment, but my system did not receive a reply. The user got a server error and tried submitting the form several more times, all of which resulted in payment processing but no reply from authorize.net. Comparing my logs with authorize.net's processing times, I see about a fifteen-minute lag between when the call was sent and when authorize.net processed the payment. All four attempts were completed before the first processing occurred. Authorize.net says there were no problems or changes on their end.
How can I suppress the server error and instead return a custom error message?
As I understand it, your problem is that your local server is timing out before receiving a response from authorize.net, and from your log files, it seems that the response does arrive, but after a long delay. Although it is probably possible to configure your server to wait for longer before timing out (assuming you have control of the server config, etc.), 15 minutes is too long for a user to wait and would provide very poor user experience. Therefore, I'd suggest one of the following strategies:
Submit the form by AJAX and wait for a response -- i.e. translate your form-processing logic into JavaScript and perform the transaction asynchronously. It looks like there's an existing authorize.net JS API.
The old-school method: when the form is submitted, fork a new process that posts the form data (e.g. using curl) and saves the response to a file. The UI would return an intermediate page informing the user that their payment is being processed, and would periodically check whether the response has arrived, either through a link the user clicks to trigger a test for the presence of the response file, or through automatically refreshing the page (but see these accessibility guidelines). A sketch of this approach follows below.
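As an illustration of the second strategy, here is a minimal sketch (TypeScript/Node rather than Perl, purely to show the shape of the idea; the file path, port, endpoint name, and gateway URL are hypothetical placeholders):

```typescript
import { spawn } from 'child_process';
import { existsSync, readFileSync } from 'fs';
import http from 'http';

const RESPONSE_FILE = '/tmp/authnet-reply.txt'; // hypothetical per-transaction file

// Fire-and-forget: curl posts the form data and writes the reply to a file,
// so the web request returns immediately instead of waiting 15 minutes.
function startPayment(formData: string): void {
  const child = spawn(
    'curl',
    ['-s', '-o', RESPONSE_FILE, '-d', formData, 'https://example.com/gateway'],
    { detached: true, stdio: 'ignore' }
  );
  child.unref();
}

// Status endpoint that the intermediate "payment processing" page polls.
http.createServer((req, res) => {
  if (req.url === '/payment-status') {
    if (existsSync(RESPONSE_FILE)) {
      res.end('Reply received: ' + readFileSync(RESPONSE_FILE, 'utf8'));
    } else {
      res.end('Still processing; please check again shortly.');
    }
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);
```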
Whatever strategy you use, I would get in touch with authorize.net and see if you can find out why the response is taking so long. Adding a note to users that indicates that there have been delays recently and preventing them from submitting the same payment four times in a row would also definitely be good!
We use the Cumulocity REST API. Regular real-time notifications work: e.g. we subscribe to /alarms/*, start our connection/polling loop, and when we create an alarm we receive the expected JSON. We did not install any specific modules or statements; it just works.
But when we try to do the same with SmartREST we receive this error, as soon as the alarm is created:
40,,/alarms/177649296,Could not find any templates subscribed for the channel
Following the reference guide (http://cumulocity.com/guides/reference/smartrest/) we tried it like this, where all requests have the same X-Id header and all requests result in the expected HTTP status 200 with no error messages, except for the last one:
Register a SmartREST response template by doing a POST to /s
Body: 11,102,,,$.channel
Handshake: POST to /cep/realtime
Body: 80
The response is our clientId (e.g. 191het1z38bp7iq1m96jqqt8jnef)
Subscribe: POST to /cep/realtime
Body: 81,191het1z38bp7iq1m96jqqt8jnef,/alarms/*
Connect: POST to /cep/realtime
Body: 83,191het1z38bp7iq1m96jqqt8jnef
In the normal REST case the notification consists of a JSON array with 2 elements, both of which have a property "channel". So that is what we would expect from our response template. Instead, we get the aforementioned error 40.
Is our response template wrong? Is it not properly matched by the X-Id? What does it mean, that there are no "templates subscribed for the channel"? The subscriptions are done for a clientId, and not for a specific response template, and the templates are supposed to be matched automatically anyway. So probably "template" means "X-Id" here? The documentation seems ambiguous as to the meaning of that word. But anyway, we did use the same X-Id header in all of the requests.
Any pointers about what we're doing wrong would be appreciated, since we have tried pretty much everything by now.
The SmartREST protocol was developed for IoT device-to-platform communication, so it was never designed for subscribing to real-time data (except, of course, for the operations a device needs), as devices usually do not need to subscribe to the data they created themselves.
That said, it is possible to use it, with a couple of limitations. Your approach is basically correct, but there is one problem with the subscription: wildcard subscriptions will not work with SmartREST. On subscription, the platform links your X-Id with the channel you subscribed to, but no message is ever published on the channel /alarms/*. Hence the somewhat odd error message saying that no template is subscribed for the channel the alarm appeared on. Inside CometD you still receive the alarm because of the wildcard subscription, but the SmartREST part does not work.
The messages are published on the channel with the deviceId (e.g. /alarms/12345).
If you subscribe to /alarms/12345 it will work; see the sketch below. You can of course subscribe to as many channels as you want, but wildcard subscriptions won't work.
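Putting it together, a minimal sketch of the corrected sequence (TypeScript with fetch; the tenant URL, credentials, and X-Id value are placeholders, and the bodies are taken from the steps in the question):

```typescript
const BASE = 'https://tenant.cumulocity.com'; // hypothetical tenant
const HEADERS = {
  'X-Id': 'myTemplates',                       // same X-Id on every request
  Authorization: 'Basic ' + Buffer.from('user:password').toString('base64'),
  'Content-Type': 'text/plain',
};

async function post(path: string, body: string): Promise<string> {
  const res = await fetch(BASE + path, { method: 'POST', headers: HEADERS, body });
  return res.text();
}

async function subscribeToAlarms(deviceId: string): Promise<void> {
  await post('/s', '11,102,,,$.channel');                      // register response template
  const clientId = (await post('/cep/realtime', '80')).trim(); // handshake
  await post('/cep/realtime', `81,${clientId},/alarms/${deviceId}`); // concrete channel, no wildcard
  for (;;) {
    // Connect is a long poll; each call returns when notifications arrive.
    console.log(await post('/cep/realtime', `83,${clientId}`));
  }
}

subscribeToAlarms('12345');
```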
Regarding the templates, you need to know the following: the SmartREST parsing is not done on the raw JSON of CometD but on the payload inside it (e.g. the alarm). So a template for an alarm could look like this:
11,500,,$.severity,$.id,$.type,$.severity
This would trigger only if the object has a severity and would return id, type and severity.
I'm using the JavaScript SDK flavor of the Dropbox Datastore API with a web app for mobile and desktop. When the recordsChanged event fires while the app is offline, object data about those changes is generated, but the changes can't sync to the datastore until the app is online again.
The event data can be checked against the settings table, for instance, like this:
e.affectedRecordsForTable("settings")
But the array data returned has a lot of layers to wade through:
```
[t]
  _datastore: t
  _deleted: false
  _managed_datastore: t
  _record_cache: t
  _rid: "startDate"
  _tid: "settings"
  __proto__: t
```
I would like to capture the "has been synced" or the "not yet synced" status of each change (each array index) so that I can store the data still waiting to sync in case the session is lost (user closes the app/browser or OS kills the app process). But I also want to know if/when the data does eventually sync successfully. Where can I find the property holding this data?
I found my answer. Steve Marx has a post on the Dropbox developer blog that covers the information I needed. There is a datastore.getSyncStatus().uploading property that returns true while local changes are still waiting to sync, and false once everything has been uploaded; a usage sketch follows below the source link.
Source:
https://www.dropbox.com/developers/blog/61/checking-the-datastore-sync-status-in-javascript
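For anyone wanting to see it in context, a minimal sketch (the datastore setup is elided, and the one-second polling interval is an arbitrary choice):

```typescript
// Assumes `datastore` was obtained from the Datastore API, e.g. via
// datastoreManager.openDefaultDatastore(...); typed loosely here.
declare const datastore: any;

function watchSyncStatus(): void {
  const timer = setInterval(() => {
    if (datastore.getSyncStatus().uploading) {
      // true: local changes are still waiting to be uploaded --
      // persist them somewhere in case the session is lost.
      console.log('Changes still pending upload.');
    } else {
      console.log('All local changes have been synced.');
      clearInterval(timer);
    }
  }, 1000);
}
```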
I just discovered that when I configure the session plugin of a Catalyst app (Catalyst::Plugin::Session) to expire, it screws with the flash data. More specifically, I'm finding that flash data no longer carries over to the next request.
Does this sound normal? How might I cope with this?
Perfectly normal. The whole point of sessions is to be able to associate data from one request with data in another request. When you let the session for some request expire, you are saying that that request's data shouldn't have anything to do with any future request.
More specifically, the flash data is a part of the session data -- see the _save_flash method in the Catalyst/Plugin/Session.pm file, for instance. Also see the big warning for the delete_session method:
NOTE: This method will also delete your flash data.
How to cope with it? You need to persist data from a request using any scheme other than the Session plugin. Without knowing more about your app, what data you are trying to persist, and how you will associate data from an old session with a new request, I couldn't begin to make a more specific recommendation than that.
When configuring the session, for example with a database backend, you'll have to add flash_to_stash as an option:
```
<session>
    dbi_dbh           DB
    dbi_table         sessions
    dbi_id_field      id
    dbi_data_field    session_data
    dbi_expires_field expires
    flash_to_stash    1
    expires           3600
</session>
```