Getting many welcome messages from the same user - actions-on-google

I am getting many welcome messages from the same user. Is it some kind of monitoring system run by Google?
How can I ignore those requests?

Yes, Google periodically issues a health check against your Action, usually about every 5-10 minutes. Your Action should respond to it normally so Google knows if there is something wrong. If there is, you will receive an email saying that your Action is unavailable because it is unhealthy. They will continue to monitor it and, when it is healthy again, will restore it.
You don't need to ignore those requests; however, you may wish to, either to save resources or to avoid logging them all the time.
A library such as multivocal detects the health check and responds automatically - there is nothing you need to do. For other libraries, you will need to examine the raw input sent in the body of your webhook request.
If you are using the Action SDK, you should examine the inputs array to see if there is one with an argument named "is_health_check". If you are using Dialogflow, then you would need to look under originalDetectIntentRequest.data.inputs.
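As a rough illustration, a webhook could check for that argument before doing any real work. This is a minimal sketch in TypeScript, not taken from any particular library; the field paths follow the description above, so verify them against the request bodies your webhook actually receives.

```typescript
// Minimal sketch: detect a Google health-check ping in a webhook body.
// Field paths follow the answer above (Action SDK: body.inputs;
// Dialogflow: body.originalDetectIntentRequest.data.inputs) - verify
// them against the payloads your webhook actually receives.
interface ActionArgument { name: string; boolValue?: boolean; }
interface ActionInput { arguments?: ActionArgument[]; }

function isHealthCheck(body: any): boolean {
  const inputs: ActionInput[] =
    body.inputs ?? body.originalDetectIntentRequest?.data?.inputs ?? [];
  return inputs.some(input =>
    (input.arguments ?? []).some(arg => arg.name === 'is_health_check'));
}
```

You could, for example, skip your usual logging when isHealthCheck returns true, while still responding normally so the health check passes.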

Is it possible to track down very rare failed requests using linkerd?

Linkerd's docs explain how to track down failing requests using the tap command, but in some cases the success rate might be very high, with only a single failed request every hour or so. How is it possible to track down those requests that are considered "unsuccessful"? Perhaps a way to log them somewhere?
It sounds like you're looking for a way to configure Linkerd to trap requests that fail and dump the request data somewhere, which is not supported by Linkerd at the moment.
You do have a couple of options with the current functionality to derive some of the info that you're looking for. The Linkerd proxies record error rates as Prometheus metrics, which are consumed by Grafana to render the dashboards. When you observe one of these infrequent errors, you can use the time-window functionality in Grafana to find the precise time that the error occurred, then refer to the service log to see if there are any corresponding error messages there. If the error is coming from the service itself, you can add as much logging info about the request as you need to help solve the problem.
Another option, which I haven't tried myself, is to integrate linkerd tap into your monitoring system to collect the request info and save the data for the requests that fail. One caveat here: be careful about leaving a tap command running, because it continuously collects data from the tap control-plane component, which will add load to that service.
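If you do try that, a hedged sketch of the glue code might look like the following. It assumes tap's default line output contains a ":status=" token for HTTP responses, which you should confirm for your Linkerd version, and the target deploy/web is just a placeholder.

```typescript
// Hypothetical sketch: run `linkerd tap` and persist only failed responses.
// Assumes tap's line output includes ":status=<code>" - check your version.
// Caveat from above: a long-running tap adds load to the tap component.
import { spawn } from 'child_process';
import { appendFileSync } from 'fs';

const tap = spawn('linkerd', ['tap', 'deploy/web', '--namespace', 'default']);

tap.stdout.on('data', (chunk: Buffer) => {
  for (const line of chunk.toString().split('\n')) {
    const match = line.match(/:status=(\d{3})/);
    if (match && Number(match[1]) >= 500) {
      appendFileSync('failed-requests.log', line + '\n'); // keep rare failures
    }
  }
});
```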
Perhaps a more straightforward approach would be to ensure that all the proxy logs and service logs are written to a long-term store like Splunk, an ELK stack (Elasticsearch, Logstash, and Kibana), or Loki. Then you can set up alerting (Prometheus Alertmanager, for example) to send a notification when a request fails, and match the time of the failure with the logs that have been collected.
You could also look into adding distributed tracing to your environment. Depending on the implementation that you use (Jaeger, Zipkin, etc.), I think the interface will allow you to inspect the details of the request for each trace.
One final thought: since Linkerd is an open source project, I'd suggest opening a feature request with specifics on the behavior that you'd like to see and working with the community to get it implemented. I know the roadmap includes plans to be able to see request bodies using linkerd tap, and this sounds like a good use case for having those bodies.

Client Interaction With Event Sourcing

I have been recently looking into event sourcing and have some questions about the interactions with clients.
So event sourcing sounds great: decoupling all your microservices, keeping your information in immutable events, and building stored states from them to fit your needs is really handy. Having events propagate through your system/services, with each service reacting to them in its own way, is all fine.
The issue I am having lies in understanding the client interaction.
So you want clients to interact with the system, but they now need to do this through events. They can no longer submit a state to mutate your existing one.
So the question is: how do clients fire off specific events and interact with not just an event-based system, but a system based on event sourcing?
My understanding is that you no longer use the REST API as resources (which you can get, update, delete, etc., handling them as resources), but instead post to an endpoint as an event.
So how do these endpoints work?
My second question is: how does the user get responses back?
For instance, let's say we have an event to place an order.
You're going to fire off an event and it's going to do its thing. Again, my understanding is that you don't validate the request up front, e.g. checking if the user placing the order has enough money, but instead fire it off and it will be handled in the system.
e.g. it will now be:
- order placed
- this will be picked up by the pricing service, which will fire either a "money reserved" or a "money exceeded" event based on whether the user can afford it.
- the order service will then listen for those events and mark the order accordingly - denied if there was not enough credit.
So because this is an async process and the user has fired and forgotten, how do you then show the user whether it has failed or succeeded? Do you show them an order confirmation page with the order status as it currently is (even if it's pending),
or do you poll until it changes (WebSockets or something)?
I'm sorry if a lot of this is all nonsense, I am still learning about this architecture and am very much in the mindset of a monolith with REST responses.
Any help would be appreciated.
The issue I am having lies in understanding the client interaction.
Some of the issue may be understanding, but I promise you a fair share of the issue is that the literature sucks.
In particular, the word "Event" gets re-used a lot of different ways. If you aren't paying very careful attention to which meaning is being used, you are going to get knotted.
Event Sourcing is really about persistence - how does a microservice store its private copy of state for later re-use? Instead of destructively overwriting our previous state, we write new information that links back to the previous state. If you imagine each microservice storing each change of state as a commit in its own git repository, you are in the right ballpark.
That's a different animal from using Event Messages to communicate information between one microservice and another.
There's some obvious overlap, of course, because the one message that you are likely to share with other microservices is "I just changed state".
So how do these endpoints work?
The same way that web forms do. I send you a representation of a form, the client displays the form to you. You fill in your data and submit the form, the client processes the contents of the form, and sends back to me an HTTP request with a "FormSubmitted" event in the message body.
You can achieve similar results by sending new representations of the state, but it's a bit error-prone to strip away the semantic intent and then try to guess it again on the server. So you are more likely to see task-based user interfaces, or protocols that clearly identify the semantics of the change.
When the outside world is the authority for some piece of data (a shopper's shipping address, for example), you are more likely to see the more traditional "just edit the existing representation" approach.
So because this is an async process and the user has fired and forgotten, how do you then show the user whether it has failed or succeeded?
Fire and forget really doesn't work for a distributed protocol on an unreliable network. In most cases, at-least-once delivery is important, so Fire until verified is the more common option. The initial acknowledgement of the message might be something like 202 Accepted -- "We received your message, we wrote it down, here's our current progress, here are some links you can fetch for progress reports".
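To make that concrete, here is a hedged sketch of such an acknowledgement using Express; the route names and the enqueueCommand helper are hypothetical stand-ins, not something from the answer above.

```typescript
// Hypothetical sketch: acknowledge a command with 202 Accepted plus a
// link the client can poll for progress reports.
import express from 'express';
import { randomUUID } from 'crypto';

function enqueueCommand(body: unknown): string {
  // Stub: a real system would write the command to durable storage here.
  return randomUUID();
}

const app = express();
app.use(express.json());

app.post('/orders', (req, res) => {
  const commandId = enqueueCommand(req.body);
  res.status(202)
    .location(`/orders/status/${commandId}`)
    .json({ status: 'pending', progress: `/orders/status/${commandId}` });
});

app.listen(3000);
```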
It doesn't seem to me that event sourcing fits with the traditional REST model where you CRUD a resource.
Jim Webber's 2011 talk may help to prune away the noise. A REST API is a disguise that your domain model wears; you exchange messages about manipulating resources, and as a side effect your domain model does useful work.
One way you could do this that would look more "traditional" is to work with representations of the event stream. I do a GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615 and it returns me a representation of a list of events. I append a new event onto the end of that list, and PUT /08ff2ec9-a9ad-4be2-9793-18e232dbe615, and interesting side effects happen. Or perhaps I instead create a patch document that describes my change, and PATCH /08ff2ec9-a9ad-4be2-9793-18e232dbe615.
But more likely, I would do something else -- instead of GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615 to fetch a representation of the list of events, I'd probably GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615 to fetch a representation of available protocols - which is to say, a document filled with hyper links. From there, I might GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615/603766ac-92af-47f3-8265-16f003ce5a09 to obtain a representation of the data collection form. I fill in the details of my event, submit the form, and POST /08ff2ec9-a9ad-4be2-9793-18e232dbe615 the form data to the server.
You can, of course, use any spelling you like for the URI.
In the first case, we need something like an HTTP-capable document editor; the second case uses something more like a web browser.
If there were lots of different kinds of events, then the second case might well have lots of different form resources, all submitting POST /08ff2ec9-a9ad-4be2-9793-18e232dbe615 requests.
(You don't have to have all of the forms submitting to the same URI, but there are advantages to consider).
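A hedged sketch of the first case in client-side terms, assuming the global fetch available in modern Node; the host and the JSON representation are made up, and the UUID is the one from the example above.

```typescript
// Hypothetical sketch of the "document editor" case: GET the list of
// events, append one locally, and PUT the new representation back.
const stream = 'https://example.com/08ff2ec9-a9ad-4be2-9793-18e232dbe615';

async function appendEvent(event: object): Promise<void> {
  const events: object[] = await fetch(stream).then(r => r.json());
  events.push(event); // append, never rewrite history

  const put = await fetch(stream, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(events),
  });
  if (!put.ok) throw new Error(`append failed: ${put.status}`);
}
```

A real client would likely send an If-Match header with the ETag from the GET, so a concurrent append fails cleanly instead of silently overwriting.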
In a non-event-sourcing pattern I guess that would first be put into the database, and then the event gets raised.
Even when you aren't event sourcing, there may still be some advantages to committing events to your durable store before emitting them. See Pat Helland: Data on the Outside versus Data on the Inside.
So you want clients to interact with the system, but they now need to do this through events.
Clients don't have to. They may not even be aware of the underlying event store.
There are a number of trade-offs to consider and decisions to make when implementing an event-sourced system. To start with, you can try to name a few pre-computer-era examples of event-sourced systems and look at their non-functional characteristics.
So the question is: how do clients fire off specific events
Clients don't send events. Rather, they should express an intent (a command). Then it is the responsibility of the event-sourced system to validate the intent and either reject it, or accept it and store the corresponding event. Storing the event means that the intent to change the system's state was accepted, and the stored event confirms the change.
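A minimal sketch of that accept-or-reject step, with entirely hypothetical names (PlaceOrder, OrderPlaced) standing in for your own domain:

```typescript
// Hypothetical sketch: the system validates the intent (command) and only
// an accepted intent produces a stored event.
type PlaceOrder = { type: 'PlaceOrder'; orderId: string; amount: number };
type OrderPlaced = { type: 'OrderPlaced'; orderId: string; amount: number };

function handlePlaceOrder(cmd: PlaceOrder, currentBalance: number): OrderPlaced {
  if (cmd.amount > currentBalance) {
    throw new Error('Rejected: insufficient funds'); // intent rejected
  }
  // Intent accepted: this event is what gets appended to the store.
  return { type: 'OrderPlaced', orderId: cmd.orderId, amount: cmd.amount };
}
```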
My understanding is that you no longer use the REST API as resources
REST is one of the options. You just consider different things as resources. A command can be a REST resource. An event-sourced entity can be a resource to which you POST a command. If you like it async, you can later GET the command to check its status. You can GET an entity to know its current state. You can GET events from a class of entities as a means of subscription.
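For example, a hedged sketch of the async variant, where the command itself is a resource the client can poll (the URIs and the response shape are made up):

```typescript
// Hypothetical sketch: POST a command resource, then GET it until the
// system has either accepted or rejected the intent.
async function submitAndAwait(command: object): Promise<string> {
  const post = await fetch('https://example.com/commands', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(command),
  });
  const commandUri = post.headers.get('Location')!; // where to poll

  for (;;) {
    const status = await fetch(commandUri).then(r => r.json());
    if (status.state !== 'pending') return status.state; // accepted/rejected
    await new Promise(resolve => setTimeout(resolve, 1000));
  }
}
```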
If we are talking about an end user, then most likely they don't deal with the event store directly. There is some third tier in between, which does CQRS. From a user-client perspective it can be provided with REST, GraphQL, SOAP, gRPC, or even e-mail - whatever transport solution you find suitable. The command-processing part of CQRS is what is specifically domain-driven. It decides which intents to accept and which to reject.
The event store itself is responsible for data consistency, i.e. it should not allow two concurrent events that lead to an invalid state to be published. This is what pre-computer event-sourced systems are good at: you usually have some physical object as the entity, so you lock it for update by simply taking hold of it.
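In a computerized event store that physical lock usually becomes something like optimistic concurrency. A hedged sketch, with a toy in-memory stream standing in for a real store:

```typescript
// Hypothetical sketch of the consistency rule above: an append succeeds
// only if the writer saw the latest version, so two concurrent events
// cannot both be published against the same state.
class EventStream {
  private events: object[] = [];

  append(event: object, expectedVersion: number): void {
    if (expectedVersion !== this.events.length) {
      throw new Error('Concurrency conflict: reload and retry');
    }
    this.events.push(event);
  }
}
```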
Then an end-user client usually reads from some prepared read model. The responsibility of the read (R in CQRS) component is to prepare read-optimised data for clients. This data may come from multiple event-sourced entities of the same or different classes. Again, clients may interact with a read model over whatever transport is suitable.
While an event store is consistent, and consistent immediately, a read model is only eventually consistent. But it's up to you to tune how eventual that is.
Just try to throw REST out of the architecture for a while. Consider it one of the available transport options - that may help you look at the root of the problem.

How do I create a stack in a REST API?

I am working on a distributed execution server. I have decided to use a REST API based on HTTP on the server. The clients will connect to the server and GET the next task to be accomplished. Obviously I need to "update" the task that is retrieved to ensure that it is only processed once. A GET is not supposed to have any side effects (like changing the state of the resource retrieved). I could use a POST (to update the resource), but I also need to retrieve it. I am thinking that I could have a URL where a POST marks the task as "claimed" and then a GET marks the task as retrieved. Unfortunately, I have a side effect on GET again. Is this just not going to work in REST? I am OK with having a "function" resource to do this, but I don't want to give up the paradigm without a little research.
If nothing else fits, you're supposed to use a POST request. Nothing prevents you from returning the resource on a POST request. But it becomes apparent that something (in this case) will happen to that resource, which wouldn't be the case when using a GET request.
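For instance, a hedged sketch of a claim endpoint along those lines, using Express and a hypothetical in-memory task list:

```typescript
// Hypothetical sketch: POST /tasks/claim marks the next unclaimed task
// as claimed and returns it, making the side effect explicit in the verb.
import express from 'express';

interface Task { id: number; claimed: boolean; payload: string; }
const tasks: Task[] = []; // stand-in for a real store

const app = express();

app.post('/tasks/claim', (_req, res) => {
  const task = tasks.find(t => !t.claimed);
  if (!task) {
    res.status(404).json({ error: 'no tasks available' });
    return;
  }
  task.claimed = true;
  res.json(task); // the caller both claims and retrieves in one request
});

app.listen(3000);
```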
REST is really just a concept, and you can implement it however you want. There is no one 'right way', as everyone's use cases are different. (Yes, I understand that there is a defined spec out there, but you can still do it however you want.) In this situation, if your GET needs to have a side effect, it will have a side effect. Just make sure to properly document what you did (and potentially why you did it).
However, it sounds like you're just trying to create a queue with multiple subscribers, and if the subscribers are automated (such as scripts or other machines) you may want to look at using an actual queue (http://www.rabbitmq.com/getstarted.html).
If you are using this to power a web UI or something where actual people process this, you could also use a queue, with your GET request simply pulling the next item from the queue.
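A hedged sketch of that, using amqplib against RabbitMQ; the queue name and connection URL are placeholders:

```typescript
// Hypothetical sketch: a GET handler that pulls the next item off a
// RabbitMQ queue. Acking immediately means a crashed worker loses the
// task - a real system might ack only after processing.
import express from 'express';
import amqp from 'amqplib';

async function main() {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();
  await channel.assertQueue('tasks');

  const app = express();
  app.get('/tasks/next', async (_req, res) => {
    const msg = await channel.get('tasks'); // one message, or false if empty
    if (!msg) {
      res.status(404).json({ error: 'queue empty' });
      return;
    }
    channel.ack(msg);
    res.json({ task: msg.content.toString() });
  });
  app.listen(3000);
}

main();
```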
Note that when using most of the messaging systems you will not be able to guarantee the order in which the messages are pulled from the queue, so if the order is necessary you may not be able to use this approach.

iPhone app, server-side component, Parse integration

This will be my first iOS app with any bit of complexity. I'd like to outline the components and structure to get some feedback before I dive into attempting it.
From the user's perspective, the app monitors the water level of a local lake and receives push notifications when the water level changes by a user-specified amount. I think using Parse will be easiest for managing user data, and I will attempt a Node.js server-side component on Nodester (I know some basic JS and figure it's a good up-and-coming language to get familiar with). Here's how I see it working...
The user creates an account on the device and specifies a lakeLevelChange amount at which they will receive a push notification. The user's data is pushed to Parse's data management.
The server-side component will run this program 3-6 times a day:
Pulls a currentLakeLevel via HTTP request
Pulls user data from Parse
Compares the currentLakeLevel to the user specified lakeLevelChange
If the difference is >= lakeLevelChange, an HTTP POST request for a push notification is sent for each user whose specified condition is met
Parse receives the POST request and sends a push notification to the APNS server
Client receives push notification
It actually doesn't sound terribly complex when it's typed out. Is this the proper way of structuring this functionality? Am I missing anything? Suggestions are greatly appreciated!
Bit of a logic problem:
The server-side component will run this program 3-6 times a day:
Pulls a currentLakeLevel via HTTP request.
Pulls user data from Parse
Compares the currentLakeLevel to the user specified lakeLevelChange
If the difference is >= lakeLevelChange, an HTTP POST request for a push notification is sent for each user whose specified condition is met
You actually need to store the level at the last alert for each user, too. Otherwise incremental changes could creep over your users' threshold and never trigger an alert.
Imagine if I said I want to be alerted when the level has changed by 6 inches. You then record seven events in which the level rises by an inch each time. At no point did you observe more than 6 inches of change, but the total change is over my threshold for notification, and I probably meant to have you notify me about that.
So when you fire an alert, you need to store the current level, and then on each change event, you compare that to the last level you notified them about.
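A hedged sketch of that logic; the field names (lakeLevelChange, lastAlertLevel) are made up for illustration:

```typescript
// Hypothetical sketch: compare the current level to the level at the
// user's LAST ALERT, not the previous reading, so incremental changes
// cannot creep past the threshold unnoticed.
interface UserAlertState { lakeLevelChange: number; lastAlertLevel: number; }

function onLevelReading(user: UserAlertState, currentLevel: number): void {
  if (Math.abs(currentLevel - user.lastAlertLevel) >= user.lakeLevelChange) {
    sendPushNotification(user);
    user.lastAlertLevel = currentLevel; // reset the baseline after alerting
  }
}

function sendPushNotification(_user: UserAlertState): void {
  // Stub: in the design above this would be an HTTP POST to Parse.
}
```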
You're missing the unhappy path. It's the path programmers never travel but programs always do. Nothing goes the way we plan it, so we have to plan for failures. Ask yourself questions like: "What happens when the server powers down for maintenance or outages and misses one or all of its 3-6 scheduled runs?" "Should the missed executions queue up and send out a bunch of missed notifications?" "What happens when the user changes what they specified as lakeLevelChange but the radio is out and/or the server request cannot complete?" "What happens when Parse gets garbage data in or produces garbage data out?" Asking just a few of these will steer you towards an optimal design.

Creating a constantly updating feed like Twitter

I'd like to have something in my app that is just like Twitter but not Twitter. Basically it will be a place where people can submit messages without needing an account. The only way they can submit is through the app. I want other app users to see the submitted messages almost immediately. I believe push notifications can do that sort of work, but do I need push notifications for this? How does Twitter do it?
-- EDIT --
After reading some of the responses, push might be what I need. People will be submitting messages to my server often. If someone is watching the feed, they might see one new message per minute depending on the query they are using. I'm thinking of going with a MySQL database (which allows switching to cheaper non-Windows servers without much hassle) and push notifications. Are there any reasons those won't work for my scenario?
You only need push notification if you want the app to be able to receive new messages while closed.
Here's a rough description of one way to do this:
Your app sends a message via HTTP POST to your server.
Your server stores the message in a database, using the iPhone's unique ID as an identifier.
Your app connects to the server frequently, asking for new messages.
If there are any new ones, the server hands the message to the app, which displays it.
This is approximately what Twitter/iPhone Twitter apps do.
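A hedged sketch of the "connects frequently, asking for new messages" step above (the URL, the message shape, and the 15-second interval are all made up; assumes the global fetch in modern Node):

```typescript
// Hypothetical sketch: ask the server for anything newer than the last
// message this client has seen, then remember the new high-water mark.
interface Message { id: number; text: string; }

let lastSeenId = 0;

async function pollForMessages(): Promise<void> {
  const url = `https://example.com/messages?since=${lastSeenId}`;
  const messages: Message[] = await fetch(url).then(r => r.json());
  for (const message of messages) {
    lastSeenId = Math.max(lastSeenId, message.id);
    console.log(message.text); // display in the app
  }
}

setInterval(pollForMessages, 15_000);
```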
Your choices are fairly binary:
Use push notifications
Use polling
With push notifications:
You control when you contact your users... heavy load means you can slow updates down to avoid taxing your infrastructure
Contrariwise, you have to push to clients that may not even be there anymore (and thus you may need some sort of registration model); high load may mean that clients don't get immediate updates
You can leverage things like Amazon's EC2 to give you more processing power
Unless you're out of capacity, users are almost certain to be receiving updates as they happen
To pick up messages missed while offline, the SERVER needs to know which message was last successfully received, store older messages, and forward many all at once
If you choose to use polling:
You must have a stable address to be polled
You need the ability to have lots of quick query connections checking for new data, then returning that data if required.
If your application becomes popular enough you may find you don't have enough resources
If your resources are taxed, your application will go down rather than just slow down
You don't need to register clients and keep track of their on/offline state
Parallelizing on the fly is a bit trickier
To pick up older messages, the CLIENT needs to know when they last received a message and then request the server send any message since that time
Both can be fast, but they come with different bandwidth and processing profiles. I prefer push for everything that's real-time.
Might want to take a look at XMPP.
Twitter doesn't really push events out to the iPhone in realtime. It's more like polling by the various clients.
If you really want instantaneous for the last mile you'll want to use push.
Twitter uses lots of servers and RAID arrays to handle the load of millions of people posting 140-character messages. Twitter clients log in and request a list of updates for all of the people the user is following within a certain time frame.
Push wouldn't be a good candidate for this because it does not persist the "tweets". It is simply a notification mechanism. There is a text messaging app on the App Store (called Ping!) that relies completely on push notifications for sending text messages. This seems to work fine, but if the developers are keeping track of the messages, it is all done on their servers. In their case push makes sense, as you want to alert the user of a new message. In the case of a Twitter clone, however, it would probably just annoy users if they got a new notification every time someone tweeted.
In the end you're better off just implementing it server side and then developing an iPhone client that logs in and retrieves the latest tweets for the people the user is following.