I want to clear the 'response' queue and any other queues if a processor is stopped because of a failure (I stop it with a 'template', which works similarly to the REST API).
I have read this: https://nifi.apache.org/docs/nifi-docs/rest-api/index.html
but I have no idea how I can use it to fulfill my idea.
I mean, it would be perfect if I could clear the response queue whenever I have at least 1 flowfile in the failure queue. Is that possible?
Can I use a PUT request to delete queues? I mean, is there any state for flowfiles in queues that would let me mark them as empty or deleted?
Using your browser's Developer Tools window, use the UI to clear a queue while monitoring the network tab. Everything the Apache NiFi UI does is performed via the REST API. You will be able to see exactly what requests are sent to the server to clear the connection queue and can recreate that programmatically.
The specific API endpoint you want in this case is POST /flowfile-queues/{id}/drop-requests where {id} is the connection ID.
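For example, a minimal Python sketch of that call (the host, port, and connection ID are placeholders; the response shape follows the NiFi REST API's DropRequestEntity, but verify against your NiFi version):

```python
import requests

NIFI_API = "http://localhost:8080/nifi-api"  # placeholder NiFi host/port
CONNECTION_ID = "your-connection-id"         # placeholder connection ID

# Ask NiFi to drop (empty) the queue for this connection.
resp = requests.post(f"{NIFI_API}/flowfile-queues/{CONNECTION_ID}/drop-requests")
resp.raise_for_status()

# NiFi creates an asynchronous drop request; its id and state come back in the body.
drop_request = resp.json()["dropRequest"]
print(drop_request["id"], drop_request.get("state"))
```

Dropping is asynchronous, so a long queue may take a moment to empty; you can GET the same drop-requests resource with the returned id to check for completion.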
I'm new to Apache Kafka, building some applications, and I got stuck dealing with a specific problem.
I'll try my best to explain my use case.
I have an external application, a kind of ticket manager, that I would like to pull data from. It has a paginated REST API where I can get ticket data by client. I would like to loop through this API until the last page and send the data to Kafka, where my sink connectors would send it on to three DBs.
Q) Is my best option to create some kind of Python script to get the data and POST it to the Kafka REST Proxy API?
I don't think you really have any good option here.
Pages imply ordering; if you have N pages and attempt to send N requests, then your producer requests could fail and retry any of them, causing loss of information and random ordering.
Two options to fix that (a sketch of the first follows below):
- send "page count" and "current page" along with each message and reshuffle the data at some downstream system
- don't produce any messages until you have iterated over all pages, keeping in mind that Kafka has a maximum request size
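Here is a minimal sketch of the first option, assuming the kafka-python package and a hypothetical ticket API that returns results and total_pages fields (your API's paging shape will differ):

```python
import json

import requests
from kafka import KafkaProducer  # pip install kafka-python

TICKETS_URL = "https://example.org/api/tickets"  # hypothetical ticket API

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

page = 1
while True:
    resp = requests.get(TICKETS_URL, params={"client": 42, "page": page})
    resp.raise_for_status()
    body = resp.json()  # assumed shape: {"results": [...], "total_pages": N}
    for ticket in body["results"]:
        # Attach paging metadata so a downstream consumer can reorder
        # messages or detect missing pages.
        producer.send(
            "tickets",
            value=ticket,
            headers=[
                ("page", str(page).encode()),
                ("total_pages", str(body["total_pages"]).encode()),
            ],
        )
    if page >= body["total_pages"]:
        break
    page += 1

producer.flush()
```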
Problem with either approach - what happens if another page is added to the API while you're producing or writing to the database? Or if existing pages change? How will you detect which pages you need to request again or overwrite?
POST them to Kafka REST Proxy API?
If the REST Proxy is the only way you're able to get data into the cluster, then sure, but it'll be less performant than a native producer client.
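For comparison, posting through the Confluent REST Proxy looks roughly like this (v2 API, proxy assumed to be on localhost:8082; check your proxy version's docs):

```python
import requests

# One HTTP round trip per batch of records; a native producer batches
# and pipelines far more efficiently than this.
resp = requests.post(
    "http://localhost:8082/topics/tickets",
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    json={"records": [{"value": {"ticket_id": 1, "status": "open"}}]},
)
resp.raise_for_status()
print(resp.json())  # offsets for the produced records
```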
I am designing a microservice architecture, using a database-per-service pattern.
Following the example of an Order Service and a Shipping Service: when a user makes an HTTP REST request to the Order Service, it fires an event to notify the Shipping Service. All this happens asynchronously. So what happens to the user experience? I mean, the user needs an immediate response from the HTTP request. How can I handle this scenario?
Respond as soon as you have stored the request.
Part of the point of microservices is that you have a system composed of independently deployable elements that do not require coordination.
If you want a system that is reliable even though the services don't have 100% uptime, then you need to have some form of durable message storage so that the sender and the receiver don't need to be running at the same time.
Therefore, your basic pattern for data from the outside is that the information from the incoming HTTP request is copied, not directly into a running service, but instead into the message store, to be processed by the service at some later time.
In other words, your REST API is a facade in front of your storage, not in front of the service itself.
The actor model may be a useful analogy; information moves around by copying messages into different inboxes, which are later consumed by the subscribing actor.
From the perspective of the client, the HTTP response is an acknowledgement that the request has been received and recognized as valid. Think "thank you for your order, we'll send you an email when your purchase is ready for pick up."
On the web, we would include in the response links to other useful resources; click here to see the status of your order, click there to see your history of recent orders, and so on.
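As a minimal sketch of that facade (Flask, with a dict standing in for the durable message store; a real system would use a log or an outbox table):

```python
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a durable message store (e.g. a Kafka topic or an outbox
# table). A dict is only for illustration; it is not durable.
message_store = {}

@app.post("/orders")
def create_order():
    order_id = str(uuid.uuid4())
    # Copy the incoming request into the store; the Order Service consumes
    # it later, independently of this HTTP exchange.
    message_store[order_id] = {"status": "received", "payload": request.get_json()}
    response = jsonify({
        "message": "Thank you for your order, we'll let you know when it's ready.",
        "links": {"status": f"/orders/{order_id}/status"},
    })
    response.status_code = 202
    response.headers["Location"] = f"/orders/{order_id}/status"
    return response
```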
I have a problem that is pretty crucial, but I haven't been able to find a good answer to it for a while.
I have a microservice-based backend with a gateway, a few other microservices, and Kafka brokers.
The gateway offers a synchronous REST API for reads/queries and an asynchronous one for writes.
The write scenario looks as follows: the gateway returns a 202 Accepted status and publishes an event, e.g. CreateItem, to Kafka. The item service subscribes to this kind of event, creates the item, and emits an ItemCreated event.
My problem is how to handle such a scenario on the frontend side.
The most basic approach I thought about is to route to the items list page and poll for items, so the newly created item shows up there eventually (maybe with some kind of indicator that the item's creation is still being processed), but it's kinda stupid.
I also thought about pushing writes from the frontend over a WebSocket to the gateway; on the ItemCreated event the gateway would push that info back to the client. But it doesn't resolve the problem - what do I show the user in the meantime?
On the other hand, I can use the WebSocket solution and show a loading screen with an indeterminate progress bar while waiting for a response over the socket, but that would make the write effectively synchronous - at least on the frontend side. Just as well, I could make the write HTTP POST endpoint synchronous on the gateway side and return the response only after the ItemCreated event has been received.
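Roughly, the WebSocket variant I have in mind on the gateway side would look something like this (a sketch assuming the websockets and aiokafka packages, and a hypothetical correlationId field on the ItemCreated event):

```python
import asyncio
import json

import websockets  # pip install websockets
from aiokafka import AIOKafkaConsumer  # pip install aiokafka

# Maps a correlation id (returned to the client with the 202) to its socket.
pending = {}

async def handle_client(ws):
    # The client sends the correlation id it received from the gateway.
    correlation_id = await ws.recv()
    pending[correlation_id] = ws
    await ws.wait_closed()
    pending.pop(correlation_id, None)

async def forward_item_created():
    consumer = AIOKafkaConsumer("ItemCreated", bootstrap_servers="localhost:9092")
    await consumer.start()
    try:
        async for msg in consumer:
            event = json.loads(msg.value)
            # correlationId on the event is an assumption of this sketch.
            ws = pending.pop(event.get("correlationId"), None)
            if ws is not None:
                await ws.send(json.dumps(event))
    finally:
        await consumer.stop()

async def main():
    async with websockets.serve(handle_client, "localhost", 8765):
        await forward_item_created()

asyncio.run(main())
```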
So, what would be the best solution to this problem? Are any of the ones I listed any good?
We are implementing a REST API which will kick off multiple long-running backend tasks. I have been reading the RESTful Web Services Cookbook, and the recommendation is to return HTTP 202 / Accepted with a Content-Location header pointing to the task being processed (e.g. http://www.example.org/orders/tasks/1234) and have the client poll this URI for updates on the long-running task.
The idea is to have the REST API immediately post a message to a queue, with a background worker role picking up the message from the queue and spinning up multiple backend tasks, also using queues. The problem I see with this approach is how to assign a unique ID to the task and subsequently let the client request the status of the task by issuing a GET to the Content-Location URI.
If the REST API immediately posts to a queue, then it could generate a GUID and attach that as an attribute on the message being added to the queue, but fetching the status of the request becomes awkward.
Another option would be to have the REST API immediately add an entry to the database (let's say an order, with a new order ID) with an initial status, and then put a message on the queue to kick off the background tasks, which would subsequently update that database record. The API would return this new order ID in the Content-Location header for the client to use when checking the status of the task.
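In code, that second option would look roughly like this (a sketch with SQLite and an in-memory queue standing in for the real database and message queue):

```python
import queue
import sqlite3
import uuid

task_queue = queue.Queue()  # stand-in for the real message queue
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT)")

def create_order(payload):
    order_id = str(uuid.uuid4())
    # 1. Record the order first, so GET /orders/tasks/{id} can always answer.
    db.execute("INSERT INTO orders VALUES (?, 'pending')", (order_id,))
    db.commit()
    # 2. Then enqueue the work; the worker updates the row as it progresses.
    task_queue.put({"order_id": order_id, "payload": payload})
    # The API returns 202 with Content-Location: /orders/tasks/{order_id}.
    return order_id
```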
Somehow adding the database entry first, then adding the message to the queue seems backwards, but only adding the request to the queue makes it hard to track progress.
What would be the recommended approach?
Thanks a lot for your insights.
I assume your system looks like the following. You have a REST service which receives requests from the client. It converts the requests into commands the business logic can understand and puts these commands into a queue. You have one or more workers which process and remove these commands from the queue and send the results to the REST service, which can respond to the client.
Your problem is that with long-running tasks the client connection times out, so you cannot send a response. What you can do is send a 202 Accepted after you put the commands into the queue, and include a polling link so the client will be able to poll for changes. Since your tasks have multiple subtasks, there is real progress to report, not just pending and complete status changes.
If you want to stick with polling, you should create a new REST resource which contains the actual state and the progress of the long-running task. This means you have to store this info in a database so the REST service will be able to respond to requests like GET /tasks/23461/status, and your worker has to update the database whenever it completes a subtask or the whole task.
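For example, a minimal sketch of such a status resource (Flask, with a hypothetical tasks table that the worker keeps updated):

```python
import sqlite3

from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/tasks/<task_id>/status")
def task_status(task_id):
    # The worker updates this row after each subtask, so polling clients
    # see incremental progress rather than just pending/complete.
    db = sqlite3.connect("tasks.db")
    row = db.execute(
        "SELECT state, completed_subtasks, total_subtasks FROM tasks WHERE id = ?",
        (task_id,),
    ).fetchone()
    if row is None:
        return jsonify({"error": "unknown task"}), 404
    state, done, total = row
    return jsonify({"state": state, "progress": f"{done}/{total}"})
```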
If your REST service runs as a daemon, the worker can notify it of progress directly, so storing the task status in the database won't be the worker's responsibility. A REST service of that kind can keep the info in memory as well.
If you decide to use WebSockets to notify the client, you can create a notification service. Over REST you respond with a task ID. The client then sends that task ID over the WebSocket connection, so the notification service knows which WebSocket connection has subscribed to the events of a certain task. After that you won't need the REST service: you can send the progress through the WebSocket connection for as long as the client keeps it open.
You can combine these solutions as follows. You let your REST service create a task resource, so you'll be able to access the progress by using a polling link. Along with the 202 you send back an identifier, which the client then sends over the WebSocket connection, so a notification service can notify the client. On each progress update the worker notifies the REST service, which creates a link like GET /tasks/23461/status and sends it to the client through the notification service. The client can then use the link to update its status.
I think the last one is the best solution if your REST service runs as a daemon, because you can move the notification responsibility to a dedicated notification service, which can use WebSockets, polling, SSE, whatever you want. It can fail without killing the REST service, so the REST service stays stable and fast. If you also send back a manual update link with the 202, the client can update manually (assuming a human-controlled client), so you get something like graceful degradation if the notification service is unavailable. The notification service is easy to maintain because it doesn't know anything about the tasks; it just sends data to the clients. Your worker won't have to know anything about how to send notifications or how to create hyperlinks. It will be easier to maintain the client code too, since it will be almost a pure REST client; the only extra feature is the subscription for notification links, which does not change frequently.
I have a running system that processes short- and long-running operations with a request-response interface based on Agatha-RRSL.
Now we want to change it a little in order to be able to send requests via a website in JSON format, so I'm trying out REST server implementations that support JSON.
The REST server will be one module or "shelf" handled by Topshelf, another module will be the processing module, and the last one the NoSQL database runner module.
To talk between the REST and processing modules I'm thinking about a service bus, but we have two types of request: short requests that perform work in 1-2 seconds and long requests that take around a minute.
Is a service bus the right choice for this work? I'm thinking about returning a "response" for long-running operations with a token that can be used to request the operation's status and results with a new request. The problem is that a big part of the requests must behave like sync requests in order to complete the HTTP response.
I think I'll also have problems with response size (on the MSMQ message transport) when I have to return a huge list of objects.
Any hints?
NServiceBus is not really suitable for request-response messaging patterns. It's more suited to asynchronous publish-subscribe.
Edit: in order to implement a kind of request-response, you would need messaging in both directions, consisting of three logical steps:
1. The client sends a message requesting the data.
2. The server receives the message, processes it, constructs a return message with the data, and sends it to the client.
3. The client processes the returned data.
Because each of these steps takes place in isolation and in an asynchronous manner, there can be no meaningful SLA or timeout enforced between when a client sends a request and when it receives a response. But this works nicely for large processing jobs which may take several minutes to complete.
Additionally, a common value which ties the request to the response needs to be present in both messages. Otherwise a client could send more than one request, receive multiple responses, and not know which response was for which request.
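Stripped to its essence, the correlation looks like this (a sketch with in-memory Python queues standing in for the message transport and correlation headers that NServiceBus would provide):

```python
import queue
import threading
import uuid

requests_q, responses_q = queue.Queue(), queue.Queue()

def server():
    # Receive a request, process it, and send a response that carries the
    # same correlation id, so the client can match it to its request.
    msg = requests_q.get()
    responses_q.put({
        "correlation_id": msg["correlation_id"],
        "data": f"processed {msg['payload']}",
    })

def client():
    correlation_id = str(uuid.uuid4())
    requests_q.put({"correlation_id": correlation_id, "payload": "big job"})
    # With several requests in flight, the correlation id is the only way
    # to know which response belongs to which request.
    reply = responses_q.get()
    assert reply["correlation_id"] == correlation_id
    print(reply["data"])

threading.Thread(target=server).start()
client()
```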
So you can do this with NServiceBus, but it takes a little more thought.
Also, NServiceBus uses MSMQ as the underlying transport, not HTTP.