I'm new to Celery and am trying to understand the backend parameter.
I followed this document - http://celery.readthedocs.org/en/latest/getting-started/first-steps-with-celery.html - and was able to execute a task.
I see this result in the worker window -
[2015-07-04 15:45:32,633: INFO/MainProcess] Task tasks.add[95eee2f4-8491-4200-8628-30ea131d9777] succeeded in 0.0166970053688s: 20
But I'm not sure where the results are being stored when I use the backend parameter.
Here is my backend parameter:
app = Celery('tasks', broker='amqp://guest@localhost//', backend='amqp')
Task results are stored in dedicated message queues in the AMQP implementation you are using (most probably RabbitMQ): with the amqp result backend, each task's result is published to its own queue.
Since you have read the document you linked to, I assume you know how to retrieve results and are only interested in knowing where they are stored.
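If you want to see this concretely, you can peek at a stored result with any AMQP client. Below is a rough sketch using the RabbitMQ Java client; the queue name is an assumption on my part (the amqp backend creates one result queue per task, named after the task id, and exact naming varies across Celery versions), so confirm it with rabbitmqctl list_queues first.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class PeekCeleryResult {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setUri("amqp://guest:guest@localhost/");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Assumed queue name: the task id from the worker log above.
            // Verify the real name with: rabbitmqctl list_queues
            String queue = "95eee2f4-8491-4200-8628-30ea131d9777";
            GetResponse response = channel.basicGet(queue, true);
            if (response != null) {
                // The body is the serialized result payload Celery stored.
                System.out.println(new String(response.getBody(), "UTF-8"));
            }
        }
    }
}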
I'm trying to execute a simple step from Azure Logic Apps to get the pipeline run statistics; the pipeline calls the Logic App in a Web activity.
However, I'm receiving an error, and I don't understand what exactly the step expects as input here.
Could you please assist in resolving the above?
You should not use HTTP requests to pass in your run ID, because the run ID changes every time you run the pipeline.
You should use the Create a pipeline run action first; then you can pass the run ID from that action's output to the Get a pipeline run action.
You can refer to this question.
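For reference, the same create-then-get sequence sketched against the Data Factory REST API with Java's built-in HTTP client. The subscription, resource group, factory, and pipeline names are placeholders, and a valid AAD bearer token is assumed; the Logic Apps connector actions do the equivalent of these two calls.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PipelineRunStatus {
    // Placeholders: fill in your own subscription, resource group, and factory.
    static final String BASE =
        "https://management.azure.com/subscriptions/SUB_ID/resourceGroups/RG"
        + "/providers/Microsoft.DataFactory/factories/MY_FACTORY";

    public static void main(String[] args) throws Exception {
        String token = System.getenv("AAD_TOKEN"); // assumes a valid AAD bearer token
        HttpClient http = HttpClient.newHttpClient();

        // Step 1: create the run; the response body contains {"runId": "..."}.
        HttpRequest create = HttpRequest.newBuilder()
            .uri(URI.create(BASE + "/pipelines/MyPipeline/createRun?api-version=2018-06-01"))
            .header("Authorization", "Bearer " + token)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString("{}"))
            .build();
        String runId = http.send(create, HttpResponse.BodyHandlers.ofString())
            .body().replaceAll("(?s).*\"runId\"\\s*:\\s*\"([^\"]+)\".*", "$1");

        // Step 2: query that same run ID for its status and statistics.
        HttpRequest get = HttpRequest.newBuilder()
            .uri(URI.create(BASE + "/pipelineRuns/" + runId + "?api-version=2018-06-01"))
            .header("Authorization", "Bearer " + token)
            .GET()
            .build();
        System.out.println(http.send(get, HttpResponse.BodyHandlers.ofString()).body());
    }
}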
It seems some file-identifier logic needs to be added in your case: you need to take the Output body of the JSON file in the next block.
16:37:21.945 [Workflow Executor taskList="PullFulfillmentsTaskList", domain="test-domain": 3] WARN com.uber.cadence.internal.common.Retryer - Retrying after failure
org.apache.thrift.transport.TTransportException: Request timeout after 1993ms
at com.uber.cadence.serviceclient.WorkflowServiceTChannel.throwOnRpcError(WorkflowServiceTChannel.java:546)
at com.uber.cadence.serviceclient.WorkflowServiceTChannel.doRemoteCall(WorkflowServiceTChannel.java:519)
at com.uber.cadence.serviceclient.WorkflowServiceTChannel.respondDecisionTaskCompleted(WorkflowServiceTChannel.java:962)
at com.uber.cadence.serviceclient.WorkflowServiceTChannel.lambda$RespondDecisionTaskCompleted$11(WorkflowServiceTChannel.java:951)
at com.uber.cadence.serviceclient.WorkflowServiceTChannel.measureRemoteCall(WorkflowServiceTChannel.java:569)
at com.uber.cadence.serviceclient.WorkflowServiceTChannel.RespondDecisionTaskCompleted(WorkflowServiceTChannel.java:949)
at com.uber.cadence.internal.worker.WorkflowWorker$TaskHandlerImpl.lambda$sendReply$0(WorkflowWorker.java:301)
at com.uber.cadence.internal.common.Retryer.lambda$retry$0(Retryer.java:104)
at com.uber.cadence.internal.common.Retryer.retryWithResult(Retryer.java:122)
at com.uber.cadence.internal.common.Retryer.retry(Retryer.java:101)
at com.uber.cadence.internal.worker.WorkflowWorker$TaskHandlerImpl.sendReply(WorkflowWorker.java:301)
at com.uber.cadence.internal.worker.WorkflowWorker$TaskHandlerImpl.handle(WorkflowWorker.java:261)
at com.uber.cadence.internal.worker.WorkflowWorker$TaskHandlerImpl.handle(WorkflowWorker.java:229)
at com.uber.cadence.internal.worker.PollTaskExecutor.lambda$process$0(PollTaskExecutor.java:71)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Our parent workflow code is basically like this (JSONObject is from org.json):
// Fetch the batch of large JSON payloads from a REST activity.
JSONObject[] array = restActivities.getArrayWithHugeJSONItems();
for (JSONObject hugeJSON : array) {
    // One child workflow per item; the ~5 MB payload is passed directly.
    ChildWorkflow child = Workflow.newChildWorkflowStub(ChildWorkflow.class);
    child.run(hugeJSON);
}
What we found out is that most of the time, the parent workflow worker fails to start the child workflow and throws the timeout exception above. It retries like crazy but never succeeds, printing the timeout exception over and over again. Sometimes we get very lucky and it works, and sometimes it fails even earlier at the activity worker with the same exception. We believe this is because the data is too big (about 5 MB) and cannot be sent within the timeout (judging from the log, we guess it is set to 2s). If we call child.run with small fake data, it works 100% of the time.
The reason we use child workflows is that we want to use Async.function to run them in parallel. So how can we solve this problem? Is there a Thrift timeout config we should increase, or can we somehow avoid passing huge data around?
Thank you in advance!
---Update after Maxim's answer---
Thank you. I read the example, but I still have some questions for my use case. Let's say I get an array of 100 huge JSON objects in my RestActivitiesWorker. If I should not return the huge array to the workflow, I need to make 100 calls to the database to create 100 rows of records, put 100 ids in an array, and pass that back to the workflow. The workflow then creates one child workflow per id, and each child workflow calls another activity with the id to load the data from the DB. But that activity has to pass the huge JSON to the child workflow; is this OK? And for the RestActivitiesWorker making 100 inserts into the DB, what if it fails in the middle?
I guess it boils down to the fact that our workflow is trying to work directly with huge JSON. We are trying to load huge JSON (5-30 MB, not that huge) from an external system into our system. We break the JSON down a little, manipulate a few values, use values from a few fields for some different logic, and finally save it in our DB. How should we do this with Temporal?
Temporal/Cadence doesn't support passing large blobs as inputs and outputs, as it uses a DB as the underlying storage. So you want to change the architecture of your application to avoid this.
The standard workarounds are:
Use an external blob store to save the large data and pass a reference to it as a parameter (a sketch of this approach follows below).
Cache the data in a worker process, or even on the host disk, and route the activities that operate on that data to that process or host. See the fileprocessing sample for this approach.
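Here is a minimal sketch of the first workaround, reusing the ChildWorkflow name from the question. The S3 bucket, the AWS SDK usage, and the changed run(String) signature are my assumptions, not part of the original code; the point is only that workflow inputs shrink from ~5 MB to a short key.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;
import java.util.UUID;
import org.json.JSONObject;

public class ClaimCheckSketch {

    public interface ChildWorkflow {
        @WorkflowMethod
        void run(String payloadKey); // takes a small reference, not the JSON itself
    }

    // Activity side: upload each large payload, return only the small keys.
    public static String[] uploadPayloads(JSONObject[] array) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String[] keys = new String[array.length];
        for (int i = 0; i < array.length; i++) {
            keys[i] = "payloads/" + UUID.randomUUID();
            s3.putObject("my-payload-bucket", keys[i], array[i].toString());
        }
        return keys; // a few bytes per item instead of ~5 MB
    }

    // Workflow side: fan out children with references; each child loads its
    // blob from S3 inside its own activity and processes it there.
    public static void startChildren(String[] keys) {
        for (String key : keys) {
            ChildWorkflow child = Workflow.newChildWorkflowStub(ChildWorkflow.class);
            child.run(key);
        }
    }
}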
The developers at my company are in the process of incorporating VSTS into our testing. I am developing unit tests for our code, using the VSTS REST API to post the results of the tests, grouped into test runs.
My problem is that I am unable to update the test run to show the number of failed tests and the correct pass rate. My demonstration code uses four unit tests, with three passing results and one failing result. On the test runs page, it shows 0 Failed tests and a 0% Pass Rate.
Internet searches haven't yielded any information on how those fields are set or calculated. I've searched the REST API documentation in the hope that I would just need to set a certain field when calling the endpoints. Although the Failed Tests and Pass Rate fields are returned by the update call, it doesn't seem like you can set those fields directly for a test run. I haven't found any alternate endpoints that affect those fields.
https://learn.microsoft.com/en-us/rest/api/vsts/test/runs/update?view=vsts-rest-5.0
So, the question in a nutshell is: how do I update the Failed Tests and Pass Rate fields for a VSTS Test Run using the REST API?
I am programming in C#, using HttpClient to call the REST API endpoints and passing the relevant data as JSON. Everything is created and updated properly in VSTS; it is just these two fields that don't seem to be working.
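For reference, the update call being described looks roughly like the sketch below (shown in Java with java.net.http rather than C#). The organization, project, run ID, and personal access token are placeholders, and the api-version follows the linked documentation; note that this only sets the run state, which is exactly the limitation the question describes, since Failed Tests and Pass Rate don't appear to be settable directly.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class UpdateTestRun {
    public static void main(String[] args) throws Exception {
        String pat = System.getenv("VSTS_PAT"); // personal access token
        String auth = Base64.getEncoder().encodeToString((":" + pat).getBytes());

        // PATCH the run; the body can carry fields like state, but the
        // Failed Tests / Pass Rate statistics are only returned, not set.
        String body = "{\"state\": \"Completed\"}";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://myaccount.visualstudio.com/MyProject"
                + "/_apis/test/runs/123?api-version=5.0"))
            .header("Authorization", "Basic " + auth)
            .header("Content-Type", "application/json")
            .method("PATCH", HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // echoes the run, stats included
    }
}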
I am having this same issue. I am creating the run, adding test results, then closing the test run. On the front end my runs look like this (screenshot: test run stats), but all my results show as passes within the run (screenshot: test results).
I can't understand why this is the case, as the graphs on the test runs are correctly picking up the passes and failures.
I have a Talend job that produces a zip package as its final product. Now I want to expose this job as a REST service with a GET request, so that each time a client calls the service the package is built and made available for download. I know there is a thread with the exact same title, "expose job as web service", but the accepted answer has links that are no longer valid.
Currently my job looks like this (screenshot omitted).
The idea behind this design is that I have one column named "userfile", of type byte array, in the output flow of tRESTRequest. When a GET request arrives, tJavaRow starts and wraps the file into a byte array; I then transfer it through tMap to the tRESTResponse body as a byte array. What am I missing?
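For what it's worth, the tJavaRow step would presumably do something like the snippet below; the file path is hypothetical, and "userfile" is the byte-array column from the question's schema.

// Inside tJavaRow: read the finished zip from disk into the output column.
// The path is a placeholder; adjust it and add error handling as needed.
byte[] packageBytes = java.nio.file.Files.readAllBytes(
        java.nio.file.Paths.get("/tmp/export/package.zip"));
output_row.userfile = packageBytes;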
I'm using the mockrunner package from http://mockrunner.sourceforge.net/ to set up a mock queue for JUnit testing an XML filter, which operates like this:
1) sets recognized properties for an FTP server (to put and get XML input) and a JMS queue server that keeps track of jobs. Remotely, there waits a server that actually parses the XML once a queue message is received.
2) creates a remote directory using FTP and starts a queue connection using MQConnectionFactory to the given address of the queue server.
3) once the new queue entry is made in 2), the filter waits for a new queue message to appear, signifying the job has been completed by the remote server. The filter then grabs the modified XML file from the FTP server and passes it along to the next filter.
The JUnit test I am working on simply needs to emulate this environment: start a local FTP server and a mock queue server for the filter to connect to; wait for the filter to connect to the queue and put the new XML input file in a local directory via the local FTP server; then wait for the queue message, modify the XML input slightly, put the modified XML in a new directory, and post another message to the queue signifying that the job has completed.
All of the tutorials I have found on the net use EJB and JNDI to look up the queue server once it has been made. If possible, I'd like to sidestep that route by just creating a mock queue on my local machine and connecting to it in the simplest manner possible, without using EJB or JNDI.
Thanks in advance!
I'm using MockEJB, and there are some examples, among them one for using mock queues, so take a look at the info and the example.
Hopefully it helps.
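Since you're already using mockrunner, you can also create the mock queue directly with its JMS module, with no JNDI or EJB container at all. A rough sketch follows; the class and method names are from mockrunner's JMS module as I recall them, so double-check them against the version you have.

import com.mockrunner.jms.DestinationManager;
import com.mockrunner.mock.jms.JMSMockObjectFactory;
import com.mockrunner.mock.jms.MockQueue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;

public class MockQueueSketch {
    public static void main(String[] args) throws Exception {
        // Everything lives in memory; no JNDI lookup involved.
        JMSMockObjectFactory factory = new JMSMockObjectFactory();
        DestinationManager destinations = factory.getDestinationManager();
        MockQueue queue = destinations.createQueue("jobQueue");

        QueueConnectionFactory cf = factory.getMockQueueConnectionFactory();
        QueueConnection connection = cf.createQueueConnection();
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

        // Post the "job completed" message the filter is waiting for.
        QueueSender sender = session.createSender(queue);
        TextMessage message = session.createTextMessage("job-completed");
        sender.send(message);

        // Assert directly against the mock queue's message list.
        System.out.println(queue.getCurrentMessageList().size() + " message(s) on queue");
        connection.close();
    }
}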
I'd recommend having a look at using Apache Camel to create your test case. It is really easy to switch your test case between any of the available components, and most importantly Camel comes with some really handy Mock Endpoints, which make it super easy to test complex routing logic, particularly with asynchronous operations.
If you also use Spring, then maybe start by trying out these Spring unit tests with mock endpoints in Camel, which let you inject the mock endpoints to perform assertions on, together with the ProducerTemplate object, making it really easy to fire messages for your test case; e.g. see the last example on that page.
Start off using simple endpoints like the SEDA endpoint; then, once you've got your head around the core Spring/mock framework, try using the JMS or FTP endpoints, etc.
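A minimal sketch of that style of test, assuming camel-core and camel-test (the JUnit 4 flavor) on the classpath; the seda:start route and the message body are illustrative stand-ins for your real JMS flow.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class QueueFilterTest extends CamelTestSupport {

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                // Stand-in for the real JMS route: anything sent to seda:start
                // is delivered to the mock endpoint for assertions.
                from("seda:start").to("mock:result");
            }
        };
    }

    @Test
    public void testJobCompletedMessageArrives() throws Exception {
        MockEndpoint result = getMockEndpoint("mock:result");
        result.expectedMessageCount(1);
        result.expectedBodiesReceived("<job status=\"completed\"/>");

        // Fire the message the remote server would send on job completion.
        template.sendBody("seda:start", "<job status=\"completed\"/>");

        assertMockEndpointsSatisfied();
    }
}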