I have a simple Action in Laravel Nova which updates stock inventory via an API call. I want to be able to queue this as individual jobs for each product that requires an update to stagger the API calls.
When I add the action to a resource and run it from the resource index page against multiple selected products, only a single job is created in my jobs table.
So when the queue is processed, rather than each product being queued for an individual update, a single job runs that loops over all of the selected products and makes multiple API requests in quick succession, which is not my desired result.
Is there a way for the action to create a job for each resource that the action is run against?
Action Class
class UpdateInventory extends Action implements ShouldQueue
{
use InteractsWithQueue, Queueable, SerializesModels;
public function __construct()
{
$this->connection = config('queue.default');
$this->queue = 'inventory_update';
}
/**
* Perform the action on the given models.
*/
public function handle(ActionFields $fields, Collection $products)
{
foreach ($products as $product) {
try {
$inventoryService = resolve(InventoryService::class);
$inventoryService->updateProductInventory($product); // <- API calls within
$this->markAsFinished($product);
} catch (\Exception $e) {
$this->markAsFailed($product, $e);
}
}
return Action::message("Inventory update started");
}
}
Set the action's static $chunkCount property to 1 so that Nova dispatches a separate queued job for each selected resource instead of chunking them into a single job:
class UpdateInventory extends Action implements ShouldQueue
{
use InteractsWithQueue, Queueable, SerializesModels;
/**
* The number of models that should be included in each chunk.
*
* @var int
*/
public static $chunkCount = 1;
}
You may also instruct Nova to queue your actions as a batch. That requires a migration to create the batches table, which you can generate with php artisan queue:batches-table before migrating your database. This table contains total_jobs, pending_jobs, failed_jobs and failed_job_ids fields, so you can keep track of each job. If each job handles one resource, that should resolve your issue; a sketch of a batchable version follows the links below.
More information:
Laravel Nova Actions - Job Batching
Laravel Queues - Job Batching
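A minimal sketch of what a batchable version of the action could look like, assuming a Nova version that provides the BatchableAction contract and the framework's Batchable trait named in the linked job-batching docs (the withBatch callbacks are optional, and handle() stays the same as above):
use Illuminate\Bus\Batch;
use Illuminate\Bus\Batchable;
use Illuminate\Bus\PendingBatch;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Laravel\Nova\Actions\Action;
use Laravel\Nova\Contracts\BatchableAction;
use Laravel\Nova\Fields\ActionFields;

class UpdateInventory extends Action implements ShouldQueue, BatchableAction
{
    use Batchable, InteractsWithQueue, Queueable;

    // Keep one model per job so each product update is its own entry in the batch.
    public static $chunkCount = 1;

    /**
     * Register callbacks on the pending batch of jobs.
     */
    public function withBatch(ActionFields $fields, PendingBatch $batch)
    {
        $batch->then(function (Batch $batch) {
            // All inventory-update jobs in the batch completed successfully.
        })->catch(function (Batch $batch, \Throwable $e) {
            // A job within the batch failed; failures are tracked in the batches table.
        });
    }
}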
My goal is to create a pipeline that invokes a back-end (Cloud hosted) service a maximum number of times per second ... how can I achieve that?
Back story: Imagine a back-end service that is invoked with a single input and returns a single output. This service has quotas associated with it that permit a maximum number of requests per second (let's say 10 requests per second). Now imagine an unbounded source PCollection where I wish to transform the elements in the input by passing them through my back-end service. I can envisage a ParDo invoking the back-end service once for each element in the input PCollection. However, this doesn't perform any kind of flow control against the back-end.
I could imagine my DoFn logic testing the response from the back-end response and retrying till it succeeds but this doesn't feel right. If I have 100 workers, then I seem to be burning a lot of resources and putting a load on the back-end. What I think I want to do is throttle the calls to the back-end from the pipeline.
Good Day, kolban. In addition to Bruno Volpato's helpful RampupThrottlingFn example, I've seen a combination of the following. Please do not hesitate at all to let me know how I can update the example with more clarity.
PeriodicImpulse - emits an Instant at a fixed specified interval.
Fix the number of workers with the maxNumWorkers and numWorkers options (please see Dataflow Pipeline Options), if using the Dataflow runner.
Use the Beam Metrics API to monitor the actual resource request count over time and set alerts. When using Dataflow, the Beam Metrics API automatically connects to Cloud Monitoring as custom metrics.
The following shows abbreviated code starting from the whole pipeline followed by some details as needed to provide clarity. It assumes a target of 10 workers, using Dataflow with the arguments --maxNumWorkers=10 and --numWorkers=10 and a goal to limit the resource requests among all workers to 10 requests per second. This translates to 1 request per second per worker.
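For reference, the worker-count arguments mentioned above could be passed on the command line and consumed in main() roughly as follows; the runner, project and region values are placeholders and not part of the original answer:
// Launched with something like:
//   --runner=DataflowRunner --project=my-project --region=us-central1 \
//   --maxNumWorkers=10 --numWorkers=10
PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
Pipeline pipeline = Pipeline.create(options);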
PeriodicImpulse limits the Request creation to 1 per second
public class MyPipeline {
public static void main(String[] args) {
Pipeline pipeline = Pipeline.create(/* Usually with options */);
PCollection<Response> responses = pipeline.apply(
"PeriodicImpulse",
PeriodicImpulse
.create()
.withInterval(Duration.standardSeconds(1L))
).apply(
"Build Requests",
ParDo.of(new RequestFn())
)
.apply(ResourceTransform.create());
pipeline.run(); // execute the pipeline
}
}
The RequestFn DoFn emits one Request per Instant emitted by PeriodicImpulse
class RequestFn extends DoFn<Instant, Request> {
@ProcessElement
public void process(@Element Instant instant, OutputReceiver<Request> receiver) {
receiver.output(
Request.builder().build()
);
}
}
ResourceTransform transforms Requests to Responses, incrementing a Counter
class ResourceTransform extends PTransform<PCollection<Request>, PCollection<Response>> {
static ResourceTransform create() {
return new ResourceTransform();
}
@Override
public PCollection<Response> expand(PCollection<Request> input) {
return input.apply("Consume Resource", ParDo.of(new ResourceFn()));
}
}
class ResourceFn extends DoFn<Request, Response> {
private Counter counter = Metrics.counter(ResourceFn.class, "some:resource");
private transient ResourceClient client = null;
@Setup
public void setup() {
client = new ResourceClient();
}
@ProcessElement
public void process(@Element Request request, OutputReceiver<Response> receiver)
{
counter.inc(); // Increment the counter.
// not showing error handling
Response response = client.execute(request);
receiver.output(response);
}
}
Request and Response classes
(Aside: consider creating a Schema for the request input and response output classes. Example below uses AutoValue and AutoValueSchema)
@DefaultSchema(AutoValueSchema.class)
@AutoValue
abstract class Request {
/* abstract Getters. */
abstract String getId();
// Static factory used by RequestFn above; AutoValue generates the Builder implementation.
static Builder builder() {
return new AutoValue_Request.Builder();
}
@AutoValue.Builder
static abstract class Builder {
/* abstract Setters. */
abstract Builder setId(String value);
abstract Request build();
}
}
@DefaultSchema(AutoValueSchema.class)
@AutoValue
abstract class Response {
/* abstract Getters. */
abstract String getId();
@AutoValue.Builder
static abstract class Builder {
/* abstract Setters. */
abstract Builder setId(String value);
abstract Response build();
}
}
I am attempting to abstract the core objects for a service I am writing out into a library. I have all the other artifacts I need straightened out, but unfortunately I cannot find an artifact for @BsonIgnore. I am using @BsonIgnore to ignore some methods that would otherwise be added to the BSON document when they shouldn't be, as the implementing service writes these objects to MongoDB.
For context, the service is written using Quarkus, and Mongo objects are handled with Panache:
implementation 'io.quarkus:quarkus-mongodb-panache'
The library I am creating is mostly just a simplistic pojo library, nothing terribly fancy in the Gradle build.
I have found this on Maven Central: https://mvnrepository.com/artifact/org.bson/bson?repo=novus-releases, but it doesn't seem to be a normal release and doesn't solve the issue.
In case it is useful, here is my code:
@Data
public abstract class Historied {
/** The list of history events */
private List<HistoryEvent> history = new ArrayList<>(List.of());
/**
* Adds a history event to the set held, to the front of the list.
* @param event The event to add
* @return This historied object.
*/
@JsonIgnore
public Historied updated(HistoryEvent event) {
if(this.history.isEmpty() && !EventType.CREATE.equals(event.getType())){
throw new IllegalArgumentException("First event must be CREATE");
}
if(!this.history.isEmpty() && EventType.CREATE.equals(event.getType())){
throw new IllegalArgumentException("Cannot add another CREATE event type.");
}
this.getHistory().add(0, event);
return this;
}
@BsonIgnore
@JsonIgnore
public HistoryEvent getLastHistoryEvent() {
return this.getHistory().get(0);
}
@BsonIgnore
@JsonIgnore
public ZonedDateTime getLastHistoryEventTime() {
return this.getLastHistoryEvent().getTimestamp();
}
}
This is the correct dependency: https://mvnrepository.com/artifact/org.mongodb/bson/4.3.3, but check for the version that matches your driver.
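In a Gradle build it would be declared the same way as the other dependency shown in the question; 4.3.3 is just the version from the link above, so pin whichever version matches the BSON library that quarkus-mongodb-panache already pulls in:
implementation 'org.mongodb:bson:4.3.3'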
I have some users who need to extract a large amount of data from the database. I am creating a job that receives the parameters, queries the database and writes the data to a txt file. As soon as the file is ready, I will send a notification e-mail.
To store the file locally, I am using the Storage facade inside the job.
If I run my code in a regular controller, the file is generated, but when I run the same command inside a job, the file is not generated (even though the job is processed).
I am running Laravel 5.4 on PHP 7.2 and have linked the storage (php artisan storage:link).
Since my code has a lot of complexity (consulting models, mail and so on), I created a simple test controller and a simple test job to isolate the file creation from the rest of the code. The same problem happens in the simplified version.
Here is the controller:
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use Storage;
use App\Jobs\testeStorageJob;
class testController extends Controller
{
public function test(){
Storage::put('testFile.txt', 'Some Content on a regular controller');
testeStorageJob::dispatch();
}
}
and here is the job:
namespace App\Jobs;
use Illuminate\Bus\Queueable;
use Illuminate\Queue\SerializesModels;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Storage;
class testeStorageJob implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
public $tries = 1;
/**
* Create a new job instance.
*
* @return void
*/
public function __construct()
{
//
}
/**
* Execute the job.
*
* @return void
*/
public function handle()
{
Storage::put('testFileQueue.txt', 'Some content on a queue');
}
}
I've checked the filesystem config and it is the default values.
I was expecting to have two files in the storage/app folder, testFile.txt and testFileQueue.txt, but I only have testFile.txt.
I have an Entity that pulls its data from a REST web service. To keep things consistent with the entities in my app that pull data from the database, I have used the ORM and overridden the find functions in the repository.
My problem is that the ORM seems to demand a database table. When I run doctrine:schema:update it complains about needing an index for the entity, and when I add one it creates a table for the entity. I guess this will be a problem in the future, as the ORM will want to query the database and not the web service.
So... am I doing this wrong?
1. If I continue to use the ORM, how can I get it to stop needing a database table for a single entity?
2. If I forget the ORM, where do I put my data-loading functions? Can I connect the entity to a repository without using the ORM?
So... am I doing this wrong?
Yes. It doesn't make sense to use the ORM interfaces if you don't really want to use an ORM.
I think the best approach is NOT to think about implementation details at all. Introduce your own interfaces for repositories:
interface Products
{
/**
* @param string $slug
*
* @return Product[]
*/
public function findBySlug($slug);
}
interface Orders
{
/**
* @param Product $product
*
* @return Order[]
*/
public function findProductOrders(Product $product);
}
And implement them with either an ORM:
class DoctrineProducts implements Products
{
private $em;
public function __construct(EntityManager $em)
{
$this->em = $em;
}
public function findBySlug($slug)
{
return $this->em->createQueryBuilder()
->select()
// ...
}
}
or a Rest client:
class RestOrders implements Orders
{
private $httpClient;
public function __construct(HttpClient $httpClient)
{
$this->httpClient = $httpClient;
}
public function findProductOrders(Product $product)
{
$orders = $this->httpClient->get(sprintf('/product/%d/orders', $product->getId()));
$orders = $this->hydrateResponseToOrdersInSomeWay($orders);
return $orders;
}
}
You can even make some methods use the http client and some use the database in a single repository.
Register your repositories as services and use them rather than calling Doctrine::getRepository() directly:
services:
repository.orders:
class: DoctrineOrders
arguments:
- "@doctrine.orm.entity_manager"
Always rely on your repository interfaces and never on a specific implementation. In other words, always use a repository interface type hint:
class DefaultController
{
private $orders;
public function __construct(Orders $orders)
{
$this->orders = $orders;
}
public function indexAction(Product $product)
{
$orders = $this->orders->findProductOrders($product);
// ...
}
}
If you don't register controllers as services:
class DefaultController extends Controller
{
public function indexAction(Product $product)
{
$orders = $this->get('repository.orders')->findProductOrders($product);
// ...
}
}
A huge advantage of this approach is that you can always change the implementation details later on. MySQL is not good enough for search anymore? Let's use Elasticsearch; it's only a single repository!
If you need to call $product->getOrders() and fetch orders from the API behind the scenes, it should still be possible with some help from Doctrine's lazy loading and event listeners.
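As a rough illustration of the event-listener route, a postLoad listener could hydrate the orders from the REST-backed repository whenever a Product is loaded. This is a sketch rather than part of the original answer: setOrders() on Product and the listener registration (a service tagged as a doctrine.event_listener for the postLoad event) are assumptions, and it hydrates eagerly on load rather than truly lazily:
use Doctrine\ORM\Event\LifecycleEventArgs;

class ProductOrdersListener
{
    private $orders;

    public function __construct(Orders $orders)
    {
        $this->orders = $orders;
    }

    public function postLoad(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();

        if ($entity instanceof Product) {
            // Pull the orders from the API and attach them to the entity.
            $entity->setOrders($this->orders->findProductOrders($entity));
        }
    }
}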
I am working on implementing a state machine for a workflow management system based on the Stateless4j API. However, I am not able to find an effective way to persist the states and transitions in Stateless4j.
As part of our use cases, we have a requirement to keep states alive for 3-4 days or more, until the user returns to the workflow, and we will have more than one workflow running concurrently.
Can you please share your insights on the best practices to persist states in Stateless4j based State Machine implementation?
It looks like what you need to do is construct your StateMachine with a custom accessor and mutator, something like this:
public class PersistentMutator<S> implements Action1<S> {
Foo foo = null;
@Inject
FooRepository fooRepository;
public PersistentMutator(Foo foo) {
this.foo = foo;
}
@Override
public void doIt(S s) {
foo.setState(s);
fooRepository.save(foo);
}
}
Then you want to call the constructor with your accessors and mutators:
/**
* Construct a state machine with external state storage.
*
* @param initialState The initial state
* @param stateAccessor State accessor
* @param stateMutator State mutator
*/
public StateMachine(S initialState, Func<S> stateAccessor, Action1<S> stateMutator, StateMachineConfig<S, T> config) {
this.config = config;
this.stateAccessor = stateAccessor;
this.stateMutator = stateMutator;
stateMutator.doIt(initialState);
}
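Putting the two together, the wiring could look roughly like the following sketch; Foo, FooRepository and the State/Trigger types are placeholders for your own workflow entity and triggers (not part of the original answer), and injection of the repository into the mutator is not shown:
// Load the entity whose state should survive between user visits.
Foo foo = fooRepository.findById(workflowId);

// Normal stateless4j configuration of states and transitions.
StateMachineConfig<State, Trigger> config = new StateMachineConfig<>();
config.configure(State.DRAFT)
      .permit(Trigger.SUBMIT, State.PENDING_REVIEW);

// The accessor reads the persisted state; the mutator writes every new state
// back through the repository, so the machine can be rebuilt days later.
StateMachine<State, Trigger> machine = new StateMachine<>(
        foo.getState(),
        foo::getState,
        new PersistentMutator<State>(foo),
        config);

machine.fire(Trigger.SUBMIT); // the resulting state is saved by the mutator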
Alternatively, you may want to look at StatefulJ. It has built-in support for atomically updating state in both JPA and Mongo out of the box. This may save you some time.
Disclaimer: I'm the author of StatefulJ