Lumen queue jobs with the database driver don't populate the jobs table

I'm completely new to Laravel/Lumen.
I generated a new Lumen project a few hours ago and I'm trying to send a job to the default queue using the database driver.
My .env file looks like this:
DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=mydb
DB_USERNAME=root
DB_PASSWORD=xxxx
CACHE_DRIVER=file
QUEUE_DRIVER=database
Following the official Queues - Lumen page, I generated a migration to create the jobs and failed_jobs tables.
The config/queue.php file has the default configuration.
Problem:
Before running php artisan queue:work, I tried to send a job to the queue with Queue::push(new SendEmailJob), but it ran immediately even though I hadn't started a worker yet.
I checked the database and the jobs table is empty.
I then ran php artisan queue:work, made a request to a specific endpoint, and put the job on the queue using either Queue::push(new SendEmailJob) or dispatch(new SendEmailJob()).
I got no errors, but the jobs table is still empty.
What am I doing wrong?
Why does the queue process all jobs before I run php artisan queue:work?
Thanks in advance

You can dispatch a job and ask for a delay before it is executed:
$this->dispatch((new ProcessJob($id))->delay(10)); // delays execution by 10 seconds
Make sure the worker is not running:
ps aux|grep queue
If it is, kill it. In any case, the best practice is to restart the worker after any code change:
php artisan queue:restart
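If jobs still execute immediately, you can also try pushing them explicitly onto the database connection. This is only a sketch: it assumes the connection in config/queue.php is named "database" and that SendEmailJob uses the standard Queueable trait, as Lumen's base Job class does.
// Push straight onto the "database" connection so the job lands in the jobs table
Queue::connection('database')->push(new SendEmailJob());
// Or, with the dispatch helper, delayed by 10 seconds
dispatch((new SendEmailJob())->onConnection('database')->delay(10));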

In app.php, only push the job once the jobs table actually exists:
if (\Illuminate\Support\Facades\Schema::hasTable('jobs')) {
    Queue::push(new ProcessCenterJob());
}

Related

How is the Airflow database managed periodically?

I am running Airflow using Postgres.
The web server became slow during operation.
The problem was caused by data continually accumulating in the dag_run and log tables of the database (it became faster after connecting to Postgres and deleting the data directly).
Are there any Airflow options to clean the database periodically?
If there is no such option, we will try to delete the data directly using a DAG script.
Also, I find it strange that the web server slows down just because there is a lot of data. Does the web server load all the data when opening another window?
You can purge old records by running:
airflow db clean [-h] --clean-before-timestamp CLEAN_BEFORE_TIMESTAMP [--dry-run] [--skip-archive] [-t TABLES] [-v] [-y]
(cli reference)
It is quite a common setup to include this command in a DAG that runs periodically, as sketched below.
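A minimal sketch of such a cleanup DAG, assuming Airflow 2.3+ (where airflow db clean is available) and a 30-day retention window; the dag_id, schedule, and cutoff are placeholders you would adapt:
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="airflow_db_cleanup",        # placeholder name
    schedule_interval="@weekly",
    start_date=datetime(2023, 1, 1),
    catchup=False,
) as dag:
    # Purge metadata rows (dag_run, log, task_instance, ...) older than 30 days
    BashOperator(
        task_id="db_clean",
        bash_command=(
            "airflow db clean "
            "--clean-before-timestamp '{{ macros.ds_add(ds, -30) }}' "
            "--skip-archive -y"
        ),
    )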

Does Airflow's clear tasks command also remove data from the database?

We have a use case where we have to populate fresh data in our DB. There is already old data present in our DB from a successful DAG run. Now we need to delete the old data and re-run the task.
Airflow already provides a command to clear a selection of task instances.
airflow clear -dx occupancy_reports.* -t building -s 2022-04-01 -e 2022-04-30
Will running this also delete the data from the database and then populate fresh data?
I guess you meant: airflow tasks clear ...
It only clears the set of task instances, as if they had never run (it is not a rollback).
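For reference, the equivalent in the Airflow 2.x CLI would look roughly like the following; the -R flag to treat the DAG id as a regex is an assumption based on the wildcard in the original command:
airflow tasks clear -R -t building -s 2022-04-01 -e 2022-04-30 "occupancy_reports.*"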

AEM Stop command not working for Publish instance

I have an AEM instance set up on an AWS Ubuntu machine. I'm not sure why the stop command is not working for the publish instance; it works fine for the author instance. We have configured the publish instance on a non-default port (5054). Can anyone suggest if I am missing anything? Here is the screenshot showing the error message when running the stop command:
You cannot stop an AEM instance with the stop script if you did not start it with the start script.
If you look at the scripts you'll see that:
the start script creates a file conf/cq.pid that contains the java PID
the stop script looks for that file, tries to stop the instance, and then deletes the file.
Thus, if you did not start the instance with the start script, you cannot stop it with the stop script.
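If the publish instance was started some other way (for example directly with java -jar), a rough manual workaround is to find the quickstart process yourself; the grep pattern and paths below are assumptions and depend on how your instance is laid out:
# Find the PID of the publish quickstart java process (the pattern is an assumption;
# check that it matches only the publish instance, not the author)
PID=$(ps aux | grep '[c]q-quickstart' | awk '{print $2}')
# Either stop it directly...
kill "$PID"
# ...or recreate the pid file so the regular stop script works again
echo "$PID" > crx-quickstart/conf/cq.pid
crx-quickstart/bin/stop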

Best practices for PostgreSQL Docker container initialization with some data

I've created a Docker image with PostgreSQL running inside and exposing port 5432.
The image doesn't contain any database; the container is just an empty PostgreSQL database server.
During (or as part of) the "docker run" command, I'd like to:
attach a db file
create a db via SQL query execution
restore a db from a dump
I don't want to keep the data after the container is closed; it's just a temporary development server.
I suspect it's possible to keep my "docker run" command string quite short/simple.
It's probably possible to mount some external folder with the db/sql/dump in the run command and then create the db during container initialization.
What is the recommended way, and what are the best practices, to accomplish this task? Perhaps somebody can point me to corresponding Docker examples.
This is a good question and probably something other folks asked themselves more than once.
According to the Docker guide you would not do this in a RUN command. Instead you would create an ENTRYPOINT or CMD in your Dockerfile that calls a custom shell script instead of calling the postgres process directly. In this scenario the DB would be created in a "real" filesystem, but then cleaned up during shutdown of the container.
How would this work? The container starts, calls the ENTRYPOINT or CMD as usual, and consumes the init script to get the DB filled. Then, at the moment the container is stopped, the same script is notified with a signal and manually drops the database content.
CMD ["cleanAndRun.sh"]
A sketch of "cleanAndRun.sh", adapted from the Docker documentation and modified for your needs. Please remember it is a sketch only and needs modifications:
#!/bin/sh
# The command run by the trap must also stop the DB, so the dropdb call below is
# not enough on its own; it just demonstrates how to run something in the
# stop-container scenario.
trap 'dropdb <params>' HUP INT QUIT TERM
# init your DB -every- time the container starts
<init script to clean and import dump>
# start postgres in the background and wait for it, so the shell can handle the trap
postgres &
wait $!
echo "exited $0"

Why did my postgres job stop running?

I created a job to clean the database every day at 01:00.
According to the statistics, it ran OK for 3 months.
But today I realized the database size was very big, so I checked the jobs and found it hasn't run for a month.
The properties say the last run was '10/27/2014' and the statistics confirm that run was successful.
They also say the next run will be '10/28/2014', but it looks like it never ran and has stayed frozen since then.
(I'm using dd/mm/yyyy format)
So why did it stop running?
Is there a way to restart the job, or should I delete and recreate it?
How can I know a job didn't run?
I guess I can write code for when a job isn't successful, but what about when it never executes at all?
Windows Server 2008
PostgreSQL 9.3.2, compiled by Visual C++ build 1600, 64-bit
The problem was that the pgAgent service wasn't running.
When I restarted the Postgres service:
the Postgres service stopped
pgAgent stopped, because it is a dependent service
the Postgres service started
but pgAgent didn't.
Here you can see that pgAgent didn't start.
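To recover from this, you can check and restart the service from a Windows command prompt; the service name pgagent is an assumption, so verify the exact name in services.msc first:
REM Check whether the pgAgent service is running (service name is an assumption)
sc query pgagent
REM Start it again after the PostgreSQL service has come back up
net start pgagent
REM Optionally make it start automatically at boot
sc config pgagent start= auto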