We have beanstalkd running on our server. All our application records are stored in Beanstalk before they are written to the MySQL tables. A few items have now got stuck in this beanstalkd queue and are not being written to the database. I would like to fetch all the items that are stuck in the queue so that we can analyse them further and see whether they are corrupt entries. I could not find any way to list all the items in beanstalkd. Is there any way I can do this?
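beanstalkd itself has no command that dumps every job, but the protocol's peek commands let you inspect jobs one at a time without consuming them. A rough sketch of an interactive session (default port 11300; "mytube" and the job id are placeholders), e.g. via telnet localhost 11300:

list-tubes            # see which tubes exist
use mytube            # select the tube to inspect
stats-tube mytube     # YAML counts of ready / delayed / buried jobs
peek-ready            # next ready job: replies FOUND <id> <bytes> plus the body
peek-delayed          # next delayed job
peek-buried           # next buried job (often where stuck jobs end up)
peek 1234             # a specific job by id

Walking an entire tube generally means reserving jobs with a client library and burying them as you go, which mutates job state, so treat that as a last resort.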
I am using change streams to keep the Redis cache up to date whenever a change happens in MongoDB. I update the cache in two steps whenever the DB is changed:
Clear the cache
Load the data from scratch
Code:
redisTemplate.delete(BINDATA);
redisTemplate.opsForHash().putAll(BINDATA, binDataItemMap);
The problem is that in production this setup may be dangerous. Since Redis is single-threaded, there is a chance that in a multi-container application the events happen in the following order:
DB update happens
Data is cleared in Redis
A request comes in and queries an empty Redis cache
Data is pushed into Redis
Ideally it should happen like this:
DB update happens
Data is cleared in Redis
Data is pushed into Redis
A request comes in and queries an updated Redis cache
How can the clear and update operations happen in Redis as a single block?
Redis supports a form of transactions, and this fits your needs exactly.
All the commands in a transaction are serialized and executed sequentially. It can never happen that a request issued by another client is served in the middle of the execution of a Redis transaction. This guarantees that the commands are executed as a single isolated operation.
A Redis transaction is entered using the MULTI command. At this point the user can issue multiple commands. Instead of executing these commands, Redis will queue them. All the commands are executed once EXEC is called.
Pseudo code:
> MULTI
OK
> redisTemplate.delete(BINDATA);
QUEUED
> redisTemplate.opsForHash().putAll(BINDATA, binDataItemMap);
QUEUED
> EXEC
1) (integer) 1
2) OK
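With Spring Data Redis, the idiomatic way to get MULTI/EXEC semantics is a SessionCallback, which runs all operations on one connection. A minimal sketch, reusing BINDATA and binDataItemMap from the question:

List<Object> txResults = redisTemplate.execute(new SessionCallback<List<Object>>() {
    @Override
    @SuppressWarnings("unchecked")
    public List<Object> execute(RedisOperations operations) throws DataAccessException {
        operations.multi();                                       // MULTI: start queuing
        operations.delete(BINDATA);                               // queued, not executed yet
        operations.opsForHash().putAll(BINDATA, binDataItemMap);  // queued
        return operations.exec();                                 // EXEC: run both atomically
    }
});

A request arriving between the two commands now sees either the old hash or the new one, never an empty cache.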
I want to get a list of the jobs that are running on my database server, showing the name, job timings, etc., using a query. Is this possible in PostgreSQL / pgAdmin?
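Two starting points, depending on what "jobs" means here. For statements currently executing, pg_stat_activity can be queried from pgAdmin's query tool:

SELECT pid, usename, datname, state, query_start, query
FROM pg_stat_activity
WHERE state = 'active';  -- drop the filter to include idle sessions

If it instead means scheduled pgAgent jobs (the job agent pgAdmin manages), and assuming pgAgent is installed, the job definitions live in the pgagent schema; e.g. SELECT * FROM pgagent.pga_job; includes the job names and last/next run times.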
We are using remote partitioning in our POC, where we process around 20 million records. To process these records, the slave needs some static metadata, which is around 5000 rows. Our current POC uses EhCache to load this metadata into the slave once from the DB and put it in the cache, so that subsequent calls just get the data from the cache for better performance.
Now, since we are using remote partitioning, each slave has approximately 20 MDPs/threads, so each message listener first calls the DB to get the metadata; basically, 20 threads are hitting the DB at the same time on each remote machine. We have 2 machines for now, but this will grow to 4.
My question is: is there a better way to load this metadata only once, e.g. before the job starts, and have it accessible to all remote slaves?
Or can we use a step listener in the remote step? I don't think this is a good idea, as it would be executed for each remote step execution, but I would like expert thoughts on this.
You could set up an EhCache server running as a separate application, or use another caching product such as Hazelcast. If commercial products are an option for you, Coherence might also work.
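For the Hazelcast route, a minimal sketch (the "metadata" map name, the MetadataRow type, and loadMetadataFromDb() are placeholders): each slave JVM joins the same cluster and shares a single copy of the 5000 rows, so the database is read once rather than once per listener thread per machine.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

// Each slave joins (or forms) the cluster; map contents are shared across members.
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
Map<Long, MetadataRow> metadata = hz.getMap("metadata");

// Populate once, cluster-wide. This check-then-act is racy under concurrent
// startup; in practice, configuring a MapLoader on the map lets Hazelcast
// load missing entries from the DB itself and avoids the race.
if (metadata.isEmpty()) {
    metadata.putAll(loadMetadataFromDb()); // placeholder DAO call
}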
I want to store a job in MongoDB using Python, and it should be scheduled to run at a specific time.
I did some googling and found that APScheduler will do this. I downloaded the code and tried to run it.
It schedules the job correctly and runs it, but it stores the job in the apscheduler database in MongoDB; I want to store the job in my own database.
Can you please tell me how to store the job in my own database instead of the default one?
Simply give the MongoDB job store a different "database" argument. It seems the API documentation for this job store was not included in what is available on ReadTheDocs, but you can inspect the source and see how it works.
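A minimal sketch (APScheduler 3.x; "my_database", "my_jobs", and my_task are placeholders):

from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.jobstores.mongodb import MongoDBJobStore

# Point the job store at your own database/collection instead of the
# default "apscheduler" database.
jobstores = {
    'default': MongoDBJobStore(database='my_database', collection='my_jobs')
}

scheduler = BlockingScheduler(jobstores=jobstores)
scheduler.add_job(my_task, 'cron', hour=8)  # my_task is a placeholder callable
scheduler.start()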
I am using Quartz to schedule cron jobs in my web application, with an Oracle database to store jobs and related info. When I add jobs to the database, I need to restart the server/application (Tomcat) for these new jobs to get scheduled. How can I add jobs to the database and have them work without restarting the server?
I assume you mean you are using JDBCJobStore? In that case, it is not ideal to make direct changes to the database tables that store the job data. However, you could set up a separate job that runs every X minutes/hours, checks whether there are new jobs in the database that need to be scheduled, and schedules them as usual.
Add jobs via the Scheduler API.
http://www.quartz-scheduler.org/docs/best_practices.html
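A minimal sketch of the Scheduler API route, executed inside the running application (MyJob, the identities, and the cron expression are placeholders). With JDBCJobStore, scheduleJob() writes the job and trigger to the Oracle tables itself, and the running scheduler picks them up without a restart:

import org.quartz.*;

JobDetail job = JobBuilder.newJob(MyJob.class)
        .withIdentity("myJob", "myGroup")
        .build();
Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("myTrigger", "myGroup")
        .withSchedule(CronScheduleBuilder.cronSchedule("0 0/15 * * * ?")) // every 15 min
        .build();

// scheduler is the application's running Scheduler instance (e.g. from
// StdSchedulerFactory), configured with JDBCJobStore.
scheduler.scheduleJob(job, trigger);

If the jobs must be inserted by an external process instead, the polling approach described above is safer than writing to the QRTZ_ tables directly.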