We have two applications that use Quartz for scheduling. The quartz.properties for our application is as follows:
org.quartz.scheduler.instanceName = sr22QuartzScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 2
org.quartz.threadPool.threadPriority = 5
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = quartzDS
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.scheduler.idleWaitTime=1000
#org.quartz.jobStore.acquireTriggersWithinLock=true
# Adding an unusually high misfire threshold, as we don't want to handle misfires
org.quartz.jobStore.misfireThreshold = 50000000
#org.quartz.jobStore.maxMisfiresToHandleAtATime = 0
org.quartz.dataSource.quartzDS.jndiURL = java:jdbc/quartzDS
org.quartz.plugin.shutdownhook.class = org.quartz.plugins.management.ShutdownHookPlugin
org.quartz.plugin.shutdownhook.cleanShutdown = false
#org.quartz.plugin.triggHistory.class = org.quartz.plugins.history.LoggingTriggerHistoryPlugin
#org.quartz.plugin.triggHistory.triggerFiredMessage = Trigger \{1\}.\{0\} fired job \{6\}.\{5\} at: \{4, date, HH:mm:ss MM/dd/yyyy\}
#org.quartz.plugin.triggHistory.triggerCompleteMessage = Trigger \{1\}.\{0\} completed firing job \{6\}.\{5\} at \{4, date, HH:mm:ss MM/dd/yyyy\}
The other application has the same configuration but with a different instanceName.
Both applications run on the same set of server instances, and both use the same set of Quartz job store tables in the database.
Now the problem is:
If both applications are running at the same time, the triggers are not routed properly: triggers from application1 are picked up by application2 and vice versa. This happens randomly.
Should the applications use different sets of Quartz tables in the same database? Should we have only one Quartz scheduler instance per server for multiple applications?
I am seeing random behaviour with Quartz. Is there anything wrong with our setup?
BTW, we are using Quartz 1.8.
Any help is appreciated.
Thanks,
Sri Harsha Yenuganti.
"The other application have the same configuration but with a different instanceName."
To enable clustering:
use only a SINGLE scheduler instance name (but with different instance IDs)
point to a single set of tables
To run independent schedulers instead, on version 1.x you have to use a separate table set per scheduler.
On version 2.x you can use a single set of tables: there is a new discriminator column (SCHED_NAME) on each table that contains the scheduler name. Quartz 2.0 and above supports this.
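So on 1.8, a minimal sketch of keeping the two independent schedulers apart would look like this (the table prefixes and the second instance name are made up for illustration, and a second set of QRTZ tables has to be created with the new prefix):
# application 1 quartz.properties
org.quartz.scheduler.instanceName = sr22QuartzScheduler
org.quartz.jobStore.tablePrefix = QRTZ_APP1_
# application 2 quartz.properties
org.quartz.scheduler.instanceName = app2QuartzScheduler
org.quartz.jobStore.tablePrefix = QRTZ_APP2_
With separate table sets, neither scheduler can acquire the other's triggers, because they no longer scan the same trigger tables.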
I'm currently working on a RoCE (RDMA over Converged Ethernet) Python application using the pyverbs library. First, I want to do a simple loopback test with an RDMA Write. I tested the setup with ib_write_bw from perftest, which worked like a charm.
This is my setup:
OS: Ubuntu 20.04.5 LTS
Kernel: 5.15.0-56-generic
NIC: Mellanox ConnectX-5 MC516A-GCA_Ax 50GbE dual-port QSFP28
I'm developing the application in a Jupyter notebook. The two ports are connected to each other with a QSFP28 cable. I set up a "client" and a "server" on the same system; each uses one port of the NIC. The client performs the "RDMA Write" action. In the future, metadata will be exchanged over TCP, but for ease of debugging I exchange the metadata locally in the same notebook.
I am now able to perform an "RDMA Write" action and capture the packets.
[Screenshot: captured RDMA packets]
However, I keep getting negative acknowledgements (NAKs) from the "server". The RDMA Write packet looks correct to me: it has the right payload, and the headers match what I configured (I can post it if that would help).
I have three ideas why it might not work:
wrong server memory address/rkey used for the RDMA Write
missing flags at server memory allocation
missing/wrong flags at server queue pair modification
I have tried all sorts of combinations of flags and values for the queue pair modification and the memory allocation.
I start the NIC in the shell with
sudo mst start
Then I run my Python application. I have posted some code snippets below; I am not sure I implemented them correctly.
Server and client QP modification, INIT to RTR:
# imports assumed earlier in the notebook (module paths as in recent rdma-core pyverbs):
#   import pyverbs.enums as e
#   from pyverbs.addr import AHAttr, GlobalRoute
gid_index = 0
port_num = 1
# server QP: INIT -> RTR (ready to receive)
server_attr.qp_state = e.IBV_QPS_RTR
server_attr.path_mtu = e.IBV_MTU_4096
server_attr.rq_psn = 0
server_attr.min_rnr_timer = 12
server_attr.max_dest_rd_atomic = 10
server_attr.dest_qp_num = client_qp.qp_num
server_attr.qp_access_flags = e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_READ | e.IBV_ACCESS_REMOTE_WRITE
# route to the peer: resolve the client's GID for the address handle
server_gr = GlobalRoute(dgid=client_ctx.query_gid(port_num=port_num, index=gid_index), sgid_index=gid_index)
server_ah_attr = AHAttr(gr=server_gr, is_global=1, port_num=port_num)
server_attr.ah_attr = server_ah_attr
server_qp.to_rtr(server_attr)
# client QP: INIT -> RTR
client_attr.qp_state = e.IBV_QPS_RTR
client_attr.path_mtu = e.IBV_MTU_4096
client_attr.rq_psn = 0
client_attr.min_rnr_timer = 12
client_attr.max_dest_rd_atomic = 10
client_attr.dest_qp_num = server_qp.qp_num
client_attr.qp_access_flags = e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_READ | e.IBV_ACCESS_REMOTE_WRITE
client_gr = GlobalRoute(dgid=server_ctx.query_gid(port_num=port_num, index=gid_index), sgid_index=gid_index)
client_ah_attr = AHAttr(gr=client_gr, is_global=1, port_num=port_num)
client_attr.ah_attr = client_ah_attr
client_qp.to_rtr(client_attr)
Client QP modification, RTR to RTS:
# client QP: RTR -> RTS (ready to send)
client_attr.qp_state = e.IBV_QPS_RTS
client_attr.timeout = 14
client_attr.retry_cnt = 7
client_attr.rnr_retry = 7
client_attr.sq_psn = 0
client_attr.max_rd_atomic = 10
client_qp.to_rts(client_attr)
RDMA Write instruction from the client:
# scatter/gather entry over the client's registered buffer
client_sge = SGE(client_mr.buf, len(SEND_STRING), client_mr.lkey)
send_wr = pwr.SendWR(num_sge=1, sg=[client_sge], opcode=e.IBV_WR_RDMA_WRITE)
# target: the server's registered memory region
send_wr.set_wr_rdma(rkey=server_mr.rkey, addr=server_mr.buf)
client_qp.post_send(send_wr)
sleep(1)
print(server_mr.read(len(SEND_STRING), 0))
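Rather than just sleeping, polling the client's completion queue and inspecting the work completion status should reveal why the write was NAKed. A minimal sketch, assuming the client QP was created with a completion queue reachable here as client_cq (a made-up name) and pyverbs' CQ.poll() as shipped with recent rdma-core:
# busy-wait for one work completion on the (assumed) client CQ
npolled, wcs = client_cq.poll()
while npolled == 0:
    npolled, wcs = client_cq.poll()
for wc in wcs:
    if wc.status != e.IBV_WC_SUCCESS:
        # e.g. IBV_WC_REM_ACCESS_ERR would point at a wrong rkey/address,
        # IBV_WC_RNR_RETRY_EXC_ERR at a receiver that is not ready
        print('work completion failed with status', wc.status)
    else:
        print(server_mr.read(len(SEND_STRING), 0))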
If there is someone with knowledge of RDMA/RoCE/pyverbs, I would be glad for some help. I don't have any prior knowledge of these topics, which is why I chose to write the application in Python. I know C, but Python is much more convenient for prototyping :)
Thanks for your help!
I am following the Citrix ICA Client Object API Specification.
According to this documentation, you can set the OutputMode property, which has the following meaning:
OutputMode: Output mode for the client engine.
Valid values:
0 (non-headless)
1 (normal)
2 (renderless)
3 (windowless)
So in my code I set the value to 3, which has the following meaning:
OutputModeWindowless = 3
The client runs as normal, but does not display in the session window. Maintains
internal bitmap surface for screen snapshots. Select this mode to prevent the
client from drawing to the screen if client CPU usage is identified as a
bottleneck. Rendering still occurs in the background to an off-screen surface,
making it possible to obtain screen captures of the session if desired.
But there is absolutely no difference in behaviour; I still see the window, just as in normal mode.
I have ensured I set it before connecting as per this documentation:
OutputMode must be defined only at load-time; that is, before a connection is
launched.
I have seen that other developers face this issue as well:
https://discussions.citrix.com/topic/278410-outputmode-windowsless-and-renderless-help-on/
https://discussions.citrix.com/topic/372758-ica-api-icaclientoutputmode-does-not-change-anything/#comment-1904176
https://discussions.citrix.com/topic/372758-ica-api-icaclientoutputmode-does-not-change-anything/#comment-2001192
https://discussions.citrix.com/topic/393456-starting-a-ica-session-in-mode-outputmodewindowless/#comment-2001191
https://discussions.citrix.com/topic/278410-outputmode-windowsless-and-renderless-help-on/#comment-1515471
So my questions are:
Is this mode really implemented?
If yes, what needs to be done to make it work?
Here is sample code I used:
# Load the ICA Client COM interop assembly
[system.Reflection.Assembly]::LoadFile("c:\Users\<user>\AppData\Local\Citrix\ICA Client\WfIcaLib.dll")
$icaClient = New-Object WFICALib.ICAClientClass
$icaClient.CacheICAFile = $false
$icaClient.ICAFile = $icapath
# OutputMode is set before Connect(), as the documentation requires
$icaClient.OutputMode = [WfIcaLib.OutputMode]::OutputModeWindowless
$icaClient.Launch = $true
$icaClient.TWIMode = $true
$icaClient.Connect()
sleep 10
# Attach to the launched session so keystrokes and screenshots can be sent
$enumHandle = $icaClient.EnumerateCCMSessions()
$sessionid = $icaClient.GetEnumNameByIndex($enumHandle, 0)
$icaClient.StartMonitoringCCMSession($sessionid, $true)
#$icaClient.session.ReplayMode = $true
$icaClient.session.Keyboard.SendKeyDown(16) # shift key
$icaClient.session.Keyboard.SendKeyDown(53) # number 5 key
$screenShot = $icaClient.session.CreateFullScreenShot()
$screenShot.Save()
$icaClient.Logoff()
sleep 10
$icaClient.StopMonitoringCCMSession($sessionid)
$icaClient.CloseEnumHandle($enumHandle)
I am using:
Citrix Receiver/Workspace versions I tried: 4.12, 4.9, Workspace 19.11
Citrix StoreFront version: 3.12.5000
I recently upgraded one of our Graphite instances from 0.9.2 to 1.1.1, and have since run into an issue where, for lack of a better term, there is a rolling gap of data.
It shows the last few minutes correctly (I'm guessing that's what's in the carbon cache), and anything older than about 10-15 minutes shows correctly as well.
However, inside that 10-15 minute gap, it's completely blank. I can see the gap both in Graphite and in Grafana. It disappears after restarting carbon-cache and comes back about a day later.
[Example screenshot]
This happens for most graphs/dashboards I have.
I've spent a lot of effort optimizing disk IO, so I doubt that's the cause: CloudWatch shows 100% burst credit for the disk. It's an m3.xlarge instance with 4 cores and 16 GB RAM. The swap file is on ephemeral storage and looks barely utilized.
Using 1 Carbon Cache instance with Whisper backend.
storage_schemas.conf:
[carbon]
pattern = ^carbon\.
retentions = 60:90d
[dumbo]
pattern = ^collectd\.dumbo # load test containers, we don't care about their data
retentions = 300:1
[collectd]
pattern = ^collectd
retentions = 10s:8h,30s:1d,1m:3d,5m:30d,15m:90d
[statsite]
pattern = ^statsite
retentions = 10s:8h,30s:1d,1m:3d,5m:30d,15m:90d
[default_1min_for_1day]
pattern = .*
retentions = 60s:1d
Non-default (or potentially relevant) carbon.conf settings:
[cache]
MAX_CACHE_SIZE = inf
MAX_UPDATES_PER_SECOND = 100 # was slagging disk write IO until I dropped it down from 500
MAX_CREATES_PER_MINUTE = 50
CACHE_WRITE_STRATEGY = sorted
RELAY_METHOD = rules
DESTINATIONS = 127.0.0.1:2004
MAX_DATAPOINTS_PER_MESSAGE = 500
MAX_QUEUE_SIZE = 10000
Graphite local_settings.py:
CARBONLINK_TIMEOUT = 10.0
CARBONLINK_QUERY_BULK = True
USE_WORKER_POOL = False
We've seen this with some workloads on 1.1.1. Can you try updating carbon to current master? If not, 1.1.2 will be released shortly, which should solve the problem.
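If carbon was installed with pip (an assumption; adjust for distro packages or virtualenvs), one way to try current master is:
pip install --upgrade https://github.com/graphite-project/carbon/archive/master.tar.gz
and then restart carbon-cache.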
I noticed a slight difference between the documentation for 2.1 and 2.0:
2.0
akka.default-dispatcher.core-pool-size-max = 64
akka.debug.receive = on
2.1
akka.default-dispatcher.fork-join-executor.pool-size-max = 64
akka.actor.debug.receive = on
Akka's own documentation has a core-pool-size-max setting like 2.0, but no pool-size-max like 2.1. Why did this change between 2.0 and 2.1? Which is the correct way to configure Akka in Play? Is this a documentation bug in one of the versions?
(In the meantime, I'm going to try and stick both config styles in my Play 2.1 config and hope for the best).
First of all, always use the documentation for the version you're using. In your case you're linking to the snapshot documentation, which is for an unreleased Akka version (i.e. a snapshot).
Here's the 2.1.2 docs: http://doc.akka.io/docs/akka/2.1.2/scala/dispatchers.html (also accessible from doc.akka.io)
When we look at that page, we see that under the example configuration for fork-join-executor and thread-pool-executor it says: "For more options, see the default-dispatcher section of the Configuration.", linking to the Configuration page, where we can find:
# This will be used if you have set "executor = "thread-pool-executor""
thread-pool-executor {
  # Keep alive time for threads
  keep-alive-time = 60s
  # Min number of threads to cap factor-based core number to
  core-pool-size-min = 8
  # The core pool size factor is used to determine thread pool core size
  # using the following formula: ceil(available processors * factor).
  # Resulting size is then bounded by the core-pool-size-min and
  # core-pool-size-max values.
  core-pool-size-factor = 3.0
  # Max number of threads to cap factor-based number to
  core-pool-size-max = 64
  # Minimum number of threads to cap factor-based max number to
  # (if using a bounded task queue)
  max-pool-size-min = 8
  # Max no of threads (if using a bounded task queue) is determined by
  # calculating: ceil(available processors * factor)
  max-pool-size-factor = 3.0
  # Max number of threads to cap factor-based max number to
  # (if using a bounded task queue)
  max-pool-size-max = 64
  # Specifies the bounded capacity of the task queue (< 1 == unbounded)
  task-queue-size = -1
  # Specifies which type of task queue will be used, can be "array" or
  # "linked" (default)
  task-queue-type = "linked"
  # Allow core threads to time out
  allow-core-timeout = on
}
So to conclude: if you want to use the ThreadPoolExecutor, you need to tell the default dispatcher to use it, via akka.default-dispatcher.executor = "thread-pool-executor", and then specify your configuration for that thread-pool-executor.
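For example, a minimal sketch in your application.conf (the sizes below just repeat the defaults shown above; depending on your Akka/Play version the block may need to live under akka.actor.default-dispatcher instead):
akka.default-dispatcher {
  # switch from the default fork-join-executor
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-min = 8
    core-pool-size-factor = 3.0
    core-pool-size-max = 64
  }
}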
Does that help?
Cheers,
√
I am trying to send an email with an Excel attachment without using RSCONN01. If this is possible, could you show me how it is done?
I would also like a little more information about how RSCONN01 works. I am using RSCONN01 to send the emails, but I received a complaint that this program was also resending emails that had failed earlier that day.
This is the code I am using now. It works, but I want to know another way to do it without using RSCONN01.
CALL FUNCTION 'SO_DOCUMENT_SEND_API1'
  EXPORTING
    document_data              = w_doc_data
    put_in_outbox              = 'X'
    commit_work                = 'X'
  IMPORTING
    sent_to_all                = w_sent_all
  TABLES
    packing_list               = t_packing_list
    contents_bin               = t_attachment
    contents_txt               = it_message
    receivers                  = t_receivers
  EXCEPTIONS
    too_many_receivers         = 1
    document_not_sent          = 2
    document_type_not_exist    = 3
    operation_no_authorization = 4
    parameter_error            = 5
    x_error                    = 6
    enqueue_error              = 7
    OTHERS                     = 8.

IF sy-subrc = 0.
  WAIT UP TO 2 SECONDS.
  SUBMIT rsconn01 WITH mode   = 'INT'
                  WITH output = 'X'
                  AND RETURN.
ELSE.
  WRITE:/ 'ERROR IN MAIL ', sy-subrc.
ENDIF.
You will have to use RSCONN01 unless you'd like to implement your own protocol handling. You're using the standard SAPconnect functionality (although with an API that's a bit outdated; I'd switch to the BCS classes if I were in your shoes). As long as you're using this, you're stuck with that report.
However, you usually won't have to call it yourself: it runs as a background job every few minutes to process outgoing mail. Perhaps you're working in a development environment where SAPconnect isn't properly set up; in that case, you should talk to your system administrators. There are ways to tune SAPconnect to work in many cases. You should try to use the existing and well-supported facilities before trying to circumvent them.
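For illustration, a rough BCS sketch (the variable names, subject, and address are placeholders; note that BCS still hands the mail over to SAPconnect, so the actual transmission is still performed by the periodic send job that runs RSCONN01):
DATA: lo_bcs       TYPE REF TO cl_bcs,
      lo_doc       TYPE REF TO cl_document_bcs,
      lo_recipient TYPE REF TO if_recipient_bcs.

* body text in lt_soli (type SOLI_TAB), attachment bytes in lt_solix (type SOLIX_TAB)
lo_bcs = cl_bcs=>create_persistent( ).
lo_doc = cl_document_bcs=>create_document( i_type    = 'RAW'
                                           i_text    = lt_soli
                                           i_subject = 'Report' ).
lo_doc->add_attachment( i_attachment_type    = 'XLS'
                        i_attachment_subject = 'Report'
                        i_att_content_hex    = lt_solix ).
lo_bcs->set_document( lo_doc ).
lo_recipient = cl_cam_address_bcs=>create_internet_address( 'user@example.com' ).
lo_bcs->add_recipient( i_recipient = lo_recipient ).
lo_bcs->send( ).
COMMIT WORK.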