I have been using the scheduling feature in ActiveMQ to delay a message.
However, when switching from version 5.9.0 to 5.15.8, the delay setting is suddenly ignored. Does anyone have a clue why?
The ActiveMQ broker is defined as
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="myBroker" dataDirectory="${activemq.data}" schedulerSupport="true">
in 5.9.0 and as
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="myBroker" dataDirectory="${activemq.data}" schedulerSupport="true">
in 5.15.8.
The delay is, in my Java code, set via
message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 120000);
As said, this works perfectly fine (i.e., messages delivered after two minutes) in version 5.9.0, but is totally ignored (i.e., messages delivered immediately) in 5.15.8. Both versions are started using the same script, just changing the relevant path.
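For completeness, the sending code looks roughly like this (the broker URL and queue name are simplified placeholders, not my real values):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

public class DelayedSender {
    public static void main(String[] args) throws JMSException {
        // Placeholder broker URL and queue name
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("myQueue"));

        TextMessage message = session.createTextMessage("delayed payload");
        // The broker only honours this when schedulerSupport="true" is set on the <broker> element
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 120000L);
        producer.send(message);

        connection.close();
    }
}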
Diffing the activemq.xml files I can't see anything I think is significant:
[servers]# diff apache-activemq-5.15.8/conf/activemq.xml apache-activemq-5.9.0/conf/activemq.xml
32c32
< <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
---
> <bean id="logQuery" class="org.fusesource.insight.log.log4j.Log4jLogQuery"
84c84
< <!--<persistenceAdapter>
---
> <persistenceAdapter>
86,93c86,87
< </persistenceAdapter>-->
< <persistenceAdapter>
< <jdbcPersistenceAdapter dataSource="#mssql-ds" lockDataSource="#mssql-ds-lock" lockKeepAlivePeriod="5000">
< <locker>
< <lease-database-locker lockAcquireSleepInterval="10000"/>
< </locker>
< </jdbcPersistenceAdapter>
< </persistenceAdapter>
---
> </persistenceAdapter>
>
Related
What does this error mean in CouchDB logs? I see that it is looking for some "_users" database. But I don't have a database with that name. Is there anything I can do to stop these errors?
[notice] 2021-10-12T14:36:18.259160Z couchdb@127.0.0.1 <0.328.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:395) <= mem3_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
[error] 2021-10-12T14:36:18.259219Z couchdb@127.0.0.1 emulator -------- Error in process <0.2113.0> on node 'couchdb@127.0.0.1' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,399}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,86}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,195}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,142}]}]}
I found the solution in the CouchDB documentation:
https://docs.couchdb.org/en/latest/setup/single-node.html
Make sure to create the three system databases manually on startup:
curl -X PUT http://127.0.0.1:5984/_users
curl -X PUT http://127.0.0.1:5984/_replicator
curl -X PUT http://127.0.0.1:5984/_global_changes
Note that the last of these is not necessary if you do not expect to be using the global changes feed. Feel free to delete this database if you have created it, it has grown in size, and you do not need the function (and do not wish to waste system resources on compacting it regularly.)
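Note also that if the node has an admin user configured (as newer CouchDB releases require for creating databases), the same calls need credentials; for example, with a placeholder admin/password pair:

curl -u admin:password -X PUT http://127.0.0.1:5984/_users
curl -u admin:password -X PUT http://127.0.0.1:5984/_replicator
curl -u admin:password -X PUT http://127.0.0.1:5984/_global_changes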
I am running on WildFly 23.0.1.Final (OpenJDK 11) under CentOS 8.
I am not using OpenTracing in my application at all, and I also did not add any Jaeger dependency.
Whenever I look in the logs, I often see an exception (level: WARN) that looks like the following:
> 2021-04-28 15:08:29,875 WARN [io.jaegertracing.internal.reporters.RemoteReporter]
> (jaeger.RemoteReporter-QueueProcessor) FlushCommand execution failed!
> Repeated errors of this command will not be logged.:
> io.jaegertracing.internal.exceptions.SenderException: Failed to flush spans.
>     at io.jaegertracing.jaeger@1.5.0//io.jaegertracing.thrift.internal.senders.ThriftSender.flush(ThriftSender.java:115)
>     at io.jaegertracing.jaeger@1.5.0//io.jaegertracing.internal.reporters.RemoteReporter$FlushCommand.execute(RemoteReporter.java:160)
>     at io.jaegertracing.jaeger@1.5.0//io.jaegertracing.internal.reporters.RemoteReporter$QueueProcessor.run(RemoteReporter.java:182)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: io.jaegertracing.internal.exceptions.SenderException: Could not send 1 spans
>     at io.jaegertracing.jaeger@1.5.0//io.jaegertracing.thrift.internal.senders.UdpSender.send(UdpSender.java:85)
>     at io.jaegertracing.jaeger@1.5.0//io.jaegertracing.thrift.internal.senders.ThriftSender.flush(ThriftSender.java:113)
>     ... 3 more
> Caused by: org.apache.thrift.transport.TTransportException: Cannot flush closed transport
>     at io.jaegertracing.jaeger@1.5.0//io.jaegertracing.thrift.internal.reporters.protocols.ThriftUdpTransport.flush(ThriftUdpTransport.java:148)
>     at org.apache.thrift@0.13.0//org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
>     at org.apache.thrift@0.13.0//org.apache.thrift.TServiceClient.sendBaseOneway(TServiceClient.java:66)
>     at io.jaegertracing.jaeger@1.5.0//io.jaegertracing.agent.thrift.Agent$Client.send_emitBatch(Agent.java:70)
>     at io.jaegertracing.jaeger@1.5.0//io.jaegertracing.agent.thrift.Agent$Client.emitBatch(Agent.java:63)
>     at io.jaegertracing.jaeger@1.5.0//io.jaegertracing.thrift.internal.senders.UdpSender.send(UdpSender.java:83)
>     ... 4 more
> Caused by: java.net.PortUnreachableException: ICMP Port Unreachable
>     at java.base/java.net.PlainDatagramSocketImpl.send(Native Method)
>     at java.base/java.net.DatagramSocket.send(DatagramSocket.java:695)
>     at io.jaegertracing.jaeger@1.5.0//io.jaegertracing.thrift.internal.reporters.protocols.ThriftUdpTransport.flush(ThriftUdpTransport.java:146)
>     ... 9 more
These messages fill the log files, and I do not know how to disable the unwanted OpenTracing feature.
I was not able to find anything on Google concerning this strange exception.
Does anybody have an idea?
best regards
shane
If you don't use it, you can do something like the following in the CLI:
/subsystem=microprofile-opentracing-smallrye/jaeger-tracer=jaeger:write-attribute(name=sampler-param, value=0)
Another solution is to remove the OpenTracing subsystem, install Jaeger, or wait for a release of WildFly with a fix for https://issues.redhat.com/browse/WFLY-14625
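If you prefer to drop the subsystem entirely from the CLI rather than editing the XML, a sketch along these lines should work (subsystem and extension names assumed from a default WildFly 23 configuration):

/subsystem=microprofile-opentracing-smallrye:remove()
/extension=org.wildfly.extension.microprofile.opentracing-smallrye:remove()
reload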
You can disable this by removing
<subsystem xmlns="urn:wildfly:microprofile-opentracing-smallrye:3.0" default-tracer="jaeger">
<jaeger-tracer name="jaeger">
<sampler-configuration sampler-type="const" sampler-param="1.0"/>
</jaeger-tracer>
</subsystem>
from standalone.xml.
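Note that the matching <extension> entry near the top of standalone.xml may also need to be removed (module name as in the default configuration):

<extension module="org.wildfly.extension.microprofile.opentracing-smallrye"/>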
Alternatively, run the Jaeger application (the tracing backend that collects the spans) in parallel to WildFly, with a suitable port configuration. There is a Docker image you can run.
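For example, the jaegertracing/all-in-one image can be started with the agent's UDP port and the UI exposed (the ports shown are the Jaeger defaults; adjust as needed):

docker run -d --name jaeger -p 6831:6831/udp -p 16686:16686 jaegertracing/all-in-one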
In HAL Management Console -> Configuration -> Subsystem -> Logging -> Configuration -> View -> Categories
In the category for 'io.jaegertracing.Configuration', disable the 'Use Parent Handlers' parameter.
Or just remove the 'io.jaegertracing.Configuration' category.
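The same effect can be achieved directly in the logging subsystem of standalone.xml; a sketch that silences everything under io.jaegertracing (the category here is chosen to match the warnings above, not taken from the original answer):

<logger category="io.jaegertracing" use-parent-handlers="false">
    <level name="ERROR"/>
</logger>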
We configured the latest version (7.2) of the SMSC-GW to work on our server with the required environment (Cassandra and such). However, after setting everything up, some failures are appearing (which did not appear in previous versions).
Firstly, when connecting the simulators and the gateway using the default settings (JSS7 <-> SMSC-GW <-> SMPP):
JSS7 is connected and sending, but no response is received.
SMPP is connected to the SMSC-GW and the ESME is bound. SMPP tries to send to SS7 but receives a response PDU packet failure from the SMSC-GW.
I tried configuring DB routing rules, but that did not work.
Also, the log in the SMSC-GW server is frequently displaying the following message:
16:00:28,504 INFO [SchedulerResourceAdaptor] (pool-56-thread-1) Not all SBB are running now: ServicesDownList=[smscTxSmppServerServiceState, smscRxSmppServerServiceState, smscTxSipServerServiceState, smscRxSipServerServiceState, smscTxHttpServerServiceState, moServiceState, homeRoutingServiceState, mtServiceState, alertServiceState, chargingServiceState, ]
And the JSS7 management console GUI is displaying this (which looks wrong):
So are these the source of the SMSC-GW failures?
UPDATE: I found this error in the server.log
2017-02-02 10:57:42,005 WARN [org.mobicents.slee.container.deployment.jboss.SleeContainerDeployerImpl] (SLEE-InternalDeployer-thread-1) SLEE DUs not deployed, due to missing dependencies: file:/home/coreteam/kitchensink/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/simulator/deploy/smsc-services-du-7.2.109.jar/
Followed by:
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SEND_MT,vendor=org.mobicents,version=1.0]
ResourceAdaptorTypeID[name=PersistenceResourceAdaptorType,vendor=org.mobicents,version=1.0]
ResourceAdaptorTypeID[name=SchedulerResourceAdaptorType,vendor=org.mobicents,version=1.0]
SipRA
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SEND_RSDS,vendor=org.mobicents,version=1.0]
SchedulerResourceAdaptor
PersistenceResourceAdaptor
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SMPP_SM,vendor=org.mobicents,version=1.0]
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SM,vendor=org.mobicents,version=1.0]
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SIP_SM,vendor=org.mobicents,version=1.0]
2017-02-02 14:41:17,450 WARN [org.mobicents.slee.container.deployment.jboss.DeploymentManager] (main) Unable to INSTALL smsc-services-du-7.3.0-SNAPSHOT.jar right now. Waiting for dependencies to be resolved.
I solved it quite a while ago, but thought I would share. I simply installed the missing SipRA dependency by adding the following to the deploy-config.xml file:
<ra-entity
    resource-adaptor-id="ResourceAdaptorID[name=JainSipResourceAdaptor,vendor=net.java.slee.sip,version=1.2]"
    entity-name="SipRA">
  <properties>
    <property name="javax.sip.PORT" type="java.lang.Integer" value="5060" />
  </properties>
  <ra-link name="SipRA" />
</ra-entity>
The deploy-config.xml file is located in the $JBOSS_HOME/server/profile_name/deploy/restcomm-slee directory.
I set the port to some other value since that number was already taken by some other service.
The smsc-services-du-7.2.109.jar then installed automatically the next time I ran the SMSC-GW.
We are currently testing a move from WildFly 8.2.0 to WildFly 9.0.0.CR1 (or CR2, built from a snapshot). The system is a cluster using mod_cluster and runs on VPSes, which in fact prevents it from using multicast.
On 8.2.0 we have been using the following configuration of the modcluster that works well:
<mod-cluster-config proxy-list="1.2.3.4:10001,1.2.3.5:10001" advertise="false" connector="ajp">
<dynamic-load-provider>
<load-metric type="cpu"/>
</dynamic-load-provider>
</mod-cluster-config>
Unfortunately, in 9.0.0 proxy-list was deprecated, and the server fails to start with an error. There is a terrible lack of documentation; however, after a couple of tries I discovered that proxy-list was replaced with proxies, which is a list of outbound-socket-binding names. Hence, the configuration looks like the following:
<mod-cluster-config proxies="mc-prox1 mc-prox2" advertise="false" connector="ajp">
<dynamic-load-provider>
<load-metric type="cpu"/>
</dynamic-load-provider>
</mod-cluster-config>
And the following should be added into the appropriate socket-binding-group (full-ha in my case):
<outbound-socket-binding name="mc-prox1">
<remote-destination host="1.2.3.4" port="10001"/>
</outbound-socket-binding>
<outbound-socket-binding name="mc-prox2">
<remote-destination host="1.2.3.5" port="10001"/>
</outbound-socket-binding>
So far so good. After this, the httpd cluster starts registering the nodes. However, I am getting errors from the load balancer. When I look at /mod_cluster-manager, I see a couple of Node REMOVED lines, and there are also many errors like:
ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to node1/1.2.3.4:10001, configuration will be reset: MEM: Can't read node
In the log of mod_cluster there are the equivalent warnings:
manager_handler STATUS error: MEM: Can't read node
As far as I understand, the problem is that although WildFly/mod_cluster is able to connect to httpd/mod_cluster, it does not work the other way around. Unfortunately, even after extensive effort I am stuck.
Could someone help with setting mod_cluster for Wildfly 9.0.0 without advertising? Thanks a lot.
I ran into the Node REMOVED issue too.
I managed to solve it by using the following as instance-id:
<subsystem xmlns="urn:jboss:domain:undertow:2.0" instance-id="${jboss.server.name}">
I hope this will help someone else too ;)
There is no need for any unnecessary effort or uneasiness about static proxy configuration. Each WildFly distribution comes with XSD schemas that describe the XML subsystem configuration. For instance, with WildFly 9.x it is:
WILDFLY_DIRECTORY/docs/schema/jboss-as-mod-cluster_2_0.xsd
It says:
<xs:attribute name="proxies" use="optional">
<xs:annotation>
<xs:documentation>List of proxies for mod_cluster to register with defined by outbound-socket-binding in socket-binding-group.</xs:documentation>
</xs:annotation>
<xs:simpleType>
<xs:list itemType="xs:string"/>
</xs:simpleType>
</xs:attribute>
The following setup works out of the box.
Download wildfly-9.0.0.CR1.zip or build with ./build.sh from sources
Let's assume you have two boxes: an Apache HTTP Server with mod_cluster acting as a load-balancing proxy, and your WildFly server acting as a worker. Make sure both servers can access each other on both the MCMP-enabled VirtualHost's address and port (Apache HTTP Server side) and on the WildFly AJP and HTTP connector side. The common mistake is to bind WildFly to localhost; it then reports its address as localhost to the Apache HTTP Server residing on a different box, which makes it impossible for it to contact the WildFly server back. The communication is bidirectional.
This is my configuration diff from the default wildfly-9.0.0.CR1.zip.
328c328
< <mod-cluster-config advertise-socket="modcluster" connector="ajp" advertise="false" proxies="my-proxy-one">
---
> <mod-cluster-config advertise-socket="modcluster" connector="ajp">
384c384
< <subsystem xmlns="urn:jboss:domain:undertow:2.0" instance-id="worker-1">
---
> <subsystem xmlns="urn:jboss:domain:undertow:2.0">
435c435
< <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:102}">
---
> <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
452,454d451
< <outbound-socket-binding name="my-proxy-one">
< <remote-destination host="10.10.2.4" port="6666"/>
< </outbound-socket-binding>
456c453
< </server>
---
> </server>
Changes explanation
proxies="my-proxy-one", outbound socket binding name; could be more of them here.
instance-id="worker-1", the name of the worker, a.k.a. JVMRoute.
offset -- you could ignore, it's just for my test setup. Offset does not apply to outbound socket bindings.
<outbound-socket-binding name="my-proxy-one"> - IP and port of the VirtualHost in Apache HTTP Server containing EnableMCPMReceive directive.
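For reference, a minimal sketch of that VirtualHost on the Apache HTTP Server side (the address and the access rule are illustrative, not taken from the setup above):

Listen 10.10.2.4:6666
<VirtualHost 10.10.2.4:6666>
    # Accept MCMP registration messages from the workers
    EnableMCPMReceive
    <Location />
        Require ip 10.10
    </Location>
</VirtualHost>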
Conclusion
Generally, these MEM read / node error messages are related to network problems, e.g. WildFly can contact Apache, but Apache cannot contact WildFly back. Last but not least, it could happen that the Apache HTTP Server's configuration uses the PersistSlots directive and some substantial environment change took place, e.g. a switch from mpm_prefork to mpm_worker. In this case, the MEM read error messages are not related to WildFly, but to the cached slotmem files in HTTPD/cache/mod_cluster that need to be deleted.
I'm certain it's network in your case though.
After a couple of weeks I got back to the problem and found the solution. The problem was - of course - in the configuration and had nothing to do with the particular version of WildFly. More specifically:
There were three nodes in the domain and three servers in each node. All nodes were launched with the following property:
-Djboss.node.name=nodeX
...where nodeX is the name of the particular node. However, this meant that all three servers on a node got the same name, which is exactly what confused the load balancer.
As soon as I removed this property, everything started to work.
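If you do need to set node names explicitly, one option (a host.xml sketch with hypothetical server and group names) is to give each server its own value instead of one per host:

<servers>
    <server name="server-one" group="main-server-group">
        <system-properties>
            <!-- unique route name per server instead of one per host -->
            <property name="jboss.node.name" value="nodeX-server-one"/>
        </system-properties>
    </server>
</servers>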
I have a Windows service that uses log4net. We noticed that the service in question was running painfully slow, so we attached a debugger to it and stepped through. It appears that each time it tries to write an entry to the log via log4net, it takes anywhere from 10 to 30 seconds before the next line of code can execute. Obviously this adds up...
The service is .NET 2.0.
We're using log4net 1.2.0.30714.
We've tested this on a machine running Vista and a machine running Windows Server 2003 and have seen the same or similar results.
Jeff mentioned a performance problem with Log4Net in Podcast 20. It's possible that you are seeing a similar issue.
It turned out that someone had added an SmtpAppender in a config file which was overriding the one in our app. As a result, the errant SMTP server address was unreachable. log4net was trying to log the error for a minute per request before giving up and going on to the next line of code. Correcting the SMTP address fixed the problem.
I use log4net with the AdoNetAppender and have not seen any performance degradation in my Windows service. Which appender are you using?
Check your config file for log4net settings. log4net can be configured to log to a remote machine, and if the connection is slow, so will your logging be.
Well, I'm not remoting... this is writing to a log file on the machine it's running on. Here are my appender settings:
<appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender,log4net">
<file value="D:\\ROPLogFiles\\FileProcessor.txt" />
<appendToFile value="true" />
<datePattern value="yyyyMMdd" />
<rollingStyle value="Date" />
<layout type="log4net.Layout.PatternLayout,log4net">
<param name="ConversionPattern" value="%d [%t] %-5p %c [%x] - %m%n" />
</layout>
<threshold value="INFO" />
</appender>
The default maximum file size is 10 MB. If your files are about this size and your file system is quite full and probably heavily fragmented, it may be that the problem lies there. How big are your log files? I encountered similar problems with log files of gigabyte size.
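If you want to see what log4net itself is doing during those pauses, its internal debugging can be switched on via a standard appSetting in the service's app.config; the trace output then shows which appender is blocking:

<configuration>
  <appSettings>
    <!-- writes log4net's own diagnostic messages to the trace/console output -->
    <add key="log4net.Internal.Debug" value="true" />
  </appSettings>
</configuration>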