WildFly content is obsolete and will be removed - deployment

I'm trying to deploy an EAR onto an empty WildFly server using a script containing this CLI command:
deploy path/app.ear --server-groups=myServerGroup
However, it seems that WildFly removes the deployment. Indeed, in the wildfly/domain/log/host-controller.log file, I see:
2017-03-23 18:00:29,173 INFO [org.jboss.as.repository] (management-handler-thread - 1) WFLYDR0001: Content added at location /opt/jboss/wildfly/domain/data/content/e1/42b70f3e3e5127f67b742db96d522c4602a779/content
2017-03-23 18:01:42,262 INFO [org.jboss.as.repository] (External Management Request Threads -- 5) WFLYDR0001: Content added at location /opt/jboss/wildfly/domain/data/content/b3/ea005efe4d3f22d006db850ac1c88b0a470b3a/content
2017-03-23 18:10:15,028 INFO [org.jboss.as.repository] (Host Controller Service Threads - 32) WFLYDR0009: Content /opt/jboss/wildfly/domain/data/content/e1/42b70f3e3e5127f67b742db96d522c4602a779 is obsolete and will be removed
2017-03-23 18:10:15,034 INFO [org.jboss.as.repository] (Host Controller Service Threads - 32) WFLYDR0002: Content removed from location /opt/jboss/wildfly/domain/data/content/e1/42b70f3e3e5127f67b742db96d522c4602a779/content
Moreover, if I manually run the command, the deployment works.
When does WildFly consider content obsolete?
Why would my deployment be considered obsolete?
UPDATE
Another example:
2017-05-23 16:06:48,493 INFO [org.jboss.as.repository] (management-handler-thread - 3) WFLYDR0001: Content added at location /opt/jboss/wildfly/domain/data/content/57/f5a4fe985f3528222053b0404654ead3502fa9/content
2017-05-23 16:13:05,281 INFO [org.jboss.as.repository] (Host Controller Service Threads - 3) WFLYDR0009: Content /opt/jboss/wildfly/domain/data/content/57/f5a4fe985f3528222053b0404654ead3502fa9 is obsolete and will be removed
2017-05-23 16:13:05,288 INFO [org.jboss.as.repository] (Host Controller Service Threads - 3) WFLYDR0002: Content removed from location /opt/jboss/wildfly/domain/data/content/57/f5a4fe985f3528222053b0404654ead3502fa9/content

Content is considered obsolete when it is not referenced by any deployment (via its hash) and it is more than 10 minutes old.
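To verify whether the uploaded content is still referenced, you can check the deployment resource and its server-group assignment from the CLI before the cleaner runs. A minimal sketch, assuming the deployment is registered as app.ear and assigned to myServerGroup as in the question:
/deployment=app.ear:read-attribute(name=content)
/server-group=myServerGroup/deployment=app.ear:read-resource
If the second command reports that the resource is not found, the content was uploaded but never assigned to the server group, which would match the removal roughly 10 minutes after the WFLYDR0001 message in the log above.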

Related

Timeout deploying an artifact that has deployed many times under JBoss EAP 7.3

I have been running PAM 7.9/JBPM 7.48 for about a year under JBoss EAP 7.3. My JBPM KieServer persists to SQL Server. I deployed the KieServer repeatedly yesterday, but deploying today fails.
2021-12-16 15:25:53,645 ERROR [org.jboss.as.controller.management-operation] (DeploymentScanner-threads - 1) WFLYCTL0348: Timeout after [300] seconds waiting for service container stability. Operation will roll back. Step that first updated the service container was 'full-replace-deployment' at address '[]'
2021-12-16 15:26:03,649 ERROR [org.jboss.as.controller.management-operation] (DeploymentScanner-threads - 1) WFLYCTL0190: Step handler org.jboss.as.server.deployment.DeploymentHandlerUtil$4#74e289e9 for operation full-replace-deployment at address [] failed handling operation rollback -- java.util.concurrent.TimeoutException: java.util.concurrent.TimeoutException
at org.jboss.as.controller.OperationContextImpl.waitForRemovals(OperationContextImpl.java:523)
at org.jboss.as.controller.AbstractOperationContext$Step.handleResult(AbstractOperationContext.java:1518)
I have already set the property to increase the deployment timeout, but it still complains about a 5-second timeout that must be controlled by another property:
2021-12-16 13:40:47,039 ERROR [org.jboss.as.controller.management-operation] (DeploymentScanner-threads - 1) WFLYCTL0349: Timeout after [5] seconds waiting for service container stability while finalizing an operation. Process must be restarted. Step that first updated the service container was 'deploy' at address '[("deployment" => "kie-server.war")]'
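For reference, the 300-second stability wait reported by WFLYCTL0348 is governed by the jboss.as.management.blocking.timeout system property (which defaults to 300 seconds); it is not clear whether the 5-second finalization timeout honours the same property. If that is not the property already raised, a minimal sketch of setting it from the CLI, with 900 as an arbitrary example value, followed by a server restart so the value is picked up:
/system-property=jboss.as.management.blocking.timeout:add(value=900)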
I have changed the logging level to trace in order to gain all the information I can. How else can I debug / solve this issue?
There are two factors that may be contributing to this, but I don't have a good approach for addressing them.
There was a Windows Update yesterday (likely due to the recent Log4j exploit)
Some people at my company are having problems connecting to the SQL Server database. I am not seeing log messages about the KieServer being unable to connect to the DB, but when it cannot reach the DB, the KieServer fails to start.

Rundeck: how to correct the configuration of Rundeck to access it via the browser

I have a problem accessing Rundeck:
[2021-05-03T17:33:33,231] WARN beans.GenericTypeAwarePropertyDescriptor - Invalid JavaBean property 'exceptionMappings' being accessed! Ambiguous write methods found next to actually used [public void grails.plugin.springsecurity.web.authentication.AjaxAwareAuthenticationFailureHandler.setExceptionMappings(java.util.List)]: [public void org.springframework.security.web.authentication.ExceptionMappingAuthenticationFailureHandler.setExceptionMappings(java.util.Map)]
[2021-05-03T17:33:41,756] INFO rundeckapp.BootStrap - Starting Rundeck 3.3.10-20210301 (2021-03-02) ...
[2021-05-03T17:33:41,757] INFO rundeckapp.BootStrap - using rdeck.base config property: /var/lib/rundeck
[2021-05-03T17:33:41,768] INFO rundeckapp.BootStrap - loaded configuration: /etc/rundeck/framework.properties
[2021-05-03T17:33:41,805] INFO rundeckapp.BootStrap - RSS feeds disabled
[2021-05-03T17:33:41,806] INFO rundeckapp.BootStrap - Using jaas authentication
[2021-05-03T17:33:41,811] INFO rundeckapp.BootStrap - Preauthentication is disabled
[2021-05-03T17:33:41,918] INFO rundeckapp.BootStrap - Rundeck is ACTIVE: executions can be run.
[2021-05-03T17:33:42,283] WARN rundeckapp.BootStrap - [Development Mode] Usage of H2 database is recommended only for development and testing
[2021-05-03T17:33:42,590] INFO rundeckapp.BootStrap - Rundeck startup finished in 945ms
[2021-05-03T17:33:42,877] INFO rundeckapp.Application - Started Application in 32.801 seconds (JVM running for 35.608)
Grails application running at http://xxx.xxx.xxx.xxx:4440 in environment: production
Session terminated, killing shell...[2021-05-04T10:20:46,596] INFO rundeckapp.BootStrap - Rundeck Shutdown detected
...killed.
Can you help me please?
By the way, I installed a VM running Red Hat, then installed the Rundeck RPM.
From my physical machine, when I browse to http://rundecknode_ip:4440, the browser returns error 113 (no route to host); examining the logs shows what I posted above.
When I run systemctl status rundeck, it is active (running).

WildFly: Singleton Deployment on Cluster | Elects two servers in Server Group

This does not happen every time, but it happens often.
A cluster of 3 servers in a server group, WildFly 16.
I deploy the .war from the UI. It is picked up fine on one server:
2020-02-26 07:21:12,951 INFO [org.wildfly.clustering.server] (LegacyDistributedSingletonService - 1) WFLYCLSV0003: alp-esb-app02:servicedesk-02 elected as the singleton provider of the jboss.deployment.unit."Now-1.11-SNAPSHOT.war".installer service
2020-02-26 07:21:13,115 INFO [org.jboss.as.server] (ServerService Thread Pool -- 26) WFLYSRV0010: Deployed "Now-1.11-SNAPSHOT.war" (runtime-name : "Now-1.11-SNAPSHOT.war")
2020-02-26 07:21:14,133 INFO [org.wildfly.clustering.server] (LegacyDistributedSingletonService - 1) WFLYCLSV0001: This node will now operate as the singleton provider of the jboss.deployment.unit."Now-1.11-SNAPSHOT.war".installer service
But when I disable/re-enable or deploy the next time, the same logs show up on two servers.
And there is a scheduler which then runs twice, corrupting the database with duplicates.
I have to redeploy again and again and check the logs until only one server is elected.
Project Structure:
webapp -> META-INF -> singleton-deployment.xml
<?xml version="1.0" encoding="UTF-8"?>
<singleton-deployment xmlns="urn:jboss:singleton-deployment:1.0"/>
Scheduler Starts like:
@Startup
@Singleton
@AccessTimeout(value = 30, unit = TimeUnit.MINUTES)
public class SnowPollerNew {
Any suggestion as to why it sometimes works fine but often does not?
Is it linked to JGroups, or to communication between the two clusters?
You need to ensure that the servers are forming the cluster correctly.
Also, I remember some issues (WFLY-11619) with the singleton election.
I would assume this is no longer reproducible on WildFly 18.
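As a first sanity check that the nodes really are members of one and the same cluster, you can read the current JGroups view on each server. A minimal sketch from the domain CLI, assuming the default channel name ee; the host and server names are placeholders:
/host=master/server=server-one/subsystem=jgroups/channel=ee:read-attribute(name=view)
/host=master/server=server-two/subsystem=jgroups/channel=ee:read-attribute(name=view)
Both servers should report the same view containing exactly the expected members; if the views differ (a split), each partition can legitimately elect its own singleton provider, which would explain the duplicated scheduler runs.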

Keycloak cluster fails on Amazon ECS (org.infinispan.commons.CacheException: Initial state transfer timed out for cache)

I am trying to deploy a cluster of 2 Keycloak Docker images (6.0.1) on Amazon ECS (Fargate) using the built-in ECS Service Discovery mechanism (using DNS_PING).
Environment:
JGROUPS_DISCOVERY_PROTOCOL=dns.DNS_PING
JGROUPS_DISCOVERY_PROPERTIES=dns_query=my.services.internal,dns_record_type=A
JGROUPS_TRANSPORT_STACK=tcp <---(also tried udp)
The instances' IPs are correctly resolved from the Route53 private namespace and they discover each other without any problem (x.x.x.138 is started first, then x.x.x.76).
Second instance:
[org.jgroups.protocols.dns.DNS_PING] (ServerService Thread Pool -- 58) ip-x.x.x.76: entries collected from DNS (in 3 ms): [x.x.x.76:0, x.x.x.138:0]
[org.jgroups.protocols.dns.DNS_PING] (ServerService Thread Pool -- 58) ip-x.x.x.76: sending discovery requests to hosts [x.x.x.76:0, x.x.x.138:0] on ports [55200 .. 55200]
[org.jgroups.protocols.pbcast.GMS] (ServerService Thread Pool -- 58) ip-x.x.x.76: sending JOIN(ip-x-x-x-76) to ip-x-x-x-138
And on the first instance:
[org.infinispan.CLUSTER] (thread-8,ejb,ip-x-x-x-138) ISPN000094: Received new cluster view for channel ejb: [ip-x-x-x-138|1] (2) [ip-x-x-x-138, ip-172-x-x-x-76]
[org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-8,ejb,ip-x-x-x-138) Joined: [ip-x-x-x-76], Left: []
[org.infinispan.CLUSTER] (thread-8,ejb,ip-x-x-x-138) ISPN100000: Node ip-x-x-x-76 joined the cluster
[org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger-12,ejb,ip-x-x-x-76) ip-x-x-x-76: pingable_mbrs=[ip-x-x-x-138, ip-x-x-x-76], ping_dest=ip-x-x-x-138
So it seems we have a working cluster. Unfortunately, the second instance ends up failing with the following exception:
Caused by: org.infinispan.commons.CacheException: Initial state transfer timed out for cache work on ip-x-x-x-76
Before this occurs, I see a bunch of failure-detection messages suspecting/unsuspecting the opposite instance:
[org.jgroups.protocols.FD_ALL] (Timer runner-1,null,null) haven't received a heartbeat from ip-x-x-x-76 for 60016 ms, adding it to suspect list
[org.jgroups.protocols.FD_ALL] (Timer runner-1,null,null) ip-x-x-x-138: suspecting [ip-x-x-x-76]
[org.jgroups.protocols.FD_ALL] (thread-9,ejb,ip-x-x-x-138) Unsuspecting ip-x-x-x-76
[org.jgroups.protocols.FD_SOCK] (thread-9,ejb,ip-x-x-x-138) ip-x-x-x-138: broadcasting unsuspect(ip-x-x-x-76)
On the Infinispan (cache) side, everything seems to proceed correctly, but I am not sure. Every cache is "rebalanced" and each rebalance seems to end with, for example:
[org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p24-t2) Finished receiving of segments for cache offlineSessions for topology 2.
It feels like it's a connectivity issue, but all the ports are wide open between these 2 instances, both for TCP and UDP.
Any idea? Has anyone been successful at configuring this on ECS (Fargate)?
UPDATE 1
The second instance was initially shutting down not because of the "Initial state transfer timed out .." error but because the health check was taking longer than the configured grace period. Nonetheless, with 2 healthy instances, I receive "404 - Not Found" once every 2 queries, telling me that there is indeed a cache problem.
In the current Keycloak Docker image (6.0.1), the default stack is UDP. According to this, version 7.0.0 will default to TCP and will also introduce a variable to toggle the stack (JGROUPS_TRANSPORT_STACK).
Using the UDP stack in Amazon ECS will only "partially" work: discovery works and the cluster forms, but the Infinispan caches won't be able to sync between instances, which produces erratic errors. There is probably a way to make it work as-is, but I don't see anything blocked between the instances when checking the VPC Flow Logs.
A workaround is to switch to TCP by modifying the JGroups stack directly in the image in file /opt/jboss/keycloak/standalone/configuration/standalone-ha.xml:
<subsystem xmlns="urn:jboss:domain:jgroups:6.0">
<channels default="ee">
<channel name="ee" stack="tcp" cluster="ejb"/> <-- set stack to tcp
</channels>
Then commit the new image:
docker commit -m="TCP cluster stack" CONTAINER_ID jboss/keycloak:6.0.1-tcp-cluster
Tag and push the image to Amazon ECR, and make sure port 7600 is allowed in the security group between your Amazon ECS tasks.

JBoss EAP HA singleton deployment on two server groups

I'm trying to use an HA singleton on JBoss EAP 6.4. I have two hosts, app-05 and app-06, and two server groups, ADM and DEV. Both server groups use their own profiles based on the standard HA profile, and both use the ha-sockets socket binding group.
+-----------------------------------+
|Group v / Host > | app-05 | app-06 |
|-----------------+--------+--------|
| ADM | ADM_01 | ADM_02 |
|-----------------+--------+--------|
| DEV | DEV_01 | DEV_02 |
+-----------------------------------+
When I assign an application containing an HA singleton to one of the groups, the result is as I would expect. The app-05 server starts:
INFO [stdout] -------------------------------------------------------------------
INFO [stdout] GMS: address=app-05:DEV_01/singleton, cluster=singleton, physical address=10.7.0.131:55400
INFO [stdout] -------------------------------------------------------------------
INFO [o.i.r.t.j.JGroupsTransport] ISPN000094: Received new cluster view: [app-05:DEV_01/singleton|0] [app-05:DEV_01/singleton]
INFO [o.i.r.t.j.JGroupsTransport] ISPN000079: Cache local address is app-05:DEV_01/singleton, physical addresses are [10.7.0.131:55400]
INFO [o.j.a.clustering] JBAS010238: Number of cluster members: 1
and is elected as the HA singleton provider:
INFO [o.j.a.c.infinispan] JBAS010281: Started default cache from singleton container
INFO [o.j.a.c.singleton] JBAS010342: app-05:DEV_01/singleton elected as the singleton provider of the jboss.test.ha.singleton service
INFO [o.j.a.c.singleton] JBAS010340: This node will now operate as the singleton provider of the jboss.test.ha.singleton service
then app-06 server starts and forms a cluster:
INFO [stdout] -------------------------------------------------------------------
INFO [stdout] GMS: address=app-06:DEV_02/singleton, cluster=singleton, physical address=10.7.0.132:55400
INFO [stdout] -------------------------------------------------------------------
INFO [o.i.r.t.j.JGroupsTransport] ISPN000094: Received new cluster view: [app-05:DEV_01/singleton|1] [app-05:DEV_01/singleton, app-06:DEV_02/singleton]
INFO [o.i.r.t.j.JGroupsTransport] ISPN000079: Cache local address is app-06:DEV_02/singleton, physical addresses are [10.7.0.132:55400]
INFO [o.j.a.clustering] JBAS010238: Number of cluster members: 2
and ultimately the second server, app-06, is elected:
INFO [o.j.a.c.infinispan] JBAS010281: Started default cache from singleton container
INFO [o.j.a.c.singleton] JBAS010342: app-06:DEV_02/singleton elected as the singleton provider of the jboss.test.ha.singleton service
INFO [o.j.a.c.singleton] JBAS010340: This node will now operate as the singleton provider of the jboss.test.ha.singleton service
But when I assign the same application to the second server group, strange things happen. All FOUR servers form a single cluster:
INFO [o.i.r.t.j.JGroupsTransport] ISPN000094: Received new cluster view: [app-05:DEV_01/singleton|2] [app-05:DEV_01/singleton, app-06:DEV_02/singleton, app-06:ADM_02/singleton, app-05:ADM_01/singleton]
INFO [o.j.a.c.singleton] JBAS010342: app-05:DEV_01/singleton elected as the singleton provider of the jboss.test.ha.singleton service
INFO [o.j.a.c.singleton] JBAS010340: This node will now operate as the singleton provider of the jboss.test.ha.singleton service
Why do servers from different groups, using different profiles, see each other, and why do all of them form a single cluster with one HA singleton?
I would expect two independent clusters, each with its own instance of the HA singleton.
Do you use different JGroups multicast addresses for your groups? Two different server groups correspond to two different clusters. For a JBoss cluster to work correctly alongside other clusters on the same network, you need to isolate it so that it does not interfere with the others. This is done by setting different values for the jgroups-udp and messaging-group socket bindings, for example via the system properties jboss.default.multicast.address and jboss.messaging.group.address. The multicast addresses should be in the range 224.0.0.0 to 239.255.255.255.
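A minimal sketch of that isolation from the domain CLI, giving each server group its own multicast addresses (the addresses below are arbitrary examples from the multicast range; the affected servers need a restart afterwards):
/server-group=ADM/system-property=jboss.default.multicast.address:add(value=230.0.1.1)
/server-group=ADM/system-property=jboss.messaging.group.address:add(value=230.0.1.2)
/server-group=DEV/system-property=jboss.default.multicast.address:add(value=230.0.2.1)
/server-group=DEV/system-property=jboss.messaging.group.address:add(value=230.0.2.2)
After restarting, each group should form its own cluster view and elect its own singleton provider.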