OOM during EhCache replication - jboss

We are facing an OOM issue when using EhCache replication.
A memory dump shows JGroups-related objects at the top:
Instance Counts for All Classes (excluding platform)
464012 instances of class org.jgroups.util.Headers
463718 instances of class org.jgroups.protocols.pbcast.NakAckHeader
463512 instances of class [Lorg.jgroups.Header;
462136 instances of class org.jgroups.Message
173509 instances of class org.jgroups.protocols.TpHeader
63301 instances of class com.mongodb.BasicDBObject
We also see the following warnings in the log:
2012-08-26 02:05:50,980 INFO [org.jgroups.JChannel] (main) JGroups version:
2.10.0.GA
2012-08-26 02:05:51,569 WARN [org.jgroups.stack.Configurator] (main) TCPPING property down_thread was deprecated and is ignored
2012-08-26 02:05:51,569 WARN [org.jgroups.stack.Configurator] (main) TCPPING property up_thread was deprecated and is ignored
2012-08-26 02:05:51,576 WARN [org.jgroups.stack.Configurator] (main) VERIFY_SUSPECT property down_thread was deprecated and is ignored
2012-08-26 02:05:51,576 WARN [org.jgroups.stack.Configurator] (main) VERIFY_SUSPECT property up_thread was deprecated and is ignored
2012-08-26 02:05:51,584 WARN [org.jgroups.stack.Configurator] (main) NAKACK property down_thread was deprecated and is ignored
2012-08-26 02:05:51,584 WARN [org.jgroups.stack.Configurator] (main) NAKACK property up_thread was deprecated and is ignored
2012-08-26 02:05:51,629 WARN [org.jgroups.stack.Configurator] (main) GMS property join_retry_timeout was deprecated and is ignored
2012-08-26 02:05:51,629 WARN [org.jgroups.stack.Configurator] (main) GMS property shun was deprecated and is ignored
2012-08-26 02:05:51,629 WARN [org.jgroups.stack.Configurator] (main) GMS property down_thread was deprecated and is ignored
2012-08-26 02:05:51,629 WARN [org.jgroups.stack.Configurator] (main) GMS property up_thread was deprecated and is ignored
2012-08-26 02:05:51,734 WARN [org.jgroups.protocols.pbcast.NAKACK] (main) use_mcast_xmit should not be used because the transport (TCP) does not support IP multicasting; setting use_mcast_xmit to false
2012-08-26 02:05:58,539 WARN [org.jgroups.protocols.pbcast.GMS] (main)
join(host_x-17490) sent to host_x-5955 timed out (after 5000 ms), retrying
2012-08-26 02:06:01,601 INFO
[net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider] (main) JGroups Replication started for 'EH_CACHE'. JChannel: local_addr=host_x-17490
cluster_name=EH_CACHE
my_view=[host_x-17490|0] [host_x-17490]
Environment:
CentOS release 5.4 (Final)
JBoss-4.2.3 GA
Java: 1.6.0_21
RAM: 8 GB
Hosts (machines): host_x, host_y
Lib versions that we use:
jgroups-2.10.0.GA.jar
ehcache-jgroupsreplication-1.5.jar
ehcache-core-2.5.0.jar
Configuration of EhCache (ehcache.xml):
<ehcache>
<cacheManagerPeerProviderFactory class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="connect=TCP(bind_port=7800):
TCPPING(initial_hosts=host_x[7800],host_y[7800];port_range=5;timeout=3000;
num_initial_members=3;up_thread=true;down_thread=true):
VERIFY_SUSPECT(timeout=1500;down_thread=false;up_thread=false):
pbcast.NAKACK(down_thread=true;up_thread=true;gc_lag=100;retransmit_timeout=3000):
pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;
print_local_addr=false;down_thread=true;up_thread=true)"
propertySeparator="::" />
<cache name="RECORD_CACHE" maxElementsInMemory="25000" eternal="false"
overflowToDisk="false" memoryStoreEvictionPolicy="LFU" timeToLiveSeconds="900" >
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=false, replicateUpdates=false,
replicateUpdatesViaCopy=false, replicateRemovals=true" />
</cache>
</ehcache>
We have checked that port 7800 on host_x is reachable from host_y and vice versa (via telnet).
Could you please help us find the root cause of the OOM issue?
We suspect the replication configuration is incorrect, but we currently can't pinpoint the error.
Thank you for any advice or suggestion!

Your JGroups config is completely off!
First of all, it was probably copied from a very old version. Second, STABLE is missing, which means messages will never get garbage collected: NAKACK buffers every message for retransmission until the STABLE protocol agrees it can be purged, which matches the pile-up of Message and NakAckHeader instances in your heap dump. I suggest using the tcp.xml or udp.xml shipped with the 2.10 version of JGroups you're using.
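Purely as an illustration, adding STABLE to the existing connect string would look roughly like the sketch below; the STABLE property values are assumptions based on typical JGroups 2.x defaults, and the deprecated up_thread/down_thread/shun/join_retry_timeout properties are dropped since your log shows they are ignored anyway. Treat the tcp.xml bundled with JGroups 2.10 as the authoritative reference.
connect=TCP(bind_port=7800):
TCPPING(initial_hosts=host_x[7800],host_y[7800];port_range=5;timeout=3000;num_initial_members=3):
VERIFY_SUSPECT(timeout=1500):
pbcast.NAKACK(gc_lag=100;retransmit_timeout=3000):
pbcast.STABLE(stability_delay=1000;desired_avg_gossip=50000;max_bytes=400000):
pbcast.GMS(join_timeout=5000;print_local_addr=false)
STABLE periodically lets the members agree on which messages everyone has received, so NAKACK can release them instead of retaining them indefinitely.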

Related

Keycloak v20.0.3 - I can't connect to Postgresql 12 via IP:Port

As the title says, I am unable to connect to PostgreSQL 12 on a VM, but connecting to PostgreSQL deployed on K8s works normally.
Here is my info:
Docker Desktop 4.16.3 (96739)
Keycloak v20.0.3 (v20.0.1 also got the same error)
VM: Postgresql 12 - Ubuntu 18.04
My env:
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=admin
KC_DB=postgres
KC_DB_URL_DATABASE=web-oidc-v20
KC_DB_URL_HOST=192.168.201.23
KC_DB_URL_PORT=5432
KC_DB_USERNAME=web-oidc
KC_DB_PASSWORD=Abcd#1234
KC_HOSTNAME_ADMIN_URL=http://localhost:8080
KC_HOSTNAME_URL=http://localhost:8080
(I also tried to use KC_DB_URL=jdbc:postgresql://192.168.201.23:5432/web-oidc-v20 as a replacement for KC_DB_URL_DATABASE, KC_DB_URL_HOST, and KC_DB_URL_PORT, but it didn't work either.)
Here is the docker run command:
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -e KC_DB=postgres -e KC_DB_URL_DATABASE=web-oidc-v20 -e KC_DB_URL_HOST=192.168.201.23 -e KC_DB_URL_PORT=5432 -e KC_DB_USERNAME=web-oidc -e KC_DB_PASSWORD=Abcd#1234 -e KC_HOSTNAME_ADMIN_URL=http://localhost:8080 -e KC_HOSTNAME_URL=http://localhost:8080 web-oidc-v20
Container logs:
Changes detected in configuration. Updating the server image.
Updating the configuration and installing your custom providers, if any. Please wait.
2023-02-08 08:55:41,942 WARN [org.keycloak.services] (build-26) KC-SERVICES0047: load-digo-user (vn.vnpt.digo.iscs.keycloak.spi.authenticator.LoadDigoUserAuthenticatorFactory) is implementing the internal SPI authenticator. This SPI is internal and may change without notice
2023-02-08 08:55:46,242 WARN [io.quarkus.deployment.steps.ReflectiveHierarchyStep] (build-104) Unable to properly register the hierarchy of the following classes for reflection as they are not in the Jandex index:
- javax.servlet.http.Cookie (source: JacksonProcessor > org.springframework.security.web.savedrequest.DefaultSavedRequest$Builder)
Consider adding them to the index either by creating a Jandex index for your dependency via the Maven plugin, an empty META-INF/beans.xml or quarkus.index-dependency properties.
2023-02-08 08:55:53,256 INFO [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 14734ms
Server configuration updated and persisted. Run the following command to review the configuration:
kc.sh show-config
Next time you run the server, just run:
kc.sh start --optimized
2023-02-08 08:55:55,649 INFO [vn.vnpt.digo.iscs.keycloak.spi.authenticator.LoadDigoUserAuthenticatorFactory] (main) init
2023-02-08 08:55:55,751 INFO [vn.vnpt.digo.iscs.keycloak.spi.user.storage.DigoUserStorageProviderFactory] (main) init
2023-02-08 08:55:55,753 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: http://localhost:8080, Hostname: localhost, Strict HTTPS: false, Path: /, Strict BackChannel: false, Admin URL: http://localhost:8080, Admin: localhost, Port: 8080, Proxied: false
2023-02-08 08:55:58,207 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
2023-02-08 08:55:59,484 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-02-08 08:55:59,546 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2023-02-08 08:55:59,608 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2023-02-08 08:56:00,138 INFO [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener
2023-02-08 08:56:00,135 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.10.Final
2023-02-08 08:56:00,195 WARN [org.keycloak.quarkus.runtime.storage.legacy.database.LegacyJpaConnectionProviderFactory] (main) Unable to prepare operational info due database exception: Connection has been closed.
2023-02-08 08:56:00,428 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2023-02-08 08:56:00,429 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
2023-02-08 08:56:00,585 WARN [io.agroal.pool] (main) Datasource '<default>': Connection has been closed.
2023-02-08 08:56:00,613 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:00,618 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 20.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:00,620 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:00,622 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 25.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:02,644 INFO [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) 87e253982a2b-10337: no members discovered after 2001 ms: creating cluster as coordinator
2023-02-08 08:56:02,662 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [87e253982a2b-10337|0] (1) [87e253982a2b-10337]
2023-02-08 08:56:02,668 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `87e253982a2b-10337`, physical addresses are `[192.168.0.2:33278]`
2023-02-08 08:56:03,341 INFO [org.infinispan.CLUSTER] (main) ISPN000080: Disconnecting JGroups channel `ISPN`
2023-02-08 08:56:03,507 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (production) mode
2023-02-08 08:56:03,508 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to validate database
2023-02-08 08:56:03,508 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: org.postgresql.util.PSQLException: Connection has been closed.
2023-02-08 08:56:03,508 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Connection has been closed.
2023-02-08 08:56:03,509 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) For more details run the same command passing the '--verbose' option. Also you can use '--help' to see the details about the usage of the particular command.
Postgresql Logs:
2023-02-08 14:59:19.553 +07 [2652184] web-oidc@web-oidc-v20 LOG: connection authorized: user=web-oidc database=web-oidc-v20 SSL enabled (protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256, compression=off)
2023-02-08 14:59:20.758 +07 [2652184] web-oidc@web-oidc-v20 FATAL: relation "migration_model" does not exist at character 25
2023-02-08 14:59:20.758 +07 [2652184] web-oidc@web-oidc-v20 STATEMENT: SELECT ID, VERSION FROM MIGRATION_MODEL ORDER BY UPDATE_TIME DESC
2023-02-08 14:59:20.758 +07 [2652184] web-oidc@web-oidc-v20 LOG: disconnection: session time: 0:00:01.577 user=web-oidc database=web-oidc-v20 host=192.168.200.25 port=62448
Please let me know where I am wrong, sincerely thanks everyone! 🙏
Try replacing localhost in your docker command with host.docker.internal.
From inside the container, localhost refers to the container itself, not to your host machine.
So, you need to run
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -e KC_DB=postgres -e KC_DB_URL_DATABASE=web-oidc-v20 -e KC_DB_URL_HOST=192.168.201.23 -e KC_DB_URL_PORT=5432 -e KC_DB_USERNAME=web-oidc -e KC_DB_PASSWORD=Abcd#1234 -e KC_HOSTNAME_ADMIN_URL=http://host.docker.internal:8080 -e KC_HOSTNAME_URL=http://host.docker.internal:8080 web-oidc-v20
Also, not sure about this
-e KC_DB_URL_HOST=192.168.201.23
Also, I see you are trying to start it in production mode. Is that intended? If not, add start-dev at the end of the command, as sketched below.
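If dev mode is what you want, the invocation would look roughly like this; it's a sketch that assumes your web-oidc-v20 image keeps Keycloak's standard kc.sh entrypoint, so the trailing argument is passed through unchanged:
docker run -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=postgres -e KC_DB_URL_DATABASE=web-oidc-v20 \
  -e KC_DB_URL_HOST=192.168.201.23 -e KC_DB_URL_PORT=5432 \
  -e KC_DB_USERNAME=web-oidc -e KC_DB_PASSWORD=Abcd#1234 \
  -e KC_HOSTNAME_ADMIN_URL=http://host.docker.internal:8080 \
  -e KC_HOSTNAME_URL=http://host.docker.internal:8080 \
  web-oidc-v20 start-dev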
I found the problem I was having: one setting in the PostgreSQL config was blocking my connection!
exit_on_error
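For anyone hitting the same thing: exit_on_error is a postgresql.conf parameter, and when it is on, the first failed statement (here the query against the not-yet-created migration_model table) terminates the session, which is why Keycloak only saw "Connection has been closed". A hedged sketch of the fix, to be checked against your own config:
# postgresql.conf
exit_on_error = off   # with 'on', any error ends the session immediately
# then reload the configuration, e.g. from psql:
#   SELECT pg_reload_conf();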

Import Keycloak existing realm without losing existing users

I configured a Kubernetes init container that imports an existing realm and overrides the one that is already in the environment.
I'm using this command:
/opt/keycloak/bin/kc.sh import --file=/opt/keycloak/data/import/tyk-realm-export.json
The problem I'm having is that when the existing realm is replaced, all users in it are deleted.
Is there any way to import a new realm configuration without losing the users?
In particular, my DB is expected to hold hundreds of thousands of users.
PS: using keycloak >=18.0.0
Here is a log:
Appending additional Java properties to JAVA_OPTS: -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
2022-06-17 10:17:30,048 INFO [org.keycloak.common.Profile] (main) Preview feature enabled: scripts
2022-06-17 10:17:30,198 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: FrontEnd: <MyHostname>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin: <request>, Port: -1, Proxied: true
2022-06-17 10:17:32,225 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2022-06-17 10:17:32,505 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2022-06-17 10:17:32,559 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2022-06-17 10:17:33,004 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.9.Final
2022-06-17 10:17:33,311 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2022-06-17 10:17:33,312 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
2022-06-17 10:17:33,599 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:33,600 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 20.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:33,600 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:33,600 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 25.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:35,614 INFO [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) sb-keycloak-bd4778849-n8jh5-3122: no members discovered after 2004 ms: creating cluster as coordinator
2022-06-17 10:17:35,636 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [sb-keycloak-bd4778849-n8jh5-3122|0] (1) [sb-keycloak-bd4778849-n8jh5-3122]
2022-06-17 10:17:35,647 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `sb-keycloak-bd4778849-n8jh5-3122`, physical addresses are `[10.2.0.74:41912]`
2022-06-17 10:17:36,678 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: sb-keycloak-bd4778849-n8jh5-3122, Site name: null
2022-06-17 10:17:37,972 INFO [org.keycloak.services] (main) KC-SERVICES0030: Full model import requested. Strategy: OVERWRITE_EXISTING
2022-06-17 10:17:37,983 INFO [org.keycloak.exportimport.singlefile.SingleFileImportProvider] (main) Full importing from file /opt/keycloak/data/import/tyk-realm-export.json
2022-06-17 10:17:38,388 INFO [org.keycloak.exportimport.util.ImportUtils] (main) Realm 'tyk' already exists. Removing it before import
2022-06-17 10:17:49,348 INFO [org.keycloak.exportimport.util.ImportUtils] (main) Realm 'tyk' imported
2022-06-17 10:17:49,540 INFO [org.keycloak.services] (main) KC-SERVICES0032: Import finished successfully
2022-06-17 10:17:49,832 INFO [io.quarkus] (main) Keycloak 18.0.1 on JVM (powered by Quarkus 2.7.5.Final) started in 25.524s. Listening on: http://0.0.0.0:8080
2022-06-17 10:17:49,834 INFO [io.quarkus] (main) Profile import_export activated.
2022-06-17 10:17:49,834 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-metrics, vault, vertx]
2022-06-17 10:17:49,922 INFO [org.infinispan.CLUSTER] (main) ISPN000080: Disconnecting JGroups channel `ISPN`
2022-06-17 10:17:50,012 INFO [io.quarkus] (main) Keycloak stopped in 0.165s
Done
Maybe you could export both realms and stitch the dumps together.
I don't know your exact use-case.
But the question I would ask is: do you really need to import the realm again, or do you just need an update?
The first time you import the realm, it's perfectly fine. When importing, you have to choose between two strategies: OVERWRITE_EXISTING and IGNORE_EXISTING.
However, neither fits the use case of updating particular items of your realm, like the SMTP server settings.
Let's say you have three environments: development, release, production.
Your configuration evolves and runs through each stage.
With IGNORE_EXISTING, no import will happen.
With OVERWRITE_EXISTING, it will remove all your users, because it works this way: delete the existing realm, then create a completely new one. Needless to say, this is not what you want in a production environment.
What you need in this case is just an update via the REST API. (Note that this link points to a specific version, and the path specified in the documentation is wrong, which is why it differs in my curl command.)
E.g.:
Let's say you get a requirement that the emails sent by Keycloak should have a new "from" address. You develop it, it gets tested, and then it goes to production. In this case you can run curl scripts like this:
------------------------------
# First initialize your variables
export KEYCLOAK_HOST="http://localhost:8471"
export REALM_NAME="myrealm"
export CLIENT_SECRET="client-secret-from-your-admin-cli-user-in-the-myrealm"
export CLIENT_ID="admin-cli"
# get the token (mandatory for any action as an admin)
export TOKEN=$( \
curl -s \
-d "client_id=$CLIENT_ID" \
-d "client_secret=$CLIENT_SECRET" \
-d 'grant_type=client_credentials' \
"$KEYCLOAK_HOST/auth/realms/$REALM_NAME/protocol/openid-connect/token" \
| jq -j '.access_token')
# Update your specific resource; here we update the smtpServer attribute with the new values
curl -X PUT \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"smtpServer" : { "replyToDisplayName" : "my Example Display Name", "starttls" : "false", "auth" : "", "port" : "12345", "host" : "my-host.local", "replyTo" : "my-new-address-requested#supermail.com", "from" : "my-new-address-requested#supermail.com", "fromDisplayName" : "", "ssl" : ""} }' \
$KEYCLOAK_HOST/auth/admin/realms/ekc
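To double-check that the update took effect, you can fetch the realm representation back and inspect its smtpServer block; this is just a sketch, and on newer Quarkus-based Keycloak builds the /auth path prefix is dropped:
# fetch the realm and show the smtpServer settings
curl -s \
  -H "Authorization: Bearer $TOKEN" \
  "$KEYCLOAK_HOST/auth/admin/realms/$REALM_NAME" \
  | jq '.smtpServer'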
With this approach you can update your realm and let it evolve according to its stage.
As I said, I don't know if it solves your problem, but if so I am happy I could help.

Getting SmscManagement is not registered error in smsc gateway management UI

I am using Telestax Restcomm smsc gateway 7.2.109.
When I load the SMSC gateway management UI, I get:
15:31:12:520 [ERROR] javax.management.InstanceNotFoundException : org.mobicents.smsc:layer=SmscPropertiesManagement,name=SmscManagement is not registered.. (Full Stack Trace)
I am also getting the following errors while starting the SMSC server (JBoss).
08:56:25,851 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SCTPManagement
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SCTPManagement",type=Component already registered.
08:56:25,858 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SCTPShellExecutor
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SCTPShellExecutor",type=Component already registered.
08:56:25,865 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean RoutingLabelFormat
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="RoutingLabelFormat",type=Component already registered.
08:56:25,874 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean Mtp3UserPart
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="Mtp3UserPart",type=Component already registered.
08:56:25,882 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean M3UAShellExecutor
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="M3UAShellExecutor",type=Component already registered.
08:56:25,889 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SS7Clock
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SS7Clock",type=Component already registered.
08:56:25,899 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SS7Scheduler
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SS7Scheduler",type=Component already registered.
08:56:25,907 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SccpStack
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SccpStack",type=Component already registered.
08:56:25,914 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean SccpExecutor
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="SccpExecutor",type=Component already registered.
08:56:25,921 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean TcapStack
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="TcapStack",type=Component already registered.
08:56:25,927 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean TcapExecutor
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="TcapExecutor",type=Component already registered.
08:56:25,934 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean ShellExecutor
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="ShellExecutor",type=Component already registered.
08:56:25,940 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean MapStack
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="MapStack",type=Component already registered.
08:56:25,950 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean MAPSS7Service
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="MAPSS7Service",type=Component already registered.
08:56:25,984 WARN [AbstractDeploymentContext] (main) Unable to register deployment mbean Ss7Management
javax.management.InstanceAlreadyExistsException: jboss.deployment:id="Ss7Management",type=Component already registered.
08:56:26,041 ERROR [AbstractKernelController] (main) Error installing to Real: name=vfsfile:/home/telestax/Downloads/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/default/deploy/restcomm-smsc-server/META-INF/jboss-beans.xml state=PreReal mode=Manual requiredState=Real
org.jboss.deployers.spi.DeploymentException: Error deploying: SCTPManagement
DEPLOYMENTS MISSING DEPENDENCIES:
Deployment "vfszip:/home/telestax/Downloads/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/default/deploy/smpp-server-ra-du-7.0.5.jar/" is missing the following dependencies:
Dependency "SmppManagement" (should be in state "Real", but is actually in state "** NOT FOUND Depends on 'SmppManagement' ")
Deployment "vfszip:/home/telestax/Downloads/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/default/deploy/smsc-resource-adaptors-du-7.2.109.jar/" is missing the following dependencies:
Dependency "SmscManagement" (should be in state "Real", but is actually in state " NOT FOUND Depends on 'SmscManagement' ")
Deployment "vfszip:/home/telestax/Downloads/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/default/deploy/smsc-services-du-7.2.109.jar/" is missing the following dependencies:
Dependency "SmscManagement" (should be in state "Real", but is actually in state " NOT FOUND Depends on 'SmscManagement' **")
DEPLOYMENTS IN ERROR:
Deployment "vfsfile:/home/telestax/Downloads/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/default/deploy/restcomm-smsc-server/META-INF/jboss-beans.xml" is in error due to the following reason(s): java.lang.IllegalStateException: SCTPManagement is already installed.
Deployment "SmscManagement" is in error due to the following reason(s): ** NOT FOUND Depends on 'SmscManagement' **
Deployment "SmppManagement" is in error due to the following reason(s): ** NOT FOUND Depends on 'SmppManagement' **
Kindly help.
Thanks.
Update:
Server is working fine now.
Getting the error below when calling from an SMPP simulator client.
14:26:41,913 INFO [SmppServerConnector] (SmppManagement) New channel from [172.17.0.1:57210]
14:26:41,916 INFO [UnboundSmppSession] (SmppManagement.UnboundSession.172.17.0.1:57210) received PDU: (bind_transmitter: 0x00000023 0x00000002 0x00000000 0x00000001) (body: systemId [test] password [test] systemType [] interfaceVersion [0x34] addressRange (0x01 0x01 [6666])) (opts: )
14:26:41,917 ERROR [DefaultSmppServerHandler] (SmppManagement.UnboundSession.172.17.0.1:57210) Received BIND request but no ESME configured for SystemId=test Host=172.17.0.1 Port=57210 SmppBindType=TRANSMITTER
14:26:41,918 WARN [UnboundSmppSession] (SmppManagement.UnboundSession.172.17.0.1:57210) Bind request rejected or failed for connection [172.17.0.1:57210] with error [SMPP processing error [0x0000000F]]
14:26:41,918 INFO [UnboundSmppSession] (SmppManagement.UnboundSession.172.17.0.1:57210) send PDU: (bind_transmitter_resp: 0x0000001A 0x80000002 0x0000000F 0x00000001 result: "System ID invalid") (body: systemId [test]) (opts: (sc_interface_version: 0x0210 0x0001 [34]))
14:26:41,919 INFO [UnboundSmppSession] (SmppManagement.UnboundSession.172.17.0.1:57210) Connection closed with [172.17.0.1:57210]
Can you retry with the latest snapshot release from https://mobicents.ci.cloudbees.com/job/RestComm-SMSC/ ?

What do WARN messages mean when starting spark-shell?

When starting spark-shell, I get a bunch of WARN messages, but I cannot understand them. Are there any important problems I should take care of? Is there any configuration I missed? Or are these WARN messages normal?
cliu@cliu-ubuntu:Apache-Spark$ spark-shell
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
To adjust logging level use sc.setLogLevel("INFO")
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.5.2
/_/
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_66)
Type in expressions to have them evaluated.
Type :help for more information.
15/11/30 11:43:54 WARN Utils: Your hostname, cliu-ubuntu resolves to a loopback address: 127.0.1.1; using xxx.xxx.xxx.xx (`here I hide my IP`) instead (on interface wlan0)
15/11/30 11:43:54 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/11/30 11:43:55 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
Spark context available as sc.
15/11/30 11:43:58 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/11/30 11:43:58 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/11/30 11:44:11 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
15/11/30 11:44:11 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
15/11/30 11:44:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/30 11:44:14 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/11/30 11:44:14 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/11/30 11:44:27 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
15/11/30 11:44:27 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
SQL context available as sqlContext.
scala>
This one:
15/11/30 11:43:54 WARN Utils: Your hostname, cliu-ubuntu resolves to a loopback address: 127.0.1.1; using xxx.xxx.xxx.xx (`here I hide my IP`) instead (on interface wlan0)
15/11/30 11:43:54 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
It means that the hostname the driver figured out for itself is not routable, and hence no remote connections are allowed. In your local environment this is not an issue, but if you move to a multi-machine configuration, Spark won't work properly. Hence the WARN message: it may or may not be a problem. Just a heads-up.
These log messages are absolutely normal. Here BoneCP tries to set up a JDBC connection, which is why you receive these warnings.
In any case, if you would like to manage the log output, you can set the logging level by copying the <spark-path>/conf/log4j.properties.template file to <spark-path>/conf/log4j.properties and making your changes there, as sketched below.
Lastly, a similar answer for logging level can be found here:
How to stop messages displaying on spark console?
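For reference, a minimal sketch of that change, assuming the template still declares log4j.rootCategory as it does in Spark 1.x:
# copy the template once
cp <spark-path>/conf/log4j.properties.template <spark-path>/conf/log4j.properties
# then edit log4j.properties and lower the root logger, e.g.:
#   log4j.rootCategory=WARN, console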
Adding to @Jacek Laskowski's answer, with respect to the SPARK_LOCAL_IP warning:
15/11/30 11:43:54 WARN Utils: Your hostname, cliu-ubuntu resolves to a loopback address: 127.0.1.1; using xxx.xxx.xxx.xx (`here I hide my IP`) instead (on interface wlan0)
15/11/30 11:43:54 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
I encountered the same thing running spark-shell against a standalone Spark cluster on an Ubuntu 20.04 server. As expected, setting the SPARK_LOCAL_IP environment variable to $(hostname) made the warning go away, but while the application ran without issues, the worker GUI was not reachable on port 4040.
To fix this, we had to set SPARK_LOCAL_HOSTNAME instead of SPARK_LOCAL_IP. With this, the warning was gone and the worker GUI became accessible through port 4040.
I couldn't find information about this variable in the Spark documentation, but according to Spark's source code it is used for setting a custom local machine URI: https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/Utils.scala#L1058
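In practice that amounts to something like the following before launching the shell (a sketch, assuming a bash-style shell):
# advertise the machine's hostname instead of the loopback-resolved address
export SPARK_LOCAL_HOSTNAME=$(hostname)
spark-shell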

Orient db distributed database configuration

I am unable to configure distributed databases using https://github.com/orientechnologies/orientdb/wiki/Tutorial%3A-setup-a-distributed-database
I am using orientdb community 1.7.5 edition.
The nodes aren't able to connect to each other. I am configuring them on the same server and have followed every instruction given in the link above.
Update:
There weren't any errors earlier, but I got the error below the last time I tried:
00:55:15:315 INFO [152.144.227.223]:2434 [orientdb] [3.2.2] Accepting socket connection from /152.144.227.223:56818 [SocketAcceptor]
2014-07-23 00:55:15:321 INFO [152.144.227.223]:2434 [orientdb] [3.2.2] 2434 accepted socket connection from /152.144.227.223:56818 [TcpIpConnectionManager]
2014-07-23 00:55:16:321 WARN [152.144.227.223]:2434 [orientdb] [3.2.2] Invalid join request from: Address[152.144.227.223]:2435, reason:Incompatible joiners! -vs- tcp-ip [ClusterService]
2014-07-23 00:55:16:325 INFO [152.144.227.223]:2434 [orientdb] [3.2.2] Connection [Address[152.144.227.223]:2435] lost. Reason: Socket explicitly closed [TcpIpConnection]
Also find below the Hazelcast configuration. It's the same for both nodes; the nodes are on the same machine.
<network>
<port auto-increment="true">2434</port>
<join>
<multicast enabled="false">
<multicast-group>235.1.1.1</multicast-group>
<multicast-port>2434</multicast-port>
</multicast>
</join>
<tcp-ip enabled="true">
<member>152.144.227.223:2434</member>
<member>152.144.227.223:2435</member>
</tcp-ip>
</network>
I tried changing the ports in the Hazelcast config to 152.144.227.223:2424/2425 and got the warning below when starting node 1.
2014-07-23 01:14:27:157 INFO null [orientdb] [3.2.2] Picked Address[152.144.227.223]:2434, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=2434], bind any local is true [DefaultAddressPicker]
2014-07-23 01:14:27:252 INFO [152.144.227.223]:2434 [orientdb] [3.2.2] Hazelcast Community Edition 3.2.2 (20140527) starting at Address[152.144.227.223]:2434 [system]
2014-07-23 01:14:27:254 INFO [152.144.227.223]:2434 [orientdb] [3.2.2] Copyright (C) 2008-2014 Hazelcast.com [system]
2014-07-23 01:14:27:258 INFO [152.144.227.223]:2434 [orientdb] [3.2.2] Address[152.144.227.223]:2434 is STARTING [LifecycleService]
2014-07-23 01:14:27:424 WARN [152.144.227.223]:2434 [orientdb] [3.2.2] No join method is enabled! Starting standalone. [Node]
2014-07-23 01:14:27:457 INFO [152.144.227.223]:2434 [orientdb] [3.2.2] Address[152.144.227.223]:2434 is STARTED [LifecycleService]
New error:
I am getting the error below on both nodes:
2014-08-08 16:27:37:309 INFO [192.168.159.134]:2434 [orientdb] [3.2.2] Hazelcast Community Edition 3.2.2 (20140527) starting at Address[192.168.159.134]:2434 [system]
2014-08-08 16:27:37:309 INFO [192.168.159.134]:2434 [orientdb] [3.2.2] Copyright (C) 2008-2014 Hazelcast.com [system]
2014-08-08 16:27:37:356 INFO [192.168.159.134]:2434 [orientdb] [3.2.2] Address[192.168.159.134]:2434 is STARTING [LifecycleService]
2014-08-08 16:27:38:494 WARN [192.168.159.134]:2434 [orientdb] [3.2.2] No join method is enabled! Starting standalone. [Node]
2014-08-08 16:27:38:869 INFO [192.168.159.134]:2434 [orientdb] [3.2.2] Address[192.168.159.134]:2434 is STARTED [LifecycleService]
To be able to form a cluster, you need to put the tcp-ip tag inside the join tag of the Hazelcast configuration. That will resolve the "No join method is enabled" error you are facing. Your Hazelcast configuration file should look like this:
<join>
<multicast enabled="false">
<multicast-group>235.1.1.1</multicast-group>
<multicast-port>2434</multicast-port>
</multicast>
<tcp-ip enabled="true">
<member>152.144.227.223:2434</member>
<member>152.144.227.223:2435</member>
</tcp-ip>
</join>
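For context, a sketch of how the full <network> section would look with that join block in place, reusing the ports from the question; auto-increment lets the second node on the same machine fall back to 2435:
<network>
  <port auto-increment="true">2434</port>
  <join>
    <!-- multicast disabled; members are listed explicitly instead -->
    <multicast enabled="false">
      <multicast-group>235.1.1.1</multicast-group>
      <multicast-port>2434</multicast-port>
    </multicast>
    <tcp-ip enabled="true">
      <member>152.144.227.223:2434</member>
      <member>152.144.227.223:2435</member>
    </tcp-ip>
  </join>
</network>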