Restcomm - Solving SMSC GW 7.2 configuration failures
jboss

We configured the latest version (7.2) of the SMSC-GW to work on our server, along with its environment (Cassandra and so on). However, after setting everything up, some failures are appearing (which did not appear in previous versions).
First, we connected the simulators and the gateway using the default settings (JSS7 <-> SMSC-GW <-> SMPP):
JSS7 is connected and sending, but no response is received.
SMPP is connected to the SMSC-GW and the ESME is bound. SMPP tries to send to SS7 but receives a response PDU failure from the SMSC-GW.
I tried configuring DB routing rules, but that did not work.
Also, the log in the SMSC-GW server is frequently displaying the following message:
16:00:28,504 INFO [SchedulerResourceAdaptor] (pool-56-thread-1) Not all SBB are running now: ServicesDownList=[smscTxSmppServerServiceState, smscRxSmppServerServiceState, smscTxSipServerServiceState, smscRxSipServerServiceState, smscTxHttpServerServiceState, moServiceState, homeRoutingServiceState, mtServiceState, alertServiceState, chargingServiceState, ]
And the JSS7 management console GUI is displaying this (which looks wrong):
So are these the source of the SMSC-GW failures?
UPDATE: I found this error in server.log:
2017-02-02 10:57:42,005 WARN [org.mobicents.slee.container.deployment.jboss.SleeContainerDeployerImpl] (SLEE-InternalDeployer-thread-1) SLEE DUs not deployed, due to missing dependencies: file:/home/coreteam/kitchensink/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/simulator/deploy/smsc-services-du-7.2.109.jar/
Followed by:
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SEND_MT,vendor=org.mobicents,version=1.0]
ResourceAdaptorTypeID[name=PersistenceResourceAdaptorType,vendor=org.mobicents,version=1.0]
ResourceAdaptorTypeID[name=SchedulerResourceAdaptorType,vendor=org.mobicents,version=1.0]
SipRA
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SEND_RSDS,vendor=org.mobicents,version=1.0]
SchedulerResourceAdaptor
PersistenceResourceAdaptor
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SMPP_SM,vendor=org.mobicents,version=1.0]
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SM,vendor=org.mobicents,version=1.0]
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SIP_SM,vendor=org.mobicents,version=1.0]
2017-02-02 14:41:17,450 WARN [org.mobicents.slee.container.deployment.jboss.DeploymentManager] (main) Unable to INSTALL smsc-services-du-7.3.0-SNAPSHOT.jar right now. Waiting for dependencies to be resolved.

I solved it quite a while ago, but thought I would share. I simply installed the missing SipRA dependency by adding the following to the deploy-config.xml file in the $JBOSS_HOME/server/profile_name/deploy/restcomm-slee directory:
<ra-entity
    resource-adaptor-id="ResourceAdaptorID[name=JainSipResourceAdaptor,vendor=net.java.slee.sip,version=1.2]"
    entity-name="SipRA">
  <properties>
    <property name="javax.sip.PORT" type="java.lang.Integer" value="5060" />
  </properties>
  <ra-link name="SipRA" />
</ra-entity>
I set javax.sip.PORT to a value other than the default 5060 shown above, since that port was already taken by another service.
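For illustration, the same entry with a non-default port might look like this (5061 is an arbitrary example, not necessarily the value I used):
<ra-entity
    resource-adaptor-id="ResourceAdaptorID[name=JainSipResourceAdaptor,vendor=net.java.slee.sip,version=1.2]"
    entity-name="SipRA">
  <properties>
    <!-- any free port works; 5061 is just an example -->
    <property name="javax.sip.PORT" type="java.lang.Integer" value="5061" />
  </properties>
  <ra-link name="SipRA" />
</ra-entity>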
The smsc-services-du-7.2.109.jar then installed automatically the next time I ran the SMSC-GW.

Related

IBM Bluemix Blockchain SDK-Demo failing

I have been working with the HFC SDK for Node.js and it used to work, but since last night I have been having some problems.
When I run helloblockchain.js, it only works a few times; most of the time I get this error when it tries to enroll a new user:
E0113 11:56:05.983919636 5288 handshake.c:128] Security handshake failed: {"created":"#1484304965.983872199","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484304965.983866102","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
Error: Failed to register and enroll JohnDoe: Error
Other times, the enrollment works and the failure appears when deploying the chaincode:
Enrolled and registered JohnDoe successfully
Deploying chaincode ...
E0113 12:14:27.341527043 5455 handshake.c:128] Security handshake failed: {"created":"#1484306067.341430168","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306067.341421859","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
Failed to deploy chaincode: request={"fcn":"init","args":["a","100","b","200"],"chaincodePath":"chaincode","certificatePath":"/certs/peer/cert.pem"}, error={"error":{"code":14,"metadata":{"_internal_repr":{}}},"msg":"Error"}
Or:
Enrolled and registered JohnDoe successfully
Deploying chaincode ...
E0113 12:15:27.448867739 5483 handshake.c:128] Security handshake failed: {"created":"#1484306127.448692244","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306127.448668047","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
events.js:160
throw er; // Unhandled 'error' event
^
Error
at ClientDuplexStream._emitStatusIfDone (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._readsDone (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:158:8)
at readCallback (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:217:12)
E0113 12:15:27.563487641 5483 handshake.c:128] Security handshake failed: {"created":"#1484306127.563437122","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306127.563429661","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
This code worked yesterday, so I don't know what could be happening.
Does anybody know how I can fix it?
Thanks,
Javier.
ibm-bluemix
blockchain
These types of intermittent issues are usually related to gRPC. An initial suggestion is to ensure that you are using at least gRPC version 1.0.0.
If you are using a Mac, the maximum number of open file descriptors should be checked (using ulimit -n). Sometimes this is initially set to a low value such as 256, so increasing it (for example, ulimit -n 4096 raises it for the current shell) could help.
There are a couple of gRPC issues with similar symptoms:
https://github.com/grpc/grpc/issues/8732
https://github.com/grpc/grpc/issues/8839
https://github.com/grpc/grpc/issues/8382
There is a grpc.initial_reconnect_backoff_ms property that is mentioned in some of these issues. Increasing the value past the 1000 ms level might help reduce the frequency of issues. Below are instructions for how the helloblockchain.js file can be modified to set this property to a higher value.
Open the helloblockchain.js file in the Hyperledger Fabric Client example and find the enrollAndRegisterUsers function.
Add "grpc.initial_reconnect_backoff_ms": 5000 to the setMemberServicesUrl call.
chain.setMemberServicesUrl(ca_url, {
  pem: cert, "grpc.initial_reconnect_backoff_ms": 5000
});
Add "grpc.initial_reconnect_backoff_ms": 5000 to the addPeer call.
chain.addPeer("grpcs://" + peers[i].discovery_host + ":" + peers[i].discovery_port,
  { pem: cert, "grpc.initial_reconnect_backoff_ms": 5000 });
Note that setting the grpc.initial_reconnect_backoff_ms property may reduce the frequency of issues, but it will not necessarily eliminate all issues.
The connection to the eventhub that is made in the helloblockchain.js file can also be a factor. There is an earlier version of the Hyperledger Fabric Client that does not utilize the eventhub, and this earlier version could be tried to determine whether it makes a difference. After running git clone https://github.com/IBM-Blockchain/SDK-Demo.git, run git checkout b7d5195 to use this prior level. Before running node helloblockchain.js from a Node.js command window, the git status command can be used to check the code level that is being used.

How to check cluster working between two different JBoss Server

I configured a cluster between two different JBoss servers using the multicast method.
Both servers connect when I start them.
After one day, I start getting the following messages; errors begin to show up for the clustering in server.log:
05:28:17,447 ERROR [org.hornetq.core.server] (Thread-11905 (HornetQ-client-global-threads-377807954)) HQ224037:
cluster connection Failed to handle message: java.lang.IllegalStateException:
Cannot find binding for d7c1004f-b1a1-4160-8888-c38175ac45d599cf0dfe-5f30-11e4-bd7e-556a35fb9ec6 on
ClusterConnectionImpl#538608327[nodeUUID=930dee51-5f30-11e4-9695-ef52e2a27831, connector=TransportConfiguration(name=netty,
factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=5445&host=172-29-250-191, address=jms,
server=HornetQServerImpl::serverUUID=930dee51-5f30-11e4-9695-ef52e2a27831]
at org.hornetq.core.server.cluster.impl.ClusterConnectionImpl$MessageFlowRecordImpl.doConsumerCreat
05:28:17,411 ERROR [org.hornetq.core.server] (Thread-11439
(HornetQ-remoting-threads-HornetQServerImpl::serverUUID=99cf0dfe-5f30-11e4-bd7e-556a35fb9ec6-136247994-702467456))
HQ224016: Caught exception: HornetQException[errorType=QUEUE_EXISTS message=HQ119019:
Queue already exists 7a8b46d5-a038-4efd-900e-4c041c2c121f]
At org.hornetq.core.server.impl.HornetQServerImpl.createQueue(HornetQServerImpl.java:1811)
[hornetq-server-2.3.1.Final-redhat-1.jar:2.3.1.Final-redhat-1]
How can I ensure the cluster between the two servers is working? Are there any procedures or workarounds available?
Red Hat provides a McastReceiverTest Java client test utility; further information on its use can be found at https://access.redhat.com/solutions/123073
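For what it's worth, these test classes come from JGroups (the class and flag names below are from JGroups and may vary by version): run java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 228.1.2.3 -port 45566 on one server and java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 228.1.2.3 -port 45566 on the other; lines typed at the sender should appear at the receiver if multicast works between the two hosts.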

Cometd in wildfly8.1.0 no response

I'm trying to deploy a project which utilizes CometD 2.9.1 in WildFly 8.1.0.
The project worked in GlassFish 4.0; however, it won't work in WildFly. It deploys with no problem, but it gets stuck after I visit http://localhost:8080/Cometd3/ (yes, the project name is Cometd3, though I ended up with CometD 2.9.1).
It's weird because there is no error or exception message, even though I enabled debug logging.
Because my log is too long to post here and I have no idea what the problem is, please download my log file here.
Please note that the log just stops at:
2014-09-17 14:57:09,893 FINE [javax.enterprise.resource.webcontainer.jsf.application] (default task-4) servletPath /faces
2014-09-17 14:57:09,900 FINE [javax.enterprise.resource.webcontainer.jsf.application] (default task-5) servletPath /faces
2014-09-17 14:57:09,907 FINE [javax.enterprise.resource.webcontainer.jsf.application] (default task-6) servletPath /faces
2014-09-17 14:58:44,505 DEBUG [org.jboss.ejb.client.txn] (Periodic Recovery) Send recover request for transaction origin node identifier 1 to EJB receiver with node name 972909-a43sv
There are no other log entries after those messages.
What's more, when I say it is stuck, I mean not only that the server side is stuck but also the browser: the page stays totally white and keeps loading when I open it.
Any thoughts? Thanks!
I can answer only from the CometD side of things.
The logs show some server-side CometD activity (a LocalSession called echo handshakes), but after that it's like there are no requests to the server, so CometD has nothing to do and just sits idle.
From your description, this looks more like a server problem than a CometD problem.
CometD typically runs very well in Jetty; I would try to deploy your application in Jetty and see how it goes.

Migrating JMS Queue from Hypersonic to MSSQL

I am currently trying to replace Hypersonic with MS SQL Server 2008 R2 in JBoss AS 5.1.0.GA.
I have followed the instructions in the JBoss Server Configuration Guide, however the server fails to load with this error:
2013-09-26 17:06:04,479 WARN [org.jboss.resource.adapter.jms.inflow.JmsActivation] (WorkManager(2)-3) Failure in jms activation org.jboss.resource.adapter.jms.inflow.JmsActivationSpec#8bb1eb(ra=org.jboss.resource.adapter.jms.JmsResourceAdapter#c54851 destination=queue/iam/im/jms/queue/wpUtilQueue destinationType=javax.jms.Queue tx=true durable=false reconnect=10 provider=DefaultJMSProvider user=null maxMessages=1 minSession=1 maxSession=15 keepAlive=30000 useDLQ=true DLQHandler=org.jboss.resource.adapter.jms.inflow.dlq.GenericDLQHandler DLQJndiName=queue/DLQ DLQUser=null DLQMaxResent=10)
javax.naming.NameNotFoundException: DLQ not bound
(I left out the stack trace for brevity; it isn't important.)
I have checked, and DLQ is defined in destinations-service.xml
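The stock DLQ entry in destinations-service.xml normally looks something like the sketch below (trimmed; the MBean code and dependencies are from a standard JBoss Messaging install and may differ in your setup):
<mbean code="org.jboss.jms.server.destination.QueueService"
       name="jboss.messaging.destination:service=Queue,name=DLQ"
       xmbean-dd="xmdesc/Queue-xmbean.xml">
  <depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
  <depends>jboss.messaging:service=PostOffice</depends>
</mbean>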
I'm not sure where to proceed from here; every response I can find on Google seems to suggest that defining the queue in destinations-service.xml has solved the issue for almost everyone.
Any help would be appreciated.
It turns out that the instructions in the Configuration Guide aren't 100% complete. The issue was that a ChannelFactory was referenced in mssql-persistence-service.xml; however, this environment is not clustered, so there were no ChannelFactory objects defined.
Removing the reference to the ChannelFactory was sufficient to resolve the issue.
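For reference, the reference in question looks roughly like this (a trimmed sketch of the clustered JBoss Messaging persistence config; exact MBean and attribute names may vary by version):
<mbean code="org.jboss.messaging.core.jmx.MessagingPostOfficeService"
       name="jboss.messaging:service=PostOffice">
  <!-- This dependency has no matching service in a non-clustered profile;
       remove it (and set Clustered to false) on a standalone server. -->
  <depends optional-attribute-name="ChannelFactoryName">jboss.jgroups:service=ChannelFactory</depends>
  <attribute name="Clustered">true</attribute>
</mbean>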

Jboss shows error with datasource on startup

On starting JBoss, I am getting the following error:
--- MBEANS THAT ARE THE ROOT CAUSE OF THE PROBLEM ---
ObjectName: jboss.jca:service=DataSourceBinding,name=DefaultDS
State: NOTYETINSTALLED
Depends On Me:
jboss.ejb:service=EJBTimerService,persistencePolicy=database
jboss:service=KeyGeneratorFactory,type=HiLo
jboss.mq:service=StateManager
jboss.mq:service=PersistenceManager
And for all database connections in the servlet I get the following exception:
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "poll"
It was working fine, and all of a sudden I started getting these errors. My password is correct. I even tried changing the password and trying again, but it showed the same exception. What is happening here?
The DefaultDS datasource is what the name suggests: the default datasource. It ships with JBoss and is configured to use the Hypersonic (i.e., in-memory) database. JBoss uses the DefaultDS datasource to read/write internal queues, timed events, etc.
Check the file ../conf/standardjbosscmp-jdbc.xml to see what you've got configured for the DefaultDS datasource. It sounds like you've edited that file unintentionally. Unless you need to persist internal queues etc. across boots, just leave it as shipped, using Hypersonic.
See the JBoss doc for more.
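For reference, the shipped DefaultDS definition itself lives in deploy/hsqldb-ds.xml and looks roughly like this (a trimmed sketch from a stock JBoss AS 5.x install; exact attributes vary):
<datasources>
  <local-tx-datasource>
    <jndi-name>DefaultDS</jndi-name>
    <!-- Hypersonic database kept in the server data directory -->
    <connection-url>jdbc:hsqldb:${jboss.server.data.dir}${/}hypersonic${/}localDB</connection-url>
    <driver-class>org.hsqldb.jdbcDriver</driver-class>
    <user-name>sa</user-name>
    <password></password>
  </local-tx-datasource>
</datasources>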