How to create a Connection Factory in WebSphere - queue

I am trying to use queues, so I need to set up a connection factory and a queue.
But I am stuck at the very beginning: I can't set up the connection factory.
I am using this link to set up a connection factory:
link
At step 9 I don't know what values to enter; this is the screen in question:
If I set "localhost" as the Hostname, then click Next and try to test it, I get this message:
A connection could not be made to WebSphere MQ for the following
reason: JMSCMQ0001: WebSphere MQ call failed with compcode '2'
('MQCC_FAILED') reason '2059' ('MQRC_Q_MGR_NOT_AVAILABLE').
So, how can I create a connection factory?

The transport should not be client if you are trying to connect to a local queue manager in bindings mode; the application server will access the queue manager using IPC.
If you are trying to connect as a client, then TCP will be used; in that case you need to specify the port where the queue manager's listener is listening and the SVRCONN (server connection) channel to use.
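For illustration, a client-mode factory could be configured programmatically like this with the IBM MQ classes for JMS (a minimal sketch; the host name, queue manager name, and channel below are placeholders, so substitute the values actually defined on your queue manager):
import javax.jms.QueueConnection;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ClientModeExample {
    public static void main(String[] args) throws Exception {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT); // client (TCP) transport
        cf.setHostName("mqhost.example.com");            // placeholder: host running the QMgr
        cf.setPort(1414);                                // placeholder: port the QMgr listener is on
        cf.setChannel("SYSTEM.DEF.SVRCONN");             // placeholder: SVRCONN channel name
        cf.setQueueManager("QM1");                       // placeholder: queue manager name
        QueueConnection conn = cf.createQueueConnection();
        conn.close();
    }
}
The same properties map onto the fields of the WebSphere admin console panel; for bindings mode you would instead set the transport type to WMQConstants.WMQ_CM_BINDINGS and omit host, port, and channel.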

Related

A6 GSM/GPRS module TCP/IP connection to Cloud

I want to send data with the A6 GSM/GPRS module to the data.sparkfun.com cloud service. I am using these AT commands:
// Setting up network
AT+CGATT?
AT+CGATT=1
AT+CGDCONT=1,"IP","internet"
AT+CGACT=1,1
AT+CIPSTATUS
AT+CIFSR
// Start the TCP/IP connection to the server
AT+CIPSTART="TCP","54.86.132.254",80 // PROBLEM STARTS HERE
AT+CIPSTATUS
AT+CIPSEND
GET /input/***********?private_key=****************&temp=45.2 HTTP/1.1<cr><lf>Host:data.sparkfun.com<cr><lf>Connection:keep-alive<cr><lf>
^z
When I enter the command AT+CIPSTART="TCP","data.sparkfun.com",80 I get back CONNECT OK (TCP connection success), and just after that the connection is automatically closed with +TCPCLOSED:0 (TCP connection closed by the remote server). There is no time to enter the AT+CIPSEND command because the TCP connection is already lost.
I tried running my own Node.js server, but the problem is the same.
How can I keep the connection alive until I can send the data, and then close it with the AT+CIPCLOSE command?
Most probably the solution is very simple.
The AT command
AT+CGDCONT=1,"IP","internet"
defines the PDP context, and I suspect "internet" was just a generic value; you may have to replace it with the APN of your mobile network provider.
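For example, if your provider's APN were "my.provider.apn" (a hypothetical value; ask your carrier or check its website for the real one), the setup sequence would become:
// Attach to the network and define the PDP context with your carrier's APN
AT+CGATT=1
AT+CGDCONT=1,"IP","my.provider.apn"   // hypothetical APN; substitute your provider's
AT+CGACT=1,1
With a wrong APN the TCP socket can appear to open and then be dropped immediately, which could match the +TCPCLOSED:0 behavior described above.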

MQ error code 2058 when connecting to queue manager JMS

I am trying to connect to a queue manager using the MQ base Java API, and the connection succeeds:
MQQueueManager queueManager = new MQQueueManager(qmgrName);
queueManager.accessQueue(qName, MQOO_OUTPUT);
But when I try to connect to the same queue manager using JMS, it fails with a 2058 reason code. I am not sure what I am missing with JMS:
MQQueueConnectionFactory qcf = new MQQueueConnectionFactory();
qcf.setQueueManager(qmgrName);
qcf.setPort(1414);
qcf.setHostName("localhost");
qcf.createQueueConnection();
You have two or more queue managers on the local host. In your first example you connect in bindings mode, so the queue manager is selected by name and you get the right one. In the second example the connection is made over a client connection, and so is received by whichever queue manager is listening on port 1414, which is not the one you intend, so the connection is rejected.
Please note that if both queue managers have a listener defined on 1414, the connection will succeed or fail depending on which queue manager was started first. Only one can bind to that port, so the first one started gets to use it. This can lead to what appears to be inconsistent behavior.
Please see Connection modes for IBM MQ classes for JMS, which advises: "To change the connection options used by the IBM MQ classes for JMS, modify the Connection Factory property CONNOPT." The acceptable values are listed on that page, but you almost always want it set to Standard Bindings (MQCNO_STANDARD_BINDING).
As documented here, MQRC 2058 means the queue manager name is invalid or unknown. But since, as you mention, the bindings mode connection using MQ base Java succeeds, the queue manager name appears to be valid.
Update:
Sorry, I was misled by your code and thought you were trying to make a client mode connection using JMS. You don't need to set the host and port for a bindings mode connection.
Since the transport type is not set, the default, WMQ_CM_BINDINGS, is used. I suggest you verify the queue manager name.
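As a sketch (assuming the queue manager really is local and qmgrName matches its name exactly, and using the same imports as in the client-mode example earlier), the bindings mode version reduces to:
MQQueueConnectionFactory qcf = new MQQueueConnectionFactory();
qcf.setQueueManager(qmgrName);                      // must match the local QMgr name exactly
qcf.setTransportType(WMQConstants.WMQ_CM_BINDINGS); // explicit, though bindings is the default
qcf.createQueueConnection();
No host, port, or channel is needed because the connection is made over IPC, not TCP.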
To connect with BINDINGS, the queue manager needs to be local. Are you trying to connect to a remote queue manager? If so, you would need to connect as CLIENT. Also check that the queue manager is listening on the port you specified.

Error: Communication error: The Client failed to send packet. The socket has been shut down

Steps followed to install the Load Agent on AWS:
Firewall exceptions from the controller for ports 50500, 54345, 443, and 3389 on the load agent machine.
Installed the LoadRunner setup (the Load Agent process is part of the LoadRunner setup).
Allowed all the programs (Agent Process, Agent Service, etc.) through Windows Firewall.
Tried to connect from the Load Controller. The error received on the controller is:
Communication error: The Client failed to send packet. The socket has been shut down.
As per the Ops team, the agent is trying to establish a connection back to a server 54.xxx.xx.xxx (an unknown AWS IP) on port 10051 and eventually failing, whereas this particular server is unknown to us.
The LoadRunner version on the agent and the controller is the same.
Please tell me how to install or configure the MI Listener or the Agent Process across a firewall.
Turns out it was a firewall exception mess made by the IT department. The above-mentioned steps are enough to allow communication between the LR Controller and the Agent.

Connection timeout to MongoDb on Azure VM

I have some timeout problems when connecting my Azure Web App to a MongoDB instance hosted on an Azure VM.
2015-12-19T15:57:47.330+0100 I NETWORK Socket recv() errno:10060 A connection attempt
failed because the connected party did not properly respond after a period of time,
or established connection failed because connected host has failed to respond.
2015-12-19T15:57:47.343+0100 I NETWORK SocketException: remote: 104.45.x.x:27017 error:
9001 socket exception [RECV_ERROR] server [104.45.x.x:27017]
2015-12-19T15:57:47.350+0100 I NETWORK DBClientCursor::init call() failed
Currently MongoDB is configured on a single server (just for dev) and exposed through a public IP. The website connects to it using an Azure domain name (*.westeurope.cloudapp.azure.com), without a Virtual Network.
Usually everything works well, but after some minutes of inactivity I get that timeout exception. The same happens when using the MongoDB shell from my PC, so I'm fairly sure it is a problem on the MongoDB side.
Am I missing some configuration?
After some searching, here are my considerations:
It is usually good practice to implement some sort of retry logic for every resource that you access on Azure (database, VM, ...). For MongoDB there is only a partial implementation, so you might have to write your own. See also this issue and this.
If possible, all resources on Azure should be inside the same Azure Virtual Network (this way all connections are made using Azure private IPs instead of public IPs). This is also useful for security reasons, because you don't need to open endpoints to the public.
When deploying MongoDB on Azure, try to follow the official MongoDB guidelines.
In this particular case you should set net.ipv4.tcp_keepalive_time to a value lower than Azure's TCP keep-alive timeout, which is 240 seconds by default. This way the connection is closed and the MongoDB driver can intercept this condition and open a new connection; if the connection is dropped by Azure instead, the driver cannot detect it. If you want to change this timeout on Azure (not recommended), you can find it inside the Public IP configuration.
In my development environment I have set net.ipv4.tcp_keepalive_time to 120 and now everything seems to work fine. Note that if you host MongoDB inside a Docker container, you must apply this setting on the Docker host.
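For reference, a minimal sketch of applying this setting on a Linux host (assuming root access; the value 120 follows the setting mentioned above):
# apply immediately
sudo sysctl -w net.ipv4.tcp_keepalive_time=120
# persist across reboots
echo "net.ipv4.tcp_keepalive_time = 120" | sudo tee -a /etc/sysctl.conf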
Here are some other useful links:
http://focusmatic.tumblr.com/post/39569711018/solving-mongodb-connection-losses-on-windows-azure
https://docs.mongodb.org/ecosystem/platforms/windows-azure/
https://michaelmckeownblog.wordpress.com/2013/12/04/resolving-internal-ips-vs-dns-names-between-vms/
https://gist.github.com/davideicardi/f2094c4c3f3e00fbd490
MongoDB connection problems on Azure
MongoDB connection timeouts (Azure)
When using the C# Mongo driver, we resolved this by setting the following:
MongoDefaults.MaxConnectionIdleTime = TimeSpan.FromMinutes(1);
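The idea is to cap the connection idle time below Azure's keep-alive window, so stale connections are recycled by the driver rather than silently dropped. A rough equivalent for the Java driver (a sketch only, assuming the MongoClientOptions builder API of the 3.x driver; the host name is a placeholder):
import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;

MongoClientOptions options = MongoClientOptions.builder()
        .maxConnectionIdleTime(60_000) // one minute, in milliseconds
        .build();
MongoClient client = new MongoClient("mongohost.example.com", options); // placeholder host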

IBM BPM unable to detect WODM server?

Following the tutorial on
http://bpmwiki.blueworkslive.com/display/samples/Decision+Service+demonstrating+BPM+and+WODM+integration#DecisionServicedemonstratingBPMandWODMintegration-PartI%26nbsp%3B%5C%26nbsp%3BImplementingtheJRulesSolution
I'm able to run the rule app using soapUI and everything works fine. But when I try to invoke the rule service from BPM, it seems BPM is unable to detect the WODM server.
When I test this using soapUI, the WSDL URL was something like http://localhost:9081/xxxxxxxx.
Now, when I try to implement this in BPM, I've set the Server location to http://localhost:9081 and the SOAP Port to 8881, as shown below:
However, I fail to log in. I'm wondering what the SOAP port actually is, and why BPM needs one while soapUI doesn't?
Update:
When I set the SOAP Port to 8881, it throws:
java.io.IOException: Mismatched serialization UIDs :
Source(RepId RMI:java.lang.Throwable:F...............) =........ whereas Target (RepId RMI:com.ibm.jsse2.util.h:CAAC186..................) = D9CE.........
When I set the SOAP Port to 8880, it doesn't throw any errors, but no ruleset or ruleapp is available.
When I set the SOAP Port to 8882 or above, it throws:
[SOAPException: faultCode=SOAP-ENV; msg=Error opening socket:
java.net.ConnectException: Connection refused: connect; targetException: Connection
refused: connect; targetException=java.lang.IllegalArgumentException: Error opening socket:
java.net.ConnectException: Connection refused: connect]
Has your WAS been installed using the default ports or custom ports?
I have got this working using BPM 8.5 and ODM 8.5, but the default SOAP port is 8880 (although I have noticed that you are using port 9081, which implies you might have more than one WAS server installed, so all the port numbers have been bumped up by one; in that case this might not be the problem).
The other thing to check is how you have set up the BPM server in the Process App Settings in BPM. The format of the server location should be http://<host>:<port>
BPM needs the SOAP port of the WODM server in order to discover which rule apps and rulesets are available, so that it can present a list for you to choose from.
When you call a ruleset in soapUI like the following, you have already specified which rule app and ruleset to call in the URL:
https://HOST:PORT/DecisionService/ws/ruleapp/ruleset
You can go to the WAS admin console to check the SOAP port of the server running WODM.
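If it helps, the port is usually listed under the server's ports panel as SOAP_CONNECTOR_ADDRESS (a rough navigation path, which may vary slightly by WAS version):
Servers > Server Types > WebSphere application servers > <server_name> > Ports > SOAP_CONNECTOR_ADDRESS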