MessageQueue.Exists(QueueName) returns false but it exists - msmq

The problem I'm having is with this code:
if (!MessageQueue.Exists(QueueName))
{
    MessageQueue.Create(QueueName, true);
}
It checks whether a queue exists and, if it doesn't, creates it. This code had been working unchanged for a few months. Today I started receiving this error:
[MessageQueueException (0x80004005): A queue with the same path name
already exists.] System.Messaging.MessageQueue.Create(String path,
Boolean transactional) +239478
The queues are local, and if I delete the specific queue the code works once. After the queue is created, it starts failing again with the same error message.

It looks like the issue may be caused by the Network Load Balancing (NLB) configuration. I was unaware of a change that recently put the machine in an NLB environment. The configuration we are using is an unsupported one.
More information is in "How Message Queuing can function over Network Load Balancing (NLB)".
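As a stop-gap until the NLB configuration is fixed, one option is to stop trusting MessageQueue.Exists and instead attempt the create, catching the "queue exists" error. A minimal sketch (the helper name is made up; the transactional flag mirrors the question, and this works around the symptom, not the root cause):

using System.Messaging;

// Attempt the create unconditionally; if MSMQ reports that the queue
// already exists, fall back to opening it. This avoids relying on
// MessageQueue.Exists, which is returning stale results here.
static MessageQueue GetOrCreateQueue(string queuePath)
{
    try
    {
        return MessageQueue.Create(queuePath, true); // transactional, as in the question
    }
    catch (MessageQueueException ex)
    {
        if (ex.MessageQueueErrorCode != MessageQueueErrorCode.QueueExists)
            throw;
        return new MessageQueue(queuePath);
    }
}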

Related

JMS Outbound Gateway - Receiving replies from two job instances

We are using the JMSOutboundGateway to send a message and receive the reply on the reply channel within the gateway. When we run multiple iterations of the same job using the same JMSOutboundGateway, it fails with the error "Message contained wrong job instance id [85] should have been [86]" (from org.springframework.batch.integration.chunk.ChunkMessageChannelItemWriter.getNextResult()).
This happens because the same JMSOutboundGateway instance is used when the second job runs while the first job is still in progress.
Is there a way I can run parallel executions of the same job type?
This is a known issue, see https://github.com/spring-projects/spring-batch/issues/1372 and https://github.com/spring-projects/spring-batch/issues/1096.
The workaround is to use a separate instance of the writer for each job to prevent sharing the same reply channel.
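A sketch of that workaround, assuming a remote-chunking setup where the writer is a Spring bean: scoping the ChunkMessageChannelItemWriter to the step gives each job execution its own instance and reply handling. The item type and the template/channel beans are assumptions, not from the original post.

// Assumed wiring for illustration; adapt the MessagingTemplate and reply
// channel beans to your own configuration.
@Bean
@StepScope
public ChunkMessageChannelItemWriter<Object> chunkWriter(
        MessagingTemplate messagingTemplate, PollableChannel replies) {
    // A new writer is created per step execution, so concurrent jobs no
    // longer share a reply channel.
    ChunkMessageChannelItemWriter<Object> writer = new ChunkMessageChannelItemWriter<>();
    writer.setMessagingOperations(messagingTemplate);
    writer.setReplyChannel(replies);
    return writer;
}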

No error when stopping a non-existing service with Chef

I'm new to Chef and trying to understand why this code does not return any error, while if I do the same with 'start' I get an error because the service does not exist.
service 'non-existing-service' do
  action :stop
end
# chef-apply test.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* service[non-existing-service] action stop (up to date)
I don't know which platform you are running on, but if you are running on Windows it should at least log
Chef::Log.debug "#{@new_resource} does not exist - nothing to do"
given that you have debug as the log level.
You could argue this is the wrong behaviour, but if the service does not exist it certainly isn't running.
Source code
https://github.com/chef/chef/blob/master/lib/chef/provider/service/windows.rb#L147
If you are getting one of the variants of the init.d provider, they default to getting the current status of a service by grepping the process table. Because Chef does its own idempotence checks internally before calling the provider's stop method, it would see there is no such process in the table and assume it was already stopped.
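A simplified illustration of that check (not the actual Chef source): if the "is it running?" probe greps the process table and finds nothing, the stop action is reported as up to date and the init script is never invoked.

# Hypothetical simplification of an init.d-style provider's logic.
def service_running?(name)
  # The provider greps the process table for the service; a service that
  # does not exist matches nothing.
  `ps -ax`.lines.any? { |line| line.include?(name) }
end

if service_running?('non-existing-service')
  system('/etc/init.d/non-existing-service stop')
else
  # Chef's idempotence check lands here, so :stop becomes a no-op.
  puts 'service[non-existing-service] action stop (up to date)'
end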

ServiceProxy throws ProtocolException, communication is not restored on retrying

We are seeing ProtocolExceptions while communicating with a service running in the cluster. The message and InnerException message:
System.ServiceModel.ProtocolException: You have tried to create a channel to a service that does not support .Net Framing.
---> System.IO.InvalidDataException: Expected record type 'PreambleAck', found '145'.
This service is running on a local dev cluster, and the exception is thrown after communicating successfully with the service.
The code that we use for communicating is:
var eventHandlerServiceClient = ServiceProxy.Create<IEventHandlerService>(eventHandlerTypeName, new Uri(ServiceFabricSettings.EventHandlerServiceName));
return await eventHandlerServiceClient.GetQueueLength();
We have retry logic (with increasing delays between attempts), but this call never succeeds, so it looks like the service is in a faulted state and cannot recover from it.
Update
We are also seeing the following errors in the logs:
connection 0x1B6F9EB0 localhost:64002-[::1]:50376 target 0x1B64F3C0: invalid frame: length=0x1000100,type=514,header=28278,check=0x742E7465
Update 14-12-2015
If this ProtocolException is thrown, retries don't help. Even after hours of waiting, it still fails.
We log the endpoint address with
var spr = ServicePartitionResolver.GetDefault();
var x = await spr.ResolveAsync(
    new Uri(ServiceFabricSettings.EventHandlerServiceName),
    eventHandlerTypeName,
    new CancellationToken());
var endpointAddress = x.GetEndpoint().Address;
The resolved endpoint looks like
{"Endpoints":{"":"net.tcp:\/\/localhost:57999\/d6782e21-87c0-40d1-a505-ec6f64d586db\/a00e6931-aee6-4c6d-868a-f8003864a216-130945476153695343"}}
This endpoint is the same as reported by the Service Fabric Explorer.
From what we can see in our logs, the service itself is working (it is reachable via another API method), but this specific call never succeeds.
This typically indicates a mismatched communication stack on the service and client sides. Once the service is up and running, check the endpoint of the service replica via Service Fabric Explorer. If that looks fine, check that the client is connecting to the right service: resolve the partition using the ServicePartitionResolver (https://msdn.microsoft.com/en-us/library/azure/microsoft.servicefabric.services.servicepartitionresolver.aspx), passing the same arguments that you pass to ServiceProxy.
I'm seeing the same sort of errors. Looking at my code, I'm caching an ActorProxy. I'm going to remove the caching in case the cache is referencing an old instance of the service.
That appears to have fixed my issue. I'm guessing that the proxy caches the reference once it has been used, and if the service moves or restarts, that reference becomes stale.
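A sketch of that fix, reusing the call from the question: create the proxy per call instead of caching it, so a moved or restarted replica cannot leave the client holding a stale channel. The wrapper method name and the return type of GetQueueLength are assumptions.

// Resolve a fresh proxy on every call; creating a proxy is cheap, and
// re-resolving protects against stale references after failover.
public async Task<long> GetQueueLengthAsync(string eventHandlerTypeName)
{
    var proxy = ServiceProxy.Create<IEventHandlerService>(
        eventHandlerTypeName,
        new Uri(ServiceFabricSettings.EventHandlerServiceName));

    return await proxy.GetQueueLength();
}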

Load balancing MySQL ndbcluster

I have successfully set up ndbcluster version 7.1.26.
This contains 2 data nodes[NDBD], 2 mysql [MYSQLD] nodes and one management [MGMD] node.
Replication works successfully.
My web application is deployed in JBoss-5.0.1 and uses JNDI for connection resources, which are specified in an application-specific ds.xml file in load-balanced URL form, e.g. jdbc:mysql:loadbalance://host1:port1,host2:port2/databaseName.
host1 refers to the first MYSQLD node and port1 to the port it is running on.
host2 refers to the second MYSQLD node and port2 to the port it is running on.
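For reference, a minimal sketch of such a data source definition in a JBoss 5 *-ds.xml (the JNDI name, hosts, ports, database, and credentials are placeholders):

<datasources>
  <local-tx-datasource>
    <jndi-name>MyClusterDS</jndi-name>
    <!-- Both MYSQLD nodes listed in one load-balanced URL -->
    <connection-url>jdbc:mysql:loadbalance://host1:3306,host2:3306/databaseName</connection-url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <user-name>appuser</user-name>
    <password>secret</password>
  </local-tx-datasource>
</datasources>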
When both [MYSQLD] nodes are up and running, everything works fine: the cluster responds well, replicates data, and data retrieval operations work properly.
But issues arise when either of the [MYSQLD] nodes goes down. Data still gets inserted/updated/replicated, but the application is unable to retrieve data from the cluster, and the web page stays busy retrieving data. As soon as the node that was down comes back up, the application responds properly and shows the data retrieved from the cluster.
At JBoss 5.0.1 startup, a NullPointerException appeared in LoadBalancingConnectionProxy.invoke(LoadBalancingConnectionProxy.java:439). Tell me if that exception plays any role in the issues explained above.
If anyone has faced issues like these and has a solution, please let me know.
Thanks and regards.
I have resolved the issue: it was a bug in the Connector/J version.
The project I am working on had both the buggy jar, mysql-connector-java-5.0.8.jar, and the jar in which the issue is fixed, mysql-connector-java-5.1.13-bin.jar, on the classpath.
When I removed mysql-connector-java-5.0.8.jar, my issues were resolved.
The problem was that the Connector/J driver was being loaded from the buggy jar.
The bug report describing this issue is:
http://bugs.mysql.com/bug.php?id=31053
Thanks for your consideration.
Are you using different user IDs and passwords for each of the hosts (host1, host2) specified in the connection URL (either directly or via a separate tag)?

Can I open a clustered MQ queue for writing in Perl?

If I have a WebSphere MQ queue defined on another queue manager in the cluster, is there a way I can open it for writing using the Perl interface? The code below comes back with MQRC 2085 (MQRC_UNKNOWN_OBJECT_NAME).
$messageQ = MQSeries::Queue->new(
    QueueManager => $qMgr,
    Queue        => $queue,
    Options      => $openOpt,
) or die ">>>ERROR2: Unable to open the queue: $queue\n";
Yes! The Perl modules are a thin veneer over the WMQ API and expose all the basic options and most of the really esoteric stuff as well.
When you open a queue, WebSphere MQ performs name resolution on the values you provide for Queue and QMgr names. If you provide both a Queue and a QMgr name then the object reference is fully qualified and WMQ will attempt to open it as named. So if the name you provide is the local QMgr and the clustered queue does not have a locally defined instance, the open will fail with a 2085 Unknown Object Name.
The trick to opening a clustered queue is to provide a null value for the QMgr name. This causes name resolution to check the local QMgr for a queue of the same name; finding none, it checks the cluster repository and resolves the open to the clustered queue. Note that the queue must be advertised to the cluster for this to work. Specifically, the CLUSTER or CLUSNL attribute of the target queue must be non-blank and refer to a cluster that the source QMgr participates in. Similarly, the destination QMgr must also participate in the same cluster as the source QMgr.
Note also that if you specify a QMgr name on the open that is not the local QMgr, then WMQ will attempt to resolve the QMgr name only. If it can resolve a route to that QMgr then it will send the message there. This means that in a cluster you can send a message to any queue on any QMgr so long as you know the fully-qualified name.
Finally, you can define a local alias over a clustered queue. For example, if you are on QMGRA and DEF QA(TARGET.QUEUE) TARGQ(TARGET.QUEUE), and then on QMGRB and QMGRC in the same cluster you DEF QL(TARGET.QUEUE) CLUSTER(MYCLUS), then it is possible to open QMGR=QMGRA QUEUE=TARGET.QUEUE and still have it work as expected. Note that the alias is NOT advertised to the cluster but the target queue is. The only issue with this approach is that the first time it is opened the API call may fail if the cluster query takes too long. When I do this in Production, I always use amqsput on the alias ahead of time to make the QMgr query the repository before the actual application opens the queue.

Why would you do this? If security is a concern you probably don't want to authorize all apps directly to the cluster XMitQ because, as noted above, they could then put a message onto any queue on any QMgr in the cluster, including SYSTEM.ADMIN.COMMAND.QUEUE. The alias gives you a place to hang authorizations and restrict the user to specific destinations in the cluster.
So short answer, make sure you provide a null QMgr name on the Open call or set up a local alias over the clustered queue. For more about the security aspects of this, see the WMQ Security presentation at http://t-rob.net/links
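A minimal sketch of the first approach in Perl, assuming the target queue is advertised to the cluster (queue and QMgr names are placeholders): connect to the local QMgr and open by queue name only, with no QMgr qualification, so name resolution can fall through to the cluster repository.

use MQSeries;                 # exports MQ constants such as MQOO_OUTPUT
use MQSeries::QueueManager;
use MQSeries::Queue;

# Connect to the LOCAL queue manager.
my $qmgr = MQSeries::QueueManager->new(QueueManager => 'LOCAL.QMGR')
    or die ">>>ERROR: Unable to connect to queue manager\n";

# Open by queue name only -- no remote QMgr qualification -- so the open
# resolves through the cluster repository to the clustered queue.
my $messageQ = MQSeries::Queue->new(
    QueueManager => $qmgr,
    Queue        => 'CLUSTERED.QUEUE',
    Options      => MQOO_OUTPUT,
) or die ">>>ERROR: Unable to open the queue\n";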