BizTalk not tracking send/receive ports - sql-server-2008-r2

It seems that any new send or receive ports I create do not display any tracking, even if I tick all the tracking boxes. I have an existing application where the receive port and orchestration tracking work, but the send port tracking doesn't.
On the same machine I also tried creating a new application, created a send and a receive port, and got no tracking at all. I did the same thing on a fresh install of BizTalk on another machine and tracking worked there, so I'm not going crazy.
I've tried ...
ticking every tracking box on the receive ports, orchestrations, and send ports
creating a new host specifically for tracking
recreating the original host with a different name
checking that the SQL service is running
rebooting the system
restarting the host instances
restarting the BizTalk services
Nothing shows up in the event logs.
All SQL jobs are OK except for 'Monitor BizTalk', which complains about 7 orphaned DTA entries.
I can't see anything in particular that stands out in MBV except for the above-mentioned orphaned DTA entries.

In addition to Mike's answer:
You need to ensure that at least one of your hosts is enabled for tracking. In the BizTalk Administration Console, under Platform Settings > Hosts, select the host and enable tracking (the list of hosts also shows which host(s) currently have tracking enabled).
You can also verify that the tracking SQL Agent job is running by looking directly at the databases:
select count(*) from BizTalkMsgBoxDb.dbo.Spool (NOLOCK)
select count(*) from BizTalkDTADb.dbo.Tracking_Parts1 (NOLOCK)
Basically, Spool should be a fairly low number (< 10,000) and should come back to a steady level after a spike in messages, unless your suspended orchestrations are growing.
New messages should be copied across from the MessageBox to the BizTalkDTADb tracking tables every minute, so Tracking_Parts1 should grow by a few records every 60-120 seconds after processing new messages, although they will eventually be purged/archived in line with your tracking archiving/purging strategy.
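If you'd rather watch those counts over time than run the queries by hand, a small polling script does the trick. This is only an illustrative sketch (Python with pyodbc; the server name, ODBC driver, and Windows authentication are assumptions to adjust for your environment):

import time
import pyodbc   # assumption: pip install pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;"              # assumption: your BizTalk SQL instance
            "Trusted_Connection=yes;")

QUERIES = {
    "Spool":           "SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool WITH (NOLOCK)",
    "Tracking_Parts1": "SELECT COUNT(*) FROM BizTalkDTADb.dbo.Tracking_Parts1 WITH (NOLOCK)",
}

conn = pyodbc.connect(CONN_STR, autocommit=True)
cursor = conn.cursor()
for _ in range(10):                          # sample for roughly ten minutes
    counts = {name: cursor.execute(sql).fetchone()[0] for name, sql in QUERIES.items()}
    print(time.strftime("%H:%M:%S"), counts)
    time.sleep(60)                           # the tracking copy job fires about once a minute
conn.close()

If Tracking_Parts1 never moves while messages are flowing, the copy job (or host tracking) is the thing to chase.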
In a Dev environment, the more tracking the merrier, as HAT (the orchestration debugger) will give you more information the more you track. However, in a PROD environment you would typically want to minimize tracking to improve performance and reduce disk overhead. We track just one copy, namely 'before processing' on the receive ports and 'after processing' on the send ports to our partners, and nothing at all on internal ports and orchestrations. This still allows us to provide sufficient evidence of data received and sent.

This post might help some people: http://learningcenter2.eworldtree.net:7090/Lists/Posts/Post.aspx?ID=78
For message tracking to work, among other factors, make sure that the "Message send and receive events" checkbox in the corresponding pipeline is enabled.

Please take a look at these two articles, What is Message Tracking? and Insight into BizTalk Server message tracking. The first has an item of interest for you, which I'll quote below, and the second should solidify what you're trying to do.
The SQL Server Agent service must be running on all MessageBox databases. The TrackedMessages_Copy_ job makes message bodies available to tracking queries and WMI. To efficiently copy the message bodies, they remain in the MessageBox database and are periodically copied to the BizTalk Tracking (BizTalkDTADb) database by the TrackedMessages_Copy_ job. Having the SQL Server Agent service running is also a prerequisite for the archiving and purging process to work correctly.
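If you want to confirm that job from the database side rather than through SQL Server Management Studio, the standard msdb catalog views will tell you whether the TrackedMessages_Copy_ job exists, is enabled, and last ran successfully. A rough sketch (Python with pyodbc; the server name and driver are assumptions):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=msdb;Trusted_Connection=yes;")

sql = """
SELECT j.name,
       j.enabled,
       s.last_run_outcome,      -- 1 = succeeded, 0 = failed
       s.last_run_date,
       s.last_run_time
FROM   msdb.dbo.sysjobs j
JOIN   msdb.dbo.sysjobservers s ON s.job_id = j.job_id
WHERE  j.name LIKE 'TrackedMessages_Copy_%'
"""
for row in conn.cursor().execute(sql):
    print(row.name, "enabled" if row.enabled else "DISABLED",
          "last outcome:", row.last_run_outcome)
conn.close()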

Are you using a default pipeline? Have you checked the tracking check boxes on it? There is a known bug where pipeline tracking gets disabled for the default pipelines.
More info here:
http://blog.ibiz-solutions.se/integration/biztalk-global-pipeline-tracking-disabled-unexpectedly/

Please ensure that the required tracking is enabled in the properties of the send pipeline used by your send port. If message body tracking is disabled on the send pipeline, nothing is tracked on the send port either.

Related

Two email alerts issue for announcement in Liferay

I am using two instances of Liferay, meaning two servers running behind a load balancer, and both servers point to one common DB. The issue is that when an announcement is created, two emails are received; if we shut down one of the servers, only one email is received. I couldn't find out why this is happening. What I want is to run both servers but get only a single email alert for every announcement.
Are the emails triggered per instance, with nothing related to the DB except the subscription?
Maybe there is a property we could set in portal-ext.properties on one of the servers so that mail alerts from that server can be stopped.
Thanks.

MSMQ: How do you send a message from the transactional dead-letter queue to a private queue on a remote machine?

Windows Server 2012
MSMQ 6 Workgroup Mode
We've had issues trying to recover MSMQ messages that were sent to the transactional dead-letter queue. We've tried moving them to the outbound queue; the message seems to send fine (even the Event Log says so), but it never arrives at the destination queue.
After trial and error we've figured out how to get them to another queue on the same server, but not to the destination queue on a remote server. We don't want to lose any more messages. Does anyone have a suggestion on how we can deliver these messages?
Thank you,
David
As I understand your question, it's a one-time problem with some number of messages you already have in MSMQ, and not a general connectivity issue between the machines? If so, you should be able to solve it with an MSMQ management tool. Disclaimer: I'm the author of one such tool, QueueExplorer. I don't know what other tools can do, but with QueueExplorer you can copy/paste or drag/drop messages to another machine opened in a separate tab/window. In order to do that, QueueExplorer has to perform an MSMQ Send operation, so the messages will have to pass through MSMQ between the two machines.
So if there's still an issue that prevented the original delivery, you'll still be stuck. In that case you can save all the messages to a file, transfer it to the other machine through the file system, and load it there into whichever queue they should go to. This is obviously just a manual workaround for a one-time situation. By the way, this can be done in QueueExplorer's trial mode.
If, however, the problem is with connectivity and messages always end up in the dead-letter queue, it's better to check them from Computer Management. That's one area where it's better than our tool: you can turn on the "Class" column and see the reason why messages couldn't be delivered. For instance, if you see "The time-to-be-received has elapsed", you'll know what the problem is.

Store-and-forward failover solution for ServiceStack web services

I am developing a customer account system for a chain of recycling centers in the Northwest US. One of our key features is that our customers can set up accounts that are credited with their bottle deposit refunds, instead of always disbursing cash. Customers can also drop off bags of recyclables that are processed on-site and credited. Each center runs near capacity and can physically process cans and bottles when offline, so we don't have a lot of leeway for IT infrastructure to shut down everything when the Internet goes out.
Basically, I've been asked to develop a customer account system that will allow credits from a retail center to be posted to accounts, even if telecommunications with our central server breaks down for a period of hours. This will allow the center to keep processing and crediting customers when the pipes get clogged. Certain transactions, like withdrawals, do NOT need to occur in this situation, since we can't accurately get the customer's current balance.
We are a 100% Windows shop, and the IT manager and network admin don't want to get near anything *nix. Each retail center has an on-premise dedicated Windows Server, so that seems like a logical place to start.
I'm a huge fan of ServiceStack, and the REST-ful, message-based paradigm seems like it might work. I'd create a "Credit" message and send it to the local server. A message broker there would log the request and attempt to forward that message to the central server, where it is processed. If the central server were down, I would rely on the MQ's reliable messaging protocol to hold on to it until telecommunications are restored. The overall anticipated volume is hundreds to low thousands of messages out of each center, so quite low by modern computing standards.
The Redis MQ Client / Server for ServiceStack looks interesting, but since the Windows Redis server is explicitly labeled "prototype" and "not production quality", there is a 0% chance of being able to leverage it.
So, ultimately the questions are:
Is a reliable messaging system the right type of solution for this problem? Are there other approaches I should consider?
Are there alternatives to Redis that play well with ServiceStack? Is there a "production quality" NoSQL server replacement I can use on Windows?
I've looked briefly at RabbitMQ. Might that be an option? My Googling doesn't show any active integration between it and ServiceStack, so I'm leery of writing something from the ground up.
Ideally the overhead of my solution is low enough that we can perform a synchronous update and return a "current balance" receipt to a customer when everything is working well. Is this realistic?
A production solution for running Redis on Windows is to run redis-server inside a Linux VM on Windows with Vagrant.
There is currently a feature request to add more MQ options to ServiceStack. Rabbit MQ is expected to be the next MQ adapter to be supported in the future.
As a follow-up, MS Open Tech has released a "production-ready" native implementation of Redis 2.8.9. GitHub link.
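Whichever broker ends up carrying the messages, the store-and-forward relay at each center doesn't need much: journal the credit locally first, then let a background worker forward unsent entries to the central server and retry until the link comes back. The sketch below is deliberately technology-agnostic (Python with a local SQLite journal and a hypothetical central endpoint), not ServiceStack code; it only illustrates the pattern:

import json
import sqlite3
import time
import urllib.request

DB_PATH = "credit_journal.db"                        # local file on the center's server
CENTRAL_URL = "https://central.example.com/credits"  # hypothetical central endpoint

def init_db():
    con = sqlite3.connect(DB_PATH)
    con.execute("""CREATE TABLE IF NOT EXISTS credits (
                       id      INTEGER PRIMARY KEY AUTOINCREMENT,
                       payload TEXT NOT NULL,
                       sent    INTEGER NOT NULL DEFAULT 0)""")
    con.commit()
    return con

def record_credit(con, account_id, amount_cents):
    # Always succeeds locally, even when the central server is unreachable.
    payload = json.dumps({"accountId": account_id,
                          "amountCents": amount_cents,
                          "recordedAt": time.time()})
    con.execute("INSERT INTO credits (payload) VALUES (?)", (payload,))
    con.commit()

def forward_pending(con):
    # Push unsent credits in order; leave them queued if the link is down.
    rows = con.execute("SELECT id, payload FROM credits WHERE sent = 0 ORDER BY id").fetchall()
    for row_id, payload in rows:
        req = urllib.request.Request(CENTRAL_URL, data=payload.encode(),
                                     headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=10):
                con.execute("UPDATE credits SET sent = 1 WHERE id = ?", (row_id,))
                con.commit()
        except OSError:
            break       # central server unreachable; retry on the next pass

if __name__ == "__main__":
    con = init_db()
    record_credit(con, account_id="A-1001", amount_cents=250)   # e.g. a $2.50 deposit refund
    while True:
        forward_pending(con)
        time.sleep(30)

Because the local insert always succeeds, the center can keep crediting accounts during an outage, while withdrawals stay blocked exactly as described above since the authoritative balance lives centrally.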

MSMQ messages bound for clustered MSMQ instance get stuck in outgoing queues

We have clustered MSMQ for a set of NServiceBus services, and everything runs great until it doesn't. Outgoing queues on one server start filling up, and pretty soon the whole system is hung.
More details:
We have a clustered MSMQ instance shared between servers N1 and N2. The only other clustered resources are services that operate directly on the clustered queues as local queues, i.e. the NServiceBus distributors.
All of the worker processes live on separate servers, Services3 and Services4.
For those unfamiliar with NServiceBus, work goes into a clustered work queue managed by the distributor. Worker apps on Services3 and Services4 send "I'm Ready for Work" messages to a clustered control queue managed by the same distributor, and the distributor responds by sending a unit of work to the worker process's input queue.
At some point, this process can get completely hung. Here is a picture of the outgoing queues on the clustered MSMQ instance when the system is hung:
If I fail over the cluster to the other node, it's like the whole system gets a kick in the pants. Here is a picture of the same clustered MSMQ instance shortly after a failover:
Can anyone explain this behavior, and what I can do to avoid it, to keep the system running smoothly?
Over a year later, it seems that our issue has been resolved. The key takeaways seem to be:
Make sure you have a solid DNS system so when MSMQ needs to resolve a host, it can.
Only create one clustered instance of MSMQ on a Windows Failover Cluster.
When we set up our Windows Failover Cluster, we made the assumption that it would be bad to "waste" resources on the inactive node, and so, having two quasi-related NServiceBus clusters at the time, we made a clustered MSMQ instance for Project1, and another clustered MSMQ instance for Project2. Most of the time, we figured, we would run them on separate nodes, and during maintenance windows they would co-locate on the same node. After all, this was the setup we have for our primary and dev instances of SQL Server 2008, and that has been working quite well.
At some point I began to grow dubious about this approach, especially since failing over each MSMQ instance once or twice seemed to always get messages moving again.
I asked Udi Dahan (author of NServiceBus) about this clustered hosting strategy, and he gave me a puzzled expression and asked "Why would you want to do something like that?" In reality, the Distributor is very light-weight, so there's really not much reason to distribute them evenly among the available nodes.
After that, we decided to take everything we had learned and recreate a new Failover Cluster with only one MSMQ instance. We have not seen the issue since. Of course, making sure this problem is solved would be proving a negative, and thus impossible. It hasn't been an issue for at least 6 months, but who knows, I suppose it could fail tomorrow! Let's hope not.
Maybe your servers were cloned and thus share the same Queue Manager ID (QMId).
MSMQ uses the QMId as a hash for caching the addresses of remote machines. If more than one machine on your network has the same QMId, you can end up with stuck or missing messages.
Check out the explanation and solution in this blog post: Link
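A quick way to check whether cloning is the culprit is to compare the QMId across your nodes; MSMQ normally keeps it in the registry under HKLM\SOFTWARE\Microsoft\MSMQ\Parameters\MachineCache. A minimal sketch (Python, Windows-only; run it on each machine and compare the output, and verify the exact registry path against the blog post for your OS version):

import winreg

KEY_PATH = r"SOFTWARE\Microsoft\MSMQ\Parameters\MachineCache"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    qmid, _type = winreg.QueryValueEx(key, "QMId")   # REG_BINARY GUID
    print("QMId:", qmid.hex())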
How are your endpoints configured to persist their subscriptions?
What if one (or more) of your services encounters an error and is restarted by the Failover Cluster Manager? In that case, the service would never again receive the "I'm Ready for Work" messages from the other services.
When you fail over to the other node, I guess that all your services send these messages again and, as a result, everything starts working again.
To test this behavior do the following.
Stop and restart all your services.
Stop only one of the services.
Restart the stopped service.
If your system does not hang, repeat this with each single service.
If your system now hangs again, check your configurations. In this scenario at least one, if not all, of your services is losing its subscriptions between restarts. If you have not done so already, persist the subscriptions in a database.

Does the POP3 protocol allow you to specify a subset of emails to download?

I am writing a POP3 mail client. I want to leave the messages on the server, but I don't want to have to redownload all messages every time I reconnect.
If I download all the messages today and reconnect tomorrow, does the protocol support downloading only the messages from the last 24 hours, or only those after a certain sequential ID? Or will I have to redownload all of the messages again?
I am aware of the Unique IDentification Listing (UIDL) feature, but according to http://www.faqs.org/rfcs/rfc1939.html it is an optional command rather than part of the required minimal implementation. Do most mail servers support it?
Yes, my client supports IMAP too, but this question is specifically about POP servers.
Have you considered using IMAP?
I've done it.
You'll have to reread all the headers but you can decide which messages to download.
I don't recall anything in the header that will give you a foolproof timestamp, however. I don't believe your solution is possible without keeping a record of what you have already seen.
(In my case I didn't care--I was simply looking for messages with certain identifying features in the header--those messages were downloaded, processed and killed, everything else was untouched.)
I also wonder if you're misunderstanding the protocol. Just because you download a message doesn't mean it's removed from the server; it's only removed if you give an explicit command to kill the message. (And when a message contains so many attachments that the system times out before you properly log off, and thus your kill command is discarded, you'll be driven up the wall!) (It was an oversight in the design. The original logic was to attach one file over 100k, or as many as possible whose total was under 100k. Another task barfed and generated thousands of files of around 100 bytes each. While it was a perfectly legit, albeit extreme, e-mail, nothing was able to kill it!)
Thus if I were writing a mail client I would simply download anything I didn't already have locally. If it's supposed to remain on the server, fine, just don't give the kill command.
The way I have seen that handled in the past is on a client-by-client basis. For example, if I use Scribe to get e-mail on one machine without deleting, then move to another machine, all e-mails are downloaded again despite the fact that I've seen them before. Internally, I imagine the client has a table that stores whether or not an e-mail has been downloaded previously.
There's nothing in the protocol that I'm aware of that would allow for that.
Sort-of. You can download individual messages, but you can't store state on the remote server.
See the RETR command at http://www.faqs.org/rfcs/rfc1939.html.
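To make the UIDL approach concrete, here is a rough sketch of an incremental download loop: fetch the UIDL listing, skip anything whose unique ID is already recorded locally, and RETR only the rest, never issuing DELE. It assumes the server implements the optional UIDL command (most modern servers do) and uses Python's poplib purely for illustration; the host, credentials, and state file are placeholders.

import poplib

HOST, USER, PASSWORD = "pop.example.com", "user", "secret"   # placeholders
SEEN_FILE = "seen_uids.txt"                                  # local client-side state

def load_seen():
    try:
        with open(SEEN_FILE) as f:
            return set(line.strip() for line in f)
    except FileNotFoundError:
        return set()

seen = load_seen()
pop = poplib.POP3_SSL(HOST)
pop.user(USER)
pop.pass_(PASSWORD)

# UIDL returns one "msg-number unique-id" pair per message currently on the server.
_resp, listing, _octets = pop.uidl()
with open(SEEN_FILE, "a") as state:
    for entry in listing:
        msg_num, uid = entry.decode().split(" ", 1)
        if uid in seen:
            continue                        # already downloaded on a previous run
        _resp, lines, _octets = pop.retr(int(msg_num))
        raw_message = b"\r\n".join(lines)
        # ... hand raw_message to your parser/local store here ...
        state.write(uid + "\n")             # remember it; no DELE, so it stays on the server

pop.quit()                                  # QUIT without DELE leaves everything intact

The client still has to keep its own record of seen UIDs, which matches the answers above: POP3 gives you per-message retrieval, but all the "what have I already downloaded" state lives on the client.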