How can all clients be easily cleared from Sensu?

I'm trying to clear all of the clients and alerts from Sensu, but they keep coming back.
With large numbers of clients, Uchiwa is unable to efficiently or reliably delete them all.
I have also tried deleting all of the keys in Redis while sensu-api and sensu-server services are stopped, but once they are restarted, all of the clients come back, including clients that don't exist and are failing their keepalive checks.
Do I have to empty all of the RabbitMQ queues as well?

Use Uchiwa, the API, or the CLI to remove the client(s). If you want to delete all clients, go to Uchiwa -> Clients, select all clients, and then choose Delete from the Actions dropdown.
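If Uchiwa can't cope with the number of clients, the same deletion can be scripted against the Sensu API instead. A minimal sketch, assuming a Sensu Core API listening on localhost:4567 with no authentication (adjust the address and add credentials for your setup):

import requests

API = "http://localhost:4567"  # assumed Sensu API address; adjust as needed

# List every registered client, then delete each one by name.
for client in requests.get(API + "/clients").json():
    name = client["name"]
    resp = requests.delete(API + "/clients/" + name)
    print(name, resp.status_code)  # 202 means the deletion was accepted

Keep in mind that any machine still running sensu-client will simply re-register itself through its keepalives, which is usually why deleted clients reappear.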

Related

Is it possible to direct certain client traffic to the same pod? Not sticky sessions but clients that need to connect to the same pod

I have a situation where a group of clients need to connect to the same pod. The pods are stateful and contain session-specific info. Clients need to be given some kind of id that will direct them to the correct pod. Not exactly sticky sessions, though. The session is created on a pod by one client, and other clients are then given the location id to join that pod's session. If I run the app as a single instance, everything works great, because all sessions are on the same pod. But as soon as the app is scaled out, sessions can be created anywhere, and participant clients are not able to find the session and thus don't receive the data.
I'm trying to work around a problem where this app is not only stateful, but also not distributable. Ideally, session data would be in a message queue, and any pod should be able to receive the session info. But, these are the cards I've been dealt.
TIA for any help or advice you can give.
ADDENDUM: The devs will be refactoring the app to utilize some kind of MQ. So, I'm not looking for suggestions on how to share session info. I'm looking for a fix that will route traffic from specific clients to a specific pod. I personally don't believe this will be possible; I'm just trying on the off chance it is. Right now the only option is to run the app as one giant pod that handles all client sessions.

Should I use separate connections for Pub and Sub with Redis?

I have noticed that Socket.io uses two separate connections to the Redis server for Pub and Sub. Is this something that could improve performance? Or is it just a move towards more organized event handlers and code? What are the benefits and drawbacks of two separate connections versus a single connection for publishing and subscribing?
P.S. The system pushes about as many messages as it receives. It pushes updates to the other servers, which are on the same level in the hierarchy, so there is no master pushing all of the updates or slave consuming the messages. One server would have about 4-8 subscriptions and would send messages back to those servers.
P.P.S. Is this more of a job for a purpose-built job queue? The reason I am looking at Redis is that I am already keeping some shared objects in it, which are used by all servers. Is a message queue worth adding yet another network connection?
You are required to use two connections for pub and sub. A subscriber connection cannot issue any commands other than subscribe, psubscribe, unsubscribe, and punsubscribe (although antirez has hinted at a subscriber-safe ping in the future). If you try to do anything else, redis tells you:
-ERR only (P)SUBSCRIBE / (P)UNSUBSCRIBE / QUIT allowed in this context
(note that you can't test this with redis-cli, since that understands the protocol well enough to prevent you from issuing commands once you have subscribed - but any other basic socket tool should work fine)
This is because subscriber connections work very differently - rather than working on a request/response basis, incoming messages can now come in at any time, unsolicited.
publish is a regular request/response command, so must be sent on a regular connection, not a subscriber connection.
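A minimal sketch of that split using the redis-py client (the channel name and connection details are illustrative assumptions):

import redis

# Two separate clients: one connection is dedicated to the subscription,
# the other stays free for PUBLISH and any other regular commands.
sub_client = redis.Redis(host="localhost", port=6379)
pub_client = redis.Redis(host="localhost", port=6379)

pubsub = sub_client.pubsub()
pubsub.subscribe("updates")            # this socket now only handles (un)subscribe traffic

pub_client.publish("updates", "hello") # publish goes out on the regular connection

for message in pubsub.listen():        # messages arrive unsolicited on the subscriber side
    if message["type"] == "message":
        print(message["data"])
        break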

Postgres trigger to open http connection

I need to ping some HTTP service, with a GET or POST, every time an insert occurs in a Postgres table, using a trigger.
Is there any simple way to achieve that with a standard PostgreSQL installation?
In case there isn't, is there any way to do it with additional libraries?
You can do this with PL/perlu or PL/pythonu. However, I strongly recommend that you don't do it this way. DNS problems, problems with the remote server, etc. will cause your PostgreSQL backends to stall, severely disrupting database performance and possibly causing you to exhaust max_connections.
Instead, have a trigger send a NOTIFY when the change occurs, and have the trigger insert details into a log table. Have a client application LISTEN for the notification, read the records inserted by the trigger into the log table, and make any appropriate HTTP requests.
The client should fetch the requests from the log table one by one using SELECT ... FOR UPDATE and DELETE each row once its request has been made successfully.
See this answer about sending email from within the DB for some more detail.
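A rough sketch of that listener in Python with psycopg2, assuming a NOTIFY channel called row_inserted and a log table http_queue(id, payload) populated by the trigger (those names, and the target URL, are all illustrative):

import select
import psycopg2
import requests

# One connection just LISTENs; a second one processes the queued rows.
listen_conn = psycopg2.connect("dbname=mydb")
listen_conn.autocommit = True
listen_conn.cursor().execute("LISTEN row_inserted;")

work_conn = psycopg2.connect("dbname=mydb")

while True:
    # Block until the trigger sends a NOTIFY (or 60 seconds pass as a safety net).
    if select.select([listen_conn], [], [], 60) != ([], [], []):
        listen_conn.poll()
        listen_conn.notifies.clear()

    # Drain the log table: lock a row, make the HTTP call, delete it, commit.
    while True:
        with work_conn.cursor() as cur:
            cur.execute("SELECT id, payload FROM http_queue "
                        "ORDER BY id LIMIT 1 FOR UPDATE")
            row = cur.fetchone()
            if row is None:
                work_conn.commit()
                break
            requests.post("http://example.com/ping", json={"payload": row[1]})
            cur.execute("DELETE FROM http_queue WHERE id = %s", (row[0],))
            work_conn.commit()

This keeps the slow HTTP work outside the trigger's transaction, so a flaky endpoint can no longer stall the inserting backends.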

BizTalk not tracking send/receive ports

It seems that any new send or receive ports that I create do not display any tracking even if I tick all the tracking boxes. I have an existing application and the receive port and orchestration tracking work, but the send port tracking doesn't.
On the same machine I also tried creating a new application. I created a send and a receive port and got no tracking at all. I did the same thing on a fresh install of BizTalk on another machine and got tracking, so I'm not crazy.
I've tried ...
ticking every box in tracking for the receive, orch, send ports.
creating a new host specifically for tracking
recreating the original host with a different name
sql service is running
reboot system
reboot host instances
restart biztalk services
nothing shows in event logs
all SQL jobs are OK except for 'Monitor BizTalk', which complains about 7 orphaned DTA.
can't see anything in particular that stands out from MBV except for the above-mentioned orphaned DTA.
In addition to Mike's answer:
You need to ensure that at least one of your hosts is enabled for tracking. In BizTalk Administrator, under Platform Settings, Hosts, select the host and enable tracking (the list of hosts also shows which hosts currently have tracking enabled).
You can also verify that the tracking SQL Agent job is running by looking directly at the database:
select count(*) from BizTalkMsgBoxDb.dbo.Spool (NOLOCK)
select count(*) from BizTalkDTADb.dbo.Tracking_Parts1 (NOLOCK)
Basically, spool should be a fairly low number (< 10 000), and should come back to a static level after a spike in messages, unless your suspended orchs are growing.
And new messages should be copied across from the MessageBox to DtaDb.TrackingParts every minute, so Tracking_Parts1 should grow a few records every 60-120 seconds after processing new messages, although they will be eventually purged / archived in line with your tracking archiving / purge strategy.
In a Dev environment, the more tracking the merrier, as HAT (the orchestration debugger) will give you more information the more you track. However, in a PROD environment, you would typically want to minimize tracking to improve performance and reduce disk overhead. We just track one copy, viz 'before processing' on the receive and 'after processing' on the send ports to our partners, and nothing at all on internal Ports and Orchs. This allows us to provide sufficient evidence of data received and sent.
This post might help some people: http://learningcenter2.eworldtree.net:7090/Lists/Posts/Post.aspx?ID=78
For message tracking to work, among other factors, make sure that the "Message send and receive events" checkbox in the corresponding pipeline is enabled.
Please take a look at these two articles, What is Message Tracking? and Insight into BizTalk Server message tracking. The first article has an item of interest for you and I'll quote it below and the second should just solidify what you're trying to do.
The SQL Server Agent service must be running on all MessageBox databases. The TrackedMessages_Copy_ job makes message bodies available to tracking queries and WMI. To efficiently copy the message bodies, they remain in the MessageBox database and are periodically copied to the BizTalk Tracking (BizTalkDTADb) database by the TrackedMessages_Copy_ job. Having the SQL Server Agent service running is also a prerequisite for the archiving and purging process to work correctly.
Are you using a default pipeline? Have you checked the tracking check boxes on them? There is some bug where the pipeline tracking is disabled for default pipelines.
More info here:
http://blog.ibiz-solutions.se/integration/biztalk-global-pipeline-tracking-disabled-unexpectedly/
Please ensure that the required tracking is enabled in the properties of the send pipeline used by your send port. If message body tracking is disabled on the send pipeline, nothing is tracked on the send port either.

Does the POP3 protocol allow you to specify a subset of emails to download?

I am writing a POP3 mail client. I want to leave the messages on the server, but I don't want to have to redownload all messages every time I reconnect.
If I download all the messages today and reconnect tomorrow, does the protocol support downloading only the messages from the last 24 hours, or only those after a certain sequential ID? Or will I have to redownload all of the messages again?
I am aware of the Unique IDentification Listing (UIDL) feature, but according to http://www.faqs.org/rfcs/rfc1939.html it is only an optional part of the specification. Do most mail servers support this feature?
Yes, my client supports IMAP too, but this question is specifically about POP servers.
Have you considered using IMAP?
I've done it.
You'll have to reread all the headers but you can decide which messages to download.
I don't recall anything in the header that will give you a foolproof timestamp, however. I don't believe your solution is possible without keeping a record of what you have already seen.
(In my case I didn't care--I was simply looking for messages with certain identifying features in the header--those messages were downloaded, processed and killed, everything else was untouched.)
I also wonder if you're misunderstanding the protocol. Just because you download a message doesn't mean it's removed from the server. It's only removed from the server if you give an explicit command to kill the message. (And when a message contains so many attachments that the session times out before you properly log off, and thus your kill command is discarded, you'll be driven up the wall!) (It was an oversight in the design. The original logic was to attach one file over 100k, or as many as possible whose total was under 100k. Another task barfed and generated thousands of files of around 100 bytes each. While it was a perfectly legit, albeit extreme, e-mail, nothing was able to kill it!)
Thus if I were writing a mail client I would simply download anything I didn't already have locally. If it's supposed to remain on the server, fine, just don't give the kill command.
The way I have seen that handled in the past is on a client-by-client basis. For example, if I use Scribe to get e-mail on one machine without deleting, then move to another machine, all e-mails are downloaded again despite the fact that I've seen them before. Internally, I imagine the client has a table that stores whether or not an e-mail has been downloaded previously.
There's nothing in the protocol that I'm aware of that would allow for that.
Sort-of. You can download individual messages, but you can't store state on the remote server.
See the RETR command at http://www.faqs.org/rfcs/rfc1939.html.
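As a sketch of the "keep local state" approach described above, poplib plus the optional UIDL command lets a client fetch only messages it hasn't seen before, while leaving everything on the server (server name, credentials, and the seen_uids store are illustrative assumptions; seen_uids would normally be persisted between runs):

import poplib

seen_uids = set()   # in a real client this would be loaded from local storage

mbox = poplib.POP3_SSL("pop.example.com")
mbox.user("user@example.com")
mbox.pass_("secret")

# UIDL returns one "msgnum uid" entry per message still on the server.
_, uid_lines, _ = mbox.uidl()
for line in uid_lines:
    msgnum, uid = line.decode().split()
    if uid in seen_uids:
        continue                                 # already downloaded on a previous run
    _, body_lines, _ = mbox.retr(int(msgnum))    # RETR does not remove the message
    seen_uids.add(uid)
    print(uid, len(body_lines), "lines")

mbox.quit()   # no DELE was issued, so all messages stay on the server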