How to log when a connection to the database is restored in Talend? - talend

I'm using Talend with tJDBCConnection.
Before running any DB actions I added the tJDBCConnection component, so that if the connection was lost it would be re-established.
How can I tell if the connection was re-established so I can add a log message?
Thank you,
Shirly

You can adopt this scheme: tJDBCConnection-->onComponentOK-->tWarn.
As soon as the tJDBCConnection succeeds, tWarn sends the message you've configured.
In parallel, you need a tLogCatcher to catch all messages issued from tWarn (and/or tDie and/or Java exceptions) and log these messages in the desired logfile.
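For reference, the message you configure in tWarn is a plain Java string expression, so you can embed whatever details you want to see in the log. A minimal sketch (context.db_name is just an assumed context variable, not something Talend defines for you):

// Hypothetical tWarn message expression; context.db_name is an assumed context variable
"JDBC connection (re)established to " + context.db_name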
Hope this helps,
TRF

Related

Kafka Connect: Error detection when worker fails

I'm submitting a connector to Kafka Connect. The connector being created is an SFTP connector. When the password is wrong, the create call still returns a success response even though the connector fails; no "wrong password" error is returned at that point. This is a single scenario, and there could be multiple scenarios like this. When I use the <host>/connectors/<connector-name>/status endpoint, I do get the error saying it failed to establish a connection. But this endpoint has a little delay: if I query it immediately after creating the connector, I may not get any response (404).
What is the proper way of handling this using the status API call? Is there a delay that needs to be used before calling this API, or can it be handled while submitting the connector to the API?
When you create the connector, it naturally needs to load the JAR(s) responsible for the tasks, then distribute the tasks to actually start the connector code (which is responsible for connecting to the SFTP server with the connection details).
Therefore the delay is natural, and there's no way to know your connection details are incorrect until the connector code actually tries to use them, which only happens after the tasks have started.
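In practice you simply retry the status endpoint with a short back-off until it answers. A minimal sketch in Java 11+ (the worker URL, connector name, and retry settings are assumptions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectorStatusPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors/my-sftp-connector/status"))
                .GET()
                .build();
        for (int attempt = 0; attempt < 10; attempt++) {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200) {
                // The JSON body reports the connector and task states (RUNNING, or FAILED with a trace)
                System.out.println(response.body());
                break;
            }
            // 404 right after creation: give the worker time to load the plugin and start the tasks
            Thread.sleep(2000);
        }
    }
}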

JDBC connection lost while UNLOADing from Redshift to S3. What should happen?

Redshift newbie here - greetings!
I am trying to unload data to S3 from Redshift, using a java program running locally which issues an UNLOAD statement over a JDBC connection. At some point the JDBC connection appears lost on my end (exception caught).
However, looking at the S3 location, it seems that the unload runs to completion. It is true however that I am unloading a rather small set of data.
So my question is, in principle, how is UNLOAD supposed to behave in case of a lost connection (say, a firewall kills it, or someone does a kill -9 on the process that executes the unload)? Will it run to completion? Will it stop as soon as it senses that the connection is lost? I have been unable to find the answer either by RTFM'ing or by googling...
Thank you!
The UNLOAD will run until it completes, is cancelled, or encounters an error. Loss of the issuing connection is not interpreted as a cancel.
The statement can be cancelled on a separate connection using CANCEL or PG_CANCEL_BACKEND.
http://docs.aws.amazon.com/redshift/latest/dg/r_CANCEL.html
http://docs.aws.amazon.com/redshift/latest/dg/PG_CANCEL_BACKEND.html
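If you want to cancel it programmatically from your Java program, a minimal sketch over a second JDBC connection could look like this (the cluster endpoint, credentials, and the ILIKE filter are assumptions; STV_RECENTS lists currently running statements together with their pid):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class CancelUnload {
    public static void main(String[] args) throws Exception {
        // Assumed cluster endpoint and credentials; requires the Redshift (or PostgreSQL) JDBC driver on the classpath
        String url = "jdbc:redshift://my-cluster.example.com:5439/mydb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            List<Integer> pids = new ArrayList<>();
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "select pid, trim(query) from stv_recents " +
                     "where status = 'Running' and query ilike 'unload%'")) {
                while (rs.next()) {
                    pids.add(rs.getInt(1));
                    System.out.println("Found running UNLOAD, pid " + rs.getInt(1) + ": " + rs.getString(2));
                }
            }
            try (Statement stmt = conn.createStatement()) {
                for (int pid : pids) {
                    stmt.execute("cancel " + pid); // alternatively: select pg_cancel_backend(<pid>)
                }
            }
        }
    }
}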

Multiple "could not receive data from client: Connection reset by peer" messages - PostgreSQL and Resque

I have a server that runs PostgreSQL. In the logs I am seeing this message for my Resque-based 'worker' box, multiple times a minute. Some minutes there isn't a message; in others it can appear 10 times.
2016-01-12 13:40:36 EST:1.1.8.2(33899):[16141]: LOG: could not receive data from client: Connection reset by peer
Now when I go into the 1.1.8.2 box to look at netstat -ntp I don't see port 33899, and most of the ports are at least in the 40xxx range by now. That may be conjecture, but I'm at a loss to find out why a Redis/Resque/Puma Rails stack would be printing out these messages, let alone what that means even if I get to the bottom of it.
Will I gain memory back if they are closed 'normally'?
Is this a thing to be wary of?
How does one debug OLD ports that are open when the db box and the worker box both don't display the ports any more?
This message is probably due to the resque worker task not closing the database connection before it exits. It's not a huge problem, but presumably Postgres is doing a little extra work to clean it up, and it makes a mess of your log file...
One solution is to add a hook to your resque worker's task file (the same file that contains the self.perform definition):
def self.after_perform(*args)
  # Close the ActiveRecord connection so the worker's exit doesn't leave a dangling socket for Postgres to log
  ActiveRecord::Base.connection.disconnect!
end

MSMQ messages disappear when they get to remote server

I have to create a MSMQ messaging mechanism between two servers in the same domain, SenderServer (MS Server 2012) and ReceiverServer (MS Server 2008 R2).
I created a private, transactional queue on ReceiverServer (.\private$\receiver) and gave Receive Message (and Peek Message) rights to SYSTEM and Administrators.
I then created a client application that creates and forwards messages to the queue by using the following code:
// The queue path uses a verbatim string so the backslashes are not treated as escape sequences
MessageQueue queue = new MessageQueue(@"FormatName:Direct=OS:ReceiverServer\private$\receiver");
Message message = new Message();
message.Body = "myMessage";
using (MessageQueueTransaction tx = new MessageQueueTransaction())
{
    tx.Begin();
    queue.Send(message, "myLabel", tx);
    tx.Commit();
}
Before deploying the application, I tested it from my machine (Windows 7), which correctly creates an outgoing queue Direct=OS:ReceiverServer\private$\receiver with State: Connected and Connection History: Connection is ready to transfer messages.
The messages are correctly sent to the ReceiverServer and placed in the \private$\receiver queue. The End2End log of the ReceiverServer logs two events for every message:
Message came over network (EventId: 4)
Message with ID CN=msmq, CN=[mymachinename], CN=Computers, DC=[domain], DC=[other] was put into queue ReceiverServer\private$\receiver (EventId: 1)
Then I ran the client application from the SenderServer itself, using the same code. The server correctly creates an outgoing queue Direct=OS:ReceiverServer\private$\receiver with State: Connected and Connection History: Connection is ready to transfer messages, and I can see the messages queuing up and being sent, but they never arrive in the remote ReceiverServer queue .\private$\receiver. If I check the End2End event log of the ReceiverServer I only see the first event (Message came over network, EventId: 4), but the message is not placed in the queue.
I turned off the firewalls on both machines, changed the authorization settings for the queue, and tried the following endpoints for the queue:
FormatName:Direct=OS:[IPv6 address]\private$\receiver
FormatName:Direct=TCP:ReceiverServer\private$\receiver
FormatName:Direct=TCP:[IPv6 address]\private$\receiver
With no luck. The troubleshooting process and the documentation from Microsoft are really general and simplistic, so I decided to ask here because I've hit a dead end.
The sender domain account needs to have the following permissions on the remote queue: Send, Get Permissions, Get Properties
Are these machines on the same domain? If not, you may need to grant the above permissions to the local user called ANONYMOUS LOGON.
I ran into a similar problem and spent a few hours resolving it, so I wanted to post an answer to save others who may fall into the same trap I did.
When the queue was created on the remote server, it was mistakenly created as a transactional queue. However, the code that was posting the messages was calling Send without the transaction parameter. I could see the message at the sending workstation, but once it hit the destination server it would disappear without any logging, journaling, or events to help determine why.
Once I identified the problem, I recreated the queue as a non-transactional queue, and the issue was fixed.

MSMQ - Create and Send Message

I have a public queue created on a remote machine. I am able to access the queue, create a message and send it from my workstation. However, when I access the remote machine that hosts the message queue, I do not see any messages. Any ideas on what I am missing? Is there anything that needs to be configured to receive messages?
You should check the security settings on the remote queue - the default setting for any account is "allow sending only".
I got it to work by removing MessageQueueTransactionType.Single from the MessageQueue.Send(message, MessageQueueTransactionType.Single) call.
It seems like there was a mismatch between the transaction types. I am still not familiar with how the transaction types work.