Mule Echo component changes the message

The Mule Echo component is meant to output the current message to the console. According to the documentation, the Echo component shouldn't change the message in any way. However, I have noticed that it does affect the message, seemingly like an Object to String transformer.
Is this expected or is it a bug?

It is expected.
Use the <logger> message processor instead; it replaces the now-obsolete Echo component and doesn't log the message payload by default, so the payload is left intact.
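For illustration, a minimal Mule 3 XML sketch (the flow name and log text are hypothetical):

<flow name="exampleFlow">
    <!-- Logs a fixed string; the payload is neither read nor modified. -->
    <logger level="INFO" message="Reached checkpoint A"/>
    <!-- Referencing the payload coerces it to a String for the log line; with
         streaming payloads this can consume the stream, much like the Object
         to String behavior described in the question. -->
    <logger level="INFO" message="#[message.payload]"/>
</flow>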

Related

SOAP Web Service Client error while consuming the service

I am getting this error while using a SOAP web service client with Axis 1. I created a stub from the WSDL file and tried to consume the service, then got this error. The WSDL was given to me by someone else.
error in msg parsing: xml was empty, did't parse!
Below is the error message and stack trace. Can anyone help?
In order to fix the javax.activation.DataHandler issue, you must add the JavaBeans Activation Framework activation.jar to your classpath.
In order to fix the javax.mail.internet.MimeMultipart issue, you must add the JavaMail API mail.jar to your classpath.
The warning messages printed in your console show that the above jars are not on the classpath.
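For example, if you launch the client from the command line (the main class and jar locations here are hypothetical; on Windows use ; instead of : as the separator):

java -cp .:activation.jar:mail.jar:axis.jar com.example.SoapClient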
There are several common reasons to receive the message:
error in msg parsing: xml was empty, did't parse!
The most obvious is that no message was sent. If you have some way of inspecting your transport channel, that would be worth looking at.
Also, the xml message could have been sent in an unexpected character set, e.g. a header declares it to be UTF-8 but it is really Windows-1252. Sometimes you can get away with that if you only use 7-bit ASCII characters, but anything in the 8-bit range will cause it to fail.
Also, the xml message could have had a byte order mark unexpectedly inserted at the beginning of the message.
Also, the xml message might not have its document declaration (<?xml ... ?>) starting in the first byte of the message; that violates the specification and often causes parsers to reject the input and claim that no message was found.
All things considered, this error message means the parser was not able to find a valid xml message to parse, so it didn't. You need to capture the data on the transport channel and figure out what exactly is wrong in order to resolve the issue.
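If you have no proxy handy for that, a minimal Java sketch like the following (the endpoint URL is hypothetical) hex-dumps the first bytes of a response, which makes a BOM or leading junk before the <?xml declaration easy to spot:

import java.io.InputStream;
import java.net.URL;

public class PeekResponse {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; ideally replay the actual SOAP request,
        // but even fetching the WSDL can reveal encoding or BOM problems.
        InputStream in = new URL("http://example.com/service?wsdl").openStream();
        byte[] buf = new byte[64];
        int n = in.read(buf);
        for (int i = 0; i < n; i++) {
            // Mask to 0xff so negative bytes print as two hex digits;
            // ef bb bf at the start would be a UTF-8 BOM.
            System.out.printf("%02x ", buf[i] & 0xff);
        }
        in.close();
    }
}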

Flume TAILDIR Source to Kafka Sink- Static Interceptor Issue

The scenario I'm trying to implement is as follows:
1- A Flume TAILDIR source reading from a log file and appending a static interceptor header to the beginning of the message. The header consists of the host name and the host IP, because it's required with every log message I receive.
2- A Flume Kafka producer sink that takes those messages from the channel and puts them in a Kafka topic.
The Flume configuration is as follows:
tier1.sources=source1
tier1.channels=channel1
tier1.sinks =sink1
tier1.sources.source1.interceptors=i1
tier1.sources.source1.interceptors.i1.type=static
tier1.sources.source1.interceptors.i1.key=HostData
tier1.sources.source1.interceptors.i1.value=###HostName###000.00.0.000###
tier1.sources.source1.type=TAILDIR
tier1.sources.source1.positionFile=/usr/software/flumData/flumeStressAndKafkaFailureTestPos.json
tier1.sources.source1.filegroups=f1
tier1.sources.source1.filegroups.f1=/usr/software/flumData/flumeStressAndKafkaFailureTest.txt
tier1.sources.source1.channels=channel1
tier1.channels.channel1.type=file
tier1.channels.channel1.checkpointDir = /usr/software/flumData/checkpoint
tier1.channels.channel1.dataDirs = /usr/software/flumData/data
tier1.sinks.sink1.channel=channel1
tier1.sinks.sink1.type=org.apache.flume.sink.kafka.KafkaSink
tier1.sinks.sink1.kafka.bootstrap.servers=<Removed For Confidentiality >
tier1.sinks.sink1.kafka.topic=FlumeTokafkaTest
tier1.sinks.sink1.kafka.flumeBatchSize=20
tier1.sinks.sink1.kafka.producer.acks=0
tier1.sinks.sink1.useFlumeEventFormat=true
tier1.sinks.sink1.kafka.producer.linger.ms=1
tier1.sinks.sink1.kafka.producer.client.id=HOSTNAME
tier1.sinks.sink1.kafka.producer.compression.type = snappy
Now for testing: I ran a console Kafka consumer, and when I write to the source file I do receive the message with the header appended.
Example:
I write 'test' in the source file, press Enter, and save the file.
Flume detects the file change and sends the new line to the Kafka producer.
My consumer gets the following line:
###HostName###000.00.0.000###test
The issue is that sometimes the interceptor doesn't work as expected. It's as if Flume sends 2 messages: one containing the interceptor header and the other the message content.
Example:
I write 'hi you' in the source file, press Enter, and save the file.
Flume detects the file change and sends the new line to the Kafka producer.
My consumer gets the following 2 lines:
###HostName###000.00.0.000###
hi you
And the terminal scrolls to the new message content.
This always happens when I type 'hi you' in the text file, and since I actually read from a log file, it's not predictable when it will happen.
Help and support will be much appreciated ^^
Thank you
So the problem was with the Kafka consumer. It receives the full message from Flume as
interceptor header + some garbage characters + message
and if one of the garbage characters is \n (LF on Linux systems), the consumer assumes it's 2 messages, not 1.
I'm using the Kafka Consumer stage in StreamSets, so it's simple to change the message delimiter. I made it \r\n and now it's working fine.
If you are dealing with the full message as a string and want to apply a regex to it or write it to a file, it's better to replace \r and \n with an empty string, as in the sketch below.
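A minimal Java sketch of that cleanup (the sample string mirrors the example above; adapt it to wherever your consumer hands you the record value):

public class StripNewlines {
    public static void main(String[] args) {
        // Raw record as received: header, stray LF, then the message body.
        String raw = "###HostName###000.00.0.000###\nhi you";
        // Remove every CR and LF so header and body form one line again.
        String cleaned = raw.replaceAll("[\\r\\n]+", "");
        System.out.println(cleaned); // ###HostName###000.00.0.000###hi you
    }
}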
The full walkthrough to the answer can be found here:
https://community.cloudera.com/t5/Data-Ingestion-Integration/Flume-TAILDIR-Source-to-Kafka-Sink-Static-Interceptor-Issue/m-p/86388#M3508

Debugging Transformer Errors in Mirth Connect Server Log

Fairly new to Mirth, so I'm looking for advice on debugging/getting more information from errors reported in the Server Log in Mirth Connect. I know which channel this originates from, but that's about it. This error is received 10 times for each message coming through. It should be noted that the channel is working properly apart from this error cluttering up the logs.
The Error:
ERROR (transformer:?): TypeError: undefined is not an xml object.
What I've Tried:
Ruled out Channel Map variables (mappers): they don't have null default values and they match up with variables in the incoming xml message. I even changed to JavaScript transformers and modified the catch block to try to narrow down the issue, but no luck.
Modified external JavaScript source files to include more error handling (wrapped each file in a try/catch that logs identifying info), but this didn't change the result at all.
Added a new Alert to send info if errors are received, but this alert never fired.
Anything else to try? Thanks for any/all help!
This is a Rhino message that happens when you use an E4X operator on a variable that isn't an xml object. The following two samples will both throw the same error you are seeing when obj is undefined. If obj is defined but isn't an xml object, 'undefined' in the error is replaced with obj.toString().
// Putting a dot between the variable and () indicates an xml filter
// instead of a function call
obj.('test');
// Two consecutive dots returns all xml descendant elements of obj
// named test instead of retrieving a property named test from obj.
obj..test;
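A defensive sketch for narrowing down which variable is the culprit, assuming a Mirth JavaScript transformer step (logger is in scope there; obj and the element name test are placeholders). In E4X, typeof evaluates to 'xml' for XML objects:

if (typeof obj == 'xml') {
    var matches = obj..test;  // safe: obj really is an xml object
} else {
    // Log what obj actually is instead of letting the transformer throw.
    logger.error('obj is not an xml object, typeof is: ' + (typeof obj));
}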

Mirth Connect: stop message propagation through destinations

I am using Mirth Connect 3.5.0.8232.
I have a Database Reader as the source connector and a JavaScript Writer as the destination connector. I decided to put some fancy code in the destination, doing four separate things that should follow one after the other. I wrote the code, and it seemed too long and too clumsy, so I decided to split it into 4 destinations daisy-chained via the "Wait for previous destination" option.
The question is : How do I interrupt this chain of execution if an error occurs on one of the destinations?
I found a JIRA issue from 2013 saying that errors occurring in the body of a destination connector do not prevent the message from going to all other destinations. It also states that the 2.x behavior is still current, i.e. an error occurring in the destination transformer will stop the message from propagating.
I tried throwing errors in both the destination body and the destination response transformer, and in both cases the message continued to the other destinations. I also tried returning ResponseFactory.getErrorResponse from the destination body, with no luck, and setting responseStatus to ERROR in the destination response transformer, to no avail. Did they mean the normal transformer/filter?
Also, maybe splitting a task into 4 distinct destinations is not what destinations were created for in the first place? I think the documentation states that destinations are basically what the actual word Destination stands for.
If so, maybe there are better ways of organizing the code functionally in Mirth? I believe including external JS files is not allowed in a JavaScript Writer, and even if it were, I would prefer everything to sit inside the channel itself and be exportable/importable as a single file.
Thank you.
Yep, when an error is thrown from a filter/transformer, it's considered truly "exceptional", so message flow is stopped (subsequent destinations in the same chain are not executed).
If an error is thrown from the actual destination dispatcher or from the response transformer, that destination is marked as ERROR, but subsequent destinations will still be executed.
You can still stop the message flow if you want, though. Use filters on your subsequent destinations, e.g. along the lines of the sketch below:
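A hedged sketch of such a filter, written as a JavaScript filter step on the next destination (the name 'Destination 1' is hypothetical; use the actual destination name from your channel):

// Accept the message only if the previous destination did not error out.
var prev = responseMap.get('Destination 1');
return prev != null && prev.getStatus() != Status.ERROR;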

Injecting a custom die() handler into mod_perl SOAP handler

We're using a $server = SOAP::Transport::HTTP::Apache->new; $server->dispatch_with(...) over here as a backend to a JS-based application. Should the underlying module die, it sends back a nice error message that gets displayed by the JS code.
The problem is, I would like more detailed messages (e.g. Carp::longmess), and a hard copy of those on STDERR.
How can I inject a custom exception handler into SOAP::Transport::HTTP::Apache with minimal code modifications?
(This is a large and old project we can't afford to rewrite, though honestly it deserves a rewrite).
UPDATE: here's a sample error message:
<soap:Body>
  <soap:Fault>
    <faultcode>soap:Server</faultcode>
    <faultstring>Column 'allocation' cannot be null at
      /usr/local/lib/perl5/site_perl/5.8.8/Tangram/Storage.pm line 686.</faultstring>
  </soap:Fault>
</soap:Body>
I get a Tangram error, but this is unlikely to be a bug in Tangram, and in any case I need a full stack trace. On the other hand, the die message ended up inside a SOAP message, which is not what a plain die does, so there's a handler somewhere -- and that's what I want to customize a bit.
The error handler is located in SOAP::Transport::HTTP::Server::_output_soap_fault. Try grepping for <faultcode> in the Perl @INC paths. See the sketch below for one way to hook in a stack trace without touching that code.
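A minimal sketch, assuming you can add a few lines where the server is set up (e.g. next to the SOAP::Transport::HTTP::Apache->new call): install a __DIE__ hook that prints Carp::longmess to STDERR and rethrows, so the SOAP layer still produces its fault exactly as before.

use Carp ();

# Log a full stack trace for every die, then rethrow so the existing
# SOAP fault handling still runs and the JS client still gets its message.
$SIG{__DIE__} = sub {
    my ($err) = @_;
    print STDERR Carp::longmess("SOAP backend died: $err");
    die $err;    # rethrow; the transport turns this into the soap:Fault
};

Note that __DIE__ also fires for dies that an eval will catch; check $^S inside the hook if that turns out to be too noisy.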