Mule ESB - How to use WS Consumer Response as input for WS Consumer 2 - Error coercing number to string - mule-studio

I am struggling to use the output from WS Consumer A as input for WS Consumer B, using a transformer component.
The error I am getting is "Cannot coerce null to string".
The IDNumber in the WS Consumer A output is of type number, and I am trying to convert it to a string as input for WS Consumer B:
%dw 1.0
%output application/xml
%namespace ns0 http://www.iLov2kodez.com/fakeNamespace0
%namespace ns1 http://www.iLov2kodez.com/fakeNamespace1
%namespace ns2 http://www.iLov2kodez.com/fakeNamespace2
---
{
    ns0#SearchCustomerDetails: {
        ns0#IDnumber: payload.ns0#Response.ns0#Result.ns1#IDnumber as :string
    }
}
Error:
Cannot coerce a :null to a :string
Type : com.mulesoft.weave.mule.exception.WeaveExecutionException
Code : MULE_ERROR--2

Are you sure you're getting a response from WS Consumer 1? You can check this with a Mule breakpoint or a logger. Try posting your XML config so the issue can be better understood.

Remove the error messages from the validation report

Snehal Wale Bhosle
to HAPI FHIR
Hi All,
I am using the HL7 HAPI framework for resource instance validation. Currently, even if the cardinality of a field is 0..1 or 0..*, error messages are captured for these fields when they have a null value, saying that the field cannot be null. Can we ignore such messages, as they are false-positive results?
[Image: validation report line compared with the HL7 FHIR structure for the Patient model]
Code:
args = new String[]{"DSTU3", "Patient", "/Users/sbhosle/Documents/hl7FHIR/Patient.json", "/Users/sbhosle/Documents/hl7FHIR"};
FhirContext ctx = ValidatorFactory.getValidator(args[0]);
System.out.println(Arrays.toString(args));

// Create a new validator and register the instance validator module
FhirValidator validator = ctx.newValidator();
IValidatorModule module = new FhirInstanceValidator(ctx);
validator.registerValidatorModule(module);

// Read the resource file and validate it
ValidationResult result = validator.validateWithResult(readeFile(args[2]));
String validationResult = ValidatorFactory.getOperationOutcome(args[0], result, ctx);
Thanks,
Snehal
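A minimal sketch of one possible approach, assuming the standard HAPI FHIR validation API (this is not from the original thread): post-filter the messages in the ValidationResult and drop the ones that only complain about empty optional fields. The substring used for matching is a guess and would need to be aligned with the real report text.
import java.util.List;
import java.util.stream.Collectors;
import ca.uhn.fhir.validation.SingleValidationMessage;

// Keep only the messages that are not the "field cannot be null" false positives.
List<SingleValidationMessage> relevant = result.getMessages().stream()
        .filter(m -> m.getMessage() == null || !m.getMessage().contains("cannot be null"))
        .collect(Collectors.toList());
relevant.forEach(m ->
        System.out.println(m.getSeverity() + " " + m.getLocationString() + " - " + m.getMessage()));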

Apache Storm JoinBolt

I am trying to join two Kafka data streams (using Kafka spouts) into one using JoinBolt, with the following code snippet (http://storm.apache.org/releases/1.1.2/Joins.html).
The documentation says that each of JoinBolt's incoming data streams must be fields-grouped on a single field, and that a stream should only be joined with other streams using the field on which it has been fields-grouped.
Code Snippet :
KafkaSpout kafka_spout_1 = SpoutBuilder.buildSpout("127.0.0.1:2181","test-topic-1", "/spout-1", "spout-1");//String zkHosts, String topic, String zkRoot, String spoutId
KafkaSpout kafka_spout_2 = SpoutBuilder.buildSpout("127.0.0.1:2181","test-topic-2", "/spout-2", "spout-2");//String zkHosts, String topic, String zkRoot, String spoutId
topologyBuilder.setSpout("kafka-spout-1", kafka_spout_1, 1);
topologyBuilder.setSpout("kafka-spout-2", kafka_spout_2, 1);
JoinBolt joinBolt = new JoinBolt("kafka-spout-1", "id")
.join("kafka-spout-2", "deptId", "kafka-spout-1")
.select("id,deptId,firstName,deptName")
.withTumblingWindow(new Duration(10, TimeUnit.SECONDS));
topologyBuilder.setBolt("joiner", joinBolt, 1)
.fieldsGrouping("spout-1", new Fields("id"))
.fieldsGrouping("spout-2", new Fields("deptId"));
kafka-spout-1 sample record --> {"id" : 1 ,"firstName" : "Alyssa" , "lastName" : "Parker"}
kafka-spout-2 sample record --> {"deptId" : 1 ,"deptName" : "Engineering"}
I got the following exception while deploying the topology with the above code snippet:
[main] WARN o.a.s.StormSubmitter - Topology submission exception: Component: [joiner] subscribes from stream: [default] of component [kafka-spout-2] with non-existent fields: #{"deptId"}
java.lang.RuntimeException: InvalidTopologyException(msg:Component: [joiner] subscribes from stream: [default] of component [kafka-spout-2] with non-existent fields: #{"deptId"})
at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:273)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:387)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:159)
at BuildTopology.runTopology(BuildTopology.java:71)
at Main.main(Main.java:6)
Caused by: InvalidTopologyException(msg:Component: [joiner] subscribes from stream: [default] of component [kafka-spout-2] with non-existent fields: #{"deptId"})
at org.apache.storm.generated.Nimbus$submitTopology_result$submitTopology_resultStandardScheme.read(Nimbus.java:8070)
at org.apache.storm.generated.Nimbus$submitTopology_result$submitTopology_resultStandardScheme.read(Nimbus.java:8047)
at org.apache.storm.generated.Nimbus$submitTopology_result.read(Nimbus.java:7981)
at org.apache.storm.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
at org.apache.storm.generated.Nimbus$Client.recv_submitTopology(Nimbus.java:306)
at org.apache.storm.generated.Nimbus$Client.submitTopology(Nimbus.java:290)
at org.apache.storm.StormSubmitter.submitTopologyInDistributeMode(StormSubmitter.java:326)
at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:260)
... 4 more
How can I solve this issue?
Thank you, any help will be appreciated.
Consider using storm-kafka-client instead of storm-kafka if you're doing new development. Storm-kafka is deprecated.
Does the spout actually emit a field called "deptId"?
Your configuration snippet doesn't mention that you set the SpoutConfig.scheme, and your example records seem to imply that you're emitting JSON documents containing a "deptId" field.
Storm doesn't know anything about JSON or the contents of the strings coming out of the spout. You need to define a scheme that makes the spout emit the "deptId" field separately from the rest of the record.
Here's the relevant snippet from one of the built-in schemes, which emits the message, partition, and offset in separate fields:
@Override
public List<Object> deserializeMessageWithMetadata(ByteBuffer message, Partition partition, long offset) {
    String stringMessage = StringScheme.deserializeString(message);
    return new Values(stringMessage, partition.partition, offset);
}

@Override
public Fields getOutputFields() {
    return new Fields(STRING_SCHEME_KEY, STRING_SCHEME_PARTITION_KEY, STRING_SCHEME_OFFSET);
}
See https://github.com/apache/storm/blob/v1.2.2/external/storm-kafka/src/jvm/org/apache/storm/kafka/StringMessageAndMetadataScheme.java for reference.
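If you take the scheme route, a minimal sketch of a custom scheme for this case might look like the following; the class name is made up and Jackson is assumed to be on the classpath for JSON parsing. It emits the raw record plus "deptId" as a separate tuple field:
import java.nio.ByteBuffer;
import java.util.List;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.spout.Scheme;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DeptIdScheme implements Scheme {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        try {
            String json = StringScheme.deserializeString(ser);
            long deptId = MAPPER.readTree(json).get("deptId").asLong();
            // Emit deptId as its own field so the JoinBolt can fields-group on it
            return new Values(json, deptId);
        } catch (Exception e) {
            throw new RuntimeException("Could not deserialize record", e);
        }
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("record", "deptId");
    }
}
The spout config would then use it via spoutConfig.scheme = new SchemeAsMultiScheme(new DeptIdScheme()), so that fields-grouping on "deptId" refers to a field the spout actually declares.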
An alternative to doing this with a scheme is to put a bolt between the spout and the JoinBolt that extracts the "deptId" from the record and emits it as a field alongside the record; a sketch follows below.
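A hedged sketch of such an extractor bolt (again, the class name is made up and Jackson is assumed for JSON parsing):
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.FailedException;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DeptIdExtractorBolt extends BaseBasicBolt {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        try {
            String json = tuple.getString(0);   // the raw record emitted by the spout
            long deptId = MAPPER.readTree(json).get("deptId").asLong();
            // Re-emit the record with deptId as a separate field
            collector.emit(new Values(deptId, json));
        } catch (Exception e) {
            throw new FailedException("Could not extract deptId", e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("deptId", "record"));
    }
}
The joiner would then subscribe to this bolt instead of the spout, e.g. topologyBuilder.setBolt("dept-extractor", new DeptIdExtractorBolt()).shuffleGrouping("kafka-spout-2"), with .fieldsGrouping("dept-extractor", new Fields("deptId")) on the joiner.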

Passing validation exceptions from Camel to CXF SOAP service

I have a problem that I have not been able to solve for some time, and being new to Apache Camel does not help.
My simple app exposes a SOAP web service using CXF (with Jetty as the HTTP engine); the SOAP requests are then passed to Akka actors using Camel.
I want to validate the SOAP request on its way to the actor and check whether it contains certain headers and content values. I do not want to use a CXF interceptor. The problem is that whatever happens in Camel (an exception, a returned fault message) is not propagated to CXF. I always get a 202 SUCCESS response, with information about the validation exception only in the logs.
This is my simple app:
class RequestActor extends Actor with WithLogger {
  def receive = {
    case CamelMessage(body: Request, headers) =>
      logger.info(s"Received Request $body [$headers]")
    case msg: CamelMessage =>
      logger.error(s"unknown message ${msg.body}")
  }
}

class CustomRouteBuilder(endpointUrl: String, serviceClassPath: String, system: ActorSystem)
  extends RouteBuilder {
  def configure {
    val requestActor = system.actorOf(Props[RequestActor])

    from(s"cxf:${endpointUrl}?serviceClass=${serviceClassPath}")
      .onException(classOf[PredicateValidationException])
        .handled(true)
        .process(new Processor {
          override def process(exchange: Exchange): Unit = {
            val message = MessageFactory.newInstance().createMessage();
            val envelope = message.getSOAPPart().getEnvelope();
            val body = message.getSOAPBody();
            val fault = body.addFault();
            fault.setFaultCode("Server");
            fault.setFaultString("Unexpected server error.");
            val detail = fault.addDetail();
            val entryName = envelope.createName("message");
            val entry = detail.addDetailEntry(entryName);
            entry.addTextNode("The server is not able to complete the request. Internal error.");
            log.info(s"Returning $message")
            exchange.getOut.setFault(true)
            exchange.getOut.setBody(message)
          }
        })
      .end()
      .validate(header("attribute").isEqualTo("for_sure_not_defined"))
      .to(genericActor)
  }
}

object Init extends App {
  implicit val system = ActorSystem("superman")
  val camel = CamelExtension(system)
  val camelContext = camel.context
  val producerTemplate = camel.template
  val endpointClassPath = classOf[Service].getName
  val endpointUrl = "http://localhost:1234/endpoint"
  camel.context.addRoutes(new CustomRouteBuilder(endpointUrl, endpointClassPath, system))
}
When I run the app I see the log line from log.info(s"Returning $message"), so I'm sure the route invokes the processor; the actor is also not invoked, so the lines:
exchange.getOut.setFault(true)
exchange.getOut.setBody(message)
do their job. But my SOAP service still returns 202 SUCCESS instead of the fault information.
I'm not sure this is what you are looking for, but I processed exceptions for a CXF endpoint differently. I had to return HTTP 500 with custom details in the SOAPFault (such as validation error messages), so:
Keep the exception unhandled by Camel so that it is passed on to CXF: .onException(classOf[PredicateValidationException]).handled(false)
Create an org.apache.cxf.interceptor.Fault object (not a SOAP Fault) with all the needed details from the exception. It lets you set a custom detail element, a custom faultCode element, and a message.
Finally, replace the Exchange.EXCEPTION_CAUGHT property with that CXF fault: exchange.setProperty(Exchange.EXCEPTION_CAUGHT, cxfFault)
The resulting message from the CXF endpoint is an HTTP 500 with a SOAPFault in the body containing the details I set in cxfFault. A rough sketch of the processor follows.
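The sketch below is in Java for brevity (the Scala translation is mechanical); the processor name and fault text are illustrative, not taken from the original answer.
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.cxf.interceptor.Fault;

public class CxfFaultProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // The exception Camel caught for this exchange
        Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
        // Wrap it in a CXF Fault (not a SOAP Fault) so the endpoint renders it as HTTP 500
        Fault cxfFault = new Fault(cause);
        cxfFault.setStatusCode(500);
        cxfFault.setMessage("Request validation failed");
        // Replace the caught exception so CXF serializes the fault back to the client
        exchange.setProperty(Exchange.EXCEPTION_CAUGHT, cxfFault);
    }
}
On the route it would be registered with .onException(classOf[PredicateValidationException]).handled(false).process(new CxfFaultProcessor()).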
Camel is only looking at the in-portion of the exchange but you are modifying the out-portion.
Try changing
exchange.getOut.setFault(true)
exchange.getOut.setBody(message)
to
exchange.getIn.setFault(true)
exchange.getIn.setBody(message)

Understanding Esper IO Http example

What is TriggerEvent here?
How do I plug this into the Esper engine to get events?
What URI should be passed, and what should engineURI look like?
Is it the remote location of the Esper engine?
ConfigurationHTTPAdapter adapterConfig = new ConfigurationHTTPAdapter();
// add additional configuration
Request request = new Request();
request.setStream("TriggerEvent");
request.setUri("http://localhost:8077/root");
adapterConfig.getRequests().add(request);
// start adapter
EsperIOHTTPAdapter httpAdapter = new EsperIOHTTPAdapter(adapterConfig, "engineURI");
httpAdapter.start();
// destroy the adapter when done
httpAdapter.destroy();
I changed the stream from TriggerEvent to HttpEvents and I get the exception given below:
ConfigurationException: Event type by name 'HttpEvents' not found
The "engineURI" is a name for the CEP engine instance and has nothing to do with the EsperIO http transport. Its a name for looking up what engines exists and finding the engine by name. So any text can be used here and the default CEP engine is named "default" when you allocate the default one.
You should define the event type of the event you expect to receive via http. A sample code is in http://svn.codehaus.org/esper/esper/trunk/esperio-socket/src/test/java/com/espertech/esperio/socket/TestSocketAdapterCSV.java
You need to declare your event type(s) in either Java, or through Esper's EPL statements.
The reason why you are getting exception is because your type is not defined.
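For example, here is a minimal sketch of declaring a map-based event type in Java before starting the adapter; the property names are made up:
import java.util.Properties;
import com.espertech.esper.client.Configuration;
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;

Configuration config = new Configuration();

// Declare a map-based event type named "TriggerEvent" with illustrative properties
Properties props = new Properties();
props.put("id", "string");
props.put("value", "string");
config.addEventType("TriggerEvent", props);

// "engineURI" is just the engine instance name; the EsperIOHTTPAdapter must be
// created with the same name so it finds this engine
EPServiceProvider engine = EPServiceProviderManager.getProvider("engineURI", config);
Alternatively, the type can be declared with an EPL statement such as create schema TriggerEvent(id string, value string).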
Then you can start sending events by specifying the type you are sending in the HTTP request. For example, here is a bit of code in Python:
import datetime
import urllib

cepurl = "http://localhost:8084"
param = urllib.urlencode({'stream': 'DataEvent',
                          'date': datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
                          'src': data["ipsrc"],
                          'dst': data["ipdst"],
                          'type': data["type"]})
# sending event:
f = urllib.urlopen(cepurl + "/sendevent?" + param)
rez = f.read()
In Java this would probably be something like this:
SupportHTTPClient client = new SupportHTTPClient();
client.request(8084, "sendevent", "stream", "DataEvent", "date", "mydate");

Set charset when processing xml using Dispatch Databinder 0.10

I'm wrapping an upstream API with a Scalatra application and using Dispatch to make async requests. However, I'm having trouble turning the upstream XML into xml.Elems using Dispatch's built-in XML processing support.
I'm trying to do something fairly similar to what's in the Dispatch docs, namely retrieve the upstream XML and do some reprocessing. The functions in question look something like:
def facilitiesSvc = {
  val myhost = host("upstream.api.co.uk") / "organisations" / "foo" / "123" / "bar" / "core.xml"
  myhost.addQueryParameter("apikey", "123456")
  myhost
}

def facilitiesXml: Future[Either[String, xml.Elem]] = {
  val res: Future[Either[Throwable, xml.Elem]] = Http((facilitiesSvc) OK as.xml.Elem).either
  for (exc <- res.left)
    yield "Can't connect to facilities service: \n" +
      exc.getMessage
}
This results in:
Left(Can't connect to facilities service: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.)
The upstream API isn't sending back a charset, and when retrieving it, Dispatch is showing it with a Byte Order Mark before the XML begins: <?xml version="1.0" encoding="utf-8"?>.
I can see that earlier versions of Dispatch solved this problem in the following way:
new Http apply(url(uri.toString).copy(defaultCharset = "iso-8859-1") as_str)
However I can't currently see a way to make this work with Dispatch 0.10. Does anybody have any tips for setting the charset on this response, so I can parse what's returned?