Remove the error messages from the validation report - hapi-fhir

Snehal Wale Bhosle
to HAPI FHIR
Hi All,
I am using the HL7 HAPI framework for my resource instance validation. Currently, even if the cardinality of a field is 0..1 or 0..*, error messages are captured for these fields when they have a null value, saying that the field cannot be null. Can we ignore such messages, since they are false positives?
[Image: validation report line compared with the HL7 FHIR structure for the Patient model]
Code:
// FHIR version, resource type, resource instance path, profile directory
args = new String[]{"DSTU3", "Patient", "/Users/sbhosle/Documents/hl7FHIR/Patient.json", "/Users/sbhosle/Documents/hl7FHIR"};
FhirContext ctx = ValidatorFactory.getValidator(args[0]);
System.out.println(Arrays.toString(args));

// Create a new validator and register the instance validator module
FhirValidator validator = ctx.newValidator();
IValidatorModule module = new FhirInstanceValidator(ctx);
validator.registerValidatorModule(module);

// Read the resource file, validate it, and build the report
ValidationResult result = validator.validateWithResult(readeFile(args[2]));
String validationResult = ValidatorFactory.getOperationOutcome(args[0], result, ctx);
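What I would like is to drop such messages before generating the report, along these lines (a rough sketch only; it assumes HAPI's SingleValidationMessage API, and the "cannot be null" text is just hypothetical wording taken from my report):
List<SingleValidationMessage> kept = new ArrayList<>();
for (SingleValidationMessage message : result.getMessages()) {
    // hypothetical pattern for the false-positive wording seen in the report
    boolean falsePositive = message.getMessage() != null
            && message.getMessage().contains("cannot be null");
    if (!falsePositive) {
        kept.add(message);
    }
}
// build a new result (and OperationOutcome) from the remaining messages only
ValidationResult filtered = new ValidationResult(ctx, kept);
Is that the right approach, or is there a validator setting for this?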
Thanks,
Snehal

Related

Interpreting server response for events correctly

I would like to store the values of event properties received from the server in a database. My problems are that in the event consumer:
I can't figure out which event type my client received.
I don't know how to map variant indexes to properties without knowing the EventType.
Events come with the property "EventType", which would solve my first problem. But since I am receiving many different event types, I do not know at which variant index it is located. Should I always place "EventType" at index 0 in the select clause whenever creating a new EventFilter?
For the second problem, item.getMonitoringFilter().decode(client.getSerializationContext()) offers a view on the property structure, but I am not sure how to use it for mapping variants to properties. Does anybody know how to solve these problems?
Here is the event consumer code that I use. It is taken from the Milo client examples.
for (UaMonitoredItem monitoredItem : mItems) {
    monitoredItem.setEventConsumer((item, vs) -> {
        LOGGER.info("Event Received from: {}", item.getReadValueId().getNodeId());
        LOGGER.info("getMonitoredItemId: {}", item.getMonitoredItemId());
        LOGGER.info("getMonitoringFilter: {}",
            item.getMonitoringFilter().decode(client.getSerializationContext()));
        for (int i = 0; i < vs.length; i++) {
            LOGGER.info("variant[{}]: datatype={}, value={}", i, vs[i].getDataType(), vs[i].getValue());
        }
    });
}
Thank you in advance.
Update:
It seems I have figured it out by casting to EventFilter. Further information, such as the QualifiedName of event properties or the event type NodeId, can then be derived:
ExtensionObject eObject = item.getMonitoringFilter();
EventFilter eFilter = (EventFilter) eObject.decode(client.getSerializationContext());
QualifiedName qName = eFilter.getSelectClauses()[0].getBrowsePath()[0];
LiteralOperand literalOperand = (LiteralOperand) eFilter.getWhereClause().getElements()[0]
        .getFilterOperands()[1].decode(client.getSerializationContext());
NodeId eventTypeNodeId = (NodeId) literalOperand.getValue().getValue();
Didn't you supply the filter in the first place when you created the MonitoredItem? Why do you need to "reverse engineer" the filter result to get back to what you did in the first place?
The properties you receive in the event data and the order they come in are defined by the select clause you used when creating the MonitoredItem. If you choose to select the EventId field then it will always be at the same corresponding index.
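For reference, here is a rough sketch (adapted from the Milo client event subscription example; the selected properties are just an illustration) of building the EventFilter with an explicit select clause, so the variant index of every property, including EventType, is known up front:
// the select clause order determines the variant indexes seen in the event consumer:
// vs[0] = EventId, vs[1] = EventType, vs[2] = Message
EventFilter eventFilter = new EventFilter(
    new SimpleAttributeOperand[]{
        new SimpleAttributeOperand(
            Identifiers.BaseEventType,
            new QualifiedName[]{new QualifiedName(0, "EventId")},
            AttributeId.Value.uid(),
            null),
        new SimpleAttributeOperand(
            Identifiers.BaseEventType,
            new QualifiedName[]{new QualifiedName(0, "EventType")},
            AttributeId.Value.uid(),
            null),
        new SimpleAttributeOperand(
            Identifiers.BaseEventType,
            new QualifiedName[]{new QualifiedName(0, "Message")},
            AttributeId.Value.uid(),
            null)
    },
    new ContentFilter(null));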

How to get all Kubernetes Deployment objects using kubernetes java client?

I am planning to write a simple program using the Kubernetes Java client (https://github.com/kubernetes-client/java/). I can get all namespaces and pods, but how do I get the list of deployments in a given namespace? I couldn't find any method. Is there any way to get it?
for (V1Namespace ns : namespaces.getItems()) {
    System.out.println("------Begin-----");
    System.out.println("Namespace: " + ns.getMetadata().getName());
    V1PodList pods = api.listNamespacedPod(ns.getMetadata().getName(), null, null, null, null, null, null, null, null, null);
    int count = 0;
    for (V1Pod pod : pods.getItems()) {
        System.out.println("Pod " + (++count) + ": " + pod.getMetadata().getName());
        System.out.println("Node: " + pod.getSpec().getNodeName());
    }
    System.out.println("------End-----");
}
I guess you're looking for the following example:
public class Example {
    public static void main(String[] args) {
        ApiClient defaultClient = Configuration.getDefaultApiClient();
        defaultClient.setBasePath("http://localhost");

        // Configure API key authorization: BearerToken
        ApiKeyAuth BearerToken = (ApiKeyAuth) defaultClient.getAuthentication("BearerToken");
        BearerToken.setApiKey("YOUR API KEY");
        // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
        //BearerToken.setApiKeyPrefix("Token");

        AppsV1Api apiInstance = new AppsV1Api(defaultClient);
        String namespace = "namespace_example"; // object name and auth scope, such as for teams and projects
        String pretty = "pretty_example"; // if 'true', the output is pretty printed
        Boolean allowWatchBookmarks = true; // request watch events with type "BOOKMARK"; ignored if this is not a watch
        String _continue = "_continue_example"; // continue token returned by a previous paginated list call
        String fieldSelector = "fieldSelector_example"; // restrict the returned objects by their fields; defaults to everything
        String labelSelector = "labelSelector_example"; // restrict the returned objects by their labels; defaults to everything
        Integer limit = 56; // maximum number of responses for a list call; the server sets `continue` on the list metadata if more items exist
        String resourceVersion = "resourceVersion_example"; // show changes after this resource version (watch), or control list freshness
        Integer timeoutSeconds = 56; // timeout for the list/watch call, regardless of activity or inactivity
        Boolean watch = true; // watch for changes and return them as a stream of add, update, and remove notifications

        try {
            V1DeploymentList result = apiInstance.listNamespacedDeployment(namespace, pretty, allowWatchBookmarks,
                    _continue, fieldSelector, labelSelector, limit, resourceVersion, timeoutSeconds, watch);
            System.out.println(result);
        } catch (ApiException e) {
            System.err.println("Exception when calling AppsV1Api#listNamespacedDeployment");
            System.err.println("Status code: " + e.getCode());
            System.err.println("Reason: " + e.getResponseBody());
            System.err.println("Response headers: " + e.getResponseHeaders());
            e.printStackTrace();
        }
    }
}
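If you only need the deployment names rather than the full objects, you can iterate the returned list (same client and assumptions as above):
for (V1Deployment deployment : result.getItems()) {
    System.out.println("Deployment: " + deployment.getMetadata().getName());
}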

Apache Storm JoinBolt

I am trying to join two Kafka data streams (using Kafka spouts) into one using JoinBolt with the following code snippet (http://storm.apache.org/releases/1.1.2/Joins.html).
The documentation says that each of JoinBolt's incoming data streams must be fields-grouped on a single field, and that a stream should only be joined with the other streams using the field on which it has been fields-grouped.
Code Snippet :
KafkaSpout kafka_spout_1 = SpoutBuilder.buildSpout("127.0.0.1:2181", "test-topic-1", "/spout-1", "spout-1"); // String zkHosts, String topic, String zkRoot, String spoutId
KafkaSpout kafka_spout_2 = SpoutBuilder.buildSpout("127.0.0.1:2181", "test-topic-2", "/spout-2", "spout-2"); // String zkHosts, String topic, String zkRoot, String spoutId

topologyBuilder.setSpout("kafka-spout-1", kafka_spout_1, 1);
topologyBuilder.setSpout("kafka-spout-2", kafka_spout_2, 1);

JoinBolt joinBolt = new JoinBolt("kafka-spout-1", "id")
        .join("kafka-spout-2", "deptId", "kafka-spout-1")
        .select("id,deptId,firstName,deptName")
        .withTumblingWindow(new Duration(10, TimeUnit.SECONDS));

topologyBuilder.setBolt("joiner", joinBolt, 1)
        .fieldsGrouping("kafka-spout-1", new Fields("id"))
        .fieldsGrouping("kafka-spout-2", new Fields("deptId"));
kafka-spout-1 sample record --> {"id" : 1 ,"firstName" : "Alyssa" , "lastName" : "Parker"}
kafka-spout-2 sample record --> {"deptId" : 1 ,"deptName" : "Engineering"}
I got the following exception while deploying the topology using the above code snippet:
[main] WARN o.a.s.StormSubmitter - Topology submission exception: Component: [joiner] subscribes from stream: [default] of component [kafka-spout-2] with non-existent fields: #{"deptId"}
java.lang.RuntimeException: InvalidTopologyException(msg:Component: [joiner] subscribes from stream: [default] of component [kafka-spout-2] with non-existent fields: #{"deptId"})
at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:273)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:387)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:159)
at BuildTopology.runTopology(BuildTopology.java:71)
at Main.main(Main.java:6)
Caused by: InvalidTopologyException(msg:Component: [joiner] subscribes from stream: [default] of component [kafka-spout-2] with non-existent fields: #{"deptId"})
at org.apache.storm.generated.Nimbus$submitTopology_result$submitTopology_resultStandardScheme.read(Nimbus.java:8070)
at org.apache.storm.generated.Nimbus$submitTopology_result$submitTopology_resultStandardScheme.read(Nimbus.java:8047)
at org.apache.storm.generated.Nimbus$submitTopology_result.read(Nimbus.java:7981)
at org.apache.storm.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
at org.apache.storm.generated.Nimbus$Client.recv_submitTopology(Nimbus.java:306)
at org.apache.storm.generated.Nimbus$Client.submitTopology(Nimbus.java:290)
at org.apache.storm.StormSubmitter.submitTopologyInDistributeMode(StormSubmitter.java:326)
at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:260)
... 4 more
How do I solve this issue?
Thank you, any help will be appreciated.
Consider using storm-kafka-client instead of storm-kafka if you're doing new development. Storm-kafka is deprecated.
Does the spout actually emit a field called "deptId"?
Your configuration snippet doesn't mention that you set the SpoutConfig.scheme, and your example records seem to imply that you're emitting JSON documents containing a "deptId" field.
Storm doesn't know anything about JSON or the contents of the strings coming out of the spout. You need to define a scheme that makes the spout emit the "deptId" field separately from the rest of the record.
Here's the relevant snippet from one of the built-in schemes that emits the message, partition and offset in separate fields:
@Override
public List<Object> deserializeMessageWithMetadata(ByteBuffer message, Partition partition, long offset) {
    String stringMessage = StringScheme.deserializeString(message);
    return new Values(stringMessage, partition.partition, offset);
}

@Override
public Fields getOutputFields() {
    return new Fields(STRING_SCHEME_KEY, STRING_SCHEME_PARTITION_KEY, STRING_SCHEME_OFFSET);
}
See https://github.com/apache/storm/blob/v1.2.2/external/storm-kafka/src/jvm/org/apache/storm/kafka/StringMessageAndMetadataScheme.java for reference.
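Along the same lines, here is a rough, untested sketch of a scheme that splits "deptId" out of the JSON record (class and field names are my own, and it assumes a JSON library such as org.json is on the classpath):
import java.nio.ByteBuffer;
import java.util.List;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.spout.Scheme;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.json.JSONObject;

public class DeptIdScheme implements Scheme {
    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        // parse the JSON record and emit deptId as its own field next to the raw record
        String record = StringScheme.deserializeString(ser);
        JSONObject json = new JSONObject(record);
        return new Values(json.get("deptId"), record);
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("deptId", "record");
    }
}
You would then wire it in with something like spoutConfig.scheme = new SchemeAsMultiScheme(new DeptIdScheme()); before building the spout, and fields-group the JoinBolt on "deptId".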
An alternative to using a scheme is to add a bolt between the spout and the JoinBolt that extracts the "deptId" from the record and emits it as a field alongside the record, as in the sketch below.
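A rough sketch of such a bolt (again untested and with assumed names; it expects the record in StringScheme's default "str" field and org.json on the classpath):
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.json.JSONObject;

public class DeptIdExtractorBolt extends BaseBasicBolt {
    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // the default StringScheme emits the whole record in a field named "str"
        String record = input.getStringByField("str");
        JSONObject json = new JSONObject(record);
        // re-emit deptId as a separate field so the JoinBolt can fields-group on it
        collector.emit(new Values(json.get("deptId"), record));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("deptId", "record"));
    }
}
The JoinBolt would then subscribe to this bolt and fields-group on its "deptId" field instead of subscribing to the spout directly.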

'msg' is undefined error Mirth

I am getting an error that msg is not defined when trying to loop through different OBX segments. In my destination DB Writer I have the code var msg = channelMap.get('msg');, but how do I store the msg in the transformer with channelMap.put('msg', msg)?
This is what I currently have in the transformer (JavaScript mapper):
var message = message.getRawData();
channelMap.put('msg', message);
In destination DB writer:
var msg = channelMap.get('msg');
Error:
TypeError: Cannot call method "getRawData" of undefined;
In the source connector's filters and transformers, msg is a provided variable. You can't (and should not) declare it (var msg), and it is not in the channelMap, so you cannot retrieve it with channelMap.get('msg'); you just use it directly: msg[...].
On the destination you can access the message without putting it in the channelMap.

Understanding Esper IO Http example

What is the trigger event here?
How do I plug this into the Esper engine for getting events?
What URI should be passed? What should engineURI look like?
Is it the remote location of the Esper engine?
ConfigurationHTTPAdapter adapterConfig = new ConfigurationHTTPAdapter();
// add additional configuration
Request request = new Request();
request.setStream("TriggerEvent");
request.setUri("http://localhost:8077/root");
adapterConfig.getRequests().add(request);
// start adapter
EsperIOHTTPAdapter httpAdapter = new EsperIOHTTPAdapter(adapterConfig, "engineURI");
httpAdapter.start();
// destroy the adapter when done
httpAdapter.destroy();
I changed the stream from TriggerEvent to HttpEvents and I get the exception given below:
ConfigurationException: Event type by name 'HttpEvents' not found
The "engineURI" is a name for the CEP engine instance and has nothing to do with the EsperIO http transport. Its a name for looking up what engines exists and finding the engine by name. So any text can be used here and the default CEP engine is named "default" when you allocate the default one.
You should define the event type of the event you expect to receive via HTTP. Sample code is at http://svn.codehaus.org/esper/esper/trunk/esperio-socket/src/test/java/com/espertech/esperio/socket/TestSocketAdapterCSV.java
You need to declare your event type(s) either in Java or through Esper's EPL statements.
The reason you are getting the exception is that your type is not defined.
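For example, a rough sketch (assuming the Esper client Configuration API, with property names matching the HTTP parameters used below) of registering a map-based event type named DataEvent before starting the adapter:
// declare a map-based event type so the adapter can resolve the 'stream' name
Configuration config = new Configuration();
Map<String, Object> props = new HashMap<String, Object>();
props.put("date", String.class);
props.put("src", String.class);
props.put("dst", String.class);
props.put("type", String.class);
config.addEventType("DataEvent", props);

// "engineURI" is just the engine name; pass the same name to the EsperIOHTTPAdapter
EPServiceProvider engine = EPServiceProviderManager.getProvider("engineURI", config);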
Then you can start sending events by specifying the type you are sending in the HTTP request. For example, here is a bit of code in Python:
import datetime
import urllib

cepurl = "http://localhost:8084"
# 'data' is assumed to be a dict holding the captured fields
param = urllib.urlencode({'stream': 'DataEvent',
                          'date': datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
                          'src': data["ipsrc"],
                          'dst': data["ipdst"],
                          'type': data["type"]})
# sending event:
f = urllib.urlopen(cepurl + "/sendevent?" + param)
rez = f.read()
In Java this would probably be something like this:
SupportHTTPClient client = new SupportHTTPClient();
client.request(8084, "sendevent", "stream", "DataEvent", "date", "mydate");