WSO2 CEP Extension ML with Collaborative Filtering

Is it possible to integrate a Collaborative Filtering (explicit data) model generated with the WSO2 Machine Learner module? I want to query the model with Siddhi, but I have not found any way to do so in the WSO2 docs.

Yes, it is possible to integrate machine learning models with WSO2 CEP and use Siddhi to get predictions. Please use this guide.
Tishan

No. The current released versions of WSO2 Machine Learner (1.0.0 and 1.1.0) do not provide support for collaborative filtering as a CEP extension, therefore you cannot use collaborative filtering models created with Machine Learner with a Siddhi query.
At the moment, only models created for numerical prediction, classification, anomaly detection and deep learning can be used with a Siddhi query.

Not only the collaborative filtering algorithm: any machine learning model you develop using the WSO2 Machine Learning Server can be easily integrated with other products in the WSO2 ecosystem. For instance, you can integrate WSO2 ML models easily with WSO2 ESB using a special mediator called the Predict mediator [1]. We have also written an extension for the WSO2 CEP Server [2]. In addition, we are planning to add a few more extensions in upcoming releases.
Sometimes, you might want to use machine learning models built with the WSO2 ML Server outside of the WSO2 ecosystem. For this purpose, we provide two options: Predictive Model Markup Language (PMML) and pure Java serialized object support.
[1]. https://docs.wso2.com/display/ML110/Predict+Mediator+for+WSO2+ESB
[2]. https://docs.wso2.com/display/ML110/WSO2+CEP+Extension+for+ML+Predictions
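
To illustrate the PMML option, here is a minimal sketch of scoring an exported model outside the WSO2 ecosystem, assuming the third-party pypmml package; the file name and input field names are placeholders, not values from the WSO2 docs:

    # Minimal sketch: scoring a PMML model exported from WSO2 ML outside
    # the WSO2 ecosystem. Assumes the third-party "pypmml" package; the
    # file name and input field names below are placeholders.
    from pypmml import Model

    # Load the PMML file exported from the ML server.
    model = Model.load("model.pmml")

    # Score one record, keyed by the model's input field names.
    record = {"feature1": 1.2, "feature2": 0.4}
    print(model.predict(record))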


To run a GraphQL server in Python that allows queries and subscriptions, do I have to combine it with a web framework service?

Excuse my ignorance in this area: most of my programming has been in optimization and research. I am very new to GraphQL and client-server programming.
My organization is working on an automated scheduler in Python 3.9 for scheduling observations for a large-scale telescope.
We are relying on many different services to all communicate via GraphQL. At the moment, I am trying to implement a GraphQL server that can be queried or accept subscriptions to disseminate when a new schedule for the night is created (for any number of reasons such as changing weather conditions, instrument faults, modifications to observations). Eventually, we will need to allow mutations (e.g. to the priority of observations, or to fix an observation at a given time).
I am looking at both Strawberry and Graphene as my possible options, but what is unclear to me is if I require them to be combined with a web framework service like Django or Flask to achieve the functionality that I need.
I see that Strawberry has a built-in (possibly only debug) server, but it also discusses integration with Django, Flask, and others, and I am not certain if I need to go to that level. I have been working through examples and completed a JavaScript course using Apollo Server / Client, but I'm not sure how these compare to Python GraphQL server implementations.
I apologize for my lack of knowledge: I am trying to keep the project as simple as possible for now, and having played around with Graphene and Django, I'm not sure if I'm overcomplicating things or if this approach is necessary.
Statements like "Graphene is fully featured with integrations for the most popular web frameworks and ORMs" lead me to believe a web framework is required, but again, I am not sure and feel very out of my depth, since my experience in this area is virtually nonexistent.
I'm the maintainer of Strawberry GraphQL 😊
For both Strawberry and Graphene you'd need a framework like Django or Flask.
Strawberry has support for Subscriptions when using an ASGI framework like Starlette or FastAPI; there are some examples here: https://strawberry.rocks/docs/general/subscriptions#subscriptions
We also have an almost-done PR that adds support for subscriptions using Django: https://github.com/strawberry-graphql/strawberry/pull/1407
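
To make that concrete, here is a minimal sketch of a Strawberry subscription served over ASGI (run it with an ASGI server such as uvicorn); the counting generator is a stand-in for your real "new schedule published" event stream:

    # Minimal sketch: a Strawberry subscription served over ASGI.
    # Run with e.g. `uvicorn app:app`; the generator below is a
    # placeholder for a real "new schedule" event stream.
    import asyncio
    from typing import AsyncGenerator

    import strawberry
    from strawberry.asgi import GraphQL

    @strawberry.type
    class Query:
        @strawberry.field
        def hello(self) -> str:
            return "scheduler up"

    @strawberry.type
    class Subscription:
        @strawberry.subscription
        async def schedule_updates(self) -> AsyncGenerator[str, None]:
            # Stand-in: emit a new "schedule" every few seconds.
            revision = 0
            while True:
                revision += 1
                yield f"schedule revision {revision}"
                await asyncio.sleep(3)

    schema = strawberry.Schema(query=Query, subscription=Subscription)
    app = GraphQL(schema)  # any ASGI server (uvicorn, hypercorn) can host this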

HAPI FHIR server full turn-key implementation

So I have been using the HAPI FHIR Server (for several years) as a way to expose proprietary data in my company, i.e. implementing IResourceProvider for several resources.
Think "read only" in this world.
Now I am considering accepting writes.
The HAPI FHIR Server documentation has this excerpt:
JPA Server
The HAPI FHIR RestfulServer module can be used to create a FHIR server
endpoint against an arbitrary data source, which could be a database
of your own design, an existing clinical system, a set of files, or
anything else you come up with.
HAPI also provides a persistence module which can be used to provide a
complete RESTful server implementation, backed by a database of your
choosing. This module uses the JPA 2.0 API to store data in a database
without depending on any specific database technology.
Important Note: This implementation uses a fairly simple table design,
with a single table being used to hold resource bodies (which are
stored as CLOBs, optionally GZipped to save space) and a set of tables
to hold search indexes, tags, history details, etc. This design is
only one of many possible ways of designing a FHIR server so it is
worth considering whether it is appropriate for the problem you are
trying to solve.
http://hapifhir.io/doc_jpa.html
So I downloaded the JPA server and got it working against a real DB engine (overriding the default JPA definition), and I observed the "fairly simple table design". I am thankful for this simple demo, but that simplicity does concern me for a full-blown production setup.
If I wanted to set up a FHIR server, are there any "non-trivial" implementations (as opposed to the "fairly simple table design" above) of a robust FHIR server that support versioning (history) of the resources and validation of references (for example, if someone uploads an Encounter, it checks the Patient reference and the Practitioner reference in the Encounter payload), etc.?
And one that uses a robust NoSQL database?
Or am I on the hook for implementing a non-trivial NoSQL persistence layer myself?
Or did I go down the wrong path with JPA?
I'm OK with starting from scratch (an empty data store for my FHIR server), and if I had to import any data, I understand what that would entail.
Thanks.
Another way to ask this: is there a HAPI FHIR way to emulate the library below (please don't regress into holy-war issues between Java and .NET)?
It is closer to what I would consider a "full turn-key" solution, using NoSQL (Cosmos DB).
https://github.com/Microsoft/fhir-server
A .NET Core implementation of the FHIR standard.
FHIR Server for Azure is an open-source implementation of the
emerging HL7 Fast Healthcare Interoperability Resources (FHIR)
specification designed for the Microsoft cloud. The FHIR specification
defines how clinical health data can be made interoperable across
systems, and the FHIR Server for Azure helps facilitate that
interoperability in the cloud. The goal of this Microsoft Healthcare
project is to enable developers to rapidly deploy a FHIR service.
With data in the FHIR format, the FHIR Server for Azure enables
developers to quickly ingest and manage FHIR datasets in the cloud,
track and manage data access and normalize data for machine learning
workloads. FHIR Server for Azure is optimized for the Azure ecosystem:
I'm not aware of any implementation of the HAPI server which supports a full persistence layer in NoSQL.
HAPI has been around for a while, the persistence layer has evolved quite a bit and seems to be appropriate for many production scenarios, especially when backed by a performant relational database.
The team that maintains HAPI also uses it as the basis for a commercial offering, Smile CDR. Many of the enhancements that went into making Smile CDR production ready are baked into the HAPI open source project. There has also been some discussion on scaling the JPA implementation.
If you're serious about using HAPI in production I'd recommend doing some benchmarks on the demo server you set up that simulate some of your production use-cases to see if it will get you what you want, you may be surprised. You can also contact the folks at Smile CDR as they do consulting and could likely tell you more specifically how to tune an instance to scale for your production priorities.
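As a rough starting point for such a benchmark, here is a sketch that exercises standard FHIR REST interactions (create, update, history) from Python; the base URL is an assumption and should point at your own HAPI JPA deployment:

    # Rough sketch: exercising standard FHIR REST interactions (create,
    # update, history) against a HAPI JPA server to get baseline numbers.
    # The base URL is an assumption; point it at your own deployment.
    import time
    import requests

    BASE = "http://localhost:8080/fhir"  # placeholder
    HEADERS = {
        "Content-Type": "application/fhir+json",
        "Prefer": "return=representation",  # ask the server to echo the resource back
    }

    patient = {"resourceType": "Patient", "name": [{"family": "Test"}]}

    start = time.time()
    created = []
    for _ in range(100):
        resp = requests.post(f"{BASE}/Patient", json=patient, headers=HEADERS)
        resp.raise_for_status()
        created.append(resp.json()["id"])
    print(f"100 creates in {time.time() - start:.2f}s")

    # Versioning: an update bumps the version, which is visible via the
    # _history interaction defined by the FHIR spec.
    pid = created[0]
    requests.put(f"{BASE}/Patient/{pid}", json={**patient, "id": pid},
                 headers=HEADERS)
    history = requests.get(f"{BASE}/Patient/{pid}/_history").json()
    print("versions:", history.get("total"))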
You can use Firely's implementation of FHIR. The most used repo is the FHIR SDK;
https://github.com/FirelyTeam/firely-net-sdk
But if you want more done for you out of the box, you can use their Spark repo. This uses the SDK underneath and ultimately gives you an IAsyncFhirService which you can use for CRUD operations;
https://github.com/FirelyTeam/spark
And to your question: Spark currently only supports MongoDB as the data persistence layer, i.e. there is no entity-like mapping done to create a DB schema in a relational database. NoSQL, I think, made sense in this case.
Alternatively, check out the list of FHIR implementations in other languages maintained by HL7 themselves;
https://wiki.hl7.org/Open_Source_FHIR_implementations

IBM Watson Natural Language Classifier

One simple question: how can I create more than one classifier within an instance of Natural Language Classifier using the beta toolkit?
I've asked that because I don't know how to upload and train a new classifier after I've just deployed one.
Thanks for the help.
Your question is about the Toolkit. You can manage your training data and classifiers by using the IBM Watson™ Natural Language Classifier Toolkit web application. The toolkit gives you a unified view of all the classifiers that are running in the same Bluemix service instance. So you need to create another classifier and use the toolkit to manage it.
I think you can view this document about using the Natural Language Classifier Toolkit.
Note: the first classifier is free, but you will need to pay for each additional one.
See the API Reference to use NLC.
As @Sayuri mentions above, use the Toolkit to manage your Classifiers.
Something to keep in mind that when you create the first NLC instance (the little box in Bluemix), this is called a service instance. Within this service instance, you can have up to 7 unique classifiers. If you need to create an 8th classifier, you will need to create a new service instance.
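For completeness, creating an additional classifier in the same service instance is one REST call; here is a sketch with Python's requests, where the credentials and CSV file are placeholders (the endpoint follows the Bluemix-era API reference):

    # Sketch: creating an additional classifier in the same NLC service
    # instance via its REST API. Credentials come from the Bluemix service
    # credentials; the training CSV file name is a placeholder.
    import json
    import requests

    URL = ("https://gateway.watsonplatform.net/"
           "natural-language-classifier/api/v1/classifiers")

    metadata = {"language": "en", "name": "my-second-classifier"}

    with open("training_data.csv", "rb") as csv_file:
        resp = requests.post(
            URL,
            auth=("<service-username>", "<service-password>"),
            files={
                "training_metadata": (None, json.dumps(metadata)),
                "training_data": csv_file,
            },
        )
    resp.raise_for_status()
    print(resp.json())  # includes classifier_id and training status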

Does a cloud-compatible complex event processing (CEP) framework exist?

I have used Esper's Complex Event Processing (CEP) framework for my work. It works very well on a standalone system. I do not know whether any cloud-compatible CEP framework exists. By cloud-compatible, I mean it should scale across multiple machines. Please tell me if you are aware of any such CEP framework.
Thank you very much

Workflow engine (BPMN, Drools, etc.) or ESB?

We currently have an application that is based on an in-house developed workflow engine with YAML based DSL. We are looking to move parts of it to Java.
I have discovered a number of Java solutions like Intalio, jBPM, Drools Expert, Drools Flow, etc.
They appear to be aimed at businesses where a business analyst creates the workflows using a graphical editor and submits them to the workflow engine. They seem geared towards ease of use for non-technical people, with a focus on human interaction, rather than towards developers.
The workflows tend to look like:
Discover-a-file -\
-> join -> process-file -> move-file -> register-file
Discover-some-metadata -/
If any step fails we need to retry it X times. We also need to be able to stop the system and be able to restart it and have it continue from where it was (durable).
Some of our workflows can be defined by a set of goals we need to achieve, so Jess's backward rule chaining sounds interesting, but it is not open source.
It might be that what we are after is a finite state machine engine, or just an Enterprise Service Bus, doing everything as JMS queues.
Is there a good open-source workflow engine that is both standards-based and geared towards developers? We don't particularly want to use a graphical workflow designer or write reams of XML, and it should ideally be in Java or language-agnostic (making REST/SOAP calls to external services).
Thanks,
Tom
Both Activiti and Bonita are open source and standards-based (BPMN2). See for example this blog post.
Ruote is not standards-based but seems close to your DSL approach, and it runs on a JVM thanks to JRuby.
Intalio is an open-source BPM engine; it offers a BPMN-supporting designer and a BPEL engine. It's written in Java.
Camunda BPM is a developer-friendly open-source workflow engine that is based on the open standards BPMN 2.0, DMN 1.1 and CMMN 1.1.
While it does come with a comfortable graphical workflow designer, it also ships with a fluent API to build workflows programmatically. Camunda is written in Java, but it can also be invoked from other languages via its REST API, and it can make REST/SOAP calls to external services.
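As an illustration of that REST API, here is a sketch that starts a process instance from Python; the engine base URL and process definition key are placeholders (the endpoint itself is Camunda's standard start-instance call):

    # Sketch: starting a process instance through Camunda's REST API.
    # The engine base URL and process definition key are placeholders.
    import requests

    ENGINE = "http://localhost:8080/engine-rest"  # default Camunda REST base

    payload = {
        "variables": {
            "fileName": {"value": "input-001.dat", "type": "String"}
        }
    }

    resp = requests.post(
        f"{ENGINE}/process-definition/key/process-file/start",
        json=payload,
    )
    resp.raise_for_status()
    print(resp.json()["id"])  # id of the new process instance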
jBPM 5 (open source, ASL, BPMN2) has just been released, and it is the best of Drools Flow and jBPM 4. It's lightweight, but it can also integrate deeply with the Drools rule engine to make decisions.
For anyone looking for a Python-based, enterprise-grade solution:
Zengine is a GPL3-licensed, BPMN-based workflow framework with Tornado, RabbitMQ (AMQP), advanced permissions, extensible scaffolding features and more.
It is built on top of the following major components:
SpiffWorkflow: Powerful workflow engine with BPMN 2.0 support.
Tornado: a Python web framework and asynchronous networking library.
Pyoko: a Django-esque ORM for the Riak KV store.
RabbitMQ: a fast, robust AMQP server written in Erlang.
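
To tie this back to the fork/join example in the question, here is a minimal sketch using SpiffWorkflow's pure-Python spec API; the import paths follow the classic (pre-1.0) releases and have shifted in newer ones, so treat this as illustrative:

    # Minimal sketch of the fork/join pipeline from the question using
    # SpiffWorkflow's pure-Python spec API. Import paths follow the
    # classic (pre-1.0) releases and have changed in newer ones.
    from SpiffWorkflow import Workflow
    from SpiffWorkflow.specs import WorkflowSpec, Simple, Join

    spec = WorkflowSpec()

    discover_file = Simple(spec, 'discover_a_file')
    discover_meta = Simple(spec, 'discover_some_metadata')
    join = Join(spec, 'join')
    process_file = Simple(spec, 'process_file')
    move_file = Simple(spec, 'move_file')
    register_file = Simple(spec, 'register_file')

    # Fork: both discovery tasks follow the start task.
    spec.start.connect(discover_file)
    spec.start.connect(discover_meta)

    # Join: processing waits until both discoveries complete.
    discover_file.connect(join)
    discover_meta.connect(join)
    join.connect(process_file)
    process_file.connect(move_file)
    move_file.connect(register_file)

    workflow = Workflow(spec)
    workflow.complete_all()  # run every ready task to completion
    print(workflow.is_completed())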