Building an OPC UA Server for Historical Data Access using Eclipse Milo

I'm new to OPC UA and came across the Eclipse Milo project. The project seems interesting, but there is very little developer documentation. I have been browsing the code to figure out how to implement a node with historical data. The project has other examples for reference, but a history service example is missing. I tried modifying the provided example in ExampleNamespace.java to enable history on the UaVariableNode, but Prosys OPC UA Client doesn't enable the "Show History" menu for the node. Here is what I tried:
UaVariableNode node = new UaVariableNode.UaVariableNodeBuilder(server.getNodeMap())
    .setNodeId(new NodeId(namespaceIndex, "HelloWorld/Dynamic/" + name))
    .setAccessLevel(ubyte(AccessLevel.getMask(AccessLevel.READ_WRITE)))
    .setBrowseName(new QualifiedName(namespaceIndex, name))
    .setDisplayName(LocalizedText.english(name))
    .setDataType(typeId)
    .setTypeDefinition(Identifiers.BaseDataVariableType)
    .setHistorizing(true) // <-- the line I added to enable history
    .build();
It would be very helpful if someone who has implemented a history service using Milo could share an example.
UPDATE: Sorry, I should have included the other part that I implemented. After reading another Stack Overflow post, I implemented the historyRead function in my namespace, which takes care of pulling history readings from the datastore. My trouble right now is indicating to the OPC client that the node is history-capable. The test is to make the Prosys OPC client enable the "Show History" menu for the node. I am probably missing something here.

The Milo Server SDK does not implement historical services for you.
Setting the Historizing attribute is just the tip of the iceberg. Your Namespace also has to override the historyRead (and historyUpdate, if you want to support it) methods defined in AttributeHistoryManager and provide implementations. This will be impossible if you're not familiar with how UA history works, which is all defined in Part 11 of the spec.
You'll also have to take responsibility for actually storing the history for any nodes that have their Historizing attribute set, so that the services you implement actually have some data to go and query.
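For the specific symptom in the UPDATE (the client not offering a history menu), clients like Prosys typically key off the node's AccessLevel attribute rather than Historizing alone: the AccessLevel byte needs the HistoryRead bit set (defined in Part 3 of the spec). Here is a minimal sketch against the builder API from the question; the individual constant names assume Milo's AccessLevel enum:

UaVariableNode node = new UaVariableNode.UaVariableNodeBuilder(server.getNodeMap())
    .setNodeId(new NodeId(namespaceIndex, "HelloWorld/Dynamic/" + name))
    // Advertise history support: include the HistoryRead bit in the mask
    .setAccessLevel(ubyte(AccessLevel.getMask(
        AccessLevel.CurrentRead,
        AccessLevel.CurrentWrite,
        AccessLevel.HistoryRead)))
    .setBrowseName(new QualifiedName(namespaceIndex, name))
    .setDisplayName(LocalizedText.english(name))
    .setDataType(typeId)
    .setTypeDefinition(Identifiers.BaseDataVariableType)
    .setHistorizing(true)
    .build();

If you also set UserAccessLevel, give it the same HistoryRead bit, since some clients check both.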
FWIW, developer documentation is a work in progress and should drastically improve in the next couple releases.
History is unlikely to ever be implemented as part of the SDK in such a way that you can just flip a switch and it will start working. It's fairly complicated and an efficient implementation of the services is likely to be coupled to whatever backing store you're using.

Related

What is the difference between these Google KMS client packages? (CloudKMS vs KeyManagementServiceClient)

I have a Java codebase that seems to be using com.google.api.services.cloudkms.v1.CloudKMS to call KMS. The online docs say to use com.google.cloud.kms.v1.KeyManagementServiceClient.
When I looked, both packages seem to be updated; however, the reference docs recommend using the latter.
https://developers.google.com/resources/api-libraries/documentation/cloudkms/v1/java/latest/com/google/api/services/cloudkms/v1/CloudKMS.html
https://cloud.google.com/kms/docs/reference/libraries
Could someone tell me what the difference is between these two client packages, and whether I should move to the one the reference links to?
In general, you should prefer the library referenced on the Reference Libraries page, currently com.google.cloud.kms. The examples and tutorials on the website will use this client library.
Probably more history than you need to know, but we have two client libraries because they run over different protocols. The new libraries (the ones listed on the reference page) use gRPC to communicate. This means less bandwidth and less time spent serializing/deserializing JSON. On the flip side, gRPC requires HTTP/2, and some organizations can't or won't support HTTP/2 yet. As a result, we still publish and maintain legacy libraries that are REST over HTTP/1.1. It is strongly recommended that you use the gRPC ones unless you can't use HTTP/2.
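To make the difference concrete, here is a minimal sketch using the gRPC-based client. The project and key ring names are placeholders, and it assumes Application Default Credentials are configured:

import com.google.cloud.kms.v1.CryptoKey;
import com.google.cloud.kms.v1.KeyManagementServiceClient;
import com.google.cloud.kms.v1.KeyRingName;

public class ListKeysExample {
    public static void main(String[] args) throws Exception {
        // The client speaks gRPC under the hood; close it to release channels
        try (KeyManagementServiceClient client = KeyManagementServiceClient.create()) {
            // "my-project" and "my-key-ring" are hypothetical names
            KeyRingName keyRing = KeyRingName.of("my-project", "global", "my-key-ring");
            for (CryptoKey key : client.listCryptoKeys(keyRing).iterateAll()) {
                System.out.println(key.getName());
            }
        }
    }
}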
You can read more about the background and technical details in Kickstart your cryptography with new Cloud KMS client libraries and samples.

How to provide a REST API into 3rd Party data?

I use OmniFocus a ton and I'd really like to be able to connect my data there to other things (Zapier, IFTTT, Beeminder, etc.). There's a lot of support for putting data into OmniFocus through these services, but I can't find any support for getting data out of OmniFocus.
In thinking about this, I realized my question isn't really about OmniFocus but rather about building a connector to a service that I don't own. So this is my scenario:
I have data on some publicly accessible web service (in the case of OF, it's Dropbox)
I want to build and host some sort of application that accesses that data and parses it and then provides a REST API that other servers can then query.
Ideally I'd like to make this service available to others - this seems tricky because they have to somehow enable my application to read their data.
I'm a fairly experienced software dev but I have zero experience with web applications or cloud applications. I'm not looking for a super in-depth answer here, but more of a general sketch of how this would work (or a confirmation that this really isn't feasible).

Is the SQL Azure DAC Import/Export service WCF or REST or something else?

I downloaded the example application and was surprised to see quite complex web request building and handling.
Unfortunately, I have not been able to find even one scrap of documentation about the service.
I tried using Add Service Reference in Visual Studio and svcutil.exe on the endpoints (both the general http one and the region-specific https ones) that I found in the example project (again, I couldn't find them listed anywhere on the web). Both seemed to find a WSDL of sorts, which they used to create wrapper classes, but neither one created an app.config, and no matter what kind of binding I set up for them, I cannot get the client to communicate.
Is there any documentation for the service?
Is there a way to use it with WCF?
Thank you
Rabbi,
I have the same thing here; there are some non-MS sites discussing this:
- http://www.britishdeveloper.co.uk/2012/05/export-and-back-up-your-sql-azure.html
- http://www.codeproject.com/Articles/287597/Sql-Azure-Import-Export-Service-bacpac-dac-Extract
There is also a DacSample site, but that doc is a bit messed up, mixing the DAC client tools with the hosted solution. If I read the doc correctly and follow the links, I end up going in circles. Not funny :)
Good luck!
Pete

OSGi service trackers not always working

Hey guys. We're using OSGi services in an Eclipse RCP application. To track them, we're using the org.osgi.util.tracker.ServiceTracker class. Sample code from the application looks like this:
mailServiceTracker = new ServiceTracker(context, MailService.class.getName(), null);
mailServiceTracker.open();
MailService service = (MailService) mailServiceTracker.getService();
Now my problem is that the getService() method frequently returns null when I have just created a new service. The code works very well for services that have existed in the application for a long time, but each time I create a new service, I have to do many things before the service is finally found and tracked. I regularly try, for example:
'Clean...' in Eclipse
'Refresh' all projects in Eclipse
Rebuild the project on the command line
Sometimes those things help and sometimes they don't. Does anyone have experience with these trackers and can tell me how to avoid this behavior and get services tracked immediately upon creation?
Thanks
The problem is that the services you want may not have been created yet (especially in a bundle activator, as some bundles may not have started yet). If you still want to use the service tracker, you will need to provide a ServiceTrackerCustomizer, and keep track (sorry, no pun intended) of the services as they come and go.
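Here is a rough sketch of that approach, assuming the MailService interface from the question and the generics-based ServiceTracker from OSGi 4.3+:

mailServiceTracker = new ServiceTracker<>(context, MailService.class,
    new ServiceTrackerCustomizer<MailService, MailService>() {
        @Override
        public MailService addingService(ServiceReference<MailService> reference) {
            MailService service = context.getService(reference);
            // The service has just been registered; start using it here
            return service;
        }

        @Override
        public void modifiedService(ServiceReference<MailService> reference, MailService service) {
            // Service properties changed; usually nothing to do
        }

        @Override
        public void removedService(ServiceReference<MailService> reference, MailService service) {
            // Stop using the service and release it
            context.ungetService(reference);
        }
    });
mailServiceTracker.open();

For simple cases, mailServiceTracker.waitForService(timeout) is an alternative: it blocks until a matching service appears or the timeout elapses.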
Or, you could just switch over to Declarative Services, which handles this for you.
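For comparison, a sketch of what the Declarative Services alternative could look like with the annotation-based style (the annotations postdate the Eclipse 3.5 era; back then the same thing was declared in a component XML file). The DS runtime injects the service once it is available, so there is no tracker code at all:

// Hypothetical consumer component; uses org.osgi.service.component.annotations
@Component
public class MailConsumer {

    private MailService mailService;

    @Reference
    void setMailService(MailService mailService) {
        // Called by the DS runtime when a MailService is registered
        this.mailService = mailService;
    }
}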
There is nothing wrong with using ServiceTrackers other than the fact that it's a fairly low-level way of tracking services. Whilst I agree that declarative services are a nice mechanism, simply dismissing ServiceTrackers because of "all sorts of issues" sounds like bad advice.
Back to the question.
As soon as a service tracker is created and opened, it gives you access to all services that match the filter condition you specified upon creation. There is no delay there. The only thing I can think of is that somehow your bundles are not correctly resolved, so services registered by a bundle A are simply not visible to a bundle B using a ServiceTracker. To check this, first locate the bundle that exports the package containing the service interface, and then make sure both A and B are actually wired to it.
Explaining the update/refresh mechanism in OSGi a bit more:
Whenever you update something in OSGi, it's a two step process.
Let's assume you update a bundle that contains a new version of an exported package. Let's also assume there is some consumer that imports it. As long as you only update the bundle but do not explicitly refresh the wiring (which import is linked to which export), the consumer will still be wired to the old version of the package. As soon as you do a package refresh (something you can do in OSGi via the PackageAdmin service), your consumer will be resolved again and will be wired to the new version.
The reason this is decoupled is that you might want to do updates of several bundles and not "refresh" after each one but instead defer such a refresh until all of them are updated.
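In code, that explicit refresh might look like this (a sketch using the classic PackageAdmin API; bundleA and bundleB are hypothetical Bundle references):

// Update a batch of bundles first...
bundleA.update();
bundleB.update();

// ...then refresh once, so consumers are rewired to the new versions
ServiceReference ref = context.getServiceReference(PackageAdmin.class.getName());
PackageAdmin packageAdmin = (PackageAdmin) context.getService(ref);
packageAdmin.refreshPackages(null); // null means: refresh everything pending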
It's quite possible that this is the effect you're seeing. Initially you only do an update, and only after the refresh will the tracker actually see the new version of the service.
Not being flippant at all: don't use service trackers. They appear to make your life simple, but there are all sorts of issues with them. I'd recommend that you look into using Declarative Services instead. The support for DS in Eclipse has been very good from 3.5 onward.
You might want to check out this book and the associated presentations for more information on why using Service Trackers is a bad idea.
http://equinoxosgi.org/

Accessing Erlang business layer via REST

For a college project I'm thinking of implementing the business layer in Erlang and then accessing it via multiple front-ends using REST. I would like to avail of OTP features like distributed applications, etc.
My question is how do I expose gen_server calls/casts to other applications? Obviously I could make RPC calls via language specific "bridges" like OTP.net or JInterface, but I want a consistent way to access it like REST.
As already mentioned Yaws or Mochiweb are a great way to go but if you'd like a dead simple way to get your RESTful API done quickly and correctly then use Webmachine. It's a layer on top of Mochiweb that implements proper HTTP behavior based on Alan Dean's amazing HTTP flow diagram and makes it easy to get REST done right.
I'm using it right now to expose a REST API as well as handle a COMET application and it's been pretty easy to do, even for an Erlang newbie such as myself.
I did something similar for my job and found it best to use REST to expose the business layer, because even legacy languages such as Software AG's Natural are able to access it. The best mechanism that I have found in Erlang is Mochiweb.
You can find more information about using it in the Erlang In Practice screencasts. Episode 6 is particularly helpful, but all of them are excellent.
For installation, How To Quickly Set Up Ubuntu 8.04 loaded with Erlang, Mochiweb and Nginx walks you through the setup, and Migrating a native Erlang interface to RESTful Mochiweb (with a bit of TDD) provides a good start if you don't find the screencasts to your liking.
The HTTP flow diagram link is dead. The original version and an updated version, created in collaboration between Alan Dean and Justin Sheehy, are also hosted in the Webmachine project: link to the latest version of the HTTP diagram.
There is a valuable approach: design your gen_server calls/casts in the flavor of REST where possible. You can use messages such as:
{get, Resource}
{set, Resource, Value} % aka PUT
{delete, Resource}
{add, Resource, Value} % aka POST (possible other names are append, modify, or similar)
The mapping is then easy. You can apply some transformation from URI to resource, or just use the identity. For most of your application this should be a worthwhile approach, and you can handle the special cases specially. You might think there will be a big area where you can't use this approach, but worrying about that up front is mostly premature optimization.
Do you really mean a RESTful interface or RPC over HTTP? Building a RESTful interface on top of an existing layer is more work than just exposing existing methods via HTTP.
I'd suggest using Mochiweb or Yaws to implement a (generic) RPC layer.
Just an update: Webmachine has moved to Bitbucket: new link to Webmachine