I implemented the DiscoveryServiceSet interface from Milo, and I already have a demo UA server implemented with the Milo SDK. I would like to create an LDS server by adding the DiscoveryServices I have implemented. How do I enable this? I looked in OpcUaServer.class, and its constructor adds all the services; however, it uses this.sessionManager, which I cannot access, and I cannot modify the constructor. I also saw that UaStackServer.class has the addServiceSet methods built in. Do I have to use one of those methods to enable my DiscoveryServices? If so, the UaStackServer constructor has already added the DefaultDiscoveryService using those methods. How can I override it to use my implementation?
I am currently writing a library to be used by Wayland client software. The library is intended to be mostly independent of the client toolkit (currently only Qt, but other Wayland-enabled toolkits should be able to use it as well). It requires only a wl_display pointer passed to its initialization routine, which is retrieved from the GUI toolkit. After initialization is done, the library should be able to work without further contact with the toolkit, to keep it cross-toolkit compatible.
The problem arises when my library requires a couple of global object proxies (e.g. wl_output). The library uses a custom wl_registry to bind its own proxies to the required global objects. However, from the server's perspective all proxies to these objects are equivalent, so when the server sends events that contain object proxies, they may reference either the toolkit's or the library's proxy. This leads to problems because there is no easy way to differentiate the two: when the toolkit receives such an event, it blindly assumes that the user data of the proxy belongs to the toolkit and uses it, even if the proxy it received belongs to my library.
Is there any way to reconcile such unrelated code, or is this use beyond the scope of the Wayland library/protocol, meaning I should redesign my solution?
Qt Wayland developer here.
"When the toolkit receives such events, it blindly assumes that the user data of the proxy belongs to the toolkit and uses it, even if the proxy it received belongs to my library."
Are you sure about this part? When you bind to the global, you create a new proxy object, so I don't see how the toolkit could know about it... Or are you talking about the arguments in events sent by globals, i.e. wl_pointer.set_cursor and the like? It would be nice if you could be more specific about what goes wrong...
After updating to .NET Core 2.2, we have the following issue:
Autofac.Core.Registration.ComponentNotRegisteredException: 'The requested service 'Microsoft.AspNetCore.Hosting.Server.IServer' has not been registered. To avoid this exception, either register a component to provide the service, check for service registration using IsRegistered(), or use the ResolveOptional() method to resolve an optional dependency.'
We are using preBuilder.Populate(services);.
Any ideas?
Thanks for your help
I had the same problem when following the Microsoft migration guide for moving from Core 2.1 to 2.2.
The problem can occur if you are not using WebHost.CreateDefaultBuilder to create the web host builder, and you change the CreateWebHostBuilder method of the Program class to call ConfigureKestrel instead of UseKestrel, as the migration guide suggests.
As far as I understand, if you use WebHost.CreateDefaultBuilder to create the default web host builder, it already calls UseKestrel, which registers the IServer service. But you might run into conflicts if you also use UseIIS, so to avoid these problems there is a new ConfigureKestrel call that does not register IServer. So I think that if you are not using WebHost.CreateDefaultBuilder, you still need to call UseKestrel or UseIIS explicitly.
Of course it might be something else that is causing the problems in your case, but I suspect that following the migration guide blindly (as I did) could cause problems for many developers out there.
This method is package-private (I only checked version 7.6.0), but I found it very hard to build proper failsafes into more complex components without being able to check the initialization state of the internal components. If I could access that method publicly, it would certainly do no harm (it's a read-only method). Yet I did not find any alternative way of checking whether a component instance has passed the initialization phase.
I see that the method is public in 8.x (https://github.com/apache/wicket/commit/d1710298c7e371f260299f732c58d0bf4d647161). So you have two options: 1) use Wicket 8.0.0-M4, or 2) file a ticket to make it public in 7.x as well.
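In the meantime, a reflection-based workaround is possible on 7.x. This is only a sketch and assumes the method in question is Component#isInitialized(), the one made public in the commit linked above:

    import java.lang.reflect.Method;

    import org.apache.wicket.Component;

    // Sketch of a Wicket 7.x workaround: reads the (assumed) package-private
    // Component#isInitialized() via reflection. If the method has a different
    // name in your Wicket version, adjust accordingly.
    public final class ComponentStateHelper {

        private ComponentStateHelper() {
        }

        public static boolean isInitialized(Component component) {
            try {
                Method m = Component.class.getDeclaredMethod("isInitialized");
                m.setAccessible(true); // package-private, so make it callable from here
                return (Boolean) m.invoke(component);
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException(
                        "Could not read initialization state of " + component, e);
            }
        }
    }

Reflection like this tends to break between releases, so filing the 7.x ticket is still the cleaner route.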
In a custom plugin in CustusX, I use mServices->patientModelService->getPatientLandmarks()->setLandmark to programmatically alter some landmarks. I want to perform the registration with an already present volume.
In LandmarkPatientRegistrationWidget in org.custusx.registration.method.landmark, performRegistration() calls mServices.registrationService->doPatientRegistration().
However, I'm not sure whether my approach to get hold of a registrationService instance is right.
I have so far added org_custusx_registration to the CMakeLists.txt file and added "cxRegistrationService.h" and <cxRegServices.h> as includes.
Now I can define a RegServices mRegServices and initialize it in the constructor with the mContext of the plugin.
But does this create a registration service of my own, or does it give me access to the already existing one? How can I get access to the already running registration service?
Your method correctly reuses the existing running registration service.
The default setup of CustusX registers a single instance (object) implementing the cx::RegistrationService interface. The cx::RegServices helper class contains a cx::RegistrationServiceProxy, which acts as a smart pointer referring to that object. Service objects are only created by the plugin that implements them: users simply get access to these objects.
The RegistrationServiceProxy implements this using a ctkServiceTracker and related classes; see, for example, this tutorial on OSGi, section 5.4 (in Java, but the principles apply).
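To illustrate the pattern from that tutorial in plain Java: this is only a sketch of the general OSGi mechanism using org.osgi.util.tracker.ServiceTracker and a made-up RegistrationService interface, not the actual CustusX/CTK classes, but the proxy works along the same lines.

    import org.osgi.framework.BundleContext;
    import org.osgi.util.tracker.ServiceTracker;

    // A tracker watches the service registry for an implementation of a given
    // interface and hands out the instance registered by whichever plugin
    // provides it; the consumer never creates the service itself.
    public class RegistrationServiceClient {

        // Stand-in for the tracked interface (cx::RegistrationService on the CustusX side).
        public interface RegistrationService {
            void doPatientRegistration();
        }

        private final ServiceTracker<RegistrationService, RegistrationService> tracker;

        public RegistrationServiceClient(BundleContext context) {
            tracker = new ServiceTracker<>(context, RegistrationService.class, null);
            tracker.open(); // start watching the service registry
        }

        public void register() {
            RegistrationService service = tracker.getService(); // null if not (yet) registered
            if (service != null) {
                service.doPatientRegistration();
            }
        }

        public void dispose() {
            tracker.close();
        }
    }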
I'm writing a Java-backed webscript to deploy in Alfresco and call via REST. This webscript must perform a set of three operations (find a path, create a folder, and upload a document).
I read up on this and found similar examples that do these operations through the native Alfresco API, with methods like getFileFolderService, getContentService, etc. of the Repository or ServiceRegistry classes. All in Java, without JavaScript.
But I would rather use REST calls than the Alfresco API inside my webscript. I think that if webscripts already exist to do these operations, it is easier to call them than to try to do it with Alfresco API methods. And if the API changes in future versions, the REST calls would remain the same. But I'm new here and I don't know if I'm wrong.
In summary: to do these three operations, one after another, in my backed webscript, which is better and why: native API methods, or REST calls to existing webscripts?
And if I go for the second option, is it even possible? Would using the HttpClient class and GetMethod/PostMethod for the REST calls inside my Java webscript be the best option, or could that give me problems, given that a REST call to my backed webscript would itself make further REST calls to other webscripts?
Thanks a lot!
I think it's bad practice to do it like this. Across many Alfresco versions the default services didn't change a bit, and even when they did change, the old methods were still available as deprecated ones.
The REST API has changed as well. If you want to make an upgrade-proof system, I guess it's better to stick with the web services (which haven't changed since version 2.x) or go with CMIS.
But then it doesn't make sense to have your code running within Alfresco, so putting it in a separate interface is better.
I'd personally just stick with the JavaScript API, which didn't change a lot. Yes, more functions were added to it, but the default actions for searching and CRUD remained the same.
You could even do both: have your Java-backed script do whatever fancy stuff is needed and send the result to the JavaScript controller, which does the default stuff.
Executing HTTP calls against the process you are already in is a very, very bad idea in general. It is slower, much more complex and error-prone, hogs more resources (two threads), and in your case you will even lose transaction safety: just imagine the last call failing for some reason. Besides, you will most likely have to handle security context propagation yourself. Use the native public API and it will be easy, safe and stable.
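To illustrate that last point, here is a rough sketch of the three operations against the public foundation services from inside a Java-backed webscript. It assumes the ServiceRegistry is injected into the webscript bean and that the starting NodeRef (e.g. Company Home, obtainable from the Repository helper mentioned in the question) is passed in; the path elements, names and mimetype are placeholders.

    import java.io.InputStream;
    import java.util.Arrays;

    import org.alfresco.model.ContentModel;
    import org.alfresco.service.ServiceRegistry;
    import org.alfresco.service.cmr.model.FileFolderService;
    import org.alfresco.service.cmr.model.FileNotFoundException;
    import org.alfresco.service.cmr.repository.ContentWriter;
    import org.alfresco.service.cmr.repository.NodeRef;

    public class UploadHelper {

        private final ServiceRegistry serviceRegistry; // injected via Spring in the webscript bean definition

        public UploadHelper(ServiceRegistry serviceRegistry) {
            this.serviceRegistry = serviceRegistry;
        }

        public NodeRef createFolderAndUpload(NodeRef companyHome, InputStream content)
                throws FileNotFoundException {
            FileFolderService fileFolderService = serviceRegistry.getFileFolderService();

            // 1. find a path below Company Home (the path elements are just an example)
            NodeRef parent = fileFolderService
                    .resolveNamePath(companyHome, Arrays.asList("Sites", "my-site", "documentLibrary"))
                    .getNodeRef();

            // 2. create a folder under the resolved parent
            NodeRef folder = fileFolderService
                    .create(parent, "MyFolder", ContentModel.TYPE_FOLDER)
                    .getNodeRef();

            // 3. create a content node and write the uploaded stream into it
            NodeRef document = fileFolderService
                    .create(folder, "my-document.pdf", ContentModel.TYPE_CONTENT)
                    .getNodeRef();

            ContentWriter writer = serviceRegistry.getContentService()
                    .getWriter(document, ContentModel.PROP_CONTENT, true);
            writer.setMimetype("application/pdf");
            writer.putContent(content);

            return document;
        }
    }

Everything above runs in the webscript's own transaction and security context, which is exactly what gets lost when you loop back over HTTP.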