How does RMI implement the distributed system's single-image appearance to the end user?

We bind the server's name to its object in the RMI registry, and then look up the registry by the server's name or address. How does this keep the distributed system's single image intact, given that a user must not care about where the data comes from?

I don't know what you mean by 'the distributed system's single image', but the object you look up in the Registry can be located anywhere, not just on the host running the Registry, although putting it somewhere else does take a little more work.
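To make the location transparency concrete, here is a minimal self-contained sketch (the port 2099 and the name "GreetingService" are invented for illustration). Server and client halves run in one JVM here, but the client half would work unchanged against a registry on a different host:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiTransparencyDemo {

    // The client programs against this interface only; it never learns
    // where the implementation object actually lives.
    public interface Greeting extends Remote {
        String hello() throws RemoteException;
    }

    static class GreetingImpl implements Greeting {
        public String hello() { return "hello from wherever the object lives"; }
    }

    public static String runDemo() throws Exception {
        // Server side: export the object and register it under a name.
        GreetingImpl impl = new GreetingImpl();
        Greeting stub = (Greeting) UnicastRemoteObject.exportObject(impl, 0);
        Registry registry = LocateRegistry.createRegistry(2099); // port is arbitrary
        registry.bind("GreetingService", stub);
        try {
            // Client side: all it needs is a registry address and a name.
            Registry clientView = LocateRegistry.getRegistry("localhost", 2099);
            Greeting remote = (Greeting) clientView.lookup("GreetingService");
            return remote.hello();
        } finally {
            // Clean up so the JVM can exit.
            registry.unbind("GreetingService");
            UnicastRemoteObject.unexportObject(impl, true);
            UnicastRemoteObject.unexportObject(registry, true);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());
    }
}
```

Whether the remote object runs in the same JVM as the registry or on another machine entirely, the client code is identical: that name-plus-stub indirection is what hides the object's location from the user.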

Related

OPC server: create model information from an existing nodeset file

I have a task to create an OPC server and read live data from an injection molding CNC machine with an OPC client. I have read a lot of documentation and came to the conclusion that I need a model information XML file, from which a compiler generates C# classes.
I have come across the OPCFoundation/UA-Nodeset repository, which (I assume) holds the NodeIds that plastics and rubber devices share. There is also an Opc.Ua.PlasticsRubber.IMM2MES.NodeSet2.xml file, which is the final ingredient that the model compiler produces.
I also assume that when I point my OPC server address at the molding CNC device, I will read, or the machine will push, data under those specific NodeIds (I might be awfully wrong here).
Now the confusion begins: in Opc.Ua.PlasticsRubber.IMM2MES.NodeSet2.xml there are some NodeIds. Is the data published by the molding device using those NodeIds, or are those IDs just unique keys within the model file? Also, when I try to create my own model information file, the NodeIds come out different. Should the NodeIds match the ones in the NodeSet2.xml?
In the end, if I want to read, say, machine status data whose NodeId is 5006, does my model information file have to match that NodeId in order to get the data?
Thank you.
The nodeset in the Companion Specification usually contains only Types (such as ObjectTypes, VariableTypes, etc.) and sometimes an object that serves as an entry point (e.g. DeviceSet of DI). To use these types, you need to create an instance of an object in the address space of your OPC UA server. For example, in your case, the instance might be of the IMM_MES_InterfaceType. The nodes of your instance will have different nodeIds than the types.
As an OPC UA client, you should use the BrowsePath (and the browse service) to locate the correct node in the address space. Once you have the nodeId, you can read or write data from it. In the first step, you can use a generic OPC UA client such as UaExpert for browsing, but it is recommended to implement browsing in your own application. This will allow you to connect to other machines with the same interface.
I think your BrowsePath for the MachineStatus should look something like this:
Objects.DeviceSet.IMM_<Manufacturer>_<SerialNumber>.MachineStatus
An example of a plastics and rubber device should be here
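To illustrate why instance NodeIds differ from the NodeIds in the companion-specification file, here is a toy sketch in plain Java (not a real OPC UA stack; every NodeId number and machine name below is invented). A real client would call its OPC UA library's Browse or TranslateBrowsePathsToNodeIds services rather than a map:

```java
import java.util.HashMap;
import java.util.Map;

public class BrowsePathSketch {

    // Toy stand-in for a server's address space: browse path -> instance NodeId.
    private final Map<String, Integer> addressSpace = new HashMap<>();

    public BrowsePathSketch() {
        // The companion spec's NodeSet2.xml may define MachineStatus on the
        // *type* with NodeId 5006, but each machine *instance* gets its own
        // server-assigned NodeId (60012 and 60347 are made-up examples).
        addressSpace.put("Objects/DeviceSet/IMM_AcmeCorp_001/MachineStatus", 60012);
        addressSpace.put("Objects/DeviceSet/IMM_AcmeCorp_002/MachineStatus", 60347);
    }

    // Resolve by browse path instead of hard-coding the type's NodeId 5006.
    public Integer resolve(String browsePath) {
        return addressSpace.get(browsePath);
    }
}
```

The takeaway: read the value through the NodeId you resolved by browsing the running server, not through the NodeId printed in the nodeset file.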

How long does a wormhole file transfer persist

I am trying to use magic-wormhole to receive a file.
My partner and I are in different time zones, however.
If my partner types wormhole send filename, for how long will this file persist (i.e. how much later can I type wormhole receive keyword and still get the file)?
From the "Timing" section in the docs:
The program does not have any built-in timeouts, however it is expected that both clients will be run within an hour or so of each other ... Both clients must be left running until the transfer has finished.
So... maybe? Consider using some cloud storage instead, depending on the file. You could also encrypt it before uploading it to cloud storage if the contents of the file are private.

Better way to "mutex" than with a .lock file over the network?

I have a small setup consisting of n clients (CL0, CL1, ... CLn) that access a windows share on the server.
On that server, a JSON file holds important data that needs to be readable and writable by all players in the game. It holds key-value pairs that are constantly read and changed:
{
    "CurrentActive": "CL1",
    "DataToProcess": false,
    "NeedsReboot": false,
    "Timestamp": "2020-05-25 16:10"
}
I already got the following done with PowerShell:
If a client writes the file, a lock file is generated that holds the hostname and the timestamp of the access; after the access, the lock file is removed. Each write job first checks whether there is a lock file and whether its timestamp is still valid, and then conditionally writes to the file once the lock is removed.
# Some pseudo code:
if (!(Test-Path $lockfile)) {
    gc $json
} else {
    # wait for some time and try again
    # check if lock is from my own hostname
    # check if timestamp in lock is still valid
}
This works OK, but it was very complex to build, since I needed to implement the lock mechanism myself, plus a way to force-remove the lock when a client is unable to remove the file for various reasons, and so on (and I am sure I also included some errors...). Also, in some cases reading the file returns an empty one. I assume this happens in the sweet spot during a write by another client, when the file has been flushed but not yet filled with the new content.
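For comparison, here is a sketch of the same idea in Java (the paths, hostnames, and five-minute stale timeout are placeholders). Two details address the problems described above: Files.createFile is a single create-if-absent operation rather than a check-then-write race, and writing to a temp file followed by a rename means readers see either the old or the new content, never an empty file. Both guarantees are weaker on SMB shares than on a local disk, so treat this as best-effort, not a proof of correctness:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.time.Duration;
import java.time.Instant;

public class LockedJsonStore {

    // Returns true if we now hold the lock. Create-if-absent is one
    // operation here, unlike a separate Test-Path followed by a write.
    public static boolean tryAcquireLock(Path lockFile, String hostname, Duration staleAfter) throws IOException {
        try {
            Files.createFile(lockFile); // fails if the lock already exists
            Files.write(lockFile, (hostname + "\n" + Instant.now()).getBytes(StandardCharsets.UTF_8));
            return true;
        } catch (FileAlreadyExistsException e) {
            // Break locks older than the stale timeout (holder probably died).
            Instant mtime = Files.getLastModifiedTime(lockFile).toInstant();
            if (mtime.isBefore(Instant.now().minus(staleAfter))) {
                Files.deleteIfExists(lockFile);
            }
            return false; // caller waits and retries
        }
    }

    // Write to a temp file, then rename over the target, so a concurrent
    // reader never observes a half-written or truncated JSON file.
    public static void writeAtomically(Path target, String json) throws IOException {
        Path tmp = Files.createTempFile(target.getParent(), "state", ".tmp");
        Files.write(tmp, json.getBytes(StandardCharsets.UTF_8));
        try {
            Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Some network file systems cannot do an atomic rename.
            Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void releaseLock(Path lockFile) throws IOException {
        Files.deleteIfExists(lockFile);
    }
}
```

That said, the cleanest fix is usually to move shared mutable state out of a plain file on a share and into something with real concurrency control, such as a small database or a single service that owns the file.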
I was looking at other options such as a mutex, and it works like a charm on a single client with multiple threads, but since it relies on SafeHandles scoped to one system, it does not work with multiple clients over the network. Is it possible to get a mutex working over the network?
I also stumbled upon AlphaFS, which would allow me to do transactional processing on the filesystem, but that doesn't fix the root cause: multiple clients accessing one file at the same time.
Is there a better way to store the data? I was thinking about the Windows Registry, but I could not find anything about mutexes there.
Any thoughts highly appreciated!

HAR file - access "Size" column entries from Chrome Dev Tools Network tab?

I am working on measuring the percentage of GET requests being handled / returned by a site's service worker. Within Chrome Dev Tools there is a "Size" column that shows "(from ServiceWorker)" for files matched by the cache.
When I right-click on any row and choose "Save as HAR with content", then open the downloaded file in a text editor, searching for "service worker" yields some results (within some responses there is "statusText": "Service Worker Fallback Required"), but none of them look related to the fact that some requests were handled by the service worker.
Is this information I'm looking for accessible anywhere within the downloaded HAR file? Alternatively, could this be found out by some other means like capturing network traffic through Selenium Webdriver / ChromeDriver?
It looks like the content object defines the size of requests: http://www.softwareishard.com/blog/har-12-spec/#content
But I'm not seeing anything in a sample HAR file from airhorner.com that would help you determine that the request came from a service worker. Seems like a shortcoming in the HAR spec.
It looks like Puppeteer provides this information. See response.fromServiceWorker().
I tried to investigate this a bit in Chrome 70. Here's a summary.
I'm tracking all requests for the https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.5/require.min.js URL, which is a critical script for my site.
TL;DR
As Kayce suggests, within a Chrome HAR file there is no explicit way of determining that an entry was handled by a service worker (as far as I can see). I also haven't been able to find a combination of existing HAR entry fields that would positively identify an entry as being handled by a service worker (but perhaps there is such a combination).
In any case, it would be useful for browsers to record any explicit relationships between HAR entries, so that tools like HAR Viewer could recognise that two entries are for the same logical request, and therefore not display two requests in the waterfall.
Setup
Clear cache, cookies, etc, using the Clear Cache extension.
First and second entries found in HAR
The first entry (below) looks like a request that is made by the page and intercepted/handled by the service worker. There is no serverIPAddress and no connection, so we can probably assume this is not a 'real' network request.
The second entry is also present as a result of the initial page load (there has been no other refresh/reload): you get 2 entries in the HAR for the same URL on initial page load, if the request passes through a service worker and reaches the network.
The second entry (below) looks like a request made by the service worker to the network. We see the serverIPAddress and response.connection fields populated.
An interesting observation here is that entry#2's startedDateTime and time fall 'within' the startedDateTime and time of the 'parent' request/entry.
By this I mean entry#2's start and end time fall completely within entry#1's start and end time. Which makes sense as entry#2 is a kind of 'sub-request' of entry#1.
It would be good if the HAR spec had a way of explicitly recording this relationship. I.e. that request-A from the page resulted in request-B being sent by the service worker. Then a tool like HAR Viewer would not display two entries for what is effectively a single request (would this cover the case where a single fetch made by the page resulted in multiple service worker fetches?).
Another observation is that entry#1 records the request.httpVersion and response.httpVersion as http/1.1, whereas the 'real' request used http/2.0.
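These observations suggest a rough heuristic for flagging the page-side entry, though, as the TL;DR says, nothing in the HAR spec makes this reliable. The sketch below just checks for the absence of the serverIPAddress and connection keys in an entry's raw JSON, and can misclassify cached or failed requests:

```java
public class HarHeuristic {

    // Heuristic only: an entry with neither serverIPAddress nor connection
    // was probably not a real network request, which, for a URL that also
    // has a second 'real' entry, hints at service worker interception.
    public static boolean looksServiceWorkerHandled(String entryJson) {
        return !entryJson.contains("\"serverIPAddress\"")
            && !entryJson.contains("\"connection\"");
    }

    public static void main(String[] args) {
        String pageSideEntry = "{\"request\": {\"url\": \"https://example.com/app.js\"}, \"response\": {\"status\": 200}}";
        String networkEntry  = "{\"request\": {\"url\": \"https://example.com/app.js\"}, \"response\": {\"status\": 200}, \"serverIPAddress\": \"104.16.0.1\", \"connection\": \"1337\"}";
        System.out.println(looksServiceWorkerHandled(pageSideEntry)); // true
        System.out.println(looksServiceWorkerHandled(networkEntry));  // false
    }
}
```

Pairing this with the timing observation above (the page-side entry's start and end times bracketing a second entry for the same URL) would tighten the guess, but it remains a guess.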
Third entry (from pressing enter in the address bar after initial page load)
This entry appears in the HAR as a result of pressing enter in the address bar. The _fromCache field is memory as expected, as the resource should be served from regular browser cache in this situation (the resource uses cache-control=public, max-age=30672000).
Questions:
Was this entry 'handled' by the service worker's fetch event?
Maybe when a resource is in memory cache the service worker fetch event isn't fired?
Or is service worker effectively 'transparent' here?
There are no serverIPAddress or connection fields, as expected, since there was no 'real' network request.
There is a pageref field present, unlike for entry#2 (entry#2 was a service worker initiated network request).
Fourth entry
The preparation work for this entry was:
Add the resource to a service-worker-cache (https://developer.mozilla.org/en-US/docs/Web/API/Cache).
Use Clear Cache extension to clear all caches (except service-worker-cache).
Reload page.
This entry has _fromCache set to disk. I assume this is because the service-worker-cache was able to satisfy the request.
There is no serverIPAddress or connection field set, but the pageref is set.
Fifth entry
The preparation work for this entry was:
Use devtools to enter 'Offline' mode.
This entry is basically the same as entry#4.

Bind multiple remote objects to the same RMI registry

I have a client program that calls remote methods on a server. Now, I want to create 3 different servers based upon the IP address sent by the client.
Question: should I create 3 different remote objects and bind them to the same registry, or should I create 3 different remote objects and bind each to its own registry?
What I am doing right now is using one remote object and binding it under all 3 names in the same registry:
Remote obj = UnicastRemoteObject.exportObject(this, 2026);
Registry r = LocateRegistry.createRegistry(2026);
r.bind("NA", obj);
r.bind("EU", obj);
r.bind("AS", obj);
It has been a long time since I worked with RMI; be that as it may, my advice is to bind all objects to the same registry, which, I guess, you are already doing.
There's no reason to use multiple Registries in the same host, especially if they are all started by the same JVM. Use a single one. Multiple entries in a single hash table inside one Registry are a lot cheaper than multiple Registries.
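Here is a self-contained sketch of the single-registry approach (the region names follow the question; port 2098 and the per-region implementations are invented for illustration). Binding three different objects under three names in one registry works just as well as binding one object under three names:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class MultiBindDemo {

    public interface RegionService extends Remote {
        String region() throws RemoteException;
    }

    static class RegionImpl implements RegionService {
        private final String name;
        RegionImpl(String name) { this.name = name; }
        public String region() { return name; }
    }

    // Bind one object per region in a single registry, look one up, clean up.
    public static String runDemo(String wanted) throws Exception {
        Registry registry = LocateRegistry.createRegistry(2098);
        String[] names = {"NA", "EU", "AS"};
        RegionImpl[] impls = new RegionImpl[names.length];
        for (int i = 0; i < names.length; i++) {
            impls[i] = new RegionImpl(names[i]);
            registry.bind(names[i], (RegionService) UnicastRemoteObject.exportObject(impls[i], 0));
        }
        try {
            // A remote client would call LocateRegistry.getRegistry(host, 2098) first.
            RegionService svc = (RegionService) registry.lookup(wanted);
            return svc.region();
        } finally {
            for (int i = 0; i < names.length; i++) {
                registry.unbind(names[i]);
                UnicastRemoteObject.unexportObject(impls[i], true);
            }
            UnicastRemoteObject.unexportObject(registry, true);
        }
    }
}
```

The client picks a service purely by name ("NA", "EU", "AS"), so whether those names point at one shared object or three separate ones is an implementation detail the client never sees.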