I would like to set up two different queues using TORQUE on ROCKS. Each queue would serve a different set of compute nodes, but all of these compute nodes are reachable from the same mother (head) node. What I need to know is how to do this using the qmgr command, and also how to add different nodes to the two different queues.
I ran into the same problem. Perhaps you can find a solution here: Queue Configuration
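In case that link moves: the usual pattern in TORQUE is to tag each group of compute nodes with a property in the server's nodes file and then point each queue at that property via resources_default.neednodes. A minimal sketch (the queue names, node names, and the groupA/groupB properties are made up; the nodes-file path may differ on your install):

    # Tag the nodes in TORQUE's nodes file (often /var/spool/torque/server_priv/nodes),
    # then restart pbs_server so it rereads the file:
    #   compute-0-0 np=8 groupA
    #   compute-0-1 np=8 groupA
    #   compute-1-0 np=8 groupB
    #   compute-1-1 np=8 groupB

    # Create one queue per node group with qmgr:
    qmgr -c "create queue qA queue_type=execution"
    qmgr -c "set queue qA resources_default.neednodes = groupA"
    qmgr -c "set queue qA enabled = True"
    qmgr -c "set queue qA started = True"

    qmgr -c "create queue qB queue_type=execution"
    qmgr -c "set queue qB resources_default.neednodes = groupB"
    qmgr -c "set queue qB enabled = True"
    qmgr -c "set queue qB started = True"

Jobs submitted with qsub -q qA should then only run on the groupA nodes, and likewise for qB.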
What would be the best approach if the nodes are not in the same network?
I have source and destination nodes in different networks. The forklifts pick an item from the source node, process it, store it in storage, and later retrieve it to drop off at the destination for shipment.
Because the nodes are in different networks, the forklifts are not following the paths.
If you work on the same Level, you should ideally connect your networks (just draw a path between them).
If your networks are on separate levels, use the "Lift" block. This will allow agents to move across networks on different levels.
PS: If you are on the same level but do not want to connect the networks manually, there is a trick: put each of your networks on a separate Level, give the Levels the same height, and use "Lift" blocks.
Can someone please point me to the part of the documentation that explains the difference between nodes and indices? I'm going over code written by someone else, and it seems to use nodes and indices interchangeably. Also, when I apply NodeToIndex or IndexToNode to a variable, the value does not change.
Please read: https://developers.google.com/optimization/routing
Indices are internal objects belonging to the solver; nodes are linked to the distance matrix and the user's visits.
In the underlying constraint programming model of routing problems, each stop is visited exactly once, and each stop is an index. The routing library, however, allows several vehicles to start and end at the same stop, which causes a conflict because that stop would then be visited by several vehicles. In OR-Tools the conflict is resolved by creating dummy indices for nodes that are visited by several vehicles. Hence there may be several indices that map to the same node; the depot is a typical example.
This page about the auxiliary graph helped me: https://acrogenesis.com/or-tools/documentation/user_manual/manual/tsp/model_behind_scenes.html#the-auxiliary-graph
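To make the mapping concrete, here is a small Python sketch (the sizes are made up): with 4 nodes, 2 vehicles, and node 0 as the depot, the RoutingIndexManager creates extra start/end indices, so several indices can map to the same node.

    from ortools.constraint_solver import pywrapcp

    # 4 nodes (locations), 2 vehicles, both starting and ending at node 0 (the depot).
    manager = pywrapcp.RoutingIndexManager(4, 2, 0)

    print(manager.GetNumberOfNodes())    # 4
    print(manager.GetNumberOfIndices())  # more than 4: the depot is duplicated into
                                         # dummy start/end indices, one pair per vehicle

    # For non-depot nodes the two numbering schemes often coincide, which is why
    # NodeToIndex/IndexToNode can look like they do nothing on some variables.
    for node in range(1, 4):
        index = manager.NodeToIndex(node)
        print(node, '->', index, '->', manager.IndexToNode(index))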
I have an IoT application which is architected in the following way:
There are plants, each of which has its own set of devices.
The entire pipeline is deployed on Kubernetes and consists of the following units:
A job that wakes up every x seconds, reads data from all plants, and pushes the data to an MQTT broker.
An MQTT broker.
A subscriber service that receives data from all plants and pushes it to a time-series database.
Jobs running at intervals of 5 min, 15 min, 1 hr, 4 hr, and 1 day that downsample the data of all the plants and push it to separate downsampled tables.
Jobs running every day that check whether there are holes/gaps in the data and try to fill them where possible.
This works fine for a few plants, but as the number of plants increases it becomes difficult to perform the data retrieval, pushing, and downsampling with a single service/job: it becomes too memory-intensive and chokes in multiple places. As a temporary fix, scaling vertically helps to some extent, but then I need to put all the pods on a single machine that I keep scaling up, and scaling multiple nodes vertically is quite expensive.
Hence I am planning to break the system down so that it can scale horizontally, and I am looking for suggestions on possible ways to achieve this.
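One common pattern for this is to shard the plants across identical worker replicas, so each instance of the polling job, the downsampling jobs, and the gap-filling job only handles its own subset. A minimal Python sketch of the idea; the environment variables and plant IDs are hypothetical (a Kubernetes StatefulSet, for example, could supply the replica ordinal):

    import hashlib
    import os

    # Hypothetical: each replica learns its position in the fleet from the environment.
    REPLICAS = int(os.environ.get("REPLICAS", "4"))
    WORKER_INDEX = int(os.environ.get("WORKER_INDEX", "0"))

    def owns_plant(plant_id: str) -> bool:
        """Stable hash partitioning: every plant belongs to exactly one replica."""
        digest = hashlib.sha256(plant_id.encode()).digest()
        return int.from_bytes(digest[:4], "big") % REPLICAS == WORKER_INDEX

    all_plants = ["plant-a", "plant-b", "plant-c", "plant-d"]  # hypothetical IDs
    my_plants = [p for p in all_plants if owns_plant(p)]
    # Each job then reads, pushes, or downsamples only my_plants, so adding
    # replicas spreads the memory and CPU load horizontally.

With this in place, growth in the number of plants is handled by raising the replica count rather than by resizing a single machine.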
Is there a way to make SPSS Modeler output association rules when performing a clustering analysis like K-means? I'd like to have the set of rules that assigns any observation to a certain cluster (like Var1<0 and Var2 = 1 then cluster = A, and so on) so that I'm able to use it outside of SPSS.
I looked for this in the SPSS online tutorials but with no success. I know that it outputs the rules for decision tree nodes, so it seemed natural to me that it would work the same way for K-means and the like. Thank you in advance.
You could create a Derive node with that logic (if Var1<0 and Var2 = 1 then cluster = 1 else 0 endif) and then use that new variable as input to the K-Means model node. I use some similar variables in the Anomaly node and it works fine for me. Just remember to put a Type node in front of the K-Means node and set that variable as input.
Hope this helps!
Those are two different types of analysis and I'd kindly ask: what do you really want to achieve?
Clustering means that you group observations.
Association rules (recommendation engine) would suit you if your observations have made multiple activities or choices and you want to see the next most likely choice.
But what you described looks more like a classification task to me, i.e. a different approach: you described a rule set, and that is exactly what certain classification models return.
http://share.opsy.st/56e7090e92b6c-MathWorks_Figure+1_Machine+Learning+Types.jpg
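If the goal is a rule set you can apply outside SPSS, one workaround (not an SPSS Modeler feature, just a sketch) is to run the clustering and then fit a decision tree that predicts the cluster labels; the tree's splits are exactly the Var1<0-style rules described in the question. A minimal Python/scikit-learn example with made-up data:

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Made-up data standing in for the real observations.
    X, _ = make_blobs(n_samples=500, centers=3, n_features=2, random_state=0)

    # 1. Cluster the observations with K-means.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # 2. Fit a shallow tree on the cluster labels and print its rules
    #    ("if Var1 <= ... then cluster 2"-style conditions).
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
    print(export_text(tree, feature_names=["Var1", "Var2"]))

The tree only approximates the cluster boundaries, so check its accuracy against the actual cluster labels before relying on the extracted rules.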
Use case:
Nodes are documents.
Links are links between documents that have an associated correlation (e.g., 0 to 1).
Being new to this, it is not clear to me how to apply those correlations or "weights" so that the documents cluster in a logical manner.
Can anyone point me to an existing example?
Thanks in advance.
Positioning nodes is done by the layout. Use any force-directed (physics) layout, like CoSE or Cola. Those layouts allow you to specify how strongly nodes should be pulled towards one another on a per-edge basis.
Try some of the force-directed layouts to see which one gives results that you like. Each one has different trade-offs (speed, aesthetics, etc.).
Just make sure to set the edge force for whatever layout, e.g. edgeElasticity for CoSE, to be proportional to edge.data('weight').
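For example, with the built-in CoSE layout it could look like this (assuming cy is your initialized Cytoscape instance and each edge stores its correlation in a 'weight' data field; the factor 32 is just CoSE's default baseline):

    // Force-directed layout whose per-edge force scales with the stored
    // correlation, so strongly correlated documents are pulled together.
    cy.layout({
      name: 'cose',
      edgeElasticity: function (edge) {
        return 32 * edge.data('weight');
      }
    }).run();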
Example: http://js.cytoscape.org/demos/7b511e1f48ffd044ad66/