How to make Windows' Bonjour resolve foo.bar.local subdomains created by Avahi

Why can't Windows' Bonjour (the Apple one) automatically resolve foo.bar.local, when Ubuntu and macOS can?
foo.local, on the other hand, is resolved without issues by every OS.
Here's my avahi-daemon.conf:
[server]
host-name=foo
domain-name=bar.local
...
This discussion mentions that Windows' Bonjour implementation does not support aliases; is this the culprit?
How does this tool differ from my solution?
EDIT: I don't want to set an alias; foo.bar.local is different from bar.local.
I just want to have different hostnames under the same "domain".
For example, foo.bar.local is 192.168.0.8 while foo1.bar.local is 192.168.0.9.
I won't have foo.local, bar.local and foo.bar.local all in the same network. I will only use names of the form foo.bar.local, with only the foo part varying (*.bar.local).
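For clarity, the second machine (foo1.bar.local, 192.168.0.9) would have an avahi-daemon.conf along the same lines as the one above, with only the host-name changed; something like:
[server]
host-name=foo1
domain-name=bar.local
...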

From my current findings, this seems to be intentional. Excerpt from the source code (mDNSResponder-878.30.4, function NSPLookupServiceBegin in mdnsNSP.c):
// <rdar://problem/4050633>
// Don't resolve multi-label name
// <rdar://problem/5914160> Eliminate use of GetNextLabel in mdnsNSP
// Add checks for GetNextLabel returning NULL, individual labels being greater than
// 64 bytes, and the number of labels being greater than MAX_LABELS
replyDomain = translated;
while (replyDomain && *replyDomain && labels < MAX_LABELS)
{
    label[labels++] = replyDomain;
    replyDomain = GetNextLabel(replyDomain, text);
}
require_action( labels == 2, exit, err = WSASERVICE_NOT_FOUND );
It returns an error if the name has more than two labels, as in the case of foo.bar.local.
In my tests, I simply removed the last line. With the new build, names with multiple labels resolved successfully, and I have not encountered any side effects so far.
Does anybody have an idea about the intention behind not resolving multi-label names?

Related

Cimplicity Screen - one object/button that is dependent on hundreds of points

So I have created a huge screen that essentially just shows the robot status for every robot in this factory (individually)… At the very end of the project, they decided they want one object on the screen that blinks if any of the 300 robots faults. I am trying to think of a way to make this work. Maybe a global script of some kind? The problem is that I do not do much scripting in Cimplicity, so any help is appreciated.
All the points currently used on this screen (to indicate a fault) have very similar names… as in, the beginning is the same… so I was thinking of a script that could recognize whether a bit is high based on PART of its point name. The end changes a little each time, but I am sure there is a way to only look for part of a string and ignore the rest. If the end has to be hard-coded, that's fine.
You can use a Python script in Cimplicity.
I will not go into detail on the use of Python in Cimplicity, which is well described in the documentation indicated above.
Here's an example of what can be done. Note that I don't have a way to test it, and of course it will only work if your robots are declared following the format Robot_1, Robot_2, Robot_3 ... Robot_10 ... Robot_300. It also depends on the name and type of the fault variable; since you didn't define it, I assume it is an integer, with ZERO indicating no error. If you use something other than that, you can easily change it.
import cimplicity
(...)
OneRobotWithFault = False
# Here you get the values and check for a fault
# (robots are assumed to be declared as Robot_1 ... Robot_300)
for i in range(1, 301):
    pointName = f'MyFactory.Robot_{i}.FaultCode'
    robotFaultCode = cimplicity.point_get(pointName)
    if robotFaultCode > 0:
        OneRobotWithFault = True
        break
# Set the status to the variable "WeHaveRobotWithFault"
cimplicity.point_set("WeHaveRobotWithFault", OneRobotWithFault)

Does Get-NetIPConfiguration return interfaces in binding order?

I noticed that
(Get-NetIPConfiguration).InterfaceIndex
always seems to return the indexes in interface binding order for me. Is this the way it's supposed to function, always returning them in interface binding order, or is it just a fluke that it always does so for me in this case?
You could try
(Get-NetIPConfiguration).InterfaceIndex | Sort-Object { [int]$_ }
OK, got this figured out. Starting with Windows 10, the preferred IP-enabled (either IPv4 or IPv6) interface is the one with the lowest sum of the IPv4 (or IPv6) route metric and the interface metric (route metric + interface metric). This may not be the one with the default route 0.0.0.0/0, depending on whether you have multiple interfaces set up with multiple default routes. That interface will bind to the one with the next lowest sum on the same route, and so forth. It also turns out this may not be the order of the interface GUIDs in the registry value at 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage\Bind'.
But how they appear in a list, including through other methods, is determined just by the interface metric. So in Windows 10 there is (I'm going to call it) 'route binding order' and list order. Although these may seem to match up sometimes, because most people don't have multiple interfaces in their systems, the difference becomes important if you start playing around with setting interface and route metrics to ensure the interface you want is the one being used for a particular route. For that you want the actual binding order and not the list order.
So, in Windows 10, to get the correct route binding order (vs. list order) you have to examine the route table and do the 'math', summing route metric + interface metric, to get the actual binding order, and use that same 'sum' methodology when setting up interfaces if you are going to do it yourself.
Below is an example of this method that I threw together quickly. I have one Windows 10 machine with two hardware interfaces, one for a hard-line connection and the other Wi-Fi. I also have OpenVPN with a TAP adapter installed on this system, so I need to know which hardware interface OpenVPN is binding to through the TAP adapter. The TAP adapter is a virtual (software) interface. When the TAP is installed/connected it uses an interface metric less than 10 (Windows' automatic interface metric is never lower than 10, although you can manually set it lower), and the TAP uses a route metric of 0. So the TAP is going to be the lowest-'sum' IP-enabled interface in the system. With all this info and the code below, I got the hardware interface the OpenVPN/TAP adapter is binding to (code thrown together quickly, will refine later when I have the time).
$route = Get-NetRoute
$rtmetric = $route.RouteMetric
$ifmetric = $route.InterfaceMetric
$index = $route.ifIndex
# Sum of route metric + interface metric for every route, with the matching ifIndex
$sumarray = @(); $indexarray = @()
For ($i = 0; $i -lt $rtmetric.Length; $i++)
{
    $sum = $rtmetric[$i] + $ifmetric[$i]
    $sumarray += $sum
    $indexarray += $index[$i]
}
# The lowest sum is the TAP itself; the next lowest sum is the interface it binds to
$sumsort = $sumarray | Sort-Object -Unique
[int]$sumlowest = ($sumsort | Measure-Object -Minimum).Minimum
$pos = [Array]::IndexOf($sumsort, $sumlowest) + 1
$sumget = $sumsort[$pos]
$posA = [Array]::IndexOf($sumarray, $sumget)
$ifindexbind = $indexarray[$posA]
Get-NetAdapter | Where-Object ifIndex -eq $ifindexbind
So, that's the long way around to get the route binding order vs. the list order. If you do this:
(Get-NetAdapter).InstanceID
you will get the bindings in list order, as they are in the registry key, starting from the bottom and working your way up. But if you do this:
(Get-NetIPConfiguration).NetAdapter.InstanceID
you will notice that as interfaces are disabled or enabled the order given also changes, and that's because (Get-NetIPConfiguration).NetAdapter.InstanceID gives the route binding order and not the registry list order. Disabling an interface removes it from the route table, but just disabling it does not remove its binding to another interface; the priority changes, so the previously bound interface becomes the preferred interface in the route and moves to the top. When that disabled interface, with its binding to the enabled interface, is enabled again, it takes its place in the route binding on top. The one on top will be the preferred adapter (the one with the lowest 'sum' value), the one under it will be the next preferred (the one with the next lowest 'sum' value) and, if on the same route, is the interface bound to the one above it; thus, binding order. So using (Get-NetIPConfiguration).NetAdapter.InstanceID, the needed route binding order for enabled and connected interfaces can be had without resorting to reading the registry.
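As a convenience, here is a small untested one-liner that prints that same route-binding-ordered list with the adapter names next to the InstanceIDs (Name, InterfaceDescription, ifIndex and InstanceID are standard Get-NetAdapter properties):
(Get-NetIPConfiguration).NetAdapter | Select-Object Name, InterfaceDescription, ifIndex, InstanceID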

How does resource.data.size() work in firestore rules (what is being counted)?

TLDR: What is request.resource.data.size() counting in Firestore rules when writing, say, some booleans and a nested Object to a document? Not sure what the docs mean by "entries in the map" (https://firebase.google.com/docs/reference/rules/rules.firestore.Resource#data, https://firebase.google.com/docs/reference/rules/rules.Map) and my assumptions appear to be wrong when testing in the rules simulator (similar problem with request.resource.data.keys().size()).
Longer version: I'm running into a problem in Firestore rules where I'm not able to update data as expected (despite similar tests working in the rules simulator). I have narrowed the problem down to the point where I can see that it is a rule checking for request.resource.data.size() equaling a certain number.
An example of the data being passed to the firestore update function looks like
Object {
  "parentObj": Object {
    "nestedObj": Object {
      "key1": Timestamp {
        "nanoseconds": 998000000,
        "seconds": 1536498767,
      },
    },
  },
  "otherKey": true,
}
where the timestamp is generated via firebase.firestore.Timestamp.now().
This appears to work fine in the rules simulator, but not for the actual data when doing
let obj = {}
obj.otherKey = true
// we want to set the object key name dynamically from the nestedObj value,
// see https://stackoverflow.com/a/47296152/8236733
obj.parentObj = {} // needed for adding nested dynamic keys
obj.parentObj[nestedObj] = {
  key1: firebase.firestore.Timestamp.now()
}
firebase.firestore().collection('mycollection')
  .doc('mydoc')
  .update(obj)
Among some other rules, I use the rule request.resource.data.size() == 2, and this appears to be the rule that causes a permission-denied error (since commenting out this rule gets things working again). I would think that since the object is being passed with 2 (top-level) keys, request.resource.data.size() would equal 2, but this is apparently not the case (nor is it the total number of keys in the passed object) (similar problem with request.resource.data.keys().size()). So that's a long example for a short question. It would be very helpful if someone could clarify for me what is going wrong here.
From my last communications with Firebase support around a month ago, there were issues with request.resource.data.size() and timestamp-based security rules for queries.
I was also told that request.resource.data.size() is the size of the document AFTER a successful write. So if you're writing 2 additional keys to a document with 4 keys, the value you should be checking against is 6, not 2.
Having said all that, I am still having problems with request.resource.data.size() and alternatives such as request.resource.size(), which seems to be used in this documentation:
https://firebase.google.com/docs/firestore/solutions/role-based-access
I also have some places in my security rules where it seems to work. I personally don't know why that is though.
I had been struggling with that for a few hours, and I see now that the Firebase doc is clear: "the request.resource variable contains the future state of the document". So it contains ALL the fields, not only the ones being sent.
https://firebase.google.com/docs/firestore/security/rules-conditions#data_validation.
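To illustrate that post-write behaviour, here is an untested sketch of a rule written with it in mind (the collection name and the field count of 6 are just the numbers from the example above, a 4-field document receiving 2 new fields):
service cloud.firestore {
  match /databases/{database}/documents {
    match /mycollection/{docId} {
      // request.resource.data is the document as it will exist AFTER the write,
      // so compare against the expected post-write field count (6), not the
      // number of fields in the update payload (2).
      allow update: if request.resource.data.size() == 6;
    }
  }
}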
But there is actually another way to count ONLY the fields being sent: request.writeFields.size(). The property writeFields is an array of all the incoming fields.
Beware: writeFields is deprecated and may stop working at any time, but I have not found any replacement.
EDIT: writeFields apparently does not work in the simulator anymore...

Traversing DOM nodes in CKEditor-4

We have a bug in this CKEditor plugin, and a generic solution would be something like this: applying a generic string filter only to the terminal text nodes of the DOM.
QUESTION: how (a code example is needed) do I traverse a selection node (editor.getSelection()) using CKEditor 4 tools, like CKEDITOR.dom.range?
The first step will be to get the ranges from the current selection. To do this, just call:
var ranges = editor.getSelection().getRanges();
This gives you an array of ranges, because theoretically (and this theory is proven only by Firefox) one selection can contain many ranges. But in 99% of real-world cases you can just handle the first one and ignore the others. So now you've got a range.
Now, the easiest way to iterate over each node in this range is to use CKEDITOR.dom.walker. Use it for example this way:
var walker = new CKEDITOR.dom.walker( range ),
    node;
while ( ( node = walker.next() ) ) {
    // Log values of every text node in range.
    console.log( node.type == CKEDITOR.NODE_TEXT && node.getText() );
}
However, there's a problem with text nodes at the range's boundaries. Consider the following range:
<p>[foo<i>bar</i>bo]m</p>
This will log: foo, bar, and bom. Yes, the entire bom, because it is a single node and the walker does not modify the DOM (there's a bug in the documentation).
Therefore, for every node you want to transform, you should check whether it equals the range's startContainer or endContainer, and if so, transform only the part of it that is inside the range.
Note: I don't know the walker's internals and I'm not sure whether you can modify the DOM while iterating over it. I'd recommend first caching the nodes and then making the changes, as in the sketch below.
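Something along these lines (untested; transform stands for whatever string filter your plugin applies, and the offset handling assumes the boundary containers are text nodes):
function transformRange( range, transform ) {
    var walker = new CKEDITOR.dom.walker( range ),
        textNodes = [],
        node;

    // First pass: cache the text nodes, so we don't modify the DOM while walking.
    while ( ( node = walker.next() ) ) {
        if ( node.type == CKEDITOR.NODE_TEXT )
            textNodes.push( node );
    }

    // Second pass: transform only the part of each node that lies inside the range.
    for ( var i = 0; i < textNodes.length; i++ ) {
        node = textNodes[ i ];

        var text = node.getText(),
            start = node.equals( range.startContainer ) ? range.startOffset : 0,
            end = node.equals( range.endContainer ) ? range.endOffset : text.length;

        node.setText( text.slice( 0, start ) + transform( text.slice( start, end ) ) + text.slice( end ) );
    }
}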

How do I Benchmark RESTful Service with Variable Parameters?

I'm currently working on benchmarking a RESTful service I've made, and part of that is making sure it runs in a reasonable amount of time for a large array of parameters. For example, let's say I have a RESTful API of the form some_site.com/item?item_id=y. In that case, to be sure my service is working as fast as I'd like it to, I'd want to try out many values for y one by one, preferably coming from some text file. I can't figure out any way of doing this in ab or httperf. I'm open to using a different benchmarking program if I have to, but would prefer something simple and light. What I want to do seems like something pretty standard, so I'm guessing there must already be a program that lets me do it, but an hour or so of googling hasn't gotten me an answer. Ideas?
Answer: JMeter (which is apparently awesome). This FAQ explains how to do it. Hopefully this helps someone else, as it took me about a day of searching to figure this out.
I have just had a good experience with using JavaScript (via BSF/Rhino) in JMeter.
I put one Thread Group in my test plan and stuck a 'Simple Controller' with two elements under it: an 'HTTP Request' sampler and a 'BSF PreProcessor'.
Set the BSF language to 'javascript' and either type the code into the text box or point it to a file (use a full path, or one relative to the CWD of the JMeter process).
/* Since `Math.random()` gives us a float, we use `java.util.Random()`
 * see: http://docs.oracle.com/javase/7/docs/api/java/util/Random.html */
var Random = new Packages.java.util.Random();
var min = 2;        // fewest lines to generate
var max = 10 - 1;   // most lines to generate
var maxLines = min + Random.nextInt(max - min);
var s = '';
for (var d = 0; d <= maxLines; d++) {
    s += d.toString() + ',' + Random.nextInt(1000).toString() + '\n';
}
// s => '0,312\n1,104\n2,608\n'
vars.put('PAYLOAD', s);
Now I can refer to ${PAYLOAD} in the HTTP request!
You can also generate JSON, but you will need to upgrade jakarta-jmeter-2.5.1/lib/js-1.6R5.jar with the newest version of Rhino to get JSON.stringify and JSON.parse. That worked perfectly for me as well, though I thought I'd put a simpler example here.
You can use the BSF PreProcessor for URL params as well: just set another variable with vars.put('X', 'some value') and pass it as ${X} in the request parameter, as in the short sketch below.
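For the original question (some_site.com/item?item_id=y), an untested sketch of that idea; the ID range here is made up, so substitute IDs that actually exist in your system, or read them from a CSV Data Set Config instead:
var Random = new Packages.java.util.Random();
// Hypothetical range 1..1000; replace with real item IDs.
vars.put('ITEM_ID', String(1 + Random.nextInt(1000)));
Then, in the HTTP Request sampler, set the item_id parameter to ${ITEM_ID}.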
This blog post helped quite a bit, by the way.