Use of Binomial Theorem in IP address distribution - discrete-mathematics

I am currently working on a project on the binomial theorem/distribution for my semester. I need some very interesting real-life applications of these to add to my project (with an in-depth explanation of each application). I came across these applications:
Distribution of Internet Protocol Address (or IP Address)
This method applies to IP distribution in conditions where you have been given the IP address of a fixed host and the number of hosts exceeds the nearest round (power-of-two) total; you may then use this theorem to distribute bits so that all hosts are covered by the IP addressing scheme. This method is known as variable subnetting (see the small sketch of the bit arithmetic at the end of this question).
Weather forecasting
Moreover, the binomial theorem is used in forecasting services; disaster forecasting also depends on the binomial theorem.
But I couldn't find an explanation for point 1 anywhere. I know this is somewhat lame, but if any of you can explain it in detail, or simply explain some other real-life application of the binomial theorem/distribution to me, I would really appreciate it!
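For what it's worth, here is my rough understanding of the bit arithmetic involved, sketched in Python. This is my own illustration (the helper and the example numbers are made up), so please correct me if it misses the point:

def host_bits_needed(num_hosts: int) -> int:
    # Each subnet reserves two addresses (network and broadcast), so a
    # subnet with h host bits holds 2**h - 2 usable hosts.
    bits = 1
    while (1 << bits) - 2 < num_hosts:
        bits += 1
    return bits

# Example: splitting a /24 block for departments of 100, 50, and 20
# hosts with variable-length subnetting.
for hosts in (100, 50, 20):
    h = host_bits_needed(hosts)
    print(f"{hosts:>3} hosts -> {h} host bits -> /{32 - h} subnet "
          f"({(1 << h) - 2} usable addresses)")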

Related

I want to know the exact concept of virtual address

Virtual address is described as a linear address in some places, and as a logical address in others.
I'd like to know which one is correct, along with a clear explanation of the concept of a virtual address.
The concept of virtual addresses is that you have a fake/pretend address space and convert/map that somehow to the real/physical address space for one or more reasons (to improve flexibility, to improve portability, to improve security, etc.). How this is implemented in practice doesn't really affect the theoretical concept.
For the implementation of the concept on 80x86; virtual addresses are converted into linear addresses using segmentation, then linear addresses are converted into physical addresses using paging. However; segmentation can be configured so that "virtual = linear" (by setting segment bases to zero and segment limits to max., including in 64-bit code if FS and GS are configured so that they do nothing); and paging can be disabled resulting in "linear = physical"; and if neither segmentation nor paging are used you end up with "virtual = linear = physical".
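To make the two-stage translation concrete, here is a toy model in Python. All of the numbers (segment base, page table contents) are invented purely for illustration; real descriptor tables and page tables carry much more state:

# Toy 80x86-style translation: virtual -> linear -> physical.
SEGMENT_BASE = 0x1000               # segmentation: linear = base + virtual
PAGE_SIZE = 0x1000                  # 4 KiB pages
PAGE_TABLE = {0x0: 0x9, 0x1: 0x4}   # linear page number -> physical frame

def virtual_to_linear(vaddr: int) -> int:
    # With a zero segment base this step is the identity ("virtual = linear").
    return SEGMENT_BASE + vaddr

def linear_to_physical(laddr: int) -> int:
    # With paging disabled this step would be the identity ("linear = physical").
    page, offset = divmod(laddr, PAGE_SIZE)
    return PAGE_TABLE[page] * PAGE_SIZE + offset

vaddr = 0x0234
laddr = virtual_to_linear(vaddr)
paddr = linear_to_physical(laddr)
print(f"virtual {vaddr:#06x} -> linear {laddr:#06x} -> physical {paddr:#06x}")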
Most operating systems for 80x86 don't use segmentation but do use paging; so virtual addresses can be described as linear addresses for most operating systems (and most applications) on 80x86; but "technically can" isn't a good reason for increasing confusion and almost nobody would call them linear addresses (instead of virtual addresses) without a reason - normally you'd only see the word "linear" used if the difference might matter.
For logical addresses, I have no idea where you saw that, and without context I'd (correctly or incorrectly) assume it's related to storage space and has nothing to do with memory (e.g. "logical block address" as an alternative to "cylinder, head, sector addressing" for old hard disks).
The three basic concepts you need to know:
Physical - An actual, specific device
Logical - A redirection to a device
Virtual - A simulated device
In ye olde days, before large-memory systems, virtual and logical were often conflated in regard to addresses. In reality, there is no such thing as a virtual address. A logical address can map to nothing at all, to a physical address, or to memory that is simulated virtually.
You can have virtual memory that is accessed by logical addresses.

How Finagle aperture algorithm chooses "non overlapping" subsets?

I have been reading about Finagle and trying to understand the code to figure out how Aperture's subset choice works.
I have seen that ApertureLeastLoaded has a "useDeterministicOrdering" and an "EndpointFactory", which I guess should be the key pieces in deciding which endpoints to take into the subset.
While reading the "deterministic subsetting" section of Google SRE's book, I understood that the best way to pick a subset of servers from the client point of view, is to know the total number of clients, and a unique sequential identifier of the current client, that can be used as seed of the subset generator.
In Finagle I can't understand how this process is done (I'm not super familiar with Scala), and the documentation, both on the website and in the code, explains how the aperture paradigm works but is not very clear about how the initial subset is chosen.
I hope somebody can enlighten me
One of the unique properties of Aperture is that its window is sized dynamically based on a client's offered load. That is, clients have a built-in controller which can expand or shrink their window at runtime. This property is important as it allows clients to operate more efficiently and better adapt to a changing environment, but it does make it more complex to achieve a uniform load distribution across servers.
To contrast, the subsetting algorithm, as proposed by the Google SRE book, suggests that operators choose a static subset size which allows a uniform load distribution to be calculated analytically but introduces another static configuration that needs to be revisited as a system evolves.
Deterministic Aperture is, to the best of our knowledge, a novel algorithm for achieving a uniform load distribution while maintaining the dynamic properties of the window sizing mentioned above. From a high level, clients construct a topology of their peer cluster (which gives them a sense of ordering and proximity) and then derive a unique per-client permutation of the servers from the topology such that each server is uniformly represented across the permutations.
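To illustrate the idea (this is a simplified sketch in Python, not the actual Scala implementation; the real topology and window-sizing logic are more involved):

import math

def client_window(client_id: int, num_clients: int,
                  num_servers: int, aperture: int) -> list[int]:
    # Clients and servers share a unit ring. Each client starts its
    # window at an offset derived from its position among its peers,
    # so with equal apertures the windows tile the ring and every
    # server is represented roughly equally often across all clients.
    offset = client_id / num_clients
    start = math.floor(offset * num_servers)
    return [(start + i) % num_servers for i in range(aperture)]

# 4 clients, 10 servers, each client talking to 3 servers.
for c in range(4):
    print(f"client {c}: servers {client_window(c, 4, 10, 3)}")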
We are still in the early stages of testing this in production at Twitter, but early results look very promising. After we gather more empirical results, we hope to publish some more detailed content on how the algorithm works and its properties.

NAT simulation for P2P data transfer

I am currently implementing a P2P data transfer application based on Libjingle, and I want to do the following simulations to verify the implementation:
Simulate different types of NATs (full cone, address restricted cone, port restricted cone, symmetric)
Simulate network delay and packet loss.
Simulate large-scale P2P networks. Say, I want to deploy this application to 1000 nodes to test whether concurrent data transfer is handled well.
Are there any tools to help me build such an environment easily?
There is no straightforward tool available to perform this type of task, though you may build such tools by utilizing the following:
* VirtualBox VMs, virtual instances, Amazon VPC, etc., to simulate the network
* Open vSwitch, for various network automation
For NAT:
* You may use a set of iptables rules to prepare different types of NAT boxes (see the sketch of the NAT types at the end of this answer)
or
* Directly buy different types of switches to test NAT traversal.
For network delay / packet loss:
No concrete idea as of now.
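For the NAT part, the mapping behaviour that distinguishes the NAT types can be modelled in a few lines of Python as a starting point. This is a rough simplification (it models the mapping side only; the four types also differ in how they filter inbound packets):

class Nat:
    def __init__(self, nat_type: str):
        # "full_cone", "restricted_cone", "port_restricted_cone" or "symmetric"
        self.nat_type = nat_type
        self.mappings = {}        # mapping key -> external port
        self.next_port = 40000

    def outbound(self, src, dst):
        # A symmetric NAT allocates a new mapping per (src, dst) pair;
        # the cone types reuse one mapping per internal src endpoint.
        key = (src, dst) if self.nat_type == "symmetric" else src
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return self.mappings[key]

sym = Nat("symmetric")
print(sym.outbound(("10.0.0.2", 5000), ("1.1.1.1", 80)))   # 40000
print(sym.outbound(("10.0.0.2", 5000), ("2.2.2.2", 80)))   # 40001: new destination, new port

cone = Nat("full_cone")
print(cone.outbound(("10.0.0.2", 5000), ("1.1.1.1", 80)))  # 40000
print(cone.outbound(("10.0.0.2", 5000), ("2.2.2.2", 80)))  # 40000: same mapping reused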

Online k-means clustering

Is there a online version of the k-Means clustering algorithm?
By online I mean that every data point is processed serially, one at a time as it enters the system, hence saving computing time when used in real time.
I have written one myself with good results, but I would really prefer to have something "standardized" to refer to, since it is to be used in my master's thesis.
Also, does anyone have advice for other online clustering algorithms?
(lmgtfy failed ;))
Yes there is. Google failed to find it because it's more commonly known as "sequential k-means".
You can find two pseudo-code implementations of sequential K-means in this section of some Princeton CS class notes by Richard Duda. I've reproduced one of the two implementations below:
Make initial guesses for the means m1, m2, ..., mk
Set the counts n1, n2, ..., nk to zero
Until interrupted
    Acquire the next example, x
    If mi is closest to x
        Increment ni
        Replace mi by mi + (1/ni)*(x - mi)
    end_if
end_until
The beautiful thing about it is that you only need to remember the mean of each cluster and the count of the number of data points assigned to the cluster. Once you update those two variables, you can throw away the data point.
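For reference, a direct Python translation of that pseudocode might look like this (my own rendering, so double-check it against the notes):

import math

class SequentialKMeans:
    def __init__(self, initial_means):
        self.means = [list(m) for m in initial_means]   # m1, ..., mk
        self.counts = [0] * len(initial_means)          # n1, ..., nk

    def update(self, x):
        # Find the mean closest to x.
        i = min(range(len(self.means)),
                key=lambda j: math.dist(self.means[j], x))
        # Increment n_i and move m_i a step of size 1/n_i towards x.
        self.counts[i] += 1
        eta = 1.0 / self.counts[i]
        self.means[i] = [m + eta * (xj - m) for m, xj in zip(self.means[i], x)]
        return i

km = SequentialKMeans([(0.0, 0.0), (10.0, 10.0)])
for point in [(1, 1), (9, 11), (0, 2), (11, 9)]:
    km.update(point)
print(km.means)   # the means have drifted towards the two point clouds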
I'm not sure where you would be able to find a citation for it. I would start looking in Duda's classic text Pattern Classification and Scene Analysis or the newer edition Pattern Classification. If it's not there, you could try Chris Bishop's newest book or Daphne Koller and Nir Friedman's recent text.

Can a virtual machine be implemented as a neural network?

Disclaimer: I'm not a mathematical genius, nor do I have any experience with writing neural networks. So, please, forgive whatever idiotic things I happen to say here. ;)
I've always read about neural networks being used for machine learning, but while experimenting with writing simple virtual machines, I began to wonder if they could be applied in another way.
Specifically, can a virtual machine be created as a neural network? If so, how would it work (feel free to use an abstract description here, if you have to)?
I've heard of the Joycean Machine, but I can't find any information other than very, very vague explanations.
EDIT: What I'm looking for here is an explanation of exactly how a neural network-based VM would interpret assembly. How would inputs be handled, etc? Would each individual input be a memory address? Let's brainstorm!
You really made my day buddy...
Since an already-trained neural network won't be much different from a regular state machine, there is no point in writing a neural network VM for a deterministic instruction set.
It might be interesting to train such a VM with multiple instruction sets or an unknown set. However, I doubt it would be practical to carry out such training, and even a 99%-correct interpreter won't be of any use for conventional bytecode.
The only use of a neural network VM I can think of is executing a program that contains fuzzy logic constructs or AI algorithm heuristics.
Some silly stack machine example to demonstrate the idea:
push [x1]
push [y1] ;start coord
push [x2]
push [y2] ;end coord
pushmap [map] ;some struct
stepastar ;push the next step of A* heuristics to accumulator and update the map
pop ;do sth with it and pop
stepastar ;next step again
... ;stack top is a map
reward ;we liked the coordinate. reinforce the heuristic
stepastar
... ;stack top is a map
punish ;we didn't like the next coordinate. try something different
There is no explicit heuristic here. Just assume we keep all state in *map, including the heuristic algorithm.
You see, it looks silly and not completely context-sensitive, but a neural network is of no value if it doesn't learn online.
Of course. With a rather complex network no doubt.
Much of the parsing of bytecodes/opcodes is pattern matching, which neural networks excel at.
You could certainly do this with a neural network - I could easily see learning the correct state transitions for a given piece of bytecode.
Input could be something like:
Value at top of stack
Value in current accumulator
Byte code at current instruction pointer
Byte value at current data pointer
Previous flags
Output could be something like:
Change to instruction pointer
Change to data pointer
Change to accumulator
Stack operation (push, pop, or nothing)
Memory operation (read to accumulator, write accumulator or nothing)
New flags
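To make that encoding concrete, a single state-transition step might look something like the Python sketch below. Everything here is hypothetical: the layer sizes are arbitrary and the weights are untrained, standing in for a network you would have to train on traces from a conventional interpreter:

import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HIDDEN, N_OUT = 5, 32, 6
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_IN))   # untrained weights
W2 = rng.normal(scale=0.1, size=(N_OUT, N_HIDDEN))

def step(stack_top, accumulator, opcode, data_byte, flags):
    # Encode the VM state as the input vector described above.
    x = np.array([stack_top, accumulator, opcode, data_byte, flags],
                 dtype=float) / 255.0               # crude byte normalisation
    h = np.tanh(W1 @ x)
    y = W2 @ h
    # Decode the output vector into the state changes described above.
    return {
        "ip_delta": y[0],     # change to instruction pointer
        "dp_delta": y[1],     # change to data pointer
        "acc_delta": y[2],    # change to accumulator
        "stack_op": y[3],     # push / pop / nothing (would be discretised)
        "mem_op": y[4],       # read / write / nothing (would be discretised)
        "new_flags": y[5],
    }

print(step(stack_top=7, accumulator=0, opcode=0x3C, data_byte=1, flags=0))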
However - I'm not sure why you would want to do this in the first place. A neural network would be much less efficient (and potentially make mistakes unless you trained it well enough) compared to just executing the bytecode directly. You'd probably need to write an accurate bytecode evaluator anyway just to create enough training data....
Also, in my experience neural networks tend to be good at pattern recognition but very bad at learning logical operations (like binary addition or XORs) once you get beyond a certain scale (i.e. more than a few bits). So depending on the complexity of your instruction set, the network could take a very large amount of time to train.