NAT simulation for P2P data transfer

I am currently implementing a P2P data transfer application based on Libjingle, and I want to run the following simulations to verify the implementation:
Simulate different types of NAT (full cone, address-restricted cone, port-restricted cone, and symmetric NAT)
Simulate network delay and packet loss.
Simulate large-scale P2P networks. Say I want to deploy this application to 1000 nodes to test whether concurrent data transfer is handled well.
Are there any tools to help me build such an environment easily?

There is no straightforward tool available for this type of task, though you may build such an environment by utilizing the following:
* VirtualBox VMs, virtual instances, Amazon VPC, etc., to simulate the network
* Open vSwitch, for various network automation
For NAT:
* You may use a set of iptables rules to build the different types of NAT boxes
Or
* Directly buy different types of NAT routers to test NAT traversal.
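As a rough sketch of the iptables approach, the rules below (in iptables-restore format) assume a Linux box routing between an internal network and an external interface named eth0; the mapping of netfilter behaviour to NAT types is approximate:

```
# nat-box.rules -- load with: iptables-restore < nat-box.rules
*nat
# Plain masquerading behaves roughly like a port-restricted cone NAT
# (endpoint-independent mapping, endpoint-dependent filtering):
-A POSTROUTING -o eth0 -j MASQUERADE
# To approximate a symmetric NAT instead, randomize the source-port
# mapping per connection (comment out the rule above first):
# -A POSTROUTING -o eth0 -j MASQUERADE --random
COMMIT
```

Full-cone behaviour would additionally need static DNAT rules forwarding inbound traffic on the mapped port back to the internal host.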
For network delay / packet loss:
No concrete idea as of now.

Related

Because of different networks, the forklifts are not following the paths

What would be the best approach if the nodes are not in the same network?
I have source and destination nodes in different networks. The forklifts pick the item from the source node, process it, store it in storage, and later retrieve it to drop it at the destination for shipment.
Because the nodes are in different networks, the forklifts are not following the paths.
If you work on the same Level, you ideally should connect your networks (just draw a path between them).
If your networks are on separate levels, use the "Lift" block. This will allow agents to move across networks on different levels.
PS: If you are on the same level but do not want to connect the networks manually, you can use a trick: put each of your networks on a separate level, give the levels the same height, and use the "Lift" blocks.

Use of Binomial Theorem in IP address distribution

I am currently working on a project on the binomial theorem/distribution for my semester. I need some very interesting real-life applications of these to add to my project (I need to add an in-depth explanation of the application). I came across these applications:
Distribution of Internet Protocol addresses (IP addresses)
This method applies to an IP distribution situation where you have been given the IP address of a fixed host and the number of hosts is more than the total round-off; then you may use this theorem to distribute bits so that all hosts can be covered in IP addressing. This method is known as variable subnetting.
Weather forecasting
Moreover, the binomial theorem is used in forecast services; disaster forecasting also depends upon the use of binomial theorems.
But I couldn't find the explanation for point 1 anywhere. I know this is somewhat lame, but if any of you can explain it in detail, or simply explain some other real-life application of the binomial theorem/distribution to me, I would really appreciate it!
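A guess at the connection point 1 is gesturing at: subnetting rests on counting bit strings, and the count of length-n bit strings is exactly a sum of binomial coefficients, since the strings can be grouped by how many 1-bits they contain:

```latex
% Bit strings of length n, grouped by the number k of 1-bits:
\sum_{k=0}^{n} \binom{n}{k} = 2^{n}
% e.g. with n = 5 host bits, a subnet spans 2^5 = 32 addresses
% (30 usable hosts after reserving the network and broadcast addresses).
```

So when a subnet designer asks how many host bits are needed to cover a given number of hosts, the answer comes from this identity, which is itself the binomial theorem evaluated at x = y = 1.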

Using a subset of a SUMO scenario for OMNeT++ network simulation (with VEINS)

I'm trying to evaluate an application that runs on a vehicular network using OMNeT++, Veins and SUMO. Because the application relies on realistic traffic behavior, I decided to use the LuST Scenario, which seems to be the state of the art for such data. However, I'd like to use specific parts of this scenario instead of the entire scenario (e.g., a high- and a low-traffic-load fragment, perhaps others). It'd be nice to keep the bidirectional functionality that Veins offers, although I'm mostly interested in getting traffic data from SUMO into my simulation.
One obvious way to implement this would be to use a warm-up period. However, I'm wondering if there is a more efficient way -- simulating 8 hours of traffic just to get a several-minute fragment feels inefficient and may be problematic for simulations with sufficient repetitions.
Does VEINS have a built-in mechanism for warm-up periods, primarily one that avoids sending messages (which is by far the most time consuming part in the simulation), or does it have a way to wait for SUMO to advance, e.g., to a specific time stamp (which also avoids creating vehicle objects in OMNeT++ and thus all the initiation code)?
In case it's relevant -- I'm using the latest stable versions of OMNeT++ and SUMO (OMNeT++ 4.6 with SUMO 0.25.0) and my code base is based on VEINS 4a2 (with some changes, notably accepting the TraCI API version 10).
There are a few things you can do here to reduce the number of sent messages in Veins:
Use the OMNeT++ warm-up period as described in the manual. Basically, set warmup-period in your .ini file and make sure your code checks this with if (simTime() >= simulation.getWarmupPeriod()). The OMNeT++ signals for result collection are aware of this.
The TraCIScenarioManager offers a variable double firstStepAt @unit("s") which you can use to delay its start. Again, this can be set in the .ini file.
As the Veins FAQ states, the TraCIScenarioManagerLaunchd offers two variables to configure the region of interest, based on rectangles or roads (string roiRoads and string roiRects). To reduce the simulated area, you can restrict the simulation to a specific rectangle; for example, *.manager.roiRects="1000,1000-3000,3000" simulates a 2x2 km area between the two supplied coordinates.
With these solutions (best used in combination) you still have to run SUMO, but Veins barely consumes any of the time.
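As a sketch, combining these options in omnetpp.ini could look like the fragment below; the *.manager path and the concrete time and coordinate values are assumptions for illustration:

```ini
[General]
# OMNeT++ warm-up: statistics recorded via signals ignore this period
warmup-period = 300s

# Delay the start of the TraCI scenario manager
*.manager.firstStepAt = 300s

# Region of interest: only vehicles inside this rectangle are mirrored
# into OMNeT++ (coordinates are SUMO network coordinates)
*.manager.roiRects = "1000,1000-3000,3000"
```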

Netlogo High performance Computing

Are there any high-performance computing facilities available for running NetLogo BehaviorSpace experiments, similar to R servers?
Thanks.
You can use headless mode to run batches of experiments on a cluster or cloud computing platform. This involves simply running an executable, so it should be compatible with most setups. If you don't have access to a cluster through an institution, I know people use AWS and Google Compute. You probably want an instance with many cores, since that allows a single instance of BehaviorSpace to automatically distribute the runs of an experiment across multiple processes. Higher processing power of course helps too. You shouldn't need much memory. The n1-highcpu-16 or n1-standard-16 instance types in Google Compute look pretty ideal to me.
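For reference, a headless BehaviorSpace run is invoked from the shell roughly like this (the model file, experiment name, and thread count are placeholders to adapt to your setup):

```
# Run a BehaviorSpace experiment without the GUI, from the NetLogo
# installation directory; writes results as a CSV table.
./netlogo-headless.sh \
  --model myModel.nlogo \
  --experiment myExperiment \
  --table results.csv \
  --threads 16
```

On a cluster, each such invocation is an ordinary batch job, so it can be submitted through whatever scheduler the facility provides.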

Soft hand off in CDMA cellular networks

Hi,
In CDMA cellular networks, when an MS (Mobile Station) needs to change BS (Base Station), a hand-off is necessary; I know this is a soft hand-off (making a connection with the target BS before leaving the current BS). But since the MS remains connected to more than one BS for a time, I want to know: does the MS use the same CDMA code to communicate with all BSs, or a different code for each BS?
Thanks in advance
For the benefit of everyone, I have touched upon a few points before coming to the main point.
Soft hand-off is also termed "make-before-break" hand-off. This technique falls under the category of MAHO (Mobile-Assisted Handover). The key theme is that the MS maintains simultaneous communication links with two or more BSs to ensure an uninterrupted call.
In the DL (downlink) direction, this is achieved by two or more BTSs using different transmission codes (transmitting the same bit stream) on different physical channels on the same frequency, while the CDMA phone simultaneously receives the signals from these BTSs. In the active set there can be more than one pilot, as there could be three carriers involved in soft hand-off. The phone also has a rake receiver that performs maximal combining of the received signals.
In the UL (uplink) direction, the MS operates on a candidate set, in which there can be more than one pilot with sufficient signal strength for use, as reported by the MS. Each BTS tags the user's data with a frame reliability indicator that conveys the transmission quality to the BSC. So even though the signals (the MS's code channel) are received by both base stations, selection is achieved by routing the signals to the BSC along with the quality information; the BSC examines the quality based on the frame reliability indicator and chooses the best-quality stream, i.e., the best candidate.