I want to write a program that measures the available bandwidth, using one of the estimation algorithms such as Spruce or Pathload.
I would like a C++ implementation for Windows.
I have found Linux code, but I need a Windows-based version that can measure both upload and download bandwidth.
Bandwidth for what resource? If it is a network resource, nothing in any language or in the OS will give you a real estimate of bandwidth on its own. You would need to call out to something at the other end of the link you need to traverse and estimate the bandwidth at that point in time.
Or, better said: you would need to fetch a file from a web server to test the download speed of someone's home Internet connection. Keep in mind that the numbers obtained are only accurate for that point in time. The bandwidth of any resource can be higher or lower when you actually use it, since external factors (other processes, other users, etc.) always affect it.
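For illustration only, here is a rough C++ sketch of that idea using libcurl; the URL is a placeholder for a file on your own server, and the result is just a snapshot of the path to that one server at that one moment:

```cpp
// Rough sketch: estimate download bandwidth by timing the transfer of a
// known file with libcurl. The URL below is a placeholder.
#include <curl/curl.h>
#include <chrono>
#include <cstdio>

// Discard the payload, just count the bytes received.
static size_t count_bytes(char*, size_t size, size_t nmemb, void* userdata) {
    *static_cast<size_t*>(userdata) += size * nmemb;
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    size_t bytes = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/testfile.bin"); // placeholder
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, count_bytes);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &bytes);

    auto start = std::chrono::steady_clock::now();
    CURLcode rc = curl_easy_perform(curl);
    auto end = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(end - start).count();
    if (rc == CURLE_OK && seconds > 0.0)
        std::printf("~%.2f Mbit/s at this moment\n", bytes * 8.0 / (seconds * 1e6));
    else
        std::printf("transfer failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

Measuring upload is the mirror image (a timed upload via CURLOPT_UPLOAD/CURLOPT_READFUNCTION), and the caveats above still apply: you are measuring one path, at one moment, under whatever other load happens to exist.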
Why do you need the bandwidth and for what resource?
If you have to ask, you are probably not up to it yet. Porting Linux code to Windows requires knowledge of both platforms, which you clearly don't have at the moment.
In my experience, almost all network-friendly bandwidth estimation algorithms (Pathload, pathChirp, etc.) are unsuitable for high-speed links. Those older algorithms are practical when the available bandwidth is on the order of 1 Mbit/s. They also assume the network is 'clean' (no other traffic). Nowadays, almost none of these 'network-friendly' algorithms are practical.
Other bandwidth estimation tools such as netperf and netcps are based on a brute-force method, and brute force is not network friendly. Most of these tools also have problems with latency (if TCP-based) or are capped by disk read/write speed (if they write to disk instead of memory).
IMO, the best bandwidth estimation tool is UDP-based (not influenced by latency, unlike TCP) and brute-force (not influenced by other traffic), with custom flow control tuned for high-speed networks.
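A bare-bones sketch of the sending side of such a tool under Winsock might look like this; the receiver address, port and two-second window are placeholders, a real tool needs a cooperating receiver that reports how much actually arrived, plus the flow-control tuning mentioned above:

```cpp
// Sketch of a UDP "brute force" sender for Windows (link with ws2_32.lib).
// The receiver must count the bytes it actually gets; the sender only
// knows the offered load. Address, port and duration are placeholders.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);                      // placeholder port
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  // placeholder receiver

    std::vector<char> payload(1400, 'x');  // stay below a typical MTU
    unsigned long long sent = 0;

    auto start = std::chrono::steady_clock::now();
    while (std::chrono::steady_clock::now() - start < std::chrono::seconds(2)) {
        int n = sendto(s, payload.data(), static_cast<int>(payload.size()), 0,
                       reinterpret_cast<sockaddr*>(&dst), sizeof(dst));
        if (n > 0) sent += n;
        // A real tool would pace packets here; blasting flat out mostly
        // measures the local stack and is very unfriendly to the network.
    }

    double mbps = sent * 8.0 / (2.0 * 1e6);
    std::printf("offered roughly %.1f Mbit/s; ask the receiver what arrived\n", mbps);

    closesocket(s);
    WSACleanup();
    return 0;
}
```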
The other problem you will run into is code optimization. You must make sure your code is highly optimized; if you use C#, the garbage collector can become a problem.
I am a new student taking an OS course. I already know that the OS provides better communication between applications and hardware in a modern computer. But it sometimes seems it would be more time-efficient if applications could control the hardware directly. May I ask whether that is possible?
Yes, it is possible, but that would be a single-application computer: it could only run that one particular application.
Applications that handle hardware directly can be faster, because they avoid the overhead of the management the OS performs.
Take the example of DMA (Direct Memory Access). This feature is useful whenever the CPU cannot keep up with the rate of data transfer, or when the CPU needs to perform other work while waiting for a relatively slow I/O transfer.
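As a purely illustrative sketch, "handling hardware directly" typically means reading and writing memory-mapped device registers through volatile pointers, with no OS call in between. The address and bit layout below are made up, not a real device:

```cpp
// Hypothetical memory-mapped UART: the base address and register layout
// are invented for illustration and do not correspond to real hardware.
#include <cstdint>

constexpr std::uintptr_t UART_BASE    = 0x40000000;  // made-up address
constexpr std::uint32_t  TX_READY_BIT = 1u << 5;     // made-up status bit

volatile std::uint32_t* const UART_STATUS =
    reinterpret_cast<volatile std::uint32_t*>(UART_BASE + 0x0);
volatile std::uint32_t* const UART_DATA =
    reinterpret_cast<volatile std::uint32_t*>(UART_BASE + 0x4);

void uart_put_char(char c) {
    // Busy-wait until the (hypothetical) transmitter is ready, then write
    // the byte straight into the device register -- no OS involved.
    while ((*UART_STATUS & TX_READY_BIT) == 0) { /* spin */ }
    *UART_DATA = static_cast<std::uint32_t>(c);
}
```

On a desktop OS this exact code would simply fault; that kind of raw access is normally reserved for kernel code and drivers.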
But you should keep in mind the importance of the operating system in managing the rest of the hardware: not everything can be handled that trivially, and much of it needs the processing and decision-making the OS provides.
I am trying to learn about RTOSes from scratch, and I am using freeRTOS.org as a reference. I find that site to be the best resource for learning about an RTOS. However, I have some doubts I have not been able to resolve:
1) How do I find out whether a device has real-time capability? For example, some controllers have it (TI Hercules) and others don't (MSP430).
2) Does that depend on the architecture of the core (the ARM Cortex-R CPU in the TI Hercules TMS570)?
I know these questions may be a nuisance, but I don't know how else to get the answers.
Thanks in advance.
EDIT:
One more question: what does the "OS" in RTOS mean? Is it an OS in the same sense as other operating systems, or is it just a collection of source files providing the APIs?
Figuring out whether a device has "real-time" capability is somewhat arbitrary and depends on your project's timing requirements. If your timing requirements are very tight, you'll want to use a faster microcontroller/processor.
Using an RTOS (e.g. FreeRTOS, eCOS, or uCOS-X) can help ensure that a given task will execute at a predictable time. The FreeRTOS website provides a good discussion of what operating systems are and what it means for an operating system to claim Real-Time capabilities. http://www.freertos.org/about-RTOS.html
You can also see from the ports pages of uC/OS-X and FreeRTOS that they can run on a variety of target microcontrollers/microprocessors.
Real-time capability is a matter of degree. A 32-bit DSP running at 1 GHz has more real-time capability than an 8-bit microcontroller running at 16 MHz. The more powerful microcontroller could be paired with faster memories and ports and could manage applications requiring large amounts of data and computations (such as real-time video image processing). The less powerful microcontroller would be limited to less demanding applications requiring a relatively small amount of data and computations (perhaps real-time motor control).
The MSP430 has real-time capabilities, and it's used in a variety of real-time applications. Many RTOSes have been ported to the MSP430, including FreeRTOS.
When selecting a microcontroller for a real-time application you need to consider the data bandwidth and computational requirements of the application. How much data needs to be processed in what amount of time? Also consider the range and precision of the data (integer or floating point). Then figure out which microcontroller can support those requirements.
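For a concrete (hypothetical) example of that kind of budgeting: a real-time audio filter running at a 48 kHz sample rate with 64-sample blocks must finish processing each block within 64 / 48000 ≈ 1.33 ms. A 16 MHz 8-bit MCU has roughly 21,000 cycles available in that window, while a 1 GHz DSP has about 1.3 million, which is the "matter of degree" described above.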
While the Cortex-R is optimised for hard real-time, that does not imply that other processors are not suited to real-time applications, or even better suited to a specific application. What you need to consider is whether a particular combination of RTOS and processor will meet the real-time constraints of your application; and even then the most critical factor is your software design rather than the platform.
The main thing you want from an RTOS is determinism; most other features are already available in most non-RTOS operating systems.
The "-OS" part of RTOS means Operating System. Simply put, like all other operating systems, an RTOS provides the infrastructure for managing processor resources so that you can work at a higher level when designing your application. The OS exposes those facilities through an API, which gives you semaphores, message queues, mutexes, and so on.
An RTOS has one defining requirement: it must be pre-emptive. This means it must support task priorities, so that when a higher-priority task becomes ready to run (one of the possible task states), the scheduler switches the current context to that task.
This has two implications. One is the need for a precise, dedicated timer: the tick timer. The other is that context switching carries a considerable memory overhead: the current CPU state (or states, in the case of multi-core SoCs) must be saved into the pre-empted task's context, and the context of the task that is now ready to run must be restored into the CPU.
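From the application side this is all hidden behind the RTOS API. A minimal FreeRTOS sketch (task names, priorities and the 10 ms period are arbitrary) of two tasks, where the higher-priority one pre-empts the lower-priority one each time it becomes ready:

```cpp
// Sketch of priority-based pre-emption with FreeRTOS (arbitrary priorities/period).
#include "FreeRTOS.h"
#include "task.h"

static void vBackgroundTask(void* pvParameters) {
    (void)pvParameters;
    for (;;) {
        // Low-priority busy work; runs only when nothing more urgent is ready.
    }
}

static void vControlTask(void* pvParameters) {
    (void)pvParameters;
    TickType_t xLastWake = xTaskGetTickCount();
    for (;;) {
        // Do the time-critical work here, then sleep until the next period.
        // Waking up makes this task "ready"; because its priority is higher,
        // the scheduler immediately pre-empts vBackgroundTask.
        vTaskDelayUntil(&xLastWake, pdMS_TO_TICKS(10));  // 10 ms period (arbitrary)
    }
}

int main(void) {
    xTaskCreate(vBackgroundTask, "bg",   configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(vControlTask,    "ctrl", configMINIMAL_STACK_SIZE, NULL, 3, NULL);
    vTaskStartScheduler();  // never returns if the tasks start successfully
    for (;;) {}
}
```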
ARM processors already provide support for a system timer that is intended to be dedicated as the OS tick timer. Not so long ago, the tick timer had to be implemented with a regular, non-dedicated timer.
One optimisation in cores designed with real-time use in mind is the ability to save and restore the CPU context with minimal code, which results in much shorter execution time than on regular processors.
It is possible to implement an RTOS on nearly any processor, and there are implementations targeted at resource-constrained cores; you mainly need a timer with interrupt capability and some RAM. If the CPU is very fast you can run the OS tick at a high rate (sub-millisecond in some real-time DSP applications), or at a lower rate, say 10-100 ticks per second, for applications with relaxed timing requirements on low-end CPUs.
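In FreeRTOS, for example, that tick-rate trade-off is just a configuration constant; the values below are purely illustrative:

```cpp
/* Excerpt of a FreeRTOSConfig.h sketch -- illustrative values only.        */
#define configCPU_CLOCK_HZ    ( 16000000UL )  /* e.g. a 16 MHz MCU           */
#define configTICK_RATE_HZ    ( 100 )         /* 100 ticks/s = a 10 ms tick  */
/* A fast CPU could afford e.g. 1000+ ticks/s for finer timing, at the cost */
/* of more tick-interrupt and scheduling overhead.                          */
```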
Some theoretical knowledge would be quite useful too, e.g. figuring out whether a given task set is schedulable under a given scheduling approach (sometimes it is not), the differences between static-priority and dynamic-priority scheduling, the priority inversion problem, and so on.
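As a small worked example of the schedulability point: under rate-monotonic scheduling, a classic sufficient test (Liu and Layland) is that the total utilisation U = Σ Ci/Ti of n periodic tasks must not exceed n(2^(1/n) - 1). Three hypothetical tasks with execution/period pairs of 1/4, 2/10 and 1/20 ms give U = 0.25 + 0.20 + 0.05 = 0.50, below the n = 3 bound of about 0.78, so the set is schedulable; push U well past that bound and you have to fall back on exact response-time analysis or a different scheduling policy.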
These questions may sound very esoteric to most, but I'd really like to know more about this stuff.
1st
I'm wondering how long it takes for an FPGA to reconfigure itself, from the time its modelled circuit is powered down to the time a new one is in place and operational.
I am aware that Place-&-Route is a costly process, but that is because the P&R tools must decide where to put the components and how to route them.
Consider that P&R analysis is done, and all that's left is actually reconfiguring the FPGA: is that a slow process by itself? Can it be done hundreds or thousands of times per second?
There are several implications of such a possibility that I'm curious about. To name two: it could allow us to serve an FPGA to multiple concurrent "clients" (the same way a GPU is capable of rendering for multiple different programs), or provide extremely fine-tuned circuits for long number-crunching jobs made of well-defined but numerous, highly asynchronous processing stages (think: complex Haskell programs).
2nd
Another thing I'd like to ask is whether an FPGA can be partially reconfigured in real time, while the modelled circuit is powered and operational, as long as the parts being reconfigured are powered off, of course.
Several interesting implications would arise from such a possibility as well, for example allowing for realtime reconfigurable buses, hardware emulation of neural networks, etc.
Are such things being extensively researched right now? And how likely are they to be researched in the future?
The reconfiguration time depends on a lot of things. The big ones are:
how much of the FPGA you are reconfiguring (how many bits need to go in)
how fast you can get the data in (using quad-SPI seems to be the favoured way of bringing FPGAs up fast nowadays)
Big FPGAs can take many tens to hundreds of milliseconds to completely reconfigure.
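As a rough back-of-the-envelope illustration (the numbers are hypothetical): a 100 Mbit bitstream pushed in over quad-SPI at 100 MHz (4 bits per clock, about 400 Mbit/s) takes on the order of 100/400 = 0.25 s. So hundreds of full reconfigurations per second are out of reach for a large device; thousands per second would only be plausible for very small partial bitstreams over a fast configuration interface.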
A small configuration can be loaded within the PCI Express startup time (100 ms, IIRC), so that a pure-FPGA card can be enumerated in time; the rest of the configuration can then be loaded later.
In terms of very dynamic reconfiguration, it's more likely that the bottleneck is swapping in and out the various data sets that go with each bitstream - I imagine anything that needs a lot of FPGA to accelerate it involves a pretty large dataset... but you might have other applications in mind?
We have a data streaming application which uses local caching to reduce future downloads. The full datasets are larger than what is streamed to the end user - i.e. only the parts the end user wants to see are sent. The concept is much like a browser, except the streamed data is exclusively JPG and PNG.
The usage patterns are sporadic and unpredictable. There are download spikes on initial usage while the cache is populated. What would be the theoretical and practical/experimental means of modelling and measuring the bandwidth usage of this application? We have the sizes of the source datasets, but little knowledge of the usage patterns.
There is not enough information to derive a useful theoretical model for bandwidth usage. If you know something about the rollout pattern, you could attempt to model the distribution of spikes. Is this a closed user group that will all get the app within a short period of time? Will you sell to individual customers that in turn will roll out to a number of employees? Are you selling to consumers? All of these will impact the distribution of peaks.
As for the steady-state bandwidth requirements, that depends a great deal on usage patterns (do they frequently re-use the same data or frequently seek new data?) This is a great thing to determine during a beta program. Log usage patterns locally and/or on the server for beta users, and try to get beta users that are representative of the overall user community.
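The client-side logging meant here can be very simple. A minimal sketch (the type and field names are invented for illustration) that records how many bytes were served from the local cache versus downloaded - exactly the ratio you need to turn dataset sizes into a bandwidth projection:

```cpp
// Hypothetical usage logger: counts bytes served from cache vs. downloaded.
#include <atomic>
#include <cstdint>
#include <cstdio>

struct UsageStats {
    std::atomic<std::uint64_t> bytes_from_cache{0};
    std::atomic<std::uint64_t> bytes_downloaded{0};

    void record_cache_hit(std::uint64_t bytes) { bytes_from_cache += bytes; }
    void record_download(std::uint64_t bytes)  { bytes_downloaded += bytes; }

    // Periodically dump (or upload) a snapshot during the beta programme.
    void report() const {
        std::uint64_t hit  = bytes_from_cache.load();
        std::uint64_t miss = bytes_downloaded.load();
        double total = static_cast<double>(hit + miss);
        std::printf("downloaded %llu bytes; cache served %.1f%% of traffic\n",
                    static_cast<unsigned long long>(miss),
                    total > 0 ? 100.0 * hit / total : 0.0);
    }
};
```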
Finally, to manage spikes in consumption, consider deploying your content on a service such as Amazon CloudFront. This allows you to pay for the bandwidth you actually use, but scale as needed to handle peaks in demand.
I need to know what Internet connection is available when my application is running. I checked out the Reachability example from Apple, but it only distinguishes between WiFi and the carrier network. What I need to know is which carrier network is selected: UMTS, EDGE, or GPRS.
Currently, this information is not available. If you want this feature, file a new bug and mention that this is a duplicate of bug 6014806.
You could take a guess at what kind of network you are on by checking the latency of a round trip to your server. If you are getting figures of under 100ms, you are almost certainly on WiFi.
GPRS and EDGE run at around 600ms latency. UMTS/HSDPA is 100-200ms.
Source: my informal testing, and AT&T figures.
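Using those figures, the classification itself is trivial once you have a measured round-trip time. A rough sketch (the thresholds simply restate the numbers above; how you obtain rtt_ms - e.g. by timing a tiny request to your own server - is up to you):

```cpp
// Rough classification of connection type from a measured round-trip time,
// based on the ballpark latencies quoted above.
#include <string>

std::string guess_connection(double rtt_ms) {
    if (rtt_ms < 100.0) return "probably WiFi";
    if (rtt_ms < 250.0) return "probably UMTS/HSDPA";
    return "probably EDGE or GPRS";
}
```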
Rather than hardcoding different versions of your site for 3G, EDGE, GPRS, and WiFi broadband, why not build a framework which detects connection speed and bootstraps your site up to the appropriate level of bandwidth? That way you would get appropriate results on slow 3G / WiFi, and it would naturally scale to the next generation of wireless broadband (e.g. WiMAX and 802.11n) with a minimal amount of effort / disruption.
For example, you could define different bandwidth "checkpoints" (which may correspond to 3G, EDGE, etc.), then transfer some small piece of data - such as an icon common to all bandwidth levels - benchmark the download speed in the background, and set the bandwidth level accordingly.
File only
I like Wedge's answer. I'm not sure that the file wouldn't be cached by ISPs, though. You could always generate a new file name each time, or choose a file big enough that the test runs just long enough to get a result.
Simple latency
The idea of using latency is close, but as Shivan mentioned, it's inaccurate. A user connecting from Australia to the UK will see a latency of around 350 ms, versus a local user who could see it as low as 30-40 ms.
Solution: Mean deviation
If you ping your server with 3 packets and then look at the mean deviation (mdev), under 3G it's usually under 50 ms, while with 2G/EDGE it's almost always over 100 ms. I got one outlier at 65 ms to AUS.
My tests found a range of 4-38 ms, with only one exception, at 202 ms, on a test from Belgium to Australia.
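For what it's worth, ping's mdev figure is essentially a measure of how spread out the samples are, so you can compute something equivalent yourself from a few round-trip times. A small sketch using the rough thresholds above (the sample values are just an example):

```cpp
// Mean absolute deviation of a few RTT samples -- a simple proxy for the
// "mdev" that ping reports -- classified with the rough thresholds above.
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> rtt_ms = {312.0, 318.0, 305.0};  // example samples

    double mean = std::accumulate(rtt_ms.begin(), rtt_ms.end(), 0.0) / rtt_ms.size();
    double mdev = 0.0;
    for (double r : rtt_ms) mdev += std::fabs(r - mean);
    mdev /= rtt_ms.size();

    std::printf("mean %.1f ms, mdev %.1f ms -> %s\n", mean, mdev,
                mdev < 50.0 ? "likely 3G or better" : "likely 2G/EDGE");
    return 0;
}
```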
Hope that's useful to someone.