WebAudio API - do nodes without input consume processing power?

In the WebAudio API, if I have a delay node connected to a gain node, but no oscillator or media source feeding into the delay node, will the delay node still consume memory/CPU by virtue of the fact that it's still connected to the gain node? Or does it only consume resources when it receives and emits sound? I ask because I'm working on a WebAudio example where many oscillators are intermittently connected to and disconnected from many delay nodes, and I'm wondering whether it would speed up processing if I also disconnected each delay node from the gain whenever it was idle. Thanks!

I think the answer is: it depends. If the browser is smart enough to see that the delay node's memory is all zeroes and there's no input connected, then nothing needs to be done. However, if the delay node's memory is not all zeroes, the node is supposed to keep running, even if nothing is connected. This is so that if the output were later connected to the destination, you would hear the delayed signal played out at the appropriate time.
However, Chrome (and probably Safari and Edge) doesn't behave like this. If a node is not somehow connected to the destination, no processing is done at all. If it's later connected, the delay node will output the data as if nothing had happened.
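For illustration, here is a minimal sketch (not from the original question; the node and variable names are hypothetical) of the connect/disconnect pattern being asked about. Whether detaching the idle delay from the gain saves anything depends on the browser behaviour described above.

    const ctx = new AudioContext();

    const delay = ctx.createDelay(1.0);   // up to 1 second of delay
    const gain = ctx.createGain();
    delay.connect(gain);
    gain.connect(ctx.destination);

    const osc = ctx.createOscillator();
    osc.start();

    // Intermittently feed the delay node, as in the example described above.
    osc.connect(delay);                    // delay now has an input
    setTimeout(() => {
      osc.disconnect(delay);               // delay is now idle
      // Optionally also detach the idle delay from the gain; whether this
      // actually saves CPU depends on the browser, as discussed above.
      // delay.disconnect(gain);
    }, 2000);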

Related

Why is my TCP socket showing connected but not responding?

I have a program using a bi-directional TCP socket to send messages from the host PC to a VLinx ethernet-to-serial converter and then on to a PLC via RS-232. During heavy traffic the socket will intermittently stop communicating, although all soft tests of the connection show that it is connected, active and writeable. I suspect that something is interrupting the connection, causing the socket to close without a FIN/ACK. How can I test to see where this disconnect might be occurring?
The program itself is written in VB6 and uses Catalyst SocketTools/SocketWrench rather than the standard Winsock library. The methodology, properties and code seem to be sound, since the same setup works reliably at two other sites. It's just this one site in particular where the problem occurs. It only happens during production when there is traffic on the network, and the connection can drop anywhere between 20 and 100 times per 10-hour day.
There are redundant tests in place to catch this loss of communication and keep the system running. We have tests on ACK messages, message queue size, time between transmissions (tokens on 2s interval), etc. Typically, the socket will not be unresponsive for more than 30 seconds before it is caught, closed and re-established which works properly >99% of the time.
Previously I had enabled the SocketTools logging capabilities which did not capture any relevant information. Most recently I have tried to have the system ping the VLinx on the first sign of a missed message (2.5 seconds). Those pings have always been successful, meaning that if there is a momentary loss of connection at a switch or AP it does not stay disconnected for long.
I do not have access to the network hardware aside from the PC and VLinx that we own. The facility's IT is also not inclined to help track these kinds of things down because they work on a project-based model.
Does anyone have any suggestions for what I can do to try to determine where the problem is occurring, so that I can come up with a permanent solution rather than the band-aid of reconnecting multiple times per day?
A tool like Wireshark may be helpful in seeing what's going on at the network level. The logging facility in SocketTools/SocketWrench can only report what's going on at the API level, and it sounds like the underlying problem occurs at a lower level in the TCP stack.
If this is occurring after periods of relative inactivity followed by a burst of activity, one thing you could try is enabling keep-alive and seeing if that makes any difference.
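The original program uses VB6 with Catalyst SocketTools, so the exact call will differ, but as a sketch of what enabling keep-alive looks like at the socket level, here is the equivalent in Node.js/TypeScript (the host and port are made up):

    import * as net from "net";

    // Hypothetical address of the VLinx ethernet-to-serial converter.
    const socket = net.createConnection({ host: "192.168.1.50", port: 4000 }, () => {
      // Send TCP keep-alive probes after 10 s of idle time, so a half-open
      // connection (e.g. one dropped without a FIN/ACK) is detected and the
      // socket errors out instead of silently appearing "connected".
      socket.setKeepAlive(true, 10_000);
    });

    socket.on("error", (err) => {
      console.error("socket error, reconnecting:", err.message);
      // ...tear down and re-establish the connection here...
    });

    socket.on("data", (chunk) => {
      // ...parse ACK messages / PLC responses from chunk...
    });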

How long does a UDP packet keep floating, and where?

We keep hearing about the unreliability of UDP: a packet may arrive, not arrive, or arrive out of order (signifying delay).
Where is it held until delivered?
Since it's connectionless, if you keep sending packets without a network connection, where will they go? The driver buffer?
Similarly, when the receiver is not reachable, is the packet immediately lost, or does it float around for a bit expecting the host to become available soon? If yes, then where?
On a direct connection from one device to another, with no intervening devices, there shouldn't be a problem. Where you can run into problems is where you go through a bunch of switches and routers (like the Internet).
A few reasons:
If a switch drops a frame, there is no mechanism to resend the frame.
Routers will buffer packets when they get congested, and packets can be dropped if the buffers are full, or they may be purposely dropped to prevent congestion.
Load balancing can cause packets to be delivered out of order.
You have no control over anything outside your network.
Where is it held until delivered?
Packet buffering can occur if packets arrive faster than the device can read them. Buffering can happen at the device's NIC, in the device driver's software queue, or in the software queue between the driver and the stack. But if the arrival rate is higher than these buffering mechanisms can handle, packets will be dropped at the appropriate layer/location (based on the design).
Since it's connectionless, if you keep sending packets without a network connection, where will they go? The driver buffer?
If there is no network, there may be no intermediate network devices involved, so there should not be significant problems. But it also depends on your architecture, design and configuration. If the configured limit of the OS receive buffer / socket buffer size (SO_RCVBUF, rmem_max, rmem_default) is exceeded, drops can occur there (see the sketch below). If the software queue in the device driver overflows, or the software queue between the driver and the stack overflows, drops can occur there too. And if the CPU is busy with a higher-priority task and suspends reception, packets can be dropped as well.
Similarly, when the receiver is not reachable, is the packet immediately lost, or does it float around for a bit expecting the host to become available soon? If yes, then where?
If there is no reachable destination, the packet will be dropped by a router. Note also that a router will drop the packet if its TTL/hop-limit count (in the IP header) has reached zero by the time it arrives at that router.
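To make the receive-buffer point concrete, here is a minimal sketch (Node.js dgram in TypeScript; the port number is illustrative) of inspecting and raising SO_RCVBUF on the receiving socket. Once senders outpace the reader and this buffer fills, packets are silently dropped.

    import * as dgram from "dgram";

    const sock = dgram.createSocket("udp4");

    sock.on("message", (msg, rinfo) => {
      // Packets sit in the kernel's socket receive buffer (SO_RCVBUF) until
      // this handler reads them; once that buffer is full, new packets are
      // silently dropped.
      console.log(`got ${msg.length} bytes from ${rinfo.address}:${rinfo.port}`);
    });

    sock.bind(5000, () => {
      console.log("default SO_RCVBUF:", sock.getRecvBufferSize());
      // Raise the buffer to absorb short bursts (still capped by OS limits
      // such as rmem_max on Linux).
      sock.setRecvBufferSize(4 * 1024 * 1024);
      console.log("raised SO_RCVBUF:", sock.getRecvBufferSize());
    });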

How/does DMA handle multiple concurrent transfers?

I am working on implementing a VM and trying to model all the different hardware components as accurately as possible, just for pure learning purposes.
My question is, how does a DMA device handle multiple concurrent transfer requests? From what I understand, a DMA device has several registers for setting the location in memory, the type of operation (read or write), and the number of bytes. So what happens when the CPU requests an operation from the DMA device, puts the thread to sleep, and then the next thread that runs also requests a DMA operation while the previous one is still in progress? Is this even supported?
Unless you're talking about ancient, ISA-era hardware, DMA nowadays is handled by the device itself taking ownership of the bus and requesting the data directly from the RAM. See the Wikipedia article on Bus Mastering for more information.
Therefore, it is really up to any individual device how to handle DMA, so not much can be said for the general case. However, most simple devices just support a single DMA operation at a time; if the host wants to submit two DMA operations, it would simply wait for the first DMA to complete (being notified by an interrupt) and then instruct the device to do the second one, the OS putting the requesting thread to sleep while the first DMA is in progress. There are certainly variations, however, such as using a command-buffer that can specify multiple (DMA-involving or not) operations for the device to do in sequence without interrupting the CPU between each.
I doubt there are many devices at all that try to carry out multiple transfers simultaneously, however, since interleaving DRAM accesses would just hurt performance anyway. But I wouldn't rule out their existence, especially if the operations involve very large transfers.
In the end, you'll just have to read up on the particular device you're trying to emulate.
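Since the goal is modeling this in a VM, here is one possible sketch (TypeScript, entirely hypothetical names, not any particular device) of the simple "one transfer at a time" behaviour described above: a single register set plus a busy flag, with further requests queued until the current transfer finishes and an interrupt is raised.

    // Hypothetical single-channel DMA model: one transfer at a time,
    // additional requests wait until the current one completes.
    type DmaRequest = {
      address: number;   // start address in guest memory
      length: number;    // number of bytes to transfer
      write: boolean;    // true = device -> memory, false = memory -> device
    };

    class DmaController {
      private busy = false;
      private pending: DmaRequest[] = [];

      constructor(
        private memory: Uint8Array,
        private raiseInterrupt: () => void,  // signals completion to the (virtual) CPU
      ) {}

      request(req: DmaRequest): void {
        if (this.busy) {
          // A simple real device would make the OS wait (requesting thread
          // asleep) and resubmit later; here the request is just queued.
          this.pending.push(req);
          return;
        }
        this.start(req);
      }

      private start(req: DmaRequest): void {
        this.busy = true;
        // Model the transfer as completing asynchronously, then interrupting.
        setImmediate(() => {
          // ...copy req.length bytes to/from this.memory at req.address...
          this.busy = false;
          this.raiseInterrupt();
          const next = this.pending.shift();
          if (next) this.start(next);
        });
      }
    }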

WiFi lag spikes after quiet period

I have a simple client<>server setup where the client sends UDP packets to the server on say port 2000 many times per second. The server has a thread with an open BSD socket listening on port 2000 and reads data using a blocking recvfrom call. That's it. I've set up a simple tic toc timer around the recvfrom call in the server and plotted the results when running this over Wifi.
When the server is connected to the access point via WiFi, the recvfrom call also usually takes about 0.015 seconds. However, after a short period of radio silence where no packets are sent (about half a second), the next packet that comes in on the server will cause the recvfrom call to take an extremely long time (between 0.6 and 3 seconds), followed by a succession of very quick calls (about 0.000005 seconds), and then back to normal (around 0.015 seconds). Here's some sample data (a sketch of the timing loop appears after the update below):
0.017361 <--normal
0.014914
0.015633
0.015867
0.015621
... <-- radio silence
1.168011 <-- spike after period of radio silence
0.000010 <-- bunch of really fast recvfrom calls
0.000005
0.000006
0.000005
0.000006
0.000006
0.000005
0.015950 <-- back to normal
0.015968
0.015915
0.015646
If you look closely you can see this pattern in the sample data above.
When I connect the server to the access point over a LAN (i.e. with a cable), everything works perfectly fine and the recvfrom call always takes around 0.015 seconds. But over Wifi I get these crazy spikes.
What on earth could be going on here?
P.S. The server is running Mac OS X, and the client is an iPhone which was connected to the access point via WiFi in both cases. I've tried running the client on an iPad and get the same results. The access point is an Apple AirPort Extreme base station with a network that is extended using an Apple AirPort Express. I've also tried with a Thompson router and a simple (non-WDS) network and still get the same issue.
UPDATE
I rewrote the server part in C# on Windows .NET and tested it over the WiFi, keeping everything else the same, and the issue disappeared. This suggests that it's an OS/network-stack/socket issue on Mac OS X.
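For reference, the tic-toc measurement described above can be sketched roughly like this (Node.js dgram in TypeScript, rather than the original BSD recvfrom loop; it logs the gap between successive packets, which is what produces numbers like those in the sample data):

    import * as dgram from "dgram";

    // Listen on the UDP port used above and log the time between successive
    // packets, roughly what the tic-toc timer around recvfrom measures.
    const sock = dgram.createSocket("udp4");
    let last = process.hrtime.bigint();

    sock.on("message", () => {
      const now = process.hrtime.bigint();
      const seconds = Number(now - last) / 1e9;
      last = now;
      console.log(seconds.toFixed(6)); // e.g. 0.015633, or a spike like 1.168011
    });

    sock.bind(2000);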
I don't think you can do anything about it. Several things can happen:
The WiFi MAC layer must allocate bandwidth slots to multiple users, and it will usually try to give a user a slot long enough to send as much traffic as possible. But while other users are busy, this client can't send traffic. You even see this with only one user (a consequence of the 802.11 protocol), but you'll notice it most with multiple active users, of course.
iOS itself may have some kind of power saving that buffers packets for some time and sends them in bursts, so it can keep some subsystems idle for a period of time.
You have some other radio signal that interferes.
This is not an exhaustive list, just what I could think of on short notice given the information provided.
One thing: 0.6 to 3 seconds is not an extremely long time in the wireless domain. It might be 'long', but latency is, for good reason, one of the biggest issues in all wireless communications. Don't forget that most WiFi APs are based on quite old specs, so I wouldn't call these numbers extreme (though I wouldn't expect 3-second gaps either).

Are events built on polling?

An event is when you click on something and code runs right away.
Polling is when the application constantly checks whether your mouse button is held down, and if it's held down in a certain spot, code runs.
Do events really exist in computing, or is it all a layer built on polling?
This is a complicated question, and the answer depends on how far down you go (in abstraction layers) to answer it. Ultimately, your USB keyboard device is being polled once per millisecond by the computer to ask what keys are being held down. This information gets passed to the keyboard driver through a CPU interrupt when the USB device (in the computer) gets a packet of data from the keyboard. From then on, interrupts are used to pass the data from process to process (through the GUI framework) and eventually reach your application.
As Marc Cohen said in his answer, CPU interrupts are also raised to signal I/O completion. This is an example of something which has no polling until you get to the hardware level, where checks are performed (perhaps once per clock cycle? Someone with more experience with computer architecture should answer) to see if the event has taken place.
It's a common technique to simulate events by polling, but that's often very inefficient and leads to a tradeoff between event resolution and polling overhead. That doesn't mean true events don't exist, though.
A CPU interrupt, which could be raised to signal an external event, like I/O completion, is an example of an event all the way down at the hardware layer.
Well, both the operating-system and application levels depend on events, not polling. Polling is usually what's used where state cannot be maintained. At the desktop-application and OS levels, however, applications have state, so they use events for their processing, not polling.
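To put the application-level contrast into code, here is a minimal sketch (Node.js/TypeScript, with a made-up "tick" event source) of the same signal handled by polling on a timer versus by an event callback:

    import { EventEmitter } from "events";

    // A hypothetical source of "something happened" signals.
    const source = new EventEmitter();
    let pendingCount = 0;
    source.on("tick", () => { pendingCount++; });

    // Polling: check on a timer, paying for every check even when nothing
    // has happened, and reacting up to 50 ms late (resolution vs. overhead).
    setInterval(() => {
      if (pendingCount > 0) {
        console.log(`polled: ${pendingCount} tick(s) since last check`);
        pendingCount = 0;
      }
    }, 50);

    // Event-driven: the handler runs immediately, and only when a tick occurs.
    source.on("tick", () => {
      console.log("event: handled immediately");
    });

    // Simulate occasional activity.
    setInterval(() => source.emit("tick"), 1000);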