STM32 ADC: leave it running at 'high' speed or switch it off as much as possible? - stm32

I am using a G0 with one ADC and 8 channels. Works fine. I use 4 channels. One is temperature, which is measured constantly, and I am interested in the value every 60 s. Another one is almost the opposite: it measures sound waves for a couple of minutes per day, and I need those samples at 10 kHz.
I solved this by letting all 4 channels sample at 10 kHz and having the four readings moved to memory by DMA (an array of length 4 with one measurement each). Every 60 s I take the temperature, and when I need the audio, I retrieve the audio values.
If I had two ADCs, I would have the temperature ADC perform one conversion every 60 s, non-stop, and I would only start the audio ADC for the couple of minutes a day that it is needed. But with the one-ADC solution, it seems simplest to let all conversions run at this high speed continuously, and that raised my question: is there any true downside to doing 40,000 conversions per second, 24 hours per day? If not, the code is simple: I just have the most recent values in memory all the time. But maybe I ruin the chip? I know I use too much energy, but there is plenty of it in this case.

You aren't going to "wear it out" by running it when you don't need to.
The main problems are wasting power and RAM.
If you have plenty of both to spare, then the lesser problems are:
The wasted power will become heat, which may upset your temperature measurements (though this is a very small amount).
Having the DMA running will increase your interrupt latency and maybe also slow down the processor slightly, if it encounters bus contention (this only matters if you are close to capacity in these regards).
Having it running all the time may also have the advantage of more stable readings, since nothing is perturbed by switching the ADC on and off.
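For reference, a minimal sketch of the continuous scan-plus-DMA arrangement the question describes, assuming the usual CubeMX-generated handle name (hadc1) and an ADC already configured for scan mode, circular DMA, and a 10 kHz trigger (those names and the channel ordering are assumptions, not from the question):

    /* Sketch of the continuous 4-channel scan described above.
     * Assumes CubeMX-generated init code and the handle name hadc1, the ADC
     * configured in scan mode with circular DMA, and a 10 kHz trigger. */
    #include "main.h"

    extern ADC_HandleTypeDef hadc1;

    /* DMA keeps overwriting this with the latest conversion of each rank. */
    static volatile uint16_t adc_samples[4];   /* [0] = temperature, [3] = audio (assumed rank order) */

    void adc_acquisition_start(void)
    {
        /* Recommended on the G0: run the calibration before the first start. */
        HAL_ADCEx_Calibration_Start(&hadc1);

        /* Circular DMA keeps the most recent value of every channel in RAM. */
        HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adc_samples, 4);
    }

    /* Every 60 s, simply read the slot that holds the temperature channel. */
    uint16_t read_temperature_raw(void)
    {
        return adc_samples[0];
    }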

Related

UPS for 750W to be up 6 hours

I have a Dell PowerEdge T630 server with a 750 W power supply and a power factor of 94%. I need a UPS that can keep my server up for at least 6 hours.
Please advise on the required UPS capacity in kVA.
Thanks.
The UPS power capacity will need to be at least 750 VA. With a power factor that high, watts and VA are effectively the same: to supply 750 W, you need a 750 VA UPS.
Make sure the UPS can also supply 750 watts. Not every 750 VA UPS can supply 750 watts; many UPSes can supply more apparent power (VA) than they can real power (watts). You may wind up needing a UPS rated for 1,000 VA to get one that can supply 750 watts.
The amount of time the UPS needs to last has no effect on how much power it needs to be able to supply. A 100 watt light bulb needs 100 watts, whether it's on for a minute or a year.
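As a rough sanity check with the numbers from the question (750 W load, power factor 0.94):

    apparent power = real power / power factor = 750 W / 0.94 ≈ 800 VA

so the load itself needs roughly 750 W and 800 VA of capacity; in practice that often means stepping up to the next standard size (e.g. 1,000 VA) to get a unit whose real-power rating actually covers 750 W.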
I would caution you that there is no reliable way to compute a UPS's run time at a given load based on its stated efficiency or reserve time at other load values. Battery efficiency changes drastically with load. For example, halving the battery size will often cut the run time by much more than half. The only reliable way to predict UPS run time is to ask the manufacturer or look at the manufacturer's graph of run time versus load.
Note that doubling the battery size will typically more than double the run time, and likewise, run time falls faster than proportionally as the load rises. This means that, for example, a UPS that claims to run for one hour at half load will almost certainly not run for half an hour at full load; it will usually run for much less time than that.
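One rough way to see why (a back-of-the-envelope sketch, not from the manufacturer data; the exponent value is an assumption typical for lead-acid batteries) is Peukert's law:

    t = H * (C / (I * H))^k

where H is the rated discharge time, C the rated capacity at that rate, I the actual discharge current, and k the Peukert exponent (commonly around 1.1 to 1.3 for lead-acid). With k = 1.2, doubling the current divides the run time by 2^1.2 ≈ 2.3 rather than 2, so a battery good for 60 minutes at half load lasts roughly 26 minutes at full load, even before inverter losses are counted.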
For such a long run time, look for a UPS that can be expanded with additional battery modules. The manufacturer should provide a chart showing how much run time each added module provides at a given load, which will let you figure out how many modules you need. Be aware that adding more battery modules often increases the charge time.

Polling vs. Interrupts with slow/fast I/O devices

I'm learning about the differences between Polling and Interrupts for I/O in my OS class and one of the things my teacher mentioned was that the speed of the I/O device can make a difference in which method would be better. He didn't follow up on it but I've been wracking my brain about it and I can't figure out why. I feel like using Interrupts is almost always better and I just don't see how the speed of the I/O device has anything to do with it.
The only advantage of polling comes when you don't care about every change that occurs.
Assume you have a real-time system that measures the temperature of a vat of molten plastic used for molding. Let's also say that your device can measure to a resolution of 1/1000 of a degree and can take a new temperature reading every 1/10,000 of a second.
However, you only need the temperature every second and you only need to know the temperature within 1/10 of a degree.
In that kind of environment, polling the device might be preferable. Make one polling request every second. If you used interrupts, you could get 10,000 interrupts a second as the temperature moved +/- 1/1000 of a degree.
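A minimal sketch of that polling arrangement (read_temperature() is a hypothetical stand-in for whatever register read or driver call the real device uses, and the numbers are made up):

    /* One poll per second is all the application needs, even though the
     * device could deliver 10,000 readings (and interrupts) per second. */
    #include <stdio.h>
    #include <unistd.h>

    /* Stand-in for the real device read (register access, driver call, etc.). */
    static double read_temperature(void)
    {
        return 182.4;   /* pretend the vat sits near 182 degrees */
    }

    int main(void)
    {
        for (;;) {
            double temp_c = read_temperature();   /* one poll per second */
            printf("vat temperature: %.1f C\n", temp_c);
            sleep(1);                             /* the other 9,999 samples are simply never requested */
        }
    }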
Polling used to be common with certain I/O devices, such as joysticks and pointing devices.
That said, there is VERY little need for polling and it has pretty much gone away.
Generally you would want to use interrupts, because polling can waste a lot of CPU cycles. However, if the events are frequent and arrive at predictable times (and if other factors apply, e.g. short polling times), polling can be a good alternative, especially because each interrupt carries more per-event overhead than a single polling check.
You might want to take a look at this thread as well for more detail:
Polling or Interrupt based method

Measure the electricity consumed by a browser to render a webpage

Is there a way to calculate the electricity consumed to load and render a webpage (frontend)? I was thinking of a 'test' made with phantomjs for example:
load a web page
scroll to the bottom
And measure how much electricity was needed. I could perhaps extrapolate from CPU cycles, but phantomjs is headless, and rendering in a real browser is certainly different. Perhaps it's impossible to do real measurements... but with an index it may be possible to compare websites.
Do you have other suggestions?
It's pretty much impossible to measure this internally in modern processors (anything more recent than a 286). By internally, I mean by counting cycles. This is because different parts of the processor consume different amounts of energy per cycle depending upon the instruction.
That said, you can make your measurements. Stick a power meter between the wall outlet and the computer. Here's a procedure:
Measure the baseline energy usage, i.e. nothing running except the OS and the browser, and the browser completely static (i.e. not doing anything). You need to make sure that everything is in steady state (SS), meaning you start your measurements only after several minutes of idle.
Measure the usage while doing the operation you want. Again, you want to avoid any startup and shutdown work, so make sure you start measuring at least 15 seconds after you start the operation. Stopping isn't an issue, since the browser will execute any termination code after you finish your measurement.
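As a sketch of the arithmetic this procedure boils down to (the wattages and duration below are made-up placeholders, not measurements):

    /* Attribute to the page only the power drawn above the idle baseline,
     * integrated over the measurement window. */
    #include <stdio.h>

    static double net_energy_joules(double baseline_watts,
                                    double active_watts,
                                    double seconds)
    {
        return (active_watts - baseline_watts) * seconds;
    }

    int main(void)
    {
        /* e.g. meter reads 38 W idle, 46 W while the page loads, over 30 s */
        double e = net_energy_joules(38.0, 46.0, 30.0);
        printf("energy attributable to the page: %.1f J (%.4f Wh)\n", e, e / 3600.0);
        return 0;
    }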
Sounds simple, right? Unfortunately, because of the nature of your measurements, there are some gotchas.
Do you recall your physics classes (or EE classes) that talked about signal to noise ratios? Well, a scroll down uses very little energy, so the signal (scrolling) is buried well below the noise (normal background processes). This means you have to take a LOT of samples to get anything useful.
Your browser startup energy usage, or anything else that uses a decent amount of processing, is much easier to measure (better signal to noise ratio).
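To put a rough number on "a LOT" (a back-of-the-envelope estimate, not a measured figure): averaging N independent runs shrinks random noise roughly as

    noise after averaging ≈ noise of one run / sqrt(N)

so if the scrolling signal is ten times smaller than the run-to-run background fluctuation, you need on the order of 100 runs just to bring it level with the residual noise, and on the order of 10,000 runs to stand clearly above it.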
Also, make sure you understand the underlying electronics. For example, real power is voltage times amperage only when the two are in phase; otherwise you are reading apparent power (VA). I don't think this will be an issue, since modern computer power supplies keep voltage and current close to in phase, and any decent power meter understands the difference anyway.
I'm guessing you intend to do this for mobile devices. Your measurements will only be roughly comparable from processor to processor. This is due to architectural differences from generation to generation, and from manufacturer to manufacturer.
Good luck.

Can a sub-microsecond clock resolution be achieved with current hardware?

I have a thread that needs to process a list of items every X nanoseconds, where X < 1 microsecond. I understand that with standard x86 hardware the clock resolution is at best 15 - 16 milliseconds. Is there hardware available that would enable a clock resolution < 1 microsecond? At present, the thread runs continuously as the resolution of nanosleep() is insufficient. The thread obtains the current time from a GPS reference.
You can get the current time with extremely high precision on x86 using the rdtsc instruction. It counts clock cycles (on a fixed reference clock, not the actual, dynamically varying CPU core clock), so you can use it as a time source once you find the coefficients that map it to real GPS time.
This is the clock source Linux uses internally on new enough hardware. (Older CPUs had the rdtsc counter pause when the CPU was halted in idle, and/or change frequency with CPU frequency scaling.) It was originally intended for measuring CPU time, but it turned out that a very precise clock with very low-cost reads (~30 clock cycles) was valuable, hence it was decoupled from CPU core clock changes.
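A minimal sketch of using the TSC this way on Linux/x86 with GCC or Clang (it calibrates against CLOCK_MONOTONIC_RAW as a stand-in for whatever reference, e.g. your GPS time, you actually discipline against, and assumes an invariant TSC):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <x86intrin.h>   /* __rdtsc() */

    static uint64_t monotonic_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    int main(void)
    {
        /* Calibrate: how many TSC ticks elapse per nanosecond? */
        uint64_t t0 = monotonic_ns(), c0 = __rdtsc();
        usleep(200 * 1000);                        /* 200 ms calibration window */
        uint64_t t1 = monotonic_ns(), c1 = __rdtsc();
        double ticks_per_ns = (double)(c1 - c0) / (double)(t1 - t0);

        /* Now TSC deltas can be converted to nanoseconds very cheaply. */
        uint64_t a = __rdtsc();
        uint64_t b = __rdtsc();
        printf("back-to-back rdtsc reads: ~%.1f ns apart\n",
               (double)(b - a) / ticks_per_ns);
        return 0;
    }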
It sounds like an accurate clock isn't your only problem, though: If you need to process a list every ~1 us, without ever missing a wakeup, you need a realtime OS, or at least realtime functionality on top of a regular OS (like Linux).
Knowing what time it is when you do eventually wake up doesn't help if you slept 10 ms too long because you read a page of memory that the OS decided to evict, and had to get from disk.

How does an OS with a 1 kHz tick rate make nanosecond measurements?

I've worked on low-level devices such as microcontrollers, and I understand how timers and interrupts work; but I was wondering how an operating system with a 1 kHz tick rate is able to give time measurements of events down to nanosecond precision. Are they using timers with input capture or something like that? Is there a limit to how many measurements you can take at once?
On modern x86 platforms, the TSC can be used for high resolution time measurement, and even to generate high resolution interrupts (that's a different interface, of course, but it counts in TSC ticks). There is no inherent limit to the number of parallel measurements. Before the "Invariant TSC" feature was introduced it was much harder to use the TSC that way, because the TSC frequency changed with the CPU's frequency and power states (and so the counters usually drifted out of sync across cores and in any case were hard to use as a stopwatch).
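As an illustration, here is a sketch assuming Linux with a TSC clocksource: the kernel interpolates between timer ticks using the TSC, so the timestamps below have nanosecond resolution even though the tick itself is only 1 kHz, and each measurement is just a pair of reads, so any number of them can be in flight at once.

    #include <stdio.h>
    #include <time.h>

    static long long now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    int main(void)
    {
        struct timespec res;
        clock_getres(CLOCK_MONOTONIC, &res);   /* typically reports 1 ns */
        printf("reported resolution: %ld ns\n", res.tv_nsec);

        long long start = now_ns();
        /* ... event being timed ... */
        long long elapsed = now_ns() - start;
        printf("elapsed: %lld ns\n", elapsed);
        return 0;
    }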