what is meant by real time operating system tick time and what is the use of this system tick time - rtos

I want to understand what is meant by the operating system tick time and what it is used for,
and how it differs from the CPU clock rate.

The system tick is the time unit that OS timers and delays are based on. The system tick is a scheduling event - i.e. it causes the scheduler to run and may cause a context switch - for example, if a timer has expired or a task delay has completed.
If the RTOS supports round-robin/time-sliced scheduling of tasks of the same priority, the OS tick may cause a context switch directly without the task requesting a delay or timer event.
The system tick interrupt is not the only scheduling event, other mechanisms and events may cause scheduling asynchronously to the system tick.
An RTOS system tick period will typically be of the order of 1 ms to 100 ms, but may be longer or shorter. The overhead of running the scheduler becomes increasingly significant as the period gets shorter, so there is a trade-off between timer resolution and CPU overhead. In many cases real-time response does not rely on timer resolution, because events generate interrupts that cause the scheduler to run asynchronously to the tick.
Take a look at Fundamentals of Real-Time Operating Systems for a good overview of RTOS. Part 17 in particular is relevant to this question.

Related

How to run a periodic thread in high frequency(> 100kHz) in a Cortex-M3 microcontroller in an RTOS?

I'm implementing a high-frequency (>100 kHz) data acquisition system with an STM32F107VC microcontroller. It uses the SPI peripheral to communicate with a high-frequency ADC chip. I have to use an RTOS. How can I do this?
I have tried FreeRTOS, but its maximum tick frequency is 1000 Hz, so I can't run a thread every 1 µs, for example. I also tried Keil RTX5, whose tick frequency can go up to 1 MHz, but I read somewhere that setting the tick frequency that high is not recommended because it increases the overall context-switching overhead. So what should I do?
Thanks.
You do not want to run a task at this frequency. As you mentioned, the context switches will kill the performance; it is horribly inefficient.
Instead, you want to use buffering, interrupts and DMA. Since it's a high-frequency ADC chip, it probably has an internal buffer of its own - check the datasheet. If the chip has a 16-sample buffer, 100 kHz sampling only needs processing at 6.25 kHz. And don't use a task to process the samples at 6.25 kHz either: do the receiving in an interrupt (from a timer or some signal), have the interrupt do nothing but fill a buffer, and wake up a task for processing when the buffer is full (switching to another buffer until the task has finished). With this you can have a task that runs only every 10 ms or so. An interrupt is not a context switch; on a Cortex-M3 it has a latency of around 12 cycles, which is low enough to be negligible at 6.25 kHz.
If your ADC chip doesn't have a buffer (though I doubt that), you may be OK with a 100 kHz interrupt, but put as little code as possible inside it.
A better solution is to use a DMA if your MCU supports that. For example, you can setup a DMA to receive from the SPI using a timer as a request generator. Depending on your case it may be impossible or tricky to configure, but a working DMA means that you can receive a large buffer of samples without any code running on your MCU.
I have to use an RTOS.
No way. If it's a requirement from your boss or client, run away from the project fast. If that's not possible, communicate your concerns in writing now, to cover yourself when the reasons for the failure are discussed. If it's your own idea, then reconsider now.
The maximum system clock speed of the STM32F107 is 36 MHz (72 MHz with an external HSE crystal), meaning that there are only 360 to 720 system clock cycles between ticks arriving at 100 kHz. The RTX5 warning is right: a significant proportion of that time would be consumed by task-switching overhead.
It is possible to have a timer interrupt at 100 kHz, and do some simple processing in the interrupt handler (don't even think about using HAL), but I'd recommend investigating first whether it's really necessary to run code every 10 μs, or is it possible to offload something that it would do to the DMA or timer hardware.
Since you only have a few hundred cycles (instructions) between inputs, the typical solution is to use an interrupt to be alerted that data is available, and have the interrupt handler put the data somewhere so you can process it at your leisure. Of course, if the data comes in continuously at that rate, you may be in trouble, with no time left for actual processing. Depending on how much data is coming in and how frequently, a simple ring buffer may be sufficient. If the amount of data is relatively large (how large is large? consider that a memory access takes more than one CPU cycle, and that each incoming datum needs at least two memory accesses), then using DMA as @Elderbug suggested is a great solution, as it consumes the minimum of CPU cycles.
There is no need to set the RTOS tick to match the data acquisition rate - the two are unrelated. And to do so would be a very poor and ill-advised solution.
The STM32 has DMA capability for most peripherals including SPI. You need to configure the DMA and SPI to transfer a sequence of samples directly to memory. The DMA controller has full and half transfer interrupts, and can cycle a provided buffer so that when it is full, it starts again from the beginning. That can be used to "double buffer" the sample blocks.
So, for example, if you use a DMA buffer of, say, 256 samples and sample at 100 ksps, you will get a DMA interrupt every 1.28 ms, independent of the RTOS tick interrupt and scheduling. On the half-transfer interrupt the first 128 samples are ready for processing; on the full-transfer interrupt the second 128 samples can be processed; and in each 1.28 ms interval the processor is free to do useful work.
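A sketch of what that setup can look like with the ST HAL (assumptions: an SPI handle `hspi2` initialised elsewhere with its RX DMA channel in circular mode, and a hypothetical `notify_processing_task()` that posts the block to the processing task; error handling and initialisation omitted - this shows the shape, not a complete driver):

```c
#include <stddef.h>
#include "stm32f1xx_hal.h"

#define NSAMPLES 256
static uint16_t samples[NSAMPLES];   /* cycled endlessly by the DMA */

extern SPI_HandleTypeDef hspi2;      /* RX DMA configured as DMA_CIRCULAR */
void notify_processing_task(uint16_t *block, size_t n);  /* hypothetical */

void acquisition_start(void)
{
    /* From here on the hardware fills 'samples' with no CPU involvement. */
    HAL_SPI_Receive_DMA(&hspi2, (uint8_t *)samples, NSAMPLES);
}

/* HAL weak callbacks, invoked from the DMA interrupt - at 100 ksps this
 * alternates every 1.28 ms, independent of the RTOS tick. */
void HAL_SPI_RxHalfCpltCallback(SPI_HandleTypeDef *hspi)
{
    /* First half is stable while the DMA fills the second half. */
    notify_processing_task(&samples[0], NSAMPLES / 2);
}

void HAL_SPI_RxCpltCallback(SPI_HandleTypeDef *hspi)
{
    notify_processing_task(&samples[NSAMPLES / 2], NSAMPLES / 2);
}
```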
In the interrupt handler, rather than processing all the block data in the handler itself - which would not in any case be possible if the processing were non-deterministic or blocking, such as writing to a file system - you might, for example, send the samples in blocks via a message queue to a task context that performs the less deterministic processing.
Note that none of this relies on the RTOS tick - the scheduler will run after any interrupt if that interrupt calls a scheduling function such as posting to a message queue. Synchronising actions to an RTOS clock running asynchronously to the triggering event (i.e. polling) is not a good way to achieve highly deterministic real-time response and is a particularly poor method for signal acquisition, which requires a jitter free sampling interval to avoid false artefacts in the signal from aperiodic sampling.
Your assumption that you need to solve this problem by an inappropriately high RTOS tick rate is to misunderstand the operation of the RTOS, and will probably only work if your processor is doing no other work beyond sampling data - in which case you might not need an RTOS at all, but it would not be a very efficient use of the processor.

Polling vs. Interrupts with slow/fast I/O devices

I'm learning about the differences between Polling and Interrupts for I/O in my OS class and one of the things my teacher mentioned was that the speed of the I/O device can make a difference in which method would be better. He didn't follow up on it but I've been wracking my brain about it and I can't figure out why. I feel like using Interrupts is almost always better and I just don't see how the speed of the I/O device has anything to do with it.
The only advantage of polling comes when you don't care about every change that occurs.
Assume you have a real-time system that measures the temperature of a vat of molten plastic used for molding. Let's also say that your device can measure to a resolution of 1/1000 of a degree and can take new temperature every 1/10,000 of a second.
However, you only need the temperature every second and you only need to know the temperature within 1/10 of a degree.
In that kind of environment, polling the device might be preferable. Make one polling request every second. If you used interrupts, you could get 10,000 interrupts a second as the temperature moved +/- 1/1000 of a degree.
Polling used to be common with certain I/O devices, such as joysticks and pointing devices.
That said, there is VERY little need for polling and it has pretty much gone away.
Generally you would want to use interrupts, because polling can waste a lot of CPU cycles. However, if events are frequent and synchronous (and other factors apply, e.g. short polling times), polling can be a good alternative, especially because an interrupt incurs more overhead than a single polling cycle.
You might want to take a look at this thread as well for more detail:
Polling or Interrupt based method

Can a sub-microsecond clock resolution be achieved with current hardware?

I have a thread that needs to process a list of items every X nanoseconds, where X < 1 microsecond. I understand that with standard x86 hardware the clock resolution is at best 15 - 16 milliseconds. Is there hardware available that would enable a clock resolution < 1 microsecond? At present, the thread runs continuously as the resolution of nanosleep() is insufficient. The thread obtains the current time from a GPS reference.
You can get the current time with extremely high precision on x86 using the rdtsc instruction. It counts clock cycles (on a fixed reference clock, not the actually dynamic frequency CPU clock), so you can use it as a time source once you find the coefficients that map it to real GPS-time.
This is the clock-source Linux uses internally, on new enough hardware. (Older CPUs had the rdtsc clock pause when the CPU was halted on idle, and/or change frequency with CPU frequency scaling). It was originally intended for measuring CPU-time, but it turns out that a very precise clock with very low-cost reads (~30 clock cycles) was valuable, hence decoupling it from CPU core clock changes.
It sounds like an accurate clock isn't your only problem, though: If you need to process a list every ~1 us, without ever missing a wakeup, you need a realtime OS, or at least realtime functionality on top of a regular OS (like Linux).
Knowing what time it is when you do eventually wake up doesn't help if you slept 10 ms too long because you read a page of memory that the OS decided to evict, and had to get from disk.

Idle state in RTOS, sleep state or lowest frequency?

In real-time systems using an RTOS, how would the RTOS handle an idle period? Would it run NOP instructions at the lowest frequency supported by a processor capable of dynamic voltage scaling, or would it switch to a sleep state? Can anyone refer me to actual practical implementations? Thanks.
It will depend entirely on the target hardware and possibly the needs and design of the application. For example on ARM Cortex-M you would typically invoke the WFI instruction which shuts down the core until the occurrence of an interrupt.
In many microcontroller/SoC cases, reducing the PLL clock frequency would affect the on-chip peripherals from which hardware interrupts might occur, so that is less likely. It would affect baud rates and timer resolution, and is perhaps hard to manage easily. There is a paper here on a tickless idle power management method on FreeRTOS/Cortex-M3.
In most cases the idle loop source is provided as part of the board-support, so you can customise it to your needs.
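In FreeRTOS, for instance, this customisation is a configuration fragment plus the idle hook (assuming a Cortex-M port where CMSIS provides `__WFI()`; treat this as a sketch, not a complete low-power design):

```c
/* FreeRTOSConfig.h */
#define configUSE_IDLE_HOOK      1
#define configUSE_TICKLESS_IDLE  1   /* let the port suppress ticks while idle */

/* Application code: called from the idle task on each pass of its loop. */
void vApplicationIdleHook(void)
{
    __WFI();   /* halt the core until the next interrupt, instead of spinning */
}
```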

RTC vs PIT for scheduler

My professor said that it is recommended to use the PIT instead of the RTC to implement an epoch-based round-robin scheduler. He didn't really give any concrete reasons, and I can't think of any either. Any thoughts?
I would personally use the PIT (if you can only choose between these two; modern OSes use the HPET, IIRC).
One, it can generate interrupts at a higher frequency (although I question whether preempting a process within milliseconds is beneficial).
Two, it has a higher priority on the PIC chip, which means its handler can't be interrupted by other IRQs.
Personally I use the PIT for the scheduler and the RTC timer for wall clock time keeping.
The RTC can be changed (it is, after all, a normal "clock"), meaning its value can't be trusted from an OS perspective. It might also not have good enough resolution and/or precision for OS scheduler interrupts.
While this doesn't answer the question directly, here are some further insights into choosing the preemption timer.
On modern systems (i586+; I am not sure whether the i486's external local APIC (LAPIC) had a timer) you should use neither, because you always have the local APIC timer, which is per-core. There's even more: using either the PIT or the RTC for timer interrupts is already obsolete.
The LAPIC timer is usually used for preemption on modern systems, while HPET is used for high precision events. On systems having HPET, there's usually no physical PIT; also, first two comparators of HPET are capable of replacing PIT and RTC interrupt sources, which is the simplest possible configuration for them and is preferred in most cases.
PITs are faster. RTCs typically increment no faster than 8 kHz and are most commonly configured to increment at 1 Hz (once a second).
The PIT can generate interrupts and has a higher resolution than the real-time clock.