seccomp system call priority values 65535 - bpf

I've read that the priority can be a value between 0 and 255 (http://man7.org/linux/man-pages/man3/seccomp_syscall_priority.3.html). Why, when using seccomp_export_pfc, is the baseline priority 65535?
# filter for syscall "exit_group" (231) [priority: 65535]
if ($syscall == 231)
  action ALLOW;
# filter for syscall "exit" (60) [priority: 65535]
if ($syscall == 60)
  action ALLOW;

They are two different things: with seccomp_syscall_priority, you provide a priority hint, whereas seccomp_export_pfc outputs libseccomp's internal priority.
As explained in the source code comments, the internal priority contains your priority hint, if any, in the first 16 bits. The last 16 bits are used in case of a tie (i.e., two filters have the same upper 16 bits), in which case libseccomp gives higher priority to filters that are easier to evaluate.
So, in your case, since you did not provide any hint, the internal priority is equal to 0x0000FFFF, or 65535.
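If it helps to see both sides in one place, here is a minimal sketch (assuming libseccomp is available; none of this is from the original post) that sets a hint with seccomp_syscall_priority and then dumps the PFC so you can inspect the internal priority that results:

/* Minimal sketch (not from the original post): compare the 0-255 hint with
 * the internal priority printed by the PFC export.
 * Build with: gcc pfc_demo.c -lseccomp
 */
#include <seccomp.h>
#include <stdio.h>

int main(void)
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (ctx == NULL)
        return 1;

    /* the two rules from the question */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit), 0);

    /* optional 0-255 hint; per the explanation above it ends up in the
     * upper 16 bits of the internal priority shown by the PFC output */
    seccomp_syscall_priority(ctx, SCMP_SYS(exit_group), 255);

    /* print the pseudo filter code, including the [priority: ...] lines */
    seccomp_export_pfc(ctx, fileno(stdout));

    seccomp_release(ctx);
    return 0;
}

Without the seccomp_syscall_priority call you get the 65535 baseline from the question; with it, the exit_group rule should show a larger internal priority, since the hint occupies the upper 16 bits.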

Related

Why are processors restrained in the number of registers?

I wonder why the number of registers must be only 32.
I know vaguely about the reason, but I want to know more exactly.
Let's look at what we have with 32 general purpose integer-oriented registers, and what would happen if we went to 256 registers:
Diminishing returns
Normal compiled code demonstrates that with 32 registers, most functions leave some of the registers unused.  So, adding more than 32 registers doesn't help most code.
Encoding size
On a register machine, binary operators like addition, subtraction, comparison, and others require three operands: left source, right source, and target.  On a RISC machine, each of these uses a register operand, so that means 3 register operands in one instruction.  This means that 3 x 5 bits = 15 bits are used in such an instruction on a machine with 32 registers.
If we were to increase the number of registers to, say, 256, then we would need 8 bits for each register operand.  That would mean 3 x 8 bits = 24 bits.  Instructions become larger, and this decreases the efficiency of the instruction cache, a critical component of performance.
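To make the arithmetic concrete, here is a small illustrative C sketch (the field layout is invented for illustration and does not correspond to any real ISA) that packs an opcode plus three register numbers and prints how many operand bits each register count costs:

#include <stdint.h>
#include <stdio.h>

/* Pack an opcode and three register numbers into the low bits of a
 * 32-bit word, using `regbits` bits per register field. */
static uint32_t encode(uint32_t opcode, uint32_t rd, uint32_t rs1,
                       uint32_t rs2, unsigned regbits)
{
    uint32_t insn = opcode;
    insn = (insn << regbits) | rd;
    insn = (insn << regbits) | rs1;
    insn = (insn << regbits) | rs2;
    return insn;
}

int main(void)
{
    /* 32 registers  -> 5 bits per field -> 3 x 5 = 15 operand bits */
    printf("32 regs : %d operand bits\n", 3 * 5);
    /* 256 registers -> 8 bits per field -> 3 x 8 = 24 operand bits */
    printf("256 regs: %d operand bits\n", 3 * 8);

    /* e.g. a hypothetical "add r3, r1, r2" with 5-bit register fields */
    printf("add r3, r1, r2 -> 0x%08x\n", encode(0x01, 3, 1, 2, 5));
    return 0;
}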
Many instruction sets do have more than 32 registers
They add specialized registers, such as a whole second set for floating point, and also another set of extra wide registers for SIMD and vector operations.
In context, these additional register sets don't necessarily suffer the same code expansion described above, because they don't intermix with each other: we can have 32 integer registers and also 32 floating point registers and still maintain 5-bit register fields in the instructions, because each instruction knows which register set it is using and doesn't support mixing of the register sets in the same instruction.
Also, to be clear, many instruction sets have used different numbers of registers, many with fewer than 32 and some with more than 32.

why does CMSIS oblige a maximum of 64 priorities

I am trying to implement CMSIS RTOS on my project using ThreadX. However, I found in the file cmsis_os2.c that it is obligatory to have a max priority of 64. I would like to keep it at 32 (RAM optimisation), so does anyone have an explanation of why I should use 64 and not 32? And is it a problem to use 32 and simply modify the CMSIS file? This is the code I found:
/* Ensure the maximum number of priorities is modified by the user to 64. */
#if(TX_MAX_PRIORITIES != 64)
#error "CMSIS RTOS ThreadX Wrapper: TX_MAX_PRIORITIES must be fixed to 64 in tx_user.h file"
#endif
CMSIS enumerates the priorities with enum osPriority_t. This is a bad idea in my opinion; it rather constrains the implementation and would be hard to change without breaking the abstraction.
In ThreadX, having 64 rather than 32 priorities carries a 128 byte overhead (so not much of a RAM optimisation). If that is really a problem, then you could, in the porting layer, map the 64 CMSIS priorities to 32 levels simply by dividing the priority by 2 when creating the task with the native API. That might however modify the scheduling, because tasks at priorities Nx2 and (Nx2)+1 would both map to the same priority N.
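If you did go that route, the porting-layer mapping might look something like the following sketch (purely illustrative; it is not the actual ThreadX wrapper code, and it assumes TX_MAX_PRIORITIES has been reconfigured to 32):

#include "cmsis_os2.h"   /* for osPriority_t */

/* Illustrative only: fold the 64 CMSIS priority levels onto 32 ThreadX
 * levels by dividing by two, then invert the scale because ThreadX
 * treats 0 as the HIGHEST priority while osPriority_t treats larger
 * numbers as higher priority. */
#define PORT_TX_MAX_PRIORITIES  32U   /* assumed TX_MAX_PRIORITIES value */

static unsigned int cmsis_to_threadx_priority(osPriority_t prio)
{
    unsigned int folded = (unsigned int)prio / 2U;      /* 0..63 -> 0..31 */
    return (PORT_TX_MAX_PRIORITIES - 1U) - folded;      /* invert the scale */
}

Note that, as described above, osPriorityLow (8) and osPriorityLow1 (9) would then land on the same ThreadX level, which changes the relative scheduling of those tasks.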
Another issue with changing the number of priorities is that porting your code to a different CMSIS RTOS2 implementation could change the scheduling behaviour, which rather defeats the object of the abstraction.
You have to take care with CMSIS RTOS2 priorities because in fact only 48 values, from 8 to 55, are normally used for user tasks, as can be seen from the enumeration: 0, 1 and 56 are reserved, and 2 to 7 are given no enumeration. How those map to native priorities is implementation dependent, and if you were to change the implementation you would still have to account for the reserved values. It is therefore not advisable to simply pass integer priorities without ensuring that they are in the range osPriorityLow to osPriorityRealtime7. It is on the whole not a perfect abstraction.
ThreadX is perhaps unusual in having this overhead related to the number of priority levels. It is also unusual in having a configurable number of priority levels at all; in many RTOSes the priority is simply an 8 or 16 bit integer.
This is a CMSIS issue, not an Azure RTOS issue. You'll have to ask the CMSIS folks.

Implementation of zero flag without zero flag in program status word

Suppose a processor's PSW does not implement a zero flag, but it does have Carry, Sign, Parity, and Overflow flags. In this architecture, how would a programmer implement JZ (jump on zero)?
You can't implement JZ after any arbitrary instruction like add reg, reg if there is no zero flag; none of the other flags carry the same information. For example, 8-bit -128 + -128 overflows and carries to 0, but you can't distinguish that from -128 + -1, which overflows / carries to 127. Or various other combinations that you can't distinguish even with the help of SF and PF.
That's why we have a Zero Flag in normal ISAs, including 8080 or x86 whose flags and mnemonics you're using.
Did you actually just want to emulate x86 test eax,eax / jz or ARM cbz reg, target (conditional-branch on a register being zero) to test a register and jump if it was zero?
Note that 0 is the only number unsigned-below 1, so you can cmp / jnc. This looks like homework so I'm not going to spell it out more than that.
Or do what MIPS does, and provide an instruction like beq $reg, $reg, target that you can use to compare-and-branch on any pair of regs. (MIPS doesn't have a PSW / FLAGS at all). MIPS has an architectural zero register that always reads as zero, so you can always branch on any other register being zero with one machine instruction.
ARM Thumb, and AArch64, provide a limited version of that: cbz and cbnz that compare/branch on a single register being zero or non-zero, separate from the ARM flags register.
But really if you're going to have a FLAGS / PSW register at all, implement a zero flag. That's one of the most basic useful things. Although to be fair, a carry flag is even more critical. If you could only have one flag, it would probably be carry because you can still test for zero efficiently. Signed compares for greater or less are harder to emulate with SF and OF, though.

Minimum and maximum length of X509 serialNumber

The CA/Browser Forum Baseline Requirements section 7.1 states the following:
CAs SHOULD generate non‐sequential Certificate serial numbers that exhibit at least 20 bits of entropy.
At the same time, RFC 5280 section 4.1.2.2 specifies:
Certificate users MUST be able to handle serialNumber values up to 20 octets. Conforming CAs MUST NOT use serialNumber values longer than 20 octets.
Which integer range can I use in order to fulfill both requirements? It is my understanding that the max. value will be 2^159 (730750818665451459101842416358141509827966271488). What is the min. value?
The CA/B requirement has since changed to a minimum of 64 bits of entropy. Since the leading bit in the representation of a positive integer must be 0, there are a number of strategies:
produce a random bit string of 64 bits; if the leading bit is 1, prepend 7 more random bits. This has the disadvantage that it is not obvious whether the generation considered 63 or 64 bits
produce a random bit string of 64 bits and add a leading 0 byte (see the sketch at the end of this answer). This is still compact (it wastes only 7-8 bits) and it is obvious that it has 64 bits of entropy
produce a random string longer than 64 bits, for example 71 or 127 bits (plus a leading 0 bit). 16 bytes seems to be a common length and is well under the 20 byte limit
By the way, since it is unclear whether the 20 byte maximum length includes a potential 0 prefix, for interop reasons you should not generate more than 159 bits.
A random integer with x bits of entropy can be produced by generating a random number between 0 and (2^x)-1 (inclusive).
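As an illustration of the second strategy above (a sketch only, not from the original answer; /dev/urandom is just an assumed entropy source):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 8 random bytes = 64 bits of entropy, prefixed with a zero byte so
     * the serialNumber is unambiguously a positive integer (strategy 2) */
    unsigned char serial[9] = { 0x00 };
    FILE *f = fopen("/dev/urandom", "rb");

    if (f == NULL || fread(serial + 1, 1, 8, f) != 8) {
        perror("urandom");
        return EXIT_FAILURE;
    }
    fclose(f);

    /* print as colon-separated hex, e.g. 00:4f:... */
    for (size_t i = 0; i < sizeof serial; i++)
        printf("%02x%s", serial[i], i + 1 < sizeof serial ? ":" : "\n");
    return EXIT_SUCCESS;
}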

odd numbers for operator precedence levels

Just curious: is there a particular reason (historical or otherwise) why Swift uses numbers from 160 down to 90 to express the default precedence levels of operators? Thanks
According to Apple's Operator Declaration documentation:
The precedence level can be any whole number (decimal integer) from 0 to 255.
Although the precedence level is a specific number, it is significant only relative to another operator.
The simple answer is that 90 to 160 fall near the center of the 0 to 255 range.
Now if you check all of Apple's binary expressions documentation, you will see that the default operators range from a precedence of 90 to a precedence of 160, as you stated in your question. This is a range of 70, and because precedence values are relative to each other, any starting/ending point for this range could have been chosen.
However, if the default values were 0 to 70 or 185 to 255, then when a user created a custom operator they could not give it a precedence lower than 0 or higher than 255, so at best the custom operator could only be made equal in precedence to the Assignment operators or the Exponentiative operators respectively.
Therefore, the logical thing to do is to place this range near the middle of 0 to 255, and rather than set the default values to 93 to 163 (as close to the actual center of the range as possible), they chose multiples of 10 (90 to 160) instead, because in actuality the values do not matter except in relation to each other.