Kaitai Struct offers predefined types for, e.g., signed 2-byte integers (s2be) and signed 4-byte integers (s4be), but there is no s3be, and b24 captures a 3-byte unsigned integer (http://doc.kaitai.io/ksy_reference.html#_bit_size_integers). Is there a way to do it?
seq:
  - id: two
    type: s2be
  - id: three
    type: ???
  - id: four
    type: s4be
There are multiple ways to do that. For example, you can use something like this to convert unsigned to signed:
seq:
  - id: three
    type: s3be
types:
  s3be:
    seq:
      - id: unsigned_value
        type: b24
    instances:
      value:
        value: '(unsigned_value & 0x800000 != 0) ? unsigned_value - 0x1000000 : unsigned_value'
Note that it will be a user type, so to get the value of the integer you'll need to use three.value, not just three.
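For reference, the value instance above is plain two's-complement sign extension: if the high bit of the 24-bit field is set, subtract 2^24. The same arithmetic, sketched in JavaScript outside of Kaitai:

```javascript
// Two's-complement interpretation of a 24-bit unsigned value:
// if the sign bit (0x800000) is set, subtract 2^24 (0x1000000).
function signed24(unsignedValue) {
  return (unsignedValue & 0x800000) !== 0
    ? unsignedValue - 0x1000000
    : unsignedValue;
}

// signed24(0xFFFFFF) === -1, signed24(0x000005) === 5
```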
Related
I have what is probably an ancient .ksy template:
seq:
  - {id: id_magic, type: u4}
  - {id: version, type: u2}
  - {id: num_blocks, type: u2}
  - {id: block_offsets, type: u4, repeat: expr, repeat-expr: num_blocks}
block_offsets holds the offsets of the frame blocks, but unfortunately some of them are equal to 0, so I need to filter them out somehow before moving on. In Python it would look like this, but I have no clue how to express it in Kaitai:
valid_block_offsets = [offset for offset in block_offsets if offset]
I tried to do something with instances but with no luck.
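As far as I know, Kaitai's expression language has no filter or list-comprehension construct, so the usual route is to filter in the host language after parsing. A minimal JavaScript sketch (the parsed-object shape below is a made-up stand-in for whatever the generated parser actually returns):

```javascript
// Filter out zero offsets after parsing.
// `parsed.blockOffsets` stands in for the array the generated parser produces.
function validBlockOffsets(parsed) {
  return parsed.blockOffsets.filter((offset) => offset !== 0);
}

const parsed = { blockOffsets: [0, 128, 0, 4096] }; // stand-in for a parsed file
const valid = validBlockOffsets(parsed);
// valid is [128, 4096]
```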
Using OpenAPI 3.0.3, I am defining an API spec which accepts two input query parameters.
- name: land_area_llimit
  in: query
  description: Lower limit for land area comparison
  required: false
  schema:
    type: integer
- name: land_area_ulimit
  in: query
  description: Upper limit for land area comparison
  required: false
  schema:
    type: integer
Ideally, I would like to combine the two into a single parameter that accepts a range [a, b] with 0 < a < b. Say, something like:
- name: land_area
  in: query
  description: Lower and upper bounds for land area comparison
  required: false
  schema:
    type: range  # with some way to specify that this parameter accepts a lower bound and an upper bound
I am aware of minimum and maximum, but those preset fixed bounds in the schema; I am looking for the bounds themselves to be provided as input.
Can this be achieved?
You can define the range as a tuple (supported since OpenAPI 3.1) or as an array of 2 elements.
However, there's no way to have a dynamic minimum attribute that's based on another value. You'll need to mention this requirement in the description and verify the values on the backend.
# openapi: 3.1.0
- name: land_area
  in: query
  description: Lower and upper bounds for land area comparison
  required: false
  schema:
    type: array
    prefixItems:
      - type: integer
        description: Lower bound for land area comparison
      - type: integer
        description: >-
          Upper bound for land area comparison.
          Must be greater than the lower bound.
    minItems: 2
    items: false  # OpenAPI 3.1 uses JSON Schema 2020-12, where `items: false` (not the older `additionalItems`) forbids entries beyond prefixItems
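The lower < upper constraint itself can't be expressed in the schema, so the backend check the answer mentions could look like this (a sketch, assuming the comma-separated form-style serialization with explode: false; the function name is made up):

```javascript
// Validate a land_area value parsed from the query string,
// e.g. ?land_area=1234,3456 (form style, explode: false).
// Returns the [lower, upper] pair, or throws on invalid input (map to HTTP 400).
function parseLandArea(raw) {
  const parts = raw.split(',').map(Number);
  if (parts.length !== 2 || parts.some((n) => !Number.isInteger(n))) {
    throw new Error('land_area must be two integers');
  }
  const [lower, upper] = parts;
  if (!(lower > 0 && upper > lower)) {
    throw new Error('land_area requires 0 < lower < upper');
  }
  return [lower, upper];
}
```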
I ended up using an object type for the input:
- name: land_area
  in: query
  description: Land area ranges for comparison (in sqft). lower_bound < upper_bound. Return 400 otherwise.
  required: false
  schema:
    type: object
    properties:
      land_area_lower_bound:
        type: integer
      land_area_upper_bound:
        type: integer
Checking in Swagger UI, the request URL will resolve to something like:
http://<url>/<api>?land_area_lower_bound=1234&land_area_upper_bound=3456
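For completeness, the 400 check that the description calls for might be sketched like this on the backend (JavaScript, using the built-in URL parser; the example URL and function name are placeholders):

```javascript
// Extract and validate the form-style exploded land_area bounds from a request URL.
function extractBounds(url) {
  const params = new URL(url).searchParams;
  const lowerRaw = params.get('land_area_lower_bound');
  const upperRaw = params.get('land_area_upper_bound');
  const lower = Number(lowerRaw);
  const upper = Number(upperRaw);
  if (lowerRaw === null || upperRaw === null ||
      !Number.isInteger(lower) || !Number.isInteger(upper) || lower >= upper) {
    return { status: 400 }; // bounds missing, non-integer, or out of order
  }
  return { status: 200, lower, upper };
}
```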
I'm trying to convert between binary and other radices with Swift, and this is my code:
let hexa = String(Int(a, radix: 2)!, radix: 16) // converting binary to hexadecimal
I am getting the error:
Cannot convert value of type 'Int' to expected argument type 'String'
You're misunderstanding how integers are stored.
There is no notion of a "decimal" Int, a "hexadecimal" Int, etc. When you have an Int in memory, it's always binary (radix 2). It's stored as a series of 64 or 32 bits.
When you try to assign to the Int a value like 10 (decimal), 0xA (hex), 0b1010 (binary), the compiler does the necessary parsing to convert your source code's string representation of that Int, into a series of bits that can be stored in the Int's 64 or 32 bits of memory.
When you try to use the Int, for example with print(a), there is a conversion behind the scenes that takes that Int's binary representation in memory and converts it into a String whose symbols represent the number in base 10, using the digits we're used to (0-9).
On a more fundamental level, it helps to understand that the notion of a radix is a construct devised purely for our convenience when working with numbers. Abstractly, a number has a magnitude that is a distinct entity, uncoupled from any radix. A magnitude can be represented concretely using a textual representation and a radix.
The part Int(a, radix: 2) doesn't make sense. Even supposing such an initializer (Int.init?(Int, radix: Int)) existed, it wouldn't do anything! If a = 5, then a is stored as binary 0b101. This would then be parsed from binary into an Int, giving you... 0b101, the same 5 you started with.
On the other hand, Strings can have a notion of a radix, because they can be a textual representation of a decimal Int, a hex Int, etc. To convert from a String that contains a number, you use Int.init?(String, radix: Int). The key here is that it takes a String parameter.
let a = 10 // decimal 10 is stored in memory as binary 1010
let hexa = String(a, radix: 16) // the Int is converted to its hex string representation, "a"
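The same round-trip, for comparison, in JavaScript, where parseInt and Number.prototype.toString play the roles of Int.init(_:radix:) and String(_:radix:):

```javascript
// Parse a textual binary representation into a number,
// then render that number as hex text.
const binaryText = '1010';              // textual, radix 2
const value = parseInt(binaryText, 2);  // the number 10; no radix attached to it
const hexText = value.toString(16);     // "a" -- textual again, radix 16
```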
Is it better to use a number instead of a string for an enum field in a Mongoose schema, in terms of search performance?
For example, I have this:
status: {
  type: String,
  enum: ['active', 'inactive', 'disabled', 'deleted'],
  default: 'inactive'
},
// status: {
//   type: Number,
//   enum: [0, 1, 2, 3],
//   default: 1
// },
Will db.col.find({status: 'active'}) be slower than db.col.find({status: 1})?
As discussed in this and this similar questions, the difference is for the most part negligible from a performance point of view when using an index, but numbers will be a bit faster.
In case you still want to use numbers, you could add a virtual property to your schema that translates your numeric enum into a descriptive string, giving you the best of both worlds in some sense.
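A sketch of that idea: in Mongoose the getter would go in schema.virtual('statusName').get(...); here a plain object getter stands in so the snippet runs on its own, assuming the 0-3 codes map positionally onto the string enum:

```javascript
// Mapping between the numeric enum stored in MongoDB and descriptive names.
const STATUS_NAMES = ['active', 'inactive', 'disabled', 'deleted'];

// Plain-object stand-in for a Mongoose document with a virtual getter.
const doc = {
  status: 1, // stored as a number (the schema default)
  get statusName() {
    return STATUS_NAMES[this.status];
  },
};
// doc.statusName === 'inactive'
```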
MongoDB uses BSON (Binary JSON) to store documents. See this page for background on JSON vs. BSON in MongoDB: https://www.mongodb.com/json-and-bson
Numbers (integers and doubles) are "basic types" in BSON (the binary format in which data is stored in Mongo) and do not carry extra overhead with them. Strings carry a little extra overhead: bits to tell Mongo that they are strings, and bits to tell Mongo how long they are.
So a number would be faster than a string.
Reference: http://bsonspec.org/spec.html
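That overhead can be counted by hand from the BSON spec's element grammar (the sizes below assume ASCII names and values, so byte length equals character length):

```javascript
// BSON element sizes per http://bsonspec.org/spec.html:
//   int32 element : 1 (type byte) + len(name) + 1 (name's NUL) + 4 (value)
//   string element: 1 (type byte) + len(name) + 1 (name's NUL)
//                   + 4 (length prefix) + len(value) + 1 (trailing NUL)
function int32ElementSize(name) {
  return 1 + name.length + 1 + 4;
}
function stringElementSize(name, value) {
  return 1 + name.length + 1 + 4 + value.length + 1;
}

// {status: 1} costs 12 bytes; {status: "active"} costs 19 bytes
```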
I was looking at how to save decimals in MongoDB and stumbled across a few ideas:
1. Save it as a String (as I'm using the Java Spring Framework, this is the default implementation):
value: "3.44"
2. Save it the eBay way, using a nested document with the unscaled value plus the scale: http://www.technology-ebay.de/the-teams/mobile-de/blog/mapping-bigdecimals-with-morphia-for-mongodb.html
value: { unscaled: NumberLong(344), scale: 2 }
3. Save the digits before and behind the decimal point separately:
value: { major: 3, minor: 44 }
Basically I'd say:
1. Useless, because I can't sort the values as numbers (i.e. "9" > "12" as strings).
2. If eBay uses this, it can't be that bad. But I can't figure out how to sort those values?!
3. Sorting is pretty easy: db.collection.find().sort({"value.major": 1, "value.minor": 1})
Questions:
How do you implement it?
How does sorting work with approach 2?
Thank you!
Workarounds:
Store major as an integer and minor as a string:
value: { major: 3 , minor: "44" }
Define the maximum precision of your decimal and pad the minor value with zeros accordingly. For example, with precision = 10:
value: { major: 3 , minor: 4400000000 }
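A sketch of that padded-minor encoding in JavaScript (the helper name is made up; precision 10 as above; negative numbers would need extra handling):

```javascript
const PRECISION = 10; // maximum number of fractional digits

// Encode a non-negative decimal string like "3.44" as { major, minor } with a
// zero-padded minor, so numeric sorting on (major, minor) orders values correctly.
function encodeDecimal(text) {
  const [majorPart, minorPart = ''] = text.split('.');
  const padded = minorPart.padEnd(PRECISION, '0');
  return { major: Number(majorPart), minor: Number(padded) };
}

// encodeDecimal('3.44') -> { major: 3, minor: 4400000000 }
// encodeDecimal('3.5')  -> { major: 3, minor: 5000000000 }
// so 3.44 sorts before 3.5, as expected
```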