Intelligently decide on the correct SI prefix for a number

I have a field in my MS Access database for the length of a DNA sequence.
DNA sequences are measured in basepairs (bp or b), so this is an integer value. However, the values are often between 1000 and 10000, so it is sometimes convenient to use kilobases (kb) instead.
In my field, I want to enter the value as an integer giving the number of basepairs. I want Access to look at how big this number is and, if it is smaller than 100, display it as #" bp", and otherwise divide it by 1000 and display it as #.###" kb".
If possible, it would be great if I could also enter some numbers directly as kb, and have Access convert them to bp, provided this does not involve too many keystrokes per entry.
Is this possible in MS Access 2013? If so, how?

For display purposes, you could create a separate Text field and use it to store the formatted value. For a table named [dna]
id - AutoNumber, Primary Key
dnaSeqCount - Long Integer
dnaSeqDisplay - Text(100)
you could create a Before Change data macro (a sketch of its logic follows the sample data below) so you could enter the integer value into [dnaSeqCount] and have [dnaSeqDisplay] formatted automatically:
id  dnaSeqCount  dnaSeqDisplay
--  -----------  -------------
 1            1  1 bp
 2           99  99 bp
 3          100  0.100 kb
 4          101  0.101 kb
 5          109  0.109 kb
 6          110  0.110 kb
 7          111  0.111 kb
 8          999  0.999 kb
 9         1000  1.000 kb
10         1001  1.001 kb
11         1009  1.009 kb
12         1010  1.010 kb
13         1999  1.999 kb
14         2000  2.000 kb
15         2001  2.001 kb
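A minimal sketch of the logic that Before Change data macro could use, assuming the field names above; the actions are built in the macro designer, and the Format pattern "0.000" is my assumption to match the requested #.###" kb" display:
If [dnaSeqCount] < 100 Then
    SetField
        Name   dnaSeqDisplay
        Value  [dnaSeqCount] & " bp"
Else
    SetField
        Name   dnaSeqDisplay
        Value  Format([dnaSeqCount] / 1000, "0.000") & " kb"
End If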

Are there any instances where a negative time type could give unexpected results?

Are there any instances where a negative time type could give unexpected results if used for specific purposes? For example, when time deltas are calculated between negative and non-negative time values, there do not appear to be any issues.
time         val
-----------------------
00:00:31.384 -0.3170017
00:06:00.139 0.9033492
00:07:01.099 -0.7661049
Then, for the purpose of a window join later over a 10-min window
win:00:10:00;
winForJoin: (neg win;00:00:00) +\:(exec time from data);
first[winForJoin] gives -00:09:28.616 -00:03:59.861 -00:02:58.901
winForJoin[1]-winForJoin[0] gives 10 minutes as expected
If I understand correctly, you're asking how a window join would behave if the opening boundary of the window were a negative time (due to the interval subtraction taking the values into negative territory, relative to 00:00).
The simple answer is that it won't behave any differently than if the times were plain numbers, but in practice you may see results you don't expect depending on how your table is set up and what you're trying to achieve.
Taking the example in the official wiki as a starting point: https://code.kx.com/q/ref/wj/
q)t:([]sym:3#`ibm;time:10:01:01 10:01:04 10:01:08;price:100 101 105)
q)a:101 103 103 104 104 107 108 107 108
q)b:98 99 102 103 103 104 106 106 107
q)q:([]sym:`ibm; time:10:01:01+til 9; ask:a; bid:b)
q)f:`sym`time
q)w:-2 1+\:t.time
/add volume too so it's easier to follow:
q)s:908 360 522 257 858 585 90 683 90;
q)update size:s from `q
/add an alternative range which has negative starting time
q)w2:(-11:00;1)+\:t.time
The window join takes all rows in q whose times are between the pairs of time ranges:
q)q[`time]within/:flip w
110000000b
011110000b
000001111b
Under the covers it's asking: are these positive numbers (the quote times) between those two positive numbers (the window range)? There's no reason it can't also ask: are these positive numbers between this negative number and this positive number?
q)q[`time]within/:flip w2
110000000b
111110000b
111111111b
You'll notice that all of them are greater than the negative time - meaning that it will include all rows from the beginning of the q table, up until the end time of that pair. This can be considered expected behaviour - if your start time is negative you must mean "from the beginning of time" - aka, all rows from the beginning of the table.
Comparing sum of size shows how the results differ:
q)wj[w;f;t;(q;(sum;`size))]
sym time price size
-----------------------
ibm 10:01:01 100 1268
ibm 10:01:04 101 1997
ibm 10:01:08 105 1448
q)wj[w2;f;t;(q;(sum;`size))]
sym time price size
-----------------------
ibm 10:01:01 100 1268
ibm 10:01:04 101 2905
ibm 10:01:08 105 4353
Finally, where it might get complicated: it depends on what "negative" time means in your table. If you're at 00:00 (midnight) and you subtract 10 minutes, are you trying to access data from 23:50 the day before? Or does 00:00 represent the starting time (row zero) of your table? If you're trying to access 23:50 from the day before then you will have problems, because 23:50 is NOT between your negative start time and your positive end time, e.g.:
q)23:50 within(-00:58:59;10:01:02)
0b
Again, this all depends on how your data looks and what you're trying to do.

Grouping data into buckets by frequency in Postgres 11.6

Using Postgres 11.6, I'm trying to analyze some event data. The goal is to find the durations for all events with a specific name, and then split each one out into evenly sized buckets. We're looking for any times that "clump" for a specific event. I'm editing my question as the specific case may be obscuring what I'm trying to ask.
Simple example
The question is "how do you group rows by a value, then split occurrences by frequency into buckets with count and average for each of those buckets." Here's a hand-done toy example with rounded averages:
Months with values, each number here represents a row.
Jan 12 24 60 150 320 488
Feb 8 16 40 100 220
Mar 4 8 20 310
Overall figures
Month  Count  Avg  Min  Max
Jan    6      176  12   488
Feb    5      77   8    220
Mar    4      86   4    310
The same example, but with more data, including repeated values and a wider range.
Jan 12 12 12 12 24 24 60 60 150 320 488 500
Feb 8 8 8 8 8 16 40 100 220 440 1100
Mar 4 8 8 8 8 20 20 20 20 310
Overall figures
Month  Count  Avg  Min  Max
Jan    12     140  12   500
Feb    11     178  8    1100
Mar    10     43   4    310
Mock-up of one of the sets of data split out into 3 buckets
Month  Count  Avg  Min  Max  Bucket
Jan    4      12   12   12   0
Jan    4      42   24   60   1
Jan    4      365  150  500  2
...and so on for Feb and Mar
I'm just guessing at how the buckets would split in the mock-up above.
That pretty much captures what I'm trying to do. Group by month name (from_to_node in my real case), split the resulting rows into buckets, and then get min, max, avg, and count for each bucket. It's starting to sound like a pivot (?)
Real Table Setup
Here's the structure of table I'm getting a feed for:
CREATE TABLE IF NOT EXISTS data.edge_event (
    id            uuid,
    inv_id        uuid,
    facility_id   uuid,
    from_node     citext,
    to_node       citext,
    from_to_node  citext,
    from_node_dts timestamp without time zone,
    to_node_dts   timestamp without time zone,
    seconds       integer,
    cycle_id      uuid
);
The duration is pre-calculated in seconds, and the area of interest for now is only the from_to_node name. So, it's fair to think of the example as
CREATE TABLE IF NOT EXISTS data.edge_event (
    from_to_node citext,
    seconds      integer
);
Raw Data
Within the edge_event table, there are 159 distinct from_to_node values over around 300K event rows. Some are found in only a handful of edge_event records, some are found in thousands, or tens of thousands. That's too much to provide a good sample for. But to make the problem simpler to follow, a from_to_node might be
Boxing_Assembly 1256
Meaning "it took 1256 seconds to move this part from the Boxing phase to the Assembly phase." And here we might have 10,000 other records for "Boxing_Assembly" with different durations.
Goal
We're looking for two things out of each from_to_node. For something like Boxing_Assembly, I'm trying to do this:
Sort the seconds taken into buckets, say 20 buckets. This is for a histogram.
For each bucket get the
count of edge_event rows
avg(seconds) within the bucket
min/first_value(seconds) within the bucket
max/last_value(seconds) within the bucket
So, we're looking to chart durations to look for clusters, and then get the raw seconds out of any common clusters.
What I've tried
I've tried a lot of different code, and I've not succeeded. It seems like a problem for GROUP BY and/or window functions. There's something I'm not getting, as my results are far from the mark.
I know that I haven't provided sample data, which makes it harder to help. But I'm guessing that what I'm missing is one or more concepts. Pretty much, I want to know how to split out the edge_event data by from_to_node and then by seconds. Given the huge ranges across from_to_nodes, I'm trying to bucket each one individually based on its own min/max.
Thanks very much for any help.
Draft Attempt
I've developed a query that works a bit, but not entirely. This is an edit from my original post with broken code.
WITH min_max AS (
    SELECT from_to_node,
           min(seconds) AS min,
           max(seconds) AS max
    FROM edge_event
    GROUP BY from_to_node
)
SELECT edge_event.from_to_node,
       -- width_bucket(value, low, high, count) numbers the buckets 1..count;
       -- 0 is returned for values below low and count+1 for values at or above high.
       width_bucket(seconds, min_max.min, min_max.max, 99) AS bucket,
       count(*) AS frequency,
       min(seconds) AS seconds_min,
       max(seconds) AS seconds_max,
       max(seconds) - min(seconds) AS bucket_width,
       round(avg(seconds)) AS seconds_avg
FROM edge_event
JOIN min_max ON (min_max.from_to_node = edge_event.from_to_node)
WHERE min_max.min <> min_max.max -- can't have a bucket whose upper and lower bounds are the same
  AND edge_event.from_to_node IN (
      'Boxing_Assembly',
      'Assembly_Waiting For QA')
GROUP BY edge_event.from_to_node,
         bucket
ORDER BY from_to_node,
         bucket;
What I'm getting back looks pretty good:
from_to_node bucket frequency seconds_min seconds_max bucket_width seconds_avg
Boxing_Assembly 1 912 17 7052 7035 3037
Boxing_Assembly 2 226 7058 13937 6879 9472
Boxing_Assembly 3 41 14151 21058 6907 16994
Boxing_Assembly 4 16 21149 27657 6508 23487
Boxing_Assembly 5 4 28926 33896 4970 30867
Boxing_Assembly 6 1 37094 37094 0 37094
Boxing_Assembly 7 1 43228 43228 0 43228
Boxing_Assembly 10 2 63666 64431 765 64049
Boxing_Assembly 14 1 94881 94881 0 94881
Boxing_Assembly 16 1 108254 108254 0 108254
Boxing_Assembly 37 1 257226 257226 0 257226
Boxing_Assembly 40 1 275140 275140 0 275140
Boxing_Assembly 68 1 471727 471727 0 471727
Boxing_Assembly 100 1 696732 696732 0 696732
Assembly_Waiting For QA 1 41875 1 18971 18970 726
Assembly_Waiting For QA 9 1 207457 207457 0 207457
Assembly_Waiting For QA 15 1 336711 336711 0 336711
Assembly_Waiting For QA 38 1 906519 906519 0 906519
Assembly_Waiting For QA 100 1 2369669 2369669 0 2369669
One problem here is that the buckets aren't evenly sized...they seem kind of weird. I've also tried specifying 10, 20, or 100 buckets, and get similar results. I'm hoping that there is a better way to allocate the data to buckets that I'm missing, and that there's a way to have zero-entry buckets instead of gaps.
I would use the PostgreSQL optimizer for that. It collects exactly the information you want.
Create a temporary table with the values you are interested in and ANALYZE it. Then look into pg_stats for the following:
if there are "most common values", you have them and their frequency right there.
Otherwise, look for adjacent histogram boundaries that are close together. Such a bucket is an interval where values are "lumped".
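A minimal sketch of that approach (the temporary table name is mine; most_common_vals, most_common_freqs and histogram_bounds are the standard pg_stats columns):
-- Collect the durations for one from_to_node and let ANALYZE build statistics for them.
CREATE TEMP TABLE boxing_assembly AS
SELECT seconds
FROM data.edge_event
WHERE from_to_node = 'Boxing_Assembly';
-- Optionally raise the statistics target first for a finer histogram, e.g.
-- ALTER TABLE boxing_assembly ALTER COLUMN seconds SET STATISTICS 200;
ANALYZE boxing_assembly;
-- most_common_vals / most_common_freqs show values that "clump" and how often they occur;
-- histogram_bounds are equal-frequency bucket boundaries, so boundaries that sit close
-- together mark a dense region of durations.
SELECT most_common_vals,
       most_common_freqs,
       histogram_bounds
FROM pg_stats
WHERE tablename = 'boxing_assembly'
  AND attname = 'seconds';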

How to determine the number of digits of a number in a table

I am trying to determine the number of digits of a number in a table. For example if I have a table like this:
4 200 50 1236
69 54 285 1
1458 2 69 555
The answer would be
1 3 2 4
2 2 3 1
4 1 2 3
I used to be able to do this with this code
strlength(num2str(ADCPCRUM2(i,2)))
but then my input was numeric, and not a table.
How do I determine the length of a number in a table?
floor(log10(A)) + 1 does this for positive integers: log10() gives the order of magnitude, so its floor is the position of the most significant digit, and adding 1 yields the digit count.
When using this on a table, a simple call to table2array() should solve it.
Caveat: this only works for positive integers; for zero, negative numbers or non-integer inputs it would get a lot more involved.
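A minimal sketch, assuming the table (called T here) holds only positive integers:
% Convert the table to a numeric matrix, then count digits element-wise.
A = table2array(T);               % T stands in for your table variable
nDigits = floor(log10(A)) + 1     % e.g. 4 -> 1, 200 -> 3, 1236 -> 4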

Number of different insertion sequence of Key values in a hash table

A hash table of length 10 uses open addressing with hash function h(k)=k mod 10, and linear probing. After inserting 8 values into an empty hash table, the table is as shown below
0 |
1 | 91
2 | 2
3 | 13
4 | 24
5 | 12
6 | 62
7 | 77
8 | 82
9 |
How many different insertion sequences of the key values using the same hash function and linear probing will result in the hash table shown above?
ANSWER - 128.
I know that for 91, 2, 13, 24, 77 it's 5! = 120, but I can't figure out what the other 8 combinations are.
The given answer is wrong. It was actually a mock test, and the figure provided by the source (128) is incorrect; the correct count is 288.
First work out what each key needs at the moment it is inserted:
91, 2, 13, 24 and 77 all land directly in their home slots (1, 2, 3, 4 and 7), so none of them requires any other key to be present first.
12 hashes to 2 but ends up in slot 5, so slots 2, 3 and 4 must already be full: 2, 13 and 24 must come before 12.
62 hashes to 2 but ends up in slot 6, so slots 2-5 must be full: 12 (and hence 2, 13, 24) must come before 62.
82 hashes to 2 but ends up in slot 8, so slots 2-7 must be full: 62 and 77 must come before 82.
91 never collides with anything, so it can appear anywhere in the sequence.
Now count the sequences that respect these constraints:
1) Arrange 2, 13, 24 in any order, followed by 12, 62, 82 in that fixed relative order: 3! = 6 relative orders for those six keys.
2) Insert 77 anywhere before 82. In a sequence of six keys that ends with 82 there are 6 valid gaps, giving 6 * 6 = 36 orderings of seven keys.
3) Insert 91 anywhere among the seven keys: 8 possible positions, giving 36 * 8 = 288.
So 288 different insertion sequences produce the table above. Counts such as 5! = 120 or 4! * 7 = 168 undercount, because they pin 91 (and sometimes 77) to the front of the sequence, even though 91 can go anywhere and 77 only has to precede 82.
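A quick brute-force check (my own sketch, not part of the original answer) that enumerates all 8! insertion orders, simulates linear probing, and counts the orders that reproduce the table confirms the figure of 288:
from itertools import permutations

keys = [91, 2, 13, 24, 12, 62, 77, 82]
# Final table layout: index = slot, None = empty slot.
target = [None, 91, 2, 13, 24, 12, 62, 77, 82, None]

def build(seq):
    # Insert the keys in the given order using h(k) = k mod 10 and linear probing.
    table = [None] * 10
    for k in seq:
        slot = k % 10
        while table[slot] is not None:
            slot = (slot + 1) % 10
        table[slot] = k
    return table

count = sum(1 for seq in permutations(keys) if build(seq) == target)
print(count)   # 288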

cluster my data and testing of input

Cluster the given data and use any retrieval algorithm to show output as shown below (any clustering algorithm may be used).
Euclidean distance may be used for finding the closest cases.
Let a data file contain input vectors like
caseid f1 f2 f3 f4
1 30 45 9.5 1500
2 35 45 8 1600
3 38 47 10 1550
4 32 50 9.5 1800
..
..
..
t1 30 45 9.5 1500 (target)
The output should look like
NO. f1 f2 f3 f4
t1 30 45 9.5 1500 (target)
21 35 45 10 1500 (1st closest to target)
39 35 50 8 1500 (2nd closest)
56 35 42 9.5 1500 (3rd closest)
This looks like a classic nearest neighbor query to me, not like clustering.
Also I'd be careful with using Euclidean distance here. A difference of 1 in attribute f1 does not look like it is equal to a difference of 1 in attribute f4. The values seem to have a completely different magnitude.
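A minimal sketch of that idea in Python/NumPy, using only the four example rows above (in practice you would load the full file, and you might rescale with something more robust than a plain z-score):
import numpy as np

# The four example cases (f1, f2, f3, f4) and the target vector from the question.
data = np.array([
    [30, 45,  9.5, 1500],
    [35, 45,  8.0, 1600],
    [38, 47, 10.0, 1550],
    [32, 50,  9.5, 1800],
])
target = np.array([30, 45, 9.5, 1500])

# Standardise each feature so that f4 (values around 1500) cannot drown out f1 (values around 30).
mean, std = data.mean(axis=0), data.std(axis=0)
scaled = (data - mean) / std
scaled_target = (target - mean) / std

# Nearest-neighbour retrieval by Euclidean distance in the scaled space.
dist = np.linalg.norm(scaled - scaled_target, axis=1)
order = np.argsort(dist)      # case indices, nearest first
print(order[:3], dist[order[:3]])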