SQL - 5% random sample by group - tsql

I have a table with about 10 million rows and 4 columns, no primary key. The data in columns 2, 3, and 4 (x2, x3, and x4) fall into 50 groups identified by column 1 (x1).
To get a random sample of 5% from the table, I have always used
SELECT TOP 5 PERCENT *
FROM thistable
ORDER BY NEWID()
This returns about 500,000 rows, but some groups end up over- or under-represented in the sample relative to their original size when sampled this way.
This time, to get a better sample, I want a 5% sample from each of the 50 groups identified in column x1, so that in the end I have a random sample of 5% of the rows in each group (instead of 5% of the entire table).
How can I approach this problem? Thank you.

You need to be able to count each group and then pull the data out in a random order. Fortunately, we can do this with a CTE-style query. Although CTEs aren't strictly needed, they help break the solution down into small pieces rather than lots of sub-selects and the like.
I assume you've already got a column that groups the data, and that the value in this column is the same for all items in the group. If so, something like this might work (column and table names to be changed to suit your situation):
WITH randomID AS (
-- First assign a random ID to all rows. This will give us a random order.
SELECT *, NEWID() as random FROM sourceTable
),
countGroups AS (
-- Now we add row numbers for each group. So each group will start at 1. We order
-- by the random column we generated in the previous expression, so you should get
-- different results in each execution
SELECT *, ROW_NUMBER() OVER (PARTITION BY groupcolumn ORDER BY random) AS rowcnt FROM randomID
)
-- Now we get the data
SELECT *
FROM countGroups c1
WHERE rowcnt <= (
SELECT MAX(rowcnt) / 20 FROM countGroups c2 WHERE c1.groupcolumn = c2.groupcolumn
)
The two CTE expressions give you a random order and then a per-group row number. The final select should then be fairly straightforward: for each group, find out how many rows it has, and only return 5% of them (total_row_count_in_group / 20, which rounds down because of integer division).
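A hedged variant of the same idea, assuming the same hypothetical sourceTable and groupcolumn names: a window COUNT(*) computes each group's size in the same pass, which avoids the correlated subquery. As above, the / 20 integer division rounds down, so groups with fewer than 20 rows contribute nothing.
WITH numbered AS (
-- Number the rows of each group in a random order and record the group size
SELECT *,
ROW_NUMBER() OVER (PARTITION BY groupcolumn ORDER BY NEWID()) AS rowcnt,
COUNT(*) OVER (PARTITION BY groupcolumn) AS group_size
FROM sourceTable
)
SELECT *
FROM numbered
WHERE rowcnt <= group_size / 20 -- keep roughly 5% of each group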

Related

How to get the top 99% values in postgresql?

Seemingly similar to How to get the top 10 values in postgresql?, yet somehow very different.
We'll set it up similar to that question:
I have a postgresql table: Scores(score integer).
How would I get the highest 99% of the scores? We don't know beforehand how many rows there are, so we can't use the same LIMIT-with-an-integer trick. SQL Server has an easy SELECT TOP syntax -- is there anything similarly simple in the PostgreSQL world?
This should be doable with percent_rank()
select score
from (
select score, percent_rank() over (order by score desc) as pct_rank
from scores
) t
where pct_rank <= 0.99
You can use the ntile function to partition the rows into 100 percentile buckets and then filter on the bucket number. Note that tile > 99 keeps only the rows above the 99th percentile (the top 1%); see the variation after the example for the highest 99% asked about above.
Example:
-- The following query generates 1,000 rows with random
-- scores and selects the rows above the 99th percentile (the top 1%)
-- using the ntile function. ntile buckets by row position, so with
-- 1,000 rows and 100 tiles the last tile always contains exactly 10 rows.
with scores as (
select
id
, random() score
from generate_series(1, 1000) id
)
, percentiles AS (
select
*
, ntile(100) over (order by score) tile
from scores
)
select
id
, score
from percentiles
where tile > 99
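To keep the highest 99% asked about in the question, the same CTEs can be reused with the filter flipped; a minimal variation of the example above:
with scores as (
select id, random() as score
from generate_series(1, 1000) id
)
, percentiles as (
select *, ntile(100) over (order by score) as tile
from scores
)
select id, score
from percentiles
where tile > 1 -- keeps tiles 2..100, i.e. roughly the highest 99%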

Postgres: Get percentile of number not necessarily in table column

Imagine I have a column my_variable of floats in a table my_table. I know how to convert each of the rows in this my_variable column into percentiles, but my question is: I have a number x that is not necessarily in the table. Let's call it 7.67. How do I efficiently compute where 7.67 falls in that distribution of my_variable? I would like to be able to say "7.67 is in the 16.7th percentile" or "7.67 is larger than 16.7% of rows in my_variable." Note that 7.67 is not something taken from the column; I'm supplying it in the SQL query itself.
I was thinking about ordering my_variable in ascending order and counting the number of rows that fall below the number I specify and dividing by the total number of rows, but is there a more computationally efficient way of doing this, perhaps?
If your data does not change too often, you can use a materialized view or a separate table, call it percentiles, in which you store 100 or 1,000 rows (depending on the precision you need). This table should have a descending index on the value column.
Each row contains the minimum value to reach a certain percentile and the percentile itself.
Then you just need to look up the row with the largest value that does not exceed the given number and read its percentile.
In your example the table would contain 1,000 rows, and you could have something like:
Percentile  value
16.9        7.71
16.8        7.69
16.7        7.66
16.6        7.65
16.5        7.62
And your query could be something like:
SELECT percentile FROM percentiles WHERE value <= 7.67 ORDER BY value DESC LIMIT 1;
This is a valid solution if the number of SELECTs you make is much bigger than the number of updates to the my_table table.
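For completeness, a hedged sketch of how such a percentiles table could be built with percent_rank(), assuming the question's my_table / my_variable names and a granularity of one decimal place (the exact bucketing is an assumption, adjust it to the precision you need):
CREATE MATERIALIZED VIEW percentiles AS
SELECT round(pct * 1000) / 10.0 AS percentile, -- 0.0, 0.1, ..., 100.0
min(my_variable) AS value -- minimum value reaching that percentile
FROM (
SELECT my_variable, percent_rank() OVER (ORDER BY my_variable) AS pct
FROM my_table
WHERE my_variable IS NOT NULL
) t
GROUP BY 1;
CREATE INDEX ON percentiles (value DESC);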
I ended up doing:
select avg(dummy_var::float)
from (
select case when var_name < 3.14 then 1 else 0 end as dummy_var
from table_name where var_name is not null
) t
Where var_name was the variable of interest, table_name was the table of interest, and 3.14 was the number of interest.
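An equivalent single-pass formulation, assuming the question's my_table / my_variable names (the FILTER clause needs PostgreSQL 9.4 or later):
-- fraction of non-null rows with a value below 7.67
select (count(*) filter (where my_variable < 7.67))::float / count(*) as pct_below
from my_table
where my_variable is not null;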

Count number of points within certain distance ranges from another set of points

I have the following, which gives me the number of customers within 10,000 meters of any store location:
SELECT COUNT(*) as customer_count FROM customer_table c
WHERE EXISTS(
SELECT 1 FROM locations_table s
WHERE ST_Distance_Sphere(s.the_geom, c.the_geom) < 10000
)
What I need is for this query to return not only the number of customers within 10,000 meters, but also the following. The number of customers within...
10,000 meters
more than 10,000, but less than 50,000
more than 50,000, but less than 100,000
more than 100,000
...of any location.
I'm open to this working a couple of ways. For a given customer, count them only once (at the shortest distance to any store), which would count everyone exactly once; I realize this is probably pretty complex. I'm also open to having people counted multiple times, which really gives the accurate values anyway and, I think, should be much simpler.
Thanks for any direction.
You can do both types of queries relatively easily. But an issue here is that you do not know which customers are associated with which store locations, which seems like an interesting thing to know. If you want that, use the PK and store_name of the locations_table in the query. See both options with location id and store_name below. To emphasize the difference between the two options:
The first option counts each customer, for every store location, in every distance class they fall within (so the counts are cumulative).
The second option counts each customer, for every store location, only once: in the nearest (lowest) distance class they fall into.
Both are queries of O(n x m) running order (implemented with the CROSS JOIN between customer_table and locations_table) and are likely to become rather slow as the number of rows in either table grows.
Count customers in all distance classes
CROSS JOIN customer_table and locations_table to get the distance of every customer from every store location, then group by store location id, name and the maximum-distance classes that you define. You can create a "table" from your distance classes with the VALUES command, which you can then simply use in the query:
SELECT loc_dist.id, loc_dist.store_name, grps.grp, count(*)
FROM (
SELECT s.id, s.store_name, ST_Distance_Sphere(s.the_geom, c.the_geom) AS dist
FROM customer_table c, locations_table s) AS loc_dist
JOIN (
VALUES(1, 10000.), (2, 50000.), (3, 100000.), (4, 1000000.)
) AS grps(grp, dist) ON loc_dist.dist < grps.dist
GROUP BY 1, 2, 3
ORDER BY 1, 2, 3;
Count customers in the nearest distance class
If you want customers listed in the nearest distance class only, then make the same CROSS JOIN on customer_table and locations_table as in the previous case, but simply pick the lowest matching group (i.e. the closest distance class) using a CASE expression and GROUP BY store location id, name and distance class as before:
SELECT
id, store_name,
CASE
WHEN dist < 10000. THEN 1
WHEN dist < 50000. THEN 2
WHEN dist < 100000. THEN 3
ELSE 4
END AS grp,
count(*)
FROM (
SELECT s.id, s.store_name, ST_Distance_Sphere(s.the_geom, c.the_geom) AS dist
FROM customer_table c, locations_table s) AS loc_dist
GROUP BY 1, 2, 3
ORDER BY 1, 2, 3;
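If each customer should instead be counted exactly once, at the distance to their nearest store (the first behaviour described in the question), one hedged sketch is to aggregate the minimum distance per customer first and classify that. It assumes customer_table has a primary key column id:
SELECT
CASE
WHEN min_dist < 10000. THEN 1
WHEN min_dist < 50000. THEN 2
WHEN min_dist < 100000. THEN 3
ELSE 4
END AS grp,
count(*) AS customer_count
FROM (
-- distance from each customer to the closest store only
SELECT c.id, min(ST_Distance_Sphere(s.the_geom, c.the_geom)) AS min_dist
FROM customer_table c, locations_table s
GROUP BY c.id
) AS nearest
GROUP BY 1
ORDER BY 1;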

Cumulative count cut with postgresql

Is it possible to implement a cumulative count cut of 2.0 - 98.0% in PostgreSQL?
That means selecting only the rows that fall between the 2nd and the 98th percentile.
I suggest the window function ntile() in a subquery:
SELECT *
FROM (
SELECT *, ntile(50) OVER (ORDER BY ts) AS part
FROM tbl
WHERE ts >= $start
AND ts < $end
) sub
WHERE part NOT IN (1, 50);
ts being your date column.
ntile() assigns integer numbers to the rows, dividing them into the specified number of partitions. Per the documentation:
integer ranging from 1 to the argument value, dividing the partition as equally as possible
With 50 partitions, the first partition matches the first 2% as closely as possible and the last partition the final 2% (> 98%).
Note that if there are fewer than 50 rows, the assigned numbers never reach 50. In that case the first row would be trimmed but not the last. Since the first and last 2% are not well defined with such a low number of rows, this may be considered correct or incorrect. Check the total number of rows and adapt to your needs, for instance by trimming the last row too, or none at all.
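A hedged alternative that sidesteps the small-partition edge case is to filter on percent_rank() directly, keeping the same tbl / ts names and $start / $end placeholders. percent_rank() returns (rank - 1) / (rows - 1), so this trims roughly the lowest and highest 2% of rows:
SELECT *
FROM (
SELECT *, percent_rank() OVER (ORDER BY ts) AS pr
FROM tbl
WHERE ts >= $start
AND ts < $end
) sub
WHERE pr >= 0.02
AND pr <= 0.98;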

Select rows randomly distributed around a given mean

I have a table that has a value field. The records have values somewhat evenly distributed between 0 and 100.
I want to query this table for n records, given a target mean, x, so that I'll receive a weighted random result set where avg(value) will be approximately x.
I could easily do something like
SELECT TOP n * FROM table ORDER BY abs(x - value)
... but that would give me the same result every time I run the query.
What I want to do is to add weighting of some sort, so that any record may be selected, but with diminishing probability as the distance from x increases, so that I'll end up with something like a normal distribution around my given mean.
I would appreciate any suggestions as to how I can achieve this.
Why not use the RAND() function?
SELECT TOP n * FROM table ORDER BY abs(x - value) + RAND()
EDIT
Using RAND() won't work because RAND() in a SELECT is evaluated once per query, so it produces the same number for every row. Heximal was right to use NEWID(), but it needs to be used directly in the ORDER BY:
SELECT Top N value
FROM table
ORDER BY
abs(X - value) + (cast(cast(Newid() as varbinary) as integer))/10000000000
The large divisor 10000000000 keeps the random term small, so the selection stays centred on X (avg(value) close to X) while still shuffling rows that lie at similar distances.
With all that said, asking the question (without the SQL bits) on https://stats.stackexchange.com/ may get you better answers.
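If you want explicit control over the shape of the distribution, a hedged sketch of one way to do it in T-SQL: weight each row with a Gaussian-shaped kernel centred on the target mean and draw a weighted sample without replacement by ordering on LOG(u) / w (the Efraimidis-Spirakis key in log form). The table name MyTable, the column name value and the parameters @x, @sigma and @n are all assumptions to adapt:
DECLARE @x float = 50, @sigma float = 10, @n int = 100;
SELECT TOP (@n) *
FROM (
SELECT *,
-- one uniform random number in (0, 1) per row, derived from NEWID()
(ABS(CHECKSUM(NEWID()) % 1000000) + 1) / 1000002.0 AS u,
-- Gaussian-shaped weight centred on the target mean @x
-- (keep @sigma wide enough that w never underflows to zero)
EXP(-SQUARE([value] - @x) / (2 * @sigma * @sigma)) AS w
FROM MyTable
) t
ORDER BY LOG(u) / w DESC; -- rows near @x get large weights and are picked more often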
try
SELECT TOP n * FROM table ORDER BY abs(x - value), newid()
or
select * from (
SELECT TOP n * FROM table ORDER BY abs(x - value)
) a order by newid()