Computation of the number of uppercase letters in a string - postgresql

I've bumped into a seemingly simple problem that I'm unable to solve. I would like to determine whether the number of uppercase letters is greater than the number of lowercase letters (ignoring special characters, spaces, etc.).
Example
id | text | upper_greater_lower | note
------------------------------------------------------------------
1 | Hello World | False | because |HW| < |elloorld|
2 | The XYZ | True | because |TXYZ| > |he|
3 | Foo!!! | False | because |F| < |oo|
4 | BAr??? | True | because |BA| > |r|
My initial idea was to determine the number of lowercase letters, then the number of uppercase letters, and finally compare them. However, I'm unable to do so in any elegant and efficient way.
I expect to handle ~30M rows with ~300 characters each.
What would you suggest?
Thanks!

Using regular expression magic, that could be:
SELECT length(regexp_replace(textcol, '[^[:upper:]]', '', 'g'))
> length(regexp_replace(textcol, '[^[:lower:]]', '', 'g'))
FROM atable;
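If you are on PostgreSQL 15 or later, regexp_count() counts the matches directly and may read a little more simply; a sketch of the same comparison (untested against your data):
SELECT regexp_count(textcol, '[[:upper:]]')
     > regexp_count(textcol, '[[:lower:]]') AS upper_greater_lower
FROM atable;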

Related

create JSONB array grouped from column values with incrementing integers

For a PostgreSQL table, suppose the following data is in table A:
key_path | key | value
--------------------------------------
foo[1]__scrog | scrog | apple
foo[2]__scrog | scrog | orange
bar | bar | peach
baz[1]__biscuit | biscuit | watermelon
The goal is to group data when there is an incrementing number present for an otherwise identical value for column key_path.
For context, key_path is a JSON key path and key is the leaf key. The desired outcome would be:
key_path_group | key | values
------------------------------------------------------------
[foo[1]__scrog, foo[2]__scrog] | scrog | [apple, orange]
bar | bar | peach
[baz[1]__biscuit] | biscuit | [watermelon]
Note also that for key_path=baz[1]__biscuit, even though there is only a single incrementing value, it still triggers casting to an array of length 1.
Any tips or suggestions much appreciated!
May have answered my own question (sometimes just typing it out helps). The following gets very close, if not exactly, what I'm looking for:
select
    regexp_replace(key_path, '(.*)\[(\d+)\](.*)', '\1[x]\3') as key_path_group,
    key,
    jsonb_agg(value) as values
from A
group by key_path_group, key;
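If you also want the original key_path values collected into an array, as in the desired output above, a sketch (untested) would be to group by the normalized expression directly:
select
    array_agg(key_path) as key_path_group,
    key,
    jsonb_agg(value) as values
from A
group by regexp_replace(key_path, '(.*)\[(\d+)\](.*)', '\1[x]\3'), key;
Note that the ungrouped bar row then also comes back as one-element arrays.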

Update telephone number format to include country identifier

Being a beginner in SQL, I am trying to change the telephone fields so they are stored in the format: '+' + country identifier + telephone number.
UPDATE public.contact
SET phone_number = CASE WHEN (country_code = 'FR')
                             AND phone_number NOT LIKE '+33%'
                             AND phone_number <> NULL
                        THEN CONCAT('+33', phone_number)
                        WHEN (country_code = 'GB')
                             AND phone_number NOT LIKE '+44%'
                             AND phone_number <> NULL
                        THEN CONCAT('+44', phone_number)
                        ELSE phone_number
                   END;
I want to update the telephone number format to include the country identifier, e.g. 0606080905 -> +33606080905 if country_code='FR'. I am looking for a faster and less complex way than what I did.
You can do this with a regular expression using regexp_replace.
Imagine your data being:
Table 'numbers':
+---------+--------------+
| country | phone        |
+---------+--------------+
| FR      | 0606080905   |
| FR      | +33606080906 |
| GB      | 0123456789   |
| GB      | +44987654321 |
| GB      | NULL         |
+---------+--------------+
Then the following update would replace the leading 0 with the country code +33 for all numbers that do not start with a +xx and have FR as country.
UPDATE numbers
SET phone = REGEXP_REPLACE(trim(phone), '^(0)', '+33')
WHERE country = 'FR'
Explained:
the ^ means start of the string
the (0) is the match that gets replaced (leading zero)
the +33 is the string that is used to replace it
the trim() is just added for safety, in case there are leading spaces
NULL phone numbers won't be affected, as they do not match
You could now do this, as you did before, with a CASE WHEN or something similar for each of the different possibilities. But since the expression is always the same, an easier way would be to keep your country codes and their prefixes in a separate table:
Table 'mapping':
+---------+--------+
| country | prefix |
+---------+--------+
| FR      | +33    |
| GB      | +44    |
+---------+--------+
You could then do
UPDATE numbers n
SET phone = REGEXP_REPLACE(trim(phone), '^(0)', prefix)
FROM mapping m
WHERE m.country = n.country
and update all your numbers in one go:
+----------+--------------+
| country | phone |
+----------+--------------+
| FR | +33606080905 |
| FR | +33606080906 |
| GB | +44123456789 |
| GB | +44987654321 |
| GB | NULL |
+----------+--------------+
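If such a mapping table does not exist yet, a minimal setup using the names assumed above could be:
CREATE TABLE mapping (
    country text PRIMARY KEY,  -- country code, e.g. 'FR'
    prefix  text NOT NULL      -- dialing prefix, e.g. '+33'
);

INSERT INTO mapping (country, prefix) VALUES
    ('FR', '+33'),
    ('GB', '+44');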
EDIT: Previously, I had this needlessly complicated answer. You may need something like this if your phone number patterns are more diverse...
The following update would replace the leading 0 with the country code +33 for all numbers that do not start with a +xx and have FR as country.
UPDATE numbers
SET phone = REGEXP_REPLACE(trim(phone), '^(?<!\+\d{2})(0)', '+33')
WHERE country = 'FR'
Explained:
the (?<!\+\d{2}) is a negative lookbehind assertion that makes sure the regex only matches if the 0 is not directly preceded by a + and two digits
the (0) is the match that gets replaced
the +33 is the string that is used to replace it
the trim() is just added for safety, in case there are leading spaces
NULL phone numbers won't be affected, as they do not match
That's about as simple as it gets.
The only way I can imagine to speed up processing is to add a WHERE condition that avoids updating the rows that don't have to be modified.
You could also run several such statements in parallel, where each modifies a different part of the table.
As mentioned in the comment, <> NULL is never true.
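For example, a sketch of such a filtered update with the column names from the question (it skips NULLs and numbers that already carry a prefix, so only rows that actually change get touched):
UPDATE public.contact
SET phone_number = regexp_replace(phone_number, '^0', '+33')
WHERE country_code = 'FR'
  AND phone_number LIKE '0%';  -- excludes NULLs and numbers already starting with '+'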

Column-wise autocomplete

I have a table in a PostgreSQL database with four columns that contain increasingly more detailed information (think state->city->street->number), along with a column where everything is concatenated according to some simple formatting rules. Example:
| kommun     | trakt        | block | enhet | beteckning             |
|------------|--------------|-------|-------|------------------------|
| Mora       | Gislövs Läge | 9     | 16    | Mora Gislövs Läge 9:16 |
| Mora       | Gisslaved    | *     | 8     | Mora Gisslaved 8       |
| Mora       | Gisslaved    | *     | 9     | Mora Gisslaved 9       |
| Lilla Edet | Sanda        | GA    | 1     | Lilla Edet Sanda GA:1  |
A web service uses this table to implement a word-wise autocomplete, where the user gets input suggestions as they drill down. An input of mora gis will result in
["Mora Gislövs", "Mora Gisslaved"]
Currently, this is done by splitting the concatenated column by word in this query:
select distinct trim(substring(beteckning from '(^(\S+\s?){NUMPARTS})')) as bet
from beteckning_ac
where upper(beteckning) like upper('mora gis%')
order by bet
Where NUMPARTS is the number of words in the input - 2 in this case.
Now I want the autocomplete to be done column-wise rather than word-wise, so mora gis would now result in this instead:
["Mora Gislövs Läge", "Mora Gisslaved"]
Since the first two columns can contain an arbitrary number of words, I can no longer use the input to determine how many columns to include in the response. Is there a way to do this, or have I maybe gone about this autocomplete business all wrong?
CREATE OR REPLACE FUNCTION get_auto(text)  -- $1 is your input
RETURNS setof text
LANGUAGE plpgsql
AS $function$
declare
    -- number of space-separated words in the input
    NUMPARTS int := array_length(regexp_split_to_array($1, ' '), 1);
begin
    return query
    select
        case
            when (NUMPARTS = 1) then kommun
            when (NUMPARTS = 2) then kommun||' '||trakt
            when (NUMPARTS = 3) then kommun||' '||trakt||' '||block
            when (NUMPARTS = 4) then kommun||' '||trakt||' '||block||' '||enhet
            -- alter if you want to
        end
    from
        auto_complete  -- your table name here
    where
        beteckning like $1||'%';
end;
$function$;
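A hypothetical call against the sample data (note that the LIKE comparison above is case-sensitive, unlike the upper()-based query in the question):
SELECT DISTINCT suggestion
FROM get_auto('Mora Gis') AS suggestion;
-- expected, given the sample rows: Mora Gislövs Läge, Mora Gisslaved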

How to automatically calculate the SUS Score for a given spreadsheet in LibreOffice Calc?

I have several spreadsheets for a SUS-Score usability test.
They have this form:
|                                             | Strongly disagree |   |   |   | Strongly agree |
| I think, that I would use this system often | x                 |   |   |   |                |
| I found the system too complex              |                   | x |   |   |                |
| (..)                                        |                   |   |   |   | x              |
| (...)                                       | x                 |   |   |   |                |
To calculate the SUS-Score you have 3 rules:
Odd item: Pos - 1
Even item: 5 - Pos
Add Score, multiply by 2.5
So for the first entry (odd item) you have: Pos - 1 = 1 - 1 = 0
Second item (even): 5 - Pos = 5 - 2 = 3
Now I have several of those spreadsheets and want to calculate the SUS-Score automatically. I've changed the x to a 1 and tried to use IF(F5=1,5-1). But I would need an IF-condition for every column: =IF(F5=1;5-1;IF(E5=1;4-1;IF(D5=1;3-1;IF(C5=1;2-1;IF(B5=1;1-1))))), so is there an easier way to calculate the score, based on the position in the table?
I would use a helper table and then SUM() all the cells of the helper table and multiply by 2.5. This formula (modified as needed, see notes below) can start your helper table and be copy-pasted to fill out the entire table:
=IF(D2="x";IF(MOD(ROW();2)=1;5-D$1;D$1-1);"")
Here D is an answer column
Depending on which row (odd/even) your answers start in, you may need to change the =1 after the MOD function to =0
This assumes the position number is in row 1; if position numbers are in a different row, change the number after the $ accordingly

Sane way to store different data types within same column in postgres?

I'm currently attempting to modify an existing API that interacts with a Postgres database. Long story short, it essentially stores descriptors/metadata to determine where an actual 'asset' (typically a file of some sort) is stored on the server's hard disk.
Currently, it's possible to 'tag' these 'assets' with any number of undefined key-value pairs (i.e. uploadedBy, addedOn, assetType, etc.). These tags are stored in a separate table with a structure similar to the following:
+---------------+----------------+-------------+
|assetid (text) | tagid(integer) | value(text) |
|---------------+----------------+-------------|
|someStringValue| 1234 | someValue |
|---------------+----------------+-------------|
|aDiffStringKey | 1235 | a username |
|---------------+----------------+-------------|
|aDiffStrKey | 1236 | Nov 5, 1605 |
+---------------+----------------+-------------+
assetid and tagid are foreign keys from other tables. Think of the assetid representing a file and the tagid/value pair is a map of descriptors.
Right now, the API (which is in Java) creates all these key-value pairs as a Map object. This includes things like timestamps/dates. What we'd like to do is somehow be able to store different types of data for the value in the key-value pair, or at least store it differently within the database, so that if we needed to, we could run queries checking date ranges and the like on these tags. However, if they're stored as text items in the db, then we'd have to a) know that this is actually a date/time/timestamp item, and b) convert it into something that we could actually run such a query on.
There is only one idea I could think of thus far that doesn't completely change the layout of the db:
Expand the assettag table (shown above) to have additional columns for the various types (numeric, text, timestamp), allow them to be null, and then on insert check the corresponding 'key' to figure out what type of data it really is. However, I can see a lot of problems with that sort of implementation.
Can any PostgreSQL-Ninjas out there offer a suggestion on how to approach this problem? I'm only recently getting thrown back into the deep-end of database interactions, so I admit I'm a bit rusty.
You've basically got two choices:
Option 1: A sparse table
Have one column for each data type, but only use the column that matches the data type you want to store. Of course this leads to most columns being null - a waste of space, but the purists like it because of the strong typing. It's a bit clunky having to check each column for null to figure out which data type applies. Also, too bad if you actually want to store a null - then you must choose a specific value that "means null" - more clunkiness.
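A rough sketch of option 1 with hypothetical column names; the check constraint merely guards against filling more than one of the typed columns:
CREATE TABLE assettag_sparse (
    assetid  text    NOT NULL,
    tagid    integer NOT NULL,
    val_text text,
    val_num  numeric,
    val_ts   timestamptz,
    CHECK (num_nonnulls(val_text, val_num, val_ts) <= 1)
);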
Option 2: Two columns - one for content, one for type
Everything can be expressed as text, so have a text column for the value, and another column (int or text) for the type, so your app code can restore the value as an object of the correct type. The good thing is you don't have lots of nulls; importantly, you can easily extend the types beyond SQL data types to application classes by storing their value as JSON and their type as the class name.
I have used option 2 several times in my career and it was always very successful.
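A minimal sketch of option 2, again with hypothetical names; a date-range query filters on the type column and casts the value on the fly:
CREATE TABLE assettag_typed (
    assetid text    NOT NULL,
    tagid   integer NOT NULL,
    value   text,             -- everything stored as text
    valtype text    NOT NULL  -- e.g. 'text', 'timestamp', 'numeric', or a class name
);

SELECT assetid
FROM assettag_typed
WHERE valtype = 'timestamp'
  AND value::timestamp BETWEEN '2019-01-01' AND '2019-12-31';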
Another option, depending on what you're doing, could be to just have one value column but store some JSON around the value...
This could look something like:
{
"type": "datetime",
"value": "2019-05-31 13:51:36"
}
That could even go a step further, using a JSON or XML column.
I'm not in any way a PostgreSQL ninja, but I think that instead of two columns (one for the value and one for the type) you could look at the hstore data type:
data type for storing sets of key/value pairs within a single
PostgreSQL value. This can be useful in various scenarios, such as
rows with many attributes that are rarely examined, or semi-structured
data. Keys and values are simply text strings.
Of course, you have to check how dates/timestamps convert to and from this type and see if it works for you.
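A small sketch of the hstore idea (requires the hstore extension; table and key names are made up):
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE asset_tags (
    assetid text PRIMARY KEY,
    tags    hstore
);

INSERT INTO asset_tags VALUES
    ('someStringValue', 'uploadedBy => "a username", addedOn => "2019-05-31"');

-- values come back as text, so date comparisons still need a cast
SELECT assetid
FROM asset_tags
WHERE (tags -> 'addedOn')::date > '2019-01-01';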
You can use two different techniques:
In case the type can vary for every tagid
Define a table name and row ID for every tagid-assetid combination in a main table, plus the actual data tables:
maintable:
+---------------+----------------+-----------------+---------------+
|assetid (text) | tagid(integer) | tablename(text) | table_id(int) |
|---------------+----------------+-----------------+---------------|
|someStringValue| 1234 | tablebool | 123 |
|---------------+----------------+-----------------+---------------|
|aDiffStringKey | 1235 | tablefloat | 123 |
|---------------+----------------+-----------------+---------------|
|aDiffStrKey | 1236 | tablestring | 123 |
+---------------+----------------+-----------------+---------------+
tablebool
+-------------+-------------+
| id(integer) | value(bool) |
|-------------+-------------|
| 123 | False |
+-------------+-------------+
tablefloat
+-------------+--------------+
| id(integer) | value(float) |
|-------------+--------------|
| 123 | 12.345 |
+-------------+--------------+
tablestring
+-------------+---------------+
| id(integer) | value(string) |
|-------------+---------------|
| 123 | 'text' |
+-------------+---------------+
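Reading a value back under this layout means joining maintable to the per-type table named in tablename, for example (a sketch using the hypothetical tables above):
SELECT m.assetid, m.tagid, f.value
FROM maintable m
JOIN tablefloat f ON f.id = m.table_id
WHERE m.tablename = 'tablefloat';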
In case every tagid has a fixed type
create a tagid description table
tag descriptors
+---------------+----------------+-----------------+
|assetid (text) | tagid(integer) | tablename(text) |
|---------------+----------------+-----------------|
|someStringValue| 1234 | tablebool |
|---------------+----------------+-----------------|
|aDiffStringKey | 1235 | tablefloat |
|---------------+----------------+-----------------|
|aDiffStrKey | 1236 | tablestring |
+---------------+----------------+-----------------+
and corresponding data tables
tablebool
+-------------+----------------+-------------+
| id(integer) | tagid(integer) | value(bool) |
|-------------+----------------+-------------|
| 123 | 1234 | False |
+-------------+----------------+-------------+
tablefloat
+-------------+----------------+--------------+
| id(integer) | tagid(integer) | value(float) |
|-------------+----------------+--------------|
| 123 | 1235 | 12.345 |
+-------------+----------------+--------------+
tablestring
+-------------+----------------+---------------+
| id(integer) | tagid(integer) | value(string) |
|-------------+----------------+---------------|
| 123 | 1236 | 'text' |
+-------------+----------------+---------------+
All this is just the general idea. You should adapt it to your needs.