How to create a column that holds an array in Postgres?

Background:
I am making a db for a reservations calendar. The reservations are hourly based, so I need to insert many items into one column called "hours_reserved".
Example tables of what I need:
Table "Space"
Column / Values
id / 1
date / 5.2.2020
hours / { 8-10, 10-12 }
Table "reservation"
Column / Values
id / 1
space_id / 1
date / 5.2.2020
reserved_hours / 8-10
Table "reservation"
Column / Values
id / 2
space_id / 1
date / 5.2.2020
reserved_hours / 10-12
So I need to have multiple items inserted into the "hours" column of the "space" table.
How do I do this in Postgres?
Also is there a better way to accomplish this?

There is more than one way to do this, depending on the type of the hours field (i.e. text[], json or jsonb). I'd go with jsonb, simply because you can do a lot of things with it and you'll find the experience useful in the short term.
CREATE TABLE "public"."space"
("id" SERIAL, "date_schedule" date, "hours" jsonb, PRIMARY KEY ("id"))
Whenever you insert a manually crafted record into this table, write the value as text (a single-quoted JSON literal) and cast it to jsonb:
insert into "space"
(date_schedule,hours)
values
('05-02-2020'::date, '["8-10", "10-12"]'::jsonb);
There is more than one way to match these available hours against the reservations; take a look at the docs on the json and jsonb functions and operators. For example, doing:
SELECT id,date_schedule, jsonb_array_elements(hours) hours FROM "public"."space"
would yield
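Something like the following, roughly (the exact date rendering depends on how '05-02-2020' is parsed under your DateStyle setting):
 id | date_schedule | hours
----+---------------+---------
  1 | 2020-02-05    | "8-10"
  1 | 2020-02-05    | "10-12"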
Which has these ugly double quotes (which is correct, since json can hold several kinds of scalars; that column is polymorphic :D).
However, you can perform a little transformation to remove them so you can join against the reservations:
with unnested as (
SELECT id,date_schedule, jsonb_array_elements(hours) hours FROM "public"."space"
)
select id,date_schedule,replace(hours::text, '"','') from unnested
The same can be achieved by defining the field as text[] (the insertion syntax is different but trivial).
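A minimal sketch of the text[] variant (the table name space_arr is hypothetical, just to avoid clashing with the jsonb table above):
CREATE TABLE "public"."space_arr"
("id" SERIAL, "date_schedule" date, "hours" text[], PRIMARY KEY ("id"));
insert into "space_arr"
(date_schedule, hours)
values
('05-02-2020'::date, ARRAY['8-10', '10-12']);
-- equivalently: '{"8-10","10-12"}'::text[]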
In that scenario your data will look like this:
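Roughly (the elements render without JSON quoting):
 id | date_schedule |    hours
----+---------------+--------------
  1 | 2020-02-05    | {8-10,10-12}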
Which you can unwrap as:
SELECT id,date_schedule, unnest(hours) FROM "public"."space"

Apparently
ALTER TABLE mytable
ADD COLUMN myarray text[];
Works fine.
I got the following problem when trying to PUT (update) into that column using Postman (create works fine):
{
"myarray": ["8-10"]
}
Results into:
"message": "error: invalid input syntax for type integer:
\"{\"myarray\":[\"8-10\"]}\""

Related

In a PostgreSQL crosstab, can I automate the tuple part?

I'm trying to get a tall table (with just 3 columns indicating variable, timestamp and value) into a wide format where timestamp is the index, the columns are the variable names, and the values are the values of the new table.
In python/pandas this would be something along the lines of
import pandas as pd
df = pd.read_csv("./mydata.csv") # assume timestamp, varname & value columns
df.pivot(index="timestamp", columns="varname", values="value")
for PostgreSQL there exists crosstab, so far I have:
SELECT * FROM crosstab(
$$
SELECT
"timestamp",
"varname",
"value"
FROM mydata
ORDER BY "timestamp" ASC, "varname" ASC
$$
) AS ct(
"timestamp" timestamp,
"varname1" numeric,
...
"varnameN" numeric
);
The problem is that I can potentially have dozens to hundreds of variable names. The types are always numeric, but the number of variable names is not stable (we could need more variables or realize that others are not necessary).
Is there a way to automate the "ct" part so that some other query (e.g. select distinct "varname" from mydata) produces it instead of me having to type in every single variable name present?
PS: The PostgreSQL version is 12.9 at home, 14.0 in production. The number of rows in the original table is around 2 million, but I'm going to filter by timestamp and varname, so potentially only a few hundred thousand rows. After filtering I get ~50 unique varnames, but that will increase in a few weeks.
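A rough sketch of generating the ct(...) column definition list from the data itself (assuming the mydata table and varname column above); the resulting string still has to be spliced into the crosstab call, e.g. via dynamic SQL in a PL/pgSQL function, since a column definition list cannot be parameterized:
SELECT '"timestamp" timestamp, '
       || string_agg(format('%I numeric', varname), ', ' ORDER BY varname) AS ct_columns
FROM (SELECT DISTINCT varname FROM mydata) AS v;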

How to get data from brackets in PostgreSQL?

I have a table (whose shape I can't control) with one column of data type varchar(500), but the strings are wrapped in brackets.
Can I get the data out of these brackets in another way than this?
select *
from my_table
where products LIKE '%c%'
;
I tried to change the data type to array, json and jsonb, but it doesn't work.
"Products" are always in the same order, but not every user has all of them and some have nothing.

How can I sum/subtract time values from same row

I want to sum and subtract two or more timestamp columns.
I'm using PostgreSQL and I have a structure as you can see:
I can't round the minutes or seconds, so I'm trying to extract the EPOCH and do the operation afterwards, but I always get an error: the first EXTRACT recognizes the column, but when I add the second EXTRACT to the same SQL command I get an error message saying that the second column does not exist.
I'll give you an example:
SELECT
EXAMPLE.PERSON_ID,
COALESCE(EXTRACT(EPOCH from EXAMPLE.LEFT_AT),0) +
COALESCE(EXTRACT(EPOCH from EXAMPLE.ARRIVED_AT),0) AS CREDIT
FROM
EXAMPLE
WHERE
EXAMPLE.PERSON_ID = 1;
In this example I would get an error like:
Column ARRIVED_AT does not exist
Why is this happening?
Could I sum/subtract time values from same row?
Is ARRIVED_AT a calculated value instead of a column? What did you run to get the query results image you posted showing those columns?
The following script does what you expect, so there's something about the structure of the table you're querying that isn't what you expect.
CREATE SCHEMA so46801016;
SET search_path=so46801016;
CREATE TABLE trips (
person_id serial primary key,
arrived_at time,
left_at time
);
INSERT INTO trips (arrived_at, left_at) VALUES
('14:30'::time, '19:30'::time)
, ('11:27'::time, '20:00'::time)
;
SELECT
t.person_id,
COALESCE(EXTRACT(EPOCH from t.left_at),0) +
COALESCE(EXTRACT(EPOCH from t.arrived_at),0) AS credit
FROM
trips t;
DROP SCHEMA so46801016 CASCADE;
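As a side note, if the goal is simply the duration between the two columns, PostgreSQL can subtract time values directly and return an interval, so a sketch like this also works against the trips table above (before the DROP SCHEMA, of course):
SELECT person_id, left_at - arrived_at AS duration
FROM trips;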

Inserting a substring of column in Redshift

Hello, I am using Redshift, where I have a staging table and a base table. One of the columns (city) in my base table has data type varchar with length 100. When inserting the column value from the staging table into the base table, I want the value to be truncated to the first (leftmost) 100 characters. Is this possible in Redshift?
INSERT into base_table(org_city) select substring(city,0,100) from staging_table;
I tried using the above query but it failed. Any solutions, please?
Try this! Your base table column is varchar(100), so you need to substring 0-99 chars, which is 100 chars; you are trying to substring 101 chars.
INSERT into base_table(org_city) select substring(city,0,99) from staging_table;
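As a side note, SUBSTRING in Redshift is 1-based (position 1 is the first character), so a sketch like the following also keeps at most the leftmost 100 characters (bear in mind that varchar lengths in Redshift count bytes, so multibyte characters can still overflow):
INSERT INTO base_table(org_city)
SELECT left(city, 100) FROM staging_table;
-- or: substring(city, 1, 100)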

T-SQL LEFT JOIN on bigint id return only ids lower than 101 on right table

I have two tables on a Sql Server 2008.
ownership with 3 fields and case with another 3 fields; I need to join both on the ID field (bigint).
For testing purposes I'm only using one field from each table. This field is bigint and has values from 1 to 170 (for now).
My query is:
SELECT DISTINCT
ownership.fCase,
case.id
FROM
ownership LEFT JOIN case ON (case.id=ownership.fCase)
WHERE
ownership.dUser='demo'
This was expected to return 4 rows with the same values in both columns. The problem is that the last row of the right table comes back as null for fCase = 140. This is the only value above 100.
If I run the query without the WHERE clause it shows all rows from the left table, but the values on the right only appear if they are below 101; otherwise it shows null.
Can someone help me, am I doing something wrong or is this a limitation or a bug?
CASE is also a reserved keyword, so the parser may be getting confused. Try your table and column names in [], e.g. [case].[id] = [ownership].[fCase]. Are you absolutely sure that [case].[id] and [ownership].[fCase] are both bigint? If your current values are 1-170, then why bigint (max 9,223,372,036,854,775,807)? Does that column accept nulls?
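A sketch of the same query with bracketed identifiers, in case the reserved word is what's tripping up the parser:
SELECT DISTINCT
    [ownership].[fCase],
    [case].[id]
FROM [ownership]
LEFT JOIN [case] ON [case].[id] = [ownership].[fCase]
WHERE [ownership].[dUser] = 'demo';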