PostgreSQL inserts values into the wrong columns - postgresql

I want to add a denormalized table for some data of a GTFS feed. For that I created a new table:
CREATE TABLE denormalized_trips (
stops_coords json NOT NULL,
stops_object json NOT NULL,
agency_key text NOT NULL,
trip_id text NOT NULL,
route_id text NOT NULL,
service_id text NOT NULL,
shape_id text,
route_color text,
route_long_name text,
route_desc text,
direction_id text
);
CREATE INDEX denormalized_trips_index ON denormalized_trips (agency_key, trip_id);
CREATE UNIQUE INDEX denormalized_trips_unique_index ON denormalized_trips (agency_key, route_id);
Now I want to transfer data from one table to the other via an insert statement. The statement is rather complex.
INSERT INTO denormalized_trips
SELECT
trps.stops_coords,
trps.stops_object,
trps.trip_id,
trps.service_id,
trps.route_id,
trps.direction_id,
trps.agency_key,
trps.shape_id,
trps.route_color,
trps.route_long_name,
trps.route_desc
FROM (
SELECT
array_to_json(ARRAY_AGG(array[stop_lat, stop_lon])) AS stops_coords,
array_to_json(ARRAY_AGG(array[
stops.stop_id,
CAST ( stop_times.stop_sequence AS TEXT ),
stops.stop_name,
stop_times.departure_time,
CAST ( stop_times.departure_time_seconds AS TEXT ),
stop_times.arrival_time,
CAST ( stop_times.arrival_time_seconds AS TEXT )
])) AS stops_object,
trips.trip_id,
trips.service_id,
trips.direction_id,
trips.agency_key,
trips.shape_id,
routes.route_id,
routes.route_color,
routes.route_long_name,
routes.route_desc
FROM gtfs_stop_times AS stop_times
INNER JOIN gtfs_trips AS trips
ON trips.trip_id = stop_times.trip_id AND trips.agency_key = stop_times.agency_key
INNER JOIN gtfs_routes AS routes ON trips.agency_key = routes.agency_key AND routes.route_id = trips.route_id
INNER JOIN gtfs_stops AS stops
ON stops.stop_id = stop_times.stop_id
AND stops.agency_key = stop_times.agency_key
AND NOT EXISTS (
SELECT 0
FROM denormalized_max_stop_sequence AS max
WHERE max.agency_key = stop_times.agency_key
AND max.trip_id = stop_times.trip_id
AND max.trip_max = stop_times.stop_sequence
)
GROUP BY
trips.trip_id,
trips.service_id,
trips.direction_id,
trips.agency_key,
trips.shape_id,
routes.route_id,
routes.route_color,
routes.route_long_name,
routes.route_desc
) as trps
If I just run the inner select statement I get the right results. (The screenshot is omitted here; it is too long to show all columns.)
But if I execute the insert statement and display the contents of the table, the values end up in the wrong columns: agency_key now holds the values of trip_id, direction_id now holds the service_id, and more columns are mixed up in the same way.
So my question is: what am I doing wrong that makes my insert statement put the contents into the wrong columns of the newly created table?
Thanks for your help.

Postgres, by default, will insert your values in the order the columns are declared in the table; it has nothing to do with what your columns are named in the query.
https://www.postgresql.org/docs/9.5/static/sql-insert.html
If no list of column names is given at all, the default is all the columns of the table in their declared order; or the first N column names, if there are only N columns supplied by the VALUES clause or query.
You can alter your insert to declare the order of the columns you're inserting, or you can change the order of your select to match the order of columns in the table.
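For example, here is a sketch of the first option, using the column names from the question (the inner subquery is unchanged and elided for brevity):
INSERT INTO denormalized_trips (
    stops_coords, stops_object, trip_id, service_id, route_id,
    direction_id, agency_key, shape_id, route_color,
    route_long_name, route_desc
)
SELECT
    trps.stops_coords, trps.stops_object, trps.trip_id, trps.service_id,
    trps.route_id, trps.direction_id, trps.agency_key, trps.shape_id,
    trps.route_color, trps.route_long_name, trps.route_desc
FROM ( ... ) AS trps; -- same inner SELECT as in the question
With the explicit column list, the order in which the columns were declared in CREATE TABLE no longer matters.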

Related

Extract all the values in jsonb into a row

I'm using PostgreSQL 11. I have a jsonb value which represents a row of a table; it looks like
{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin#ae.com",...,"thirdpartyauthenticationkey":{}}
Is there any way I could gather all the "values" of the jsonb into a string, separated by ',' and without the keys?
The string I want to obtain from the jsonb above looks like
(test, Root, 0, superadmin@ae.com, ..., {})
I need to keep the values in the same ORDER as their keys appear in the jsonb. Could I do that with PostgreSQL?
You can use the jsonb_populate_record function (assuming your json data does match the users table). This will force the text value to match the order of your users table:
Schema (PostgreSQL v13)
CREATE TABLE users (
userid text,
rolename text,
loginerror int,
email text,
thirdpartyauthenticationkey json
)
Query #1
WITH d(js) AS (
VALUES
('{"userid":"test", "rolename":"Root", "loginerror":0, "email":"superadmin#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb),
('{"userid":"other", "rolename":"User", "loginerror":324, "email":"nope#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb)
)
SELECT jsonb_populate_record(null::users, js),
jsonb_populate_record(null::users, js)::text AS record_as_text,
pg_typeof(jsonb_populate_record(null::users, js)::text)
FROM d
;
jsonb_populate_record | record_as_text | pg_typeof
-------------------------------------------------------------------------------
(test,Root,0,superadmin@ae.com,{}) | (test,Root,0,superadmin@ae.com,{}) | text
(other,User,324,nope@ae.com,{}) | (other,User,324,nope@ae.com,{}) | text
Note that if you're building this string to insert it back into PostgreSQL, you don't need to do that, since the result of jsonb_populate_record will already match your table:
Query #2
WITH d(js) AS (
VALUES
('{"userid":"test", "rolename":"Root", "loginerror":0, "email":"superadmin#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb),
('{"userid":"other", "rolename":"User", "loginerror":324, "email":"nope#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb)
)
INSERT INTO users
SELECT (jsonb_populate_record(null::users, js)).*
FROM d;
There are no results to be displayed.
Query #3
SELECT * FROM users;
userid | rolename | loginerror | email | thirdpartyauthenticationkey
---------------------------------------------------------------------
test | Root | 0 | superadmin@ae.com | {}
other | User | 324 | nope@ae.com | {}
You can use jsonb_each_text() to get a set of text representations of the elements, string_agg() to aggregate them into a comma-separated string, and concat() to put that in parentheses.
SELECT concat('(', string_agg(value, ', '), ')')
FROM jsonb_each_text('{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin@ae.com","thirdpartyauthenticationkey":{}}'::jsonb) AS jet (key, value);
You didn't provide the DDL and DML of the table the JSON may reside in (if it resides in a table at all; that isn't clear from your question). The demonstration above therefore only uses the JSON you showed as a scalar. If you do have a table, you need to CROSS JOIN LATERAL and GROUP BY some key, as sketched below.
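A minimal sketch of that pattern, assuming a hypothetical table my_table with a key column id and a jsonb column js (both names invented for illustration):
SELECT t.id,
       concat('(', string_agg(jet.value, ', '), ')') AS row_values
FROM my_table AS t
CROSS JOIN LATERAL jsonb_each_text(t.js) AS jet (key, value)
GROUP BY t.id;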
Edit:
If you need to be sure the order is retained and you don't have it defined in a table's structure as @Marth's answer assumes, then you can of course extract every value manually, in the order you need them.
SELECT concat('(',
concat_ws(', ',
j->>'userid',
j->>'rolename',
j->>'loginerror',
j->>'email',
j->>'thirdpartyauthenticationkey'),
')')
FROM (VALUES ('{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin@ae.com","thirdpartyauthenticationkey":{}}'::jsonb)) v (j);

How can I restrict a result to only include rows where one specific field is unique with UNION Select statement in BigQuery?

I have the following code. I'm trying to stitch the two tables together, but restrict it so that a duplicate Opportunity_ID is added only once, taken from the second table (OpportunityUpdates).
SELECT
Opportunity.Account_Name,
Opportunity.Opportunity_Name,
Opportunity.Opportunity_Owner,
Opportunity.Opportunity_ID
FROM
Opportunity
UNION DISTINCT
SELECT
OpportunityUpdates.Account_Name,
OpportunityUpdates.Opportunity_Name,
OpportunityUpdates.Opportunity_Owner,
OpportunityUpdates.Opportunity_ID
FROM
OpportunityUpdates
WHERE OpportunityUpdates.Opportunity_ID <> Opportunity.Opportunity_ID
This code consolidates all records from both tables (by Opportunity_ID) and gives priority to the OpportunityUpdates table based on Opportunity_ID.
It assumes that the same Opportunity_ID could be in either table ("duplicates"), but that within each table an Opportunity_ID is unique. It also assumes that Opportunity_ID is not nullable (never null).
SELECT DISTINCT
IF(ou.Opportunity_ID IS NOT NULL, ou.Account_Name, o.Account_Name) Account_Name,
IF(ou.Opportunity_ID IS NOT NULL, ou.Opportunity_Name, o.Opportunity_Name) Opportunity_Name,
IF(ou.Opportunity_ID IS NOT NULL, ou.Opportunity_Owner, o.Opportunity_Owner) Opportunity_Owner,
COALESCE(ou.Opportunity_ID, o.Opportunity_ID) Opportunity_ID
FROM OpportunityUpdates ou
FULL OUTER JOIN
Opportunity o
ON o.Opportunity_ID = ou.Opportunity_ID
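This works because a row present in both tables joins on Opportunity_ID, so ou.Opportunity_ID is non-null and the IF() expressions pick the OpportunityUpdates values; a row present in only one table comes through the FULL OUTER JOIN with NULLs on the other side, and IF()/COALESCE fall back to whichever side exists.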

SQL NOT LIKE comparison against dynamic list

Working on a new T-SQL stored procedure, I want to get all rows where the values in a specific column don't start with any of a specific set of 2-character substrings.
The general idea is:
SELECT * FROM table WHERE value NOT LIKE 's1%' AND value NOT LIKE 's2%' AND value NOT LIKE 's3%'
The catch is that I am trying to make it dynamic so that the specific substrings can be pulled from another table in the database, which can have more values added to it.
While I have never used the IN operator before, I think something along these lines should do what I am looking for. However, I don't think it is possible to use wildcards with IN, so I might not be able to compare just the substrings.
SELECT * FROM table WHERE value NOT IN (SELECT substrings FROM subTable)
To get around that limitation, I am trying to do something like this:
SELECT * FROM table WHERE SUBSTRING(value, 1, 2) NOT IN (SELECT Prefix FROM subTable WHERE Prefix IS NOT NULL)
but I'm not sure this is right, or if it is the most efficient way to do this. My preference is to do this in a Stored Procedure, but if that isn't feasible or efficient I'm also open to building the query dynamically in C#.
Here's an option: load the values you want to filter on into a table, LEFT OUTER JOIN to it, and use PATINDEX().
DECLARE @FilterValues TABLE
(
[FilterValue] NVARCHAR(10)
);
-- Table with values we want to filter on.
INSERT INTO @FilterValues (
[FilterValue]
)
VALUES ( N's1' )
, ( N's2' )
, ( N's3' );
DECLARE @TestData TABLE
(
[TestValues] NVARCHAR(100)
);
-- Load some test data.
INSERT INTO @TestData (
[TestValues]
)
VALUES ( N's1 Test Data' )
, ( N's2 Test Data' )
, ( N's3 Test Data' )
, ( N'test data not filtered out' )
, ( N'test data not filtered out 1' );
SELECT a.*
FROM @TestData [a]
LEFT OUTER JOIN @FilterValues [b]
ON PATINDEX([b].[FilterValue] + '%', [a].[TestValues]) > 0
WHERE [b].[FilterValue] IS NULL;
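If you'd rather keep the shape of the original NOT LIKE chain, the same filter can also be written with NOT EXISTS; a sketch against the same table variables as above:
SELECT a.*
FROM @TestData [a]
WHERE NOT EXISTS (
    SELECT 1
    FROM @FilterValues [b]
    WHERE [a].[TestValues] LIKE [b].[FilterValue] + '%'
);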

Can the categories in the postgres tablefunc crosstab() function be integers?

It's all in the title. The documentation has something like this:
SELECT *
FROM crosstab('...') AS ct(row_name text, category_1 text, category_2 text);
I have two tables, lab_tests and lab_tests_results. All of the lab_tests_results rows are tied to the primary key id integer in the lab_tests table. I'm trying to make a pivot table where the lab tests (identified by an integer) are row headers and the respective results are in the table. I can't get around a syntax error at or around the integer.
Is this possible with the current set up? Am I missing something in the documentation? Or do I need to perform an inner join of sorts to make the categories strings? Or modify the lab_tests_results table to use a text identifier for the lab tests?
Thanks for the help, all. Much appreciated.
Edit: Got it figured out with the help of Dmitry. He had the data layout figured out, but I was unclear on what kind of output I needed. I was trying to get the pivot table to be based on batch_id numbers in the lab_tests_results table. I had to hammer out the base query and cast the data types.
SELECT *
FROM crosstab('SELECT lab_tests_results.batch_id, lab_tests.test_name, lab_tests_results.test_result::FLOAT
FROM lab_tests_results, lab_tests
WHERE lab_tests.id=lab_tests_results.lab_test AND (lab_tests.test_name LIKE ''Test Name 1'' OR lab_tests.test_name LIKE ''Test Name 2'')
ORDER BY 1,2'
) AS final_result(batch_id VARCHAR, test_name_1 FLOAT, test_name_2 FLOAT);
This provides a pivot table from the lab_tests_results table like below:
batch_id |test_name_1 |test_name_2
---------------------------------------
batch1 | result1 | <null>
batch2 | result2 | result3
If I understand correctly, your tables look something like this:
CREATE TABLE lab_tests (
id INTEGER PRIMARY KEY,
name VARCHAR(500)
);
CREATE TABLE lab_tests_results (
id INTEGER PRIMARY KEY,
lab_tests_id INTEGER REFERENCES lab_tests (id),
result TEXT
);
And your data looks something like this:
INSERT INTO lab_tests (id, name)
VALUES (1, 'test1'),
(2, 'test2');
INSERT INTO lab_tests_results (id, lab_tests_id, result)
VALUES (1,1,'result1'),
(2,1,'result2'),
(3,2,'result3'),
(4,2,'result4'),
(5,2,'result5');
First of all, crosstab is part of tablefunc, so you need to enable it:
CREATE EXTENSION tablefunc;
You only need to run this once per database.
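If you're not sure whether it's already installed, the IF NOT EXISTS form is safe to re-run; it simply does nothing when the extension is already there:
CREATE EXTENSION IF NOT EXISTS tablefunc;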
The final query will look like this:
SELECT *
FROM crosstab(
'SELECT lt.name::TEXT, lt.id, ltr.result
FROM lab_tests AS lt
JOIN lab_tests_results ltr ON ltr.lab_tests_id = lt.id
ORDER BY 1,2'
) AS ct(test_name text, result_1 text, result_2 text, result_3 text);
Explanation:
The crosstab() function takes the text of a query which should return 3 columns: (1) a column for the name of a group, (2) a column for grouping, and (3) the value. The wrapping query just selects all the values crosstab() returns and defines the list of output columns (the part after AS). First is the row name (test_name) and then the values (result_1, result_2, result_3). In my query I'll get up to 3 results per test: if I have more than 3 results I won't see them, and if I have fewer than 3 results I'll get nulls.
The result for this query is:
test_name |result_1 |result_2 |result_3
---------------------------------------
test1 |result1 |result2 |<null>
test2 |result3 |result4 |result5

I'm trying to insert tuples into a table A (from table B) if the primary key of the table B tuple doesn't exist in table A

Here is what I have so far:
INSERT INTO Tenants (LeaseStartDate, LeaseExpirationDate, Rent, LeaseTenantSSN, RentOverdue)
SELECT CURRENT_DATE, NULL, NewRentPayments.Rent, NewRentPayments.LeaseTenantSSN, FALSE from NewRentPayments
WHERE NOT EXISTS (SELECT * FROM Tenants, NewRentPayments WHERE NewRentPayments.HouseID = Tenants.HouseID AND
NewRentPayments.ApartmentNumber = Tenants.ApartmentNumber)
So, HouseID and ApartmentNumber together make up the primary key. If there is a tuple in table B (NewRentPayments) that doesn't exist in table A (Tenants) based on the primary key, then it needs to be inserted into Tenants.
The problem is, when I run my query, it doesn't insert anything (I know for a fact there should be 1 tuple inserted). I'm at a loss, because it looks like it should work.
Thanks.
Your subquery was not correlated; it was just an independent join query. Per the description of your problem, you don't need that join.
Try this:
insert into Tenants (LeaseStartDate, LeaseExpirationDate, Rent, LeaseTenantSSN, RentOverdue)
select current_date, null, p.Rent, p.LeaseTenantSSN, FALSE
from NewRentPayments p
where not exists (
select *
from Tenants t
where p.HouseID = t.HouseID
and p.ApartmentNumber = t.ApartmentNumber
)
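As an aside: since HouseID and ApartmentNumber together form the key, on PostgreSQL 9.5+ the same intent can be expressed with ON CONFLICT DO NOTHING. A sketch, assuming Tenants also stores HouseID and ApartmentNumber and has a unique constraint on them:
insert into Tenants (HouseID, ApartmentNumber, LeaseStartDate, LeaseExpirationDate, Rent, LeaseTenantSSN, RentOverdue)
select p.HouseID, p.ApartmentNumber, current_date, null, p.Rent, p.LeaseTenantSSN, FALSE
from NewRentPayments p
on conflict (HouseID, ApartmentNumber) do nothing;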