I have the following query:
SELECT
    u.*,
    array_agg(
        json_build_object(
            'id', usn.id,
            'schemaName', usn.schema_name
        )
    ) AS schemas
FROM dev.users u
FULL OUTER JOIN dev.user_schema_names usn ON u.id = usn.user_id
WHERE u.email = $1
GROUP BY u.id
But for some odd reason, this returns the following:
{
  id: 1,
  email: 'test@test.com',
  schemas: [ { id: null, schemaName: null } ]
}
The dev.user_schema_names table has no records in it. I am expecting schemas to be an empty array ([]). If I insert records into dev.user_schema_names, however, it works just fine.
What am I doing wrong? Should I be using something else instead of json_build_object?
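One common fix, sketched under the assumption that u.id is the primary key of dev.users: the join produces a single all-NULL usn row for the unmatched user, so json_build_object dutifully builds an object of NULLs. Filtering those rows out of the aggregate and falling back to an empty array avoids that:
SELECT
    u.*,
    COALESCE(
        json_agg(
            json_build_object('id', usn.id, 'schemaName', usn.schema_name)
        ) FILTER (WHERE usn.id IS NOT NULL),
        '[]'
    ) AS schemas
FROM dev.users u
LEFT JOIN dev.user_schema_names usn ON u.id = usn.user_id
WHERE u.email = $1
GROUP BY u.id;
The LEFT JOIN swap is incidental: the WHERE u.email = $1 filter already discards the unmatched right-hand rows a FULL OUTER JOIN would add.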
I am trying to create a conditional measure based on a dimension.
My dimensions:
dimension_group: date {
  hidden: yes
  type: time
  timeframes: [
    raw,
    date,
    week,
    month,
    quarter,
    year
  ]
  convert_tz: no
  datatype: date
  sql: ${TABLE}.date ;;
}
dimension: status {
  type: string
  sql: CASE
         WHEN UPPER(${TABLE}.status) = 'APPROVED' THEN 'Approved'
         WHEN UPPER(${TABLE}.status) = 'PENDING' THEN 'Pending'
       END ;;
}
My Measures:
measure: xyz {
  type: sum
  value_format: "$#,##0.00"
  sql: ${TABLE}.xyz ;;
}
measure: abc {
  type: sum
  value_format: "$#,##0.00"
  sql: ${TABLE}.abc ;;
}
Measure with conditions:
measure: conditional {
  type: number
  value_format: "$#,##0.00"
  sql: CASE WHEN ${status} = 'Pending' THEN ${xyz}
       ELSE ${abc}
       END ;;
}
On my Explore, when I select date and conditional, I keep getting this error:
ERROR: column "table.status" must appear in the GROUP BY clause or be used in an aggregate function
I understand what the error means; I am just not sure how to fix it. How do I resolve this error? I need all the dimensions and measures.
You can create a dimension conditional, referencing the raw columns rather than the sum measures:
dimension: conditional {
  type: number
  value_format: "$#,##0.00"
  # use the raw columns here, not the ${xyz}/${abc} measures
  sql: CASE WHEN ${status} = 'Pending' THEN ${TABLE}.xyz
       ELSE ${TABLE}.abc
       END ;;
}
And then create a measure sum_conditional on that dimension:
measure: sum_conditional {
  type: sum
  value_format: "$#,##0.00"
  sql: ${conditional} ;;
}
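To see why this resolves the error: a type: number measure is injected into the generated SQL unaggregated, so the original conditional measure expanded to something like CASE WHEN status = 'Pending' THEN SUM(xyz) ELSE SUM(abc) END, where status appears outside any aggregate. The dimension-plus-sum arrangement instead generates (roughly, assuming the underlying columns are named status, xyz and abc):
SUM(CASE WHEN status = 'Pending' THEN xyz ELSE abc END)
which is a single plain aggregate and needs no extra GROUP BY entry.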
I have a Table A:
A
--------------
id | name |
How do I skip the insert if a row with that NAME already exists?
I need to do this in Liquibase, in YAML format.
Liquibase changesets can be executed conditionally based on preconditions. In your case you could run a sqlCheck:
- changeSet:
    id: 1
    author: me
    preConditions:
      - onFail: MARK_RAN
      - sqlCheck:
          expectedResult: 0
          sql: SELECT COUNT(*) FROM person WHERE name = 'John'
    changes:
      - insert:
          tableName: person
          columns:
            - column:
                name: id
                value: 2
            - column:
                name: name
                value: John
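For reference, the guarded changeset behaves roughly like this idempotent plain-SQL pattern (a sketch with the same example values; Liquibase additionally records the changeset as ran either way):
INSERT INTO person (id, name)
SELECT 2, 'John'
WHERE NOT EXISTS (SELECT 1 FROM person WHERE name = 'John');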
I have a jsonb column (called info) in Postgres whose structure looks like this:
{ name: 'john', house_id: null, extra_attrs: [{ attr_id: 4, attr_value: 'a value' }, { attr_id: 5, attr_value: 'another value' }] }
It can have N extra_attrs but we know that each of them will have just two keys: the attr_id and the attr_value.
Now, what is the best way to query for info that has extra_attrs with a specific attr_id and attr_value? I have done it like this, and it works:
Given the following data structure to query for:
[{ attr_id: 4, values: ['a value', 'something else'] }, { attr_id: 5, values: ['another value'] }]
The following query works:
select * from people
where (info @> '{"extra_attrs": [{ "attr_id": 4, "attr_value": "a value" }]}'
    OR info @> '{"extra_attrs": [{ "attr_id": 4, "attr_value": "something else" }]}')
AND info @> '{"extra_attrs": [{ "attr_id": 5, "attr_value": "another value" }]}'
I am wondering if there is a better way to do so or this is fine.
One alternative method would involve JSON functions, transforming the data to apply the filter on:
SELECT people.info
FROM people,
     LATERAL (SELECT DISTINCT TRUE AS is_valid
              FROM jsonb_array_elements(info->'extra_attrs') y
              WHERE (y->>'attr_id', y->>'attr_value') IN (
                  ('4', 'a value'),
                  ('4', 'something else'),
                  ('5', 'another value')
              )
     ) y
WHERE is_valid
I believe this method is more convenient for dynamic filters, since the id/value pairs are added in only one place.
A similar (and perhaps slightly faster) method would use WHERE EXISTS and compare JSON documents, like below.
SELECT people.info
FROM people
WHERE EXISTS (SELECT TRUE
              FROM jsonb_array_elements(info->'extra_attrs') attrs
              WHERE attrs @> ANY(ARRAY[
                  JSONB '{ "attr_id": 4, "attr_value": "a value" }',
                  JSONB '{ "attr_id": 4, "attr_value": "something else" }',
                  JSONB '{ "attr_id": 5, "attr_value": "another value" }'
              ]))
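If these lookups are frequent, the containment checks in the original info @> ... query can also be backed by a GIN index. A sketch, assuming the table and column names above:
CREATE INDEX people_info_idx ON people USING gin (info jsonb_path_ops);
The jsonb_path_ops operator class supports only the @> operator, but it is smaller and faster for exactly this kind of containment query. Note that it helps the top-level info @> ... form, not the variants that unnest with jsonb_array_elements first.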
So I'm starting with this...
SELECT * FROM parts_finishing;
...I get this...
id, id_part, id_finish, id_metal, id_description, date,
inside_hours_k, inside_rate, outside_material
(0 rows)
...so everything looks fine so far so I do this...
INSERT INTO parts_finishing
(
id_part, id_finish, id_metal, id_description,
date, inside_hours_k, inside_rate, outside_material
) VALUES (
('1013', '6', '30', '1', NOW(), '0', '0', '22.43'),
('1013', '6', '30', '2', NOW(), '0', '0', '32.45'));
...and I get...
ERROR: INSERT has more target columns than expressions
Now I've done a few things, like ensuring numbers aren't in quotes or are in quotes (I would love a reference table for that with regard to integers, numeric types, etc.), after I obviously counted the number of column names and values being inserted. I also tried making sure that all the commas are commas... really at a loss here. There are no other columns except for id, which is the bigserial primary key.
Remove the extra () :
INSERT INTO parts_finishing
(
id_part, id_finish, id_metal, id_description,
date, inside_hours_k, inside_rate, outside_material
) VALUES
('1013', '6', '30', '1', NOW(), '0', '0', '22.43')
, ('1013', '6', '30', '2', NOW(), '0', '0', '32.45')
;
The (..., ...) in Postgres is the syntax for a tuple literal; the extra set of ( ) would create a tuple of tuples, which makes no sense.
Also: for numeric literals you don't want the quotes, assuming all these columns have numeric types:
(1013, 6, 30, 1, NOW(), 0, 0, 22.43)
, ...
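You can see the tuple-of-tuples effect directly (a quick illustrative query, not part of the fix):
SELECT * FROM (VALUES (('1013', '6', '30'))) t;
This returns a single column holding one record value rather than three columns, which is why the column count no longer matches the target list.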
I had a similar problem when using SQL string composition with psycopg2 in Python, but the problem was slightly different. I was missing a comma after one of the fields.
INSERT INTO parts_finishing
(id_part, id_finish, id_metal)
VALUES (
%(id_part)s <-------------------- missing comma
%(id_finish)s,
%(id_metal)s
);
This caused psycopg2 to yield this error:
ERROR: INSERT has more target columns than expressions.
This happened to me in a large insert where everything was fine comma-wise; it took me a while to notice I was inserting into the wrong table. Of course the DB does not know your intentions.
Copy-paste is the root of all evil... :-)
I faced the same issue as well. It is raised when the number of columns given and the number of column values given are mismatched.
I had the same error on Express.js with PostgreSQL, and I solved it; this is my answer. The error fired at the time of inserting a record, and it occurred because the column names did not match the values being passed:
ERROR: error: INSERT has more target columns than expressions
    name: 'error',
    length: 116,
    severity: 'ERROR',
    code: '42601',
    detail: undefined,
    hint: undefined,
    position: '294',
    internalPosition: undefined,
    internalQuery: undefined,
    where: undefined,
    schema: undefined,
    table: undefined,
    column: undefined,
    dataType: undefined,
    constraint: undefined,
    file: 'analyze.c',
    line: '945',
Here is my code demo:
const query = {
  text: 'INSERT INTO student (first_name, last_name, email, phone) VALUES ($1, $2, $3, $4)',
  values: [first_name, last_name, email, phone]
}
In my case, there was a syntax error in a subquery.
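One common way a subquery triggers this message is by yielding fewer columns than the INSERT target list. An illustration with hypothetical table names:
INSERT INTO student (first_name, last_name)
SELECT first_name FROM old_student;
-- ERROR: INSERT has more target columns than expressions
Two target columns, one expression from the subquery, hence the error.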