Postgres array of Golang structs

I have the following Go struct:
type Bar struct {
    Stuff string `db:"stuff"`
    Other string `db:"other"`
}

type Foo struct {
    ID   int    `db:"id"`
    Bars []*Bar `db:"bars"`
}
So Foo contains a slice of Bar pointers. I also have the following tables in Postgres:
CREATE TABLE foo (
    id INT
);

CREATE TABLE bar (
    id INT,
    stuff VARCHAR,
    other VARCHAR,
    trash VARCHAR
);
I want to LEFT JOIN on table bar and aggregate it as an array to be stored in the struct Foo. I've tried:
SELECT f.*,
ARRAY_AGG(b.stuff, b.other) AS bars
FROM foo f
LEFT JOIN bar b
ON f.id = b.id
WHERE f.id = $1
GROUP BY f.id
But it looks like the ARRAY_AGG function signature is incorrect (function array_agg(character varying, character varying) does not exist). Is there a way to do this without making a separate query to bar?

It looks like what you want is for bars to be an array of bar objects to match your Go types. To do that, use JSON_AGG rather than ARRAY_AGG, since ARRAY_AGG only accepts a single argument and would here produce a flat array of strings rather than an array of objects. JSON_AGG, on the other hand, creates an array of JSON objects, and you can combine it with JSON_BUILD_OBJECT to select only the columns you want.
Here's an example:
SELECT f.*,
JSON_AGG(JSON_BUILD_OBJECT('stuff', b.stuff, 'other', b.other)) AS bars
FROM foo f
LEFT JOIN bar b
ON f.id = b.id
WHERE f.id = $1
GROUP BY f.id
Then you'll have to handle unmarshaling the JSON on the Go side, for example along the lines of the sketch below, but other than that you should be good to go.
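Here's a minimal sketch using only database/sql and encoding/json; the fetchFoo helper, the query variable (holding the query above), and the error handling are illustrative, not part of the original question:

func fetchFoo(db *sql.DB, query string, id int) (*Foo, error) {
    var f Foo
    var barsJSON []byte
    // Scan the aggregated bars column into raw bytes first.
    if err := db.QueryRow(query, id).Scan(&f.ID, &barsJSON); err != nil {
        return nil, err
    }
    // encoding/json matches the lowercase keys produced by JSON_BUILD_OBJECT
    // ('stuff', 'other') to the exported Stuff/Other fields case-insensitively,
    // so no json struct tags are strictly required here.
    if err := json.Unmarshal(barsJSON, &f.Bars); err != nil {
        return nil, err
    }
    return &f, nil
}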
Note also that Go ignores unknown keys when unmarshaling JSON into a struct, so you can simplify the query by just selecting all fields of the bar table if you want. Like so:
SELECT f.*,
JSON_AGG(TO_JSON(b.*)) AS bars -- or JSON_AGG(b.*)
FROM foo f
LEFT JOIN bar b
ON f.id = b.id
WHERE f.id = $1
GROUP BY f.id
If you want to also handle cases where there are no entries in bar for a record in foo, you can use:
SELECT f.*,
COALESCE(
JSON_AGG(TO_JSON(b.*)) FILTER (WHERE b.id IS NOT NULL),
'[]'::JSON
) AS bars
FROM foo f
LEFT JOIN bar b
ON f.id = b.id
WHERE f.id = $1
GROUP BY f.id
Without the FILTER, you'd get [null] for rows in foo that have no corresponding rows in bar; the FILTER yields plain NULL instead, which COALESCE then converts to an empty JSON array.

As you already know, array_agg takes a single argument and returns an array of the argument's type. So if you want all of a row's columns to be included in the array's elements, you can just pass in the row reference directly, e.g.:
SELECT array_agg(b) FROM bar b
If, however, you only want to include specific columns in the array's elements, you can use the ROW constructor, e.g.:
SELECT array_agg(ROW(b.stuff, b.other)) FROM bar b
Go's standard library only supports scanning scalar values out of the box. For scanning more complex values, like arbitrary objects and arrays, you either have to look for third-party solutions or implement your own sql.Scanner.
To implement your own sql.Scanner and properly parse a postgres array of rows, you first need to know what format postgres uses to output the value. You can find this out by running some queries directly in psql:
-- simple values
SELECT ARRAY[ROW(123,'foo'),ROW(456,'bar')];
-- output: {"(123,foo)","(456,bar)"}
-- not so simple values
SELECT ARRAY[ROW(1,'a b'),ROW(2,'a,b'),ROW(3,'a",b'),ROW(4,'(a,b)'),ROW(5,'"','""')];
-- output: {"(1,\"a b\")","(2,\"a,b\")","(3,\"a\"\",b\")","(4,\"(a,b)\")","(5,\"\"\"\",\"\"\"\"\"\")"}
As you can see, this can get pretty hairy, but it's nevertheless parseable. The syntax looks to be something like this:
{"(column_value[, ...])"[, ...]}
where column_value is either an unquoted value or a quoted value with escaped double quotes. Such a quoted value can itself contain escaped double quotes, but only in pairs, i.e. a single escaped double quote will not occur inside the column_value. So a rough and incomplete implementation of the parser might look something like this:
NOTE: there may be other syntax rules that I don't know of that need to be taken into consideration during parsing. In addition, the code below doesn't handle NULLs properly.
func parseRowArray(a []byte) (out [][]string) {
    a = a[1 : len(a)-1] // drop surrounding curlies
    for i := 0; i < len(a); i++ {
        if a[i] == '"' { // start of row element
            row := []string{}
            i += 2 // skip over current '"' and the following '('
            for j := i; j < len(a); j++ {
                if a[j] == '\\' && a[j+1] == '"' { // start of quoted column value
                    var col string // column value
                    j += 2 // skip over current '\' and following '"'
                    for k := j; k < len(a); k++ {
                        if a[k] == '\\' && a[k+1] == '"' { // end of quoted column, maybe
                            if a[k+2] == '\\' && a[k+3] == '"' { // nope, just an escaped quote
                                col += string(a[j:k]) + `"`
                                k += 3    // skip over `\"\` (the k++ in the for statement will skip over the `"`)
                                j = k + 1 // skip over `\"\"`
                                continue  // stay in the k loop
                            } else { // yes, end of quoted column
                                col += string(a[j:k])
                                row = append(row, col)
                                j = k + 2 // skip over `\"`
                                break     // go back to the j loop
                            }
                        }
                    }
                    if a[j] == ')' { // row end
                        out = append(out, row)
                        i = j + 1 // advance i to j's position and skip the potential ','
                        break     // go back to the i loop
                    }
                } else { // assume non-quoted column value
                    for k := j; k < len(a); k++ {
                        if a[k] == ',' || a[k] == ')' { // column value end
                            col := string(a[j:k])
                            row = append(row, col)
                            j = k // advance j to k's position
                            break // go back to the j loop
                        }
                    }
                    if a[j] == ')' { // row end
                        out = append(out, row)
                        i = j + 1 // advance i to j's position and skip the potential ','
                        break     // go back to the i loop
                    }
                }
            }
        }
    }
    return out
}
Try it on playground.
With something like that you can then implement an sql.Scanner for your Go slice of bars.
type BarList []*Bar

func (ls *BarList) Scan(src interface{}) error {
    switch data := src.(type) {
    case []byte:
        a := parseRowArray(data)
        res := make(BarList, len(a))
        for i := 0; i < len(a); i++ {
            bar := new(Bar)
            // Here I'm assuming the parser produced a slice of at least two
            // strings; if there are cases where this may not be true, you
            // should add proper length checks to avoid unnecessary panics.
            bar.Stuff = a[i][0]
            bar.Other = a[i][1]
            res[i] = bar
        }
        *ls = res
    }
    return nil
}
Now if you change the type of the Bars field in the Foo type from []*Bar to BarList, you'll be able to pass a pointer to the field directly to a (*sql.Row|*sql.Rows).Scan call:
rows.Scan(&f.Bars)
If you don't want to change the field's type you can still make it work by converting the pointer just when it's being passed to the Scan method:
rows.Scan((*BarList)(&f.Bars))
JSON
An sql.Scanner implementation for the JSON solution suggested by Henry Woody would look something like this:
type BarList []*Bar

func (ls *BarList) Scan(src interface{}) error {
    if b, ok := src.([]byte); ok {
        return json.Unmarshal(b, ls)
    }
    return nil
}

Related

create an empty table which gets the column names from another table

We have two tables, 't' and 's'.
These tables may or may not have data, but the schema of both t and s will always be the same.
Tables:
q)t:([] id:("ab";"cd";"ef";"gh";"ij"); refid:("";"ab";"";"ef";""); typ:`BUY`SELL`BUY`SELL`BUY)
q)s:t / For example purpose
Now, in my function, I want to concatenate the output of these two tables and return it, for which I'm using a variable named res.
The problem is that initially res is empty and not of type 98h, hence if I try to join t or s to res it fails (which is obvious).
q){$[not ((count res) ~ 0); res: res,t ; res:t ]; $[not ((count res) ~ 0); res: res,s ; res:s ]; :res}[]
'res
One solution is to create an empty schema for res (same as the t and s tables), and it works perfectly.
q){res:([] id:(); refid:(); typ:`$());$[not ((count res) ~ 0); res: res,t ; res:t ]; $[not ((count res) ~ 0); res: res,s ; res:s ]; :res}[]
But is there a way to avoid creating the empty schema for res with all the columns beforehand, and instead assign res as a null (empty) table that takes on the schema of t or s when one of them is joined to it?
Your example isn't entirely clear - you mention res already exists in a comment, but then state that "initially res is empty and not of type 98h".
If you only want to assign res to be an empty table if it doesn't already exist, you can use a system command to check if res has already been defined in the root namespace, like the below:
f:{
  if[not `res in system"a";res:()];
  $[count res;res,:t;res:t];
  $[count res;res,:s;res:s];
  :res;
  };
Alternatively, assign res to be "0 take" (0#) of the table in question, which yields an empty table with the same schema:
q)t:([] id:("ab";"cd";"ef";"gh";"ij"); refid:("";"ab";"";"ef";""); typ:`BUY`SELL`BUY`SELL`BUY)
q)res:0#t
q)meta res
c    | t f a
-----| -----
id   |
refid|
typ  | s
So in this case you can do the following:
q){[] res:0#t; res,:t; res,:s; :res}[]

How can I use a loop or IF in CASE WHEN statements in PostgreSQL

select distinct
    x.vrgid, x.Weights, x.geom
from (
    select
        -- postcodes.id as postcodeid, fishnet.geom as geom
        fishnet.gid as vrgid,
        fishnet.geom as geom,
        CASE
            WHEN st_intersects(centroids.geom, urban.geom) and counts.nonurbancells != 0
                THEN 0.95 :: numeric / counts.urbancells      -- 1st case
            WHEN st_intersects(centroids.geom, urban.geom) and counts.nonurbancells = 0
                THEN 1.00 :: numeric / counts.urbancells      -- 2nd case
            WHEN Not st_intersects(centroids.geom, urban.geom) = fishnet.gid :: boolean and counts.nonurbancells != 0
                THEN 0.05 :: numeric / counts.nonurbancells   -- 3rd case
            ELSE 0
        END AS Weights
    from vrg.urban_nonurban_count_new as counts
    inner join vrg.gfk_2016_id_5_digit_pcd_areas2013_projected as postcodes
        on postcodes.id = counts.postid
    right outer join vrg.rdsid_86_quadgrid_centroids as centroids
        on st_contains(postcodes.geom, centroids.geom)
    left outer join vrg.rdsid86_quadgrid as fishnet
        on fishnet.gid = centroids.gid
    right outer join vrg.rdsid86_katrisk_poly_projected as urban
        on st_intersects(urban.geom, fishnet.geom)
    where postcodes.id = '42395'
) as x
I have several CASE branches in my PL/pgSQL function, with which I get duplicate rows (duplicate vrgids) in the query result.
The row with id 7192, for example, is produced by the 1st CASE branch (refer to the query).
What I would like to do is use a loop or an IF condition to remove the vrgid and its corresponding weight from the result once the 1st CASE branch is true, so that I won't get duplicate records. How is this possible?
Maybe I should use a condition in the 3rd branch to filter out the vrgids already produced by the 1st case.

Why is my LINQ query not getting executed

I have this LINQ query:
var moreThen1dayLeavefed = (from LApp in db.LeaveApplications
                            join Emp in db.Employees
                                on LApp.Employee equals Convert.ToInt32(Emp.EmployeeNumber)
                            join LBrk in db.LeaveBreakups
                                on LApp.Id equals LBrk.LeaveApplication
                            where Emp.Team == 8 && LBrk.StartDate.Year == 2015 && LBrk.StartDate.Month == 5
                            select new
                            {
                                StartDate = LBrk.StartDate.Day,
                                EndDate = LBrk.EndDate.Day,
                                diff = (DbFunctions.DiffDays(LBrk.StartDate, LBrk.EndDate) + 1)
                            }).ToList();
It gives the error:
LINQ to Entities does not recognize the method 'Int32 ToInt32(System.String)' method, and this method cannot be translated into a store expression.
on line 3, i.e.
on LApp.Employee equals Convert.ToInt32(Emp.EmployeeNumber)
as I am converting the string to an int during the inner join.
Just saw your related question. Your EmployeeNumber field seems to be filled with a fixed-size (5), zero-left-padded string representation of a number. If that's true, you can use the trick from "how to sort varchar column containing numeric values with linq lambdas to Entity" to solve the issue.
Just replace
on LApp.Employee equals Convert.ToInt32(Emp.EmployeeNumber)
with
on DbFunctions.Right("0000" + LApp.Employee.ToString(), 5) equals Emp.EmployeeNumber

Generate series for various mixes of numbers and letters

I am using this syntax:
generate_series(1, COALESCE((string_to_array(table.id_number, '-')) [2] :: INT, 1)) AS n (numbers)
to generate IDs for elements that have an ID like 32.22.1-4, producing 4 rows with IDs 32.22.1, 32.22.2, 32.22.3 and 32.22.4. How can I change it to also accept letters?
So for 32.22.a-c there would be:
32.22.a, 32.22.b, 32.22.c
And for 32.22.d1-d4 there would be
32.22.d1, 32.22.d2, 32.22.d3, 32.22.d4
EDIT:
The whole code looks like:
INSERT INTO ...
(
SELECT
...
FROM table
CROSS JOIN LATERAL
generate_series(1, COALESCE((string_to_array(table.id_number, '-')) [2] :: INT, 1)) AS n (numbers)
WHERE table.id_number LIKE ...
);
WITH t(id_number) AS ( VALUES
    ('32.33.a1-a5'::TEXT),
    ('32.34.a-c'::TEXT),
    ('32.35.b-e'::TEXT)
), stats AS (
    SELECT
        chars,
        chars[1] pattern, -- pattern use
        CASE
            WHEN (ascii(chars[3]) - ascii(chars[2])) = 0
                THEN FALSE
            ELSE TRUE
        END char_pattern, -- check if series of chars
        CASE
            WHEN (ascii(chars[3]) - ascii(chars[2])) = 0
                THEN right(chars[3], 1)::INTEGER
            ELSE (ascii(chars[3]) + 1 - ascii(chars[2]))::INTEGER
        END i -- number of series
    FROM t,
        regexp_matches(t.id_number, '(.*\.)(\w*)-(\w*)$') chars
)
SELECT
    CASE WHEN char_pattern
        THEN pattern || chr(ascii(chars[2]) - 1 + s)
        ELSE pattern || left(chars[2], 1) || s::TEXT
    END output
FROM stats, generate_series(1, stats.i) s;
Result:
output
---------
32.33.a1
32.33.a2
32.33.a3
32.33.a4
32.33.a5
32.34.a
32.34.b
32.34.c
32.35.b
32.35.c
32.35.d
32.35.e
(12 rows)

Entity SQL concatenation

I have a select in Entity SQL that returns simple strings, which I need to concatenate into one string.
select (p.X + p.Y) from ExampleEntities.ExampleTable as p
group by p.X, p.Y
For example, it returns 3 strings, and I need to concatenate them into 1 string.
I'm not sure if you want to concatenate all rows or per row, but this is a per-row solution:
from p in ExampleEntities.ExampleTable
select string.Concat(p.X, p.Y, p.Z)
If you want a single result you'll need the following:
var temp = (from p in ExampleEntities.ExampleTable
            select string.Concat(p.X, p.Y, p.Z)).ToList();
string result = temp.Aggregate((current, next) => current + next);