Explanation for function composition and infix function application (PureScript)

This code works
findName :: String -> String -> AddressBook -> Boolean
findName fname lname = not <<< null <<< filter findN
  where
    findN :: Entry -> Boolean
    findN entry = entry.firstName == fname && entry.lastName == lname
but this does not
findName fname lname book = not <<< null <<< filter findN book
Again this code works
findName fname lname book = not $ null $ filter findN book
and this does not
findName fname lname = not null $ filter findN

In short, the different examples are equivalent to different placements of parentheses, so the code is evaluated differently.
f <<< g, where f and g are functions, is equivalent to \x -> f (g x), whereas f x $ g y is equivalent to (f x) (g y).
Whenever you have an infix operator like <<< and no other infix operators, function application binds tighter than the operator, so the expressions to its left and right are grouped first. Your first example is therefore parsed as
findName fname lname = ((not) <<< (null) <<< (filter findN)),
which with the book parameter made explicit is
findName fname lname book = ((not) <<< (null) <<< (filter findN)) book,
whereas your second example is parsed as
findName fname lname book = (not) <<< (null) <<< (filter findN book).
filter findN book yields a list, but <<< requires function arguments.
For your third and fourth examples the problem is similar: with the $ you forgot between not and null restored, and the book parameter made explicit, the 4th example would be
findName fname lname book = (not $ null $ filter findN) book.
not $ null $ ... requires ... to be an AddressBook, but filter findN is a function, not an AddressBook.

Related

Passing list of values to SELECT PostgreSQL query in Haskell

I'm studying PostgreSQL with Haskell using this library: https://hackage.haskell.org/package/postgresql-simple-0.4.10.0/docs/Database-PostgreSQL-Simple.html#v:query
While I could select a user like this:
(query_ conn "SELECT * FROM users WHERE NAME == john" :: IO [Only Int]) >>= mapM_ print
using query_:
query_ :: FromRow r => Connection -> Query -> IO [r]
I think I should use query:
query :: (ToRow q, FromRow r) => Connection -> Query -> q -> IO [r]
to pass a list of values. However, how do I pass this list?
For example, for INSERT, I was able to do this:
(execute conn "INSERT INTO users (NAME, PASSWORD) VALUES (?,?)") (["john", "123456"]::[String]) >>= print
but what is the equivalent for SELECT?
I'm not sure I understand your question, since you ask about lists and I don't see how they enter into the picture. But the parameterized version of your select query is this:
query conn "SELECT * FROM users WHERE NAME = ?" (Only ("john" :: String))

Postgres array of Golang structs

I have the following Go struct:
type Bar struct {
    Stuff string `db:"stuff"`
    Other string `db:"other"`
}

type Foo struct {
    ID   int    `db:"id"`
    Bars []*Bar `db:"bars"`
}
So Foo contains a slice of Bar pointers. I also have the following tables in Postgres:
CREATE TABLE foo (
    id INT
);

CREATE TABLE bar (
    id INT,
    stuff VARCHAR,
    other VARCHAR,
    trash VARCHAR
);
I want to LEFT JOIN on table bar and aggregate it as an array to be stored in the struct Foo. I've tried:
SELECT f.*,
       ARRAY_AGG(b.stuff, b.other) AS bars
FROM foo f
LEFT JOIN bar b
    ON f.id = b.id
WHERE f.id = $1
GROUP BY f.id
But it looks like the ARRAY_AGG function signature is incorrect (function array_agg(character varying, character varying) does not exist). Is there a way to do this without making a separate query to bar?
It looks like what you want is for bars to be an array of bar objects to match your Go types. To do this, you should use JSON_AGG rather than ARRAY_AGG since ARRAY_AGG only works on single columns and would produce in this case an array of type text (TEXT[]). JSON_AGG, on the other hand, creates an array of json objects. You can combine this with JSON_BUILD_OBJECT to select only the columns you want.
Here's an example:
SELECT f.*,
       JSON_AGG(JSON_BUILD_OBJECT('stuff', b.stuff, 'other', b.other)) AS bars
FROM foo f
LEFT JOIN bar b
    ON f.id = b.id
WHERE f.id = $1
GROUP BY f.id
Then you'll have to handle unmarshaling the json in Go, but other than that you should be good to go.
Note also that Go will ignore unused keys for you when unmarshaling json to a struct, so you can simplify the query by just selecting all fields on the bar table if you want. Like so:
SELECT f.*,
       JSON_AGG(TO_JSON(b.*)) AS bars -- or JSON_AGG(b.*)
FROM foo f
LEFT JOIN bar b
    ON f.id = b.id
WHERE f.id = $1
GROUP BY f.id
If you want to also handle cases where there are no entries in bar for a record in foo, you can use:
SELECT f.*,
       COALESCE(
           JSON_AGG(TO_JSON(b.*)) FILTER (WHERE b.id IS NOT NULL),
           '[]'::JSON
       ) AS bars
FROM foo f
LEFT JOIN bar b
    ON f.id = b.id
WHERE f.id = $1
GROUP BY f.id
Without the FILTER, you'd get [null] for rows in foo that have no corresponding rows in bar; the FILTER turns that into NULL, and COALESCE then converts the NULL into an empty JSON array.
As you already know array_agg takes a single argument and returns an array of the type of the argument. So, if you want all of a row's columns to be included in the array's elements you can just pass in the row reference directly, e.g.:
SELECT array_agg(b) FROM b
If, however, you only want to include specific columns in the array's elements you can use the ROW constructor, e.g.:
SELECT array_agg(ROW(b.stuff, b.other)) FROM b
Go's standard library provides out-of-the-box support for scanning only scalar values. For scanning more complex values like arbitrary objects and arrays, you have to either look for third-party solutions or implement your own sql.Scanner.
To be able to implement your own sql.Scanner and properly parse a postgres array of rows, you first need to know the format postgres uses to output the value. You can find this out by running some queries directly in psql:
-- simple values
SELECT ARRAY[ROW(123,'foo'),ROW(456,'bar')];
-- output: {"(123,foo)","(456,bar)"}
-- not so simple values
SELECT ARRAY[ROW(1,'a b'),ROW(2,'a,b'),ROW(3,'a",b'),ROW(4,'(a,b)'),ROW(5,'"','""')];
-- output: {"(1,\"a b\")","(2,\"a,b\")","(3,\"a\"\",b\")","(4,\"(a,b)\")","(5,\"\"\"\",\"\"\"\"\"\")"}
As you can see this can get pretty hairy, but it's nevertheless parseable. The syntax looks to be something like this:
{"(column_value[, ...])"[, ...]}
where column_value is either an unquoted value, or a quoted value with escaped double quotes, and such a quoted value itself can contain escaped double quotes but only in twos, i.e. a single escaped double quote will not occur inside the column_value. So a rough and incomplete implementation of the parser might look something like this:
NOTE: there may be other syntax rules that I do not know of that need to be taken into consideration during parsing. In addition, the code below doesn't handle NULLs properly.
func parseRowArray(a []byte) (out [][]string) {
    a = a[1 : len(a)-1] // drop surrounding curlies
    for i := 0; i < len(a); i++ {
        if a[i] == '"' { // start of row element
            row := []string{}
            i += 2 // skip over current '"' and the following '('
            for j := i; j < len(a); j++ {
                if a[j] == '\\' && a[j+1] == '"' { // start of quoted column value
                    var col string // column value
                    j += 2 // skip over current '\' and following '"'
                    for k := j; k < len(a); k++ {
                        if a[k] == '\\' && a[k+1] == '"' { // end of quoted column, maybe
                            if a[k+2] == '\\' && a[k+3] == '"' { // nope, just escaped quote
                                col += string(a[j:k]) + `"`
                                k += 3 // skip over `\"\` (the k++ in the for statement will skip over the `"`)
                                j = k + 1 // skip over `\"\"`
                                continue // go to k loop
                            } else { // yes, end of quoted column
                                col += string(a[j:k])
                                row = append(row, col)
                                j = k + 2 // skip over `\"`
                                break // go back to j loop
                            }
                        }
                    }
                    if a[j] == ')' { // row end
                        out = append(out, row)
                        i = j + 1 // advance i to j's position and skip the potential ','
                        break // go back to i loop
                    }
                } else { // assume non quoted column value
                    for k := j; k < len(a); k++ {
                        if a[k] == ',' || a[k] == ')' { // column value end
                            col := string(a[j:k])
                            row = append(row, col)
                            j = k // advance j to k's position
                            break // go back to j loop
                        }
                    }
                    if a[j] == ')' { // row end
                        out = append(out, row)
                        i = j + 1 // advance i to j's position and skip the potential ','
                        break // go back to i loop
                    }
                }
            }
        }
    }
    return out
}
Try it on playground.
With something like that you can then implement an sql.Scanner for your Go slice of bars.
type BarList []*Bar

func (ls *BarList) Scan(src interface{}) error {
    switch data := src.(type) {
    case []byte:
        a := parseRowArray(data)
        res := make(BarList, len(a))
        for i := 0; i < len(a); i++ {
            bar := new(Bar)
            // Here I'm assuming the parser produced a slice of at least two
            // strings; if there are cases where this may not be true you
            // should add proper length checks to avoid unnecessary panics.
            bar.Stuff = a[i][0]
            bar.Other = a[i][1]
            res[i] = bar
        }
        *ls = res
    }
    return nil
}
Now if you change the type of the Bars field in the Foo type from []*Bar to BarList you'll be able to directly pass in a pointer of the field to a (*sql.Row|*sql.Rows).Scan call:
rows.Scan(&f.Bars)
If you don't want to change the field's type you can still make it work by converting the pointer just when it's being passed to the Scan method:
rows.Scan((*BarList)(&f.Bars))
JSON
An sql.Scanner implementation for the json solution suggested by Henry Woody would look something like this:
type BarList []*Bar

func (ls *BarList) Scan(src interface{}) error {
    if b, ok := src.([]byte); ok {
        return json.Unmarshal(b, ls)
    }
    return nil
}

Print all values from column using custom predicate

How can I print the values
'aa, bb, cc, dd, ee'
from a DB column as pairs, like this:
'aa-bb, aa-cc, aa-dd, aa-ee', etc.?
NOTE: if I already have, for example, the value 'aa-bb', then 'bb-aa' should be skipped
My T-SQL example doesn't work:
select l.SURNAME + '-' + l1.SURNAME
from LECTURERS l cross join LECTURERS l1
where l.CITY = 'Kyiv'
  and l.SURNAME <> l1.SURNAME
  and LEFT(l1.SURNAME, charindex('-', l1.SURNAME)) <> l.SURNAME
It prints 'aa-bb, aa-cc, aa-dd, aa-ee', but how can I exclude the reversed pairs, i.e. treat 'aa-bb' and 'bb-aa' as the same?

haskell postgresql-simple incompatible type _int8 and Int64 (and Integer)

The erroneous function below is part of a program called subdivide, working with PostGIS geospatial intersections on the server side and processing the returned array of Int64 on the client side.
It is built and run under Stack, resolving to Nightly 2016-08-02 and explicitly specifying architecture x86_64.
I get the following runtime error executing the Postgres query defined as "intersectionsSql" (see the comment RUNTIME ERROR HERE):
"Created table: server : [Only {fromOnly = \"PostgreSQL 9.6beta2 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit\"}] quadrant: BOX3D(-180.0 90.0, -90.0 45.0)"
subdivide: Incompatible {errSQLType = "_int8", errSQLTableOid = Nothing, errSQLField = "object_ids", errHaskellType = "Int64", errMessage = "types incompatible"}
I have tried Integer, Int64 and Int, all with the same result, which is counter-intuitive as those Haskell types should all be compatible with _int8 according to the PostgreSQL-simple instance documentation:
https://hackage.haskell.org/package/postgresql-simple-0.5.0.0/candidate/docs/Database-PostgreSQL-Simple-FromField.html
The SQL query should return a single row of postgres bigint[], which I have confirmed via PGAdmin.
Any ideas?
Also, any comments on how I have written the code are welcome - it's over a decade since I last worked with GHC, and times have changed.
Thanks for your consideration.
Mike Thomas
accumulateIntersections :: Identifier -> Identifier -> ConnectInfo -> ((Double,Double),(Double,Double)) -> IO ()
accumulateIntersections sourceTable accumulationTable connectionInfo q =
  let
    theBox = makeBox3D (fst (fst q)) (snd (fst q)) (fst (snd q)) (snd (snd q))
    theValue = (Only theBox)
    dropTable = [sql| DROP TABLE IF EXISTS ? CASCADE |]
    createTable = [sql| CREATE TABLE ? ( quadrant_id BIGSERIAL, area_metres_squared FLOAT8, shape GEOMETRY, object_ids BIGINT[] ) |]
    aggregateSql = [sql| DROP AGGREGATE IF EXISTS _array_agg (anyarray);
                         CREATE AGGREGATE _array_agg(anyarray) (SFUNC = array_cat, STYPE = anyarray);
                   |]
    intersectionsSql = [sql| SELECT _array_agg (object_ids) object_ids
                             FROM ?
                             WHERE ST_Intersects(ST_SetSRID ( ?::box3d, 4326 ), shape)
                       |]
    insertIntersections = [sql| INSERT INTO ? (shape, object_ids)
                                VALUES ( ST_SetSRID ( ?::box3d, 4326 )
                                       , ? ) |]
  in do
    connection <- connect connectionInfo
    execute_ connection aggregateSql
    postgresVersion <- (query_ connection "SELECT version()" :: IO [Only String])
    i0 <- execute connection dropTable (Only accumulationTable)
    i1 <- execute connection createTable (Only accumulationTable)
    print ("Created table: server : " ++ (show postgresVersion) ++ " quadrant: " ++ theBox)
    is :: [Only Int64] <- query connection intersectionsSql (sourceTable, theBox) -- RUNTIME ERROR HERE
    print ("Intersections done.")
    ids :: [Int64] <- forM is (\(Only id) -> return id)
    print ("Ids done.")
    close connection
    return ()
See the above comment relayed from LP Smith, whom I contacted when no answers were forthcoming here. It resolves my issue.
The key was to recognize that _int8 represents an array of 8 byte integers, rather than thinking, as I had done, that it was an internal representation for a single 8 byte integer. Leon's suggested change was to substitute "[Only (Vector Int64)]" for "[Only Int64]" in the line marked above as the point of the runtime error.
Thank you Leon.

F# Navigate object graph to return specific Nodes

I'm trying to build a list of DataTables based on DataRelations in a DataSet, where the tables returned are only those included by their relationships with each other, knowing each end of the chain in advance. My DataSet has 7 tables, with relationships that look like this:
Table1 -> Table2 -> Table3 -> Table4 -> Table5
                           -> Table6 -> Table7
So given Table1 and Table7, I want to return Tables 1, 2, 3, 6, 7
My code so far traverses all the relations and returns all the tables, so in the example it returns Table4 and Table5 as well. I've passed in the first and last tables as arguments, but I know I'm not using the last one yet; I'm still trying to think how to go about it, and that is where I need the help.
type DataItem =
    | T of DataTable
    | R of DataRelation list

let GetRelatedTables (first, last) =
    let rec flat_rec dt acc =
        match dt with
        | T(dt) ->
            let rels = [ for r in dt.ParentRelations do yield r ]
            dt :: flat_rec (R(rels)) acc
        | R(h::t) ->
            flat_rec (R(t)) acc @ flat_rec (T(h.ParentTable)) acc
        | R([]) -> []
    flat_rec first []
I think something like this would do it (although I haven't tested it). It returns DataTable list option because, in theory, a path between two tables might not exist.
let findPathBetweenTables (table1 : DataTable) (table2 : DataTable) =
    let visited = System.Collections.Generic.HashSet() // check for circular references
    let rec search path =
        let table = List.head path
        if not (visited.Add(table)) then None
        elif table = table2 then Some(List.rev path)
        else
            table.ChildRelations
            |> Seq.cast<DataRelation>
            |> Seq.tryPick (fun rel -> search (rel.ChildTable::path))
    search [table1]