function st_distancespheroid(geometry, geometry) does not exist - postgresql

I have installed PostGIS, but I still get this error:
function st_distancespheroid(geometry, geometry) does not exist
WITH data AS (
    SELECT a.*,
        CAST(ST_DistanceSpheroid(geometry(location), st_geomfromtext('POINT(' || $7::decimal || ' ' || $8::decimal || ')', 4326)) as numeric) / 1000 AS distance
    FROM agent AS a
    WHERE (a.agent_code ILIKE $1
            OR a.name ILIKE $1
            OR a.phone LIKE $1)
        AND a.sub_district_name LIKE ANY(string_to_array($4, ','))
        AND a.agent_status_id LIKE ANY(string_to_array($5, ','))
        AND CASE WHEN $6 = 'FAVORITE' THEN
                a.is_subscription = true
            WHEN $6 = 'REGULER' THEN
                a.is_subscription != true
            ELSE
                a.location LIKE '%%'
            END
    ORDER BY distance ASC
),
data_counter AS (
    SELECT COUNT(agent_id) AS __total__
    FROM data
)
SELECT *
FROM data, data_counter
LIMIT $2
OFFSET $3
when I run it from my Go repo.

How to install PostGIS
After installation you have to create the extension in the database. For instance, to install PostGIS 3.0 in a Debian distribution running PostgreSQL 13:
apt-get install postgresql-13-postgis-3
After that, create the extension in the database:
CREATE EXTENSION postgis;
Now you'll be able to use the spatial functions.
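A quick way to confirm that the extension is active in the current database is to call one of the PostGIS version functions, for instance:
SELECT postgis_full_version();
If this returns a version string instead of an error, the spatial functions are ready to use.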
How to calculate distances over a spheroid / sphere
The function ST_DistanceSpheroid expects three parameters, namely two geometries and the spheroid. In your code you're calling the function with only two geometries, hence the error function st_distancespheroid(geometry, geometry) does not exist. This is how it should be called (adapt it to the spheroid of your choice):
SELECT
ST_DistanceSpheroid(geom1,geom2,'SPHEROID["WGS 84",6378137,298.257223563]')
FROM t;
You can also tell ST_Distance to compute the distances using the spheroid if you cast the geometry parameters to geography:
SELECT ST_Distance(geom1::geography,geom2::geography,true) FROM t;
Another option - less accurate - is to use ST_DistanceSphere:
SELECT ST_DistanceSphere(geom1,geom2) FROM t;
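Applied to the query from the question, the first (spheroid) form would make the distance expression look roughly like this (a sketch; the column and parameter placeholders are the ones from the question, with WGS 84 chosen as the spheroid):
CAST(ST_DistanceSpheroid(
    geometry(location),
    st_geomfromtext('POINT(' || $7::decimal || ' ' || $8::decimal || ')', 4326),
    'SPHEROID["WGS 84",6378137,298.257223563]'
) as numeric) / 1000 AS distance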

Related

How to use st_Line_Locate_Point() with a MULTILINESTRING conversion in PostGIS?

I am trying to get the index position of a POINT in a MULTILINESTRING.
Here is the whole query I'm stuck with:
SELECT req.id, (dp).geom, netgeo_point_tech.id, ST_Line_Locate_Point(st_lineMerge(geom_cable), (dp).geom)
FROM (SELECT id, ST_DumpPoints(geom) as dp, geom as geom_cable FROM netgeo_cable_test ) as req
JOIN netgeo_point_tech ON ST_dwithin(netgeo_point_tech.geom, (dp).geom, 1)
ORDER BY req.id, (dp).path[1] ASC
The error I get is: line_locate_point: 1st arg isn't a line.
The error is due to the st_lineMerge() function returning MULTILINESTRING as well as LINESTRING.
I don't get this: st_lineMerge() is supposed to return only LINESTRING.
When I just try a simple query like this:
select st_astext(st_linemerge(geom)) from netgeo_cable_test
The output is:
)
I want to learn from this, so, if possible, explain to me what I'm doing wrong here, or if my approach is lacking insight.
Thanks to @JGH for the suggestion to use ST_Dump, I came up with this function:
create or replace function MultiLineLocatePoint(line geometry, point geometry) returns numeric as $$
  select (base + extra) / ST_Length(line)
  from (
    select
      -- total length of the parts that come before the current part
      sum(ST_Length(l.geom)) over (order by l.path) - ST_Length(l.geom) base,
      -- position of the point along the current part, expressed as a length
      ST_LineLocatePoint(l.geom, point) * ST_Length(l.geom) extra,
      -- distance from the current part to the point, used to pick the closest part
      ST_Distance(l.geom, point) dist
    from ST_Dump(line) l
  ) points
  order by dist
  limit 1;
$$ language SQL;
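As a rough usage sketch, the helper can then replace the ST_LineMerge/ST_Line_Locate_Point call in the original query (table and column names are the ones from the question):
SELECT req.id, (dp).geom, netgeo_point_tech.id,
       MultiLineLocatePoint(req.geom_cable, (dp).geom)
FROM (SELECT id, ST_DumpPoints(geom) AS dp, geom AS geom_cable FROM netgeo_cable_test) AS req
JOIN netgeo_point_tech ON ST_DWithin(netgeo_point_tech.geom, (dp).geom, 1)
ORDER BY req.id, (dp).path[1];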

How to translate the PostgreSQL array_agg function to SQLite?

This query works in PostgreSQL:
Select ot.MCode, array_to_string(array_agg(tk1.TName || ',' || ot.TTime), ' - ') as oujyu_name_list
From TR_A ot
inner join MS_B tk1 on ot.Code = tk1.Code
Where ot.Code in (Select Code From TR_C)
Group by ot.MCode
but it does not work in SQLite, because SQLite does not have the array_agg() function. How can this query be converted to SQLite?
For this query, you can use group_concat, which directly returns a string:
SELECT ..., group_concat(tk1.TName || ',' || ot.TTime, ' - ')
FROM ...
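Applied to the original query, a minimal sketch (same table and column names as in the question) would be:
SELECT ot.MCode,
       group_concat(tk1.TName || ',' || ot.TTime, ' - ') AS oujyu_name_list
FROM TR_A AS ot
INNER JOIN MS_B AS tk1 ON ot.Code = tk1.Code
WHERE ot.Code IN (SELECT Code FROM TR_C)
GROUP BY ot.MCode;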
SQLite now has the JSON1 extension (which ships in the default distribution) that can group and create arrays of JSON objects. For example,
select
ot.MCode,
json_group_array(json_object('tname', tk1.TName, 'ttime', ot.TTime)) as oujyu_name_list
from TR_A as ot
inner join MS_B as tk1
on (ot.Code = tk1.Code)
where ot.Code in (select code from TR_C)
group by ot.MCode;
The second column will be formatted as a JSON array, e.g. [{"tname":...,"ttime":...},...].
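If the individual values are needed again on the SQLite side, the same JSON1 extension can unpack the array, for example with json_each and json_extract (a sketch that simply wraps the query above in a subquery):
SELECT t.MCode,
       json_extract(j.value, '$.tname') AS tname,
       json_extract(j.value, '$.ttime') AS ttime
FROM (
    SELECT ot.MCode,
           json_group_array(json_object('tname', tk1.TName, 'ttime', ot.TTime)) AS oujyu_name_list
    FROM TR_A AS ot
    INNER JOIN MS_B AS tk1 ON ot.Code = tk1.Code
    WHERE ot.Code IN (SELECT Code FROM TR_C)
    GROUP BY ot.MCode
) AS t, json_each(t.oujyu_name_list) AS j;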

Postgres CTE : type character varying(255)[] in non-recursive term but type character varying[] overall

I am new to SO and Postgres, so please excuse my ignorance. I am attempting to get the cluster for a graph in Postgres using a solution similar to the one in this post: Find cluster given node in PostgreSQL
The only difference is that my id is a UUID and I am using varchar(255) to store it.
When I try to run the query I get the following error (but I am not sure how to cast):
ERROR: recursive query "search_graph" column 1 has type character varying(255)[] in non-recursive term but type character varying[] overall
SQL state: 42804
Hint: Cast the output of the non-recursive term to the correct type.
Character: 81
My code (basically the same as in the previous post):
WITH RECURSIVE search_graph(path, last_profile1, last_profile2) AS (
SELECT ARRAY[id], id, id
FROM node WHERE id = '408d6b12-d03e-42c2-a2a7-066b3c060a0b'
UNION ALL
SELECT sg.path || m.toid || m.fromid, m.fromid, m.toid
FROM search_graph sg
JOIN rel m
ON (m.fromid = sg.last_profile2 AND NOT sg.path #> ARRAY[m.toid])
OR (m.toid = sg.last_profile1 AND NOT sg.path #> ARRAY[m.fromid])
)
SELECT DISTINCT unnest(path) FROM search_graph;
Try casting the SELECT lists in the recursive and non-recursive terms to varchar. The non-recursive term builds ARRAY[id] with type varchar(255)[], while the concatenations in the recursive term yield plain varchar[]; casting both sides makes the column types agree across the UNION ALL.
WITH RECURSIVE search_graph(path, last_profile1, last_profile2) AS (
SELECT ARRAY[id]::varchar[], id::varchar, id::varchar
FROM node WHERE id = '408d6b12-d03e-42c2-a2a7-066b3c060a0b'
UNION ALL
SELECT (sg.path || m.toid || m.fromid)::varchar[], m.fromid::varchar, m.toid::varchar
FROM search_graph sg
JOIN rel m
ON (m.fromid = sg.last_profile2 AND NOT sg.path #> ARRAY[m.toid])
OR (m.toid = sg.last_profile1 AND NOT sg.path #> ARRAY[m.fromid])
)
SELECT DISTINCT unnest(path) FROM search_graph;

Dynamic pivot - how to obtain column titles parametrically?

I wish to write a query for SAP B1 (T-SQL) that will list all Income and Expense items, in total and month by month.
I have successfully written a Query using PIVOT, but I do not want the column headings to be hardcoded like: Jan-11, Feb-11, Mar-11 ... Dec-11.
Rather I want the column headings to be parametrically generated, so that if I input:
--------------------------------------
Query - Selection Criteria
--------------------------------------
Posting Date greater or equal 01.09.10
Posting Date smaller or equal 31.08.11
[OK] [Cancel]
the Query will generate the following columns:
Sep-10, Oct-10, Nov-10, ..... Aug-11
I guess DYNAMIC PIVOT can do the trick.
So, I modified an SQL statement obtained from another forum to suit my purpose, but it does not work. The error message I get is "Incorrect Syntax near 20100901".
Could anybody help me locate my error?
Note: In SAP B1, '[%1]' is an input variable
Here's my query:
/*Section 1*/
DECLARE @listCol VARCHAR(2000)
DECLARE @query VARCHAR(4000)
-------------------------------------
/*Section 2*/
SELECT @listCol =
STUFF(
( SELECT DISTINCT '],[' + CONVERT(VARCHAR, MONTH(T0.RefDate), 102)
FROM JDT1
FOR XML PATH(''))
, 1, 2, '') + ']'
------------------------------------
/*Section 3*/
SET @query = '
SELECT * FROM
(
SELECT
T0.Account,
T1.GroupMask,
T1.AcctName,
MONTH(T0.RefDate) as [Month],
(T0.Debit - T0.Credit) as [Amount]
FROM dbo.JDT1 T0
JOIN dbo.OACT T1 ON T0.Account = T1.AcctCode
WHERE
T1.GroupMask IN (4,5,6,7) AND
T0.[Refdate] >= '[%1]' AND
T0.[Refdate] <= '[%2]'
) S
PIVOT
(
Sum(Amount)
FOR [Month] IN ('+@listCol+')
) AS pvt
'
--------------------------------------------
/*Section 4*/
EXECUTE (@query)
I don't know SAP, but a couple of things spring to mind:
It looks like you want @listCol to contain a collection of numbers within square brackets, for example [07],[08],[09].... However, your code appears not to put a [ at the start of this string.
Try replacing the lines
T0.[Refdate] >= '[%1]' AND
T0.[Refdate] <= '[%2]'
with
T0.[Refdate] >= ''[%1]'' AND
T0.[Refdate] <= ''[%2]''
(I also added a space before the AND in the first of these two lines while I was editing your question.)
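For context on why the quotes are doubled: inside a T-SQL string literal, '' stands for a single quote character, so the doubled form produces a properly quoted date literal in the generated statement. A minimal sketch, using the value from the error message in place of the SAP B1 placeholder:
DECLARE @query VARCHAR(200)
SET @query = 'SELECT T0.Account FROM dbo.JDT1 T0 WHERE T0.[Refdate] >= ''20100901'''
PRINT @query  -- prints: SELECT T0.Account FROM dbo.JDT1 T0 WHERE T0.[Refdate] >= '20100901'
EXECUTE (@query)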

Loading, listing, and using R Modules and Functions in PL/R

I am having difficulty with:
Listing the R packages and functions available to PostgreSQL.
Installing a package (such as Kendall) for use with PL/R
Calling an R function within PostgreSQL
Listing Available R Packages
Q.1. How do you find out what R modules have been loaded?
SELECT * FROM r_typenames();
That shows the types that are available, but what about checking if Kendall( X, Y ) is loaded? For example, the documentation shows:
CREATE TABLE plr_modules (
modseq int4,
modsrc text
);
That seems to allow inserting records to dictate that Kendall is to be loaded, but the following code doesn't explain, syntactically, how to ensure that it gets loaded:
INSERT INTO plr_modules
VALUES (0, 'pg.test.module.load <-function(msg) {print(msg)}');
Q.2. What would the above line look like if you were trying to load Kendall?
Q.3. Is it applicable?
Installing R Packages
Using the "synaptic" package manager the following packages have been installed:
r-base
r-base-core
r-base-dev
r-base-html
r-base-latex
r-cran-acepack
r-cran-boot
r-cran-car
r-cran-chron
r-cran-cluster
r-cran-codetools
r-cran-design
r-cran-foreign
r-cran-hmisc
r-cran-kernsmooth
r-cran-lattice
r-cran-matrix
r-cran-mgcv
r-cran-nlme
r-cran-quadprog
r-cran-robustbase
r-cran-rpart
r-cran-survival
r-cran-vr
r-recommended
Q.4. How do I know if Kendall is in there?
Q.5. If it isn't, how do I find out what package it is in?
Q.6. If it isn't in a package suitable for installing with apt-get (aptitude, synaptic, dpkg, what have you), how do I go about installing it on Ubuntu?
Q.7. Where are the installation steps documented?
Calling R Functions
I have the following code:
EXECUTE 'SELECT '
'regr_slope( amount, year_taken ),'
'regr_intercept( amount, year_taken ),'
'corr( amount, year_taken ),'
'sum( measurements ) AS total_measurements '
'FROM temp_regression'
INTO STRICT slope, intercept, correlation, total_measurements;
This code calls the PostgreSQL function corr to calculate Pearson's correlation over the data. Ideally, I'd like to do the following (by switching corr for plr_kendall):
EXECUTE 'SELECT '
'regr_slope( amount, year_taken ),'
'regr_intercept( amount, year_taken ),'
'plr_kendall( amount, year_taken ),'
'sum( measurements ) AS total_measurements '
'FROM temp_regression'
INTO STRICT slope, intercept, correlation, total_measurements;
Q.8. Do I have to write plr_kendall myself?
Q.9. Where can I find a simple example that walks through:
Loading an R module into PG.
Writing a PG wrapper for the desired R function.
Calling the PG wrapper from a SELECT.
For example, would the last two steps look like:
create or replace function plr_kendall( _float8, _float8 ) returns float as '
agg_kendall(arg1, arg2)
' language 'plr';
CREATE AGGREGATE agg_kendall (
sfunc = plr_array_accum,
basetype = float8, -- ???
stype = _float8, -- ???
finalfunc = plr_kendall
);
And then the SELECT as above?
Thank you!
Overview
These steps list how to call an R function from PostgreSQL using PL/R.
Prerequisites
You must already have PostgreSQL, R, and PL/R installed.
Steps
Find R Module name (e.g., Kendall)
Change to the database user:
sudo su - postgres
Run R
R
Install R Module (accept $HOME/R/x86_64-pc-linux-gnu-library/2.9/):
install.packages("Kendall", dependencies = TRUE)
Choose a CRAN Mirror, when prompted.
Create the following table:
CREATE TABLE plr_modules (
modseq int4,
modsrc text
);
Insert into that table the directive to load the R Module in question:
INSERT INTO plr_modules
VALUES (0, 'library(Kendall)' );
Restart the database (or SELECT * FROM reload_plr_modules();):
sudo /etc/init.d/postgresql-8.4 restart
Create a wrapper function in PostgreSQL:
CREATE OR REPLACE FUNCTION climate.plr_corr_kendall(
double precision[],
double precision[] )
RETURNS double precision AS
$BODY$
Kendall(arg1, arg2)
$BODY$
LANGUAGE 'plr' VOLATILE STRICT;
Create a function that uses the wrapper function.
Test the new function.
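Before wiring it into a larger function, the wrapper can be smoke-tested directly with made-up vectors (a sketch; it assumes PL/R reduces the result of R's Kendall() call to a single double precision value, which the final result at the end of this answer suggests it does):
SELECT climate.plr_corr_kendall(
    ARRAY[1900, 1901, 1902, 1903]::double precision[],
    ARRAY[12.3, 10.1, 11.7, 9.8]::double precision[]
);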
Wrapper Function
This function performs the work of gathering data from the database and creating two arrays. These arrays are passed into the plr_corr_kendall wrapper function.
CREATE OR REPLACE FUNCTION climate.analysis_vector()
RETURNS double precision AS
$BODY$
DECLARE
v_year_taken double precision[];
v_amount double precision[];
i RECORD;
BEGIN
FOR i IN (
SELECT
extract(YEAR FROM m.taken) AS year_taken,
avg( m.amount ) AS amount
FROM
climate.city c,
climate.station s,
climate.station_category sc,
climate.measurement m
WHERE
c.id = 5148 AND
earth_distance(
ll_to_earth(c.latitude_decimal,c.longitude_decimal),
ll_to_earth(s.latitude_decimal,s.longitude_decimal)) <= 30 AND
s.elevation BETWEEN 0 AND 3000 AND
s.applicable AND
sc.station_id = s.id AND
sc.category_id = 1 AND
extract(YEAR FROM sc.taken_start) >= 1900 AND
extract(YEAR FROM sc.taken_end) <= 2009 AND
m.station_id = s.id AND
m.taken BETWEEN sc.taken_start AND sc.taken_end AND
m.category_id = sc.category_id
GROUP BY
extract(YEAR FROM m.taken)
ORDER BY
extract(YEAR FROM m.taken)
) LOOP
SELECT array_append( v_year_taken, i.year_taken ) INTO v_year_taken;
SELECT array_append( v_amount, i.amount::double precision ) INTO v_amount;
END LOOP;
RAISE NOTICE '%', v_year_taken;
RAISE NOTICE '%', v_amount;
RETURN climate.plr_corr_kendall( v_year_taken, v_amount );
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE
COST 100;
Test
Test the function as follows:
SELECT
*
FROM
climate.analysis_vector();
Result
A number: -0.0578900910913944