Converting a hex string to IPv6 format in PostgreSQL

I have a hex value like \xfc80000000000000ea508bfff217b628 in a bytea column and I want to convert it to fc80:0000:0000:0000:ea50:8bff:f217:b628 in a select query. I tried:
select '0:0:0:0:0:0:0:0'::inet + encode(stationipv6::bytea,'hex') from a;
but I get the following error:
ERROR: operator does not exist: inet + text
LINE 1: select '0:0:0:0:0:0:0:0'::inet + encode(stationipv6::bytea,'...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.

substring() works on bytea values, and you can use it to extract the individual two-byte groups and assemble them into an inet:
select concat_ws(':',
encode(substring(stationipv6, 1, 2), 'hex'),
encode(substring(stationipv6, 3, 2), 'hex'),
encode(substring(stationipv6, 5, 2), 'hex'),
encode(substring(stationipv6, 7, 2), 'hex'),
encode(substring(stationipv6, 9, 2), 'hex'),
encode(substring(stationipv6, 11, 2), 'hex'),
encode(substring(stationipv6, 13, 2), 'hex'),
encode(substring(stationipv6, 15, 2), 'hex')
)::inet
from your_table
This works directly on bytea columns.
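For reference, here is the same expression run against the sample value from the question as a self-contained query. Note that casting to inet makes PostgreSQL display the address in compressed form; leave the cast off if you want the fully spelled-out fc80:0000:... string:
select concat_ws(':',
encode(substring(ip, 1, 2), 'hex'),
encode(substring(ip, 3, 2), 'hex'),
encode(substring(ip, 5, 2), 'hex'),
encode(substring(ip, 7, 2), 'hex'),
encode(substring(ip, 9, 2), 'hex'),
encode(substring(ip, 11, 2), 'hex'),
encode(substring(ip, 13, 2), 'hex'),
encode(substring(ip, 15, 2), 'hex')
)::inet
from (values ('\xfc80000000000000ea508bfff217b628'::bytea)) as v(ip);
-- fc80::ea50:8bff:f217:b628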

Related

Is there a PostGIS function to conditionally merge linestring geometries to the neighboring ones?

I have a lines (multilinestring) table in my PostGIS database (Postgres 11), which I converted to linestrings, also checking the validity (ST_IsValid()) of the new linestring geometries.
create table my_line_tbl as
select
gid gid_multi,
adm_code, t_count,
st_length((st_dump(st_linemerge(geom))).geom)::int len,
(st_dump(st_linemerge(geom))).geom geom
from
my_multiline_tbl
order by gid;
alter table my_line_tbl add column id serial primary key not null;
The first 10 rows look like this:
id, gid_multi, adm_code, t_count, len, geom
1, 1, 30, 5242, 407, LINESTRING(...)
2, 1, 30, 3421, 561, LINESTRING(...)
3, 2, 50, 5248, 3, LINESTRING(...)
4, 2, 50, 1458, 3, LINESTRING(...)
5, 2, 60, 2541, 28, LINESTRING(...)
6, 2, 30, 3325, 4, LINESTRING(...)
7, 2, 20, 1142, 5, LINESTRING(...)
8, 2, 30, 1425, 7, LINESTRING(...)
9, 3, 30, 2254, 4, LINESTRING(...)
10, 3, 50, 2254, 50, LINESTRING(...)
I am trying to develop the following logic:
1. Find all <= 10 m segments and merge them into a neighboring (previous or next) geometry > 10 m.
2. If there are several <= 10 m segments next to each other, merge them together to make > 10 m segments (minimum length: > 10 m).
3. At intersections, merge any <= 10 m segment into the longest neighboring geometry.
I thought of using SQL window functions to check the length (st_length()) of succeeding geometries (lead(id) over ()) and then merge them, but the problem with this approach is that segments with successive IDs are not necessarily next to each other (they do not intersect, st_intersects()).
My attempt (dynamic SQL) is below, where I try to separate the <= 10 m and > 10 m geometries:
with lt10mseg as (
select
id, gid_multi,
len, geom lt10m_geom
from
my_line_tbl
where len <= 10
order by id
), gt10mseg as (
select
id, gid_multi,
len, geom gt10m_geom
from
my_line_tbl
where len > 10
order by id
)
select
st_intersects(lt10m_geom, gt10m_geom)
from
lt10mseg, gt10mseg
order by lt10mseg.id
Any help/suggestions (dynamic SQL/PLPGSQL) to continue developing the above logic? The ultimate goal is to get rid of the <= 10 m segments by merging them into their neighbors.
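One possible building block: instead of pairing rows by consecutive ids, pair each short segment with every longer segment it actually touches, and keep the longest such neighbour per short segment. This is only a sketch against the my_line_tbl layout above; it covers rule 3 for isolated short segments, while chains of adjacent short segments (rule 2) would still need an iterative pass such as a PL/pgSQL loop or a recursive CTE:
select distinct on (s.id)
s.id as short_id,
n.id as neighbour_id,
st_linemerge(st_union(s.geom, n.geom)) as merged_geom
from my_line_tbl s
join my_line_tbl n
on n.id <> s.id
and st_intersects(s.geom, n.geom)
where s.len <= 10
and n.len > 10
order by s.id, n.len desc;
distinct on (s.id) together with order by s.id, n.len desc keeps exactly one row per short segment: the one with its longest intersecting neighbour.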

Unexpected end of input in PostgreSQL stored procedure multidimensional array parameter

I have built a stored procedure in PostgreSQL which accepts multidimensional array parameters, like below:
SELECT horecami.insert_obj_common(
'{"(5, 2, LLLLL rest, 46181, a#a.com, ooo, kkk, 12:09, 20:40, 23, true, 49.667, 48.232, fu, 2011-12-15 15:28:19+04, 2011-12-15 15:28:19+04, 3, 1)"}'::obj_special[],
'{"(1, 3, q1, q2, q3, q4, qson latest, true, 2011-12-15 15:28:19+04, 2, 2, 3, 2011-12-15 15:28:19+04, ' || '{"(1, 1, 1, 1, 1)"}'::horecami.obj_soft_hardware[] || ')"}'::obj_soft[]
);
Inside this procedure there are foreach loops that work without problems.
But when I added the last extra parameter as an array (horecami.obj_soft_hardware[]), it gives me a malformed array error.
This is the error:
ERROR: malformed array literal: "{"(1, 3, q1, q2, q3, q4, qson latest, true, 2011-12-15 15:28:19+04, 2, 2, 3, 2011-12-15 15:28:19+04, "
LINE 3: '{"(1, 3, q1, q2, q3, q4, qson latest, true, 2011-12-15 1...
^
DETAIL: Unexpected end of input.
SQL state: 22P02
Character: 202
It should return a number.
I guess this is a syntax error.
Thanks in advance.
You don't have a multi-dimensional array; you've got an array containing a composite type, which in turn contains an array containing a composite type.
When writing this as a string literal, certain characters have to be escaped (e.g. strings with spaces need quotes, and those quotes themselves need escaping), and at every additional level of nesting everything has to be quoted and escaped again.
To determine what the string literal should look like, just create it using actual arrays and rows (or composite types), then cast to text to get the literal string value with all fields correctly quoted and escaped:
SELECT ARRAY[ROW(1, 3, 'q1', 'q2', 'q3', 'q4', 'qson latest', ARRAY[ROW(1, 1, 1, 1, 1)])]::TEXT
Returns:
{"(1,3,q1,q2,q3,q4,\"qson latest\",\"{\"\"(1,1,1,1,1)\"\"}\")"}

Pivotal HDB complains "Data line too long. Likely due to invalid csv data"

We have a small Pivotal Hadoop (HAWQ) cluster. We have created an external table on it pointing to Hadoop files.
Given Environment:
Product Version:
(HAWQ 1.3.0.2 build 14421) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2
Tried:
When we try to read from the external table using:
test=# select count(*) from EXT_TAB;
we get the following error:
ERROR: data line too long. likely due to invalid csv data (seg0 slice1 SEG0.HOSTNAME.COM:40000 pid=447247)
DETAIL: External table trcd_stg0, line 12059 of pxf://hostname/tmp/def_rcd/?profile=HdfsTextSimple: "2012-08-06 00:00:00.0^2012-08-06 00:00:00.0^6552^2016-01-09 03:15:43.427^0005567^COMPLAINTS ..."
Additional information:
The DDL of the external table is:
CREATE READABLE EXTERNAL TABLE sysprocompanyb.trcd_stg0
(
"DispDt" DATE,
"InvoiceDt" DATE,
"ID" INTEGER,
time timestamp without time zone,
"Customer" CHAR(7),
"CustomerName" CHARACTER VARYING(30),
"MasterAccount" CHAR(7),
"MasterAccName" CHAR(30),
"SalesOrder" CHAR(6),
"SalesOrderLine" NUMERIC(4, 0),
"OrderStatus" CHAR(200),
"MStockCode" CHAR(30),
"MStockDes" CHARACTER VARYING(500),
"MWarehouse" CHAR(200),
"MOrderQty" NUMERIC(10, 3),
"MShipQty" NUMERIC(10, 3),
"MBackOrderQty" NUMERIC(10, 3),
"MUnitCost" NUMERIC(15, 5),
"MPrice" NUMERIC(15, 5),
"MProductClass" CHAR(200),
"Salesperson" CHAR(200),
"CustomerPoNumber" CHAR(30),
"OrderDate" DATE,
"ReqShipDate" DATE,
"DispatchesMade" CHAR(1),
"NumDispatches" NUMERIC(4, 0),
"OrderValue" NUMERIC(26, 8),
"BOValue" NUMERIC(26, 8),
"OrdQtyInEaches" NUMERIC(21, 9),
"BOQtyInEaches" NUMERIC(21, 9),
"DispQty" NUMERIC(38, 3),
"DispQtyInEaches" NUMERIC(38, 9),
"CustomerClass" CHAR(200),
"MLineShipDate" DATE
)
LOCATION (
'pxf://HOSTNAME-HA/tmp/def_rcd/?profile=HdfsTextSimple'
)
FORMAT 'CSV' (delimiter '^' null '' escape '"' quote '"')
ENCODING 'UTF8';
Any help would be much appreciated.
Based on the source code:
https://github.com/apache/incubator-hawq/blob/e48a07b0d8a5c8d41d2d4aaaa70254867b11ee11/src/backend/commands/copy.c
The error occurs when cstate->line_buf.len >= gp_max_csv_line_length is true.
According to http://hawq.docs.pivotal.io/docs-hawq/guc_config-gp_max_csv_line_length.html
the default maximum length of a CSV line is 1048576 bytes. Have you checked your CSV line lengths and tried increasing the value of this setting?
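You can check what the limit currently is before changing anything; a plain SHOW works for any server setting:
SHOW gp_max_csv_line_length;
-- 1048576 by default, per the documentation above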
Also check whether the number of delimited fields in line 12059 matches the number of columns (34 in the DDL above). If some lines get grouped together during parsing, the combined line can exceed the maximum line length; this typically happens because of bad data. To count the fields of a suspect line:
echo "$LINE" | awk -F'^' '{total = total + NF} END {print total}'

What does the exclude_nodata_value argument to ST_DumpValues do?

Could anyone explain what the exclude_nodata_value argument to ST_DumpValues does?
For example, given the following:
WITH
-- Create a 4x4 raster with each value set to 8 and NODATA set to -99.
tbl_1 AS (
SELECT
ST_AddBand(
ST_MakeEmptyRaster(4, 4, 0, 0, 1, -1, 0, 0, 4326),
1, '32BF', 8, -99
) AS rast
),
-- Set the values in rows 1 and 2 to -99.
tbl_2 AS (
SELECT
ST_SetValues(
rast, 1, 1, 1, 4, 2, -99, FALSE
) AS rast FROM tbl_1)
Why does the following select statement return NULLs in the first two rows:
SELECT ST_DumpValues(rast, 1, TRUE) AS cell_values FROM tbl_2;
Like this:
{{NULL,NULL,NULL,NULL},{NULL,NULL,NULL,NULL},{8,8,8,8},{8,8,8,8}}
But the following select statement returns -99s?
SELECT ST_DumpValues(rast, 1, FALSE) AS cell_values FROM tbl_2;
Like this:
{{-99,-99,-99,-99},{-99,-99,-99,-99},{8,8,8,8},{8,8,8,8}}
Clearly, with both statements the first two rows really contain -99s. However, in the first case (exclude_nodata_value=TRUE) these values have been masked (but not replaced) by NULLs.
Thanks for any help. The subtle differences between NULL and NODATA within PostGIS have been driving me crazy for several days.
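One way to convince yourself that the pixels still hold -99 and only their presentation changes is to read the band's NODATA value directly, reusing tbl_2 from the query above:
SELECT ST_BandNoDataValue(rast, 1) FROM tbl_2;
-- returns -99: exclude_nodata_value only controls whether those
-- cells are reported as NULL in the dumped array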

How to save results of postgresql to csv/excel file using psycopg2?

I use driving_distance in PostgreSQL to find distances between all nodes, and here's my Python script in PyScripter:
import sys
#set up psycopg2 environment
import psycopg2
#driving_distance module
query = """
select *
from driving_distance ($$
select
gid as id,
start_id::int4 as source,
end_id::int4 as target,
shape_leng::double precision as cost
from network
$$, %s, %s, %s, %s
)
;"""
#make connection between python and postgresql
conn = psycopg2.connect("dbname = 'routing_template' user = 'postgres' host = 'localhost' password = '****'")
cur = conn.cursor()
#count rows in the table
cur.execute("select count(*) from network")
result = cur.fetchone()
k = result[0] + 1
#run loops
rs = []
i = 1
while i <= k:
cur.execute(query, (i, 1000000, False, False))
rs.append(cur.fetchall())
i = i + 1
#print result
for record in rs:
print record
conn.close()
The result is fine, and part of it in the Python interpreter looks like this:
[(1, 2, 35789.4069722436), (2, 2, 31060.0761437413), (3, 19, 30915.1312550546), (4, 3, 33438.0715007666), (5, 4, 29149.0894812718), (6, 7, 25504.020006665), (7, 7, 29594.741802956), (8, 5, 20736.2427352646), (9, 10, 19545.809601197), (10, 8, 22609.5146670393), (11, 9, 14134.5400189648), (12, 11, 12266.7845493204), (13, 18, 17426.7449057031), (14, 21, 11754.7277029158), (15, 18, 13128.3548040769), (16, 20, 21924.2253916803), (17, 11, 15209.9969992088), (18, 20, 26316.7797545076), (19, 13, 604.414419026164), (20, 16, 740.652673783403), (21, 15, 0.0), (22, 15, 2378.768084459)]
[(1, 2, 38168.1750567026), (2, 2, 33438.8442282003), (3, 19, 33293.8993395136), (4, 3, 35816.8395852256), (5, 4, 31527.8575657308), (6, 7, 27882.788091124), (7, 7, 31973.509887415), (8, 5, 23115.0108197236), (9, 10, 21924.577685656), (10, 8, 24988.2827514983), (11, 9, 16513.3081034238), (12, 11, 14645.5526337793), (13, 18, 19805.5129901621), (14, 21, 14133.4957873748), (15, 18, 15507.1228885359), (16, 20, 24302.9934761393), (17, 11, 17588.7650836678), (18, 20, 28695.5478389666), (19, 13, 2983.18250348516), (20, 16, 3119.4207582424), (21, 15, 2378.768084459), (22, 15, 0.0)]
I want to export these results to a new csv or Excel file, and I have looked at these related posts and websites:
PostgreSQL: export resulting data from SQL query to Excel/CSV
save (postgres) sql output to csv file
Psycopg 2.5.3.dev0 documentation
But I still can't get the export working under PyScripter. How can I do this?
I am working with PostgreSQL 8.4 and Python 2.7.6 under Windows 8.1 x64.
Update #1:
I tried the following code provided by Talvalin (thanks!):
import sys
#set up psycopg2 environment
import psycopg2
#driving_distance module
query = """
select *
from driving_distance ($$
select
gid as id,
start_id::int4 as source,
end_id::int4 as target,
shape_leng::double precision as cost
from network
$$, %s, %s, %s, %s
)
"""
#make connection between python and postgresql
conn = psycopg2.connect("dbname = 'TC_routing' user = 'postgres' host = 'localhost' password = '****'")
cur = conn.cursor()
outputquery = 'copy ({0}) to stdout with csv header'.format(query)
with open('resultsfile', 'w') as f:
cur.copy_expert(outputquery, f)
conn.close()
But I got the error below:
>>>
Traceback (most recent call last):
File "C:/Users/Heinz/Desktop/python_test/driving_distance_loop_test.py", line 27, in <module>
cur.copy_expert(outputquery, f)
ProgrammingError: ERROR: syntax error at or near "%"
LINE 10: $$, %s, %s, %s, %s
^
Maybe I need to add something more to the code above.
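The % that the error points at is the %s placeholders: a COPY statement cannot be parameterised the way execute() queries can, and copy_expert() performs no argument binding, so the %s tokens reach the server verbatim. The placeholders have to be filled in with literal values first (for instance with cur.mogrify(), as in the answer below), so that the statement the server receives looks like this (values taken from the loop in the original script):
COPY (
select *
from driving_distance($$
select
gid as id,
start_id::int4 as source,
end_id::int4 as target,
shape_leng::double precision as cost
from network
$$, 1, 1000000, false, false)
) TO STDOUT WITH CSV HEADER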
Based on Psycopg2's cursor.copy_expert() and Postgres COPY documentation and your original code sample, please try this out. I tested a similar query export on my laptop, so I'm reasonably confident this should work, but let me know if there are any issues.
import sys
#set up psycopg2 environment
import psycopg2
#driving_distance module
#note the lack of trailing semi-colon in the query string, as per the Postgres documentation
query = """
select *
from driving_distance ($$
select
gid as id,
start_id::int4 as source,
end_id::int4 as target,
shape_leng::double precision as cost
from network
$$, %s, %s, %s, %s
)
"""
#make connection between python and postgresql
conn = psycopg2.connect("dbname = 'routing_template' user = 'postgres' host = 'localhost' password = 'xxxx'")
cur = conn.cursor()
outputquery = "COPY ({0}) TO STDOUT WITH CSV HEADER".format(query)
with open('resultsfile', 'w') as f:
cur.copy_expert(outputquery, f)
conn.close()