How to determine the total free space of the db PostgreSQL - postgresql

How do I identify the total used size of the db and its total capacity?
For the used size of the db, pg_database_size('dbName') works.
But I am not sure how to calculate the free space of the db (I mean relative to its total capacity).
I saw pg_freespace('table'), but it would need GRANT access. Is there any other way?
Please guide me here.

As I mentioned in the comment, you can create your own function in an untrusted language, for example plpython:
CREATE EXTENSION plpython3u;

CREATE TYPE disk_use AS (total_gb numeric, used_gb numeric, free_gb numeric);

CREATE OR REPLACE FUNCTION fn_disk_use(part text DEFAULT '/') RETURNS disk_use AS
$$
# shutil.disk_usage() reports total, used and free bytes for the file system
# holding the given path; divide by 2**30 to convert bytes to GiB.
import shutil
hdd = shutil.disk_usage(part)
return (hdd.total / (2**30), hdd.used / (2**30), hdd.free / (2**30))
$$ LANGUAGE plpython3u;

SELECT * FROM fn_disk_use('/');
total_gb | used_gb | free_gb
-------------------+-------------------+--------------------
455.2938232421875 | 402.6443672180176 | 29.455318450927734
(1 row)
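If you just want to compare the database's used space against what is left on the data partition, you can combine this with pg_database_size(). A minimal sketch, assuming a database named mydb and that the data directory lives on '/':
SELECT pg_size_pretty(pg_database_size('mydb')) AS db_used,  -- space the db occupies
       round(free_gb, 2) AS disk_free_gb                     -- free space left on '/'
FROM fn_disk_use('/');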

Related

Postgres FTS Priority Field

I am using Postgres FTS to search a field in a table. The only issue is that, for some reason, the following happens.
store=# select name from service where to_tsvector(name) @@ to_tsquery('find:*') = true;
name
--------------
Finding Nora
(1 row)
store=# select name from service where to_tsvector(name) @@ to_tsquery('findi:*') = true;
name
------
(0 rows)
How come, when searching using the query findi:*, the result doesn't show?
In my PG 12.2 with default text search configuration I have:
# select to_tsvector('Finding Nora');
to_tsvector
-------------------
'find':1 'nora':2
(1 row)
# select to_tsquery('findi:*');
to_tsquery
------------
'findi':*
(1 row)
Because the default configuration stems Finding to the lexeme find, the tsvector contains no lexeme starting with findi, so the prefix query findi:* does not find any match.
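If you actually need the unstemmed prefix to match, one possible workaround (a sketch, not something the answer above prescribes) is to build both sides with the simple configuration, which skips stemming:
select name
from service
where to_tsvector('simple', name) @@ to_tsquery('simple', 'findi:*');
-- 'simple' leaves the word 'finding' unstemmed, so the prefix findi:* matches it.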

Create a middle line through a polygon path using postgis

I'm trying to create a middle line through a polygon path, but I'm having problems and I'm totally lost about how to do it. Can anyone help me achieve this goal?
ST_ApproximateMedialAxis might be what you're looking for.
This PostGIS function can be installed with the extension postgis_sfcgal:
CREATE EXTENSION postgis_sfcgal;
Data Sample:
CREATE TABLE t (geom GEOMETRY);
INSERT INTO t VALUES ('POLYGON((-4.689807593822478 54.20411976258862,-4.68751162290573 54.20415427666532,-4.686465561389922 54.20414172609529,-4.685768187046051 54.20414800138079,-4.685280025005341 54.20414486373812,-4.685070812702178 54.204126037877415,-4.685092270374298 54.2040538719985,-4.685854017734527 54.204078973188075,-4.687039554119109 54.20407583554021,-4.688123166561126 54.204082110835685,-4.689078032970428 54.2040601472973,-4.689936339855194 54.20403818374726,-4.689807593822478 54.20411976258862))');
Query:
SELECT ST_ASText(ST_ApproximateMedialAxis(geom)) FROM t;
--------------------------------------------------------
MULTILINESTRING((-4.68993633985519 54.2040381837473,-4.68979598869017 54.2040808603332),(-4.68812343743121 54.2041135944836,-4.68907889005644 54.2040954248621),(-4.68812343743121 54.2041135944836,-4.68751156547988 54.2041164214432),(-4.68646560965613 54.2041095395079,-4.6858535007301 54.2041131034922),(-4.68646560965613 54.2041095395079,-4.68703949691079 54.2041122226661),(-4.6858535007301 54.2041131034922,-4.68576814087419 54.2041120816007),(-4.68907889005644 54.2040954248621,-4.68979598869017 54.2040808603332),(-4.68576814087419 54.2041120816007,-4.68528206303828 54.2041025125126),(-4.68703949691079 54.2041122226661,-4.68751156547988 54.2041164214432),(-4.68512015518242 54.2040925683677,-4.68528206303828 54.2041025125126))
(1 row)
Depending on your use case, another option would be ST_StraightSkeleton:
SELECT ST_ASText(ST_StraightSkeleton(geom)) FROM t;
-----------------------------------------------------
MULTILINESTRING((-4.68980759382248 54.2041197625886,-4.68979598869017 54.2040808603332),(-4.68993633985519 54.2040381837473,-4.68979598869017 54.2040808603332),(-4.68907803297043 54.2040601472973,-4.68907889005644 54.2040954248621),(-4.68812316656113 54.2040821108357,-4.68812343743121 54.2041135944836),(-4.68703955411911 54.2040758355402,-4.68703949691079 54.2041122226661),(-4.68585401773453 54.2040789731881,-4.6858535007301 54.2041131034922),(-4.6850922703743 54.2040538719985,-4.68512015518242 54.2040925683677),(-4.68507081270218 54.2041260378774,-4.68512015518242 54.2040925683677),(-4.68528002500534 54.2041448637381,-4.68528206303828 54.2041025125126),(-4.68576818704605 54.2041480013808,-4.68576814087419 54.2041120816007),(-4.68646556138992 54.2041417260953,-4.68646560965613 54.2041095395079),(-4.68751162290573 54.2041542766653,-4.68751156547988 54.2041164214432),(-4.68812343743121 54.2041135944836,-4.68907889005644 54.2040954248621),(-4.68812343743121 54.2041135944836,-4.68751156547988 54.2041164214432),(-4.68646560965613 54.2041095395079,-4.6858535007301 54.2041131034922),(-4.68646560965613 54.2041095395079,-4.68703949691079 54.2041122226661),(-4.6858535007301 54.2041131034922,-4.68576814087419 54.2041120816007),(-4.68907889005644 54.2040954248621,-4.68979598869017 54.2040808603332),(-4.68576814087419 54.2041120816007,-4.68528206303828 54.2041025125126),(-4.68703949691079 54.2041122226661,-4.68751156547988 54.2041164214432),(-4.68512015518242 54.2040925683677,-4.68528206303828 54.2041025125126))
(1 row)
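If you would rather have one continuous centre line than the individual segments, one option worth trying (a sketch, not verified against this particular polygon) is to merge the medial axis afterwards with ST_LineMerge:
-- Merge the MULTILINESTRING segments into longer linestrings where they connect
SELECT ST_AsText(ST_LineMerge(ST_ApproximateMedialAxis(geom))) FROM t;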

Very large fields in an AS400 iSeries database

I would like to save a large XML string (possibly longer than 32K or 64K) into an AS400 file field. Either DDS or SQL files would be OK. Example of SQL file below.
CREATE TABLE MYLIB/PRODUCT
(PRODCODE DEC (5 ) NOT NULL WITH DEFAULT,
PRODDESC CHAR (30 ) NOT NULL WITH DEFAULT,
LONGDESC CLOB (70K ) ALLOCATE(1000) NOT NULL WITH DEFAULT)
We would use RPGLE to read and write to fields.
The goal is to then pull out data via ODBC connection on a client side.
AS400 character fields seem to have a 32K limit, so this is not a great option.
What options do I have? I have been reading up on CLOBs, but there appear to be restrictions on writing large strings to CLOBs and on reading CLOB fields remotely. Note that the client is (still) on V5R4 of the AS400 OS.
thanks!
Charles' answer below shows how to extract data. I would like to insert data. This code runs, but throws a '22501' SQL error.
D wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
/free
//eval longdesc = *ALL'123';
eval Wlongdesc = '123';
exec SQL
INSERT INTO PRODUCT (PRODCODE, PRODDESC, LONGDESC)
VALUES (123, 'Product Description', :LongDesc );
if %subst(sqlstt:1:2) <> '00';
// an error occurred.
endif;
// get length explicitly, variables are setup by pre-processor
longdesc_len = %len(%trim(longdesc_data));
wLongDesc = %subst(longdesc_data:1:longdesc_len);
/end-free
C Eval *INLR = *on
C Return
Additional question: Is this technique suitable for storing data which I want to extract via an ODBC connection later? Does ODBC read the CLOB as a pointer, or can it pull out the text?
At v5r4, RPGLE actually supports 64K character variables.
However, the DB is limited to 32K for regular char/varchar fields.
You'd need to use a CLOB for anything bigger than 32K.
If you can live with 64K (or so):
CREATE TABLE MYLIB/PRODUCT
(PRODCODE DEC (5 ) NOT NULL WITH DEFAULT,
PRODDESC CHAR (30 ) NOT NULL WITH DEFAULT,
LONGDESC CLOB (65531) ALLOCATE(1000) NOT NULL WITH DEFAULT)
You can use RPGLE SQLTYPE support
D code S 5s 0
d wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
/free
exec SQL
select prodcode, longdesc
into :code, :longdesc
from mylib/product
where prodcode = :mykey;
wLongDesc = %subst(longdesc_data:1:longdesc_len);
DoSomething(wLongDesc);
The pre-compiler will replace longdesc with a DS defined like so:
D longdesc ds
D longdesc_len 10u 0
D longdesc_data 65531a
You could simply use it directly, making sure to only use up to longdesc_len, or convert it to a VARYING as I've done above.
If you absolutely must handle larger than 64K:
1. Upgrade to a supported version of the OS (16MB variables are supported)
2. Access the CLOB contents via an IFS file using a file reference
Option 2 is one I've never seen used, and I can't find any examples; I just saw it mentioned in this old article:
http://www.ibmsystemsmag.com/ibmi/developer/general/BLOBs,-CLOBs-and-RPG/?page=2
This example shows how to write to a CLOB field in a Db2 database, with help from Charles and Mr Murphy's feedback.
* ----------------------------------------------------------------------
* Create table with CLOB:
* CREATE TABLE MYLIB/PRODUCT
* (MYDEC DEC (5 ) NOT NULL WITH DEFAULT,
* MYCHAR CHAR (30 ) NOT NULL WITH DEFAULT,
* MYCLOB CLOB (65531) ALLOCATE(1000) NOT NULL WITH DEFAULT)
* ----------------------------------------------------------------------
D PRODCODE S 5i 0
D PRODDESC S 30a
D i S 10i 0
D wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
D* Note that variables longdesc_data and longdesc_len
D* get created automatically by the SQL pre-processor.
/free
eval wLongdesc = '123';
longdesc_data = wLongDesc;
longdesc_len = %len(%trim(wLongDesc));
exec SQL set option commit = *none;
exec SQL
INSERT INTO PRODUCT (MYDEC, MYCHAR, MYCLOB)
VALUES (123, 'Product Description',:longDesc);
if %subst(sqlstt:1:2)<>'00' ;
// an error occurred.
endif;
Eval *INLR = *on;
Return;
/end-free
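Regarding the additional question about ODBC: if the client has trouble with the CLOB type itself, one possible workaround (a sketch under the assumption that the content fits in a 32K VARCHAR on DB2 for i; not verified on V5R4) is to cast a substring of the CLOB on the server side:
-- Hypothetical read-back query for an ODBC client; the SUBSTR/CAST limits
-- are assumptions, not tested on V5R4. Uses SQL naming (MYLIB.PRODUCT).
SELECT MYDEC,
       MYCHAR,
       CAST(SUBSTR(MYCLOB, 1, 32000) AS VARCHAR(32000)) AS MYCLOB_TEXT
FROM MYLIB.PRODUCT;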

PostgreSQL function giving all rows but not in key value pair format?

I have a table called users in PostgreSQL. I want to get all the users in the table from my Laravel application.
I can get them from the table directly as:
$testData = DB::table('users')->get();
output is:
[{"id":1,"name":"Chris Sevilleja","username":"sevilayha","email":"chris#scotch.io","password":"$2y$08$i\/ATa68ierqRL47ZxHX4EesJGEcdtKPckZs8GDGpYS.IR4aaQn.\/q","created_at":"2016-09-07 09:32:41","updated_at":"2016-09-07 09:32:41"},{"id":2,"name":"Bemagoni chandrashekar","username":"chandrashekar","email":"chandrashekar#zessta.com","password":"$2y$08$QG9JsAerYp3UYpXNNyImhuz\/6hiWv8XpURpJX1uJ.hAm8l1RG2JrC","created_at":"2016-09-07 09:32:41","updated_at":"2016-09-07 09:32:41"},{"id":3,"name":"dwefr","username":"ewre","email":"ewrt#egyfrhgt.com","password":"$2y$08$s2XBZvoAvpEqjjhbx8QPw.eSe5TIuJC25XbaFkTskzAKAGi99QWga","created_at":"2016-09-07 09:34:27","updated_at":"2016-09-07 09:34:27"},{"id":4,"name":"r3t54y6u677i","username":"retrytyu","email":"etyru#dddj.com","password":"$2y$08$Xb0Xm4KwdSwcBSHX0F6ETOWU60X.NO5D7\/uwVv\/xUAXTUS8LSPCLu","created_at":"2016-09-07 09:46:26","updated_at":"2016-09-07 09:46:26"},{"id":5,"name":"r3t45y6u","username":"ertrh","email":"435t4y65#sss.com","password":"$2y$08$36DLu49nZ2YOWtU.c625meiyi3\/fmsHxiTuxIU9z9UcyrbIpFSKKW","created_at":"2016-09-07 10:02:05","updated_at":"2016-09-07 10:02:05"},{"id":6,"name":"ewrtryuyj","username":"ryt","email":"wrewtey#sssks.com","password":"$2y$08$GULHtX3GGXPGgm8gA9yDbeawZlQ5QwD2TX7nrCvEU4j7jrSgPWAQO","created_at":"2016-09-07 10:04:17","updated_at":"2016-09-07 10:04:17"},{"id":7,"name":"chandu","username":"chandu","email":"ch1235hdhd#dhdjd.com","password":"$2y$08$gpAhcl\/Sg.lGvb.zk.I\/m.PfcttGI6OPFMsMxQVm15dYOtQDvIWSG","created_at":"2016-09-07 10:06:18","updated_at":"2016-09-07 10:06:18"},{"id":8,"name":"dewfergt","username":"erwrgf","email":"fgf#dgrf.com","password":"$2y$08$ikYAXV1prZsEj2MPxXM4S.Tqn160Jv25cFOQLghK8ptFiSBaIFGZO","created_at":"2016-09-07 10:07:20","updated_at":"2016-09-07 10:07:20"},{"id":9,"name":"rteryt","username":"wrwter","email":"wretr#gjjd.com","password":"$2y$08$pkOUBl1NlNdBShNWiklya.0zlPzPrEH2edCfvdCiHLnj80GY1sdtm","created_at":"2016-09-08 07:11:46","updated_at":"2016-09-08 07:11:46"},{"id":10,"name":"Raghu","username":"raghu","email":"raghu#gdgdjd.com","password":"$2y$08$m9wke.vvTTZytYw91I4\/q.qxKoCobLOW7dbCvs66xFyJyy2R9phni","created_at":"2016-09-10 10:06:40","updated_at":"2016-09-10 10:06:40"},{"id":11,"name":"wewert","username":"ewretr","email":"ewrtey#vsss.com","password":"$2y$08$IXD0eXYTPPGE1MfALonEFey0lr\/KBMZ0.3AIO3sWVgu7IZdWhXwTG","created_at":"2016-09-12 07:48:58","updated_at":"2016-09-12 07:48:58"}]
When calling through a PostgreSQL function instead:
My PostgreSQL function:
-- Function: public."RegiterUsers2"()
-- DROP FUNCTION public."RegiterUsers2"();
CREATE OR REPLACE FUNCTION public."RegiterUsers2"()
RETURNS SETOF users AS
'select * from users'
LANGUAGE sql VOLATILE
COST 100
ROWS 1000;
ALTER FUNCTION public."RegiterUsers2"()
OWNER TO postgres;
Laravel code:
$allUserData=DB::select('SELECT public."RegiterUsers2"()')
output:
[{"RegiterUsers2":"(1,\"Chris Sevilleja\",sevilayha,chris#scotch.io,$2y$08$i\/ATa68ierqRL47ZxHX4EesJGEcdtKPckZs8GDGpYS.IR4aaQn.\/q,\"2016-09-07 09:32:41\",\"2016-09-07 09:32:41\")"},{"RegiterUsers2":"(2,\"Bemagoni chandrashekar\",chandrashekar,chandrashekar#zessta.com,$2y$08$QG9JsAerYp3UYpXNNyImhuz\/6hiWv8XpURpJX1uJ.hAm8l1RG2JrC,\"2016-09-07 09:32:41\",\"2016-09-07 09:32:41\")"},{"RegiterUsers2":"(3,dwefr,ewre,ewrt#egyfrhgt.com,$2y$08$s2XBZvoAvpEqjjhbx8QPw.eSe5TIuJC25XbaFkTskzAKAGi99QWga,\"2016-09-07 09:34:27\",\"2016-09-07 09:34:27\")"},{"RegiterUsers2":"(4,r3t54y6u677i,retrytyu,etyru#dddj.com,$2y$08$Xb0Xm4KwdSwcBSHX0F6ETOWU60X.NO5D7\/uwVv\/xUAXTUS8LSPCLu,\"2016-09-07 09:46:26\",\"2016-09-07 09:46:26\")"},{"RegiterUsers2":"(5,r3t45y6u,ertrh,435t4y65#sss.com,$2y$08$36DLu49nZ2YOWtU.c625meiyi3\/fmsHxiTuxIU9z9UcyrbIpFSKKW,\"2016-09-07 10:02:05\",\"2016-09-07 10:02:05\")"},{"RegiterUsers2":"(6,ewrtryuyj,ryt,wrewtey#sssks.com,$2y$08$GULHtX3GGXPGgm8gA9yDbeawZlQ5QwD2TX7nrCvEU4j7jrSgPWAQO,\"2016-09-07 10:04:17\",\"2016-09-07 10:04:17\")"},{"RegiterUsers2":"(7,chandu,chandu,ch1235hdhd#dhdjd.com,$2y$08$gpAhcl\/Sg.lGvb.zk.I\/m.PfcttGI6OPFMsMxQVm15dYOtQDvIWSG,\"2016-09-07 10:06:18\",\"2016-09-07 10:06:18\")"},{"RegiterUsers2":"(8,dewfergt,erwrgf,fgf#dgrf.com,$2y$08$ikYAXV1prZsEj2MPxXM4S.Tqn160Jv25cFOQLghK8ptFiSBaIFGZO,\"2016-09-07 10:07:20\",\"2016-09-07 10:07:20\")"},{"RegiterUsers2":"(9,rteryt,wrwter,wretr#gjjd.com,$2y$08$pkOUBl1NlNdBShNWiklya.0zlPzPrEH2edCfvdCiHLnj80GY1sdtm,\"2016-09-08 07:11:46\",\"2016-09-08 07:11:46\")"},{"RegiterUsers2":"(10,Raghu,raghu,raghu#gdgdjd.com,$2y$08$m9wke.vvTTZytYw91I4\/q.qxKoCobLOW7dbCvs66xFyJyy2R9phni,\"2016-09-10 10:06:40\",\"2016-09-10 10:06:40\")"},{"RegiterUsers2":"(11,wewert,ewretr,ewrtey#vsss.com,$2y$08$IXD0eXYTPPGE1MfALonEFey0lr\/KBMZ0.3AIO3sWVgu7IZdWhXwTG,\"2016-09-12 07:48:58\",\"2016-09-12 07:48:58\")"}]
I need it like this:
{"id":1,
"name":"Chris Sevilleja",
"username":"sevilayha",
"email":"chris#scotch.io",
"password":"$2y$08$i\/ATa68ierqRL47ZxHX4EesJGEcdtKPckZs8GDGpYS.IR4aaQn",
"created_at":"2016-09-07 09:32:41",
"updated_at":"2016-09-07 09:32:41"},
So what is wrong with the Postgres function, and how can I get output like the above?
When you call a function that returns many columns and you want each one separately, you must call it in the form:
SELECT col1, col2, ... FROM function(...)
Or:
SELECT * FROM function(...)
So in your case you simply want:
$allUserData=DB::select('SELECT * FROM public."RegiterUsers2"()')
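For comparison, the difference is also visible directly in SQL; a quick sketch (column list abbreviated):
-- Function in the SELECT list: one composite value per row
SELECT public."RegiterUsers2"();
-- Function in the FROM clause: the composite is expanded into real columns
SELECT id, name, username, email FROM public."RegiterUsers2"();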

Inserting values into multiple columns by splitting a string in PostgreSQL

I have the following heap of text:
"BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,
URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,
16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,
1.0,Version,1.0,".
What I'd like to do is extract data from this in the following manner:
BundleSize:155648
DynamicSize:204800
Identifier:com.URLConnectionSample
Name:URLConnectionSample
ShortVersion:1.0
Version:1.0
BundleSize:155648
DynamicSize:16384
Identifier:com.IdentifierForVendor3
Name:IdentifierForVendor3
ShortVersion:1.0
Version:1.0
All tips and suggestions are welcome.
It isn't quite clear what you need to do with this data. If you really need to process it entirely in the database (it looks like a task for your favorite scripting language instead), one option is to use hstore.
Converting records one by one is easy:
Assuming
%s =
BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0
SELECT * FROM each(hstore(string_to_array(%s, ',')));
Output:
key | value
--------------+-------------------------
Name | URLConnectionSample
Version | 1.0
BundleSize | 155648
Identifier | com.URLConnectionSample
DynamicSize | 204800
ShortVersion | 1.0
If you have a table with columns exactly matching the field names (note the quotes; populate_record is case-sensitive to key names):
CREATE TABLE data (
"BundleSize" integer, "DynamicSize" integer, "Identifier" text,
"Name" text, "ShortVersion" text, "Version" text);
You can insert hstore records into it like this:
INSERT INTO data SELECT * FROM
populate_record(NULL::data, hstore(string_to_array(%s, ',')));
Things get more complicated if you have comma-separated values for more than one record.
%s = BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,1.0,Version,1.0,
You need to break up the array into chunks of number_of_fields * 2 = 12 elements first.
SELECT hstore(row) FROM (
  SELECT array_agg(str) AS row FROM (
    SELECT str, row_number() OVER () AS i
    FROM unnest(string_to_array(%s, ',')) AS str
  ) AS str_sub
  GROUP BY (i - 1) / 12
) AS row_sub
WHERE array_length(row, 1) = 12;
Output:
"Name"=>"URLConnectionSample", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.URLConnectionSample", "DynamicSize"=>"204800", "ShortVersion"=>"1.0"
"Name"=>"IdentifierForVendor3", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.IdentifierForVendor3", "DynamicSize"=>"16384", "ShortVersion"=>"1.0"
And inserting this into the aforementioned table:
INSERT INTO data SELECT (populate_record(NULL::data, hstore(row))).* FROM ...
The rest of the query is the same.
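Putting the pieces together, here is a worked end-to-end sketch with the sample string inlined (it assumes CREATE EXTENSION hstore and the data table defined above; the trailing comma is trimmed so the array splits into whole 12-element records):
WITH raw AS (
  SELECT rtrim(
    'BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,'
    'Name,URLConnectionSample,ShortVersion,1.0,Version,1.0,'
    'BundleSize,155648,DynamicSize,16384,Identifier,com.IdentifierForVendor3,'
    'Name,IdentifierForVendor3,ShortVersion,1.0,Version,1.0,', ',') AS s
)
INSERT INTO data
SELECT (populate_record(NULL::data, hstore(row))).*
FROM (
  -- group the flattened key/value list into 12-element records
  SELECT array_agg(str ORDER BY i) AS row
  FROM (
    SELECT str, row_number() OVER () AS i
    FROM raw, unnest(string_to_array(raw.s, ',')) AS str
  ) AS str_sub
  GROUP BY (i - 1) / 12
) AS row_sub
WHERE array_length(row, 1) = 12;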