iSeries qshell db2 truncate - db2

I'm on iSeries V7. I'm trying to use the qshell db2 command with large data.
My command under qshell (strqsh):
$ db2 "select data from mytable"
It works fine except when the data is bigger than 4096 characters: in that case the output is always truncated to 4096 bytes (batch and interactive mode show the same behaviour).
Does anyone have any clue about how to deal with data larger than 4096 characters?
(After a lot of research on stdout, pipes, PIPE_BUF, Rfile, db2 limitations, ... I can't find any solid reason why my data is always truncated.)
Any help would be greatly appreciated.
Gwenaeld
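(Not part of the original post, but one workaround sometimes suggested for fixed output-width limits is to split the wide column into slices with SUBSTR so each returned value stays under 4096 bytes. A minimal sketch only, reusing the column/table names from the question and assuming DATA is a character or CLOB column; the pattern repeats with a third slice starting at 8193, and so on:)
$ db2 "select substr(data, 1, 4096) as part1, case when length(data) > 4096 then substr(data, 4097, 4096) else '' end as part2 from mytable"
(The slices then have to be stitched back together outside qshell, and the number of slices must cover the widest value in DATA.)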

Related

Import CSV file into PostgreSQL while rounding from decimal to integer

I am loading a 10 GB CSV file into an AWS Aurora PostgreSQL database. This file has a few fields whose values are decimals within +/- 0.1 of a whole number, but in reality they are supposed to be integers. When I loaded this data into Oracle using SQLLDR I was able to round the fields from decimal to integers. I would like to do the same in the PostgreSQL database using the \copy command, but I can't find any options which allow this.
Is there a way to import this data and round the values during a \copy without going through a multistep process like creating a temporary table?
There doesn't seem to be a built-in way to do this as I have seen in other database applications.
I didn't use an external program as suggested in the comments, but I did preprocess the data with an awk script that read each line and reformatted the incorrect field, using the printf function with the format "%.0f" to round the output.
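(A rough sketch of such a preprocessing step, assuming a comma-separated file where the decimal value sits in a hypothetical third field; input.csv and rounded.csv are placeholder names:)
$ awk 'BEGIN { FS = OFS = "," } { $3 = sprintf("%.0f", $3); print }' input.csv > rounded.csv
(The rounded file can then be loaded as usual from psql with \copy mytable FROM 'rounded.csv' WITH (FORMAT csv), where mytable stands in for the real table.)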

Decimals less than 1 appear as ",x" in output file while they appear correctly in the result window

I am having difficulty with my decimal columns. I have defined a view in which I convert my decimal values like this
E.g.
SELECT CONVERT(decimal(8,2), [ps_index]) AS PriceSensitivityIndex
When I query my view, the numbers appear correctly in the results window, e.g. 0,50, 0,35.
However, when I export my view to a file using the Tasks > Export Data ... feature of SSMS, the decimals lower than one appear as ,5 and ,35.
How can I get the same output as in the results window?
Change your query to this:
SELECT CAST( CONVERT(decimal(8,2), [ps_index]) AS VARCHAR( 20 ) ) AS PriceSensitivityIndex
Not sure why, but bcp is dropping the leading zero. My guess is that it's either because of the transition from SQL storage to a text file (similar to how the "empty string" and NULLs are exchanged on BCP in or out), or because some deeper configuration (Windows, SQL Server, ?) differs between SQL Server and the OS. Not sure yet. But since you are going to text/character data anyway when you BCP to a text file, it's safe (and likely better in most cases) to first cast/convert your data to a character data type.
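(If the export goes through bcp rather than the wizard, the same fix can be applied in the query itself; a sketch only, with the server, database, and view names as placeholders:)
bcp "SELECT CAST(CONVERT(decimal(8,2), [ps_index]) AS varchar(20)) FROM MyDatabase.dbo.MyView" queryout out.txt -c -t, -S MyServer -T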

MATLAB - How to load and handle a big TXT file (32 GB)

First of all, sorry about my English...
I would like to know a better way to load and handle a big TXT file (around 32 GB, a matrix of 83,000,000 x 66). I already tried some experiments with TEXTSCAN, IMPORT (out of memory), fgets, fgetl, ... Except for the import approach, all methods work but take too much time (much more than one week).
I aim to use this database to run my sampling process and, after that, a neural network to learn the behaviour.
Does anyone know how to import this type of data faster? I am thinking of dumping the data into another format (instead of TXT), for example SQL Server, and then handling the data by querying the database.
Another question: after loading all the data, can I save it in .MAT format and work with that format in my experiments? Any better ideas?
Thanks in advance.
It's impossible to hold such a big matrix (5,478,000,000 values) in your workspace/memory (unless you've got tons of RAM). So the file format (.mat or .csv) doesn't matter!
You definitely have to use a database (or split the file into several smaller ones and calculate step by step, which takes very long too).
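(If you take the splitting route, here is just a sketch with an arbitrary chunk size and placeholder file names: the standard split utility can break the file into pieces that fit in memory, and each chunk_NN file can then be read on its own, e.g. with textscan:)
$ split -l 1000000 -d bigfile.txt chunk_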
Personally, I only have experience with sqlite3, and I did something similar with a 1.47 million x 23 matrix/CSV file.
http://git.osuv.de/markus/sqlite-demo (remember that my csv2sqlite.m was only designed to run with GNU Octave [19k seconds overnight ... well, it was badly scripted too :) ]).
After everything was imported into the sqlite3 database, I can access just the data I need within 8-12 seconds (take a look at the comment header of leistung.m).
If your CSV file is straightforward, you can simply import it with sqlite3 itself.
For example:
┌─[markus#x121e]─[/tmp]
└──╼ cat file.csv
0.9736834199195674,0.7239387515366997,0.3382008456696883
0.6963824911102146,0.8328410999877027,0.5863203843393815
0.2291736458336333,0.1427739134201017,0.8062332551565472
┌─[markus#x121e]─[/tmp]
└──╼ sqlite3 csv.db
SQLite version 3.8.4.3 2014-04-03 16:53:12
Enter ".help" for usage hints.
sqlite> CREATE TABLE csvtest (col1 TEXT NOT NULL, col2 TEXT NOT NULL, col3 TEXT NOT NULL);
sqlite> .separator ","
sqlite> .import file.csv csvtest
sqlite> select * from csvtest;
0.9736834199195674,0.7239387515366997,0.3382008456696883
0.6963824911102146,0.8328410999877027,0.5863203843393815
0.2291736458336333,0.1427739134201017,0.8062332551565472
sqlite> select col1 from csvtest;
0.9736834199195674
0.6963824911102146
0.2291736458336333
All of this is done with https://github.com/markuman/go-sqlite (MATLAB and Octave compatible! But I guess no one but me has ever used it!)
However, I recommend version 2-beta in branch 2 (git checkout -b 2 origin/2) running in coop mode (you'll hit the maximum string length from sqlite3 in ego mode). There's HTML documentation for version 2 too: http://go-sqlite.osuv.de/doc/

pgsql2shp.exe cuts off text at a maximum of 254 characters (varchar(254))

I'm using the pgsql2shp tool to generate *.shp files from geometries in Postgres. The thing is that I have a description column with a lot of text. In the Postgres DB it is of type text, but when I use pgsql2shp this column is cut off at a maximum of 254 characters; it turns the column into a varchar(254).
Any ideas to make this work?
After some more googling and asking around, I found out that the .dbf file that accompanies the *.shp is based on the dBase IV format, which has a maximum text field length of 254 characters. That is why the text gets cut off.
So I need to find some other solution.
As you have discovered, this is a limitation of Shapefiles. To get more characters in the output, you need to export to a different format.
You can use ogr2ogr to convert the spatial data into several different formats, such as SpatiaLite, GeoJSON, etc.
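(For example, an export to GeoJSON could look roughly like this; a sketch only, where the connection parameters, table name, and the geom column are placeholders and only the description column comes from the question:)
ogr2ogr -f GeoJSON output.geojson PG:"host=localhost dbname=mydb user=postgres" -sql "SELECT geom, description FROM mytable"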

pgAdmin III: Why are query results shortened?

I've recently installed pgAdmin III 1.18.1 and noticed a strange thing:
Long JSON query results are shortened to 256 characters and then ' (...)' is appended.
Could someone help me disable this shortening?
Thanks to user Erwin Brandstetter for his answer on Database Administrators.
There is a setting for that in the options: Max characters per column - useful when dealing with big columns. Obviously your setting is 256 characters.
Set it higher or set it to -1 to disable the feature.
In pgAdmin (version 5.4) select
File >> Preferences >> Query Tool >> Results grid
and change "Resize by data?" to false. Then each column will be sized to the wider of the data type width or the column name.