PostgreSQL 9.3 SQL result: long text is shortened

I'm trying to SELECT some text that is quite long (about 500 characters). When I test my SQL in pgAdmin's SQL window, the result it returns is shortened (about 250 characters followed by '(...)').
Does anybody know how to configure PostgreSQL to always show the full text result?
Thank you.
Updated
CREATE TABLE my_table (
my_column text
);
INSERT INTO my_table (my_column) VALUES ('this is long (500~ char more) long text');
SELECT my_column FROM my_table;
The output pane displays:
-> this is long (250~ char more) (...)
I hope this makes it clearer :)

In pgAdmin go to the menu:
File --> Options
Under "Query tool", select "Query editor".
In the box "Max Characters per column", enter... a big number :-)
The maximum is 2147483647, but that can consume a lot of memory in some cases... Anyway, if it is not a production server you don't have to worry.
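To confirm that the full value is stored and only the display is being truncated, you can check the length on the server side (a small sketch against the example table from the question; length() and left() are standard PostgreSQL functions):
SELECT length(my_column) AS chars_stored,
       left(my_column, 40) AS preview
FROM my_table;
-- length() reports the full ~500 characters even though the
-- output pane shows a shortened preview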

Related

Postgres INSERT timestamp with UNION ALL error

Good day!
I need to export/import data from SQL Server 2019 to AWS RDS running PostgreSQL 13.3
It's just a few hundred rows from a handful of tables.
This is my first ever encounter with Postgres, so I decided to simply script the data as "INSERT ... SELECT", as I would with SQL Server... I've looked into AWS Glue and RDS S3 Import - it all seems waaay too much for what I need.
I am using DBeaver v21 for all of this, as I have easy access to both source and destination DBs.
This I tested with success:
CREATE TABLE public.invoices (
invoiceno int8 NOT NULL GENERATED BY DEFAULT AS IDENTITY,
terminalid int4 NOT NULL,
invoicedate timestamp NOT NULL,
description varchar(100) NOT null
);
INSERT INTO public.invoices(InvoiceNo,TerminalID,InvoiceDate,Description)
SELECT 7 as invoiceno , 5 as terminalid , '2018-10-24 21:29:00' as invoicedate , N'Coffe and cookie' as description
-- Updated Rows 1
-- No problem here
I scripted the rest of the data with UNION ALL, like so (shortened example):
INSERT INTO public.invoices(InvoiceNo,TerminalID,InvoiceDate,Description)
SELECT 7 as invoiceno , 5 as terminalid , '2018-10-24 21:29:00' as invoicedate , N'Coffe and cookie' as description
UNION ALL
SELECT 1000, 5 , '2018-10-24 21:29:00' , N'Tea and crumpets'
and now I get:
SQL Error [42804]: ERROR: column "invoicedate" is of type timestamp without time zone but expression is of type text
Hint: You will need to rewrite or cast the expression.
Position: 118
I do see in the message that it can be "fixed" with a CAST (or a rewrite!)...
but how come Postgres can convert one row implicitly, yet two rows is impossible?
Why does this fail only when more than one row is being inserted? It clearly knows how to convert text -> timestamp...
I tried using VALUES, a CTE, and derived tables, with no success.
As I will have to spend more time with Postgres, I really would like to understand what's going on here. Is my syntax wrong (it works fine in SQL Server), is DBeaver messing something up with my data, etc.?
Any suggestions would be appreciated.
Thank you
'2018-10-24 21:29:00' is a string value, and Postgres is a bit more picky about correct data types than SQL Server.
You need to specify the value as a proper timestamp constant:
timestamp '2018-10-24 21:29:00'
Note that you can write that in a bit more compact form using a values clause:
INSERT INTO public.invoices(InvoiceNo,TerminalID,InvoiceDate,Description)
values
(7, 5, timestamp '2018-10-24 21:29:00', 'Coffe and cookie'),
(1000, 5 , timestamp '2018-10-24 21:29:00' , 'Tea and crumpets');
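If you prefer to keep the generated UNION ALL script, casting the literals works as well (a sketch based on the example above; casting any one branch would be enough for Postgres to resolve the column type, but casting every literal is clearer):
INSERT INTO public.invoices(InvoiceNo,TerminalID,InvoiceDate,Description)
SELECT 7, 5, '2018-10-24 21:29:00'::timestamp, 'Coffe and cookie'
UNION ALL
SELECT 1000, 5, '2018-10-24 21:29:00'::timestamp, 'Tea and crumpets';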
The reason for this behaviour is the order of resolution.
When you use a UNION, it is treated like an inline view: the view's queries are resolved first, and the column types (and names) are taken from its first part (the first SELECT command).
So the literal ends up as text, which does not match the type of the column in the target table.
The MSSQL compiler is a little bit smarter :-).
In the first example you have a simple INSERT INTO ... SELECT ...,
and the compiler immediately expects a timestamp, so it does not raise any compilation error (although an error can still occur at execution time if the data does not pass the rules of automatic conversion).
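You can see that type resolution directly with pg_typeof (a small illustrative query, not from the original post):
SELECT pg_typeof(invoicedate)
FROM (
    SELECT '2018-10-24 21:29:00' AS invoicedate
    UNION ALL
    SELECT '2018-10-24 21:29:00'
) AS s;
-- returns "text": the union of two untyped string literals resolves to text,
-- which is what the INSERT then refuses to put into a timestamp column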

How to remove spacing in SQL

I have data in DB2 that I want to insert into SQL.
The DB2 data that I have looks like:
select char('AAA ') as test from Table_1
But when I select it in SQL after doing the insert, the data looks like this:
select test from Table_1
result :
test
------
AAA
Why is the space character read as a box character? How do I fix this so that the space character is read as a space?
Or is there a setting I need to change? Or do I have to use a parameter?
I am using AS400 and DataStage.
Thank you.
Datastage appends pad characters so you know that there are spaces there. The pad character is 0x00 (NUL) by default and that's what you're seeing.
Research the APT_STRING_PADCHAR environment variable; you can set it to something else if you want.
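For example, the pad character can be pointed at an ordinary space instead of the default NUL (shown as the environment variable assignment; where exactly you set it depends on your DataStage job or project configuration):
APT_STRING_PADCHAR=0x20
Here 0x20 is the ASCII space character, while the default 0x00 is the NUL you are seeing.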
The 0x00 characters are not actually in your database. The short answer is, you can safely ignore it.
When you said:
select char('AAA ') as test from Table_1
You were not actually showing any data from the table. Instead you were showing an expression that casts the constant 'AAA ' as a character value and gives the result column the name test, which coincidentally happens to be the name of a column in the table, although that coincidence doesn't matter here.
Then your 2nd statement does show the contents of the database column.
select test from Table_1
Find out what the hexadecimal value actually is.
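For instance, on DB2 the HEX scalar function will show exactly what is stored (a sketch against the table from the question):
SELECT test, HEX(test) AS test_hex FROM Table_1
-- a trailing blank shows up as X'40' (EBCDIC) or X'20' (ASCII);
-- a NUL pad character would show up as X'00' instead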

SQL Server 2012 Express Edition Database Size

We have a requirement in our project to store millions of records (~100 million) in the database.
We know that SQL Server 2012 Express Edition can accommodate a maximum of 10GB of data.
I am using this query to get the actual size of the database - is this right?
use [Bio Lambda8R32S50X]
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'Bio Lambda8R32S50X'
GO
SET NOCOUNT ON
DBCC UPDATEUSAGE(0)
-- Table row counts and sizes.
CREATE TABLE #t
(
[name] NVARCHAR(128),
[rows] CHAR(11),
reserved VARCHAR(18),
data VARCHAR(18),
index_size VARCHAR(18),
unused VARCHAR(18)
)
INSERT #t EXEC sp_msForEachTable 'EXEC sp_spaceused ''?'''
SELECT *
FROM #t
-- # of rows.
SELECT SUM(CAST([rows] AS int)) AS [rows]
FROM #t
DROP TABLE #t
The second question: is this restriction only on the database size of the primary file group, or does it include the log files as well?
If we do a lot of deletes and inserts, or delete and then insert back the same number of records, does the database size vary or stay the same?
This is crucial, since it will decide whether we can go ahead with SQL Server 2012 Express Edition or not.
Thanks and regards
Subasish
I can see that the first query is to get the overall size of the database for the data and logs. The second one is for each table. So I would say yes to both.
Based upon my experience with databases over 40GB, and this link on maximum DB size limits, the limit on SQL Server Express applies to the mdf and ndf files, not the ldf.
You might be safer, however, to just go with SQL Server Standard and use CAL licensing in case your database starts growing.
Good Luck!
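If you want to check your current usage against that cap, summing the data files only (and leaving out the log) is a reasonable sketch, run inside the database in question:
use [Bio Lambda8R32S50X]
SELECT type_desc, SUM(size) * 8 / 1024 AS SizeMB
FROM sys.database_files
GROUP BY type_desc
-- the ROWS total is what counts toward the 10GB Express limit;
-- the LOG total does not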

SSRS 2005 passing parameters to SQL Server 2000 stored procedure

Below is code that I built from an example I found online. I can't find the link, but the code is referenced in the answers to this Stack Overflow question: Passing multiple values for a single parameter in Reporting Services.
Here is the SQL code I am working with right now within my stored procedure. It was a long procedure, so I trimmed it down to just the section I am working on and added the DECLARE and SET for @EMPLOYEES (which is passed as a parameter from SSRS) so the code snippet runs on its own.
DECLARE @EMPLOYEES varchar(8000)
-- EMPLOYEES is a comma separated list of EMPLOYEE IDS
-- FROM SSRS Report Parameters. Each ID is 12 characters
-- And there are 806 Employees to choose from, which
-- when all are selected, the Comma separated string grows
-- to 11,193 characters, much longer than 8000
SET @EMPLOYEES = 'EMP000000001,EMP000000002,EMP000000003'
CREATE TABLE #EMPLOYEEIDS
(
    EMPLOYEEID varchar(100) NOT NULL
)
DECLARE @CharIndex AS int
DECLARE @Piece AS varchar(100)
-- FILL THE #EMPLOYEEIDS TABLE WITH THE COMMA SEPARATED EMPLOYEE IDS
SELECT @CharIndex = 1
WHILE @CharIndex > 0 AND LEN(@EMPLOYEES) > 0
BEGIN
    SELECT @CharIndex = CHARINDEX(',', @EMPLOYEES)
    IF @CharIndex > 0
        SELECT @Piece = LEFT(@EMPLOYEES, @CharIndex - 1)
    ELSE
        SELECT @Piece = @EMPLOYEES
    INSERT INTO #EMPLOYEEIDS (EMPLOYEEID) VALUES (@Piece)
    SELECT @EMPLOYEES = RIGHT(@EMPLOYEES, LEN(@EMPLOYEES) - @CharIndex)
END
SELECT * FROM #EMPLOYEEIDS
DROP TABLE #EMPLOYEEIDS
I had 6 sets of multi-values, and all of them worked fine until I noticed that the reports were missing much of the employee data: the VARCHAR(8000) overflowed when all the employees were selected in the report parameters (there are over 800 of them). The report would still run, SQL would happily truncate the string to 8000 characters, and a quarter of the IDs were never parsed.
So I tried to switch the VARCHAR to a text field, and none of the parsing functions would work when the field is set up as TEXT. I get errors like the following:
Msg 8116, Level 16, State 2, Procedure usp_QualityMonitoring_AllProfiles_SelectWithParameters, Line 89
Argument data type text is invalid for argument 1 of left function.
This is understandable; I know that many functions that work with VARCHAR will not work with TEXT. So SQL truncates everything after 8000 characters when I use a VARCHAR, and the procedure won't ever run if I switch it to TEXT.
What other options do I have to pass multi-valued parameters from SSRS to a SQL Server stored procedure that can support this many options?
Or is there a way to fix the code in the stored procedure to parse through TEXT instead of VARCHAR?
Note: I originally thought the SQL Server running the Stored Proc was 2005, but I have determined that it is not:
SELECT @@VERSION
-- Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)
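One possible direction, sketched under the assumption that @EMPLOYEES is declared as a text parameter of the stored procedure (SQL Server 2000 does not allow local variables of type text, but read-only text parameters are fine): since each ID is a fixed 12 characters followed by a comma, the list can be walked positionally with SUBSTRING and DATALENGTH, both of which accept the text type, so CHARINDEX/LEFT/RIGHT are not needed at all.
DECLARE @Pos int
SELECT @Pos = 1
WHILE @Pos <= DATALENGTH(@EMPLOYEES)
BEGIN
    -- each entry is a 12-character ID; step 13 to skip the comma as well
    INSERT INTO #EMPLOYEEIDS (EMPLOYEEID)
    VALUES (SUBSTRING(@EMPLOYEES, @Pos, 12))
    SELECT @Pos = @Pos + 13
END
This is only a sketch of the idea; the behaviour of SUBSTRING over a long text parameter is worth verifying on SQL Server 2000 itself.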

Inserting a hex string value into a SQL Server image field appends an extra 0

I have an image field and want to insert into it from a hex string:
insert into imageTable(imageField)
values(convert(image, 0x3C3F78...))
However, when I run a select, the value is returned with an extra 0, as 0x03C3F78...
This extra 0 is causing a problem in another application; I don't want it.
How do I stop the extra 0 from being added?
The schema is:
CREATE TABLE [dbo].[templates](
[templateId] [int] IDENTITY(1,1) NOT NULL,
[templateName] [nvarchar](50) NOT NULL,
[templateBody] [image] NOT NULL,
[templateType] [int] NULL)
and the query is:
insert into templates(templateName, templateBody, templateType)
values('I love stackoverflow', convert(image, 0x3C3F786D6C2076657273696F6E3D.......), 2)
The actual hex string is too large to post here.
I have just had a similar problem, and I blame myself.
It is possible that you copied only part of the data you need. In my case, I had to add a '0' to the end of the blob.
The cause of this could be copying the value from SQL Server Management Studio to the clipboard.
insert into imageTable(imageField) values(0x3C3F78...A)
Select returned: 0x03C3F78...
insert into imageTable(imageField) values(0x3C3F78...A0)
Select returned: 0x3C3F78...
I hope this helps.
Best wishes.
This is correct behaviour when the literal has an odd number of hex digits: each pair of digits makes one byte, so SQL Server pads the value with a leading 0 to complete the first byte.
When I run SELECT convert(varbinary(max), 0x55) I get 0x55 back on SQL Server 2008. SELECT convert(varbinary(max), 85) gives me 0x00000055, which is correct since 85 is a 32-bit integer.
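A quick way to reproduce the symptom with a short literal (the real value from the question is elided, so this uses a made-up one):
SELECT convert(varbinary(max), 0x3C3) AS padded
-- returns 0x03C3: an odd number of hex digits is padded with a
-- leading zero so that every byte has exactly two digits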
What datatype are you casting to varbinary?
Edit: I still can't reproduce it using image rather than varbinary.
Some questions though:
is this an upgraded database? What is the compatibility level?
why use image: use varbinary(max) instead
what happens when you change everything to varbinary?