Redshift analyze compression: result output - amazon-redshift

I need to use the output of 'analyze compression' in a Redshift stored procedure. Is there a way to store the results of 'analyze compression' in a temp table?

Related

Simple SELECT query is running for more than 5 hours in DB2

We have a select query as below. The query to fetch the data runs for more than 5 hours.
select ColumnA,
ColumnB,
ColumnC,
ColumnD,
ColumnE
from Table
where CodeN <> 'Z'
Is there any way we can collect stats, or any other way to improve performance?
Also, in DB2, is there any table where we can check whether stats have been collected on the table in question?
The RUNSTATS command collects table and index statistics. Note that this is a Db2 command and not an SQL statement, so you can either run it with the Db2 Command Line Processor (CLP) or through the relational interface with a special stored procedure that is able to run such commands:
RUNSTATS command using the ADMIN_CMD procedure.
Statistics are stored in the SYSSTAT schema views. Refer to Road map to the catalog views - Table 2. Road map to the updatable catalog views.
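For illustration, a minimal sketch of both pieces (the schema and table names here are placeholders, not taken from the question):
-- Collect table and index statistics through the relational interface
CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE MYSCHEMA.MYTABLE WITH DISTRIBUTION AND DETAILED INDEXES ALL');
-- Check when statistics were last collected for that table
SELECT STATS_TIME FROM SYSCAT.TABLES WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTABLE';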
How many rows exist in the table?
Also note that the not-equal operator '<>' is not an indexable predicate.

How to load bulk data to table from table using query as quick as possible? (postgresql)

I have a large table (postgre_a) which has 0.1 billion records and 100 columns. I want to duplicate this data into the same table.
I tried to do this using SQL:
INSERT INTO postgre_a select i1 + 100000000, i2, ... FROM postgre_a;
However, this query has been running for more than 10 hours now, so I want to do this faster. I tried to do it with COPY, but I cannot find a way to use a COPY FROM statement with a query.
Is there any other method that can do this faster?
You cannot directly use a query in COPY FROM, but maybe you can use COPY FROM PROGRAM with a query to do what you want:
COPY postgre_a
FROM PROGRAM '/usr/pgsql-10/bin/psql -d test'
' -c ''copy (SELECT i1+ 100000000, i2, ... FROM postgre_a) TO STDOUT''';
(Of course you have to replace the path to psql and the database name with your values.)
I am not sure if that is faster than using INSERT, but it is worth a try.
You should definitely drop all indexes and constraints before the operation and recreate them afterwards.
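For example, a rough sketch of that drop-and-recreate pattern (the index and constraint names below are invented; substitute whatever actually exists on postgre_a):
-- Drop constraints and indexes so the bulk load does not maintain them row by row
ALTER TABLE postgre_a DROP CONSTRAINT IF EXISTS postgre_a_pkey;
DROP INDEX IF EXISTS postgre_a_i1_idx;
-- ... run the INSERT ... SELECT or COPY here ...
-- Recreate them afterwards
ALTER TABLE postgre_a ADD CONSTRAINT postgre_a_pkey PRIMARY KEY (i1);
CREATE INDEX postgre_a_i1_idx ON postgre_a (i1);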

Datagrip simple calculation on the Select Query Result

I recently switched from Aginity for Redshift to DataGrip for accessing my Redshift DB. I am wondering whether DataGrip has a function to get simple calculations (like count, sum) on certain columns of the select results, the way Aginity for Redshift does. It is a very handy function when doing a quick check of the returned result. I appreciate your help.

Using SAS to insert records into DB2 database

To give some background, I am using
- Base SAS on the mainframe (executed via JCL) and
- DB2 as the database.
I have the list of keys to read the DB in a mainframe dataset. I understand that we can join a SAS dataset with a DB2 table to read it as follows.
%LET DSN=DSN;
%LET QLF=QUALIFIER;
PROC SQL;
CONNECT TO DB2(SSID=&DSN);
CREATE TABLE STAFFTBL AS
(SELECT * FROM SASDSET FLE,
CONNECTION TO DB2
(SELECT COL1, COL2, COL3
FROM &QLF..TABLE_NAME)
AS DB2 (COL1, COL2, COL3)
WHERE DB2.COL1 = FLE.COL1);
DISCONNECT FROM DB2;
%PUT &SQLXMSG;
QUIT;
Can someone suggest how to proceed if I have a mainframe dataset with the list of values to be inserted?
We can read the mainframe dataset and get the values into a SAS dataset, but I am not able to work out how to use the SAS dataset to insert the values into DB2.
I know we can do it using COBOL, but I would like to learn whether it is possible using SAS.
Thanks!
Solution:
You have to assign a library to write to the DB. Please refer to the SAS manual here.
Your query above creates a local SAS dataset in the Work library, or wherever your default library is declared. That table is not connected to your backend DB2 database; it is simply a copy used as an import into SAS.
Consider establishing a live connection using an ODBC SAS library. If not ODBC, use the DB2 engine that SAS has installed. Once connected, all tables in the specified database appear as available SAS datasets in a SAS library, and these are not imported copies but live tables. Then run a proc sql insert or a proc append to insert records into the table from SAS.
Below are generic examples, with a DSN or without one, which you can modify according to your credentials and database driver type.
* WITH DSN;
libname DBdata odbc datasrc="DSN Name" user="username" password="password";
* WITH DRIVER (NON-DSN) - CHECK DRIVER INSTALLATION;
libname DBdata odbc complete="driver=DB2 Driver; Server=servername;
user=username; pwd=password; database=databasename;";
Append procedures:
* WITH SQL;
proc sql;
INSERT INTO DBdata.tableName (col1, col2, col3)
SELECT col1, col2, col3 FROM SASDATASET;
quit;
* WITH APPEND (ASSUMING COLUMNS MATCH);
proc datasets;
append base = DBdata.tableName
data = SASDATASET
force;
quit;
NOTE: Be very careful not to unintentionally add, modify, or delete any table in the SAS ODBC library, as these datasets are live tables and such changes will be reflected in the backend DB2 database. When finished with your work, do not delete the library (or all its tables will be wiped out); simply unassign it from the environment:
libname DBdata clear;
Provided that you have the necessary write access, you should be able to do this via a proc sql insert into statement. Alternatively, if you can access the DB2 table via a library, it may be possible to use a data step with both a modify and an output / replace statement.
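As a rough sketch of that second idea, assuming a libref DBdata as in the other answer, a shared key column named COL1, and a transaction dataset work.newRecords (all of these are assumptions, and whether MODIFY behaves this way against a DB2 libref should be tested):
data DBdata.tableName;
    modify DBdata.tableName work.newRecords;
    by col1;
    if _iorc_ = %sysrc(_sok) then replace;      /* key found: update the row in place */
    else if _iorc_ = %sysrc(_dsenmr) then do;   /* key not found: add the row */
        _error_ = 0;
        output;
    end;
run;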

Write from SAS Table to DB2 Temp Table

I have a local table in SAS that I am trying to create as a temporary table on a remote DB2 server. Is there any way to do this other than building an insert statement elsewhere and streaming it?
libname temp db2 uid=blagh pwd=blagh dsn=blagh connection=global schema=Session;
Proc SQL;
Connect to db2 (user=blagh pw=blagh dsn=blagh connection=global);
Execute (
Declare Global Temporary Table Session.Test
( foo char(10))
On Commit Preserve Rows
Not Logged
) by db2;
Execute (Commit) by db2;
Insert Into Session.Test
Select Distinct A.foo From Work.fooSource A;
I have tried several variations on this theme, each resulting in errors. The code above produces:
ERROR: Column foo could not be found in the table/view identified with the correlation name A.
ERROR: Unresolved reference to table/correlation name A.
Removing the alias gives me:
ERROR: INSERT statement does not permit correlation with the table being inserted into.
A pass-through statement like the one below should work:
proc sql;
connect to db2 (user=blagh pw=blagh dsn=blagh connection=global);
execute (create view
sasdemo.tableA as
select VarA,
VarB,
VarC
from sasdemo.orders)
by db2;
execute
(grant select on
sasdemo.tableA to testuser)
by db2;
disconnect from db2;
quit;
The code below is what I routinely use to upload to DB2
rsubmit YourServer;
libname temp db2 uid=blagh pwd=blagh dsn=blagh connection=global schema=Session;
data temp.Uploaded_table(bulkload = yes bl_method = cliload);
set work.SAS_Local_table;
run;
endrsubmit;
libname temp remote server=YourServer;
More options for DB2 are available from SAS support... http://support.sas.com/documentation/onlinedoc/91pdf/sasdoc_913/access_dbspc_9420.pdf
I don't know DB2, so I can't say for sure this works, but the 'normal' way to do this is with PROC COPY (although the data step should also work). I would guess that in your code above, DB2 doesn't allow inserts that way (it's fairly common for that not to be supported across SQL flavors).
libname temp db2 uid=blagh pwd=blagh dsn=blagh connection=global schema=Session;
proc copy in=work out=temp;
select work.foosource;
run;
If you need the name to be different (in PROC COPY it won't be), you can do a simple data step.
data temp.yourname;
set work.foosource;
run;
You shouldn't need to do the inserts in SQL. If you want to first declare it in DB2 (in a connect to ... session), you can probably still do that and then use either of these options (though again, this varies somewhat by RDBMS, so test it).
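For example, a sketch that stitches together the question's own libname and DECLARE statement with a plain data step write; this is untested, and the assumption that the SESSION-schema libref shares the same global connection as the pass-through session is something to verify on your system:
* Assign the libref first so the global connection stays open for the whole job;
libname temp db2 uid=blagh pwd=blagh dsn=blagh connection=global schema=SESSION;

proc sql;
    connect to db2 (user=blagh pw=blagh dsn=blagh connection=global);
    execute (
        declare global temporary table session.test
            ( foo char(10) )
            on commit preserve rows
            not logged
    ) by db2;
    execute (commit) by db2;
quit;

* The libref shares that connection, so the temporary table is visible as temp.test;
data temp.test;
    set work.foosource;
run;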