RUNSTATS after running a LOAD utility in DB2

I have a bash script that loads 100 CSV files into one DB2 table using the LOAD utility. My question is: should I execute a RUNSTATS after each file is loaded, or only once all the files are loaded? Currently the script runs a RUNSTATS after each file is loaded, but sometimes the RUNSTATS fails with the error below:
DB2 RUNSTATS throws the exception "The utility could not generate statistics. Error "-911" was returned.. SQLCODE=-2310, SQLSTATE= , DRIVER=3.53.71"
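For reference, here is a minimal sketch of the kind of loop described above; the database name MYDB, the table MYSCHEMA.MYTABLE, the file path, and the LOAD/RUNSTATS options are placeholders, not taken from the actual script:
#!/bin/bash
# Hypothetical loop: LOAD each CSV, then RUNSTATS on the same table.
db2 connect to MYDB
for f in /data/feed*.csv; do
    db2 "LOAD FROM $f OF DEL INSERT INTO MYSCHEMA.MYTABLE NONRECOVERABLE"
    # this is the step that intermittently fails with SQL2310N / error -911 (deadlock or lock timeout)
    db2 "RUNSTATS ON TABLE MYSCHEMA.MYTABLE WITH DISTRIBUTION AND INDEXES ALL"
done
db2 connect reset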

Related

I can't open the file in SQL Shell: Permission Denied

I have a 10 GB PostgreSQL file I am trying to open, but it is not allowed. pgAdmin gives exit code 1 as output and I can't get the data. When I try it in SQL Shell, it says it is not allowed. When I put it in the general file areas, it still gives an error. I set the sharing options for everyone, and it still didn't work.
PostgreSQL 9.6 and pgAdmin 4 v4
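For context, loading a dump that size from SQL Shell (psql) would typically be done with something like the line below; the user, database name, and path are placeholders, not from the original post:
psql -U postgres -d mydb -f "C:/dumps/bigdump.sql"
If the Windows account running psql cannot read that file or the folder it sits in, this step fails with a permission error.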

How to run the load database command of pgloader in a terminal while migrating data?

I want to migrate data from SQLite to PostgreSQL and I decided to use pgloader for that.
However, the instructions are so poor that I do not know how to execute this command:
load database
from sqlite:///Users/dim/Downloads/lastfm_tags.db
into postgresql:///tags
with include drop, create tables, create indexes, reset sequences
set work_mem to '16MB', maintenance_work_mem to '512 MB';
It throws an error that the load command was not found, or if I run it with pgloader load it throws an error telling me to check the available options for pgloader.
The script you posted should be saved as a file. Let's assume you save it as lastfm_tags.load.
Then you can run it as pgloader lastfm_tags.load
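Put differently, assuming pgloader is on your PATH, the whole sequence in a terminal looks roughly like this (the heredoc simply writes the command file shown above):
cat > lastfm_tags.load <<'EOF'
load database
     from sqlite:///Users/dim/Downloads/lastfm_tags.db
     into postgresql:///tags
 with include drop, create tables, create indexes, reset sequences
  set work_mem to '16MB', maintenance_work_mem to '512 MB';
EOF
pgloader lastfm_tags.load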

Import data fails in DB2

I'm using Data Studio to connect to a DB2 server. When I try to use the 'import utility' in Data Studio, it succeeds with a warning, and the result shows that no records have been inserted into the database. The Import wizard generates the following SQL command:
CALL SYSPROC.ADMIN_CMD( 'IMPORT FROM "/home/xyz/backup/TRANSACTION" OF DEL MODIFIED BY coldel| delprioritychar INSERT INTO S.TRANSACTION' );
If I copy this command, paste it into a SQL script in DB2, and then run it, it gives another error:
An I/O error (reason = "sqlofopn -2029060079") occurred while opening the input file.. SQLCODE=-3030, SQLSTATE=
If I use the db2 shell to execute the IMPORT part of the command (without CALL SYSPROC.ADMIN_CMD), it succeeds without any issue. What is wrong here?
When you (or Data Studio) run SYSPROC.ADMIN_CMD (which is the default method used by Data Studio for import), the action happens on the Db2 server using the account of the Db2 instance owner (for Db2-LUW).
That account (for example db2inst1) requires read access to the specified file. In your case, the Db2 instance owner did not have access to the file (and/or the path containing the file), so the exception was thrown.
You may see additional detail in the Db2-server diagnostic file (db2diag.log) for the failed action, depending on the diagnostics level that is active on the Db2-server.
ADMIN_CMD expects the input file to be on the server, because it (like any other stored procedure) runs on the server; it has no access to your local file system.
Commands you run in the Db2 command line processor execute on the client and therefore can access the file locally.
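To illustrate the difference with a rough sketch (reusing the file and table names from the question, run from a shell where the db2 CLP is available):
# executes on the client: the CLP reads /home/xyz/backup/TRANSACTION from the local file system
db2 "IMPORT FROM /home/xyz/backup/TRANSACTION OF DEL MODIFIED BY coldel| delprioritychar INSERT INTO S.TRANSACTION"
# executes on the server: the same path must exist there and be readable by the instance owner (e.g. db2inst1)
db2 "CALL SYSPROC.ADMIN_CMD('IMPORT FROM /home/xyz/backup/TRANSACTION OF DEL MODIFIED BY coldel| delprioritychar INSERT INTO S.TRANSACTION')"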

Insert data into Redshift from Windows txt files

I have 50 txt files on windows and I would like to insert their data into a single table on Redshift.
I created the basic table structure and now I'm having issues with inserting the data. I tried using the COPY command from SQL Workbench/J, but it didn't work out.
Here's the command:
copy feed
from 'F:\Data\feed\feed1.txt'
credentials 'aws_access_key_id=<access>;aws_secret_access_key=<key>'
Here's the error:
-----------------------------------------------
error: CREDENTIALS argument is not supported when loading from file system
code: 8001
context:
query: 0
location: xen_load_unload.cpp:333
process: padbmaster [pid=1970]
-----------------------------------------------;
Upon removing the Credentials argument, here's the error I get:
[Amazon](500310) Invalid operation: LOAD source is not supported. (Hint: only S3 or DynamoDB or EMR based load is allowed);
I'm not a UNIX user so I don't really know how this should be done. Any help in this regard would be appreciated.
@patthebug is correct in that Redshift cannot see your local Windows drive. You must push the data into an S3 bucket first. There are some additional sources you can use per http://docs.aws.amazon.com/redshift/latest/dg/t_Loading_tables_with_the_COPY_command.html, but they seem outside the context you're working with. I suggest you get a copy of Cloudberry Explorer (http://www.cloudberrylab.com/free-amazon-s3-explorer-cloudfront-IAM.aspx), which you can use to copy those files up to S3.
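Once the files are up in S3 (via Cloudberry Explorer or any similar tool), the COPY looks roughly like the sketch below; the bucket and prefix s3://my-bucket/feed/ are placeholders, and the delimiter assumes tab-separated files, so adjust it to match yours:
copy feed
from 's3://my-bucket/feed/'
credentials 'aws_access_key_id=<access>;aws_secret_access_key=<key>'
delimiter '\t';
Because the FROM clause is a key prefix, a single COPY picks up every object under that prefix, so all 50 files can be loaded in one statement.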

Replace Existing File with Temp File:I/O Error

I have an Access 2007 database from which I call a Windows batch file, via a ribbon menu, to retrieve files from an external server. When executing the batch file manually, everything works just fine. When executing it via the Access ribbon menu, the following error appears in the command line:
Opening data connection for ...
> Replace Existing File with Temp File:I/O Error
Binary transfer complete.
I've read something about this error in relation to (admin) rights, but since the batch file actually runs when called by Access, that does not seem to be the issue.