We're copying files uploaded to S3 into Redshift with SQL Workbench/J in our batch, as follows:
java -jar %WBJ_DIR%\sqlworkbench.jar -profile='%WBJ_PROF%' -script='%ScriptFile%' -displayResult=true
The script file contains the COPY command, generated beforehand from the target tables and conditions (which are also used for the file name: 'yyyy-mm-dd_TABLE_NAME.gz').
So far the upload task has been running successfully, but we'd like to add some error capturing to confirm the load result on Redshift.
We're just wondering if there is an option like 'SET NOCOUNT ON' in SQL Server.
If we could get only the record count, with no header and no messages, I think we could pick up the load error count from STL_LOAD_ERRORS in our batch like this:
REM Check SQL (quoted so that cmd.exe does not treat > as a redirection)
set "COMMAND=select count(*) from STL_LOAD_ERRORS where filename='%COPY_FILE%' and starttime > '%bat_start_time%';"
REM Get Error Count (pseudo-code: %COMMAND% would actually be executed through SQL Workbench/J and its single-value output captured here)
for /F "delims=" %%A in ('%COMMAND%') do (
    set REC_CNT=%%A
)
echo Upload Error Count : %REC_CNT%
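Alternatively, if the check can be appended to the same script file that runs the COPY (so it executes in the same Redshift session), something like the following sketch might work; it assumes the session-scoped functions pg_last_copy_id() and pg_last_copy_count() behave as documented:
-- Sketch: must run in the same session as the COPY.
-- Rows loaded by the last COPY in this session:
select pg_last_copy_count();
-- Any load errors recorded for that COPY:
select query, filename, line_number, colname, err_reason
from stl_load_errors
where query = pg_last_copy_id()
order by line_number;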
Of course, any other good way to confirm the load result on Redshift would be appreciated.
Thank you in advance for your advice.
Related
I created a mapping that pulls data from a flat file containing usage data for specific SSRS reports. The file is overwritten each day with the previous day's usage data. My issue is that sometimes a report has no usage for that day, and my ETL sends me a "Failed" email because there wasn't any data in the source. How can I prevent the job from running when there is no data in the source, or at least prevent it from failing?
--Thanks
A simple way to solve this is to create a "Passthrough" mapping that only contains a flat file source, source qualifier, and a flat file target.
You would create a session that runs this mapping at the beginning of your workflow and have it read your flat file source. The target can just be a dummy flat file that you keep overwriting. Then you would have this condition in the link to your next session that would actually process the file:
$s_Passthrough.SrcSuccessRows > 0
Yes, there are several ways you can do this.
You can provide an empty file to the ETL job when there is no source data. To do this, use a pre-session command such as touch <filename> in the Informatica workflow. This creates an empty file with that name if one is not already present, and the workflow will then run successfully with 0 rows.
If you have a script that triggers the Informatica job, you can put a check there as well, like this:
if [ -e <filename> ]
then
pmcmd ...
fi
This way, the job is skipped when the file does not exist.
Have another session before the actual data load. Read the file, use a Filter with a FALSE condition, and write to some dummy target. Link this session to the one you already have and set the following link condition:
$yourDummySessionName.SrcSuccessRows > 0
I am attempting to automate the copying of files from S3 to Redshift. Standing in my way is that I don't know how to make the date portion of the S3 path (2014/11/19 in the example below) always reflect the current date.
COPY cid_localytics_export
FROM 's3://exports.localytics.t-z/<redacted>/2014/11/19/eb0ecf515724b3ef5a6d8cb-a8a0fa68-3551-11e4-4b45-00a426b17dd8/00.log.gz'
CREDENTIALS 'aws_access_key_id=<redacted>;aws_secret_access_key=<redacted>'
JSON 's3://exports.localytics.t-z/<redacted>/redshift_example.json'
GZIP
TRUNCATECOLUMNS;
When I attempt the following I get a syntax error. Any ideas or suggestions would be greatly appreciated.
COPY cid_localytics_export_test
FROM (select ''''||'s3://exports.localytics.t-z/<redacted>/'||(select ltrim(rtrim(replace(replace(cast(date_trunc('day', now()) - interval '1 day' as varchar),'-','/'),' 00:00:00+00',''))))||'/eb0ecf515724b3ef5a6d8cb-a8a0fa68-3551-11e4-4b45-00a426b17dd8/00.log.gz'||'''')
CREDENTIALS 'aws_access_key_id=<redacted>;aws_secret_access_key=<redacted>'
JSON 's3://exports.localytics.t-z/whitepages/redshift_example.json'
GZIP
TRUNCATECOLUMNS;
The Redshift COPY command expects a literal S3 path or prefix (s3://abc/def or s3://abc/def/ijk.csv); the FROM clause cannot evaluate a SELECT, which is why you get a syntax error.
You need to give the correct, fully resolved path for the file.
You can write a simple Java, PHP, or shell script (using the PostgreSQL drivers) that builds the S3 path from the date, assembles the COPY statement, and then runs it, so the date placeholder is replaced with the actual value before the statement reaches Redshift.
That way there will be no syntax errors.
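If it helps, the date fragment itself can be computed on Redshift first and then spliced into the COPY text by that wrapper script. A rough sketch (it assumes the wrapper captures this single value; the path reuses the question's example, and current_date - 1 matches the 'previous day' logic in the attempt above):
-- Sketch: run this first, capture the one-row result, then substitute it
-- into the FROM 's3://...' clause of the COPY statement.
select 's3://exports.localytics.t-z/<redacted>/'
       || to_char(current_date - 1, 'YYYY/MM/DD')
       || '/eb0ecf515724b3ef5a6d8cb-a8a0fa68-3551-11e4-4b45-00a426b17dd8/00.log.gz' as s3_path;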
I have two directories of detached SQL Server files, one for .mdf's and one for .ldf's. Are there any scripts in T-SQL or PowerShell that can pick these files up and attach them to a SQL Server instance without manually entering every specific file?
Occasionally, unattached data and log files are dumped to these locations, so ideally I would like not to have to update the script every time.
You can schedule a job to execute the following T-SQL statements, and it will do the job for you:
USE [master]
GO
CREATE DATABASE [DataBase_Name] ON
( FILENAME = N'C:\Data\DataBase_Name_Data.mdf' ), --<---- Path to your .mdf file------
( FILENAME = N'C:\Data\DataBase_Name_Log.ldf' ) --<---- Path to your .ldf file------
FOR ATTACH
GO
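To avoid editing the statement for every new pair of files, the attach can also be driven from a directory listing. A rough sketch follows; it assumes the undocumented xp_dirtree procedure is available, that the files follow a Name_Data.mdf / Name_Log.ldf naming convention, and that C:\Data and C:\Logs stand in for your two folders:
-- Enumerate the data folder and build one CREATE DATABASE ... FOR ATTACH per .mdf found.
CREATE TABLE #files (subdirectory NVARCHAR(260), depth INT, [file] INT);

INSERT INTO #files (subdirectory, depth, [file])
EXEC master.sys.xp_dirtree N'C:\Data', 1, 1;

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + N'CREATE DATABASE ' + QUOTENAME(REPLACE(subdirectory, N'_Data.mdf', N'')) + N' ON '
    + N'( FILENAME = N''C:\Data\' + subdirectory + N''' ), '
    + N'( FILENAME = N''C:\Logs\' + REPLACE(subdirectory, N'_Data.mdf', N'_Log.ldf') + N''' ) '
    + N'FOR ATTACH; '
FROM #files
WHERE [file] = 1
  AND subdirectory LIKE N'%[_]Data.mdf'
  AND DB_ID(REPLACE(subdirectory, N'_Data.mdf', N'')) IS NULL;  -- skip databases that are already attached

EXEC sys.sp_executesql @sql;

DROP TABLE #files;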
Found the solution:
http://gallery.technet.microsoft.com/scriptcenter/Attach-all-the-databases-ace9ed34
A few tweaks here and there and it did exactly what I was looking for.
I want to know if there is any way for me to deploy a stored procedure (.sql file) that is checked in to VSS. I have a couple of procedures that I want to deploy to SQL Server, and I'm trying to create a batch file to deploy them from VSS to SQL Server.
I want to achieve this because we would like to remove direct access to SQL Server, so that every step we take on the procedures can be monitored.
Thanks!
EDIT:
I have also read that this is possible in PowerShell. If anyone can point me to a good way to do it, that would be much appreciated. I'm new to VSS, batch files, and PowerShell, so I'm a little confused about where to start. Thanks!
This is what I have so far, but it doesn't work:
@echo off
cls
set path=%path%;C:\Program Files\Microsoft Visual SourceSafe
set ssdir=\\MySampel_VSS\VSS\SampleDB
set Recursive=Yes
set /p SName=Server Name :
set /p UName=User Name :
set /p Pwd=Password :
set /p DbName=Database Name :
set /p choice=ARE YOU SURE TO EXECUTE SCRIPTS in %DbName% (y/n) ?
if '%choice%'=='y' goto begin
goto end
:begin
if exist C:\Scripts\error.txt del C:\Scripts\error.txt
@echo on
sqlcmd -S %SName% -U %UName% -P %Pwd% -d %DbName% -I -i $/Database/SampleDB/Procedures/MySample.sql >> error.txt 2>&1
notepad error.txt
:end
You need to use the Visual SourceSafe command line (ss.exe) in your batch file to deploy a project that is checked in; sqlcmd needs a real file path, so you cannot point -i at a VSS path like $/Database/... directly. Take a look at the Checkout and Deploy commands in particular.
I am trying to export a row to a DEL file using:
EXPORT TO ExportedFile.DEL OF DEL SELECT * From MyTable
However, when I execute the query I get this:
Error: SQL0104N An unexpected token "ExportedFile" was found following "EXPORT TO ". Expected tokens may include: "JOIN". SQLSTATE=42601
I understand that EXPORT is not a supported command in WinSQL, but is there a supported alternative?
EXPORT is a utility – not an SQL statement – and is handled by the DB2 command line processor.
IBM has made it possible to run the EXPORT utility from an SQL statement by using the ADMIN_CMD stored procedure:
call sysproc.admin_cmd('export to /tmp/file.del of del select * from table')
Note that, unlike using the EXPORT utility directly on your machine with the DB2 client, this will write the file to the specified path on the database server. You will have to retrieve the file from the server manually.
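For the single row from the original question, the same pattern works with any valid SELECT inside the call, for example (a sketch: the server-side path and the id filter column are placeholders):
call sysproc.admin_cmd('export to /tmp/one_row.del of del select * from MyTable where id = 1')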
WinSQL only passes its statements to the back-end database via ODBC. Therefore, I'd like to add to Ian Bjorhovde's answer and recommend that you enter that call into the query window for DB2 in WinSQL, namely:
call sysproc.admin_cmd('export to /tmp/file.del of del select * from table')