Importing an Access DB file in SAS fails with 'DBMS type ACCESS not valid for import'

I am trying to run the code below, but it fails with the error 'ERROR: DBMS type ACCESS not valid for import.' I am not sure what the issue is.
LIBNAME db 'C:\Extra';
PROC IMPORT DBMS=ACCESS
    OUT=WORK.Finished_IP
    DATATABLE='IP Input Data'
    REPLACE;
    DATABASE="db.Finished data.accdb";
    USEDATE=YES;
    SCANTIME=NO;
    DBSASLABEL=NONE;
RUN;

If you have the "Microsoft Access Driver (*.mdb, *.accdb)" driver installed, you can use it to assign a libname directly to the database file, like so:
libname Test ODBC noprompt="DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};
DBQ=C:\local\Your_DB.accdb";
You can check whether you have the driver in the ODBC Data Source Administrator, found under Administrative Tools in the Control Panel.
Using this method means that tables in the database can be read in like normal SAS datasets.
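For example, once the libname is assigned, a table can be copied into WORK with an ordinary data step. A minimal sketch, assuming the 'IP Input Data' table from the question (the target dataset name is arbitrary):

data work.ip_input;
    /* the trailing 'n marks a name literal, needed because the table name contains spaces */
    set Test.'IP Input Data'n;
run;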

Related

DB2 z/OS - Revalidate UDF and Stored Procedure

On DB2 AIX I can use the SYSPROC.ADMIN_REVALIDATE_DB_OBJECTS stored procedure to revalidate all Stored Procedures and Functions defined in my schema.
How can I do the same thing on DB2 z/OS (v.12)?
Thanks
REGENERATE automatically rebinds, at the local server, the package for the SQL control statements for the procedure and rebinds the package for the SQL statements that are included in the procedure body. If a remote bind is also needed, the BIND PACKAGE COPY command must be explicitly done for all of the remote servers.
ALTER PROCEDURE SCHEMA.NAME_SP REGENERATE ACTIVE VERSION;
For the moment I have not found anything that automatically revalidates/regenerates all stored procedures/UDFs of a schema.
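As a partial workaround, the ALTER statements can be generated from the catalog and the output run as a script. A sketch for the stored procedures of one schema (MYSCHEMA is a placeholder; native SQL procedures are assumed, since REGENERATE applies to those):

SELECT 'ALTER PROCEDURE ' || STRIP(SCHEMA) || '.' || STRIP(NAME)
       || ' REGENERATE ACTIVE VERSION;'
  FROM SYSIBM.SYSROUTINES
 WHERE SCHEMA = 'MYSCHEMA'
   AND ROUTINETYPE = 'P';   -- 'P' = stored procedure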

How do I dump data from an Oracle database without access to the database's file system

I am trying to dump the schema and data from an existing Oracle DB and import it into another Oracle DB.
I have tried using the "Export Wizard" provided by SQL Developer.
I found answers using Oracle Data Pump, however I do not have access to the file system of the DB server.
I expect to get a file that I can copy and import into another DB.
Without Data Pump, you have to make some concessions.
The biggest concession is that you're going to ask a client application, running somewhere on your network, to deal with a potentially HUGE amount of data/IO.
Within reasonable limits, you can use the Tools > Database Export wizard to build a series of SQL*Plus-style scripts, both DDL (CREATEs) and DATA (INSERTs).
Once you have those scripts, you can use SQL*Plus, SQLcl, or SQL Developer to run them on your new/target database.
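For example, assuming the wizard wrote export_ddl.sql and export_data.sql (file names are placeholders), the scripts can be replayed on the target from SQL*Plus or SQLcl:

-- connect to the target database (connection details are placeholders)
CONNECT target_user@//target-host:1521/target_service
-- run the generated DDL first, then the INSERT scripts
@export_ddl.sql
@export_data.sql
COMMIT;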

Connecting to PostgreSQL Data Source in SQL Server Import Export Tool

I'm trying to set up an easily replicable (or even manual, done once a month or so) process for moving data from a large Azure PostgreSQL database to a more manageable Azure SQL database for end users who are most familiar with SQL Server. I've successfully connected to the PostgreSQL database via pgAdmin, so I know all my connection string info.
I started by installing the latest ODBC driver from here.
I then used a connection string that was given to me by the Azure portal, filled in the proper database name and password, and attempted to use the following drivers:
PostgreSQL ODBC Driver(UNICODE)
PostgreSQL ODBC Driver(ANSI)
I am getting the following error with either of them:
ERROR [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
What step am I missing in this process? Or how best can I troubleshoot this?
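For reference, the connection string followed this general pattern (server, database, and credentials are placeholders):

Driver={PostgreSQL Unicode};Server=myserver.postgres.database.azure.com;Port=5432;Database=mydb;Uid=myuser;Pwd=mypassword;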
After more research, I attempted to add the ODBC driver as a data source, and got a further error that references Tableau (I'm not sure why Tableau is relevant to this?).
Thank you.

Export ixf in db2

EXPORT TO myFile.ixf OF ixf SELECT * FROM TABLE_NAME WHERE SSN='DATA' AND EMPLOYER_ID=DATA AND CREATED_TS='DATA'
I am using this statement to export a couple of rows; for privacy purposes, DATA has been inserted where necessary. However, the following error is produced. I have followed IBM's guide on EXPORT and feel this should be correct, but I am unsure exactly what is wrong. The error log is as follows:
Error: DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601, SQLERRMC=myFile;EXPORT TO ;JOIN, DRIVER=3.53.70
SQLState: 42601
ErrorCode: -104
As already remarked, you cannot directly run Db2 commands (such as IMPORT, EXPORT, LOAD, etc.) from plain SQL, as you are trying to do via JDBC.
Instead, if your Db2 server runs on Linux/Unix/Windows, you can either use a stored procedure, or (for any Db2 server operating system) you can use the command line.
However, when you use stored-procedure SYSPROC.ADMIN_CMD for Db2-LUW, all file-names in stored-procedure parameters are relative to the Db2-server (and not your remote jdbc-client, if you are running remotely).
That means after a successful export via stored-procedure, if you really need the exported IXF file to be on your workstation then you must do file-transfer to your workstation using whatever tools you have for that purpose.
For example, this shows an export on Unix to an IXF file in /tmp on the Db2-server:
call sysproc.admin_cmd('EXPORT TO /tmp/myFile.ixf OF ixf SELECT * FROM user1.stk1 with ur') ;
If you don't want to use a stored procedure, you must use the command-line shell (for example, on Windows use db2ntcmd.bat, or on Unix use bash or ksh), connect to the database in the shell, and perform the export there. This requires the workstation to have a Db2 client, and the relevant database and node must first be catalogued.
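For example, from a Db2 command-line session on the workstation (host, port, database name, and credentials are placeholders), the exported file then lands on the workstation itself:

db2 CATALOG TCPIP NODE mynode REMOTE db2host.example.com SERVER 50000
db2 CATALOG DATABASE mydb AT NODE mynode
db2 CONNECT TO mydb USER myuser
db2 "EXPORT TO myFile.ixf OF IXF SELECT * FROM table_name WHERE ssn = '...'"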
If you specify your Db2 version and the operating system on which your Db2 server runs, then you will get more details.

Connecting SAS 9.2 with Amazon Redshift

I need to create reports/summary tables on Redshift using SAS. My client's data is on Amazon Redshift and he provided me all the credentials to access the database. I have SAS 9.2 (32-bit) and downloaded the PostgreSQL 32-bit driver to my system (as Redshift is based on PostgreSQL). I set up the ODBC data source successfully and now I am connecting from SAS using the command below:
LIBNAME RdSft ODBC DSN='Redshift server' user='xxxxxxx' pw='xxxxxx';
data Rdsft.new_table;
    set Rdsft.old_table(obs=10);
run;
I am able to connect and can see the contents of tables on Redshift, but I am not able to create any table there. Sometimes I can, but it takes hours to create a table with just 10 observations. Someone suggested I use DbVisualizer for this task, but I am comfortable with SAS only.
Please suggest.
If you have SAS/ACCESS, try using the postgres engine for the library instead of going via ODBC, e.g.:
libname RdSft postgres server="<server-address>" database=<db-name> port=5432 user='xxxxxxx' pw='xxxxxx';
Also, try adding conopts="UseServerSidePrepare=1" to the libname as suggested by this article: http://support.sas.com/kb/52/585.html
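Put together, the libname might look like this; a sketch only, with the server address, database name, and credentials as placeholders (5439 is Redshift's default port, so use whatever your cluster actually listens on):

libname RdSft postgres server="example.redshift.amazonaws.com"
        database=mydb port=5439
        user='xxxxxxx' pw='xxxxxx'
        conopts="UseServerSidePrepare=1";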
The simple fact of the matter is that when you're connecting to Redshift via ODBC, even your simple data step query:
data Rdsft.new_table;
    set Rdsft.old_table(obs=10);
run;
is essentially translated to "select * from rdsft.old_table" before the obs= subset is applied.
The SAS/ACCESS postgres solution is solid; you may also want to use PROC SQL, select only the columns you want, and subset as much as possible, as in the sketch below. PROC SQL translates more readily into Redshift's query language through ODBC than the data step does.
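A minimal sketch (the column names and WHERE clause are hypothetical):

proc sql;
    create table RdSft.new_table as
    select col1, col2                  /* keep only the columns you need */
    from RdSft.old_table
    where col3 = 'some_value';         /* subset in the query so the work happens in Redshift */
quit;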
SAS will hopefully be issuing a SAS/ACCESS for REDSHIFT option sometime soon! :)