Is there a common set of catalog tables I can use between DB2 UDB (iSeries) and DB2 for Linux, UNIX and Windows?
Currently the Ruby Sequel gem uses a schema called SYSCAT, which does not exist on DB2 UDB for iSeries.
I have attempted to locate some commonalities between them in the IBM docs, but I can't figure out the best catalog tables to use. It seems like DB2 UDB for iSeries has three different sets that can be used. Is there a compatible set of catalog tables to use?
With DB2 for IBM i there are three different sets of catalog views available:
IBM i catalog views are stored in schema QSYS2, for example QSYS2.SYSTABLES.
ODBC / JDBC catalog views are stored in schema SYSIBM, for example SYSIBM.SQLTABLES.
ANSI/ISO catalog views have two schemas: INFORMATION_SCHEMA is for low-privilege users and SYSIBM is for high-privilege users, for example INFORMATION_SCHEMA.TABLES or SYSIBM.TABLES.
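If you need to pick a catalog view per platform at runtime (as an adapter like Sequel would), the choice can be table-driven. A minimal Python sketch; the flavor keys and the helper name are made up for illustration, while the view names are the ones listed above:

```python
# Table-driven choice of a "tables" catalog view per DB2 flavor.
# Flavor keys and helper name are hypothetical.
CATALOG_TABLES = {
    "db2_luw": "SYSCAT.TABLES",                 # DB2 for Linux, UNIX and Windows
    "db2_i": "QSYS2.SYSTABLES",                 # DB2 for IBM i native catalog
    "db2_i_odbc": "SYSIBM.SQLTABLES",           # ODBC / JDBC catalog
    "db2_i_ansi": "INFORMATION_SCHEMA.TABLES",  # ANSI/ISO catalog
}

def tables_query(flavor):
    """Build a catalog query listing tables for the given DB2 flavor."""
    try:
        view = CATALOG_TABLES[flavor]
    except KeyError:
        raise ValueError("unknown DB2 flavor: %s" % flavor)
    return "SELECT * FROM " + view
```

The column names returned by each view differ, so in practice each flavor also needs its own mapping from catalog columns to the adapter's schema model.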
There are a bunch of system tables on the iSeries and plenty of views; I don't know if they match LUW's. The tables and views are in QSYS2:
AUTHIDS
AUTHORIZATIONS
CATALOG_NAME
CHARACTER_SETS
CHARACTER_SETS_S
CHECK_CONSTRAINTS
CHECK_CSTS
COLPRIV
COLUMN_PRIVILEGES
COLUMNS
COLUMNS_S
CONDENSEDINDEXADVICE
CONDIDXA
DBMON_QUERIES
FCN_INFO
FCN_USAGE
FUNCTION_INFO
FUNCTION_USAGE
GROUP_PROFILE_ENTRIE >
GROUP_PTF_INFO
GROUPLIST
GRPPTFINFO
INFORMATION_SCHEMA_C >
JOURNAL_INFO
JRNINFO
JVM_INFO
LIBLIST
LIBRARY_LIST_INFO
LOCATIONS
PARAMETERS
PARAMETERS_S
PARM_S
PGMSTMSTAT
PKGSTMSTAT
PROCEDURES
PTF_INFO
REF_CONSTRAINTS
REF_CST1
REF_CST2
REFERENTIAL_CONSTRAI
REPLY_LIST_INFO
REPLYLIST
ROUTINE_PRIVILEGES
ROUTINES
ROUTINES_S
RTNPRIV
SCHEMATA
SCHEMATA_S
SERVER_SBS_ROUTING
SQL_FEATURES
SQL_LANG_S
SQL_LANGUAGES
SQL_LANGUAGES_S
SQL_SIZING
SQLQMAUDIT
SQLQMPROF
SQLQMPROFILES
SQLQMPROFILESAUDIT
SRVRSBSRTG
SYSCATALOGS
SYSCAT1
SYSCAT2
SYSCHKCST
SYSCHRSET1
SYSCHRSET2
SYSCOLAUTH
SYSCOLUMNS
SYSCOLUMNSTAT
SYSCOLUMNS2
SYSCOLUMN2
SYSCST
SYSCSTAT
SYSCSTCOL
SYSCSTDEP
SYSDISKS
SYSDISKSTAT
SYSFEATURE
SYSFIELDS
SYSFUNCS
SYSINDEXES
SYSINDEXSTAT
SYSIXADV
SYSIXSTAT
SYSJARCONT
SYSJARCONTENTS
SYSJAROBJ
SYSJAROBJECTS
SYSKEYCST
SYSKEYS
SYSLANGS
SYSLIMITS
SYSMQTSTAT
SYSPACKAGE
SYSPACKAGEAUTH
SYSPACKAGESTAT
SYSPACKAGESTMTSTAT
SYSPARMS
SYSPARTITIONDISK
SYSPARTITIONINDEXDIS
SYSPARTITIONINDEXES
SYSPARTITIONINDEXSTA
SYSPARTITIONMQTS
SYSPARTITIONSTAT
SYSPDISK
SYSPGSTAT
SYSPIDISK
I am using DB2 9.7 (LUW) on a Windows server, in which multiple databases are available in a single instance. I just found that in one of these databases I am unable to add a column with the DATE data type, during table creation or altering: the column being added gets changed to TIMESTAMP instead.
Any help on this will be welcome.
Check your Oracle compatibility setting.
Depending on that setting, a DATE is interpreted as TIMESTAMP(0), as in your example.
Because the setting only takes effect for databases created after the DB2_COMPATIBILITY_VECTOR registry variable was set, databases in the same instance can show different behaviour.
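For example, a sketch of how the registry variable is set before a database is created (ORA turns on all of the Oracle compatibility features, including DATE-as-TIMESTAMP(0); a narrower hex vector can be used instead, and the database name here is made up):

```
db2set DB2_COMPATIBILITY_VECTOR=ORA
db2stop
db2start
db2 create database testdb
```

A database created before the variable was set keeps its original behaviour, which would explain why only some databases in your instance are affected.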
I have a set of daily CSV files of uniform structure which I will upload to S3. There is a downstream job which loads the CSV data into a Redshift database table. The number of columns in the CSV may increase and from that point onwards the new files will come with the new columns in them. When this happens, I would like to detect the change and add the column to the target Redshift table automatically.
My plan is to run a Glue Crawler on the source CSV files. Any change in schema would generate a new version of the table in the Glue Data Catalog. I would then like to programmatically read the table structure (columns and their datatypes) of the latest version of the Table in the Glue Data Catalog using Java, .NET or other languages and compare it with the schema of the Redshift table. In case new columns are found, I will generate a DDL statement to alter the Redshift table to add the columns.
Can someone point me to any examples of reading Glue Data Catalog tables using Java, .NET or other languages? Are there any better ideas to automatically add new columns to Redshift tables?
If you want to use Java, use the dependency:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-glue</artifactId>
    <version>{VERSION}</version>
</dependency>
And here's a code snippet to get your table versions and the list of columns:
AWSGlue client = AWSGlueClientBuilder.defaultClient();
GetTableVersionsRequest tableVersionsRequest = new GetTableVersionsRequest()
        .withDatabaseName("glue_catalog_database_name")
        .withTableName("table_name_generated_by_crawler");
GetTableVersionsResult results = client.getTableVersions(tableVersionsRequest);
// Here you have all the table versions; at this point you can check for new ones
List<TableVersion> versions = results.getTableVersions();
// And here is how to get to the columns of the most recent version
List<Column> tableColumns = versions.get(0).getTable().getStorageDescriptor().getColumns();
Here you can see AWS Doc for the TableVersion and the StorageDescriptor objects.
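Once you have the Glue columns and the Redshift table's current columns, the comparison and DDL generation is plain string work. A minimal Python sketch; the table name, columns and types are hypothetical, and real code would also map Glue types to Redshift types:

```python
# Given the Glue catalog columns (name -> type) and the set of columns
# currently on the Redshift table, emit ALTER TABLE statements for
# anything new. Names and types here are made up for illustration.
def new_column_ddl(table, glue_columns, redshift_columns):
    """Return ALTER TABLE ... ADD COLUMN statements for columns that are
    in the Glue catalog but missing from the Redshift table."""
    return [
        "ALTER TABLE %s ADD COLUMN %s %s;" % (table, name, dtype)
        for name, dtype in glue_columns.items()
        if name not in redshift_columns
    ]

glue_cols = {"id": "BIGINT", "amount": "DECIMAL(10,2)", "region": "VARCHAR(32)"}
print(new_column_ddl("sales", glue_cols, {"id", "amount"}))
# -> ['ALTER TABLE sales ADD COLUMN region VARCHAR(32);']
```

Note that Redshift only supports adding one column per ALTER TABLE statement, which is why the sketch emits a list rather than a single statement.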
You could also use the boto3 library for Python.
Hope this helps.
Is it possible to access an old data state in a DB2 database?
Oracle has the clause SELECT ... AS OF TIMESTAMP to do it. Does DB2 have something like it?
Yes, you can select the set of rows that was / will be valid at a past / future point in time. This is called Time Travel Query in DB2, but you have to create (or alter) the table with extra timestamp columns in order to activate the feature. This is new in DB2 10, though I think it is not available in all editions.
For more information, take a look at this: http://www.ibm.com/developerworks/data/library/techarticle/dm-1204db2temporaldata/
Remember, there are two concepts: system time and business time (also called application time); a table that uses both is called bitemporal.
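As a sketch of what the system-time setup looks like in DB2 10 (the table and column names here are made up; see the linked article for the full details):

```sql
-- Base table with the three generated timestamp columns and a SYSTEM_TIME period
CREATE TABLE policy (
    id        INT NOT NULL PRIMARY KEY,
    coverage  INT,
    sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
    sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
    ts_id     TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS TRANSACTION START ID,
    PERIOD SYSTEM_TIME (sys_start, sys_end)
);

-- History table plus versioning, so DB2 keeps the old row images
CREATE TABLE policy_history LIKE policy;
ALTER TABLE policy ADD VERSIONING USE HISTORY TABLE policy_history;

-- The DB2 counterpart of Oracle's SELECT ... AS OF TIMESTAMP
SELECT * FROM policy
  FOR SYSTEM_TIME AS OF TIMESTAMP('2013-01-01-10.00.00');
```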
I am a SQL Server developer familiar with SSIS, but this is my first time working with SSAS. I am trying to learn it from the free video tutorials offered by Microsoft. In the tutorial, when they create a Data Source View from tables in the AdventureWorks database, the schema shows a relationship between DimDate and FactInternetSales with three connecting columns/lines, while the other tables have one connecting column/line each.
But when I tried to do the same, the schema shows no relationship between DimDate and FactInternetSales. Note: the other tables had one connection each, same as in the video tutorial.
Please advise.
Thanks,
Vanu
Each date key in the Fact table references the key of the DimDate dimension. So, in your example, three connecting lines in the schema view mean the Fact table contains three different date keys. These could be something like Start_Date_Key, End_Date_Key and As_of_Date_Key, which would all point to the DimDate.Date_Key key field (DimDate acting as a role-playing dimension). With this in mind, your row would look something like the following:
Start_Date_Key = 20120101
As_of_Date_Key = 20120605
End_Date_Key = 20150101
SomeotherKey = 10
Sales_Product_Key = 5
Hope this helps.
I don't usually work with linked servers, and so I'm not sure what I'm doing wrong here.
A query like this works against a linked FoxPro server from SQL Server 2000:
EXEC('Select * from openquery(linkedServer, ''select * from linkedTable'')')
However, from research on the internet, something like this should also work:
Select * from linkedserver...linkedtable
but I receive this error:
Server: Msg 7313, Level 16, State 1, Line 1
Invalid schema or catalog specified for provider 'MSDASQL'.
OLE DB error trace [Non-interface error: Invalid schema or catalog specified for the provider.].
I realize it's supposed to be ServerAlias.Catalog.Schema.TableName, but if I run sp_tables_ex on the linked server, the catalog listed for every table is just the network path to where the data files are, and the schema is null.
Is this server setup incorrectly? Or is what I'm trying to do not possible?
From MSDN:
"Always use fully qualified names when working with objects on linked servers. There is no support for implicit resolution to the dbo owner name for tables in linked servers."
You cannot rely on the implicit schema name resolution of the '..' notation for linked servers. For a FoxPro 'server' you're going to have to use the database and schema as they map to their FoxPro counterparts in the driver you use (I think they map to folder and file name, but I haven't used an ISAM file driver in more than 10 years now).
I think you need to be explicit about resources in the linked server part of the query, for example:
EXEC SomeLinkedServer.Database.dbo.SomeStoredProc
In other words, just dotting them out doesn't work in this case; you have to be more specific.
It's actually:
ServerAlias.Catalog.Schema.LinkedTable
Catalog is the database that you're querying on the linked server, and Schema is the schema (owner) of the remote table. So a valid four-part name would look like this:
ServerAlias.AdventureWorks.HumanResources.Employee
or
ServerAlias.MyDB.dbo.MyTable