DB2/400 Query - record format level identifiers for all tables in a library

We have multiple copies of the same library for testing, QA, development, etc., consisting of hundreds of tables. Over time these libraries got out of sync and we run into a lot of level check problems. I would like to list all tables whose Record Format Level Identifier differs from that of the corresponding table in a model library. Is this possible using SQL? If not, what other choices do we have?

A quick peek into SYSTABLES didn't show anything, but the QDBRTVFD API has that information in the file definition header. If APIs are not your thing, you can use DSPFD FILE(somelib/*ALL) TYPE(*RCDFMT) OUTPUT(*OUTFILE) FILEATR(*PF *LF) OUTFILE(QTEMP/RCDFMTS) to create a file you CAN use SQL on.
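For example, run DSPFD twice (once over the model library and once over a library under test) into two outfiles and join them. This is a sketch only: the column names (RFFILE, RFNAME, RFID) are taken from the DSPFD *RCDFMT model file QAFDRFMT and should be verified on your release, and the second outfile name RCDFMTS2 is invented here.

    SELECT m.RFFILE, m.RFNAME,
           m.RFID AS MODEL_LVLID,   -- record format level identifier, model library
           t.RFID AS TEST_LVLID     -- same format in the library under test
      FROM QTEMP/RCDFMTS  m
      JOIN QTEMP/RCDFMTS2 t
        ON m.RFFILE = t.RFFILE
       AND m.RFNAME = t.RFNAME
     WHERE m.RFID <> t.RFID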

Related

How can I fill the fields of a table in LibreOffice Base automatically?

I have a database which contains a table of cellphones. Let's say that every cellphone has 10 fields. In order to fill or modify the table I will have several forms available for the user. However, I don't want the user to modify all 10 fields every time. I want him to just give information about 4 of the fields and the rest of them will be automatically filled or modified by a program. Does someone know how to do that? :)
While possible with triggers, macros, or other coding, it's generally bad database practice to have calculated fields or duplicate data stored in tables. Related data should be stored through relationships between tables and displayed in a query, not directly in the table.
So if, say, each store only sells a single color of phone, you would have the user enter only the store. You would have another table that records the relationship between store name and phone color. Then, when you wanted a list of users and their phone colors, you would write a query that takes the list of users (and where they bought their phones) and joins it to the list of stores and the colors they sell, as sketched below.
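For illustration only (every table and column name here is invented), such a query might look like:

    SELECT u.name, s.phone_color   -- each user with the color their store sells
      FROM users u
      JOIN stores s ON s.id = u.id_store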
My advice has three tiers:
Almost certainly best - redesign your database to be more normalized, meaning use relationships between tables to prevent the need for duplicate data.
If you decide you need macros, a good resource for working with OpenOffice macros is Andrew Pitonyak's book OpenOffice Macros Explained (a free download from his website).
SQL triggers are often a cleaner way of doing this (compared to macros) but are not supported by the old database engine that is the Base default. (Base itself only handles queries, forms, and reports. The tables are handled by separate software, which by default is an old version (1.8) of HyperSQL Database, or HSQLDB, "embedded" inside Base.) You would need to upgrade to a newer database engine. Instructions on upgrading to HSQLDB 2.3 are in this thread: [Tutorial] Splitting an "embedded HSQL database"
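As a minimal sketch of the trigger approach under HSQLDB 2.x (the CELLPHONES table and its columns are hypothetical), a SET trigger can fill a derived column before each insert:

    -- Assumed table: CELLPHONES, with a user-entered BRAND column and a
    -- derived ENTERED_ON column the user should not have to type in.
    CREATE TRIGGER fill_defaults
      BEFORE INSERT ON CELLPHONES
      REFERENCING NEW ROW AS newrow
      FOR EACH ROW
      SET newrow.ENTERED_ON = CURRENT_DATE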

Combining two data sources with exact same schema in Tableau

I have two data sources containing lists of orders with the exact same field structure (one is the archive and the other is the active database). I'm accessing them through an OData connection in Tableau.
What I want is to combine these two data sources so the Tableau chart will display all order numbers and information (as opposed to just the active one, which I'm doing with a single data source).
The two tables don't overlap (since whatever is archived is by definition not active), so I cannot join or blend with the primary key Order No. (or any key for that matter).
How can I combine these data sources? Does the fact that the connection is OData make any difference?
For relational databases, the solution is to define custom SQL with two back-to-back SELECT statements (one for each table) separated by the SQL UNION ALL keywords.
I don't know whether OData sources support UNION ALL.
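For the relational case, the custom SQL would look something like this (table and column names are placeholders for your order tables):

    SELECT order_no, order_date, customer, amount FROM orders_active
    UNION ALL
    SELECT order_no, order_date, customer, amount FROM orders_archive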
Create a TDE file for the archive data and add it to the current data extract using the "Append Data from File" option under Data --> Extract.

Migrating a schema from one database to other

As part of a requirement, I need to migrate a schema from an existing database to a new schema in a different database. Some of it is already done, and now I need to compare the two schemas and change the new schema wherever I find gaps.
I am not using a tool and was trying to work out the details using the SYSCAT catalog views, but could not get much success.
Any pointer on what is the best way to solve this?
Regards,
Ramakant
A tool really is the best way to solve this – IBM Data Studio is free and can compare schemas between databases.
Assuming you are using DB2 for Linux/UNIX/Windows, you can do a rudimentary compare by looking at selected columns in SYSCAT.TABLES and SYSCAT.COLUMNS (for table definitions), and SYSCAT.INDEXES (for indexes). Exporting this data to files and using diff may be the easiest method. However, doing this for more complex structures (tables with range or database partitioning, foreign keys, etc) will become very complex very quickly as this information is spread across a lot of different system catalog tables.
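A sketch of the kind of export you would diff, run once per database with your schema name substituted (the column list here is a minimal subset of what SYSCAT offers):

    SELECT t.TABNAME, c.COLNO, c.COLNAME, c.TYPENAME, c.LENGTH, c.SCALE, c.NULLS
      FROM SYSCAT.TABLES t
      JOIN SYSCAT.COLUMNS c
        ON c.TABSCHEMA = t.TABSCHEMA
       AND c.TABNAME   = t.TABNAME
     WHERE t.TABSCHEMA = 'MYSCHEMA'   -- substitute the schema to compare
       AND t.TYPE = 'T'               -- base tables only
     ORDER BY t.TABNAME, c.COLNO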
An alternative method would be to extract DDL using the db2look utility. However, you can't specify the order that db2look outputs objects (db2look extracts DDL based on the objects' CREATE_TIME), so you can't extract DDL for an entire schema into a file and expect to use diff to compare. You would need to extract DDL into a separate file for each table.
Use SchemaCrawler for IBM DB2, a free open-source tool designed to produce text output that diffs cleanly. You can get very detailed information about your schema, including view and stored procedure definitions. All of the information you need is output in a single file and can be compared very easily using a standard diff tool.
Sualeh Fatehi, SchemaCrawler
Unfortunately, per company policy, we cannot use these tools at this point in time, so I am writing a program using JDBC to get the details and do the comparison.

Loading DB2 table rows as Marklogic documents

Is there any tool to quickly convert DB2 table rows into a collection of XML documents that we can load into MarkLogic?
DB2 supports the SQL/XML publishing extensions that were introduced in SQL:2003. These functions include XMLSERIALIZE, XMLELEMENT, XMLATTRIBUTES, and XMLFOREST, and are easily added to a SQL SELECT statement to produce a simple, well-formed XML document for each row in the result set. By writing queries that retrieve the table names and column layouts from DB2's catalog views, it is possible to automate the creation of the XML-publishing SELECT statements for a large number of tables.
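As a sketch against the DB2 sample EMPLOYEE table (the table, column, and element names here are illustrative), such a publishing query might look like:

    SELECT XMLSERIALIZE(
             XMLELEMENT(NAME "employee",
               XMLATTRIBUTES(e.EMPNO AS "id"),
               XMLFOREST(e.LASTNAME AS "lastName",
                         e.WORKDEPT AS "dept",
                         e.SALARY   AS "salary"))
             AS CLOB(1M)) AS DOC      -- one XML document per row
      FROM EMPLOYEE e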
One way of doing this would be to use the MLSQL toolkit ( http://developer.marklogic.com/code/mlsql ). It allows accessing relational databases from within your XQuery code in MarkLogic. Not sure what the returned data actually looks like, but it should be easy to process it within XQuery and insert your data as XML into MarkLogic.
Just make sure not to try to load a million records in one statement; instead, spawn batches of, let's say, 1000 records at a time. Spawning will also allow handling it with multiple threads, so it should be faster for that reason too.
HTH!
Do you need to stream from DB2 to MarkLogic? Or can you temporarily dump all the documents to an intermediary filesystem and then read them in? If you can dump, then simply use some DB2 tooling (like @Fred's answer above) to export the rows to a bunch of XML documents in a filesystem, and use one of the many methods for reading a directory full of XML files into MarkLogic (Information Studio (UI or APIs), RecordLoader, and so on).
If you don't want to store them in the filesystem as an intermediary, then you could write an Information Studio plugin for MarkLogic that pulls out each row and inserts a document into MarkLogic. You'd likely need some web service or REST endpoint that the plugin could call to extract the document data from DB2.
Alternatively, I suspect you could use the DB2 tooling (described by @Fred) that lets you execute some code per row of your table. If you can do that in Java (or .NET), then pull in the MarkLogic XCC APIs, which give you the ability to write documents into MarkLogic.

What applications do you use for data entry and retrieval via ODBC?

What apps or tools do you use for data entry into your database? I'm trying to improve our existing (cumbersome) system that uses a PHP web-based system for entering data one ... item ... at ... a ... time.
My current solution to this is to use a spreadsheet. It works well with text and numbers that are human readable, but not with foreign keys that are used to join with the other table's rows.
Imagine that I want a row of data to include what city someone lives in. The column holding this is id_city, which is keyed to the "city" table which has two columns: id (serial) and name (text).
I envision being able to extend the spreadsheet's capabilities to include dropdown menus for every row of the id_city column that would let the user select a city (displaying the city names as text) while actually storing the chosen city id. This way, the spreadsheet would:
(1) show a great deal of data on each screen and
(2) could be exported as a csv file and thrown to our existing scripts that manually insert rows into the database.
I have been playing around with MS Excel and Access, as well as OpenOffice's suite, but have not found something that gives me the functionality I mention above.
Other items on my wish-list:
(1) dynamically fetch the names of the cities that can be selected by the user.
(2) allow the user to push the data directly into the backend (not via external files/scripts).
(3) If any of the columns of the rows of data gets changed in the backend, the user could refresh the data on the screen to reflect any recent changes.
Do you know how I could improve the process of data entry? What tools do you use? I use PostgreSQL for the backend and have access to MS Office, OpenOffice, as well as web based solutions. I would love a solution that is flexible, powerful, and doesn't require much time to develop or deploy (I know, dream on...)
I know that pgAdmin3 has similar functionality, but from what I have seen, it is more of an administrative tool rather than something for users to use.
As j_random_hacker noted, I've used MS Access for years (since Access 97) to connect to an ODBC Data Source.
You can do this via linking to external tables (in Access 2010):
New -> Blank Database
External Data -> ODBC Database -> Link to Data Source
Machine Data Source -> New -> System Data Source -> Select Driver (Oracle, or whatever) -> Finish
Enter a new name for your DSN, then all of the connection parameters, and click OK.
Select the newly created DSN and hit OK.
You can do so much once Access sees your external table as a linked table, including sorting, filtering, etc. There's one caveat: as far as I can tell, ALL operations happen on the client side unless you're using a pass-through query. That's fine if you're looking at a table with 3000 records. With 2,000,000 records, that hurts. To be clear, all data in the table comes down to the workstation, for all tables being joined, and the join happens client-side, NOT server-side.
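To illustrate the difference, a pass-through query is just SQL text handed to the server, so a join like the one below executes server-side and only the result rows travel back to Access (table and column names are placeholders):

    SELECT o.order_no, c.name
      FROM orders o
      JOIN customers c ON c.id = o.customer_id
     WHERE o.order_date >= '2024-01-01'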
There are usually standalone tools for basic database management - e.g., for Oracle and MySQL a free tool called SQL Developer suffices for basic database data entry.
For more complex types (especially involving clobs) I can usually knock an application together in Java+SWT in a day if we already have the model and DAOs available on the Java side. Yeah, you have to put some effort in, but if it will be used regularly in the future then it is probably worth it.
In your case (well, the case where you have bulk imports of data), knocking up some Perl that reads the CSV and does the city id lookup would be trivial to implement. Maybe a waste for a one-off thing? Depends on the amount of data to import.
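The SQL side of that lookup is simple; against the PostgreSQL backend it could be a per-row statement like this (the person table and its columns are placeholders, while city(id, name) is taken from the question):

    INSERT INTO person (name, id_city)
    SELECT 'Alice', c.id      -- resolve the city name to its serial id
      FROM city c
     WHERE c.name = 'Portland';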
I would be surprised if MS Access can't do what you're looking for -- this is basically the exact use case for it. Namely, quickly throwing together a nice UI for a simple CRUD DB application that a spreadsheet doesn't quite stretch to.
This is an answer, technically, but not a recommendation:
I've used Excel and SSIS for importing simple data entry files into MS SQL, but it's not adequate - there's very little ability to control the data, and SSIS is so very touchy, especially when working with Excel.
MS Access does not work well with some non-Microsoft databases. There is an open-source equivalent called Apache OpenOffice Base you may want to try.