Where to start learning DB2 programming? [closed] - db2

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 6 years ago.
I am going to use a legacy DB2 on z/OS as the database in a banking project. I am proficient in programming on Oracle. I've also used MySQL and SQL Server for many years, but I know nothing about DB2, its SQL dialect, or its procedural language for writing stored procedures and functions.
I am looking for good resources to learn DB2 architecture, its SQL dialect, and its procedural language.
Thank you very much

Assuming you don't have a z10 EC and licences to use DB2/z at home, the first step is to get DB2/LUW (the Linux/Unix/Windows version). The Express edition is here.
Then head on over to publib, the first site anyone should go to for IBM product-related information.
And the Redbooks are another very good source of information. IBM employees frequently get time off to do these (I say "time off" but it's actually very gruelling, believe me).
As for the mainframe product, it's not always an exact match for LUW but it is close. Stored procedures can be written in any of the languages available on the mainframe (we mostly use REXX) and I think you can also use all the UNIX (USS) toolchain as well if you'd prefer bash, Perl and tools you may be more familiar with.
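If you want to try things out from code while you learn, here is a minimal JDBC sketch against a local DB2 LUW install (it assumes the IBM JDBC driver jar is on the classpath, a SAMPLE database on the default port 50000, made-up credentials, and a hypothetical stored procedure called GET_CUSTOMER; adjust to your own setup):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class Db2Smoke {
        public static void main(String[] args) throws Exception {
            // Type 4 JDBC URL for DB2 LUW; host, port, database name and credentials are assumptions
            String url = "jdbc:db2://localhost:50000/SAMPLE";
            try (Connection con = DriverManager.getConnection(url, "db2admin", "secret")) {
                // Dynamic SQL: DB2's equivalent of Oracle's "dual" is SYSIBM.SYSDUMMY1
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
                    rs.next();
                    System.out.println("Server time: " + rs.getTimestamp(1));
                }
                // Calling a stored procedure (GET_CUSTOMER is hypothetical; create your own first)
                try (CallableStatement cs = con.prepareCall("CALL GET_CUSTOMER(?)")) {
                    cs.setInt(1, 42);
                    cs.execute();
                }
            }
        }
    }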

There is a bunch of information on the DB2 Infocenters hosted at IBM. The Infocenter pages are version-specific; here is a link to an Infocenter that includes information on DB2 UDB for z/OS v8 and DB2 v9.1 for z/OS:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=/com.ibm.db2.doc/db2prodhome.htm

You can find a lot of reference material in the IBM Redbooks; see for example this URL (sorry, it's in French) for some links to IBM sites (DB2-centric).

You will already know the basic principles of tables and SQL from Oracle.
There are numerous annoying differences in SQL function names and some keywords, but that shouldn't slow you down too much.
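To give a flavour of those differences, here is a small, hedged sketch (query strings only; EMP and BONUS are invented example names):

    /** A few Oracle-vs-DB2 dialect differences, as query strings (EMP and BONUS are invented names). */
    final class DialectExamples {
        // Current date/time: Oracle's dual vs DB2's SYSIBM.SYSDUMMY1
        static final String ORACLE_NOW = "SELECT SYSDATE FROM dual";
        static final String DB2_NOW    = "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1";

        // Limiting rows: ROWNUM vs FETCH FIRST n ROWS ONLY
        static final String ORACLE_TOP_TEN = "SELECT * FROM emp WHERE ROWNUM <= 10";
        static final String DB2_TOP_TEN    = "SELECT * FROM emp FETCH FIRST 10 ROWS ONLY";

        // Null defaults: NVL is the Oracle spelling, COALESCE the portable one
        // (recent DB2 releases also accept NVL, but COALESCE always works)
        static final String ORACLE_DEFAULT = "SELECT NVL(bonus, 0) FROM emp";
        static final String DB2_DEFAULT    = "SELECT COALESCE(bonus, 0) FROM emp";
    }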
Internally DB2 is vastly different from Oracle, especially in the way storage is allocated and the way locking and transactions are implemented. This should not bother you too much unless they expect you to do some intense performance and tuning work.
The main areas of difference are specific to z/OS rather than to DB2 itself. Firstly, most mainframe programs are written in COBOL or PL/I to run inside either the CICS or IMS transaction monitor (think J2EE containers, but for COBOL), and usually these programs use "static" SQL. So it's definitely worth reading the manual on how static SQL programs are written and implemented. The programming is actually easier, as the precompiler does most of the hard work and delivers the data to actual fields in your program, but there is extra messing around with DBRMs: basically the SQL is stripped from the source code and stored in a file, and before you run a program that file must be bound into the target database (using BIND PLAN). At that point the optimisation is done and the access plan is built, so when you come to run your program there is an access plan ready and waiting.
The second major pain is that you will need to learn JCL, which is a pretty unique hangover from the very first 360 series, circa 1968. Think of it as a very primitive Ant script!

FREE book: Getting Started with DB2 Express-C
Find out what DB2 Express-C is all about
Understand DB2 architecture, tools, security
Learn how to administer DB2 databases
Write SQL, XQuery, stored procedures
Develop database applications for DB2
Practice using hands-on exercises

Related

If ADO.NET is "outdated", then why is it that everyone still uses it in solutions with SQL?

I was having a discussion the other day regarding connecting to databases and using stored procedures and such, and I was told that ADO.NET is outdated and not really being used. But everywhere you look for solutions on connecting with SQL or interacting with SQL, everyone is still using ADO. I was told that the days of using ADO are long gone and that now it's Entity Framework that is widely used. Is this because fewer people are learning ADO? Or is it for a separation of programmers and SQL developers?

Any Postgres compatible ORM for Node.js? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
I'm looking for a good ORM for Postgres under Node.js, one that supports declaring relationships between models and field validation. I've searched for a long time and cannot find any satisfying results. Maybe someone can point me to a project I missed during my research.
Thx.
node-orm2 looks good: it supports associations, validators, and MySQL, Postgres and Mongo (in beta).
UPDATE: The node-orm2 package is no longer maintained. Possible alternatives include bookshelf or sequelize.
SequelizeJS - models, validation and migrations
BookshelfJS - a promise based ORM looks quite promising
JugglingDB - multidatabase ORM inspired by activerecord and datamapper. Supports validations, hooks, relations. Works with: mysql, postgres, sqlite, memory, redis, mongodb, neo4j.
Not production-ready yet (March 2012), but growing fast. I plan a stable release soon.
I would recommend trying Knex for the database and Bookshelf as an ORM on top of it (developed by the same person). I'm using it with Postgres, but it supports SQLite, MySQL/MariaDB and Oracle (in alpha) too.
It has a very expressive promise-based API with Bluebird behind it, and Knex has a well-documented and great command-line tool for making migrations, seed files, etc. Bookshelf uses Backbone models and collections as an inspiration, including the .extend(..) paradigm for inheritance, so picking it up is a breeze if you come from that world. So far, so good.
Missy is a universal ORM for both SQL and NoSQL databases; it is simple, flexible, well documented, and supports some fancy features that other ORMs lack.
ORMs are a little too slow for the fast nature of Node.js; a plain database driver is fine, but a little tiring. That's why I wrote something in between: prego. It provides automatic statement preparation, migrations, simple models with associations, transactions and a few utilities, all callback-style and fast. Ideas/issues are welcome.
I suggest you use this pair: pg (as the driver) and light-orm (as an ORM wrapper).
https://npmjs.org/package/pg
https://npmjs.org/package/light-orm
https://www.npmjs.org/package/rdb
Simple, flexible mapper.
Transaction with commit and rollback.
Persistence ignorance - no need for explicit saving, everything is handled by transaction.
Eager or lazy loading.
Based on promises.
Well documented by (running) examples.

What options are there for free-for-commercial-use NoSQL datastores in the .NET world? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 8 years ago.
I've been looking around....
MongoDB seems to require a commercial license. http://www.mongodb.org/display/DOCS/Licensing
RavenDB has quite a costly scheme. http://ravendb.net/licensing
CouchDB seems to be free for commercial use, but it requires Apache, which is a bit of a pain.
Are there any other good options for .NET?
From my understanding, MongoDB is open source and free to use. There are two license types: AGPL v3.0 and a commercial license. There are a few minor restrictions with the AGPL 3.0, so some may need to remove these restrictions with a commercial license, but most probably won't.
So in short, I believe it is free and can most likely suit your needs.
It may or may not apply to you: if you're going to use RavenDB for a startup company, you may request a free license.
Of course, there's nothing preventing you from using a table in an ordinary SQL database as a simple repository for key-value pairs, which is essentially what a NoSQL database is.
This has the added benefit of still allowing you to use SQL where it is appropriate.
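As a rough sketch of that approach (shown in Java/JDBC purely to illustrate the pattern, since it is language-agnostic; the table and column names are invented), a key-value repository needs little more than one table and two statements:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    /** Minimal sketch of a key-value store on top of an ordinary SQL table; all names are invented. */
    class SqlKeyValueStore {
        private final Connection con;

        // Assumes a table like: CREATE TABLE kv_store (k VARCHAR(255) PRIMARY KEY, v CLOB)
        SqlKeyValueStore(Connection con) {
            this.con = con;
        }

        void put(String key, String value) throws SQLException {
            // Naive upsert (delete then insert); a real version would use MERGE / ON CONFLICT
            try (PreparedStatement del = con.prepareStatement("DELETE FROM kv_store WHERE k = ?");
                 PreparedStatement ins = con.prepareStatement("INSERT INTO kv_store (k, v) VALUES (?, ?)")) {
                del.setString(1, key);
                del.executeUpdate();
                ins.setString(1, key);
                ins.setString(2, value);
                ins.executeUpdate();
            }
        }

        String get(String key) throws SQLException {
            try (PreparedStatement sel = con.prepareStatement("SELECT v FROM kv_store WHERE k = ?")) {
                sel.setString(1, key);
                try (ResultSet rs = sel.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }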
Cassandra uses Facebook's "thrift" (now Apache Thrift) RPC mechanism for its client layer. This is capable of generating C# output, which you can compile into a .NET assembly and call from a MS CLR application.
Whether Cassandra itself does what you want is very much dependent upon what it is you want.
https://github.com/mcintyre321/PieDb is a very basic MIT-licenced embedded document db wot I wrote
It:
writes objects to app_data using json.net serialized documents
uses Lucene.Net.Linq to provide basic IQueryable support
supports optimistic concurrency
requires no configuration
It would be nice to get some other developers behind it, as it's only had about a weekend of work on it, but it works for simple cases as a RavenDb replacement.
Google has released a beta preview of their Cloud Datastore (previously only available to App Engine apps), which can now be used via their JSON API. It is free up to 1GB with 50K calls per day, and there is a paid option after that.
GCD is rather low level, but I wrote a .NET ORM for it called Pogo that supports LINQ. The API is inspired by the RavenDB client API.
The source code and documentation for Pogo is available here - http://code.thecodeprose.com/pogo, and it is also available on Nuget.
For .NET there is also FatDB, which we used for a smaller project. They have a one-year demo version: http://fatcloud.com/

Sockets and COBOL

I have received a job at a hospital which still uses COBOL for all organizational work. The whole (now 20-terabyte) database (which was homebrewed in, guess what, COBOL) is filled with the data of every patient from the last 45 (or so) years.
So that was my story. Now to my question:
Currently, all socket communication is (from what I've seen) implemented by COBOL programs writing their data into files. These files are then read by C++ programs (an additional module added in the late 1980s) and sent to the database using C++ sockets.
Now this solution has stopped working, as they are moving the database from COBOL to COBOL. Yes, they didn't use MySQL or the like; they implemented a new database, again in COBOL. I asked the guy who worked there before me (he's around 70 now) why the hell someone would do that, and he told me that he is so good at COBOL that he doesn't want to write it in any other language.
So far, so good. Now, my question:
How can I implement socket connections in COBOL? I need to create an interface to the external COBOL database located at, for example, 192.168.1.23:283.
You need to give more information about your OS and compiler.
If you are on IBM z/OS with a Language Environment supported compiler, you can just call the EZASOCK functions from z/OS Communications Server. The calls are well documented in their references and have good COBOL examples.
Other platforms will have other options.
In most cases, you can just "CALL" an external module written in whatever language you need, be that a DLL, a shared library, or whatever.
Can you give some more detail about your environment?
Why don't you just write directly to the database from the Cobol program?
IBM mainframes have two sockets APIs that can be used from COBOL.
One is for use inside CICS programs (where there are special thread-safety and environment considerations) and one is for use in ordinary batch or IMS programs.
The complete TCP/IP functionality is implemented, and it's reliable enough to handle credit card protocols to MVA standards (I know because I've done it).
Most COBOL compilers will allow you to link and call an object module or DLL. As Kati says, I know I can help but need the additional information. I've done this previously from Windows to DEC, so I know it can be done.
Recall that Google is your FRIEND.
The answer will depend heavily on your execution environment.
IBM does claim to have a Sockets API callable from COBOL, as part of CICS for z/OS.
Micro Focus appears to have something.

Derby vs PostgreSQL Performance Comparison

We are doing research right now on whether to switch our PostgreSQL DB to an embedded Derby DB. Both would be using GlassFish 3 for our data layer. Does anybody have any opinions or knowledge that could help us decide?
Thanks!
Edit: we are writing some performance tests ourselves right now. We're looking for answers based more on experience / first-hand knowledge.
I know I'm late to post an answer here, but I want to make sure nobody makes the mistake of using Derby over any production-quality database in the future. I apologize in advance for how negative this answer is - I'm trying to capture an entire engineering team's ill feelings in a brief Q&A answer.
Our experience using Derby in many small-ish customer deployments has led us to seriously doubt how useful it is for anything but test environments. Some problems we've had:
Deadlocks caused by lock escalations - this is the biggest one and happens to one customer about once every week or two
Interrupted I/Os cause Derby to fail outright on Solaris (may not be an issue on other platforms) - we had to build a shim to protect it from these failures
Can't handle complicated queries which MySQL/PostgreSQL would handle with ease
Buggy transaction log implementation caused a table corruption which required us to export the database and then re-import it (couldn't just drop the corrupted table), and we still lost the table in the process - thank goodness we had a backup
No LIMIT syntax
Low performance for complicated queries
Low performance for large datasets
Due to the fact that it's embedded, Derby is more of a competitor to SQLite than it is to PostgreSQL, which is an extremely mature production-quality database which is used to store multi-petabyte datasets by some of the largest websites in the world. If you want to be ready for growth and don't want to get caught debugging someone else's database code, I would recommend not using Derby. I don't have any experience with SQLite, but I can't imagine it being much less reliable than Derby has been for us and still being as popular as it is.
In fact, we're in the process of porting to PostgreSQL now.
Derby is still relatively slow in performance, but... wherever your Java application goes, your database server goes; it's completely platform-independent. You don't even need to think about installing a DB server wherever your Java app is being copied to.
I was using MySQL with Java, but having an embedded implementation of your database server sitting right within my Java app gives just stunning and unprecedented productivity, freedom and flexibility.
Always having a DB server included, whenever and wherever, on any platform, for me is just heaven!!!
I have not compared PostgreSQL to Derby directly. However, having used both in different circumstances, I have found Derby to be highly reliable. You will, however, need to pay attention to Derby configuration to ensure it suits your application's needs.
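For example (a hedged sketch; the property values below are illustrative, not recommendations), Derby's main tuning knobs are exposed as system properties or entries in derby.properties, set before the engine boots:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DerbyConfigExample {
        public static void main(String[] args) throws Exception {
            // Real Derby properties; the values below are only illustrative
            System.setProperty("derby.system.home", "/var/data/derby");   // where databases and derby.log live
            System.setProperty("derby.storage.pageCacheSize", "4000");    // page cache size, in pages
            System.setProperty("derby.locks.deadlockTimeout", "10");      // seconds before deadlock checking
            System.setProperty("derby.locks.waitTimeout", "30");          // seconds to wait for a lock

            // Embedded connection; ";create=true" creates the database on first use
            try (Connection con = DriverManager.getConnection("jdbc:derby:myAppDb;create=true")) {
                System.out.println("Connected to " + con.getMetaData().getDatabaseProductVersion());
            }
        }
    }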
When looking at the H2 database's stats site, it's worth reading the follow-up discussion, which comes out in favour of Derby compared to the H2 conclusions. http://groups.google.com/group/h2-database/browse_thread/thread/55a7558563248148?pli=1
Some stats from the H2 database site here:
http://www.h2database.com/html/performance.html
There are a number of performance test suites that are included as part of the Derby source code distribution itself; they are used by Derby developers to conduct their own performance testing of Derby. So if you need examples of performance tests, or want additional ones, you could consider using those. Look in the subdirectory named java/testing/org/apache/derbyTesting/perf in the Derby source distribution.
I'm not sure what you mean that Derby is embedded. Certainly that is an option, but you can also install it as a separate server.
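To make the embedded-versus-server distinction concrete, here is a hedged sketch of the two connection styles (database name, port and credentials are assumptions; the embedded form runs in-process, the client form talks to a separately started Derby Network Server):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DerbyModes {
        public static void main(String[] args) throws Exception {
            // Embedded mode: the engine runs inside this JVM (derby.jar on the classpath)
            try (Connection embedded =
                     DriverManager.getConnection("jdbc:derby:myAppDb;create=true")) {
                System.out.println("Embedded OK: " + embedded.getMetaData().getURL());
            }

            // Client/server mode: needs derbyclient.jar and a Derby Network Server started
            // separately (default port 1527); host and credentials are assumptions
            try (Connection client =
                     DriverManager.getConnection(
                         "jdbc:derby://localhost:1527/myAppDb;create=true", "app", "app")) {
                System.out.println("Network server OK: " + client.getMetaData().getURL());
            }
        }
    }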