DB2 v11.5 Fix Packs - anyone know when they will be released?

I don't see any FixPacks listed for DB2 v11.5. We are considering its installation, but normally wait for a baseline correction before implementation.
Does anyone know if there is a FixPack in the works for DB2 v11.5 - and if so, when it will be released?

Db2 11.5 Mod Pack 1, Mod Pack 2, and Mod Pack 3 (i.e., Db2 11.5.1.0, Db2 11.5.2.0, and Db2 11.5.3.0 respectively) are available as container-only releases.
For example, from
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.5.0/com.ibm.db2.luw.wn.doc/doc/c_whats_new_v11-5-1.html
This mod pack release is currently available in the following Db2 products:
The single container deployments of Db2 Warehouse and IBM Integrated Analytics System (IIAS)
The container micro-service deployment of Db2 on Red Hat OpenShift
The Db2 cartridge used by IBM Cloud Pak for Data
Similar statements are on the What's New pages for 11.5.2 and 11.5.3.
See the main What's New page for information on container-only Db2 Mod Pack releases:
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.5.0/com.ibm.db2.luw.wn.doc/doc/r_whats_new_home.html
IBM® offers a number of Db2-based solutions for small to enterprise size container environments [e.g., Db2 Community Edition, Db2 Warehouse, the IBM Integrated Analytics System (IIAS)]. These container-deployed products, and the Db2 engine that powers them, are released at regular intervals between traditional Db2 on-prem releases. The Db2 engine for these products is identified using the Mod Pack numbering scheme currently used in the Db2 product signature.
New features that are available in these container-deployed Mod Pack releases are rolled into a subsequent Db2 on-prem release. The on-prem release also contains any new features that are available in the container-deployed release that aligns with its release date. So, each on-prem release of Db2 aligns with a similarly numbered container-ready release.
Container releases are available from the IBM Cloud Container Registry; e.g., for Db2 Warehouse, see the instructions at https://www.ibm.com/support/knowledgecenter/SSCJDQ/com.ibm.swg.im.dashdb.doc/admin/get_image.html
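As a rough sketch of what pulling one of these container releases looks like, assuming you have an IBM Cloud API key; the repository path and tag below are placeholders (the real ones are in the get_image instructions linked above):

    # Log in to the IBM Cloud Container Registry with an API key
    docker login -u iamapikey -p <YOUR_IBM_CLOUD_API_KEY> icr.io
    # Pull a Db2 Warehouse image; namespace and tag are placeholders
    docker pull icr.io/<namespace>/db2wh_ee:<version-tag>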
The fix list for Db2 11.5.1.0 and later releases can be found at https://www.ibm.com/support/pages/fix-list-db2-version-115-linux-unix-and-windows

Starting with Db2 11.1, Db2® uses a four-part product signature that includes both Modification Pack (Mod Pack) and Fix Pack values.
The product signature format follows the Maintenance Delivery Vehicle standard (http://www.ibm.com/support/docview.wss?uid=swg27008656) that is used throughout the IBM® product lines.
The Db2 four-part product signature is of the format VV.RR.MM.FF, where:
VV Is the Version number.
RR Is the Release number.
MM Is the Modification number.
FF Is the Fix Pack number.
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.5.0/com.ibm.db2.luw.wn.doc/doc/c0070229.html#c0070229
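As a quick way to read that signature on an installed copy, the db2level command prints it; a minimal illustration (the exact output wording can vary by platform and level):

    # Print the version/release/modification/fix pack level of the current Db2 copy
    db2level
    # Illustrative output fragment -- here VV.RR.MM.FF would be 11.5.4.0,
    # i.e. Version 11, Release 5, Modification 4, Fix Pack 0:
    #   Informational tokens are "DB2 v11.5.4.0", ...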

Related

Specifying storage for MongoDB server in a fresh install on Azure

I'm planning to install MongoDB server 4.4 on an Ubuntu Linux VM on Azure. In the marketplace I see images already available. Why are these chargeable hourly? It's the Community Edition, which is free, so I cannot understand the charge.
Secondly, I plan on adding a 10TB data disk for storing the ever-increasing data. But if I use a preinstalled image, then I suppose the MongoDB server will get installed on the OS disk (128GB), whereas I would prefer it used the data disk (10TB).
Does it make sense then to use the preinstalled image? I found a MongoDB setting at https://docs.mongodb.com/manual/reference/configuration-options/#mongodb-setting-storage.dbPath to specify the storage path. Both the OS disk and the data disk will be Premium SSD. Will it make a difference if the software is installed on the OS disk and collections/databases get stored on the data disk?
The reason you see pricing is that the image is published neither by Canonical nor by MongoDB. The golden image you shared is offered by Cloud Infrastructure Services. They offer not only MongoDB CE images but also Packer, Docker Compose, etc., built on top of open-source operating systems.
As you say, both OS and data disks will be provisioned as Premium SSDs, so I wouldn't worry much about performance.
To cut down costs, I would propose the following:
Create the MongoDB image yourself on a free Ubuntu/CentOS base. You can use Packer or Azure Image Builder. I would set up automation to periodically build and version images for different MongoDB and Linux versions.
Provision a VM, install MongoDB on it, and regularly patch and upgrade both the Linux and application packages (a disk/dbPath sketch follows this list).
If you have a MongoDB server on-prem, you could create a snapshot of the OS with the MongoDB application packaged in it, convert it into a VHD file, and upload the image to Azure. You can follow the documentation here.
If MongoDB offers golden images as private offers and you have an existing license contract with them, check with MongoDB and register your subscription; you may be eligible to access their golden images on Azure as "Private Offers". Red Hat offers its images to customers this way.
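On the storage.dbPath point from the question, a minimal sketch for pointing MongoDB at the data disk, assuming the 10TB disk shows up as /dev/sdc and is mounted at /datadrive (both names are placeholders):

    # Format the data disk and mount it (device name and mount point are assumptions)
    sudo mkfs.xfs /dev/sdc
    sudo mkdir -p /datadrive
    sudo mount /dev/sdc /datadrive    # add an /etc/fstab entry to make the mount persistent
    sudo mkdir -p /datadrive/mongodb
    sudo chown -R mongodb:mongodb /datadrive/mongodb
    # Point mongod at the data disk instead of the OS disk: either set
    # storage.dbPath to /datadrive/mongodb in /etc/mongod.conf and restart the
    # service, or start mongod with an explicit path:
    mongod --dbpath /datadrive/mongodb

Either way, the binaries stay on the OS disk and only the data files live on the large disk, which is the usual layout.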
The image being offered on Azure doesn't make much financial sense. Let's assume you have a 3-node cluster.
Pricing for using the image: 0.025 euros per hour
Per day: 24 * 0.025 = 0.6 euros
Per year: 365 * 0.6 = 219 euros
Combined cost for 3 nodes: 219 * 3 = 657 euros
If you have multiple environments and multiple databases in different regions, the cost just keeps multiplying.
Now, the choice is yours!

Synapse Analytics vs SQL Server 2019 Big Data Cluster

Could someone explain the difference between SQL Server 2019 BDC vs Azure Synapse Analytics other than OLAP & OLTP differences? Why would one use Analytics over SQL Server 2019 BDC?
Azure Synapse Analytics is a cloud-based DWH with Data Lake, ADF, and Power BI designers tightly integrated. It is a PaaS offering and is not available on-prem. The DWH engine is MPP with limited PolyBase support (Data Lake).
It also allows you to provision Apache Spark if needed.
SQL Server 2019 Big Data Cluster is an IaaS platform based on Kubernetes. It can be implemented on-prem on VMs, on OpenShift, or on AKS (any cloud, for that matter).
Its data virtualization support is very good, with support for ODBC data sources and a data pool, implemented via PolyBase.
Apache Spark makes up the Big Data compute.
Though it is not MPP like Synapse, because of pods in Kubernetes, multiple pods can be created on the fly through scalability features such as VMSS, etc.
If you want analytical capability on-prem you would use SQL Server 2019 BDC, but if you want a cloud-based DWH with analytical capabilities you would use Synapse.
"explain the difference between SQL Server 2019 BDC vs Azure Synapse Analytics"
SQL Server is OLTP and Synapse is OLAP. :D
"other than OLAP & OLTP differences? Why would one use Analytics over SQL Server 2019 BDC?"
Purely from a terminology point of view, their product management have no clue what they are doing.
"SQL Server" is a DIY/on-prem/managed-by-you DB.
Fully Azure managed SaaS version of SQL Server is known as Azure SQL Database.
They also have "Azure SQL Managed Instance", and "SQL Server on Azure VM".
Azure Synapse was renamed to Dedicated SQL Pools.
Azure Synapse On-demand was renamed to Serverless SQL Pools.
Azure Synapse Analytics = Dedicated + Serverless + bunch of ML services.
I'm going to answer assuming your question is:
Why would one use "Azure Synapse Dedicated or Serverless" over SQL Server?
SQL Server is on-prem DIY; the other is SaaS, fully managed by Azure. With this come all the pros/cons of SaaS: no CAPEX, no management, elastic, very large scale, ...
Synapse's USP is its MPP, which SQL Server does not have, though I see things like PolyBase and EXTERNAL TABLES being supported by SQL Server.
Due to the MPP architecture, Synapse's transactional performance is the worst I've seen by far. E.g., executing INSERT INTO xxx VALUES(...) to add one row via JDBC takes about 1-2 seconds, against 10-12 seconds for importing CSV files with tens of thousands of rows using the COPY command. And INSERT INTO does not scale with JDBC batching: it will take 100 seconds to insert 100 rows in one batch.
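To make that concrete: the load path that scales on the MPP engine is COPY INTO from staged files rather than row-by-row INSERTs. A sketch via sqlcmd, where the workspace, pool, credentials, table, and blob URL are all made-up placeholders (depending on your storage setup a CREDENTIAL clause may also be required):

    # Bulk-load a staged CSV into a dedicated SQL pool table with COPY INTO
    sqlcmd -S myworkspace.sql.azuresynapse.net -d mysqlpool -U sqladminuser -P '<password>' -Q "
    COPY INTO dbo.sales
    FROM 'https://mystorageacct.blob.core.windows.net/staging/sales.csv'
    WITH (FILE_TYPE = 'CSV', FIRSTROW = 2);
    "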
It is not your fault that you are confused. IMO Azure Product Management for Databases (SQL Server, DW, ADP, Synapse, Analytics and the 10 other flavors of all these) have no clue what they want to offer 2 years from today. Every product boasts of Big Data, Massive this and that, ML and Analytics, Elastic this and that. Go figure.
PS: Check out Snowflake if you haven't.
I'm not affiliated with Microsoft or Snowflake.
I believe the user user3129206 is asking
"SQL Server 2019 BDC vs Azure Synapse Analytics"
not
"SQL Server vs Azure Synapse Analytics"
so the first answer is relevant.
The only thing I'd argue is that BDC is also MPP like Synapse, because of pods in Kubernetes, if implemented right with many servers + HDFS.
I plan to test BDC on-premises and see how demanding the install and maintenance are.
The neat thing about BDC seems to be that it is easy to port, partially or fully, from on-premises to Azure or any cloud.
It seems that BDC is both OLTP and OLAP, trying to provide the best of both worlds.
As I am on the same comparison quest, I'll try to get back and share what I learn.

AWS RDS PostgreSQL Minor Update

I am a bit confused about how to perform a minor PostgreSQL version update on AWS RDS.
I read multiple articles from AWS documentation:
https://aws.amazon.com/about-aws/whats-new/2018/12/amazon-rds-enhances-auto-minor-version-upgrades/
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Upgrading.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html
None of them pointed me to the exact command or set of instructions necessary to perform the minor update released in early August 2019.
I fully understand that major updates can be performed from the AWS Console -> Modify section of the RDS DB Instance or from the AWS CLI.
I even did a search on the available engine versions for Postgres:
aws rds describe-db-engine-versions --engine postgres
And this command only outputs major engine versions, and the latest one is "PostgreSQL 11.4-R1", the one I use.
I am aware that minor updates can be enabled during the maintenance period, but I did not see any minor updates applied.
The latest August release is crucial for our DB instance because it solves a couple of bugs we have reported regarding PG 11 partitioning.
Is there a way to perform a manual version update on RDS for Postgres? Locally I updated the PG engine and all works fine.
Thank you and have a great day!
In the RDS console, when you go to the database details and view the "Maintenance & backups" tab, there is a section that displays whether there are pending maintenance tasks. Here's a screenshot of a database that has a pending maintenance task.
If there are no pending maintenance tasks that will say "none" instead of "available". If there are no pending maintenance tasks then your database should be running the latest version. If there are pending maintenance tasks, then you can manually initiate the maintenance tasks anytime you want, which should update your database to the latest minor version if it isn't already updated.
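The same check can be scripted with the AWS CLI; a sketch with the instance ARN as a placeholder:

    # List pending maintenance actions (including engine upgrades) for the instance
    aws rds describe-pending-maintenance-actions \
        --filters Name=db-instance-id,Values=<your-db-instance-arn>
    # Apply a pending engine upgrade immediately instead of waiting for the maintenance window
    aws rds apply-pending-maintenance-action \
        --resource-identifier <your-db-instance-arn> \
        --apply-action db-upgrade \
        --opt-in-type immediate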
I don't have a PostgreSQL RDS instance to test this on, but you could try running SELECT version(); on the database to get the current version, which might indicate the minor release version.
I don't see any other way to get to the minor version unfortunately, so you may have to open an AWS support ticket to get them to tell you what version the DB instance is running.
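For example, a quick check of what the instance is actually running (endpoint, user, and instance identifier below are placeholders):

    # Ask the server itself; the result string includes the full minor version, e.g. "PostgreSQL 11.5 ..."
    psql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -U masteruser -d postgres -c "SELECT version();"
    # The RDS API also reports the running version as EngineVersion
    aws rds describe-db-instances --db-instance-identifier mydb \
        --query 'DBInstances[0].EngineVersion'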
You will need to change the maintenance window to the earliest time.
AWS doesn't allow us to manually trigger the minor update process.
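If you go the maintenance-window route, the window can be moved from the CLI as well; values below are placeholders:

    # Move the weekly maintenance window to an earlier slot (times are UTC)
    aws rds modify-db-instance \
        --db-instance-identifier <your-db-instance> \
        --preferred-maintenance-window sun:03:00-sun:03:30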

Setting up backup strategy for backing up postgresql database on cloud foundry

We have set up a community PostgreSQL service on Cloud Foundry (IBM Bluemix). This is a free service, and no automated backup and recovery is supported out of the box.
Is there a way to set up a standby server or a regular backup in case there is any data corruption/failure?
IBM Compose and ElephantSQL can provide this service at a cost, but we are not ready for it yet.
PostgreSQL is an experimental service, and it does not have the dashboard and other advanced features (daily backup, for example) that you can find in the other services you mentioned. If you want a backup, you could write an ad-hoc script that saves/exports all tables as you want and run it every day.
If you need PostgreSQL, you can create a PostgreSQL by Compose service ($17.50/mo for the first GB and $12 per extra GB).
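A minimal sketch of such an ad-hoc backup script, assuming the bound service credentials have been exported as a connection URI in DATABASE_URL (names and paths are placeholders):

    #!/bin/sh
    # Dump the whole database to a timestamped custom-format archive
    # DATABASE_URL is assumed to look like postgres://user:password@host:port/dbname
    pg_dump "$DATABASE_URL" --format=custom --file="backup_$(date +%Y%m%d_%H%M%S).dump"
    # Restore later with: pg_restore --clean --dbname="$DATABASE_URL" <backup file>
    # Run daily, e.g. from cron: 0 2 * * * /path/to/backup.sh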
We used PostgreSQL Studio and deployed it on IBM Bluemix. The database service was connected to the pgstudio interface (this restricts access to only connected databases). We also had to make minor changes to pgstudio so that we could use pg_dump with the interface.
The result: We could manually dump the data. This solution works well as we could take regular dumps (though manually).
In the free tier you are right in saying that you can't get the backup. Those features are available only in the Compose for PostgreSQL service - but that's a paid service.

Is Heroku standalone PostgreSQL available in the Asia (Tokyo/Singapore) region?

I am building a service and am considering using Heroku standalone PostgreSQL. I need to deploy the service to the Asian area (Tokyo/Singapore, replicated), and I saw that PostgreSQL is available only in the US/EU regions for Heroku dynos.
I want to know whether standalone PostgreSQL is likewise available only in those regions, or also in the Tokyo/Singapore regions.
No, Heroku Postgres (in fact, all of Heroku) is only available in the US and EU regions presently.