DB2: How to set up federation between two local database instances - minikube - db2

In a single-box environment running two DB2 containers on different ports, how do I set up federation?
How can I use a command similar to the one below?
create nickname myschema.Table1 for
<remotehost.remoteschema.remoteTable>
Pod names are in the fed-database-somehash-blah format, and it seems dashes are not acceptable:
Error: unexpected token -database was found
Setup:
Minikube
DB2 version 11.5.5
Please advise.
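For what it's worth, in DB2 federation the first qualifier of a nickname is a server name defined with CREATE SERVER, not the remote hostname, so the dashed pod name only ever needs to appear as a quoted string. A rough sketch of the usual sequence, assuming the second instance is reachable as fed-database on port 50000, with a remote database FEDDB, a local database LOCALDB, and user db2inst1 (all placeholder names):
# Run as the instance owner on the local (federating) instance; all names are placeholders
db2 update dbm cfg using FEDERATED YES          # federation must be enabled, then restart
db2stop force && db2start
db2 catalog tcpip node REMNODE remote fed-database server 50000
db2 catalog database FEDDB as REMDB at node REMNODE
db2 connect to LOCALDB
db2 "CREATE WRAPPER DRDA"
db2 "CREATE SERVER REMSRV TYPE DB2/UDB VERSION '11.5' WRAPPER DRDA AUTHORIZATION \"db2inst1\" PASSWORD \"passw0rd\" OPTIONS (DBNAME 'REMDB')"
db2 "CREATE USER MAPPING FOR USER SERVER REMSRV OPTIONS (REMOTE_AUTHID 'db2inst1', REMOTE_PASSWORD 'passw0rd')"
db2 "CREATE NICKNAME myschema.Table1 FOR REMSRV.remoteschema.remoteTable"
If a dashed identifier really is needed, double-quoting it makes it a delimited identifier, but choosing a dash-free server name avoids the parser error entirely. In minikube it is also typically the Service name, not the pod name, that you would use as the remote host.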

Related

Postgres subchart not recommended for a production environment for Airflow in Kubernetes

I am new to working with Airflow and Kubernetes, and I am trying to use Apache Airflow in Kubernetes.
To deploy it I used this chart: https://github.com/apache/airflow/tree/master/chart.
When I deploy it as in the link above, a PostgreSQL database is created. When I explored the values.yaml file of the chart, I found this:
# Configuration for postgresql subchart
# Not recommended for production
postgresql:
  enabled: true
  postgresqlPassword: postgres
  postgresqlUsername: postgres
I cannot find why it is not recommended for production.
and also this:
data:
  # If secret names are provided, use those secrets
  metadataSecretName: ~
  resultBackendSecretName: ~
  # Otherwise pass connection values in
  metadataConnection:
    user: postgres
    pass: postgres
    host: ~
    port: 5432
    db: postgres
    sslmode: disable
  resultBackendConnection:
    user: postgres
    pass: postgres
    host: ~
    port: 5432
    db: postgres
    sslmode: disable
What is recommended for production? Using my own PostgreSQL database outside Kubernetes? If so, how can I use it instead of this one? How do I have to modify the chart to use my own PostgreSQL?
The reason it is not recommended for production is that the chart provides a very basic Postgres setup.
In the container world, containers are transient, unlike processes in the VM world, so the likelihood of the database being restarted or killed is high. If you run stateful components in K8s, someone needs to make sure that the Pod is always running with its configured storage backend.
The following tools help run Postgres with high availability on K8s/containers and provide various other benefits:
Patroni
Stolon
We have used Stolon to run 80+ Postgres instances on Kubernetes in a microservices environment. These are public-facing products, so the services are heavily loaded as well.
It's very easy to set up a Stolon cluster once you understand its architecture. Apart from HA, it also provides replication, standby clusters, and a CLI for cluster administration.
Please also consider this blog when making your decision; it brings in the perspective of how much Ops work is involved in the different solutions.
Managing databases in Kubernetes is a pain and is not recommended, because scaling, replication, backups, and other common tasks are not as easy to do. What you should do is set up your own Postgres in a VM or use a managed cloud service such as RDS on AWS or Cloud SQL on GCP. More information:
https://cloud.google.com/blog/products/databases/to-run-or-not-to-run-a-database-on-kubernetes-what-to-consider
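If you do point the chart at an external database, the values quoted in the question suggest the shape of the override. A rough sketch, assuming an external Postgres reachable at my-external-postgres.example.com and a local checkout of the chart (host, credentials, and database names are all placeholders):
# Write an override file inline and install the chart with it; every value below is a placeholder
cat > values-external-db.yaml <<'EOF'
postgresql:
  enabled: false                          # do not deploy the bundled subchart
data:
  metadataConnection:
    user: airflow
    pass: change-me
    host: my-external-postgres.example.com
    port: 5432
    db: airflow
    sslmode: require
  resultBackendConnection:
    user: airflow
    pass: change-me
    host: my-external-postgres.example.com
    port: 5432
    db: airflow
    sslmode: require
EOF
helm upgrade --install airflow ./chart -f values-external-db.yaml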

How do I connect to an AWS PostgreSQL RDS instance using SSL and the sslrootcert parameter from a Windows environment?

We have a Windows EC2 instance on which we are running a custom command line application (C# console app using NpgSQL) to connect to a PostgreSQL RDS instance. Based on the instructions here:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.SSL
we created a new DB parameter group with rds.force_ssl set to 1 and rebooted our RDS instance. We also downloaded and imported to Windows the pem file referenced on the page.
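For reference, a rough sketch of that server-side setup with the AWS CLI (parameter group, family, and instance names are placeholders; the console works just as well):
# Create a parameter group, force SSL, attach it to the instance, then reboot; names are placeholders
aws rds create-db-parameter-group --db-parameter-group-name force-ssl-pg \
    --db-parameter-group-family postgres9.6 --description "force SSL connections"
aws rds modify-db-parameter-group --db-parameter-group-name force-ssl-pg \
    --parameters "ParameterName=rds.force_ssl,ParameterValue=1,ApplyMethod=pending-reboot"
aws rds modify-db-instance --db-instance-identifier our-instance \
    --db-parameter-group-name force-ssl-pg
aws rds reboot-db-instance --db-instance-identifier our-instance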
I was able to connect to the RDS instance from my Windows EC2 instance via pgAdmin by specifying SSL mode as Verify-Full. Our command-line application reads connection strings from a file and they look like this now that I've added the sslmode parameter:
Server=OurInstanceAddress;Port=5432;SearchPath='$user,public,topology';Database=OurDatabase;User Id=username;Password=mypassword;sslmode=verify-full;
Using this connection string failed with the error referenced at the bottom of the page:
FATAL: no pg_hba.conf entry for host "host.ip", user "someuser", database "postgres", SSL off
I tried adding the sslrootcert parameter, but I'm not sure if I'm dealing with it properly. I tried using the example (sslrootcert=rds-ssl-ca-cert.pem) and I tried using the name of the pem that I downloaded. I feel like there is something about the path information that I'm giving to the sslrootcert parameter that isn't right, especially in a Windows environment. I've tried using the name, I've tried using the following paths:
- sslrootcert=C:\keys\rds-combined-ca-bundle.pem - single backslashes
- sslrootcert=C:\\keys\\rds-combined-ca-bundle.pem - doubled backslashes
- sslrootcert=C:/keys/rds-combined-ca-bundle.pem - Linux-style forward slashes
All of these produced the same error mentioned above.
Any insight would be appreciated.
I solved it by using environment variables instead of specifying cert paths in the connection URL:
-DPGSSLROOTCERT=/certs/root.crt
-DPGSSLKEY=/certs/amazon-postgresql.key
-PGSSLCERT=/certs/amazon-postgresql.crt
I'm in Cygwin, though. There are some hints about using Windows in the documentation here: https://www.postgresql.org/docs/9.0/static/libpq-ssl.html
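For reference, those are the standard libpq SSL environment variables. A rough sketch of exporting them from a Cygwin/bash shell before starting a libpq-based client such as psql (paths are placeholders; whether a given Npgsql version picks these up is worth verifying separately):
# libpq SSL settings via environment variables; all paths below are placeholders
export PGSSLMODE=verify-full
export PGSSLROOTCERT=/cygdrive/c/keys/rds-combined-ca-bundle.pem
export PGSSLCERT=/cygdrive/c/keys/client.crt    # only needed for client-certificate auth
export PGSSLKEY=/cygdrive/c/keys/client.key     # only needed for client-certificate auth
psql "host=OurInstanceAddress port=5432 dbname=OurDatabase user=username"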

Issue with datanodes on postgres-XL cluster

Postgres-XL not working as expected.
I have configured a Postgres-XL cluster as below:
GTM running on node3
GTM_Proxy running on node2 and node1
Co-ordinators and datanodes running on node2 and node1.
When I try to do any operation by connecting to the database directly, I get the error below, which is expected anyway.
postgres=# create table test(eno integer);
ERROR: cannot execute CREATE TABLE in a read-only transaction
But when I log in via the co-ordinator, I get the error below:
postgres=# \l+
ERROR: Could not begin transaction on data node.
In the postgresql.log, I can see the errors below. Any idea what needs to be done?
2016-06-26 20:20:29.786 AEST,"postgres","postgres",3880,"192.168.87.130:45479",576fabb5.f28,1,"SET",2016-06-26 20:17:25 AEST,2/31,0,ERROR,22023,"node ""coord1_3878"" does not exist",,,,,,"SET global_session TO coord1_3878;SET parentPGXCPid TO 3878;",,,"pgxc"
2016-06-26 20:20:47.180 AEST,"postgres","postgres",3895,"192.168.87.131:45802",576fac7d.f37,1,"SELECT",2016-06-26 20:20:45 AEST,3/19,0,LOG,00000,"No nodes altered. Returning",,,,,,"SELECT pgxc_pool_reload();",,,"psql"
2016-06-26 20:21:12.147 AEST,"postgres","postgres",3897,"192.168.87.131:45807",576fac98.f39,1,"SET",2016-06-26 20:21:12 AEST,3/22,0,ERROR,22023,"node ""coord1_3741"" does not exist",,,,,,"SET global_session TO coord1_3741;SET parentPGXCPid TO 3741;",,,"pgxc"
Postgres-XL version - 9.5r1.1
psql (PGXL 9.5r1.1, based on PG 9.5.3 (Postgres-XL 9.5r1.1))
Any idea about this?
It seems like you haven't really configured pgxc_ctl well. Just type in
prepare config minimal
in the pgxc_ctl command line, which will generate a general pgxc_ctl.conf file that you can modify accordingly.
You can then follow the official Postgres-XL documentation to add nodes from the pgxc_ctl command line, as John H suggested (see the sketch below).
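A rough sketch of that pgxc_ctl session (the prompt and paths are illustrative; review the generated pgxc_ctl.conf before initializing anything):
pgxc_ctl                        # opens the interactive PGXC shell
PGXC prepare config minimal     # writes a template pgxc_ctl.conf under $HOME/pgxc_ctl
# edit pgxc_ctl.conf to list the GTM, GTM proxies, coordinators and datanodes, then:
PGXC init all                   # initialize and start every configured component
PGXC monitor all                # confirm each component is running before creating tables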
I have managed to fix my issue:
1) Used the source from the git repository, XL9_5_STABLE branch (https://git.postgresql.org/gitweb/?p=postgres-xl.git;a=summary). The source tarball they provide at http://www.postgres-xl.org/download/ did not work for me.
2) Used pgxc_ctl as mentioned above. I was getting "Could not obtain a transaction ID from GTM" because, when adding the GTM, I had used localhost instead of the IP, i.e.
add gtm master gtm localhost 20001 $dataDirRoot/gtm
instead of
add gtm master gtm 10.222.1.49 20001 $dataDirRoot/gtm

How to connect a running container (Tomcat) on Amazon EC2 to RDS Postgres

In AWS, I have an Amazon Linux instance with Docker installed and my app running as a container. It's running in Tomcat. However, I need to connect it to my database.
I have made this work with a Postgres container before by doing this:
docker run --link <dbcontainername>:db -P -d tomcat-image
But to make the database more reliable, we want to use Amazon RDS instead.
I have created a VPC with two subnets, which both the instance and the RDS instance use, and they are also both in the same Security Group.
I am able to access Tomcat fine through the public IP, but it throws errors because it isn't connected to the DB.
Networking is not my strong suit, so there might be something I am missing, but I find it hard to find any text describing this process without mentioning Elastic Beanstalk. (It is my impression that it should be possible to do everything Elastic Beanstalk does manually.)
There's a similar question asked here about 8 months ago, but it didn't get any responses, so I'm trying again.
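Not a full answer, but the usual direction is to drop --link and pass the RDS endpoint to the container as environment variables that the app reads for its connection settings; the endpoint, variable names, and credentials below are placeholders. The RDS security group also needs an inbound rule for port 5432 from the instance's security group (membership in the same group is not enough by itself unless that group has a self-referencing rule):
# endpoint, credentials, and variable names are placeholders your app would read
docker run -d -P \
  -e DB_HOST=mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com \
  -e DB_PORT=5432 \
  -e DB_NAME=mydatabase \
  -e DB_USER=myuser \
  -e DB_PASSWORD=mypassword \
  tomcat-image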

rs0:FATAL error after recreating the deleted previous primary member in a 3-machine cluster created using VMware

In my project, MongoDB is installed as part of our software. I created 3 machines in the cloud using VMware. Regarding my testbed, I have ESXi installed on a Cisco UCS blade, and on top of that we create our VMs with our own software image (MongoDB 2.4.6 is already pre-installed in it).
To check cluster creation, I created 3 VMs and formed a cluster among them. I created a database and put some data into the primary, and it was successfully reflected on the other machines.
Then, to check replication, I switched off the primary VM, and another machine was promoted from secondary to primary as expected.
But when I recreated the machine using the same IP (the IP of the machine I deleted previously), MongoDB gives an rs0:FATAL error. It does not become a secondary as expected.
If I type rs.status() on that machine, it always says it is in the syncing state.
Please help in this regard, or if this is a known bug, please give me the bug ID.
I got my answer from an outside source, so I am sharing it here for others to follow.
A MongoDB node can come up in the FATAL state (not rejoining its cluster) if its database content is too far out of sync.
You can find the current node status by running the mongo shell with command ‘mongo’ – the status is in the shell prompt. You can exit the shell using “exit”.
To recover from this situation manually, do the following:
1) On the ‘FATAL’ node, run mongo shell with the command ‘mongo’;
2) In the mongo shell, list all local databases using “show dbs”;
3) For each database present, do “use <database>” and “db.dropDatabase()”;
4) After all databases are gone, do “use admin” and “db.shutdownServer()”;
5) Upstart will restart the mongo server automatically, and it will now join the cluster and sync.
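A sketch of how that session might look on the FATAL node (the database name mydb is a placeholder for whatever “show dbs” reports):
mongo                      # open the shell on the stuck member
> show dbs                 # list the local databases
> use mydb                 # repeat the next two lines for every database listed
> db.dropDatabase()
> use admin
> db.shutdownServer()      # upstart restarts mongod, which then performs a full initial sync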