Save Cluster Variables / Variable PSPP - cluster-analysis

I am using PSPP (NOT SPSS, since I can't get that running on my Ubuntu machine) and am clustering a set of ~100k records with k-means. What I really need is more detailed output than just how many records are in each cluster. I need the cluster variable saved, i.e.
row 1 => cluster 1
row 2 => cluster 4
row 3 => cluster 1
etc...
Essentially I need an extra field that stores the resulting cluster assignment of each record. My current syntax is:
QUICK CLUSTER cat1 cat2 cat3 cat4 cat5 cat6 cat7 cat8 cat9 cat10 cat11 cat12
/CRITERIA=CLUSTERS(12) MXITER(100000000).
SPSS and PSPP share a lot of the same syntax, so if there is an option in SPSS it might work here too.

SPSS Statistics should run on Ubuntu, but in any case the SPSS Statistics QUICK CLUSTER command has a subcommand
/SAVE CLUSTER
that should do what you want. You can optionally specify a variable name in parentheses after CLUSTER.
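Applied to the syntax in the question, that would look roughly like the following (a sketch based on the SPSS Statistics documentation described above; the variable name clus_num is purely illustrative):

QUICK CLUSTER cat1 cat2 cat3 cat4 cat5 cat6 cat7 cat8 cat9 cat10 cat11 cat12
/CRITERIA=CLUSTERS(12) MXITER(100000000)
/SAVE CLUSTER(clus_num).

This adds one new variable to the active dataset holding the cluster number of each row.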

PSPP does not handle the /SAVE CLUSTER subcommand. Try it! The QUICK CLUSTER syntax PSPP supports is:
QUICK CLUSTER var_list
[/CRITERIA=CLUSTERS(k) [MXITER(max_iter)] CONVERGE(epsilon) [NOINITIAL]]
[/MISSING={EXCLUDE,INCLUDE} {LISTWISE, PAIRWISE}]
[/PRINT={INITIAL} {CLUSTER}]
See the GNU PSPP documentation page.

I know you're looking for something in PSPP, but your best bet is probably to export the output as an OpenDocument file, open your data file as a .csv in a spreadsheet, and then copy in the cluster memberships (assuming you added /PRINT=CLUSTER to your command).

Related

If I declare 2 replicas of PostgreSQL StatefulSet pods in k8s, are they the same database or do they just share the volume?

After making 2 replicas of PostgreSQL StatefulSet pods in k8s, are they the same database?
If they are, why is it that I created a DB and a user in one pod but cannot find them in the other?
If they are not, is there any point in creating replicas?
There isn't one simple answer here; it depends on how you configured things. Postgres doesn't support multiple instances sharing the same underlying volume without massive corruption, so if you did set things up that way, it's definitely a mistake. More common would be to use the volumeClaimTemplates system so each pod gets its own distinct storage. Then you set up Postgres streaming replication yourself.
Or look at using an operator which handles that setup (and probably more) for you.
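For the volumeClaimTemplates route, a minimal sketch of what that looks like in the StatefulSet spec (image, sizes and names are illustrative, and this still gives you two independent databases until you configure streaming replication on top):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  # each replica gets its own PersistentVolumeClaim: data-postgres-0, data-postgres-1
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi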
To add to the answer from coderanger: as he said, it's hard to say how Postgres will work with multiple replicas, and how data is replicated across the cluster, without checking more in depth. Setting up multiple replicas directly without reading the documentation on data replication might lead to big issues.
Here is a nice example from Google for reference: https://cloud.google.com/architecture/deploying-highly-available-postgresql-with-gke
For examples of Postgres database replication and clustering config files: https://github.com/CrunchyData/crunchy-containers/tree/master/examples/kube

Running kubectl commands in parallel with different credentials

I'm currently running two Kubernetes clusters one on Google cloud and one on IBM cloud. To manage them I use kubectl. I've made a script that executes some commands on one of the clusters then switches to the other and does some other work there.
This works fine as long as the script only runs in one process; however, when run in parallel, the credentials are sometimes overwritten by one process while in use by another, and this obviously causes issues.
I therefore want to know if I can supply kubectl with a credentials file for every call, instead of storing it in an environment variable with kubectl config set-credentials.
Any help/solution is much appreciated.
If I need to work with multiple clusters using kubectl, I split my terminal and set KUBECONFIG for each split:
For my first split:
export KUBECONFIG=~/.kube/cluster1
For the second split:
export KUBECONFIG=~/.kube/cluster2
It is working pretty well, but this approach has one issue:
If you are using some kind of prompt that shows the current Kubernetes context, each split will show a different context, which can be misleading.
For scripts, I just change the value of KUBECONFIG in a for loop to iterate over each cluster, as sketched below.
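A minimal sketch of that loop, plus the per-call alternative the question asks about: kubectl also accepts a --kubeconfig flag, so each invocation can point at its own file and parallel runs never race on a shared environment variable (the file paths are just illustrative):

#!/bin/bash
# Option 1: scope KUBECONFIG to each iteration of the loop
for cfg in ~/.kube/cluster1 ~/.kube/cluster2; do
    KUBECONFIG="$cfg" kubectl get nodes
done

# Option 2: pass the kubeconfig explicitly on every call, safe to run in parallel
kubectl --kubeconfig ~/.kube/cluster1 get pods --all-namespaces &
kubectl --kubeconfig ~/.kube/cluster2 get pods --all-namespaces &
wait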
You need to use Kubefed in order to manage multiple clusters.
It will take one cluster as the main one and execute the same requests against the second cluster.

How to move Postgres DB from one Kubernetes cluster to another

I have a big (300Gb) Postgres DB running on a GKE cluster (StatefulSet, SSD volume). I need to move this DB to another GKE cluster.
What is the easiest way to accomplish it?
I tried to do it by piping pg_dump into pg_restore, but it takes forever and, for some reason, not all constraints/triggers were recreated.
Is there any proper way to gracefully shut down the Postgres server running in Kubernetes and copy the /pgdata folder directly (from one volume to another)?
Other ideas?
tnx
I have a few ideas (listed from the most probable to the least) about how you could approach this:
1. Remember to use a proper format when using pg_dump. The default plain format does not work with pg_restore; either specify a different format (custom, directory or tar) with pg_dump, or restore a plain-format dump with psql -f instead of pg_restore. Remember that it might take a while (a minimal sketch follows at the end of this answer).
2. You can use a tool to assist you with that, for example pghoard.
3. You can make a tarred backup of your DB and copy it as an object via Google Cloud Storage.
4. You can try to create PVCs manually, attach pods to those PVCs, and then copy your dataset onto those pods.
5. Finally, you may try to create an Init container and use it later for your new cluster.
I suggest starting from point 1 as I think it is the most promising solution. If that is not enough, try the later points from the list.
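For point 1, a rough sketch of dumping in the custom format and piping it straight into the new cluster (context, pod, user and database names are placeholders for whatever your setup uses, and the target database must already exist):

# dump from the old cluster in custom format (-Fc) and restore into the new one
kubectl --context old-cluster exec -i postgres-0 -- pg_dump -U postgres -Fc mydb \
  | kubectl --context new-cluster exec -i postgres-0 -- pg_restore -U postgres -d mydb --no-owner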
Please let me know if that helped.

Get Redshift cluster status in outputs of cloudformation

I am creating a Redshift cluster using CloudFormation and then I need to output the cluster status (basically whether it is available or not). There are ways to output the endpoint and port, but I could not find any way of outputting the status.
How can I get that, or is it not possible?
You are correct. According to AWS::Redshift::Cluster - AWS CloudFormation, the only available outputs are Endpoint.Address and Endpoint.Port.
Status is not something that you'd normally want to output from CloudFormation because the value changes.
If you really want to wait until the cluster is available, you could create a WaitCondition and then have something monitor the status and signal the Wait Condition to continue. This would probably need to be an Amazon EC2 instance with some User Data. Linux instances are charged per second, so this would be quite feasible.
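A rough sketch of the Wait Condition wiring (the resource name MyRedshiftCluster and the timeout are assumptions for illustration):

"ClusterReadyHandle": {
    "Type": "AWS::CloudFormation::WaitConditionHandle"
},
"ClusterReadyCondition": {
    "Type": "AWS::CloudFormation::WaitCondition",
    "DependsOn": "MyRedshiftCluster",
    "Properties": {
        "Handle": { "Ref": "ClusterReadyHandle" },
        "Timeout": "3600",
        "Count": "1"
    }
}

The EC2 instance's User Data would then poll aws redshift describe-clusters in a loop and call cfn-signal (or curl the presigned handle URL) once the ClusterStatus field reports available.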

Mongodb cluster with aws cloud formation and auto scaling

I've been investigating creating my own mongodb cluster in AWS. The AWS mongodb template provides some good starting points. However, it doesn't cover auto scaling or what happens when a node goes down. For example, say I have 1 primary and 2 secondary nodes, the primary goes down, and auto scaling kicks in. How would I add the newly launched mongodb instance to the replica set?
If you look at the template, it uses an init.sh script to check whether the node being launched is a primary node, waits for all other nodes to exist, and creates a replica set with their IP addresses on the primary. When the replica set is configured initially, all the nodes already exist.
Not only that, but my node app uses mongoose. Part of the database connection allows you to specify multiple nodes. How would I keep track of what's currently up and running (I guess I could use DynamoDB, but I'm not sure)?
What's the usual flow if an instance goes down? Do people generally manually re-configure clusters if this happens?
Any thoughts? Thanks.
This is a very good question and I went through this very painful journey myself recently. I am writing a fairly extensive answer here in the hope that some of these thoughts of running a MongoDB cluster via CloudFormation are useful to others.
I'm assuming that you're creating a MongoDB production cluster as follows: -
3 config servers (micro/small instances can work here)
At least 1 shard consisting of e.g. 2 (primary & secondary) shard instances (minimum or large) with large disks configured for the data / log / journal.
1 arbiter machine for voting (micro is probably OK).
i.e. https://docs.mongodb.org/manual/core/sharded-cluster-architectures-production/
Like yourself, I initially tried the AWS MongoDB CloudFormation template that you posted in the link (https://s3.amazonaws.com/quickstart-reference/mongodb/latest/templates/MongoDB-VPC.template) but to be honest it was far, far too complex, i.e. it's 9,300 lines long and sets up multiple servers (i.e. replica shards, configs, arbiters, etc.). Running the CloudFormation template took ages and it kept failing (e.g. after 15 minutes), which meant the servers all terminated again and I had to try again, which was really frustrating / time consuming.
The solution I went for in the end (which I'm super happy with) was to create separate templates for each type of MongoDB server in the cluster e.g.
MongoDbConfigServer.template (template to create config servers - run this 3 times)
MongoDbShardedReplicaServer.template (template to create replica - run 2 times for each shard)
MongoDbArbiterServer.template (template to create arbiter - run once for each shard)
NOTE: templates available at https://github.com/adoreboard/aws-cloudformation-templates
The idea then is to bring up each server in the cluster individually, i.e. 3 config servers, 2 sharded replica servers (for 1 shard) and an arbiter. You can then add custom parameters into each of the templates, e.g. the parameters for the replica server could include: -
InstanceType e.g. t2.micro
ReplicaSetName e.g. s1r (shard 1 replica)
ReplicaSetNumber e.g. 2 (used with ReplicaSetName to create name e.g. name becomes s1r2)
VpcId e.g. vpc-e4ad2b25 (not a real VPC obviously!)
SubnetId e.g. subnet-2d39a157 (not a real subnet obviously!)
GroupId (name of existing MongoDB group Id)
Route53 (boolean to add a record to an internal DNS - best practices)
Route53HostedZone (if boolean is true then ID of internal DNS using Route53)
The really cool thing about CloudFormation is that these custom parameters can have (a) a useful description for people running it, (b) special types (e.g. when run from the console you get a prefiltered combo box, so mistakes are harder to make) and (c) default values. Here's an example: -
"Route53HostedZone": {
"Description": "Route 53 hosted zone for updating internal DNS (Only applicable if the parameter [ UpdateRoute53 ] = \"true\"",
"Type": "AWS::Route53::HostedZone::Id",
"Default": "YA3VWJWIX3FDC"
},
This makes running the CloudFormation template an absolute breeze as a lot of the time we can rely on the default values and only tweak a couple of things depending on the server instance we're creating (or replacing).
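If you prefer the CLI to the web console, launching one of these stacks might look roughly like this (the stack name and parameter values are purely illustrative):

aws cloudformation create-stack \
    --stack-name mongo-s1r2 \
    --template-body file://MongoDbShardedReplicaServer.template \
    --parameters ParameterKey=InstanceType,ParameterValue=t2.micro \
                 ParameterKey=ReplicaSetName,ParameterValue=s1r \
                 ParameterKey=ReplicaSetNumber,ParameterValue=2

In practice you would also pass the VpcId, SubnetId and the other parameters listed above, or rely on their defaults.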
As well as parameters, each of the 3 templates mentioned earlier has a "Resources" section which creates the instance. We can also do cool things via the "AWS::CloudFormation::Init" section, e.g.
"Resources": {
"MongoDbConfigServer": {
"Type": "AWS::EC2::Instance",
"Metadata": {
"AWS::CloudFormation::Init": {
"configSets" : {
"Install" : [ "Metric-Uploading-Config", "Install-MongoDB", "Update-Route53" ]
},
The "configSets" in the previous example shows that creating a MongoDB server isn't simply a matter of creating an AWS instance and installing MongoDB on it but also we can (a) install CloudWatch disk / memory metrics (b) Update Route53 DNS etc. The idea is you want to automate things like DNS / Monitoring etc as much as possible.
IMO, creating a template, and therefore a stack, for each server has the very nice advantage of being able to replace a server extremely quickly via the CloudFormation web console. Also, because we have a server-per-template, it's easy to build the MongoDB cluster up bit by bit.
My final bit of advice on creating the templates would be to copy what works for you from other GitHub MongoDB CloudFormation templates, e.g. I used the following to set the replica servers up with RAID10 (instead of the massively more expensive AWS provisioned IOPS disks).
https://github.com/CaptainCodeman/mongo-aws-vpc/blob/master/src/templates/mongo-master.template
In your question you mentioned auto-scaling - my preference would be to add a shard / replace a broken instance manually (auto-scaling makes sense with web containers e.g. Tomcat / Apache, but a MongoDB cluster should really grow slowly over time). However, monitoring is very important, especially the disk sizes on the shard servers, to alert you when disks are filling up (so you can either add a new shard or delete data). Monitoring can be achieved fairly easily using AWS CloudWatch metrics / alarms or using the MongoDB MMS service.
If a node goes down, e.g. one of the replicas in a shard, then you can simply kill the server, recreate it using your CloudFormation template, and the disks will sync across automatically. This is my normal flow if an instance goes down and generally no re-configuration is necessary. I've wasted far too many hours in the past trying to fix servers - sometimes lucky / sometimes not. My backup strategy now is to run a mongodump of the important collections of the database once a day via a crontab, zip it up and upload it to AWS S3. This means if the nuclear option happens (complete database corruption) we can recreate the entire database with mongorestore in an hour or 2.
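A rough sketch of that kind of daily backup job (bucket name, database name and paths are purely illustrative):

#!/bin/bash
# daily-mongo-backup.sh - run from crontab, e.g.: 0 2 * * * /opt/scripts/daily-mongo-backup.sh
STAMP=$(date +%F)
BACKUP_DIR="/tmp/mongo-backup-$STAMP"

# dump only the important database
mongodump --host localhost --db mydb --out "$BACKUP_DIR"

# compress, push to S3, then clean up
tar -czf "$BACKUP_DIR.tar.gz" -C /tmp "mongo-backup-$STAMP"
aws s3 cp "$BACKUP_DIR.tar.gz" "s3://my-backup-bucket/mongo/"
rm -rf "$BACKUP_DIR" "$BACKUP_DIR.tar.gz"

Restoring is then a matter of pulling the archive back down and running mongorestore against it.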
However, if you create a new shard (because you're running out of space), configuration is necessary. For example, if you are adding a new Shard 3 you would create 2 replica nodes (e.g. primary with name => mongo-s3r1 / secondary with name => mongo-s3r2) and 1 arbiter (e.g. with name mongo-s3r-arb), then you'd connect via a MongoDB shell to a mongos (MongoDB router) and run this command: -
sh.addShard("s3r/mongo-s3r1.internal.mycompany.com:27017,mongo-s3r2.internal.mycompany.com:27017")
NOTE: - This command assumes you are using private DNS via Route53 (best practice). You can simply use the private IPs of the 2 replicas in the addShard command, but I have been very badly burned by this in the past (e.g. several months back all the AWS instances were restarted and new private IPs were generated for all of them. Fixing the MongoDB cluster took me 2 days as I had to reconfigure everything manually - whereas changing the IPs in Route53 takes a few seconds ... ;-)
You could argue we should also add the addShard command to another CloudFormation template but IMO this adds unnecessary complexity because it has to know about a server which has a MongoDB router (mongos) and connect to that to run the addShard command. Therefore I simply run this after the instances in a new MongoDB shard have been created.
Anyways, those are my rather rambling thoughts on the matter. The main thing is that once you have the templates in place your life becomes much easier and it's definitely worth the effort! Best of luck! :-)