How to run yum update on a Kubernetes cluster?

I formed a k8s cluster with 5 nodes.
I want to run a command on all nodes, equivalent to:
k8s-node1: yum update
k8s-node2: yum update
k8s-node3: yum update
k8s-node4: yum update
k8s-node5: yum update
How can I achieve this?

As cookiedough mentions, your question is not Kubernetes-related but rather about Linux server administration; in this case it also should not be asked on Stack Overflow, which is for programming-related topics.
A similar question has been asked and answered here, as have others covering the need to run a command or an update on multiple machines.
To keep it short: you can use tools such as Ansible, pssh, or a simple script.
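For example, a minimal sketch with Ansible's ad-hoc mode or pssh (the inventory group k8s_nodes and the hosts.txt file are illustrative names, and passwordless sudo on the nodes is assumed):
# Ansible ad-hoc command: update all packages on every host in the k8s_nodes group
ansible k8s_nodes -b -m yum -a "name=* state=latest"
# pssh: run the update on every host listed (one per line) in hosts.txt
pssh -h hosts.txt -i "sudo yum -y update"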
If there are existing pods on a node and you specifically need to reboot it (for example for a kernel or libc upgrade), you can read about this scenario in the official documentation: Maintenance on a Node.
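A rough sketch of that drain/reboot/uncordon workflow (the node name is just an example; flags can vary slightly between kubectl versions):
# Evict the pods and mark the node unschedulable before maintenance
kubectl drain k8s-node1 --ignore-daemonsets
# ...run yum update and reboot on the node itself...
# Mark the node schedulable again afterwards
kubectl uncordon k8s-node1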

Related

Salt-minions service consuming over 100% CPU on ubuntu, cannot remove it or find how its restarting

We used to use salt-master to manage about 10 servers, but have stopped now.
One of the servers is now constantly running a salt-minions service which consumes around 140%-160% of the CPU all the time. I kill it, and it just comes back again and again.
I have used apt-get to remove and purge any packages that include salt-*, and used dpkg to do the same. The Salt master server is not running. Yet this instance just keeps smashing itself with this random process that won't die.
Any assistance is greatly appreciated!
Screenshot of running processes and output from apt-get packages
This looks to be CVE related, and you need to rebuild/redeploy these systems.
Please read both:
SaltStack Blog: Critical Vulnerabilities Update CVE-2020-11651 and CVE-2020-11652
saltexploit.com
Snippet from the CVE blog:
...a critical vulnerability was discovered affecting Salt Master versions 2019.2.3 and 3000.1 and earlier. SaltStack customers and Salt users who have followed fundamental internet security guidelines and best practices are not affected by this vulnerability. The vulnerability is easily exploitable if a Salt Master is exposed to the open internet.
As always, please follow our guidance and secure your Salt infrastructure with the best practices found in this guide: See: Hardening your Salt Environment.
This vulnerability has been rated as critical with a Common Vulnerability Scoring System (CVSS) score of 10.0.
Snippet from saltexploit:
This was a crypto-mining operation
salt minions are affected and as of version 5, masters may be as well
salt-minions is a compiled xmrig binary.
salt-store contains a RAT, nspps, which continues to evolve and become more nasty
Atlassian confirms the newer revisions of the binary are a version of h2miner
Additional information
As a RAT, salt-store is more concerning than salt-minions. More on that later
There have been at least 5 different versions of the salt-store payload, each more advanced than the last.
There are additional backdoors installed, and private keys are getting stolen right and left
Seriously, change out your keys!
Symptoms
very high CPU usage, sometimes only on 1 core, sometimes on all
Fan spin! (Physical hardware only, of course.)
Mysterious process named salt-minions is CPU intensive, stemming from /tmp/salt-minions or /usr/bin/
additional binary in /var/tmp/ or /usr/bin named salt-store or salt-storer
Firewalls disabled
Most applications crashing or taken offline
Your screenshot shows processes named salt-minions and high CPU usage, just as described.
It would be a good idea to join the Salt community slack, too: SaltStack Community Slack and take a look at both the #salt-store-miner-public and #security channels.
Check your root's crontab with:
sudo crontab -u root -l
If you see a wget/curl download of a shell script from a random IP, you have the miner.
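A quick way to check for the other artifacts listed above (the paths come from the symptom list and may differ on your system):
# Look for the known malicious binaries
ls -l /tmp/salt-minions /var/tmp/salt-store /var/tmp/salt-storer /usr/bin/salt-store 2>/dev/null
# Show the process tree so you can see what keeps respawning the miner
ps faxu | grep -i salt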
How to fix it:
Plan A:
Update the Salt master
Wipe and reinstall all the servers connected to that master.
Plan B (usually more feasible)
Update the Salt master and minions
delete the crontab with sudo crontab -u root -e
kill the salt-store and the salt minion process with kill -9 <pid> <pid> (get the pids with ps faxu)
delete the salt minion fake executable, usually in /tmp/
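A minimal sketch of that Plan B cleanup (PIDs and paths are examples; verify them on your own machine first):
# Find the offending processes and their PIDs
ps faxu | grep -E 'salt-store|salt-minions'
# Kill them, replacing the placeholders with the PIDs you found
sudo kill -9 <salt-store-pid> <salt-minions-pid>
# Remove the fake executables if they are present
sudo rm -f /tmp/salt-minions /var/tmp/salt-store /var/tmp/salt-storer
# Then remove the malicious entry from root's crontab
sudo crontab -u root -e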

How to install Multi Machine Cluster in Standalone Service Fabric?

I am going through guide here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server
Section "Step 1B: Create a multi-machine cluster".
I have installed the cluster on one box and am trying to use the same JSON (as per the instructions) to install it on another box, so that I can have the cluster running on 2 VMs.
I am now getting this error when I run TestConfig.ps1:
Previous Fabric installation detected on machine XXX. Please clean the machine.
Previous Fabric installation detected on machine XXX. Please clean the machine.
Data Root node Dev Box1 exists on machine XXX in \XXX\C$\ProgramData\SF\Dev Box1. This is an artifact from a previous installation - please delete the directory corresponding to this node.
First, take a look at this link. These are the requirements that need to be met for each cluster node if you want to create the cluster.
The error is pretty obvious. You most likely already have SF installed on the machine, so either the SF runtime or some uncleaned cluster data is still there.
Your first try should be running the CleanFabric PowerShell script from the SF standalone package on each node. It should clean all SF data (cluster, runtime, registry, etc.). Try this and then run the TestConfiguration script once again. If this does not help, you will have to go to each node and manually delete any SF data that the TestConfiguration script complains about.

Installing kubernetes on less ram

Is it possible to install Kubernetes with the kubeadm init command on a system that has less than 1 GB of RAM? I have tried to install it, but it failed at the kubeadm init step.
As mentioned in the installation steps to be taken before you begin, you need to have:
a Linux-compatible system for the master and nodes
2 GB or more of RAM per machine
network connectivity
swap disabled on every node
But going back to your question: it may be possible to force the installation process to run, but the cluster will not be usable afterwards. This configuration will not be stable.
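If you still want to experiment, kubeadm can be told to skip the failing preflight checks; a rough sketch, at your own risk (the check names can differ slightly between kubeadm versions):
# Swap must still be disabled
sudo swapoff -a
# Skip the memory (and, on single-core VMs, CPU) preflight checks
sudo kubeadm init --ignore-preflight-errors=Mem,NumCPU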

Kubernetes command logging on Google Cloud Platform for PCI Compliance

Using Kubernetes' kubectl I can execute arbitrary commands on any pod such as kubectl exec pod-id-here -c container-id -- malicious_command --steal=creditcards
Should that ever happen, I would need to be able to pull up a log saying who executed the command and what command they executed. This includes if they decided to run something else by simply running /bin/bash and then stealing data through the tty.
How would I see which authenticated user executed the command as well as the command they executed?
Audit logging is not currently offered, but the Kubernetes community is working to get it available in the 1.4 release, which should come around the end of September.
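For reference, on clusters where audit logging is available, a minimal audit policy that records who ran exec against which pod might look like the following sketch (the file path and API server flags are illustrative):
# Write a minimal audit policy covering the pods/exec subresource
cat <<'EOF' | sudo tee /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods/exec"]
- level: Metadata
EOF
# The API server is then started with flags such as:
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit.log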
There are third-party solutions that can solve the auditing issue, and if you're looking for PCI compliance, as the title implies, solutions exist that help solve the broader problem, not just auditing.
Here is a link to such a solution by Twistlock. https://info.twistlock.com/guide-to-pci-compliance-for-containers
Disclaimer: I work for Twistlock.

Managing ROCKS cluster

I suddenly became an admin of the cluster in my lab and I'm lost.
I have experiences managing linux servers but not clusters.
A cluster seems to be quite different.
I figured out that the cluster is running CentOS and ROCKS.
I'm not sure what SGE is, or whether it is used in the cluster or not.
Would you point me to an overview or documentation of how the cluster is configured and how to manage it? I googled, but there seem to be lots of ways to build a cluster, and it is confusing where to start.
I too suddenly became a Rocks Clusters admin. While your CentOS knowledge will be handy, there is a 'Rocks' way of doing things, which you need to read up on. Most tasks start with the CLI commands rocks list and rocks set, and they are very nice to work with once you get to learn them.
You should probably start by reading the documentation (for the newest version; you can find yours with 'rocks report version'):
http://central6.rocksclusters.org/roll-documentation/base/6.1/
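A few of those commands as a short sketch (the exact subcommands available depend on the rolls you have installed):
# Report the installed Rocks version
rocks report version
# List the hosts the front end knows about
rocks list host
# List the appliance types defined in the cluster
rocks list appliance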
You can read up on the SGE part at
http://central6.rocksclusters.org/roll-documentation/sge/6.1/
I would recommend signing up for the Rocks Clusters discussion mailing list at:
https://lists.sdsc.edu/mailman/listinfo/npaci-rocks-discussion
The list is very friendly.