Cannot change Ceph configuration

I deployed a Ceph cluster with cephadm on 5 nodes. I am trying to change bluestore_cache_size with this command:
sudo ceph config-key set bluestore_cache_size 200221225472
but when I run this command:
sudo ceph-conf --show-config | grep bluestore_cache
bluestore_cache_size = 0 always appears. How can I change this configuration? Any help would be appreciated.

Try this:
ceph tell osd.* injectargs --bluestore_cache_size=200221225472
You might also need to know about the different configuration commands.
The following CLI commands are used to configure the cluster:
ceph config dump will dump the entire configuration database for the cluster.
ceph config get <who> will dump the configuration for a specific daemon or client (e.g., mds.a), as stored in the monitors’ configuration database.
ceph config set <who> <option> <value> will set a configuration option in the monitors’ configuration database.
ceph config show <who> will show the reported running configuration for a running daemon. These settings may differ from those stored by the monitors if there are also local configuration files in use or options have been overridden on the command line or at run time. The source of the option values is reported as part of the output.
ceph config assimilate-conf -i <input file> -o <output file> will ingest a configuration file from input file and move any valid options into the monitors’ configuration database. Any settings that are unrecognized, invalid, or cannot be controlled by the monitor will be returned in an abbreviated config file stored in output file. This command is useful for transitioning from legacy configuration files to centralized monitor-based configuration.
source: https://docs.ceph.com/en/latest/rados/configuration/ceph-conf/
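For the option in the question, a minimal sketch using the centralized configuration database would be (assuming a monitor-based configuration as deployed by cephadm, and an osd.0 daemon to verify against):
sudo ceph config set osd bluestore_cache_size 200221225472
sudo ceph config get osd bluestore_cache_size
sudo ceph config show osd.0 | grep bluestore_cache_size
Note that ceph config-key is a separate, generic key/value store and does not set daemon options, and ceph-conf --show-config only reads local configuration files and compiled-in defaults rather than the monitors' database.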

How can I change the config file of the mongo running on ECS

I changed the mongod.conf.orig of the mongo running on ECS, but when I restart, the changes are gone.
Here's the details:
I have MongoDB running on ECS, and it keeps crashing because it runs out of memory.
I have found the reason: I set the ECS memory to 8 GB, but because mongo is running in a container, it detects more memory than that.
When I run db.hostInfo()
I get a memSizeMB higher than 16 GB.
As a result, when I run db.serverStatus().wiredTiger.cache
I get a "maximum bytes configured" higher than 8 GB,
so I need to reduce wiredTigerCacheSizeGB in the config file.
I used the command line copilot svc exec -c /bin/sh -n mongo to connect to it.
Then I found a file named mongod.conf.orig.
I ran apt-get install vim to install vi and edit this file mongod.conf.orig.
But after I restart the mongo task, all my changes are gone, including the vi I just installed.
Did anyone meet the same problem? Any information will be appreciated.
ECS containers have ephemeral storage. In your case, you could create an EFS file system and mount it in the container, then keep the configuration there.
If you use CloudFormation, look at mount points.
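A rough CloudFormation sketch of that idea (resource and volume names such as MongoConfigFs and mongo-config, and the /etc/mongo mount path, are placeholders; the task definition is trimmed to the parts relevant to the mount):
Resources:
  MongoConfigFs:
    Type: AWS::EFS::FileSystem
  MongoTaskDef:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: mongo
      Volumes:
        - Name: mongo-config
          EFSVolumeConfiguration:
            FilesystemId: !Ref MongoConfigFs
      ContainerDefinitions:
        - Name: mongo
          Image: mongo
          # Point mongod at a config file stored on the EFS volume, where you
          # can set storage.wiredTiger.engineConfig.cacheSizeGB below 8 GB.
          Command: ["mongod", "--config", "/etc/mongo/mongod.conf"]
          MountPoints:
            - SourceVolume: mongo-config
              ContainerPath: /etc/mongo
Alternatively, since the goal is only to cap the cache, passing --wiredTigerCacheSizeGB in the container command avoids needing a persistent config file at all.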

Are there minikube args that could be applied to all minikube start calls?

I tend to run minikube start with a handful of flags, like --memory, --cpus and --kubernetes-version. I'd like to specify these in a file rather than write a shell script just to make start-up simple and consistent. Does such a thing already exist in some capacity?
The best option is to use minikube config to customize the minikube startup.
The command minikube config writes a file at ~/.minikube/config/config.json, and you can set options using minikube config set [OPTION] [VALUE].
Here you can find a list of available commands.
To set the parameters mentioned in your post, you need to use:
minikube config set memory xxx
minikube config set cpus xxx
minikube config set kubernetes-version xxx
Please refer to the documentation page to get all the options.
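After running the commands above, the resulting ~/.minikube/config/config.json ends up looking roughly like this (the values are only examples):
{
    "cpus": 4,
    "memory": 8192,
    "kubernetes-version": "v1.27.4"
}
Every subsequent minikube start then picks these defaults up without any extra flags.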

Check files integrity in a docker using OSSEC

Can OSSEC be used to check files which are inside a Docker container? From what I have read, OSSEC can only monitor file integrity on the host machine.
Yes, you may configure an OSSEC or Wazuh agent to do File Integrity Monitoring within docker containers.
Docker uses the OverlayFS storage driver that places the file structure of containers within the /var/lib/docker/overlay2/ directory (or /var/lib/docker/overlay/ in older versions), more information on this can be found here: https://docs.docker.com/storage/storagedriver/overlayfs-driver/
To determine which is the folder of the container you wish to monitor, you may use the inspect command: docker inspect <container-name> | grep MergedDir and then configure OSSEC or Wazuh to monitor this path.
For example, let's say you have an nginx container and want to monitor its configuration files:
The first step is to determine the container's folder:
# docker inspect docker-nginx | grep MergedDir
"MergedDir": "/var/lib/docker/overlay2/4f38dc4ff95f934ad368ca2770e7641f5cd492c289d2fd717fee22bda60b3560/merged"
and then add the directory to monitor in the ossec.conf file of your OSSEC or Wazuh agent:
<syscheck>
<directories check_all="yes" realtime="yes" restrict="*.conf">/var/lib/docker/overlay2/4f38dc4ff95f934ad368ca2770e7641f5cd492c289d2fd717fee22bda60b3560/merged/etc/nginx/</directories>
</syscheck>
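If you want to avoid the grep, the same path can be pulled out with a Go template (using the docker-nginx container from the example; this works for the overlay2 driver):
docker inspect -f '{{ .GraphDriver.Data.MergedDir }}' docker-nginx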
A detailed explanation of how to configure File Integrity Monitoring can be found here: https://documentation.wazuh.com/3.13/user-manual/capabilities/file-integrity/fim-configuration.html
If you also want to monitor the docker server activity, you can use the Wazuh docker module: https://documentation.wazuh.com/3.13/docker-monitor/monitoring_containers_activity.html
Best regards,
Sandra.

error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

I have a kubeconfig inside my .kube/ folder but I am still facing this issue.
I have also tried
kubectl config set-context ~/.kube/kubeconfig.yml
kubectl config use-context ~/.kube/kubeconfig.yml
No luck! Still the same.
Answering my own question
Initially, I lost a lot of time on this error, but later I found that my kubeconfig did not have the correct context.
I tried the same steps:
kubectl config set-context ~/.kube/kubeconfig1.yml
kubectl config use-context ~/.kube/kubeconfig1.yml
Or add a KUBECONFIG line to your ~/.bashrc file:
export KUBECONFIG=~/.kube/<kubeconfig_env>.yml
Also, if you want to add multiple kubeconfig files, add them like this in your ~/.bashrc file:
export KUBECONFIG=~/.kube/kubeconfig_dev.yml:~/.kube/kubeconfig_dev1.yml:~/.kube/kubeconfig_dev2.yml
with different kube contexts, and it worked well.
Admittedly, the error message should be simpler and point at the specific kube context problem, but after some digging it worked.
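To check whether the contexts from those files were actually picked up, a quick verification looks like this (dev-cluster is a placeholder context name):
kubectl config get-contexts
kubectl config current-context
kubectl config use-context dev-cluster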
If you are running k3s, you can put the following line in your ~/.zshrc or ~/.bashrc:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
By default, kubectl looks for a file named config in the $HOME/.kube directory, so you have to rename your config file or set its location properly.
Look here for a more detailed explanation.
I had the same problem because I used the Linux subsystem on Windows.
If you use the kubectl command in the subsystem, you must copy your config file to the Linux side of the file system.
For example, if your username on Windows is jammy, and your username is also jammy in the Linux subsystem, you must copy the config file
from:
/mnt/c/Users/jammy/.kube/config
to:
/home/jammy/.kube/
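In that setup the copy itself is just (with jammy standing in for your own username):
mkdir -p /home/jammy/.kube
cp /mnt/c/Users/jammy/.kube/config /home/jammy/.kube/config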

kubernetes: pods cannot connect to internet

I cannot connect to the internet from pods. My kubernetes cluster is behind a proxy.
I have already set /etc/environment and /etc/systemd/system/docker.service.d/http_proxy.conf, and confirmed that the environment variables (http_proxy, https_proxy, HTTP_PROXY, HTTPS_PROXY, no_proxy, NO_PROXY) are correct.
But in the pod, when I try echo $http_proxy, the answer is empty. I also tried curl -I https://rubygems.org, but it returned curl: (6) Could not resolve host: rubygems.org.
So I think the pod doesn't receive the environment values correctly, or there is something I forgot to do. What should I do to solve this?
I tried export http_proxy=http://xx.xx.xxx.xxx:xxxx; export https_proxy=....
After that, I tried curl -I https://rubygems.org again and received the headers with a 200.
What I see is that you have the wrong proxy.conf name.
As per the official documentation, the name should be /etc/systemd/system/docker.service.d/http-proxy.conf and not /etc/systemd/system/docker.service.d/http_proxy.conf.
Next you add the proxies, reload the daemon and restart docker, as mentioned in another answer provided in the comments.
/etc/systemd/system/docker.service.d/http-proxy.conf:
Content:
[Service]
Environment="HTTP_PROXY=http://x.x.x:xxxx"
Environment="HTTPS_PROXY=http://x.x.x.x:xxxx"
# systemctl daemon-reload
# systemctl restart docker
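After the restart you can confirm that the daemon picked the proxy up, since docker info prints HTTP Proxy and HTTPS Proxy lines when they are set:
# docker info | grep -i proxy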
Or, as per mk_ska's answer, you can add an http_proxy setting to your Docker machine in order to forward packets from the nested Pod container through the target proxy server.
For an Ubuntu-based operating system:
Add an export http_proxy='http://<host>:<port>' record to the file /etc/default/docker
For a CentOS-based operating system:
Add an export http_proxy='http://<host>:<port>' record to the file /etc/sysconfig/docker
Afterwards restart the Docker service.
The above will set the proxy for all containers that will be used by the Docker engine.
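If the variables also need to be visible inside the Pod itself (as in the echo $http_proxy test above), one option is to set them explicitly in the Pod spec. A minimal sketch, where the proxy address and the no_proxy list are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: proxy-test
spec:
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sleep", "3600"]
    env:
    - name: http_proxy
      value: "http://x.x.x.x:xxxx"
    - name: https_proxy
      value: "http://x.x.x.x:xxxx"
    - name: no_proxy
      value: "10.0.0.0/8,.svc,.cluster.local"
With that in place, echo $http_proxy inside the Pod prints the address, and curl picks up https_proxy for the https://rubygems.org test.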