I've recently installed Icinga2 on a bunch of Ubuntu LXC containers. I have a master node where you can log into icingaweb to check status.
However, the load thresholds seem low and I cannot see how or even where to adjust the parameters. Could someone point me in the right direction? Is this done on the master or the remote nodes? What's the file and where does it sit in the file structure?
I installed Icinga2 on an Ubuntu 16.04 server from the Icinga2 PPA.
Create a service definition for load on the master:
apply Service "load" {
  import "generic-service"

  check_command = "load"

  // Warning thresholds for the 1, 5 and 15 minute load averages
  vars.load_wload1 = 5
  vars.load_wload5 = 4
  vars.load_wload15 = 3

  // Critical thresholds for the 1, 5 and 15 minute load averages
  vars.load_cload1 = 10
  vars.load_cload5 = 6
  vars.load_cload15 = 4

  // Run the check on the client's endpoint; command_endpoint expects an
  // endpoint name, so use host.name (not host.address), which works when
  // your endpoint names match your host names
  command_endpoint = host.name

  assign where host.name == "monitored client"
}
More info in the Icinga2 documentation on apply rules and the load CheckCommand thresholds.
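After editing the file on the master, you can validate and reload the configuration; a minimal sketch using the standard icinga2 CLI and systemd:

# Validate the Icinga2 configuration, then reload the daemon
icinga2 daemon -C
sudo systemctl reload icinga2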
I have a k8s_object rule to apply a deployment to my Google Kubernetes Engine cluster. Here is my setup:
load("#io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "image",
data = [":lib", "//:package.json"],
entry_point = ":index.ts",
)
load("#io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "k8s_deployment",
template = ":gateway.deployment.yaml",
kind = "deployment",
cluster = "gke_cents-ideas_europe-west3-b_cents-ideas",
images = {
"gcr.io/cents-ideas/gateway:latest": ":image"
},
)
But when I run bazel run //services/gateway:k8s_deployment.apply, I get the following error:
INFO: Analyzed target //services/gateway:k8s_deployment.apply (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //services/gateway:k8s_deployment.apply up-to-date:
bazel-bin/services/gateway/k8s_deployment.apply
INFO: Elapsed time: 0.113s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /snap/bin/kubectl --kubeconfig= --cluster=gke_cents-ideas_europe-west3-b_cents-ideas --context= --user= apply -f -
2020/02/12 14:52:44 Unable to publish images: unable to publish image gcr.io/cents-ideas/gateway:latest
error: no objects passed to apply
error: no objects passed to apply
It doesn't push the new image to Google Container Registry.
Strangely, this worked a few days ago, and I didn't change anything.
Here is the full code if you need to take a closer look: https://github.com/flolude/cents-ideas/blob/069c773ade88dfa8aff492f024a1ade1f8ed282e/services/gateway/BUILD
Update
I don't know if this has something to do with the issue, but when I run
gcloud auth configure-docker
I get some warnings:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
WARNING: Your config file at [/home/flolu/.docker/config.json] contains these credential helper entries:
{
"credHelpers": {
"asia.gcr.io": "gcloud",
"staging-k8s.gcr.io": "gcloud",
"us.gcr.io": "gcloud",
"gcr.io": "gcloud",
"marketplace.gcr.io": "gcloud",
"eu.gcr.io": "gcloud"
}
}
Adding credentials for all GCR repositories.
WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
gcloud credential helpers already registered correctly.
I had google-cloud-sdk installed via snap. What I did to make it work was to remove google-cloud-sdk via
snap remove google-cloud-sdk
and then follow those instructions to install it via
sudo apt install google-cloud-sdk
Now it works fine.
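To confirm the fix took effect, you can check that the credential helper now resolves on the PATH and re-run the configuration step; a small sketch of that check:

# docker-credential-gcloud should now be found on the PATH
which docker-credential-gcloud
# re-register the helper; the earlier warning should be gone
gcloud auth configure-docker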
I am trying to start an ipyparallel cluster using MPI.
The ipcluster_config.py has the following lines modified as such:
c.MPILauncher.mpi_cmd = ['mpiexec']
c.MPIControllerLauncher.controller_args = ['--ip=*']
c.MPILauncher.mpi_args = ["-machinefile", "~/mpi_hosts"]
The ipcontroller_config.py is configured as such:
c.HubFactory.engine_ip = '*'
c.HubFactory.ip = '*'
c.HubFactory.client_ip = '*'
However, when I launch the cluster using the command
ipcluster start --profile mpi -n 2
it fails with the following message:
Engines shutdown early, they probably failed to connect.
You can set this by adding "--ip='*'" to your ControllerLauncher.controller_args
Not sure how to debug further.
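One place to start is the controller and engine log files that ipcluster writes into the profile directory; a sketch, assuming the default IPython directory layout and that the launchers are logging to file:

# Controller and engine logs for the mpi profile (default IPython layout)
cat ~/.ipython/profile_mpi/log/ipcontroller-*.log
cat ~/.ipython/profile_mpi/log/ipengine-*.log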
The following are the steps I follow to set up a cluster on 3 different machines.
1. Unzip JBoss Fuse into three different folders, so that you have the following configuration:
- machine1/jboss-fuse-6.3.0.redhat-187
- machine2/jboss-fuse-6.3.0.redhat-187
- machine3/jboss-fuse-6.3.0.redhat-187
2. Edit etc/org.apache.karaf.management.cfg and change rmiRegistryPort and rmiServerPort, assigning unique ports:
#machine1
rmiRegistryPort = 1099
rmiServerPort = 44444
#machine2
rmiRegistryPort = 1100
rmiServerPort = 44445
#machine3
rmiRegistryPort = 1101
rmiServerPort = 44446
3. Edit etc/org.apache.karaf.shell.cfg and change sshPort, assigning a unique port:
#machine1
sshPort = 8101
#machine2
sshPort = 8102
#machine3
sshPort = 8103
4. Edit etc/system.properties and change karaf.name, org.osgi.service.http.port and activemq.port, assigning unique ports:
#machine1
karaf.name = root1
org.osgi.service.http.port=8181
activemq.port = 61616
#machine2
karaf.name = root2
org.osgi.service.http.port=8182
activemq.port = 61617
#machine3
karaf.name = root3
org.osgi.service.http.port=8183
activemq.port = 61618
5. Start the root1 container:
./fuse
6. And create the Fabric:
JBossFuse:karaf@root1> fabric:create --new-user administrator --new-user-password password --new-user-role Administrator --zookeeper-password ZooPass1 --resolver manualip --manual-ip 192.168.1.9 --wait-for-provisioning
Above, 192.168.1.9 is the IP address of my root1 machine (machine1).
7. Now, start the root2 container and join the Fabric:
./fuse
JBossFuse:karaf@root2> fabric:join 192.168.1.10:2181
Ensemble password: ZooPass1
8. Now, start the root3 container and join the Fabric:
./fuse
JBossFuse:karaf@root3> fabric:join 192.168.1.11:2181
Ensemble password: ZooPass1
9. Run the following command to add root2 and root3 to the ensemble:
JBossFuse:karaf@root1> fabric:ensemble-add root2 root3
This will change of the zookeeper connection string.
Are you sure want to proceed(yes/no):yes
JBossFuse:karaf@root1> fabric:ensemble-list
[id]
root1
root2
root3
Then I deployed the REST service on all 3 nodes, created the profile, and also added the required profile with the HTTP gateway for load balancing and HA, but requests are not going through machine 2 and machine 3. I am also not able to access the hawtio consoles via the URLs given below.
192.168.1.10:8182/hawtio/login
192.168.1.10:8183/hawtio/login
Can anybody help me achieve load balancing for a cluster environment with 3 different machines?
I would suggest -- don't do any of this :) If you're using Fabric8, install one instance of Fuse, do fabric:create, then use container-create-ssh --host localhost to set up other containers on the same machine. That will automatically take care of all the port conflicts that I suspect are at the root of your problem. Fabric8 uses many, many ports, and trying to fix them all up manually is a ghastly job.
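For illustration, a minimal sketch of that approach; the container name, SSH user and password here are hypothetical placeholders:

JBossFuse:karaf@root> fabric:create --new-user administrator --new-user-password password --wait-for-provisioning
JBossFuse:karaf@root> fabric:container-create-ssh --host localhost --user myuser --password mypass node2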
We are using Chef to manage our infrastructure, and I'm running into an issue where the Splunk TA (Add-on for Kafka) simply refuses to acknowledge the kafka_credentials.conf file I've dropped into the local directory of the plugin. If I use the Web UI, it generates an entry properly and it shows up in the add-on configuration.
[root@ip-10-14-1-42 local]# ls
app.conf inputs.conf kafka.conf kafka_credentials.conf
[root@ip-10-14-1-42 local]# grep -nr "" *.conf
app.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
app.conf:2:[install]
app.conf:3:is_configured = 1
inputs.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
inputs.conf:2:[kafka_mod]
inputs.conf:3:interval = 60
inputs.conf:4:start_by_shell = false
inputs.conf:5:
inputs.conf:6:[kafka_mod://my_app]
inputs.conf:7:kafka_cluster = default
inputs.conf:8:kafka_topic = log-my_app
inputs.conf:9:kafka_topic_group = my_app
inputs.conf:10:kafka_partition_offset = earliest
inputs.conf:11:index = main
kafka.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
kafka.conf:2:[global_settings]
kafka.conf:3:log_level = INFO
kafka.conf:4:index = main
kafka.conf:5:use_kv_store = 0
kafka.conf:6:use_multiprocess_consumer = 1
kafka.conf:7:fetch_message_max_bytes = 1048576
kafka_credentials.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
kafka_credentials.conf:2:[default]
kafka_credentials.conf:3:kafka_brokers = 10.14.2.164:9092,10.14.2.194:9092
kafka_credentials.conf:4:kafka_partition_offset = earliest
kafka_credentials.conf:5:index = main
Upon restarting Splunk, the add-on is installed, and even the input is created under the Inputs section, but the cluster itself is "not available", and when examining the logs I see this:
2017-08-09 01:40:25,442 INFO pid=29212 tid=MainThread file=kafka_mod.py:main:168 | Start Kafka
2017-08-09 01:40:30,508 INFO pid=29212 tid=MainThread file=kafka_config.py:_get_kafka_clusters:228 | Clusters: {}
2017-08-09 01:40:30,509 INFO pid=29212 tid=MainThread file=kafka_config.py:__init__:188 | No Kafka cluster are configured
It seems like this plugin only respects clusters created through the Web UI. That is not going to work, as we want to be able to fully configure this through Chef. Short of hacking the REST API and fudging around with the .py files in the add-on directory to force a dictionary in, what are my options?
Wondering if anyone has encountered this before.
If I had to guess, it is silently rejecting the files because # is not traditionally used for comments in INI files. Try a ; instead.
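For example, the Chef-generated kafka_credentials.conf from the question would become the following; only the comment prefix changes, the rest is the same content:

; MANAGED BY CHEF. PLEASE DO NOT MODIFY!
[default]
kafka_brokers = 10.14.2.164:9092,10.14.2.194:9092
kafka_partition_offset = earliest
index = main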
Building a master locally.
I am trying to use /kubernetes/docs/getting-started-guides/docker-multinode/master.sh
The master is done!
Some detailed information:
K8S_VERSION is set to: 1.2.4
ETCD_VERSION is set to: 2.2.1
FLANNEL_VERSION is set to: 0.5.5
FLANNEL_IFACE is set to: wlan0
FLANNEL_IPMASQ is set to: true
MASTER_IP is set to: 192.168.1.104
ARCH is set to: amd64
Detecting your OS distro ...
Starting bootstrap docker ...
Starting k8s ...
4fa730ca61727c165345885639cbef4d5a34b5d0fa3a361c13b0abc3ead0dd58
{ "Network": "10.1.0.0/16", "Backend": {"Type": "vxlan"}}
DOCKER_OPTS="$DOCKER_OPTS --mtu=1450 --bip=10.1.48.1/24"
Next, add a Vagrant VirtualBox slave to the master.
Trying to use /kubernetes/docs/getting-started-guides/docker-multinode/worker.sh brings up an issue:
K8S_VERSION is set to: 1.2.4
FLANNEL_VERSION is set to: 0.5.5
FLANNEL_IFACE is set to: wlan0
FLANNEL_IPMASQ is set to: true
MASTER_IP is set to: 192.168.1.104
ARCH is set to: amd64
Detecting your OS distro ...
Starting bootstrap docker ...
Starting k8s ...
Error response from daemon: lstat /var/lib/docker-bootstrap/aufs/mnt/8881a46cdd1fe51b7f0f86e8659bc8f9c905089b0d86a273e85d134c637041c1/tmp/flannel/subnet.env: no such file or directory
I have two VMs:
one is on public_network: 192.168.1.110
the other is on private_network: 192.168.50.2
Both give the same error output.
PS: can I use worker.sh on AWS EC2?
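As a first debugging step for the flannel subnet error above: the docker-multinode scripts run flannel inside a separate bootstrap Docker daemon, so you can inspect that daemon directly. A sketch, assuming the default bootstrap socket path used by those scripts:

# List containers in the bootstrap daemon and check the flannel container's logs
sudo docker -H unix:///var/run/docker-bootstrap.sock ps -a
sudo docker -H unix:///var/run/docker-bootstrap.sock logs <flannel-container-id>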
The question is a bit miscellaneous; I'm hoping someone can help me sort it out.
I was lucky enough to get the worker node to pass, but I don't know the exact reason.
I have now packaged the Vagrant box on my Google Drive; the URL is below: https://drive.google.com/file/d/0B9WzutmgDDrodkpLRGM2c0J1RXc/view?usp=sharing
Before you use it on a public or private network, you should download a pre-compiled Kubernetes release and copy it to the Vagrant folder.
Use vagrant init kube-slave to start.
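For completeness, a hypothetical sequence for using such a box; the box file path is a placeholder:

# Add the downloaded box under the name kube-slave, then bring it up
vagrant box add kube-slave /path/to/kube-slave.box
vagrant init kube-slave
vagrant up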