I have used a docker-compose based Zabbix 4.x setup in the past.
When I started that environment I could count more than 5 containers, including zabbix-agent.
I am now trying to start versions 5.0, 5.2 and 5.4, but every time only 3 containers come up (far fewer than those defined in the YAML file), and the container with the Zabbix agent is not running and listening.
Furthermore, on docker-compose up I get these errors:
WARNING: The following deploy sub-keys are not supported and have been ignored: resources.reservations.cpus
WARNING: The following deploy sub-keys are not supported and have been ignored: resources.reservations.cpus
WARNING: The following deploy sub-keys are not supported and have been ignored: resources.reservations.cpus
WARNING: The following deploy sub-keys are not supported and have been ignored: resources.reservations.cpus
WARNING: The following deploy sub-keys are not supported and have been ignored: resources.reservations.cpus
WARNING: The following deploy sub-keys are not supported and have been ignored: resources.reservations.cpus
WARNING: The following deploy sub-keys are not supported and have been ignored: resources.reservations.cpus
WARNING: The following deploy sub-keys are not supported and have been ignored: resources.reservations.cpus
....
Zabbix agent item "system.cpu.intr" on host "Zabbix server" failed: first network error, wait for 15 seconds
Zabbix agent item "vm.memory.size[pavailable]" on host "Zabbix server" failed: another network error, wait for 15 seconds
In addition, I have to use active checks with some servers, because the VMs are behind NAT and cannot be reached by the server. In the documentation I saw that in the web interface you can choose between an active and a passive agent, but on my server I only have the entry "agent", without passive or active.
For the Zabbix agent, there is a choice between ‘Zabbix agent (passive)’ and ‘Zabbix agent (active)’.
I'm on CentOS 7, Docker version 1.13.1 (build 7d71120/1.13.1), docker-compose version 1.29.2 (build 5becea4c).
I don't know if you're still having this problem, but the solution is to run docker-compose using the 'all' profile:
docker-compose --profile all up
Starting with docker-compose 1.28.0 you have the option to run only some of the services defined in the compose file. A service can belong to one or more profiles, and the new Zabbix compose files define profiles called 'full' and 'all'.
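For reference, profiles look roughly like this in a compose file (a minimal sketch with hypothetical service definitions, not the actual Zabbix compose file):

services:
  zabbix-server:
    image: zabbix/zabbix-server-pgsql:latest
    # no profiles key, so this service starts on every docker-compose up
  zabbix-agent:
    image: zabbix/zabbix-agent:latest
    profiles:
      - all    # only started when the 'all' profile is activated

Services without a profiles key always start; services assigned to profiles are skipped unless that profile is selected, which is why a plain docker-compose up brings up only a few of the containers defined in the file.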
Related
I have a VPS running Windows Server 2022, and I installed Docker on it. I am trying to run a container that uses the postgres:14 Docker image, but every time I try to run it I get:
no matching manifest for windows/amd64. As far as I understand, this is a matching error of sorts, where Docker is trying to get a postgres image that supports the architecture of the host. Is there any fix for this, or should I try another database technology?
I'm trying to set up a CI/CD pipeline with the Azure Cosmos DB Emulator build task in Azure DevOps.
I've installed it from the marketplace, and my YAML file contains:
- task: CosmosDbEmulator@2
  inputs:
    containerName: 'azure-cosmosdb-emulator'
    enableAPI: 'SQL'
    portMapping: '8081:8081, 8901:8901, 8902:8902, 8979:8979, 10250:10250, 10251:10251, 10252:10252, 10253:10253, 10254:10254, 10255:10255, 10256:10256, 10350:10350'
    hostDirectory: '$(Build.BinariesDirectory)\azure-cosmosdb-emulator'
Running this results in the failure "The term 'docker' is not recognized as the name of a cmdlet, function, script file, or operable...", so I added this to the YAML:
- task: DockerInstaller@0
  displayName: Docker Installer
  inputs:
    dockerVersion: 17.09.0-ce
    releaseType: stable
resulting in failure:
error during connect: (...): open //./pipe/docker_engine: The system
cannot find the file specified. In the default daemon configuration on
Windows, the docker client must be run elevated to connect. This error
may also indicate that the docker daemon is not running.
New-CosmosDbEmulatorContainer : Could not create container
azure-cosmosdb-emulator from
mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator:latest"
I'm relatively new to Azure Pipelines and Docker, so any help is really appreciated!
error during connect: (...): open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect.
The error you encountered above occurs because Docker is not installed on your build agent, or the Docker engine has not started successfully. The DockerInstaller@0 task only installs the Docker CLI; it does not install the Docker engine.
See the extract below from this document.
The agent pool to be selected for this CI should have Docker for Windows installed unless the installation is done manually in a prior task as a part of the CI. See Microsoft hosted agents article for a selection of agent pools; we recommend to start with Hosted VS2017.
As the document above recommends, please use a hosted VS2017 agent to run your pipeline. Set the pool section in your YAML file as below; see the pool document.
pool:
  vmImage: vs2017-win2016
If you are using a self-hosted agent, please install Docker (not just the CLI) on your self-hosted agent machine, and make sure the Docker engine is up and running.
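Putting the pieces together, the relevant part of the pipeline YAML would look roughly like this (a sketch that reuses the task inputs from the question; the port mappings are omitted here for brevity):

pool:
  vmImage: vs2017-win2016

steps:
- task: CosmosDbEmulator@2
  inputs:
    containerName: 'azure-cosmosdb-emulator'
    enableAPI: 'SQL'
    hostDirectory: '$(Build.BinariesDirectory)\azure-cosmosdb-emulator'

On an agent image that already has Docker for Windows installed, the DockerInstaller step should not be needed.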
I have installed and run a private build agent for Azure DevOps interactively on Linux.
However, when attempting to follow the documentation to set it up as a service, it fails to run. The installation usually completes successfully, but starting the service always returns an error.
Configuration: a new VM running Ubuntu 18.04 LTS, secured with AAD and JIT, logged in with VM admin permissions.
Error:
$ sudo ./svc.sh install
Creating launch agent in /etc/systemd/system/vsts.agent.xxx.linux-agent-01.service
Run as user: xxx@microsoft.com
Run as uid: 1613914
gid: 1613914
$ sudo ./svc.sh start
Failed to start vsts.agent.xxx.linux-agent-01.service: Unit vsts.agent.xxx.linux-agent-01.service is not loaded properly: Exec format error.
See system logs and 'systemctl status vsts.agent.xxx.linux-agent-01.service' for details.
Failed: failed to start vsts.agent.xxx.linux-agent-01.service
$
When checking the status after attempting to run, I get this:
$ sudo ./svc.sh status
/etc/systemd/system/vsts.agent.edgewebui.LinuxAgent03.service
● vsts.agent.edgewebui.LinuxAgent03.service - VSTS Agent (edgewebui.LinuxAgent03)
Loaded: error (Reason: Exec format error)
Active: inactive (dead)
Feb 28 18:59:18 build-agent-linux systemd[1]: /etc/systemd/system/vsts.agent.edgewebui.LinuxAgent03.service:7: Invalid user/group…osoft.com
Hint: Some lines were ellipsized, use -l to show in full.
Any suggestions on why this isn't working?
I am trying to install Kubernetes on Windows Server 2016.
I tried to install minikube and got some errors.
This is the tutorial that I followed:
https://www.assistanz.com/installing-minikube-on-windows-2016-server/
This is the command + error that I got:
PS C:\Windows\system32> minikube start –vm-driver=hyperv –hyperv-virtual-switch=Minikube
Starting local Kubernetes v1.10.0 cluster...
Starting VM... Downloading Minikube ISO
170.78 MB / 170.78 MB [============================================] 100.00% 0s
E1106 19:29:10.616564 11852 start.go:168] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path.
Retrying.
E1106 19:29:10.689675 11852 start.go:174] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
Does anyone know how to solve this?
I googled it, but no luck.
Thanks!
I was never able to get the config parameters to work with minikube start.
I was able to get past this error using the minikube config commands in PowerShell (should also work at a command prompt):
minikube config set vm-driver hyperv
minikube config set hyperv-virtual-switch ExternalSwitch
minikube config view
minikube delete
minikube start
For more information on the command, run: minikube config -h
Looking at the documentation you have provided, I noticed that the screenshot shows a slight difference from the command they've quoted.
I have also found this command in another piece of documentation from Kubernetes here, showing the same command as the one in the screenshot.
I suggest you try the following command:
minikube start --vm-driver=hyperv --hyperv-virtual-switch=Minikube
It is true that the OP has pasted an incorrect command, because there is - instead of --. I tried to pass those arguments to minikube and all you get is an instant error, so the issue must be somewhere else. I remember having a similar issue, and it got resolved after deleting the .kube and .minikube folders and trying to run it again.
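For reference, deleting those folders can be done like this in PowerShell (a sketch that assumes the default locations under the user profile; adjust the paths if minikube was configured elsewhere):

# Remove cached kubectl and minikube state, then start over
Remove-Item -Recurse -Force "$HOME\.kube"
Remove-Item -Recurse -Force "$HOME\.minikube"
minikube start --vm-driver=hyperv --hyperv-virtual-switch=Minikube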
After taking a closer look, this tutorial is intended for installing minikube inside a Windows Server 2016 virtual machine, so you have to have hardware capable of nested virtualization:
Prerequisites:
- The Hyper-V host and guest must both be Windows Server 2016/Windows 10 Anniversary Update or later.
- VM configuration version 8.0 or greater.
- An Intel processor with VT-x and EPT technology -- nesting is currently Intel-only.
- There are some differences with virtual networking for second-level virtual machines. See "Nested Virtual Machine Networking".
So the main question is: is that true in your scenario? Are you trying to perform your steps on a Windows Server Hyper-V virtual machine with the nested virtualization feature enabled?
If you confirm that, I have the technical means to check it in that scenario.
Otherwise I recommend using the "traditional way" of running minikube on Windows, according for example to this tutorial.
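If nested virtualization does turn out to be the missing piece, it can be exposed to the guest from the Hyper-V host with something like the following (a sketch; the VM name is hypothetical and the VM must be powered off first):

# Run on the Hyper-V host, not inside the guest VM
Set-VMProcessor -VMName "WinServer2016-Minikube" -ExposeVirtualizationExtensions $true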
I've just upgraded my Puppet environment from 3.4.2 to 3.4.3 through Puppet Labs' apt repos, upgrading both the agent(s) and the master. Doing an agent run leads to the following error:
Info: Retrieving pluginfacts
Debug: Failed to load library 'msgpack' for feature 'msgpack'
Debug: file_metadata supports formats: pson yaml b64_zlib_yaml raw
Debug: Failed to load library 'msgpack' for feature 'msgpack'
Debug: file_metadata supports formats: pson yaml b64_zlib_yaml raw
Error: /File[/var/lib/puppet/facts.d]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://<puppetserver>/pluginfacts
Debug: Finishing transaction [...]
Nevertheless, I retrieve a catalog from the master, so the agent run still works and seems to do the things it should do. (Or let's say, I actually can't determine whether something related to the error message is going wrong.)
However, I want to get rid of the error message.
I double-checked the Puppet version with puppet --version on the agent and the master. I use Passenger for the puppetmaster. Facter is at version 2.0.1. So what did I miss?
Addition: when running an agent with the previous version 3.4.2, there is no error message.
Any ideas? Many thanks for your support.
ITL
This is due to this bug: https://tickets.puppetlabs.com/browse/PUP-3655
The issue is that for pluginsync to work, there must be at least one module in the environment that has a facts.d directory directly off of the top level of the module.
My workaround for this was to create an executable facts.d/README file at the top level of one of our main internal modules, containing the following:
#!/bin/bash
# This directory is where external fact scripts would go, if we had any. This
# directory exists only because with directory environments puppet will
# complain if there isn't a single module in an environment that doesn't have a
# facts.d directory.
echo "bug=https://tickets.puppetlabs.com/browse/PUP-3655"
exit 0
The problem you encounter here comes from the Facter update and from the way you distribute external facts with Puppet 3.x and Facter 2.x (that was my case).
As stated in the Facter 2.2 documentation, you need to relocate your facts folder into the module tree:
The best way to distribute external facts is with pluginsync, which
added support for them in Puppet 3.4/Facter 2.0.1. To add external
facts to your puppet modules, just place them in
MODULEPATH/MODULE/facts.d/.
So, in older versions, the path for external facts was :
MODULEPATH/MODULE/lib/facter/external_fact.rb
If you change it to :
MODULEPATH/MODULE/facts.d/external_fact.rb
Then you won't encounter the problem any more.
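In practice the relocation is just a move of the fact file into a facts.d directory at the top of the module (a sketch with the placeholder paths from above; substitute your own module path and name):

# MODULEPATH and MODULE are placeholders, as in the documentation quote above
mkdir -p MODULEPATH/MODULE/facts.d
mv MODULEPATH/MODULE/lib/facter/external_fact.rb MODULEPATH/MODULE/facts.d/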
Regards, rustx
Facter 2.0.1 was released yesterday. That's your problem. Downgrade to 1.7.x and you should be fine.
Caught the same error today, reconfiguring my puppet master:
Info: Retrieving pluginfacts
Error: /File[/var/lib/puppet/facts.d]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://puppet/pluginfacts
Info: Retrieving plugin
Error: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://puppet/plugins
Info: Caching catalog for puppet
Info: Applying configuration version '1405577010'
Here are my versions:
grundic@puppet:~$ puppet --version
3.6.2
grundic@puppet:~$ facter --version
2.1.0
Restarting daemon helped me (I use puppet master behind passenger):
grundic@puppet:~$ sudo service apache2 restart
* Restarting web server apache2
... waiting ...done.
grundic@puppet:~$ sudo puppet agent --test --verbose
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppet
Info: Applying configuration version '1405607835'
Notice: Dummy message for debugging
Notice: /Stage[main]/Main/Notify[Dummy message for debugging]/message: defined 'message' as 'Dummy message for debugging'
Notice: Finished catalog run in 0.06 seconds
I had the same errors running Puppet 3.6.2 on CentOS 6.5.
Downgrading puppet, puppet-server, facter and hiera to the previous versions (3.6.1, 2.0.2, 1.3.3) 'resolves' the issue.
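On CentOS that downgrade can be done with yum, for example (a sketch; the exact package release suffixes available in the puppetlabs repo may differ):

yum downgrade puppet-3.6.1 puppet-server-3.6.1 facter-2.0.2 hiera-1.3.3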
As Grundic said, restart the master.
Then clean up the agent's certs on the master and remove the certs on the agent. Then re-run puppet agent -t and puppet cert sign --all. It will all go away. That worked for me.
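For reference, that cert cleanup looks roughly like this on Puppet 3.x (a sketch; the agent hostname is hypothetical and the ssl path assumes the default /var/lib/puppet layout seen in the logs above):

# On the master: revoke and remove the agent's old certificate
puppet cert clean agent01.example.com
# On the agent: remove the locally cached certificates
rm -rf /var/lib/puppet/ssl
# On the agent: request a new certificate and run
puppet agent -t
# On the master: sign the pending request(s)
puppet cert sign --all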
# For every module that has a lib/facter directory, create a facts.d symlink
# pointing at it, so that each module exposes a facts.d directory.
for path in `ls */lib/facter | grep :$ | sed "s,:,,"`; do
    MODULE=`echo $path | sed "s,/lib/facter,,"`
    cd "$MODULE" && ln -s lib/facter facts.d && cd ..
done
These parts are especially important:
`ls */lib/facter | grep :$ | sed "s,:,,"`
`echo $path | sed "s,/lib/facter,,"`
This snippet ought to be run from /etc/puppet/modules as well as from the modules/ path of each environment under /etc/puppet/environments.