Unable to start the application from the Cloud Foundry CLI - ibm-cloud

I am getting the error "bits have not been uploaded", even though I am able to upload the app from the cf CLI.
Check the log of commands executed below.
Where is the error?
I followed all the steps as shown in the dw001 video correctly.
F:\IBM BLUEMIX\DW001\testappl1>
F:\IBM BLUEMIX\DW001\testappl1>cd dir
The system cannot find the path specified.
F:\IBM BLUEMIX\DW001\testappl1>dir
Volume in drive F is Comp Stuff
Volume Serial Number is 5635-6A95
Directory of F:\IBM BLUEMIX\DW001\testappl1
01-12-2015 23:12 <DIR> .
01-12-2015 23:12 <DIR> ..
01-12-2015 17:40 13 .cfignore
01-12-2015 17:40 7,171 .jshintrc
01-12-2015 17:40 429 .project
01-12-2015 23:12 <DIR> .settings
01-12-2015 17:40 9,873 app.js
01-12-2015 17:40 176 manifest.yml
01-12-2015 17:40 429 package.json
01-12-2015 23:12 <DIR> public
01-12-2015 17:40 445 README.md
01-12-2015 23:12 <DIR> routes
01-12-2015 23:12 <DIR> views
7 File(s) 18,536 bytes
6 Dir(s) 1,294,770,176 bytes free
F:\IBM BLUEMIX\DW001\testappl1>cf push BI-MyFirstDeploy-3 -c "node app.js" -m 128M --no-manifest
Updating app BI-MyFirstDeploy-3 in org pr00330912@techmahindra.com / space Test1 as pr00330912@techmahindra.com...
OK
Creating route bi-myfirstdeploy-3.mybluemix.net...
FAILED
Server error, status code: 400, error code: 210003, message: The host is taken: bi-myfirstdeploy-3
F:\IBM BLUEMIX\DW001\testappl1>cf push testappl -c "node app.js" -m 128M --no-manifest
Creating app testappl in org pr00330912@techmahindra.com / space Test1 as pr00330912@techmahindra.com...
OK
Creating route testappl.mybluemix.net...
FAILED
Server error, status code: 400, error code: 210003, message: The host is taken: testappl
F:\IBM BLUEMIX\DW001\testappl1>cf bs testappl testappl1-cloudantNoSQLDB
Binding service testappl1-cloudantNoSQLDB to app testappl in org pr00330912@techmahindra.com / space Test1 as pr00330912@techmahindra.com...
OK
TIP: Use 'cf restage testappl' to ensure your env variable changes take effect
F:\IBM BLUEMIX\DW001\testappl1>cf start testappl
Starting app testappl in org pr00330912@techmahindra.com / space Test1 as pr00330912@techmahindra.com...
FAILED
Server error, status code: 400, error code: 150001, message: The app package is invalid: bits have not been uploaded
F:\IBM BLUEMIX\DW001\testappl1>cf cs cloudantNoSQLDB Shared testservice1
Creating service instance testservice1 in org pr00330912@techmahindra.com / space Test1 as pr00330912@techmahindra.com...
OK
Attention: The plan `Shared` of service `cloudantNoSQLDB` is not free. The instance `testservice1` will incur a cost. Contact your administrator if
you think this is in error.
F:\IBM BLUEMIX\DW001\testappl1>cf bs testappl testservice1
FAILED
App testappl not found
F:\IBM BLUEMIX\DW001\testappl1>cf push testappl -c "node app.js" -m 128M --no-manifest
Creating app testappl in org pr00330912@techmahindra.com / space Test1 as pr00330912@techmahindra.com...
OK
Creating route testappl.mybluemix.net...
FAILED
Server error, status code: 400, error code: 210003, message: The host is taken: testappl
F:\IBM BLUEMIX\DW001\testappl1>cf bs testappl testservice1
Binding service testservice1 to app testappl in org pr00330912@techmahindra.com / space Test1 as pr00330912@techmahindra.com...
OK
TIP: Use 'cf restage testappl' to ensure your env variable changes take effect
F:\IBM BLUEMIX\DW001\testappl1>cf start testappl
Starting app testappl in org pr00330912@techmahindra.com / space Test1 as pr00330912@techmahindra.com...
FAILED
Server error, status code: 400, error code: 150001, message: The app package is invalid: bits have not been uploaded
Any help is appreciated and thanks in advance.

Server error, status code: 400, error code: 210003, message: The host is taken:
The host name (the application name by default) must be unique across the shared mybluemix.net domain, and both bi-myfirstdeploy-3 and testappl are already taken by another user or org. Because route creation fails, the push aborts before the application bits are uploaded, which is why the later cf start fails with "bits have not been uploaded". Pick a unique name (hyphens are allowed in host names, underscores are not). For example:
cf push testappl-GOWTHI -c "node app.js" -m 128M --no-manifest
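With the cf CLI v6 syntax used above you can also keep the app name and set only a different route host with -n, or let the CLI pick a free host with --random-route (testappl-gowthi is just a hypothetical unused host):
cf push testappl -c "node app.js" -m 128M --no-manifest -n testappl-gowthi
cf push testappl -c "node app.js" -m 128M --no-manifest --random-route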


Why does my postgresql+repmgr+timescaledb container stop right after start up?

I was trying to build a customized image of postgresql+repmgr+timescaledb on Docker.
Here is my dockerfile:
FROM bitnami/postgresql-repmgr:12.4.0-debian-10-r90
USER root
RUN apt-get update \
&& apt-get -y install \
gcc cmake git clang-format clang-tidy openssl libssl-dev \
&& git clone https://github.com/timescale/timescaledb.git
RUN cd timescaledb \
&& git checkout 2.8.1 \
&& ./bootstrap -DREGRESS_CHECKS=OFF -DCMAKE_EXPORT_COMPILE_COMMANDS=ON \
&& cd build \
&& make \
&& make install
RUN echo 'en_US.UTF-8 UTF-8' >> /etc/locale.gen && locale-gen
USER 1001
build command:
docker build -f dockerfile -t my/pg-repmgr-12-tsdb:12.4.0-debian-10-r90 .
When I tested it, it ran perfectly for the primary node, but when I tried to establish a standby node, the instance stopped almost immediately after starting up, leaving these logs:
postgresql-repmgr 18:51:11.00
postgresql-repmgr 18:51:11.00 Welcome to the Bitnami postgresql-repmgr container
postgresql-repmgr 18:51:11.00 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql-repmgr
postgresql-repmgr 18:51:11.00 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql-repmgr/issues
postgresql-repmgr 18:51:11.01
postgresql-repmgr 18:51:11.03 INFO ==> ** Starting PostgreSQL with Replication Manager setup **
postgresql-repmgr 18:51:11.05 INFO ==> Validating settings in REPMGR_* env vars...
postgresql-repmgr 18:51:11.06 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql-repmgr 18:51:11.06 INFO ==> Querying all partner nodes for common upstream node...
postgresql-repmgr 18:51:11.13 INFO ==> Auto-detected primary node: 'pg-0:5432'
postgresql-repmgr 18:51:11.14 INFO ==> Preparing PostgreSQL configuration...
postgresql-repmgr 18:51:11.14 INFO ==> postgresql.conf file not detected. Generating it...
postgresql-repmgr 18:51:11.26 INFO ==> Preparing repmgr configuration...
postgresql-repmgr 18:51:11.27 INFO ==> Initializing Repmgr...
postgresql-repmgr 18:51:11.28 INFO ==> Waiting for primary node...
postgresql-repmgr 18:51:11.30 INFO ==> Cloning data from primary node...
postgresql-repmgr 18:51:12.11 INFO ==> Initializing PostgreSQL database...
postgresql-repmgr 18:51:12.11 INFO ==> Cleaning stale /bitnami/postgresql/data/standby.signal file
postgresql-repmgr 18:51:12.12 INFO ==> Custom configuration /opt/bitnami/postgresql/conf/postgresql.conf detected
postgresql-repmgr 18:51:12.13 INFO ==> Custom configuration /opt/bitnami/postgresql/conf/pg_hba.conf detected
postgresql-repmgr 18:51:12.16 INFO ==> Deploying PostgreSQL with persisted data...
postgresql-repmgr 18:51:12.19 INFO ==> Configuring replication parameters
postgresql-repmgr 18:51:12.23 INFO ==> Configuring fsync
postgresql-repmgr 18:51:12.25 INFO ==> Setting up streaming replication slave...
postgresql-repmgr 18:51:12.28 INFO ==> Starting PostgreSQL in background...
postgresql-repmgr 18:51:12.52 INFO ==> Unregistering standby node...
postgresql-repmgr 18:51:12.59 INFO ==> Registering Standby node...
postgresql-repmgr 18:51:12.64 INFO ==> Running standby follow...
postgresql-repmgr 18:51:12.71 INFO ==> Stopping PostgreSQL...
waiting for server to shut down.... done
server stopped
and the normal-looking logs continued through several restarts. The logs were confusing because no error was thrown.
Thanks to the first comment, I found that the postgres logs (which I later accessed through a volume) said:
2022-10-17 12:37:51.070 GMT [171] LOG: pgaudit extension initialized
2022-10-17 12:37:51.070 GMT [171] LOG: starting PostgreSQL 12.4 on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2022-10-17 12:37:51.072 GMT [171] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-10-17 12:37:51.072 GMT [171] LOG: listening on IPv6 address "::", port 5432
2022-10-17 12:37:51.074 GMT [171] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2022-10-17 12:37:51.106 GMT [171] LOG: redirecting log output to logging collector process
2022-10-17 12:37:51.106 GMT [171] HINT: Future log output will appear in directory "/opt/bitnami/postgresql/logs".
2022-10-17 12:37:51.119 GMT [173] LOG: database system was interrupted; last known up at 2022-10-17 12:37:49 GMT
2022-10-17 12:37:51.242 GMT [173] LOG: entering standby mode
2022-10-17 12:37:51.252 GMT [173] LOG: redo starts at 0/E000028
2022-10-17 12:37:51.266 GMT [173] LOG: consistent recovery state reached at 0/E000100
2022-10-17 12:37:51.266 GMT [171] LOG: database system is ready to accept read only connections
2022-10-17 12:37:51.274 GMT [177] LOG: started streaming WAL from primary at 0/F000000 on timeline 1
2022-10-17 12:37:51.579 GMT [171] LOG: received fast shutdown request
2022-10-17 12:37:51.580 GMT [171] LOG: aborting any active transactions
2022-10-17 12:37:51.580 GMT [177] FATAL: terminating walreceiver process due to administrator command
2022-10-17 12:37:51.581 GMT [174] LOG: shutting down
2022-10-17 12:37:51.601 GMT [171] LOG: database system is shut down
Can someone please tell me where I went wrong? Much appreciated!
Additional information on reproducing:
the command used for the primary node instance:
docker run --detach --name pg-0 --network my-network --env REPMGR_PARTNER_NODES=pg-0,pg-1 --env REPMGR_NODE_NAME=pg-0 --env REPMGR_NODE_NETWORK_NAME=pg-0 --env REPMGR_PRIMARY_HOST=pg-0 --env REPMGR_PASSWORD=repmgrpass --env POSTGRESQL_POSTGRES_PASSWORD=adminpassword --env POSTGRESQL_USERNAME=customuser --env POSTGRESQL_PASSWORD=custompassword --env POSTGRESQL_DATABASE=customdatabase --env POSTGRESQL_SHARED_PRELOAD_LIBRARIES=repmgr,pgaudit,timescaledb -p 5420:5432 -v /etc/localtime:/etc/localtime:ro my/pg-repmgr-12-tsdb:12.4.0-debian-10-r90
the command used for the standby node instance:
docker run --name pg-1 --network my-network --env REPMGR_PARTNER_NODES=pg-0,pg-1 --env REPMGR_NODE_NAME=pg-1 --env REPMGR_NODE_NETWORK_NAME=pg-1 --env REPMGR_PRIMARY_HOST=pg-0 --env REPMGR_PASSWORD=repmgrpass --env POSTGRESQL_POSTGRES_PASSWORD=adminpassword --env POSTGRESQL_USERNAME=customuser --env POSTGRESQL_PASSWORD=custompassword --env POSTGRESQL_DATABASE=customdatabase --env POSTGRESQL_SHARED_PRELOAD_LIBRARIES=repmgr,pgaudit,timescaledb -v /etc/localtime:/etc/localtime:ro -p 5421:5432 my/pg-repmgr-12-tsdb:12.4.0-debian-10-r90
Sometimes the best way through is just to find another...
I changed the dockerfile to
FROM bitnami/postgresql-repmgr:13.6.0-debian-10-r90
USER root
RUN apt-get update \
&& apt-get -y install \
gcc cmake git clang-format clang-tidy openssl libssl-dev \
&& git clone https://github.com/timescale/timescaledb.git
RUN cd timescaledb \
&& git checkout 2.8.0 \
&& ./bootstrap -DREGRESS_CHECKS=OFF -DCMAKE_EXPORT_COMPILE_COMMANDS=ON \
&& cd build \
&& make \
&& make install
RUN echo 'en_US.UTF-8 UTF-8' >> /etc/locale.gen && locale-gen
USER 1001
and the problem was solved.
I honestly have no clue whether it was the version of the base image or the version of timescaledb that did the magic, but anyhow my problem is solved. I hope anyone who encounters the same issue later can benefit from my struggle. >3<
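For reference, comparing the two dockerfiles in this post shows that only two lines differ, so the fix has to be one of these (or their combination):
FROM bitnami/postgresql-repmgr:13.6.0-debian-10-r90 (was 12.4.0-debian-10-r90)
git checkout 2.8.0 (was 2.8.1)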

Mosquitto 2.0 config still not working on Raspberry Pi

I'm running an MQTT server, mosquitto version 2.0.11, on a Raspberry Pi 3 A+ (Bullseye) that acts as both broker and client. I had code working, but I understand that with version 2.0 one needs to modify a .conf file to get things working. I must still not be understanding something, because here's my file:
# I had pid_file /run/mosquitto/mosquitto.pid below, but changed this when docs suggested below should be included if running automatically when device boots, which it will be.
pid_file /var/run/mosquitto/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
listener 1883
allow_anonymous true
Now when I try to run mosquitto like this:
mosquitto -c /etc/mosquitto/conf.d/mosquitto.conf
I get this error:
1637370455: Loading config file /etc/mosquitto/conf.d/mosquitto.conf
1637370455: Error: Duplicate pid_file value in configuration.
1637370455: Error found at /etc/mosquitto/conf.d/mosquitto.conf:7.
1637370455: Error found at /etc/mosquitto/conf.d/mosquitto.conf:14.
Line 7 is the pid_file /var/run/mosquitto/mosquitto.pid
Line 14 is the include_dir /etc/mosquitto/conf.d
I can make basic pub and sub tests with localhost, but still no luck with the hostname. Yes, I know you should use security, but I have an app that controls a robot over local WiFi and want to preserve app usage without changing that component too.
Any help on getting me back on track to getting the Mosquitto broker & client working on the same Pi, allowing anonymous access, is much appreciated. I have gone through the docs, the example file, and other tutorials like Steve's, but the proper configuration is still unclear. Thx!
Firstly, the errors about not being able to open the pid or log files are because you are running mosquitto as a normal user (probably pi). This user does not have permission to read/write files in /var/run or /var/log, hence the failure when you try to run it "manually". The Duplicate pid_file error itself is almost certainly because the file you pass with -c lives inside /etc/mosquitto/conf.d and also contains include_dir /etc/mosquitto/conf.d, so mosquitto ends up reading the same file twice.
You've not said how you installed 2.0.11, as the default version bundled with Bullseye is still a 1.5.x build. Assuming you used the mosquitto.org repository, the mosquitto service will have been installed and configured. It will automatically pick up the default config file at /etc/mosquitto/mosquitto.conf, as should be displayed by:
$ sudo service mosquitto status
● mosquitto.service - Mosquitto MQTT Broker
Loaded: loaded (/lib/systemd/system/mosquitto.service; enabled; vendor preset
Active: active (running) since Sun 2021-10-31 17:28:52 GMT; 2 weeks 5 days ag
Docs: man:mosquitto.conf(5)
man:mosquitto(8)
Process: 499 ExecStartPre=/bin/mkdir -m 740 -p /var/log/mosquitto (code=exited
Process: 505 ExecStartPre=/bin/chown mosquitto /var/log/mosquitto (code=exited
Process: 507 ExecStartPre=/bin/mkdir -m 740 -p /run/mosquitto (code=exited, st
Process: 510 ExecStartPre=/bin/chown mosquitto /run/mosquitto (code=exited, st
Process: 25679 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCE
Main PID: 511 (mosquitto)
Tasks: 1 (limit: 2181)
CGroup: /system.slice/mosquitto.service
└─511 /usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
Nov 19 00:00:10 www systemd[1]: Reloading Mosquitto MQTT Broker.
Nov 19 00:00:10 www systemd[1]: Reloaded Mosquitto MQTT Broker.
Warning: Journal has been rotated since unit was started. Log output is incomple
The simplest way to enable access from other machines is to do the following:
Reset the default config file /etc/mosquitto/mosquitto.conf to its state as installed:
# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example
pid_file /var/run/mosquitto/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
port 1883
include_dir /etc/mosquitto/conf.d
Create a new file in /etc/mosquitto/conf.d, e.g. called connect.conf, containing:
listener 1883
allow_anonymous true
Restart the service with sudo service mosquitto restart.
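To verify anonymous access from another machine on the LAN, the stock mosquitto clients can be used (raspberrypi.local and test/topic are placeholders for your Pi's hostname or IP and an arbitrary topic):
mosquitto_sub -h raspberrypi.local -t test/topic -v
mosquitto_pub -h raspberrypi.local -t test/topic -m "hello"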

Kubelet service not starting up - KUBELET_EXTRA_ARGS (code=exited, status=255)

I am trying to start up the kubelet service on a worker node (the 3rd worker node). At the moment, I can't quite tell what the error is, but I do see F0716 16:42:20.047413 556 server.go:155] unknown command: $KUBELET_EXTRA_ARGS in the output given by sudo systemctl status kubelet -l:
[svc.jenkins@node6 ~]$ sudo systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Mon 2018-07-16 16:42:20 CDT; 4s ago
Docs: http://kubernetes.io/docs/
Process: 556 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 556 (code=exited, status=255)
Jul 16 16:42:20 node6 kubelet[556]: --tls-cert-file string File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 16 16:42:20 node6 kubelet[556]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 16 16:42:20 node6 kubelet[556]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 16 16:42:20 node6 kubelet[556]: --tls-private-key-file string File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 16 16:42:20 node6 kubelet[556]: -v, --v Level log level for V logs
Jul 16 16:42:20 node6 kubelet[556]: --version version[=true] Print version information and quit
Jul 16 16:42:20 node6 kubelet[556]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Jul 16 16:42:20 node6 kubelet[556]: --volume-plugin-dir string The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/")
Jul 16 16:42:20 node6 kubelet[556]: --volume-stats-agg-period duration Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to 0. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 16 16:42:20 node6 kubelet[556]: F0716 16:42:20.047413 556 server.go:155] unknown command: $KUBELET_EXTRA_ARGS
Here is the configuration for my drop-in, located at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (it is the same on the other nodes, which are in a working state):
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/data01/kubelet/pki"
Environment="KUBELET_EXTRA_ARGS=$KUBELET_EXTRA_ARGS --root-dir=/data01/kubelet"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
Just need help diagnosing the issue that is preventing it from starting so that it can be resolved. Thanks in advance :)
EDIT:
[svc.jenkins@node6 ~]$ kubelet --version
Kubernetes v1.10.4
Currently, a slightly different approach is used with systemd: all options are put in a separate file, and the systemd unit refers to that file. Note that systemd does not perform shell-style expansion inside Environment= directives, so the self-reference in your KUBELET_EXTRA_ARGS line passes the literal string $KUBELET_EXTRA_ARGS on to kubelet, which rejects it as an unknown command.
In your case, it would be something like this:
/etc/sysconfig/kubelet
----------------------
KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf
KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin
KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local
KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt
KUBELET_CADVISOR_ARGS=--cadvisor-port=0
KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs
KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/data01/kubelet/pki
KUBELET_EXTRA_ARGS=--root-dir=/data01/kubelet
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-----------------------------------------------------
...
[Service]
EnvironmentFile=/etc/sysconfig/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
The variables in the systemd config file can be written as ${VARIABLE} or $VARIABLE; both forms should work fine.
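After moving the options into the environment file, reload the systemd unit definitions and restart the service; watching the journal confirms whether kubelet stays up:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo journalctl -u kubelet -f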

PostgreSQL - failed to start delegate IP address on pgpool 2

I am trying to set up a pgpool server on Ubuntu Server, following this link: pgpool-II Tutorial [ Watchdog ].
But when I start the pgpool service, the delegate IP doesn't come up.
I looked in the syslog file and found errors like this:
Oct 25 08:46:25 pgpool-1 pgpool[1647]: [8-2] 2017-10-25 08:46:25: pid 1647: DETAIL: Host:"172.16.0.42" WD Port:9000 pgpool-II port:5432
Oct 25 08:46:25 pgpool-1 pgpool: SIOCSIFADDR: Operation not permitted
Oct 25 08:46:25 pgpool-1 pgpool: SIOCSIFFLAGS: Operation not permitted
Oct 25 08:46:25 pgpool-1 pgpool: SIOCSIFNETMASK: Operation not permitted
Oct 25 08:46:25 pgpool-1 pgpool[1648]: [18-1] 2017-10-25 08:46:25: pid 1648: LOG: failed to acquire the delegate IP address
Oct 25 08:46:25 pgpool-1 pgpool[1648]: [18-2] 2017-10-25 08:46:25: pid 1648: DETAIL: 'if_up_cmd' failed
Oct 25 08:46:25 pgpool-1 pgpool[1648]: [19-1] 2017-10-25 08:46:25: pid 1648: WARNING: watchdog escalation failed to acquire delegate IP
I use Ubuntu 14.04 with pgpool2 version 3.6.6-1 and watchdog version 5.31-1.
I have configured the virtual IP settings in pgpool.conf like this:
# - Virtual IP control Setting -
delegate_IP = '172.16.0.201'
if_cmd_path = '/sbin'
if_up_cmd = 'ifconfig eth0:0 inet $_IP_$ netmask 255.255.0.0'
if_down_cmd = 'ifconfig eth0:0 down'
arping_path = '/usr/sbin'
arping_cmd = 'arping -U $_IP_$ -w 1'
Any suggestion for this? Thank you for any help.
Looks like the user that runs it doesn't have permission to use ifconfig.
Did you follow these steps from the tutorial?
setuid configuration
In the watchdog process, root privilege is required to control the virtual IP.
You could start pgpool-II as the root user. However, in this tutorial,
Apache needs to start pgpool as the apache user and control the virtual IP
because we are using pgpoolAdmin. For this purpose, we setuid
ifconfig and arping. Also, we don't want any user other than apache
to access the commands, for security reasons. Execute the following
commands on each of osspc19 and osspc20 (this requires root privilege).
At first, make a directory to contain the ifconfig and arping binaries that
will be setuid. The path is specified by if_cmd_path and arping_path; in
this tutorial, this is /home/apache/sbin. Then give execute privilege
to only the apache user.
$ su -
# mkdir -p /home/apache/sbin
# chown apache:apache /home/apache/sbin
# chmod 700 /home/apache/sbin
Next, copy the original ifconfig and arping to the directory and then
set the setuid bit on them.
# cp /sbin/ifconfig /home/apache/sbin
# cp /usr/sbin/arping /home/apache/sbin
# chmod 4755 /home/apache/sbin/ifconfig
# chmod 4755 /home/apache/sbin/arping
Note that the approach explained above should be used for tutorial purposes only. In
the real world you'd better create setuid wrapper programs to execute
ifconfig and arping. This is left as an exercise for you.
(Note: this answer may help in case you run Pgpool-II servers with Watchdog in Docker containers.)
I tried to set up Pgpool-II servers with Watchdog in Docker containers today, and I got almost the same error (even though I did set the SUID bit and even tried running Pgpool-II as the root user):
SIOCSIFADDR: Operation not permitted
SIOCSIFFLAGS: Operation not permitted
SIOCSIFNETMASK: Operation not permitted
pid 88: LOG: failed to acquire the delegate IP address
pid 88: DETAIL: 'if_up_cmd' failed
pid 88: WARNING: watchdog escalation failed to acquire delegate IP
Later I found that it was because the container did not have the privilege to change its network configuration, which is the default, by design.
I then ran my Pgpool-II Docker containers in privileged mode as shown below:
pgpool1:
privileged: true
image: postdock/pgpool:latest-pgpool36
...
The error was gone and the virtual IP was set up correctly.
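Privileged mode grants far more than the watchdog needs; since attaching the delegate IP only requires the CAP_NET_ADMIN capability, granting just that capability may be enough (a narrower alternative I have not tested):
pgpool1:
  cap_add:
    - NET_ADMIN
  image: postdock/pgpool:latest-pgpool36
  ...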
My problem was solved by the following method.
Make a directory to contain the ip and arping commands, then give execute privileges on it to only the non-root user:
$mkdir /var/lib/pgsql/sbin
$chown postgres:postgres /var/lib/pgsql/sbin
$cp /sbin/ip /var/lib/pgsql/sbin
$cp /sbin/arping /var/lib/pgsql/sbin
Run visudo, which safely edits the sudoers file:
$visudo
Then add an entry like this in sudoers file:
postgres ALL = NOPASSWD: /var/lib/pgsql/sbin/ip *, /var/lib/pgsql/sbin/arping *
Next, create bash files (ipadd.sh, ipdel.sh, arping.sh) to run the ip and arping commands with sudo.
$cat /var/lib/pgsql/sbin/ipadd.sh
#!/bin/bash
sudo /var/lib/pgsql/sbin/ip addr add $1/24 dev eth1 label eth1:0
$cat /var/lib/pgsql/sbin/ipdel.sh
#!/bin/bash
sudo /var/lib/pgsql/sbin/ip addr del $1/24 dev eth1
$cat /var/lib/pgsql/sbin/arping.sh
#!/bin/bash
sudo /var/lib/pgsql/sbin/arping -U $1 -w 1 -I eth1
$chmod 755 /var/lib/pgsql/sbin/*
$chown postgres:postgres /var/lib/pgsql/sbin/*
Add an entry like this in pgpool.conf:
delegate_IP = '10.10.10.62'
if_up_cmd = 'ipadd.sh $_IP_$'
if_down_cmd = 'ipdel.sh $_IP_$'
arping_cmd = 'arping.sh $_IP_$'
if_cmd_path = '/var/lib/pgsql/sbin'
arping_path = '/var/lib/pgsql/sbin'
Then restart the pgpool service. Ignore the warnings you will see, shown below:
WARNING: checking setuid bit of if_up_cmd
DETAIL: ifup[/var/lib/pgsql/sbin/ipadd.sh] doesn't have setuid bit
WARNING: checking setuid bit of if_down_cmd
DETAIL: ifdown[/var/lib/pgsql/sbin/ipdel.sh] doesn't have setuid bit
WARNING: checking setuid bit of arping command
DETAIL: arping[/var/lib/pgsql/sbin/arping.sh] doesn't have setuid bit
Finally, stop one of your two pgpool services and check that the delegate IP fails over to the other node.
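To see which node currently holds the delegate IP, list the addresses on the interface the scripts above manage (eth1 in this setup); on the active node the output should include the delegate IP (10.10.10.62 here) with the label eth1:0:
$ip addr show eth1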

FAILED TO WRITE PID installing Zookeeper

I am new to Zookeeper and it has been a real issue to install and run it. I am not sure what is wrong here, but I will explain what I've been doing to make it clearer:
1.- I've followed the installation guide provided by Apache. This means downloading the Zookeeper distribution (stable release), extracting the file, and moving it into the home directory.
2.- As I am using Ubuntu 12.04, I've modified the .bashrc file to include this:
export ZOOKEEPER_INSTALL=/home/myusername/zookeeper-3.4.5
export PATH=$PATH:$ZOOKEEPER_INSTALL/bin
3.- Created a config file at conf/zoo.cfg:
tickTime=2000
dataDir=/var/zookeeper
clientPort=2181
and also tried with:
dataDir=/var/log/zookeeper
and
dataDir=/var/bin/zookeeper
4.- When running the start command (zkServer.sh start or bin/zkServer.sh start), nothing happens and it always returns this:
JMX enabled by default
Using config: /home/sasuke/zookeeper-3.4.5/bin/../conf/zoo.cfg
mkdir: cannot create directory `/var/zookeeper': Permission denied
Starting zookeeper ... /home/sasuke/zookeeper-3.4.5/bin/zkServer.sh: line 113: /var/zookeeper/zookeeper_server.pid: No such file or directory
FAILED TO WRITE PID
I have Java installed, and inside the zookeeper directory there is a zookeeper.jar file that I think is not running.
Checking here on Stack Overflow, there was a guy who said he could run zookeeper after typing
ssh localhost
But when I try to do that I get this error:
ssh: connect to host localhost port 22: Connection refused
Please help. I've been trying to solve this for too long.
Getting started guide for Zookeeper:
http://zookeeper.apache.org/doc/r3.1.2/zookeeperStarted.html
A previous case was solved with ssh localhost:
Zookeeper: FAILED TO WRITE PID
UPDATE:
The permissions for log are:
drwxr-xr-x 19 root root 4096 Oct 10 07:52 log
and for zookeeper:
drwxr-xr-x 2 zookeeper zookeeper 4096 Mar 23 2012 zookeeper
Should I change any of these?
I have had the same problem. In my case it helped to start Zookeeper and directly specify the configuration file:
bin/zkServer.sh start conf/zoo.cfg
It seems you do not have the required permissions. The /var/log owner is going to be root. Zookeeper stores the process id and a snapshot of the data in that directory. The process id of the spawned zookeeper server is stored in the file zookeeper_server.pid (as of 3.3.6).
If you have root privileges, you could start zookeeper with sudo (root) privileges; it should work, but it is definitely not recommended. Make sure you start zookeeper with the same (or higher) permissions as the owner of the directory.
Create a new directory in your home folder, like /home/username/zookeeper-data.
Let dataDir point to that directory and it should work; a sketch follows.
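A minimal sketch of that (username is a placeholder for your own user):
mkdir -p /home/username/zookeeper-data
Then in conf/zoo.cfg:
dataDir=/home/username/zookeeper-data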
The default zookeeper installation (tar extract) comes with the conf file named conf/zoo_sample.cfg, while the same extract's bin/zkServer.sh expects the conf file to be called zoo.cfg, thereby resulting in a "No such file or dir" and the "FAILED TO WRITE PID" error. So before running zkServer.sh to start or stop a zookeeper instance, either:
rename the zoo_sample.cfg in the conf dir to zoo.cfg (see the command below), or
give the name (and path) of the conf file (as suggested by Ilya Lapitan), or, of course
edit zkServer.sh ;-)
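For the first option, copying instead of renaming keeps the sample around for reference:
cp conf/zoo_sample.cfg conf/zoo.cfg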
When you create the directory for dataDir, make sure to use the -p option. This allows intermediate directories to be created as required by the application placing files.
mkdir -p /var/log/zookeeperData
Then set:
dataDir=/var/log/zookeeperData
It seems there are all kinds of reasons this can happen. So many helpful answers here!
For me, I had improper line endings in my zoo.cfg file, and possibly invisible characters, so zookeeper was trying to create directories like /var/zookeeper? and /var/zookeeper\r. Reworking my zoo.cfg a bit fixed it for me, along with deleting zoo_sample.cfg.
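If you suspect the same problem, the stray carriage returns are easy to spot and strip (dos2unix may need to be installed first):
cat -A conf/zoo.cfg
dos2unix conf/zoo.cfg
cat -A shows DOS line endings as ^M at the end of each line.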
This happened to me due to low disk space, because zookeeper can't create the pid file inside the zookeeper data folder.
I have faced the same issue while starting zookeeper with this command:
hadoop@ubuntu:~/hadoop/zookeeper/zookeeper-3.4.8$ bin/zkServer.sh start
ERROR [main] client.ConnectionManager$HConnectionImplementation: The node /hbase is not in ZooKeeper.
It should have been written by the master. Check the value configured in zookeeper.znode.parent. There could be a mismatch with the one configured in the master.
But running the script with sudo rectified the issue:
hadoop@ubuntu:~/hadoop/zookeeper/zookeeper-3.4.8$ sudo bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/hadoop/zookeeper/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Go to /usr/local/etc/; you will find the zookeeper directory there. Delete the directory and restart the server with zkServer start.
Change the path to dataDir=/tmp/zookeeper. If it works, then it's clearly an access issue.
But it's generally not advisable to use the tmp directory.
This seems to be an ownership issue; running the following solved this for me.
$ sudo chown -R $USER /var/lib/zookeeper
N.B.
I've outlined my steps below, which show the error I was getting (the same as the error in this SO question) and my attempt at the solution proposed by a user above, which advised providing zoo.cfg as an argument.
13:01:29 ✔ ~ :: $ZK/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/Cellar/zookeeper/3.4.14/libexec/bin/../conf/zoo.cfg
Starting zookeeper ... /usr/local/Cellar/zookeeper/3.4.14/libexec/bin/zkServer.sh: line 149: /var/lib/zookeeper/zookeeper_server.pid: Permission denied
FAILED TO WRITE PID
13:01:32 ✘ ~ :: $ZK/bin/zkServer.sh start $ZK/conf/zoo.cfg
ZooKeeper JMX enabled by default
Using config: /usr/local/Cellar/zookeeper/3.4.14/libexec/conf/zoo.cfg
Starting zookeeper ... /usr/local/Cellar/zookeeper/3.4.14/libexec/bin/zkServer.sh: line 149: /var/lib/zookeeper/zookeeper_server.pid: Permission denied
FAILED TO WRITE PID
13:04:45 ✔ /var/lib :: ls -la
total 0
drwxr-xr-x 4 root wheel 128 Apr 19 18:55 .
drwxr-xr-x 27 root wheel 864 Apr 19 18:55 ..
drwxr--r-- 3 root wheel 96 Mar 24 15:07 zookeeper
13:04:48 ✔ /var/lib :: echo $USER
tallamjr
13:06:03 ✔ /var/lib :: sudo chown -R $USER zookeeper
Password:
13:06:44 ✔ /var/lib :: ls -la
total 0
drwxr-xr-x 4 root wheel 128 Apr 19 18:55 .
drwxr-xr-x 27 root wheel 864 Apr 19 18:55 ..
drwxr--r-- 3 tallamjr wheel 96 Mar 24 15:07 zookeeper
13:06:48 ✔ ~ :: $ZK/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/Cellar/zookeeper/3.4.14/libexec/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
REF:
- https://askubuntu.com/questions/6723/change-folder-permissions-and-ownership
For me this solution worked:
I granted read, write and execute permissions to everyone, using the command sudo chmod 777 zookeeper from inside the directory /var (i.e. for /var/zookeeper).
After executing this command, try running zookeeper. It ran in my case.
Try using sudo -E bin/zkServer.sh start (the -E flag preserves your environment, including your PATH and ZOOKEEPER_INSTALL variables, while running as root).