minikube --kubernetes-version URI fails. Can I use a customized localkube binary? - kubernetes

Can I use a custom Kubernetes version in which I have made some code modifications? I wanted to use the --kubernetes-version string flag to point at a customized localkube binary. Is this possible?
Minikube documentation says:
--kubernetes-version string The kubernetes version that the minikube VM will use (ex: v1.2.3)
OR a URI which contains a localkube binary (ex: https://storage.googleapis.com/minikube/k8sReleases/v1.3.0/localkube-linux-amd64) (default "v1.7.5")
But even when I try that flag with official localkube binaries, it fails:
minikube start --kubernetes-version https://storage.googleapis.com/minikube/k8sReleases/v1.7.0/localkube-linux-amd64 --v 5
Invalid Kubernetes version.
The following Kubernetes versions are available:
- v1.7.5
- v1.7.4
- v1.7.3
- v1.7.2
- v1.7.0
- v1.7.0-rc.1
- v1.7.0-alpha.2
- v1.6.4
- v1.6.3
- v1.6.0
- v1.6.0-rc.1
- v1.6.0-beta.4
- v1.6.0-beta.3
- v1.6.0-beta.2
- v1.6.0-alpha.1
- v1.6.0-alpha.0
- v1.5.3
- v1.5.2
- v1.5.1
- v1.4.5
- v1.4.3
- v1.4.2
- v1.4.1
- v1.4.0
- v1.3.7
- v1.3.6
- v1.3.5
- v1.3.4
- v1.3.3
- v1.3.0
Many thanks!

Two options come to mind:
You can launch minikube with --vm-driver=none, so the binaries are installed on your local filesystem. Replacing the binaries should then not be difficult.
You can create your own minikube ISO and then use the --iso-url flag. To build the ISO, you can follow this guide: https://github.com/kubernetes/minikube/blob/master/docs/contributors/minikube_iso.md
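A minimal sketch of what either option could look like on the command line (the ISO path and file:// URL are assumptions; build the image first following the guide above):

# option 1: run the cluster directly on the host, then swap in your own localkube build
sudo minikube start --vm-driver=none

# option 2: boot minikube from an ISO you built yourself
minikube start --iso-url=file://$PWD/out/minikube.iso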

Related

Helmfile + Kustomize - cannot unmarshal !!seq into state.HelmState

I am trying to use Kustomize with Helmfile by following the instructions given in the README, but I get the error below when I run the sync command.
helmfile --environment dev --file=helmfile.yaml sync
in ./helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: unmarshal errors:
line 4: cannot unmarshal !!seq into state.HelmState
helmfile.yaml
- name: common-virtual-services
  chart: ./common-virtual-services
  hooks:
    - events: ["prepare", "cleanup"]
      command: "./helmify"
      args: ["{{`{{if eq .Event.Name \"prepare\"}}build{{else}}clean{{end}}`}}", "{{`{{.Release.Chart}}`}}", "{{`{{.Environment.Name}}`}}"]
Environment:
helmfile version v0.119.0
kustomize - Version:3.6.1
OS - Darwin DEM-C02X5AKLJG5J 18.7.0 Darwin Kernel Version 18.7.0
Please let me know if you need more details.
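For context, helmfile parses the top level of each YAML document into state.HelmState, which is a map, so a document that starts directly with a sequence produces exactly this unmarshal error. A minimal sketch of the expected shape, assuming the snippet above is meant to be the release list:

releases:
  - name: common-virtual-services
    chart: ./common-virtual-services
    hooks:
      - events: ["prepare", "cleanup"]
        command: "./helmify"
        args: ["{{`{{if eq .Event.Name \"prepare\"}}build{{else}}clean{{end}}`}}", "{{`{{.Release.Chart}}`}}", "{{`{{.Environment.Name}}`}}"]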

Fresh macOS install - kubectl outputs error message

On a MacBook Pro, I tried installing kubectl from the binary with curl and then with brew.
Both installs produce an error at the end of the output:
~ via 🐘 v7.1.23
➜ kubectl version --output=yaml
clientVersion:
buildDate: "2019-04-19T22:12:47Z"
compiler: gc
gitCommit: b7394102d6ef778017f2ca4046abbaa23b88c290
gitTreeState: clean
gitVersion: v1.14.1
goVersion: go1.12.4
major: "1"
minor: "14"
platform: darwin/amd64
error: unable to parse the server version: invalid character '<' looking for beginning of value
Is there a way to fix this?
I think there is another application listening on port 8080. By default, kubectl tries to connect to localhost:8080 if no server is passed.
If you have deployed the Kubernetes apiserver on some other machine or port, pass --server=IP:PORT to kubectl.
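A quick way to check what is actually answering on that port, and to point kubectl at the right place (the server address below is just a placeholder):

# see which process is listening on 8080 (macOS)
lsof -iTCP:8080 -sTCP:LISTEN

# talk to the real API server instead of the localhost:8080 default
kubectl version --output=yaml --server=https://<apiserver-host>:<port>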

Setting up an HLF v1.4 network with TLS enabled and Kafka-based ordering

I am creating an HLF v1.4 network with TLS enabled and Kafka-based ordering, but when I try to create a channel it throws an error saying
and when I look at the orderer logs, they show
Configs for TLS in network
Peer Configs
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
Orderer Configs
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer/tls/ca.crt, /etc/hyperledger/crypto/peer/tls/ca.crt]
Cli Configs
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/peer/peers/peer0.org1/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/peer/peers/peer0.org1/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/peer/peers/peer0.org1/tls/ca.crt
Can anyone help me in this regard?
As the error says, there was a bad certificate while creating the channel: the orderer certificate is not found, which is why you get the bad certificate error.
In the compose .yaml file, set the environment variable
FABRIC_LOGGING_SPEC=DEBUG to see exactly what the error is.
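For reference, a minimal sketch of where that variable could go in the orderer's service definition (the service name is just an example):

orderer.example.com:
  environment:
    - FABRIC_LOGGING_SPEC=DEBUG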

How to set up node-exporter for Prometheus

How do I set up the Prometheus node-exporter to collect host metrics in Docker Swarm?
version: '3.3'
services:
  node-exporter:
    image: prom/node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - --collector.filesystem.ignored-mount-points
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
      - '--collector.textfile.directory=/etc/node-exporter/'
      - '--collector.enabled="conntrack,diskstats,entropy,filefd,filesystem,loadavg,mdadm,meminfo,netdev,netstat,stat,textfile,time,vmstat,ipvs"'
    ports:
      - 9100:9100
I am getting this error: node_exporter: error: unknown long flag '--collector.enabled', try --help
What is wrong with the last line under the command section in this docker-compose file, and if it is set or passed incorrectly, how do I pass it correctly?
Try using --collector.[collector_name] (e.g. --collector.diskstats) flags instead of --collector.enabled, as the latter no longer works since version 0.15.
For multiple collectors on version 0.15 or later, pass them individually:
--collector.processes --collector.ntp ... and so on
In versions older than 0.15, specific collectors were enabled like this:
--collectors.enabled meminfo,loadavg,filesystem
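So, as a minimal sketch, the command section could look like this on 0.15+ (the two per-collector flags at the end are just examples of collectors that are disabled by default):

command:
  - '--path.procfs=/host/proc'
  - '--path.sysfs=/host/sys'
  - '--collector.textfile.directory=/etc/node-exporter/'
  - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc|rootfs/var/lib/docker)($$|/)'
  # per-collector flags instead of --collector.enabled
  - '--collector.processes'
  - '--collector.ntp'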

etcdctl gets a lot of garbage info after upgrading the apiserver to use the etcd v3 API

I'm upgrading etcd to v3.0.17 so that k8s can use the etcd v3 API.
After etcd was upgraded and the apiserver was started without 'storage-backend=etcd2' and 'storage-media-type=application/json' defined, I noticed that etcdctl get can't display some info properly, like this:
/_egi_+_y/c-+fig+a-_/++be-_y_+e+/fi+ebea+-_-a_+-c-+f
{"+i+d":"C-+figMa-","a-iVe__i-+":"+1","+e+ada+a":{"+a+e":"fi+ebea+-_-a_+- c-+f","+a+e_-ace":"++be-_y_+e+","+id":"9a09cef0-348c-11e7-ba37- 1418775d636e","c_ea+i-+Ti+e_+a+-":"2017-05-09T07:53:34Z"},"da+a":{"fi+ebea+.y++":"fi+ebea+:\+ -_-_-ec+-__:\+ -\+ i+-++_+y-e: +-g\+ d-c++e++_+y-e: _-a_+\+ fie+d_:\+ +a+e_-ace: ++be-_y_+e+\+ fie+d__++de___--+: +_+e\+ -a+h_:\+ - /+-g/*/*/_+de__\+ -\+ i+-++_+y-e: +-g\+ d-c++e++_+y-e: _-a_+\+ fie+d_:\+ +a+e_-ace: ++be-_y_+e+\+ fie+d__++de___--+: +_+e\+ -a+h_:\+ - /+-g/*/*/_+d-++\+\+-++-++:\+ +-g_+a_h:\+ h-_+_: [\"192.168.197.200:5044\",\"192.168.197.199:5044\"]\+ +-_+e_: 4\+ c-+-_e__i-+_+e+e+: 3\+ +-adba+a+ce: +_+e\+ i+de|: ++be-_y_+e+\+"}}
Also, I created a new pod, and it was stored in etcd like this:
# ETCDCTL_API=3 /opt/bin/etcdctl --endpoints http://10.3.7.27:2379 get /registry/pods/default/busybox-116p6
/registry/pods/default/busybox-116p6
k8s
v1Pod¯
ã
busybox-116pbusybox-default"-/api/v1/namespaces/default/pods/busybox-116p6*$1cc8a15c-5996-11e8-ac41-1418775d2f8528B
ý¯ô׋ĉîZ
a--b+_yb-|Z&
c-++_-++e_-_e+i_i-+-ha_h
1615036418Z
--d-+e+-+a+e-ge+e_a+i-+1bû
++be_+e+e_.i-/c_ea+ed-byÞ{"+i+d":"Se_ia+izedRefe_e+ce","a-iVe__i-+":"+1","_efe_e+ce":{"+i+d":"Dae+-+Se+","+a+e_-ace":"defa+++","+a+e":"b+_yb-|","+id":"fa90b377-44a0-11e8-83ca-1418775d636e","a-iVe__i-+":"e|+e+_i-+_","_e_-+_ceVe__i-+":"100605842"}}
b.
*_ched++e_.a+-ha.++be_+e+e_.i-/c_i+ica+---d+R
Dae+-+Se+b+_yb-|"$fa90b377-44a0-11e8-83ca-1418775d636e*e|+e+_i-+_/+1be+a108zý
1
defa+++-+-+e+-+8g9+2
defa+++-+-+e+-+8g9+¤è
b+_yb-|-_egi_+_y.-_-d.+ca_i+c.c-+/+--+_/b+_yb-|:g+ibc/bi+/_h-c+hi+e +_+e; d- _+ee- 3600; d-+e*BJH
defa+++-+-+e+-+8g9+-/+a_/_++/_ec_e+_/++be_+e+e_.i-/_e_+iceacc-+++"+/de+/+e_+i+a+i-+-+-g_
IfN-+P_e_e++€ˆ¢Fi+eA++ay_ 2
C++_+e_Fi__+Bdefa+++Jdefa+++Raz06.++_.+ab.+ca_i+c.c-+X`h_‚Ššdefa+++-_ched++e_²8
!+-de.a+-ha.++be_+e+e_.i-/+-+ReadyE|i_+_" N-E|ec++e²;
$+-de.a+-ha.++be_+e+e_.i-/++_eachab+eE|i_+_" N-E|ec++eÆ
R+++i+g#
I+i+ia+izedT_+ý¯ô×*2
ReadyT_+þ¯ô×*2$
P-dSched++edT_+þ¯ô×*2"* 10.3.7.342172.29.224.22ý¯ô×B›
b+_yb-|
þ¯ô× (2-_egi_+_y.-_-d.+ca_i+c.c-+/+--+_/b+_yb-|:g+ibc:d-c+e_--+++ab+e://_egi_+_y.-_-d.+ca_i+c.c-+/+--+_/b+_yb-|#_ha256:9f5597958a437eacae2634ff71b9f28f94720a3bf43378c1db282693c4fed9e5BId-c+e_://4c54e2b9224f0b2545311f272b08af110a557941efa7d77f9207f8ec617be37dJ
Be_+Eff-_+"
What's wrong here? Is the data written through the v3 API by the apiserver not readable?
My environment: CoreOS 1298.5.0, Kubernetes v1.7.10, etcd v3.0.17
It should be readable with the storage-media-type=application/json flag.
The output of your command looks like JSON, but with what seems like broken encoding.
Try dumping the output to a file instead of the terminal, and read it with some other tool or on another server.
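As a rough sketch of both suggestions (endpoint and key taken from the question; the apiserver flag assumes you want human-readable JSON storage rather than the default protobuf format):

# dump the value to a file and inspect it there instead of the terminal
ETCDCTL_API=3 /opt/bin/etcdctl --endpoints http://10.3.7.27:2379 \
  get /registry/pods/default/busybox-116p6 > busybox.dump
xxd busybox.dump | head

# or keep the apiserver writing JSON to etcd:
kube-apiserver ... --storage-media-type=application/json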