I've just migrated to an M1 MacBook and tried to deploy Couchbase using the Couchbase Helm Chart on Kubernetes: https://docs.couchbase.com/operator/current/helm-setup-guide.html
However, the Couchbase Server pod fails with the message below:
Readiness probe failed: dial tcp 172.17.0.7:8091: connect: connection refused
The pod uses the image couchbase/server:7.0.2.
Error from the log file:
Starting Couchbase Server -- Web UI available at http://<ip>:8091
and logs available in /opt/couchbase/var/lib/couchbase/logs
runtime: failed to create new OS thread (have 2 already; errno=22)
fatal error: newosproc
runtime stack:
runtime.throw(0x4d8d66, 0x9)
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/panic.go:596 +0x95
runtime.newosproc(0xc420028000, 0xc420038000)
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/os_linux.go:163 +0x18c
runtime.newm(0x4df870, 0x0)
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:1628 +0x137
runtime.main.func1()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:126 +0x36
runtime.systemstack(0x552700)
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/asm_amd64.s:327 +0x79
runtime.mstart()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:1132
goroutine 1 [running]:
runtime.systemstack_switch()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/asm_amd64.s:281 fp=0xc420024788 sp=0xc420024780
runtime.main()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:127 +0x6c fp=0xc4200247e0 sp=0xc420024788
runtime.goexit()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc4200247e8 sp=0xc4200247e0
{"init terminating in do_boot",{{badmatch,{error,{{shutdown,{failed_to_start_child,encryption_service,{port_terminated,normal}}},{ns_babysitter,start,[normal,[]]}}}},[{ns_babysitter_bootstrap,start,0,[{file,"src/ns_babysitter_bootstrap.erl"},{line,23}]},{init,start_em,1,[]},{init,do_boot,3,[]}]}}
init terminating in do_boot ({{badmatch,{error,{{_},{_}}}},[{ns_babysitter_bootstrap,start,0,[{_},{_}]},{init,start_em,1,[]},{init,do_boot,3,[]}]})
Any help would be appreciated.
It seems an ARM64 version of Couchbase Server has been available since Couchbase Server 7.1.1.
So I ran the command below to install Couchbase:
helm install couchbasev1 --values myvalues.yaml couchbase/couchbase-operator
myvalues.yaml:
cluster:
  image: couchbase/server:7.1.1
And it worked.
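For anyone hitting the same crash: the Go runtime panic above is typical of an amd64-only image running under emulation on an Apple Silicon node. A quick way to confirm the mismatch and verify the fix:

# Check what architecture the node reports (arm64 on Apple Silicon)
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'

# After switching to the 7.1.1 image, watch the pods become Ready
kubectl get pods --watch

Once the pod is Running, the readiness probe on port 8091 should start succeeding.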
I'm trying to install Eclipse Che by following this blog: https://che.eclipseprojects.io/2022/07/25/#karatkep-installing-eclipse-che-on-aks.html, yet after following all the steps I'm not able to install Eclipse Che successfully.
1) After running this command:
kubectl logs -l app.kubernetes.io/component=che-operator -n eclipse-che -f
these are the errors I'm facing:
logs: Waited for 1.034843163s due to client-side throttling, not priority and fairness, request: GET:https://10.1.0.1:443/apis/discovery.k8s.io/v1?timeout=32s
time="2022-09-12T14:08:29Z" level=info msg="Successfully reconciled."
2) The che-gateway pod is failing:
che-gateway-7d54ccdd59-bblw6 3/4 CrashLoopBackOff 18 (2m51s ago) 70m
Description: the oauth-proxy container keeps failing (CrashLoopBackOff).
Logs of the oauth-proxy container:
#invalid configuration:
missing setting: login-url
missing setting: redeem-url
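In case it helps others debugging the same CrashLoopBackOff: oauth-proxy derives login-url and redeem-url from the identity provider it is pointed at, so these messages suggest the auth settings never reached the gateway. A sketch of what I would check, assuming the CheCluster v2 API (field names may differ across operator versions, and the provider URL and client values are placeholders):

# Inspect the auth section of the CheCluster resource
kubectl get checluster eclipse-che -n eclipse-che -o yaml

# Point Che at your OIDC provider (placeholder values)
kubectl patch checluster eclipse-che -n eclipse-che --type merge \
  -p '{"spec":{"networking":{"auth":{"identityProviderURL":"https://<your-oidc-provider>","oAuthClientName":"<client-id>","oAuthSecret":"<client-secret>"}}}}'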
Dear all, recently I ran etcd on Aarch64 and it failed to boot, yet on k8s 1.18 it succeeded, which is strange. Error info:
(error screenshot not included)
I think CoreDNS only resolves names once its pod is ready. What changed in version 1.23?
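If the boot failure is the stock arm64 guard in etcd (etcd refuses to start on arm64 unless explicitly told the architecture is acceptable), the usual workaround is setting ETCD_UNSUPPORTED_ARCH. A minimal sketch against a kubeadm-style static pod manifest (the path and manifest layout are assumptions):

# /etc/kubernetes/manifests/etcd.yaml (excerpt)
spec:
  containers:
  - name: etcd
    env:
    - name: ETCD_UNSUPPORTED_ARCH
      value: "arm64"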
I am trying to deploy Hyperledger Fabric 1.0.5 on k8s and use the balance-transfer sample to test it. Everything works until instantiate-chaincode, where I get this:
[2019-01-02 23:23:14.392] [ERROR] instantiate-chaincode - Failed to send instantiate transaction and get notifications within the timeout period. undefined
[2019-01-02 23:23:14.393] [ERROR] instantiate-chaincode - Failed to order the transaction. Error code: undefined
I used kubectl logs to get peer0's log, which looks like this:
[ConnProducer] NewConnection -> ERRO 61a Failed connecting to orderer2.orderer1:7050 , error: context deadline exceeded
[ConnProducer] NewConnection -> ERRO 61b Failed connecting to orderer1.orderer1:7050 , error: context deadline exceeded
[ConnProducer] NewConnection -> ERRO 61c Failed connecting to orderer0.orderer1:7050 , error: context deadline exceeded
[deliveryClient] connect -> DEBU 61d Connected to
[deliveryClient] connect -> ERRO 61e Failed obtaining connection: Could not connect to any of the endpoints: [orderer2.orderer1:7050 orderer1.orderer1:7050 orderer0.orderer1:7050]
I checked the connectivity of orderer0:7050 and found no problem.
What should I do next?
Thanks for any help!
You didn't describe which runbook you followed to deploy Hyperledger Fabric, but it looks like your pods cannot find each other through DNS. If you are following Kubernetes standards, your pods should be in the orderer1 namespace, and hopefully you have Kubernetes Services for orderer0, orderer1, and orderer2.
You can read more about communication between the Fabric components in the "Communication between Fabric components" section here. Also read the "Work around the chaincode sandbox" section, which shows a workaround for --dns-search.
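For illustration, this is the shape of Service those orderer names imply; an address like orderer0.orderer1:7050 only resolves if a Service called orderer0 exists in the orderer1 namespace (a sketch; the pod label is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: orderer0
  namespace: orderer1
spec:
  selector:
    app: orderer0   # assumed label on the orderer pod
  ports:
  - name: grpc
    port: 7050
    targetPort: 7050

You can then test resolution from a peer pod with something like kubectl exec <peer0-pod> -- nslookup orderer0.orderer1 (pod name assumed).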
It looks like a firewall problem.
In my case, to run HLF on k8s, I disabled the firewall service.
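For reference, the commands I mean, assuming firewalld on the nodes (Ubuntu hosts would use ufw instead):

# Stop the firewall now and keep it off after reboots
sudo systemctl stop firewalld
sudo systemctl disable firewalld

Opening just the Fabric ports (7050 for orderers, 7051 for peers) would be the safer long-term option.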
I am trying to get minishift running on my machine (Windows 10) with VirtualBox 5.1.24.
Minishift version: 1.0.0+4f8cb6d
CDK Version: 3.0.0-2
Starting minishift gives me the following:
C:\>minishift start --vm-driver virtualbox
Starting local OpenShift cluster using 'virtualbox' hypervisor...
E0727 18:34:21.682796 17204 start.go:176] Error starting the VM: Error
creating new host: Error attempting to get plugin server address for RPC:
Failed to dial the plugin server in 10s. Retrying.
E0727 18:34:31.740746 17204 start.go:176] Error starting the VM: Error
creating new host: Error attempting to get plugin server address for RPC:
Failed to dial the plugin server in 10s. Retrying.
E0727 18:34:41.770667 17204 start.go:176] Error starting the VM: Error
creating new host: Error attempting to get plugin server address for RPC:
Failed to dial the plugin server in 10s. Retrying.
Error starting the VM: Error creating new host: Error attempting to get
plugin server address for RPC: Failed to dial the plugin server in 10s
Error creating new host: Error attempting to get plugin server address for
RPC: Failed to dial the plugin server in 10s
Error creating new host: Error attempting to get plugin server address for
RPC: Failed to dial the plugin server in 10s
I read in the comments that it needs to run from the C:\ drive, but it looks like this did not fix the problem. I would be happy about any hints on how to fix this. If there is any additional information you need, just let me know.
Sounds like you got it working.
I usually encourage folks who are having trouble starting their minishift VMs to try the following:
Find your preferred virtualization provider from the list of available options
Install the appropriate driver plugin for your system
Persist your VM provider configuration: minishift config set vm-driver virtualbox
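Putting those steps together, the whole sequence looks roughly like this (the VirtualBox driver name is taken from the command above):

# Clear any half-created VM left over from the failed attempts
minishift delete

# Persist the provider so every start uses it
minishift config set vm-driver virtualbox

# Start the cluster again
minishift start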
On upgrading Kubernetes from 1.0.6 to 1.1.3, I now see a bunch of the errors below during a rolling upgrade whenever any of my kube master or etcd hosts are down. We currently have a single master, with two etcd hosts.
2015-12-11T19:30:19.061+00:00 kube-master1 [err] [kube-apiserver] E1211 19:30:18.726490 26551 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 3871210 (3871628)
2015-12-11T19:30:19.075+00:00 kube-master1 [err] [kube-apiserver] E1211 19:30:18.733331 26551 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 3871156 (3871628)
2015-12-11T19:30:19.081+00:00 kube-master1 [err] [kube-apiserver] E1211 19:30:18.736569 26551 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 3871623 (3871628)
2015-12-11T19:30:19.095+00:00 kube-master1 [err] [kube-apiserver] E1211 19:30:18.740328 26551 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 3871622 (3871628)
2015-12-11T19:30:19.110+00:00 kube-master1 [err] [kube-apiserver] E1211 19:30:18.742972 26551 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 3871210 (3871628)
I believe these errors are caused by a new feature in 1.1: the --watch-cache option is now enabled by default. The errors cease at the end of the rolling upgrade.
I would like to know how to explain these errors, if they can be safely ignored, and how to change the system to avoid them in the future (for a longer term solution).
Yes - as you suggested, those errors are related to the new feature of serving watch from an in-memory cache in the apiserver.
So, if I understand correctly, what happened here is that:
- you upgraded (or in general restarted) the apiserver
- this caused all the existing watch connections to terminate
- once the apiserver started successfully, it regenerated its internal in-memory cache
- since watch can have some delay, it's possible that clients (which were renewing their watch connections) were slightly behind
- this caused such errors to be generated, and forced clients to relist and start watching from the new point
IIUC, those errors were present only during the upgrade and disappeared afterwards - so that's good.
In other words, such errors may appear on upgrade (or in general immediately after any restart of the apiserver). In such situations they may be safely ignored.
In fact, those probably shouldn't be errors - we could probably change them to warnings.
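For anyone who wants to see this behavior directly: you can reproduce the same error by opening a watch with a deliberately stale resourceVersion (a sketch using kubectl's raw API access; the namespace and resource are arbitrary):

# The apiserver answers a too-old resourceVersion with an ERROR event
# ("too old resource version"), after which a well-behaved client
# re-lists and resumes watching from the fresh version.
kubectl get --raw "/api/v1/namespaces/default/pods?watch=true&resourceVersion=1"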