invalid skaffold config: proxy: unknown scheme: http (VSCode Cloud Code)

I tried to follow the instructions in this YouTube video: https://www.youtube.com/watch?v=EtMIEtLQNa0
to run a Cloud Run service locally.
I created a sample Python hello-world application. When I choose the option "Run On Cloud Run Emulator", it fails with the error "invalid skaffold config: proxy: unknown scheme: http".
Any pointers on how this can be solved?
I'm using an Intel Mac, on a corporate VPN.
Skaffold &{Version:v2.0.4 ConfigVersion:skaffold/v4beta1 GitVersion: GitCommit: ....BuildDate:2022-12-21T09:11:50Z GoVersion:go1.19.1 Compiler:gc Platform:darwin/amd64 User:}
Loaded Skaffold defaults from "/Users/.../.skaffold/config"
map entry found when executing locate for &{hello-world-2 . 0x.... {<nil> <nil> <nil> <nil> <nil> 0x.... <nil>} [] {[] []} []} of type *latest.Artifact and pointer: 824635590528
Using kubectl context: cloud-run-dev-internal
invalid skaffold config: proxy: unknown scheme: http
Skaffold exited with code 1.
invalid skaffold config: proxy: unknown scheme: http
Deleted the temporary directory /var/folders/4j/../T/cloud-code-cloud-run-JSw6y6.
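One hunch, given the corporate VPN (an unverified guess, with a hypothetical proxy host): the "proxy: unknown scheme: http" wording matches what Go's proxy helper reports when a SOCKS-style variable such as ALL_PROXY carries an http:// URL, so checking how the proxy environment variables are set may be worth a try:

env | grep -i proxy
# if ALL_PROXY (or all_proxy) is set to an http:// URL, try unsetting it
# or switching it to a socks5:// URL your VPN actually provides:
unset ALL_PROXY all_proxy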

Zipkin tracing not working for docker-compose and Dapr

Traces that should be sent by the Dapr runtime to the Zipkin server somehow fail to reach it.
The situation is the following:
I'm using Docker Desktop on my Windows PC. I downloaded the sample from the Dapr repository (https://github.com/dapr/samples/tree/master/hello-docker-compose), which runs perfectly out of the box with docker-compose up.
Then I added Zipkin support as per the Dapr documentation:
I added this service at the bottom of docker-compose.yml:
zipkin:
  image: "openzipkin/zipkin"
  ports:
    - "9411:9411"
  networks:
    - hello-dapr
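(Once docker-compose up is re-run, the Zipkin UI should be reachable at http://localhost:9411, given the port mapping above.)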
and added config.yaml in the components folder:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
spec:
  mtls:
    enabled: false
  tracing:
    enabled: true
    exporterType: zipkin
    samplingRate: "1"
    expandParams: true
    includeBody: true
    zipkin:
      endpointAddress: "http://zipkin:9411/api/v2/spans"
When the application runs, it should send traces to the server, but nothing shows up in the Zipkin UI or its logs.
A strange thing started to appear in the logs of the nodeapp-dapr_1 service: error while reading spiffe id from client cert
pythonapp-dapr_1 | time="2021-03-15T19:14:17.9654602Z" level=debug msg="found mDNS IPv4 address in cache: 172.19.0.7:34549" app_id=pythonapp instance=ce32220407e2 scope=dapr.contrib type=log ver=edge
nodeapp-dapr_1 | time="2021-03-15T19:14:17.9661792Z" level=debug msg="error while reading spiffe id from client cert: unable to retrieve peer auth info. applying default global policy action" app_id=nodeapp instance=773c486b5aac scope=dapr.runtime.grpc.api type=log ver=edge
nodeapp_1 | Got a new order! Order ID: 947
nodeapp_1 | Successfully persisted state.
Additional info: the Dapr version used is 1.0.1. I made sure that security (mTLS) is disabled in the config file.
The configuration file is supposed to be in a different folder than the components:
Create a new folder, e.g. dapr, next to the components folder.
Move the components folder into the newly created dapr folder.
Then create config.yaml in the dapr folder.
Update docker-compose accordingly; the resulting layout is sketched below.
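The tree should end up roughly like this (the component file names are just whatever the sample already ships with):

.
├── docker-compose.yml
└── dapr/
    ├── config.yaml
    └── components/
        └── ... (existing component definitions)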
docker-compose.yml:
services:
  nodeapp-dapr:
    image: "daprio/daprd:edge"
    command: ["./daprd",
      "-app-id", "nodeapp",
      "-app-port", "3000",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50002",
      "-components-path", "/dapr/components",
      "-config", "/dapr/config.yaml"]
    volumes:
      - "./dapr/:/dapr"
    depends_on:
      - nodeapp
    network_mode: "service:nodeapp"
config.yaml:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
spec:
  mtls:
    enabled: false
  tracing:
    enabled: true
    samplingRate: "1"
    expandParams: true
    includeBody: true
    zipkin:
      endpointAddress: http://host.docker.internal:9411/api/v2/spans
I had an issue with localhost and 127.0.0.1 in the URL, which I resolved by using host.docker.internal as the hostname.
PS: Don't forget to kill all *-dapr_1 containers so they can load the new configuration; see the sketch below.
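For example, a clean reload (a sketch; this restarts everything rather than only the sidecars):

docker-compose down
docker-compose up -d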

Can't enable mongo service

I am trying to enable the mongo service using Ansible on my AWS AMI. Here are the tasks from the playbook:
- name: Mongodb repo
  yum_repository:
    name: mongodb
    description: mongodb
    baseurl: https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
    gpgkey: https://www.mongodb.org/static/pgp/server-3.4.asc

- name: Install mongodb
  yum:
    name: mongodb-org
    state: present

- name: Enable mongodb
  service:
    name: mongodb-org
    enabled: true
and here is the error:
TASK [mongodb_ami : Enable mongodb] ********************************************
fatal: [default]: FAILED! => {"changed": false, "msg": "Could not find the requested service mongodb-org: host"}
The first two tasks are okay, but the last one (enabling) doesn't work. How can I resolve this?
Are you sure the service name is mongodb-org? I think the service name is mongod:
- name: Enable mongodb
  service:
    name: mongod
    enabled: true
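If in doubt, the unit name can be verified on the host before pointing the service module at it (a quick manual check, assuming a systemd-based AMI):

systemctl list-unit-files | grep -i mongo
systemctl status mongod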

Trying to connect to mongodb service through Consul Connect Sidecar Proxy

I have a Minikube set up and a mongo instance running in it. I use Consul + Consul Connect to mesh my services. However, I cannot connect to mongo from another service using sidecar upstreams; some weird stuff is happening...
My mongo instance is installed using the Bitnami Helm chart. I just set the service name and username, changed the storage class to match my needs, and put Consul annotations for the service mesh in the pod annotations section:
image:
  registry: docker.io
  repository: bitnami/mongodb
  tag: 4.2.5-debian-10-r3
  pullPolicy: IfNotPresent
  debug: false

serviceAccount:
  create: true
  name: "svc-identity-data"

usePassword: true
mongodbRootPassword: rootpassword
mongodbUsername: identity
mongodbPassword: identity
mongodbDatabase: company

service:
  name: svc-identity-data
  annotations: {}
  type: ClusterIP
  port: 27017

useStatefulSet: true

replicaSet:
  enabled: false
  useHostnames: true
  name: rs0
  replicas:
    secondary: 1
    arbiter: 1
  pdb:
    enabled: true
    minAvailable:
      primary: 1
      secondary: 1
      arbiter: 1

annotations: {}
labels: {}

podAnnotations:
  "consul.hashicorp.com/connect-inject": "true"
  "consul.hashicorp.com/connect-service": "svc-identity-data"
  "consul.hashicorp.com/connect-service-protocol": "tcp"

persistence:
  enabled: true
  mountPath: /bitnami/mongodb
  subPath: ""
  storageClass: "standard"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

configmap:
  storage:
    dbPath: /bitnami/mongodb/data/db
    journal:
      enabled: true
    directoryPerDB: false
  systemLog:
    destination: file
    quiet: false
    logAppend: true
    logRotate: reopen
    path: /opt/bitnami/mongodb/logs/mongodb.log
    verbosity: 0
  net:
    port: 27017
    unixDomainSocket:
      enabled: true
      pathPrefix: /opt/bitnami/mongodb/tmp
    ipv6: false
    bindIp: 0.0.0.0
  processManagement:
    fork: false
    pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid
  setParameter:
    enableLocalhostAuthBypass: true
  security:
    authorization: enabled
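For reference, this kind of values file is applied when installing the chart; a sketch using Helm 3 syntax (the release name and repo alias here are hypothetical):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install identity-data bitnami/mongodb -f values.yaml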
Secondly, I started a stand-alone mongodb pod to use the mongo client, and meshed it with Consul Connect using annotations:
apiVersion: v1
kind: Pod
metadata:
  name: mongo-client
  labels:
    name: mongo-client
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
    "consul.hashicorp.com/connect-service-upstreams": "svc-identity-data:28017"
    "consul.hashicorp.com/connect-service-protocol": "tcp"
spec:
  containers:
    - name: mongo-client
      image: mongo:4.2.5
      imagePullPolicy: IfNotPresent
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 27017
I now have a mongodb service and a mongo-client pod with an upstream to the mongodb service bound on 127.0.0.1:28017.
When I try to connect to the mongodb service using my upstream, I get a behavior I don't understand:
> kubectl exec -it mongo-client mongo --host 127.0.0.1 --port 28017 -u root -p rootpassword
MongoDB shell version v4.2.5
connecting to: mongodb://127.0.0.1:28017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("8c46012d-8083-4029-8495-167bbe8bf063") }
MongoDB server version: 4.2.5
Server has startup warnings:
2020-04-22T12:20:14.777+0000 I STORAGE [initandlisten]
2020-04-22T12:20:14.777+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2020-04-22T12:20:14.777+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
>
bye
No problem here; everything works perfectly fine for me. But if I use mongo with a connection string instead of separate parameters, I get a connection refused:
> kubectl exec -it mongo-client mongo mongodb://root:roopassword@127.0.0.1:28017/?authSource=admin
MongoDB shell version v4.2.5
connecting to: mongodb://127.0.0.1:28017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
2020-04-22T15:04:07.955+0000 I NETWORK [js] DBClientConnection failed to receive message from 127.0.0.1:28017 - HostUnreachable: Connection closed by peer
2020-04-22T15:04:07.968+0000 E QUERY [js] Error: network error while attempting to run command 'isMaster' on host '127.0.0.1:28017' :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-04-22T15:04:07.973+0000 F - [main] exception: connect failed
2020-04-22T15:04:07.973+0000 E - [main] exiting with code 1
I don't understand at all what the difference is between using a connection string and separate parameters; if you have any clue or a solution, please let me know.
P.S.: I didn't set up any secure communication (TLS). I'm on a minikube (because I'm a microservice-architecture and Kubernetes n00b) and this is to experiment with service mesh (we need to live in the current era). A solution involving connecting to the service without using the sidecar is not the point; by the way, connecting directly to the service works perfectly using a connection string:
> kubectl exec -it mongo-client mongo mongodb://root:roopassword@svc-identity-data:28017/?authSource=admin
MongoDB shell version v4.2.5
connecting to: mongodb://svc-identity-data:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("713febaf-2000-4ca6-8b1f-963c76986e72") }
MongoDB server version: 4.2.5
Server has startup warnings:
2020-04-22T12:20:14.777+0000 I STORAGE [initandlisten]
2020-04-22T12:20:14.777+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2020-04-22T12:20:14.777+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
>
bye
EDIT: Rebooting minikube makes everything work as intended. I will investigate the matter further to understand why; maybe someone else will hit the same issue.
EDIT 2: I discovered one thing: the connection error when connecting to mongo through the sidecar is random. When I re-run the command until it succeeds, here is what I get:
root@mongo-client:/# mongo mongodb://root:rootpassword@localhost:28017/?authSource=admin
MongoDB shell version v4.2.5
connecting to: mongodb://localhost:28017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
2020-04-24T12:51:15.641+0000 I NETWORK [js] DBClientConnection failed to receive message from localhost:28017 - HostUnreachable: Connection closed by peer
2020-04-24T12:51:15.702+0000 E QUERY [js] Error: network error while attempting to run command 'isMaster' on host 'localhost:28017' :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-04-24T12:51:15.729+0000 F - [main] exception: connect failed
2020-04-24T12:51:15.729+0000 E - [main] exiting with code 1
root@mongo-client:/# mongo mongodb://root:rootpassword@localhost:28017/?authSource=admin
MongoDB shell version v4.2.5
connecting to: mongodb://localhost:28017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("628bfcf9-6d44-4168-ab74-19a717d746f6") }
MongoDB server version: 4.2.5
Server has startup warnings:
2020-04-24T06:43:39.359+0000 I STORAGE [initandlisten]
2020-04-24T06:43:39.359+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2020-04-24T06:43:39.359+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
>
bye
And on the mongo side, the log:
2020-04-24T12:51:19.281+0000 I NETWORK [conn6647] end connection 127.0.0.1:54148 (6 connections now open)
2020-04-24T12:51:19.526+0000 I COMMAND [conn6646] command admin.$cmd appName: "MongoDB Shell" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } numYields:0 reslen:196 locks:{} protocol:op_msg 231ms
2020-04-24T12:51:19.938+0000 I ACCESS [conn6646] Successfully authenticated as principal root on admin from client 127.0.0.1:54142
2020-04-24T12:51:20.024+0000 I NETWORK [listener] connection accepted from 127.0.0.1:54168 #6648 (7 connections now open)
2020-04-24T12:51:20.027+0000 I NETWORK [conn6648] received client metadata from 127.0.0.1:54168 conn6648: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.5" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 4.19.94" } }
2020-04-24T12:51:20.215+0000 I NETWORK [conn6648] end connection 127.0.0.1:54168 (6 connections now open)
2020-04-24T12:51:21.328+0000 I NETWORK [conn6646] end connection 127.0.0.1:54142 (5 connections now open)
I am more and more confused; I cannot explain that behavior.
I found the solution, and it turns out to be the simplest issue possible: resources.
My minikube didn't have enough resources to run all pods swiftly, which introduced latency between the sidecar proxy pods even though Kubernetes raised no error about any outage.
I'm a Kubernetes learner, so I didn't think of it right away. Now that I know what happened, I can investigate in the right direction to understand to what extent latency can be an issue.
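If under-provisioning is the suspect, restarting minikube with more resources is a quick test (the numbers here are a hypothetical sizing; adjust to your machine):

minikube stop
minikube start --cpus 4 --memory 8192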
The problem may be that the CN of the certificate doesn't match the hostname value in the MongoDB config file. This is about the MongoDB specification and the parameters with which you are running it.
The CN (common name) or SAN (subject alternative name) of the certificate has to match the value of --hostname that you supply when running mongo.
Your MongoDB URI is:
MONGODB_URI=mongodb://root:roopassword@127.0.0.1:28017/?authSource=admin
but the MongoDB is NOT on localhost. Also, the MongoDB server needs to allow ANY host to connect to the database; by default it will ONLY allow connections from the same runtime. You need to get the IP address of the service which is assigned to the pod with your database container: svc-identity-data has address 10.107.99.51.
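A quick way to look up that service address (a sketch; add -n with your namespace if the chart wasn't installed in the default one):

kubectl get svc svc-identity-data -o wide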
Take a look: mongodb-ssl, mongodb-failed-to-connect

How to use my own hub image when deploying a jupyterhub on google kubernetes engine?

I'm trying to deploy JupyterHub on Google Kubernetes engine.
I managed to deploy it by following the Zero to JupyterHub with Kubernetes tutorial.
My next step is to deploy JupyterHub using my own hub image but I keep getting an error message (from the proxy apparently).
So I created a repository on the Docker Hub registry and tried to modify my Helm config file so it pulls the image (following the chart's Configuration Reference).
I updated the deploy with the following command:
helm upgrade --install $RELEASE jupyterhub/jupyterhub --namespace $NAMESPACE --version=0.8.2 --values config.yaml
As a result, I get a "Service Unavailable" message (the pods are all running).
The proxy pod log:
09:14:24.370 - info: [ConfigProxy] Adding route / -> http://10.47.249.21:8081
09:14:24.380 - info: [ConfigProxy] Proxying http://0.0.0.0:8000 to http://10.47.249.21:8081
09:14:24.381 - info: [ConfigProxy] Proxy API at http://0.0.0.0:8001/api/routes
09:16:01.434 - error: [ConfigProxy] 503 GET /hub/admin connect ECONNREFUSED 10.47.249.21:8081
09:16:01.438 - error: [ConfigProxy] Failed to get custom error page Error: connect ECONNREFUSED 10.47.249.21:8081
    at Object.exports._errnoException (util.js:1020:11)
    at exports._exceptionWithHostPort (util.js:1043:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1086:14)
Hub image Dockerfile:
FROM jupyterhub/jupyterhub:0.9.6
USER root
COPY MZ_logo.jpg /usr/local/share/jupyter/hub/static/images/MZ-logo.jpg
USER ${NB_USER}
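For completeness, the image also has to be built and pushed to the repository the chart pulls from; a minimal sketch reusing the placeholders from the config below:

docker build -t <DOCKER_HUB_USERNAME>/<DOCKER_HUB_REPO>:latest .
docker push <DOCKER_HUB_USERNAME>/<DOCKER_HUB_REPO>:latest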
Helm config.yaml file:
proxy:
  secretToken: "<TOKEN>"
auth:
  admin:
    users:
      - admin1
      - admin2
  whitelist:
    users:
      - user1
      - user2
hub:
  imagePullPolicy: 'Always'
  imagePullSecret:
    enabled: true
    username: <DOCKER_HUB_USERNAME>
    password: <DOCKER_HUB_PASSWORD>
  image:
    name: <DOCKER_HUB_USERNAME>/<DOCKER_HUB_REPO>
    tag: latest
  extraConfig: |
    c.JupyterHub.logo_file = '/usr/local/share/jupyter/hub/static/images/MZ-logo.jpg'

Docker [for mac] file system became read-only which breaks almost all features of docker

My Docker ran into an error state where I cannot use it anymore.
Output of docker system info:
Containers: 14
 Running: 2
 Paused: 0
 Stopped: 12
Images: 61
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: error
 NodeID:
 Error: open /var/lib/docker/swarm/worker/tasks.db: read-only file system
 Is Manager: false
 Node Address: 192.168.65.3
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.87-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.952GiB
Name: linuxkit-025000000001
ID: MCSC:SFXH:R3JC:NU4D:OJ5V:K4B5:LPMJ:2BFL:LHT3:LYCI:XKY2:DTE6
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: docker.for.mac.http.internal:3128
HTTPS Proxy: docker.for.mac.http.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
This behaviour occurred after I built the following Dockerfile:
FROM perl:5.20

RUN apt-get update && apt-get install -y libsoap-lite-perl \
    && rm -rf /var/lib/apt/lists/*

RUN cpan SOAP::LITE
The error message when I try to build an image, run a container, or remove an image is always similar to this:
Error: open /var/lib/docker/swarm/worker/tasks.db: read-only file system
For example, if I try to execute this command:
docker container run -it perl:5.20 bash
I get this error:
docker: Error response from daemon: mkdir /var/lib/docker/overlay2/1b966e163e500a8c78a64e8d0f14984b091c1c5fe188a60b8bd030672d3138d9-init: read-only file system.
How can I reset my docker so these errors go away?
Go to your Docker for Mac icon in the menu bar (top right), click on it, and then click Restart.
After that, Docker works as expected.
This seems to be a temporary issue, since I cannot reproduce it after restarting Docker. My guess is that I had a network communication breakdown while Docker tried to download and install the packages in the Dockerfile.
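If the menu-bar restart is not available for some reason, quitting and relaunching Docker Desktop from a terminal amounts to the same thing (plain macOS built-ins, nothing Docker-specific):

osascript -e 'quit app "Docker"'
open -a Docker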