Bootstrap InfluxDB 2 in a docker container with pre-existing influx-configs file? - kubernetes

I'd like to run InfluxDB 2 in a Docker container in Kubernetes, and I'd like to avoid having to manually set up a user. I know from https://hub.docker.com/_/influxdb that it's possible to do this using environment variables, and I've made that work, but I'd like to do it with a Kubernetes Secret instead and mount that as the file /etc/influxdb2/influx-configs in the container.
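For reference, the environment-variable approach that did work looks roughly like this in the container spec of the StatefulSet (a sketch; the DOCKER_INFLUXDB_INIT_* variables are the ones documented on the Docker Hub page, and all values below are placeholders):
env:
  # DOCKER_INFLUXDB_INIT_MODE=setup makes the image's entrypoint run the
  # initial onboarding (user, org, bucket, token) on first start
  - name: DOCKER_INFLUXDB_INIT_MODE
    value: "setup"
  - name: DOCKER_INFLUXDB_INIT_USERNAME
    value: "admin"
  - name: DOCKER_INFLUXDB_INIT_PASSWORD
    value: "change-me"
  - name: DOCKER_INFLUXDB_INIT_ORG
    value: "initial_organization"
  - name: DOCKER_INFLUXDB_INIT_BUCKET
    value: "initial_bucket"
  - name: DOCKER_INFLUXDB_INIT_ADMIN_TOKEN
    value: "token_token_token_token"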
I have this secret:
apiVersion: v1
kind: Secret
metadata:
  name: influxdb-org-user-auth-secret
stringData:
  influx-configs: |+
    [default]
    url = "http://localhost:8086"
    token = "token_token_token_token"
    org = "initial_organization"
    active = true
And I'm mounting it like this in my statefulset:
...
volumeMounts:
  - name: influxdb-org-user-auth
    readOnly: true
    mountPath: "/etc/influxdb2"
...
volumes:
  - name: influxdb-org-user-auth
    secret:
      secretName: influxdb-org-user-auth-secret
And this seems to work. If I go into the container I can see this:
I have no name!@influxdb-0:/$ cat /etc/influxdb2/influx-configs
[default]
url = "http://localhost:8086"
token = "token_token_token_token"
org = "initial_organization"
active = true
I can also see that it seems to be a symbolic link:
I have no name!@influxdb-0:/$ ls -ahl /etc/influxdb2/influx-configs
lrwxrwxrwx 1 root 20000 21 May 5 10:49 /etc/influxdb2/influx-configs -> ..data/influx-configs
However, if I port-forward (kubectl -n observability port-forward influxdb-0 8086:8086) and open a browser at http://localhost:8086, I'm redirected to http://localhost:8086/onboarding/0, which seems to indicate that my efforts failed.
Here are the initial logs of the influxdb container:
chmod: changing permissions of '/var/lib/influxdb2': Operation not permitted
chmod: changing permissions of '/etc/influxdb2': Read-only file system
2022-05-05T10:49:57.580064860Z warn boltdb not found at configured path, but DOCKER_INFLUXDB_INIT_MODE not specified, skipping setup wrapper {"system": "docker", "bolt_path": ""}
ts=2022-05-05T10:49:57.703727Z lvl=info msg="Welcome to InfluxDB" log_id=0aGyIUml000 version=2.1.1 commit=657e1839de build_date=2021-11-09T03:03:48Z
ts=2022-05-05T10:49:57.707452Z lvl=info msg="Resources opened" log_id=0aGyIUml000 service=bolt path=/var/lib/influxdb2/influxd.bolt
ts=2022-05-05T10:49:57.707518Z lvl=info msg="Resources opened" log_id=0aGyIUml000 service=sqlite path=/var/lib/influxdb2/influxd.sqlite
ts=2022-05-05T10:49:57.708371Z lvl=info msg="Bringing up metadata migrations" log_id=0aGyIUml000 service="KV migrations" migration_count=18
ts=2022-05-05T10:49:57.797799Z lvl=info msg="Bringing up metadata migrations" log_id=0aGyIUml000 service="SQL migrations" migration_count=3
ts=2022-05-05T10:49:57.805939Z lvl=info msg="Using data dir" log_id=0aGyIUml000 service=storage-engine service=store path=/var/lib/influxdb2/engine/data
ts=2022-05-05T10:49:57.805974Z lvl=info msg="Compaction settings" log_id=0aGyIUml000 service=storage-engine service=store max_concurrent_compactions=8 throughput_bytes_per_second=50331648 throughput_bytes_per_second_burst=50331648
ts=2022-05-05T10:49:57.805986Z lvl=info msg="Open store (start)" log_id=0aGyIUml000 service=storage-engine service=store op_name=tsdb_open op_event=start
ts=2022-05-05T10:49:57.806024Z lvl=info msg="Open store (end)" log_id=0aGyIUml000 service=storage-engine service=store op_name=tsdb_open op_event=end op_elapsed=0.037ms
ts=2022-05-05T10:49:57.806043Z lvl=info msg="Starting retention policy enforcement service" log_id=0aGyIUml000 service=retention check_interval=30m
ts=2022-05-05T10:49:57.806049Z lvl=info msg="Starting precreation service" log_id=0aGyIUml000 service=shard-precreation check_interval=10m advance_period=30m
ts=2022-05-05T10:49:57.806082Z lvl=info msg="Starting query controller" log_id=0aGyIUml000 service=storage-reads concurrency_quota=1024 initial_memory_bytes_quota_per_query=9223372036854775807 memory_bytes_quota_per_query=9223372036854775807 max_memory_bytes=0 queue_size=1024
ts=2022-05-05T10:49:57.806839Z lvl=info msg="Configuring InfluxQL statement executor (zeros indicate unlimited)." log_id=0aGyIUml000 max_select_point=0 max_select_series=0 max_select_buckets=0
ts=2022-05-05T10:49:58.091674Z lvl=info msg=Listening log_id=0aGyIUml000 service=tcp-listener transport=http addr=:8086 port=8086
ts=2022-05-05T10:49:58.091705Z lvl=info msg=Starting log_id=0aGyIUml000 service=telemetry interval=8h
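I suspect the two chmod warnings at the top are just a consequence of the secret volume being mounted read-only over all of /etc/influxdb2, shadowing whatever the image had there. If that turns out to matter, mounting only the single key with subPath should leave the rest of the directory as the image ships it; a sketch I haven't tested:
volumeMounts:
  - name: influxdb-org-user-auth
    readOnly: true
    # mount just the influx-configs key as a single file
    mountPath: /etc/influxdb2/influx-configs
    subPath: influx-configs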
Should this be possible? If so, what am I missing? Thanks for reading!
(Also asked here: https://github.com/influxdata/influxdata-docker/issues/611)

Related

SELINUX_ERR invalid context on raspberrypi3 on simple policy

I have built the core-image-selinux image and flashed it onto a Raspberry Pi 3.
root@raspberrypi3:~# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: permissive
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: requested (insecure)
Max kernel policy version: 33
root@raspberrypi3:~# id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
When I try to run my application, I get the following error:
type=SELINUX_ERR msg=audit(1661751160.368:117): op=security_compute_sid invalid_context="unconfined_u:unconfined_r:myapp_t:s0-s0:c0.c1023" scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tcontext=system_u:object_r:myapp_exec_t:s0 tclass=process
My policy code:
$ cat userapp.te
policy_module(userapp, 1.0.0)
require {
    type unconfined_t;
    class process transition;
}
type myapp_t;
type myapp_exec_t;
domain_type(myapp_t)
domain_entry_file(myapp_t, myapp_exec_t)
type_transition unconfined_t myapp_exec_t : process myapp_t;
$ cat userapp.fc
/usr/bin/userapp -- gen_context(system_u:object_r:myapp_exec_t,s0)

Getting error 'failed to execute when condition: cannot fetch phase from <nil>' with Argo CD notifications

I have installed a brand-new Argo CD (v2.1.7) and notifications (v1.2.0). I configured it to send me a Slack message and subscribed my application to the on-deployed trigger using the following annotation:
annotations:
  notifications.argoproj.io/subscribe.on-deployed.slack: my_channel
When I deploy my application, the log output of the argocd-notifications-controller is:
time="2021-12-10T12:18:23Z" level=error msg="failed to execute oncePer condition: cannot fetch syncResult from <nil> (1:27)\n | app.status.operationState.syncResult.revision\n | ..........................^"
time="2021-12-10T12:18:23Z" level=info msg="Trigger on-deployed result: [{[0].y7b5sbwa2Q329JYH755peeq-fBs [app-deployed] false}]" resource=argocd/ah-ctp-argocd-test
time="2021-12-10T12:18:23Z" level=info msg="Processing completed" resource=argocd/ah-ctp-argocd-test
time="2021-12-10T12:19:21Z" level=info msg="Start processing" resource=argocd/ah-ctp-argocd-test
time="2021-12-10T12:19:21Z" level=error msg="failed to execute when condition: cannot fetch phase from <nil> (1:27)\n | app.status.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy'\n | ..........................^"
As a test, I changed the when condition to when: true and added {{ .app }} to the message body, then re-deployed. I received the notification in Slack; however, {{ .app }} does not contain operationState. It does contain, for instance, app.status.health.status, and its value is 'Healthy'.
I see no one has posted any similar error online, which leads me to think I must be doing something wrong. Does anyone have any advice?
Okay, so this was a PEBKAC error. I received a notification the first time, but never again. It turns out that Argo CD keeps track of which commits it has already sent notifications for. The problem was that I was deleting and redeploying the app in Argo CD, but it was always the same commit. When I created a new commit and deployed from that, the notifications came through as expected.
For me, the same errors appeared when I had wrong values in the argocd-notifications-cm ConfigMap.
I was using an unquoted value for apiURL, which I wasn't using anyway. Once I removed (commented out) the apiURL entry in service.slack, everything started working.
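For reference, the relevant part of my argocd-notifications-cm ended up looking roughly like this (a sketch from memory; the commented-out apiURL line is the one that was causing trouble, and $slack-token is the usual secret reference):
service.slack: |
  token: $slack-token
  # apiURL: https://example.slack.com/api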

Problem with minikube and nginx ingress after reinstalling minikube

When I run the following command:
minikube addons enable ingress
I get the following error:
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎 Verifying ingress addon...
❌ Exiting due to MK_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: Process exited with status 1
stdout:
namespace/ingress-nginx unchanged
configmap/ingress-nginx-controller unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
serviceaccount/ingress-nginx-admission unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
service/ingress-nginx-controller-admission unchanged
service/ingress-nginx-controller unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
stderr:
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"controller\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-controller\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"minReadySeconds\":0,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"controller\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"}},\"strategy\":{\"rollingUpdate\":{\"maxUnavailable\":1},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"controller\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\",\"gcp-auth-skip-secret\":\"true\"}},\"spec\":{\"containers\":[{\"args\":[\"/nginx-ingress-controller\",\"--ingress-class=nginx\",\"--configmap=$(POD_NAMESPACE)/ingress-nginx-controller\",\"--report-node-internal-ip-address\",\"--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services\",\"--udp-services-configmap=$(POD_NAMESPACE)/udp-services\",\"--validating-webhook=:8443\",\"--validating-webhook-certificate=/usr/local/certificates/cert\",\"--validating-webhook-key=/usr/local/certificates/key\"],\"env\":[{\"name\":\"POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}},{\"name\":\"LD_PRELOAD\",\"value\":\"/usr/local/lib/libmimalloc.so\"}],\"image\":\"k8s.gcr.io/ingress-nginx/controller:v0.44.0#sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\",\"imagePullPolicy\":\"IfNotPresent\",\"lifecycle\":{\"preStop\":{\"exec\":{\"command\":[\"/wait-shutdown\"]}}},\"livenessProbe\":{\"failureThreshold\":5,\"httpGet\":{\"path\":\"/healthz\",\"port\":10254,\"scheme\":\"HTTP\"},\"initialDelaySeconds\":10,\"periodSeconds\":10,\"successThreshold\":1,\"timeoutSeconds\":1},\"name\":\"controller\",\"ports\":[{\"containerPort\":80,\"hostPort\":80,\"name\":\"http\",\"protocol\":\"TCP\"},{\"containerPort\":443,\"hostPort\":443,\"name\":\"https\",\"protocol\":\"TCP\"},{\"containerPort\":8443,\"name\":\"webhook\",\"protocol\":\"TCP\"}],\"readinessProbe\":{\"failureThreshold\":3,\"httpGet\":{\"path\":\"/healthz\",\"port\":10254,\"scheme\":\"HTTP\"},\"initialDelaySeconds\":10,\"periodSeconds\":10,\"successThreshold\":1,\"timeoutSeconds\":1},\"resources\":{\"requests\":{\"cpu\":\"100m\",\"memory\":\"90Mi\"}},\"securityContext\":{\"allowPrivilegeEscalation\":true,\"capabilities\":{\"add\":[\"NET_BIND_SERVICE\"],\"drop\":[\"ALL\"]},\"runAsUser\":101},\"volumeMounts\":[{\"mountPath\":\"/usr/local/certificates/\",\"name\":\"webhook-cert\",\"readOnly\":true}]}],\"dnsPolicy\":\"ClusterFirst\",\"serviceAccountName\":\"ingress-nginx\",\"volumes\":[{\"name\":\"webhook-cert\",\"secret\":{\"secretName\":\"ingress-nginx-admission\"}}]}}}}\n"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"minReadySeconds":0,"selector":{"matchLabels":{"addonmanager.kubernetes.io/mode":"Reconcile"}},"strategy":{"$retainKeys":["rollingUpdate","type"],"rollingUpdate":{"maxUnavailable":1}},
"template":{"metadata":{"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","gcp-auth-skip-secret":"true"}},"spec":{"$setElementOrder/containers":[{"name":"controller"}],"containers":[{"$setElementOrder/ports":[{"containerPort":80},{"containerPort":443},{"containerPort":8443}],"args":["/nginx-ingress-controller","--ingress-class=nginx","--configmap=$(POD_NAMESPACE)/ingress-nginx-controller","--report-node-internal-ip-address","--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services","--udp-services-configmap=$(POD_NAMESPACE)/udp-services","--validating-webhook=:8443","--validating-webhook-certificate=/usr/local/certificates/cert","--validating-webhook-key=/usr/local/certificates/key"],"image":"k8s.gcr.io/ingress-nginx/controller:v0.44.0#sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a","name":"controller","ports":[{"containerPort":80,"hostPort":80},{"containerPort":443,"hostPort":443}]}],"nodeSelector":null,"terminationGracePeriodSeconds":null}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "ingress-nginx-controller", Namespace: "ingress-nginx"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Deployment.apps "ingress-nginx-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "app.kubernetes.io/component":"controller", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"helm.sh/hook":null,"helm.sh/hook-delete-policy":null,"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-create\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-create\"},\"spec\":{\"containers\":[{\"args\":[\"create\",\"--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc\",\"--namespace=$(POD_NAMESPACE)\",\"--secret-name=ingress-nginx-admission\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"docker.io/jettech/kube-webhook-certgen:v1.5.1#sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"create\"}],\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"template":{"metadata":{"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"$setElementOrder/containers":[{"name":"create"}],"containers":[{"image":"docker.io/jettech/kube-webhook-certgen:v1.5.1#sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7","name":"create"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "ingress-nginx-admission-create", Namespace: "ingress-nginx"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Job.batch "ingress-nginx-admission-create" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-create", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "controller-uid":"d33a74a3-101c-4e82-a2b7-45b46068f189", "job-name":"ingress-nginx-admission-create"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"create", Image:"docker.io/jettech/kube-webhook-certgen:v1.5.1#sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7", Command:[]string(nil), Args:[]string{"create", "--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc", "--namespace=$(POD_NAMESPACE)", "--secret-name=ingress-nginx-admission"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"POD_NAMESPACE", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00a79dea0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc003184dc0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc010b3d980), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"helm.sh/hook":null,"helm.sh/hook-delete-policy":null,"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-patch\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-patch\"},\"spec\":{\"containers\":[{\"args\":[\"patch\",\"--webhook-name=ingress-nginx-admission\",\"--namespace=$(POD_NAMESPACE)\",\"--patch-mutating=false\",\"--secret-name=ingress-nginx-admission\",\"--patch-failure-policy=Fail\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"docker.io/jettech/kube-webhook-certgen:v1.5.1#sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"patch\"}],\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"template":{"metadata":{"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"$setElementOrder/containers":[{"name":"patch"}],"containers":[{"image":"docker.io/jettech/kube-webhook-certgen:v1.5.1#sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7","name":"patch"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "ingress-nginx-admission-patch", Namespace: "ingress-nginx"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Job.batch "ingress-nginx-admission-patch" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-patch", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "controller-uid":"ef303f40-b52d-49c5-ab80-8330379fed36", "job-name":"ingress-nginx-admission-patch"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"patch", Image:"docker.io/jettech/kube-webhook-certgen:v1.5.1#sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7", Command:[]string(nil), Args:[]string{"patch", "--webhook-name=ingress-nginx-admission", "--namespace=$(POD_NAMESPACE)", "--patch-mutating=false", "--secret-name=ingress-nginx-admission", "--patch-failure-policy=Fail"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"POD_NAMESPACE", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00fd798a0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc00573d190), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00d7d9100), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
]
😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose
I had some issues on my PC, so I reinstalled minikube. After that, minikube start worked fine, but when I enabled ingress the error above appeared.
And when I run skaffold dev, the following error shows up:
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
- Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": an error on the server ("") has prevented the request from succeeding
exiting dev mode because first deploy failed: kubectl apply: exit status 1
As @Brian de Alwis pointed out in the comments section, PR #11189 should resolve the above issue.
You can try the v1.20.0-beta.0 release with this fix. Additionally, a stable v1.20.0 version is now available.
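As for the deprecation warning printed by skaffold dev, the Ingress manifests will eventually need to move from extensions/v1beta1 to networking.k8s.io/v1; a minimal sketch of the newer form (name, host and service are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # ingress class annotation for the nginx controller
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80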

FN Hello World App 8080 Connection Issues

I'm following the Introduction to Fn with Python Tutorial here. I'm on Mac Catalina 10.15.7.
When I run the command "fn create app pythonapp" in my terminal, I receive this error:
Fn: Post "http://127.0.0.1:8080/v2/apps": dial tcp 127.0.0.1:8080: connect: connection refused
I tried the solution explained here, but it didn't fix the problem.
When I run "fn version", this is my output:
Client version is latest version: 0.6.1
Server version: ?
If I run "fn start", this is my output:
2020/12/14 10:29:57 ¡¡¡ 'fn start' should NOT be used for PRODUCTION !!! see https://github.com/fnproject/fn-helm/
time="2020-12-14T18:29:57Z" level=info msg="Setting log level to" fields.level=info
time="2020-12-14T18:29:57Z" level=info msg="Registering data store provider 'sql'"
time="2020-12-14T18:29:57Z" level=info msg="Connecting to DB" url="sqlite3:///app/data/fn.db"
time="2020-12-14T18:29:57Z" level=info msg="datastore dialed" datastore=sqlite3 max_idle_connections=256 url="sqlite3:///app/data/fn.db"
time="2020-12-14T18:29:57Z" level=info msg="agent starting cfg={MinDockerVersion:17.10.0-ce ContainerLabelTag: DockerNetworks: DockerLoadFile: DisableUnprivilegedContainers:false FreezeIdle:50ms HotPoll:200ms HotLauncherTimeout:1h0m0s HotPullTimeout:10m0s HotStartTimeout:5s DetachedHeadRoom:6m0s MaxResponseSize:0 MaxHdrResponseSize:0 MaxLogSize:1048576 MaxTotalCPU:0 MaxTotalMemory:0 MaxFsSize:0 MaxPIDs:50 MaxOpenFiles:0xc42020cb98 MaxLockedMemory:0xc42020cbb0 MaxPendingSignals:0xc42020cbb8 MaxMessageQueue:0xc42020cbc0 PreForkPoolSize:0 PreForkImage:busybox PreForkCmd:tail -f /dev/null PreForkUseOnce:0 PreForkNetworks: EnableNBResourceTracker:false MaxTmpFsInodes:0 DisableReadOnlyRootFs:false DisableDebugUserLogs:false IOFSEnableTmpfs:false EnableFDKDebugInfo:false IOFSAgentPath:/iofs IOFSMountRoot:/Users/shaymasirving/.fn/iofs IOFSOpts: ImageCleanMaxSize:0 ImageCleanExemptTags: ImageEnableVolume:false}"
time="2020-12-14T18:29:57Z" level=info msg="no docker auths from config files found (this is fine)" error="open /root/.dockercfg: no such file or directory"
time="2020-12-14T18:29:57Z" level=info msg="available memory" cgroup_limit=9223372036854771712 head_room=268435456 total_memory=1077936128
time="2020-12-14T18:29:57Z" level=info msg="ram reservations" avail_memory=809500672
time="2020-12-14T18:29:57Z" level=info msg="available cpu" avail_cpu=8000 total_cpu=8000
time="2020-12-14T18:29:57Z" level=info msg="cpu reservations" cpu=8000
time="2020-12-14T18:29:57Z" level=info msg="\n ______\n / ____/___\n / /_ / __ \\\n / __/ / / / /\n /_/ /_/ /_/\n"
time="2020-12-14T18:29:57Z" level=info msg="Fn serving on `:8080`" type=full version=0.3.749

fluentd isn't shipping logs to Stackdriver

I have an application deployed on Kubernetes on GKE.
Kubernetes version: v1.7.11-gke.1
Stackdriver Logging is enabled on my cluster.
fluentd-gcp image on my cluster (by default):
gcr.io/google-containers/fluentd-gcp:2.0.9
My logs were all OK and visible in Stackdriver, but since a few days ago logs from one deployment (let's call it my-app) have stopped arriving in Stackdriver, even though the app is still logging them:
kubectl logs -f my-app-3270987706-cx0r2 --namespace=production
{"time":"2018-01-30 16:11:13.155","msg":"ignoring xml"}
{"time":"2018-01-30 16:11:14.155","msg":"success blabla"}
I see the following logs from fluentd:
2018-01-30 16:11:46 +0000 [warn]: emit transaction failed: error_class=Errno::ENOENT error="No such file or directory @ sys_fail2 - (/var/log/fluentd-buffers/kubernetes.system.buffer..b563203c1da7cb5e1.log, /var/log/fluentd-buffers/kubernetes.system.buffer..q563203c1da7cb5e1.log)" tag="docker"
2018-01-30 16:11:46 +0000 [warn]: suppressed same stacktrace
2018-01-30 16:11:46 +0000 [error]: Exception emitting record: No such file or directory @ sys_fail2 - (/var/log/fluentd-buffers/kubernetes.system.buffer..b563203c1da7cb5e1.log, /var/log/fluentd-buffers/kubernetes.system.buffer..q563203c1da7cb5e1.log)
Why aren't the logs being shipped to Stackdriver, and how can I fix it?
Edit:
I'll note that the logs of other apps do appear in Stackdriver.
The logs of the failing app are very big - maybe that's why they fail to ship?