I am a beginner with Kubernetes. I have enabled it from Docker Desktop and now I want to install the Kubernetes Dashboard.
I followed this link:
https://github.com/kubernetes/dashboard#getting-started
And I executed my first command in PowerShell as an administrator:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
I get the following error:
error: error validating
"https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml":
error validating data:
ValidationError(Deployment.spec.template.spec.securityContext):
unknown field "seccompProfile" in
io.k8s.api.core.v1.PodSecurityContext; if you choose to ignore these
errors, turn validation off with --validate=false
So I tried the same command with --validate=false.
This time it ran with no errors, and when I execute:
kubectl proxy
I got an access token using:
kubectl describe secret -n kube-system
and I try to access the link provided in the guide:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
I get the following swagger response:
The error indicates that your cluster version is not compatible with seccompProfile.type: RuntimeDefault. In this case, don't apply the dashboard spec (https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml) right away; instead, download it and comment out the following lines in the spec:
...
    spec:
      # securityContext:
      #   seccompProfile:
      #     type: RuntimeDefault
...
Then apply the updated spec with kubectl apply -f recommended.yaml.
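A minimal sketch of that workflow, assuming curl is available and the manifest is saved locally as recommended.yaml:
# download the dashboard manifest locally
curl -Lo recommended.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
# edit recommended.yaml and comment out the seccompProfile block shown above, then:
kubectl apply -f recommended.yaml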
I'm attempting to get Config Connector up and running on my GKE project and am following this getting started guide.
So far I have enabled the appropriate APIs:
> gcloud services enable cloudresourcemanager.googleapis.com
Created my service account and added policy binding:
> gcloud iam service-accounts create cnrm-system
> gcloud iam service-accounts add-iam-policy-binding cnrm-system@test-connector.iam.gserviceaccount.com --member="serviceAccount:test-connector.svc.id.goog[cnrm-system/cnrm-controller-manager]" --role="roles/iam.workloadIdentityUser"
> kubectl wait -n cnrm-system --for=condition=Ready pod --all
Annotated my namespace:
> kubectl annotate namespace default cnrm.cloud.google.com/project-id=test-connector
And then tried to apply the Spanner YAML from the example:
~ >>> kubectl describe spannerinstance spannerinstance-sample
Name: spannerinstance-sample
Namespace: default
Labels: label-one=value-one
Annotations: cnrm.cloud.google.com/management-conflict-prevention-policy: resource
cnrm.cloud.google.com/project-id: test-connector
API Version: spanner.cnrm.cloud.google.com/v1beta1
Kind: SpannerInstance
Metadata:
Creation Timestamp: 2020-09-18T18:44:41Z
Generation: 2
Resource Version: 5805305
Self Link: /apis/spanner.cnrm.cloud.google.com/v1beta1/namespaces/default/spannerinstances/spannerinstance-sample
UID:
Spec:
Config: northamerica-northeast1-a
Display Name: Spanner Instance Sample
Num Nodes: 1
Status:
Conditions:
Last Transition Time: 2020-09-18T18:44:41Z
Message: Update call failed: error fetching live state: error reading underlying resource: Error when reading or editing SpannerInstance "test-connector/spannerinstance-sample": googleapi: Error 403: Request had insufficient authentication scopes.
Reason: UpdateFailed
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning UpdateFailed 6m41s spannerinstance-controller Update call failed: error fetching live state: error reading underlying resource: Error when reading or editing SpannerInstance "test-connector/spannerinstance-sample": googleapi: Error 403: Request had insufficient authentication scopes.
I'm not really sure what's going on here, because my cnrm service account has ownership of the project my cluster is in, and I have the APIs listed in the guide enabled.
The CC pods themselves appear to be healthy:
~ >>> kubectl wait -n cnrm-system --for=condition=Ready pod --all
pod/cnrm-controller-manager-0 condition met
pod/cnrm-deletiondefender-0 condition met
pod/cnrm-resource-stats-recorder-58cb6c9fc-lf9nt condition met
pod/cnrm-webhook-manager-7658bbb9-kxp4g condition met
Any insight in to this would be greatly appreciated!
Based on the error message you posted, I suspect it is an issue with your GKE scopes.
For GKE to access other GCP APIs, you must allow this access when creating the cluster. You can check the enabled scopes with the command:
gcloud container clusters describe <cluster-name> and look for oauthScopes in the result.
Here you can see the scope name for Cloud Spanner; you must enable the scope https://www.googleapis.com/auth/cloud-platform as the minimum permission.
To verify this in the GUI, you can see the permissions under: Kubernetes Engine > <Cluster-name> > expand the Permissions section and look for Cloud Platform.
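A minimal sketch of how to check and change the scopes from the CLI; the cluster name test-cluster and zone europe-west1-b are hypothetical, and note that node scopes can only be set when creating a cluster or node pool:
# show the OAuth scopes currently attached to the default node config
gcloud container clusters describe test-cluster --zone europe-west1-b --format="value(nodeConfig.oauthScopes)"
# create a new node pool with the broad cloud-platform scope and move the workload onto it
gcloud container node-pools create scoped-pool --cluster test-cluster --zone europe-west1-b --scopes=https://www.googleapis.com/auth/cloud-platform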
I'm a beginner in Kubernetes. I have been trying out kubeless on minikube, with both set up at the latest available version. When I deploy a function, this is the output I get:
INFO[0000] Deploying function...
INFO[0000] Function hello submitted for deployment
INFO[0000] Check the deployment status executing 'kubeless function ls hello'
When I run kubeless function ls, I get this:
NAME NAMESPACE HANDLER RUNTIME DEPENDENCIES STATUS
hello default example.hello python3.6 MISSING: Check controller logs
Every time I create a function it shows this MISSING: Check controller logs status. I also tried changing the runtime to python2.7, but it still doesn't work. The deploy command is the following:
kubeless function deploy hello --runtime python3.6 --from-file python-example/example.py --handler example.hello
Please guide me on how to fix this issue.
As documented on kubeless.io:
To debug "MISSING: Check controller logs" kind of issues it is necessary to check what is the error in the controller logs. To retrieve these logs execute:
$ kubectl logs -n kubeless -l kubeless=controller
There are cases in which the validations done in the CLI won't be enough to spot a problem in the given parameters. If that is the case the function Deployment will never appear.
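If the CLI validation passed, you can also check whether a Deployment for the function was created at all; the function name hello and the default namespace below are taken from the question:
kubectl get deployment hello -n default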
Hope that helps.
From the kubeless code, this status appears when kubeless cannot get the status of the k8s Deployment for the function:
status, err := getDeploymentStatus(cli, f.ObjectMeta.Name, f.ObjectMeta.Namespace)
if err != nil && k8sErrors.IsNotFound(err) {
    status = "MISSING: Check controller logs"
}
So there are some possible reasons, as follows:
1. There is a runtime issue with the function, for example a syntax or dependency issue that causes the pod to fail to run. Checking the pod logs can help figure this out (see the example at the end of this answer). This was the case for me, though I am not sure whether it was actually caused by the second reason, which would prevent kubeless from getting the failure message.
2. The kubeless version is not compatible with the k8s cluster version. As of k8s 1.16, the extensions/v1beta1 API version for Deployments is no longer served. However, earlier kubeless versions still use extensions/v1beta1 to get the status of the Deployment. You can check the api-resources of your k8s cluster:
$ kubectl api-resources | grep deployments
deployments deploy apps true Deployment
$ kubectl api-versions | grep apps
apps/v1
Check the following kubeless change list, which switches to the new apps/v1 endpoint:
Use new apps/v1 endpoint
func getDeploymentStatus(cli kubernetes.Interface, funcName, ns string) (string, error) {
- dpm, err := cli.ExtensionsV1beta1().Deployments(ns).Get(funcName, metav1.GetOptions{})
+ dpm, err := cli.AppsV1().Deployments(ns).Get(funcName, metav1.GetOptions{})
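For the first reason above, a quick way to pull the function pod's logs is by label selector; this assumes kubeless applies its default function=<name> label to the pods it creates:
kubectl get pods -l function=hello
kubectl logs -l function=hello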
First, get the name for the kubeless-controller pod:
kubectl -n kubeless get pods
You can then get the logs from the kubeless controller:
kubectl logs -n kubeless -c kubeless-function-controller kubeless-controller-manager-5dc8f64bb7-b9x4r
While trying to install "incubator/fluentd-cloudwatch" using Helm on Amazon EKS, and setting the user to root, I am getting the response below.
Command used:
helm install --name fluentd incubator/fluentd-cloudwatch --set awsRegion=eu-west-1,rbac.create=true --set extraVars[0]="{ name: FLUENT_UID, value: '0' }"
Error:
Error: YAML parse error on fluentd-cloudwatch/templates/daemonset.yaml: error converting YAML to JSON: yaml: line 38: did not find expected ',' or ']'
If we do not set the user to root, then by default fluentd runs as the "fluent" user and its log shows:
[error]: unexpected error error_class=Errno::EACCES error=#<Errno::EACCES: Permission denied @ rb_sysopen - /var/log/fluentd-containers.log.pos>
Based on this, it looks like it's just trying to convert eu-west-1,rbac.create=true into a single field, and the extra comma (,) there is causing it to fail.
And if you look at the values.yaml, you'll see the correct separate options are awsRegion and rbac.create, so --set awsRegion=eu-west-1 --set rbac.create=true should fix the first error.
With respect to the /var/log/... Permission denied error, you can see here that it is mounted as a hostPath, so if you do:
# (means read/write user/group/world)
$ sudo chmod 444 /var/log
on all your nodes, the error should go away. Note that you need to do it on all the nodes because your pod can land anywhere in your cluster.
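Putting that together, the install command from the question would look something like this, with the two options split and the extraVars value kept exactly as in the question:
helm install --name fluentd incubator/fluentd-cloudwatch --set awsRegion=eu-west-1 --set rbac.create=true --set extraVars[0]="{ name: FLUENT_UID, value: '0' }"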
Download and update values.yaml as below. The changes are in the awsRegion, rbac.create and extraVars fields.
annotations: {}
awsRegion: us-east-1
awsRole:
awsAccessKeyId:
awsSecretAccessKey:
logGroupName: kubernetes
rbac:
  ## If true, create and use RBAC resources
  create: true
  ## Ignored if rbac.create is true
  serviceAccountName: default
# Add extra environment variables if specified (must be specified as a single line object and be quoted)
extraVars:
  - "{ name: FLUENT_UID, value: '0' }"
Then run the command below to set up fluentd on the Kubernetes cluster to send logs to CloudWatch Logs.
$ helm install --name fluentd -f .\fluentd-cloudwatch-values.yaml incubator/fluentd-cloudwatch
I did this and it worked for me; logs were sent to CloudWatch Logs. Also make sure your EC2 nodes have an IAM role with the appropriate permissions for CloudWatch Logs.
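To confirm the pods are running after the install, a quick check (the grep pattern is just the release name used above):
kubectl get daemonset
kubectl get pods -o wide | grep fluentd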
I am new to Kubernetes and Docker. I am trying to chain two containers in a pod such that the second container does not come up until the first one is running. I searched and found a solution here. It says to add a "depends" field in the YAML file for the container that depends on another container. The following is a sample of my YAML file:
apiVersion: v1beta4
kind: Pod
metadata:
  name: test
  labels:
    apps: test
spec:
  containers:
  - name: container1
    image: <image-name>
    ports:
    - containerPort: 8080
      hostPort: 8080
  - name: container2
    image: <image-name>
    depends: ["container1"]
Kubernetes gives me the following error after running the above YAML file:
Error from server (BadRequest): error when creating "new.yaml": Pod in version "v1beta4" cannot be handled as a Pod: no kind "Pod" is registered for version "v1beta4"
Is the apiVersion the problem here? I even tried v1, apps/v1, and extensions/v1 but got the following errors (respectively):
error: error validating "new.yaml": error validating data: ValidationError(Pod.spec.containers[1]): unknown field "depends" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
error: unable to recognize "new.yaml": no matches for apps/, Kind=Pod
error: unable to recognize "new.yaml": no matches for extensions/, Kind=Pod
What am I doing wrong here?
As I understand it, there is no field called depends in the Pod specification.
You can verify and validate this with the following command:
kubectl explain pod.spec --recursive
Here is a link to help understand the structure of k8s resources:
kubectl-explain
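For example, filtering the recursive output for the field in question should return nothing, which confirms it does not exist:
kubectl explain pod.spec.containers --recursive | grep -i depends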
There is no property "depends" in the Container API object.
You can split your containers into two different pods and let the Kubernetes CLI wait for the first container to become available:
kubectl create -f container1.yaml --wait # run command until the pod is available.
kubectl create -f container2.yaml --wait
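If your kubectl version does not support a --wait flag on create, a similar effect can be achieved with the separate kubectl wait command; the pod name first-pod below is hypothetical, so use the name defined in container1.yaml:
kubectl create -f container1.yaml
kubectl wait --for=condition=Ready pod/first-pod --timeout=120s
kubectl create -f container2.yaml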
I'm a Kubernetes newbie trying to follow along with the Udacity tutorial class linked on the Kubernetes website.
I execute
kubectl create -f pods/secure-monolith.yaml
That is referencing this official yaml file: https://github.com/udacity/ud615/blob/master/kubernetes/pods/secure-monolith.yaml
I get this error:
error: error validating "pods/secure-monolith.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}; if you choose to ignore these errors, turn validation off with --validate=false
FYI, the official lesson link is here: https://classroom.udacity.com/courses/ud615/lessons/7824962412/concepts/81991020770923
My first guess is that the provided yaml is out of date and incompatible with the current Kubernetes. Is this right? How can I fix/update?
I ran into the exact same problem but with a much simpler example.
Here's my yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    ports:
    - containerPort: 80
The command kubectl create -f pod-nginx.yaml returns:
error: error validating "pod-nginx.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}; if you choose to ignore these errors, turn validation off with --validate=false
As the error says, I am able to override it but I am still at a loss as to the cause of the original issue.
Local versions:
Ubuntu 16.04
minikube version: v0.22.2
kubectl version: 1.8
Thanks in advance!
After correcting the kubectl version to match the server version, the issue is fixed, see:
$ kubectl create -f config.yml
configmap "test-cfg" created
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", ...
Server Version: version.Info{Major:"1", Minor:"7", ...
This was the case before the fix:
$ kubectl create -f config.yml
error: error validating "config.yml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"ConfigMap"}; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8",...
Server Version: version.Info{Major:"1", Minor:"7",...
In general, you should use the same version for kubectl and Kubernetes.
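A quick sketch of how to check for the mismatch and install a client that matches the server; the v1.7.0 version and the linux/amd64 platform below are examples, so substitute your own server version and platform:
# compare the Client Version and Server Version lines
kubectl version
# download a kubectl binary matching the server version and put it on the PATH
curl -LO https://dl.k8s.io/release/v1.7.0/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl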