While I am configuring the Calico API, my config looks like this:
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  etcdEndpoints: https:....
  etcdKeyFile: /admin-key.pem
  etcdCertFile: admin.pem
  etcdCACertFile:
Below is the error I am getting. What are the admin-key.pem and admin.pem files here?
C:\Program Files (x86)\Google\Cloud SDK>calicoctl-windows-amd64.exe get nodes --config=calicoctl.cfg.txt
Failed to create Calico API client: open C:\Users\Admin.bluemix\plugins\container-service\clusters\VoiceGW_Cluser/admin.pem: The system cannot find the file specified.
I'm trying to deploy a Flink stream processor to a Kubernetes cluster with the help of the official Flink Kubernetes operator.
The Flink app also uses Minio as its state backend. Everything worked fine until I tried to provide the credentials from Hashicorp Vault in the following way:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-app
  namespace: default
spec:
  serviceAccount: sa-example
  podTemplate:
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-template
    spec:
      serviceAccountName: default:sa-example
      containers:
        - name: flink-main-container
          # ....
  flinkVersion: v1_14
  flinkConfiguration:
    presto.s3.endpoint: https://s3-example-api.dev.net
    high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
    high-availability.storageDir: s3p://example-flink/example-1/high-availability/
    high-availability.cluster-id: example-1
    high-availability.namespace: example
    high-availability.service-account: default:sa-example
    # presto.s3.access-key: *
    # presto.s3.secret-key: *
    presto.s3.path-style-access: "true"
    web.upload.dir: /opt/flink
  jobManager:
    podTemplate:
      apiVersion: v1
      kind: Pod
      metadata:
        name: job-manager-pod-template
        annotations:
          vault.hashicorp.com/namespace: "/example/dev"
          vault.hashicorp.com/agent-inject: "true"
          vault.hashicorp.com/agent-init-first: "true"
          vault.hashicorp.com/agent-inject-secret-appsecrets.yaml: "example/Minio"
          vault.hashicorp.com/role: "example-serviceaccount"
          vault.hashicorp.com/auth-path: auth/example
          vault.hashicorp.com/agent-inject-template-appsecrets.yaml: |
            {{- with secret "example/Minio" -}}
            presto.s3.access-key: {{.Data.data.accessKey}}
            presto.s3.secret-key: {{.Data.data.secretKey}}
            {{- end }}
When I comment out the presto.s3.access-key and presto.s3.secret-key config values in flinkConfiguration, replace them with the Hashicorp Vault annotations listed above, and try to provide them programmatically at runtime:
val configuration: Configuration = getSecretsFromFile("/vault/secrets/appsecrets.yaml")
val env = org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getExecutionEnvironment(configuration)
I receive the following error message:
java.io.IOException: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), WebIdentityTokenCredentialsProvider: You must specify a value for roleArn and roleSessionName, com.amazonaws.auth.profile.ProfileCredentialsProvider@5331f738: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@bc0353f: Failed to connect to service endpoint: ]
at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3OutputStream.uploadObject(PrestoS3FileSystem.java:1278) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3OutputStream.close(PrestoS3FileSystem.java:1226) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.flink.fs.s3presto.common.HadoopDataOutputStream.close(HadoopDataOutputStream.java:52) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:80) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:72) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobUtils.moveTempFileToStore(BlobUtils.java:385) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobServer.moveTempFileToStore(BlobServer.java:680) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobServerConnection.put(BlobServerConnection.java:350) [flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:110) [flink-dist_2.12-1.14.2.jar:1.14.2]
I initially also tried to append the secrets to flink-config.yaml in the docker-entrypoint.sh based on this documentation - Configure Access Credentials:
if [ -f '/vault/secrets/appsecrets.yaml' ]; then
  (echo && cat '/vault/secrets/appsecrets.yaml') >> $FLINK_HOME/conf/flink-conf.yaml
fi
The question is how to provide the S3 credentials at runtime, since the Flink operator mounts the Flink configuration from a ConfigMap and appending to it fails with flink-conf.yaml: Read-only file system.
Thank you
There is no support for this in the Kubernetes operator. In fact, this is not a limitation of the Flink Kubernetes operator itself; it comes from the lack of support in Flink's native Kubernetes integration. There is a separate ticket for this on the Kubernetes operator side: FLINK-27491.
As a workaround, you can set up an init container and update the ConfigMap from that init container using the Kubernetes API, after reading the secrets from Vault. The updated ConfigMap will then have the secrets filled in by the init container, and they will be visible to the job manager and all of its task managers. The Flink cluster only starts after the init container has updated the ConfigMap, so the values will be visible to the whole cluster.
A simple example of updating a ConfigMap from an init container can be found here. In that example, the ConfigMap is updated with a plain curl command; in theory, any lightweight client could be used to update it in the same way.
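As an illustration only (this is not the linked example), here is a minimal sketch of such an init container for the job manager pod template. The ConfigMap name flink-config-example-1, the namespace, and the image are assumptions; the pod's service account needs RBAC permission to get and patch ConfigMaps; the Vault-injected /vault/secrets/appsecrets.yaml has to be mounted into this container as well; and you should verify how the operator actually lays out the ConfigMap data before relying on this.

initContainers:
  - name: flink-conf-patcher        # hypothetical name
    image: alpine:3.19              # any small image you can install curl + jq into
    command:
      - sh
      - -c
      - |
        apk add --no-cache curl jq
        TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
        # ConfigMap URL is an assumption; adjust namespace and name to your cluster.
        CM=https://kubernetes.default.svc/api/v1/namespaces/default/configmaps/flink-config-example-1
        # Read the current flink-conf.yaml, append the Vault-injected entries,
        # and PATCH the ConfigMap back before the Flink containers start.
        CONF=$(curl -sk -H "Authorization: Bearer $TOKEN" "$CM" | jq -r '.data["flink-conf.yaml"]')
        NEW=$(printf '%s\n%s\n' "$CONF" "$(cat /vault/secrets/appsecrets.yaml)")
        BODY=$(jq -n --arg c "$NEW" '{data: {"flink-conf.yaml": $c}}')
        curl -sk -X PATCH \
          -H "Authorization: Bearer $TOKEN" \
          -H "Content-Type: application/merge-patch+json" \
          --data "$BODY" "$CM"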
As a side note: if possible, I would suggest using an AWS IAM role rather than plain IAM secrets, as a role is more secure than static credentials.
I'm trying to set up HPA v1 for my pods using Pulumi:
new HorizontalPodAutoscaler(
  `${prefixedHPAName}`,
  {
    apiVersion: 'autoscaling/v1',
    kind: 'HorizontalPodAutoscaler',
    metadata: {
      name: 'worker',
      clusterName: 'redacted',
      namespace: namespaceName,
    },
    spec: {
      maxReplicas: 3,
      minReplicas: 1,
      scaleTargetRef: {
        apiVersion: 'apps/v1',
        kind: 'Deployment',
        name: 'worker',
      },
      targetCPUUtilizationPercentage: 50,
    },
  }
);
but I keep getting the following error
kubernetes:autoscaling/v1:HorizontalPodAutoscaler (tushar-routex-routex-hpa):
error: configured Kubernetes cluster is unreachable: unable to load Kubernetes client configuration from kubeconfig file: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Are there any examples of how to set up HPA for my pods via Pulumi?
The problem is not related to HPA or Pulumi; it's a common problem with kubectl/Kubernetes clients, where the client cannot find the kubeconfig file to connect to the Kubernetes API.
Check whether you have a config file at ~/.kube/config, then try exporting the path before using Pulumi:
export KUBECONFIG=~/.kube/config
If you don't have a file, you can check the documentation of your cloud provider to create it.
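If the kubeconfig lives somewhere non-standard, another option is to hand Pulumi an explicit Kubernetes provider instead of relying on the ambient environment. A minimal sketch (the provider name, the file path, and the reuse of namespaceName from the question are assumptions):

import * as fs from "fs";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical explicit provider: read the kubeconfig contents yourself
// instead of relying on KUBECONFIG / ~/.kube/config being picked up.
const provider = new k8s.Provider("explicit-k8s", {
  kubeconfig: fs.readFileSync(`${process.env.HOME}/.kube/config`, "utf8"),
});

new k8s.autoscaling.v1.HorizontalPodAutoscaler(
  "worker-hpa",
  {
    metadata: { name: "worker", namespace: namespaceName },
    spec: {
      maxReplicas: 3,
      minReplicas: 1,
      scaleTargetRef: { apiVersion: "apps/v1", kind: "Deployment", name: "worker" },
      targetCPUUtilizationPercentage: 50,
    },
  },
  { provider } // attach the resource to the explicit provider
);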
I am using minikube (docker driver) with kubectl to test an agones fleet deployment. Upon running kubectl apply -f lobby-fleet.yml (and when I try to apply any other agones yaml file) I receive the following error:
error: resource mapping not found for name: "lobby" namespace: "" from "lobby-fleet.yml": no matches for kind "Fleet" in version "agones.dev/v1"
ensure CRDs are installed first
lobby-fleet.yml:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: lobby
spec:
replicas: 2
scheduling: Packed
template:
metadata:
labels:
mode: lobby
spec:
ports:
- name: default
portPolicy: Dynamic
containerPort: 7600
container: lobby
template:
spec:
containers:
- name: lobby
image: gcr.io/agones-images/simple-game-server:0.12 # Modify to correct image
I am running this on WSL2, but receive the same error when using the windows installation of kubectl (through choco). I have minikube installed and running for ubuntu in WSL2 using docker.
I am still new to using k8s, so apologies if the answer to this question is clear, I just couldn't find it elsewhere.
Thanks in advance!
In order to create a resource of kind Fleet, you first have to apply the Custom Resource Definition (CRD) that defines what a Fleet is.
I've looked into the YAML installation instructions of Agones, and the install manifest contains the CRDs; you can find them by searching for kind: CustomResourceDefinition.
I recommend first installing Agones according to the instructions in the docs, roughly along the lines of the sketch below.
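For reference, the YAML install looks roughly like this (the release version in the URL is an assumption; take the exact command and version from the Agones install docs):

kubectl create namespace agones-system
kubectl apply --server-side -f https://raw.githubusercontent.com/googleforgames/agones/release-1.31.0/install/yaml/install.yaml

# Once the Fleet/GameServer CRDs are registered, the original manifest should apply cleanly:
kubectl apply -f lobby-fleet.yml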
The microk8s document "Working with a private registry" leaves me unsure what to do. The "Secure registry" portion says Kubernetes does it one way (without indicating whether or not Kubernetes' way applies to microk8s), and microk8s uses containerd inside its implementation.
My YAML file contains a reference to a private container on dockerhub.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: blaw
    spec:
      containers:
        - image: johngrabner/py_blaw_service:v0.3.10
          name: py-transcribe-service
When I microk8s kubectl apply this file and do a microk8s kubectl describe, I get:
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "johngrabner/py_blaw_service:v0.3.10": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/johngrabner/py_blaw_service:v0.3.10": failed to resolve reference "docker.io/johngrabner/py_blaw_service:v0.3.10": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I have verified that I can download this image from a console using a docker pull command.
Pods using public containers work fine in microk8s.
The file /var/snap/microk8s/current/args/containerd-template.toml already contains something to make dockerhub work since public containers work. Within this file, I found
# 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
[plugins."io.containerd.grpc.v1.cri".registry]
# 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io", ]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
endpoint = ["http://localhost:32000"]
The above does not appear related to authentication.
On the internet, I found instructions to create a secret to store credentials, but this does not work either.
microk8s kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/john/.docker/config.json --type=kubernetes.io/dockerconfigjson
While you have created the secret, you then have to set up your deployment/pod to use that secret in order to pull the image. This can be achieved with imagePullSecrets, as described in the microk8s document you mentioned.
Since you already created your secret, you just have to reference it in your deployment:
...
    spec:
      containers:
        - image: johngrabner/py_blaw_service:v0.3.10
          name: py-transcribe-service
      imagePullSecrets:
        - name: regcred
...
For more reading check how to Pull an Image from a Private Registry.
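For completeness, the same kind of secret can also be created with explicit registry fields instead of --from-file (the values below are placeholders):

microk8s kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<dockerhub-username> \
  --docker-password=<dockerhub-password-or-token> \
  --docker-email=<email>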
We have tried to set up a HiveMQ manifest file. We have the HiveMQ Docker image in our private repository.
Step 1: I have logged into the private repository like this:
docker login "private repo name"
It was successful.
After that, I tried to create a manifest file for it, like below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hivemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: hivemq1
    spec:
      containers:
        - env:
            # xxxxx some environment values I have passed
          name: hivemq
          image: privatereponame:portnumber/directoryname/hivemq:
          ports:
            - containerPort: 1883
It creates successfully, but I am getting the issues below. Could anyone please help solve this issue?
hivemq-4236597916-mkxr4 0/1 ImagePullBackOff 0 1h
Logs:
Error from server (BadRequest): container "hivemq16" in pod "hivemq16-1341290525-qtkhb" is waiting to start: InvalidImageName
Sometimes I am getting this kind of issue instead:
Error from server (BadRequest): container "hivemq" in pod "hivemq-4236597916-mkxr4" is waiting to start: trying and failing to pull image
In order to use a private Docker registry with Kubernetes, it's not enough to docker login.
You need to add a Kubernetes docker-registry Secret with your credentials, as described here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/. That article also shows the imagePullSecrets setting you have to add to your YAML deployment file, referencing that secret.
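As a sketch only (the secret name regcred is a placeholder, and the image tag has to be filled in, since a trailing ':' with an empty tag is itself rejected as InvalidImageName), the pod spec in the deployment would then look roughly like:

    spec:
      imagePullSecrets:
        - name: regcred   # docker-registry secret created per the linked docs
      containers:
        - name: hivemq
          image: privatereponame:portnumber/directoryname/hivemq:<tag>   # non-empty tag required
          ports:
            - containerPort: 1883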
I just fixed this on my machine: kubectl v1.9.0 failed to create the secret properly. Upgrading to v1.9.1, deleting the secret, and recreating it resolved the issue for me. https://github.com/kubernetes/kubernetes/issues/57427