Failed to start container with specified command in json - kubernetes

I tried multiple ways to specify the docker run command, but all of them end up with errors.
When I run kubectl get pods, it returns the error below in the pod status.
rpc error: code = 2 desc = failed to start container "3329716cb47a0f795b2372dd630ca1017b0bad8bf4ab0e05490d1ac5eb28ca1b": Error response from daemon: {"message":"container 3329716cb47a0f795b2372dd630ca1017b0bad8bf4ab0e05490d1ac5eb28ca1b encountered an error during CreateProcess: failure in a Windows system call: The system cannot find the file specified. (0x2) extra info: {\"ApplicationName\":\"\",\"CommandLine\":\"\\\"powershell.exe -command\\\" \\\"docker run -e VSTS_ACCOUNT=apidrop -e VSTS_TOKEN=5zcp7yf5h2dofz642eykiwpo6lj6kniu4jkmgxljipocab4vc2wa -e VSTS_POOL=apexpool --name winvs2017_vstsagent raychen320/buildagent:1.0\\\"\",\"User\":\"\",\"WorkingDirectory\":\"C:\\\\BuildAgent\",\"Environment\":{\"DOTNET_DOWNLOAD_URL\":\"https://dotnetcli.blob.core.windows.net/dotnet/preview/Binaries/1.0.4/dotnet-win-x64.1.0.4.zip\",\"DOTNET_SDK_DOWNLOAD_URL\":\"https://dotnetcli.blob.core.windows.net/dotnet/Sdk/1.0.1/dotnet-dev-win-x64.1.0.1.zip\",\"DOTNET_SDK_VERSION\":\"1.0.1\",\"DOTNET_VERSION\":\"1.0.4\",\"KUBERNETES_PORT\":\"tcp://10.0.0.1:443\",\"KUBERNETES_PORT_443_TCP\":\"tcp://10.0.0.1:443\",\"KUBERNETES_PORT_443_TCP_ADDR\":\"10.0.0.1\",\"KUBERNETES_PORT_443_TCP_PORT\":\"443\",\"KUBERNETES_PORT_443_TCP_PROTO\":\"tcp\",\"KUBERNETES_SERVICE_HOST\":\"10.0.0.1\",\"KUBERNETES_SERVICE_PORT\":\"443\",\"KUBERNETES_SERVICE_PORT_HTTPS\":\"443\",\"NUGET_XMLDOC_MODE\":\"skip\",\"chocolateyUseWindowsCompression\":\"false\"},\"EmulateConsole\":false,\"CreateStdInPipe\":true,\"CreateStdOutPipe\":true,\"CreateStdErrPipe\":true,\"ConsoleSize\":[0,0]}"} 0 26s
The JSON file I use with the kubectl apply command to create the pod is:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "buildagent",
    "labels": {
      "name": "buildagent"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "buildagent",
        "image": "raychen320/buildagent:1.0",
        "command": [
          "powershell.exe -command",
          "docker run -e VSTS_ACCOUNT=apitest -e VSTS_TOKEN=tgctxxx -e VSTS_POOL=testpool --name myagent raychen320/buildagent:1.0"
        ],
        "ports": [
          {
            "containerPort": 80
          }
        ]
      }
    ],
    "nodeSelector": {
      "beta.kubernetes.io/os": "windows"
    }
  }
}
What should I set in "command"?

You are trying to launch an executable literally named "powershell.exe -command", which obviously does not exist. Each entry of "command" is a single argument, so what you should launch is more like:
command: ["powershell.exe"]
args:
- "-command"
- "docker run..."

Docker can't find powershell.exe in your system PATH.
Maybe you should use the full path of powershell.exe in "command".
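For example (a sketch, assuming powershell.exe lives in the default Windows location inside the image; note the doubled backslashes required in JSON strings):
"command": [
  "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
],
"args": [
  "-command",
  "docker run ..."
]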

Related

Kubeedge on Katacoda - no matches for kind "Node" in version "v1"

I am following the Kubeedge v1.0.0 deployment on Katacoda and am executing the following command:
kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json -s <kubedge-node-ip-address>:8080
It gives me an error
error: unable to recognize "/root/kubeedge/src/github.com/kubeedge/kubeedge/build/node.json": no matches for kind "Node" in version "v1"
I tried searching for this error but found no relevant answers. Does anyone have an idea how to get past this?
Below is the content of my node.json file
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "edge-node-1",
    "labels": {
      "name": "edge-node",
      "node-role.kubernetes.io/edge": ""
    }
  }
}
I have reproduced it in Katacoda and in my case it works perfectly. I recommend going through the tutorial once again, taking each step carefully.
You need to pay attention to step 7 and change metadata.name to the name of the edge node:
vim $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "edge-node",
    "labels": {
      "name": "edge-node",
      "node-role.kubernetes.io/edge": ""
    }
  }
}
Then execute the following command, changing the IP address to your own:
kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json -s <kubedge-node-ip-address>:8080
Another command to check whether the correct API version was used is:
kubectl explain node -s <kubedge-node-ip-address>:8080
After the node is created successfully you should see:
node/edge-node created
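As an extra check (a sketch, assuming the same API server address as above), you can list the registered nodes and confirm the edge node shows up:
kubectl get nodes -s <kubedge-node-ip-address>:8080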

Telepresence fails, saying my namespace doesn't exist, pointing to problems with my k8s context

I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to examine information. I don't actually call kubectl directly, I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to specify different clusters in a different way. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (and one other flag to disable tls verify).
This all works fine. I have no trouble with this.
However, I'm now trying to use telepresence, which doesn't give me a choice (yet) of not using contexts to configure this. So, I now have to figure out how to use contexts.
I ran the following (approximate) command:
kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true --token=mytoken
And it said: Context "mycontext" modified.
I then ran "kubectl config view -o json" and got this:
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [],
  "users": [],
  "contexts": [
    {
      "name": "mycontext",
      "context": {
        "cluster": "",
        "user": "",
        "namespace": "abc-def-ghi"
      }
    }
  ],
  "current-context": "mycontext"
}
That doesn't look right to me.
I then ran something like this:
telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n
And it said this:
T: Error: Namespace 'abc-def-ghi' does not exist
Update:
And I can confirm that this isn't a problem with telepresence. If I just run "kubectl get pods", it fails, saying "The connection to the server localhost:8080 was refused". That tells me it obviously can't connect to the k8s server. The key is my "set-context" command. It's obviously not working, and I don't understand what I'm missing.
You don't have any clusters or credentials defined in your configuration. First, you need to define a cluster:
$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
Then something like this for the user:
$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile
Then you define your context based on your cluster, user and namespace:
$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
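You will also want to make that context the current one, since telepresence and plain kubectl pick it up from current-context:
$ kubectl config use-context dev-frontend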
More information is in the Kubernetes documentation on configuring access to multiple clusters.
Your config should look something like this:
$ kubectl config view -o json
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "development",
      "cluster": {
        "server": "https://1.2.3.4",
        "certificate-authority-data": "DATA+OMITTED"
      }
    }
  ],
  "users": [
    {
      "name": "developer",
      "user": {
        "client-certificate": "fake-cert-file",
        "client-key": "fake-key-seefile"
      }
    }
  ],
  "contexts": [
    {
      "name": "dev-frontend",
      "context": {
        "cluster": "development",
        "user": "developer",
        "namespace": "frontend"
      }
    }
  ],
  "current-context": "dev-frontend"
}
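Since you were authenticating with a token rather than client certificates, the credentials and cluster steps can use the flags you already had (a sketch reusing your placeholder values):
$ kubectl config set-credentials developer --token=mytoken
$ kubectl config set-cluster development --server=https://host:port --insecure-skip-tls-verify=true
$ kubectl config set-context mycontext --cluster=development --user=developer --namespace=abc-def-ghi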

kubectl run with env var from a secret config

How can I issue a kubectl run command that pulls an environment var from a k8s secret?
Currently I have:
kubectl run oneoff -i --rm NAME --image=IMAGE --env SECRET=foo
Look into the --overrides flag of the run command... it reads as:
An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
So in your case I guess it would be something like:
kubectl run oneoff -i --rm --overrides='
{
  "apiVersion": "v1",
  "spec": {
    "containers": [
      {
        "name": "oneoff",
        "image": "IMAGE",
        "env": [
          {
            "name": "ENV_NAME",
            "valueFrom": {
              "secretKeyRef": {
                "name": "SECRET_NAME",
                "key": "SECRET_KEY"
              }
            }
          }
        ]
      }
    ]
  }
}
' --image=IMAGE
This is another one that does the trick:
kubectl run oneoff -i --rm NAME --image=IMAGE --env SECRET=$(kubectl get secret your-secret -o=jsonpath="{.server['secret\.yml']}")
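For completeness, a secret matching the secretKeyRef placeholders in the override above could be created like this (a sketch; SECRET_NAME and SECRET_KEY are the placeholders from the example):
kubectl create secret generic SECRET_NAME --from-literal=SECRET_KEY=some-value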

kubernetes - volume mapping via command

I need to map a volume while starting the container. I am able to do so with a yaml file.
Is there a way volume mapping can be done via the command line without using a yaml file, just like the -v option in docker?
Regarding "without using yaml file": technically yes, but you would need inline JSON instead, as illustrated in "Create kubernetes pod with volume using kubectl run".
See kubectl run:
kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": [
              "bash"
            ],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [{
              "mountPath": "/home/store",
              "name": "store"
            }]
          }
        ],
        "volumes": [{
          "name": "store",
          "emptyDir": {}
        }]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash
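Once the shell starts, you can verify that the volume was actually mounted with a quick check inside the container:
mount | grep /home/store
ls -ld /home/store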

Create kubernetes pod with volume using kubectl run

I understand that you can create a pod with Deployment/Job using kubectl run. But is it possible to create one with a volume attached to it? I tried running this command:
kubectl run -i --rm --tty ubuntu --overrides='{ "apiVersion":"batch/v1", "spec": {"containers": {"image": "ubuntu:14.04", "volumeMounts": {"mountPath": "/home/store", "name":"store"}}, "volumes":{"name":"store", "emptyDir":{}}}}' --image=ubuntu:14.04 --restart=Never -- bash
But the volume does not appear in the interactive bash.
Is there a better way to create a pod with a volume that you can then attach to?
Your JSON override is specified incorrectly. Unfortunately kubectl run just ignores fields it doesn't understand.
kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": [
              "bash"
            ],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [{
              "mountPath": "/home/store",
              "name": "store"
            }]
          }
        ],
        "volumes": [{
          "name": "store",
          "emptyDir": {}
        }]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash
To debug this issue I ran the command you specified, and then in another terminal ran:
kubectl get job ubuntu -o json
From there you can see that the actual job structure differs from your json override (you were missing the nested template/spec, and volumes, volumeMounts, and containers need to be arrays).
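A quicker way to spot-check just the fields in question is jsonpath (a sketch; the paths assume the Job object generated above):
kubectl get job ubuntu -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}'
kubectl get job ubuntu -o jsonpath='{.spec.template.spec.volumes}'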