keda func deploy from a dir which contains spaces is failing - visual-studio-code

I am using Visual Studio Code with the Azure Functions Core Tools to deploy a container to a K8S cluster which has KEDA installed, but I am seeing the Docker error below. The error occurs because the docker build is run without double quotes around the build path, so the path containing spaces is split into several arguments.
$ func kubernetes deploy --name bollaservicebusfunc --registry sbolladockerhub --python
Running 'docker build -t sbolladockerhub/bollaservicebusfunc C:\Users\20835918\work\welcome to space'....done
Error running docker build -t sbolladockerhub/bollaservicebusfunc C:\Users\20835918\work\welcome to space.
output:
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
(.venv)
20835918@CROC1LWPF1S99JJ MINGW64 ~/work/welcome to space (master)
I know there is a known bug, Spaces in directory,
but I am posting to see if there is a workaround. This is important, as I have everything in OneDrive - Company Name and it has spaces in it.

Looking into the code for func, you could specify --image-name instead of --registry, which seems to skip building the container.
You would have to build your Docker container manually, using the same command shown in the output, and then pass the value you gave to the -t argument of the docker command as the --image-name of the func command.
Also, since this would not push your Docker container either, make sure to push it to your registry before running the func command.
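Put together, the workaround could look roughly like the sketch below, run from the project directory (the quoted build path and image tag are taken from the output above; treat the exact flags as illustrative and adjust them for your setup):
# build and push the image manually, quoting the path that contains spaces
docker build -t sbolladockerhub/bollaservicebusfunc "C:\Users\20835918\work\welcome to space"
docker push sbolladockerhub/bollaservicebusfunc
# then let func skip the build and deploy the pre-built image
func kubernetes deploy --name bollaservicebusfunc --image-name sbolladockerhub/bollaservicebusfunc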

Related

Keep containers running after build

We are using the Docker Compose TeamCity build runner and would like the containers to continue running after the build.
I understand that the docker-compose build step itself follows a successful 'up' with a 'down', so I have attempted to bring them back up in a subsequent command line step with simply:
docker-compose up -d
I can see from the log that this is initially successful but when the build process exits, so do the containers. I have also tried:
nohup docker-compose up -d &
The outcome is the same.
How do we keep the containers running when the build has finished?
For info, both TeamCity and its Build Agent are running on the same Ubuntu box.
I have achieved this by NOT using the Docker Compose build runner. I now just have a single command line build step doing:
docker-compose down
docker-compose up -d
This works, and I feel rather silly ;-)
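For reference, the whole thing collapses into a single Command Line (custom script) build step; a minimal sketch, assuming docker-compose.yml sits in the checkout directory (the default working directory for a command line step):
docker-compose down    # stop and remove anything left over from the previous run
docker-compose up -d   # start the stack and leave it running after the build finishes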

Deploy a private image inside minikube on linux

I am starting to use kubernetes/Minikube to deploy my application which is currently running on docker containers.
Docker version:19.03.7
Minikube version: v1.25.2
From what I read I gather that first of all I need to build my frontend/backend images inside minikube.
The image is available on the server and I can see it using:
$ docker image ls
The first step, as far as I understand, is to use the "docker build" command:
$ docker build -t my-image .
However, the dot at the end, as I understand it, means it looks for a Dockerfile in the current directory, and indeed I get an error:
unable to evaluate symlinks in Dockerfile path: lstat
/home/dep/k8s-config/Dockerfile: no such file or directory
So, where do I get this dockerfile for the "docker build" to succeed?
Thanks
My misunderstanding...
I have the Dockerfile now, so I just need to put it somewhere and run docker build from that directory.
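For completeness, a minimal sketch of that flow with Minikube's built-in Docker daemon (the docker-env step is my assumption, so the built image is visible to the cluster without pushing it to a registry; my-image is the illustrative name from above):
# point the local docker CLI at Minikube's Docker daemon
eval $(minikube docker-env)
# build from the directory that contains the Dockerfile
docker build -t my-image .
# in the pod/deployment spec, reference my-image with imagePullPolicy: Never (or IfNotPresent)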

Start interactive shell into a sql server 2019 container running in an aks pod

I am using the mssql docker image (Linux) for sql server 2019. The default user is not root but mssql.
I need to perform some operations as root inside the container:
docker exec -it sql bash
mssql@7f5a78a63728:/$ sudo <command>
bash: sudo: command not found
Then I start the shell as root:
docker exec -it --user=root sql bash
root@7f5a78a63728:/# <command>
...
This works.
Now I need to do this in a container deployed in an AKS cluster
kubectl exec -it rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
mssql@rms-sql-1-sql-server-host:/$ sudo <command>
bash: sudo: command not found
as expected. But then:
kubectl exec -it --user=root rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
error: auth info "root" does not exist
So when the container is in an AKS cluster, starting a shell as root doesn't work.
I then try to ssh into the node and use docker from inside:
kubectl debug node/aks-agentpool-30797540-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Creating debugging pod node-debugger-aks-agentpool-30797540-vmss000000-xfrsq with container debugger on node aks-agentpool-30797540-vmss000000.
If you don't see a command prompt, try pressing enter.
root@aks-agentpool-30797540-vmss000000:/# docker ...
bash: docker: command not found
Looks like a Kubernetes cluster node doesn't have docker installed!
Any clues?
EDIT
The image I used locally and in Kubernetes is exactly the same: mcr.microsoft.com/mssql/server:2019-latest, untouched.
David Maze put it well in a comment:
Any change you make in this environment will be lost as soon as the Kubernetes pod is deleted, including if you need to update the underlying image or if its node goes away outside of your control. Would building a custom image with your changes be a more maintainable solution?
Generally, if you want to change something permanently, you have to create a new image. Everything you described behaved exactly as it was supposed to. First you exec'd into the container in Docker and then logged in as root. In Kubernetes, however, it is a completely different container instance, possibly even running from a different image. Second, even if you made a change, it would only exist until the container dies. If you want to modify something permanently, you have to create your own image with all the components and the configuration you need. For more information, look at the pod lifecycle documentation.
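A sketch of what "create your new image" could look like here (the package and registry names are placeholders, in the spirit of the <command> placeholders above; the base image is the one mentioned in the question):
# hypothetical Dockerfile for a derived image that bakes the root-level change in
cat > Dockerfile <<'EOF'
FROM mcr.microsoft.com/mssql/server:2019-latest
USER root
RUN apt-get update && apt-get install -y <your-package> && rm -rf /var/lib/apt/lists/*
USER mssql
EOF
docker build -t <your-registry>/mssql-custom:2019 .
docker push <your-registry>/mssql-custom:2019
# then point the AKS deployment at <your-registry>/mssql-custom:2019 instead of the stock image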

Docker-stack - get image always from the hub.docker.com

Summary: when I perform a 'docker stack deploy' in an AWS/EC2 environment, the local (old) image is used. How can I overrule this behaviour so that the 'docker stack' uses the new image from hub.docker.com? As a workaround I first do a 'docker pull' of the image from index.docker.io before executing the 'docker stack deploy'. Is this extra step really needed?
Situation:
On a Jenkins server (not on AWS / EC2) I have the following building steps:
Maven build
docker login -u ${env.DOCKER_USERNAME} -p ${env.DOCKER_PASSWORD}
docker build -t local-username/image-name:latest .
docker tag local-username/image-name dockerhub-username/image-name:latest
docker push dockerhub-username/image-name:latest
The next steps in my Jenkinsfile are executed via a secure shell (ssh) on my AWS environment:
docker stack deploy -c docker-compose.yml stackname
When I execute this Jenkins job, the docker image is taken from the local image repo on AWS. I want to use the newest image put on hub.docker.com.
When I insert the following action BEFORE the 'docker stack deploy' everything works smoothly:
docker pull index.docker.io/dockerhub-username/image-name:latest
My questions:
Why do I need this extra 'docker pull' action?
How can I remove this action? Just by adding 'index.docker.io' in front of the image in the docker-compose.yml file? Or is there a better approach?
The extra docker pull should, of course, not be needed.
What will help you?
The answer of @Tarun may work.
Or just name the Docker Hub registry explicitly. Use the following lines in your docker-compose.yml (stack) file:
servicename:
  image: index.docker.io/dockerhub-username/image-name
That will help you.
Maybe this is due to the fact that you build locally (or on a separate Jenkins server), push the image to Docker Hub, and then deploy from a remote shell on EC2. On that EC2 instance there is only the locally cached (old) image.
I tried the above solution for you, and it worked:
you can then just execute 'docker stack deploy' and the right image is used.
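Depending on the Docker version on the EC2 host, the deploy command itself can also be told to resolve the image from the registry; a hedged sketch of the deploy step (check docker stack deploy --help on your host to confirm these flags are available):
# forward your registry credentials to the swarm nodes and force image resolution at deploy time
docker stack deploy --with-registry-auth --resolve-image always -c docker-compose.yml stackname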

aspnetcore:2.0 based image won't run on AKS nodes?

I have an asp.net core 2.0 application whose docker image runs fine locally, but when that same image is deployed to an AKS cluster, the pods have a status of CrashLoopBackOff and the pod log shows:
Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409.
And since you can't SSH into AKS clusters, it's pretty difficult to figure this out.
Dockerfile:
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY . .
EXPOSE 80
ENTRYPOINT ["dotnet", "myapi.dll"]
It turned out that our build system wasn't putting the app code into the container as we thought. Since the container wasn't runnable, I didn't know how to inspect its contents until I found this command, which is a lifesaver for these kinds of situations:
docker run --rm -it --entrypoint=/bin/bash [image_id]
... at which point you can freely inspect/verify the contents of the container.
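For example, inside the shell that the command above starts, you can quickly confirm whether the published output actually made it into the image (the listing is illustrative):
ls -la /app   # should contain myapi.dll (the ENTRYPOINT target) and its dependencies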
I just ran into the same issue, and it's because I was missing a key piece of the puzzle.
docker-compose -f docker-compose.ci.build.yml run ci-build
VS2017 Docker Tools will create that docker-compose.ci.build.yml file. After that command is run, the publish folder is populated, and docker build -t <tag> . will build a populated image (without an empty /app folder).
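So the full CI sequence ends up looking roughly like this (the registry and tag names are placeholders; the compose file is the one generated by the VS2017 Docker Tools as described above):
docker-compose -f docker-compose.ci.build.yml run ci-build   # builds and publishes the app into the publish folder
docker build -t <your-registry>/myapi:latest .               # /app in the image now contains the published output
docker push <your-registry>/myapi:latest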