Invalid compose name even though docker-compose.yaml exists - docker-compose

I'm trying to deploy my Rasa chatbot with Okteto via Docker, so I have a "Dockerfile", a "docker-compose.yaml", and an "okteto.yml". For the past few weeks the code worked fine. Today it won't work anymore because Okteto gives the error: Invalid compose name: must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character.
I really don't understand what I should change. Thanks.
docker-compose.yaml:
version: '3.4'
services:
  rasa-server:
    image: rasa-bot:latest
    working_dir: /app
    build: "./"
    restart: always
    volumes:
      - ./actions:/app/actions
      - ./data:/app/data
    command: bash -c "rm -rf .rasa/* && rasa train && rasa run --enable-api --cors \"*\" -p 5006"
    ports:
      - '5006:5006'
    networks:
      - all
  rasa-actions-server:
    image: rasa-bot:latest
    working_dir: /app
    build: "./"
    restart: always
    volumes:
      - ./actions:/app/actions
    command: bash -c "rasa run actions"
    ports:
      - '5055:5055'
    networks:
      - all
networks:
  all:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "true"
Dockerfile:
FROM python:3.7.13 AS BASE
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["./bot.py"]
RUN pip install --no-cache-dir --upgrade pip
RUN pip install rasa==3.3.0
ADD config.yml config.yaml
ADD domain.yml domain.yaml
ADD credentials.yml credentials.yaml
ADD endpoints.yml endpoints.yaml
okteto.yml:
name: stubu4ewi
autocreate: true
image: okteto.dev/rasa-bot:latest
command: bash
volumes:
  - /root/.cache/pip
sync:
  - .:/app
forward:
  - 5006:5006
reverse:
  - 9000:9000
Error
Found okteto manifest on /okteto/src/okteto.yml
Unmarshalling manifest...
Okteto manifest unmarshalled successfully
Found okteto compose manifest on docker-compose.yaml
Unmarshalling compose...
x Invalid compose name: must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character
exit status 1
I don't have any clue what went wrong. It worked fine until yesterday, and even though nothing changed, Okteto gives this error.
I tried renaming docker-compose.yaml to docker-compose.yml and okteto-compose.yml.

That error is not about the file name itself but about the names of the services defined inside your docker-compose.yaml file.
What command did you run, and what version of the okteto cli are you using? okteto version will give it to you.

If you ever face this problem: rename your repository so that it consists only of lower case alphanumeric characters or '-', and starts and ends with an alphanumeric character.
It seems Okteto uses the repository name to build the images.
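The rule in the error message is the familiar DNS-label pattern, so a candidate repository or service name can be checked locally before pushing; a minimal sketch (the helper name is made up):

```shell
# is_valid_name: check a string against the rule from the error message:
# lower case alphanumerics or '-', starting and ending with an alphanumeric
is_valid_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

is_valid_name "stubu4ewi" && echo "stubu4ewi is valid"
is_valid_name "My_Repo" || echo "My_Repo is invalid (uppercase and underscore)"
```

A repo named e.g. "My_Rasa_Bot" fails this check, which matches the fix of renaming the repository.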

Related

Diesel doesn't run migrations: relation does not exist

I'm developing my first project with Rust + Diesel and I have a problem: Diesel doesn't run the migrations, failing with the error "relation does not exist", although the relations exist.
My code:
main.rs:
pub fn establish_connection() -> PgConnection {
    dotenv().ok();
    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    PgConnection::establish(&database_url)
        .unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
}

embed_migrations!("./migrations");

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    let connect = database::establish_connection();
    match embedded_migrations::run_with_output(&connect, &mut std::io::stdout()) {
        Ok(()) => println!("migrations success"),
        Err(e) => panic!("migrations error: {}", e),
    }
    Ok(())
}
Dockerfile:
FROM rust:1.61 as builder
WORKDIR /build/
COPY ./Cargo.toml .
COPY ./Cargo.lock .
COPY ./src ./src
COPY ./migrations ./migrations
RUN cargo build --release
FROM ubuntu:22.04 AS run
EXPOSE 8080
WORKDIR /run/
COPY --from=builder /build/target/release/to_do .
COPY ./dist ./dist
COPY ./docker-entrypoint.sh .
RUN apt-get update -y
RUN apt-get install -y libpq-dev
# docker-entrypoint.sh - wait 5432 port aviable & run to_do app
CMD ["sh", "docker-entrypoint.sh"]
docker-compose.yml:
version: "3.8"
services:
  postgres_db:
    image: postgres:14.3
    container_name: to_do_postgres
    restart: always
    ports:
      - ${POSTGRES_DB_PORT}:5432
    environment:
      - POSTGRES_USER=${POSTGRES_DB_USER}
      - POSTGRES_DB=${POSTGRES_DB_NAME}
      - POSTGRES_PASSWORD=${POSTGRES_DB_PASSWORD}
  app:
    image: to_do_build
    container_name: to_do_app
    restart: always
    ports:
      - ${APP_PORT}:8080
    depends_on:
      - postgres_db
    environment:
      - DATABASE_URL=postgres://${POSTGRES_DB_USER}:${POSTGRES_DB_PASSWORD}@postgres_db:${POSTGRES_DB_PORT}/${POSTGRES_DB_NAME}
      - JWT_SECRET=${JWT_SECRET}
I run the service with the command:
docker-compose --env-file .conf up -d
.conf:
POSTGRES_DB_USER=to_do_user
POSTGRES_DB_NAME=to_do
POSTGRES_DB_PASSWORD=to_do_password
POSTGRES_DB_PORT=5432
JWT_SECRET=secret
APP_PORT=8080
The service runs, but when I look at the logs:
docker logs to_do_app
I get an error:
// DATABASE_URL = postgres://to_do_user:to_do_password@postgres_db:5432/to_do
Running migration 20220525065207
Executing migration script 20220525065207/up.sql
thread 'main' panicked at 'migrations error: Failed with: relation "to_do" does not exist', src/main.rs:28:19
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Although if we connect to postgres via psql, we see that diesel has created its own system table:
docker exec -it to_do_postgres sh
$ psql -U to_do_user to_do
psql (14.3 (Debian 14.3-1.pgdg110+1))
Type "help" for help.
to_do=# \dt
                   List of relations
 Schema |            Name            | Type  |   Owner
--------+----------------------------+-------+------------
 public | __diesel_schema_migrations | table | to_do_user
(1 row)
I don't understand what I'm doing wrong.
I was just fighting with this myself and am also using actix-web like you. It finally worked for me though after I realized that I did not have COPY ./migrations ./migrations in my Dockerfile.
Looking at yours, I see you're not copying over diesel.toml:
COPY ./diesel.toml ./diesel.toml
Would that be it?
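For context, diesel.toml is the Diesel CLI's configuration file, so if the CLI is used inside the image, it needs to be copied in alongside the migrations. The default file generated by diesel setup is tiny; a sketch of its typical contents:

```toml
# diesel.toml (sketch): tells the Diesel CLI where to write the generated schema
[print_schema]
file = "src/schema.rs"
```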

Moving a file from host with docker-compose volume before Dockerfile is built

I have a few Dockerfiles that are dependent on a "pat" (Personal Access Token) file to be able to access a private NuGet feed. I have taken some inspiration from somakdas to get this working.
To run my single Dockerfile I first create a "pat" file containing my token and build with docker build -f Services/User.API/Dockerfile -t userapi:dev --secret id=pat,src=pat .
This works as intended, but my issue is getting this to work using a docker-compose.yml file.
First I took a look at using docker-compose secrets, but it came to my attention that docker-compose secrets are accessed at runtime, not at build time. https://github.com/docker/compose/issues/6358
So now I'm trying to create a volume containing my pat file, but I get cat: /pat: No such file or directory when the RUN --mount=type=secret... command runs. This may not be secure, but it will only be running locally.
My Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
WORKDIR /src
COPY ["User.API.csproj", "/Services/User.API"]
RUN --mount=type=secret,id=pat,dst=/pat export ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\": [{\"endpoint\":\"<feed>\", \"username\":\"<user>\", \"password\":\"`cat /pat`\"}]}" \
&& dotnet restore "User.API.csproj" \
&& unset VSS_NUGET_EXTERNAL_FEED_ENDPOINTS
...
My docker-compose.yml
services:
  user.api:
    container_name: User.API
    image: ${DOCKER_REGISTRY-}userapi
    build:
      context: .
      dockerfile: Services/User.API/Dockerfile
    networks:
      - app_network
    volumes:
      - ./pat:/app/src/pat
Am I only able to access docker-compose volumes after the Dockerfile is built?
I solved this by attacking the problem in a different way. As the main goal was to get this working locally I created Dockerfile.Local and docker-compose.local.yml. Together with this I created an .env file containing the "pat".
The docker-compose.local.yml passes the "pat" as an argument to the Dockerfile.Local where it's used. I also discarded --mount=type=secret and set the value to VSS_NUGET_EXTERNAL_FEED_ENDPOINTS directly.
.env file:
PAT_TOKEN=<personal access token>
docker-compose.local.yml:
services:
  user.api:
    container_name: User.API
    image: ${DOCKER_REGISTRY-}userapi
    build:
      context: .
      dockerfile: Services/User.API/Dockerfile.Local
      args:
        - PAT=${PAT_TOKEN}
    networks:
      - app_network
    volumes:
      - ./pat:/app/src/pat
Dockerfile.Local:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
WORKDIR /src
COPY ["User.API.csproj", "/Services/User.API"]
ARG PAT
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\": [{\"endpoint\":\"<feed>\", \"username\":\"<user>\", \"password\":\"${PAT}\"}]}"
RUN dotnet restore "User.API.csproj"
...
Note: The .env file was added to .gitignore because it contains sensitive information. We don't want that in our repository.
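For reference, newer Compose versions (Compose v2 with BuildKit, per the Compose Specification) support build-time secrets directly, which would let the original RUN --mount=type=secret,id=pat line work without a separate local Dockerfile; a sketch reusing the service from above:

```yaml
services:
  user.api:
    build:
      context: .
      dockerfile: Services/User.API/Dockerfile
      secrets:
        - pat
secrets:
  pat:
    file: ./pat
```

This keeps the token out of build args and image layers, unlike the ARG/ENV workaround.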

Gitlab CI not able to use pg_prove

I'm struggling to get a GitLab CI pipeline up and running that uses the correct version of Postgres (13) and has pgTAP installed.
I deploy my project locally using a Dockerfile based on postgres:13.3-alpine that installs pgTAP too. However, I'm not sure if I can use this Dockerfile to help with my CI issues.
In my gitlab-ci.yml file, I currently have:
variables:
  GIT_SUBMODULE_STRATEGY: recursive

pgtap:
  only:
    refs:
      - merge_request
      - master
    changes:
      - ddl/**/*
  image: postgres:13.1-alpine
  services:
    - name: postgres:13.1-alpine
      alias: db
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: ""
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - psql postgres://postgres@db/postgres -c 'create extension pgtap;'
    - psql postgres://postgres@db/postgres -f ddl/01.sql
    - cd ddl/
    - psql postgres://postgres@db/postgres -f 02.sql
    - psql postgres://postgres@db/postgres -f 03.sql
    - pg_prove -d postgres://postgres@db/postgres --recurse test_*
The above works until it gets to the pg_prove command at the bottom as I get the below error:
pg_prove: command not found
Is there a way I can install pg_prove using the script commands? Or is there a better way to do this?
There is an old, closed issue about this.
To summarize: either you build your own image based on postgres:13.1-alpine with pgTAP installed, or you use a non-official image where pgTAP is already installed, such as 1maa/postgres:13-alpine:
docker run -it 1maa/postgres:13-alpine sh
/ # which pg_prove
/usr/local/bin/pg_prove
Since your step image is alpine based, you can try:
script:
  - apk add --no-cache --update build-base make perl perl-dev git openssl-dev
  - cpan TAP::Parser::SourceHandler::pgTAP
  - psql.. etc
You can probably omit some of the packages...
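The first option (building your own image) can be sketched like this, reusing the same packages as the script approach above; the exact package list is an assumption carried over from that answer:

```dockerfile
# Sketch: postgres image with pgTAP's test harness (pg_prove) installed,
# so CI jobs using this image have pg_prove on the PATH
FROM postgres:13.1-alpine
RUN apk add --no-cache --update build-base make perl perl-dev git openssl-dev \
 && cpan TAP::Parser::SourceHandler::pgTAP
```

The image would then be pushed to a registry and referenced in the job's image: key instead of postgres:13.1-alpine.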

CircleCI 2.0 testing with docker-compose and code checkout

This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works fine in that it brings up my service containers (db, es, redis) and I build a new image for my web container. However, my working code is not inside the freshly built image (so "cd myDir" always fails).
I figure the following lines in my Dockerfile should make my code available when it's built but it appears that it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Thanks,
Use COPY. Your Dockerfile should look something like this:
FROM image
COPY . /opt/app
WORKDIR "/opt/app"
(More commands)
ENTRYPOINT

Fig up error exec: "bundle": executable file not found in $PATH

I'm trying to run a Dockerized sinatra app with no database using fig, but I keep getting this error:
$ fig up
Recreating my_web_1...
Cannot start container 93f4a091bd6387bd28d8afb8636d2b14623a08d259fba383e8771fee811061a3: exec: "bundle": executable file not found in $PATH
Here is the Dockerfile
FROM ubuntu-nginx
MAINTAINER Ben Bithacker ben@bithacker.org
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
WORKDIR /app
RUN ["/bin/bash", "-l", "-c", "bundle install"]
ADD config/container/start-server.sh /usr/bin/start-server
RUN chmod +x /usr/bin/start-server
ADD . /app
EXPOSE 9292
CMD ["/usr/bin/start-server"]
The config/container/start-server.sh looks like this
#!/bin/bash
cd /app
source /etc/profile.d/rvm.sh
bundle exec rackup config.ru
The fig.yml looks like this:
web:
  build: .
  command: bundle exec rackup config.ru
  volumes:
    - .:/app
  ports:
    - "3000:3000"
  environment:
    - SOME_VAR=adsfasdfgasdfdfd
    - SOME_VAR2=ba2gezcjsdhwzhlz24zurg5ira
I think there are a couple problems with this setup. Where is bundler installed? Normally you would apt-get install ruby-bundler and it would always be on your path.
I believe your immediate problem is that you're overriding the CMD from the Dockerfile with the command in the fig.yml. I'm assuming (based on the contents of start-server.sh) that you need the path to be set? You should remove the command line from the fig.yml.
You're also overriding the /app directory in the container with the volumes: .:/app in the fig.yml. You probably also want to remove that line.
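Applying both suggestions, the trimmed fig.yml would look roughly like this (a sketch; the environment values are kept from the original):

```yaml
# fig.yml (sketch): no command: or volumes: overrides, so the image's
# CMD and the /app contents baked in by the Dockerfile are used
web:
  build: .
  ports:
    - "3000:3000"
  environment:
    - SOME_VAR=adsfasdfgasdfdfd
    - SOME_VAR2=ba2gezcjsdhwzhlz24zurg5ira
```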