I am using Docker to generate the Go API code, but how do I pass the -Dnoservice option to the Docker container?
docker run --rm -v "${PWD}:/local" openapitools/openapi-generator-cli generate \
-i https://raw.githubusercontent.com/openapitools/openapi-generator/master/modules/openapi-generator/src/test/resources/3_0/petstore.yaml \
-g go-server \
-o /local/out/go
I am referring to https://openapi-generator.tech/docs/generators/go-server, where it says:
Generates a Go server library using OpenAPI-Generator. By default, it
will also generate service classes -- which you can disable with the
-Dnoservice environment variable.
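Everything after the image name in docker run is passed as arguments to the generator CLI inside the container, so one approach is simply to append the option to the generate command. This is only a sketch: I am assuming -Dnoservice is accepted as a command-line -D property (the way other generator -D properties historically were), despite the docs calling it an environment variable.
docker run --rm -v "${PWD}:/local" openapitools/openapi-generator-cli generate \
-i https://raw.githubusercontent.com/openapitools/openapi-generator/master/modules/openapi-generator/src/test/resources/3_0/petstore.yaml \
-g go-server \
-o /local/out/go \
-Dnoservice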
Related
Is it possible to use a local repository (mirror) that is used by docker when building a container with docker-compose?
I have a local mirror, and I want the apk packages to be downloaded from it instead of from the Internet when I build a container with docker-compose build.
For example, with a local deployment using this Dockerfile:
FROM alpine:3.13
# Install packages and remove default server definition
RUN apk --no-cache add php8=8.0.2-r0 php8-fpm php8-opcache php8-mysqli php8-json \
php8-openssl php8-curl php8-soap php8-zlib php8-xml php8-phar php8-intl php8-dom php8-xmlreader php8-ctype \
php8-session php8-simplexml php8-mbstring php8-gd nginx supervisor curl php8-exif php8-zip php8-fileinfo \
php8-iconv php8-soap tzdata htop mysql-client php8-pecl-imagick php8-pecl-redis php8-tokenizer php8-xmlwriter \
nano && rm /etc/nginx/conf.d/default.conf
When I build the container with docker-compose build, I would like it to use my local Alpine repository instead of one of the public Alpine mirrors, such as:
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
(1/124) Installing ca-certificates (20191127-r5)
(2/124) Installing brotli-libs (1.0.9-r3)
(3/124) Installing nghttp2-libs (1.42.0-r1)
...
After reading the Alpine Linux package management docs, I got a solution working using the static IP of the local mirror. I tried to use the hostname (repoalpine.test) instead, but I couldn't find a way to make that hostname resolvable on the Docker build network.
RUN apk --no-cache -X http://172.20.0.254/v3.13/main -X http://172.20.0.254/v3.13/community \
add php8=8.0.2-r0 php8-fpm php8-opcache php8-mysqli php8-json \
php8-openssl php8-curl php8-soap php8-zlib php8-xml php8-phar php8-intl php8-dom php8-xmlreader php8-ctype \
php8-session php8-simplexml php8-mbstring php8-gd nginx supervisor curl php8-exif php8-zip php8-fileinfo \
php8-iconv php8-soap tzdata htop mysql-client php8-pecl-imagick php8-pecl-redis php8-tokenizer php8-xmlwriter \
nano && rm /etc/nginx/conf.d/default.conf
Now it works:
Step 7/22 : RUN apk --no-cache -X http://172.20.0.254/v3.13/main -X http://172.20.0.254/v3.13/community add php8=8.0.2-r0 php8-fpm php8-opcache php8-mysqli php8-json php8-openssl php8-curl php8-soap php8-zlib php8-xml php8-phar php8-intl php8-dom php8-xmlreader php8-ctype php8-session php8-simplexml php8-mbstring php8-gd nginx supervisor curl php8-exif php8-zip php8-fileinfo php8-iconv php8-soap tzdata htop mysql-client php8-pecl-imagick php8-pecl-redis php8-tokenizer php8-xmlwriter nano && rm /etc/nginx/conf.d/default.conf
---> Running in 4f2c6521e6e6
fetch http://172.20.0.254/v3.13/community/x86_64/APKINDEX.tar.gz
...
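A variant of the same idea, in case it helps someone: instead of passing -X on every apk invocation, the mirror can be written to /etc/apk/repositories once, so that every later apk call in the build uses it implicitly. A minimal sketch, assuming the same mirror address as above (package list shortened for brevity):
# point apk at the local mirror for the rest of the build
RUN printf '%s\n' \
      "http://172.20.0.254/v3.13/main" \
      "http://172.20.0.254/v3.13/community" \
      > /etc/apk/repositories \
 && apk --no-cache add php8=8.0.2-r0 php8-fpm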
I want to run the cp-schema-registry image on AWS ECS, so I am first trying to get it running locally with Docker. I have a command like this:
docker run -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="1.kafka.address:9092,2.kafka.address:9092,3.kafka.address:9092" \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_PLAINTEXT \
confluentinc/cp-schema-registry:5.5.3
(I have replaced the real Kafka URLs.)
My consumers/producers connect to the cluster with params:
["sasl.mechanism"] = "PLAIN"
["sasl.username"] = <username>
["sasl.password"] = <password>
Docs seem to indicate there is a file I can create with these parameters, but I don't know how to pass this into the docker run command. Can this be done?
Thanks to OneCricketeer for the help above with the SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG variable. The command ended up like this (I added -p 8081:8081 so I could test with curl):
docker run -p 8081:8081 -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="1.kafka.broker:9092,2.kafka.broker:9092,3.kafka.broker:9092" \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_SSL \
-e SCHEMA_REGISTRY_KAFKASTORE_SASL_MECHANISM=PLAIN \
-e SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="user" password="pass";' confluentinc/cp-schema-registry:5.5.3
Then I test with curl localhost:8081/subjects and get [] as the response.
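As a slightly stronger sanity check than an empty subject list, you can register a trivial schema through the registry's REST API and read it back; the subject name test-value below is made up for illustration:
# register a minimal string schema under a hypothetical subject
curl -s -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\": \"string\"}"}' \
  localhost:8081/subjects/test-value/versions
# the subject should now show up in the list
curl -s localhost:8081/subjects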
I am trying to install the Scaleway CLI as part of a Docker image I'm building to run Azure Pipelines.
My Dockerfile looks like this:
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat \
docker.io \
s3cmd
# Install Scaleway CLI
RUN curl -o /usr/local/bin/scw -L "https://github.com/scaleway/scaleway-cli/releases/download/v2.1.0/scw-2-1-0-linux-x86_64"
RUN chmod +x /usr/local/bin/scw
# Add config for Scaleway CLI
RUN mkdir -p ./config
RUN mkdir -p ./config/scw
COPY ./config/config.yaml $HOME/.config/scw/config.yaml
RUN scw init
# Add private key for SSH connections
COPY ./config/id_rsa $HOME/.ssh/id_rsa
# Config s3cmd
COPY ./config/.s3cfg $HOME/.s3cfg
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
The key section being:
# Install Scaleway CLI
RUN curl -o /usr/local/bin/scw -L "https://github.com/scaleway/scaleway-cli/releases/download/v2.1.0/scw-2-1-0-linux-x86_64"
RUN chmod +x /usr/local/bin/scw
# Add config for Scaleway CLI
RUN mkdir -p ./config
RUN mkdir -p ./config/scw
COPY ./config/config.yaml $HOME/.config/scw/config.yaml
RUN scw init
The config.yaml file referenced above looks like the following (minus the real values of course):
access_key: <key>
secret_key: <secret>
default_organization_id: <orgId>
default_project_id: <projectId>
default_region: nl-ams
default_zone: nl-ams-1
However, when it executes RUN scw init, the output is: Invalid email or secret-key: ''
I have tried without running scw init at all, but then calls to scw fail, saying:
Access key is required
Details: Access_key can be initialised using the command "scw init".
After initialisation, there are three ways to provide access_key:
with the Scaleway config file, in the access_key key: /root/.config/scw/config.yaml;
with the SCW_ACCESS_KEY environement variable;
Note that the last method has the highest priority.
More info:
https://github.com/scaleway/scaleway-sdk-go/tree/master/scw#scaleway-config
Hint: You can get your credentials here:
https://console.scaleway.com/account/credentials
Which, admittedly, is one of the better error messages I've seen, but it nonetheless hasn't helped me. I am going to try the environment-variable approach, which I suspect may do the trick, but I'd still like to know what I'm doing wrong with this config.yaml file.
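For reference, this is roughly what that environment-variable approach would look like in the Dockerfile. A sketch only: SCW_ACCESS_KEY is named in the error message above, and SCW_SECRET_KEY is the matching name from the linked scaleway-sdk-go docs. Note that values passed this way still end up visible in the image metadata, so treat this purely as illustration:
# hedged sketch: provide credentials via environment instead of config.yaml
ARG SCW_ACCESS_KEY
ARG SCW_SECRET_KEY
ENV SCW_ACCESS_KEY=${SCW_ACCESS_KEY} \
    SCW_SECRET_KEY=${SCW_SECRET_KEY}
built with something like:
docker build --build-arg SCW_ACCESS_KEY=<key> --build-arg SCW_SECRET_KEY=<secret> .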
Lastly... someone with more rep than me needs to create the tag "scaleway". It's hard to reference the actual technology in question when the tag doesn't exist.
I have cloned the Landoop fast-data-dev repo from this GitHub repo and built the image using docker build --tag=landoop .
After building the image, I ran it using:
docker run --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=10.10.X.X -e DEBUG=1 -e AWS_ACCESS_KEY_ID=XXX -e AWS_SECRET_ACCESS_KEY=XXX landoop
Once the UI was up, I tried to create an S3 sink connector, but it failed with:
Caused by: java.io.FileNotFoundException: /usr/lib/libnss3.so
I also don't see the libnss3.so file at that location. However, if I run the container directly using the command below, the file is there and the S3 sink connector is created without any error.
docker run --rm --net=host landoop/fast-data-dev
Has anyone faced this error?
Answering my own question so that others can benefit; if that's not appropriate, please leave a comment and I will convert this to a comment. I figured out that the libnss3 library was missing from the Debian image and had to be installed while building the image. For this I edited setup-and-run.sh and added libnss3; the relevant part now looks like:
FROM debian as compile-lkd
RUN apt-get update \
&& apt-get install -y \
unzip \
wget \
libnss3 \
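A quick way to confirm the library actually ends up in the rebuilt image (find is used because the exact path can vary between Debian variants; the image tag matches the build command above):
docker run --rm --entrypoint sh landoop -c 'find / -name "libnss3*" 2>/dev/null'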
I have created a docker volume for postgres on my local machine.
docker volume create postgres-data
Then I used this volume to run a container:
docker run -it -v postgres-data:/var/lib/postgresql/9.6/main postgres
After that I did some database operations, which got stored automatically in postgres-data. Now I want to copy that volume from my local machine to another, remote machine. How can I do that?
Note: the database is very large.
If the second machine has SSH enabled, you can use an Alpine container on the first machine to mount the volume, bundle it up, and send it to the second machine.
That would look like this:
docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c \
"cd /from ; tar -cf - . " | \
ssh <TARGET_HOST> \
'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf - "'
You will need to change:
SOURCE_DATA_VOLUME_NAME
TARGET_HOST
TARGET_DATA_VOLUME_NAME
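For the postgres-data volume from the question, the filled-in command would look like this (user@remote.example.com is a placeholder for the real target host, and the target volume name is assumed to be the same):
docker run --rm -v postgres-data:/from alpine ash -c \
"cd /from ; tar -cf - . " | \
ssh user@remote.example.com \
'docker run --rm -i -v postgres-data:/to alpine ash -c "cd /to ; tar -xpvf - "'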
Or, you could try using this helper script https://github.com/gdiepen/docker-convenience-scripts
Hope this helps.
I had the exact same problem, but in my case both volumes were in separate VPCs and I couldn't expose SSH to the outside world. I ended up creating dvsync, which uses ngrok to create a tunnel between the two machines and then uses rsync over SSH to copy the data. In your case you could start the dvsync-server on your machine:
docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=postgres-data,target=/data,readonly \
quay.io/suda/dvsync-server
and then start the dvsync-client on the target machine:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
The NGROK_AUTHTOKEN can be found in the ngrok dashboard, and the DVSYNC_TOKEN is printed by dvsync-server to its stdout.
Once the synchronization is done, the dvsync-client container will stop.