I am trying to dockerize my application but I am running into the following error when I run it:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/app/bin/app-core": stat /app/bin/app-core: no such file or directory: unknown.
But when I do a sbt docker:publishLocal, it works perfectly. I am using Windows 11 and running Docker Desktop. Any ideas what might be the cause?
build.sbt:
docker / dockerfile := {
  val targetDir = "/app"
  new Dockerfile {
    from("openjdk:17")
    entryPoint(s"$targetDir/bin/${executableScriptName.value}")
    expose(8080)
  }
}
docker / buildOptions := BuildOptions(cache = false)
Dockerfile created by sbt-docker & sbt-native-packager:
FROM openjdk:17
ENTRYPOINT ["\/app\/bin\/app-core"]
EXPOSE 8080
When I do a sbt docker:publishLocal, it works perfectly. But when I do a build:
docker build . --file Dockerfile --tag app:test1
Then try to run it via:
docker run -itd -p 8080:8080 app-core:test1
Then it gives me the error above.
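Two things stand out here (a debugging sketch, not a definitive diagnosis). First, the build command tags the image app:test1, but the run command references app-core:test1, so the container may be starting from a different, stale image rather than the one just built. Second, the generated Dockerfile shown above contains only FROM, ENTRYPOINT, and EXPOSE with no COPY/ADD instruction, so an image built from it alone cannot contain /app/bin/app-core; presumably the publishLocal task stages and copies the packaged files in a way the standalone docker build does not. Assuming the tags above, these commands would run the freshly built image and inspect what is actually inside it:

```shell
# Run the image that docker build actually produced (tag matches the build command)
docker run -itd -p 8080:8080 app:test1

# Bypass the entrypoint and list /app/bin to see whether the binary exists at all
docker run --rm --entrypoint ls app:test1 /app/bin
```

If the listing is empty or fails, the image was built from the three-line Dockerfile without the staged application files.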
I spun up a Debian 11 EC2 instance on AWS and installed Postgres 14.5, Docker, and Docker Compose on it. I added a Postgres superuser named "admin" with a password. I created my docker-compose.yml file and a .env file.
When I try to use the docker-compose.yml file, I get:
sudo docker compose up -d
services.database.environment must be a mapping
When I build my docker container with
sudo docker build . -t tvappbuilder:latest
and then try to run it with:
sudo docker run -p 8080:8080 tvappbuilder:latest --env-file .env -it
Config Path .
4:47PM INF server/utils/logging.go:105 > logging configured fileLogging=true fileName=app-builder-logs logDirectory=./logs maxAgeInDays=0 maxBackups=0 maxSizeMB=0
4:47PM FTL server/cmd/video_conferencing/server.go:71 > Error initializing database error="pq: Could not detect default username. Please provide one explicitly"
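One detail worth noting about the docker run invocation above: Docker treats everything after the image name as arguments to the container's entrypoint, so in that command --env-file .env and -it are handed to the Go binary instead of being interpreted by Docker, and none of the variables from .env ever reach the container, which would be consistent with the pq error about a missing username. The corrected ordering would be:

```shell
# All docker run options must come before the image name;
# anything after the image name is passed to the entrypoint
sudo docker run --env-file .env -it -p 8080:8080 tvappbuilder:latest
```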
Here are the dockers so far:
sudo docker image list
REPOSITORY     TAG       IMAGE ID       CREATED         SIZE
<none>         <none>    6e5f035abda5   18 hours ago    1.82GB
tvappbuilder   latest    6166e24a47e0   21 hours ago    21.8MB
<none>         <none>    cedcaf2facd1   21 hours ago    1.82GB
hello-world    latest    feb5d9fea6a5   12 months ago   13.3kB
golang         1.15.1    9f495162f677   2 years ago     839MB
Here is the docker-compose.yml:
version: 3.7
services:
  server:
    container_name: server
    build: .
    depends_on:
      - database
    ports:
      - 8080:8080
    environment:
      - APP_ID: $APP_ID
      - APP_CERTIFICATE: $APP_CERTIFICATE
      - CUSTOMER_ID: $CUSTOMER_ID
      - CUSTOMER_CERTIFICATE: $CUSTOMER_CERTIFICATE
      - BUCKET_NAME: $BUCKET_NAME
      - BUCKET_ACCESS_KEY: $BUCKET_ACCESS_KEY
      - BUCKET_ACCESS_SECRET: $BUCKET_ACCESS_SECRET
      - CLIENT_ID: $CLIENT_ID
      - CLIENT_SECRET: $CLIENT_SECRET
      - PSTN_USERNAME: $PSTN_USERNAME
      - PSTN_PASSWORD: $PSTN_PASSWORD
      - SCHEME: $SCHEME
      - ALLOWED_ORIGIN: ""
      - ENABLE_NEWRELIC_MONITORING: false
      - RUN_MIGRATION: true
      - DATABASE_URL: postgresql://$POSTGRES_USER:$POSTGRES_PASSWORD@database:5432/$POSTGRES_DB?sslmode=disable
  database:
    container_name: server_database
    image: postgres-14.5
    restart: always
    hostname: database
    environment:
      - POSTGRES_USER: $POSTGRES_USER
      - POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      - POSTGRES_DB: $POSTGRES_DB
Here is the Dockerfile:
## Using Dockerfile from the following post: https://medium.com/@petomalina/using-go-mod-download-to-speed-up-golang-docker-builds-707591336888
FROM golang:1.15.1 as build-env
# All these steps will be cached
RUN mkdir /server
WORKDIR /server
COPY go.mod .
COPY go.sum .
# Get dependencies - will also be cached if mod/sum don't change
RUN go mod download
# COPY the source code as the last step
COPY . .
# Build the binary
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o /go/bin/server /server/cmd/video_conferencing
# Second step to build minimal image
FROM scratch
COPY --from=build-env /go/bin/server /go/bin/server
COPY --from=build-env /server/config.json config.json
ENTRYPOINT ["/go/bin/server"]
and here is the .env file:
ENCRYPTION_ENABLED=0
POSTGRES_USER=admin
POSTGRES_PASSWORD=<correct pswd for admin>
POSTGRES_DB=tvappbuilder
APP_ID=<my real app ID>
APP_CERTIFICATE=<my real app cert>
CUSTOMER_ID=<my real ID>
CUSTOMER_CERTIFICATE=<my real cert>
RECORDING_REGION=0
BUCKET_NAME=<my bucket name>
BUCKET_ACCESS_KEY=<my real key>
BUCKET_ACCESS_SECRET=<my real secret>
CLIENT_ID=
CLIENT_SECRET=
PSTN_USERNAME=
PSTN_PASSWORD=
PSTN_ACCOUNT=
PSTN_EMAIL=
SCHEME=esports1_agora
ENABLE_SLACK_OAUTH=0
SLACK_CLIENT_ID=
SLACK_CLIENT_SECRET=
GOOGLE_CLIENT_ID=
ENABLE_GOOGLE_OAUTH=0
GOOGLE_CLIENT_SECRET=
ENABLE_MICROSOFT_OAUTH=0
MICROSOFT_CLIENT_ID=
MICROSOFT_CLIENT_SECRET=
APPLE_CLIENT_ID=
APPLE_PRIVATE_KEY=
APPLE_KEY_ID=
APPLE_TEAM_ID=
ENABLE_APPLE_OAUTH=0
PAPERTRAIL_API_TOKEN=<my real token>
According to https://pkg.go.dev/github.com/lib/pq, I probably should not need to use pq and should instead use postgres directly, but it appears it was set up this way.
Many thanks for any pointers!
As per the comments there are a number of issues with your setup.
The first is the error services.database.environment must be a mapping when running docker compose up -d. This is caused by lines like - APP_ID: $APP_ID in your docker-compose.yml: use either the mapping form APP_ID: $APP_ID or the list form - APP_ID=$APP_ID, as per the documentation.
A further issue is that you installed Postgres on the bare OS and are then also running a postgres container. You only need one or the other (and if using Docker you will want a volume or bind mount for the Postgres data, otherwise it will be lost when the container is rebuilt).
There are probably further issues but the above should get you started.
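For illustration, the database service could be written with the mapping form of environment and a named volume for the data directory (a sketch based on the file above, not a drop-in replacement; postgres-14.5 in the original is assumed here to be a typo for the official postgres:14.5 image tag):

```yaml
services:
  database:
    container_name: server_database
    image: postgres:14.5
    restart: always
    hostname: database
    environment:            # mapping form: "KEY: value", no leading dashes
      POSTGRES_USER: $POSTGRES_USER
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      POSTGRES_DB: $POSTGRES_DB
    volumes:
      - pgdata:/var/lib/postgresql/data   # data survives container rebuilds

volumes:
  pgdata:
```

Equivalently, the list form would be - POSTGRES_USER=$POSTGRES_USER (dash plus KEY=value); mixing the two styles, as in - POSTGRES_USER: $POSTGRES_USER, is what triggers the "must be a mapping" error.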
I am trying to build mongo inside docker, and I want to create a database and a collection with a document inside it. I tried with docker build; below is my Dockerfile:
FROM mongo
RUN mongosh mongodb://127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
RUN mongosh mongodb://127.0.0.1:27017/demeter --eval 'var document = {"_id": "61912ebb4b6d7dcc7e689914","name": "Test Account","email":"test@test.net", "role": "admin", "company_domain": "test.net","type": "regular","status": "active","createdBy": "61901a01097cb16e554f5a19","twoFactorAuth": false, "password": "$2a$10$MPDjDZIboLlD8xpc/RfOouAAAmBLwEEp2ESykk/2rLcqcDJJEbEVS"}; db.Users.insert(document);'
EXPOSE 27017
and using Docker Compose
version: '3.9'
services:
  web:
    build:
      context: ./server
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
  demeter_db:
    image: "mongo"
    volumes:
      - ./mongodata:/data/db
    ports:
      - "27017:27017"
    command: mongosh mongodb://127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
  demeter_redis:
    image: "redis"
I want to add the records below because the web server uses them in the backend. If there is a better way of doing it, I would be thankful.
What I get is the below error
demeter_db_1 | Current Mongosh Log ID: 61dc697509ee790cc89fc7aa
demeter_db_1 | Connecting to: mongodb://127.0.0.1:27017/demeter?directConnection=true&serverSelectionTimeoutMS=2000
demeter_db_1 | MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017
Note that when I connect to an interactive shell inside the mongo container and add them manually, things work fine.
root@8b20d117586d:/# mongosh 127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
Current Mongosh Log ID: 61dc64ee8a2945352c13c177
Connecting to: mongodb://127.0.0.1:27017/demeter?directConnection=true&serverSelectionTimeoutMS=2000
Using MongoDB: 5.0.5
Using Mongosh: 1.1.7
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting:
2022-01-10T16:52:14.717+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2022-01-10T16:52:15.514+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
------
{ ok: 1 }
root@8b20d117586d:/# exit
exit
Cheers
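A hedged sketch of the usual approach to the question above: RUN mongosh ... in a Dockerfile cannot work because no mongod server is running during docker build, and the compose command: override replaces mongod itself with mongosh, so nothing is listening on 27017 (hence ECONNREFUSED). The official mongo image instead executes any .js or .sh files found in /docker-entrypoint-initdb.d on the first start with an empty data directory, so the seed data can go into an init script mounted there, for example:

```javascript
// mongo-init/create-users.js  (file name is arbitrary; the mount path is not)
// Executed once by the official mongo image entrypoint on first initialization.
db = db.getSiblingDB('demeter');      // create/switch to the demeter database
db.createCollection('Users');
db.Users.insertOne({
  _id: '61912ebb4b6d7dcc7e689914',
  name: 'Test Account',
  email: 'test@test.net',
  role: 'admin',
  company_domain: 'test.net',
  type: 'regular',
  status: 'active',
  createdBy: '61901a01097cb16e554f5a19',
  twoFactorAuth: false,
  password: '$2a$10$MPDjDZIboLlD8xpc/RfOouAAAmBLwEEp2ESykk/2rLcqcDJJEbEVS'
});
```

In the compose file, the demeter_db service would then drop the command: line and add - ./mongo-init:/docker-entrypoint-initdb.d:ro under volumes (the ./mongo-init host directory is an assumption). Note the scripts run only when ./mongodata is empty.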
So I have created a node/mongo app and I am trying to run everything on docker.
I can get everything to run just fine until I try and add the init file for the mongo instance into the entry-point.
Here is my docker file for mongo: (Called mongo.dockerfile in /MongoDB)
FROM mongo:4.2
WORKDIR /usr/src/mongo
VOLUME /docker/volumes/mongo /user/data/mongo
ADD ./db-init /docker-entrypoint-initdb.d
CMD ["mongod", "--auth"]
The db-init folder contains an init.js file that looks like so (removed the names of stuff):
db.createUser({
  user: '',
  pwd: '',
  roles: [ { role: 'readWrite', db: '' } ]
})
Here is my docker-compose file:
version: "3.7"
services:
  web:
    container_name: web
    env_file:
      - API/web.env
    build:
      context: ./API
      target: prod
      dockerfile: web.dockerfile
    ports:
      - "127.0.0.1:3000:3000"
    depends_on:
      - mongo
    links:
      - mongo
    restart: always
  mongo:
    container_name: mongo
    env_file:
      - MongoDB/mongo.env
    build:
      context: ./MongoDB
      dockerfile: mongo.dockerfile
    restart: always
The exact error I get when running a docker-compose up is:
ERROR: for mongo Cannot start service mongo: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"docker-entrypoint-initdb.d\": executable file not found in $PATH": unknown
I had this working at one point with another project but cannot seem to get this on to work at all.
Any thoughts on what I am doing wrong?
Also note I have seen other issues like this saying to chmod +x the path (tried that, didn't work). I also tried chmod 777, which didn't work either (maybe I did this wrong; I don't know exactly what to run it on).
Your entrypoint has been modified from the upstream image, and it's not clear how from the input you've provided. You may have modified the mongo image itself and need to pull a fresh copy with docker-compose build --pull. Otherwise, you can force the entrypoint back to the upstream value:
ENTRYPOINT ["docker-entrypoint.sh"]
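For reference, the mongo Dockerfile from the question could then be reduced to something like this (a sketch; WORKDIR and the VOLUME line are omitted because the upstream image already manages its data directory, and the explicit ENTRYPOINT only matters if the inherited one was overridden somewhere):

```dockerfile
FROM mongo:4.2
# Scripts in this directory are run by the entrypoint on first initialization
ADD ./db-init /docker-entrypoint-initdb.d
# Force the entrypoint back to the upstream value
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mongod", "--auth"]
```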
I have developed in Visual Studio 2017 (version 15.9.16) a simple .Net Core 2.2 unit test project that connects to a MongoDb instance, with Container Orchestration Support enabled. My goal is to run a container with a MongoDb instance that the unit tests connect to whenever they are launched from the Test Explorer window. The problem I am facing is that the unit test code cannot connect to the MongoDb container: the service name defined in docker-compose.yml cannot be resolved.
Here are the contents of the docker-compose.yml file:
version: '3.4'
services:
  myapp:
    image: ${DOCKER_REGISTRY-}myapp
    build:
      context: .
      dockerfile: myapp/Dockerfile
    depends_on:
      - mongo
  mongo:
    image: mongo:latest
The contents of the Dockerfile of the app are the following:
FROM microsoft/dotnet:2.2-runtime AS base
WORKDIR /app
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY myapp/myapp.csproj myapp/
RUN dotnet restore myapp/myapp.csproj
COPY . .
WORKDIR /src/myapp
RUN dotnet build myapp.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish myapp.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "myapp.dll"]
Inside the unit test code, if I try to connect to the MongoDb instance using var client = new MongoClient("mongodb://mongo:27017"); when trying to write to the database I get the following exception:
A timeout occured after 30000ms selecting a server using
CompositeServerSelector{ Selectors =
MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector,
LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000
} }. Client view of cluster state is { ClusterId : "1", ConnectionMode
: "Automatic", Type : "Unknown", State : "Disconnected", Servers : [{
ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/mongo:27017" }",
EndPoint: "Unspecified/mongo:27017", State: "Disconnected", Type:
"Unknown", HeartbeatException:
"MongoDB.Driver.MongoConnectionException: An exception occurred while
opening a connection to the server. --->
System.Net.Sockets.SocketException: Unknown host at
System.Net.Dns.HostResolutionEndHelper(IAsyncResult asyncResult) at
System.Net.Dns.EndGetHostAddresses(IAsyncResult asyncResult) at
System.Net.Dns.<>c.b__25_1(IAsyncResult
asyncResult) at
System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult
iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean
requiresSynchronization)
--- End of stack trace from previous location where exception was thrown --- at
MongoDB.Driver.Core.Connections.TcpStreamFactory.ResolveEndPointsAsync(EndPoint
initial) at
MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint
endPoint, CancellationToken cancellationToken) at
MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken
cancellationToken) --- End of inner exception stack trace --- at
MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken
cancellationToken) at
MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken
cancellationToken)", LastUpdateTimestamp:
"2019-10-03T10:00:40.7989018Z" }] }.'
If I try to resolve MongoDb container service name using System.Net.Dns.GetHostEntry("mongo") I get this exception:
System.Net.SocketException: 'Unknown host'
It seems clear to me that the .Net Core code inside the unit test container cannot resolve docker-compose.yml service names. On the other hand, if I start a session in the unit test container I can do a ping mongo or telnet mongo 27017 with success. I have also tried ensuring that MongoDb container has started as proposed in this question, but with no luck. Something must be missing in my code or the docker configuration files to enable service name resolution. Any help would be much appreciated.
You can declare a network and assign a static IP address to each service, then use the IP address of the mongo service instead of resolving its hostname:
version: '3.4'
services:
  myapp:
    image: ${DOCKER_REGISTRY-}myapp
    build:
      context: .
      dockerfile: myapp/Dockerfile
    depends_on:
      - mongo
    networks:
      mynetwork:
        ipv4_address: 178.25.0.3
  mongo:
    image: mongo:latest
    networks:
      mynetwork:
        ipv4_address: 178.25.0.2
networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 178.25.0.0/24
MongoClient("mongodb://178.25.0.2:27017");
Finally I found the reason for this behavior after running System.Net.Dns.GetHostName() inside my unit test code: the returned name was the host name, not the container name. I forgot to mention that I was running my unit tests from the Test Explorer window, and that way container support is completely ignored. Contrary to what I expected, Docker Compose is not invoked, so neither the MongoDb container nor the unit test project container is launched; the unit tests just run on the host. The feature of running unit tests in a container when container support is enabled in Visual Studio has already been requested at https://developercommunity.visualstudio.com/idea/554907/allow-running-unit-tests-in-docker.html. Please vote up this feature!
I'm building Iroha, for which I'm running an environment setup script that internally calls docker-compose.yml, where I'm getting the error:
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "/home/cdac/iroha/docker/docker-compose.yml", line 3, column 5
expected <block end>, but found '<scalar>'
in "/home/cdac/iroha/docker/docker-compose.yml", line 13, column 6
The docker-compose.yml file is shown below.
services:
  node:
    image: hyperledger/iroha:develop-build
    ports:
      - "${IROHA_PORT}:50051"
      - "${DEBUGGER_PORT}:20000"
    environment:
      - IROHA_POSTGRES_HOST=${COMPOSE_PROJECT_NAME}_postgres_1
      - IROHA_POSTGRES_PORT=5432
      - IROHA_POSTGRES_USER=iroha
      - IROHA_POSTGRES_PASSWORD=helloworld
      - CCACHE_DIR=/tmp/ccache
    export G_ID=$(id -g $(whoami))
    export U_ID=$(id -g $(whoami))
    user: ${U_ID:-0}:${G_ID:-0}
    depends_on:
      - postgres
    tty: true
    volumes:
      - ../:/opt/iroha
      - ccache-data:/tmp/ccache
    working_dir: /opt/iroha
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
  postgres:
    image: postgres:9.5
    environment:
      - POSTGRES_USER=iroha
      - IROHA_POSTGRES_PASSWORD=helloworld
    command: -c 'max_prepared_transactions=100'
volumes:
  ccache-data:
Any help will be appreciated; thanks in advance.
These lines do not belong in the docker-compose syntax:
export G_ID=$(id -g $(whoami))
export U_ID=$(id -g $(whoami))
This line also won't work as expected:
user: ${U_ID:-0}:${G_ID:-0}
You should write your own shell script and use it as an entrypoint for the container (this should be done in the Dockerfile), then run a container directly from the image you created, without assigning a user or exporting anything within the docker-compose file, as the script will be executed once your container is running.
Check the following URL which contains more explanation about the allowed keywords in docker-compose: Compose File: Service Configuration Reference
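As a sketch of what the cleaned-up service might look like (abridged to the affected keys; one alternative to the entrypoint-script approach is to export U_ID and G_ID in the shell, or a wrapper script, before docker-compose is invoked):

```yaml
services:
  node:
    image: hyperledger/iroha:develop-build
    environment:
      - IROHA_POSTGRES_USER=iroha
      - CCACHE_DIR=/tmp/ccache
    # No shell statements here: "export ..." lines are not valid compose keys.
    # ${U_ID:-0}:${G_ID:-0} is resolved from the environment of the shell that
    # runs docker-compose, defaulting to 0 (root) when the variables are unset.
    user: ${U_ID:-0}:${G_ID:-0}
```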
@MostafaHussein I removed the above 3 lines and then executed the run-iroha-dev.sh script, and it started to work. It attached me to /opt/iroha in the docker container, downloaded the hyperledger/iroha:develop-build and iroha images, and launched two containers.
Is that the same as what you are suggesting?