How do I send a request from one container to another using docker-compose?

I have two containers. When I call the first container through FastAPI, it sends an image to the second container using requests; the second container receives the image and stores it in a volume. I'm getting an error.
Files of first container:
main.py
import base64
import io
import json
import logging
import os
from io import BytesIO
import requests
import uvicorn
from fastapi import FastAPI, File, Form, UploadFile
from fastapi.responses import FileResponse
from PIL import Image
app = FastAPI()
#app.get("/")
def read_root():
img = Image.new('RGB', (200, 50), color = (73,195,150))
img.save('newfile.jpg')
print("image saved")
img.show()
##send the image
api = 'http://localhost:81/test'
filename ='newfile.jpg'
up = {'image':(filename, open(filename, 'rb'))}
#json = {'first': "Hello", 'second': "World"}
request = requests.post(api, files=up)
print(request.text)
return {"image":"sent successfully:", "statuscode":request.status_code}
Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY ./app /app
WORKDIR /app
RUN pip install Pillow requests python-multipart
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
COPY . /app
Files of the second container:
main.py:
from fastapi import FastAPI, UploadFile, File, Form
import uvicorn
import io
import json
import base64
import logging
import shutil
from io import BytesIO
from PIL import Image, ImageFont, ImageDraw

app = FastAPI()

def read_imagefile(file) -> Image.Image:
    image = Image.open(BytesIO(file))
    return image

@app.post("/test")
async def predict_api(file: UploadFile = File(...)):
    extension = file.filename.split(".")[-1] in ("jpg", "jpeg", "png")
    if not extension:
        return "Image must be jpg or png format!"
    img = read_imagefile(await file.read())
    # img = Image.open(myfile)
    draw = ImageDraw.Draw(img)
    # font = ImageFont.truetype(<font-file>, <font-size>)
    font = ImageFont.truetype("sans-serif.ttf", 16)
    # draw.text((x, y), "Sample Text", (r, g, b))
    draw.text((0, 0), "Manipulated", (255, 255, 255), font=font)
    img.save('sample-out.jpg')
    return {"image": "saved in vol"}
Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY ./app /app
WORKDIR /app
RUN pip install Pillow python-multipart
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "81"]
COPY . /app
Docker-compose file:
version: '3.8'
services:
  app1:
    build: ./app1/
    ports:
      - 80:80
    networks:
      - my-proxy-net
  app2:
    build: ./app2/
    volumes:
      - myapp:/app
    ports:
      - 81:81
networks:
  my-proxy-net:
    external: true
volumes:
  myapp:

If your services are on the same network, they can reach each other using the service name (or container_name) as the hostname.
In your case, for example, app1 should call the second container at:
http://app2:81/test
Use
docker container ps -a
and check the "NAMES" column in order to find the right name to call.
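For example, a minimal sketch of the fix (assuming the service names app1 and app2 from the compose file above; note that app2 must also be attached to my-proxy-net, otherwise the two services don't share a network):
# docker-compose.yml (sketch): attach app2 to the shared network as well
  app2:
    build: ./app2/
    volumes:
      - myapp:/app
    ports:
      - 81:81
    networks:
      - my-proxy-net
# and in app1's main.py, address app2 by service name instead of localhost
api = 'http://app2:81/test'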

Related

mongodb-memory-server wrongfully downloading binaries on CircleCI even though they are cached

I'm using mongodb-memory-server.
Locally everything passes, but on CircleCI it tries to download the binary even though it's already there.
By running node node_modules/mongodb-memory-server/postinstall.js on CircleCI I see this output:
However, further down the line, when I run my tests, it tries to download the binary again:
In addition, the SYSTEM_BINARY env variable is set to the path where the binary is.
Versions
NodeJS: v17.6.0
mongodb-memory-server-*: 8.8.0
mongoose: 6.4.7
system: ubuntu 20.04
Code Example
# .circleci/config.yml
version: 2.1
orbs:
  node: circleci/node@5.0.2
jobs:
  main:
    docker:
      - image: cimg/node:lts-browsers
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: yarn
      - run: echo $SYSTEM_BINARY
      - run: node node_modules/mongodb-memory-server/postinstall.js
      - run: yarn test:api
workflows:
  build:
    jobs:
      - main
// mock-db-test-util.js
import { MongoMemoryServer } from "mongodb-memory-server"
import mockMongoose from "mongoose"
const mockMongoMemoryServer = MongoMemoryServer.create()
// @ts-ignore
jest.mock("../../src/db", () => ({
  // @ts-ignore
  ...(jest.requireActual("../../src/db").default as any),
  // @ts-ignore
  connect: jest.fn().mockImplementation(async () => {
    const mongo = await mockMongoMemoryServer
    const uri = mongo.getUri()
    await mockMongoose.connect(uri)
  }),
}))

Can't establish a connection between Flask application and Mongo using Docker and Docker-Compose

I'm having trouble getting Flask to connect to the Mongo database. I was able to connect to the database via PyCharm's Database tool, and I also downloaded MongoDB Compass to check whether I could connect from there, which worked. The Mongo server is installed in containers, not locally.
This is my docker and docker-compose:
dockerfile:
FROM python:3.10
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY Pipfile Pipfile.lock /code/
RUN pip install pipenv && pipenv install --system
CMD python main.py
docker-compose.yaml:
version: "3.8"
services:
db_mongo:
image: mongo:5.0
container_name: mongo
ports:
- "27018:27017"
volumes:
- ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
- ./mongo-volume:/data/db
env_file:
- ./env/.env_database
backend:
build: .
volumes:
- .:/code/
ports:
- '8100:5000'
container_name: flask_api_container
depends_on:
- db_mongo
init-mongo.js:
db.log.insertOne({"message": "Database created."});
db = db.getSiblingDB('admin');
db.auth('mk', 'adminPass')
db = db.getSiblingDB('flask_api_db');
db.createUser(
  {
    user: 'flask_api_user',
    pwd: 'dbPass',
    roles: [
      {
        role: 'dbOwner',
        db: 'flask_api_db'
      }
    ]
  }
)
db.createCollection('collection_test');
I copied the URI from MongoDB Compass (I connected to the database with this address) and have tried various combinations of it.
main.py:
import pymongo
from flask import Flask, Response, jsonify
from flask_pymongo import PyMongo

app = Flask(__name__)

uri = 'mongodb://flask_api_user:dbPass@db_mongo:27018/?authMechanism=DEFAULT&authSource=flask_api_db'
client = pymongo.MongoClient(uri)
client.admin.command('ismaster')  # to check if the connection has been established - show errors in terminal

# try:
#     mongodb_client = PyMongo(
#         app=app,
#         uri='mongodb://flask_api_user:dbPass@db_mongo:27018/?authMechanism=DEFAULT&authSource=flask_api_db'
#         # uri='mongodb://flask_api_user:dbPass@localhost:27018/?authMechanism=DEFAULT&authSource=flask_api_db'
#     )
#     db = mongodb_client.db
#     print('DB: ', db, flush=True)
#     # client.admin.command('ismaster')
#     # print('OK', flush=True)
# except:
#     print('Error', flush=True)

@app.route('/')
def index():
    return 'Test2'

# @app.route("/add_one/")
# def add_one():
#     db.my_collection.insert_one({'title': "todo title", 'body': "todo body"})
#     return jsonify(message="success")

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8100)
Errors:
When I did this, it returned None:
# db = mongodb_client.db
# print('DB: ', db, flush=True)
Below are the errors from this method:
uri = 'mongodb://flask_api_user:dbPass@db_mongo:27018/?authMechanism=DEFAULT&authSource=flask_api_db'
client = pymongo.MongoClient(uri)
client.admin.command('ismaster')  # to check if the connection has been established - show errors in terminal
flask_api_container | Traceback (most recent call last):
flask_api_container | File "/code/main.py", line 9, in <module>
flask_api_container | client.admin.command('ismaster')
flask_api_container | File "/usr/local/lib/python3.10/site-packages/pymongo/database.py", line 721, in command
flask_api_container | with self.__client._socket_for_reads(read_preference, session) as (
flask_api_container | File "/usr/local/lib/python3.10/site-packages/pymongo/mongo_client.py", line 1235, in _socket_for_reads
flask_api_container | server = self._select_server(read_preference, session)
flask_api_container | File "/usr/local/lib/python3.10/site-packages/pymongo/mongo_client.py", line 1196, in _select_server
flask_api_container | server = topology.select_server(server_selector)
flask_api_container | File "/usr/local/lib/python3.10/site-packages/pymongo/topology.py", line 251, in select_server
flask_api_container | servers = self.select_servers(selector, server_selection_timeout, address)
flask_api_container | File "/usr/local/lib/python3.10/site-packages/pymongo/topology.py", line 212, in select_servers
flask_api_container | server_descriptions = self._select_servers_loop(selector, server_timeout, address)
flask_api_container | File "/usr/local/lib/python3.10/site-packages/pymongo/topology.py", line 227, in _select_servers_loop
flask_api_container | raise ServerSelectionTimeoutError(
flask_api_container | pymongo.errors.ServerSelectionTimeoutError: db_mongo:27018: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 62a9baa2cc5e740091e0a60b, topology_type: Unknown, servers: [<ServerDescription ('db_mongo', 27018) server_type: Unknown, rtt: None, error=AutoReconnect('db_mongo:27018: [Errno 111] Connection refused')>]>
flask_api_container exited with code 1
None of the URIs I used worked.
Thanks for any help in resolving this issue.
The mapped port (27018) is used when you connect from the host to the database. Containers on the same bridge network as the database connect directly to the container and should use the container port (27017).
So your connection string should be
uri = 'mongodb://flask_api_user:dbPass@db_mongo:27017/?authMechanism=DEFAULT&authSource=flask_api_db'
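As a quick sketch of the two cases (the host form assumes you are connecting from outside Docker, e.g. from Compass or PyCharm):
# from another container on the same compose network: service name + container port
uri = 'mongodb://flask_api_user:dbPass@db_mongo:27017/?authMechanism=DEFAULT&authSource=flask_api_db'
# from the host machine: localhost + the mapped port
uri = 'mongodb://flask_api_user:dbPass@localhost:27018/?authMechanism=DEFAULT&authSource=flask_api_db'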

Flutter cannot connect to the Dockerized localhost parse-server

I am trying to run a parse-server and parse-dashboard via the following docker-compose.yml
docker-compose:
version: '3.9'
services:
  database:
    image: mongo:5.0
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
    volumes:
      - data_volume:/data/mongodb
  server:
    restart: always
    image: parseplatform/parse-server:4.10.4
    ports:
      - 1337:1337
    environment:
      - PARSE_SERVER_APPLICATION_ID=COOK_APP
      - PARSE_SERVER_MASTER_KEY=MASTER_KEY_1
      - PARSE_SERVER_CLIENT_KEY=CLIENT_KEY_1
      - PARSE_SERVER_DATABASE_URI=mongodb://admin:admin@mongo/parse_server?authSource=admin
      - PARSE_ENABLE_CLOUD_CODE=yes
    links:
      - database:mongo
    volumes:
      - data_volume:/data/server
      - ./../lib/core/database/parse_server/cloud:/parse-server/cloud
  dashboard:
    image: parseplatform/parse-dashboard:4.0.0
    ports:
      - "4040:4040"
    depends_on:
      - server
    restart: always
    environment:
      - PARSE_DASHBOARD_APP_ID=COOK_APP
      - PARSE_DASHBOARD_MASTER_KEY=MASTER_KEY_1
      - PARSE_DASHBOARD_USER_ID=admin
      - PARSE_DASHBOARD_USER_PASSWORD=admin
      - PARSE_DASHBOARD_ALLOW_INSECURE_HTTP=true
      - PARSE_DASHBOARD_SERVER_URL=http://localhost:1337/parse
    volumes:
      - data_volume:/data/dashboard
volumes:
  data_volume:
    driver: local
After the containers are running via docker-compose up, I try to connect from Flutter and write a new class to my server:
Flutter code:
import 'package:flutter/material.dart';
import 'package:parse_server_sdk_flutter/parse_server_sdk.dart';

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  const keyApplicationId = 'COOK_APP';
  const keyClientKey = 'CLIENT_KEY_1';
  const keyParseServerUrl = 'http://localhost:1337/parse';
  var res = await Parse().initialize(keyApplicationId, keyParseServerUrl,
      clientKey: keyClientKey, autoSendSessionId: true);
  var connRes = await res.healthCheck();
  var s = connRes.error?.message ?? "";
  print("ERROR:" + s);
  var firstObject = ParseObject('FirstClass')
    ..set(
        'message', 'Hey ! First message from Flutter. Parse is now connected');
  await firstObject.save();
  print('done');
}
My error message:
SocketException: Connection refused (OS Error: Connection refused, errno = 111), address = localhost, port = 35262
But for some unknown reason I cannot connect to my local server, even though I can access my dashboard with no problem.
Your application running on the mobile device/emulator treats localhost as the device itself. In order to access the parse server running on your host machine (the container binds to 0.0.0.0), you need to specify the host's IP address.
Just replace const keyParseServerUrl = 'http://localhost:1337/parse'; with const keyParseServerUrl = 'http://YOUR_HOST_IP_ADDRESS:1337/parse'; provided the parse server is exposed on 0.0.0.0.
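For example (a sketch; 192.168.1.100 is a placeholder for your actual host IP, and on the stock Android emulator the special alias 10.0.2.2 reaches the host machine):
// replace the emulator-local 'localhost' with an address that reaches the host
const keyParseServerUrl = 'http://192.168.1.100:1337/parse';
// or, on the Android emulator:
// const keyParseServerUrl = 'http://10.0.2.2:1337/parse';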

Connecting Golang and Postgres docker containers

I'm trying to run a golang server at localhost:8080 that uses a postgres database. I've tried to containerize both the db and the server but can't seem to get them connected.
main.go
func (a *App) Initialize() {
    var db *gorm.DB
    var err error
    envErr := godotenv.Load(".env")
    if envErr != nil {
        log.Fatalf("Error loading .env file")
    }
    var dbString = fmt.Sprintf("port=5432 user=sample dbname=sampledb sslmode=disable password=password host=db")
    db, err = gorm.Open("postgres", dbString)
    if err != nil {
        fmt.Printf("failed to connect to database: %v\n", err)
    }
    a.DB = model.DBMigrate(db)
    a.Router = mux.NewRouter()
    a.setRoutes()
}

//Get : get wrapper
func (a *App) Get(path string, f func(w http.ResponseWriter, r *http.Request)) {
    a.Router.HandleFunc(path, f).Methods("GET")
}

//Post : post wrapper
func (a *App) Post(path string, f func(w http.ResponseWriter, r *http.Request)) {
    a.Router.HandleFunc(path, f).Methods("POST")
}

//Run : run on port
func (a *App) Run(port string) {
    handler := cors.Default().Handler(a.Router)
    log.Fatal(http.ListenAndServe(port, handler))
}

func (a *App) setRoutes() {
    a.Get("/", a.handleRequest(controller.Welcome))
    a.Get("/users", a.handleRequest(controller.GetUsers))
    a.Get("/user/{id}", a.handleRequest(controller.GetUser))
    a.Post("/login", a.handleRequest(controller.HandleLogin))
    a.Post("/users/add", a.handleRequest(controller.CreateUser))
    a.Post("/validate", a.handleRequest(controller.HandleValidation))
}

func main() {
    app := &App{}
    app.Initialize()
    app.Run(":8080")
}
server Dockerfile
FROM golang:latest
RUN mkdir /app
WORKDIR /app/server
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
docker-compose.yml
version: '3.7'
services:
  db:
    image: postgres
    container_name: ep-db
    environment:
      - POSTGRES_PORT=${DB_PORT}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    ports:
      - '5432:5432'
    volumes:
      - ./db:/var/lib/postgresql/data
    networks:
      - internal
  server:
    container_name: ep-server
    build:
      context: ./server
      dockerfile: Dockerfile
    command: bash -c "go build && ./server -b 0.0.0.0:8080 --timeout 120"
    volumes:
      - './server:/app/server'
    expose:
      - 8080
    depends_on:
      - db
    networks:
      - internal
    stdin_open: true
volumes:
  db:
  server:
networks:
  internal:
    driver: bridge
I have some GET and POST requests that return the right values when I run the server locally on my computer (for example, localhost:8080/users returns a JSON list of users from the database), but when I use curl inside the server container, I don't get any results. I am new to Docker; is there something wrong with what I'm doing so far?
Each docker container has its own IP address. When you connect to the postgres db from your application, you are using localhost, which is the container for the application and not the db. Based on your docker-compose, you should use the hostname db (the service name) to connect to the database.
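For reference, a sketch of the gorm DSN using the compose service name as the host (this matches what the posted main.go already does with host=db; the fix applies wherever localhost is still used):
dbString := "port=5432 user=sample dbname=sampledb sslmode=disable password=password host=db"
db, err := gorm.Open("postgres", dbString)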
As suggested by @DavidMaze, you should verify the logs from your server container. Also:
First, ensure ep-server is running (check that the output of docker container ls shows a running status for ep-server).
Run docker logs ep-server to view errors (if any).
If there are no errors in the logs, run docker exec -it ep-server bash to log in to your container, then run telnet ep-db 5432 to verify that your postgres instance is reachable from ep-server, as in the session sketched below.
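A sketch of that checklist as a shell session (ep-db and ep-server are the container names from the compose file above; telnet may first need to be installed inside the image):
docker container ls            # confirm ep-server shows an "Up" status
docker logs ep-server          # inspect the server output for errors
docker exec -it ep-server bash # open a shell inside the server container
telnet ep-db 5432              # from inside: check postgres answers on the compose network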

How to pass a database connection into Airflow KubernetesPodOperator

I'm confused about Airflow's KubernetesPodOperator: how do I pass my load_users_into_table() function, which takes a conn_id parameter stored in Airflow's connections, into the pod?
The official docs propose putting the conn_id in a Secret, but I don't understand how to pass it to my load_users_into_table() function after that.
https://airflow.apache.org/docs/stable/kubernetes.html
The function (task) to be executed in the pod:
def load_users_into_table(postgres_hook, schema, path):
    gdf = read_csv(path)
    gdf.to_sql('users', con=postgres_hook.get_sqlalchemy_engine(), schema=schema)
The DAG:
_pg_hook = PostgresHook(postgres_conn_id=_conn_id)
with dag:
    test = KubernetesPodOperator(
        namespace=namespace,
        image=image_name,
        cmds=["python", "-c"],
        arguments=[load_users_into_table],
        labels={"dag-id": dag.dag_id},
        name="airflow-test-pod",
        task_id="task-1",
        is_delete_operator_pod=True,
        in_cluster=in_cluster,
        get_logs=True,
        config_file=config_file,
        executor_config={
            "KubernetesExecutor": {
                "request_memory": "512Mi",
                "limit_memory": "1024Mi",
                "request_cpu": "1",
                "limit_cpu": "2",
            }
        },
    )
Assuming you want to run with KubernetesPodOperator, you can use argparse and add arguments to the docker cmd. Something along these lines should do the job:
import argparse

def f(arg):
    print(arg)

parser = argparse.ArgumentParser()
parser.add_argument('--foo', help='foo help')
args = parser.parse_args()

if __name__ == '__main__':
    f(args.foo)
Dockerfile:
FROM python:3
COPY main.py main.py
CMD ["python", "main.py", "--foo", "somebar"]
There are other ways to solve this, such as using Secrets, ConfigMaps or even Airflow Variables, but this should get you moving forward; a sketch of the Secret route follows below.
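A minimal sketch of the Secret route (assuming a Kubernetes Secret named airflow-secrets with a key postgres_conn holding the connection URI; both names are hypothetical, and the pattern mirrors the linked Airflow docs):
from airflow.contrib.kubernetes.secret import Secret
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

# expose the secret as an environment variable inside the pod
secret_env = Secret(
    deploy_type='env',              # inject as an env var rather than a mounted file
    deploy_target='POSTGRES_CONN',  # env var name the pod will see
    secret='airflow-secrets',       # Kubernetes Secret name (hypothetical)
    key='postgres_conn',            # key inside that Secret (hypothetical)
)

test = KubernetesPodOperator(
    namespace=namespace,
    image=image_name,
    name="airflow-test-pod",
    task_id="task-1",
    secrets=[secret_env],  # the operator wires the Secret into the pod spec
    get_logs=True,
)
The script inside the pod would then read os.environ['POSTGRES_CONN'] and build its own SQLAlchemy engine, since PostgresHook is only available where Airflow itself is installed.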