When building an image from my Dockerfile I need to grab dependencies. This is done using go get ./....
However, when running docker build -t test . it fails at the go get command.
Here is the error message:
exec go get -v -d
github.com/gorilla/mux (download)
cd .; git clone https://github.com/gorilla/mux /go/src/github.com/gorilla/mux
Cloning into '/go/src/github.com/gorilla/mux'...
fatal: unable to access 'https://github.com/gorilla/mux/': Could not resolve host: github.com
package github.com/gorilla/mux: exit status 128
Here is the Dockerfile:
FROM golang
# Create a directory inside the container to store all our application and then make it the working directory.
RUN mkdir -p /go/src/example-app
WORKDIR /go/src/example-app
# Copy the example-app directory (where the Dockerfile lives) into the container.
COPY . /go/src/example-app
# Download and install any required third party dependencies into the container.
RUN go-wrapper download
RUN go-wrapper install
RUN go get ./...
# Set the PORT environment variable inside the container
ENV PORT 8080
# Expose port 8080 to the host so we can access our application
EXPOSE 8080
# Now tell Docker what command to run when the container starts
CMD ["go-wrapper", "run"]
I assume you're doing that via SSH on another machine. Check whether a DNS server is configured in that machine's /etc/network/interfaces. It should look something like this:
iface eth0 inet static
address 192.168.2.9
gateway 192.168.2.1
netmask 255.255.255.0
broadcast 192.168.2.255
dns-nameservers 192.168.2.1 8.8.4.4
DNS servers that "always" work are 8.8.8.8 and 8.8.4.4, both provided by Google. Try those first; if that doesn't resolve your problem, check your internet connection for other misconfigurations.
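If the host itself resolves names fine but containers don't, you can also hand the DNS servers to the Docker daemon directly. A minimal sketch, assuming a systemd-based host (the file may not exist yet, and the restart command may differ on your distribution):
echo '{ "dns": ["8.8.8.8", "8.8.4.4"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
# quick check that resolution now works inside a container:
docker run --rm busybox nslookup github.com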
I'm trying out Github codespaces, specifically the "Node.js & Mongo DB" default settings.
The port is forwarded, and my objective is to connect with MongoDB Compass running on my local machine.
The address forwarded to 27017 is something like https://<long-address>.githubpreview.dev/
My attempt
I attempted to use the following connection string, but it did not work in MongoDB Compass; it failed with No addresses found at host. I'm also unsure how I can even determine whether MongoDB is actually running in the Github codespace.
mongodb+srv://root:example@<long-address>.githubpreview.dev/
.devcontainer files
docker-compose.yml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        # Update 'VARIANT' to pick an LTS version of Node.js: 16, 14, 12.
        # Append -bullseye or -buster to pin to an OS version.
        # Use -bullseye variants on local arm64/Apple Silicon.
        VARIANT: "16"
    volumes:
      - ..:/workspace:cached
    init: true
    # Overrides default command so things don't shut down after the process ends.
    command: sleep infinity
    # Runs app on the same network as the database container, allows "forwardPorts" in devcontainer.json function.
    network_mode: service:db
    # Uncomment the next line to use a non-root user for all processes.
    # user: node
    # Use "forwardPorts" in **devcontainer.json** to forward an app port locally.
    # (Adding the "ports" property to this file will not forward from a Codespace.)

  db:
    image: mongo:latest
    restart: unless-stopped
    volumes:
      - mongodb-data:/data/db
    # Uncomment to change startup options
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
      MONGO_INITDB_DATABASE: foo
    # Add "forwardPorts": ["27017"] to **devcontainer.json** to forward MongoDB locally.
    # (Adding the "ports" property to this file will not forward from a Codespace.)

volumes:
  mongodb-data: null
And the devcontainer.json file:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.203.0/containers/javascript-node-mongo
// Update the VARIANT arg in docker-compose.yml to pick a Node.js version
{
    "name": "Node.js & Mongo DB",
    "dockerComposeFile": "docker-compose.yml",
    "service": "app",
    "workspaceFolder": "/workspace",

    // Set *default* container specific settings.json values on container create.
    "settings": {},

    // Add the IDs of extensions you want installed when the container is created.
    "extensions": [
        "dbaeumer.vscode-eslint",
        "mongodb.mongodb-vscode"
    ],

    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": [3000, 27017],

    // Use 'postCreateCommand' to run commands after the container is created.
    // "postCreateCommand": "yarn install",

    // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
    "remoteUser": "node",
    "features": {
        "git": "os-provided"
    }
}
And finally the Dockerfile:
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 16, 14, 12, 16-bullseye, 14-bullseye, 12-bullseye, 16-buster, 14-buster, 12-buster
ARG VARIANT=16-bullseye
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
# Install MongoDB command line tools if on buster and x86_64 (arm64 not supported)
ARG MONGO_TOOLS_VERSION=5.0
RUN . /etc/os-release \
&& if [ "${VERSION_CODENAME}" = "buster" ] && [ "$(dpkg --print-architecture)" = "amd64" ]; then \
curl -sSL "https://www.mongodb.org/static/pgp/server-${MONGO_TOOLS_VERSION}.asc" | gpg --dearmor > /usr/share/keyrings/mongodb-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/mongodb-archive-keyring.gpg] http://repo.mongodb.org/apt/debian $(lsb_release -cs)/mongodb-org/${MONGO_TOOLS_VERSION} main" | tee /etc/apt/sources.list.d/mongodb-org-${MONGO_TOOLS_VERSION}.list \
&& apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get install -y mongodb-database-tools mongodb-mongosh \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*; \
fi
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment if you want to install an additional version of node using nvm
# ARG EXTRA_NODE_VERSION=10
# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
# [Optional] Uncomment if you want to install more global node modules
# RUN su node -c "npm install -g <your-package-list-here>"
Update
I also posted here in the MongoDB community, but no help...
As @iravinandan said, you need to set up a tunnel.
Publishing a port alone won't help as all incoming requests are going through an http proxy.
If you dig CNAME <long-address>.githubpreview.dev you will see it's github-codespaces.app.online.visualstudio.com. You can put anything in the githubpreview.dev subdomain and it will still be resolved on the DNS level.
The proxy relies on the HTTP Host header to route the request to the correct upstream, so it will work for HTTP protocols only.
To use any other protocol (the MongoDB wire protocol in your case) you need to set up a TCP tunnel from the codespace to your machine.
Simplest set up - direct connection
At the time of writing the default Node + Mongo codespace uses Debian buster, so ssh port forwarding would be the obvious choice. In the codespace/VSCode terminal:
ssh -R 27017:localhost:27017 your_public_ip
Then, in your Compass, connect to
mongodb://localhost:27017
It will require your local machine to run sshd of course, to have a public IP (or at least your router should forward incoming ssh traffic to your computer), and to allow it in the firewall. You can pick any other port if 27017 is already being used locally.
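For example, if 27017 is taken locally, forward a different one (27018 here is an arbitrary free port, just for illustration):
ssh -R 27018:localhost:27017 your_public_ip
and then point Compass at mongodb://localhost:27018.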
It's the simplest setup, but it exposes your laptop to the internet, and it's just a matter of time before it gets compromised.
A bit more secure - jumpbox in the middle
To keep your local system behind a DMZ you can set up a jumpbox instead - a minimalistic disposable Linux box somewhere on the internet, which will be used to chain 2 tunnels:
Remote port forwarding from codespace to the jumpbox
Local port forwarding from your laptop to the jumpbox
Then use the same
mongodb://localhost:27017
in Mongo Compass (see the sketch below).
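A minimal sketch of the chained tunnels, assuming a hypothetical jumpbox reachable as user@jumpbox.example.com:
# in the codespace/VSCode terminal: remote-forward the jumpbox's port 27017 to the codespace's mongodb
ssh -N -R 27017:localhost:27017 user@jumpbox.example.com
# on your laptop: local-forward your port 27017 to the jumpbox's 27017 (i.e. into the tunnel above)
ssh -N -L 27017:localhost:27017 user@jumpbox.example.com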
The jumpbox has to expose sshd to the internet, but you can minimise the risks by hardening its security. After all, it doesn't do anything but proxy traffic. An EC2 nano will be more than enough; just keep in mind that large data transfers might be expensive.
Hassle-free tunnel-as-a-service
Something you can try in 5 minutes. ngrok has been around for more than a decade, and it does exactly this - it sells tunnels (with a free tier sufficient for the demo).
In your codespace/VScode terminal:
npm i ngrok --save-dev
Installing it as a dev dependency avoids reinstalling it every time while ensuring it doesn't ship with your production code.
You will need to register an account on ngrok (SSO with github will do) to get an authentication code and pass it to the codespaces/VSCode terminal:
./node_modules/.bin/ngrok authtoken <the token>
Please remember it saves the token to the home directory, which will be wiped after a rebuild. Once authorised, you can open the tunnel in the codespaces/VSCode terminal:
./node_modules/.bin/ngrok tcp 27017
Codespaces will automatically forward the port, and the terminal will show you some stats (mind the free tier limit) along with the connection string.
The subdomain and port will change every time you open the tunnel.
For example, the connection parameters for MongoDB Compass might look like:
mongodb://0.tcp.ngrok.io:18862
with authorization parameters at the MongoDB level as needed.
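Given the credentials from the docker-compose.yml above, that would be, for example:
mongodb://root:example@0.tcp.ngrok.io:18862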
Again, keep in mind you leave your mongodb exposed to the internet (0.tcp.ngrok.io:18862), and mongo accepts unauthenticated connections.
I wouldn't leave it open for longer than necessary.
Use the built-in MongoDB client
The node + mongo environment comes with handy VSCode plugins pre-installed, including the official MongoDB one (mongodb.mongodb-vscode, listed in the devcontainer.json above).
Of course it lacks many of Compass's analytical tools, but it works out of the box and is sufficient for development.
Just open the plugin and connect to localhost.
Compass D.I.Y
The best option to get Compass functionality without compromising security, while keeping the zero-config objective, is to host Compass yourself. It's an Electron application, and MongoDB Atlas shows it working perfectly well in a browser.
The source code is available at https://github.com/mongodb-js/compass.
With a bit of effort you can craft a docker image to host Compass, include this image in the docker-compose file, and forward the port in devcontainer.json.
Github Codespaces will take care of authentication (keep the forwarded port private so only the owner of the codespace has access to it). All communication from the desktop to Compass will be over https, and Compass to MongoDB will be local to the docker network. Security-wise it will be on par with the VSCode mongodb plugin.
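A purely hypothetical sketch of the extra docker-compose.yml service - there is no official Compass web image at the time of writing, so the build context and the served port are placeholders you would have to provide yourself:
  compass:
    build: ./compass              # your own Dockerfile wrapping mongodb-js/compass
    network_mode: service:db      # same trick as the app service: share the db network
    # then add the HTTP port your image serves on to "forwardPorts" in devcontainer.json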
I have to consume an external REST API (using restTemplate.exchange) with Spring Boot. My REST API is running on port 8083 with the URL http://localhost:8083/myrest (Docker command: docker run -p 8083:8083 myrest-app).
The external API is available as a public docker image, and after running the commands below I am able to pull and run it locally.
docker pull dockerExternalId/external-rest-api
docker run -d -p 3000:3000 dockerExternalId/external-rest-api
a) If I enter external rest API URL, for example http://localhost:3000/externalrestapi/testresource directly in chrome, then I get valid JSON data.
b) If I invoke it from my myrest application in Eclipse (Spring Boot application), I still get a valid JSON response. (I am using the Windows platform to test this.)
c) But if I run it in Docker and execute the myrest service (say http://localhost:8083/myrest), then I am facing java.net.ConnectException: Connection refused.
More details :
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://localhost:3000/externalrestapi/testresource": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
P.S - I am using Docker on Windows.
The Problem
You run with:
docker run -p 8083:8083 myrest-app
But you need to run like:
docker run --network "host" --name "app" myrest-app
Passing the flag --network with the value host will allow your container to access your computer's network.
Please ignore my first approach and instead use the better alternative below, which does not expose the container to the entire host network. It is possible to make the first approach work, but it is not a best practice.
A Better Alternative
Create a network to be used by both containers:
docker network create external-api
Then run both containers with the flag --network external-api.
docker run --network "external-api" --name "app" -p 8083:8083 myrest-app
and
docker run -d --network "external-api" --name "api" -p 3000:3000 dockerExternalId/external-rest-api
The -p flag to publish the ports of the api container is only necessary if you want to access it from your computer's browser; otherwise just leave it out, because it isn't needed for the 2 containers to communicate on the external-api network.
TIP: docker pull is not necessary, since docker run will try to pull the image if it is not found on your computer.
Let me know how it went...
Call the External API
So in both solutions I have added the --name flag so that we can reach the other container in the network.
So to reach the external api from your rest app you need to use the url http://api:3000/externalrestapi/testresource.
Notice how I have replaced localhost with api, which matches the value of the --name flag in the docker run command for your external api.
If you try to access http://localhost:3000/externalrestapi/testresource from your myrest-app container, it will try to access port 3000 on the myrest-app container itself.
That's because each container runs in its own isolated environment, with its own network interface, file system, etc.
Docker is all about isolation.
There are 3 ways in which you can access an API in another container (each is sketched after this list):
Instead of localhost, provide the IP address of the external host machine (i.e. the IP address of your machine on which docker is running).
Create a docker network and attach these two containers. Then you can provide the container_name instead of localhost.
Use --link while starting the container (deprecated)
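A minimal sketch of all three, reusing the image names from this question (192.168.1.10 is a made-up example address; substitute your machine's real IP):
# 1) Host IP instead of localhost:
#    call http://192.168.1.10:3000/externalrestapi/testresource from your app
# 2) User-defined network plus container names:
docker network create external-api
docker run -d --network external-api --name api dockerExternalId/external-rest-api
docker run --network external-api --name app -p 8083:8083 myrest-app
#    inside "app" the API is now reachable at http://api:3000/externalrestapi/testresource
# 3) Legacy linking (deprecated):
docker run --link api -p 8083:8083 myrest-app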
I'm following a Pluralsight course (https://app.pluralsight.com/library/courses/docker-web-development/table-of-contents) which uses the older microsoft/aspnetcore-build image, but I'm running Core 2.1, so I'm using microsoft/dotnet:2.1-sdk instead.
The command I'm running is:
docker run -it -p 8080:5001 -v ${pwd}:/app -w "/app" microsoft/dotnet:2.1-sdk
and then, once inside the TTY, I do a dotnet run, which gives me the following output:
Using launch settings from /app/Properties/launchSettings.json...
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
      User profile is available. Using '/root/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest.
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[58]
      Creating key {5445e854-c1d9-4261-82f4-0fc3a7543e0a} with creation date 2018-12-14 10:41:13Z, activation date 2018-12-14 10:41:13Z, and expiration date 2019-03-14 10:41:13Z.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {5445e854-c1d9-4261-82f4-0fc3a7543e0a} may be persisted to storage in unencrypted form.
info: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[39]
      Writing data to file '/root/.aspnet/DataProtection-Keys/key-5445e854-c1d9-4261-82f4-0fc3a7543e0a.xml'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
      Unable to bind to https://localhost:5001 on the IPv6 loopback interface: 'Cannot assign requested address'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
      Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.
Hosting environment: Development
Content root path: /app
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Then, when I open a browser on my host and navigate to http://localhost:8080, I get "This page isn't working", "localhost didn't send any data", ERR_EMPTY_RESPONSE.
I've tried a couple different port combinations too with the same result.
Can anyone spot where I went wrong? Or have any ideas / suggestions?
Not sure if this question is still relevant for you, but I also encountered this issue, so I'm leaving my solution here for others. I used PowerShell with the following docker command (almost the same as your command; I just used internal port 90 instead of 5000, and added the --rm switch, which automatically removes the container when it exits):
docker run --rm -it -p 8080:90 -v ${pwd}:/app -w "/app" microsoft/dotnet /bin/bash
And after that, I got the interactive bash shell, and when typing dotnet run I got the same output as you and could not reach my site in the container via localhost:8080.
I resolved it by using the UseUrls method or the --urls command-line argument. Both indicate the IP or host addresses, with ports and protocols, that the server should listen on for requests. Below are descriptions of the solutions which worked for me.
First approach: edit the CreateWebHostBuilder method in Program.cs like below:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseUrls("http://+:90") //for your case you should use 5000 instead of 90
.UseStartup<Startup>();
You can specify several ports if needed using the following syntax: .UseUrls("http://+:90;http://+:5000")
With this approach, you just type dotnet run in the bash shell and then your container will be reachable at localhost:8080.
But with the previous approach you alter the default behavior of your source code, which you may forget about and then have to debug and fix in the future. So I prefer the second approach, which leaves the source code unchanged. After typing the docker command and getting an interactive bash shell, instead of just dotnet run, run it with the --urls argument like below (in your case use port 5000 instead of 90):
dotnet run --urls="http://+:90"
In the documentation there is also a third approach where you can use the ASPNETCORE_URLS environment variable, but this approach didn't work for me. I used the following command (with the -e switch):
docker run --rm -it -p 8080:90 -v ${pwd}:/app -w "/app" -e "ASPNETCORE_URLS=http://+:90" microsoft/dotnet /bin/bash
If you type printenv in bash you will see that the ASPNETCORE_URLS environment variable was passed to the container, but for some reason dotnet run is ignoring it.
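A possible explanation (my assumption, based on the first line of the log above): dotnet run applies Properties/launchSettings.json, whose applicationUrl can override ASPNETCORE_URLS. Skipping the launch profile should let the environment variable take effect:
dotnet run --no-launch-profile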
I'm looking for a way to connect a docker container app so it can access a Postgres database running locally via Postgres.app (https://postgresapp.com/).
I was wondering if there are certain ports to open, and what the yml file would look like, to enable docker-compose to work with a locally running Postgres.
Thanks!
You have to make changes in the Postgres config files on the host machine, specifically:
1) pg_hba.conf
2) postgresql.conf
You can find both under the /var/lib/postgresql/[version]/ path.
First check your docker0 bridge interface; it is usually 172.17.0.0/16, and if not, adjust the addresses below accordingly.
Make this change in postgresql.conf (its path will be the same as pg_hba.conf's):
listen_addresses = '*'
Then in pg_hba.conf add a rule such as:
host all all 172.17.0.0/16 md5
Then in the docker application use the host's IP address to connect to the Postgres instance running on the host.
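After restarting Postgres you can sanity-check the setup from a throwaway container. A minimal sketch, assuming a Linux host as this answer does (the database name and user are placeholders; 172.17.0.1 is the usual docker0 gateway address, as shown by ip addr show docker0):
docker run --rm -it postgres psql -h 172.17.0.1 -U postgres -d my_database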
There is no difference between accessing a database from inside a container or outside a container.
Let's say the Postgres database is running on localhost. If you want to connect to it from a small Python script, you can do so as follows, as found in the Docs:
#!/usr/bin/python
import psycopg2
import sys

def main():
    # Define our connection string
    conn_string = "host='localhost' dbname='my_database' user='postgres' password='secret'"

    # print the connection string we will use to connect
    # (python:latest is Python 3, so print must be a function)
    print("Connecting to database\n ->%s" % conn_string)

    # get a connection; if a connection cannot be made an exception will be raised here
    conn = psycopg2.connect(conn_string)

    # conn.cursor will return a cursor object; you can use this cursor to perform queries
    cursor = conn.cursor()
    print("Connected!\n")

if __name__ == "__main__":
    main()
In order for this script to run you need to install Python and psycopg2 on your local machine, probably in a virtual environment. Alternatively, you could put it in a container. Instead of installing everything manually, you would define a Dockerfile with your installation instructions. It would probably look something like this:
FROM python:latest
ADD python-script.py /opt/www/python-script.py
RUN pip install psycopg2
CMD ["python", "/opt/www/python-script.py"]
And if we build and run...
dave-mbp:Desktop dave$ ls -la
-rw-r--r-- 1 dave staff 135 Dec 8 19:05 Dockerfile
-rw-r--r-- 1 dave staff 22 Dec 8 19:04 python-script.py
dave-mbp:Desktop dave$ docker build -t python-script .
Sending build context to Docker daemon 17.92kB
Step 1/4 : FROM python:latest
latest: Pulling from library/python
85b1f47fba49: Already exists
ba6bd283713a: Pull complete
817c8cd48a09: Pull complete
47cc0ed96dc3: Pull complete
4a36819a59dc: Pull complete
db9a0221399f: Pull complete
7a511a7689b6: Pull complete
1223757f6914: Pull complete
Digest: sha256:db9d8546f3ff74e96702abe0a78a0e0454df6ea898de8f124feba81deea416d7
Status: Downloaded newer image for python:latest
---> 79e1dc9af1c1
Step 2/4 : ADD python-script.py /opt/www/python-script.py
---> 21f31c8803f7
Step 3/4 : RUN pip install psycopg2
---> Running in f280c82d74e7
Collecting psycopg2
Downloading psycopg2-2.7.3.2-cp36-cp36m-manylinux1_x86_64.whl (2.7MB)
Installing collected packages: psycopg2
Successfully installed psycopg2-2.7.3.2
---> bd38f911bb6a
Removing intermediate container f280c82d74e7
Step 4/4 : CMD python /opt/www/python-script.py
---> Running in 159b70861893
---> 4aa783be5c90
Removing intermediate container 159b70861893
Successfully built 4aa783be5c90
Successfully tagged python-script:latest
dave-mbp:Desktop dave$ docker run python-script
>> Connected!
Containers are largely a packaging solution. We'll be able to connect to an external database or API endpoint as if we were running everything from outside of a container.
There is one thing to keep in mind though... localhost may resolve to the virtual machine if you're using Docker Toolbox/VirtualBox, in which case you wouldn't connect via localhost unless you were running a bridged network connection in the VM. Otherwise, just specify the host IP instead of localhost.
Postgres default port is 5432.
Docker offers different networking modes to use. I can tell you how to achieve your goal using bridge mode (used by default).
There is always a bridge network that allows you to access the host machine. It is created by default and called docker0.
Run ip addr show docker0 in a terminal to see your host IP, which is available to all the containers you run.
Output on my machine:
developer@dlbra:~$ ip addr show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:95:60:89 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
Therefore you don't need any additional configuration in docker-compose.yml.
You just have to configure your db host to be the IP address you saw; in my case it would be 172.17.0.1.
If your locally installed Postgres listens on a different port, also specify that port in your application configuration.
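For completeness, a minimal docker-compose.yml sketch under those assumptions - the image name and environment variable names are placeholders; wire them up however your application reads its configuration:
version: '3'
services:
  app:
    image: your-app-image            # placeholder
    environment:
      DB_HOST: 172.17.0.1            # the docker0 address from `ip addr show docker0`
      DB_PORT: "5432"                # Postgres default; change it if yours differs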
I have followed the docker-compose Concourse installation setup.
Everything is up and running, but I can't figure out what to use as the --tsa-host value in the command that connects the worker to the TSA host.
It is worth mentioning that the Concourse web and db containers are running on the same machine that I hope to use as a bare-metal worker.
I have tried (1) using the IP address of the concourse web container, but no joy; I cannot even ping the docker container IP from the host.
1.
sudo ./concourse worker --work-dir ./worker --tsa-host IP_OF_DOCKER_CONTAINER --tsa-public-key host_key.pub --tsa-worker-private-key worker_key
I have also tried using (2) the CONCOURSE_EXTERNAL_URL and (3) the IP address of the host, but no luck either.
2.
sudo ./concourse worker --work-dir ./worker --tsa-host http://10.XXX.XXX.XX:8080 --tsa-public-key host_key.pub --tsa-worker-private-key worker_key
3.
sudo ./concourse worker --work-dir ./worker --tsa-host 10.XXX.XXX.XX:8080 --tsa-public-key host_key.pub --tsa-worker-private-key worker_key
Other details of setup:
Mac OSX Sierra
Docker For Mac
Please confirm that you use the internal IP of the host, not the public IP and not the container IP.
--tsa-host <INTERNAL_IP_OF_HOST>
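Note also that the TSA listens on port 2222 by default, not on the 8080 web port used in your attempts. Depending on your Concourse version the port goes either into --tsa-host itself or into a separate --tsa-port flag, so (with a made-up internal IP) something like:
sudo ./concourse worker --work-dir ./worker --tsa-host 10.0.0.5:2222 --tsa-public-key host_key.pub --tsa-worker-private-key worker_key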
If you use the docker-compose.yml as in its setup document, you needn't care about the TSA host; the environment variable has been defined already:
CONCOURSE_TSA_HOST: concourse-web
I used the docker-compose.yml recently, following the steps described at https://concourse-ci.org/docker-repository.html.
Please confirm that there is a keys directory next to the docker-compose.yml after you executed the steps:
mkdir -p keys/web keys/worker
ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker
export CONCOURSE_EXTERNAL_URL=http://192.168.99.100:8080
docker-compose up