- name: run-study-optuna
  script:
    image: optuna/optuna:py3.8
    command: [bash]
    source: |
      python optuna.py
- name: optuna-dashboard
  daemon: true
  inputs:
    parameters:
      - name: postgres-ip
  script:
    image: optuna/optuna:py3.8
    command: [bash]
    source: |
      pip install optuna-dashboard
      pip install optuna-fast-fanova gunicorn
      optuna-dashboard sqlite:///db.sqlite3
The run-study-optuna template itself works fine, but I also want to run the Optuna dashboard through Argo.
optuna-dashboard sqlite:///db.sqlite3
Listening on http://localhost:8080/
Hit Ctrl-C to quit.
The dashboard prints this message, but when I try to open it in a browser using the pod IP, I can't reach it. How can I make the dashboard accessible from outside the pod?
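For reference, here is a minimal sketch of what I would expect the daemon template to look like if the dashboard has to listen on all interfaces rather than only on localhost; the --host/--port flags and the port 8080 are my assumptions, not something confirmed above:

- name: optuna-dashboard
  daemon: true
  script:
    image: optuna/optuna:py3.8
    command: [bash]
    source: |
      pip install optuna-dashboard optuna-fast-fanova gunicorn
      # bind to 0.0.0.0 so the dashboard is reachable via the pod IP, not just localhost
      optuna-dashboard --host 0.0.0.0 --port 8080 sqlite:///db.sqlite3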
I am trying to automate deployments of my Strapi app to Google App Engine using Cloud Build. This is my cloudbuild.yaml:
steps:
  - name: 'ubuntu'
    entrypoint: "bash"
    args:
      - "-c"
      - |
        rm -rf app.yaml
        touch app.yaml
        cat <<EOT >> app.yaml
        runtime: custom
        env: flex
        env_variables:
          HOST: '0.0.0.0'
          NODE_ENV: 'production'
          DATABASE_NAME: ${_DATABASE_NAME}
          DATABASE_USERNAME: ${_DATABASE_USERNAME}
          DATABASE_PASSWORD: ${_DATABASE_PASSWORD}
          INSTANCE_CONNECTION_NAME: ${_INSTANCE_CONNECTION_NAME}
        beta_settings:
          cloud_sql_instances: ${_CLOUD_SQL_INSTANCES}
        automatic_scaling:
          min_num_instances: 1
          max_num_instances: 2
        EOT
        cat app.yaml
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args: ['-c', 'gcloud app deploy app.yaml --project ecomm-backoffice']
If I understand correctly how general CI/CD works, this file should create an app.yaml and then run gcloud app deploy app.yaml --project ecomm-backoffice.
However, Cloud Build starts nested, recursive builds once I push my changes to GitHub (triggers are enabled).
Can someone please help me with the right way of deploying Strapi/Node.js to App Engine using Cloud Build? I have searched for solutions but haven't had any luck so far.
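For context, the underscore-prefixed values such as ${_DATABASE_NAME} are Cloud Build user substitutions. A minimal sketch of how they could be given placeholder defaults directly in cloudbuild.yaml (the real values would normally come from the trigger's substitution settings, so these names and values are only illustrative):

substitutions:
  _DATABASE_NAME: 'placeholder-db'
  _DATABASE_USERNAME: 'placeholder-user'
  _DATABASE_PASSWORD: 'placeholder-password'
  _INSTANCE_CONNECTION_NAME: 'project:region:instance'
  _CLOUD_SQL_INSTANCES: 'project:region:instance'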
I have the following GitHub workflow for building my project
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: set up JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Build with Maven
        run: mvn clean compile test
The build works just fine.
However, the project's JUnit tests require a localhost server listening on port 4444, and I get the following error:
Connection refused: localhost/127.0.0.1:4444
The server is spun up before each JUnit test and is part of the test suite.
How do I tell the Docker container that network connections are allowed on this port?
Or are there any open ports by default?
I'll share my solution; hopefully it helps.
The Dockerfile for the test server listening on the port (in my case 8080). As the comments above mention, you need to expose the port:
FROM golang:1.15.2-alpine
# Set up your server here
# This container exposes port 8080 to the outside world
EXPOSE 8080
# Run the executable
CMD ["./main"]
The docker-compose file (this is not a must, but in my case I had to start multiple dependent containers). Again, make sure the port is mapped:
services:
  main-server:
    build: ./
    container_name: main-server
    image: artofimagination/main-server
    ports:
      - 8080:8080
The GitHub Action: just run the docker-compose command for your server, then run the application that needs to connect to the port. In this case pytest sends requests to main-server through port 8080.
It is worth mentioning that in my example pytest accesses 127.0.0.1:8080:
- name: Check out code into the Go module directory
  uses: actions/checkout@v2
- name: Start test server
  run: docker-compose up -d main-server
- name: Run functional test
  run: pip3 install -r test/requirements.txt && pytest -v test
Good luck and I hope this helps.
I have a small cloudbuild.yaml file where I build a Docker image, push it to Google Container Registry (GCR) and then apply the changes to my Kubernetes cluster. It looks like this:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: [
      '-c',
      'docker pull gcr.io/$PROJECT_ID/frontend:latest || exit 0'
    ]
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-f",
        "./services/frontend/prod.Dockerfile",
        "-t",
        "gcr.io/$PROJECT_ID/frontend:$REVISION_ID",
        "-t",
        "gcr.io/$PROJECT_ID/frontend:latest",
        ".",
      ]
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/frontend"]
  - name: "gcr.io/cloud-builders/kubectl"
    args: ["apply", "-f", "kubernetes/gcp/frontend.yaml"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
      - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
  - name: "gcr.io/cloud-builders/kubectl"
    args: ["rollout", "restart", "deployment/frontend-deployment"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
      - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
The build runs smoothly until the last step, args: ["rollout", "restart", "deployment/frontend-deployment"], which produces the following log output:
Already have image (with digest): gcr.io/cloud-builders/kubectl
Running: gcloud container clusters get-credentials --project="cents-ideas" --zone="europe-west3-a" "cents-ideas"
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cents-ideas.
Running: kubectl rollout restart deployment/frontend-deployment
error: unknown command "restart deployment/frontend-deployment"
See 'kubectl rollout -h' for help and examples.
Allegedly, restart is an unknown command. But it works when I run kubectl rollout restart deployment/frontend-deployment manually.
How can I fix this problem?
Looking at the Kubernetes release notes, the kubectl rollout restart command was introduced in v1.15. In your case, it seems Cloud Build is using an older version in which this command wasn't implemented yet.
After doing some tests, it appears Cloud Build picks its kubectl client version based on the cluster's server version. For example, when running the following build:
steps:
  - name: "gcr.io/cloud-builders/kubectl"
    args: ["version"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=<cluster_zone>"
      - "CLOUDSDK_CONTAINER_CLUSTER=<cluster_name>"
if the cluster's master version is v1.14, Cloud Build uses a v1.14 kubectl client and returns the same unknown command "restart" error message. When the master's version is v1.15, Cloud Build uses a v1.15 kubectl client and the command runs successfully.
So in your case, I suspect your "cents-ideas" cluster's master version is <1.15, which would explain the error you're getting. As for why it works when you run the command manually (locally, I assume), your local kubectl is probably authenticated against another cluster whose master version is >=1.15.
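If that is the case, one possible fix (an assumption on my part, since I don't know your cluster's exact version) would be to upgrade the cluster's master to a 1.15+ release available in your zone, so Cloud Build picks a matching kubectl, along the lines of:

gcloud container clusters upgrade cents-ideas --master --cluster-version 1.15 --zone europe-west3-a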
I'm trying to set up a CI/CD pipeline in GitHub Actions for my Elixir project.
I can fetch dependencies, compile them, check formatting, run Credo... But when the tests start, I'm not able to reach the PostgreSQL service declared in the YAML.
How can I link both containers (Elixir and PostgreSQL)?
According to the logs shown in GitHub Actions, both containers are on the same Docker network, so they should be reachable from each other using their network aliases. However, when I try to connect to the postgres one, I get NXDOMAIN; pinging it doesn't work either, as expected.
The content of my workflow:
name: Elixir CI
on: push
jobs:
  build:
    runs-on: ubuntu-18.04
    container:
      image: elixir:1.9.1
    services:
      postgres:
        image: postgres
        ports:
          - 5432:5432
        env:
          POSTGRES_USER: my_app
          POSTGRES_PASSWORD: my_app
          POSTGRES_DB: my_app_test
    steps:
      - uses: actions/checkout@v1
      - name: Install Dependencies
        env:
          MIX_ENV: test
        run: |
          cp config/test.secret.ci.exs config/test.secret.exs
          mix local.rebar --force
          mix local.hex --force
          apt-get update -qqq && apt-get install make gcc -y -qqq
          mix deps.get
      - name: Compile
        env:
          MIX_ENV: test
        run: mix compile --warnings-as-errors
      - name: Run formatter
        env:
          MIX_ENV: test
        run: mix format --check-formatted
      - name: Run Credo
        env:
          MIX_ENV: test
        run: mix credo
      - name: Run Tests
        env:
          MIX_ENV: test
        run: mix test
Also, on the Elixir side I have configured the test task to connect to postgres:5432, but it says the host does not exist.
According to some tutorials and examples I found on the Internet, this configuration looks valid, but nothing I did made it work.
You need to pass the name of the service ("postgres") as POSTGRES_HOST to the application and set the port with POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }} (spaces matter).
GitHub Actions dynamically routes the host and port for you.
I wrote a blog post on the subject a couple of days ago.
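As a minimal sketch of what that might look like (assuming your Ecto config reads POSTGRES_HOST and POSTGRES_PORT from the environment; those variable names are just examples), the Run Tests step becomes something like:

- name: Run Tests
  env:
    MIX_ENV: test
    POSTGRES_HOST: postgres
    POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
  run: mix test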
I want to create and remove a job using Google Cloud Builder. Here's my configuration which builds my Docker image and pushes to GCR.
# cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/xyz/abc:latest', '-f', 'Dockerfile.ng-unit', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/xyz/abc:latest']
Now I want to create a Kubernetes job by running something like
kubectl create -R -f ./kubernetes
which creates the jobs defined in the kubernetes folder.
I know Cloud Build has - name: 'gcr.io/cloud-builders/kubectl', but I can't figure out how to use it. Also, how do I authenticate it to run kubectl commands? How can I use service_key.json?
I wasn't able to connect and get cluster credentials. Here's what I did:
1. Go to IAM and add another role to xyz@cloudbuild.gserviceaccount.com. I used Project Editor.
2. Add this step to cloudbuild.yaml:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
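For completeness, the kubectl builder also needs to know which cluster to fetch credentials for; a sketch of the full step following the same pattern as the earlier examples, with placeholder zone and cluster values (substitute your own):

- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=<cluster_zone>'
    - 'CLOUDSDK_CONTAINER_CLUSTER=<cluster_name>'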