I am trying to make this run, but it returns this error:
fscrawler | sed: -e expression #2, char 31: unknown option to `s'
I'm trying to run this command:
command: >
sh -c "sed -i -e "s/{ELASTIC_PASSWORD}/${ELASTIC_PASSWORD}/g"
-e "s/{ELASTICSEARCH_HOST}/${ELASTICSEARCH_HOST}/g"
-e "s/{FSCRAWLER_HOST}/${FSCRAWLER_HOST}/g" /root/.fscrawler/job1/_settings.yaml
&& fscrawler job1 --restart --rest"
I've tried with single quotes and many other variations (backslashes at the end of the lines as well), but I couldn't make it work.
SOLUTION:
docker-compose.yml
entrypoint: /docker/path/to/entrypoint.sh
environment:
  - ELASTIC_HOST=${ELASTIC_HOST}
  - ELASTIC_USER=${ELASTIC_USER}
  - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
  - FSCRAWLER_HOST=${FSCRAWLER_HOST}
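The ${...} values on the right-hand side are interpolated by Docker Compose from the shell environment or from a .env file sitting next to docker-compose.yml. A minimal .env sketch (the values below are placeholders of mine, not from the original setup):

# .env next to docker-compose.yml (example values only)
ELASTIC_HOST=elasticsearch
ELASTIC_USER=elastic
ELASTIC_PASSWORD=changeme
FSCRAWLER_HOST=fscrawler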
Dockerfile FSCrawler
...
COPY /host/path/to/entrypoint.sh /docker/path/to/entrypoint.sh
RUN chmod u+x /docker/path/to/entrypoint.sh
Entrypoint
#!/bin/bash
sed -i -e "s|{ELASTIC_USER}|${ELASTIC_USER}|g" \
-e "s|{ELASTIC_PASSWORD}|${ELASTIC_PASSWORD}|g" \
-e "s|{ELASTIC_HOST}|${ELASTIC_HOST}|g" \
-e "s|{FSCRAWLER_HOST}|${FSCRAWLER_HOST}|g" /root/.fscrawler/job1/_settings.yaml
fscrawler job1 --restart --rest
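The important change is the sed delimiter. Assuming one of the substituted values contains slashes, for example an http:// URL in ELASTICSEARCH_HOST (my assumption), the original s/.../.../g expression ends up with extra / characters inside the replacement, which is exactly what produces "unknown option to `s'". With | as the delimiter, the slashes in the value are just literal characters. A quick sketch of both behaviours:

# breaks: the slashes inside the value are read as extra s/// delimiters
ELASTICSEARCH_HOST="http://elasticsearch:9200"
echo "url: {ELASTICSEARCH_HOST}" | sed -e "s/{ELASTICSEARCH_HOST}/${ELASTICSEARCH_HOST}/g"
# fails with: unknown option to `s'

# works: '|' is the delimiter, so '/' in the value is literal
echo "url: {ELASTICSEARCH_HOST}" | sed -e "s|{ELASTICSEARCH_HOST}|${ELASTICSEARCH_HOST}|g"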
Related
I am trying to run Postgres in Docker with this command and it gives an error:
PS D:\Data Engineering with Zoomcamp> docker run -it -e POSTGRES_USER="root" -e POSTGRES_PASSWORD="root" -e POSTGRES_DB="ny_taxi"D:\Data Engineering with Zoomcamp\ny_taxi_postgres:/var/lib/postgresql/data -p 5432:5432 postgres:13
docker: invalid reference format: repository name must be lowercase.
See 'docker run --help'.
You forgot the -v (volume) flag, so Docker treated the path as the image name, which is why it complains that the repository name must be lowercase. Use the command below:
docker run -it -e POSTGRES_USER="root" -e POSTGRES_PASSWORD="root" -e POSTGRES_DB="ny_taxi" -v "D:\Data Engineering with Zoomcamp\ny_taxi_postgres":/var/lib/postgresql/data -p 5432:5432 postgres:13
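If you want to double-check that the bind mount actually took effect, inspecting the running container's mounts is a quick test (the container name below is a placeholder):

# show the mounts of the running container; Source should point at the Windows folder
docker inspect -f '{{ json .Mounts }}' <container-name>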
When I try to mount a database directory for PostgreSQL, I see my local directory is empty.
This is my code:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v /c/src/ny:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
When I run that code on MINGW64, I see Docker produce a file named "ny;C" and it's empty.
Why is it empty, and why is it named "ny;C" instead of "ny"? How can I fix that problem?
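For what it's worth, the "ny;C" name is the typical symptom of Git Bash / MSYS path conversion rewriting the -v argument before Docker sees it. Assuming that is what happens here, two common workarounds are disabling the conversion for the command or doubling the leading slash, sketched below:

# option 1: disable MSYS path conversion for this one command
MSYS_NO_PATHCONV=1 winpty docker run -it -e POSTGRES_USER="root" -e POSTGRES_PASSWORD="root" -e POSTGRES_DB="ny_taxi" -v /c/src/ny:/var/lib/postgresql/data -p 5432:5432 postgres:13

# option 2: a leading double slash keeps MSYS from touching the path
#   -v //c/src/ny:/var/lib/postgresql/data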
I am trying to deploy my application into an AWS cluster as follows.
Steps
Build the image and push it to Docker Hub (this works)
Deploy the image into the AWS cluster (I couldn't make it work)
I searched on Google but couldn't find any solution.
Here is my GitHub workflow file, deploy.yml. Any help in making it work is appreciated.
# This is a basic workflow that is manually triggered
name: Deploy Manual

# Controls when the action will run. Workflow runs when manually triggered using the UI
# or API.
on:
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "deploy"
  deploy:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    env:
      IMAGE_TAG: ${{ github.sha }}
      KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
      KUBE_NAMESPACE: production
      DOCKER_USER: ${{ secrets.DOCKER_HUB_USERNAME }}
      DOCKER_PASSWORD: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      RELEASE_IMAGE: ucars/ucars-ui3:${{ github.sha }}
    steps:
      # This step instructs GitHub to cancel any current run for this job on this very repository.
      - name: Cancel Previous Runs
        uses: styfle/cancel-workflow-action@0.4.1
        with:
          access_token: ${{ github.token }}
      - uses: actions/checkout@v2
      - name: docker login
        run: |
          docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
      - name: Build the Docker image
        run: docker build . --file Dockerfile --tag $RELEASE_IMAGE
      - name: Docker Push
        run: docker push $RELEASE_IMAGE
      - name: Deploy to Kubernetes cluster
        uses: kodermax/kubectl-aws-eks@master
        with:
          args: set image deployment/ucars-ui3-pod app=${{ env.RELEASE_IMAGE }} --record -n $KUBE_NAMESPACE
It is failing at the "Deploy to Kubernetes cluster" step:
2022-01-14T18:22:14.4557590Z ##[group]Run kodermax/kubectl-aws-eks@master
2022-01-14T18:22:14.4558128Z with:
2022-01-14T18:22:14.4559002Z *** set image deployment/***-ui3-pod app=***/***-ui3:3d23d9fb07a2ce43b3a27502359c1a0685705200 --record -n $KUBE_NAMESPACE
2022-01-14T18:22:14.4559708Z ***
2022-01-14T18:22:14.4560253Z IMAGE_TAG: 3d23d9fb07a2ce43b3a27502359c1a0685705200
2022-01-14T18:22:14.4608584Z KUBE_CONFIG_DATA: ***
2022-01-14T18:22:14.4609135Z KUBE_NAMESPACE: production
2022-01-14T18:22:14.4609639Z DOCKER_USER: ***
2022-01-14T18:22:14.4610253Z DOCKER_PASSWORD: ***
2022-01-14T18:22:14.4610915Z RELEASE_IMAGE: ***/***-ui3:3d23d9fb07a2ce43b3a27502359c1a0685705200
2022-01-14T18:22:14.4611509Z ##[endgroup]
2022-01-14T18:22:14.4809817Z ##[command]/usr/bin/docker run --name a74655ce21da3d4675874b9544657797b0_b31db8 --label 9916a7 --workdir /github/workspace --rm -e IMAGE_TAG -e KUBE_CONFIG_DATA -e KUBE_NAMESPACE -e DOCKER_USER -e DOCKER_PASSWORD -e RELEASE_IMAGE -e INPUT_ARGS -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_REF_NAME -e GITHUB_REF_PROTECTED -e GITHUB_REF_TYPE -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_ARCH -e RUNNER_NAME -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/***-ui3/***-ui3":"/github/workspace" 9916a7:4655ce21da3d4675874b9544657797b0 set image deployment/***-ui3-pod app=***/***-ui3:3d23d9fb07a2ce43b3a27502359c1a0685705200 --record -n $KUBE_NAMESPACE
2022-01-14T18:22:14.7791749Z base64: invalid input
I think I have found the issue: apparently KUBE_CONFIG_DATA is invalid. The entrypoint.sh in the kodermax/kubectl-aws-eks@master image is trying to decode it, can't, and throws that error.
#!/bin/sh
set -e
# Extract the base64 encoded config data and write this to the KUBECONFIG
echo "$KUBE_CONFIG_DATA" | base64 -d > /tmp/config
export KUBECONFIG=/tmp/config
sh -c "kubectl $*"
Please fix KUBE_CONFIG_DATA; it must be valid base64. If you put the raw kubeconfig file there, you have to convert it to base64 first.
KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
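For example, assuming your kubeconfig is at ~/.kube/config (adjust the path), encoding it and storing it in the secret could look like this; the gh CLI line is only one way to update the secret:

# produce a single-line base64 string (on macOS: base64 < ~/.kube/config | tr -d '\n')
base64 -w 0 ~/.kube/config > kubeconfig.b64

# update the repository secret the workflow reads (needs the GitHub CLI)
gh secret set KUBE_CONFIG_DATA < kubeconfig.b64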
Hello StackOverflowians.
I'm currently trying to set up Snyk in my GitHub Actions workflow, in a Node project.
The idea is to run two jobs:
A Snyk security gate as per their documentation (found here), using the first example to keep it simple.
A build and push job (that works as intended on its own)
However, when attempting to run the first job, it fails with the following log during the "Run Snyk to check for vulnerabilities" step:
Run snyk/actions/node@master
  with:
    command: test
    json: false
  env:
    REGISTRY: ghcr.io
    IMAGE_NAME: <IMAGENAME>
    SNYK_TOKEN: ***
/usr/bin/docker run --name snyksnyknode_3aa871 --label 6a6825 --workdir /github/workspace --rm -e REGISTRY -e IMAGE_NAME -e SNYK_TOKEN -e INPUT_ARGS -e INPUT_COMMAND -e INPUT_JSON -e SNYK_INTEGRATION_NAME -e SNYK_INTEGRATION_VERSION -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_REF_NAME -e GITHUB_REF_PROTECTED -e GITHUB_REF_TYPE -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_ARCH -e RUNNER_NAME -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/SOMEPROJECT/SOMEPROJECT":"/github/workspace" snyk/snyk:node "snyk" "test" "--severity-threshold=high --fail-on=upgradable"
Dependency bindings was not found in undefined. Your package.json and undefined are probably out of sync. Please run "undefined" and try again.
The last part (Dependency bindings was not found in undefined. Your package.json and undefined are probably out of sync. Please run "undefined" and try again.) is the part I don't understand; it doesn't tell me how to debug this.
Is this a known problem with a known solution? If not, how can I go about finding out what "undefined" refers to?
Thank you in advance,
Raoul
For now, it seems that deleting node_modules/ as well as package-lock.json and regenerating them with npm install remedies this issue:
# in the project root
rm -rf node_modules/
rm package-lock.json
npm install
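If you want to confirm the fix before pushing, the same check can be run locally with the Snyk CLI, assuming a SNYK_TOKEN is exported in your shell (the flags mirror the ones the action passes):

# requires SNYK_TOKEN in the environment
npx snyk test --severity-threshold=high --fail-on=upgradable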
I run a Docker container with Apache Airflow.
If I set executor = LocalExecutor, everything works fine; however, if I set executor = CeleryExecutor and run a DAG, I get the following exception:
[2020-07-13 04:17:41,065] {{celery_executor.py:266}} ERROR - Error fetching Celery task state, ignoring it:OperationalError('(psycopg2.OperationalError) FATAL: password authentication failed for user "airflow"\n')
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/executors/celery_executor.py", line 108, in fetch_celery_task_state
However, I do provide the following env variables in the docker run call:
docker run --name test -it \
-p 8000:80 -p 5555:5555 -p 8080:8080 \
-v `pwd`:/app \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_DEFAULT_REGION \
-e PYTHONPATH=/app \
-e ENVIRONMENT=local \
-e XCOMMAND \
-e POSTGRES_PORT=5432 \
-e POSTGRES_HOST=postgres \
-e POSTGRES_USER=project_user \
-e POSTGRES_PASSWORD=password \
-e DJANGO_SETTINGS_MODULE=config.settings.local \
-e AIRFLOW_DB_NAME=project_airflow_dev \
-e AIRFLOW_ADMIN_USER=project_user \
-e AIRFLOW_ADMIN_EMAIL=admin@project.com \
-e AIRFLOW_ADMIN_PASSWORD=password \
-e AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://project_user:password@postgres:5432/project_airflow_dev \
-e AIRFLOW__CORE__EXECUTOR=CeleryExecutor \
-e AIRFLOW__CELERY__BROKER_URL=redis://redis:6379/1 \
--network="project-network" \
--link project_cassandra_1:cassandra \
--link project_postgres_1:postgres \
--link project_redis_1:redis \
registry.dkr.ecr.us-east-2.amazonaws.com/airflow:v1.0
With LocalExecutor everything is fine: I can log into the admin UI, trigger the DAG, and get successful results. It's just that when I switch to CeleryExecutor I get a weird error about the "airflow" user, as if the AIRFLOW__CORE__SQL_ALCHEMY_CONN env var is not visible or used at all.
Any ideas?
solution:
Adding the AIRFLOW__CELERY__RESULT_BACKEND env var fixed the issue. Without it, Celery falls back to the result_backend defined in airflow.cfg, which presumably still pointed at the default "airflow" user, hence the authentication failure even though SQL_ALCHEMY_CONN was set correctly.
...
-e AIRFLOW__CELERY__RESULT_BACKEND=db+postgresql+psycopg2://project_user:password@postgres:5432/project_airflow_dev \
...
or edit airflow.cfg
[celery]
result_backend = db+postgresql://airflow:airflow@postgres/airflow
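If the error persists, it can help to confirm which settings the container actually sees (the container name test comes from the docker run call above):

# print the Airflow-related environment variables inside the running container
docker exec test env | grep '^AIRFLOW__'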