After installing PostgreSQL 13 on GCP, I tried installing Citus using this command:
curl https://install.citusdata.com/community/rpm.sh | sudo bash
However, I run into the following error. Any guidance or suggestions would be helpful.
[tony_stark@host]$ curl https://install.citusdata.com/community/rpm.sh | sudo bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 8667 100 8667 0 0 21791 0 --:--:-- --:--:-- --:--:-- 21831
Detected operating system as centos/7.
Checking for curl...
Detected curl...
Checking for postgresql13-server...
Detected postgresql13-server...
Checking for EPEL repositories...
Detected EPEL repoitories
Downloading repository file: https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script...
curl: (7) Failed to connect to
Network is unreachable
Unable to run:
curl https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script
The URL https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script works in my browser, though.
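For debugging, it may be worth checking the same URL from the VM itself, and whether forcing IPv4 changes anything, since curl reporting "Network is unreachable" without a usable address often points at a routing problem (a sketch, not a confirmed diagnosis):
# check the repo URL directly from the GCP VM; -4 forces IPv4 in case the failure is an IPv6 routing issue
curl -v "https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script" -o /dev/null
curl -4 -v "https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script" -o /dev/null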
Looks like a temporary issue on the repository side. I just tried it and it works:
[sergiusz@host ~]$ curl https://install.citusdata.com/community/rpm.sh | sudo bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 8667 100 8667 0 0 25686 0 --:--:-- --:--:-- --:--:-- 25718
Detected operating system as centos/7.
Checking for curl...
Detected curl...
Checking for postgresql13-server...
Installing pgdg13 repo... done.
Checking for EPEL repositories...
Detected EPEL repoitories
Downloading repository file: https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script.
.. done.
Installing pygpgme to verify GPG signatures... done.
Installing yum-utils... done.
Generating yum cache for citusdata_community... done.
The repository is set up! You can now install packages.
EDIT:
This file can also be downloaded manually:
curl "https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script" -o /etc/yum.repos.d/citusdata_community.repo
Due to some Android Studio problem, I invalidated the Flutter cache.
When I try flutter upgrade, I'm stuck at the "Building flutter tool ..." phase:
$ flutter upgrade
Downloading Dart SDK from Flutter engine beb8a7ec48f6b980a778d19eeda613635c3897c9...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16 100 16 0 0 25 0 --:--:-- --:--:-- --:--:-- 25
Warning: Transient problem: HTTP error Will retry in 1 seconds. 3 retries
Warning: left.
Throwing away 16 bytes
100 219M 100 219M 0 0 6082k 0 0:00:37 0:00:37 --:--:-- 6324k
Building flutter tool...
There is no verbose log either after trying -v.
Now flutter doctor is stuck at the same place.
What should I do?
Solved. It seems to be a network issue. I'm located in China and had to use StealthVPN with https://pub.flutter-io.cn as the host URL to finish the build. None of the other local mirrors work, even though they don't require StealthVPN.
It's super weird that I can download the Dart SDK from ALL the mirrors but only one of them can complete the "building".
There really should be a clear warning and detailed progress report. It's hard to imagine "building ..." involving internet access.
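For reference, pointing Flutter at the mirror is done through environment variables before running the upgrade (a sketch based on the documented Flutter mirror setup; in my case the VPN was still needed on top of this):
# use the China mirror for both pub packages and Flutter/Dart SDK downloads
export PUB_HOSTED_URL=https://pub.flutter-io.cn
export FLUTTER_STORAGE_BASE_URL=https://storage.flutter-io.cn
flutter upgrade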
Until now I have worked a lot with GitHub/Bitbucket and Jenkins/Bamboo. Right now I'm trying to set up a GitLab CE server with a private Kubernetes cluster.
I want to run a hello-world project in Java with GitLab's Auto DevOps in Kubernetes; this is the repo I'm using:
https://github.com/dstar55/docker-hello-world-spring-boot
Everything works fine up to the point where the runner gets created in Kubernetes and downloads the image, but then it gets stuck downloading Maven resources.
Running on runner-h6cwaztm-project-8-concurrent-0jvd9f via runner-gitlab-runner-6dcf7dd458-jl69h...
Fetching changes with git depth set to 50...
00:02
Initialized empty Git repository in /builds/.../hello-world-spring/.git/
Created fresh repository.
From https://.../hello-world-spring
* [new ref] refs/pipelines/14 -> refs/pipelines/14
* [new branch] master -> origin/master
Checking out ad24ac6b as master...
Skipping Git submodules setup
$ if [[ -z "$CI_COMMIT_TAG" ]]; then # collapsed multi-line command
$ /build/build.sh
Logging to GitLab Container Registry with CI credentials...
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Building Dockerfile-based application...
Step 1/10 : FROM maven:3.5.2-jdk-8-alpine AS maven_build
3.5.2-jdk-8-alpine: Pulling from library/maven
22bc7fb81913: Pull complete
Digest: sha256:7cebda60f8a541e1bf2330306d22f9786f989187f4ec96539d398a0d4dbfdadb
Status: Downloaded newer image for maven:3.5.2-jdk-8-alpine
---> 293423a981a7
Step 2/10 : COPY pom.xml /tmp/
---> c0e609a509a8
Step 3/10 : COPY src /tmp/src/
---> e735a08f2b39
Step 4/10 : WORKDIR /tmp/
---> Running in 90620c0ca3ad
Removing intermediate container 90620c0ca3ad
---> a5d9fdc62aa9
Step 5/10 : RUN mvn package
---> Running in dc90f43fc83b
[INFO] Scanning for projects...
Downloading from central: https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom
It never throws an error (until it times out) and it never goes past this point.
The Kubernetes cluster has 4 nodes (1 master and 3 slaves), using Flannel and MetalLB.
Edit:
I added a curl command instead of mvn package and it seems the download speed is 0. How is that possible?
Step 5/11 : RUN curl https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom --output test.pom
---> Running in db2bc24c6a4f
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:05:00 --:--:-- 0
curl: (28) Operation timed out after 300689 milliseconds with 0 out of 0 bytes received
The command '/bin/sh -c curl https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom --output test.pom' returned a non-zero code: 28
ERROR: Job failed: command terminated with exit code 1
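To rule out a cluster-wide egress problem, the same URL can be checked from a throwaway pod on the cluster (a quick sketch; the curl image is just an assumption for a minimal container that ships curl):
# run a one-off pod and try to reach Maven Central from inside the cluster
kubectl run net-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -v --max-time 30 https://repo.maven.apache.org/maven2/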
According to the place where CI hangs, your pipeline is stuck at mvn package:
Step 5/10 : RUN mvn package
---> Running in dc90f43fc83b
[INFO] Scanning for projects...
Downloading from central: https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom
So, you can try to restart Artifactory.
Also, you can debug the Maven build with mvn clean package -X -e
See this answer: java - Maven hanging indefinitely while checking for updates - Stack Overflow
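To get that debug output without waiting for the CI job, one way is to run the same Maven image locally against the project (a sketch; assumes the repo is checked out in the current directory):
# run the build stage's image against the local checkout with full Maven debug output (-X -e)
docker run --rm -v "$PWD":/tmp -w /tmp maven:3.5.2-jdk-8-alpine mvn clean package -X -e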
I have acceptance tests set up with ddev. They are run locally with ddev composer cookieman:test. I would like to use the same setup with GitHub Actions.
Has anybody had any luck with ddev in a GitHub Actions workflow? I get as far as this point, where ddev's healthcheck fails:
...
Creating ddev-router ... done
Failed to start extension-cookieman-master: ddev-router failed to become ready: logOutput=2019/11/15 02:24:19 [emerg] 1630#1630: no servers are inside upstream in /etc/nginx/conf.d/default.conf:89
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/default.conf:89
nginx: configuration file /etc/nginx/nginx.conf test failed
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (22) The requested URL returned error: 404 Not Found
ddev-router healthcheck endpoint not responding
, err=container /ddev-router unhealthy: 2019/11/15 02:24:19 [emerg] 1630#1630: no servers are inside upstream in /etc/nginx/conf.d/default.conf:89
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/default.conf:89
nginx: configuration file /etc/nginx/nginx.conf test failed
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (22) The requested URL returned error: 404 Not Found
ddev-router healthcheck endpoint not responding
##[error]Process completed with exit code 1.
.github/workflows/tests.yml:
name: Tests
on: [push, pull_request]
jobs:
  tests-via-ddev:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - run: export DEBIAN_FRONTEND=noninteractive
      # update docker
      - run: sudo -E apt-get purge -y docker docker-engine docker.io containerd runc nginx
      - run: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      - run: sudo -E add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      - run: sudo -E apt-get update
      - run: sudo -E apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
      # install linuxbrew
      - run: sh -c "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh)"
      - run: echo "::add-path::/home/linuxbrew/.linuxbrew/bin"
      # install ddev + docker-compose
      - run: brew tap drud/ddev && brew install ddev docker-compose
      # Start ddev
      - run: ddev start || exit 0
      # Debug
      - run: ls -als .ddev/
      - run: curl 127.0.0.1 || exit 0
      - run: curl 127.0.0.1/healthcheck || exit 0
      - run: docker ps || exit 0
      # we want Clover coverage
      - run: ddev exec enable_xdebug
      # Run tests
      - run: ddev composer cookieman:test
I tried
- using Ubuntu 16.04
- fully upgrading all packages on Ubuntu 16.04/18.04
- configuring ddev like that:
  - run: ddev config global --router-bind-all-interfaces=true
  - run: ddev config global --omit-containers=dba,ddev-ssh-agent
- changing to unprivileged router ports (the settings router_http_port, router_https_port in config.yaml)
If I force it to continue with ddev start || exit 0 I can see containers up and running:
- run: docker ps || exit 0
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c36601a06fd6 drud/ddev-router:v1.11.0 "/app/docker-entrypo…" 27 seconds ago Up 24 seconds (unhealthy) 0.0.0.0:4430->4430/tcp, 0.0.0.0:4444->4444/tcp, 0.0.0.0:8025->8025/tcp, 80/tcp, 0.0.0.0:8080->8080/tcp ddev-router
18152602a054 drud/ddev-webserver:v1.11.0-built "/start.sh" 30 seconds ago Up 28 seconds (healthy) 8025/tcp, 127.0.0.1:32770->80/tcp, 127.0.0.1:32769->443/tcp ddev-extension-cookieman-master-web
33aca55715f2 selenium/standalone-chrome:3.12 "/opt/bin/entry_poin…" 32 seconds ago Up 30 seconds 4444/tcp ddev-extension-cookieman-master-chrome
6c852ae62974 drud/ddev-dbserver:v1.11.0-10.2-built "/docker-entrypoint.…" 32 seconds ago Up 30 seconds (healthy) 127.0.0.1:32768->3306/tcp ddev-extension-cookieman-master-db
curl 127.0.0.1 yields the default nginx start page (while I would expect '503: No ddev back-end site available')
curl 127.0.0.1/healthcheck yields a 404
So far my conclusion is: ddev-router is reachable but its nginx does not have the appropriate configuration (no servers are inside upstream in /etc/nginx/conf.d/default.conf). Thus ddev only runs the pre-start hook from config.yaml; post-start is not reached.
You can see the output of the last runs here https://github.com/dmind-gmbh/extension-cookieman/actions?query=branch%3Afeat%2Facceptance-tests
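For reference, the config that docker-gen generated inside the router, and nginx's own syntax check, can be inspected directly (a sketch using plain docker commands against the running ddev-router container):
# dump the config docker-gen wrote for the router and re-run nginx's config test
docker exec ddev-router cat /etc/nginx/conf.d/default.conf
docker exec ddev-router nginx -t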
EDIT/AMEND:
This is the (mal-)generated /etc/nginx/conf.d/default.conf from ddev-router:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  ''      close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
# ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
rfay mentioned a miscommunication between the ddev-router and the underlying docker daemon via sockets.
EDIT:
I put my findings into a Github action that can be included in other projects, too: https://github.com/marketplace/actions/setup-ddev
I came to the conclusion that the problem is with docker-gen.
In the first line of the template (https://github.com/drud/ddev/blob/master/containers/ddev-router/nginx.tmpl or also jwilder's https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl) the .Docker.CurrentContainerID is empty which seemed to happen to some people in some contexts https://github.com/jwilder/docker-gen/issues/196#issuecomment-225412753.
The suggested removal of '-only-exposed' did not work for me. Instead, I changed the template a bit so it does not rely on the current container, and that was it. :)
This is still a bit dirty and only a proof of concept:
This is the changed template: https://github.com/jonaseberle/github-action-setup-ddev/blob/master/.ddev/patches/ddev-router/nginx.tmpl (compare the upstream {} section, where I removed the check whether containers are on the same network as the router).
In the workflow I do
ddev start || exit 0 # this will fail and also not execute any post-start hooks
docker cp nginx-debug.tmpl ddev-router:/app/nginx-debug.tmpl
docker exec ddev-router sh -c "docker-gen -only-exposed -notify 'sleep 1 && nginx -s reload' /app/nginx-debug.tmpl /etc/nginx/conf.d/default.conf"
... ddev is now up and healthy
I haven't decided how to move on from here. Maybe @rfay would have an idea how to change the nginx template. Or I will use a custom Dockerfile for the ddev-router with a docker-compose.ddev-router.yaml to change the file just for the GitHub Actions run...
EDIT/AMEND:
The shorter and tested version of this is:
ddev start || docker cp .ddev/patches/ddev-router/nginx.tmpl ddev-router:/app/nginx.tmpl
ddev start - this triggers a container restart and thus a docker-gen run
I'm trying to use the Docker container task in an Azure DevOps pipeline to build and push images to ACR and ECR. I am able to do that through a YAML file and automate the whole process, but when I try the same with a Dockerfile that uses dep and glide to fetch packages from other repos, both public GitHub repos and private Bitbucket repos, it fails with a "Host key verification failed" error. The same Dockerfile works with Jenkins, but I don't know how to solve this SSH key error on a hosted Ubuntu agent.
Step 13/33 : RUN curl https://glide.sh/get | sh
---> Running in 26f7f0a19f91
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 4833 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4833 100 4833 0 0 6943 0 --:--:-- --:--:-- --:--:-- 6934
ARCH=amd64
OS=linux
Using curl as download tool
Getting https://glide.sh/version
TAG=v0.13.3
GLIDE_DIST=glide-v0.13.3-linux-amd64.tar.gz
Downloading https://github.com/Masterminds/glide/releases/download/v0.13.3/glide-v0.13.3-linux-amd64.tar.gz
glide version v0.13.3 installed successfully
Removing intermediate container 26f7f0a19f91
---> d4aa1a720fab
Step 14/33 : RUN glide update --strip-vendor
---> Running in 4614138d27bc
[INFO] Downloading dependencies. Please wait...
[INFO] > Fetching bitbucket.org/myrepositoryname/common
[INFO] > Fetching github.com/golang/protobuf
[INFO] > Fetching bitbucket.org/myrepositoryname/myteksi
[INFO] > Fetching bitbucket.org/myrepositoryname/sdk
[INFO] > Fetching github.com/imdario/mergo
[INFO] > Fetching gopkg.in/go-playground/validator.v9
[INFO] > Fetching github.com/segmentio/kafka-go
[WARN] Unable to checkout bitbucket.org/myrepositoryname/common
[ERROR] Update failed for bitbucket.org/myrepositoryname/common: Unable to get repository: Cloning into '/root/.glide/cache/src/git-bitbucket.org-myrepositoryname-common.git'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
: exit status 128
Unable to get repository: Cloning into '/root/.glide/cache/src/git-bitbucket.org-myrepositoryname.git'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
: exit status 128
Unable to get repository: Cloning into '/root/.glide/cache/src/git-bitbucket.org-myrepositoryname.git'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
: exit status 128
The command '/bin/sh -c glide update --strip-vendor' returned a non-zero code: 1
##[debug]Exit code 1 received from tool '/usr/bin/docker'
##[debug]STDIO streams have closed for tool '/usr/bin/docker'
##[error]The command '/bin/sh -c glide update --strip-vendor' returned a non-zero code: 1
##[debug]Processed: ##vso[task.issue type=error;]The command '/bin/sh -c glide update --strip-vendor' returned a non-zero code: 1
##[debug]Trying to logout from registry: ***
##[debug]DOCKER_CONFIG=/home/vsts/work/_temp/DockerConfig_1564846219701
##[debug]agent.tempDirectory=/home/vsts/work/_temp
##[debug]Found the Docker Config stored in the temp path. Docker config path: /home/vsts/work/_temp/DockerConfig_1564846219701/config.json, Docker config: {"auths": { "***": {"auth": "***", "email": "ServicePrincipal@AzureRM" } }, "HttpHeaders":{"X-Meta-Source-Client":"VSTS"} }
##[debug]Deleting Docker config directory. Path: /home/vsts/work/_temp/DockerConfig_1564846219701/config.json
##[debug]DOCKER_CONFIG=/home/vsts/work/_temp/DockerConfig_1564846219701
##[debug]agent.tempDirectory=/home/vsts/work/_temp
##[debug]Deleting Docker config directory. Path: /home/vsts/work/_temp/DockerConfig_1564846219701
##[debug]set DOCKER_CONFIG=
##[debug]Processed: ##vso[task.setvariable variable=DOCKER_CONFIG;issecret=false;]
##[debug]task result: Failed
##[error]The process '/usr/bin/docker' failed with exit code 1
##[debug]Processed: ##vso[task.issue type=error;]The process '/usr/bin/docker' failed with exit code 1
##[debug]Processed: ##vso[task.complete result=Failed;]The process '/usr/bin/docker' failed with exit code 1
package: bitbucket.org/grabpay/ignite
import:
- package: bitbucket.org/myrepositoryname/common
  repo: git@bitbucket.org:myrepositoryname/common.git
  version: devel
  subpackages:
  - crimson
  - track
- package: bitbucket.org/myrepositoryname/myfolder1
  repo: git@bitbucket.org:myrepositoryname/myfolder1.git
  version: fface9afbb72a739d0de8c8969e0fa06fda44614
- package: bitbucket.org/myrepositoryname/myfolder2
  repo: git@bitbucket.org:myrepositoryname/myfolder2.git
  version: master
- package: github.com/imdario/mergo
  version: 2b9c8687f09d230f37f169eea24e1951bb7d1191
- package: gopkg.in/go-playground/validator.v9
- package: github.com/segmentio/kafka-go
- package: github.com/golang/protobuf
  version: ^1.3.1
The above glide.yaml file lists the dependency repos that glide is supposed to fetch.
I finally fixed it: what I needed was a service account with read access to all the repos that glide tries to access, set up through a Bitbucket service connection in the Azure pipeline. The error is not specific to glide; it is really about access to the Git repos.
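Independent of the access rights, the literal "Host key verification failed" message also goes away once bitbucket.org's host key is known inside the build container; a sketch of what a RUN step could execute before glide update (assumes the build runs as root, as in the log above):
# trust Bitbucket's SSH host key before glide clones over SSH
mkdir -p /root/.ssh
ssh-keyscan -t rsa bitbucket.org >> /root/.ssh/known_hosts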
I'm trying to configure a fresh CentOS 6.5 x64 node via knife-solo. But when I run knife solo prepare root@centos I get a strange error.
Bootstrapping Chef...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15934 100 15934 0 0 36862 0 --:--:-- --:--:-- --:--:-- 95413
Downloading Chef 11.14.0.alpha.2 for el...
downloading https://www.opscode.com/chef/metadata?v=11.14.0.alpha.2&prerelease=false&nightlies=false&p=el&pv=6&m=x86_64
to file /tmp/install.sh.1750/metadata.txt
trying curl...
ERROR 404
Unable to retrieve a valid package!
Please file a bug report at http://tickets.opscode.com
Project: Chef
Component: Packages
Label: Omnibus
Version: 11.14.0.alpha.2
Please detail your operating system type, version and any other relevant details
Metadata URL: https://www.opscode.com/chef/metadata?v=11.14.0.alpha.2&prerelease=false&nightlies=false&p=el&pv=6&m=x86_64
When I try to debug that thing and run knife solo prepare -VV root@centos I get this: https://gist.github.com/Almaron/5709a69e09bad92f3475
I've tried to google it and found that it might be a proxy issue, but I have not set up any proxies whatsoever.
UPDATE
Tried running knife solo prepare root@centos --bootstrap-version 11.12.0
Here's the result: https://gist.github.com/Almaron/2f7987f314132c80b8ed
Had the same problem. It seems there is a problem on the opscode site with version 11.14.0.alpha.2 - the HTTP code returned is 404 Not Found.
A solution is to pin the chef version when preparing knife solo:
knife solo prepare root@centos --bootstrap-version 11.12.0
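To sanity-check a version before running prepare, the metadata URL from the log above can be queried directly with the pinned version (a sketch reusing that URL):
# should return package metadata instead of a 404 if the pinned version exists for el6 x86_64
curl -v "https://www.opscode.com/chef/metadata?v=11.12.0&prerelease=false&nightlies=false&p=el&pv=6&m=x86_64"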