I tried it two ways:
- name: Add repository
  yum_repository:
    # from https://oss-binaries.phusionpassenger.com/yum/definitions/el-passenger.repo
    name: passenger
    description: Passenger repository
    baseurl: https://oss-binaries.phusionpassenger.com/yum/passenger/el/$releasever/$basearch
    repo_gpgcheck: 1
    gpgcheck: 0
    enabled: 1
    gpgkey: https://packagecloud.io/gpg.key
    sslverify: 1
    sslcacert: /etc/pki/tls/certs/ca-bundle.crt
- name: Add repository key (option 1)
  rpm_key:
    key: https://packagecloud.io/gpg.key

- name: Add repository key (option 2)
  command: rpm --import https://packagecloud.io/gpg.key
- name: Install nginx with passenger
  yum: name={{ item }}
  with_items: [nginx, passenger]
But for it to work, I need to ssh to the machine, confirm importing the key (by running any yum command, e.g. yum list installed), and then continue provisioning. Is there a way to do it automatically?
UPD: here's what Ansible says:
TASK [nginx : Add repository key] **********************************************
changed: [default]
TASK [nginx : Install nginx with passenger] ************************************
failed: [default] (item=[u'nginx', u'passenger']) => {"failed": true, "item": ["nginx", "passenger"], "msg": "Failure talking to yum: failure: repodata/repomd.xml from passenger: [Errno 256] No more mirrors to try.\nhttps://oss-binaries.phusionpassenger.com/yum/passenger/el/7/x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for passenger"}
So, the key is indeed imported in both cases, but to be used it must be confirmed.
Fixed it by running yum directly with the -y switch (and using the rpm_key module, if anything):
- name: Install nginx with passenger
  command: yum -y install {{ item }}
  with_items: [nginx, passenger]
After adding the repository and the repository key, just update that repo's metadata with:
- name: update repo cache for the new repo
  command: yum -q makecache -y --disablerepo=* --enablerepo=passenger
Then proceed with yum: name=... as before.
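Putting it together, the task order that ends up working is roughly the following (a sketch based on the tasks above; adjust task names and the repo id to your role):

- name: Add repository
  yum_repository:
    name: passenger
    # ... remaining parameters exactly as in the question ...

- name: Add repository key
  rpm_key:
    key: https://packagecloud.io/gpg.key

- name: update repo cache for the new repo
  command: yum -q makecache -y --disablerepo=* --enablerepo=passenger

- name: Install nginx with passenger
  yum: name={{ item }}
  with_items: [nginx, passenger]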
I'm using a package from Artifact Registry in my Cloud Run Node.js container.
When I try to run gcloud builds submit I get the following error:
Step #1: npm ERR! 403 403 Forbidden - GET https://us-east4-npm.pkg.dev/....
Step #1: npm ERR! 403 In most cases, you or one of your dependencies are requesting
Step #1: npm ERR! 403 a package version that is forbidden by your security policy.
Here is my cloudbuild.yaml:
steps:
- name: gcr.io/cloud-builders/npm
  args: ['run', 'artifactregistry-login']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/...', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/...']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
  - 'run'
  - 'deploy'
  - 'admin-api'
  - '--image'
  - 'gcr.io/...'
  - '--region'
  - 'us-east4'
  - '--allow-unauthenticated'
images:
- 'gcr.io/....'
and my Dockerfile:
FROM node:14-slim
WORKDIR /usr/src/app
COPY --chown=node:node .npmrc ./
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 8080
CMD [ "npm","run" ,"server" ]
.npmrc file:
@scope_xxx:registry=https://us-east4-npm.pkg.dev/project_xxx/repo_xxx/
//us-east4-npm.pkg.dev/project_xxx/repo_xxx/:always-auth=true
The Cloud Build service account already has the "Artifact Registry Reader" permission.
You have to connect the Cloud Build network in your docker build command, like this:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/...', '--network=cloudbuild', '.']
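As far as I understand the Cloud Build docs (my reading, not part of the original answer), containers attached to the cloudbuild network can reach the build's metadata server, so commands running inside the docker build, such as npm install, can obtain the Cloud Build service account's credentials; without the flag, the build runs on the default Docker network and cannot reach those credentials.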
I had the same root cause; my setup is close to @AmmAr's. After hours of trial and error, I found a solution.
Disclaimer: this might not be the reason for your issue. The GCP 403 error message is vague, so you need to chip away and eliminate all possibilities; that is how I arrived on this page.
Compared to @AmmAr's setup above, these are the changes I made:
In the Node.js package.json, add to the "scripts": {...} property:
"artifactregistry-login": "npx google-artifactregistry-auth",
"artifactregistry-auth-npmrc": "npx google-artifactregistry-auth .npmrc"
In cloudbuild.yaml, I added two steps prior to the build step. These steps should result in .npmrc getting appended with an access token, allowing it to communicate with the GCP Artifact Registry; that resolved the 403 issue for my scenario.
steps:
- name: gcr.io/cloud-builders/npm
  args: ['run', 'artifactregistry-login']
- name: gcr.io/cloud-builders/npm
  args: ['run', 'artifactregistry-auth-npmrc']
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/...', '.']
# - next steps in your process...
In the Dockerfile, copy over .npmrc before package.json:
COPY .npmrc ./
COPY package*.json ./
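The ordering matters because RUN npm install is the step that actually talks to Artifact Registry: the .npmrc with the freshly written token has to be in the build context and copied in before that step runs, otherwise npm inside the image build is unauthenticated and you get the 403 again.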
Now run the build and see if it gets past the step where it pulls the npm module from Artifact Registry.
The solution that worked for me can be found in this blog post:
https://dev.to/brianburton/cloud-build-docker-and-artifact-registry-cicd-pipelines-with-private-packages-5ci2
I am currently developing a project where I need to get the pod names of a Kubernetes cluster running on Rancher using Ansible. The main thing here is that I have a couple of problems that are preventing me from advancing.
I am currently executing a playbook to try to retrieve this information, instead of running a CLI command, because I want to manipulate those Rancher machines later on (e.g. install an rpm file).
Here is the playbook that I am executing to try to retrieve the pods' names from Rancher:
---
- hosts: localhost
  connection: local
  remote_user: root
  roles:
    - role: ansible.kubernetes-modules
    - role: hello-world
  vars:
    ansible_python_interpreter: '{{ ansible_playbook_python }}'
  collections:
    - community.kubernetes
  tasks:
    - name: Gather openShift Dependencies
      python_requirements_facts:
        dependencies:
          - openshift

    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '/etc/ansible/RCCloudConfig'
        kind: Pod
        namespace: redmine
      register: pod_list

    - name: Print pod names
      debug:
        msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }} "

    - set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"
The problem is that I get a Kubernetes module error each time I try to run the playbook:
ERROR! the role 'ansible.kubernetes-modules' was not found in community.kubernetes:ansible.legacy:/etc/ansible/roles:/home/jcp/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/etc/ansible
The error appears to be in '/etc/ansible/GetKubectlPods': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
- role: ansible.kubernetes-modules
^ here
If I remove that line from the code, where I try to retrieve that role, I still get a similar error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'kubernetes'
fatal: [localhost]: FAILED! => {"changed": false, "error": "No module named 'kubernetes'", "msg": "Failed to import the required Python library (openshift) on localhost.localdomain's Python /usr/bin/python3.6. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
I have already tried to install the Ansible Galaxy Kubernetes module on the machine, as well as openshift.
Not sure what I am doing wrong, since there are so many possibilities for what could be going wrong here.
Ansible Version Output:
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/jcp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jcp/.local/lib/python3.6/site-packages/ansible
executable location = /home/jcp/.local/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
I've debugged my python_requirements_facts output for the openshift dependency and this is what I have:
ok: [localhost] => {
    "openshift_dependencies": {
        "changed": false,
        "failed": false,
        "mismatched": {},
        "not_found": [],
        "python": "/usr/bin/python3.6",
        "python_system_path": [
            "/tmp/ansible_python_requirements_info_payload_5_kb4a7s/ansible_python_requirements_info_payload.zip",
            "/usr/lib64/python36.zip",
            "/usr/lib64/python3.6",
            "/usr/lib64/python3.6/lib-dynload",
            "/home/jcp/.local/lib/python3.6/site-packages",
            "/usr/local/lib/python3.6/site-packages",
            "/usr/local/lib/python3.6/site-packages/openshift-0.10.0.dev1-py3.6.egg",
            "/usr/lib64/python3.6/site-packages",
            "/usr/lib/python3.6/site-packages"
        ],
        "python_version": "3.6.8 (default, Nov 21 2019, 19:31:34) \n[GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]",
        "valid": {
            "openshift": {
                "desired": null,
                "installed": "0.10.0.dev1"
            }
        }
    }
}
Thanks for your help in advance!
Edit: The answer below was given for the OP's specific Ansible version (i.e. 2.9.9) and is still valid if you still use it. Since version 2.10, you also need to install the relevant Ansible collection if it is not already present:
ansible-galaxy collection install kubernetes.core
See the latest module documentation for more information
In Ansible 2.9.9, you're not supposed to do anything special to use the module except installing the needed Python dependencies. See the module documentation for your Ansible version.
Remove the line - role: ansible.kubernetes-modules, unless it is a module of yours, in which case you have to tell us more, because this is not a correct declaration.
Remove the collections declaration.
Add the following task somewhere before using the module:
- name: Make sure python deps are installed
  pip:
    name: openshift
Your current python_requirements_facts task does nothing more than report whether the dependency is found. Register the result and debug it to see for yourself.
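A minimal sketch of that check (the variable name py_deps is just a placeholder I chose):

- name: Gather openShift Dependencies
  python_requirements_facts:
    dependencies:
      - openshift
  register: py_deps          # placeholder name

- name: Show what the module found
  debug:
    var: py_deps.not_found   # an empty list means 'openshift' is importable by the target interpreter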
Now use the k8s_info module normally.
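Putting those changes together, the playbook from the question would look roughly like this (a sketch; the kubeconfig path and namespace are taken from the question, and the pip task assumes the control node is allowed to install Python packages):

---
- hosts: localhost
  connection: local
  vars:
    ansible_python_interpreter: '{{ ansible_playbook_python }}'
  tasks:
    - name: Make sure python deps are installed
      pip:
        name: openshift

    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: /etc/ansible/RCCloudConfig
        kind: Pod
        namespace: redmine
      register: pod_list

    - name: Print pod names
      debug:
        msg: "{{ pod_list | json_query('resources[*].metadata.name') }}"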
I'm trying to install packages from a private repository I've deployed using chart-releaser, but I'm not able to do it.
Here is what I've done:
I've created a new private repository, added a sample chart to it and ran the following commands:
helm package charts/* --destination .deploy
cr upload -o odelucca -r helm-charts -p .deploy -t $MY_TOKEN
I've created the index.yaml with the following command:
cr index --config .cr.yaml -t $MY_TOKEN
# My .cr.yaml file:
# owner: odelucca
# git-repo: helm-charts
# package-path: .deploy
# index-path: index.yaml
# charts-repo: https://github.com/odelucca/helm-charts/
I've committed the index.yaml to the repo.
I've added the remote helm repo with the following command:
helm repo add helm-charts https://raw.githubusercontent.com/odelucca/helm-charts/master --username $MY_EMAIL --password $MY_TOKEN
The repo was added; then I added the following dependency to a local chart:
dependencies:
  - name: serverless-common
    version: 1.0.0
    repository: "@helm-charts"
Now, I've tried to run the following:
helm dep update
I get the following errors:
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "helm-charts" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 1 charts
Downloading serverless-common from repo https://raw.githubusercontent.com/odelucca/helm-charts/master
Save error occurred: could not download https://github.com/odelucca/helm-charts/releases/download/serverless-common-1.0.0/serverless-common-1.0.0.tgz: Failed to fetch https://github.com/odelucca/helm-charts/releases/download/serverless-common-1.0.0/serverless-common-1.0.0.tgz : 404 Not Found
Deleting newly downloaded charts, restoring pre-update state
Error: could not download https://github.com/odelucca/helm-charts/releases/download/serverless-common-1.0.0/serverless-common-1.0.0.tgz: Failed to fetch https://github.com/odelucca/helm-charts/releases/download/serverless-common-1.0.0/serverless-common-1.0.0.tgz : 404 Not Found
Can anyone help me? I've tried a lot of different approaches, and none of them fixed it.
Have you tried editing the repository config file?
You should have something like this:
vim ${HOME}/.config/helm/repositories.yaml
apiVersion: ""
generated: "0001-01-01T00:00:00Z"
repositories:
  - caFile: ""
    certFile: ""
    insecure_skip_tls_verify: false
    keyFile: ""
    name: helm-charts
    password: ""
    url: "https://raw.githubusercontent.com/odelucca/helm-charts/master"
    username: ""
Edit it and put your username and password to connect to your private registry.
For me, this works fine.
Maybe you also need to check your repo path.
You should be able to download the chart with a URL like this:
${repository}/${name}-${version}.tgz
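For example, with the repository URL added above, that check would hypothetically be:

# add credentials if the repository is private; a 404 here means index.yaml
# points at a location where the packaged chart was never actually published
curl -I https://raw.githubusercontent.com/odelucca/helm-charts/master/serverless-common-1.0.0.tgz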
I'm trying to deploy Hyperledger Fabric on a Raspberry Pi, but it doesn't work. I've searched for a tutorial but didn't find one. Has anyone done this?
The last time I tried to run Hyperledger Fabric on an RPi, I prepared the following instructions:
Install the latest Raspbian on an SD card; you can download the image from:
https://www.raspberrypi.org/downloads/raspbian/
Update and upgrade to the latest packages by running:
sudo apt-get update && sudo apt-get upgrade -y
Install required dependencies:
sudo apt-get install git curl gcc libc6-dev libltdl3-dev python-setuptools -y
Upgrade python pip installer:
sudo -H pip install pip --upgrade
Install docker and docker compose:
curl -sSL get.docker.com | sh
sudo usermod -aG docker pi
sudo pip install docker-compose
Logout/Login terminal session, so changes will take effect.
Install golang, by following instructions from: https://golang.org/doc/install
Create golang directory:
mkdir -p /home/pi/golang && mkdir -p /home/pi/golang/src/github/hyperledger/
Define the GOPATH environment variable:
export GOPATH=/home/pi/golang
Make sure go binaries are in the path, e.g.:
export PATH=/usr/local/go/bin:$PATH
Clone fabric-baseimage repository into /home/pi/golang/src/github/hyperledger/
git clone https://github.com/hyperledger/fabric-baseimage.git
Clone the fabric repository into /home/pi/golang/src/github/hyperledger/
git clone https://github.com/hyperledger/fabric.git
Build the base Docker images:
cd ~/golang/src/github/hyperledger/fabric-baseimage && make docker-local
Apply following patch to fabric code base:
--- a/peer/core.yaml
+++ b/peer/core.yaml
@@ -68,7 +68,6 @@ peer:
# Gossip related configuration
gossip:
- bootstrap: 127.0.0.1:7051
# Use automatically chosen peer (high avalibility) to distribute blocks in channel or static one
# Setting this true and orgLeader true cause panic exit
useLeaderElection: false
@@ -280,7 +279,7 @@ vm:
Config:
max-size: "50m"
max-file: "5"
- Memory: 2147483648
+ Memory: 16777216
AND
--- a/core/container/util/dockerutil.go
+++ b/core/container/util/dockerutil.go
@@ -45,6 +45,7 @@ func NewDockerClient() (client *docker.Client, err error) {
// and GOARCH here.
var archRemap = map[string]string{
"amd64": "x86_64",
+ "arm": "armv7l",
}
func getArch() string {
Build the Hyperledger peer binary and the peer Docker image:
cd ~/golang/src/github/hyperledger/fabric && make clean peer peer-docker
Peer executable binary will appear in:
~/golang/src/github/hyperledger/fabric/build/bin/
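As a quick sanity check (not part of the original instructions), you can ask the freshly built binary for its version:

~/golang/src/github/hyperledger/fabric/build/bin/peer version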
I'm trying to install gitlab-ce on CentOS 6.7 (Final), but it fails.
My environment:
I use a proxy (set in /etc/yum.conf).
My gitlab_gitlab-ce.repo file (from the manual configuration at https://packages.gitlab.com/gitlab/gitlab-ce/install):
[gitlab_gitlab-ce]
name=gitlab_gitlab-ce
baseurl=https://packages.gitlab.com/gitlab/gitlab-ce/el/6/$basearch
repo_gpgcheck=1
enabled=1
gpgkey=https://packages.gitlab.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
[gitlab_gitlab-ce-source]
name=gitlab_gitlab-ce-source
baseurl=https://packages.gitlab.com/gitlab/gitlab-ce/el/6/SRPMS
repo_gpgcheck=1
enabled=1
gpgkey=https://packages.gitlab.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
When I execute the install command, this error occurs:
yum -q makecache -y --disablerepo='*' --enablerepo='gitlab_gitlab-ce'
https://packages.gitlab.com/gitlab/gitlab-ce/el/6/x86_64/repodata/repomd.xml: [Errno 14] Peer cert cannot be verified or peer cert invalid
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: gitlab_gitlab-ce. Please verify its path and try again
What should I do? Please help me.
First, check the proxy settings of your system. Run the command below:
yum update
If that doesn't work, set the proxy in /etc/yum.conf and then check again.
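For reference, a minimal sketch of the proxy settings in /etc/yum.conf (the host, port and credentials below are placeholders, not values from the question):

[main]
proxy=http://proxy.example.com:3128
# only needed if the proxy requires authentication
proxy_username=youruser
proxy_password=yourpass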