Ansible Galaxy collection dependency SSH error with private GitHub repo

Being new to Ansible collections I’m hoping I’ve missed something obvious here in my attempt to refactor some old Ansible roles into collections using private GitHub repositories.
I have GitHub set up with two linked accounts. I’ll call the main personal account GITHUB_AC_P. The personal account is linked to a child organizational account I’ll call GITHUB_AC_O. I can switch between these accounts in the GitHub web UI and use the following single entry in ~/.ssh/config to access both accounts with git clients:
Host GITHUB_AC_P.github.com
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_github_REDACTED_GITHUB_A
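With that entry in place, any git URL using the GITHUB_AC_P.github.com host alias is rewritten to github.com and authenticated with the dedicated key, so for example a plain clone of the common repository (same alias and paths as used below) looks like:
git clone git@GITHUB_AC_P.github.com:GITHUB_AC_O/ansible.common.git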
I first added Ansible Galaxy collection files to a new GitHub repository named ansible.common in account GITHUB_AC_O. I plan to reuse this collection in other Ansible Galaxy collections. It currently has a single role and the following galaxy.yml file:
namespace: REDACTED_NS
name: common
version: 0.0.1
description: "Common Ansible collection"
readme: README.md
authors:
- REDACTED_AUTHOR
The following command reports “installed successfully” and I see the collection in ~/.ansible/collections/ansible_collections/REDACTED_NS/common:
ansible-galaxy collection install git@GITHUB_AC_P.github.com:GITHUB_AC_O/ansible.common.git,main
I then created a second Ansible Galaxy collection in a new GitHub repository named ansible.harden_host. This is also in account GITHUB_AC_O. This currently has no roles and uses the following galaxy.yml file to reference the above common collection (the value of REDACTED_NS is the same in both galaxy.yml files):
namespace: REDACTED_NS
name: harden_host
version: 0.0.1
description: "Ansible collection to harden hosts"
readme: README.md
authors:
- REDACTED_AUTHOR
dependencies: {
REDACTED_NS.common: git@GITHUB_AC_P.github.com:GITHUB_AC_O/ansible.common.git,main
}
But when I run the following:
ansible-galaxy collection install --verbose git@GITHUB_AC_P.github.com:GITHUB_AC_O/ansible.harden_host.git,main
It fails with message:
Starting galaxy collection install process
Process install dependency map
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/': <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)>
Why is this trying to hit galaxy.ansible.com instead of my GitHub account?
When I add --ignore-certs and run the following:
ansible-galaxy collection install --ignore-certs git@GITHUB_AC_P.github.com:GITHUB_AC_O/ansible.harden_host.git,main
It fails with this different message:
ERROR! Failed to find collection REDACTED_NS.common:git@GITHUB_AC_P.github.com:GITHUB_AC_O/ansible.common.git
I pasted the URI from this error (right of the colon) into an ansible-galaxy collection install command to verify there’s no typo in the URI. This worked fine.
The string REDACTED_NS does not equal the value of GITHUB_AC_P or GITHUB_AC_O.
If someone could please explain what’s wrong here and how the issue can be fixed that would be much appreciated.

Solved; it seems the answer was hiding in plain sight in Ansible's Using collections document, which says to use the following form for git-based dependencies:
dependencies: {'git@github.com:organization/repo_name.git': 'devel'}
The form I was using was for Galaxy servers, hence it was hitting galaxy.ansible.com (unless I overrode the default with e.g. --server localhost).
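For comparison, the Galaxy-server form keys a dependency on the collection name and takes a version constraint as the value, which is why the resolver was going to galaxy.ansible.com; a minimal sketch (the version constraint here is only illustrative):
dependencies: {
'REDACTED_NS.common': '>=0.0.1'
}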
So the following form works (git repo followed by git reference):
namespace: REDACTED_NS
name: harden_host
version: 0.0.1
description: "Ansible collection to harden hosts"
readme: README.md
authors:
- REDACTED_AUTHOR
dependencies: {
'git@GITHUB_AC_P.github.com:GITHUB_AC_O/ansible.common.git': 'main'
}
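With the dependency expressed this way, the original install command works unchanged and resolves REDACTED_NS.common from GitHub rather than from Galaxy:
ansible-galaxy collection install --verbose git@GITHUB_AC_P.github.com:GITHUB_AC_O/ansible.harden_host.git,main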

Related

vault-secrets-provider alias not recognized with docker-kaniko

I'm having some issues when trying to use the Hashicorp Vault template (Kubernetes with Google Kubernetes Engine) from to-be-continuous.
When I use it with the Google Docker Kaniko layer, I get an error message: ... wget: bad address 'vault-secrets-provider'.
It seems that Kaniko doesn't recognize the vault-secrets-provider layer. Would you please help me with this? Or perhaps point me to where I can ask for help?
This is a summary of .gitlab-ci.yml
include:
  # Kubernetes template
  - project: 'to-be-continuous/kubernetes'
    ref: '2.0.4'
    file: '/templates/gitlab-ci-k8s.yml'
  - project: "to-be-continuous/kubernetes"
    ref: "2.0.4"
    file: "templates/gitlab-ci-k8s-vault.yml"
...
variables:
  K8S_DEFAULT_KUBE_CONFIG: "@url@http://vault-secrets-provider/api/secrets/noprod?field=kube_config"
  VAULT_BASE_URL: "http://myvault.myserver.com/v1"
Error Message:
[ERROR] Failed getting secret K8S_DEFAULT_KUBE_CONFIG:
... wget: bad address 'vault-secrets-provider'
I have tried many times without the Vault layer (that is, without Vault secrets) and Kaniko works fine.
How can I accomplish this? I tried modifying the Kaniko template, but without success.
I would appreciate any help with this.
To fix your issue, first upgrade the docker template to its latest version (2.3.0 at the time this response was written).
Then, depending on your case, you have two options:
- Docker needs to handle some of your secrets managed by Vault: then you should also activate the Vault variant for Docker.
- Docker doesn't need to handle any secret managed by Vault: don't use the Vault variant for Docker; you'll get a warning message from Docker not being able to decode the secret (basically the same as the one you had, but it won't fail the build).
Then simply use it in your .gitlab-ci.yml file:
include:
  # Docker template
  - project: 'to-be-continuous/docker'
    ref: '2.3.0'
    file: '/templates/gitlab-ci-docker.yml'
  # Vault variant for Docker (depending on your case above)
  - project: 'to-be-continuous/docker'
    ref: '2.3.0'
    file: '/templates/gitlab-ci-docker-vault.yml'
  # Kubernetes template
  - project: 'to-be-continuous/kubernetes'
    ref: '2.0.4'
    file: '/templates/gitlab-ci-k8s.yml'
  - project: "to-be-continuous/kubernetes"
    ref: "2.0.4"
    file: "/templates/gitlab-ci-k8s-vault.yml"

variables:
  K8S_DEFAULT_KUBE_CONFIG: "@url@http://vault-secrets-provider/api/secrets/noprod?field=kube_config"
  VAULT_BASE_URL: "http://myvault.myserver.com/v1"
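If you do activate the Vault variant for Docker, its Vault-managed secrets use the same @url@ indirection already shown for K8S_DEFAULT_KUBE_CONFIG. A minimal sketch; the variable name and secret path below are only illustrative and not part of the original setup:
variables:
  # hypothetical Docker secret resolved through vault-secrets-provider; adjust path/field to your Vault layout
  DOCKER_REGISTRY_PASSWORD: "@url@http://vault-secrets-provider/api/secrets/noprod?field=registry_password"
  VAULT_BASE_URL: "http://myvault.myserver.com/v1"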

Invalid repository type PYTHON. Valid type is PYPI

I have created (using terraform resource google_artifact_registry_repository) a python repository on Google Artifact Registry. Here's my terraform code that created it:
resource "google_artifact_registry_repository" "pypi" {
provider = google-beta
project = var.project_id
location = var.region
repository_id = "dataplatformpypi"
description = "PyPi repo for use by dataplatform"
format = "PYTHON"
}
I am now following the quickstart at https://cloud.google.com/artifact-registry/docs/python/quickstart, specifically the Configure authentication section which instructs me to issue gcloud artifacts print-settings python. I actually modify that slightly to issue:
gcloud --project myproject artifacts print-settings python --repository dataplatformpypi --location europe-west2
and I get error:
ERROR: (gcloud.artifacts.print-settings.python) Invalid repository type PYTHON. Valid type is PYPI.
I haven't specified the repository type as part of that command, so I can only assume that "repository type PYTHON" refers to the format of the repository.
However, given that the repository has been created successfully and PYTHON is (according to the terraform resource documentation) a valid value for the repository format, I am struggling to understand what the problem is here.
I would appreciate any advice.
This doesn't appear to be a user-specific problem; other users have also encountered it, and there is a similar issue open on GitHub where you can follow the thread.
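If you need the settings before the issue is resolved, one possible workaround is to write them by hand; a sketch of a ~/.pypirc entry, assuming the standard Artifact Registry URL scheme https://LOCATION-python.pkg.dev/PROJECT/REPOSITORY/ and the project/repository names from the question:
# illustrative ~/.pypirc entry; URL follows the LOCATION-python.pkg.dev pattern
[distutils]
index-servers = dataplatformpypi

[dataplatformpypi]
repository: https://europe-west2-python.pkg.dev/myproject/dataplatformpypi/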

Prisma 1 + MongoDB Atlas deploy to Heroku returns error 404

I've deployed a Prisma 1 GraphQL server app on Heroku, connected to a MongoDB Atlas cluster.
Running prisma deploy locally with the default endpoint http://localhost:4466, the action runs successfully and all the schemas are generated correctly.
But if I change the endpoint to the Heroku remote host https://<myapp>.herokuapp.com, prisma deploy fails, returning this exception:
ERROR: GraphQL Error (Code: 404)
{
  "error": "\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<title>Error</title>\n</head>\n<body>\n<pre>Cannot POST /management</pre>\n</body>\n</html>\n",
  "status": 404
}
I think it could be related to an authentication problem, but I'm getting confused because I've defined both the service secret in prisma.yml and the management API secret key in docker-compose.yml.
Here are my current configs, in case they help:
prisma.yml
# The HTTP endpoint for your Prisma API
# Tried with https://<myapp>.herokuapp.com only too with the same result
endpoint: https://<myapp>.herokuapp.com/dinai/staging
secret: ${env:PRISMA_SERVICE_SECRET}
# Points to the file that contains your datamodel
datamodel: datamodel.prisma
databaseType: document
# Specifies language & location for the generated Prisma client
generate:
  - generator: javascript-client
    output: ../src/generated/prisma-client
# Ensures Prisma client is re-generated after a datamodel change.
hooks:
  post-deploy:
    - prisma generate
docker-compose.yml
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        managementApiSecret: ${PRISMA_MANAGEMENT_API_SECRET}
        databases:
          default:
            connector: mongo
            uri: mongodb+srv://${MONGO_DB_USER}:${MONGO_DB_PASSWORD}@${MONGO_DB_CLUSTER}/myapp?retryWrites=true&w=majority
            database: myapp
Plus, a weird situation happens too: in both cases, if I try to navigate the resulting API with GraphQL Playground, clicking on the "Schema" tab returns an error, while the "Docs" tab is populated correctly. It seems the exception is preventing the script from generating the rest of the schemas.
A little help by someone experienced with Prisma/Heroku would be awesome.
Thanks in advance.
To date, it's still not clear to me what was causing the exception in detail. But digging deeper into the Prisma docs, I discovered that in version 1 the app needs to be proxied through Prisma Cloud.
So deploying straight to Heroku without it was probably the main issue: basically there wasn't any Prisma server container running for the service.
What I did was follow, step by step, the official doc on how to deploy your server on Prisma Cloud (there's also a video version). As in the example shown in the guide, I already had my own project, which is actually split into two different apps: one for the client (front-end) and one for the API (back-end). So, instead of generating a new one, I pointed the back-end API endpoint to the remote URL of the Prisma server generated by the cloud (the Heroku container created by following the tutorial). Then, leaving the management API secret key only in the Prisma server container configuration (which was generated automatically by the cloud) and the service secret only in the back-end app, I was finally able to run prisma deploy correctly and run my project remotely.
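Concretely, the only change that ends up in the back-end app's prisma.yml is the endpoint, which now points at the Prisma server provisioned through Prisma Cloud instead of at the app itself (a sketch; the host and service/stage names are placeholders):
# prisma.yml – endpoint now targets the Prisma-Cloud-managed server
endpoint: https://<prisma-server-on-heroku>.herokuapp.com/<service>/<stage>
secret: ${env:PRISMA_SERVICE_SECRET}
datamodel: datamodel.prisma
databaseType: document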

Is it possible to use variables in a codeship-steps.yml file?

We currently use Codeship Pro to push Docker images to a private registry on AWS, as well as to deploy those images to an ECS cluster.
However, the codeship-steps.yml file includes a hard-coded name for the AWS region I'm pushing to. For example:
- name: push_production
  service: app
  type: push
  image_name: 123456789012.dkr.ecr.us-east-1.amazonaws.com/project/app-name
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: https://123456789012.dkr.ecr.us-east-1.amazonaws.com
  dockercfg_service: aws_generator
I would like to be able to fairly easily switch this to deploy to a different AWS region. Thus the question:
Is it possible to use variables in a codeship-steps.yml file?
I know some of the properties can use a handful of built-in variables provided by Codeship (such as the {{.Timestamp}} value used for the image_tag property), but I don't know if, for example, values from an env_file can be used in the image_name, registry, and/or command properties of a step.
I'm imagining something like this...
codeship-steps.yml:
- name: push_production
  service: app
  type: push
  image_name: "123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com/project/app-name"
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: "https://123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com"
  dockercfg_service: aws_generator
... but that results in an "error parsing image name during push step: invalid reference format" on the push step.
I've tried simply not specifying the registry in the image_name...
image_name: project/app-name
... but I get a "Build Error: no basic auth credentials" on the push step. At this point, I'm running out of ideas.
Is it possible to use [environment] variables in a codeship-steps.yml file?
While the image_tag can take advantage of Go templates, the same is not the case for image_name, registry, or anything else. This is a separate set of templating variables that are accessible only to the image_tag generation.
As for environment variables in general (CI environment variables or those defined in the service configs), these values can be used in codeship-steps.yml on the command step when passed through a shell command. For example:
- service: app
  command: echo The branch name is: $CI_BRANCH
Results in:
The branch name is: $CI_BRANCH
- service: app
  command: /bin/sh -c 'echo The branch name is: $CI_BRANCH'
Results in:
The branch name is: master
As for your 'no basic auth credentials' error message, it's possible that there's an issue with how you are retrieving the basic auth credentials for access to your image registry. If you are on a MacOS device, I would recommend that you review our documentation on how to generate Docker credentials.
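For reference, the dockercfg_service named in the push step has to be defined in codeship-services.yml. A minimal sketch, assuming Codeship's AWS ECR credential generator image and an encrypted env file holding the AWS keys (file name is illustrative):
# codeship-services.yml (sketch)
aws_generator:
  image: codeship/aws-ecr-dockercfg-generator
  add_docker: true
  encrypted_env_file: aws.env.encrypted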

How to configure a custom resource type in a concourse pipeline?

I've already done a Google search to find a way to set up a custom resource in a Concourse pipeline, but the answers/documentation I found do not work.
Can someone provide a working example of custom resource type that is pulled from a local registry and used in a build plan?
For example, say I were to clone the git resource and slightly modify it and pushed it to my local registry.
The git resource image would be named localhost:5000/local_git:latest.
How would you be able to use this custom resource (local_git:latest) in a pipeline definition?
There are two main settings to consider here when running a local registry:
You must use insecure_registries:
insecure_registries: ["my.local.registry:8080"]
If you are running your registry on "localhost", don't use localhost as the address for your registry: if you do, the docker image will try to resolve it to the localhost of the docker image instead of your local machine. To avoid this problem, use the IP address of your local machine (DON'T use 127.0.0.1).
You can define your custom resource type in your pipeline under the resource_types key in the pipeline yml.
Eg:
resource_types:
- name: custom-git
  type: docker-image
  source:
    repository: localhost:5000/local_git
An important note is that custom resource type images are fetched in a manner identical to using a base resource in your pipeline, so for your case of a private Docker registry, you will just need to configure the necessary source: on the docker-image resource (See the docs for the docker-image-resource)
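For example, if the registry also requires credentials, they go in that same source: block; a sketch using the docker-image resource's standard source parameters (the IP address and credential names here are illustrative):
resource_types:
- name: custom-git
  type: docker-image
  source:
    # registry reached by IP, per the note above about not using localhost
    repository: 192.168.1.10:5000/local_git
    tag: latest
    username: myuser
    password: ((registry_password))
    insecure_registries: ["192.168.1.10:5000"]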
You can then use the type for resources as you would any of the base types:
resources:
- name: some-custom-git-resource
  type: custom-git
  source: ...
Note the type: key of the resource matches the name: on the resource type.
Take a look at the Concourse Documentation for Configuring Resource Types for more information on how to use custom types in your pipeline.