Setting up a local Drone server: Unable to login. Registration is closed - GitHub

I'm trying to set up a local drone.io server for CD with my GitHub account, using the official Docker container. The setup instructions say to add an application in the GitHub settings to get the client ID and secret needed for Drone's GitHub remote configuration, which I have done. The only difference from the official docs is that I see "Register new application" under "Developer Applications" and not under "Authorized Applications"; I hope it's the same. Then, I have defined the environment variables:
REMOTE_DRIVER=github
REMOTE_CONFIG=https://github.com?client_id=${client_id}&client_secret=${client_secret}
Replacing the client ID and secret with my own. Then I bring the container up and try to log in; I get redirected to GitHub's authorization page, I authorize it, and when redirected back I get this error:
Unable to login. Registration is closed.
And the redirected URL is:
http://drone.myserver.com/login?error=access_denied
I really don't have a clue what could possibly be missing or misconfigured; the same setup works with the Bitbucket remote.

Found the problem. Browsing Drone issues I found one that mentions that I need to add open=true to the query string so Drone is able to create the GitHub application.
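For the env-var style configuration from the question, that should mean appending open=true to the query string (a sketch, with the same placeholder credentials as above):

REMOTE_CONFIG=https://github.com?client_id=${client_id}&client_secret=${client_secret}&open=true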

If you get access_denied in the Drone web UI, the answer is DRONE_OPEN=true. This is the docker-compose file:
version: '2'
services:
  drone-server:
    image: drone/drone:0.7
    ports:
      - 80:8000
    volumes:
      - ./drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_GITLAB=true
      - DRONE_GITLAB_CLIENT=change_value
      - DRONE_GITLAB_SECRET=change_value
      - DRONE_GITLAB_URL=https://gitlab-01example.com
      - DRONE_SECRET=change_value
      - DRONE_GITLAB_SKIP_VERIFY=true
      - DRONE_DEBUG=true
      - DRONE_OPEN=true
  drone-agent:
    image: drone/drone:0.7
    command: agent
    restart: always
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=ws://drone-server:8000/ws/broker
      - DRONE_SECRET=change_value
      - DRONE_GITLAB_SKIP_VERIFY=true

Related

Which is the correct Pi-hole DNS entry?

In the last couple of weeks I moved from setting up Pi-hole by clicking through Portainer to using stacks / docker-compose.yaml.
However, this also limited the functionality of my Pi-hole. At some point it was no longer possible to perform the gravity update via the Pi-hole web interface. For this I always had to go to the console of the Pi-hole container and run
pihole -g
Manually added blacklist and whitelist entries were also only taken into account after a manual update, and deactivating Pi-hole via the web interface no longer worked.
I was able to fix this by removing the following entries from my docker-compose file:
environment:
  PIHOLE_DNS_: 9.9.9.9#53;9.9.9.9#53
  DNS1: 9.9.9.9 # Quad9 (filtered, DNSSEC)
  DNS2: 9.9.9.9 # If we don't specify two, it will auto pick google.
security_opt:
  - no-new-privileges:true
cap_add:
  - NET_ADMIN
dns:
  - 127.0.0.1
  - 9.9.9.9
This config led to 9.9.9.9 appearing as the "Custom 1" upstream DNS server. For now I have ticked an upstream server (on the left in Settings) manually. Which of the DNS entries do I have to reuse, and why does Pi-hole think it is a custom server and not one of the standard DNS entries?
Are these settings stored in one of the volumes? I could not find any entries in the Portainer environment variables after I had removed them explicitly.
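For what it's worth, a minimal sketch of the upstream-DNS part of such a compose file, assuming a current pihole/pihole image where PIHOLE_DNS_ supersedes the deprecated DNS1/DNS2 variables (values copied from the snippet above):

environment:
  # semicolon-separated upstream list (host#port); supersedes DNS1/DNS2
  PIHOLE_DNS_: 9.9.9.9#53;9.9.9.9#53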

How can I access the Open Policy Agent command line via Docker Desktop in Windows 10

I am attempting to learn the various features of something called Open Policy Agent, because I think it may be a useful tool in a microservices-based application.
Here is a link to the 'Running with Docker' section of the documentation for this application: https://www.openpolicyagent.org/docs/latest/deployments/#running-with-docker
Currently, I am running Docker using the Docker Desktop in a Windows 10 environment and I already have a docker-compose file set up for my main application which includes various docker images. My thoughts were that I could simply add the latest openpolicyagent image as well as the openpolicyagent demo-restful api so that I could begin learning about the service. To do this, I added the following lines to my docker-compose.yml:
opa:
  image: openpolicyagent/opa:0.34.2
  ports:
    - 8181:8181
  command:
    - "run"
    - "--server"
    - "--log-level=debug"
    - "api_authz.rego"
  volumes:
    - C:\Sites\prosaurus\policy\api_authz.rego:/api_authz.rego
api_server:
  image: openpolicyagent/demo-restful-api:latest
  ports:
    - 5000:5000
  environment:
    - OPA_ADDR=http://opa:8181
    - POLICY_PATH=/v1/data/httpapi/authz
This appears to have worked, in that I can go to localhost:8181 and I see the Query and Input Data (JSON) boxes, as I presume is supposed to happen. However, I would like to test some of the command line functions mentioned here:
https://www.openpolicyagent.org/docs/latest/#2-try-opa-eval
However, I cannot seem to access the command line of the Docker container which is running the OPA agent. The way I have attempted this is via the Docker Desktop application GUI in Windows. In this application I can see all of the Docker instances which are running, and each one has an option to run the CLI (you click the button and the CLI opens). They all work except for the OPA one. When I click on that one, a cmd window opens for a split second, displays something too fast for me to read, and then closes.
What have I done wrong?
OPA can be run in a few different ways, and opa eval is distinctly different from running OPA as a server, i.e. opa run --server.
When you run OPA as a server - which is how you'd normally run OPA in production - you query OPA for policy decisions through OPA's REST API.
opa eval on the other hand is more like a Swiss army knife of OPA, allowing you to quickly evaluate a rule or expression given some provided policy and data.
You can think of them as two entirely different tools.
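The CLI button most likely fails because the openpolicyagent/opa image ships without a shell (the -debug image variants include one). If you want to try opa eval anyway, one option is a one-off docker run using the same image and policy file as in the compose file above (a sketch; the query path is inferred from the POLICY_PATH shown in the question):

# evaluate the policy once and exit; mounts the same Windows policy folder
docker run --rm -v C:\Sites\prosaurus\policy:/policy openpolicyagent/opa:0.34.2 eval --data /policy/api_authz.rego "data.httpapi.authz"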

Running a Concourse task with a registry-image resource

I am using Concourse-CI in combination with a private Docker registry, and everything works fine. However, I want to run a task as an image I provide via the registry. To clarify: I don't want to run the image within the task; the task source should be my image. Unfortunately I wasn't able to find an example on here or in the Concourse-CI docs.
My resource:
resources:
- name: my-image
  type: registry-image
  source:
    repository: ((registry-url))/my-image
    username: ...
    password: ...
    ca_certs:
    - ((registry-cert))
So, if I'm correct, the task/config/source cannot take a named resource, only an anonymous resource for which I would provide a docker.io link.
I would be very appreciative of some help. :)
Edit: OK, so my first mistake was to only look at the Task schema. I can configure an image (https://concourse-ci.org/jobs.html#schema.step.task-step.image), but when I do:
- task: test
  image: my-image
  config:
    platform: linux
    inputs:
    run:
      ...
I get this error: find or create container on worker 4c38517c9713: no image plugin configured.
OK, so the answer was to make the image privileged, for some reason...
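For context, a hedged sketch of how the fetched image feeds the task step (the job name and run command are illustrative; per the Concourse docs, the image must first be fetched with a get step before a task can use it as its container image):

jobs:
- name: test-job
  plan:
  - get: my-image   # fetch the resource so the task can use it
  - task: test
    image: my-image # the fetched artifact becomes the task's rootfs
    config:
      platform: linux
      run:
        path: echo
        args: ["hello from my-image"]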

Prisma 1 + MongoDB Atlas deploy to Heroku returns error 404

I've deployed a Prisma 1 GraphQL server app on Heroku, connected to a MongoDB Atlas cluster.
Running prisma deploy locally against the default endpoint http://localhost:4466, the action runs successfully and all the schemas are generated correctly.
But, if I change the endpoint to the Heroku remote host https://<myapp>.herokuapp.com, prisma deploy fails, returning this exception:
ERROR: GraphQL Error (Code: 404)
{
  "error": "\n<html lang=\"en\">\n\n<meta charset=\"utf-8\">\nError\n\n\nCannot POST /management\n\n\n",
  "status": 404
}
I think it could be related to an authentication problem, but I'm getting confused because I've defined both the security token in prisma.yml and the management API secret key in docker-compose.yml.
Here are my current configs, in case they're helpful:
prisma.yml
# The HTTP endpoint for your Prisma API
# Tried with https://<myapp>.herokuapp.com only too, with the same result
endpoint: https://<myapp>.herokuapp.com/dinai/staging
secret: ${env:PRISMA_SERVICE_SECRET}
# Points to the file that contains your datamodel
datamodel: datamodel.prisma
databaseType: document
# Specifies language & location for the generated Prisma client
generate:
  - generator: javascript-client
    output: ../src/generated/prisma-client
# Ensures Prisma client is re-generated after a datamodel change.
hooks:
  post-deploy:
    - prisma generate
docker-compose.yml
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        managementApiSecret: ${PRISMA_MANAGEMENT_API_SECRET}
        databases:
          default:
            connector: mongo
            uri: mongodb+srv://${MONGO_DB_USER}:${MONGO_DB_PASSWORD}@${MONGO_DB_CLUSTER}/myapp?retryWrites=true&w=majority
            database: myapp
Plus, a weird thing happens too. In both cases, if I try to navigate the resulting API with GraphQL Playground, clicking on the "Schema" tab returns an error, while the "Docs" tab is populated correctly. Apparently, the exception is preventing the script from finishing generating the rest of the schemas.
A little help from someone experienced with Prisma/Heroku would be awesome.
Thanks in advance.
To date, it's still not clear to me what was causing the exception in detail. But looking more deeply at the Prisma docs, I discovered that version 1 requires proxying the app through the Prisma Cloud.
So, probably, deploying straight to Heroku without it was causing the main issue: basically, there wasn't any Prisma container service running on the server.
What I did was follow, step by step, the official doc about how to deploy your server on Prisma Cloud (here's the video version). As in the example shown in the guide, I already had my own project, which is actually split into two different apps: one for the client (front-end) and one for the API (back-end). So, instead of generating a new one, I pointed the back-end API endpoint to the remote URL of the Prisma server generated by the cloud (the Heroku container created by following the tutorial). Then, leaving the management API secret key only in the Prisma server container configuration (which was generated automatically by the cloud) and the service secret only in the back-end app, I was finally able to run prisma deploy correctly and run my project remotely.
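For reference, a hedged sketch of the final secret split described above (the server app name is a placeholder; the service/stage path comes from the question's endpoint):

# prisma.yml in the back-end app: points at the Cloud-generated Prisma server.
# Only the service secret is set here; the management API secret remains
# solely in the Prisma server container's configuration.
endpoint: https://<prisma-server-app>.herokuapp.com/dinai/staging
secret: ${env:PRISMA_SERVICE_SECRET}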

QNAP Container Station Gitlab Email Server

I have a QNAP TS453a NAS. In Container Station I installed sameersbn's Docker GitLab 10.4.2. But I couldn't find any manual on how to configure an email server, so that GitLab can send emails when someone forgets their password, for example. Can anyone help me?
I installed the sameersbn version of GitLab in Container Station as well, and I found it quite restrictive. My personal recommendation would be to just use the standard CE version that GitLab provides.
At the time I used the sameersbn version of GitLab, there was no way that I could find to successfully configure the email server (not saying there isn't one; I just couldn't figure it out). That doesn't mean you can't do it yourself manually, though.
I would recommend that you mount your volumes somewhere on disk instead of within Container Station, so it is easier to reconfigure any settings manually.
Here is what my docker-compose file looks like. It is very simple, and really the only things you need to care about are the volumes and where you are mounting them to.
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: <HOST_NAME>
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url <EXTERNAL_URL>
  ports:
    - '10080:80'   # Insecure port
    - '10443:443'  # Secure port
    - '10020:22'   # SSH port
  volumes:
    - '/share/Gitlab/config:/etc/gitlab'   # To configure the email server, this is the one we care about.
    - '/share/Gitlab/logs:/var/log/gitlab'
    - '/share/Gitlab/data:/var/opt/gitlab'
The one we care about is '/share/Gitlab/config:/etc/gitlab'. If you don't know much about volumes and mounting them, the format is pretty much '<your_local_location>:<container_location>'. So if I navigate to /share/Gitlab/config on my QNAP NAS, I will find all the configuration for my GitLab instance.
In /share/Gitlab/config you should see a file called gitlab.rb; this is a Ruby file that contains all the configuration for your GitLab instance. If you search in this file you will find the configuration below:
### GitLab email server settings
###! Docs: https://docs.gitlab.com/omnibus/settings/smtp.html
###! **Use smtp instead of sendmail/postfix.**
# gitlab_rails['smtp_enable'] = true
# gitlab_rails['smtp_address'] = "smtp.server"
# gitlab_rails['smtp_port'] = 465
# gitlab_rails['smtp_user_name'] = "smtp user"
# gitlab_rails['smtp_password'] = "smtp password"
# gitlab_rails['smtp_domain'] = "example.com"
# gitlab_rails['smtp_authentication'] = "login"
# gitlab_rails['smtp_enable_starttls_auto'] = true
# gitlab_rails['smtp_tls'] = false
All you need to do is uncomment these lines (# marks a comment, so just remove it) and fill in your SMTP details.
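For example, a hedged sketch with placeholder values (your SMTP host, port, and credentials will differ; port 587 pairs with STARTTLS here):

gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.example.com"      # your provider's SMTP host
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "gitlab@example.com"
gitlab_rails['smtp_password'] = "app-password"
gitlab_rails['smtp_domain'] = "example.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = false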
This will require you to reconfigure your GitLab instance, so you will need to get a shell in your GitLab container and run the reconfigure command.
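A sketch of that step (the container name is whatever docker ps shows for your GitLab service; compose typically generates something like <project>_web_1):

# open a shell in the running GitLab container
docker exec -it <gitlab_container_name> /bin/bash
# apply the changes made in gitlab.rb
gitlab-ctl reconfigure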
Essentially you need to find a way of getting to the gitlab.rb file so you can amend the SMTP email server settings.
Some good reading material for installing GitLab via Docker are:
https://docs.gitlab.com/omnibus/docker/
https://docs.gitlab.com/ee/install/docker.html
https://developer.ibm.com/code/2017/07/13/step-step-guide-running-gitlab-ce-docker/
https://www.digitalocean.com/community/tutorials/how-to-build-docker-images-and-host-a-docker-image-repository-with-gitlab
(Please note that some additional configuration could be needed to allow your system to write to /share/Gitlab/config; you can do this with the chmod command via SSH.)