Which is the correct PiHole DNS Entry - docker-compose

In the last couple of weeks I moved from clicking Pi-hole together in Portainer to using stacks / docker-compose.yaml.
However, this also limited the functionality of my Pi-hole. At some point it was no longer possible to run the gravity update from the Pi-hole web interface; instead I always had to open the container's console and run
pihole -g
Manually added blacklist and whitelist entries were also only picked up after a manual update, and disabling Pi-hole from the web interface no longer worked.
I was able to fix this by removing the following entries from my docker-compose file:
environment:
  PIHOLE_DNS_: 9.9.9.9#53;9.9.9.9#53
  DNS1: 9.9.9.9 # Quad9 (filtered, DNSSEC)
  DNS2: 9.9.9.9 # If we don't specify two, it will auto pick google.
security_opt:
  - no-new-privileges:true
cap_add:
  - NET_ADMIN
dns:
  - 127.0.0.1
  - 9.9.9.9
This config led to 9.9.9.9 appearing as the "Custom 1" upstream DNS server. For now I have selected an upstream server manually (in the list on the left in Settings). Which of the DNS entries do I need to keep, and why does Pi-hole treat the address as a custom entry rather than one of the standard DNS providers?
Are these settings stored in one of the volumes? I could not find any matching entries in Portainer's environment variables after I removed them explicitly.
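For what it's worth: in current pihole/pihole images, PIHOLE_DNS_ supersedes the deprecated DNS1/DNS2 variables, and the selected upstreams persist in setupVars.conf inside the /etc/pihole volume, which would explain why the value survives removal of the environment variables. A minimal sketch of the relevant service (untested; adjust the image tag and host paths to your setup):

services:
  pihole:
    image: pihole/pihole:latest
    environment:
      # Semicolon-separated upstream list; supersedes the deprecated DNS1/DNS2.
      # 149.112.112.112 is Quad9's documented secondary resolver.
      PIHOLE_DNS_: 9.9.9.9;149.112.112.112
    volumes:
      - ./etc-pihole:/etc/pihole        # setupVars.conf (incl. upstream DNS) lives here
      - ./etc-dnsmasq.d:/etc/dnsmasq.d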


Lando with ParcelJS: exposing a port

I'm trying to use ParcelJS with Lando, and there's one problem if you want HMR to work: you need to expose a port, and that seems to be much harder than it should be with Lando. :(
So I know I need to run this for my ParcelJS watch command:
parcel watch dev/scripts.js --out-dir prod/ --hmr-port 6101
Then I need to expose the port I've assigned, in this case "6101", to Docker (via my Lando config file). But that's where it gets tricky, apparently because of the proxy setup Lando uses.
My current .lando.yml config is below, but it doesn't work as expected and the port is not exposed. I still get a "scripts.js:224 WebSocket connection to 'wss://testwp.lndo.site:6101/' failed:" error message from my ParcelJS-generated script file in my browser's dev tools:
name: testwp
recipe: wordpress
config:
  php: '8.0'
  via: nginx
  webroot: wordpress
  database: mysql:8.0
services:
  appserver:
    portforward: 6101
I saw a similar post about a problem with LocalWP, which does about the same thing Lando does.
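For what it's worth, Lando's service overrides can pass raw docker-compose keys (including ports) straight through, so a mapping like this sketch might work where portforward does not (an untested assumption, not a confirmed fix):

services:
  appserver:
    overrides:
      ports:
        - '6101:6101'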
Can you maybe try adding the flag --hmr-hostname localhost?
It's either that or --hmr-hostname testwp.lndo.site.
UPDATE:
After checking the Parcel CLI docs, the flag could also be --hmr-host localhost; try that as well.
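Put together with the original command, that would look something like this (whether --hmr-hostname or --hmr-host is accepted depends on the Parcel version in use):

parcel watch dev/scripts.js --out-dir prod/ --hmr-port 6101 --hmr-hostname testwp.lndo.site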

How can I access the Open Policy Agent command line via Docker Desktop in Windows 10

I am attempting to learn the various features of Open Policy Agent because I think it may be a useful tool in a microservices-based application.
Here is a link to the 'Running with Docker' section of the documentation: https://www.openpolicyagent.org/docs/latest/deployments/#running-with-docker
Currently I am running Docker via Docker Desktop in a Windows 10 environment, and I already have a docker-compose file set up for my main application, which includes various Docker images. My thought was that I could simply add the latest openpolicyagent image as well as the openpolicyagent demo-restful-api so that I could begin learning about the service. To do this, I added the following lines to my docker-compose.yml:
opa:
  image: openpolicyagent/opa:0.34.2
  ports:
    - 8181:8181
  command:
    - "run"
    - "--server"
    - "--log-level=debug"
    - "api_authz.rego"
  volumes:
    - C:\Sites\prosaurus\policy\api_authz.rego:/api_authz.rego
api_server:
  image: openpolicyagent/demo-restful-api:latest
  ports:
    - 5000:5000
  environment:
    - OPA_ADDR=http://opa:8181
    - POLICY_PATH=/v1/data/httpapi/authz
This appears to have worked, in that I can go to localhost:8181 and I see the Query and Input Data (JSON) boxes, as I presume is supposed to happen. However, I would like to test some of the command-line functions mentioned here:
https://www.openpolicyagent.org/docs/latest/#2-try-opa-eval
However, I cannot seem to access the command line of the Docker container running the OPA agent. I have attempted this via the Docker Desktop GUI in Windows: it lists all running containers, and each one has a button to open a CLI. They all work except the OPA one. When I click it, a cmd window opens for a split second, displays something too fast to read, and then closes.
What have I done wrong?
OPA can be run in a few different ways, and opa eval is distinctly different from running OPA as a server (i.e. opa run --server).
When you run OPA as a server - which is how you'd normally run OPA in production - you query OPA for policy decisions through its REST API.
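With the compose file above, for example, a decision query against the server would look roughly like this (bash quoting shown; the input shape here is only an assumption, borrowed from OPA's HTTP API authorization tutorial):

curl -s localhost:8181/v1/data/httpapi/authz -d '{"input": {"user": "alice", "method": "GET", "path": ["finance", "salary", "alice"]}}'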
opa eval, on the other hand, is more like the Swiss army knife of OPA, allowing you to quickly evaluate a rule or expression against some provided policy and data.
You can think of them as two entirely different tools.
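As for the CLI button: the standard opa image ships just the opa binary as its entrypoint, with no shell (the -debug image variants include one), so Docker Desktop has nothing to attach to. You can still exercise opa eval as a one-off container; a sketch reusing the policy mount from the compose file above:

docker run --rm -v C:\Sites\prosaurus\policy:/policy openpolicyagent/opa:0.34.2 eval --data /policy/api_authz.rego "data.httpapi.authz"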

cloud-init ignoring static IP network configuration

I'm running the Ubuntu 18.04 cloud image and trying to configure networking through cloud-init. For some reason it ignores my network configuration when I try to assign a static IP and just falls back to DHCP. I'm not sure why, and I'm not sure how to debug it. Does anyone know if I am doing something wrong, or how I should troubleshoot this further?
Here is the config.yaml I'm using to generate my config.img:
#cloud-config
network:
  version: 2
  ethernets:
    ens2:
      dhcp4: false
      dhcp6: false
      addresses: [10.0.0.40/24]
      gateway4: 10.0.0.1
password: secret # for the 'ubuntu' user in case we can't SSH in
chpasswd: { expire: false }
ssh_pwauth: true
users:
  - default
  - name: brennan
    ssh_import_id: gh:brennancheung
    sudo: ALL=(ALL) NOPASSWD:ALL
hostname: vm
runcmd:
  - [ sh, -xc, "echo Here is the network config for your instance" ]
  - [ ip, a ]
final_message: "Cloud init is done. Woohoo!"
Everything else in the config seems to be working; it's as if it doesn't even see the network portion.
I'm attaching the .img as a CD-ROM for cloud-init to read. You can see how I'm running it here: https://github.com/brennancheung/playbooks/blob/master/cloud-init-lab/Makefile
NOTE: Once I'm logged into the box, I can copy the network section above into /etc/netplan, re-apply it, and the networking comes up fine with a static IP. So I don't think there are any obvious errors I'm missing, which leads me to believe the issue is in the cloud-init networking module(s) rather than netplan itself.
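(For reference, that manual workaround amounts to something like the following; the generated file name, typically 50-cloud-init.yaml on cloud images, may differ:)

sudo vi /etc/netplan/50-cloud-init.yaml   # paste in the network: section from above
sudo netplan apply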
I finally figured it out. Hopefully this helps someone else.
Apparently you can't supply network configuration in user-data; you have to specify it in the cloud provider's data source or in the metadata. To do that, move the network section into its own file and build the cloud-init image with the --network-config=... option.
Ex:
cloud-localds -v --network-config=network-config-v2.yaml seed.img user-data.yaml
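The network-config-v2.yaml here is just the network definition split out of user-data; for the NoCloud data source it is typically written without the top-level network: wrapper, along these lines:

version: 2
ethernets:
  ens2:
    dhcp4: false
    dhcp6: false
    addresses: [10.0.0.40/24]
    gateway4: 10.0.0.1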
I have the complete setup for configuring and booting a cloud instance in a local KVM if it helps anyone else out.
https://github.com/brennancheung/playbooks/tree/master/cloud-init-lab
If you look in /etc/cloud/cloud.cfg.d, you will find a file called 99-fake-cloud.cfg (or something similar). If you delete it, cloud-init will configure the network using the parameters in your user-data file (i.e. /etc/cloud/cloud.cfg).
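(If you go this route, it amounts to roughly the following; the exact file name varies by image, and cloud-init clean forces the modules to re-run on the next boot:)

sudo rm /etc/cloud/cloud.cfg.d/99-fake-cloud.cfg   # name varies by image
sudo cloud-init clean --logs
sudo reboot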

QNAP Container Station Gitlab Email Server

I have a QNAP TS453a NAS. In Container Station I installed sameersbn's Docker GitLab 10.4.2, but I couldn't find any guide on how to configure an email server so that GitLab can send emails, for example when someone forgets their password. Can anyone help me?
I installed the sameersbn version of GitLab in Container Station as well, and I found it quite restrictive. My personal recommendation would be to just use the standard CE version that GitLab provides.
At the time I used the sameersbn version of GitLab, there was no way I could find to successfully configure the email server (not saying there isn't one; I just couldn't figure it out). That doesn't mean you can't do it yourself manually, though.
I would recommend that you mount your volumes somewhere on disk instead of inside Container Station, so that it is easier to change any settings manually.
Here is what my docker-compose file looks like. It is very simple, and really the only things you need to care about are the volumes and where you are mounting them to.
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: <HOST_NAME>
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url <EXTERNAL_URL>
  ports:
    - '10080:80'   # Insecure port
    - '10443:443'  # Secure port
    - '10020:22'   # SSH port
  volumes:
    - '/share/Gitlab/config:/etc/gitlab'   # To configure the email server we care about this one.
    - '/share/Gitlab/logs:/var/log/gitlab'
    - '/share/Gitlab/data:/var/opt/gitlab'
The one we care about is '/share/Gitlab/config:/etc/gitlab'. If you don't know much about volumes and mounting them, a mount is pretty much '<your_local_location>:<container_location>'. So if I navigate to /share/Gitlab/config on my QNAP NAS, I will find all the configuration for my GitLab instance.
In /share/Gitlab/config you should see a file called gitlab.rb. This is a Ruby file that contains all the configuration for your GitLab instance. If you search this file you will find the section below:
### GitLab email server settings
###! Docs: https://docs.gitlab.com/omnibus/settings/smtp.html
###! **Use smtp instead of sendmail/postfix.**
# gitlab_rails['smtp_enable'] = true
# gitlab_rails['smtp_address'] = "smtp.server"
# gitlab_rails['smtp_port'] = 465
# gitlab_rails['smtp_user_name'] = "smtp user"
# gitlab_rails['smtp_password'] = "smtp password"
# gitlab_rails['smtp_domain'] = "example.com"
# gitlab_rails['smtp_authentication'] = "login"
# gitlab_rails['smtp_enable_starttls_auto'] = true
# gitlab_rails['smtp_tls'] = false
All you need to do is uncomment those lines (# marks a comment, so just remove it) and fill in your SMTP details.
This requires reconfiguring your GitLab instance, so you will need to get a shell in the GitLab container and run gitlab-ctl reconfigure.
Essentially, you need to find a way of getting at the gitlab.rb file so you can amend the SMTP email server settings.
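From the QNAP host that boils down to something like this (the container name is a placeholder here; check docker ps for the real one):

docker exec -it <gitlab_container_name> gitlab-ctl reconfigure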
Some good reading material for installing GitLab via Docker:
https://docs.gitlab.com/omnibus/docker/
https://docs.gitlab.com/ee/install/docker.html
https://developer.ibm.com/code/2017/07/13/step-step-guide-running-gitlab-ce-docker/
https://www.digitalocean.com/community/tutorials/how-to-build-docker-images-and-host-a-docker-image-repository-with-gitlab
(Please note that some additional configuration may be required to allow your system to write to /share/Gitlab/config; you can do this with the chmod command via SSH.)

Rubber believes there is a missing rule when it expressly identified it earlier

Launching
cap rubber:create_staging
starts by checking the account's existing EC2 security groups. The first check is on the default group, which cannot be deleted from the AWS web console, so the response to the following prompt is naturally 'N':
* Security Group already in cloud, syncing rules: default
Rule '{"protocol"=>"tcp", "from_port"=>"1", "to_port"=>"65535", "source_group_name"=>"", "source_group_account"=>"460491791257"}' exists in cloud, but not locally, remove from cloud? [y/N]: N
Yet, four checks later:
* Missing rule, creating: {"source_group_name"=>"default", "source_group_account"=>"460491791257", "protocol"=>"tcp", "from_port"=>"1", "to_port"=>"65535"}
/Users/you/.rvm/gems/ruby-1.9.3-p551/gems/excon-0.45.4/lib/excon/middlewares/expects.rb:10:in `response_call': Duplicate => the specified rule \"peer: sg-0910926c, TCP, from port: 1, to port: 65535, ALLOW\" already exists (Fog::Compute::AWS::Error)
Clearly there is an attempt to create an identical rule. The only difference is that the rule picked up by the check has an empty string for source_group_name, while the Rubber routine tries to create the same rule with the source_group_name filled in.
Creating a tag in the EC2 web console with 'source_group_name' and the default value does not change the behaviour. Does this require a fix via EC2 or in Rubber?
Edit: while the following does effectively work, the source of the problem was the Rubber version: the latest was not being used, and that was probably the origin of the problem. The list of versions is here.
This can be overcome by creating a new security group in the EC2 web console and editing the config file config/rubber/rubber.yml to use the same identity created in the console (line 206 or thereabouts):
security_groups:
  default:
    description: The default security group
    rules:
      - source_group_name: rubber_default
Then, in config/rubber/instance-<env>.yml, the security_groups block needs amending (line 52 or thereabouts):
security_groups:
  - rubber-default