I want to run a Node.js app in US South that was previously deployed in Germany.
manifest.yml:
applications:
- name: ISICC IDES
  disk_quota: 1024M
  host: isiccides
  command: node server.js
  path: .
  instances: 1
  memory: 256M
  domain: mybluemix.net
I created a new launch configuration. Under Spaces I'm still only offered the Germany spaces, and the domain is set to eu-de.mybluemix.net, which I cannot change.
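For reference, pushing manually with the Cloud Foundry CLI against the US South endpoint (api.ng.bluemix.net) would look roughly like this (a sketch; the org and space names are placeholders for whatever exists in US South):

cf api https://api.ng.bluemix.net
cf login
cf target -o MY_ORG -s MY_SPACE
cf push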
As I am quite familiar with Docker Compose and not so familiar with Amazon's CloudFormation, I found it extremely nice to be able to run my Docker Compose files via the ECS integration and, voilà, behind the scenes everything you need is created for you. You get your load balancer (if not already created) and your ECS cluster with your services running, and everything is connected and just works. When I started wanting to do slightly more advanced things, I ran into a problem that I can't seem to find an answer to online.
I have two services in my Docker Compose file: my Spring Boot web app and my Postgres DB. I wanted to implement SSL and redirect all traffic to HTTPS. After a lot of research and trial and error, I finally got it to work by extending my Compose file with x-aws-cloudformation and adding native CloudFormation YAML. In doing so I was forced to choose an Application Load Balancer over a Network Load Balancer, since it operates on layer 7 (HTTP/HTTPS). However, my problem is that I now have no way of reaching my Postgres database and running queries against it via, for example, IntelliJ. My Spring Boot app can read from and write to the database, so that part works fine. Before the whole SSL implementation I didn't specify a load balancer in my Compose file, so it gave me a Network Load Balancer every time I ran my Compose file, and then I could connect to my database via IntelliJ and run queries. I have tried adding an inbound rule on my security group that allows all inbound traffic to my database on port 5432, but that didn't help. I may not be setting the correct host in my IntelliJ connection details, but I have tried the following:
DNS name of the load balancer
IP address of the load balancer
public IP of my Postgres DB task (launch type: Fargate)
I would simply like to reach my database and run queries against it, even though it is running inside an AWS ECS cluster behind an Application Load Balancer. Is there a way of achieving what I am trying to do, or do I have to have two separate load balancers (one Application LB and one Network LB)?
Here is my docker-compose file (I have omitted a few irrelevant environment variables):
version: "3.9"
x-aws-loadbalancer: arn:my-application-load-balancer
services:
my-web-app:
build:
context: .
image: hub/my-web-app
x-aws-pull_credentials: xxxxxxxx
container_name: my-app-name
ports:
- "80:80"
networks:
- my-app-network
depends_on:
- postgres
deploy:
replicas: 1
resources:
limits:
cpus: '0.5'
memory: 2048M
environment:
- SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/my-db?currentSchema=my-db_schema
- SPRING_DATASOURCE_USERNAME=dbpass
- SPRING_DATASOURCE_PASSWORD=dbpass
- SPRING_DATASOURCE_DRIVER-CLASS-NAME=org.postgresql.Driver
- SPRING_JPA_DATABASE_PLATFORM=org.hibernate.dialect.PostgreSQLDialect
postgres:
build:
context: docker/database
image: hub/my-db
container_name: my-db
networks:
- my-app-network
deploy:
replicas: 1
resources:
limits:
cpus: '0.5'
memory: 2048M
environment:
- POSTGRES_USER=dbpass
- POSTGRES_PASSWORD=dbpass
- POSTGRES_DB=my-db
networks:
my-app-network:
name: my-app-network
x-aws-cloudformation:
Resources:
MyWebAppTCP80TargetGroup:
Properties:
HealthCheckPath: /actuator/health
Matcher:
HttpCode: 200-499
MyWebAppTCP80Listener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
Protocol: HTTP
Port: 80
LoadBalancerArn: xxxxx
DefaultActions:
- Type: redirect
RedirectConfig:
Port: 443
Host: "#{host}"
Path: "/#{path}"
Query: "#{query}"
Protocol: HTTPS
StatusCode: HTTP_301
MyWebAppTCP443Listener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
Protocol: HTTPS
Port: 443
LoadBalancerArn: xxxxxxxxx
Certificates:
- CertificateArn: "xxxxxxxxxx"
DefaultActions:
- Type: forward
ForwardConfig:
TargetGroups:
- TargetGroupArn:
Ref: MyWebAppTCP80TargetGroup
MyWebAppTCP80RedirectRule:
Type: AWS::ElasticLoadBalancingV2::ListenerRule
Properties:
ListenerArn:
Ref: MyWebAppTCP80Listener
Priority: 1
Conditions:
- Field: host-header
HostHeaderConfig:
Values:
- "*.my-app.com"
- "www.my-app.com"
- "my-app.com"
Actions:
- Type: redirect
RedirectConfig:
Host: "#{host}"
Path: "/#{path}"
Query: "#{query}"
Port: 443
Protocol: HTTPS
StatusCode: HTTP_301
What is the best way to enable BBR by default for my clusters?
In this link, I didn't see an option for controlling the congestion control algorithm.
Google BBR can only be enabled on Linux operating systems. By default, Linux servers use Reno and CUBIC, but recent kernels also include the Google BBR algorithm, which can be enabled manually.
To enable it on CentOS 8, add the lines below to /etc/sysctl.conf and run sysctl -p:
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
For other Linux distributions you can refer to this link.
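To confirm the setting took effect after sysctl -p, you can check the active congestion control algorithm (and, if BBR is built as a module, that tcp_bbr is loaded):

sysctl net.ipv4.tcp_congestion_control
# expected output: net.ipv4.tcp_congestion_control = bbr
lsmod | grep bbr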
Maybe set it as part of your Deployment spec:
spec:
  initContainers:
    - name: sysctl-buddy
      image: busybox:1.29
      securityContext:
        privileged: true
      command: ["/bin/sh"]
      args:
        - -c
        - sysctl -w net.core.default_qdisc=fq net.ipv4.tcp_congestion_control=bbr
      resources:
        requests:
          cpu: 1m
          memory: 1Mi
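For completeness, initContainers belongs under the Pod template spec of a Deployment; a minimal self-contained sketch of where the snippet above fits (the app name and image are placeholders) could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
        - name: sysctl-buddy
          image: busybox:1.29
          securityContext:
            privileged: true    # required to write sysctls; may be blocked by cluster policy
          command: ["/bin/sh"]
          args:
            - -c
            - sysctl -w net.core.default_qdisc=fq net.ipv4.tcp_congestion_control=bbr
      containers:
        - name: my-app
          image: my-registry/my-app:latest   # placeholder for your actual workload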
Hi all, I am working on NiFi and am trying to install it in AKS (Azure Kubernetes Service).
I am using NiFi version 1.9.2. Installing it in AKS gives me this error:
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
sed: preserving permissions for ‘/opt/nifi/nifi-current/conf/sedSFiVwC’: Operation not permitted
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
sed: preserving permissions for ‘/opt/nifi/nifi-current/conf/sedK3S1JJ’: Operation not permitted
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
sed: preserving permissions for ‘/opt/nifi/nifi-current/conf/sedbcm91T’: Operation not permitted
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
sed: preserving permissions for ‘/opt/nifi/nifi-current/conf/sedIuYSe1’: Operation not permitted
NiFi running with PID 28.
The specified run.as user nifi
does not exist. Exiting.
Received trapped signal, beginning shutdown...
Below is my nifi.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nifi-core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nifi-core
  template:
    metadata:
      labels:
        app: nifi-core
    spec:
      containers:
        - name: nifi-core
          image: my-azurecr.io/nifi-core-prod:1.9.2
          env:
            - name: NIFI_WEB_HTTP_PORT
              value: "8080"
            - name: NIFI_VARIABLE_REGISTRY_PROPERTIES
              value: "./conf/custom.properties"
          resources:
            requests:
              cpu: "6"
              memory: 12Gi
            limits:
              cpu: "6"
              memory: 12Gi
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: my-nifi-core-conf
              mountPath: /opt/nifi/nifi-current/conf
      volumes:
        - name: my-nifi-core-conf
          azureFile:
            shareName: my-file-nifi-core/nifi/conf
            secretName: my-nifi-secret
            readOnly: false
I have some customization in the NiFi Dockerfile, which copies some config files related to my configuration. When I run the my-azurecr.io/nifi-core-prod:1.9.2 Docker image locally, it works as expected.
But when I try to run it on AKS it gives the above error. Since it is related to permissions, I have tried both user nifi and root in the Dockerfile.
All the required configuration files are provided in the volume my-nifi-core-conf, which is in the same resource group.
Since I am starting NiFi with Docker, my expectation is that it will behave the same regardless of environment, whether locally or in AKS.
But the error also says the user nifi does not exist, even though the official NiFi image sets up that user.
Can anyone help? I can't even start the container in interactive mode, as the pod is not in a running state. Thanks in advance.
I think you're missing the security context definition for your Kubernetes Pod. The user that NiFi runs as inside the Docker image has a specific UID and GID, and given the error message you're getting, I suspect that because that user is not defined in the Pod's security context, it's not launching as expected.
Have a look at the section of the Kubernetes documentation about security contexts; that should be enough to get you started.
I would also look at using something like Minikube when testing Kubernetes deployments, as Kubernetes adds a large number of controls around a container engine like Docker.
Security Contexts Docs: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Minikube: https://kubernetes.io/docs/setup/learning-environment/minikube/
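As a rough sketch of what that could look like in your Deployment's pod template (assuming the nifi user in your image has UID/GID 1000, which is the default in the official NiFi image; adjust to whatever your Dockerfile actually uses):

spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000    # UID of the nifi user inside the image (assumption)
        runAsGroup: 1000   # GID of the nifi group (assumption)
        fsGroup: 1000      # applied to supported volume types; Azure Files mounts may instead need uid/gid mount options
      containers:
        - name: nifi-core
          image: my-azurecr.io/nifi-core-prod:1.9.2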
If you never figured this out, I was able to do this by running an initContainer before the main container, and changing the directory perms there.
initContainers:
  - name: init1
    image: busybox:1.28
    volumeMounts:
      - name: nifi-pvc
        mountPath: "/opt/nifi/nifi-current"
    command: ["sh", "-c", "chown -R 1000:1000 /opt/nifi/nifi-current"] # or whatever you want to do as root
Update: this does not work with NiFi 1.14.0; it works with 1.13.2.
I have the following code that connects to the logger service HAProxies and drains the first logger VM.
Then, in a separate play, it connects to the logger list of hosts, where the first host has been drained, and does a service reload.
- name: Haproxy Warmup
  hosts: role_max_logger_lb
  tasks:
    - name: analytics-backend 8300 range
      haproxy: 'state=disabled host=maxlog-rwva1-{{ env }}-1.example.com backend=analytics-backend socket=/var/run/admin.sock'
      become: true
      when: warmup is defined and buildnum is defined
    - name: logger-backend 8200
      haproxy: 'state=disabled host=maxlog-rwva1-prod-1.example.com:8200 backend=logger-backend socket=/var/run/admin.sock'
      become: true
      when: warmup is defined and buildnum is defined

- name: Warmup Deploy
  hosts: "role_max_logger"
  serial: 1
  tasks:
    - shell: pm2 gracefulReload max-logger
      when: warmup is defined and buildnum is defined
    - pause: prompt="First host has been deployed to. Please verify the logs before continuing. Ctrl-c to exit, Enter to continue deployment."
      when: warmup is defined and buildnum is defined
This code is pretty bad and doesn't work when I try to expand it into a rolling restart of several services behind several HAProxies. I'd need to somehow drain 33% of all the app VMs from the HAProxy backends, then connect to a different host list and do the 33% reboot process there, then resume with the next third of the drain list and the corresponding third of the reboot list, and so on.
- name: 33% at a time drain
  hosts: "role_max_logger_lb"
  serial: "33%"
  tasks:
    - name: analytics-backend 8300 range
      haproxy: 'state=disabled host=maxlog-rwva1-prod-1.example.com backend=analytics-backend socket=/var/run/admin.sock'
      become: true
      when: warmup is defined and buildnum is defined
    - name: logger-backend 8200
      haproxy: 'state=disabled host=maxlog-rwva1-prod-1.example.com:8200 backend=logger-backend socket=/var/run/admin.sock'
      become: true
      when: buildnum is defined and service is defined

- name: 33% at a time deploy
  hosts: "role_max_logger"
  serial: "33%"
  tasks:
    - shell: pm2 gracefulReload {{ service }}
      when: buildnum is defined and service is defined
    - pause: prompt="One third of machines in the pool have been deployed to. Enter to continue"
I could do this much more easily in Chef: just query the Chef server for all nodes registered in a given role and do all my logic in real Ruby. If it matters, the host lists I'm calling here are actually pulled from my Chef server and fed in as JSON.
I don't know the proper Ansible way of doing this without being able to drop into arbitrary scripting to do all the dirty work.
I was thinking maybe I could do something super hacky like this inside a shell command in Ansible under the deploy, which might work if there is a way of pulling the host currently being processed out of the host list, like an Ansible equivalent of node['fqdn'] in Chef:
ssh maxlog-lb-rwva1-food-1.example.com 'echo "disable server logger-backend/maxlog-rwva1-food-1.example.com:8200" | socat stdio /run/admin.sock'
Or maybe there is a way I can wrap the whole thing in serial: "33%" and include sub-plays that do things. Something like this, but again I don't know how to properly pass a thirded list of my app servers around within the sub-plays:
- name: Deployer
  hosts: role_max_logger
  serial: "33%"
  - include: drain.yml
  - include: reboot.yml
Basically, I don't know what I'm doing. I can think of a bunch of ways to try to do this, but they all seem terrible and overly obtuse. If I were to go down these hacky roads, I would probably be better off just writing a big shell script or actual Ruby to do this.
The official Ansible documentation I have read has overly simplified examples that don't really map to my situation.
Particularly this one, where the load balancer is on the same host as the app server:
- hosts: webservers
  serial: 5
  tasks:
    - name: take out of load balancer pool
      command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1
http://docs.ansible.com/ansible/playbooks_delegation.html
I guess my questions are:
Is there an Ansible equivalent of Chef's node['fqdn'] so I can use the host currently being processed as a variable?
Am I just completely off the rails for how I'm trying to do this?
Is there an Ansible equivalent of Chef's node['fqdn'] so I can use the host currently being processed as a variable?
ansible_hostname, ansible_fqdn (both taken from the actual machine settings) or inventory_hostname (defined in the inventory file), depending on which you want to use.
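For example, a quick debug task shows what each of these resolves to on the host currently being processed (a trivial sketch):

- name: show host identity variables
  debug:
    msg: "inventory_hostname={{ inventory_hostname }} ansible_fqdn={{ ansible_fqdn }} ansible_hostname={{ ansible_hostname }}"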
As you correctly noted, you need to use delegation for this task.
Here is some pseudocode for you to start with:
- name: 33% at a time deploy
  hosts: role_max_logger
  serial: 33%
  tasks:
    - name: take out of lb
      shell: take_out_host.sh --name={{ inventory_hostname }}
      delegate_to: "{{ item }}"
      with_items: "{{ groups['role_max_logger_lb'] }}"
    - name: reload backend
      shell: reload_service.sh
    - name: add back to lb
      shell: add_host.sh --name={{ inventory_hostname }}
      delegate_to: "{{ item }}"
      with_items: "{{ groups['role_max_logger_lb'] }}"
I assume that group role_max_logger defines servers with backend services to be reloaded and group role_max_logger_lb defines servers with load balancers.
This play takes all hosts from role_max_logger and splits them into 33% batches; then, for each host in the batch, it executes take_out_host.sh on each of the load balancers, passing the current backend hostname as a parameter; after all hosts from the current batch are disabled on the load balancers, the backend services are reloaded; after that, the hosts are added back to the LBs as in the first task. This operation is then repeated for every batch.
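If you prefer to keep the haproxy module from your original play rather than placeholder scripts, the same pattern might look roughly like this (a sketch; the backend, socket, and pm2 service names are taken from the question and may need adjusting):

- name: 33% at a time drain, reload, and re-enable
  hosts: role_max_logger
  serial: "33%"
  tasks:
    - name: drain this host on every load balancer
      haproxy: 'state=disabled host={{ inventory_hostname }} backend=logger-backend socket=/var/run/admin.sock'
      become: true
      delegate_to: "{{ item }}"
      with_items: "{{ groups['role_max_logger_lb'] }}"

    - name: reload the logger service on this host
      shell: pm2 gracefulReload max-logger

    - name: re-enable this host on every load balancer
      haproxy: 'state=enabled host={{ inventory_hostname }} backend=logger-backend socket=/var/run/admin.sock'
      become: true
      delegate_to: "{{ item }}"
      with_items: "{{ groups['role_max_logger_lb'] }}"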
I am a newbie and tried to deploy this demo. I got the following errors:
Downloading artifacts...DOWNLOAD SUCCESSFUL
Target: https://api.ng.bluemix.net
FAILED
Error reading manifest file:
yaml: line 3: mapping values are not allowed in this context
Finished: FAILED
Stage has no runtime information
Initial manifest.yml:
declared-services:
  visual-recognition-free
    label: watson_vision_combined
    plan: free
applications:
- services:
  - visual-recognition-free
  name: visual-recognition-demo
  command: npm start
  path: .
  memory: 512M
Due to the errors I changed it to:
declared-services:
  visual-recognition-free:
    label: watson_vision_combined
    plan: free
applications:
- name: visual-recognition-demo
  command: npm start
  path: .
  memory: 512M
  services:
  - visual-recognition-free
I clicked on "Deploy the App from Workspace" and got:
Deploy failed: An unknown error occurred.
The issue is related to the manifest.yml syntax. Currently, the manifest.yml file is defined as follows:
---
declared-services:
  visual-recognition-free
    label: watson_vision_combined
    plan: free
applications:
- services:
  - visual-recognition-free
  name: visual-recognition-demo
  command: npm start
  path: .
  memory: 512M
However, the manifest.yml should be as follows:
---
declared-services:
  visual-recognition-free:
    label: watson_vision_combined
    plan: free
applications:
- services:
  - visual-recognition-free
  name: visual-recognition-demo
  command: npm start
  path: .
  memory: 512M
There should be a ":" at the end of the declared-services name "visual-recognition-free".
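To catch this kind of problem before pushing, you can parse the manifest locally first; for example, assuming Python with PyYAML is installed:

python -c "import yaml; yaml.safe_load(open('manifest.yml')); print('manifest.yml parses as valid YAML')"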
I have created a GitHub issue to get the manifest.yml file updated as well:
https://github.com/watson-developer-cloud/visual-recognition-nodejs/issues/193
In the interim, you can make the change with your project on Jazz Hub.