Allow IPs with TCP Listener using RBAC (Envoy) - postgresql

I am trying to achieve the following with Envoy: allow TCP traffic to a Postgres service, with RBAC rules that permit only a few IPs.
This is my listener setup:
- name: listener_postgres
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 54322
  filter_chains:
    - filters:
        - name: envoy.filters.network.rbac
          config:
            stat_prefix: rbac_postgres
            rules:
              action: ALLOW
              policies:
                "allow":
                  permissions:
                    - any: true
                  principals:
                    - source_ip:
                        address_prefix: XX.XX.XX.XX
                        prefix_len: 32
                    - source_ip:
                        address_prefix: XX.XX.XX.XX
                        prefix_len: 32
                    - source_ip:
                        address_prefix: XX.XX.XX.XX
                        prefix_len: 32
        - name: envoy.tcp_proxy
          config:
            stat_prefix: tcp_postgres
            cluster: database_service
I can confirm that the service is set up correctly, because if I remove the RBAC rules I can connect successfully.
When the RBAC rules are added, I cannot connect to the Postgres database.
Nothing seems to work; I have also tried remote_ip and direct_remote_ip in place of source_ip.
Am I doing something wrong?
Thanks

Hey, I ran into the same issue, and this is the configuration that worked for me.
I used the remote_ip attribute.
Also, note the updated filter names:
- name: listener_postgres
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 54322
  filter_chains:
    - filters:
        - name: envoy_rbac
          config:
            stat_prefix: rbac_postgres
            rules:
              action: ALLOW
              policies:
                "allow":
                  permissions:
                    - any: true
                  principals:
                    - remote_ip:
                        address_prefix: XX.XX.XX.XX
                        prefix_len: 32
        - name: envoy_tcp_proxy
          config:
            stat_prefix: tcp_postgres
            cluster: database_service

It seems that setting the attribute to 'remote_ip', as suggested by Rahul Pratap, worked.
Here is a working example:
- name: listener_postgres
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 54322
  filter_chains:
    - filters:
        - name: envoy.filters.network.rbac
          config:
            stat_prefix: rbac_postgres
            rules:
              action: ALLOW
              policies:
                "allow":
                  permissions:
                    - any: true
                  principals:
                    - remote_ip:
                        address_prefix: XX.XX.XX.XX
                        prefix_len: 32
        - name: envoy.tcp_proxy
          config:
            stat_prefix: tcp_postgres
            cluster: database_service
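A note for anyone on a newer Envoy: the opaque config field used above has since been deprecated in favor of typed_config. A rough sketch of the same listener against the v3 API (untested; verify the type URLs against your Envoy version):

- name: listener_postgres
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 54322
  filter_chains:
    - filters:
        - name: envoy.filters.network.rbac
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.network.rbac.v3.RBAC
            stat_prefix: rbac_postgres
            rules:
              action: ALLOW
              policies:
                "allow":
                  permissions:
                    - any: true
                  principals:
                    # remote_ip matches the downstream remote address, which may be
                    # derived from e.g. proxy protocol; direct_remote_ip always uses
                    # the directly connected peer
                    - remote_ip:
                        address_prefix: XX.XX.XX.XX
                        prefix_len: 32
        - name: envoy.filters.network.tcp_proxy
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
            stat_prefix: tcp_postgres
            cluster: database_service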

Related

Flexget does not listen to my configuration

I have FlexGet in docker-compose and it doesn't work for me.
That is, I have this configuration that downloads files from an RSS feed, then renames and moves them. But FlexGet does none of that:
web_server:
  bind: 0.0.0.0
  port: 5050
  web_ui: yes
schedules:
  - tasks: '*'
    interval:
      minutes: 1
templates:
  transmissionrpc:
    transmission:
      host: localhost
      port: 9091
      username: admin
      password: "123456"
    clean_transmission:
      host: localhost
      port: 9091
      username: admin
      password: "123456"
      transmission_seed_limits: yes
      delete_files: no
      enabled: Yes
  tv:
    thetvdb_lookup: yes
    quality: 720p
    series:
      group:
        - Criminal Minds
        - Rick and Morty
tasks:
  Drarbg task:
    rss: http://showrss.info/user/229110.rss?magnets=true&namespaces=true&name=null&quality=null&re=null
    priority: 1
    all_series: yes
    template:
      - tv
      - transmissionrpc
    set:
      path: /downloads/complete
  sort-series:
    metainfo_series: yes
    accept_all: yes
    filesystem:
      path: /downloads/complete
      regexp: '.*\.(avi|mkv|mp4)$'
      recursive: yes
    template: tv
    series:
      settings:
        group:
          parse_only: yes
    require_field: series_name
    move:
      to: '/storage/Series/{{series_name}}'
      rename: '{{series_name}} - S{{series_season|pad(2)}}E{{series_episode|pad(2)}}'
flexget.log does not show any errors.
I've tried various configurations, but it still doesn't work. I don't know what to do anymore. I've opened a help ticket on their GitHub, but no one has helped me.
My docker-compose:
flexget:
  image: wiserain/flexget:3
  volumes:
    - ${CONFIG}:/config
    - ${STORAGE}/torrents:/downloads
    - ${MEDIA}:/storage
  ports:
    - 5050:5050
  environment:
    - TORRENT_PLUGIN=transmission
    - FG_WEBUI_PASSWD=123456
  restart: unless-stopped
  links:
    - transmission
The web_ui does not work either; that is, when I enter my machine's IP followed by :5050, the connection is refused.
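One thing worth checking in a setup like this, assuming Transmission runs in the separate transmission container that the links entry points at: inside the flexget container, localhost refers to the flexget container itself, so the transmission plugin above cannot reach Transmission at localhost:9091. A sketch of the template pointed at the compose service name instead:

templates:
  transmissionrpc:
    transmission:
      host: transmission  # the compose service name, not localhost
      port: 9091
      username: admin
      password: "123456"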

'No healthy upstream' error when Envoy proxy is set up manually

I have a very simple environment with a client, a server, and an Envoy proxy, each running in a separate Docker container, communicating over HTTP.
When I set it up using docker-compose, it works.
However, when I set up the containers and the network manually (with docker network create, setting the aliases, etc.), I get a "503 - no healthy upstream" message when the client tries to send requests to the server. curl to the network alias works from the envoy container. Any idea what the difference is between using docker-compose and setting up the network and containers manually?
envoy.yaml:
static_resources:
  listeners:
    - address:
        socket_address:
          address: 0.0.0.0
          port_value: 10000
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: backend
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: service }
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config: {}
  clusters:
    - name: service
      connect_timeout: 0.25s
      type: STRICT_DNS
      lb_policy: round_robin
      load_assignment:
        cluster_name: service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: server-stub
                      port_value: 5000
admin:
  access_log_path: "/tmp/envoy.log"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
The docker-compose file that worked (but I don't want to use docker-compose; I am using scripts that set up each container separately):
version: "3.8"
services:
  envoy:
    image: envoyproxy/envoy:v1.16-latest
    ports:
      - "10000:10000"
      - "9901:9901"
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml
  server-stub:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
I can't reproduce this. It works fine with your docker-compose file, and it works fine manually. Here are the manual steps I took:
$ docker network create test-net
$ docker container run --network test-net --name envoy -p 10000:10000 -p 9901:9901 --mount type=bind,src=/home/john/projects/tester/envoy.yaml,dst=/etc/envoy/envoy.yaml envoyproxy/envoy:v1.16-latest
$ docker run --network test-net --name server-stub johnharris85/simple-hostname-reporter:3
My sample app also listens on port 5000. I used your exact envoy config. Using Docker 20.10.8 if relevant.
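For anyone debugging a similar 503, the admin interface that the config above already exposes on port 9901 can show whether Envoy resolved the cluster and considers its endpoints healthy, e.g.:

$ curl http://localhost:9901/clusters
$ curl http://localhost:9901/config_dump

If STRICT_DNS resolution of server-stub fails, the service cluster will show no endpoints, which usually points at the container not being attached to the same network or missing its alias.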

How can I register an ECS Service as a Network Load Balancer target on a non-default port?

I'm trying to deploy a horizontally scaling application consisting of multiple containers with a single reverse proxy in front to AWS ECS backed by EC2. For certain reasons I cannot use an Application Load Balancer, but want to use a Network Load Balancer that forwards all traffic on ports 80 and 443 to the reverse proxy container. I use AWS CDK to define the setup.
I am running into issues when trying to route traffic on both ports to the proxy. No matter what I do, all targets in the created target group point to port 80 on the container. I.e. I get a mapping of 80->80, 443->80 when I would like 80->80, 443->443.
My CDK code looks like this:
const proxyService = new ecs.Ec2Service(this, 'ProxyService', {
  serviceName: 'proxy',
  cluster,
  taskDefinition: proxyTaskDefinition,
  minHealthyPercent: 0,
  desiredCount: 1,
  securityGroups: [securityGroup],
  cloudMapOptions: {
    name: 'proxy',
    cloudMapNamespace: cluster.defaultCloudMapNamespace
  }
})

const loadbalancer = new lb.NetworkLoadBalancer(this, 'NetworkLoadBalancer', {
  vpc,
  internetFacing: true
})

new cdk.CfnOutput(this, 'LoadBalancerDnsName', {
  value: loadbalancer.loadBalancerDnsName
})

loadbalancer.addListener('HTTPListener', {
  port: 80
})
.addTargets('HTTPTarget', {
  port: 80,
  targets: [proxyService]
})

loadbalancer.addListener('HTTPSListener', {
  port: 443,
})
.addTargets('HTTPSTarget', {
  port: 443,
  // the proxyService seems to always register itself at port 80
  // by calling its attachToNetworkTargetGroup method
  targets: [proxyService]
})
The CloudFormation generated for the listeners and target groups looks like this:
NetworkLoadBalancerHTTPListener792E96F1:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    DefaultActions:
      - TargetGroupArn:
          Ref: NetworkLoadBalancerHTTPListenerHTTPTargetGroupCEAF8C0F
        Type: forward
    LoadBalancerArn:
      Ref: NetworkLoadBalancer8E753273
    Port: 80
    Protocol: TCP
  Metadata:
    aws:cdk:path: SplitClusterStack/NetworkLoadBalancer/HTTPListener/Resource
NetworkLoadBalancerHTTPListenerHTTPTargetGroupCEAF8C0F:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Protocol: TCP
    TargetType: ip
    VpcId:
      Ref: VPCB9E5F0B4
  Metadata:
    aws:cdk:path: SplitClusterStack/NetworkLoadBalancer/HTTPListener/HTTPTargetGroup/Resource
NetworkLoadBalancerHTTPSListenerAF8F470A:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    DefaultActions:
      - TargetGroupArn:
          Ref: NetworkLoadBalancerHTTPSListenerHTTPSTargetGroup4BC6FF0B
        Type: forward
    LoadBalancerArn:
      Ref: NetworkLoadBalancer8E753273
    Port: 443
    Protocol: TCP
  Metadata:
    aws:cdk:path: SplitClusterStack/NetworkLoadBalancer/HTTPSListener/Resource
NetworkLoadBalancerHTTPSListenerHTTPSTargetGroup4BC6FF0B:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Protocol: TCP
    TargetType: ip
    VpcId:
      Ref: VPCB9E5F0B4
  Metadata:
    aws:cdk:path: SplitClusterStack/NetworkLoadBalancer/HTTPSListener/HTTPSTargetGroup/Resource
After deploying this, I can edit the created target groups in the web console, register a new target pointing to port 443 on the same IP, and deregister the port 80 target to get things working.
How can I create a load balancer target that:
points to the ECS service
uses port 443
I'm happy to construct this myself or even add overrides if it helps me get this solved.
The ECS service exposes a loadBalancerTarget method that can be used for this:
loadbalancer.addListener('HTTPSListener', {
  port: 443,
})
.addTargets('HTTPSTarget', {
  port: 443,
  targets: [proxyService.loadBalancerTarget({
    containerPort: 443,
    containerName: 'proxy'
  })]
})
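The same loadBalancerTarget call can be used on the HTTP listener as well, so both mappings are explicit (a sketch, assuming the container is named 'proxy' and listens on port 80 as in the question):

loadbalancer.addListener('HTTPListener', {
  port: 80
})
.addTargets('HTTPTarget', {
  port: 80,
  targets: [proxyService.loadBalancerTarget({
    containerPort: 80,
    containerName: 'proxy'
  })]
})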

Fargate failing on docker pull in private subnet

I am having trouble deploying a Fargate cluster: it is failing on the docker image pull with the error "CannotPullContainerError". I am creating the stack with CloudFormation, which is not optional, and it creates the full stack, but it fails when trying to start the task with the above error.
I have attached the CloudFormation stack file, which might highlight the problem, and I have double-checked that the subnet has a route to the NAT gateway (below). I also SSH'ed into an instance in the same subnet, which was able to route externally. I am wondering whether I have placed the pieces correctly, i.e. the service and load balancer are in the private subnet; or should I not be placing the internal LB in the same subnet?
This subnet is the one that currently has the placement, but all three subnets in the file have the same NAT settings.
subnet routable (subnet-34b92250)
* 0.0.0.0/0 -> nat-05a00385366da527a
Cheers in advance.
The YAML CloudFormation script:
AWSTemplateFormatVersion: 2010-09-09
Description: Cloudformation stack for the new GRPC endpoints within existing vpc/subnets and using fargate
Parameters:
  StackName:
    Type: String
    Default: cf-core-ci-grpc
    Description: The name of the parent Fargate networking stack that you created. Necessary
  vpcId:
    Type: String
    Default: vpc-0d499a68
    Description: The name of the parent Fargate networking stack that you created. Necessary
Resources:
  CoreGrcpInstanceSecurityGroupOpenWeb:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupName: sgg-core-ci-grpc-ingress
      GroupDescription: Allow http to client host
      VpcId: !Ref vpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
  LoadBalancer:
    Type: 'AWS::ElasticLoadBalancingV2::LoadBalancer'
    DependsOn:
      - CoreGrcpInstanceSecurityGroupOpenWeb
    Properties:
      Name: lb-core-ci-int-grpc
      Scheme: internal
      Subnets:
        # # pub
        # - subnet-f13995a8
        # - subnet-f13995a8
        # - subnet-f13995a8
        # pri
        - subnet-34b92250
        - subnet-82d85af4
        - subnet-ca379b93
      LoadBalancerAttributes:
        - Key: idle_timeout.timeout_seconds
          Value: '50'
      SecurityGroups:
        - !Ref CoreGrcpInstanceSecurityGroupOpenWeb
  TargetGroup:
    Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
    DependsOn:
      - LoadBalancer
    Properties:
      Name: tg-core-ci-grpc
      Port: 3000
      TargetType: ip
      Protocol: HTTP
      HealthCheckIntervalSeconds: 30
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: 10
      HealthyThresholdCount: 4
      Matcher:
        HttpCode: '200'
      TargetGroupAttributes:
        - Key: deregistration_delay.timeout_seconds
          Value: '20'
      UnhealthyThresholdCount: 3
      VpcId: !Ref vpcId
  LoadBalancerListener:
    Type: 'AWS::ElasticLoadBalancingV2::Listener'
    DependsOn:
      - TargetGroup
    Properties:
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
  EcsCluster:
    Type: 'AWS::ECS::Cluster'
    DependsOn:
      - LoadBalancerListener
    Properties:
      ClusterName: ecs-core-ci-grpc
  EcsTaskRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service:
                # - ecs.amazonaws.com
                - ecs-tasks.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      Policies:
        - PolicyName: iam-policy-ecs-task-core-ci-grpc
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - 'ecr:**'
                Resource: '*'
  CoreGrcpTaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    DependsOn:
      - EcsCluster
      - EcsTaskRole
    Properties:
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ExecutionRoleArn: !Ref EcsTaskRole
      Cpu: '1024'
      Memory: '2048'
      ContainerDefinitions:
        - Name: container-core-ci-grpc
          Image: 'nginx:latest'
          Cpu: '256'
          Memory: '1024'
          PortMappings:
            - ContainerPort: '80'
              HostPort: '80'
          Essential: 'true'
  EcsService:
    Type: 'AWS::ECS::Service'
    DependsOn:
      - CoreGrcpTaskDefinition
    Properties:
      Cluster: !Ref EcsCluster
      LaunchType: FARGATE
      DesiredCount: '1'
      DeploymentConfiguration:
        MaximumPercent: 150
        MinimumHealthyPercent: 0
      LoadBalancers:
        - ContainerName: container-core-ci-grpc
          ContainerPort: '80'
          TargetGroupArn: !Ref TargetGroup
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: DISABLED
          SecurityGroups:
            - !Ref CoreGrcpInstanceSecurityGroupOpenWeb
          Subnets:
            - subnet-34b92250
            - subnet-82d85af4
            - subnet-ca379b93
      TaskDefinition: !Ref CoreGrcpTaskDefinition
Unfortunately, AWS Fargate only supports images hosted in ECR or in public repositories on Docker Hub; it does not support private repositories hosted on Docker Hub. For more info see https://forums.aws.amazon.com/thread.jspa?threadID=268415
We faced the same problem with AWS Fargate a couple of months back. You have only two options right now:
Migrate your images to Amazon ECR.
Use AWS Batch with a custom AMI, where the custom AMI is built with Docker Hub credentials in the ECS config (which is what we are using right now).
Edit: As mentioned by Christopher Thomas in the comments, ECS Fargate now supports pulling images from Docker Hub private repositories. More info on how to set it up can be found here.
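For reference, the private Docker Hub support works by adding RepositoryCredentials to the container definition, pointing at a Secrets Manager secret that holds the Docker Hub username and password; the task execution role also needs permission to read that secret. A sketch in the CloudFormation style of the question above (the image name and secret ARN are placeholders):

ContainerDefinitions:
  - Name: container-core-ci-grpc
    Image: 'myorg/private-image:latest'  # placeholder private Docker Hub image
    RepositoryCredentials:
      CredentialsParameter: 'arn:aws:secretsmanager:us-east-1:111111111111:secret:dockerhub-creds'  # placeholder secret ARN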
Define this policy on your ECR repository and attach the IAM role to your task:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "new statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::99999999999:role/ecsEventsRole"
      },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ]
    }
  ]
}

Is there a platform/service agnostic definition for service handoff?

I'm curious whether there is a spec for a service handoff definition. For example, when a service is provisioned on a PaaS/IaaS, end users need a hash of details: what the service is, where the endpoint can be reached, which port(s) are published, and what authentication is used (think HATEOAS refs). I have a couple of mock-ups of what one could look like:
object storage example
name: myobjstor
family: s3
about: https://aws.amazon.com/documentation/s3
zone: public
protocol:
  spec: http
  host: s3.mysite.com
  port: 443
  tls: true
authentication:
  strategy: oauth2
  username: someuser
  password: somepassword
definition:
  type: swagger
  url: 'https://mysite/swagger.json'
openstack example
name: myostack
family: openstack-keystone_v2.1
about: http://developer.openstack.org/api-ref.html
zone: public
protocol:
  spec: http
  host: keystone.mysite.com
  port: 443
  tls: true
authentication:
  strategy: oauth2
  username: someuser
  password: somepassword
definition:
  type: swagger
  url: 'https://mysite/swagger.json'
redis example
name: myredis
family: redis
about: http://redis.io/documentation
zone: public
protocol:
  spec: redis
  host: redis.mysite.com
  port: 6379
options:
  db: 0
nfs example
name: mynfs
family: nfs
about: http://nfs.sourceforge.net
zone: public
protocol:
  spec: nfsv4
  host: nfs.mysite.com
  ports:
    - 111
    - 2049
Is there a standard like this that already exists?
I suggest looking into the "service discovery" pattern.
There are several tools out there that make it easy to implement, but most of them describe services using plain key/value pairs, e.g. see etcd. It looks like Consul adds a few fields that may be of use to you; see here.
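For example, the redis mock-up above could be flattened into etcd keys along these lines (a sketch; the key layout is arbitrary):

$ etcdctl put /services/myredis/family redis
$ etcdctl put /services/myredis/host redis.mysite.com
$ etcdctl put /services/myredis/port 6379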
Example Consul Definition
{
  "service": {
    "name": "redis",
    "tags": ["master"],
    "address": "127.0.0.1",
    "port": 8000,
    "enableTagOverride": false,
    "checks": [
      {
        "script": "/usr/local/bin/check_redis.py",
        "interval": "10s"
      }
    ]
  }
}
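Assuming the definition above is saved as redis.json, it can be registered against a running agent with:

$ consul services register redis.json

or dropped into the agent's -config-dir so it is loaded at startup.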