Unable to assign security group to Bare Metal server in IBM Bluemix - ibm-cloud

I have started a bare metal server in IBM Bluemix and the server is up and running. But I am unable to open ports; I want allow_all traffic to be enabled on the bare metal server. I can see there are pre-configured security groups, but I am not able to assign those groups to my server.
Any pointers in the right direction will be appreciated.

Security groups are based on the virtualization technology, so only virtual servers are eligible for them.
You basically cannot assign bare metal servers to security groups, so you have to provision another firewall product if you want to protect them.
https://console.bluemix.net/docs/infrastructure/security-groups/sg_overview.html#about-security-groups

Security groups are only available for Virtual Servers for now. We might look into offering security groups for Bare Metal Servers in the future. In the meantime you can use a shared hardware firewall to achieve your goal.
More information about the different firewall options:
https://www.ibm.com/blogs/bluemix/2017/09/next-generation-firewall-fortigate-security-appliance-10gbps/
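If you manage the classic infrastructure from the CLI, a rough sketch of how to check what you have today looks like the following. This assumes the ibmcloud sl (SoftLayer/classic infrastructure) plugin is installed; the exact subcommands and flags may differ between plugin versions, and the server ID is a placeholder.

    # security groups only attach to virtual server interfaces, so your bare metal server will not appear here
    ibmcloud sl securitygroup list

    # firewalls already provisioned on the account (hardware/shared firewalls are the option for bare metal)
    ibmcloud sl firewall list

    # inspect the bare metal server to find its VLANs before ordering a firewall for them
    ibmcloud sl hardware detail 1234567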

Related

Access restrictions when using Gcloud vpn with Kubernetes

This is my first question on Stack Overflow:
We are using Gcloud Kubernetes.
A customer specifically requested a VPN Tunnel to scrape a single service in our Cluster (I know ingress would be more suited for this).
Since the VPN is IP based and Kubernetes changes service IPs, I can only configure the VPN for the whole IP range of the services.
I'm worried that the customer will get full access to all services if I do so.
I have been searching for days on how to treat incoming VPN traffic, but haven't found anything.
How can I restrict the access? Or is it already restricted and I need NetworkPolicies to unrestrict it?
Incoming VPN traffic can either be terminated at the service itself, or at the ingress - as far as I see it. Termination at the ingress would probably be better though.
I hope this is not too confusing; thank you so much in advance.
An external Load Balancer would be ideal here, as you mentioned, but if you must use GCP Cloud VPN then you can restrict access into your GKE cluster (and your GCP VPC in general) by using GCP firewall rules along with GKE internal load balancers (HTTP or TCP).
As a general picture, something like this.
Second, we need to add two firewall rules to the dedicated networks (project-a-network and project-b-network) we created. Go to Networking -> Networks and click the project-[a|b]-network. Click “Add firewall rule”. The first rule we create allows SSH traffic from the public internet so that we can SSH into the instances we just created. The second rule allows ICMP traffic (ping uses the ICMP protocol) between the two networks.
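For the two rules described above, the gcloud equivalent would be roughly the following. The network names come from the quoted example; the source ranges are placeholders that you would tighten to your VPN peer's range.

    # allow SSH from anywhere into project-a-network (repeat for project-b-network)
    gcloud compute firewall-rules create allow-ssh \
        --network project-a-network --allow tcp:22 --source-ranges 0.0.0.0/0

    # allow ICMP (ping) from the other network's subnet range
    gcloud compute firewall-rules create allow-icmp-from-b \
        --network project-a-network --allow icmp --source-ranges 10.2.0.0/16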

VPN between two nodes of a cluster

I have three nodes: a master which is geographically located elsewhere, and two other nodes that are close to each other but not on the same network. I've created a cluster with those three, and now I want to set up a tunnel between the two (close) nodes to compare the benefit of communicating directly instead of going through the master and back.
I've searched a little and found this chart:
https://github.com/helm/charts/tree/master/stable/openvpn.
Can I use it to create the VPN between the 2 worker nodes?
Thanks for the help
It is not a good idea to use a Helm chart for a VPN if you are trying to use it for Kubernetes internal communication.
My advice is to configure the VPN on the nodes themselves, but that comes with more automation and availability problems.
What is the main idea behind that setup? Can you use some external VPN service instead of installing one inside the cluster? Have you tried peering instead of a VPN?
Some cloud providers offer easy turnkey clusters; have you tried one?
UPDATE
As per the comments, two more solutions may be good, either on their own or in combination:
Istio https://istio.io/
gRPC https://grpc.io/ in conjunction with mTLS
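For the Istio option, a minimal sketch of enforcing mutual TLS between the workloads in one namespace looks like this (the namespace name is a placeholder, and it assumes the Istio sidecars are injected into those pods); save it as a file and apply it with kubectl apply -f:

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: my-namespace   # placeholder namespace
    spec:
      mtls:
        mode: STRICT            # only mTLS traffic between sidecars is accepted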

Restrict access to my ServiceFabric cluster, only allowing one IP (API Management)

We're increasing the security of our recently developed software (running on Service Fabric) and want all traffic to go through API Management. In the load balancer of the SF cluster you can restrict access on a port level, but where do I restrict access to my cluster on an IP-address level? We want to only allow incoming traffic from the API Management instance and block everything else, i.e., blacklist all IP addresses except the API Management IP.
Thanks!
You can use a Network Security Group for this.
A network security group (NSG) contains a list of security rules that allow or deny network traffic to resources connected to Azure Virtual Networks (VNet). NSGs can be associated to subnets, individual VMs (classic), or individual network interfaces (NIC) attached to VMs (Resource Manager). When an NSG is associated to a subnet, the rules apply to all resources connected to the subnet. Traffic can further be restricted by also associating an NSG to a VM or NIC.
This quick start template describes how to deploy one.
More about networking here.
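A rough sketch with the Azure CLI, assuming a Resource Manager deployment; the resource names, the API Management IP (203.0.113.10) and the application ports are placeholders you would replace with your own:

    # create the NSG and allow only the API Management public IP into the application ports
    az network nsg create --resource-group my-rg --name sf-nsg
    az network nsg rule create --resource-group my-rg --nsg-name sf-nsg \
        --name allow-apim --priority 100 --direction Inbound --access Allow \
        --protocol Tcp --source-address-prefixes 203.0.113.10 \
        --destination-port-ranges 443 8080

    # other inbound internet traffic is dropped by the NSG's default DenyAllInBound rule;
    # associate the NSG with the subnet that holds the Service Fabric node type
    az network vnet subnet update --resource-group my-rg \
        --vnet-name sf-vnet --name sf-subnet --network-security-group sf-nsg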

VPN access for applications running inside a shared Kubernetes cluster

We are currently providing our software as a software-as-a-service on Amazon EC2 machines. Our software is a microservice-based application with around 20 different services.
For bigger customers we use dedicated installations on a dedicated set of VMs, the number of VMs (and number of instances of our microservices) depending on the customer's requirements. A common requirement of any larger customer is that our software needs access to the customer's datacenter (e.g., for LDAP access). So far, we solved this using Amazon's virtual private gateway feature.
Now we want to move our SaaS deployments to Kubernetes. Of course we could just create a Kubernetes cluster across an individual customer's VMs (e.g., using kops), but that would offer little benefit.
Instead, in the longer term, we would like to run a single large Kubernetes cluster on which we deploy the individual customer installations into dedicated namespaces, increasing resource utilization and lowering cost compared to the fixed allocation of machines to customers that we have today.
From the Kubernetes side of things, our software works fine already, we can deploy multiple installations to one cluster just fine. An open topic is however the VPN access. What we would need is a way to allow all pods in a customer's namespace access to the customer's VPN, but not to any other customers' VPNs.
When googling the topic, I found approaches that add a VPN client to the individual container (e.g., https://caveofcode.com/2017/06/how-to-setup-a-vpn-connection-from-inside-a-pod-in-kubernetes/), which is obviously not an option.
Other approaches seem to describe running a VPN server inside K8s (which is also not what we need).
Again others (like the "Strongswan IPSec VPN service", https://www.ibm.com/blogs/bluemix/2017/12/connecting-kubernetes-cluster-premises-resources/ ) use DaemonSets to "configure routing on each of the worker nodes". This also does not seem like a solution that is acceptable to us, since that would allow all pods (irrespective of the namespace they are in) on a worker node access to the respective VPN... and would also not work well if we have dozens of customer installations each requiring its own VPN setup on the cluster.
Is there any approach or solution that provides what we need, i.e., VPN access for the pods in a specific namespace only?
Or are there any other approaches that could still satisfy our requirement (lower cost due to Kubernetes worker nodes being shared between customers)?
For LDAP access, one option might be to setup a kind of LDAP proxy, so that only this proxy would need to have VPN access to the customer network (by running this proxy on a small dedicated VM for each customer, and then configuring the proxy as LDAP endpoint for the application). However, LDAP access is only one out of many aspects of connectivity that our application needs depending on the use case.
If your IPsec concentrator supports VTI, it's possible to route the traffic using firewall rules. For example, pfSense supports it: https://www.netgate.com/docs/pfsense/vpn/ipsec/ipsec-routed.html.
Using VTI, you can direct traffic with some kind of policy routing: https://www.netgate.com/docs/pfsense/routing/directing-traffic-with-policy-routing.html
However, I can see two big problems here:
You cannot have two IPsec tunnels to conflicting networks. For example, your kube network is 192.168.0.0/24 and you have two customers: A (172.12.0.0/24) and B (172.12.0.0/12). Unfortunately, this can happen (unless your customers are able to NAT those networks).
Finding the ideal criteria for the rule match (to allow the routing), since your source network is always the same. Marking packets (using iptables mangle, or even from the application) can be an option, as in the sketch below, but you will still get stuck on the first problem.
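As a very rough sketch of the packet-marking idea on a worker node, assuming you can dedicate a pod CIDR to a customer's namespace and that the IPsec tunnel is exposed as a VTI interface (the CIDR, mark value and interface name below are made up):

    # mark traffic coming from customer A's pod range
    iptables -t mangle -A PREROUTING -s 10.244.10.0/24 -j MARK --set-mark 0x10

    # route marked traffic through a dedicated table that points at customer A's VTI tunnel
    ip rule add fwmark 0x10 table 110
    ip route add 172.12.0.0/24 dev vti-customer-a table 110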
A similar scenario is found in the WSO2 (API gateway provider) architecture. They solved it with a reverse proxy in each network (sad but true): https://docs.wso2.com/display/APICloud/Expose+your+On-Premises+Backend+Services+to+the+API+Cloud#ExposeyourOn-PremisesBackendServicestotheAPICloud-ExposeyourservicesusingaVPN
Regards,
UPDATE:
I don't know if you use GKE. If yes, maybe Alias IPs can be an option: https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips. The pod IPs will be routable from the VPC, so you can apply some kind of routing policy based on their CIDR.
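If you go the Alias IP route, the cluster has to be created as VPC-native; a hedged example, where the cluster name and IP ranges are placeholders:

    # create a VPC-native GKE cluster so pod IPs come from secondary ranges in the VPC
    gcloud container clusters create saas-cluster \
        --enable-ip-alias \
        --cluster-ipv4-cidr 10.56.0.0/14 \
        --services-ipv4-cidr 10.60.0.0/20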

How to access services in a different Kubernetes cluster

For improved performance and availability we'd like to distribute certain services from our stack across different Kubernetes clusters in different parts of the world (GCP regions).
The majority of our stack will continue to run in one cluster / region but some user facing services will be deployed all over the world.
Some of these services need to access other services in our main cluster.
Q: How can we reliably access services in a different Kubernetes cluster?
Using internal load balancers seems to be out of the question as those are per region only.
We'd like to keep the communication between our services inside the private GCP network and avoid going over the public internet, so a public ingress also wouldn't work.
VPC networks are global resources, not restricted by regional boundaries, and so with the correct firewall rules set up, you should be able to access any internal resource from any other resource "right out of the box", assuming they are in the same VPC network and same project.
Take a look at VPC Network Peering: https://cloud.google.com/vpc/docs/vpc-peering
It allows you to connect two VPCs (in different regions) so that they can communicate privately.
You may have to recreate or reconfigure your Kubernetes clusters in order to support this VPC architecture.
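If the clusters live in the same VPC and project, firewall rules alone are enough; if they are in separate VPCs, a peering sketch looks roughly like this (network and project names are placeholders, and the peering has to be created from both sides):

    # peer main-vpc with edge-vpc (run the mirror command from the other project as well)
    gcloud compute networks peerings create main-to-edge \
        --network main-vpc --peer-project edge-project --peer-network edge-vpc

    # then allow the other cluster's pod/node ranges through the firewall
    gcloud compute firewall-rules create allow-edge-cluster \
        --network main-vpc --allow tcp,udp,icmp --source-ranges 10.8.0.0/14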