If I want schema registry to be exposed publicly, how would I go about that? I assume I need to configure the listeners. How would I configure schema registry to allow both HTTP and HTTPS at the same time?
It's public by default, since it binds to all interfaces on port 8081.
Ideally you'd implement TLS termination in an L7 proxy, but you can also configure Schema Registry itself to listen on two ports, for example:
listeners=https://0.0.0.0:8081,http://0.0.0.0:8082
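For the HTTPS listener to actually serve TLS, Schema Registry also needs keystore settings. A minimal sketch (the paths and passwords are placeholders, and the property names assume a recent Confluent Schema Registry):

listeners=https://0.0.0.0:8081,http://0.0.0.0:8082
# keystore holding the server certificate for the HTTPS listener
ssl.keystore.location=/etc/schema-registry/keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit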
A newbie question on using subnet ACLs with IBM gen2 VPC.
I have an internet-facing application that accepts inbound requests and also makes outbound requests to peer hosts. To enable this, I practically have to open all (>1024) inbound and outbound ports on my subnets.
I'm using IBM's security groups to firewall my VMs, but I'm curious: why make the ACLs stateless and force the user to open all of these ports? I certainly see the use of subnet-to-subnet ACLs, but I'm asking about my particular use case.
Am I missing something here? Could you please recommend a best practice?
Stateless network ACLs are common among cloud providers offering VPCs. If the remote IP ranges are not fixed, it will not be possible to limit them further with ACLs. The same goes for ports: in your example, most port numbers have to be allowed, except the non-ephemeral ports you are not using (as you mention).
You can imagine more constrained use cases where ACLs would add another layer of security and associated reasoning about connectivity, say, if you had a Direct Link from on-premises to the cloud and the IP range could be constrained.
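To make the statelessness concrete, here is roughly what the rules for a single internet-facing HTTPS service end up looking like (illustrative pseudo-rules, not IBM CLI syntax):

# inbound: client requests to the app
allow inbound tcp from 0.0.0.0/0 to local port 443
# outbound: responses to those clients; no connection tracking, so this must be explicit
allow outbound tcp from local port 443 to 0.0.0.0/0
# outbound: requests your app makes to peer hosts
allow outbound tcp from local to 0.0.0.0/0 port 443
# inbound: the peers' responses, which arrive on an ephemeral port
allow inbound tcp from 0.0.0.0/0 to local ports 1024-65535

The last rule is the one that forces the wide-open ephemeral range; a stateful security group would infer it from the outbound request.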
I was wondering -
When setting RabbitMQ nodes to use a TLS connection (as seen here: https://github.com/artooro/rabbitmq-kubernetes-ha/blob/master/configmap.yaml), as I understand it I need to create a certificate that matches the hostname; a wildcard can be used - https://www.rabbitmq.com/clustering-ssl.html.
As the cluster DNS is internal, I guess I should create a certificate with a common name such as '*.rabbitmq.default.svc.cluster.local'.
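For context, I was planning to generate that wildcard certificate roughly like this (a self-signed sketch just for illustration; a real cluster would use a proper CA):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout tls.key -out tls.crt \
    -subj "/CN=*.rabbitmq.default.svc.cluster.local"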
When exposing the service, I'm supposed to create either a NodePort or a LoadBalancer service, with a totally different hostname (it routes internally).
My question is: how will the amqps connection work? Won't the cluster present me with one of the nodes' certificates, which will not match the load balancer's DNS name?
What's the correct way to expose the amqps protocol?
Thanks in advance
In case anyone else is looking at this: it doesn't matter, because this is not a "standard" HTTPS connection.
The client needs to specify the expected common name explicitly, and that's enough for the connection to work.
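For example, with Python's pika client it looks something like this (the load balancer hostname, node name, credentials, and CA file are assumptions):

import ssl
import pika

# Trust the CA that signed the wildcard certificate, not the system store
context = ssl.create_default_context(cafile="ca.pem")

# Connect to the external load balancer address, but tell TLS to expect
# the internal name that the wildcard certificate actually covers
ssl_options = pika.SSLOptions(
    context,
    server_hostname="node-0.rabbitmq.default.svc.cluster.local",
)

params = pika.ConnectionParameters(
    host="my-lb.example.com",  # external hostname of the LoadBalancer service
    port=5671,                 # amqps
    ssl_options=ssl_options,
    credentials=pika.PlainCredentials("guest", "guest"),
)
connection = pika.BlockingConnection(params)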
I am looking for a GCP networking best practice that allows auto-scaled instances to connect to a PostgreSQL server installed on a separate instance.
So far I have tried whitelisting the load balancer's IP in the firewall and in the PostgreSQL config file, but that failed.
Any help or pointer is highly appreciated.
The load balancer doesn't process information by itself; it just takes requests on its frontend address(es) and distributes them across an instance group.
That instance group handles the HTTP requests and connects to the database instance.
The load balancer's job is to dynamically distribute requests (and, with autoscaling, even create additional instances) behind the same frontend address.
--
So first you should make it work with a regular instance, configure it, and save it as an instance template. Then you can create an instance group that can be managed by a load balancer, roughly like this:
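(All names below are placeholders.)

# Turn the working instance's configuration into a reusable template
gcloud compute instance-templates create web-template \
    --machine-type=e2-medium \
    --image-family=debian-11 --image-project=debian-cloud

# Create a managed instance group from the template; a load balancer
# can then use this group as its backend
gcloud compute instance-groups managed create web-group \
    --template=web-template --size=2 --zone=us-central1-a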
EDIT - Extended the answer from my comment
"I don't think your problem is related to Google cloud platform now. If you have a known IP address for the PostgreSQL server (connect using an internal network IP address so it doesn't change), then make sure your auto-balanced instances are in the same internal network, use db's internal IP and connect to it."
I want to connect externally to my Redshift cluster (VPC, NOT classic) using Aginity Workbench, so I added my public IP address to the EC2 security group's inbound rules, but I get a connection timeout.
When I allow all traffic in the inbound rules (0.0.0.0/0), it is possible to connect. Of course, this is not a preferred solution, for security reasons.
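For reference, the inbound rule I added looks roughly like this (5439 is the default Redshift port; x.x.x.x stands for my public IP):

Type: Redshift    Protocol: TCP    Port range: 5439    Source: x.x.x.x/32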
Does anybody have an idea why/where it is failing with my public IP (grabbed from whatismyipaddress.com)?
I created a TCP/IP application and published it to Microsoft's cloud, but I don't know how to find the IP of my server.
In other words, how can I find the IP address at which the implemented role was deployed?
It depends on whether you are trying to get the public IP or the private IP of the server.
If you want to reach this server from outside of the Azure network, then you are looking for the public IP. In this case you must define an InputEndpoint for your role. You'll be required to specify an FQDN for your app, and you can find the IP address behind that FQDN using the usual methods like tracert, ping, etc.
If you want to reach this server from within the Azure network (typically you'd want some other role in your tenant to communicate with this server), then you need to define an InternalEndpoint for your server. You can then use the ServiceRuntime library to discover the private endpoint of your role instance.
Enabling Communication for Role Instances in Windows Azure is an excellent resource to get a better understanding of how this works.
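For reference, a sketch of how both endpoint types are declared in the role's ServiceDefinition.csdef (the role name and port are placeholders):

<WorkerRole name="TcpServerRole">
  <Endpoints>
    <!-- Public: reachable from the internet via the service's FQDN -->
    <InputEndpoint name="TcpIn" protocol="tcp" port="10000" />
    <!-- Private: discoverable by other roles through the ServiceRuntime API -->
    <InternalEndpoint name="TcpInternal" protocol="tcp" />
  </Endpoints>
</WorkerRole>

Other roles can then resolve the internal endpoint at runtime through RoleEnvironment.Roles and each instance's InstanceEndpoints collection.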