Can instances in the same security group communicate with each other? - amazon-vpc

Can instances in the same security group in Amazon VPC communicate with each other in any way?

Instances associated with the same security group can't talk to each other unless you add rules that allow it (the exception being the default security group, which already contains such a rule).

It depends on the rules. The fact that two or more instances are associated with the same security group says nothing by itself about the allowed traffic.
A security group is a set of allowed-traffic rules whose reference point is the instances themselves (meaning traffic incoming to the instance or outgoing from the instance).
Whether instances have access to each other depends on the security groups' rules and the network ACLs' rules.
The communication will not be blocked as long as there are rules that allow it.
The communication can be RDP, ICMP, HTTP/S and more, but it must be allowed by both the security groups and the NACLs.
A note to remember: by default, AWS blocks ICMP (ping); therefore, even though the security group may have an "All Traffic" allow rule, a ping request will fail if there is no specific rule that allows it.

Rules to connect to instances from an instance with the same security group
To allow instances that are associated with the same security group to communicate with each other, you must explicitly add rules for this.
The following table describes the inbound rule for a security group that enables associated instances to communicate with each other. The rule allows all types of traffic.
Protocol type    Protocol number    Ports       Source IP
-1 (All)         -1 (All)           -1 (All)    The ID of the security group
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html
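
For illustration, a minimal sketch in Python with boto3 of how such a rule could be added; the group ID sg-0123456789abcdef0 is a placeholder, and the second call adding an explicit ICMP rule reflects the ping note above:

import boto3

ec2 = boto3.client("ec2")
SG = "sg-0123456789abcdef0"  # placeholder security group ID

# The inbound rule from the table: all protocols and ports, with the
# security group's own ID as the source, so that associated instances
# can reach each other.
ec2.authorize_security_group_ingress(
    GroupId=SG,
    IpPermissions=[{
        "IpProtocol": "-1",                      # -1 (All)
        "UserIdGroupPairs": [{"GroupId": SG}],   # source = the SG itself
    }],
)

# A separate, explicit rule for ICMP echo (ping).
ec2.authorize_security_group_ingress(
    GroupId=SG,
    IpPermissions=[{
        "IpProtocol": "icmp",
        "FromPort": -1,                          # -1/-1 = all ICMP types/codes
        "ToPort": -1,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

Note that the first call already demonstrates the point of the next section: the source of a rule can be a security group rather than an IP range.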

Security Group name used in Source & Destination of FireWall rules
To create a security group, we need to specify firewall-type inbound and outbound rules.
Each rule consists of a protocol, a port number, and the Source (for inbound) or Destination (for outbound) IP addresses that are allowed; security group rules can only allow traffic, never deny it.
But why is it that we can indicate the Source or Destination as the name of a security group?
We do that because we want to refer to the IP addresses (for Source or Destination) of the appliances/instances that are associated with that specific security group. [Some people say those appliances/instances are in that specific security group.] But specifying a security group or IP address is only one part; for communication to be successful, the allowed protocol and port number must be explicitly stated in the rule as well.

Related

Apache NiFi - Listen UDP/TCP ranges

I'm trying to configure the ListenUDP or ListenTCP processors to get input from multiple, but very specific, IPs. I'm trying to find out whether IP ranges can be used instead of a single IP; this way all my Palo Altos would go to one processor, and so on.
ListenTCP & ListenUDP do not filter incoming traffic.
You can choose which network interface is used to listen for incoming traffic by setting Local Network Interface - so you can apply normal filtering techniques such as iptables or a firewall to filter what traffic is allowed to reach that network interface.
Received messages do write a tcp.sender or udp.sender attribute to each FlowFile. So you could technically filter after the ListenTCP by comparing the value of this attribute and dropping messages that are not valid (as sketched below)... but this is a lot less efficient than filtering network traffic outside of NiFi.
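If you do accept that cost, a RouteOnAttribute processor placed after ListenTCP could do the comparison. A minimal sketch, where the property name paloalto_traffic and the 10.1.2. address prefix are illustrative assumptions:

RouteOnAttribute  (connected after ListenTCP)
    Routing Strategy : Route to Property name
    paloalto_traffic : ${tcp.sender:startsWith('10.1.2.')}

FlowFiles whose tcp.sender matches the prefix are routed to the paloalto_traffic relationship; everything else goes to unmatched and can be dropped there.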

How to effectively establish a point-to-point channel using ZeroMQ?

I have trouble establishing an asynchronous point-to-point channel using ZeroMQ.
My approach to building a point-to-point channel was to create one ZMQ_PAIR socket per peer in the network. Because a ZMQ_PAIR socket ensures an exclusive connection between two peers, a peer needs as many sockets as it has peers. My first attempt is realized as the following diagram, which represents the pairing connections between two peers.
But the problem with the above approach is that each pairing socket needs a distinct bind address. For example, if four peers are in the network, then each peer needs at least three (TCP) addresses to bind for the rest of the peers, which is very unrealistic and inefficient.
(I assume that each peer has exactly one unique address among the others, e.g. tcp://*:5555.)
It seems that there is no way other than using different patterns that involve some set of message brokers, such as XREQ/XREP.
(I intentionally avoid the broker-based approach, because my application will heavily exchange messages between peers, which would often result in a performance bottleneck at the broker processes.)
But I wonder whether there is anybody who uses ZMQ_PAIR sockets to efficiently build point-to-point channels. Or is there a way to avoid needing distinct host IP addresses for multiple ZMQ_PAIR sockets to bind to?
Q: How to effectively establish ... well,
Given the above narrative, the story of "How to effectively ..." ( where a metric of what and how actually measures the desired effectivity may get some further clarification later ), turns into another question - "Can we re-factor the ZeroMQ Signalling / Messaging infrastructure, so as to work without using as many IP-addresses:port#-s as would the tcp://-transport-class based topology actually need?"
Given the explicitly expressed limit of not more than just one IP:PORT# per host/node (being thus the architecture's / design's single most expensive resource), one will have to overcome a lot of troubles on such a way forward.
It is fair to note that any such attempt will come at an extra cost to be paid. There will not be any magic wand to "bypass" such a principal limit expressed above. So get ready to indeed pay the costs.
It reminds me of one project in TELCO, where a distributed system was operated in a similar manner, with a similar original motivation. Each node had an ssh/sshd service set up, where local-port forwarding exposed just one publicly accessible IP:PORT# access-point, and all the rest was implemented "inside" a mesh of all the topological links going through ssh-tunnels. This was done not just for the encryption service, but rather for the comfort of being able to maintain all the local-port-forwardings towards specific remote-ports, as a means of setting up and operating such exclusive peer-to-peer links between all the service-nodes, yet having just a single public-access IP:PORT# per node.
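A minimal sketch in Python (pyzmq) of that idea; the port numbers 5601 and 5602 and the peer names are illustrative assumptions, not anything mandated by ZeroMQ:

import zmq

ctx = zmq.Context.instance()

# One exclusive PAIR socket per peer-to-peer link, bound on loopback only.
# The remote peer opens a tunnel to this node's single public sshd, e.g.
#     ssh -L 5601:localhost:5601 thisNode
# and then connects its own PAIR socket to tcp://127.0.0.1:5601 on its
# side, so every node keeps exposing just one public IP:PORT# (sshd's).
link_to_B = ctx.socket(zmq.PAIR)
link_to_B.bind("tcp://127.0.0.1:5601")   # assumed local port for the B-link

link_to_C = ctx.socket(zmq.PAIR)
link_to_C.bind("tcp://127.0.0.1:5602")   # assumed local port for the C-link

# Once the remote peer has connected through its end of the tunnel:
link_to_B.send_string("ping")
print(link_to_B.recv_string())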
If no other approach seems feasible, the above mentioned "stone-age" ssh/sshd-port-forwarding, with ZeroMQ running against such local-ports only, may save you. (PUB/SUB gets evicted either for the traffic actually flowing to each terminal node, in the case of older ZeroMQ/API versions, where topic-filtering gets processed only on the SUB-side, which both the security and the network departments will not like to support, or for the concentrated workloads and immense resource needs on the PUB-side, in the case of newer ZeroMQ/API versions, where the topic-filter is processed on the sender's side. Addressing, dynamic network peer (re-)discovery, maintenance, resources planning, fault resilience, ..., no, not any easy shortcut seems to be anywhere near, ready to just grab and (re-)use.)
Anyway - Good Luck on the hunt!

How to manage Squid based on per-user bandwidth

I want to manage bandwidth and traffic based on user activity on a Squid proxy server.
I did some research but couldn't find the solution I want.
For example, users who generate more than 256K of traffic should be restricted by the server.
Can you help me?
Thanks
I'm assuming Squid 3.x:
Delay pools provide a way to limit the bandwidth of certain requests based on any list of criteria.
class:
the class of a delay pool determines how the delay is applied, i.e., whether the different client IPs are treated separately or as a group (or both)
class 1:
a class 1 delay pool contains a single unified bucket which is used for all requests from hosts subject to the pool
class 2:
a class 2 delay pool contains one unified bucket and 255 buckets, one for each host on an 8-bit network (IPv4 class C)
class 3:
contains 255 buckets for the subnets in a 16-bit network, and individual buckets for every host on these networks (IPv4 class B)
class 4:
as class 3, but in addition has per-authenticated-user buckets, one per user
class 5:
custom class based on tag values returned by external_acl_type helpers in http_access; one bucket per tag value used
Delay pools allow you to limit traffic for clients or client groups, with various features:
You can specify peer hosts which aren't affected by delay pools, i.e., local peering or other 'free' traffic (with the no-delay peer option).
Delay behavior is selected by ACLs (low- and high-priority traffic, staff vs. students, students vs. authenticated students, and so on).
Each group of users has a number of buckets; a bucket has an amount coming into it each second and a maximum amount it can grow to. When it reaches zero, object reads are deferred until one of the object's clients has some traffic allowance.
Any number of pools can be configured with a given class, and any set of limits within the pools can be disabled; for example, you might only want to use the aggregate and per-host bucket groups of class 3, not the per-network one.
In your case you can use a class 4 delay pool. The generic form of the directives is:
delay_class pool 4
delay_parameters pool aggregate network individual user
This delay pool can then be configured in your Squid proxy server as follows; in this example each user will be limited to 128Kbit/s (16000 bytes/s) no matter how many workstations they are logged into:
delay_pools 1
delay_class 1 4
delay_access 1 allow all
# pool 1: aggregate, network, individual, user buckets (restore/max in bytes)
delay_parameters 1 32000/32000 8000/8000 600/64000 16000/16000
Please read more:
http://wiki.squid-cache.org/Features/DelayPools
http://www.squid-cache.org/Doc/config/delay_parameters/

Ganglia - security when polling metrics over TCP (xml format) from nodes

Context: I am a student and I am trying to prepare a proof of concept for quick network monitoring.
Our imaginary context is that we have multiple clusters which are on different subnets. I have read numerous pieces of documentation regarding Ganglia, and what I really want to find out is: during node polling, assuming that gmetad is on a different subnet from the node, is there any security measure utilised to protect the XML data sent over TCP?
It's not entirely clear whether you mean to ask about TCP or UDP transport here, but I assume TCP since that's how gmetad-gmetad and gmetad-gmond communication is done.
The only security measures are the trusted_hosts configuration attribute for gmetad and the access control lists that can be specified for gmond's tcp_accept_channel configuration.
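As a sketch of the latter, an access control list in gmond.conf might look like the following; the address 10.0.2.15 standing in for your gmetad host is an assumption, as is the default port 8649:

tcp_accept_channel {
  port = 8649
  acl {
    default = "deny"
    access {
      ip = 10.0.2.15     # assumed gmetad address
      mask = 32
      action = "allow"
    }
  }
}

This only restricts who may connect and pull the XML; it does nothing to encrypt the stream itself.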
You could perhaps consider a secure tunneled route between the hosts if you're looking to avoid eavesdropping?

Is it safe to use Socket.LocalEndPoint as a unique id?

When a server accepts a client over a tcp/ip connection, a new socket is created.
Is it safe to use the LocalEndPoint port (from the client's perspective) as an id?
Example (from the server perspective):
int clientId = ((IPEndPoint)client.RemoteEndPoint).Port;
On my local machine, the port seems to be unique, but with multiple clients on different machines, it may not always be the case.
My second question:
Let's say the port can't be used as a unique id: how can the server (and hence the protocol stack) differentiate between two client sockets (from the server's perspective)?
TY.
The uniqueness of a socket is identified by 4 values: (local IP, local port, remote IP, remote port), and that's how the protocol stacks identify a connection.
Given this, you can have several connections from the same port number to the same port number, but e.g. to a different remote address. Typically you have to specifically request permission to use the same local port for more than one outbound connection.
Your example int clientId = ((IPEndPoint)client.RemoteEndPoint).Port; doesn't use the local port, but the port on the remote end. This is certainly not unique, as different clients might happen to choose the same port. Your server port is probably fixed, and will always be the same for all connections. Thus if you want something unique on the server side, you have to use the 4 values mentioned above (see the sketch below).
However if you only need a unique identifier within your own client application among connections you've set up yourself, the local port will do.
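A small sketch of that 4-tuple idea in Python (the listening port 9000 is an arbitrary assumption; the question's own code is C#, so this is only to illustrate the keying):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 9000))   # assumed listening port
srv.listen()

connections = {}              # connection table keyed by the full 4-tuple

conn, _ = srv.accept()
# (local IP, local port, remote IP, remote port): unique per connection,
# even when two different clients happen to pick the same ephemeral port.
key = (*conn.getsockname(), *conn.getpeername())
connections[key] = conn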
Don't use the remote end point - create a GUID for each (accepted) connection.
Pass the GUID back to the client socket - get the client to save it (much better than an HTTP session) and add the GUID to any subsequent HTTP headers directed at you :)
Then!! The perfect need for a HashTable<> !!! Only a couple of situations I know of!
Why not just use "client" itself as the unique identifier? A unique identifier need not be of a value type.
The short answer to the first question is probably no. The client OS will usually pick a port from a range. Even if that range is 40-50 thousand ports large, if your server is busy enough, sooner or later you may see the same port coming in from different clients. If it isn't a busy server, you may get lucky.
Sockets are differentiated from each other based on pairs of address/port/protocol. The combined set of these values from the client and server will be unique.
Why can't you just use the client address and port as a temporary id?