I am collecting NetFlow data using nfcapd. We are also monitoring all the devices for In traffic and Out traffic.
I am confused about exactly which traffic NetFlow reports to me.
For example,
In a 5-minute span, I receive NetFlow data which gives sum(no_of_bytes) on a particular link (srcip, dstip, srcifindex, dstifindex) = 10K bytes,
while "In traffic" gives 20K bytes and "Out traffic" gives 10K bytes (approx).
What does this mean?
My question is:
Which traffic counter, on either port of the link, should the sum given by the NetFlow data match?
After Googling some Cisco links, I found that NetFlow, by default, accounts only for ingress traffic.
So the sum should be equal to the Out traffic.
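As a sanity check, here is a minimal sketch of the comparison, using the illustrative numbers from the 5-minute example above (the variable names and the idea of comparing against both counters are my own, not part of any NetFlow tooling):

```python
# With ingress-only NetFlow accounting, the bytes NetFlow reports for a
# link should track one direction of traffic, not the sum of both.
# All numbers are illustrative, taken from the 5-minute example above.

netflow_bytes = 10_000   # sum(no_of_bytes) exported by NetFlow
in_counter = 20_000      # "In traffic" counter for the same interval
out_counter = 10_000     # "Out traffic" counter for the same interval

def closest_counter(flow_bytes, counters):
    """Return the name of the counter whose value is closest to flow_bytes."""
    return min(counters, key=lambda name: abs(counters[name] - flow_bytes))

match = closest_counter(netflow_bytes, {"in": in_counter, "out": out_counter})
print(match)  # -> out
```

In this example the NetFlow sum lines up with the Out counter, which is consistent with the ingress-only accounting described above.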
Consider a brokerage platform where customers can see all the securities they hold. There is an API which supports the brokerage frontend by providing all the securities data corresponding to a customer ID.
This API is certified for a throughput of 6000 TPS (transactions per second).
The SLA for this API is 500 ms, and the 99th-percentile response time is 200 ms.
The API is hosted on AWS, where the servers are autoscaled.
I have a few questions:
At peak traffic, the number of users on the frontend is in the hundreds of thousands, if not millions. How does TPS relate to the number of users?
My understanding is that TPS is the number of transactions the API can process in a second. So is the figure of 6000 for one server, or for one thread in a server? And if we add 5 servers, does the cumulative TPS become 30,000?
How do we decide the configuration (cores, RAM) of each server if we know the TPS?
And finally, is the number of servers basically [total number of users / TPS]?
I tried reading about this in a couple of books and a few blogs, but I wasn't able to clear up my confusion entirely.
Kindly correct me if I am asking the wrong questions or if they don't make sense.
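One way to connect users to TPS is through how often each user actually issues a request, rather than dividing users by TPS directly. Here is a rough capacity sketch; every number except the certified 6000 TPS and the 200 ms percentile is an assumption for illustration:

```python
# Rough capacity sketch. TPS relates to users through the per-user
# request rate, not one-to-one: a million browsing users do not issue
# a million requests per second.
# Assumed values are marked; only 6000 TPS and 200 ms come from the question.

certified_tps = 6000       # certified throughput of the deployment
p99_response_s = 0.2       # 99th-percentile response time

peak_users = 1_000_000           # assumed peak concurrent users
requests_per_user_per_s = 1 / 30 # assumption: one API call per user every 30 s

required_tps = peak_users * requests_per_user_per_s   # ~33,333 TPS

# If one server sustains, say, 1200 TPS (assumed), servers needed is a
# ceiling division of required throughput by per-server throughput:
per_server_tps = 1200
servers_needed = -(-required_tps // per_server_tps)

# Little's Law: concurrent in-flight requests = throughput * response time.
in_flight = certified_tps * p99_response_s   # 1200 requests in flight

print(required_tps, servers_needed, in_flight)
```

Note that capacity rarely scales perfectly linearly with server count (shared databases, load balancers, and caches become bottlenecks), so "5 servers = 30,000 TPS" is an upper bound that needs load testing to confirm.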
I am working on a project that involves a peer-to-peer network. Someone raised concerns that we may be expecting a larger bandwidth than is reasonable.
Suppose we had a large number of registered nodes (in the thousands, possibly 10,000), and these nodes are constantly receiving data that they wish to propagate around the network. The data doesn't have to reach every node, but we would like it to reach most of them.
In general, how much data creation and transmission could reasonably be handled as the number of nodes increases? I am hoping that, in my case, the answer is more than 50 MB/minute (as this is the maximum amount of data my system is expected to create), but I don't have a basis for comparison.
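A back-of-envelope estimate may help frame the question. Assuming gossip-style propagation (the fanout value below is hypothetical, not from the question), the per-node bandwidth depends on the data rate and the duplication factor, not on the node count:

```python
# Back-of-envelope for gossip-style propagation.
# If the whole network creates D bytes/min and each node relays each
# message to `fanout` peers, each node both receives and sends roughly
# D * fanout bytes/min in steady state (duplicates included), regardless
# of whether there are 1,000 or 10,000 nodes.

data_rate_mb_per_min = 50   # stated maximum data creation rate
fanout = 4                  # hypothetical gossip fanout (assumption)

per_node_mbps = data_rate_mb_per_min * fanout * 8 / 60  # megabits/second
print(round(per_node_mbps, 1))  # -> 26.7
```

Under these assumptions each node needs roughly 27 Mbit/s of sustained bandwidth in each direction, which is the kind of figure to compare against the connectivity your nodes actually have.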
Two different applications need to transfer data to one another for certain activities. The options are either to prepare a data file and push it over SFTP at a certain point in time, or to push/pull the changes through a REST API in real time.
Which approach will be more secure if the data in one system is completely encrypted and in the other it is raw?
When choosing an integration pattern, as always, the answer is: "It depends".
How much data, and how frequently?
What is the security classification for the data (and the potential impact of a data breach)?
How mature are the IT Engineering/Operations/DevOps teams that will be involved in implementing the integration points, on both ends?
What facilities are available for ensuring that data is encrypted at-rest, as well as in-transit?
What is the business requirement regarding data latency?
What is the physical distance between the two systems?
What is the sustainable network connection speed between the two systems?
What are the hours during which the integration needs to be active/scheduled?
Is the data of a bulk/reference type, or is it event/transaction in nature?
What is the size per message/transaction?
What is the total size of the data (MB? GB? PB? ...?) that needs to be transferred during a given processing cycle (per hour? per day?)?
Additional considerations that should be examined:
Network timeout/retry scenarios
Batch reruns
CPU, Memory, Network bandwidth utilization
Peak hour processing vs. off peak hour processing
Infrastructure/environmental limits/constraints - and cost factors (e.g. Cloud hosting limits on transactions, data, file size, Cloud-hosted API Gateway pricing strategies, ...)
Network Latency introduced by number of messages that must be sent to complete transfer of all data via an API vs. SFTP
Data Latency introduced by SFTP batch scheduling
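The latency trade-offs in the last two points can be made concrete with a rough sketch; every number here is an assumption chosen purely for illustration:

```python
# Rough latency comparison: per-record API calls vs. one SFTP batch.
# All parameters below are assumptions for illustration only.

records = 100_000
rtt_ms = 50                           # assumed round trip per API call
api_time_s = records * rtt_ms / 1000  # fully serialized calls: 5000 s (~83 min)

file_mb = 200                         # same data packaged as one batch file
link_mbps = 100                       # assumed sustainable link speed
sftp_transfer_s = file_mb * 8 / link_mbps  # 16 s on the wire
sftp_schedule_delay_s = 3600          # worst case: data waits for the next hourly run

print(api_time_s, sftp_transfer_s + sftp_schedule_delay_s)
```

In practice API calls can be parallelized or batched, which changes the picture considerably; the point is simply that message count drives the API cost, while scheduling drives the SFTP cost.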
It is hard to compare SSH (SFTP) with SSL/TLS (a RESTful API using HTTPS), as the two serve different functions.
SFTP has a broader attack surface because it uses SSH for tunneling, which means there are multiple areas where a compromise could occur. That does not mean it is less secure.
I need to query or calculate port utilization for various devices registered with Cisco CUCM, for example, H323 Gateway Port Utilization, FXS Port Utilization, BRI Channel Utilization etc.
Are these metrics available from CUCM? If yes, is it possible to query them using AXL? SNMP?
If the port utilization metrics are not available, how can I query the total number of ports configured for each device registered with CUCM using AXL? I think I can obtain the number of currently active ports using the AXL PerfmonPort service. If I find a way to query the total number of ports, I can calculate the port utilization as follows:
FXO port utilization = 100% × (number of active FXO ports) / (total number of registered FXO ports)
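For what it's worth, the calculation itself is trivial once both counts are available; the sketch below leaves open where the counts come from (Perfmon, AXL), and the values are purely illustrative:

```python
# Sketch of the utilization formula above. Obtaining active/total port
# counts from CUCM is the hard part and is not shown here.

def port_utilization(active_ports, total_ports):
    """Percentage of registered ports currently in use."""
    if total_ports == 0:
        return 0.0  # avoid division by zero for devices with no ports
    return 100.0 * active_ports / total_ports

print(port_utilization(3, 8))  # -> 37.5
```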
Thank you!
There are some paid products like Solarwinds that will do this for you. I personally prefer Cacti though - it will use SNMP to poll routers, switches, and CUCM itself for data. I'm able to use SNMP to show CUBE concurrent calls, PRI concurrent calls, CUBE transcoding and even CUCM itself. Generally, if it's a router component, you can monitor it with SNMP.
Here is an intro to monitoring CUCM with SNMP:
https://www.ucguru.com/monitoring-callmanager/
I will say that it takes some time to get up and running correctly. You may need different MIBs for each router model.
I am using httperf to benchmark web servers. My configuration: an i5 processor and 4 GB of RAM. How do I stress this configuration to get accurate results? I mean, I have to put 100% load on this server (12.04 LTS server).
You can use httperf like this (substitute your server's host name and port):
$ httperf --server <host> --port <port> --wsesslog=200,0,urls.log --rate 10
Here urls.log contains the different URIs/paths to be requested. Check the documentation for details.
Now try changing the rate or session values, and see how many replies per second you can achieve and what the reply time is. In the meantime, monitor CPU and memory utilization using mpstat or top to see whether either is reaching 100%.
What's tricky about httperf is that it often saturates the client first, because of 1) the per-process open-files limit, 2) the TCP port number limit (excluding the reserved ports 0-1023, there are only 64512 ports available for TCP connections, meaning at most about 1075 new connections per second can be sustained if each port is tied up for 1 minute), and 3) the socket buffer size. You probably need to tune these limits to avoid saturating the client.
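The "~1075 sustained connections" figure comes from a simple rate calculation: a closed connection typically keeps its local port unavailable (in TIME_WAIT) for about a minute, so the client can open new connections only as fast as ports are recycled:

```python
# Where the ~1075 figure comes from: ephemeral ports divided by how
# long each port stays unavailable after a connection closes.

ephemeral_ports = 65536 - 1024  # 64512 ports above the reserved range
time_wait_s = 60                # typical TIME_WAIT duration (assumption)

max_new_conns_per_s = ephemeral_ports // time_wait_s
print(max_new_conns_per_s)  # -> 1075
```

This is why adding more client machines (each with its own port space) raises the achievable load even when a single client looks far from CPU-bound.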
To saturate a server with 4 GB of memory, you will probably need multiple physical client machines. I tried 6 clients, each issuing 300 req/s against a 4 GB VM, and together they saturated it.
However, there are still other factors that impact the result, e.g., the pages deployed on your Apache server and the workload access patterns. But the general suggestions are:
1. Test with the request workload that is closest to your target scenarios.
2. Add more physical clients and see whether the response rate, response time, or error count changes, in order to make sure you are not saturating the clients.