Locust: how to set source request IP

The virtual machine that runs Locust has multiple network cards.
Is there any way to set the source request IP for Locust?
https://docs.locust.io/en/stable/configuration.html#command-line-options

It should be possible by monkey patching socket.create_connection() using the solution described here, as Locust’s HttpUser uses requests.
Requests, bind to an ip
My original answer was deleted by a mod for some unknown reason. Let's hope it doesn't happen again...
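For illustration, here is a minimal sketch of one way to bind the source address without monkey patching: mounting requests_toolbelt's SourceAddressAdapter on Locust's HTTP session. The interface IP 10.0.0.5 and the host are placeholders, not values from the question:

from locust import HttpUser, task
from requests_toolbelt.adapters.source import SourceAddressAdapter

class BoundSourceUser(HttpUser):
    host = "http://example.com"  # placeholder target

    def on_start(self):
        # Bind this user's outgoing HTTP(S) traffic to one local interface.
        # 10.0.0.5 is a placeholder; use the address of the network card you want.
        adapter = SourceAddressAdapter("10.0.0.5")
        self.client.mount("http://", adapter)
        self.client.mount("https://", adapter)

    @task
    def index(self):
        self.client.get("/")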


Does Locust make use of IP Spoofing?

I'm running a Locust load test against my server, which is behind a Cloudflare limit of 250 requests per second per IP (user), and I'm hitting that limit with the following Locust config:
Users: 100
RPS is around 100
So the question here is:
Does Locust make use of IP spoofing to bypass the DNS limitations?
Does Locust make use of IP spoofing to bypass the DNS limitations?
No, it does not.
But if you actually meant to ask "Could Locust make use of IP spoofing?" the answer is yes, see https://github.com/locustio/locust/issues/376
Regarding your actual core problem (DNS limits), there should be better solutions though. Check out https://github.com/locustio/locust/issues/1614 (although the user who filed that didn't seem to get my proposed solution to work...)

Hosting a website using server software

How can I host a website from my own computer using server software?
I tried to host a website from my own computer using an Apache Tomcat server, but it didn't work (please briefly explain every point).
The main issue that you need to deal with is getting the clients to your computer.
Yes, it is possible and yes I have done it, albeit a while ago.
You need to see whether you can browse to your computer's website from another device on your network; this will confirm that Apache is working. Try another computer/laptop/tablet/whatever to see if the site is reachable by other machines using the IP address and, if necessary, the port number. If you cannot get to the site, there are settings in Apache to deny certain IPs; google the exact steps for your version. (A quick reachability check is sketched after these steps.) If it works, move on to step 2.
You will need a static IP address to ensure that all further steps keep working; google this if you are not sure how to do it.
You need the external IP address of your router (whatsmyip.org), or use Dynamic DNS to route traffic from a hostname to your IP; there are services that provide this, and I can recommend no-ip.com. This all assumes that you have access to the router.
You would be required to set up port forwarding on your router. This will direct the internet traffic to your computer. You will need to get the exact instructions for your specific model of router.
Please be aware that you need to have proper firewalls and systems in place to prevent attacks. I am sure that you are just testing at this point though...
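As a quick illustration of the reachability check from the first step (and of verifying the port forwarding later), here is a tiny Python snippet you could run from another machine; 192.168.1.50 and 8080 are placeholders for your server's IP address and web server port:

import socket

# Try to open a TCP connection to the web server; this raises an exception if it is unreachable.
# 192.168.1.50 and 8080 are placeholders for your server's IP address and port.
with socket.create_connection(("192.168.1.50", 8080), timeout=5):
    print("Port 8080 on 192.168.1.50 is reachable")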
All the best!

Is it possible to expose an Owin service?

We have created self-hosted services using OWIN. They are working fine inside the server, and we can request and retrieve information using http://localhost. We use a different port for each service, so we can get certain information from http://localhost:8001, other information from http://localhost:8015, and so on.
Now, we need to expose the results of one of those self-hosted services so it can be accessed over the internet. We'd like to provide a custom address such as http://ourpublicinfo.mydomain.com:8001, or use the server IP, such as http://209.111.145.73:8001.
Is that possible?
How can we implement it?
Our server OS is Windows Server 2012 R2
OWIN self-hosted apps can run in a Windows Service, as a console process and, if desired, as part of a more robust host like IIS.
Since you mention your app is running as a service, you're probably missing all the GUI goodies IIS provides. In reality, however, IIS works on top of http.sys, just as HttpListener does (which is probably what you're using to self-host your app). You just need to do some manual setup yourself:
First of all, you need to make a URL reservation in order to publish on a nonstandard port.
Why would you do that? Quite simply because you're not running under localhost alone anymore on your very own local machine, where you probably are an admin and/or have special privileges/powers.
Since this is a server, and the user running the service is most probably not an admin, you need to give that user permission to use the URL... and this is where URL reservations come into play.
You pretty much have two options:
open up the URL to be used by any user:
netsh http add urlacl url=http://209.111.145.73:8001/ user="everyone" listen=yes
or open up the URL to be used by the user(s) running the service, e.g.: NETWORK SERVICE:
netsh http add urlacl url=http://209.111.145.73:8001/ user="NETWORK SERVICE" listen=yes
There is a way to make the reservation for several users too, using SDDL, user groups, etc., but I won't get into that here (you can look it up).
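To double-check that the reservation was registered (using the example URL from above), you can list it back with:
netsh http show urlacl url=http://209.111.145.73:8001/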
Second of all, you need to open a hole through your firewall (if you don't have one in this day and age, I pity you!).
There are plenty of tutorials on this. You can use a GUI, netsh.exe and what not.
Pretty much all you need to do is make sure you allow incoming connections through that port and that should do the trick.
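For example, one possible rule using netsh's advfirewall context (the rule name is arbitrary and 8001 is just the port used in this question):
netsh advfirewall firewall add rule name="OWIN 8001" dir=in action=allow protocol=TCP localport=8001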
To make sure the hole is open end to end, you can use a tool like http://www.yougetsignal.com/tools/open-ports/ and enter 209.111.145.73 as the Remote Address and 8001 as the Port Number.
If for some reason it shows that the port is closed, even after creating an incoming rule in your firewall for it, then you probably have one or more firewalls in between your server and the outside world.
With those two elements in place you should be able to access your self-hosted service from the outside.
As for accessing your service through an address like http://ourpublicinfo.mydomain.com:8001, you'll need to create a DNS entry somewhere, most likely on your Domain Registrar for mydomain.com, where you could create an A Record for your ourpublicinfo subdomain pointing to 209.111.145.73.
From this point on, you should be able to access your service through the direct IP and port or through the aforementioned URL.
Best of luck!
Note:
If your service will be accessed from other domains, you might need to make sure you have CORS (Cross-Origin Resource Sharing) properly defined and working on your service too ;)

How to ensure that the service is getting called?

I have a batch job on one server which is in turn calling a service on another server.
The issue is (we suspect) that the batch job is unable to call the service, but we are not sure.
How can we verify, from a network point of view, whether the service is being called or not?
We checked telnet between the two servers, which is fine.
Is this the only way to troubleshoot?
Note: we cannot make code-level changes now.
You could make use of tools like Wireshark or tcpdump to capture the network traffic and see that your script is indeed sending outgoing requests.
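For instance, a capture on the calling server could look roughly like this (10.0.0.7 and 8080 are placeholders for the target service's host and port):
tcpdump -nn -i any host 10.0.0.7 and port 8080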

Get Azure public IP address from deployed app

I'm implementing PASV mode in an FTP server, and I send the client the IP address and port of the data endpoint. This is stupid because the IP is actually the one the client is already connecting to, so there are two options:
How could I get the public IP address from a given instance? Not the VIP, but the public one.
How could I get the original target IP address that the user used from a Socket object? Considering routers and load balancers in the middle :P
An answer to either of these questions would do, although there is another way that could work... could I get the public IP address by doing a DNS lookup of myapp.cloudapp.net?
A fourth option would be to use the Azure Management API library... but that's too much trouble :P
Cheers.
Not sure if you ever figured this out, but here's my take on it. The individual role instances are all behind the Windows Azure load balancer and have no idea what the original, outward-facing IP address is. Also, there's no Management API call that returns the IP address - Get Deployment returns the URL but not the IP address. I think the only option is going to be a DNS lookup.
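As a language-agnostic illustration of that DNS lookup (sketched here in Python; myapp.cloudapp.net is the placeholder hostname from the question):

import socket

# Resolve the cloud service's public hostname to its current public IP address.
print(socket.gethostbyname("myapp.cloudapp.net"))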
Having said that: I don't think you can host a passive ftp server in your role instance (at least not elegantly). You may open up to 25 input endpoints on your role (up from 5 - see my recent blog post about this update), but there's manual work involved in the configuration. I don't know if your ftp application lets you limit your port range to such a small number of ports. Also:
You'd have to define each port as its own input endpoint (this is the manual labor part I mentioned) - input endpoints don't allow a port range to be specified, unlike the internal endpoints.
You'd have to specify the port number that's used internally, and the port numbers would need to be sequential
One last thing on ftp: you should be able to host an sftp server with no trouble, since all traffic comes through one port.
The hack that I'm contemplating right now is to retrieve http://www.icanhazip.com/. It isn't elegant and is subject to the availability of that service, but it gets the job done. A better solution would be appreciated!