Control and monitor a private IPv4 proxy server

I have a private IPv4 proxy server with a single port. It can be used over HTTPS or SOCKS5. It will be used from multiple computers (configured at the browser level, not per PC). The DNS server is shared (Cloudflare).
Since I am the owner of this proxy, I want to control and monitor it, namely to see:
the number of devices using it at the moment (online)
the load on the proxy and its traffic
which sites each computer visits
Which of these is possible, and with what tools? Is there a single tool that covers all of these tasks?
Is a VPS/VDS required for this? If not, how could one still be useful as an addition?

From the present:
Tinyproxy can generate a static HTML page with the information you need.
Artica Proxy has a web front end with statistics: users, traffic, and sites.
From the past (not maintained):
squid: can generate a static HTML report with the information you need.
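These tools mostly automate per-client accounting over the proxy's access log. As a minimal sketch of the same idea, assuming Squid's native access.log format (field 3 = client address, field 5 = bytes, field 7 = URL) and the default log path, you could get rough numbers with standard shell tools:
# clients seen in the log (an approximation of "devices online" if you trim the log by time)
awk '{print $3}' /var/log/squid/access.log | sort -u | wc -l
# traffic (bytes) served per client
awk '{bytes[$3] += $5} END {for (ip in bytes) print ip, bytes[ip]}' /var/log/squid/access.log
# most requested URLs per client
awk '{print $3, $7}' /var/log/squid/access.log | sort | uniq -c | sort -rn | head
The field positions and log path are assumptions; check your proxy's log format before relying on them.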

Related

How do I host my script on a Google Cloud server?

So I have created something small, an image rehost, where I wish to use a Python script. I have a URL such as https://i.imgur.com/VBPNX9p.jpg, but with my rehost it would be
https://ip:port/abc123def456
so that whenever I access that page it gives me the URL I posted above.
However, the issue I am having is that I have no clue how to actually host the server that I made with Node.js. Right now I just use the external IP with port 5000. When I try to send the image from my home IP by using
https://external_ip:5000/abc123
the server doesn't recognize anything and nothing reaches it, so I think I have set up something wrong.
I am using a Google Cloud server and would like to know how I can host my own server on Google Cloud.
As you are having trouble adding a firewall rule, I'm going to suggest making sure port 5000 is open, not 8888.
To open the firewall rule for port 5000 in Google Cloud Platform follow these steps.
1) Navigate to VPC Network > Firewall rules > Create firewall rule.
2) In the 'Create a firewall rule' page, select these settings:
Name - choose a name for this firewall rule
Network - select the name of the network your instance belongs to, most probably 'default' unless you've configured a custom network.
Direction of traffic - 'Ingress'.
Action on match - 'Allow'.
Targets - 'All instances in the network'.
Source filter - 'IP ranges'.
Source IP ranges - '0.0.0.0/0'.
Second source filter - 'None'.
Specified protocols and ports - 'tcp:5000' or 'udp:5000' depending on whether the protocol you are using uses tcp or udp.
3) Hit 'Create'.
This will create a rule allowing traffic on port 5000 to all instances in your network from all IP address sources.
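If you prefer the command line, a roughly equivalent gcloud command would be the following (the rule name is just a placeholder; swap tcp:5000 for udp:5000 if needed):
gcloud compute firewall-rules create allow-port-5000 --network=default --direction=INGRESS --action=ALLOW --rules=tcp:5000 --source-ranges=0.0.0.0/0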
My advice would be to see if these settings work, and then, once you've confirmed this, lock down the settings by specifying a specific IP address or range of IP addresses in the 'Source IP ranges' text box, adding a target tag to your instance, and selecting it under 'Specified target tags' so the port is only open to that instance.
If this doesn't work, you may have a firewall rule turned on within the instance, which you would need to configure (or turn it off).
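As a quick sanity check inside the VM (assuming a Linux instance), you can confirm the app is actually listening on all interfaces and look at any local firewall rules:
# is anything listening on port 5000, and on which address (0.0.0.0 vs 127.0.0.1)?
sudo ss -tlnp | grep :5000
# any in-instance firewall rules that might drop the traffic?
sudo iptables -L INPUT -n
If the app is bound only to 127.0.0.1, it will never be reachable from outside no matter what the GCP firewall allows.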
For more detailed information about setting firewall rules please see here.
For running Node.js on a GCE VM, I suggest using the Bitnami Node.js package on the GCP Marketplace, which includes the latest version of Node.js, Apache, Python, and Redis. Using a pre-configured Node.js environment gets you up and running quickly because everything works out of the box. Manually configuring an environment can be a difficult and time-consuming hurdle to developing an application.
Also, if you wish to do URL redirection, you can use the URL map feature provided with the Google Cloud HTTP load balancer. This feature allows you to direct traffic to different instances based on the incoming URL. For example, you can send requests for http://www.example.com/audio to one backend service, which contains instances configured to deliver audio files, and requests for http://www.example.com/video to another backend service, which contains instances configured to deliver video files. You can find configuration steps and more information here.

How to make a locally running Yii2 CRUD application available online?

I have developed a Yii2 CRUD application using models, controllers, and views. It is currently used locally on a PC. I want users to be able to use it online.
For example, I have www.example.com and I want to make this Yii2 CRUD application available on that site. What are the steps?
Your question is not very specific, which is probably why you haven't received an answer. I'm assuming you are asking how to use your local PC as the server for an online site; otherwise, please clarify your question.
You will need to do the following steps:
Point the domain name to the public IP address behind which your local PC is sitting. You can find that by going here: http://whatismyipaddress.com/
In your router you need to set up port forwarding (NAT rules) for port 80 (or 443 if using https) to your computer's local IP address.
Depending on your Apache configuration (or whatever web server you are using), you need to ensure it serves the right website. This is too broad a subject for me to go into detail here.
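Once the DNS record and the port forwarding are in place, a quick way to verify them from outside your network (assuming the site is plain HTTP on www.example.com) is:
# does the name resolve to your public IP?
dig +short www.example.com
# does the web server answer from the Internet? (run this from a connection outside your LAN)
curl -I http://www.example.com/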
Note that you are now opening up your local computer to the Internet and hence you should be aware of the security implications it has.

Is it possible to expose an OWIN service?

We have created self-hosted services using OWIN. They are working fine inside the server, and we can request and retrieve information using http://localhost. We use a different port for each service so that we can get certain information from http://localhost:8001, other information from http://localhost:8015, and so on.
Now we need to expose the results of one of those self-hosted services so it can be accessed over the Internet. We'd like to provide a custom address such as http://ourpublicinfo.mydomain.com:8001, or use the server IP, such as http://209.111.145.73:8001.
Is that possible?
How can we implement it?
Our server OS is Windows Server 2012 R2
OWIN self-hosted apps can run as a Windows Service, as a console process and, if desired, as part of a more robust host like IIS.
Since you mention your app is running as a service, you're probably missing all the GUI goodies IIS provides. In reality, however, IIS works on top of http.sys, just as HttpListener does (which is probably what you're using to self-host your app). You just need to do some manual setup yourself:
First of all, you need to make a URL reservation in order to publish on a nonstandard port.
Why would you do that? Quite simply because you're no longer running under localhost alone on your own local machine, where you probably are an admin and/or have special privileges.
Since this is a server, and the user running the service is most probably not an admin, you need to give that user permission to use that URL... and this is where URL reservations come into play.
You pretty much have two options:
open up the URL to be used by any user:
netsh http add urlacl url=http://209.111.145.73:8001/ user="everyone" listen=yes
or open up the URL to be used by the user(s) running the service, e.g.: NETWORK SERVICE:
netsh http add urlacl url=http://209.111.145.73:8001/ user="NETWORK SERVICE" listen=yes
There is a way to make the reservation for several users too, using SDDL, user groups, etc., but I won't get into that here (you can look it up).
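To confirm the reservation took effect, you can list it back (the URL here matches the examples above):
netsh http show urlacl url=http://209.111.145.73:8001/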
Second of all, you need to open up a hole through your firewall (if you don't have one in this day and age, I pity you!).
There are plenty of tutorials on this. You can use a GUI, netsh.exe and what not.
Pretty much all you need to do is make sure you allow incoming connections through that port and that should do the trick.
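For example, one way to add such an inbound rule from the command line (the rule name is just a placeholder) is:
netsh advfirewall firewall add rule name="OWIN service 8001" dir=in action=allow protocol=TCP localport=8001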
To make sure the hole is open through and through, you can use a tool like http://www.yougetsignal.com/tools/open-ports/ and insert 209.111.145.73 in the Remote Address and 8001 in the Port Number.
If for some reason it shows that the port is closed, even after creating an incoming rule in your firewall for it, then you probably have one or more firewalls in between your server and the outside world.
With those two elements in place, you should be able to access your self-hosted service from the outside.
As for accessing your service through an address like http://ourpublicinfo.mydomain.com:8001, you'll need to create a DNS entry somewhere, most likely on your Domain Registrar for mydomain.com, where you could create an A Record for your ourpublicinfo subdomain pointing to 209.111.145.73.
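Once the A record has propagated, you can check it from the server itself (Windows Server 2012 R2 ships with nslookup); it should return 209.111.145.73, and until it does you can stick to the direct IP and port:
nslookup ourpublicinfo.mydomain.com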
From this point on, you should be able to access your service through the direct IP and port or through the aforementioned URL.
Best of luck!
Note:
If your service will be accessed from other domains, you might need to make sure you have CORS (Cross-Origin Resource Sharing) properly set up and working on your service too ;)

Can I run/access a localhost server through an IP and subnet?

Is it possible for me to run a web server on my computer (shared IP) and access it remotely using my IP + subnet, or at least in some way that doesn't involve having the IT guys make changes to the machine(s) currently running our virtual servers and/or to the routing of our subnet?
Rationale:
I'm on a computer at work, and I'm making changes to a plugin for Google Website Optimizer. I want GWO to be able to access localhost (i.e. my development environment) so that I don't have to deploy every change to the production server while I'm feeling out the system. (lots of changes; tedious deployment takes up most of the time)
I can't just supply my IP to GWO because that points to our production server (all of our computers at work are on the same IP). If I could construct a URI that points just to my computer, then I suppose I could let GWO view a page on my development environment and interact therewith.
Not only would achieving this purpose be helpful in present circumstances, but it would aid me immensely in that I could let my boss look at what I've got in dev, from his own machine, at his leisure, without deploying changes to production.
I'm not familiar with the Google Website Optimizer, or how/where a plugin for it that you might write would be executed. So I'm going to summarize what I understand about your problem (including some guesses) and go from there, please correct me if I'm wrong.
Your company has one public IP address.
Your workstation and all the hosts on your network are source NAT'ed to the internet.
Port 80 (http) on your public IP address is destination NAT'ed to your production webserver which is hosted as a virtual machine.
You have a development webserver that is hosted on your workstation.
You have reservations about involving your "IT guys" in making routing or system admin changes.
You want your development environment to be accessible from the internet.
First up (assuming everything above is correct):
access it remotely using my ip + subnet - No. Not possible.
Second up:
I could let my boss look at what I've got in dev - Easy, get him to point his browser at your workstation's IP address on your internal network.
Possible solutions for remotely accessible:
Talk to your "IT guys" about getting your dev environment made externally accessible.
Use name-based virtual hosts on your production webserver. This requires setting up a DNS record for the dev site (e.g. dev.your-company) and pointing it to your company's IP address (a quick curl test for this option is sketched after this list). If SSL is in use this is harder to achieve. You could then:
Proxy requests for a different site name to your workstation (readily achievable with Apache), or
Host your development environment on your production server
Proxy a particular URL path to your workstation. (e.g. /dev/)
Get an unused port (e.g. 8080) on your public IP destination NAT'ed to port 80 on your workstation. Your dev environment URL might then be http://www.your-company:8080/
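For the name-based virtual host option, you can test the proxying before (or without) creating the DNS record by forcing the Host header against your public IP, and for the extra-port option you can simply hit that port. Replace YOUR_PUBLIC_IP with your company's public address; dev.your-company and www.your-company are the placeholder names used above:
# pretend dev.your-company already resolves to the company's public IP
curl -H "Host: dev.your-company" http://YOUR_PUBLIC_IP/
# or, once the port 8080 NAT rule is in place:
curl -I http://www.your-company:8080/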

Is a server farm abstracted on both sides?

I am trying to understand how a solution will behave if deployed in a server farm. We have a Java web application which will talk to an FTP server for file uploads and downloads.
It is also desirable to protect the FTP server with a firewall, such that it will allow incoming traffic only from the web server.
At the moment, since we do not have a server farm, all requests to the FTP server come from the same IP (the web server's IP), making it possible to add a simple rule to the firewall. However, if the application is moved to a server farm, then I do not know which machine in the farm will make a request to the FTP server.
Just like the farm is hidden behind a facade for its clients, is it hidden behind a facade for the services it might invoke, so that regardless of which machine from the farm makes the request to the FTP server, it always sees the same IP?
Are all server farms implemented the same way, or would this behavior depend on the type of server farm? I am thinking of using Amazon Elastic Cloud.
It depends very much on how your web cluster is configured. If your cluster is behind a NAT firewall, then yes, all outgoing connections will appear to come from the same address. Otherwise, the IP addresses will be different, but they'll almost certainly all be in a fairly small range of addresses, and you should be able to add that range to the firewall's exclude list, or even just list the IP address of each machine individually.
Usually you can enter CNAMEs or subnets when setting up firewall rules, which simplifies their maintenance. You can also send all traffic through a load balancer or proxy. That's essentially how any cloud/cluster/farm service works.
many client ips <-> load balancer <-> many servers
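A quick way to see what source address the FTP server's firewall will actually observe is to check the apparent public IP from each web node; behind a NAT gateway or outbound proxy they will all report the same address (the endpoint below is just one public "what is my IP" service):
curl https://checkip.amazonaws.com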