How would you identify whether this is a web server / code / VM shortage? [closed] - perl

Let me give the whole picture first.
In an Oracle VM box, I've installed Windows XP Pro (x32) and a web server. The web server's web root, CGI scripts, and interpreter are mounted from shared folders on my host computer (my real C: drive), and those folders are read-only.
My problem is that when I create any (CGI) web page with frames (or iframes), it randomly throws an Error 500 in one of the frames (even when I run the page from localhost), but if I reload that frame, or reload the whole page, it goes back to normal (a frame that was fine can also turn into an error after reloading the whole page). And I've checked very carefully: there's no problem with my script. Btw, I use Perl for my CGI scripts.
So I suspect there might be some problem along the "traffic" path, even though it's all on the same machine, but I don't know whether this can happen when I call the same module from those different frames. Has anyone experienced a similar situation, or have any related information? Or is there a test plan you would suggest? I am currently using Abyss X1 as my web server, but I tried Apache as well, and the same thing happens.
Thanks in advance

Windows XP does not allow more than 10 incoming connections and is therefore not a good operating system on which to install a web server.
Note For Windows XP Professional, the maximum number of other computers that are permitted to simultaneously connect over the network is ten. This limit includes all transports and resource sharing protocols combined. For Windows XP Home Edition, the maximum number of other computers that are permitted to simultaneously connect over the network is five. This limit is the number of simultaneous sessions from other computers the system is permitted to host. This limit does not apply to the use of administrative tools that attach from a remote computer.

Thanks Amon and Sinan, that gave me the clues. These two are the reasons this happens (I'm just not sure whether they are the only reasons): the interpreter and the underlying modules were also being loaded from the host machine, which is quite expensive. After I installed Perl (and the modules) inside my VM, this problem didn't happen again!


How does a host machine actually execute instructions of a virtual machine? [closed]

How does a host machine really "host" a virtual machine? How is the guest given a kernel of its own? Are the instructions and syscalls of a virtual machine translated into machine language and passed to the host? Are they passed down as a byte stream? Is there an interpreter which converts syscalls from the guest operating system to the host operating system?
The more I think about Virtual Machines the more confusing it gets.
Answers to any of these would be great!
Part of the confusion is that the term 'virtual machine' has been co-opted to describe different things, so each requires a different answer. For example, the 'Java Virtual Machine (JVM)' is really just a program built to interpret a bytecode instruction set custom-built to support Java (though there is more to it than that, of course), so any attempt to answer your question in that context would be to explain how an interpreter works. What I am going to do is go back to the original meaning of 'virtual machine' and explain that. (Note: I have no idea how much of what I am about to describe applies to modern VMs.)
The term virtual machine originally described a multi-programming operating-system technique used to provide each of a large number of users their own complete computing environment. By 'complete' I mean this: Normally each user is given a 'space' for running programs, but each program can reach outside of its space only via a fixed, common operating system; in this technique, each user would be given a 'space' which appears to be an entire bare machine, so in particular each user could run their own multi-tasking operating system if they so chose.
The way this was achieved depended on two features of the hardware: (1) programs could be run in one of two modes - user mode or system mode; (2) some instructions are privileged (reserved for the OS) and may only be used in system mode - otherwise the machine 'traps' and tries to execute an illegal-instruction handler routine.

This was exploited by having the base OS implement each user's space as a simulation of the same hardware, with simulated user-mode and simulated system-mode, etc. All of the user's code was always run in actual user-mode, regardless of the simulated mode. That means each instruction's execution was performed by the actual hardware itself, with no interpretive overhead.

The privileged instructions were the exception: they would always 'trap' to the actual operating system, which would handle the interrupt according to the user's simulated mode. If the user's 'machine' was in simulated user-mode, the actual operating system would simulate the hardware interrupt, adjusting the simulated machine state and transferring control to the instruction handler routine in the simulation (i.e. in the user's space); if the user's 'machine' was in simulated system-mode, the actual operating system would emulate the privileged instruction, changing the user's 'machine state' accordingly.
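To make that dispatch concrete, here is a minimal sketch of the trap-and-emulate logic just described. The state structure, instruction names, and helper routines are invented purely for illustration; a real system does this in hardware trap handlers, not in a script.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch only: %vm, the instruction names and the helper subs below are
# made up for illustration of the trap-and-emulate idea described above.
my %vm = (
    simulated_mode => 'user',   # the mode the guest *thinks* it is in
    state          => {},       # simulated machine state (registers, etc.)
);

# Called whenever the real hardware traps on a privileged instruction
# executed by guest code (which always runs in actual user mode).
sub on_privileged_trap {
    my ($vm, $insn) = @_;
    if ($vm->{simulated_mode} eq 'user') {
        # Guest was in simulated user mode: reflect the trap back into
        # the guest by simulating the hardware interrupt and jumping to
        # the guest's own illegal-instruction handler.
        deliver_simulated_trap($vm, $insn);
    }
    else {
        # Guest was in simulated system mode: the instruction is legal
        # for the guest OS, so emulate its effect on the simulated state.
        emulate_privileged_instruction($vm, $insn);
    }
}

sub deliver_simulated_trap         { ... }   # stubs, illustration only
sub emulate_privileged_instruction { ... }

# e.g. on_privileged_trap(\%vm, 'LOAD_PAGE_TABLE');
```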

What are the advantages and disadvantages of site mirroring [closed]

Question 1:
When sites are mirrored, the content of their respective servers is synchronized (possibly automatically (live mirrors) or manually). Is this true? Are all servers 'equal', or does a main server exist, which then sends its changes to the other 'child servers'? Do all changes have to happen on the main server, with the child servers not allowed to make changes?
Question 2:
Expected advantages:
Global advantage: when a site that is originally hosted in the US is mirrored to a server in London, Europeans will benefit from this. They will have a better response time, and because the downloaders are split between the two servers (American and European), their download speeds can be higher.
Security: When one server crashes or is hacked, the other server can continue to operate normally.
Expected disadvantages:
If live mirroring is not used, some users will have to wait for renewed content.
More servers equals higher upkeep costs.
What other items can be added to these lists?
When sites are mirrored, the content of their respective servers is synchronized. Is this true?
Yes, mirror sites should always be synchronized with their masters, even if, for several reasons (e.g. update propagation times, network failures, etc.), they may not be.
There are several ways to achieve this; for example, a simple method could be using an rsync command in a cron job; a better solution is the "push mirroring" technique, used by the Debian and Ubuntu Linux distributions.
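As a rough illustration of the cron-driven rsync approach, here is a minimal sketch of a pull-style sync script; the master URL and local docroot are placeholders, not part of any real setup.

```perl
#!/usr/bin/perl
# Minimal sketch of a pull-style mirror sync, meant to be run from cron
# (e.g. "*/15 * * * * /usr/local/bin/mirror-sync.pl").  The master host
# and the paths below are hypothetical.
use strict;
use warnings;

my $master = 'rsync://master.example.org/site/';   # hypothetical master
my $local  = '/var/www/mirror/';                   # hypothetical docroot

# --archive keeps permissions/timestamps, --delete removes files that
# disappeared on the master, so the mirror stays an exact replica.
my @cmd = ('rsync', '--archive', '--delete', '--quiet', $master, $local);
system(@cmd) == 0
    or die "rsync failed: exit status " . ($? >> 8) . "\n";
```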
Are all servers 'equal', or does a main server exist, which then sends its changes to other 'child servers'?
No, not all servers are equal; generally the content provider updates one or more master servers which, in turn, provide the updated content to the other mirrors.
For example, in the Fedora infrastructure there are master servers, tier-1 servers (fastest mirrors) and tier-2 servers.
So all changes have to happen on the main server, and child servers are not allowed changes?
Yes, in a mirrored context the content must be updated only on the master servers (one or more).
Expected advantages
Probably the most comprehensive list of reasons for mirroring can be found on Wikipedia:
To preserve a website or page, especially when it is closed or is about to be closed.
To allow faster downloads for users at a specific geographical location.
To counteract censorship and promote freedom of information.
To provide access to otherwise unavailable information.
To preserve historic content.
To balance load.
To counterbalance a sudden, temporary increase in traffic.
To increase a site's ranking in a search engine.
To serve as a method of circumventing firewalls.
Expected disadvantages
Cost: you have to buy additional servers and spend time to operate them.
Inconsistency: when one or more mirrors are not synchronized with the master (and this could happen not only with manual sync, but also with live sync).
As a further reference, since mirroring is a simple form of a Web Distributed System, you could also be interested in this reading.
Also, for files that are popular for downloading, a mirror helps reduce network traffic, ensures better availability of the Web site or files, or enables the site or downloaded files to arrive more quickly for users close to the mirror site. Mirroring is the practice of creating and maintaining mirror sites.
A mirror site is an exact replica of the original site and is usually updated frequently to ensure that it reflects the content of the original site. Mirror sites are used to make access faster when the original site may be geographically distant (for example, a much-used Web site in Germany may arrange to have a mirror site in the United States). In some cases, the original site (for example, on a small university server) may not have a high-speed connection to the Internet and may arrange for a mirror site at a larger site with higher-speed connection and perhaps closer proximity to a large audience.
In addition to mirroring Web sites, you can also mirror files that can be downloaded from a File Transfer Protocol (FTP) server. Netscape, Microsoft, Sun Microsystems, and other companies have mirror sites from which you can download their browser software.
Mirroring could be considered a static form of content delivery.

How to set the IP for a VM from outside [closed]

I need to set the IP from outside of the virtual machine.
Right now we use a DHCP server to bind a static IP to each VM's MAC address.
But as the number of VMs gets larger and larger, that's not easy to administer.
I want to provide an interface for clients to set the IP of a VM when creating it.
So far, I know I can mount the VM disk and configure the network settings before creating the VM.
There is one problem with that: the VM disk type may vary, and the disks may have totally different partition structures, possibly including LVM, etc. Besides this, I don't know whether it is possible to configure the IP for a Windows operating system with this method.
I don't know how the virtual-machine products, like VMware, do this.
Edit: If those virtual-machine products don't provide an interface for clients to set a VM's IP, then how do they manage their IPs? We have many, many VMs, and we assign an IP to each of them; the client just uses it. They are not authorized to set their IP from within the OS; even if they set it, it won't make any sense, as they won't be able to connect to the internet.
I think there must be an approach for this.
Thanks, any help is appreciated.
First of all, VMware does not provide a way to set the guest's IP from its interface. At least not in a general way. If you really want to modify the guest filesystem, have a look at libguestfs, which provides tools and an API to modify guest images.
You may also want to have a look at Foreman's smart proxy to manage/control your DHCP server via a REST API. If you use The Foreman directly, it will allow you to manage the IP addresses via a web UI.
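As a hedged sketch of the libguestfs route, here is roughly what editing a guest's network config from the host looks like with the Sys::Guestfs Perl bindings. The image name and the ifcfg path (a Red Hat-style guest) are assumptions for illustration only; other distributions keep this elsewhere, a Windows guest needs a completely different approach, and you should check the libguestfs documentation for your version.

```perl
#!/usr/bin/perl
# Sketch only: image name, root layout and config path are assumptions.
use strict;
use warnings;
use Sys::Guestfs;

my $g = Sys::Guestfs->new();
$g->add_drive_opts('guest.img', readonly => 0);   # hypothetical disk image
$g->launch();

my ($root) = $g->inspect_os();    # find the guest's root filesystem
$g->mount($root, '/');

# Red Hat-style static IP config; purely an example.
$g->write('/etc/sysconfig/network-scripts/ifcfg-eth0', <<'CFG');
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.122.50
NETMASK=255.255.255.0
ONBOOT=yes
CFG

$g->umount_all();
$g->shutdown();
```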

gather file(s) from users

I'm looking for ways to gather files from clients. These clients have our software, and we are currently using FTP to gather files from them. The files are collected from the client's database, encrypted, and uploaded via FTP to our FTP server. The process is fraught with frustration and obstacles. The software is frequently blocked by common firewalls and often runs into difficulties with VPNs and NAT (switching to passive instead of active mode usually helps).
My question is, what other ideas do people have for getting files programmatically from clients in a reliable manner. Most of the files they are submitting are < 1 MB in size. However, one of them ranges up to 25 MB in size.
I'd considered HTTP POST; however, I'm concerned that a 25 MB file would often fail over a POST (the web server timing out before the file could be completely uploaded).
Thoughts?
AndrewG
EDIT: We can use any common web technology. We're using a shared host, which may make central configuration changes difficult. I'm familiar with PHP from a common usage perspective... but not from a setup perspective (I've written lots of code, but not gotten into anything too heavy-duty). Ruby on Rails is also possible... but I would be starting from scratch. Ideally... I'm looking for a "web" way of doing it, as I'd like to eventually be ready to transition away from installed code.
Research scp and rsync.
One option is to have something running in the browser which will break the upload into chunks, which would hopefully make it more reliable. A control which does this would also give some feedback to the user as the upload progresses, which you wouldn't get with a simple HTTP POST.
A quick Google found this free Java applet which does just that. There will be lots of other free and paid options that do the same thing.
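Outside the browser, the same chunking idea can be sketched in a few lines of Perl. This is only an illustration: the upload URL, form field names, and chunk size are made up, and it assumes a server-side script that reassembles the parts.

```perl
#!/usr/bin/perl
# Sketch: split a file into small pieces and POST each one, so no single
# request is large enough to hit a server timeout.  Endpoint and field
# names are hypothetical.
use strict;
use warnings;
use LWP::UserAgent;

my $file  = shift or die "usage: $0 FILE\n";
my $url   = 'https://example.com/upload.cgi';   # hypothetical endpoint
my $chunk = 512 * 1024;                         # 512 KB per request
my $ua    = LWP::UserAgent->new(timeout => 60);

open my $fh, '<:raw', $file or die "open $file: $!";
my $part = 0;
while (read($fh, my $buf, $chunk)) {
    $part++;
    my $res = $ua->post(
        $url,
        Content_Type => 'form-data',
        Content      => [
            name => $file,
            part => $part,
            data => [ undef, "part-$part", Content => $buf ],
        ],
    );
    die "upload of part $part failed: " . $res->status_line . "\n"
        unless $res->is_success;
}
close $fh;
```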
You probably mean an HTTP PUT. That should work like a charm, if you have a decent web server. But as far as I know it is not restartable.
FTP is the right choice (passive mode to get through the firewalls). Use an FTP server that supports restartable transfers if you often face VPN connection breakdowns (hotel networks are soooo crappy :-) ).
The FTP command that must be supported is REST.
From http://www.nsftools.com/tips/RawFTP.htm:
Syntax: REST position
Sets the point at which a file transfer should start; useful for resuming interrupted transfers. For nonstructured files, this is simply a decimal number. This command must immediately precede a data transfer command (RETR or STOR only); i.e. it must come after any PORT or PASV command.
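As a hedged sketch, Net::FTP will issue REST for you when you pass a byte offset to get(). The host, credentials, and file names below are placeholders; resuming an upload (STOR) instead depends on what your server supports and is not shown here.

```perl
#!/usr/bin/perl
# Sketch of resuming an interrupted download with Net::FTP; get() with a
# byte offset sends REST before RETR and appends to the local file.
use strict;
use warnings;
use Net::FTP;

my $host   = 'ftp.example.com';    # hypothetical server
my $remote = 'export/data.enc';    # hypothetical remote file
my $local  = 'data.enc';

my $ftp = Net::FTP->new($host, Passive => 1)
    or die "cannot connect to $host: $@";
$ftp->login('user', 'secret') or die "login failed: ", $ftp->message;
$ftp->binary;

# If a partial file is already on disk, continue from its current size.
my $offset = -e $local ? -s $local : 0;
$ftp->get($remote, $local, $offset)
    or die "transfer failed: ", $ftp->message;

$ftp->quit;
```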

How do you protect your commercial application from being installed on multiple computers with one license? [closed]

How do you protect your commercial application from being installed on multiple computers from people who only own one license?
Do you think it's a good idea to have more than just a serial based scheme?
My general rules are
Huge deployments in commercial environments - Audit
Medium deployments of low value software < $1000 / seat - License key activation
Small deployments of high value software > $10,000 / seat - Dongles
The following method works well, as long as you have a public server at your disposal:
Serial based protection, user must enter a serial before using the program
On first serial entry, bind the serial to the MAC address and create an auth code generated from both of these values.
Check with your server to make sure the serial and MAC can be bound to each other. Register the MAC on the server.
On each subsequent run, never contact the server again, but each time make sure the serial + MAC address matches their auth code.
If the user has no MAC address, allow them to run the program as long as they have a serial.
This gives you protection against someone simply copying the registry from one computer to another.
If the user tries to install with the same serial on another computer, the server will not allow you to bind the serial number to the MAC address because it is already bound.
It is not a perfect solution but it protects you 99% of the time.
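A minimal sketch of that serial + MAC binding, assuming an HMAC-based auth code and a shared secret embedded in the application (all values below are placeholders; the server-side registration step is not shown):

```perl
#!/usr/bin/perl
# Sketch: the server records the serial/MAC pairing once; the client
# recomputes the auth code on every run.  Secret, serial and MAC are
# placeholders, and a real scheme would also obfuscate the secret.
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_hex);

my $secret = 'embedded-app-secret';   # assumption: shared with the server

sub make_auth_code {
    my ($serial, $mac) = @_;
    return hmac_sha256_hex(lc("$serial|$mac"), $secret);
}

sub check_license {
    my ($serial, $mac, $stored_code) = @_;
    # No MAC address available: accept the serial alone, as described above.
    return 1 unless defined $mac && length $mac;
    return make_auth_code($serial, $mac) eq $stored_code;
}

# Example values, purely for illustration.
my $serial = 'ABCD-1234-EFGH-5678';
my $mac    = '00:1A:2B:3C:4D:5E';
my $code   = make_auth_code($serial, $mac);
print check_license($serial, $mac, $code) ? "license ok\n" : "license invalid\n";
```

Copying the registry to another machine then fails because the other machine's MAC no longer matches the stored auth code.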
Do you think it's a good idea to have more than just a serial based scheme?
Speaking as someone who has to install all kinds of software on all kinds of machines, do please spare a thought for the poor network administrators when thinking up your copy protection scheme. Please, please, consider network-wide installs when writing your installer - by all means include some kind of serial number protection, even make me phone up or contact your website and get an authorisation code to get a site-wide installer code or whatever, but please make sure your licensing code works. A good way to ensure your technically-superior-to-anything-else-on-the-market software doesn't get installed and used is to mess up the installer or have an install system that is simply too much trouble.
Use machine-locked licenses or licenses requiring activation to lock licenses to specific machines. Instead of developing such a scheme yourself, consider using a ready-to-use one like CryptoLicensing which supports these features.
DISCLAIMER: I work for LogicNP Software, the developer of CryptoLicensing.
We use a MAC address plus license file approach. We have the customer send us the MAC address of their PC, then generate a license file based on that MAC address. We then send the file to them via email, and they load the license file into the program. The downside is that if people swap out network cards, you'll have to issue them a new license. It takes a little more bookkeeping to make sure people aren't always requesting new licenses, and a little trust in your customer base that they won't try to game the system too much. Depending on that trust level, you can add layers of encoding or encryption to the file so they can't easily duplicate it. On the plus side, you don't have to implement or maintain any type of authentication server.
You can always use a USB dongle if the software is worth it. Of course, all dongle manufacturers claim that their copy protection cannot be broken.
The advantage of this method is that it allows the user to use the software on multiple computers, but only run it on one at a time, and it is actually not as much hassle as some sort of product activation. The disadvantage, of course, is that you cannot deploy your application completely electronically. Even though you might think the opposite, many customers actually seem to accept the use of a dongle, at least in the field I work in. It's especially useful if you expect your customers to use (and also install!) the software in a place where no internet connection is available.
Edit: I overlooked the serial-based part of the original question. Note that even that may annoy users more than having to plug in a dongle, and the dongle is easier for you too, because neither the customer nor you have to deal with those numbers. Plug in the dongle and the app works. However, the serial-only method is by far the cheapest.
We use Orion from Agilis. For some of our users we do activation of node-locked licenses, for others they get their activation by a web page or email, and for others we put a license server on their premises. Orion covers all the bases we need.