Improving reliability of a server running programs in Interactive Services?

I have a bunch of programs that I run as services in FireDaemon (30+ programs). They all "interact" with the desktop, as they have a GUI. I usually check their status and change parameters every now and then by going into Interactive Services via the detection message.
The server is a Windows Server 2012 R2 and has plenty of resources (20 cores, 72 GB RAM, plenty of I/O) and is usually very stable. However, if I am in Interactive Services looking at the program statuses and/or modifying anything, it frequently locks me out and I cannot get back onto the server. It is essentially frozen.
How can I increase the stability? Is there something I can do to add more resources to Interactive Services? Is there a limit to how many programs I can run in Interactive Services?
Any insight helps - thanks!

You probably already know this or have seen this since you're using FireDaemon Pro, but they have a free technical preview of a Session 0 viewer that does just that. We have the same issue, and when session 0 causes problems it's almost like somebody pulled the network cable to that server.
http://support.firedaemon.com/news/24/firedaemon-session-0-viewer-pre-release-now-available.aspx
So far, so good with their session 0 viewer.

Related

How do I set up a server PC to host my website?

So I am interested in server PCs and I want to buy one, and I will choose a very powerful machine. But I don't know how to set up the hard disk so that it is connected to the internet. I want other people to see my site when they type its domain into the search bar. I am just looking for advice.
I went back over your question and this thread, and this is what I recommend. From what I understand, you are looking to create a hosting environment for others. Regardless of the platform you select (Linux or Windows), having a beefy machine is going to be key. At a minimum, I would recommend a dedicated server with multiple quad-core processors, 32 GB RAM, 2 or more TB of disk, with provision for backups. If you call, say, Dell or one of the other big server providers, they can custom-create a build that will accommodate your needs. That configuration would be a start; your final build may be beefier according to your needs and budget.

Citrix thin client and thick client (XenApp and XenDesktop)

Need to understand something related to Citrix XenApp and XenDesktop.
If I install software (e.g. Paint.NET) on a Citrix server and publish it via XenApp and XenDesktop to a set of users, my understanding is as follows:
Users who access the published application via XenApp are using a thin client application.
Users who access it via XenDesktop are using a thick client application.
Is my understanding correct? I have googled a lot but still couldn't get a proper answer. I am very new to this Citrix world.
Can someone please explain it to me in layman's terms?
I'm not sure these categories can really be cleanly applied to Citrix. Let me just explain in a nutshell how it works, and you can be the judge yourself as to which of the two (if either) it should be.
I have a farm of Citrix servers that I deploy WPF apps to. The servers are basically just Windows machines, so I can browse for files, upload, and interact with the local file system in any way. The app itself is installed on the Citrix server just as if it were a personal computer. Citrix technology basically just transmits a picture of whatever apps each user has open on the server(s). It does this by having the user install a client (a web browser plug-in), and all that comes across the wire is the compressed graphics info. There is no discernible lag, so it's basically just like I'm working directly on the server. I can't copy objects directly to my laptop from these servers, because the OS I'm on there isn't really the same OS (although I can browse through the network to my laptop and copy things that way very quickly).
That is XenApp. I assume XenDesktop is the same as what we call 'Remote Desktop', but double-check me on that. This is what I use to log into a computer in the office from my home and control it. It works very much like the above except that, instead of logging into a server, it's used to log into a desktop PC.
Both technologies just transmit a (compressed) image, and both allow you to send keystrokes and mouse movements so that it's like you're working directly on that machine. As I understand it, Citrix is one of the few games in town with this kind of technology, and last I heard, even MS licensed it from them.
The typical usage is to install fat client apps on a Citrix farm so that they then become web/browser accessible from outside the workplace. The apps are published on a gateway web site with links to the individual apps (although you can also browse through the file system and open them that way). The only thing the user needs to have installed is the Citrix client to decipher the visual stream. The client is free and lightweight.
So basically, I would say Citrix technology allows for fat clients to be installed on the Citrix server and then accessed like thin clients.
There are a few key differences between a Citrix deployment and the way a typical web app works. One is that the user has to actually close the app out, not just their local web browser, otherwise the app stays running on the Citrix server. By default that doesn't typically happen, because from the portal a particular app is published so that only that app pops up on click of the link (not a desktop or Windows Explorer). So when they close the 'picture' of it running in the browser, they do so by clicking the 'X' on the app. But if they're crafty, they can disconnect the client from the server and leave it running. That can be handy if one needs to do some work that shutting down the laptop would otherwise close out (long-running data warehouse pulls, etc.).

Another difference is that speed and performance are pretty much the same no matter the location of the user (at least with XenApp). Normally, if you have a wide area network and you deploy, say, an ASP.NET web page on a web server in City A, the user in City B 1000 miles away may experience a bit of lag, since the web app may have to query a database server, then spit out some JavaScript that gets consumed and run on the client. With Citrix XenApp, everything happens on the server in City A. In City B, the user is just getting a compressed picture stream. For this reason, it's better to avoid overly fancy graphics, because they waste bandwidth and usually get auto-compressed to look weird anyhow. But assuming that is done and the farm doesn't suck, performance will be appreciably the same in India, the Philippines, or the United States for the same app.

Another difference is that the data is inherently sandboxed, and there is no URL, unless you decide to put the app on a web server and then have users access it through Citrix (which I've seen done in companies with sensitive data using offshore vendors, because of the sandboxing and speed benefits). But if you do that, you have to open the web app from within the Citrix portal and then run the browser on that server (you can't just put a link to that web app from the web).

Finally--and maybe this is just where I work--the load balancing seems to work a little differently than with web servers. Users tend to get thrown onto the same server if they already have another app open. That can be handy for copying files, etc., but it also means less balance in the load for particular servers at times, so you typically don't want the overall average load to go high (you need more servers).
Hopefully that helps explain it and gives you an idea. Citrix just sends a picture over the wire that you can use to remote-control a machine. I would say it's kind of "both" on the thick or thin client question. Typically it is used to deploy WinForms, WPF, or other 'fat client' technologies, and is largely unnecessary for technologies that already allow for thin clients (web apps). But sometimes web apps are pushed through it as well, for various reasons.

simulate server load with BSD sockets

I'm using blocking TCP sockets in C. I want to simulate a high load on the server with many simultaneous connections, and then measure the time needed to access the server via a browser during this high-load period (the server understands HTTP headers).
Also, each client request finishes fast (it sends an HTTP header and gets text back).
How do I do this without crashing my local machine? I tried using fork to create many clients; I also have a virtual machine available.
If anyone has an idea or some general directions about how to do this, it would mean a lot.
Edit: I need to run this with my own client, which uses a modified version of the OpenSSL library to connect to my SSL/TLS server, so I can't use external test tools.
I want to know how to build both the client and the server. I don't know much about sockets other than blocking ones; I'm just skimming through Richard Stevens' UNIX Network Programming book, but I was wondering if anyone could point out the exact solution.
Thank you!
The easiest solution would be to download an existing stress-testing framework such as fwptt (http://fwptt.sourceforge.net/).
If you want to implement your own stress-testing framework, I'd suggest you lose the blocking nature of your code and go with a parallel design that will scale beautifully. The limiting factor is then pretty much your CPU.
Having two physical servers would be ideal, so that the stress test isn't affecting the CPU (and therefore the response times) of the server. Also, that VM of yours drains precious CPU time.
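To make the parallel approach concrete, here is a minimal fork-based load generator in C, in the spirit of the fork experiments mentioned in the question. The host, port, request string, and client count are illustrative assumptions, not taken from the original post; a real test would parameterize them and record per-request timings.

    /* Minimal fork-based load generator: spawns NCLIENTS children, each
       opening a TCP connection, sending a tiny HTTP request and draining
       the reply. Host, port and client count are assumptions. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NCLIENTS 200                  /* concurrent children; tune to taste */
    #define REQUEST  "GET / HTTP/1.0\r\n\r\n"

    static void run_client(const struct sockaddr_in *addr)
    {
        char buf[4096];
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (const struct sockaddr *)addr, sizeof *addr) < 0)
            _exit(1);
        write(fd, REQUEST, strlen(REQUEST));
        while (read(fd, buf, sizeof buf) > 0)    /* drain the response */
            ;
        close(fd);
        _exit(0);
    }

    int main(void)
    {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(8080);           /* assumed server port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        for (int i = 0; i < NCLIENTS; i++) {
            pid_t pid = fork();
            if (pid == 0)
                run_client(&addr);
            else if (pid < 0)
                perror("fork");
        }
        while (wait(NULL) > 0)                   /* reap all children */
            ;
        return 0;
    }

While this runs, timing a browser request (or a curl invocation) against the same server gives a crude measure of how response time degrades under load.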

gather file(s) from users

I'm looking for ways to gather files from clients. These clients have our software, and we are currently using FTP to gather files from them. The files are collected from the client's database, encrypted, and uploaded via FTP to our FTP server. The process is fraught with frustration and obstacles. The software is frequently blocked by common firewalls and often runs into difficulties with VPNs and NAT (switching to passive instead of active mode usually helps).
My question is: what other ideas do people have for getting files programmatically from clients in a reliable manner? Most of the files being submitted are < 1 MB in size; however, one of them ranges up to 25 MB.
I'd considered HTTP POST; however, I'm concerned that a 25 MB file would often fail over a POST (the web server timing out before the file could be completely uploaded).
Thoughts?
AndrewG
EDIT: We can use any common web technology. We're using a shared host, which may make central configuration changes difficult. I'm familiar with PHP from a common-usage perspective... but not from a setup perspective (I've written lots of code, but not gotten into anything too heavy-duty). Ruby on Rails is also possible... but I would be starting from scratch. Ideally, I'm looking for a "web" way of doing it, as I'd like to eventually be ready to transition away from installed code.
Research scp and rsync.
One option is to have something running in the browser which breaks the upload into chunks, which would hopefully make it more reliable. A control which does this would also give some feedback to the user as the upload progresses, which you wouldn't get with a simple HTTP POST.
A quick Google found this free Java applet which does just that. There will be lots of other free and paid options that do the same thing.
You probably mean an HTTP PUT. That should work like a charm if you have a decent web server. But as far as I know it is not restartable.
FTP is the right choice (passive mode to get through the firewalls). Use an FTP server that supports restartable transfers if you often face VPN connection breakdowns (hotel networks are soooo crappy :-) ).
The FTP command that must be supported is REST.
From http://www.nsftools.com/tips/RawFTP.htm:
Syntax: REST position
Sets the point at which a file transfer should start; useful for resuming interrupted transfers. For nonstructured files, this is simply a decimal number. This command must immediately precede a data transfer command (RETR or STOR only); i.e. it must come after any PORT or PASV command.
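As a sketch of what the client side of a restartable transfer can look like, here is a hedged example using libcurl, which performs the REST handshake internally when asked to resume; the URL, credentials, and file names are placeholders, not details from the original question.

    /* Resumable FTP upload via libcurl. Setting the resume offset to -1
       makes libcurl query the remote file size and continue from there
       (FTP only). Build with: cc upload.c -lcurl */
    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        FILE *src = fopen("export.dat", "rb");       /* local file to send */
        if (!curl || !src)
            return 1;

        curl_easy_setopt(curl, CURLOPT_URL,
                         "ftp://user:pass@ftp.example.com/uploads/export.dat");
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
        curl_easy_setopt(curl, CURLOPT_READDATA, src);
        curl_easy_setopt(curl, CURLOPT_FTP_USE_EPSV, 1L);  /* extended passive mode */
        curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, (curl_off_t)-1);

        CURLcode rc = curl_easy_perform(curl);
        if (rc != CURLE_OK)
            fprintf(stderr, "upload failed: %s\n", curl_easy_strerror(rc));

        fclose(src);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }

If you are speaking raw FTP over your own sockets instead, the REST command goes on the control connection immediately before the STOR (or RETR), exactly as described above.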

Virtualization and why it is good for programmers

Why does it help to know about virtualization from a programmer's perspective? Apart from testing and developing on several different platforms without needing to switch between operating systems, is there a particular reason why virtualization is important for a programmer? Are there any details that must be kept in mind before developing on virtual instances?
I use it for testing our installer, because it is important to check whether the application will work on a clean installation of the operating system.
I used to do these tests by keeping a hard drive with a fresh operating system installation and making a copy of that disk for (almost) every new test run. This was very time consuming, and the virtual machine solution has saved me a lot of time. Note that this even allows you to do remote debugging as easily as when using two non-virtual machines.
Note: If you're interested, I'm using VirtualBox, which is a very good and free virtualization tool.
If you develop a driver or something very close to the hardware, with a high risk of crashing the machine, you will be glad to be working on a virtual machine.
Reverting to an old state is easier than repairing a damaged OS.
One of the main advantages is having your entire development environment as a single image file. I have a perfectly configured version of Windows Server, Visual Studio, ReSharper, etc. I can easily try a new version of something on a copy of this virtual machine without worrying about it causing problems.
I can also back up my entire dev environment to transfer it to another physical machine very easily. I've been through 3 machines in this office alone so that was a lifesaver in itself.
The only real trade-off I see is performance. You generally have to use fewer physical CPU cores than you actually have, and less memory. With a sufficiently powerful machine this is not much of a problem, though.
Edit: As nader said, I/O is obviously important for most projects as well. Although developing on a virtual machine does mean a fairly large I/O penalty compared with a native OS install, in practice I rarely find it to be a problem. The superior random access capabilities of SSDs are helping to mitigate this drawback as well.
Being able to completely reset the state of the system is very useful to debug applications which modify their environment - If the actions are repeated after a reset, and they're constrained to the sandbox environment of the VM, you are pretty much guaranteed to get the same result.
We have a large number of different versions / customer customisations of our software, and it's not possible for two installs of our software to coexist on the same machine. Virtualisation allows us to replace the 50-60 physical machines that we would need to maintain for testing and problem reproduction with 2-3 virtual servers. It takes around 10 minutes to make a copy of a VHD template we have and create a new virtual machine, and as long as you allocate 1-2 GB of RAM, the performance is comparable to that of a (slow) physical machine.
Virtual machines are also great for build machines.
Personally I do all of my development on my desktop machine for best performance, and remote debug into VMs. I don't run virtual machines on my desktop as they use up too much RAM; we have dedicated virtual servers for that.
Good for development, because you have the same server configuration in the virtual machine as on the production server.
https://stackoverflow.com/questions/905926/developer-software-setup
From a user-space application there should be no difference between developing for a virtualised OS and a normal OS. There may be some gotchas if your code makes explicit assumptions about the machine's memory size and number of processors and believes what the hypervisor tells you.
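One way to avoid such gotchas is to query the (virtual) hardware at runtime instead of hard-coding assumptions. A minimal POSIX sketch, assuming a Linux-style libc where these sysconf names are available:

    /* Query processor count and physical memory at runtime; inside a VM
       these reflect whatever the hypervisor chooses to expose. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);  /* online processors */
        long pages = sysconf(_SC_PHYS_PAGES);        /* physical memory pages */
        long psize = sysconf(_SC_PAGESIZE);          /* bytes per page */

        printf("processors: %ld\n", ncpus);
        printf("memory: %ld MB\n", (pages / 1024) * psize / 1024);
        return 0;
    }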
I'm surprised no one has mentioned the ease of deployment. All you need to do is get the build onto the virtual OS, and then you can copy the image to as many new servers (running some kind of virtualization solution, like VMware) as you want, easily scaling your application.
Record the state of a bug in a program, and send it to the developer (along with the entire "machine").
Testing your code on various OSes, some of which you don't have.
Working in a more protected environment, making sure that the code doesn't harm your system - useful for understanding dangerous programs like viruses and developing security against them, for writing potentially destructive hard-drive programs, and for anything that can have catastrophic effects on your system.
Easily write your own OS without needing to write to 'real' boot sectors, a potentially harmful act (I hope this is not news...).
Quickly use tools and programs not found on your own OS.
Demonstrate a program at various stages by restoring a virtual machine; this is quicker and less prone to failure than trying to recreate the state in the minutes before the demo.
Less directly connected to programming, but surfing via a virtual machine (for example, to read documentation) has the added value that your own important system (and code) is less likely to be harmed by malicious programs.
From my experience, in most cases the answer is typically "no" (once testing and targeting multiple platforms are taken out of the picture; both are huge reasons to be familiar with "desktop" VM solutions). Others have done an excellent job of listing rare exceptions like debugging kernel code.
There are some quirks one must be aware of when running on a virtual machine. This is hardly an exhaustive list:
Loss of precision, or even time reversal, in high-resolution timers due to emulation of hardware resources (this depends somewhat on the VM platform and operating system); see the sketch after this list.
Virtual network interfaces are usually bridged. We've seen some extremely odd behavior in the host system with an application that sets up its own bridge between virtual interfaces - behavior which logically should not affect the host - in one of the leading VM solutions.
Usage models - if your product has Orwellian licensing code or records state-dependent behavior when interacting with remote systems, you should account for what would happen if a system were "paused" and "restarted", or restarted from an earlier "state". Normally this kind of thing would be taken into account anyway in a robust implementation.
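On the timer quirk above, a defensive habit is to measure intervals with a monotonic clock, which never runs backwards, rather than the wall clock, which a hypervisor may step when it resynchronizes guest time. A minimal sketch, assuming a POSIX system with clock_gettime:

    /* Interval timing that survives wall-clock jumps: CLOCK_MONOTONIC
       is guaranteed not to go backwards, unlike CLOCK_REALTIME. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        /* ... the work being timed ... */
        clock_gettime(CLOCK_MONOTONIC, &end);
        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("elapsed: %.6f s\n", elapsed);
        return 0;
    }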
If you are developing in a virtual environment, you will want to make sure you know what specifications were used to create it. If you have, say, a 4 GB machine and create a virtual environment with 1 GB, you will want to make sure things in your development do not grow to the point that they overrun the memory, as this will cause performance problems. I personally ran into this, and it was a pretty tricky thing to track down. The scenario was that I was fixing a bug and testing it in a virtual environment (I did not set up the virtual environment, by the way). The application took a performance hit because of all the memory swapping that was taking place.
A very good use for a virtual environment is when you are developing applications that mess with the Windows GINA. It's much easier to reinstall a virtual environment than an entire PC... (been there, done that too).
I do all of my development on a virtual XP instance under VMware Fusion so that I can use a Mac for everything and still write .NET code ;-)
Sometimes they are necessary, because the platform you are programming for doesn't support the standard developer environment. One such example is SharePoint. As of SharePoint 2007, you still need a server OS to install SharePoint 2007, WSS, and the Visual Studio SharePoint Extensions (VSeWSS).
Thus, for SharePoint I have to use a Windows Server VM to do my development work. As for SharePoint 2010, they are supporting installations on Vista and 7 x64, but I will still use a VM, because I don't want SharePoint on my main machine slowing everything down. Rather, I want it in a VM where the services are on when needed and off when I don't need them, without having to manually turn each service off and on. This is in addition to the many great answers posted above.