Citrix thin client and thick client (XenApp and XenDesktop)

Need to understand something related to Citrix XenApp and XenDesktop.
If I install software (e.g., Paint.NET) on a Citrix server and publish it via XenApp and XenDesktop to a set of users, my understanding is as follows:
Users who access the published application via XenApp are using a thin client application.
Users who access it via XenDesktop are using a thick client application.
Is my understanding correct? I have googled a lot but still couldn't find a proper answer. I am very new to the Citrix world.
Can someone please explain it to me in layman's terms?

I'm not sure these categories can really be cleanly applied to Citrix. Let me just explain in a nutshell how it works, and you can be the judge yourself which of the two (if either) it should be.
I have a farm of Citrix servers that I deploy WPF apps to. The servers are basically just Windows machines, so I can browse for files, upload, and interact with the local file system in any way. The app itself is installed on the Citrix server just as if it were a personal computer. Citrix technology basically just transmits a picture of whatever apps each user has open on the server(s). It does this by the user installing a client (a web browser plug-in), and all that comes across the wire is the compressed graphics info. There is no discernible lag, so it's basically just like I'm working directly on the server. I can't copy objects directly to my laptop from these servers, because the OS I'm on there isn't really the same OS (although I can browse through the network to my laptop and copy things that way very quickly).
That is XenApp. I assume XenDesktop is the same as what we call 'Remote Desktop', but double-check me on that. This is what I use to log into a computer in the office from my home and control it. It works very much like the above except that, instead of logging into a server, it's used to log into a desktop PC.
Both technologies just transmit a (compressed) image, and both allow you to send keystrokes and mouse movements so that it's like you're working directly on that machine. As I understand it, Citrix is one of the few games in town with this kind of technology, and last I heard, even MS licensed it from them.
The typical usage is to install fat client apps on a Citrix farm so that they then become web/browser accessible from outside the workplace. The apps are published on a gateway website with links to the individual apps (although you can also browse through the file system and open them that way). The only thing the user needs to have installed is the Citrix client to decipher the visual stream. The client is free and lightweight.
So basically, I would say Citrix technology allows for fat clients to be installed on the Citrix server and then accessed like thin clients.
There are a few key differences between a Citrix deployment and the way a typical web app works. One is that the user has to actually close the app, not just their local web browser; otherwise the app stays running on the Citrix server. By default that doesn't typically happen, because from the portal a particular app is published so that only that app pops up on click of the link (not a desktop or Windows Explorer). So when users close the 'picture' of it running in the browser, they do so by clicking the 'X' on the app. But if they're crafty, they can disconnect the client from the server and leave it running. That can be handy if one needs to do some work that shutting down the laptop would otherwise close out (long-running data warehouse pulls, etc.).

Another difference is that speed and performance are pretty much the same no matter the location of the user (at least with XenApp). Normally, if you have a wide area network and you, say, deploy an ASP.NET web page on a web server in City A, the user in City B 1000 miles away may have a bit of a lag, since the web app may have to query a database server and then spit out some JavaScript that gets consumed and run on the client. With Citrix XenApp, everything is occurring on the server in City A. In City B, the user is just getting a compressed picture stream. For this reason, it's better to avoid too-fancy graphics, because they waste bandwidth and usually get auto-compressed to look weird anyhow. But assuming that is done and the farm doesn't suck, performance will be appreciably the same in India or the Philippines or the United States for the same app.

Another difference is that the data is inherently sandboxed, and there is no URL unless you decide to put the app on a web server and then have users access it through Citrix (which I've seen done in companies with sensitive data using offshore vendors, because of the sandboxing and speed benefits). But if you do that, you have to open the web app from within the Citrix portal and then run the browser on that server (you can't just put a link to that web app from the web).

Finally--and maybe this is just where I work--the load balancing seems to work a little differently than with web servers. Users tend to get thrown onto the same server if they already have another app open. That can be handy for copying files, etc., but it also means less balance in the load for particular servers at times, so you typically don't want the overall average load to go high (you need more servers).
Hopefully that helps explain it and gives you an idea. Citrix just sends a picture over the wire that you can use to remote-control a machine. I would say it's kind of "both" on the thick or thin client question. Typically it is used to deploy WinForms, WPF, or other 'fat client' technologies, and is largely unnecessary for technologies that already allow for thin clients (web apps). But sometimes web apps are pushed through there also, for various reasons.

Related

Server for iPhone; continuous connection

OK, let's say I want to create a connection between my iPhone app and my server (I'd like to try to use GoDaddy servers for this) to serve real-time location data to users.
I've seen plenty of good stuff online about using sockets, streams, ASIHttpmessage, CFHTTPMessageRef, etc., but what I'm unclear about is how to set up a server that continuously serves real-time data to users (I believe you'd need a stream of data going to the user for this, not just a single HTTP request and response). How does one take a host like GoDaddy and run server code on it? I know you can set up a server like this using the terminal, but as far as I know I don't have access to the command line or the ability to run this "server program" from my web host. Is there software I can download via my cPanel for this? Do I need a virtual private server and different hosting from GoDaddy, maybe?
Does anyone know how I can do this, or whether my understanding of this whole thing is wrong? Please keep in mind I need this in real time (or close to it). Please, educate me. I really just need a better understanding of how this works.

Is Citrix client drive mapping a bad idea for external clients?

We are setting up a Citrix solution for co-workers from an external partner to access applications in our organisation. The question is whether it's a bad idea, from a security perspective, to allow Citrix client drive mapping.
Does anyone know of any best practices?
We have no control over the state (of, for example, antivirus software) of the clients they connect from, or over their network.
This is probably a question for the Citrix forums, but here are my 2 cents:
With Citrix XenApp you can granularly control which level of data exchange between the client (where the user sits) and the server (where applications are executed and data is stored) you want to allow. One extreme is to disable every form of exchange, including the clipboard. In such a scenario the only way users can copy data from the server is via screenshots.
The other extreme is to allow everything including clipboard and client drive mapping. In that case you can copy data to and fro, both via the clipboard and via the file system.
There is no best practice, you need to define which level of security you want and act accordingly. But beware: think of the users, too, and do not restrict them unnecessarily.

Send emails through VB6 if no email client

I have a VB6 app which is used by a large number of clients.
I need to allow the clients to be able to send emails to me. In the past I have done this using Microsoft MAPI controls. However, not all of them have an email client installed, since they use webmail instead.
Is there any other method anyone can recommend which would allow them to do this?
SMTP
You can use CDO for Windows to do this if we make a few assumptions:
Your users are all on Win2K or later.
The users will never be behind a firewall blocking SMTP or proxying all SMTP port use to a corporate server.
You have an SMTP server with an account that the user emails can be sent through.
You embed the server's address and account credentials in your program.
Sometimes using an SMTP server listening on an alternate port will address the second issue, but often such an alternate port is even more likely to be blocked.
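As a rough sketch of the same idea in C# (the CDO approach from VB6 follows the same pattern), here is what sending through an SMTP server with embedded credentials might look like; the server name, port, addresses, and credentials are all placeholders you would replace with your own:

```csharp
using System.Net;
using System.Net.Mail;

class MailSender
{
    static void Main()
    {
        // Hypothetical SMTP server and account; embed your own values here.
        var client = new SmtpClient("smtp.example.com", 587)
        {
            EnableSsl = true,
            Credentials = new NetworkCredential("app-account", "app-password")
        };

        var message = new MailMessage(
            "app-account@example.com",            // from
            "support@example.com",                // to (your inbox)
            "Feedback from client app",           // subject
            "Message body collected from the user goes here.");

        client.Send(message);
    }
}
```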
SMTP is Dying
Abuse over time has made SMTP less and less viable for automated/assisted user contact. There are just too many variables involved in trying to open some sort of "clear channel" for SMTP communication as people work harder to fight spammers and such.
Today I would be much more likely to use either WebDAV or a Web Service for this. Both use HTTP/HTTPS, which is more likely to get past firewalls and usually gets by most proxy servers as well. WebDAV is often more "slippery" at this than Web Services, which more and more proxies are blocking. You can also use something more RESTful than SOAPy, since the traffic "smells more like" user browsing to proxy servers.
WebDAV is a Clean Option
There are even free WebDAV providers offering 2GB of storage with a main and a guest user. The guest account can be given limited rights to various folders, so there might be some folders they can post your messages to, and other folders they can only get data from (read-only folders), etc. For a paid account you can get more storage, additional users, etc.
This works well. You can even use the same hosting for program version files, new version code to be downloaded and installed, etc. All you need on your end is an aggregator program that scoops up user posted messages and deletes them using the main user/pw.
You still need to embed user credentials in your program, but it can be a simpler matter to change passwords over time. Just have the program fetch an info file with a new password and an effective date and have the program flip the "new" password to "current" once run on that date or after.
WebDAV support in Windows varies. From WinXP SP3 forward you can simply programmatically map a drive letter to a WebDAV share and then use regular file I/O statements against it, and unmap the letter when done. For more general use across even Win9x you can build a simple WebDAV client on top of XMLHTTPRequest or use a 3rd party library.
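As a sketch of the "simple WebDAV client" idea, an upload is just an HTTP PUT against the share, which HttpWebRequest can do directly; the URL, file name, and guest credentials below are hypothetical:

```csharp
using System.IO;
using System.Net;
using System.Text;

class WebDavPost
{
    static void Main()
    {
        // Hypothetical WebDAV share and guest credentials.
        var url = "https://dav.example.com/inbox/message-001.txt";
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "PUT";                      // WebDAV uses plain HTTP verbs
        request.Credentials = new NetworkCredential("guest", "guest-password");

        byte[] body = Encoding.UTF8.GetBytes("User message text goes here.");
        request.ContentLength = body.Length;
        using (Stream stream = request.GetRequestStream())
            stream.Write(body, 0, body.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // 201 Created (or 204 No Content on overwrite) means the upload succeeded.
            System.Console.WriteLine(response.StatusCode);
        }
    }
}
```

Your aggregator would do the reverse with GET and DELETE using the main account.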
Web Services Have Higher Costs
Just to start with, you have server-side code to write and maintain, and you have to use a specific kind of hosting. For example, if you build it using PHP you need a PHP host; ASP, an ASP host; ASP.NET, an ASP.NET host; etc.
Web Services can also be more problematic in terms of versioning. If you later update your program to provide different information in these user contact posts you have to make another Web Service as well as changing both the application and the aggregator. Using WebDAV you can just make a "new format" folder on the server and have the new program post the data there in the new format. Your aggregator can simply pull from both folders and do any necessary reformatting into your new local database/message repository format.
This is merely an incremental additional effort though and a Web Service might be the way to go, even if it is just something written like an HTML Form GET/POST acceptor.
Although the following question is about VBA, you may find it of interest: Sending Emails using VBA without MAPI

See what website the user is visiting in a browser independent way

I am trying to build an application that can show a user website-specific information whenever they visit a website that is present in my database. This must be done in a browser-independent way so the user will always see the information when visiting a website (no matter what browser or other tool he or she is using to visit the website).
My first (partially successful) approach was looking at the data packets using the System.Net.Sockets.Socket class, etc. Unfortunately I discovered that this approach only works when the user has administrator rights. And of course, that is not what I want. My goal is that the user can install one relatively simple program that can be used right away.
After this I went looking for alternatives and found a lot about WinPcap and some of its .NET wrappers (did I tell you I am already programming in C# .NET?). But with WinPcap I found out that it must be installed on the user's PC, and there is no way to just reference some DLL files and code away. I already looked at including WinPcap as a prerequisite in my installer, but that is also too cumbersome.
Well, long story short: I want my application to know what website my user is visiting at the moment it happens. I think it must be done by looking at the data packets on the network, but I can't find a good solution for this. My application is built in C# .NET (4.0).
You could use Fiddler to monitor Internet traffic.
It is
a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.
It's scriptable and can be readily used from .NET.
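As a sketch of what "scriptable from .NET" can look like, the FiddlerCore library (a separately referenced assembly; the exact API can differ between versions) lets you hook requests as they pass through a local proxy:

```csharp
using System;
using Fiddler;  // FiddlerCore assembly, referenced separately

class TrafficWatcher
{
    static void Main()
    {
        // Log the URL of every request routed through the local proxy.
        FiddlerApplication.BeforeRequest += delegate(Session session)
        {
            Console.WriteLine(session.fullUrl);
            // Here you could look the host up in your database and notify the user.
        };

        // Listen on port 8877 and register as the system proxy.
        FiddlerApplication.Startup(8877, FiddlerCoreStartupFlags.Default |
                                         FiddlerCoreStartupFlags.RegisterAsSystemProxy);

        Console.WriteLine("Monitoring... press Enter to stop.");
        Console.ReadLine();
        FiddlerApplication.Shutdown();
    }
}
```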
One simple idea: instead of monitoring the traffic directly, what about installing a browser extension that sends you the current URL of the page? Then you can check whether that URL is in your database and optionally show the user a message via the browser extension.
This is how extensions like Invisible Hand work... It scans the current page and sends relevant data back to the server for processing. If it finds anything, it uses the browser extension framework to communicate those results back to the user. (Using an alert, or a bar across the top of the window, etc.)
For a good start, Wireshark will do what you want.
You can specify a filter to isolate and view HTTP streams.
The best part is that Wireshark is open source, and built upon another program's API, WinPcap, which is also open source.
I'm guessing this is what you want:
1. Capture network data off the wire.
2. View the TCP traffic of a computer; isolate and save (in part or in whole) the HTTP data.
3. Store information about the HTTP connections.
Number 1 there is easy: you can google for a WinPcap tutorial, or just use some of their sample programs to capture the data.
I recommend you study up on the pcap file format; everything with WinPcap uses this basic format and its structures.
Then you have to learn how to take a TCP stream and turn it into a solid data stream without corruption or disorganized parts.
Again, a very good example can be found in the Wireshark source code.
Then, with your data stream, you can simply read the HTTP format and HTML data, or whatever you're dealing with.
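If you go the WinPcap route from C#, a .NET wrapper such as SharpPcap (one of the wrappers mentioned in the question) can handle step 1; this is only a sketch, and the API details vary between SharpPcap versions:

```csharp
using System;
using SharpPcap;   // .NET wrapper around WinPcap; installed separately

class HttpSniffer
{
    static void Main()
    {
        // Pick the first capture device (network adapter) on the machine.
        var device = CaptureDeviceList.Instance[0];

        device.OnPacketArrival += delegate(object sender, CaptureEventArgs e)
        {
            // Raw bytes of the captured packet; reassembling the TCP payload into
            // HTTP requests (step 2) is up to you (see the Wireshark source).
            Console.WriteLine("Captured {0} bytes", e.Packet.Data.Length);
        };

        device.Open(DeviceMode.Promiscuous, 1000);
        device.Filter = "tcp port 80";   // only look at HTTP traffic
        device.StartCapture();

        Console.WriteLine("Sniffing... press Enter to stop.");
        Console.ReadLine();
        device.StopCapture();
        device.Close();
    }
}
```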
Hope that helps
If the user is cooperating, you could have them set their browser(s) to use a proxy service you provide. This would intercept all web traffic, do whatever you want with it (look up in your database, notify the user, etc), and then pass it on to the original location. Run the proxy on the local system, or on a remote system if that fits your case better.
If the user is not cooperating, or you don't want to make them change their browser settings, you could use one of the packet-sniffing solutions, such as Fiddler.
A simple, straightforward way is to change the computer's DNS settings to point to your application.
This will cause all DNS traffic to pass through your app, where it can be sniffed and then forwarded to the real DNS server.
It will also save you the hassle of filtering out eMule/torrent traffic, as that normally works with plain IP addresses (which might also be a problem, since your approach can be circumvented by browsing with an IP address).
- How to change Windows DNS servers
- DNS resolver
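A minimal sketch of that DNS idea in C#: listen locally on UDP port 53, log the queried name, and relay to a real resolver (the upstream server address here is just an example):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class DnsForwarder
{
    static void Main()
    {
        // The machine's DNS server is pointed at 127.0.0.1; we forward to a real resolver.
        var upstream = new IPEndPoint(IPAddress.Parse("8.8.8.8"), 53);
        using (var listener = new UdpClient(53))
        using (var forwarder = new UdpClient())
        {
            while (true)
            {
                var client = new IPEndPoint(IPAddress.Any, 0);
                byte[] query = listener.Receive(ref client);

                Console.WriteLine("Lookup: " + ReadQueryName(query));

                // Relay the query unchanged and return the real answer to the client.
                forwarder.Send(query, query.Length, upstream);
                var server = new IPEndPoint(IPAddress.Any, 0);
                byte[] answer = forwarder.Receive(ref server);
                listener.Send(answer, answer.Length, client);
            }
        }
    }

    // The first question name starts right after the 12-byte DNS header,
    // encoded as length-prefixed labels ("3www7example3com0").
    static string ReadQueryName(byte[] packet)
    {
        var name = new StringBuilder();
        int i = 12;
        while (i < packet.Length && packet[i] != 0)
        {
            int len = packet[i++];
            if (name.Length > 0) name.Append('.');
            name.Append(Encoding.ASCII.GetString(packet, i, len));
            i += len;
        }
        return name.ToString();
    }
}
```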
Another simple way is to configure (programmatically) the browser's proxy to pass through your server. This will make your life easier, but will be more obvious to users.
How to create a simple proxy in C#?
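For the "configure the browser's proxy programmatically" part, Internet Explorer (and anything else that follows the system proxy via WinINET) reads its settings from the registry; a rough C# sketch (the local port 8080 is just an example for wherever your proxy listens):

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32;

class ProxyConfig
{
    [DllImport("wininet.dll")]
    static extern bool InternetSetOption(IntPtr hInternet, int option,
                                         IntPtr buffer, int bufferLength);

    const int INTERNET_OPTION_REFRESH = 37;
    const int INTERNET_OPTION_SETTINGS_CHANGED = 39;

    static void Main()
    {
        // WinINET reads the per-user proxy from this key.
        using (var key = Registry.CurrentUser.OpenSubKey(
            @"Software\Microsoft\Windows\CurrentVersion\Internet Settings", true))
        {
            key.SetValue("ProxyServer", "127.0.0.1:8080");
            key.SetValue("ProxyEnable", 1, RegistryValueKind.DWord);
        }

        // Tell WinINET the settings changed so running browsers pick them up.
        InternetSetOption(IntPtr.Zero, INTERNET_OPTION_SETTINGS_CHANGED, IntPtr.Zero, 0);
        InternetSetOption(IntPtr.Zero, INTERNET_OPTION_REFRESH, IntPtr.Zero, 0);
    }
}
```

Keep in mind that Firefox keeps its own proxy settings by default, so fully browser-independent coverage still takes some per-browser work.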

iPhone: Connecting to database over Internet?

I've been talking with someone about the possibility of an iPhone development contract gig. All I really know at this point is that there is a company that wants to make an iPhone app that will hit their internal database. I'm not sure what the database type is (Oracle, MySQL, etc.).
I want to know, if the database type is Oracle or MySQL, whether there is a big learning curve for connecting to one of these across the internet.
If it's a real pain I may do more research before accepting the contract.
I would advise against directly accessing the database from the iPhone application.
Usually, you would create a web service which accesses the database, and then you consume that web service from the iPhone application.
Create a web service. This allows you to make the iPhone app more of a thin client. Let the application push commands to the web service for processing and interaction with the database, returning only the data needed by the app.
This option is better for the app, the database, and the customer's security.
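A minimal sketch of that kind of web service, here as an ASP.NET generic handler in C# that queries the database and returns JSON for the phone to consume (the connection string, table, and column names are made up, and you would swap in the appropriate ADO.NET provider for Oracle or MySQL):

```csharp
// Products.ashx - a minimal ASP.NET generic handler the iPhone app can call over HTTP.
using System.Data.SqlClient;
using System.Text;
using System.Web;

public class Products : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Hypothetical connection string; the database itself stays behind the firewall.
        var json = new StringBuilder("[");
        using (var connection = new SqlConnection("Server=internal-db;Database=Shop;Integrated Security=true"))
        using (var command = new SqlCommand("SELECT Id, Name FROM Products", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                bool first = true;
                while (reader.Read())
                {
                    if (!first) json.Append(",");
                    // A real implementation should use a JSON serializer to handle escaping.
                    json.AppendFormat("{{\"id\":{0},\"name\":\"{1}\"}}",
                                      reader.GetInt32(0), reader.GetString(1));
                    first = false;
                }
            }
        }
        json.Append("]");

        context.Response.ContentType = "application/json";
        context.Response.Write(json.ToString());
    }

    public bool IsReusable { get { return true; } }
}
```

The iPhone app then only ever talks HTTP(S) to this endpoint, never directly to the database.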
You can easily perform the connection over the internet, the same way you would locally, but you are opening the database up to attacks if it will accept communication from any remote IP address. Typically you will just connect via a socket open to the server's remote IP address over the open port, MySQL's default port is 3306.
I would recommend against this sort of system in general unless there is some critical reason they want their internal database exposed to the world's hacker community.
What I am doing is creating a web service using Sinatra to access the online database.
Those answers from 2009 are mostly obsolete now.
http://ODBCrouter.com/ipad (new) has Xcode client-side ODBC libraries, header files, and multi-threaded Objective-C objects that let your apps send SQL to server-side ODBC drivers and get back binary results! This reduces the need to set up and separately maintain SOAP/REST servers, which can get pretty frightening anyway after a while of maintaining them.
The XML schemes were okay for transferring static configurations to mobile devices "every once in a while", but XML was meant for infrequent inter-company type transfers in a "server environment" (with power cords, wired networks and air conditioning) and is definitely not efficient for frequent database queries coming in from n-copies of a mobile app. There are third-party JSON libraries that help things, but even with JSON, everything has to be encoded (and decoded) from the binary representation in the database to text representation on the server (only fine if it's going to be shown to the user in a web browser anyway, but not fine if the mobile app is going to translate it right back into binary so that it can perform calculations "behind the scenes" to what is going on with the user). Aside from the higher network overhead and battery power the mobile CPU will draw with XML and JSON, it will also make you buy more RAM and CPU power on the back-end server faster than just using an ODBC connection to the database.