Difference between desktop and server virtualization?

I have read many articles, but the difference between them is still confusing to me.
Server virtualization virtualizes the physical server, while desktop virtualization virtualizes the desktop.
Can somebody elaborate with an example?

In the end, this is about:
- a powerful piece of hardware able to run many instances of "something"
- virtualization technology that supports "slicing" that single hardware instance into those "somethings"

And from there: such a "something" can either be an instance of a service, or it can make up what you, as a user, perceive as a "desktop".
In that sense, your distinction doesn't really exist: desktop virtualization is nothing but a server offering "client desktops" to its users, instead of, say, an HTTP or database service.

Related

Eclipse Milo - Performance/scalability when deploying an OPC UA server in the cloud

I have created an OPC UA server with Eclipse Milo that is installed on the same machine as the clients, so the communication works fast and reliably.
I did a bit of sniffing with Wireshark to see how much communication happens under the hood, and apparently there is a lot going on when monitoring variables, alarms, etc.
So I am wondering what issues to expect in terms of performance and scalability if the server gets deployed in the cloud. I have seen people talk about OPC UA cloud services, but since this is not a hot topic, it is hard to foresee what challenges may come and how well it scales and performs.
I would imagine that OPC UA uses sticky sessions, which means you can only support a maximum number of users/requests, so dynamic scaling may not be an option, right?
I tried the samples provided by Eclipse Milo, which are hosted somewhere on the network, and it took a long time to connect. If that is the performance one may expect, then non-technical users will perceive the service as not working well.
Is the cloud the right place for OPC UA, considering the network overhead? Would you recommend sticking to local networks (intranets) only and skipping the cloud?
Any feedback would be appreciated, thanks!
If you wanted to get into more detail and share Wireshark captures we might be able to go over parameters that would reduce traffic.
If bandwidth is a concern because you're using cellular or other constrained connections then sure, OPC UA may not be the best fit.
I'm curious what kind of delays or latency you experienced running the examples. Connecting over the internet generally does not take very long, so perhaps you were also measuring the time it took to compile and start the example, or there was something going on with your network.
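For illustration, here is a minimal sketch (not from the original thread) of a Milo client subscription tuned for lower traffic, written against the 0.6.x-era client SDK; the endpoint URL and node are placeholders. Longer publishing and sampling intervals with a small queue are the usual knobs: intermediate samples get coalesced on the server instead of crossing the network.

```java
import java.util.List;

import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.sdk.client.api.subscriptions.UaSubscription;
import org.eclipse.milo.opcua.stack.core.AttributeId;
import org.eclipse.milo.opcua.stack.core.Identifiers;
import org.eclipse.milo.opcua.stack.core.types.builtin.QualifiedName;
import org.eclipse.milo.opcua.stack.core.types.enumerated.MonitoringMode;
import org.eclipse.milo.opcua.stack.core.types.enumerated.TimestampsToReturn;
import org.eclipse.milo.opcua.stack.core.types.structured.MonitoredItemCreateRequest;
import org.eclipse.milo.opcua.stack.core.types.structured.MonitoringParameters;
import org.eclipse.milo.opcua.stack.core.types.structured.ReadValueId;

import static org.eclipse.milo.opcua.stack.core.types.builtin.unsigned.Unsigned.uint;

public class SlowSubscription {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint for a cloud-hosted server.
        OpcUaClient client = OpcUaClient.create("opc.tcp://example.cloud:4840/milo");
        client.connect().get();

        // One subscription publishing every 5 s instead of every second:
        // fewer Publish responses on the wire.
        UaSubscription subscription = client.getSubscriptionManager()
            .createSubscription(5000.0).get();

        ReadValueId readValueId = new ReadValueId(
            Identifiers.Server_ServerStatus_CurrentTime,
            AttributeId.Value.uid(), null, QualifiedName.NULL_VALUE);

        // Sample every 5 s with queue size 1: intermediate changes are
        // discarded server-side rather than transmitted.
        MonitoringParameters parameters = new MonitoringParameters(
            uint(1),   // clientHandle
            5000.0,    // samplingInterval (ms)
            null,      // default filter
            uint(1),   // queueSize
            true);     // discardOldest

        MonitoredItemCreateRequest request = new MonitoredItemCreateRequest(
            readValueId, MonitoringMode.Reporting, parameters);

        subscription.createMonitoredItems(TimestampsToReturn.Both, List.of(request))
            .get()
            .forEach(item -> item.setValueConsumer(
                (it, value) -> System.out.println("value: " + value.getValue())));

        Thread.sleep(30_000);
        client.disconnect().get();
    }
}
```

Batching many monitored items into one subscription, rather than one subscription per item, also lets them share Publish responses and further reduces per-notification overhead.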

How to imitate servers (without loss of computing power)?

I have a production environment, which is running on one server. But I need to run two instances of one piece of software, each on "another" server.
Is it possible to imitate more servers on one real server, for free, without losing computing power or network throughput in/out of the real server?
EDIT:
In other words: I want to run two instances of the same software on one machine.
I then need to use a function that transports a subinstance from instance1 into instance2. But this function can only be used when instance1 is on a different server than instance2. So I need to make it look as if one of the two locally running instances is on a different server.
I'm making the assumption that you are using Windows, in which case you could use a hypervisor like Hyper-V; however, if you have only purchased one license of Windows, you may be fairly limited in what you can run in a production capacity.
If you mean that the software you need to run has only one license, you are typically not allowed to virtualize it either, so it seems that, legally, you are not going to be able to do much with just one license. However, my assumptions may be all wrong; your question wasn't clear enough.
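As an aside: if the software only distinguishes "servers" by IP address, you may not need a hypervisor at all. A hedged sketch (plain Java; the port and second loopback alias are assumptions, and macOS needs the alias configured first) of two instances listening on the same port via different loopback addresses:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Hypothetical illustration: two instances of the same service bind the
// same port on different loopback aliases (127.0.0.1 and 127.0.0.2), so
// each sees the other at a distinct "server" address.
public class TwoLoopbackServers {
    public static void main(String[] args) throws Exception {
        ServerSocket instance1 = new ServerSocket();
        instance1.bind(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9000));

        ServerSocket instance2 = new ServerSocket();
        instance2.bind(new InetSocketAddress(InetAddress.getByName("127.0.0.2"), 9000));

        System.out.println("instance1 listening on " + instance1.getLocalSocketAddress());
        System.out.println("instance2 listening on " + instance2.getLocalSocketAddress());
    }
}
```

Whether this satisfies the software's "different server" check depends on what it actually compares (IP, hostname, or a machine ID); if it checks anything machine-wide, a VM per instance is the safer route.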

Looking for a Wi-Fi microcontroller to use with a robot

I want to make a Wi-Fi controlled robot.
After a lot of research, I decided to use Asynclab's BlackWidow, which seemed the best option for me.
But unfortunately, this product is out of stock everywhere!
I ordered one from roboshop, and 25 days later I got the message: Sorry, this product is sold out.
So I'm looking for another microcontroller with a Wi-Fi interface.
I also need it very quickly (it is for a school project), and it must be as cheap as possible.
I've been looking all day, but I couldn't find anything as "good" as the BlackWidow.
You can get the WiFly shield from SparkFun.
In the past I have used, with positive results, a Linux router running Gargoyle (OpenWrt-based) as a wireless gateway, communicating with it through a serial port: most of these routers attach a console to the serial port, so you just send the command followed by '\n' to have it executed. With the cURL libraries it should be fairly easy to communicate with whatever you want, without much effort.
You have the power of Linux and a pretty powerful CPU, you can configure it through the command line or a web page, and, most importantly, many routers are much cheaper than the BlackWidow.
The one I used is the Fonera+ (unmounted, it doesn't take much more space than an Ethernet shield), and it used to cost around $28. It is now deprecated, but other routers from Linksys, TP-Link, etc. are also compatible, as stated in the OpenWrt Compatibility Table.
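As a sketch of the router approach above: assuming the router exposes a small CGI script that relays the request body to its serial console (the address, path, and command syntax below are invented for illustration), driving the robot from any HTTP-capable client takes only a few lines of Java:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical: the router runs a CGI script at /cgi-bin/robot that
// writes the request body to its serial port, where the robot's motor
// controller reads commands. URL, path, and command are assumptions.
public class RobotCommand {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://192.168.1.1/cgi-bin/robot");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);

        try (OutputStream out = conn.getOutputStream()) {
            // Command plus '\n', exactly what the serial console expects.
            out.write("FORWARD 50\n".getBytes(StandardCharsets.US_ASCII));
        }

        System.out.println("Router responded: " + conn.getResponseCode());
    }
}
```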

Benefits of JVM atop an OS VM?

I see many deployments where IT groups run effectively nothing but a JVM application stack inside a VM (VMware, etc.) instance.
I guess I consider the JVM to be a VM in the formal sense: what real benefit is there in running your Java application stack inside another VM?
Two JVM instances within the same (real or virtualized) machine wouldn't be completely isolated from each other: they couldn't both have sockets listening on the same well-known port, they might interfere with each other if they both wrote to the same filesystem, and so on, and so forth.
Using OS-level VMs (vmware or whatever) does guarantee you as much isolation as you would have on physically separate systems, which is quite a different proposition.
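The port point is easy to demonstrate. A minimal sketch (mine, not from the thread) showing why two listeners in one OS instance collide where two OS-level VMs, each with its own network stack, would not:

```java
import java.net.BindException;
import java.net.ServerSocket;

// Two sockets in the same OS instance cannot share a port; two OS-level
// VMs each get their own network stack, so both could bind port 8080.
public class PortConflict {
    public static void main(String[] args) throws Exception {
        ServerSocket first = new ServerSocket(8080);   // succeeds
        System.out.println("First listener bound to port " + first.getLocalPort());

        try {
            ServerSocket second = new ServerSocket(8080);  // same port, same host
        } catch (BindException e) {
            System.out.println("Second listener failed: " + e.getMessage());
        }

        first.close();
    }
}
```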
It's an unfortunate terminology collision.
Those are really two different terms that unfortunately use the same English words but have only a rather abstract connection.
IBM used the term "virtual machine" first, so I guess we can't rename that one to "virtual server" or something.
Too bad "software framework" doesn't have VM in its initials. If you think of the JVM that way, it becomes obvious that you are really just running a framework in a VM, not a thing inside the same kind of thing...
So a real VM can casually give away superuser shell accounts, SSH access, software installation privileges, and so on.
what real benefit is it to run your Java application stack inside another VM?
By doing this, your JVM will run on virtualized hardware that you can modify and run in parallel with other virtual machines. This is a nice way to slice a big server into "shares" that you can allocate on demand.
(EDIT: I'm answering a comment from the OP directly in this answer)
I get what you're saying, but why would one not be able to do the very same thing as separate processes on the host OS?
I could mention that a guest can run another OS, but this is not the most important part. As pointed out in another answer, the biggest difference is that a virtual machine is isolated from other VMs; it is a real, dedicated environment. The port issue was a good example, but I prefer to illustrate it this way: another process won't eat "your" CPU cycles. This is a very important difference, especially for IT teams that usually don't like to share resources. Instead, you can size a virtual machine exactly as needed, possibly dynamically, and bill IT teams for what they really use. This is, in my opinion, what makes pooling resources (and thus cutting costs) actually possible.

iPhone: Connecting to database over Internet?

I've been talking with someone about the possibility of an iPhone development contract gig. All I really know at this point is that there is a company that wants an iPhone app that will hit their internal database. I'm not sure what the database type is (Oracle, MySQL, etc.).
What I want to know is: if the database is Oracle or MySQL, is there a big learning curve for connecting to it across the internet?
If it's a real pain, I may do more research before accepting the contract.
I would advise against directly accessing the database from the iPhone application.
Usually, you would create a web service that accesses the database, and then consume that web service from the iPhone application.
Create a web service. This lets you make the iPhone app more of a thin client: the app pushes commands to the web service, which handles the processing and the interaction with the database and returns only the data the app needs.
This option is better for the app, the database, and the customer's security.
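As a sketch of that thin-client split (my example, not the answerer's): a minimal Java service using the JDK's built-in HttpServer and JDBC. The connection URL, credentials, table, and column names are placeholders, and a real deployment would add authentication, TLS, proper JSON escaping, and parameterized queries. Requires MySQL Connector/J on the classpath.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical thin web-service layer: the iPhone app calls GET /customers
// and never touches the database directly.
public class CustomerService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        server.createContext("/customers", exchange -> {
            StringBuilder json = new StringBuilder("[");
            try (Connection db = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/crm", "appuser", "secret");
                 Statement stmt = db.createStatement();
                 ResultSet rows = stmt.executeQuery("SELECT id, name FROM customers")) {
                while (rows.next()) {
                    if (json.length() > 1) json.append(',');
                    json.append("{\"id\":").append(rows.getInt("id"))
                        .append(",\"name\":\"").append(rows.getString("name")).append("\"}");
                }
            } catch (Exception e) {
                exchange.sendResponseHeaders(500, -1);  // database error: no body
                return;
            }
            json.append(']');
            byte[] body = json.toString().getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
    }
}
```

On the iPhone side, the app then just issues an HTTP GET and parses the JSON; no database drivers are embedded in the app and no database port is exposed.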
You can easily perform the connection over the internet the same way you would locally, but you are opening the database up to attacks if it accepts communication from any remote IP address. Typically you would just open a socket to the server's remote IP address on the open port; MySQL's default port is 3306.
I would recommend against this sort of system in general, unless there is some critical reason they want their internal database exposed to the world's hacker community.
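For contrast, this is the direct connection being warned about, written in Java with JDBC purely for illustration (an iPhone app would use a C client library instead; the host, schema, and credentials are placeholders). Every copy of the app would need these credentials, and port 3306 would be open to the world.

```java
import java.sql.Connection;
import java.sql.DriverManager;

// The direct-to-database setup the answer above advises against: the
// client connects straight to the database's public IP on MySQL's
// default port 3306. Requires MySQL Connector/J on the classpath.
public class DirectConnection {
    public static void main(String[] args) throws Exception {
        Connection db = DriverManager.getConnection(
            "jdbc:mysql://203.0.113.10:3306/crm", "appuser", "secret");
        System.out.println("Connected: " + !db.isClosed());
        db.close();
    }
}
```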
What I am doing is creating a web service using Sinatra to access the online database.
Those answers from 2009 are mostly obsolete now.
http://ODBCrouter.com/ipad (new) has Xcode client-side ODBC libraries, header files, and multi-threaded Objective-C objects that let your apps send SQL to server-side ODBC drivers and get back binary results. This reduces the need to set up and separately maintain SOAP/REST servers, which can get pretty frightening to maintain after a while.
XML schemes were okay for transferring static configurations to mobile devices "every once in a while", but XML was meant for infrequent inter-company transfers in a "server environment" (with power cords, wired networks, and air conditioning) and is definitely not efficient for frequent database queries coming in from n copies of a mobile app.
There are third-party JSON libraries that help, but even with JSON, everything has to be encoded (and decoded) between the binary representation in the database and a text representation on the wire. That is fine if the data is going to be shown to the user in a web browser anyway, but not if the mobile app is going to translate it right back into binary to perform calculations "behind the scenes" on what the user is doing.
Aside from the higher network overhead and the extra battery power the mobile CPU draws parsing XML or JSON, it will also make you buy more RAM and CPU power on the back-end server sooner than just using an ODBC connection to the database.