Do Qiskit simulators run locally or on IBM cloud servers? It seems that each time I have used them, my computer's CPU usage maxes out, and sometimes the simulation runs out of memory, exiting with an out-of-memory error message.
Both, depending on the backend you choose. If you install and run Aer, the simulation is local, which explains your symptoms: an n-qubit statevector needs 2^n complex amplitudes at 16 bytes each, so around 30 qubits already requires about 16 GiB of RAM. But you can also run a simulator via IBMQ, as if it were a real device. There are several simulator backends on the cloud, like ibmq_qasm_simulator. They are listed at the bottom of the webpage https://quantum-computing.ibm.com/services?systems=yours.
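To see the difference concretely, here is a minimal sketch; the qiskit_aer import and the commented qiskit_ibm_provider part reflect one packaging of these APIs, and exact module and backend names vary across Qiskit versions:

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# A 2-qubit Bell-state circuit.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Local: Aer runs on *your* machine, so it consumes your CPU and RAM.
backend = AerSimulator()
counts = backend.run(transpile(qc, backend), shots=1024).result().get_counts()
print(counts)

# Cloud: the same circuit submitted to an IBM-hosted simulator runs remotely
# (requires an IBM Quantum account; backend names change over time):
#   from qiskit_ibm_provider import IBMProvider
#   backend = IBMProvider().get_backend("ibmq_qasm_simulator")
#   job = backend.run(transpile(qc, backend), shots=1024)
```

If the local run maxes out your CPU or memory, that is Aer doing the work on your box, not IBM's servers.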
Is XAMPP just meant for testing and setting up virtual servers? (Because that's what the wiki says.)
Can it be installed on an actual physical server? Do developers actually do that?
I'm a little confused because, if that were true, why would anyone install a virtual server on a physical server? It seems like trying to run Excel inside VirtualBox.
XAMPP simulates a typical web-development stack on a local machine. If you have access to an actual physical server, you would typically install components like the web server (such as Apache) and MySQL on the server itself. The developers of XAMPP consider it more of a development tool, because certain security features are disabled to make development easier.
Virtualisation is used on servers because the physical machines are very powerful and would otherwise idle a large amount of the time. Putting those resources to use by creating two or more virtual servers on top of the host reduces cost and increases operational throughput.
Virtual servers and Docker can be used to test against different environments at the same time, or to test beta software for future releases. On machines that have 6 or 8 cores running at several GHz, there are plenty of resources to host more than one machine, whether virtual or as a Docker container, so that you can, for example, run different databases without them interfering (see the sketch below).
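As an illustration of the "different databases without them interfering" point, here is a minimal sketch using the docker-py SDK (pip install docker), assuming a local Docker daemon; the container names and port choices are hypothetical:

```python
import docker

client = docker.from_env()

# Two different PostgreSQL versions side by side, each mapped to its own
# host port, so neither interferes with the other.
pg15 = client.containers.run(
    "postgres:15",
    name="pg15",                               # hypothetical name
    environment={"POSTGRES_PASSWORD": "dev"},
    ports={"5432/tcp": 5432},                  # host 5432 -> container 5432
    detach=True,
)
pg16 = client.containers.run(
    "postgres:16",
    name="pg16",                               # hypothetical name
    environment={"POSTGRES_PASSWORD": "dev"},
    ports={"5432/tcp": 5433},                  # different host port, no clash
    detach=True,
)
print(pg15.name, pg16.name, "are running")
```

Each container is isolated, and removing it leaves the host untouched, which is exactly the property you lose when you install a stack like XAMPP directly on the machine.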
Besides, physical hardware costs money to buy and to maintain.
Finally, virtual machines and Docker containers are just files that you can simply copy to make a backup; backing up a real machine is a bit more work.
But don't use XAMPP on a real machine that is exposed to the world. There are far too many security risks in the standard configuration.
I am having a small problem which is actually quite critical. I run a Unity instance in a Google Cloud VM that works as a server for a small social experience in VR.
The thing is, if Unity runs without a GPU, it starts clogging the processor and the game kind of fails with many users. That is why I rented a Tesla P4 GPU. Also, to run Unity, I must log in with Remote Desktop and hit Play.
The problem: Windows disables the GPU when you connect via Remote Desktop, so Unity opens without a GPU and the GPU shows as "Unknown" in dxdiag. That's why I need to solve the RDP issue: I need to log in without disabling GPU acceleration so Unity can open and run my game at full power. The server acts like a player that doesn't show up for the other clients, since it's made in Photon PUN; it's a weird hybrid, but it works as expected.
Now I have to solve this performance issue; I hope I am being clear.
What I need to do now: log in with hardware acceleration (I'm paying a lot for that online GPU and not using it).
What I have now: I log in without using the GPU, the CPU dies, and I'm wasting cash on the server.
Thanks, community!
PS: In the future I will use a headless server.
It seems this is a 'Windows' feature that many people have been complaining about. Basically, Windows switches to a generic display driver when you connect via Remote Desktop. Try using an alternative remote desktop solution such as VNC.
Sources:
https://boinc.berkeley.edu/dev/forum_thread.php?id=7026
https://setiathome.berkeley.edu/forum_thread.php?id=70853
I'm still very new to Service Fabric, but I'm surprised that something this advanced is so slow to debug. I'm using a fairly fast machine, but it takes 4-5 minutes to tear down and restart the cluster. I've googled it and can't see that anyone else has reported this as a showstopper.
Some clues to help with your slow development turnaround time:
When developing locally, consider using a one-node cluster to speed up deployments and upgrades (fewer upgrade/fault domains): https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started-with-a-local-cluster#one-node-and-five-node-cluster-mode
You only need to set up/create your cluster once; then start it and keep it running between debugging sessions. Visual Studio will take care of uninstalling/upgrading the SF apps when starting the debugger.
You can modify the properties of the SF application project to decide whether your SF app is uninstalled and reinstalled, or upgraded, when starting the debugger, which impacts the deployment time.
Consider running from an SSD drive, which will speed up compilation and deployment (both are file intensive).
Expect less than one minute to compile, deploy and attach the debugger for an SF app with 2-3 services.
Yes, we have the same issue. We have around 10 services in our application and debugging is very slow; VS fails to refresh the one-node cluster all the time, so a cluster reset is the only solution. Every debug run takes about 5 minutes.
Yes, a very disappointing development process. The only advantage is some reuse of C# code. If you have not decided what to use for your cloud solution, abandon C# as early as possible and go for a JavaScript-based language that has no intermediate binaries.
I want to try and learn MPI as well as parallel programming.
Can a sandbox be created on my desktop PC?
How can this be done?
Linux and Windows solutions are welcome.
If you want to learn MPI, you can definitely do it on a single PC (most modern MPI implementations use shared-memory communication for local processes, so you don't need any additional configuration). So install a popular MPI implementation (MPICH or Open MPI) on a Linux box and get going! If your programs are going to be CPU bound, I'd suggest only running job sizes that equal the number of processor cores on your machine.
Edit: Since you tagged this as a virtualization question, I wanted to add that you could also run MPI across multiple VMs (on VMware Player or VirtualBox, for example) and run your tests there. This requires inter-VM networking to be configured (which differs based on your virtualization software).
Whatever you choose (single PC vs. VMs), it won't change the way you write your MPI programs. Since this is for learning MPI, I'd suggest going with the first approach: run multiple MPI processes on a single PC, as in the sketch below.
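A minimal sketch using Python's mpi4py (pip install mpi4py); the same pattern applies in C with MPI_Init/MPI_Comm_rank/MPI_Gather:

```python
# hello_mpi.py
# Launch several copies on one PC with, e.g.:
#   mpiexec -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id, 0..size-1
size = comm.Get_size()   # total number of processes launched by mpiexec

print(f"Hello from rank {rank} of {size}")

# A simple collective: every rank sends its rank number to rank 0.
ranks = comm.gather(rank, root=0)
if rank == 0:
    print("Rank 0 gathered:", ranks)
```

With -n 4 you get four independent processes communicating over MPI on one machine, which is exactly the "virtual cluster on a single node" idea, no VMs required.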
You don't need to have VMs running to launch multiple copies of your application that communicate using MPI.
MPI can give you a virtual cluster on a single node by launching multiple copies of your application.
One benefit, though, of running it in a VM is that (as you already mentioned) it provides sandboxing. Thus, any issues your application creates will remain limited to the VM running that copy of the app.
We are having problems with GWT hosted mode running in Eclipse Ganymede (Windows XP, 3 GB RAM). When we start our application in hosted mode it takes very long to start, and once the application has started, transactions take minutes to react. It seems as if communication between the JavaScript and the server takes very long.
The processor shows almost no load during this time. Even compiling and starting from an external browser does not help.
The strange thing is that we have two other computers (one Windows XP, one Linux) with exactly the same setup, where hosted mode works at normal speed without any problems for the same application.
Do yourself a favour, move to GWT 2.0 (currently in RC2) and take advantage of Out Of Process Hosted Mode (OOPHM), which lets you debug straight in the browser, and is lightning fast!
http://code.google.com/p/google-web-toolkit/wiki/UsingOOPHM
Try removing all breakpoints. It helped me in such a scenario. Apparently if you place breakpoints in critical points in the program, it can cause everything to grind to nearly a halt in hosted mode.
I second the suggestion to switch to GWT 2. Please note, however, that with GWT 2, hosted mode is very slow in Chrome. I recently switched from 1.7 to 2.0 and found hosted mode to be very slow ... until I switched to Firefox. The reason is that Chrome's process model is not beneficial to OOPHM, at least for now.
A few ideas:
Does the slow Windows box have a heavily fragmented hard-drive?
Is it a specific database query that's taking a long time once the application is running, or are all interactions slow?
Are the project files on a local filesystem?
Is the database on a local filesystem?
If so, does it have the same size data set as the other machines?
If not, are they on different subnets or have different bandwidth available?