I know, I know. There are a million threads everywhere talking about problems with mapped drives. I've read many of them, but I still can't seem to wrap my head around this problem or come to a solution.
I have a build server/continuous integration server (Win 2003 running CruiseControl.NET) that listens to our source control server. When a change is detected, the build server gets the new code, compiles it, tests it and if successful, copies the files to one of our web servers. There are 6 web servers - 3 Windows 2003 boxes, 3 LAMP boxes. Each OS has a separate development, staging and production box. All 6 web servers are mapped to a different drive on the build server. I have a Windows start-up script that calls a few "NET USE" commands that set the stage for the deployments.
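The startup script boils down to a handful of lines like these (the server names, shares, and credentials here are placeholders, not the real ones):

    REM Map each web server's deployment share to a fixed drive letter.
    REM /persistent:no because the startup script re-creates the mappings on every boot.
    NET USE W: \\dev-web-win\wwwroot /user:MYDOMAIN\deploy secret /persistent:no
    NET USE X: \\stage-web-win\wwwroot /user:MYDOMAIN\deploy secret /persistent:no
    NET USE Y: \\prod-web-win\wwwroot /user:MYDOMAIN\deploy secret /persistent:no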
CCNET is the service that listens to SVN. However, CCNET calls NAnt to perform all of the actual processing and tasks (compilation, testing, copying).
When I map the drives manually and run the NAnt scripts manually, everything works beautifully. When the startup script maps the drives and CCNET triggers NAnt, the drives are nowhere to be found. I think the problem has something to do with user accounts. CCNET runs under the LOCAL SYSTEM account. I don't know what account the startup script runs under. Obviously, manual execution runs under my account.
The weirdest part is that at certain points in the past, everything was working great. I am not sure what changed. How can I get the mapped drives to be visible to all users and services? (Also, any other critique of any part of this setup/process is welcome)
The problem definitely was with user accounts. The drives were mapped under different accounts from the one CCNET was running under. Once I finally straightened everything out and got it all running under the LOCAL SYSTEM account, everything worked fine.
I am looking at an architecture where we have the BluePrism runtime running inside a Citrix Desktop.
I see plenty of articles that talk about automating a Citrix desktop as part of an automation process, but as far as I can see they talk about firing up a Citrix app from within a process. In other words, they have a physical laptop that runs the BluePrism runtime, and part of the process requires it to run a Citrix desktop and automate that. I understand that this scenario is problematic and requires you to use Surface Automation.
In my case we have a set of physical laptops, and we would like to completely replace these laptops with VMs. So the runtime will be in the same desktop as the target apps.
Question is, does this work, or are we still faced with having to convert all our BluePrism processes to use Surface Automation to get this architecture to work?
This works with VMs, and Surface Automation isn't necessary in that case. All your objects will be doing is attaching to the target apps by their runtime process names on the VM desktop, or launching them from the parent (i.e. a folder in the root desktop/server) by providing the path in your application model and then having your BP objects attach to them. Surface Automation may be necessary if you are planning to interact with the actual Citrix Receiver (e.g. icons), but not with the apps themselves once they are active on the VM desktop. Of course, all of this assumes BP will also be on the VM desktop environment.
I'm working on a website using PhpStorm. For a long time I developed it locally, but then I got hosting and a remote FTP server.
I created a new project in PhpStorm with the settings for the remote host, and I found that deploying code takes a long time (over a minute) before I can see the result, which is quite uncomfortable when debugging.
Is there any way to work with the code on a local server and, when I think the project is ready for deployment, just send it to the remote server?
I understand that I can just work in two different projects and deploy the "ready" version to the server via FTP, but maybe there is a more comfortable way?
There are several answers to this question, and most of them are opinion-based, but I will try to keep it objective.
Case 1
A big corporation gives every developer a sandbox to test their code on, and requires every developer to keep their code on that sandbox.
Using mounted drives could be extremely slow, especially when PhpStorm is indexing.
Case 2
An easy way to keep an automatic backup of your code is to use the built-in (S)FTP(S) upload/deploy.
Solution
In both cases you could use the auto-deploy feature, which uploads every change to the server on save; that way the deploy doesn't take over a minute, and the change is usually already there before you know it.
I cannot recommend using this deployment for production, as it will not pass through your version control, SAT, security setups, etc. For production I would suggest something like Rocketeer instead.
EDIT:
As for two projects: you can instead define two different deployment servers in the one project, use the default one for your testing (with auto-upload or similar), and then select the other one from the deployment menu when you deploy for real.
We have a C# .NET 4.0 Windows application which we deploy to a terminal server (developed using VS 2010). This application makes use of several WCF services sitting on another server.
Our users access the front-end via remote desktop session. (They all have a .RDP file on their desktops.)
My question is regarding the deployment of this front-end. Currently, if we need to do an emergency deployment during business hours, we need to kick off all the users that are hooked into the app (as they are using the DLLs that we need to replace). This is not ideal, obviously. We work in quite a business-critical environment, so these deployments are unavoidable. I've investigated ClickOnce, but have read that you cannot use it with a terminal services application. (Which kind of makes sense, since it's essentially one app being "accessed" by several clients...)
I would like to be able to do a "silent" deployment whereby the user knows nothing about the fix until they restart their instance of the application. I'm not sure whether this is even possible.
I would appreciate any guidance or suggestions on this!
Yep, I do this all the time with an RD app -- you just need to move or rename the DLLs instead of deleting them. Windows allows moves and renames while DLLs are in use, but prevents you from deleting them. If you use Windows Installer to deploy your app, it will do the moves automatically (and delete the old versions when the system is next rebooted).
Once you replace the DLLs this way, existing sessions will continue to use the old, renamed versions, and new sessions will use the new versions. Of course, depending on how many DLLs you have, how long it takes your app to load them into memory, and how much activity you have on your server, you could run into a scenario where the app loads some of the old DLLs and some of the new ones when you're in the middle of updating them, but that would likely be rare.
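For example, a bare-bones deployment script for this (the paths and file names are just illustrative) looks something like:

    REM Windows lets you rename/move a DLL that is loaded, but not delete it,
    REM so move the in-use file aside before copying the new build in.
    move /y C:\Apps\FrontEnd\FrontEnd.Core.dll C:\Apps\FrontEnd\FrontEnd.Core.dll.old
    copy /y \\buildserver\drop\FrontEnd.Core.dll C:\Apps\FrontEnd\
    REM Existing sessions keep using the renamed .old file already in memory;
    REM new sessions pick up the new DLL. Delete the .old files after a reboot.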
I'm using TeamCity for my CI builds, and I'd like to set up a second build for running automated UI tests on Windows XP and Windows 7 virtual machines.
I imagine the build working as follows:
Compile, run unit tests, etc.
Prepare MSI using WiX
Copy MSI to target test machines
Remotely execute MSI's
Copy test harness code to remote machine
Run tests
Build finishes
The automated UI tests are written using NUnit and would need to be run directly on the test virtual machine (they can't run remotely). It's important that if the tests fail, it appears in the TeamCity build log and the build fails. I'd rather not install VS or the TeamCity build agents on either of the test virtual machines.
It seems that most of this should be possible using psexec.exe. Are there any alternative (preferably open source) tools that I should look at?
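For concreteness, the psexec version I have in mind looks roughly like this (machine names, credentials, and paths are invented, and it assumes nunit-console.exe has been copied onto the VM along with the tests):

    REM Copy the MSI over and install it silently on the test VM.
    xcopy /y MyApp.msi \\winxp-test\c$\temp\
    psexec \\winxp-test -u MYDOMAIN\test -p secret msiexec /i c:\temp\MyApp.msi /qn
    REM Copy the test harness across and run NUnit on the VM itself.
    REM -i makes the remote process interactive, which UI tests generally need.
    xcopy /e /y Tests \\winxp-test\c$\tests\
    psexec \\winxp-test -u MYDOMAIN\test -p secret -i c:\tests\nunit-console.exe c:\tests\UiTests.dll /xml:c:\tests\results.xml
    REM Pull the results back so the build can parse them and fail on errors.
    copy \\winxp-test\c$\tests\results.xml results\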
takes a deep breath
We were looking into something to help us out with our automated UI tests. We use Ranorex to test the UI and TeamCity/MSBuild to execute the tests.
We never found any tools to help us out (I'm constantly keeping an eye out for some, so I will monitor this thread), but here is what we did instead.
The CI server copies the setup files and test scripts to the Testing Host Server.
The CI server then launches a custom app on the Testing Host Server providing the name of the VM to launch.
The Test Host Server then launches the VM software, using Virtual PC.exe -singlepc -pc vhdname.vhd -launch (sketched below), and waits for it to shut down (after it has run its tests).
The VM grabs the setup files and scripts from the network location and executes them.
After the tests are run it then returns the results to a networked location and shuts itself down.
Control is returned to the custom app.
Control is returned to the CI server which determines from the results if it has passed or failed (and updates the UI so developers are made aware of the result).
Results are collected as artifacts in TeamCity and tagged in SVN.
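The host-side launch step, for reference, is little more than this (the results share is made up, and it assumes the Virtual PC process only exits once the VM has shut itself down):

    REM Boot the test VM and block until it powers itself off after the tests.
    "C:\Program Files\Microsoft Virtual PC\Virtual PC.exe" -singlepc -pc vhdname.vhd -launch
    REM The VM drops its results on a share before shutting down; no file means failure.
    if not exist \\fileserver\testresults\vhdname\results.xml exit /b 1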
I think that's everything. It's convoluted; however, it works. Hope some of that helps you.
Jeff Brown of the Gallio team has been talking about a tool called Archimedes that he's planning to write to support this kind of requirement. It sounds promising, but I don't think there has been much progress on it so far.
In the meantime, though, there is something in the Gallio project called VM Tool that may do what you want. It provides commands to stop, start, and snapshot virtual machines, and more importantly, to copy files back and forth and execute commands.
I presume you have also considered PowerShell Remoting?
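If WinRM is an option on the test VMs, the remote-execution step shrinks to a one-liner along these lines (names are invented; it requires Enable-PSRemoting to have been run on the targets):

    REM Run the test harness on the VM over WinRM instead of psexec.
    powershell -Command "Invoke-Command -ComputerName winxp-test -ScriptBlock { & 'c:\tests\nunit-console.exe' 'c:\tests\UiTests.dll' }"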
Does full trust mean the same as Run As Administrator? I have read things stating that "for this to work, the application must be a full-trust application." Is that the same as you must have administrator privileges to run the application? If not, what's the difference? How can you tell if an app is "full-trust"?
I am told that "Administrator or not, .Net apps won't do certain things if they aren't running from a 'trusted' location." What is a "trusted location"? If you run an app from a "trusted location", can you do things that "require full-trust" without being an administrator?
No. Full trust is a .NET term used to indicate that the code is not running in a reduced-privilege .NET sandbox. In .NET prior to 3.5 SP1, this included running from a network share (in the default configuration). It also includes running as a ClickOnce application that has not requested additional permissions, or in some other browser-based sandbox.
Full trust means the code can do anything the user it is running as can do, not that it is running as an administrator.
No. As of version 2.0, the .NET Framework has its own little location-based security setup. Administrator or not, .NET apps won't do certain things if they aren't running from a 'trusted' location.
Just about anything on your local hard drive is trusted, but (and supposedly they fixed this for 3.5 SP1) even the local intranet is not trusted, so most .NET desktop apps will fail to even start if they're sitting on a network drive or share.
You can change the configuration on a machine so it will allow apps from that zone, but it has to be done for every machine that's going to run the application, which breaks a common corporate deployment scenario.
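That per-machine change is made with caspol.exe; for example, to fully trust everything run from one particular share (the UNC path is just an example):

    REM Add a full-trust code group for a specific UNC path at machine level.
    REM Must be run elevated on every machine that will run the app.
    caspol -m -ag 1.2 -url "file://\\fileserver\apps\*" FullTrust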
From an ASP.NET standpoint, it also means that certain activities require more 'trust' than others. Sending e-mail, for example, can cause exceptions if not set up correctly.
Basically, full trust means that the C# code has total control over the current (.NET) process and all processes running under the application pool account.
It is the same as running a C++ DLL.
Admin access will depend on the IIS settings (i.e. whether you run the website under SYSTEM or an admin account).