I was looking for some insight into what happens to existing workspaces and files that people already have checked out after an upgrade to TFS 2010. Surprisingly, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.)
I've checked the TFS Installation Guide and searched the web, but all I could find were upgrade scenarios for the server side. Nobody even mentions what happens to source control clients.
I created a virtual machine to test the upgrade process. The upgrade was successful, and all my files and workspaces exist on the new server too. The problem is that the new TFS installation has a new instance ID. When I redirected the clients to the new server, they seemed unable to match the files and file states in the workspace with the ones on the new server. This makes me wonder whether it will be possible to keep working after the production upgrade.
As I mentioned above, I cannot find anything on this, so it would be great if anyone could point me to a paper or blog post about it.
Thanks in advance...
When you do an upgrade, your server ID should stay the same. You may need to change it if you want to clone your environment.
In your test scenario you are creating a clone of the TFS server rather than doing a straight upgrade.
ChangeServerID
You are probably running into problems because this has been run on your test environment to allow it to run on the same network as your production TFS server.
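For reference, changing the instance ID of a cloned server is normally done with the TFSConfig ChangeServerID command on the application tier. A rough sketch, where the SQL instance and configuration database names are placeholders for your own:

    REM run on the cloned TFS 2010 application tier; names below are placeholders
    TFSConfig ChangeServerID /SQLInstance:SQLSERVER01 /DatabaseName:Tfs_Configuration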
All workspaces and shelvesets remain unchanged, and people will be able to continue working immediately. Even checked-out files are OK and will be picked up correctly.
I would recommend upgrading the server first and keeping the clients on 2008 (using the Forward Compatibility Pack), then upgrading the clients to 2010 as and when the projects are upgraded.
I have a p2site hosted on my server to provide an Eclipse update site. The server runs IIS 7.5.
I have the same p2site content stored and served in both my production environment and my staging environment (two separate servers with identical characteristics).
For the last couple of days, when I connect to my staging environment's p2site from an Eclipse Indigo instance, I'm required to enter credentials, which has never happened before.
Moreover, if I manually download the zip archive and install my plugin from this local archive, I'm asked for credentials too.
I suspect, but am not sure, that the problem may be related to the following: in the last few days we enabled HTTPS for our web site and installed our certificate among the root certificates of Windows Server 2008 R2.
Does anyone know why Eclipse (Indigo; I haven't tested the other platforms yet) is behaving this way?
And how can I prepare my local zip archive / p2site to overcome this issue?
Thank you very much
cghersi
Just for the sake of completeness, I found the solution on my own: the problem was that for some reason (which I still cannot explain...) there was a DENY rule in the .NET Authorization section for the verbs OPTIONS and HEAD.
It seems that Eclipse sends exactly these kinds of requests when looking for a p2site, so the requests were rejected and Eclipse asked for credentials for them.
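For reference, the rule in web.config looked roughly like the snippet below (the verbs and users values here are illustrative, not the exact configuration); removing it, or no longer denying those verbs, made the credential prompts go away:

    <system.web>
      <authorization>
        <!-- illustrative only: a deny rule of this kind blocks the p2 requests -->
        <deny verbs="OPTIONS,HEAD" users="*" />
        <allow users="*" />
      </authorization>
    </system.web>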
I hope this can be useful to other people in the future.
cghersi
This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI because it is already in wide use in our environment. I have a plug-in and a feature built just from the POI .jars, which allows me to build our current POI applications as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by manually copying them.
The next step is taking that plug-in and getting it into an update site so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (which I wish had existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
The caveat is that the articles are written for Domino 9 while we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point. I can install my feature into my Designer client from the Eclipse update site and it works great. However, the install fails when I import it into our updatesite.nsf database. This means that while the developers can all install from the update site if I put it on a network drive, it doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse updater just hangs. I've let it go for well over an hour, and eventually Notes becomes completely unresponsive.
So the question is: is there anything I might have done wrong, either in the development of the plug-in or in the server configuration, that might be causing this issue?
Additional Info
I'm looking at the OSGi console, and it is largely unhelpful. I am getting the following error while trying to install:
    SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information
I know this post is over 5 years old, but for those who find it and are trying to resolve the error
    SEVERE Could not access digest on the site: no protocol:
the cause is that the URL of the Domino updatesite.nsf has not been added to the Archives tab of the update site project's site.xml.
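The Archives tab simply writes <archive> entries into site.xml. A rough, illustrative sketch (the plug-in path and server URL below are placeholders, not taken from the original project):

    <site>
       <!-- illustrative only: map the archive path to the Domino updatesite.nsf URL -->
       <archive path="plugins/com.example.mylib_1.0.0.jar"
                url="http://yourserver/updatesite.nsf/" />
       <feature url="features/com.example.mylib.feature_1.0.0.jar"
                id="com.example.mylib.feature" version="1.0.0">
          <category name="MyLib"/>
       </feature>
       <category-def name="MyLib" label="Organization extension library"/>
    </site>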
I also found that the updatesite.nsf needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE; YMMV from Eclipse). So if anonymous connections are blocked on the Domino server, you will be out of luck.
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course, a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it offers a handy "Build All" button that makes sure the plug-in, feature and update site get compiled in one go. And that button is what you really want.
Using a setting in your Domino Designer (or local Domino server) you can point to the feature directory: add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site; Designer then picks up the features and plug-ins from there. After a build you need to restart Designer/the server to activate the updated feature.
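A minimal sketch of such a .link file, assuming the update site was exported to C:/work/updatesite (a placeholder path; that directory should contain an eclipse folder with features and plugins subfolders):

    path=C:/work/updatesite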
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
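The notes.ini setting in question is, if I remember correctly, OSGI_HTTP_DYNAMIC_BUNDLES; assuming the update site database is named updatesite.nsf in the Domino data directory, it would look like:

    OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf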
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.
I have a PHP Azure project which I have to manage with PowerShell cmdlets. One of these, Publish-AzureServiceProject, doesn't seem to be detecting file changes, so they are not updated in the cloud (even though no errors are displayed).
I have remote desktop'd into the machines, and the code there definitely hasn't been updated in weeks.
If I deploy to the local emulator, it is fine, and there the update is much more obvious because it displays "removing old package" and "creating local package". The cloud package definitely contains the latest files, so the packaging is working fine.
Can anyone tell me how to force the publish to update the files in the cloud and, more importantly, why this is not happening? Also, if I force the update, will it deploy to a new box and get a new IP address?
Thanks.
It seems to work now.
I removed and reinstalled the Azure libraries on my machine, created a new project from scratch, and copied the original files over into it. I have not included diagnostics (not sure if that's an issue), and I have modified the Publish-AzureServiceProject script to select the subscription each time before it publishes.
It is possible that subscription confusion was not helping (I have two Azure subscriptions, and it might have used the wrong one at some point and done something weird), and it is also possible there was some conflict between various versions of the Azure SDK, since I have been using it for over six months. But at the moment, all is good.
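Roughly what the pre-publish step now does (the cmdlet names come from the Azure PowerShell module of that era, and the subscription name is a placeholder):

    # make sure the right subscription is active before every publish
    Select-AzureSubscription -SubscriptionName "MyCompany Production"
    Publish-AzureServiceProject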
A related article on my blog here: Problems with PHP Azure
Thanks for the interest
I am using TortoiseSVN 1.6, Subversion Edge as the SVN server, and FileZilla 3 (the Test Server runs CentOS).
Let's assume the scenario:
- A Test Server exists - developers have direct access to it; it is used for user testing
- There are 3 members in the team
- 2 of the members develop on their local machines using TortoiseSVN
- But 1 wants to develop directly on the Test Server
--> The issues with developing directly on the Test Server are:
1.) No TortoiseSVN is installed
2.) Even if SVN exists on the Test Server, command-line work is tedious since it runs CentOS (no GUI)
This could be resolved through team management, but the challenging part here is how to address the technical issue (as it may be a future need).
QUESTION
So, my question is: is there a way to integrate TortoiseSVN with FileZilla?
Or a way to have the files on the Test Server updated as well after committing changes to the working copy?
If you were in my situation, how would you address this issue aside from team management/agreement?
TortoiseSVN is an Explorer shell extension. It doesn't know how to use FileZilla's functions to access the files on the FTP server.
What you can do is use an SMB share over a VPN. TortoiseSVN is then able to see the files directly and display them correctly in Explorer, although this solution may be quite slow depending on your network connection.
However, what I usually do is develop locally, connect to the server via SSH, and then use the svn command-line utility to do updates.
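In practice that amounts to something like this after each commit (the host name and path are placeholders):

    # commit locally, then refresh the checkout that lives on the test server
    ssh dev@testserver "svn update /var/www/project"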
We recently had a project where we released a beta of a big web app on our client's server. Our client asked us to do bug fixes as they come up, and we tried to work the same way. Building the app on our prototype server is normally much easier, since I just have to issue a simple 'svn up' command, which takes a second.
But in the production environment we do not have any version control tool available. Is it possible to automate the patching work, so that we don't need to log in to FTP and upload each and every file one by one?
It's very difficult to work this way. Since I'm having this problem, I'm sure some of you have already solved it. Please share your solutions.
Looking forward to your replies... Thanks a lot for reading guys.
Depending on the tools available on the server, you could do an svn diff -r x:y, where x is the revision you last updated to and y is the revision you want to update to (probably the latest revision in your repository), to generate a patch, and then apply it with the patch command.
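For example (the revision numbers and repository URL are placeholders):

    # on a machine with svn: diff from the revision on production (x) to the target (y)
    svn diff -r 1200:1250 http://svn.example.com/repo/trunk > update.patch
    # on the production server, from the project root:
    patch -p0 < update.patch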
If rsync is available on the production platform and you can use it (through SSH, for instance), you could set up a production-ready tree and rsync it to the production server; when an update comes in, svn update your production tree and rsync it again.
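A sketch of that workflow, assuming SSH access and placeholder paths:

    # bring the production-ready tree up to date, then push only the changed files
    svn update /srv/prod-tree
    rsync -avz --delete --exclude=.svn /srv/prod-tree/ deploy@production:/var/www/app/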
What is stopping you from installing a Subversion client on the production server?
[EDIT] So someone doesn't allow you to install the software you need on the server. The question is: what is more important, a stable production server or an arbitrary policy? If that someone doesn't listen to arguments, go to your computer, start MS Word and write this letter:
"I hereby refuse to accept any responsibility for the stability of our production system based on the fact that [insert name here] refuses to equip me with the tools to make sure that the production system contains all the necessary files and data after an installation."
Sign this, have your boss sign it and then send a copy to [insert name here]. All of a sudden, any problem that might arise after an installation will be on his turf. Or to put it more clearly: He will be responsible for any mistake you might make.
Now, all you have to do is wait. :)
It depends on the programming environment you use. In Smalltalk, with a web application server like Aida/Web, we can upgrade live web applications on the fly, without stopping them.
The server is connected to the SCM of choice, such as Monticello for Squeak Smalltalk or Store for VisualWorks. New versions are then loaded, manually or automatically, into the server's Smalltalk image.