I run a Perforce server on Ubuntu. I just typed apt-get upgrade and noticed that the Perforce server got updated as well. The thing is that I can't access the repository any more. Is it gone for good? I hope it's only a matter of reconfiguration, but I am a total newbie to Perforce administration.
If you can't access your server after an upgrade, it's probably one of these three things:
Did you accidentally change the P4ROOT setting? (This determines where your database lives and is unique to each server instance.) If so, the server will start and you'll be able to connect to it, but it won't have the same contents it did before. Setting P4ROOT to an empty directory and starting p4d will give you a fresh new server instance. Setting P4ROOT to your existing server database directory and starting p4d will give you access to your existing server instance.
Does the new version require a manual database upgrade? If so, you'll be unable to start p4d or connect to it via a client, and you'll see an error message in your log (P4LOG) telling you to run p4d -xu. Do that to upgrade the database.
Does your license not support the new version (i.e. because it expired before this version was released)? If so, you'll be unable to start p4d, and you'll see an error message in your log (P4LOG) telling you the license has expired. Contact Perforce to renew your license, or downgrade p4d to stay on the version that you're licensed for.
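For reference, a minimal command sketch of the checks above (the root directory /var/lib/perforce and port 1666 are placeholders; substitute the values your installation actually uses):

    # Ask the running server where its root is (look for "Server root: ...")
    p4 -p localhost:1666 info
    # Start p4d against your existing database directory
    p4d -r /var/lib/perforce -p 1666 -d
    # If the log says the database needs upgrading, run the one-time upgrade
    p4d -r /var/lib/perforce -xu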
P4ROOT was set wrong; fixing it helped.
I am facing the puzzling fact that update site information fails to be refreshed despite my forcing a reload in Preferences > Install/Update > Available Software Sites.
I have a local update site (file:/ protocol, on Windows) and an online update site (https://) that I use as staging/test update sites for an open source project that I am maintaining.
I build the update site using an update site project that is stored locally and wiped clean each time I build it. When I have tested the new release in a different Eclipse instance and I have validated my changes, I then upload the entire update site to my server. Then, just to simulate what a user would do, I update the plugin in another Eclipse instance that runs on a different physical machine.
I have (yesterday) built another version, 2.2.0.201702052007 and uploaded it to my server. The previous version was 2.2.0.201702042059.
The problem I have is that the Eclipse instances (Mars.2 and Neon) on my development machine keep reporting the previous version, despite my reloading the update site information. The other machine, however, sees the new version without a problem.
This is what I've tried:
Reloading the information of the update site: each time, I get a confirmation message saying "information for [...] has been reloaded from the server" but it turns out that it hasn't been reloaded: I see the older feature version.
Accessing the update site from a different Eclipse instance on a different machine: I see the new version.
Loading the update site's site.xml file from a browser: I see the new version.
Using FileZilla to download the entire update site to a local folder and unzipping content.jar and artifacts.jar so that I can read the XML files embedded in those JAR files: I see no trace of the older version.
Removing the update site, restarting Eclipse and adding the update site again: the problem was still there.
As a last resort, I removed all files of the update site from the server: Eclipse still reported successfully reloading the information from the server.
I shut down the httpd service on the VPS. Eclipse still reported a successful reload until I restarted Eclipse, at which point the reload failed. But once the web server was back online, Eclipse never actually sent it a request and kept saying there was no update site! As a consequence, the online update site now appears empty, and restarting Eclipse does not change that.
[EDIT] Even more incomprehensibly, the Reload button reports success even when there is no network connection to the update site (network interface disabled). [/EDIT]
There seems to be a cache in the provisioning framework, somewhere between the UI and my server, that returns outdated information and feature versions in spite of explicit requests to reload that very information.
Is there any file or folder that I can delete to make the provisioning framework reset itself? If possible, I would like to disable its cache altogether.
I've found out that Oomph apparently intervenes in the update site information retrieval process.
Anyway, I could recover normal operation (for now) and have the information properly reloaded by first deleting the appropriate files in C:\Users\...\.eclipse\org.eclipse.oomph.p2\cache.
By “the appropriate files”, I am referring to the fact that files in that folder are named after the URLs of repositories known to your Eclipse instances.
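For what it's worth, a rough sketch of that cleanup from a Windows command prompt, assuming the default Oomph cache location under the user profile (close Eclipse first; the cache should be rebuilt on the next reload):

    :: List the cached repository indexes; the file names are derived from repository URLs
    dir "%USERPROFILE%\.eclipse\org.eclipse.oomph.p2\cache"
    :: Delete the entries matching the problem update site's URL (example.org is a placeholder)
    del "%USERPROFILE%\.eclipse\org.eclipse.oomph.p2\cache\*example.org*"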
I'm in the process of migrating my VPS to a dedicated server. I initially set up the new box with the correct dedicated-server version of cPanel. I transferred all the packages and accounts using the cPanel transfer tool. I then copied wwwacct.conf and cpanel.config so that my settings would be identical. The problem: I now seem to have overwritten the cPanel version back to the VPS version on the new dedicated server. How do I correct this while retaining my settings?
When I migrate from another server, I configure Tweak Settings manually; that is the best way.
A tip: don't forget to download the EasyApache profile and recompile on the new server with the same configuration.
I was looking for some insight into what happens to existing workspaces and files that people already have checked out after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.)
I've checked the TFS installation guide and searched the web; all I could find were upgrade scenarios for the server side. Nobody even mentions what happens to source control clients.
I've created a virtual machine to test the upgrade process. The upgrade was successful and all my files and workspaces exist on the new server too. The problem is that the new TFS installation has a new instance ID. When I redirected the clients to the new server, they seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder whether it will be possible to keep working after the production upgrade.
As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to a paper or blog post about it.
Thanks in advance...
When you do an upgrade, your server ID should stay the same. You may need to change it if you want to clone your environment.
In your test scenario you are creating a clone of the TFS server rather than a straight upgrade.
ChangeServerID
You are probably running into problems because this has been run on your test environment to allow it to run on the same network as your production TFS server.
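For reference, a rough sketch of that command, run from an elevated command prompt on the TFS application tier (the SQL instance and configuration database names are placeholders for your environment):

    :: Assign a new instance ID to the cloned TFS deployment
    TFSConfig ChangeServerID /SQLInstance:SQLSERVER01 /DatabaseName:Tfs_Configuration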
All workspaces and shelvesets remain unchanged, and people will be able to continue working immediately. Even checked-out files are OK and will be picked up correctly.
I would recommend upgrading the server first, and keep the clients as 2008 (using the Forward Compatibility Pack), and then upgrading the clients to 2010 as and when the projects are upgraded.
I have a C# application (WinForms, deployed with ClickOnce) whose deployment repository is hosted on a server that is about to crash, so my boss asked me to move the repository, but there are around 300 client machines which have the application installed.
The ClickOnce is signed with a Test Certificate.
Is it possible to move the repository without having to reinstall in the client machines?
Thanks in Advance
[EDIT]
I have published the application to the new server, but the clients don't reach it. What else can I do? I think I should change something inside the manifest, but I actually don't know much about ClickOnce... In any case, I would like to avoid reinstallation on all the client machines. Any ideas or suggestions? Thanks in advance.
The answer provided by Jhonny seemed promising to me, but I encountered an error when I tried it, which I had to solve. It had to do with certificates.
After following his steps, when I launch the ClickOnce app on the client machine, I get an error dialog: "Cannot Start Application".
When I click on the Details... button in the error dialog, the text file that opens shows that the app is trying to update from the Deployment Provider URL of the new server, but it gives this error:
"The deployment identity does not match the subscription."
The problem was that the certificate used to publish the app on the old server had expired, and I had used an updated certificate for the app published on the new server. The certificates didn't match.
The solution was to first publish the app to the old server with the new certificate and have the users open the app to pick up that update, then publish another new version with the deployment URL pointing to the new server and copy the files to both servers. The next time the users updated, they got the version from the old server whose manifest pointed to the new server, and all subsequent updates were retrieved from the new server.
Here is what I have done, for people who may have the same issue.
Set up the new server in the publish settings (Project Properties, Publish tab).
Publish to the new server
Copy the published files to the old server. (Include the .application file and the folder)
When the clients reach the old server, they will update, but the server location will be updated on the client to the new server name.
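If you would rather patch the deployment manifest by hand instead of republishing from Visual Studio, the same redirect can be sketched with mage.exe from the Windows SDK (the manifest name, URL and certificate file below are placeholders, and the manifest must be re-signed after the change):

    :: Point the deployment manifest at the new server and re-sign it
    mage -Update MyApp.application -ProviderUrl "http://newserver/apps/MyApp.application" -CertFile mycert.pfx -Password secret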
You could try to change the DNS alias so that it redirects to your new server.
The fact that the code is signed with a certificate is not relevant, since code-signing certificates are not bound to a specific repository (as opposed to SSL certificates).
By the way, why don't you want to reinstall? The whole point of ClickOnce is to ease this kind of software update!
We recently had a project where we released the beta of a big web app on our client's server. Our client asked us to deliver bug fixes as they come, and we tried to work the same way. Normally, building the app on our prototype server is much easier, as I just have to issue a simple 'svn up' command, which takes a second.
But in the production environment we do not have any version control tool available. Is it possible to automate the patching work, so that we don't need to log in over FTP and upload each and every file one by one?
It's very difficult to work this way. Since I'm having this problem, I'm sure some of you have already solved it. Please share your solutions.
Looking forward to your replies... Thanks a lot for reading guys.
Depending on the tools available on the server, you could do an svn diff -r x:y, where x is the revision you last updated to and y is the revision you want to update to (probably the latest revision in your repository), to generate a patch, and then apply it with the patch command.
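A minimal sketch of that approach (the revision numbers, repository URL and paths are placeholders):

    # On a machine with svn available: generate a patch covering everything since the last deploy
    svn diff -r 1200:1250 http://svn.example.com/project/trunk > update-1250.patch
    # On the production server: apply it from the directory that mirrors trunk
    cd /var/www/app && patch -p0 < update-1250.patch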
If rsync is available on the production platform, and you can use it (through ssh, for instance), you could set up a production-ready tree, rsync it onto the production server, and when an update comes in, svn update your production tree and rsync it again.
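A sketch of that workflow, assuming ssh access (the paths and hostname are placeholders):

    # Refresh the production-ready working copy from Subversion
    svn update /srv/staging/app
    # Mirror it to the production server over ssh, skipping the .svn metadata
    rsync -avz --delete --exclude='.svn' -e ssh /srv/staging/app/ deploy@prod.example.com:/var/www/app/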
What is stopping you from installing a Subversion client on the production server?
[EDIT] So someone doesn't allow you to install the software you need on the server. The question is: what is more important, a stable production server or an arbitrary policy? If that someone doesn't listen to arguments, go to your computer, start MS Word and write this letter:
"I hereby refuse to accept any responsibility for the stability of our production system based on the fact that [insert name here] refuses to equip me with the tools to make sure that the production system contains all the necessary files and data after an installation."
Sign this, have your boss sign it and then send a copy to [insert name here]. All of a sudden, any problem that might arise after an installation will be on his turf. Or to put it more clearly: He will be responsible for any mistake you might make.
Now, all you have to do is wait. :)
It depends on the programming environment you use. In Smalltalk, with a web application server like Aida/Web, we can upgrade live web applications on the fly, without stopping them.
The server is connected to the SCM of choice, like Monticello for Squeak Smalltalk or Store for VisualWorks. New versions are then loaded into the server's Smalltalk image, either manually or automatically.