I have recently migrated my Perforce server from an older version running on Windows to a new server on Linux, using this doc as reference.
After restarting the server, if I run p4 depots on the machine running the server, I get the following output:
Depot depot 2017/06/05 local depot/... 'Default depot'
Depot spec 2020/05/20 spec .p4s spec/... 'Created by super. '
Depot streamsDepot 2017/06/05 stream 1 streamsDepot/... 'Created by perforce. '
Depot unload 2020/05/20 unload unload/... 'Created by super. '
But when I run the same p4 depots command from a different machine connected to the server, I only get these three depots:
Depot depot 2020/05/20 local depot/... 'Default depot'
Depot spec 2020/05/20 spec .p4s spec/... 'Created by super. '
Depot unload 2020/05/20 unload unload/... 'Created by super. '
These are the same depots visible from P4V as well. Even after a 'Get Latest' click, I keep getting shown these 3 depots. I tried p4 sync as well, but again get errors along the lines of:
//streamsDepot/... - must refer to client 'My-Client-Machine'.
Also, I do not see anything in the p4 depots doc that solves this problem for me. Is this expected behaviour?
There are three reasons you might see different results from the p4 depots command on two different client machines (note that when you run a command "on the server" you're still using a client; the client just happens to be on the server machine):
You're connecting to two different servers.
You have two different sets of permissions. (Depots you have no access to are hidden.)
(special exception for stream depots) You have a very old client executable and the server is hiding depot types that your client might not be able to parse.
p4 info will mostly let you rule out the first two of these. If you're connecting to different servers, you'll see different Server address and/or Server root values.
If your User name is different, that probably explains the permissions issue; if not, check the protection table for IP-based restrictions. p4 protects may be useful here.
You can check the client executable version with p4 -V.
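One quick way to compare the two machines is to run the same few read-only commands on each and diff the output:

p4 info        # compare the "Server address", "Server root" and "User name" fields
p4 protects    # lists the protection lines that apply to your user and host
p4 -V          # prints the client executable version without contacting the server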
I'm in the process of more correctly implementing source control via Mercurial at work and I've run into a situation. My environment is two programmers with a Server and approximately 4 dev computers: our 2 Office desktops, where the majority of the code writing happens, and 2 laptops used in the Labs for testing and debugging.
Previously, we had just been operating over the network; the code projects lived on the server and both my office and the lab laptop opened the files over the network. Yeah, I know it wasn't the best of ideas, but we made it work. Moving to a more correct model of DVCS with local repos presents me with a problem: How do I get my code updates from my Office where I was typing to the Lab so I can program an actual chip? I feel like this level of changes (10, 20, 50, maybe even 100 little changes over the course of a day of development) doesn't need to go through the Server. Personal opinion is that commits to the Server should be reserved for when I'm actually ready to share what I have with others... not necessarily finished with the project, just ready to share where I'm at.
Do I have to push to the Server and then pull to the Laptop every time?
Can I just push/pull back and forth between my Office and the Lab laptop repos? How would I set that connection up?
Under the assumption that the "Server" is a CVCS-style emulation in a DVCS environment (i.e. it is exclusively the push target and pull source for all data exchanges) and the "always work in a single branch" antipattern is not used:
Each dev host works with at least two named branches: a personal one (for WIP) and the shared merge target "default". WIP has to be pushed to the Server, and every other host syncs its local repository with the whole Server repository (but the "authoritative source" is only the default branch).
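A minimal sketch of that flow, assuming the shared repository lives on the Server at ssh://server//repos/project and using a made-up personal branch name:

hg branch wip-alice                                # personal WIP branch
hg commit -m "WIP: lab tweaks"
hg push --new-branch ssh://server//repos/project   # --new-branch is needed the first time

# on the lab laptop
hg pull ssh://server//repos/project
hg update wip-alice

# when ready to share, merge into default and push again
hg update default
hg merge wip-alice
hg commit -m "Merge WIP into default"
hg push ssh://server//repos/project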
Pure DVCS-model
Except "Server" as default path, each Dev-host have 3 additional entries for other Dev's workplaces and pull-only model used for simplicity (no additional ACL and rules for pushes). I.e. (with human's communication) local http-server (hg serve) activated on source(s) on demand and on target developer hg pull ANOTHERDEV. Source server can' be stopped after it. Personal named branches isn't bad idea in this case also
Note: `hg serve can be always enabled on all 4 dev-hosts, combined pull command (pull 3 another repos) xan be defined as alias on every host and used when needed, without additional negotiation
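As an illustration of the named paths and the combined-pull alias, an ~/.hgrc fragment on one dev host might look like this (host names, ports and the repository path are made up; each source host runs hg serve, which listens on port 8000 by default):

[paths]
default    = ssh://server//repos/project
officedev2 = http://office2:8000/
lablaptop1 = http://lab1:8000/
lablaptop2 = http://lab2:8000/

[alias]
# shell alias: pull from the three other dev hosts in one go
pullall = !$HG pull officedev2 ; $HG pull lablaptop1 ; $HG pull lablaptop2

With that in place, hg pull lablaptop1 grabs one peer's changes and hg pullall grabs them all.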
In ClearCase Remote Client, we create a new VOB based on a VOB selection rule. I checked out a couple of files, but when trying to check in, I get the following error -
CRVAP0087E CCRC command 'checkin' failed:
/bin/sh: /vob/cspecs/triggers/scripts/checkin.sh: No such file or directory
ClearCase CM Server: Warning: Trigger "checkin_SomeOtherBranch" has refused to let checkin proceed.
Please note that, as per my VOB selection rule, the remote client should fire the checkin_MyBranch trigger for checkin.
As per this SO post, we can redefine an existing trigger with mktrtype, but since the command line is not available in CCRC, I couldn't try this command to resolve my issue.
Have you come across this situation? I am not precisely clear on what the purpose of a trigger is in CCRC.
Thank you for any help.
This would be best debugged on the CCRC server side (which has full access to all the base ClearCase commands, like mktrtype), as in this trigger example for limiting the delete command.
You wouldn't be able to modify it from a client (i.e. from a CCRC web view).
Check however that, on the CCRC server, the path /vob/cspecs/triggers/scripts/checkin.sh is there (and that the cspecs VOB is mounted). It should be available though, or you would get an error message about an "interactive session" as well (see "Non-interactive triggers fail with warning about interactivity using CCRC or CCWeb").
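For example, a few checks that could be run on the CCRC server itself (the script path and trigger name come from the error message; /vob/MyVob is a placeholder for the VOB the checked-out files live in):

cleartool lsvob /vob/cspecs                          # is the cspecs VOB registered and mounted?
ls -l /vob/cspecs/triggers/scripts/checkin.sh        # does the trigger script actually exist there?
cleartool lstype -kind trtype -invob /vob/MyVob      # which trigger types are attached to that VOB?
cleartool describe trtype:checkin_SomeOtherBranch@/vob/MyVob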
This looks like a custom trigger, put in place on the ClearCase server side. I don't know what its purpose would be.
I am trying to run the following command against my TFS 2008 server:
TF history /server:MyTFSServer /recursive "$/MyTFSProject/Folder"
When I run I get this:
Ignoring the /server option
It then complains about the workspace. The workspace part I get (it is trying to use my current folder to establish the TFS server; where I am running from is not mapped, so it can't connect. For my needs, going to the right folder will not help.)
But WHY WHY WHY does it not like my /server option?
I have tried /s, /server and -s. None of them work. I have checked and double checked the spelling of my server name. I have checked to make sure that the tf.exe I am running is the TFS 2008 version.
I am so confused and getting a bit frustrated.
(The sad thing is I had this working last week. I ran several history commands without any issues. I don't have the text from those commands, so I don't know what I did different, but I know it CAN work.)
Any help would be great!
Usually when you get this message it's because the /server parameter is unnecessary - that is, the client has determined your workspace and server information from the path you gave it. This should only happen with local paths, however, not with server paths. Can you confirm that you're only using server paths in your commands?
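For example (MyTFSServer and both paths below are placeholders), the first form keeps the /server option in play, while the second, run against a local path inside a mapped workspace, lets the client derive the server from the mapping, which is when the option gets ignored:

tf history /server:MyTFSServer /recursive "$/MyTFSProject/Folder"
tf history /server:MyTFSServer /recursive "C:\Workspaces\MyTFSProject\Folder"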
I am trying to scrape a website using wget. Here is my command:
wget -t 3 -N -k -r -x
The -N means "don't download the file unless the server version is newer than the local version". But this isn't working. The same files get downloaded over and over again when I restart the above scraping operation - even though the files have no changes.
Many of the downloaded pages report:
Last-modified header missing -- time-stamps turned off.
I've tried scraping several web sites but all tried so far give this problem.
Is this a situation controlled by the remote server? Are they choosing not to send those timestamp headers? If so, there may not be much I can do about it.
I am aware of the -nc (no-clobber) option, but that will prevent an existing file from being overwritten even if the server file is newer, resulting in stale local data accumulating.
Thanks
Drew
The wget -N switch does work, but a lot of web servers don't send the Last-Modified header for various reasons. For example, dynamic pages (PHP or any CMS, etc.) have to actively implement the functionality (figure out when the content was last modified, and send the header). Some do, while some don't.
There really isn't another reliable way to check if a file has been changed, either.
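A quick way to check whether a particular server sends the header at all is to ask for the response headers without downloading the body (the URL is just a placeholder):

wget -S --spider http://example.com/page.html
curl -I http://example.com/page.html

If no Last-Modified line appears in the output, wget -N has nothing to compare against and will fetch the file again on every run.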
I am new to Perforce.
Is it possible in P4 to have a confirmation step before running deletion commands?
E.g.:
deleting a workspace has no confirmation step
( p4 client -d workspace_name )
deleting a label has no confirmation step
( p4 label -d label_name )
Which I find dangerous.
Thanks,
Thomas
I'm not sure of the real danger - if Perforce is about to wipe out something that you can't get back, that is generally when it requires the -f flag. The one truly dangerous command - p4 obliterate - does require an explicit -y flag before it will do anything.
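For example, obliterate only previews unless you add -y (the depot path is a placeholder):

p4 obliterate //depot/some/path/...       # preview: reports what would be removed, deletes nothing
p4 obliterate -y //depot/some/path/...    # actually removes the file data and history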
If you are concerned about modifications of the server metadata (client specs, labels, permission tables, jobs, etc.), then I strongly recommend you set up a "spec" depot. This creates a special depot in Perforce that version-controls any changes users make to things like label specs, branch specs, client specs, etc. It can be really useful, and is the first thing I do on any new Perforce installation.
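A rough sketch of setting one up, assuming superuser access (the depot name "spec" is the usual convention):

p4 depot spec                  # in the form that opens, set Type: spec, then save
p4 admin updatespecdepot -a    # optionally seed it with the specs that already exist

After that, each change to a client spec, label, branch spec, etc. is saved as a new revision under //spec/... and can be inspected with p4 print or p4 filelog like any other versioned file.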
It's all in the docs. Try this KB entry for starters.