Perforce (p4) sync maxresults error - version-control

I am getting the following error when trying to sync:
Request too large (over 110000); see 'p4 help maxresults'
Because of this error, I am unable to sync.
Can you please let me know how to increase maxresults to unlimited, or whether there is another way to handle this?

Your resource limits are associated with your username, based on which group(s) your user belongs to.
From 'p4 help group':
Each group has MaxResults, MaxScanRows, and MaxLockTime fields,
which limit the resources committed to operations performed by
members of the group. For these fields, 'unlimited' or 'unset'
means no limit for that group. An individual user's limit is the
highest of any group with a limit to which he belongs, unlimited if
any of his groups has 'unlimited' for that field, or unlimited
if he belongs to no group with a limit. See 'p4 help maxresults'
for more information on MaxResults, MaxScanRows and MaxLockTime.
So your administrator can place your userid into an alternate group, with a higher set of resource limits, which will then allow you to use more server resources.
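The rule quoted above (highest limit wins, any 'unlimited' wins, no limits at all means unlimited) can be sketched as a small function. This is a minimal illustration only; the limit values are made up:

```python
def effective_limit(group_limits):
    """Compute a user's effective MaxResults from the groups they belong to.

    group_limits: list with one entry per group the user is in; each entry
    is an int, 'unlimited', or 'unset'.
    Returns an int or 'unlimited'.
    """
    # Groups with 'unset' impose no limit and are ignored.
    limits = [l for l in group_limits if l != 'unset']
    # The user belongs to no group with a limit -> unlimited.
    if not limits:
        return 'unlimited'
    # Any group with 'unlimited' -> unlimited.
    if 'unlimited' in limits:
        return 'unlimited'
    # Otherwise the user's limit is the highest of any group with a limit.
    return max(limits)

print(effective_limit([50000, 'unset']))      # 50000
print(effective_limit([50000, 200000]))       # 200000
print(effective_limit([50000, 'unlimited']))  # unlimited
print(effective_limit(['unset', 'unset']))    # unlimited
```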
Alternatively, you can sync fewer files. For example, you could edit your client spec to specify a smaller portion of the depot in your View:, or you could specify a smaller set of files to sync, rather than syncing your entire workspace.
In my workspaces, I try to specify the smallest set of files which will allow me to do my job; a workspace with a View like:
View:
//depot/branch/my/project/... //client/my/project/...
is going to result in a smaller sync than a workspace with a View like:
View:
//depot/... //client/...

Office 365 group classifications: How to restrict users from changing classification after the creation of the group

From the document https://learn.microsoft.com/en-us/office365/enterprise/manage-office-365-groups-with-powershell#create-classifications-for-office-groups-in-your-organization, we can understand how we can add classification to Office 365 groups, and also how to make one classification as default.
Is there any way to restrict owners from changing the classification after the group has been created? As far as I can see in our tenant, users can easily go to Edit group in their Outlook settings and change the classification, which is exactly what we want to restrict. Please let me know if anyone has a solution.
Thanks in Advance!
I got a resolution for my requirement:
1. Let the user create the site and group using the classification drop-down.
2. A script runs every day to check for a property named classification in the site collection property bag. If that property exists, the script continues without changing anything for the site.
3. As this is not a default property, it will not exist the first time. The script adds the property with the same text as the group classification. This is a one-time activity for each site.
4. Another script also runs every day and checks the classification property of the site from the property bag. If it exists, it can do whatever is needed (e.g. enable/disable external sharing). If it doesn't, it does nothing.
5. In this way, no matter how many times the owner changes the classification, it will not affect anything.
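The flow of those two daily scripts can be sketched as follows. This is a Python illustration of the decision logic only; the real scripts would use CSOM as shown below, and the 'Public' policy rule is a hypothetical example:

```python
def reconcile_site(property_bag, group_classification):
    """One-time stamp: copy the group classification into the site's
    property bag only if it has not been stamped yet."""
    if 'classification' not in property_bag:
        property_bag['classification'] = group_classification
    return property_bag

def apply_policy(property_bag):
    """Policy script: act only on the stamped property-bag value, so later
    changes to the group classification have no effect."""
    stamped = property_bag.get('classification')
    if stamped is None:
        return None  # not stamped yet; do nothing
    # Hypothetical policy: external sharing only for sites stamped 'Public'.
    if stamped == 'Public':
        return 'enable-external-sharing'
    return 'disable-external-sharing'

bag = {}
reconcile_site(bag, 'Public')        # first run stamps 'Public'
reconcile_site(bag, 'Confidential')  # owner changed the group later; stamp unchanged
print(apply_policy(bag))             # enable-external-sharing
```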
To change property bag, we can use CSOM as below:
$allProperties = $web.AllProperties
$ctx.Load($web)
$ctx.Load($allProperties)
$ctx.ExecuteQuery()
$allProperties["classification"] = 'Property'
$web.Update()
$ctx.ExecuteQuery()
Similarly, for a communication site, instead of changing the group classification we can change the site collection classification (the Site class of the site collection), as the group does not exist.

Billing by tag in Google Compute Engine

Google Compute Engine allows for a daily export of a project's itemized bill to a storage bucket (.csv or .json). In the daily file I can see X-number of seconds of N1-Highmem-8 VM usage. Is there a mechanism for further identifying costs, such as per tag or instance group, when a project has many of the same resource type deployed for different functional operations?
As an example, say ten n1-highmem-8 VMs are deployed to a region in a project. In the daily bill they just display as X seconds of N1-Highmem-8.
Functionally:
2 VMs might run a database 24x7
3 VMs might run batch analytics operations averaging 2-5 hrs each night
5 VMs might perform a batch operation which runs in sporadic 10-minute intervals through the day
The final operation writes data to a specific GCS bucket; the other operations read/write to different buckets.
How might costs be broken out across these four operations each day?
The Usage Logs do not provide 'per-tag' granularity at this time, and they can be a little tricky to work with, but here is what I recommend to break the logs down and get better information out of them.
Your usage logs provide the following fields:
Report Date
MeasurementId
Quantity
Unit
Resource URI
ResourceId
Location
If you look at the MeasurementID, you can choose to filter by the type of image you want to verify. For example VmimageN1Standard_1 is used to represent an n1-standard-1 machine type.
You can then use the MeasurementID in combination with the Resource URI to find out what your usage is on a more granular (per instance) scale. For example, the Resource URI for my test machine would be:
https://www.googleapis.com/compute/v1/projects/MY_PROJECT/zones/ZONE/instances/boyan-test-instance
Note: I've replaced "MY_PROJECT" and "ZONE" here; those values, along with the name of the instance, will be specific to your output.
If you look at the end of the URI, you can clearly see which instance that is for. You could then use this to look for a specific instance you're checking.
If you are better skilled with Excel or other spreadsheet/analysis software, you may be able to do even better as this is just an idea on how you could use the logs. At that point it becomes somewhat a question of creativity. I am sure you could find good ways to work with the data you gain from an export.
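As a sketch of that approach, here is how an export could be aggregated per instance by combining the MeasurementId with the instance name at the end of the Resource URI. The column names follow the fields listed above, and the sample rows are made up:

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample rows in the shape of a usage-log export.
sample = """MeasurementId,Quantity,Unit,ResourceURI
com.google.cloud/services/compute-engine/VmimageN1Standard_1,3600,seconds,https://www.googleapis.com/compute/v1/projects/p/zones/z/instances/db-1
com.google.cloud/services/compute-engine/VmimageN1Standard_1,1800,seconds,https://www.googleapis.com/compute/v1/projects/p/zones/z/instances/batch-1
com.google.cloud/services/compute-engine/VmimageN1Standard_1,3600,seconds,https://www.googleapis.com/compute/v1/projects/p/zones/z/instances/db-1
"""

# Sum usage seconds per (instance, machine type).
usage = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    instance = row['ResourceURI'].rsplit('/', 1)[-1]       # name at the end of the URI
    machine = row['MeasurementId'].rsplit('/', 1)[-1]      # e.g. VmimageN1Standard_1
    usage[(instance, machine)] += float(row['Quantity'])

for (instance, machine), seconds in sorted(usage.items()):
    print(f"{instance}\t{machine}\t{seconds:.0f}s")
```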
9/2017 update.
It is now possible to add user defined labels, then track usage and billing by these labels for Compute and GCS.
Additionally, by enabling the billing export to BigQuery, it is then possible to create custom views, or to query BigQuery from a tool friendlier to finance people such as Google Docs, Data Studio, or anything that can connect to BigQuery. Here is a great example of labels across multiple projects used to split costs into something friendlier to organizations, in this case a Data Studio report.

Lotus Notes application Document count and disk space

I'm using Lotus Notes 8.5.2 and made a backup of my mail application in order to preserve everything in a specific folder before deleting its contents from my main application. The backup is a local copy, created by going to File --> Application --> New Copy, setting the Server to Local, and giving it a title and file name that I save in a folder. All of this works okay.
Once I have that, I go into the All Documents & delete everything out except the contents of the folder(s) I want this application to preserve. When finished, I can select all and see approximately 800 documents.
However, there are a couple of other things I have noticed. First, the Document Count (right-click on the newly created application, go to Properties, and select the "i" tab, which shows Disk Space and Document count). That document count doesn't match what is shown when you open the application and go to All Documents: All Documents matches the roughly 800 I had after deleting all but the contents I wanted to preserve, but the application properties say it has almost double that amount (1500+), with a fairly large file size.
I know about the Unread Document count, and in this particular application I checked "Don't maintain unread marks" on the last property tab. There is no red number in the application, but neither the document count nor the file size changed when that was selected. Compacting the application makes no difference.
I'm concerned that although I've trimmed down what I want to preserve in this Lotus Notes application, there's a lot of excess baggage with it. Also, since the document count appears to be inflated, I suspect the file size is too.
How do you make a backup copy of a Lotus Notes application, then keep only what you want & have the Document Count and File Size reflect what you have actually preserved? Would appreciate any help or advice.
Thanks!
This question might really belong on ServerFault or SuperUser, because it's more of an admin or user question than a development question, but I can give you an answer from a developer angle...
Open your mailbox in Domino Designer, and look at the selection formula for $All view. It should look something like this:
SELECT @IsNotMember("A"; ExcludeFromView) & IsMailStationery != 1 & Form != "Group" & Form != "Person"
That should tell you first of all that indeed, "All Documents" doesn't really mean all documents. If you take a closer look, you'll see that three types of documents are not included in All Documents.
Stationery documents
Person and Group documents (i.e., synchronized contacts)
Any other docs that any IBM, 3rd party, or local developer has decided to mark with an "A" in the ExcludeFromView field. (I think that repeat calendar appointment info probably falls into this category.)
One or more of those things is accounting for the difference in your document count.
If you want, you can create a view with the inverse of that selection formula by reversing each comparison and changing the Ands to Ors:
SELECT @IsMember("A"; ExcludeFromView) | IsMailStationery = 1 | Form = "Group" | Form = "Person"
Or, for that matter, you can get the same result by taking the original formula, surrounding it with parentheses, and prefixing it with a logical not (!).
Either way, that view should show you everything that's not in AllDocuments, and you can delete anything there that you don't want.
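The inversion works by De Morgan's law: negating the whole selection turns each & into | and reverses each comparison. A quick Python check of that equivalence, using made-up documents with the same field names as the formulas above:

```python
# A few hypothetical documents with the fields the $All selection tests.
docs = [
    {'ExcludeFromView': [],    'IsMailStationery': 0, 'Form': 'Memo'},    # normal mail
    {'ExcludeFromView': ['A'], 'IsMailStationery': 0, 'Form': 'Memo'},    # excluded doc
    {'ExcludeFromView': [],    'IsMailStationery': 1, 'Form': 'Memo'},    # stationery
    {'ExcludeFromView': [],    'IsMailStationery': 0, 'Form': 'Person'},  # contact
]

def all_documents(d):
    """The original $All selection: visible in All Documents."""
    return ('A' not in d['ExcludeFromView'] and d['IsMailStationery'] != 1
            and d['Form'] != 'Group' and d['Form'] != 'Person')

def hidden_documents(d):
    """The inverted selection: everything All Documents hides."""
    return ('A' in d['ExcludeFromView'] or d['IsMailStationery'] == 1
            or d['Form'] == 'Group' or d['Form'] == 'Person')

# Every document lands in exactly one of the two views.
for d in docs:
    assert all_documents(d) != hidden_documents(d)
print("the two selections partition the documents")
```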
For a procedure that doesn't involve mucking around with Domino Designer, I would suggest making a local replica instead of a local copy, and using the selective replication option to replicate only documents from specific folders (Space Savers under More Options). But that answer belongs on ServerFault or SuperUser so if you have any questions about it please enter a new question there.

ExpressionEngine missing channel entries

I am working on a new web app based on ExpressionEngine, and for the most part I am basing the content on channel entries. However, I am experiencing some very weird issues with the exp:channel:entries tag in that it is not returning all relevant entries each time. I can't figure out what's going on: the entries are definitely available when viewing them in the control panel, and they will also show up as requested in my template, but sometimes they just disappear and/or are not processed properly. This happens with both large and small result sets, from 3 channel entries that fit the criteria specified within the exp tag up to 500.
Any thoughts or feedback would be greatly appreciated.
There could be a number of things going on here, so here are some things to look at, just in case:
If the entries have entry dates in the future, you'll need your channel entries tag to have the parameter show_future_entries="yes"
Likewise, if the entries are closed you'll need status="open|closed", and if they have expired, show_expired="yes"
Are you looking at a particular category and these entries aren't assigned to the category?
Are you looking at a particular category but have excluded category data from the entries tag?
Are you retrieving more than 100 entries? There is a default limit of 100 entries returned unless you specify a limit parameter.
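Putting those parameters together, a channel entries tag covering all of these cases might look like the sketch below. The channel name and limit are examples only; check the parameter names against the docs for your ExpressionEngine version:

```
{exp:channel:entries channel="news" limit="500" show_future_entries="yes" status="open|closed" show_expired="yes"}
    {title}
{/exp:channel:entries}
```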

SQL Server "Space Available" Alert?

I am looking for a way to send alerts when a database or log reach 10% space remaining.
Let me preface this question by saying that I intentionally did not include the word "file" in it. While researching this question, it appears that most people have their databases set up for auto-growth and then struggle to manage their database(s) at the file system level. There are a ton of examples out there dealing with how to send disk space alerts. THIS IS NOT MY QUESTION! My databases are ALL set to fixed-size files, meaning the files are ALL pre-allocated from the file system when they are created or when a database needs to be expanded. As a matter of policy I do not allow ANY database to grow, uncontrolled, to the point of bringing down a whole server at the hands of one badly behaved application. Each database is managed within its pre-allotted space and grown manually as necessary to meet growing demands.
That said, I am looking for the best way to send an alert when the database's "remaining space" drops below 10%, for example - technically I'll probably set up separate warning and alert thresholds. So far I haven't been able to find anything on this subject, since most people seem fixated on disk space, which makes this a bit like looking for a needle in a haystack.
I had hoped that SQL Server would have a simple alert mechanism to do such a simple, obvious thing right out of the box, but it looks like alerts are mostly designed to catch error messages, which is a little late in my book - I'm looking to be a little more proactive.
So, again, looking to send alerts when database "remaining space" drops below various thresholds. Has anyone done this or seen it done?
Thanks!
Yes indeed. I have done this.
It is possible to set counters with queries against system tables. One possibility is determining the percentage of free space in a log or data file. A SQL Alert can then be created to e-mail an operator when a particular threshold has been reached on a counter, such as only 5% space remaining in a database file. The solution requires several steps, but is possible using existing functionality.
To determine file names and space information, the following query may be used.
SELECT name AS 'File Name' ,
physical_name AS 'Physical Name',
size/128 AS 'Total Size in MB',
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Available Space In MB',
round((CAST(FILEPROPERTY(name, 'SpaceUsed') AS float)/size)* 100 ,2) AS 'Percentage Used',
*
FROM sys.database_files;
Below are steps to set up an alert for percentage space free on a given file.
Create a procedure that sets a counter to the computed value. This example sets counter number 10.
DECLARE @FreePercent int
SELECT @FreePercent = 100 - round((CAST(FILEPROPERTY(name, 'SpaceUsed') AS float)/size)* 100 ,2)
FROM sys.database_files
WHERE sys.database_files.name = 'NameOfYourLogOrDataFileHere';
EXEC sp_user_counter10 @FreePercent
Create a scheduled job to run the aforementioned procedure
Create a SQL Agent alert on the counter, such that it fires when the free percentage drops below a certain threshold (e.g. 5%)
Configure database mail, test it, and create at least one operator
Enable SQL server agent alert E-mail (properties of agent), and restart the agent
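The arithmetic behind the query above (SQL Server reports sizes in 8 KB pages, so size/128 gives MB) and the two-threshold idea can be sketched as follows. The thresholds and page counts are examples:

```python
def free_percent(size_pages, space_used_pages):
    """Percentage of a database file still free. SQL Server reports file
    sizes in 8 KB pages, so size_pages / 128 is the size in MB."""
    used_pct = (space_used_pages / size_pages) * 100
    return round(100 - used_pct, 2)

def alert_level(pct_free, warn=10.0, alert=5.0):
    """Mimic separate warning and alert thresholds on the free percentage."""
    if pct_free <= alert:
        return 'ALERT'
    if pct_free <= warn:
        return 'WARNING'
    return 'OK'

# Example: a 1 GB file (131072 pages) with 125000 pages used.
pct = free_percent(131072, 125000)
print(pct, alert_level(pct))  # 4.63 ALERT
```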