Cannot insert duplicate key row in object 'dbo.Constants' while creating a new Team Project in TFS

TFS 2013 update 5:
Scenario: creating a new Team Project using Visual Studio:
Exception Message: Cannot insert duplicate key row in object 'dbo.Constants' with unique index 'IX_Constants__String_RemovedDate'. The duplicate key value is (1, 172c9245-8dad-48a7-b583-891fb1d51a19\Areas Admins, Jan 1 9999 12:00AM).
Investigation results: "172c9245-8dad-48a7-b583-891fb1d51a19" is the ID of a project that was created 6 months ago, and it is not the last one that was created.
Things I tried:
- Delete client cache
- Delete server cache
- iisreset
- Server restart

DB2: update gets "Error applying transforms. Verify that the specified transform paths are valid."

I have:
Windows Server 2019
IBM DB2 Version 11.5.0
"v11.5.7_ntx64_universal_fixpack"
I want to update the DB to version 11.5.7 with this "v11.5.7_ntx64_universal_fixpack".
a.) I double-click "setup.exe"
b.) On the launchpad, under "Install Product", I choose to work with an existing installation
c.) then I choose my edition and click the button to start
==> I get the message: "Error applying transforms. Verify that the specified transform paths are valid."
In the log I have the following text:
DEBUG: Error 2254: Database: Transform: Cannot update row that doesn't exist. Table: Property
1: 2254 2: 3: Property
Error applying transforms. Verify that the specified transform paths are valid.
C:\Windows\Installer\2641882f.mst
MSI (c) (B4:44) [11:15:10:831]: Product: DB2 Server Edition -- Installation failed.
MSI (c) (B4:44) [11:15:10:831]: Windows Installer installed the product. Product Name: DB2 Server Edition. Product Version: 11.5.7000.1973. Product Language: 1031. Manufacturer: IBM. Installation success or error status: 1624.
I have searched for a solution for some days but have found nothing. Does somebody have an idea?
Thank you.
Regards,
Tino
Now, I have found the solution myself:
https://www.ibm.com/support/pages/applying-db2-v115-fix-pack-windows-fails-error-applying-transforms-any-language-other-english
In my system the path was different. I found the correct path to the files with regedit, and now the update works perfectly.

TFS 2015 warehouse job error: TF221123: Job Version Control Warehouse Sync

We have recently migrated from TFS 2010 to TFS 2015 (Update 2) and everything seems to work fine apart from
the following error we get every 12 minutes.
TF53010: The following error has occurred in a Team Foundation component or extension:
Application Domain: TfsJobAgent.exe
Assembly: Microsoft.TeamFoundation.Framework.Server, Version=14.0.0.0, Culture=neutral,
Detailed Message: TF221123: Job Version Control Warehouse Sync for team project collection JLT TFS 2010 was unable to run after 20 attempts.
After checking "Process status" through the "Warehouse Control Web Service" I get the following message.
I would like to understand the core reason of why this is happening and how we can resolve this isue?
<Job JobProcessingStatus="DataChange" Name="Version Control Warehouse Sync">
<LastRun Result="Stopped" EndTimeUtc="2016-06-30T14:10:50.19Z" ExecutionStartTimeUtc="2016-06-30T14:00:49.877Z" QueueTimeUtc="2016-06-30T14:00:49.203Z">
<ResultMessage>
[Version Control Warehouse Sync]: ---> MakeDataChanges() result=DataChangesPending.
---> MakeDataChanges() result=DataChangesPending.
---> MakeDataChanges() result=DataChangesPending.
---> MakeDataChanges() result=DataChangesPending. --->
...
...
---> TF221123: Job Version Control Warehouse Sync for team project collection JLT TFS 2010 was unable to run after 20 attempts.
</ResultMessage>
</LastRun>
<CurrentRun ExecutionStartTimeUtc="2016-06-30T14:12:50.75Z" QueueTimeUtc="2016-06-30T14:12:50.19Z" JobState="Running"/>
</Job>
After checking this further, we found that this is a known issue (confirmed by Microsoft) that has been fixed in TFS 2015 Update 3.
Although the proper fix is to apply the latest update of TFS 2015, the problem can also be addressed with the following workaround at the database level.
Please run the following script on the TFS collection database:
DECLARE @partitionId INT = 1
DECLARE @registryUpdates typ_KeyValuePairStringTableNullable
INSERT @registryUpdates ([Key], Value)
SELECT '#\Configuration\VersionControl\CodeChurn\InUpgrade\', NULL
EXEC prc_UpdateRegistry @partitionId, @registryUpdates
DROP TABLE tbl_UpgradeCodeChurn
Detailed information can be found in the following article.
Running this script and leaving it for a few hours resolved the reported issue.

Visual Studio Online Migration Utility fails with TF400023

Update: OpsHub has published an update to their utility that fixes the problem I encountered.
I am trying to migrate an on-premises Team Foundation Server 2010 to Visual Studio online using the OpsHub Visual Studio Online migration utility. It has successfully uploaded 1380 of 6585 change sets, but is stuck on one of them and will not continue. The error message for the problematic change set:
Changeset ID: 1417
OH-SCM-009: Error occurred while sync. TF400023: The local workspace could not be reconciled with the server.
If I open the TFS workspace in Visual Studio (by browsing to O:\w69_1), I get a very similar error message in a popup window:
Error
TF400023: The local workspace could not be reconciled with the server.
The Visual Studio Source Control console displays a dozen repetitions of the following error message:
TF14060: The item $/EDT/SingleProjectClient/Data cannot be deleted. One or more children have pending changes.
Browsing through the pending changes in the workspace, it is clear that $/EDT/SingleProjectClient/Data/AllProjects.sdf has a pending "merge, delete" change.
The "merge, delete" change was present in the original change set made on the on-premises team foundation server. The problematic changeset ID 1417 contains the following changes:
$/EDT/SingleProjectClient/Data: delete
$/EDT/SingleProjectClient/Data/AllProjects.sdf: merge, delete
I have tried to undo the pending change on $/EDT/SingleProjectClient/Data/AllProjects.sdf, but that doesn't help. The migration utility continues to issue the same error message (TF400023: the local workspace could not be reconciled with the server).
Stack trace from OpsHubTFSService.log
2015-02-06 12:16:47,834 [5] ERROR Error occured in thread of CheckinAll:TF400023: The local workspace could not be reconciled with the server.
at Microsoft.TeamFoundation.VersionControl.Client.LocalDataAccessLayer.<>c__DisplayClass23.b__1c(LocalWorkspaceProperties wp, WorkspaceVersionTable lv, LocalPendingChangesTable pc)
at Microsoft.TeamFoundation.VersionControl.Client.LocalWorkspaceTransaction.Execute(AllTablesTransaction toExecute)
at Microsoft.TeamFoundation.VersionControl.Client.LocalDataAccessLayer.ReconcileLocalWorkspace(Workspace workspace, WebServiceLayer webServiceLayer, Boolean unscannedReconcile, Boolean reconcileMissingFromDisk, Failure[]& failures, Boolean& pendingChangesUpdatedByServer)
at Microsoft.TeamFoundation.VersionControl.Client.WebServiceLayerLocalWorkspaces.ReconcileIfLocal(String workspaceName, String ownerName, Boolean unscannedReconcile, Boolean reconcileMissingLocalItems, Boolean skipIfAccessDenied, Boolean& reconciled)
at Microsoft.TeamFoundation.VersionControl.Client.WebServiceLayerLocalWorkspaces.CheckPendingChanges(String workspaceName, String ownerName, String[] serverItems)
at Microsoft.TeamFoundation.VersionControl.Client.Workspace.EvaluateCheckin2(CheckinEvaluationOptions options, IEnumerable`1 allChanges, IEnumerable`1 changes, String comment, CheckinNote checkinNote, WorkItemCheckinInfo[] workItemChanges)
at Service.Adapters.TFSCheckinWorkspaceContext.EvaluateCheckIn(List`1 changesToCommit, String comment, CheckinNote checkinNote, WorkItemCheckinInfo[] workItemChanges) in f:\Ashish Docs\Checkouts\OVSMU Branch\OpsHubV2\TFSWCFServiceSource\Service\Service\TFSVersionControl\AdapterComponents\TFSCheckinWorkspaceContext.cs:line 2392
at Service.Adapters.TFSCheckinWorkspaceContext.checkin(String comment, WorkItemCheckinInfo[] workItemChanges, List`1 otherCheckInProperties, String checkinUser) in f:\Ashish Docs\Checkouts\OVSMU Branch\OpsHubV2\TFSWCFServiceSource\Service\Service\TFSVersionControl\AdapterComponents\TFSCheckinWorkspaceContext.cs:line 2344
at Service.Adapters.TFSVCAdapter.checkIn(List`1 checkinItems, String checkinComment, String checkinUser, List`1 workitemId, List`1 otherCheckInProperties) in f:\Ashish Docs\Checkouts\OVSMU Branch\OpsHubV2\TFSWCFServiceSource\Service\Service\TFSVersionControl\AdapterComponents\TFSVCAdapter.cs:line 123
at com.opshub.tfs.test.TFSWebService.<>c__DisplayClass2.b__0() in f:\Ashish Docs\Checkouts\OVSMU Branch\OpsHubV2\TFSWCFServiceSource\Service\Service\TFSWebService.cs:line 692
If your server was ever TFS 2005/2008, you can be in the situation where a past merge delete was never completed.
In 2005/2008, if you had both updates and deletes in a single operation, you had to do two check-ins to complete the merge. However, the UI to tell you that was only introduced in 2008 SP1 (AFAIR).
I have run into this issue all the time with Timely Migration and the TFS Integration Tools. And as the merge was never completed, your code now relies (or possibly relies) on the current setup. In the aforementioned tools I would edit the incoming migration data to remove knowledge of the pended delete and allow the tool to proceed.
The OpsHub tool is not good with corner cases, and you may need OpsHub to show you how to resolve this with their tool.

org.activiti.rest.editor.model.ModelEditorJsonRestResource - Error creating model JSON

I work with Activiti 5.15.1.
I deployed activiti-explorer and activiti-rest under my server's tomcat-6.0.18\webapps folder.
I connected these WARs to Postgres as the database by changing db.properties in both activiti-explorer and activiti-rest:
db=postgres
jdbc.driver=org.postgresql.Driver
jdbc.url=jdbc:postgresql://localhost:5432/activiti
jdbc.username=postgres
jdbc.password=postgres
and I put postgresql-jdbc3-8.1-405.jar under the lib folder.
When I start the server, 23 tables are created,
but when I try to create a new model using http://com.supcom:8080/activiti-explorer/, I get this error:
ERROR org.activiti.rest.editor.model.ModelEditorJsonRestResource - Error creating model JSON
org.codehaus.jackson.JsonParseException: Unexpected character ('j' (code 106)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: java.io.StringReader@390afb; line: 1, column: 2]
at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1433)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:521)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportUnexpectedChar(JsonParserMinimalBase.java:442)
at org.codehaus.jackson.impl.ReaderBasedParser._handleUnexpectedValue(ReaderBasedParser.java:1198)
at org.codehaus.jackson.impl.ReaderBasedParser.nextToken(ReaderBasedParser.java:485)
at org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2770)
at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2718)
at org.codehaus.jackson.map.ObjectMapper.readTree(ObjectMapper.java:1542)
at org.activiti.rest.editor.model.ModelEditorJsonRestResource.getEditorJson(ModelEditorJsonRestResource.java:53)
The same error is displayed when I try to import another model, created in Eclipse using the Activiti plugin.
This error is due to multiple versions of the PostgreSQL JAR being present.
According to the PostgreSQL website, https://jdbc.postgresql.org/download.html,
if you are using JDK 1.6 then the JDBC4 driver is suitable (JDBC4 PostgreSQL Driver, Version 9.4-1208).
The same JAR must be used in the activiti-explorer lib folder, your application, and the Tomcat lib folder.
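If it is not clear which PostgreSQL driver is actually being loaded, a small diagnostic can help. This is only a sketch based on the setup described above (it reuses the JDBC URL from db.properties); it loads the driver, prints the version it reports, and shows which JAR it came from, so a duplicate or outdated JAR can be spotted:
import java.sql.Driver;
import java.sql.DriverManager;

public class CheckPostgresDriver {
    public static void main(String[] args) throws Exception {
        // Make sure the driver class is registered (needed for old JDBC3 jars)
        Class.forName("org.postgresql.Driver");

        // Ask DriverManager which driver handles the PostgreSQL URL from db.properties
        Driver driver = DriverManager.getDriver("jdbc:postgresql://localhost:5432/activiti");

        // Version reported by the driver that was actually loaded
        System.out.println("Driver class:   " + driver.getClass().getName());
        System.out.println("Driver version: " + driver.getMajorVersion() + "." + driver.getMinorVersion());

        // JAR the driver class was loaded from; a duplicate or old jar shows up here
        System.out.println("Loaded from:    "
                + driver.getClass().getProtectionDomain().getCodeSource().getLocation());
    }
}
Running this with the same classpath as Tomcat makes it obvious whether the old postgresql-jdbc3-8.1-405.jar is still being picked up instead of the newer JDBC4 driver.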

Spring-Boot Configuration for suppressing BatchDataInitializer

I am using Spring Boot 0.5.0.M6 with Spring Batch. The configuration uses @EnableBatchProcessing, with the datasource etc. configured in application.properties.
During the first run of the application everything works fine, but after I stop and restart the application, the following error is seen:
org.springframework.dao.DuplicateKeyException: PreparedStatementCallback; SQL [INSERT into BATCH_JOB_INSTANCE(JOB_INSTANCE_ID, JOB_NAME, JOB_KEY, VERSION) values (?, ?, ?, ?)]; Duplicate entry '1' for key 'PRIMARY'; nested exception is com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 'PRIMARY'
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:239)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:659)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:908)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:969)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:974)
When digging down, I observed the following lines in the logs:
2013-12-06 12:12:37 INFO ResourceDatabasePopulator:162 - Executing SQL script from class path resource [org/springframework/batch/core/schema-mysql.sql]
2013-12-06 12:12:37 INFO ResourceDatabasePopulator:217 - Done executing SQL script from class path resource [org/springframework/batch/core/schema-mysql.sql] in 13 ms.
The root problem here was that schema-drop-mysql.sql was not triggered but schema-mysql.sql was, thereby creating two entries in BATCH_JOB_SEQ.
To resolve this, I have added
@EnableAutoConfiguration(exclude={BatchAutoConfiguration.class})
However, due to this I need to execute schema-mysql.sql explicitly, which is OK for now, but will become a problem when the Spring Batch version is updated with changes to the schema.
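For reference, this is roughly what the configuration looks like; a minimal sketch only, with an illustrative class name rather than my actual one:
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.batch.BatchAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@ComponentScan
@EnableBatchProcessing
// Excluding BatchAutoConfiguration keeps Spring Boot from re-running schema-mysql.sql on every start,
// but the Batch metadata tables then have to be created manually.
@EnableAutoConfiguration(exclude = { BatchAutoConfiguration.class })
public class BatchApplication {
    public static void main(String[] args) {
        SpringApplication.run(BatchApplication.class, args);
    }
}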
Hence I have a couple of questions:
1. How can I autoconfigure Batch to also execute schema-drop-mysql.sql before schema-mysql.sql?
2. Is there a way to configure this BatchDatabaseInitializer to run in a kind of "update" mode?
Regards
With the current version of the Spring Batch autoconfiguration that isn't possible. With the upcoming version it is possible to disable the automatic creation of the database tables by setting the spring.batch.initializer.enabled property to false.
IMHO you shouldn't use the automatic creation/update features to create schemas; either do it yourself or use tools like Liquibase or Flyway to do it in a more controlled way.
Also see https://stackoverflow.com/questions/8418814/db-migration-tool-liquibase-or-flyway
You can always execute schema-drop-mysql.sql yourself; as a workaround you could add a @PreDestroy method to your @Configuration class which executes this script. (Maybe you could even add this to a @Configuration class which is enabled only in a dev mode/profile.)
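A minimal sketch of that idea, assuming the DataSource can be injected and using Spring's ResourceDatabasePopulator to run the drop script shipped inside spring-batch-core (the class name and the "dev" profile are illustrative):
import javax.annotation.PreDestroy;
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.DatabasePopulatorUtils;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

@Configuration
@Profile("dev") // only drop the Batch tables when running with the dev profile
public class BatchSchemaCleanupConfiguration {

    @Autowired
    private DataSource dataSource;

    @PreDestroy
    public void dropBatchTables() {
        // schema-drop-mysql.sql is packaged inside spring-batch-core
        ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
        populator.addScript(new ClassPathResource("org/springframework/batch/core/schema-drop-mysql.sql"));
        DatabasePopulatorUtils.execute(populator, dataSource);
    }
}
This keeps the drop script under your control rather than relying on the initializer, which is in line with the advice above to manage the schema yourself or with a dedicated migration tool.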