"Last Build Status: Failed" after uploading a build for analysis - coverity

We use Coverity's free scanning service for free and open source projects. We have not been able to use the service for the last two months or so. Prior to the service failures, we had half a dozen or so successful analyses.
Submitting a scan results in:
Last Build Status: Failed. Your build has failed due to the following reason. Please fix the error and upload the build again.
Error details: :Failed to retrieve tar file
Coverity is very good about providing copy/paste directions, and we have copied/pasted them religiously. We verified there are no build errors, and we verified the build ends with "131 C/C++ compilation units (100%) are ready for analysis" and "The cov-build utility completed successfully".
We've tried to resolve the issue by verifying the items in the generic checklist provided in the "build failed" email response from the service. We verified or performed all of them except number four.
We did not perform number four because Coverity's documentation is horrible (it's the exact opposite of their excellent scanning service). Because there are no instructions or RTFM to read, we have no idea which knobs should be turned for bin/cov-configure. We don't want to mess with it since it worked in the past.
We also tried the following (the submission commands we used are sketched after this list):
using the web submission form and browser
using curl from the command line
packaging cov-int/ in a tarball
packaging cov-int/ in a zip file
using all lowercase for the project name
capitalizing the first letter of the project name
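For reference, the packaging and submission followed Coverity Scan's standard copy/paste directions; a rough sketch (the token, email, version, and project name are placeholders for our actual values):
# wrap the native build so cov-build can capture the compilation units ("make" stands in for our real build command)
cov-build --dir cov-int make
# package the intermediate directory and upload it to the Scan service
tar czvf project.tgz cov-int
curl --form token=<SCAN_TOKEN> \
     --form email=<registered-email> \
     --form file=@project.tgz \
     --form version="1.0" \
     --form description="build attempt" \
     "https://scan.coverity.com/builds?project=<Project+Name>"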
We always get the same message ("Failed to retrieve tar file"), even with a ZIP file. Recall that prior to about 6 weeks ago, everything was working fine.
What is the secret to uploading a file to the service? What has changed in the last six weeks or two months?

After contacting Coverity support, we received the following answer and could successfully submit a build. It seems there was some hiccup on Coverity's side.
"This was due to some behind the scenes issues on our end – nothing interesting, but it is back up and running now. Thanks for your patience."

Related

Build Visual Studio project fails - The cloud operation was unsuccessful

I'm using two laptops and stored my C# code in OneDrive.
I am aware that sharing code via OneDrive may not be the best approach, but that's what I'm dealing with for now.
I noticed that on laptop 1 I have to define the following path to the data file (mdf):
C:\Users\Diet\OneDrive\Personal\VisualStudio2019\Repos\project\project\App_Data\data.mdf
On laptop 2, the path is different because the user I'm logged in with has a different name (or at least that's what I believe is the cause):
C:\Users\Dieter\OneDrive\Personal\VisualStudio2019\Repos\project\project\App_Data\data.mdf
Updating this path in the Web.config fixed the connection to the database, BUT building the solution still returns an error, also related to a cloud operation, which is why I think it is caused by the OneDrive path...
The error message:
CSC : error CS0041: Unexpected error writing debug information -- 'The cloud operation was unsuccessful.'
I welcome your insights. Thank you for helping me out.
I have my projects stored in OneDrive and had this same issue. The fix was to set the entire Project folder contents to "Always keep on this device".
Seems that building the solution in VS was attempting to write to files that were not cached locally from OneDrive. As soon as I changed the setting, the build worked!
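If you prefer to do the same thing from a command prompt, something roughly like this should be equivalent (my assumption: the +p "pinned" attribute is what "Always keep on this device" sets for OneDrive Files On-Demand):
rem run inside the project folder; recursively pin every file and subfolder so OneDrive keeps them on disk
attrib +p *.* /s /d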
I was also storing my project on OneDrive and got the same error after installing a new SSD.
Rebuilding the solution was enough for me.

Need help publishing an XML report with the Parasoft Findings plugin in TeamCity

I added a build step for my project in TeamCity which consists of using the Parasoft Findings plugin to publish an XML report of all the code violations. The problem is that TeamCity is failing to parse the XML report. It says there is an unexpected report format and to see the log (which I couldn't find).
I already checked the report location pattern, which is right. I don't use SOAtest but C++test, so I only put "Parasoft analyzers 10.x" for the report type.
I'm sorry, but you did not provide any additional details regarding your question, and it's hard to help you.
Please provide:
the error message
your build step configuration and the versions of TeamCity and C++test
the command line used to start C++test (if you are using variables, replace them with the real values used in your run)
By "log" I think you should check your build step log and provide us with the relevant content.
You can find such logs in two different ways:
You can download it by clicking "Download full build log" on the build log page (recommended).
Try to find the raw output from the build agent by looking in the agent's logs directory, for example in c:\TeamCity_dir_agent\logs\teamcity-build.log
The following article from JetBrains' TeamCity manual might be a good point to start.
Edit:
Parasoft updated the plugin which fixed the issue:
https://plugins.jetbrains.com/plugin/9949-parasoft-findings
I have the same issue. Error messages in build log:
Unexpected report format: <path-to>\report.xml. See log for details.
Failed to parse XML report
Failed to parse XML report
Step test results (Parasoft Findings) failed
teamcity-agent.log and teamcity-build.log have no entries for this build step because there is no code path that writes to them in this error case.
Edit/Workaround:
In the report, the node <ExecutedTestsDetails> must come below the opening <Exec> node. The node still has the right indentation, but it is at the same level as <Exec>. The XSL of the TeamCity plugin works perfectly if you fix the report XML manually.
To make it work you can add the Build Feature "File Content Replacer" like this:
Regex: (?s)(<ExecutedTestsDetails.*?<\/ExecutedTestsDetails>).*?(<Exec.*?>)
Replace with: $2 $1
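Roughly, and assuming nothing of interest sits between the two elements in the generated report, the replacement turns this sibling layout
<ExecutedTestsDetails> ... </ExecutedTestsDetails>
<Exec ...>
into the nesting the plugin's XSL expects:
<Exec ...> <ExecutedTestsDetails> ... </ExecutedTestsDetails>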

Azure Function - Publishing Failed - RequestTimeout

I have a basic Azure Function app. When I try to publish the app, I receive an error that says "error : The attempt to publish the ZIP file through https://... failed with HTTP status code RequestTimeout.".
This app is a .NET Standard app. I followed the instructions here. The difference is that my app has an Event Hub Trigger instead of the Http Trigger shown in the documentation. I don't understand why I'm getting a timeout during deployment. I also don't know how to get past this.
What am I doing wrong?
Update
Here are the logs.
1>------ Build started: Project: MyProject.Functions, Configuration: Release Any CPU ------
1>MyProject.Functions -> C:\MyProject\MyProject.Functions\bin\Release\netcoreapp2.1\bin\MyProject.Functions.dll
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
Publish Started
MyProject.Functions -> C:\MyProject\MyProject.Functions\bin\Release\netcoreapp2.1\bin\MyProject.Functions.dll
MyProject.Functions -> C:\MyProject\MyProject.Functions\obj\Release\netcoreapp2.1\PubTmp\Out\
Publishing C:\MyProject\MyProject.Functions\obj\Release\netcoreapp2.1\PubTmp\MyProject.Functions - 20181101105531356.zip to https://my-project.scm.azurewebsites.net/api/zipdeploy...
C:\Users\me\.nuget\packages\microsoft.net.sdk.functions\1.0.23\build\netstandard1.0\Microsoft.NET.Sdk.Functions.Publish.ZipDeploy.targets(42,5): error : The attempt to publish the ZIP file through https://my-project.scm.azurewebsites.net/api/zipdeploy failed with HTTP status code RequestTimeout. [C:\MyProject\MyProject.Functions\MyProject.Functions.csproj]
According to this:
https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file
you should be able to pass ?isAsync=true to the zipdeploy URL (so it would be: https://my-project.scm.azurewebsites.net/api/zipdeploy?isAsync=true).
This request resolves faster without a timeout, and then you can grab the Location header from the response, which you can poll to see the status of your deployment.
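From the command line, the asynchronous deployment might look roughly like this (a sketch; the deployment credentials and zip path are placeholders):
# POST the zip asynchronously; -D - dumps the response headers so the Location header is visible
curl -X POST -u '<deploy-user>:<password>' --data-binary @publish.zip -D - \
     "https://my-project.scm.azurewebsites.net/api/zipdeploy?isAsync=true"
# then poll the URL returned in the Location header until the deployment finishes
curl -u '<deploy-user>:<password>' "<URL-from-Location-header>"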
In my case this error was because of the version of the packages in my .csproj file. After updating them there was no error and the publish was successful.
I faced this recently and spent 2 complete days trying to fix it. Tried most of the solutions suggested here and on other posts.
What finally worked for me is removing my Publish settings and creating a new one by uploading a brand new .PublishSettings file.
How to get .PublishSettings file?
On the Azure Portal, on your Function App, click on "Get Publish Profile".
It will automatically start downloading.
How to Upload Publish Profile?
When trying to publish the project from Visual Studio, click on New -> select "Import Profile",
and browse to your .PublishSettings file.
Then, just select this new profile (if it's not selected already), and click on Publish button as you would usually do.
In my case, it was an issue with two things:
1] Visual Studio and Azure are flaky. Timeouts in a working scenario are still somewhat regular, on a bad day happening about 50-75% of the time for me. This is with an 80 MB function app, which is not super big, and I have gigabit Internet.
2] Someone deleted the file share for the storage. I had to fix WEBSITE_CONTENTAZUREFILECONNECTIONSTRING to point to the right storage connection string, and I had to update WEBSITE_CONTENTSHARE to point to a valid file share name, which I had to create in the storage account referenced by the WEBSITE_CONTENTAZUREFILECONNECTIONSTRING connection string.
If you are using a development and a production function slot, I would suggest making WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and WEBSITE_CONTENTSHARE deployment slot settings; that way you can link to a production and a development storage environment. This is especially handy if you are using tables or blob storage and don't want to have to prefix or suffix all your table names or keys. In my opinion these two settings should be slot settings by default.
Once I made these changes I could publish, though I am still dealing with the intermittent timeouts.
The error messaging around Azure Function publishing is bad to nonexistent, with any kind of configuration or resource error simply causing a timeout.
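For reference, the two content settings can be corrected from the CLI along these lines (a sketch; resource names and values are placeholders):
# point the function app at a valid storage connection string and an existing file share
az functionapp config appsettings set -g <resource_group> -n <app_name> \
    --settings "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING=<storage-connection-string>" \
               "WEBSITE_CONTENTSHARE=<file-share-name>"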
I got the same issue when using Visual Studio. Very frustrating.
But then I just used the zip file that VS created and used
az functionapp deployment source config-zip -g <resource_group> -n \
<app_name> --src <zip_file_path>
to publish.
You can find more options in
https://learn.microsoft.com/en-us/azure/azure-functions/deployment-zip-push
I got the same issue recently.
I'm not sure if they are related, but it started working fine after updating the NuGet package "Microsoft.NET.Sdk.Functions" to v3.0.7.
Changing the profile to use Web Deploy was the only way I could update my Azure Function.
When downloading the profiles from the Azure Portal and importing them into VS, I noticed it imported two profiles: one for Zip, and another for the Web Deploy upload method.
The Zip publish profile failed, but the second, Web Deploy profile worked and updated perfectly.

Error occurred while starting the build in Openshift 3

I have been trying to deploy a WAR file as an OpenShift project. The server used is jboss-webserver30-tomcat8. I have followed the steps below:
Put the ROOT.war file under the 'deployments' directory on the local system.
Push the changes to GitHub.
Create a new Java project in OpenShift 3 and provide the GitHub repository details.
No automatic build or deployment starts. On manually clicking the Start Build button, the below error is displayed:
An error occurred while starting the build. Reason: Error resolving
ImageStreamTag jboss-webserver30-tomcat8-openshift:1.2 in namespace
openshift: unable to find latest tagged image
Please suggest how I can resolve this error.
This is an issue with how the jboss-webserver30-tomcat8-openshift imagestream is defined in the cluster. We are working to correct this; it is not currently importing the correct set of tags, and as a result the 1.2 tag stopped being a valid tag when it should be.
However, the short-term solution is to change your buildconfig to reference one of the tags that has a valid image reference associated (e.g. 1.3) instead of the 1.2 tag it is currently referencing. Your build should then be able to run.
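If you would rather do that from the CLI than the web console, something along these lines should work (a sketch; the buildconfig name is a placeholder):
# re-point the source build strategy at the 1.3 tag of the builder image
oc patch bc/<your-buildconfig> -p '{"spec":{"strategy":{"sourceStrategy":{"from":{"kind":"ImageStreamTag","namespace":"openshift","name":"jboss-webserver30-tomcat8-openshift:1.3"}}}}}'
# then trigger a new build
oc start-build <your-buildconfig>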
A (temporarily) unavailable builder image may be related to the platform upgrade that coincides with the time your question was posted.
Generally, the best place to check for any incident reports or scheduled maintenance is the Status Page (Starter | Pro clusters; it's linked in the web console too, in the upper right corner of the interface).
If this does not seem to be related (e.g. you're not on the starter-us-west-2 cluster where the platform upgrade is taking place) or the issue persists after the maintenance is over, I would encourage you to check the open issues and log a new bug report if it's not in the list.
Thank you.

Coverity OpenSource Scan: Failed to retrieve tar file

We are trying to use the Coverity OpenSource service but have problems submitting our project files for analysis.
Whenever we submit the project.tgz to Coverity (no matter whether this is done via the automated submission instructions or via the website directly),
we see that the build is being queued for a short time:
Last Build Status: Running. Your build is currently being analyzed
But after a few seconds the build fails because it cannot find the archive:
Last Build Status: Failed. Your build has failed due to the following reason. Please fix the error and upload the build again.
Error details: :Failed to retrieve tar file ...more
The build log seems fine:
2015-12-18T12:30:44.458433Z|cov-build|5752|info|> Build time (cov-build overall): 00:34:26.499117
2015-12-18T12:30:44.458433Z|cov-build|5752|info|>
2015-12-18T12:30:44.462750Z|cov-build|5752|info|> Build time (C/C++/Java emits total): 00:49:03.604351
2015-12-18T12:30:44.462750Z|cov-build|5752|info|>
2015-12-18T12:30:44.462750Z|cov-build|5752|info|>
2015-12-18T12:30:44.462794Z|cov-build|5752|info|> 397 C/C++ compilation units (100%) are ready for analysis
2015-12-18T12:30:44.462794Z|cov-build|5752|info|> 19 Java compilation units (100%) have been captured and are ready for analysis
The issue seems to be consistent with "Error details: :Failed to download tar file from ". Unfortunately, there is no solution there.
Is there any naming convention and/or size restriction for the archive?
Thanks for your help!
After contacting Coverity support, we received the following answer and could successfully submit a build. It seems there was some hiccup on Coverity's side.
"This was due to some behind the scenes issues on our end – nothing interesting, but it is back up and running now. Thanks for your patience."