proget Failed to process request. 'There was an error processing the request: Invalid API key.' - nuget

I recently set up ProGet to try its NuGet and Chocolatey servers. Now, when I try to publish packages to the NuGet feed through a TeamCity build, I keep getting the error "proget Failed to process request. 'There was an error processing the request: Invalid API key.'". I've made 100% sure that the username and password are correct, and I specified the API key as per the ProGet documentation (i.e. username:password). That feed already has one package, which I published on the day I installed ProGet to try it out. What could have gone wrong?

I found a workaround.
[1] I confirmed that my username:password combination is correct.
[2] I renamed the original feed to feed_old (or whatever you want; you could even delete it if it contains nothing important). This was the feed I had created for trying things out, which wasn't allowing publishing through TeamCity and was producing the error message quoted in the question.
[3] I created a new feed with the name I wanted.
[4] I confirmed that the username I was using in the API key had the necessary privileges to publish to this new feed.
[5] I then tested publishing to this feed through TeamCity and VOILA!! It worked.
I don't know why this happened in the first place, though. It would be good to find out and fix the underlying cause rather than rely on the workaround above.
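For reference, the push itself from the command line looks roughly like this (a sketch: the feed URL and package file are placeholders, and ProGet expects the API key in username:password form):
nuget push MyPackage.1.0.0.nupkg -Source http://proget.example.com/nuget/MyFeed/ -ApiKey username:password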

Related

InternalServiceFault when trying to connect SPGo to SP Online in VS Code

I'm trying to connect the SPGo plugin in Visual Studio Code to a SharePoint Online site. There are lots of guides for this, for instance this one: https://medium.com/niftit-sharepoint-blog/saying-goodbye-to-sharepoint-designer-ac939a0b79ba
In short, I'm doing it like this:
Open VS Code
Open a local, empty folder
SPGo: Configure workspace (following the guide, ending up with an spgo.json looking like the one I pasted)
SPGo: Populate local workspace (it asks me for credentials, which I enter O365-style: email and password)
The status bar says "Populating workspace"
After about 10 seconds I get the pasted error in the output window (spgo)
I'm using the newest versions:
Visual Studio Code 1.37.1
SPGo 1.4.3
I have tried various sites in my tenant and I know they are up. I am Site Collection Administrator for the sites. I know the credentials are correct, of course. Changing remoteFolders and publishingScope doesn't affect anything. I assume authenticationType should be "Digest".
SPGo.json:
{
    "sourceDirectory": "src",
    "sharePointSiteUrl": "https://domain.sharepoint.com/sites/SiteName",
    "publishingScope": "Major",
    "authenticationType": "Digest",
    "remoteFolders": [
        "/SitePages/"
    ]
}
I don't get any files in the local folder, instead I get an error in the output:
================================ ERROR ================================
<s:Fault>
<s:Code>
<s:Value>s:Receiver</s:Value>
<s:Subcode>
<s:Value xmlns:a="http://schemas.microsoft.com/net/2005/12/windowscommunicationfoundation/dispatcher">a:InternalServiceFault</s:Value>
</s:Subcode>
</s:Code>
<s:Reason>
<s:Text xml:lang="en-US">The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the <serviceDebug> configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework SDK documentation and inspect the server trace logs.</s:Text>
</s:Reason>
</s:Fault>
Error Detail:
----------------------
{}
===============================================================================
Sorry I missed this post for so long. First: thanks for the detailed write-up. This is the first time I've seen this specific issue with SPGo, so I don't know for sure what the root cause is.
A couple of questions:
Are you using ADFS Authentication with your Office 365/SharePoint Online instance?
Are you able to use Addin-Only Authentication on this SP Site?
SPGo should be able to work with ADFS in SharePoint Online automatically but, as a fall-back, you could use Addin-Only Authentication. In this scenario you would create a ClientId and ClientSecret pair for the SharePoint site collection you are accessing and authenticate using those credentials. The ClientId would act as your username, and the ClientSecret would be your password.
Under the covers, I am using the node-sp-auth package for user authentication. Sergei (s-KaiNet on GitHub) has a great write-up on how to enable Addin-Only Authentication in SharePoint Online on his site, which you can find here.
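For orientation, that write-up boils down to registering the add-in at <site>/_layouts/15/appregnew.aspx (which yields the ClientId/ClientSecret pair) and then granting it app-only rights at <site>/_layouts/15/appinv.aspx with a permission request roughly like this (a sketch; the Scope and Right shown are assumptions to adapt to your needs):
<AppPermissionRequests AllowAppOnlyPolicy="true">
    <AppPermissionRequest Scope="http://sharepoint/content/sitecollection" Right="FullControl" />
</AppPermissionRequests>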
Thanks for using SPGo!

Update action package with gactions always returns request timeout

I created a project in the Actions console and made a test action package for a smart home app. I want to try uploading the action package using gactions. However, every time I execute this command
./gactions --verbose update --action_package action.json --project my_project_id
the result is always like this:
Unable to update: Patch https://actions.googleapis.com/v2/agents/my_project_id?updateMask=agent.draftActionPackage.actions%2Cagent.draftActionPackage.conversations&validateOnly=false: Post https://accounts.google.com/o/oauth2/token: dial tcp 216.58.200.45:443: i/o timeout
I checked the verbose log and noticed that it reads some data from creds.data:
Reading credentials from: creds.data
Then I noticed that creds.data contains the access token and its expiry time. But the expiry time is July 18, which is many days from now. I'm not sure whether this is what causes the timeout error, and I also don't know how to update creds.data to get a new access token.
Alright, I noticed that part of this error was my network problem. I was able to open Yahoo and other sites while the update just didn't work, but never mind, I simply switched to a different Wi-Fi network.
Then I deleted creds.data and executed the update command again, which printed this:
Gactions needs access to your Google account. Please copy & paste the URL below into a web browser and follow the instructions there. Then copy and paste the authorization code from the browser back here.
Visit this URL:
https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=237807841406-o6vu1tjkq8oqjub8jilj6vuc396e2d0c.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fassistant+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Factions.builder&state=state
Enter authorization code:
Then I followed the instructions above, got the authorization code, copied and pasted it into the console, and everything works fine now.
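For anyone hitting the same thing, the fix boils down to this (a sketch; it assumes gactions and creds.data sit in the current directory):
rm creds.data
# re-running the update with no creds.data present triggers the OAuth flow above
./gactions --verbose update --action_package action.json --project my_project_id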

GitHub - Failed to clone, remote sync, or any GitHub activity

I have GitHub on Windows 7. GitHub doesn't seem to allow me to check in code, as something is messed up.
I did try changing the credentials and so forth by looking things up online, but nothing seems to work.
I still see the Bad credentials error alongside some WAMP Developer errors.
I don't know how WAMP Developer is related to GitHub.
I did have WAMP Developer on this PC once upon a time.
The log file for the attempt is here: Github log file.
The error:
2016-03-22 12:44:50.7329|ERROR|thread: 5|StartupLogging| MISSING PATH!!: 'C:\WampDeveloper\Components\Apache\bin'
That simply means your %PATH% currently references a non-existent path: you could clean up your PATH environment variable, which is currently quite large.
This is not blocking for GitHub Desktop, though.
The other error is linked to a key previously used for:
Logged user r... off of host 'https://<server_url>'
That key ("C:\Users\ffgr.ghjk\.ssh\github_rsa") does not work when used to authenticate to github.com.
Make sure that key (the public one, github_rsa.pub) is added to your GitHub account: "Adding a new SSH key to your GitHub account"
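To verify the key afterwards, you can test SSH authentication against github.com directly (the -i path reuses the key location from the log above):
ssh -i "C:\Users\ffgr.ghjk\.ssh\github_rsa" -T git@github.com
A successful setup answers with "Hi <username>! You've successfully authenticated", while "Permission denied (publickey)" means the key is still not registered with your account.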

WSO2 Carbon 404 Error Redirection for Webapp Deployment?

We are using WSO2 Carbon 4.2.0 through the WSO2 Application Server (AS) package. In replacing an older, highly customized Carbon installation (provided by a company that no longer supports the product, has abandoned it, refuses to work on it, and left us no details on how or what they modified in Carbon), we have deployed a couple of web applications in the webapps container, as they were deployed in the older instance. We have changed our WebContextRoot in carbon.xml from the default "/" to a sub-URL, e.g. "/stuff", as is also detailed in the self-answered SO question here. However, the answer given there does not detail what the OP actually encountered when he modified his WSO2 instance.
In testing the above configuration, we noticed that if a user goes to a non-existent web address on the server, then depending on the format of the URL they are either:
redirected to a blank page;
served a "500 Internal server error" (I suspect this is the embedded Tomcat?);
sent to the Carbon login page (which we definitely do not want to happen, for security reasons); or
shown an XML document stating:
<faultString> The service cannot be found for the endpoint reference (EPR) /stuff/services/nonexistantservicename </faultString>
At least in the case of missing content, we wish the user to be sent to a standardized 404 error page, or at the least to receive an HTTP 404 status from the server. For services the XML error is palatable; we can deal with that.
The only option we see right now to circumvent this issue is to place a proxy in front of the WSO2 instance, which would be another layer to manage and tune, and could degrade performance. Please know that I am not a programmer, just an admin with DevOps experience; I would not know how to handle this with, for example, a Java solution or by re-coding parts of WSO2. Customizing the core product would also hamper future upgrades of WSO2, a scenario we are trying to dig ourselves out of now, as detailed above. Is there no internal WSO2 mechanism to handle non-existent content? Can we not redirect such errors to a standard canned response page?
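(One idea I came across but have not verified on Carbon 4.2.0: since the webapps are served by the embedded Tomcat, a servlet-spec error page in the shared web.xml, which I believe lives at repository/conf/tomcat/web.xml, might map missing content to a canned page:
<error-page>
    <error-code>404</error-code>
    <location>/error404.html</location>
</error-page>
I don't know whether Carbon respects this, which is partly why I'm asking.)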

Finding latest TeamCity Backup via REST API

I found plenty of information and examples about triggering TeamCity 8.1.2 backups via the REST API.
But leaving the backup files on the same server is pretty useless for disaster recovery, so I'm looking for a way to copy the generated backup file to another location.
My question is about finding the name of the latest available backup file via the REST API.
The web GUI shows this information under "Last Backup Report" on the "Backup" page of the Server Administration.
I've dug through https://confluence.jetbrains.com/display/TCD8/REST+API#RESTAPI-DataBackup and the /httpAuth/app/rest/application.wadl on my server, and found no mention of a way to get this info through the REST API.
I also triggered a backup in the hope that the response would give this information, but it's not there - the response body is empty and the headers don't include it.
Right now I intend to fetch the HTML page and extract the information from there, but this feels very hackish and fragile (the structure of the web page could change at any time).
Is there a recommended way to get this information automatically?
Thanks.
JetBrains support got back to me with the right answer - I should use a POST method, not GET, even if the request body is empty.
Here is an example of a working request:
curl -u user:password --request POST 'http://localhost:8111/httpAuth/app/rest/server/backup?includeConfigs=true&includeDatabase=true&fileName=testBackup'
And the response to that contains the resulting file name in plain text: testBackup_20150108_141924.zip
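With the file name from the response in hand, pulling the archive off the server for disaster recovery can be as simple as this (a sketch; the path assumes the default TeamCity data directory location, which may differ in your installation):
scp user@teamcity-host:~/.BuildServer/backup/testBackup_20150108_141924.zip /mnt/dr-backups/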