Error 411 on pushing NuGet package to NuGet.Server

I have successfully set up NuGet.Server from http://nugetserver.net.
I can access the http://localhost/ site and http://localhost/nuget/Packages.
Unfortunately, every nuget push causes the following error:
Pushing Sample.1.1.0.nupkg to 'http://localhost/api/v2/package'...
PUT http://localhost/api/v2/package/
LengthRequired http://localhost/api/v2/package/ 33ms
The response status code does not indicate success: 411 (Length Required).
It looks like the NuGet client is not setting the Content-Length header, so IIS is complaining.
How can I solve this?

I just had the same error 411 (Length Required), and my problem was that I had set -src https://nuget.org. This is wrong; it needs to be -src https://www.nuget.org (presumably the redirect from the non-www host causes the push to be re-issued without its body, which triggers the 411).
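For example, the working push then looks like this (a sketch; Sample.1.1.0.nupkg is the package from the question and <your-api-key> is a placeholder):
nuget push Sample.1.1.0.nupkg <your-api-key> -src https://www.nuget.org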

In my case it was because of being behind a proxy. The proxy would just not forward all the info. Once the proxy was removed, the server accepted the PUT request normally.

Related

Flutter web - the server responded with a status of 404 (Not Found), main.dart.js:1

My Flutter web project works smoothly when run directly in Chrome from the IDE, but after calling flutter build web and hosting the build output, the web app does not open (only a blank page) and shows this error:
Failed to load resource: the server responded with a status of 404 (Not Found)
main.dart.js:1
Failed to load resource: the server responded with a status of 404 (Not Found)
(index):1
Uncaught (in promise) TypeError: Failed to register a ServiceWorker for scope ('http://localhost:8080/') with script ('http://localhost:8080/flutter_service_worker.js'): A bad HTTP response code (404) was received when fetching the script.
Did you publish the resources from /web instead of /build/web?
First of all, don't publish the /web folder; publish the /build/web folder's resources (that is where the compiled main.dart.js lives).
If you are hosting it with GitHub, then go to index.html and change
<base href="/">
to
<base href="/YourRepoName/">
This worked for me.
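As a side note (my addition, not part of the answer above): recent Flutter versions can also set the base href at build time instead of hand-editing index.html; check flutter build web -h for your version:
flutter build web --base-href /YourRepoName/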
I'm using nginx locally on my laptop to "force" flutter to serve my page.
The simplest config that I needed:
server {
    listen 8004;
    server_name your-domain.local;

    location / {
        root /path/to/your/project/build/web;
        try_files $uri /index.html;
    }
}
That's enough, the page is working without any problems :)
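If helpful, the config can be checked and picked up like this (assuming a standard nginx install; adjust paths as needed):
nginx -t          # test the configuration for syntax errors
nginx -s reload   # tell the running nginx to reload it
Then browse to http://localhost:8004 (or the server_name you configured).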

HTTPS REST request works in browser and Postman but not from SOAP UI

I am trying to hit a REST API. It works fine from the browser and Postman, but when I try from SOAP UI it throws "javax.net.ssl.SSLException: Received fatal alert: protocol_version".
I updated SoapUI-5.3.0.vmoptions with this property:
-Dsoapui.https.protocols=SSLv3,TLSv1.2
Now it throws "javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure".
Could you please help me resolve this issue?
I'd rather avoid SSLv3 and activate TLSv1.1 instead: -Dsoapui.https.protocols=TLSv1.1,TLSv1.2
Got it working by following the options below, based on some other threads.
We need to remove the line below from "C:\Program Files\SmartBear\SoapUI-5.3.0\bin\soapui.bat" and then use that file to launch SOAP UI:
if exist "%SOAPUI_HOME%..\jre\bin" goto SET_BUNDLED_JAVA
Once we remove that, the following line from the .bat file executes instead; it uses our system Java, which solved the issue:
if exist "%JAVA_HOME%" goto SET_SYSTEM_JAVA

Unable to connect to JIRA over HTTPS server using the Perl JIRA::Client::Automated

I do not use a proxy.
Here is my code:
use JIRA::Client::Automated;
my $jira = JIRA::Client::Automated->new("https://myserver.com", "user", "password");
And the error response is:
Unable to GET /jira/rest/api/latest/issue/DCS-51191: 500 Can't connect
to myserver.com:443 Can't connect to myserver.com:443
Bad file descriptor at
C:/Users/Fred/applis_portables/Strawberry_Perl/perl/vendor/lib/LWP/Protocol/http.pm
line 47.
at createPage2.pl line 16.
Thank you for your help.
It seems that there is a self-signed certificate on the JIRA server. To bypass it, I added the following code:
my $jira_ua = $jira->ua();
$jira_ua->ssl_opts( verify_hostname => 0 );
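A slightly fuller sketch (my addition: verify_hostname => 0 only skips the hostname check; passing SSL_verify_mode => 0x00 through to the underlying IO::Socket::SSL disables certificate verification as well; both are insecure workarounds, so prefer importing the server's CA certificate where possible):
use JIRA::Client::Automated;

my $jira = JIRA::Client::Automated->new("https://myserver.com", "user", "password");

# Insecure: accept the self-signed certificate without verification.
$jira->ua->ssl_opts(
    verify_hostname => 0,
    SSL_verify_mode => 0x00,  # IO::Socket::SSL's SSL_VERIFY_NONE
);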
The error doesn't look like a JIRA::Client::Automated error. It's generated by LWP::UserAgent and usually means exactly what it says.
Do you have a self-signed certificate on your server?
Did you try to open that URL in your browser? https://myserver.com:443 (exactly as you provide it to the module).
Try using curl from your webserver:
curl -vvv https://myserver.com/jira/rest/api/latest/issue/DCS-51191
Maybe it's just a missing www. prefix in your server URL?

How to make browser stop caching GWT nocache.js

I'm developing a web app using GWT and am seeing a crazy problem with caching of the app.nocache.js file in the browser even though the web server sent a new copy of the file!
I am using Eclipse to compile the app, which works in dev mode. To test production mode, I have a virtual machine (Oracle VirtualBox) with an Ubuntu guest OS running on my host machine (Windows 7). I'm running the lighttpd web server in the VM. The VM shares my project's war directory, and the web server serves this dir.
I'm using Chrome as the browser, but the same thing happens in Firefox.
Here's the scenario:
The web page for the app is blank. According to Chrome's "Inspect Element" tool, it's because it is trying to fetch 6E89D5C912DD8F3F806083C8AA626B83.cache.html, which doesn't exist (404 Not Found).
I check the war directory, and sure enough, that file doesn't exist.
The app.nocache.js on the browser WAS RELOADED from the web server (200 OK), because the file on the server was newer than the browser cache. I verified that file size and timestamp for the new file returned by the server were correct. (This is info Chrome reports about the server's HTTP response)
However, if I open the app.nocache.js on the browser, the javascript is referring to 6E89D5C912DD8F3F806083C8AA626B83.cache.html!!! That is, even though the web server sent a new app.nocache.js, the browser seems to have ignored that and kept using its cached copy!
Go to Google -> GWT Compile in Eclipse. Recompile the whole thing.
Verify in the war directory that the app.nocache.js was overwritten and has a new timestamp.
Reload the page from Chrome and verify once again that the server sent a 200 OK response to the app.nocache.js.
The browser once again tries to load 6E89D5C912DD8F3F806083C8AA626B83.cache.html and fails. The browser is still using the old cached copy of app.nocache.js.
Made absolutely certain in the war directory that nothing is referring to 6E89D5C912DD8F3F806083C8AA626B83.cache.html (via find and grep)
What is going wrong? Why is the browser caching this nocache.js file even when the server is sending it a new copy?
Here is a copy of the HTTP request/response headers when clicking reload in the browser. In this trace, the server content hasn't been recompiled since the last GET (but note that the cached version of nocache.js is still wrong!):
Request URL:http://192.168.2.4/xbts_ui/xbts_ui.nocache.js
Request Method:GET
Status Code:304 Not Modified
Request Headers
Accept:*/*
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:192.168.2.4
If-Modified-Since:Thu, 25 Oct 2012 17:55:26 GMT
If-None-Match:"2881105249"
Referer:http://192.168.2.4/XBTS_ui.html
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Response Headers
Accept-Ranges:bytes
Content-Type:text/javascript
Date:Thu, 25 Oct 2012 20:27:55 GMT
ETag:"2881105249"
Last-Modified:Thu, 25 Oct 2012 17:55:26 GMT
Server:lighttpd/1.4.31
The best way to avoid browser caching is to set the expiration time to now and add the max-age=0 and must-revalidate controls.
This is the configuration I use with apache-httpd
ExpiresActive on

<LocationMatch "nocache">
    ExpiresDefault "now"
    Header set Cache-Control "public, max-age=0, must-revalidate"
</LocationMatch>

<LocationMatch "\.cache\.">
    ExpiresDefault "now plus 1 year"
</LocationMatch>
Your configuration for lighttpd should be:
server.modules = (
    "mod_expire",
    "mod_setenv",
)
...
$HTTP["url"] =~ "\.nocache\." {
    setenv.add-response-header = ( "Cache-Control" => "public, max-age=0, must-revalidate" )
    expire.url = ( "" => "access plus 0 days" )
}
$HTTP["url"] =~ "\.cache\." {
    expire.url = ( "" => "access plus 1 years" )
}
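After editing, the config can be checked and reloaded with something like this (paths vary by distribution):
lighttpd -t -f /etc/lighttpd/lighttpd.conf
/etc/init.d/lighttpd reload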
We had a similar issue. We found out that the timestamp of nocache.js was not updated by the GWT compile, so we had to touch the file on build. Then we also applied the fix from Manolo Carrasco Moñino's answer. I wrote a blog post about this issue: http://programtalk.com/java/gwt-nocachejs-cached-by-browser/
We are using version 2.7 of GWT as the comment also points out.
There are two straightforward solutions (the second is a modified version of the first, though).
1) Rename your *.html file that references *.nocache.js, e.g. MyProject.html to MyProject.jsp.
Now find the location of your *.nocache.js script in the renamed file:
<script language="javascript" src="MyProject/MyProject.nocache.js"></script>
Add a dynamic variable as a parameter for the JS file; this makes sure the actual contents are returned from the server every time. The following is an example:
<script language="javascript" src="MyProject/MyProject.nocache.js?dummyParam=<%= "" + new java.util.Date().getTime() %>"></script>
Explanation: dummyParam itself is of no use, but it gets us the intended result, i.e. the server returns a 200 code instead of a 304.
Note: if you use this technique, make sure you point to the right .jsp file for loading your application (before this change you were loading your app using the HTML file).
2) If you don't want to use the JSP solution and want to stick with your HTML file, then you need JavaScript to dynamically add the unique parameter value on the client side when loading the nocache file; see the sketch below.
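A minimal sketch of that client-side variant (my illustration, not from the original answer; the MyProject paths are placeholders):
<script>
  // Append a timestamp so the bootstrap script is never served from cache.
  var s = document.createElement('script');
  s.src = 'MyProject/MyProject.nocache.js?dummyParam=' + new Date().getTime();
  document.body.appendChild(s);
</script>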
I have used the first technique successfully; hope this helps.
The app.nocache.js on the browser WAS RELOADED from the web server (200 OK), because the file on the server was newer than the browser cache. I verified that file size and timestamp for the new file returned by the server were correct. (This is info Chrome reports about the server's HTTP response)
I wouldn't rely on this. I've seen a bit of strange behaviour in Chrome's dev tools with the network tab in combination with caching (at least, it's not 100% transparent for me). In case of doubt, I usually still consult Firebug.
So probably Chrome still uses the old version. It may have decided long ago that it will never have to reload the resource again. Clearing the cache should resolve this. Then make sure to set the correct caching headers before reloading the page; see e.g. Ideal HTTP cache control headers for different types of resources.
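For reference, a header pattern consistent with the configs above would be (one possibility, not the only correct set):
*.nocache.js  ->  Cache-Control: public, max-age=0, must-revalidate
*.cache.*     ->  Cache-Control: public, max-age=31536000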
Open the page in incognito mode just to get rid of the cache issue and unblock yourself.
You need to configure the cache time as mentioned in other comments.
After unsuccessfully trying to prevent caching via Apache, I created a bash script that root runs every minute in a cron job on my Linux Tomcat server.
#!/bin/bash
#
# Touches GWT nocache.js files in the Tomcat web app directory to prevent caching.
# Execute this script every minute in a root cron job.
#
cd /var/lib/tomcat7/webapps
find . -name '*nocache.js' | while read file; do
    logger "Touching file '$file'"
    touch "$file"
done
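The matching root crontab entry could look like this (assuming the script is saved as /usr/local/bin/touch-nocache.sh and made executable; the path is a placeholder of mine):
* * * * * /usr/local/bin/touch-nocache.sh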

Downloading file by WebClient Exception

I have a problem downloading particular file types with WebClient. There are no problems with the usual types (mp3, doc and others), but when I rename the file extension to .config it returns:
InnerException = {System.Net.WebException: The remote server returned an error: NotFound. ---> System.Net.WebException: The remote server returned an error: NotFound.
at System.Net.Browser.BrowserHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
When I try to access this file in the browser (http://localhost:3182/Silverlight.config), which is an ordinary XML file inside, the server returns the following error page:
Server Error in '/' Application.
This type of page is not served.
Description: The type of page you have requested is not served because it has been explicitly forbidden. The extension '.config' may be incorrect. Please review the URL below and make sure that it is spelled correctly.
Requested URL: /Silverlight.config
So I suppose this happens because of some server configuration that blocks files of unknown type.
The downloading code is simple:
WebClient webClient = new WebClient();
webClient.OpenReadCompleted += new OpenReadCompletedEventHandler(webClient_OpenReadCompleted);
webClient.OpenReadAsync(new Uri("../Silverlight.config", UriKind.RelativeOrAbsolute));
The completed event handler is omitted for simplicity.
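For completeness, a hypothetical minimal handler (my sketch, not the asker's omitted code):
void webClient_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    if (e.Error != null)
        return; // the WebException described above surfaces here

    using (var reader = new System.IO.StreamReader(e.Result))
    {
        string xml = reader.ReadToEnd(); // contents of Silverlight.config
    }
}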
I'm not sure this is possible.
The .config extension is handled by the ASP.NET engine, for security reasons (sensitive data like connection strings need to be kept safe and hidden from unauthorized viewers).
This means that visitors cannot view your web.config file's content by simply entering "www.example.com/web.config" into their browser's adress bar.
EDIT: actually you can, but I don't recommend it. If you really need to do it, you have to remove the mapping between the .config extension and the ASP.NET ISAPI filter in IIS.
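For illustration, on IIS 7+ the unblocking can be sketched in web.config roughly like this (a hedged example using the standard request-filtering and static-content elements; whether it is sufficient depends on the IIS version and the site's handler mappings, and re-allowing .config is risky because web.config itself becomes downloadable; renaming the file to .xml is the safer route):
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <fileExtensions>
          <!-- .config is denied by default; re-allow it (risky, see above) -->
          <remove fileExtension=".config" />
          <add fileExtension=".config" allowed="true" />
        </fileExtensions>
      </requestFiltering>
    </security>
    <staticContent>
      <!-- give the static file handler a MIME type for .config -->
      <mimeMap fileExtension=".config" mimeType="text/xml" />
    </staticContent>
  </system.webServer>
</configuration>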