Suspicious behaviour of Spring Web Application - REST

I was auditing my Spring web app's security and found a strange thing. Whenever I hit the address https://xxxxxxxxx.xxx/app, the browser prompts to download a document, but there is no endpoint named "/app" in my REST controller. Moreover, the downloaded document is blank.
Here is my request info:
GET /app HTTP/1.1
Host: xxxxxxxx.xxx
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: NG_TRANSLATE_LANG_KEY=%22en%22; count=0
Connection: close
Upgrade-Insecure-Requests: 1
Here is the response sent from the server:
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Tue, 05 Jun 2018 11:19:01 GMT
Content-Type: application/octet-stream
Content-Length: 0
Connection: close
Expires: Sun, 05 Jun 2022 11:19:01 GMT
Cache-Control: max-age=126230400000, public
X-XSS-Protection: 1; mode=block
Pragma: cache
Accept-Ranges: bytes
Last-Modified: Fri, 01 Jun 2018 08:50:14 GMT
X-Content-Type-Options: nosniff
X-Application-Context: some-app
When I try it from my local system, there is no such issue. I've already disabled directory listing for my application, but the problem is still there. Please let me know if any other information is required.
My NGINX conf is as follows:
server {
    listen 443 ssl http2;
    server_name xxxxxxxx.xxx;

    # Configure SSL
    ssl_certificate /etc/ssl/certs/nginx/xxxxxx.xxx.chained.crt;
    ssl_certificate_key /etc/ssl/certs/nginx/xxxxxxx.key;
    include /etc/nginx/includes/ssl.conf;

    location / {
        include /etc/nginx/includes/proxy.conf;
        proxy_pass http://10.210.xx.xx:8080;
    }

    access_log off;
    error_log /var/log/nginx/error.log error;
}
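The X-Application-Context header is added by Spring Boot itself, so the empty application/octet-stream response is most likely produced by the application rather than by nginx. A quick way to confirm (a diagnostic sketch; the upstream address is the redacted one from the proxy_pass above) is to bypass the proxy and query the backend directly:
# Dump the response headers from the Spring Boot backend, bypassing nginx
curl -D - -o /dev/null http://10.210.xx.xx:8080/app
If the backend answers with the same headers, the /app mapping comes from the application itself (for example a static resource handler), not from the proxy configuration.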

Related

How to get 103 Early Hints to work in Traefik?

I am using Traefik in Kubernetes and have a service deployed that returns a 103 Early Hints response. I can confirm that it is working by querying the service directly, e.g.:
curl -D - http://contra-web-app
HTTP/1.1 103 Early Hints
Link: <https://builds.contra.com>; rel="preconnect"; crossorigin
Link: <https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700;900&display=swap>; rel="preload"; as="font"
Link: <https://builds.contra.com/3f509d0cc/assets/entry-client-routing.4f895d55.js>; rel="modulepreload"; as="script"; crossorigin
Link: <https://www.googletagmanager.com/gtag/js?id=G-96H5NXQ2PR>; rel="preload"; as="script"
HTTP/1.1 200 OK
cache-control: no-store
referrer-policy: strict-origin-when-cross-origin
x-frame-options: sameorigin
content-type: text/html
content-length: 9062
Date: Tue, 26 Jul 2022 20:34:19 GMT
Connection: keep-alive
Keep-Alive: timeout=72
However, requesting the same service through Traefik just returns a 200 response:
curl -H 'host: contra.com' -D - http://contra-traefik.traefik/gajus
HTTP/1.1 200 OK
Cache-Control: no-store
Content-Length: 11441
Content-Type: text/html
Date: Tue, 26 Jul 2022 19:51:48 GMT
Referrer-Policy: strict-origin-when-cross-origin
Set-Cookie: contra_web_app_service=394e7e912ad85b66; Path=/; Secure
Vary: Accept-Encoding
X-Frame-Options: sameorigin
At this point, I am unable to establish whether I am missing a configuration or if Traefik does not support it.
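One way to narrow it down is to repeat the proxied request in verbose mode, since curl prints every informational response it receives before the final status (a diagnostic sketch using the same hostnames as above):
# Verbose output shows any 1xx responses that make it through Traefik
curl -sv --http1.1 -H 'host: contra.com' -o /dev/null http://contra-traefik.traefik/gajus
If no "HTTP/1.1 103 Early Hints" line appears here while the direct request shows one, the proxy is consuming the informational response instead of forwarding it.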

Getting a 401 status error whilst establishing a connection to the Concourse API

At the moment, we are trying to get CI working in our labs.
We have just followed the instructions on the Concourse website.
We are able to log in properly and have set up ~/.flyrc as recommended on the concourse-ci.org and concoursetutorial.com websites.
We have noticed that most commands return a 401 Unauthorized error.
We have gone ahead and set up the audit logs (https://concourse-ci.org/concourse-web.html#audit-logs), but it isn't clear where these are written to. Help?
It is difficult at the moment to trace this properly. By the way, this is our first exposure to Concourse.
We would like to know why, and what we can do to resolve this (to get over this hurdle).
fly -t rdb-ci set-team --team-name a-team --local-user admin --github-org organization --verbose --print-table-headers --non-interactive
2019/07/10 22:02:37 GET /api/v1/info HTTP/1.1
Host: ci.example.org
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip
2019/07/10 22:02:37 HTTP/1.1 200 OK
Content-Length: 88
Connection: keep-alive
Content-Type: application/json
Date: Wed, 10 Jul 2019 21:02:37 GMT
Server: nginx/1.12.2
X-Concourse-Version: 5.3.0
X-Content-Type-Options: nosniff
X-Download-Options: noopen
X-Frame-Options: deny
X-Xss-Protection: 1; mode=block
{"version":"5.3.0","worker_version":"2.1","external_url":"https://ci.example.org"}
setting team: a-team
role owner:
users:
- local:admin
groups:
- github:organization
apply team configuration? [yN]: y
2019/07/10 22:02:53 PUT /api/v1/teams/a-team HTTP/1.1
Host: ci.example.org
User-Agent: Go-http-client/1.1
Content-Length: 71
Content-Type: application/json
Accept-Encoding: gzip
{"auth":{"owner":{"groups":["github:organization"],"users":["local:admin"]}}}
2019/07/10 22:02:53 HTTP/1.1 401 Unauthorized
Content-Length: 14
Connection: keep-alive
Content-Type: text/plain; charset=utf-8
Date: Wed, 10 Jul 2019 21:02:53 GMT
Server: nginx/1.12.2
X-Concourse-Version: 5.3.0
X-Content-Type-Options: nosniff
X-Download-Options: noopen
X-Frame-Options: deny
X-Xss-Protection: 1; mode=block
not authorized
could not find a valid token.
logging in to team 'main'
2019/07/10 22:02:53 GET /api/v1/info HTTP/1.1
Host: ci.example.org
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip
could not reach the Concourse server called rdb-ci:
Get https://ci.example.org/api/v1/info: x509: certificate is valid for www.example.org, not ci.example.org
is the targeted Concourse running? better go catch it lol
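The last error is the real clue: the TLS certificate presented for ci.example.org only covers www.example.org, so fly cannot reach the server to log in and refresh its token, which is consistent with the 401 above. You can inspect the certificate the server actually serves with OpenSSL (a diagnostic sketch; the -ext flag needs OpenSSL 1.1.1 or newer):
# Show the subject and subjectAltName of the certificate served for ci.example.org
openssl s_client -connect ci.example.org:443 -servername ci.example.org </dev/null 2>/dev/null |
  openssl x509 -noout -subject -ext subjectAltName
Reissuing the certificate with ci.example.org in its subjectAltName, or pointing external_url at a name the certificate covers, should clear the x509 error.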

JSTree bug with nginx

A Laravel app is using JSTree to display files.
If I fetch the tree under http://localhost:8000, I receive the correct tree.
We have an nginx reverse proxy set up to access the web site from behind a proxy.
But if I open the nginx web site, in some cases there is no data. The AJAX response is correct, but JSTree doesn't render it.
Does anybody have an idea?
First I tried the jstree().last_error() function, and it returns an empty object.
Here are my headers, I hope they help:
Host: DOMAIN.de
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: http://DOMAIN.de/explorer/show/443
Content-Length: 6
Cookie: cartalyst_sentinel=eyJpdiI6I...iJ9; laravel_session=eyJp...J9
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
The response:
Cache-Control: private, must-revalidate
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/html; charset=UTF-8
Date: Tue, 12 Apr 2016 06:37:12 GMT
Expires: -1
Host: DOMAIN.de
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Set-Cookie: laravel_session=eyJpdi......3D; expires=Tue, 12-Apr-2016 08:37:35 GMT; Max-Age=7200; path=/; httponly
Transfer-Encoding: chunked
X-Powered-By: PHP/5.6.19
The PHP header:
header('Content-Type: application/json; charset=utf-8');
The problem is that with nginx the response has a different Content-Type: nginx turns "application/json" into "text/html".
Are there any options to modify this?
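nginx does not normally rewrite the Content-Type of a proxied response, so it is worth checking whether the proxied request even reaches the same code path. A comparison sketch (the endpoint path and POST body are placeholders for the real JSTree AJAX call):
# Substitute the real JSTree AJAX endpoint and POST body
curl -s -D - -o /dev/null -d 'id=1' -H 'X-Requested-With: XMLHttpRequest' \
  http://localhost:8000/tree-endpoint
curl -s -D - -o /dev/null -d 'id=1' -H 'X-Requested-With: XMLHttpRequest' \
  http://DOMAIN.de/tree-endpoint
If the direct response is application/json but the proxied one is text/html, the application is most likely returning a different response behind the proxy (for example a Laravel error or redirect page) rather than nginx rewriting the header.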

Network unreachable: robots.txt unreachable

I'm getting the error "Network unreachable: robots.txt unreachable" when trying to add my website in Google Webmaster Tools -> http://www.hyponomist.com/
You can check my robots.txt here and my sitemap.xml here.
I have read other posts here and there, but could not solve or understand what is causing this issue. Also, I tried downloading a page with the Fetch as Googlebot tool but got the same error.
Anyone knows?
Thanks in advance!
Your web server is returning a 503 error when the user-agent string says the request is from Googlebot, but 200 when it's from a browser. If you use an HTTP diagnostic tool such as Fiddler (http://fiddler2.com/), you can see this.
If you use Fiddler to send the same request that a browser would send:
GET http://www.hyponomist.com/robots.txt HTTP/1.1
Host: www.hyponomist.com
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.72 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
The response is:
HTTP/1.1 200 OK
Server: nginx/1.4.4
Date: Fri, 10 Jan 2014 21:34:42 GMT
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Retry-After: 18000
Last-Modified: Fri, 10 Jan 2014 20:43:28 GMT
Content-Encoding: gzip
If you change the user-agent to mimic Googlebot:
GET http://www.hyponomist.com/robots.txt HTTP/1.1
Host: www.hyponomist.com
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Then the response is:
HTTP/1.1 503 Service Temporarily Unavailable
Server: nginx/1.4.4
Date: Fri, 10 Jan 2014 21:35:25 GMT
Content-Type: text/html; charset=iso-8859-1
Content-Length: 234
Connection: keep-alive
Retry-After: 18000
Exactly why it's doing this, I can't tell you. 503 is normally the error sent when a server is temporarily overloaded, but that's clearly not the case here. Maybe your firewall is poorly configured, and has blacklisted Googlebot based on request frequency? Take a look at your firewall settings and your server config.
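The same comparison works without Fiddler, e.g. with curl (-A sets the User-Agent string):
# Default user agent vs. a spoofed Googlebot user agent
curl -s -D - -o /dev/null http://www.hyponomist.com/robots.txt
curl -s -D - -o /dev/null \
  -A 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)' \
  http://www.hyponomist.com/robots.txt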
Removing the trailing slash (use http://www.hyponomist.com instead of http://www.hyponomist.com/) may help

Facebook test accounts using Selenium - failing to log in my fake users

I am programmatically creating test accounts and then immediately trying to log in with them using a Selenium-driven browser. Unfortunately, the browser is just redirected to the Facebook homepage. I can briefly see what appears to be the correct URL flash by prior to the redirect, so I have no reason to believe the browser isn't going where I intend it to.
That said, if I create a fake account and then just paste the login_url into a browser, things work fine. Does anyone have any idea what might be unique about using Selenium here? Is there anything I need to do to prepare the browser for HTTPS connections?
All I'm doing is this (using Capybara and the Selenium web driver):
visit @fake_user.login_url
https://www.facebook.com/platform/test_account_login.php?user_id=100002152974488&n=ILRvb8Lqf2cq05t
GET /platform/test_account_login.php?user_id=100002152974488&n=ILRvb8Lqf2cq05t HTTP/1.1
Host: www.facebook.com
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.15) Gecko/20110303 Firefox/3.6.15
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
HTTP/1.1 302 Found
Cache-Control: private, no-cache, no-store, must-revalidate
Expires: Sat, 01 Jan 2000 00:00:00 GMT
Location: http://www.facebook.com/
P3P: CP="Facebook does not have a P3P policy. Learn why here: http://fb.me/p3p"
Pragma: no-cache
Set-Cookie: datr=d3J_TWSAN5uIXyh94O1YJkJ8; expires=Thu, 14-Mar-2013 14:06:47 GMT; path=/; domain=.facebook.com; httponly
Set-Cookie: lsd=-Lv-N; path=/; domain=.facebook.com
Content-Type: text/html; charset=utf-8
X-Powered-By: HPHP
X-FB-Server: 10.52.145.67
X-Cnection: close
Date: Tue, 15 Mar 2011 14:06:47 GMT
Content-Length: 0
http://www.facebook.com/
GET / HTTP/1.1
Host: www.facebook.com
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.15) Gecko/20110303 Firefox/3.6.15
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Cookie: datr=d3J_TWSAN5uIXyh94O1YJkJ8; lsd=-Lv-N
HTTP/1.1 200 OK
Cache-Control: private, no-cache, no-store, must-revalidate
Expires: Sat, 01 Jan 2000 00:00:00 GMT
P3P: CP="Facebook does not have a P3P policy. Learn why here: http://fb.me/p3p"
Pragma: no-cache
Set-Cookie: reg_fb_gate=http%3A%2F%2Fwww.facebook.com%2F; path=/; domain=.facebook.com
Set-Cookie: reg_fb_ref=http%3A%2F%2Fwww.facebook.com%2F; path=/; domain=.facebook.com
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8
X-Powered-By: HPHP
X-FB-Server: 10.52.163.25
X-Cnection: close
Transfer-Encoding: chunked
Date: Tue, 15 Mar 2011 14:06:47 GMT
Visit the Facebook home page before trying to visit the login URL:
visit "https://www.facebook.com"
visit @fake_user.login_url
I haven't checked the headers, but I guess Facebook sets some cookies that are needed to log in.
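The cookie theory is easy to test outside the browser, too; a sketch with curl (the login URL is the one from the trace above):
# Fetch the home page first to collect cookies, then reuse them for the login URL
curl -s -c fb_cookies.txt -o /dev/null https://www.facebook.com/
curl -s -b fb_cookies.txt -D - -o /dev/null \
  'https://www.facebook.com/platform/test_account_login.php?user_id=100002152974488&n=ILRvb8Lqf2cq05t'
If the second request now redirects somewhere other than the plain homepage, the missing cookies were indeed the problem.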