I have created the LoadBalancer service in AKS, and my container works fine when I run it with the docker run command. Below is a screenshot of my Kubernetes dashboard; the service is also working, but I am not able to access the IP.
Kubernetes service is up and running
I am using my own image ashishrajput194/finreg-frontend:v1, which contains an Angular project served by NGINX. Here is the NGINX config file:
map $sent_http_content_type $expires {
    default                off;
    text/html              epoch;
    text/css               max;
    application/json       max;
    application/javascript max;
    ~image/                max;
}

server {
    listen 80;

    location / {
        root  /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    expires $expires;
    gzip on;
}
Please help me resolve this issue.
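For anyone hitting the same wall, a first sanity check is whether the Service actually has endpoints behind it; a minimal sketch of the usual kubectl checks (the service name finreg-frontend is a guess, substitute your own):

# Confirm the LoadBalancer was assigned an external IP and note the port mapping
kubectl get svc finreg-frontend -o wide

# An empty ENDPOINTS column means the Service selector matches no pods
kubectl get endpoints finreg-frontend

# Verify targetPort matches the port nginx listens on inside the container (80 here)
kubectl describe svc finreg-frontend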
I'm trying to connect to my MongoDB database using axios and Vue, and my server is running NGINX.
In my App.vue file, where the axios requests are, I have this:
export default {
  name: 'App',
  data() {
    return {
      baseApiURL: '[HTTP or HTTPS]://example.com:4000/api',
      // ...
    };
  },
};
This works over HTTP, but when I change it to HTTPS it doesn't. I have tried using the IP address and the domain address. The Network tab in Chrome says:
Request URL: https://www.example.com/api/
Request Method: GET
Status Code: 401 Unauthorized
I don't understand what this means exactly.
My NGINX config:
server {
    listen 80 default_server;

    ssl on;
    listen 443;
    server_name example.com;

    ssl_certificate     /usr/src/app/ssl/domain.cert.pem;
    ssl_certificate_key /usr/src/app/ssl/private.key.pem;

    # vue app & front-end files
    location / {
        root /usr/src/app/dist;
        try_files $uri /index.html;
    }

    # node api reverse proxy
    location /api/ {
        proxy_pass http://localhost:4000/;
    }
}
I'm not sure if there's anything else I should include; please let me know. I feel like it should be a small issue, since it's working over HTTP.
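To narrow down where the 401 comes from, it can help to hit the API both directly and through NGINX; a minimal sketch (port and paths as in the config above, the hostname is the question's placeholder):

# Bypass NGINX and hit the node API directly on the box
curl -v http://localhost:4000/

# Go through NGINX over HTTPS; -k skips certificate validation so a bad
# cert doesn't mask the actual response
curl -vk https://www.example.com/api/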
Not really an "answer", as it doesn't solve the exact initial problem, but I solved my issue of the site not working over HTTPS by using Caddy Server instead of NGINX, which somehow handles it automatically.
I've been struggling with this all weekend, and I'm now on my knees hoping one of you geniuses can solve my problem.
In short: I have an ingress-nginx controller (image: nginx/nginx-ingress:1.5.8) with which I'm trying to achieve self-signed mutual authentication.
The HTTPS aspect works fine, but the problem I'm having (I think) is that the ingress controller serves the request with the default cert and the ingress validates against the default CA (because it can't find my CA).
So.. Help!
Steps I've gone through on this cluster-f*** of a journey (pun intended):
I've tested it in a local Minikube cluster and it all works like a charm. When I exec -it into the ingress-controller pod and cat the nginx.conf for both my clusters (Minikube and Azure), I found large differences; hence I just found out that I'm comparing apples and pears in terms of Minikube vs. Azure k8s nginx ingresses.
This is the ingress setup that worked like a charm for my Minikube cluster (the ingress I'm using is more or less a duplicate of the file you'll find in the link): https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/
In addition, I found this, which describes at length the problem I'm having: https://success.docker.com/article/how-to-configure-a-default-tls-certificate-for-the-kubernetes-nginx-ingress-controller
From the link above the solution is simple: nuke the ingress from orbit and create a new one. Well, here's the thing: this is a production cluster, and my bosses would be all but pleased if I did that.
Another discovery I made whilst "exec -it bash"-roaming around inside the Azure ingress controller is that there is no public root cert folder (/etc/ssl/) to be found. I don't know why, but thought I'd mention it.
I've also discovered the --default-ssl-certificate=default/foo-tls parameter, but this is a default. As there will be other needs for different client auths later, I have to be able to specify dynamic CA certs for different ingresses.
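As a side note, one way to see which CA the controller actually advertises for client-cert validation is to inspect the TLS handshake; a minimal sketch (hostname and file names are placeholders):

# Look for "Acceptable client certificate CA names" in the handshake output;
# if only the default CA appears, the per-ingress CA secret was never picked up
openssl s_client -connect my-ingress-host:443 -servername my-ingress-host

# Then try an actual mutually-authenticated request with the client keypair
curl -vk https://my-ingress-host/ --cert client.crt --key client.key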
I'll paste the nginx.conf that I think is the problem below. Hoping to hear back from some of you, because at this point in time I'm thoroughly lost. Hit me up if additional information is needed.
user  nginx;
worker_processes  auto;
daemon off;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout  65s;
    keepalive_requests 100;

    #gzip on;

    server_names_hash_max_size 512;
    variables_hash_bucket_size 256;
    variables_hash_max_size    1024;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80 default_server;
        listen 443 ssl default_server;

        ssl_certificate     /etc/nginx/secrets/default;
        ssl_certificate_key /etc/nginx/secrets/default;

        server_name _;
        server_tokens "on";

        access_log off;

        location / {
            return 404;
        }
    }

    # stub_status
    server {
        listen 8080;

        allow 127.0.0.1;
        deny all;

        location /stub_status {
            stub_status;
        }
    }

    server {
        listen unix:/var/run/nginx-status.sock;
        access_log off;

        location /stub_status {
            stub_status;
        }
    }

    include /etc/nginx/config-version.conf;
    include /etc/nginx/conf.d/*.conf;

    server {
        listen unix:/var/run/nginx-502-server.sock;
        access_log off;

        location / {
            return 502;
        }
    }
}

stream {
    log_format stream-main '$remote_addr [$time_local] '
                           '$protocol $status $bytes_sent $bytes_received '
                           '$session_time';

    access_log /var/log/nginx/stream-access.log stream-main;
}
So the problem came down to the ingress controller being old and outdated. I didn't have the original Helm chart that it was deployed with, so I was naturally worried about rollback options. Anyhoo, I took a leap of faith in the middle of the night local time and nuked the namespace, recreated the namespace, and ran helm install stable/nginx-ingress.
There was minimal downtime, 1 min at most, but beware: lock down the public IP that's attached to the load balancer before going all third world war on your services.
I had to add an argument to the standard Azure helm install command to imperatively set the public IP for the resource; I'm pasting it below in case any poor soul finds himself in the unfortunate situation of a new Helm CLI and lost charts.
That's it; keep your services up to date and make sure to save your charts!
helm install nginx stable/nginx-ingress --namespace ingress-basic \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.service.loadBalancerIP=*YourVeryPreciousPublicIP*
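For completeness, a simple way to note the existing external IP before deleting anything (namespace as in the command above):

# List services in the ingress namespace and write down the EXTERNAL-IP
kubectl get svc -n ingress-basic -o wide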
I'm trying to use the WooCommerce (v3.5.4) REST API on my VPS (Debian 9, Nginx). Everything works well on my local machine (Windows 10, XAMPP).
wpbop/ is the folder (/var/www/wpbop/) where the WordPress files are stored.
The following basic URL in a browser should return the endpoints of the API (no authentication needed for this first step):
http://my-public-ip/wpbop/wp-json/wc/v3
Or with curl on the command line:
curl http://127.0.0.1/wpbop/wp-json/wc/v3
In both cases, I get a 404 Not Found error.
I can access the blog / blog admin without any problems (http://my-public-ip/wpbop).
My permalinks are set to "Post name" in the WordPress admin panel, as recommended by many people in the same situation.
EDIT - SOLUTION:
Since my WordPress installation is in a subdirectory,
try_files $uri $uri/ /index.php$is_args$args;
can't find index.php. Just change this line to:
try_files $uri $uri/ /wpbop/index.php$is_args$args;
and it works!
Perhaps the problem is coming from my Nginx conf file?
server {
    server_name localhost;
    listen 80;
    root /var/www;

    location /wpbop {
        index index.php;
        access_log /var/log/nginx/blog.access.log;
        error_log  /var/log/nginx/blog.error.log;
        try_files $uri $uri/ /index.php$is_args$args;
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_index index.php;
            fastcgi_pass 127.0.0.1:7000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }
    }
}
I have tried many things without any results, and I've been stuck for several days. Can someone help me?
Thanks for reading.
This case needs a simple fix in the NGINX configuration file, related to the path of my WordPress installation. Since my WordPress installation is in a subdirectory,
try_files $uri $uri/ /index.php$is_args$args;
can't find index.php. Just change this line to:
try_files $uri $uri/ /wpbop/index.php$is_args$args;
When you get a 404, try accessing http://yoursite/?rest_route=/wp/v2/posts
Official documentation: https://developer.wordpress.org/rest-api/key-concepts
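A quick way to compare the two URL forms from the server itself (the WooCommerce route in rest_route form is inferred from the path in the question):

# Pretty-permalink form; a 404 here usually means try_files never hands
# the request to WordPress's index.php
curl -i http://127.0.0.1/wpbop/wp-json/wc/v3

# Plain query-string form; works even when permalink rewriting is broken
curl -i "http://127.0.0.1/wpbop/?rest_route=/wc/v3"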
Move root /var/www/; up one level (to the server context); it is not being inherited.
I'm working on OSX with Docker, which installs a light VM to run containers in.
So my app is on the IP 192.168.99.100.
I would like to hit my local IP on the host (192.168.1.10) and have it redirect to my VM.
I first made a 301 redirection to the VM IP, and of course it works well on my machine but not from a remote machine inside my network.
server {
    listen 80;
    server_name localhost;

    return 301 http://192.168.99.100/;

    location = /info {
        allow 127.0.0.1;
        deny all;
        rewrite (.*) /.info.php;
    }

    error_page 404 /404.html;
    error_page 403 /403.html;
}
What do I have to do?
I'm answering my own question. I just had to add a proxy_pass to the VM IP, like this:
location / {
    proxy_pass http://192.168.99.100/;
}
It was that easy.
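A minimal check from another machine on the network (IPs as in the question):

# The host's nginx should now answer with content proxied from the VM,
# rather than a 301 to an IP the remote machine can't reach
curl -I http://192.168.1.10/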
I have a distribution with 2 CNAMEs: example.com and www.example.com. My goal is to redirect www.example.com to example.com.
CloudFront points to a load balancer, which points to an EC2 machine. This EC2 machine serves through nginx.
My config is:
server {
    listen 80;
    server_name default;

    access_log /var/log/nginx/default.access.log;

    root /xxxx/;
    index index.html index.htm;

    location /index.html {
        add_header "Cache-Control" "public, must-revalidate, proxy-revalidate, max-age=0";
    }
}

server {
    listen 80;
    server_name ~^(www\.)?(?<domain>.+)$;
    return 301 https://$domain$request_uri;
}
The problem is that nginx receives "XXX-YYY-ZZZ-WWW.ap-northeast-1.elb.amazonaws.com" as the Host header, not the CNAME (so I don't have the information needed to extract the domain).
Any solution?
You might try enabling forwarding of the Host header in CloudFront (see details here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html). Then you can use the Host header value in your nginx config to trigger the redirect.
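Once Host forwarding is enabled, the redirect can be verified by sending the expected Host header straight to the load balancer (hostname as in the question):

# Should return 301 with Location: https://example.com/... once nginx
# sees "www.example.com" instead of the ELB hostname
curl -I -H "Host: www.example.com" http://XXX-YYY-ZZZ-WWW.ap-northeast-1.elb.amazonaws.com/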