Viewing a MEAN app in Google Cloud

I am trying to access a barebones MEAN stack application set up with Google's gcloud one-click deployments. I have successfully been able to add the code for the MEAN app, and I can access the instance (via ssh) and run/start the app using grunt. Neither of the external links provided by gcloud is working: http://<external-ip>:3000 or http://<external-ip>.
Any idea on how to access the app for viewing/testing?

I figured it out by allowing the default MEAN.JS port 3000 in the firewall rules in the Google Developers Console, under Networking > Firewall rules. You must also allow incoming HTTP traffic (port 80).
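For anyone who prefers the command line: the equivalent rule can be created with gcloud (the rule name is illustrative, and this assumes the default network):

    gcloud compute firewall-rules create allow-mean-3000 --allow tcp:3000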

How to limit access in Cloud Foundry

I am new to Cloud Foundry.
Is there any way that only specific users can view and update an app deployed in Cloud Foundry?
1. I deployed an app in Cloud Foundry using the "cf push" command.
2. After entering the "cf push" command, I got the message below.
Using manifest file /home/stevemar/node-hello-world/manifest.yml
Creating app node-hello-world-example...
name: node-hello-world-example
requested state: started
routes: {route-information}
last uploaded: Mon 14 Sep 13:46:54 UTC 2020
stack: cflinuxfs3
buildpacks: sdk-for-nodejs
type: web
instances: 1/1
memory usage: 256M
3. Using the {route-information} above, I can see the deployed app in a browser by entering the URL below:
https://{route-information}
This way, anyone can see the app from a browser, but I don't want it to be visible to everyone; I want to limit access to specific users.
I heard that a global IP is allocated to {route-information} by default.
Is there any way to limit access to only specific users?
(For example, is there any function in Cloud Foundry like the "private registry" in Kubernetes, which is not open to the public?)
Since I am using Cloud Foundry in IBM Cloud, it would be better if there is a solution using IBM Cloud.
I’ve already granted cloud foundry role to the other user.
Thank you.
The Cloud Foundry platform itself does not provide any access controls for applications. If you assign a public route to your application, where the DNS is publicly resolvable and the foundation is on the public Internet, like IBM Bluemix, then anyone can access your app.
There are a number of things you can do to limit access, but they do require some work on your part.
Use a private DNS. You can add any domain you want to Cloud Foundry, even ones that don't resolve. That means you could add my-cool-domain.local, which does not resolve anywhere. You could then add a record to /etc/hosts for this domain, or perhaps run DNS on your local network to resolve this domain and direct traffic to Cloud Foundry.
With this setup, most people cannot access your application because the DNS domain for the route to your application does not resolve anywhere. It's important to understand that this isn't really security, but obscurity. It would stop most traffic from making it to your app, but if someone knew the domain, they could add their own /etc/hosts entry or send fake Host headers to access your application.
This type of setup can work well if you have light security requirements, like you just want to hide something while you work on it, or it can work well paired with the other options below.
You can set up access controls in your application. Many application servers & frameworks can do things like restrict access by IP address or require user authentication (Basic auth is easy, and it is OK if you're only allowing HTTPS traffic to your app, which you should always do anyway).
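As an illustration of the Basic auth option, a minimal sketch for a Node.js/Express app (the environment variable names are illustrative, and a production app should use a constant-time comparison):

    // Minimal HTTP Basic auth middleware for an Express app (sketch).
    const express = require('express');
    const app = express();

    const USER = process.env.BASIC_AUTH_USER; // set via cf set-env, for example
    const PASS = process.env.BASIC_AUTH_PASS;

    app.use((req, res, next) => {
      const header = req.headers.authorization || '';
      const [scheme, encoded] = header.split(' ');
      if (scheme === 'Basic' && encoded) {
        // Naive split; a password containing ':' would need extra care.
        const [user, pass] = Buffer.from(encoded, 'base64').toString().split(':');
        if (user === USER && pass === PASS) return next();
      }
      res.set('WWW-Authenticate', 'Basic realm="restricted"');
      res.status(401).send('Authentication required');
    });

    app.get('/', (req, res) => res.send('hello, authorized user'));
    app.listen(process.env.PORT || 8080);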
You can use OAuth2 to secure apps too. Again, many app servers & frameworks have support for this and make it relatively simple to secure your apps. If you don't have a corporate OAuth2 solution, there are public providers you can use. Exactly how you do OAuth2 in your app is beyond the scope of this question, but there's plenty of material out there on how to do this. Google for information on your application language/framework of choice.
You could set up an access gateway. This would be an application whose job is to proxy traffic to other applications on the foundation. The gateway could be something like Nginx, Apache HTTPD, or Spring Cloud Gateway. The idea is that the gateway would be publicly accessible, and would almost certainly apply access controls/restrictions (see #2; many of these proxies have access control options that only take a few lines of config). Your actual applications would not be deployed publicly, though. When you deploy your actual applications, they would only be on the internal Cloud Foundry domain.
Cloud Foundry has local domains, often apps.internal (run cf domains to see if that shows up), which you can use to easily route traffic across the internal container-to-container network. Using this domain and the C2C network, you can have apps deployed to CF that are not accessible to the public Internet, except through your gateway.
Again, how you configure this exactly is outside the scope of this question, but check out the Cloud Foundry docs on using the C2C network & internal routes, then check out your proxy server of choice's documentation.
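To make the gateway idea concrete, a minimal sketch in Node.js using Express and http-proxy-middleware; the app name, internal route, and port are illustrative. The backing app would be mapped only to the internal domain (e.g. cf map-route my-app apps.internal --hostname my-app) and the gateway allowed to reach it over the C2C network with cf add-network-policy (exact flags vary by CLI version):

    // Public gateway app that fronts an internal-only app (sketch).
    const express = require('express');
    const { createProxyMiddleware } = require('http-proxy-middleware');

    const app = express();

    // ... add your access-control middleware here (see #2 above) ...

    app.use('/', createProxyMiddleware({
      target: 'http://my-app.apps.internal:8080', // internal route, not public
      changeOrigin: true,
    }));

    app.listen(process.env.PORT || 8080); // CF provides PORT for the gateway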

Leftover application in GKE - how to remove it and make web preview work as before

I deployed an application (let's say app1) in GKE with a service, deployment and certificate setup in an existing cluster with Jenkins and another app (let's say app2).
The other app is deployed in the same way as the new one, with a certificate (and a static IP and DNS entry).
Jenkins is not exposed on an external IP, so I used to use the port forward option in the cloud console and then web preview - this creates an appspot URL which allows me to log in to the web admin.
Something strange happened after I deployed app2.
I tested it with the webpreview button and could reach it.
All was fine and it was accessible at the new URL with HTTPS and all.
But after that, the web preview to Jenkins was not working anymore.
Instead, I would be redirected to app2, always.
I could not figure out why, so I removed everything from app2, and now I have a very strange situation:
in the (Chrome) browser where I did most of the actions, I can still access the (broken) app on both the FQDN in DNS and on the appspot link (https://8080-dot-1234567-dot-devshell.appspot.com/), even after I reboot, clear the cache and log out of the Google account (and even removed the static IP) - the port forward action works and gives the above link (with other numbers)
in another (Chromium) browser on the same laptop running Ubuntu, the port forward action works, but clicking the link in the browser does not generate another appspot URL and fails with a 500 error screen
After reading up a bit, I understand there is some proxy that is used to do the forwarding. I expect the proxy is 'hanging' somehow, and on top of that it seems there are application leftovers in the cluster that should really not be there.
I have basic support currently, so I am not eligible for technical support.
I cannot find a manual way to access the appspot proxy and I found no load balancer or any other thing I know of that may cause this.
If I run the port forward in the Cloud Shell in the second browser, I can curl to localhost on the exposed port and get Jenkins, so that part seems to work, but the web preview then does not.
How can I go about troubleshooting this (meaning getting back to the web preview working for Jenkins and getting rid of the application left overs)?
I actually found the cause of this issue with the help of a colleague.
The second application I deployed was Yopass.
It turned out that it uses a service worker that cached (almost) everything in the browser, including most of the application, I suppose to allow it to run offline.
Although I tried clearing the cache in the Network tab in developer options, I still had this behaviour, which made me think it was not a cache issue.
After removing all cached data in the Application tab for both the FQDN URL and the appspot domain, behaviour went back to normal.
I was not able to fix it in the other browser yet, but I suppose that is cache too. Thanks for the help, I consider this solved.

HTTPS for local IP address

I have a gadget[*] that connects to the user's WiFi network and responds to commands over a simple REST interface. The user uses a web app to control this gadget. The web app is currently served over http and the app's javascript does AJAX calls to the gadget's local IP address to control it. This scheme works well and I have no issues with it.
[*] By "gadget" I mean an actual, physical IoT device that the user buys and installs within their home, and configures to connect to their home WiFi network
Now, I want to serve this web app over https. I have no issue setting up https on the hosting side. The problem is, now the browser blocks access to the gadget (since the gadget's REST API is over http and not https).
The obvious solution is to have the gadget serve its REST API over https. But how? It has a local IP address and no one will issue a certificate for it. (Even if they did, I'd have to buy a boatload of certificates, one for each possible local IP address.) I could round-trip via the cloud (by adding additional logic on my server side to accept commands from the web app and forward them to the gadget over another connection), but this will increase latencies.
Is there a way around this problem? One possibility that I have in mind is to:
Get a wildcard certificate (say, *.mydomain.com)
Run my own DNS that maps sub-domains to a local IP address following a pattern (For example, 192-168-1-123.mydomain.com would map to 192.168.1.123)
Use the wildcard certificate in all the gadgets
My web app could then make AJAX calls to https://192-168-1-123.mydomain.com instead of http://192.168.1.123 and latencies would remain unaffected aside from the initial DNS lookup
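In code, the mapping I have in mind would look something like this (a sketch of my own idea; mydomain.com is the wildcard domain from above):

    // Derive the per-device hostname from its local IP address (sketch).
    function deviceUrl(localIp) {
      return 'https://' + localIp.replace(/\./g, '-') + '.mydomain.com';
    }

    // deviceUrl('192.168.1.123') === 'https://192-168-1-123.mydomain.com'
    fetch(deviceUrl('192.168.1.123') + '/api/status')
      .then((res) => res.json())
      .then(console.log);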
Would this work? It's an expensive experiment to try out (wildcard certificates cost ~$200) and running a DNS server seems like a lot of work. Plus I find myself under-qualified to think through the security implications.
Perhaps there's already a service out there that solves this problem?
While this is a pretty old question, there is still no out-of-the-box solution for it today.
Just as @Jaffa-the-cake posted in a comment, you can lean on how Plex did it, which Filippo Valsorda explained in his blog:
https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/
This is very similar to what you proposed yourself. You don't even need a wildcard certificate, but you can generate certificates on-the-fly using Let's Encrypt. (You can still use wildcard certificates, if you want, which Let's Encrypt supports now, too.)
Just yesterday I did a manual proof-of-concept for that workflow, which can be automated with the following steps:
Write a Web Service that can create DNS entries for individual devices dynamically and generate matching certificates via Let's Encrypt - this is pretty easy using certbot and e.g. Google Cloud DNS. I guess Azure, AWS and others have similar offerings, too. When you use certbot's DNS plugins, you don't even need to have an actual web server running on port 80/443.
On your local device, contact that web service to generate a unique DNS entry (e.g. <device-id>.yourdns.com) and a certificate for that domain
Use that certificate in your local HTTPS server
Browse to that domain instead of your local IP
Now you will have a HTTPS connection to your local server, using a local IP, but a publicly resolved DNS entry.
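To make the local HTTPS server step concrete, a minimal sketch in Node.js, using the certificate generated above (the file paths are illustrative):

    // The gadget serving its REST API over HTTPS with its per-device cert.
    const https = require('https');
    const fs = require('fs');

    const options = {
      key: fs.readFileSync('/etc/gadget/tls/privkey.pem'),
      cert: fs.readFileSync('/etc/gadget/tls/fullchain.pem'), // Let's Encrypt chain
    };

    https.createServer(options, (req, res) => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ status: 'ok' }));
    }).listen(443);

    // The web app then calls https://<device-id>.yourdns.com
    // instead of http://<local-ip>.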
The downside is that this does not work offline from arbitrary clients. And you need to think of a good security concept to create trust between the client that requests a DNS entry and certificate, and your web service that will generate those.
BTW, do you mind sharing what kind of gadget it is that you are building?
If all you want is to access the device APIs through the web browser, a simple solution would be to proxy all the requests to the device through your web server. This way, even self-signed certificates for the devices won't be a problem. The only problem is that the server would have to be on the same network as your devices.
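A minimal sketch of that proxy, assuming a Node.js/Express server with http-proxy-middleware (the device IP and route are illustrative):

    // Forward browser requests, received over the site's real HTTPS,
    // to a gadget on the local network with a self-signed certificate.
    const express = require('express');
    const { createProxyMiddleware } = require('http-proxy-middleware');

    const app = express();
    app.use('/device', createProxyMiddleware({
      target: 'https://192.168.1.123', // the gadget's local address
      changeOrigin: true,
      secure: false, // don't reject the gadget's self-signed certificate
    }));
    app.listen(8080); // behind the site's HTTPS termination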
If you are not on the same network, you can write a simple browser plugin (Chrome) to send the API requests to the IoT device, but then the dependency on the app/plugin will be clumsy.

Is it possible to use the Single Sign On service (currently only available in the US region) from an app deployed in the UK?

I get that it won't be possible to bind the service, and therefore not possible to use VCAP_SERVICES, and credentials would need to be managed in another way.
Since the communication would go via the internet, I guess the question is really:
Does the SSO service have an API that can be reached from outside of Bluemix?
Yes, the SSO service can be reached from outside Bluemix and therefore also from apps deployed in the UK.
However, to retrieve the credentials you need to create an SSO service instance in the US region, bind an app to it, and inspect VCAP_SERVICES. This is due to how Cloud Foundry works.
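For example, after binding you can dump the credentials with cf env APP_NAME, or print them from inside the US app (a minimal Node.js sketch; the exact shape of the entry depends on the service):

    // Print VCAP_SERVICES so the SSO credentials can be copied to the UK app.
    const services = JSON.parse(process.env.VCAP_SERVICES || '{}');
    console.log(JSON.stringify(services, null, 2));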

How to test Facebook Real-time updates

In order to publish real-time updates to my app, Facebook needs to perform a POST request to my server.
The problem is, my server is my home computer and is not publicly addressable from the internet. Bringing a server live to implement this sounds like it could be a pain... can't attach a debugger, Fiddler, etc.
So what's the best way to test the HTTP endpoint? Integration tests that simulate the Facebook server? Fiddling with firewalls/NAT to try and get Facebook talking to my home computer?
Any ideas?
You can use ngrok - https://ngrok.com/ - a free (pay-what-you-can) service that does exactly what you need. The Localtunnel service is down and its developers also recommend ngrok.
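For example, you would run something like ngrok http 3000 to expose a local port, then register the resulting public URL with Facebook. A minimal Node.js/Express sketch of the endpoint Facebook calls (the route and verify token are illustrative):

    // Facebook first sends a GET to verify the subscription, then POSTs updates.
    const express = require('express');
    const app = express();
    app.use(express.json());

    const VERIFY_TOKEN = 'my-verify-token'; // the token you register with Facebook

    app.get('/fb-callback', (req, res) => {
      // Subscription verification handshake: echo hub.challenge back.
      if (req.query['hub.mode'] === 'subscribe' &&
          req.query['hub.verify_token'] === VERIFY_TOKEN) {
        return res.send(req.query['hub.challenge']);
      }
      res.sendStatus(403);
    });

    app.post('/fb-callback', (req, res) => {
      console.log('realtime update:', JSON.stringify(req.body));
      res.sendStatus(200); // acknowledge quickly; process asynchronously
    });

    app.listen(3000);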
In the past, I've used LocalTunnel to do this. It's a nice wrapper around an SSH tunnel and it effectively assigns you a subdomain at localtunnel.com pointing to a port on your localhost.
So basically, when you run it, it will spit back an externally accessible subdomain name like xyz.localtunnel.com, whose port 80 will point to a port you specify on your local box.
You can find it at: http://progrium.com/localtunnel/
It's really great for testing various pubsubhubbub subscription feeds (like Facebook's).
OK! I think NAT should be the best bet and I don't see a reason for it not to work. You should try it out.
It was actually pretty easy - I logged into my home router, set up port forwarding on port 80 to the local IP of my computer, put an exception in Windows Firewall for port 80, and then navigated to my public IP address in the browser.
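If you want to sanity-check the forwarding before wiring up the real endpoint, a minimal Node.js sketch (binding to port 80 usually needs elevated privileges):

    // Respond to anything, so hitting your public IP from outside proves
    // the router/firewall forwarding works.
    const http = require('http');
    http.createServer((req, res) => res.end('reachable from the internet'))
        .listen(80);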
Implement the receiver samples at: https://github.com/facebook/real-time/tree/master/samples
The only answer is to get a web server that is publicly accessible, so the real-time updates have something to call back to.
There are lots of free web hosts that allow server-side scripting, and lots of paid-for web hosts out there too. Stack Overflow is really not the place to get leads on where/when/why/how much for web hosting.
No, you can't use ngrok alone to simulate Facebook real-time updates, since you must make a call to Facebook's servers with your ngrok address to validate it (tell me if you find out how to do this :p).
I use an OpenShift server to receive Facebook real-time updates and then post every JSON payload received from Facebook to my ngrok address. So the process is:
Set up an OpenShift server to receive Facebook notifications
Facebook sends notifications to your OpenShift server
Your OpenShift server sends the data (as received) to your ngrok address
And if you must receive Facebook notifications on a local website (like www.website.dev/fb-notifications/), then create a script in your localhost folder which receives the OpenShift posts (let's call it tunelscript.php). The process will be:
Set up an OpenShift server to receive Facebook notifications
Facebook sends notifications to your OpenShift server
Your OpenShift server sends the data (as received) to your tunnel script via your ngrok address (perso.ngrok.com/tunelscript.php)
Relay the data from your tunnel script to your local website (tunelscript.php => www.website.dev/fb-notifications/)
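The original relay is a PHP script (tunelscript.php); an equivalent sketch in Node.js (assumes Node 18+ for the global fetch; the URLs are the ones from the steps above):

    // Receives the posts from OpenShift (via ngrok) and relays them locally.
    const express = require('express');
    const app = express();
    app.use(express.json());

    app.post('/tunelscript', async (req, res) => {
      // Forward the Facebook notification, unchanged, to the local dev site.
      await fetch('http://www.website.dev/fb-notifications/', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(req.body),
      });
      res.sendStatus(200);
    });

    app.listen(3000); // exposed through your ngrok address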
That's tunneling B-)