HTTPS for local IP address - REST

I have a gadget[*] that connects to the user's WiFi network and responds to commands over a simple REST interface. The user controls this gadget through a web app. The web app is currently served over HTTP, and the app's JavaScript makes AJAX calls to the gadget's local IP address to control it. This scheme works well and I have no issues with it.
[*] By "gadget" I mean an actual, physical IoT device that the user buys, installs within their home, and configures to connect to their home WiFi network.
Now, I want to serve this web app over HTTPS. I have no issue setting up HTTPS on the hosting side. The problem is that the browser now blocks access to the gadget as mixed content (since the gadget's REST API is over HTTP and not HTTPS).
The obvious solution is to have the gadget serve its REST API over HTTPS. But how? It has a local IP address, and no one will issue a certificate for it. (Even if they did, I'd have to buy a boatload of certificates, one for each possible local IP address.) I could round-trip via the cloud (by adding logic on my server side to accept commands from the web app and forward them to the gadget over another connection), but this would increase latency.
Is there a way around this problem? One possibility that I have in mind is to:
Get a wildcard certificate (say, *.mydomain.com)
Run my own DNS that maps sub-domains to a local IP address following a pattern (For example, 192-168-1-123.mydomain.com would map to 192.168.1.123)
Use the wild-card certificate in all the gadgets
My web app could then make AJAX calls to https://192-168-1-123.mydomain.com instead of http://192.168.1.123, and latency would remain unaffected aside from the initial DNS lookup.
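Purely to illustrate the proposed mapping, here is a short TypeScript sketch (the /api/status endpoint is made up):

    // 192.168.1.123 -> https://192-168-1-123.mydomain.com
    function gadgetUrl(localIp: string): string {
      return `https://${localIp.split(".").join("-")}.mydomain.com`;
    }

    // The app's AJAX calls target the mapped name instead of the raw IP,
    // so the wildcard certificate for *.mydomain.com matches exactly the
    // hostname the browser sees.
    const response = await fetch(`${gadgetUrl("192.168.1.123")}/api/status`);
    console.log(await response.json());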
Would this work? It's an expensive experiment to try out (wildcard certificates cost ~$200), and running a DNS server seems like a lot of work. Plus, I find myself under-qualified to think through the security implications.
Perhaps there's already a service out there that solves this problem?

While this is a pretty old question, there are still no out-of-the-box solutions for it today.
Just as @Jaffa-the-cake posted in a comment, you can lean on how Plex did it, which Filippo Valsorda explained in his blog:
https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/
This is very similar to what you proposed yourself. You don't even need a wildcard certificate; you can generate certificates on the fly using Let's Encrypt. (You can still use wildcard certificates if you want, which Let's Encrypt now supports, too.)
Just yesterday I did a manual proof of concept of that workflow; it can be automated with the following steps:
Write a web service that can create DNS entries for individual devices dynamically and generate matching certificates via Let's Encrypt. This is pretty easy using certbot and e.g. Google Cloud DNS (roughly certbot certonly --dns-google -d <hostname>); I guess Azure, AWS and others have similar offerings, too. When you use certbot's DNS plugins, you don't even need an actual web server running on port 80/443.
On your local device, contact that web service to generate a unique DNS entry (e.g. <unique-id>.yourdns.com) and a matching certificate for that domain
Use that certificate in your local HTTPS server
Browse to that domain instead of your local IP
Now you have an HTTPS connection to your local server, using a local IP but a publicly resolved DNS entry.
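As a rough TypeScript sketch of the device side of steps 2 and 3; the provisioning endpoint, device ID, and response shape below are all hypothetical stand-ins for whatever your web service actually exposes:

    // Step 2: ask the (hypothetical) provisioning service for a DNS entry
    // and a Let's Encrypt key/cert pair for this device.
    import https from "node:https";

    const res = await fetch("https://provision.yourdns.com/register", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ deviceId: "abc123", localIp: "192.168.1.123" }),
    });
    const { hostname, key, cert } = (await res.json()) as {
      hostname: string; // e.g. "abc123.yourdns.com", resolving to the local IP
      key: string;      // PEM-encoded private key
      cert: string;     // PEM-encoded certificate chain
    };

    // Step 3: serve the device's REST API over HTTPS with that certificate.
    https
      .createServer({ key, cert }, (_req, resp) => {
        resp.writeHead(200, { "content-type": "application/json" });
        resp.end(JSON.stringify({ ok: true }));
      })
      .listen(443, () => console.log(`Serving https://${hostname}`));

Keep in mind that Let's Encrypt certificates expire after 90 days, so the device has to repeat this flow periodically.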
The downside is that this does not work offline from arbitrary clients. And you need to think of a good security concept to establish trust between the client that requests a DNS entry and certificate, and your web service that generates them.
BTW, do you mind sharing what kind of gadget it is that you are building?

If all you want is to access the device's API through the web browser, a simple solution would be to proxy all requests to the device through your web server. This way, even self-signed certs on the devices won't be a problem. The only problem, though, is that the server would have to be on the same network as your devices.
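A minimal sketch of that proxy idea in Node/TypeScript; the device address and the /device path prefix are made up for illustration, and header forwarding/auth are omitted for brevity:

    import http from "node:http";

    const DEVICE = "http://192.168.1.123"; // hypothetical device address

    http
      .createServer(async (req, res) => {
        // Only forward /device/* requests to the gadget.
        if (!req.url?.startsWith("/device/")) {
          res.writeHead(404).end();
          return;
        }
        // Buffer the request body so it can be forwarded as-is.
        const chunks: Buffer[] = [];
        for await (const chunk of req) chunks.push(chunk as Buffer);
        const upstream = await fetch(DEVICE + req.url.slice("/device".length), {
          method: req.method,
          body: chunks.length > 0 ? Buffer.concat(chunks) : undefined,
        });
        res.writeHead(upstream.status);
        res.end(Buffer.from(await upstream.arrayBuffer()));
      })
      .listen(8443); // put this behind your HTTPS termination

Since the browser then only ever talks to your HTTPS server, the device's plain-HTTP (or self-signed) API never surfaces as mixed content.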
If you are not on the same network, you can write a simple browser extension (Chrome) to send the API requests to the IoT device, but then the dependency on the extension will be clumsy.


How to limit access in Cloud Foundry

I am new to Cloud Foundry.
Is there any way that only specific users can view and update an app deployed in Cloud Foundry?
1. I deployed an app in Cloud Foundry using the "cf push" command.
2. After entering the "cf push" command, I got the message below.
Using manifest file /home/stevemar/node-hello-world/manifest.yml
Creating app node-hello-world-example...
name: node-hello-world-example
requested state: started
routes: {route-information}
last uploaded: Mon 14 Sep 13:46:54 UTC 2020
stack: cflinuxfs3
buildpacks: sdk-for-nodejs
type: web
instances: 1/1
memory usage: 256M
3. Using the {route-information} above, I can see the deployed app in a browser by entering the URL below.
https://{route-information}
This way, anyone can see the app from a browser, but I don't want it to be visible to everyone; I want to limit access to specific users.
I heard that a global IP is allocated to {route-information} by default.
Is there any way to limit access to only specific users?
(For example, is there any function in Cloud Foundry like a "private registry" in Kubernetes, which is not open to the public?)
Since I am using Cloud Foundry on IBM Cloud, it would be better if there is a solution using IBM Cloud.
I've already granted a Cloud Foundry role to the other user.
Thank you.
The CloudFoundry platform itself does not provide any access controls for applications. If you assign a public route to your application, where the DNS is publicly resolvable and the foundation is on the public Internet, like IBM Bluemix, then anyone can access your app.
There's a number of things you can do to limit access, but they do require some work on your part.
Use a private DNS. You can add any domain you want to Cloud Foundry, even ones that don't resolve. That means you could add my-cool-domain.local, which does not resolve anywhere. You could then add a record to /etc/hosts for this domain (e.g. a line like 203.0.113.10 www.my-cool-domain.local) or perhaps run DNS on your local network to resolve this domain and direct traffic to Cloud Foundry.
With this setup, most people cannot access your application because the DNS domain for the route to your application does not resolve anywhere. It's important to understand that this isn't really security, but obscurity. It would stop most traffic from making it to your app, but if someone knew the domain, they could add their own /etc/hosts entry or send fake Host headers to access your application.
This type of setup can work well if you have light security requirements, like wanting to hide something while you work on it, or it can work well paired with the other options below.
You can set up access controls in your application. Many application servers & frameworks can do things like restrict access by IP address or require user authentication (Basic auth is easy, and it's OK if you only allow HTTPS traffic to your app, which you should always do anyway); see the sketch below this option.
You can use OAuth2 to secure apps, too. Again, many app servers & frameworks have support for this and make it relatively simple to secure your apps. If you don't have a corporate OAuth2 solution, there are public providers you can use. Exactly how you do OAuth2 in your app is beyond the scope of this question, but there is plenty of material out there on how to do it; search for your application language/framework of choice.
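For instance, a hedged sketch of the Basic auth variant in plain Node/TypeScript (the credentials are placeholders; most frameworks give you the same thing in a line or two of config):

    import http from "node:http";
    import { timingSafeEqual } from "node:crypto";

    // "admin:s3cret" is an illustrative credential pair only.
    const EXPECTED = Buffer.from(
      "Basic " + Buffer.from("admin:s3cret").toString("base64"),
    );

    http
      .createServer((req, res) => {
        const got = Buffer.from(req.headers.authorization ?? "");
        const ok =
          got.length === EXPECTED.length && timingSafeEqual(got, EXPECTED);
        if (!ok) {
          // Challenge the client for credentials.
          res.writeHead(401, { "WWW-Authenticate": 'Basic realm="app"' }).end();
          return;
        }
        res.end("hello, authorized user");
      })
      .listen(8080);

Remember that Basic auth sends credentials base64-encoded, not encrypted, which is exactly why it is only acceptable over HTTPS.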
You could set up an access gateway. This would be an application whose job is to proxy traffic to other applications on the foundation. The gateway could be something like Nginx, Apache HTTPD, or Spring Cloud Gateway. The idea is that the gateway would be publicly accessible and would almost certainly apply access controls/restrictions (see #2; many of these proxies have access control options that only take a few lines of config). Your actual applications would not be deployed publicly, though. When you deploy your actual applications, they would only be on the internal Cloud Foundry domain.
Cloud Foundry has internal domains, often apps.internal (run cf domains to see if that shows up), which you can use to easily route traffic across the internal container-to-container network. Using this domain and the C2C network, you can have apps deployed to CF that are not accessible to the public Internet, except through your gateway.
Again, how you configure this exactly is outside the scope of this question, but check out the Cloud Foundry docs for info on using the C2C network & internal routes, then check out your proxy server of choice's documentation.
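As a toy stand-in for the gateway idea (this assumes the third-party express and http-proxy-middleware packages, and a backend app mapped to a hypothetical internal route orders.apps.internal):

    import express from "express";
    import { createProxyMiddleware } from "http-proxy-middleware";

    const gateway = express();

    // Public traffic hits the gateway; the backend is only reachable over
    // the C2C network via its internal route, after something like:
    //   cf add-network-policy gateway --destination-app orders --port 8080
    gateway.use(
      "/orders",
      createProxyMiddleware({
        target: "http://orders.apps.internal:8080",
        changeOrigin: true,
      }),
    );

    gateway.listen(Number(process.env.PORT ?? 8080));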

Bypassing `blocked: mixed-content` restrictions in browsers

I have an internal web application I use, with a local printer attached.
To control the local printer (it's a ticketing printer), I use a small local program that manages it. In order for my web application to "use" the printer, I make it POST AJAX requests to the small local program.
My web application is served over HTTPS, while the local program exposes a simple API over HTTP (non-secure).
The problem is that I am facing blocked: mixed-content restrictions when accessing the application through HTTPS (in development mode I wasn't seeing this, of course).
I have several possible fixes (and I don't like any of them):
Make the local program expose its simple HTTP API through HTTPS.
It's doable, but I will face problems with self-signed certificates (I will have to install them on the target machine), or I will have to use DNS tricks to expose it under a "name".
Configure browsers not to block mixed content.
Doable, but I would have to configure each browser accessing my application, plus it would make them less secure.
====
So my question is: is there another way of circumventing/bypassing the blocked: mixed-content restriction? Ideally one that works on current Firefox and Chrome versions.
You shouldn't, but you can tell the browser to upgrade all non-secure requests by allowing it in your header:
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
Note that this only rewrites http:// URLs to https:// before the request is sent, so it only helps if the target (here, the local program) can actually answer over HTTPS.

Security for on-prem/cloud REST Application

I've been reading security articles for several days, but have no formal training in the field. I am developing a configuration and management application for an IoT device. It is meant to be run either on an internal network, or accessed over the web.
My application will be used by IT admins, managers, and factory-floor workers. Depending on the installation, there will be varying levels of infrastructure in place. It could run on a laptop on the floor itself, on a server, or be hosted in the cloud. For this reason, we cannot assume that our clients will have the kind of infrastructure you might find at a datacenter or in the cloud, for example CAS or NTP.
Our application provides a REST API for client applications to gather data. We'd like to use roles to restrict what data users can access. I've gathered that a common solution for authentication is to encode the username/password in a request header. However, this is completely insecure unless sent over a secure channel.
As I understand it, SSL certificate authorities grant certs for a specific domain. Our application will have no set domain, and a different IP depending on the installation. Many client applications do not trust self-signed certs. It's not clear to me whether a self-signed certificate is good enough for a typical application developer who will be consuming our interface.
With this being the case:
1) What are my options to set up a secure channel, internally or via the web?
2) Am I making assumptions about how our product will be used that damage our users' security unnecessarily?
Well, you can use custom encryption to encrypt the data being sent to the applications.
You can also use JSON Web Tokens (JWT) to secure your REST API: https://en.wikipedia.org/wiki/JSON_Web_Token. The tokens could be generated by a centralized authentication server and included in all requests sent by the client applications to the server, for example:
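A small sketch using the third-party jsonwebtoken package (the secret and claims are placeholders):

    import jwt from "jsonwebtoken";

    const SECRET = "replace-with-a-strong-shared-secret";

    // Issued by the centralized authentication server after login:
    const token = jwt.sign({ sub: "user-42", role: "operator" }, SECRET, {
      expiresIn: "1h",
    });

    // Verified by the REST API on every incoming request; throws if the
    // signature is invalid or the token has expired.
    const payload = jwt.verify(token, SECRET);
    console.log(payload);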

Can I use localhost as a URL callback in a Messenger webhook

Good evening, I just saw that Facebook released its Messenger bot toolkit and I immediately jumped right into it to learn more about it and maybe try to build my own.
My problem is that I don't have an HTTPS website running, and it requires a valid HTTPS URL. I tried to use my local web server, which has a certificate, but it doesn't work.
My question is if this is possible to be done using a localhost URL at all.
Thank you in advance.
Actually, this is possible with localhost. Use ngrok. It allows you to expose localhost to the public web, over HTTP or HTTPS; for example, something like ngrok http 8000 exposes local port 8000 at a public URL (HTTPS included). This should only be used for testing, however.
If you want to test webhooks in your local environment, I would try ultrahook.com; you can get an API key for free, and the tool creates a tunnel from a public URL to your computer. This is from their FAQ page:
You download and run the UltraHook client on your computer. It connects to UltraHook servers in the cloud and creates a tunnel from a public endpoint on our servers to your computer. Any HTTP POST requests sent to the public endpoint will be sent through the tunnel and delivered to a private endpoint accessible from your computer.
I have used it to test webhooks from different providers (like payment gateways). In your computer, you can run something like:
ultrahook <subdomain> http://localhost:8000/webhook/
and then configure the webhook URL in your external service to something like <subdomain>.ultrahook.com
My question is if this is possible to be done using a localhost URL at all.
No, of course it isn't, because what such a "callback" actually means is that Facebook makes a request to your server, and that is hardly possible with localhost.
A valid SSL certificate for your website is easy to get for free these days via Let's Encrypt. And even if that is not available on your server, there's still StartSSL, which provides basic certificates for free. All you need is a server you can install them on, or upload them to, or whatever mechanism your hoster provides for it. (And if they don't provide any, then it might be time to switch.)

SSL Cert on Separate Email Server and Web Hosting Server?

I am working with a client who needs SSL on their email and website.
We have their site hosted on a Rackspace Cloud Site (WordPress, so Apache and all that jazz).
From what I can tell, their email is on an IIS server of their own.
They want to take the SSL cert they bought through GoDaddy and apply it to both this email server and the site on our hosting server. Now, I am only a web developer with enough server knowledge to get sites launched and running, but I don't think you can apply the same SSL cert on two different types of servers.
What would the solution be for this?
Would you purchase a second SSL cert? Is that even possible?
Sorry if this is all completely wrong; I am trying to use my limited knowledge of SSL to describe the situation.
I'm pretty sure you can use the same certificate on two servers as long as they are both serving the same domain. You don't need to purchase a second SSL cert. The tricky part might be if the two servers require different certificate file formats; for example, Apache reads PEM files while IIS imports a PFX/PKCS#12 bundle, and openssl pkcs12 -export can convert the former into the latter.
Also, just do the CSR part on ONE of the servers (use the one you trust the most). On the other server, just install the certificate, skipping the CSR part.