I have created gateways and devices but they don't show up when I attempt to create an edge rule.
When I attempt to link a gateway to the rule, no gateways are shown.
It is insufficient for a Gateway to exist. It must also be connected and have the Edge Analytics agent installed. Please see the docs - https://console.bluemix.net/docs/services/IoT/edge_analytics.html?pos=2#edge_analytics
I am new to Cloud Foundry.
Is there any way that only specific users can view and update an app deployed in Cloud Foundry?
1. I deployed an app in Cloud Foundry using the “cf push” command.
2. After entering the “cf push” command, I got the message below.
Using manifest file /home/stevemar/node-hello-world/manifest.yml
Creating app node-hello-world-example...
name: node-hello-world-example
requested state: started
routes: {route-information}
last uploaded: Mon 14 Sep 13:46:54 UTC 2020
stack: cflinuxfs3
buildpacks: sdk-for-nodejs
type: web
instances: 1/1
memory usage: 256M
3. Using the {route-information} above, I can see the deployed app in a browser by entering the URL below.
https://{route-information}
This way, anyone can see the app from a browser, but I don’t want it to be visible to everyone; I want to limit access to specific users.
I heard that a global IP is allocated to {route-information} by default.
Is there any way to limit access to specific users only?
(For example, is there anything in Cloud Foundry like a “private registry” in Kubernetes that is not open to the public?)
Since I am using Cloud Foundry in IBM Cloud, a solution that uses IBM Cloud would be preferable.
I have already granted a Cloud Foundry role to the other user.
Thank you.
The CloudFoundry platform itself does not provide any access controls for applications. If you assign a public route to your application, where the DNS is publicly resolvable and the foundation is on the public Internet, like IBM Bluemix, then anyone can access your app.
There are a number of things you can do to limit access, but they do require some work on your part.
1. Use a private DNS. You can add any domain you want to Cloud Foundry, even ones that don't resolve. That means you could add my-cool-domain.local which does not resolve anywhere. You could then add a record to /etc/hosts for this domain or perhaps run DNS on your local network to resolve this domain and direct traffic to Cloud Foundry.
With this setup, most people cannot access your application because the DNS domain for the route to your application does not resolve anywhere. It's important to understand that this isn't really security, but obscurity. It would stop most traffic from making it to your app, but if someone knew the domain, they could add their own /etc/hosts entry or send fake Host headers to access your application.
This type of setup can work well if you have light security requirements, like you just want to hide something while you work on it, or it can work well paired with the other options below.
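For example, here is a minimal sketch of option 1 using the cf CLI (v7+ command names); the org, domain, hostname, and router IP below are all placeholders you would replace with your own values:

cf create-private-domain my-org my-cool-domain.local
cf map-route node-hello-world-example my-cool-domain.local --hostname hello
# on each machine that should have access, resolve the hostname yourself,
# pointing it at your foundation's router / load balancer IP (placeholder):
echo "203.0.113.10 hello.my-cool-domain.local" | sudo tee -a /etc/hosts

Anyone without that /etc/hosts entry (or a matching private DNS record) simply cannot resolve the route.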
2. You can set up access controls in your application. Many application servers & frameworks can do things like restrict access by IP address or require user access (Basic auth is easy and is fine as long as you only allow HTTPS traffic to your app, which you should always do anyway).
You can use OAuth2 to secure apps too. Again, many app servers & frameworks have support for this and make it relatively simple to secure your apps. If you don't have a corporate OAuth2 solution, there are public providers you can use. Exactly how you do OAuth2 in your app is beyond the scope of this question, but there's plenty of material out there on how to do this. Google information for your application language/framework of choice.
3. You could set up an access Gateway. This would be an application whose job is to proxy traffic to other applications on the foundation. The Gateway could be something like Nginx, Apache HTTPD, or Spring Cloud Gateway. The idea is that the gateway would be publicly accessible and would apply access controls/restrictions (see option 2; many of these proxies have access control options that only take a few lines of config). Your actual applications would not be deployed publicly, though. When you deploy your actual applications, they would only be on the internal Cloud Foundry domain.
CloudFoundry has local domains, often apps.internal (run cf domains to see if that shows up), which you can use to easily route traffic across the internal container-to-container network. Using this domain and the C2C network, you can have apps deployed to CF that are not accessible to the public Internet, except through your Gateway.
Again, how you configure this exactly is outside the scope of this question, but check out the Cloud Foundry docs for info on using the C2C network & internal routes. Then check out your proxy server of choice's documentation.
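As a minimal sketch of the internal-route side of option 3, assuming your foundation has the apps.internal domain and you are on cf CLI v7+ (the app names and port are placeholders):

cf push backend-app --no-route
cf map-route backend-app apps.internal --hostname backend-app
# allow only the gateway app to reach the backend over the C2C network
cf add-network-policy gateway-app backend-app --protocol tcp --port 8080

The gateway would then proxy requests to backend-app.apps.internal:8080, while the backend app has no public route at all.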
Cloud Run provides a *.run.app domain to access the deployed service. I am wondering how Google Cloud Run handles requests from one Cloud Run service to another. Is all the service-to-service communication internal, even if we have a custom domain instead of *.run.app?
The definition of "internal" is not clear.
Your request stays in the Google network. Is that internal or external?
To resolve the custom domain, a DNS resolution request (port 53) is performed on the public network, but the content of the request stays in the Google network and is forwarded after the resolution. Is that internal or external?
So, as long as you use Google services (with the premium network tier), you don't leave the Google network, and thus you can consider this highly secure.
I realize my answer isn't very clear; in the end, it all depends on whether or not you trust the Google Cloud network.
We have the following structure for our application. Currently we have used Any for both source and destination (on port 3389) while defining the NSG rule for our Service Fabric cluster to allow calls from the mobile app. But our security team has raised concerns about the Any-Any rule. Is there any way to optimize this?
Note: our mobile app is public and anyone can download it from the app store.
An Any-to-Any rule really opens Service Fabric up to attack, so you should limit the source IPs to your Traffic Manager, given your current model.
Azure API Management already has built-in support for Service Fabric, so my suggestion is to remove the Traffic Manager between API Management and Service Fabric. Then what you can do is limit the traffic to Service Fabric so it only comes from API Management, which is much easier.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-api-management-overview
The authentication requirement you can delegate to API Management, i.e. validating the JWT token.
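To make the NSG side of this concrete, here is a hedged az CLI sketch; the resource group, NSG name, rule name, priority, and source subnet prefix are placeholders, and it assumes API Management is deployed in (or calls from) that subnet:

az network nsg rule create --resource-group my-rg --nsg-name sf-nsg \
  --name AllowApimOnly --priority 200 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes 10.0.1.0/24 \
  --destination-port-ranges 3389

Once the old Any-Any rule is removed, inbound Internet traffic that does not match this rule is blocked by the default DenyAllInBound rule.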
I was creating a Traffic Manager profile with endpoints for the Dev and QA slots of my Web Apps and wondered: is it not possible?
How can I create Traffic Manager endpoints for the Dev and QA slots as we do for production?
Same way. Just select App Service Slot.
You can also use PowerShell to enable Traffic Manager for any external endpoint if the portal interface does not allow you to do something you need.
Azure Traffic Manager External Endpoints and Weighted Round Robin via PowerShell
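If you would rather script it with the cross-platform az CLI instead of PowerShell, a rough sketch for adding a slot as an external endpoint looks like this (the resource group, profile name, endpoint name, and slot hostname are placeholders):

az network traffic-manager endpoint create --resource-group my-rg \
  --profile-name my-tm-profile --name dev-slot --type externalEndpoints \
  --target myapp-dev.azurewebsites.net --weight 10

The --weight value only matters if the profile's routing method is Weighted.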
Can someone explain to me how to set up an external endpoint for a failover model in Windows Azure Traffic Manager? I can add the endpoint to Azure through PowerShell, for example www.mysite.com, but then the tutorials say I would need to change my DNS to point www.mysite.com to my.trafficmanager.net. But wouldn't this create a loop of sorts and never get to my actual server that is hosting the site?
In the scenario you describe, you would first need to define a new hostname for your external endpoint in DNS (e.g. www-1.mysite.com) and configure your web server to accept requests for that hostname. Once that is working, you add www-1.mysite.com to Traffic Manager as an external endpoint and then finally update DNS for www.mysite.com to point to Traffic Manager.
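To make the chain explicit, the resulting DNS records would look roughly like this (the A-record IP is a placeholder for your web server):

www-1.mysite.com.   A       203.0.113.20
www.mysite.com.     CNAME   my.trafficmanager.net.

Traffic Manager answers queries for my.trafficmanager.net with the DNS name of the healthy endpoint (www-1.mysite.com), so there is no loop: www.mysite.com never points back to itself, and clients end up at your actual server.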