Is there a way of restricting access to a RESTful service running in a GitHub Codespace? For example, a whitelist of IP addresses, or a way of requiring an auth token?
I have been looking at a bunch of tutorials and trying to read up on it in the GitHub docs, but I cannot seem to find a clear answer.
BR Jan
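For the auth-token half of the question, the check itself is framework-level rather than Codespaces-specific. A minimal sketch, assuming a Flask app and a shared secret in an environment variable (both are placeholders, not anything Codespaces provides):

```python
# Minimal bearer-token gate for a Flask app; Codespaces port forwarding
# sits in front of this and plays no part in the check itself.
import os
from flask import Flask, request, abort

app = Flask(__name__)
API_TOKEN = os.environ["API_TOKEN"]  # placeholder: e.g. set via Codespaces secrets

@app.before_request
def require_token():
    # Expect "Authorization: Bearer <token>" on every request.
    auth = request.headers.get("Authorization", "")
    if auth != f"Bearer {API_TOKEN}":
        abort(401)

@app.route("/items")
def items():
    return {"items": []}
```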
Is there any way to secure access to the JobRunr dashboard? Many critical operations can be performed from inside the dashboard, so it needs to be protected with a username and password. At the moment it seems to be open to anyone who can reach the dashboard's URL.
Thanks in advance.
I don't recommend a setup where the dashboard can be accessed via the internet; I would only allow access from internal IPs.
If you also want protection against internal IPs, there is a way to do so. Search the issues and the discussions for it (I'm not writing the answer here, as I don't want to promote that solution).
Update regarding JobRunr 6
JobRunr Pro 6 will support OpenID Authentication.
I apologize for coming before you with such a rudimentary question, but Google Apps is giving me a hard time simply verifying the domain from which I want to make server-side YouTube Search API calls.
Google is insisting on the DNS TXT verification method (even though it provides a link to alternate methods, these are not recognized by Google Apps). But my registrar (GoDaddy) is not my authoritative DNS provider; that honor goes to DynDNS. So I'm not sure I can even use Google's automated tool to set up the TXT record. In fact, it makes me nervous that they want me to grant their app permission to make changes to the DNS at GoDaddy.
I'm assuming this verification is a requirement for making server-side API calls and retrieving results. Can someone point me in the right direction? Either how to fulfill the TXT record requirement under this scenario, or how to force Google Apps to accept an alternate verification method?
Thanks
Paul G
If GoDaddy isn't managing your DNS, you won't be able to follow the automated flow in the Admin console. You're going to need to create the record manually with DynDNS.
Your host doesn't have specific steps on the Google Support site (here), so you'll need to follow the generic ones. Support for your host should be able to help, but you can also contact Google Support via the Support section in your Admin console.
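For reference, the record you'll be creating at DynDNS generally has this shape; the actual token comes from the verification page in the Admin console, and the value below is a placeholder:

```
example.com.  3600  IN  TXT  "google-site-verification=PLACEHOLDER_TOKEN"
```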
Here is my problem.
I have a client who uses the Power View plug-in through Excel 2013.
At the beginning my client had no access at all, as our firewall was blocking access to the Office 365 IPs and Bing.com.
Once that traffic was authorized on the firewall, he no longer gets an error message, but nothing happens.
When he tries to access bing.com/maps, the map cannot be displayed. Checking the firewall at that moment, I can see his machine trying to reach an Akamai IP (the same thing happens when he goes through Power View). Every attempt to access Bing Maps hits a different, seemingly random Akamai IP address, so the traffic is blocked by the firewall.
The idea is to authorize only the traffic towards Bing Maps, not traffic to ANY destination.
So my question is: is there any way to learn which URLs Power View uses, so I can authorize them on my firewall? Has anyone ever had this problem?
PS: The only solution I can imagine for the moment is to implement a proxy server, but that is impossible with our datacenter provider.
Bing Maps has a long list of IP address ranges which need to be allowed through the firewall. Contact the support team using the online form under the developer support section here: http://www.microsoft.com/maps/Licensing/licensing.aspx. They will be able to provide you with the list you need.
A month or so ago I put up a static website using Google Cloud Storage. Before I could create a public bucket, I was asked to verify that I actually owned the domain after which I was naming the bucket. I had to upload a file from Google to the existing host so that Google could verify domain ownership.
I do understand the need to do this. However, if I had just bought a domain and had no other host, I don't see how I would have been able to prove that I owned the domain.
Did I miss a way around this limitation? Is there another, more user-friendly way of creating public sites on Google Cloud Storage?
There are three ways to verify domain ownership:
Adding a special meta tag to a site's homepage.
Uploading a special HTML file to a site.
Adding a DNS TXT record to a domain's DNS configuration.
The first two require the domain to be hosted somewhere, but the third method is purely DNS configuration, so it can be accomplished without hosting the domain. You can read more details about these methods here.
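If you use the TXT method, you can confirm the record is visible before clicking Verify. A minimal sketch using the dnspython package (my choice of tool here; dig or nslookup work just as well):

```python
# Check that a Google site-verification TXT record is visible in public DNS.
# Requires: pip install dnspython
import dns.resolver

domain = "example.com"  # placeholder domain

for rdata in dns.resolver.resolve(domain, "TXT"):
    # TXT record data arrives as one or more byte strings; join and decode them.
    value = b"".join(rdata.strings).decode()
    if value.startswith("google-site-verification="):
        print("Verification record found:", value)
```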
Add a CNAME record with the information that Google gives you. That will solve your problem of verifying domain ownership.
The company I work for has recently installed an Apache staging server that uses Apache's mod_access module to prevent unwanted access to our staging environment.
One of the downsides of this is that Facebook, when trying to scrape the page for the opengraph metatags, comes up empty with the following error.
Error Scraping Page: Bad response code
Which is to be expected since the scraper bumps into the authentication dialog.
My question now: is there a specific IP range we can allow through to the website?
We've looked at allowing requests based on certain headers, but that seems prone to header manipulation as a way of bypassing the security layer.
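For what it's worth, spoofing the header is a one-liner, which is why header-based allowlisting is weak. For example, with Python's requests library (facebookexternalhit is the user agent Facebook's scraper identifies itself with; the URL is a placeholder):

```python
# Any client can claim to be the Facebook scraper by copying its User-Agent.
import requests

resp = requests.get(
    "https://staging.example.com/",  # placeholder URL
    headers={"User-Agent": "facebookexternalhit/1.1"},
)
print(resp.status_code)
```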
The access log did show one IP address, but I assume that Facebook uses multiple servers to scrape all these pages and I seem to remember reading that these IP addresses tend to change over time.
Any ideas?
Facebook has published their IP ranges here.
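Facebook's published guidance for finding those ranges is a whois query against their autonomous system, AS32934. A minimal sketch that shells out to the whois client (assuming it is installed on the machine):

```python
# List the address ranges announced by Facebook's AS32934 via the RADB whois
# server; these are the ranges the scraper's requests come from.
import subprocess

out = subprocess.run(
    ["whois", "-h", "whois.radb.net", "--", "-i", "origin", "AS32934"],
    capture_output=True, text=True, check=True,
).stdout

routes = [line.split()[-1] for line in out.splitlines()
          if line.startswith(("route:", "route6:"))]
print("\n".join(routes))
```

Those ranges can then go into the staging host's allow list; with Apache 2.2's host-based access control that would mean Allow from lines combined with Satisfy Any, so a request passes with either a matching IP or valid credentials. The ranges do change over time, so refreshing the list periodically is wise.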