Support for "sub-organization" or non-global organization namespacing - azure-devops

We are looking at adopting Azure DevOps' hosted, cloud-based solution. However, organization names appear to be globally namespaced. As a result, our private internal organization names are exposed to enumeration attacks, and it becomes difficult for us to organize our projects.
A very simple example: we have an internal organization called Finance. Since the namespace is global, we would have to call it Acme-Finance. Then we'd have to have Acme-Infrastructure, Acme-ProjectX, Acme-ProjectY, and so on. That is very messy.
GitLab, as an example, lets you create an unlimited hierarchy of sub-groups.
We could solve this with the on-premises version by using Collections, but it would lag behind all of the updates and functionality of the cloud version.

Related

View repository activity across all repositories under an account at the same time?

Is there a way to view repository traffic for all repositories on your account at the same time (without creating your own custom dashboard using the GitHub API)? It would be very convenient. I suspect a bash script could do this without too much effort (e.g. get all repo names, then get the traffic/stars stats for each repo in the list), but I want to be sure something obvious doesn't already exist before writing anything myself.
I am not aware of any native dashboard that aggregates multiple GitHub repositories into one convenient view.
You would therefore have to rely on third-party scripts, such as nchah/github-traffic-stats (Python):
Get statistics on web traffic to your GitHub repositories.
Since the data is limited to the last two weeks, you might have to record those statistics over time (example: Microsoft/GitHubTelemetryParsor).
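If you do end up scripting it yourself, something along these lines should work. This is only a sketch against the GitHub REST API; it assumes a personal access token exported as GITHUB_TOKEN with push access to the repositories (the traffic endpoints require push access) and only fetches the first page of repositories:

import os
import requests

API = "https://api.github.com"
# Assumption: a personal access token with push access, exported as GITHUB_TOKEN.
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# List the repositories the authenticated user can access (first page only here).
repos = requests.get(f"{API}/user/repos", headers=HEADERS,
                     params={"per_page": 100}).json()

for repo in repos:
    full_name = repo["full_name"]
    # Traffic views for the last 14 days; requires push access to the repo.
    views = requests.get(f"{API}/repos/{full_name}/traffic/views",
                         headers=HEADERS).json()
    print(f"{full_name}: {views.get('count', 0)} views, "
          f"{views.get('uniques', 0)} unique visitors, "
          f"{repo['stargazers_count']} stars")

Run on a schedule (cron, GitHub Actions), this would also cover the "record it over time" part mentioned above.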

Is it possible to deploy only updated Azure Function Projects when I push a repo of the entire Solution?

I need to refactor a .NET Web API. I'm considering moving to serverless, and I'm trying to understand the best option for migrating the code to Azure Functions.
As far as I understand, the correct approach to reduce costs and cold-start time is to split the API: it is much better to have many small web APIs than a single one with all the methods. Small APIs consume less memory and cold start more quickly.
Having more Functions in the same project does not solve the problem, as they would all be deployed in the same Function App: one DLL, high memory use, slow cold start.
So I should create several Azure Functions projects and deploy each of them to a different Function App.
If all of the above is correct, we finally get to the problem:
I would structure the code and the repo so that I have one solution containing several Azure Functions projects. How can I set up CI/CD (Azure DevOps) so that when I push the repo, ONLY the Azure Functions projects that are new or modified are deployed? I need to deploy only the modified projects so that the Function Apps whose code is unchanged do not go cold.
This is less important, but I'd also need one URL for all APIs, so https://myapi.azurewebsites.net/api/Function1, https://myapi.azurewebsites.net/api/Function2, etc., and not https://myapi1.azurewebsites.net/api/Function1, https://myapi2.azurewebsites.net/api/Function1, etc. Is this possible with the above structure?
You need multiple CI/CD pipelines, each with a trigger limited to a specific folder:
trigger:
  paths:
    include:
      - function-a/*
    exclude:
      - '*'
With this, the pipeline is triggered only if changes are made in the function-a folder. To limit the work needed to develop the pipelines, you should consider using templates. You can find more info about this here:
Template types & usage
Build templates on Azure DevOps - this is my own blog
In this way you will avoid repeating yourself.
EDIT
To unify your API you can use Azure Functions Proxies:
With this feature, you can specify endpoints on your function app that are implemented by another resource. You can use these proxies to break a large API into multiple function apps (as in a microservice architecture), while still presenting a single API surface for clients.

Does CloudFormation target names or resources?

I am currently deciding between using Terraform and CloudFormation.
There is a question I haven't seen answered yet (or maybe I just haven't found the answer).
In Terraform, you give a precise name to everything, and Terraform uses those names to know which resources to delete.
But what about CF? If we already have an architecture in place and I want to add or delete an instance using CF, how will this work? How will it know which resources to target?
I hope this question makes sense! I've used Terraform before, but never CloudFormation.
CloudFormation uses two mechanisms to identify its resources: the stack keeps a list of the resources it created, referenced by their actual IDs rather than a pretty name, and CFN also tags the resources (those that support tags) with the stack ID.
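You can see this for yourself by listing a stack's resources. Here is a minimal sketch with boto3, using a placeholder stack name:

import boto3

cfn = boto3.client("cloudformation")

# "my-stack" is a placeholder; substitute your own stack name.
resources = cfn.describe_stack_resources(StackName="my-stack")["StackResources"]

for r in resources:
    # LogicalResourceId is the name used inside the template;
    # PhysicalResourceId is the real AWS identifier (instance ID, bucket name, ...)
    # that CloudFormation tracks for updates and deletion.
    print(r["LogicalResourceId"], r["ResourceType"], r["PhysicalResourceId"])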
CFN cannot be used to delete resources that belong to a different stack; only the stack that created them can manage them. Terraform, on the other hand, allows you to import resources created by anything else into its state, where they will then be managed.
I used CFN for a year before converting to Terraform (which I have also used for a year now), and I'll never go back to CFN. Terraform offers many advantages that make CFN really hard to use now: features such as plan before apply, re-usable modules, resource imports, and granular output (CFN is mostly a black box), plus generally faster AWS feature support (new APIs are usually released on launch day and Terraform support follows soon after, usually faster than CFN but not always).

GitHub to share a set of SPARQL queries

I am using GitHub to share a set of SPARQL queries:
http://www.boisvert.me.uk/opendata/sparql_aq+.html?file=specific%20sensor.txt
Currently this simple setup allows end-users to access queries stored in the GitHub repository, but ultimately I want to allow them to also modify the queries, as with a pastebin, and to make use of the repository to better manage the shared system. Ideally I would want end-users who may not be very tech-savvy to be able to make minor changes to queries against an open, linked-data endpoint, so the technology barrier should stay low.
My problem is this: how best to structure the GitHub project and exploit the API to make the most of the available information? I can think of different points:
Currently the project (https://github.com/boisvert/unshaql) holds the client code and example queries. Does it make a difference to create an independent project for the SPARQL queries, separate from the web client code?
I would use directories within the project to classify/tag queries, and file names to title them. Are there better alternatives? It strikes me that a hierarchical structure is not a good fit for tags.
When end-users save, a simpler (and cruder) option is to allow them to push their file to just one branch, which holds the examples. A better-engineered option would be to allow them to use their GitHub credentials to fork the set of SPARQL queries and edit their own copy, but with unaware users, how do I avoid creating a mess?
I think that a regular GitHub repository is a rather bad fit for this kind of content. If your users have a GitHub account, you should probably use gists instead: https://help.github.com/articles/about-gists/ I have never used this myself, but it seems perfectly adapted to what you are planning. Your site could become a DB of tags over user-provided gists. That would, however, lock you into GitHub-specific solutions.
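For illustration, saving a user-edited query as a gist boils down to one authenticated POST to the gists API. This is only a sketch; the token scope, file name and helper function are my own assumptions:

import os
import requests

# Assumption: a token with the "gist" scope, exported as GITHUB_TOKEN.
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def save_query(title, sparql_text, public=True):
    """Store a user-edited SPARQL query as a new gist and return its URL."""
    payload = {
        "description": title,
        "public": public,
        # The file name is arbitrary; .rq is the usual extension for SPARQL.
        "files": {"query.rq": {"content": sparql_text}},
    }
    resp = requests.post("https://api.github.com/gists",
                         headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.json()["html_url"]

print(save_query("specific sensor", "SELECT * WHERE { ?s ?p ?o } LIMIT 10"))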
Even if you go for a regular repository, you should not allow users to commit to the repository hosting your code: that would be a serious security hazard, as you won't be able to control which parts of the repository they are allowed to commit to.
If you set up two repositories, it's rather easy to keep the web page's code in one repository and have the user-submitted content automatically committed to another repository (under an anonymous identity so that your users don't have to create a GitHub account).
Also, note that the OAuth token should never be stored in a public repository (or the GitHub robots will invalidate it in a matter of hours).
See Hiding GitHub token in .gitconfig for a solution to this sub-problem.

Creating a SaaS application that automates signup?

I'm looking for some guidance on my research into building a SaaS. This thread seems related, but I'm wondering if the software Rackspace offers, called rBuilder, is what I would be looking for to automate the process of creating an instance of the software with a unique IP address and domain name.
Also, for an application similar to Shopify, does the application work like Facebook where it serves up different information based on the account, or is it better to have separate installs of the software like WordPress, but on a server that you maintain?
IMHO, there are various levels of multi-tenancy (Level 1 through Level 4); among them, the purest form of multi-tenancy (Level 4) is to have a single code base cater to the needs of different customers (tenants).
In this case, you will be required to maintain all of the configuration metadata within your code base to ensure that each tenant has the capability to customize the application the way they want to.
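To make that concrete, the configuration metadata can be as simple as per-tenant overrides merged onto shared defaults; the tenant names and settings below are purely hypothetical:

# Hypothetical tenant-level configuration metadata kept in the single code base.
DEFAULTS = {"theme": "plain", "currency": "USD", "max_users": 10}

TENANT_OVERRIDES = {
    "acme":    {"theme": "acme-blue", "max_users": 500},
    "initech": {"currency": "EUR"},
}

def settings_for(tenant_id):
    """Merge a tenant's overrides onto the shared defaults."""
    return {**DEFAULTS, **TENANT_OVERRIDES.get(tenant_id, {})}

print(settings_for("acme"))     # {'theme': 'acme-blue', 'currency': 'USD', 'max_users': 500}
print(settings_for("unknown"))  # falls back to the defaults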
Having a single codebase is very clean: easier to maintain, easier to patch, easier to onboard new customers, and so on.
Hence, kindly note that you have to decide on the time and expense you have budgeted for the application you have planned, as the purest form of multi-tenancy does require some additional thought.
You can consult articles like this one and also Google the pros and cons of the purest form of multi-tenancy versus an on-premises or virtualized model of multi-tenancy.
Also, read more from here.