What would be the best approach to emulate "templates hierarchy"? - ansible-awx

Ansible Tower does not offer directory hierarchy for templates and workflows.
How should we manage their growing number in a flat structure?
I know we could use labels, but their use seems a bit tedious and assumes that users already know the label for the template they are looking for.
Are there any best practices which we could follow?

Ansible Tower does not offer directory hierarchy for templates and workflows.
Right, according to the documentation the Job Templates view is a list only.
This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template.
If using Labels
Labels can be used to group and filter job templates and completed jobs in the Tower display.
is not helpful, since, as you note,
I know we could use labels, but their use seems a bit tedious and assumes that users already know the label for the template they are looking for.
there might be the possibility to introduce a structured Naming Convention for TEMPLATES / NAME.
Since Job Templates are usually for automating administrative tasks like rollouts, updates, restarts, etc., you could have a structure there like DEP_ROLENAME_TASKNAME. This would also be possible for the TEMPLATES DESCRIPTION. It is then easier to look them up via the UI, as well as via the REST API
curl --silent -u "${ACCOUNT}:${PASSWORD}" https://${TOWER_URL}/api/v2/job_templates/?search=DEP_ROLENAME_TASKNAME | jq .
A better approach might be to introduce Teams
a subdivision of an organization with associated users, projects, credentials, and permissions. Teams provide a means to implement role-based access control schemes and delegate responsibilities across organizations.
and Add Permission for certain Job Templates. A specific team or its users can then only see the Job Templates for tasks which they are supposed to do. There would be no need for lookup, searching and filtering anymore.
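For example, granting a team access to a specific Job Template can also be scripted; a minimal sketch, assuming the v2 object_roles and team role-association endpoints and placeholder ids (verify both against your Tower/AWX version):
curl --silent -u "${ACCOUNT}:${PASSWORD}" https://${TOWER_URL}/api/v2/job_templates/42/object_roles/ | jq '.results[] | {id, name}'   # note the id of e.g. the Execute role
curl --silent -u "${ACCOUNT}:${PASSWORD}" -H "Content-Type: application/json" -X POST -d "{\"id\": ${ROLE_ID}}" https://${TOWER_URL}/api/v2/teams/7/roles/   # grant that role to team 7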
Further reading
How to design role-based access control (RBAC)?
Ansible Tower Organizations

Labels are too basic to emulate a "templates hierarchy". Let's say you have pages of Templates and some have labels "ad" & "set" and others have "ad" & "get". You can search Templates with Labels "ad" or "get" or "set" but you can't search for Templates with Labels "ad" AND "get". Your only choice is some sort of naming standard (that can include the "/" character) as a pseudo hierarchy.
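If you adopt such a naming standard, walking one "directory" of the pseudo hierarchy can be done with a field lookup on the name; a minimal sketch against the job templates endpoint, assuming the v2 API supports the name__startswith lookup and using a made-up prefix:
curl --silent -u "${ACCOUNT}:${PASSWORD}" "https://${TOWER_URL}/api/v2/job_templates/?name__startswith=ad/get/" | jq '.results[].name'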

Related

Rest API design: Managing access to sub-entities

Note: I realize that this is close to being off-topic for being opinion-based, but I am hoping that there is some accepted best practice to handle this that I just don't know about.
My problem is the following: I need to design a Rest API for a program where users can create their own projects, and each project contains files that can only be seen by users that have access. I am stuck with how to design the "List all files of a project" query.
Standard Rest API practice would suggest two endpoints, like:
`GET /projects` # List all projects
`POST /projects` # Create new project
`GET /projects/id` # Get specific project
etc.
and the same for the files of a project.
However, there should never be a reason to list all files - only the files of a single project. To make it more complicated, access management needs to be a thing, users should never see files that are in projects they don't have access to.
I can see multiple ways to handle that:
So the obvious way is to implement the GET function, optionally with a filter. However, this isn't optimal, since if the user doesn't set a filter, it would have to crawl through all projects, check for each project whether the user has access, and then list all files the user has access to:
GET /files?project=test1
I could also make the files command a subcommand of the projects command - e.g.
GET /projects/#id/files
However, I have the feeling this isn't too restful, since it doesn't expose entities directly?
Is there any consensus on which should usually be implemented? Is it okay to "force" users to set a parameter in the first one? Or is there a third alternative that solves what I am looking for? Happy about any literature recommendations on how to design this as well.
Standard Rest API practice would suggest two endpoints
No, it wouldn't. REST practice would suggest figuring out the resources in your resource model.
Think "documents": I should be able to retrieve (GET) a document that describes all of the files in the project. Great! This document should only be accessible when the request authorization matches some access control list. Also good.
Maybe there should also be a document for each user, so they can see a list of all of the projects they have access to, where that document includes links to the "all of the files in the project" documents. And of course that document should also be subject to access control.
Note that "documents" here might be text, or media files, or scripts, or CSS, or pretty much any kind of information that you can transmit over the network. We can gloss the details, because "uniform interface" means that we manage them all the same way.
In other words, we're just designing a "web site" filled with interlinked documents, with access control.
Each document is going to need a unique identifier. That identifier can be anything we want: /5393d5b0-0517-4c13-a821-c6578cb97668 is fine. Because it can be anything we want, we have extra degrees of freedom.
For example, we might design our identifiers such that the document whose identifiers begin with /users/12345 are only accessible by requests with authorization headers that match user 12345, and that all documents whose identifiers begin with /projects/12345 are only accessible by requests with authorization headers that match any of the users that have access to that specific project, and so on.
In other words, it is completely acceptable to choose resource identifier spellings that make your implementation easier.
(Note: in an ideal world, you would have "cool" identifiers that are implementation agnostic, so that they still work even if you change the underlying implementation details of your server.)
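To make that concrete, a purely illustrative sketch (host, paths and token are made up, not part of the question): the client simply GETs documents, and the server decides from the Authorization header whether to serve each one:
curl -s -H "Authorization: Bearer ${TOKEN}" https://api.example.com/users/12345/projects   # the "projects user 12345 can access" document
curl -s -H "Authorization: Bearer ${TOKEN}" https://api.example.com/projects/777/files     # the "all files in project 777" document, checked against that project's ACL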
I have the feeling this isn't too restful, since it doesn't expose entities directly?
It's fine. Resource models and entity models are different things; we shouldn't expect them to always match one to one.
After looking further, I came across this document from Microsoft. Some quotes:
Also consider the relationships between different types of resources and how you might expose these associations. For example, the /customers/5/orders might represent all of the orders for customer 5. You could also go in the other direction, and represent the association from an order back to a customer with a URI such as /orders/99/customer. However, extending this model too far can become cumbersome to implement. A better solution is to provide navigable links to associated resources in the body of the HTTP response message. This mechanism is described in more detail in the section Use HATEOAS to enable navigation to related resources.
In more complex systems, it can be tempting to provide URIs that enable a client to navigate through several levels of relationships, such as /customers/1/orders/99/products. However, this level of complexity can be difficult to maintain and is inflexible if the relationships between resources change in the future. Instead, try to keep URIs relatively simple. Once an application has a reference to a resource, it should be possible to use this reference to find items related to that resource. The preceding query can be replaced with the URI /customers/1/orders to find all the orders for customer 1, and then /orders/99/products to find the products in this order.
This makes me think that using solution 2 is probably the best case for me, since each file will be associated with only a single project, and should be deleted when a project is deleted. Files cannot exist on their own, outside of projects.

How to recursively get dependent resources of Kubernetes owner resource

With Kubernetes you can use the Garbage Collector to automate the deletion of dependent resources when owning resources are removed. I'm wondering the easiest method to print out the dependency tree of an owning resource, potentially limiting to a tree depth if needs be.
I understand the potential for crashing the API service given the ability to fan out to all resources in a cluster, which is likely why this isn't an easy feat to achieve, but I've been struggling to even find usable, community-supported workarounds or even discussions/issues relating to this topic (likely my poor searching skills), so any help in achieving this would be great!
To make things more concrete a specific example of an abstract kubectl get query I'd like to achieve would be something like kubectl get scheduledworkflow <workflow name> --dependents:
This would find the Kubeflow Pipelines ScheduledWorkflow resource then recurse,
That would find all Argo Workflow resources,
Then for each Workflow resource many Pod and Volume resources (there are a few other types but wanted to paint the picture of these being disparate resource types).
We typically only keep a small number of Argo Workflow resources in the cluster at any one time, as the majority of our Workflows spawn 1k+ Pods, so we have pretty aggressive GC policies in place. Even so, listing these is just painful at the moment and we need to use a custom script to do it, but I'm wondering if there is a higher level CLI, SDK or API available (or any group working on this issue in the community!).
There are no ready-made solutions for this.
I see two options for how to proceed:
1 - Probably this is what you already mentioned: "need to use a custom script to do it".
The idea is to get the JSON of the required resource groups and then process it with any available language like bash/python/java/etc. and/or jq. All dependent objects have an ownerReferences field in their metadata which allows matching them to their owners; see the sketch after the links below.
More information about owners and dependents
jq tool and examples
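A minimal sketch of that approach for one level of the tree (namespace, owner kind and owner name are placeholders; repeat the same filter per level to recurse, e.g. ScheduledWorkflow -> Workflow -> Pod):
OWNER_KIND="Workflow"
OWNER_NAME="my-workflow"
NAMESPACE="default"
# list resources of one kind whose ownerReferences point at the given owner
kubectl get pods -n "$NAMESPACE" -o json \
  | jq -r --arg kind "$OWNER_KIND" --arg name "$OWNER_NAME" '
      .items[]
      | select(any(.metadata.ownerReferences[]?; .kind == $kind and .name == $name))
      | .metadata.name'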
2 - Write your own tool based on the Kubernetes garbage collector.
The Kubernetes garbage collector works on a graph built by the GraphBuilder:
garbage collector source code
The graph is kept up to date by using reflectors:
GarbageCollector runs reflectors to watch for changes of managed API
objects, funnels the results to a single-threaded
dependencyGraphBuilder, which builds a graph caching the dependencies
among objects
See the graph_builder source code for the whole logic.
The built graph uses the node type:
graph data structure
Also, it's worth mentioning that working with the API server is more convenient using the Kubernetes client libraries, which are available for different languages.

REST: Get query only changeable objects

I have a bunch of APIs which return several types of data.
All users can query all data by using a GET rest api.
A few users can also change data. What is a common approach, when designing a REST API, to query only the data that can be changed by the current user, while still allowing the API to return all data (for display mode)?
To explain it further:
The software manages projects. All projects are accessible for all users (also anonymous) via an api (let's call it GET api/projects).
A user has the ability to see a list of all projects he is involved in and which he can edit.
The API should return exactly the same data, but limited to the projects he is involved in.
Should I create an additional parameter, maybe pass an HTTP header, or something else?
There's no one-size-fits-all answer to this, so I will give you one recommendation that works for some people.
I don't really like creating resources that have 'complex access control'. Instead, I would prefer to create distinct resources for different access levels.
If you want to return limited results for people that have partial access, it might be better to create new resources that reflect this.
I think this might also help a bit with thinking about the abstract role of a person who is not allowed to do everything. The abstraction probably doesn't exist per property, but it exists somewhere as a business rule.
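As a concrete sketch of "distinct resources for different access levels" applied to this case (the host and paths are made up, not an established convention), both endpoints return the same representation and only the scope differs:
curl -s https://api.example.com/api/projects                                                # all projects, readable by everyone including anonymous users
curl -s -H "Authorization: Bearer ${TOKEN}" https://api.example.com/api/users/me/projects   # only the projects the caller is involved in and may edit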

VSTS Iterate over all WorkItems in an Epic

I am building a VSTS dashboard widget where I would like to iterate over all Features in a particular Epic, and then for each Feature gather data about all the WorkItems to create a status report.
I know I can use getWorkItem() and getWorkItems(), but that is if I already know the WorkItem IDs. I want to loop through all the features and then all the WorkItems and see if they are completed, without knowing their particular ids.
The VSTS work item tracking system is very extensible, therefore there aren't any "fixed" methods that will return you specific work item types. Even though features in VSTS rely on one or more levels of work items being present, their name, the fields and other aspects of these work items are highly configurable.
To query the available work item levels (called Categories in VSTS), you can use the Categories/List API. This will allow you to find the hierarchy as it's configured in VSTS and which work item types are available at each level.
You can use the ProcessConfiguration/Get API to list the relationship between the different backlog levels: which is a parent of which and what type of backlog each represents. Is it a Task (lowest level), Requirement (Story, PBI level, planning level), or a Portfolio (Epic, Feature etc.) level backlog?
With this information, you can either use the Backlog/GetBacklogWorkItems API to fetch all the work items on a specific backlog, or you can construct a WIQL (Work Item Query Language) query to retrieve all work items that match that specific query. You can either export the WIQL from Visual Studio or use an extension.
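For illustration, such a hierarchy query can also be run against the REST API; a minimal sketch, assuming a personal access token, a placeholder account/project, Epic id 123 and api-version 4.1 (adjust all of these to your instance):
curl -s -u ":${PAT}" -H "Content-Type: application/json" \
  -d "{\"query\": \"SELECT [System.Id] FROM WorkItemLinks WHERE ([Source].[System.Id] = 123) AND ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward') MODE (Recursive)\"}" \
  "https://myaccount.visualstudio.com/MyProject/_apis/wit/wiql?api-version=4.1"
The response contains only ids and link relations; fetch the fields you need afterwards with getWorkItems().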
Depending on what you need with each work item, you can either query directly for the required fields, or just query the work item IDs and fetch the work item details individually using the workitem/getWorkItems(id) API.
There is pretty extensive documentation available on each of these APIs and on the required VSTS services you can use from your extension. Going deep into each of the services goes too far for this answer. I suggest you start experimenting from here and ask new questions as they arise. You now have far more information to work with and it will be easier to ask targeted questions from there.

Orchard multi tenancy without table/database proliferation

I'm looking at implementing a multi-tenant portal solution for my SaaS application using Orchard CMS. I'm pleased that it appears multi-tenancy is a first class feature, but it looks like in order to achieve it, I've got to either a) create a set of tables for each tenant with a table prefix or b) have separate databases for each tenant.
I'm trying to build a solution for 10,000+ customers, and so anything that requires me to make physical data schema changes per tenant won't scale. In our SaaS application, we use a tenantID column on all tables, plus the use of nHibernate filters and a heck of a lot of indexes to allow us to scale.
I'd like to do the same in Orchard. So instead of a table for each tenant, I'd like ONE set of tables with a tenantID, and then use filters in the data access layer (NHib) to always pull the right data.
Questions:
1) Is this possible?
2) Has anyone done this?
3) Any thoughts on the best way? I was going to modify the MultiTenancy/NHibernate module source directly.
It is possible, but quite hard to do.
It's also most likely not a scenario for Orchard multi-tenancy, but without any further details I cannot be sure.
This feature fits best in cases where you need to have totally independent applications and (almost) nothing is supposed to be shared between them - like in shared hosting, for instance. The major drawback is the memory overhead, because each tenant has its own copy of the whole internal object infrastructure.
A much easier approach, instead of trying to put a square peg in a round hole by tweaking multi-tenancy, would be to use a single tenant and implement your desired multi-tenancy scheme in a separate module on your own, from scratch. You could e.g. have a "Tenant" content type and build your module around it.