Is there a JSON or YAML specification available for AWS SAM, similar to CloudFormation?

I maintain VaporShell, a PowerShell module to abstract CloudFormation template creation. As part of the CI pipeline, it pulls down the current CloudFormation specification JSON to generate the functions for resource types and resource property types.
Is there a similar specification JSON (or YAML) for SAM?
I currently maintain the SAM-specific code manually, but I'd like to ensure that any new resources, properties, etc. are pulled in as the SAM team releases them. A JSON or YAML specification would make my life much easier and help ensure up-to-date SAM support within VaporShell.
Thank you!

Unfortunately, for the AWS::Serverless types there is no officially maintained specification:
https://github.com/awslabs/serverless-application-model/issues/1133
but there is an unofficial specification that several projects share, so at least the maintenance burden can be spread across projects:
https://github.com/awslabs/goformation/blob/master/generate/sam-2016-10-31.json
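The goformation file linked above follows the same general shape as the CloudFormation resource specification (a ResourceSpecificationVersion plus ResourceTypes and PropertyTypes maps), so a generator can consume it the same way. A minimal sketch of pulling resource type names out of a document with that shape — the embedded sample is illustrative, not the full spec:

```javascript
// Minimal sketch: extract resource type names from a CloudFormation-style
// specification document. The sample below imitates the spec's shape; in a
// CI pipeline you would fetch the real JSON from the goformation repo.
const sampleSpec = {
  ResourceSpecificationVersion: "2016-10-31",
  ResourceTypes: {
    "AWS::Serverless::Function": { Properties: { Handler: { PrimitiveType: "String" } } },
    "AWS::Serverless::Api":      { Properties: { StageName: { PrimitiveType: "String" } } },
  },
  PropertyTypes: {},
};

function listResourceTypes(spec) {
  // Each key under ResourceTypes is a resource type a generator would
  // emit a template function for.
  return Object.keys(spec.ResourceTypes ?? {}).sort();
}

console.log(listResourceTypes(sampleSpec));
// [ 'AWS::Serverless::Api', 'AWS::Serverless::Function' ]
```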

Related

Is there any way to query components in an Azure Pipeline?

I know one can use the REST API to query pipeline activities, but is there any way to query pipeline components (i.e., a listing of linked services, sources, sinks, parameters, etc.)?
Right now I'm manually recording all the components I see in the pipeline; being able to query them would make this listing faster and more accurate.
Thanks, Jeannie
I haven't found a way to get all the components of a pipeline directly. If the information you want is defined in the YAML file, you cannot retrieve it through the API; you would need to fetch and parse the YAML file yourself.
If the information you want is defined in the UI, such as variables, you can get it through the REST API Definitions - Get. And you can get the resources of the pipeline through Resources - List.
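As a sketch, the Definitions - Get endpoint can be called like this from Node — the organization, project, definition id, and PAT below are placeholders, and the URL shape follows the Azure DevOps Build REST API:

```javascript
// Sketch: call the Azure DevOps "Definitions - Get" REST endpoint for a
// build definition. Organization, project, id, and PAT are placeholders.
function definitionUrl(org, project, definitionId, apiVersion = "7.1") {
  return `https://dev.azure.com/${org}/${project}/_apis/build/definitions/${definitionId}?api-version=${apiVersion}`;
}

async function getDefinition(org, project, definitionId, pat) {
  const res = await fetch(definitionUrl(org, project, definitionId), {
    headers: {
      // PAT auth: empty username, token as password, base64-encoded.
      Authorization: "Basic " + Buffer.from(":" + pat).toString("base64"),
    },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // includes UI-defined settings such as variables
}

console.log(definitionUrl("my-org", "my-project", 42));
// https://dev.azure.com/my-org/my-project/_apis/build/definitions/42?api-version=7.1
```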

OpenAPI as a single source of truth - limitations

One of the benefits being promoted for API-first design or OpenAPI is that of their use as a single source of truth. To my mind, these schemas only serve as a contract - the actual source of truth for your API lies in your microservices implementation (typically a http endpoint).
How can OpenAPI claim to be a single source of truth when the contract cannot be enforced until the implementation on the API side is complete? I realise there is tooling available to assist with this, such as validation middleware that can be used to match your request and response against your schema, however this is typically only validated at the point that a network request is made, not at compile time.
Of course you could write API tests to validate the contract, but this depends heavily on good test coverage and isn't something you get out of the box.
TLDR - OpenAPI markets itself as being a single source of truth for APIs, but this simply isn't true until your API implementation matches the spec. What tools/techniques (if any) can be used to mitigate this?
Did a bit of additional research into available tooling and found a solution that helps mitigate this issue:
openapi-backend (and presumably other such libraries) can map your API routes/handlers to a specific OpenAPI operation or operationId. You can then enforce schema validation so that only routes defined in the spec can be implemented; otherwise a fail-fast error is thrown.
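openapi-backend does this mapping for you; the sketch below only illustrates the fail-fast idea in isolation, assuming a tiny inline spec: every registered handler must correspond to an operationId in the document, so drift between spec and implementation surfaces at startup rather than at request time.

```javascript
// Illustration of the fail-fast principle (not the openapi-backend API
// itself): reject any handler that has no matching operationId in the spec.
const spec = {
  openapi: "3.0.0",
  paths: {
    "/pets":      { get: { operationId: "listPets" } },
    "/pets/{id}": { get: { operationId: "getPet" } },
  },
};

function operationIds(doc) {
  const ids = [];
  for (const methods of Object.values(doc.paths ?? {})) {
    for (const op of Object.values(methods)) {
      if (op.operationId) ids.push(op.operationId);
    }
  }
  return ids;
}

function register(doc, handlers) {
  const known = new Set(operationIds(doc));
  for (const name of Object.keys(handlers)) {
    if (!known.has(name)) {
      // Fail fast: the implementation drifted from the contract.
      throw new Error(`Handler "${name}" has no matching operationId in the spec`);
    }
  }
  return handlers; // every handler is backed by the contract
}

register(spec, { listPets: () => [], getPet: () => ({}) }); // ok
// register(spec, { deletePet: () => {} });                 // throws
```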

PowerShell Approved Verbs for "Archive" and "Unarchive" of Data Items

I have data that supports being Archived and Unarchived but none of the Approved Verbs for PowerShell Commands for data management or resource lifecycle seem to be a good fit.
Technically, the relevant data items are actually available over RESTful API and are referenced by ID. The Cmdlets I'm building speak to said API.
EDIT: These data items are more accurately described as records with the act of archiving being some form of recategorisation or relabelling of said records as being in an archived state.
Which verbs are most appropriate and what are some of the implementation factors and considerations that should be taken into account when choosing?
New-DataArchive and Remove-DataArchive
Not sure of the particulars of the underlying API, but often there's a POST (new) and a DELETE (remove).
I'm also a big fan of adding an [Alias] when there's not a great match. For example, I was recently working in a git domain where Fork is a well-known concept, so I picked the "closest" approved verb and added an alias to provide clarity (aliases can be whatever you want):
function Copy-GithubProject {
    [Alias("Fork-GithubProject")]
    [CmdletBinding()]
    param()
    # ...
}
I think this comes down to user experience (the user of the cmdlets) versus actual implementation. The Approved Verbs for PowerShell Commands article describes verbs mostly in terms of the actual implementation, not the user experience of those consuming the cmdlets. I think choosing PowerShell verbs based on the actual implementation, rather than abstracting that away and focusing on the common-sense user experience, is how the approved verb list is meant to be used.
Set (s): Replaces data on an existing resource or creates a resource that contains some data...
Get (g): Specifies an action that retrieves a resource. This verb is paired with Set.
Although the user may be archiving something, they may actually only be changing a label or an archive bit on the resource. In my case, the 'archiving' is just a flag on a row in a backend database; that is, it replaces data on an existing resource, so Set-ArchiveState (or equivalent), as Seth suggested, is the most appropriate here.
New vs. Set
Use the New verb to create a new resource. Use the Set verb to modify an existing resource, optionally creating it if it does not exist, such as the Set-Variable cmdlet.
...
New (n): Creates a resource. (The Set verb can also be used when creating a resource that includes data, such as the Set-Variable cmdlet.)
I think New would only be applicable if you were creating a new resource based on the old one, with the new resource representing an archived copy. In my use case, archival of a resource is represented by a flag; I am primarily changing data on an existing resource, so New isn't suitable here.
Publish (pb): Makes a resource available to others. This verb is paired with Unpublish.
Unpublish (ub): Makes a resource unavailable to others. This verb is paired with Publish.
There is an argument to be made that, if archiving/unarchiving restricts availability of the resource, Publish/Unpublish would be appropriate, but I think this harms the user experience even more than Set/Get does by using the terminology in an uncommon way.
Compress (cm): Compacts the data of a resource. Pairs with Expand.
Expand (en): Restores the data of a resource that has been compressed to its original state. This verb is paired with Compress.
This is quite implementation specific and I think would only be suitable if the main purpose of the Archive/Unarchive action is for data compression and not for resource lifecycle management.

How to find out whether Google storage object is live or noncurrent?

The Google Cloud Storage documentation describes Object Versioning. There are two kinds of object versions: live and noncurrent.
gsutil allows listing both noncurrent and live versions using the -a switch: https://cloud.google.com/storage/docs/using-object-versioning#list.
Also, I can list all the versions programmatically by supplying versions: true option to the Bucket.getFiles method.
However, I have not found any way to programmatically determine whether a particular object version is live or noncurrent. There seems to be no property or method on the File object for this.
What is the proper way of finding this out given a File instance?
Looking at the REST API, there is no field for the live/noncurrent state of an object version; you only have a generation number per object resource representation.
I assume you have to apply an algorithm like this yourself:
Use the List API (getFiles) on a single object with the versions option set to true.
The highest generation is the live version; the others are noncurrent.
Except if timeDeleted is populated on the highest generation (the timestamp of the live version's deletion); in that case, all the versions are noncurrent.
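The steps above can be sketched as a pure function over the version metadata returned by getFiles({ versions: true }). The field names (generation, timeDeleted) follow the GCS object resource; the sample data is illustrative:

```javascript
// Sketch of the algorithm: given the metadata of every version of a single
// object, return the generation of the live version, or null if all
// versions are noncurrent.
function liveGeneration(versions) {
  if (versions.length === 0) return null;
  // The highest generation is the candidate live version.
  const newest = versions.reduce((a, b) =>
    Number(a.generation) > Number(b.generation) ? a : b
  );
  // If the newest generation has timeDeleted set, the live version was
  // deleted and every remaining version is noncurrent.
  return newest.timeDeleted ? null : newest.generation;
}

const versions = [
  { generation: "1700000001000000", timeDeleted: "2023-11-15T00:00:00Z" },
  { generation: "1700000002000000" },
];
console.log(liveGeneration(versions)); // "1700000002000000"
```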

Kubernetes: validating update requests to custom resource

I created a custom resource definition (CRD) and its controller in my cluster, now I can create custom resources, but how do I validate update requests to the CR? e.g., only certain fields can be updated.
The Kubernetes docs on Custom Resources has a section on Advanced features and flexibility (never mind that validating requests should be considered a pretty basic feature 😉). For validation of CRDs, it says:
Most validation can be specified in the CRD using OpenAPI v3.0 validation. Any other validations supported by addition of a Validating Webhook.
The OpenAPI v3.0 validation won't help you accomplish what you're looking for, namely ensuring immutability of certain fields on your custom resource. It is only useful for stateless validations, where you look at a single instance of an object and decide whether it is valid on its own; you can't compare it to a previous version of the resource and verify that nothing has changed.
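For reference, this is the kind of stateless constraint OpenAPI v3.0 validation can express directly in the CRD; a minimal fragment (group and field names are placeholders):

```yaml
# Illustrative CRD fragment: per-field, stateless validation only.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  versions:
    - name: v1
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                director:
                  type: string
                  minLength: 1   # valid or not on its own; no access to the old object
```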
You could use Validating Webhooks. It feels like a heavyweight solution, as you will need to implement a server that conforms to the Validating Webhook contract (responding to specific kinds of requests with specific kinds of responses), but you will at least have the data required to make the desired determination, e.g. knowing that it's an UPDATE request and what the old object looked like. For more details, see here. I have not actually tried Validating Webhooks, but it feels like it could work.
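A minimal sketch of the decision logic such a webhook would run on each AdmissionReview request, assuming a hypothetical immutable-field list; the surrounding HTTPS server and TLS setup are omitted:

```javascript
// Sketch of a validating webhook's core: on UPDATE, the AdmissionReview
// request carries both the old and the new object, so protected fields can
// be compared. The field list and sample objects are illustrative.
const IMMUTABLE_FIELDS = ["spec.director"];

function getPath(obj, path) {
  return path.split(".").reduce((o, k) => (o == null ? o : o[k]), obj);
}

function review(request) {
  if (request.operation === "UPDATE") {
    for (const field of IMMUTABLE_FIELDS) {
      if (getPath(request.object, field) !== getPath(request.oldObject, field)) {
        return { allowed: false, status: { message: `${field} is immutable` } };
      }
    }
  }
  return { allowed: true };
}

console.log(review({
  operation: "UPDATE",
  oldObject: { spec: { director: "vbox-admin" } },
  object:    { spec: { director: "bad-new-director-name" } },
}));
// { allowed: false, status: { message: 'spec.director is immutable' } }
```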
An alternative approach I've used is to store the user-provided data in the Status subresource of the custom resource the first time it's created, and then always read the data from there. Any changes to the Spec are ignored, though your controller can notice discrepancies between what's in the Spec and what's in the Status, and embed a warning in the Status telling the user that they've mutated the object in an invalid way and that their specified values are being ignored. You can see an example of that approach here and here. As per the relevant README section of that linked repo, this results in the following behaviour:
The AVAILABLE column will show false if the UAA client for the team has not been successfully created. The WARNING column will display a warning if you have mutated the Team spec after initial creation. The DIRECTOR column displays the originally provided value for spec.director and this is the value that this team will continue to use. If you do attempt to mutate the Team resource, you can see your (ignored) user-provided value with the -o wide flag:
$ kubectl get team --all-namespaces -owide
NAMESPACE   NAME   DIRECTOR     AVAILABLE   WARNING   USER-PROVIDED DIRECTOR
test        test   vbox-admin   true                  vbox-admin
If we attempt to mutate the spec.director property, here's what we will see:
$ kubectl get team --all-namespaces -owide
NAMESPACE   NAME   DIRECTOR     AVAILABLE   WARNING                                              USER-PROVIDED DIRECTOR
test        test   vbox-admin   true        API resource has been mutated; all changes ignored   bad-new-director-name