Swagger codegen: generate array elements with the correct names

I am using swagger codegen with the following YAML snippet:
lineItem:
  type: array
  xml:
    wrapped: true
    name: 'lineItems'
  items:
    #xml:
    #  name: 'lineItem'
    $ref: '#/components/schemas/lineItem'
The object model is serialized to XML and needs to be in the format:
<lineItems>
  <lineItem/>
  <lineItem/>
</lineItems>
I have referred to the documentation here but cannot get the expected output:
https://swagger.io/docs/specification/data-models/representing-xml/
I have tried various things but the wrapping element and the individual elements always have the same name! e.g.
<lineItem>
  <lineItem/>
  <lineItem/>
</lineItem>
What is the correct configuration for this?
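For reference, a sketch of a configuration that should produce the wrapped form above. In OpenAPI 3.0, sibling keys next to $ref are ignored (which is why the commented-out xml block under items has no effect), so the item name belongs on the referenced schema itself; whether the code generated by swagger-codegen honors it depends on the generator version:

# In components/schemas, give the referenced schema its own XML name:
lineItem:
  type: object
  xml:
    name: 'lineItem'  # element name for each array item
  properties:
    sku:              # hypothetical property, for illustration only
      type: string
# The array property then only names the wrapping element:
lineItems:
  type: array
  xml:
    wrapped: true
    name: 'lineItems'  # name of the wrapping element
  items:
    $ref: '#/components/schemas/lineItem'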

Related

OpenAPI 3.0 specifications - validating an array that is not required, but has minItems=1

optionalIds:
  type: array
  items:
    $ref: '#/components/schemas/Id'
  minItems: 1
optionalIds is included inside another complex type, and optionalIds is not a "required" property.
I am using openapi-codegen to generate code along with bean validations.
The validation checks that the array optionalIds contains at least one element. Since this is not a required property, a request that omits optionalIds should go through fine.
Is this understanding correct?
What should be done to the bean validation templates so that this works?
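For what it's worth, under JSON Schema semantics (which OpenAPI follows here), minItems only constrains the array when the property is actually present, so the understanding above matches the spec. Two hypothetical request bodies for illustration:

# optionalIds omitted entirely: valid, since the property is not required
{"otherField": "x"}

# optionalIds present but empty: invalid, since it violates minItems: 1
{"otherField": "x", "optionalIds": []}

On the generated-code side, the standard Bean Validation @Size constraint treats null as valid, so a template that emits a size check without a not-null check should already allow requests that omit optionalIds.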

How to provision a bunch of resources using the PyPlate macro

In this learning exercise I want to use a PyPlate script to provision the BucketA, BucketB and BucketC buckets in addition to the TestBucket.
Imagine that the BucketNames parameter could be set by a user of this template who would specify a hundred bucket names using UUIDs for example.
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Resources:
  TestBucket:
    Type: "AWS::S3::Bucket"
  #!PyPlate
  output = []
  bucket_names = params['BucketNames']
  for name in bucket_names:
      output.append('"' + name + '": {"Type": "AWS::S3::Bucket"}')
When deployed, the above fails with: Template format error: YAML not well-formed. (line 15, column 3)
Although the accepted answer is functionally correct, there is a better way to approach this.
Essentially, the PyPlate macro recursively reads through all the key-value pairs of the template and replaces each value that matches the #!PyPlate regex with its Python-computed value. So the PyPlate code needs to sit under a corresponding key.
Here's how a combination of PyPlate and Explode would solve the above problem.
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate, Explode]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Mappings: {}
Resources:
  MacroWrapper:
    ExplodeMap: |
      #!PyPlate
      param = "BucketNames"
      mapNamespace = param + "Map"
      template["Mappings"][mapNamespace] = {}
      for bucket in params[param]:
          template["Mappings"][mapNamespace][bucket] = {
              "ResourceName": bucket
          }
      output = mapNamespace
    Type: "AWS::S3::Bucket"
  TestBucket:
    Type: "AWS::S3::Bucket"
This approach is powerful because:
You can append resources to an existing template, because you don't tamper with the whole Resources block.
You don't need to rely on hardcoded Mappings, as Explode alone requires. You can drive dynamic logic in CloudFormation.
Most of the CloudFormation properties and key-value pairs stay in the YAML part, with a minimal Python part augmenting the template's functionality.
Please be aware of the macro order, though: PyPlate needs to execute before Explode, which is why the order is [PyPlate, Explode]. Execution is sequential.
If we walk through the source code of PyPlate, it gives us more template-related variables to work with, namely:
params (stack parameters)
template (the entire template)
account_id
region
I utilised the template variable in this case.
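For illustration, assuming the default BucketA,BucketB,BucketC, the processed template after PyPlate runs (and before Explode runs) should look roughly like this sketch:

Mappings:
  BucketNamesMap:
    BucketA: {ResourceName: BucketA}
    BucketB: {ResourceName: BucketB}
    BucketC: {ResourceName: BucketC}
Resources:
  MacroWrapper:
    ExplodeMap: BucketNamesMap
    Type: "AWS::S3::Bucket"
  TestBucket:
    Type: "AWS::S3::Bucket"

Explode then expands MacroWrapper into one bucket resource per mapping entry, taking each entry's ResourceName as the logical resource name.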
Hope this helps
This works for me:
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Resources: |
  #!PyPlate
  output = {}
  bucket_names = params['BucketNames']
  for name in bucket_names:
      output[name] = {"Type": "AWS::S3::Bucket"}
Explanation:
The Python code outputs a dict where each key is a bucket name and the value is its configuration:
{'BucketA': {'Type': 'AWS::S3::Bucket'}, 'BucketB': {'Type': 'AWS::S3::Bucket'}}
Prior to macro execution, the YAML template is transformed to JSON, and because the above is valid JSON data, I can plug it in as the value of Resources.
(Note that the hardcoded TestBucket won't work with this approach, so I had to remove it.)
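In other words, after the transform the Resources section effectively becomes:

Resources:
  BucketA:
    Type: AWS::S3::Bucket
  BucketB:
    Type: AWS::S3::Bucket
  BucketC:
    Type: AWS::S3::Bucket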

OpenAPI parameter with brackets and variable name

I am working on an API which allows searching with URLs like:
GET https://example.com/api/data?search[field1]=value1
GET https://example.com/api/data?search[field2]=value2
GET https://example.com/api/data?search[field1]=value1&search[field2]=value2
Basically, you can search for one or more field values by putting a field name in brackets. The problem is, the field names are defined by the user in their settings. The field name will be a string, but otherwise is not known ahead of time at a global level.
This answer is almost what I am looking for; I just can't find a way to define the value inside the brackets as "any string" rather than a list of known names.
The search parameter can be defined as a free-form object with the deepObject serialization style and minProperties: 1 to enforce the presence of at least one field in the search query.
Make sure you use OpenAPI 3.0 (openapi: 3.0.x) and not OpenAPI 2.0 (swagger: "2.0"); the latter does not support objects in query strings.
openapi: 3.0.2
...
paths:
  /api/data:
    get:
      parameters:
        - in: query
          name: search
          required: true
          schema:
            type: object
            additionalProperties: true  # Default value, may be omitted
            minProperties: 1
            # Optional example to use as a starting value for "try it out" in Swagger UI
            example: >
              {
                "field1": "value1",
                "field2": "value2"
              }
          style: deepObject
          explode: true
      responses:
        200:
          description: OK
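With style: deepObject and explode: true, the object's keys serialize into exactly the bracketed form from the question, for example:

GET https://example.com/api/data?search[field1]=value1&search[field2]=value2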

How to note a calculated default value in OAS3

I'm updating my API spec (OAS 3.0.0), and am having trouble understanding how to properly model a "complex" default value.
In general, default values for parameters are scalar values (e.g. the field offset has a default value of 0). But in the API I'm spec'ing, the default value is actually calculated based on other provided parameters.
For example, say we take the Pet model from the example documentation and decide that all animals need to be tagged. If the user of the API wants to supply a tag, great. If not, it will be equal to the name.
One possibility:
Pet:
  required:
    - id
    - name
  properties:
    id:
      type: integer
      format: int64
    name:
      type: string
    tag:
      type: string
      default: '#/components/schemas/Pet/name'
This stores the path value as the default, but I'd like to have it explain that the default value will be calculated.
Bonus points if I can encode information from a parent schema.
Is the alternative to just describe the behavior in a description field?
OpenAPI Specification does not support dynamic/conditional defaults. You can only document the behavior verbally in the description.
That said, you can use specification extensions (x-...) to add custom information to your definitions, like so:
tag:
  type: string
  x-default: name
or
tag:
  type: string
  x-default:
    propertyName: name
    # or similar
and extend the tooling to support your custom extensions.
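For instance, the two approaches can be combined so that human readers get the explanation and custom tooling gets a machine-readable hint (x-default here is a made-up extension name, not part of the specification):

tag:
  type: string
  description: If omitted, defaults to the value of the pet's name.
  x-default:
    propertyName: name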

Mongoid and collections

I am trying to configure and use mongoid for the first time. I have set the mongoid.yml config file simply as:
host: localhost
database: table
and my code:
Mongoid.load!("/mongoid.yml")

class Data
  include Mongoid::Document
  field :study, type: String
  field :nbc_id, type: String
  field :short_title, type: String
  field :source, type: String
  field :start_date, type: Date
end

puts Data.study
I keep getting an error:
NoMethodError at / undefined method `study' for Data:Class
I think it is because I have not specified the collection name, which is 'test'. However, I can find no examples of how to do this. Do I specify it in the .yml file or in the code? What is the correct syntax? Can anyone point me in the right direction?
Tx.
According to the Mongoid documentation, "Mongoid by default stores documents in a collection that is the pluralized form of the class name. For the following Person class, the collection the document would get stored in would be named people."
http://mongoid.org/docs/documents.html
The documentation goes on to state that Mongoid uses a method called ActiveSupport::Inflector#classify to determine collection names, and provides instructions on how to specify the plural yourself.
Alternatively, you can specify the collection name in your class by calling "store_in" in your class definition:

class Data
  include Mongoid::Document
  store_in :test
end
Hope this helps!