How to Renew Expired Metadata for SSO/SAML (single sign-on)

I am trying to host a metadata file and act as an IdP.
To generate the metadata I used the following online tool https://www.samltool.com/idp_metadata.php
After filling out the form and building it, the validUntil attribute is set to be the current timestamp.
When I test this metadata with https://samltest.id/upload.php, it says the metadata has expired.
When I increment the year and try again, the metadata date doesn't seem to change.
How can I create an IdP metadata file that is valid?

You can manually change the validUntil value according to your requirements; there won't be any issue during validation on the aforementioned site.
To verify, I quickly ran through changing the field value myself, and my input metadata passed the validation (it failed with the original date and passed with the updated one).
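If you prefer to script the fix rather than edit the file by hand, here is a minimal sketch; the file name idp-metadata.xml and the one-year window are just assumptions, and the point is only that validUntil must hold a future UTC timestamp in ISO-8601 form.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class FixValidUntil {
    public static void main(String[] args) throws Exception {
        Path metadata = Path.of("idp-metadata.xml"); // assumed location of the generated metadata
        String xml = Files.readString(metadata);

        // SAML expects a UTC ISO-8601 timestamp, e.g. validUntil="2026-01-01T00:00:00Z"
        String oneYearFromNow = OffsetDateTime.now(ZoneOffset.UTC)
                .plusYears(1)
                .format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss'Z'"));

        // Overwrite whatever value the online generator put in the validUntil attribute
        xml = xml.replaceFirst("validUntil=\"[^\"]*\"", "validUntil=\"" + oneYearFromNow + "\"");
        Files.writeString(metadata, xml);
    }
}
```

Re-uploading the rewritten file should then validate, since the check described above only cares that validUntil lies in the future.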

Is there any way to check for a duplicate message control id (MSH:10) in the MSH segment using Mirth Connect?

MSH|^~\&|sss|xxx|INSTANCE2|KKLIU 0063/2021|20190905162034||ADT^A28^ADT_A05|Zx20190905162034|P|2.4|||NE|NE|||||
Whenever a message comes in, it needs to be validated to check whether a message with control id Zx20190905162034 has already been processed.
Mirth will not do this for you, but you can write your own JavaScript transformer to check a database or your own set of previously encountered control ids.
Your JavaScript can make use of any appropriate Java classes.
The database check (which you can implement using a code template) is the easier way out. You might want to designate the column storing MSH:10 values as a primary key or define an index on it, so that queries against unique entries are faster. Other alternatives include reading all MSH:10 values already in the database when the channel is (re)deployed and placing them in a global map variable, or maintaining them behind an API that you make a GET request to while processing every message. Which option fits best depends on the number of records we are talking about.
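For the database route, the core check is just an insert into a table whose MSH:10 column is unique. A rough JDBC sketch of that idea follows; the table and column names are made up, and in Mirth you would put the equivalent logic in a JavaScript transformer or code template, which can call JDBC classes directly.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

public class ControlIdDeduplicator {
    private final Connection connection;

    public ControlIdDeduplicator(Connection connection) {
        this.connection = connection;
    }

    /**
     * Returns true if this control id (MSH:10) has not been seen before and was recorded,
     * false if a message with the same control id was already processed.
     */
    public boolean recordIfNew(String controlId) throws SQLException {
        // processed_messages.control_id is a primary key (or has a unique index),
        // so a duplicate insert fails immediately instead of needing a separate SELECT.
        String sql = "INSERT INTO processed_messages (control_id) VALUES (?)";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, controlId); // e.g. "Zx20190905162034"
            statement.executeUpdate();
            return true;
        } catch (SQLIntegrityConstraintViolationException duplicate) {
            return false;
        }
    }
}
```

In the transformer you would then filter or route the message based on the returned boolean.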

Setting id to task using Google Task API returns 400 invalid value

I am using Google's Java API for a project.
Strangely, inserting a task without setting an id works fine. However, inserting a task with an id returns a 400 invalid value error. The id is to be used for syncing local data with Google Tasks.
I'm pretty sure there's nothing wrong with the algorithm that generates the ids for the task. The same algorithm works perfectly with the Google Calendar API.
Am I missing something here?
You may refer to this SO answer. It suggests passing the id key/value pair together with the title information you are already sending. There's no documentation to indicate this is a required parameter, especially since it is included in the URL. Google requires the id of the task to be passed as part of the URL, the parameters, and the body.
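In code, that suggestion boils down to setting the id on the Task body next to the title before calling insert. A hedged sketch with the Tasks Java client; the service object, task list id, and locally generated id are assumed to come from your existing code.

```java
import com.google.api.services.tasks.Tasks;
import com.google.api.services.tasks.model.Task;
import java.io.IOException;

public class TaskSync {
    /** Inserts a task whose id is supplied by the local sync layer rather than by Google. */
    public static Task insertWithLocalId(Tasks service, String taskListId, String localId)
            throws IOException {
        Task task = new Task()
                .setId(localId)            // the locally generated id used for syncing
                .setTitle("Synced task");  // the title information you are already sending
        return service.tasks().insert(taskListId, task).execute();
    }
}
```

Whether the backend accepts a caller-supplied id at all is exactly what the 400 is complaining about, so treat this as the pattern the linked answer describes rather than a guaranteed fix.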

Cloud SQL API Explorer, settingsVersion

I'm getting familiar with the Cloud SQL API (v1beta1). I'm trying to update authorizedNetworks (sql.instances.update) and I'm using the API Explorer. I think my request body is alright except for 'settingsVersion'. According to the docs it should be:
The version of instance settings. This is a required field for update
method to make sure concurrent updates are handled properly. During
update, use the most recent settingsVersion value for this instance
and do not try to update this value.
Source: https://developers.google.com/cloud-sql/docs/admin-api/v1beta3/instances/update
I have not found anything useful related to settingsVersion. When I try with different strings, instead of receiving 200 and the response, I get 400 and:
"message": "Invalid value for: Expected a signed long, got '' (class
java.lang.String)"
If I insert a random number, I get 412 (Precondition Failed) and:
"message": "Condition does not match."
Where do I obtain settingsVersion, and what is a "signed long" here?
You should do a GET operation on your instance and fetch the current settings; those settings contain the current version number, and that is the value you should use.
This is done to avoid unintentional settings overwrites.
For example, if two people get the current instance status which has version 1, and they both try to change something different (for example, one wants to change the tier and the other wants to change the pricingPlan) by doing an Update operation, the second one to send the request would undo the change of the first one if the operation was permitted. However, since the version number is increased every time an update operation is performed, once the first person updates the instance, the second person's request will fail because the version number does not match anymore.
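Concretely, settingsVersion is a 64-bit integer (the "signed long" in the error), so it must be sent as a number, not a quoted string. The sketch below shows the GET-then-update flow; the project and instance names, token handling, and the rest of the settings body are placeholders, and the URL shape simply follows the v1beta3 docs linked above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CloudSqlSettingsUpdate {
    public static void main(String[] args) throws Exception {
        String instanceUrl =
                "https://www.googleapis.com/sql/v1beta3/projects/my-project/instances/my-instance";
        String token = System.getenv("ACCESS_TOKEN"); // assumed OAuth access token
        HttpClient http = HttpClient.newHttpClient();

        // 1. GET the instance; the JSON response contains "settingsVersion": <long>.
        HttpResponse<String> current = http.send(
                HttpRequest.newBuilder(URI.create(instanceUrl))
                        .header("Authorization", "Bearer " + token)
                        .GET()
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(current.body()); // read the current settingsVersion from here

        // 2. Echo that exact number back in the update body, as a number (signed long),
        //    alongside the rest of the request body you were already sending
        //    (authorizedNetworks and so on).
        String updateBody = "{\"settings\": {\"settingsVersion\": 3}}"; // 3 = value read in step 1
        HttpResponse<String> updated = http.send(
                HttpRequest.newBuilder(URI.create(instanceUrl))
                        .header("Authorization", "Bearer " + token)
                        .header("Content-Type", "application/json")
                        .PUT(HttpRequest.BodyPublishers.ofString(updateBody))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(updated.statusCode() + " " + updated.body());
    }
}
```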

Amazon S3: Cache-Control and Expiry Date difference and setting through REST API

I want to enhance my site's loading speed, so I used http://gtmetrix.com/ to check what I could improve. One of the lowest ratings I got was for "Leverage browser caching". I found that my files (mainly images) have the problem "expiration not specified".
Okay, the problem is clear, I thought. I started googling and found that Amazon S3 prefers Cache-Control metadata over an expiry date (I lost this link; now I think maybe I misunderstood something). Anyway, I started looking for how to add Cache-Control metadata to an S3 object. I found
this page: http://www.bucketexplorer.com/documentation/amazon-s3--how-to-set-cache-control-header-for-s3-object.html
I learned that I must add a header to my PUT query:
x-amz-meta-Cache-Control: max-age=<value in seconds> (there must be no space between the equals sign and the digits; I made a mistake there).
I used Cache-Control: max-age=1296000 and it worked okay.
After that I read
https://developers.google.com/speed/docs/best-practices/caching
This article told me: 1) "Set Expires to a minimum of one month, and preferably up to one year, in the future."
2) "We prefer Expires over Cache-Control: max-age because it is more widely supported." (in the Recommendations section).
So I started looking for a way to set an expiry date on an S3 object.
I found this:
http://www.bucketexplorer.com/documentation/amazon-s3--set-object-expiration-on-amazon-s3-objects-put-get-delete-bucket-lifecycle.html
And what I found: "Using Amazon S3 Object Lifecycle Management, you can define the Object Expiration on Amazon S3 Objects. Once the Lifecycle defined for the S3 Object expires, Amazon S3 will delete such Objects. So, when you want to keep your data on S3 for a limited time only and you want it to be deleted automatically by Amazon S3, you can set Object Expiration."
I don't want to delete my files from S3. I just want to add cache metadata for a maximum cache time and/or a file expiry time.
I am completely confused by this. Can somebody explain which I should use: object expiration or cache-control?
S3 lets you specify the max-age and Expires headers for cache control; CloudFront lets you specify the Minimum TTL, Maximum TTL, and Default TTL for a cache behavior.
These headers just tell when the validity of an object expires in the cache (be it the CloudFront cache or the browser cache). To read how they are related, see the following link:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#ExpirationDownloadDist
To leverage browser caching, just specify the Cache-Control header for all the objects on S3.
Steps for adding Cache-Control to existing objects in your bucket:
git clone https://github.com/s3tools/s3cmd
Run s3cmd --configure
(You will be asked for the two keys - copy and paste them from your
confirmation email or from your Amazon account page. Be careful when
copying them! They are case sensitive and must be entered accurately
or you'll keep getting errors about invalid signatures or similar.
Remember to add s3:ListAllMyBuckets permissions to the keys or you will get an AccessDenied error while testing access.)
./s3cmd --recursive modify --add-header="Cache-Control: public, max-age=31536000" s3://your_bucket_name/
Your files won't be deleted, just not cached after the expiration date.
The Amazon docs say:
After the expiration date and time in the Expires header passes, CloudFront gets the object again from the origin server every time an edge location receives a request for the object.
We recommend that you use the Cache-Control max-age directive instead of the Expires header field to control object caching. If you specify values both for Cache-Control max-age and for Expires, CloudFront uses only the value of max-age.
"Amazon S3 Object Lifecycle Management" flushs some objects from your bucket based on a rule you can define. It's only about storage.
What you want to do is set the Expires header of the HTTP request as you set the Cache-Control header. It works the same: you juste have to add this header to your PUT query.
Expires doesn't work as Cache-Control: Expires gives a date. For instance: Sat, 31 Jan 2013 23:59:59 GMT
You may read this: https://web.archive.org/web/20130531222309/http://www.newvem.com/how-to-add-caching-headers-to-your-objects-using-amazon-s3/
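If you upload through the AWS SDK rather than s3cmd, both headers can be attached at PUT time. Here is a minimal sketch with the AWS SDK for Java v1; the bucket, key, file name, and the 15-day window mirror the question and are otherwise arbitrary.

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;
import java.util.Date;
import java.util.concurrent.TimeUnit;

public class UploadWithCachingHeaders {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        ObjectMetadata metadata = new ObjectMetadata();
        // Cache-Control is the header CloudFront and modern browsers prefer.
        metadata.setCacheControl("public, max-age=1296000"); // 1296000 s = 15 days, as in the question
        // Expires is the older, date-based equivalent; max-age wins if both are present.
        metadata.setHttpExpiresDate(new Date(System.currentTimeMillis() + TimeUnit.DAYS.toMillis(15)));

        s3.putObject(new PutObjectRequest("your_bucket_name", "images/example.jpg",
                new File("example.jpg")).withMetadata(metadata));
    }
}
```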

Where does Tridion store Metadata values?

When we define custom metadata for components, it is my understanding that this user-given metadata is stored in SQL Server, and it is not visible in the component XML. Can anyone explain how exactly metadata linked to a component actually gets stored?
A Component definition in Tridion has two types of fields: Content fields and Metadata fields. Both field types are stored in the Content Manager database (either SQL Server or Oracle). And both field types are retrieved whenever you read the Component back from Tridion through any of its APIs (TOM, TOM.NET or Core Service).
Only the Content fields are shown in the Source tab of a Component edit window, but the Metadata fields are visible on the Metadata tab of that same window.
If you want a single view of the XML of both Metadata and Content fields (as well as many other properties of your Component in Tridion), consider installing the PowerTools or the Item XML extension.
I think you may be confusing things a bit.
The Metadata is always stored as part of the component, under tcm:Metadata. When you publish the component, the metadata fields will also be available for querying in the Content Delivery Data Store.
Whether these fields get displayed as part of the component presentation depends on your templates. There's nothing stopping you from including these values in the output of your template (typical use case for SEO, for instance).
In summary:
In the CM, the Metadata is stored together with the Component.
In the CD, the Metadata is stored as part of the "CUSTOM_META" associated with the component.
Just a note: there is other metadata that is not stored as metadata fields, namely the system metadata, such as the Last Modified Date or the user who last modified the component. That's metadata in the CMS. There is also system metadata on the front end (broker or file-system metadata) that gets published when you publish a given component, such as the Last Published Date.
You can leverage the system metadata in your templates as well.