What's the OpenWhisk way to keep an auth token alive? - ibm-cloud

So, I am dealing with a system where I need to keep an auth token alive by periodically (essentially daily) renewing the token and giving it to the functions that need it.
The way I would do this in plain old Node.js is to just use a setInterval timer to renew it.
Should I approach this the same way with an OpenWhisk action? I could build a setInterval into the action itself and keep the token up to date. Or I could imagine creating an action that takes input from an interval trigger as well as regular requests, updates the token on trigger requests, and returns the token on other requests. Or should I be using Cloudant as the backend to manage the token?
Thoughts?

The following approach might solve your issue (a rough sketch follows below):
write one action (A) that renews the token
call action A at the beginning of any other action by using the action sequence capability (see "Creating action sequences")
use the Alarms (cron) trigger service to run action A periodically so that the token is renewed even if your sequence is not executed (see "Using the Alarms package")
in case you need to store the token from action A, you might think about using Cloudant
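For illustration, here is a minimal sketch of what action A could look like as a Node.js action that renews the token and persists it in Cloudant. The auth endpoint, Cloudant URL, database name, parameter names and cron schedule are all placeholder assumptions, and it assumes a Node.js runtime new enough to provide the global fetch API; the wsk commands at the end show one way to wire up the sequence and the alarm trigger.

```javascript
// Sketch of action A (renew-token). All URLs, parameter names and the database
// name are placeholders; assumes a Node.js runtime with the global fetch API.
async function main(params) {
  // 1. Renew the token against the (hypothetical) auth endpoint.
  const authResponse = await fetch(params.authUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ apiKey: params.apiKey })
  });
  const { token, expires_in } = await authResponse.json();

  // 2. Persist it in Cloudant via its plain HTTP document API so other actions
  //    (and later invocations) can read it. A real implementation would first
  //    fetch the existing document's _rev in order to update it without a conflict.
  await fetch(`${params.cloudantUrl}/tokens/current-token`, {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Basic ' +
        Buffer.from(`${params.cloudantUser}:${params.cloudantPassword}`).toString('base64')
    },
    body: JSON.stringify({ _id: 'current-token', token, expires_in, renewedAt: Date.now() })
  });

  // 3. Return the token so that action A can also sit at the front of a sequence
  //    and hand the fresh token to the next action.
  return { token };
}

// Wiring it up (CLI sketch):
//   wsk action create renew-token renew-token.js
//   wsk action create my-sequence --sequence renew-token,my-other-action
//   wsk trigger create daily-renew --feed /whisk.system/alarms/alarm --param cron "0 0 * * *"
//   wsk rule create renew-daily daily-renew renew-token
```

With this wiring, the rule fires renew-token once a day even when the sequence is never invoked, which matches the alarms suggestion above.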

Related

Set authorization for Rundeck Webhook Plugin

I've tried creating a webhook and it seems that anyone can access it as long as they know the URL. Is it possible to put authorization on it?
The webhook security is defined "inside Rundeck": you can define a user to execute the job, and you probably need to create an ACL granting access to execute it. The webhook URL includes all the auth info needed for execution, so you cannot add an extra auth layer to the webhook itself; the idea of webhooks is to be used with other solutions, for example triggering a job from PagerDuty.
If you need to execute jobs with "dynamic authentication", the best approach is to call the API using the user token as a variable (sketched below).
UPDATE 26/05/2022:
Now you can use a security token on your webhook calls; that's only available for PagerDuty Process Automation On Prem (formerly "Rundeck Enterprise"), take a look.
No, you can't put restrictions on who can run a job via webhook:
Webhooks are meant for when you DON'T want authorization restrictions.
API calls are meant for when you DO want authorization restrictions.
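As a hedged illustration of the "execute from the API with a user token" idea above, here is a minimal Node.js sketch that triggers a job through Rundeck's job-run endpoint instead of a webhook; the host, API version, job ID and token are placeholders.

```javascript
// Sketch: run a Rundeck job via the REST API with a per-user API token,
// so access is governed by the token's ACLs rather than an unprotected webhook URL.
// Host, API version and job ID are placeholders.
async function runJob(userToken) {
  const response = await fetch('https://rundeck.example.com/api/41/job/JOB-ID/run', {
    method: 'POST',
    headers: {
      'X-Rundeck-Auth-Token': userToken, // token-based authentication header
      Accept: 'application/json'
    }
  });
  if (!response.ok) {
    throw new Error(`Rundeck rejected the request: ${response.status}`);
  }
  return response.json(); // details of the queued execution
}
```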

Quarkus, Keycloak and OIDC token refresh

I’m currently working on a PoC with multiple Quarkus services and Keycloak RBAC. Works like a charm, easy to bootstrap and start implementing features.
But I encountered an issue that I could not solve in my mind. Imagine:
User accesses a protected service
quarkus-oidc extension does fancy token obtaining by HTTP redirecting, JWT in cookie lasts 30 minutes
User is authenticated and gets returned to the web application
User works in application, fills in forms and data
Data is being stored by JWT-enriched REST calls (we do validation by hibernate-validator)
User works again, taking longer than 30 min
Wants to store another entry, but the token from step 3 is now expired and the API call fails
User won't be happy, and neither will I
Possible ways to solve:
Make the JWT last longer than the current 30 minutes, but that just postpones the issue and opens some security doors
Storing users' input in local storage to restore it later after a token refresh (we would also do that so as not to lose users' work)
Refresh the token "silently" in JS without the user knowing. Is there a best practice for that?
I missed something important and the internet now tells me a better architecture for my application.
Thank you internet!
Re step 3: in Quarkus 1.5.0, adding quarkus.oidc.token.refresh-expired=true will get the ID token refreshed and the user session extended if the refresh grant succeeds.
For such use cases, I tend to prefer the reverse of JWT: I keep the user data in a shared data service (a data grid like Infinispan or Redis), so that this data is keyed by the user and always available, and I control the TTL of that data in the shared data service.
It can either be specific to an app, or shared between a small number of apps. It does bring some coupling but so does the JWT property structure.
For Quarkus, there are client integrations for Infinispan, Hazelcast, MongoDB and AWS DynamoDB, and you can bring other libraries.
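For the "refresh the token silently in JS" option from the question, here is a rough sketch of how that can look with the keycloak-js adapter in the browser. This is a different setup from the quarkus-oidc web-app cookie flow discussed above (a public client handling tokens in JS), and the URL, realm and client ID are placeholders.

```javascript
// Sketch: silent refresh in the browser with the keycloak-js adapter.
// URL, realm and client ID are placeholders.
import Keycloak from 'keycloak-js';

const keycloak = new Keycloak({
  url: 'https://keycloak.example.com',
  realm: 'my-realm',
  clientId: 'my-client'
});

await keycloak.init({ onLoad: 'login-required' });

async function apiFetch(url, options = {}) {
  // Refresh the access token if it expires within the next 30 seconds;
  // resolves without a round trip when the token is still fresh enough.
  await keycloak.updateToken(30);
  return fetch(url, {
    ...options,
    headers: { ...(options.headers || {}), Authorization: `Bearer ${keycloak.token}` }
  });
}
```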

Refreshing an Access Token vs obtaining a new one

I am writing some PowerShell scripts to log into Azure AD using a client ID/secret.
The code for obtaining a new access/refresh token is very simple. I have a Get-Token function with all the error checking etc. in about 80 lines of code.
Writing Get-Token so that it decides whether to get a new token, send an existing one, or obtain a new one with a refresh token would invariably be considerably more complex.
Is there any benefit to renewing tokens rather than simply obtaining a new one each time the function is called? (considering there is no consumer side to this, it is a backend script to connect to Azure services)
ETA
This isn't using ADAL, it is using raw HTTP requests, so I'm managing the tokens directly.
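For illustration only, here is the caching decision the question describes, sketched in Node.js rather than PowerShell to match the other examples in this thread; the token endpoint, scope and credential names are placeholders.

```javascript
// Sketch of the decide-whether-to-reuse-or-fetch logic described in the question.
// Endpoint, scope and credentials are placeholders.
let cached = null; // { accessToken, expiresAt }

async function getToken() {
  // Reuse the existing token while it is still comfortably valid...
  if (cached && cached.expiresAt - Date.now() > 60_000) {
    return cached.accessToken;
  }
  // ...otherwise simply request a fresh one with the client ID/secret.
  const response = await fetch('https://login.example.com/oauth2/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: process.env.CLIENT_ID,
      client_secret: process.env.CLIENT_SECRET,
      scope: 'placeholder-scope'
    })
  });
  const data = await response.json();
  cached = {
    accessToken: data.access_token,
    expiresAt: Date.now() + data.expires_in * 1000
  };
  return cached.accessToken;
}
```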

Enable/Disable Workflow using Azure Workflow Management API

I'm trying to enable/disable a logic app on Azure using the management APIs. I always get a 403 saying the client with the given object id does not have authorization to perform 'Microsoft.Logic/workflows/disable/action' ...
I do use the authentication token in my request and so far have been able to use the API to list all workflows, get trigger histories and in/out messages using the same method.
Any suggestion?
I've seen this issue a lot when calling the http:// endpoint instead of https://. We are looking into automatically redirecting, but for now you will need to make sure you are calling the https:// endpoint with the correct method (in this case a PUT).
EDIT: We discovered the issue was that the account being used to perform the enable/disable didn't have Contributor permissions.

How to handle api token expiry in app

Our API returns a user auth token which has a session expiry (30 minutes of inactivity). So, if we make an API call using the auth token, it renews the session to 30 minutes from the time of the call.
After 30 minutes of inactivity the API returns an error saying that the token has expired. At this point we should request a new auth token.
However, the obvious way to do this (show the user the log in screen and get them to log in again) will mean cutting the user off in the middle of some functions in the app.
For instance, we have various view controllers with options and inputs which aggregate and submit one whole API call at the end of the process. If the session expires on the server whilst the user is filling out these inputs and views then they will be logged out when the API call is made, and they will lose their progress in these views.
There are two possible workarounds for this:
We set timers in the app ourselves to make sure the user is logged out after 30 minutes of inactivity in the app. This means they won't get logged out during a set of inputs; however, the server session may still expire even though we are running our own timer, so this won't work.
We poll the server every 10 seconds or so to ask if the auth token is still valid. This will eat battery, data and all sorts, and just isn't a reasonable way to do something like this.
Does anyone have any ideas?
Thanks
Tom
From your description, it sounds like a classic failed transaction problem. Just in case you are not familiar with transaction processing, "Nuts and Bolts of Transaction Processing" is a primer on the topic.
If you have the ability to modify the back end system, you will want to ensure an ACID backend.
This could mean building up data on the client and not sending it to the server until you have a complete transaction. That way, if the session times out, the client still has all the data needed to complete the transaction (leveraging atomicity; see the sketch below).
This could mean having a transaction token. As a new session is created, the client could send the server the transaction token, and the state of the transaction would be restored within the new session (leveraging durability).
To me both of these options are better than wiping out the existing transaction and forcing the user to start over again.
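To make the first option concrete, here is a small sketch (written in JavaScript purely for illustration; the endpoint and the reauthenticate() helper are hypothetical) of building the transaction up on the client and only submitting once it is complete:

```javascript
// Sketch: accumulate the user's input locally and submit it as one transaction,
// so an expired session never loses work. Endpoint and reauthenticate() are hypothetical.
const transaction = { steps: [] };

function recordStep(stepData) {
  // Collected locally while the user moves through the input screens.
  transaction.steps.push(stepData);
}

async function submitTransaction(authToken, retried = false) {
  const response = await fetch('https://api.example.com/transactions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${authToken}` },
    body: JSON.stringify(transaction)
  });

  if (response.status === 401 && !retried) {
    // Session expired while the user was filling things in: the data is still
    // held locally, so re-authenticate (hypothetical helper) and retry once.
    const freshToken = await reauthenticate();
    return submitTransaction(freshToken, true);
  }
  return response.json();
}
```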
Hope that helps.