FreeRADIUS expiration variable expansion - radius

After looking through dozens of wikis, info pages, and other resources, I'm unable to find a way to display the expiration date to a user.
My reply message: Reply-Message := "Your account has expired on %{Expiration} go to ab.com for extending!" doesn't work. I can't seem to expand the Expiration variable from my database.
My expiration entry is:
Expiration := 05 Jun 2012 02:00.
The expiration module itself does work; I just can't show the expiration date. Is there any way to expand "custom database variables" in a RADIUS environment with FreeRADIUS?

You need to qualify the attribute with a list. There are three lists in the server: request, control, and reply.
The use of request (the default for unqualified attributes) and reply can be inferred from their names. Control (also called 'check' in older modules) holds attributes which affect the behaviour of the modules. Expiration is a control attribute, so you need to qualify expansions involving it with control:
Reply-Message := "Your account has expired on %{control:Expiration} go to ab.com for extending!"
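As a sketch of where such a reply might be set (the section and file names below are the stock FreeRADIUS 3.x defaults; adjust for your own virtual server), the expansion can go in the Post-Auth-Type REJECT section of sites-enabled/default, which runs after the expiration module rejects an expired account:

```
# sites-enabled/default -- runs when authentication is rejected,
# e.g. after the expiration module rejects an expired account.
Post-Auth-Type REJECT {
    update reply {
        Reply-Message := "Your account has expired on %{control:Expiration} go to ab.com for extending!"
    }
}
```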

Related

Usage of nbf in json web tokens

nbf: Defines the time before which the JWT MUST NOT be accepted for processing
I found this definition of nbf in JSON Web Tokens, but I'm still wondering what nbf is used for. Why do we use it? Is it related to security?
Any idea would be appreciated.
It is entirely up to how you interpret the time.
One possible scenario I could make up: a token must be valid from one particular point in time until another.
Say you're selling some API or resource, and a client purchased access that lasts one hour, starting tomorrow at midday.
So you issue a JWT with:
iat set to now
nbf set to tomorrow 12:00pm
exp set to tomorrow 1:00pm
One more thing to add to what @zerkms said: if you want the token to be usable from now on, then
nbf also needs to be set to the current time (now).
Otherwise you'll get an error like "the token cannot be used prior to this particular time".
'nbf' means 'Not Before'. Note that it is an absolute timestamp (seconds since the epoch), not an offset from creation.
For example, setting nbf to a few seconds after iat means the token cannot be used during the first few seconds after creation. Some APIs use such a delay to make immediate automated reuse of freshly issued tokens harder, though it is no substitute for proper rate limiting.
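To make the semantics concrete, here is a minimal sketch (plain Python, no JWT library; claim names follow RFC 7519, the function names are made up for illustration) of issuing and checking the iat/nbf/exp claims from the scenario above:

```python
import time

def issue_claims(starts_in, lifetime, now=None):
    """Build the time-related JWT claims: issued now, valid from
    now + starts_in until now + starts_in + lifetime (seconds)."""
    now = int(time.time()) if now is None else now
    return {
        "iat": now,                         # issued at
        "nbf": now + starts_in,             # not valid before
        "exp": now + starts_in + lifetime,  # not valid after
    }

def is_valid(claims, now=None):
    """A token is accepted only inside the [nbf, exp) window."""
    now = int(time.time()) if now is None else now
    return claims["nbf"] <= now < claims["exp"]

# Access that starts tomorrow and lasts one hour:
claims = issue_claims(starts_in=24 * 3600, lifetime=3600, now=1_000_000)
assert not is_valid(claims, now=1_000_000)              # too early
assert is_valid(claims, now=1_000_000 + 24 * 3600)      # window open
assert not is_valid(claims, now=1_000_000 + 25 * 3600)  # expired
```

A real verifier would also allow a small clock-skew leeway around nbf and exp, since the issuer's and verifier's clocks are rarely in perfect sync.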

PowerShell verb for "Expire"

I'm in the midst of finalising the set of cmdlets for a server application. Part of the application includes security principal management and data object management, and "expiration" of both (timed and manual). After the expiration date, login and access for the security principal is refused and access to the data owned by that principal is optionally prevented (either immediately by deletion or as part of automatic maintenance by marking it as expired).
From the output of Get-Verb, I cannot see an obvious synonym for Expire, which is the most natural choice of verb for the action being undertaken here. Expire on a security principal expires the principal and may also expire all their stored data, while expire of a data object is restricted to that object.
Set- is already in use for both object types, and has a partial overlap in functionality (Expire- forces a date in the past, and removes data, while Set- will allow future or past dates but NOT remove the data).
In this fashion, Expire combines two operations (Set + Remove), and for data-security reasons we wouldn't want to force them to be performed separately (that's already possible).
For this reason, I also consider that Disable- is not appropriate since it suggests the possibility of reversal with Enable-.
I also think Remove- by itself is inappropriate since there are data records specifically not deleted as part of the operation.
Unpublish seems very close at least for the data, but again it seems that the intent is for Unpublish and Publish to be paired, and in this case it would not be reversible. It also does not make sense when applied to the security principal.
So which (if any) standard verb would you expect to use, if you wanted to expire something?
Looking at the list of approved verbs, two jump out at me:
Deny (dn): Refuses, objects, blocks, or opposes the state of a resource or process.
Revoke (rk): Specifies an action that does not allow access to a resource. This verb is paired with Grant.
I wouldn't worry too much if there is not a paired operation, since that happens with some of the built-in cmdlets. Stop-Computer, for example, has no paired Start-Computer. There is Remove-Variable, but no Add-Variable (there is New-Variable). I think that it is only important if a paired command exists that it is named consistently.
Another option may be to use something like Set-ObjectExpiration/Get-ObjectExpiration, especially if it makes sense to query when objects are going to expire.
What about Invoke? It could be Invoke-ExpireAppObject or something like that.
There really isn't an approved verb that fits your scenario based on Microsoft's recommendations.

Amazon S3: Cache-Control and Expiry Date difference and setting trough REST API

I want to improve my site's loading speed, so I used http://gtmetrix.com/ to check what I could improve. One of the lowest ratings I got was for "Leverage browser caching": my files (mainly images) had the problem "expiration not specified".
Okay, the problem is clear, I thought. I started googling and found that Amazon S3 prefers the Cache-Control metadata over an expiry date (I lost that link; now I think maybe I misunderstood something). Anyway, I started looking for how to add Cache-Control metadata to an S3 object. I found
this page: http://www.bucketexplorer.com/documentation/amazon-s3--how-to-set-cache-control-header-for-s3-object.html
I learned that I must add a header to my PUT request:
x-amz-meta-Cache-Control: max-age=<value in seconds> (note there must be no space between the equals sign and the digits; I made that mistake at first).
I used Cache-Control: max-age=1296000 and it works okay.
After that I read
https://developers.google.com/speed/docs/best-practices/caching
This article told me: 1) "Set Expires to a minimum of one month, and preferably up to one year, in the future."
2) "We prefer Expires over Cache-Control: max-age because it is more widely supported." (in the Recommendations section).
So I started looking for a way to set an expiry date on an S3 object.
I found this:
http://www.bucketexplorer.com/documentation/amazon-s3--set-object-expiration-on-amazon-s3-objects-put-get-delete-bucket-lifecycle.html
And what I found: "Using Amazon S3 Object Lifecycle Management, you can define the Object Expiration on Amazon S3 Objects. Once the Lifecycle defined for the S3 Object expires, Amazon S3 will delete such Objects. So, when you want to keep your data on S3 for a limited time only and you want it to be deleted automatically by Amazon S3, you can set Object Expiration."
I don't want to delete my files from S3. I just want to add caching metadata for the maximum cache time and/or a file expiry time.
I'm completely confused by this. Can somebody explain what I should use: object expiration or Cache-Control?
S3 lets you specify the max-age and Expires headers for cache control; CloudFront lets you specify the Minimum TTL, Maximum TTL, and Default TTL for a cache behavior.
These headers just tell caches (be it CloudFront or a browser cache) when an object's validity expires. To read how they are related, see the following link:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#ExpirationDownloadDist
To leverage browser caching, just specify the Cache-Control header for all the objects on S3.
Steps for adding cache control for existing objects in your bucket
git clone https://github.com/s3tools/s3cmd
Run s3cmd --configure
(You will be asked for the two keys - copy and paste them from your
confirmation email or from your Amazon account page. Be careful when
copying them! They are case sensitive and must be entered accurately
or you'll keep getting errors about invalid signatures or similar.
Remember to add s3:ListAllMyBuckets permissions to the keys or you will get an AccessDenied error while testing access.)
./s3cmd --recursive modify --add-header="Cache-Control: public, max-age=31536000" s3://your_bucket_name/
Your files won't be deleted, just not cached after the expiration date.
The Amazon docs say:
After the expiration date and time in the Expires header passes, CloudFront gets the object again from the origin server every time an edge location receives a request for the object.
We recommend that you use the Cache-Control max-age directive instead of the Expires header field to control object caching. If you specify values both for Cache-Control max-age and for Expires, CloudFront uses only the value of max-age.
"Amazon S3 Object Lifecycle Management" flushs some objects from your bucket based on a rule you can define. It's only about storage.
What you want to do is set the Expires header of the HTTP request as you set the Cache-Control header. It works the same: you juste have to add this header to your PUT query.
Expires doesn't work as Cache-Control: Expires gives a date. For instance: Sat, 31 Jan 2013 23:59:59 GMT
You may read this: https://web.archive.org/web/20130531222309/http://www.newvem.com/how-to-add-caching-headers-to-your-objects-using-amazon-s3/
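As a small sketch of the difference in format between the two headers (plain Python stdlib; the max-age value is the 15-day one from the question, the date is just an example):

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# Cache-Control takes a relative lifetime in seconds...
max_age = 1296000  # 15 days
cache_control = f"max-age={max_age}"

# ...while Expires takes an absolute HTTP-date (RFC 7231 format).
expires_at = datetime(2013, 1, 31, 23, 59, 59, tzinfo=timezone.utc)
expires = format_datetime(expires_at, usegmt=True)

print(cache_control)  # max-age=1296000
print(expires)        # Thu, 31 Jan 2013 23:59:59 GMT
```

Both are set as headers on the PUT request when you upload the object; when a cache understands both, Cache-Control's max-age wins and Expires is ignored.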

run one crystal report multiple times with different parameters

I am using the BusinessObjects Enterprise server, and I have a report that uses "department" as a parameter field to control the selection of records. There are 20 different departments.
I want to schedule this report to run 20 times, with a different single department selected each time. Is there a way to do this without scheduling the report 20 times?
Thanks for any help.
Yes, you can. A bit of a process:
Create a Group for each department
Add users to groups as desired; ensure that they have an email address
Create a Profile; add a Profile Value for each Group (one Profile Value for each Group/Department ID combination); the Profile Values will be strings (important)
Create a Publication; add your report to the Source Document; add the Groups that you created earlier to the Enterprise-Recipient list
Now define the Personalization (the key part of this): either add a Filter (set TABLE.FIELD or FORMULA to your Profile, in the Report Field and Enterprise Recipient Mapping columns), OR set the Department ID parameter to the appropriate Enterprise Recipient Mapping value (your parameter needs to be a string for this to work; note the comment earlier).
Set Destination to Email
Set other properties (e.g. Format) as desired
Save & Close
You can also schedule this Publication to occur on a recurring basis.
Notes:
This solution uses the Publication Job Server (runs the Publication), the Crystal Reports Job Server (to run the report), the Adaptive Processing Server (does the bursting), and the Destination Job Server (send the email messages). You may want to create a separate set of these services and package them into their own server group, then force the Publications to use only this server group.
Related to the earlier point, you may want to create a server group just for scheduled reports and force recurring instances to use this server group. Why? Publications don't seem to do a good job of waiting for reports in a queue--if a Crystal Reports Job server isn't available, the Publication will fail. Forcing scheduled-report instances to generate on their own server group helps to eliminate this issue.
If you make significant changes to the report (e.g. add a parameter), you may need to remove then add the report to the Source-Document list to ensure that it has the most-recent definition; other changes to the report (e.g. adding a column) don't seem to require this attention. YOUR MILEAGE MAY VARY.
You can design the report with the department as a group.
Start a new page after each group, and be sure to print the records from the department group section, not the details.
This assumes you are retrieving all the departments in your database fields.

GWT: Pragmatic unlocking of an entity

I have a GWT (+GAE) webapp that allows users to edit Customer entities. When a user starts editing, the lockedByUser attribute is set on the Customer entity. When the user finishes editing the Customer, the lockedByUser attribute is cleared.
No Customer entity can be modified by 2 users at the same time. If a user tries to open the Customer screen which is already opened by a different user, he gets a "Customer XYZ is being modified by user ABC" message.
The question is: what is the most pragmatic and robust way to handle the case where the user forcefully closes the browser, so the lockedByUser attribute is never cleared?
My first thought is a timer on the client side that would update a lockRefreshedTime attribute every 30 seconds or so. A different user trying to modify the Customer would then look at lockRefreshedTime and, if the refresh happened more than say 35 seconds ago, acquire the lock by setting lockedByUser and updating lockRefreshedTime.
Thanks,
Matyas
FWIW, your lock with expiry approach is the one used by WebDAV (and implemented in tools like Microsoft Word, for instance).
To cope with network latency, you should renew your lock at least half-way through its lifetime (e.g. if the lock expires after 2 minutes, renew it every minute).
Have a look there for much more details on how clients and servers should behave: https://www.rfc-editor.org/rfc/rfc4918#section-6 (note that, for example, they always assume failure is possible: "a client MUST NOT assume that just because the timeout has not expired, the lock still exists"; see https://www.rfc-editor.org/rfc/rfc4918#section-6.6 )
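The timer-plus-timeout scheme from the question can be sketched server-side like this (plain Python for illustration, not GWT/GAE code; the lockedByUser/lockRefreshedTime names come from the question, and the 35-second staleness threshold is the one proposed there):

```python
import time

LOCK_TIMEOUT = 35  # seconds after which an unrefreshed lock is stale

class Customer:
    def __init__(self):
        self.locked_by_user = None
        self.lock_refreshed_time = 0.0

def try_acquire_lock(customer, user, now=None):
    """Grant the lock if it is free, already ours, or stale."""
    now = time.time() if now is None else now
    stale = now - customer.lock_refreshed_time > LOCK_TIMEOUT
    if customer.locked_by_user in (None, user) or stale:
        customer.locked_by_user = user
        customer.lock_refreshed_time = now
        return True
    return False  # caller shows "Customer is being modified by ..."

def refresh_lock(customer, user, now=None):
    """Called by the client's periodic timer (e.g. every 30 s)."""
    now = time.time() if now is None else now
    if customer.locked_by_user == user:
        customer.lock_refreshed_time = now
        return True
    return False  # we lost the lock; the client should stop editing
```

On GAE the check-and-set in try_acquire_lock must run inside a datastore transaction, otherwise two users racing for a stale lock could both succeed.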
Another approach is to have an explicit lock/unlock flow, rather than an implicit one.
Alternatively, you could allow several users to update the customer at the same time, using a "one field at a time" approach: send an RPC to update a specific field on each ValueChangeEvent for that field. Handling conflicts (another user has updated the field) is then a bit easier, or could simply be ignored: if user A changed the customer's address from "foo" to "bar", it really means "set the field to 'bar'", not "change it from 'foo' to 'bar'". So if the actual value on the server has already been updated by user B from "foo" to "baz", that wouldn't be a problem: user A would probably still have set the value to "bar", and whether it changed from "foo" or from "baz" doesn't really matter.
Using a per-field approach, "implicit locks" (the time it takes to edit and send the changes to the server) are much shorter, because they're reduced to a single field.
The "challenge" then is to update the form in near real-time when another user saved a change to the edited customer; or you could choose to not do that (not try to do it in near real-time).
The way to go is this:
Execute code on window close in GWT
You have to ask the user to confirm that they really want to close the window while in edit mode.
If the user really wants to exit, you can then send an unlock call.