I am using ODI 10g. The OdiPurgeLog tool is not purging only the logs whose name is specified in the session filter; instead it is deleting all the other sessions along with them.
All the logs are getting deleted.
Please help me fix this problem.
You don't need to mention the session name; just leave the purge type as Session.
For each session, you can purge that session by calling the OdiPurgeLog tool in a package.
If you have a single session running at a time, you can use start-time and end-time variables and pass them as the start date and end date, which also lets you purge sessions by their start and end times.
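For example, a step along these lines in the package should do it; the parameter names (-FROMDATE, -TODATE) and the variable references (#START_DATE, #END_DATE) are only illustrative here, so check the OdiPurgeLog tool reference for your ODI version:

OdiPurgeLog "-FROMDATE=#START_DATE" "-TODATE=#END_DATE"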
Please let me know if this works.
I have a SlashDB installation on top of MySQL. Once in a while, especially after a period of inactivity, the first call to an API returns:
'ResultProxy' object has no attribute 'execution_options'
The second request (same endpoint) would work normally.
My guess was that SlashDB's connection to MySQL had been terminated due to inactivity. I set the wait_timeout system variable (MySQL) to about 10 hours and it seems to help in some cases.
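For reference, this is roughly what I set (36000 seconds is about 10 hours); the equivalent setting can also go in my.cnf:

SET GLOBAL wait_timeout = 36000;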
What does this error mean, and is there a way to prevent it?
This should no longer occur in version 1.0, but in case you still have a problem please post a dump of your database schema with your question.
Good Morning,
Snapshot creation is successful via cURL REST commands. These take anywhere between 1-5 minutes before they are visible via the GUI or getSnapshotsForVolume.
I do not see a reference to deleting any snapshot (manual or automated) via the service; deletion of scheduled snapshots appears to be utilization-based, driven by the value defined for the schedule.
I know this function should exist, as the option is available via the GUI.
http://sldn.softlayer.com/reference/services/SoftLayer_Network_Storage_Iscsi
Thanks
Yep, it exists. It is the deleteObject method:
http://sldn.softlayer.com/reference/services/softlayer_network_storage_iscsi/deleteobject
When you create a new manual snapshot it returns a SoftLayer_Network_Storage_Iscsi object; see:
http://sldn.softlayer.com/reference/services/softlayer_network_storage_iscsi/createsnapshot
You just need to make sure to use the ID of that snapshot in the deleteObject method.
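If you are scripting it the same way as the snapshot creation, a cURL call would look roughly like this (untested sketch; substitute your API credentials and the ID returned by createSnapshot, and double-check the REST URL format on SLDN):

curl -u $SL_USER:$SL_API_KEY -X DELETE \
    "https://api.softlayer.com/rest/v3/SoftLayer_Network_Storage_Iscsi/<snapshotId>.json"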
I am using Perl's getpwnam to check whether an entry exists in the LDAP database I'm using to store user information. This is used in a script that deletes LDAP entries.
The thing is, when I run the script once it succeeds: I can no longer see the entry via the Unix command getent passwd, and it is deleted from the LDAP database as well. The problem is that when I run the script again and ask it to delete the same user entry (to check that it's idempotent), the getpwnam test still returns success (and prints the entry that was just deleted), which causes the script to throw an error about attempting to delete a non-existent entry.
Why is Perl's getpwnam behaving like this? Is there a more robust test for LDAP entry existence short of binding to the LDAP server and querying it?
Apparently, the nscd cache is not keeping track of your deletions.
I'm reluctant to call this an "answer" since I don't know if nscd is supposed to stay synchronized with deletions, or how to fix it. The only thing I've ever done with nscd is remove it.
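One thing you could try after the delete (untested in your setup) is flushing the cached passwd table by hand, or from the script itself:

sudo nscd -i passwd    # invalidate nscd's cached passwd entries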
I am trying to save a date into Meteor's MongoDB; my challenge is as follows:
1) If I use new Date() it creates a date object in Mongo DB, but it saves the time as local time, since a JavaScript Date() always comes with a timezone offset (+0x hours) based on the browser's local timezone. When I retrieve this it causes havoc, as I am assuming everything in my db is UTC.
2) I want to use the moment.js library, which is great because it can represent dates in UTC properly, but my challenge is how do I get Mongo DB to accept a moment time? The minute I use moment.format() it saves it as a string!
So how can I send a date to a MongoDB insert command as a date object that is in UTC? Strings just don't work :(
Any help would be appreciated.
Thanks
I think everything you need to know about both of these questions can be found here and here.
TLDR:
If you directly insert/update from the client you will store a timestamp based on the user's clock. It will still be stored as UTC, but you may or may not want to trust that the time is correct. I strongly suggest using a method for any db modifications which involve time so that the server's version of time will always be used.
Moment objects are not serializable to a format compatible with mongodb. Use a date object and format it on the client.
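On the first point, a minimal sketch of the method approach (the collection and method names here are made up):

Meteor.methods({
  insertMessage: function (text) {
    check(text, String);
    // This runs on the server, so the timestamp comes from the server's clock.
    Messages.insert({ text: text, timestamp: new Date() });
  }
});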
The problem with saving dates on the client is that each client can have a different time zone, or even the wrong time set. Thus the only solution is to have the date set on the server. Using a method for each insert / update is not an elegant solution.
A common practice is to modify the document inside an allow or deny callback:
Messages.allow({
  insert: function(userId, doc) {
    ...
    doc.timestamp = new Date();
    return true;
  },
});
That way you ensure all documents have a compatible timestamp, and you can use the usual db methods on the client.
The Meteor community recently started an extensive document about how to use dates and times. You'll find a lot of useful information there, in addition to David Weldon's links:
https://meteor.hackpad.com/Meteor-Cookbook-Using-Dates-and-Times-qSQCGFc06gH
However, in particular I recommend using https://github.com/mizzao/meteor-timesync when security is not a concern. It allows you to client-locally obtain an accurate server time even if the client's clock is way off, without a round-trip to the server. This can be useful for all kinds of reasons - in my apps, I universally just use server-relative time and don't care about what the client's time is at all.
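A one-liner for reference, assuming I'm remembering the package's API correctly:

var serverNow = TimeSync.serverTime(); // reactive estimate of the current server time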
I am trying to use Zend_Session_SaveHandler_DbTable to save my session data to the db, but as far as I can see, the expired sessions are never deleted from the database.
I can see a cron job running (Ubuntu) which deletes the file-based sessions, but I couldn't find how gc works on sessions that are stored in the db.
The Zend_Session_SaveHandler_DbTable class has a garbage collection method called gc which is given to PHP via session_set_save_handler when you call Zend_Session::setSaveHandler().
The gc function should get called periodically based on the php.ini values session.gc_probability and session.gc_divisor. Make sure those values are set to something that would result in garbage collection running at some point.
Also make sure you specify the modifiedColumn and lifetimeColumn options when creating the DbTable save handler because the default gc function uses those columns to determine which rows in the session table are old and should be deleted.
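For reference, a setup along these lines is what I mean (the table and column names are just the conventional ones from the ZF1 docs; adjust them to your schema):

// Give gc a chance to run on some fraction of requests.
ini_set('session.gc_probability', 1);
ini_set('session.gc_divisor', 100);

$config = array(
    'name'           => 'session',   // session table name
    'primary'        => 'id',
    'modifiedColumn' => 'modified',  // gc() uses modified + lifetime to find stale rows
    'dataColumn'     => 'data',
    'lifetimeColumn' => 'lifetime',
);

Zend_Session::setSaveHandler(new Zend_Session_SaveHandler_DbTable($config));
Zend_Session::start();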