Hi, I have a TYPO3 extension which is a tiny shop.
What I want is to delete the whole session and cache after an order.
How can I do this programmatically in my controller?
Thanks in advance.
UPDATE:
You're right, flushing the full cache is not good. I just reread my code ;) and I think it would be enough to clear the cookies.
I set the following values:
$order_data = array();
$order_data = $GLOBALS['TSFE']->fe_user->getKey('ses', USER_ORDER);
$order_data['firstname'] = $_COOKIE["firstname"];
$order_data['lastname'] = $_COOKIE["lastname"];
$order_data['email'] = $_COOKIE["email"];
$GLOBALS['TSFE']->fe_user->setKey('ses', USER_ORDER, $order_data);
$GLOBALS['TSFE']->storeSessionData();
What would be a good way to remove the FE user and the USER_ORDER data?
Thanks
Note: It is a very bad idea to flush your entire cache at runtime, triggered by FE user clicks. Not only does it heavily slow down your system; if you have to do stuff like that, you'd better fix your extension so it doesn't rely on such things. You're asking for an evil hack here.
To answer your question, the most brutal variant is "GeneralUtility::makeInstance(CacheManager::class)->flushCaches();". And no, please really don't do that. That's the opposite of "green IT", so to say ;) Instead, get your tagging in the caches right, flush only what you really need (flushByTag()), and have a look at USER / USER_INT processing.
For the Session stuff, the SessionManager class and the classes behind that should allow manipulating the session.
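For the concrete cleanup asked about in the update, a minimal sketch of what this could look like, assuming the extension still runs in the classic $GLOBALS['TSFE'] frontend context; the cookie names are the ones from the question, the cache tag is just a placeholder, and whether setKey() with NULL removes or merely empties the entry depends on the TYPO3 version:

$feUser = $GLOBALS['TSFE']->fe_user;

// drop only the order data from the FE session
$feUser->setKey('ses', USER_ORDER, null);
$feUser->storeSessionData();

// expire the prefill cookies (path '/' assumed)
foreach (['firstname', 'lastname', 'email'] as $cookieName) {
    setcookie($cookieName, '', time() - 3600, '/');
}

// if the FE user should be logged out entirely, logoff() ends the whole session
// $feUser->logoff();

// if specific cache entries really have to go, flush them by tag instead of flushing everything
// \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance(\TYPO3\CMS\Core\Cache\CacheManager::class)
//     ->flushCachesByTag('my_shop'); // 'my_shop' is a placeholder tag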
I want to make it so that when a field on a Case (say X) is filled in and the Case moves to another status, the field is cleared (the old value should still be kept in the history; I think that happens by default). This is needed so that the user doesn't have to keep hitting the pencil and erasing the message left over from the previous status.
From what I've seen it can be done with a trigger. Do you have any idea how?
You don't need code for it; you could do it with config changes (workflow / flow / process builder). But if you're really after a trigger, something like this:
trigger CaseTrigger on Case (before update) {
    for (Case c : Trigger.new) {
        Case old = Trigger.oldMap.get(c.Id);
        if (c.Status != old.Status) {
            c.Description = null; // whichever field you want to wipe
        }
    }
}
Edit about no-code solutions
Look into workflows, flows and Process Builder. Actually, if you're starting fresh, maybe focus on flows; the other two are a bit passé and SF recommends migrating away from them: https://admin.salesforce.com/blog/2021/go-with-the-flow-whats-happening-with-workflow-rules-and-process-builder
Have a look at these, and if you're stuck, consider posting on the dedicated https://salesforce.stackexchange.com. Stack Overflow is really for code-related stuff; you'll reach more admins over there.
https://trailhead.salesforce.com/content/learn/modules/flow-builder
https://trailhead.salesforce.com/en/content/learn/modules/platform-app-builder-certification-maintenance-winter-21/get-handson-with-flow-before-save-trigger-when-certain-record-changes-are-made
https://salesforce.stackexchange.com/questions/301451/trigger-flow-if-a-specific-field-on-the-updated-record-changed
https://help.salesforce.com/s/articleView?id=release-notes.rn_forcecom_flow_fbuilder_prior_values_flow.htm&type=5&release=230
I'm working on a Discord bot, but I'm not sure if I need to close the MongoClient with:
client.close()
The issue I have is that I'm returning some data from a collection, and obviously, I can't close the client after I return something.
If I need to close the client, what's the best way of doing it? At the moment, I have a discord command that returns something.
def get_queue_info(queue):
    if queue.isdigit():
        queue = int(queue)
        return db['Groups'].find_one({"order": queue})
    else:
        return db['Groups'].find_one({"name": str(queue).upper()})
    # obviously, this won't work
    # mongo.close()
My bot is executing commands pretty slowly, but I'm not sure if it's because of not closing the client.
You don't need to close the connection. Pymongo manages the connection so there is no need to tidy it up.
If you really want to close it manually, here are a couple of options:
Call mongo.close() after you call get_queue_info(queue).
Instead of just returning the data, save it to a variable, then close the connection and return the variable, as sketched below.
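A minimal sketch of that second option, assuming the client is created inside the function; the connection string and database name are placeholders:

from pymongo import MongoClient

def get_queue_info(queue):
    client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
    try:
        db = client["bot"]  # placeholder database name
        if queue.isdigit():
            result = db["Groups"].find_one({"order": int(queue)})
        else:
            result = db["Groups"].find_one({"name": str(queue).upper()})
    finally:
        client.close()  # the result is already in a local variable, so closing here is safe
    return result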
EDIT:
Upon looking it up, you don't need to close it manually.
You should take advantage of connection pooling: just create one MongoClient that lasts for the entire life of your process.
I think the reason your bot is executing commands pretty slowly is that MongoDB is taking too much CPU (review your schema or index design).
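As for the pooling suggestion, a minimal sketch with a single client created once for the whole process; the connection string, database name and pool size are placeholders:

from pymongo import MongoClient

# one client for the whole process; pymongo keeps a connection pool behind it
client = MongoClient("mongodb://localhost:27017", maxPoolSize=50)
db = client["bot"]

def get_queue_info(queue):
    if queue.isdigit():
        return db["Groups"].find_one({"order": int(queue)})
    return db["Groups"].find_one({"name": str(queue).upper()})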
Thanks everyone. I guess I don't need to close the connection!
We want to make sure that all running busy indicators are stopped after a certain amount of time. How can we do that? At the moment we call setBusy(false) on each control.
Thanks a lot!
I think that you should change your overall approach, because the current one is not a good UI/UX pattern.
First of all, why do you have more than one busy control in your view? For instance, if you are loading records into a list you just set the list busy, not the whole page. If you are submitting form data, you set only the form busy, not everything else.
Second of all, why do you say "For the moment we use setBusy(false) for each control"? You should remove the busy state after a specific event, for instance when you have finished loading the list's results or when you get the result of a form submission.
Anyway, to solve your current issue, the best approach is to use XML binding with a temporary JSON model.
You could have a JSON model with content like this:
{
    busy: false
}
Then you bind the busy property of the control to yourJSONModel>/busy. When you need to set the control to a busy state you can call this.getView().getModel("yourJSONModel").setProperty("/busy", true);, and when the operation has finished you can call this.getView().getModel("yourJSONModel").setProperty("/busy", false);.
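A minimal sketch of the whole pattern, assuming the model is registered on the view under the name "yourJSONModel" and a sap.m.List is the control being set busy (the control ID and handler names are made up):

<!-- in the XML view: bind the control's busy property to the model -->
<List id="resultList" busy="{yourJSONModel>/busy}"/>

// in the controller
onInit: function () {
    this.getView().setModel(
        new sap.ui.model.json.JSONModel({ busy: false }),
        "yourJSONModel"
    );
},

onLoadItems: function () {
    var oModel = this.getView().getModel("yourJSONModel");
    oModel.setProperty("/busy", true);
    // ...trigger the request; in its success/error handler:
    oModel.setProperty("/busy", false);
}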
I need to log trace events during boot so I configure an AutoLogger with all the required providers. But when my service/process starts I want to switch to real-time mode so that the file doesn't explode.
I'm using TraceEvent and I can't figure out how to do this move correctly and atomically.
The first thing I tried:
const int timeToWait = 5000;

using (var tes = new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl") { StopOnDispose = false })
{
    tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
    Thread.Sleep(timeToWait);
}

using (var tes = new TraceEventSession("TEMPSESSIONNAME", TraceEventSessionOptions.Attach))
{
    Thread.Sleep(timeToWait);
    tes.SetFileName(null);
    Thread.Sleep(timeToWait);
    Console.WriteLine("Done");
}
Here I wanted to make sure that I could transfer the session to real-time mode. But instead, the file I got contained events from a 15s period instead of just 10s.
The same happens if I use new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl", TraceEventSessionOptions.Create) instead.
It seems that the following will cause the file to stop being written to:
using (var tes = new TraceEventSession("TEMPSESSIONNAME"))
{
    tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
    Thread.Sleep(timeToWait);
}
But here I must re-enable all the providers, and according to the documentation, "if the session already existed it is closed and reopened (thus orphans are cleaned up on next use)". I don't understand the last part about orphans. Obviously some events might occur in the time between closing, opening and subscribing to the events. Does this mean I will lose these events, or will I get them later?
I also found the following in the documentation of the library:
In real time mode, events are buffered and there is at least a second or so delay (typically 3 sec) between the firing of the event and the reception by the session (to allow events to be delivered in efficient clumps of many events)
Does this make the above code all right (well, unless the improbable happens and for some reason my thread is delayed for more than a second between creating the real-time session and starting to process the events)?
I could close the session and create a new different one but then I think I'd miss some events. Or I could open a new session and then close the file-based one but then I might get duplicate events.
I couldn't find online any examples of moving from a file-based trace to a real-time trace.
I managed to contact the author of TraceEvent and this is the answer I got:
Re the questions about the 'auto-closing and restarting' feature, these are really questions about the OS (TraceEvent simply calls the underlying OS API). Just FYI, the deal about orphans is that it is EASY for your process to exit but leave a session going. This MAY be what you want, but often it is not, and so to make the common case 'just work' when you do Create (which is the default), it will close a session if it already existed (since you asked for a new one).
Experimentation of course is the touchstone of 'truth', but frankly I would say that expecting unusual combinations to just work is generally NOT true.
My recommendation is to keep it simple. You need to open a new session and close the original one. Yes, you will end up with duplicates, but you CAN filter them out (after all, they have IDENTICAL timestamps).
The other possibility is to use SetFileName in its intended way (from one file to another). This certainly solves your problem of file size growth, and it is often a good way to deal with other scenarios (after all, you can start up your processing and start deleting files even as new files are being generated).
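A rough sketch of the first recommendation (bring up a real-time session, stop the file-based one, and filter the overlap by timestamp); the session names follow the question, and the processing callback is only illustrative, not verified against a live system:

using (var realTime = new TraceEventSession("REALTIMESESSIONNAME")) // no file name => real-time session
{
    realTime.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());

    // stop the file-based session once the real-time one is up
    using (var fileBased = new TraceEventSession("TEMPSESSIONNAME", TraceEventSessionOptions.Attach))
    {
        fileBased.Stop();
    }

    realTime.Source.Dynamic.All += e =>
    {
        // events that were already written to the .etl file arrive here again with
        // identical timestamps, so they can be filtered out when merging the two streams
        Console.WriteLine("{0:o} {1}", e.TimeStamp, e.EventName);
    };
    realTime.Source.Process(); // blocks until the session is stopped
}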
I have looked at the sample code and am still not able to figure out some key functionality of the framework without more in-depth documentation. Normally there are books about frameworks, but it seems like with this framework you're on your own until it picks up more mainstream usage.
How do I get the roster list? I see that XMPPRosterCoreDataStorage has an NSMutableSet called rosterPopulationSet. Is this the set of XMPPUserCoreDataStorageObjects, i.e., users, that make up a roster?
The way I'm doing it now is, I'm guessing, a hack: get the presence of every user as it's announced and stash it in an array. Those are the online buddies. Somehow get the entire roster list, and everyone who is not online is offline.
I figure that there should be an array of XMPPUserCoreDataStorageObjects, i.e., 30 contacts means 30 entries in the XMPPUserCoreDataStorageObject table?
How would I access this array, and how would I tell if the users are online or not?
For online status, am I supposed to query something else, because it's not encapsulated in XMPPUserCoreDataStorageObject, is it?
I suppose I could use didReceivePresence or similar methods, but all in all I want to use the framework and not fight against it.
Appreciate it!
Thanks
Use the XMPPRoster extension with either XMPPRosterCoreDataStorage or XMPPRosterMemoryStorage.
Take a look at the following code. Please note that this is not complete code, but it should give you an idea.
// create an in-memory roster storage and hand it to the roster module
XMPPRosterMemoryStorage *rosterstorage = [[XMPPRosterMemoryStorage alloc] init];
xmppRoster = [[XMPPRoster alloc] initWithRosterStorage:rosterstorage];

// attach the module to the stream and request the roster from the server
[xmppRoster activate:xmppStream];
[xmppRoster fetchRoster];
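Once the roster has been populated, something along these lines should let you walk the users and read their availability from the memory storage; the method names are how I remember XMPPRosterMemoryStorage and the XMPPUser protocol, so double-check them against your framework version:

// after the roster has finished populating (the roster / storage delegate callbacks
// tell you when), the storage holds one XMPPUser object per roster entry
NSArray *users = [rosterstorage sortedUsersByAvailabilityName];
for (id <XMPPUser> user in users) {
    BOOL online = [user isOnline]; // presence is tracked per user by the storage
    NSLog(@"%@ is %@", [[user jid] bare], online ? @"online" : @"offline");
}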