Search All folders in one call? - mailkit

I want to search by a date range across all folders. Hopefully it would be a single call to MailKit.
I could get the full list of folders and then search each folder for the given date range, but that's slow.
Is there a deep-search option that I'm missing?
Any magic would be appreciated.

Service.GetFolder(SpecialFolder.All);
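For what it's worth, a standard IMAP SEARCH operates on the currently selected folder, so the folder-by-folder approach mentioned above is the usual route. A minimal sketch, assuming an already-connected and authenticated ImapClient named client (the helper name SearchAllFolders and the choice of the personal namespace are illustrative only):

using System;
using System.Collections.Generic;
using MailKit;
using MailKit.Net.Imap;
using MailKit.Search;

class MailSearchExample
{
    // Search every selectable folder in the personal namespace for
    // messages delivered between the two dates.
    public static Dictionary<IMailFolder, IList<UniqueId>> SearchAllFolders (ImapClient client, DateTime start, DateTime end)
    {
        var query = SearchQuery.DeliveredAfter (start).And (SearchQuery.DeliveredBefore (end));
        var results = new Dictionary<IMailFolder, IList<UniqueId>> ();

        foreach (var folder in client.GetFolders (client.PersonalNamespaces[0])) {
            // Skip containers that cannot be opened (\NoSelect folders).
            if ((folder.Attributes & FolderAttributes.NoSelect) != 0)
                continue;

            folder.Open (FolderAccess.ReadOnly);
            results[folder] = folder.Search (query);
            folder.Close ();
        }

        return results;
    }
}

The returned UniqueIds are only meaningful within their own folder, which is why the results are keyed by folder.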

In Power Query, when duplicating the source query should I duplicate the Transform File folder as well?

My apologies in advance if this question has already been asked; if so, I could not find it.
So, I have a huge database divided by country, where I need to import each country's database individually and then, in Power Query, append the queries into one.
When I imported the US files, Power Query automatically generated a Transform File folder with 4 helper queries.
Then I just duplicated the query US - Sales, named it UK - Sales, and pointed it to the UK sales folder.
The Transform File folder didn't duplicate, though.
Everything seems to be working just fine right now; however, I'd like to know if this could be a problem in the near future, because I still have several countries to go. Should I manually import new queries as new connections instead of just duplicating them, or does it just not matter?
Many thanks!
The Transform File folder group contains the code that is called to transform a list of files; it is reusable code. You can see the Sample File, which serves as the template for the transform actions.
As long as the file that the Sample File resolves to has the same structure as the files you are feeding into the command, you can use any query with any list of files.
One thing you need to make sure of is that the Sample File is not removed from your data source. You may want to create a new dummy file just for that purpose, make sure it won't be deleted, and then point the Sample File query to pull just that file.
The Transform helper queries are special queries: you may edit them, but you cannot delete them and recreate your own manually. They are automatically created by Power Query when combining a list of contents and are inherently linked to the parent query.
That said, you cannot replicate them; you must use the Combine function provided by Power Query to create the helper queries.
You may, however, avoid duplicating the queries: replicate your steps in the parent query instead, and use a table union to join the file lists before combining the contents with the same helper queries.

How to filter results of adding related work items

When adding a link from one work item to another and searching by title, is there a way to filter the results to a specific project? We have two projects in our collection that have work items with identical titles, so the search results show the work items from both projects without any way to distinguish them other than by ID.
You can use the search keyword "proj", for example "proj:some_project some_work_item_title".

How can I make query for specific time in AzureDevOps?

I need to make a query in Azure DevOps. I need to find every bug that was created in a specific time period (for example, during one sprint). The bugs' current status is not important; what matters is when they were created.
Can you help me, please? Thank you.
For this issue, you can build a query that uses the Created Date field, with >= and <= conditions bounding the start and end of the period you care about.
Alternatively, use the Iteration Path field to query the bugs in a specific sprint.

Symfony: getting form values before and after form handling

Hello, I want to be able to compare values before and after form handling, so that I can process them before flush.
What I do is collect the old values in an array before handleRequest().
I then compare the new values to the old values in the array.
It works perfectly on simple variables, like strings for instance.
However, I want to work with uploaded files. I am able to get their full paths and names before handling the form, but when I read the values after checking whether the form is valid, I am still getting the same old value.
I tried calling both $entity->getVar() and $form->getData()->getVar(), and I get the same output.
Hello, I actually found a solution. It is a departure from the strategy announced in my question, which I realize was somewhat truncated with regard to my objective. That objective was to compare the old file names and the new file names (those names actually include the full path) for changes, so that I could unlink those old names that were no longer in the new name list. Basically, the idea was to perform a cleanup after a file is uploaded to replace another one, without the first one having been deleted, and to save the webmaster the hassle of sorting between uniqid-named files that are still used by the web site and those that are useless.
The problem is that my upload functions, which are very similar to the file upload code shown on the official documentation pages, seemed to take effect at flush time.
So, since what I wanted to do with those files had nothing to do with database operations, I resorted to running the second step after flush, which works fine.
However, I am intrigued by your solutions, as they are both strategies I hadn't thought of. Thank you for the suggestions.
That said, I am not sure whether cloning the whole object would be as straightforward as comparing two arrays of file names.

Executing list of .sql files in certain order

I have a directory containing a number of .sql files. I am able to execute them, but I want to execute them in a certain order. For example, if I have 4 files xy.sql, dy.sql, trim.sql and see.sql, I want to execute them in the order see.sql, dy.sql, trim.sql and then xy.sql. What happens now is that I get a list of the files using a DirectoryInfo object; I need to sort them into my order. I am using C# 3.5.
Thanks
It might be better to rename your files so that they sort into the correct order natively. This prevents having to maintain a separate "execution order" list somewhere.
Using a common prefix for the sql file names is a bit self-documenting as well, e.g.
exec1_see.sql
exec2_dy.sql
exec3_trim.sql and
exec4_xy.sql
It's unclear what the ordering algorithm is for your filenames. It doesn't appear to be entirely dictated by the filename (such as alphabetical ordering).
If you have arbitrary order in which you must execute these scripts, I would recommend that you create an additional file which defines that order, and use that to drive the ordering.
If the list of filenames is known at compile time, you could just hard-code that order in your code. I'm assuming, from your question, however, that the set of files is likely to change, and new ones may be added. If that's the case, I refer you to my previous paragraph.
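If you do go the route of a separate order file, a rough sketch might look like the following; the file name ExecutionOrder.txt, the directory path, and the class name are all assumptions, and the actual script execution is left as a placeholder:

using System;
using System.IO;

class OrderedScriptRunner
{
    static void Main ()
    {
        string scriptDir = @"C:\Scripts";                                   // assumed location of the .sql files
        string orderFile = Path.Combine (scriptDir, "ExecutionOrder.txt");  // hypothetical list: one file name per line

        foreach (string line in File.ReadAllLines (orderFile))
        {
            string name = line.Trim ();
            string path = Path.Combine (scriptDir, name);

            if (name.Length == 0 || !File.Exists (path))
                continue;   // skip blank lines and missing files

            Console.WriteLine ("Executing " + path);
            // ...call your existing execution code for 'path' here...
        }
    }
}

Keeping the order in a plain list file means a new script only needs a one-line edit rather than a recompile.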
You can obtain a List<string> containing all of your .sql files, call List<T>.Sort to sort it, and then operate on the files in the newly sorted sequence.
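A rough sketch of that approach, where the hard-coded order simply mirrors the order given in the question and the directory path is an assumption:

using System;
using System.Collections.Generic;
using System.IO;

class SortedScriptList
{
    static void Main ()
    {
        // Desired execution order, taken from the question.
        List<string> order = new List<string> { "see.sql", "dy.sql", "trim.sql", "xy.sql" };

        List<string> files = new List<string> (Directory.GetFiles (@"C:\Scripts", "*.sql"));

        // Sort by position in the 'order' list; unlisted files (IndexOf == -1) sort first.
        files.Sort (delegate (string a, string b) {
            return order.IndexOf (Path.GetFileName (a)).CompareTo (order.IndexOf (Path.GetFileName (b)));
        });

        foreach (string file in files)
            Console.WriteLine (file);   // execute each script here, in the sorted order
    }
}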
Number them and use a tool like my SimpleScriptRunner or the Tarantino DB change tool.