Deleting a specific value from a database with Jersey - rest

I have a CouchDB database that contains a number of files and I want to delete a specific file, based on a unique id. This doesn't work with any kind of parameter, whether PathParams or QueryParams. The code reads the parameters and takes in the value, but then either ignores the request or treats it differently than a normal DELETE; I can't really tell at this point.
Is there any other way to specify this data to the DELETE function?
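For reference, the standard Jersey pattern for handing an id to a DELETE method looks roughly like this; a minimal sketch, assuming a hypothetical FileService wrapper around the CouchDB calls (the resource path and names are illustrative, not from the question):

import javax.ws.rs.DELETE;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/files")
public class FileResource {

    // hypothetical service wrapping the CouchDB delete call
    private final FileService fileService = new FileService();

    // DELETE /files/{id} -- the unique id arrives as a path parameter
    @DELETE
    @Path("/{id}")
    public Response deleteFile(@PathParam("id") String id) {
        boolean removed = fileService.delete(id);
        return removed
                ? Response.noContent().build()
                : Response.status(Response.Status.NOT_FOUND).build();
    }
}

If the id still seems to be ignored with this shape, it's worth checking that the request URL actually matches the @Path template.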

Cannot repopulate ElectrodeGroup datajoint table

I'm a researcher in Loren Frank's lab at UCSF using DataJoint and files in the NWB format. I made some changes to our code for defining entries in our ElectrodeGroup table, and was hoping to test them by deleting an entry in the table and regenerating it with the new code. I was able to delete the entry, but cannot repopulate it. In particular, when I run ElectrodeGroup.populate() or ElectrodeGroup.populate({"nwb_file_name": my_file_name}), no changes are made to the table. I confirmed that the electrode group I deleted and am trying to regenerate is defined in the original NWB file. I am seeking input on why the populate command does not seem to be working here. Thanks in advance for any help!
This user also contacted our team through another channel. Sharing the solution below for future users, in reference to this schema. In short, the populate process is reserved for unique upstream primary keys.
Since the ElectrodeGroup table's only upstream dependency is Session, the make method will only be called if there are no electrode groups for that session. From DataJoint's perspective, the only 'guaranteed' knowledge about what should exist in this table is defined solely by the presence or absence of related upstream records. Since the 'new' primary attribute electrode_group_name is defined by the ElectrodeGroup table itself, DataJoint doesn't know how many copies make will create, and so simply invokes make once per Session, expecting that single make invocation to fully define all possible electrode_group_name values the table will use. If the table already has at least one entry for that session, no work appears to be needed, so no make() invocation occurs.
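To make that concrete, here is a minimal sketch of the shape of the schema being described (hypothetical definitions and a hypothetical load_groups_from_nwb loader, not the actual lab code):

import datajoint as dj

schema = dj.schema('hypothetical_lab')

@schema
class Session(dj.Manual):
    definition = """
    nwb_file_name : varchar(255)
    """

@schema
class ElectrodeGroup(dj.Computed):
    definition = """
    -> Session
    electrode_group_name : varchar(80)  # introduced by this table, not upstream
    ---
    description : varchar(255)
    """

    def make(self, key):
        # key holds only the Session primary key (nwb_file_name).
        # populate() calls make at most once per Session that has no
        # ElectrodeGroup rows yet, so this one call must insert *all* groups.
        groups = load_groups_from_nwb(key['nwb_file_name'])  # hypothetical loader
        self.insert(dict(key, **group) for group in groups)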
There are a couple of possible solutions:
1. Model the electrode group explicitly, with a table that defines the existence of an electrode group (e.g., ElectrodeGroupConfiguration). ElectrodeGroup would then inherit primary keys from both Session and ElectrodeGroupConfiguration, and its make function would be adjusted to load the unique keys across upstream tables.
2. Adjust the make function to handle the partial insert/update case, and call the make function directly with the desired primary key when these kinds of 'abnormal' updates need to occur.
Method #1 is 'cleanest' with respect to the DataJoint data model (explicitly modeled data dependencies using make/populate), whereas #2 'escapes' the DataJoint data model slightly, in a controlled way, to achieve the desired schema/data result. Both are sketched below.
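A rough sketch of both options, reusing the hypothetical definitions above:

# Option 1: model the group's existence explicitly upstream
@schema
class ElectrodeGroupConfiguration(dj.Manual):
    definition = """
    -> Session
    electrode_group_name : varchar(80)
    ---
    config_params : varchar(255)
    """

@schema
class ElectrodeGroup(dj.Computed):
    definition = """
    -> ElectrodeGroupConfiguration
    ---
    description : varchar(255)
    """

    def make(self, key):
        # now make runs once per (session, electrode_group_name) pair,
        # so a deleted entry is regenerated by a plain populate()
        params = (ElectrodeGroupConfiguration & key).fetch1('config_params')
        self.insert1(dict(key, description=params))

# Option 2: keep the original schema, but call make directly with the
# desired primary key, after adjusting it to handle partial inserts
ElectrodeGroup().make({'nwb_file_name': my_file_name})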

Azure Data Factory - Can I set a variable from within a CopyData task or by using the output?

I have a simple pipeline that has a Copy activity to populate a table. That task is based on a query and will only ever return 1 row.
The problem I am having is that I want to reuse the value from one of the columns (batch number) to set a variable, so that at the end of the pipeline I can use a Stored Procedure to log that the batch was processed. I would rather avoid running the query a second time in a Lookup task, so can I make use of the data already being returned?
I have tried duplicating the column in the Copy activity and then mapping it to something like #BatchNo, but that fails. I have even tried adding a Set Variable task, but I can't figure out how to extract a single column. #{activity('Populate Aleprstw').output} does not error, but I'm not sure what it will actually do in this case.
Thanks, and sorry if it's a silly question.
Cheers
Mark
I always do it like this:
Generate a batch number (usually with a proc)
Use a lookup to grab it into a variable
Use the batch number in all activities (might be multiple copies, procs, etc.)
Write the batch completion
From your description it seems you have the batch number embedded in the data copy from the start, which is not typical.
If you must do it this way, is there really an issue with running a lookup again?
The Copy activity doesn't return data like that, so you won't be able to capture the results that way. With this design, running the query again in a Lookup is the best option.
Is the query in the Source running on the same Server as the Sink? If so, you could collapse the entire operation into a Stored Procedure that returns the data point you are trying to capture.
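If you do end up with a Lookup (against either the table or a proc that returns the batch number), capturing the value into a variable is a single Set Variable expression; the activity and column names here are placeholders:

@activity('LookupBatchNo').output.firstRow.BatchNo

firstRow works because the query only ever returns one row, and the variable can then feed the logging Stored Procedure at the end of the pipeline.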

Firestore rules and batch writes to the same array

I am trying to do batch writes to the same array multiple times.
// note: FieldValue.arrayUnion expects a List, even for a single element
writeBatch.updateData(ref, {
  'testField': FieldValue.arrayUnion(['firstReference'])
});
writeBatch.updateData(ref, {
  'testField': FieldValue.arrayUnion(['secondReference'])
});
writeBatch.updateData(ref, {
  'testField': FieldValue.arrayUnion(['thirdReference'])
});
But it seems that request.resource.data.testField.size() is 3 and resource.data.testField.size() is 0: the rule is only evaluated once for that path. I thought Firestore would evaluate it for every write, because I want to use exists() to check whether each reference is correct. Does anyone have a suggestion?
When you work with a transaction or batch write, the entire set of changes appears as a single change in terms of enforcing security rules, because the entire batch of changes on a single document must be allowed or denied all at once.
If the batch of updates can be any size, what you're trying to do isn't possible, because security rules don't have any iteration, nor any way to apply a function to each item of a list. If you can assume the length of the list, you can index directly into it and check each item, as sketched below.
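For example, if you can assume exactly three references, the rule can spell out each index; a sketch, assuming the referenced documents live in a hypothetical refs collection:

match /posts/{postId} {
  allow update: if request.resource.data.testField.size() == 3
    && exists(/databases/$(database)/documents/refs/$(request.resource.data.testField[0]))
    && exists(/databases/$(database)/documents/refs/$(request.resource.data.testField[1]))
    && exists(/databases/$(database)/documents/refs/$(request.resource.data.testField[2]));
}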
Your last resort might be to batch the writes into different documents, so that they each get checked individually by the rules that match those documents, then use Cloud Functions to copy those changes into place after they occur. In that case, though, the changes would not be applied atomically.
If you would like to see more flexibility in security rules to handle lists of data, you should file a feature request.

Symfony: getting form values before and after form handling

Hello, I want to be able to compare values before and after form handling, so that I can process them before flush.
What I do is collect the old values in an array before handleRequest().
I then compare the new values to the old values in the array.
It works perfectly on simple variables, like strings for instance.
However, I want this to work on uploaded files. I am able to get their full paths and names before handling the form, but when I read the values after checking that the form is valid, I am still getting the old value.
I tried calling both $entity->getVar() and $form->getData()->getVar(), and I get the same output.
Hello, I actually found a solution, though it is a departure from the strategy described in my question, which I realize did not fully state my objective. The goal was to compare old file names and new ones (those names actually include the full path) for changes, so that I could unlink the old names that were no longer in the new name list. Basically, to clean up after a file was uploaded to replace another one, without the first being deleted first, and to save the webmaster the hassle of sorting between uniqid-named files that are still used by the web site and those that are useless.
The problem is that my upload functions, which are very similar to the file upload examples shown on the official documentation pages, seemed to take effect at flush time.
So, since what I wanted to do with those files had nothing to do with database operations, I resorted to launching the second step after flush, which works fine.
However, I am intrigued by your solutions, as they are both strategies I hadn't thought of. Thank you for the suggestions. I am not sure, though, that cloning the whole object will be as straightforward as comparing two arrays of file names.

Executing list of .sql files in certain order

I have a directory containing a number of .sql files. I am able to execute them, but I want to execute them in a certain order. For example, if I have 4 files xy.sql, dy.sql, trim.sql and see.sql, I want to execute them in the order see.sql, dy.sql, trim.sql and then xy.sql. What happens now is that I get a list of files using a DirectoryInfo object; I need to sort them into my order. I am using C# 3.5.
thanks
It might be better to rename your files so that they sort into the correct order natively. This avoids having to maintain a separate "execution order" list somewhere.
Using a common prefix for the sql file names is a bit self-documenting as well, e.g.
exec1_see.sql
exec2_dy.sql
exec3_trim.sql
exec4_xy.sql
It's unclear what the ordering algorithm is for your filenames. It doesn't appear to be entirely dictated by the filename (such as alphabetical ordering).
If you have arbitrary order in which you must execute these scripts, I would recommend that you create an additional file which defines that order, and use that to drive the ordering.
If the list of filenames is known at compile time, you could just hard-code that order in your code. I'm assuming, from your question, however, that the set of files is likely to change, and new ones may be added. If that's the case, I refer you to my previous paragraph.
You can obtain a List<string> of all your .sql files and call List<T>.Sort to sort it, then operate on the files in the newly sorted sequence, as sketched below.
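A minimal C# 3.5 sketch of that approach, hard-coding the order from the question (the directory path is a placeholder):

using System;
using System.Collections.Generic;
using System.IO;

class ScriptOrderer
{
    static void Main()
    {
        // desired execution order, by file name
        string[] order = { "see.sql", "dy.sql", "trim.sql", "xy.sql" };

        List<string> files = new List<string>();
        foreach (FileInfo f in new DirectoryInfo(@"C:\scripts").GetFiles("*.sql"))
            files.Add(f.Name);

        // sort by position in the order array; unknown files sort last
        files.Sort((a, b) => PosOf(order, a).CompareTo(PosOf(order, b)));

        foreach (string name in files)
            Console.WriteLine(name);  // execute each script here instead of printing
    }

    static int PosOf(string[] order, string name)
    {
        int i = Array.IndexOf(order, name);
        return i < 0 ? int.MaxValue : i;
    }
}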
Number them and use a tool like my SimpleScriptRunner or the Tarantino DB change tool.