How to delete a coverage from the rasdaman database? - wcs

I have two coverages. I want to delete one of them.
With rasql I write: delete from MyCoverage (MyCoverage being the name of the coverage I want to drop), but I get rasdaman error 355: Execution error 355 in line 1, column 13, near token MyCoverage: Collection name is unknown.
Which syntax is correct for deleting a coverage?
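One possible cause, assuming the coverage was imported through petascope (rasdaman's WCS frontend): the WCS coverage id is not necessarily a rasql collection name, so rasql reports the name as unknown. In that case the coverage can be removed with a WCS-T DeleteCoverage request instead. A minimal sketch using Python's requests library, with the endpoint URL as a placeholder for your deployment:

import requests

# Placeholder petascope endpoint; adjust host/port to your installation.
ENDPOINT = "http://localhost:8080/rasdaman/ows"

# WCS-T DeleteCoverage removes the coverage and its metadata together.
params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "DeleteCoverage",
    "coverageId": "MyCoverage",
}

response = requests.get(ENDPOINT, params=params)
response.raise_for_status()  # raises if the server reports an error

For data created directly in rasdaman rather than through WCS, note that rasql's delete from removes objects from within a collection, while drop collection MyCoverage removes the collection itself.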

Related

MySQL: How do I remove/skip the first row (before the headers) from my stored procedure result?

I am calling a stored procedure which results in the following output
CALL `resale`.`reportProfitAndLossSummary`(3,' ',599025,TRUE);
OUTPUT:
"CONCAT('"',
CONCAT_WS('","',
"Promoter",
"Event",
"Event Description",
"Zone",
"Tickets Unsold",
"
""Promoter","Event","Event Description","Zone","Tickets Unsold","Avg. Unsold Price","Tickets Sold","Avg. Sold Price","Avg. Cost","Profit","Revenue""
""Qcue","10/2/2022 1:15 PM Pirates # Cardinals","Pirates # Cardinals","1/3B Field Box",0,,16,149.761250,42.000000,1724.18,2396.18"
I exported the result to .csv and discovered that a new code chunk is created above the header, which distorts the structure of the file. Is there a way to skip this code chunk? I tried "-N" and "-ss", since the code chunk appears as the header, but neither worked in MySQL Workbench. Turning the header option to "FALSE" in the stored procedure call removes the actual headers, not the undesired code.
The stored procedure was developed by someone else, so I am not sure where to begin fixing this. The goal is to remove the undesired code from the query result itself, not from the .csv export.
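The extra chunk is most likely the column name of the result set: when a SELECT builds its output with an unaliased CONCAT(...), the expression text itself becomes the column header, and Workbench writes that above the data on export. Giving the expression an alias inside the procedure (for example AS csv_line) should shrink it to a harmless short name. As a client-side workaround, here is a minimal sketch that calls the procedure and writes the returned lines straight to a file, assuming the pymysql driver and placeholder connection settings:

import pymysql

# Placeholder connection settings; adjust for your environment.
conn = pymysql.connect(host="localhost", user="user",
                       password="secret", database="resale")

with conn.cursor() as cur:
    cur.execute("CALL `resale`.`reportProfitAndLossSummary`(3, ' ', 599025, TRUE)")
    rows = cur.fetchall()
conn.close()

# Each row holds one pre-built CSV line in its single column. The
# expression-text "header" is only the column NAME, so it never appears
# in the fetched rows themselves.
with open("report.csv", "w", newline="") as f:
    for (line,) in rows:
        f.write(line + "\n")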

MongoDB find operation throws OperationFailure: Cannot update value

I have an application that uses MongoDB (on AWS DocumentDB) to store documents with a large string in one of the fields, which we call field X.
A few notes to start:
I'm using pymongo, so the method names you see here are taken from there.
Due to the nature of field X, it is not indexed.
On field X we run MongoDB's find method with a regex condition, limited by both maxTimeMS and limit to a small number of results.
When we get the results, we iterate the cursor to fetch all of them into a list (list comprehension).
Most of the time the query works properly, but I'm getting more and more of the following error:
pymongo.errors.OperationFailure: Cannot update value (error code 14)
This is thrown after the query returns a cursor, while we are iterating the results: it occurs when the cursor tries to _refresh its connection via the next method, and it is raised by _check_command_response at its last line, meaning this is a default (catch-all) exception(?).
The query:
cursor = collection.find(condition).max_time_ms(MAX_QUERY_TIME_MS) \
    .sort(sort_order).limit(RESULT_LIMIT)
results = [document for document in cursor]  # <--- here we get the error
Stack trace:
pymongo/helpers.py in _check_command_response at line 155
pymongo/cursor.py in __send_message at line 982
pymongo/cursor.py in _refresh at line 1104
pymongo/cursor.py in next at line 1189
common/my_code.py in <listcomp> at line xxx
I'm trying to understand the origin of the exception so I can handle it correctly, or use a different approach for handling the cursor.
What is being updated in the refresh method of the cursor that might throw the above exception?
Thanks in advance.
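Since the stack trace shows the failure happens while the cursor is fetching its next batch, one defensive approach is to drain the cursor inside a retry wrapper, so a failed fetch re-runs the query instead of propagating. A minimal sketch under that assumption, with the timeout and limit constants as placeholders standing in for the question's values:

import time
from pymongo.errors import OperationFailure

MAX_QUERY_TIME_MS = 5000  # placeholder for the question's constant
RESULT_LIMIT = 50         # placeholder for the question's constant

def fetch_with_retry(collection, condition, sort_order, attempts=3):
    # Re-running the whole query is the simplest recovery, since the
    # server-side cursor is gone once a batch fetch fails.
    for attempt in range(attempts):
        cursor = (collection.find(condition)
                  .max_time_ms(MAX_QUERY_TIME_MS)
                  .sort(sort_order)
                  .limit(RESULT_LIMIT))
        try:
            return [document for document in cursor]  # may raise mid-iteration
        except OperationFailure:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(2 ** attempt)  # simple backoff before retrying

Setting a smaller batch_size() on the cursor is another knob worth trying, since it changes how often the client has to go back to the server mid-iteration.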

ImpEx import/export: Error saving batch in bulk mode, ambiguous unique keys

When I export data from one environment and import it into another, I am seeing an ambiguous unique keys error. I did check for ambiguity but did not find anything that would cause this violation.
I get the following error (there are several identical errors, but I'm only posting one):
Error Begin
insert_update ABClCMSParagraphComponent;&Item;catalogVersion(catalog(id),version)[unique=true,allownull=true];content[lang=en];creationtime[forceWrite=true,dateformat=dd.MM.yyyy hh:mm:ss];modifiedtime[dateformat=dd.MM.yyyy hh:mm:ss];name;owner(&Item)[allownull=true,forceWrite=true];uid[unique=true,allownull=true]
ABClCMSParagraphComponent,8796158592060,,,Error saving batch in bulk mode [reason: unique keys {catalogVersion=CatalogVersionModel (8796093186649#41), uid=DMparaleftdescrip} for model ABClCMSParagraphComponentModel (8796158657596#1) - found 2 item(s) using the same keys]. Will try line-by-line mode., unique keys {catalogVersion=CatalogVersionModel (8796093186649#41), uid=comp_000003UX} for model ABClCMSParagraphComponentModel (8796158592060#1) - found 2 item(s) using the same keys
;Item111;abcContentCatalog:Staged;"< p >Hello < a href="">world< /a>< /p>";12.09.2017 07:04:12;18.09.2017 09:38:39;Feed Article - Makeup;;comp_000003UX
Error End
What could be the reason it's showing the ambiguity error?
The logs didn't show the error clearly for the two checkboxes that were selected. When I deleted these two columns, owner(&Item) and creationtime, the script imported successfully.
Often, when no specific error is shown, there was an error saving the item. In your case it might be the initial attributes "owner" and "creationtime". If the item already exists, initial attributes cannot be changed.

Invalid value for Attribute Set column (set does not exists?) in rows:

I am importing products. When I try to import, I get a lot of errors.
I have fixed everything and the file is valid for import.
After I hit the import button, it shows the following error:
Invalid value for Attribute Set column (set does not exists?) in rows: 11, 18, 19, 24, 25, 26
I have tried googling this error message to no avail.
What do I need to set in attribute_set, as I am using the default values right now?
I had the same problem today as I was working through the import process. What fixed it for me was renaming the "attribute_set" column to "_attribute_set". I noticed the difference when I exported a CSV from the existing product set and was looking through the exported file for any differences between it and my new CSV.
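If editing the header by hand is error-prone across many files, the rename is easy to script. A minimal sketch using Python's csv module, with the file names as placeholders:

import csv

# Placeholder file names; adjust to your actual import file.
SRC, DST = "products.csv", "products_fixed.csv"

with open(SRC, newline="") as fin, open(DST, "w", newline="") as fout:
    reader = csv.reader(fin)
    writer = csv.writer(fout)
    header = next(reader)
    # Rename the offending column; the importer expects "_attribute_set".
    header = ["_attribute_set" if h == "attribute_set" else h for h in header]
    writer.writerow(header)
    writer.writerows(reader)  # data rows pass through unchanged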

Symstore deletion error

The usage of the delete function of the symstore.exe tool is the following:
symstore del /i ID /s Store [/o] [/d LogFile]
I have symbols that were stored a long time ago that I would like to delete. D:\Symbols\[productname] is the root where the symbols are saved. I call the tool with the following line:
symstore del /i 0000000001 /s d:\Symbols\[productname]
It gives the following error:
SYMSTORE ERROR: Class: Server. Desc: Couldn't get transaction id from d:\Symbols\[productname]\
SYMSTORE: Number of references deleted = 0
SYMSTORE: Number of files/pointers deleted = 0
SYMSTORE: Number of errors = 1
The error suggests that it recognizes the path as a valid symbol server. I've double-checked the 000Admin folder at D:\Symbols\[productname]\000Admin\, and it indeed has transactions from 0000000001 to 0000001261. I've also tried deleting other transactions, but ended up with the same error. The history.txt, lastid.txt, and server.txt files are there as well. What am I missing?
As it turns out, the Couldn't get transaction id error is the same error you get when the disk is out of space. Since symstore keeps track of deletes, it needs extra space to write them to a file: deletions are transactions with an id as well. Manually deleting something from the disk to free enough space will allow the delete transaction to go through.
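Because the message never mentions disk space, it can be worth checking free space programmatically before attempting the delete. A minimal sketch, using the store path from the question as a placeholder and an arbitrary 100 MB safety margin:

import shutil
import subprocess

STORE = r"D:\Symbols\[productname]"  # placeholder path from the question
MIN_FREE = 100 * 1024 * 1024         # arbitrary 100 MB safety margin

free = shutil.disk_usage(STORE).free
if free < MIN_FREE:
    raise SystemExit("Not enough free space for symstore to record the "
                     "delete, which is itself a new transaction.")

# The delete writes a new transaction under 000Admin like any other.
subprocess.run(["symstore", "del", "/i", "0000000001", "/s", STORE],
               check=True)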