How do I read MongoDB shell errors?

This error doesn't seem to correspond to the data in my editor:
SyntaxError: expected property name, got '{' :
#(shell):25:24
When I read this, I read it as row 25 and maybe character 24.
But that doesn't match what's in my editor.
What am I missing?
With previous errors like this, I've found the problem could be anywhere within 5 to 10 rows of what I'm thinking is the row number.
It's clearly not, at least not as it's written in my editor.
How do I interpret the above so I can find the problem?

Related

When trying to save pgAdmin result to a file (TXT) the result is modified

When I run my query in pgAdmin 4 v5's Query Tool I get the data representation I expect (and the one I would like to get in my export file).
Unfortunately this information is transformed when I save it to a .TXT file with the save-to-file button.
After double-clicking on the saved TXT document, I can see that it added '.0' and turned my long character values into scientific notation ('e+29') up to a certain row.
Can you please tell me how to remove those transformations?
All,
I found out that the above problem is linked to the version of pgAdmin I was using, pgAdmin 4 v5 precisely.
After upgrading to pgAdmin 4 v6.4 the problem no longer appears.
I therefore consider this fixed, even though the cause of the problem remains unknown to me.
Thanks for your help.
Brieuc

Troubleshooting the "No writable tags set" error

I'm trying to (ultimately) modify a batch of files but getting stuck in the basics as I try to modify a single file before running a batch command.
If someone could help me troubleshoot the command I'm inputting, that would be fantastic. I'm sure it's something very simple.
Thanks a lot for any help you can provide!
Here's the abbreviated image exif data:
-ExifToolVersion=10.10
-FileName=2018_11_13_1.jpeg
-Directory=.
-FileSize=2.8 MB
-FileModifyDate=2019:07:12 15:40:38-07:00
-FileAccessDate=2019:07:12 15:40:38-07:00
-FileInodeChangeDate=2019:07:23 10:38:02-07:00
-FilePermissions=rw-rw-r--
-FileType=JPEG
-FileTypeExtension=jpg
-MIMEType=image/jpeg
[...]
-ModifyDate=2018:11:13 12:00:53
[...]
-DateTimeOriginal=2018:11:13 12:00:53
-CreateDate=2018:11:13 12:00:53
My current input is: exiftool "-FileModifyDate<$filename00000" ./2018_11_13_1.jpeg
And the error message is:
Warning: No writable tags set from 2018_11_13_1.jpeg
0 image files updated
1 image files unchanged
And the exif data is, of course, unchanged.
I've confirmed that I can write a value to this tag, so there's definitely something going wrong in pulling from the filename.
(Continued from How to compensate for incomplete date/time info in filename)
The problem here is that you are trying to write from a tag named filename00000. If you check the example in the other post, you will see that there is a space after Filename. This sets it apart so that exiftool knows which is a tag name and which is other data.
There is possibly an additional problem here, though. Your filename has an extra number that is not the date. When exiftool tries to write the time stamp from the filename, it is going to end up with a value of "2018:11:13 10:00:00", which might become especially problematic if that last digit hits a value of 3 or more, resulting in a timestamp of "2018:11:13 30:00:00".
I would suggest using exiftool's Advanced Formatting Feature (a fancy way of saying that you can use perl code in the command) to strip the excess data. Something like
exiftool "-FileModifyDate<${filename;s/^(.*\d{4}_\d\d_\d\d).*/$1/} 000000" ./2018_11_13_1.jpeg
Take note, though: if the filenames are in any other format, a different command would be required.
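To see what the substitution in that command is doing, here is a rough equivalent sketched in Python rather than exiftool's embedded perl (the file names below are made-up examples):
import re

# made-up example names; the regex mirrors the substitution above:
# keep everything up to and including the YYYY_MM_DD part, drop the rest
for name in ["2018_11_13_1.jpeg", "2018_11_13_23.jpeg"]:
    stem = name.rsplit(".", 1)[0]                          # e.g. "2018_11_13_1"
    date = re.sub(r"^(.*\d{4}_\d\d_\d\d).*", r"\1", stem)  # -> "2018_11_13"
    print(date + " 000000")                                # the value exiftool receives
With the trailing counter stripped, the appended 000000 is the only time information left, so the tag is written as midnight of the date encoded in the file name.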

Unquoted carriage return found in data - Preventing COPY FROM in PostgreSQL

I am trying to import a large csv file (~4.5gb) into Postgres but it keeps throwing the following error:
ERROR: unquoted carriage return found in data
HINT: Use quoted CSV field to represent carriage return.
CONTEXT: COPY abc_complete_file_261115, line 9041959
I opened my csv in SublimeText2 and jumped to line 9041959, found the URN for the record I needed, then loaded the file in Vim and went to that line. I have hidden characters enabled in Vim (using :set list), so I would expect to see a carriage return ^M somewhere on the line within the data, but the only one I could find is at the end of the line, as expected.
After an entire day of research, and having gotten no further with this issue, I ended up deleting the record on line 9041959 - this didn't fix the issue.
Then I figured maybe it's something strange going on between records, so I ended up deleting about 5 records on either side of the line that threw the error - but it gave the same error again. (I'll worry about preserving the data later on; right now I'm just trying to import the file so that I can have a look in Postgres.) I made sure that I had saved the changes to the csv file before rerunning my query, but it just gave the same error.
I feel like I am missing something really really obvious - does anyone have any ideas what might be causing the issue?
I'm using a Mac running El Capitan.
Many thanks
Update 27/11/15
Hi @JakubKania. Sorry for not putting up the query - I didn't because I am 99.9% sure that the issue is with the csv file rather than the query. A generalised version is:
CREATE TABLE large_file_test(
urn VARCHAR,
forename CHAR(32),
surname CHAR(32));
COPY large_file_test FROM '/Users/Shared/largefile1.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
COPY large_file_test FROM '/Users/Shared/largefile2.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
COPY large_file_test FROM '/Users/Shared/largefile3.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
ALTER TABLE large_file_test
ADD CONSTRAINT large_urn
PRIMARY KEY (urn);
ANALYZE large_file_test;
So I am actually trying to load 3 separate files into the table that I created. The issue is that there seem to be hidden characters in part 1 that are preventing it from importing into Postgres. I haven't tried anything with parts 2 or 3 yet.
The easiest way I found to solve this on a Mac (El Capitan) is:
1) Open the file with Sublime Text.
2) In the File menu, reopen the file with UTF-8 encoding.
3) In the File menu, save the file with UTF-8 encoding.
Sublime normalizes all the line endings (EOL).
This is likely caused by Windows line endings. Try installing the utility dos2unix and running dos2unix <filename> before executing the COPY command.
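If installing dos2unix is not convenient, the same check can be sketched in a few lines of Python before re-running COPY (the path is the one from the question; this only reports problems, it does not modify the file):
path = "/Users/Shared/largefile1.csv"   # path taken from the question above

crlf_lines = 0
with open(path, "rb") as f:
    # iterating a binary file splits on \n only, so any \r left in the body
    # of a line is a carriage return embedded in the data itself
    for lineno, line in enumerate(f, start=1):
        if line.endswith(b"\r\n"):
            crlf_lines += 1
        if b"\r" in line.rstrip(b"\r\n"):
            print("embedded carriage return on line", lineno)
print("lines ending in CRLF:", crlf_lines)
dos2unix (or re-saving from Sublime) takes care of the CRLF case; a carriage return genuinely embedded inside a field would instead need that field to be quoted, as the HINT in the error message suggests.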
In my case, I noticed that the csv file had an extra blank line at the end. After removing it, the file imported properly.
I created a separate folder and gave read/write permissions to "everybody", and that solved this problem as well as the problem of access being denied when trying to import the file through pgAdmin 4. Seems to have been the "cure all".
Now, just to find out which user I need to give these permissions to instead of "everybody".
Using PostgreSQL v 9.6 on Windows 10.

Store row numbers which are causing "error"

I have to retrieve certain information from URLs. For this I have to enter text into fields of the URL. I am using a GET operation for this. I have to modify the text to replace spaces with "%20". Sometimes the text (which is taken from the database) is badly formed. I would like to know the row numbers so I can manually change the text for such rows in the database and run it again. I have tried to use the logs and errors section, but with little luck. Does anybody have an idea of how to do this?
First shot: Output bad urls on the console
So far, I came up with the following job design for your problem:
The trick is to catch the exceptions of the tHttpRequest component and print the necessary details on the console. For this example, I included the line number, the exception message and the URL that produced the exception.
Output (I couldn't reproduce your "Illegal character" error, so I took a different one):
Second shot: Output to a file
If you really need to output the line numbers to a file, things get a little more complicated.
Instead of printing the info straight onto the console, we collect all line numbers into a context variable of type (Java) List inside the tJavaFlex. After the usual URL processing (which I have left out of the job design to keep the example small), we iterate over the Java List and save it into a tHashOutput, so that we can finally write to a file.
We cannot write directly to the file in the tLoop section, since the Iterate flow would lead to the tFileOutputDelimited being opened several times. If "Append" was disabled, only the last bad URL line number would finally appear in the output file. If "Append" was enabled, you would get the full list of line numbers after the very first job run - but you would append every time you run the job, making the list longer and longer. Workarounds would be to use a runtime-dependent file name (e.g. a timestamp) or to delete the file at the beginning of the job run. I chose a third option, which simply overwrites the file every time we run the job. Feel free to choose whichever of those options suits your use case best.
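The idea behind this second approach can also be illustrated outside Talend. The following Python lines are only a rough sketch of the same logic; the URL, the row data and the output file name are placeholders, not taken from the original job:
import urllib.parse
import urllib.request

rows = ["foo bar", "not a valid value", "baz"]   # stand-ins for the database rows
bad_rows = []                                    # plays the role of the context List

for line_number, text in enumerate(rows, start=1):
    # spaces (and other unsafe characters) are percent-encoded, e.g. " " -> "%20"
    url = "https://example.com/lookup?q=" + urllib.parse.quote(text)
    try:
        urllib.request.urlopen(url)              # stand-in for tHttpRequest
    except Exception as exc:                     # remember the failing row
        bad_rows.append((line_number, str(exc), url))

# a single write at the end, overwriting the file on every run,
# mirroring the "overwrite each time" choice described above
with open("bad_urls.txt", "w") as out:
    for line_number, message, url in bad_rows:
        out.write(f"{line_number}\t{message}\t{url}\n")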
Details
The tHashOutput/tHashInput components are not visible by default and must be enabled first: https://www.talendforge.org/forum/viewtopic.php?pid=107249#p107249
Context variable:
INIT:
tJavaFlex "catch errors", end code:
tLoop:
tFixedFlowInput "badURL":
tHashOutput:
Needs to have "Append" enabled.

Matlab error with "load" - any idea what might cause it?

I'm trying to load a text file into Matlab and getting an error message that doesn't make sense to me. The file is a very large text file containing 8 columns of numbers. The error message says:
Number of columns on line 1308295 of ASCII file [filename.txt] must be the same as previous lines.
But as far as I can see there's nothing special about line 1308295. In fact that line has been in the file for months and the code hasn't complained before. So I wondered if there might be something else unrelated that could cause Matlab to give that error?
What kind of text file is it? If it's a data file, you could try the "importdata" function.
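Independent of that, it can help to confirm whether line 1308295 really has a different column count than the rest. A quick scan along these lines, sketched here in Python with filename.txt standing in for the real file, reports every non-blank line whose whitespace-separated column count differs from the first line:
expected = None
with open("filename.txt") as f:              # placeholder for the real file
    for lineno, line in enumerate(f, start=1):
        if not line.strip():                 # skip blank lines
            continue
        ncols = len(line.split())
        if expected is None:
            expected = ncols                 # column count of the first data line
        elif ncols != expected:
            print(f"line {lineno}: {ncols} columns, expected {expected}")
Common culprits are a missing space between two merged numbers or a stray extra value on the offending line.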