Is PapaParse adding an empty string to the end of its data array?

Papa Parse seems wise, but I think he might be giving me null. I'm just:
Papa.parse(countries);
Where countries is a string containing the XMLHttpRequest response for the countries CSV file from the time zone database here:
https://timezonedb.com/download
But Papa Parse seems to have added an empty array to the end of its data array. So when I'm searching and sorting through the array, that one empty guy at the end gives me trouble. I can write around it, but it's not ideal, and I thought Papa Parse was supposed to make these kinds of CSV parsing problems go away. Am I parsing wrong?
Here is the end of the PapaParsed Array in console:

You need to use skipEmptyLines: true in the parse config. For example:
Papa.parse(this.csvData, { skipEmptyLines: true });
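
Applied to the question's setup, a minimal sketch (assuming countries holds the raw CSV text from the XHR response):

var result = Papa.parse(countries, { skipEmptyLines: true });
// Without skipEmptyLines, the file's trailing newline parses as one final
// empty row; with it, result.data ends at the last real record.
console.log(result.data[result.data.length - 1]);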

It was adding an empty line to my iteration as well. I decided to skip it by stopping the loop one element early:
for (let i = 0; i < data.length - 1; i++) {
  // process data[i]; the trailing empty entry is never visited
}

We can also use the snippet below to remove empty lines from the records. For example, to remove empty values from the header row:
headers.filter(Boolean);


How do I find the index of a json object in an array based off of a value of its properties

I am fairly new to Redis and RedisJSON, and I have been through all the documentation I can get my hands on, but I still can't seem to figure this one out. I am hoping someone could shed some light on this situation to help me understand. My end goal is to be able to remove a JSON object from the responses array using JSON.ARRPOP. I need to get the index of the object first, and I can't seem to do that.
Here is my object structure:
JSON.SET test:1 $ '{ "responses":[{"responseId":"29aab59c-10b0-48c0-ab91-8873fb6e2238"},{"responseId":"ab79f09b-8e31-41f4-9191-ef89a34964d3"}]}'
Check the path:
JSON.GET test:1 $.responses[*].responseId
RETURNS:
"["29aab59c-10b0-48c0-ab91-8873fb6e2238","ab79f09b-8e31-41f4-9191-ef89a34964d3"]"
OK, looks good. I have an array of two strings; let's get the index of 29aab59c-10b0-48c0-ab91-8873fb6e2238.
JSON.ARRINDEX test:1 $.responses[*].responseId '"29aab59c-10b0-48c0-ab91-8873fb6e2238"'
RETURNS:
(nil)
(nil)
It appears to have searched but found nothing?
At first I thought it was an escape-character issue, but I get the same results with responseIds as integers 1 and 2.
Any help here would be greatly appreciated. Thanks!
JSON.ARRINDEX can only search for scalar values. Objects and arrays are not scalars, so you can't search for them. (Note also that your path, $.responses[*].responseId, resolves to the strings themselves rather than to an array, which is why each match comes back (nil).)
For your use case you should look at https://redis.io/docs/stack/search/indexing_json/
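
One workaround consistent with that limitation, sketched here on the question's data, is to keep a parallel array of plain response IDs (a hypothetical responseIds field) next to the objects, since ARRINDEX can search an array of strings:

JSON.SET test:1 $.responseIds '["29aab59c-10b0-48c0-ab91-8873fb6e2238","ab79f09b-8e31-41f4-9191-ef89a34964d3"]'
JSON.ARRINDEX test:1 $.responseIds '"29aab59c-10b0-48c0-ab91-8873fb6e2238"'
JSON.ARRPOP test:1 $.responses 0

ARRINDEX returns 0 here; passing that index to ARRPOP on both arrays keeps them in sync.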

Update string with firebase swift

I am trying to update a string with Firebase in Swift, but I am getting an error that I do not know how to get rid of.
I have this code part that is getting an error:
self.dbRef.child("feed-items/\(dataPathen)/likesForPost").updateChildValues("likesForPost": "7")
The error I am getting is expected "," separator just before the :. I am using dbRef in another code part so I know it works, and dataPathen is being printed just before the code above, so that is working too.
Can anyone help me with this bug?
Just change
self.dbRef.child("feed-items/\(dataPathen)/likesForPost").updateChildValues("likesForPost": "7")
To
self.dbRef.child("feed-items/\(dataPathen)/likesForPost").updateChildValues(["likesForPost": "7"])
And if you are only looking to increment a particular value at a specific node, you might want to check my answers: https://stackoverflow.com/a/39465788/6297658, https://stackoverflow.com/a/39471374/6297658
PS: Prefer runTransactionBlock: for updating properties like likesForPost. There might be a moment when two users try to like the same post at the same time (highly unlikely, but still a possibility), and updateChildValues might end up recording the like from only one of them. runTransactionBlock: keeps retrying until its changes have been committed to the node.
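A rough sketch of that transaction (assuming likesForPost is stored as a number rather than the string "7"; on older SDK versions these types carry a FIR prefix):

self.dbRef.child("feed-items/\(dataPathen)/likesForPost")
    .runTransactionBlock { (currentData: MutableData) -> TransactionResult in
        let currentLikes = currentData.value as? Int ?? 0
        currentData.value = currentLikes + 1
        // The block is retried until the write commits, so concurrent likes aren't lost.
        return TransactionResult.success(withValue: currentData)
    }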
updateChildValues accepts an [AnyHashable: Any] dictionary:
self.dbRef.child("feed-items/\(dataPathen)/likesForPost")
.updateChildValues(["likesForPost": "7"])
Whenever updating values at any reference in the Firebase Database, you need to pass the updateChildValues method a dictionary parameter of type [AnyHashable: Any]. So just update your line of code as below:
self.dbRef.child("feed-items/\(dataPathen)/likesForPost").updateChildValues(["likesForPost": "7"])
Also, if you need to update more than one key-value pair, you can pass those pairs inside the dictionary, separated by commas, as below:
self.dbRef.child("feed-items/\(dataPathen)/likesForPost").updateChildValues(["likesForPost": "7", "otherKey": "OtherKeyValue"])

dataFrame keying using pandas groupby method

I'm new to pandas and trying to learn how to work with it. I'm having a problem when trying to use, on my data, an example I saw in one of Wes's videos and notebooks. I have a csv file that looks like this:
filePath,vp,score
E:\Audio\7168965711_5601_4.wav,Cust_9709495726,-2
E:\Audio\7168965711_5601_4.wav,Cust_9708568031,-80
E:\Audio\7168965711_5601_4.wav,Cust_9702445777,-2
E:\Audio\7168965711_5601_4.wav,Cust_7023544759,-35
E:\Audio\7168965711_5601_4.wav,Cust_9702229339,-77
E:\Audio\7168965711_5601_4.wav,Cust_9513243289,25
E:\Audio\7168965711_5601_4.wav,Cust_2102513187,18
E:\Audio\7168965711_5601_4.wav,Cust_6625625104,-56
E:\Audio\7168965711_5601_4.wav,Cust_6073165338,-40
E:\Audio\7168965711_5601_4.wav,Cust_5105831247,-30
E:\Audio\7168965711_5601_4.wav,Cust_9513082770,-55
E:\Audio\7168965711_5601_4.wav,Cust_5753907026,-79
E:\Audio\7168965711_5601_4.wav,Cust_7403410322,11
E:\Audio\7168965711_5601_4.wav,Cust_4062144116,-70
I load it into a DataFrame and then group it by "filePath" and "vp"; the code is:
res = df.groupby(['filePath','vp']).size()
res.index
and the output is:
[E:\Audio\7168965711_5601_4.wav Cust_2102513187,
Cust_4062144116, Cust_5105831247,
Cust_5753907026, Cust_6073165338,
Cust_6625625104, Cust_7023544759,
Cust_7403410322, Cust_9513082770,
Cust_9513243289, Cust_9702229339,
Cust_9702445777, Cust_9708568031,
Cust_9709495726]
Now I'm trying to treat the index like a dict, as I saw in examples, but when I do
res['Cust_4062144116']
I get an error:
KeyError: 'Cust_4062144116'
I do succeed in getting a result when I use the filepath, but as I understand it, and as I saw in previous examples, I should be able to use the vp keys as well, isn't that so?
Sorry if it's a trivial one, I just can't understand why it works in one example but not in the other.
Rutger, you are not correct. It is possible to "partially" index a MultiIndex series. I simply did it the wrong way.
The index's first level is the file name (e.g. E:\Audio\7168965711_5601_4.wav above) and the second level is vp. Meaning, for each file name I have multiple vps.
Now, this is correct:
res['E:\Audio\7168965711_5601_4.wav']
and will return:
Cust_2102513187 2
Cust_4062144116 8
....
but trying to index by the inner level (the Cust_ keys) will fail.
You group by two columns and therefore get a MultiIndex in return. This means you also have to slice using those two columns, not with a single index value.
Your .size() on the groupby object converts it into a Series. If you force it into a DataFrame you can use the .xs method to slice a single level:
res = pd.DataFrame(df.groupby(['filePath','vp']).size())
res.xs('Cust_4062144116', level=1)
That works. If you want to keep it as a Series, boolean indexing can help, something like:
res[res.index.get_level_values(1) == 'Cust_4062144116']
The last option is a bit less readable, but sometimes also more flexible; you could test for multiple values at once, for example:
res[res.index.get_level_values(1).isin(['Cust_4062144116', 'Cust_6073165338'])]
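
Putting it together, a minimal runnable sketch (the file name scores.csv is a placeholder for the question's CSV):

import pandas as pd

df = pd.read_csv('scores.csv')  # columns: filePath, vp, score
res = pd.DataFrame(df.groupby(['filePath', 'vp']).size())

# Slice the second ('vp') index level:
print(res.xs('Cust_4062144116', level=1))

# Or keep the Series and use boolean indexing on the index level:
sizes = df.groupby(['filePath', 'vp']).size()
print(sizes[sizes.index.get_level_values(1) == 'Cust_4062144116'])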

How to set null values while importing to phpmyadmin?

I'm trying to import a .csv file into phpMyAdmin where several fields are purposefully left blank. I need these fields to register as null values, not just be left as blank strings.
I know in the field properties you can select "null" vs. "not null" for each field, but that still doesn't change a cell to a null value while importing. After the import I can manually go check the null box for each field on each record, but that is unrealistic considering the amount of data I'm working with.
Is there a way to get phpMyAdmin to set these blank cells to null values on import?
I've been experiencing similar issues.
If you download a CSV file from phpMyAdmin with NULL values, you'll notice that NULL doesn't get encapsulated in quotes. So you'll have a line like this:
"1";"2";NULL;NULL
"2";"2";NULL;NULL
etc.
However, if you edit a CSV file in something like Open Office Calc, it might change this to put quotes around NULL, like so:
"1";"2";"NULL";"NULL"
"2";"2";"NULL";"NULL"
etc.
What should work is doing a search and replace for ["NULL" = NULL].
In your case, because you have empty (blank) fields, you'll be looking at doing a search and replace like this:
[,, = ,NULL,]
And probably a second pass for NULL values at the end of a line like so:
[,\n = ,NULL\n]
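
If you can run the import as SQL rather than through phpMyAdmin's CSV form, another option (a sketch; table and column names are placeholders) is MySQL's LOAD DATA with NULLIF, which converts blank fields to NULL during the import itself:

LOAD DATA LOCAL INFILE 'import.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(col_a, @col_b, @col_c)
SET col_b = NULLIF(@col_b, ''),
    col_c = NULLIF(@col_c, '');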
Ancient question, but in case another MySQL noob like myself comes across it.
The find/replace rigamarole jmbertucci describes is avoidable if you're in charge of the creation of the CSV file, for example when you're backing up your own databases. In phpMyAdmin, if you select "custom" export method, you will see replace NULL with: and the default is NULL. Simply change that to "NULL" and you save yourself a step.
I ran into this same problem, and jmbertucci's answer worked great. I did run into one additional problem. In the case of a row of data like this,
"hello","world",,,,,,
which has multiple consecutive empty values, doing a search and replace with [,, = ,NULL,] as jmbertucci suggested won't work as intended on the first pass. Instead you'll end up with
"hello","world",NULL,,NULL,,NULL
You should continue to do the search and replace until you end up with 0 occurrences replaced.
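
To avoid the repeated passes, here's a one-pass sketch in Python (it assumes comma-separated data with no quoted commas and no blank first field; a zero-width regex inserts NULL after every comma that is followed by another comma or the end of the line):

import re

line = '"hello","world",,,,,,'
fixed = re.sub(r'(?<=,)(?=,|$)', 'NULL', line)
print(fixed)  # "hello","world",NULL,NULL,NULL,NULL,NULL,NULL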

Does NSXMLParser eat blank values?

I have some XML which looks like this:
<?xml version='1.0'?>
<methodResponse>
<params>
<param>
<value><array><data>
<value><array><data>
<value><dateTime.iso8601>20100508T14:49:56</dateTime.iso8601></value>
<value><string></string></value>
<value><string>comment</string></value>
<value><string></string></value>
<value><string>Milestone milestone1 deleted</string></value>
<value><int>1</int></value>
</data></array></value>
</data></array></value>
</param>
</params>
</methodResponse>
NSXMLParser seems not to be giving any data back for the blank values, resulting in an array with 4 items in it instead of 6.
Is there anything I can do to NSXMLParser to make it return an empty string for the blank values so that I can maintain the order of the data when it is returned?
So after whipping up a quick sample with a delegate that just prints out what's happening during parsing, I don't see anything at all wrong with what's being parsed.
I suspect, however, that you're relying on an incorrect expectation: that between calls to didStartElement... and didEndElement... you should get a foundCharacters... call with an empty string. My question is based on the way you phrased the title of yours, because there's no such thing as a "blank value." Either there is a value, or there isn't.
Imagine instead your XML contained <string/> instead of the exactly equivalent <string></string>. You still get start/end notifications.
You should be creating your NSMutableString (presumably the type you're using for your <string> elements) in didStartElement... when <string> is found, appending to that string if foundCharacters... is called (it can get called more than once, delivering the value in chunks), and adding it to your array when it's done, in didEndElement....
If you really want to be more robust, you'll also want to detect an error condition if you find the start of a new element before your string ends, assuming that is in fact an error for you.
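
A minimal sketch of that accumulate-in-chunks pattern (in Swift, where NSXMLParser is now XMLParser; the class name StringCollector is made up for illustration):

import Foundation

class StringCollector: NSObject, XMLParserDelegate {
    var strings: [String] = []
    private var current: String?

    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName: String?,
                attributes: [String: String]) {
        if elementName == "string" { current = "" }  // start fresh, even for <string></string>
    }

    func parser(_ parser: XMLParser, foundCharacters string: String) {
        current? += string  // characters may arrive in several chunks
    }

    func parser(_ parser: XMLParser, didEndElement elementName: String,
                namespaceURI: String?, qualifiedName: String?) {
        if elementName == "string", let s = current {
            strings.append(s)  // empty elements append "" and keep their position
            current = nil
        }
    }
}

Set an instance as parser.delegate before calling parse(), and the empty <string></string> elements come back as "" entries in their original order.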
Not quite sure I understand your problem here. NSXMLParser should at least report the beginning and end of the elements. Would that not be enough to get them in the right order?