When I try to delete a specific line number from my file, every line that matches the same pattern gets deleted. That is not what I want: I want to delete only that one line, not other lines with a similar pattern.
Here is what I am trying to do:
x = 5
Command I run now:
sed -i "${x}d" home/file.txt
You have spaces around your variable assignment:
x = 5
which is wrong. Try the fix below:
x=5
sed -i "${x}d" home/file.txt
Here is an example to delete line number 33 of your file:
sed -i '33d' home/file.txt
If you need the number to be a variable:
line_number=5
sed -i "${line_number}d" home/file.txt
(Use local line_number=5 only inside a function; at the top level of a script, local is an error.)
The problem with what you are doing is the spaces: x = 5 will not work, while x=5 will.
Here is what I am trying to do:
x = 5
That gives me:
bash: x: command not found...
which is what I would expect. If you did not get a similar error message, then you must be in the unfortunate situation of having a program named x in your PATH, perhaps because of doing something unwise such as putting . in your PATH. Or perhaps you got such a message but did not see it because you have redirected your stderr.*
In any event, the quoted line does not assign a value to any shell variable. Shell variable assignments must not have whitespace around the = operator, so if you want to assign 5 to the variable x, it must be:
x=5
It is not an error to perform parameter expansion on a name that has not been assigned any value. The result is nothing. Thus, if x has not successfully been assigned any value then "${x}d" will expand to "d". As a complete sed script, that will delete every line.
*Or, if you did get such a message, then why in the world didn't you at least say so?
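You can see the whole failure mode in a few lines. This is just a sketch; the file name is a throwaway example:
x = 5                   # error: runs a command named x; assigns nothing
echo "[${x}d]"          # prints [d], because x is unset
sed "${x}d" file.txt    # the script is plain "d": delete every line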
Hello fellow developers!
I'm having quite a hard time with this one, so your help would be welcome :)
I'm writing a shell script that will periodically (via cron) synchronize an offline MongoDB to Firebase.
I use jq to process the MongoDB export, but it behaves strangely on string values containing whitespace.
For example, I've got this exported document from a MongoDB collection named users (using --jsonArray as the mongoexport option), in a file named mycollectionexported.json:
[{"_id":{"$oid":"5fc253c493b9f7363c8d1011"},"emailVerified":true,"disabled":false,"isKickstarter":false,"isPremiun":false,"gender":"Male","nationality":"None","type":"admin","optMarketing":false,"optNewsletter":false,"cguValidated":true,"providers":[],"email":"test#reflect.com","displayName":"John Doe","phoneNumber":"0644080050","homeAddress":{"street":"rue Pasteur","number":24,"zipcode":94270,"city":"Le Kremlin-BicĂȘtre","country":"France"},"familyId":{"$oid":"5fc253c493b9f7363c8d1010"},"createdAt":{"$date":"2020-11-28T13:42:28.51Z"},"updatedAt":{"$date":"2020-11-28T13:42:28.51Z"},"__v":0}]
You can see that keys like "displayName", "homeAddress.street", and "homeAddress.city" have whitespace in their values.
When I store the newly exported collection in an ordinary variable using jq and print it to stdout, the result is not what I expect.
Script executed:
#!/bin/bash
documents=$(jq -c ".[]" ./mycollectionexported.json)
for document in $documents; do
    echo ""
    echo $document
done
Output:
{"_id":{"$oid":"5fc253c493b9f7363c8d1011"},"emailVerified":true,"disabled":false,"isKickstarter":false,"isPremiun":false,"gender":"Male","nationality":"None","type":"admin","optMarketing":false,"optNewsletter":false,"cguValidated":true,"providers":[],"email":"test#reflect.com","displayName":"John
Doe","phoneNumber":"0644080050","homeAddress":{"street":"rue
Pasteur","number":24,"zipcode":94270,"city":"Le
Kremlin-BicĂȘtre","country":"France"},"familyId":{"$oid":"5fc253c493b9f7363c8d1010"},"createdAt":{"$date":"2020-11-28T13:42:28.51Z"},"updatedAt":{"$date":"2020-11-28T13:42:28.51Z"},"__v":0}
Expected (no newlines):
{"_id":{"$oid":"5fc253c493b9f7363c8d1011"},"emailVerified":true,"disabled":false,"isKickstarter":false,"isPremiun":false,"gender":"Male","nationality":"None","type":"admin","optMarketing":false,"optNewsletter":false,"cguValidated":true,"providers":[],"email":"test#reflect.com","displayName":"John Doe","phoneNumber":"0644080050","homeAddress":{"street":"rue Pasteur","number":24,"zipcode":94270,"city":"Le Kremlin-BicĂȘtre","country":"France"},"familyId":{"$oid":"5fc253c493b9f7363c8d1010"},"createdAt":{"$date":"2020-11-28T13:42:28.51Z"},"updatedAt":{"$date":"2020-11-28T13:42:28.51Z"},"__v":0}
How is that possible?
There is nothing about this in the jq documentation.
When I do the same with a JSON file containing no whitespace in its string values, it works as expected...
Thank you for your help and future answers :)
The problem is not jq itself: the unquoted $documents in your for loop is split by the shell on whitespace, so each fragment becomes a separate loop iteration and is echoed on its own line. If the output from jq is a single line, why not just:
echo "$documents"
Or, if the result of the call to jq is a stream that might have more than one item, and you want each item in the stream available as a bash variable: if your bash is sufficiently up to date, you could use mapfile (aka readarray); otherwise, you could use a bash while loop, e.g. along the lines of:
declare -a document
while read -r item; do
    document+=("$item")
done < <(jq -c ....)
The idea here is that invoking jq with the -c option ensures that the only raw newlines jq emits are the ones demarcating the items in the stream.
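For the mapfile variant, here is a minimal sketch (requires bash 4+; it reuses the mycollectionexported.json file from the question):
#!/bin/bash
# Each compact JSON document from jq -c becomes one array element.
mapfile -t documents < <(jq -c '.[]' ./mycollectionexported.json)
for document in "${documents[@]}"; do
    echo "$document"    # quoted, so embedded whitespace survives
done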
I have data in a file like below.
text1|text2
text3|text4|
I'm looking for a PowerShell command to count the number of pipe characters in this file.
Normally I would want you to show some effort, but this is trivial enough that I won't bother:
$pipeCount = (Get-Content file.txt -Raw).Split('|').Count - 1
$pipeCount
Splitting on a delimiter yields one more piece than there are delimiters, hence the - 1.
First of all, I'm very new to programming, so I need your help writing a Perl script to do the following on Windows.
I have a big log file (1 GB) with timestamps, and it is difficult to read because it takes so long to open. My requirement is to copy each hour of logs from the big log file into a separate file, one hour per file (so there will be 24 files for a day). The next day, the data in these files needs to be overwritten, or deleted and recreated.
Sample log :
09092016-00:02:00,..................
09092016-00:02:08,..................
09092016-00:02:15,..................
09092016-00:02:18,..................
Please help me with this; thanks in advance.
A simpler solution would be to use the split command to break the file into manageable pieces:
split -l 1000 logfile logfile-
This will split your logfile into smaller files of 1000 lines each, named logfile-aa, logfile-ab, and so on.
You can then just use grep to find the files that contain the day you need:
grep 09092016 logfile-*
For example:
logfile="./log"
while read -r d m y h; do
    grep "^$d$m$y-$h" "$logfile" > "partial-${y}${m}${d}-${h}.log"
done < <(sed -n 's/\(..\)\(..\)\(....\)-\(..\)\(.*\)/\1 \2 \3 \4/p' "$logfile" | sort -u)
Easy, but not efficient: it reads the whole big logfile 25 times (once to gather the distinct ddmmyyyy-hh stamps in the log, and once more for each distinct date-hour found).
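A single pass over the log is also possible in plain bash. Here is a minimal sketch, assuming every line starts with a ddmmyyyy-hh:mm:ss timestamp as in the sample above (the partial-* file names are illustrative):
logfile="./log"
while IFS= read -r line; do
    stamp=${line%%:*}    # cut at the first ':', leaving e.g. 09092016-00
    printf '%s\n' "$line" >> "partial-${stamp}.log"
done < "$logfile"
Appending line by line is slow for a 1 GB file, but it reads the input only once.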
I have a file as shown below.
2.6G kishan /home/Srikishan
10G kishan /home/data/aa
150G kishan /home/Junk
300G kishan /home/junk2
I want a command that displays only the folders consuming more than 50G of disk space. Can someone show me how to do this in shell, Perl, or Tcl?
As a Perl one-liner
perl -ne'/([\d.]+)G/ and $1 > 50 and print' myfile
Output:
150G kishan /home/Junk
300G kishan /home/junk2
This will also ignore lines that don't contain a field like 999G.
And here's the Tcl contender. It looks at every line in the file whose name is in the filename variable and prints those lines that begin with a floating-point number larger than 50.
package require fileutil
fileutil::foreachLine line $filename {
    if {[scan $line %f] > 50} {puts $line}
}
Using awk you can do:
awk -F 'G' '$1>50' file
Output:
150G kishan /home/Junk
300G kishan /home/junk2
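If the 50G cutoff needs to be a parameter, here is a small variation on the same awk command (a sketch; the threshold variable name is illustrative):
threshold=50
awk -F 'G' -v t="$threshold" '$1 > t' file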