I am getting an error on Ubuntu when opening the terminal, saying the following:
Bash: cannot create temp file for here-document: No space left on device
Although there is free space available..
This error appeared suddenly while I was working in Sublime Text.
You may want to check your inode usage as well:
df -i
It's possible to get "No space left on device" when you have free space available but are out of inodes.
http://web.archive.org/web/20210514092503/https://scoutapm.com/blog/understanding-disk-inodes
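If df -i shows you are out of inodes, a sketch like this can point to the directories holding the most files (GNU find assumed; /tmp and cache directories are common culprits):
# Count regular files per directory on the root filesystem only (-xdev),
# then show the ten directories containing the most files.
sudo find / -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head -n 10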
I faced a similar problem. In my case, after logging in, the screen would show only the cursor and no icons. Even tab completion in the bash terminal threw an out-of-memory error.
I tried unmounting /tmp, apt-get autoclean, and apt-get clean; nothing worked.
In my case, deleting large files in ~/.cache worked.
cd ~/.cache
du -sh *
rm -fr <large files/folders>
Generally, deleting files in ~/.cache is not harmful, but still be careful.
In my case, the thumbnails and pip folders were the culprits, taking 1G and 3.6G respectively.
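To see which entries are the biggest before deleting anything, something like this (GNU du and sort assumed) lists them largest-first:
# Sizes of everything directly under ~/.cache, sorted by human-readable size
du -sh ~/.cache/* | sort -rh | head -n 10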
Run df -h and see if any filesystems are 100% full. It looks like /tmp may be full.
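For example, to print only the header plus any filesystem at 100% usage (column 5 of df -h output is Use%):
df -h | awk 'NR == 1 || $5 == "100%"'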
I run this command:
COPY XXX FROM 'D:/XXX.csv' WITH (FORMAT CSV, HEADER TRUE, NULL 'NULL')
On Windows 7, it successfully imports CSV files smaller than 1GB.
If the file is larger than 1GB, I get an "unknown error":
[Code: 0, SQL State: XX000] ERROR: could not stat file "D:/XXX.csv": Unknown error
How can I fix this issue?
You can work around this by piping the file through a program. For example, I just used this approach to copy from a 24GB file on Windows 10 with PostgreSQL 11.
copy t(c,d) from program 'cmd /c "type x:\path\to\file.txt"' with (format text);
This copies the text file file.txt into the table t, columns c and d.
The trick here is to run cmd in single-command mode with /c, telling it to type out the file in question.
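If you only have client-side access, psql's \copy accepts the same FROM PROGRAM form; the command then runs on the client machine rather than on the server (same hypothetical table and path as above):
\copy t(c,d) from program 'cmd /c "type x:\path\to\file.txt"' with (format text)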
https://github.com/MIT-LCP/mimic-code/issues/493
alistairewj commented on Nov 3, 2018 (edited):
Okay, the could not stat file "CHARTEVENTS.csv": Unknown error is actually a bug in PostgreSQL 11. Under the hood it makes a call to fstat() to make sure the file is not a directory, and unfortunately on Windows that call uses a 32-bit file size, so it can't handle large files like CHARTEVENTS. I tested the build on Windows with PostgreSQL 10.5 and didn't get this error, so I think it's fairly new.
The best workaround is to keep the files compressed (i.e. keep them as .csv.gz files) and use 7zip to load in the data directly from compressed files. In testing this seemed to still work. There is a pretty detailed tutorial on how to do this here: https://mimic.physionet.org/tutorials/install-mimic-locally-windows/
The brief version of the above is that you keep the .csv.gz files, add the 7zip binary to your Windows environment path, and then call the postgres_load_data_7zip.sql file to load in the data. You can use the postgres_checks.sql file afterwards to make sure you loaded all the data correctly.
edit: For your later error, where you are using this 7zip approach, I'm not sure why it's not loading. Try redownloading just the ADMISSIONS.csv.gz file and seeing if it still throws you that same error. Maybe there is a new version of 7zip which requires me to update the script or something!
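For reference, a single load line in that 7zip approach looks roughly like the sketch below, assuming 7z is on your PATH and a table named admissions already exists (the real script covers every table):
-- Stream the compressed CSV through 7zip straight into the table
\copy admissions FROM PROGRAM '7z e -so ADMISSIONS.csv.gz' WITH (FORMAT CSV, HEADER TRUE)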
For anyone else who googled this Postgres error message after attempting to work with a >1GB file in Postgres 11, I can confirm that #亚军吴's answer above is spot-on. It is indeed a size issue.
I took a different approach than #亚军吴's and #Loren's, though: I simply uninstalled Postgres 11 and installed the stable version of Postgres 10.7. (I'm on Windows 10, by the way, in case that matters.)
I re-ran the original code that had prompted the error and voilà, a few minutes later I'd filled a new table with data from a medium-ish-size CSV file (~3GB). I initially tried to use CSVSplitter, per #Loren, which was working fine until I got close to running out of storage space on my machine. (Thanks, Battlefield 5.)
In my case, there isn't anything in PGSQL 11 that I was relying on that wasn't in version 10.7, so I think this could be a good solution for anyone else who runs into this problem. Thanks everyone above for contributing, especially to the OP for posting this in the first place. I cured a huge, huge headache!
This has been fixed in commit bed90759f in PostgreSQL v14.
The file limit for the error is actually 4 GB.
The fix was too invasive to be backported, so you can only upgrade to avoid the problem. Once the fix has had some field testing, you could lobby the pgsql-hackers mailing list to get it backported.
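To check which server version you are running before deciding whether you are affected:
psql -c "SHOW server_version;"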
With pgAdmin and AWS, I used CSVSplitter to split into files less than 1GB. Lame, but worked. pgAdmin import appends to the existing table. (Changed escape character from ' to " in order to avoid error due to unquoted text in the source file. Typically I apply quotes in LibreOffice, but these files were too big to open.)
It seems this is not a database problem but a problem of psql/pgAdmin. The workaround is to use the client tools from a previous PostgreSQL version:
Keep the existing PostgreSQL 11 database
Install psql or pgAdmin from the PostgreSQL 10 installation and use it to upload the file (with the command shown in the question)
Hope this helps anyone coming across the same problem.
Add two lines to your CSV file: one at the beginning and one at the end:
COPY XXX FROM STDIN WITH (FORMAT CSV, HEADER TRUE, NULL 'NULL');
<here are the lines your file already contains>
\.
Don't forget another newline after the \. line. Then call
psql -h hostname -d dbname -U username -f 'D:/XXX.csv'
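As a complete miniature example with a hypothetical table and data, the edited file would look like this (psql streams the rows itself, so the server never has to stat the file, which is what sidesteps the size limit):
COPY people FROM STDIN WITH (FORMAT CSV, HEADER TRUE, NULL 'NULL');
id,name
1,Alice
2,Bob
\.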
This is what worked for me:
\COPY member_data.lab_result FROM PROGRAM 'gzip -dcf lab_result.dat.gz' WITH (FORMAT 'csv', DELIMITER '|', QUOTE '`')
I want to copy a particular file using Makefile and then make this file executable. How can this be done?
The file I want to copy is a .pl file.
For copying I am using the general cp -rp command, which works fine. But now I want to make the copied file executable from the Makefile.
It's bad practice to use cp and chmod; instead, use the install command.
all:
	install -m 0777 hello ../hello
You can use the -m option with install to set the permission mode, and note that install can also set the owner and group of the installed file (via -o and -g) rather than just copying whatever you have.
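For example, to also set the owner and group explicitly (a sketch with a hypothetical destination; requires sufficient rights):
install -m 0755 -o root -g root hello /usr/local/bin/hello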
You can still use chmod accordingly, but it would be bad practice:
all:
	cp hello ../hello
	chmod +x ../hello
Update: install vs cp
cp simply copies files with the current permissions; install not only copies, but can also change permissions/ownership via argument flags. (This is what your requirement was.)
One significant difference is that cp truncates the destination file and starts copying data from the source into the destination file. install, on the other hand, removes the destination file first.
This is significant because if the destination file is already in use, bad things could happen to whoever is using that file when you cp a new file on top of it. For example, overwriting an executable that is running might fail. Truncating a data file that an existing process is busy reading/writing could cause pretty weird behavior. If you just remove the destination file first, as install does, things continue much like normal: the removed file isn't actually removed until all processes close it. [source]
For more details, check these:
install vs. cp; and mmap
How is install -c different from cp
I am using Pentaho CE 5 on Windows. I would like to use CTools, but I can't make them show up in the File -> New menu.
Being behind a proxy, I cannot use the Marketplace plugin, so I have tried a manual installation.
First, I tried to use ctools-installer.sh. I ran the following command line in Cygwin (wget and unzip are installed):
./ctools-installer.sh -s /cygdrive/d/Users/[user]/Mes\ Programmes/pentaho/biserver-ce/pentaho-solutions/ -w /cygdrive/d/Users/[user]/Mes\ programmes/pentaho/biserver-ce/tomcat/webapps/pentaho/
The script starts, asks me which modules I want to install, and begins the downloads.
For each module, I get output like this (set -x added to the script):
+ echo -n 'Downloading CDF...'
Downloading CDF...
+ wget -q --no-check-certificate 'http://ci.analytical-labs.com/job/Webdetails-CDF-5-Release/lastSuccessfulBuild/artifact/bi-platform-v2-plugin/dist/zip/dist.zip' -O .tmp/cdf/dist.zip
SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
+ '[' '!' -z '' ']'
+ rm -f .tmp/dist/marketplace.xml
+ unzip -o .tmp/cdf/dist.zip -d .tmp
End-of-central-directory signature not found. Either this file is not a
zipfile, or it constitutes one disk of a multi-part archive. In the latter
case the central directory and zipfile comment will be found on the last
disk(s) of this archive.
unzip: cannot find zipfile directory in .tmp/cdf/dist.zip,
and cannot find .tmp/cdf/dist.zip.zip, period.
+ chmod -R u+rwx .tmp
+ echo Done
Done
Then the script ends. I have seen on this page (pentaho-bi-suite) that this is the normal output. Nevertheless, it seems a bit strange to me, and when I start my Pentaho server (login: admin/password), I cannot see any new tools in the menus.
After looking at a few other tutorials and at the script itself, I downloaded the .zip snapshots for every tool and unzipped them in the system directory of my Pentaho server. Same result.
I would like to make the .sh work; what can I try or adjust?
Thanks
EDIT 05/06/2014
I checked the dist.zip files downloaded by the script and they are all empty. It seems that wget cannot fetch the zip files, and therefore the installation fails.
When I try to fetch any webpage through wget, it fails. I think it is because of the proxy.
Here is my .wgetrc file, located in my user's cygwin home folder:
use_proxy=on
http_proxy=http://[url]:[port]
https_proxy=http://[url]:[port]
proxy_user=[user]
proxy_password=[password]
How could I make this work?
EDIT 10/06/2014
In the end, I changed my network connection settings to bypass the proxy. It seems there is an offline mode for the installer, so one can download all the needed files in a proxy-free environment and then run the script offline.
I guess this is related to the -r option.
I consider this post solved, since it is no longer a CTools issue.
It is difficult to identify the issue in the procedure above, but you can refer to this blog; he is a key member of Pentaho itself.
In the end, I changed my network connection settings to bypass the proxy. It seems there is an offline mode for the installer, so one can download all the needed files in a proxy-free environment and then run the script offline. I guess this is related to the -r option.
I consider this post solved, since it is not a CTools issue anymore.
You can manually install the components from http://www.webdetails.pt/ctools/ or, if you have Pentaho 5.1 or above, add the following parameters to the CATALINA_OPTS option (in start-pentaho.bat or start-pentaho.sh):
-Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttp.nonProxyHosts="localhost|127.0.0.1|10.*.*.*"
http://docs.treasuredata.com/articles/pentaho-dataintegration#tips-how-can-i-use-pentaho-through-a-proxy
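As a filled-in illustration, with a placeholder proxy host and port that you must replace with your own values, the line in start-pentaho.sh could look like:
export CATALINA_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=localhost|127.0.0.1|10.*.*.*"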
I am trying to create a patch using two large size folders (~7GB).
Here is how I'm doing it :
$ diff -Naurbw . ../other-folder > file.patch
But, perhaps due to the file sizes, the patch is not getting created and I get an error:
diff: memory exhausted
I tried making more than 15GB of space available, but the issue persists. Could someone help me out with the flags I should use?
Recently I came across this too, when I needed to diff two large files (>5GB each).
I tried to use diff with different options, but even --speed-large-files had no effect. Other methods, like splitting the files into smaller ones, using xdelta, or sorting the files as per this suggestion, didn't help either. I even got my hands on a very powerful VM (>72GB RAM), but still got this memory exhausted error.
I finally got to work by adding the following parameter to sysctl.conf (sudo vim /etc/sysctl.conf):
vm.overcommit_memory=1
vm.overcommit_memory has three values (0,1,2) and sets the kernel virtual memory accounting mode. From the proc(5) man page:
0: heuristic overcommit (this is the default)
1: always overcommit, never check
2: always check, never overcommit
To make sure that the parameter is indeed applied, you can run
sudo sysctl -p
Don't forget to change this parameter back when you finish!
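To restore the default afterwards, for example:
sudo sysctl -w vm.overcommit_memory=0
and remove the line from /etc/sysctl.conf again so the setting does not come back at the next reboot.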
bsdiff is slow and requires a lot of memory; xdelta creates large deltas for large files.
Try HDiffPatch for large files: https://github.com/sisong/HDiffPatch
supports diffing large binary files or directories;
runs on Windows, macOS, Linux, and Android;
both diff & patch support running with limited memory;
Usage example:
Creating a patch: hdiffz -s-256 [-c-lzma2] old_path new_path out_delta_file
Applying a patch: hpatchz old_path delta_file out_new_path
Try sdiff. It's a pre-built tool in some Linux distributions.
sdiff a.txt b.txt --output=c.txt
will show the two files side by side and let you merge the differences into c.txt.
This worked perfectly for me.
I know of two ways of deleting an app under development from the emulator:
Using the emulator GUI: Settings > Applications > Manage Applications > Uninstall
Using ADB: adb uninstall
I may have discovered a third way, using 'adb shell':
rm /data/app/<package>.apk
It seems, however, that this isn't really a good way to delete apps, because there may be additional information associated with them (registration?).
What is that information and where can it be found?
It's interesting you mention this. I ran a quick home made test to shed some light onto your question.
Generally, when you install a .apk file, Android creates an internal storage area for it located at /data/data/<package name of launching activity>. This is mainly used as an internal caching area that can't be accessed by other apps or the phone user. You can read about this in a little more detail in the Internal storage chapter of Android's data storage section. It is an area exclusively used by your app, and you can write private data there.
Once you uninstall an app, theoretically this internal storage area is also deleted. The first two ways you outlined indeed do that: the .apk file in /data/app/ is deleted, as well as the internal storage area in /data/data/.
However, if you use adb shell and run the rm command, all that is removed is the .apk file in /data/app/. The internal storage area in /data/data/ is not deleted. So in essence you are correct that additional information associated with the app is not necessarily deleted. On the flip side, if you reinstall the app after running the command, the existing internal storage area gets overwritten, as a fresh copy is installed.
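You can verify this yourself with adb (package name hypothetical; a rooted device or the emulator is assumed):
adb shell rm /data/app/com.example.test.apk
adb shell ls /data/data/com.example.test
The first command removes only the .apk; the second will still list the app's internal storage area.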
adb uninstall com.example.test
com.example.test will vary according to your app.
I was having a problem with this too. I have Link2SD on my phone, but the ext4 partition on my SD card got corrupted, so I reformatted it; all of the linked files were still in the /data/app folder, though. So I created a script to delete all the broken links, and ran into the same problem as you: the app manager said the apps were still installed! So I made another script to fix that, using the pm program on your phone.
Here's my script to remove broken links from the app folder:
fixln.sh
#!/system/bin/sh
#follow and fix symlinks
appfolder="/data/app/"
files=`ls ${appfolder}*`
fix=$1
badstring="No such file or directory"
for i in $files
do
if [ -h $i ]
then
if [ -a `readlink $i` ]
then
echo -e "\e[32m$i is good\033[0m";
else
if [ $fix == "fix" ]
then
rm $i
echo -e "\e[31m$i is bad, and was removed\033[0m";
else
echo -e "\e[31m$i is bad\033[0m";
fi
fi
else
echo -e "\e[36m$i is not a symlink\033[0m";
fi
done
And here's my script to uninstall apps that have no apk:
fixmissing.sh
#!/system/bin/sh
#searches through a list of installed apps, and removes the ones that have no apk file
appfolder="/data/app/"
fix=$1
installed=`pm list packages -f -u`
for i in $installed
do
usefull=${i#*:}
filename=${usefull%=*}
package=${usefull#*=}
if [ -a $filename ]
then
echo -e "\e[32m$package ($filename) is good\033[0m"
else
if [ "$fix" == "fix" ]
then
uninstall=`pm uninstall $package`
if [ "$uninstall" == "Success" ]
then
echo -e "\e[31m$package ($filename) is bad, and was removed\033[0m"
else
echo -e "\e[31m$package ($filename) is bad, and COULD NOT BE REMOVED\033[0m"
fi
else
echo -e "\e[31m$package ($filename) is bad\033[0m"
fi
fi
done
Copy these files to your phone and run them with no arguments to see what they find, or add fix onto the end (fixmissing.sh fix) to make them fix what they find. Run at your own risk, and back up your files first. I am not responsible if this code in any way wrecks anything.
If anyone wants to update or merge these scripts together, that's fine. They were just made to fix my problem, and they have done so; I just thought I'd share them.
I believe any files the app has created on the SD card would not be deleted.
There is another way: using the emulator like a real device, locate the app in the emulator and drag it up to uninstall it.