ZFS recovery with missing device

What I wanted:
zpool add poolname cache nvme-partition
What I did:
zpool add poolname nvme-partition
zpool export poolname
(device no longer available)
zpool import
shows poolname but cannot import
Any idea how to recover the 10 TB of my poolname without the 10 GB of my nvme-partition?
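For reference, a minimal sketch of the difference between the two commands (the device path is an example, not from the post):

# Intended: attach the partition as an L2ARC cache device, which is safe to lose
zpool add poolname cache /dev/nvme0n1p1
# Actually run: adds the partition as a new top-level data vdev; pool data
# is then striped across it, so losing it renders the whole pool unimportable
zpool add poolname /dev/nvme0n1p1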

Related

Memory not freed upon image.close() in Pillow

I am struggling to de-allocate memory from a Pillow Image.
If I run the following:
from PIL import Image
from memory_profiler import profile

@profile
def do_image_things():
    im = Image.open('foo.png')
    im.close()
    del im
Then I get the following output from Python's memory_profiler:
Line # Mem usage Increment Line Contents
================================================
8 33.645 MiB 33.645 MiB @profile
9 def do_image_things():
12 37.383 MiB 3.738 MiB im = Image.open(u'foo.png')
13 37.387 MiB 0.004 MiB im.close()
14 37.387 MiB 0.000 MiB del im
The im.close() call appears not to have de-allocated the memory that Image.open() reserved. This is a bare-bones reproduction of an issue we've noticed in a large-scale image processing deployment.
Has anyone been able to resolve this problem?
I'm running Pillow 5.0.0 on Python 2.7.14 on Mac OS X.
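For comparison, a sketch (not a confirmed fix) using Pillow's context-manager support, which guarantees close() runs even on exceptions; whether freed memory is returned to the OS still depends on the allocator, so the profile may look similar:

from PIL import Image
from memory_profiler import profile

@profile
def do_image_things():
    # The with-block closes the file handle on exit; load() forces the
    # pixel data to actually be decoded so the measurement is meaningful.
    with Image.open('foo.png') as im:
        im.load()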

TestLink cannot import more than 1131 test cases

My TestLink environment is:
Linux Ubuntu 14.04.4
Apache 2.4.7
MySQL 5.6
PHP 5
TestLink 1.9.17
I tried to import more than 2000 test cases, but only 1131 of them were imported, and the right frame of the import page showed a white page. Also, when I deleted the test suite, only the first three sub test suites were deleted; the fourth sub test suite remained.
I tried changing max_input_vars = 1000 to max_input_vars = 10000 in php.ini, but the problem remained.
I configured /etc/php5/apache2/php.ini as follows (old value → new value) and it succeeded:
max_input_vars = 1000 → max_input_vars = 10000
memory_limit = 128M → memory_limit = 256M
post_max_size = 12M
upload_max_filesize = 10M
Then I restarted Apache:
sudo /etc/init.d/apache2 restart
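To confirm which values PHP will actually load after the restart, one can grep the same php.ini the answer edits (a quick sanity check):

grep -E 'max_input_vars|memory_limit|post_max_size|upload_max_filesize' /etc/php5/apache2/php.ini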

MongoDB fatal error: runtime: out of memory

I was trying to import a large collection into MongoDB, but every time I try to import this file:
-rw-r--r-- 1 root root 3491368960 Jul 15 06:15 activity-june.json
mongoimport --db analytics --collection reports < activity-june.json
it gives me an error like this:
fatal error: runtime: out of memory
goroutine 37 [running]:
runtime.throw(0xcba857)
/usr/local/go/src/pkg/runtime/panic.c:520 +0x69 fp=0x7fd4984fea68 sp=0x7fd4984fea50
runtime.SysMap(0xc308100000, 0x100000000, 0x42b700, 0xcd7eb8)
/usr/local/go/src/pkg/runtime/mem_linux.c:147 +0x93 fp=0x7fd4984fea98 sp=0x7fd4984fea68
runtime.MHeap_SysAlloc(0xce3ea0, 0x100000000)
/usr/local/go/src/pkg/runtime/malloc.goc:616 +0x15b fp=0x7fd4984feaf0 sp=0x7fd4984fea98
MHeap_Grow(0xce3ea0, 0x80000)
/usr/local/go/src/pkg/runtime/mheap.c:319 +0x5d fp=0x7fd4984feb30 sp=0x7fd4984feaf0
MHeap_AllocLocked(0xce3ea0, 0x80000, 0x0)...
here's my swap space:
-bash-4.1# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 4128764 133416 -1
And here's my last query on the database; it's not inserting anything:
> db.reports.find().count();
7
Kindly check the following:
1) Your machine should be 64-bit; a 32-bit build of MongoDB cannot handle more than 2 GB of data.
2) Each of your documents must not exceed 16 MB.
http://docs.mongodb.org/manual/reference/program/mongoimport/
Also check this post:
https://forum.syncthing.net/t/fatal-error-runtime-out-of-memory/2190
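As a workaround sketch (not part of the answer above; it assumes the dump is newline-delimited, one JSON document per line), the file can be split into smaller pieces and imported one at a time, which keeps mongoimport's working set bounded:

# Split into pieces of at most 500 MB without breaking lines (documents)
split -C 500M activity-june.json chunk-
for f in chunk-*; do
  mongoimport --db analytics --collection reports < "$f"
done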

MongoDB build/compile error: not enough memory on Ubuntu

Preface, so this isn't marked as a duplicate: I've seen lots of MongoDB memory issues posted on Stack Overflow, but none that involve errors during compilation.
I just freshly downloaded and ran Ubuntu on VirtualBox (on a Mac), so I feel like there should be enough memory. However, when I try to compile MongoDB from source, I've gotten the following errors about an hour into the compilation (I have tried this a few times now):
scons: *** [<whatever file it was working on>] No space left on device
scons: building terminated because of errors
and on a separate occasion
IOError: [Errno 28] No space left on device:
File "/usr/lib/scons/SCons/Script/Main.py", line 1359:
_exec_main(parser, values)
File "/usr/lib/scons/SCons/Script/Main.py", line 1323:
_main(parser)
File "/usr/lib/scons/SCons/Script/Main.py", line 1072:
nodes = _build_targets(fs, options, targets, target_top)
File "/usr/lib/scons/SCons/Script/Main.py", line 1281:
jobs.run(postfunc = jobs_postfunc)
File "/usr/lib/scons/SCons/Job.py", line 113:
postfunc()
File "/usr/lib/scons/SCons/Script/Main.py", line 1278:
SCons.SConsign.write()
File "/usr/lib/scons/SCons/SConsign.py", line 109:
syncmethod()
File "/usr/lib/scons/SCons/dblite.py", line 117:
self._pickle_dump(self._dict, f, 1)
Exception IOError: (28, 'No space left on device') in <bound method dblite.__del__ of <SCons.dblite.dblite object at 0x7fbe2a577dd0>> ignored
I've tried both of the following build commands:
scons all --dbg=on -j1
scons --dbg=on -j1
According to VirtualBox, the virtual size is 8 GB and the actual size is 4.09 GB. Also, if it makes a difference, the odds that the memory on my Mac is actually full are slim to none.
Any help would be greatly appreciated, thanks in advance.
EDIT: I've tried creating more memory (24 GB) and resizing partitions, but I still cannot complete a build.
Here is the output of the df -T command:
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 ext4 15345648 14304904 238184 99% /
none tmpfs 4 0 4 0% /sys/fs/cgroup
udev devtmpfs 1014316 12 1014304 1% /dev
tmpfs tmpfs 205012 860 204152 1% /run
none tmpfs 5120 0 5120 0% /run/lock
none tmpfs 1025052 152 1024900 1% /run/shm
none tmpfs 102400 40 102360 1% /run/user
When you say memory, I believe you mean disk space. Try running the command
df -T to see what % usage you really have. You will probably need to resize the amount of space VirtualBox has assigned to your image, as well as resize your partition. It may be simpler to just create a new VirtualBox image with 16 or 24 GB of disk space.
If you decide to go the resize partition route, here is a helpful resource: https://askubuntu.com/questions/126153/how-to-resize-partitions
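As a sketch of the resize route (the .vdi path is an example; VBoxManage takes the new size in MB), the virtual disk is grown from the host, after which the partition inside the guest still has to be expanded, e.g. with GParted:

# On the Mac host: grow the virtual disk to 24 GB
VBoxManage modifyhd ~/VirtualBox\ VMs/ubuntu/ubuntu.vdi --resize 24576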

Cloning a Bootable SD Card from Linux Using dd command

I have a Raspberry Pi with the default, store-bought operating system on it. I want to wipe the SD card clean so that I can put a new operating system on it, but I want to preserve the original OS in a backup disk image. I plan to store it in a .bin file. The SD card has two partitions.
I used the following command to figure out which drive is the SD card.
sudo dmesg | tail
--output--
[ 2954.642182] sd 3:0:0:0: [sdb] Attached SCSI removable disk
[ 2955.149750] EXT4-fs (sdb2): mounted filesystem with ordered data mode. Opts: (null)
I believe this tells me that it is under dev/sdb2, but I also tried dev/sdb, ~/dev/sdb and ~/dev/sdb2. I used the following command to create the image:
dd if="dev/sdb2" of="~/Desktop/Pi Backup/Pi.bin"
But when I try to do this it returns the error message
dd: opening `dev/sdb2': No such file or directory
I'm running Linux Mint, Cinnamon.
Any help is appreciated.
Instead of doing:
sudo dd if="/dev/mmcblk0p1" of="Pi_1.bin"
sudo dd if="/dev/mmcblk0p2" of="Pi_2.bin"
try:
sudo dd if="/dev/mmcblk0" of="Pi.bin"
p1 and p2 are the partitions on that device, and you want to make an image of the entire device.
All devices are under /dev.
dev is looking for dev under the current directory, and ~/dev is looking for dev under your home directory.
/dev/sdb2 is the second partition; I would expect you to have /dev/sdb1 (the first partition) too.
sudo dmesg | tail -30
will give you the last 30 lines; then you should be able to see sdb1 too.
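Alternatively, a quicker check than scrolling dmesg is to list the block devices directly, which shows the card and both of its partitions (assuming the card really is sdb):

lsblk /dev/sdb
# or, with partition details:
sudo fdisk -l /dev/sdb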
I'm on Mint 14, and when I ran mount the SD card showed as 2 partitions, like below:
/dev/mmcblk0p1 on /media/nig/3312-932F type vfat (rw,nosuid,nodev,uid=1000,gid=1000,shortname=mixed,dmask=0077,utf8=1,showexec,flush,uhelper=udisks2)
/dev/mmcblk0p2 on /media/nig/b7b5ddff-ddb4-48dd-84d2-dd47bf00564a type ext4 (rw,nosuid,nodev,uhelper=udisks2)
so I then did
sudo dd if="/dev/mmcblk0p1" of="Pi_1.bin"
sudo dd if="/dev/mmcblk0p2" of="Pi_2.bin"
It seemed to work; I'm not sure about restoring, as I haven't tried that yet.
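For the whole-device route, combining the two answers above (bs=4M is just a larger block size to speed up the copy; the restore command is the untested inverse):

# Back up the entire card, partition table included
sudo dd if=/dev/mmcblk0 of=~/Pi.bin bs=4M
# Restore later by swapping if/of (this overwrites the whole card)
sudo dd if=~/Pi.bin of=/dev/mmcblk0 bs=4M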