Hi everyone. I have been trying to expand the storage of my BeagleBone Black rev C without success.
I followed these instructions to expand the storage of my BBB with a 16GB microSD card. I have already tried burning two different images, Debian 9.1 2017-08-31 4GB SD LXQT and Debian 8.7 2017-03-19 4GB SD LXQT (without flashing the eMMC).
The steps that I have been using are listed below.
First, I burned the image onto the microSD card using Etcher.
Then I inserted the microSD into the BBB, held down the boot button, and plugged the board into my computer to power it on.
After that, I logged into my BBB using ssh and checked the Debian version, which was correct, indicating that the boot from the microSD card had worked. But when I tried to check the disk space, I couldn't find the partition for the microSD.
As you can see in the image below, it is supposed to show the rootfs where I have the new BBB image plus the 16GB of extra space, but I'm not able to see the extra partition. Does anyone know what I could possibly be doing wrong?
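For reference, the usual way to run this check (these are my commands, not necessarily the original poster's exact ones) is:

df -h    # show mounted filesystems and their sizes; the rootfs should be ~16GB after growing
lsblk    # list all block devices and partitions, mounted or not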
I was facing the same issue, and this is what worked for me:
Log into your BBB over ssh.
Run this command:
nano grow_partition.sh
Copy the code from here, then paste it into the editor.
Save the file by pressing Ctrl+O, then Enter.
Exit the nano editor by pressing Ctrl+X.
Make the script executable with chmod +x grow_partition.sh
Run this command: sudo ./grow_partition.sh
Reboot the BBB.
Enjoy :)
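The script itself is behind the link, but a minimal sketch of what a grow-partition script typically does on a BBB (an assumption on my part; it presumes the rootfs is partition 1 on /dev/mmcblk0) looks like this:

#!/bin/bash
# grow the root partition to fill the microSD card, then grow the filesystem
sudo apt-get install -y cloud-guest-utils   # provides the growpart tool
sudo growpart /dev/mmcblk0 1                # extend partition 1 to the end of the card
sudo resize2fs /dev/mmcblk0p1               # grow the ext4 filesystem to match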
I have a BeagleBone Black (original, with 512MB of RAM) and I was able to use a different method to add swap memory successfully (unfortunately user3680704's method didn't work for me).
I got the idea from this post which basically says the following:
You can check your current memory with
free -h
And you can create the swap memory by running the commands below. Again, a more detailed explanation is in the link above, but in case that link ever goes dead, you can follow these:
sudo fallocate -l 1G /swapfile
ls -lh /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Next, open the fstab file by running
sudo vi /etc/fstab
and add the following line to the file
/swapfile swap swap defaults 0 0
You can then check your swap by running
swapon --show
This worked well for me and added 1G of swap. You can add more or less by changing the 1G value.
On a Proxmox machine I noticed that backups of some of the VMs were failing, so I wanted to investigate.
While testing, the whole host stopped responding and I forced a reboot.
After the reboot I seem to have lost the whole data store.
Almost every zfs command results in a freeze.
zpool status, zpool list, you name it; it locks up and you can't even Ctrl+C out of it.
I can still create a new SSH session and try other things though.
In an attempt to see what is causing the commands to hang, I thought about running
zpool set failmode=continue storage-vm
hoping it would show me an error, but as you can guess, that command also hangs.
It's a pool created on two nvme drives. The original command to create the pool was
zpool create -f -o ashift=12 storage-vm /dev/nvme0n1 /dev/nvme1n1
The first thing I thought was that one of the NVMe drives had gone bad, so I checked the SMART status, but it shows both drives are perfectly healthy.
Then, before trying anything else, I decided to back up the drives to an NFS share with the dd command.
dd if=/dev/nvme0n1 of=/mnt/pve/recovery/nvme0n1
dd if=/dev/nvme1n1 of=/mnt/pve/recovery/nvme1n1
Both commands completed, and on the NFS share I now have two images of exactly the same size (2TB each).
Then I tried a non-destructive read/write test with dd on both NVMe drives and got no errors.
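A read test of that sort might look like this (my assumption of the invocation, not the exact command from the post):

dd if=/dev/nvme0n1 of=/dev/null bs=1M status=progress   # read every block; any I/O error would surface here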
To rule out as much as possible, I built another Proxmox machine using spare hardware (same brand and type, etc.) and placed the drives in it.
On the new machine all zpool commands also hang. If I run zpool status with the drives removed from the motherboard, it does not hang, but obviously it has nothing to show.
So I placed the NVMe drives back in the original machine.
zdb -l /dev/nvme0n1 gives
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
which kind of worries me. It does the same for the other NVMe drive.
And now I'm running out of ideas. I have little knowledge of ZFS and don't know what can be done to save the data.
Obviously the drives are not completely dead, since SMART reports them healthy and I can dd an image from them.
Faulty RAM or a faulty motherboard are pretty much ruled out as well by the hardware swap.
Is there a way to recover at least some VMs from that storage?
Help/pointers will be greatly appreciated.
The issue was eventually solved, and this is what I did.
Since the pool was made up of two NVMe drives, I created two loop devices from the dd images:
losetup -fP /mnt/pve/recovery/nvme0n1
losetup -fP /mnt/pve/recovery/nvme1n1
You can check the created loop devices with lsblk and detach them with losetup -d /dev/loop[X]
Finally, I imported the pool into ZFS in read-only mode and was able to access/recover all my data:
zpool import -f -d /dev/loop0p1 -d /dev/loop1p1 -o readonly=on storage-vm
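From there, the data can be copied off with the usual tools, for example (the zvol name below is hypothetical; Proxmox VM disks usually live on zvols named like vm-<id>-disk-<n>):

zpool status storage-vm   # confirm the pool imported cleanly
zfs list -r storage-vm    # list the datasets and zvols in the pool
# copy one VM disk image out of the read-only pool
dd if=/dev/zvol/storage-vm/vm-100-disk-0 of=/mnt/pve/recovery/vm-100-disk-0.raw bs=1M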
I'm brand new to the world of Pi - so new that I had never even touched one until three days ago - and I know very little about Linux. I have a Western Digital MyBook plugged directly into my router, and I've found I'm able to mount it as a drive with the following command:
sudo mount -t cifs -o user=yourusername,passwd=yourpasswd,rw,file_mode=0777,dir_mode=0777 //mybookIP/public /mnt/mybook
Unfortunately, it seems to drop this mount whenever I reboot. Anyone have a suggestion on how to make this permanent?
Based on the comments here, this is what I did:
First, in Terminal I ran:
sudo nano /etc/fstab
Once that was opened, I added the line:
//mybookIP/public /mnt/mybook cifs _netdev,username=yourusername,password=yourpasswd 0 0
Once I saved this I was able to reboot and the mounted drive was visible when it all loaded back up again.
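A refinement I'd add (not part of the original answer): mount.cifs also supports a credentials file, which keeps the password out of the world-readable /etc/fstab. Put the credentials in /root/.smbcredentials:

username=yourusername
password=yourpasswd

then lock the file down and reference it from fstab:

sudo chmod 600 /root/.smbcredentials

//mybookIP/public /mnt/mybook cifs _netdev,credentials=/root/.smbcredentials 0 0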
I am trying to install Sails.js globally on my DigitalOcean VPS, but every time the process seems to get killed. Any idea why this is happening and how I can overcome the problem? Let me know if more debug info is required.
Try 1 / Try 2 / Try 3: (output of the failed install attempts omitted)
You need to add swap on your DigitalOcean droplet to temporarily hold data when installing Sails or Shiny.
Try the following commands to allocate storage to a swap file:
sudo fallocate -l xx /swapfile
xx is the amount of storage you want to assign; I found 1G is enough for my Sails project. Then enable the file for swap:
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
More information can be found at https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04
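One step the answer above leaves out (the linked tutorial covers it): swap enabled with swapon alone does not survive a reboot. To make it permanent, add this line to /etc/fstab:

/swapfile none swap sw 0 0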
I'm experiencing some weird behavior on Google Compute Engine. I made a new instance with Ubuntu on it, installed a node app I'm working on, pulled code from GitHub, etc.
Then I installed mongodb and nginx. The weird thing is, every time I leave the session and reconnect, my mongodb and nginx installation files have disappeared.
For example, when I install nginx I find the installation under /etc/nginx, where I can see files like nginx.conf. But when I left the Compute Engine console session and reconnected later, that directory was gone. The same thing happens with mongodb.
My node installation under /home/abdul/mystuff doesn't disappear, though.
Is this normal? Is it a setting?
Details:
this is an Ubuntu image (I don't know which version, and I'm not sure how to check)
I'm using the following to install nginx:
sudo apt-get update
sudo apt-get install nginx
Result of the lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
└─sda1 8:1 0 10G 0 part /var/lib/docker/aufs
sdb 8:16 0 5G 0 disk
└─sdb1 8:17 0 5G 0 part /home
Looks like you're running a Docker container on your instance (note the /var/lib/docker/aufs mountpoint) and installing the software inside the container.
If you want to save changes back to the image, it is possible to use the docker commit command, but this is almost definitely not what you want.
Instead, use a Dockerfile to build images and update it whenever you want to make a change. This way you can easily recreate the image and make changes without starting from scratch. For persistence (e.g. config files and databases) use volumes, which are just directories stored outside of the Union File System as normal directories on the host.
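A minimal sketch of that workflow (the image names, paths, and port are illustrative assumptions, not taken from the question):

# Dockerfile: bake nginx into a reproducible image
FROM ubuntu
RUN apt-get update && apt-get install -y nginx
COPY nginx.conf /etc/nginx/nginx.conf

Build and run it, and use a host volume for anything that must persist:

docker build -t my-nginx .
docker run -d -p 80:80 my-nginx
# for mongodb, keep the database files on the host via a volume:
docker run -d --name mongo -v /srv/mongo-data:/data/db mongo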
I am currently working on CentOS running on an Intel Atom board. I mistakenly renamed the libc-2.17.so library on my board to _libc-2.17.so, and when I reboot the board it gives me the error below.
[ OK ] Reached target Initrd Default Target.
systemd-journald[136]: Received SIGTERM
Kernel panic - not syncing: Attempted to kill init! exitcode=0x00007f00
Is there any possible way to get back to the original state?
I entered the GRUB prompt and am able to see the file with cat /lib64/_libc-2.17.so, but I'm not sure how to rename it back to its original name.
Thanks in advance.
Can you enter run-level 3 from GRUB?
If so:
sudo mv /lib64/_libc-2.17.so /lib64/libc-2.17.so
If you can't enter run-level 3, you can try using a live DVD/USB to run the above command; you're just going to have to manually search for the right partition on which the incorrectly named file is located, as in the sketch below.
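From a live environment, that would look something like this (/dev/sda2 is an assumption; substitute the root partition of the installed system):

sudo mkdir -p /mnt/rescue
sudo mount /dev/sda2 /mnt/rescue    # mount the installed system's root partition
sudo mv /mnt/rescue/lib64/_libc-2.17.so /mnt/rescue/lib64/libc-2.17.so
sudo umount /mnt/rescue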
Otherwise, I'm afraid you're going to need to reinstall the OS.