Using SD card as external storage in Beaglebone Black Debian 8.7 - sd-card

I had been working for a while with my BBB and decided to update to the latest image, so I followed the guide on the Beagleboard.org site. I succeeded and now have the Debian 8.7 2017-03-19 image working just fine. However, before the update I used that same SD card as extra storage, and now every time I put it into the BBB it starts flashing, so I would like to have that same SD card as extra storage as before.
Doing some research, I read that the SD card needs a uEnv.txt file with the following lines:
mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk1p2 ro
optargs=quiet
I'm a newbie at this, so I don't really know how to do this, and the information I have found is not that explanatory. I would really appreciate some help with this so I can start making more interesting applications. Thanks.
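For context, my best guess at how to get those lines into a uEnv.txt on the SD card from a Linux PC would be something like the following, where /dev/sdX1 and /mnt are only placeholders I would first check with lsblk:
sudo mount /dev/sdX1 /mnt
printf 'mmcdev=1\nbootpart=1:2\nmmcroot=/dev/mmcblk1p2 ro\noptargs=quiet\n' | sudo tee /mnt/uEnv.txt
sudo umount /mnt
But I'm not sure this is the right partition, or even the right approach, which is why I'm asking.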

So in order to update the card image it is necessary to follow the steps shown on the Beagleboard.org webpage (software updates), but only up to the eighth step, so that the BBB boots off the SD card; step nine is to program the onboard flash with the latest image and boot off from there instead (in which case the SD card has to be unmounted).
Now that the BBB is booted off the SD card, if the df command is typed (for example in an SSH session from Windows), we would see this:
debian@beaglebone:~$ df -k --human
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 98M 2.9M 95M 3% /run
/dev/mmcblk0p1 3.3G 2.8G 295M 91% /
tmpfs 245M 4.0K 245M 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 245M 0 245M 0% /sys/fs/cgroup
tmpfs 49M 0 49M 0% /run/user/1000
so we know the root file system is on the SD card. Then running the fdisk command to extend the SD card partition gives us the complete space of the SD card for our applications.
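As a rough sketch, the manual resize looks like this (device names taken from the df output above; double-check them with lsblk before touching anything, and note that resize2fs is what actually grows the ext4 file system after fdisk has enlarged the partition):
sudo fdisk /dev/mmcblk0        # delete partition 1 and recreate it with the same start sector, then write with 'w'
sudo reboot                    # reboot so the kernel re-reads the new partition table
sudo resize2fs /dev/mmcblk0p1  # grow the ext4 file system to fill the enlarged partition
Newer BeagleBone images also ship a helper script, /opt/scripts/tools/grow_partition.sh, that automates the same steps if it is present on your image.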
Now the BBB boots off the SD card while also having that same SD card's full capacity available as storage.

Related

How to use Ceph to store a large amount of small data

I set up a CephFS cluster on my virtual machine, and now I want to use this cluster to store a batch of image data (1.4G in total, each image about 8KB). The cluster stores two copies, with a total of 12G of available space. But when I store the data, the system reports that the available space is insufficient. How can I solve this? The details of the cluster are as follows:
Cluster Information:
cluster:
id: 891fb1a7-df35-48a1-9b5c-c21d768d129b
health: HEALTH_ERR
1 MDSs report slow metadata IOs
1 MDSs report slow requests
1 full osd(s)
1 nearfull osd(s)
2 pool(s) full
Degraded data redundancy: 46744/127654 objects degraded (36.618%), 204 pgs degraded
Degraded data redundancy (low space): 204 pgs recovery_toofull
too many PGs per OSD (256 > max 250)
clock skew detected on mon.node2, mon.node3
services:
mon: 3 daemons, quorum node1,node2,node3
mgr: node2(active), standbys: node1, node3
mds: cephfs-1/1/1 up {0=node1=up:active}, 2 up:standby
osd: 3 osds: 2 up, 2 in
data:
pools: 2 pools, 256 pgs
objects: 63.83k objects, 543MiB
usage: 10.6GiB used, 1.40GiB / 12GiB avail
pgs: 46744/127654 objects degraded (36.618%)
204 active+recovery_toofull+degraded
52 active+clean
Cephfs Space Usage:
[root@node1 0]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/nlas-root xfs 36G 22G 14G 62% /
devtmpfs devtmpfs 2.3G 0 2.3G 0% /dev
tmpfs tmpfs 2.3G 0 2.3G 0% /dev/shm
tmpfs tmpfs 2.3G 8.7M 2.3G 1% /run
tmpfs tmpfs 2.3G 0 2.3G 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 178M 837M 18% /boot
tmpfs tmpfs 2.3G 28K 2.3G 1% /var/lib/ceph/osd/ceph-0
tmpfs tmpfs 471M 0 471M 0% /run/user/0
192.168.152.3:6789,192.168.152.4:6789,192.168.152.5:6789:/ ceph 12G 11G 1.5G 89% /mnt/test
Ceph OSD:
[root@node1 mnt]# ceph osd pool ls
cephfs_data
cephfs_metadata
[root@node1 mnt]# ceph osd pool get cephfs_data size
size: 2
[root@node1 mnt]# ceph osd pool get cephfs_metadata size
size: 2
ceph.dir.layout:
[root@node1 mnt]# getfattr -n ceph.dir.layout /mnt/test
getfattr: Removing leading '/' from absolute path names
# file: mnt/test
ceph.dir.layout="stripe_unit=65536 stripe_count=1 object_size=4194304 pool=cephfs_data"
When storing small files, you need to watch the minimum allocation size. Until the Nautilus release this defaulted to 16k for SSD and 64k for HDD, but with the new Ceph Pacific release the default minimum allocation has been tuned to 4k for both.
I suggest you use Pacific, or manually tune Octopus to the same numbers if that's the version you installed.
You also want to use replication (as opposed to erasure coding) if your files are only a small multiple of the minimum allocation size or less, as each EC chunk is subject to the same minimum allocation and would otherwise waste slack space. You already made the right choice here by using replication; I am just mentioning it because you may be tempted by EC's touted space-saving properties, which unfortunately do not apply to small files.
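As a rough back-of-the-envelope check against the numbers in the question (the object count comes from the ceph status above, 64k being the pre-Pacific HDD default):
# ~63,830 objects x 64 KiB minimum allocation x 2 replicas, in GiB
echo "scale=2; 63830 * 64 * 2 / 1024 / 1024" | bc
# => ~7.79 GiB of raw space for only ~1.4 GiB of images,
# which is in the same ballpark as the 10.6GiB shown as used once other overhead is added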
You need to set bluestore_min_alloc_size to 4096; by default its value is 64KB:
[osd]
bluestore_min_alloc_size = 4096
bluestore_min_alloc_size_hdd = 4096
bluestore_min_alloc_size_ssd = 4096
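As far as I know, BlueStore bakes the minimum allocation size in when an OSD is created, so after changing ceph.conf the existing OSDs have to be re-provisioned before the new value takes effect. You can check what the configuration currently resolves to with something like the following, run on the node hosting osd.0:
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd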

Postgres can't vacuum despite enough space left (could not resize shared memory segment bytes)

I have a docker-compose file with
postgres:
  container_name: second_postgres_container
  image: postgres:latest
  shm_size: 1g
and I wanted to vacuum a table, but got
ERROR: could not resize shared memory segment "/PostgreSQL.301371499" to 1073795648 bytes: No space left on device
The first number is smaller than the second one; also, I do have enough space on the server (only 32% is used).
I wonder if Postgres sees the Docker container as not big enough (since it resizes on demand?), or where else the problem could be.
note
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
95c689aa4d38 redis:latest "docker-entrypoint.s…" 10 days ago Up 10 days 0.0.0.0:6379->6379/tcp second_redis_container
f9efc8fad63a postgres:latest "docker-entrypoint.s…" 2 weeks ago Up 2 weeks 0.0.0.0:5433->5432/tcp second_postgres_container
docker exec -it f9efc8fad63a df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
shm 1.0G 2.4M 1022M 1% /dev/shm
df -m
Filesystem 1M-blocks Used Available Use% Mounted on
udev 16019 0 16019 0% /dev
tmpfs 3207 321 2887 11% /run
/dev/md1 450041 132951 294207 32% /
tmpfs 16035 0 16035 0% /dev/shm
tmpfs 5 0 5 0% /run/lock
tmpfs 16035 0 16035 0% /sys/fs/cgroup
tmpfs 3207 0 3207 0% /run/user/1000
overlay 450041 132951 294207 32% /var/lib/docker/overlay2/0abe6aee8caba5096bd53904c5d47628b281f5d12f0a9205ad41923215cf9c6f/merged
overlay 450041 132951 294207 32% /var/lib/docker/overlay2/6ab0dde3640b8f2108d545979ef0710ccf020e6b122abd372b6e37d3ced272cb/merged
thx
That is a sign that parallel query is running out of memory. The cause may be restrictive settings for shared memory on the container.
You can work around the problem by setting max_parallel_maintenance_workers to 0. Then VACUUM won't use parallel workers.
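For example, a quick way to try that from the host could look like the following, where the container name is taken from the question and -U postgres plus the database and table names are placeholders for your own:
docker exec -it second_postgres_container \
  psql -U postgres -d mydb \
  -c "SET max_parallel_maintenance_workers = 0;" \
  -c "VACUUM VERBOSE my_table;"
The SET only affects that session, so normal operation is unchanged.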
I figured it out (a friend helped :) )
I guess I can't count: 1073795648 bytes is slightly more than the 1g I had allocated, so bumping shm_size to 10g instead of 1g indeed helped.
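In other words, the only change needed in the compose snippet from the question was the shm_size value, roughly:
postgres:
  container_name: second_postgres_container
  image: postgres:latest
  shm_size: 10g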

VirtualBox machine has free space but I'm getting low disk space errors/messages [closed]

Today I upped the storage on my machine; however, I'm getting some low disk space errors. The command df -h returns the following:
[caramelo@localhost tmp]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 9.8G 9.8G 3.6M 100% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 18M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 221M 794M 22% /boot
SFolder 238G 207G 31G 88% /media/sf_SFolder
tmpfs 380M 4.0K 380M 1% /run/user/42
tmpfs 380M 48K 379M 1% /run/user/1000
tmpfs 380M 0 380M 0% /run/user/0
I used GParted to expand /dev/sda2, which is not being displayed in the list above, and running du -sh /dev/sda2 returns
[caramelo@localhost ~]$ du -sh /dev/sda2
0 /dev/sda2
To give more storage space I cloned my existing .vdi with vboxmanage clonehd "CentOS7.vdi" "CentOS7Clone.vdi", then ran vboxmanage modifyhd --resize 20000 "CentOS7Clone.vdi", and finally in the VirtualBox settings I attached the cloned .vdi and detached the original .vdi.
Back on CentOS I used GParted to expand /dev/sda2 with the unallocated space I had added with the vboxmanage modifyhd command.
After I allocated all the free space to the sda2 partition, confirmed the changes, and rebooted the OS, I solved the problem using the commands lvextend -l +100%FREE /dev/mapper/centos-root and xfs_growfs /dev/mapper/centos-root.
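For anyone following along, the overall sequence on a default CentOS 7 LVM layout was roughly the following; the pvresize step may be redundant if GParted already resized the physical volume:
sudo pvresize /dev/sda2                               # make LVM aware of the enlarged partition
sudo lvextend -l +100%FREE /dev/mapper/centos-root    # give all free space in the volume group to the root LV
sudo xfs_growfs /dev/mapper/centos-root               # grow the XFS file system (XFS can be grown online, never shrunk)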

From which file should I copy data to make an img on Raspbian if I want to back up my Raspberry Pi and restore it

I saw one highly-voted answer on the net and it goes like this:
On Linux, you can use the standard dd tool:
dd if=/dev/sdx of=/path/to/image bs=1M
Where /dev/sdx is your SD card.
But I checked my device and there is no /dev/sdx.
Some others say dd if=/dev/mmcblk0 of=/path/to/image bs=1M should work fine.
I suppose it has something to do with the version of my Raspberry. Mine is the newest Raspbian version. I don't want to break the system, so I just want to make sure the command is right before I run it. So I came here to ask for help from those who have tried it before.
This is the situation of my filesystems:
~ $ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 15G 4.1G 9.5G 31% /
devtmpfs 214M 0 214M 0% /dev
tmpfs 218M 0 218M 0% /dev/shm
tmpfs 218M 4.7M 213M 3% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 218M 0 218M 0% /sys/fs/cgroup
/dev/mmcblk0p1 41M 21M 21M 51% /boot
tmpfs 44M 0 44M 0% /run/user/1000
Which file should I choose?
Does anybody know which file (similar to /dev/sdx) I should copy the data from?
Thank you very much!
I think what I was trying to do was to copy the files from machine A while using machine A. Most answers on the Internet actually assume using another machine B to copy the files of machine A. That's why when I use "df -h", the terminal shows "/dev/root" instead of "/dev/sdX".
Maybe it's because while the system is running from those files, the card itself can't be accessed as a separate device. So I used another machine B, ran "df -h" there, and it showed "/dev/sdX" successfully. Now I can follow the instructions on the Internet and do the backup.
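To sum it up, the working approach on machine B looked roughly like this; /dev/sdb is only an example name, so check the lsblk output first:
lsblk                                                  # find the SD card, e.g. /dev/sdb with partitions sdb1/sdb2
sudo dd if=/dev/sdb of=~/raspbian-backup.img bs=1M status=progress
sync                                                   # flush everything to disk before removing the card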

How to create a database on a disk with enough space in psql?

I want to import data into a database on AWS, but there is never enough space. I created the database using the command sudo -u postgres createdb ~/data/word2vec/AidaDB -O MyName and tried to import the data into the database using this command:
bzcat AIDA_entity_repository_2014-01-02v10.sql.bz2 | psql /home/ubuntu/data/word2vec/AidaDB
Here is the disk usage:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 89G 84G 343M 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 16G 12K 16G 1% /dev
tmpfs 3.2G 848K 3.2G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 16G 76K 16G 1% /run/shm
none 100M 24K 100M 1% /run/user
/dev/xvdg 138G 60M 131G 1% /home/ubuntu/data/glove
/dev/xvdf 246G 32G 203G 14% /home/ubuntu/data/word2vec
Why isn't there enough disk space? The data is 31GB, but I thought I had created the database in /home/ubuntu/data/word2vec. Is there a way to solve this problem? Many thanks.
You cannot specify the location of the database as part of the database name. PostgreSQL always creates the database in its data directory. However, you could create an additional tablespace and create your database within it.
CREATE TABLESPACE mydbspace LOCATION '/home/ubuntu/data/word2vec';
CREATE DATABASE AidaDB OWNER MyName TABLESPACE mydbspace;
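Note that PostgreSQL is picky about the tablespace directory: it must already exist, be empty, and be owned by the postgres operating system user. A rough sketch of the surrounding steps, with the pgdata subdirectory name being just a suggestion:
sudo mkdir -p /home/ubuntu/data/word2vec/pgdata
sudo chown postgres:postgres /home/ubuntu/data/word2vec/pgdata
sudo -u postgres psql -c "CREATE TABLESPACE mydbspace LOCATION '/home/ubuntu/data/word2vec/pgdata';"
sudo -u postgres psql -c "CREATE DATABASE AidaDB OWNER MyName TABLESPACE mydbspace;"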