XBMC media library lost after system is rebooted - raspberry-pi

I'm running OpenELEC 4.0 with XBMC 13 (Gotham) on a Raspberry Pi.
I'm accessing the source via NFS, which is shared (on my Ubuntu server) as instructed on the wiki:
/media/Large/Series 192.168.2.0/24(rw,all_squash,insecure,no_subtree_check)
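(For anyone reproducing this, the export can be applied and checked on the Ubuntu server roughly as follows; the share path is the one above.)
# re-read /etc/exports and list what is currently exported
sudo exportfs -ra
sudo exportfs -v
# confirm the share is visible to NFS clients
showmount -e localhost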
Adding, e.g., TV shows from the (above) NFS source (and marking it as a TV show source) properly scans the subdirectory structure and builds the TV Shows library.
The problem is: when the system is restarted (via Reboot), the library is lost, and choosing "Update library" (from the left menu) does nothing.
Turning on "Update library on startup" didn't change anything either.
The only way to get the library back is to remove the "TV shows" entry and create it again. But I can't do that for all movies, shows and music after each restart!
After the library disappeared (after a restart), I tried opening the (~/.xbmc/userdata/Database/)MyVideos78.db in SQLite Maestro, but it reported an error.
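(As a sanity check, the database file can be tested directly with the sqlite3 command-line tool, copying it to another machine if sqlite3 isn't available on the box; a corrupted file typically fails this check.)
# a healthy database prints "ok"; a corrupted one prints errors or fails to open
sqlite3 ~/.xbmc/userdata/Database/MyVideos78.db "PRAGMA integrity_check;"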
I have also tried mounting the NFS share to a "tvshows" directory in my home dir with a .config/autostart.sh script:
#!/bin/sh
sleep 25
mount -t nfs 192.168.2.101:/media/Large/Series /storage/tvshows -o nolock
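(A variant of the same script that waits for the NFS server to answer instead of sleeping a fixed 25 seconds; same server, mount target and options as above.)
#!/bin/sh
# wait up to ~60 seconds for the NFS server to respond before mounting
i=0
while ! ping -c 1 -W 1 192.168.2.101 >/dev/null 2>&1 && [ $i -lt 60 ]; do
  i=$((i+1))
  sleep 1
done
mount -t nfs 192.168.2.101:/media/Large/Series /storage/tvshows -o nolock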
The end result is always the same: library lost after system restart :(
Logs for your inspection:
https://drive.google.com/file/d/0B7BtnBy2o_3vVU44YmdneDlaak0/edit?usp=sharing
dmesg output: http://pastebin.com/iXEtbr1L
Any ideas how to solve this?

I bought a new (SanDisk) SD card, copied the backed-up /storage data onto it, and the error is gone.
Conclusion: a bad (Toshiba) SD card was the reason the library got corrupted at system restart.

Related

Docker - under Windows 10 Pro - Need to map volumes and have them work, not quietly fail

I ran various containers on two different Windows 10 Pro machines and thought that I had the data drives mapped correctly, but now I'm finding out that it isn't writing the data there at all. One example was MongoDB, where I mapped /mongodb/database:/data/db. I upgraded Docker, and when it restarted MongoDB... POOF! No data. I thought that was weird, looked in /mongodb/database, and the directory is empty. Thankfully, the app is still in the development phase, and it's not critical that the data was lost...
The line from the docker-compose file:
volumes:
  - /mongodb/database:/data/db
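(As it turns out further down, the mapping that actually binds to the Windows host uses an absolute drive-letter path; a sketch, assuming the data should live under C:\mongodb\database. With the bare /mongodb/database form, Docker Desktop tends to create that path inside its Linux VM rather than on C:, which would match the data ending up hidden inside Docker rather than on the host.)
volumes:
  - c:/mongodb/database:/data/db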
Different machine:
I installed the gogs/gogs image, mapping the data:
docker run --name=Gogs-Git -p 10022:22 -p 10080:3000 -v /var/gogs:/Docker/Gogs-GitServer/Data gogs/gogs
It seemed to work perfectly, so I was thinking everything was fine; I pushed a repo up to it... and today I looked at \Docker\Gogs-gitserver\data and there are no files... so where did it write the data?
I also installed TeamCity, mapping that data... nope, it has no logs, no data...
This feature just seems not to work at all. I found a reference from 2016 saying I need to look at the 'shared' tab (below General) and check C: to be shared, but, well, no, that isn't a tab, so it isn't that.
There is no way someone would write a system that just quietly wrote the data somewhere else, or didn't bother actually mapping it, without giving an error - that would be nuts.
So, there must be some other explanation... One of the machines has Hyper-V enabled in the BIOS, the other one doesn't even support it as far as I know.
I think some of the images are Linux, and some are Windows (TeamCity, I'm pretty sure, is).
OK, this is interesting... If I look at the volumes and enter one that is in use, I get this:
The Target looks about like the right path, but I'm not sure about the /backup and the /data on the last two lines; if these are supposed to be directories under that, they don't exist. But if I click on the Data tab, I can see the data: it is in Docker, hidden and not shared, in spite of there being a 'Target' that points at the right directory... How do I get it to start writing this data correctly to that folder??
I've not confirmed this yet with the above configuration, but I found that for other containers I needed to specify the path as 'c:/data/MongoDb/Database'. When I created the container using that as the path, it worked and I have data there now. I just need to go back and fix all these VMs so they have their data correctly...
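(A quick way to verify where a container is actually writing is to inspect its mounts; using the Gogs container from above as an example:)
# list the mounts Docker actually attached to the container;
# the Source field should point at the intended host directory
docker inspect --format "{{ json .Mounts }}" Gogs-Git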

Minikube on Windows does nothing on start

I have followed all the instructions on Minikube carefully (I thought). I installed it on Windows 10 (version 1.7.2), started a PowerShell console as Administrator, set the 3 proxy variables (I am behind a proxy), enabled Microsoft Hyper-V, and ran the command: minikube start --vm-driver=hyperv
It downloads the VM boot images, then I get the following line of output:
* Creating hyperv VM (CPUs=2,....etc) ....
AND THAT'S IT!
Nothing else!! If I start the Hyper-V Manager I don't see any VMs there. The .minikube directory is populated with several dirs and files. But for the rest I am completely blind!
I have left it to run for half an hour or more. Still nothing.
I have tried terminating the process, stopping, deleting (in this case I get the output 'Deleting Kubernetes cluster', but whether this means anything I don't know) and flushing the .minikube directory... then running it all again off a clean base. NADA! NOTHING! Same thing!
Could someone please tell me what I am doing wrong? I thought this was supposed to work out of the box! Why don't I see my VM in the Hyper-V Manager, for a start? I don't even get as far as seeing 'Starting Kubernetes cluster', yet I get no errors!
Try to follow this guide. It has step-by-step instructions about how to set up Docker and Minikube on Windows 10 with Chocolatey.
Also, here you will find an analogous issue with possible solutions.
Before you start again, remember to delete the .minikube folder after executing minikube delete, so that no leftover configuration persists.
Please let me know if that helped.
For the record, I flushed everything... and tried several things from the above page, the K8s site and elsewhere. In a nutshell, Docker for Desktop works and Minikube doesn't (not 100%, anyway)! I was just curious back in February as to whether I could set up a local Kubernetes environment quickly and easily, and I am afraid for me the answer is no: Minikube is not quick and easy. Also, you can enable Kubernetes on Docker Desktop now, of course, and it works out of the box as software should, so there is no more need for Minikube.
The following are the instructions for setting up and installing Minikube and its dependencies for use on Windows Pro or Enterprise with Docker Desktop and Hyper-V.
Install Kubectl
Create a new directory that you will move your kubectl binaries into. A good place would be C:\bin
Download the latest kubectl executable from the link on the Kubernetes doc page:
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows
Move this downloaded .exe file into the bin directory you created.
Use Windows search to type "env", then select "Edit the system environment variables".
In the System Properties dialog box, click "Environment Variables".
In "System variables", click on the "Path" variable and then click "Edit".
Click "New" and then type C:\bin.
Drag the newly created path so that it is higher in the order than Docker's binaries. This is very important and will ensure that you do not end up with an out-of-date kubectl client.
Click "OK"
Restart your terminal and test by typing kubectl into it. You should get the basic commands and help menu printed back to your screen. If this doesn't work try restarting your machine.
Run kubectl version to verify that you are using the newest version and not the out of date v1.10 version.
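(A quick way to confirm the PATH ordering took effect, run from the same terminal:)
# the first result should be C:\bin\kubectl.exe
where.exe kubectl
# print only the client version
kubectl version --client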
Install Minikube
Download the Windows installer here:
https://github.com/kubernetes/minikube/releases/latest/download/minikube-installer.exe
Double click the .exe file that was downloaded and run the installer. All default selections are appropriate.
Open up your terminal and test the installation by typing minikube. You should get the basic commands and help menu printed back to your screen. If this doesn't work try restarting your machine.
Configure Hyper-V
In Windows Search, type "Hyper-V" and select "Hyper-V Manager"
In the right sidebar, click "Virtual Switch Manager"
Leave "New virtual network switch" and "External" selected, and click "Create Virtual Switch"
Name the switch "Minikube Switch" (or whatever you would like to name it)
Click "Apply" and acknowledge the pending-changes dialog box by clicking "Yes"
Once the switch has been created, click "OK"
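(The same switch can also be created from an elevated PowerShell instead of the GUI; the adapter name "Ethernet" below is an assumption, substitute the name of your physical NIC.)
# create an external virtual switch bound to the physical adapter, keeping host networking on it
New-VMSwitch -Name "Minikube Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true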
Starting Up Minikube
Since by default Minikube expects VirtualBox to be used, you need to tell it to use the hyperv driver instead, as well as the Virtual Switch created earlier.
Start up a terminal as an Administrator. Then, in your terminal run:
minikube start --vm-driver hyperv --hyperv-virtual-switch "Minikube Switch"
NOTE: all minikube commands must be run from an elevated (Administrator) terminal.
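(Once the command returns, a couple of quick sanity checks from the same elevated terminal:)
# the VM should now also be visible in Hyper-V Manager
minikube status
# the minikube node should eventually report Ready
kubectl get nodes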

Raspberry Pi + ownCloud: never stops finishing setup wizard

Model: Raspberry Pi 3 model B+
OS: Raspbian Stretch
Target disk for data storage: Netgear ReadyNAS (NTFS, mounted via fstab) with a share for ownCloud and www-data as owner of the mount
I've been following this guide for installing and setting up ownCloud on my Pi, following every step and doing exactly what the guide says.
It all goes well until the very last step, where I fill out the forms for setting up ownCloud via the browser (seemingly correct inputs and no errors), and when I click the finish button it just loads forever. The browser also indicates that the page is loading, and this goes on forever.
Tables in the ownCloud database on the Pi get created, so something is happening, but it never seems to create the admin user, as the oc_users table stays empty no matter how long I wait. The Pi also seems to slow down drastically while doing this: it takes forever to perform simple and otherwise instant tasks like ls, rmdir, rm, etc.
The target disk for data storage, as filled in for the data directory path in the ownCloud wizard, is a NAS disk mounted with www-data (uid/gid 33) as owner and chmoded to 777. I know for a fact that the www-data user has the right privileges on the disk, since I've overcome permission issues before, and that the MySQL credentials are correct. I've tried ownCloud versions 10.0.3/8 and 9.1.8 with the same result.
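(One cheap way to double-check that write access from the Pi itself; the data directory path below is a placeholder for whatever was entered in the wizard.)
# try writing to the ownCloud data directory as the web server user
sudo -u www-data touch /mnt/owncloud-data/.write_test && echo writable
sudo -u www-data rm /mnt/owncloud-data/.write_test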
Has anybody encountered this issue before or have any clue what this is all about?
Turns out it was just insanely slow...
I had it running for about 2 hours, and it finally finished and sent me to the login page, where I was able to log in and finally use my ownCloud. Installing a PHP cache optimizer seemed to improve things a bit.
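(For reference, the kind of opcode-cache settings commonly recommended for ownCloud; the file path is an assumption for PHP 7.0 on Raspbian Stretch, adjust to your PHP version.)
; e.g. in /etc/php/7.0/apache2/conf.d/10-opcache.ini
opcache.enable=1
opcache.enable_cli=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1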

sshfs -o follow_symlinks mounts with broken softlinks

Up until a day ago I was perfectly able to mount a drive via sshfs with the follow_symlinks option given.
In fact I set up an alias for this command and used it for several weeks.
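(For context, the alias is essentially of this shape; the host and paths here are placeholders, not the real ones.)
# mount the remote directory over sshfs, resolving symlinks on the server side
alias mountremote='sshfs -o follow_symlinks user@remote.host:/remote/dir /mnt/remote'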
Since yesterday, using the same alias, the volume still mounts correctly but all the soft symlinks within that volume are broken.
Logging in using a regular ssh session confirms that the symlinks are actually functioning.
Are there any configuration files that may interfere with what I try to do?
I was modifying /etc/ssh/ssh_config and /etc/hosts because I experienced severe login delays when starting an ssh session from a friend's place. But I reverted any changes later on.
Could a wrong configuration in these files cause my issue?
Btw, I'm using Ubuntu 16.04.
It turns out that the permissions on the particular machine I was trying to mount the folder from changed over the weekend.
It now only allows access to certain folders from within the same network. That is why my soft links (pointing to now permission-restricted content) seemed broken when mounting from my home network.

Mapping CentOS NFS to another CentOS Server

CentOS 5.5
I have a web application running on a server, and it needs access to another CentOS server's file system running in the same network (via private IP). After a bunch of googling, it looks like mounting the drive via NFS is a good way to go, but I'm not finding any good step-by-step instructions on how to go about it. I've read the man docs for the mount command and read some docs on the CentOS wiki as well, but I feel like I'm missing something. Here is what I'm trying:
mount -t nfs my.ip.address:/somePath /somePath/mount
I keep getting a 'no route to host' error, but I can ping the server just fine. I'm guessing that I'm possibly missing a port I need to open or something, but again, I can't find information that makes sense to a non-sysadmin like myself.
Thanks for any help.
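(For what it's worth, 'no route to host' while ping works usually points at a firewall blocking the NFS ports; a rough check and fix, assuming the stock iptables setup on CentOS 5.)
# from the web server: can we reach the portmapper on the file server?
rpcinfo -p my.ip.address
# on the file server: allow portmapper (111) and NFS (2049); adjust to your firewall policy
iptables -I INPUT -p tcp --dport 111 -j ACCEPT
iptables -I INPUT -p udp --dport 111 -j ACCEPT
iptables -I INPUT -p tcp --dport 2049 -j ACCEPT
iptables -I INPUT -p udp --dport 2049 -j ACCEPT
service iptables save
# NFSv3 also uses dynamic mountd/statd ports unless they are pinned in /etc/sysconfig/nfs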
I ran across this, followed it step by step, and now I'm up and running!
http://www.cyberciti.biz/faq/centos-fedora-rhel-nfs-v4-configuration/
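(In short, a generic NFS export/mount setup looks roughly like this; the export path reuses the one from the question, the client IP is a placeholder, and the linked guide covers the NFSv4-specific details.)
# on the file server, in /etc/exports:
# /somePath   client.ip.address(rw,sync,no_subtree_check)
# then apply the exports and start the services (CentOS 5 style)
exportfs -ra
service portmap start
service nfs start
# on the web server
mkdir -p /somePath/mount
mount -t nfs my.ip.address:/somePath /somePath/mount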