save files in MATLAB with user ownership

I am using the savefig() and saveas() functions to save .fig and .jpg files respectively in MATLAB (R2015a, Ubuntu 14.04, personal computer, single account). However, the files being generated are owned by root. I want the owner to be my user account.
I can use chown in a terminal to take ownership afterwards, but I want that to happen directly from MATLAB, i.e. at the time of file creation.
Also, this problem did not occur before. I just did a fresh installation of the OS and all software, and this behaviour started happening.

I agree with previous users that this is more likely an issue of what user starts MATLAB to begin with.
A quick and dirty way of solving this issue is using the system command.
system('chown user:group DIRTOSAVEDFILE');
or
system(sprintf('chown %s:%s %s',USERSTRING, GROUPSTRING, SAVEDFILEDIR));
Please reconsider using system if you plan to distribute this code, as the system command gives access to /bin/sh (maybe even with root privileges, depending on how MATLAB is started).

I have figured out what I was doing wrong.
I was running MATLAB with the command sudo matlab, which is why the files being saved to disk were owned by root. The reason I was running MATLAB as root was that simply running matlab in the terminal was not working for me; MATLAB gave a Java exception: "Error starting desktop". To resolve that error, I had to take ownership of MATLAB's preferences directory, ~/.matlab/R2015a. I ran sudo chown -R username:username ~/.matlab/R2015a/ to do that. Now I can run MATLAB without sudo, and the files it generates are owned by my user. I used the following link to solve my ownership problem:
http://in.mathworks.com/matlabcentral/answers/50971-matlab-r2012b-java-exception-error-starting-desktop
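In short, the fix boiled down to the following commands (a minimal recap, assuming your user name and group name are the same, as on a default Ubuntu install):
# take ownership of MATLAB's preferences directory, then launch MATLAB without sudo
sudo chown -R "$USER":"$USER" ~/.matlab/R2015a/
matlab &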
Thanks for the comments and answers. I should have done more research I guess.

File ownership and permissions in Singularity containers

When I run singularity exec foo.simg whoami I get my own username from the host, unlike in Docker where I would get root or the user specified by the container.
If I look at /etc/passwd inside this Singularity container, an entry has been added for my host user ID.
How can I make a portable Singularity container if I don't know the user ID that programs will be run as?
I have converted a Docker container to a Singularity image, but it expects to run as a particular user ID it defines, and several directories have been chown'd to that user. When I run it under Singularity, my host user does not have access to those directories.
It would be a hack but I could modify the image to chmod 777 all of those directories. Is there a better way to make this image work on Singularity as any user?
(I'm running Singularity 2.5.2.)
There is actually a better approach than just chmod 777: create a "vanilla" folder with your application data/conf in the image, and then copy it over to a target directory within the container at runtime.
Since the copy will be carried out by the user actually running the container, you will not have any permission issues when working within the target directory.
You can have a look at what I did here to create a portable remote desktop service, for example: https://github.com/sarusso/Containers/blob/c30bd32/MinimalMetaDesktop/files/entrypoint.sh
This approach is compatible with both Docker and Singularity, but whether it is a viable solution depends on your use case. Most notably, it requires you to run the Singularity container with --writable-tmpfs.
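For illustration, a minimal sketch of such an entrypoint; the folder names and application command below are hypothetical placeholders, not taken from the linked repository:
#!/bin/bash
# vanilla copy baked into the image at build time (hypothetical path)
VANILLA_DIR=/opt/myapp-vanilla
# target inside the container filesystem, hence the need for --writable-tmpfs
TARGET_DIR=/opt/myapp-run
# the copy runs as whoever invokes the container, so the files end up owned by them
mkdir -p "$TARGET_DIR"
cp -r "$VANILLA_DIR"/. "$TARGET_DIR"/
# start the application against the user-owned copy (hypothetical command)
exec myapp --config "$TARGET_DIR/conf"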
As a general comment, keep in mind that even though Singularity is very powerful, it behaves more like an environment than a container engine. You can make it behave more container-like using some specific options (in particular --writable-tmpfs --containall --cleanenv --pid), but it will still have limitations (variable usernames and user IDs will not go away).
First, upgrade to v3 of Singularity if at all possible (and/or bug your cluster admins to do it). v2 is no longer supported, and several versions below 2.6.1 have security issues.
Singularity mounts the host system's /etc/passwd into the container so that it can be run by any arbitrary user. Unfortunately, this also effectively clobbers any users that may have been created by a Dockerfile. The solution is, as you thought, to chmod any files and directories to be readable by all. chmod -R o+rX /path/to/base/dir in a %post step is simplest.
Since the final image is read-only, allowing write permission doesn't do anything, and it's useful to get into the mindset of only writing to files/directories that have been mounted into the container.
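For reference, a minimal sketch of a definition file with such a %post step; the base image is just a placeholder, and /path/to/base/dir stands for your application tree:
Bootstrap: docker
From: ubuntu:18.04
%post
    # make every file world-readable and every directory (or already-executable file)
    # traversable, so any runtime user can read the application tree
    chmod -R o+rX /path/to/base/dir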

Understanding the error message: spdlog::spdlog_ex

I am aware this question is very specific. Nonetheless, maybe someone can help:
I was trying to compile an open-source project today (anyone who's interested: that's the one). The error message described below occurs after running oai_hss -j $PREFIX/hss_rel14.json --onlyloadkey, having followed the step-by-step installation guide up to this point.
After typing the aforementioned command in my terminal, the following error is thrown:
terminate called after throwing an instance of 'spdlog::spdlog_ex'
what(): Failed opening file logs/hss.log for writing: No such file or directory
Aborted (core dumped)
Alright, this sounds pretty severe (core dumped). I searched Google for the meaning of that error message and came across this other GitHub project. Apparently the spdlog library is trying to enable logging from wherever I run my program, and it throws an spdlog_ex error whenever the file it is trying to add to the registry (in this case logs/hss.log) already exists within that registry. So, I guess, the solution to my problem would be to find this registry and delete logs/hss.log. Does this make sense?
Question: Where the heck do I find this registry?
Maybe some background knowledge would be useful: I am trying to compile the open-source code within a VM that is running Ubuntu 18.04.3 LTS bionic with a 4.15.0-66-generic kernel.
I have already searched the /tmp directory for a logs folder. There is none. Where else could it be?
Open this file:
sudo nano /usr/local/etc/oai/hss_rel14.json
You will see some config where you can find logs/hss.log.
You actually have to change these 4 values to:
logname: "/var/log/hss.log"
statlogname: "/var/log/hss_stat.log"
auditlogname: "/var/log/hss_audit.log"
ossfile: "~/openair-cn/etc/oss.json"
Then use sudo touch to create these files:
sudo touch /var/log/hss.log
sudo touch /var/log/hss_stat.log
sudo touch /var/log/hss_audit.log
For logname, statlogname, and auditlogname you can use whatever files you want, but I like to put them together in the /var/log folder.
For ossfile, the oss.json is actually already there.
Hope this helps.
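With the config updated and the log files created, the command from the installation guide can simply be retried (assuming $PREFIX is set as in the guide):
oai_hss -j $PREFIX/hss_rel14.json --onlyloadkey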

postgresql initdb - directory not empty

I am installing Postgres 8.4 on an Ubuntu Lucid server (no, an upgrade is not possible yet; at the moment we are using the "lucid" LTS version on that server, although we are going to start testing the system on Precise quite soon).
I have set up a separate partition for the /var/lib/postgresql/8.4/main directory with an ext4 file system. (Those of you who are really into Postgres installs know what is happening now...) Since ext4 puts a lost+found directory in the root of every file system, Postgres will not use that directory as its data directory, since it is not initially empty:
initdb: directory "/var/lib/postgresql/8.4/main" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/var/lib/postgresql/8.4/main" or run initdb
with an argument other than "/var/lib/postgresql/8.4/main".
The easiest way to proceed would be to remove lost+found and recreate it after initdb has done its job. Could that cause any problems? Does lost+found have any special attributes or anything that makes it impossible to recreate, and is it needed at any time other than when fsck finds something it needs to put there?
Another way would be to unmount the .../main/ file system, init the database, temporarily mount the .../main/ file system somewhere else, move things over there and mount it back in place. That seems to be a bit more work than the "easiest way".
Or is there some way to make initdb ignore that the directory is not empty? (I couldn't see any command line switches for that.)
Could a lost+found directory within the Postgres main directory cause any problems?
At the moment I am running the system on a virtual machine for testing, so it really doesn't matter if I mess up things, but before making this an official way of installing a mission-critical system, it would be nice to have some thoughts on this.
lost+found has preallocated blocks that make it easier for fsck to move data into it when the partition is short on free blocks. To recreate it, it is better to use the mklost+found command rather than mkdir.
If you don't recreate it, fsck will do it anyway when it's needed.
But if it comes to the point where fsck finds corruption within PGDATA, I'd think about going for a backup rather than counting on lost+found to retrieve anything.
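A minimal sketch of the "easiest way", assuming the Debian/Ubuntu layout where the 8.4 binaries live under /usr/lib/postgresql/8.4/bin (adjust paths and ownership to your setup):
# remove the empty lost+found so initdb accepts the mount point
sudo rmdir /var/lib/postgresql/8.4/main/lost+found
# make sure the postgres user owns the directory, then initialise the cluster
sudo chown postgres:postgres /var/lib/postgresql/8.4/main
sudo -u postgres /usr/lib/postgresql/8.4/bin/initdb -D /var/lib/postgresql/8.4/main
# recreate lost+found with preallocated blocks (mklost+found works on the current directory)
cd /var/lib/postgresql/8.4/main && sudo mklost+found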

Moving MongoDB's data folder?

I have 2 computers in different places (so it's impossible to use the same Wi-Fi network).
One contains about 50 GB of data (MongoDB files) that I want to move to the second one, which has much more computation power for analysis. But how can I make MongoDB on the second machine recognize that folder?
When you start the mongod process you give it the argument --dbpath /directory, which is how it knows where the data folder is.
All you need to do is:
Stop the mongod process on the old computer and wait until it exits.
Copy the entire /data/db directory to the new computer.
Start the mongod process on the new computer, giving it the --dbpath /newdirectory argument.
The mongod on the new machine will use the folder you point it at with --dbpath. There is no need to "recognize" anything, as there is nothing machine-specific in that folder; it's just data.
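A minimal sketch of those steps on Linux, assuming the default /data/db path and a systemd-managed mongod; the hostname and transfer method are placeholders:
# on the old computer: stop mongod and wait for it to exit
sudo systemctl stop mongod
# copy the data directory to the new computer (rsync/scp/external drive all work)
rsync -a /data/db/ newhost:/data/db/
# on the new computer: start mongod against the copied directory
mongod --dbpath /data/db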
I did this myself recently, and I wanted to provide some extra considerations to be aware of, in case readers (like me) run into issues.
The following information is specific to *nix systems, but it may be applicable with very heavy modification to Windows.
If the source data is in a mongo server that you can still run (preferred)
Look into and make use of mongodump and mongorestore. That is probably safer, and it's the official way to migrate your database.
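For example, a hedged sketch with mongodump/mongorestore (the dump directory and connection details are placeholders; both tools default to localhost:27017):
# on the old machine: dump all databases to a directory
mongodump --out /backup/mongo-dump
# ...transfer /backup/mongo-dump to the new machine (scp, external drive, etc.)...
# on the new machine: restore the dump into the running mongod
mongorestore /backup/mongo-dump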
If you never made a dump and can't anymore
Yes, the data directory can be directly copied; however, you also need to make sure that the mongodb user has complete access to the directory after you copy it.
My steps are as follows. On the machine you want to transfer an old database to:
Edit /etc/mongod.conf and change the dbPath field to the desired location.
Use the following script as a reference, or tailor it and run it on your system, at your own risk.
I do not guarantee this works on every system --> please verify it manually.
I also cannot guarantee it works perfectly in every case.
WARNING: will delete everything in the target data directory you specify.
I can say, however, that it worked on my system, and that it passes shellcheck.
The important part is simply copying over the old database directory, and giving mongodb access to it through chown.
#!/bin/bash
TARGET_DATA_DIRECTORY=/path/to/target/data/directory # modify this
SOURCE_DATA_DIRECTORY=/path/to/old/data/directory # modify this too
echo shutting down mongod...
sudo systemctl stop mongod
if test "$TARGET_DATA_DIRECTORY"; then
echo removing existing data directory...
sudo rm -rf "$TARGET_DATA_DIRECTORY"
fi
echo copying backed up data directory...
sudo cp -r "$SOURCE_DATA_DIRECTORY" "$TARGET_DATA_DIRECTORY"
sudo chown -R mongodb "$TARGET_DATA_DIRECTORY"
echo starting mongod back up...
sudo systemctl start mongod
sudo systemctl status mongod # for verification
Quite easy for Windows: just move the data folder to the target location, then run in cmd:
"C:\your\mongodb\bin-path\mongod.exe" --dbpath="c:\what\ever\path\data\db"
On Windows, if you just need to configure a new path for the data, all you need to do is create a new folder, for example D:\dev\mongoDb-data, open C:\Program Files\MongoDB\Server\6.0\bin\mongod.cfg and change the dbPath value there.
Then restart your PC and check the folder: it should contain new files/folders with data.
Maybe what you didn't do was export or dump the database.
Databases aren't portable and therefore must be exported or turned into a dump file.
Here is another question where the answer is explained further.

Hard to think of a reason why MongoDB doesn't create /data/db for us automatically?

I installed MongoDB both on Win 7 and on Mac OS X, and in both places I got mongod (the server) and mongo (the client).
But in both places, running mongod fails if I double-click the file, and the error message disappears too quickly for me to see anything (it was better on the Mac because Terminal didn't exit automatically and so showed the error message).
It turned out to be due to /data/db not existing, and the Quick Start guide says: by default MongoDB will store data in /data/db, but it won't automatically create that directory.
My big question is: MongoDB seems to want a lot of people using it (as do many other products), so why would it not automatically create the folder for you? If it didn't exist, creating it couldn't do much harm, especially since you could state so in the user agreement. The question is why. I can think of one strange reason, but it may be too strange to list here...
One good reason would be that you do not want it in /data/db. In that case, you want it to fail with an error when you forget to specify the correct directory on the command line. The same goes for misspelled directory names. If MongoDB just created a new directory and started to serve from there, that would not be very helpful. It would be quite confusing, because databases and collections are auto-created, so there would not even be errors when you try to access them.
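For completeness, a hedged example of the manual fix implied here for macOS/Linux: create the default directory yourself (or any directory you prefer) and point mongod at it explicitly. Paths follow the Quick Start default; adjust as needed.
# create the default data directory and make it writable by your user
sudo mkdir -p /data/db
sudo chown "$USER" /data/db
# start mongod, naming the directory explicitly so typos fail loudly
mongod --dbpath /data/db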