I am asking what may be a simple question before diving into the source code to see if the answer is out there: is the /var/log/journal location for systemd journal files hard-coded into the binaries? By that, I do not mean "is it the default?" I mean that I have attempted to override the default in every location I could find which might control the setting, and systemd-journald merrily ignores those settings and goes back to the /var/log/journal location, or stops logging altogether. These locations include:
/etc/systemd/journald.conf
/usr/lib/tmpfiles.d/systemd.conf
/usr/lib/tmpfiles.d/var.conf
/lib/systemd/system/systemd-journal-flush.service
Am I missing a configuration setting somewhere? The distro is Ubuntu 16.04. System design constraints prompt the question, so please, no "Why in the world would you ever..." type answers. Thanks.
Yes, it's hard-coded. /usr/lib/tmpfiles.d/systemd.conf sets up the directory (via the systemd-tmpfiles service), but journald doesn't check there to see what the directory should be. Also, you shouldn't edit files in /usr/lib/ anyway – all systemd services support an override mechanism that doesn't require editing files which belong to the package manager (e.g. /etc/tmpfiles.d/systemd.conf can be used to completely override /usr/lib/tmpfiles.d/systemd.conf).
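For instance, a minimal sketch of that override mechanism (this only changes how the directory is created; per the above, it will not move where journald actually writes):
cp /usr/lib/tmpfiles.d/systemd.conf /etc/tmpfiles.d/systemd.conf
# edit the copy under /etc/tmpfiles.d/ – a file there shadows its /usr/lib/ namesake wholesale
systemd-tmpfiles --create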
I am a web developer, and for some upcoming projects I would like to use a file-based CMS. This means that many of the files I create at the start must be editable by the PHP user later, but also remain editable for my user (and also the other way around). My PC runs Debian 9, which I love but am not super knowledgeable about, and I have also just set up a local network server with Debian 9 for backups and possibly file sharing. (I'm using Webmin to configure this, which reflects my level of command line skills).
On my online shared hosting server, the PHP user and the FTP user seem to be the same, and 644/755 permissions work fine; this is also what the CMS I'm using recommends. I would like to mimic this on my computer so I don't have to fiddle with permissions all the time. But how do I do this? Currently, my regular user (anna) does not have access to www-data's files and vice versa. Putting them in the same group still means changing file permissions. Making anna the PHP user is a Bad Idea (as far as I understand it) because anna has sudo permissions.
So far I have researched three possible solutions that I don't really know very much about, and I would like to know which is the best route to take.
Develop locally on my computer and use apache-mpm-itk or suPHP to let PHP edit the files (I got that idea from this question on ServerFault).
Develop locally on my computer and rsync the files to my server with grunt-rsync, and somehow get rsync to set the ownership to www-data (another ServerFault thread helping here; see the sketch after this list).
Mount the project's server directory, which is owned by www-data, on my computer with SSHFS and then either edit the files on the server directly or copy them over from my local directory with grunt-copy.
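For option 2, what I have in mind is roughly this (hypothetical paths; --chown needs rsync 3.1 or newer and enough privileges on the receiving end to change ownership):
rsync -av --chown=www-data:www-data ./myproject/ server:/var/www/html/myproject/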
What do you think: from a security and ease of use perspective, which is the best way? Or do you know an even better one?
Thank you for taking the time to read and think about this!
Anna~
I figured it out! I finally ended up reading about running PHP as CGI instead of as an Apache module, and learned that this would solve my permissions problem. Plus, as far as I understand it, there are no extra security precautions to take when I'm the only one working with it on my local computer.
In case someone comes across this who might find it helpful, here's what I did (basically following these instructions):
I installed php7.0-fpm
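On Debian 9 that was simply (stock repositories assumed):
sudo apt install php7.0-fpm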
Edited /etc/apache2/sites-enabled/000-default.conf and put the following just before </VirtualHost>:
DirectoryIndex index.php
<LocationMatch "^(.*\.php)$">
ProxyPass fcgi://127.0.0.1:9000/var/www/html
</LocationMatch>
I activated the Apache module proxy_fcgi (via Webmin, which apparently does an automatic Apache restart)
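I did this through the UI, but as I understand it, the command-line equivalent on Debian would be:
sudo a2enmod proxy_fcgi
sudo systemctl restart apache2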
In /etc/php/7.0/fpm/pool.d/www.conf I commented out a listen line and put another below like this:
; listen = /run/php/php7.0-fpm.sock
listen = 127.0.0.1:9000
I then restarted PHP-FPM with this command: /etc/init.d/php7.0-fpm restart (a little different from the instructions, I'm on Debian 9). After that, phpinfo() gave me the Server API "FPM/FastCGI".
And finally, I changed the user and group from www-data to anna in three places: twice in /etc/php/7.0/fpm/pool.d/www.conf and once more in /usr/lib/tmpfiles.d/php7.0-fpm.conf (this last bit may be Ubuntu/Debian specific; my thanks go to Keith for a comment on StackExchange).
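For reference, the changed lines looked something like this (I'm quoting the tmpfiles.d line from memory, so double-check the mode and path against your distro's original):
user = anna
group = anna
d /run/php 0755 anna anna -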
And that was it! :-)
I have a cluster of machines hosting Hadoop (MapR) and have installed StreamSets on one of the nodes (say node002) following the RPM documentation. However, I am accessing the web UI for the data collector from another node, node001.
My question is: when I specify file paths (e.g. an origin directory), which file system is the web UI going to be referring to? E.g. if I put an origin directory as /home/myuser/mydata, will the pipeline created in the web UI be looking for that directory on node001 or node002? I am new to using StreamSets, so a more detailed answer would be appreciated. Thanks.
Ultimately I am asking this because I am currently getting "FileNotFound" and "permission denied" errors while trying to follow the documentation's tutorial, and am trying to debug the situation.
From the StreamSets community forums: it will be the path to the local file on the machine running that particular SDC instance.
The FileNotFound and permission errors have to do with the fact that the default user for the sdc service is a user called sdc. I am still working out how best to fix this, but a workable prototype can be produced by setting the read and write access for the directories in question to allow public access (the permissions story needs more work, but this answers the posted question).
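Concretely, something like this got the tutorial moving for me (path taken from the example above; this is broader access than you would want outside a sandbox):
sudo chmod -R a+rw /home/myuser/mydata
Or, more narrowly, hand the directory over to the service user:
sudo chown -R sdc:sdc /home/myuser/mydata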
I am learning to write character device drivers from the Kernel Module Programming Guide, and used mknod to create a node in /dev to talk to my driver.
However, I cannot find any obvious way to remove it, after checking the manpage and observing that rmnod is a non-existent command.
What is the correct way to reverse the effect of mknod, and safely remove the node created in /dev?
The correct command is just rm :)
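For example (the device numbers here are made up; use whatever your driver registered):
sudo mknod /dev/mydev c 240 0
sudo rm /dev/mydev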
A device node created by mknod is just a file that contains a device major and minor number. When you access that file the first time, Linux looks for a driver that advertises that major/minor and loads it. Your driver then handles all I/O with that file.
When you delete a device node, the usual Un*x file behavior applies: Linux will wait until there are no more references to the file, and then it will be deleted from disk.
Your driver doesn't really notice any of this. Linux does not automatically unload modules; your driver will simply no longer receive requests to do anything. But it will be ready in case anybody recreates the device node.
You are probably looking for a function rather than a command. unlink() is the answer: it will remove the file (or special file) if no process has it open. If any process does have the file open, the file will remain until the last file descriptor referring to it is closed. Read more here: http://man7.org/linux/man-pages/man2/unlink.2.html
I am working on a simple Perl app that copies another Perl app and builds all the required Apache config files.
The thing I can't seem to figure out is how to reload the Apache config on the fly. I know I could do a system call and reload Apache there, but that would mean I would have to give this app root access, and that is a little scary.
Is there a way to ask apache to reload its config files from within the CGI container?
-------------------------Additional info------------------------------
I have done some more research, and the problem is that Apache must be run with elevated privileges to bind to port 80. So one solution would be to set Apache to run on another port and forward port 80 to it via iptables. (This may be a last resort, but it's a very messy solution.)
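For the record, that redirect would look something like this (8080 is an arbitrary pick for the unprivileged port):
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080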
Here is what gets me: Apache should be able to maintain its current port bindings and recheck its config files; all I am doing is adding another script alias.
Is there any way to add a new script alias without a reload?
You also have these options to reload the config:
/etc/init.d/httpd reload
or
apachectl -k graceful
But unfortunately, those also need root. A graceful restart differs from a normal restart in that currently open connections are not aborted. A side effect is that old log files will not be closed immediately. This means that if it is used in a log rotation script, a substantial delay may be necessary to ensure that the old log files are closed before processing them.
Also, if you are running Apache under daemontools, you can reload it with:
svc -h /service/apache
Sorry to ask a question and then not give someone else the opportunity to answer, but I figured out a solution and I hope it may help someone else.
What I had to do was leave the config alone; it is not possible to reload it in the manner I required without root privileges or some fancy port forwarding (which would make this application less portable than I would like).
So the only thing that Apache appears to load dynamically is the file system.
What I have done is use mod_rewrite to redirect the script requests, and simply put the scripts in /var/www/appname/copyname/cgi-bin/
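A rough sketch of the rewrite, in case it helps anyone (the pattern and paths are illustrative, not my exact rules):
RewriteEngine On
RewriteRule ^/appname/([^/]+)/cgi-bin/(.*)$ /var/www/appname/$1/cgi-bin/$2 [L]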
I have a web app I'm writing in mod_perl 2. (It's a custom handler module, not registry or perlrun scripts.) There are several configuration options I'd like to have set at server initialization, preferably from a configuration file. The problem I'm having is that I haven't found a good place to pass a filename for my app's config file.
I first tried loading "./app.conf" but the current directory isn't the location of the modules, so it's unpredictable and error-prone. Or, I have to assume some path -- relative or absolute. This is inflexible and could be problematic if the host OS distribution is changed. I don't want to hard-code a path (though, something in /etc may be acceptable if there's just no better way).
I also tried PerlSetVar, but the value isn't available until request time. While this is workable, it means I'm potentially reading a config file from disk at least once per child (thread) init. I would rather load at server init and have an immutable static hash that is part of the spawned environment when a child is created.
I considered using a config.pl, but this means I either have a config.pl with one option to configure where to find the app.conf file, or I move the options themselves into config.pl and require end-users to respect Perl syntax when setting options. Future users will be internal admins, so that's not unreasonable, but it's more complicated than I'd like.
So what am I missing? Any good alternatives?
Usually a top priority is to avoid having configuration files amongst your executables; otherwise a server misconfiguration could accidentally expose your private configuration info to the world. I put everything the app needs under /srv/app0, with a cfg subdirectory that is a sibling of the dirs containing executables.
If you're pre-loading modules via PerlPostConfigRequire startup.pl (to pull in mod/startup.pl), then that's the best place to put the configuration file location (../cfg/app.cnf), and you have complete flexibility in how to store the configuration in memory. An alternative is to PerlModule your modules and load the configuration (with a relative path as above) in a BEGIN block within one of them.
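On the Apache side that's a one-liner in httpd.conf (using the /srv/app0 layout assumed above):
PerlPostConfigRequire /srv/app0/mod/startup.pl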
Usually processing a configuration file doesn't take appreciable time, so a popular option is to lazy-load: if the code detects that the configuration is missing, it loads it before continuing. That's no use if the code needs to know the configuration earlier than that, but it avoids lots of problems, especially when migrating code to a non-mod_perl environment.