Hi all, I'm new to this site and to Linux. I've just installed Fluentd on Linux Mint. I want to use it to tail .evl logs at remote sites (by IP address) on our network and send an email when a certain phrase appears. I'm reading up on how to set up the tail input plugin. However, in the source section of fluent.conf, how do I specify the path for a remote file?
As of Aug 2015, Fluentd doesn't support tailing remote files (see the in_tail plugin). You probably need to copy the files from the remote machines to the local one via scp or rsync, or you can post the data remotely via HTTP (see the in_http plugin).
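For illustration, a rough fluent.conf sketch of both options (the paths, tag, and port below are made up; depending on your Fluentd version the directive may be type rather than @type). The first source tails a file after it has been synced locally; the second accepts data pushed from the remote hosts over HTTP:

<source>
  @type tail
  path /var/log/synced/site1.evl
  pos_file /var/log/fluentd/site1.evl.pos
  tag remote.site1
  format none
</source>

<source>
  @type http
  port 8888
</source>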
Related
I have an app (Java, Spring Boot) that runs in a container in OpenShift. The application needs to go to a third-party server to read the logs of another application. How can this be done? Can I mount the directory where the logs are stored into the container? Or do I need to use some protocol to remotely access the file and read it?
The remote server is a normal Linux server. It runs an old application packaged as a jar, which writes logs to a local folder. An application that runs in a pod (on Linux) needs to read this file and parse it.
There are multiple ways to do this.
If continuous access is needed:
A watcher with polling events (the WatchService API) - see the sketch below
A stream buffer
A file Observable with RxJava
For this approach, creating an NFS share that exposes the remote logs and mounting it as a persistent volume is the better option.
Otherwise, if the access is based on polling the logs at, for example, a certain time of day, then a solution consists of using an FTP client such as Apache Commons Net's FTPClient, or an SSH client with an SFTP implementation such as JSch, which is a pure Java library.
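As a rough sketch of the continuous option (the mount point and output are made up; it assumes the remote log directory has been exposed to the pod, e.g. as an NFS-backed persistent volume):

import java.nio.file.{FileSystems, Paths, StandardWatchEventKinds}

object LogWatcher extends App {
  // Hypothetical mount point where the remote application's logs are exposed inside the pod
  val logDir = Paths.get("/mnt/remote-logs")

  val watcher = FileSystems.getDefault.newWatchService()
  logDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE, StandardWatchEventKinds.ENTRY_MODIFY)

  while (true) {
    val key = watcher.take() // blocks until the directory changes
    key.pollEvents().forEach { event =>
      // React to the change, e.g. re-read and parse the modified log file
      println(s"${event.kind()} -> ${logDir.resolve(event.context().toString)}")
    }
    key.reset() // the key must be reset or no further events are delivered
  }
}

Note that on network filesystems the native WatchService events are often not delivered, so a time-based polling fallback may still be needed on top of this.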
One of the requirements is to keep remote Windows Server intact.
No third party software allowed (no WinSCP, etc).
So we configured the Windows Server with WinRM and allowed remote access: AllowUnencrypted=true, Auth Basic=true, etc.
Then we created a job and successfully executed a command on the Windows server, such as "ipconfig".
When it comes to executing an inline script or copying a file, Rundeck tries to copy the script/file to the remote Windows server.
By default:
plugin.script-copy.default.command=get-services
where "get-services" seems to be free-form text rather than executable.
If we want to use SCP or SSH instead, here we have problem -> Windows Server doesn't have WinSCP or SSH or Python installed by default.
Is there any way to copy/deliver script to target/remote Windows Server 2008 using embedded capabilities only (no third-party software allowed) ?
Versions:
Rundeck 2.6.2 running on Linux
Windows Server 2008 R2 Enterprise, Service Pack 1
Thank you.
You can use the WinRM plugin (AKA "Overthere WinRM"): configure it, and use the copy file step in your job workflow (keep in mind that you need at least version 1.3.4 of the WinRM plugin, which supports copying files).
You need to download the plugin and put it in Rundeck's libext directory.
Add the Windows resources.xml entry (for "Overthere" WinRM plugin):
<node name="windows" description="Windows node" tags="" hostname="192.168.1.81" osArch="x86" osFamily="windows" osName="Windows 2008R2" osVersion="2008" username="user" winrm-protocol="http" winrm-auth-type="basic" winrm-cmd="CMD" winrm-password-storage-path="keys/winpasswd"/>
Set WinRM as your default node executor / default node file copier, and use the copy file step in your workflow.
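For reference, the defaults can also be set in the project's project.properties. The property keys below are standard Rundeck settings, but the provider names are assumptions - verify them against the README of the plugin version you install:

# Assumed provider names for the Overthere WinRM plugin - check the plugin docs
service.NodeExecutor.default.provider=overthere-winrm
service.FileCopier.default.provider=overthere-winrm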
Important: the "Overthere" WinRM plugin isn't in active development (and the Rundeck 2.6 branch is out of support/maintenance). The best way to deal with this is to move to the latest Rundeck version and use the PyWinRM plugin (shipped out of the box with Rundeck, actively developed, and easier to configure than the old "Overthere" WinRM plugin), and use the copy file step in the same way.
I want to host multiple websites using a single IP address, i.e. using name-based virtual hosting. Some blogs say that we need to create separate config files for the different websites and enable all of them. But how does the Apache server know which config file to look into? For example, if I have three config files named website1.conf, website2.conf, and default.conf, and I type website2 into Chrome, how does the server know which config file to look into?
The server is compiled to look for a single configuration file, which can be overridden by the -f command line flag. The configuration file can explicitly Include other configuration files or entire directories of configuration files.
At startup, the server parses the configuration. If it leads to other files, so be it. If those files have <virtualhost> directives, then the server will look at the directives within them to figure out what you've told it about routing requests.
apachectl -S can summarize what the server knows about virtual hosts.
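To make the routing concrete, a minimal sketch (hostnames and paths are made up): the server never matches on config file names; it matches the request's Host header against the ServerName (or ServerAlias) inside each <VirtualHost> block, regardless of which included .conf file the block happens to live in.

# website2.conf (any included file would work)
<VirtualHost *:80>
    ServerName website2.example.com
    DocumentRoot /var/www/website2
</VirtualHost>

A request whose Host header is website2.example.com is served by this block; a request matching none of the defined names falls through to the first virtual host loaded for that address and port.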
I have a Scala application with Akka Streams. The flow of my application is like this:
1. Check if the file exists on the FTP server - I'm doing this with org.apache.commons.net.ftp.FTPClient
2. If it exists, stream it via the Alpakka library (and apply some stream transformations)
My application works locally and it can connect to the server.
The problem appears when it is deployed to DC/OS/Mesos. I get this error:
java.io.IOException: /path/file.txt: No such file or directory
I can say for sure that the file still exists there. Also, when I try to connect locally from a Docker container via ftp, I get something like this:
ftp> open some.ftp.address.com
Connected to some.ftp.address.com.
220 Microsoft FTP Service
Name (some.ftp.address.com:root): USER
331 Password required
Password:
230 User logged in.
Remote system type is Windows_NT.
ftp> dir
501 Server cannot accept argument.
ftp: bind: Address already in use
ftp>
Not sure if it's still helpful, but I also got my FTP client transferring data from inside a Docker container after changing the data connection to passive mode. Active mode requires the client to have open ports that the server connects to when returning file-listing results and during data transfers. However, the client's ports are not reachable from outside the Docker container, since the requests are not routed through (as in a NATed network).
Found this post explaining active/passive FTP connections:
https://labs.daemon.com.au/t/active-vs-passive-ftp/182
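For what it's worth, the same switch can be made on the plain Apache Commons Net client (which the question already uses for the existence check); a minimal sketch with a made-up host and placeholder credentials:

import org.apache.commons.net.ftp.FTPClient

val client = new FTPClient()
client.connect("some.ftp.address.com") // made-up host
client.login("USER", "secret")         // placeholder credentials
client.enterLocalPassiveMode()         // passive mode: the client opens the data connection, no inbound ports needed
val files = client.listFiles("/path")  // the directory listing that fails in active mode from inside Docker/NAT
files.foreach(f => println(f.getName))
client.logout()
client.disconnect()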
So my problem was really weird, but I've managed to fix it this way.
Quick answer: I was using the Alpakka FTP lib like this:
Ftp
  .fromPath(url, user, pass, Paths.get(s"/path/$fileName"))
But this way it works:
val ftpSettings = FtpSettings(
  host = InetAddress.getByName(url),
  port = 21,
  credentials = NonAnonFtpCredentials(user, pass),
  binary = true,
  passiveMode = true // this is the setting that made the difference
)

Ftp
  .fromPath(Paths.get(s"/path/$fileName"), ftpSettings)
Longer answer: I started investigating the Alpakka lib and discovered that it uses the very same library that works for me when checking whether the file exists!
https://github.com/akka/alpakka/blob/master/ftp/src/main/scala/akka/stream/alpakka/ftp/impl/FtpOperations.scala
So I started digging, and it seems that setting passive mode to true was most likely the solution. But it's weird, because I've read that the Windows FTP server does not support passive mode...
I hope someone can clarify my doubts one day, but at the moment I'm happy because it works :)
I am using the JSch API to connect to a remote server through SFTP. I need to copy a folder that exists on the remote server to another location on the same server. Is there any method in JSch to do this kind of thing? Or please be kind enough to give me advice on the above use case. (I am working in Scala.)
We cannot use the "sftp" channel for this task; we have to use the "exec" channel instead. Using the "exec" channel we can execute Linux commands as follows:
val command = "mkdir testDir"
val channelExec: ChannelExec = session.openChannel("exec").asInstanceOf[ChannelExec]
channelExec.setCommand(command)
channelExec.connect()
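For the actual use case in the question (duplicating a folder on the same server), the same "exec" channel can run cp -r. A rough sketch, assuming an already connected session and with hypothetical paths:

// Hypothetical source and destination paths on the remote server
val copyCommand = "cp -r /data/source-folder /data/source-folder-copy"
val channel: ChannelExec = session.openChannel("exec").asInstanceOf[ChannelExec]
channel.setCommand(copyCommand)
channel.connect()
while (!channel.isClosed) Thread.sleep(100) // wait for the remote command to finish
val exitStatus = channel.getExitStatus      // 0 means the copy succeeded
channel.disconnect()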
Go through the following links to get more details
http://www.programcreek.com/java-api-examples/index.php?api=com.jcraft.jsch.ChannelExec
http://www.journaldev.com/246/java-program-to-run-shell-commands-on-ssh-enabled-system
http://www.jcraft.com/jsch/examples/Exec.java.html
Thank you to all participants.
Support for copying files remotely is rare in SFTP servers. There's a copy-file extension to SFTP, but few servers/clients support it.
See draft-ietf-secsh-filexfer-extensions-00.
The most widespread SFTP server, OpenSSH, supports it only since the relatively recent version 9.0. And JSch does not support it at all.
Alternatives:
Download the folder and reupload it to a new location (pure SFTP solution)
Use the cp command in an "exec" channel (not SFTP anymore, requires shell access), as in the sketch shown in the previous answer