I joined a small CTF, and one of the challenges is: given a web server with a file on it, find that file. The only things you are given are the URL and the filename (flag.txt). I tried brute-forcing common directories, Google dorks, a reverse shell... Nothing worked, so my last hope is to find the file by its name.
So my question is: is it possible to find a file on a web server just from its file name?
You can't find it directly; your best option is to run a path brute-forcing tool (GoBuster, for example).
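For example, something along these lines (the URL and wordlist path are placeholders; use whatever wordlist you have). The -x flag appends the .txt extension to every word, which helps when you already know the file name you are hunting for:
gobuster dir -u https://target.example/ -w /usr/share/wordlists/dirb/common.txt -x txt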
If it doesn't turn up anything, maybe you are missing something in the challenge description?
Also, maybe you misread the brute-force tool's output and you simply don't have access to the file?
Good luck!
I've read the TRAMP manual and dozens of forums across the web but I couldn't find an answer to this question. I am trying to set up a link in org-mode that transfers a file from a remote server to my local machine (or vice-versa).
According to the manual I have to write something like
/scp:user@host:filepathonremotemachine
and that's it. No specification of where the file should be moved to, which is weird.
I've tried to do it this way and it simply opened the file (as if I were using ssh); I tried other combinations too, without any luck.
There is a specific reason why I am trying to do this with TRAMP and not a shell:command link. Any help is very welcome.
UPDATE
Apparently TRAMP is less useful than it promises. That leaves me with the shell:command link option. The problem then becomes avoiding the OpenSSH window that pops up. The closest solution I found was here, and it comes down to setting up an ssh-agent. I am not very familiar with this procedure and I would prefer to use the authinfo.gpg authentication method. Do I have that option? Thanks.
TRAMP itself just offers alternative implementations of native Emacs functions. In this sense it is dumb, like every library, because it doesn't know what the caller wants.
I'm not an org-mode specialist, but could you please show which kind of link you have in mind? Without any remoteness, just a link that copies a file locally. Replacing local file names with remote ones will be easy then.
I assume you need something like an external link that evaluates Lisp code, like
elisp:(copy-file "/path/src" "/path/target")
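For instance (user, host, and paths are placeholders), giving the source as a TRAMP file name makes the same call copy the remote file to a local destination; the trailing t just allows overwriting an existing local copy:
elisp:(copy-file "/scp:user@host:/path/on/remote/file.txt" "~/Downloads/file.txt" t)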
The following works (for some definition of "works"):
* link to copy a file
[[shell:scp remote.host.com:/path/to/file /tmp][scp]]
But you must have arranged for passwordless login to the remote host beforehand (e.g. ssh-copy-id your public key to the remote). Given that, there is no output in the org buffer and no OpenSSH popup, just the standard question from org-mode asking whether you really want to execute the shell command, and the file is copied quietly to its destination.
Error when run
I created a program in ZeroBrane, and I compiled it using srlua.
However, since it makes some socket GET requests, it seems to be looking for files such as socket.dll in the same folder.
I am aware that there are other questions just like this.
My socket library should be somewhere in my Lua folder; I did find core.dll under the socket directory, but that didn't work.
I'm wondering if I'm approaching this right, and looking for a way to make sure the program can find these files. (It seems more than one file is required.)
You need to have socket.lua in a folder that is available through package.path (or it has to be packaged together with your script by srlua), and you also need to have socket\core.dll available through package.cpath, as socket.lua does require("socket.core") and expects to find a DLL that implements it.
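A minimal sketch, assuming LuaSocket was installed under C:\libs\lua (a placeholder; point it at wherever the files actually live). The paths have to be extended before the first require, or the files have to be glued into the srlua executable:
-- make the standalone srlua executable able to find LuaSocket
package.path  = package.path  .. ";C:\\libs\\lua\\?.lua"   -- so require("socket") finds socket.lua
package.cpath = package.cpath .. ";C:\\libs\\lua\\?.dll"   -- so require("socket.core") finds socket\core.dll
local socket = require("socket")   -- socket.lua in turn requires "socket.core" (the DLL)
print(socket._VERSION)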
I have been trying to find a solution to download files from a URL such as: https://.com//. I learned about wget and tried quite a few options, but realized it does not download any files that don't have a direct link in the index file or elsewhere.
For example, I'd like to download everything from https://somesites.com/myfolder/myfiles/.
Let's say there is an index.html under the "myfiles" directory, many HTML files and a couple of directories that are all referenced and linked in the index, but also a couple of other HTML files such as sample123.html and sample456.html.
With pretty much all of the common and well-known options, wget successfully downloads everything except sample123.html and sample456.html.
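The invocations I tried were variations of something like this (exact flags varied between attempts):
wget -r -np -l inf -e robots=off https://somesites.com/myfolder/myfiles/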
Are there any other tools that will grab ALL files located under https://somesites.com/myfolder/myfiles/, regardless of whether they have a direct link?
I also tried lftp against the HTTP URL, but it downloaded far fewer files than wget did.
I looked through Stack Overflow for this, but the recommended commands are the ones that only download files with a direct link (via wget).
What you want to do is not possible and could be a security problem. Imagine that someone has, for example, a file with some sensitive data inside the folder and that file is not listed anywhere. You are asking for a tool that would also download that file.
So, as said, it is not possible; that's why it's always a good idea to disable directory listing in HTTP servers as a security option, to prevent exactly what you are trying to do.
Tried searching for this a number of ways and have not yet found an answer ...
Background
I am working on a legacy Perl application that has a lot of hard-coded values in it which should be configurable depending on where the app is installed. So, obviously, I am looking to externalize these values into a configuration file that may be located in one of a few "expected" locations. That is, using a traditional approach of checking for the configuration file in:
the current working directory,
the user's home directory (or a sub-folder therein), and
the system configuration directory (or a sub-folder therein)
where the first one found wins.
Where I am at
Perused the CPAN site a bit and found the Config::Any package, which looks promising. I can give it a list of files to use:
use Config::Any;
my $config = Config::Any->load_files(
    {
        files   => [qw(sample.conf /home/william/.config/sample.conf /etc/sample.conf)],
        use_ext => 0,
    });
This will check for the existence of each of these files, and, if found, load the contents into an array reference of hash references. Not bad, but I still have to hard-code the locations where I search for my sample.conf file. Here, I assume that I am working on a Linux system, and that the location for the configuration file for all users of the application is /etc/. I could always add /usr/local/etc/ as well, but regardless, this is not system agnostic.
I can locate the user home folder using File::HomeDir for searching there, and it works correctly regardless of the system on which the application is running. So is there a similar package that would provide the /etc/ folder (or its equivalent on other platforms)?
Questions
Is there a way to do this without having to know what particular OS I am on? (Perl package or code snippet)
What is the "Perl best practice" way of accomplishing this? I cannot imagine that no one else has run into this previously.
If you only plan to run your code on Unix-based hosts, then, given the conventional directory layout and the Filesystem Hierarchy Standard, you can rely on a fairly large set of well-known places.
Anyway, nothing prevents you from dynamically building the file search specification to account for platform oddities and the platform-specific ways of obtaining these locations (e.g. File::HomeDir::Win32 vs. File::HomeDir).
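A rough sketch of what I mean, building the candidate list at run time with File::HomeDir and File::Spec (the file name, the .config sub-folder, and the Unix fallback directories are taken from your question; the Windows APPDATA branch and the 'MyApp' folder name are assumptions):
use strict;
use warnings;
use Config::Any;
use File::HomeDir;
use File::Spec;

my $name = 'sample.conf';

# 1. current working directory
my @candidates = ( File::Spec->catfile( File::Spec->curdir, $name ) );

# 2. the user's home directory (or a sub-folder of it)
push @candidates, File::Spec->catfile( File::HomeDir->my_home, '.config', $name );

# 3. the system-wide location, which differs per platform
if ( $^O eq 'MSWin32' ) {
    # 'MyApp' is a placeholder application folder name
    push @candidates, File::Spec->catfile( $ENV{APPDATA}, 'MyApp', $name ) if $ENV{APPDATA};
}
else {
    push @candidates, "/etc/$name", "/usr/local/etc/$name";
}

my $loaded = Config::Any->load_files( { files => \@candidates, use_ext => 0 } );
my ($winner) = @$loaded;    # first existing file in list order wins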
Whenever I alter (or even just re-save without altering) a Perl file, it completely takes down our backend. I have no idea what the problem could be. Permissions are correct. The encoding is correct (UTF-8). The transfer mode was ASCII.
I don't deal with Perl much, but I have no idea what the problem could be, and neither does the network admin hosting our website.
Text editors I tried: Dreamweaver, TextMate, Vim
Operating systems I tried: Mac OS X, Linux (Ubuntu)
FTP clients I tried: Transmit (Mac), Filezilla (Linux (Ubuntu))
It's not that it's bad code; I even tried simply opening and re-saving a file, and the backend still goes down.
The network admin told me that he ran the files through a dos2unix converter and everything worked immediately. I of course tried this myself and it did not work; moreover, it wouldn't make much sense, since I edited the files in some of the most respected editors and I don't think they would make such drastic changes to the file without any user input. (When I say respected editors, Dreamweaver is not included in that sentiment.)
I personally think it is some sort of server-side issue, because I have crossed my t's and dotted my i's with regard to every possible client-side issue and have tried everything. Any opinions as to the root of this problem, and any possible solutions? Thanks in advance.
Try setting binary mode in your FTP client. That will allow you to experiment with different line endings (dos2unix) on the client side, without worrying about them being translated during transfer.
I've had this problem in the past and line-feeds were indeed the culprit.
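If you want to try the conversion on the client side without dos2unix, a Perl one-liner along these lines (run it on a copy first; the file name is a placeholder) strips the carriage returns the same way:
perl -pi -e 's/\r$//' backend_script.pl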
Your editor and/or FTP program may be mangling the linefeeds.
Running dos2unix on the server is a good test for whether that is the problem, but it doesn't tell you the cause.
Generate an MD5 hash of the file after each step in saving and transport to find where it changes.
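For instance, a one-liner like this (the file name is a placeholder) prints the digest and behaves the same on the Mac, the Linux box, and the server, since Digest::MD5 ships with Perl:
perl -MDigest::MD5=md5_hex -0777 -ne 'print md5_hex($_), "\n"' backend_script.pl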
You do not say what kind of framework/server you are using.
Maybe the server reloads the file while it is still being written by FTP or whatever? (I.e., the file is not complete when the server reads it?)
Will a server restart fix the problem once the file is uploaded?
It sounds like you are using dos2unix before the transfer but the network admin is using it after. Perhaps it's doing something different in that case.
How many lines are in the file? What is the file size before and after you save it, after you transfer it, and after transfer and running dos2unix on it?
If this is just a line ending problem, you might point your network admin at http://www.perlmonks.org/?node_id=586942.
Response to rebra: No frameworks are used, and I don't know what kind of server this is on. This is basically a one-man project on a shared host that was pretty horribly maintained, and I'm trying to clean house.
Yeah, that does make sense, and I asked the server people about that (one of my first questions, actually), but even if that is the case, I can't reboot via Plesk (which is kind of like cPanel). But thanks for that; you put into technical words what I was thinking the whole time.