The 'Add as a trusted domain' button didn't do anything before; now it takes me to an 'Error 404' page.
I can set the domain on the ownCloud box by editing config.php, and I have done so, but I still don't understand why the button doesn't work.
You can manually work around this button:
Go to the config directory inside your installation folder,
then open config.php and add an entry like this:
'trusted_domains' =>
  array (
    0 => 'localhost',
    1 => 'yourdomain.com'
  ),
I just started working with ownCloud myself and happened upon a similar issue when I rebooted my Banana Pi. The BPi got assigned a new IP on my network, and the only trusted IP was the original one. I wanted to see how I could allow more trusted domains or IPs; a quick search shows no wildcard option. Since I often add and remove devices from my network, I wanted to add a range, like 192.168.0.1 to 192.168.0.254.
Since config.php is simply included and can run code, rather than being plain XML or the like, we can build an array really quickly.
config.php
<?php
$local_ips = array();
$base = "192.168.0.";

for($i = 1; $i < 255; $i++){
    array_push($local_ips, $base . $i);
}

$CONFIG = array(
    // Other config items ...
    'trusted_domains' => $local_ips,
    // More config items...
);
This will create an array of IPs that can then be used as trusted domains. $base is the first three octets of your private IP subnet: if you use 192.168.0.0/24 or 10.0.1.0/24, $base would be 192.168.0. or 10.0.1. respectively. The for() loop's limit should match your network size.
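If your network is larger than a single /24, something like this untested variation should work, at the cost of a much bigger array; for example, for a hypothetical 10.0.0.0/16 network:
<?php
// untested sketch for a 10.0.0.0/16 network: every 10.0.x.y address
// becomes a trusted domain (roughly 65,000 entries, so use with care)
$local_ips = array();
for($j = 0; $j < 256; $j++){
    for($i = 1; $i < 255; $i++){
        array_push($local_ips, "10.0." . $j . "." . $i);
    }
}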
You must include http:// or https:// before the domain. The general form is:
http://domain:port
or
https://domain:port
For example:
http://10.0.0.1:8000
or
https://10.0.0.1:8000
I have this PowerShell command that exports all issued certificates into a .csv file for me:
$Local = "$PSScriptRoot"
$File = "$Local\IssuedCerts.csv"
$Header = "Request ID,Requester Name,Certificate Template,Serial Number,Certificate Effective Date,Certificate Expiration Date,Issued Country/Region,Issued Organization,Issued Organization Unit,Issued Common Name,Issued City,Issued State,Issued Email Address"
certutil -view -out $Header csv > $File
This works fine. By the way, if it's somehow possible to format the output in a more readable manner, please let me know too.
The point is that I need to export all certificates which will expire soon, but I also need the data from each certificate's SAN extension to be exported along with it.
Perhaps getting the certificates directly from the CertificateAuthority X509Store and reading the certificate extensions (one of which is the Subject Alternative Name) using the AsnEncodedData class would do the trick?
The example code below reads the certificates from the given store and prints out their extensions:
using namespace System.Security.Cryptography.X509Certificates

# Open the CertificateAuthority store on the local machine read-only
$caStore = [X509Store]::new([StoreName]::CertificateAuthority, [StoreLocation]::LocalMachine)
$caStore.Open([OpenFlags]::ReadOnly)

foreach ($certificate in $caStore.Certificates) {
    foreach ($extension in $certificate.Extensions) {
        # Wrap the raw extension bytes so they can be formatted as readable text
        $asnData = [System.Security.Cryptography.AsnEncodedData]::new($extension.Oid, $extension.RawData)
        Write-Host "Extension Friendly Name: $($extension.Oid.FriendlyName)"
        Write-Host "Extension OID: $($asnData.Oid.Value)"
        Write-Host "Extension Value: $($asnData.Format($true))"
    }
}

$caStore.Close()
You can open a different store by specifying a different value in place of [StoreName]::CertificateAuthority.
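For instance, if the certificates you need live in the machine's personal store instead, swapping the store name would look like this (just a hypothetical variation on the code above):
# hypothetical: enumerate the LocalMachine 'Personal' store instead
$myStore = [X509Store]::new([StoreName]::My, [StoreLocation]::LocalMachine)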
Disclaimer: I haven't been able to test this code in production, so I'm not 100% certain that all the fields you require are exposed, but it may serve as a good starting point.
I want to copy a directory from one host to another host using SCP.
I tried the following syntax:
my $src_path="/abc/xyz/123/";
my $BASE_PATH="/a/b/c/d/";
my $scpe = Net::SCP::Expect->new(host=> $host, user=>$username, password=>$password);
$scpe->scp -r($host.":".$src_path, $dst_path);
I am getting an error like "no such file or directory". Can you help in this regard?
According to the example given in the manpage, you don't need to repeat the host in the call if you already passed it as an option.
from http://search.cpan.org/~djberg/Net-SCP-Expect-0.12/Expect.pm:
Example 2 - uses constructor, shorthand scp:
my $scpe = Net::SCP::Expect->new(host=>'host', user=>'user', password=>'xxxx');
$scpe->scp('file','/some/dir'); # 'file' copied to 'host' at '/some/dir'
Besides, is this "-r" a typo? If you want to copy recursively, you need to set recursive => "yes" in the options hash.
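Putting both changes together, something like this untested sketch (reusing the variable names from your snippet) should be closer to what you want:
use Net::SCP::Expect;

# host is given once in the constructor; recursion is enabled via the
# option hash instead of a "-r" flag
my $scpe = Net::SCP::Expect->new(
    host      => $host,
    user      => $username,
    password  => $password,
    recursive => 'yes',
);

$scpe->scp($src_path, $dst_path);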
TL;DR
Does the PHP 5.4 built-in webserver have any bug or restriction regarding relative paths? Or does it need to be properly (and additionally) configured?
Back when I was programming actively, I had a system doing URI routing with these lines in a .htaccess file:
RewriteEngine On
RewriteRule !\.(js|ico|gif|jpg|png|css)$ index.php [L]
The FrontController received the Request, found the proper route for the given URI in an SQLite database, and the Dispatcher called the Action Controller.
It worked very nicely with Apache. Today, several months later, I decided to run my Test Application with the PHP 5.4 built-in webserver.
The first thing I noticed, obviously, is that .htaccess doesn't work, so I used a router file instead:
<?php
if( preg_match( '/\.(?:png|jpg|jpeg|gif)$/', $_SERVER["REQUEST_URI"] ) ) {
    return false;
}
include __DIR__ . '/index.php';
And started the webserver like this:
php.exe -c "php.ini" -S "localhost:8080" "path\to\testfolder\routing.php"
So far, so good. Everything my application needs to bootstrap could be accomplished by modifying the include_path like this:
set_include_path(
'.' . PATH_SEPARATOR . realpath( '../common/next' )
);
Here, next is the core folder of all modules, inside a folder holding everything common to all the applications I have; it doesn't need any further explanation for this purpose.
None of the autoloader techniques I've ever seen is able to autoload itself, so the only class manually required is my AutoLoader. But after running the Test Application I received an error because my AutoLoader could not be found. o.O
I was always very suspicious of realpath(), so I decided to replace it with the full, absolute path of this next directory, and it worked. It shouldn't have been necessary to do this, but it worked.
My autoloader was loaded and successfully registered by spl_autoload_register(). For reference, this is the autoloading function (only the Closure, of course):
function( $classname ) {

    $classname = stream_resolve_include_path(
        str_replace( '\\', DIRECTORY_SEPARATOR, $classname ) . '.php'
    );

    if( $classname !== FALSE ) {
        include $classname;
    }
};
However, resources located within index.php's path, like the MVC classes, could not be found. So I did something else I also shouldn't be doing and added the working directory to the include_path, again manually, without relying on realpath():
set_include_path(
'.' . PATH_SEPARATOR . 'path/to/common/next'
. PATH_SEPARATOR . 'path/to/htdocs/testfolder/'
);
And it worked again... Almost! >.<
Most of the applications I can create with this system work quite well with my Standard Router, which is based on SQLite databases. To make things even easier, this Router looks for a predefined SQLite file within the working directory.
Of course, I also provide a way to change this default entry just in case, and because of this I check whether the file exists and trigger an error if it doesn't.
And this is the specific error I'm seeing. The checking routine is like this:
if( ! file_exists( $this -> options -> dbPath ) ) {

    throw RouterException::connectionFailure(
        'Routes Database File %s doesn\'t exist in Data Directory',
        array( $this -> options -> dbPath )
    );
}
The dbPath entry, if not changed, uses a constant value of Data/Routes.sqlite, relative to the working directory.
If, yet again, I set the absolute path manually, everything (really) works, and the Request flow reaches the Action Controllers successfully.
What's going on?
This is a bug in PHP's built-in web server that is still not fixed as of PHP version 5.6.30.
In short, the web server does not redirect to www.foo.com/bar/ if www.foo.com/bar was requested and happens to be a directory. The client, being served www.foo.com/bar, assumes it is a file (because of the missing slash at the end), so all subsequent relative links are fetched relative to www.foo.com/ instead of www.foo.com/bar/.
A bug ticket was opened back in 2013 but was mistakenly set to a status of "Not a Bug".
I'm experiencing a similar issue in 2017, so I left a comment on the bug ticket.
Edit: I just noticed that @jens-a-koch opened the ticket I linked to. I was not aware of his comment on the original question.
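Until it is fixed, one rough workaround (an untested sketch, assuming a router script like the routing.php from the question) is to issue the missing trailing-slash redirect yourself before falling through to the normal routing:
<?php
// sketch: emit the trailing-slash redirect the built-in server skips
$path = parse_url( $_SERVER['REQUEST_URI'], PHP_URL_PATH );

if( $path !== '/' && is_dir( __DIR__ . $path ) && substr( $path, -1 ) !== '/' ) {
    header( 'Location: ' . $path . '/', true, 301 );
    exit;
}

// serve static assets as-is and route everything else to index.php
if( preg_match( '/\.(?:png|jpg|jpeg|gif)$/', $path ) ) {
    return false;
}

include __DIR__ . '/index.php';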
I have a web application which I test locally and deploy on an EC2 instance.
I am using a local nginx configuration which looks like this:
location /static/ {
    alias /home/me/code/p/python/myapp/static/;

    # if asset versioning is used
    if ($query_string) {
        expires max;
    }
}

location /templates/ {
    alias /home/me/code/p/python/app/templates/;

    # if asset versioning is used
    if ($query_string) {
        expires max;
    }
}
On EC2 instance, the only thing that would change is the path, e.g.
/home/me/code/p/python/myapp/static/ to /User/ubuntu/code/p/python/myapp/static/
To make this happen, I changed the configuration to look like
~/code/p/python/myapp/static/
but this didn't work; it resolves to the path
/etc/nginx/~/code/p/python/myapp/static/
which is not right.
Question
- Is it possible to include environment variables in an nginx conf?
What I want
- An nginx conf which can read per-machine variables to build the paths, so that I don't have to change it for each machine and the configuration stays reusable.
Thank you
Two ways of doing this:
As suggested above, symlinking is a really good way of making paths match across machines while keeping the code in one place. A symbolic link is basically an alias: if /link is a symlink to /file, then when you ask for /link you'll get /file.
ln -s /file /link
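For example, on the EC2 instance you could make the path from your local config resolve there as well (assuming the paths from your question):
# hypothetical: point the path used in the nginx conf at the EC2 code directory
sudo mkdir -p /home/me
sudo ln -s /User/ubuntu/code /home/me/code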
Using include statements. In nginx, you can use include variables.conf; to pull in a shared file. E.g.
nginx.conf:
include variables.conf;
...
http {
listen $port;
...
}
variables.conf:
set $foo "Something";
set $bar "Else";
set $port 80;
I have a lighttpd 1.4.26 (SSL) configuration on a CentOS Linux machine serving an HTML5 media application over HTTPS.
My goal is to serve media files via the application over HTTP from the same webserver.
If the webserver is located at https://www.media.com/ and all the media lives in various subfolders of http://www.media.com/sharedmedia/XXXXX, and the HTML pages contain relative links to the media files, then I want every request for a .mp3, .mp4, .webm, or .ogv file to be redirected to the EXACT SAME URL but using http instead of https...
My problem is I do not know how to write a url.redirect rule to perform this translation...
I have tried:
url.redirect = ( "https://^(.*)\.(ogv|mp4|mp3|webm)$" => "http://%1/$1" )
And when I visit this URL:
https://www.media.com/sharedmedia/X-MAC-MINI/Sports/Amazing%20Football%20Skills%20and%20Tricks.ogv
I am 301 permanently redirected to
http://www.media.com/sharedmedia/X-MAC-MINI/Sports/Amazing0Football0Skills0and0Tricks.ogv
Which is then also 301'ed to:
http:///sharedmedia/AFFINEGY-MAC-MINI/Sports/Amazing0Football0Skills0and0Tricks
Notice that the %20's that were in the very first URL (URL-encoded spaces) were dropped, leaving a trailing '0' in each case during the first redirect (I assume they were interpreted as %2, which holds an empty string), and that the HTTP request is ALSO redirected erroneously to another URL that doesn't even contain the host value (www.media.com). The extension is also left off the second redirect...
I then tried a conditional version after that:
$HTTP["socket"] =~ ":443$"
{
url.redirect = ( "^(.*)\.(ogv|mp4|mp3|webm)$" => "http://%1/$1" )
}
...which results in lighttpd simply crashing on startup, so I can't even test it. The lighttpd startup error message follows:
Starting lighttpd: 2011-08-31 16:19:15: (configfile.c.907) source: find /etc/lighttpd/conf.d -maxdepth 1 -name '*.conf' -exec cat {} \; line: 44 pos: 1 parser failed somehow near here: (EOL)
2011-08-31 16:19:15: (configfile.c.907) source: /etc/lighttpd/lighttpd.conf line: 331 pos: 1 parser failed somehow near here: (EOL)
Any ideas what I'm doing wrong?
Here's what I did wrong:
You need a nested conditional in order to use the %n notation in the redirect destination...
I had
$HTTP["socket"] =~ ":443$"
{
url.redirect = ( "^(.*)\.(ogv|mp4|mp3|webm)$" => "http://%1/$1" )
}
But I needed
$HTTP["socket"] =~ ":443$" {
$HTTP["host"] == (.*) {
url.redirect = ( "^(.*)\.(ogv|mp4|mp3|webm)$" => "http://%1/$1.$2" )
}
}
Now... the %1 in "http://%1/$1.$2" refers to the match in "$HTTP["host"] =~ "(.*)"", while $1 refers to the match in the first set of parentheses in the url.redirect source and $2 refers to the match in the second set...
Is it just me or is this shit TOTALLY undocumented? All I can find on Google is people complaining about not being able to get this to work, and no one seems to have a single answer for how it works...
I'm now stuck making the EXACT same thing happen in Apache, and I can't find good documentation on .htaccess 301 redirects for it either...