Setting up a virtual host with MAMP, but it is not mapping to the directory

I tried following these instructions for setting up a virtual host as literally as I could.
When I try to access the directory using the virtual hostname, I get a 404 Not Found page.
I found a post in the MAMP support forum saying that a possible cause is that the document root cannot live outside of the default folder, /Applications/MAMP/htdocs/.
Can anyone verify that this is correct? I can't keep spending days trying to figure this out!
EDIT: Here is what I've done.
Added to the bottom of httpd-vhosts.conf:
<VirtualHost *:80>
    ServerName abc.dev
    DocumentRoot /Users/micah/Sites/abc/
    <Directory /Users/micah/Sites/abc/>
        DirectoryIndex index.php
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
Uncommented the virtual hosts Include in httpd.conf:
# Virtual hosts
Include /Applications/MAMP/conf/apache/extra/httpd-vhosts.conf
Added the hostname to the 127.0.0.1 entry in /etc/hosts:
127.0.0.1 localhost abc.dev
Then I restarted the servers, and all I get is a 404 error.
Here's the last entry in MAMP's apache_error.log; it doesn't seem to have any relation to my attempts to load abc.dev, because it is from more than an hour ago.
[Wed Apr 02 23:46:38 2014] [notice] caught SIGTERM, shutting down
[Wed Apr 02 23:46:42 2014] [notice] FastCGI: process manager initialized (pid 11519)
[Wed Apr 02 23:46:42 2014] [notice] Digest: generating secret for digest authentication ...
[Wed Apr 02 23:46:42 2014] [notice] Digest: done
[Wed Apr 02 23:46:42 2014] [notice] Apache/2.2.26 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.6 PHP/5.5.10 mod_ssl/2.2.26 OpenSSL/0.9.8y DAV/2 mod_perl/2.0.8 Perl/v5.18.0 configured -- resuming normal operations
EDIT 4/4: I think I misunderstood my problem. The 404 means the hosts file is doing what it's supposed to, but for some reason I can't map the hostname to the correct directory.
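One thing worth double-checking (an assumption on my part, since the snippets above look otherwise reasonable): on the Apache 2.2 that MAMP ships, name-based virtual hosting must be enabled with a NameVirtualHost directive, and the first matching vhost becomes the default, so it is common to declare a localhost vhost first so the stock htdocs keeps working. Something like this at the top of httpd-vhosts.conf:

```
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName localhost
    DocumentRoot /Applications/MAMP/htdocs
</VirtualHost>
```

Without NameVirtualHost, Apache 2.2 warns at startup that the vhosts overlap and serves every request from the first one, which would produce exactly this "wrong directory" symptom.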

Transferring logs using syslog-ng `as is` without timestamp and hostname etc

Background
An Apache server running on a machine produces logs in /var/log/httpd/error_log.
syslog-ng is used to send the logs to port 5140.
Eventually they will be consumed by a Kafka producer and sent to a topic.
Settings
options {
    flush_lines (0);
    time_reopen (10);
    log_fifo_size (1000);
    long_hostnames (off);
    use_dns (no);
    use_fqdn (no);
    create_dirs (no);
    keep_hostname (no);
};
source s_apache2 {
    file("/var/log/httpd/error_log" flags(no-parse));
};
destination loghost {
    tcp("*.*.*.*" port(5140));
};
Problem
syslog-ng prepends a timestamp and hostname to the log data, which is undesirable:
<13>Jan 10 11:01:03 hostname [Tue Jan 10 11:01:02 2017] [notice] Digest: generating secret for digest authentication ...
<13>Jan 10 11:01:03 hostname [Tue Jan 10 11:01:02 2017] [notice] Digest: done
<13>Jan 10 11:01:03 hostname [Tue Jan 10 11:01:02 2017] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.4.30 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
Desired output (each log line as-is from the error_log file):
[Tue Jan 10 11:01:02 2017] [notice] Digest: generating secret for digest authentication ...
[Tue Jan 10 11:01:02 2017] [notice] Digest: done
[Tue Jan 10 11:01:02 2017] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.4.30 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
Platform
CentOS release 6.4 (Final)
syslog-ng #version:3.2
PS
Syslog-ng to Kafka integration: please let me know if anybody has tried this; it would render my Java Kafka producer redundant.
When you use the flags(no-parse) option in syslog-ng, syslog-ng does not try to parse the individual fields of the message; it puts everything into the MESSAGE field of the incoming log message and still prepends a syslog header. To remove this header, use a template in your syslog-ng destination:
template t_msg_only { template("${MSG}\n"); };
destination loghost {
    tcp("*.*.*.*" port(5140) template(t_msg_only));
};
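Note that a source and a destination also have to be wired together with a log statement, which the configuration shown in the question omits. A minimal end-to-end sketch, reusing the names from the question, would be:

```
template t_msg_only { template("${MSG}\n"); };

source s_apache2 {
    file("/var/log/httpd/error_log" flags(no-parse));
};

destination loghost {
    tcp("*.*.*.*" port(5140) template(t_msg_only));
};

log { source(s_apache2); destination(loghost); };
```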
To use the Kafka destination of syslog-ng, you need a newer version of syslog-ng (I'd recommend 3.8 or 3.9). Peter Czanik has written a detailed post about installing up-to-date syslog-ng RPMs on CentOS.
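As a starting point for that integration, newer syslog-ng versions ship a kafka() destination driver. A sketch (the broker address and topic name are placeholders, and the exact option names differ between the older Java-based and the later C-based driver, so check the documentation for your version):

```
destination d_kafka {
    kafka(
        bootstrap-servers("broker1:9092")
        topic("apache-errors")
    );
};
```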

CQ/AEM Dispatcher does not flush Binaries

Our application imports binaries (mostly PDFs) from a legacy system and stores them on a page together with some metadata.
When something changes, the page is automatically activated. We see the replication events in the replication log, and an invalidate event is also logged on the dispatcher. But there is no eviction entry, and thus the old binary is still cached.
We also have HTML pages next to these container pages for the binaries, and those work as expected. Here are the two log entries, for the successful HTML flush and the unsuccessful PDF flush:
OK:
[Thu Jul 03 09:26:33 2014] [D] [27635(24)] Found farm website for localhost:81
[Thu Jul 03 09:26:33 2014] [D] [27635(24)] checking [/dispatcher/invalidate.cache]
[Thu Jul 03 09:26:33 2014] [I] [27635(24)] Activation detected: action=Activate [/content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/test]
[Thu Jul 03 09:26:33 2014] [I] [27635(24)] Touched /app/C2Z/dyn/c2zcqdis/docroot/.stat
[Thu Jul 03 09:26:33 2014] [I] [27635(24)] Evicted /app/C2Z/dyn/c2zcqdis/docroot/content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/test.html
[Thu Jul 03 09:26:33 2014] [D] [27635(24)] response.status = 200
[Thu Jul 03 09:26:33 2014] [D] [27635(24)] response.headers[Server] = "Communique/2.6.3 (build 5221)"
[Thu Jul 03 09:26:33 2014] [D] [27635(24)] response.headers[Content-Type] = "text/html"
[Thu Jul 03 09:26:33 2014] [D] [27635(24)] cache flushed
[Thu Jul 03 09:26:33 2014] [I] [27635(24)] "GET /dispatcher/invalidate.cache" 200 13 2ms
Not OK:
[Thu Jul 03 09:30:45 2014] [D] [27635(24)] Found farm website for localhost:81
[Thu Jul 03 09:30:45 2014] [D] [27635(24)] checking [/dispatcher/invalidate.cache]
[Thu Jul 03 09:30:45 2014] [I] [27635(24)] Activation detected: action=Activate [/content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/as2p_vvm_ch_gl_fix_chf__pdf]
[Thu Jul 03 09:30:45 2014] [I] [27635(24)] Touched /app/C2Z/dyn/c2zcqdis/docroot/.stat
[Thu Jul 03 09:30:45 2014] [D] [27635(24)] response.status = 200
[Thu Jul 03 09:30:45 2014] [D] [27635(24)] response.headers[Server] = "Communique/2.6.3 (build 5221)"
[Thu Jul 03 09:30:45 2014] [D] [27635(24)] response.headers[Content-Type] = "text/html"
[Thu Jul 03 09:30:45 2014] [D] [27635(24)] cache flushed
[Thu Jul 03 09:30:45 2014] [I] [27635(24)] "GET /dispatcher/invalidate.cache" 200 13 1ms
The PDF in this case is stored in a node called 'download' directly below the jcr:content node. Its HTML container is never called directly and thus is not available on the dispatcher. So a user requests the file directly:
/content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/as2p_vvm_ch_gl_fix_chf__pdf/jcr%3acontent/download/file.res/as2p_vvm_ch_gl_fix_chf_.pdf
In the dispatcher.any we flush all HTML pages on activation, but not the binaries. For testing, we added an allow for *.pdf, but this didn't help:
/invalidate
{
    /0000
    {
        /glob "*"
        /type "deny"
    }
    /0001
    {
        /glob "*.html"
        /type "allow"
    }
}
In my opinion, the invalidate call should just delete the whole folder:
/content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/as2p_vvm_ch_gl_fix_chf__pdf
Any ideas why our binaries do not get flushed?
UPDATE: In another post the statfileslevel property in dispatcher.any is mentioned. In our environment it is commented out. Could this be the problem? Sadly, I don't fully understand how it is supposed to work. Is the level counted from the wwwroot or from the page that is activated?
It looks like your problem with dispatcher flushing is that the file is being served from a path containing jcr%3acontent when it should use _jcr_content.
A dispatcher flush deletes the folder _jcr_content under the path being flushed; it does not delete jcr%3acontent (URL-decoded: jcr:content). So you should instead serve the PDF using this URL:
/content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/as2p_vvm_ch_gl_fix_chf__pdf/_jcr_content/download/file.res/as2p_vvm_ch_gl_fix_chf_.pdf
This would then cache the pdf file under:
{CACHEROOT}/content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/as2p_vvm_ch_gl_fix_chf__pdf/_jcr_content/download/file.res/as2p_vvm_ch_gl_fix_chf_.pdf
Then, when this path is flushed, the subdirectory _jcr_content is deleted under the flushed path:
/content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/as2p_vvm_ch_gl_fix_chf__pdf
To go into more detail, when you issue a flush request for the path above, the following files and directories are deleted:
/content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/as2p_vvm_ch_gl_fix_chf__pdf.* (where * is a wildcard)
/content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/as2p_vvm_ch_gl_fix_chf__pdf/_jcr_content
See slide 23 in this presentation for details on how flushing works:
http://www.slideshare.net/andrewmkhoury/aem-cq-dispatcher-caching-webinar-2013
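To take the replication agent out of the equation while testing, you can send by hand the same flush request the flush agent sends; a sketch, with the dispatcher host and content path taken from the logs above as placeholders:

```
curl -H "CQ-Action: Activate" \
     -H "CQ-Handle: /content/offering/s2p/en/offerings/documents/Swiss_Mandate_Line/Review/as2p_vvm_ch_gl_fix_chf__pdf" \
     -H "Content-Length: 0" \
     http://localhost:81/dispatcher/invalidate.cache
```

Watching the dispatcher log after this request shows directly whether the _jcr_content subdirectory gets evicted.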
Not sure if this is the root cause, but I suspect you need to go to localhost:4503/etc/replication/agents.publish.html (note this is a publish instance; you can do it on the author and replicate the replication agents et al, but for the purposes of a POC, just do it directly on the publisher).
Then go to your dispatcher flush agent, and click on edit settings.
Go to the triggers panel.
Make sure that the "On Receive" trigger is checked. This enables chain replication, meaning that when an asset is published, it is immediately deleted from the dispatcher cache, causing a miss on the next request and thus pulling a fresh copy from the publisher.
Note that this kind of flushing is distinct from the stats file level flushing, which only flushes a directory, rather than a fully qualified path to the asset.
By the way, it's not the stats file level. The stats file level defaults to 0 when it is commented out, which invalidates everything below it. What you seem to be looking for is an active delete of the cache. This is possible, as Dave outlined to me for an unrelated problem in this post:
Is it possible to recursively flush directories in the CQ5/AEM apache dispatcher?
An approach would be to create a flush interceptor: essentially a custom servlet on the publisher. You would then configure the normal flush replication agent to call this local servlet on the publisher.
The servlet then detects whether it needs to delete the whole directory or particular files within it. It can transform the flush path to the required path and use a DELETE action instead of a FLUSH action.
It is still very important to also send the flush to the normal dispatcher location.
Hope this helps.

How to run puppetmaster using Apache/Passenger

Running Puppet v2.7.14 on CentOS 6, using Apache/Passenger instead of WEBrick. I was told that the puppetmaster service does not need to be running (hence: chkconfig puppetmaster off) when using httpd and Passenger, but in my case, if I don't start puppetmasterd manually, none of the agents can connect to the master. I can start httpd just fine, and Passenger seems to start okay as well. This is my Apache configuration file:
# /etc/httpd/conf.d/passenger.conf
LoadModule passenger_module modules/mod_passenger.so
<IfModule mod_passenger.c>
    PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.12
    PassengerRuby /usr/bin/ruby
    #PassengerTempDir /var/run/rubygem-passenger
    PassengerHighPerformance on
    PassengerUseGlobalQueue on
    PassengerMaxPoolSize 15
    PassengerPoolIdleTime 150
    PassengerMaxRequests 10000
    PassengerStatThrottleRate 120
    RackAutoDetect Off
    RailsAutoDetect Off
</IfModule>
Upon restart, I see these entries in the httpd error log:
[Sat Jun 09 04:06:47 2012] [notice] caught SIGTERM, shutting down
[Sat Jun 09 09:06:51 2012] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Jun 09 09:06:51 2012] [notice] Digest: generating secret for digest authentication ...
[Sat Jun 09 09:06:51 2012] [notice] Digest: done
[Sat Jun 09 09:06:51 2012] [notice] Apache/2.2.15 (Unix) DAV/2 Phusion_Passenger/3.0.12 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
And passenger-status prints this information:
----------- General information -----------
max = 15
count = 0
active = 0
inactive = 0
Waiting on global queue: 0
----------- Application groups -----------
But still, as I said, none of my agents can talk to the master until I start puppetmasterd manually. Does anyone know what I am still missing? Or is this the way it is supposed to be? Cheers!!
It sounds like you may be missing an /etc/httpd/conf.d/puppetmaster.conf file based on https://github.com/puppetlabs/puppet/blob/master/ext/rack/files/apache2.conf
Without something like this, you're missing the glue that maps port 8140 to the Rack-based puppetmasterd.
See http://docs.puppetlabs.com/guides/passenger.html
https://github.com/puppetlabs/puppet/tree/master/ext/rack
http://www.modrails.com/documentation/Users%20guide%20Apache.html#_deploying_a_rack_based_ruby_application_including_rails_gt_3
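For reference, a minimal /etc/httpd/conf.d/puppetmaster.conf along the lines of that example apache2.conf might look like the following sketch (the certificate file names and Rack paths are placeholders that vary per installation):

```
Listen 8140
<VirtualHost *:8140>
    SSLEngine on
    SSLCertificateFile    /var/lib/puppet/ssl/certs/master.example.com.pem
    SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/master.example.com.pem
    SSLCACertificateFile  /var/lib/puppet/ssl/ca/ca_crt.pem
    SSLVerifyClient optional
    SSLOptions +StdEnvVars
    RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

    RackAutoDetect On
    DocumentRoot /usr/share/puppet/rack/puppetmasterd/public/
    <Directory /usr/share/puppet/rack/puppetmasterd/>
        Options None
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

The Listen 8140 plus the config.ru under the Rack directory are what let Apache answer on the puppet port instead of the standalone puppetmasterd.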
After a few days of banging my head, it's now running. The main problem was the port number: the puppetmaster was running on a different port than the one the puppet agents were trying to communicate on.
Another thing: RackAutoDetect On must be executed before the dashboard vhost file, so I renamed the Passenger vhost file to 00_passenger.conf to make sure it is loaded first. After that, puppetmaster runs under Apache/Passenger. Cheers!!

Default Index Controller Not Being Called With New Zend Studio Project

I have just purchased a license for Zend Studio 9. I have only minimal experience with the Zend Framework and no previous experience with Zend Studio. I am using http://framework.zend.com/manual/en/ as a tutorial on the framework and have browsed the resources at http://www.zend.com/en/products/studio/resources for help with the Studio software.
My main problem is that after creating a new Zend project with Zend Studio, I'm not seeing the initial welcome message. Here are the steps I am using:
I've already installed Zend Server and confirmed that web apps are working (I made some test files and they all parsed correctly).
Create a new project with Zend Studio.
a. File->New->Local PHP Project
b. For location, I am using C:\Program Files\Zend\Apache2\htdocs.
c. For version I used the default "Zend Framework 1.11.11 (Built-in)"
I go to http://localhost:81/projectname. Instead of the default index controller being called, I just see my directory structure.
Additional info:
OS: Windows 7
PHP version: 5.3
ERROR LOGS:
[Wed Nov 30 14:32:30 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
[Wed Nov 30 14:32:30 2011] [warn] pid file C:/Program Files (x86)/Zend/Apache2/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
[Wed Nov 30 14:32:30 2011] [notice] Digest: generating secret for digest authentication ...
[Wed Nov 30 14:32:30 2011] [notice] Digest: done
[Wed Nov 30 14:32:31 2011] [notice] Apache/2.2.16 (Win32) mod_ssl/2.2.16 OpenSSL/0.9.8o configured -- resuming normal operations
[Wed Nov 30 14:32:31 2011] [notice] Server built: Aug 8 2010 16:45:53
[Wed Nov 30 14:32:31 2011] [notice] Parent: Created child process 13788
[Wed Nov 30 14:32:32 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
[Wed Nov 30 14:32:32 2011] [notice] Digest: generating secret for digest authentication ...
[Wed Nov 30 14:32:32 2011] [notice] Digest: done
[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Child process is running
[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Acquired the start mutex.
[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Starting 64 worker threads.
[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Starting thread to listen on port 10081.
[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Starting thread to listen on port 81.
If you navigate to http://localhost:81/projectname/index/index, does the correct screen load?
If so:
Check that the .htaccess file in your public directory contains the correct rewrite rules for Zend Framework.
Check your httpd.conf file and make sure index.php is added to the DirectoryIndex directive.
I think the solution is going to be the second bullet, but let me know what you find and I can help further if that doesn't work. Make sure to restart Apache after any changes to httpd.conf.
Otherwise, report any errors you see when you access the controller directly, and check Apache's error_log to see if there are any errors.
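For the first bullet, the stock public/.htaccess that Zend Framework 1.x projects are generated with looks roughly like this (verify it against your generated project):

```
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -s [OR]
RewriteCond %{REQUEST_FILENAME} -l [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^.*$ - [NC,L]
RewriteRule ^.*$ index.php [NC,L]
```

For the second bullet, the relevant httpd.conf directive would be along the lines of DirectoryIndex index.php index.html, so that a request for the bare project directory is handed to the front controller.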

Why is GoogleBot interested in [index.php] when my root redirects to [/en/home]?

For the past few months, GoogleBot has been hitting a file that no longer exists on my site [index.php], as all routing to the proper home pages in the proper languages is handled via Apache rewrite rules in the .htaccess.
And so, I commented out the DirectoryIndex index.php rule in my .htaccess:
RewriteEngine on
RewriteBase /
Options +FollowSymLinks -Indexes -ExecCGI
# DirectoryIndex index.php (not needed anymore, index.php doesn't exist)
# DirectoryIndex /en/home (should it be set to this now??)
Currently, everything works sublimely: the http://website.org root is instantly redirected to /en/home via a 301 permanent redirect!
But
66.249.67.142 / == crawl-66-249-67-142.googlebot.com is hitting my site again and again, trying to read index.php, which does not exist. What should I do??
A sneak peek into the endless error log, full of entries like these (poor GoogleBot, I thought it might be more intelligent...):
[Fri Mar 04 20:48:30 2011] [error] [client 66.249.66.177] File does not exist:
/var/www/vhosts/site.com/httpdocs/index.php
[Fri Mar 04 20:58:59 2011] [error] [client 66.249.66.177] File does not exist:
/var/www/vhosts/site.com/httpdocs/index.php
[Fri Mar 04 21:00:18 2011] [error] [client 66.249.67.142] File does not exist:
/var/www/vhosts/site.com/httpdocs/index.php
[Fri Mar 04 21:01:05 2011] [error] [client 66.249.66.177] File does not exist:
/var/www/vhosts/site.com/httpdocs/index.php
[Fri Mar 04 21:12:28 2011] [error] [client 66.249.66.164] File does not exist:
/var/www/vhosts/site.com/httpdocs/index.php
[Fri Mar 04 21:27:30 2011] [error] [client 66.249.68.115] File does not exist:
/var/www/vhosts/site.com/httpdocs/index.php
Someone linked to index.php, so Google keeps trying to follow the link.
Add a rewrite from index.php to /en/home, and you'll be golden.
Edit: Also, DirectoryIndex can't be a path, AFAIK. It just tells the server which file in a directory to serve when none is specified.
/ and /index.php are separate resources as far as anything accessing your site over HTTP is concerned. Redirecting / means nothing if Google has seen links to /index.php before.
Just redirect /index.php to /en/home the same way you redirect /.
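A sketch of that rule for the .htaccess (assuming mod_rewrite is already enabled, as it is in the question):

```
RewriteEngine On
RewriteRule ^index\.php$ /en/home [R=301,L]
```

The R=301 makes it a permanent redirect, matching what the root already does, so Google should eventually drop the stale /index.php URL.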