Loading external jars into spark-notebook fails

I am trying to connect to Redshift from the notebook; so far I have done the following:
Configured metadata for the notebook:
"customDeps": [
"com.databricks:spark-redshift_2.10:3.0.0-preview1",
"com.databricks:spark-avro_2.11:3.2.0",
"com.databricks:spark-csv_2.11:1.5.0"
]
Checked the browser console to ensure these libraries were loaded after restarting the kernel:
ui-logs-1422> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.m2/repository/com/databricks/spark-avro_2.10/3.0.0/spark-avro_2.10-3.0.0.jar
kernel.js:978 ui-logs-1452> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/databricks/spark-redshift_2.10/3.0.0-preview1/spark-redshift_2.10-3.0.0-preview1.jar
kernel.js:978 ui-logs-1509> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/databricks/spark-csv_2.11/1.5.0/spark-csv_2.11-1.5.0.jar
kernel.js:978 ui-logs-1526> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/databricks/spark-avro_2.11/3.2.0/spark-avro_2.11-3.2.0.jar
When I try to load a table, I run into a ClassNotFoundException:
java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.redshift. Please find packages at http://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:594)
at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:325)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
... 63 elided
Caused by: java.lang.ClassNotFoundException: com.databricks.spark.redshift.DefaultSource
at scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25$$anonfun$apply$13.apply(DataSource.scala:579)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25$$anonfun$apply$13.apply(DataSource.scala:579)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25.apply(DataSource.scala:579)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25.apply(DataSource.scala:579)
at scala.util.Try.orElse(Try.scala:84)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:579)
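For context, the load that triggers this is along these lines (a sketch: the JDBC URL, table name, and S3 tempdir are placeholders, and sparkSession stands for however the notebook exposes the Spark session):
// A sketch of the failing read; "com.databricks.spark.redshift" is exactly the
// data source the exception above says it cannot find.
val df = sparkSession.read
  .format("com.databricks.spark.redshift")
  .option("url", "jdbc:redshift://host:5439/db?user=xxx&password=xxx") // placeholder
  .option("dbtable", "my_table")                                       // placeholder
  .option("tempdir", "s3n://my-bucket/tmp")                            // placeholder
  .load()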
Has anyone else run into this issue or solved it? I notice a similar issue with another dependency as well. Is there anything missing in the configuration?
Trying out the timeseries sample in the notebook (notebooks/timeseries/Spark-Timeseries.snb.ipynb), I noticed an existing entry in the metadata for a custom dependency:
"customDeps": [
"com.cloudera.sparkts % sparkts % 0.3.0"
]
Quickly verified the availability of this package at https://spark-packages.org/package/sryza/spark-timeseries and updated the metadata to include this line:
"com.cloudera.sparkts:sparkts:0.4.1"
After restarting the kernel, I validated that the library was loaded:
ui-logs-337> [Wed Aug 23 2017 09:29:25 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Will fetch these customDeps artifacts:Set(Dependency(com.cloudera.sparkts:sparkts,0.3.0,,Set(),Attributes(,),false,true), Dependency(com.cloudera.sparkts:sparkts,0.4.1,,Set(),Attributes(,),false,true))
kernel.js:978 ui-logs-347> [Wed Aug 23 2017 09:29:37 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/cloudera/sparkts/sparkts/0.4.1/sparkts-0.4.1.jar
Error message:
<console>:69: error: object cloudera is not a member of package com
import com.cloudera.sparkts._
^
<console>:70: error: object cloudera is not a member of package com
import com.cloudera.sparkts.stats.TimeSeriesStatisticalTests
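(One thing worth noting from the fetch log above: both 0.3.0 and 0.4.1 end up in the dependency set, because the new coordinate was added alongside the old one rather than replacing it. A replaced entry would look like this:)
"customDeps": [
  "com.cloudera.sparkts:sparkts:0.4.1"
]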

Downloaded another version of spark-notebook (this one wasn't from the master branch):
spark-notebook-0.7.0-scala-2.11.8-spark-2.1.1-hadoop-2.7.2
instead of
spark-notebook-0.9.0-SNAPSHOT-scala-2.11.8-spark-2.1.1-hadoop-2.7.2
In addition, I had to ensure the Scala, Spark, and Hadoop versions were consistent across the dependencies I had configured.
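For a Scala 2.11 build of the notebook, that means pinning every artifact to the _2.11 suffix. A sketch of the metadata, assuming 2.11 builds of these artifacts are published:
"customDeps": [
  "com.databricks:spark-redshift_2.11:3.0.0-preview1",
  "com.databricks:spark-avro_2.11:3.2.0",
  "com.databricks:spark-csv_2.11:1.5.0"
]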
In this particular example I also had to point to the Amazon Redshift JDBC driver jar from the command line, as it was not available in the Maven repository:
export EXTRA_CLASSPATH=RedshiftJDBC4-1.2.7.1003.jar
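The jar has to be resolvable from wherever the notebook is launched, so an absolute path is safer (the path below is just an example):
export EXTRA_CLASSPATH=/path/to/RedshiftJDBC4-1.2.7.1003.jar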
Hope this helps others

If you want, you can add the jar to the kernel's environment section "env" (EXTRA_CLASSPATH) like this:
cat /usr/local/share/jupyter/kernels/apache_toree_scala/kernel.json
{
  "argv": [
    "/usr/local/share/jupyter/kernels/apache_toree_scala/bin/run.sh",
    "--profile",
    "{connection_file}"
  ],
  "interrupt_mode": "signal",
  "env": {
    "__TOREE_SPARK_OPTS__": "",
    "PYTHONPATH": "/opt/cloudera/parcels/SPARK2/lib/spark2/python:/opt/cloudera/parcels/SPARK2/lib/spark2/python/lib/py4j-0.10.7-src.zip",
    "__TOREE_OPTS__": "",
    "PYTHON_EXEC": "python",
    "SPARK_HOME": "/opt/cloudera/parcels/SPARK2/lib/spark2",
    "DEFAULT_INTERPRETER": "Scala",
    "JAVA_HOME": "/usr/java/latest",
    "EXTRA_CLASSPATH": "/opt/cloudera/parcels/SPARK2/lib/spark2/jars/mysql-connector-java-5.1.15.jar"
  },
  "metadata": {},
  "display_name": "SPARK2/Scala",
  "language": "scala"
}
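After restarting the kernel, a quick sanity check from a notebook cell confirms the jar was picked up (the class name below matches the bundled mysql-connector jar in this example; substitute your own driver's class):
// Throws ClassNotFoundException if the EXTRA_CLASSPATH jar was not applied.
Class.forName("com.mysql.jdbc.Driver")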

Related

Error message in Talend tool connecting with server - How to resolve this issue
Execution failed : java.security.cert.CertificateExpiredException: NotAfter: Sun Jan 17 05:36:12 IST 2021
[NotAfter: Sun Jan 17 05:36:12 IST 2021]
You're most likely using a subscription product that comes with support. You can find the required steps here:
https://community.talend.com/s/article/FAQ-for-REQUIRED-by-Jan-17-2021-Mandatory-Talend-Certificate-update-for-Talend-On-premises-and-cloud?language=en_US
Applying the latest cumulative patch should fix your problem.

While running the Pact-Karma-Mocha framework, getting error 'Can't find variable: Pact'

I am trying to get the sample PACT JS framework (any variant) running for contract testing. The initial plan is just to get the provided sample(s) running, and then later on change the endpoint and customise it for our own purposes.
PACT Foundation link: https://github.com/pact-foundation/pact-js/tree/master/karma/mocha
Environment:
Win 7
Node: v8.11.4
dependencies installed:
"#pact-foundation/karma-pact": {
"version": "2.1.8",
"#pact-foundation/pact-node": {
"version": "6.19.11",
I am getting the following error while trying to get it running.
Command: karma start karma.conf.js
C:\VarProjects\VanillaMocha>karma start test/karma.conf.js
10 09 2018 09:53:34.544:ERROR [config]: File C:\VarProjects\VanillaMocha\test\karma.conf.js does not exist!
C:\VarProjects\VanillaMocha>karma start karma.conf.js
[2018-09-10T08:53:42.384Z] INFO: pact-node@6.19.11/16892 on W5167037:
Creating Pact Server with options:
port = 1234,
consumer = KarmaMochaConsumer,
provider = KarmaMochaProvider,
logLevel = DEBUG,
log = C:\VarProjects\VanillaMocha\logs\pact.log,
dir = C:\VarProjects\VanillaMocha\pacts,
pactFileWriteMode = overwrite,
ssl = false,
cors = false,
host = localhost
[2018-09-10T08:53:42.401Z] INFO: pact-node@6.19.11/16892 on W5167037: Created 'standalone\win32-1.54.4\bin\pact-mock-service.bat service --port '1234' --consumer 'KarmaMochaConsumer' --provider 'KarmaMochaProvider' --log-level 'DEBUG' --log 'C:\VarProjects\VanillaMocha\logs\pact.log' --pact_dir 'C:\VarProjects\VanillaMocha\pacts' --pact-file-write-mode 'overwrite' --host 'localhost'' process with PID: 18912
10 09 2018 09:53:44.980:INFO [pact]: Pact Mock Server running on port: 1234
10 09 2018 09:53:45.054:WARN [watcher]: Pattern "C:/dist-web/pact-web.js" does not match any file.
10 09 2018 09:53:45.092:INFO [karma]: Karma v3.0.0 server started at http://0.0.0.0:9876/
10 09 2018 09:53:45.093:INFO [launcher]: Launching browser PhantomJS_without_security with unlimited concurrency
10 09 2018 09:53:45.101:INFO [launcher]: Starting browser PhantomJS
10 09 2018 09:53:46.811:INFO [PhantomJS 2.1.1 (Windows 7 0.0.0)]: Connected on socket qMhVUJZzdDCD_YuKAAAA with id 47921548
PhantomJS 2.1.1 (Windows 7 0.0.0) Client "before all" hook FAILED
Can't find variable: Pact
client-spec.js:10:32
PhantomJS 2.1.1 (Windows 7 0.0.0) Client "after all" hook FAILED
undefined is not an object (evaluating 'provider.finalize')
client-spec.js:21:28
PhantomJS 2.1.1 (Windows 7 0.0.0): Executed 2 of 4 (2 FAILED) ERROR (0.013 secs / 0.001 secs)
[2018-09-10T08:53:46.985Z] INFO: pact-node@6.19.11/16892 on W5167037: Removing all Pact servers.
[2018-09-10T08:53:46.986Z] INFO: pact-node@6.19.11/16892 on W5167037: Removing Pact with PID: 18912
C:\VarProjects\VanillaMocha>KARMA start
[2018-09-10T08:54:14.809Z] INFO: pact-node@6.19.11/7492 on W5167037:
Creating Pact Server with options:
port = 1234,
consumer = KarmaMochaConsumer,
provider = KarmaMochaProvider,
logLevel = DEBUG,
log = C:\VarProjects\VanillaMocha\logs\pact.log,
dir = C:\VarProjects\VanillaMocha\pacts,
pactFileWriteMode = overwrite,
ssl = false,
cors = false,
host = localhost
[2018-09-10T08:54:14.823Z] INFO: pact-node@6.19.11/7492 on W5167037: Created 'standalone\win32-1.54.4\bin\pact-mock-service.bat service --port '1234' --consumer 'KarmaMochaConsumer' --provider 'KarmaMochaProvider' --log-level 'DEBUG' --log 'C:\VarProjects\VanillaMocha\logs\pact.log' --pact_dir 'C:\VarProjects\VanillaMocha\pacts' --pact-file-write-mode 'overwrite' --host 'localhost'' process with PID: 2920
10 09 2018 09:54:17.376:INFO [pact]: Pact Mock Server running on port: 1234
10 09 2018 09:54:17.447:WARN [watcher]: Pattern "C:/dist-web/pact-web.js" does not match any file.
10 09 2018 09:54:17.483:INFO [karma]: Karma v3.0.0 server started at http://0.0.0.0:9876/
10 09 2018 09:54:17.484:INFO [launcher]: Launching browser PhantomJS_without_security with unlimited concurrency
10 09 2018 09:54:17.489:INFO [launcher]: Starting browser PhantomJS
10 09 2018 09:54:19.243:INFO [PhantomJS 2.1.1 (Windows 7 0.0.0)]: Connected on socket rn-kwBRGhJbyUwvZAAAA with id 54614606
PhantomJS 2.1.1 (Windows 7 0.0.0) Client "before all" hook FAILED
Can't find variable: Pact
client-spec.js:10:32
PhantomJS 2.1.1 (Windows 7 0.0.0) Client "after all" hook FAILED
undefined is not an object (evaluating 'provider.finalize')
client-spec.js:21:28
PhantomJS 2.1.1 (Windows 7 0.0.0): Executed 2 of 4 (2 FAILED) ERROR (0.012 secs / 0 secs)
Kindly advise. Thanks a lot!
The code is failing because it's missing pact-web. You can see this in the error message you included:
Pattern "C:/dist-web/pact-web.js" does not match any file.
This is happening because you're using the example karma.conf.js outside the example repository without modification.
Quoting the relevant part of karma.conf.js:
// if you are using this example to setup your own project
// load pact from the node_modules directory
'../../dist-web/pact-web.js',
// Example using the NPM package
// 'node_modules/@pact-foundation/pact-web/pact-web.js',
Looking at your directory structure, commenting this line:
'../../dist-web/pact-web.js',
and uncommenting this line:
// 'node_modules/@pact-foundation/pact-web/pact-web.js',
should solve your problem.
Note that you also need to ensure that @pact-foundation/pact-web is a dev dependency:
npm install --save-dev @pact-foundation/pact-web
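Putting it together, the files section of karma.conf.js would then look something like this (the spec pattern is only an example; match it to your own layout):
files: [
  // load pact-web from the npm package
  'node_modules/@pact-foundation/pact-web/pact-web.js',
  // your test specs (example pattern)
  'test/*.spec.js'
],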

Error on reference integration ATG Siebel

We are integrating ATG with Siebel and are getting an error while syncing the products from Siebel. Below is the error:
[oracle@localhost logs]$ less VodafonePub.out
**** Error Wed Jul 20 04:41:36 EDT 2016 1469004096089 /atg/siebel/catalog/SiebelCatalogImportService THREAD (Thread-58): Batch processing failed. Logging details with parent.
**** Error Wed Jul 20 04:41:36 EDT 2016 1469004096090 /atg/siebel/catalog/SiebelCatalogImportService Aborting Job after ADD/UPDATE PHASE - all repository changes will be rolled back
**** Error Wed Jul 20 04:41:36 EDT 2016 1469004096414 /atg/siebel/catalog/SiebelCatalogImportService SingleThreadedImportService.executeImport(): Data import was unsuccessful. Cancelling the import
**** Warning Wed Jul 20 04:41:36 EDT 2016 1469004096603 /atg/siebel/catalog/SiebelCatalogImportController ImportService DID NOT finish successfully
Jul 20, 2016 4:41:38 AM com.sun.xml.rpc.server.http.ea.JAXRPCServletDelegate doPost
SEVERE: JAXRPCSERVLET38: unknown port name: getJobStatus
JAXRPCSERVLET38: unknown port name: getJobStatus
at com.sun.xml.rpc.server.http.ea.ImplementorRegistry.getImplementorInfo(ImplementorRegistry.java:68)
at com.sun.xml.rpc.server.http.ea.ImplementorFactory.getImplementorFor(ImplementorFactory.java:103)
at com.sun.xml.rpc.server.http.ea.JAXRPCServletDelegate.doPost(JAXRPCServletDelegate.java:200)
at com.sun.xml.rpc.server.http.JAXRPCServlet.doPost(JAXRPCServlet.java:133)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:751)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:844)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:280)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:254)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:136)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:346)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79)
at atg.webservice.filter.SOAPFactoriesFilter.doFilter(SOAPFactoriesFilter.java:233)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79)
at atg.webservice.WSDLImportFilter.doFilter(Unknown Source)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3436)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3402)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:57)
at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2285)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2201)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1572)
at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:255)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:263)
<Jul 20, 2016 4:41:38 AM EDT> <Error> <javax.enterprise.resource.webservices.rpc.server.http> <BEA-000000> <JAXRPCSERVLET38: unknown port name: getJobStatus
JAXRPCSERVLET38: unknown port name: getJobStatus
at com.sun.xml.rpc.server.http.ea.ImplementorRegistry.getImplementorInfo(ImplementorRegistry.java:68)
at com.sun.xml.rpc.server.http.ea.ImplementorFactory.getImplementorFor(ImplementorFactory.java:103)
at com.sun.xml.rpc.server.http.ea.JAXRPCServletDelegate.doPost(JAXRPCServletDelegate.java:200)
at com.sun.xml.rpc.server.http.JAXRPCServlet.doPost(JAXRPCServlet.java:133)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:751)
Truncated. see log file for complete stacktrace
I have compared the port name with another working environment, but there was no difference.
I am really stuck on this, please help.
The import process has been kicked off, so I assume you've correctly built the SiebelWS submodule and it's being called by SCOA. It looks like the getJobStatus web service is missing, however. Did you edit the build file to leave this out? Check the Dynamo/Siebel/SiebelWS/j2ee-apps/siebelWS.war file to see if the relevant WSDL and mappings are present.
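A quick way to list the war's contents (the jar tool ships with the JDK; adjust the path to your install):
jar tf Dynamo/Siebel/SiebelWS/j2ee-apps/siebelWS.war | grep -i getJobStatus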

Osmosis not working with mapfilewriter plugin

I've recently downloaded Osmosis to convert .osm.pbf files to .map files, on Windows 7. I downloaded the latest zip file from their site and placed the mapfilewriter jar files into the /lib/default/ folder. However, I keep receiving this error when I run this statement in the .bat file:
osmosis --read-pbf file=taiwanlatest.osm.pbf --mapfile-writer file=helloworld.map
Mar 19, 2013 7:34:49 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Osmosis Version 0.42
Mar 19, 2013 7:34:49 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Preparing pipeline.
Mar 19, 2013 7:34:50 PM org.mapsforge.map.writer.osmosis.MapFileWriterTask <init>
INFO: mapfile-writer version: mapsforge-map-writer-0.3.0
Mar 19, 2013 7:34:50 PM org.mapsforge.map.writer.osmosis.MapFileWriterTask <init>
INFO: mapfile format specification version: 3
Mar 19, 2013 7:34:50 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Launching pipeline execution.
Mar 19, 2013 7:34:50 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Pipeline executing, waiting for completion.
Mar 19, 2013 7:34:50 PM org.openstreetmap.osmosis.core.pipeline.common.ActiveTaskManager waitForCompletion
SEVERE: Thread for task 1-read-pbf failed
java.lang.AbstractMethodError: org.mapsforge.map.writer.osmosis.MapFileWriterTask.initialize(Ljava/util/Map;)V
at crosby.binary.osmosis.OsmosisReader.run(OsmosisReader.java:43)
at java.lang.Thread.run(Thread.java:722)
Mar 19, 2013 7:34:50 PM org.openstreetmap.osmosis.core.Osmosis main
SEVERE: Execution aborted.
org.openstreetmap.osmosis.core.OsmosisRuntimeException: One or more tasks failed.
at org.openstreetmap.osmosis.core.pipeline.common.Pipeline.waitForCompletion(Pipeline.java:146)
at org.openstreetmap.osmosis.core.Osmosis.run(Osmosis.java:92)
at org.openstreetmap.osmosis.core.Osmosis.main(Osmosis.java:37)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:329)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:239)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
at org.codehaus.classworlds.Launcher.main(Launcher.java:47)
I've read up on this, and it seems that the error is caused by Osmosis itself and that I have to use an older version of Osmosis. I tried Osmosis version 0.40 and got this error when I ran the .bat file:
Error: Could not find or load main class org.codehaus.classworlds.Launcher
Where exactly did I go wrong?
After much trial and error, I finally made it work by downloading Osmosis version 0.40.1 here.
In addition, I've added 4 jar files in the /lib/default folder:
mapsforge-map-writer-0.3.0-jar-with-dependencies.jar
mapsforge-map-0.3.0-jar-with-dependencies.jar
trove-3.0.3.zip
jts-1.8.jar
After that, I created a new conf file named "osmosis-plugins.conf" and added the line "org.mapsforge.map.writer.osmosis.MapFileWriterPluginLoader" to it.
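For reference, the file can be created from cmd in one line (this assumes Osmosis reads osmosis-plugins.conf from the directory it is launched from; adjust if your setup differs):
echo org.mapsforge.map.writer.osmosis.MapFileWriterPluginLoader> osmosis-plugins.conf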
After doing these steps, Osmosis finally works (though I'm not sure whether this is the most correct way of doing it).
However, I still don't really understand why the latest version of Osmosis does not work.
Hope this can help those who have faced a similar problem!
Bumped into this problem as well; here's my solution:
1. Download the mapsforge-map-writer jar file (the one with dependencies) from https://search.maven.org/search?q=mapsforge-map
2. Put the downloaded jar file into the osmosis/lib/default directory.
3. Open up cmd, cd to where Osmosis is located, and run a --mapfile-writer command.
Hope this helps!

Default Index Controller Not Being Called With New Zend Studio Project

I have just purchased a license for Zend Studio 9. I have only a minimal amount of experience with the Zend framework, and no previous experience with Zend Studio. I am using http://framework.zend.com/manual/en/ as a tutorial on the framework and have browsed through the resources located at http://www.zend.com/en/products/studio/resources for help with the studio software.
My main problem is that after creating a new Zend project with zstudio, I'm not seeing the initial welcome message. Here are the steps I am using:
I've already installed the Zend Server and confirmed that web apps are working (made some test files, they all parsed correctly).
Create a new project with Zend Studio.
a. File->New->Local PHP Project
b. For location, I am using C:\Program Files\Zend\Apache2\htdocs.
c. For version I used the default "Zend Framework 1.11.11 (Built-in)"
I go to http://localhost:81/projectname. Instead of the default index controller being called, I just see my directory structure.
Additional info:
OS: Windows 7
PHP version: 5.3
ERROR LOGS:
>[Wed Nov 30 14:32:30 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
>[Wed Nov 30 14:32:30 2011] [warn] pid file C:/Program Files (x86)/Zend/Apache2/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
>[Wed Nov 30 14:32:30 2011] [notice] Digest: generating secret for digest authentication ...
>[Wed Nov 30 14:32:30 2011] [notice] Digest: done
>[Wed Nov 30 14:32:31 2011] [notice] Apache/2.2.16 (Win32) mod_ssl/2.2.16 OpenSSL/0.9.8o configured -- resuming normal operations
>[Wed Nov 30 14:32:31 2011] [notice] Server built: Aug 8 2010 16:45:53
>[Wed Nov 30 14:32:31 2011] [notice] Parent: Created child process 13788
>[Wed Nov 30 14:32:32 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
>[Wed Nov 30 14:32:32 2011] [notice] Digest: generating secret for digest authentication ...
>[Wed Nov 30 14:32:32 2011] [notice] Digest: done
>[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Child process is running
>[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Acquired the start mutex.
>[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Starting 64 worker threads.
>[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Starting thread to listen on port 10081.
>[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Starting thread to listen on port 81.
If you navigate to http://localhost:81/projectname/index/index, does the correct screen load?
If so:
Check that the .htaccess file in your public directory contains the correct rewrite rules for Zend Framework.
Check your httpd.conf file and make sure index.php is added to the DirectoryIndex directive.
I think the solution is going to be the second bullet, but let me know what you find and I can help further if that doesn't work. Make sure to restart Apache after you make any changes to httpd.conf.
Otherwise, report any errors you see when you access the controller directly, and check Apache's error_log file to see if you get any errors.
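For reference, the stock Zend Framework 1 .htaccess in the public directory contains these rewrite rules, and the DirectoryIndex directive should list index.php (both shown as sketches; match them to your install):
# public/.htaccess (standard ZF1 rewrite rules)
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -s [OR]
RewriteCond %{REQUEST_FILENAME} -l [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^.*$ - [NC,L]
RewriteRule ^.*$ index.php [NC,L]
# httpd.conf
DirectoryIndex index.php index.html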