Sample project throwing NoClassDefFoundError: com/fasterxml/jackson/databind/ObjectMapper - spring-batch

I have implemented the project as described at https://spring.io/guides/gs/batch-processing/, but I am getting:
Error creating bean with name 'batchConfigurer': Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/ObjectMapper
I am new to Spring Batch. Can anyone please help?

The following is working as expected:
$>git clone https://github.com/spring-guides/gs-batch-processing.git
$>cd gs-batch-processing/complete
$>./mvnw clean install
$>./mvnw spring-boot:run
The output is the following:
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.1.4.RELEASE)
2019-05-30 12:23:12.642 INFO 90644 --- [ main] hello.Application : Starting Application on localhost with PID 90644 (/private/tmp/gs-batch-processing/complete/target/classes started by mbenhassine in /private/tmp/gs-batch-processing/complete)
2019-05-30 12:23:12.646 INFO 90644 --- [ main] hello.Application : No active profile set, falling back to default profiles: default
2019-05-30 12:23:13.333 INFO 90644 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2019-05-30 12:23:13.338 WARN 90644 --- [ main] com.zaxxer.hikari.util.DriverDataSource : Registered driver with driverClassName=org.hsqldb.jdbcDriver was not found, trying direct instantiation.
2019-05-30 12:23:13.683 INFO 90644 --- [ main] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Driver does not support get/set network timeout for connections. (feature not supported)
2019-05-30 12:23:13.687 INFO 90644 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2019-05-30 12:23:14.091 INFO 90644 --- [ main] o.s.b.c.r.s.JobRepositoryFactoryBean : No database type set, using meta data indicating: HSQL
2019-05-30 12:23:14.277 INFO 90644 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : No TaskExecutor has been set, defaulting to synchronous executor.
2019-05-30 12:23:14.437 INFO 90644 --- [ main] hello.Application : Started Application in 2.114 seconds (JVM running for 5.2)
2019-05-30 12:23:14.438 INFO 90644 --- [ main] o.s.b.a.b.JobLauncherCommandLineRunner : Running default command line with: []
2019-05-30 12:23:14.503 INFO 90644 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=importUserJob]] launched with the following parameters: [{run.id=1}]
2019-05-30 12:23:14.530 INFO 90644 --- [ main] o.s.batch.core.job.SimpleStepHandler : Executing step: [step1]
2019-05-30 12:23:14.590 INFO 90644 --- [ main] hello.PersonItemProcessor : Converting (firstName: Jill, lastName: Doe) into (firstName: JILL, lastName: DOE)
2019-05-30 12:23:14.590 INFO 90644 --- [ main] hello.PersonItemProcessor : Converting (firstName: Joe, lastName: Doe) into (firstName: JOE, lastName: DOE)
2019-05-30 12:23:14.590 INFO 90644 --- [ main] hello.PersonItemProcessor : Converting (firstName: Justin, lastName: Doe) into (firstName: JUSTIN, lastName: DOE)
2019-05-30 12:23:14.590 INFO 90644 --- [ main] hello.PersonItemProcessor : Converting (firstName: Jane, lastName: Doe) into (firstName: JANE, lastName: DOE)
2019-05-30 12:23:14.590 INFO 90644 --- [ main] hello.PersonItemProcessor : Converting (firstName: John, lastName: Doe) into (firstName: JOHN, lastName: DOE)
2019-05-30 12:23:14.604 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : !!! JOB FINISHED! Time to verify the results
2019-05-30 12:23:14.607 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JILL, lastName: DOE> in the database.
2019-05-30 12:23:14.607 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JOE, lastName: DOE> in the database.
2019-05-30 12:23:14.607 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JUSTIN, lastName: DOE> in the database.
2019-05-30 12:23:14.607 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JANE, lastName: DOE> in the database.
2019-05-30 12:23:14.607 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JOHN, lastName: DOE> in the database.
2019-05-30 12:23:14.610 INFO 90644 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=importUserJob]] completed with the following parameters: [{run.id=1}] and the following status: [COMPLETED]
As you can see, there is no NoClassDefFoundError.

Two jars were corrupted: jackson-databind and log4j.
During mvn clean install there was an "invalid LOC header (bad signature)" error in the logs, but the build was still reported as successful, so I missed it.
I had to delete the corresponding folders from .m2 and re-run mvn clean install; this resolved the issue.
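For future readers, one way to locate corrupted jars up front is to try reading every jar in the local repository. A minimal Groovy sketch, assuming the default ~/.m2/repository location:
import java.util.zip.ZipFile

def repo = new File(System.getProperty('user.home'), '.m2/repository')
repo.eachFileRecurse { f ->
    if (f.name.endsWith('.jar')) {
        try {
            // Force every entry to be read; a corrupt jar typically fails
            // with "invalid LOC header (bad signature)".
            new ZipFile(f).withCloseable { zip ->
                zip.entries().each { entry -> zip.getInputStream(entry).bytes }
            }
        } catch (Exception e) {
            println "Corrupt jar: ${f} (${e.message})"
        }
    }
}
Delete the folders of any jars it reports, then re-run the build so Maven downloads them again.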

Related

Unable to Get Data from Github Api in Grails App

I'm trying to make an HTTP request to the GitHub API from a Grails controller.
I just started learning Grails yesterday and I'm stuck. I searched the internet for hours, but there seems to be very little discussion about Grails online.
I simply want to call the GitHub API and get user data. I am familiar with the API endpoint and have used it with other frameworks, but I am unable to figure out this (maybe tiny) problem in Grails.
Can anybody tell me how to make API calls from a Grails controller?
Thanks in advance, and apologies for a naïve question.
See the project at github.com/jeffbrown/tauseefahmedgithubapi.
src/main/groovy/tauseefahmedgithubapi/GitHubClient.groovy
package tauseefahmedgithubapi

import io.micronaut.http.annotation.Get
import io.micronaut.http.annotation.Header
import io.micronaut.http.client.annotation.Client
import static io.micronaut.http.HttpHeaders.USER_AGENT

@Client('https://api.github.com/')
interface GitHubClient {
    @Get('/orgs/{org}/repos')
    @Header(name = USER_AGENT, value = 'Micronaut Demo Application')
    List<GitHubRepository> listRepositoriesForOrganization(String org)
}
src/main/groovy/tauseefahmedgithubapi/GitHubRepository.groovy
package tauseefahmedgithubapi

import io.micronaut.core.annotation.Introspected

@Introspected
class GitHubRepository {
    String name
}
grails-app/init/tauseefahmedgithubapi/BootStrap.groovy
package tauseefahmedgithubapi

import org.springframework.beans.factory.annotation.Autowired

class BootStrap {

    @Autowired
    GitHubClient client

    def init = { servletContext ->
        def repos = client.listRepositoriesForOrganization('micronaut-projects')
        for (def repo : repos) {
            log.info "Repo Name: ${repo.name}"
        }
    }

    def destroy = {
    }
}
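Note that the declarative GitHubClient above is backed by the Micronaut HTTP client. If you are adding this to your own app rather than cloning the linked project, you will likely need a dependency along these lines in build.gradle (the coordinates are an assumption; check the linked repository for the exact setup):
dependencies {
    implementation 'io.micronaut:micronaut-http-client'
}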
At application startup you may see output like this:
2021-06-28 13:03:16.834 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: static-website
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: presentations
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-core
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-profiles
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-guides-old
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-examples
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: static-website-test
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-docs
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-test
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-kotlin
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-spring
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-oauth2
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-liquibase
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-flyway
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-elasticsearch
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-graphql
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-grpc
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-kafka
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-netflix
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-groovy
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-micrometer
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-sql
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-mongodb
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-redis
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-neo4j
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-rabbitmq
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-aws
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-rss
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-gcp
2021-06-28 13:03:16.838 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-kubernetes

Single Kafka consumer group for many topics doesn't work with ACLs

I have the following problem:
My application subscribes to many topics (about 16-20) with a single, constant consumer group.
If I use Kafka without SSL and ACLs, it works perfectly.
But on a Kafka cluster with SSL and strictly separated ACLs, I have problems.
If a large share of the topics is not permitted for the client, then even the topics that are allowed by the ACLs are not read: the consumer does not send fetch requests, and no partitions are assigned to it.
2020-11-13 16:18:30.209 [INFO ] [o.a.k.c.c.internals.AbstractCoordinator ] [T:main ] - Successfully joined group PPRBODCommandConsumer with generation 74
2020-11-13 16:18:30.209 [INFO ] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Setting newly assigned partitions [] for group PPRBODCommandConsumer
2020-11-13 16:20:54.449 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:PRBODCommandConsumer] - Sending Heartbeat request for group PPRBODCommandConsumer to coordinator vck4-s012-kfk010.vm.mos.cloud.sbrf.ru:9093 (id: 2147483644 rack: null)
2020-11-13 16:20:54.452 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:main ] - Received successful Heartbeat response for group PPRBODCommandConsumer
2020-11-13 16:20:54.741 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:54.762 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:55.741 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:55.763 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:56.741 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:56.764 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:57.449 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:PRBODCommandConsumer] - Sending Heartbeat request for group PPRBODCommandConsumer to coordinator vck4-s012-kfk010.vm.mos.cloud.sbrf.ru:9093 (id: 2147483644 rack: null)
2020-11-13 16:20:57.453 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:main ] - Received successful Heartbeat response for group PPRBODCommandConsumer
2020-11-13 16:20:57.742 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:57.765 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:58.743 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:58.766 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:59.744 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:20:59.767 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:00.449 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:PRBODCommandConsumer] - Sending Heartbeat request for group PPRBODCommandConsumer to coordinator vck4-s012-kfk010.vm.mos.cloud.sbrf.ru:9093 (id: 2147483644 rack: null)
2020-11-13 16:21:00.453 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:main ] - Received successful Heartbeat response for group PPRBODCommandConsumer
2020-11-13 16:21:00.745 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:00.768 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:01.746 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:01.769 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:02.746 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:02.770 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:03.449 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:PRBODCommandConsumer] - Sending Heartbeat request for group PPRBODCommandConsumer to coordinator vck4-s012-kfk010.vm.mos.cloud.sbrf.ru:9093 (id: 2147483644 rack: null)
2020-11-13 16:21:03.454 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:main ] - Received successful Heartbeat response for group PPRBODCommandConsumer
2020-11-13 16:21:03.747 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:03.771 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:04.748 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:04.771 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:05.749 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:05.772 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:06.450 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:PRBODCommandConsumer] - Sending Heartbeat request for group PPRBODCommandConsumer to coordinator vck4-s012-kfk010.vm.mos.cloud.sbrf.ru:9093 (id: 2147483644 rack: null)
2020-11-13 16:21:06.455 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:main ] - Received successful Heartbeat response for group PPRBODCommandConsumer
2020-11-13 16:21:06.749 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:06.773 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:07.750 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:07.774 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:08.750 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:08.775 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:09.451 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:PRBODCommandConsumer] - Sending Heartbeat request for group PPRBODCommandConsumer to coordinator vck4-s012-kfk010.vm.mos.cloud.sbrf.ru:9093 (id: 2147483644 rack: null)
2020-11-13 16:21:09.455 [DEBUG] [o.a.k.c.c.internals.AbstractCoordinator ] [T:main ] - Received successful Heartbeat response for group PPRBODCommandConsumer
2020-11-13 16:21:09.751 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:09.776 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:10.752 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:10.777 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:11.753 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Sending asynchronous auto-commit of offsets {} for group PPRBODCommandConsumer
2020-11-13 16:21:11.778 [DEBUG] [o.a.k.c.c.internals.ConsumerCoordinator ] [T:main ] - Completed auto-commit of offsets {} for group PPRBODCommandConsumer
When I produce to those topics, the messages are not read by the consumer.
But if all topics are permitted for the client, or if I use an individual consumer group for each topic, it works perfectly.
Is this a known Kafka bug?
Or is Kafka not designed to be used with a single consumer group for many topics?
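For reference: with an authorizer enabled, the consumer needs Read permission on the group as well as Read on every topic it subscribes to, and the Java client raises TopicAuthorizationException when any subscribed topic is unauthorized, which can stall the whole subscription. A sketch of granting both with the standard kafka-acls tool (the principal, ZooKeeper address, and topic name below are placeholders):
$> kafka-acls.sh --authorizer-properties zookeeper.connect=zk:2181 \
     --add --allow-principal User:myclient \
     --operation Read --topic my-topic --group PPRBODCommandConsumer
Splitting into one consumer group per topic, as observed above, merely works around the per-topic authorization failures.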

PATCH 405 (Method Not Allowed) Groovy

I'm trying to perform an HTTP operation with the PATCH method from a Groovy script. If I make the request with the Postman interface I get a 200 OK, but when I make it from the Groovy script I get a 405 error code.
Postman request:
[Postman request screenshot]
The same request is made from Groovy with JSON data.
The function that processes the request is the following:
public Object sendHttpRequest(String url, String operation, String jsonData,
        String user, String password) throws Exception {
    println("Start sendHttpRequest() method");
    Object gesdenResponse = null;
    HttpURLConnection conn = null;
    try {
        println("Opening HTTP connection...");
        println("URL: " + url);
        URL obj = new URL(url);
        conn = (HttpURLConnection) obj.openConnection();
        conn.setRequestProperty("Authorization", String.format("Basic %s", getProtectedCredentials(user, password)));
        println("Header \"Authorization: *****\" set up");
        String method = null;
        switch (operation) {
            case "PASSWORD":
                method = "PATCH";
                println("PASSWORD Operation");
                break;
            default:
                break;
        }
        if (method?.equals("PUT") || method?.equals("POST") || method?.equals("PATCH")) {
            conn.setDoOutput(true);
        }
        if (method == "PATCH") {
            // HttpURLConnection does not support PATCH, so tunnel it through POST
            // with the X-HTTP-Method-Override header; the server must honor it.
            println("Overriding headers for PATCH");
            conn.setRequestProperty("X-HTTP-Method-Override", "PATCH");
            conn.setRequestMethod("POST");
        } else {
            conn.setRequestMethod(method);
        }
        println("Setting up custom HTTP headers...");
        conn.setRequestProperty(GesdenConstants.HTTP_CUSTOM_HEADER_SYSTEM_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_SYSTEM_VALUE);
        println(String.format("Custom header \"%s: %s\" set up", GesdenConstants.HTTP_CUSTOM_HEADER_SYSTEM_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_SYSTEM_VALUE));
        conn.setRequestProperty(GesdenConstants.HTTP_CUSTOM_HEADER_ACCEPT_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_ACCEPT_VALUE);
        println(String.format("Custom header \"%s: %s\" set up", GesdenConstants.HTTP_CUSTOM_HEADER_ACCEPT_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_ACCEPT_VALUE));
        conn.setRequestProperty(GesdenConstants.HTTP_CUSTOM_HEADER_CONTENT_TYPE_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_CONTENT_TYPE_VALUE);
        println(String.format("Custom header \"%s: %s\" set up", GesdenConstants.HTTP_CUSTOM_HEADER_CONTENT_TYPE_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_CONTENT_TYPE_VALUE));
        if (jsonData != null && !jsonData.isEmpty()) {
            byte[] body = jsonData.getBytes("UTF-8");
            conn.setRequestProperty("Content-Length", Integer.toString(body.length));
            conn.getOutputStream().write(body);
            println("JSON data set up: " + conn);
        }
        println("Waiting for server response...");
        BufferedReader input = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        String inputLine;
        StringBuffer data = new StringBuffer();
        while ((inputLine = input.readLine()) != null) {
            data.append(inputLine);
            println("Line: " + inputLine);
        }
        gesdenResponse = data.toString();
    } catch (Exception e) {
        throw e;
    } finally {
        if (conn != null) {
            conn.disconnect();
            println("HTTP connection closed");
        }
        println("Finish sendHttpRequest() method");
    }
    return gesdenResponse;
}
The log output from the code is the following:
2019-08-30 10:18:07.981 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : ***** SET PASSWORD started ******
2019-08-30 10:18:08.579 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : Show me the url: http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.586 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Start toJSON() method
2019-08-30 10:18:08.589 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Finish toJSON() method
2019-08-30 10:18:08.589 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : The JSON body is: {"password":"Pabloarevalo11"}
2019-08-30 10:18:08.589 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : ***** SET PASSWORD before the response ******
2019-08-30 10:18:08.592 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Start sendHttpRequest() method
2019-08-30 10:18:08.592 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Opening HTTP connection...
2019-08-30 10:18:08.592 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : URL: http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.594 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Header "Authorization: *****" set up
2019-08-30 10:18:08.594 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : PASSWORD Operation
2019-08-30 10:18:08.594 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Method: PATCH
2019-08-30 10:18:08.595 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : OVERRIDING HEADERS FOR PATCH
2019-08-30 10:18:08.595 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Setting up custom HTTP headers...
2019-08-30 10:18:08.595 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Custom header "Sistema: Sanitas" set up
2019-08-30 10:18:08.595 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Custom header "Accept: application/json" set up
2019-08-30 10:18:08.595 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Custom header "Content-Type: application/json" set up
2019-08-30 10:18:08.596 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : JSON data set up: sun.net.www.protocol.http.HttpURLConnection:http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.596 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Waiting for server response...
2019-08-30 10:18:08.596 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : conn is sun.net.www.protocol.http.HttpURLConnection:http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.598 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Received: Server returned HTTP response code: 405 for URL: http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.598 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : HTTP connection closed
2019-08-30 10:18:08.598 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Finish sendHttpRequest() method
2019-08-30 10:18:08.598 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : An exception occurred while setting the password for user popen070: Server returned HTTP response code: 405 for URL: http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.598 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : ****** SET PASSWORD finished ******
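For what it's worth, HttpURLConnection has no native PATCH support, which is why the X-HTTP-Method-Override workaround exists in the first place; a 405 suggests this endpoint does not honor the override on POST. If Java 11+ is available, java.net.http.HttpClient supports PATCH directly. A minimal sketch (the url, jsonData, and credentials variables are placeholders for the values used in sendHttpRequest() above):
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Send a real PATCH request instead of tunneling it through POST.
def client = HttpClient.newHttpClient()
def request = HttpRequest.newBuilder(URI.create(url))
        .method('PATCH', HttpRequest.BodyPublishers.ofString(jsonData))
        .header('Authorization', "Basic ${credentials}")
        .header('Content-Type', 'application/json')
        .header('Accept', 'application/json')
        .build()
def response = client.send(request, HttpResponse.BodyHandlers.ofString())
println "Status: ${response.statusCode()}, body: ${response.body()}"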

Snappy ingestion into Druid

I am facing a problem with Snappy ingestion into Druid. Things start to break after org.apache.hadoop.mapred.LocalJobRunner - map task executor complete. It is able to fetch the input file.
My spec JSON file:
{
  "hadoopCoordinates": "org.apache.hadoop:hadoop-client:2.6.0",
  "spec": {
    "dataSchema": {
      "dataSource": "apps_searchprivacy",
      "granularitySpec": {
        "intervals": [
          "2017-01-23T00:00:00.000Z/2017-01-23T01:00:00.000Z"
        ],
        "queryGranularity": "HOUR",
        "segmentGranularity": "HOUR",
        "type": "uniform"
      },
      "metricsSpec": [
        { "name": "count", "type": "count" },
        { "fieldName": "event_value", "name": "event_value", "type": "longSum" },
        { "fieldName": "landing_impression", "name": "landing_impression", "type": "longSum" },
        { "fieldName": "user", "name": "DistinctUsers", "type": "hyperUnique" },
        { "fieldName": "cost", "name": "cost", "type": "doubleSum" }
      ],
      "parser": {
        "parseSpec": {
          "dimensionsSpec": {
            "dimensionExclusions": [
              "landing_page", "skip_url", "ua", "user_id"
            ],
            "dimensions": [
              "t3", "t2", "t1", "aff_id", "customer", "evt_id",
              "install_date", "install_week", "install_month", "install_year",
              "days_since_install", "months_since_install", "weeks_since_install",
              "success_url", "event", "chrome_version", "value", "event_label",
              "rand", "type_tag_id", "channel_name", "cid", "log_id", "extension",
              "os", "device", "browser", "cli_ip", "t4", "t5", "referal_url",
              "week", "month", "year", "browser_version", "browser_name",
              "landing_template", "strvalue", "customer_group", "extname",
              "countrycode", "issp", "spdes", "spsc"
            ],
            "spatialDimensions": []
          },
          "format": "json",
          "timestampSpec": {
            "column": "time_stamp",
            "format": "yyyy-MM-dd HH:mm:ss"
          }
        },
        "type": "hadoopyString"
      }
    },
    "ioConfig": {
      "inputSpec": {
        "dataGranularity": "hour",
        "filePattern": ".*\\..*",
        "inputPath": "hdfs://c8-auto-hadoop-service-1.srv.media.net:8020/data/apps_test_output",
        "pathFormat": "'ts'=yyyyMMddHH",
        "type": "granularity"
      },
      "type": "hadoop"
    },
    "tuningConfig": {
      "ignoreInvalidRows": "true",
      "type": "hadoop",
      "useCombiner": "false"
    }
  },
  "type": "index_hadoop"
}
The error I am getting:
2017-02-03T14:39:50,738 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - (EQUATOR) 0 kvi 26214396(104857584)
2017-02-03T14:39:50,738 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - mapreduce.task.io.sort.mb: 100
2017-02-03T14:39:50,738 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - soft limit at 83886080
2017-02-03T14:39:50,738 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - bufstart = 0; bufvoid = 104857600
2017-02-03T14:39:50,738 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - kvstart = 26214396; length = 6553600
2017-02-03T14:39:50,738 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2017-02-03T14:39:50,847 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - Starting flush of map output
2017-02-03T14:39:50,849 INFO [Thread-22] org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2017-02-03T14:39:50,850 WARN [Thread-22] org.apache.hadoop.mapred.LocalJobRunner - job_local233667772_0001
java.lang.Exception: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) ~[hadoop-mapreduce-client-common-2.6.0.jar:?]
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) [hadoop-mapreduce-client-common-2.6.0.jar:?]
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method) ~[hadoop-common-2.6.0.jar:?]
at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63) ~[hadoop-common-2.6.0.jar:?]
at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:192) ~[hadoop-common-2.6.0.jar:?]
at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:176) ~[hadoop-common-2.6.0.jar:?]
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:90) ~[hadoop-mapreduce-client-core-2.6.0.jar:?]
at org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader.initialize(DelegatingRecordReader.java:84) ~[hadoop-mapreduce-client-core-2.6.0.jar:?]
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:545) ~[hadoop-mapreduce-client-core-2.6.0.jar:?]
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:783) ~[hadoop-mapreduce-client-core-2.6.0.jar:?]
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) ~[hadoop-mapreduce-client-core-2.6.0.jar:?]
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) ~[hadoop-mapreduce-client-common-2.6.0.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_121]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_121]
2017-02-03T14:39:51,130 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_local233667772_0001 failed with state FAILED due to: NA
2017-02-03T14:39:51,139 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Counters: 0
2017-02-03T14:39:51,143 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[var/druid/hadoop-tmp/apps_searchprivacy/2017-02-03T143903.262Z_bb7a812bc0754d4aabcd4bc103ed648a]
2017-02-03T14:39:51,158 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_apps_searchprivacy_2017-02-03T14:39:03.257Z, type=index_hadoop, dataSource=apps_searchprivacy}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:204) ~[druid-indexing-service-0.9.2.jar:0.9.2]
at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:208) ~[druid-indexing-service-0.9.2.jar:0.9.2]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.jar:0.9.2]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.jar:0.9.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.jar:0.9.2]
... 7 more
Caused by: com.metamx.common.ISE: Job[class io.druid.indexer.IndexGeneratorJob] failed!
at io.druid.indexer.JobHelper.runJobs(JobHelper.java:369) ~[druid-indexing-hadoop-0.9.2.jar:0.9.2]
at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:94) ~[druid-indexing-hadoop-0.9.2.jar:0.9.2]
at io.druid.indexing.common.task.HadoopIndexTask$HadoopIndexGeneratorInnerProcessing.runTask(HadoopIndexTask.java:261) ~[druid-indexing-service-0.9.2.jar:0.9.2]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.jar:0.9.2]
... 7 more
2017-02-03T14:39:51,165 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_apps_searchprivacy_2017-02-03T14:39:03.257Z] status changed to [FAILED].
2017-02-03T14:39:51,168 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
"id" : "index_hadoop_apps_searchprivacy_2017-02-03T14:39:03.257Z",
"status" : "FAILED",
"duration" : 43693
}
It seems the JVM can't load the native shared library (a .dll or .so). Check that it is available on the machine(s) running the task, and if so, check that its directory is on the JVM's java.library.path.
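As a first check, the hadoop checknative tool reports whether the native Snappy bindings are loadable on a given machine:
$> hadoop checknative -a
If snappy is reported as false for the task JVMs, one common fix (a sketch; the native-library path below is an example and varies by installation) is to pass java.library.path to the map and reduce JVMs via jobProperties in the tuningConfig:
"tuningConfig": {
  "type": "hadoop",
  "jobProperties": {
    "mapreduce.map.java.opts": "-Djava.library.path=/usr/lib/hadoop/lib/native",
    "mapreduce.reduce.java.opts": "-Djava.library.path=/usr/lib/hadoop/lib/native"
  }
}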

Ubuntu Zend Framework CLI securityCheck Error

I followed all the instructions. I am on Ubuntu 10.10, using Zend Server CE.
In my .bashrc I have LD_LIBRARY_PATH, the Zend Framework library path, and so on.
I can run zf, but it gives an error:
Fatal error: Uncaught exception 'Zend_Exception' with message 'Security check: Illegal character in filename' in /usr/local/zend/share/ZendFramework/library/Zend/Loader.php:303
Stack trace:
#0 /usr/local/zend/share/ZendFramework/library/Zend/Loader.php(128): Zend_Loader::_securityCheck('Zend/Tool/Proje...')
#1 /usr/local/zend/share/ZendFramework/library/Zend/Loader.php(94): Zend_Loader::loadFile('Zend/Tool/Proje...', NULL, true)
#2 /usr/local/zend/share/ZendFramework/library/Zend/Tool/Project/Context/Repository.php(88): Zend_Loader::loadClass('Zend_Tool_Proje...')
#3 /usr/local/zend/share/ZendFramework/library/Zend/Tool/Project/Context/Repository.php(79): Zend_Tool_Project_Context_Repository->addContextClass('Zend_Tool_Proje...')
#4 /usr/local/zend/share/ZendFramework/library/Zend/Tool/Project/Provider/Abstract.php(85): Zend_Tool_Project_Context_Repository->addContextsFromDirectory('/usr/local/zend...', 'Zend_Tool_Proje...')
#5 /usr/local/zend/share/ZendFramework/library/Zend/Tool/Framework/Provider/Repository.php(187): Z in /usr/local/zend/share/ZendFramework/library/Zend/Loader.php on line 303
If I comment out line 303 of Zend/Loader.php, it seems to work, but if I try to create a controller or something, it gives an error like the one below:
[ 21.01.2011 10:26:40 ERROR] [ ZendExtensionManager.cpp : 654 ( sig_handler ) ] ZendExtensionManager got SIG 11 at pid 4781 !
[ 21.01.2011 10:26:40 ERROR] [ ZendExtensionManager.cpp : 667 ( sig_handler ) ] Crash happened during IDLE stage
[ 21.01.2011 10:26:40 ERROR] [ ZendExtensionManager.cpp : 670 ( sig_handler ) ] The stack trace follows:
[ 21.01.2011 10:26:40 SYSTEM] Obtained 20 stack frames
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/lib/ZendExtensionManager.so(+0x21c1e) [0xb718fc1e]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/lib/ZendExtensionManager.so(+0xf0b7) [0xb717d0b7]
[ 21.01.2011 10:26:40 SYSTEM] [0xb78d3400]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php() [0x81ccc07]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php() [0x830e044]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php() [0x82e2961]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php(execute+0x212) [0x82e4032]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/lib/debugger/php-5.3.x/ZendDebugger.so(+0x4bed6) [0xb2c72ed6]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php() [0x830daef]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php() [0x82e2961]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php(execute+0x212) [0x82e4032]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/lib/debugger/php-5.3.x/ZendDebugger.so(+0x4bed6) [0xb2c72ed6]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php() [0x830daef]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php() [0x82e2961]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php(execute+0x212) [0x82e4032]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/lib/debugger/php-5.3.x/ZendDebugger.so(+0x4bed6) [0xb2c72ed6]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php() [0x830daef]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php() [0x82e2961]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/bin/php(execute+0x212) [0x82e4032]
[ 21.01.2011 10:26:40 SYSTEM] /usr/local/zend/lib/debugger/php-5.3.x/ZendDebugger.so(+0x4bed6) [0xb2c72ed6]
Segmentation fault
How can I solve this problem?
In zf.sh, just before the line if test "#php_bin#" != '#'php_bin'#'; then, you must add:
LANG=C
export LANG
Everything is working now, and I am happy...
I understand how the error occurred.
My operating system uses UTF-8 character encoding for file names. When PHP is run from the terminal with UTF-8-encoded file names, it gives this error. In my language, the uppercase of 'ı' is 'I', but PHP expects it to be 'i'. If I convert the file names to ASCII in PHP using iconv, the error happens again, because PHP can then no longer find the location of the file. I found the cause of the problem but still do not know the solution.
Thank you.
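A hypothetical illustration of the casing issue described above (the exact output depends on which locales are installed):
<?php
// Under a Turkish locale, single-byte case conversion may not map 'i'/'I'
// the way ASCII-based code expects, which can break naive filename checks.
setlocale(LC_CTYPE, 'tr_TR.UTF-8');
var_dump(strtoupper('i'));  // may not be string(1) "I"
var_dump(strtolower('I'));  // may not be string(1) "i"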
I believe this is not a ZF issue. It is either a PHP bug or a misconfiguration.
Replace this method in Zend_Loader and post the output here so I can confirm my suspicion:
protected static function _securityCheck($filename)
{
    var_dump($filename); exit;
}
Also try this:
mb_internal_encoding('UTF-8');
mb_regex_encoding('UTF-8');
Sorry if I'm wrong. I'm not an expert here.