I'm trying to run a map/reduce query on a MongoDB collection and I'm getting the following error:
uncaught exception: map reduce failed:{
"errmsg" : "exception: cannot run map reduce without the js engine",
"code" : 16149,
"ok" : 0
}
I can't seem to turn anything up on Google for it. I've tried searching for the exception message, but it seems no one has written about it. I suspected a privilege issue to start with, but can't find a related privilege.
I didn't set the instance up, so is there some sort of configuration option that could have disabled the js engine, or perhaps a memory limit or something?
We had the same issue in our system.
MongoDB ships with the V8 JavaScript engine enabled by default, but in our SIT/UAT environments our DBA had disabled it, since running server-side JavaScript raises security concerns.
After this change our application started throwing all of these map/reduce errors; re-enabling the JS engine (the usev8 flag) resolved the issue.
If you build mongo from source, you can enable it with:
scons --release --usev8
Hope this helps.
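For anyone who can't rebuild from source: on packaged mongod binaries, server-side JavaScript is governed by a startup option, not just a compile flag. Assuming a reasonably recent MongoDB release, the relevant mongod.conf setting (the command-line equivalent of disabling it is --noscripting) looks like:

```yaml
# mongod.conf -- mapReduce (and $where) require server-side JavaScript.
# Setting this to false, or starting mongod with --noscripting, produces
# exactly the "cannot run map reduce without the js engine" error.
security:
  javascriptEnabled: true
```

Restart mongod after changing this.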
Our terraform plan is suddenly reporting errors such as the following while refreshing state:
Error: multiple IAM policies found matching criteria (ARN:arn:aws:iam::aws:policy/ReadOnlyAccess); try different search;
on ../../modules/xxxx/policies.tf line 9, in data "aws_iam_policy" "read_only_access":
9: data "aws_iam_policy" "read_only_access" {
and
Error: no IAM policy found matching criteria (ARN: arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy); try different search
on ../../modules/xxxx/iam.tf line 97, in data "aws_iam_policy" "aws_eks_worker_node":
97: data "aws_iam_policy" "aws_eks_worker_node" {
We recently upgraded our dev EKS cluster from 1.20 to 1.21. Stage and Live environments are still on 1.20, but they are built from the same module. We didn't see these errors until a day after the upgrade, and there were no changes to the reported terraform files. The errors also appear to be somewhat intermittent and random: one plan run succeeds, while the next fails on some of the policies we have defined.
I know this is a shot in the dark with limited information, so please ask questions if you have them. I'm really just looking for someone who knows what this error means, because Google isn't returning anything useful.
We are also running terraform version 0.14.
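Not an answer to why the lookup is flaky, but one workaround sketch while you investigate, assuming those data sources exist only to resolve the ARNs of AWS-managed policies: those ARNs are stable and well known, so you can bypass the lookup entirely. The role name below is a placeholder:

```hcl
# Hypothetical replacement for the flaky data "aws_iam_policy" lookups:
# AWS-managed policy ARNs are fixed, so they can be hardcoded in locals.
locals {
  read_only_access_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
  eks_worker_node_arn  = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "aws_eks_worker_node" {
  role       = "my-eks-node-role" # placeholder
  policy_arn = local.eks_worker_node_arn
}
```

This removes the intermittent refresh-time lookup at the cost of hardcoding ARNs that, for AWS-managed policies, do not change.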
I am running into a very strange issue with Spring Boot and Spring Data: after I manually close a connection, the formerly working application seems to "forget" which schema it's using and complains about missing relations.
Here's the code snippet in question:
try (Connection connection = this.dataSource.getConnection()) {
    ScriptUtils.executeSqlScript(connection, new ClassPathResource("/script.sql"));
}
This code works fine, but after it executes, the application immediately starts throwing errors like the following:
org.postgresql.util.PSQLException: ERROR: relation "some_table" does not exist
Prior to executing the code above, the application works fine (including referencing the table it later complains about). If I remove the try-with-resources block and do not close the Connection, everything also works fine, except that I've now created a resource leak. I have also tried explicitly setting the default schema (public) in the following ways:
In the JDBC URL with the currentSchema parameter
With the spring.datasource.hikari.schema parameter
With the spring.datasource.jpa.properties.hibernate.default_schema property
The last does alleviate the issue with respect to Hibernate managed classes, but the issue persists with native queries. I could, of course, make the schema explicit in those queries, but that doesn't seem to address the root issue. Why would closing a connection trigger this behavior?
My environment:
Spring Boot 2.5.1
PostgreSQL 12.7
Thanks to several users above who immediately saw what I did not. The script, adapted from an older pg_dump run, was indeed mucking with the search_path:
SELECT pg_catalog.set_config('search_path', '', false);
Because the connection came from the Hikari pool, closing it merely returned it to the pool with the emptied search_path still in effect, so later queries on that pooled connection could no longer resolve unqualified table names. Removing that line, and some other unnecessary ones, resolved the problem. Big duh on my part.
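For anyone hitting the same symptom, a quick way to confirm and undo the poisoned session from SQL (assuming the tables live in the default public schema):

```sql
-- After the pg_dump preamble runs, the session's search path comes back empty:
SHOW search_path;

-- Restore the PostgreSQL default so unqualified names resolve again.
SET search_path TO "$user", public;
```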
We have developed two interactive XDPs with some pre-population and data binding, which are rendered as interactive PDFs. Whenever we deploy the XDPs in our ETE environment, everything is perfect and works fine. We have also developed a REST API which generates the PDF and binds values from the front end.
The problem is that whenever we deploy the XDPs in the QA environment and try to consume the same REST API to bind dynamic values and generate the same PDF documents, document generation fails. I checked the error logs of the AEM instance and I am getting the below. Can somebody please help us out here, as we are not able to find the root cause of this failure specific to the QA environment.
09.07.2019 16:53:13.307 *ERROR* [10.52.160.35 [1562683992994] POST /content/AemFormsSamples/renderpdfform.html HTTP/1.1] com.adobe.fd.readerextensions.service.impl.ReaderExtensionsServiceImpl AEM-REX-001-008: Unable to apply the requested usage rights to the given document.
java.lang.NullPointerException: null
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7242)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
I'm trying to deploy the MongoDB quick start as seen here.
Unfortunately, it quickly fails with status CREATE_FAILED and the following error, which doesn't tell me much:
Embedded stack arn:aws:cloudformation:us-west-****** was not successfully created:
The following resource(s) failed to create:
[NAT1EIP, NAT2EIP, PublicSubnet1RouteTableAssociation, PrivateSubnet2ARouteTableAssociation, PublicSubnetRoute, PrivateSubnet1ARouteTableAssociation, PublicSubnet2RouteTableAssociation].
I tried using both my own user's role and a new role I created with CloudFormation as the trusted entity and PowerUser permissions. It failed in both cases.
I'm surely missing something very basic, any thoughts?
Thank you
If you are unable to create these networking resources, it could very well be due to IAM user restrictions.
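A generic first step with a failed embedded stack is to ask CloudFormation for the per-resource failure reasons; a sketch with the AWS CLI (the stack name below is a placeholder) might look like:

```shell
# List each failed resource in the nested stack together with its status reason.
# For NAT EIPs, one common reason is hitting the per-region Elastic IP quota.
STACK_NAME="my-mongodb-quickstart"   # placeholder: use the embedded stack's name or ARN
aws cloudformation describe-stack-events \
  --stack-name "$STACK_NAME" \
  --query "StackEvents[?ResourceStatus=='CREATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" \
  --output table
```

The ResourceStatusReason column usually names the missing permission or exceeded quota directly.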
I am trying to deploy a codebase that has a number of numba.njit functions with cache=True.
It works fine running locally (Mac OS X 10.12.3), but on the remote machine (Ubuntu 14.04 on AWS) I am getting the following error:
RuntimeError at /portal/
cannot cache function 'filter_selection':
no locator available for file:
'/srv/run/miniconda/envs/mbenv/lib/python2.7/site-packages/mproj/core_calcs/filter.py'
I looked through the numba codebase, and I saw this file: https://github.com/numba/numba/blob/master/numba/caching.py
It appears that the following call is returning None instead of a locator, which causes this exception to be raised:
cls.from_function(py_func, source_path)
My guess is that this is a permissions issue writing the __pycache__ folders, but I didn't see a way in the numba docs to specify the cache folder location (CACHE_DIR).
Has anyone hit this before, and if so, what is the suggested work-around?
Setting sys.frozen = True before the for cls in self._locator_classes: loop in caching.py can eliminate the issue.
I have no idea whether this setting will impact performance.
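A less invasive workaround than patching caching.py, assuming a newer Numba release: the cache directory can be redirected with the NUMBA_CACHE_DIR environment variable, which sidesteps permission problems under site-packages:

```shell
# Point Numba's on-disk cache at a directory the service user can write to.
# NUMBA_CACHE_DIR is honored by newer Numba releases; older ones may ignore it.
export NUMBA_CACHE_DIR=/tmp/numba_cache
mkdir -p "$NUMBA_CACHE_DIR"
```

Older Numba versions that predate this variable would still need the caching.py patch above or writable package directories.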