ModuleNotFoundError: No module named 'odoo.addons.base.res' - odoo-12

I got the above error when I try to run the following code:
from odoo import models, fields, api
from odoo.exceptions import ValidationError
from odoo.addons.base.res.res_request import referenceable_models

The error occurs because in Odoo 12 there is no res package inside the base module any more; the base addon's files were reorganized, which is why the old import path fails.
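In Odoo 12 the base addon's Python files were moved under models/. A minimal sketch of the corrected import, assuming res_request.py still defines referenceable_models under odoo/addons/base/models/ in your Odoo 12 install:

from odoo import models, fields, api
from odoo.exceptions import ValidationError
# Odoo 12 keeps the base addon's files under models/, not res/
from odoo.addons.base.models.res_request import referenceable_models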

Related

Unable to call Notebook when using Scala code in Databricks

I am in a situation where I am able to successfully run the snippet below in Azure Databricks from a separate cell:
%run ./HSCModule
But I run into issues when that command is included in a cell together with other Scala code that imports the packages below; I then get the following error.
import java.io.{File, FileInputStream}
import java.text.SimpleDateFormat
import java.util.{Calendar, Properties}
import org.apache.spark.SparkException
import org.apache.spark.sql.SparkSession
import scala.collection.JavaConverters._
import scala.util._
Error:
:168: error: ';' expected but '.' found.
%run ./HSCModule
FYI, I have also tried dbutils.notebook.run and am still facing the same issue.
You can't mix magic commands such as %run, %pip, etc. with Scala/Python code in the same cell. The documentation says:
%run must be in a cell by itself, because it runs the entire notebook inline.
So you need to put this magic command into a separate cell.
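If you need to trigger the other notebook from code rather than from a magic command, dbutils.notebook.run can be called from an ordinary code cell. A minimal sketch, shown here as a Python cell (the 60-second timeout is an arbitrary assumption; dbutils is predefined inside Databricks notebooks and is not available outside them):

# Runs the child notebook and returns whatever it passes to dbutils.notebook.exit()
result = dbutils.notebook.run("./HSCModule", 60)
print(result)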

ImportError: cannot import name 'Template'

import numpy as np  # importing numpy is what triggers the error described below
from statistics import mean

x = [1, 2, 3, 4, 5]
y = [6, 7, 8, 9, 10]
# Plain lists don't support x*y or x**2, so build the elementwise values first
xy = [a * b for a, b in zip(x, y)]
xx = [a * a for a in x]
# Slope of the least-squares line; note the parentheses around the denominator
m = (mean(x) * mean(y) - mean(xy)) / (mean(x) ** 2 - mean(xx))
print(m)
In the above (or any other code where I import numpy), I first get an input request when running the program, something like this:
PS D:\Codes\Python> python practice.py
Enter no.: 1
Enter: 1
This should not happen, since the values are initialized. I saw in other forums that the file should not be named after a Python module (which, as you can see, it isn't). Even after that I'm getting this error:
"C:\Users\KIIT\AppData\Local\Programs\Python\Python36\lib\logging\__init__.py", line 28, in <module>
from string import Template
ImportError: cannot import name 'Template'
Can someone please tell me what to do about it?
Edit:
This problem only occurs in PowerShell: it appears when I run the program through PowerShell, but it works fine in IDLE.
There must be a file named string.py in your D:\Codes\Python directory.
You can rename it to solve this issue.
You should not give a file the same name as a module in the Python standard library.
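A quick way to confirm the shadowing, not part of the original answer, is to run the following from D:\Codes\Python and see which file Python actually imports:

import string
# If this prints D:\Codes\Python\string.py rather than a path inside the
# Python installation, the local file is shadowing the standard library module.
print(string.__file__)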

Vue import failing

I've created a Vue application scaffolded from the Vue CLI. Almost everything in my app behaves as expected, except for an issue with an import.
The following works fine:
import Vuex from 'vuex';
but, this throws errors:
import { VuetronVue, VuetronVuex } from 'vuetron';
vue.use(VuetronVue);
Linting error:
"export 'VuetronVue' was not found in 'vuetron'
and Console error:
Uncaught TypeError: Cannot read property 'install' of undefined
Changing the code to:
import vuetron from 'vuetron'
vue.use(vuetron.VuetronVue);
resolves the issue...
This original code was taken directly from the Vuetron documentation. Does anyone have a suggestion as to why the ES6 named-import syntax would cause an issue?
This seems to be because
vuetron/packages/vuetron-plugins/index.js
only exports the default object:
import VuetronVue from './vuetron-vue';
import VuetronVuex from './vuetron-vuex';
export default {
VuetronVue,
VuetronVuex
};
For named imports to work as stated in the docs, the package would need named exports.

Spark Shell Import Fine, But Throws Error When Referencing Classes

I am a beginner in Apache Spark, so please excuse me if this is quite trivial.
Basically, I was running the following import in spark-shell:
import org.apache.spark.sql.{DataFrame, Row, SQLContext, DataFrameReader}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql._
import org.apache.hadoop.hive.ql.io.orc.{OrcInputFormat,OrcStruct};
import org.apa‌​che.hadoop.io.NullWritable;
...
val rdd = sc.hadoopFile(path,
classOf[org.apache.hadoop.hive.ql.io.orc.OrcInputFor‌​mat],
classOf[NullWritable],
classOf[OrcStruct],
1)
The import statements up to and including OrcInputFormat work fine, with the exception that:
error: object apa‌​che is not a member of package org
import org.apa‌​che.hadoop.io.NullWritable;
This does not make sense, given that the preceding import statements go through without any issue.
In addition, when referencing OrcInputFormat, I was told:
error: type OrcInputFor‌​mat is not a member of package org.apache.hadoop.hive.ql.io.orc
It seems strange for the import of OrcInputFormat to work (I assume it does, since no error is thrown) and then for the above error message to turn up. Basically, I am trying to read ORC files from S3, and I am trying to work out what I have done wrong and why this happens.
What I have done:
I have tried running spark-shell with the --jars option, importing hadoop-common-2.6.0.jar (my current version of Spark is 1.6.1, compiled with Hadoop 2.6).
I have tried val df = sqlContext.read.format("orc").load(PathToS3), as suggested in "Read ORC files directly from Spark shell". I have tried the s3, s3n, and s3a variants, without any success.
You have two non-printing characters between org.apa and che in the last import, almost certainly due to a copy-paste:
import org.apa‌​che.hadoop.io.NullWritable;
Just retype the last import statement and it will work. Also, you don't need the semicolons.
You have the same problem with OrcInputFormat:
error: type OrcInputFor‌​mat is not member of package org.apache.hadoop.hive.ql.io.orc
Funnily enough, in the mobile version of Stack Overflow you can clearly see those non-printing characters.
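If you want to check a pasted line for such characters yourself, here is a minimal sketch in Python (the zero-width characters are written as explicit escapes purely for illustration):

import unicodedata

# A copy of the broken import line, with the invisible characters made explicit
line = "import org.apa\u200c\u200bche.hadoop.io.NullWritable"
for ch in line:
    if ord(ch) > 127:
        # Print the code point and name of every non-ASCII character
        print(hex(ord(ch)), unicodedata.name(ch, "UNKNOWN"))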

alembic/env.py target_metadata = metadata "No module named al_test.models"

When I use alembic to control the versions of my project's database, part of the code in env.py looks like:
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from al_test.models import metadata
target_metadata = metadata
When I run alembic revision --autogenerate -m "Added user table", I get an error:
File "alembic/env.py", line 18, in
from al_test.models import metadata
ImportError: No module named al_test.models
How do I solve this? Thanks!
This might be a bit late, and you may have already figured out the issue, but my guess is that the directory containing al_test is not on the system path. I.e., you need to do something like:
import sys
# Placeholder path: append the directory that contains the al_test package
sys.path.append('path/to/parent-of-al_test')
from al_test.models import metadata
Update your env.py like this, to add the current working directory to the sys.path that Python uses when searching for modules:
import os
import sys

# Add the current working directory so the al_test package can be found
sys.path.append(os.getcwd())
from al_test.models import metadata
target_metadata = metadata
....
....
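If alembic is invoked from a directory other than the project root, os.getcwd() will point at the wrong place. A hedged alternative sketch is to derive the path from env.py's own location instead, assuming env.py sits in an alembic/ directory one level below the directory that contains al_test:

import os
import sys

# Resolve the project root relative to this env.py file rather than the
# current working directory, so alembic can be run from anywhere
project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(project_root)

from al_test.models import metadata
target_metadata = metadata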