Which package import style is better? [duplicate]

This question already has answers here:
Why is using a wild card with a Java import statement bad?
(18 answers)
Closed 5 years ago.
import java.util.ArrayList;
import java.util.Collections;
...
or
import java.util.*;
Is there any execution-time difference between the two?
Which one should I prefer?

If you use
import java.util.ArrayList;
import java.util.Collections;
or
import java.util.*;
Both result in the same bytecode after compilation.
There is no execution-time difference, but you should prefer the first option: explicit imports help when two or more packages contain a class with the same name.
For example, if the packages java.xyz and java.abc both contain a class Sample and you wildcard-import both of them, the compiler will raise an error and ask you to resolve the ambiguity, as in the sketch below.
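A minimal sketch of that failure mode (java.xyz, java.abc, and Sample are the hypothetical names from the answer above, not real JDK packages):

// Both hypothetical packages define a class named Sample.
import java.xyz.*;
import java.abc.*;

public class Demo {
    public static void main(String[] args) {
        // javac: "error: reference to Sample is ambiguous"
        Sample s = new Sample();

        // Fix: fully qualify the name, or replace one wildcard with a
        // single-type import such as "import java.xyz.Sample;"
        java.xyz.Sample ok = new java.xyz.Sample();
    }
}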

Related

SQLAlchemy: split project into different files

I am implementing a simple database, and right now my goal is to split the project into different modules, as it should be done. So far, my project consists of a bunch of files:
base.py
from sqlalchemy.orm import declarative_base
Base = declarative_base()
File implementing the tables:
classes.py
from sqlalchemy import Column, Integer, String, Date, ForeignKey
from sqlalchemy.orm import relationship, validates
from base import Base
class Book(Base):
...
class Owner(Base):
...
Creation of the engine, session, etc.:
database.py
from sqlalchemy import create_engine
from sqlalchemy.orm import Session
from datetime import date
from base import Base
engine = create_engine("sqlite+pysqlite:///mydb.db")
session = Session(engine)
from operations import *
import parser_file
Base.metadata.create_all(engine)
if __name__ == '__main__':
parser_file.main()
Here I import the session from database.py:
operations.py
from classes import Book, Owner
from database import session
def add_book(title, author, year_published=None):
...
# and many more
parser_file.py
import argparse
from datetime import date
from operations import *
def main():
...
I am not sure about the imports. operations.py, parser_file.py, and database.py all import from each other. It used to throw an error, but I moved from operations import * and import parser_file to after the creation of the Session. It feels sketchy having imports in the middle of the code, as I am used to imports at the top of the file, and in some posts people mention that modules should not depend on each other. On the other hand, the code is now nicely split, and it feels better this way. What is the correct way to handle this?
Edit: From PEP8 guide on imports
Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants
So it seems like what I did is considered bad.
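One common way to break the cycle (a sketch; the main.py entry point is an assumption, not part of the original layout): keep database.py free of imports from the modules that depend on it, and move the top-level wiring into a separate entry point:

# database.py -- defines the engine and session only; imports nothing
# from the modules that depend on it
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

from base import Base

engine = create_engine("sqlite+pysqlite:///mydb.db")
session = Session(engine)

# main.py -- hypothetical entry point; all imports stay at the top,
# and the dependency graph is one-directional:
# base <- classes <- operations <- parser_file <- main
from base import Base
from database import engine
import parser_file

if __name__ == "__main__":
    Base.metadata.create_all(engine)
    parser_file.main()

This keeps every import at the top of its file, as PEP 8 asks, because no module ever imports something that (directly or indirectly) imports it back.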

How to fix Dart ambiguous import package error message

The name 'DateUtils' is defined in the libraries 'package:calendarro/date_utils.dart' and 'package:flutter/src/material/date.dart (via package:flutter/material.dart)'.
Try using 'as prefix' for one of the import directives, or hiding the name from all but one of the imports. (ambiguous_import)
You can import the package with a prefix, like this:
import 'package:calendarro/date_utils.dart' as dateutil;
// and then refer to the class as
dateutil.DateUtils
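Alternatively, as the error message suggests, you can hide the conflicting name from one of the imports (a sketch, assuming you want the calendarro version of DateUtils):

import 'package:flutter/material.dart' hide DateUtils;
import 'package:calendarro/date_utils.dart';

// The bare name DateUtils now refers to the calendarro class.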

Error while trying to import Array._ and import org.apache.spark.sql.functions._ in Spark Scala [duplicate]

This question already has an answer here:
Ambiguous imports in Scala
(1 answer)
Closed 3 years ago.
I was running the code below,
import Array._
import org.apache.spark.sql.functions._
df.withColumn(name, concat(substring(col(name),1,4),substring(col(name),6,2), substring(col(name),9,2) ))
and got the following error:
Error:(188, 26) reference to concat is ambiguous;
it is imported twice in the same scope by
import Array._
and import org.apache.spark.sql.functions._
df.withColumn(name, concat(substring(col(name),1,4),substring(col(name),6,2), substring(col(name),9,2) ))
How can I overcome this? I need to use both imports.
The Scala Array companion object contains a method concat, and the Spark SQL object org.apache.spark.sql.functions also contains a method concat.
If you need both concats in scope, import one of them under an alias:
import Array.{concat => concatArray}
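With the alias in place, both methods stay usable (a sketch; df and name are assumed to be defined as in the question):

import Array.{concat => concatArray}
import org.apache.spark.sql.functions._

// Spark SQL's concat is now the only unqualified `concat` in scope
val reformatted = df.withColumn(name,
  concat(substring(col(name), 1, 4),
         substring(col(name), 6, 2),
         substring(col(name), 9, 2)))

// Scala's array concatenation is still available under the alias
val joined: Array[Int] = concatArray(Array(1, 2), Array(3, 4))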

How to Import from Module in Parent Directory (Python)

I have the file structure:
directory_1/
file_1.py
directory_2/
directory_3/
file_2.py
How can I import a function from file_1 into file_2?
Other answers have led me to try from ...file_1 import fun after adding an __init__.py file to directory_1, but doing this gives me ValueError: attempted relative import beyond top-level package. I have also tried from directory_1.file_1 import fun, but this gives me a ModuleNotFoundError.
If anybody could help I would be very grateful!
Solution
import os
import sys

# Make directory_1 (two levels up from this file) importable
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))

from file_1 import fun

fun()
This solution is messy, but the only way to avoid it is by restructuring your project; having a look at this might help.
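For completeness, a sketch of the restructuring route (the layout and fun come from the question; running from the parent of directory_1 is an assumption): turn the tree into a package and use an absolute import:

# directory_1/directory_2/directory_3/file_2.py
# (requires an empty __init__.py in directory_1, directory_2, and directory_3)
from directory_1.file_1 import fun

fun()

# Run from the directory *containing* directory_1:
#   python -m directory_1.directory_2.directory_3.file_2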

Why am I importing so many classes?

I'm looking at example Spark code, and I'm a bit confused as to why it requires two import statements:
import org.apache.spark._
import org.apache.spark.SparkContext._
This is Scala. As I understand it, _ is the wildcard character. So this looks like I'm importing SparkContext twice. Can anybody shed light on this?
The first line says to import all of the classes in the package org.apache.spark. This means you can use all of those classes without prefixing them with the package name.
The second line says to import all of the members of the SparkContext companion object (the Scala analogue of Java's static members). This means you can use those members without prefixing their names with the class name.
Remember that import doesn't really do anything at run time; it just lets you write less code. You aren't actually "importing" anything twice. The use of the term import comes from Java, and admittedly it is confusing.
This might help:
Without the first line, you would have to say
org.apache.spark.SparkContext
but the first import line lets you say
SparkContext
If you had only the first line and not the second, you would have to write
SparkContext.getOrCreate
but with both import lines you can just write
getOrCreate
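A self-contained sketch of the same two-level pattern (Settings and its companion object are hypothetical stand-ins for SparkContext):

class Settings(val verbose: Boolean)

object Settings {
  // companion-object member, analogous to SparkContext.getOrCreate
  def getOrCreate: Settings = new Settings(false)
}

object Demo extends App {
  import Settings._   // like the second Spark import: companion members come into scope
  val s = getOrCreate // no "Settings." prefix needed
  println(s.verbose)
}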