I read an MF4 file using the asammdf package. I want to delete the file after making some edits, but the deletion fails. The error says the file is in use by another program and cannot be accessed by the process. How do I close an MF4 file before deleting it?
from asammdf import MDF
mdf = MDF(path)
from asammdf import MDF
mdf = MDF(path)
# .... something something
mdf.close()
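Putting it together, a minimal sketch of the whole flow (a sketch, assuming the file path is in path and using the standard-library os.remove for the deletion):
import os
from asammdf import MDF

mdf = MDF(path)
# ... do your edits / exports here ...
mdf.close()      # releases the underlying file handle

os.remove(path)  # now succeeds, since no handle is still open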
I want to be able to create a new file and write a string into it. The examples I can find all seem to pertain to writing to an existing file.
import 'dart:io';

String fileName = 'myFile.txt';
String contents = 'hello';

void writeToFile(String fileName, String contents) {
  File outFile = File('content://com.android.providers.media.documents/document/document/document_root/' + fileName);
  outFile.writeAsString(contents);
}
This results in the following error, as expected
Unhandled Exception: FileSystemException: Cannot open file, path = 'content://com.android.providers.media.documents/document/document/document_root/myFile.txt' (OS Error: No such file or directory, errno = 2)
How can I create a new file in which I can write my contents?
Are you sure that the path exists? writeAsString does create the file if it doesn't exist, but it doesn't create the full folder structure up to the file you're trying to write to. Also, you might not have the rights to write to the path you've specified.
Anyway, you'd be better off using this plugin to get the folder paths instead of hard-coding them.
I imported my Excel file into the R environment and saved the path by creating a new file in an R script. However, when I tried to check my directory and load the dataset, I received the following message: "Error: path does not exist: ‘MIS_655_RS_T3_Wholesale_Customers’"
What am I doing wrong here?
Thanks
Have you perhaps left out the file extension of your dataset, e.g. .csv or .xlsx?
I suggest you first set the folder containing your file as the working directory; then the following code might help you read it:
Dat_customers <- readxl::read_excel("MIS_655_RS_T3_Wholesale_Customers.xlsx")
I want to create a temporary directory with some files in it:
import os
import tempfile
from pathlib import Path
with tempfile.TemporaryDirectory() as tmp_dir:
    # generate some random files in it
    Path('file.txt').touch()
    Path('file2.txt').touch()

    files_in_dir = os.listdir(tmp_dir)
    print(files_in_dir)
Expected: ['file.txt', 'file2.txt']
Result: []
Does anyone know how to do this in Python? Or is there a better way to just do some mocking?
You have to create the files inside the directory by building their paths from tmp_dir. Entering the with block does not change the current working directory, so a plain Path('file.txt') still points at wherever your script is running.
with tempfile.TemporaryDirectory() as tmp_dir:
    Path(tmp_dir, 'file.txt').touch()
    Path(tmp_dir, 'file2.txt').touch()

    files_in_dir = os.listdir(tmp_dir)
    print(files_in_dir)
    # ['file2.txt', 'file.txt']
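If the temporary files are only needed for a test, one alternative to hand-rolled mocking is pytest's built-in tmp_path fixture (a sketch assuming pytest is installed; the test name is made up):
# test_tmp_files.py
import os

def test_files_in_tmp_dir(tmp_path):
    # tmp_path is a fresh pathlib.Path directory created for each test
    (tmp_path / 'file.txt').touch()
    (tmp_path / 'file2.txt').touch()
    assert sorted(os.listdir(tmp_path)) == ['file.txt', 'file2.txt']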
I am running a PySpark job on AWS EMR and I got the following error:
IOError: [Errno 20] Not a directory: '/mnt/yarn/usercache/hadoop/filecache/12/my_common-9.0-py2.7.egg/my_common/data_tools/myData.yaml'
Does anyone know what I might have missed? Thanks!
I ran into this recently when I switched my Python Spark application from client deploy mode to cluster deploy mode.
My workaround is to locate the ZIP file (the artifact that I fed to spark-submit using --py-files):
# In cluster mode __file__ lives inside the zip shipped via --py-files,
# so its dirname points at the zip archive.
CURRENT_FILE_PATH = os.path.dirname(__file__)
print("[DEBUG] CURRENT_FILE_PATH=" + CURRENT_FILE_PATH)
It comes out something like this:
/mnt2/yarn/usercache/task/appcache/application_1638998214637_0019/container_1638998214637_0019_02_000001/something.zip
Then I can use something like:
import json
import zipfile

archive = zipfile.ZipFile(CURRENT_FILE_PATH, 'r')
json_bytes = archive.read('myfile.json')  # read the resource straight out of the zip
json_string = json.loads(json_bytes)
Note: I first tried using pkg_resources, but couldn't read the resulting JSON due to a TypeError from json.loads():
import pkg_resources
json_data = pkg_resources.resource_stream(__name__, 'myfile.json')
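That TypeError most likely comes from passing the stream object itself to json.loads(), which expects a string. json.load() (without the "s") accepts a file-like object, so a variation along these lines might have worked (untested sketch):
import json
import pkg_resources

# resource_stream returns a file-like object, so parse it with json.load
stream = pkg_resources.resource_stream(__name__, 'myfile.json')
json_data = json.load(stream)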
See also PySpark: how to resolve path of a resource file present inside the dependency zip file
As the error states, my_common-9.0-py2.7.egg is not a directory.
Are you missing a space in your path?
/mnt/yarn/usercache/hadoop/filecache/12/my_common-9.0-py2.7.egg /my_common/data_tools/myData.yaml
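Since a .egg is just a zip archive, one way to confirm that the resource really is packaged inside it is to open it with zipfile (a sketch; the paths are taken from the error message above):
import zipfile

egg_path = '/mnt/yarn/usercache/hadoop/filecache/12/my_common-9.0-py2.7.egg'
with zipfile.ZipFile(egg_path) as egg:
    # list the packaged members ending in .yaml
    print([name for name in egg.namelist() if name.endswith('.yaml')])
    # read the resource directly instead of treating the egg as a directory
    data = egg.read('my_common/data_tools/myData.yaml')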
I tried using this link: http://www.apple.com/itunes/affiliates/resources/documentation/epfimporter.html
-----------------------
*Below is the script I executed:*
C:\Documents and Settings\freakk>python D:\freakk\Downloads\EPF_Itunes\EPFImporter\EPFimporter.py \D:\freakk\Downloads\EPF_Itunes\EPFImporter\db\album_popularity_per_genre
-----------------------
*But I am getting these errors:*
2011-10-12 18:24:00,529 [INFO]: Beginning import for the following directories:
\D:\freakk\Downloads\EPF_Itunes\EPFImporter\db\album_popularity_per_genre
2011-10-12 18:24:00,529 [INFO]: Importing files in \D:\freakk\Downloads\EPF_Itunes\EPFImporter\db\album_popularity_per_genre
Traceback (most recent call last):
  File "D:\freakk\Downloads\EPF_Itunes\EPFImporter\EPFimporter.py", line 452, in <module>
    main()
  File "D:\freakk\Downloads\EPF_Itunes\EPFImporter\EPFimporter.py", line 435, in main
    fieldDelim=fieldSep)
  File "D:\freakk\Downloads\EPF_Itunes\EPFImporter\EPFimporter.py", line 162, in doImport
    fileList = os.listdir(dirPath)
WindowsError: [Error 123] The filename, directory name, or volume label syntax is incorrect: 'C:\\D:\\freakk\\Downloads\\EPF_Itunes\\EPFImporter\\db\\album_popularity_per_genre/*.*'
Please help me.
Look at the error log; it is telling you the syntax is incorrect:
C:\\D:\\freakk\\Downloads\\EPF_Itunes\\EPFImporter\\db\\album_popularity_per_genre/*.*
How can a D:\ directory be inside C:\? The script is not building the correct path to reach it.
EPFImporter's code is basically written for Mac OS; it assumes you are in the same directory as EPFImporter.py, and on Mac OS everything is kept in the same directory (that is how the Mac setup is designed).
C:\Documents and Settings\freakk>python D:\freakk\Downloads\EPF_Itunes\EPFImporter\EPFimporter.py \D:\freakk\Downloads\EPF_Itunes\EPFImporter\db\album_popularity_per_genre
The command above will not correctly locate either your EPFImporter.py or album_popularity_per_genre.
Change your drive from C: to D:, go to the directory containing EPFImporter.py, and then try:
.....EPFImporter>python EPFImporter.py db\album_popularity_per_genre
This assumes you are in the same folder as EPFImporter.py. It is not tested, but something like this should work for you. I hope this answer made things a bit clearer.
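As an aside, the stray leading backslash is what produces the odd C:\D:\ path in the error: a Windows path that starts with a backslash but has no drive letter is "rooted on the current drive", so it gets resolved against C:. A small sketch using Python's ntpath module (which implements the Windows path-joining rules) illustrates this; it is not the importer's actual code:
import ntpath

cwd = r'C:\Documents and Settings\freakk'
arg = r'\D:\freakk\Downloads\EPF_Itunes\EPFImporter\db\album_popularity_per_genre'

# A rooted but drive-less path keeps the drive of the path it is joined to.
print(ntpath.join(cwd, arg))
# C:\D:\freakk\Downloads\EPF_Itunes\EPFImporter\db\album_popularity_per_genre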
Solved!
I was trying to import only partial data without the main table.
I tried importing the flat feed and it worked.
Code (for the flat feed):
C:\Documents and Settings\freakk>python c:\epf\epfimporter.py -f c:\epf\db\application-usa-20111012
Note: Don't include the file name (application-usa-20111012.txt); restrict it to the folder name only (e.g. application-usa-20111012).