Enterprise Library 6.0 Logging TraceListener - enterprise-library-6

We are getting lots of log files with a GUID prefix and a file size of 0 KB. How can we resolve this issue?
I tried increasing the file size and the maximum file limit, but the issue still exists.

Related

Neo4j (Windows) - can't increase heap memory size for Neo4jImport tool

I tried to batch import a graph database with about 40 million nodes and 20 million relationships, but I get an OutOfMemory error (this has been documented already, I know). On Windows, I am using the import tool like so:
neo4jImport --into SemMedDB.graphdb --nodes nodes1.csv --nodes nodes2.csv --relationships edges.csv
I have 16 GB of RAM, but Neo4j only allocates 3.5 GB of max heap memory while I still have about 11 GB of free RAM. To try to fix this so I wouldn't get an OutOfMemory error, I followed some suggestions online, created a conf folder in my C:\Program Files\Neo4j folder, and created a neo4j-wrapper.conf file with the heap values set to:
wrapper.java.initmemory=10000
wrapper.java.maxmemory=10000
Also, I set my neo4j properties file page cache setting to:
dbms.pagecache.memory=5g
The problem is, when I restart my Neo4j application and try to import again, it still says 3.5 GB of max heap space and 11 GB of free RAM... why doesn't Neo4j recognize my settings?
Note, I've tried downloading the zip version of Neo4j in order to use the PowerShell version of the import tool, but I run into the same issue: I change my configuration settings but Neo4j does not recognize them.
I would really appreciate some help with this... thanks!
I cannot tell for Windows, but on Linux neo4j-wrapper.conf is not used by the neo4j-import tool. Instead, you can pass additional JVM parameters using the JAVA_OPTS environment variable (again, Linux syntax here):
JAVA_OPTS="-Xmx10G" bin/neo4j-import
To validate that approach, append -XX:+PrintCommandLineFlags to the above. At the beginning of the output you should see a line similar to:
-XX:InitialHeapSize=255912576 -XX:MaxHeapSize=4094601216
-XX:+PrintCommandLineFlags -XX:+UseCompressedClassPointers
-XX:+UseCompressedOops -XX:+UseParallelGC
If that one shows up, using JAVA_OPTS is the way to go.
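For example, combining the two suggestions with the import command from the question (Linux syntax):
JAVA_OPTS="-Xmx10G -XX:+PrintCommandLineFlags" bin/neo4j-import --into SemMedDB.graphdb --nodes nodes1.csv --nodes nodes2.csv --relationships edges.csv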
I found a solution. What ultimately allowed me to change the heap size for the Neo4jImport tool was to open the neo4jImport.bat file (the path is C:\Program Files\Neo4j\bin) in a text editor (this required me to change permissions first) and change the "set EXTRA_JVM_ARGUMENTS=-Dfile.encoding=UTF-8" line to
set EXTRA_JVM_ARGUMENTS=-Dfile.encoding=UTF-8 -Xmx10G -Xms10G -Xmn2G
Now, when I run Neo4jImport in the neo4j shell, it shows a heap size of 9.75 GB.
Generally, Neo4jImport shouldn't rely on a large heap; it will use whatever heap is available and then use whatever off-heap is available. However, some amount of "boilerplate" memory needs to be there for the machinery to work properly. Recently there was a fix (coming in 2.3.3) reducing the heap usage of the import tool, and that would certainly have helped here.

Can I safely delete the administration notification log file (.nfy)?

I have a .nfy dump file on my client system which is taking up too much space. It was last updated on 4 August. Will it be OK if I delete it? Will it be permanently deleted, or will DB2 create a new one?
Yes, it is perfectly safe to delete the administration notification log file, and yes, it will be re-created as necessary by the instance.
You can also enable automatic rotation of this file and the diagnostic log file (db2diag.log) by setting the diagsize instance configuration parameter, for example:
db2 update dbm cfg using diagsize 1024
The command above instructs the instance to create 10 rotating log files, each with the maximum size of 1024 MB. Once the 10th file reaches the maximum size, the oldest of the 10 files will be deleted and a new file created.
Note that you will need to restart the DB2 instance for the new parameter value to take effect.
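For example, a restart-and-verify sequence might look like this (Linux/UNIX shell shown; the grep is just a convenience, use findstr on Windows):
db2stop
db2start
db2 get dbm cfg | grep -i diagsize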

What are the size limits of file *.agg.flex.data?

What are the size limits of the *.agg.flex.data files? These files are typically located in the SSAS data directory.
While processing the cubes with "Process Index", I am getting the below error message:
File system error: The following file is corrupted: Physical file: \\?\F:\OLAP\.0.db\.0.cub\.0.det\.0.prt\33.agg.flex.data. Logical file .
However, if we navigate to the location mentioned in the error message, the specified file is not present at the given location.
If anyone has faced such an issue before, please help.
Any help would be highly appreciated.
I don't believe agg.flex.data files have a hard upper limit. I suspect that error means either that you had a disk failure or that the database is corrupt. I would either unprocess (ProcessClear) and reprocess the database, or delete the database, redeploy, and process it again. Hopefully you can work around that error.
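If it helps, the ProcessClear can be issued as an XMLA command from an SSMS XMLA query window; a minimal sketch, with a made-up database ID you would replace with your own:
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID>
  </Object>
  <Type>ProcessClear</Type>
</Process>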

I cannot upload large (> 2GB) files to the Google Cloud Storage web UI

I have been using the Google Cloud Storage Manager link on the Google APIs console in order to upload my files.
This works great for most files: 1 KB, 10 KB, 1 MB, 10 MB, 100 MB. However, yesterday I could not upload a 3 GB file. Any idea what is wrong?
What is the best way to upload large files to Google Cloud Storage?
The web UI only supports uploads smaller than 2^32 bytes (4 gigabytes). I believe this is a JavaScript limitation.
If you need to transfer many or large files consider using gsutil:
GSUtil uploads and downloads any size file.
GSUtil resumes uploads and resumes downloads that fail part way through.
GSUtil calculates the MD5 checksum to verify the contents of each file transferred correctly.
GSUtil can upload and download many files at the same time.
gsutil -m cp /path/to/*thousands-of-files* gs://my-bucket/
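For a single multi-gigabyte file, a plain cp works too, since gsutil performs resumable uploads automatically; a sketch with made-up file and bucket names (the optional -o override enables parallel composite uploads, which can speed up large transfers):
gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp my-3gb-file.bin gs://my-bucket/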
In my experience, the accepted answer is not correct - maybe it was at the time, but something has changed.
I just uploaded a file of size 2.2GB to GCS using the web interface on Chrome 42 on Windows 8.1.
I would also point out that the question is about files > 2GB, and the answer mentions 2GB, but gets that from 2^32, which is 4GB, not 2. So maybe the limit really is 2^32 (4GB) - I haven't tried anything that big.
(It is still a good idea to use gsutil for large files.)

java.util.prefs.FileSystemPreferences tries to open a broken path

Recently, some Java applications started to print the following warning every now and then:
java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: /home/yha/.java/.userPrefs/_!(k![#"k!'`!~!"p!(#!bw"y!#4![!"v!':!d#"t!'`!bg"0!&#!e#"w!'`!ew"0!(k!c!"l!&:!d!"y!'k!bg"n!$0!,w"h!(!!c!"s!'k!}w"h!(#!a#"v!'4!.#"5!'}!a#"s!'`!cw!n!(0= create failed.
"create failed". No kidding! What kind of file name is that?
After googling, I now know what the Java Preferences subsystem is and that the default storage location on Linux should be $HOME/.java/.userPrefs or the like, but... that does not explain where the path in my log message is coming from. And I still don't know where to set this value. Maybe there is a configuration file with the storage file path somewhere that became corrupt.
Using OpenJDK 7 on Kubuntu 12.10.
That wacky string is the result of a call to java.util.prefs.Base64.byteArrayToAltBase64(). If you reverse the process, you get: "yEdeditor.DocumentType{typeString='application-yfiles'}". Does that string mean something to you?
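For the curious, here is a rough sketch of how that reversal can be done. java.util.prefs.Base64 is a package-private JDK class, so this goes through reflection (fine on OpenJDK 7 as used here; newer JDKs may need --add-opens java.prefs/java.util.prefs=ALL-UNNAMED), and the assumption that the leading underscore marks an alt-Base64-encoded node name is based on how FileSystemPreferences builds directory names:

import java.lang.reflect.Method;

public class DecodePrefNodeName {
    public static void main(String[] args) throws Exception {
        // Pass the directory name from the warning as the first argument.
        String raw = args[0];
        // FileSystemPreferences prefixes encoded node names with "_"; strip it before decoding.
        String encoded = raw.startsWith("_") ? raw.substring(1) : raw;

        // java.util.prefs.Base64 is package-private, so invoke altBase64ToByteArray reflectively.
        Class<?> base64 = Class.forName("java.util.prefs.Base64");
        Method decode = base64.getDeclaredMethod("altBase64ToByteArray", String.class);
        decode.setAccessible(true);
        byte[] bytes = (byte[]) decode.invoke(null, encoded);

        // The node name is packed two bytes per character (high byte first), i.e. UTF-16BE.
        System.out.println(new String(bytes, "UTF-16BE"));
    }
}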
The file name characters may not ultimately be the problem (they may be correct). If your Ubuntu home directory is encrypted, you are most likely running into this "well known" issue: a maximum file name length of 143 characters for files on encrypted filesystems. A very subtle and extremely hard-to-diagnose bug.