Scheduler task to index storage fails - TYPO3

I have some additional folders added to my installation at the web root. These folders are synced from an external source and should be indexed regularly for TYPO3 (later on they are indexed by solrfal, which stalls with the same error).
While the scheduler jobs run, some of them (for some storages) terminate with the error Execution failed: 1430657869, You are not allowed to read folders. But all folders have (meanwhile) their access rights set to 777 recursively. There are also no symlinks.
In the file list module, all folders and files are fully accessible (read and write).
I even set the CLI user to admin =:O
Why does the error occur?
Where does the error occur? (what folder?)
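One way to narrow this down (a sketch; the PHP user name and the folder path are assumptions, not from the question) is to check from a shell whether the user that runs the scheduler task can actually list the synced folder:
sudo -u www-data ls -la /var/www/html/synced-folder
If this fails despite the 777 permissions, a parent directory along the path may be missing the traverse (execute) bit, or an open_basedir restriction may apply to the CLI context.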

Related

Polyspace error: Read/Write access problem sources_list.txt (errno=13)

I'm trying to run an analysis in Polyspace. After selecting the folder that contains the source files (.c) and the header files (.h), when I click on Run Bug Finder I get the following error message:
Error: Polyspace : Read/Write access problem on C:\Users\Gennaro\Desktop\prova\Module_1\BF_Result\sources_list.txt (errno=13)
Can you tell me how to overcome it?
Every user has full control of the folder.
EDIT: Resource Monitor in Windows 10 didn't find any process locking the file sources_list.txt, and this file doesn't exist in the above folder.
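One quick check (using the folder from the error message) is to list the effective ACLs on the result folder with the built-in icacls tool:
icacls "C:\Users\Gennaro\Desktop\prova\Module_1\BF_Result"
If the ACLs really do grant full control, it may also be worth checking whether antivirus software is intercepting writes to that folder.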

Firebase hosting: The remote web server hosts what may be a publicly accessible .bash_history file

We host our website on Firebase. We fail a security check for the following reason:
The remote web server hosts publicly available files whose contents may be indicative of a typical bash history. Such files may contain sensitive information that should not be disclosed to the public.
The following .bash_history files are available on the remote server:
- /.bash_history
- /cgi-bin/.bash_history
- /scripts/.bash_history
(Each entry carries the same note: the file is flagged because the scan is set to 'Paranoid'; the contents of the detected file have not been inspected to see if they contain any of the common Linux commands one might expect to see in a typical .bash_history file.)
The problem is that we don't have an easy way to get access to the hosting machine and delete these files.
Anybody knows how it can be solved?
If you are using Firebase Hosting, you should check the directory (usually public) that you are uploading via the firebase deploy command. Hosting serves only those files (plus a couple of auto-generated ones under the reserved __/ path for auto-configuration).
If you have a .bash_history, cgi-bin/.bash_history, or scripts/.bash_history in that public directory, then it will be uploaded to and served by Hosting. There are no automatically served files with those names.
You can check your public directory, and update the list of files to ignore on the next deploy using the firebase.json file (see this doc). You can also download all the files that Firebase Hosting is serving for you using this script.
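For example, a minimal firebase.json (assuming your public directory is named public) whose ignore list excludes all dotfiles, which covers .bash_history; the node_modules pattern is the usual default:
{
  "hosting": {
    "public": "public",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ]
  }
}
Files matching an ignore pattern are simply not uploaded on the next firebase deploy, so the scanner will no longer find them afterwards.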

Watching for files on remote shared folder using tWaitForFile

I am trying the tWaitForFile component in Talend to watch for newly created files. It seems to work for a local directory (I am using Windows 7).
However, when I point it to a shared folder like //ps1.remotemachine.com/Continents/Africa it doesn't work: it doesn't give me file creation signals the way it does for a local directory.
Am I missing something?
Update:
In my testing so far, these are my observations on monitoring files on a network path:
Talend tWaitForFile - inconsistent results. It only raises notifications sometimes; the majority of the time it doesn't.
Java NIO WatchService - tried this outside of Talend. It does raise notifications for files created on a network path. However, when the number of folders to be monitored on the network path gets too large, it starts missing events for some of the folders; in my case it was around 100 folders (see the sketch after this list).
Hence, I abandoned both of the above approaches and am sticking with scheduler-based runs of the Talend jobs.
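For reference, a minimal sketch of the WatchService approach mentioned above, watching a single folder for new files (the UNC path is the share from the question):

import java.nio.file.*;

public class WatchNetworkFolder {
    public static void main(String[] args) throws Exception {
        // UNC share from the question; adjust to your own path
        Path dir = Paths.get("\\\\ps1.remotemachine.com\\Continents\\Africa");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
        while (true) {
            WatchKey key = watcher.take();            // blocks until events arrive
            for (WatchEvent<?> event : key.pollEvents()) {
                System.out.println("Created: " + event.context());
            }
            if (!key.reset()) {                       // folder became inaccessible
                break;
            }
        }
    }
}

WatchService depends on the underlying file system to deliver change notifications, which is exactly what becomes unreliable over network shares and at larger folder counts, as described above.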
Use
"\\\\ps1.remotemachine.com/Continents/Africa"
If you use the value from a context, you don't need to double the "\".
Then use that value as the path in the tWaitForFile component.

robocopy error with ERROR 32 (0x00000020)

I have two drives, A and B. Using a Python script I am creating some files on drive A, and I am running a PowerShell script which copies all the files from drive A to drive B at an interval of 1 second.
I am getting this error in PowerShell:
2015/03/10 23:55:35 ERROR 32 (0x00000020) Time-Stamping Destination
File \x.x.x.x\share1\source\ Dummy_100.txt The process cannot access
the file because it is being used by another process. Waiting 30
seconds...
How can I overcome this error?
This happens because the file is locked by a running process. To fix it, download Process Explorer, then use Find > Find Handle or DLL to find out which process has locked the file. Use 'taskkill' to kill that process from the command line and you will be fine.
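For example, once Process Explorer shows which process holds the handle (the PID here is illustrative):
taskkill /PID 1234 /F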
If you want to skip such files, you can use /R:n, where n is the number of retries.
For example, /W:3 /R:5 will retry 5 times, waiting 3 seconds between tries; a complete command is sketched below.
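A sketch with illustrative paths (/E recurses into subdirectories, including empty ones):
robocopy A:\data B:\data /E /R:5 /W:3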
How can I overcome this error?
If backup is what you have in mind and you encounter in-use files frequently, look into Volume Shadow Copy (VSS), which allows files to be copied even while they are 'in use'. It's not a product, but a Windows technology used by various backup tools.
Sadly, it's not built into robocopy, but it can be used in conjunction with it. See
➝ https://superuser.com/a/602833/75914
and especially:
➝ https://github.com/candera/shadowspawn
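For illustration, a sketch of combining the two (the source path, destination, and temporary drive letter Q: are assumptions): shadowspawn mounts a shadow copy of the source at Q:, runs the given command against it, and removes the shadow copy afterwards.
shadowspawn C:\data Q: robocopy Q:\ B:\backup /E /R:5 /W:3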
There could be many reasons.
In my case, I was running a CMD script to copy a heap of SQL Server backups and transaction logs from one server to another. I too had the same problem, because robocopy was trying to write to a log file that was supposedly opened by another process. It was not.
I ran so many IP checks and process ID checkers that I ran out of ways to find out what was hogging the log file. Event Viewer said nothing.
It turned out it was not even the log file that was being locked. I was able to delete it by logging into the server as a normal user with no admin privileges!
It was the backup files themselves, locked by the SQL Server Agent. As @Oseack said, it may be necessary to use another tool while the backup files themselves are still being used or locked by the SQL Server Agent.
The way I got around it was to force ROBOCOPY to wait.
/W:5
did it.

Sitecore Rebuild Search Indexes throws UnauthorizedAccessException

I'm trying to Rebuild my Search Index in Sitecore 5.3.1 using the Desktop interface. After processing several thousand nodes, I get an UnauthorizedAccessException with the following message:
RebuildSearchIndex|System.UnauthorizedAccessException: Access to the
path '...\WebSite\indexes\master\system\deletable' is denied.
Does anyone know how I could resolve this issue?
UPDATE: @Divamatrix has the answer, and all three steps are required. Giving Full Control to the IIS app pool identity for the Website and Indexes folders resolved the UnauthorizedAccessException. I got an "unable to rename" error on the deleteable.new file until I gave IUSR Read and Write permissions to the Index folder.
Without seeing more of the logs it's hard to say for sure, but please check these things. It sounds like there may be permission issues when Sitecore tries to create or edit files while building the indexes.
1) Make sure that the app pool identity has full control rights to the website folder.
2) The app pool identity also needs rights to the indexes folder, which is usually not in the website folder; it's usually in the data folder. (As with the website folder, make sure the app pool identity has full control of it and its descendants - subfolders and files.)
3) Give READ/WRITE IIS security for the /index folder.
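For example, a sketch using icacls (the folder paths and app pool name are assumptions; on older IIS versions the identity may be NETWORK SERVICE rather than an IIS AppPool account):
icacls "C:\inetpub\wwwroot\WebSite" /grant "IIS AppPool\DefaultAppPool:(OI)(CI)F" /T
icacls "D:\SitecoreData\indexes" /grant "IIS AppPool\DefaultAppPool:(OI)(CI)F" /T
icacls "D:\SitecoreData\indexes" /grant "IUSR:(OI)(CI)M" /T
Here (OI)(CI) makes the grant inherit to subfolders and files, F is Full Control, and M is Modify (read and write).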