Set Permissions on Folder without Processing Sub Files and Folders - powershell

I have two related issues that I am trying to solve. I want to update an ACL on a folder without it processing all of the children. My first example: when setting "This folder only" on a high-level folder, setting the permissions takes forever because it processes all of the children. My second example: I have a file system where Everyone is directly applied to each item, and I need to remove that entry without taking the time to process the children. I have used Get/Set-ACL and Get/Remove/Add-NTFSAccess, but cannot figure out how to stop the processing of the child objects.

NTFS inheritance is similar to genetic inheritance: just as we, as children, get to blame our parents for all our faults whether they like it or not, child objects decide whether they inherit permissions from their parent objects.
Using NTFS, you cannot change a permission on a parent object and tell it not to propagate to the child objects.

Related

In MDriven Enterprise Information, why is there a Processes Hierarchy and also a Processes Tree?

In MDriven Enterprise Information, why is there a Processes Hierarchy and also a Processes Tree? Aren’t they the same thing? Is this not redundant duplication?
Processes are defined by their steps.
A process step can make use of another process - i.e., defining a sub-process.
To have both the complete list of processes and the resulting expanded tree of processes and their sub-processes, and even their sub-processes (to any depth), we added two nodes.
The first node is the complete flat list of existing processes, regardless of whether they are used as sub-processes.
The second node is the constructed tree of which process uses which - and a sub-process can then show up multiple times in this tree.
Notice that ApproveNewBulk is re-used in 2 processes in the example below.

NetSuite workflow to update a record of different type

I have two NetSuite records that have a parent-child relationship. Let's call them P and C
The Child records (e.g. C123, C124, etc.) are listed on the Parent (P987) in a sublist.
I have a need to display the most recently updated child record (e.g. C124) in the main area of the related Parent record. I need to display 3 fields from that child record:
name
field1
field2
Second question: I might need to make one or more of the fields displayed above (e.g. field1) editable. If so, would I have to store them as fields on the parent as well? And then how would I keep this "copy" in sync with that specific Child record?
It doesn't have to be a workflow, but I prefer to use "supported" features (such as out-of-the-box workflow actions) as much as possible and avoid customization by scripting. If you don't think it can be done without a script, then please be clear about that.
P.S. Fairly new to NetSuite but not the concepts.
P.P.S. no I am not happy about the problem above and wish I could prevent all silly requests. lol
The first one you can achieve with a Workflow Action script.
For the second question: if those are custom fields, you'd have to have the script push the edited values back onto the child record, which can be done from the same WFA script.
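To make the Workflow Action approach concrete, here is a minimal SuiteScript 2.x sketch of the first part (copying the latest child's values up to the parent). It assumes the action is attached to a workflow on the Child record that runs after the child is saved, and every record type and field ID in it (customrecord_p, custrecord_parent, custrecord_latest_child_name, ...) is a placeholder, not an ID from the question.

/**
 * @NApiVersion 2.x
 * @NScriptType WorkflowActionScript
 */
define(['N/record'], function (record) {
    // Fired from a workflow on the Child record (e.g. after submit).
    // All record/field IDs below are placeholders - swap in your own internal IDs.
    function onAction(context) {
        var child = context.newRecord;
        var parentId = child.getValue({ fieldId: 'custrecord_parent' });
        if (!parentId) {
            return;
        }
        // Write the three child fields into fields on the parent, so the
        // parent always shows the most recently saved child.
        record.submitFields({
            type: 'customrecord_p',   // placeholder parent record type
            id: parentId,
            values: {
                custrecord_latest_child_name: child.getValue({ fieldId: 'name' }),
                custrecord_latest_child_field1: child.getValue({ fieldId: 'custrecord_field1' }),
                custrecord_latest_child_field2: child.getValue({ fieldId: 'custrecord_field2' })
            }
        });
    }
    return { onAction: onAction };
});

Syncing an edit made on the parent back to the specific child (the second question) could be handled the same way, with a similar action on the parent's workflow calling record.submitFields against the child record.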

Complicated job aggregate

I have a very complicated job process and it's not 100% clear to me where to handle what.
I don't want code; it's just a question of who is responsible for what.
Given is the following:
There is a root directory "C:\server"
Inside are two directories "ftp" and "backup"
Imagine the following process:
An external customer sends a file into the ftp directory.
An importer application gets the file, and now the fun starts.
A job aggregate has to be created for this file.
The command "CreateJob(string file)" is fired.
The file has to be moved from ftp to backup. Inside the CommandHandler, inside the Aggregate, or on the JobCreated event?
StartJob(Guid jobId) gets called. A third folder, "in-progress", has to be created, and the file has to be copied from backup to in-progress. Who does it?
So it's unclear to me where the file-system work has to be handled, given that the Aggregate cannot work correctly without the correct file system state.
My first approach was to do that inside an infrastructure layer/lib which listens to the events from the job layer, but that doesn't seem 100% correct?!
And on top of this, what about replaying?
You can't replay things/files that were moved; you'd have to somehow simulate that a customer sends the file to the ftp folder...
Thankful for answers
The file has to be moved from ftp to backup. Inside the CommandHandler, inside the Aggregate, or on the JobCreated event?
In situations like this, I move the file to the destination folder in the Application service that sends the command to the Aggregate (or that calls a command-like method on the Aggregate, which is the same thing), before the command is sent to the Aggregate. In this way, if there are problems with the file system (not enough permissions, no space available, etc.), the command is not sent. These kinds of problems should not reach our Aggregate. We must protect it from the infrastructure. In fact, we should keep the Aggregate isolated from everything else; it must contain only pure business logic that is used to decide what events get generated.
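A minimal sketch of that ordering, written in TypeScript rather than the C#-style names used in the question; every name here (Job, JobRepository, CreateJobService) is made up for illustration, not taken from the original design:

import { promises as fs } from "fs";
import * as path from "path";

// Pure domain object: it only decides and records facts, it never touches the disk.
class Job {
  private constructor(public readonly file: string) {}

  static create(file: string): Job {
    // In a real event-sourced aggregate this would record a JobCreated event.
    return new Job(file);
  }
}

// Hypothetical persistence port for the aggregate.
interface JobRepository {
  save(job: Job): Promise<void>;
}

// Application service: infrastructure work happens here, before the domain is involved.
class CreateJobService {
  constructor(private readonly jobs: JobRepository) {}

  async createJob(file: string): Promise<void> {
    const source = path.join("C:\\server\\ftp", file);
    const target = path.join("C:\\server\\backup", file);

    // Move the file first. If this fails (permissions, disk full, missing file),
    // no command ever reaches the aggregate.
    await fs.rename(source, target);

    // Only now is the aggregate created and its decision persisted.
    const job = Job.create(file);
    await this.jobs.save(job);
  }
}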
My first approach was to do that inside an infrastructure layer/lib which listens to the events from the job layer, but that doesn't seem 100% correct?!
Indeed, this seems like over-engineering to me. You must KISS.
StartJob(Guid jobId) gets called. A third folder, "in-progress", has to be created, and the file has to be copied from backup to in-progress. Who does it?
Whoever calls StartJob could do the moving before StartJob gets called. Again, keep the Aggregate pure. In this case it depends on your framework/domain details.
And on top of this, what about replaying? You can't replay things/files that were moved; you'd have to somehow simulate that a customer sends the file to the ftp folder...
The events are loaded from the event store and replayed in two situations:
Before every command is sent to the Aggregate, the Aggregate Repository loads all the events from the event store and applies each of them to the Aggregate, probably by calling some applyThisEvent(TheEvent) method on the Aggregate. So these methods should have no side effects (be pure); otherwise you would change the outside world again and again with every command execution, and you don't want that.
The read-models (the projections, the query-models) that present data to the user listen to those events and update the database tables that hold the data the users see. The events are sent to those read-models after they are generated, and every time the read-models are recreated. When you introduce a new read-model, you must pass it all the events that were previously generated by the aggregates in order to build the correct/complete state. If your read-model's event listeners had side effects, what do you think would happen when you replay those long-past events? The outside world would be modified again and again, and you don't want that! The read-models only interpret the events; they don't generate other events and they don't change the outside world.
There is a special third case, when events reach another type of model, a Saga. A Saga must receive an event only once! This is the case you were reaching for with "my first approach was to do that inside an Infrastructure layer/lib which listens to the events from the job layer". You could do this in your case, but it is not KISS.
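To make the first point concrete, here is a small illustrative TypeScript sketch of rehydration; the event shapes and method names are assumptions, not the poster's actual model:

// Events as plain data; replaying them must not touch the file system.
type JobEvent =
  | { type: "JobCreated"; file: string }
  | { type: "JobStarted" };

class JobAggregate {
  private file = "";
  private started = false;

  // Pure state transition: no I/O, no file moves, only in-memory state changes,
  // because this runs again on every rehydration.
  applyThisEvent(event: JobEvent): void {
    switch (event.type) {
      case "JobCreated":
        this.file = event.file;
        break;
      case "JobStarted":
        this.started = true;
        break;
    }
  }

  // What the repository does before every command: replay the full history.
  static fromHistory(events: JobEvent[]): JobAggregate {
    const aggregate = new JobAggregate();
    for (const event of events) {
      aggregate.applyThisEvent(event);
    }
    return aggregate;
  }
}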
I have a very complicated job process and it's not 100% clear to me where to handle what. I don't want code; it's just a question of who is responsible for what.
The usual answer is that the domain model -- aka the "aggregate" -- makes decisions and saves them. Observing those decisions, some event handler induces the side effects.
And on top of this, what about replaying? You can't replay things/files that were moved; you'd have to somehow simulate that a customer sends the file to the ftp folder...
You replay the events to the aggregate so that it is restored to the state in which it made its last decision. That's a separate concern from replaying the side effects -- which is part of the motivation for handling the side effects elsewhere.
Where possible, of course, you prefer the side effects to be idempotent, so that a duplicated message doesn't create a problem. But notice that, from the point of view of the model, it doesn't actually matter whether the side effect succeeds or not.
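As a rough sketch of that idempotency idea (folder names taken from the example above, everything else assumed), the side-effect handler checks whether its work has already been done before doing it again:

import { promises as fs } from "fs";
import * as path from "path";

// Side-effect handler for a JobStarted event. Safe to run twice for the same event.
async function onJobStarted(event: { jobId: string; file: string }): Promise<void> {
  const source = path.join("C:\\server\\backup", event.file);
  const target = path.join("C:\\server\\in-progress", event.file);

  // If the file is already in place, this event was handled before: do nothing.
  try {
    await fs.access(target);
    return;
  } catch {
    // target does not exist yet, fall through and perform the copy
  }

  await fs.mkdir(path.dirname(target), { recursive: true });
  await fs.copyFile(source, target);
}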

Org-mode -- Summing effort at multiple levels of hierarchy in column view

I have a tree of tasks in org-mode which has effort estimates at multiple levels within a subtree. In other words, a task may have sub-tasks with their own effort estimates, but the parent task also has an effort estimate which reflects work on the parent task that isn't included in any sub-task. I want to avoid sticking an "other" or "misc" subtask on each tree just to capture this sort of thing. The problem is that column view wipes out the Effort property on the parent tasks as it percolates up the tree and replaces it with the sum of the child tasks. This seems like a terrible idea to me -- is there a way to prevent that, or must I push all effort estimates into leaf nodes exclusively?
As of Org version 9.2.5, if we include both %Effort and %Effort(Effort Children){:} in the COLUMNS declaration, the effort in the parent node doesn't get overridden when using M-x org-columns.
Example
#+COLUMNS: %50ITEM TODO %3PRIORITY %Effort %Effort(Effort Children){:} %10CLOCKSUM

Need Core Data help to insert objects

First of all I want to show how I made this in SQL:
Both the location and environment tables will never contain more than those four rows. Each log can only be associated with 4 rows.
What I don't understand is how I even start writing code that will take whatever the user has chosen (based on state switches etc. in my UI) and persist it.
Because when the user is done I want to store a "log record", and the log record may have location and environment rows associated with it. And what happens when the user, let's say, chooses all the location rows four times in a row... does it add the location to the Location "entity" every time? Would I end up with a lot of duplicated data? I would appreciate any help that can show me how to do this. Thank you!
Looks like you need three entities. You'll have Location and Environment entities that have whichever attributes they need, and a Log entity that has relationships with both Environment and Location.
I think you're asking whether instances of Location and Environment that happen to be the same will be duplicated in the Core Data store, or whether multiple Log instances will relate to the same Location and Environment instances. Is that right? Answer: it's up to you. Say you want to save a Location instance that has a particular set of attributes. You could first search for one that has that exact set of attributes and associate it with your Log instance, or you could just create a new Location instance and not worry about the duplication.
If you're storing zillions of these Log entries, the first plan might save a lot of space. If you're not saving them all that often, and particularly if the user can go back and change the data associated with a Log instance, you might want to use separate instances even if they happen to be the same.