Recursively find the parent of a node in a tree - Swift

I have implemented a (non-binary) tree data structure like this:
class Node {
    var id: String
    var children: [Node] = []
}
To find a node with a given id as well as its parent, I have the following function:
extension Node {
    func searchNodeAndParent(_ id: String, parentNode: Node) -> (Node?, Node?) {
        if id == self.id {
            return (self, nil)
        }
        for child in children {
            if let found = child.searchNodeAndParent(id, parentNode: self).0 {
                return (found, parentNode)
            }
        }
        return (nil, nil)
    }
}
However, I cannot seem to find the correct parent with rootNode.searchNodeAndParent(id, parentNode: nil). I suspect that self might not be the correct parent to use as the parameter, but do not know how I can fix it.

You keep returning the parentNode of the current node, instead of the parent that one of the recursive calls may have found.
What you'd want to do is:
check if this node is the one you want
for each child, start the recursion, passing self as the parent; if the recursion returns a non-nil (node, parent) result, return the tuple as-is
Also, your return type should probably be: (Node, Node?)? - i.e. either the entire tuple is nil indicating a Not Found result, or if the tuple exists a parent could still be nil in the case of the root.
extension Node {
    func searchNodeAndParent(_ id: String, parentNode: Node?) -> (Node, Node?)? {
        if id == self.id {
            return (self, parentNode) // create the found result here
        }
        for child in children {
            if let found = child.searchNodeAndParent(id, parentNode: self) {
                return found // return what was found as-is
            }
        }
        return nil // nothing was found, the entire result is nil
    }
}
Unrelated, but you might want to hide this method that takes in the parentNode parameter from the API surface (e.g. make it private), and give users a more convenient function that only takes the id as the parameter:
extension Node {
    func searchNodeAndParent(_ id: String) -> (Node, Node?)? {
        searchNodeAndParent(id, parentNode: nil)
    }

    private func searchNodeAndParent(_ id: String, parentNode: Node?) -> (Node, Node?)? {
        // ...
    }
}
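A quick self-contained check of the two-function design (the tree shape and ids below are made up for illustration):

```swift
class Node {
    var id: String
    var children: [Node] = []
    init(id: String) { self.id = id }
}

extension Node {
    func searchNodeAndParent(_ id: String) -> (Node, Node?)? {
        searchNodeAndParent(id, parentNode: nil)
    }

    private func searchNodeAndParent(_ id: String, parentNode: Node?) -> (Node, Node?)? {
        if id == self.id {
            return (self, parentNode)
        }
        for child in children {
            if let found = child.searchNodeAndParent(id, parentNode: self) {
                return found
            }
        }
        return nil
    }
}

// root -> (a -> (b), c)
let root = Node(id: "root")
let a = Node(id: "a"), b = Node(id: "b"), c = Node(id: "c")
a.children = [b]
root.children = [a, c]

print(root.searchNodeAndParent("b")?.1?.id ?? "nil")    // "a": b's parent
print(root.searchNodeAndParent("root")?.1?.id ?? "nil") // "nil": the root has no parent
```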

Related

How to check if some properties in an object are nil using Mirroring?

Let's say I have a class that has many properties, and I want to check if most of them are nil...
I would like to exclude only two properties from that check (and check, say, the remaining 20 properties).
I tried something like this:
extension MyClass {
    func isEmpty() -> Bool {
        let excluded = ["propertyName1", "propertyName2"]
        let children = Mirror(reflecting: self).children.filter { $0.label != nil }
        let filtered = children.filter { !excluded.map { $0 }.contains($0.label) }
        let result = filtered.allSatisfy { $0.value == nil }
        return result
    }
}
The first thing that bothers me about this code is that I would have to change the excluded array values if I rename a property.
But that is less important; the real problem is this line:
let result = filtered.allSatisfy{ $0.value == nil }
it doesn't really check if a property is nil... The compiler warns:
Comparing non-optional value of type 'Any' to 'nil' always returns false
So, is there some better / proper way to solve this?
The Mirror API is pretty rough, and the general reflection APIs for Swift haven't been designed yet. Even if they existed though, I don't think you should be using them for this case.
The concept of an "empty instance" with all-nil fields doesn't actually make sense. Imagine a Person(firstName: nil, lastName: nil, age: nil): you wouldn't have an "empty person", you'd have meaningless nonsense. If you need to model nil, use nil: let possiblePerson: Person? = nil
You should fix your data model. But if you need a workaround for now, I have 2 ideas for you:
Just do it the boring way:
extension MyClass {
    func isEmpty() -> Bool {
        a == nil && b == nil && c == nil
    }
}
Or perhaps:
extension MyClass {
    func isEmpty() -> Bool {
        ([a, b, c] as [Any?]).allSatisfy { $0 == nil }
    }
}
Of course, both of these have the downside of needing to be updated whenever a new property is added.
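For completeness: the warning on $0.value == nil comes from value being a non-optional Any. If you do stay on the reflection route, one known workaround is to mirror the value itself, since an Any wrapping a nil optional mirrors as an optional with no children. This is a sketch of that trick, not an endorsement of it (isNilValue is a made-up helper name):

```swift
// Hypothetical helper: detect whether an `Any` actually wraps a nil optional.
func isNilValue(_ value: Any) -> Bool {
    let mirror = Mirror(reflecting: value)
    // A nil optional mirrors with displayStyle .optional and no children;
    // a non-nil optional has one child; a non-optional has a different style.
    return mirror.displayStyle == .optional && mirror.children.isEmpty
}

let absent: Int? = nil
let present: Int? = 3
print(isNilValue(absent as Any))  // true
print(isNilValue(present as Any)) // false
print(isNilValue("plain string")) // false: not an optional at all
```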
Intermediate refactor
Suppose you had:
class MyClass {
    let propertyName1: Int? // suppose this doesn't affect emptiness
    let propertyName2: Int? // suppose this doesn't affect emptiness
    let a: Int?
    let b: Int?
    let c: Int?
}
You can extract out the parts that can be potentially empty:
class MyClass {
    let propertyName1: Int? // suppose this doesn't affect emptiness
    let propertyName2: Int? // suppose this doesn't affect emptiness
    let innerProperties: InnerProperties?

    struct InnerProperties { // TODO: rename to something relevant to your domain
        let a: Int
        let b: Int
        let c: Int
    }

    var isEmpty: Bool { innerProperties == nil }
}
If the properties a/b/c are part of your public API, and you can't change them easily, then you can limit the blast radius of this change by just adding some forwarding computed properties:
extension MyClass {
    public var a: Int? { innerProperties?.a }
    public var b: Int? { innerProperties?.b }
    public var c: Int? { innerProperties?.c }
}
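A runnable sketch of the whole refactor (an initializer with default arguments is added here so the snippet compiles; the property names are the placeholders from above):

```swift
class MyClass {
    let propertyName1: Int? // doesn't affect emptiness
    let propertyName2: Int? // doesn't affect emptiness
    let innerProperties: InnerProperties?

    struct InnerProperties {
        let a: Int
        let b: Int
        let c: Int
    }

    // Emptiness collapses to a single optional check.
    var isEmpty: Bool { innerProperties == nil }

    init(propertyName1: Int? = nil, propertyName2: Int? = nil,
         innerProperties: InnerProperties? = nil) {
        self.propertyName1 = propertyName1
        self.propertyName2 = propertyName2
        self.innerProperties = innerProperties
    }
}

// Forwarding accessors keep existing call sites compiling.
extension MyClass {
    var a: Int? { innerProperties?.a }
    var b: Int? { innerProperties?.b }
    var c: Int? { innerProperties?.c }
}

let empty = MyClass(propertyName1: 1)
let full = MyClass(innerProperties: .init(a: 1, b: 2, c: 3))
print(empty.isEmpty) // true: a/b/c are all absent together
print(full.isEmpty)  // false
print(full.b as Any) // Optional(2)
```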

Binary Tree Upside Down

My code returns an empty result while trying to turn a binary tree upside down using DFS recursion. How do I fix this?
public class TreeNode {
    var val: Int
    var left: TreeNode?
    var right: TreeNode?

    public init(_ val: Int, _ left: TreeNode?, _ right: TreeNode?) {
        self.val = val
        self.left = left
        self.right = right
    }

    func upsideDownBinaryTree(_ root: TreeNode?) -> TreeNode? {
        return dfs(current: root)
    }

    func dfs(current: TreeNode?) -> TreeNode? {
        if current == nil {
            return nil
        }
        let newRoot = dfs(current: current?.left)
        current?.left?.left = current?.right
        current?.left?.right = current
        current?.left = nil
        current?.right = nil
        return newRoot
    }
}
The problem is in the base case of the recursion: it always returns nil. Since every other return value is exactly what was retrieved from the recursive call (newRoot), the function can never return anything other than nil.
Realise that the case current == nil only deals with an empty tree. In all other cases the recursion should stop one step earlier: when it reaches a node without a left child. That node is the one that should be returned.
So change:
if current == nil {
    return nil
}
To:
if current?.left == nil {
    return current
}
Now the base case is focussed on the non-presence of the left child, and in that case the node itself will be returned. The case where current itself is nil is only relevant when the tree is empty, but it is also covered here.
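Putting it together: a self-contained version of the corrected algorithm (rewritten here as a free function with a guard and with default init arguments for brevity, but the logic is the same) that can be checked against a small tree:

```swift
public class TreeNode {
    var val: Int
    var left: TreeNode?
    var right: TreeNode?
    public init(_ val: Int, _ left: TreeNode? = nil, _ right: TreeNode? = nil) {
        self.val = val
        self.left = left
        self.right = right
    }
}

func upsideDownBinaryTree(_ root: TreeNode?) -> TreeNode? {
    // Base case: a node with no left child becomes the new root.
    guard let current = root, current.left != nil else { return root }
    let newRoot = upsideDownBinaryTree(current.left)
    current.left?.left = current.right // old right becomes new left
    current.left?.right = current      // old parent becomes new right
    current.left = nil
    current.right = nil
    return newRoot
}

//      1            4
//     / \          / \
//    2   3   ->   5   2
//   / \              / \
//  4   5            3   1
let tree = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3))
let flipped = upsideDownBinaryTree(tree)
print(flipped?.val ?? -1)               // 4
print(flipped?.right?.val ?? -1)        // 2
print(flipped?.right?.right?.val ?? -1) // 1
```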

How to detect a loop/cycle in a linked list using Swift [duplicate]

This question already has answers here:
How to detect a loop in a linked list?
(29 answers)
Closed 3 years ago.
I'm trying to write a function that detects a loop/cycle in a linked list using Swift.
Can somebody show me code for how to do that?
The only answers I could find were in other programming languages that I'm not really familiar with.
This is the code I have so far:
public class Node<Value> {
    public var value: Value
    public var next: Node?

    public init(value: Value, next: Node? = nil) {
        self.value = value
        self.next = next
    }
}

public struct LinkedList<Value> {
    public var head: Node<Value>?
    public var tail: Node<Value>?

    public init() {}

    public var isEmpty: Bool {
        return head == nil
    }

    public mutating func push(_ value: Value) {
        head = Node(value: value, next: head)
        if tail == nil {
            tail = head
        }
    }

    public mutating func append(_ value: Value) {
        guard !isEmpty else {
            push(value)
            return
        }
        tail!.next = Node(value: value)
        tail = tail!.next
    }
}

extension Node: CustomStringConvertible {
    public var description: String {
        guard let next = next else {
            return "\(value)"
        }
        return "\(value) -> " + String(describing: next) + " "
    }
}

extension LinkedList: CustomStringConvertible {
    public var description: String {
        guard let head = head else {
            return "Empty List"
        }
        return String(describing: head)
    }
}
One way is to traverse the list, somehow keeping track of the nodes previously visited; eventually, you will either visit a previously-visited node (and thus know there is a loop) or hit the end (and know there is not).
A more clever approach is to traverse the list with 2 pointers, but have one travel twice as fast as the other. If there is a loop, at some point the faster pointer will catch up to the slower one within the loop, so there is no explicit need to keep track of previously visited nodes.
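Here is a sketch of that two-pointer approach (Floyd's cycle detection), written against a minimal Node so it runs standalone; adapting it to the LinkedList above would just mean passing list.head:

```swift
final class Node<Value> {
    var value: Value
    var next: Node?
    init(value: Value, next: Node? = nil) {
        self.value = value
        self.next = next
    }
}

func hasCycle<Value>(_ head: Node<Value>?) -> Bool {
    var slow = head // advances one node per step
    var fast = head // advances two nodes per step
    while let fastNode = fast, let nextFast = fastNode.next {
        slow = slow?.next
        fast = nextFast.next
        if slow === fast { return true } // the hare lapped the tortoise: cycle
    }
    return false // fast ran off the end, so the list is finite
}

// A straight list: 1 -> 2 -> 3
let third = Node(value: 3)
let second = Node(value: 2, next: third)
let first = Node(value: 1, next: second)
print(hasCycle(first)) // false

third.next = first     // close the loop: 3 -> 1
print(hasCycle(first)) // true
```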

Linked list declaration in Swift with finger type that can transparently insert in either middle or start

I am attempting to declare a linked list in Swift with a finger type that is a reference either to a node (allowing insertion or removal just beyond that node) or to the linked list itself (in which case insertion or removal happens at the top of the list).
I want to see if this can be made uniform down to the implementation, instead of having to special-case everything: Swift is object-oriented, after all.
I previously had a version which required forced casts, but again I'd like to see if this can be made to work without them (e.g. even if they never end up faulting they still imply runtime checks each time).
I currently have this code:
protocol ContainerNodeInterface: class {
    associatedtype ContainedItem
    var contents: ContainedItem { get }
}

protocol ParentNodeInterface: class {
    associatedtype LinkedItem: ContainerNodeInterface
    var next: LinkedItem? { get set }
}

// Not meant to be instantiated or directly referenced.
class ProtoNode<Contents, NodeType: ParentNodeInterface>: ParentNodeInterface
    where NodeType.ContainedItem == Contents, NodeType.LinkedItem == NodeType {
    typealias LinkedItem = NodeType
    var next: NodeType?

    init() {
        next = nil
    }

    final func insertThisAfterMe(_ node: NodeType) {
        node.next = next
        next = .some(node)
    }

    final func removeNodeAfterMe() -> NodeType? {
        guard let nextNode = next else {
            return nil
        }
        let result = nextNode
        next = result.next
        result.next = nil
        return nextNode
    }
}

class Node<Contents>: ProtoNode<Contents, Node<Contents>>, ContainerNodeInterface {
    typealias ContainedItem = Contents
    typealias NextItem = Node<Contents>
    var contents: Contents

    init(withContents: Contents) {
        contents = withContents
        super.init()
    }
}

typealias ParentNode<Contents> = ProtoNode<Contents, Node<Contents>>
But the Swift compiler, via Xcode, is complaining that Type 'Node<Contents>' does not conform to protocol 'ParentNodeInterface'. This makes no sense! And if I add explicit conformance to ParentNodeInterface to Node, then I get simultaneously that error and one of redundant conformance to the same protocol.
What is missing here?
Xcode Version 10.2 (10E125), Swift 5
I resolved it by splitting ProtoNode into an initial declaration and an extension:
protocol ContainerNodeInterface: class {
    associatedtype ContainedItem
    var contents: ContainedItem { get }
}

protocol ParentNodeInterface: class {
    associatedtype LinkedItem: ContainerNodeInterface
    var next: LinkedItem? { get set }
}

// Not meant to be instantiated or directly referenced.
class ProtoNode<Contents, NodeType: ContainerNodeInterface>: ParentNodeInterface
    where NodeType.ContainedItem == Contents {
    typealias LinkedItem = NodeType
    var next: NodeType?

    init() {
        next = nil
    }
}

extension ProtoNode where NodeType: ParentNodeInterface, NodeType.LinkedItem == NodeType {
    final func insertThisAfterMe(_ node: NodeType) {
        node.next = next
        next = .some(node)
    }

    final func removeNodeAfterMe() -> NodeType? {
        guard let nextNode = next else {
            return nil
        }
        let result = nextNode
        next = result.next
        result.next = nil
        return nextNode
    }
}

class Node<Contents>: ProtoNode<Contents, Node<Contents>>, ContainerNodeInterface {
    typealias ContainedItem = Contents
    typealias NextItem = Node<Contents>
    var contents: Contents

    init(withContents: Contents) {
        contents = withContents
        super.init()
    }
}

typealias ParentNode<Contents> = ProtoNode<Contents, Node<Contents>>
I figure this helps the compiler break the dependency loop: it otherwise has to determine whether Node, as a generic parameter, conforms to the protocol before it can decide the declaration is valid and consider the declared type (Node) to be conforming. Still, it feels a bit silly to need this seemingly pointless extension declaration.
At the very least, the compiler could be slightly more helpful…
First, I would start with a simple linked-list Node type:
final class Node<Value> {
    let value: Value
    var next: Node<Value>?

    init(_ value: Value) {
        self.value = value
    }

    func insert(_ node: Node<Value>) {
        node.next = next
        next = node
    }

    func removeNext() -> Node<Value>? {
        guard let removedNode = next else { return nil }
        next = removedNode.next
        removedNode.next = nil
        return removedNode
    }
}
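For instance, exercised on its own (the snippet duplicates Node so it runs standalone; the values are arbitrary):

```swift
final class Node<Value> {
    let value: Value
    var next: Node<Value>?
    init(_ value: Value) { self.value = value }

    func insert(_ node: Node<Value>) {
        node.next = next
        next = node
    }

    func removeNext() -> Node<Value>? {
        guard let removedNode = next else { return nil }
        next = removedNode.next
        removedNode.next = nil
        return removedNode
    }
}

let head = Node(1)
head.insert(Node(3))            // 1 -> 3
head.insert(Node(2))            // 1 -> 2 -> 3 (inserts right after head)
print(head.next?.value ?? -1)   // 2
let removed = head.removeNext() // detaches the 2
print(removed?.value ?? -1)     // 2
print(head.next?.value ?? -1)   // 3
```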
Then, you can add the concept that you describe: a pointer to "either a node...or to the linked list itself." When you see "or" in a description, that implies a sum type, which in Swift is an enum, either a pointer to the head of a (possibly empty) list, or a pointer to a node. Each has slightly different behaviors, which you manage with switch.
enum NodePointer<Value> {
    case head(Node<Value>?)
    case node(Node<Value>)

    mutating func insert(_ node: Node<Value>) {
        switch self {
        case .head(let n):
            self = .head(node)
            node.next = n
        case .node(let n):
            n.insert(node)
        }
    }

    mutating func removeNext() -> Node<Value>? {
        switch self {
        case .head(let n):
            self = .head(n?.next)
            return n
        case .node(let n):
            return n.removeNext()
        }
    }

    var pointee: Node<Value>? {
        switch self {
        case .head(let n): return n
        case .node(let n): return n
        }
    }
}
With that you would have an interface like:
var list = Node(1)
list.insert(Node(2))
var ptr = NodePointer.head(list)
ptr.insert(Node(1))
ptr.pointee?.next?.next?.value // 2
Note that the specific problem you ran into (that the compiler couldn't work out the conformance) I believe is a compiler bug, though I also believe it's one that's fixed on master currently. I haven't tested that out though. But I don't believe the protocol-based approach is correct for this problem.

Unwrapping dictionary values Swift

I'm creating an adjacency list in Swift, storing an array of nodes. However, when adding an edge from an existing node I need to check whether the from key exists in any of the children, and if it does, whether the to value already exists there. The result is a mess:
func addEdge(from: String, to: String) {
    //the full code for addEdge is incomplete here
    if (children.contains{ $0.nodes[from] != nil }) {
        for child in children {
            if (child.nodes[from] != nil) {
                if (!(child.nodes[from]?.contains{ $0 == to })!) {
                    child.nodes[from]?.append(to)
                }
            }
        }
    }
}
Children is
var children = [Node]()
and Node is
class Node: Hashable {
    var nodes = [String: [String]]()

    var hashValue: Int { return nodes.hashValue }

    static func == (lhs: Node, rhs: Node) -> Bool {
        return lhs.nodes.keys == rhs.nodes.keys
    }
}
Now it works, but seems really ugly. There must be a better way in Swift, but what is it?
Assuming that you do not wish to change the way you have implemented the above code but want to improve readability, you can utilise if let and optional chaining to make your code cleaner and more readable.
func addEdge(from: String, to: String) {
    //the full code for addEdge is incomplete here
    for child in children {
        if let fromNode = child.nodes[from], !fromNode.contains(to) {
            // append through the dictionary so the stored array is mutated,
            // not the local copy bound by `if let`
            child.nodes[from]?.append(to)
        }
    }
}
Swift Optional Chaining
Try something like:
children
    .filter { $0.nodes[from]?.contains(to) == false }
    .forEach { $0.nodes[from]?.append(to) }
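Another option (a sketch assuming the same Node shape; addEdge(in:from:to:) is a hypothetical free-function stand-in for the method) is Dictionary's default subscript, which hides the missing-key handling:

```swift
class Node {
    var nodes = [String: [String]]()
}

// Only touches children that already have the `from` key,
// and skips duplicate `to` entries.
func addEdge(in children: [Node], from: String, to: String) {
    for child in children where child.nodes[from] != nil {
        if child.nodes[from, default: []].contains(to) == false {
            child.nodes[from, default: []].append(to)
        }
    }
}

let node = Node()
node.nodes["a"] = []
let children = [node]
addEdge(in: children, from: "a", to: "b")
addEdge(in: children, from: "a", to: "b") // duplicate edge is ignored
addEdge(in: children, from: "x", to: "y") // no "x" key: nothing happens
print(node.nodes["a"] ?? []) // ["b"]
```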