Go: using interfaces as fields

I'm going through An Introduction to Programming in Go and trying to grasp interfaces. I feel like I have an OK idea of what they are and why we would need them, but I'm having trouble using them. At the end of the section they have:
Interfaces can also be used as fields:
type MultiShape struct {
    shapes []Shape
}
We can even turn MultiShape itself into a Shape by giving it an area method:
func (m *MultiShape) area() float64 {
    var area float64
    for _, s := range m.shapes {
        area += s.area()
    }
    return area
}
Now a MultiShape can contain Circles, Rectangles or even other MultiShapes.
I don't know how to use this. My understanding is that a MultiShape can have a Circle and a Rectangle in its slice.
This is the example code I'm working with
package main

import (
    "fmt"
    "math"
)

type Shape interface {
    area() float64
}

type MultiShape struct {
    shapes []Shape
}

func (m *MultiShape) area() float64 {
    var area float64
    for _, s := range m.shapes {
        area += s.area()
    }
    return area
}

// ===============================================
// Rectangles
type Rectangle struct {
    x1, y1, x2, y2 float64
}

func distance(x1, y1, x2, y2 float64) float64 {
    a := x2 - x1
    b := y2 - y1
    return math.Sqrt(a*a + b*b)
}

func (r *Rectangle) area() float64 {
    l := distance(r.x1, r.y1, r.x1, r.y2)
    w := distance(r.x1, r.y1, r.x2, r.y1)
    return l * w
}

// ===============================================
// Circles
type Circle struct {
    x, y, r float64
}

func (c *Circle) area() float64 {
    return math.Pi * c.r * c.r
}

// ===============================================
func totalArea(shapes ...Shape) float64 {
    var area float64
    for _, s := range shapes {
        area += s.area()
    }
    return area
}

func main() {
    c := Circle{0, 0, 5}
    fmt.Println(c.area())
    r := Rectangle{0, 0, 10, 10}
    fmt.Println(r.area())
    fmt.Println(totalArea(&r, &c))
    //~ This doesn't work but this is my understanding of it
    //~ m := []MultiShape{c, r}
    //~ fmt.Println(totalArea(&m))
}
Can someone help me with this? I have a Python background, so if there is some kind of link between the two, that would help.
Thanks

For example,
package main

import (
    "fmt"
    "math"
)

type Shape interface {
    area() float64
}

type MultiShape struct {
    shapes []Shape
}

func (m *MultiShape) area() float64 {
    var area float64
    for _, s := range m.shapes {
        area += s.area()
    }
    return area
}

type Rectangle struct {
    x1, y1, x2, y2 float64
}

func (s *Rectangle) area() float64 {
    x := math.Abs(s.x2 - s.x1)
    y := math.Abs(s.y2 - s.y1)
    return x * y
}

func (s *Rectangle) distance() float64 {
    x := s.x2 - s.x1
    y := s.y2 - s.y1
    return math.Sqrt(x*x + y*y)
}

type Circle struct {
    x, y, r float64
}

func (s *Circle) area() float64 {
    return math.Pi * s.r * s.r
}

func main() {
    r1 := Rectangle{1, 1, 4, 4}
    fmt.Println(r1.area())
    r2 := Rectangle{2, 4, 3, 6}
    fmt.Println(r2.area())
    c := Circle{10, 10, 2}
    fmt.Println(c.area())
    m := MultiShape{[]Shape{&r1, &r2, &c}}
    fmt.Println(m.area())
}
Output:
9
2
12.566370614359172
23.566370614359172

Related

How can I ignore an embedded struct field while inserting database?

I have 2 database table;
A has 3 columns and they are X, Y, Z
B has 2 columns and they are X, W
My Go structs are like this;
type Base struct {
X int
Y int
}
type A struct {
Base
Z int
}
type B struct {
Base
W int
}
And I initialize my structs like this;
a := A{Base: Base{X: 1, Y:2}, Z: 3}
b := B{Base: Base{X: 1}, W: 4}
When I want to insert these to database using gorm.io ORM, "a" is inserted without any problem but "b" can't be inserted because postgresql gives me an error something like
pq: column "y" of relation "B" does not exist
How can I insert "b" to database without creating another base model that doesn't have "Y" field?
When you create an instance of a struct, every field you don't set explicitly is filled with the zero value of its type; for example, the zero value of int is 0.
So you have two solutions for this question.
Create two separate structs (without the embedded Base), just A and B, like this (maybe you already know this solution):
type A struct {
    X int
    Y int
    Z int
}
type B struct {
    X int
    W int
}
Or use struct tags to make GORM skip the field.
Note: I didn't test this. Also, json tags only affect JSON encoding; GORM has its own tag for skipping a field, and putting it on Base means Y is skipped for A as well, which is why the first solution may be safer.
type Base struct {
    X int
    Y int `gorm:"-"`
}
type A struct {
    Base
    Z int
}
type B struct {
    Base
    W int
}
I think you are not declaring the models correctly. Per the documentation, Base should be tagged as embedded inside both A and B; then it is OK. For your better understanding I am posting your code with that modification. Please note that I have tested this on my machine and it works like a charm.
Here is the model declaration:
type Base struct {
    X int `json:"x"`
    Y int `json:"y"`
}
type A struct {
    Base `gorm:"embedded"`
    Z int `json:"z"`
}
type B struct {
    Base `gorm:"embedded"`
    W int `json:"w"`
}
Here is the main function for your understanding:
func main() {
    fmt.Println("vim-go")
    db, err := gorm.Open("postgres", "host=localhost port=5432 user=postgres dbname=testd password=postgres sslmode=disable")
    if err != nil {
        log.Panic("error occurred", err)
    }
    defer db.Close()
    db.AutoMigrate(&A{}, &B{})
    a := &A{Base: Base{X: 1, Y: 2}, Z: 3}
    b := &B{Base: Base{X: 1}, W: 3}
    if err := db.Create(a).Error; err != nil {
        log.Println(err)
    }
    if err := db.Create(b).Error; err != nil {
        log.Println(err)
    }
    fmt.Println("everything is ok")
}
Please be sure to check the documentation for model declaration and the tags:
gorm model declaration
Note: the Y column will still exist in table B, but it will hold the zero value.
package main

import (
    "encoding/json"
    "fmt"
    "reflect"
)

type SearchResult struct {
    OrderNumber string      `json:"orderNumber"`
    Qunatity    interface{} `json:"qty"`
    Price       string      `json:"price"`
    OrderType   interface{} `json:"type"`
    ItemQty     string      `json:"itemQty"`
}

type Or []SearchResult

func fieldSet(fields ...string) map[string]bool {
    set := make(map[string]bool, len(fields))
    for _, s := range fields {
        set[s] = true
    }
    return set
}

func (s *SearchResult) SelectFields(fields ...string) map[string]interface{} {
    fs := fieldSet(fields...)
    rt, rv := reflect.TypeOf(*s), reflect.ValueOf(*s)
    out := make(map[string]interface{}, rt.NumField())
    for i := 0; i < rt.NumField(); i++ {
        field := rt.Field(i)
        jsonKey := field.Tag.Get("json")
        if fs[jsonKey] {
            out[jsonKey] = rv.Field(i).Interface()
        }
    }
    return out
}

func main() {
    result := &SearchResult{
        OrderNumber: "A-1001",
        Qunatity:    3,
        Price:       "9.99",
        OrderType:   "online",
        ItemQty:     "3",
    }
    b, err := json.MarshalIndent(result.SelectFields("orderNumber", "qty"), "", " ")
    if err != nil {
        panic(err.Error())
    }
    var or Or
    fmt.Print(string(b))
    or = append(or, *result)
    or = append(or, *result)
    for i := 0; i < len(or); i++ {
        c, err := json.MarshalIndent(or[i].SelectFields("price", "type", "itemQty"), "", " ")
        if err != nil {
            panic(err.Error())
        }
        fmt.Print(string(c))
    }
}
One way of doing this is with omitempty fields in the struct; the other is to traverse the struct fields via reflection, which is expensive.
If performance doesn't matter to you, you can use the code snippet above as a reference.

Proper implementation of Sequence Protocol in a Class

I have been working on learning something about the Accelerate framework and am writing a Vector class to go along with my learning experience. I decided I needed to implement the Sequence protocol and, after several false starts and much searching for information relevant to my problem, finally came up with a solution that seems to work. I'm not sure whether my solution is proper, and I would welcome comments on better ways to do this. The current code is a bit long, but not super long, so I will post it here.
import Foundation
import Accelerate

public class Vdsp {
    public class VectorD: Sequence, IteratorProtocol {
        var vindex = 0

        public func makeIterator() -> Double? {
            return next()
        }

        public func next() -> Double? {
            let nextIndex = self.vindex * self.s + self.o
            guard self.vindex < self.l && nextIndex < self.count
            else {
                self.vindex = 0
                return nil
            }
            self.vindex += 1
            return self.data[nextIndex]
        }

        public let count : Int
        fileprivate var l: Int
        fileprivate var o: Int
        fileprivate var s: Int

        public var length : Int {
            get {
                return self.l
            }
            set (value) {
                let l = (value - 1) * self.s + self.o
                if l < 0 || l >= self.count {
                    preconditionFailure("length exceeds vector boundary")
                }
                self.l = value
            }
        }
        public var stride : Int {
            get {
                return self.s
            }
            set (value) {
                let l = (self.l - 1) * value + self.o
                if l < 0 || l >= self.count {
                    preconditionFailure("stride will cause vector to exceed vector boundary")
                }
                self.s = value
            }
        }
        public var offset : Int {
            get {
                return self.o
            }
            set (value) {
                let l = (self.l - 1) * self.s + value
                if l < 0 || l >= self.count {
                    preconditionFailure("offset will cause vector to exceed vector boundary")
                }
                self.o = value
            }
        }
        // length * stride + offset >= 0 and <= count
        public var data : Array<Double>

        public init(length: Int) {
            self.count = length
            self.l = length
            self.s = 1
            self.o = 0
            data = Array(repeating: 0.0, count: count)
        }

        // MARK: - Utility functions
        public var empty : VectorD { // Create a new vector with unit stride and zero offset
            get {
                return VectorD(length: length)
            }
        }
        public func display(decimals: Int) -> String {
            let fmt = String("%0." + String(decimals) + "f\n")
            var aString = ""
            for i in 0..<length {
                aString += String(format: fmt!, self.data[offset + i * stride])
            }
            return aString
        }

        // MARK: - Subscripts and Operators
        public subscript(index: Int) -> Double {
            get {
                if index > length {
                    preconditionFailure("index \(index) out of bounds")
                } else {
                    return data[self.offset + index * self.stride]
                }
            }
            set (newValue) {
                if index > self.length {
                    preconditionFailure("index \(index) out of bounds")
                } else {
                    self.data[self.offset + index * self.stride] = newValue
                }
            }
        }
        public static func + (left: VectorD, right: VectorD) -> VectorD {
            return Vdsp.add(left, right)
        }
        public static func + (left: Double, right: VectorD) -> VectorD {
            return Vdsp.add(left, right)
        }
        public static func + (left: VectorD, right: Double) -> VectorD {
            return Vdsp.add(right, left)
        }
        public static func * (left: VectorD, right: VectorD) -> VectorD {
            return Vdsp.mul(left, right)
        }
        public static func * (left: Double, right: VectorD) -> VectorD {
            return Vdsp.mul(left, right)
        }
        public static func * (left: VectorD, right: Double) -> VectorD {
            return Vdsp.mul(right, left)
        }

        // MARK: - vDSP routines as methods of VectorD
        public func fill(value: Double) {
            var v = value
            vDSP_vfillD(&v, &data + offset, stride, vDSP_Length(length))
        }
        public func ramp(start: Double, increment: Double) {
            var s = start
            var i = increment
            vDSP_vrampD(&s, &i, &data + offset, stride, vDSP_Length(length))
        }
        public var sumval : Double {
            get {
                var s : Double = 0.0
                vDSP_sveD(&data + offset, stride, &s, vDSP_Length(length))
                return s
            }
        }
    }

    // MARK: - vDSP routines as class functions of Vdsp
    public static func add(_ v1: VectorD, _ v2: VectorD) -> VectorD {
        let v3 = v1.empty
        vDSP_vaddD(&v1.data + v1.offset, v1.stride, &v2.data + v2.offset, v2.stride, &v3.data, 1, vDSP_Length(v3.length))
        return v3
    }
    public static func add(_ s: Double, _ v: VectorD) -> VectorD {
        var sdta = s
        let r = v.empty
        vDSP_vsaddD(&v.data + v.offset, v.stride, &sdta, &r.data, 1, vDSP_Length(v.length))
        return r
    }
    public static func mul(_ v1: VectorD, _ v2: VectorD) -> VectorD {
        let v3 = v1.empty
        vDSP_vmulD(&v1.data + v1.offset, v1.stride, &v2.data + v2.offset, v2.stride, &v3.data, 1, vDSP_Length(v3.length))
        return v3
    }
    public static func mul(_ s: Double, _ v: VectorD) -> VectorD {
        var sdta = s
        let r = v.empty
        vDSP_vsmulD(&v.data + v.offset, v.stride, &sdta, &r.data, 1, vDSP_Length(v.length))
        return r
    }
}
I am exercising this with
//: Playground for Accelerate
import UIKit

let V = Vdsp.VectorD(length: 10); V.ramp(start: 0.1, increment: 0.2)
print("Vector V after ramp(0.1,0.2)"); print(V.display(decimals: 3))
V.length = 4
V.stride = 2
V.offset = 1
print("Vector V after attribute modification")
print(V.display(decimals: 3))
let Q = V.empty
Q.ramp(start: 1.0, increment: 1.0)
print("Vector Q after ramp(1.0,1.0)"); print(Q.display(decimals: 3))
print("V * Q"); var R = V * Q
for i in 0..<V.length {
    print("\(V[i]) * \(Q[i]) = \(R[i])")
}
R = V + Q; print("V + Q = R")
for i in 0..<V.length {
    print("\(V[i]) + \(Q[i]) = \(R[i])")
}
print("\n")
for item in V.data {
    print(item)
}
print("\n")
for item in V {
    print(item)
}
print("\n")
V.offset = 3
for item in V {
    print(item)
}
and I seem to get the proper output. The Sequence protocol is in the first few lines of the VectorD class.
Your implementation of the Sequence protocol is not correct.
First, your makeIterator() method is not used at all because it has the wrong signature:
it does not return an Iterator. (You can remove that function from your code without changing anything.) Iterating over the vector elements works
because there is a default implementation of makeIterator() for all
iterators which are declared to conform to Sequence.
Second, your next() method uses an instance variable vindex
which is reset to zero after reaching the end of the iteration.
In other words, it is assumed that all elements are retrieved before
iterating the same vector again. This gives the unexpected output:
let V = Vdsp.VectorD(length: 10)
V.ramp(start: 1.0, increment: 1.0)
print(Array(V.prefix(4))) // [1.0, 2.0, 3.0, 4.0]
print(Array(V.prefix(4))) // [5.0, 6.0, 7.0, 8.0]
print(Array(V.prefix(4))) // [9.0, 10.0]
print(Array(V.prefix(4))) // [1.0, 2.0, 3.0, 4.0]
Here is a possible implementation of the Sequence protocol:
public class VectorD: Sequence {
    public func makeIterator() -> AnyIterator<Double> {
        var vindex = 0
        return AnyIterator {
            let nextIndex = vindex * self.s + self.o
            guard vindex < self.l && nextIndex < self.count else {
                return nil
            }
            vindex += 1
            return self.data[nextIndex]
        }
    }
    // ...
}
Note that vindex is now a local variable of makeIterator()
and captured by the closure. Calling makeIterator() again
will start from the beginning even if the previous iteration did
not retrieve all elements:
print(Array(V.prefix(4))) // [1.0, 2.0, 3.0, 4.0]
print(Array(V.prefix(4))) // [1.0, 2.0, 3.0, 4.0]
Another possible implementation would be
public class VectorD: Sequence {
    public func makeIterator() -> AnyIterator<Double> {
        let upperBound = Swift.min(count, o + l * s)
        let it = Swift.stride(from: o, to: upperBound, by: s)
            .lazy.map { self.data[$0] }.makeIterator()
        return AnyIterator(it)
    }
    // ...
}
using the stride() method from the Swift standard
library.

Is it possible to create a "Positive Number" type in Swift?

Sorry if this is a stupid question, but I'm wondering if there's a way in Swift to create a type that exclusively holds numbers that are strictly greater than zero, where the "positiveness" of the values is enforced at compile time.
For example, can I somehow write code like
func divide(x: PositiveNumber, y: PositiveNumber) -> Float {
    return x / y
}
such that
divide(1, 3)
works but
divide(1, 0)
won't compile?
The closest thing I could come up with was a struct with a single failable initializer, such that the type holds either a positive value or is nil:
struct PositiveNumber {
    let value: Float
    init?(value: Float) {
        if value > 0 {
            self.value = value
        } else {
            return nil
        }
    }
}
func / (left: PositiveNumber, right: PositiveNumber) -> Float {
    return left.value / right.value
}
func divide(x: PositiveNumber?, y: PositiveNumber?) -> Float? {
    if let x = x, let y = y {
        return x / y
    }
    return nil
}
let x1 = PositiveNumber(value: 1)
let y1 = PositiveNumber(value: 3)
let x2 = PositiveNumber(value: -1)
let y2 = PositiveNumber(value: 0)
divide(x1, y: y1)! // .333
divide(x2, y: y2)! // runtime error
That's not terrible but we still have to deal with a lot of optional handling/unwrapping. I'm asking this question because I have many places in my code where I need to check that a value is not zero and I'm curious if there's a way to remove that code and let the compiler handle it. The struct-based solution requires pretty much the same amount of code.
Number type built on Float:
This gist contains a struct that conforms to pretty much everything Float conforms to. It is just a vanilla copy of Float; change it to your liking.
Have you considered a custom operator?
infix operator /+ { associativity left precedence 150 }

func /+ (lhs: Float, rhs: Float) -> Float? {
    guard rhs > 0 else {
        return nil
    }
    return lhs / rhs
}
let test = 2 /+ -1 // nil
let test2 = 2 /+ 1 // 2
let test3 = 2 /+ 1 + 2 // warning
It doesn't really matter if you let it return an optional, or an enum value, or different protocols. You will have to handle the return. But this way you get compiler warnings.
Limited number type with just an operator to handle divisions:
You could change the math altogether and create a PositiveNumber type that returns NaN when the divisor is not strictly positive.
public struct PositiveFloat {
    public var value: Float
    /// Create an instance initialized to zero.
    public init() {
        self.value = 0
    }
    /// Create an instance initialized to `value`.
    public init(_ value: Float) {
        self.value = value
    }
    public init(_ value: PositiveFloat) {
        self.value = value.value
    }
}
extension Float {
    public var positive : PositiveFloat {
        return PositiveFloat(self)
    }
}
public func / (lhs: Float, rhs: PositiveFloat) -> Float {
    if rhs.value > 0 { // divide only by a strictly positive value
        return lhs / rhs.value
    } else {
        return Float.NaN
    }
}
public func / (lhs: PositiveFloat, rhs: PositiveFloat) -> Float {
    if rhs.value > 0 {
        return lhs.value / rhs.value
    } else {
        return Float.NaN
    }
}
let testNormal : Float = 10
let testFloat : Float = -5
let test = testFloat / testNormal.positive
if test.isNaN {
    // do stuff
}
let testNormal : Float = 10
let testFloat : Float = -5
let test = testFloat / testNormal.positive
if test.isNaN {
// do stuff
}

Find digit sum of a number in Swift

Does anyone know how to get the sum of all the digits in a number in Swift?
For example using the number 845 would result in 17
Update for Swift 5 or later: we can use the new Character property wholeNumberValue:
let string = "845"
let sum = string.compactMap{$0.wholeNumberValue}.reduce(0, +)
print(sum) // 17
let integer = 845
let sumInt = String(integer).compactMap{$0.wholeNumberValue}.reduce(0, +)
print(sumInt) // 17
Here is a solution that uses simple integer arithmetic only:
func digitSum(var n : Int) -> Int {
    var sum = 0
    while n > 0 {
        sum += n % 10 // Add least significant digit ...
        n /= 10       // ... and remove it from the number.
    }
    return sum
}
println(digitSum(845)) // 17
Update for Swift 3/4:
func digitSum(_ n : Int) -> Int {
    var n = n
    var sum = 0
    while n > 0 {
        sum += n % 10 // Add least significant digit ...
        n /= 10       // ... and remove it from the number.
    }
    return sum
}
print(digitSum(845)) // 17
Another implementation, just for fun:
func digitSum(_ n : Int) -> Int {
    return sequence(state: n) { (n: inout Int) -> Int? in
        defer { n /= 10 }
        return n > 0 ? n % 10 : nil
    }.reduce(0, +)
}
The recursive solution, in Swift 3!
func digitSum(of number: Int) -> Int {
    if number < 10 {
        return number
    } else {
        return (number % 10) + digitSum(of: number / 10)
    }
}
For the sake of completeness, and for those who would like to see or understand a math-based approach, here is a real-number function technique ported to Swift.
This is not the most efficient way to tally the digits of an integer in Swift, and I don't recommend using it. I would personally use @LeoLDbus's map/reduce answer, because it is short yet illustrates a powerful set of Swift features, or @MartinR's integer mod/divide answer, for its utter simplicity and the relative speed of integer arithmetic.
Cocoa and UIKit have the requisite math functions, so you'll probably need to import one of those frameworks.
func sumDigits(var i : Int) -> Int {
    var sum = 0
    let nDigits = floor(log10(Double(i))) + 1
    for var r = nDigits; r > 0; r-- {
        let p = pow(10, r - 1)
        let d = floor(Double(i) / p)
        sum += Int(d)
        i -= Int(d * p)
    }
    return sum
}
For Swift 4, try the function below:
func sumDigits(num: Int) -> Int {
    return String(num).compactMap { Int(String($0)) }.reduce(0, +)
}
Split it into two pieces:
digits
public extension UnsignedInteger {
    /// The digits that make up this number.
    /// - Parameter radix: The base the result will use.
    func digits(radix: Self = 10) -> [Self] {
        sequence(state: self) { quotient in
            guard quotient > 0
            else { return nil }

            let division = quotient.quotientAndRemainder(dividingBy: radix)
            quotient = division.quotient
            return division.remainder
        }
        .reversed()
    }
}
XCTAssertEqual(
    (867_5309 as UInt).digits(),
    [8,6,7, 5,3,0,9]
)
XCTAssertEqual(
    (0x00_F0 as UInt).digits(radix: 0b10),
    [1,1,1,1, 0,0,0,0]
)
XCTAssertEqual(
    (0xA0_B1_C2_D3_E4_F5 as UInt).digits(radix: 0x10),
    [10,0, 11,1, 12,2, 13,3, 14,4, 15,5]
)
XCTAssertEqual(
    (0o00707 as UInt).digits(radix: 0o10),
    [0b111, 0, 0b111]
)
sum
public extension Sequence where Element: AdditiveArithmetic {
    var sum: Element? { reduce(+) }
}

public extension Sequence {
    /// The first element of the sequence.
    /// - Note: `nil` if the sequence is empty.
    var first: Element? {
        var iterator = makeIterator()
        return iterator.next()
    }

    /// - Returns: `nil` if the sequence has no elements, instead of an "initial result".
    func reduce(
        _ getNextPartialResult: (Element, Element) throws -> Element
    ) rethrows -> Element? {
        guard let first = first
        else { return nil }

        return try dropFirst().reduce(first, getNextPartialResult)
    }
}

XCTAssertEqual([1, 2, 3].sum, 6)
XCTAssertEqual([0.5, 1, 1.5].sum, 3)
XCTAssertNil([CGFloat]().sum)

Operator for parametrized struct is dependant on another operator

I'm writing a LoG filter in Rust and I wanted to use | as the operator for element-wise multiplication (a_{i,j} * b_{i,j}), but the compiler is complaining about the Output result. It says that self[(i, j)] * out[(i, j)] does not equal Mul<N>::Output.
impl<N> BitOr<Matrix<N>> for Matrix<N> where N: Mul<N> {
    type Output = Matrix<N>;

    fn bitor(self, other: Matrix<N>) -> Matrix<N> {
        if self.width() != other.width() ||
           self.height() != other.height() {
            panic!("Matrices need to have equal dimensions");
        }
        let mut out = Matrix::new(self.width(), self.height());
        for i in 0..(self.width()) {
            for j in 0..(self.height()) {
                out[(i, j)] = self[(i, j)] * out[(i, j)];
            }
        }
        out
    }
}
Is there any way to set output based on Mul<N>::Output type?
I guess this should work:
impl<N> BitOr<Matrix<N>> for Matrix<N> where N: Mul<N> {
    type Output = Matrix<<N as Mul<N>>::Output>;

    fn bitor(self, other: Matrix<N>) -> Matrix<<N as Mul<N>>::Output> {
        if self.width() != other.width() ||
           self.height() != other.height() {
            panic!("Matrices need to have equal dimensions");
        }
        let mut out = Matrix::new(self.width(), self.height());
        for i in 0..(self.width()) {
            for j in 0..(self.height()) {
                // Note: multiply by `other`, not the freshly created `out`.
                out[(i, j)] = self[(i, j)] * other[(i, j)];
            }
        }
        out
    }
}
You didn't provide a small runnable example, so I made my own. This works:
use std::ops::{Mul, BitOr};

#[derive(Copy, Clone, Debug)] // `Show` was renamed `Debug` in modern Rust
struct Matrix<N>(N, N);

impl<N> BitOr<Matrix<N>> for Matrix<N> where N: Mul<N, Output = N> {
    type Output = Matrix<N>;

    fn bitor(self, other: Matrix<N>) -> Matrix<N> {
        Matrix(self.0 * other.0, self.1 * other.1)
    }
}

fn main() {
    let a = Matrix(-1, -1);
    let b = Matrix(2, 3);
    let c = a | b;
    println!("{:?}", c)
}
The main thing I had to do was N: Mul<N, Output=N>, which specifies that N must be multipliable by another N and will result in another N.