Parsing simple groups of checkboxes in Go - forms

I'm parsing a form in Go and I frequently find groups of checkboxes which need to be processed into text like so:
[ ] Foo
[x] Bar
[ ] Baz
[x] Bat
where the output should be a comma-separated list "BarText, BatText" corresponding to the checked items, or "None" if none of the items are checked. What is a good way to handle this situation? Repeating the logic each time seems like a bad idea.
In the spirit of YAGNI, there's no need to handle possible future changes like translations into other languages (that example in particular is highly unlikely to be relevant in the present context).
Efficiency is unimportant for this application.
Edit: code looks like this (source):
func handleCheckboxesForm(w http.ResponseWriter, r *http.Request) {
    b := func(name string) bool { // helper function for boolean values in the form
        return r.FormValue(name) == "on"
    }
    text := "Header stuff here"
    mytext := ""
    if b("nfpa-alk") {
        mytext += ", alkaline"
    }
    if b("nfpa-acid") {
        mytext += ", acid"
    }
    if b("nfpa-w") {
        mytext += ", reacts violently with water"
    }
    if b("nfpa-alk") || b("nfpa-acid") || b("nfpa-w") {
        text += mytext[2:] + "\n"
    } else {
        text += "none\n"
    }
    // lots of other checkbox groups here
    // do stuff with text
}

There is a lot of repeated code in yours which can be factored out.
Your code must contain at least the following "fragments":
The mappings from entry name to entry text, which can be stored in a map, e.g.
var mappings = map[string]string{
    "Foo": "Foo text",
    "Bar": "Bar text",
    "Baz": "Baz text",
    "Bat": "Bat text",
    // ... other mappings
}
And the list of keys belonging to a group, which can be stored in a slice, e.g.
var group1 = []string{"Foo", "Bar", "Baz", "Bat"}
Once you've defined these, you can have a helper function which handles a group:
func handleGroup(r *http.Request, group []string) (res string) {
    for _, v := range group {
        if r.FormValue(v) == "on" {
            res += ", " + mappings[v]
        }
    }
    if res == "" {
        return "none\n"
    }
    return res[2:] + "\n"
}
That's all. After this your handler can be this simple:
func checkboxHandler(w http.ResponseWriter, r *http.Request) {
    // Handle group1:
    res1 := handleGroup(r, group1)
    // Handle group2:
    res2 := handleGroup(r, group2)
}
Notes:
It wasn't your requirement, but this solution handles translations very easily: each translation can have its own mappings map, and that's all. Nothing else needs to be changed.
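For illustration only, a hypothetical second language might look like this (the variable names and translated strings below are made up, not part of the original answer):
// Hypothetical per-language maps; handleGroup itself stays unchanged.
var mappingsEN = map[string]string{
    "Foo": "Foo text",
    "Bar": "Bar text",
}
var mappingsDE = map[string]string{
    "Foo": "Foo-Text",
    "Bar": "Bar-Text",
}

// Point the mappings variable used by handleGroup at the desired language.
var mappings = mappingsEN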
Performance also wasn't your concern, but appending strings this way isn't very efficient. If performance is even a minor concern, you can improve it without adding complexity by using a bytes.Buffer:
func handleGroup(r *http.Request, group []string) string {
    buf := &bytes.Buffer{}
    for _, v := range group {
        if r.FormValue(v) == "on" {
            buf.WriteString(", ")
            buf.WriteString(mappings[v])
        }
    }
    if buf.Len() == 0 {
        return "none\n"
    }
    buf.WriteString("\n")
    return string(buf.Bytes()[2:])
}

This stores the names of the checked boxes in a slice.
Then it iterates over the slice, building a string by appending ", " after each name.
Finally, if the result is longer than 2 bytes it trims the trailing ", " (2 bytes); otherwise it prints "None".
func(w http.ResponseWriter, r *http.Request) {
    r.ParseMultipartForm(0)
    arr := []string{}
    if r.FormValue("Foo") == "on" {
        arr = append(arr, "Foo")
    }
    if r.FormValue("Bar") == "on" {
        arr = append(arr, "Bar")
    }
    if r.FormValue("Baz") == "on" {
        arr = append(arr, "Baz")
    }
    if r.FormValue("Bat") == "on" {
        arr = append(arr, "Bat")
    }
    out := ""
    for _, title := range arr {
        out += title + ", "
    }
    if len(out) > 2 {
        out = out[:len(out)-2]
    } else {
        out = "None"
    }
    fmt.Println(out)
}
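As a side note (not part of the original answer), strings.Join can replace the manual concatenation and trimming, since it only puts the separator between elements:
out := "None"
if len(arr) > 0 {
    out = strings.Join(arr, ", ")
}
fmt.Println(out)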
If you want to iterate,
for k, vs := range r.Form {
    for _, v := range vs {
        fmt.Println(k, v)
    }
}
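One caveat worth adding here (not in the original answer): r.Form is only populated after the request has been parsed, so call ParseForm (or ParseMultipartForm, as above) before ranging over it:
if err := r.ParseForm(); err != nil {
    http.Error(w, err.Error(), http.StatusBadRequest)
    return
}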


How to trim specific text

I have some text in file.Text like below:
#cat tmp
host = "192.168.2.80"
port = 5432
user = "pnmsuser"
password = "PNMS$$$$$$"
dbname = "pnms"
I just want text like below after trimming:
"192.168.2.80"
5432
"pnmsuser"
"PNMS$$$$$$"
"pnms"
I tried to trim it like below:
func dbFileTrimming() {
    dat, err := ioutil.ReadFile("tmp")
    check(err)
    for key, line := range strings.Split(strings.TrimRight(string(dat), "\n"), "\n") {
        // println(key, line)
        if key == 3 {
            line := string([]rune(line)[11:])
            fmt.Println(line)
        } else if key == 4 {
            line := string([]rune(line)[9:])
            fmt.Println(line)
        } else {
            line := string([]rune(line)[7:])
            fmt.Println(line)
        }
    }
}
Is there a simple method for this?
Chop the line after the =:
for _, line := range strings.Split(strings.TrimRight(string(dat), "\n"), "\n") {
    line = line[strings.Index(line, " = ")+3:]
    fmt.Println(line)
}
This looks like an INI file, or similar enough so that libraries like go-ini could work.
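A rough, untested sketch using the gopkg.in/ini.v1 package (assumed here; the library may normalize or strip the surrounding quotes, so the output may not match the raw file text exactly):
package main

import (
    "fmt"
    "log"

    "gopkg.in/ini.v1"
)

func main() {
    cfg, err := ini.Load("tmp")
    if err != nil {
        log.Fatal(err)
    }
    // Print only the values of the default (unnamed) section, in file order.
    for _, key := range cfg.Section("").Keys() {
        fmt.Println(key.Value())
    }
}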
Alternatively, try strings.Split() and put the results in a map. Quick and dirty:
result := map[string]string{}
for _, line := range strings.Split(string(dat), "\n") {
    split := strings.SplitN(line, "=", 2)
    if len(split) != 2 {
        continue // skip blank or malformed lines
    }
    key := strings.TrimSpace(split[0])
    value := strings.TrimSpace(split[1])
    result[key] = value
}

go insert composite type array unsupported type

sql
CREATE TABLE public.tiantang_page (
    href varchar NOT NULL,
    status int4 NOT NULL,
    description varchar NOT NULL,
    urls url[] NULL
);

CREATE TYPE url AS (
    url varchar,
    status int4
);
insert composite type array
type url struct {
    url    string
    status int
}

var urls [1]url
urls[0] = url{
    url:    "",
    status: 0,
}

update := "UPDATE \"public\".\"tiantang_page\" SET \"urls\"=$1 where \"href\"=$2;"
r, err := db.Exec(update, pq.Array(urls), href)
if err != nil {
    log.Fatal(err)
}
error
sql: converting argument $1 type: unsupported type parsetest.url, a struct
library
https://godoc.org/github.com/lib/pq
Note that custom composite types are not fully supported by lib/pq.
If all you want is to be able to store the urls then the simplest approach would be to implement the driver.Valuer interface on the url type and then use it as you do with pq.Array:
func (u url) Value() (driver.Value, error) {
    return fmt.Sprintf("(%s,%d)", u.url, u.status), nil
}

// ...

r, err := db.Exec(update, pq.Array(urls), href)
more info on that can be found here: https://github.com/lib/pq/issues/544
Note that I haven't tried this with arrays, only with slices, so you may have to switch from using an array to using a slice, i.e. instead of var urls [1]url you would use var urls = make([]url, 1).
If you also want to be able to retrieve the array of urls back from the db, then you'll have to implement the sql.Scanner interface, however here the pq.Array is not very reliable and you'll have to implement the scanner on the slice type and do all the parsing yourself.
The general format of composite types is (val1, val2, ...); note that you have to put double quotes around values that contain commas or parentheses. For example, to construct a value of the url type you would use the literal expression (http://example.com,4). More info in the docs.
The format for an array of composite types is {"(val1, val2, ...)" [, ...]}; note that in this case, if you need to put double quotes around the values, you have to escape them. For example: {"(http://example.com,4)","(\"http://example.com/?list=foo,bar,baz\",3)"}
So, as you can see, the more complex the data in the composite type, the more complex the parsing becomes as well.
Here's a crude example (does not handle quoted values):
type urlslice []url

func (s *urlslice) Scan(src interface{}) error {
    var a []byte // the pq array as bytes
    switch v := src.(type) {
    case []byte:
        a = v
    case string:
        a = []byte(v)
    case nil:
        *s = nil
        return nil
    default:
        return fmt.Errorf("urlslice.Scan unexpected src type %T", src)
    }
    a = a[1 : len(a)-1] // drop curly braces
    for i := 0; i < len(a); i++ {
        if a[i] == '"' && (len(a) > (i+1) && a[i+1] == '(') { // element start?
            i += 2 // move past `"(`
            j := i // start of url.url
            u := url{}
            for ; i < len(a) && a[i] != ','; i++ {
            }
            u.url = string(a[j:i])
            i += 1 // move past `,`
            j = i // start of url.status
            for ; i < len(a) && a[i] != ')'; i++ {
            }
            i64, err := strconv.ParseInt(string(a[j:i]), 10, 64)
            if err != nil {
                return err
            }
            u.status = int(i64)
            *s = append(*s, u)
            i += 2 // move past `)",`
        }
    }
    return nil
}
For completeness, here's the Valuer interface implemented by the slice type, again not handling proper quoting of values that may require it:
func (s urlslice) Value() (driver.Value, error) {
    data := []byte{'{'}
    for _, url := range s {
        data = append(data, '"', '(')
        data = append(data, []byte(url.url)...)
        data = append(data, ',')
        data = strconv.AppendInt(data, int64(url.status), 10)
        data = append(data, ')', '"', ',')
    }
    data[len(data)-1] = '}' // replace last ',' with '}' to close the array
    return data, nil
}
With the urlslice implementing the two interfaces directly you can stop using pq.Array.
var urls = urlslice{{
    url:    "http://example.com",
    status: 4,
}}
update := `UPDATE "public"."tiantang_page" SET "urls"=$1 where "href"=$2`
r, err := db.Exec(update, urls, href)
if err != nil {
    log.Fatal(err)
}

var urls2 urlslice
selurls := `SELECT "urls" FROM "public"."tiantang_page" where "href" = $1`
if err := db.QueryRow(selurls, href).Scan(&urls2); err != nil {
    log.Fatal(err)
}
Please keep in mind that both of the above examples should be considered only as hints of the direction to take in solving this problem. Not only are the two examples incomplete in that they don't handle quoted values, but they are also not very elegant implementations.
Reasonably complete composite literal parser:
type parseState int

const (
    state_initial     parseState = iota // start
    state_value_start                   // no bytes read from value yet
    state_value                         // unquoted value
    state_quoted                        // inside quote
    state_value_end                     // after a close quote
    state_end                           // after close paren
)

func parseComposite(in []byte) ([]string, error) {
    state := state_initial
    ret := []string{}
    val := []byte{}
    for _, b := range in {
        switch state {
        case state_initial:
            if b != '(' {
                return nil, fmt.Errorf("initial character not '(': %v", in)
            } else {
                state = state_value_start
            }
        case state_value_start:
            if b == '"' {
                state = state_quoted
                continue
            }
            fallthrough
        case state_value:
            if b == ',' {
                ret = append(ret, string(val))
                val = nil
                state = state_value_start
            } else if b == ')' {
                ret = append(ret, string(val))
                val = nil
                state = state_end
            } else {
                val = append(val, b)
            }
        case state_quoted:
            if b == '"' {
                ret = append(ret, string(val))
                val = nil
                state = state_value_end
            } else {
                val = append(val, b)
            }
        case state_value_end:
            if b == ',' {
                state = state_value_start
            } else if b == ')' {
                state = state_end
            } else {
                return nil, fmt.Errorf("invalid delimiter after closing quote: %v", in)
            }
        case state_end:
            return nil, fmt.Errorf("trailing bytes: %v", in)
        }
    }
    if state != state_end {
        return nil, fmt.Errorf("unterminated value: %v", in)
    }
    return ret, nil
}
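A quick usage sketch (the sample value below is made up for illustration):
fields, err := parseComposite([]byte(`("http://example.com/?list=foo,bar,baz",3)`))
if err != nil {
    log.Fatal(err)
}
fmt.Println(fields) // [http://example.com/?list=foo,bar,baz 3]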

Checking for whitespace in a string typed array - Swift

In a string-typed array, how can I achieve the same functionality as I would when checking for whitespace in a single string? I'd like to check whether the array contains only whitespace.
var stringExample: String!
var stringArrayExample: [String]!

if stringExample.trimmingCharacters(in: .whitespaces).isEmpty {
    // string is empty or contains only whitespace characters
}
Swift 3 would look something like this if I'm understanding what you're wanting:
var someStrings = [" ", "foo", "bar", "\t"]
let result = someStrings.filter { $0.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty }
print(result) // [" ", "\t"]
If you're just wanting to know if the array of strings are all whitespace-only strings you could change the last two lines to:
let result = someStrings.filter { $0.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty == false }
print(result.isEmpty) // false
Note that both of these use .whitespacesAndNewlines; if you don't want newlines included, just use .whitespaces as you do in your original example.
I've created an extension for String which returns whether it's empty or contains only whitespace:
extension String {
    var isEmptyOrWhitespace: Bool {
        return self.trimmingCharacters(in: .whitespaces).isEmpty
    }
}
And since I'm also a .NET developer and like the methods Any, All etc. I've also created an extension for the Array type, which lets me check a condition for every element in the array, leveraging the reduce function:
extension Array {
    func all(test: (Element) -> Bool) -> Bool {
        return self.reduce(true) { $0 && test($1) }
    }
}
Then you can combine these two to get a fairly nice syntax, which is also fairly performant: thanks to the short-circuiting &&, the test closure is no longer evaluated once an element fails it (although reduce still walks the whole array, so a for loop would probably be even more efficient).
let strings1 = [" ", "", "\t"]
print(strings1.all { $0.isEmptyOrWhitespace }) // true
print(strings1.all { !$0.isEmptyOrWhitespace }) // false
By printing within the test, you can see it no longer executes the tests for elements when it finds the first non-compliant one.
let strings2 = [" ", "x", "\t"]
print(strings2.all(test: { (str) -> Bool in
    let e = str.isEmptyOrWhitespace
    print("[\(str)]: \(e)")
    return e
}))
Prints:
[ ]: true
[x]: false
false

Parsing text and representing it with tokens using Scala

I'm getting frustrated trying to convert a small part of the Golang templating language to Scala.
Below are the key parts of the lex.go source code: https://github.com/golang/go/blob/master/src/text/template/parse/lex.go
The tests are here: https://github.com/golang/go/blob/master/src/text/template/parse/lex_test.go
Basically this "class" takes a string and returns an Array of "itemType". In the template string, the start and end of special tokens are marked with double curly braces {{ and }}.
For example:
"{{for}}"
returns an array of 4 items:
item{itemLeftDelim, 0, "{{"} // scala case class would be Item(ItemLeftDelim, 0, "{{")
item{itemIdentifier, 0, "for"}
item{itemRightDelim, 0, "}}"}
item{itemEOF, 0, ""}
The actual call would look like:
l := lex("for", `{{for}}`, "{{", "}}") // you pass in the start and end delimiters {{ and }}
for {
    item := l.nextItem()
    items = append(items, item)
    if item.typ == itemEOF || item.typ == itemError {
        break
    }
}
return
The key parts of the source code are below:
// itemType identifies the type of lex items.
type itemType int

const (
    itemError itemType = iota // error occurred; value is text of error
    itemEOF
    itemLeftDelim // left action delimiter
    // .............. skipped
)

const (
    leftDelim    = "{{"
    rightDelim   = "}}"
    leftComment  = "/*"
    rightComment = "*/"
)

// item represents a token or text string returned from the scanner.
type item struct {
    typ itemType // The type of this item.
    pos Pos      // The starting position, in bytes, of this item in the input string.
    val string   // The value of this item.
}

// stateFn represents the state of the scanner as a function that returns the next state.
type stateFn func(*lexer) stateFn

// lexer holds the state of the scanner.
type lexer struct {
    name       string    // the name of the input; used only for error reports
    input      string    // the string being scanned
    leftDelim  string    // start of action
    rightDelim string    // end of action
    state      stateFn   // the next lexing function to enter
    pos        Pos       // current position in the input
    start      Pos       // start position of this item
    width      Pos       // width of last rune read from input
    lastPos    Pos       // position of most recent item returned by nextItem
    items      chan item // channel of scanned items
    parenDepth int       // nesting depth of ( ) exprs
}

// lex creates a new scanner for the input string.
func lex(name, input, left, right string) *lexer {
    if left == "" {
        left = leftDelim
    }
    if right == "" {
        right = rightDelim
    }
    l := &lexer{
        name:       name,
        input:      input,
        leftDelim:  left,
        rightDelim: right,
        items:      make(chan item),
    }
    go l.run()
    return l
}

// run runs the state machine for the lexer.
func (l *lexer) run() {
    for l.state = lexText; l.state != nil; {
        l.state = l.state(l)
    }
}

// nextItem returns the next item from the input.
func (l *lexer) nextItem() item {
    item := <-l.items
    l.lastPos = item.pos
    return item
}

// emit passes an item back to the client.
func (l *lexer) emit(t itemType) {
    l.items <- item{t, l.start, l.input[l.start:l.pos]}
    l.start = l.pos
}

// lexText scans until an opening action delimiter, "{{".
func lexText(l *lexer) stateFn {
    for {
        if strings.HasPrefix(l.input[l.pos:], l.leftDelim) {
            if l.pos > l.start {
                l.emit(itemText)
            }
            return lexLeftDelim
        }
        if l.next() == eof {
            break
        }
    }
    // Correctly reached EOF.
    if l.pos > l.start {
        l.emit(itemText)
    }
    l.emit(itemEOF)
    return nil
}

// next returns the next rune in the input.
func (l *lexer) next() rune {
    if int(l.pos) >= len(l.input) {
        l.width = 0
        return eof
    }
    r, w := utf8.DecodeRuneInString(l.input[l.pos:])
    l.width = Pos(w)
    l.pos += l.width
    return r
}

// lexLeftDelim scans the left delimiter, which is known to be present.
func lexLeftDelim(l *lexer) stateFn {
    l.pos += Pos(len(l.leftDelim))
    if strings.HasPrefix(l.input[l.pos:], leftComment) {
        return lexComment
    }
    l.emit(itemLeftDelim)
    l.parenDepth = 0
    return lexInsideAction
}

// lexRightDelim scans the right delimiter, which is known to be present.
func lexRightDelim(l *lexer) stateFn {
    l.pos += Pos(len(l.rightDelim))
    l.emit(itemRightDelim)
    return lexText
}
// there are more stateFn
So I was able to write the item and itemType:
case class Item(typ: ItemType, pos: Int, v: String)
sealed trait ItemType
case object ItemError extends ItemType
case object ItemEOF extends ItemType
case object ItemLeftDelim extends ItemType
// ...
The stateFn and Lex definitions:
trait StateFn extends (Lexer => StateFn) {
}
I'm basically really stuck on the main parts here. So things seem to be kicked off like this:
A Lex is created, then "go l.run()" is called.
Run is a loop, which keeps looping until EOF or an error is found.
The loop initializes with lexText, which scans until it finds an {{, and then it sends a message to a channel with all the preceding text of type 'itemText', passing it an 'item'. It then returns the function lexLeftDelim. lexLeftDelim does the same sort of thing, it sends a message 'item' of type itemLeftDelim.
It keeps parsing the string until it reaches EOF basically.
I can't think in Scala that well, but I know I can use an Actor here to pass it an 'Item' message.
For the part about returning a function, I asked and got some good ideas here: How to model recursive function types?
Even after this, I am really frustrated and I can't seem to glue these concepts together.
I'm not looking for someone to implement the entire thing for me, but if someone could write just enough code to parse a simple string like "{{}}" that would be awesome. And if they could explain why they did a certain design that would be great.
I created a case class for Lex:
case class Lex(
  name: String,
  input: String,
  leftDelim: String,
  rightDelim: String,
  state: StateFn,
  var pos: Int = 0,
  var start: Int = 0,
  var width: Int = 0,
  var lastPos: Int = 0,
  var parenDepth: Int = 0
) {
  def next(): Option[String] = {
    if (this.pos >= this.input.length) {
      this.width = 0
      return None
    }
    this.width = 1
    val nextChar = this.input.drop(this.pos).take(1)
    this.pos += 1
    Some(nextChar)
  }
}
The first stateFn is LexText and so far I have:
object LexText extends StateFn {
  def apply(l: Lexer) = {
    while {
      if (l.input.startsWith(l.leftDelim)) {
        if (l.pos > l.start) {
          // ????????? emit itemText using an actor?
        }
        return LexLeftDelim
      }
      if (l.next() == None) {
        break
      }
    }
    if (l.pos > l.start) {
      // emit itemText
    }
    // emit EOF
    return None // ?? nil? how can I support an Option[StateFn]
  }
}
I need guidance on getting the Actor's setup, along with the main run loop:
func (l *lexer) run() {
    for l.state = lexText; l.state != nil; {
        l.state = l.state(l)
    }
}
This is an interesting problem domain that I tried to tackle using Scala, and so far I am a bit confused. I'm hoping someone else finds it interesting, can work with what little I have so far, and can provide some code and critique whether I am doing it correctly or not.
I know deep down I shouldn't be mutating, but I'm still on the first few pages of the functional book :)
If you translate the Go code literally into Scala, you'll get a very unidiomatic piece of code. You'll probably get a much more maintainable (and shorter!) Scala version by using parser combinators. There are plenty of resources about them on the internet.
import scala.util.parsing.combinator._

sealed trait ItemType
case object LeftDelim extends ItemType
case object RightDelim extends ItemType
case object Identifier extends ItemType

case class Item(ty: ItemType, token: String)

object ItemParser extends RegexParsers {
  def left: Parser[Item]  = """\{\{""".r ^^ { _ => Item(LeftDelim, "{{") }
  def right: Parser[Item] = """\}\}""".r ^^ { _ => Item(RightDelim, "}}") }
  def ident: Parser[Item] = """[a-z]+""".r ^^ { x => Item(Identifier, x) }
  def item: Parser[Item]  = left | right | ident
  def items: Parser[List[Item]] = rep(item)
}

// ItemParser.parse(ItemParser.items, "{{foo}}")
// res5: ItemParser.ParseResult[List[Item]] =
//   [1.8] parsed: List(Item(LeftDelim,{{), Item(Identifier,foo), Item(RightDelim,}}))
Adding whitespace skipping, or configurable left and right delimiters is trivial.

Does Go allow specification of an interface for a map with particular key type?

I wrote a function that would return a sorted slice of strings from a map[string]Foo. I'm curious what is the best way to create a generic routine that can return a sorted slice of strings from any type that is a map with strings as keys.
Is there a way to do it using an interface specification? For example, is there any way to do something like:
type MapWithStringKey interface {
<some code here>
}
To implement the interface above, a type would need strings as keys. I could then write a generic function that returns a sorted list of keys for fulfilling types.
This is my current best solution using the reflect package:
func SortedKeys(mapWithStringKey interface{}) []string {
    keys := []string{}
    typ := reflect.TypeOf(mapWithStringKey)
    if typ.Kind() == reflect.Map && typ.Key().Kind() == reflect.String {
        switch typ.Elem().Kind() {
        case reflect.Int:
            for key, _ := range mapWithStringKey.(map[string]int) {
                keys = append(keys, key)
            }
        case reflect.String:
            for key, _ := range mapWithStringKey.(map[string]string) {
                keys = append(keys, key)
            }
        // ... add more cases as needed
        default:
            log.Fatalf("Error: SortedKeys() does not handle %s\n", typ)
        }
        sort.Strings(keys)
    } else {
        log.Fatalln("Error: parameter to SortedKeys() not map[string]...")
    }
    return keys
}
Click for Go Playground version
I'm forced to code type assertions for each supported type even though at compile time, we should know the exact type of the mapWithStringKey parameter.
You cannot make partial types. But you can define an interface which serves your purpose:
type SortableKeysValue interface {
    // a function that returns the strings to be sorted
    Keys() []string
}

func SortedKeys(s SortableKeysValue) []string {
    keys := s.Keys()
    sort.Strings(keys)
    return keys
}

type MyMap map[string]string

func (s MyMap) Keys() []string {
    keys := make([]string, 0, len(s))
    for k, _ := range s {
        keys = append(keys, k)
    }
    return keys
}
Try it here: http://play.golang.org/p/vKfri-h4Cp
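A small usage sketch (the example values are made up):
m := MyMap{"banana": "yellow", "apple": "green", "cherry": "red"}
fmt.Println(SortedKeys(m)) // [apple banana cherry]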
Hope that helps (go-1.1):
package main

import (
    "fmt"
    "reflect"
)

var m = map[string]int{"a": 3, "b": 4}

func MapKeys(m interface{}) (keys []string) {
    v := reflect.ValueOf(m)
    for _, k := range v.MapKeys() {
        keys = append(keys, k.Interface().(string))
    }
    return
}

func main() {
    fmt.Printf("%#v\n", MapKeys(m))
}
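Worth noting for readers on newer Go: since Go 1.18, type parameters make this possible without reflection or per-type assertions. A minimal sketch, not part of the original answers:
package main

import (
    "fmt"
    "sort"
)

// SortedKeys accepts any map whose keys are strings, regardless of the value type.
func SortedKeys[V any](m map[string]V) []string {
    keys := make([]string, 0, len(m))
    for k := range m {
        keys = append(keys, k)
    }
    sort.Strings(keys)
    return keys
}

func main() {
    fmt.Println(SortedKeys(map[string]int{"b": 4, "a": 3})) // [a b]
}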