Payload from Monday.com not providing the desired information

I am trying to create a webhook from Monday.com to an AWS Lambda function.
My Monday board is structured like so:

| Name | Price | Status |
| ---- | ----- | ------ |
| A | 8 | Under Review |
| B | 9 | Under Review |
There is a webhook configured that fires when the Status changes to “Approved”, so if the board changes to:
| Name | Price | Status |
| ---- | ----- | ------ |
| A | 8 | Approved |
| B | 9 | Under Review |
then a payload is sent to our URL. However, the payload only tells us that the Status changed from “Under Review” to “Approved”.
I am trying to get the Name and Price as well. Is it possible to get a custom payload from Monday.com, or what is the best way to proceed?
Thanks in advance!
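Monday.com's webhook payload is intentionally minimal, so a common approach is to treat it as a trigger: pull the item id out of the payload inside the Lambda, then query Monday's GraphQL API for the remaining columns (Name, Price). A minimal Python sketch, assuming the documented `event.pulseId` payload field and the public `api.monday.com/v2` endpoint; verify both against your actual payload and API version:

```python
import json

# Monday.com webhook payloads carry the changed item's id as event.pulseId
# (field name per Monday's webhook docs; verify against your actual payload).
def extract_item_id(payload: dict) -> int:
    return payload["event"]["pulseId"]

# Build a GraphQL query asking the Monday API for the item's name and
# column values (which would include Price), so the Lambda can fetch the
# fields the webhook itself does not deliver.
def build_item_query(item_id: int) -> str:
    return (
        "query { items (ids: [%d]) { name column_values { id text } } }"
        % item_id
    )

# Inside the Lambda you would POST this query to https://api.monday.com/v2
# with an Authorization header holding your API token, e.g. (untested sketch):
#
#   import urllib.request
#   req = urllib.request.Request(
#       "https://api.monday.com/v2",
#       data=json.dumps({"query": build_item_query(item_id)}).encode(),
#       headers={"Authorization": API_TOKEN,
#                "Content-Type": "application/json"},
#   )
#   body = json.loads(urllib.request.urlopen(req).read())

sample_payload = {"event": {"pulseId": 12345, "boardId": 67890,
                            "value": {"label": {"text": "Approved"}}}}
print(build_item_query(extract_item_id(sample_payload)))
```

The Price column's id comes back in the `column_values` list of the response; column ids are board-specific, so inspect one response before hard-coding anything.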

Related

Kafka / KSQL, stuck in reducing stream/table

I have two streams (from two different systems, imported via connectors). Some of the information from the two streams will be used to build combined information.
Currently I'm working with ksqlDB, but I'm having problems with the final step of reducing the information from both streams.
Both streams contain a tree structure (id/parentId), so I've used a second table for each stream to find certain information from the parents, which is then joined into a table containing all the information needed for the final reduce.
The main matching column is always the same; however, one or more additional columns (not a fixed set) are also needed for the final match. Those columns might also only partially match each other.
An example output of the table might look like this:
| id | match | matchExtra1 | matchExtra2 | matchExtra3 |
| -- | ----- | ----------- | ----------- | ----------- |
| 1 | 1 | Extra1 | Extra2 | Extra3 |
| 2 | 1 | Extra1 | Extra4 | Extra5 |
| 3 | 1 | Extra6 | Extra7 | Extra8 |
| 4 | 1 | Extra9 | Extr10 | tra8 |
In this case, id 1 and 2 should be matched and id 3 and 4 should be another match.
If this is possible within ksqlDB, that would be great. If needed to work with low-level Kafka, that's fine as long as we can achieve the end result.
Basic flow as I have it right now:
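Fuzzy matching logic like this (same match key, plus at least one of several extra columns overlapping, possibly only partially) is hard to express in ksqlDB's SQL alone, so the final reduce often ends up in a consumer or Kafka Streams processor. A minimal Python sketch of that reduce step, using the example rows above and a deliberately loose substring test for the partial matches:

```python
from itertools import combinations

# One row of the joined table: (id, match, [matchExtra1..3]).
rows = [
    (1, 1, ["Extra1", "Extra2", "Extra3"]),
    (2, 1, ["Extra1", "Extra4", "Extra5"]),
    (3, 1, ["Extra6", "Extra7", "Extra8"]),
    (4, 1, ["Extra9", "Extr10", "tra8"]),
]

def extras_overlap(a, b):
    # Two rows "match" on extras if any pair of extra values is equal or
    # one is a substring of the other (covers the Extra8 / tra8 case).
    return any(x in y or y in x for x in a for y in b)

def reduce_rows(rows):
    # Union-find over row ids: link rows with the same match key whose
    # extra columns overlap, then emit the connected groups.
    parent = {rid: rid for rid, _, _ in rows}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (id1, m1, e1), (id2, m2, e2) in combinations(rows, 2):
        if m1 == m2 and extras_overlap(e1, e2):
            parent[find(id1)] = find(id2)
    groups = {}
    for rid, _, _ in rows:
        groups.setdefault(find(rid), set()).add(rid)
    return sorted(sorted(g) for g in groups.values())

print(reduce_rows(rows))  # [[1, 2], [3, 4]]
```

A substring test this loose may over-match on short values; in practice you would tighten `extras_overlap` to your real matching rules.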

Scala: use ML to find outliers in a DataFrame

This question is going to be a little vague, but I can't seem to find any concrete examples online.
https://spark.apache.org/docs/0.9.0/mllib-guide.html
From the Spark docs above, I can see multiple ways of training and predicting anomalies/outliers with the MLlib library. However, every single one of those examples involves only numbers, or at most two columns of data.
I can't figure out how to train and predict on a dataset with more columns.
Let's say I wanted to use the clustering method to find outliers in my data, and my data looks like the following in a DataFrame:
| UserId | Department | Date | Item | Cost |
| ------ | ---------- | ----- | ------ | ---- |
| user1 | Electronic | 11-19 | Iphone | 115 |
| user1 | Electronic | 11-19 | Iphone | 150 |
| user1 | Electronic | 11-19 | Iphone | 900 |
| user1 | Electronic | 11-23 | Iphone | 85 |
| user1 | Electronic | 11-20 | Iphone | 120 |
| user2 | Electronic | 11-19 | Iphone | 600 |
| user2 | Electronic | 11-19 | Iphone | 550 |
| user2 | Electronic | 11-19 | Iphone | 600 |
| user2 | Electronic | 11-23 | Iphone | 575 |
| user2 | Electronic | 11-20 | Iphone | 570 |
....
There will be millions of rows like this accumulating over the months.
I want to analyze the purchasing patterns of users over the past X months and update my model every day with new data. So a row like
user1 | Electronic | 11-19 | Iphone | 900
should be considered an outlier
How can I apply any of the above learning methods to this kind of dataset?
Thanks!
Are you sure that you are using Spark 0.9 (the current version is 2.2)? The site you have quoted shows a kMeans example [1]. The parameter parsedData can have more than two columns, but kMeans in Spark 0.9 can only handle double values [2].
The other examples can also have more than two columns [3]. The label parameter could be an ongoing number and the features are your listed data, but like kMeans, Spark 0.9 is only able to handle double values.
Looking at the other available classes of the 0.9 API lets me assume that Spark 0.9 was only able to handle double values. If you want to handle data like you showed above, you should consider using a more recent version of Spark.
[1] https://spark.apache.org/docs/0.9.0/mllib-guide.html#clustering-1
[2] https://spark.apache.org/docs/0.9.0/api/mllib/index.html#org.apache.spark.mllib.clustering.KMeans$
[3] https://spark.apache.org/docs/0.9.0/api/mllib/index.html#org.apache.spark.mllib.regression.LabeledPoint
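Independent of the Spark version question, the per-user outlier idea can be prototyped without MLlib at all: group costs by user and flag values far from that user's typical spend. A minimal Python sketch using a robust modified z-score (median/MAD), offered as one simple alternative to clustering, not as Spark's API:

```python
from collections import defaultdict
from statistics import median

# (UserId, Cost) pairs from the example table above.
purchases = [
    ("user1", 115), ("user1", 150), ("user1", 900), ("user1", 85), ("user1", 120),
    ("user2", 600), ("user2", 550), ("user2", 600), ("user2", 575), ("user2", 570),
]

def outliers_per_user(rows, threshold=3.5):
    # Group costs by user, then flag values whose modified z-score
    # (0.6745 * |x - median| / MAD) exceeds the threshold. Median and MAD
    # are robust to the very outliers we are trying to find.
    by_user = defaultdict(list)
    for user, cost in rows:
        by_user[user].append(cost)
    flagged = []
    for user, costs in by_user.items():
        med = median(costs)
        mad = median(abs(c - med) for c in costs)
        if mad == 0:
            continue  # all values identical; nothing to flag
        for c in costs:
            if 0.6745 * abs(c - med) / mad > threshold:
                flagged.append((user, c))
    return flagged

print(outliers_per_user(purchases))  # [('user1', 900)]
```

On real volumes the same group-by-user-then-score shape maps directly onto a DataFrame pipeline in a recent Spark version, with the categorical columns (Department, Item) encoded to doubles first.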

How to search by parent and base tag in PostgreSQL

I have Products and Company tables that look something like this:
Company:
id | name
----------
1 | Google
Products:
id | companyId | name
---------------------
1 | 1 | Google Home
2 | 1 | Pixel
Tags:
id | name
----------
1 | Consumer Electronics
2 | Computer
3 | Cell Phone
Now I am not sure if I should create a junction table between company and tags and another between products and tags, or if there is a way to have one table for both. When searching for companies I want to be able to use the tags of the company and of all its child products; i.e., if I search for Computer and Cell Phone it should find Google. Then if I search for products by Consumer Electronics and Computer, it should return Google Home.
Any help would be greatly appreciated.
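One reasonable design (an assumption, not the only option) is a junction table per taggable entity, company_tags and product_tags, with the company search taking the union of a company's own tags and its products' tags. A runnable sketch using sqlite3 so it can be executed as-is; the SQL works unchanged in PostgreSQL. The tag assignments are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE company  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE products (id INTEGER PRIMARY KEY, company_id INTEGER, name TEXT);
CREATE TABLE tags     (id INTEGER PRIMARY KEY, name TEXT);
-- One junction table per taggable entity keeps the schema simple.
CREATE TABLE company_tags (company_id INTEGER, tag_id INTEGER);
CREATE TABLE product_tags (product_id INTEGER, tag_id INTEGER);

INSERT INTO company  VALUES (1, 'Google');
INSERT INTO products VALUES (1, 1, 'Google Home'), (2, 1, 'Pixel');
INSERT INTO tags VALUES (1, 'Consumer Electronics'), (2, 'Computer'), (3, 'Cell Phone');
INSERT INTO product_tags VALUES (1, 1), (1, 2),  -- Google Home: CE, Computer
                                (2, 1), (2, 3);  -- Pixel: CE, Cell Phone
""")

# Company search: a company matches when its own tags plus its
# products' tags cover every searched tag.
company_sql = """
SELECT c.name
FROM company c
JOIN (SELECT company_id, tag_id FROM company_tags
      UNION
      SELECT p.company_id, pt.tag_id
      FROM products p JOIN product_tags pt ON pt.product_id = p.id) ct
  ON ct.company_id = c.id
JOIN tags t ON t.id = ct.tag_id
WHERE t.name IN (?, ?)
GROUP BY c.id
HAVING COUNT(DISTINCT t.name) = 2
"""
print(cur.execute(company_sql, ("Computer", "Cell Phone")).fetchall())

# Product search: same idea, directly on product_tags.
product_sql = """
SELECT p.name
FROM products p
JOIN product_tags pt ON pt.product_id = p.id
JOIN tags t ON t.id = pt.tag_id
WHERE t.name IN (?, ?)
GROUP BY p.id
HAVING COUNT(DISTINCT t.name) = 2
"""
print(cur.execute(product_sql, ("Consumer Electronics", "Computer")).fetchall())
```

The `HAVING COUNT(DISTINCT ...)` equals the number of searched tags, so both searches require *all* tags to match rather than any one of them.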

API Design: images with multiple i18n titles

We're designing an API where partners can post images along with titles. I'm having some trouble imagining how to implement an API endpoint for this that supports i18n in the least complicated way for our partners (for example, not forcing them to remember our returned ids if they want to use their own existing ids).
Our first database table idea looked like:
image_id | partner_id | category_id | image | description
---------------------------------------------------------
123 | 1 | 8 | url.. | This is my image!
234 | 2 | 5 | url.. | A pretty image.
But we would probably split the description into its own table, which would look something like:
image_id | language | description
---------------------------------
123 | en | This is my image!
123 | de | Dies ist mein Bild!
So what could the API endpoint look like?

1. A single request with an array of images, which may repeat the same image with different language/description values (the image would not be persisted multiple times, just the different descriptions).
2. N requests, each with images and a Content-Language header applying to all images in the request (resulting in the same persistence as the first idea).
3. A single request to create images with a "default" language/description value, plus a PUT endpoint to add additional language/description values afterwards.
4. Something completely different?
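Whichever endpoint shape is chosen, the split-table model reduces to a descriptions map keyed by (image_id, language). A minimal in-memory Python sketch of the third option, with POST creating the image plus a default-language description and an idempotent PUT upserting further languages; all endpoint names and shapes here are assumptions:

```python
# In-memory stand-ins for the two tables: images, plus per-language
# descriptions keyed by (image_id, language).
images = {}        # image_id -> {"partner_id": ..., "url": ...}
descriptions = {}  # (image_id, language) -> description

def post_image(image_id, partner_id, url, language, description):
    # POST /images  -- the partner may supply their own image_id.
    images[image_id] = {"partner_id": partner_id, "url": url}
    descriptions[(image_id, language)] = description

def put_description(image_id, language, description):
    # PUT /images/{image_id}/descriptions/{language}  -- idempotent upsert,
    # so adding a language never re-uploads or duplicates the image.
    if image_id not in images:
        raise KeyError(image_id)
    descriptions[(image_id, language)] = description

def get_description(image_id, language, fallback="en"):
    # Lookup with an Accept-Language-style fallback to a default language.
    return descriptions.get((image_id, language),
                            descriptions.get((image_id, fallback)))

post_image(123, 1, "url..", "en", "This is my image!")
put_description(123, "de", "Dies ist mein Bild!")
print(get_description(123, "de"))  # Dies ist mein Bild!
print(get_description(123, "fr"))  # falls back: This is my image!
```

Making the PUT an upsert keyed by the partner's own image id is what lets partners avoid tracking server-generated ids.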

JasperReports grouping changeable by user

I have no idea if this is possible or not, but I'm trying to figure out whether iReport Designer can be used to create reports where the user viewing the report is able to control the grouping.
For example I would like the user to be able to re-order the grouping and also change to which degree the report is grouped (only on one field or on multiple ones).
I don't mean SQL grouping, by the way; I mean, for example, grouping by Account and then by Agent:
| Account | Agent | Invoice | Total |
+----------+---------+----------+-------+
| Account1 | | | |
| | Joe | | |
| | | Invoice2 | $600 |
| | | Invoice1 | $300 |
| Account2 | | | |
| | Sam | | |
| | | Invoice4 | $120 |
| | | Invoice7 | $230 |
| | Joe | | |
| | | Invoice3 | $200 |
+----------+---------+----------+-------+
And what I'm trying to figure out is, can you use iReport to make this grouping dynamic? That is, that the user might want to group by Agent first and Account second and rather than have one report for each grouping it'd be nice if there was a way of doing this with iReport.
Yes, it should be possible to create reports like that. But depending on your exact needs it may not be practical (as Alex K indicated).
If you take only your example of grouping on Account then Agent, or grouping on Agent then Account, it would be simple. Have a parameter that lets the user specify this choice; it would probably be a drop-down list. Then in the report you would have fields like this:
Today's version: $F{Account} and $F{Agent}
Dynamic version: $P{AcctFirst} ? $F{Account} : $F{Agent} and $P{AcctFirst} ? $F{Agent} : $F{Account}
Likewise, the group definition would need to include the new AcctFirst param.
But it won't extend nicely. What if the two fields are different data types? What if you want to let the user choose from 3 or 4 or N fields? Each of those is solvable... but the report becomes exponentially more complex.
By the way, it's a relatively common request, and you'll see features like this make their way into JasperReports. But for now it's a tough one.
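Outside the report engine, the N-field generalization is just a sort-key permutation: sort the rows by the user's chosen field order, then print a header whenever a group value changes at any level. A minimal Python sketch of that idea (illustrative, not the JasperReports API):

```python
invoices = [
    {"Account": "Account1", "Agent": "Joe", "Invoice": "Invoice2", "Total": 600},
    {"Account": "Account1", "Agent": "Joe", "Invoice": "Invoice1", "Total": 300},
    {"Account": "Account2", "Agent": "Sam", "Invoice": "Invoice4", "Total": 120},
    {"Account": "Account2", "Agent": "Sam", "Invoice": "Invoice7", "Total": 230},
    {"Account": "Account2", "Agent": "Joe", "Invoice": "Invoice3", "Total": 200},
]

def grouped_report(rows, group_fields):
    # Sort by the user's chosen field order, then emit one header line per
    # group level followed by the detail rows -- the same effect the
    # $P{AcctFirst} ternary achieves for two fields, generalized to N.
    rows = sorted(rows, key=lambda r: [r[f] for f in group_fields])
    lines, current = [], [None] * len(group_fields)
    for row in rows:
        for depth, field in enumerate(group_fields):
            if row[field] != current[depth]:
                lines.append("  " * depth + str(row[field]))
                current[depth] = row[field]
                # Reset deeper levels so their headers reprint.
                for d in range(depth + 1, len(group_fields)):
                    current[d] = None
        lines.append("  " * len(group_fields)
                     + f"{row['Invoice']} ${row['Total']}")
    return lines

# The user picks the order: Agent first, then Account.
for line in grouped_report(invoices, ["Agent", "Account"]):
    print(line)
```

This is why the in-report version gets complex: the report has to encode every permutation in expressions, while outside the engine the permutation is just a parameter.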