Is there a quick way of discovering the age of a specific key on memcache?
Memcached does not keep track of when keys are created, so this is not possible unless you add some client-side logic. If this is important to you, you can simply add the creation time as a prefix to the actual value of the key (value = time + data). Then, when you get the data from memcached, you can split the value back into its pieces.
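A minimal sketch of that prefix approach (assuming a memcache client object exposing set/get, like the one used in the answer below, and plain string values):

import time

SEP = "|"  # assumes the separator never occurs in the timestamp prefix

def put(key, data):
    # value = time + data: prefix the creation time to the stored value
    memcache.set(key, str(time.time()) + SEP + data)

def get_with_age(key):
    value = memcache.get(key)
    if value is None:
        return None, None
    created, data = value.split(SEP, 1)
    return data, time.time() - float(created)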
OP, do you refer to knowing the age of a cache entry? You can store the timestamp of the latest update along with the actual value:
from datetime import datetime

def put_value(key, value):
    # store the timestamp of the latest update together with the actual value
    timestamp = datetime.utcnow()
    entry = (timestamp, value)
    memcache.add(key, entry)

def get_value(key):
    entry = memcache.get(key)
    if entry is None:
        return None
    _, value = entry
    return value

def get_age(key):
    entry = memcache.get(key)
    if entry is None:
        return None
    timestamp, _ = entry
    age = datetime.utcnow() - timestamp
    return age
I am using the AWS DynamoDB library from Scala - com.amazonaws.services.dynamodbv2.
Earlier I had a table with only a primary key, and I was using GetItem to get a specific item from it, like so:
val item = Try(Option(ddb.getItem(new GetItemRequest().withTableName(table).withKey(Collections.singletonMap(keyField, new AttributeValue(key)))).getItem).map(_.get(valField).getS))
But now I need to start using a new sort key (timestamp of the created date) on top.
This way I can have multiple items with the same partition key but different timestamp sort keys.
The idea is that I will be able to get the item closest to my current sort timestamp.
I think I need a KeyConditionExpression where my input timestamp is greater than or equal to the new sort key,
and I saw a property ScanIndexForward which can be set to true in combination with
limit = 1
so I will get only one item and it will be the closest(?)
And this should get me the desired item, I hope, but I am not so sure how to approach this in Scala with the AWS library.
In case anyone is interested, this is what worked for me:
import scala.collection.JavaConverters._
import com.amazonaws.services.dynamodbv2.model.{AttributeValue, Condition, QueryRequest}

val queryRequest = new QueryRequest().withTableName(table)
  .withKeyConditions(createKeyConditionsMap(keyField, keyValue, sortField, sortValue))
  .withScanIndexForward(false)
  .withLimit(1)

val x = ddb.query(queryRequest).getItems()

def createKeyConditionsMap(keyField: String, keyValue: String, sortField: String, sortValue: String) = {
  Map(
    keyField -> new Condition().withComparisonOperator("EQ").withAttributeValueList(new AttributeValue(keyValue)),
    sortField -> new Condition().withComparisonOperator("LE").withAttributeValueList(new AttributeValue().withN(sortValue))
  ).asJava
}
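If it helps, here is a rough sketch of pulling the single matching value back out of the query result, mirroring the valField access from the original GetItem call (assuming valField is a string attribute, and using the JavaConverters import from above):

// getItems() returns a java.util.List of attribute maps; with withLimit(1) it holds at most one entry
val closest: Option[String] =
  ddb.query(queryRequest).getItems().asScala.headOption.map(_.get(valField).getS)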
In my CRUD REST service I do an insert into a DB and want to respond to the caller with the newly created record. I am looking for a nice way to convert the map to JSON.
I am running on Ballerina 0.991.0 and using PostgreSQL.
The return of the update ("INSERT ...") is a map.
I tried with convert and stamp but it did not work for me.
import ballerinax/jdbc;
...
jdbc:Client certificateDB = new({
    url: "jdbc:postgresql://localhost:5432/certificatedb",
    username: "USER",
    password: "PASS",
    poolOptions: { maximumPoolSize: 5 },
    dbOptions: { useSSL: false }
});
...
var ret = certificateDB->update("INSERT INTO certificates(certificate, typ, scope_) VALUES (?, ?, ?)", certificate, typ, scope_);
// here is the data, it is map<anydata>
ret.generatedKeys
The map should know which data type it is, right?
Then it should be easy to convert it to JSON, like this:
{"certificate":"{certificate:
"-----BEGIN
CERTIFICATE-----\nMIIFJjCCA...tox36A7HFmlYDQ1ozh+tLI=\n-----END
CERTIFICATE-----", typ: "mqttCertificate", scope_: "QARC", id_:
223}"}
Right now I do a foreach and build the JSON manually. Quite ugly. Maybe somebody has some tips on how to do this in a nicer way.
It cannot be excluded that it is due to my lack of programming skills :-)
The return value of the JDBC update remote function is sql:UpdateResult|error.
The sql:UpdateResult is a record with two fields (refer to https://ballerina.io/learn/api-docs/ballerina/sql.html#UpdateResult):
updatedRowCount of type int - the number of rows that were affected/updated by the given statement execution
generatedKeys of type map - a map of the auto-generated column values produced by the update operation (only if the corresponding table has auto-generated columns). The data is given as key-value pairs of column name and column value, so this map contains only the auto-generated column values.
But your requirement is to get the entire row which was inserted by the given update function. That can't be returned by the update operation itself. To get it you have to execute a JDBC select operation with the matching criteria. The select operation returns a table or an error, and that table can easily be converted to JSON using the convert() function.
For example: let's say the certificates table has an auto-generated primary key column named 'cert_id'. Then you can retrieve that id value using the code below.
int generatedID = <int>updateRet.generatedKeys.CERT_ID;
Then use that generated id to query the data.
var ret = certificateDB->select("SELECT certificate, typ, scope_ FROM certificates WHERE cert_id = ?", (), generatedID);
json convertedJson = {};
if (ret is table<record {}>) {
    var jsonConversionResult = json.convert(ret);
    if (jsonConversionResult is json) {
        convertedJson = jsonConversionResult;
    }
}
Refer to the example https://ballerina.io/learn/by-example/jdbc-client-crud-operations.html for more details.
I'm currently using the pg-promise library to insert multiple values into a database in the format:
const cs = new pgp.helpers.ColumnSet(['booking_id', {name:'timeslot', cast:'timestamp'}], {table: 'booking'});
// data input values:
const values = [];
bookings.forEach(slot => {
    values.push({booking_id: booking_id, timeslot: slot});
});
Where I need timeslot to be a timestamp. However, it comes into the API as a value like
1515586500.0
Using the above cast property, my query gets resolved like so:
insert into "booking"("booking_id","timeslot") values(1,'1515586500.0'::timestamp)
However, this throws the error cannot cast type numeric to timestamp without time zone.
If I use the to_timestamp function, however, it works as I need it to, e.g.
insert into "booking"("booking_id","timeslot") values(1,to_timestamp('1515586500.0'));
Is there any way I can get pg-promise to use to_timestamp rather than the ::timestamp notation?
Change the column definition to this one:
{
    name: 'timeslot',
    mod: ':raw',
    init: c => pgp.as.format('to_timestamp($1)', c.value)
}
or
{
    name: 'timeslot',
    mod: ':raw',
    init: c => pgp.as.format('to_timestamp(${value})', c)
}
...as per the Column type documentation.
Or you can use Custom Type Formatting on the type, to self-format automatically.
Also, you do not need to remap values to suit the ColumnSet object; you use the ColumnSet object to fit the data instead. So if the value for column timeslot is in property slot, you just use prop: 'slot' within your column definition to change where the value comes from.
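For instance, a minimal sketch (assuming each incoming data object looks like {booking_id: 1, slot: 1515586500.0}) that combines prop with the :raw / to_timestamp approach above:

const cs = new pgp.helpers.ColumnSet([
    'booking_id',
    {
        name: 'timeslot',   // column name in the table
        prop: 'slot',       // read the value from data.slot instead of data.timeslot
        mod: ':raw',
        init: c => pgp.as.format('to_timestamp($1)', c.value)
    }
], {table: 'booking'});

// the incoming objects can then be passed straight to the helpers, no remapping loop needed:
// const query = pgp.helpers.insert(data, cs);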
I have a model:
case class MyModel(
  id: Pk[Long] = NotAssigned,
  startsAt: Option[DateTime] = None,
  addedAt: Option[DateTime] = None
)

object MyModel {
  // .....................
  SQL("""
    INSERT INTO my_table(starts_at)
    VALUES ({startsAt})
    """
  ).on('startsAt -> newItem.startsAt).executeInsert()
}
Both starts_at and added_at have a default value of now() in PostgreSQL and don't allow null values. This doesn't cause any error for addedAt (because I never pass it from the client), but it does cause an error for startsAt if it's not specified in newItem.startsAt and thus is equal to None, and thus is passed as null.
org.postgresql.util.PSQLException: ERROR: null value in column "starts_at" violates not-null constraint
What I want is to be able to specify startsAt whenever I want and pass it to the server: if I specify it, that value should be passed to the server; if not, nothing should be passed and the server should use its default value now(). I don't want to specify the default value on the client because it's already set on the server at the DB level.
How about this SQL fix:
insert into my_table(starts_at)
values (COALESCE({startsAt}, now()))
Updated: requirement is to use the default value of the column
The only way that I know of to get the server to use the default value of a column in an insert is to not mention that column in the column list. For example (not tested):
startsAt.map { date =>
  SQL("""insert into my_table(starts_at) values({startsAt})""")
    .on('startsAt -> date)
    .execute()
}.getOrElse {
  // no startsAt given: leave the column out so Postgres applies its default now()
  SQL("""insert into my_table default values""")
    .execute()
}
I am querying a collection that includes an integer value among its values, and loading the resulting documents into this struct:
type Subscription struct {
    Id             bson.ObjectId "_id,omitempty"
    Listen         string
    Job            string
    TimeoutSeconds int
    Data           string
}
var subscription Subscription
subscriptions := subscriptionsCol.Find(bson.M{"listen": "example_channel"}).Iter()
for subscriptions.Next(&subscription) {
    log("Pending job: %s?%s (timeout: %d)\n",
        subscription.Job,
        subscription.Data,
        subscription.TimeoutSeconds)
}
This is what phpMoAdmin shows me:
[_id] => MongoId Object (
[$id] => 502ed8d84eaead30a1351ea7
)
[job] => partus_test_job_a
[TimeoutSeconds] => 30
[listen] => partus.test
[data] => a=1&b=9
It puzzles me that subscription.TimeoutSeconds always contains 0, when I'm positive I have 30 in the document I inserted into the collection.
All other values are retrieved OK.
What's wrong with the int type?
Have you tried setting the "key" value for that field?
Unmarshal
The lowercased field name is used as the key for each exported field,
but this behavior may be changed using the respective field tag.
type Subscription struct {
    Id             bson.ObjectId "_id,omitempty"
    Listen         string
    Job            string
    TimeoutSeconds int "TimeoutSeconds"
    Data           string
}
The other fields are working fine because their lowercased names match your Mongo fields in the collection, whereas TimeoutSeconds uses TitleCase. What is happening is that the int field is being left at its zero value, since Unmarshal can't map a document key to it.
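A small standalone sketch of that behaviour (assuming gopkg.in/mgo.v2/bson): the field without a tag is looked up under the lowercased key "timeoutseconds" and stays at its zero value, while the tagged one maps correctly.

package main

import (
    "fmt"

    "gopkg.in/mgo.v2/bson"
)

type Untagged struct {
    TimeoutSeconds int
}

type Tagged struct {
    TimeoutSeconds int `bson:"TimeoutSeconds"`
}

func main() {
    // simulate a document whose key is TitleCased, as in the question
    raw, _ := bson.Marshal(bson.M{"TimeoutSeconds": 30})

    var u Untagged
    var t Tagged
    bson.Unmarshal(raw, &u)
    bson.Unmarshal(raw, &t)
    fmt.Println(u.TimeoutSeconds, t.TimeoutSeconds) // prints: 0 30
}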
When unmarshalling data, there are multiple tag keys and flags that are supported.
Below are some examples:
type T struct {
    A bool
    B int    "myb"
    C string "myc,omitempty"
    D string `bson:",omitempty" json:"jsonkey"`
    E int64  ",minsize"
    F int64  "myf,omitempty,minsize"
}
The general spec for one key-value pair during marshalling is:
"[<key>][,<flag1>[,<flag2>]]"
`(...) bson:"[<key>][,<flag1>[,<flag2>]]" (...)`
Go provides support for particular tag keywords like bson (for Mongo keys) and json (for setting the JSON key in a response).
Check the Marshal Go reference for more information.
Similarly, some frameworks provide further options for defining the keys before parsing. For example, for SQL the jinzhu GitHub library (GORM) gives support for setting default values, column names to map to, etc.
Anyone can use this tag feature to provide customized support.