I need to implement a counter per prefix and get its current value. Therefore I created a table UPLOAD_ID:
CREATE TABLE UPLOAD_ID
(
COUNTER INT NOT NULL,
UPLOAD_PREFIX VARCHAR(60) PRIMARY KEY
);
Using H2 and a Spring nativeQuery:
@Query(nativeQuery = true, value = MYQUERY)
override fun nextId(@Param("prefix") prefix: String): Long
with MYQUERY being
SELECT COUNTER FROM FINAL TABLE (
MERGE INTO UPLOAD_ID T
USING (SELECT CAST(:prefix AS VARCHAR) AS UPLOAD_PREFIX FOR UPDATE) S FOR UPDATE
ON T.UPLOAD_PREFIX = S.UPLOAD_PREFIX
WHEN MATCHED
THEN UPDATE
SET COUNTER = COUNTER + 1
WHEN NOT MATCHED
THEN INSERT (UPLOAD_PREFIX, COUNTER)
VALUES (S.UPLOAD_PREFIX, 1) );
I'm unable to lock the table to avoid a "Unique index or primary key violation" in my test. In MSSQL I can solve this by adding a lock hint to the merge target: MERGE INTO UPLOAD_ID WITH (HOLDLOCK) T.
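For reference, the full T-SQL shape is roughly this (a sketch against the schema above, with :prefix standing in for the bound parameter):
MERGE INTO UPLOAD_ID WITH (HOLDLOCK) T
USING (SELECT CAST(:prefix AS VARCHAR(60)) AS UPLOAD_PREFIX) S
ON T.UPLOAD_PREFIX = S.UPLOAD_PREFIX
WHEN MATCHED THEN UPDATE SET COUNTER = COUNTER + 1
WHEN NOT MATCHED THEN INSERT (UPLOAD_PREFIX, COUNTER) VALUES (S.UPLOAD_PREFIX, 1);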
The gist of my test looks like this:
try { uploadIdRepo.deleteById(prefix) } catch (e: EmptyResultDataAccessException) { }
val startCount = uploadIdRepo.findById(prefix).map { it.counter }.orElseGet { 0L }
val workerPool = Executors.newFixedThreadPool(35)
val nextValuesRequested = 100
val res = (1..nextValuesRequested).toList().parallelStream().map { i ->
workerPool.run {
uploadIdRepo.nextId(prefix)
}
}.toList()
res shouldHaveSize nextValuesRequested // result count
res.toSet() shouldHaveSize nextValuesRequested // result spread
res.max() shouldBeEqualComparingTo startCount + nextValuesRequested
Can I solve this with H2?
I am trying to create a conditional insert into a table holding responses for an event. The event may have a limit on how many responses/attendees it can support, so to prevent overbooking I want to check the status before I insert a new response.
The tables match the database models I have (IDs are generated by DB/auto inc):
case class Event(id: Option[Int], name: String, limitedCapacity: Option[Int])
case class Response(id: Option[Int], eventId: Int, email: String)
I have constructed a SQL statement that describes my conditional insert (e.g., only insert if the event has no capacity limit, or if the number of responses is less than the limit) that looks like this:
INSERT INTO responses(eventId, email)
SELECT :eventId, :email
FROM events
WHERE id = :eventId
AND (limitedCapacity IS NULL
OR (SELECT COUNT(1) FROM responses WHERE eventId = :eventId) < limitedCapacity);
but I don't know how to translate that into Slick DSL that also returns the inserted row. I am using PostgreSQL so I know return-row-on-insert is possible for normal insertions at least.
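Since PostgreSQL supports RETURNING, the conditional insert itself can hand back the inserted row in the same statement; roughly (a sketch, reusing the named parameters above, untested):
INSERT INTO responses(eventId, email)
SELECT :eventId, :email
FROM events
WHERE id = :eventId
AND (limitedCapacity IS NULL
OR (SELECT COUNT(1) FROM responses WHERE eventId = :eventId) < limitedCapacity)
RETURNING id, eventId, email;
If no row comes back, the insert was rejected (capacity reached or unknown event), which also serves as the confirmation asked for below.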
Here is some code that shows what I am after, but I want this as a single transaction:
def create(response: Response): Future[Option[Response]] = {
  db.run(events.filter(_.id === response.eventId).result.head).flatMap { event =>
    if (event.limitedCapacity.isEmpty) {
      db.run((responses returning responses) += response).map(Some(_))
    } else {
      db.run(responses.filter(_.eventId === response.eventId).length.result).flatMap { responseCount =>
        if (responseCount < event.limitedCapacity.get) {
          db.run((responses returning responses) += response).map(Some(_))
        } else {
          Future.successful(None)
        }
      }
    }
  }
}
If it is not possible to return the inserted row that is fine but I need some kind of confirmation of the insertion at least.
I'm using the postgres crate, which runs queries through postgres::Connection. I query a table based on a string value in an ilike '%search_string%' expression:
extern crate postgres;
use std::error::Error;
//DB Create queries
/*
CREATE TABLE TestTable (
Id SERIAL primary key,
_Text varchar(50000) NOT NULL
);
insert into TestTable (_Text) values ('test1');
insert into TestTable (_Text) values ('test1');
*/
fn main() -> Result<(), Box<dyn Error>> {
let conn = postgres::Connection::connect(
"postgres://postgres:postgres#localhost:5432/notes_server",
postgres::TlsMode::None,
)?;
let text = "test";
// //Does not work
// let query = &conn.query(
// "
// select * from TestTable where _text ilike '%$1%'
// ",
// &[&text],
// )?;
//Works fine
let query = &conn.query(
"
select * from TestTable where Id = $1
",
&[&1],
)?;
println!("Rows returned: {}", query.iter().count());
Ok(())
}
If I uncomment the //Does not work part of the code, I will get the following error:
thread 'main' panicked at 'expected 0 parameters but got 1'
It appears it doesn't recognize the $1 parameter that is contained in the ilike expression. I've tried escaping the single quotes and that doesn't change it.
The only dependencies are:
postgres = { version = "0.15.2", features = ["with-chrono"] }
To my surprise, here was the fix:
let text = "%test%";
let query = &conn.query(
    "
    select * from TestTable where _text like $1
    ",
    &[&text],
)?;
Apparently the driver passes the string as a single bound value, so no quoting is needed; the wildcards simply travel inside the parameter.
I found out about this from here: https://www.reddit.com/r/rust/comments/8ltad7/horrible_quote_escaping_conundrum_any_ideas_on/
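If you would rather keep the wildcards in the SQL itself, PostgreSQL string concatenation works too, and the parameter stays a plain bound value; this also keeps the case-insensitive ilike (a sketch of the same query, untested):
select * from TestTable where _text ilike '%' || $1 || '%'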
An example should make this clear.
I am using pg, so import it at the top. My code looks like:
const title = req.params.title;
const posts = await db.query(
  "SELECT * FROM postsTable INNER JOIN usersTable ON postsTable.author = usersTable.username WHERE title ILIKE $1 ORDER BY postsTable.created_on DESC LIMIT 5;",
  [`%${title}%`]
)
This is an extension to this question:
How to increment Cassandra Counter Column with phantom-dsl?
This question has also been asked here.
In Thiago's example, the two tables 'songs' and 'songs_by_artist' both contain the same rows, but with different partitioning (primary keys / clustering columns):
CREATE TABLE test.songs (
song_id timeuuid PRIMARY KEY,
album text,
artist text,
title text
);
CREATE TABLE test.songs_by_artist (
artist text,
song_id timeuuid,
album text,
title text,
PRIMARY KEY (artist, song_id)
) WITH CLUSTERING ORDER BY (song_id ASC);
This means inserting, updating and deleting across both tables within the SongsService works with the same base data / rows.
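Storing one song therefore means two inserts carrying the same data, e.g. (plain CQL against the schema above):
INSERT INTO test.songs (song_id, album, artist, title) VALUES (now(), 'The Album', 'The Artist', 'The Title');
INSERT INTO test.songs_by_artist (artist, song_id, album, title) VALUES ('The Artist', now(), 'The Album', 'The Title');
-- in practice both rows share one song_id, generated once by the client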
How would you, for example, add a table such as 'artist_songs_counts', with columns 'artist' (K) and 'num_songs' (++), and ensure that 'SongsService' adds the corresponding rows to all three tables, 'songs', 'songs_by_artist' and 'artist_songs_counts' (where the tables hold different numbers of rows, but the information should stay linked, such as the artist info)?
CREATE TABLE test.artist_songs_counts (
artist text PRIMARY KEY,
num_songs counter);
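Counter columns cannot be INSERTed, only UPDATEd; CQL treats a counter update against a missing row as starting from zero, so incrementing is a single statement, e.g.:
UPDATE test.artist_songs_counts SET num_songs = num_songs + 1 WHERE artist = 'The Artist';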
SongsService extends ProductionDatabaseProvider, which gives you an object called database through which you can access the tables of a given database:
/**
 * Find a song by id
 * @param id
 * @return
 */
def getSongById(id: UUID): Future[Option[Song]] = {
database.songsModel.getBySongId(id)
}
Or even better, handling two tables at the same time:
/**
 * Save a song in both tables
 *
 * @param songs
 * @return
 */
def saveOrUpdate(songs: Song): Future[ResultSet] = {
for {
byId <- database.songsModel.store(songs)
byArtist <- database.songsByArtistsModel.store(songs)
} yield byArtist
}
Since through a database object you can access all tables that belong to a specific Database, I would implement a counter for artists the following way:
def increment(artist: String): Future[ResultSet] = {
update
.where(_.artist eqs artist)
.modify(_.numSongs += 1)
.future()
}
Then the saveOrUpdate method could be written as below:
def saveOrUpdate(song: Song): Future[ResultSet] = {
for {
byId <- database.songsModel.store(song)
byArtist <- database.songsByArtistsModel.store(song)
counter <- database.artistSongsCounter.increment(song.artist)
} yield byArtist
}
How can I write this query in Slick 3.0?
SELECT *,
  (SELECT COUNT(*) FROM flashcards WHERE setId = flashcards_sets.id) AS allCount,
  (SELECT COUNT(*) FROM flashcards WHERE studied = true AND setId = flashcards_sets.id) AS studiedCount
FROM flashcards_sets;
private def filterByFlashCardQuery(id: Int): Query[FlashCards, FlashCard, Seq] =
  flashcards.filter(f => f.setId === id && f.studied === true)
def findByFlashcardLength(flashcardId: Int): Future[Int] = {
  try db.run(filterByFlashCardQuery(flashcardId).length.result)
  finally println("db.close") //db.close
}
Assume a Cassandra datastore with 20 rows, with row keys named "r1" .. "r20".
Questions:
How do I fetch the row keys of the first ten rows (r1 to r10)?
How do I fetch the row keys of the next ten rows (r11 to r20)?
I'm looking for the Cassandra analogy to:
SELECT row_key FROM table LIMIT 0, 10;
SELECT row_key FROM table LIMIT 10, 10;
Take a look at:
list<KeySlice> get_range_slices(keyspace, column_parent, predicate, range, consistency_level)
Where your KeyRange tuple is (start_key, end_key) == (r1, r10)
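If you are on CQL instead of Thrift, the usual analogy pages by partitioner token rather than by offset, since there is no LIMIT offset,count form (a sketch using the names from the question):
SELECT row_key FROM table LIMIT 10;
SELECT row_key FROM table WHERE token(row_key) > token('r10') LIMIT 10;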
Based on my tests, rows come back in no particular order (unlike columns). CQL 3.0.0 can retrieve row keys, but not distinct ones (there may be a way that I do not know of). In my case I did not know the key range, so I retrieved all the keys with both Hector and Thrift and sorted them afterwards. In a performance test over 100000 columns and 200 rows, CQL 3.0.0 took about 500 milliseconds, Hector around 100 and Thrift about 50. My row key here is an integer. The Hector code follows:
public void queryRowkeys() {
myCluster = HFactory.getOrCreateCluster(CLUSTER_NAME, "127.0.0.1:9160");
ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
ccl.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
myKeyspace = HFactory.createKeyspace(KEYSPACE_NAME, myCluster, ccl);
RangeSlicesQuery<Integer, Composite, String> rangeSlicesQuery = HFactory.createRangeSlicesQuery(myKeyspace, IntegerSerializer.get(),
CompositeSerializer.get(), StringSerializer.get());
long start = System.currentTimeMillis();
QueryResult<OrderedRows<Integer, Composite, String>> result =
rangeSlicesQuery.setColumnFamily(CF).setKeys(0, -1).setReturnKeysOnly().execute();
OrderedRows<Integer, Composite, String> orderedRows = result.get();
ArrayList<Integer> list = new ArrayList<Integer>();
for(Row<Integer, Composite, String> row: orderedRows){
list.add(row.getKey());
}
System.out.println((System.currentTimeMillis()-start));
Collections.sort(list);
for(Integer i: list){
System.out.println(i);
}
}
This is the Thrift code:
public void retrieveRows(){
try {
transport = new TFramedTransport(new TSocket("localhost", 9160));
TProtocol protocol = new TBinaryProtocol(transport);
client = new Cassandra.Client(protocol);
transport.open();
client.set_keyspace("prefdb");
ColumnParent columnParent = new ColumnParent("events");
SlicePredicate predicate = new SlicePredicate();
predicate.setSlice_range(new SliceRange(ByteBuffer.wrap(new byte[0]), ByteBuffer.wrap(new byte[0]), false, 1));
KeyRange keyRange = new KeyRange(); //Get all keys
keyRange.setStart_key(new byte[0]);
keyRange.setEnd_key(new byte[0]);
long start = System.currentTimeMillis();
List<KeySlice> keySlices = client.get_range_slices(columnParent, predicate, keyRange, ConsistencyLevel.ONE);
ArrayList<Integer> list = new ArrayList<Integer>();
for (KeySlice ks : keySlices) {
list.add(ByteBuffer.wrap(ks.getKey()).getInt());
}
Collections.sort(list);
System.out.println((System.currentTimeMillis()-start));
for(Integer i: list){
System.out.println(i);
}
transport.close();
} catch (Exception e) {
e.printStackTrace();
}
}
You should first modify cassandra.yaml (this applies to Cassandra 1.1.0) and set the partitioner as follows:
partitioner: org.apache.cassandra.dht.ByteOrderedPartitioner
Second, define the schema as follows:
create keyspace DEMO with placement_strategy =
'org.apache.cassandra.locator.SimpleStrategy' and
strategy_options = [{replication_factor:1}];
use DEMO;
create column family Users with comparator = AsciiType and
key_validation_class = LongType and
column_metadata = [
{
column_name: aaa,
validation_class: BytesType
},{
column_name: bbb,
validation_class: BytesType
},{
column_name: ccc,
validation_class: BytesType
}
];
Finally, you can insert data into Cassandra and perform range queries over keys.
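For example, from cassandra-cli the rows then come back in key order, which is a quick way to verify the partitioner change (assuming the schema above):
list Users;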