Is it ok to use a Guava Cache of maximum size 1?
I'm reading that sometimes there can be evictions even before that maximum size is reached.
However, I only need one cache entry. So I'm wondering what value to set for maximum size that would be safe, but not excessive.
You should be able to use Cache.cleanUp() as explained in When Does Cleanup Happen? and test whether a maximum size of 1 suits your needs or not.
For example, the following shows that a LoadingCache with a maximum size of 1 will not evict the existing entry until a different entry is loaded to take its place:
final LoadingCache<Character, Integer> loadingCache = CacheBuilder.newBuilder()
        .maximumSize(1)
        .build(new CacheLoader<Object, Integer>() {
            private final AtomicInteger loadInvocationCount = new AtomicInteger();

            @Override
            public Integer load(Object key) throws Exception {
                return loadInvocationCount.getAndIncrement();
            }
        });
assert loadingCache.size() == 0;
assert loadingCache.getUnchecked('a') == 0;
assert loadingCache.size() == 1;
loadingCache.cleanUp();
assert loadingCache.size() == 1;
assert loadingCache.getUnchecked('a') == 0;
assert loadingCache.size() == 1;
assert loadingCache.getUnchecked('b') == 1;
assert loadingCache.size() == 1;
loadingCache.cleanUp();
assert loadingCache.size() == 1;
assert loadingCache.getUnchecked('a') == 2;
assert loadingCache.size() == 1;
loadingCache.cleanUp();
assert loadingCache.size() == 1;
Note that this behaviour may be specific to the type of LoadingCache being built, so you will need to test whatever configuration you plan to use.
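If you want to see when and why the single entry actually gets evicted while you experiment, you could also attach a removal listener. The following is only a sketch of that idea (the loader and the logging listener here are illustrative, not part of the test above):
// Sketch: log every removal together with its cause (SIZE, EXPLICIT, REPLACED, ...)
RemovalListener<Character, Integer> logEvictions = notification ->
        System.out.println("Removed " + notification.getKey()
                + " because of " + notification.getCause());

final LoadingCache<Character, Integer> observedCache = CacheBuilder.newBuilder()
        .maximumSize(1)
        .removalListener(logEvictions)
        .build(new CacheLoader<Character, Integer>() {
            @Override
            public Integer load(Character key) {
                return (int) key.charValue();
            }
        });

observedCache.getUnchecked('a');
observedCache.getUnchecked('b'); // loading 'b' should displace 'a' (cause SIZE)
observedCache.cleanUp();         // delivers any pending removal notifications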
I am trying to solve a puzzle, and it has been suggested that I use backtracking. I did not know the term, so I did some investigation and found the following in Wikipedia:
In order to apply backtracking to a specific class of problems, one must provide the data P for the particular instance of the problem that is to be solved, and six procedural parameters, root, reject, accept, first, next, and output. These procedures should take the instance data P as a parameter and should do the following:
root(P): return the partial candidate at the root of the search tree.
reject(P,c): return true only if the partial candidate c is not worth completing.
accept(P,c): return true if c is a solution of P, and false otherwise.
first(P,c): generate the first extension of candidate c.
next(P,s): generate the next alternative extension of a candidate, after the extension s.
output(P,c): use the solution c of P, as appropriate to the application.
The backtracking algorithm reduces the problem to the call backtrack(root(P)), where backtrack is the following recursive procedure:
procedure backtrack(c) is
    if reject(P, c) then return
    if accept(P, c) then output(P, c)
    s ← first(P, c)
    while s ≠ NULL do
        backtrack(s)
        s ← next(P, s)
I have attempted to use this method for my solution, but after the method finds a rejected candidate it just starts again and finds the same route, rather than the next possible one.
I now don't think I have used the next(P,s) correctly, because I don't really understand the wording 'after the extension s'.
I've tried two methods:
(a) In the first() function, generating all possible extensions, storing them in a list, and then using the first one. The next() function then uses the other extensions from the list in turn. But maybe this can't work because of the calls to backtrack() in between the calls to next().
(b) Adding a counter to the data (i.e. the class that includes all the grid info) and incrementing it for each call of next(). But I can't work out where to reset this counter to zero.
Here's the relevant bit of code for method (a):
private PotentialSolution tryFirstTrack(PotentialSolution ps)
{
    possibleTracks = new List<PotentialSolution>();
    for (Track trytrack = Track.Empty + 1; trytrack < Track.MaxVal; trytrack++)
    {
        if (validMove(ps.nextSide, trytrack))
        {
            ps.SetCell(trytrack);
            possibleTracks.Add(ps);
        }
    }
    return tryNextTrack(ps);
}

private PotentialSolution tryNextTrack(PotentialSolution ps)
{
    if (possibleTracks.Count == 0)
    {
        ps.SetCell(Track.Empty);
        return null;
    }
    ps = possibleTracks.First();
    // don't use same one again
    possibleTracks.Remove(ps);
    return ps;
}

private bool backtrackTracks(PotentialSolution ps)
{
    if (canExit)
    {
        return true;
    }
    if (checkOccupiedCells(ps))
    {
        ps = tryFirstTrack(ps);
        while (ps != null)
        {
            // 'testCells' is a copy of the grid for use with graphics - no need to include graphics in the backtrack stack
            testCells[ps.h, ps.w].DrawTrack(g, ps.GetCell());
            if (ps.TestForExit(endColumn, ref canExit) != Track.MaxVal)
            {
                drawRowColTotals(ps);
                return true;
            }
            ps.nextSide = findNextSide(ps.nextSide, ps.GetCell(), ref ps.h, ref ps.w);
            if (ps.h >= 0 && ps.h < cellsPerSide && ps.w >= 0 && ps.w < cellsPerSide)
            {
                backtrackTracks(ps);
                ps = tryNextTrack(ps);
            }
            else
                return false;
        }
        return false;
    }
    return false;
}
And here's some code using random choices instead. This works fine, so I conclude that the methods checkOccupiedCells() and findNextSide() are working correctly.
private bool backtrackTracks(PotentialSolution ps)
{
    if (canExit)
    {
        return true;
    }
    if (checkOccupiedCells(ps))
    {
        Track track = createRandomTrack(ps);
        if (canExit)
            return true;
        if (track == Track.MaxVal)
            return false;
        ps.SetCell(track);
        ps.nextSide = findNextSide(ps.nextSide, track, ref ps.h, ref ps.w);
        if (ps.h >= 0 && ps.h < cellsPerSide && ps.w >= 0 && ps.w < cellsPerSide)
            backtrackTracks(ps);
        else
            return false;
    }
}
If it helps, there's more background info in the puzzle itself here
Currently, for my college project, I am trying to implement the FCFS and priority scheduling algorithms for xv6. I am done with the priority one and am now trying to make FCFS work. The following is the modification I made to the code:
void
scheduler(void)
{
  struct proc *p = 0;
  struct cpu *c = mycpu();
  c->proc = 0;

  for(;;)
  {
    // Enable interrupts on this processor.
    sti();

    // Loop over process table looking for process to run.
    acquire(&ptable.lock);
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    {
      struct proc *minP = 0;

      if(p->state != RUNNABLE)
        continue;

      // ignore init and sh processes from FCFS
      if(p->pid > 1)
      {
        if (minP != 0){
          // here I find the process with the lowest creation time (the first one that was created)
          if(p->ctime < minP->ctime)
            minP = p;
        }
        else
          minP = p;
      }

      // If I found the process which I created first and it is runnable, I run it.
      // (In real FCFS I should not check if it is runnable, but for testing purposes I have to make this check;
      // otherwise every time I launch a process which does an I/O operation (every simple command) everything will block.)
      if(minP != 0 && p->state == RUNNABLE)
        p = minP;

      if(p != 0)
      {
        // Switch to chosen process. It is the process's job
        // to release ptable.lock and then reacquire it
        // before jumping back to us.
        c->proc = p;
        switchuvm(p);
        p->state = RUNNING;

        swtch(&(c->scheduler), p->context);
        switchkvm();

        // Process is done running for now.
        // It should have changed its p->state before coming back.
        c->proc = 0;
      }
    }
    release(&ptable.lock);
  }
}
Now, my question is this: when I run two dummy processes (following the usual convention, a foo.c that spawns child processes doing useless time-consuming calculations), each producing a child, why am I still able to run ps?
Technically, each of the two available CPUs should be occupied running the two dummy processes, correct?
Additionally, when I set the creation time as the priority using the algorithm I wrote for priority scheduling, it turns out that after creating the two processes I cannot run anything, meaning both CPUs are in use.
I think you've made two mistakes:
the process context switch is inside your for loop; it should come after it:
schedule()
{
  // for ever
  for(;;)
  {
    // select process to run
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    {
      ...
    }

    // run proc
    if (p != 0)
    {
      ...
    }
  }
}
You've made a little mistake in minP selection:
if(minP != 0 && p->state == RUNNABLE)
  p = minP;
should be
if(minP != 0 && minP->state == RUNNABLE)
  p = minP;
but since minP's state is necessarily RUNNABLE, and you test that it is not null before running it, you could simply write
p = minP;
So your corrected code could be:
void
scheduler(void)
{
  struct proc *p = 0;
  struct cpu *c = mycpu();
  c->proc = 0;

  for(;;)
  {
    sti();

    struct proc *minP = 0;

    // Loop over process table looking for process to run.
    acquire(&ptable.lock);
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    {
      if(p->state != RUNNABLE)
        continue;

      // ignore init and sh processes from FCFS
      if(p->pid > 1)
      {
        if (minP != 0) {
          // here I find the process with the lowest creation time (the first one that was created)
          if(p->ctime < minP->ctime)
            minP = p;
        }
        else
          minP = p;
      }
    }

    p = minP;
    if(p != 0)
    {
      // Switch to the chosen process. It is the process's job
      // to release ptable.lock and then reacquire it
      // before jumping back to us, so keep holding the lock here.
      c->proc = p;
      switchuvm(p);
      p->state = RUNNING;

      swtch(&(c->scheduler), p->context);
      switchkvm();

      // Process is done running for now.
      // It should have changed its p->state before coming back.
      c->proc = 0;
    }
    release(&ptable.lock);
  }
}
I want to validate a TextFormField to both check for a minimum value of 10 (done elsewhere) and also check that the value entered is a multiple of 10.
I've written a function that tries to handle both, and it seems to work. However, it feels clunky, and it doesn't provide any feedback until the form is submitted. Here is what I've written:
final form = _formKey.currentState;
if ((form.validate()) && (_amount / 10 is int)) {
  form.save();
  return true;
}
return false;
}
Is there a cleaner way to check if an entered value is a multiple of 10 (or any integer)? For example, in the validator: property field itself?
validator: (String value) {
  int n = int.parse(value);
  int multipleOf = 10;
  return n % multipleOf != 0 ? "not a multiple of $multipleOf" : null;
}
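If you also want feedback before the form is submitted, one option is to let the field validate as the user types. This is just a rough sketch (AutovalidateMode.onUserInteraction and the int.tryParse guard are my assumptions, so adapt them to your form):
TextFormField(
  keyboardType: TextInputType.number,
  // Validate while the user types instead of only when the form is submitted.
  autovalidateMode: AutovalidateMode.onUserInteraction,
  validator: (value) {
    final n = int.tryParse(value ?? '');
    if (n == null) return 'Enter a whole number';
    if (n < 10) return 'Must be at least 10';
    if (n % 10 != 0) return 'Not a multiple of 10';
    return null;
  },
)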
I tried to write a parameterized constructor for a linked list. My program is supposed to implement a queue using a linked list, so I want a parameterized constructor like Queue(int value, int size), but it does not run or build the list.
This is my code for this problem:
Queue(int value, int _size)
{
    for(int i = 0; i < _size; ++i)
    {
        Node* temp = new Node;
        temp->data = value;
        temp->next = nullptr;
        if(head == nullptr)
        {
            head = tail = temp;
        }
        else
        {
            tail->next = temp;
            tail = temp;
        }
    }
}
I expected the result to be the list filled with value, size times. For example, if I run Queue x(20, 3), the linked list should be
20 20 20
Since this is a constructor, head and tail are not properly initialized before you use them. I would suggest adding head = tail = nullptr; just before the loop (or initializing them in the member initializer list) and seeing what happens.
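For illustration, here is a minimal, self-contained sketch of that suggestion; the Node struct and the public head/tail members are assumptions about your class, so adapt them as needed:
#include <iostream>

struct Node {
    int   data;
    Node* next;
};

class Queue {
public:
    Node* head;
    Node* tail;

    // Initialize head and tail before the loop relies on them.
    Queue(int value, int _size) : head(nullptr), tail(nullptr)
    {
        for (int i = 0; i < _size; ++i)
        {
            Node* temp = new Node;
            temp->data = value;
            temp->next = nullptr;
            if (head == nullptr)
                head = tail = temp;   // first node: head and tail both point to it
            else
            {
                tail->next = temp;    // append at the tail
                tail = temp;
            }
        }
    }
};

int main()
{
    Queue x(20, 3);
    for (Node* n = x.head; n != nullptr; n = n->next)
        std::cout << n->data << ' ';  // prints: 20 20 20
    std::cout << '\n';
}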
Follow this code after your node creation; I hope this will work. (Note that i++ versus ++i as the loop increment makes no difference here; both run the loop size times.)
if(head == NULL)
    head = temp;
else {
    Node *x;
    x = head;
    while(x->next != NULL)
        x = x->next;
    x->next = temp;
}
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/sem.h>
#include <sys/ipc.h>

int sem_id;

void update_file(int number)
{
    struct sembuf sem_op;
    FILE* file;
    printf("Inside Update Process\n");

    /* wait on the semaphore, unless its value is non-negative. */
    sem_op.sem_num = 0;
    sem_op.sem_op = -1; /* <-- amount by which the value of the semaphore is to be decreased */
    sem_op.sem_flg = 0;
    semop(sem_id, &sem_op, 1);

    /* we "locked" the semaphore, and are assured exclusive access to file. */
    /* manipulate the file in some way. for example, write a number into it. */
    file = fopen("file.txt", "a+");
    if (file) {
        fprintf(file, " \n%d\n", number);
        fclose(file);
    }

    /* finally, signal the semaphore - increase its value by one. */
    sem_op.sem_num = 0;
    sem_op.sem_op = 1;
    sem_op.sem_flg = 0;
    semop( sem_id, &sem_op, 1);
}

void write_file(char* contents)
{
    printf("Inside Write Process\n");
    struct sembuf sem_op;
    sem_op.sem_num = 0;
    sem_op.sem_op = -1;
    sem_op.sem_flg = 0;
    semop( sem_id, &sem_op, 1);

    FILE *file = fopen("file.txt","w");
    if(file)
    {
        fprintf(file,contents);
        fclose(file);
    }

    sem_op.sem_num = 0;
    sem_op.sem_op = 1;
    sem_op.sem_flg = 0;
    semop( sem_id, &sem_op, 1);
}

int main()
{
    //key_t key = ftok("file.txt",'E');
    sem_id = semget( IPC_PRIVATE, 1, 0600 | IPC_CREAT);
    /* IPC_PRIVATE asks the kernel for a new, private key;
       1 is the number of semaphores in the semaphore set */
    if(sem_id == -1)
    {
        perror("main : semget");
        exit(1);
    }
    int rc = semctl( sem_id, 0, SETVAL, 1);

    pid_t u = fork();
    if(u == 0)
    {
        update_file(100);
        exit(0);
    }
    else
    {
        wait();
    }

    pid_t w = fork();
    if(w == 0)
    {
        write_file("Hello!!");
        exit(0);
    }
    else
    {
        wait();
    }
}
If I run the above code as C code, the write_file() function is called after the update_file() function, whereas if I run the same code as C++ code, the order of execution is reversed. Why is that?
Just some suggestions, but it looks to me like it could be caused by a combination of things:
The wait() call is supposed to take a pointer argument (which can be NULL). The compiler should have caught this, but you must be picking up another definition somewhere that permits your syntax. You are also missing an include for sys/wait.h, which might be why the compiler isn't complaining as I'd expect it to.
Depending on your machine/OS configuration, the forked process may not get to run until after the parent yields. Assuming the wait() you are calling isn't working the way we would expect, it is possible for the parent to execute completely before the children get to run.
Unfortunately, I wasn't able to duplicate the same temporal behavior. However, when I generated assembly files for each of the two cases (C & C++), I noticed that the C++ version is missing the "wait" system call, but the C version is as I would expect. To me, this suggests that somewhere in the C++ headers this special version without an argument is being #defined out of the code. This difference could be the reason behind the behavior you are seeing.
In a nutshell... add the #include, and change your wait calls to "wait(0)"
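Concretely, the suggested change would look something like this (a sketch of just the relevant part of main, not the whole program):
#include <sys/wait.h>   /* declares wait() */

/* ... inside main(), after the semaphore has been created and initialized ... */
pid_t u = fork();
if (u == 0)
{
    update_file(100);
    exit(0);
}
else
{
    wait(NULL);   /* block until the update child has finished */
}

pid_t w = fork();
if (w == 0)
{
    write_file("Hello!!");
    exit(0);
}
else
{
    wait(NULL);   /* block until the write child has finished */
}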