formula to pick every pixel in a bitmap without repeating - swift

I'm looking for an algorithm. I am programming in Swift at the moment, but pseudocode or any reasonably similar "C family" syntax will do.
Imagine a large list of values, such as pixels in a bitmap. You want to pick each one in a visually random order, one at a time, and never pick the same one twice, and always end up picking them all.
I used such an algorithm before in a fractal generator, so that it was not just rendering line by line but building the image up slowly in a stochastic way. That was long ago, though, in a Java applet, and I no longer have the code.
I do not believe it used any pseudo-random number generator, and the main thing I liked about it was that it did not make the rendering take any longer than the plain line-by-line approach. Every shuffling algorithm I looked at seemed like it would slow the rendering down with such a large number of values to deal with, unless I'm missing something.
EDIT: I went with the shuffled-array approach. I shuffle once when the app loads, and it does not take that long anyway. Here is the code for my "Dealer" class.
import Foundation
import Cocoa
import Quartz

class Dealer: NSObject
{
    //########################################################
    var deck = [(CGFloat, CGFloat)]()
    var count = 0
    //########################################################
    init(_ w: Int, _ h: Int)
    {
        super.init()
        // The loops below are inclusive, so the deck holds (w+1)*(h+1) points.
        deck.reserveCapacity((w + 1) * (h + 1))
        for y in 0...h
        {
            for x in 0...w
            {
                deck.append((CGFloat(x), CGFloat(y)))
            }
        }
        self.shuffle()
    }
    //########################################################
    func shuffle()
    {
        // Fisher-Yates: swap each element with a uniformly chosen element at
        // or below its own index, so every permutation is equally likely.
        // (Swapping every i with arc4random_uniform over the whole range is
        // biased, and with an upper bound of count-1 it never picks the last index.)
        for i in stride(from: deck.count - 1, through: 1, by: -1)
        {
            let j = Int(arc4random_uniform(UInt32(i + 1)))
            deck.swapAt(i, j)
        }
    }
    //########################################################
    func deal() -> (CGFloat, CGFloat)
    {
        let result = deck[count]
        count = (count + 1) % deck.count // wrap around when the deck is done
        return result
    }
    //########################################################
}
The init is called once, and it calls shuffle(), but you can call shuffle() again later if needed.
Each time you need a "card" you call deal(). It loops back to the beginning when the "deck" is done.

If you have enough memory space to store all the pixel positions, you can shuffle them:
const int xs = 640;       // image resolution
const int ys = 480;
const int sz = xs*ys;     // image size
color pixel[sz];          // image data
int adr[sz], i, j;
for (i = 0; i < sz; i++) adr[i] = i;   // ordered positions
for (i = 0; i < sz; i++)               // Fisher-Yates shuffle of the positions
    {
    j = i + random(sz - i);            // pseudo-random index in [i, sz) with uniform distribution
    swap(adr[i], adr[j]);              // swap the positions, not the pixel data
    }
// now render the pixels in the order adr[0], adr[1], ..., adr[sz-1]
This way you are guaranteed that each pixel is used exactly once, and the visiting order is uniformly shuffled ...

You need to implement a pseudo-random number generator with a theoretically known period, which is greater than but very close to the number of elements in your list. Suppose R() is a function that implements such a RNG.
Then:
for i = 1...N
    do
        idx = R()
    while idx > N
    output element(idx)
end
If the period of the RNG is greater than N, this algorithm is guaranteed to finish and never output the same element twice.
If the period of the RNG is close to N, this algorithm will be fast (i.e. the do-while loop will mostly do just one iteration).
If the RNG has good quality, the visual output will look pleasant; here you have to experiment and decide what is good enough for you.
To find a RNG that has an exactly-known period, you should examine theory on RNGs, which is very extensive (maybe too extensive); Wikipedia has useful links.
Start with Linear congruential generators: they are very simple, and there is a chance they will be of good enough quality.
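As a concrete illustration (my own sketch, not part of the answer above): a linear congruential generator taken modulo a power of two has full period exactly when the increment is odd and the multiplier is congruent to 1 mod 4 (the Hull-Dobell conditions). Picking the smallest power of two >= N and rejecting out-of-range draws then visits every index exactly once; the multiplier and increment below are Knuth's well-known MMIX constants.

// Full-period LCG modulo 2^k: the period is exactly 2^k when the increment
// is odd and the multiplier is congruent to 1 mod 4 (Hull-Dobell theorem).
struct FullCycleLCG {
    let mask: UInt64          // 2^k - 1, with 2^k the smallest power of two >= n
    var state: UInt64 = 0

    init(count n: Int) {
        var m: UInt64 = 1
        while m < UInt64(n) { m <<= 1 }
        mask = m - 1
    }

    mutating func next() -> Int {
        // Knuth's MMIX constants: multiplier % 4 == 1, increment is odd.
        state = (state &* 6364136223846793005 &+ 1442695040888963407) & mask
        return Int(state)
    }
}

let w = 640
let h = 480
let n = w * h
var rng = FullCycleLCG(count: n)
var emitted = 0
while emitted < n {
    let idx = rng.next()
    if idx < n {              // reject indices outside the bitmap
        // visit pixel (idx % w, idx / w) here
        emitted += 1
    }
}

Because the cycle length is at most 2N, the rejection loop makes fewer than two draws per pixel on average, so the whole pass stays O(N).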

Here's a working example based on linear feedback shift registers. Since an n-bit LFSR has a maximal sequence length of 2^n − 1 steps, this will work best when the number of pixels is one less than a power of 2. For other sizes, the pseudo-random coordinates are discarded until one is obtained that lies within the specified range of coordinates. This is still reasonably efficient; in the worst case (where w×h is a power of 2), there will be an average of two LFSR iterations per coordinate pair.
The following code is in JavaScript, but it should be easy enough to port this to Swift or any other language.
Note: For large canvas areas like 1920×1024, it would make more sense to use repeated tiles of a smaller size (e.g., 128×128). The tiling will be imperceptible.
var lsfr_register, lsfr_mask, lsfr_fill_width, lsfr_fill_height, lsfr_state, lsfr_timer;
var lsfr_canvas, lsfr_canvas_context, lsfr_blocks_per_frame, lsfr_frame_rate = 50;

function lsfr_setup(width, height, callback, duration) {
    // Maximal length LFSR feedback terms
    // (sourced from http://users.ece.cmu.edu/~koopman/lfsr/index.html)
    var taps = [ -1, 0x1, 0x3, 0x5, 0x9, 0x12, 0x21, 0x41, 0x8E, 0x108, 0x204, 0x402,
                 0x829, 0x100D, 0x2015, 0x4001, 0x8016, 0x10004, 0x20013, 0x40013,
                 0x80004, 0x100002, 0x200001, 0x400010, 0x80000D, 0x1000004, 0x2000023,
                 0x4000013, 0x8000004, 0x10000002, 0x20000029, 0x40000004, 0x80000057 ];
    var nblocks = width * height;
    var lsfr_size = nblocks.toString(2).length;
    if (lsfr_size > 32) {
        // Anything longer than about 21 bits would be quite slow anyway
        console.log("Unsupported LFSR size ("+lsfr_size+")");
        return;
    }
    lsfr_register = 1;
    lsfr_mask = taps[lsfr_size];
    lsfr_state = nblocks;
    lsfr_fill_width = width;
    lsfr_fill_height = height;
    lsfr_blocks_per_frame = Math.ceil(nblocks / (duration * lsfr_frame_rate));
    lsfr_timer = setInterval(callback, Math.ceil(1000 / lsfr_frame_rate));
}

function lsfr_step() {
    var x, y;
    do {
        // Generate x,y pairs until they are within the bounds of the canvas area
        // Worst case for an n-bit LFSR: n iterations in one call (2 on average)
        // Best case (where w*h is one less than a power of 2): 1 iteration per call
        if (lsfr_register & 1) lsfr_register = (lsfr_register >> 1) ^ lsfr_mask;
        else lsfr_register >>= 1;
        y = Math.floor((lsfr_register-1) / lsfr_fill_width);
    } while (y >= lsfr_fill_height);
    x = (lsfr_register-1) % lsfr_fill_width;
    return [x, y];
}

function lsfr_callback() {
    var coords;
    for (var i=0; i<lsfr_blocks_per_frame; i++) {
        // Fetch pseudo-random coordinates and fill the corresponding pixels
        coords = lsfr_step();
        lsfr_canvas_context.fillRect(coords[0], coords[1], 1, 1);
        if (--lsfr_state <= 0) {
            clearInterval(lsfr_timer);
            break;
        }
    }
}

function start_fade() {
    var w = document.getElementById("w").value * 1;
    var h = document.getElementById("h").value * 1;
    var dur = document.getElementById("dur").value * 1;
    lsfr_canvas = document.getElementById("cv");
    lsfr_canvas.width = w;
    lsfr_canvas.height = h;
    lsfr_canvas_context = lsfr_canvas.getContext("2d");
    lsfr_canvas_context.fillStyle = "#ffff00";
    lsfr_canvas_context.fillRect(0, 0, w, h);
    lsfr_canvas_context.fillStyle = "#ff0000";
    lsfr_setup(w, h, lsfr_callback, dur);
}
Size:
<input type="text" size="3" id="w" value="320"/>
×
<input type="text" size="3" id="h" value="240"/>
in
<input type="text" size="3" id="dur" value="3"/>
secs
<button onclick="start_fade(); return 0">Start</button>
<br />
<canvas id="cv" width="320" height="240" style="border:1px solid #ccc"/>

Related

Minimum cost solution to connect all elements in set A to at least one element in set B

I need to find the shortest set of paths to connect each element of Set A with at least one element of Set B. Repetitions in A OR B are allowed (but not both), and no element can be left unconnected.
I'm representing the elements as integers, so the "cost" of a connection is just the absolute value of the difference. I also have a cost for crossing paths, so if Set A = [60, 64] and Set B = [63, 67], then (60 -> 67) incurs an additional cost. There can be any number of elements in either set.
I've calculated the table of transitions and costs (distances and crossings), but I can't find the algorithm to find the lowest-cost solution. I keep ending up with either too many connections (i.e., repetitions in both A and B) or greedy solutions that omit elements (e.g., when A and B are non-overlapping). I haven't been able to find examples of precisely this kind of problem online, so I hoped someone here might be able to help, or at least point me in the right direction. I'm not a graph theorist (obviously!), and I'm writing in Swift, so code examples in Swift (or pseudocode) would be much appreciated.
UPDATE: The solution offered by @Daniel is almost working, but it does occasionally add unnecessary duplicates. I think this may be something to do with the sorting of the priorityQueue -- the duplicates always involve identical elements with identical costs. My first thought was to add some kind of "positional encoding" (yes, Transformer-speak) to the costs, so that the costs are offset by their positions (though of course, this doesn't guarantee unique costs). I thought I'd post my Swift version here, in case anyone has any ideas:
public static func voiceLeading(from chA: [Int], to chB: [Int]) -> Set<[Int]> {
    var result: Set<[Int]> = Set()
    let im = intervalMatrix(chA, chB: chB)
    if im.count == 0 { return [[0]] }
    let vc = voiceCrossingCostsMatrix(chA, chB: chB, cost: 4)
    // NOTE: cm contains the weights
    let cm = VectorUtils.absoluteAddMatrix(im, toMatrix: vc)
    var A_links: [Int: Int] = [:]
    var B_links: [Int: Int] = [:]
    var priorityQueue: [Entry] = []
    for (i, a) in chA.enumerated() {
        for (j, b) in chB.enumerated() {
            priorityQueue.append(Entry(a: a, b: b, cost: cm[i][j]))
            if A_links[a] != nil {
                A_links[a]! += 1
            } else {
                A_links[a] = 1
            }
            if B_links[b] != nil {
                B_links[b]! += 1
            } else {
                B_links[b] = 1
            }
        }
    }
    priorityQueue.sort { $0.cost > $1.cost }
    while priorityQueue.count > 0 {
        let entry = priorityQueue[0]
        if A_links[entry.a]! > 1 && B_links[entry.b]! > 1 {
            A_links[entry.a]! -= 1
            B_links[entry.b]! -= 1
        } else {
            result.insert([entry.a, (entry.b - entry.a)])
        }
        priorityQueue.remove(at: 0)
    }
    return result
}
Of course, since the duplicates have identical scores, it shouldn't be a problem to just remove the extras, but it feels a bit hackish...
UPDATE 2: Slightly less hackish (but still a bit!); since the requirement is that my result should have cardinality equal to max(|A|, |B|), I can actually just stop adding entries to my result once I've reached the target cardinality (sketched below). Seems okay...
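A sketch of that cap applied to the queue-draining loop above (my adaptation of the poster's own loop, untested against their data; it assumes, as the update suggests, that the skipped tail contains only redundant duplicates):

// Drain the queue as before, but stop once the result holds
// max(|A|, |B|) pairings, so identical-cost duplicates are never added.
let targetCardinality = max(chA.count, chB.count)
while !priorityQueue.isEmpty && result.count < targetCardinality {
    let entry = priorityQueue.removeFirst()
    if A_links[entry.a]! > 1 && B_links[entry.b]! > 1 {
        A_links[entry.a]! -= 1
        B_links[entry.b]! -= 1
    } else {
        result.insert([entry.a, entry.b - entry.a])
    }
}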
UPDATE 3: Resurrecting this old question, I've recently had some problems arise from the fact that the above algorithm doesn't fulfill my requirement that |S| == max(|A|, |B|) (where S is the set of pairings). If anyone knows of a simple way of ensuring this, it would be much appreciated. (I'll obviously be poking away at possible changes.)
This is an easy task:
Add all edges of the graph to a priority_queue, where the highest-priority edge is the one with the biggest weight.
Look at each edge e = (u, v, w) in the priority_queue, where u is in A, v is in B, and w is the weight.
If removing e from the graph doesn't leave u or v isolated, remove it.
Otherwise, e is part of the answer.
This should be enough for your case:
#include <bits/stdc++.h>
using namespace std;

struct edge {
    int u, v, w;
    edge(){}
    edge(int up, int vp, int wp){ u = up; v = vp; w = wp; }
    void print(){ cout << "(" << u << ", " << v << ")" << endl; }
    bool operator<(const edge& rhs) const { return w < rhs.w; }
};

vector<edge> E;                   // edge set
priority_queue<edge> pq;
vector<edge> ans;
int grade[5] = {3, 3, 2, 2, 2};   // degree of each vertex

int main(){
    E.push_back(edge(0, 2, 1)); E.push_back(edge(0, 3, 1)); E.push_back(edge(0, 4, 4));
    E.push_back(edge(1, 2, 5)); E.push_back(edge(1, 3, 2)); E.push_back(edge(1, 4, 0));
    for(size_t i = 0; i < E.size(); i++) pq.push(E[i]);
    while(!pq.empty()){
        edge e = pq.top();
        // drop e only if neither endpoint would be left isolated
        if(grade[e.u] > 1 && grade[e.v] > 1){
            grade[e.u]--; grade[e.v]--;
        }
        else ans.push_back(e);
        pq.pop();
    }
    for(size_t i = 0; i < ans.size(); i++) ans[i].print();
    return 0;
}
Complexity: O(E lg(E)).
I think this problem is "minimum weighted bipartite matching" (searching for "maximum weighted bipartite matching" would also be relevant; it's just the opposite objective).

How to optimize this algorithm that find all maximal matching in a graph?

In my app people give grades to each other, out of ten points. Each day, an algorithm computes a match for as many people as possible (it's impossible to compute a match for everyone). It builds a graph where the vertices are users and the edges are the grades.
I simplify the problem by saying that if two people give a grade to each other, there is an edge between them weighted with the average of their two grades. But if A gives a grade to B and B doesn't grade A back, there is no edge between them and they can never match; this way, the graph is no longer directed.
I would like everybody to be happy on average, but at the same time I would like as few people as possible to be left without a match.
Being very deterministic, I made an algorithm that finds ALL maximal matchings in a graph. I did that because I thought I could analyse all these maximal matchings and apply a value function that could look like:
V(M) = exp(|M| / max|M|) * sum(weights of all edges in M)
That is to say, a matching is high-valued if its cardinality is close to the cardinality of the maximum matching, and if the sum of the grades between people is high. I apply an exponential to the ratio |M|/max|M| because I consider it a big problem if that ratio drops below 0.8 (the exponential will be arranged to sharply decrease V as |M|/max|M| approaches 0.8).
I would then take the matching where V(M) is maximal. The big problem, though, is that my function that computes all maximal matchings takes a lot of time. For only 15 vertices and 20 edges, it takes almost 10 minutes...
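As an aside, evaluating V(M) for one candidate matching is cheap compared with enumerating the matchings. A minimal Swift sketch (my code, using the Edge struct from the listing below and counting nil weights as 0):

// V(M) = exp(|M| / max|M|) * (sum of edge weights in M)
func value(of matching: [Edge], maxMatchingSize: Int) -> Double {
    let ratio = Double(matching.count) / Double(maxMatchingSize)
    let totalWeight = matching.reduce(0.0) { $0 + Double($1.w ?? 0) }
    return exp(ratio) * totalWeight
}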
Here is the algorithm (in Swift):
import Foundation

struct Edge: CustomStringConvertible {
    var description: String {
        return "e(\(v1), \(v2))"
    }
    let v1: Int
    let v2: Int
    let w: Int?
    init(_ arrint: [Int])
    {
        v1 = arrint[0]
        v2 = arrint[1]
        w = nil
    }
    init(_ v1: Int, _ v2: Int)
    {
        self.v1 = v1
        self.v2 = v2
        w = nil
    }
    init(_ v1: Int, _ v2: Int, _ w: Int)
    {
        self.v1 = v1
        self.v2 = v2
        self.w = w
    }
}

let mygraph: [Edge] =
[
    Edge([1, 2]),
    Edge([1, 5]),
    Edge([2, 5]),
    Edge([2, 3]),
    Edge([3, 4]),
    Edge([3, 6]),
    Edge([5, 6]),
    Edge([2, 6]),
    Edge([4, 1]),
    Edge([3, 5]),
    Edge([4, 2]),
    Edge([7, 1]),
    Edge([7, 2]),
    Edge([8, 1]),
    Edge([9, 8]),
    Edge([11, 2]),
    Edge([11, 8]),
    Edge([12, 13]),
    Edge([1, 6]),
    Edge([4, 7]),
    Edge([5, 7]),
    Edge([3, 5]),
    Edge([9, 1]),
    Edge([10, 11]),
    Edge([10, 4]),
    Edge([10, 2]),
    Edge([10, 1]),
    Edge([10, 12]),
]

// remove all the edges and vertices "touching" the edges and vertices in "edgePath"
func reduce(graph: [Edge], edgePath: [Edge]) -> [Edge]
{
    var alreadyUsedV: [Int] = []
    for edge in edgePath
    {
        alreadyUsedV.append(edge.v1)
        alreadyUsedV.append(edge.v2)
    }
    return graph.filter({ edge in
        return alreadyUsedV.first(where: { edge.v1 == $0 }) == nil
            && alreadyUsedV.first(where: { edge.v2 == $0 }) == nil
    })
}

func findAllMaximalMatching(graph Gi: [Edge]) -> [[Edge]]
{
    var matchings: [[Edge]] = []
    var G = Gi            // current graph (reduced at each depth)
    var M: [Edge] = []    // current matching being built
    var Cx: [Int] = []    // current path in the possibilities tree
                          // e.g. Cx[1] = 3 : at depth 1, we are at the 3rd edge
    var d: Int = 0        // current depth
    var debug_it = 0
    while(true)
    {
        if(G.count == 0) // no edge left in the graph: we have a matching
        {
            if(M.count > 0) // safety: if the initial graph is empty we cannot return an empty matching
            {
                matchings.append(M)
            }
            if(d == 0)
            {
                // depth = 0, we cannot decrement d: we have exhausted the possibility tree
                break
            }
            d = d - 1
            _ = M.popLast()
            G = reduce(graph: Gi, edgePath: M)
        }
        else
        {
            let indexForThisDepth = Cx.count > d ? Cx[d] + 1 : 0
            if(G.count < indexForThisDepth + 1)
            {
                // this depth is exhausted
                _ = Cx.popLast()
                if(d == 0)
                {
                    break
                }
                d = d - 1
                _ = M.popLast()
                // reduce from the initial graph down to the decremented depth
                G = reduce(graph: Gi, edgePath: M)
            }
            else
            {
                // matching not finished being built
                M.append(G[indexForThisDepth])
                if(indexForThisDepth == 0)
                {
                    Cx.append(indexForThisDepth)
                }
                else
                {
                    Cx[d] = indexForThisDepth
                }
                d = d + 1
                G = reduce(graph: G, edgePath: M)
            }
        }
        debug_it += 1
    }
    print("matching counts : \(matchings.count)")
    print("iterations : \(debug_it)")
    return matchings
}

let m = findAllMaximalMatching(graph: mygraph)
// we have computed all the maximal matchings; now loop through all of them
// to find the one where V(Mi) is maximum
// ....
Finally, my question is: how can I optimize this algorithm for finding all maximal matchings, and compute my value function over them to find the best matching for my app, in polynomial time?
I may be missing something, since the question is quite complicated, but why not simply use a maximum-flow formulation, with every vertex appearing twice and the edge weights being the average grade where one exists? It will return the maximal flow if configured correctly, and it runs in polynomial time.

Functional version of a typical nested while loop

I hope this question may please functional programming lovers. Could I ask for a way to translate the following fragment of code to a pure functional implementation in Scala with good balance between readability and execution speed?
Description: for each element in a sequence, produce a sub-sequence containing the elements that come after the current element (including itself) within a distance smaller than a given threshold. Once the threshold is crossed, it is not necessary to process the remaining elements.
def getGroupsOfElements(input: Seq[Element]): Seq[Seq[Element]] = {
  val maxDistance = 10 // put any number you may
  var outerSequence = Seq.empty[Seq[Element]]
  for (index <- 0 until input.length) {
    var anotherIndex = index + 1
    var distance = input(index) - input(anotherIndex) // assume a separate function computes the distance
    var innerSequence = Seq(input(index))
    while (distance < maxDistance && anotherIndex < (input.length - 1)) {
      innerSequence = innerSequence ++ Seq(input(anotherIndex))
      anotherIndex = anotherIndex + 1
      distance = input(index) - input(anotherIndex)
    }
    outerSequence = outerSequence ++ Seq(innerSequence)
  }
  outerSequence
}
You know, this would be a ton easier if you added a description of what you're trying to accomplish along with the code.
Anyway, here's something that might get close to what you want.
def getGroupsOfElements(input: Seq[Element]): Seq[Seq[Element]] =
  input.tails
    .filter(_.nonEmpty) // tails ends with an empty Seq, which the imperative version never produces
    .map(x => x.takeWhile(y => distance(x.head, y) < maxDistance))
    .toSeq

Promela system with unranged values

int rq_begin = 0, rq_end = 0;
int av_begin = 0, av_end = 0;

#define MAX_DUR 10
#define RQ_DUR 5

proctype Writer() {
    do
    :: (av_end < rq_end) -> av_end++;
       if
       :: (av_end - av_begin) > MAX_DUR -> av_begin = av_end - MAX_DUR;
       :: else -> skip;
       fi
       printf("available span: [%d,%d]\n", av_begin, av_end);
    od
}

proctype Reader() {
    do
    :: d_step {
           rq_begin++;
           rq_end = rq_begin + RQ_DUR;
       }
       printf("requested span: [%d,%d]\n", rq_begin, rq_end);
       (rq_begin >= av_begin && rq_end <= av_end);
       printf("got requested span\n");
    od
}

init {
    run Writer();
    run Reader();
}
This system (only an example) should model a reader/writer queue where the reader requests a certain span of frames [rq_begin,rq_end], and the writer should then make at least this span available. [av_begin,av_end] is the span of available frames.
The four values are absolute frame indices; rq_begin gets incremented infinitely as the reader reads the next span of frames.
The system cannot be directly verified because the values are unbounded (generating infinitely many states). Does Promela/Spin (or similar software) have support for verifying a system like this, automatically transforming it so that it becomes finite?
For example, if all four values were incremented by the same amount, the situation would still be the same. Or the model could be reformulated with variables for the differences of these values, for example av_end - rq_end.
I'm using Promela/Spin to verify a more complex queuing system which uses absolute frame indices like this.

How/When to update bias in RPROP neural network?

I am implementing this neural network for a classification problem. I initially tried backpropagation, but it takes longer to converge. So I thought of using RPROP. In my test setup, RPROP works fine for an AND gate simulation but never converges for OR and XOR gate simulations.
How and when should I update bias for RPROP?
Here is my weight update logic:
for(int l_index = 1; l_index < _total_layers; l_index++){
    Layer* curr_layer = get_layer_at(l_index);
    // iterate through each neuron
    for(unsigned int n_index = 0; n_index < curr_layer->get_number_of_neurons(); n_index++){
        Neuron* jth_neuron = curr_layer->get_neuron_at(n_index);
        double change = jth_neuron->get_change();
        double curr_gradient = jth_neuron->get_gradient();
        double last_gradient = jth_neuron->get_last_gradient();
        int grad_sign = sign(curr_gradient * last_gradient);
        // iterate through each weight of the neuron
        for(int w_index = 0; w_index < jth_neuron->get_number_of_weights(); w_index++){
            double current_weight    = jth_neuron->give_weight_at(w_index);
            double last_update_value = jth_neuron->give_update_value_at(w_index);
            double new_update_value  = last_update_value;
            if(grad_sign > 0){
                new_update_value = min(last_update_value*1.2, 50.0);
                change = sign(curr_gradient) * new_update_value;
            }else if(grad_sign < 0){
                new_update_value = max(last_update_value*0.5, 1e-6);
                change = -change;
                curr_gradient = 0.0;
            }else if(grad_sign == 0){
                change = sign(curr_gradient) * new_update_value;
            }
            // Update neuron values
            jth_neuron->set_change(change);
            jth_neuron->update_weight_at((current_weight + change), w_index);
            jth_neuron->set_last_gradient(curr_gradient);
            jth_neuron->update_update_value_at(new_update_value, w_index);
            double current_bias = jth_neuron->get_bias();
            jth_neuron->set_bias(current_bias + _learning_rate * jth_neuron->get_delta());
        }
    }
}
In principle you don't treat the bias differently than you did with backpropagation: it's learning_rate * delta, which you seem to be doing.
One source of error may be that the sign of the weight change depends on how you calculate your error. There are different conventions, and using (t_i - y_i) instead of (y_i - t_i) should result in returning (new_update_value * sign(grad)) instead of -(new_update_value * sign(grad)), so try switching the sign. I'm also unsure how you specifically implemented everything, since a lot is not shown here. But here's a snippet of mine from a Java implementation that might be of help:
// gradient didn't change sign:
if(weight.previousErrorGradient * errorGradient > 0)
    weight.lastUpdateValue = Math.min(weight.lastUpdateValue * step_pos, update_max);
// changed sign:
else if(weight.previousErrorGradient * errorGradient < 0)
{
    weight.lastUpdateValue = Math.max(weight.lastUpdateValue * step_neg, update_min);
}
else
    weight.lastUpdateValue = weight.lastUpdateValue; // no change

// Depending on language, you should check for NaN here.
// multiply this with -1 depending on your error signal's sign:
return ( weight.lastUpdateValue * Math.signum(errorGradient) );
Also, keep in mind that 50.0, 1e-6, and especially 0.5 and 1.2 are empirically gathered values, so they might need to be adjusted. You should definitely print out the gradients and weight changes to see if anything weird is going on (e.g. exploding gradients -> NaN, even though you're only testing AND/XOR). Your last_gradient value should also be initialized to 0 at the first timestep.
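To tie the pieces together, here is a compact sketch in Swift (my own code, not the poster's) of the iRPROP- variant of the update rule, with the bias treated as just another weight whose input is constantly 1, as suggested above. The constants 1.2, 0.5, 50.0 and 1e-6 are the usual empirical RPROP defaults.

// One trainable parameter (a weight or the bias) with its own step size,
// updated by the sign-based iRPROP- rule.
struct RpropParam {
    var value: Double
    var stepSize = 0.1
    var prevGradient = 0.0   // initialized to 0 at the first timestep

    mutating func update(gradient: Double) {
        let signChange = gradient * prevGradient
        if signChange > 0 {
            stepSize = min(stepSize * 1.2, 50.0)   // same sign: accelerate
        } else if signChange < 0 {
            stepSize = max(stepSize * 0.5, 1e-6)   // sign flipped: slow down...
            prevGradient = 0                       // ...and skip this update (iRPROP-)
            return
        }
        value -= sign(gradient) * stepSize         // step against the gradient
        prevGradient = gradient
    }
}

// Helper returning -1, 0 or +1.
func sign(_ x: Double) -> Double {
    return x > 0 ? 1 : (x < 0 ? -1 : 0)
}

With this layout the bias needs no special case: give each neuron one extra RpropParam and feed it the bias gradient (the accumulated delta) at each epoch.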