What is the meaning of this method calcWrappedOffset()? - queue

There is a method offer in a queue whose effect I don't quite understand:
@Override
public boolean offer(final T e) {
if (null == e) {
throw new NullPointerException("Null is not a valid element");
}
// local load of field to avoid repeated loads after volatile reads
final AtomicReferenceArray<Object> buffer = producerBuffer;
final long index = lpProducerIndex();
final int mask = producerMask;
final int offset = calcWrappedOffset(index, mask);
.......
}
calcWrappedOffset() method:
private static int calcWrappedOffset(long index, int mask) {
return calcDirectOffset((int)index & mask);
}
private static int calcDirectOffset(int index) {
return index;
}

What is it that you don't understand?
This is from the SpscLinkedArrayQueue, I presume, which uses circular buffers with power-of-2 sizes. Because the size is a power of 2, wrapping the index around requires only a simple and fast binary AND with the mask = size - 1 value instead of the heavier modulo operator.
The original JCTools version allowed spacing out items in the array to reduce false-sharing effects, but RxJava decided not to support that in order to reduce memory consumption; hence calcDirectOffset returns the unaltered index.
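To make the trick concrete, here is a minimal standalone sketch (not RxJava code; the names are mine) showing that for a power-of-2 capacity, index & mask picks the same slot as the modulo operator would, using a single bitwise AND instead of a division:
public class MaskWrapDemo {
public static void main(String[] args) {
final int capacity = 8; // must be a power of 2
final int mask = capacity - 1; // binary 0111
for (long index = 0; index < 20; index++) {
// Wrap the ever-growing producer index into the circular buffer
int wrapped = (int) index & mask;
// Equivalent to the (slower) modulo for non-negative indices
System.out.println(index + " -> slot " + wrapped + " (modulo gives " + (index % capacity) + ")");
}
}
}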

How to limit download speed using UnityWebRequest?

Most of our game's files are downloaded during play. If the download speed is unlimited, there is no network bandwidth left for the gameplay protocols... I tried to use a DownloadHandlerScript and provide a preallocatedBuffer array to control how much content is accepted from the web per frame, but it does not seem to work... Can anyone help me, please?
Here is the DownloadHandlerScript I use:
class CustomDownloadHandler : DownloadHandlerScript
{
private byte[] _cacheBytes;
private int _index;
public CustomDownloadHandler(byte[] preallocatedBuffer) : base(preallocatedBuffer)
{
}
protected override void ReceiveContentLength(int contentLength)
{
_cacheBytes = new byte[contentLength];
}
// this method will be called once per frame to hand chunks of that data to script.
protected override bool ReceiveData(byte[] buffer, int dataLength)
{
if (buffer == null || buffer.Length <= 0)
return false;
Array.Copy(buffer, 0, _cacheBytes, _index, dataLength);
_index += dataLength;
return true;
}
//...
}
And I create an UnityWebRequest like this:
bytes = new byte[4096]; //As preallocated buffer.
webRequest = UnityWebRequest.Get("url/file.assetbundle");
webRequest.downloadHandler = new CustomDownloadHandler(bytes);
webRequest.SendWebRequest();
But the download speed is the same whether I use the download handler or not...

How to do Geofence monitoring/analytics using KSQLDB?

I am trying to do geofence monitoring/analytics using KSQLDB. I want to get a message whenever a vehicle ENTERS/LEAVES a geofence. Taking inspiration from https://github.com/gschmutz/various-demos/tree/master/kafka-geofencing, I have created a UDF named GEOFENCE; the code for it is further below.
Below is my query to perform join on geofence stream and live vehicle position stream
CREATE stream join_live_pos_geofence_status_1 AS SELECT lp1.vehicleid,
lp1.lat,
lp1.lon,
s1p.geofencecoordinates,
Geofence(lp1.lat, lp1.lon, 'POLYGON(('+s1p.geofencecoordinates+'))') AS geofence_status
FROM live_position_1 LP1
LEFT JOIN stream_1_processed S1P within 72 hours
ON lp1.clusterid = s1p.clusterid emit changes;
I am taking into account all the geofences created in the last 3 days.
I have created another query to use the geofence status from previous query to calculate whether the vehicle is ENTERING/LEAVING geofence.
CREATE stream join_geofence_monitoring_1 AS SELECT *,
Geofence(jlpgs1.lat, jlpgs1.lon, 'POLYGON(('+jlpgs1.geofencecoordinates+'))', jlpgs1.geofence_status) geofence_monitoring_status
FROM join_live_pos_geofence_status_1 JLPGS1 emit changes;
The above query gives me 'INSIDE', 'INSIDE' for the geofence_status and geofence_monitoring_status columns respectively, or 'OUTSIDE', 'OUTSIDE' for the two columns respectively. I know I am not taking the time aspect into account, i.e. these two queries should never be executed at exactly the same time, say t0, but I am not able to work out the correct way of doing this.
public class Geofence
{
private static final String OUTSIDE = "OUTSIDE";
private static final String INSIDE = "INSIDE";
private static GeometryFactory geometryFactory = JTSFactoryFinder.getGeometryFactory();
private static WKTReader wktReader = new WKTReader(geometryFactory);
@Udf(description = "Returns whether a coordinate lies within a polygon or not")
public static String geofence(final double latitude, final double longitude, String geometryWKT) {
boolean status = false;
String result = "";
Polygon polygon = null;
try {
polygon = (Polygon) wktReader.read(geometryWKT);
// However, an important point to note is that the longitude is the X value
// and the latitude the Y value. So we say "lat/long",
// but JTS will expect it in the order "long/lat".
Coordinate coord = new Coordinate(longitude, latitude);
Point point = geometryFactory.createPoint(coord);
status = point.within(polygon);
if(status)
{
result = INSIDE;
}
else
{
result = OUTSIDE;
}
} catch (ParseException e) {
throw new RuntimeException(e.getMessage());
}
return result;
}
@Udf(description = "Returns whether a coordinate moved in or out of a polygon")
public static String geofence(final double latitude, final double longitude, String geometryWKT, final String statusBefore) {
String status = geofence(latitude, longitude, geometryWKT);
if (statusBefore.equals("INSIDE") && status.equals("OUTSIDE")) {
//status = "LEAVING";
return "LEAVING";
} else if (statusBefore.equals("OUTSIDE") && status.equals("INSIDE")) {
//status = "ENTERING";
return "ENTERING";
}
return status;
}
}
My question is how can I calculate correctly that a vehicle is ENTERING/LEAVING a geofence? Is it even possible to do with KSQLDB?
Would it be correct to say that the join_live_pos_geofence_status_1 stream can have rows that go from INSIDE -> OUTSIDE and then from OUTSIDE -> INSIDE for some key value?
And what you're wanting to do is to output LEAVING and ENTERING events for these transitions?
You can likely do what you want using a custom UDAF. A custom UDAF takes an input and calculates an output via some intermediate state. For example, an AVG UDAF would take numbers as input, its intermediate state would be the count of inputs and the sum of inputs, and the output would be sum/count.
In your case, the input would be the current state, e.g. either INSIDE or OUTSIDE. The UDAF would need to store the last two states as its intermediate state, and the output can then be calculated from them. E.g.
Input   | Intermediate    | Output
INSIDE  | INSIDE          | <only a single entry in intermediate - your choice what you output>
INSIDE  | INSIDE,INSIDE   | no-change
OUTSIDE | INSIDE,OUTSIDE  | LEAVING
OUTSIDE | OUTSIDE,OUTSIDE | no-change
INSIDE  | OUTSIDE,INSIDE  | ENTERING
You'll need to decide what to output when there is only a single entry in the intermediate state, i.e. the first time a key is seen.
You can then filter the output to remove any rows that have no-change.
You may also need to set cache.max.bytes.buffering to zero to stop any results being conflated (e.g. SET 'cache.max.bytes.buffering' = '0'; in the ksqlDB CLI).
UPDATE: suggested code.
Not tested, but something like the following code may do what you want:
@UdafDescription(name = "my_geofence", description = "Computes the geofence status.")
public final class GeoFenceUdaf {
private static final String STATUS_1 = "STATUS_1";
private static final String STATUS_2 = "STATUS_2";
@UdafFactory(description = "Computes the geofence status.",
aggregateSchema = "STRUCT<" + STATUS_1 + " STRING, " + STATUS_2 + " STRING>")
public static Udaf<String, Struct, String> calcGeoFenceStatus() {
final Schema STRUCT_SCHEMA = SchemaBuilder.struct().optional()
.field(STATUS_1, Schema.OPTIONAL_STRING_SCHEMA)
.field(STATUS_2, Schema.OPTIONAL_STRING_SCHEMA)
.build();
return new Udaf<String, Struct, String>() {
@Override
public Struct initialize() {
return new Struct(STRUCT_SCHEMA);
}
@Override
public Struct aggregate(
final String newValue,
final Struct aggregate
) {
if (newValue == null) {
return aggregate;
}
if (aggregate.getString(STATUS_1) == null) {
// First status for this key:
return aggregate
.put(STATUS_1, newValue);
}
final String lastStatus = aggregate.getString(STATUS_2);
if (lastStatus == null) {
// Second status for this key:
return aggregate
.put(STATUS_2, newValue);
}
// Third and subsequent status for this key:
return aggregate
.put(STATUS_1, lastStatus)
.put(STATUS_2, newValue);
}
@Override
public String map(final Struct aggregate) {
final String previousStatus = aggregate.getString(STATUS_1);
final String currentStatus = aggregate.getString(STATUS_2);
if (currentStatus == null) {
// Only have single status, i.e. first status for this key
// What to do? Probably want to do:
return previousStatus.equalsIgnoreCase("OUTSIDE")
? "LEAVING"
: "ENTERING";
}
// Two statuses ...
if (currentStatus.equals(previousStatus)) {
return "NO CHANGE";
}
return previousStatus.equalsIgnoreCase("OUTSIDE")
? "ENTERING"
: "LEAVING";
}
@Override
public Struct merge(final Struct agg1, final Struct agg2) {
throw new RuntimeException("Function does not support session windows");
}
};
}
}
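To see how the intermediate Struct evolves, here is a hypothetical driver that pushes a status sequence through aggregate() and map() by hand. This is only for illustration (in production ksqlDB manages the per-key state itself), and it assumes the GeoFenceUdaf sketch above:
Udaf<String, Struct, String> udaf = GeoFenceUdaf.calcGeoFenceStatus();
Struct agg = udaf.initialize();
for (String status : new String[] {"INSIDE", "INSIDE", "OUTSIDE"}) {
agg = udaf.aggregate(status, agg);
System.out.println(status + " -> " + udaf.map(agg));
}
// Prints: INSIDE -> ENTERING (the single-status case), INSIDE -> NO CHANGE, OUTSIDE -> LEAVING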

Concatenating ImmutableLists

I have a List<ImmutableList<T>>. I want to flatten it into a single ImmutableList<T> that is the concatenation of all the inner ImmutableLists. These lists can be very long, so I do not want this operation to copy all the elements. The number of ImmutableLists to flatten will be relatively small, so it is fine for lookup to be linear in the number of ImmutableLists. I would strongly prefer that the concatenation return an immutable collection, and I need it to return a List that supports random access.
Is there a way to do this in Guava?
There is Iterables.concat, but that returns an Iterable. Converting this back into an ImmutableList will again be linear in the total size of the lists, IIUC.
By design Guava does not allow you to define your own ImmutableList implementations (if it did, there'd be no way to enforce that it was immutable). Working around this by defining your own class in the com.google.common.collect package is a terrible idea. You break the promises of the Guava library and are running firmly in "undefined behavior" territory, for no benefit.
Looking at your requirements:
You need to concatenate the elements of n ImmutableList instances in sub-linear time.
You would like the result to also be immutable.
You need the result to implement List, and possibly be an ImmutableList.
As you know you can get the first two bullets with a call to Iterables.concat(), but if you need an O(1) random-access List this won't cut it. There isn't a standard List implementation (in Java or Guava) that is backed by a sequence of Lists, but it's straightforward to create one yourself:
/**
* A constant-time view into several {@link ImmutableList} instances, as if they were
* concatenated together. Since the backing lists are immutable, this class is also
* immutable and therefore thread-safe.
*
* More precisely, this class provides O(log n) element access where n is the number of
* input lists. Assuming the number of lists is small relative to the total number of
* elements, this is effectively constant time.
*/
public class MultiListView<E> extends AbstractList<E> implements RandomAccess {
private final ImmutableList<ImmutableList<E>> elements;
private final int size;
private final int[] startIndexes;
private MultiListView(Iterable<ImmutableList<E>> lists) {
this.elements = ImmutableList.copyOf(lists);
startIndexes = new int[elements.size()];
int currentSize = 0;
for (int i = 0; i < elements.size(); i++) {
List<E> ls = elements.get(i);
startIndexes[i] = currentSize; // start offset of list i within the concatenated view
currentSize += ls.size();
}
size = currentSize;
}
@Override
public E get(int index) {
checkElementIndex(index, size);
int location = Arrays.binarySearch(startIndexes, index);
if (location >= 0) {
// Exact hit: index is the first element of the list at 'location'
return elements.get(location).get(0);
}
// Otherwise binarySearch returned (-insertionPoint - 1); the containing
// list is the one just before the insertion point
location = (~location) - 1;
return elements.get(location).get(index - startIndexes[location]);
}
@Override
public int size() {
return size;
}
// The default iterator returned by AbstractList.iterator() calls .get()
// which is likely slower than just concatenating the backing lists' iterators
@Override
public Iterator<E> iterator() {
return Iterables.concat(elements).iterator();
}
public static <E> MultiListView<E> of(Iterable<ImmutableList<E>> lists) {
return new MultiListView<>(lists);
}
@SafeVarargs
public static <E> MultiListView<E> of(ImmutableList<E>... lists) {
return of(Arrays.asList(lists));
}
}
This class is immutable even though it doesn't extend ImmutableList or ImmutableCollection, so there's no need for it to actually extend ImmutableList.
As to whether such a class should be provided by Guava; you can make your case in the associated issue, but the reason this doesn't already exist is that surprisingly few users actually need it. Be sure there isn't a reasonable way to solve your problem with an Iterable before using MultiListView.
Firstly, @dimo414's answer is right on the mark - with a clean wrapper view implementation and advice.
Still, I would like to emphasise that since Java 8, you probably just want to do:
listOfList.stream()
.flatMap(ImmutableList::stream)
.collect(ImmutableList.toImmutableList());
The Guava issue has since been closed as working-as-intended, with the remark:
We are more down on lazy view collections than we used to be (especially now that Stream exists) (...)
At least, profile your own use case before trying the view-collection approach.
Under the hood, what effectively happens with streams is that a new backing array is populated with references to the elements; the elements themselves are not deeply copied. So very few objects are created (low GC cost), and linear copies from backing array to backing array usually run faster than you might expect, even with large inner lists (they play well with CPU cache prefetching).
Depending on how much you do with the result, the stream version might work out faster than the wrapper version, which adds extra indirection every time you access it.
Here is a slightly more readable version of dimo414's implementation, which also filters out empty inner lists so that the binary search never lands on a zero-length list:
public class ImmutableMultiListView<E> extends AbstractList<E> implements RandomAccess {
private final ImmutableList<ImmutableList<E>> listOfLists;
private final int[] startIndexes;
private final int size;
private ImmutableMultiListView(List<ImmutableList<E>> originalListOfLists) {
this.listOfLists =
originalListOfLists.stream().filter(l -> !l.isEmpty()).collect(toImmutableList());
startIndexes = new int[listOfLists.size()];
int sumSize = 0;
for (int i = 0; i < listOfLists.size(); i++) {
List<E> list = listOfLists.get(i);
sumSize += list.size();
if (i < startIndexes.length - 1) {
startIndexes[i + 1] = sumSize;
}
}
this.size = sumSize;
}
@Override
public E get(int index) {
checkElementIndex(index, size);
int location = Arrays.binarySearch(startIndexes, index);
if (location >= 0) {
return listOfLists.get(location).get(0);
} else {
// See Arrays#binarySearch Javadoc:
int insertionPoint = -location - 1;
int listIndex = insertionPoint - 1;
return listOfLists.get(listIndex).get(index - startIndexes[listIndex]);
}
}
@Override
public int size() {
return size;
}
// AbstractList.iterator() calls .get(), which is slower than just concatenating
// the backing lists' iterators
@Override
public Iterator<E> iterator() {
return Iterables.concat(listOfLists).iterator();
}
public static <E> ImmutableMultiListView<E> of(List<ImmutableList<E>> lists) {
return new ImmutableMultiListView<>(lists);
}
}
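For illustration, a hypothetical usage of the class above (the expected values follow from the startIndexes layout):
ImmutableList<Integer> a = ImmutableList.of(1, 2, 3);
ImmutableList<Integer> b = ImmutableList.of(4, 5);
List<Integer> view = ImmutableMultiListView.of(ImmutableList.of(a, b));
System.out.println(view.size()); // 5
System.out.println(view.get(3)); // 4 - binary search hits startIndexes[1] == 3, i.e. the second list, element 0
System.out.println(view.get(1)); // 2 - the insertion-point math lands in the first list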
Not sure if it is possible with Guava classes alone, but it does not seem difficult to implement; how about something like the following:
package com.google.common.collect;
import java.util.List;
public class ConcatenatedList<T> extends ImmutableList<T> {
private final List<ImmutableList<T>> underlyingLists;
public ConcatenatedList(List<ImmutableList<T>> underlyingLists) {
this.underlyingLists = underlyingLists;
}
@Override
public T get(int index) {
for (ImmutableList<T> list : underlyingLists) {
if (index < list.size()) return list.get(index);
index -= list.size();
}
throw new IndexOutOfBoundsException();
}
@Override
boolean isPartialView() {
for (ImmutableList<T> list : underlyingLists) {
if (list.isPartialView()) return true;
}
return false;
}
@Override
public int size() {
int result = 0;
for (ImmutableList<T> list : underlyingLists) {
result += list.size();
}
return result;
}
}
Note the package declaration: it needs to be like that to access ImmutableList's package-private constructor. Be aware that this implementation might break with a future version of Guava, since the constructor is not part of the public API. Also, as mentioned in the javadoc of ImmutableList and in the comments, this class was not intended to be subclassed by the original library author. However, there is no good reason for not using it in an application you control, and it has the additional benefit of expressing immutability in the type signature, compared to the MultiListView suggested in the other answer.

Case-insensitive indexing with Hibernate-Search?

Is there a simple way to make Hibernate Search index all its values in lowercase, instead of the default mixed case?
I'm using the @Field annotation, but I can't seem to find an application-level setting for this.
Fool that I am! The StandardAnalyzer class already indexes in lowercase. It's just a matter of lowercasing the search terms too; I was assuming the query would do that.
However, if a different analyzer were to be used application-wide, it can be set using the property hibernate.search.analyzer.
Lowercasing, term splitting, removing common terms and many more advanced language processing functions are applied by the Analyzer.
Usually you should process user input meant to match indexed strings with the same Analyzer used at indexing time; configuring hibernate.search.analyzer sets the default (global) Analyzer, but you can customize it per index, per entity type, per field, and even on different entity instances.
It is for example useful to have language-specific analysis, processing Chinese descriptions with Chinese-specific routines and Italian descriptions with Italian tokenizers.
The default analyzer is fine for most use cases; it lowercases terms and splits them on whitespace.
Consider as well that when using the Lucene QueryParser, the API requires you to pass the appropriate Analyzer.
When using the Hibernate Search QueryBuilder, it attempts to apply the correct Analyzer on each field; see also http://docs.jboss.org/hibernate/search/4.1/reference/en-US/html_single/#search-query-querydsl .
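For instance, a custom lowercasing analyzer can be declared and applied per field with annotations. This is a sketch for Hibernate Search 4.x; the exact tokenizer/filter factory imports vary with the Lucene/Solr version you are on, so treat those class names as assumptions to verify against your dependencies:
@Entity
@Indexed
@AnalyzerDef(name = "lowercase",
tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
filters = { @TokenFilterDef(factory = LowerCaseFilterFactory.class) })
public class Book {
@Id
@GeneratedValue
private Long id;
// All values of this field are tokenized and lowercased at indexing time
@Field(analyzer = @Analyzer(definition = "lowercase"))
private String title;
}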
There are multiple ways to make sorting case-insensitive on a string field.
1. The first way is to add the @Fields annotation to the field/property on the entity, like:
@Fields({@Field(index=Index.YES,analyze=Analyze.YES,store=Store.YES),
@Field(index=Index.YES,name = "nameSort",analyzer = @Analyzer(impl=KeywordAnalyzer.class), store = Store.YES)})
private String name;
Suppose you have a name property with a custom analyzer and you want to sort on it; that is not possible directly, but you can add a new nameSort field to the index and apply the sort to that field.
You should use the KeywordAnalyzer class here because it does not tokenize the field; combine it with a lowercase filter factory on the field.
2. The second way is to implement your own comparison class for sorting, like:
@Override
public FieldComparator newComparator(String field, int numHits, int sortPos, boolean reversed) throws IOException {
return new StringValComparator(numHits, field);
}
Create a class that extends FieldComparatorSource and implement the method above.
Then create a class named StringValComparator that extends FieldComparator and implements the following methods:
class StringValComparator extends FieldComparator {
private String[] values;
private String[] currentReaderValues;
private final String field;
private String bottom;
StringValComparator(int numHits, String field) {
values = new String[numHits];
this.field = field;
}
@Override
public int compare(int slot1, int slot2) {
final String val1 = values[slot1];
final String val2 = values[slot2];
if (val1 == null) {
if (val2 == null) {
return 0;
}
return -1;
} else if (val2 == null) {
return 1;
}
return val1.toLowerCase().compareTo(val2.toLowerCase());
}
@Override
public int compareBottom(int doc) {
final String val2 = currentReaderValues[doc];
if (bottom == null) {
if (val2 == null) {
return 0;
}
return -1;
} else if (val2 == null) {
return 1;
}
return bottom.toLowerCase().compareTo(val2.toLowerCase());
}
@Override
public void copy(int slot, int doc) {
values[slot] = currentReaderValues[doc];
}
@Override
public void setNextReader(IndexReader reader, int docBase) throws IOException {
currentReaderValues = FieldCache.DEFAULT.getStrings(reader, field);
}
@Override
public void setBottom(final int bottom) {
this.bottom = values[bottom];
}
@Override
public String value(int slot) {
return values[slot];
}
}
Apply the sort to the field like this (StringCaseInsensitiveComparator being the FieldComparatorSource subclass created above):
new SortField("name", new StringCaseInsensitiveComparator(), true);

Balancing String based Binary Search Tree (For Spellchecking)

Update: I can't get the balancing to work, because I cannot get doAVLBalance to recognize the member functions isBalanced(), isRightHeavy(), and isLeftHeavy(), and I don't know why! I tried Sash's example (the third answer) exactly, but I got "declaration is incompatible" and couldn't fix that... so I tried doing it my way, and it tells me those member functions don't exist, when they clearly do.
"Error: class "IntBinaryTree::TreeNode" has no member "isRightHeavy"."
I'm stuck after trying for the last 4 hours :(. Updated code below; help would be much appreciated!!
I'm creating a string-based binary search tree and need to make it a balanced tree. How do I do this? Help please!! Thanks in advance!
BinarySearchTree.cpp:
bool IntBinaryTree::leftRotation(TreeNode *root)
{
//TreeNode *nodePtr = root; // Can use nodePtr instead of root, better?
// root, nodePtr, this->?
if(NULL == root)
{return NULL;}
TreeNode *rightOfTheRoot = root->right;
root->right = rightOfTheRoot->left;
rightOfTheRoot->left = root;
return rightOfTheRoot;
}
bool IntBinaryTree::rightRotation(TreeNode *root)
{
if(NULL == root)
{return NULL;}
TreeNode *leftOfTheRoot = root->left;
root->left = leftOfTheRoot->right;
leftOfTheRoot->right = root;
return leftOfTheRoot;
}
bool IntBinaryTree::doAVLBalance(TreeNode *root)
{
if(NULL==root)
{return NULL;}
else if(root->isBalanced()) // Don't have "isBalanced"
{return root;}
root->left = doAVLBalance(root->left);
root->right = doAVLBalance(root->right);
getDepth(root); //Don't have this function yet
if(root->isRightHeavy()) // Don't have "isRightHeavey"
{
if(root->right->isLeftheavey())
{
root->right = rightRotation(root->right);
}
root = leftRotation(root);
}
else if(root->isLeftheavey()) // Don't have "isLeftHeavey"
{
if(root->left->isRightHeavey())
{
root->left = leftRotation(root->left);
}
root = rightRotation(root);
}
return root;
}
void IntBinaryTree::insert(TreeNode *&nodePtr, TreeNode *&newNode)
{
if(nodePtr == NULL)
nodePtr = newNode; //Insert node
else if(newNode->value < nodePtr->value)
insert(nodePtr->left, newNode); //Search left branch
else
insert(nodePtr->right, newNode); //search right branch
}
//
// Displays the number of nodes in the Tree
int IntBinaryTree::numberNodes(TreeNode *root)
{
TreeNode *nodePtr = root;
if(root == NULL)
return 0;
int count = 1; // our actual node
if(nodePtr->left !=NULL)
{ count += numberNodes(nodePtr->left);
}
if(nodePtr->right != NULL)
{
count += numberNodes(nodePtr->right);
}
return count;
}
// Insert member function
void IntBinaryTree::insertNode(string num)
{
TreeNode *newNode; // Pointer to a new node.
// Create a new node and store num in it.
newNode = new TreeNode;
newNode->value = num;
newNode->left = newNode->right = NULL;
//Insert the node.
insert(root, newNode);
}
// More member functions, etc.
BinarySearchTree.h:
class IntBinaryTree
{
private:
struct TreeNode
{
string value; // Value in the node
TreeNode *left; // Pointer to left child node
TreeNode *right; // Pointer to right child node
};
//Private Members Functions
// Removed for shortness
void displayInOrder(TreeNode *) const;
public:
TreeNode *root;
//Constructor
IntBinaryTree()
{ root = NULL; }
//Destructor
~IntBinaryTree()
{ destroySubTree(root); }
// Binary tree Operations
void insertNode(string);
// Removed for shortness
int numberNodes(TreeNode *root);
//int balancedTree(string, int, int); // TreeBalanced
bool leftRotation(TreeNode *root);
bool rightRotation(TreeNode *root);
bool doAVLBalance(TreeNode *root); // void doAVLBalance();
bool isAVLBalanced();
int calculateAndGetAVLBalanceFactor(TreeNode *root);
int getAVLBalanceFactor()
{
TreeNode *nodePtr = root; // Okay to do this? instead of just
// left->mDepth
// right->mDepth
int leftTreeDepth = (left !=NULL) ? nodePtr->left->Depth : -1;
int rightTreeDepth = (right != NULL) ? nodePtr->right->Depth : -1;
return(leftTreeDepth - rightTreeDepth);
}
bool isRightheavey() { return (getAVLBalanceFactor() <= -2); }
bool isLeftheavey() { return (getAVLBalanceFactor() >= 2); }
bool isBalanced()
{
int balanceFactor = getAVLBalanceFactor();
return (balanceFactor >= -1 && balanceFactor <= 1);
}
int getDepth(TreeNode *root); // getDepth
void displayInOrder() const
{ displayInOrder(root); }
// Removed for shortness
};
Programmers use AVL tree concepts to balance binary trees. It is quite simple; more information can be found online, e.g. in the Wikipedia article on AVL trees.
Below is the sample code which does tree balance using AVL algorithm.
Node *BinarySearchTree::leftRotation(Node *root)
{
if(NULL == root)
{
return NULL;
}
Node *rightOfTheRoot = root->mRight;
root->mRight = rightOfTheRoot->mLeft;
rightOfTheRoot->mLeft = root;
return rightOfTheRoot;
}
Node *BinarySearchTree::rightRotation(Node *root)
{
if(NULL == root)
{
return NULL;
}
Node *leftOfTheRoot = root->mLeft;
root->mLeft = leftOfTheRoot->mRight;
leftOfTheRoot->mRight = root;
return leftOfTheRoot;
}
Node *BinarySearchTree::doAVLBalance(Node *root)
{
if(NULL == root)
{
return NULL;
}
else if(root->isBalanced())
{
return root;
}
root->mLeft = doAVLBalance(root->mLeft);
root->mRight = doAVLBalance(root->mRight);
getDepth(root);
if(root->isRightHeavy())
{
if(root->mRight->isLeftHeavy())
{
root->mRight = rightRotation(root->mRight);
}
root = leftRotation(root);
}
else if(root->isLeftHeavy())
{
if(root->mLeft->isRightHeavy())
{
root->mLeft = leftRotation(root->mLeft);
}
root = rightRotation(root);
}
return root;
}
Class Definition
class BinarySearchTree
{
public:
// .. lots of methods
Node *getRoot();
int getDepth(Node *root);
bool isAVLBalanced();
int calculateAndGetAVLBalanceFactor(Node *root);
void doAVLBalance();
private:
Node *mRoot;
};
class Node
{
public:
int mData;
Node *mLeft;
Node *mRight;
bool mHasVisited;
int mDepth;
public:
Node(int data)
: mData(data),
mLeft(NULL),
mRight(NULL),
mHasVisited(false),
mDepth(0)
{
}
int getData() { return mData; }
void setData(int data) { mData = data; }
void setRight(Node *right) { mRight = right;}
void setLeft(Node *left) { mLeft = left; }
Node * getRight() { return mRight; }
Node * getLeft() { return mLeft; }
bool hasLeft() { return (mLeft != NULL); }
bool hasRight() { return (mRight != NULL); }
bool isVisited() { return (mHasVisited == true); }
int getAVLBalanceFactor()
{
int leftTreeDepth = (mLeft != NULL) ? mLeft->mDepth : -1;
int rightTreeDepth = (mRight != NULL) ? mRight->mDepth : -1;
return(leftTreeDepth - rightTreeDepth);
}
bool isRightHeavy() { return (getAVLBalanceFactor() <= -2); }
bool isLeftHeavy() { return (getAVLBalanceFactor() >= 2); }
bool isBalanced()
{
int balanceFactor = getAVLBalanceFactor();
return (balanceFactor >= -1 && balanceFactor <= 1);
}
};
There are many ways to do this, but I'd suggest that you actually not do this at all. If you want to store a BST of strings, there are much better options:
Use a prewritten binary search tree class. The C++ std::set class offers the same time guarantees as a balanced binary search tree and is often implemented as such. It's substantially easier to use than rolling your own BST.
Use a trie instead. The trie data structure is simpler and more efficient than a BST of strings, requires no balancing at all, and is faster than a BST.
If you really must write your own balanced BST, you have many options. Most BST implementations that use balancing are extremely complex and are not for the faint of heart. I'd suggest implementing either a treap or a splay tree, which are two balanced BST structures that are rather simple to implement. They're both more complex than the code you have above and I can't in this short space provide an implementation, but a Wikipedia search for these structures should give you plenty of advice on how to proceed.
Hope this helps!
Unfortunately, we programmers are literal beasts.
make it a "Balanced" tree.
"Balanced" is context dependent. The introductory data structures classes typically refer to a tree being "balanced" when the difference between the node of greatest depth and the node of least depth is minimized. However, as mentioned by Sir Templatetypedef, a splay tree is considered a balancing tree. This is because it can balance trees rather well in cases that few nodes accessed together at one time frequently. This is because it takes less node traversals to get at the data in a splay tree than a conventional binary tree in these cases. On the other hand, its worst-case performance on an access-by-access basis can be as bad as a linked list.
Speaking of linked lists...
Because otherwise without the "Balancing" it's the same as a linked-list I read and defeats the purpose.
It can be as bad, but for randomized inserts it isn't. If you insert already-sorted data, most binary search tree implementations will store the data like a bloated, ordered linked list. That's because you're continually building up one side of the tree. (Imagine inserting 1, 2, 3, 4, 5, 6, 7, etc. into a binary tree. Try it on paper and see what happens.)
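To see the degenerate shape for yourself, here is a tiny self-contained demo (in Java for brevity; the names are mine, but the effect is identical in C++):
class Node {
int key;
Node left, right;
Node(int k) { key = k; }
}
class DegenerateBstDemo {
static Node insert(Node n, int key) {
if (n == null) return new Node(key);
if (key < n.key) n.left = insert(n.left, key);
else n.right = insert(n.right, key);
return n;
}
static int depth(Node n) {
return n == null ? 0 : 1 + Math.max(depth(n.left), depth(n.right));
}
public static void main(String[] args) {
Node root = null;
for (int k = 1; k <= 7; k++) root = insert(root, k); // sorted inserts
System.out.println(depth(root)); // prints 7: every node hangs off the right child
}
}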
If you have to balance in a theoretical, worst-case-guaranteed sense, I recommend looking up red-black trees. (Google it; the second link is pretty good.)
If you have to balance it in a reasonable way for this particular scenario, I'd go with integer keys and a decent hash function - that way the balancing happens probabilistically, without any extra code. That is to say, make your comparison function look like hash(strA) < hash(strB) instead of what you've got now. (For a quick but effective hash for this case, look up FNV hashing; a sketch follows after the next paragraph.) You can worry about the details of implementation efficiency if you want to. (For example, you don't have to recompute both hashes on every single comparison, since one of the strings never changes.)
If you can get away with it, I strongly recommend the latter if you're in a crunch for time and want something fast. Otherwise, red-black trees are worthwhile since they're extremely useful in practice when you need to roll your own height-balanced binary trees.
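Here is a quick sketch of the hash-comparison idea, using 32-bit FNV-1a (in Java for brevity; the helper names are mine - adapt the types for C++, and note this variant hashes chars rather than the byte-wise original):
final class Fnv {
// 32-bit FNV-1a over a string's characters
static int hash(String s) {
int h = 0x811C9DC5; // FNV offset basis
for (int i = 0; i < s.length(); i++) {
h ^= s.charAt(i);
h *= 0x01000193; // FNV prime
}
return h;
}
// Probabilistically balancing comparison for BST inserts
static boolean less(String a, String b) {
return hash(a) < hash(b);
}
}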
Finally, addressing your code above, see the comments in the code below:
int IntBinaryTree::numberNodes(TreeNode *root)
{
if(root = NULL) // You're using '=' where you want '==' -- common mistake.
// Consider getting used to putting the value first -- that is,
// "NULL == root". That way if you make that mistake again, the
// compiler will error in many cases.
return 0;
/*
if(TreeNode.left=null && TreeNode.right==null) // Meant to use '==' again.
{ return 1; }
return numberNodes(node.left) + numberNodes(node.right);
*/
int count = 1; // our actual node
if (left != NULL)
{
// You likely meant 'root.left' on the next line, not 'TreeNode.left'.
count += numberNodes(TreeNode.left);
// That's probably the line that's giving you the error.
}
if (right != NULL)
{
count += numberNodes(root.right);
}
return count;
}