ValueTree feature request/improvement

I’d like ValueTree to have a getHashCode() method that returns an integer (or a SHA256) computed from every node name and every attribute name & value.
I can do this myself with a recursive function in O(N), but ideally it could be done in O(1), if the hash was maintained by every modification method.

The idea is to detect changes to a huge ValueTree without having to parse all its nodes.
Doing the work in a listener doesn’t cut it, since the framework still does O(number of parents) work just to dispatch to the listener.
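For reference, here is roughly what the O(N) recursive version looks like. This is a self-contained sketch using a toy Node type rather than juce::ValueTree (all names are illustrative), and the mixing function is arbitrary:

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for a tree node (illustrative names, not JUCE's ValueTree API).
struct Node
{
    std::string name;
    std::vector<std::pair<std::string, std::string>> attributes; // name/value
    std::vector<Node> children;
};

// O(N) recursive hash over every node name and attribute name & value.
std::size_t computeTreeHash (const Node& node)
{
    std::hash<std::string> h;
    std::size_t result = h (node.name);

    for (const auto& attr : node.attributes)
        result = (result * 31 + h (attr.first)) * 31 + h (attr.second);

    for (const auto& child : node.children)
        result = result * 31 + computeTreeHash (child);

    return result;
}
```

Any structural or attribute change shows up in this hash, but computing it always costs a full traversal, which is exactly what I’d like to avoid.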

I’ll try to explain what I mean with some pseudo code:

[code]class HashMaster
{
    typedef int Hash; // Could be SHA256

    Hash* nodeName; // Using a pointer explicitly
    // You could add Hash* attributeName / Hash* attributeValue, depending on the desired granularity

    struct Listener
    {
        virtual ~Listener() = default;
        virtual void hashValueChanged (Hash* value, ValueTree& treeChanged) = 0;
    };
    ListenerList<Listener>* listeners;
    bool root;

public:
    void setHash (HashMaster& parent)   { delete nodeName; nodeName = parent.nodeName; delete listeners; listeners = parent.listeners; root = false; }
    void unsetHash()                    { nodeName = nullptr; listeners = nullptr; } // Explicitly not deleted here
    void callListeners (ValueTree& treeChanged)  { if (listeners != nullptr) listeners->call ([&] (Listener& l) { l.hashValueChanged (nodeName, treeChanged); }); }

    HashMaster() : nodeName (new Hash (0)), listeners (new ListenerList<Listener>), root (true) {}
    ~HashMaster()  { if (root) { delete nodeName; delete listeners; } unsetHash(); }
};[/code]

[code]class ValueTree
{
    // Added a HashMaster here
    HashMaster hash;

    void appendChild (...)
    {
        child.hash.setHash (hash);
        modifyRootNodeNameHash();
        [...]
    }

    void removeChild (...)
    {
        [...]
        child.hash.unsetHash();
        modifyRootNodeNameHash();
    }

    void setName (...)
    {
        [...]
        modifyRootNodeNameHash();
    }

    void modifyRootNodeNameHash()
    {
        // The idea is to directly modify the root's version of the hash.
        *hash.nodeName *= sharedObject.getName().hashCode() % 101; // Whatever hash function here
        hash.callListeners (*this);
    }
};[/code]
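As an aside on the `*=` mix above: a multiplicative update can’t be cheaply undone on removal, and a factor of zero destroys the state for good. A minimal alternative (just a sketch, not part of the proposal) is an XOR-based rolling hash, which keeps every update, including removals, at O(1):

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Order-independent rolling hash: XOR-ing the same value a second time removes
// it again, so additions and removals are both O(1) updates to a root hash.
struct RollingHash
{
    std::uint64_t value = 0;

    void add (const std::string& s)     { value ^= std::hash<std::string>{} (s); }
    void remove (const std::string& s)  { value ^= std::hash<std::string>{} (s); } // same XOR undoes the add
};
```

The trade-off is that XOR is order-independent and cancels duplicate entries, so the granularity of what gets hashed has to be chosen accordingly.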

Using the same idea of sharing a pointer to a parent-owned object in every child tree, you could probably also move all the listeners into the root ValueTree (in its SharedObject), so that you don’t have to walk up the hierarchy each time a listener needs to be called. Listener invocation then becomes O(1) as well.
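A minimal sketch of that pointer-sharing idea, again using a toy node type rather than JUCE’s real internals (every name below is illustrative):

```cpp
#include <functional>
#include <memory>
#include <vector>

// Toy model of the proposal: every node shares a pointer to a single listener
// list owned by the root, so firing a change never walks up the tree.
struct Node
{
    using Listener     = std::function<void (Node&)>;
    using ListenerList = std::vector<Listener>;

    std::shared_ptr<ListenerList> listeners = std::make_shared<ListenerList>();
    std::vector<std::unique_ptr<Node>> children;

    // O(subtree size), but only paid once per attach, not per change.
    void adoptListeners (const std::shared_ptr<ListenerList>& rootList)
    {
        listeners = rootList;
        for (auto& c : children)
            c->adoptListeners (rootList);
    }

    void appendChild (std::unique_ptr<Node> child)
    {
        child->adoptListeners (listeners); // the whole subtree now shares this tree's list
        children.push_back (std::move (child));
    }

    // O(listener count): no walk up the hierarchy to find registered listeners.
    void notifyChanged()
    {
        for (auto& l : *listeners)
            l (*this);
    }
};
```

Re-parenting a subtree does need one walk to re-point the shared list, but that cost is paid once per attach, not on every change notification.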

Surely it’d be even easier to just write a non-intrusive class…?

E.g.

[code]class ValueTreeHash  : public ValueTree::Listener
{
public:
    ValueTreeHash (ValueTree& tree)
    {
        tree.addListener (this); // register for changes
    }

    const SHA256& getHash()
    {
        if (needsRefreshing)
        {
            // ..recalculate hash
            needsRefreshing = false;
        }

        return hash;
    }

    void valueTreeChanged()     { needsRefreshing = true; }

private:
    SHA256 hash;
    bool needsRefreshing = true;
};[/code]

Well, you haven’t read the post entirely. What you suggest is exactly what I’m doing now.
The issue is that it’s a very, very slow process on a huge ValueTree, because these “under-the-hood” operations happen on every change to any child node:

  • For any child that is changed, the current code walks up the tree to find a parent with a registered listener (an O(tree depth) operation).
  • Once the listener is called, I need to walk down the tree to recompute the hash over every child node name and value (an O(N) operation).

What I was proposing is to change the way the ValueTree “internals” work, so that ValueTree listeners are always registered on the root node (no matter how deep the hierarchy is).
Listener registration would then cost O(tree depth) (but since it happens rarely, that’s peanuts), and every listener callback would cost O(1) instead of O(tree depth).

This is done by keeping a pointer to the real listener-holder list, which children pick up when they are added to a ValueTree. No more O(tree depth) search for any ValueTree operation.
If you could make this accessible, I could even store a hash in that structure (or a derived version of it), so my initial query would be answered instantly instead of requiring the long full-tree parse I currently do.

Maybe I don’t understand what you mean, but I can’t really see the point of what you’re suggesting…

It sounds to me like the only difference would be that instead of having a chain of small listener lists to scan, you’d have one big list. That doesn’t really sound like it’d be a significant performance gain, unless you have really insanely deep nesting going on, with very few listeners.

And as well as requiring an extra pointer in each object, there’d be some significant work involved in managing all these pointers when nodes are removed or added, since they’d need to make sure that all their listeners are correctly removed from the top-level list and added to the new top-level list… and when a top-level node gets added to a parent, it’d need to migrate all of its listeners to its new parent, and then traverse the entire tree to inform all the other nodes that the top-level has changed… sorry, it all sounds like a can of worms!

That’s my case. I doubt people have that many listeners, so I initially assumed that’s also the general case (listener count << ValueTree node count).

Well, yes, there is additional work to manage this extra pointer (though mainly because of the way the structure is currently laid out), but no, you don’t need to traverse the entire tree when moving a ValueTree inside another one: precisely because it’s a pointer, you can find the top-level node of the other tree and update only its own pointer there.
I agree that moving the listeners would be more difficult, but again, if you only have a few listeners to move, it’s not going to take eons.
I haven’t found a single use case for moving a child ValueTree into another ValueTree, and it has certainly never happened to me that the child in question had a listener.

So, to sum up, the current code is built to perform well when you have a relatively shallow tree with a huge number of listeners, but not in the opposite case (a large tree with few listeners).
I’d be inclined to think the latter is more common.