In this thread: [DSP module discussion] Structure of audio plugins API, someone asked:
What’s the advantage of DF1/2 over TPT?
Let me summarize again what TPT and DF1/2 mean:

A digital filter can be implemented using a DF1/DF2 structure (or their transposed versions) when it is defined by its Z-domain transfer function, meaning a set of coefficients. For example, classic biquads have a transfer function like H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2). Knowing these coefficients (b0, b1, b2, a1, a2), the filtering can be done with the DF structures, and that’s how things work in the JUCE IIRFilter and dsp::IIR::Filter classes.
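To make the DF idea concrete, here is a minimal sketch of a Transposed Direct Form 2 biquad evaluating exactly that transfer function. This is not the actual JUCE implementation, and the struct and member names (`BiquadTDF2`, `processSample`, `s1`, `s2`) are just names I picked for the example:

```cpp
// Transposed Direct Form 2 biquad for
// H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2).
// Two state variables hold everything the filter remembers.
struct BiquadTDF2
{
    double b0 = 1.0, b1 = 0.0, b2 = 0.0, a1 = 0.0, a2 = 0.0;
    double s1 = 0.0, s2 = 0.0; // filter state

    double processSample (double x)
    {
        double y = b0 * x + s1;          // output tap
        s1 = b1 * x - a1 * y + s2;       // update first state
        s2 = b2 * x - a2 * y;            // update second state
        return y;
    }
};
```

Changing the five coefficients completely changes the filter; the structure itself never changes, which is exactly what “defined by a set of coefficients” means here.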

TPT means “topology-preserving transform”. Basically, it means that the filter is generated from an analog prototype, derived from the equations of an analog circuit (Kirchhoff’s laws), either used directly (see the DK method or Andy Simper’s papers) or reworked as a graphical block scheme (see Vadim Zavalishin’s book and Will Pirkle’s papers). Instead of finding an equivalent discrete transfer function with a set of coefficients like before, we try to keep the topology, the analog signal path if you want, mainly to preserve its properties in a time-varying context. To do so, we discretize only the integrators, for example using the bilinear transform and the TDF2 locally, while everything around them stays the same. That’s why there is also an additional step to get the final processing structure: solving the delay-free loops, which is easy as long as everything is kept linear.
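Here is what that looks like on the simplest possible case, a one-pole RC lowpass discretized TPT-style following Zavalishin’s derivation: the trapezoidal integrator is the only discretized element, and the delay-free feedback loop is solved algebraically before processing. This is a sketch, not any library’s code; `OnePoleTPT`, `setCutoff` and `processSample` are made-up names:

```cpp
#include <cmath>

// One-pole lowpass from an analog RC prototype, TPT-style:
// only the integrator is replaced (trapezoidal rule), and the
// zero-delay feedback loop is solved algebraically up front.
struct OnePoleTPT
{
    double G = 0.0; // g / (1 + g), with g = tan(pi * fc / fs)
    double s = 0.0; // trapezoidal integrator state

    void setCutoff (double fc, double fs)
    {
        const double pi = 3.14159265358979323846;
        double g = std::tan (pi * fc / fs); // bilinear transform prewarping
        G = g / (1.0 + g);                  // solved delay-free loop gain
    }

    double processSample (double x)
    {
        double v = G * (x - s); // integrator input, loop already solved
        double y = v + s;       // integrator output = lowpass output
        s = y + v;              // trapezoidal state update
        return y;
    }
};
```

Note that `setCutoff` can be called at any rate without the state blowing up, which is the time-varying behaviour the TPT approach is designed to preserve.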
Very often, filters designed with a DF structure alone are generated using a filter design method which sets the values of the discrete transfer function coefficients directly. Classic EQ biquads are made this way, using the so-called bilinear transform. My new IIR filter design classes calculate digital allpass coefficients or manipulate poles and zeroes in the digital domain, so everything there is done directly in the digital domain as well. Think also about regression-based methods for designing filters, where all the coefficients are fitted to match a given frequency response.
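As an example of that coefficient-design approach, here are the classic lowpass coefficient formulas from RBJ’s Audio EQ Cookbook, where the bilinear transform of the analog second-order prototype has been baked directly into five normalized numbers that any DF structure can consume (the function name `rbjLowpass` is mine, the formulas are the cookbook’s):

```cpp
#include <array>
#include <cmath>

// RBJ Audio EQ Cookbook lowpass coefficients, normalized by a0.
// Returns {b0, b1, b2, a1, a2} for a DF-style biquad.
std::array<double, 5> rbjLowpass (double fc, double q, double fs)
{
    const double pi    = 3.14159265358979323846;
    const double w0    = 2.0 * pi * fc / fs;
    const double alpha = std::sin (w0) / (2.0 * q);
    const double cw    = std::cos (w0);
    const double a0    = 1.0 + alpha;        // normalization factor

    return { (1.0 - cw) / (2.0 * a0),        // b0
             (1.0 - cw) / a0,                // b1
             (1.0 - cw) / (2.0 * a0),        // b2
             (-2.0 * cw) / a0,               // a1
             (1.0 - alpha) / a0 };           // a2
}
```

The design step and the processing step are completely decoupled here, which is exactly the freedom the next paragraph says TPT gives up.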
But using the TPT structure means we can’t do this: we only discretize the integrator itself, and everything around it is left unchanged. So this structure gives us less freedom in what we want to do. And if we want to fit a specific target frequency response, we need to make the changes in the analog block scheme instead of directly in the digital domain.
In short, the TPT structure is not the holy grail, it’s just the solution to a very specific problem: “how do I get the best processing behaviour for a digital filter derived from an analog prototype?”. For other problems (high-order filter design for example), you have to use something easier to use / maintain / customize, like standard filters simulated with DF structures (or others, but that’s another topic).
One source of confusion in all this is the so-called State Variable Filter (SVF) circuit, which happens to be the one used to generate the digital coefficients in the RBJ Audio EQ Cookbook, used in the IIR filter design classes in JUCE. That circuit is the simplest and most instructive example of how to use the TPT structure and how to get its advantages (fast modulation of the cutoff frequency). In TPT SVF code, most of the time the integrators are discretized with the bilinear transform / trapezoidal method and simulated with a DF structure, but again everything around them is different. And in the end, the frequency response in a time-invariant context is the same.
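A sketch of such a linear trapezoidal SVF, following the structure described in Andy Simper’s paper linked below (the struct and member names here are mine, and only the lowpass output is returned for brevity):

```cpp
#include <cmath>

// Linear trapezoidal State Variable Filter: two trapezoidal
// integrators with states ic1eq/ic2eq, delay-free loop solved
// linearly once per coefficient update.
struct SVF
{
    double a1 = 1.0, a2 = 0.0, a3 = 0.0; // loop-solution coefficients
    double k = 1.0;                      // damping, k = 1/Q
    double ic1eq = 0.0, ic2eq = 0.0;     // integrator states

    void setup (double fc, double q, double fs)
    {
        const double pi = 3.14159265358979323846;
        double g = std::tan (pi * fc / fs); // trapezoidal integrator gain
        k  = 1.0 / q;
        a1 = 1.0 / (1.0 + g * (g + k));     // solved delay-free loop
        a2 = g * a1;
        a3 = g * a2;
    }

    double processLowpass (double v0)
    {
        double v3 = v0 - ic2eq;
        double v1 = a1 * ic1eq + a2 * v3;          // bandpass node
        double v2 = ic2eq + a2 * ic1eq + a3 * v3;  // lowpass node
        ic1eq = 2.0 * v1 - ic1eq;                  // trapezoidal updates
        ic2eq = 2.0 * v2 - ic2eq;
        return v2; // band = v1, high = v0 - k*v1 - v2
    }
};
```

In a static, time-invariant setting this measures the same magnitude response as the equivalent RBJ biquad; the difference only shows up when `setup` is called while audio is running.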
That’s what you get with the new dsp::StateVariableFilter class.
Bibliography:
- Vadim Zavalishin’s book: https://www.nativeinstruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.1.1.pdf
- Will Pirkle’s articles: http://www.willpirkle.com/7062/
- About the DK method: https://ccrma.stanford.edu/~dtyeh/papers/pubs.html
- Andy Simper’s article: https://cytomic.com/files/dsp/SvfLinearTrapOptimised.pdf