So I was having a conversation with another member here about some work, and I wanted to take the time to showcase some of my work built on JUCE's framework. It is incredible and, honestly, it beats a lot of the frameworks out there.
So I wanted to give a snippet of what I'm doing in the audio world and what I've found out.
A lot of you know that sound is relative, not absolute. You can hear a sound being "emitted" from a wall, but it is actually being reflected at an angle from an offset source.
So I debated with my uncle almost two years ago: instead of trying to figure out how to make sounds for the ear, we should model sound the way it actually propagates through the air.
So I came up with a framework called TASS (Time-Aligned Spatial System).
I won't go into specifics as it's proprietary, but I'm willing to show a snippet of the code so you can see how it's put together. In short, it works by making time the non-negotiable factor for my audio physics simulator's output.
```cpp
#pragma once
#include "../Audio/AudioBuffer.h"
#include "../voxel/VoxelField.h"
#include "TASSConfig.h"
#include "TASSCore.h"
#include "TASSPhysics.h"
#include "TASSVirtualArray.h"
#include <atomic>
#include <memory>

namespace DimenSystem {

struct VoxelFieldSnapshot {
    const VoxelField *field = nullptr;
    int voxelCount = 0;
    float maxEnergy = 0.0f;
    float avgCoherence = 0.0f;
    float avgHeight = 0.0f;
    float avgDepth = 0.0f;
};

// Main TASS volumetric renderer
class TASSRenderer {
public:
    TASSRenderer();
    ~TASSRenderer() = default;

    // Initialize
    void initialize(double sampleRate, int blockSize);

    // Main rendering
    void renderVoxelField(const VoxelFieldSnapshot &snapshot, AudioBuffer &output,
                          int numSamples, bool fastMode = false);

    // Configuration
    void setConfig(const TASSConfig &config);
    TASSConfig getConfig() const { return config; }

    // Fast mode control (advisory from CPU guard, overridable)
    void setFastModeEnabled(bool enabled) { fastModeEnabled.store(enabled); }
    bool getFastModeEnabled() const { return fastModeEnabled.load(); }

    static void setOutputClampEnabled(bool enabled);
    static void setSoftClipEnabled(bool enabled);

    // Component access
    TASSCore &getCore() { return *core; }
    TASSPhysics &getPhysics() { return *physics; }
    TASSVirtualArray &getVirtualArray() { return *virtualArray; }

    // State
    bool isInitialized() const { return initialized; }

private:
    std::unique_ptr<TASSCore> core;
    std::unique_ptr<TASSPhysics> physics;
    std::unique_ptr<TASSVirtualArray> virtualArray;

    TASSConfig config;
    double sampleRate;
    int blockSize;
    bool initialized;
    std::atomic<bool> fastModeEnabled{false};
    static std::atomic<bool> outputClampEnabled;
    static std::atomic<bool> softClipEnabled;
    float groundingModulation = 1.0f;

    // Rendering pipeline
    void applyPhysics(VoxelField &field, float deltaTime);
    void renderToSpeakers(VoxelField &field, AudioBuffer &output, int numSamples);
    void applyGrounding(AudioBuffer &output, int numSamples);

    // Gravity anchor and spatial cue enhancements
    void preserveGravityAnchor(juce::AudioBuffer<float> &buffer);
    void enhanceRearCues(float azimuth, juce::AudioBuffer<float> &buffer);
    void enhanceElevationCues(float elevation, juce::AudioBuffer<float> &buffer);
    void enhanceDistanceCues(float distance, juce::AudioBuffer<float> &buffer);
    void addEarlyReflections(juce::AudioBuffer<float> &buffer, float elevation);

    void reset();
};

} // namespace DimenSystem
```
Now, I'm not saying people can't get clever here, but it is going to be very difficult to figure out the rest of the system without the entire code base, so I've taken a fair risk showing this.
In the end, the framework works without using HRTF or traditional CTC techniques, and the TASS renderer is, I'd say, competitive with other methods. It is a real product and I plan to show a demo of it soon, with an audio A/B comparison. But this is my gift to the beautiful folks at JUCE who got me into designing real stuff again.
The end result is a media player called DimenPlay for all platforms: Android, iOS, macOS and Windows.
DimenPlay on Android and macOS.
Then my SDK, showcasing how I tune my library for commercial use.
Cheers.


