Custom Colour IDs for LookAndFeel?

Hello everyone,

Is it possible to create custom Colour IDs for use with custom GUI components, that can be assigned using the standard lookAndFeel.setColour() method?

In my current project, I’ve created a customLookAndFeel class that inherits from LookAndFeel_V4, so that I can override some of the slider drawing functions. As it is now, I can easily call something like customLAFinstance.setColour(Slider::ColourIds::rotarySliderFillColourId, desiredColour) to globally set colours of various kinds of components, which is my desired behavior. Is it possible to design a custom GUI component and set the colours using this same method? I’m assuming that my customLookAndFeel class would need to define the new ColourIDs so it knows how to save/look up the colours, but how exactly would this be implemented?

As you've seen, colour IDs are not defined in the LookAndFeel classes, but in the widget classes. They are just ints: LookAndFeel keeps the ID/Colour pairs in a sorted set, and when you call setColour() it modifies the colour if that ID is already registered, otherwise it adds a new entry. Your only concern is that your IDs don't clash with JUCE's. It seems all of JUCE's own IDs start at 2^24 (0x1000000), so you can just start yours at 0.
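For example, a custom widget can declare its own IDs and paint with findColour(). A minimal sketch, with the component and ID names made up for illustration:

class MeterComponent : public juce::Component
{
public:
    enum ColourIds
    {
        meterFillColourId    = 0,   // safe: JUCE's own IDs start at 0x1000000
        meterOutlineColourId = 1
    };

    void paint (juce::Graphics& g) override
    {
        // findColour() checks colours set on this component first, then
        // falls back to whatever LookAndFeel is currently in effect.
        g.fillAll (findColour (meterFillColourId));
        g.setColour (findColour (meterOutlineColourId));
        g.drawRect (getLocalBounds());
    }
};

With that in place, customLAFinstance.setColour (MeterComponent::meterFillColourId, desiredColour) works just like it does for the built-in widgets.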

@kamedin thank you! I think I got it to work. Because these are colours I'm using globally throughout my GUI, I created an enum in my subclass of LookAndFeel_V4.

Just to double-check the colour IDs themselves, will doing this work:

enum uiColourIds
{
	backgroundPanelColourId     = uint32_t(0),
	insetPanelColourId          = uint32_t(1)
};

I just want to make sure I won't mess up any of JUCE's internal/default ColourIds.

Thanks!

Yep, that should work. Of course it could be just

enum uiColourIds
{
	backgroundPanelColourId,
	insetPanelColourId
};

(enumerators default to int, starting at 0 and incrementing by 1).

I would add, as an opinion, that if those colours are static (that is, if you're not using different instances of the same LookAndFeel as colour presets), you could just have the Colours as static/global consts and avoid setColour()/findColour() altogether.
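Something like this, say (just a sketch; the names are made up):

namespace uiColours
{
    const juce::Colour backgroundPanel { 0xff262626 };
    const juce::Colour insetPanel      { 0xff303030 };
}

// ...then in paint(): g.fillAll (uiColours::backgroundPanel);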

Sorry if this is a stupidly basic C++ question, but delving into the JUCE API, it looks like the Slider ColourIds are declared as:

enum ColourIds
{
    backgroundColourId          = 0x1001200,  /**< A colour to use to fill the slider's background. */
    thumbColourId               = 0x1001300,  /**< The colour to draw the thumb with. It's up to the look
                                                   and feel class how this is used. */
    trackColourId               = 0x1001310,  /**< The colour to draw the groove that the thumb moves along. */
                                // etc etc 
};

Aren't these hexadecimal numbers? Does it matter whether my custom IDs are in the same numbering system, or is this not actually a different numbering system from declaring

enum myCustomColourIds
{
    firstColor  = 1,
    secondColor = 2
               // because aren't these base-10 ints and not hexadecimal?
};

which is why I initially set up my custom enum the way I did, with the casts to uint32_t(n)… but if I'm totally wrong, please let me know. Thanks in advance for your help :slightly_smiling_face:

Oh, the base is just formatting. The type of the literals is still int. You can write your ints in hex starting with 0x, or in octal starting with 0. The same goes for floats: you can write 0.0625f, or 6.25e-2f, or 0x1p-4f (hexadecimal floating-point notation), and they're all the same float.
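If you want to convince yourself, these assertions all hold (plain standard C++, except the hex float literal, which needs C++17):

static_assert (16 == 0x10 && 16 == 020, "same int in decimal, hex and octal");
static_assert (0.0625f == 6.25e-2f, "same float, two notations");
static_assert (0.0625f == 0x1p-4f,  "same float again, in hex (C++17)");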

Ohhh so the underlying datatype is the same, and the base is basically kind of like a “unit” hop size between possible stored values, so to speak?

It's literally just code formatting. You use whichever notation fits your case better. If you're working with powers of two, it can make sense to use hex (0x1p-8f looks more like 2^-8 than 0.00390625f does). On common architectures, ints will be 32-bit two's complement and floats will be IEEE 754 singles, all binary of course. In VS, and I think in Xcode too, if you hover over a hex literal you'll see its decimal value.

There are some suffixes that actually change the type, like f making a floating-point literal a float instead of a double, or u making an integer literal unsigned.
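You can check the types at compile time if you're curious (this sketch needs C++17):

#include <type_traits>

static_assert (std::is_same_v<decltype (1),    int>);
static_assert (std::is_same_v<decltype (1u),   unsigned int>);
static_assert (std::is_same_v<decltype (1.0),  double>);
static_assert (std::is_same_v<decltype (1.0f), float>);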

I think I understand – thank you for explaining!

Exposing such complexity to a beginner in C++ is just cruel :joy: (kidding)

…but it’s so pretty! #not :laughing:
