[HIRING] Create a game-like GUI with a unique artistic aesthetic

I’m looking to hire someone to put a game-like GUI on my Max/MSP project using OpenGL.

There has been much deliberation, but the consensus seems to be to create this game interface as a separate app, which will receive communication from the main program via OSC. This will allow for flexibility in choosing which framework/language to write it in, as well as provide a path for eventually building it into a full standalone video game. It’s possible the best framework for the job would be Cinder, but openFrameworks, Nannou, JUCE or any other would also be options. The possibility of doing it natively in Jitter also still exists.

This project is destined to be a rhythm game not unlike Guitar Hero, except that you write the music you’re playing to as you go. Below is a video of the system in action as it currently stands. I start playing and when it likes what it hears it starts building a song around me in real-time. If it’s a little hard to differentiate what’s what audibly, that’s where this new visual interface will help.

There are essentially 3 things this first iteration needs to do:

  1. Like Guitar Hero, “targets” of some kind need to appear and disappear on the screen, denoting moments in the future to aim for when playing a note.
  2. Targets need to enter into “scoring animations” when users hit a note at the right time.
  3. Along with each scoring animation, a numeric score (e.g. +500) will need to pop up from the target and then disappear. A cumulative total of the scores will need to be kept track of in a permanent scoreboard off to the side, along with a few other metrics.

When these things should happen and all the numbers involved will be provided to you through OSC messages. You would only be responsible for the visuals (and creating an OSC/UDP receiver).

This GUI will need to convey certain information, but within those constraints I’d like to leave as many of the design decisions as possible up to you. This is because it’s important to me that whoever takes this on is able to bring their own unique creative vision to the aesthetic. For ease of conveying ideas I reference Guitar Hero, but it must be stated that that game is really not the sort of aesthetic I’m looking for. I’d prefer to steer clear of it looking like a cartoony child’s toy, the way Guitar Hero does. I’m looking for a piece of art to be made, something that could spark an emotion just by looking at it. Or to put it in a word, ideally the thing would look just plain gorgeous. Whether that’s a dark sort of gorgeous or a light one, or a mind-blowing psychedelic sort of gorgeous, the way everything will be pulsing on-screen with various rhythms will provide huge potential to make something that’s a real treat to look at, even sort of addicting and mesmerizing.

To give a better idea what I mean by that, I’ll delve into some more of the details.


A still image of an example animation from www.nannou.cc. Any of the example animations on their homepage would embody the sort of style I’m looking for. I recommend checking them out; they’re quite pretty.

Targets

  • Every time the user plays a note, many targets will appear representing every possible rhythmic interval to aim for.
  • Unlike Guitar Hero, I don’t think the targets should move down a fretboard until they reach the moment the player must act, or necessarily move at all, but rather fade in or otherwise alter/morph in some way over the appropriate interval.
  • In this way, the movement on-screen will be an overall movement of many targets appearing in a sort of “cloud”, running through their morphing animations, then disappearing. It will be this “cloud” that moves around the screen rather than any individual targets.
  • When a target is successfully hit it will need to re-enter its morphing animation. Therefore, as the cloud moves around the screen it will probably leave trails of targets that continue to be hit at regular intervals.
  • Cleanup messages will be delivered through OSC dictating when each target is no longer viable and can be removed from memory (and also from the screen, if it hasn’t been already).
  • The system has multiple tracks (synth track, drum track, etc.) which can be played simultaneously. Targets need to vary in some way based on their track. An obvious option would be in coloration, but unique shapes/effects/filters or anything else could work.
  • Notes that have already been recorded into tracks and are playing back (referred to as “recitation”) will also spawn targets in exactly the same manner as a live human player. Basically, users are competing with the “recitation” in their track for inclusion in the song as it gets written. Targets spawned from “recitation” will need to have the same track colors (or other distinguishing characteristics) as human playing, but somehow be more subdued/darker/transparent so the user’s actions will stand out by comparison.
  • Exactly where to place new targets on-screen will be something that will require some experimentation, simply to see what looks the best. It’s likely that we’ll want to have targets from the same track appear near each other, as well as targets with longer intervals appearing farther out (more along the edges of the “cloud” than those of smaller intervals).

Scoring Animations

  • When the user successfully hits a note at the right moment (the very end of a target’s morphing animation) the target will need to enter into a scoring animation to visually signify this. Perhaps a hit could cause the target to explode or flash, whereas it would just fizzle out otherwise.
  • Multiple targets can and will frequently be hit at once.
  • Scoring animations will need to vary in some way consistent with their track, just like targets, and also be more subdued for recitation (yes, recitation will be “scoring” as it goes along, just the same as a player).
  • Players can score on targets spawned by recitation and vice versa, as well as on other tracks, so scoring animations will be taking place on off-colored targets all the time.
  • Very often targets will be hit in groups of “rhythmic succession” that include any number of targets that have already been hit in the past. These groups will be conveyed through the OSC messages and will need to be designated in some way visually, whether that be by drawing lines between them, encircling them or whatever else you come up with.

Logistics

I really do see this project as a video game, even if it’s currently a bit lacking in the “video” department, and this interface should be treated as one from the ground up. Therefore, it should be structured like a game engine: a central main function that executes every frame, with a separate class file for each of the objects that appear on screen.
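To illustrate the structure described above, here is a minimal sketch in Python. All names (`Target`, `Game`, `frame`) are illustrative, not part of the spec; the real implementation would be in whichever framework is chosen, with each class in its own file.

```python
class Target:
    """One on-screen target; would live in its own file in the real project."""

    def __init__(self, track, interval_ms, timestamp_ms):
        self.track = track            # source track (-16..16, negative = player)
        self.interval_ms = interval_ms  # length of the morphing animation
        self.timestamp_ms = timestamp_ms
        self.age_ms = 0.0

    def update(self, dt_ms):
        # Advance this target's morphing animation by dt_ms milliseconds.
        self.age_ms += dt_ms

    def morph_progress(self):
        # 0.0 at spawn, 1.0 at the moment the player should act.
        return min(self.age_ms / self.interval_ms, 1.0)


class Game:
    """Central engine object: one update call per frame drives everything."""

    def __init__(self):
        self.targets = []

    def frame(self, dt_ms):
        # Called once per frame by the windowing framework.
        for t in self.targets:
            t.update(dt_ms)
        # Drawing of targets, scoring animations, and the scoreboard
        # would happen here.
```

In a real framework the `frame` call would be driven by the library’s own per-frame callback (e.g. `update`/`draw` in openFrameworks or Nannou).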

In addition to being very well commented, I would ask that you leave as many tweakable variables throughout your code as you can, so that I or anyone else can adjust the way it looks down the road. These variables might control things like the spacing between objects, colors, line thickness, etc., and should be declared in an obvious, organized section at the top, as globally as possible, or perhaps more ideally in a separate JSON reference file.
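A sketch of what the JSON-reference-file approach might look like, assuming Python for illustration. The specific variable names and defaults here are hypothetical; the point is that every visual constant has a default and can be overridden from a file without touching the code.

```python
import json

# Hypothetical tweakables with defaults; every visual constant lives here
# rather than being hard-coded, so the look can be adjusted later.
DEFAULTS = {
    "target_spacing_px": 40,
    "line_thickness_px": 2.0,
    "track_colors": ["#ff0055", "#00ccff"],  # one entry per track
    "score_popup_duration_ms": 600,
}


def load_tweakables(path="tweakables.json"):
    """Merge the JSON file over the defaults; missing keys fall back."""
    settings = dict(DEFAULTS)
    try:
        with open(path) as f:
            settings.update(json.load(f))
    except FileNotFoundError:
        pass  # no file yet: run with defaults
    return settings
```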

OSC

All the information you need will be conveyed to you via OSC messages. You can watch a sampling of the messages as they’re sent below.

The OSC messages are as follows:

amanuensis/wake/ i

  • An update to a variable called the wake, which is an integer denoting a number of milliseconds (see below).

amanuensis/tolerance/ i

  • An update to a variable called the tolerance, which is an integer denoting a number of milliseconds (see below).

amanuensis/played/ i i

  • Every time the user (or recitation) plays a note, this message will be sent.
  • The 1st argument is an integer denoting the source track. There are 16 possible tracks. Positive numbers are recitation, while negative are the user’s active playing. So -3 would be a user playing on track 3, while 5 would be the recording in track 5 playing back.
  • The 2nd argument is a unique millisecond timestamp for the note denoting the exact moment it was played.
  • These timestamps will need to be stored in an array or dictionary of some kind, because a small amount of calculation must be done with them before they’re used. Store the incoming timestamp, then subtract each other stored timestamp from it. The resulting series of numbers will be the morphing intervals for each of the targets that should spawn the moment this message comes in. Do not spawn targets for any interval longer than the number of milliseconds in the wake variable.
  • Consider the track that targets are associated with to be the one in this message (1st argument).
  • For each target that spawns, its track number and interval will also need to be stored along with its timestamp for later reference.
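The bookkeeping for /played messages described above can be sketched like this (Python for illustration; the names `WAKE_MS`, `played_timestamps`, and `targets` are assumptions, not part of the spec):

```python
WAKE_MS = 4000  # updated whenever an amanuensis/wake/ message arrives

played_timestamps = []  # every timestamp seen so far (pruned by /cleanup)
targets = []            # (track, interval_ms, timestamp_ms) per spawned target


def on_played(track, timestamp_ms):
    """Handle one amanuensis/played/ message: spawn one target per
    stored timestamp whose interval falls within the wake window."""
    # One candidate interval per previously stored timestamp...
    intervals = [timestamp_ms - earlier for earlier in played_timestamps]
    played_timestamps.append(timestamp_ms)
    # ...but only spawn targets for intervals within the wake.
    for interval in intervals:
        if 0 < interval <= WAKE_MS:
            targets.append((track, interval, timestamp_ms))
```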

amanuensis/hit/ i f l

  • Every time a player (or recitation) successfully hits a target or targets, this message will be sent.
  • The 1st argument is an integer denoting the track the scoring note took place on (not necessarily the track of the target(s)). Again, -16 through 16, negative for a player and positive for recitation. Consider the track that scoring animations are associated with to be the one in this 1st argument, not that of the target scored upon.
  • The 2nd argument is the interval of the successfully hit target(s).
  • The 3rd argument will be a list of any length, consisting of the timestamps of every target that was hit. If there is more than one, then these constitute a group of “rhythmic succession” as I mentioned in the Scoring Animations section above.
  • Since a single note (and therefore a single timestamp) will spawn multiple targets, target objects on-screen will need to be looked up first by their timestamp, then by their interval, when determining where to deploy a scoring animation. In the extremely rare case that two targets have both the same timestamp and interval, their track numbers will be different and can be used to uniquely identify them.
  • The interval (2nd argument) may not be exact. When looking for specific targets, you will need to assume +/- a few milliseconds when comparing against the value of stored intervals. Specifically, this few milliseconds will be equal to the variable tolerance, so the comparison will be abs(incoming_interval - stored_interval) <= tolerance.
  • Each target that scores will need to have a numeric value (e.g. +500) pop up from it and then disappear. This value will be equal to its interval.
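The lookup described above (timestamp first, then interval within +/- tolerance) might look like this in Python. The dict-based target records and `TOLERANCE_MS` name are assumptions for the sake of the sketch:

```python
TOLERANCE_MS = 10  # updated whenever an amanuensis/tolerance/ message arrives


def find_hit_targets(targets, hit_interval, hit_timestamps):
    """Locate the on-screen targets an amanuensis/hit/ message refers to.

    targets: list of dicts with keys "track", "interval", "timestamp".
    Match by timestamp first, then by interval within +/- tolerance.
    """
    hits = []
    for t in targets:
        if (t["timestamp"] in hit_timestamps
                and abs(hit_interval - t["interval"]) <= TOLERANCE_MS):
            hits.append(t)
    return hits
```

Each target returned here would then be sent into its scoring animation, with a "+interval" value popping up from it.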

amanuensis/cleanup/ i

  • This message will come in eventually for every target, signaling when it can be removed from memory.
  • The 1st argument is the timestamp of the target(s) to be removed. Since more than one target can be associated with a single timestamp, when this message arrives it means all targets with this timestamp are ready to be removed.
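Cleanup is then a simple filter over the stored targets — everything sharing the given timestamp goes at once (again a sketch; the dict-based records match the lookup example’s assumed shape):

```python
def on_cleanup(targets, timestamp_ms):
    """Handle an amanuensis/cleanup/ message: drop every target that was
    spawned at the given timestamp, since all of them expire together."""
    return [t for t in targets if t["timestamp"] != timestamp_ms]
```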

Communication

Reply to this post for more details. I can also be found on discord @to_the_sun#5590 or you can message me at soundcloud.com/to_the_sun.


Thanks for the link to Nannou, hadn’t heard of it before, looks great. Why don’t you use that or find a developer who can? It looks much easier to use for what you need than coding something up directly with OpenGL. Plus it uses Vulkan, so it’s more future-proof than OpenGL.

Aside from that, I can think of several environments far better suited to what you’re asking than JUCE. It’s good at a lot of things, but visual art is most assuredly not one of them. Unity, Cinder, openFrameworks, that sort of thing, would serve you much better here. All of them speak OSC fluently, as well.

Thanks for the suggestion. I have been looking there and elsewhere. I know JUCE is primarily audio, but so is my project as a whole and if I ever want to create a true standalone videogame out of it I’ll have to remake the audio aspects outside of Max as well.

So, you could still use JUCE to do that. Due to the nature of C++ name mangling, the easier way would probably be to create a static/dynamic library with Nannou and then include that in a JUCE project. Or use Cinder, which is also C++ based.

Alternatively, use a Rust audio library to do that side of things, depends what you need to achieve.

Personally, I love working with Rust so I’d go with Nannou… unless you want it to be a plugin in which case Rust is not a good choice at the moment.

Thanks for the advice, that’s really helpful. A plug-in version is not a priority for me but non-browser web connectivity might be in the future, which I hear is Rust’s strong suit. However, I’m a little wary of Rust simply because I’m not sure as many people know it as C++ and I want to be able to find collaborators in the future as easily as possible.

In any case, it’s perfectly fine if you or anyone else wants to do it in another framework. Just let me know!

Still looking to hire someone for this! Offering a ballpark figure of $300 to get it done: https://www.upwork.com/ab/applicants/1176576811599220736/job-details

Good luck with that… if you were offering $3000, even that would be on the low end of what a project you describe would cost to build by an experienced developer.


Seconding the above. This is minimum several weeks of work, probably more.

Perhaps it was a faux pas to mention my budget in a post like this, but I really don’t think it should be all that much work for someone with the right skill set. I’m only asking for a visualization, not a whole game. I’m already entertaining several bids in this price range.