How can we ensure the update will take place inside a timeslice and not across several?
It depends on what you’re doing, but it basically comes down to limiting ownership/mutability of the data to a single object/thread at any given moment, and/or verifying that a mutation was valid to begin with. Sidenote: this is why Rust’s “borrow checking” is really intriguing, since it enforces exactly those concepts at compile time.
So how does having multiple copies of the data solve this problem?
It doesn’t; it just changes how race conditions manifest. If you store copies and trigger a race condition, the copy will be outdated but still semantically valid. If you referenced the data instead (e.g. through a const*), the race condition can show up as a split update, where some of the data is current and some is outdated, which could be garbage. You solve that with a lock.
At some point one thread will need to update its data from the other thread.
Not necessarily. You can pipe updates through well defined constructs like bounded FIFOs, Michael-Scott queues, or other data structures/algorithms used for sending data across threads. That data can be updates themselves, so only one thread ever needs to mutate anything. This is how a lot of my parameter/graph changes are implemented. Very little data is shared between editor/processor at all, mostly just what’s needed to send data between them.
I will say, though: if the underlying data is a POD type that can fit in a single register, you can wrap it in std::atomic and probably (do test this) be fine.