Hello,

I am trying to understand how all the different parts of the movement replication work. A bit of background: we are using the Gameplay Ability System, specifically Ability Tasks, to perform custom movement actions for a player character.

Our current version copies the technique used in the engine's version of UAbilityTask_MoveToLocation, whereby we call SetActorLocation every tick of the task on all network entities - the autonomous proxy, the server and the simulated proxies.

However, we get a lot of judder on the simulated proxies. We currently have two different fixes for this, neither of which seems like a good solution, so we want to get an understanding of how all the different elements of the replicated movement system tie together.

Our two different fixes-but-hacks are:

1) Disable network smoothing for the duration of the ability task. This makes the movement much less juddery, but turning off smoothing seems like a bad idea.

2) Don't call SetActorLocation on the simulated proxy. This also fixes the issue, but seems to introduce other issues, such as no longer doing any prediction of movement on simulated proxies and just relying on the replicated movement.

Neither of these things is done in UAbilityTask_MoveToLocation, so I am unsure why we need them when the code is very similar. It is also interesting that UAbilityTask_MoveToLocation sets a custom movement mode, but does not actually override the PhysCustom movement anywhere.

What is the correct way to perform custom movement via the Gameplay Ability System that copes with prediction on the client, with smooth movement and replays?

Things we do not quite yet understand:

Why does UAbilityTask_MoveToLocation need to set a custom movement mode without a custom movement implementation?

When should you use a full custom movement mode vs. just setting actor locations in the ability task?

What role does FSavedMove_Character play? All I can find in the docs is these couple of paragraphs (link), and all the examples of saved moves just seem to change the character speed, rather than doing any custom velocity/rotation.

How do all three of these systems interplay with smoothing? Do custom movement modes / setting actor location in tasks require custom smoothing code?

Sorry for the large set of questions, but I feel they are all interconnected.

Thanks in advance,

-ralph
//////////////////////////////////////////////////////////////
Hi Ralph,

Character limit, so this answer will be spread across a couple of comments.

On Paragon we initially set up some of our abilities using UAbilityTask_MoveToLocation. It was the very first movement-related ability task, and is the most basic, brute-force method of getting something to move. Ability tasks are nice in that you get clean "OnStart, OnTick, OnStop" functionality on Autonomous, Authority and Simulated, and can do different behavior based on what the role is.

In this task we set the movement mode to MOVE_Custom at the beginning and set it back to falling at the end of the task. We do this without implementing any custom movement code as a way of saying "stop doing normal movement stuff", but still tick, and we know the SetActorLocation we do per tick won't get interrupted/influenced by the normal movement code. So it's basically disabling normal movement. If you set a character to MOVE_Custom when they're in the air, they'll simply freeze in the air.
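A minimal sketch of that brute-force pattern, inside a hypothetical ability task (the class declaration is omitted; assume it derives from UAbilityTask with bTickingTask set, and caches MyCharacter, StartLocation, TargetLocation, DurationOfMovement and TimeMoveStarted - all of those names are illustrative, not the engine's):

#include "GameFramework/Character.h"
#include "GameFramework/CharacterMovementComponent.h"

void UMyAbilityTask_BruteForceMove::Activate()
{
    // "Stop doing normal movement stuff": a custom mode with no PhysCustom
    // implementation effectively freezes normal movement.
    MyCharacter->GetCharacterMovement()->SetMovementMode(MOVE_Custom);
    StartLocation = MyCharacter->GetActorLocation();
    TimeMoveStarted = GetWorld()->GetTimeSeconds();
}

void UMyAbilityTask_BruteForceMove::TickTask(float DeltaTime)
{
    // Interpolate and stomp the location directly, every tick, on every role.
    const float Alpha = FMath::Clamp(
        (GetWorld()->GetTimeSeconds() - TimeMoveStarted) / DurationOfMovement, 0.f, 1.f);
    MyCharacter->SetActorLocation(FMath::Lerp(StartLocation, TargetLocation, Alpha));

    if (Alpha >= 1.f)
    {
        // Restore normal movement when the task finishes.
        MyCharacter->GetCharacterMovement()->SetMovementMode(MOVE_Falling);
        EndTask();
    }
}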
You'll see the comment "//TODO: This is still an awful way to do this and we should scrap this task or do it right." above the TickTask() function in the cpp. This is because of how brute-force this method is. To explain why, first some more detail on how character movement works:

UE4's character movement lives almost completely in UCharacterMovementComponent (from now on referred to as CMC). For historical reasons there are a few variables/settings on ACharacter, and you'll note in CMC there's a bit of entanglement between the two. There are a few more settings in GameNetworkManager, there's also a one-off "force updates every X seconds" section of code in APlayerController::TickActor, and client adjustments (corrections) from the server to the client are called from UNetDriver::ServerReplicateActors().

CMCs/Characters have three different roles just like elsewhere in UE4 - AutonomousProxy (the client controlling the character), Authority (the server), and SimulatedProxy (any other client that sees that remotely-controlled character, whether it's controlled by another client or by an AI on the server). It's very important to always keep in mind which of these you're stepping through or thinking about, since what they do in CMC is different depending on which role you are.

AutonomousProxy Role, AKA "Client controlling the character moving"
Each tick of CharacterMovementComponent, as AutonomousProxy I build a "Move" (the FSavedMove_Character you're asking about), move myself locally, and send the Move to the server. A Move is basically "this tick was X seconds long, my acceleration due to the keys I was pressing was vector Y, and I ended up at world location Z". See the ServerMove RPCs.
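Conceptually (and leaving out the many extra fields the real FSavedMove_Character carries), a Move boils down to something like the struct below - an illustration of the idea, not the engine's actual layout:

// Simplified illustration of what a "Move" boils down to - NOT the real
// FSavedMove_Character, which carries many more fields (compressed flags,
// root motion info, movement state, etc.).
struct FIllustrativeMove
{
    float   DeltaTime;     // "this tick was X seconds long"
    FVector Acceleration;  // "my acceleration due to the keys I was pressing was Y"
    FVector EndLocation;   // "and I ended up at world location Z"
    float   TimeStamp;     // client time, used by the server for ordering and delta time
};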
Authority Role, AKA Server
The server as Authority does NOT regularly tick the character's movement in sync with game ticks. The server waits around for the ServerMove RPCs in CMC sent from the AutonomousProxy client, whose parameters make up a given Move. When one is received, the server takes the inputs of the Move, simulates them on its own local copy of where it has the character, and then sends back either an acknowledgement to the AutonomousProxy ("yup, move received and is good") or a correction ("nope, I did what you said you did but ended up in a significantly different place, you should have ended up at world location X").

SimulatedProxy Role, AKA Observer
Any other clients receive a steady stream of locations/state from Authority and apply them directly. There's complexity added with "network smoothing", where we clean up / make the positions and rotations being sent by the server look nicer by smoothing the character's location, but mostly it's just that - applying whatever state changes the server sends.

Prediction/Correction
A lot of the complexity of CMC is due to the prediction/correction logic. Each tick the AutonomousProxy builds up a Move [FSavedMove_Character], saves the Move in a SavedMoves list [UCMC::GetPredictionData_Client_Character()::SavedMoves], and sends the Move info to the server to be processed. The autonomous client needs a list of SavedMoves because of corrections. If the server sends back an acknowledgement ("move X was good and you ended up close enough to where your character did on the server to be okay"), that move and any earlier ones are removed from the SavedMoves list and are forgotten about. If the server sends back a correction ("move X was bad, you should have actually ended up at location Z"), that move and any earlier ones are removed from the SavedMoves list on the client, the client snaps its character location/rotation/state to where the server said it should be, and then all of the OTHER saved moves that happened after the "corrected" one are replayed next TickComponent(). THAT's why they need to be saved.

If you're running forward and then dodge to the right, but the server says "hey, actually when you were moving forward you went forward too fast, correction!", we wouldn't want to just negate all the other moves you've done in the meantime - the client snaps to the correct location, replays all the moves (so you still dodge to the right, but you are in the correct place now), and then processes the tick as normal, doing a new move.
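In rough pseudo-C++ terms (reusing the illustrative move struct from above, with names that are not the engine's), that client-side bookkeeping looks something like this:

#include "GameFramework/Character.h"

// Conceptual sketch of the AutonomousProxy bookkeeping described above.
void OnServerAckMove(float AckedTimeStamp, TArray<FIllustrativeMove>& SavedMoves)
{
    // Acked: that move and everything before it can be forgotten.
    SavedMoves.RemoveAll([&](const FIllustrativeMove& M) { return M.TimeStamp <= AckedTimeStamp; });
}

void OnServerCorrection(float CorrectedTimeStamp, const FVector& ServerLocation,
                        ACharacter* Character, TArray<FIllustrativeMove>& SavedMoves)
{
    // Corrected: drop the bad move and everything before it...
    SavedMoves.RemoveAll([&](const FIllustrativeMove& M) { return M.TimeStamp <= CorrectedTimeStamp; });

    // ...snap to where the server says we should be...
    Character->SetActorLocation(ServerLocation);

    // ...and the remaining saved moves get replayed next tick (the real code
    // flags this with bUpdatePosition and does the replay in
    // ClientUpdatePositionAfterServerUpdate()).
}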
Walking through the exact order of operations for the AutonomousProxy side in code:

UCMC::TickComponent called every tick

|-- ClientUpdatePositionAfterServerUpdate() called - if we've received a ClientAdjustPosition() RPC from the server, it snapped the character location/rotation when it was received and also set bUpdatePosition to true. If that bool is set, at this time we replay all the moves that haven't been acked yet.
|-- Get jump and acceleration input
|-- ReplicateMoveToServer() called..
|------ Which contains a bunch of logic for combining moves if they're similar enough
|------ Calls PerformMovement(), which is where the bulk of all movement is actually done
|------ Saves off the results of PerformMovement() into the NewMove
|------ Calls CallServerMove(), which takes the NewMove's info and calls a given ServerMove RPC to tell the server what the client did this move

On the Authority (server) side, we start instead with the ServerMove RPCs:

UCMC::ServerMove called every time it's received from the AutonomousProxy

|-- Check the time stamp, so that if we've already processed a ServerMove that happened after the one we're currently receiving (packets can and do come in out-of-order across the internet) we ignore it completely
|-- UCMC::MoveAutonomous called with the Move data. We set inputs for the Move to what the client said they did, and then
|------ Calls PerformMovement(), which is where the bulk of all movement is actually done
|-- UCMC::ServerMoveHandleClientError() called..
|------ This is where we check whether where the client ended up is different enough from where the server ended up to send a correction
|------ First we check GameNetworkManager::WithinUpdateDelayBounds() so that we're not sending corrections too often to clients (they take bandwidth and require a nontrivial amount of work on the client's part, since the client has to resimulate all of its saved moves)
|------ If someone has set bForceClientUpdate to force a correction, or ServerCheckClientError() returns that there's enough error, we create a pending adjustment. These pending adjustments are sent out in UCMC::SendClientAdjustment() (called from UNetDriver::ServerReplicateActors()), which calls the various ClientAdjust*() RPCs
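The gist of that server-side decision, stripped down to an illustration (the real logic lives in ServerMoveHandleClientError()/ServerCheckClientError() and is driven by AGameNetworkManager settings; the threshold and names below are placeholders):

// Illustrative gist of the server's "do I correct this client?" check.
bool ShouldSendCorrectionSketch(const FVector& ClientReportedLocation,
                                const FVector& ServerSimulatedLocation,
                                bool bForceClientUpdate)
{
    const float MaxAllowedErrorSq = 3.f * 3.f; // placeholder tolerance, in cm
    const float ErrorSq = FVector::DistSquared(ClientReportedLocation, ServerSimulatedLocation);
    return bForceClientUpdate || ErrorSq > MaxAllowedErrorSq;
}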
Both of these paths called PerformMovement(), which is the base "I'm actually moving myself" functionality. A little overview of PerformMovement():

PerformMovement() called:
|-- Apply external physics impulses/knockups etc.
|-- Calculate and apply animation root motion and root motion sources
|-- Call StartNewPhysics()
|----- Depending on which movement state we're in (Walking, Falling, Swimming, etc.) we call the Phys*() function associated with that
|----- Each Phys*() function basically does the same thing - calculates what Velocity should be based on Acceleration / the state we're in / other info, and then applies that as an actual move. After the move, if we've hit something, or are now in the air, or went into water, we switch movement modes and then call StartNewPhysics() again with the remaining delta time of the move. We cap the number of StartNewPhysics() calls you can do per tick.

So if you are debugging specifics of movement, look in the PhysWalking() type functions.
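The shape of that recursion, heavily simplified into an illustration (MaxSimulationIterations and MovementMode are real CMC properties; everything else here is a sketch, not the engine's code):

#include "GameFramework/CharacterMovementComponent.h"

// Illustrative shape of the StartNewPhysics() -> Phys*() recursion described above.
void StartNewPhysicsSketch(UCharacterMovementComponent& CMC, float DeltaTime, int32 Iterations)
{
    // The engine caps how many times this can recurse in a single tick.
    if (DeltaTime <= 0.f || Iterations >= CMC.MaxSimulationIterations)
    {
        return;
    }

    switch (CMC.MovementMode.GetValue())
    {
    case MOVE_Walking:  /* PhysWalking(DeltaTime, Iterations) */ break;
    case MOVE_Falling:  /* PhysFalling(DeltaTime, Iterations) */ break;
    case MOVE_Swimming: /* PhysSwimming(DeltaTime, Iterations) */ break;
    default: break;
    }
    // Each Phys*() computes Velocity, performs the move, and if the movement mode
    // changed mid-move (hit the ground, walked off a ledge, entered water), calls
    // StartNewPhysics() again with the leftover delta time and Iterations + 1.
}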
Side note: when the server receives a ServerMove, the client has sent it a ClientTimeStamp. This is the value that the server looks at to calculate what deltaTime to process its PerformMovement() with. Why? Packets get dropped. Every ServerMove() a client sends isn't necessarily going to get to the server on time, or in order, or ever, so this is meant to handle those cases. If there's been a major mess-up with the packets in transit, it'll likely lead to a correction being sent, since information will have been lost. If in Move 1 you were moving forward, in Moves 2-9 you were moving to the right, and then in Move 10 you moved forward, and the server received only Move 1 and then Move 10, it would simulate that entire Move 1-10 time span as if you were holding forward the entire time, and be way off. No avoiding that.
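The delta time the server uses is just the gap between successive client timestamps - roughly like this (illustrative; in the engine this bookkeeping lives in the ServerMove handling and FNetworkPredictionData_Server_Character):

// Illustrative: the server derives a move's delta time from the gap between the
// client timestamps of consecutive ServerMoves, not from its own frame time.
float GetServerMoveDeltaTimeSketch(float NewClientTimeStamp, float& PreviousClientTimeStamp)
{
    const float DeltaTime = NewClientTimeStamp - PreviousClientTimeStamp;
    PreviousClientTimeStamp = NewClientTimeStamp;

    // If Moves 2-9 were dropped, this delta spans the whole gap between Move 1 and
    // Move 10, and the server simulates all of it with Move 10's inputs.
    return DeltaTime;
}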
Much of CharacterMovementComponent's code and complexity comes from the fact that this prediction/correction system is contained within it. Autonomous proxies are simulating ahead of time, so that when you press the jump key or press a different move direction you see your character jump/change direction immediately, without having to wait for a round trip from the server as confirmation. The reason the comment says MoveToLocation is an "awful" way of doing this is that by setting a custom movement mode and manually setting the actor location, we are completely bypassing the CharacterMovementComponent system. Calling SetActorLocation on a character does not behave well with the predictive nature of CharacterMovementComponent - you could call SetActorLocation and the next frame receive a movement correction from the server that moves you to a different location, and the previous SetActorLocation would get lost.

When a correction happens, the client rewinds to the last good location and replays the SavedMoves that occurred after that. What's important about this is that anything that has an effect on your movement (including SetActorLocation calls) and is not part of the SavedMove system will get lost any time a correction occurs - the client will rewind and replay, but it won't replay anything outside of the SavedMove system.

With that in mind, we wanted similar behavior - being able to choose a target location and a duration and have the character move to that location - but working as part of the CharacterMovementComponent prediction/correction system.

One option for moving a character along a path that is supported by the prediction/correction system is animation root motion.

Animation root motion at its core says, during the CharacterMovementComponent tick, "hey, I'm about to do a tick for X seconds. Tick my current animation montage for that long, see how much my root bone moved during that time, extract that displacement and overwrite my Velocity value to get me where I need to be during this move". Before PerformMovement(), we call TickCharacterPose() to make sure the root motion-enabled animation montage has been ticked, so that we can extract the root motion from it and then set our Velocity to that value. Now, during PerformMovement() with our Phys*() states, we don't really do the "calculate what my Velocity should be" step - we just keep Velocity at whatever root motion told us. There is an exception: during the Falling state we allow gravity to be applied, so that if you're doing animation root motion while Falling you still fall like normal.
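The core conversion is just displacement over time, something like the following (illustrative; the engine does this inside PerformMovement() using the extracted RootMotionParams after converting them to world space):

// Illustrative core of "the animation decides my Velocity this frame": take the
// world-space root motion displacement extracted for this tick and turn it into
// the velocity that the Phys*() functions will then apply.
FVector CalcRootMotionVelocitySketch(const FTransform& WorldSpaceRootMotionDelta, float DeltaSeconds)
{
    if (DeltaSeconds <= 0.f)
    {
        return FVector::ZeroVector;
    }
    return WorldSpaceRootMotionDelta.GetTranslation() / DeltaSeconds;
}

// ...and then, conceptually, inside the movement tick:
// Velocity = CalcRootMotionVelocitySketch(WorldRootMotionDelta, DeltaSeconds);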
All of this needs to work within the prediction/SavedMoves system - so SavedMoves have info pertaining to root motion, specifically whether it was there and what time the animation was at for the move. The addition of root motion also complicates the SimulatedProxy version a bit, because in between network updates from the server ("you are at location X1", "you are at location X2") we simulate root motion to "fill in the blanks", so that if the server is only sending updates 5 times a second it still looks silky smooth for other clients.

Early in the Paragon project, we used animation root motion for abilities like Steel's charge/bull rush. Again, animation root motion's purpose is to say "what does the animation want my Velocity to be this frame?" and then force the Velocity to be exactly that for the frame. That works for a number of cases and does its job. But Paragon was a MOBA, and MOBAs usually require balance tweaks. Sure, having Steel's charge move him 1100 units forward works, but say designers have determined that it's really better for it to go 900 units. What do we do now? If it needed to be shorter, designers could forcefully interrupt the animation so it doesn't go as far. But then it'd look weird, and we'd need animation to go back and alter the animation. Okay, what if it needed to be longer? What if it needed to instead change distance each time you leveled up the ability? All of these changes would require animation changes - or we could implement and tune what exists now, which is an "animation root motion scalar" that multiplies the translations extracted from the animation to enable these changes.

So animation root motion could basically handle those straight-line charge cases. But now we want to support something like Muriel's ultimate - an ability where you target a character anywhere on a large map and fly through the air to land on them. To do this, I want to start out at one side of the map, shoot up into the sky 4000 units, shoot across the map and then land directly on top of someone's head - someone who happens to be moving the entire time. How do we handle that? Animation root motion doesn't work for that case - the distance you need to travel isn't known, it's different every time, and the target is moving. You could probably come up with some pretty crazy dynamic-scalar animation root motion setup that allows you to end up in the right spot, but it would be pretty crazy, super touchy, and again would require animators to be tweaking and tuning things every time we change the timing and capabilities of the ability.

So animation root motion is out - what do we have? At the very beginning of Paragon development we relied on AbilityTask_MoveToLocation for Muriel's ultimate. This ability task ticked every frame, calculated where someone should be at that time, and used AActor::SetActorLocation() every frame. Instead of "physics" happening or any of the normal movement code running, we basically just ignored that entire setup and said "you should be at location X1 now". If we don't want to do that, and really just want to do the animation root motion thing, we could just set our Velocity directly each frame - problem solved. It's basically what animation root motion does: determining what Velocity should be, setting it, and then doing that again and again.

This seems okay at first, but the critical part to remember is the entire prediction/moves system that's at the heart of how AutonomousProxy and Authority interact. Any time you are directly setting a character's location through SetActorLocation() or directly modifying Velocity, you are doing Bad Things outside of that system. The information "I set Velocity to X" or "I SetActorLocation()'d this character" is not saved or present or known within the SavedMoves system. When you as AutonomousProxy build your move and send it to the server, you are not telling the server "oh hey, I got forcefully put at this location" or "I had my Velocity modified" - you send your acceleration (basically your input) and where you ended up. The server, when trying to verify and apply that move, says "oh, this was your acceleration, well I know what your move speed is so your velocity should end up at Y" - and that will differ from what you locally changed it to. So if I do those modifications only on the AutonomousProxy, that information won't be there for Authority's simulation of the move, the results will differ, and corrections get sent to the AutonomousProxy. On the AutonomousProxy's screen you'll see your character trying to do the stuff they should, but then they'll snap/teleport back to where the server still has them.
Ok, well let's not set it on the AutonomousProxy. Let's just do all that stuff to the Authority version. This gets the end results we want - when we SetActorLocation() on the server, ANY move from the client where they didn't also teleport on that exact move will result in a correction sent by the server: "you thought you just moved forward 10 units, but on the server you were teleported, so you need to teleport too". Done! This is actually how most teleportation was handled in Paragon, and how teleportation is likely handled in many UE projects. It comes across the network as a correction, and the local client receiving the correction makes sure you're in the right spot. And the nice thing about the SavedMove system is that once we get corrected on that move, we replay all subsequent moves (now starting at the correct teleported-to position) and we're good to go! The problem with this is that doing things only on the server means we don't allow prediction. We must always remember that AutonomousProxy clients are IN THE FUTURE. They are making moves and seeing the results of those moves against how the world used to look, and the round-trip time required for a correction doesn't just mean we're being corrected for what we did this frame - it's correcting what we've been doing since we originally sent that move up until now. Corrections are jarring. If the server is the one moving me or changing my Velocity, it's going to feel super bad on the client. We can get away with it for single teleports because they're so disorienting anyway and it kind of makes sense that when teleported I just appear somewhere else, but the same thing can't be said for when my Velocity needs to be doing something special.

Not that we can't support a world (and this is a completely viable way to do things) where "fancy movement stuff is always done authoritatively by the server". Maybe we say "whenever we're doing something fancy, we stop adding more SavedMoves as the client and just sort of revert to a SimulatedProxy mode - the server just replicates down where we should be and we follow that". This could work. Maybe every time you're knocked up you just listen to the server. For some mechanics in Paragon, this would have worked mostly fine. We would have had to do some clean-up/hiding of the fact that you're switching from your normal "I'm moving/predicting the future" to "I'm moving how the server said I was moving in the past", and there'd be some weirdness in that when the fancy move is over you lose out on being able to control your character for the time period between the two. That's also viable; you could hide it with animation transitions. BUT generally, going "do stuff on authority only" means "I'm going to break our prediction/move system". If anything fancy you want needs to have an immediate effect on your screen - say you want a jump-like ability and you want to press a button and see your character jump immediately (and not wait for a potentially 300ms round trip to see your character jump, or have the "feel" of the ability be different each time because the timing is always dependent on your latency) - you can't do it.

So what can we do? Another instinct might be "let's just make sure we do it on both the client and the server!". If we're directly modifying Velocity or SetActorLocation()'ing, just do it both on AutonomousProxy and Authority and then they'll both agree and we'll be correction-free. But, oops, again the prediction/move setup. The way Autonomous and Authority tick is different - Autonomous does it every tick, Authority does each move as it receives the moves. In perfect LAN situations, great. It'll work most of the time - hardly any packet loss, no latency, no out-of-orderness to get in the way of these synchronized adjustments. But outside of that controlled environment, in real networking conditions, packets are all over the place, and latency varies and is non-zero. As a client I might be sending out moves at a fixed tick: | | | | | |. On the server, they could end up coming in like | |||| ||, and out of order. Now the move where you set your Velocity to shoot across the sky and moved 100 units arrives at the server before the server has applied that Velocity change on its side, so it'll send you a correction down: "you should still be where you were". You see your character snap on the screen, but then the server DOES apply the Velocity change because it reached that point in its logic, and now the next move it receives gets ANOTHER correction: "you were back there, but actually you need to shoot across the sky". That ping-ponging of corrections will happen a bunch, and not be good.

With these options exhausted, we needed a better solution. Animation root motion is great and works and is integrated with the prediction/SavedMoves system, since what it does each frame is saved off in those moves - BUT animation root motion is limited in that it can't really do dynamic logic, and it's driven by animation assets that aren't very quick for designers to tune and change. If we want something like Muriel's ultimate, where you shoot across the sky and land on a moving target's head: if we SetActorLocation()'d or overrode Velocity each frame on Authority, it would result in constant corrections for the AutonomousProxy client and make it so we couldn't predict that movement. If we do that work on AutonomousProxy only, it'll get stomped on by the server - another stream of constant corrections, and no resulting movement. If we do that work on both in sync, it may work a little bit, but as soon as you add real-world, poor networking conditions, you'll get ping-ponging all over the place, because you'd be working around the prediction/SavedMoves system instead of working with it.

So how do we do anything? Unreal Tournament has jumps and it works fine! Other UE4 games have teleports! What I described above is a more word-intensive and probably more confusing version of what's on the CharacterMovementComponent doc page at the bottom https://docs.unrealengine.com/latest/INT/Gameplay/Networking/CharacterMovementComponent/index.html "Advanced Topic: Adding New Movement Abilities to Character Movement". The trick to getting movement abilities (jumps, teleports) working nicely is to make sure that information is embedded in the SavedMoves. When you trigger a jump, we save "player jumped" in the Move and then the server applies that when it gets the move. If we had the same for a teleport, we could teleport on the client, save "I teleported to location X" in the move, send it to the server, and the server could choose to accept what you did or not and work naturally from that. Great! But in Paragon we had dozens of characters and dozens of different movement abilities, and each of them could be doing wildly different things to a character's Velocity. The way animation root motion works is nice - we save off into the Move "I'm doing anim root motion, starting at track time X", and then when that move is replayed, or simulated on the server, all we need to do is extract root motion from the animation between track time X and the deltaTime of the Move to know exactly what we need to know about the Velocity result it should have.
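That doc-page pattern - carry the extra input in the SavedMove itself - looks roughly like the condensed sketch below. It's an illustration, not Paragon's or the engine's code: the class and member names (UMySketchMovementComponent, bWantsToDash, the choice of FLAG_Custom_0, the header name) are all placeholders.

#include "GameFramework/Character.h"
#include "GameFramework/CharacterMovementComponent.h"
#include "MySketchMovementComponent.generated.h" // hypothetical header name

UCLASS()
class UMySketchMovementComponent : public UCharacterMovementComponent
{
    GENERATED_BODY()

public:
    // Set on the owning client when the ability triggers; it reaches the server via
    // the compressed flags and is restored when moves are replayed after a correction.
    uint8 bWantsToDash : 1;

    virtual void UpdateFromCompressedFlags(uint8 Flags) override
    {
        Super::UpdateFromCompressedFlags(Flags);
        // The server (and move replays) recover the input state from the move's flags.
        bWantsToDash = (Flags & FSavedMove_Character::FLAG_Custom_0) != 0;
    }

    virtual FNetworkPredictionData_Client* GetPredictionData_Client() const override;
};

class FSavedMove_MySketch : public FSavedMove_Character
{
public:
    uint8 bSavedWantsToDash : 1;

    virtual void Clear() override
    {
        FSavedMove_Character::Clear();
        bSavedWantsToDash = 0;
    }

    // Capture the ability state into the Move when it's recorded.
    virtual void SetMoveFor(ACharacter* C, float InDeltaTime, FVector const& NewAccel,
                            FNetworkPredictionData_Client_Character& ClientData) override
    {
        FSavedMove_Character::SetMoveFor(C, InDeltaTime, NewAccel, ClientData);
        if (const UMySketchMovementComponent* MoveComp =
                Cast<UMySketchMovementComponent>(C->GetCharacterMovement()))
        {
            bSavedWantsToDash = MoveComp->bWantsToDash;
        }
    }

    // Pack it into the flags that travel with ServerMove and get re-applied on replay.
    // (A full version would also override CanCombineWith() so moves with different
    // dash state don't get merged.)
    virtual uint8 GetCompressedFlags() const override
    {
        uint8 Flags = FSavedMove_Character::GetCompressedFlags();
        if (bSavedWantsToDash)
        {
            Flags |= FLAG_Custom_0;
        }
        return Flags;
    }
};

class FNetworkPredictionData_Client_MySketch : public FNetworkPredictionData_Client_Character
{
public:
    FNetworkPredictionData_Client_MySketch(const UCharacterMovementComponent& ClientMovement)
        : FNetworkPredictionData_Client_Character(ClientMovement)
    {
    }

    // Make the prediction system allocate our saved-move type instead of the default one.
    virtual FSavedMovePtr AllocateNewMove() override
    {
        return FSavedMovePtr(new FSavedMove_MySketch());
    }
};

FNetworkPredictionData_Client* UMySketchMovementComponent::GetPredictionData_Client() const
{
    if (ClientPredictionData == nullptr)
    {
        UMySketchMovementComponent* MutableThis = const_cast<UMySketchMovementComponent*>(this);
        MutableThis->ClientPredictionData = new FNetworkPredictionData_Client_MySketch(*this);
    }
    return ClientPredictionData;
}

That works great for a handful of on/off inputs like a jump or a dash, but it doesn't scale to dozens of abilities each doing wildly different things to Velocity.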
And that's what led to creating Root Motion Sources. The idea was to generalize how animation root motion works so that we could have arbitrary game logic feeding the movement system Velocity each frame. Animation root motion ticks an animation and gets a Velocity for the frame; I wanted to be able to tick some arbitrary game logic and get a Velocity for the frame. We could have logic that sends you forward for an exact amount of time at an exact rate. We could have logic that sends you up into the air a specified amount, shoots you across the sky, and has you track and home in on a moving target. We could have logic that senses the terrain you're moving around and allows you to move up walls like a spider, and you could be on fire at the same time, and be a spider on fire climbing up a wall. Cool. We could also come up with some insane path for your special jump to take through the air that had you being repelled from nearby enemies and moving towards some target point in space.

A given Root Motion Source is just something you tick that you get Velocity out of - that's it.
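In spirit (and only in spirit - the engine's actual FRootMotionSource has more responsibilities around networking, timing and state), that boils down to something like:

// Conceptual distillation of "something you tick that you get Velocity out of".
// This is NOT the engine's FRootMotionSource interface, just the idea behind it.
class IIllustrativeRootMotionSource
{
public:
    virtual ~IIllustrativeRootMotionSource() = default;

    // Given how much time this move covers and where the character currently is,
    // answer: what should the character's Velocity be for this move?
    virtual FVector TickAndGetVelocity(float DeltaSeconds, const FVector& CurrentLocation) = 0;
};

// Example: a source that homes toward a (possibly moving) target over a duration,
// in the spirit of the Muriel-ultimate style movement described above.
class FIllustrativeHomingSource : public IIllustrativeRootMotionSource
{
public:
    FVector TargetLocation;      // updated externally as the target moves
    float   TimeRemaining = 1.f; // seconds left to reach the target

    virtual FVector TickAndGetVelocity(float DeltaSeconds, const FVector& CurrentLocation) override
    {
        TimeRemaining = FMath::Max(TimeRemaining - DeltaSeconds, KINDA_SMALL_NUMBER);
        // Cover the remaining distance in the remaining time.
        return (TargetLocation - CurrentLocation) / TimeRemaining;
    }
};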
Let's start out with how animation root motion works, since that's what Root Motion Sources were based on. Animation root motion: for a given Move, there's an animation montage that's playing, there's a track time it started at for the Move, and there's the resulting transform the root bone moves by over the duration of the Move, which gets applied as Velocity to the character. In UCMC::PerformMovement(), there's a section that gets executed if we have a root motion-enabled anim montage playing. We call TickCharacterPose(), which ticks the mesh animation and then calls CharacterMesh->ConsumeRootMotion(), and we save this off in our UCMC::RootMotionParams member variable. Below the TickCharacterPose() call in PerformMovement() we then convert that RootMotionParams transform from local to world space, calculate the anim root motion velocity from it, and directly apply it to UCMC::Velocity - we see a "Velocity =" line of code. That's what happens once per PerformMovement(): tick the character, extract root motion, apply the transform to our Velocity. PerformMovement() then (like usual) calls Phys*() functions depending on the current movement mode - let's look at PhysWalking(). In PhysWalking(), there's a CalcVelocity() call that takes your acceleration/time/friction/braking data and ends up setting UCMC::Velocity. But note that the calls to CalcVelocity() in the Phys*() functions are guarded by "only do this if I don't have animation root motion". So with animation root motion, we calculate what Velocity should be for a frame, and then don't change it throughout the normal physics calls.

In terms of how animation root motion works with the prediction/SavedMoves system: in FSavedMove_Character::PostUpdate(), when we're in PostUpdateMode == PostUpdate_Record, we save off into our FSavedMove_Character 1) which montage we were playing, into RootMotionMontage, 2) the track position of the animation, into RootMotionTrackPosition, and 3) the resulting translation, into RootMotionMovement. Now, let's say as AutonomousProxy I get a move correction. I snap to the location of the correction, and now I'm going to replay the saved moves I have. That happens in UCMC::ClientUpdatePositionAfterServerUpdate() - for each saved move, I call CurrentMove->PrepMoveFor(). In that function, either we A) just use the saved-off RootMotionMovement again so that we don't have to do any expensive animation ticking/extraction, or B) if the server said in the correction that we're off on what we're doing with animation root motion (like our track position was way wrong), we use RootMotionMontage and RootMotionTrackPosition to extract the new (corrected) root motion and apply that.

As far as client-server communication for animation root motion goes, the client never tells the server what it's doing. Looking at CallServerMove() and the ServerMove() RPCs, the AutonomousProxy client doesn't tell the server what montage it's currently playing, or even whether it's playing one at all. [[There IS a ServerMoveDualHybridRootMotion() function, but that's meant for when we're sending two moves to the server where one didn't have root motion and the next did - this is important because of how Authority ticks things: if it just receives one 0.5 second Move instead of two 0.25 second ones, the Authority will end up ticking the character pose twice as long as it should, which will lead to errors/corrections.]] The only communication client and server have is when a correction has been triggered by the server - it calls SendClientAdjustment(), and in that logic, if we are playing a networked root motion montage when we're sending a correction, we call ClientAdjustRootMotionPosition() instead of the normal ClientAdjustPosition() - this does everything the normal one does, but also sends the anim montage's track position. On the client side of this RPC, if the track positions are different we correct them.

But that's it! Clients send ServerMove RPCs by default at their framerate. It's good for bandwidth to not include a bunch of info in each of those, so there's no animation root motion data being sent along with each Move. Clients and servers don't actually know, or ever check with each other, that they're playing the same animation montage. We just hope they are, and any functioning game makes sure of it. On Paragon we solved this with anim montages being played from abilities; the AbilitySystemComponent has built-in support for syncing montages to be played on server and client. You are on your own for making sure clients and servers are playing the same thing at the same time, and you'll get corrected if they don't line up closely enough. When they get out of sync, you'll get corrections and the server will force the AutonomousProxy to set its track position to match the server. What's nice about how lightweight the synchronization is ("if I'm playing animation root motion on the server when a correction happens, I'll tell you where in that montage I'm at") is that there aren't constraints put on how you must activate the montage. Some projects might want the server to lead things and tell the client "you should play montage X now"; others might allow the client to predictively start it and start moving, and then only get involved when things go wrong. Whatever your project's cup of tea, go for it.

And that's what Root Motion Sources do, except with arbitrary game logic instead of animations.

So the answer here is to try using the RootMotionSource system, which is set up through ability tasks - instead of using UAbilityTask_MoveToLocation, try UAbilityTask_ApplyRootMotionMoveToForce. This is the synchronized, plays-nicely-with-prediction/correction version of UAbilityTask_MoveToLocation. Hope this gets you started.
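For concreteness, the swap on the ability side would look something like the sketch below. The exact parameter list of the ApplyRootMotionMoveToForce factory differs between engine versions, so the arguments here are an assumption from memory - check AbilityTask_ApplyRootMotionMoveToForce.h in your engine - and UMyDashAbility and its values are placeholders.

#include "Abilities/Tasks/AbilityTask_ApplyRootMotionMoveToForce.h"

void UMyDashAbility::ActivateAbility(const FGameplayAbilitySpecHandle Handle,
                                     const FGameplayAbilityActorInfo* ActorInfo,
                                     const FGameplayAbilityActivationInfo ActivationInfo,
                                     const FGameplayEventData* TriggerEventData)
{
    // (Cost/cooldown commit and validity checks omitted for brevity.)
    // Example target 600 units in front of the avatar; a real ability would use targeting data.
    const FVector TargetLocation =
        GetAvatarActorFromActorInfo()->GetActorLocation() + FVector(600.f, 0.f, 0.f);

    // Parameter names/order are an assumption; verify against your engine version.
    UAbilityTask_ApplyRootMotionMoveToForce* MoveTask =
        UAbilityTask_ApplyRootMotionMoveToForce::ApplyRootMotionMoveToForce(
            this,
            TEXT("DashToTarget"),
            TargetLocation,
            0.35f,              // Duration
            true,               // bSetNewMovementMode
            MOVE_Flying,        // MovementMode while the force is active
            false,              // bRestrictSpeedToExpected
            nullptr,            // PathOffsetCurve
            ERootMotionFinishVelocityMode::MaintainLastRootMotionVelocity,
            FVector::ZeroVector,
            0.f);               // ClampVelocityOnFinish

    MoveTask->ReadyForActivation();
}

Because this goes through the root motion source path in CMC, the move is recorded in the SavedMoves, replayed on corrections, and simulated consistently on the server - which is exactly what the SetActorLocation-per-tick approach can't give you.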