Version information:
  ml-agents: 0.29.0,
  ml-agents-envs: 0.29.0,
  Communicator API: 1.5.0,
  PyTorch: 1.12.0+cpu
[INFO] Listening on port 5004. Start training by pressing the Play button in the Unity Editor.
[INFO] Connected to Unity environment with package version 2.0.1 and communication version 1.5.0
[INFO] Connected new brain: BoxPerson?team=0
[WARNING] Deleting TensorBoard data events.out.tfevents.1656957925.KevinDesktop.3480.0 that was left over from a previous run.
[WARNING] Deleting TensorBoard data events.out.tfevents.1656957925.KevinDesktop.3480.0.meta that was left over from a previous run.
[INFO] Hyperparameters for behavior name BoxPerson:
        trainer_type: ppo
        hyperparameters:
          batch_size: 32
          buffer_size: 256
          learning_rate: 0.0003
          beta: 0.005
          epsilon: 0.2
          lambd: 0.95
          num_epoch: 5
          learning_rate_schedule: linear
          beta_schedule: linear
          epsilon_schedule: linear
        network_settings:
          normalize: False
          hidden_units: 256
          num_layers: 3
          vis_encode_type: simple
          memory: None
          goal_conditioning_type: hyper
          deterministic: False
        reward_signals:
          extrinsic:
            gamma: 0.9
            strength: 1.0
            network_settings:
              normalize: False
              hidden_units: 128
              num_layers: 2
              vis_encode_type: simple
              memory: None
              goal_conditioning_type: hyper
              deterministic: False
        init_path: None
        keep_checkpoints: 5
        checkpoint_interval: 500000
        max_steps: 500000
        time_horizon: 3
        summary_freq: 2000
        threaded: False
        self_play: None
        behavioral_cloning: None
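For reference, the logged settings above correspond to an ML-Agents trainer configuration file in the standard `behaviors:` YAML schema. The sketch below is reconstructed from the logged values; the actual file name and path used for this run are unknown.

```yaml
behaviors:
  BoxPerson:
    trainer_type: ppo
    hyperparameters:
      batch_size: 32
      buffer_size: 256
      learning_rate: 3.0e-4
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 5
      learning_rate_schedule: linear
    network_settings:
      normalize: false
      hidden_units: 256
      num_layers: 3
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.9
        strength: 1.0
    keep_checkpoints: 5
    checkpoint_interval: 500000
    max_steps: 500000
    time_horizon: 3
    summary_freq: 2000
```

Unspecified keys (e.g. `init_path`, `self_play`, `behavioral_cloning`) fall back to the defaults shown in the log, so they can be omitted from the file.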
C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\torch\networks.py:91: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_new.cpp:204.)
  enc.update_normalization(torch.as_tensor(vec_input))
C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\torch\utils.py:320: UserWarning: The use of `x.T` on tensors of dimension other than 2 to reverse their shape is deprecated and it will throw an error in a future release. Consider `x.mT` to transpose batches of matrices or `x.permute(*torch.arange(x.ndim - 1, -1, -1))` to reverse the dimensions of a tensor. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\TensorShape.cpp:2985.)
  return (tensor.T * masks).sum() / torch.clamp(
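Both warnings come from inside the ML-Agents package, not from user code, and are harmless for training. The first one describes a general PyTorch pattern worth knowing: passing a Python list of `numpy` arrays straight to `torch.as_tensor()` forces per-element conversion, while stacking the list into a single `ndarray` first takes the fast path. A minimal illustration (the shapes and dtype here are made up for the example):

```python
import numpy as np

# Slow path (triggers the UserWarning): torch.as_tensor(list_of_arrays)
# iterates the Python list element by element.
list_of_arrays = [np.zeros(4, dtype=np.float32) for _ in range(3)]

# Fast path suggested by the warning: collapse the list into one
# contiguous ndarray before handing it to torch.
stacked = np.array(list_of_arrays)   # shape (3, 4), single buffer
# tensor = torch.as_tensor(stacked)  # now a cheap, zero-copy conversion

assert stacked.shape == (3, 4)
assert stacked.dtype == np.float32
```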
[WARNING] Restarting worker[0] after 'Communicator has exited.'
[INFO] Listening on port 5004. Start training by pressing the Play button in the Unity Editor.
[INFO] Exported results\test\BoxPerson\BoxPerson-782.onnx
[INFO] Copied results\test\BoxPerson\BoxPerson-782.onnx to results\test\BoxPerson.onnx.
Traceback (most recent call last):
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\Scripts\mlagents-learn.exe\__main__.py", line 7, in <module>
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\learn.py", line 260, in main
    run_cli(parse_command_line())
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\learn.py", line 256, in run_cli
    run_training(run_seed, options, num_areas)
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\learn.py", line 132, in run_training
    tc.start_learning(env_manager)
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents_envs\timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\trainer_controller.py", line 176, in start_learning
    n_steps = self.advance(env_manager)
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents_envs\timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\trainer_controller.py", line 234, in advance
    new_step_infos = env_manager.get_steps()
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\env_manager.py", line 124, in get_steps
    new_step_infos = self._step()
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\subprocess_env_manager.py", line 420, in _step
    self._restart_failed_workers(step)
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\subprocess_env_manager.py", line 328, in _restart_failed_workers
    self.reset(self.env_parameters)
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\env_manager.py", line 68, in reset
    self.first_step_infos = self._reset_env(config)
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\subprocess_env_manager.py", line 446, in _reset_env
    ew.previous_step = EnvironmentStep(ew.recv().payload, ew.worker_id, {}, {})
  File "C:\Users\kevin\AppData\Local\Programs\Python\Python37\lib\site-packages\mlagents\trainers\subprocess_env_manager.py", line 101, in recv
    raise env_exception
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
	 The environment does not need user interaction to launch
	 The Agents' Behavior Parameters > Behavior Type is set to "Default"
	 The environment and the Python interface have compatible versions.
	 If you're running on a headless server without graphics support, turn off display by either passing --no-graphics option or build your Unity executable as server build.
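The timeout here occurred because the Editor-based environment exited ("Communicator has exited.") and never reconnected, so the trainer's automatic worker restart timed out waiting on port 5004. One way to avoid depending on the Editor is to resume the run against a standalone build, as the exception's last suggestion describes. The flags below (`--run-id`, `--resume`, `--env`, `--no-graphics`) are standard `mlagents-learn` options; the config path and build path are illustrative, not taken from this log:

```shell
# Resume run-id "test" against a standalone player instead of the Editor.
# --no-graphics is only needed on a headless machine without a display.
mlagents-learn config/BoxPerson.yaml --run-id=test --resume \
    --env=Builds/BoxPerson --no-graphics
```

When training in the Editor instead, the same timeout simply means Play was not pressed (or the Editor crashed) before the trainer's connection window elapsed.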