- 2019-09-27 04:07:50.320088: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:217] XRT device (LOCAL) CPU:0 -> /job:tpu_worker/replica:0/task:0/device:XLA_CPU:0
- 2019-09-27 04:07:50.320179: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:217] XRT device (LOCAL) TPU:0 -> /job:tpu_worker/replica:0/task:0/device:TPU:0
- 2019-09-27 04:07:50.320188: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:217] XRT device (LOCAL) TPU:1 -> /job:tpu_worker/replica:0/task:0/device:TPU:1
- 2019-09-27 04:07:50.320195: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:217] XRT device (LOCAL) TPU:2 -> /job:tpu_worker/replica:0/task:0/device:TPU:2
- 2019-09-27 04:07:50.320201: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:217] XRT device (LOCAL) TPU:3 -> /job:tpu_worker/replica:0/task:0/device:TPU:3
- 2019-09-27 04:07:50.320208: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:217] XRT device (LOCAL) TPU:4 -> /job:tpu_worker/replica:0/task:0/device:TPU:4
- 2019-09-27 04:07:50.320214: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:217] XRT device (LOCAL) TPU:5 -> /job:tpu_worker/replica:0/task:0/device:TPU:5
- 2019-09-27 04:07:50.320220: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:217] XRT device (LOCAL) TPU:6 -> /job:tpu_worker/replica:0/task:0/device:TPU:6
- 2019-09-27 04:07:50.320226: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:217] XRT device (LOCAL) TPU:7 -> /job:tpu_worker/replica:0/task:0/device:TPU:7
- 2019-09-27 04:07:50.320263: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:221] Worker grpc://10.240.1.18:8470 for /job:tpu_worker/replica:0/task:0
- 2019-09-27 04:07:50.320271: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:225] XRT default device: TPU:0
- 2019-09-27 04:07:50.320302: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:1114] Configuring TPU for master worker tpu_worker:0 at grpc://10.240.1.18:8470
- 2019-09-27 04:07:54.302503: I tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:1125] TPU topology: mesh_shape: 2
- mesh_shape: 2
- mesh_shape: 2
- num_tasks: 1
- num_tpu_devices_per_task: 8
- device_coordinates: 0
- device_coordinates: 0
- device_coordinates: 0
- device_coordinates: 0
- device_coordinates: 0
- device_coordinates: 1
- device_coordinates: 0
- device_coordinates: 1
- device_coordinates: 0
- device_coordinates: 0
- device_coordinates: 1
- device_coordinates: 1
- device_coordinates: 1
- device_coordinates: 0
- device_coordinates: 0
- device_coordinates: 1
- device_coordinates: 0
- device_coordinates: 1
- device_coordinates: 1
- device_coordinates: 1
- device_coordinates: 0
- device_coordinates: 1
- device_coordinates: 1
- device_coordinates: 1
- test_add_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_addcdiv_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_addcmul_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_advancedindex_big_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_advancedindex_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_all_any_empty_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_atan2_edgecases_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_atan2_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_binary_op_mem_overlap_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_bitwise_not_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_blas_alpha_beta_empty_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_blas_empty_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_bool_sub_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_bool_tensor_comparison_ops_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_bool_tensor_value_change_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_broadcast_batched_matmul_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_broadcast_fused_matmul_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_broadcast_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_cat_all_dtypes_and_devices_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_cat_empty_legacy_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_cat_empty_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_cdist_empty_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_cdist_large_batch_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_cdist_large_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_cdist_non_contiguous_batch_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_cdist_non_contiguous_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_cdist_norm_batch_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_cdist_norm_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_ceil_out_mismatch_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_chain_matmul_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_cholesky_batched_many_batches_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
- test_cholesky_batched_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_cholesky_inverse_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_cholesky_solve_batched_broadcasting_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_cholesky_solve_batched_many_batches_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
- test_cholesky_solve_batched_non_contiguous_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_cholesky_solve_batched_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_cholesky_solve_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_cholesky_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_clamp_xla (__main__.TestTorchDeviceTypeXLA) ... 2019-09-27 04:08:04.557650: E tensorflow/compiler/xla/xla_client/tf_logging.cc:11] Check failed: min || max
- *** Begin stack trace ***
- tensorflow::CurrentStackTrace[abi:cxx11]()
- torch_xla::XLATensor::clamp(torch_xla::XLATensor const&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>)
- torch_xla::AtenXlaType::clamp(at::Tensor const&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>)
- c10::detail::wrap_kernel_functor_unboxed_<c10::detail::WrapKernelFunction_<at::Tensor (at::Tensor const&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>), &torch_xla::AtenXlaType::clamp, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar> > >, at::Tensor (at::Tensor const&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>)>::call(c10::OperatorKernel*, at::Tensor const&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>)
- torch::autograd::VariableType::clamp(at::Tensor const&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>)
- _PyMethodDef_RawFastCallKeywords
- _PyMethodDescr_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- PyObject_Call
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- PyObject_Call
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- PyObject_Call
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- PyEval_EvalCodeEx
- PyEval_EvalCode
- PyRun_FileExFlags
- PyRun_SimpleFileExFlags
- _Py_UnixMain
- __libc_start_main
- *** End stack trace ***
- At least one of 'min' or 'max' must not be None
- 2019-09-27 04:08:04.562454: E tensorflow/compiler/xla/xla_client/tf_logging.cc:11] Check failed: min || max
- *** Begin stack trace ***
- tensorflow::CurrentStackTrace[abi:cxx11]()
- torch_xla::XLATensor::clamp_(torch_xla::XLATensor&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>)
- torch_xla::AtenXlaType::clamp_(at::Tensor&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>)
- c10::detail::wrap_kernel_functor_unboxed_<c10::detail::WrapKernelFunction_<at::Tensor& (at::Tensor&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>), &torch_xla::AtenXlaType::clamp_, at::Tensor&, c10::guts::typelist::typelist<at::Tensor&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar> > >, at::Tensor& (at::Tensor&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>)>::call(c10::OperatorKernel*, at::Tensor&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>)
- torch::autograd::VariableType::clamp_(at::Tensor&, c10::optional<c10::Scalar>, c10::optional<c10::Scalar>)
- _PyMethodDef_RawFastCallKeywords
- _PyMethodDescr_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- PyObject_Call
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- PyObject_Call
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- PyObject_Call
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- PyEval_EvalCodeEx
- PyEval_EvalCode
- PyRun_FileExFlags
- PyRun_SimpleFileExFlags
- _Py_UnixMain
- __libc_start_main
- *** End stack trace ***
- At least one of 'min' or 'max' must not be None
- ok
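The `Check failed: min || max` traces above come from `test_clamp_xla` deliberately exercising the invariant that `torch.clamp` needs at least one bound; the test still ends in `ok` because the error is the expected outcome. The same invariant holds on plain CPU PyTorch, as this minimal sketch (not part of the test suite) shows:

```python
import torch

t = torch.tensor([-2.0, 0.5, 3.0])

# Clamping with only one bound is fine:
print(torch.clamp(t, min=0.0))  # lower bound only
print(torch.clamp(t, max=1.0))  # upper bound only

# Passing neither bound trips the same check seen in the XLA log:
try:
    torch.clamp(t)
except RuntimeError as e:
    print(e)  # "At least one of 'min' or 'max' must not be None"
```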
- test_clone_all_dtypes_and_devices_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_contiguous_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_copy_all_dtypes_and_devices_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_copy_mem_overlap_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_ctor_with_numpy_array_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_cumprod_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_cumsum_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_det_logdet_slogdet_batched_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_det_logdet_slogdet_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_device_guard_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'fewer than 2 GPUs detected'
- test_diagflat_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_diagonal_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_dim_function_empty_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_dim_reduction_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_dist_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_dlpack_conversion_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_empty_strided_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_empty_tensor_props_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_erfinv_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_eye_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_fill_all_dtypes_and_devices_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_flip_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_float_scalar_pow_float_tensor_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_geometric_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_geqrf_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_half_tensor_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_has_storage_numpy_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_histc_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_index_copy_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_index_fill_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_index_select_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_index_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_int_pow_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_int_tensor_pow_neg_ints_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_inverse_many_batches_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
- test_inverse_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_is_signed_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_isinf_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_kthvalue_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_lapack_empty_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_lerp_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_linspace_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_log_normal_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_logical_all_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_logical_any_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_logical_not_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_logical_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_logical_xor_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_long_tensor_pow_floats_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_lstsq_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_lu_solve_batched_broadcasting_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_lu_solve_batched_many_batches_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
- test_lu_solve_batched_non_contiguous_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_lu_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_masked_fill_bool_tensor_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_masked_scatter_bool_tensor_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_masked_select_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_matrix_power_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_matrix_rank_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_memory_format_empty_like_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_memory_format_preserved_after_permute_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_mul_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_multinomial_alias_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_multinomial_constraints_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_multinomial_device_constrain_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_multinomial_gpu_device_constrain_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'only one GPU detected'
- test_narrow_empty_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_neg_xla (__main__.TestTorchDeviceTypeXLA) ... 2019-09-27 04:08:15.122101: E tensorflow/compiler/xla/xla_client/tf_logging.cc:11] Check failed: self.scalar_type() != at::kBool
- *** Begin stack trace ***
- tensorflow::CurrentStackTrace[abi:cxx11]()
- torch_xla::AtenXlaType::neg(at::Tensor const&)
- c10::detail::wrap_kernel_functor_unboxed_<c10::detail::WrapKernelFunction_<at::Tensor (at::Tensor const&), &torch_xla::AtenXlaType::neg, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
- torch::autograd::VariableType::neg(at::Tensor const&)
- _PyMethodDef_RawFastCallDict
- _PyCFunction_FastCallDict
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyEval_EvalFrameDefault
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- PyObject_Call
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- PyObject_Call
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- PyObject_Call
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallDict
- _PyObject_Call_Prepend
- _PyObject_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- _PyFunction_FastCallKeywords
- _PyEval_EvalFrameDefault
- _PyEval_EvalCodeWithName
- PyEval_EvalCodeEx
- PyEval_EvalCode
- PyRun_FileExFlags
- PyRun_SimpleFileExFlags
- _Py_UnixMain
- __libc_start_main
- *** End stack trace ***
- Negation, the `-` operator, on a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.
- ok
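Likewise, the `Check failed: self.scalar_type() != at::kBool` trace from `test_neg_xla` is the expected error path: unary `-` is rejected on bool tensors, and the message itself points at the fix. A minimal CPU-side sketch (not from the test suite) of the rejected and the supported spellings:

```python
import torch

mask = torch.tensor([True, False, True])

# Unary minus on a bool tensor raises, matching the XLA log:
try:
    -mask
except RuntimeError as e:
    print(e)  # suggests using `~` or logical_not() instead

# Inverting a boolean mask the supported way:
print(~mask)                # tensor([False,  True, False])
print(mask.logical_not())   # same result
```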
- test_nonzero_empty_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_nonzero_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_norm_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_normal_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_nuclear_norm_axes_small_brute_force_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_nuclear_norm_exceptions_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_ones_like_multiple_device_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'only one GPU detected'
- test_ones_like_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_pairwise_distance_empty_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_pdist_empty_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_pdist_norm_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_pin_memory_from_constructor_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_pinverse_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_pow_scalar_overloads_mem_overlap_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_pow_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_put_empty_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_qr_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_random_neg_values_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_randperm_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_reduction_empty_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_remainder_overflow_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_renorm_ps_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_resize_all_dtypes_and_devices_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_resize_as_all_dtypes_and_devices_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_reverse_binary_ops_multiple_device_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'only one GPU detected'
- test_roll_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_rot90_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_rpow_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_scatter_add_bool_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_scatter_add_to_large_input_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_scatter_bool_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_scatter_to_large_input_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_serialization_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_sign_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_signal_window_functions_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_solve_batched_broadcasting_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_solve_batched_many_batches_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
- test_solve_batched_non_contiguous_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_solve_batched_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_solve_methods_arg_device_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_solve_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_std_mean_all_dims_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_std_mean_some_dims_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_std_mean_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_stft_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_storage_device_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_storage_multigpu_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'less than 2 GPUs detected'
- test_svd_no_singularvectors_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_svd_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_symeig_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_take_empty_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_tensor_factories_empty_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_tensor_factory_gpu_type_inference_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_tensor_factory_gpu_type_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_tensor_pow_tensor_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_tensor_set_errors_multigpu_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'less than 2 GPUs detected'
- test_tensor_shape_empty_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_tensordot_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_ternary_op_mem_overlap_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_topk_noncontiguous_gpu_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_trapz_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_triangular_solve_batched_broadcasting_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_triangular_solve_batched_many_batches_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_triangular_solve_xla (__main__.TestTorchDeviceTypeXLA) ... FAIL
- test_triu_tril_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_unary_out_op_mem_overlap_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_unfold_all_devices_and_dtypes_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_unique_dim_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_unique_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_var_mean_all_dims_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_var_mean_some_dims_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_var_mean_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- test_view_all_dtypes_and_devices_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_view_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'skipped on XLA'
- test_zeros_like_multiple_device_xla (__main__.TestTorchDeviceTypeXLA) ... skipped 'only one GPU detected'
- test_zeros_like_xla (__main__.TestTorchDeviceTypeXLA) ... ok
- ======================================================================
- FAIL: test_chain_matmul_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "../test/test_torch.py", line 7495, in test_chain_matmul
- run_test([10, 20, 30, 5], device)
- File "../test/test_torch.py", line 7493, in run_test
- self.assertEqual(torch.chain_matmul(*matrices), product(matrices))
- File "/home/dlibenzi_google_com/pytorch/test/common_utils.py", line 698, in assertEqual
- assertTensorsEqual(x, y)
- File "/home/dlibenzi_google_com/pytorch/test/common_utils.py", line 668, in assertTensorsEqual
- self.assertLessEqual(max_err, prec, message)
- AssertionError: tensor(0.2323, device='xla:1') not less than or equal to 1e-05 :
- ======================================================================
- FAIL: test_cholesky_inverse_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "../test/test_torch.py", line 7906, in test_cholesky_inverse
- self.assertLessEqual(inv0.dist(inv1), 1e-12)
- AssertionError: tensor(4.2386e-06, device='xla:1') not less than or equal to 1e-12
- ======================================================================
- FAIL: test_cholesky_solve_batched_broadcasting_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "../test/test_torch.py", line 7889, in test_cholesky_solve_batched_broadcasting
- run_test((2, 1, 3, 4, 4), (2, 1, 3, 4, 6), cast, upper) # no broadcasting
- File "../test/test_torch.py", line 7885, in run_test
- self.assertEqual(x, cast(x_exp))
- File "/home/dlibenzi_google_com/pytorch/test/common_utils.py", line 698, in assertEqual
- assertTensorsEqual(x, y)
- File "/home/dlibenzi_google_com/pytorch/test/common_utils.py", line 668, in assertTensorsEqual
- self.assertLessEqual(max_err, prec, message)
- AssertionError: tensor(0.0002, device='xla:1') not less than or equal to 1e-05 :
- ======================================================================
- FAIL: test_cholesky_solve_batched_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "../test/test_torch.py", line 7831, in test_cholesky_solve_batched
- cholesky_solve_batch_helper((5, batchsize), (batchsize, 5, 10), lambda t: t.to(device), upper)
- File "../test/test_torch.py", line 7828, in cholesky_solve_batch_helper
- self.assertLessEqual(b.dist(torch.matmul(A, x_act)), 2e-12) # Correctness check
- AssertionError: tensor(0.1194, device='xla:1') not less than or equal to 2e-12
- ======================================================================
- FAIL: test_cholesky_solve_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "../test/test_torch.py", line 7813, in test_cholesky_solve
- self.assertLessEqual(b.dist(A.mm(x)), 1e-12)
- AssertionError: tensor(0.0311, device='xla:1') not less than or equal to 1e-12
- ======================================================================
- FAIL: test_cholesky_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "../test/test_torch.py", line 7964, in test_cholesky
- self.assertEqual(A, B, 1e-14)
- File "/home/dlibenzi_google_com/pytorch/test/common_utils.py", line 698, in assertEqual
- assertTensorsEqual(x, y)
- File "/home/dlibenzi_google_com/pytorch/test/common_utils.py", line 668, in assertTensorsEqual
- self.assertLessEqual(max_err, prec, message)
- AssertionError: tensor(0.0287, device='xla:1') not less than or equal to 1e-14 :
- ======================================================================
- FAIL: test_matrix_power_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "../test/test_torch.py", line 7462, in test_matrix_power
- run_test(M)
- File "../test/test_torch.py", line 7455, in run_test
- self.assertEqual(MP6, torch.matmul(MP3, MP3))
- File "/home/dlibenzi_google_com/pytorch/test/common_utils.py", line 698, in assertEqual
- assertTensorsEqual(x, y)
- File "/home/dlibenzi_google_com/pytorch/test/common_utils.py", line 668, in assertTensorsEqual
- self.assertLessEqual(max_err, prec, message)
- AssertionError: tensor(0.2648, device='xla:1') not less than or equal to 1e-05 :
- ======================================================================
- FAIL: test_solve_batched_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "../test/test_torch.py", line 7753, in test_solve_batched
- solve_batch_helper((5, batchsize), (batchsize, 5, 10), device)
- File "../test/test_torch.py", line 7750, in solve_batch_helper
- self.assertLessEqual(b.dist(torch.matmul(A, x_act)), 1e-12) # Correctness check
- AssertionError: tensor(0.0218, device='xla:1') not less than or equal to 1e-12
- ======================================================================
- FAIL: test_solve_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "../test/test_torch.py", line 7735, in test_solve
- self.assertLessEqual(b.dist(A.mm(x)), 1e-12)
- AssertionError: tensor(0.0024, device='xla:1') not less than or equal to 1e-12
- ======================================================================
- FAIL: test_tensordot_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "../test/test_torch.py", line 11051, in test_tensordot
- self.assertEqual(c, cn)
- File "/home/dlibenzi_google_com/pytorch/test/common_utils.py", line 698, in assertEqual
- assertTensorsEqual(x, y)
- File "/home/dlibenzi_google_com/pytorch/test/common_utils.py", line 668, in assertTensorsEqual
- self.assertLessEqual(max_err, prec, message)
- AssertionError: tensor(0.0307) not less than or equal to 1e-05 :
- ======================================================================
- FAIL: test_triangular_solve_xla (__main__.TestTorchDeviceTypeXLA)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 127, in instantiated_test
- return test(self, cls.device_type)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "/home/dlibenzi_google_com/pytorch/test/common_device_type.py", line 309, in dep_fn
- return fn(slf, device, *args, **kwargs)
- File "../test/test_torch.py", line 9603, in test_triangular_solve
- self.assertLessEqual(b.dist(A.t().mm(x)), 4e-12)
- AssertionError: tensor(2.1711e-08, device='xla:1') not less than or equal to 4e-12
- ----------------------------------------------------------------------
- Ran 184 tests in 35.906s
- FAILED (failures=11, skipped=111)
- Fail to import hypothesis in common_utils, tests are not derandomized
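Every failure above is a small absolute error (roughly 1e-8 to 1e-1) measured against a float64-grade tolerance (1e-12, 1e-14), which is consistent with the TPU backend evaluating these linear-algebra ops in reduced precision such as bfloat16 rather than the ops being wrong outright. A minimal stdlib-only sketch (the `to_bf16` helper and the 2x2 system are illustrative, not taken from the test suite) shows how truncating an otherwise exact solve to bfloat16 precision blows far past a 1e-12 residual check like `b.dist(A.mm(x)) <= 1e-12`:

```python
import struct

def to_bf16(x: float) -> float:
    """Truncate a float to bfloat16 precision (8-bit mantissa)."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return struct.unpack('>f', struct.pack('>I', bits & 0xFFFF0000))[0]

# Solve the 2x2 system A @ x = b exactly via Cramer's rule.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x = [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
     (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def residual(v):
    # max_i |(A @ v - b)_i|, the quantity the tests bound with assertLessEqual
    return max(abs(A[i][0] * v[0] + A[i][1] * v[1] - b[i]) for i in range(2))

res_exact = residual(x)                       # float64: a few ulps, well under 1e-12
res_bf16 = residual([to_bf16(v) for v in x])  # bfloat16: on the order of 1e-3
```

Under this reading, the fix is not in the ops themselves but in relaxing the per-device tolerances for the XLA device type (as the test harness already does for other backends), though that is an assumption about intent, not something stated in the log.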