Loading required package: methods

Attaching package: 'SparkR'

The following object is masked from 'package:testthat':

    describe

The following objects are masked from 'package:stats':

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform

binary functions: ...........
functions on binary files: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
....
broadcast variables: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..
functions in client.R: .....
test functions in sparkR.R: .1234Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
........Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..........Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.
include an external JAR in SparkContext: ..
include R packages: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context

MLlib functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..........................May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 65,622
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [label] BINARY: 1 values, 21B raw, 23B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 87B for [terms, list, element, list, element] BINARY: 2 values, 42B raw, 43B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 30B for [hasIntercept] BOOLEAN: 1 values, 1B raw, 3B comp, 1 pages, encodings: [BIT_PACKED, PLAIN]
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 49
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 90B for [labels, list, element] BINARY: 3 values, 50B raw, 50B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 92
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 61B for [vectorCol] BINARY: 1 values, 18B raw, 20B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 126B for [prefixesToRewrite, key_value, key] BINARY: 2 values, 61B raw, 61B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 58B for [prefixesToRewrite, key_value, value] BINARY: 2 values, 15B raw, 17B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 12B raw, 1B comp}
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 54
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 122B for [columnsToPrune, list, element] BINARY: 2 values, 59B raw, 59B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 56
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 51B for [intercept] DOUBLE: 1 values, 8B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, PLAIN]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 45B for [coefficients, type] INT32: 1 values, 10B raw, 12B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 30B for [coefficients, size] INT32: 1 values, 7B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 36B for [coefficients, indices, list, element] INT32: 1 values, 13B raw, 15B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 79B for [coefficients, values, list, element] DOUBLE: 3 values, 37B raw, 38B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.had.........................................................................
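
[Editor's note: the Parquet writer settings repeated throughout the MLlib block above are in bytes and match the stock ParquetOutputFormat defaults, as the arithmetic below shows; the tiny "written ...B" figures appear to be the one-row model-metadata files the MLlib suite persists.]

134217728 / 1024^2   # 128 -> the block size is the default 128 MB
1048576 / 1024^2     # 1   -> page and dictionary-page sizes are the default 1 MB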
parallelize() and collect(): Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.............................
basic RDD functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
............................................................................................................................................................................................................................................................................................................................................................................................................................................
SerDe functionality: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
...................
partitionBy, groupByKey, reduceByKey etc.: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
....................
SparkSQL functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.......................................................S..................................................................................................................................................................................................................................................5........S..................................................................................................................................................................................................................................................................................................................................................................S
tests RDD function take(): Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
................
the textFile() function: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.............
functions in utils.R: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.................................
test the support SparkR on Windows: .

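[Editor's note: the "Re-using existing Spark Context" warning that accompanies most suites above means the tests share one JVM-backed context, and the warning itself names the remedy when a fresh context is wanted. A minimal sketch using the SparkR API of this era (sparkR.init() is an assumption here; only sparkR.stop() appears in the log):]

sparkR.stop()         # tear down the lingering Spark Context, as the warning suggests
sc <- sparkR.init()   # then create a fresh one
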
Skipped ------------------------------------------------------------------------
1. create DataFrame from RDD (@test_sparkSQL.R#166) - Hive is not build with SparkSQL, skipped

2. test HiveContext (@test_sparkSQL.R#957) - Hive is not build with SparkSQL, skipped

3. Window functions on a DataFrame (@test_sparkSQL.R#2142) - Hive is not build with SparkSQL, skipped

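[Editor's note: all three skips fire on the same guard: this Spark build was compiled without Hive support, so the HiveContext-backed tests cannot run. A build using Spark's standard Hive Maven profiles, e.g. "build/mvn -DskipTests -Psparkr -Phive -Phive-thriftserver package" from the source root (flags assumed, not taken from this log), would enable them.]
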
Failed -------------------------------------------------------------------------
1. Failure: Check masked functions (@test_context.R#30) ------------------------
length(maskedBySparkR) not equal to length(namesOfMasked).
1/1 mismatches
[1] 22 - 20 == 2


2. Failure: Check masked functions (@test_context.R#31) ------------------------
sort(maskedBySparkR) not equal to sort(namesOfMasked).
Lengths differ: 22 vs 20


3. Failure: Check masked functions (@test_context.R#40) ------------------------
length(maskedCompletely) not equal to length(namesOfMaskedCompletely).
1/1 mismatches
[1] 5 - 3 == 2


4. Failure: Check masked functions (@test_context.R#41) ------------------------
sort(maskedCompletely) not equal to sort(namesOfMaskedCompletely).
Lengths differ: 5 vs 3


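[Editor's note: failures 1-4 come from one test that collects the names SparkR masks in already-attached packages and compares them, by length and by sorted value, against hardcoded expected lists. This run masks 22 names where the test expects 20, and 5 completely-masked names where it expects 3; the +2 in both checks is likely the local R version masking two names the test does not anticipate (note endsWith and startsWith, new in base as of R 3.3.0, in the masked-from-base list near the top of this log). A minimal sketch of the check, reusing the variable names from the output; the two-name namesOfMasked below is illustrative, the real test hardcodes the full list:]

library(testthat)
library(SparkR)
# names masked by attaching SparkR, as reported by base R
maskedBySparkR <- conflicts(detail = TRUE)$`package:SparkR`
namesOfMasked <- c("describe", "cov")                        # illustrative stand-in
expect_equal(length(maskedBySparkR), length(namesOfMasked))  # @test_context.R#30: 22 vs 20 here
expect_equal(sort(maskedBySparkR), sort(namesOfMasked))      # @test_context.R#31
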
5. Error: subsetting (@test_sparkSQL.R#922) ------------------------------------
argument "subset" is missing, with no default
1: subset(df, select = "name", drop = F) at C:/Users/IEUser/workspace/spark/R/lib/SparkR/tests/testthat/test_sparkSQL.R:922
2: subset(df, select = "name", drop = F)
3: .local(x, ...)
4: x[subset, select, drop = drop]

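[Editor's note: failure 5 is an error rather than a comparison failure. The call at test_sparkSQL.R#922 passes select = and drop = but no subset argument, and the method body (frame 4 of the traceback) forwards the missing argument into x[subset, select, drop = drop], which errors the moment it is evaluated. A minimal base-R sketch of the same failure mode; f is illustrative only, not SparkR's implementation:]

f <- function(x, subset, select, drop = FALSE) {
  force(subset)                     # evaluating an argument that was never supplied...
  x[subset, select, drop = drop]
}
f(mtcars, select = "mpg")
# Error in force(subset) : argument "subset" is missing, with no default
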
DONE ===========================================================================
Error: Test failures
Execution halted