WARNING: Running python applications through ./bin/pyspark is deprecated as of Spark 1.0.
Use ./bin/spark-submit <python file>

Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/10/27 03:29:43 WARN Utils: Your hostname, Sid resolves to a loopback address: 127.0.1.1; using 192.168.0.15 instead (on interface wlan0)
14/10/27 03:29:43 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
14/10/27 03:29:43 INFO SecurityManager: Changing view acls to: sid,
14/10/27 03:29:43 INFO SecurityManager: Changing modify acls to: sid,
14/10/27 03:29:43 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sid, ); users with modify permissions: Set(sid, )
14/10/27 03:29:44 INFO Slf4jLogger: Slf4jLogger started
14/10/27 03:29:44 INFO Remoting: Starting remoting
14/10/27 03:29:44 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.0.15:38405]
14/10/27 03:29:44 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@192.168.0.15:38405]
14/10/27 03:29:44 INFO Utils: Successfully started service 'sparkDriver' on port 38405.
14/10/27 03:29:44 INFO SparkEnv: Registering MapOutputTracker
14/10/27 03:29:44 INFO SparkEnv: Registering BlockManagerMaster
14/10/27 03:29:44 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20141027032944-e2cd
14/10/27 03:29:44 INFO Utils: Successfully started service 'Connection manager for block manager' on port 58556.
14/10/27 03:29:44 INFO ConnectionManager: Bound socket to port 58556 with id = ConnectionManagerId(192.168.0.15,58556)
14/10/27 03:29:44 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
14/10/27 03:29:44 INFO BlockManagerMaster: Trying to register BlockManager
14/10/27 03:29:44 INFO BlockManagerMasterActor: Registering block manager 192.168.0.15:58556 with 265.4 MB RAM
14/10/27 03:29:44 INFO BlockManagerMaster: Registered BlockManager
14/10/27 03:29:44 INFO HttpFileServer: HTTP File server directory is /tmp/spark-ed95bc19-7c96-4d6e-8b07-fba7695c2f41
14/10/27 03:29:44 INFO HttpServer: Starting HTTP Server
14/10/27 03:29:44 INFO Utils: Successfully started service 'HTTP file server' on port 42993.
14/10/27 03:29:45 INFO Utils: Successfully started service 'SparkUI' on port 4040.
14/10/27 03:29:45 INFO SparkUI: Started SparkUI at http://192.168.0.15:4040
14/10/27 03:29:45 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/27 03:29:45 INFO Utils: Copying /home/sid/Downloads/spark/pdsWork/smallCode.py to /tmp/spark-14dbc370-b423-48dd-b498-3798b76af4bb/smallCode.py
14/10/27 03:29:45 INFO SparkContext: Added file file:/home/sid/Downloads/spark/pdsWork/smallCode.py at http://192.168.0.15:42993/files/smallCode.py with timestamp 1414394985793
14/10/27 03:29:45 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@192.168.0.15:38405/user/HeartbeatReceiver
14/10/27 03:29:46 INFO MemoryStore: ensureFreeSpace(163705) called with curMem=0, maxMem=278302556
14/10/27 03:29:46 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 159.9 KB, free 265.3 MB)
14/10/27 03:29:46 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
14/10/27 03:29:46 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
14/10/27 03:29:46 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
14/10/27 03:29:46 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
14/10/27 03:29:46 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
14/10/27 03:29:46 INFO FileInputFormat: Total input paths to process : 1
14/10/27 03:29:46 INFO SparkContext: Starting job: saveAsTextFile at NativeMethodAccessorImpl.java:-2
14/10/27 03:29:46 INFO DAGScheduler: Got job 0 (saveAsTextFile at NativeMethodAccessorImpl.java:-2) with 1 output partitions (allowLocal=false)
14/10/27 03:29:46 INFO DAGScheduler: Final stage: Stage 0(saveAsTextFile at NativeMethodAccessorImpl.java:-2)
14/10/27 03:29:46 INFO DAGScheduler: Parents of final stage: List()
14/10/27 03:29:46 INFO DAGScheduler: Missing parents: List()
14/10/27 03:29:46 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[5] at saveAsTextFile at NativeMethodAccessorImpl.java:-2), which has no missing parents
14/10/27 03:29:46 INFO MemoryStore: ensureFreeSpace(61848) called with curMem=163705, maxMem=278302556
14/10/27 03:29:46 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 60.4 KB, free 265.2 MB)
14/10/27 03:29:46 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MappedRDD[5] at saveAsTextFile at NativeMethodAccessorImpl.java:-2)
14/10/27 03:29:46 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
14/10/27 03:29:46 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1258 bytes)
14/10/27 03:29:46 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
14/10/27 03:29:46 INFO Executor: Fetching http://192.168.0.15:42993/files/smallCode.py with timestamp 1414394985793
14/10/27 03:29:46 INFO Utils: Fetching http://192.168.0.15:42993/files/smallCode.py to /tmp/fetchFileTemp8666575365236477394.tmp
14/10/27 03:29:47 INFO CacheManager: Partition rdd_2_0 not found, computing it
14/10/27 03:29:47 INFO HadoopRDD: Input split: file:/home/sid/Downloads/spark/pdsWork/input.txt:0+147355
14/10/27 03:29:47 ERROR PythonRDD: Python worker exited unexpectedly (crashed)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/sid/Downloads/spark/python/pyspark/worker.py", line 75, in main
    command = pickleSer._read_with_length(infile)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 146, in _read_with_length
    length = read_int(stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 464, in read_int
    raise EOFError
EOFError

    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:54)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/sid/Downloads/spark/python/pyspark/worker.py", line 79, in main
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 196, in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 128, in dump_stream
    self._write_with_length(obj, stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 138, in _write_with_length
    serialized = self.dumps(obj)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 356, in dumps
    return cPickle.dumps(obj, 2)
PicklingError: Can't pickle __main__.testing: attribute lookup __main__.testing failed

    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:61)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:209)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:184)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:184)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1311)
    at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:183)
14/10/27 03:29:47 ERROR PythonRDD: This may have been caused by a prior exception:
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/sid/Downloads/spark/python/pyspark/worker.py", line 79, in main
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 196, in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 128, in dump_stream
    self._write_with_length(obj, stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 138, in _write_with_length
    serialized = self.dumps(obj)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 356, in dumps
    return cPickle.dumps(obj, 2)
PicklingError: Can't pickle __main__.testing: attribute lookup __main__.testing failed

    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:61)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:209)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:184)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:184)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1311)
    at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:183)
14/10/27 03:29:47 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/sid/Downloads/spark/python/pyspark/worker.py", line 79, in main
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 196, in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 128, in dump_stream
    self._write_with_length(obj, stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 138, in _write_with_length
    serialized = self.dumps(obj)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 356, in dumps
    return cPickle.dumps(obj, 2)
PicklingError: Can't pickle __main__.testing: attribute lookup __main__.testing failed

    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:61)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:209)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:184)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:184)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1311)
    at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:183)
14/10/27 03:29:47 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/sid/Downloads/spark/python/pyspark/worker.py", line 79, in main
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 196, in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 128, in dump_stream
    self._write_with_length(obj, stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 138, in _write_with_length
    serialized = self.dumps(obj)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 356, in dumps
    return cPickle.dumps(obj, 2)
PicklingError: Can't pickle __main__.testing: attribute lookup __main__.testing failed

    org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
    org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
    org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:61)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
    org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:209)
    org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:184)
    org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:184)
    org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1311)
    org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:183)
14/10/27 03:29:47 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
14/10/27 03:29:47 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/10/27 03:29:47 INFO TaskSchedulerImpl: Cancelling stage 0
14/10/27 03:29:47 INFO DAGScheduler: Failed to run saveAsTextFile at NativeMethodAccessorImpl.java:-2
Traceback (most recent call last):
  File "/home/sid/Downloads/spark/pdsWork/smallCode.py", line 42, in <module>
    output.saveAsTextFile("output")
  File "/home/sid/Downloads/spark/python/pyspark/rdd.py", line 1324, in saveAsTextFile
    keyed._jrdd.map(self.ctx._jvm.BytesToString()).saveAsTextFile(path)
  File "/home/sid/Downloads/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/home/sid/Downloads/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o40.saveAsTextFile.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/sid/Downloads/spark/python/pyspark/worker.py", line 79, in main
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 196, in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 128, in dump_stream
    self._write_with_length(obj, stream)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 138, in _write_with_length
    serialized = self.dumps(obj)
  File "/home/sid/Downloads/spark/python/pyspark/serializers.py", line 356, in dumps
    return cPickle.dumps(obj, 2)
PicklingError: Can't pickle __main__.testing: attribute lookup __main__.testing failed

    org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
    org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
    org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:61)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
    org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:209)
    org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:184)
    org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:184)
    org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1311)
    org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:183)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
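
Root cause: the PicklingError repeated through the trace. cPickle serializes class instances by reference (module name plus attribute name), and a class named testing defined in the script's __main__ module cannot be looked up inside the Python worker processes, where __main__ is PySpark's worker entry point. The earlier EOFError and "Python worker exited unexpectedly (crashed)" entries are downstream symptoms of the same failure, as the log itself notes ("This may have been caused by a prior exception"). Below is a minimal sketch of the usual fix; the body of smallCode.py is not in the log, so the testing class, its fields, the input path, and the RDD pipeline are assumptions reconstructed from the class name in the error and the output.saveAsTextFile("output") call at line 42.

# mymodule.py -- hypothetical module; only the class name "testing" is known
# from the log. Defining the class outside __main__ lets workers resolve it
# as mymodule.testing when records are pickled and unpickled.
class testing(object):
    def __init__(self, line):
        self.line = line

# smallCode.py -- reconstructed sketch, not the original script
from pyspark import SparkContext
import mymodule

sc = SparkContext(appName="smallCode")
sc.addPyFile("mymodule.py")  # ship the module so every worker can import it

text = sc.textFile("input.txt")

# Failing pattern (assumed): emitting instances of a class defined in
# __main__ makes the worker's cPickle.dumps(obj, 2) raise
# "Can't pickle __main__.testing: attribute lookup __main__.testing failed".
#   output = text.map(lambda line: testing(line))

# Working pattern: the class now lives in an importable module; simpler
# still, keep records as plain strings/tuples before saving.
output = text.map(lambda line: mymodule.testing(line)) \
             .map(lambda rec: rec.line)
output.saveAsTextFile("output")

Equivalently, the extra module can be shipped at submit time, which also addresses the deprecation warning at the top of the log: ./bin/spark-submit --py-files mymodule.py smallCode.py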