Review for team fdffc6547a5c805a9e4a48c0598ceba4
Legend:
+ Positive
~ Neutral
- Negative

Given a scale with quarter points I would award:
Textual description: 5.75 / 6
Code quality: 6 / 6
Results: 5 / 6

The grades given are based on the comments below. The positive and negative points note either something that was lacking or something that impressed me; where I have made no comment, I considered that part good or reasonably good.

Overall comments:
+ Usage of pickle is well done
+ The commenting of the functions is extremely well done: each function gets an overall description, followed by comments on its more complicated lines.
+ The code is always very easy to follow and quite well written
- A few errors lead to some incorrect results that propagate.


Comments per section:

Section A:

A1:
+ Clear textual description
+ Complete comments
- The column names have not been renamed to the requested format.
- Using ‘cvpr’ as index_col removes it from the column list, so one conference is forgotten and the total number of papers comes out too low.
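
A minimal sketch of the mechanism, using a made-up CSV layout with one column per conference (the real file and the requested column format are not shown here, so the names are assumptions):

    import io
    import pandas as pd

    # Hypothetical CSV standing in for the real data file.
    csv = io.StringIO("cvpr,icml,neurips\n10,20,30\n")

    df = pd.read_csv(csv, index_col="cvpr")
    print("cvpr" in df.columns)  # False: index_col moves the column into the index,
                                 # so any code that iterates over df.columns skips it.

    csv.seek(0)
    df = pd.read_csv(csv).rename(columns=str.upper)  # the requested naming format is assumed
    print("CVPR" in df.columns)  # True: the conference stays in the column list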

A2:
+ Great use of the eval function; it simplifies both the code itself and the reading of it.
- Using explode would have simplified creating the author-centric paper table, doing the work of the whole ‘author df’ function in a single line of code (see the sketch below).
+ Good comments on the author df function.
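
A minimal sketch of what explode does, assuming a hypothetical paper table whose ‘authors’ column holds lists of names (the real column names may differ):

    import pandas as pd

    # Hypothetical paper-centric table: one row per paper, authors stored as a list.
    papers = pd.DataFrame({
        "title": ["Paper A", "Paper B"],
        "authors": [["Alice", "Bob"], ["Carol"]],
        "year": [2018, 2019],
    })

    # explode() turns each list element into its own row, giving one row per
    # (author, paper) pair, which is what the hand-written author df function builds.
    author_papers = papers.explode("authors").rename(columns={"authors": "author"})
    print(author_papers)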

A3.1:
- The plot gives the impression that the only outlier is Sheila A. McIlraith due to how the x labels are shown, though the second graph remedies this.
- A bar graph or scatter plot would make it easier to see the actual number of papers published by the top 20 authors (see the sketch below).
+ Good use of double grouping to make it easier to see what is happening in one go.
+ Really good textual description for this part.
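
A rough sketch of the bar-graph alternative; the author-centric table and its column names are assumptions:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical author-centric table: one row per (author, paper) pair.
    author_papers = pd.DataFrame({
        "author": ["Alice", "Alice", "Bob", "Carol", "Carol", "Carol"],
        "title":  ["P1", "P2", "P1", "P3", "P4", "P5"],
    })

    # Papers per author, keeping only the most prolific ones (top 20 in the real data).
    top_authors = author_papers.groupby("author")["title"].count().nlargest(20)

    # A horizontal bar chart keeps the names readable and makes the counts easy to compare.
    top_authors.sort_values().plot.barh()
    plt.xlabel("Number of papers")
    plt.tight_layout()
    plt.show()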

A3.2:
- There is no plot of the papers per year before cleaning the year column.
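
Such a plot is cheap to add; a quick sketch, assuming a raw ‘year’ column that may hold mixed or missing values (the column name and values are made up):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical raw table with an uncleaned year column.
    papers = pd.DataFrame({"year": ["2017", "2018", "2018", "19", None, "2019"]})

    # Count the raw values before cleaning, so malformed or missing years are visible.
    papers["year"].fillna("missing").value_counts().sort_index().plot.bar()
    plt.ylabel("Number of papers")
    plt.tight_layout()
    plt.show()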

A3.3, A3.4, A4
+ Clear and concise.
+ The textual description is complete.

Section B:

B1
+ The analysis gets the most important points but could perhaps be extended.

B2.1
+ Good checks on the aminer_ai table.

B2.2
~ Perhaps use a bar graph rather than a long list for better readability.

B2.3
- It is not completely clear which list is the one for those absent from the H5 ranking; a bit more textual description, or printing the full list, would make it more readable.
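
One lightweight way to show the full list, assuming it lives in a pandas Series (the variable name is made up):

    import pandas as pd

    # Hypothetical list of entries absent from the H5 ranking.
    missing_from_h5 = pd.Series(["entry_a", "entry_b", "entry_c"], name="missing from H5")

    # pandas truncates long output by default; lifting the row limit (or printing a plain
    # Python list) shows every entry, so the reader can tell exactly which list this is.
    with pd.option_context("display.max_rows", None):
        print(missing_from_h5)

    print(missing_from_h5.tolist())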

B2.4
+ Based on the observed results (cvpr conference missing) the analysis is reasonable.

B2.5
- A rotation of 90 degrees on the x labels makes them hard to read (see the sketch below).
- The difference between the two graphs titled ‘Sum of the rank drop of the top20 per conference removed’ is unclear.
- Unconvincing analysis: ‘If an author publishes only in one conference and that conference disappears, then his rank will drop greatly.’ If I understood it correctly, this is a bit too self-evident.
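
A small sketch of the label fix; the data is made up and only illustrates the rotation setting:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical per-conference values, just to show the x labels.
    drops = pd.Series({"Conference A": 4, "Conference B": 7, "Conference C": 3})

    drops.plot.bar()
    # 45 degrees with right-aligned labels is usually much easier to read than 90.
    plt.xticks(rotation=45, ha="right")
    plt.tight_layout()
    plt.show()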

B3
+ Very interesting analysis.
~ Could talk about potential issues with the new scoring method; the analysis is interesting but lacks critique.

Section C:

C
+ The analysis is complete.
+ The textual description is well done.