# Video 5 (3 A's)

## How do you know whether you're making progress?

### Before lean (Vanity metrics)

- Just do stuff
- Vanity metrics
- Look at the total number of questions answered
- Total number of chat messages
- Arguing / opinions about technical things that go nowhere

### After lean (Split testing)

- Look at real data like conversion
- Engineer X can put feature Y in a split test and see real data. Real data can show negative / positive results (see the sketch below)

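A minimal sketch of how a split test can assign users, assuming deterministic hashing of the user ID; the function and bucket names are illustrative, not from the video:

```python
# Illustrative only: stable split-test assignment and a place to read real data.
import hashlib

def variant(user_id: str, test_name: str) -> str:
    """Stable 50/50 assignment: the same user always sees the same variant."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).digest()
    return "feature_on" if digest[0] % 2 == 0 else "feature_off"

# Engineer X ships feature Y behind the test and compares real conversion
# numbers between the two buckets instead of arguing from opinion.
print(variant("user_42", "feature_y"))
```
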
## 3 A's

- Actionable. Based on the results from testing you can take the correct action. It should be obvious what action to take as the result of a test
- Accessible. Reports from split testing should be easy to read
- Auditable. Every feature has a report card of what matters. "We shipped this feature but it didn't improve the business". "Metrics are people too". The report must represent real people

if positive
then do action
if data is accessible
then we see that it is positive
if data is auditable
then we know there are real customers behind it
then if data is negative we kill the project

Auditable tests kill pet projects of no value.
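
A toy encoding of these decision rules, just to make the logic concrete; `TestResult` and `decide` are made-up names, not anything from the video:

```python
# Illustrative only: the 3 A's as a decision function.
from dataclasses import dataclass

@dataclass
class TestResult:
    lift: float          # measured change in the metric (e.g. +0.02 = +2%)
    accessible: bool     # the report is easy to read
    auditable: bool      # the numbers trace back to real customers

def decide(result: TestResult) -> str:
    """Only act on data that is readable, auditable, and positive."""
    if not result.accessible:
        return "fix reporting first"   # we can't even see whether it's positive
    if not result.auditable:
        return "verify the data"       # no proof real customers are behind it
    if result.lift > 0:
        return "ship it"               # positive and trustworthy: take the action
    return "kill the project"          # negative and trustworthy: cut the pet project

print(decide(TestResult(lift=-0.01, accessible=True, auditable=True)))  # kill the project
```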

## How to integrate / build a framework for split testing

- Easy reporting: it should be obvious and readable
- Hackathon day where everyone adds a split test
- Team meeting with report data

If you come up with a split-test project: if it wins on the data it stays, if it loses it leaves.

There's less risk in trying a feature, since we can tell whether it's good or bad.

## Metrics

- We have 6 to 10 criteria we run our business on:
  - converting to paid
  - signing up
  - coming back tomorrow
  - coming back next week
  - learning something
- We have key metrics:
  - We want you to learn. Do they learn?
  - signing up
  - user-generated distribution
  - converting to premium
  - engaging in parts of the system that lead to learning

Each metric is of the form "of the people in the test, what fraction did this thing?"
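
A small sketch of computing such fractions, with made-up users and event names for illustration:

```python
# Illustrative only: "of the people in the test, what fraction did this thing?"
users_in_test = ["u1", "u2", "u3", "u4", "u5"]
events = {                      # user -> set of things they did
    "u1": {"signed_up", "converted_to_paid"},
    "u2": {"signed_up"},
    "u3": {"signed_up", "came_back_next_week"},
    "u4": set(),
    "u5": {"signed_up", "converted_to_paid"},
}

def fraction(action: str) -> float:
    """Fraction of users in the test who did the given thing."""
    did_it = sum(1 for u in users_in_test if action in events.get(u, set()))
    return did_it / len(users_in_test)

for action in ["signed_up", "converted_to_paid", "came_back_next_week"]:
    print(f"{action}: {fraction(action):.0%}")
```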

## Questions

- Do we argue / throw around opinions about things we could settle with data from an A/B test?
- ~~Do users get annoyed with our split tests?~~
- When doing a split test that we expect to have a negative result: THE VALUE OF LEARNING THE RESULT IS WORTH THE NEGATIVE EFFECT ON YOUR USERS

## Extreme programming

- pair programming
- agile development
- test-driven development (toy example below)
- Pivotal Tracker: getting things done for work
- user experience stories
- developer does what is obvious

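As a toy illustration of the test-driven-development item above, the tests are written first and drive the implementation; the retention example is invented, not from the video:

```python
# Toy TDD illustration: write the failing test first,
# then just enough code to make it pass.
import unittest

def came_back(visits: list[str], day: str) -> bool:
    """Did the user come back on the given day?"""
    return day in visits

class TestRetention(unittest.TestCase):
    def test_user_came_back_tomorrow(self):
        self.assertTrue(came_back(["mon", "tue"], "tue"))

    def test_user_did_not_come_back(self):
        self.assertFalse(came_back(["mon"], "tue"))

if __name__ == "__main__":
    unittest.main()
```
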
## Advantages

Don't talk / present / play politics. Build and bring data!
Reduces the risk / fear of trying something.

# BUILD AND BRING DATA. DON'T PRESENT / BANTER / PLAY POLITICS

## Fear of customers seeing weird stuff

Only once has someone emailed us with "hey, why does your price keep changing?". It's not that big a deal. Nobody cares. Nobody knows.

# DON'T ASK FOR PERMISSION, ASK FOR FORGIVENESS

## What do you really need to build to test this thing?

## The bigger the test, the more learning you get

Test big differences and optimize later: do far left vs far right, then binary search to the optimum point (see the sketch below).
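
A rough sketch of that search strategy, assuming a unimodal response and a hypothetical `measure()` standing in for running a live split test at a given setting:

```python
# Illustrative only: probe the extremes, then binary-search toward the optimum.
def measure(setting: float) -> float:
    # Placeholder response curve; in reality this is a live split test
    # (e.g. conversion rate at a given price). Peaks at setting = 7.0.
    return -(setting - 7.0) ** 2

def find_optimum(lo: float, hi: float, rounds: int = 10) -> float:
    """Start far left vs far right, then narrow toward the better side."""
    for _ in range(rounds):
        mid = (lo + hi) / 2
        # Probe just left and right of the midpoint to see which side wins.
        if measure(mid - 0.01) < measure(mid + 0.01):
            lo = mid        # the right half looks better: raise the lower bound
        else:
            hi = mid        # the left half looks better: lower the upper bound
    return (lo + hi) / 2

print(round(find_optimum(0.0, 20.0), 2))  # converges near 7.0
```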

# IF IT DOESN'T LOOK GOOD AND PEOPLE STILL USE IT, THEN THERE IS REAL CORE VALUE THERE.

# MAKE IT FUNCTIONAL FIRST, THEN BEAUTIFUL

# A GOOD DESIGN IS ONE THAT CHANGES CUSTOMER BEHAVIOUR IN A QUANTIFIABLE WAY