# Internet Scale Services Checklist

A checklist for designing and developing internet-scale services, inspired by James Hamilton's 2007 paper "On Designing and Deploying Internet-Scale Services."

* http://mvdirona.com/jrh/talksandpapers/jamesrh_lisa.pdf

## Basic tenets

- [ ] Does the design expect failures to happen regularly and handle them gracefully?
- [ ] Have we kept things as simple as possible?
- [ ] Have we automated everything?

## Overall Application Design & Development

- [ ] Can the service survive failure without human administrative interaction?
- [ ] Are failure paths frequently tested?
- [ ] Have we documented all conceivable component failure modes and combinations thereof?
- [ ] Does our design tolerate these failure modes? And if not, have we undertaken a risk assessment to determine that the risk is acceptable?
- [ ] Are we targeting commodity hardware? (That is, our design does not require special h/w)
- [ ] Are we hosting all users on a single version of the software?
- [ ] Can we support multi-tenancy without physical isolation?
- [ ] Have we implemented (and automated) a quick service health check? (See the sketch after this list.)
- [ ] Do our developers work in the full environment? (Requires single server deployment)
- [ ] Can we continue to operate in reduced capacity if services (components) we depend on fail?
- [ ] Does our design eliminate code redundancy across services/components?
- [ ] Can our pods/clusters of services continue to operate independently of each other?
- [ ] For rare emergency human intervention, have we worked with operations to come up with recovery plans, and documented, scripted, and tested them?
- [ ] Does each of our complexity-adding optimizations (if any) give at least an order of magnitude improvement?
- [ ] Have we enforced admission control at all levels?
- [ ] Can we partition the service, and is that partitioning infinitely adjustable and fine-grained?
- [ ] Have we understood the network design and reviewed it with networking specialists?
- [ ] Have we analysed throughput and latency and determined the most important metric for capacity planning?
- [ ] Are all of our operations utilities following the same code review, source code control, testing, etc. as the rest of the code base?
- [ ] Have we understood the load this service will put on any backend store / service? Have we measured and validated this load?
- [ ] Is everything versioned? The goal is to run single-version software, but multiple versions will always exist during rollout and testing etc. Versions n and n+1 of all components need to peacefully co-exist.
- [ ] Have we avoided single points of failure?

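A quick, automatable health check can be as small as an HTTP endpoint that exercises the service's critical dependencies and returns pass/fail. Below is a minimal sketch using only the Python standard library; the `/healthz` path, the port, and the probe functions are illustrative assumptions, not part of the original checklist.

```python
# Minimal health check endpoint (sketch, standard library only).
# The /healthz path, port, and probe functions are illustrative assumptions.
import json
import shutil
from http.server import BaseHTTPRequestHandler, HTTPServer


def check_database() -> bool:
    # Placeholder probe: replace with a cheap query against the real dependency.
    return True


def check_disk_space() -> bool:
    total, used, free = shutil.disk_usage("/")
    return free / total > 0.05  # fail if less than 5% free


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/healthz":
            self.send_error(404)
            return
        checks = {"database": check_database(), "disk": check_disk_space()}
        healthy = all(checks.values())
        body = json.dumps({"healthy": healthy, "checks": checks}).encode()
        self.send_response(200 if healthy else 503)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

Monitoring can then poll the endpoint and alert on any non-200 response.
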
## Automatic Management and Provisioning

- [ ] Are all of our operations restartable? (See the sketch after this list.)
- [ ] Is all persistent state stored redundantly?
- [ ] Can we support geo-distribution / multiple data center deployments?
- [ ] Have we automated provisioning and installation?
- [ ] Are configuration and code delivered by development in a single unit?
- [ ] Is the unit created by development used all the way through the lifecycle (test and prod. deployment)?
- [ ] Is there an audit log mechanism to capture all changes made in production?
- [ ] Have we designed for roles rather than servers, with the ability to deploy each 'role' on as many or few servers as needed?
- [ ] Are we handling failures and correcting errors at the service level?
- [ ] Have we eliminated any dependency on local storage for non-recoverable information?
- [ ] Is our deployment model as simple as it can possibly be? (Hard to beat file copy!)
- [ ] Are we using a chaos monkey in production?

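One way to make a long-running operation restartable is to record a durable checkpoint after each idempotent step, so a restarted run skips work that already completed. The sketch below illustrates that pattern under stated assumptions: the checkpoint file name and `migrate_record` step are hypothetical stand-ins for whatever the real operation does.

```python
# Restartable batch operation (sketch): progress is checkpointed so the job
# can be killed and re-run without repeating or corrupting completed work.
# The checkpoint path and migrate_record() are hypothetical stand-ins.
import json
import os

CHECKPOINT = "migration.checkpoint"


def load_checkpoint() -> int:
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["last_done"]
    return -1


def save_checkpoint(index: int) -> None:
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"last_done": index}, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename: the checkpoint is never half-written


def migrate_record(record: str) -> None:
    print("migrating", record)  # the real step must itself be idempotent


def run(records: list[str]) -> None:
    start = load_checkpoint() + 1
    for i in range(start, len(records)):
        migrate_record(records[i])
        save_checkpoint(i)


if __name__ == "__main__":
    run([f"record-{n}" for n in range(10)])
```
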
## Dependency Management

(How to handle dependencies on other services / components).

- [ ] Can we tolerate highly variable latency in service calls? Do we have timeout mechanisms in place, and can we retry interactions after a timeout (idempotency)?
- [ ] Are all retries reported, and have we bounded the number of retries?
- [ ] Do we have circuit breakers in place to prevent cascading failures? Do they 'fail fast'? (See the sketch after this list.)
- [ ] Are we depending upon shipping and proven components wherever possible?
- [ ] Have we implemented inter-service monitoring and alerting?
- [ ] Do the services we depend on have the same (or compatible) design points (e.g. SLAs)?
- [ ] Can we continue operation (perhaps in a degraded mode) if a component or service we depend on fails?

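Bounded retries and a circuit breaker are both small amounts of code. The sketch below combines them: a call is retried a fixed number of times with a timeout, and after a run of consecutive failures the breaker opens and fails fast for a cooling-off period. The thresholds, timeouts, and the `call_dependency` helper are illustrative assumptions.

```python
# Bounded retries plus a simple circuit breaker (sketch).
# Thresholds, timeouts, and call_dependency() are illustrative assumptions.
import time
import urllib.request


class CircuitBreaker:
    def __init__(self, failure_threshold=5, reset_after_s=30.0):
        self.failure_threshold = failure_threshold
        self.reset_after_s = reset_after_s
        self.consecutive_failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        # Half-open after the cooling-off period: let a probe call through.
        return time.monotonic() - self.opened_at >= self.reset_after_s

    def record(self, success: bool) -> None:
        if success:
            self.consecutive_failures = 0
            self.opened_at = None
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.failure_threshold:
                self.opened_at = time.monotonic()


breaker = CircuitBreaker()


def call_dependency(url: str, max_retries: int = 3, timeout_s: float = 2.0) -> bytes:
    if not breaker.allow():
        raise RuntimeError("circuit open: failing fast")  # shed load instead of queueing
    last_error = None
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                breaker.record(success=True)
                return resp.read()
        except OSError as e:  # covers timeouts and connection errors
            last_error = e
            print(f"retry {attempt + 1}/{max_retries} after error: {e}")  # report every retry
    breaker.record(success=False)
    raise RuntimeError("dependency call failed after bounded retries") from last_error
```

Retrying is only safe when the underlying operation is idempotent, which is why the checklist item ties timeouts, retries, and idempotency together.
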
## Release Cycle and Testing

- [ ] Are we shipping often enough?
- [ ] Have we defined specific criteria around the intended user experience? Are we continuously monitoring it?
- [ ] Are we collecting the actual numbers rather than just summary reports? Raw data will always be needed for diagnosis.
- [ ] Have we minimized false positives in the alerting system?
- [ ] Are we analyzing trends on key metrics?
- [ ] Is the system health highly visible at all times?
- [ ] Is the system continuously monitored?
- [ ] Can we support version roll-back? Is this tested and proven?
- [ ] Do we support both forward and backward compatibility on every change? (See the sketch after this list.)
- [ ] Can we deploy on a single server to support dev and test?
- [ ] Have we run stress tests?
- [ ] Do we have a process in place to catch performance and capacity degradations in new releases?
- [ ] Are we running tests using real data?
- [ ] Do we have (and run) system-level acceptance tests?
- [ ] Do we have an environment that lets us test at scale, with the same data collection and mining techniques used in production?

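One common way to get forward and backward compatibility is the tolerant-reader pattern: version n ignores fields it does not know, and version n+1 supplies defaults for fields an older writer omitted. The sketch below shows that pattern for a JSON message; the field names and defaults are illustrative assumptions.

```python
# Tolerant reader for a versioned JSON message (sketch).
# Field names and defaults are illustrative assumptions.
import json


def parse_order(raw: str) -> dict:
    msg = json.loads(raw)
    return {
        # Backward compatibility: default any field an older writer omitted.
        "order_id": msg["order_id"],
        "currency": msg.get("currency", "USD"),
        "priority": msg.get("priority", "normal"),
        # Forward compatibility: unknown fields written by a newer version
        # are simply ignored rather than treated as an error.
    }


# A version n writer (no 'priority') and a version n+1 writer (extra field)
# are both accepted by the same reader, so both versions can coexist during rollout.
print(parse_order('{"order_id": 1, "currency": "EUR"}'))
print(parse_order('{"order_id": 2, "priority": "high", "new_field": true}'))
```
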
## Hardware Selection and Standardization

(I deviate from the Hamilton paper here, on the assumption that you'll use at least an IaaS layer).

- [ ] Do we depend only on standard IaaS compute, storage, and network facilities?
- [ ] Have we avoided dependencies on specific hardware features?
- [ ] Have we abstracted the network and naming? (For service discovery; see the sketch after this list.)

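Abstracting naming can start as simply as resolving every dependency's endpoint through a single lookup function, backed by environment variables, DNS, or a discovery service, instead of hardcoding addresses. The sketch below uses environment variables with a DNS fallback; the `SERVICE_<NAME>_ADDR` convention is an illustrative assumption.

```python
# Name-based endpoint resolution (sketch).
# The SERVICE_<NAME>_ADDR environment variable convention is an assumption.
import os
import socket


def resolve(service_name: str, default_port: int) -> tuple[str, int]:
    """Return (host, port) for a logical service name, never a hardcoded address."""
    env_key = f"SERVICE_{service_name.upper()}_ADDR"
    addr = os.environ.get(env_key)
    if addr:
        host, _, port = addr.partition(":")
        return host, int(port) if port else default_port
    # Fall back to DNS, where a discovery system can publish the name.
    return socket.getfqdn(service_name), default_port


# Callers depend on the logical name only; re-pointing 'billing' needs no code change.
host, port = resolve("billing", default_port=443)
print(f"billing service is at {host}:{port}")
```
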
## Operations and Capacity Planning

- [ ] Is there a devops team that takes shared responsibility for both developing and operating the service?
- [ ] Do we always do soft deletes so that we can recover accidentally deleted data? (See the sketch after this list.)
- [ ] Are we tracking resource allocation for every service to understand the correlation between service metrics and underlying infrastructure requirements?
- [ ] Do we have a discipline of only making one change at a time?
- [ ] Is everything that might need to be configured or tuned in production able to be changed without a code change?

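A soft delete marks a row as deleted instead of removing it, so an accidental deletion can be reversed with a single update. The sketch below shows the pattern with SQLite from the Python standard library; the table layout is an illustrative assumption.

```python
# Soft delete (sketch): rows are flagged as deleted, not removed, so they can be restored.
# The table layout is an illustrative assumption.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, body TEXT, deleted_at TEXT)")
db.execute("INSERT INTO documents (id, body) VALUES (1, 'quarterly report')")

# Delete: set a timestamp instead of issuing DELETE.
db.execute("UPDATE documents SET deleted_at = datetime('now') WHERE id = 1")

# Normal reads exclude soft-deleted rows.
print(db.execute("SELECT id FROM documents WHERE deleted_at IS NULL").fetchall())  # []

# Recovery is a single UPDATE rather than a restore from backup.
db.execute("UPDATE documents SET deleted_at = NULL WHERE id = 1")
print(db.execute("SELECT id, body FROM documents WHERE deleted_at IS NULL").fetchall())
```

A background job can purge rows whose `deleted_at` is older than the retention window.
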
## Auditing, Monitoring, and Alerting

- [ ] Are we tracking the alerts:trouble-ticket ratio (goal is near 1:1)?
- [ ] Are we tracking the number of system health issues that don't have corresponding alerts? (goal is near zero)
- [ ] Have we instrumented every customer interaction that flows through the system? Are we reporting anomalies?
- [ ] Do we have sufficient data to understand the normal operating behaviour?
- [ ] Do we have automated testing that takes a customer view of the service?
- [ ] Do we have sufficient instrumentation to detect latency issues?
- [ ] Do we have performance counters for all operations? (at least latency and ops/sec data; see the sketch after this list)
- [ ] Is every operation audited?
- [ ] Do we have individual accounts for everyone who interacts with the system?
- [ ] Are we tracking all fault-tolerant mechanisms to expose failures they may be hiding?
- [ ] Do we have per-entity / entity-specific audit logs?
- [ ] Do we have sufficient assertions in the code base?
- [ ] Are we keeping historical performance and log data?
- [ ] Is logging configurable without needing to redeploy?
- [ ] Are we exposing suitable health information for monitoring?
- [ ] Is every error that we report actionable?
- [ ] Do our problem reports contain enough information to diagnose the problem?
- [ ] Can we snapshot system state for debugging outside of production?
- [ ] Are we recording all significant system actions, both commands sent by users and what the system does internally?

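Per-operation counters need only a few lines if every operation goes through a common wrapper. The sketch below uses a decorator that records call counts and latencies keyed by operation name; the in-memory store and the `dump_counters` report are illustrative assumptions standing in for a real metrics pipeline.

```python
# Per-operation performance counters (sketch): call count and latency per operation.
# The in-memory store stands in for a real metrics pipeline (an assumption).
import functools
import statistics
import time
from collections import defaultdict

_latencies_ms = defaultdict(list)


def instrumented(op_name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                _latencies_ms[op_name].append((time.perf_counter() - start) * 1000)
        return wrapper
    return decorator


def dump_counters():
    for op, samples in _latencies_ms.items():
        print(f"{op}: count={len(samples)} "
              f"median_ms={statistics.median(samples):.2f} "
              f"max_ms={max(samples):.2f}")


@instrumented("lookup_user")
def lookup_user(user_id):
    time.sleep(0.01)  # stand-in for real work
    return {"id": user_id}


for i in range(5):
    lookup_user(i)
dump_counters()
```
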
## Graceful Degradation and Admission Control

- [ ] Do we have a 'big red switch' mechanism to keep vital processing online while shedding or delaying non-critical workload?
- [ ] Have we implemented admission control?
- [ ] Can we meter admission to slowly bring a system back up after a failure? (See the sketch after this list.)

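Admission control, the 'big red switch', and metered recovery can share one front door: every request passes a gate that rejects non-critical work when the switch is thrown and otherwise admits requests at a configurable rate. The sketch below uses a token bucket; the rate numbers and the `critical` flag are illustrative assumptions.

```python
# Admission gate (sketch): a token bucket meters admissions, and a 'big red switch'
# sheds non-critical work entirely. Rates and the critical flag are assumptions.
import time


class AdmissionGate:
    def __init__(self, rate_per_s: float, burst: int):
        self.rate_per_s = rate_per_s   # raise gradually to bring the system back up
        self.burst = burst
        self.tokens = float(burst)
        self.last_refill = time.monotonic()
        self.big_red_switch = False    # when True, only critical work is admitted

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last_refill) * self.rate_per_s)
        self.last_refill = now

    def admit(self, critical: bool = False) -> bool:
        if self.big_red_switch and not critical:
            return False               # shed non-critical load outright
        self._refill()
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                   # over the metered rate: reject rather than queue


gate = AdmissionGate(rate_per_s=2, burst=5)
gate.big_red_switch = True
print(gate.admit(critical=False))  # False: shed while the switch is on
print(gate.admit(critical=True))   # True: vital processing stays online
```
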
## Customer and Press Communication Plan

- [ ] Do we have a communications plan in place for issues such as wide-scale system unavailability, data loss or corruption, security breaches, privacy violations, etc.?

## Customer Self-Provisioning and Self-Help

- [ ] Can customers self-provision and self-help?