a guest
Jan 21st, 2020
TA1 Collaboration Items
closer collaboration to steer technology and architecture
engage critically with how they are representing systems and relationships: how can these be more intuitive and human-readable? This isn't their area of expertise... how can we help Dennis model?
CCML is insufficient for showing relationships, associating properties, and relating to databases
tell TA1 what information we've captured / are expecting they will capture and represent
Goals:
 
review error items from Exercise 1 with TA1 to see if problems have been resolved (listed below)
passwords remaining as default
unspecified changes in configuration output based on TA2 CCML configuration-selection input
unmodeled systems and/or services
make our needs for the Phase II model clear and ensure they are feasible for TA1
modeling of non-IP-based communications
modeling of non-traditional OT assets
We want to collaborate with TA1 to help direct the development of CCML and ensure that it adequately represents the configurations we have evaluated, particularly those that result in a vulnerability. We will do this both by reviewing what progress has been made since the phase 1 exercise on the inefficiencies that were noted, and by engaging in conversations as we enter phase 2 to continuously communicate our expectations of the configurations and attributes that need to be modeled. This will ensure that TA2 has the appropriate information to reason over the CCML model and choose configuration selections.
 
Phase I Debrief
Have you made it so that the ConfINE system automatically updates default passwords to a secure password?
Have you added CCML to represent the Windows or Linux workstations? The BCAS?
What have you added to the CCML since the exercise?
Have you continued testing ConfINE's ability to faithfully implement configuration selections?
What happens if you feed it a blank configuration selection? Do any of the configuration files change?
Do you have any thoughts or questions based on the phase 1 exercise?
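The blank-selection check above is easy to script. A minimal sketch, assuming the managed configuration files live in one directory we can snapshot before and after a ConfINE run (the directory layout is a hypothetical placeholder):

```python
import hashlib
from pathlib import Path


def snapshot(config_dir: str) -> dict[str, str]:
    """Map each config file path to a SHA-256 digest of its contents."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(config_dir).rglob("*"))
        if p.is_file()
    }


def changed_files(before: dict[str, str], after: dict[str, str]) -> list[str]:
    """Files added, removed, or modified between two snapshots."""
    paths = set(before) | set(after)
    return sorted(p for p in paths if before.get(p) != after.get(p))
```

A blank configuration selection should leave `changed_files(before, after)` empty; any entries are unspecified changes worth flagging back to TA1.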
Phase II Preparations
Have you begun modeling the phase 2 testbed in CCML, even though specifics have not been released? Can you send what you've modeled?
We plan to continue looking at several vulnerabilities from phase 1, as well as new vulnerabilities relevant to the phase 2 testbed; phase 1 vulnerabilities that we will continue to test will be:
weak / default authentication
unnecessary services, including remote access services
firewall(s) with device-based whitelisting
insecure protocols, particularly when a more secure protocol is available
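As a concrete illustration of the first regression category, a minimal sketch of a default-credential check, assuming a device inventory of name/username/password records and a small list of well-known factory defaults (both the inventory format and the default list are hypothetical and illustrative, not exhaustive):

```python
# Well-known factory-default credential pairs (illustrative, not exhaustive).
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "user"),
}


def find_default_credentials(inventory: list[dict]) -> list[str]:
    """Return names of devices still using a factory-default login."""
    return [
        d["name"]
        for d in inventory
        if (d.get("username"), d.get("password")) in DEFAULT_CREDENTIALS
    ]
```

Running this over the evaluated testbed's credential inventory turns "passwords remaining as default" into a mechanical finding rather than a manual spot check.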
TA2 Collaboration Items
highly dependent on ptolemy/prologue/CCML models and subsystems; no transition partner will want to create a high-fidelity model of all subsystems
TA2 wants weights given to them; weights won't be available... they need some background knowledge and reasoning to weight their own edges, since not everything will be given to them in the CCML
need to actually study and understand TA2 technologies; where are they good? where do they fail? how can we close the gap? how can we build tests to highlight what they are good at and what they still need to grow in? need to understand where these areas are for each system
ask critical questions
develop tests to demonstrate capabilities and lack thereof
educate in security aspects we expect them to model
not all automated understanding is on TA1; TA2 needs to do its own reasoning and research (SCIBORG started this)
build knowledge bases for firmware versions, best practices, known CVEs, protocols, etc., and grow their own reasoning capabilities
Goals:
 
review shortcomings of TA2 reasoning in Exercise 1 (performer-specific)
communicate expectations for continued regression testing in phase 2
We want to collaborate with TA2 to ensure the appropriate areas of vulnerability are being focused on and addressed. Because each performer had different challenges associated with the execution of their code, there are tailored remarks and follow-up questions to gauge the progress that is being made. The goal is to ensure TA2s are creating software that effectively meets the needs of the program and overcomes the difficulties experienced at Exercise 1.
 
Phase I Debrief
In the aftermath of phase 1 testing and the continued integration that TA2 did with the new requirement from TA4 (about the algae system), it would be helpful to have a follow-up conversation with each of the TA2 performers to ask some of the following questions:
 
Did you ever receive your system-under-test report? Did it make sense? Did you have any questions or feedback?
Based on the testing that occurred and subsequent reporting, what areas of growth have you identified and what progress has been made?
How did the requirements regarding the bioreactor go for you?
Have you changed how your program addresses security requirements at all? Do you have any questions or comments about security requirements?
Have you performed regression tests using the ConfINE system to test its implementation of your configurations?
Would you like to send me a copy of your CCML and/or latest program for review or comment?
Performer-specific questions and comments:
MASON:
Is your program now able to generate configurations for all the devices?
How long does it take to process the full CCML?
Does it generate different CCML configuration-selection output for different modes?
Does it generate any firewall-based rules?
OCCAM:
Did you see our feedback regarding the OpenSprinkler being enabled? Is it now enabled in business and audit modes?
Have you changed how you handle firewalls? Any additional firewall rules?
Are the firewall rules for the pfSense and OpenWrt distinct?
SYGESEC:
Did you correct the problem that caused only shutdown mode to generate effective CCML configuration-selections? Does SYGESEC effectively generate CCML for all modes now?
Of the three outputs that were created for shutdown mode, what features are intended for 1, 2, and 3? Which is the most secure?
When we evaluated the system, no firewall rules were present in the CCML; is this still the case, or do you now generate firewall rules?
What was the reasoning behind setting ports to 0, -1, or -2? Does it still do this?
When we evaluated the system, SYGESEC did not turn any unnecessary devices to "disabled"; is this still the case, or does it now turn devices "off" when they are unnecessary (e.g. BrewPi and OpenSprinkler in shutdown mode)?
The alarm length had been set to "0", which disables the alarm; has this been rectified?
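Invalid port values like the 0, -1, and -2 above are easy to catch mechanically during regression testing. A minimal sketch of a port sanity check over configuration-selection output, assuming ports arrive as a service-name-to-integer mapping (the input format is a hypothetical assumption):

```python
def invalid_ports(ports: dict[str, int]) -> dict[str, int]:
    """Return service-to-port entries outside the valid TCP/UDP range 1-65535.

    Port 0 is reserved and negative values are never valid, so entries
    like 0, -1, or -2 in a configuration selection indicate a bug.
    """
    return {name: p for name, p in ports.items() if not 1 <= p <= 65535}
```

Running this over every generated configuration selection would flag outputs like the SYGESEC ones immediately, before manual evaluation.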
SCIBORG:
Do you output CCML configuration-selections yet? Would you like us to review those? This is an important step for future phases. We will no longer evaluate performers who do not output CCML.
It didn't seem that SCIBORG created any firewall rules; is this still the case, or can it reason about necessary traffic and create appropriately restrictive rules?
Phase II Preparations
In preparation for phase 2 exercises, note that we plan to continue testing many of the major categories introduced in the phase 1 testbed while introducing and integrating new areas of vulnerability to be reasoned about.
 
In particular, we will be doing regression testing on:
 
weak / default authentication
unnecessary services, including remote access services
firewall(s) with device-based whitelisting
insecure protocols, particularly when a more secure protocol is available
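The last regression category can also be screened automatically. A minimal sketch, assuming a per-device list of enabled services and a hand-maintained map of insecure protocols to the secure counterparts we expect instead (the map below is illustrative, not exhaustive):

```python
# Insecure protocols and the secure alternative we expect instead (illustrative).
SECURE_ALTERNATIVE = {
    "telnet": "ssh",
    "ftp": "sftp",
    "http": "https",
    "snmpv1": "snmpv3",
}


def insecure_services(services: list[str]) -> dict[str, str]:
    """Map each enabled insecure protocol to the secure protocol expected instead."""
    found = {}
    for s in services:
        alt = SECURE_ALTERNATIVE.get(s.lower())
        if alt:
            found[s] = alt
    return found
```

Any hit is a finding under "insecure protocols, particularly when a more secure protocol is available"; the suggested replacement makes the expected mitigation explicit in the report.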
Are there any questions or comments that you, as another performer, have as we approach phase 2 regarding expectations or our plan? We hope to engage more regularly with the TA1/TA2 performers about the vulnerabilities we have identified and expect to be mitigated.
 
TA4 Collaboration Items
More controls: how are we going to do the evaluation this time? The same way? How can we control it better?
What metrics are we going to use and how are we going to measure them?
How are we helping with their test plan to make sure they're not cutting corners?
Help shape testbed and requirements
How are requirements specified for this level of system?
The current level of requirements is nowhere near enough to get to transition level; we need to mature fast and have a complicated system. Suck it up, TAs, and let's work on actually modeling real, difficult things.
MET framework may not be sufficient for phase 2 and 3 testing (what more would we want?)