a guest
Jun 27th, 2017
Task 1:
a) The Internet originally consisted of only four end systems. It has
since grown considerably, making the pure client-server model obsolete
in many cases. This increase in end systems has led to increasingly
widespread adoption of distributed models of computing and
communication. It has also made an extension of the IP address space
unavoidable.
b) There are many different areas where new technology is needed.
* Back in the day, only big mainframes had Internet access. The
Internet was not as accessible to the general public, making security
a lesser priority. The increase in personal computing and personal
Internet access entails a whole new array of demands, such as privacy,
security, reliability, and scalability.
* Many (mobile) devices have Internet connectivity nowadays. Hence,
wireless technology, corresponding protocols, and scalability
solutions are necessary. This extends to end systems as well as
infrastructure.
* Nowadays, the Internet is often used for personal purposes, such as
social networking, as opposed to being a research or corporate
network.
* Data transmission has changed in several areas. For mobility
reasons, many devices use wireless technologies. Cable-bound protocols
have also changed significantly, with respect to efficiency and a
growing number of end systems (users); see IPv6.
c) By far the most common model for personal Internet access is ADSL.
This skews the traffic on personal end systems towards
downstream-heavy applications. For many applications this makes sense,
as the user generally wants to pull information from some remote
source, e.g. a web server or "the cloud". But especially in recent
years, upstream has become more and more relevant for users, since the
focus is often placed on user-contributed data. This comes in many
forms, for example uploading a photo album to a social network or
doing live (potentially mobile) audio (and often even video)
streaming. With increasing use of such features, the capacities of
centralized resources, e.g. web servers, are used up very quickly,
which is why many services have started incorporating P2P technology.
Many video streaming services use the upstream bandwidth of the users
(peers) to broadcast their data in order to lessen resource
requirements on their backends. This also holds for intrinsically
person-to-person services, like instant messengers, VoIP calls, etc.
Modern P2P file-sharing networks are pretty much the epitome of this
development, trading distributed usage of resources for increased
robustness of the network (eliminating central servers as a single
point of failure).
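The offloading argument above can be illustrated with the standard
textbook lower bounds on file-distribution time under client-server
versus P2P. This is only a sketch; the file size, peer count, and link
rates below are made-up assumptions:

```python
# Lower bounds on the time to distribute a file of F bits to N peers.
# Client-server: the server alone must push out all N copies.
# P2P: the peers' upstream bandwidth adds to the total serving capacity.

def t_client_server(F, N, u_s, d_min):
    # Server upload u_s is the only source; d_min is the slowest download.
    return max(N * F / u_s, F / d_min)

def t_p2p(F, N, u_s, d_min, u_peer):
    # Peers with upload rate u_peer help redistribute the file.
    return max(F / u_s, F / d_min, N * F / (u_s + N * u_peer))

F = 8e9        # 1 GB file, in bits (assumed)
N = 1000       # number of peers (assumed)
u_s = 100e6    # server upload: 100 Mbit/s (assumed)
d_min = 10e6   # slowest peer download: 10 Mbit/s (assumed)
u_peer = 1e6   # typical ADSL upstream: 1 Mbit/s (assumed)

print(t_client_server(F, N, u_s, d_min) / 3600)  # ~22.2 hours
print(t_p2p(F, N, u_s, d_min, u_peer) / 3600)    # ~2.0 hours
```

Even with modest ADSL upstream rates, the aggregate peer upload
quickly dwarfs the server's capacity, which is exactly why
distribution services offload work onto peers.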

Task 2:
a) * Scalability:
Modern Internet applications generally have many users and heavy
interaction between those users. This is very resource-intensive,
especially in a centralized model. The problem cannot always be
remedied by throwing money at it (i.e. buying more servers), which
often makes a deeper analysis necessary. New protocols, more efficient
procedures, and potentially P2P technology are often needed.
* Security:
The Internet being very much open to the public introduces a whole new
category of security considerations. Protocols have to be designed
robustly and implemented in a secure fashion. Authentication
technologies are indispensable, and a whole new class of attacks, e.g.
DDoS attacks, has to be dealt with properly. The increasing
proliferation of "social networking" aspects in Internet applications
also raises issues of privacy.
* Reliability:
The application should always be accessible and should behave as
expected. This means that single points of failure should be
eliminated or at least minimized. Also, the application should be
built in a robust fashion, so that it can withstand unexpected
failures of components without degrading the user experience.
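The reliability point can be made quantitative with a
back-of-the-envelope availability estimate; the 99% single-machine
uptime below is an assumed figure for illustration:

```python
# A service built on n independent replicas is unavailable only if all
# n fail at once, so redundancy multiplies the failure probabilities.

def availability(a, n):
    # Probability that at least one of n independent replicas is up.
    return 1 - (1 - a) ** n

print(availability(0.99, 1))  # one server: down about 1% of the time
print(availability(0.99, 3))  # three replicas: down about 0.0001% of the time
```

This also shows why redundancy has to be engineered rather than
bolted on: the estimate only holds if the replicas really fail
independently.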
b) Obviously, scalability and reliability are closely linked.
Increasing reliability, for example by introducing redundancies, makes
scalability harder, because the redundancy solutions have to scale as
well. Redundancies can also pose a security risk, because the
failure-mitigation procedures could potentially be exploited; thus a
more reliable system is often harder to make secure. Security and
scalability are also somewhat related, as security constraints forbid
certain designs that would make an application more scalable: for
example, potentially sensitive user data cannot necessarily be stored
in a distributed system because of privacy issues. Also, security
technologies such as SSL/TLS cause additional overhead, further
complicating the challenge of scalability.
Furthermore, none of these challenges is necessarily more important
than any other, because failing to overcome any one of them will cause
the entire application to fail. But as they all influence each other,
there cannot be a perfect solution; a compromise has to be made.
c) The problems of security and scalability are very important ones.
If users cannot be, or are not, confident in the security and most of
all the privacy of their data, they will be less willing to submit it,
potentially causing loss of income. Also, the service must be
scalable, because the goal is to get as many users as possible using
the service as much as possible. Thus, if the application is not
scalable, its success will be its failure. Long waiting times,
maintenance downtimes, etc. all degrade the user experience and are
thus detrimental to my income.

Task 3:
a) Even though most end users nowadays have flat rates, traffic is
expensive for service providers. By letting the consumers redistribute
the content, the service can save a significant amount of money: every
megabyte one peer uploads to another is a megabyte the service
provider doesn't have to pay for. Similar things can be said about CPU
time for CPU-heavy P2P systems. These advantages transfer to the end
user as well, by virtue of a potentially faster and more responsive
service (especially in data-distribution applications).
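The megabyte-for-megabyte argument can be sketched as a small cost
model; all the numbers (demand, peer share, price per GB) are
assumptions for illustration only:

```python
# Sketch of the provider's traffic-cost savings from P2P redistribution:
# every byte a peer uploads is a byte the provider's servers don't serve.

def origin_cost(total_gb, peer_served_gb, price_per_gb):
    # The provider pays only for the bytes its own servers deliver.
    return (total_gb - peer_served_gb) * price_per_gb

total_gb = 10_000   # GB delivered to users in some period (assumed)
peer_gb = 7_000     # GB redistributed peer-to-peer (assumed)
price = 0.05        # server egress price in $/GB (assumed)

print(origin_cost(total_gb, 0, price))        # pure client-server cost
print(origin_cost(total_gb, peer_gb, price))  # cost with 70% P2P offload
```

The savings scale linearly with the fraction of traffic the peers
carry, which is why the incentive grows with the popularity of the
content.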
b) Distributing data is one of the best use cases for P2P systems.
Using BitTorrent, for example, to distribute "World of Warcraft"
patches or Linux distributions has almost no downsides for consumers
and only upsides (minimized traffic costs) for the provider. The
client/server model, though, still has its upsides when it comes to
latency, or in cases where one needs absolute control and 100%
consistency of the data.
c) Managing the balance of a Skype account in the cloud might not be a
good idea, as users could easily change it, if not for themselves,
then at least for other users. Storing the username in the cloud would
be rather complicated, because there has to be a way to look up and
exchange public keys. (And one doesn't want to require a second
communication tool, like email, to exchange that information.)
However, the most important reason for Skype to rely on central
servers is to keep control over the system. (This, and some weird
obfuscation of the protocol.)