IoC:

Inversion of control (IoC) describes a design in which custom-written portions of a computer program receive the flow of control from a generic, reusable library. A software architecture with this design inverts control as compared to traditional procedural programming: in traditional programming, the custom code that expresses the purpose of the program calls into reusable libraries to take care of generic tasks, but with inversion of control, it is the reusable code that calls into the custom, or task-specific, code.

Inversion of Control is an abstract principle, a set of guidelines for writing loosely coupled code. Its essence is that every component of a system should be as isolated from the others as possible, never relying on the implementation details of other components.
Dependency Injection is one implementation of this principle (others include Factory Method and Service Locator).
An IoC container is a library or framework that simplifies and automates writing code in this style as far as possible.

DIP:

High-level modules should not depend on low-level modules. Both should depend on abstractions.
Abstractions should not depend upon details. Details should depend upon abstractions.

SOLID

S SRP [4]
Single responsibility principle
A class should have only a single responsibility (i.e. only one potential change in the software's specification should be able to affect the specification of the class).
O OCP [5]
Open/closed principle
"Software entities ... should be open for extension, but closed for modification."
L LSP [6]
Liskov substitution principle
"Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program." See also design by contract.
I ISP [7]
Interface segregation principle
"Many client-specific interfaces are better than one general-purpose interface."[8]
D DIP [9]
Dependency inversion principle
One should "Depend upon Abstractions. Do not depend upon concretions."[8]
Dependency injection is one method of following this principle; Factory Method and Service Locator are others (see the sketch below).
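
A minimal sketch of dependency injection following DIP (the ReportService, Repository and SqlRepository names are made up for illustration):

// The high-level module depends only on this abstraction (DIP).
interface Repository {
    String load(int id);
}

// A low-level detail; any other implementation can be swapped in.
class SqlRepository implements Repository {
    public String load(int id) { return "row-" + id; }
}

// The dependency is injected from outside, by hand or by an IoC container.
class ReportService {
    private final Repository repository;

    ReportService(Repository repository) { this.repository = repository; }

    String report(int id) { return "Report: " + repository.load(id); }
}

class Wiring {
    public static void main(String[] args) {
        // The "container" here is plain code; Spring/Guice would do the same wiring.
        ReportService service = new ReportService(new SqlRepository());
        System.out.println(service.report(1));
    }
}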


ACID

Atomicity
Atomicity requires that each transaction be "all or nothing": if one part of the transaction fails, the entire transaction fails, and the database state is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors, and crashes. To the outside world, a committed transaction appears (by its effects on the database) to be indivisible ("atomic"), and an aborted transaction does not happen.

Consistency
The consistency property ensures that any transaction will bring the database from one valid state to another. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This does not guarantee correctness of the transaction in all ways the application programmer might have wanted (that is the responsibility of application-level code) but merely that any programming errors cannot result in the violation of any defined rules.

Isolation
The isolation property ensures that the concurrent execution of transactions results in a system state that would be obtained if transactions were executed serially, i.e., one after the other. Providing isolation is the main goal of concurrency control. Depending on the concurrency control method, the effects of an incomplete transaction might not even be visible to another transaction.

Durability
Durability means that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements executes, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in non-volatile memory.
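
What atomicity looks like from application code - a minimal JDBC sketch (the connection URL, table and amounts are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Either both updates take effect, or neither does.
static void transfer() throws SQLException {
    try (Connection con = DriverManager.getConnection("jdbc:example:db")) {
        con.setAutoCommit(false);      // group the statements into one transaction
        try (Statement st = con.createStatement()) {
            st.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
            st.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
            con.commit();              // both changes become durable together
        } catch (SQLException e) {
            con.rollback();            // atomicity: undo the partial work
            throw e;
        }
    }
}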


DOM - pulls the whole document into memory and lets you walk around inside it. Good for comparatively small chunks of XML that you want to do complex stuff with. XSLT uses DOM.

SAX - walks the XML as it arrives, watching for things as they fly past. Good for large amounts of data or comparatively simple processing.

StAX - much like SAX, but instead of responding to events found in the stream you iterate through the XML yourself.

XPath, the XML Path Language, is a query language for selecting nodes from an XML document. In addition, XPath may be used to compute values (e.g., strings, numbers, or Boolean values) from the content of an XML document. XPath was defined by the World Wide Web Consortium (W3C).[1]
Example expression: A//B/*[1]
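
Evaluating such an expression from Java with the standard javax.xml.xpath API (a sketch; books.xml is a placeholder file name):

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

static NodeList select() throws Exception {
    // Parse into a DOM tree, then query it with XPath.
    Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse("books.xml");
    XPath xpath = XPathFactory.newInstance().newXPath();
    // A//B/*[1]: the first child element of every B that is a descendant of A.
    return (NodeList) xpath.evaluate("A//B/*[1]", doc, XPathConstants.NODESET);
}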

XSLT (Extensible Stylesheet Language Transformations) is a language for transforming XML documents into other XML documents,[1] or into other formats such as HTML for web pages, plain text, or XSL Formatting Objects, which may subsequently be converted to other formats such as PDF, PostScript and PNG.[2]


REST security:
- Basic Authentication w/ TLS
- OAuth 1.0/2.0
- Custom
- API keys (tokens)

REST idempotent methods:

HTTP Method   Idempotent   Safe
OPTIONS       yes          yes
GET           yes          yes
HEAD          yes          yes
PUT           yes          no
POST          no           no
DELETE        yes          no
PATCH         no           no

Safe methods are HTTP methods that do not modify resources.


OAuth is an open standard for authorization. OAuth provides client applications 'secure delegated access' to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials.

http://geektimes.ru/post/77648/
OAuth is a protocol for authorized access to a third-party API. OAuth lets a Consumer's script get limited API access to the data of a third-party Service Provider, if the User approves. In other words, it is a means of accessing an API.

SSO
Benefits of using single sign-on include:

Reducing password fatigue from different user name and password combinations
Reducing time spent re-entering passwords for the same identity
Reducing IT costs due to lower number of IT help desk calls about passwords[3]
SSO shares centralized authentication servers that all other applications and systems use for authentication purposes and combines this with techniques to ensure that users do not have to actively enter their credentials more than once.

CAS
CAS is a single sign-on protocol for the web.
The CAS protocol involves at least three parties: a client web browser, the web application requesting authentication, and the CAS server.

When the client visits an application desiring to authenticate to it, the application redirects it to CAS. CAS validates the client's authenticity, usually by checking a username and password against a database (such as Kerberos, LDAP or Active Directory).

If the authentication succeeds, CAS returns the client to the application, passing along a security ticket. The application then validates the ticket by contacting CAS over a secure connection and providing its own service identifier and the ticket. CAS then gives the application trusted information about whether a particular user has successfully authenticated.


Scalability

Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth.
A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system.

To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application. An example might involve scaling out from one Web server system to three. As computer prices have dropped and performance continues to increase, high-performance computing applications such as seismic analysis and biotechnology workloads have adopted low-cost "commodity" systems for tasks that once would have required supercomputers. System architects may configure hundreds of small computers in a cluster to obtain aggregate computing power that often exceeds that of computers based on a single traditional processor. The development of high-performance interconnects such as Gigabit Ethernet, InfiniBand and Myrinet further fueled this model. Such growth has led to demand for software that allows efficient management and maintenance of multiple nodes, as well as hardware such as shared data storage with much higher I/O performance. Size scalability is the maximum number of processors that a system can accommodate.[4]
To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer. Such vertical scaling of existing systems also enables them to use virtualization technology more effectively, as it provides more resources for the hosted set of operating system and application modules to share. Taking advantage of such resources can also be called "scaling up", such as expanding the number of Apache daemon processes currently running. Application scalability refers to the improved performance of running applications on a scaled-up version of the system.[4]


Partitioning
Partitioning refers to splitting what is logically one large table into smaller physical pieces. Partitioning can provide several benefits:

Query performance can be improved dramatically in certain situations, particularly when most of the heavily accessed rows of the table are in a single partition or a small number of partitions. The partitioning substitutes for leading columns of indexes, reducing index size and making it more likely that the heavily-used parts of the indexes fit in memory.

When queries or updates access a large percentage of a single partition, performance can be improved by taking advantage of sequential scan of that partition instead of using an index and random access reads scattered across the whole table.

Seldom-used data can be migrated to cheaper and slower storage media.

One technique supported by most of the major database management system (DBMS) products is the partitioning of large tables based on ranges of values in a key field. In this manner, the database can be scaled out across a cluster of separate database servers. Also, with the advent of 64-bit microprocessors, multi-core CPUs, and large SMP multiprocessors, DBMS vendors have been at the forefront of supporting multi-threaded implementations that substantially scale up transaction processing capacity.

Partition
A partition is a division of a logical database or its constituent elements into distinct independent parts. Database partitioning is normally done for manageability, performance or availability reasons.

Horizontal partitioning (also see shard) involves putting different rows into different tables. Perhaps customers with ZIP codes less than 50000 are stored in CustomersEast, while customers with ZIP codes greater than or equal to 50000 are stored in CustomersWest. The two partition tables are then CustomersEast and CustomersWest, while a view with a union might be created over both of them to provide a complete view of all customers.

Vertical partitioning involves creating tables with fewer columns and using additional tables to store the remaining columns.[1] Normalization also involves this splitting of columns across tables, but vertical partitioning goes beyond that and partitions columns even when already normalized. Different physical storage might be used to realize vertical partitioning as well; storing infrequently used or very wide columns on a different device, for example, is a method of vertical partitioning. Done explicitly or implicitly, this type of partitioning is called "row splitting" (the row is split by its columns). A common form of vertical partitioning is to split dynamic data (slow to find) from static data (fast to find) in a table where the dynamic data is not used as often as the static. Creating a view across the two newly created tables restores the original table with a performance penalty; however, performance will increase when accessing the static data, e.g. for statistical analysis.

A database can be split vertically (partitioning) or horizontally (sharding).

Vertical splitting (partitioning): the database can be split into multiple loosely coupled sub-databases based on domain concepts, e.g. a Customer database, a Product database, etc. Another way to split a database is by moving a few columns of an entity to one database and a few other columns to another, e.g. Customer database, Customer contact info database, Customer orders database.

Horizontal splitting (sharding): the database can be horizontally split into multiple databases based on some discrete attribute, e.g. American customers database, European customers database.

Transitioning from a single database to multiple databases using partitioning or sharding is a challenging task.

Shard
A database shard is a horizontal partition of data in a database or search engine. Each individual partition is referred to as a shard or database shard. Each shard is held on a separate database server instance, to spread load.
Some data within a database remains present in all shards, but some only appears in a single shard. Each shard (or server) acts as the single source for this subset of data.

Normalization
Database normalization is the process of organizing the attributes and tables of a relational database to minimize data redundancy.
Normalization involves refactoring a table into smaller (and less redundant) tables without losing information, defining foreign keys in the old table referencing the primary keys of the new ones. The objective is to isolate data so that additions, deletions, and modifications of an attribute can be made in just one table and then propagated through the rest of the database using the defined foreign keys.

Clusters which provide "lazy" redundancy by updating copies in an asynchronous fashion are called 'eventually consistent'.

Load balancing
Load balancing distributes workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units or disk drives. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource. Using multiple components with load balancing instead of a single component may increase reliability through redundancy.

Replication
Replication in computing involves sharing information so as to ensure consistency between redundant resources, such as software or hardware components, to improve reliability, fault-tolerance, or accessibility.
Access to a replicated entity is typically uniform with access to a single, non-replicated entity. The replication itself should be transparent to an external user. Also, in a failure scenario, a failover of replicas is hidden as much as possible.

Database replication becomes difficult when it scales up.

Multi-master replication is a method of database replication which allows data to be stored by a group of computers, and updated by any member of the group. All members are responsive to client data queries. The multi-master replication system is responsible for propagating the data modifications made by each member to the rest of the group, and resolving any conflicts that might arise between concurrent changes made by different members.
Multi-master replication can be contrasted with master-slave replication, in which a single member of the group is designated as the "master" for a given piece of data and is the only node allowed to modify that data item. Other members wishing to modify the data item must first contact the master node. Allowing only a single master makes it easier to achieve consistency among the members of the group, but is less flexible than multi-master replication.
The primary purposes of multi-master replication are increased availability and faster server response time.


Distributed database
A distributed database is a database in which storage devices are not all attached to a common processing unit such as the CPU,[1] controlled by a distributed database management system.
Advantages:
- Management of distributed data with different levels of transparency: network transparency, fragmentation transparency, replication transparency, etc.
- Increased reliability and availability
- Easier expansion
- Reflects organizational structure: database fragments can be stored within the departments they relate to
- Local autonomy or site autonomy: a department can control the data about itself (as it is the one familiar with it)
- Protection of valuable data: in a catastrophic event such as a fire, the data would not all be in one place, but distributed across multiple locations

An RDBMS database can be scaled by running in master-slave mode, with reads/writes on the master database and only reads on slave databases. Master-slave provides limited scaling of reads, beyond which developers have to split the database into multiple databases.

Fault tolerance
Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of (or one or more faults within) some of its components.


CAP theorem

Fowler's interpretation: partition tolerance forces a choice between the other two properties.

             ____ Consistency
            |
Partition --|
            |____ Availability

Reliability:
A system needs to be reliable, such that a request for data will consistently return the same data. In the event the data changes or is updated, then that same request should return the new data. Users need to know that if something is written to the system, or stored, it will persist and can be relied on to be in place for future retrieval.

MapReduce
A MapReduce program is composed of a Map() procedure that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() procedure that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies). The "MapReduce System" (also called "infrastructure" or "framework") orchestrates the processing by marshalling the distributed servers, running the various tasks in parallel, managing all communications and data transfers between the various parts of the system, and providing for redundancy and fault tolerance.
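
The same map-then-reduce shape in plain Java streams - an illustration of the programming model only, not a distributed framework:

import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class WordCount {
    public static void main(String[] args) {
        String[] docs = { "a b a", "b c" };

        // Map: split documents into words; group by key; Reduce: count per key.
        Map<String, Long> frequencies = Arrays.stream(docs)
                .flatMap(doc -> Arrays.stream(doc.split(" ")))
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));

        System.out.println(frequencies);   // {a=2, b=2, c=1}
    }
}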

I/O
Non-blocking I/O is a form of input/output processing that permits other processing to continue before the transmission has finished.
Any task that depends on the I/O having completed (this includes both using the input values and critical operations that claim to assure that a write operation has been completed) still needs to wait for the I/O operation to complete, and thus is still blocked, but other processing that does not have a dependency on the I/O operation can continue.


Aggregation implies a relationship where the child can exist independently of the parent. Example: Class (parent) and Student (child). Delete the Class and the Students still exist.

Composition implies a relationship where the child cannot exist independently of the parent. Example: House (parent) and Room (child). Rooms don't exist separately from a House.

The above two are forms of containment (hence the parent-child relationships).

Dependency is a weaker form of relationship and in code terms indicates that a class uses another by parameter or return type.

Dependency is a form of association.


Memory barrier
A memory barrier, also known as a membar, memory fence or fence instruction, is a type of barrier instruction that causes a central processing unit (CPU) or compiler to enforce an ordering constraint on memory operations issued before and after the barrier instruction. This typically means that operations issued prior to the barrier are guaranteed to be performed before operations issued after the barrier.
Memory barriers are necessary because most modern CPUs employ performance optimizations that can result in out-of-order execution. This reordering of memory operations (loads and stores) normally goes unnoticed within a single thread of execution, but can cause unpredictable behaviour in concurrent programs and device drivers unless carefully controlled. The exact nature of an ordering constraint is hardware dependent and defined by the architecture's memory ordering model. Some architectures provide multiple barriers for enforcing different ordering constraints.


// Classic generics question - this does not compile, because generics are invariant:
List<Object> lo = new ArrayList<String>();   // compile error
lo.add(5);   // if the assignment were allowed, an Integer would end up in a List<String>

Generics


Type erasure:
The compiled generic code actually just uses java.lang.Object wherever you talk about T.
Generics are checked at compile-time for type-correctness. The generic type information is then removed in a process called type erasure. For example, List<Integer> will be converted to the non-generic type List, which ordinarily contains arbitrary objects. The compile-time check guarantees that the resulting code is type-correct.
As a consequence of type erasure, type parameters cannot be determined at run-time.
The Java run-time environment does not need to know which parameterized type is used, because the type information is validated at compile-time and is not included in the compiled code.
A type parameter cannot be used in the declaration of static variables or in static methods.
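
A small sketch of erasure in action - the type parameter is gone at run-time, so both lists share one runtime class:

import java.util.ArrayList;
import java.util.List;

List<String>  strings  = new ArrayList<>();
List<Integer> integers = new ArrayList<>();

// Both are plain java.util.ArrayList at run-time: the type parameter was erased.
System.out.println(strings.getClass() == integers.getClass());   // true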



currying
implicit

Definition of Done

Stopping a thread: cooperatively, via a boolean flag or the thread's interrupted status (see the sketch below)
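
A minimal sketch of cooperative cancellation with a volatile flag (checking the interrupted status as well, for blocking work):

class Worker implements Runnable {
    private volatile boolean running = true;   // volatile: visible across threads

    public void stop() { running = false; }    // request cancellation

    @Override
    public void run() {
        while (running && !Thread.currentThread().isInterrupted()) {
            // ... do a unit of work ...
        }
        // clean up and fall off run(): the thread ends normally
    }
}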


Message queue

1. Request-reply connects a set of clients to a set of services. This is a remote procedure call and task distribution pattern.
2. Publish-subscribe connects a set of publishers to a set of subscribers. This is a data distribution pattern.
3. Push-pull connects nodes in a fan-out / fan-in pattern that can have multiple steps, and loops. This is a parallel task distribution and collection pattern.
4. Exclusive pair connects two sockets in an exclusive pair. This is a low-level pattern for specific, advanced use cases.

There are often numerous options as to the exact semantics of message passing, including:

Durability - messages may be kept in memory, written to disk, or even committed to a DBMS if the need for reliability indicates a more resource-intensive solution.
Security policies - which applications should have access to these messages?
Message purging policies - queues or messages may have a "time to live".
Message filtering - some systems support filtering data so that a subscriber may only see messages matching some pre-specified criteria of interest.
Delivery policies - do we need to guarantee that a message is delivered at least once, or no more than once?
Routing policies - in a system with many queue servers, which servers should receive a message or a queue's messages?
Batching policies - should messages be delivered immediately? Or should the system wait a bit and try to deliver many messages at once?
Queuing criteria - when should a message be considered "enqueued"? When one queue has it? Or when it has been forwarded to at least one remote queue? Or to all queues?
Receipt notification - a publisher may need to know when some or all subscribers have received a message.

A synchronous message queue is used for RPC.


A message broker is an architectural pattern for message validation, transformation and routing.[1] It mediates communication among applications, minimizing the mutual awareness that applications should have of each other in order to be able to exchange messages, effectively implementing decoupling.

The purpose of a broker is to take incoming messages from applications and perform some action on them. The following are examples of actions that might be taken by the broker:

Route messages to one or more of many destinations
Transform messages to an alternative representation
Perform message aggregation, decomposing messages into multiple messages and sending them to their destination, then recomposing the responses into one message to return to the user
Interact with an external repository to augment a message or store it
Invoke Web services to retrieve data
Respond to events or errors
Provide content- and topic-based message routing using the publish-subscribe pattern


JMS


Topics
In JMS a Topic implements publish and subscribe semantics. When you publish a message it goes to all the subscribers who are interested - so zero to many subscribers will receive a copy of the message. Only subscribers who had an active subscription at the time the broker receives the message will get a copy of the message.
Queues
A JMS Queue implements load balancer semantics. A single message will be received by exactly one consumer. If there are no consumers available at the time the message is sent it will be kept until a consumer is available that can process the message. If a consumer receives a message and does not acknowledge it before closing then the message will be redelivered to another consumer. A queue can have many consumers with messages load balanced across the available consumers.
So Queues implement a reliable load balancer in JMS.

A queue is used for one-to-one (point-to-point) messaging, while a topic is typically used for one-to-many messaging and supports the publish-subscribe model.
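
A minimal JMS 2.0 send sketch (the ConnectionFactory comes from the provider, e.g. via JNDI; the "orders" queue name is a placeholder):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

static void send(ConnectionFactory factory) throws Exception {
    // JMS 2.0: Connection is AutoCloseable.
    try (Connection connection = factory.createConnection()) {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("orders");            // point-to-point destination
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("order #42"));  // exactly one consumer receives it
    }
}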


N+1 problem: a DB performance problem when using an ORM such as Hibernate, where you run one query for a set of results and then one additional query per result. Example:
SELECT id, name FROM albums
SELECT id, name FROM songs WHERE album_id = 1
SELECT id, name FROM songs WHERE album_id = 2
....
Solution 1: replace the second set of N=5 queries with a single query using an IN predicate:
SELECT id, title, filename FROM songs
WHERE album_id IN (1, 2, 3, 4, 5)
Solution 2: explicit "eager" fetching, using JOINs:
SELECT
  a.id a_id,
  a.name a_name,
  s.id s_id,
  s.name s_name
FROM albums a
JOIN songs s ON s.album_id = a.id
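
On the ORM side the same eager fetch can be requested with a JPQL fetch join - a sketch assuming Album and Song entities with a mapped songs collection (names taken from the example above):

import java.util.List;
import javax.persistence.EntityManager;

static List<Album> loadAlbums(EntityManager em) {
    // One SQL join instead of 1 + N queries: songs are fetched with their albums.
    return em.createQuery(
            "select distinct a from Album a join fetch a.songs", Album.class)
            .getResultList();
}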

L1 + L2 caches
The L1 cache exists per Hibernate session, and is not shared among threads. This cache uses Hibernate's own caching. Mainly it reduces the number of SQL queries Hibernate needs to generate within a given transaction.
The L2 cache survives beyond a Hibernate session, and can be shared among threads. For this cache you can use either a caching implementation that comes with Hibernate, like EHCache, or something else, like JBossCache2.

Persistence unit
A persistence unit defines the set of all entity classes that are managed by EntityManager instances in an application. This set of entity classes represents the data contained within a single data store.

rebase vs merge
If you would prefer a clean, linear history free of unnecessary merge commits, you should reach for git rebase instead of git merge when integrating changes from another branch.

On the other hand, if you want to preserve the complete history of your project and avoid the risk of re-writing public commits, you can stick with git merge.
https://www.atlassian.com/git/tutorials/merging-vs-rebasing/conceptual-overview
Merge takes all the changes in one branch and merges them into another branch in one commit.
Rebase says: I want the point at which I branched to move to a new starting point.
Merge
Let's say you have created a branch for the purpose of developing a single feature. When you want to bring those changes back to master, you probably want merge (you don't care about maintaining all of the interim commits).

Rebase
A second scenario would be if you started doing some development and then another developer made an unrelated change. You probably want to pull and then rebase to base your changes on the current version from the repo.
Rebasing re-writes the project history by creating brand new commits for each commit in the original branch.
The major benefit of rebasing is that you get a much cleaner project history.
Rebasing loses the context provided by a merge commit - you can't see when upstream changes were incorporated into the feature.
The golden rule of git rebase is to never use it on public branches.

unique key vs primary key
Primary keys and unique keys are similar. A primary key is a column, or a combination of columns, that can uniquely identify a row. It is a special case of unique key. A table can have at most one primary key, but more than one unique key. When you specify a unique key on a column, no two distinct rows in a table can have the same value.

Primary key
1. A primary key cannot allow null (a primary key cannot be defined on columns that allow nulls).
2. Each table cannot have more than one primary key.
3. On some RDBMS a primary key generates a clustered index by default.
Unique key
1. A unique key can allow null (a unique key can be defined on columns that allow nulls).
2. Each table can have multiple unique keys.
3. On some RDBMS a unique key generates a nonclustered index by default.

Clustered index
Clustering alters the data block into a certain distinct order to match the index, resulting in the row data being stored in order. Therefore, only one clustered index can be created on a given database table. Clustered indices can greatly increase overall speed of retrieval, but usually only where the data is accessed sequentially in the same or reverse order of the clustered index, or when a range of items is selected.
Since the physical records are in this sort order on disk, the next row item in the sequence is immediately before or after the last one, and so fewer data block reads are required. The primary feature of a clustered index is therefore the ordering of the physical data rows in accordance with the index blocks that point to them. Some databases separate the data and index blocks into separate files; others put two completely different data blocks within the same physical file(s).


Concurrency:
1. Executors, which are an enhancement over plain old threads because they are abstracted from thread pool management. They execute tasks similar to those passed to threads (in fact, instances implementing java.lang.Runnable can be wrapped). Several implementations are provided with thread pooling and scheduling strategies. Also, execution results can be fetched both in a synchronous and asynchronous manner.
2. Thread-safe queues allow for passing data between concurrent tasks. A rich set of implementations is provided with underlying data structures (such as array lists, linked lists, or double-ended queues) and concurrent behaviors (such as blocking, supporting priorities, or delays).
3. Fine-grained specification of time-out delays, because a large portion of the classes found in the java.util.concurrent packages exhibit support for time-out delays. An example is an executor that interrupts task execution if the tasks cannot be completed within a bounded timespan.
4. Rich synchronization patterns that go beyond the mutual exclusion provided by low-level synchronized blocks in Java. These patterns comprise common idioms such as semaphores or synchronization barriers.
5. Efficient, concurrent data collections (maps, lists, and sets) that often yield superior performance in multithreaded contexts through the use of copy-on-write and fine-grained locks.
6. Atomic variables that shield developers from the need to perform synchronized access by themselves. These variables wrap common primitive types, such as integers or Booleans, as well as references to other objects.
7. A wide range of locks that go beyond the lock/notify capabilities offered by intrinsic locks, for example, support for re-entrance, read/write locking, timeouts, or poll-based locking attempts.
ForkJoinPool - an executor dedicated to running instances implementing ForkJoinTask. ForkJoinTask objects support the creation of subtasks plus waiting for the subtasks to complete. With those clear semantics, the executor is able to dispatch tasks among its internal thread pool by "stealing" jobs when a task is waiting for another task to complete and there are pending tasks to be run.

BlockingQueue - producer-consumer (see the sketch below)
Deque - each consumer has its own deque; work is stolen from the tail of other consumers' deques
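
A minimal producer-consumer sketch on a bounded BlockingQueue (shutdown handling omitted for brevity):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        new Thread(() -> {                        // producer
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);                 // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        new Thread(() -> {                        // consumer
            try {
                while (true) {
                    System.out.println(queue.take());  // blocks while the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }
}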

ConcurrentHashMap vs Hashtable (synchronized, not used anymore) vs Collections.synchronizedMap

Latch - a one-time stopper (see the CountDownLatch sketch below)

ReentrantLock
ReentrantLock behaves like synchronized and you might wonder when it's appropriate to use one or the other. Use ReentrantLock when you need timed or interruptible lock waits, non-block-structured locks (obtain a lock in one method; release it in another), multiple condition variables, or lock polling. Furthermore, ReentrantLock supports scalability and is useful where there is high contention among threads. If none of these factors come into play, use synchronized.

Semaphore - manages permits to access something. A counting semaphore with a count of 1 is called binary and can act as a mutex.

Barriers - wait for a group of threads to complete.

Latches wait for an EVENT; barriers wait for other threads.
CyclicBarrier waits for a fixed number of threads, and it can be re-used after the waiting threads are released.
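
A CountDownLatch sketch - the main thread waits exactly once for the "all workers finished" event:

import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                // ... do some work ...
                done.countDown();          // signal: this worker has finished
            }).start();
        }

        done.await();                      // blocks until the count reaches zero
        System.out.println("all workers finished");
    }
}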

JVM shutdown: orderly shutdown - starts all shutdown hooks, may run finalizers, and makes no attempt to stop or interrupt threads that are still running.
Abrupt shutdown - on Runtime.halt() or when the JVM is killed.

Normal threads and daemon threads differ only in what happens on exit. On thread exit the JVM checks whether any normal threads are still running; if not, it shuts down, and finally blocks are not executed.

Optimistic vs pessimistic concurrency:

A good way of thinking about this is that an optimistic lock is about conflict detection while a pessimistic lock is about conflict prevention.
(Related: read-write locks.)

With optimistic locking, both David and Martin can make a copy of the file and edit it freely. If David is the first to finish, he can check in his work without trouble. The concurrency control kicks in when Martin tries to commit his changes. At this point the source code control system detects a conflict between Martin's changes and David's changes. Martin's commit is rejected and it's up to him to figure out how to deal with the situation. With pessimistic locking, whoever checks out the file first prevents anyone else from editing it. So if Martin is first to check out, David can't work with the file until Martin is finished with it and commits his changes.
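
Optimistic conflict detection as commonly done with JPA - a version column checked on update (the Account entity here is made up):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {
    @Id
    private Long id;

    private long balance;

    // Incremented on every update; an UPDATE carrying a stale version matches
    // zero rows and the provider throws OptimisticLockException (conflict detected).
    @Version
    private int version;
}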


Deadlock

Can be a lock-ordering deadlock. Solution: all threads acquire the locks they need in a fixed GLOBAL order (see the sketch below).
Try to use "open calls" - call methods while holding no lock.
Solution 2: use one lock at a time.
Solution 3: use the timed tryLock of the Lock classes.
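
A sketch of the fixed-global-order rule, ordering the locks by a stable key (System.identityHashCode here, ignoring the rare tie case):

import java.util.concurrent.locks.ReentrantLock;

class Account {
    final ReentrantLock lock = new ReentrantLock();
    long balance;
}

class Transfers {
    // Always take the locks in the same global order, so two opposite
    // transfers (A->B and B->A) can never deadlock each other.
    static void transfer(Account from, Account to, long amount) {
        Account first  = System.identityHashCode(from) < System.identityHashCode(to) ? from : to;
        Account second = (first == from) ? to : from;

        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }
}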

Starvation - a thread is continuously denied access to resources it needs to make progress. Thread priorities can be used, but this is not recommended.

Thread-starvation deadlock - when one thread waits for the result of another task while blocking other threads from executing.

Livelock - a thread, while not blocked, still cannot make progress because it keeps retrying an operation that will always fail. Solution: add a random component to the retry.


Sleep vs wait:
Sleep can only be interrupted.
A sleeping Thread does not release the locks it holds, while waiting releases the lock on the object that wait() is called on.

A wait can be "woken up" by another thread calling notify on the monitor which is being waited on, whereas a sleep cannot. Also a wait (and notify) must happen in a block synchronized on the monitor object, whereas sleep does not:

Object mon = ...;
synchronized (mon) {
    mon.wait();
}

At this point the currently executing thread waits and releases the monitor. Another thread may do

synchronized (mon) { mon.notify(); }

(on the same mon object) and the first thread (assuming it is the only thread waiting on the monitor) will wake up.

You can also call notifyAll if more than one thread is waiting on the monitor - this will wake all of them up. However, only one of the threads will be able to grab the monitor (remember that the wait is in a synchronized block) and carry on - the others will be blocked until they can acquire the monitor's lock.

Another point is that you call wait on Object itself (i.e. you wait on an object's monitor) whereas you call sleep on Thread.

Yet another point is that you can get spurious wakeups from wait (i.e. the thread which is waiting resumes for no apparent reason). You should always wait whilst spinning on some condition, as follows:

synchronized (mon) {
    while (!condition) { mon.wait(); }
}

                                        Fail-fast iterator  | Fail-safe iterator
Throws ConcurrentModificationException  yes                 | no
Clones the collection                   no                  | yes
Memory overhead                         no                  | yes
Examples                                HashMap, Vector,    | CopyOnWriteArrayList,
                                        ArrayList, HashSet  | ConcurrentHashMap


╔═══════════════════╦══════════════════════╦═════════════════════════════╗
║                   ║         List         ║             Set             ║
╠═══════════════════╬══════════════════════╬═════════════════════════════╣
║ Duplicates        ║         YES          ║             NO              ║
╠═══════════════════╬══════════════════════╬═════════════════════════════╣
║ Order             ║       ORDERED        ║  DEPENDS ON IMPLEMENTATION  ║
╠═══════════════════╬══════════════════════╬═════════════════════════════╣
║ Positional Access ║         YES          ║             NO              ║
╚═══════════════════╩══════════════════════╩═════════════════════════════╝

HashSet (unordered)
LinkedHashSet (insertion-ordered)
TreeSet (sorted by natural order or by a provided comparator)

GC
There are four kinds of GC roots in Java:

1. Local variables are kept alive by the stack of a thread. This is not a real object virtual reference and thus is not visible. For all intents and purposes, local variables are GC roots.
2. Active Java threads are always considered live objects and are therefore GC roots. This is especially important for thread-local variables.
3. Static variables are referenced by their classes. This fact makes them de facto GC roots. Classes themselves can be garbage-collected, which would remove all referenced static variables. This is of special importance when we use application servers, OSGi containers or class loaders in general.
4. JNI references are Java objects that the native code has created as part of a JNI call. Objects thus created are treated specially because the JVM does not know whether they are being referenced by the native code or not. Such objects represent a very special form of GC root.

GC types
There are 5 GC types:
1. Serial GC
2. Parallel GC
3. Parallel Old GC (Parallel Compacting GC)
4. Concurrent Mark & Sweep GC (or "CMS")
5. Garbage First (G1) GC

Concurrent GC - works concurrently with the application.
Parallel GC - uses multiple CPUs to perform GC.

A GC is conservative when it is unaware of some references, or not sure whether a field is a reference or not.
A GC is precise when it can fully identify all object references.

Compacting - removes fragmentation of the heap: relocate, remap.

Throwable and Exception and all their descendants (except descendants of Error and RuntimeException) are checked.
Error and RuntimeException and all their descendants are unchecked.

GC:
When Eden is full, a minor GC runs.
When Tenured (old gen) is full, a major GC runs.
The Permanent Generation is a special case: it contains objects that are needed by the JVM that are not necessarily represented in your program, for example objects that represent classes and methods.
GC algorithms:
Serial - used on single-core CPUs (e.g. an Amazon micro instance); smallest overhead.
Parallel - usually N times faster than Serial; better throughput.
CMS - parallel in the young generation, mostly concurrent in the old generation; the old generation gets fragmented; smaller GC pauses than Parallel but lower throughput; works best with enough heap.
G1 - new (Java 8): monolithic stop-the-world young gen, mostly concurrent old-gen marking, mostly incremental stop-the-world old-gen compaction, with a fall-back to a monolithic stop-the-world collection.

GC stages:
mark, sweep, compact

GC pause time does NOT depend on Xmx (heap size).

GC efficiency depends on the amount of empty heap.

STW - stop-the-world time

Memory model:

|------------------- HEAP -------------------|-- PermGen --|-- Thread stacks --|
|-------- Young gen ---------|--- Old gen ---|
 Eden | Survivor1 | Survivor2


PermGen is replaced with Metaspace in JDK 8, which is very similar. The main difference is that Metaspace can expand at runtime.

A new flag is available (MaxMetaspaceSize), allowing you to limit the amount of native memory used for class metadata. If you don't specify this flag, the Metaspace will dynamically re-size depending on the application demand at runtime. Size is unbounded by default - system memory is the limit.

Tree height
Binary search
String reversal:
public static String reverse(String orig) {
    char[] s = orig.toCharArray();
    int n = s.length;
    int halfLength = n / 2;
    for (int i = 0; i < halfLength; i++) {
        char temp = s[i];
        s[i] = s[n - 1 - i];
        s[n - 1 - i] = temp;
    }
    return new String(s);
}
Quicksort:
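
A minimal in-place quicksort sketch (Lomuto partition, last element as pivot):

// Sorts a[lo..hi] in place; call as quicksort(a, 0, a.length - 1).
public static void quicksort(int[] a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[hi];
    int i = lo;                                  // next slot for an element <= pivot
    for (int j = lo; j < hi; j++) {
        if (a[j] <= pivot) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;       // put the pivot into its final place
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}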

Array merge. Implement the merge step of the merge sort algorithm (see the sketch below). For example:
Input: 2 sorted arrays, [5 6 7 8 13 14 15] and [2 3 4 9 11].
Output: merged array, [2 3 4 5 6 7 8 9 11 13 14 15].
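
One way to write that merge step - two read cursors and one write cursor:

// Merges two sorted arrays into one sorted result.
public static int[] merge(int[] a, int[] b) {
    int[] result = new int[a.length + b.length];
    int i = 0, j = 0, k = 0;
    while (i < a.length && j < b.length) {
        result[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];  // take the smaller head
    }
    while (i < a.length) result[k++] = a[i++];           // drain the leftovers
    while (j < b.length) result[k++] = b[j++];
    return result;
}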


Code metrics
By MS:
- Maintainability Index
- Cyclomatic Complexity
- Depth of Inheritance
- Class Coupling (good software design dictates that types and methods should have high cohesion and low coupling!)
- Lines of Code
- Code coverage


safe publication

memory strike
type cohesion
Cohesion refers to the degree to which the elements of a module belong together.
High cohesion often correlates with loose coupling, and vice versa.

Transaction isolation:

Isolation Level    Dirty Read   Unrepeatable Read   Phantom
Read Uncommitted   Yes          Yes                 Yes
Read Committed     No           Yes                 Yes
Repeatable Read    No           No                  Yes
Serializable       No           No                  No

Serializable
This is the highest isolation level.
With a lock-based concurrency control DBMS implementation, serializability requires read and write locks (acquired on selected data) to be released at the end of the transaction. Also, range-locks must be acquired when a SELECT query uses a ranged WHERE clause, especially to avoid the phantom reads phenomenon (see below).
When using non-lock based concurrency control, no locks are acquired; however, if the system detects a write collision among several concurrent transactions, only one of them is allowed to commit. See snapshot isolation for more details on this topic.
Repeatable reads
In this isolation level, a lock-based concurrency control DBMS implementation keeps read and write locks (acquired on selected data) until the end of the transaction. However, range-locks are not managed, so phantom reads can occur.
Read committed
In this isolation level, a lock-based concurrency control DBMS implementation keeps write locks (acquired on selected data) until the end of the transaction, but read locks are released as soon as the SELECT operation is performed (so the non-repeatable reads phenomenon can occur in this isolation level, as discussed below). As in the previous level, range-locks are not managed.
Putting it in simpler words, read committed is an isolation level that guarantees that any data read is committed at the moment it is read. It simply restricts the reader from seeing any intermediate, uncommitted, 'dirty' read. It makes no promise whatsoever that if the transaction re-issues the read, it will find the same data; data is free to change after it is read.
Read uncommitted
This is the lowest isolation level. In this level, dirty reads are allowed, so one transaction may see not-yet-committed changes made by other transactions.
Since each isolation level is stronger than those below, in that no higher isolation level allows an action forbidden by a lower one, the standard permits a DBMS to run a transaction at an isolation level stronger than that requested (e.g., a "Read committed" transaction may actually be performed at a "Repeatable read" isolation level).

A materialized view is a database object that contains the results of a query. For example, it may be a local copy of data located remotely, or may be a subset of the rows and/or columns of a table or join result, or may be a summary using an aggregate function.

Materialized views are disk based and are updated periodically based upon the query definition.
Views are virtual only and run the query definition each time they are accessed.

https://en.wikipedia.org/wiki/Flyweight_pattern

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.
  590.  
  591. GOF
  592.  
  593. Creational patterns
  594.  
  595. Abstract factory (recognizeable by creational methods returning the factory itself which in turn can be used to create another abstract/interface type):
  596. xml.parsers.DocumentBuilderFactory#newInstance()
  597. xml.transform.TransformerFactory#newInstance()
  598. xml.xpath.XPathFactory#newInstance()
  599.  
  600. Builder (recognizeable by creational methods returning the instance itself)
  601. java.lang.StringBuilder#append() (unsynchronized)
  602. java.lang.StringBuffer#append() (synchronized)
  603. java.nio.ByteBuffer#put() (also on CharBuffer, ShortBuffer, IntBuffer, LongBuffer, FloatBuffer and DoubleBuffer)
  604. javax.swing.GroupLayout.Group#addComponent()
  605. All implementations of java.lang.Appendable
  606.  
  607. Factory method (recognizeable by creational methods returning an implementation of an abstract/interface type)
  608. java.util.Calendar#getInstance()
  609. java.util.ResourceBundle#getBundle()
  610. java.text.NumberFormat#getInstance()
  611. java.nio.charset.Charset#forName()
  612. java.net.URLStreamHandlerFactory#createURLStreamHandler(String) (Returns singleton object per protocol)
  613.  
  614. Prototype (recognizeable by creational methods returning a different instance of itself with the same properties)
  615. java.lang.Object#clone() (the class has to implement java.lang.Cloneable)
  616.  
  617. Singleton (recognizeable by creational methods returning the same instance (usually of itself) everytime)
  618. java.lang.Runtime#getRuntime()
  619. java.awt.Desktop#getDesktop()
  620. java.lang.System#getSecurityManager()
  621.  
  622. Structural patterns
  623.  
  624. Adapter (recognizeable by creational methods taking an instance of different abstract/interface type and returning an implementation of own/another abstract/interface type which decorates/overrides the given instance)
  625. java.util.Arrays#asList()
  626. java.io.InputStreamReader(InputStream) (returns a Reader)
  627. java.io.OutputStreamWriter(OutputStream) (returns a Writer)
  628. javax.xml.bind.annotation.adapters.XmlAdapter#marshal() and #unmarshal()
  629.  
  630. Bridge (recognizeable by creational methods taking an instance of different abstract/interface type and returning an implementation of own abstract/interface type which delegates/uses the given instance)
  631.  
  632. Composite (recognizeable by behavioral methods taking an instance of same abstract/interface type into a tree structure)
  633. java.awt.Container#add(Component) (practically all over Swing thus)
  634.  
  635. Decorator (recognizeable by creational methods taking an instance of same abstract/interface type which adds additional behaviour)
  636. All subclasses of java.io.InputStream, OutputStream, Reader and Writer have a constructor taking an instance of same type.
  637. java.util.Collections, the checkedXXX(), synchronizedXXX() and unmodifiableXXX() methods.
  638. javax.servlet.http.HttpServletRequestWrapper and HttpServletResponseWrapper
  639.  
  640. Facade (recognizeable by behavioral methods which internally uses instances of different independent abstract/interface types)
  641.  
  642. Flyweight (recognizeable by creational methods returning a cached instance, a bit the "multiton" idea)
  643. java.lang.Integer#valueOf(int) (also on Boolean, Byte, Character, Short, Long and BigDecimal)
  644.  
  645. Proxy (recognizeable by creational methods which returns an implementation of given abstract/interface type which in turn delegates/uses a different implementation of given abstract/interface type)
  646. java.lang.reflect.Proxy
  647. java.rmi.*, the whole API actually.
  648.  
  649. Behavioral patterns
  650. Chain of responsibility (recognizeable by behavioral methods which (indirectly) invokes the same method in another implementation of same abstract/interface type in a queue)
  651. java.util.logging.Logger#log()
  652. javax.servlet.Filter#doFilter()
  653.  
  654. Command (recognizeable by behavioral methods in an abstract/interface type which invokes a method in an implementation of a different abstract/interface type which has been encapsulated by the command implementation during its creation)
  655. All implementations of java.lang.Runnable
  656. All implementations of javax.swing.Action
  657.  
  658. Interpreter (recognizeable by behavioral methods returning a structurally different instance/type of the given instance/type; note that parsing/formatting is not part of the pattern, determining the pattern and how to apply it is)
  659. java.util.Pattern
  660. java.text.Normalizer
  661.  
  662. Iterator (recognizeable by behavioral methods sequentially returning instances of a different type from a queue)
  663. All implementations of java.util.Iterator (thus among others also java.util.Scanner!).
  664. All implementations of java.util.Enumeration
  665.  
  666. Mediator (recognizeable by behavioral methods taking an instance of different abstract/interface type (usually using the command pattern) which delegates/uses the given instance)
  667. java.util.Timer (all scheduleXXX() methods)
  668. java.util.concurrent.Executor#execute()
  669. java.util.concurrent.ExecutorService (the invokeXXX() and submit() methods)
  670. java.util.concurrent.ScheduledExecutorService (all scheduleXXX() methods)
  671. java.lang.reflect.Method#invoke()
  672.  
  673. Memento (recognizeable by behavioral methods which internally changes the state of the whole instance)
  674. java.util.Date (the setter methods do that, Date is internally represented by a long value)
  675. All implementations of java.io.Serializable
  676. All implementations of javax.faces.component.StateHolder
  677.  
  678. Observer (or Publish/Subscribe) (recognizeable by behavioral methods which invokes a method on an instance of another abstract/interface type, depending on own state)
  679. java.util.Observer/java.util.Observable (rarely used in real world though)
  680. All implementations of java.util.EventListener (practically all over Swing thus)
  681. javax.servlet.http.HttpSessionBindingListener
  682. javax.servlet.http.HttpSessionAttributeListener
  683.  
  684.  
  685. State (recognizeable by behavioral methods which changes its behaviour depending on the instance's state which can be controlled externally)
  686. javax.faces.lifecycle.LifeCycle#execute() (controlled by FacesServlet, the behaviour is dependent on current phase (state) of JSF lifecycle)
  687.  
  688. Strategy (recognizeable by behavioral methods in an abstract/interface type which invokes a method in an implementation of a different abstract/interface type which has been passed-in as method argument into the strategy implementation)
  689. java.util.Comparator#compare(), executed by among others Collections#sort().
  690. javax.servlet.http.HttpServlet, the service() and all doXXX() methods take HttpServletRequest and HttpServletResponse and the implementor has to process them (and not to get hold of them as instance variables!).
  691. javax.servlet.Filter#doFilter()
  692.  
  693. Template method (recognizeable by behavioral methods which already have a "default" behaviour definied by an abstract type)
  694. All non-abstract methods of java.io.InputStream, java.io.OutputStream, java.io.Reader and java.io.Writer.
  695. All non-abstract methods of java.util.AbstractList, java.util.AbstractSet and java.util.AbstractMap.
  696. javax.servlet.http.HttpServlet, all the doXXX() methods by default sends a HTTP 405 "Method Not Allowed" error to the response. You're free to implement none or any of them.
  697.  
698. Visitor (recognizable by two different abstract/interface types, each of which defines methods that take the other abstract/interface type; the one calls the method of the other and the other executes the desired strategy on it; see the sketch after the examples)
699. java.nio.file.FileVisitor and java.nio.file.SimpleFileVisitor
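A minimal Visitor sketch using the JDK's own FileVisitor: Files.walkFileTree() owns the traversal and calls back into the visitor for every file it meets:

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class VisitorDemo {
    public static void main(String[] args) throws IOException {
        // The walking code drives the iteration; our visitor supplies the behaviour.
        Files.walkFileTree(Paths.get("."), new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                System.out.println(file);
                return FileVisitResult.CONTINUE;
            }
        });
    }
}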
  700.  
  701.  
702. Proxy could be used when you want to lazily instantiate an object, hide the fact that you're calling a remote service, or control access to the object.
  703.  
  704. Decorator is also called "Smart Proxy." This is used when you want to add functionality to an object, but not by extending that object's type. This allows you to do so at runtime.
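A minimal Decorator sketch; Notifier and both implementations are invented for illustration:

interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) { System.out.println("email: " + message); }
}

// Adds behaviour by wrapping, not by extending EmailNotifier.
class LoggingNotifier implements Notifier {
    private final Notifier delegate;
    LoggingNotifier(Notifier delegate) { this.delegate = delegate; }
    public void send(String message) {
        System.out.println("about to send: " + message); // added functionality
        delegate.send(message);                          // then delegate
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Notifier notifier = new LoggingNotifier(new EmailNotifier()); // composed at runtime
        notifier.send("hello");
    }
}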
  705.  
706. Adapter is used when you have an abstract interface, and you want to map that interface to another object which has a similar functional role, but a different interface.
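A minimal Adapter sketch mapping the legacy Enumeration interface onto the Iterator interface; same functional role, different interface:

import java.util.Enumeration;
import java.util.Iterator;
import java.util.Vector;

class EnumerationAdapter<T> implements Iterator<T> {
    private final Enumeration<T> adaptee;
    EnumerationAdapter(Enumeration<T> adaptee) { this.adaptee = adaptee; }
    public boolean hasNext() { return adaptee.hasMoreElements(); } // same role...
    public T next() { return adaptee.nextElement(); }              // ...different interface
}

public class AdapterDemo {
    public static void main(String[] args) {
        Vector<String> v = new Vector<>();
        v.add("x"); v.add("y");
        Iterator<String> it = new EnumerationAdapter<>(v.elements());
        while (it.hasNext()) System.out.println(it.next());
    }
}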
  707.  
708. Bridge is very similar to Adapter, but we call it Bridge when you define both the abstract interface and the underlying implementation; i.e. you're not adapting to some legacy or third-party code, you're the designer of all the code, but you need to be able to swap out different implementations.
  709.  
  710.  
  711.  
  712. Patterns
  713.  
  714. Enterprise patterns:
715. Decoupling using app layers: data source, logic, presentation, etc.
  716.  
  717.  
  718. Domain Model
  719. Service Layer(security, transactions, etc)
  720. Remote Facade
  721. Data Transfer Object
  724. Active Record - logic with data
  725. Data Mapper - separates the domain objects and the database from each other
726. Gateway - one class that encapsulates access to an external resource (e.g. a Table Data Gateway is one class per database table)
727. Unit of Work - keeps track of all objects read from the database, together with all objects modified in any way (a minimal sketch follows this list)
728. Lazy Load
  729. Foreign Key Mapping
  730. Association Table Mapping
  731. Serialized LOB
  732. Inheritance in DB:
  733. 1 Single Table Inheritance
  734. 2 Concrete Table Inheritance
  735. 3 Class Table Inheritance
  736. Metadata Mapping - used in ORMs
  737. Repository
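A minimal sketch of the Unit of Work idea referenced above; everything here is invented for illustration, and a real implementation (e.g. inside an ORM) also tracks new/removed objects and flushes them in one database transaction:

import java.util.ArrayList;
import java.util.List;

class UnitOfWork {
    private final List<Object> dirty = new ArrayList<>();

    void registerDirty(Object entity) { dirty.add(entity); }

    void commit() {
        // A real implementation would write all changes in a single transaction.
        for (Object entity : dirty) System.out.println("UPDATE " + entity);
        dirty.clear();
    }
}

public class UnitOfWorkDemo {
    public static void main(String[] args) {
        UnitOfWork uow = new UnitOfWork();
        uow.registerDirty("order#1");
        uow.registerDirty("customer#7");
        uow.commit(); // all tracked changes are written together
    }
}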
738. Web patterns:
  739. Model View Controller
  740. Transform View (XSLT)
  741. Template View (JSP)
  742.  
  743. Session:
  744. Server Session State
  745. Client Session State
  746. Database Session State
  747.  
748. Remote Facade returns a Data Transfer Object
  750.  
  751.  
  752. Enterprise patterns:
753. Integration Styles
  754. File Transfer
  755. Shared Database
  756. Remote Procedure Invocation
  757. Messaging
758. Integration Types
759. Information Portal
  760. Data Replication
  761. Shared Business Function
  762. Service Oriented Architecture
  763. Distributed Business Process
  764. Business-to-Business Integration
  765. Tightly Coupled Interaction vs. Loosely Coupled Interaction
766. Messaging
  767. Message Channel
  768. Message
  769. Pipes and Filters
  770. Message Router
  771. Message Translator
  772. Message Endpoint
  773.  
  774. Distributed design patterns
775. Distributed communication patterns (RPC, CORBA)
776. Event driven (Akka, Java Swing) - an event-driven system typically consists of event emitters (or agents), event consumers (or sinks), and event channels (a minimal sketch follows this list)
  777. MapReduce
  778. Bulk synchronous parallel
  779. Remote Session[1]
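A minimal sketch of the emitter/channel/consumer shape mentioned above, using a bounded BlockingQueue as the event channel; the bound also gives a crude form of back-pressure, as described under the Reactive Manifesto below. The event names are invented:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class EventChannelDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(16);

        Thread consumer = new Thread(() -> {
            try {
                String event;
                while (!(event = channel.take()).equals("STOP")) {
                    System.out.println("consumed " + event);
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        // Emitter side: put() blocks when the channel is full, so a slow
        // consumer naturally slows the producer down (back-pressure).
        for (int i = 0; i < 3; i++) channel.put("event-" + i);
        channel.put("STOP");
        consumer.join();
    }
}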
  780.  
  781.  
  782.  
  783. Reactive manifesto
  784. Responsive: The system responds in a timely manner if at all possible. Responsiveness is the cornerstone of usability and utility, but more than that, responsiveness means that problems may be detected quickly and dealt with effectively. Responsive systems focus on providing rapid and consistent response times, establishing reliable upper bounds so they deliver a consistent quality of service. This consistent behaviour in turn simplifies error handling, builds end user confidence, and encourages further interaction.
  785. Resilient: The system stays responsive in the face of failure. This applies not only to highly-available, mission critical systems — any system that is not resilient will be unresponsive after a failure. Resilience is achieved by replication, containment, isolation and delegation. Failures are contained within each component, isolating components from each other and thereby ensuring that parts of the system can fail and recover without compromising the system as a whole. Recovery of each component is delegated to another (external) component and high-availability is ensured by replication where necessary. The client of a component is not burdened with handling its failures.
  786. Elastic: The system stays responsive under varying workload. Reactive Systems can react to changes in the input rate by increasing or decreasing the resources allocated to service these inputs. This implies designs that have no contention points or central bottlenecks, resulting in the ability to shard or replicate components and distribute inputs among them. Reactive Systems support predictive, as well as Reactive, scaling algorithms by providing relevant live performance measures. They achieve elasticity in a cost-effective way on commodity hardware and software platforms.
  787. Message Driven: Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation, location transparency, and provides the means to delegate errors as messages. Employing explicit message-passing enables load management, elasticity, and flow control by shaping and monitoring the message queues in the system and applying back-pressure when necessary. Location transparent messaging as a means of communication makes it possible for the management of failure to work with the same constructs and semantics across a cluster or within a single host. Non-blocking communication allows recipients to only consume resource while active, leading to less system overhead.
  788.  
  789.  
  790. GoF Patterns:
  791.  
  792. Creational patterns are ones that create objects for you, rather than having you instantiate objects directly. This gives your program more flexibility in deciding which objects need to be created for a given case.
  793. Abstract factory pattern groups object factories that have a common theme.
794. Builder pattern constructs complex objects by separating construction and representation (a minimal sketch follows this list).
  795. Factory method pattern creates objects without specifying the exact class to create.
  796. Prototype pattern creates objects by cloning an existing object.
  797. Singleton pattern restricts object creation for a class to only one instance.
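A minimal Builder sketch as referenced above; Pizza and its fields are invented for illustration:

class Pizza {
    private final String size;
    private final boolean cheese;

    private Pizza(Builder builder) {
        this.size = builder.size;
        this.cheese = builder.cheese;
    }

    @Override public String toString() { return size + (cheese ? " + cheese" : ""); }

    static class Builder {
        private String size = "medium"; // default representation
        private boolean cheese;

        Builder size(String size) { this.size = size; return this; }
        Builder cheese() { this.cheese = true; return this; }
        Pizza build() { return new Pizza(this); } // construction kept separate
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        Pizza pizza = new Pizza.Builder().size("large").cheese().build();
        System.out.println(pizza);
    }
}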
  798.  
799. Structural
  800. These concern class and object composition. They use inheritance to compose interfaces and define ways to compose objects to obtain new functionality.
  801. Adapter allows classes with incompatible interfaces to work together by wrapping its own interface around that of an already existing class.
  802. Bridge decouples an abstraction from its implementation so that the two can vary independently.
  803. Composite composes zero-or-more similar objects so that they can be manipulated as one object.
  804. Decorator dynamically adds/overrides behaviour in an existing method of an object.
  805. Facade provides a simplified interface to a large body of code.
  806. Flyweight reduces the cost of creating and manipulating a large number of similar objects.
  807. Proxy provides a placeholder for another object to control access, reduce cost, and reduce complexity.
  808.  
809. Behavioral
  810. Most of these design patterns are specifically concerned with communication between objects.
811. Chain of responsibility delegates commands to a chain of processing objects (a minimal sketch follows this list).
  812. Command creates objects which encapsulate actions and parameters.
  813. Interpreter implements a specialized language.
  814. Iterator accesses the elements of an object sequentially without exposing its underlying representation.
  815. Mediator allows loose coupling between classes by being the only class that has detailed knowledge of their methods.
  816. Memento provides the ability to restore an object to its previous state (undo).
  817. Observer is a publish/subscribe pattern which allows a number of observer objects to see an event.
  818. State allows an object to alter its behavior when its internal state changes.
  819. Strategy allows one of a family of algorithms to be selected on-the-fly at runtime.
  820. Template method defines the skeleton of an algorithm as an abstract class, allowing its subclasses to provide concrete behavior.
  821. Visitor separates an algorithm from an object structure by moving the hierarchy of methods into one object.
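A minimal Chain of Responsibility sketch as referenced above; the handler interface and the request strings are invented for illustration:

interface Handler {
    boolean handle(String request); // true = handled, stop the chain
}

class Chain {
    private final Handler[] handlers;
    Chain(Handler... handlers) { this.handlers = handlers; }

    void process(String request) {
        for (Handler handler : handlers) {
            if (handler.handle(request)) return; // first taker wins
        }
        System.out.println("unhandled: " + request);
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        Chain chain = new Chain(
            r -> { if (!r.startsWith("auth:")) return false; System.out.println("auth handler: " + r); return true; },
            r -> { if (!r.startsWith("log:"))  return false; System.out.println("log handler: " + r);  return true; }
        );
        chain.process("auth:login");
        chain.process("other");
    }
}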