a guest
Jul 17th, 2019
Hey there!

I know Paul has explained a bit about the concept we're going for, and I understood that you compared it with cloud computing and the MapReduce model. We're aiming at an HPC concept, which differs from cloud computing quite a lot. It might seem identical at first, but it differs in a number of ways (and has some similarities as well). We're targeting clients who need to crunch large data sets in parallel while having the possibility of sharing data between nodes. The nodes will be interconnected, probably over InfiniBand, which is why I like to say we'll be in a position to offer commercial use of a supercomputer without limiting users to the usual IaaS constraints (core picks, OS, middleware setup, etc.). Most importantly, we offer vertical scalability.

What you presented, on the other hand, is more like cloud computing, which as we both know targets embarrassingly parallel problems. We're not focusing on the quantity of nodes, but rather on their quality and power. We're seeking to combine these two paradigms by offering the capability of running both embarrassingly parallel problems and dependent ones, which is the main way HPC differs from cloud computing.

On top of that, we want to offer a set of libraries that developers will find handy from the start. Many HPC companies and research labs currently use MPI/OpenMP; we plan on building something similar while filling the gaps where those two keep failing, namely simplicity and abstraction. Unlike the models you mentioned, we also want to address real-time processing, slow processing speeds (we both know how complex map and reduce are at a lower level; imagine doing an iterative algorithm on top of that), latency, simplicity (developers shouldn't have to hand-code every operation into a map/reduce shape), and abstraction.
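To make the distinction concrete, here is a minimal, hypothetical Python sketch (names are my own, not from any product mentioned above) contrasting the two workload shapes: an embarrassingly parallel map, where every element is independent, and a dependent stencil step, where each output needs its neighbours' values and distributed nodes would have to exchange boundary cells each step, which is exactly the tight coupling that fast interconnects like InfiniBand exist for.

```python
# Embarrassingly parallel: each element is processed independently,
# so the work splits across nodes with zero communication.
def embarrassingly_parallel(data):
    return [x * x for x in data]

# Dependent computation: a 1-D three-point stencil. Each interior
# output needs its left and right neighbours, so partitions of the
# array must exchange "halo" cells every step when distributed.
def stencil_step(u):
    return [
        u[i] if i in (0, len(u) - 1)           # boundaries held fixed
        else (u[i - 1] + u[i] + u[i + 1]) / 3.0  # needs both neighbours
        for i in range(len(u))
    ]
```

The first function is what MapReduce-style clouds handle well; the second is the kind of node-to-node dependency where an HPC setup pays off.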
Of course, iterative algorithms can be done in Spark, but we're just trying to unite all these concepts into something universal. Besides letting developers submit their tasks to us and having our scheduler take care of the rest (with a very simple and detailed GUI), we offer thousands of HPC-compatible applications that non-developers can use as well, such as simulation apps, ML predefiners, etc.

I'm off to class now. To sum it up: we're not trying to reinvent the wheel. We're aiming to present the HPC concept in a different and simpler way, and along that path we'll create a unique framework/model that will possibly fix some of the flaws current ones have.
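On the iterative-algorithm point: the issue with a pure map/reduce engine is that each pass depends on the previous one, so every iteration becomes a fresh job over the data, while an in-memory runtime (Spark with cached RDDs, or an MPI program) simply loops over state it already holds. A toy illustration, power iteration for a dominant eigenvector, written in plain Python (this is a generic textbook method, not code from any framework discussed above):

```python
# Power iteration: each pass consumes the vector produced by the
# previous pass. In a naive MapReduce setting this loop would be N
# separate jobs rereading the matrix; an in-memory runtime just loops.
def power_iteration(matrix, v, steps=50):
    for _ in range(steps):
        # one matrix-vector product = one "map" pass over the rows
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]
        # normalise so the vector doesn't blow up or vanish
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    return v
```

For a diagonal matrix [[2, 0], [0, 1]] the result converges to the eigenvector for the larger eigenvalue, i.e. roughly [1, 0].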