- 1-1. What type does foo() return?
- foo takes a function that takes an I and returns an IResult<I, O, E>
- But now foo will spit out a similar function where O became Option<O>
- 1-2. What is the type of bar?
- bar is of type F, so Fn(I) -> IResult<I, O, E>
- 1-3. What is the return type of the following expression:
- foo(bar)(i) is of type IResult<I, Option<O>, E>
- 1-4. What is the type of j?
- j is of type Option<O>
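- To sanity-check these answers, here is a minimal Rust sketch with a simplified IResult alias (an assumption for this sketch; nom's real IResult is shaped differently) and a toy bar parser:

```rust
// Simplified stand-in for nom's IResult (assumption for this sketch).
type IResult<I, O, E> = Result<(I, O), E>;

// foo: takes a parser Fn(I) -> IResult<I, O, E> and spits out a similar
// parser where O became Option<O> (failure turns into Ok(None)).
fn foo<I: Clone, O, E>(
    f: impl Fn(I) -> IResult<I, O, E>,
) -> impl Fn(I) -> IResult<I, Option<O>, E> {
    move |i: I| match f(i.clone()) {
        Ok((rest, o)) => Ok((rest, Some(o))),
        Err(_) => Ok((i, None)),
    }
}

// bar: a toy parser of type Fn(I) -> IResult<I, O, E>, consuming a leading 'a'.
fn bar(i: &str) -> IResult<&str, char, ()> {
    match i.chars().next() {
        Some('a') => Ok((&i[1..], 'a')),
        _ => Err(()),
    }
}

fn main() {
    // foo(bar)(i) has type IResult<I, Option<O>, E>, so j holds an Option<O>.
    let j = foo(bar)("abc");
    assert_eq!(j, Ok(("bc", Some('a'))));
    assert_eq!(foo(bar)("xyz"), Ok(("xyz", None)));
}
```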
- 2.
- Y = (λx.λy.y (x x y)) (λx.λy.y (x x y))
- Yz = ((λx.λy.y(x x y)) (λx.λy.y(x x y))z)
- (λx.λy.y (x x y)) (λx.λy.y (x x y)) z
- (λy.y ((λx.λy.y (x x y)) (λx.λy.y (x x y)) y)) z
- z ((λx.λy.y (x x y)) (λx.λy.y (x x y)) z)
- Yz = z(Yz) = z(z(Yz)) = z(z(z(Yz))) = z(z(...z(Yz)...))
- This looks like a spawner: it can wrap as many z's around itself as it wants. It's A VIRUS!!! (idk)
- 3. https://stackoverflow.com/questions/33923/what-is-tail-recursion
- Compared to regular recursion, tail recursion is when the recursive call is the very last thing your function does, and what ends up happening is you do the
- computation as you go recursively. In regular/normal recursion, you only do the computation after you have gone through all the recursive steps.
- I know Stack Overflow isn't completely reliable, but the examples and code presented there seem to be in agreement with the community. As we can see here, regular recursion
- will keep evaluating the recursive calls, then do the operations all at the end, while tail recursion does the operation as it goes. The benefit of tail recursion
- is that there is no need to keep the current stack frame of each call around, because the computation for that recursive step is already done.
- Recursive:
- fn sum_slice(s: &[i32]) -> i32 {
-     if s.is_empty() {
-         0
-     } else {
-         s[0] + sum_slice(&s[1..])
-     }
- }
- Tail Recursive:
- fn sum_slice(x: i32, s: &[i32]) -> i32 {
-     if s.is_empty() {
-         x
-     } else {
-         sum_slice(x + s[0], &s[1..])
-     }
- }
- As you can see in both examples, the recursive one has s[0] on hold and continues to recurse, so with s = [5, 4, 3, 2, 1] what you really end up with is something like this:
- 5 + sum_slice([4, 3, 2, 1])
- 5 + 4 + sum_slice([3, 2, 1])
- ...
- 5 + 4 + 3 + 2 + sum_slice([1])
- 5 + 4 + 3 + 2 + 1
- And in the tail recursion, the running total is passed along with you:
- (0, [5, 4, 3, 2, 1])
- (5, [4, 3, 2, 1])
- ...
- (14, [1])
- (15, [])
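- Putting the two versions above into one runnable file (the tail-recursive one renamed sum_slice_tail so both can coexist), a quick check that they agree:

```rust
// Regular recursion: the addition happens after the recursive call returns.
fn sum_slice(s: &[i32]) -> i32 {
    if s.is_empty() { 0 } else { s[0] + sum_slice(&s[1..]) }
}

// Tail recursion: the running total is passed along, so the recursive
// call is the very last thing the function does.
fn sum_slice_tail(x: i32, s: &[i32]) -> i32 {
    if s.is_empty() { x } else { sum_slice_tail(x + s[0], &s[1..]) }
}

fn main() {
    let s = [5, 4, 3, 2, 1];
    assert_eq!(sum_slice(&s), 15);
    assert_eq!(sum_slice_tail(0, &s), 15);
}
```

- One caveat worth noting: Rust does not guarantee tail-call optimization, so the tail-recursive version may still grow the stack; what differs for sure is the shape of the computation.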
- 4. Since I am kind of a math nerd, I'll select FORTRAN for this one.
- After searching a little bit, I was able to find this:
- https://gist.github.com/t-nissie/479f0f16966925fa29ea
- This code includes the implementation, the compilation, and a test file.
- Looking at lines 6 - 10, I can tell that this is a recursive function (it's literally in the name), but I'm not quite sure what a subroutine
- really is. The parameters being passed in don't seem to have anything indicating their types, but we'll go down further to look
- at the other lines to see if they can tell us anything. In the declarations, we can see that indeed, you don't give the parameters a type in the function signature
- like you normally do; you just pass in the variables, and then you give them their types right after the function header. (Quite interesting, never seen anything like this actually.)
- I am not sure what implicit none means, but real*8 looks like it declares the array's elements as 8-byte reals (doubles), and everything else in the declarations
- seems kind of self explanatory (integer = int).
- From lines 12 - 29, I would say I kind of have an idea what's going on, but without looking at any FORTRAN sources or other code, the part that is really
- tripping me up is how they use the do while loop. Ignoring the quicksort functionality for now (since everybody kind of knows what's going on during quicksort),
- normally in Java or C++, for a do-while, you put the body first, then your loop condition. In this case, they just wrote a do while loop with the condition right up front.
- In short, everything in terms of quicksort looks standard: setting the pivot, swapping, then recursing. The last 2 statements are the recursion part,
- and if you ever reach a state where the marked end of your first subarray is greater than the marked beginning of your second subarray, then
- it exits: a standard recursion stop condition. It's just the do-while loop that is kind of messing with me.
- In the makefile, I don't know what else I can say other than this looks very, VERY similar to a C/C++ makefile, used to compile multiple object files and link
- them all together to form an executable. Of course the syntax differs somewhat from what I've done so far in C/C++, but the format is almost identical.
- In the test file, program/end program is probably the "main" method (still don't know what implicit none means), it calls quicksort, and prints the result with the "write"
- statement. But yeah, I think that's about all the analysis I can do without googling anything relating to FORTRAN.
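- To check my reading of the gist, here is a rough Rust translation of the same pivot-and-scan quicksort (my own sketch based on the structure described above, not the gist's code):

```rust
// Recursive quicksort with a middle-element pivot and two scanning
// indices, mirroring the Fortran subroutine's structure.
fn quicksort(a: &mut [f64]) {
    if a.len() <= 1 {
        return;
    }
    let pivot = a[a.len() / 2];
    let (mut i, mut j) = (0usize, a.len() - 1);
    // The Fortran "do ... end do" with the exit check in the middle:
    loop {
        while a[i] < pivot { i += 1; } // scan up for an element >= pivot
        while pivot < a[j] { j -= 1; } // scan down for an element <= pivot
        if i >= j {
            break; // the scans have met: partition done
        }
        a.swap(i, j);
        i += 1;
        j -= 1;
    }
    // The last two statements: recurse on first..i-1 and j+1..last.
    quicksort(&mut a[..i]);
    quicksort(&mut a[j + 1..]);
}

fn main() {
    let mut v = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0];
    quicksort(&mut v);
    assert_eq!(v, [1.0, 1.0, 2.0, 3.0, 4.0, 5.0, 9.0]);
}
```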
- 5. In both Rust and Haskell, one of the simplest examples of lazy (more precisely, short-circuit) evaluation is the condition of an if statement. In a condition such as
- if (A or B), where the condition is true if either A or B is true, or both are, the body executes when the condition holds. This is lazy because first
- the program checks whether A is true. If A is true, then the whole condition is true regardless of what B really is. So after knowing A is true, it skips
- figuring out what B is and evaluates the body of the if statement. However, if A is NOT true, then condition B will be evaluated, and that will determine whether the
- whole condition is true or not. This works the same way in both programming languages.
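- A small Rust sketch of that short-circuit behavior (b_with_side_effect is a made-up helper that records whether B was ever evaluated):

```rust
// Returns false, and flips the flag so we can tell it was called.
fn b_with_side_effect(flag: &mut bool) -> bool {
    *flag = true;
    false
}

fn main() {
    // A is true: B is skipped entirely.
    let mut b_ran = false;
    if true || b_with_side_effect(&mut b_ran) {
        // body runs because A alone decides the condition
    }
    assert!(!b_ran);

    // A is false: B must be evaluated to decide the condition.
    let mut b_ran2 = false;
    if false || b_with_side_effect(&mut b_ran2) {
        unreachable!();
    }
    assert!(b_ran2);
}
```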
- One example of strict evaluation is pattern matching in both Rust and Haskell. Let's say I have some matching, where
- foo True = 1
- foo False = 1
- This is definitely strict in some sense: both cases evaluate to the same exact thing, so it would have been easier to just type
- foo _ = 1
- but since I set the matching up that way, where True is 1 and False is 1, the program evaluating that function MUST check whether the given argument is True or False,
- regardless of whether the result is the same or not. Strict evaluation forces the program to evaluate everything in order to come to a conclusion.
- Looking at it from both sides, lazy evaluation seems to have the speed edge, while strict evaluation seems to have the predictability edge; both hold their pros and cons.
- 6. product [x^2 | x<-[1..100]]
- If I understand the list comprehension (set builder) notation correctly, what is happening is a new list is created, and as we iterate from 1-100, the numbers are squared and
- stored into the list. So what you would have now is [1, 4, 9, ..., 10000]. After that is computed, what finally happens is you take the product of
- all the numbers in your new list, i.e. 1*4*9*16*...*10000.
- In rust,
- fn main() {
-     let x: i32 = (1..=100).map(|n| n * n).product();
-     print!("{}", x);
- }
- This program runs if I swap .product() with .sum(), so .product() should also work in principle. The reason I got an error when I ran it is that the number
- this program produces is too freakin massive: it overflows i32 (and even u128). But YES, there is a HUGE problem with this program. THE NUMBER IS WAY TOO BIG. You would need
- an arbitrary precision integer type, e.g. from a crate like num-bigint, to manage a number that size.
- In GHCi, it spits the answer easily. To be exact:
- 8709782489089480079416590161944485865569720643940840134215932536243379996346583325877967096332754920644690380762219607476364289411435920190573960677507881394607489905331729758013432992987184764607375889434313483382966801515156280854162691766195737493173453603519594496000000000000000000000000000000000000000000000000
- Haskell succeeds in generating these incredibly large numbers because Integers in Haskell are arbitrarily sized. It will use up as much memory
- as it needs to in order to do computations/operations. In arbitrary sized integers, arithmetic is usually done over an array of machine words, where
- you do your standard digit-by-digit operation, and if a word fills up to its max, you carry over like you would back in elementary arithmetic. Depending on
- whether the words are 4 bytes or 8 bytes, the arithmetic effectively uses a counting base of 2^32 or 2^64 (our normal counting system is base 10).
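- As a sketch of the carry-based arithmetic described above, here is a toy arbitrary-precision product in Rust, storing base 10^9 digits little-endian (mul_small and to_string are hypothetical helper names, not a real library API):

```rust
// Multiply a little-endian base-10^9 bignum by a small scalar, carrying as you go.
fn mul_small(num: &mut Vec<u64>, m: u64) {
    let mut carry: u64 = 0;
    for d in num.iter_mut() {
        let v = *d * m + carry; // fits in u64: digit < 10^9, m <= 10^4 here
        *d = v % 1_000_000_000;
        carry = v / 1_000_000_000;
    }
    while carry > 0 {
        num.push(carry % 1_000_000_000);
        carry /= 1_000_000_000;
    }
}

// Render the bignum in decimal: most significant limb unpadded, the rest zero-padded.
fn to_string(num: &[u64]) -> String {
    let mut s = num.last().map(|d| d.to_string()).unwrap_or_default();
    for d in num.iter().rev().skip(1) {
        s.push_str(&format!("{:09}", d));
    }
    s
}

fn main() {
    let mut p = vec![1u64];
    for n in 1u64..=100 {
        mul_small(&mut p, n * n); // product of the squares 1^2 .. 100^2
    }
    let s = to_string(&p);
    // Agrees with GHCi's answer above: 316 digits, starting 87097824890894800794...
    assert_eq!(s.len(), 316);
    assert!(s.starts_with("87097824890894800794"));
}
```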
- 7. https://manishearth.github.io/blog/2015/05/17/the-problem-with-shared-mutability/
- I was fortunate to find this source, which breaks up the mutability rules and shared mutable state into simple, understandable code to explain why shared mutability is
- incredibly dangerous. In general, the reason shared mutability is incredibly unsafe is that there's the risk of one party modifying the data in a way that
- prevents the other one from working properly. From the source above, this code is provided:
- let mut x = Vec::new();
- x.push(1); // Allowed
- {
-     let ptr = &x; // Create an immutable reference
-     let y = ptr[0]; // Allowed, nobody can mutate
-     let y = x[0]; // Similarly allowed
-     x.push(1); // Not allowed (will not compile): as long as `ptr` is active,
-                // `x` is frozen for mutation
- }
- And clearly, this will give you an error: ptr is an immutable borrow of the vector x, and as long as that borrow is still in use, the compiler freezes x
- against mutation, so the push is rejected at compile time. (One note: with modern Rust's non-lexical lifetimes, this exact snippet may actually compile,
- since ptr is never used again after the reads; the error reliably appears when ptr is used again after the push.)
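- For comparison, here is a version that does compile: ending the borrow's scope before mutating again (a minimal sketch of the fix):

```rust
fn main() {
    let mut x = vec![1];
    {
        let ptr = &x;          // immutable borrow of x
        assert_eq!(ptr[0], 1); // reading through the borrow is fine
    } // the borrow ends here
    x.push(2); // mutation is allowed again
    assert_eq!(x, vec![1, 2]);
}
```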
- There is another simple example I found looking around on the internet, known as the producer-consumer problem.
- https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem
- According to Wikipedia, the producer-consumer problem is a problem where both parties share a buffer: the producer produces data, and the consumer consumes data.
- The producer can't produce too much data, otherwise the buffer gets full, and the consumer can't consume too much data, otherwise there'll be no data left to consume.
- In this problem, there is a specific scenario where a "deadlock" can happen; here are the steps for that to happen, according to this page.
- The consumer has just read the variable itemCount, noticed it's zero and is just about to move inside the if block.
- Just before calling sleep, the consumer is interrupted and the producer is resumed.
- The producer creates an item, puts it into the buffer, and increases itemCount.
- Because the buffer was empty prior to the last addition, the producer tries to wake up the consumer.
- Unfortunately, the consumer wasn't yet sleeping, and the wakeup call is lost. When the consumer resumes, it goes to sleep and will never be awakened again.
- This is because the consumer is only awakened by the producer when itemCount is equal to 1.
- The producer will loop until the buffer is full, after which it will also go to sleep.
- I didn't paste the code because otherwise it'd be way too long, but this is a very fascinating problem that describes how shared mutability can really mess your program.
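- In Rust, the producer-consumer buffer is typically a channel, which avoids the lost-wakeup problem because the synchronization is owned by the channel itself (a small sketch, not the Wikipedia pseudocode):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    // Producer: puts items into the shared buffer.
    let producer = thread::spawn(move || {
        for item in 0..5 {
            tx.send(item).unwrap();
        }
        // tx is dropped here, so the consumer's loop ends cleanly.
    });
    // Consumer: blocks until data arrives or all producers hang up.
    let consumed: Vec<i32> = rx.iter().collect();
    producer.join().unwrap();
    assert_eq!(consumed, vec![0, 1, 2, 3, 4]);
}
```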
- Lastly, there is something called data races, which the Rustonomicon (the unsafe Rust book) actually goes into in some depth. Here is the source.
- https://doc.rust-lang.org/nomicon/races.html
- I don't think I will go too in depth into it because the answer for this question is already long, but data races are another example of the dangers of shared mutable state.
- 8.
- slithers(snake).
- swims(fish).
- swims(shark).
- crawl(lizard).
- scales(snake).
- scales(lizard).
- scales(fish).
- scales(shark).
- flies(bug).
- % eats(A,B) is true when A eats B
- eats(shark, fish).
- eats(lizard, bug).
- eats(snake, lizard).
- eats(fish, fish).
- eats(bug, bug).
- ?- swims(X).
- X = fish ;
- X = shark.
- ?- scales(lizard).
- true.
- ?- eats(X, fish).
- X = shark ;
- X = fish.
- ?- scales(X), eats(X, Y).
- X = snake, Y = lizard ;
- X = lizard, Y = bug ;
- X = Y, Y = fish ;
- X = shark, Y = fish.
- reptile(X) :- scales(X), (slithers(X) ; crawl(X)).
- fish(X) :- scales(X), swims(X), eats(X, fish).
- insect(X) :- flies(X), (reptile(Y), eats(Y, X) ; eats(bug, X)).
- 9.
- No Silver Bullet Summary
- From a general standpoint, Brooks argues that in programming there is no "silver bullet", i.e., nothing that provides an immediate and
- extremely effective solution to a given problem or difficulty, especially one that is normally very complex or hard to resolve. In his essay "No Silver Bullet",
- Brooks first talks about the difficulties of building software, categorizing them as either "essential difficulties" or "accidental difficulties".
- In short, "essential difficulties" are difficulties inherent in the nature of software that can't be avoided, and "accidental difficulties"
- are difficulties we make harder for ourselves. After talking about difficulties, Brooks breaks down the properties of modern software systems
- into 4 parts: Complexity, Conformity, Changeability, and Invisibility.
- "Let us consider the inherent properties of this irreducible essence of modern software systems: complexity, conformity, changeability, and invisibility."
- Complexity - Scaling up software increases the number of elements (the key is that this increase is non-linear)
- Conformity - A programmer cannot have the mindset of "everything has some kind of unifying principle" like you see in mathematics or physics
- Changeability - Software will constantly undergo changes; users and hardware will force these changes
- Invisibility - Software can NOT be visualized, unlike buildings and architectural designs
- So in short, it looks like these properties are a pain to work with, software gets complex incredibly fast, and uses a lot of computing power, not everything
- shares some sort of principle like in mathematics or physics, software is constantly changing, and software can't even be visualized. So what can we do given that
- software falls under these 4 hard-to-work-with principles? Well, in Silver Bullet, Brooks offers 3 kinds of solutions to address the problem of software engineering.
- Reuse - Reusing existing software components means that a majority of your cost is really just going to be developmental, as in all you have to really do is
- add on to what already exists. The cost of building your own solution from scratch stands out compared to buying an existing system.
- Incremental Development - The biggest problem in building software is having the urge to build it optimally on the first try. However, instead of building optimal
- software in the first place, learn how to build a working one, then optimize it from there.
- Investing in Software Developers - Take the time to nurture and cultivate your own team. I don't think he necessarily means find the best people, but find the ones
- you can develop and invest in, in order for them to build better software.
- Out of the Tar Pit Summary
- From a general standpoint, Moseley and Marks argue that complexity is the root of all problems in software engineering. By separating accidental complexity
- from essential complexity, there are methods to build better software. When it comes to software engineering, Moseley and Marks argue that there are a variety of
- factors that cause complexity. Some of these include specifying control, adding state to stateless procedures, the languages themselves, etc. In short, there are too
- many things within our control that cause all this complexity.
- Similar to "No Silver Bullet", Moseley and Marks describe accidental and essential complexity. In short, essential complexity is the minimum that must be done in order
- for the program to function at all. In the ideal world, being concerned only with essential complexity is easy: there is no ambiguity, there is no state. However, any attempt
- to interpret this problem in a physical way, like code, creates accidental complexity. Accidental complexity, as described above, is complexity that we
- make for ourselves.
- Moseley and Marks continue by describing why it might be reasonable to have accidental complexity to some level (of course you need essential complexity, that is THE TASK).
- Some of these reasons include performance and ease of expression. Performance is when accidental state and control are required for efficiency, and ease of
- expression is when accidental state can sometimes be the easiest way to express some sort of logic. The ideal way to group these together is to imagine that
- essential complexity is the core of the system; it's more like an "idea", and essential logic and accidental state and control support this idea of
- essential complexity. Essential complexity is the problem, it's an "idea"; anything used to try to represent this kind of idea, through higher level languages or code,
- is accidental complexity.
- Personally, I would say I have to side more with Moseley and Marks when it comes to software engineering. Although both essays share some points, Out of the Tar Pit
- seems to be more relevant than No Silver Bullet. I have to agree that complexity is the root of all our problems, because I believe that as software engineering progresses
- with time, bigger and better tools will be developed to help us solve some of the problems that "No Silver Bullet" mentions. One of these is invisibility. I believe
- that in the far future, someone will develop a high level language that allows us to see what's happening on the inside of a piece of software. Yes, Brooks might be right that
- software is invisible, but I don't think it's so invisible that there is no way we can create our own way of interpreting what it really is.
- No Silver Bullet
- "Software is invisible and unvisualizable"
- When it comes to discussing accidental complexity and essential complexity, that seems like the proper way to frame
- software engineering. When it came to discussing what they really are, the main focus of both essays, Moseley and Marks seem to have been able to describe
- accidental and essential complexity in a way that makes software engineering much more of an open problem than just a typical problem, which I 100% believe is the case
- when it comes to software engineering.
- No Silver Bullet
- "The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions"
- Out of the Tar Pit
- "Note that the definition of essential is deliberately more strict than common usage. Specifically when we use the term essential we will mean strictly essential to the users’
- problem (as opposed to – perhaps – essential to some specific, implemented system, or even – essential to software in general)"
- When both essays try to depict what essential complexity is, No Silver Bullet seems to view it as a mechanism that connects things such as data sets, algorithms, and
- much more, as explained in the quote. However, Moseley and Marks seem to take it one step further, depicting essential complexity as a formless entity,
- an idea, that is truly the core of our problem. I don't quite agree when Silver Bullet says it's some connection, but I agree when Tar Pit says it's the core of your problem,
- because the essential complexity IS your problem. I know it's probably wrong to think about it in terms of other subjects, but from what I see, think of a word problem
- being reinterpreted into a math equation. In this case, the essential complexity would probably be that math equation itself. Anything outside of this math equation, especially in
- a word problem, means absolutely nothing. "Jim buys 40 watermelons, and eats 20 of them, how many do we have left?" The essential complexity of this problem boils down
- to a very simple mathematical equation. However, I think this is kind of a bad example, because in software engineering, essential complexity is almost the same, except
- you can't really express it. Any means of expressing it means you're adding on accidental complexity. (Not gonna lie, I think I'm sounding a bit crazy.)
- When it comes to looking at accidental complexity, I have to, again, agree with Moseley and Marks. I believe Out of the Tar Pit depicts accidental complexity a lot
- better than No Silver Bullet does, but I think this is because Moseley and Marks's essay mainly revolves around complexity anyway, which I believe is the root of software
- engineering. It seems like Brooks doesn't talk much about accidental complexity, but one thing I probably agree with him on is that some things, such as higher level languages,
- can help us get rid of some accidental complexity.
- No Silver Bullet
- "The power of such systems does not come from ever-fancier inference mechanisms, but rather from ever-richer knowledge bases that reflect the real world more accurately.
- I believe that the most important advance offered by the technology is the separation of the application complexity from the program itself"
- When it comes to tackling these software engineering problems, I must also say that Moseley and Marks make a fair point over Brooks. Brooks does take a fair stand on
- the problems we have with software engineering, but I believe Moseley and Marks break it down to the core, where it changes how you see the problem and how you solve it.
- Out of the Tar Pit
- "It is our belief that the vast majority of state (as encountered in typical contemporary systems) simply isn’t needed (in this ideal world).
- Because of this, and the huge complexity which state can cause, the ideal world removes all non-essential state"
- When you strip a problem of all its accidental complexity (expression through code, higher level languages, anything that has nothing to do with the problem) and you
- only look at the problem, and the problem itself, that's when you reach true essential complexity; in an ideal world, the essential complexity is just the problem itself.
- In an ideal world, a lot of the things used to express this problem wouldn't be needed. In short, it's just an idea, floating.
- On top of that, I really like how Moseley and Marks logically define accidental complexity after talking about essential complexity.
- Out of the Tar Pit
- "We believe that – despite the existence of required accidental complexity – it is possible to retain most of the simplicity of the ideal
- world in the real one. We now look at how this might be achievable. Our recommendations for dealing with complexity (as exemplified by both state and control)
- can be summed up as avoid, and separate."
- In short, some things that really can't be avoided, which I agree to be true, are performance and ease of expression. These things usually take the form of higher level
- languages, logic expression, etc. As much as essential complexity is the problem itself, we can choose to represent this problem in a manner readable to us humans,
- through these accidental complexities.
- Sorry if my argument was all over the place.
- 10.
- I'm going to frame my vision in the context of PL as communication, i.e., the divide between those who know how to code and those who don't, and what impact
- this may have on those who do NOT code, because in this era, not knowing how to code is a huge disadvantage.
- I feel like at this point, a lot of the things we build are either completely new, or for ease of use/access. For example, look at a lot of our technology today. When
- computers were first introduced, a computer was a literal brick with a terminal, a place to store research (if I remember correctly). Now, in the present, a computer
- can do almost whatever its user wants it to do, technology wise. Similarly, in the context of programming languages, we've had languages like C, where a beginner would
- probably struggle to figure out what is really going on, compared to recent languages such as Rust. It seems clear that the direction programming languages are heading,
- in the context of communication, is toward readability, simplicity, and ease of use. I mean, take a look at Dark for example, a very new language being talked about for web development.
- Look at how easy it looks to create some sort of website through blocks; it's almost like an art, where the difficulty of picking it up is probably a LOT LOT easier than
- JavaScript. Additionally, look at the programming language our professor created, Mech. There are very nice looking visualizations, tables, diagrams, etc. It seems
- like the programming languages being developed in the next 10 years are mainly for ease of use and access for everyone, most likely making coding an incredibly easy
- thing to pick up, and a basic skill in the technological era. Researchers and ordinary people will create these new languages, and they will succeed in creating them, making it
- easier for everyone to pick coding up. I think everybody is mildly frustrated, including me, at how old languages are sometimes incredibly cryptic, and how you have to
- manage everything on your own: pointers, safety checks, compiler errors, and stuff like that. The shift from old languages to new languages is most definitely going to be a
- "convenience shift", as I like to call it. As of right now, programmers worldwide seem to be in a state of transition: a lot of us are still coding in old languages
- like C/C++, while at the same time looking to learn new languages and realize the potential they could have when developing programs. In short, I believe languages
- are not going to develop much in the functionality department, but more in the interface department, making code a lot easier to pick up, a lot safer to write, and a lot simpler to program.
- Citations
- https://stackoverflow.com/questions/33923/what-is-tail-recursion
- https://gist.github.com/t-nissie/479f0f16966925fa29ea
- https://manishearth.github.io/blog/2015/05/17/the-problem-with-shared-mutability/
- https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem
- https://doc.rust-lang.org/nomicon/races.html
- https://blog.acolyer.org/2015/03/20/out-of-the-tar-pit/