Anon's post.

a guest
Jul 3rd, 2016
Let's take a small step back and talk about memory for a bit.
There is more than one kind of memory in your PC. You have your permanent storage (HDD, SSD) and your RAM, which is volatile (data is lost shortly after your PC is turned off). Your files go on your permanent storage, while programs load data into your RAM and keep it there until they terminate. There is another very important relation between these kinds of memory. Keeping all the data on your permanent storage (including the data your applications work with) is a terrible idea, because your storage might be large, but it's also much slower. Your RAM, on the other hand, might be blazing fast, but it's also much, much smaller (aside from being volatile).
There are more kinds of memory in your PC, and this relation holds true for all of them, and always has:
>permanent storage (measured in TB, slow as fuck)
>RAM (measured in GB, much faster)
>various levels of CPU caches (measured in MB/KB, really fast)
>CPU registers (measured in bytes, as fast as it gets)
If you program in assembler, you manage all of them (except for the caches) by hand. If you program in C/C++, you can manage all of these by hand, but usually only deal with your RAM and permanent storage (again, by hand). If you program in languages like C# or Java, you manage your permanent storage by hand, the RAM is managed for you, and you have no way of directly influencing anything below that, or even knowing what's happening there.
Depending on what you're working on, this can cause huge problems, as not using your CPU caches efficiently can make your application multiple times slower.
For today, we're going to focus on the RAM.
>Heap and stack
Your OS actually separates each program's RAM into two parts: the heap and the stack. One on each end, growing towards each other. The stack is a small part of your RAM; most of your RAM is used for the heap. Despite both of them being on the same physical piece of hardware, they have the same relation I told you about earlier: the stack is a lot smaller, but also a lot faster.
>Heap
The heap is what people who know a bit about computers think of when they think about RAM. Your application loads/creates data and puts it on the heap to use it. Unless something goes horribly wrong, the heap data exists for as long as the application needs it to. From the point of view of the application, it's permanent. You "allocate" (request and reserve) memory on the heap using the "new" operator in C++ or the "malloc()" function in C. You "free" memory on the heap using the "delete" operator in C++ or the "free()" function in C.
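A minimal sketch of what that looks like in code (the names and values here are mine, purely for illustration):

```cpp
#include <cstdlib>

int heap_demo() {
    int* value = new int(42);        // C++: allocate a single int on the heap
    int* array = new int[8];         // C++: allocate an array of 8 ints
    for (int i = 0; i < 8; ++i) array[i] = i;
    int sum = *value + array[7];     // use the memory while it's ours
    delete value;                    // free the single int
    delete[] array;                  // heap arrays need delete[]

    int* c_style = (int*)std::malloc(sizeof(int));  // the C way
    *c_style = 1;
    sum += *c_style;
    std::free(c_style);              // free() pairs with malloc()
    return sum;
}
```

Every one of those allocations is a request to the memory manager; that's the cost we're about to talk about.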
The reason why it's so slow is that being permanent has some huge implications in modern, multi-tasking operating systems.
If applications allocate chunks of memory, your OS needs to make sure that there is no overlap. Otherwise horrible things would happen. So your OS needs to keep track of which application currently uses which chunks of your memory and intelligently manage them. Which part of your memory should it give to an application exactly? Applications that were terminated, or that simply freed some resources, will inevitably have left lots of free "holes" in between allocated memory. I think you get how this isn't a simple task.
Every time you use "new" in C++, all of this has to be done. This is the reason why particle systems and similar things, with too much throwaway data for the stack, use something called "object pooling", if they were developed by a competent developer (which is something one should do A LOT when working with Unity).
You create a "new" particle, send it on its way, and some time later it dies and is "deleted". Actually using the new and delete operators (or your language's equivalents) is the naive implementation, which will run like absolute arse. What you're supposed to do is create all the particles up front, turn them off by default, and put them in an array of particles that are ready to be used. Then, instead of using the new operator, you simply take the first element out of the ready-array, turn it on and send it on its way. Once it dies, you turn it off, reset all its variables and put it back into the ready-array.
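The idea above can be sketched like this (a hypothetical Particle struct and a deliberately bare-bones pool; a real one tracks more state):

```cpp
#include <vector>

struct Particle {
    float x = 0, y = 0;
    bool active = false;
};

class ParticlePool {
    std::vector<Particle> particles;   // all particles, created up front
    std::vector<int> ready;            // indices of turned-off particles
public:
    explicit ParticlePool(int count) : particles(count) {
        for (int i = 0; i < count; ++i) ready.push_back(i);
    }
    // Instead of "new": grab a ready particle and turn it on.
    int spawn(float x, float y) {
        if (ready.empty()) return -1;  // pool exhausted
        int i = ready.back();
        ready.pop_back();
        particles[i] = {x, y, true};
        return i;
    }
    // Instead of "delete": reset its variables, return it to the pool.
    void kill(int i) {
        particles[i] = Particle{};
        ready.push_back(i);
    }
    int available() const { return (int)ready.size(); }
};
```

No allocation ever happens after the constructor runs, which is the whole point.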
>Stack
Unlike the heap, the stack is something (usually) only programmers know about. It's a special part of your RAM that, from the point of view of the application, is volatile.
As the other anon already pointed out correctly, it's there for things you do in functions: temporary variables used in calculations, counters for iterating through your loops, etc. Simply put: it's there for throwaway data. Things you only need for a short period of time. As soon as you leave the function/loop/whatever, those variables "go out of scope" and are "deleted" (not really, but we'll get to that later).
The stack is called that for a reason. The reason being that it literally works like a stack of cards. You have two operations: "PUSH" and "POP".
PUSH puts a new variable on top of the stack, while POP takes the topmost one off the stack. The reason why it's so much faster than the heap is that, being volatile, managing it becomes a whole lot simpler.
The only thing your OS needs to keep track of is where the top of your stack is. Earlier I mentioned CPU registers as being the fastest memory available to us. There is one special register designated to that exact job: the stack register (or stack pointer, SP). With multi-tasking operating systems it, again, gets more complicated (stack frames and shit like that), but the general idea is the same. You just need to keep track of where your top is at the moment.
If you push a variable onto the stack, you take your stack pointer, advance it, then write the data. If you pop a variable off the stack, you return the data at the current position, then step the SP back. You don't even have to clear the data. You just take a step back in memory, because the next time you push again, you'll overwrite whatever was there earlier anyway, so nobody gives a shit.
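A toy model of that bookkeeping (real stacks grow downward on most CPUs and move by the element's size, but the idea of only tracking the top is the same):

```cpp
struct ToyStack {
    int data[16];   // the stack's memory; old values are never cleared
    int sp = 0;     // "stack pointer": index of the next free slot
    void push(int v) { data[sp] = v; ++sp; }  // write, then advance SP
    int pop() { --sp; return data[sp]; }      // step SP back, return value
};
```

Note pop() doesn't erase anything; the slot just gets overwritten by the next push.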
>Manual and automatic memory management
Manual memory management means allocating memory when you need it, using it for whatever you need it for, then freeing it once you don't need it anymore. Sounds simple enough. The problem is that manual memory management is the source of one of the biggest PITAs a programmer can experience: memory leaks. It is really easy to write code that allocates more memory than it frees. If this happens in some sort of loop, your application becomes a memory black hole that eats up all the RAM. If it eats too much, too fast, your computer can grind to a halt or simply crash, and you may have to pull the plug or press the reset button.
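A minimal sketch of how such a leak arises (the sizes and the bookkeeping counter are mine, just to make the effect visible):

```cpp
#include <cstddef>

// The loop allocates every iteration but only frees on one path,
// so over time: allocated > freed. That difference is the leak.
size_t leaky_loop(int iterations) {
    size_t leaked = 0;
    for (int i = 0; i < iterations; ++i) {
        int* chunk = new int[256];          // ~1 KB each pass
        chunk[0] = i;
        if (i % 2 == 0) {
            delete[] chunk;                 // freed only half the time...
        } else {
            leaked += 256 * sizeof(int);    // ...the rest is never freed
            // chunk goes out of scope here: the memory is unreachable now
        }
    }
    return leaked;                          // bytes the loop leaked
}
```

Run that loop long enough and the process eats RAM until something gives.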
Automatic memory management works differently. Java and C# have a huge-ass runtime environment your application runs in. It's basically a virtual OS within your OS, and what it does, among other things, is keep track of resources you allocate on the heap and free them for you when it's clear they won't be used anymore.
It's a huge help and makes developing so much faster. However, there are severe problems with this approach.
>Wrapping it all up: garbage collection
The part of the runtime that checks whether a certain resource can be deleted, and does so if it can, is the garbage collector, or GC. The obsolete resources on the heap it frees are called garbage, which shouldn't require any further explanation. I will now list a couple of facts that should make clear where the problems lie.
When the GC kicks in, it needs to check all the data and free whatever it can. While it works, your application is completely stopped until the GC finishes its job.
You have no direct control over the GC. The runtime decides by itself when it runs. (You can have some minor influence here and there, but the general situation stays the same.)
Java/C# have no manual memory management, so they themselves decide what goes on the heap and what goes on the stack. So there's only so much you can do to prevent producing garbage.
A GC collect is really fucking slow in the kind of time windows realtime (soft and hard) applications work with. Vidya is soft realtime, which means that it is one big loop that repeats itself over and over, and each iteration needs to finish within a certain time frame. You either have 33 ms for 30 fps, or 16 ms for 60 fps. On my quite powerful CPU, a single GC collect took about 7.6 ms in Unity. That's stealing almost half the time I have for a solid 60 fps. And that's on top of everything else, every now and then, without prior warning and with no way of really preventing it.
You should see what this can lead to (hint: Minecraft). Compare this to C++, which gives you the possibility to allocate shit on the stack explicitly (even arrays). A memory leak in C++ might be a pain to find, it might be a pain to fix, it might even crash your computer, but at the very least it FORCES you to fix the issue. A memory leak in C# or Java, on the other hand, won't ever crash your computer. It will just degrade your application's performance in a subtle way, because the increasingly frequent GC collects are spread out over time. You'll barely notice unless you fuck up really bad or write realtime applications.
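That explicit stack allocation looks like this: same data, same result, but the stack version involves no allocator at all (function names and sizes are illustrative):

```cpp
int sum_on_stack() {
    int values[100];                 // lives on the stack: no allocation,
    for (int i = 0; i < 100; ++i)    // freed automatically on return
        values[i] = i;
    int sum = 0;
    for (int v : values) sum += v;
    return sum;
}

int sum_on_heap() {
    int* values = new int[100];      // heap: allocator bookkeeping
    for (int i = 0; i < 100; ++i)
        values[i] = i;
    int sum = 0;
    for (int i = 0; i < 100; ++i) sum += values[i];
    delete[] values;                 // forget this line and you leak
    return sum;
}
```

In C# or Java, an array like that goes on the managed heap whether you like it or not.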
The Minecraft modder who makes OptiFine revealed that Minecraft leaked over 200 MB/s some versions ago. If Minecraft had been written in C++, it would crash most PCs in under a minute, with no survivors.
C# and Java make it easy to write programs fast. They just make it hard to write fast programs. There are good reasons why Unity uses C# for gameplay logic, but is itself written in C++.
This concludes our little lesson. I hope you had as much fun reading it as I had writing it.