May 25th, 2023
Five free courses to learn large language models. In this video, we're going to cover five free courses, three short and two long, that teach different aspects of large language models and complement one another. The video is ordered from the easiest, least technical course to the deepest, most technical one, so it works as a sequential learning path: start with the first course if you want the least technical introduction, and finish with the last if you want in-depth, hardcore knowledge. Let's get started.

The first one is DeepLearning.AI's very short course, ChatGPT Prompt Engineering for Developers. This course was built specifically for ChatGPT in partnership with OpenAI, and like every other course in this video, you do not have to pay a single penny to attend it. It is taught by Isa Fulford, a member of technical staff at OpenAI, and Andrew Ng from DeepLearning.AI. The material covered is very simple: it teaches you how to use ChatGPT and how to optimize your prompts, which is called prompt engineering, for tasks like summarization, inferring, transforming text, and expanding text. So if you want ChatGPT to do tasks that are not otherwise straightforward or easy, this course can help you do those things. It is aimed primarily at developers who want to learn prompt engineering for ChatGPT, and it does not take long to finish.
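One of the core tactics the course teaches is writing clear, delimited prompts for tasks like summarization. Here is a minimal sketch of that pattern in Python; the exact prompt wording and the delimiter choice are my own assumptions, not the course's examples:

```python
def make_summarization_prompt(text: str, max_words: int = 30) -> str:
    """Build a summarization prompt that fences the input text in triple
    quotes, so the model can tell the instruction apart from the content."""
    delimiter = '"""'
    return (
        f"Summarize the text delimited by triple quotes "
        f"in at most {max_words} words.\n"
        f"{delimiter}{text}{delimiter}"
    )

print(make_summarization_prompt("Large language models are trained on web-scale text."))
```

The resulting string would then be sent to the model through an API call; fencing the user text in delimiters also makes it harder for content inside the text to be mistaken for instructions.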
So if you are in a rush, with less than one hour to spare, and you want a course that can make you better at prompt engineering for ChatGPT, this is the one to start with.

The second one is State of GPT. Technically speaking, this is not even a course; it's a talk given by Andrej Karpathy a couple of days ago at Microsoft Build. What makes it really interesting, and what makes it work as a course, is that it covers almost every aspect of the large language model space today, and Karpathy, one of the best teachers in the world on this subject, walks you through all the essential items. You learn about the training pipeline of GPT assistants like ChatGPT, from tokenization to pre-training to supervised fine-tuning (SFT, which many people also call instruct fine-tuning or instruction fine-tuning) to reinforcement learning from human feedback (RLHF). It also dives deeper into practical techniques and mental models: how to use these models, which prompting strategies to apply, how to fine-tune them, and how the ecosystem and its extensions are growing. And because the talk is hosted on Microsoft's website, it has broad captioning support: even if you do not know English, you can turn on closed captions in languages like Arabic, Hindi, Thai, Turkish, Ukrainian, and many others.
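The first stage of the pipeline described in that talk, tokenization, turns raw text into integer IDs before any training happens. Here is a toy word-level sketch of the idea; real GPT models use byte-pair encoding, which splits text into subword units, so this is only an illustration:

```python
def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign a unique integer ID to every whitespace-separated token."""
    vocab: dict[str, int] = {}
    for line in corpus:
        for word in line.split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Map text to token IDs; unknown words become -1 in this toy version."""
    return [vocab.get(word, -1) for word in text.split()]

vocab = build_vocab(["the cat sat", "the dog sat"])
print(encode("the cat sat", vocab))  # → [0, 1, 2]
```

Everything downstream in the pipeline, pre-training, SFT, and RLHF, operates on sequences of these integer IDs rather than on raw characters.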
And at less than 45 minutes of content, it is really easy to fit in if you want to quickly dive into the state of large language models and learn what is happening in this space from one of its best teachers. That is State of GPT by Andrej Karpathy, which is technically not a course, but I'm including it here because of the breadth of subjects it covers.

The third course is Cohere's LLM University, a new initiative by Cohere, a company that also offers large language model endpoints. As you might have guessed, because this is a Cohere course, you will see a lot of Cohere-specific material, but it is still worth taking even if you do not want to use Cohere's services, and it is completely free. It is taught by industry veterans in the large language model space, and the curriculum is simple and easy to follow. It starts with how large language models work: the basic architecture, Transformer models, embeddings, similarity, and the attention mechanism. Then you go one level further and see how to use these models for concrete tasks, such as building semantic search, doing text generation or text classification, or simply analyzing text with embeddings. Finally, you learn to productionize everything and deploy it as an application, using Cohere endpoints such as Classify, Generate, and Embed to put these models to work in the real world.
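Semantic search, one of the applications covered, boils down to embedding documents and queries as vectors and ranking by similarity. Here is a minimal sketch with made-up 3-dimensional vectors; a real system would get its embeddings from an embed endpoint or a model, and the documents and numbers here are invented purely for illustration:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical document embeddings (real ones have hundreds of dimensions).
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "api rate limits": [0.1, 0.9, 0.2],
}
query_embedding = [0.8, 0.2, 0.1]  # pretend embedding of "how do I get my money back?"
best = max(docs, key=lambda name: cosine_similarity(query_embedding, docs[name]))
print(best)  # → refund policy
```

The ranking step is the same whatever produces the vectors; swapping in real embeddings from an API is the only change needed to search actual documents.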
Like I said, certain aspects of this course are Cohere-specific, but in general it is a very good way to understand what is happening in the industry even if you do not want to tie yourself to the Cohere ecosystem, and it is taught by well-known teachers, so you will learn a lot of different aspects of large language models. It is also a good bridge between the first two courses and the next two, because it combines the least technical and the most technical sides of learning large language models.

The next course is another very well-known one: Stanford's CS25 Transformers United V2. It ran in winter 2023, but the entire course is available on YouTube as a playlist, so you can watch all the recordings completely for free. This course takes you deep into the different aspects of Transformers, and once again you will see names like Andrej Karpathy among the speakers. The lectures cover large language models from many angles, sometimes mathematical and sometimes not, and all the details are available on the course page. Unlike the three courses we just saw, this is not a small course: it will take a considerable amount of time, several weeks, to watch the recordings and learn from them. It is another very important course for learning anything and everything in the large language model space, it is constantly being updated, and you can go watch it now.
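Since the attention mechanism is at the heart of everything CS25 covers, here is a from-scratch sketch of scaled dot-product attention on plain Python lists; this is a single head with no masking or batching, meant only to show the arithmetic:

```python
import math

def softmax(xs: list[float]) -> list[float]:
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends over all keys,
    producing a weighted average of the value vectors."""
    d = len(K[0])
    outputs = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        outputs.append(
            [sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))]
        )
    return outputs

Q = [[1.0, 0.0]]              # one query vector
K = [[1.0, 0.0], [0.0, 1.0]]  # two key vectors
V = [[1.0, 2.0], [3.0, 4.0]]  # two value vectors
print(attention(Q, K, V))     # output leans toward V[0], the closer key
```

The query matches the first key more strongly, so the output is pulled toward the first value vector; that weighted-average behavior is the whole mechanism.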
The final one is what I personally consider the OG LLM course, and it comes from one of the most respected teachers on the internet, at least for me: Jeremy Howard. Jeremy Howard of fast.ai has released the second part of his deep learning course, From Deep Learning Foundations to Stable Diffusion, completely free. You don't have to spend a single penny, even on compute: Jeremy has made it really easy to run the course materials on free Google Colab and other free GPU environments. While the other courses focus primarily on the text side of large language models, this course also dives into the Stable Diffusion world, teaching you how diffusion models work, how denoising works, what the latent space is, and what VAEs (variational autoencoders) are. Along the way it covers the foundations of deep learning in depth: backpropagation, autoencoders, mixed precision, DDPM, dropout, attention, Transformers, and latent diffusion, all the pieces you need for a strong foundation if you want to create your own model. So this course is not just going to teach you applied large language models, or how to use an existing model to do something; it can make you a deep learning engineer, capable of building your own large language model and whatnot.
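The forward (noising) side of diffusion, which the DDPM material in that course covers, can be stated in one line: a sample at noise level t is a mix of the clean data and noise, x_t = sqrt(alpha_bar)·x0 + sqrt(1 - alpha_bar)·eps. Here is a sketch with fixed toy numbers; the "image" and the noise values are invented, and a real pipeline would draw eps randomly at every step:

```python
import math

def noised_sample(x0: list[float], alpha_bar: float, eps: list[float]) -> list[float]:
    """DDPM forward process: x_t = sqrt(alpha_bar)*x0 + sqrt(1-alpha_bar)*eps.
    alpha_bar near 1 keeps the signal; near 0 the sample is mostly noise."""
    a = math.sqrt(alpha_bar)
    b = math.sqrt(1.0 - alpha_bar)
    return [a * x + b * e for x, e in zip(x0, eps)]

x0 = [1.0, -1.0, 0.5]   # a toy "image"
eps = [0.2, -0.3, 0.1]  # noise, fixed here for reproducibility
early = noised_sample(x0, 0.99, eps)  # early step: mostly signal
late = noised_sample(x0, 0.01, eps)   # late step: mostly noise
print(early, late)
```

Training then teaches a network to predict the noise from x_t, and generation runs the process in reverse, denoising step by step, which is the part the course builds up to.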
So this is one of the most comprehensive courses on this subject on the internet, also free, and taught by one of the best in the domain, Jeremy Howard, who has been in this space since before an LLM was even called an LLM. It is definitely a must-watch if you want to dive deeper into the technical side and build your knowledge and rigor as a deep learning engineer or a large language model engineer. Be warned, though: this course takes quite a bit of time, a couple of months or more. It is not something you can watch in a couple of days and walk away with expertise.

To quickly recap: the course we just saw is the most technical, and the course we started with is the least technical. We began with ChatGPT Prompt Engineering for Developers, moved on to State of GPT by Andrej Karpathy, then LLM University by Cohere, then Stanford CS25, which still gets updated, and finally Practical Deep Learning for Coders Part 2, From Deep Learning Foundations to Stable Diffusion by fast.ai, which features a lot of Jeremy Howard's content. I hope this video was helpful in showing you how to build your LLM expertise based on the level of expertise you want and how technical you want to get. If you have any questions, or if you know a course I might have missed, please let me know in the comment section. See you in another video. Happy learning!