Before we jump into coding, Part 0 sets the stage. We'll cover the course logistics, define the scope, get our first experience of using an LLM API, and articulate why the LLM developer role is distinct and essential. We'll also explain our focus on the core practical toolkit (Prompting, RAG, Fine-tuning, Tool Use, Data Curation) rather than training models from scratch. Crucially, this section provides a non-mathematical grounding in how LLMs function and, just as importantly, why they sometimes produce peculiar or incorrect outputs. Building intuition for what leads to their "alien" and “jagged” intelligence with superhuman strengths and basic failure modes is key to working with them effectively.
This is where the rubber meets the road. Over 60 lessons, we'll construct an AI Tutor capable of answering complex AI questions, using this project as a vehicle to learn core LLM development skills. You'll progress from Prompting, simple LLM API usage and basic RAG pipelines to sophisticated data collection (scraping, APIs, search), data cleaning, advanced RAG techniques, fine-tuning experiments, and finally, deploying your application using tools like FastAPI, Gradio, and Hugging Face Spaces.
We emphasize learning through iteration and experimentation. Rather than just presenting a finished product, we'll show the development path, including techniques we tried that didn't pan out for this specific project. Some ideas might not offer enough performance gain, carry too high a cost or latency penalty, or simply be better suited elsewhere. Embracing this trial-and-error process builds adaptable skills that outlast any specific technique. You'll primarily work through standalone notebooks for each concept, which we encourage you to integrate into your own version of the AI Tutor using our provided dataset.
This first part of the course is split into nine subsections. The notebooks in the first two subsections are geared toward LLM first-timers and are less relevant to the final project. You may wish to skip some details in the code project notebooks here if you have already used LLMs via API. However, we still recommend reading these lessons even if you have more LLM experience, as we cover many tips, techniques, and explanations that will be useful later in the course.
Here are the submodules we cover in Part 1:
We first explore the criteria for choosing the right LLM for a project with lessons on metrics, benchmarks, and considerations for using closed-source versus open-source models. We conclude this course section with a detailed introduction to prompt engineering. The basics of prompt construction and common techniques to craft effective prompts will be introduced. The course includes a practical example: building an AI tutor system prompt, which also covers important topics like prompt injection and prompt hacking. We conclude with lessons on refining and iterating your prompts to ensure better results from your LLM. Understanding how to write and adjust prompts is a crucial skill in working with LLMs, as it can significantly impact the quality and accuracy of your outcomes.