1. Course Part 0: Course Introduction and Understanding LLMs (7 Lessons)

Before we jump into coding, Part 0 sets the stage. We'll cover the course logistics, define the scope, and articulate why the LLM developer role is distinct and valuable. We'll also explain our focus on the core practical toolkit (Prompting, RAG, Fine-tuning, Tool Use, Data Curation) rather than training models from scratch. Crucially, this section provides a non-mathematical grounding in how LLMs function and, just as importantly, why they sometimes produce peculiar or incorrect outputs – building intuition for their "alien" intelligence is key to working with them effectively.

Part 1: Core LLM Skills via Building Our RAG AI Tutor

This is where the rubber meets the road. Over 60 lessons, we'll construct an AI Tutor capable of answering complex AI questions, using this project as a vehicle to learn core LLM development skills. You'll progress from fundamental API usage and basic RAG pipelines to sophisticated data collection (scraping, APIs, search), cleaning, advanced RAG techniques, fine-tuning experiments, and finally, deploying your application using tools like FastAPI, Gradio, and Hugging Face Spaces.

We emphasize learning through iteration and experimentation. Rather than just presenting a finished product, we'll show the development path, including techniques we tried that didn't pan out for this specific project. Some ideas might not offer enough performance gain, carry too high a cost or latency penalty, or simply be better suited elsewhere. Embracing this trial-and-error process builds adaptable skills that outlast any specific technique. You'll primarily work through standalone notebooks for each concept, which we encourage you to integrate into your own version of the AI Tutor using our provided dataset.

This first part of the course is split into nine subsections. The notebooks in the first two subsections are geared toward LLM first-timers and are less relevant to the final project. You may wish to skip some details in the code project notebooks here if you have already used LLMs via API. However, we still recommend reading these lessons even if you have more LLM experience, as we cover many tips, techniques, and explanations that will be useful later in the course.

Here are the sub modules we cover in Part 1:

2. Course Part 1: Building Our RAG AI Tutor; Introduction to Using LLMs (7 Lessons)

We introduce LLMs and demonstrate how to use them via API calls. This first lesson covers the core concepts behind LLMs, explaining their architecture and capabilities while walking you through the steps of making an API call to access models from platforms like OpenAI or Google AI Studio. You will learn how to structure queries, handle API responses, and use LLM outputs in your applications.

We then explore the criteria for choosing the right LLM for a project with lessons on metrics, benchmarks, and considerations for using closed-source versus open-source models. We conclude this course section with a detailed introduction to prompt engineering. The basics of prompt construction and common techniques to craft effective prompts will be introduced. The course includes a practical example: building an AI tutor system prompt, which also covers important topics like prompt injection and prompt hacking. We conclude with lessons on refining and iterating your prompts to ensure better results from your LLM. Understanding how to write and adjust prompts is a crucial skill in working with LLMs, as it can significantly impact the quality and accuracy of your outcomes.