Nobles Center does not issue a Certificate for this course, it’s an affiliate program from Udemy
Are you eager to dive into the world of AI and master the art of Prompt Engineering? The Complete Prompt Engineering for AI Bootcamp (2025) is your one-stop solution to becoming a Prompt Engineer working with cutting-edge AI tools like GPT-4, Stable Diffusion, and GitHub Copilot!
We update the course regularly with fresh content (AI moves fast!):
**Updated November 2024** – “SAMMO introduction with metaprompting, minibatching and optimization.”
**Updated October 2024** – “Anthropic Computer Use, Prompt Caching, Perplexity, LangWatch, Zapier.”
**Updated September 2024** – “Google NotebookLM, Anthropic Workbench and content updates.”
**Updated August 2024** – “Mixture of Experts, LangGraph and content updates.”
**Updated July 2024** – “Five proven prompting techniques and an advanced prompt optimization case study.”
**Updated June 2024** – “LangGraph content including human-in-the-loop, and building a chatbot with LangGraph.”
**Updated May 2024** – “ChatGPT desktop, apps with Flask + HTMX, and prompt optimization with DSPy and LM Studio.”
**Updated April 2024** – “LangChain agents, LCEL, text-to-speech, summarizing a whole book, memetics, evals, DALL-E.”
**Updated March 2024** – “More content on vision models and evaluation, as well as reworking old lessons.”
**Updated February 2024** – “Completely reworked the five principles of prompting + added a one-pager.”
**Updated January 2024** – “Added a one-pager graphic and fixed various errors in notebooks.”
**Updated December 2023** – “Another 10 lessons, including creating an entire ebook and more LCEL.”
**Updated November 2023** – “10 fresh modules, with 5 covering LangChain Expression Language (LCEL).”
**Updated October 2023** – “12 more lessons including GPT-4V vision, GitHub Copilot, LangChain and more.”
**Updated September 2023** – “10 more lessons, including projects, more LangChain, non-obvious tactics & SDXL.”
**Updated August 2023** – “10 lessons diving deep into LangChain, plus upgraded 9 lessons from GPT-3 to GPT-4.”
**Updated July 2023** – “Built out the prompt pack, plus 10 more advanced technical lessons added.”
**Updated June 2023** – “Added 6 new lessons and 4 more hands-on projects to apply what you learned.”
**Updated May 2023** – “Fixed issues with hard-to-read text mentioned in reviews, and added 15 more videos.”
**Launched April 2023**
Before we made this course, we had both been experimenting with Prompt Engineering since the GPT-3 beta in 2020 and the DALL-E beta in 2022, long before ChatGPT exploded onto the scene. We slowly replaced every part of our work with AI, and now we work full-time in Prompt Engineering. This course is your guide to doing the same and accelerating your career with AI.
*Since launching this course, Mike and James have been commissioned to write a book for O’Reilly titled “Prompt Engineering for Generative AI” which has sold over 4,000 copies!*
If you buy this course, you get a free PDF of the first chapter! The book complements the course, with all-new material based on the same principles that work.
Whether you’re an aspiring AI Engineer, a developer learning Prompt Engineering, or just a seasoned professional looking to understand what’s possible, this comprehensive bootcamp has got you covered. You’ll learn practical techniques to harness the power of AI for various professional applications, from generating text and images to enhancing software development and boosting your creative projects.
! Warning !: The majority of our lessons require reading and modifying code in Python (each such lesson is marked with “- Coding” in the title). Please don’t buy this course if you can’t code and aren’t seriously dedicated to learning technical skills. We’ve heard from non-technical people that they still got value from seeing what’s possible, but please don’t complain in the reviews 😉
The number of papers published on AI every month is growing exponentially, and it’s becoming increasingly difficult to keep up. The open-source project Stable Diffusion is the fastest-growing repository in GitHub history, and ChatGPT is the fastest-growing consumer product in history, hitting 1 million users in less than a week and 100 million within a few months.
This course will walk you through:
Introduction to Prompt Engineering and its importance
Working with AI tools such as ChatGPT, GPT-4, Midjourney, GitHub Copilot, DALL-E, and Stable Diffusion
Understanding the capabilities, limitations, and best practices for each AI tool
Mastering tokens, log probabilities, and AI hallucinations
Generating and refining lists, summaries, and role prompting
Utilizing AI for sentiment analysis, contextualization, and step-by-step reasoning
Techniques for overcoming token limits and meta-prompting
Advanced AI applications, including inpainting, outpainting, and progressive extraction
Leveraging AI for real-world projects like generating SEO blog articles and stock photos
Advanced tooling for AI engineering like LangChain and AUTOMATIC1111
We’ve had over 3,000 5-Star Reviews!
Here’s what some students have to say:
“Practical, fast and yet profound. Super bootcamp.” – Barbara Herbst
“This is a very good introduction about how AI can be prompt-engineered. The instructor knows what he’s talking about and presents it very clearly.” – Eve Sapsford
“Awesome course for beginners and coders alike! Thoroughly enjoyed myself and the guys delivered some great insights, explaining everything in a straight forward way. Would highly recommend to anyone” – Jeremy Griffiths
“The course is quite detailed, I think almost every topic is covered. I liked the coding parts especially.” – Gyanesh Sharma
“Loved how your articulated the value of thoughtfully engineered prompts. The hands-on exercises were insightful.” – Akshay Chouksey
“Good content but at few steps voice sounds very robotic, which is funny considering the course is about AI.” – Shrish Shrivastava
“Awesome and Detailed Course. Helped a lot to understand the nuances of prompt engineering in AI.” – Prasanna Venkatesa Krishnan
“The best parts of the online training were demonstrations and real-life hints. Interesting and useful examples”
“Good” – Jayesh Khandekar
“Mike and James are very good educators and practitioners. Mike also has courses on LinkedIn; together with James, they are running Vexpower. The price is low to collect reviews. It will go up, for sure. GET” – Periklis Papanikolaou
“This course is a legit practical course for prompt engineering, I learned a lot from this course. The resources that they provided is good, but some of the course (tagged with ‘Coding’ in the Course Title) is for intermediate or advance people in Python programming. If you are not usual with Python, this will be a challenge (like me), but we can overcome it because they taught us step by step pretty clearly (of course I need to pause or backwards). Thanks for this course, but you guys can provide more real case scenario when using AI (less/without coding maybe…)” – J Arnold Parlindungan Gultom
So why wait? Boost your career and explore the limitless potential of AI by enrolling in The Complete Prompt Engineering for AI Bootcamp (2025) today!
Welcome to The Complete Prompt Engineering for AI Bootcamp (2025) – Mike & James
Define what prompt engineering is, so you can confidently explain it to others.
Every lecture has the prompts and/or slides attached, in case you can’t see the text easily.
Please note that videos suffixed with "- Coding" should only be attempted by individuals with a solid understanding of Python programming.
Explore “The Practical Exploration: ChatGPT Prompt Pack”, a collection of 690 prompts covering a wide array of disciplines, designed to guide your interactions with ChatGPT and offer a richer, more varied engagement within the limits of what the model can do.
While ChatGPT is useful for day-to-day work, the OpenAI playground is a cleaner testing environment.
Split tasks into multiple steps, chained together for complex goals.
Define what rules to follow, and the required structure of the response.
Insert a diverse set of test cases where the task was done correctly.
Identify errors and rate responses, testing what drives performance.
Work through the five principles checklist template to optimize your prompts.
Explain what token limits are and how to check them, both with and without code.
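Counting tokens matters because every model has a fixed context limit. As a rough sketch (exact counts come from a real tokenizer such as OpenAI’s tiktoken library; the heuristic here only approximates them):

```python
def estimate_tokens(text: str) -> int:
    """Rough rule of thumb for English text: about 4 characters per token.
    For exact counts you would use a real tokenizer, e.g. tiktoken's
    encoding.encode(text) and take the length of the result."""
    return max(1, len(text) // 4)

# A model with an 8k-token limit can hold very roughly 32k characters
# of prompt plus completion combined.
prompt = "Summarize the following article in three bullet points."
print(estimate_tokens(prompt))
```

This is only for quick back-of-the-envelope checks; the course lessons presumably use the exact tokenizer for the model in question.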
Define what log probabilities are and how to apply them for AI content detection, or to avoid content detection.
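A log probability is just the natural log of the model’s probability for a token, so converting back is a one-liner (a minimal illustration, independent of any particular API):

```python
import math

def logprob_to_probability(logprob: float) -> float:
    """Convert a token logprob (as returned by e.g. the OpenAI API's
    logprobs option) back into a plain probability between 0 and 1."""
    return math.exp(logprob)

# A logprob of 0.0 means the model was certain (probability 1.0);
# more negative values mean the token was less likely.
print(logprob_to_probability(0.0))              # 1.0
print(round(logprob_to_probability(-0.693), 2))
```

Tokens with consistently high probabilities across a passage are one signal used in AI content detection.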
See an example of extremely high temperature paired with a bad prompt: without the right format, the model may make up facts, break the structure of the output you wanted, or repeat itself.
Learn about OpenAI’s o1 reasoning models (o1-preview and o1-mini), designed for deep analytical thinking, with enhanced capabilities in science, coding, and mathematical problem-solving. This lesson covers the differences between chat models and reasoning models, when to use each, and their various trade-offs.
Examine how to generate lists, making it easy to produce knowledge at scale.
Learn how to perform sentiment analysis, enhancing your understanding of text data and enabling better decision-making based on the emotions and opinions expressed in the content.
Discover how to simplify complex topics using GPT-3, making them accessible and easy to understand for individuals of all ages, especially for those new to a subject or concept.
Master the least to most problem-solving approach, where you learn to decompose complex tasks into subproblems and sequentially solve each one, resulting in a more efficient and effective method for tackling challenging situations.
To ensure a highly pertinent response, it's crucial to include any significant details or context in your requests. If these elements are absent, you're essentially allowing the model to infer your intentions, which may lead to less accurate results.
Certain tasks are most effectively detailed in a step-by-step manner. By clearly listing the steps, the model's ability to adhere to them can be enhanced.
Symbols such as triple quotes, HTML elements, chapter headings, and others serve as separators to distinguish various segments of text that should be interpreted in unique ways.
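For instance, a prompt can fence input text in triple quotes so the model treats it as data to operate on rather than as instructions (a minimal sketch; the article text is a placeholder and the delimiter choice is arbitrary):

```python
article = "AI adoption grew rapidly this year..."  # placeholder text

# The instruction refers to the delimiter, and the delimiter marks
# exactly which part of the prompt is the text to summarize.
prompt = (
    'Summarize the text delimited by triple quotes in one sentence.\n\n'
    f'"""{article}"""'
)
print(prompt)
```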
You can ask the model to produce outputs of a predetermined length, measured in words, sentences, paragraphs, or bullet points. Note that targeting an exact word count is unreliable; the model more dependably produces a given number of paragraphs or bullet points.
Master the art of breaking down complex tasks or concepts into smaller steps, allowing you to effectively communicate and teach intricate ideas by guiding learners through a step-by-step process.
Explore the concept of role prompting, understanding how to enhance AI-generated content by assigning specific roles or perspectives to the model, resulting in more engaging and contextually relevant outputs.
Learn how to request context from GPT-3/ChatGPT, enabling you to generate more accurate and relevant AI-generated content by providing the necessary background information and ensuring a better understanding of the topic at hand.
Understand the art of question rewriting, enhancing the clarity and effectiveness of your queries to receive more accurate and relevant AI-generated responses, ultimately improving your problem-solving capabilities.
Prepare the ground for ChatGPT to do good work, by asking it to give itself advice.
Delve into the technique of progressive summarization using GPT-3, enabling you to condense large amounts of information into concise and easily digestible summaries while retaining the essence of the original content.
Discover how to overcome token limitations in ChatGPT by chunking text, allowing you to process larger amounts of data more efficiently and effectively while maintaining the integrity of the information being analyzed.
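Chunking can be as simple as a sliding window with overlap, so context isn’t lost at the boundaries (a bare-bones sketch; real pipelines often split on sentences or tokens rather than characters):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows.
    Each chunk repeats the last `overlap` characters of the previous one,
    so sentences spanning a boundary appear in both chunks."""
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

# Each chunk can then be sent to the model separately, and the partial
# results combined afterwards.
```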
Explore the concept of meta prompting, where you learn to craft prompts based on desired outputs, enabling you to generate more targeted and relevant AI-generated content by reverse-engineering the input-output relationship.
Delve into the technique of chain of thought reasoning, allowing you to develop logical, coherent, and well-structured arguments by connecting ideas and concepts in a step-by-step manner, enhancing your critical thinking skills.
Understand how people use prompt injection as a tool for reverse engineering and taking control of AI systems.
Construct an automatic prompt engineering prompt, capable of generating multiple relevant prompts for a given task.
Easily download all of the Jupyter Notebooks, code and resources for the technical lessons via our GitHub repository - https://github.com/BrightPool/udemy-prompt-engineering-course.git
Dive deep into advanced list generation techniques improving your AI-generated content by creating more structured and relevant lists for various applications.
Improve the reliability and quality of your results by testing the robustness of your prompts.
Learn how to effectively manage the chat message history within ChatGPT API, enabling you to overcome token limitations and handle larger datasets more efficiently, while maintaining the quality and coherence of AI-generated content.
Classify text using embeddings from an AI model, as that allows you to conduct a similarity search.
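The idea is that texts with similar meaning get nearby embedding vectors, so classification reduces to a nearest-neighbor lookup. A toy sketch with hand-made 2-D vectors (real embeddings from an API have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def classify(embedding: list[float], label_embeddings: dict[str, list[float]]) -> str:
    """Return the label whose example embedding is most similar."""
    return max(label_embeddings,
               key=lambda lbl: cosine_similarity(embedding, label_embeddings[lbl]))

# Pretend these are embeddings of one example text per class:
labels = {"positive": [0.9, 0.1], "negative": [0.1, 0.9]}
print(classify([0.8, 0.2], labels))  # positive
```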
Simulate an agent with your AI model, to handle decision-making and tool use.
Compile longer documents from the top down, so you can ensure the text is actually coherent.
Search a vector database to retrieve similar chunks of text to provide as context to your prompt.
Learn how to easily extract structured data from text via OpenAI’s structured output API.
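As a rough sketch of the idea (the field names here are illustrative, not from the course): you describe the shape you want as a JSON Schema, the model is constrained to emit JSON matching it, and your code parses the result. OpenAI’s API accepts such a schema via its response_format option (SDKs can also build one from a Pydantic model):

```python
import json

# Hypothetical schema for extracting a person's name and age from text.
PERSON_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
    "additionalProperties": False,
}

def parse_structured(raw: str) -> dict:
    """Parse the model's JSON response and check the required keys exist."""
    data = json.loads(raw)
    missing = [k for k in PERSON_SCHEMA["required"] if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

# Stand-in for a model response constrained by PERSON_SCHEMA:
print(parse_structured('{"name": "Ada", "age": 36}'))
```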
Prompt caching optimizes costs by letting developers reuse repeated input text (like instructions or context) sent to AI models at a steep discount, while output tokens remain at full price. This makes it especially valuable for applications that repeatedly send the same large chunks of context but expect different responses.
Learn how to check prompt caching results on OpenAI calls and also how to manually perform prompt caching within Anthropic.
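With Anthropic’s API, caching is opt-in: you mark a content block with cache_control, and repeated calls sharing that exact prefix hit the cache. A minimal sketch of the request shape (the instructions text is a placeholder, and the actual API call is shown only as a comment since it needs a key):

```python
def cached_system_block(instructions: str) -> list[dict]:
    """Build an Anthropic-style system prompt where the large, repeated
    instructions block is marked cacheable with cache_control."""
    return [{
        "type": "text",
        "text": instructions,
        "cache_control": {"type": "ephemeral"},
    }]

system = cached_system_block("You are a support agent. <long policy document here>")

# With the anthropic SDK this would be passed along the lines of:
# client.messages.create(model="claude-3-5-sonnet-20241022", max_tokens=1024,
#                        system=system, messages=[...])
```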
Explore the OpenAI Realtime Console, an interactive tool that helps you understand how to implement voice conversations and function calling in your applications.
Twitter Profiles to follow
Reddit Groups to join
Discord Servers to join
Blog Posts to read
Academic Papers to review
Prompting Tools to use
This is a new technique I have been using to get diverse and unique answers to LLM questions.
LangChain is a cutting-edge framework designed for crafting applications driven by language models. It seamlessly integrates with data sources, allowing the language model to actively engage with its environment. With its modular components and pre-built chains, users can easily initiate projects or tailor solutions to suit intricate needs.
Learn several different approaches to installing LangChain and also how to expose your OPENAI_API_KEY as an environment variable within Python.
Learn how to load a LangChain chat model, as well as how to add different types of messages such as SystemMessage and HumanMessage.
Discover how to create chat prompt templates that'll make your prompts more dynamic.
Learn how to use the streaming parameter in LangChain to reduce latency and obtain outputs one token at a time.
Learn how to easily extract structured data from LLMs with Output Parsers.
Discover how to use various summarization techniques including stuffing, MapReduce, and refining to extract meaningful content from large documents. Grasp the importance of each method and how they handle documents differently, ensuring you choose the right strategy for your specific text.
Discover the intricacies of loading documents, splitting texts, and creating LangChain documents. Dive into the world of Beautiful Soup for parsing, manage large texts with recursive text splitters, and maintain the integrity of document chunks with variable overlaps. Learn how to handle large data sources, such as GitHub or markdown files, and how to efficiently break them down for processing with large language models. Emphasize the importance of maintaining content context during the splitting process, and apply MapReduce summarization techniques to efficiently derive meaning from your segmented data.
Dive into the powerful world of tagging with LangChain. Expand your document analysis toolkit to identify and categorize specific features in large datasets. Harness the power of sitemap loaders to retrieve web pages, define JSON schemas to establish tagging criteria, and process content using OpenAI's GPT-3.5 Turbo. Experience seamless integration of structured data with popular Python libraries like pandas and effortlessly enrich your dataset with metadata, such as URLs.
Integrate the LangSmith tool into your workflow to identify bugs and evaluate the quality of text-generation responses.
Explore LangChain Hub inside of LangSmith. LangChain Hub allows you to easily find, download and use different prompts from other prompt engineers.
Understand the principles and operation of the LCEL runnable protocol to efficiently execute your AI models.
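The core idea behind the runnable protocol is that anything exposing invoke() can be composed with the | operator into a new runnable. A toy reimplementation of that idea (not LangChain’s actual classes, just an illustration of the protocol):

```python
class Runnable:
    """Minimal stand-in for the LCEL idea: invoke() plus | composition."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Piping two runnables yields a runnable that runs them in sequence.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# In real LCEL this would be something like: prompt | model | output_parser
chain = Runnable(str.strip) | Runnable(str.title) | Runnable(lambda s: s + "!")
print(chain.invoke("  hello world  "))  # Hello World!
```

The payoff of the protocol is that every composed chain supports the same interface, so streaming, batching, and tracing can be added uniformly.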
Understand how to utilize itemgetter and Retrieval Augmented Generation (RAG) techniques to optimize the performance of ChatGPT models.
Understand how to incorporate chat history and memory with LangChain to improve the user engagement and conversation flow.
Construct multiple chains in LangChain, enhancing the flexibility of your AI model's output.
Demonstrate the ability to implement conditional logic, branching and merging to create sophisticated conversational flows in LangChain.
Master the application of JSON mode in LangChain, ensuring improved model performance and error prevention by constraining the model to only generate valid JSON objects.
Practice the use of JSON mode through a hands-on exercise to solidify your understanding and enhance your skills in handling JSON objects in AI models.
Learn how to effectively utilize JSON mode in conjunction with LangChain Expression Language.
Understand how parallel function calling works, enabling the model to perform multiple function calls simultaneously, reducing round trips with the API, and enhancing the efficiency of AI models.
Apply your understanding of parallel function calling through a practical exercise, reinforcing your knowledge and improving your proficiency in implementing this technique in AI models.
Learn how to effectively structure your document ingestion pipelines with the LangChain Indexing API.
Configurable fields allow you to dynamically change parts of your LCEL runnables at runtime!
Learn about agents, tools and how to create a custom agent with memory in LangChain.
LangGraph is a powerful library for building stateful, multi-agent workflows with language models, offering features like cycles, controllability, and persistence. It is important to learn LangGraph because it enables the creation of robust, flexible applications that can manage complex interactions and stateful processes, making it essential for developing advanced language-driven solutions.
Learn to build a support chatbot using LangGraph, progressively adding sophisticated capabilities while understanding key concepts like state management and node functions.
Enhance your chatbot with tools by integrating a web search tool to handle queries it can't answer from memory. This lesson covers installing the necessary packages, setting up API keys, defining the search tool, and modifying the chatbot to use these tools. By the end, your chatbot will be able to provide more relevant and comprehensive responses by accessing external information sources.
Integrate human oversight into your chatbot by utilizing LangGraph's interrupt_before functionality to pause execution before specific nodes.
Learn how to manually update the state of LangGraph agents to control their behavior and correct mistakes.
Enhance your chatbot by adding custom state fields to support more complex behavior. Integrate a new ask_human flag in the state, enabling the chatbot to request human assistance when necessary. By defining a conditional logic to handle this flag, you can dynamically include a human in the loop while maintaining full memory across executions.
Implement time travel in your chatbot using LangGraph's built-in functionality to rewind and resume from previous states. This guide demonstrates how to fetch checkpoints using the get_state_history method, allowing users to explore alternative outcomes or correct mistakes. By enabling time-travel checkpoint traversal, you can enhance your chatbot's flexibility and debugging capabilities.
Learn how to implement a self-corrective retrieval-augmented generation pipeline in LangGraph. This system scores both documents and answers, and self-corrects when an answer contains hallucinations or is not grounded in document-based knowledge.
Give your AI model time to think and plan, and it’ll get better at reasoning.
Use psychological techniques that motivate humans in your AI prompts.
Give the AI a role to play or a style to emulate, and relevant examples.
Provide examples of the task being done, to demonstrate desired results.
Generate multiple responses, then choose the most popular answer.
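This technique (often called self-consistency) samples the model several times and majority-votes the final answers; the voting step itself is trivial (the sampling calls are omitted here, and the answers list is a stand-in for model outputs):

```python
from collections import Counter

def majority_answer(answers: list[str]) -> str:
    """Pick the most common answer across several sampled responses.
    Ties go to the answer seen first, per Counter.most_common ordering."""
    return Counter(answers).most_common(1)[0][0]

# Pretend we sampled the model five times at a non-zero temperature:
sampled = ["42", "42", "41", "42", "40"]
print(majority_answer(sampled))  # 42
```

Sampling with temperature above zero is what makes the responses diverse enough for the vote to be meaningful.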