Overview

  • Prompt Engineering: process of carefully designing and optimizing instructions (prompts) to elicit the best possible output from generative AI models, especially Large Language Models (LLMs). By providing clear, specific, and well-structured prompts, you can guide the AI to generate relevant, accurate, and high-quality responses
  • Prompt: input you provide to a generative AI model to request a specific output. It can be a simple question, a set of instructions, or even a creative writing example
  • Large Language Model (LLM): AI model designed to understand and generate human-like text. LLMs are trained on vast amounts of data and can perform tasks like translation, summarization, and even creative writing
  • Prompt Template: a pre-defined structure or format for a prompt that can be customized with specific details or variables to generate dynamic prompts
  • Prompt Tuning: process of adapting a pre-trained LLM to a specific task or domain by learning a small set of soft prompt embeddings while keeping the model's weights frozen, a lightweight alternative to traditional fine-tuning
  • Prompt Injection: a security vulnerability where an attacker manipulates the input prompt to influence the AI model's behavior in unintended ways, potentially leading to unauthorized actions or disclosures
  • Prompt Leakage: situation where sensitive information from the prompt, such as a confidential system prompt or its instructions, is inadvertently revealed in the generated output, posing privacy or security risks
  • Prompt Bias: tendency of an AI model to generate responses that reflect the biases present in its training data, leading to unfair or inaccurate outcomes
  • Prompt Hallucination: when an AI model generates information that is not supported by the input prompt or its training data, leading to false or misleading outputs
  • Prompt Testing: process of evaluating and validating prompts to ensure they produce the desired output, meet quality standards, and comply with ethical and regulatory requirements
  • Prompt Optimization: continuous process of refining prompts to improve their performance, based on feedback, testing results, and changes in the AI model or its training data
  • Context Window: maximum number of tokens the model can process at once, including both input and output. Typically a model-specific architectural limit
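The Prompt Template idea above can be sketched in a few lines of Python. This is a minimal, library-agnostic illustration: the template text, variable names, and the `build_prompt` helper are assumptions made for this example, not any particular framework's API.

```python
# A prompt template: a fixed prompt structure with placeholders that are
# filled in with task-specific values at run time to produce dynamic prompts.
from string import Template

SUMMARIZE_TEMPLATE = Template(
    "You are a helpful assistant.\n"
    "Summarize the following $doc_type in at most $max_sentences sentences:\n\n"
    "$text"
)

def build_prompt(doc_type: str, max_sentences: int, text: str) -> str:
    """Substitute concrete values into the template's variables."""
    return SUMMARIZE_TEMPLATE.substitute(
        doc_type=doc_type, max_sentences=max_sentences, text=text
    )

prompt = build_prompt("news article", 2, "The city council met on Tuesday...")
print(prompt)
```

Keeping the structure fixed and varying only the placeholders makes prompts easier to test and optimize, since each variable can be changed independently while the instructions stay constant.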