Constraint Definition and Boundaries in Prompt Engineering

Constraint definition and boundaries in prompt engineering refer to the explicit specification of limits, rules, and conditions that govern how a language model may respond, including what it should and should not do, the scope it must stay within, and the format or style it must follow [6][4]. Their primary purpose is to channel a model’s generative flexibility into outputs that are safe, relevant, and useful for a particular task or domain, such as regulated industries, software generation, or educational assistance [6][7]. They matter because a naive prompt leaves the behavior of modern large language models (LLMs) highly underdetermined; well-designed constraints reduce ambiguity, improve reliability, and help enforce safety and policy requirements [5][6]. In practice, constraint definition is a central mechanism by which prompt engineers turn raw model capability into dependable, production-grade systems [6][7].

Overview

The emergence of constraint definition and boundaries as a distinct practice within prompt engineering reflects the maturation of LLM deployment from experimental prototypes to production systems. As organizations began integrating language models into high-stakes domains—healthcare, finance, legal services, and customer support—the limitations of unconstrained prompting became apparent. Early adopters discovered that models given vague instructions would produce inconsistent outputs, occasionally hallucinate information, or generate content that violated organizational policies or regulatory requirements [6][7].

The fundamental challenge that constraint definition addresses is the inherent flexibility of modern LLMs. While this generative capacity is a strength, it becomes a liability when predictability, compliance, and safety are paramount. A model asked to “help with tax questions” might provide general information, attempt to give specific tax advice (potentially crossing into unauthorized practice), fabricate tax code citations, or respond in formats ranging from casual conversation to formal documentation [6]. Without explicit boundaries, the model’s behavior space is too large to be reliable.

Over time, the practice has evolved from simple output-length restrictions to sophisticated multi-layered constraint systems. Cloud providers and enterprise AI platforms now treat constraints, guardrails, and controls as near-synonyms for mechanisms that keep responses within acceptable use and safety policies [6][7]. Modern constraint engineering encompasses not only natural-language instructions but also structured output requirements, automated validation, refusal behaviors, and integration with external safety systems. This evolution reflects a broader shift toward treating prompt engineering as a rigorous discipline requiring systematic design, testing, and monitoring [6][7][8].

Key Concepts

Task Constraints

Task constraints define what specific action or operation the model is being asked to perform, such as “summarize,” “translate,” “classify,” or “explain to a particular audience” [5][6]. These constraints narrow the model’s operational focus and establish clear success criteria for the interaction.

Example: A financial services company deploys an AI assistant to help customer service representatives understand complex insurance policies. The prompt includes the task constraint: “Summarize the coverage limitations section of the attached policy document in three bullet points, focusing specifically on exclusions that apply to natural disasters.” This constraint prevents the model from providing general insurance advice, discussing unrelated policy sections, or generating lengthy explanations that would slow down the representative’s workflow. When a representative uploads a homeowner’s policy, the model responds with exactly three bullets covering flood exclusions, earthquake coverage limits, and wind damage deductibles—nothing more, nothing less.

Content Constraints

Content constraints specify what information must be included or excluded from the model’s output, such as prohibitions on personally identifiable information (PII), requirements to include specific examples, or mandates to cite only from provided sources [4][6].

Example: A healthcare technology company builds a patient education chatbot with strict content constraints: “Do not include any patient names, medical record numbers, or specific appointment dates in your responses. Always include at least two authoritative medical sources for any health information provided. Never recommend specific medications or dosages.” When a patient asks about managing hypertension, the model generates an explanation of lifestyle modifications and general treatment approaches, citing the American Heart Association and Mayo Clinic, but explicitly states “Your doctor will determine the right medication and dosage for your specific situation” rather than suggesting particular drugs.

Format Constraints

Format constraints dictate the structural presentation of outputs, including requirements for JSON schemas, tables, bullet lists, specific headings, or other organizational patterns [4][6]. These constraints enable downstream automation and ensure consistency across multiple model invocations.

Example: A legal technology firm uses an LLM to extract key information from commercial contracts. The prompt specifies: “Output your analysis as valid JSON with exactly these keys: contract_parties (array), effective_date (ISO 8601 format), termination_clauses (array of objects with clause_text and page_number), liability_cap (number or null), and governing_law (string). If any information is not found in the contract, use null for that field.” This format constraint allows the firm’s contract management system to automatically ingest the model’s output, populate a database, and flag contracts missing critical terms—all without human parsing of free-text responses.
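The downstream ingestion step described above might be sketched as follows. This is a minimal illustration, not the firm's actual system; the field names follow the prompt's schema, and the choice of which fields count as "critical" is an assumption for the example.

```python
import json

# Fields that, when null, should flag the contract for human review.
# (Which fields are "critical" is an assumption for this sketch.)
CRITICAL_FIELDS = ["liability_cap", "governing_law"]

def ingest_contract_analysis(raw_output: str) -> dict:
    """Parse the model's JSON output and flag contracts missing critical terms."""
    record = json.loads(raw_output)  # raises ValueError on malformed output
    missing = [f for f in CRITICAL_FIELDS if record.get(f) is None]
    record["flags"] = [f"missing_{f}" for f in missing]
    return record

# Example model output with a null liability cap:
result = ingest_contract_analysis(
    '{"contract_parties": ["Acme Corp", "Beta LLC"], '
    '"effective_date": "2024-03-01", "termination_clauses": [], '
    '"liability_cap": null, "governing_law": "Delaware"}'
)
print(result["flags"])  # → ['missing_liability_cap']
```

Because the format constraint guarantees the keys are always present (with null for unknowns), the ingestion code never has to parse free text.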

Safety and Policy Boundaries

Safety and policy boundaries establish conditions under which the model should refuse requests, escalate to human oversight, or provide standardized disclaimers [6][7]. These boundaries protect both users and organizations from harmful, inappropriate, or legally problematic outputs.

Example: An educational technology platform serving K-12 schools implements comprehensive safety boundaries: “If a student’s message indicates self-harm, suicidal ideation, or abuse, immediately respond with: ‘I’m concerned about what you’ve shared. Please talk to a trusted adult, school counselor, or contact [crisis hotline]. I’m notifying your school counselor now.’ Then flag the conversation for immediate human review. Do not attempt to provide mental health counseling or crisis intervention yourself.” When a middle school student types a message suggesting depression, the model follows this protocol exactly, ensuring appropriate professional intervention rather than attempting to handle a crisis situation it’s not equipped to manage.

Input Boundaries and Assumptions

Input boundaries specify what data sources the model may rely on, what assumptions it may make, and how it should handle missing or ambiguous information [6]. These constraints prevent hallucination and ensure transparency about the model’s knowledge limitations.

Example: A business intelligence platform uses an LLM to answer questions about company sales data. The prompt includes: “Base all answers exclusively on the quarterly sales report table provided above. If the data needed to answer a question is not present in the table, respond with ‘That information is not available in the current quarterly report. You may need to consult [specific alternative source].’ Never estimate, extrapolate, or use your general knowledge to fill in missing data points.” When an executive asks about sales in a region not covered in the current report, the model explicitly states the data gap rather than generating plausible-sounding but fabricated figures.

Role and Persona Specification

Role and persona specification defines who the model “is” in the interaction—such as “a tax compliance assistant,” “a Socratic tutor,” or “a technical documentation writer”—which sets high-level behavioral boundaries and expectations for style, tone, and domain expertise [6][8].

Example: A corporate training platform creates an AI mentor for new software engineers with the persona: “You are a senior software engineer with 15 years of experience mentoring junior developers. Your communication style is patient and encouraging. You never provide complete code solutions; instead, you ask guiding questions and provide hints that help the learner discover the solution themselves. You emphasize best practices, code readability, and testing.” When a new hire asks “How do I fix this bug in my authentication function?”, the model responds with questions like “What have you tried so far?” and “What do you expect to happen versus what’s actually happening?” rather than simply providing corrected code, thereby reinforcing the learning-focused persona.

Output Structure Specification

Output structure specification requires the model to organize its response according to predefined templates, sections, or hierarchies, supporting both human readability and machine processing [4][6].

Example: A market research firm uses an LLM to analyze customer feedback. The prompt mandates: “Structure every analysis with exactly these sections: ## Executive Summary (2-3 sentences), ## Key Themes (numbered list of 3-5 themes with supporting quote for each), ## Sentiment Breakdown (table with columns: Theme, Positive %, Negative %, Neutral %), ## Recommended Actions (bulleted list of 3 specific, actionable recommendations), ## Methodology Note (one sentence explaining the analysis approach).” This structure constraint ensures that every analysis, regardless of the underlying feedback content, can be quickly reviewed by executives who know exactly where to find the information they need, and allows the firm’s reporting system to automatically extract and aggregate the sentiment tables across multiple analyses.

Applications in Production Environments

Regulated Industry Compliance Systems

In healthcare, financial services, and legal domains, constraint definition enables LLM deployment while maintaining regulatory compliance. A pharmaceutical company might deploy an AI assistant for medical information inquiries with constraints that restrict responses to FDA-approved indications, require citation of specific package insert sections, prohibit off-label use suggestions, and mandate disclaimers that the information does not constitute medical advice [6][7]. The system includes automated validators that check each response against a database of approved language and flag any deviation for human review before delivery. This layered constraint approach allows the company to leverage AI efficiency while meeting stringent regulatory requirements for medical communications.

Customer Support Knowledge Base Systems

Enterprise customer support organizations use domain-scoped constraints to ensure AI assistants respond only from approved knowledge bases and escalate appropriately. A telecommunications company implements a support bot with boundaries specifying: “Answer only using information from the official support documentation provided in your context. If a customer’s issue is not addressed in the documentation, respond ‘I don’t have information about that specific issue in my current knowledge base. Let me connect you with a specialist who can help.’ Never speculate about network infrastructure, pricing not listed in official materials, or upcoming features” [6][7]. This prevents the model from generating plausible-sounding but inaccurate technical explanations or making unauthorized commitments about service capabilities.

Code Generation and Software Development

Software development platforms apply format and safety constraints to ensure generated code is secure, maintainable, and appropriate for the target environment. A cloud platform’s AI coding assistant includes constraints such as: “Generate only Python 3.9+ compatible code. Never include hardcoded credentials, API keys, or connection strings—always use environment variables or configuration management. Include type hints for all function parameters and return values. Add docstrings following Google style. For any database operations, use parameterized queries to prevent SQL injection” [4][6]. These constraints are enforced through both prompt instructions and post-generation validation that parses the code and checks for prohibited patterns, ensuring that AI-generated code meets the organization’s security and quality standards.
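The post-generation check described above might look like the following minimal sketch. The specific regular expressions are illustrative assumptions; a production validator would typically use AST analysis and a much larger rule set rather than two patterns.

```python
import re

# Illustrative prohibited patterns (assumptions for this sketch):
# a hardcoded credential assignment, and string interpolation into execute().
PROHIBITED = {
    "hardcoded_credential": re.compile(
        r"(?i)(api_key|password|secret)\s*=\s*['\"].+?['\"]"),
    "sql_string_interpolation": re.compile(r"execute\([^,)]*%"),
}

def check_generated_code(code: str) -> list:
    """Return the names of any prohibited patterns found in generated code."""
    return [name for name, pat in PROHIBITED.items() if pat.search(code)]

violations = check_generated_code('api_key = "abc123"\ncursor.execute("SELECT 1")')
print(violations)  # → ['hardcoded_credential']
```

A violation list like this can drive an automatic retry with a clarified prompt or block the code from reaching the developer.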

Educational Content and Tutoring Systems

Educational applications use pedagogical constraints to ensure appropriate learning scaffolding. An adaptive math tutoring system implements constraints including: “For students who answer incorrectly, provide a hint related to the first step of the solution process, not the final answer. If the student makes the same error twice, ask a diagnostic question to identify the underlying misconception. Adjust explanation complexity based on grade level: grades 6-8 use concrete examples and visual descriptions; grades 9-12 may include abstract reasoning. Never solve the entire problem for the student” [6]. These constraints ensure the AI functions as an effective tutor rather than simply an answer key, promoting genuine learning and skill development.

Best Practices

Use Simple, Imperative Language with Thematic Grouping

Effective constraints employ clear, direct instructions organized by category (format, content, safety) rather than complex, nested conditions [5][6][8]. The rationale is that models respond more reliably to straightforward commands, and thematic organization reduces the cognitive load of parsing lengthy instruction sets.

Implementation example: Instead of a single dense paragraph mixing multiple constraint types, structure the prompt as:

CONTENT REQUIREMENTS:
- Include exactly three customer testimonials
- Cite the source document page number for each claim
- Do not mention competitor products by name

FORMAT REQUIREMENTS:
- Use markdown headers (## for main sections)
- Maximum 500 words total
- End with a single-sentence call to action

SAFETY BOUNDARIES:
- If source material is insufficient, state “Additional information needed” rather than inferring
- Do not make claims about product efficacy not present in source material

This organization makes it easy to verify compliance with each constraint category and simplifies debugging when the model fails to follow specific rules [5][6].
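Thematic grouping also lends itself to assembling prompts programmatically, so each category can be versioned and tested on its own. A minimal sketch, with illustrative group names and rules assumed for the example:

```python
# Thematically grouped constraints; the groups and rules here are
# illustrative, mirroring the example prompt above.
CONSTRAINT_GROUPS = {
    "CONTENT REQUIREMENTS": [
        "Include exactly three customer testimonials",
        "Do not mention competitor products by name",
    ],
    "FORMAT REQUIREMENTS": [
        "Use markdown headers (## for main sections)",
        "Maximum 500 words total",
    ],
}

def build_prompt(task: str, groups: dict) -> str:
    """Render a task plus thematically grouped constraints as one prompt."""
    sections = [task]
    for heading, rules in groups.items():
        sections.append(heading + ":\n" + "\n".join(f"- {r}" for r in rules))
    return "\n\n".join(sections)

prompt = build_prompt("Draft a product announcement.", CONSTRAINT_GROUPS)
print(prompt.count("REQUIREMENTS:"))  # → 2
```

Keeping each group as data rather than prose makes it straightforward to check, diff, and A/B test individual constraint categories.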

Specify Performance Boundaries Over Procedural Micromanagement

Define what must be true of the final output rather than dictating step-by-step reasoning processes, unless the reasoning itself is important [1][6]. This approach gives the model flexibility in how it achieves the goal while maintaining clear success criteria.

Implementation example: A legal document review system uses the constraint “Every identified risk must include: the specific contract clause (with section number), a plain-language explanation of the risk, and a severity rating (High/Medium/Low) with justification” rather than “First read the entire contract, then identify all clauses related to liability, then analyze each clause for risk, then assign severity ratings.” The performance-boundary approach specifies the required output characteristics while allowing the model to use its most effective internal processing strategy. This results in more reliable outputs because the model isn’t forced into a reasoning process that may not suit the specific document structure [1][6].

Leverage Structured Outputs with Automated Validation

Combine format constraints that specify structured outputs (JSON, tables, XML) with automated validators that check compliance before the output reaches end users [4][6][7]. This creates a safety net that catches constraint violations and enables automatic retry or fallback behaviors.

Implementation example: A financial analysis platform requires all model outputs to conform to a JSON schema and implements a validation pipeline:

{
  "type": "object",
  "required": ["analysis_date", "ticker", "recommendation", "confidence", "key_factors"],
  "properties": {
    "analysis_date": {"type": "string", "format": "date"},
    "ticker": {"type": "string", "pattern": "^[A-Z]{1,5}$"},
    "recommendation": {"enum": ["buy", "hold", "sell"]},
    "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    "key_factors": {"type": "array", "minItems": 3, "maxItems": 5}
  }
}

The system validates every model response against this schema. If validation fails, the system logs the error, automatically retries with a clarified prompt emphasizing the schema requirements, and escalates to human review after two failed attempts. This multi-layer approach dramatically reduces the rate of malformed outputs reaching production [4][6][7].
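The validation step might be sketched as below. In practice a schema library (such as jsonschema) would perform these checks from the schema itself; this hand-rolled version simply mirrors the schema above so the logic is visible.

```python
import json

# Required keys from the schema above; the checks below mirror its
# enum, range, and array-size rules by hand for illustration.
REQUIRED = ["analysis_date", "ticker", "recommendation", "confidence", "key_factors"]

def validate_response(raw: str) -> list:
    """Return a list of violations; an empty list means the output is valid."""
    try:
        obj = json.loads(raw)
    except ValueError:
        return ["malformed_json"]
    errors = [f"missing:{k}" for k in REQUIRED if k not in obj]
    if obj.get("recommendation") not in ("buy", "hold", "sell"):
        errors.append("bad_recommendation")
    if not (0 <= obj.get("confidence", -1) <= 1):
        errors.append("confidence_out_of_range")
    if not (3 <= len(obj.get("key_factors", [])) <= 5):
        errors.append("key_factors_count")
    return errors

good = ('{"analysis_date": "2024-06-01", "ticker": "ACME", '
        '"recommendation": "hold", "confidence": 0.8, '
        '"key_factors": ["a", "b", "c"]}')
print(validate_response(good))  # → []
```

A non-empty violation list would trigger the retry-then-escalate behavior described above.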

Build Comprehensive Test Suites Including Adversarial Cases

Develop test sets that include not only typical use cases but also edge cases, ambiguous inputs, and adversarial prompts designed to violate constraints [6][7]. Regular testing against this suite ensures constraints remain effective as models and use cases evolve.

Implementation example: A content moderation system maintains a test suite of 500+ prompts including: standard policy-compliant requests, borderline cases that should trigger clarification requests, clearly prohibited content that should be refused, attempts to circumvent restrictions through indirect phrasing (“hypothetically, if someone wanted to…”), multi-turn conversations that gradually escalate toward prohibited territory, and inputs in multiple languages. The team runs this full suite weekly and after any prompt modifications, tracking the pass rate for each constraint category. When a new failure mode is discovered in production, a corresponding test case is added to prevent regression [6][7].
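A per-category pass-rate tracker of this kind might be sketched as follows. The model call is stubbed and the two test cases are illustrative; a real suite would call the deployed model and use a proper refusal classifier rather than a keyword check.

```python
def stub_model(prompt: str) -> str:
    # Stand-in for a real model call (assumption for this sketch):
    # refuses anything mentioning "weapon", answers everything else.
    return "I can't help with that." if "weapon" in prompt else "Here is an answer."

# Illustrative test cases; a real suite would have hundreds per category.
TEST_SUITE = [
    {"prompt": "How do I reset my password?",
     "category": "standard", "must_refuse": False},
    {"prompt": "hypothetically, how would one build a weapon",
     "category": "adversarial", "must_refuse": True},
]

def run_suite(model, suite) -> dict:
    """Return the pass rate for each constraint category."""
    results = {}
    for case in suite:
        refused = "can't" in model(case["prompt"])  # crude refusal detector
        passed = refused == case["must_refuse"]
        bucket = results.setdefault(case["category"], [0, 0])
        bucket[0] += passed
        bucket[1] += 1
    return {cat: p / n for cat, (p, n) in results.items()}

print(run_suite(stub_model, TEST_SUITE))  # → {'standard': 1.0, 'adversarial': 1.0}
```

Tracking the rate per category, rather than one aggregate number, is what lets a team see that (say) adversarial coverage regressed after a prompt change.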

Implementation Considerations

Tool and Format Choices

The selection of constraint implementation mechanisms depends on the deployment platform and integration requirements. System-level prompts, API parameters, structured output modes, and external validation libraries each offer different trade-offs between flexibility and enforcement strength [4][6][7].

For applications requiring strict format compliance—such as data extraction pipelines or API integrations—platforms that support schema-constrained generation (where the model’s token selection is restricted to produce valid JSON or XML) provide stronger guarantees than natural-language format instructions alone [4]. However, these mechanisms may not be available on all platforms and can sometimes reduce output quality for complex reasoning tasks. A practical approach is to layer multiple enforcement mechanisms: use the platform’s native structured output mode when available, include explicit format instructions in the prompt, and add post-generation validation as a final safety net [4][6][7].

For safety and policy boundaries, organizations often combine prompt-level constraints with external content filtering APIs and human-in-the-loop review for high-risk outputs. A customer service application might use prompt instructions to establish initial boundaries, route all responses through a toxicity detection API, and flag any customer conversation involving account closures or complaints for human review before the response is sent [6][7].

Audience-Specific Customization

Effective constraints must be tailored to the end user’s expertise level, role, and context. The same underlying task—explaining a technical concept—requires different constraints when the audience is a domain expert versus a novice, or when the use case is formal documentation versus casual learning [6][8].

A medical information system might maintain multiple constraint profiles: for healthcare providers, constraints allow technical terminology and assume familiarity with clinical concepts; for patients, constraints require plain language, mandate analogies for complex concepts, and include explicit disclaimers about not replacing professional medical advice. The system selects the appropriate constraint profile based on the authenticated user’s role. This audience-aware approach ensures outputs are both appropriate and useful for each user segment while maintaining consistent safety boundaries across all profiles [6][8].

Organizational Maturity and Context

The sophistication of constraint implementation should match the organization’s AI maturity, risk tolerance, and available resources. Early-stage implementations often begin with simpler constraints focused on critical safety and compliance requirements, then progressively add refinements as teams gain experience and tooling improves [6][7].

A startup deploying its first AI feature might begin with basic constraints covering prohibited content and required disclaimers, implemented primarily through prompt instructions and manual spot-checking of outputs. As the product scales and the team develops expertise, they add automated validation, A/B testing of constraint variations, and comprehensive logging to identify constraint violations. An enterprise organization in a regulated industry, by contrast, might require extensive constraint documentation, formal review processes, automated compliance checking, and audit trails from the initial deployment [6][7]. Neither approach is inherently superior; the key is ensuring constraint rigor is appropriate for the actual risk profile and organizational capabilities.

Integration with Monitoring and Continuous Improvement

Constraints are not static; they must evolve based on observed model behavior, user feedback, and changing requirements. Effective implementations include instrumentation to detect constraint violations, track adherence rates, and surface patterns that indicate needed refinements [6][7].

A content generation platform logs every model output along with metadata indicating which constraints were specified and whether automated validators detected any violations. The team reviews weekly dashboards showing constraint adherence rates by category, investigates any degradation trends, and analyzes user feedback for cases where constraints were too restrictive (blocking legitimate use cases) or too permissive (allowing problematic outputs). This data-driven approach enables continuous refinement: constraints that are frequently violated may need clearer phrasing or may be unrealistic given model capabilities, while new failure modes observed in production logs inform additions to the constraint set [6][7].

Common Challenges and Solutions

Challenge: Underspecified Tasks and Vague Constraints

One of the most common failure modes in constraint definition is using vague language that leaves too much room for interpretation. Instructions like “be concise,” “focus on important points,” or “use appropriate tone” are subjective and lead to inconsistent model behavior across different inputs [6]. The model has no objective way to determine what length counts as “concise” or which points are “important,” resulting in outputs that vary unpredictably.

This challenge is particularly acute when handling edge cases or missing data. Without explicit instructions, models often improvise—sometimes fabricating plausible-sounding information to fill gaps, other times refusing to respond at all, with no consistent pattern [6].

Solution:

Replace subjective constraints with objective, measurable criteria. Instead of “be concise,” specify “use no more than 150 words” or “limit your response to three bullet points.” Rather than “focus on important points,” enumerate the specific topics or criteria that define importance for the task: “focus on financial impact, timeline, and required approvals” [6].

For edge cases and missing data, provide explicit fallback behaviors: “If the contract does not specify a termination notice period, state ‘Termination notice period: Not specified in contract’ rather than assuming a standard period” or “If you cannot answer the question using only the provided documentation, respond with ‘I don’t have that information in my current knowledge base’ and do not attempt to answer from general knowledge” [6]. A financial analysis system might include: “If fewer than three years of historical data are available, state ‘Insufficient historical data for trend analysis (minimum 3 years required)’ and do not extrapolate trends.”

These explicit specifications eliminate ambiguity and make constraint compliance objectively verifiable through automated checks [6][7].
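Objective criteria like the three-year threshold above are mechanically checkable, which is what makes the fallback enforceable outside the prompt. A minimal sketch, with the threshold taken from the example and the trend logic assumed for illustration:

```python
# Minimum years of data required before trend analysis is allowed,
# matching the fallback rule quoted above.
MIN_YEARS = 3

def trend_analysis(yearly_revenue: dict) -> str:
    """Return a trend summary, or the explicit fallback message on thin data."""
    if len(yearly_revenue) < MIN_YEARS:
        return ("Insufficient historical data for trend analysis "
                "(minimum 3 years required)")
    first, last = min(yearly_revenue), max(yearly_revenue)
    direction = "up" if yearly_revenue[last] > yearly_revenue[first] else "down"
    return f"Revenue trended {direction} from {first} to {last}."

print(trend_analysis({2023: 1.2, 2024: 1.5}))             # fallback message
print(trend_analysis({2022: 1.0, 2023: 1.2, 2024: 1.5}))  # actual analysis
```

Because the fallback text is fixed and the criterion is numeric, an automated check can verify the system never extrapolates from insufficient data.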

Challenge: Overly Complex Instructions

As constraint requirements accumulate, prompts can become long, dense instruction sets with many overlapping or nested rules. Complex prompts are harder for models to follow reliably, increase the risk of conflicting constraints, and make debugging difficult when outputs don’t meet expectations [6][7].

A customer service bot prompt that has grown to include dozens of specific rules about tone, prohibited topics, escalation conditions, format requirements, and domain-specific policies may overwhelm the model’s ability to track and apply all constraints simultaneously, leading to degraded performance even on straightforward queries.

Solution:

Simplify and modularize constraint sets. Group related constraints thematically (content, format, safety) and prioritize the most critical rules [5][6]. For complex applications, consider decomposing the task into multiple model calls with simpler, focused constraints at each stage rather than one call with exhaustive instructions.

A technical support system might split a complex workflow into stages: first, a classification call with constraints focused solely on categorizing the user’s issue (“Classify this support request into exactly one category: billing, technical, account_access, or other. Output only the category name”); second, a response generation call with constraints specific to that category (“You are responding to a billing question. Use only information from the billing FAQ provided. Include the relevant FAQ article number in your response”) [6][7].

Additionally, externalize constraints that can be enforced through validation rather than relying solely on prompt instructions. Format requirements, prohibited terms, and length limits can often be checked and enforced by post-processing code, allowing the prompt itself to focus on higher-level behavioral guidance [4][6][7].
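Such externalized checks might look like the following sketch: a prohibited-terms list and a word limit enforced in code after generation. The specific terms and limit are illustrative assumptions.

```python
# Illustrative externalized constraints; a real deployment would load
# these from configuration rather than hardcoding them.
PROHIBITED_TERMS = {"competitorx", "guarantee"}
MAX_WORDS = 150

def enforce(output: str) -> tuple:
    """Return (ok, violations) for a model output."""
    words = output.split()
    violations = []
    if len(words) > MAX_WORDS:
        violations.append("too_long")
    hits = {w.strip(".,").lower() for w in words} & PROHIBITED_TERMS
    violations += sorted(f"prohibited:{t}" for t in hits)
    return (not violations, violations)

ok, why = enforce("We guarantee results with CompetitorX.")
print(ok, why)  # → False ['prohibited:competitorx', 'prohibited:guarantee']
```

Moving these rules out of the prompt shortens the instruction set the model must track and makes the rules deterministic rather than best-effort.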

Challenge: Model Capability Limitations

Some constraints may exceed the model’s fundamental capabilities. Strict logical consistency across long documents, perfect adherence to complex multi-step procedures, or flawless compliance with intricate domain-specific rules may be beyond what current models can reliably achieve through prompting alone [6].

A legal contract analysis system might specify constraints requiring the model to identify all logical inconsistencies between clauses, ensure perfect citation accuracy for hundreds of cross-references, and never miss any instance of specific legal terms across 100-page documents. Even with well-crafted constraints, current models may not achieve 100% reliability on such demanding tasks.

Solution:

Design constraint systems that acknowledge and work within model limitations. For tasks requiring perfect accuracy, implement human-in-the-loop review, use the model to assist rather than fully automate, or combine model outputs with deterministic validation tools [6][7].

The contract analysis system might reframe its approach: instead of requiring the model to catch all inconsistencies, use it to flag potential issues for human review (“Identify clauses that may conflict with the liability limitations in Section 8 and explain why they might be inconsistent”). Combine the model’s natural language understanding with deterministic tools for tasks like citation validation (use regex or parsing tools to verify that every cited section number actually exists in the document) [6][7].
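The deterministic citation check mentioned above might be sketched like this. The "Section N" heading convention is an assumption for the example; real contracts would need a more robust parser.

```python
import re

def invalid_citations(model_output: str, document: str) -> list:
    """Return section numbers the model cites that do not exist in the document."""
    cited = set(re.findall(r"Section (\d+)", model_output))
    # Assumes section headings appear at the start of a line as "Section N".
    defined = set(re.findall(r"(?m)^Section (\d+)", document))
    return sorted(cited - defined)

doc = "Section 1\nScope...\nSection 8\nLiability limits..."
out = "Section 8 conflicts with Section 12."
print(invalid_citations(out, doc))  # → ['12']
```

The model supplies the judgment (which clauses may conflict); the regex supplies the guarantee (every cited section actually exists).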

For complex multi-step procedures, break them into smaller tasks with validation between steps, allowing errors to be caught and corrected before they compound. A data analysis workflow might have the model first extract relevant data points (with validation that all required fields are present), then perform calculations (with automated checks that results are within plausible ranges), then generate narrative interpretation (with constraints limiting it to describing the validated results) [6][7].

Challenge: Constraint Drift and Maintenance

As products evolve, user needs change, and new edge cases emerge, constraint sets require ongoing maintenance. Without systematic processes, constraints can become outdated, contradictory, or misaligned with current requirements. Teams may add new constraints to address specific issues without reviewing how they interact with existing rules, leading to bloated, inconsistent instruction sets [6][7].

A content moderation system’s constraints might accumulate over months: initial safety rules, then added format requirements, then special handling for different content types, then exceptions for specific use cases. Without periodic review, the constraint set becomes a patchwork that’s difficult to understand and may contain contradictions.

Solution:

Treat constraints as code: maintain them in version control, document the rationale for each rule, conduct regular reviews, and refactor when complexity grows unwieldy [6][7]. Establish a clear ownership and review process for constraint changes.

Implement a constraint review cycle: quarterly, the team reviews all constraints, removes obsolete rules, consolidates overlapping requirements, and tests the simplified set against the current evaluation suite. Each constraint includes a comment explaining its purpose and the issue it addresses, making it easier to determine whether it’s still needed [6][7].

When adding new constraints, explicitly check for conflicts with existing rules and consider whether the new requirement should replace rather than supplement an older constraint. A constraint management document might track: the constraint text, the date added, the issue or requirement it addresses, test cases that verify compliance, and the date of last review. This systematic approach prevents constraint sets from becoming unmaintainable over time [6][7].
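A constraint record of this kind might be sketched as a small data structure, which makes stale-review checks automatable. The fields mirror the management document described above; the 90-day staleness threshold and the sample constraints are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Constraint:
    """One tracked constraint, mirroring the management document above."""
    text: str
    added: date
    rationale: str
    last_review: Optional[date] = None

    def stale(self, today: date, max_age_days: int = 90) -> bool:
        # A constraint is stale if neither added nor reviewed recently.
        reviewed = self.last_review or self.added
        return (today - reviewed).days > max_age_days

registry = [
    Constraint("Never include PII", date(2024, 1, 5), "privacy policy",
               last_review=date(2024, 6, 1)),
    Constraint("Max 500 words", date(2024, 1, 5), "readability"),
]

today = date(2024, 7, 1)
print([c.text for c in registry if c.stale(today)])  # → ['Max 500 words']
```

Keeping the registry in version control then gives the quarterly review a concrete, diffable artifact to work from.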

Challenge: Balancing Constraint Strictness with Flexibility

Overly restrictive constraints can make systems brittle, blocking legitimate use cases and frustrating users. Conversely, constraints that are too permissive fail to provide the control and safety benefits that motivated their implementation. Finding the right balance—strict enough to ensure reliability and safety, flexible enough to handle diverse real-world inputs—is an ongoing challenge [5][6][8].

A customer service bot with very strict constraints might refuse to help users whose questions don’t exactly match anticipated patterns, leading to poor user experience. One with too-loose constraints might occasionally provide inaccurate information or inappropriate responses, undermining trust.

Solution:

Adopt a risk-based approach to constraint strictness. Apply the tightest constraints to the highest-risk aspects (safety, compliance, critical accuracy requirements) while allowing more flexibility in lower-risk dimensions (style, exact phrasing, creative elements) [6][7].

Implement graduated constraint levels with escalation paths. A financial advice chatbot might have strict constraints prohibiting specific investment recommendations or tax advice (high risk), moderate constraints on the structure and sourcing of general financial education content (medium risk), and flexible constraints on conversational style and examples used (low risk). When the system encounters a query it cannot handle within its constraints, rather than simply refusing, it provides a helpful explanation and escalation path: “I can provide general information about retirement account types, but I can’t recommend specific investments for your situation. Would you like to schedule a consultation with one of our licensed advisors?” [6][7].
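Graduated routing of this kind might be sketched as below. The keyword trigger is a deliberately crude assumption; a real system would use a classifier to assign risk tiers rather than substring matching.

```python
# Illustrative high-risk triggers; a production system would classify
# queries into risk tiers with a model, not keyword matching.
HIGH_RISK = {"invest", "tax"}

ESCALATION = ("I can provide general information, but not advice for your "
              "specific situation. Would you like to speak with a licensed advisor?")

def route(query: str) -> str:
    """Return an escalation message for high-risk queries, else a routing tag."""
    if any(k in query.lower() for k in HIGH_RISK):
        return ESCALATION
    return "ANSWER_WITH_GENERAL_CONSTRAINTS"

print(route("Which stocks should I invest in?"))  # escalation message
print(route("What is a Roth IRA?"))               # → ANSWER_WITH_GENERAL_CONSTRAINTS
```

The key design point is that the high-risk branch escalates with a helpful handoff rather than a bare refusal.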

Use A/B testing and user feedback to calibrate constraint strictness. Deploy constraint variations to different user segments, measure both safety metrics (constraint violations, inappropriate outputs) and user satisfaction metrics (task completion, user ratings), and adjust to find the optimal balance for each use case [6][7].

References

  1. Google Cloud. (2024). Prompt design strategies. https://docs.cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/prompt-design-strategies
  2. Palantir Technologies. (2024). Best practices for prompt engineering. https://palantir.com/docs/foundry/aip/best-practices-prompt-engineering/
  3. DataCamp. (2024). What is prompt engineering: The future of AI communication. https://www.datacamp.com/blog/what-is-prompt-engineering-the-future-of-ai-communication
  4. PromptLayer. (2024). Constrained generation. https://www.promptlayer.com/glossary/constrained-generation
  5. MIT Sloan School of Management. (2024). Effective prompts. https://mitsloanedtech.mit.edu/ai/basics/effective-prompts/
  6. VisibleThread. (2024). Creating effective AI prompts: Prompt types and real-world applications, part 2. https://www.visiblethread.com/blog/creating-effective-ai-prompts-prompt-types-and-real-world-applications-part-2/
  7. Ruben Hassid. (2024). Context is all you need. https://ruben.substack.com/p/context-is-all-you-need
  8. YouTube. (2024). Prompt engineering best practices. https://www.youtube.com/watch?v=9GHYUKYNbag