Basic Prompt Structure and Syntax in Prompt Engineering
Basic prompt structure and syntax in prompt engineering refers to the systematic organization, ordering, and formatting of inputs to large language models (LLMs) designed to reliably elicit desired behaviors and outputs [2][4][5]. This encompasses the composition of instructions, context, examples, and output constraints as a single coherent text sequence that the model processes token by token [4][8]. The primary purpose of well-designed prompt structure is to reduce ambiguity, expose relevant information, and align the model’s generative behavior with user goals, thereby improving accuracy, controllability, and consistency [3][5][8]. As LLMs are increasingly deployed in high-stakes and complex applications—from customer support automation to code generation and medical decision support—systematic control over prompt structure and syntax has become a core competency in prompt engineering and a critical factor in system reliability and safety [2][5][9].
Overview
The emergence of basic prompt structure and syntax as a distinct discipline stems from the rapid evolution of large language models and their unique operational characteristics. Unlike traditional software systems that execute deterministic instructions, LLMs generate outputs by predicting the next token in a sequence based on patterns learned during pretraining [2][4]. This fundamental mechanism means that every aspect of how a prompt is structured—the ordering of elements, the choice of delimiters, the phrasing of instructions—directly influences the model’s internal trajectory and output distribution [2][4][8].
The fundamental challenge that prompt structure addresses is the translation of human intent into a format that probabilistic language models can reliably interpret and execute. Early interactions with models like GPT-2 and GPT-3 revealed that seemingly minor variations in prompt wording or organization could produce dramatically different results, ranging from highly accurate responses to complete failures or hallucinations [2][4]. This sensitivity necessitated a systematic approach to prompt design, drawing on insights from in-context learning research, which demonstrated that models could perform tasks from demonstrations and instructions embedded in text without parameter updates [2][4].
Over time, the practice has evolved from ad-hoc experimentation to structured methodologies supported by vendor documentation, academic research, and community best practices. Major cloud providers including OpenAI, Microsoft, AWS, and IBM now publish comprehensive prompt engineering guides that codify structural patterns, component taxonomies, and iterative design processes [3][5][8][9]. The field has progressed from simple instruction-response patterns to sophisticated frameworks incorporating few-shot learning, chain-of-thought reasoning, tool augmentation, and retrieval-augmented generation, all of which depend on careful structural design [2][3][5].
Key Concepts
Role and System Specification
Role and system specification defines the behavioral contract and persona the model should adopt when generating responses [5][8]. This component tells the model “who it is” and establishes high-level constraints on tone, expertise level, and safety boundaries. Major API providers like OpenAI distinguish system messages from user messages specifically to set these overarching behavioral parameters [5][8].
Example: A financial services company building an investment advisory chatbot structures its system message as: “You are a certified financial advisor with 15 years of experience in retirement planning. You provide conservative, risk-aware advice compliant with SEC regulations. You never guarantee returns or recommend specific securities. When uncertain, you direct users to consult their personal financial advisor.” This role specification ensures the model maintains appropriate professional boundaries and regulatory compliance across all subsequent interactions, preventing it from making inappropriate investment promises or providing advice beyond its scope.
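In chat-style APIs this separation is usually carried by distinct message roles. A minimal sketch in Python using the common system/user message convention; only the message assembly is shown, and the actual API request is omitted:

```python
# Build a chat-style message list that keeps the behavioral contract
# (system role) separate from the user's actual request.
SYSTEM_ROLE = (
    "You are a certified financial advisor with 15 years of experience "
    "in retirement planning. You provide conservative, risk-aware advice "
    "compliant with SEC regulations. You never guarantee returns or "
    "recommend specific securities. When uncertain, you direct users to "
    "consult their personal financial advisor."
)

def build_messages(user_question: str) -> list[dict]:
    """Return a messages list in the widely used system/user format."""
    return [
        {"role": "system", "content": SYSTEM_ROLE},
        {"role": "user", "content": user_question},
    ]

msgs = build_messages("Should I move my 401(k) into a single tech stock?")
```

Because the role lives in its own message, it persists across turns without being repeated inside every user query.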
Directive and Task Instruction
The directive or task instruction is the core imperative or question that explicitly states what the model should do [1][3][5][8]. This is often the single most critical element for output quality and must be explicit, unambiguous, and action-oriented. Vague verbs like “analyze” or “discuss” without clear success criteria frequently lead to diffuse or incorrect outputs [3][7][8].
Example: A legal technology firm developing a contract review tool initially used the vague prompt: “Analyze this employment agreement.” This produced inconsistent outputs ranging from general summaries to clause-by-clause breakdowns. After refinement, they adopted: “Review the following employment agreement and produce a numbered list of: (1) any non-compete clauses with their duration and geographic scope, (2) intellectual property assignment provisions, (3) termination conditions, and (4) any unusual or non-standard clauses. For each item, cite the specific section number.” This precise directive yielded consistent, actionable outputs that legal reviewers could efficiently validate.
Context and Background Information
Context encompasses background material, data, documents, or prior conversation history that the model needs to perform the task accurately [4][8]. This component provides the factual grounding and domain-specific information that the model cannot access from its training data alone, such as proprietary product specifications, user profiles, or recent events.
Example: A healthcare technology company building a patient education system structures prompts with extensive context: “Patient profile: 67-year-old female, Type 2 diabetes (diagnosed 2019), HbA1c 7.8%, currently taking metformin 1000mg twice daily, no known drug allergies. Recent lab results show elevated fasting glucose (avg 145 mg/dL over past month). Patient education level: high school, prefers simple explanations without medical jargon. Task: Explain why her doctor recommended adding a GLP-1 agonist to her treatment plan.” This context enables the model to generate personalized, appropriate explanations rather than generic diabetes information.
Few-Shot Demonstrations
Few-shot demonstrations are exemplar input-output pairs that teach the model the desired mapping pattern, format, or style through concrete examples [2][4]. This technique leverages in-context learning, where models induce task structure from labeled demonstrations without parameter updates. Few-shot prompting consistently improves performance on classification, reasoning, and formatting tasks [2][4].
Example: A content moderation system for a social media platform uses few-shot prompting to classify user comments. The prompt includes:
Classify comments as SAFE, BORDERLINE, or VIOLATION.
Example 1:
Comment: "I disagree with your political views but respect your right to express them."
Classification: SAFE
Reason: Respectful disagreement
Example 2:
Comment: "People like you are what's wrong with this country. Educate yourself."
Classification: BORDERLINE
Reason: Dismissive and mildly insulting but no direct threats or slurs
Example 3:
Comment: "You should be [violent threat]. People like you deserve [harm]."
Classification: VIOLATION
Reason: Direct violent threat
Now classify:
Comment: [user input]
This structure teaches the model both the classification schema and the reasoning style, producing more consistent and explainable moderation decisions.
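Assembling few-shot prompts programmatically keeps the demonstration format consistent as examples are added or swapped. A sketch under the assumption that examples are stored as (comment, label, reason) tuples:

```python
# Labeled demonstrations for the moderation task above.
EXAMPLES = [
    ("I disagree with your political views but respect your right to express them.",
     "SAFE", "Respectful disagreement"),
    ("People like you are what's wrong with this country. Educate yourself.",
     "BORDERLINE", "Dismissive and mildly insulting but no direct threats or slurs"),
]

def build_fewshot_prompt(comment: str) -> str:
    """Render the task directive, numbered examples, and the new query."""
    parts = ["Classify comments as SAFE, BORDERLINE, or VIOLATION.", ""]
    for i, (text, label, reason) in enumerate(EXAMPLES, start=1):
        parts += [f"Example {i}:", f'Comment: "{text}"',
                  f"Classification: {label}", f"Reason: {reason}", ""]
    parts += ["Now classify:", f'Comment: "{comment}"']
    return "\n".join(parts)

prompt = build_fewshot_prompt("This thread is useless.")
```

Each demonstration always carries the same three fields in the same order, so the model sees a uniform pattern to imitate.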
Output Format Specification
Output format specification defines the structure, style, and schema of the model’s response, such as JSON objects, numbered lists, tables, or constrained text formats [1][8]. Explicit formatting instructions enable automatic parsing, validation, and integration into downstream systems, and they reduce the likelihood of extraneous commentary or format drift [5][8].
Example: A market research firm extracting insights from customer interviews structures the output specification as:
Return your analysis as valid JSON matching this exact schema:
{
  "sentiment": "positive" | "negative" | "mixed",
  "key_themes": [string array, 3-5 items],
  "pain_points": [string array, specific quotes],
  "feature_requests": [
    {
      "feature": string,
      "urgency": "high" | "medium" | "low",
      "quote": string
    }
  ],
  "confidence": number between 0 and 1
}
Do not include any text outside the JSON object.
This specification ensures outputs can be automatically ingested into their analytics database without manual reformatting, and the confidence field enables quality filtering.
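A downstream validator can enforce such a schema before ingestion. A minimal sketch in Python; the key names mirror the schema above, while the specific checks and error messages are illustrative:

```python
import json

REQUIRED_KEYS = {"sentiment", "key_themes", "pain_points",
                 "feature_requests", "confidence"}
ALLOWED_SENTIMENT = {"positive", "negative", "mixed"}

def validate_analysis(raw: str) -> dict:
    """Parse a model response and check it against the expected schema.

    Raises ValueError with a specific message on any violation, which a
    caller can feed back into a retry prompt.
    """
    data = json.loads(raw)  # raises on non-JSON output
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if data["sentiment"] not in ALLOWED_SENTIMENT:
        raise ValueError("invalid sentiment value")
    if not 0 <= data["confidence"] <= 1:
        raise ValueError("confidence out of range")
    if not 3 <= len(data["key_themes"]) <= 5:
        raise ValueError("key_themes must have 3-5 items")
    return data

good = ('{"sentiment": "mixed", "key_themes": ["price", "support", "UI"], '
        '"pain_points": [], "feature_requests": [], "confidence": 0.8}')
result = validate_analysis(good)
```

Rejecting non-compliant outputs at this boundary is what makes the "automatic ingestion" described above safe in practice.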
Delimiters and Syntax Markers
Delimiters and syntax markers are structural elements—such as headings, bullet points, code fences, horizontal rules (---), or XML-style tags—used to visually and syntactically partition different sections of the prompt [1][8]. These markers help the model distinguish instructions from context, examples from queries, and user data from system directives, reducing ambiguity and improving parsing reliability [1][8].
Example: A document summarization service processing multiple articles in batch uses clear delimiters:
You are a professional research summarizer.
INSTRUCTIONS:
- Summarize each article in exactly 3 bullet points
- Focus on methodology, findings, and implications
- Use objective, academic tone
---
ARTICLE 1:
[full text of first article]
---
ARTICLE 2:
[full text of second article]
---
ARTICLE 3:
[full text of third article]
---
Provide summaries in the format:
Article 1:
- [bullet point]
- [bullet point]
- [bullet point]
The --- separators and section headers prevent the model from conflating content across articles and ensure it processes each document independently.
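Building the batched prompt programmatically guarantees the separators stay consistent across runs. A sketch, with the instruction block inlined and hypothetical article texts as placeholders:

```python
INSTRUCTIONS = """You are a professional research summarizer.
INSTRUCTIONS:
- Summarize each article in exactly 3 bullet points
- Focus on methodology, findings, and implications
- Use objective, academic tone"""

OUTPUT_FORMAT = ("Provide summaries in the format:\n"
                 "Article 1:\n- [bullet point]\n- [bullet point]\n- [bullet point]")

def build_batch_prompt(articles: list[str]) -> str:
    """Join instructions, articles, and the format spec with --- separators."""
    sections = [INSTRUCTIONS]
    for i, text in enumerate(articles, start=1):
        sections.append(f"ARTICLE {i}:\n{text}")
    sections.append(OUTPUT_FORMAT)
    return "\n---\n".join(sections)

p = build_batch_prompt(["First study text.", "Second study text."])
```

Every section boundary is emitted by the same join, so no article can accidentally run into its neighbor.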
Chain-of-Thought Structuring
Chain-of-thought structuring explicitly instructs the model to show its reasoning process step-by-step before arriving at a final answer [2][3]. This technique, often triggered by phrases like “think step by step” or “show your work,” improves performance on arithmetic, logical reasoning, and multi-step problem-solving tasks by making intermediate reasoning explicit and verifiable [2][3].
Example: An educational technology platform helping students solve algebra problems structures prompts as:
Solve the following equation step-by-step, explaining each transformation:
Problem: 3(2x - 4) + 5 = 2(x + 3) - 1
Format your response as:
Step 1: [transformation] → [result]
Explanation: [why this step is valid]
Step 2: [transformation] → [result]
Explanation: [why this step is valid]
[continue for all steps]
Final Answer: x = [value]
Verification: [substitute back into original equation]
This structure not only improves solution accuracy but also generates pedagogically valuable explanations that help students understand the problem-solving process, and the verification step catches computational errors.
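The verification step in this template is cheap to reproduce by hand. For the sample problem the algebra gives x = 3, and substituting back confirms both sides agree:

```python
# Worked solution of the sample problem:
#   3(2x - 4) + 5 = 2(x + 3) - 1
#   6x - 12 + 5   = 2x + 6 - 1      (expand both sides)
#   6x - 7        = 2x + 5          (simplify)
#   4x            = 12              (collect terms)
#   x             = 3
x = 3
lhs = 3 * (2 * x - 4) + 5
rhs = 2 * (x + 3) - 1
assert lhs == rhs  # the Verification step from the prompt template
```

This is exactly the kind of mechanical check the "Verification" line in the format asks the model to perform on itself.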
Applications in Real-World Contexts
Customer Support Automation
In customer support applications, prompt structure combines role specification, product documentation context, example interactions, and explicit formatting rules to ensure reliable, brand-aligned responses [5][9]. A telecommunications company, for instance, structures support agent prompts with a system role (“You are a helpful Verizon customer support specialist”), dynamically inserted context (customer account status, service plan details, recent interactions), examples of compliant responses that cite knowledge base articles, and formatting constraints (use bullet lists for multi-step instructions, always provide ticket numbers, never promise credits without manager approval). This comprehensive structure ensures consistency across thousands of daily interactions while maintaining compliance with company policies and regulatory requirements.
Code Generation and Software Development
Code generation applications rely on precise role definitions, detailed problem statements, and strict output syntax specifications to produce code that can be directly integrated into build systems [5][7]. A software development platform uses prompts structured as: “You are an expert Python developer specializing in data processing pipelines. Write production-quality code following PEP 8 style guidelines. Include type hints, docstrings, and error handling. Output only the code with no explanatory text.” The prompt then includes the specific requirements, input/output examples, and any relevant API documentation as context. The “code only” constraint is critical—it prevents the model from adding conversational text that would break automated testing pipelines. This structure enables developers to use LLM-generated code in continuous integration workflows with minimal manual cleanup.
Data Extraction and Structured Information Retrieval
Data extraction workflows embed JSON or CSV schemas directly in prompts and use separators to delineate multiple documents or records for batch processing [8]. A real estate analytics firm extracting property details from listing descriptions structures prompts with a detailed JSON schema specifying required fields (address, price, square footage, bedrooms, bathrooms, amenities), validation rules (price must be numeric, bedrooms must be integer), and handling instructions for missing data (use null for unavailable fields, never guess). When processing hundreds of listings, they use clear delimiters between each listing and explicit instructions to output a JSON array. This structure enables automated ingestion into their database, with schema validation catching any format deviations before data corruption occurs.
Retrieval-Augmented Generation Systems
Retrieval-augmented generation (RAG) systems depend on carefully structured prompts that interleave task instructions with retrieved passages and metadata, using clear markers so the model can distinguish instructions from evidence [5][8]. A legal research platform structures RAG prompts as: “You are a legal research assistant. Answer the user’s question using ONLY the provided case excerpts. Cite specific cases and page numbers for every claim. If the excerpts don’t contain sufficient information, state this explicitly rather than using general legal knowledge.” The prompt then includes retrieved passages formatted as: [Case: Smith v. Jones, 2019, p. 45] [excerpt text] [end excerpt]. This structure ensures the model grounds its responses in the retrieved evidence rather than hallucinating case law, and the citation requirement enables lawyers to verify every claim against source documents.
Best Practices
Front-Load Critical Instructions
Place the most important task instructions at the beginning of the prompt, before extensive context or examples [5][8]. The rationale is that LLMs process prompts sequentially and may give disproportionate weight to early tokens, and in long prompts, critical instructions buried deep in context may be diluted or overlooked due to attention limitations.
Implementation Example: A content generation platform initially placed instructions after several paragraphs of brand guidelines and style examples, resulting in inconsistent adherence to core requirements. After restructuring to place the directive first—"Write a 300-word product description for [product] targeting [audience]. Requirements: (1) include exactly 3 benefit statements, (2) use active voice throughout, (3) end with a clear call-to-action, (4) avoid superlatives like 'best' or 'perfect'"—followed by style context, output quality improved measurably. The explicit, numbered requirements at the start ensured the model prioritized these constraints throughout generation.
Use Clear Sectioning and Consistent Delimiters
Employ headings, horizontal rules, code fences, or XML-style tags to visually and syntactically separate instructions, examples, context, and user data [1][8]. This practice reduces ambiguity about which text represents directives versus content to be processed, and it makes prompts easier for both humans and models to parse reliably.
Implementation Example: A document classification system initially used unstructured prompts where instructions, examples, and the document to classify ran together in continuous text, leading to frequent errors where the model treated instructions as part of the document. After adopting a structured format with clear sections:
=== INSTRUCTIONS ===
Classify the document into exactly one category: TECHNICAL, BUSINESS, LEGAL, or MARKETING.
=== EXAMPLES ===
Document: "This API accepts POST requests with JSON payloads..."
Category: TECHNICAL
Document: "Q3 revenue exceeded projections by 12%..."
Category: BUSINESS
=== DOCUMENT TO CLASSIFY ===
[user document]
=== OUTPUT FORMAT ===
Category: [your classification]
Confidence: [0-1]
Classification accuracy improved by 23% and parsing errors dropped to near zero, because the model could reliably identify which text to classify versus which text contained instructions.
Iterate with Empirical Testing on Representative Data
Start with simpler, smaller prompts and grow complexity incrementally while testing with realistic inputs that represent the full distribution of production use cases [5][7]. Rapid iteration with A/B testing and quantitative evaluation prevents over-engineering and reveals which structural elements actually improve performance versus adding unnecessary complexity.
Implementation Example: A sentiment analysis service began with a minimal prompt: “Classify this review as positive or negative: [text].” Through systematic testing on 500 labeled reviews, they discovered this baseline achieved 78% accuracy. They then tested incremental additions: adding a role specification (no improvement), adding two examples per class (improved to 84%), adding explicit handling for mixed sentiment (improved to 87%), and adding a confidence score requirement (improved calibration without changing accuracy). By testing each change independently on held-out data, they built an optimized prompt structure based on evidence rather than intuition, avoiding the temptation to add every possible structural element.
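The evaluation loop behind this kind of iteration can be very small. In the sketch below, a keyword baseline stands in for the model call (purely illustrative); the point is measuring each variant against the same held-out labeled set:

```python
def accuracy(predict, dataset):
    """Fraction of labeled examples a prompt variant classifies correctly."""
    correct = sum(1 for text, label in dataset if predict(text) == label)
    return correct / len(dataset)

def baseline(text):
    """Stand-in for a model call: a naive keyword classifier."""
    return "negative" if "not" in text or "bad" in text else "positive"

dataset = [
    ("great product", "positive"),
    ("not worth it", "negative"),
    ("bad support experience", "negative"),
    ("love it", "positive"),
    ("nothing to complain about", "positive"),  # tricky: contains "not"
]
score = accuracy(baseline, dataset)
```

The deliberately tricky fifth example is why held-out data matters: a variant that looks perfect on easy cases can still fail on the real input distribution.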
Maintain Prompt Versioning and Documentation
Treat prompts as critical code artifacts with version control, change logs, and documentation of design decisions [5][9]. Since minor edits can have large behavioral impacts in production, systematic versioning enables rollback when changes degrade performance and provides an audit trail for debugging and compliance.
Implementation Example: A financial services firm maintains a Git repository for all production prompts, with each prompt file including a header comment documenting: version number, author, date, intended use case, known limitations, and links to evaluation results. When they updated a credit risk assessment prompt to improve handling of self-employed applicants, the change inadvertently reduced accuracy for standard W-2 employees. Because they had versioned the prompt and maintained evaluation benchmarks, they detected the regression within hours through automated testing, immediately rolled back to the previous version, and redesigned the change to improve self-employed handling without degrading the base case. Without versioning, the regression might have gone undetected for weeks, potentially affecting thousands of credit decisions.
Implementation Considerations
Token Budget and Context Window Management
Models have finite context windows (e.g., 4K, 8K, 32K, or 128K tokens), and adding extensive context or many examples can improve accuracy but may exceed limits or dilute the salience of critical instructions [2][4][8]. Practitioners must balance comprehensiveness against focus, carefully curating context to include only what is necessary and front-loading key instructions [5][8]. For applications requiring large context—such as analyzing full legal contracts or codebases—consider chunking strategies where documents are processed in segments with structured aggregation prompts, or use models with extended context windows despite higher latency and cost.
Example: A contract analysis system initially attempted to include entire 50-page agreements in prompts, frequently hitting token limits and producing incomplete analyses. They restructured to first extract relevant sections (termination clauses, liability provisions, payment terms) using targeted extraction prompts, then performed detailed analysis on these focused excerpts. This two-stage approach stayed within token budgets while improving analysis depth, because the model could focus attention on relevant content rather than processing dozens of pages of boilerplate.
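A simple greedy chunker illustrates the first stage of such a pipeline. The 4-characters-per-token estimate is a rough heuristic for English text, not a real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token (assumption)."""
    return max(1, len(text) // 4)

def chunk_document(paragraphs: list[str], budget: int) -> list[list[str]]:
    """Greedily pack paragraphs into chunks under a per-prompt token budget."""
    chunks, current, used = [], [], 0
    for para in paragraphs:
        cost = estimate_tokens(para)
        if current and used + cost > budget:
            chunks.append(current)   # flush the full chunk
            current, used = [], 0
        current.append(para)
        used += cost
    if current:
        chunks.append(current)
    return chunks

chunks = chunk_document(["a" * 40, "b" * 40, "c" * 40], budget=20)
```

Production systems would use the provider's tokenizer for the cost estimate and reserve budget for instructions and the model's response, but the packing logic is the same.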
Output Format Enforcement and Validation
Models sometimes drift from requested schemas or include extraneous commentary despite explicit formatting instructions [5][8]. Mitigation strategies include: repeating format requirements multiple times in the prompt, using explicit “do not include” constraints, wrapping format specifications in distinctive markers (e.g., XML-style tags the model has learned to respect), and implementing post-processing validation that rejects non-compliant outputs and retries with strengthened instructions.
Example: A data pipeline extracting structured information from research papers initially requested JSON output but found that 15% of responses included explanatory text before or after the JSON object, breaking automated parsing. They strengthened the prompt with: “Output ONLY valid JSON. Do not include any explanatory text, comments, or markdown formatting. The response must begin with { and end with }. Any response containing text outside the JSON object will be rejected.” They also implemented a validation layer that parsed the output and, on failure, automatically retried with an appended instruction: “Previous response was invalid. Remember: output ONLY the JSON object with no additional text.” This two-layer approach reduced format violations to under 2%.
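The validate-and-retry layer described here can be sketched as a small loop. The model is any callable from prompt to string; below, a stub misbehaves once and then complies after the corrective instruction:

```python
import json

def call_with_retry(model, prompt: str, max_tries: int = 2):
    """Request JSON; on a parse failure, retry with a corrective instruction."""
    for attempt in range(max_tries):
        raw = model(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            prompt += ("\nPrevious response was invalid. Remember: output "
                       "ONLY the JSON object with no additional text.")
    raise ValueError("model never produced valid JSON")

# Stub model: invalid on the first call (extra prose), valid on the second.
responses = iter(['Sure! Here is the data: {"a": 1}', '{"a": 1}'])
result = call_with_retry(lambda p: next(responses), "Extract as JSON.")
```

Capping the retry count keeps a persistently non-compliant model from looping forever; failures past the cap should surface as pipeline errors.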
Domain-Specific Customization and Terminology
Effective prompt structure varies significantly across domains based on specialized terminology, regulatory requirements, and output conventions [3][9]. Medical applications require careful attention to clinical terminology and safety disclaimers; legal applications demand precise citation formats and jurisdictional awareness; technical documentation requires consistent use of product-specific terms and version numbers. Practitioners should collaborate with domain experts to identify critical terminology, common edge cases, and compliance requirements, then encode these explicitly in role specifications and constraints.
Example: A medical device company developing patient education materials initially used generic health writing prompts, resulting in outputs that mixed consumer and clinical terminology inconsistently. After consulting with medical writers and regulatory affairs, they restructured prompts to specify: “Use lay terminology for all medical concepts (e.g., ‘heart attack’ not ‘myocardial infarction’). When technical terms are unavoidable, provide definitions in parentheses on first use. Follow FDA guidelines for patient labeling: include all required warnings, avoid absolute claims about efficacy, use ‘may help’ rather than ‘will cure.’ Reading level: 8th grade.” This domain-specific structure ensured outputs met both patient comprehension and regulatory compliance requirements.
Organizational Maturity and Governance
As organizations scale LLM deployments from experimentation to production, prompt structure and syntax practices must evolve to support governance, auditability, and risk management [5][9]. Mature implementations establish: centralized prompt libraries with reusable templates, approval workflows for high-risk applications, automated evaluation pipelines that test prompts against benchmark datasets before deployment, and monitoring systems that track output quality and detect drift over time. Organizations should also establish guidelines for handling sensitive data in prompts, ensuring that personally identifiable information, trade secrets, or regulated data are appropriately masked or excluded.
Example: A healthcare provider initially allowed individual developers to write prompts ad-hoc for various clinical decision support tools, leading to inconsistent quality and compliance risks. They established a prompt governance framework requiring: (1) all prompts for patient-facing or clinical applications must use approved templates from a central library, (2) new prompts undergo review by clinical, legal, and AI safety teams before production deployment, (3) all prompts are tested against a 500-case benchmark covering common scenarios and edge cases, with minimum accuracy thresholds, and (4) production prompts are monitored with automated alerts when output patterns deviate from baseline. This governance structure reduced compliance incidents while actually accelerating development, because developers could reuse vetted templates rather than designing from scratch.
Common Challenges and Solutions
Challenge: Ambiguous or Vague Instructions Leading to Inconsistent Outputs
One of the most frequent problems in prompt engineering is instructions that seem clear to humans but are interpreted inconsistently by models [3][7][8]. Vague action verbs (“analyze,” “review,” “discuss”), undefined success criteria, and implicit assumptions about output format or scope lead to high variance in responses, making outputs unreliable for production use. This challenge is particularly acute when prompts are written by subject matter experts unfamiliar with LLM behavior, who may assume the model shares their domain context and interpretive frameworks.
Solution:
Replace vague verbs with concrete, measurable actions and explicit success criteria [3][7]. Instead of “analyze this customer feedback,” specify: “Identify and list: (1) specific product features mentioned, (2) whether each mention is positive, negative, or neutral, (3) any feature requests, and (4) any mentions of competitor products. Format as a numbered list with each item on a separate line.” Test prompts with diverse inputs and examine outputs for consistency; if different runs produce structurally different responses, the instructions are likely too vague. Collaborate with domain experts to identify implicit assumptions and make them explicit in the prompt. For complex tasks, provide a detailed example of the desired output format and content, which serves as a concrete specification the model can pattern-match against.
Challenge: Context Overload and Instruction Burial
As prompts grow to include extensive background information, multiple examples, and detailed constraints, critical instructions can become buried in context, leading to degraded performance [2][4][8]. This is especially problematic in retrieval-augmented generation systems where retrieved passages may span thousands of tokens, or in conversational applications where conversation history accumulates over multiple turns. The model’s attention may be diluted across the large context, reducing focus on the core task directive.
Solution:
Adopt a “sandwich” structure that places critical instructions both at the beginning and end of the prompt [5][8]. Start with a concise task directive, include necessary context and examples in the middle, then repeat or reinforce the key instruction and output format at the end: “Remember: classify into exactly one category and provide a confidence score.” For very long contexts, use explicit section markers and refer to them in instructions: “Based on the RETRIEVED DOCUMENTS section above, answer the user’s question. Cite specific document IDs for all claims.” In RAG systems, consider summarizing or filtering retrieved passages before inclusion, keeping only the most relevant excerpts. For conversational applications, implement context pruning strategies that retain recent turns and critical system instructions while dropping older, less relevant history.
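Context pruning for conversational applications can be as simple as keeping the system message plus the most recent turns. A sketch; the fixed keep_recent cutoff is a simplification, and production systems often prune by token count instead:

```python
def prune_history(messages: list[dict], keep_recent: int = 4) -> list[dict]:
    """Keep system messages plus only the most recent turns, so critical
    instructions are not diluted by old conversation history."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_recent:]

history = [{"role": "system", "content": "You are a support agent."}]
history += [{"role": "user", "content": f"turn {i}"} for i in range(10)]
pruned = prune_history(history, keep_recent=4)
```

Because the system message is preserved unconditionally, the behavioral contract survives pruning even in very long conversations.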
Challenge: Format Non-Compliance and Parsing Failures
Despite explicit output format specifications, models sometimes produce responses that deviate from requested schemas—adding explanatory text around JSON objects, using inconsistent delimiters, omitting required fields, or mixing formats [5][8]. These format violations break automated parsing and downstream processing, requiring manual intervention or causing pipeline failures. The challenge is exacerbated when prompts are used across different model versions or providers, which may have learned different formatting conventions.
Solution:
Implement a multi-layered approach combining stronger prompt constraints, validation, and retry logic [5][8]. In the prompt, use emphatic, repeated format instructions: “Output ONLY valid JSON. Do not include explanatory text before or after the JSON. The first character must be { and the last character must be }.” Provide a concrete example of the exact format required. Implement post-processing validation that attempts to parse the output and, on failure, extracts any embedded valid format (e.g., finding JSON within surrounding text) or triggers an automatic retry with an appended correction: “Your previous response included text outside the JSON object. Please provide ONLY the JSON with no additional text.” For critical applications, consider using models with function calling or structured output modes (e.g., OpenAI’s JSON mode) that enforce format constraints at the API level. Monitor format compliance rates in production and refine prompts when violations exceed acceptable thresholds.
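The salvage step, extracting a JSON object embedded in surrounding commentary, can be sketched with simple brace matching. This assumes a single top-level object and no stray braces in the text after it:

```python
import json

def extract_json(text: str):
    """Salvage a JSON object embedded in surrounding commentary by taking
    the span from the first '{' to the last '}' and parsing it.
    Returns None when no parseable object is found."""
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end <= start:
        return None
    try:
        return json.loads(text[start:end + 1])
    except json.JSONDecodeError:
        return None

salvaged = extract_json('Here is your result:\n{"score": 0.9}\nHope that helps!')
```

Running this before triggering a retry avoids spending an extra model call when the payload is recoverable.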
Challenge: Hallucination and Factual Inaccuracy
LLMs frequently generate plausible-sounding but factually incorrect information, a phenomenon known as hallucination [2][5][9]. This is particularly problematic in domains requiring factual accuracy—such as medical advice, legal research, or technical documentation—where confident but false statements can cause serious harm. Hallucination often increases when prompts ask for information beyond the model’s training data or when context is insufficient to ground responses in facts.
Solution:
Structure prompts to constrain the model to provided context and make uncertainty explicit [5][9]. Use instructions like: “Answer based ONLY on the information in the provided documents. If the documents do not contain sufficient information to answer the question, respond with ‘The provided documents do not contain enough information to answer this question’ rather than using general knowledge.” Require citations for all factual claims: “For every statement, cite the specific document and page number where the information appears.” Implement retrieval-augmented generation to provide authoritative source material rather than relying on the model’s parametric knowledge. For high-stakes applications, add explicit uncertainty quantification: “Rate your confidence in this answer from 0-10 and explain what information would be needed to increase confidence.” Use chain-of-thought prompting to make reasoning explicit and verifiable: “Show your reasoning step-by-step, citing specific evidence for each step.” Finally, implement human-in-the-loop review for critical outputs, with the prompt structure designed to facilitate efficient verification (e.g., by including citations and confidence scores).
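A lightweight check can flag uncited claims for human review. The sketch below assumes the prompt asked for one claim per line and citations in the [Case: ...] format used earlier; both the line convention and the regex are assumptions about the surrounding system:

```python
import re

# Pattern for the [Case: Smith v. Jones, 2019, p. 45] citation style.
CITATION = re.compile(r"\[Case: [^\]]+\]")

def uncited_lines(answer: str) -> list[str]:
    """Return claim lines that carry no citation, for human review."""
    lines = [ln.strip() for ln in answer.splitlines() if ln.strip()]
    return [ln for ln in lines if not CITATION.search(ln)]

answer = (
    "Non-competes over two years are disfavored [Case: Smith v. Jones, 2019, p. 45]\n"
    "Courts always strike them down"
)
flagged = uncited_lines(answer)
```

Routing only the flagged lines to reviewers keeps human-in-the-loop verification focused on the claims that cannot be traced back to a source.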
Challenge: Sensitivity to Prompt Variations and Brittleness
Small changes in prompt wording, ordering, or formatting can produce surprisingly large changes in output quality, making prompts brittle and difficult to maintain [2][4]. A prompt that works well on test cases may fail on slightly different production inputs, and minor edits intended to improve one aspect may inadvertently degrade others. This sensitivity makes prompt engineering feel more like an art than a science and complicates collaboration when multiple team members edit prompts.
Solution:
Adopt systematic testing and version control practices to manage prompt brittleness [5][7]. Maintain a diverse benchmark dataset covering common cases, edge cases, and known failure modes, and evaluate every prompt change against this benchmark before deployment. Use A/B testing in production to compare prompt variants on real traffic, measuring not just accuracy but also consistency, format compliance, and user satisfaction. When making changes, modify one element at a time and measure the impact, rather than making multiple simultaneous edits that confound attribution. Document the rationale for each structural choice and known sensitivities (e.g., “removing the role specification reduced accuracy by 8% on technical queries”). Consider ensemble approaches where multiple prompt variants are used and outputs are aggregated or selected based on confidence scores, reducing dependence on any single brittle prompt. For critical applications, implement canary deployments where new prompts are tested on a small percentage of traffic before full rollout, with automatic rollback if quality metrics degrade.
References
1. LearnPrompting.org. (2024). Prompt Structure. https://learnprompting.org/docs/basics/prompt_structure
2. Wikipedia. (2024). Prompt engineering. https://en.wikipedia.org/wiki/Prompt_engineering
3. Amazon Web Services. (2024). What is Prompt Engineering? https://aws.amazon.com/what-is/prompt-engineering/
4. DAIR.AI. (2024). Prompt Engineering Guide – Basics. https://www.promptingguide.ai/introduction/basics
5. OpenAI. (2024). Prompt Engineering Guide. https://platform.openai.com/docs/guides/prompt-engineering
6. Georgia Institute of Technology. (2024). AI Prompt Engineering with ChatGPT. https://iac.gatech.edu/featured-news/2024/02/AI-prompt-engineering-ChatGPT
7. GitHub. (2024). What is Prompt Engineering? https://github.com/resources/articles/what-is-prompt-engineering
8. Microsoft. (2024). Prompt Engineering Concepts. https://learn.microsoft.com/en-us/azure/ai-foundry/openai/concepts/prompt-engineering
9. IBM. (2024). What is Prompt Engineering? https://www.ibm.com/think/topics/prompt-engineering
