Research Article

AI and Prompt Architecture – A Literature Review

by  Cassandra Ansara
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 185 - Issue 34
Published: Sep 2023
DOI: 10.5120/ijca2023923133

Cassandra Ansara. AI and Prompt Architecture – A Literature Review. International Journal of Computer Applications. 185, 34 (Sep 2023), 39-45. DOI=10.5120/ijca2023923133

@article{10.5120/ijca2023923133,
  author    = {Cassandra Ansara},
  title     = {AI and Prompt Architecture – A Literature Review},
  journal   = {International Journal of Computer Applications},
  year      = {2023},
  volume    = {185},
  number    = {34},
  pages     = {39-45},
  doi       = {10.5120/ijca2023923133},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
Abstract

Prompt Architecture represents a novel and systematic approach to the design and optimization of prompts within Conversational AI systems. This literature review synthesizes key developments, methodologies, and insights in the field, drawing from historical influences, recent advances, and current challenges. The review begins with an examination of early influences, such as Weizenbaum's ELIZA chatbot and Minsky's Frames Paradigm, and proceeds to explore modular prompting strategies, optimization techniques, and evaluation methods. Attention is given to innovative approaches, applications in conversational systems, user-centered design, knowledge representation, and ethical considerations. The review identifies existing gaps in the field, including the need for standardized benchmarks, inclusiveness, and ethical oversight. It concludes with a set of recommended actions for further research and development. The insights and recommendations provided in this review contribute to the maturation of Prompt Architecture as a robust and ethical methodology, with potential implications for the broader field of language model interaction and design.

References
  • Chen, M. F., Fu, D. Y., Sala, F., Wu, S., Mullapudi, R. T., Poms, F., Fatahalian, K., & Ré, C. 2020. Train and You’ll Miss It: Interactive Model Iteration with Weak Supervision and Pre-Trained Embeddings. ArXiv.org. https://doi.org/10.48550/arXiv.2006.15168
  • Grice, H. P. 1975. Logic and conversation. Academic Press.
  • Kannan, A., Kurach, K., Ravi, S., Kaufmann, T., Tomkins, A., Miklos, B., Corrado, G., Lukacs, L., Ganea, M., Young, P., & Ramavajjala, V. 2016. Smart Reply: Automated Response Suggestion for Email. ArXiv:1606.04870 [Cs]. https://arxiv.org/abs/1606.04870
  • Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., & Sabharwal, A. 2023. Decomposed Prompting: A Modular Approach for Solving Complex Tasks. ArXiv.org. https://doi.org/10.48550/arXiv.2210.02406
  • Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. 2022. Large Language Models are Zero-Shot Reasoners. ArXiv:2205.11916 [Cs]. https://arxiv.org/abs/2205.11916
  • Lester, B., Al-Rfou, R., & Constant, N. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. ArXiv:2104.08691 [Cs]. https://arxiv.org/abs/2104.08691
  • Li, R., Patel, T., & Du, X. 2023. PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations. https://arxiv.org/pdf/2307.02762.pdf
  • Lieberman, H. 2009. User Interface Goals, AI Opportunities. AI Magazine, 30(4), 16. https://doi.org/10.1609/aimag.v30i4.2266
  • Lin, B. Y., Zhou, W., Shen, M., Zhou, P., Bhagavatula, C., Choi, Y., & Ren, X. 2020. CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning. ArXiv:1911.03705 [Cs]. https://arxiv.org/abs/1911.03705
  • Liu, X., Zheng, Y., Du, Z., Ding, M., Qian, Y., Yang, Z., & Tang, J. 2021. GPT Understands, Too. ArXiv:2103.10385 [Cs]. https://arxiv.org/abs/2103.10385
  • Minsky, M. 1974. A Framework for Representing Knowledge. MIT-AI Laboratory Memo 306. https://courses.media.mit.edu/2004spring/mas966/Minsky%201974%20Framework%20for%20knowledge.pdf
  • Musker, S., & Pavlick, E. 2023. Testing Causal Models of Word Meaning in GPT-3 and -4. ArXiv.org. https://doi.org/10.48550/arXiv.2305.14630
  • Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. 2018. Improving Language Understanding by Generative Pre-Training. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf
  • Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., & Liu, P. J. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 21(140), 1–67. https://jmlr.org/papers/v21/20-074.html
  • Chaabouni, R., Kharitonov, E., Bouchacourt, D., Dupoux, E., & Baroni, M. 2020. Compositionality and Generalization in Emergent Languages. https://doi.org/10.18653/v1/2020.acl-main.407
  • Schank, R. C., & Abelson, R. P. 1977. Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Associates.
  • Shin, T., Razeghi, Y., Logan IV, R. L., Wallace, E., & Singh, S. 2020. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. ArXiv.org. https://doi.org/10.48550/arXiv.2010.15980
  • Si, C., Gan, Z., Yang, Z., Wang, S., Wang, J., Boyd-Graber, J., & Wang, L. 2022. Prompting GPT-3 To Be Reliable. ArXiv:2210.09150 [Cs]. https://arxiv.org/abs/2210.09150
  • Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., & Roux, N. L. 2023. Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference. ArXiv.org. https://doi.org/10.48550/arXiv.2306.12509
  • Wang, Y., & Zhao, Y. 2023. Metacognitive Prompting Improves Understanding in Large Language Models. ArXiv.org. https://doi.org/10.48550/arXiv.2308.05342
  • Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. ArXiv:2201.11903 [Cs]. https://arxiv.org/abs/2201.11903
  • Weizenbaum, J. 1966. ELIZA---a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
  • White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. 2023. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. ArXiv:2302.11382 [Cs]. https://arxiv.org/abs/2302.11382
  • Winograd, T. 1971. Procedures as a Representation for Data in a Computer Program for Understanding Natural Language. Dspace.mit.edu. https://dspace.mit.edu/handle/1721.1/7095
  • Yang, X., Cheng, W., Zhao, X., Yu, W., Petzold, L., & Chen, H. 2023. Dynamic Prompting: A Unified Framework for Prompt Tuning. ArXiv.org. https://doi.org/10.48550/arXiv.2303.02909
  • Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., & Narasimhan, K. 2023. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. ArXiv.org. https://doi.org/10.48550/arXiv.2305.10601
  • Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. ArXiv.org. https://doi.org/10.48550/arXiv.2210.03629
  • Zhou, Y., Muresanu, A. I., Han, Z., Paster, K., Pitis, S., Chan, H., & Ba, J. 2022. Large Language Models Are Human-Level Prompt Engineers. ArXiv:2211.01910 [Cs]. https://arxiv.org/abs/2211.01910
Index Terms
Computer Science
Information Sciences
Keywords

Prompt Architecture, Modular Prompting, Chain-of-Thought Prompting, Prompt Optimization, User-Centered Design, Large Language Models (LLMs), ELIZA Chatbot, Minsky's Frames Paradigm, Unified Text-to-Text Approach, Accessibility in AI, Ethical Implications of LLMs, Evaluation of Prompt Quality, Weak Supervision in AI, Automated Prompt Generation, Conversational AI Applications.
