arXiv submission date: 2026-01-13
📄 Abstract - Enhancing Sentiment Classification and Irony Detection in Large Language Models through Advanced Prompt Engineering Techniques

This study investigates the use of prompt engineering to enhance large language models (LLMs), specifically GPT-4o-mini and gemini-1.5-flash, in sentiment analysis tasks. It evaluates advanced prompting techniques such as few-shot learning, chain-of-thought prompting, and self-consistency against a baseline. Key tasks include sentiment classification, aspect-based sentiment analysis, and the detection of subtle nuances such as irony. The research details the theoretical background, datasets, and methods used, assessing LLM performance with accuracy, recall, precision, and F1 score. Findings reveal that advanced prompting significantly improves sentiment analysis: the few-shot approach excels with GPT-4o-mini, while chain-of-thought prompting boosts irony detection in gemini-1.5-flash by up to 46%. Thus, while advanced prompting techniques improve performance overall, the fact that few-shot prompting works best for GPT-4o-mini and chain-of-thought excels at irony detection in gemini-1.5-flash suggests that prompting strategies must be tailored to both the model and the task, aligning prompt design with the LLM's architecture and the semantic complexity of the task.
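The paper does not reproduce its exact prompts here, so the following is only a minimal sketch of what few-shot sentiment classification and chain-of-thought irony detection could look like with GPT-4o-mini via the OpenAI Python SDK. The example texts, prompt wording, and the `classify` helper are hypothetical illustrations, not the authors' implementation.

```python
# Sketch of few-shot and chain-of-thought prompting for sentiment / irony
# classification. Prompts and example texts are hypothetical, not from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT_PROMPT = """Classify the sentiment of the text as positive, negative, or neutral.

Text: "The battery lasts two full days, I'm impressed."
Sentiment: positive

Text: "The screen cracked after a week."
Sentiment: negative

Text: "{text}"
Sentiment:"""

COT_IRONY_PROMPT = """Decide whether the text is ironic. First reason step by step
about the literal meaning versus the intended meaning, then answer with
"ironic" or "not ironic" on the last line.

Text: "{text}"
Reasoning:"""


def classify(prompt_template: str, text: str, model: str = "gpt-4o-mini") -> str:
    """Send one zero-temperature classification request and return the raw reply."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user", "content": prompt_template.format(text=text)}],
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    sample = "Great, another Monday. Just what I needed."
    print(classify(FEW_SHOT_PROMPT, sample))   # few-shot sentiment label
    print(classify(COT_IRONY_PROMPT, sample))  # chain-of-thought irony verdict
```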

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: sentiment analysis, prompt engineering, irony detection, few-shot learning, chain-of-thought

Enhancing Sentiment Classification and Irony Detection in Large Language Models through Advanced Prompt Engineering Techniques


1️⃣ One-Sentence Summary

This study finds that advanced prompt-engineering techniques such as few-shot learning and chain-of-thought prompting can significantly improve the performance of large language models like GPT-4o-mini and Gemini-1.5-flash on sentiment analysis and irony detection tasks, but the best strategy must be tailored to the specific model and task type.
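The abstract also lists self-consistency among the evaluated techniques. A common way to implement it is to sample several chain-of-thought replies at non-zero temperature and take a majority vote over the final labels. The sketch below illustrates that idea with a hypothetical `sample_once` callable standing in for a real model call; it is not the paper's implementation, and the decoding settings are illustrative only.

```python
# Self-consistency sketch: sample several chain-of-thought replies and keep the
# majority label. `sample_once` is a hypothetical stand-in for one LLM call at
# non-zero temperature.
from collections import Counter
from typing import Callable


def extract_label(reply: str) -> str:
    """Take the last non-empty line of a chain-of-thought reply as its label."""
    lines = [line.strip().lower() for line in reply.splitlines() if line.strip()]
    return lines[-1] if lines else "unknown"


def self_consistent_label(sample_once: Callable[[], str], n_samples: int = 5) -> str:
    """Draw n_samples replies and return the most frequent extracted label."""
    votes = [extract_label(sample_once()) for _ in range(n_samples)]
    return Counter(votes).most_common(1)[0][0]


if __name__ == "__main__":
    # Toy sampler mimicking three "ironic" votes and two "not ironic" votes.
    import itertools
    fake_replies = itertools.cycle([
        "The praise contradicts the situation.\nironic",
        "The praise contradicts the situation.\nironic",
        "Literal reading seems intended.\nnot ironic",
        "The praise contradicts the situation.\nironic",
        "Literal reading seems intended.\nnot ironic",
    ])
    print(self_consistent_label(lambda: next(fake_replies), n_samples=5))  # -> "ironic"
```

In practice, `sample_once` would wrap a chain-of-thought prompt like the one sketched above with the temperature raised above zero so that the sampled reasoning paths actually differ.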

Source: arXiv:2601.08302