Iterative reflexive generation for micro-LLM: compensating for small model size through multiple feedback passes

Authors: Khudaiberideva G.B., Kozhukhov D.A., Pimenkova A.A.

Journal: Теория и практика современной науки (Theory and Practice of Modern Science), @modern-j

Section: Main section

Issue: 8 (122), 2025.

Free access

An Iterative Reflexive Generation (IRG) approach for small language models (micro-LLMs) is proposed. The approach aims to overcome the limitations imposed by the small parameter count of micro-LLMs through successive cycles of generation, analysis, and refinement of the output. The micro-LLM first produces a rough draft response; the same model instance, or specialized lightweight mechanisms, then analyzes the generated text for task compliance, format, and logical coherence, and identifies weaknesses. Based on this analysis, clarifying instructions are generated for the next generation cycle. The approach thus trades time and computing resources for output quality at a fixed model size. Experimental results demonstrate a statistically significant improvement in generation-quality metrics compared to single-pass generation.
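The generate-analyze-refine cycle described above can be sketched as a simple loop. This is a minimal illustrative sketch, not the authors' implementation: the `toy_model`, the `critique` checks, and the `task_spec` fields are all assumptions standing in for a real micro-LLM and its analysis mechanisms.

```python
def critique(draft, task_spec):
    # Lightweight analysis pass (illustrative): check task compliance
    # and format, and return clarifying instructions for weaknesses found.
    issues = []
    if task_spec["must_contain"] not in draft:
        issues.append(f"Include the term '{task_spec['must_contain']}'.")
    if len(draft.split()) > task_spec["max_words"]:
        issues.append(f"Shorten to at most {task_spec['max_words']} words.")
    return issues

def irg(prompt, task_spec, model, max_passes=3):
    # Iterative Reflexive Generation loop: rough generation, analysis,
    # then regeneration with accumulated clarifying instructions.
    instructions = []
    draft = ""
    for _ in range(max_passes):
        draft = model(prompt + "\n" + "\n".join(instructions))
        issues = critique(draft, task_spec)
        if not issues:
            break  # draft satisfies all checks: stop early
        instructions.extend(issues)  # refine instructions for next pass
    return draft

def toy_model(full_prompt):
    # Hypothetical stand-in for a micro-LLM that obeys simple
    # textual instructions embedded in its prompt.
    if "Include the term 'IRG'" in full_prompt:
        return "IRG answer"
    return "a verbose answer about language models " * 5
```

A usage example under these assumptions: `irg("Explain IRG.", {"must_contain": "IRG", "max_words": 10}, toy_model)` fails the first pass (the draft is too long and lacks the required term), adds the critic's instructions to the prompt, and succeeds on the second pass. The fixed `max_passes` bound is what realizes the stated compute/quality trade-off.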


Keywords: micro-LLM

Short URL: https://sciup.org/140312537

IDR: 140312537   |   UDC: 004.89