
Browsing by Subject "Large language models (LLM)"

Showing 1 - 1 of 1
Publication
Fine-tuning an LLM to generate summary reports from trading experts, with data integration from social media
    (Universidad EAFIT, 2025) Restrepo Acevedo, Andrés Felipe; Martínez Vargas, Juan David
The contemporary financial market is characterized by its high complexity and the massive volume of structured and unstructured data generated daily, posing significant challenges for individual investors in terms of analysis and informed decision-making. This project proposes the fine-tuning of a Small Language Model (SLM) integrated into a tool capable of generating financial analysis reports similar to those produced by experts. For the proof of concept (PoC), transcripts from financial analysis videos published by experts on their YouTube channels are used. The SLM is fine-tuned using instruction-based techniques and the LoRA (Low-Rank Adapters) method, with the aim of extracting and summarizing key information relevant to individual investors. The main objective of this tool is to assist individual investors by generating efficient and accessible reports, facilitating access to valuable information in natural language, and enhancing their ability to make data-driven decisions from unstructured data, all with minimal investment of time and resources. Experimental results demonstrate the viability of using fine-tuned Small Language Models (SLMs) for the generation of high-quality financial reports. Specifically, the selected model, finetune_qlora_unsloth_llama_3.1_8B_Instruct_bnb_4bit_v2_Q8_0, achieved an average score of 5.67 out of 10 in an evaluation conducted by a judge LLM, with an average cosine distance of 0.159 relative to the reference summaries generated by the foundational pretrained model GPT-4.1. This represents a 97.5% improvement over the same base model, Llama 3.1 8B Instruct, without fine-tuning. Qualitatively, the model exhibits high fidelity and coherence in extracting and synthesizing key information from moderately long contexts, although it struggles with thematic interpretation of considerably lengthy transcripts. Additionally, the tool is projected to save individual investors 560 hours annually, along with an estimated annual reduction in API costs of $7.52 to $25 for the channels analyzed in the proof of concept.
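
The fine-tuning recipe named in the abstract (instruction tuning with QLoRA adapters via Unsloth on Llama 3.1 8B Instruct in 4-bit) can be sketched with the standard Hugging Face peft/bitsandbytes stack. This is a minimal illustration, not the thesis code: the hyperparameters (r, lora_alpha, target modules) and the substitution of peft for the Unsloth wrapper are assumptions.

```python
# Minimal QLoRA-style setup sketch (assumed hyperparameters, not the thesis code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # base model named in the abstract

# 4-bit quantization of the frozen base weights ("bnb 4bit").
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb, device_map="auto"
)

# Low-Rank Adapters (LoRA): small trainable matrices injected into the
# attention projections; only these are updated during instruction tuning.
lora = LoraConfig(
    r=16,                       # adapter rank (assumed)
    lora_alpha=32,              # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # confirms most weights stay frozen
```

Instruction tuning on transcript/summary pairs would then proceed with a standard supervised trainer such as trl's SFTTrainer.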
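
The evaluation described in the abstract pairs a judge-LLM score with a cosine distance between each generated summary and a GPT-4.1 reference summary. Below is a hedged sketch of the distance metric, assuming a sentence-transformers embedding model (all-MiniLM-L6-v2 is a placeholder; the abstract does not name the encoder used):

```python
# Cosine distance between a generated summary and a reference summary.
# The embedding model is an assumption for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

def cosine_distance(generated: str, reference: str) -> float:
    """Return 1 - cosine similarity of the two summaries' embeddings."""
    va, vb = encoder.encode([generated, reference])
    sim = float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))
    return 1.0 - sim

# A distance near 0 means the fine-tuned model tracks the reference closely;
# the abstract reports an average of 0.159 against GPT-4.1 references.
print(cosine_distance(
    "Gold is expected to rise as rate cuts approach, per the analyst.",
    "The expert forecasts gold gains driven by upcoming rate cuts.",
))
```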
