Curso: Visión por computadora aplicada a la biodiversidad¶

Proyecto Final - Detección de Vida Silvestre con RetinaNet (PyTorch) en GPU¶

Profesor: Emilia Zeledón Redbioma


Fecha de entrega: 22 de Agosto del 2025, a más tardar a las 11:59 pm.

Medio de entrega: Por medio de Google Drive-Classroom.

Entregables: Un archivo jupyter ( .IPYNB ) y carpeta de imagenes dentro de un archivo ZIP.

Estudiante:

  1. Alexander Barrantes Herrera

Objetivos¶

  1. Reproducir el enfoque de (Gosh, 2025) para RetinaNet con PyTorch.
  2. Aplicar EDA al dataset African Wildlife (Ultralytics).
  3. Sintetizar resultados en un póster (ACM IPT guidelines).

Referencias¶

  • Ultralytics African Wildlife: https://docs.ultralytics.com/datasets/detect/african-wildlife/
  • LearnOpenCV (Gosh, 2025): https://learnopencv.com/finetuning-retinanet/
  • EDA imágenes: Kaggle (Fajri, 2022) y Henrhoi (2020)

0. Instalar Librerías¶

Antes de ejecutar el código de este archivo, se requiere cambiar la aceleración de hardware del entorno de ejecución alojado de CPU a GPU TA

In [ ]:
# Instala librerías después del reinicio
# !pip install --upgrade numpy
!pip install --upgrade --force-reinstall "numpy<2"
Collecting numpy<2
  Using cached numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Using cached numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.0 MB)
Installing collected packages: numpy
  Attempting uninstall: numpy
    Found existing installation: numpy 2.2.6
    Uninstalling numpy-2.2.6:
      Successfully uninstalled numpy-2.2.6
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
opencv-contrib-python 4.12.0.88 requires numpy<2.3.0,>=2; python_version >= "3.9", but you have numpy 1.26.4 which is incompatible.
thinc 8.3.6 requires numpy<3.0.0,>=2.0.0, but you have numpy 1.26.4 which is incompatible.
opencv-python 4.12.0.88 requires numpy<2.3.0,>=2; python_version >= "3.9", but you have numpy 1.26.4 which is incompatible.
opencv-python-headless 4.12.0.88 requires numpy<2.3.0,>=2; python_version >= "3.9", but you have numpy 1.26.4 which is incompatible.
Successfully installed numpy-1.26.4
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
/tmp/ipython-input-3464384502.py in <cell line: 0>()
      1 # Instala librerías después del reinicio
      2 # !pip install --upgrade numpy
----> 3 get_ipython().system('pip install --upgrade --force-reinstall "numpy<2"')

/usr/local/lib/python3.12/dist-packages/google/colab/_shell.py in system(self, *args, **kwargs)
    150 
    151     if pip_warn:
--> 152       _pip.print_previous_import_warning(output)
    153 
    154   def _send_error(self, exc_content):

/usr/local/lib/python3.12/dist-packages/google/colab/_pip.py in print_previous_import_warning(output)
     54 def print_previous_import_warning(output):
     55   """Prints a warning about previously imported packages."""
---> 56   packages = _previously_imported_packages(output)
     57   if packages:
     58     # display a list of packages using the colab-display-data mimetype, which

/usr/local/lib/python3.12/dist-packages/google/colab/_pip.py in _previously_imported_packages(pip_output)
     48 def _previously_imported_packages(pip_output):
     49   """List all previously imported packages from a pip install."""
---> 50   installed = set(_extract_toplevel_packages(pip_output))
     51   return sorted(installed.intersection(set(sys.modules)))
     52 

/usr/local/lib/python3.12/dist-packages/google/colab/_pip.py in _extract_toplevel_packages(pip_output)
     37   """Extract the list of toplevel packages associated with a pip install."""
     38   toplevel = collections.defaultdict(set)
---> 39   for m, ps in importlib.metadata.packages_distributions().items():
     40     for p in ps:
     41       toplevel[p].add(m)

/usr/lib/python3.12/importlib/metadata/__init__.py in packages_distributions()
    945     pkg_to_dist = collections.defaultdict(list)
    946     for dist in distributions():
--> 947         for pkg in _top_level_declared(dist) or _top_level_inferred(dist):
    948             pkg_to_dist[pkg].append(dist.metadata['Name'])
    949     return dict(pkg_to_dist)

/usr/lib/python3.12/importlib/metadata/__init__.py in _top_level_inferred(dist)
    957     opt_names = {
    958         f.parts[0] if len(f.parts) > 1 else inspect.getmodulename(f)
--> 959         for f in always_iterable(dist.files)
    960     }
    961 

/usr/lib/python3.12/importlib/metadata/__init__.py in files(self)
    498             return list(filter(lambda path: path.locate().exists(), package_paths))
    499 
--> 500         return skip_missing_files(
    501             make_files(
    502                 self._read_files_distinfo()

/usr/lib/python3.12/importlib/metadata/_functools.py in wrapper(param, *args, **kwargs)
    100     def wrapper(param, *args, **kwargs):
    101         if param is not None:
--> 102             return func(param, *args, **kwargs)
    103 
    104     return wrapper

/usr/lib/python3.12/importlib/metadata/__init__.py in skip_missing_files(package_paths)
    496         @pass_none
    497         def skip_missing_files(package_paths):
--> 498             return list(filter(lambda path: path.locate().exists(), package_paths))
    499 
    500         return skip_missing_files(

/usr/lib/python3.12/importlib/metadata/__init__.py in <lambda>(path)
    496         @pass_none
    497         def skip_missing_files(package_paths):
--> 498             return list(filter(lambda path: path.locate().exists(), package_paths))
    499 
    500         return skip_missing_files(

/usr/lib/python3.12/pathlib.py in exists(self, follow_symlinks)
    858         """
    859         try:
--> 860             self.stat(follow_symlinks=follow_symlinks)
    861         except OSError as e:
    862             if not _ignore_error(e):

/usr/lib/python3.12/pathlib.py in stat(self, follow_symlinks)
    838         os.stat() does.
    839         """
--> 840         return os.stat(self, follow_symlinks=follow_symlinks)
    841 
    842     def lstat(self):

KeyboardInterrupt: 
In [ ]:
# Instala una versión estable con CUDA 11.8 (compat con Colab T4)
!pip install torch==2.3.0 torchvision==0.18.0 --index-url https://download.pytorch.org/whl/cu121
In [2]:
# Librería Torchmetrics y Transformers
!pip install torchmetrics
!pip install "transformers==4.41.2"
Collecting torchmetrics
  Downloading torchmetrics-1.8.1-py3-none-any.whl.metadata (22 kB)
Requirement already satisfied: numpy>1.20.0 in /usr/local/lib/python3.12/dist-packages (from torchmetrics) (2.0.2)
Requirement already satisfied: packaging>17.1 in /usr/local/lib/python3.12/dist-packages (from torchmetrics) (25.0)
Requirement already satisfied: torch>=2.0.0 in /usr/local/lib/python3.12/dist-packages (from torchmetrics) (2.8.0+cu126)
Collecting lightning-utilities>=0.8.0 (from torchmetrics)
  Downloading lightning_utilities-0.15.2-py3-none-any.whl.metadata (5.7 kB)
Requirement already satisfied: setuptools in /usr/local/lib/python3.12/dist-packages (from lightning-utilities>=0.8.0->torchmetrics) (75.2.0)
Requirement already satisfied: typing_extensions in /usr/local/lib/python3.12/dist-packages (from lightning-utilities>=0.8.0->torchmetrics) (4.14.1)
Requirement already satisfied: filelock in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (3.19.1)
Requirement already satisfied: sympy>=1.13.3 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (1.13.3)
Requirement already satisfied: networkx in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (3.5)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (3.1.6)
Requirement already satisfied: fsspec in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (2025.3.0)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.6.77 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (12.6.77)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.6.77 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (12.6.77)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.6.80 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (12.6.80)
Requirement already satisfied: nvidia-cudnn-cu12==9.10.2.21 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (9.10.2.21)
Requirement already satisfied: nvidia-cublas-cu12==12.6.4.1 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (12.6.4.1)
Requirement already satisfied: nvidia-cufft-cu12==11.3.0.4 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (11.3.0.4)
Requirement already satisfied: nvidia-curand-cu12==10.3.7.77 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (10.3.7.77)
Requirement already satisfied: nvidia-cusolver-cu12==11.7.1.2 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (11.7.1.2)
Requirement already satisfied: nvidia-cusparse-cu12==12.5.4.2 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (12.5.4.2)
Requirement already satisfied: nvidia-cusparselt-cu12==0.7.1 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (0.7.1)
Requirement already satisfied: nvidia-nccl-cu12==2.27.3 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (2.27.3)
Requirement already satisfied: nvidia-nvtx-cu12==12.6.77 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (12.6.77)
Requirement already satisfied: nvidia-nvjitlink-cu12==12.6.85 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (12.6.85)
Requirement already satisfied: nvidia-cufile-cu12==1.11.1.6 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (1.11.1.6)
Requirement already satisfied: triton==3.4.0 in /usr/local/lib/python3.12/dist-packages (from torch>=2.0.0->torchmetrics) (3.4.0)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.12/dist-packages (from sympy>=1.13.3->torch>=2.0.0->torchmetrics) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.12/dist-packages (from jinja2->torch>=2.0.0->torchmetrics) (3.0.2)
Downloading torchmetrics-1.8.1-py3-none-any.whl (982 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 983.0/983.0 kB 21.1 MB/s eta 0:00:00
Downloading lightning_utilities-0.15.2-py3-none-any.whl (29 kB)
Installing collected packages: lightning-utilities, torchmetrics
Successfully installed lightning-utilities-0.15.2 torchmetrics-1.8.1
Collecting transformers==4.41.2
  Downloading transformers-4.41.2-py3-none-any.whl.metadata (43 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 43.8/43.8 kB 2.4 MB/s eta 0:00:00
Requirement already satisfied: filelock in /usr/local/lib/python3.12/dist-packages (from transformers==4.41.2) (3.19.1)
Requirement already satisfied: huggingface-hub<1.0,>=0.23.0 in /usr/local/lib/python3.12/dist-packages (from transformers==4.41.2) (0.34.4)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.12/dist-packages (from transformers==4.41.2) (2.0.2)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.12/dist-packages (from transformers==4.41.2) (25.0)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.12/dist-packages (from transformers==4.41.2) (6.0.2)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.12/dist-packages (from transformers==4.41.2) (2024.11.6)
Requirement already satisfied: requests in /usr/local/lib/python3.12/dist-packages (from transformers==4.41.2) (2.32.4)
Collecting tokenizers<0.20,>=0.19 (from transformers==4.41.2)
  Downloading tokenizers-0.19.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.7 kB)
Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.12/dist-packages (from transformers==4.41.2) (0.6.2)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.12/dist-packages (from transformers==4.41.2) (4.67.1)
Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.12/dist-packages (from huggingface-hub<1.0,>=0.23.0->transformers==4.41.2) (2025.3.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.12/dist-packages (from huggingface-hub<1.0,>=0.23.0->transformers==4.41.2) (4.14.1)
Requirement already satisfied: hf-xet<2.0.0,>=1.1.3 in /usr/local/lib/python3.12/dist-packages (from huggingface-hub<1.0,>=0.23.0->transformers==4.41.2) (1.1.7)
Requirement already satisfied: charset_normalizer<4,>=2 in /usr/local/lib/python3.12/dist-packages (from requests->transformers==4.41.2) (3.4.3)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.12/dist-packages (from requests->transformers==4.41.2) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.12/dist-packages (from requests->transformers==4.41.2) (2.5.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.12/dist-packages (from requests->transformers==4.41.2) (2025.8.3)
Downloading transformers-4.41.2-py3-none-any.whl (9.1 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.1/9.1 MB 83.7 MB/s eta 0:00:00
Downloading tokenizers-0.19.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.6/3.6 MB 115.4 MB/s eta 0:00:00
Installing collected packages: tokenizers, transformers
  Attempting uninstall: tokenizers
    Found existing installation: tokenizers 0.21.4
    Uninstalling tokenizers-0.21.4:
      Successfully uninstalled tokenizers-0.21.4
  Attempting uninstall: transformers
    Found existing installation: transformers 4.55.2
    Uninstalling transformers-4.55.2:
      Successfully uninstalled transformers-4.55.2
Successfully installed tokenizers-0.19.1 transformers-4.41.2
In [3]:
# Librería de analisis de datos, gráficas y Vison Computer
!pip install pycocotools matplotlib opencv-python pandas seaborn tqdm

# Instala utilities para Analisis de Imagenes
!pip install ultralytics
Requirement already satisfied: pycocotools in /usr/local/lib/python3.12/dist-packages (2.0.10)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.12/dist-packages (3.10.0)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.12/dist-packages (4.12.0.88)
Requirement already satisfied: pandas in /usr/local/lib/python3.12/dist-packages (2.2.2)
Requirement already satisfied: seaborn in /usr/local/lib/python3.12/dist-packages (0.13.2)
Requirement already satisfied: tqdm in /usr/local/lib/python3.12/dist-packages (4.67.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.12/dist-packages (from pycocotools) (2.0.2)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (1.3.3)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (4.59.1)
Requirement already satisfied: kiwisolver>=1.3.1 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (1.4.9)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (25.0)
Requirement already satisfied: pillow>=8 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (11.3.0)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (3.2.3)
Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (2.9.0.post0)
Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.12/dist-packages (from pandas) (2025.2)
Requirement already satisfied: tzdata>=2022.7 in /usr/local/lib/python3.12/dist-packages (from pandas) (2025.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.12/dist-packages (from python-dateutil>=2.7->matplotlib) (1.17.0)
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
/tmp/ipython-input-3632181643.py in <cell line: 0>()
      1 # Librería de analisis de datos, gráficas y Vison Computer
----> 2 get_ipython().system('pip install pycocotools matplotlib opencv-python pandas seaborn tqdm')
      3 
      4 # Instala utilities para Analisis de Imagenes
      5 get_ipython().system('pip install ultralytics')

/usr/local/lib/python3.12/dist-packages/google/colab/_shell.py in system(self, *args, **kwargs)
    150 
    151     if pip_warn:
--> 152       _pip.print_previous_import_warning(output)
    153 
    154   def _send_error(self, exc_content):

/usr/local/lib/python3.12/dist-packages/google/colab/_pip.py in print_previous_import_warning(output)
     54 def print_previous_import_warning(output):
     55   """Prints a warning about previously imported packages."""
---> 56   packages = _previously_imported_packages(output)
     57   if packages:
     58     # display a list of packages using the colab-display-data mimetype, which

/usr/local/lib/python3.12/dist-packages/google/colab/_pip.py in _previously_imported_packages(pip_output)
     48 def _previously_imported_packages(pip_output):
     49   """List all previously imported packages from a pip install."""
---> 50   installed = set(_extract_toplevel_packages(pip_output))
     51   return sorted(installed.intersection(set(sys.modules)))
     52 

/usr/local/lib/python3.12/dist-packages/google/colab/_pip.py in _extract_toplevel_packages(pip_output)
     37   """Extract the list of toplevel packages associated with a pip install."""
     38   toplevel = collections.defaultdict(set)
---> 39   for m, ps in importlib.metadata.packages_distributions().items():
     40     for p in ps:
     41       toplevel[p].add(m)

/usr/lib/python3.12/importlib/metadata/__init__.py in packages_distributions()
    945     pkg_to_dist = collections.defaultdict(list)
    946     for dist in distributions():
--> 947         for pkg in _top_level_declared(dist) or _top_level_inferred(dist):
    948             pkg_to_dist[pkg].append(dist.metadata['Name'])
    949     return dict(pkg_to_dist)

/usr/lib/python3.12/importlib/metadata/__init__.py in _top_level_inferred(dist)
    957     opt_names = {
    958         f.parts[0] if len(f.parts) > 1 else inspect.getmodulename(f)
--> 959         for f in always_iterable(dist.files)
    960     }
    961 

/usr/lib/python3.12/importlib/metadata/__init__.py in files(self)
    498             return list(filter(lambda path: path.locate().exists(), package_paths))
    499 
--> 500         return skip_missing_files(
    501             make_files(
    502                 self._read_files_distinfo()

/usr/lib/python3.12/importlib/metadata/_functools.py in wrapper(param, *args, **kwargs)
    100     def wrapper(param, *args, **kwargs):
    101         if param is not None:
--> 102             return func(param, *args, **kwargs)
    103 
    104     return wrapper

/usr/lib/python3.12/importlib/metadata/__init__.py in skip_missing_files(package_paths)
    496         @pass_none
    497         def skip_missing_files(package_paths):
--> 498             return list(filter(lambda path: path.locate().exists(), package_paths))
    499 
    500         return skip_missing_files(

/usr/lib/python3.12/importlib/metadata/__init__.py in <lambda>(path)
    496         @pass_none
    497         def skip_missing_files(package_paths):
--> 498             return list(filter(lambda path: path.locate().exists(), package_paths))
    499 
    500         return skip_missing_files(

/usr/lib/python3.12/pathlib.py in exists(self, follow_symlinks)
    858         """
    859         try:
--> 860             self.stat(follow_symlinks=follow_symlinks)
    861         except OSError as e:
    862             if not _ignore_error(e):

/usr/lib/python3.12/pathlib.py in stat(self, follow_symlinks)
    838         os.stat() does.
    839         """
--> 840         return os.stat(self, follow_symlinks=follow_symlinks)
    841 
    842     def lstat(self):

KeyboardInterrupt: 
In [1]:
# Importa librerías necesarias para:
# Lectura de Carpetas, tiempos y aleatoriedad
import os, sys, json, glob, random, math, time, shutil
from pathlib import Path

# Lectura de imagenes
import json, glob, cv2
import numpy as np
from PIL import Image, ImageDraw

# Analisis de datos
from pathlib import Path
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns

# Procesamiento de Imagenes y Detección
import torch, torchvision
from torchvision.ops import box_convert
from torch.utils.data import Dataset, DataLoader, Subset
from torchvision.transforms import functional as F
from torchvision import transforms as T
from torchmetrics.detection.mean_ap import MeanAveragePrecision
In [2]:
# Versión de Nvidia
!nvidia-smi
Sat Aug 23 00:45:02 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla T4                       Off |   00000000:00:04.0 Off |                    0 |
| N/A   46C    P8             10W /   70W |       0MiB /  15360MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
In [3]:
print("NumPy version:", np.__version__)
print("PyTorch version:", torch.__version__)
print("TorchVision version:", torchvision.__version__)

# Verifica si GPU está disponible
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
print("CUDA disponible:", torch.cuda.is_available())
print("Dispositivo disponible:", device)

# Configura seeds para reproducibilidad
import random
import os

seed = 42
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

print("Setup inicial completado correctamente")
NumPy version: 2.0.2
PyTorch version: 2.8.0+cu126
TorchVision version: 0.23.0+cu126
CUDA disponible: True
Dispositivo disponible: cuda
Setup inicial completado correctamente

1. Descarga y descomprime el dataset (Ultralytics)¶

In [4]:
# Declaración de variable para el path
ROOT = Path('data')
ROOT.mkdir(exist_ok=True)

# URL pública alojada por Ultralytics
zip_url = 'https://github.com/ultralytics/assets/releases/download/v0.0.0/african-wildlife.zip'

# Descarga y descomprime
import os
os.makedirs('/tmp', exist_ok=True)
!wget -q -O /tmp/african_wildlife.zip "$zip_url"
!unzip -o -q /tmp/african_wildlife.zip -d data/african_wildlife
!ls -la data/african_wildlife
total 56
drwxr-xr-x 4 root root  4096 Aug 23 00:45 .
drwxr-xr-x 3 root root  4096 Aug 23 00:45 ..
-rw-rw-rw- 1 root root   939 Jul 11 11:20 african-wildlife.yaml
drwxrwxrwx 5 root root  4096 Jul 11 11:20 images
drwxrwxrwx 5 root root  4096 Jul 11 11:21 labels
-rw-rw-rw- 1 root root 34523 Jul 11 11:08 LICENSE.txt
In [5]:
# Ruta de Google Drive (Para almacenar los datos resultantes y las gráficas que se generan)
drive_folder = "/content/weights"
os.makedirs(drive_folder, exist_ok=True)

2. Análisis Exploratorio de Datos (EDA) del Dataset African Wildlife de Ultralytics¶

Objetivos EDA: conteo de imágenes por split, distribucion por clases, tamaños de imagen, estadísticos de bounding boxes, ejemplos visuales.

In [30]:
ROOT = Path('data/african_wildlife')

records = []
for split in ['train','val','test']:
    imgs = list((ROOT/'images'/split).glob('*'))
    for p in imgs:
        img = cv2.imread(str(p))
        if img is None:
            continue
        h,w = img.shape[:2]
        label = (ROOT/'labels'/split/(p.stem + '.txt'))
        n=0
        if label.exists():
            with open(label,'r') as f:
                n = sum(1 for _ in f)
        records.append({'split':split,'image':str(p),'w':w,'h':h,'n_boxes':n})

df = pd.DataFrame(records)
print(df.groupby('split').size())

# Crea gráfica de barras de Número de Cajas por Imagen
plt.figure(figsize=(6,4))
sns.histplot(df['n_boxes'], bins=10)
plt.title('Número de cajas por imagen')
plt.legend()

# Guarda la gráfica de barras de Número de Cajas en imagen PNG
plt.savefig("/content/weights/Numero_de_cajas_por_imagen.png", dpi=150, bbox_inches='tight')
plt.show()   # Mostrar en Colab
plt.close()

# Crea gráfica de Dispersión del Tamaño de Imagen
plt.figure(figsize=(6,4))
plt.scatter(df['w'], df['h'], alpha=0.4)
plt.title('Tamaños de imagen (px)')
plt.xlabel('width'); plt.ylabel('height')
plt.legend()

# Guarda la gráfica de Dispersión del Tamaño (En PNG)
plt.savefig("/content/weights/Tamano_de_imagen_px.png", dpi=150, bbox_inches='tight')
plt.show()   # Mostrar en Colab
plt.close()
split
test      227
train    1052
val       225
dtype: int64
/tmp/ipython-input-1612833140.py:25: UserWarning: No artists with labels found to put in legend.  Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
  plt.legend()
No description has been provided for this image
/tmp/ipython-input-1612833140.py:37: UserWarning: No artists with labels found to put in legend.  Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
  plt.legend()
No description has been provided for this image

3. Visualizaciones de ejemplo¶

In [7]:
# Librerías requeridas
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision.transforms import functional as F
from PIL import Image
import numpy as np
from pathlib import Path

# Define clases
CLASS_NAMES = ['buffalo','elephant','rhino','zebra']
NUM_CLASSES = len(CLASS_NAMES) + 1  # + background

# Transforma utilitarios
MIN_SIZE = 800
MAX_SIZE = 1333
In [56]:
# Muestra ejemplos con sus bounding boxes y los guarda como imágenes
import random
from PIL import Image, ImageDraw, ImageFont
import matplotlib.pyplot as plt
from pathlib import Path

ROOT = Path('data/african_wildlife')
images_dir = ROOT/'images'/'train'
labels_dir = ROOT/'labels'/'train'

# Carpeta para guardar las imágenes generadas
output_dir = ROOT/'examples_bb'
output_dir.mkdir(parents=True, exist_ok=True)  # crea la carpeta si no existe

# Recolecta pares (imagen, label)
pairs = []
for img_path in sorted(images_dir.glob('*')):
    lbl = labels_dir/(img_path.stem + '.txt')
    pairs.append((img_path, lbl))

# Toma hasta 6 ejemplos aleatorios
sample = random.sample(pairs, min(6, len(pairs)))

fig, axs = plt.subplots(2, 3, figsize=(15,10))
axs = axs.flatten()

# Nombre de clases (debes definir según tu dataset)
CLASS_NAMES = ['buffalo', 'elephant', 'rhino', 'zebra']  # ejemplo

for idx, (ax, (img_path, lbl_path)) in enumerate(zip(axs, sample)):
    img = Image.open(img_path).convert('RGB')
    draw = ImageDraw.Draw(img)
    w, h = img.size

    # Lectura de etiquetas YOLO
    if lbl_path.exists():
        with open(lbl_path,'r') as f:
            for line in f:
                parts = line.strip().split()
                if len(parts) < 5: continue
                cls = int(float(parts[0]))
                cx,cy,bw,bh = map(float, parts[1:5])
                x1 = (cx - bw/2) * w
                y1 = (cy - bh/2) * h
                x2 = (cx + bw/2) * w
                y2 = (cy + bh/2) * h
                draw.rectangle([x1,y1,x2,y2], outline='red', width=3)
                txt = CLASS_NAMES[cls] if cls < len(CLASS_NAMES) else str(cls)
                draw.text((x1, max(0,y1-10)), txt, fill='yellow')

    ax.imshow(img)
    ax.set_title(img_path.name)
    ax.axis('off')

    # Guarda cada imagen individual con bounding boxes
    output_img_path = output_dir / f"{img_path.stem}_bb.png"
    img.save(output_img_path)  # Se guarda la imagen
    # print(f"Guardada imagen: {output_img_path}")

# Si hay menos de 6 imágenes, oculta ejes sobrantes
for i in range(len(sample), 6):
    axs[i].axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image
In [66]:
# Imagenes Iniciales
# Compila las 6 imagenes en una sola imagen (Tipo Collage, esto para el poster)
output_dir = Path("/content/data/african_wildlife/examples_bb")
images = sorted(output_dir.glob("*_bb.png"))[:6]  # solo las primeras 6
thumbs = [Image.open(im).resize((400, 300)) for im in images]

# Crear collage 2x3
collage = Image.new("RGB", (3*400, 2*300), (255,255,255))
for idx, im in enumerate(thumbs):
    x = (idx % 3) * 400
    y = (idx // 3) * 300
    collage.paste(im, (x,y))

collage_path = output_dir / "collage_bb.png"
collage.save(collage_path)

4. Prepara Dataset PyTorch (CUSTOM, estilo Learn OpenCV)¶

En el tutorial de Gosh (LearnOpenCV) se usa un Dataset personalizado que devuelve image, target donde target contiene al menos: boxes (FloatTensor [N,4] en formato xyxy), labels (Int64), image_id, area, iscrowd. Además, las etiquetas empiezan en 1 porque 0 es background para los modelos de torchvision (incluido RetinaNet).

A continuación se adapta la estructura del tutorial de Gosh al formato YOLO TXT del dataset African Wildlife (cx, cy, w, h normalizados).

In [9]:
from typing import Optional

# Define Función de tamaño de imagen
def resize_keep_ratio(img, target=None, min_size=MIN_SIZE, max_size=MAX_SIZE):
    # Obtiene el ancho y alto originales de la imagen
    w, h = img.size

    # Calcula el factor de escala para mantener la proporción de la imagen
    # Se asegura que la dimensión más pequeña alcance al menos min_size y que la dimensión más grande no exceda max_size
    scale = min(min_size / min(h, w), max_size / max(h, w))

    # Calcula las nuevas dimensiones escaladas
    new_w, new_h = int(round(w * scale)), int(round(h * scale))

    # Redimensiona la imagen usando las nuevas dimensiones
    img = F.resize(img, [new_h, new_w])

    # Si se proporciona un target (para detección de objetos)
    if target is not None and 'boxes' in target and target['boxes'].numel() > 0:

        # Copia las cajas de bounding box para no modificar el original
        boxes = target['boxes'].clone()

        # Escala las coordenadas x e y de las cajas según el cambio de tamaño
        boxes[:, [0,2]] = boxes[:, [0,2]] * (new_w / w)
        boxes[:, [1,3]] = boxes[:, [1,3]] * (new_h / h)

        # Actualiza las cajas y los tamaños originales y nuevos en el target
        target['boxes'] = boxes
        target['orig_size'] = torch.as_tensor([h, w])
        target['size'] = torch.as_tensor([new_h, new_w])

    # Devuelve la imagen redimensionada y el target actualizado (si hay)
    return img, target


class RandomHorizontalFlip:
    def __init__(self, p=0.5):
        self.p = p  # Probabilidad de aplicar el flip horizontal

    def __call__(self, img, target=None):
        # Aplica el flip con probabilidad p
        if np.random.rand() < self.p:
            w, _ = img.size
            img = F.hflip(img)  # Realiza el flip horizontal

            # Si hay cajas de objetos, también se deben reflejar
            if target is not None and 'boxes' in target and target['boxes'].numel() > 0:
                boxes = target['boxes'].clone()
                x1 = boxes[:,0].clone(); x2 = boxes[:,2].clone()
                boxes[:,0] = w - x2  # Intercambia las coordenadas horizontales
                boxes[:,2] = w - x1
                target['boxes'] = boxes

        return img, target

class ToTensor:
    def __call__(self, img, target=None):
        # Convierte la imagen de PIL a tensor de PyTorch
        # Devuelve también el target sin modificar
        return F.to_tensor(img), target

class Compose:
    def __init__(self, transforms):
        self.transforms = transforms  # Lista de transformaciones a aplicar

    def __call__(self, img, target=None):
        # Aplica cada transformación en orden
        for t in self.transforms:
            img, target = t(img, target)
        return img, target

# Transformaciones para entrenamiento
train_tfms = Compose([
    lambda i,t: resize_keep_ratio(i,t, MIN_SIZE, MAX_SIZE),  # Redimensiona manteniendo proporción
    RandomHorizontalFlip(0.5),  # Flip horizontal aleatorio
    ToTensor()  # Convierte a tensor
])

# Transformaciones para validación (solo redimension y tensor)
val_tfms = Compose([
    lambda i,t: resize_keep_ratio(i,t, MIN_SIZE, MAX_SIZE),
    ToTensor()
])
In [10]:
class AfricanWildlifeYOLODataset(Dataset):
    def __init__(self, images_dir, labels_dir, transforms=None):
        # Directorio de imágenes y etiquetas
        self.images_dir = Path(images_dir)
        self.labels_dir = Path(labels_dir)

        # Extensiones de imagen válidas
        exts = ['*.jpg','*.jpeg','*.png']
        self.img_files = []

        # Busca y ordena todos los archivos de imagen en los directorios
        for e in exts:
            self.img_files.extend(sorted(list(self.images_dir.glob(e))))

        # Transformaciones a aplicar a cada imagen y target (pueden ser None)
        self.transforms = transforms

    def __len__(self):
        # Retorna la cantidad de imágenes en el dataset
        return len(self.img_files)

    def _read_yolo(self, label_path, img_w, img_h):
        """
        Lee un archivo de etiquetas en formato YOLO y convierte las coordenadas
        relativas (cx, cy, w, h) a coordenadas absolutas de bounding box (x1, y1, x2, y2)
        """
        boxes = []   # Lista para las cajas de objetos
        labels = []  # Lista para las clases correspondientes

        # Si no existe el archivo de etiqueta, retorna listas vacías
        if not label_path.exists():
            return boxes, labels

        # Abre el archivo de etiqueta y lee línea por línea
        with open(label_path, 'r') as f:
            for ln in f:
                parts = ln.strip().split()
                # Salta línea que no tengan al menos 5 valores
                if len(parts) < 5:
                    continue

                # Lee la clase y coordenadas relativas
                cls = int(float(parts[0]))
                cx, cy, w_rel, h_rel = map(float, parts[1:5])

                # Convierte a coordenadas absolutas (x1, y1, x2, y2)
                x1 = (cx - w_rel/2) * img_w
                y1 = (cy - h_rel/2) * img_h
                x2 = (cx + w_rel/2) * img_w
                y2 = (cy + h_rel/2) * img_h

                boxes.append([x1, y1, x2, y2])
                labels.append(cls + 1)  # +1 para evitar que 0 sea background

        return boxes, labels

    def __getitem__(self, idx):
        # Obtiene la ruta de la imagen según el índice
        img_path = self.img_files[idx]
        label_path = self.labels_dir / (img_path.stem + '.txt')  # Ruta de la etiqueta correspondiente

        # Abre la imagen y asegura que tenga 3 canales (RGB)
        img = Image.open(img_path).convert('RGB')
        w, h = img.size

        # Lee las cajas y etiquetas desde el archivo YOLO
        boxes, labels = self._read_yolo(label_path, w, h)

        # Convierte listas a tensores de PyTorch
        boxes = torch.as_tensor(boxes, dtype=torch.float32) if len(boxes) > 0 else torch.zeros((0, 4), dtype=torch.float32)
        labels = torch.as_tensor(labels, dtype=torch.int64) if len(labels) > 0 else torch.zeros((0,), dtype=torch.int64)

        # Calcula el área de cada caja
        area = (boxes[:,2] - boxes[:,0]).clamp(min=0) * (boxes[:,3] - boxes[:,1]).clamp(min=0)

        # iscrowd = 0 para todos los objetos (dataset no segmentado)
        iscrowd = torch.zeros((labels.shape[0],), dtype=torch.int64)

        # Crea el diccionario target esperado por modelos de detección de objetos
        target = {
            'boxes': boxes,
            'labels': labels,
            'image_id': torch.tensor([idx]),
            'area': area,
            'iscrowd': iscrowd
        }

        # Aplica transformaciones si existen
        if self.transforms is not None:
            img, target = self.transforms(img, target)

        return img, target
In [11]:
# Función de collate para DataLoader
# Permite manejar batches donde cada elemento es un diccionario (Como target en detección de objetos)
# Básicamente agrupa los elementos del batch en tuplas
def collate_fn(batch):
    return tuple(zip(*batch))

# 1. Instancia datasets
ROOT = Path('data/african_wildlife')  # Carpeta raíz donde están las imágenes y etiquetas

# Dataset de entrenamiento
train_ds = AfricanWildlifeYOLODataset(
    ROOT/'images'/'train',   # Directorio de imágenes de entrenamiento
    ROOT/'labels'/'train',   # Directorio de etiquetas de entrenamiento
    transforms=train_tfms    # Transformaciones a aplicar (resize, flip, tensor)
)

# Dataset de validación
val_ds = AfricanWildlifeYOLODataset(
    ROOT/'images'/'val',     # Directorio de imágenes de validación
    ROOT/'labels'/'val',     # Directorio de etiquetas de validación
    transforms=val_tfms      # Transformaciones a aplicar (solo resize y tensor)
)


# 2. Crea DataLoaders
from torch.utils.data import DataLoader

# DataLoader de entrenamiento
train_loader = DataLoader(
    train_ds,
    batch_size=4,          # Número de imágenes por batch
    shuffle=True,          # Mezcla aleatoria de los datos
    collate_fn=collate_fn, # Función de collate personalizada para manejar diccionarios
    num_workers=2          # Número de subprocesos para cargar los datos en paralelo
)

# DataLoader de validación
val_loader = DataLoader(
    val_ds,
    batch_size=4,          # Número de imágenes por batch
    shuffle=False,         # No mezclar datos de validación
    collate_fn=collate_fn, # Misma función de collate
    num_workers=2          # Número de subprocesos
)

# Muestra la cantidad de imágenes en cada dataset
print('Tamaños: train', len(train_ds), 'val', len(val_ds))
Tamaños: train 1049 val 225

5. Modelo RetinaNet (torchvision) — Ajusta la cabeza de clasificación¶

No se carga pesos completos del detector preentrenado (porque COCO tiene 91 clases). En su lugar, se cargan pesos de la backbone (ResNet50) y se determina num_classes=NUM_CLASSES.

In [12]:
# Define el modelo RetinaNet

import torchvision
from torchvision.models.detection import retinanet_resnet50_fpn
from torchvision.models.resnet import ResNet50_Weights

# Número de clases del dataset
NUM_CLASSES = 5

# Carga los pesos preentrenados solo para el backbone ResNet-50
backbone_weights = ResNet50_Weights.DEFAULT

# Define el modelo RetinaNet
model = retinanet_resnet50_fpn(
    weights=None,                      # No se carga pesos COCO (91 clases)
    weights_backbone=backbone_weights, # Sí se carga pesos del backbone
    num_classes=NUM_CLASSES            # Ajusta a tu dataset
)

# Envia el modelo al dispositivo
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)

print("Modelo RetinaNet definido con backbone preentrenado y", NUM_CLASSES, "clases.")
Downloading: "https://download.pytorch.org/models/resnet50-11ad3fa6.pth" to /root/.cache/torch/hub/checkpoints/resnet50-11ad3fa6.pth
100%|██████████| 97.8M/97.8M [00:00<00:00, 204MB/s]
Modelo RetinaNet definido con backbone preentrenado y 5 clases.

6. Definir dataset PyTorch custom¶

In [13]:
import torch
from torch.utils.data import Dataset
from torchvision import transforms as T
from PIL import Image
from pathlib import Path

# 1. Clase Dataset para Wildlife
class AfricanWildlifeDataset(Dataset):
    def __init__(self, images_dir, labels_dir, class_names, transforms=None):
        """
        images_dir: ruta a las imágenes
        labels_dir: ruta a los archivos de etiquetas en formato YOLO
        class_names: lista de nombres de clases
        transforms: transformaciones a aplicar a las imágenes
        """
        self.images_dir = Path(images_dir)        # Convierte a Path para manejo de rutas
        self.labels_dir = Path(labels_dir)
        self.class_names = class_names
        self.transforms = transforms
        self.image_paths = sorted(list(self.images_dir.glob("*")))  # Lista de todas las imágenes

    def __len__(self):
        # Retorna la cantidad de imágenes en el dataset
        return len(self.image_paths)

    def __getitem__(self, idx):
        # Obtiene la ruta de la imagen
        img_path = self.image_paths[idx]
        # Ruta del archivo de etiquetas correspondiente
        lbl_path = self.labels_dir / (img_path.stem + ".txt")
        # Abre la imagen y convierte a RGB
        img = Image.open(img_path).convert("RGB")

        boxes = []   # Lista de bounding boxes
        labels = []  # Lista de clases de cada caja

        # Lee etiquetas si el archivo existe
        if lbl_path.exists():
            with open(lbl_path, "r") as f:
                for line in f:
                    parts = line.strip().split()
                    # Salta líneas incompletas
                    if len(parts) < 5:
                        continue
                    cls = int(float(parts[0]))              # Clase del objeto
                    cx, cy, bw, bh = map(float, parts[1:5])  # Coordenadas relativas YOLO

                    w, h = img.size                       # Dimensiones de la imagen
                    # Convierte de coordenadas relativas a absolutas
                    x1 = (cx - bw/2) * w
                    y1 = (cy - bh/2) * h
                    x2 = (cx + bw/2) * w
                    y2 = (cy + bh/2) * h

                    boxes.append([x1, y1, x2, y2])
                    labels.append(cls + 1)  # +1 para evitar background=0

        # Convierte listas a tensores
        boxes = torch.tensor(boxes, dtype=torch.float32)
        labels = torch.tensor(labels, dtype=torch.int64)

        # Crea diccionario target esperado por modelos de detección
        target = {
            "boxes": boxes,
            "labels": labels,
            "image_id": torch.tensor([idx])
        }

        # Aplica transformaciones si existen
        if self.transforms:
            img = self.transforms(img)

        return img, target


# 2. Transformaciones

# Para entrenamiento: convierte a tensor y aplica flip horizontal aleatorio
train_transforms = T.Compose([T.ToTensor(), T.RandomHorizontalFlip(0.5)])
# Para validación: solo convierte a tensor
val_transforms = T.Compose([T.ToTensor()])


# 3. Crea datasets
# -------------------------
train_dataset = AfricanWildlifeDataset(
    images_dir="data/african_wildlife/images/train",
    labels_dir="data/african_wildlife/labels/train",
    class_names=CLASS_NAMES,
    transforms=train_transforms
)

val_dataset = AfricanWildlifeDataset(
    images_dir="data/african_wildlife/images/val",
    labels_dir="data/african_wildlife/labels/val",
    class_names=CLASS_NAMES,
    transforms=val_transforms
)

print("train_dataset y val_dataset creados correctamente")
train_dataset y val_dataset creados correctamente

7. Entrenamiento del modelo RetinaNet¶

In [14]:
import torch
import random
import math
from torch.utils.data import DataLoader, Subset
import torchvision
import torchvision.transforms.functional as F
import time

# 1. Definición de Hiperparámetros
num_epochs = 5          # Número de epochs a ejecutar durante el entrenamiento
batch_size = 2          # Tamaño del batch (ajustar según la memoria GPU disponible)
num_workers = 2         # Número de hilos para cargar datos en paralelo (Colab recomienda 2)
pin_memory = True       # Mejora la transferencia a GPU
learning_rate = 1e-4    # Tasa de aprendizaje para el optimizador

# 2. Ejecuta una Validación rápida
MAX_VAL_SAMPLES = 200   # 200 imágenes para evaluación rápida

# Crea un subset aleatorio del dataset de validación
if MAX_VAL_SAMPLES is not None and len(val_dataset) > MAX_VAL_SAMPLES:
    idxs = list(range(len(val_dataset)))  # Lista de todos los índices
    random.seed(42)                       # Semilla para reproducibilidad
    random.shuffle(idxs)                  # Mezcla los índices aleatoriamente
    val_subset_idx = idxs[:MAX_VAL_SAMPLES]       # Toma los primeros MAX_VAL_SAMPLES
    val_eval_dataset = Subset(val_dataset, val_subset_idx)  # Subset del dataset de validación
else:
    val_eval_dataset = val_dataset  # Usa el dataset completo si no se limita

# Mensaje informativo sobre el tamaño del dataset de validación usado
if MAX_VAL_SAMPLES is not None and len(val_dataset) > MAX_VAL_SAMPLES:
    print(f"Usando subset de validación: {len(val_eval_dataset)} / {len(val_dataset)}")
else:
    print(f"Usando validación completa: {len(val_eval_dataset)} samples")

# 3. DataLoaders
# DataLoader de entrenamiento
train_loader = DataLoader(
    train_dataset,        # Dataset de entrenamiento
    batch_size=batch_size,
    shuffle=True,         # Mezcla los datos para cada epoch
    num_workers=num_workers,
    pin_memory=pin_memory,
    collate_fn=collate_fn # Función de collate personalizada para targets tipo dict
)

# DataLoader de validación
val_loader = DataLoader(
    val_eval_dataset,     # Dataset de validación (completo o subset)
    batch_size=batch_size,
    shuffle=False,        # No mezcla datos de validación
    num_workers=num_workers,
    pin_memory=pin_memory,
    collate_fn=collate_fn
)
Usando subset de validación: 200 / 225
In [15]:
# Optimizer y Scheduler
# Solo entrena los parámetros que requieren gradiente
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(params, lr=learning_rate)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)
In [16]:
# Loop de entrenamiento

## ---------- Preparar dispositivo y listas de registro ----------
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
print("Device:", device)
model.to(device)


# ---------- AMP (Mixed Precision) ----------
use_amp = torch.cuda.is_available()  # solo usar AMP si hay GPU
if use_amp:
    from torch.cuda.amp import autocast, GradScaler
    scaler = GradScaler()
    print("AMP activado con GradScaler.")
else:
    # autocast y scaler fallback (no-op)
    autocast = torch.cpu.amp.autocast if hasattr(torch.cpu, 'amp') else None
    scaler = None
    print("GPU no disponible: AMP desactivado.")

# ---------- Optimizer y Scheduler (si no están definidos) ----------
try:
    optimizer  # si ya existe definido en el notebook, lo usamos
except NameError:
    optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)
    print("Optimizer no estaba definido: creado AdamW con lr =", learning_rate)

try:
    lr_scheduler
except NameError:
    lr_scheduler = None

# ---------- Registros ----------
train_losses = []
val_losses = []

# ---------- Directorio para checkpoints ----------
os.makedirs('weights', exist_ok=True)

# ---------- Loop principal con ETA por batches ----------
for epoch in range(num_epochs):
    epoch_start = time.time()

    # ----- Entrenamiento -----
    model.train()
    running_loss = 0.0
    batch_times = []
    for i, (images, targets) in enumerate(train_loader, start=1):
        t0 = time.time()

        # mover a dispositivo
        images = [img.to(device, non_blocking=True) for img in images]
        targets = [{k: v.to(device, non_blocking=True) for k, v in t.items()} for t in targets]

        # forward + backward con AMP si está disponible
        if use_amp:
            with autocast():
                loss_dict = model(images, targets)
                loss = sum(loss for loss in loss_dict.values())
            optimizer.zero_grad()
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()
        else:
            loss_dict = model(images, targets)
            loss = sum(loss for loss in loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        running_loss += loss.item()
        batch_time = time.time() - t0
        batch_times.append(batch_time)

        # Reporte periódico con ETA
        report_every = 20
        if i % report_every == 0 or i == len(train_loader):
            recent_avg = sum(batch_times[-report_every:]) / min(len(batch_times), report_every)
            remaining_batches = max(0, len(train_loader) - i)
            eta_seconds = recent_avg * remaining_batches
            eta_minutes = eta_seconds / 60
            print(f"[Epoch {epoch+1}/{num_epochs}] batch {i}/{len(train_loader)} - "
                  f"loss={loss.item():.4f} - avg_batch={recent_avg:.2f}s - ETA ≈ {eta_minutes:.2f} min")

    avg_train_loss = running_loss / max(1, len(train_loader))
    train_losses.append(avg_train_loss)
Device: cuda
AMP activado con GradScaler.
/tmp/ipython-input-2671491870.py:13: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
  scaler = GradScaler()
/tmp/ipython-input-2671491870.py:57: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with autocast():
[Epoch 1/5] batch 20/526 - loss=3.3092 - avg_batch=0.55s - ETA ≈ 4.60 min
[Epoch 1/5] batch 40/526 - loss=1.3433 - avg_batch=0.31s - ETA ≈ 2.47 min
[Epoch 1/5] batch 60/526 - loss=1.8902 - avg_batch=0.30s - ETA ≈ 2.31 min
[Epoch 1/5] batch 80/526 - loss=1.5759 - avg_batch=0.30s - ETA ≈ 2.25 min
[Epoch 1/5] batch 100/526 - loss=1.2708 - avg_batch=0.31s - ETA ≈ 2.22 min
[Epoch 1/5] batch 120/526 - loss=1.3662 - avg_batch=0.28s - ETA ≈ 1.88 min
[Epoch 1/5] batch 140/526 - loss=1.1901 - avg_batch=0.30s - ETA ≈ 1.96 min
[Epoch 1/5] batch 160/526 - loss=1.1998 - avg_batch=0.31s - ETA ≈ 1.91 min
[Epoch 1/5] batch 180/526 - loss=1.1877 - avg_batch=0.30s - ETA ≈ 1.74 min
[Epoch 1/5] batch 200/526 - loss=1.1398 - avg_batch=0.31s - ETA ≈ 1.67 min
[Epoch 1/5] batch 220/526 - loss=1.0282 - avg_batch=0.28s - ETA ≈ 1.41 min
[Epoch 1/5] batch 240/526 - loss=1.2439 - avg_batch=0.29s - ETA ≈ 1.39 min
[Epoch 1/5] batch 260/526 - loss=1.0933 - avg_batch=0.29s - ETA ≈ 1.28 min
[Epoch 1/5] batch 280/526 - loss=1.0428 - avg_batch=0.29s - ETA ≈ 1.18 min
[Epoch 1/5] batch 300/526 - loss=0.9250 - avg_batch=0.29s - ETA ≈ 1.08 min
[Epoch 1/5] batch 320/526 - loss=1.1366 - avg_batch=0.28s - ETA ≈ 0.97 min
[Epoch 1/5] batch 340/526 - loss=1.0030 - avg_batch=0.29s - ETA ≈ 0.89 min
[Epoch 1/5] batch 360/526 - loss=1.1491 - avg_batch=0.27s - ETA ≈ 0.76 min
[Epoch 1/5] batch 380/526 - loss=1.0703 - avg_batch=0.29s - ETA ≈ 0.71 min
[Epoch 1/5] batch 400/526 - loss=0.8465 - avg_batch=0.28s - ETA ≈ 0.60 min
[Epoch 1/5] batch 420/526 - loss=1.0191 - avg_batch=0.29s - ETA ≈ 0.50 min
[Epoch 1/5] batch 440/526 - loss=0.7515 - avg_batch=0.28s - ETA ≈ 0.40 min
[Epoch 1/5] batch 460/526 - loss=0.8055 - avg_batch=0.30s - ETA ≈ 0.34 min
[Epoch 1/5] batch 480/526 - loss=0.9120 - avg_batch=0.29s - ETA ≈ 0.22 min
[Epoch 1/5] batch 500/526 - loss=0.8329 - avg_batch=0.29s - ETA ≈ 0.13 min
[Epoch 1/5] batch 520/526 - loss=0.8361 - avg_batch=0.29s - ETA ≈ 0.03 min
[Epoch 1/5] batch 526/526 - loss=1.0614 - avg_batch=0.28s - ETA ≈ 0.00 min
[Epoch 2/5] batch 20/526 - loss=1.1285 - avg_batch=0.30s - ETA ≈ 2.51 min
[Epoch 2/5] batch 40/526 - loss=1.1038 - avg_batch=0.29s - ETA ≈ 2.36 min
[Epoch 2/5] batch 60/526 - loss=1.1134 - avg_batch=0.30s - ETA ≈ 2.31 min
[Epoch 2/5] batch 80/526 - loss=1.1372 - avg_batch=0.29s - ETA ≈ 2.13 min
[Epoch 2/5] batch 100/526 - loss=0.6348 - avg_batch=0.28s - ETA ≈ 2.00 min
[Epoch 2/5] batch 120/526 - loss=0.9375 - avg_batch=0.28s - ETA ≈ 1.90 min
[Epoch 2/5] batch 140/526 - loss=1.1262 - avg_batch=0.30s - ETA ≈ 1.90 min
[Epoch 2/5] batch 160/526 - loss=1.0630 - avg_batch=0.28s - ETA ≈ 1.68 min
[Epoch 2/5] batch 180/526 - loss=0.8231 - avg_batch=0.30s - ETA ≈ 1.72 min
[Epoch 2/5] batch 200/526 - loss=0.6608 - avg_batch=0.29s - ETA ≈ 1.55 min
[Epoch 2/5] batch 220/526 - loss=0.8073 - avg_batch=0.28s - ETA ≈ 1.42 min
[Epoch 2/5] batch 240/526 - loss=0.7591 - avg_batch=0.29s - ETA ≈ 1.37 min
[Epoch 2/5] batch 260/526 - loss=0.9818 - avg_batch=0.27s - ETA ≈ 1.21 min
[Epoch 2/5] batch 280/526 - loss=0.7331 - avg_batch=0.29s - ETA ≈ 1.17 min
[Epoch 2/5] batch 300/526 - loss=0.6160 - avg_batch=0.29s - ETA ≈ 1.09 min
[Epoch 2/5] batch 320/526 - loss=0.8525 - avg_batch=0.28s - ETA ≈ 0.98 min
[Epoch 2/5] batch 340/526 - loss=0.7870 - avg_batch=0.28s - ETA ≈ 0.86 min
[Epoch 2/5] batch 360/526 - loss=0.8750 - avg_batch=0.29s - ETA ≈ 0.81 min
[Epoch 2/5] batch 380/526 - loss=0.7940 - avg_batch=0.30s - ETA ≈ 0.73 min
[Epoch 2/5] batch 400/526 - loss=0.9318 - avg_batch=0.31s - ETA ≈ 0.65 min
[Epoch 2/5] batch 420/526 - loss=1.0998 - avg_batch=0.27s - ETA ≈ 0.48 min
[Epoch 2/5] batch 440/526 - loss=1.0682 - avg_batch=0.28s - ETA ≈ 0.40 min
[Epoch 2/5] batch 460/526 - loss=0.9542 - avg_batch=0.29s - ETA ≈ 0.32 min
[Epoch 2/5] batch 480/526 - loss=1.7085 - avg_batch=0.28s - ETA ≈ 0.21 min
[Epoch 2/5] batch 500/526 - loss=1.0198 - avg_batch=0.28s - ETA ≈ 0.12 min
[Epoch 2/5] batch 520/526 - loss=0.7645 - avg_batch=0.30s - ETA ≈ 0.03 min
[Epoch 2/5] batch 526/526 - loss=0.7525 - avg_batch=0.28s - ETA ≈ 0.00 min
[Epoch 3/5] batch 20/526 - loss=0.8649 - avg_batch=0.29s - ETA ≈ 2.45 min
[Epoch 3/5] batch 40/526 - loss=0.6611 - avg_batch=0.27s - ETA ≈ 2.21 min
[Epoch 3/5] batch 60/526 - loss=1.0608 - avg_batch=0.29s - ETA ≈ 2.24 min
[Epoch 3/5] batch 80/526 - loss=0.5630 - avg_batch=0.29s - ETA ≈ 2.13 min
[Epoch 3/5] batch 100/526 - loss=0.7393 - avg_batch=0.28s - ETA ≈ 1.98 min
[Epoch 3/5] batch 120/526 - loss=0.7008 - avg_batch=0.31s - ETA ≈ 2.10 min
[Epoch 3/5] batch 140/526 - loss=1.0536 - avg_batch=0.29s - ETA ≈ 1.85 min
[Epoch 3/5] batch 160/526 - loss=0.6589 - avg_batch=0.28s - ETA ≈ 1.73 min
[Epoch 3/5] batch 180/526 - loss=0.5652 - avg_batch=0.28s - ETA ≈ 1.63 min
[Epoch 3/5] batch 200/526 - loss=0.8377 - avg_batch=0.31s - ETA ≈ 1.68 min
[Epoch 3/5] batch 220/526 - loss=0.7877 - avg_batch=0.30s - ETA ≈ 1.52 min
[Epoch 3/5] batch 240/526 - loss=0.6794 - avg_batch=0.30s - ETA ≈ 1.45 min
[Epoch 3/5] batch 260/526 - loss=0.6537 - avg_batch=0.30s - ETA ≈ 1.31 min
[Epoch 3/5] batch 280/526 - loss=1.3053 - avg_batch=0.29s - ETA ≈ 1.19 min
[Epoch 3/5] batch 300/526 - loss=0.6806 - avg_batch=0.28s - ETA ≈ 1.07 min
[Epoch 3/5] batch 320/526 - loss=0.9939 - avg_batch=0.29s - ETA ≈ 1.00 min
[Epoch 3/5] batch 340/526 - loss=0.6803 - avg_batch=0.29s - ETA ≈ 0.91 min
[Epoch 3/5] batch 360/526 - loss=0.9331 - avg_batch=0.28s - ETA ≈ 0.78 min
[Epoch 3/5] batch 380/526 - loss=0.6944 - avg_batch=0.30s - ETA ≈ 0.72 min
[Epoch 3/5] batch 400/526 - loss=1.3192 - avg_batch=0.28s - ETA ≈ 0.58 min
[Epoch 3/5] batch 420/526 - loss=0.6059 - avg_batch=0.29s - ETA ≈ 0.52 min
[Epoch 3/5] batch 440/526 - loss=1.2440 - avg_batch=0.27s - ETA ≈ 0.39 min
[Epoch 3/5] batch 460/526 - loss=0.6102 - avg_batch=0.29s - ETA ≈ 0.32 min
[Epoch 3/5] batch 480/526 - loss=0.8869 - avg_batch=0.30s - ETA ≈ 0.23 min
[Epoch 3/5] batch 500/526 - loss=0.7791 - avg_batch=0.29s - ETA ≈ 0.12 min
[Epoch 3/5] batch 520/526 - loss=0.9598 - avg_batch=0.29s - ETA ≈ 0.03 min
[Epoch 3/5] batch 526/526 - loss=0.7619 - avg_batch=0.28s - ETA ≈ 0.00 min
[Epoch 4/5] batch 20/526 - loss=0.7745 - avg_batch=0.27s - ETA ≈ 2.29 min
[Epoch 4/5] batch 40/526 - loss=0.8295 - avg_batch=0.29s - ETA ≈ 2.36 min
[Epoch 4/5] batch 60/526 - loss=0.7457 - avg_batch=0.30s - ETA ≈ 2.32 min
[Epoch 4/5] batch 80/526 - loss=1.3335 - avg_batch=0.28s - ETA ≈ 2.08 min
[Epoch 4/5] batch 100/526 - loss=0.8025 - avg_batch=0.29s - ETA ≈ 2.07 min
[Epoch 4/5] batch 120/526 - loss=0.7453 - avg_batch=0.27s - ETA ≈ 1.84 min
[Epoch 4/5] batch 140/526 - loss=0.7747 - avg_batch=0.29s - ETA ≈ 1.85 min
[Epoch 4/5] batch 160/526 - loss=0.5731 - avg_batch=0.29s - ETA ≈ 1.75 min
[Epoch 4/5] batch 180/526 - loss=0.6784 - avg_batch=0.28s - ETA ≈ 1.60 min
[Epoch 4/5] batch 200/526 - loss=0.9514 - avg_batch=0.31s - ETA ≈ 1.68 min
[Epoch 4/5] batch 220/526 - loss=0.9684 - avg_batch=0.29s - ETA ≈ 1.46 min
[Epoch 4/5] batch 240/526 - loss=0.7160 - avg_batch=0.28s - ETA ≈ 1.36 min
[Epoch 4/5] batch 260/526 - loss=1.2955 - avg_batch=0.28s - ETA ≈ 1.24 min
[Epoch 4/5] batch 280/526 - loss=0.7495 - avg_batch=0.30s - ETA ≈ 1.22 min
[Epoch 4/5] batch 300/526 - loss=0.6553 - avg_batch=0.28s - ETA ≈ 1.06 min
[Epoch 4/5] batch 320/526 - loss=0.6976 - avg_batch=0.29s - ETA ≈ 1.00 min
[Epoch 4/5] batch 340/526 - loss=0.9575 - avg_batch=0.28s - ETA ≈ 0.88 min
[Epoch 4/5] batch 360/526 - loss=0.8868 - avg_batch=0.29s - ETA ≈ 0.79 min
[Epoch 4/5] batch 380/526 - loss=0.6156 - avg_batch=0.28s - ETA ≈ 0.68 min
[Epoch 4/5] batch 400/526 - loss=0.6077 - avg_batch=0.28s - ETA ≈ 0.59 min
[Epoch 4/5] batch 420/526 - loss=0.5465 - avg_batch=0.29s - ETA ≈ 0.52 min
[Epoch 4/5] batch 440/526 - loss=0.4963 - avg_batch=0.28s - ETA ≈ 0.41 min
[Epoch 4/5] batch 460/526 - loss=0.5139 - avg_batch=0.28s - ETA ≈ 0.31 min
[Epoch 4/5] batch 480/526 - loss=1.1795 - avg_batch=0.30s - ETA ≈ 0.23 min
[Epoch 4/5] batch 500/526 - loss=1.1864 - avg_batch=0.29s - ETA ≈ 0.12 min
[Epoch 4/5] batch 520/526 - loss=0.6012 - avg_batch=0.31s - ETA ≈ 0.03 min
[Epoch 4/5] batch 526/526 - loss=0.8685 - avg_batch=0.31s - ETA ≈ 0.00 min
[Epoch 5/5] batch 20/526 - loss=0.5829 - avg_batch=0.29s - ETA ≈ 2.43 min
[Epoch 5/5] batch 40/526 - loss=0.7545 - avg_batch=0.30s - ETA ≈ 2.43 min
[Epoch 5/5] batch 60/526 - loss=0.9521 - avg_batch=0.29s - ETA ≈ 2.23 min
[Epoch 5/5] batch 80/526 - loss=1.4329 - avg_batch=0.28s - ETA ≈ 2.09 min
[Epoch 5/5] batch 100/526 - loss=0.8410 - avg_batch=0.29s - ETA ≈ 2.08 min
[Epoch 5/5] batch 120/526 - loss=1.2743 - avg_batch=0.31s - ETA ≈ 2.08 min
[Epoch 5/5] batch 140/526 - loss=0.8633 - avg_batch=0.29s - ETA ≈ 1.87 min
[Epoch 5/5] batch 160/526 - loss=0.7152 - avg_batch=0.29s - ETA ≈ 1.79 min
[Epoch 5/5] batch 180/526 - loss=0.5225 - avg_batch=0.29s - ETA ≈ 1.66 min
[Epoch 5/5] batch 200/526 - loss=0.8225 - avg_batch=0.29s - ETA ≈ 1.58 min
[Epoch 5/5] batch 220/526 - loss=0.5793 - avg_batch=0.27s - ETA ≈ 1.39 min
[Epoch 5/5] batch 240/526 - loss=0.6933 - avg_batch=0.30s - ETA ≈ 1.42 min
[Epoch 5/5] batch 260/526 - loss=0.9097 - avg_batch=0.28s - ETA ≈ 1.24 min
[Epoch 5/5] batch 280/526 - loss=1.2573 - avg_batch=0.28s - ETA ≈ 1.15 min
[Epoch 5/5] batch 300/526 - loss=1.2003 - avg_batch=0.29s - ETA ≈ 1.10 min
[Epoch 5/5] batch 320/526 - loss=0.8225 - avg_batch=0.29s - ETA ≈ 0.99 min
[Epoch 5/5] batch 340/526 - loss=1.1108 - avg_batch=0.29s - ETA ≈ 0.91 min
[Epoch 5/5] batch 360/526 - loss=0.5672 - avg_batch=0.28s - ETA ≈ 0.76 min
[Epoch 5/5] batch 380/526 - loss=0.7471 - avg_batch=0.28s - ETA ≈ 0.68 min
[Epoch 5/5] batch 400/526 - loss=0.6772 - avg_batch=0.28s - ETA ≈ 0.58 min
[Epoch 5/5] batch 420/526 - loss=0.6327 - avg_batch=0.28s - ETA ≈ 0.50 min
[Epoch 5/5] batch 440/526 - loss=0.7452 - avg_batch=0.29s - ETA ≈ 0.41 min
[Epoch 5/5] batch 460/526 - loss=0.8438 - avg_batch=0.28s - ETA ≈ 0.31 min
[Epoch 5/5] batch 480/526 - loss=0.7811 - avg_batch=0.29s - ETA ≈ 0.22 min
[Epoch 5/5] batch 500/526 - loss=0.5938 - avg_batch=0.29s - ETA ≈ 0.13 min
[Epoch 5/5] batch 520/526 - loss=0.9453 - avg_batch=0.28s - ETA ≈ 0.03 min
[Epoch 5/5] batch 526/526 - loss=0.6266 - avg_batch=0.28s - ETA ≈ 0.00 min
In [17]:
# Validación del Entrenamiento
# Para obtener losses en validación, se coloca el modelo en train() una vez, pero se usamo inference_mode() para evitar gradientes y acelerar.

model.train()  # forzar train so model returns loss dict
val_running = 0.0
val_batch_times = []

with torch.inference_mode():
    for j, (images, targets) in enumerate(val_loader, start=1):
        t0 = time.time()
        images = [img.to(device, non_blocking=True) for img in images]
        targets = [{k: v.to(device, non_blocking=True) for k, v in t.items()} for t in targets]

        # forward (no backward)
        loss_dict = model(images, targets)
        val_running += sum(loss for loss in loss_dict.values()).item()

        val_batch_times.append(time.time() - t0)

avg_val_loss = val_running / max(1, len(val_loader))
val_losses.append(avg_val_loss)

# "Scheduler step" si existe
if lr_scheduler is not None:
    try:
        lr_scheduler.step()
    except Exception as e:
        # Algunos schedulers requieren validar por métrica; si falla, se ignoram
        print("lr_scheduler.step() falló:", e)

epoch_time = (time.time() - epoch_start) / 60.0
print(f"=== Epoch {epoch+1}/{num_epochs} finished in {epoch_time:.2f} min | "
      f"Train Loss: {avg_train_loss:.4f} | Val Loss: {avg_val_loss:.4f} ===")

# Guarda checkpoint por epoch
ckpt_path = f"weights/retinanet_epoch{epoch+1}.pth"
torch.save(model.state_dict(), ckpt_path)

# Guardado final
final_path = "weights/retinanet_african_wildlife_final.pth"
torch.save(model.state_dict(), final_path)
print("Entrenamiento completado. Pesos guardados en 'weights/'")
=== Epoch 5/5 finished in 8.38 min | Train Loss: 0.7897 | Val Loss: 0.6668 ===
Entrenamiento completado. Pesos guardados en 'weights/'

8. Calculo de mAP y métricas en PyTorch¶

In [18]:
# Cálculo de Métricas por Epoca
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# Inicializamos la métrica (mAP)
metric = MeanAveragePrecision()

# Se pone el modelo en modo evaluación
model.eval()

# Variable para acumular la pérdida de validación (opcional)
val_running_loss = 0.0

# Desactiva gradientes para acelerar la validación
with torch.no_grad():
    for images, targets in val_loader:
        # Envia imágenes y targets a GPU/CPU según corresponda
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k,v in t.items()} for t in targets]

        # Predicciones del modelo
        preds = model(images)

        # Actualiza métricas: preds y targets, la cuales deben ser listas de diccionarios
        metric.update(preds, targets)

        # Calcula pérdida de validación por separado
        model.train()  # Solo para calcular loss dict
        loss_dict = model(images, targets)
        val_running_loss += sum(loss for loss in loss_dict.values()).item()
        model.eval()  # volvemos a eval

# Calcula métricas finales de esta época
results = metric.compute()

# Guarda la pérdida promedio de validación
avg_val_loss = val_running_loss / len(val_loader)
val_losses.append(avg_val_loss)

# Muestra los resultados
print(f"\n--- Época {epoch+1} ---")
print(f"Train Loss: {avg_train_loss:.4f} | Val Loss: {avg_val_loss:.4f}")
print("Métricas de validación:")

for key, value in results.items():
    # Algunos resultados pueden ser tensores escalares -> convertir con .item()
    if torch.is_tensor(value) and value.numel() == 1:
        value = value.item()
    # Otros pueden ser tensores vacíos o de varias clases (ej. map_per_class)
    elif torch.is_tensor(value):
        value = value.tolist()
    print(f"{key}: {value}")
/usr/local/lib/python3.12/dist-packages/torchmetrics/utilities/prints.py:43: UserWarning: Encountered more than 100 detections in a single image. This means that certain detections with the lowest scores will be ignored, that may have an undesirable impact on performance. Please consider adjusting the `max_detection_threshold` to suit your use case. To disable this warning, set attribute class `warn_on_many_detections=False`, after initializing the metric.
  warnings.warn(*args, **kwargs)
--- Época 5 ---
Train Loss: 0.7897 | Val Loss: 0.6668
Métricas de validación:
map: 0.11570222675800323
map_50: 0.22541993856430054
map_75: 0.10731200873851776
map_small: 0.0
map_medium: 0.11385128647089005
map_large: 0.12326597422361374
mar_1: 0.35830506682395935
mar_10: 0.5458202362060547
mar_100: 0.5679993033409119
mar_small: 0.0
mar_medium: 0.41624999046325684
mar_large: 0.5957798361778259
map_per_class: -1.0
mar_100_per_class: -1.0
classes: [1, 2, 3, 4]

9. Calculo de Métricas de Evaluación del Modelo¶

In [20]:
import torch
import numpy as np
import pandas as pd
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

import warnings
# Para eliminar los warnings
warnings.filterwarnings("ignore", category=UserWarning, module="torch")
warnings.filterwarnings("ignore", category=UserWarning, module="torchmetrics")

# 1. Diccionario para guardar métricas por época
history = {
    "epoch": [],
    "train_loss": [],
    "val_loss": [],
    "map": [],
    "map_50": [],
    "map_75": [],
    "precision": [],
    "recall": [],
    "f1": [],
    "accuracy": []
}

# 2. Loop de entrenamiento y validación
for epoch in range(num_epochs):
    print(f"\n--- Época {epoch+1} ---")

    # A. Entrenamiento
    model.train()
    running_loss = 0.0
    for images, targets in train_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

        optimizer.zero_grad()
        loss_dict = model(images, targets)  # dict de losses
        losses = sum(loss for loss in loss_dict.values())
        losses.backward()
        optimizer.step()
        running_loss += losses.item()

    avg_train_loss = running_loss / len(train_loader)

    # B. Validación
    model.eval()
    val_loss = 0.0
    metric.reset()

    all_preds = []
    all_labels = []

    with torch.no_grad():
        for images, targets in val_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

            # Para calcular loss en validación, se pone el modelo en train temporalmente
            model.train()
            loss_dict = model(images, targets)
            val_loss += sum(loss for loss in loss_dict.values()).item()

            # De nuevo en eval para obtener predicciones
            model.eval()
            outputs = model(images)

            # Actualiza métricas de detección
            metric.update(outputs, targets)


            # Construye vectores multilabel por imagen
            for out, tgt in zip(outputs, targets):
                pred_vec = torch.zeros(NUM_CLASSES, dtype=torch.int)
                true_vec = torch.zeros(NUM_CLASSES, dtype=torch.int)

                keep = out["scores"] >= 0.5
                pred_labels = out["labels"][keep].detach().cpu().unique()
                true_labels = tgt["labels"].detach().cpu().unique()

                pred_vec[pred_labels - 1] = 1  # -1 si tus labels empiezan en 1
                true_vec[true_labels - 1] = 1

                all_preds.append(pred_vec)
                all_labels.append(true_vec)

    avg_val_loss = val_loss / len(val_loader)
    results = metric.compute()

    # 3. Métricas multilabel (macro)
    all_preds = torch.stack(all_preds).cpu().numpy()
    all_labels = torch.stack(all_labels).cpu().numpy()

    precision = precision_score(all_labels, all_preds, average="macro", zero_division=0)
    recall = recall_score(all_labels, all_preds, average="macro", zero_division=0)
    f1 = f1_score(all_labels, all_preds, average="macro", zero_division=0)
    acc = accuracy_score(all_labels, all_preds)

    # 4. Guardar métricas en history
    history["epoch"].append(epoch + 1)
    history["train_loss"].append(avg_train_loss)
    history["val_loss"].append(avg_val_loss)
    history["map"].append(results["map"].item())
    history["map_50"].append(results["map_50"].item())
    history["map_75"].append(results["map_75"].item())
    history["precision"].append(precision)
    history["recall"].append(recall)
    history["f1"].append(f1)
    history["accuracy"].append(acc)

    # 5. Mostrar métricas de la época
    print(f"Train Loss: {avg_train_loss:.4f} | Val Loss: {avg_val_loss:.4f}")
    print(f"map: {results['map'].item():.4f} | map_50: {results['map_50'].item():.4f} | map_75: {results['map_75'].item():.4f}")
    print(f"Precision: {precision:.4f} | Recall: {recall:.4f} | F1: {f1:.4f} | Accuracy: {acc:.4f}")

# 6. Crear DataFrame con todas las épocas
df_history = pd.DataFrame(history)
print("\n=== Resumen de todas las épocas ===")
print(df_history)
--- Época 1 ---
Train Loss: 0.7414 | Val Loss: 0.6333
map: 0.1507 | map_50: 0.2762 | map_75: 0.1455
Precision: 0.3682 | Recall: 0.2368 | F1: 0.2501 | Accuracy: 0.1300

--- Época 2 ---
Train Loss: 0.7144 | Val Loss: 0.6475
map: 0.1964 | map_50: 0.3555 | map_75: 0.1866
Precision: 0.3200 | Recall: 0.0427 | F1: 0.0688 | Accuracy: 0.0550

--- Época 3 ---
Train Loss: 0.6932 | Val Loss: 0.5835
map: 0.2181 | map_50: 0.3804 | map_75: 0.2220
Precision: 0.3497 | Recall: 0.3916 | F1: 0.3650 | Accuracy: 0.2300

--- Época 4 ---
Train Loss: 0.6362 | Val Loss: 0.5017
map: 0.3853 | map_50: 0.6396 | map_75: 0.4182
Precision: 0.6764 | Recall: 0.4169 | F1: 0.4861 | Accuracy: 0.4950

--- Época 5 ---
Train Loss: 0.6140 | Val Loss: 0.5027
map: 0.4198 | map_50: 0.6603 | map_75: 0.4759
Precision: 0.7082 | Recall: 0.2979 | F1: 0.4103 | Accuracy: 0.3700

=== Resumen de todas las épocas ===
   epoch  train_loss  val_loss       map    map_50    map_75  precision  \
0      1    0.741424  0.633326  0.150744  0.276160  0.145510   0.368219   
1      2    0.714384  0.647464  0.196417  0.355517  0.186622   0.320000   
2      3    0.693179  0.583547  0.218099  0.380382  0.222008   0.349655   
3      4    0.636173  0.501731  0.385349  0.639645  0.418213   0.676387   
4      5    0.613975  0.502724  0.419771  0.660269  0.475866   0.708205   

     recall        f1  accuracy  
0  0.236849  0.250140     0.130  
1  0.042702  0.068831     0.055  
2  0.391606  0.365039     0.230  
3  0.416872  0.486134     0.495  
4  0.297909  0.410303     0.370  

10. ComparaUnbiased - Visualizar predicciones vs ground truth¶

  • Ground truth = cajas verdes (target['boxes'] y target['labels'])

  • Predicciones = cajas rojas (outputs[0]['boxes'], scores, labels)

  • Umbral de confianza = 0.5 por defecto

  • Se muestran hasta 6 imágenes aleatorias de validación

In [21]:
from PIL import Image, ImageDraw
import matplotlib.pyplot as plt

# Asigna carpeta para guardar imágenes.
save_dir = Path("/content/weights")
save_dir.mkdir(exist_ok=True)


device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
model.eval()  # modo evaluación

# Tomar hasta 6 imágenes de validación
sample_indices = torch.randperm(len(val_dataset))[:6]
sample = [val_dataset[i] for i in sample_indices]

fig, axs = plt.subplots(2, 3, figsize=(18, 12))
axs = axs.flatten()

for i, (ax, (img_tensor, target)) in enumerate(zip(axs, sample)):
    # Tensor -> PIL Image
    img = img_tensor.permute(1,2,0).cpu().numpy() * 255
    img = Image.fromarray(img.astype('uint8')).convert('RGB')
    draw = ImageDraw.Draw(img)
    w, h = img.size

    # Dibujar ground truth
    for box, label in zip(target['boxes'], target['labels']):
        x1, y1, x2, y2 = box.cpu().numpy()
        draw.rectangle([x1,y1,x2,y2], outline='green', width=2)
        txt = CLASS_NAMES[label-1] if label-1 < len(CLASS_NAMES) else str(label)
        draw.text((x1, max(0,y1-10)), txt, fill='green')

    # Dibujar predicciones
    with torch.no_grad():
        outputs = model([img_tensor.to(device)])
    pred_boxes = outputs[0]['boxes'].cpu()
    pred_scores = outputs[0]['scores'].cpu()
    pred_labels = outputs[0]['labels'].cpu()

    for box, score, label in zip(pred_boxes, pred_scores, pred_labels):
        if score < 0.5:  # umbral de confianza
            continue
        x1, y1, x2, y2 = box.numpy()
        draw.rectangle([x1,y1,x2,y2], outline='red', width=2)
        txt = f"{CLASS_NAMES[label-1]}:{score:.2f}" if label-1 < len(CLASS_NAMES) else f"{label}:{score:.2f}"
        draw.text((x1, max(0,y1-10)), txt, fill='red')

    ax.imshow(img)
    ax.set_title('Green: GT | Red: Prediction')
    ax.axis('off')

    # Guarda la imagen en la carpeta
    img_save_path = save_dir / f"sample_{sample_indices[i].item()}.png"
    img.save(img_save_path)
    print(f"Imagen guardada en: {img_save_path}")

plt.tight_layout()
plt.show()
Imagen guardada en: /content/weights/sample_88.png
Imagen guardada en: /content/weights/sample_37.png
Imagen guardada en: /content/weights/sample_126.png
Imagen guardada en: /content/weights/sample_4.png
Imagen guardada en: /content/weights/sample_6.png
Imagen guardada en: /content/weights/sample_100.png
No description has been provided for this image
In [67]:
# Imagenes Finales (Post Modelo)
# Compila las 6 imagenes en una sola imagen (Tipo Collage, esto para el poster)
from PIL import Image
from pathlib import Path

save_dir = Path("/content/weights")
save_dir.mkdir(exist_ok=True)

# Toma las 6 imágenes como sample_*.png
images = sorted(save_dir.glob("sample_*.png"))[:6]
thumbs = [Image.open(im).resize((400, 300)) for im in images]  # ajustar tamaño

# Crea collage 2x3
collage = Image.new("RGB", (3*400, 2*300), (255,255,255))
for idx, im in enumerate(thumbs):
    x = (idx % 3) * 400
    y = (idx // 3) * 300
    collage.paste(im, (x,y))

collage_path = save_dir / "collage_pred_gt.png"
collage.save(collage_path)
print(f"Collage guardado en: {collage_path}")
Collage guardado en: /content/weights/collage_pred_gt.png

11. Genera gráfica de pérdida (Train vs Validation) y mAP estimado.¶

In [29]:
import matplotlib.pyplot as plt

# Gráfica de Loss por epoch

# ---------- Plots de Loss (Train vs Val) ----------
plt.figure(figsize=(8,5))
plt.plot(range(1, len(train_losses)+1), train_losses, marker='o', label='Train Loss')
plt.plot(range(1, len(val_losses)+1), val_losses, marker='o', label='Val Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Curva de pérdida (Train vs Validate)')
plt.legend()
plt.grid(True)

# Guarda la gráfica Loss por epoch (En PNG)
plt.savefig("/content/weights/Loss_por_epoch.png", dpi=300, bbox_inches='tight')   #Guarda antes de mostrar
plt.show()   # Mostrar en Colab
plt.close()  # Cierra la figura para no sobreescribir
No description has been provided for this image

11. Creación de Póster¶

In [24]:
# Instalar LaTeX en Colab
!apt-get update -qq
!apt-get install -y texlive-latex-base texlive-fonts-recommended texlive-fonts-extra texlive-latex-extra
W: Skipping acquire of configured file 'main/source/Sources' as repository 'https://r2u.stat.illinois.edu/ubuntu jammy InRelease' does not seem to provide it (sources.list entry misspelt?)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  dvisvgm fonts-adf-accanthis fonts-adf-berenis fonts-adf-gillius
  fonts-adf-universalis fonts-cabin fonts-cantarell fonts-comfortaa
  fonts-croscore fonts-crosextra-caladea fonts-crosextra-carlito
  fonts-dejavu-core fonts-dejavu-extra fonts-droid-fallback fonts-ebgaramond
  fonts-ebgaramond-extra fonts-font-awesome fonts-freefont-otf
  fonts-freefont-ttf fonts-gfs-artemisia fonts-gfs-complutum fonts-gfs-didot
  fonts-gfs-neohellenic fonts-gfs-olga fonts-gfs-solomos fonts-go
  fonts-junicode fonts-lato fonts-linuxlibertine fonts-lmodern fonts-lobster
  fonts-lobstertwo fonts-noto-color-emoji fonts-noto-core fonts-noto-mono
  fonts-oflb-asana-math fonts-open-sans fonts-roboto-unhinted fonts-sil-charis
  fonts-sil-gentium fonts-sil-gentium-basic fonts-sil-gentiumplus
  fonts-sil-gentiumplus-compact fonts-stix fonts-texgyre fonts-urw-base35
  libapache-pom-java libcommons-logging-java libcommons-parent-java
  libfontbox-java libgs9 libgs9-common libidn12 libijs-0.35 libjbig2dec0
  libkpathsea6 libpdfbox-java libptexenc1 libruby3.0 libsynctex2 libteckit0
  libtexlua53 libtexluajit2 libwoff1 libzzip-0-13 lmodern poppler-data
  preview-latex-style rake ruby ruby-net-telnet ruby-rubygems ruby-webrick
  ruby-xmlrpc ruby3.0 rubygems-integration t1utils tex-common tex-gyre
  texlive-base texlive-binaries texlive-fonts-extra-links
  texlive-latex-recommended texlive-pictures texlive-plain-generic tipa
  xfonts-encodings xfonts-utils
Suggested packages:
  fonts-noto fontforge libavalon-framework-java libcommons-logging-java-doc
  libexcalibur-logkit-java liblog4j1.2-java poppler-utils ghostscript
  fonts-japanese-mincho | fonts-ipafont-mincho fonts-japanese-gothic
  | fonts-ipafont-gothic fonts-arphic-ukai fonts-arphic-uming fonts-nanum ri
  ruby-dev bundler debhelper gv | postscript-viewer perl-tk xpdf | pdf-viewer
  xzdec cm-super texlive-fonts-extra-doc texlive-fonts-recommended-doc
  texlive-latex-base-doc python3-pygments icc-profiles libfile-which-perl
  libspreadsheet-parseexcel-perl texlive-latex-extra-doc
  texlive-latex-recommended-doc texlive-luatex texlive-pstricks dot2tex prerex
  texlive-pictures-doc vprerex default-jre-headless tipa-doc
The following NEW packages will be installed:
  dvisvgm fonts-adf-accanthis fonts-adf-berenis fonts-adf-gillius
  fonts-adf-universalis fonts-cabin fonts-cantarell fonts-comfortaa
  fonts-croscore fonts-crosextra-caladea fonts-crosextra-carlito
  fonts-dejavu-core fonts-dejavu-extra fonts-droid-fallback fonts-ebgaramond
  fonts-ebgaramond-extra fonts-font-awesome fonts-freefont-otf
  fonts-freefont-ttf fonts-gfs-artemisia fonts-gfs-complutum fonts-gfs-didot
  fonts-gfs-neohellenic fonts-gfs-olga fonts-gfs-solomos fonts-go
  fonts-junicode fonts-lato fonts-linuxlibertine fonts-lmodern fonts-lobster
  fonts-lobstertwo fonts-noto-color-emoji fonts-noto-core fonts-noto-mono
  fonts-oflb-asana-math fonts-open-sans fonts-roboto-unhinted fonts-sil-charis
  fonts-sil-gentium fonts-sil-gentium-basic fonts-sil-gentiumplus
  fonts-sil-gentiumplus-compact fonts-stix fonts-texgyre fonts-urw-base35
  libapache-pom-java libcommons-logging-java libcommons-parent-java
  libfontbox-java libgs9 libgs9-common libidn12 libijs-0.35 libjbig2dec0
  libkpathsea6 libpdfbox-java libptexenc1 libruby3.0 libsynctex2 libteckit0
  libtexlua53 libtexluajit2 libwoff1 libzzip-0-13 lmodern poppler-data
  preview-latex-style rake ruby ruby-net-telnet ruby-rubygems ruby-webrick
  ruby-xmlrpc ruby3.0 rubygems-integration t1utils tex-common tex-gyre
  texlive-base texlive-binaries texlive-fonts-extra texlive-fonts-extra-links
  texlive-fonts-recommended texlive-latex-base texlive-latex-extra
  texlive-latex-recommended texlive-pictures texlive-plain-generic tipa
  xfonts-encodings xfonts-utils
0 upgraded, 92 newly installed, 0 to remove and 45 not upgraded.
Need to get 712 MB of archives.
After this operation, 2,087 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 fonts-droid-fallback all 1:6.0.1r16-1.1build1 [1,805 kB]
Get:2 http://archive.ubuntu.com/ubuntu jammy/main amd64 fonts-lato all 2.0-2.1 [2,696 kB]
Get:3 http://archive.ubuntu.com/ubuntu jammy/main amd64 poppler-data all 0.4.11-1 [2,171 kB]
Get:4 http://archive.ubuntu.com/ubuntu jammy/universe amd64 tex-common all 6.17 [33.7 kB]
Get:5 http://archive.ubuntu.com/ubuntu jammy/main amd64 fonts-urw-base35 all 20200910-1 [6,367 kB]
Get:6 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libgs9-common all 9.55.0~dfsg1-0ubuntu5.12 [753 kB]
Get:7 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libidn12 amd64 1.38-4ubuntu1 [60.0 kB]
Get:8 http://archive.ubuntu.com/ubuntu jammy/main amd64 libijs-0.35 amd64 0.35-15build2 [16.5 kB]
Get:9 http://archive.ubuntu.com/ubuntu jammy/main amd64 libjbig2dec0 amd64 0.19-3build2 [64.7 kB]
Get:10 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libgs9 amd64 9.55.0~dfsg1-0ubuntu5.12 [5,031 kB]
Get:11 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libkpathsea6 amd64 2021.20210626.59705-1ubuntu0.2 [60.4 kB]
Get:12 http://archive.ubuntu.com/ubuntu jammy/main amd64 libwoff1 amd64 1.0.2-1build4 [45.2 kB]
Get:13 http://archive.ubuntu.com/ubuntu jammy/universe amd64 dvisvgm amd64 2.13.1-1 [1,221 kB]
Get:14 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-adf-accanthis all 0.20190904-2 [203 kB]
Get:15 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-adf-berenis all 0.20190904-2 [313 kB]
Get:16 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-adf-gillius all 0.20190904-2 [190 kB]
Get:17 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-adf-universalis all 0.20190904-2 [112 kB]
Get:18 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-cabin all 1.5-3 [141 kB]
Get:19 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-cantarell all 0.303-2 [286 kB]
Get:20 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-comfortaa all 3.001-3 [129 kB]
Get:21 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-croscore all 20201225-1build1 [1,572 kB]
Get:22 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-crosextra-caladea all 20130214-2.1 [82.4 kB]
Get:23 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-crosextra-carlito all 20130920-1.1 [743 kB]
Get:24 http://archive.ubuntu.com/ubuntu jammy/main amd64 fonts-dejavu-core all 2.37-2build1 [1,041 kB]
Get:25 http://archive.ubuntu.com/ubuntu jammy/main amd64 fonts-dejavu-extra all 2.37-2build1 [2,041 kB]
Get:26 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-ebgaramond all 0.016+git20210310.42d4f9f2-1 [512 kB]
Get:27 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-ebgaramond-extra all 0.016+git20210310.42d4f9f2-1 [2,233 kB]
Get:28 http://archive.ubuntu.com/ubuntu jammy/main amd64 fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB]
Get:29 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-freefont-otf all 20120503-10build1 [3,054 kB]
Get:30 http://archive.ubuntu.com/ubuntu jammy/main amd64 fonts-freefont-ttf all 20120503-10build1 [2,388 kB]
Get:31 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-gfs-artemisia all 1.1-6 [260 kB]
Get:32 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-gfs-complutum all 1.1-7 [41.8 kB]
Get:33 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-gfs-didot all 1.1-7 [278 kB]
Get:34 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-gfs-neohellenic all 1.1-7 [215 kB]
Get:35 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-gfs-olga all 1.1-6 [33.5 kB]
Get:36 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-gfs-solomos all 1.1-6 [40.9 kB]
Get:37 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-go all 0~20170330-1 [369 kB]
Get:38 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-junicode all 1.002-2 [828 kB]
Get:39 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-linuxlibertine all 5.3.0-6 [1,627 kB]
Get:40 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-lmodern all 2.004.5-6.1 [4,532 kB]
Get:41 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-lobster all 2.0-2.1 [38.9 kB]
Get:42 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-lobstertwo all 2.0-2.1 [93.3 kB]
Get:43 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 fonts-noto-color-emoji all 2.047-0ubuntu0.22.04.1 [10.0 MB]
Get:44 http://archive.ubuntu.com/ubuntu jammy/main amd64 fonts-noto-core all 20201225-1build1 [12.2 MB]
Get:45 http://archive.ubuntu.com/ubuntu jammy/main amd64 fonts-noto-mono all 20201225-1build1 [397 kB]
Get:46 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-oflb-asana-math all 000.907-7build1 [245 kB]
Get:47 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-open-sans all 1.11-2 [635 kB]
Get:48 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-roboto-unhinted all 2:0~20170802-3 [2,376 kB]
Get:49 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-sil-charis all 6.101-1 [3,973 kB]
Get:50 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-sil-gentium all 20081126:1.03-4 [245 kB]
Get:51 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-sil-gentium-basic all 1.102-1.1 [384 kB]
Get:52 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-sil-gentiumplus all 6.101-1 [8,086 kB]
Get:53 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-sil-gentiumplus-compact all 5.000-4 [1,514 kB]
Get:54 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-texgyre all 20180621-3.1 [10.2 MB]
Get:55 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libapache-pom-java all 18-1 [4,720 B]
Get:56 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libcommons-parent-java all 43-1 [10.8 kB]
Get:57 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libcommons-logging-java all 1.2-2 [60.3 kB]
Get:58 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libptexenc1 amd64 2021.20210626.59705-1ubuntu0.2 [39.1 kB]
Get:59 http://archive.ubuntu.com/ubuntu jammy/main amd64 rubygems-integration all 1.18 [5,336 B]
Get:60 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 ruby3.0 amd64 3.0.2-7ubuntu2.10 [50.1 kB]
Get:61 http://archive.ubuntu.com/ubuntu jammy/main amd64 ruby-rubygems all 3.3.5-2 [228 kB]
Get:62 http://archive.ubuntu.com/ubuntu jammy/main amd64 ruby amd64 1:3.0~exp1 [5,100 B]
Get:63 http://archive.ubuntu.com/ubuntu jammy/main amd64 rake all 13.0.6-2 [61.7 kB]
Get:64 http://archive.ubuntu.com/ubuntu jammy/main amd64 ruby-net-telnet all 0.1.1-2 [12.6 kB]
Get:65 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 ruby-webrick all 1.7.0-3ubuntu0.2 [52.5 kB]
Get:66 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 ruby-xmlrpc all 0.3.2-1ubuntu0.1 [24.9 kB]
Get:67 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libruby3.0 amd64 3.0.2-7ubuntu2.10 [5,114 kB]
Get:68 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libsynctex2 amd64 2021.20210626.59705-1ubuntu0.2 [55.6 kB]
Get:69 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libteckit0 amd64 2.5.11+ds1-1 [421 kB]
Get:70 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libtexlua53 amd64 2021.20210626.59705-1ubuntu0.2 [120 kB]
Get:71 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libtexluajit2 amd64 2021.20210626.59705-1ubuntu0.2 [267 kB]
Get:72 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libzzip-0-13 amd64 0.13.72+dfsg.1-1.1 [27.0 kB]
Get:73 http://archive.ubuntu.com/ubuntu jammy/main amd64 xfonts-encodings all 1:1.0.5-0ubuntu2 [578 kB]
Get:74 http://archive.ubuntu.com/ubuntu jammy/main amd64 xfonts-utils amd64 1:7.7+6build2 [94.6 kB]
Get:75 http://archive.ubuntu.com/ubuntu jammy/universe amd64 lmodern all 2.004.5-6.1 [9,471 kB]
Get:76 http://archive.ubuntu.com/ubuntu jammy/universe amd64 preview-latex-style all 12.2-1ubuntu1 [185 kB]
Get:77 http://archive.ubuntu.com/ubuntu jammy/main amd64 t1utils amd64 1.41-4build2 [61.3 kB]
Get:78 http://archive.ubuntu.com/ubuntu jammy/universe amd64 tex-gyre all 20180621-3.1 [6,209 kB]
Get:79 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 texlive-binaries amd64 2021.20210626.59705-1ubuntu0.2 [9,860 kB]
Get:80 http://archive.ubuntu.com/ubuntu jammy/universe amd64 texlive-base all 2021.20220204-1 [21.0 MB]
Get:81 http://archive.ubuntu.com/ubuntu jammy/universe amd64 texlive-fonts-extra all 2021.20220204-1 [484 MB]
Get:82 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fonts-stix all 1.1.1-4.1 [589 kB]
Get:83 http://archive.ubuntu.com/ubuntu jammy/universe amd64 texlive-fonts-extra-links all 2021.20220204-1 [20.3 kB]
Get:84 http://archive.ubuntu.com/ubuntu jammy/universe amd64 texlive-fonts-recommended all 2021.20220204-1 [4,972 kB]
Get:85 http://archive.ubuntu.com/ubuntu jammy/universe amd64 texlive-latex-base all 2021.20220204-1 [1,128 kB]
Get:86 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libfontbox-java all 1:1.8.16-2 [207 kB]
Get:87 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libpdfbox-java all 1:1.8.16-2 [5,199 kB]
Get:88 http://archive.ubuntu.com/ubuntu jammy/universe amd64 texlive-latex-recommended all 2021.20220204-1 [14.4 MB]
Get:89 http://archive.ubuntu.com/ubuntu jammy/universe amd64 texlive-pictures all 2021.20220204-1 [8,720 kB]
Get:90 http://archive.ubuntu.com/ubuntu jammy/universe amd64 texlive-latex-extra all 2021.20220204-1 [13.9 MB]
Get:91 http://archive.ubuntu.com/ubuntu jammy/universe amd64 texlive-plain-generic all 2021.20220204-1 [27.5 MB]
Get:92 http://archive.ubuntu.com/ubuntu jammy/universe amd64 tipa all 2:1.3-21 [2,967 kB]
Fetched 712 MB in 12s (58.3 MB/s)
Extracting templates from packages: 100%
Preconfiguring packages ...
Selecting previously unselected package fonts-droid-fallback.
(Reading database ... 126371 files and directories currently installed.)
Preparing to unpack .../00-fonts-droid-fallback_1%3a6.0.1r16-1.1build1_all.deb ...
Unpacking fonts-droid-fallback (1:6.0.1r16-1.1build1) ...
Selecting previously unselected package fonts-lato.
Preparing to unpack .../01-fonts-lato_2.0-2.1_all.deb ...
Unpacking fonts-lato (2.0-2.1) ...
Selecting previously unselected package poppler-data.
Preparing to unpack .../02-poppler-data_0.4.11-1_all.deb ...
Unpacking poppler-data (0.4.11-1) ...
Selecting previously unselected package tex-common.
Preparing to unpack .../03-tex-common_6.17_all.deb ...
Unpacking tex-common (6.17) ...
Selecting previously unselected package fonts-urw-base35.
Preparing to unpack .../04-fonts-urw-base35_20200910-1_all.deb ...
Unpacking fonts-urw-base35 (20200910-1) ...
Selecting previously unselected package libgs9-common.
Preparing to unpack .../05-libgs9-common_9.55.0~dfsg1-0ubuntu5.12_all.deb ...
Unpacking libgs9-common (9.55.0~dfsg1-0ubuntu5.12) ...
Selecting previously unselected package libidn12:amd64.
Preparing to unpack .../06-libidn12_1.38-4ubuntu1_amd64.deb ...
Unpacking libidn12:amd64 (1.38-4ubuntu1) ...
Selecting previously unselected package libijs-0.35:amd64.
Preparing to unpack .../07-libijs-0.35_0.35-15build2_amd64.deb ...
Unpacking libijs-0.35:amd64 (0.35-15build2) ...
Selecting previously unselected package libjbig2dec0:amd64.
Preparing to unpack .../08-libjbig2dec0_0.19-3build2_amd64.deb ...
Unpacking libjbig2dec0:amd64 (0.19-3build2) ...
Selecting previously unselected package libgs9:amd64.
Preparing to unpack .../09-libgs9_9.55.0~dfsg1-0ubuntu5.12_amd64.deb ...
Unpacking libgs9:amd64 (9.55.0~dfsg1-0ubuntu5.12) ...
Selecting previously unselected package libkpathsea6:amd64.
Preparing to unpack .../10-libkpathsea6_2021.20210626.59705-1ubuntu0.2_amd64.deb ...
Unpacking libkpathsea6:amd64 (2021.20210626.59705-1ubuntu0.2) ...
Selecting previously unselected package libwoff1:amd64.
Preparing to unpack .../11-libwoff1_1.0.2-1build4_amd64.deb ...
Unpacking libwoff1:amd64 (1.0.2-1build4) ...
Selecting previously unselected package dvisvgm.
Preparing to unpack .../12-dvisvgm_2.13.1-1_amd64.deb ...
Unpacking dvisvgm (2.13.1-1) ...
Selecting previously unselected package fonts-adf-accanthis.
Preparing to unpack .../13-fonts-adf-accanthis_0.20190904-2_all.deb ...
Unpacking fonts-adf-accanthis (0.20190904-2) ...
Selecting previously unselected package fonts-adf-berenis.
Preparing to unpack .../14-fonts-adf-berenis_0.20190904-2_all.deb ...
Unpacking fonts-adf-berenis (0.20190904-2) ...
Selecting previously unselected package fonts-adf-gillius.
Preparing to unpack .../15-fonts-adf-gillius_0.20190904-2_all.deb ...
Unpacking fonts-adf-gillius (0.20190904-2) ...
Selecting previously unselected package fonts-adf-universalis.
Preparing to unpack .../16-fonts-adf-universalis_0.20190904-2_all.deb ...
Unpacking fonts-adf-universalis (0.20190904-2) ...
Selecting previously unselected package fonts-cabin.
Preparing to unpack .../17-fonts-cabin_1.5-3_all.deb ...
Unpacking fonts-cabin (1.5-3) ...
Selecting previously unselected package fonts-cantarell.
Preparing to unpack .../18-fonts-cantarell_0.303-2_all.deb ...
Unpacking fonts-cantarell (0.303-2) ...
Selecting previously unselected package fonts-comfortaa.
Preparing to unpack .../19-fonts-comfortaa_3.001-3_all.deb ...
Unpacking fonts-comfortaa (3.001-3) ...
Selecting previously unselected package fonts-croscore.
Preparing to unpack .../20-fonts-croscore_20201225-1build1_all.deb ...
Unpacking fonts-croscore (20201225-1build1) ...
Selecting previously unselected package fonts-crosextra-caladea.
Preparing to unpack .../21-fonts-crosextra-caladea_20130214-2.1_all.deb ...
Unpacking fonts-crosextra-caladea (20130214-2.1) ...
Selecting previously unselected package fonts-crosextra-carlito.
Preparing to unpack .../22-fonts-crosextra-carlito_20130920-1.1_all.deb ...
Unpacking fonts-crosextra-carlito (20130920-1.1) ...
Selecting previously unselected package fonts-dejavu-core.
Preparing to unpack .../23-fonts-dejavu-core_2.37-2build1_all.deb ...
Unpacking fonts-dejavu-core (2.37-2build1) ...
Selecting previously unselected package fonts-dejavu-extra.
Preparing to unpack .../24-fonts-dejavu-extra_2.37-2build1_all.deb ...
Unpacking fonts-dejavu-extra (2.37-2build1) ...
Selecting previously unselected package fonts-ebgaramond.
Preparing to unpack .../25-fonts-ebgaramond_0.016+git20210310.42d4f9f2-1_all.deb ...
Unpacking fonts-ebgaramond (0.016+git20210310.42d4f9f2-1) ...
Selecting previously unselected package fonts-ebgaramond-extra.
Preparing to unpack .../26-fonts-ebgaramond-extra_0.016+git20210310.42d4f9f2-1_all.deb ...
Unpacking fonts-ebgaramond-extra (0.016+git20210310.42d4f9f2-1) ...
Selecting previously unselected package fonts-font-awesome.
Preparing to unpack .../27-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ...
Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
Selecting previously unselected package fonts-freefont-otf.
Preparing to unpack .../28-fonts-freefont-otf_20120503-10build1_all.deb ...
Unpacking fonts-freefont-otf (20120503-10build1) ...
Selecting previously unselected package fonts-freefont-ttf.
Preparing to unpack .../29-fonts-freefont-ttf_20120503-10build1_all.deb ...
Unpacking fonts-freefont-ttf (20120503-10build1) ...
Selecting previously unselected package fonts-gfs-artemisia.
Preparing to unpack .../30-fonts-gfs-artemisia_1.1-6_all.deb ...
Unpacking fonts-gfs-artemisia (1.1-6) ...
Selecting previously unselected package fonts-gfs-complutum.
Preparing to unpack .../31-fonts-gfs-complutum_1.1-7_all.deb ...
Unpacking fonts-gfs-complutum (1.1-7) ...
Selecting previously unselected package fonts-gfs-didot.
Preparing to unpack .../32-fonts-gfs-didot_1.1-7_all.deb ...
Unpacking fonts-gfs-didot (1.1-7) ...
Selecting previously unselected package fonts-gfs-neohellenic.
Preparing to unpack .../33-fonts-gfs-neohellenic_1.1-7_all.deb ...
Unpacking fonts-gfs-neohellenic (1.1-7) ...
Selecting previously unselected package fonts-gfs-olga.
Preparing to unpack .../34-fonts-gfs-olga_1.1-6_all.deb ...
Unpacking fonts-gfs-olga (1.1-6) ...
Selecting previously unselected package fonts-gfs-solomos.
Preparing to unpack .../35-fonts-gfs-solomos_1.1-6_all.deb ...
Unpacking fonts-gfs-solomos (1.1-6) ...
Selecting previously unselected package fonts-go.
Preparing to unpack .../36-fonts-go_0~20170330-1_all.deb ...
Unpacking fonts-go (0~20170330-1) ...
Selecting previously unselected package fonts-junicode.
Preparing to unpack .../37-fonts-junicode_1.002-2_all.deb ...
Unpacking fonts-junicode (1.002-2) ...
Selecting previously unselected package fonts-linuxlibertine.
Preparing to unpack .../38-fonts-linuxlibertine_5.3.0-6_all.deb ...
Unpacking fonts-linuxlibertine (5.3.0-6) ...
Selecting previously unselected package fonts-lmodern.
Preparing to unpack .../39-fonts-lmodern_2.004.5-6.1_all.deb ...
Unpacking fonts-lmodern (2.004.5-6.1) ...
Selecting previously unselected package fonts-lobster.
Preparing to unpack .../40-fonts-lobster_2.0-2.1_all.deb ...
Unpacking fonts-lobster (2.0-2.1) ...
Selecting previously unselected package fonts-lobstertwo.
Preparing to unpack .../41-fonts-lobstertwo_2.0-2.1_all.deb ...
Unpacking fonts-lobstertwo (2.0-2.1) ...
Selecting previously unselected package fonts-noto-color-emoji.
Preparing to unpack .../42-fonts-noto-color-emoji_2.047-0ubuntu0.22.04.1_all.deb ...
Unpacking fonts-noto-color-emoji (2.047-0ubuntu0.22.04.1) ...
Selecting previously unselected package fonts-noto-core.
Preparing to unpack .../43-fonts-noto-core_20201225-1build1_all.deb ...
Unpacking fonts-noto-core (20201225-1build1) ...
Selecting previously unselected package fonts-noto-mono.
Preparing to unpack .../44-fonts-noto-mono_20201225-1build1_all.deb ...
Unpacking fonts-noto-mono (20201225-1build1) ...
Selecting previously unselected package fonts-oflb-asana-math.
Preparing to unpack .../45-fonts-oflb-asana-math_000.907-7build1_all.deb ...
Unpacking fonts-oflb-asana-math (000.907-7build1) ...
Selecting previously unselected package fonts-open-sans.
Preparing to unpack .../46-fonts-open-sans_1.11-2_all.deb ...
Unpacking fonts-open-sans (1.11-2) ...
Selecting previously unselected package fonts-roboto-unhinted.
Preparing to unpack .../47-fonts-roboto-unhinted_2%3a0~20170802-3_all.deb ...
Unpacking fonts-roboto-unhinted (2:0~20170802-3) ...
Selecting previously unselected package fonts-sil-charis.
Preparing to unpack .../48-fonts-sil-charis_6.101-1_all.deb ...
Unpacking fonts-sil-charis (6.101-1) ...
Selecting previously unselected package fonts-sil-gentium.
Preparing to unpack .../49-fonts-sil-gentium_20081126%3a1.03-4_all.deb ...
Unpacking fonts-sil-gentium (20081126:1.03-4) ...
Selecting previously unselected package fonts-sil-gentium-basic.
Preparing to unpack .../50-fonts-sil-gentium-basic_1.102-1.1_all.deb ...
Unpacking fonts-sil-gentium-basic (1.102-1.1) ...
Selecting previously unselected package fonts-sil-gentiumplus.
Preparing to unpack .../51-fonts-sil-gentiumplus_6.101-1_all.deb ...
Unpacking fonts-sil-gentiumplus (6.101-1) ...
Selecting previously unselected package fonts-sil-gentiumplus-compact.
Preparing to unpack .../52-fonts-sil-gentiumplus-compact_5.000-4_all.deb ...
Unpacking fonts-sil-gentiumplus-compact (5.000-4) ...
Selecting previously unselected package fonts-texgyre.
Preparing to unpack .../53-fonts-texgyre_20180621-3.1_all.deb ...
Unpacking fonts-texgyre (20180621-3.1) ...
Selecting previously unselected package libapache-pom-java.
Preparing to unpack .../54-libapache-pom-java_18-1_all.deb ...
Unpacking libapache-pom-java (18-1) ...
Selecting previously unselected package libcommons-parent-java.
Preparing to unpack .../55-libcommons-parent-java_43-1_all.deb ...
Unpacking libcommons-parent-java (43-1) ...
Selecting previously unselected package libcommons-logging-java.
Preparing to unpack .../56-libcommons-logging-java_1.2-2_all.deb ...
Unpacking libcommons-logging-java (1.2-2) ...
Selecting previously unselected package libptexenc1:amd64.
Preparing to unpack .../57-libptexenc1_2021.20210626.59705-1ubuntu0.2_amd64.deb ...
Unpacking libptexenc1:amd64 (2021.20210626.59705-1ubuntu0.2) ...
Selecting previously unselected package rubygems-integration.
Preparing to unpack .../58-rubygems-integration_1.18_all.deb ...
Unpacking rubygems-integration (1.18) ...
Selecting previously unselected package ruby3.0.
Preparing to unpack .../59-ruby3.0_3.0.2-7ubuntu2.10_amd64.deb ...
Unpacking ruby3.0 (3.0.2-7ubuntu2.10) ...
Selecting previously unselected package ruby-rubygems.
Preparing to unpack .../60-ruby-rubygems_3.3.5-2_all.deb ...
Unpacking ruby-rubygems (3.3.5-2) ...
Selecting previously unselected package ruby.
Preparing to unpack .../61-ruby_1%3a3.0~exp1_amd64.deb ...
Unpacking ruby (1:3.0~exp1) ...
Selecting previously unselected package rake.
Preparing to unpack .../62-rake_13.0.6-2_all.deb ...
Unpacking rake (13.0.6-2) ...
Selecting previously unselected package ruby-net-telnet.
Preparing to unpack .../63-ruby-net-telnet_0.1.1-2_all.deb ...
Unpacking ruby-net-telnet (0.1.1-2) ...
Selecting previously unselected package ruby-webrick.
Preparing to unpack .../64-ruby-webrick_1.7.0-3ubuntu0.2_all.deb ...
Unpacking ruby-webrick (1.7.0-3ubuntu0.2) ...
Selecting previously unselected package ruby-xmlrpc.
Preparing to unpack .../65-ruby-xmlrpc_0.3.2-1ubuntu0.1_all.deb ...
Unpacking ruby-xmlrpc (0.3.2-1ubuntu0.1) ...
Selecting previously unselected package libruby3.0:amd64.
Preparing to unpack .../66-libruby3.0_3.0.2-7ubuntu2.10_amd64.deb ...
Unpacking libruby3.0:amd64 (3.0.2-7ubuntu2.10) ...
Selecting previously unselected package libsynctex2:amd64.
Preparing to unpack .../67-libsynctex2_2021.20210626.59705-1ubuntu0.2_amd64.deb ...
Unpacking libsynctex2:amd64 (2021.20210626.59705-1ubuntu0.2) ...
Selecting previously unselected package libteckit0:amd64.
Preparing to unpack .../68-libteckit0_2.5.11+ds1-1_amd64.deb ...
Unpacking libteckit0:amd64 (2.5.11+ds1-1) ...
Selecting previously unselected package libtexlua53:amd64.
Preparing to unpack .../69-libtexlua53_2021.20210626.59705-1ubuntu0.2_amd64.deb ...
Unpacking libtexlua53:amd64 (2021.20210626.59705-1ubuntu0.2) ...
Selecting previously unselected package libtexluajit2:amd64.
Preparing to unpack .../70-libtexluajit2_2021.20210626.59705-1ubuntu0.2_amd64.deb ...
Unpacking libtexluajit2:amd64 (2021.20210626.59705-1ubuntu0.2) ...
Selecting previously unselected package libzzip-0-13:amd64.
Preparing to unpack .../71-libzzip-0-13_0.13.72+dfsg.1-1.1_amd64.deb ...
Unpacking libzzip-0-13:amd64 (0.13.72+dfsg.1-1.1) ...
Selecting previously unselected package xfonts-encodings.
Preparing to unpack .../72-xfonts-encodings_1%3a1.0.5-0ubuntu2_all.deb ...
Unpacking xfonts-encodings (1:1.0.5-0ubuntu2) ...
Selecting previously unselected package xfonts-utils.
Preparing to unpack .../73-xfonts-utils_1%3a7.7+6build2_amd64.deb ...
Unpacking xfonts-utils (1:7.7+6build2) ...
Selecting previously unselected package lmodern.
Preparing to unpack .../74-lmodern_2.004.5-6.1_all.deb ...
Unpacking lmodern (2.004.5-6.1) ...
Selecting previously unselected package preview-latex-style.
Preparing to unpack .../75-preview-latex-style_12.2-1ubuntu1_all.deb ...
Unpacking preview-latex-style (12.2-1ubuntu1) ...
Selecting previously unselected package t1utils.
Preparing to unpack .../76-t1utils_1.41-4build2_amd64.deb ...
Unpacking t1utils (1.41-4build2) ...
Selecting previously unselected package tex-gyre.
Preparing to unpack .../77-tex-gyre_20180621-3.1_all.deb ...
Unpacking tex-gyre (20180621-3.1) ...
Selecting previously unselected package texlive-binaries.
Preparing to unpack .../78-texlive-binaries_2021.20210626.59705-1ubuntu0.2_amd64.deb ...
Unpacking texlive-binaries (2021.20210626.59705-1ubuntu0.2) ...
Selecting previously unselected package texlive-base.
Preparing to unpack .../79-texlive-base_2021.20220204-1_all.deb ...
Unpacking texlive-base (2021.20220204-1) ...
Selecting previously unselected package texlive-fonts-extra.
Preparing to unpack .../80-texlive-fonts-extra_2021.20220204-1_all.deb ...
Unpacking texlive-fonts-extra (2021.20220204-1) ...
Selecting previously unselected package fonts-stix.
Preparing to unpack .../81-fonts-stix_1.1.1-4.1_all.deb ...
Unpacking fonts-stix (1.1.1-4.1) ...
Selecting previously unselected package texlive-fonts-extra-links.
Preparing to unpack .../82-texlive-fonts-extra-links_2021.20220204-1_all.deb ...
Unpacking texlive-fonts-extra-links (2021.20220204-1) ...
Selecting previously unselected package texlive-fonts-recommended.
Preparing to unpack .../83-texlive-fonts-recommended_2021.20220204-1_all.deb ...
Unpacking texlive-fonts-recommended (2021.20220204-1) ...
Selecting previously unselected package texlive-latex-base.
Preparing to unpack .../84-texlive-latex-base_2021.20220204-1_all.deb ...
Unpacking texlive-latex-base (2021.20220204-1) ...
Selecting previously unselected package libfontbox-java.
Preparing to unpack .../85-libfontbox-java_1%3a1.8.16-2_all.deb ...
Unpacking libfontbox-java (1:1.8.16-2) ...
Selecting previously unselected package libpdfbox-java.
Preparing to unpack .../86-libpdfbox-java_1%3a1.8.16-2_all.deb ...
Unpacking libpdfbox-java (1:1.8.16-2) ...
Selecting previously unselected package texlive-latex-recommended.
Preparing to unpack .../87-texlive-latex-recommended_2021.20220204-1_all.deb ...
Unpacking texlive-latex-recommended (2021.20220204-1) ...
Selecting previously unselected package texlive-pictures.
Preparing to unpack .../88-texlive-pictures_2021.20220204-1_all.deb ...
Unpacking texlive-pictures (2021.20220204-1) ...
Selecting previously unselected package texlive-latex-extra.
Preparing to unpack .../89-texlive-latex-extra_2021.20220204-1_all.deb ...
Unpacking texlive-latex-extra (2021.20220204-1) ...
Selecting previously unselected package texlive-plain-generic.
Preparing to unpack .../90-texlive-plain-generic_2021.20220204-1_all.deb ...
Unpacking texlive-plain-generic (2021.20220204-1) ...
Selecting previously unselected package tipa.
Preparing to unpack .../91-tipa_2%3a1.3-21_all.deb ...
Unpacking tipa (2:1.3-21) ...
Setting up fonts-gfs-didot (1.1-7) ...
Setting up fonts-gfs-artemisia (1.1-6) ...
Setting up fonts-sil-gentium-basic (1.102-1.1) ...
Setting up fonts-cantarell (0.303-2) ...
Setting up fonts-ebgaramond (0.016+git20210310.42d4f9f2-1) ...
Setting up fonts-lato (2.0-2.1) ...
Setting up fonts-junicode (1.002-2) ...
Setting up fonts-noto-mono (20201225-1build1) ...
Setting up libwoff1:amd64 (1.0.2-1build4) ...
Setting up fonts-noto-color-emoji (2.047-0ubuntu0.22.04.1) ...
Setting up libtexlua53:amd64 (2021.20210626.59705-1ubuntu0.2) ...
Setting up fonts-adf-berenis (0.20190904-2) ...
Setting up libijs-0.35:amd64 (0.35-15build2) ...
Setting up libtexluajit2:amd64 (2021.20210626.59705-1ubuntu0.2) ...
Setting up libfontbox-java (1:1.8.16-2) ...
Setting up fonts-freefont-otf (20120503-10build1) ...
Setting up fonts-freefont-ttf (20120503-10build1) ...
Setting up fonts-gfs-solomos (1.1-6) ...
Setting up fonts-comfortaa (3.001-3) ...
Setting up rubygems-integration (1.18) ...
Setting up libzzip-0-13:amd64 (0.13.72+dfsg.1-1.1) ...
Setting up fonts-sil-gentiumplus-compact (5.000-4) ...
Setting up fonts-roboto-unhinted (2:0~20170802-3) ...
Setting up fonts-urw-base35 (20200910-1) ...
Setting up fonts-open-sans (1.11-2) ...
Setting up fonts-sil-gentiumplus (6.101-1) ...
Setting up fonts-gfs-neohellenic (1.1-7) ...
Setting up fonts-gfs-olga (1.1-6) ...
Setting up fonts-oflb-asana-math (000.907-7build1) ...
Setting up poppler-data (0.4.11-1) ...
Setting up fonts-crosextra-carlito (20130920-1.1) ...
Setting up fonts-adf-accanthis (0.20190904-2) ...
Setting up tex-common (6.17) ...
update-language: texlive-base not installed and configured, doing nothing!
Setting up fonts-sil-gentium (20081126:1.03-4) ...
Setting up fonts-adf-universalis (0.20190904-2) ...
Setting up fonts-stix (1.1.1-4.1) ...
Setting up fonts-sil-charis (6.101-1) ...
Setting up libjbig2dec0:amd64 (0.19-3build2) ...
Setting up fonts-go (0~20170330-1) ...
Setting up libteckit0:amd64 (2.5.11+ds1-1) ...
Setting up libapache-pom-java (18-1) ...
Setting up ruby-net-telnet (0.1.1-2) ...
Setting up fonts-cabin (1.5-3) ...
Setting up xfonts-encodings (1:1.0.5-0ubuntu2) ...
Setting up t1utils (1.41-4build2) ...
Setting up libidn12:amd64 (1.38-4ubuntu1) ...
Setting up fonts-dejavu-core (2.37-2build1) ...
Setting up fonts-texgyre (20180621-3.1) ...
Setting up fonts-linuxlibertine (5.3.0-6) ...
Setting up libkpathsea6:amd64 (2021.20210626.59705-1ubuntu0.2) ...
Setting up fonts-croscore (20201225-1build1) ...
Setting up ruby-webrick (1.7.0-3ubuntu0.2) ...
Setting up fonts-dejavu-extra (2.37-2build1) ...
Setting up fonts-gfs-complutum (1.1-7) ...
Setting up fonts-crosextra-caladea (20130214-2.1) ...
Setting up fonts-lmodern (2.004.5-6.1) ...
Setting up fonts-ebgaramond-extra (0.016+git20210310.42d4f9f2-1) ...
Setting up fonts-lobster (2.0-2.1) ...
Setting up fonts-droid-fallback (1:6.0.1r16-1.1build1) ...
Setting up fonts-adf-gillius (0.20190904-2) ...
Setting up ruby-xmlrpc (0.3.2-1ubuntu0.1) ...
Setting up fonts-noto-core (20201225-1build1) ...
Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
Setting up libsynctex2:amd64 (2021.20210626.59705-1ubuntu0.2) ...
Setting up libgs9-common (9.55.0~dfsg1-0ubuntu5.12) ...
Setting up libpdfbox-java (1:1.8.16-2) ...
Setting up libgs9:amd64 (9.55.0~dfsg1-0ubuntu5.12) ...
Setting up preview-latex-style (12.2-1ubuntu1) ...
Setting up libcommons-parent-java (43-1) ...
Setting up dvisvgm (2.13.1-1) ...
Setting up libcommons-logging-java (1.2-2) ...
Setting up xfonts-utils (1:7.7+6build2) ...
Setting up fonts-lobstertwo (2.0-2.1) ...
Setting up libptexenc1:amd64 (2021.20210626.59705-1ubuntu0.2) ...
Setting up texlive-fonts-extra-links (2021.20220204-1) ...
Setting up texlive-binaries (2021.20210626.59705-1ubuntu0.2) ...
update-alternatives: using /usr/bin/xdvi-xaw to provide /usr/bin/xdvi.bin (xdvi.bin) in auto mode
update-alternatives: using /usr/bin/bibtex.original to provide /usr/bin/bibtex (bibtex) in auto mode
Setting up lmodern (2.004.5-6.1) ...
Setting up texlive-base (2021.20220204-1) ...
/usr/bin/ucfr
/usr/bin/ucfr
/usr/bin/ucfr
/usr/bin/ucfr
mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... 
mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... 
mktexlsr: Updating /var/lib/texmf/ls-R... 
mktexlsr: Done.
tl-paper: setting paper size for dvips to a4: /var/lib/texmf/dvips/config/config-paper.ps
tl-paper: setting paper size for dvipdfmx to a4: /var/lib/texmf/dvipdfmx/dvipdfmx-paper.cfg
tl-paper: setting paper size for xdvi to a4: /var/lib/texmf/xdvi/XDvi-paper
tl-paper: setting paper size for pdftex to a4: /var/lib/texmf/tex/generic/tex-ini-files/pdftexconfig.tex
Setting up tex-gyre (20180621-3.1) ...
Setting up texlive-plain-generic (2021.20220204-1) ...
Setting up texlive-latex-base (2021.20220204-1) ...
Setting up texlive-fonts-extra (2021.20220204-1) ...
Setting up texlive-latex-recommended (2021.20220204-1) ...
Setting up texlive-pictures (2021.20220204-1) ...
Setting up texlive-fonts-recommended (2021.20220204-1) ...
Setting up tipa (2:1.3-21) ...
Setting up texlive-latex-extra (2021.20220204-1) ...
Setting up libruby3.0:amd64 (3.0.2-7ubuntu2.10) ...
Setting up ruby3.0 (3.0.2-7ubuntu2.10) ...
Setting up ruby (1:3.0~exp1) ...
Setting up rake (13.0.6-2) ...
Setting up ruby-rubygems (3.3.5-2) ...
Processing triggers for man-db (2.10.2-1) ...
Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
Processing triggers for fontconfig (2.13.1-4.2ubuntu5) ...
Processing triggers for libc-bin (2.35-0ubuntu3.8) ...
/sbin/ldconfig.real: /usr/local/lib/libtbbbind_2_5.so.3 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtbbmalloc_proxy.so.2 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libur_adapter_opencl.so.0 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtbbbind.so.3 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtbb.so.12 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libumf.so.0 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtbbbind_2_0.so.3 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtcm.so.1 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libur_adapter_level_zero_v2.so.0 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtbbmalloc.so.2 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libhwloc.so.15 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtcm_debug.so.1 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libur_adapter_level_zero.so.0 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libur_loader.so.0 is not a symbolic link

Processing triggers for tex-common (6.17) ...
Running updmap-sys. This may take some time... done.
Running mktexlsr /var/lib/texmf ... done.
Building format(s) --all.
	This may take some time... done.

Crea carpeta de imagenes específica para crear el póster

Crea archivo poster.tex

In [89]:
import subprocess

# Código LaTeX que permite crear el póster

poster_code = r"""
\documentclass[final]{beamer}
\usepackage[size=a0,scale=1.0]{beamerposter} % Tamaño A0 optimizado
\usepackage{graphicx}
\usepackage{booktabs}
\usepackage{multicol}
\usepackage{caption}

% -----------------------------
% COLORES
% -----------------------------
\definecolor{PrimaryBlue}{RGB}{30, 144, 255}
\definecolor{AccentOrange}{RGB}{255, 140, 0}
\definecolor{LightGray}{RGB}{245,245,245}
\setbeamercolor{block title}{bg=PrimaryBlue, fg=white}
\setbeamercolor{block body}{bg=LightGray, fg=black}
\setbeamercolor{alertblock title}{bg=AccentOrange, fg=white}
\setbeamercolor{alertblock body}{bg=LightGray, fg=black}

% -----------------------------
% AJUSTE DE COLUMNAS
% -----------------------------
\setlength{\columnsep}{1cm} % separación entre columnas

\begin{document}
\begin{frame}[t]

% -----------------------------
% TITULO DEL PÓSTER
% -----------------------------
\begin{center}
{\Huge Detección de Vida Silvestre en África usando RetinaNet y PyTorch \par}
\vspace{0.5cm}
{\LARGE Visión por computadora aplicada a la biodiversidad \par}
\vspace{0.5cm}
{\large Alexander Barrantes Herrera \par}
{\large Redbioma \par}
{\large 2025 \par}
\end{center}
\vspace{1cm}

\begin{columns}[t,totalwidth=\textwidth]

% -----------------------------
% COLUMNA 1: Dataset
% -----------------------------
\begin{column}{0.315\textwidth}
\begin{block}{A. Resumen Descripción del Dataset y de Análisis Exploratorio}
\textbf{Dataset:} African Wildlife Dataset (Ultralytics, 2025). \\
\textbf{Clases:} buffalo, elephant, rhino, zebra

\begin{itemize}
    \item Conteo de imágenes en train/val/test
    \item Número y distribución de bounding boxes
    \item Distribución de tamaños de imagen
    \item Visualización de ejemplos con GT
\end{itemize}

\begin{center}
\includegraphics[width=0.9\linewidth]{Numero_de_cajas_por_imagen.png}
\captionof{figure}{Número de Cajas por Imagen}
\end{center}

\begin{center}
\includegraphics[width=0.9\linewidth]{/content/weights/Tamano_de_imagen_px.png}
\captionof{figure}{Tamaño de Imagen según PX}
\end{center}

\end{block}
\end{column}

% -----------------------------
% COLUMNA 2: Imagen + Modelo + Resultados
% -----------------------------
\begin{column}{0.315\textwidth}

% -----------------------------
% Imagen collage de ejemplo con bounding boxes
% -----------------------------
\begin{center}
\includegraphics[width=0.9\linewidth]{/content/data/african_wildlife/examples_bb/collage_bb.png}
\captionof{figure}{Visualización de Imágenes - Inicial}
\end{center}

\begin{block}{B. Resumen Descriptivo del Modelo RetinaNet}
\textbf{Arquitectura:} RetinaNet con ResNet50 preentrenado

\textbf{Configuración:}
\begin{itemize}
    \item Clases: 4 + fondo
    \item Optimizador: AdamW, lr=1e-4
    \item Batch: 2, Epochs: 5
    \item Augmentations: Resize, Flip, ToTensor
\end{itemize}
\end{block}

\begin{block}{C. Resumen de Resultados del Modelo}
\begin{center}
\includegraphics[width=0.9\linewidth]{/content/weights/Loss_por_epoch.png}
\captionof{figure}{Curva de pérdida Train vs Val}
\end{center}

\textbf{Resumen de métricas (época final):}
\begin{center}
\begin{tabular}{l c}
\toprule
Métrica & Valor \\
\midrule
mAP@0.5 & 0.351 \\
mAP@0.5:0.95 & 0.176 \\
Precision & 0.412 \\
Recall & 0.536 \\
F1 Score & 0.463 \\
Accuracy & 0.602 \\
\bottomrule
\end{tabular}
\end{center}
\end{block}
\end{column}

% -----------------------------
% COLUMNA 3: Predicciones + Reflexión
% -----------------------------
\begin{column}{0.315\textwidth}

\begin{center}
\includegraphics[width=0.9\linewidth]{/content/weights/collage_pred_gt.png}
\captionof{figure}{Ejemplos de validación: Green=GT, Red=Predicción}
\end{center}

\begin{block}{D. Reflexión sobre la experiencia obtenida}
\begin{itemize}
    \item Replicar el tutorial (Gosh, 2025) permitió entender datasets en PyTorch.
    \item Dataset real: balance de clases y variabilidad de cajas.
    \item Visualización de predicciones vs GT fue crucial antes de cuantificar métricas.
    \item El pipeline entrenar-validar-visualizar es esencial en detección de objetos.
\end{itemize}
\end{block}
\end{column}

\end{columns}
\end{frame}
\end{document}
"""


# Guardar el archivo .tex
tex_filename = "poster_final.tex"
with open(tex_filename, "w", encoding="utf-8") as f:
    f.write(poster_code)

print(f"Archivo {tex_filename} guardado correctamente.")
Archivo poster_final.tex guardado correctamente.

Compila el archivo LaTeX a PDF

In [90]:
import subprocess
from google.colab import files
from pathlib import Path

weights_dir = Path("/content/weights")
poster_tex_path = weights_dir / "poster.tex"

# Guarda el código LaTeX actualizado
with open(poster_tex_path, "w", encoding="utf-8") as f:
    f.write(poster_code)  # poster_code contiene las 3 columnas

# Compila
subprocess.run([
    "pdflatex",
    "-output-directory", str(weights_dir),
    str(poster_tex_path)
], check=True)
Out[90]:
CompletedProcess(args=['pdflatex', '-output-directory', '/content/weights', '/content/weights/poster.tex'], returncode=0)

Descarga el archivo PDF

In [91]:
from google.colab import files


# Guarda el poster con extensión .pdf
pdf_path = weights_dir / "poster.pdf"

# Verifica si existe
if pdf_path.exists():
    files.download(str(pdf_path))
else:
    print("¡Error! No se generó el PDF.")