07. 04. 2025 Alessandro Paoli Asset Management, GLPI, NetEye

Automating GLPI Migration from an On-Premise System to NetEye Cloud

In the context of IT management, migrating data between different environments can be a critical activity. GLPI is a widely used open-source platform for IT asset and help desk management. When transitioning from an on-premise instance to a cloud version (such as NE4 Cloud), the migration process can become extremely complex. This article analyzes a script designed to robustly and modularly automate the data migration between two GLPI environments.

Script Objectives

The script is designed to:

  • Connect to two GLPI instances (source and destination) via database.
  • Extract, transform, and load (ETL) selected data.
  • Make REST API calls to insert users, assets, and documents into the new instance.
  • Maintain a detailed log of the migration (via the migration_log table).

Main Components

1. constant_new.py

This file serves as the configuration core. It defines all constants required for the script to operate:

  • API_ROOT and API_ROOT_HTTP: REST API endpoints of the GLPI cloud.
  • APP_TOKEN, USER_TOKEN: API credentials required to authenticate with GLPI.
  • OHOST / NHOST and corresponding user/passwords: database connection credentials for source and destination.
  • ENTITIES_ID: a critical parameter to identify the destination entity in the new GLPI instance.
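
For orientation, constant_new.py can be pictured as a plain module of constants. The values below are placeholders, and the user/password/database variable names are assumptions rather than the script's actual identifiers:

# constant_new.py -- configuration core (all values are placeholders)
API_ROOT      = "https://neteye.example.com/glpi/apirest.php"   # HTTPS REST endpoint of the cloud GLPI
API_ROOT_HTTP = "http://neteye.example.com/glpi/apirest.php"

APP_TOKEN  = "xxxxxxxxxxxxxxxx"   # API client token configured in GLPI
USER_TOKEN = "xxxxxxxxxxxxxxxx"   # personal API token of the migration user

OHOST = "onprem-db.example.com"   # source (on-premise) database host
NHOST = "cloud-db.example.com"    # destination (cloud) database host
OUSER, OPASS, ODB = "glpi_ro", "***", "glpi"   # assumed names for the source credentials
NUSER, NPASS, NDB = "glpi_rw", "***", "glpi"   # assumed names for the destination credentials

ENTITIES_ID = 2                   # destination entity in the new GLPI instance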

2. function_new.py

Contains the core business logic of the script. It is a flexible module organized into the following categories:

a. API Helpers

Functions that directly interface with GLPI via REST API:

  • api_init(): initializes an authenticated session.
  • api_insert_user(user): creates a user.
  • api_upload_document(), api_direct_curl(): manage file uploads, including via curl for special cases.
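
As a reference point, api_init() can be sketched with the standard GLPI REST flow: an initSession call authenticated with the App-Token and a user_token Authorization header. The snippet below is an illustrative approximation, not the exact code of function_new.py:

import requests
from constant_new import API_ROOT, APP_TOKEN, USER_TOKEN

def api_init():
    """Open a GLPI API session and return the session token used by later calls."""
    resp = requests.get(
        f"{API_ROOT}/initSession",
        headers={"App-Token": APP_TOKEN,
                 "Authorization": f"user_token {USER_TOKEN}"},
        verify=False,   # tolerate self-signed certificates, as curl -k does
    )
    resp.raise_for_status()
    return resp.json()["session_token"]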

b. SQL Helpers

Functions to read/write from MySQL databases:

  • execute_sql, execute_sql_dest: execute queries on the databases.
  • crud_sql(): dynamically builds and submits an INSERT query.
  • execute_sql_insert_upd(): handles inserts and updates.
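
A simplified sketch of what execute_sql looks like conceptually, assuming a standard MySQL driver such as pymysql (the connector and credential names actually used by the script may differ):

import pymysql
from constant_new import OHOST, OUSER, OPASS, ODB   # user/password/db constant names are assumed

def execute_sql(query, params=None):
    """Run a query against the source (on-premise) database and return all rows as dicts."""
    conn = pymysql.connect(host=OHOST, user=OUSER, password=OPASS, database=ODB,
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute(query, params or ())
            conn.commit()
            return cur.fetchall()
    finally:
        conn.close()

execute_sql_dest would follow the same pattern, pointing at NHOST and the destination credentials instead.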

c. Mapping & Utility

  • find_in_migration_log(): retrieves matching IDs between old and new DBs.
  • get_payload(): serializes an object in GLPI-compatible JSON format.
  • insert_migration_log(): logs each operation in detail into the migration_log table.
  • associate_computer_items(), insert_relation(): manage object relationships.
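
For example, the ID lookup in find_in_migration_log() can be imagined as a single query against the migration_log table described further below (a sketch which assumes the execute_sql_dest helper accepts bound parameters, as in the earlier sketch):

def find_in_migration_log(tablename, oldid):
    """Return the destination ID assigned to an old record, or None if not migrated yet."""
    rows = execute_sql_dest(
        "SELECT newid FROM glpi.migration_log "
        "WHERE tablename = %s AND oldid = %s AND deleted IS NULL",
        (tablename, oldid),
    )
    return rows[0]["newid"] if rows else None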

This modularity allows for easy reuse and customization of functions for different asset types.

3. migration_new.py

The main executable file serving as the orchestrator. This script defines and executes the complete migration workflow. It consists of a single entry point (main()) and uses a combination of direct SQL operations and REST API calls to carry out the migration process.

Key responsibilities include:

  • Asset import orchestration: Assets are imported in a logical sequence—locations, manufacturers, states, groups, and various equipment types (computers, printers, peripherals, etc.).
  • Conditional logic: Includes safeguards such as:
    • Skipping already-imported entries using check_else_insert_dest() and find_in_migration_log().
    • Filtering out system users from the migration (glpi, tech, etc.).
  • Batch processing and logging: Data is processed in batches, with progress logs and detailed activity written to migration_log.
  • Relation and document handling: Maintains data relationships between entities and links documents to the corresponding records.
  • Plugin compatibility: Contains commented sections to support future expansions such as SIM card plugin data.
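
Schematically, the flow inside main() can be summarized as follows; the migrate_* function names are illustrative, and only the ordering reflects the responsibilities described above:

def main():
    token = api_init()          # authenticated session against the cloud API
    # 1. Reference data first, so later foreign keys can be resolved
    migrate_locations()
    migrate_manufacturers()
    migrate_states()
    migrate_groups()
    # 2. Users, skipping system accounts such as 'glpi' and 'tech'
    migrate_users(token)
    # 3. Assets: computers, monitors, printers, phones, peripherals, network equipment
    migrate_assets(token)
    # 4. Contracts, relations, documents and logs, resolved through migration_log
    migrate_contracts()
    migrate_relations()
    migrate_documents(token)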

Additionally, the script makes use of utility functions from function_new.py to reduce redundancy and ensure consistency across data types. All output messages are printed with timestamps to track execution progress.

The structure is designed for idempotency: the script can be safely re-run without causing duplication, as long as the conditional logic is preserved.

Notable Technical Features

Use of REST APIs

The script integrates with GLPI’s REST APIs, allowing data insertion or updates even when direct database access is not allowed. This approach is secure and aligns with best practices for cloud platforms.

Example:
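
The following is a representative sketch of such a call, based on GLPI's standard REST conventions; the payload fields and variable names are illustrative:

# Create a user on the destination GLPI through the REST API
headers = {
    "Session-Token": token,          # obtained from api_init()
    "App-Token": APP_TOKEN,
    "Content-Type": "application/json",
}
payload = {"input": {"name": "j.doe", "realname": "Doe", "entities_id": ENTITIES_ID}}
resp = requests.post(f"{API_ROOT}/User", headers=headers, json=payload, verify=False)
new_user_id = resp.json()["id"]      # ID assigned by the new instance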


The APIs are also used for document insertion (file uploads) using dynamically generated curl commands:

cmd = f"""curl -k -X POST -H 'Content-Type: multipart/form-data' \
-H \"Session-Token: {token}\" \
-H \"App-Token: {APP_TOKEN}\" \
-F \"filename[0]=@{completepath}\" \"{api_path('document')}\""""

Dynamic SQL CRUD

The crud_sql function analyzes the table structure via INFORMATION_SCHEMA and dynamically builds the INSERT query, adapting to data types and availability:

SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME='glpi_computers' AND TABLE_SCHEMA = 'glpi'

sqlinsert = f"INSERT INTO {tableschema}.{tablename} ({', '.join(columns)}) VALUES ({', '.join(values)})"

This approach provides extreme flexibility, allowing the script to work with different tables without changing its core logic.
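
Put together, the heart of crud_sql() can be approximated as follows. This is a simplified sketch: the real function's type handling and escaping are richer, and the helper signatures are assumed:

def crud_sql(tablename, data, tableschema):
    """Build and run an INSERT for `data` (a dict), quoting values by column type."""
    cols = execute_sql_dest(
        "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS "
        "WHERE TABLE_NAME = %s AND TABLE_SCHEMA = %s",
        (tablename, tableschema),
    )
    types = {c["COLUMN_NAME"]: c["DATA_TYPE"] for c in cols}
    columns, values = [], []
    for col, val in data.items():
        if col not in types or val is None:
            continue                              # ignore unknown columns and missing values
        columns.append(f"`{col}`")
        if types[col] in ("int", "bigint", "tinyint", "smallint", "decimal"):
            values.append(str(val))
        else:
            values.append("'" + str(val).replace("'", "''") + "'")
    sqlinsert = (f"INSERT INTO {tableschema}.{tablename} "
                 f"({', '.join(columns)}) VALUES ({', '.join(values)})")
    return execute_sql_insert_upd(sqlinsert)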

Extensive Logging

Each insert, update, or failure is logged in detail into the migration_log table. This ensures full traceability and facilitates debugging and auditing.

Structure:

CREATE TABLE glpi.migration_log (
  id INT AUTO_INCREMENT PRIMARY KEY,
  oldid INT,
  newid INT,
  json TEXT,
  tablename VARCHAR(255),
  data DATETIME,
  deleted DATETIME
);

Example log insertion:

sql = f"""
INSERT INTO glpi.migration_log (oldid, newid, json, tablename, data)
VALUES ({oldid}, {newid}, '{json}', '{tablename}', '{timestamp}')
"""

Conditional Insertion

The script avoids duplication using functions that pre-check for existing records. For example, to insert only non-existing manufacturers:

sqlwhere = f"WHERE `{unique_field}` NOT IN ({','.join(existing_values)})"
sql_final = sqlin + sqlwhere

This approach is used for entities such as locations, states, manufacturers, and types.
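
For context, existing_values in the snippet above is typically collected from the destination database first, along these lines (an illustrative sketch using glpi_manufacturers; the real check_else_insert_dest() may differ):

# Names already present in the destination, quoted for use in a NOT IN clause
rows = execute_sql_dest("SELECT `name` FROM glpi.glpi_manufacturers")
existing_values = ["'" + r["name"].replace("'", "''") + "'" for r in rows]

if existing_values:
    sqlwhere = f"WHERE `{unique_field}` NOT IN ({','.join(existing_values)})"
else:
    sqlwhere = ""                # destination table still empty: copy everything
sql_final = sqlin + sqlwhere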

N:N Relationship Support

The script handles complex relationships such as glpi_contracts_items and glpi_computers_items, ensuring referential integrity between old and new IDs.

Example:

new_item_id = find_in_migration_log('glpi_monitors', old_item_id)
data = {
    'contracts_id': new_contract_id,
    'items_id': new_item_id,
    'itemtype': item_type,
    'entities_id': entities_id
}
crud_sql('glpi_contracts_items', data, 'glpi')

The associate_computer_items function also dynamically determines the peripheral type (Monitor, Printer, Peripheral) and resolves new IDs using the log table.
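
That type-dependent resolution can be pictured roughly like this (an illustrative sketch, not the verbatim function):

def associate_computer_items(old_row):
    """Re-create a computer/peripheral link using the new IDs stored in migration_log."""
    itemtype = old_row["itemtype"]                 # 'Monitor', 'Printer', 'Peripheral', ...
    source_table = {"Monitor": "glpi_monitors",
                    "Printer": "glpi_printers",
                    "Peripheral": "glpi_peripherals"}[itemtype]
    new_computer_id = find_in_migration_log("glpi_computers", old_row["computers_id"])
    new_item_id = find_in_migration_log(source_table, old_row["items_id"])
    if new_computer_id and new_item_id:
        crud_sql("glpi_computers_items",
                 {"computers_id": new_computer_id,
                  "items_id": new_item_id,
                  "itemtype": itemtype},
                 "glpi")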

Batch Processing and Limits

The script is designed to process large data volumes in batches:

offset = 0
while True:
    sql_batch = sqlin + f" LIMIT {batch_size} OFFSET {offset}"
    logs = execute_sql_objects(sql_batch, entities_id)
    if not logs:
        break
    # process the batch
    offset += batch_size

This approach avoids timeouts and optimizes the insertion of logs or documents for large asset sets.

Exportable Data

The script is capable of extracting and migrating a wide array of GLPI asset types and metadata, including:

Core Entities

  • Locations (glpi_locations)
  • Manufacturers (glpi_manufacturers)
  • States (glpi_states)
  • Groups (glpi_groups)

Asset Types

  • Computers and related models/types
  • Monitors
  • Printers
  • Phones
  • Peripherals
  • Network equipment

Contracts and Suppliers

  • Contract types
  • Suppliers
  • Contracts and their linked items (glpi_contracts_items)

Relationships and Logs

  • Computer and peripheral associations (glpi_computers_items)
  • Logs (glpi_logs)
  • Documents and document-item links (glpi_documents_items)

Users (optional)

  • With support for filtering system/default users
  • Preliminary user-related entries (preliminary_entries_for_users())

These data sets can be adapted depending on the migration scope. Additional exportable plugins or tables (e.g., SIM cards, software) can be integrated with minimal customization to the existing script logic.

Conclusion

This script is a concrete example of how programming can be used to manage critical processes in a reliable, modular, and scalable way. Its structure allows for future extensions, such as plugin management or selective migration based on policy.

A project like this requires knowledge of:

  • Relational databases
  • REST APIs
  • GLPI architecture

If you’re planning a GLPI migration to NetEye Cloud, this script serves as a practical operational reference for handling the complexity of the transition. It fully automates the extract, transform, and load operations, adhering to the security specifications and APIs of the NetEye cloud system. Adopting such a solution minimizes the risk of errors, ensures complete traceability through structured logs, and saves significant time compared to manual processes. In business scenarios where downtime and accuracy are critical factors, this approach becomes essential for a successful migration.

Once the import process is complete, organizations can proceed with the installation of GLPI agents on their workstations. These agents allow automatic updates of asset data directly from the devices, ensuring that inventory information remains current and synchronized without manual intervention. This marks the beginning of a dynamic asset lifecycle management process, fully integrated within the NetEye Cloud environment.

Author
Alessandro Paoli

My name is Alessandro Paoli and I've been a Technical Consultant at Wurth Phoenix since May 2024. I've always had a great passion for IT, and since 2004 it has also been my job. In 2015 I found my field: monitoring. I have had the opportunity to use various monitoring products, both open source and proprietary, and I have worked on numerous projects ranging from small businesses to global companies. I am married and have two wonderful daughters. My passions are travel, cinema, games (video and board) and comics, and every now and then I manage to indulge in a few days of sport (padel and the gym).
