In the context of IT management, migrating data between different environments can be a critical activity. GLPI is a widely used open-source platform for IT asset and help desk management. When transitioning from an on-premise instance to a cloud version (such as NE4 Cloud), the migration process can become extremely complex. This article analyzes a script designed to robustly and modularly automate the data migration between two GLPI environments.
The script is designed to read data from the source GLPI database, insert it into the destination environment via direct SQL or the REST API, and record every operation it performs in a dedicated migration_log table. It is organized into three files.
constant_new.py
This file serves as the configuration core. It defines all constants required for the script to operate, such as the API endpoint, the application and user tokens, and the connection parameters for the source and destination databases.
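A minimal version of this file might look roughly like the following sketch; every name and value is illustrative, not taken from the original:

# constant_new.py -- illustrative sketch; names and values are assumptions, not the original file.

# GLPI REST API of the destination (cloud) instance
API_URL = "https://glpi.example.com/apirest.php"   # hypothetical endpoint
APP_TOKEN = "app-token-of-the-api-client"          # used in the API and curl examples below
USER_TOKEN = "user-token-of-the-migration-user"

# Source (on-premise) and destination database connections
SRC_DB = {"host": "10.0.0.10", "user": "glpi", "password": "secret", "database": "glpi"}
DEST_DB = {"host": "10.0.0.20", "user": "glpi", "password": "secret", "database": "glpi"}

# Batch size used when migrating large tables such as logs and documents
BATCH_SIZE = 500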
function_new.py
Contains the core business logic of the script. It’s an extremely rich and flexible module, organized into categories:
Functions that directly interface with GLPI via the REST API:
api_init(): initializes an authenticated session (see the sketch after this list).
api_insert_user(user): creates a user.
api_upload_document(), api_direct_curl(): manage file uploads, including via curl for special cases.
Functions that read from and write to the MySQL databases:
execute_sql, execute_sql_dest: execute queries on the source and destination databases.
crud_sql(): dynamically builds and submits an INSERT query.
execute_sql_insert_upd(): handles inserts and updates.
find_in_migration_log(): retrieves matching IDs between the old and new databases.
get_payload(): serializes an object into GLPI-compatible JSON format.
insert_migration_log(): logs each operation in detail into the migration_log table.
associate_computer_items(), insert_relation(): manage object relationships.
This modularity allows for easy reuse and customization of the functions for different asset types.
migration_new.py
The main executable file serving as the orchestrator. This script defines and executes the complete migration workflow. It consists of a single entry point (main()) and uses a combination of direct SQL operations and REST API calls to carry out the migration process.
Key responsibilities include checking whether records already exist in the destination and resolving ID mappings through check_else_insert_dest() and find_in_migration_log(), handling the default GLPI accounts (glpi, tech, etc.), and recording every operation in migration_log. Additionally, the script makes use of utility functions from function_new.py to reduce redundancy and ensure consistency across data types. All output messages are printed with timestamps to track execution progress.
The structure is designed for idempotency: the script can be safely re-run without causing duplication, as long as the conditional logic is preserved.
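A simplified view of this pattern, using the function names from function_new.py but with assumed signatures and return values, could look like the following sketch:

from function_new import (  # hypothetical import; these helpers are described above
    execute_sql, crud_sql, find_in_migration_log, insert_migration_log, get_payload,
)

def migrate_manufacturers():
    """Sketch of one idempotent migration step: skip rows already present in migration_log."""
    rows = execute_sql("SELECT id, name FROM glpi.glpi_manufacturers")  # source database
    for old in rows:
        if find_in_migration_log('glpi_manufacturers', old['id']):
            continue  # already migrated in a previous run
        new_id = crud_sql('glpi_manufacturers', {'name': old['name']}, 'glpi')  # destination
        insert_migration_log(old['id'], new_id, get_payload(old), 'glpi_manufacturers')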
The script integrates with GLPI’s REST APIs, allowing data insertion or updates even when direct database access is not allowed. This approach is secure and aligns with best practices for cloud platforms.
Example:
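A minimal sketch of such an API-based insert, assuming the requests library, the session opened by api_init(), and illustrative payload values:

import requests
from constant_new import API_URL, APP_TOKEN  # hypothetical constant names
from function_new import api_init            # hypothetical import of the helper described above

session_token = api_init()
resp = requests.post(
    f"{API_URL}/Computer",
    headers={"Session-Token": session_token, "App-Token": APP_TOKEN},
    json={"input": {"name": "SRV-01", "entities_id": 0}},  # illustrative payload
    verify=False,
)
resp.raise_for_status()
new_id = resp.json()["id"]  # GLPI returns the id of the newly created item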
The APIs are also used for document insertion (file uploads) using dynamically generated curl
commands:
cmd = f"""curl -k -X POST -H 'Content-Type: multipart/form-data' \
-H \"Session-Token: {token}\" \
-H \"App-Token: {APP_TOKEN}\" \
-F \"filename[0]=@{completepath}\" \"{api_path('document')}\""""
The crud_sql
function analyzes the table structure via INFORMATION_SCHEMA
and dynamically builds the INSERT query, adapting to data types and availability:
sqlinsert = f"INSERT INTO {tableschema}.{tablename} ({', '.join(columns)}) VALUES ({', '.join(values)})"
SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME='glpi_computers' AND TABLE_SCHEMA = 'glpi'
This approach provides extreme flexibility, allowing the script to work with different tables without changing its core logic.
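Putting the two snippets together, a simplified version of this dynamic INSERT logic could look like the sketch below. It assumes a DB-API cursor (for example from mysql.connector) and uses bound parameters instead of the string interpolation shown above; the real crud_sql also adapts values to the column data types:

def crud_sql_sketch(tablename, data, tableschema, cursor):
    """Insert a row using only the columns that actually exist in the target table (sketch)."""
    cursor.execute(
        "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS "
        "WHERE TABLE_NAME = %s AND TABLE_SCHEMA = %s",
        (tablename, tableschema),
    )
    table_columns = {name: dtype for name, dtype in cursor.fetchall()}

    columns = [c for c in data if c in table_columns]  # keep only valid columns
    placeholders = ", ".join(["%s"] * len(columns))
    sqlinsert = (
        f"INSERT INTO {tableschema}.{tablename} "
        f"({', '.join(columns)}) VALUES ({placeholders})"
    )
    cursor.execute(sqlinsert, [data[c] for c in columns])
    return cursor.lastrowid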
Each insert, update, or failure is logged in detail into the migration_log
table. This ensures full traceability and facilitates debugging and auditing.
Structure:
CREATE TABLE glpi.migration_log (
id INT AUTO_INCREMENT PRIMARY KEY,
oldid INT,
newid INT,
json TEXT,
tablename VARCHAR(255),
data DATETIME,
deleted DATETIME
);
Example log insertion:
sql = f"""
INSERT INTO glpi.migration_log (oldid, newid, json, tablename, data)
VALUES ({oldid}, {newid}, '{json}', '{tablename}', '{timestamp}')
"""
The script avoids duplication using functions that pre-check for existing records. For example, to insert only non-existing manufacturers:
sqlwhere = f"WHERE `{unique_field}` NOT IN ({','.join(existing_values)})"
sql_final = sqlin + sqlwhere
This approach is used for entities such as locations, states, manufacturers, and types.
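As an illustration of how that filter might be assembled for manufacturers, assuming execute_sql and execute_sql_dest return rows as dictionaries (a sketch, not the article's exact code):

# Collect the names already present in the destination, then select only the missing ones.
existing = execute_sql_dest("SELECT name FROM glpi.glpi_manufacturers")
existing_values = [f"'{row['name']}'" for row in existing]

sqlin = "SELECT * FROM glpi.glpi_manufacturers "
sqlwhere = f"WHERE `name` NOT IN ({','.join(existing_values)})" if existing_values else ""
missing = execute_sql(sqlin + sqlwhere)  # rows still to be migrated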
The script handles complex relationships such as glpi_contracts_items and glpi_computers_items, ensuring referential integrity between old and new IDs.
Example:
new_item_id = find_in_migration_log('glpi_monitors', old_item_id)
data = {
    'contracts_id': new_contract_id,
    'items_id': new_item_id,
    'itemtype': item_type,
    'entities_id': entities_id
}
crud_sql('glpi_contracts_items', data, 'glpi')
The associate_computer_items
function also dynamically determines the peripheral type (Monitor, Printer, Peripheral) and resolves new IDs using the log table.
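find_in_migration_log() essentially turns the log table into an old-to-new ID map. A minimal version, assuming the table structure shown earlier and a helper that returns rows as dictionaries, might look like this sketch:

def find_in_migration_log_sketch(tablename, oldid):
    """Return the new id recorded for (tablename, oldid), or None if not migrated yet (sketch)."""
    rows = execute_sql_dest(
        f"SELECT newid FROM glpi.migration_log "
        f"WHERE tablename = '{tablename}' AND oldid = {oldid} AND deleted IS NULL"
    )
    return rows[0]['newid'] if rows else None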
The script is designed to process large data volumes in batches:
offset = 0
while True:
    sql_batch = sqlin + f" LIMIT {batch_size} OFFSET {offset}"
    logs = execute_sql_objects(sql_batch, entities_id)
    if not logs:
        break
    # process the batch
    offset += batch_size
This approach avoids timeouts and optimizes the insertion of logs or documents for large asset sets.
The script is capable of extracting and migrating a wide array of GLPI asset types and metadata, including:
Locations (glpi_locations)
Manufacturers (glpi_manufacturers)
States (glpi_states)
Groups (glpi_groups)
Contract-item relations (glpi_contracts_items)
Computer-item relations (glpi_computers_items)
History logs (glpi_logs)
Document attachments (glpi_documents_items)
Preliminary user entries (preliminary_entries_for_users())
These data sets can be adapted depending on the migration scope. Additional exportable plugins or tables (e.g., SIM cards, software) can be integrated with minimal customization of the existing script logic.
This script is a concrete example of how programming can be used to manage critical processes in a reliable, modular, and scalable way. Its structure allows for future extensions, such as plugin management or selective migration based on policy.
A project like this requires knowledge of Python scripting, SQL and the MySQL information schema, the GLPI data model, and the GLPI REST API.
If you’re planning a GLPI migration to NetEye Cloud, this script serves as a practical operational reference for handling the complexity of the transition. It fully automates the extract, transform, and load operations, adhering to the security specifications and APIs of the NetEye cloud system. Adopting such a solution minimizes error risks, ensures complete traceability through structured logs, and optimizes time compared to manual processes. In business scenarios where downtime and accuracy are critical factors, this approach becomes essential for a successful migration.
Once the import process is complete, organizations can proceed with the installation of GLPI agents on their workstations. These agents allow automatic updates of asset data directly from the devices, ensuring that inventory information remains current and synchronized without manual intervention. This marks the beginning of a dynamic asset lifecycle management process, fully integrated within the NetEye Cloud environment.