Api
Bases: object
The interface for interacting with the Rory Privacy-Preserving Data Mining platform.
RoryClient provides a unified set of methods to access clustering, classification, and distributed processing services. It simplifies communication with the backend by handling endpoint orchestration, HTTP header management, and response validation through predefined data models.
The client supports multiple security paradigms, including:

- Standard algorithms (K-Means, KNN, NNC).
- Privacy-Preserving algorithms (Secure K-Means, Secure KNN).
- Double-Blind variants (dbskmeans, dbsnnc).
- Post-Quantum Cryptography (PQC) enabled methods for quantum-resistant security.
Attributes:

| Name | Type | Description |
|---|---|---|
| `uri` | `str` | The base connection string (e.g., http://localhost:9000). |
| `timeout` | `int` | The maximum time in seconds to wait for a server response. |
| `clustering_url` | `str` | Base endpoint for clustering services. |
| `classification_url` | `str` | Base endpoint for classification services. |
| `distributed_url` | `str` | Base endpoint for elastic task management and segmentation. |
__init__(hostname='localhost', port=9000, timeout=120)

Initializes the RoryClient with server connection details.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `hostname` | `str` | The network address of the Rory server. | `'localhost'` |
| `port` | `int` | The port where the Rory service is listening. | `9000` |
| `timeout` | `int` | Connection and request timeout in seconds. | `120` |
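For illustration, the base URLs described above can be derived from the constructor arguments roughly as follows. This is a hedged sketch: the exact path segments (`/clustering`, `/classification`) are assumptions inferred from the endpoints named later in this reference, not a confirmed implementation.

```python
def build_base_urls(hostname="localhost", port=9000):
    """Hypothetical helper showing how the service URLs could be derived."""
    uri = f"http://{hostname}:{port}"
    return {
        "uri": uri,
        "clustering_url": f"{uri}/clustering",          # e.g. POST /clustering/kmeans
        "classification_url": f"{uri}/classification",  # e.g. POST /classification/knn/train
        "distributed_url": uri,                         # /tasks and /segment hang off the root here
    }

urls = build_base_urls()
```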
dbskmeans(plaintext_matrix_id, plaintext_matrix_filename, k=2, experiment_iteration=0, num_chunks=2, max_iterations=5, extension='npy', sens=1e-11)

Executes the Double-Blind Secure K-Means (dbskmeans) clustering algorithm.

Sends a POST request to the /clustering/dbskmeans endpoint. Task parameters, matrix identifiers, and a sensitivity value (sens) are securely transmitted via HTTP headers to interface with the privacy-preserving backend, ensuring robust data protection during the mining process.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `plaintext_matrix_id` | `str` | The unique identifier of the plaintext matrix. | required |
| `plaintext_matrix_filename` | `str` | The filename of the plaintext matrix dataset. | required |
| `k` | `int` | The number of clusters to form. | `2` |
| `experiment_iteration` | `int` | The ID or iteration number of the current operation, useful for tracking research experiments. | `0` |
| `num_chunks` | `int` | The number of chunks into which the dataset is split for distributed elastic processing. | `2` |
| `max_iterations` | `int` | The maximum number of iterations allowed before the algorithm stops. | `5` |
| `extension` | `str` | The file extension of the matrix dataset. | `'npy'` |
| `sens` | `float` | The sensitivity value used for calibrating privacy-preserving operations (e.g., differential privacy noise). | `1e-11` |

Returns:

| Type | Description |
|---|---|
| `Result[KmeansResponse, Exception]` | A `Result` containing a `KmeansResponse` on success, or an `Exception` if the request fails. |
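Since all task parameters travel as HTTP headers, every value must be serialized to a string before the request is sent. A minimal sketch of that step; the header keys shown are illustrative, not the client's actual header names:

```python
def params_to_headers(**params):
    # HTTP header values must be strings; numeric parameters such as
    # k=2 or sens=1e-11 are therefore stringified before transmission.
    return {name: str(value) for name, value in params.items()}

headers = params_to_headers(k=2, num_chunks=2, max_iterations=5, sens=1e-11)
```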
dbskmeans_pqc(plaintext_matrix_id, plaintext_matrix_filename, k=2, experiment_iteration=0, num_chunks=2, max_iterations=5, extension='npy', sens='0.00000000001')

Executes the Post-Quantum Cryptography (PQC) enabled Secure K-Means clustering algorithm.

Sends a POST request to the /clustering/pqc/skmeans endpoint. This method leverages quantum-resistant cryptographic primitives to ensure data privacy during the clustering process. Task parameters and matrix identifiers are securely transmitted via HTTP headers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `plaintext_matrix_id` | `str` | The unique identifier of the plaintext matrix. | required |
| `plaintext_matrix_filename` | `str` | The filename of the plaintext matrix dataset. | required |
| `k` | `int` | The number of clusters to form. | `2` |
| `experiment_iteration` | `int` | The ID or iteration number of the current operation, useful for tracking research experiments. | `0` |
| `num_chunks` | `int` | The number of chunks into which the dataset is split for distributed elastic processing. | `2` |
| `max_iterations` | `int` | The maximum number of iterations allowed before the algorithm stops. | `5` |
| `extension` | `str` | The file extension of the matrix dataset. | `'npy'` |
| `sens` | `str` | The sensitivity value used for calibrating privacy-preserving operations. | `'0.00000000001'` |

Returns:

| Type | Description |
|---|---|
| `Result[KmeansResponse, Exception]` | A `Result` containing a `KmeansResponse` on success, or an `Exception` if the request fails. |
dbsnnc(plaintext_matrix_id, plaintext_matrix_filename, extension='npy', threshold=1.4, num_chunks=2, sens=1e-11)

Executes the Double-Blind Secure Nearest Neighbor Clustering (dbsnnc) algorithm.

Sends a POST request to the /clustering/dbsnnc endpoint. Task parameters, including matrix identifiers, distance threshold, dataset chunks, and a sensitivity value (sens), are securely transmitted via HTTP headers to interface with the privacy-preserving backend.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `plaintext_matrix_id` | `str` | The unique identifier of the plaintext matrix. | required |
| `plaintext_matrix_filename` | `str` | The filename of the plaintext matrix dataset. | required |
| `extension` | `str` | The file extension of the matrix dataset. | `'npy'` |
| `threshold` | `float` | The distance threshold value used to determine cluster boundaries. | `1.4` |
| `num_chunks` | `int` | The number of chunks into which the dataset is split for distributed elastic processing. | `2` |
| `sens` | `float` | The sensitivity value used for calibrating differential privacy operations. | `1e-11` |

Returns:

| Type | Description |
|---|---|
| `Result[NncResponse, Exception]` | A `Result` containing an `NncResponse` on success, or an `Exception` if the request fails. |
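Every call returns a `Result[Value, Exception]` instead of raising. The exact result library is not specified in this reference; the sketch below shows the general pattern with a hypothetical `Ok`/`Err` pair of variants:

```python
class Ok:
    """Success variant wrapping the response value (illustrative, not the client's type)."""
    def __init__(self, value):
        self.value = value

class Err:
    """Failure variant wrapping the exception (illustrative, not the client's type)."""
    def __init__(self, error):
        self.error = error

def handle(result):
    # Branch on the variant instead of wrapping the call in try/except.
    if isinstance(result, Ok):
        return result.value
    raise result.error

labels = handle(Ok([0, 1, 1, 0]))
```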
get_completed_task_by_id(task_id)

Retrieves the details and results of a specific completed distributed task.

Sends a GET request to the /tasks/<task_id>/completed endpoint. This method is used to fetch the final execution status, performance metrics, and output data for a specific task identified by its unique ID, ensuring traceability within the distributed architecture.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `task_id` | `str` | The unique identifier of the completed task to retrieve. | required |

Returns:

| Type | Description |
|---|---|
| `Result[dict, Exception]` | A `Result` containing the task's results and metadata on success, or an `Exception` if the request, connection, or task lookup fails. |
get_completed_tasks()

Retrieves a historical record of all successfully completed distributed tasks.

Sends a GET request to the /tasks/completed endpoint of the distributed module. This method allows users to fetch the final results, execution metrics, and identifiers for tasks that have finished their lifecycle within the elastic architecture, distinguishing them from pending or active processes.

Returns:

| Type | Description |
|---|---|
| `Result[dict, Exception]` | A `Result` containing the history of completed tasks and their associated metadata on success, or an `Exception` if the request, connection, or JSON parsing fails. |
get_task_details()

Retrieves detailed information and status queues for distributed tasks.

Sends a GET request to the /tasks/q endpoint of the distributed module. Unlike the general get_tasks method which provides a top-level overview, this method fetches more granular, in-depth details regarding the task queue, execution states, or worker assignments within the elastic architecture.

Returns:

| Type | Description |
|---|---|
| `Result[dict, Exception]` | A `Result` containing the task queue information returned by the server on success, or an `Exception` if the request or JSON parsing fails. |
get_tasks()

Retrieves the current status and list of tasks from the distributed architecture.

Sends a GET request to the /tasks endpoint of the distributed module. This method is useful for monitoring the progress, state, or results of asynchronous operations occurring within the elastic platform, such as dataset segmentation or distributed processing jobs.

Returns:

| Type | Description |
|---|---|
| `Result[dict, Exception]` | A `Result` containing the tasks and statuses returned by the server on success, or an `Exception` if the request or JSON parsing fails. |
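A common pattern with the task endpoints above is to poll until a submitted task appears among the completed ones. The sketch below stubs out the HTTP layer with a fake fetcher; in practice the fetcher would wrap a call such as get_completed_tasks() and unwrap its result:

```python
import time

def wait_for_task(task_id, fetch_completed, attempts=5, delay=0.0):
    """Poll a completed-tasks source until task_id appears (illustrative helper)."""
    for _ in range(attempts):
        completed = fetch_completed()   # stands in for the real completed-tasks call
        if task_id in completed:
            return completed[task_id]
        time.sleep(delay)
    raise TimeoutError(f"task {task_id} did not complete")

# Stub: the task appears on the second poll.
responses = iter([{}, {"t1": {"status": "done"}}])
result = wait_for_task("t1", lambda: next(responses))
```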
kmeans(plaintext_matrix_id, plaintext_matrix_filename, k=2, extension='npy', num_chunks=2)

Executes the standard K-Means clustering algorithm on the remote platform.

Sends a POST request to the /clustering/kmeans endpoint using the specified configuration parameters passed as HTTP headers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `plaintext_matrix_id` | `str` | The unique identifier of the plaintext matrix. | required |
| `plaintext_matrix_filename` | `str` | The filename of the plaintext matrix dataset. | required |
| `k` | `int` | The number of clusters to form. | `2` |
| `extension` | `str` | The file extension of the matrix dataset. | `'npy'` |
| `num_chunks` | `int` | The number of chunks to split the task into for the distributed architecture. | `2` |

Returns:

| Type | Description |
|---|---|
| `Result[KmeansResponse, Exception]` | A `Result` containing a `KmeansResponse` on success, or an `Exception` if the request fails. |
knn(model_id, model_filename, model_labels_filename, record_test_id, record_test_filename, extension='npy')

Executes the complete K-Nearest Neighbors (KNN) classification workflow.

This method acts as a high-level wrapper that sequentially orchestrates both the training and prediction phases. It first calls knn_train to register the training dataset and labels on the platform. Upon success, it automatically extracts the model's label shape and invokes knn_predict to classify the test records. Finally, it aggregates the prediction results and timing metrics from both steps into a single, unified response.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The unique identifier to assign to the model. | required |
| `model_filename` | `str` | The filename of the dataset containing the training features. | required |
| `model_labels_filename` | `str` | The filename containing the corresponding training labels. | required |
| `record_test_id` | `str` | The unique identifier for the set of test records to classify. | required |
| `record_test_filename` | `str` | The filename of the dataset containing the test records. | required |
| `extension` | `str` | The file extension of the dataset files. | `'npy'` |

Returns:

| Type | Description |
|---|---|
| `Result[KnnResponse, Exception]` | A `Result` containing a `KnnResponse` on success, or an `Exception` if the request fails. |
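The train-then-predict orchestration described above can be sketched with stubbed phases. The field names used here (label_shape, time, labels) are illustrative stand-ins, not the actual response schema:

```python
def run_knn(train, predict):
    """Orchestrate training and prediction, aggregating timings (sketch)."""
    trained = train()                     # stands in for the training phase
    label_shape = trained["label_shape"]  # metadata extracted from the train response
    predicted = predict(label_shape)      # stands in for the prediction phase
    return {
        "predictions": predicted["labels"],
        "total_time": trained["time"] + predicted["time"],
    }

out = run_knn(
    train=lambda: {"label_shape": "(150,)", "time": 0.4},
    predict=lambda shape: {"labels": [0, 2, 1], "time": 0.6},
)
```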
knn_predict(model_id, model_filename, model_labels_filename, record_test_id, record_test_filename, model_labels_shape, extension='npy')

Executes the prediction phase for the K-Nearest Neighbors (KNN) classification algorithm.

Sends a POST request to the /classification/knn/predict endpoint. This method evaluates a new set of test records against a previously registered training dataset to determine their classifications. All configuration parameters and file identifiers are transmitted via HTTP headers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The unique identifier of the trained model to use. | required |
| `model_filename` | `str` | The filename of the dataset containing the training features. | required |
| `model_labels_filename` | `str` | The filename containing the corresponding training labels. | required |
| `record_test_id` | `str` | The unique identifier for the set of test records to classify. | required |
| `record_test_filename` | `str` | The filename of the dataset containing the test records. | required |
| `model_labels_shape` | `str` | The dimensional shape of the model labels (e.g., as a string representation of a list/tuple). | required |
| `extension` | `str` | The file extension of the dataset files. | `'npy'` |

Returns:

| Type | Description |
|---|---|
| `Result[KnnPredictResponse, Exception]` | A `Result` containing a `KnnPredictResponse` on success, or an `Exception` if the request fails. |
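model_labels_shape travels as a string representation of a shape. Assuming it is a Python list or tuple literal such as "(150,)" (the exact format is not pinned down by this reference), it can be parsed safely with ast.literal_eval rather than eval:

```python
import ast

def parse_shape(shape_str):
    """Parse a shape string like "(150,)" or "[100, 4]" into a tuple (sketch)."""
    shape = ast.literal_eval(shape_str)  # safe: only parses literals, never executes code
    if isinstance(shape, int):           # tolerate a bare "150"
        shape = (shape,)
    return tuple(shape)
```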
knn_train(model_id, model_filename, model_labels_filename, extension='npy')

Initiates the training phase for the K-Nearest Neighbors (KNN) classification model.

Sends a POST request to the /classification/knn/train endpoint. The method registers the training dataset and its corresponding labels on the remote platform, preparing the model for future prediction tasks. Parameters are securely transmitted via HTTP headers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The unique identifier to assign to the trained model. | required |
| `model_filename` | `str` | The filename of the dataset containing the training features. | required |
| `model_labels_filename` | `str` | The filename containing the corresponding training labels. | required |
| `extension` | `str` | The file extension of the dataset files. | `'npy'` |

Returns:

| Type | Description |
|---|---|
| `Result[KnnTrainResponse, Exception]` | A `Result` containing a `KnnTrainResponse` on success, or an `Exception` if the request fails. |
nnc(plaintext_matrix_id, plaintext_matrix_filename, extension='npy', threshold=1.4)

Executes the Nearest Neighbor Clustering (NNC) algorithm on the remote platform.

Sends a POST request to the /clustering/nnc endpoint. Task parameters, including matrix identifiers and the distance threshold, are transmitted via HTTP headers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `plaintext_matrix_id` | `str` | The unique identifier of the plaintext matrix. | required |
| `plaintext_matrix_filename` | `str` | The filename of the plaintext matrix dataset. | required |
| `extension` | `str` | The file extension of the matrix dataset. | `'npy'` |
| `threshold` | `float` | The distance threshold value used to determine cluster boundaries. | `1.4` |

Returns:

| Type | Description |
|---|---|
| `Result[NncResponse, Exception]` | A `Result` containing an `NncResponse` on success, or an `Exception` if the request fails. |
segment(dto)

Initiates the segmentation of a dataset for distributed processing.

Sends a POST request to the /segment endpoint of the distributed architecture. This method is responsible for splitting a large dataset (identified by its bucket and ball IDs) into smaller, manageable chunks based on the parameters specified in the Data Transfer Object. This is a crucial pre-processing step for parallel and elastic data mining operations.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dto` | `SegmentDTO` | The Data Transfer Object containing the configuration required to segment the dataset (such as bucket_id, ball_id, filename, and chunk size parameters). | required |

Returns:

| Type | Description |
|---|---|
| `Result[SegementResponseDTO, Exception]` | A `Result` containing a `SegementResponseDTO` on success, or an `Exception` if the request or data validation fails. |
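The shape of the DTO is only hinted at above. A hedged sketch using the fields the description names (bucket_id, ball_id, filename), plus a chunk-size field whose exact name is an assumption here:

```python
from dataclasses import dataclass, asdict

@dataclass
class SegmentDTO:
    """Illustrative stand-in for the real SegmentDTO; field set inferred from the docs."""
    bucket_id: str
    ball_id: str
    filename: str
    num_chunks: int = 2   # hypothetical name for the chunk-size parameter

dto = SegmentDTO(bucket_id="b1", ball_id="m1", filename="matrix.npy")
payload = asdict(dto)     # what would be serialized into the POST to /segment
```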
skmeans(plaintext_matrix_id, plaintext_matrix_filename, experiment_iteration=0, num_chunks=2, k=2, max_iterations=5, extension='npy')

Executes the Secure K-Means (skmeans) clustering algorithm on the remote platform.

Sends a POST request to the /clustering/skmeans endpoint. Task parameters and matrix identifiers are securely transmitted via HTTP headers to interface with the Privacy-Preserving Data Mining (PPDM) architecture.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `plaintext_matrix_id` | `str` | The unique identifier of the plaintext matrix. | required |
| `plaintext_matrix_filename` | `str` | The filename of the plaintext matrix dataset. | required |
| `experiment_iteration` | `int` | The ID or iteration number of the current operation, useful for tracking research experiments. | `0` |
| `num_chunks` | `int` | The number of chunks into which the dataset is split for distributed elastic processing. | `2` |
| `k` | `int` | The number of clusters to form. | `2` |
| `max_iterations` | `int` | The maximum number of iterations allowed before the algorithm stops. | `5` |
| `extension` | `str` | The file extension of the matrix dataset. | `'npy'` |

Returns:

| Type | Description |
|---|---|
| `Result[KmeansResponse, Exception]` | A `Result` containing a `KmeansResponse` on success, or an `Exception` if the request fails. |
skmeans_pqc(plaintext_matrix_id, plaintext_matrix_filename, k=2, experiment_iteration=0, num_chunks=2, max_iterations=5, extension='npy')

Executes the Post-Quantum Cryptography (PQC) enabled Secure K-Means clustering algorithm.

Sends a POST request to the /clustering/pqc/skmeans endpoint with the specified headers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `plaintext_matrix_id` | `str` | Identifier of the matrix for the skmeans_pqc algorithm. | required |
| `plaintext_matrix_filename` | `str` | Name of the plaintext matrix file. | required |
| `k` | `int` | Number of clusters to form. | `2` |
| `experiment_iteration` | `int` | ID of the current operation. | `0` |
| `num_chunks` | `int` | Number of chunks into which the dataset is split. | `2` |
| `max_iterations` | `int` | Maximum number of iterations before stopping the process. | `5` |
| `extension` | `str` | File extension of the matrix dataset. | `'npy'` |

Returns:

| Name | Type | Description |
|---|---|---|
| | `KmeansResponse` | A dataclass instance containing: `label_vector` (list), the resulting label vector; `iterations` (int), the number of iterations performed; `algorithm` (str), the algorithm used; `worker_id` (str), the identifier of the worker; `service_time_manager` (float), service time from the manager; `service_time_worker` (float), service time from the worker; `service_time_client` (float), service time from the client; `response_time_clustering` (float), the clustering response time. |
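The KmeansResponse fields documented for this method map naturally onto a dataclass. The sketch below parses a server JSON payload into that shape, assuming (unverified) that the JSON keys match the field names:

```python
import json
from dataclasses import dataclass

@dataclass
class KmeansResponse:
    """Illustrative reconstruction of the response dataclass from its documented fields."""
    label_vector: list
    iterations: int
    algorithm: str
    worker_id: str
    service_time_manager: float
    service_time_worker: float
    service_time_client: float
    response_time_clustering: float

raw = json.loads(
    '{"label_vector": [0, 1], "iterations": 3, "algorithm": "skmeans_pqc",'
    ' "worker_id": "w1", "service_time_manager": 0.1, "service_time_worker": 0.2,'
    ' "service_time_client": 0.05, "response_time_clustering": 0.35}'
)
response = KmeansResponse(**raw)
```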
sknn(model_id, model_filename, model_labels_filename, record_test_id, record_test_filename, num_chunks=2, extension='npy')

Executes the complete Secure K-Nearest Neighbors (SKNN) classification workflow.

This method acts as a high-level wrapper that sequentially orchestrates both the privacy-preserving training and prediction phases. It first calls sknn_train to securely register, partition, and encrypt the training dataset on the platform. Upon success, it automatically extracts the encrypted model's metadata (shape, data type, and label shape) and invokes sknn_predict to securely classify the test records. Finally, it aggregates the prediction results and timing metrics from both the training and prediction steps into a single, unified response.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The unique identifier to assign to the trained secure model. | required |
| `model_filename` | `str` | The filename of the dataset containing the training features. | required |
| `model_labels_filename` | `str` | The filename containing the corresponding training labels. | required |
| `record_test_id` | `str` | The unique identifier for the set of test records to classify. | required |
| `record_test_filename` | `str` | The filename of the dataset containing the test records. | required |
| `num_chunks` | `int` | The number of chunks into which the datasets are split for distributed elastic processing. | `2` |
| `extension` | `str` | The file extension of the dataset files. | `'npy'` |

Returns:

| Type | Description |
|---|---|
| `Result[KnnResponse, Exception]` | A `Result` containing a `KnnResponse` on success, or an `Exception` if the request fails. |
sknn_pqc(model_id, model_filename, model_labels_filename, record_test_id, record_test_filename, num_chunks=2, extension='npy')

Executes the complete Post-Quantum Cryptography (PQC) Secure K-Nearest Neighbors classification workflow.

This method acts as a high-level wrapper that sequentially orchestrates both the privacy-preserving training and prediction phases using quantum-resistant algorithms. It first calls sknn_pqc_train to securely register, segment, and encrypt the training dataset on the platform. Upon success, it automatically extracts the encrypted model's metadata (shape and data type) and invokes sknn_pqc_predict to securely classify the test records. Finally, it aggregates the prediction results and timing metrics from both steps into a single, unified response.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The unique identifier to assign to the trained PQC secure model. | required |
| `model_filename` | `str` | The filename of the dataset containing the training features. | required |
| `model_labels_filename` | `str` | The filename containing the corresponding training labels. | required |
| `record_test_id` | `str` | The unique identifier for the set of test records to classify. | required |
| `record_test_filename` | `str` | The filename of the dataset containing the test records. | required |
| `num_chunks` | `int` | The number of chunks into which the datasets are split for distributed elastic processing. | `2` |
| `extension` | `str` | The file extension of the dataset files. | `'npy'` |

Returns:

| Type | Description |
|---|---|
| `Result[KnnResponse, Exception]` | A `Result` containing a `KnnResponse` on success, or an `Exception` if the request fails. |
sknn_pqc_predict(model_id, model_filename, model_labels_filename, record_test_id, record_test_filename, encrypted_model_shape, num_chunks=2, extension='npy', encrypted_model_dtype='float64')

Executes the prediction phase for the Post-Quantum Cryptography (PQC) Secure K-Nearest Neighbors algorithm.

Sends a POST request to the secure /classification/sknn_pqc/predict endpoint. This method evaluates a new set of test records against a previously trained dataset that has been secured using post-quantum cryptographic primitives. It requires specific metadata regarding the shape and data type of the encrypted model to ensure the distributed elastic workers can properly reconstruct and process the quantum-resistant matrices. All configuration parameters are securely transmitted via HTTP headers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The unique identifier of the trained PQC secure model. | required |
| `model_filename` | `str` | The filename of the dataset containing the encrypted training features. | required |
| `model_labels_filename` | `str` | The filename containing the corresponding training labels. | required |
| `record_test_id` | `str` | The unique identifier for the set of test records to classify. | required |
| `record_test_filename` | `str` | The filename of the dataset containing the test records. | required |
| `encrypted_model_shape` | `str` | The dimensional shape of the encrypted model matrix (e.g., string representation of a tuple). | required |
| `num_chunks` | `int` | The number of chunks into which the dataset is split for distributed elastic processing. | `2` |
| `extension` | `str` | The file extension of the dataset files. | `'npy'` |
| `encrypted_model_dtype` | `str` | The data type of the encrypted model matrix. | `'float64'` |

Returns:

| Type | Description |
|---|---|
| `Result[KnnPredictResponse, Exception]` | A `Result` containing a `KnnPredictResponse` on success, or an `Exception` if the request fails. |
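The worker-side reconstruction hinted at above needs encrypted_model_shape and encrypted_model_dtype to size the matrix buffer before it can be rebuilt. A stdlib-only sketch of that arithmetic; the dtype-to-bytes table is an illustrative subset, not the platform's actual mapping:

```python
import ast
from math import prod

ITEMSIZE = {"float64": 8, "float32": 4}   # illustrative subset of common dtype sizes

def expected_buffer_bytes(shape_str, dtype="float64"):
    """Compute how many bytes a matrix of the given shape/dtype occupies (sketch)."""
    shape = ast.literal_eval(shape_str)    # e.g. "(100, 16)" -> (100, 16)
    return prod(shape) * ITEMSIZE[dtype]
```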
sknn_pqc_train(model_id, model_filename, model_labels_filename, num_chunks=2, extension='npy')

Initiates the training phase for the Post-Quantum Cryptography (PQC) enabled Secure K-Nearest Neighbors model.

Sends a POST request to the secure PQC /classification/sknn_pqc/train endpoint. This method registers the training dataset and its corresponding labels on the remote platform. It leverages post-quantum cryptographic primitives to ensure the dataset remains secure against quantum-level threats. The data is partitioned (num_chunks) to facilitate distributed, privacy-preserving processing. Parameters are securely transmitted via HTTP headers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The unique identifier to assign to the trained PQC secure model. | required |
| `model_filename` | `str` | The filename of the dataset containing the training features. | required |
| `model_labels_filename` | `str` | The filename containing the corresponding training labels. | required |
| `num_chunks` | `int` | The number of chunks into which the dataset is split for distributed elastic processing. | `2` |
| `extension` | `str` | The file extension of the dataset files. | `'npy'` |

Returns:

| Type | Description |
|---|---|
| `Result[SknnTrainResponse, Exception]` | A `Result` containing a `SknnTrainResponse` on success, or an `Exception` if the request fails. |
sknn_predict(model_id, model_filename, model_labels_filename, record_test_id, record_test_filename, encrypted_model_shape, model_labels_shape, encrypted_model_dtype='float64', num_chunks=2, extension='npy')

Executes the prediction phase for the Secure K-Nearest Neighbors (SKNN) algorithm.

Sends a POST request to the secure /classification/sknn/predict endpoint. This method evaluates a new set of test records against a previously trained, encrypted dataset. It requires detailed metadata regarding the shape and data type of the encrypted model to ensure the distributed elastic workers can properly reconstruct and process the secure matrices. All parameters are transmitted via HTTP headers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The unique identifier of the trained secure model. | required |
| `model_filename` | `str` | The filename of the dataset containing the encrypted training features. | required |
| `model_labels_filename` | `str` | The filename containing the corresponding training labels. | required |
| `record_test_id` | `str` | The unique identifier for the set of test records to classify. | required |
| `record_test_filename` | `str` | The filename of the dataset containing the test records. | required |
| `encrypted_model_shape` | `str` | The dimensional shape of the encrypted model matrix (e.g., string representation of a tuple). | required |
| `model_labels_shape` | `str` | The dimensional shape of the model labels. | required |
| `encrypted_model_dtype` | `str` | The data type of the encrypted model matrix. | `'float64'` |
| `num_chunks` | `int` | The number of chunks into which the dataset is split for distributed elastic processing. | `2` |
| `extension` | `str` | The file extension of the dataset files. | `'npy'` |

Returns:

| Type | Description |
|---|---|
| `Result[KnnPredictResponse, Exception]` | A `Result` containing a `KnnPredictResponse` on success, or an `Exception` if the request fails. |
sknn_train(model_id, model_filename, model_labels_filename, num_chunks=2, extension='npy')

Initiates the training phase for the Secure K-Nearest Neighbors (SKNN) classification model.

Sends a POST request to the secure /classification/sknn/train endpoint. This method registers the training dataset and its corresponding labels on the remote platform. Unlike the standard KNN, this secure version supports data partitioning (num_chunks) to facilitate distributed, privacy-preserving processing across the elastic architecture. Parameters are securely transmitted via HTTP headers.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The unique identifier to assign to the trained secure model. | required |
| `model_filename` | `str` | The filename of the dataset containing the training features. | required |
| `model_labels_filename` | `str` | The filename containing the corresponding training labels. | required |
| `num_chunks` | `int` | The number of chunks into which the dataset is split for distributed elastic processing. | `2` |
| `extension` | `str` | The file extension of the dataset files. | `'npy'` |

Returns:

| Type | Description |
|---|---|
| `Result[SknnTrainResponse, Exception]` | A `Result` containing a `SknnTrainResponse` on success, or an `Exception` if the request fails. |
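Finally, num_chunks appears throughout this reference. The partitioning it implies can be sketched in pure Python; for real matrices, numpy's array_split behaves similarly:

```python
def split_rows(rows, num_chunks):
    """Split a list of rows into num_chunks near-equal contiguous chunks (sketch)."""
    n, k = len(rows), num_chunks
    base, extra = divmod(n, k)
    chunks, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)   # the first `extra` chunks get one extra row
        chunks.append(rows[start:start + size])
        start += size
    return chunks

chunks = split_rows(list(range(7)), 2)
```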