API Guide

Using the Marple API

1 Arguments and Responses

There are two ways of passing arguments to an endpoint: encoded as key-value pairs in the URL, or encoded as JSON in the body of the request. Which one to use depends on the endpoint: in general, GET requests use URL encoding, while POST requests use JSON encoding. The full specification of all endpoints can be found at the bottom of the page. Here is an example of both methods in Python:

import requests

# URL-encoded arguments (GET)
params = {'path': path}
r = requests.get(f'{API_URL}/sources/lookup', headers=auth_header, params=params)

# JSON-encoded arguments (POST)
body = {'path': '/', 'new_folder_name': 'new_folder'}
r = requests.post(f'{API_URL}/library/folder/add', headers=auth_header, json=body)

Requests will always return data as JSON with the following structure:

{"request_time": time, "message": data}

The response contains both the time the server took to process the request and the actual output. Decode the JSON and access the message key to extract the result:

response = requests.get(f'{API_URL}/version', headers=auth_header)
version = response.json()['message']
print(version) # 3.3.0

Or using the Marple SDK:

from marple import Marple
m = Marple(ACCESS_TOKEN)
response = m.get('/version')
print(response.json()['message']) # 3.3.0

2 Uploading and Importing Files

In Marple, files are first uploaded onto a file server and then parsed into our database; this second step is called 'importing'. Once a file has been imported into the database, it is referred to as a 'source'.
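
For orientation, here is a minimal end-to-end sketch of that flow (upload, then import, then poll the status until it completes); each step is detailed in the endpoints below, and the file name, folder and plugin are placeholders:

import time
import requests

API_URL = 'https://app.marpledata.com/api/v1'
auth_header = {'Authorization': 'Bearer ...'}

# 1. Upload the file to the file server
with open('myfile.csv', 'rb') as f:
    r = requests.post(f'{API_URL}/library/file/upload', headers=auth_header,
                      params={'path': '/'}, files={'file': f})

# 2. Import the uploaded file into the database
r = requests.post(f'{API_URL}/library/file/import', headers=auth_header,
                  json={'path': '/myfile.csv', 'plugin': 'csv_plugin'})
source_id = r.json()['message']['source_id']

# 3. Poll the import status until the source reports 100
while True:
    r = requests.get(f'{API_URL}/sources/status', headers=auth_header,
                     params={'id': source_id})
    if r.json()['message'][0]['status'] == 100:
        break
    time.sleep(1)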

1. POST /library/file/upload

Upload a file onto the file server. URL query parameters:

  • path: the path to the directory on the file server where the file should be uploaded

File body parameters:

  • file: the file contents

=> Returns "OK" and the path where the file was uploaded

# Setup
import requests

API_URL = 'https://app.marpledata.com/api/v1'
endpoint = '/library/file/upload'
token = '...'
auth_header = {'Authorization': f'Bearer {token}'}

# Arguments
files = {'file': open('myfile.csv', 'rb')}
target_dir = '/'         # file server root dir
target_dir = '/sub_dir'  # or a subdirectory

# Request
r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  params={'path': target_dir}, 
  files=files
)

# Response
r.json()['message'] == {'status': 'OK', 'path': 'sub_dir'}

2. POST /library/file/import

Start importing a file into the database. JSON body parameters:

  • path: the path to where the file is located on the server

  • plugin: the plugin to be used for importing the file

  • config: (optional) the configuration for the plugin

=> Returns the source_id of the file in the database, and the metadata associated with the file

endpoint = '/library/file/import'
path = '/sub_dir/myfile.csv'
plugin = 'csv_plugin'

r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  json={'path': path, 'plugin': plugin}
)

r.json()['message'] == {
  'source_id': 1,
  'metadata': { # most plugins don't support automatic metadata extraction
    'test_reference': 14,
  }
}

Plugins

Marple supports multiple data types, each imported with its own plugin. Every plugin also has a number of available config settings. The config must have the following structure:

config = {
    'common': [
        {'name': NAME_OF_PARAMETER_1, 'value': VALUE_OF_PARAMETER_1},
        {'name': NAME_OF_PARAMETER_2, 'value': VALUE_OF_PARAMETER_2}
    ] 
}
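
Writing this name/value list by hand is verbose; a small helper (not part of the API or SDK, just a convenience sketch) can build it from keyword arguments:

def make_config(**options):
    # e.g. make_config(time_scale=0.1, decimal=',') returns
    # {'common': [{'name': 'time_scale', 'value': 0.1}, {'name': 'decimal', 'value': ','}]}
    return {'common': [{'name': k, 'value': v} for k, v in options.items()]}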

Here is an overview of the different plugins and their corresponding config options to import your data.

CSV

Usage: .csv files

Plugin: csv_plugin

Config options:

  • time_scale: scale the time signal by this factor. Default: 1

  • time_offset: add an offset to every timestamp. Default: 0

  • delimiter: delimiter used in the csv file. Default: ','

  • thousands: thousands separator used in the csv file. Default: None

  • decimal: decimal separator used in the csv file. Default: '.'

  • skiprows: number of rows to skip while reading the file. Default: 0

  • quotechar: quote character used in the csv file. Default: '"'

  • dayfirst: parse dates with the day first (enable for DD/MM date formats). Default: False

  • units: file contains units in the row below the header (0: No, 1: Yes). Default: 0

Example:

endpoint = '/library/file/import'
path = '/sub_dir/myfile.csv'
plugin = 'csv_plugin'

r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  json={
    'path': path, 
    'plugin': plugin,
    'config': {
        'common': [
            {'name': 'time_offset', 'value': 100.0},
            {'name': 'time_scale', 'value': 0.1},
            {'name': 'decimal', 'value': ","}
        ]
    }
  }
)

Zipped CSV

Usage: .zip archives of .csv files

Plugin: csv_zip_plugin

Config options:

  • time_scale: scale the time signal by this factor. Default: 1

  • time_offset: add an offset to every timestamp. Default: 0

  • delimiter: delimiter used in the csv files. Default: ','

  • thousands: thousands separator used in the csv files. Default: None

  • decimal: decimal separator used in the csv files. Default: '.'

  • skiprows: number of rows to skip while reading each file. Default: 0

  • quotechar: quote character used in the csv files. Default: '"'

  • dayfirst: parse dates with the day first (enable for DD/MM date formats). Default: False

  • units: files contain units in the row below the header (0: No, 1: Yes). Default: 0

  • include_file_name: include the name of each file in the signal name (0: No, 1: Yes). Default: 1

Example:

endpoint = '/library/file/import'
path = '/sub_dir/myfile.zip'
plugin = 'csv_zip_plugin'

r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  json={
    'path': path, 
    'plugin': plugin,
    'config': {
        'common': [
            {'name': 'time_offset', 'value': 100.0},
            {'name': 'time_scale', 'value': 0.1},
            {'name': 'decimal', 'value': ","}
        ]
    }
  }
)

HDF5

Usage: .h5 files

Plugin: hdf5_plugin

Config options:

  • time_scale: scale the time signal by this factor. Default: 1

  • time_offset: add an offset to every timestamp. Default: 0

  • structure: file structure, flat/matrix. Default: flat

  • include_groups: include the name of each group in the signal name (0: No, 1: Yes). Default: 1

Example:

endpoint = '/library/file/import'
path = '/sub_dir/myfile.h5'
plugin = 'hdf5_plugin'

r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  json={
    'path': path, 
    'plugin': plugin,
    'config': {
        'common': [
            {'name': 'time_offset', 'value': 100.0},
            {'name': 'structure', 'value': 'flat'}
        ]
    }
  }
)

MAT

Usage: .mat files

Plugin: mat_plugin

Config options:

  • time_scale: scale the time signal by this factor. Default: 1

  • time_offset: add an offset to every timestamp. Default: 0

  • structure: file structure, flat/matrix. Default: flat

  • include_groups: include the name of each group in the signal name (0: No, 1: Yes). Default: 1

Example:

endpoint = '/library/file/import'
path = '/sub_dir/myfile.mat'
plugin = 'mat_plugin'

r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  json={
    'path': path, 
    'plugin': plugin,
    'config': {
        'common': [
            {'name': 'time_offset', 'value': 100.0},
            {'name': 'time_scale', 'value': 0.1},
            {'name': 'structure', 'value': "matrix"}
        ]
    }
  }
)

MDF / MF4

Usage: .mdf / .mf4 files

Plugin: MDF_plugin

Config options:

  • time_scale: scale the time signal by this factor. Default: 1

  • time_offset: add an offset to every timestamp. Default: 0

Example:

endpoint = '/library/file/import'
path = '/sub_dir/myfile.mdf'
plugin = 'MDF_plugin'

r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  json={
    'path': path, 
    'plugin': plugin,
    'config': {
        'common': [
            {'name': 'time_offset', 'value': 100.0},
            {'name': 'time_scale', 'value': 0.1}
        ]
    }
  }
)

TDMS

Usage: .tdms files

Plugin: tdms_plugin

Config options:

  • time_scale: scale the time signal by this factor. Default: 1

  • time_offset: add an offset to every timestamp. Default: 0

Example:

endpoint = '/library/file/import'
path = '/sub_dir/myfile.tdms'
plugin = 'tdms_plugin'

r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  json={
    'path': path, 
    'plugin': plugin,
    'config': {
        'common': [
            {'name': 'time_offset', 'value': 100.0},
            {'name': 'time_scale', 'value': 0.1}
        ]
    }
  }
)

ULOG

Usage: .ulg files

Plugin: ulog_plugin

Config options:

  • time_scale: scale the time signal by this factor. Default: 1

  • time_offset: add an offset to every timestamp. Default: 0

Example:

endpoint = '/library/file/import'
path = '/sub_dir/myfile.ulg'
plugin = 'ulog_plugin'

r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  json={
    'path': path, 
    'plugin': plugin,
    'config': {
        'common': [
            {'name': 'time_offset', 'value': 100.0},
            {'name': 'time_scale', 'value': 0.1}
        ]
    }
  }
)

AVRO

Usage: .avro files

Plugin: avro_plugin

Config options:

  • time_scale: scale the time signal by this factor. Default: 1

  • time_offset: add an offset to every timestamp. Default: 0

Example:

endpoint = '/library/file/import'
path = '/sub_dir/myfile.avro'
plugin = 'avro_plugin'

r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  json={
    'path': path, 
    'plugin': plugin,
    'config': {
        'common': [
            {'name': 'time_offset', 'value': 100.0},
            {'name': 'time_scale', 'value': 0.1}
        ]
    }
  }
)

ROS

Usage: .bag files

Plugin: rosbag_plugin

Config options:

  • time_scale: scale the time signal by this factor. Default: 1

  • time_offset: add an offset to every timestamp. Default: 0

Example:

endpoint = '/library/file/import'
path = '/sub_dir/myfile.bag'
plugin = 'rosbag_plugin'

r = requests.post(
  f'{API_URL}{endpoint}', 
  headers=auth_header,
  json={
    'path': path, 
    'plugin': plugin,
    'config': {
        'common': [
            {'name': 'time_offset', 'value': 100.0},
            {'name': 'time_scale', 'value': 0.1}
        ]
    }
  }
)

3. GET /sources/status

To monitor the progress of a file import, query its status. The status is a number in [0, 100] representing the progress, where 100 means the import has finished successfully. URL query parameters:

  • id: source id or array of source ids for which to request the status.

To pass an array as a query parameter use the following syntax: api_url/sources/status?id=1,2,3,...

=> Returns an array with the status of each source

endpoint = '/sources/status'
source_id = 1

r = requests.get(
  f'{API_URL}{endpoint}',
  headers=auth_header,
  params={'id': source_id}
)

r.json()['message'] == [{
  'source_id': 1,
  'status': 100
}]
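
When several files are imported at once, their ids can be passed in one comma-separated query as described above. A minimal polling sketch, assuming the same per-source response shape as in the example (the ids are placeholders):

import time

source_ids = [1, 2, 3]

# Poll until every source reports status 100
while True:
    r = requests.get(f'{API_URL}/sources/status', headers=auth_header,
                     params={'id': ','.join(str(i) for i in source_ids)})
    if all(s['status'] == 100 for s in r.json()['message']):
        break
    time.sleep(1)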

3 Adding Metadata

Metadata can be manipulated using the /library/metadata endpoints. Metadata is always coupled to a file via its source_id, never the file name. The source id associated with a file can be found using the /sources/lookup endpoint.

1. GET /sources/lookup

Get the source id associated with a file. URL query parameters:

  • path: full file path of file to lookup

=> Returns the source_id of the file

endpoint = '/sources/lookup'
path = '/sub_dir/myfile.csv'

r = requests.get(f'{API_URL}{endpoint}', headers=auth_header,
                 params={'path': path})
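
The source id can then be read from the message field. The exact shape of the message is an assumption here, based on the response format described in section 1:

source_id = r.json()['message']  # assumption: the message contains the source id directly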

2. POST /library/metadata

Add metadata to a source. JSON body parameters:

  • source_id: the source to add the metadata to

  • path: (optional) if the file does not yet have a source_id, it can be referenced by its path instead; in that case the response contains the source id assigned to the file (see the path-based variant after the example below)

  • metadata: Key-value pairs to add to metadata.

=> Returns source_id and "OK" message

source_id = 1
metadata = {'test_operator': 'Arthur', 'test_reference': 'BX-14'}

endpoint = '/library/metadata'
r = requests.post(
  f'{API_URL}{endpoint}',
  headers=auth_header,
  json={
    'source_id': source_id,
    'metadata': metadata
  }
)
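
For a file that has not been imported yet, the same call can reference the file by its path instead (the path below is a placeholder):

endpoint = '/library/metadata'
r = requests.post(
  f'{API_URL}{endpoint}',
  headers=auth_header,
  json={
    'path': '/sub_dir/myfile.csv',
    'metadata': metadata
  }
)
source_id = r.json()['message']  # assumption: the assigned source id is returned in the message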

4 Sharing Projects and Sources

A project can be shared using the /library/share endpoints. Make sure that the target project exists in the app before using these endpoints.

1. POST /library/share/new

Generate a new share link id. JSON body parameters:

  • workbook_name: name of the project you wish to share

  • source_ids: sources to include in the share

=> Returns new share id

endpoint = '/library/share/new'
name = 'example'
source_ids = [1,2,3]

r = requests.post(
  f'{API_URL}{endpoint}',
  headers=auth_header,
  json={
    'workbook_name': name,
    'source_ids': source_ids
  }
)
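
The new share id can then be read from the response, assuming it is returned directly in the message field:

share_id = r.json()['message']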

2. POST /library/share/<share_id>/add

Add data to an existing share id. JSON body parameters:

  • workbook_name: change the project for this share

  • source_ids: Source ids to add to share

=> Returns new share id with updated content

share_id = '1234567890abcdef'
endpoint = f'/library/share/{share_id}/add'
name = 'example'
source_ids = [4,5,6]

r = requests.post(
  f'{API_URL}{endpoint}',
  headers=auth_header,
  json={
    'workbook_name': name,
    'source_ids': source_ids
  }
)

3. POST /library/share/<share_id>/link

Generate a shareable link which opens the project with its source ids. JSON body parameters:

  • url: (optional) set base URL for the share link

=> Returns clickable link

share_id = '1234567890abcdef'
url = 'https://yourcompany.marpledata.com'
endpoint = f'/library/share/{share_id}/link'

r = requests.post(
  f'{API_URL}{endpoint}',
  headers=auth_header,
  json={'url': url}
)
print(r.json()['message'])
# https://yourcompany.marpledata.com/?share=1234567890abcdef

5 API Example Repository

Marple maintains a public repository with an example API use case: 🔗 https://gitlab.com/marple-public/marple-api-example

This script runs a simulated experiment, logs the data, uploads and imports it into Marple, and then generates a share link ready for visualisation. The repository contains example scripts for both Python and MATLAB.

6 API Reference

Marple Swagger API Reference
