Deep Decision® APIs Example Calls

Intro

The Deep Decision® drag-and-drop features that are easily accessed through the web interface are also available through an API. For most use cases, our UI provides a simple interface to conduct advanced modeling and research using the power of LSM-LLM. The APIs enable you to integrate LSM-LLMs into your own systems and leverage their power throughout the decision lifecycle. The examples below show how to create, access, and re-use LSM-LLM models using our APIs. All examples are provided in Python and cURL.

Sign Up

Once enrolled, use your provided OAuth 2 token to authorize all requests.

The examples below show advanced use cases for LSM-LLM models. They assume you have stored your auth token in an environment variable.

Below is a Python example of how to create an environment variable to store your auth token.

First, install python-dotenv:

pip install python-dotenv

Next, create a file called login.py with the following code. Run it as a script; otherwise the interactive prompts will not work.

#!/usr/bin/env python

import getpass

import dotenv
import requests

# Prompt for credentials (the password is not echoed).
username = input("Type Your Username:\n")
password = getpass.getpass()

# Exchange the credentials for an OAuth 2 access token.
payload = {"username": username, "password": password}
response = requests.post(url="https://deeplabs.dev/token", data=payload)
assert response.status_code == 200
token = response.json()["access_token"]

# Store the token in dd_api.env, creating the file if needed.
dotenv_file = dotenv.find_dotenv("dd_api.env")
if dotenv_file == "":
    open("dd_api.env", "w").close()
    dotenv_file = dotenv.find_dotenv("dd_api.env")

dotenv.load_dotenv(dotenv_file)
dotenv.set_key(dotenv_file, "DEEP_DECISION_TOKEN", token)

Finally, run the following command and enter your username and password.

python login.py
 

Please configure the authentication and authorization parameters in accordance with your company’s security policy.

Accessing Your Account

Let's begin by making sure your authorization token is properly configured. First, use the following code to access your account information:

Python

 
import os

import dotenv
import requests

# Load your token from where it is stored (you can also save it to your bash profile).
dotenv.load_dotenv("dd_api.env")
token = os.environ["DEEP_DECISION_TOKEN"]

base_url = "https://deeplabs.dev"
auth_resp = requests.get(url=base_url + "/users/me",
                         headers={"Authorization": f"Bearer {token}"})

assert auth_resp.status_code == 200
print(auth_resp.json())

  

cURL


curl -X GET "https://deeplabs.dev/users/me" \
       -H "Authorization: Bearer YOUR AUTH TOKEN"

  

Expected Output


{ 'username': 'Example User', 
  'email': 'example_user@email_firm.com',
  'full_name': "example user", 
  'orgname': 'example-firm', 
  'orgkey': '',
  'disabled': None
}

Upload Data

Deep Decision® accepts standard JSON or delimited (CSV) file formats. For delimited files, the header row must be followed directly by the data rows. Within each column, all values are expected to share the same data type for both delimited and JSON-formatted files. Data should be UTF-8 encoded with no binary fields. When using JSON, each complete JSON message must be on its own line, separated by a newline (a format often called JSONL).
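To make the two accepted layouts concrete, here is a minimal, self-contained Python sketch (the file names and records are hypothetical) that writes the same rows as a delimited CSV and as JSONL:

```python
import csv
import json

# Hypothetical records; each field keeps a consistent data type across rows.
rows = [
    {"track": "Song A", "streams": 1200},
    {"track": "Song B", "streams": 3400},
]

# Delimited (CSV): a header row followed directly by the data rows.
with open("sample.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["track", "streams"])
    writer.writeheader()
    writer.writerows(rows)

# JSONL: one complete JSON message per line, separated by newlines.
with open("sample.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```

Either file could then be passed to the upload endpoint described below.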

The following example uses the dataset, 'Most Streamed Spotify Songs 2024' from Kaggle.

You can download it from HERE.

Once you have downloaded and unzipped the file, upload the data to Deep Decision® using the following RESTful call.

Python


with open("Most Streamed Spotify Songs 2024.csv", "rb") as f:
    files = {"file": (f.name, f, "multipart/form-data")}
    upload_resp = requests.post(url="https://deeplabs.dev/deep_decision/upload_data",
                                files=files,
                                headers={"Authorization": f"Bearer {token}"})

assert upload_resp.status_code == 200
print(upload_resp.json())
 
  

cURL

 

curl -X 'POST' \
    'https://deeplabs.dev/deep_decision/upload_data' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN' \
    -H 'Content-Type: multipart/form-data' \
    -F 'file=@"Most Streamed Spotify Songs 2024.csv";type=text/csv'
 
  

Expected Output


{'TaskId': 'dcbe4e1e-8028-49e6-b2fc-598f916c0c70', 
 'Status': 'Loaded', 
 'FileName': 'Most Streamed Spotify Songs 2024.csv', 
 'InputFile': 'Most Streamed Spotify Songs 2024.csv', 
 'TimeStamp': '2024-07-01 10:22:47.587881'
}

Build LSM

Most of the work to build an LSM is done automatically, leveraging the learnings from Deep Lab’s World State. A few parameters are required to train an LSM on your data, such as the focus. The focus defines what you want the AI to pay attention to. It is similar to a prompt for an LLM and helps guide the LSM results. For the Spotify example, we define the focus to be songs with a Track Score less than or equal to 26.

Python

 
payload = {"FileName": "Most Streamed Spotify Songs 2024.csv",
           "Focus": "Track Score",
           "FocusValue": 26,
           "FocusOperator": "<="}

mdl_build_resp = requests.post(url="https://deeplabs.dev/deep_decision/fit",
                               params=payload,
                               headers={"Authorization": f"Bearer {token}"})

assert mdl_build_resp.status_code == 200
task_id = mdl_build_resp.json()["TaskId"]


  

cURL


curl -X 'POST' \
    'https://deeplabs.dev/deep_decision/fit?FileName=Most%20Streamed%20Spotify%20Songs%202024.csv&Focus=Track%20Score&FocusOperator=%3C%3D&FocusValue=26&WorldState=false&WorldStateColumn=NA&WorldStateDefaultCntry=NA' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN' \
    -d ''

  

Expected Output

 

{'TaskId': 'ac843828-7119-4c99-86b9-cf54ad75d369', 
 'Status': 'Recieved',
 'TimeStamp': '07/01/2024, 10:56:23'
}

Get Status

Once you kick off a job, you can check its status. A typical LSM fit takes about 1-3 minutes depending on the file size; a large file can take longer. For very large files, a dedicated environment is recommended.

Python


import time

running = True
while running:
    status_resp = requests.get(url="https://deeplabs.dev/deep_decision/fit/" + task_id,
                               headers={"Authorization": f"Bearer {token}"})

    print(status_resp.json()["Status"])
    if status_resp.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(2)

  

cURL


curl -X 'GET' \
    'https://deeplabs.dev/deep_decision/fit/5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN'


  

Expected Output

 
Submitted
...
SUCCESS
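The polling loops in these examples run until a terminal status arrives. As a sketch of a more defensive pattern, the helper below adds a timeout guard; the function name and the canned statuses are hypothetical, and in practice the callable would issue the GET request shown above:

```python
import time


def wait_for_task(fetch_status, poll_seconds=2, timeout_seconds=300):
    """Poll fetch_status() until a terminal status or the timeout is reached."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("FAILURE", "SUCCESS"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("task did not finish in time")


# Hypothetical usage with canned statuses instead of a live API call:
canned = iter(["Submitted", "Submitted", "SUCCESS"])
print(wait_for_task(lambda: next(canned), poll_seconds=0))  # SUCCESS
```

The same helper works for fit, interpretation, and inference tasks, since they all report FAILURE or SUCCESS as terminal states.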

Get Features

When the job is complete, you can access the results. This example downloads the generated LSM features and converts them to a pandas DataFrame (Python example only). If you do not have pandas installed, you can install it with the following command:

pip install pandas

Once downloaded, the features can be used for reporting, decision-making, model development, and scoring.

Python


import pandas as pd
from io import StringIO

features = requests.get(url="https://deeplabs.dev/deep_decision/download/features/" + task_id,
                        headers={"Authorization": f"Bearer {token}"})

df_features = pd.read_csv(StringIO(features.text))
print(df_features.columns)
  

cURL


curl -X 'GET' \
    'https://deeplabs.dev/deep_decision/download/features/5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN'
  
  

Expected Output

 

Index(['Release Date_day_of_week', 'Release Date_month', 'Spotify Popularity',
  'Apple Music Playlist Count', 'Deezer Playlist Count',
  'Amazon Playlist Count', 'Explicit Track', 'Track Score_lteq_26.0',
  'unique_row_key', 'Spotify Streams_is_missing',
  'YouTube Views_is_missing', 'YouTube Likes_is_missing',
  'TikTok Posts_is_missing', 'TikTok Likes_is_missing',
  'TikTok Views_is_missing', 'YouTube Playlist Reach_is_missing',
  'AirPlay Spins_is_missing', 'SiriusXM Spins_is_missing',
  'Deezer Playlist Reach_is_missing', 'Pandora Streams_is_missing',
  'Pandora Track Stations_is_missing', 'Soundcloud Streams_is_missing',
  'Shazam Counts_is_missing', 'Location_embedding_X',
  'Location_embedding_Y', 'Location_embedding_Z', 'focus_est',
  'cluster_assignment', 'segmentation_id', 'outlier_local_outlier_factor',
  'outlier_elliptic_envelope', 'outlier_isolation_forest',
  'outlier_score', 'outlier_rank', 'outlier_segmentation_id', 'focus',
  'index', 'Track Score', 'SiriusXM Spins_is_missing_pre',
  'SiriusXM Spins_is_missing_bin',
  'E_Dist_10_From_SiriusXM Spins_is_missing_high',
  'Deezer Playlist Reach_is_missing_pre',
  'Deezer Playlist Reach_is_missing_bin',
  'E_Dist_10_From_Deezer Playlist Reach_is_missing_high',
  'Pandora Streams_is_missing_pre', 'Pandora Streams_is_missing_bin',
  'E_Dist_10_From_Pandora Streams_is_missing_high',
  'TikTok Views_is_missing_pre', 'TikTok Views_is_missing_bin',
  'E_Dist_10_From_TikTok Views_is_missing_high',
  'TikTok Posts_is_missing_pre', 'TikTok Posts_is_missing_bin',
  'E_Dist_10_From_TikTok Posts_is_missing_high',
  'TikTok Likes_is_missing_pre', 'TikTok Likes_is_missing_bin',
  'E_Dist_10_From_TikTok Likes_is_missing_high'],
 dtype='object')
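As a sketch of how the downloaded features might feed reporting and decision-making, the example below uses a small synthetic stand-in for df_features (the values are made up; the column names come from the output above) to build a review queue from the outlier scores and a per-cluster focus-rate summary:

```python
import pandas as pd

# Synthetic stand-in for the downloaded df_features (illustrative values only).
df_features = pd.DataFrame({
    "unique_row_key": [1, 2, 3, 4],
    "outlier_score": [0.12, 0.87, 0.45, 0.93],
    "cluster_assignment": [0, 1, 0, 1],
    "focus": [0, 1, 0, 1],
})

# Flag the two highest-scoring outliers for manual review.
review_queue = df_features.sort_values("outlier_score", ascending=False).head(2)
print(review_queue["unique_row_key"].tolist())  # [4, 2]

# Summarize the focus rate per cluster for reporting.
print(df_features.groupby("cluster_assignment")["focus"].mean().to_dict())  # {0: 0.0, 1: 1.0}
```

The same operations apply unchanged to the real DataFrame returned by the download call.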

Plot Embeddings

You can examine the results further by plotting the embedding space generated by the LSM.

To run this example, please install the following package:

pip install plotly

Python


import plotly.express as px 

fig = px.scatter_3d(
    df_features,
    x="Location_embedding_X", 
    y="Location_embedding_Y",
    z="Location_embedding_Z",
    color=df_features.focus, labels={'color': 'focus'}
)
fig.update_traces(marker_size=8)
fig.show()

  

cURL

Not Available
  

Expected Output

Generated Plot: Embeddings.

Get Statements

Deep Lab’s LSM provides human-intelligible statements that describe the relationships within the dataset. A typical statement looks like this:

When Explicit Track is low value then the number of times Track Score is less than 26 is less frequent.

LSM statements are derived from statistically valid findings within the dataset and World State. Below is an example of downloading the statements file.

Python


statements = requests.get(url="https://deeplabs.dev/deep_decision/download/statements/" + task_id,
                          headers={"Authorization": f"Bearer {token}"})

print(statements.text)
  
  

cURL


curl -X 'GET' \
    'https://deeplabs.dev/deep_decision/download/statements/5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN'
  
  

Expected Output

 

{"org": [{"origin": "org", 
          "statement": "When missing Playlist YouTube is YouTube Playlist Reach \
                        Reach_is_missing is Low Value then the number of time Track Score \
                        is less than 26.0 IS LESS frequent."}, 
         {"origin": "org", 
          "statement": "When Explicit Track is Low Value then the number of time \
                        Track Score is less than 26.0 IS LESS frequent."},        
         ]}

Get Knowledge Graph

Optionally, the LSM can generate knowledge graphs to interface with LLMs, power multi-model systems, and integrate with existing data solutions.

Python


knowledge_graph = requests.get(url="https://deeplabs.dev/deep_decision/download/knowledge_graph/" + task_id,
                               headers={"Authorization": f"Bearer {token}"})

knowledge_graph = knowledge_graph.json()
print(knowledge_graph)
  
  

cURL

 
curl -X 'GET' \
    'https://deeplabs.dev/deep_decision/download/knowledge_graph/5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN'

  

Expected Output

 

{'vertices':{...}, 
 'vertice_lookup' : {...}, 
 'vertice_properties' : {...}, 
 'edges':{...}, 
 'edge_properties':{...}, 
 'edge_lookup':{...}, 
 'graph_properties':{...}}
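As a rough illustration of how a payload shaped like this maps onto a graph structure, the sketch below builds an adjacency mapping in plain Python. It assumes, hypothetically, that 'vertices' maps vertex IDs to labels and 'edges' maps edge IDs to (source, target) vertex-ID pairs; consult the actual downloaded payload for the real schema:

```python
# Hypothetical miniature payload mirroring the shape shown above.
knowledge_graph = {
    "vertices": {"0": "Track Score", "1": "Explicit Track"},
    "edges": {"e0": ("1", "0")},
}

# Build a dict-of-lists adjacency mapping from the edge list.
adjacency = {vid: [] for vid in knowledge_graph["vertices"]}
for src, dst in knowledge_graph["edges"].values():
    adjacency[src].append(dst)

print(adjacency)  # {'0': [], '1': ['0']}
```

A dict-of-lists mapping like this can be passed to NetworkX (e.g. nx.DiGraph(adjacency)) or a similar graph framework.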
  

Plot Knowledge Graph

All graphs generated by Deep Decision®’s LSM can be loaded into NetworkX or a similar graph framework, making downstream data integration simple. The example below shows how to take a downloaded knowledge graph, create a NetworkX graph, and then plot the graph using Pyvis.

This example is only provided in Python and requires our tools library, located HERE.

Once you have downloaded the zip file, unzip it in your working directory.

Python


from lsm_tools.graphs import lsm_g_2_nx_g, generate_nx_g_plots

G = lsm_g_2_nx_g(knowledge_graph, True)
generate_nx_g_plots(G, "knowledge_graph.html")

  

cURL

Not Available
  

Expected Output

Generated Plot: knowledge_graph.

Get Interpretations

You can leverage Deep Lab’s LLM to further analyze the statements using our interpretation API. Pass the ID of the statement you wish to process; if no ID is provided, the first statement is processed. If the same ID is sent multiple times, only the first request starts an LLM task. Subsequent requests return the saved response and do not cost you any tokens.

Python


payload = {"LSMModelTaskId": task_id}
response_llmkr = requests.post(url="https://deeplabs.dev/deep_decision/interpretation",
                               params=payload,
                               headers={"Authorization": f"Bearer {token}"})

assert response_llmkr.status_code == 200
llmkr_task_id = response_llmkr.json()["TaskId"]

  

cURL


curl -X 'POST' \
    'https://deeplabs.dev/deep_decision/interpretation?LSMModelTaskId=5a3751c9-d1c7-4d32-94ca-f005b6b0317d&ForceNewQuery=false&ForceRebuild=false&StatementCategory=org&StatementId=0' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN' \
    -d ''

  

Expected Output

 

{'TaskId': '22f34837-0c4d-4eb6-9233-ed0326381b5e', 
 'Status': 'Recieved', 
 'TimeStamp': '07/01/2024, 11:08:32'
}

  

Get LLM Status

You can check the state of an interpretation task by passing the task ID to the API. A typical LLM task takes several seconds to run. If a faster response time is required, dedicated servers can be set up.

Python


running = True
while running:
    status_llmkr = requests.get(url="https://deeplabs.dev/deep_decision/interpretation/" + llmkr_task_id,
                                headers={"Authorization": f"Bearer {token}"})

    print(status_llmkr.json()["Status"])
    if status_llmkr.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(1)
        
  

cURL


curl -X 'GET' \
    'https://deeplabs.dev/deep_decision/interpretation/e3d3aca4-f6bb-4ba8-9ac3-2ded84d7be02' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN'

  

Expected Output

 
Submitted
...
SUCCESS

Get Interpretation Graph

All the LLM responses are stored in a graph (the Interpretation Graph), creating an easy-to-query database of all results, ready to be integrated with existing UI solutions. The code below downloads the Interpretation Graph.

Python


interpretation_graph = requests.get(url="https://deeplabs.dev/deep_decision/download/interpretation_graph/" + llmkr_task_id,
                                    headers={"Authorization": f"Bearer {token}"})

interpretation_graph = interpretation_graph.json()
print(interpretation_graph)

  
  

cURL


curl -X 'GET' \
  'https://deeplabs.dev/deep_decision/download/interpretation_graph/e3d3aca4-f6bb-4ba8-9ac3-2ded84d7be02' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'

  

Expected Output

 

{'vertices':{...}, 
 'vertice_lookup' : {...}, 
 'vertice_properties' : {...}, 
 'edges':{...}, 
 'edge_properties':{...}, 
 'edge_lookup':{...}, 
 'graph_properties':{...}
}
    
  

Plot Interpretation Graph (optional)

As with the LSM Knowledge Graph, you can load the Interpretation Graph into NetworkX and plot it.

This example is only provided in Python and requires our tools library, located HERE.

Once you have downloaded the zip file, unzip it in your working directory.

Python


from lsm_tools.graphs import lsm_g_2_nx_g, generate_nx_g_plots

G = lsm_g_2_nx_g(interpretation_graph, False)
generate_nx_g_plots(G, "interpretation_graph.html")
 

  

cURL

Not Available
  

Expected Output

Generated Plot: interpretation_graph.

Get Completed Projects

The next examples show how to re-run a tuned LSM model. First, query which projects are available for reuse.

Python

 
    
response3 = requests.get(url="https://deeplabs.dev/available_projects",
                         headers={"Authorization": f"Bearer {token}"})
print(response3.json())
 

cURL


curl -X 'GET' \
  'https://deeplabs.dev/available_projects' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'

  

Expected Output


{'Most Streamed Spotify Songs 2024_Track Score_lt_26':
   {'username': 'example_user',
    'file_meta': {'type': 'delim', 'delim': ','},
    'meta_file': 'Most Streamed Spotify Songs 2024.csv.meta.json',
    'input_file': 'Most Streamed Spotify Songs 2024.csv',
    'start_time': '2024-07-01 10:56:01.621793',
    'total_time': 160.813449,
    'output_file': 'Most Streamed Spotify Songs 2024_Track Score_lt_26',
    'world_state': {'enabled': False, 'date_col': 'NA'},
    'user_meta_file': 'Most Streamed Spotify Songs 2024.csv.meta.json.llm_meta'},
 'TimeStamp': '07/01/2024, 11:19:48'}

Inference

The Inference API call allows you to use an existing LSM on new data. The example provided below:

  • Pulls several rows from the original Spotify dataset.
  • Changes a few values.
  • Loads the new data file to Deep Decision®.
  • Starts an inference task using the new data and the ID of an existing project.

Python


# First pull two records and the header from the original data.
with open("Most Streamed Spotify Songs 2024.csv", 'rb') as f:
    header = f.readline()
    row1 = f.readline()
    row2 = f.readline()

# Change one data element in each row.
str_row1 = list(row1.decode())
str_row1[-3] = '0'
row1 = ''.join(str_row1)

str_row2 = list(row2.decode())
str_row2[-3] = '0'
row2 = ''.join(str_row2)

# Now create the new data file.
with open("Most Streamed Spotify Songs 2024.sample.csv", 'w') as f:
    f.write(header.decode())
    f.write(row1)
    f.write(row2)

# Load it into Deep Decision.
with open("Most Streamed Spotify Songs 2024.sample.csv", 'rb') as f:
    files = {"file": (f.name, f, "multipart/form-data")}
    upload_resp = requests.post(url="https://deeplabs.dev/deep_decision/upload_data",
                                files=files,
                                headers={"Authorization": f"Bearer {token}"})

assert upload_resp.status_code == 200
print(upload_resp.json())
   
 
  

cURL


curl -X 'POST' \
    'https://deeplabs.dev/deep_decision/upload_data' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN' \
    -H 'Content-Type: multipart/form-data' \
    -F 'file=@Most Streamed Spotify Songs 2024.sample.csv;type=text/csv'

curl -X 'POST' \
    'https://deeplabs.dev/deep_decision/inference?FileName=Most%20Streamed%20Spotify%20Songs%202024.sample.csv&LSMTaskId=5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN' \
    -d ''

  

Expected Output

 
  
{'TaskId': '82584330-9b9d-4c15-8d2a-383d9cdd3050', 
 'Status': 'Loaded', 
 'FileName': 'Most Streamed Spotify Songs 2024.csv', 
 'InputFile': 'Most Streamed Spotify Songs 2024.csv', 
 'TimeStamp': '2024-07-01 11:21:53.386039'}

Now, kick off an inference job.

Python


payload = {"LSMTaskId": task_id, "FileName": "Most Streamed Spotify Songs 2024.sample.csv"}
response_inference = requests.post(url="https://deeplabs.dev/deep_decision/inference",
                                   params=payload,
                                   headers={"Authorization": f"Bearer {token}"})

assert response_inference.status_code == 200
inference_task_id = response_inference.json()["TaskId"]

running = True
while running:
    status_inference = requests.get(url="https://deeplabs.dev/deep_decision/inference/" + inference_task_id,
                                    headers={"Authorization": f"Bearer {token}"})

    print(status_inference.json()["Status"])
    if status_inference.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(1)

  

cURL


curl -X 'GET' \
    'https://deeplabs.dev/deep_decision/inference/f4602c17-27e9-42a7-8a76-f5a09daf0be3' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN'
  

Expected Output

 
Submitted
...
SUCCESS

Get Inference Results

As with a fit task, you can download the feature file, knowledge graph, and statements through the APIs. Note that the statements will be identical to the prior project run because both use the same underlying model.

Python

 
features = requests.get(url="https://deeplabs.dev/deep_decision/download/features/" + inference_task_id,
                        headers={"Authorization": f"Bearer {token}"})

df_features = pd.read_csv(StringIO(features.text))
print(df_features.columns)


  

cURL


curl -X 'GET' \
    'https://deeplabs.dev/deep_decision/download/features/5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
    -H 'accept: application/json' \
    -H 'Authorization: Bearer YOUR AUTH TOKEN'

Expected Output

 

Index(['Release Date_day_of_week', 'Release Date_month', 'Spotify Popularity',
  'Apple Music Playlist Count', 'Deezer Playlist Count',
  'Amazon Playlist Count', 'Explicit Track', 'Track Score_lteq_26.0',
  'unique_row_key', 'Spotify Streams_is_missing',
  'YouTube Views_is_missing', 'YouTube Likes_is_missing',
  'TikTok Posts_is_missing', 'TikTok Likes_is_missing',
  'TikTok Views_is_missing', 'YouTube Playlist Reach_is_missing',
  'AirPlay Spins_is_missing', 'SiriusXM Spins_is_missing',
  'Deezer Playlist Reach_is_missing', 'Pandora Streams_is_missing',
  'Pandora Track Stations_is_missing', 'Soundcloud Streams_is_missing',
  'Shazam Counts_is_missing', 'Location_embedding_X',
  'Location_embedding_Y', 'Location_embedding_Z', 'cluster_assignment',
  'segmentation_id', 'outlier_local_outlier_factor',
  'outlier_elliptic_envelope', 'outlier_isolation_forest',
  'outlier_score', 'outlier_rank', 'outlier_segmentation_id', 'focus',
  'index', 'Track Score', 'SiriusXM Spins_is_missing_pre',
  'SiriusXM Spins_is_missing_bin',
  'E_Dist_10_From_SiriusXM Spins_is_missing_high',
  'Deezer Playlist Reach_is_missing_pre',
  'Deezer Playlist Reach_is_missing_bin',
  'E_Dist_10_From_Deezer Playlist Reach_is_missing_high',
  'Pandora Streams_is_missing_pre', 'Pandora Streams_is_missing_bin',
  'E_Dist_10_From_Pandora Streams_is_missing_high',
  'TikTok Views_is_missing_pre', 'TikTok Views_is_missing_bin',
  'E_Dist_10_From_TikTok Views_is_missing_high',
  'TikTok Posts_is_missing_pre', 'TikTok Posts_is_missing_bin',
  'E_Dist_10_From_TikTok Posts_is_missing_high',
  'TikTok Likes_is_missing_pre', 'TikTok Likes_is_missing_bin',
  'E_Dist_10_From_TikTok Likes_is_missing_high'],
 dtype='object')
  
Copyright © 2024 Deep Labs