Deep Decision® APIs Example Calls
Intro
Deep Decision® drag-and-drop features that are easily accessed through the web interface are also available through an API. For most use cases, our UI provides a simple interface for advanced modeling and research using the power of LSM-LLM. The APIs let you integrate LSM-LLMs into your own systems and leverage their power throughout the decision lifecycle. The examples below show how to create, access, and reuse LSM-LLM models using our APIs. All examples are provided in Python, cURL, and Java.
Sign Up
Once enrolled, use your provided OAuth 2 token to authorize all requests.
The examples below show advanced use cases for LSM-LLM models. In these examples it is assumed you have stored your auth token in an environment variable.
Below is a Python example of how to create an environment variable to store your auth token.
First, install python-dotenv:
pip install python-dotenv
Next, create a file called login.py with the following code. Run it as a script; otherwise the prompts will not work.
#!/usr/bin/env python
import getpass

import dotenv
import requests

username = input("Type Your Username:\n")
password = getpass.getpass()

payload = {"username": username, "password": password}
response = requests.post(url="https://deeplabs.dev/token", data=payload)
assert response.status_code == 200
token = response.json()["access_token"]

# Create dd_api.env if it does not exist yet, then store the token in it.
dotenv_file = dotenv.find_dotenv("dd_api.env")
if dotenv_file == "":
    open("dd_api.env", "w").close()
    dotenv_file = dotenv.find_dotenv("dd_api.env")
dotenv.load_dotenv(dotenv_file)
dotenv.set_key(dotenv_file, "DEEP_DECISION_TOKEN", token)
Finally, run the following command and enter your username and password.
python login.py
Please configure the authentication and authorization parameters in accordance with your company’s security policy.
Accessing Your Account
Let's begin by making sure your authorization token is properly configured. First, use the following code to access your account information:
Python
import os

import dotenv
import requests

# Load your token from where it is stored (you can also save it to your bash profile)
dotenv.load_dotenv("dd_api.env")
token = os.environ["DEEP_DECISION_TOKEN"]

base_url = "https://deeplabs.dev"
auth_resp = requests.get(
    url=base_url + "/users/me",
    headers={"Authorization": f"Bearer {token}"},
)
assert auth_resp.status_code == 200
print(auth_resp.json())
cURL
curl -X GET "https://deeplabs.dev/users/me" \
  -H "Authorization: Bearer YOUR AUTH TOKEN"
JAVA
import kong.unirest.HttpResponse;
import kong.unirest.JsonNode;
import kong.unirest.Unirest;

public class DeepLabs {
    public static void main(String[] args) {
        String baseUrl = "https://deeplabs.dev";
        String token = "YourToken";

        HttpResponse<JsonNode> authResp = Unirest.get(baseUrl + "/users/me")
                .header("Authorization", "Bearer " + token)
                .asJson();

        assert authResp.getStatus() == 200;
        System.out.println(authResp.getBody().getObject());
    }
}
Expected Output
{
  'username': 'Example User',
  'email': 'example_user@email_firm.com',
  'full_name': 'example user',
  'orgname': 'example-firm',
  'orgkey': '',
  'disabled': None
}
Upload Data
Deep Decision® accepts standard JSON or delimited (CSV) file formats. For delimited files, the header row must be followed directly by the data rows. For both delimited and JSON-formatted files, all values in a given field are expected to share the same data type across rows. Data should be UTF-8 encoded with no binary fields, and each complete JSON message must end with a newline, one message per line (often called JSONL when JSON is used instead of the CSV file format).
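To make the format rules concrete, here is a minimal sketch that writes the same hypothetical two-row dataset in both accepted formats (the file names and column names are illustrative, not required by the API):

```python
import csv
import json

# Hypothetical two-row dataset illustrating the format rules above.
rows = [
    {"Track": "Song A", "Track Score": 30.5},
    {"Track": "Song B", "Track Score": 12.0},
]

# Delimited (CSV): the header row is followed directly by the data rows,
# and every value in a column keeps a single data type.
with open("sample_upload.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Track", "Track Score"])
    writer.writeheader()
    writer.writerows(rows)

# JSONL: one complete JSON message per line, each terminated by a newline.
with open("sample_upload.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```

Either file can then be sent to the upload endpoint shown below.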
The following example uses the dataset, 'Most Streamed Spotify Songs 2024' from Kaggle.
You can download it from HERE.
Once you have downloaded and unzipped the file, upload the data to Deep Decision® using the following RESTful call.
Python
with open("Most Streamed Spotify Songs 2024.csv", "rb") as f:
    files = {"file": (f.name, f, "text/csv")}
    upload_resp = requests.post(
        url="https://deeplabs.dev/deep_decision/upload_data",
        files=files,
        headers={"Authorization": f"Bearer {token}"},
    )
assert upload_resp.status_code == 200
print(upload_resp.json())
cURL
curl -X 'POST' \
  'https://deeplabs.dev/deep_decision/upload_data' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN' \
  -H 'Content-Type: multipart/form-data' \
  -F 'file=@"Most Streamed Spotify Songs 2024.csv";type=text/csv'
JAVA
import java.io.File;
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class Uploader {
    public static void main(String[] args) throws Exception {
        String csvFilePath = "Most Streamed Spotify Songs 2024.csv";
        String token = "YOUR_TOKEN";
        String baseUrl = "https://deeplabs.dev";

        try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
            HttpPost httpPost = new HttpPost(baseUrl + "/deep_decision/upload_data");
            httpPost.setHeader("Authorization", "Bearer " + token);

            // Build the multipart/form-data body; the boundary and Content-Type
            // header are set automatically by the entity builder.
            HttpEntity multipart = MultipartEntityBuilder.create()
                    .addBinaryBody("file", new File(csvFilePath),
                            ContentType.create("text/csv"), csvFilePath)
                    .build();
            httpPost.setEntity(multipart);

            try (CloseableHttpResponse response = httpClient.execute(httpPost)) {
                int responseCode = response.getStatusLine().getStatusCode();
                assert responseCode == 200 : "Expected status code 200 but got " + responseCode;
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        }
    }
}
Expected Output
{'TaskId': 'dcbe4e1e-8028-49e6-b2fc-598f916c0c70', 'Status': 'Loaded', 'FileName': 'Most Streamed Spotify Songs 2024.csv', 'InputFile': 'Most Streamed Spotify Songs 2024.csv', 'TimeStamp': '2024-07-01 10:22:47.587881' }
Build LSM
Most of the work to build an LSM is done automatically, leveraging the learnings from Deep Lab’s World State. A few parameters are required to train an LSM on your data, such as the focus. The focus defines what you want the AI to pay attention to; it is similar to a prompt for an LLM and helps guide the LSM results. For the Spotify example, we define the focus to be songs with a Track Score less than or equal to 26.
Python
payload = {
    "FileName": "Most Streamed Spotify Songs 2024.csv",
    "Focus": "Track Score",
    "FocusValue": 26,
    "FocusOperator": "<=",
}
mdl_build_resp = requests.post(
    url="https://deeplabs.dev/deep_decision/fit",
    params=payload,
    headers={
        "Content-Type": "application/json; charset=utf-8",
        "Authorization": f"Bearer {token}",
    },
)
assert mdl_build_resp.status_code == 200
task_id = mdl_build_resp.json()["TaskId"]
cURL
curl -X 'POST' \
  'https://deeplabs.dev/deep_decision/fit?FileName=Most%20Streamed%20Spotify%20Songs%202024.csv&Focus=Track%20Score&FocusOperator=%3C%3D&FocusValue=26&WorldState=false&WorldStateColumn=NA&WorldStateDefaultCntry=NA' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN' \
  -d ''
JAVA
import org.json.JSONObject;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class ModelBuild {
    public static void main(String[] args) throws Exception {
        String fileName = "Most Streamed Spotify Songs 2024.csv";
        String focus = "Track Score";
        int focusValue = 26;
        String focusOperator = "<=";
        String token = "YOUR_TOKEN";
        int expectedStatus = 200;

        try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
            HttpPost httpPost = new HttpPost("https://deeplabs.dev/deep_decision/fit");
            httpPost.setHeader("Content-Type", "application/json; charset=utf-8");
            httpPost.setHeader("Authorization", "Bearer " + token);

            String jsonPayload = "{\"FileName\":\"" + fileName + "\",\"Focus\":\"" + focus
                    + "\",\"FocusValue\":" + focusValue
                    + ",\"FocusOperator\":\"" + focusOperator + "\"}";
            httpPost.setEntity(new StringEntity(jsonPayload));

            try (CloseableHttpResponse response = httpClient.execute(httpPost)) {
                int actualStatus = response.getStatusLine().getStatusCode();
                if (actualStatus != expectedStatus) {
                    throw new AssertionError("Unexpected response status: " + actualStatus);
                }
                JSONObject responseJson = new JSONObject(EntityUtils.toString(response.getEntity()));
                String taskId = responseJson.getString("TaskId");
                System.out.println(responseJson);
                // Perform task with the returned task ID
                // ...
            }
        }
    }
}
Expected Output
{'TaskId': 'ac843828-7119-4c99-86b9-cf54ad75d369', 'Status': 'Recieved', 'TimeStamp': '07/01/2024, 10:56:23' }
Get Status
Once you kick off a job, you can check its status. A typical LSM fit takes one to three minutes, depending on file size; large files can take longer. For very large files, a dedicated environment is recommended.
Python
import time

running = True
while running:
    status_resp = requests.get(
        url="https://deeplabs.dev/deep_decision/fit/" + task_id,
        headers={"Authorization": f"Bearer {token}"},
    )
    print(status_resp.json()["Status"])
    if status_resp.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(2)
cURL
curl -X 'GET' \
  'https://deeplabs.dev/deep_decision/fit/5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'
JAVA
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.json.JSONObject;

public class FitStatus {
    public static void main(String[] args) throws Exception {
        String taskId = "YOUR_TASK_ID";
        String token = "YOUR_TOKEN";

        boolean running = true;
        while (running) {
            String url = "https://deeplabs.dev/deep_decision/fit/" + taskId;
            try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
                HttpGet httpGet = new HttpGet(url);
                httpGet.setHeader("Authorization", "Bearer " + token);
                try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
                    String responseBody = EntityUtils.toString(response.getEntity());
                    JSONObject statusResponse = new JSONObject(responseBody);
                    String status = statusResponse.getString("Status");
                    System.out.println(status);
                    if (status.equals("FAILURE") || status.equals("SUCCESS")) {
                        running = false;
                    }
                }
            }
            Thread.sleep(2000);
        }
    }
}
Expected Output
Submitted ... SUCCESS
Get Features
When the job is complete, you can access the results. This example downloads the generated LSM features and converts them to a pandas DataFrame (Python example only). If you do not have pandas installed, you can use the following command:
pip install pandas
Once downloaded, the features can be used for reporting, decision-making, model development, and scoring.
Python
import pandas as pd
from io import StringIO

features_resp = requests.get(
    url="https://deeplabs.dev/deep_decision/download/features/" + task_id,
    headers={"Authorization": f"Bearer {token}"},
)
df_features = pd.read_csv(StringIO(features_resp.text))
print(df_features.columns)
cURL
curl -X 'GET' \
  'https://deeplabs.dev/deep_decision/download/features/5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'
JAVA
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class FeatureDownloader {
    public static void main(String[] args) {
        String baseUrl = "https://deeplabs.dev";
        String taskId = "YOUR_TASK_ID";
        String token = "YOUR_TOKEN";
        try {
            URL url = new URL(baseUrl + "/deep_decision/download/features/" + taskId);
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            connection.setRequestProperty("Authorization", "Bearer " + token);
            int responseCode = connection.getResponseCode();
            if (responseCode == HttpURLConnection.HTTP_OK) {
                BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
                StringBuilder response = new StringBuilder();
                String inputLine;
                while ((inputLine = in.readLine()) != null) {
                    response.append(inputLine).append("\n");
                }
                in.close();
                // Parse only the header row to list the feature column names.
                List<String> columns = new ArrayList<>();
                String headerLine = response.toString().split("\n")[0];
                for (String value : headerLine.split(",")) {
                    columns.add(value.trim());
                }
                System.out.println(columns);
            } else {
                System.out.println("Error: " + responseCode);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Expected Output
Index(['Release Date_day_of_week', 'Release Date_month', 'Spotify Popularity', 'Apple Music Playlist Count', 'Deezer Playlist Count', 'Amazon Playlist Count', 'Explicit Track', 'Track Score_lteq_26.0', 'unique_row_key', 'Spotify Streams_is_missing', 'YouTube Views_is_missing', 'YouTube Likes_is_missing', 'TikTok Posts_is_missing', 'TikTok Likes_is_missing', 'TikTok Views_is_missing', 'YouTube Playlist Reach_is_missing', 'AirPlay Spins_is_missing', 'SiriusXM Spins_is_missing', 'Deezer Playlist Reach_is_missing', 'Pandora Streams_is_missing', 'Pandora Track Stations_is_missing', 'Soundcloud Streams_is_missing', 'Shazam Counts_is_missing', 'Location_embedding_X', 'Location_embedding_Y', 'Location_embedding_Z', 'focus_est', 'cluster_assignment', 'segmentation_id', 'outlier_local_outlier_factor', 'outlier_elliptic_envelope', 'outlier_isolation_forest', 'outlier_score', 'outlier_rank', 'outlier_segmentation_id', 'focus', 'index', 'Track Score', 'SiriusXM Spins_is_missing_pre', 'SiriusXM Spins_is_missing_bin', 'E_Dist_10_From_SiriusXM Spins_is_missing_high', 'Deezer Playlist Reach_is_missing_pre', 'Deezer Playlist Reach_is_missing_bin', 'E_Dist_10_From_Deezer Playlist Reach_is_missing_high', 'Pandora Streams_is_missing_pre', 'Pandora Streams_is_missing_bin', 'E_Dist_10_From_Pandora Streams_is_missing_high', 'TikTok Views_is_missing_pre', 'TikTok Views_is_missing_bin', 'E_Dist_10_From_TikTok Views_is_missing_high', 'TikTok Posts_is_missing_pre', 'TikTok Posts_is_missing_bin', 'E_Dist_10_From_TikTok Posts_is_missing_high', 'TikTok Likes_is_missing_pre', 'TikTok Likes_is_missing_bin', 'E_Dist_10_From_TikTok Likes_is_missing_high'], dtype='object')
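As a sketch of downstream use, you can rank the downloaded features by how strongly they track the focus before moving on to model development. The DataFrame below is a small hypothetical stand-in for a few of the columns listed above; in practice you would run this on the df_features frame built in the Python example.

```python
import pandas as pd

# Hypothetical stand-in for a few downloaded LSM feature columns.
df_features = pd.DataFrame({
    "Spotify Popularity": [88, 42, 73, 15],
    "Explicit Track": [1, 0, 1, 0],
    "outlier_score": [0.12, 0.87, 0.05, 0.91],
    "focus": [1, 0, 1, 0],
})

# Rank features by absolute correlation with the focus column —
# a quick first pass before model development or scoring.
corr = (
    df_features.drop(columns="focus")
    .corrwith(df_features["focus"])
    .abs()
    .sort_values(ascending=False)
)
print(corr)
```

This is only a first-pass screen; the features can equally feed a proper model-training pipeline.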
Plot Embeddings
You can examine the results further by plotting the embedding space generated by the LSM.
To run this example, please install the following package:
pip install plotly
Python
import plotly.express as px

fig = px.scatter_3d(
    df_features,
    x="Location_embedding_X",
    y="Location_embedding_Y",
    z="Location_embedding_Z",
    color=df_features.focus,
    labels={"color": "focus"},
)
fig.update_traces(marker_size=8)
fig.show()
cURL
Not Available
JAVA
Not Available
Expected Output
Generated Plot: Embeddings.
Get Statements
Deep Lab’s LSM provides human-intelligible statements that describe the relationships within the dataset. A typical statement looks like this:
When track explicit is low value, then the number of times Track Score is less than 26 is less frequent.
LSM statements are derived from statistically valid findings within the dataset and World State. Below is an example of downloading the statements file.
Python
statements = requests.get(
    url="https://deeplabs.dev/deep_decision/download/statements/" + task_id,
    headers={"Authorization": f"Bearer {token}"},
)
print(statements.text)
cURL
curl -X 'GET' \
  'https://deeplabs.dev/deep_decision/download/statements/5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'
JAVA
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class StatementsDownloader {
    public static void main(String[] args) {
        String baseUrl = "https://deeplabs.dev/deep_decision/download/statements/";
        String taskId = ""; // Replace with the actual task ID
        String token = "";  // Replace with the actual token
        try {
            URL url = new URL(baseUrl + taskId);
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            connection.setRequestProperty("Authorization", "Bearer " + token);
            int responseCode = connection.getResponseCode();
            if (responseCode == HttpURLConnection.HTTP_OK) {
                Scanner scanner = new Scanner(connection.getInputStream());
                StringBuilder response = new StringBuilder();
                while (scanner.hasNextLine()) {
                    response.append(scanner.nextLine());
                }
                System.out.println(response.toString());
            } else {
                System.out.println("Error: " + responseCode);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Expected Output
{"org": [
  {"origin": "org", "statement": "When YouTube Playlist Reach_is_missing is Low Value then the number of time Track Score is less than 26.0 IS LESS frequent."},
  {"origin": "org", "statement": "When Explicit Track is Low Value then the number of time Track Score is less than 26.0 IS LESS frequent."}
]}
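The statements payload groups statements by origin category. A minimal sketch of flattening it into (category, statement) rows for reporting, using a hypothetical two-statement payload shaped like the expected output above (in practice you would call statements.json() on the response):

```python
# Hypothetical payload shaped like the statements download response.
statements_json = {
    "org": [
        {"origin": "org", "statement": "When Explicit Track is Low Value then ..."},
        {"origin": "org", "statement": "When YouTube Playlist Reach_is_missing is Low Value then ..."},
    ]
}

# Flatten the per-category lists into simple (category, statement) rows.
rows = [
    (category, item["statement"])
    for category, items in statements_json.items()
    for item in items
]
for category, statement in rows:
    print(category, "|", statement)
```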
Get Knowledge Graph
Optionally, the LSM can generate knowledge graphs to interface with LLMs, power multi-model systems, and integrate with existing data solutions.
Python
knowledge_graph = requests.get(
    url="https://deeplabs.dev/deep_decision/download/knowledge_graph/" + task_id,
    headers={"Authorization": f"Bearer {token}"},
)
knowledge_graph = knowledge_graph.json()
print(knowledge_graph)
cURL
curl -X 'GET' \
  'https://deeplabs.dev/deep_decision/download/knowledge_graph/5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'
JAVA
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class KnowledgeGraphDownloader {
    public static void main(String[] args) {
        String baseUrl = "https://deeplabs.dev/deep_decision/download/knowledge_graph/";
        String taskId = ""; // Replace with the actual task ID
        String token = "";  // Replace with the actual token
        try {
            URL url = new URL(baseUrl + taskId);
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            connection.setRequestProperty("Authorization", "Bearer " + token);
            int responseCode = connection.getResponseCode();
            if (responseCode == HttpURLConnection.HTTP_OK) {
                Scanner scanner = new Scanner(connection.getInputStream());
                StringBuilder response = new StringBuilder();
                while (scanner.hasNextLine()) {
                    response.append(scanner.nextLine());
                }
                System.out.println(response.toString());
            } else {
                System.out.println("Error: " + responseCode);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Expected Output
{'vertices':{...}, 'vertice_lookup' : {...}, 'vertice_properties' : {...}, 'edges':{...}, 'edge_properties':{...}, 'edge_lookup':{...}, 'graph_properties':{...}}
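Because the downloaded graph is plain JSON with the fixed top-level keys shown above, a quick sanity check before handing it to downstream tooling can catch truncated or malformed downloads. The empty-dict payload here is a hypothetical stand-in for the API response:

```python
# The seven top-level keys the knowledge graph payload is expected to carry.
expected_keys = {
    "vertices", "vertice_lookup", "vertice_properties",
    "edges", "edge_properties", "edge_lookup", "graph_properties",
}

# Stand-in for the parsed API response (knowledge_graph.json() in practice).
knowledge_graph = {k: {} for k in expected_keys}

missing = expected_keys - knowledge_graph.keys()
assert not missing, f"graph payload missing keys: {missing}"
print(sorted(knowledge_graph.keys()))
```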
Plot Knowledge Graph
All graphs generated by Deep Decision®’s LSM can be loaded into NetworkX or a similar graph framework, making downstream data integration simple. The example below shows how to download a knowledge graph, create a NetworkX graph, and plot it using Pyvis.
This example is only provided in Python and requires our tools library, located HERE.
Once you have downloaded the zip file, unzip it in your working directory.
Python
from lsm_tools.graphs import lsm_g_2_nx_g, generate_nx_g_plots

G = lsm_g_2_nx_g(knowledge_graph, True)
generate_nx_g_plots(G, "knowledge_graph.html")
cURL
Not Available
JAVA
Not Available
Expected Output
Generated Plot: knowledge_graph.
Get Interpretations
You can leverage Deep Lab’s LLM to further analyze the statements using our interpretation API. Pass the ID of the statement you wish to process; if no ID is provided, the first statement is processed. If the same ID is sent multiple times, only the first request starts an LLM task; subsequent requests return the saved response and cost no tokens.
Python
payload = {"LSMModelTaskId": task_id}
response_llmkr = requests.post(
    url="https://deeplabs.dev/deep_decision/interpretation",
    params=payload,
    headers={"Authorization": f"Bearer {token}"},
)
assert response_llmkr.status_code == 200
llmkr_task_id = response_llmkr.json()["TaskId"]
cURL
curl -X 'POST' \
  'https://deeplabs.dev/deep_decision/interpretation?LSMModelTaskId=5a3751c9-d1c7-4d32-94ca-f005b6b0317d&ForceNewQuery=false&ForceRebuild=false&StatementCategory=org&StatementId=0' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN' \
  -d ''
JAVA
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import java.io.IOException;

public class Interpretation {
    public static void main(String[] args) {
        String taskId = "your_task_id";
        String baseUrl = "your_base_url";
        String token = "your_token";

        JsonObject payload = new JsonObject();
        payload.addProperty("LSMModelTaskId", taskId);

        HttpPost httpPost = new HttpPost(baseUrl + "/deep_decision/interpretation");
        httpPost.setHeader("Authorization", "Bearer " + token);
        httpPost.setEntity(new StringEntity(payload.toString(), "UTF-8"));

        try (CloseableHttpClient httpClient = HttpClients.createDefault();
             CloseableHttpResponse response = httpClient.execute(httpPost)) {
            int statusCode = response.getStatusLine().getStatusCode();
            assert statusCode == 200;
            String responseBody = EntityUtils.toString(response.getEntity());
            JsonObject responseJson = JsonParser.parseString(responseBody).getAsJsonObject();
            String llmkrTaskId = responseJson.get("TaskId").getAsString();
            System.out.println(responseBody);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Expected Output
{'TaskId': '22f34837-0c4d-4eb6-9233-ed0326381b5e', 'Status': 'Recieved', 'TimeStamp': '07/01/2024, 11:08:32' }
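The endpoint’s deduplication by statement ID behaves like a memo cache: only the first request for a given ID triggers an LLM task, and repeats return the stored result. A local analogue of that behavior, with a hypothetical run_llm_task standing in for the expensive server-side call:

```python
_cache = {}
calls = []  # records how many times the "LLM" actually ran

def run_llm_task(statement_id):
    # Stands in for the expensive LLM interpretation task.
    calls.append(statement_id)
    return f"interpretation for {statement_id}"

def interpret(statement_id=0):  # ID defaults to the first statement
    if statement_id not in _cache:   # only the first request starts a task
        _cache[statement_id] = run_llm_task(statement_id)
    return _cache[statement_id]      # repeats return the saved response

print(interpret(3))
print(interpret(3))  # second call hits the cache; no new task, no token cost
```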
Get LLM Status
You can check the state of an interpretation task by passing the task ID to the API. A typical LLM task takes several seconds to run. If faster response times are required, dedicated servers can be set up.
Python
running = True
while running:
    status_llmkr = requests.get(
        url="https://deeplabs.dev/deep_decision/interpretation/" + llmkr_task_id,
        headers={"Authorization": f"Bearer {token}"},
    )
    print(status_llmkr.json()["Status"])
    if status_llmkr.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(1)
cURL
curl -X 'GET' \
  'https://deeplabs.dev/deep_decision/interpretation/e3d3aca4-f6bb-4ba8-9ac3-2ded84d7be02' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'
JAVA
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.json.JSONObject;

public class InterpretationStatus {
    public static void main(String[] args) throws Exception {
        String baseUrl = "https://deeplabs.dev";
        String token = "YOUR_TOKEN";
        String llmkrTaskId = "YOUR_TASK_ID";

        boolean running = true;
        while (running) {
            try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
                HttpGet httpGet = new HttpGet(baseUrl + "/deep_decision/interpretation/" + llmkrTaskId);
                httpGet.setHeader("Authorization", "Bearer " + token);
                try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
                    JSONObject statusJson = new JSONObject(EntityUtils.toString(response.getEntity()));
                    String status = statusJson.getString("Status");
                    System.out.println(status);
                    if (status.equals("FAILURE") || status.equals("SUCCESS")) {
                        running = false;
                    }
                }
            }
            Thread.sleep(1000);
        }
    }
}
Expected Output
Submitted ... SUCCESS
Get Interpretation Graph
All LLM responses are stored in a graph (the Interpretation Graph), creating an easy-to-query database of results that is ready to integrate with existing UI solutions. The code below downloads the Interpretation Graph.
Python
interpretation_graph = requests.get(
    url="https://deeplabs.dev/deep_decision/download/interpretation_graph/" + llmkr_task_id,
    headers={"Authorization": f"Bearer {token}"},
)
interpretation_graph = interpretation_graph.json()
print(interpretation_graph)
cURL
curl -X 'GET' \
  'https://deeplabs.dev/deep_decision/download/interpretation_graph/e3d3aca4-f6bb-4ba8-9ac3-2ded84d7be02' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'
JAVA
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class InterpretationGraphDownloader {
    public static void main(String[] args) {
        String baseUrl = "https://deeplabs.dev/deep_decision/download/interpretation_graph/";
        String llmkrTaskId = ""; // Replace with the actual task ID
        String token = "";       // Replace with the actual token
        try {
            URL url = new URL(baseUrl + llmkrTaskId);
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            connection.setRequestProperty("Authorization", "Bearer " + token);
            int responseCode = connection.getResponseCode();
            if (responseCode == HttpURLConnection.HTTP_OK) {
                Scanner scanner = new Scanner(connection.getInputStream());
                StringBuilder response = new StringBuilder();
                while (scanner.hasNextLine()) {
                    response.append(scanner.nextLine());
                }
                System.out.println(response.toString());
            } else {
                System.out.println("Error: " + responseCode);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Expected Output
{'vertices':{...}, 'vertice_lookup' : {...}, 'vertice_properties' : {...}, 'edges':{...}, 'edge_properties':{...}, 'edge_lookup':{...}, 'graph_properties':{...} }
Plot Interpretation Graph (optional)
As with the LSM Knowledge Graph, you can load the Interpretation Graph into NetworkX and plot it.
This example is only provided in Python and requires our tools library, located HERE.
Once you have downloaded the zip file, unzip it in your working directory.
Python
from lsm_tools.graphs import lsm_g_2_nx_g, generate_nx_g_plots

G = lsm_g_2_nx_g(interpretation_graph, False)
generate_nx_g_plots(G, "interpretation_graph.html")
cURL
Not Available
JAVA
Not Available
Expected Output
Generated Plot: interpretation_graph.
Get Completed Projects
The next examples show how to re-run a tuned LSM model. First, query which projects are available for reuse.
Python
projects_resp = requests.get(
    url="https://deeplabs.dev/available_projects",
    headers={"Authorization": f"Bearer {token}"},
)
print(projects_resp.json())
cURL
curl -X 'GET' \
  'https://deeplabs.dev/available_projects' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'
JAVA
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

public class AvailableProjects {
    public static void main(String[] args) {
        String baseUrl = "https://deeplabs.dev";
        String token = "YOUR_TOKEN_HERE";
        try {
            URL url = new URL(baseUrl + "/available_projects");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            connection.setRequestProperty("Authorization", "Bearer " + token);
            int responseCode = connection.getResponseCode();
            if (responseCode == HttpURLConnection.HTTP_OK) {
                JsonObject jsonResponse = JsonParser
                        .parseReader(new InputStreamReader(connection.getInputStream()))
                        .getAsJsonObject();
                System.out.println(jsonResponse.toString());
            } else {
                System.out.println("Error: " + responseCode);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Expected Output
{'Most Streamed Spotify Songs 2024_Track Score_lt_26':
   {'username': 'example_user',
    'file_meta': {'type': 'delim', 'delim': ','},
    'meta_file': 'Most Streamed Spotify Songs 2024.csv.meta.json',
    'input_file': 'Most Streamed Spotify Songs 2024.csv',
    'start_time': '2024-07-01 10:56:01.621793',
    'total_time': 160.813449,
    'output_file': 'Most Streamed Spotify Songs 2024_Track Score_lt_26',
    'world_state': {'enabled': False, 'date_col': 'NA'},
    'user_meta_file': 'Most Streamed Spotify Songs 2024.csv.meta.json.llm_meta'},
 'TimeStamp': '07/01/2024, 11:19:48'}
Inference
Inference allows you to use an existing LSM on new data. The example below:
- Pulls several rows from the original Spotify dataset.
- Changes a few values.
- Uploads the new data file to Deep Decision®.
- Starts an inference task using the new data and the ID of an existing project.
Python
# First pull two records from the original data, plus the header
with open("Most Streamed Spotify Songs 2024.csv", "rb") as f:
    header = f.readline()
    row1 = f.readline()
    row2 = f.readline()

# Change one data element in each row
str_row1 = list(row1.decode())
str_row1[-3] = "0"
row1 = "".join(str_row1)

str_row2 = list(row2.decode())
str_row2[-3] = "0"
row2 = "".join(str_row2)

# Now create the new data file
with open("Most Streamed Spotify Songs 2024.sample.csv", "w") as f:
    f.write(header.decode())
    f.write(row1)
    f.write(row2)

# Load it into Deep Decision®
with open("Most Streamed Spotify Songs 2024.sample.csv", "rb") as f:
    files = {"file": (f.name, f, "text/csv")}
    upload_resp = requests.post(
        url="https://deeplabs.dev/deep_decision/upload_data",
        files=files,
        headers={"Authorization": f"Bearer {token}"},
    )
assert upload_resp.status_code == 200
print(upload_resp.json())
cURL
curl -X 'POST' \
  'https://deeplabs.dev/deep_decision/upload_data' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN' \
  -H 'Content-Type: multipart/form-data' \
  -F 'file=@Most Streamed Spotify Songs 2024.sample.csv;type=text/csv'

curl -X 'POST' \
  'https://deeplabs.dev/deep_decision/inference?FileName=Most%20Streamed%20Spotify%20Songs%202024.sample.csv&LSMTaskId=5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN' \
  -d ''
JAVA
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class SpotifyDataProcessor {
    public static void main(String[] args) {
        String token = "YOUR_TOKEN";
        try {
            // First pull two records from the original data, plus the header
            List<String> data = new ArrayList<>();
            try (BufferedReader br = new BufferedReader(new FileReader("Most Streamed Spotify Songs 2024.csv"))) {
                data.add(br.readLine()); // header
                data.add(br.readLine());
                data.add(br.readLine());
            }

            // Change one data element in each row
            List<StringBuilder> modifiedRows = new ArrayList<>();
            for (int i = 1; i < data.size(); i++) {
                StringBuilder sb = new StringBuilder(data.get(i));
                sb.setCharAt(sb.length() - 3, '0');
                modifiedRows.add(sb);
            }

            // Now create the new data file
            try (BufferedWriter bw = new BufferedWriter(new FileWriter("Most Streamed Spotify Songs 2024.sample.csv"))) {
                bw.write(data.get(0));
                bw.newLine();
                bw.write(modifiedRows.get(0).toString());
                bw.newLine();
                bw.write(modifiedRows.get(1).toString());
            }

            // Load it into Deep Decision®
            try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
                HttpPost httpPost = new HttpPost("https://deeplabs.dev/deep_decision/upload_data");
                httpPost.setHeader("Authorization", "Bearer " + token);
                HttpEntity multipart = MultipartEntityBuilder.create()
                        .addBinaryBody("file", new File("Most Streamed Spotify Songs 2024.sample.csv"),
                                ContentType.create("text/csv"), "Most Streamed Spotify Songs 2024.sample.csv")
                        .build();
                httpPost.setEntity(multipart);
                try (CloseableHttpResponse response = httpClient.execute(httpPost)) {
                    int statusCode = response.getStatusLine().getStatusCode();
                    assert statusCode == 200;
                    System.out.println(EntityUtils.toString(response.getEntity()));
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Expected Output
{'TaskId': '82584330-9b9d-4c15-8d2a-383d9cdd3050', 'Status': 'Loaded', 'FileName': 'Most Streamed Spotify Songs 2024.csv', 'InputFile': 'Most Streamed Spotify Songs 2024.csv', 'TimeStamp': '2024-07-01 11:21:53.386039'}
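The TaskId in the upload response is what later calls use to reference this file. A minimal sketch of pulling it out in Python, using the sample response above in place of a live `response.json()` call:

```python
# Parse the upload response and keep the TaskId for later calls.
# This dict stands in for response.json() from a live request.
upload_response = {
    "TaskId": "82584330-9b9d-4c15-8d2a-383d9cdd3050",
    "Status": "Loaded",
    "FileName": "Most Streamed Spotify Songs 2024.csv",
    "InputFile": "Most Streamed Spotify Songs 2024.csv",
    "TimeStamp": "2024-07-01 11:21:53.386039",
}

task_id = upload_response["TaskId"]
assert upload_response["Status"] == "Loaded", "upload did not complete"
print(task_id)
```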
Now, kick off an inference job.
Python
import time

import requests

payload = {"LSMTaskId": task_id, "FileName": "Most Streamed Spotify Songs 2024.sample.csv"}
response_inference = requests.post(
    url="https://deeplabs.dev/deep_decision/inference",
    params=payload,
    headers={"Authorization": f"Bearer {token}"},
)
assert response_inference.status_code == 200
inference_task_id = response_inference.json()["TaskId"]
running = True
while running:
    status_inference = requests.get(
        url="https://deeplabs.dev/deep_decision/inference/" + inference_task_id,
        headers={"Authorization": f"Bearer {token}"},
    )
    print(status_inference.json()["Status"])
    if status_inference.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(1)
cURL
curl -X 'GET' \
  'https://deeplabs.dev/deep_decision/inference/f4602c17-27e9-42a7-8a76-f5a09daf0be3' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'
JAVA
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.json.JSONObject;

public class CodeTranslation {
    public static void main(String[] args) {
        String taskId = "task_id"; // TaskId from the fit step
        String baseUrl = "https://deeplabs.dev";
        String token = System.getenv("DEEP_DECISION_TOKEN");
        try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
            // The inference endpoint takes its arguments as query parameters
            URIBuilder uri = new URIBuilder(baseUrl + "/deep_decision/inference");
            uri.addParameter("LSMTaskId", taskId);
            uri.addParameter("FileName", "Most Streamed Spotify Songs 2024.sample.csv");
            HttpPost postRequest = new HttpPost(uri.build());
            postRequest.setHeader("Authorization", "Bearer " + token);
            CloseableHttpResponse postResponse = httpClient.execute(postRequest);
            assert postResponse.getStatusLine().getStatusCode() == 200;
            String inferenceTaskId = new JSONObject(EntityUtils.toString(postResponse.getEntity())).getString("TaskId");
            boolean running = true;
            while (running) {
                HttpGet getRequest = new HttpGet(baseUrl + "/deep_decision/inference/" + inferenceTaskId);
                getRequest.setHeader("Authorization", "Bearer " + token);
                CloseableHttpResponse getResponse = httpClient.execute(getRequest);
                String status = new JSONObject(EntityUtils.toString(getResponse.getEntity())).getString("Status");
                System.out.println(status);
                if (status.equals("FAILURE") || status.equals("SUCCESS")) {
                    running = false;
                }
                Thread.sleep(1000);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Expected Output
Submitted ... SUCCESS
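The polling loop above runs until the job reports FAILURE or SUCCESS; in production code you may want a timeout so a stuck job does not poll forever. A minimal sketch, where `fetch_status` and its stubbed status sequence are illustrative stand-ins for the GET inference-status call, not part of the Deep Decision® API:

```python
import time


def wait_for_task(fetch_status, timeout_seconds=300, poll_interval=1):
    """Poll fetch_status() until it returns a terminal status or the timeout expires."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("FAILURE", "SUCCESS"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("inference task did not finish in time")


# Stub standing in for requests.get(...).json()["Status"] against a live server
statuses = iter(["Submitted", "Running", "SUCCESS"])
result = wait_for_task(lambda: next(statuses), poll_interval=0)
print(result)  # SUCCESS
```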
Get Inference Results
As with a fit task, you can download the feature file, knowledge graph, and statements through the APIs. Note that the statements will be identical to the prior project run because both runs use the same underlying model.
Python
from io import StringIO

import pandas as pd
import requests

features = requests.get(
    url="https://deeplabs.dev/deep_decision/download/features/" + inference_task_id,
    headers={"Authorization": f"Bearer {token}"},
)
df_features = pd.read_csv(StringIO(features.text))
print(df_features.columns)
cURL
curl -X 'GET' \
  'https://deeplabs.dev/deep_decision/download/features/5a3751c9-d1c7-4d32-94ca-f005b6b0317d' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer YOUR AUTH TOKEN'
JAVA
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import com.opencsv.CSVReader;
import com.opencsv.exceptions.CsvException;

public class CodeTranslation {
    public static void main(String[] args) {
        String baseUrl = "https://deeplabs.dev/deep_decision/download/features/";
        String inferenceTaskId = ""; // Replace with the actual inference task ID
        String token = "";           // Replace with the actual token
        Map<String, String> headers = new HashMap<>();
        headers.put("Authorization", "Bearer " + token);
        String features = getFeatures(baseUrl + inferenceTaskId, headers);
        try (InputStream inputStream = new ByteArrayInputStream(features.getBytes());
             CSVReader csvReader = new CSVReader(new InputStreamReader(inputStream))) {
            String[] columns = csvReader.readNext();
            for (String column : columns) {
                System.out.println(column);
            }
        } catch (IOException | CsvException e) {
            e.printStackTrace();
        }
    }

    private static String getFeatures(String url, Map<String, String> headers) {
        try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
            HttpGet httpGet = new HttpGet(url);
            for (Map.Entry<String, String> header : headers.entrySet()) {
                httpGet.addHeader(header.getKey(), header.getValue());
            }
            try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
                return EntityUtils.toString(response.getEntity());
            }
        } catch (IOException e) {
            e.printStackTrace();
            return "";
        }
    }
}
Expected Output
Index(['Release Date_day_of_week', 'Release Date_month', 'Spotify Popularity', 'Apple Music Playlist Count', 'Deezer Playlist Count', 'Amazon Playlist Count', 'Explicit Track', 'Track Score_lteq_26.0', 'unique_row_key', 'Spotify Streams_is_missing', 'YouTube Views_is_missing', 'YouTube Likes_is_missing', 'TikTok Posts_is_missing', 'TikTok Likes_is_missing', 'TikTok Views_is_missing', 'YouTube Playlist Reach_is_missing', 'AirPlay Spins_is_missing', 'SiriusXM Spins_is_missing', 'Deezer Playlist Reach_is_missing', 'Pandora Streams_is_missing', 'Pandora Track Stations_is_missing', 'Soundcloud Streams_is_missing', 'Shazam Counts_is_missing', 'Location_embedding_X', 'Location_embedding_Y', 'Location_embedding_Z', 'cluster_assignment', 'segmentation_id', 'outlier_local_outlier_factor', 'outlier_elliptic_envelope', 'outlier_isolation_forest', 'outlier_score', 'outlier_rank', 'outlier_segmentation_id', 'focus', 'index', 'Track Score', 'SiriusXM Spins_is_missing_pre', 'SiriusXM Spins_is_missing_bin', 'E_Dist_10_From_SiriusXM Spins_is_missing_high', 'Deezer Playlist Reach_is_missing_pre', 'Deezer Playlist Reach_is_missing_bin', 'E_Dist_10_From_Deezer Playlist Reach_is_missing_high', 'Pandora Streams_is_missing_pre', 'Pandora Streams_is_missing_bin', 'E_Dist_10_From_Pandora Streams_is_missing_high', 'TikTok Views_is_missing_pre', 'TikTok Views_is_missing_bin', 'E_Dist_10_From_TikTok Views_is_missing_high', 'TikTok Posts_is_missing_pre', 'TikTok Posts_is_missing_bin', 'E_Dist_10_From_TikTok Posts_is_missing_high', 'TikTok Likes_is_missing_pre', 'TikTok Likes_is_missing_bin', 'E_Dist_10_From_TikTok Likes_is_missing_high'], dtype='object')
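Many of the generated feature names share a suffix, such as the `_is_missing` indicator columns, so a quick way to group related features is to filter on the column names. A small sketch over a handful of the names above (on a live result you would filter `df_features.columns` the same way); the short column list here is just an illustrative subset:

```python
# A small subset of the feature column names from the expected output above
columns = [
    "Spotify Popularity",
    "Spotify Streams_is_missing",
    "YouTube Views_is_missing",
    "outlier_score",
    "cluster_assignment",
]

# Pick out the missing-value indicator features by their suffix
missing_flags = [c for c in columns if c.endswith("_is_missing")]
print(missing_flags)  # ['Spotify Streams_is_missing', 'YouTube Views_is_missing']
```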