Deep Decision® Advanced APIs Example Calls
Intro
The examples below show advanced use cases for LSM models. These examples assume you have stored your auth token in an environment variable.
Below is a Python example of how to create an environment variable to store your auth token.
First, install python-dotenv:
pip install python-dotenv
Next, create a file called login.py with the following code. This code should be run as a script; otherwise the prompts will not work.
#!/usr/bin/env python
import getpass

import dotenv
import requests

username = input("Type Your Username:\n")
password = getpass.getpass()

# Exchange the credentials for an access token.
payload = {"username": username, "password": password}
response = requests.post(url="https://deeplabs.dev/token", data=payload)
assert response.status_code == 200
token = response.json()["access_token"]

# Store the token in dd_api.env, creating the file if it does not exist.
dotenv_file = dotenv.find_dotenv("dd_api.env")
if dotenv_file == "":
    open("dd_api.env", "w").close()
    dotenv_file = dotenv.find_dotenv("dd_api.env")
dotenv.load_dotenv(dotenv_file)
dotenv.set_key(dotenv_file, "DEEP_DECISION_TOKEN", token)
Finally, run the following command and enter your username and password.
python login.py
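Every example below follows the same pattern: submit a task, poll its status endpoint until it reports SUCCESS or FAILURE, then download the result. As a sketch, that polling loop can be factored into a small helper; the function name and the injectable get parameter are illustrative, not part of the API:

```python
import time


def wait_for_task(status_url, token, get, interval=5):
    """Poll a Deep Decision status endpoint until the task finishes.

    `get` is an HTTP GET callable such as requests.get; it is a parameter
    here only so the helper is easy to reuse and test. Returns the final
    status string, either "SUCCESS" or "FAILURE".
    """
    while True:
        response = get(url=status_url, headers={"Authorization": f"Bearer {token}"})
        status = response.json()["Status"]
        if status in ("FAILURE", "SUCCESS"):
            return status
        time.sleep(interval)
```

For example, wait_for_task("https://deeplabs.dev/deep_decision/fit/" + task_id, token, requests.get) would replace the "while running:" loops repeated throughout the examples below.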
Synthetic Data
One of the most powerful capabilities of LSMs is generating synthetic data (reverse transform). The following example shows how to generate feature data from an existing LSM model. To run this example you need the task_id of an existing LSM model.
Python
import os
import time

import dotenv
import requests

dotenv.load_dotenv("dd_api.env")
token = os.environ["DEEP_DECISION_TOKEN"]
lsm_task_id = "YOUR_LSM_TASK_ID"  # the task id of an existing LSM

# Request 1000 synthetic records from the existing LSM.
payload = {"OutputName": "reverse_transform_1000",
           "LSMTaskId": lsm_task_id,
           "Records": 1000}
response = requests.post(url="https://deeplabs.dev/deep_decision/reverse_transform",
                         params=payload,
                         headers={"Content-Type": "application/json; charset=utf-8",
                                  "Authorization": f"Bearer {token}"})
assert response.status_code == 200
task_id = response.json()["TaskId"]

# Poll until the task finishes.
running = True
while running:
    response_status = requests.get(url="https://deeplabs.dev/deep_decision/reverse_transform/" + task_id,
                                   headers={"Authorization": f"Bearer {token}"})
    if response_status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(5)

data = requests.get(url="https://deeplabs.dev/deep_decision/reverse_transform/download/" + task_id,
                    headers={"Authorization": f"Bearer {token}"})
print(data.text)
cURL
Todo
JAVA
Todo
Expected Output
0.46380755,0.05105678,0.24551268,0.03683918,0.2507396,0.99654776,1.0046242,-0.0047902344,-0.0046186065,-0.0046186065
0.43295756,0.16100644,0.3331982,-0.013320735,0.28278425,1.013017,0.9957659,0.004235721,0.004238049,0.004238049
0.46427616,0.0393171,0.2386723,0.006180934,0.23810743,1.0001982,0.98970985,0.010247549,0.010291172,0.010291172
0.44723058,0.12667318,0.3278843,0.03441456,0.13836586,0.9913957,1.0191704,-0.012941012,-0.019163158,-0.019163158
0.30264154,0.986271,0.35439748,0.017408665,0.724232,0.0054707513,1.0125,0.011873793,0.0049595726,0.0049595726
...
0.44302073,0.11943609,0.21936777,0.01674453,0.23416166,1.0142374,0.9854311,0.014540881,0.014569386,0.014569386
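The download is plain CSV text, one synthetic record per line. A minimal sketch of turning it into rows of floats; the sample string below is a shortened stand-in for the data.text returned by the download call:

```python
import csv
import io

# Shortened stand-in for the `data.text` returned by the download call.
sample = """0.46380755,0.05105678,0.24551268
0.43295756,0.16100644,0.3331982
"""

# Parse each CSV line into a list of floats.
rows = [[float(v) for v in line] for line in csv.reader(io.StringIO(sample)) if line]
```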
Multi-Models
LSMs enable creating semi-supervised models on the fly to address threats as they emerge in near real time. The following example shows how to create and use multi-models with an existing LSM model. To run this example you need the task_id of an existing LSM model.
Python
import os
import time

import dotenv
import requests

dotenv.load_dotenv("dd_api.env")
token = os.environ["DEEP_DECISION_TOKEN"]
lsm_task_id = "YOUR_LSM_TASK_ID"  # the task id of an existing LSM

# Fit a multi-model that targets returned settlements.
payload = {"LSMTaskId": lsm_task_id,
           "Target": "settlement_code_desc",
           "TargetOperator": "=",
           "TargetValue": "Returned"}
response = requests.post(url="https://deeplabs.dev/deep_decision/multi_model/fit",
                         params=payload,
                         headers={"Content-Type": "application/json; charset=utf-8",
                                  "Authorization": f"Bearer {token}"})
assert response.status_code == 200
mm_task_id = response.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/multi_model/fit/" + mm_task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(2)

# Run inference with the fitted multi-model.
payload = {"LSMTaskId": lsm_task_id, "MultiModelTaskId": mm_task_id}
response_2 = requests.post(url="https://deeplabs.dev/deep_decision/multi_model/inference",
                           params=payload,
                           headers={"Content-Type": "application/json; charset=utf-8",
                                    "Authorization": f"Bearer {token}"})
assert response_2.status_code == 200
mm_in_task_id = response_2.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/multi_model/inference/" + mm_in_task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(2)

data = requests.get(url="https://deeplabs.dev/deep_decision/multi_model/inference/download/" + mm_in_task_id,
                    headers={"Authorization": f"Bearer {token}"})
print(data.text)
cURL
Todo
JAVA
Todo
Expected Output
0.5813363194465637
0.012083625886589289
0.39033398032188416
0.41366443037986755
0.5977702140808105
...
0.020116182044148445
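Each line of the inference download is a single score for one record. A sketch of parsing the scores and flagging those above a cutoff; the sample string stands in for data.text, and the 0.5 threshold is purely illustrative:

```python
# Shortened stand-in for the `data.text` returned by the download call.
sample = "0.5813363194465637\n0.012083625886589289\n0.39033398032188416\n"

scores = [float(line) for line in sample.splitlines() if line.strip()]
# Indices of records whose score crosses an illustrative 0.5 threshold.
flagged = [i for i, s in enumerate(scores) if s >= 0.5]
```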
Proximity
You can explore how records within your data are related using the proximity search method. For each record, its five closest neighbors are returned. The following example shows how to calculate proximity features using an existing LSM model. To run this example you need the task_id of an existing LSM model.
Python
import os
import time

import dotenv
import requests

dotenv.load_dotenv("dd_api.env")
token = os.environ["DEEP_DECISION_TOKEN"]
lsm_task_id = "YOUR_LSM_TASK_ID"  # the task id of an existing LSM

# Fit a topology model on the existing LSM.
payload = {"LSMTaskId": lsm_task_id}
response = requests.post(url="https://deeplabs.dev/deep_decision/topology/fit/",
                         params=payload,
                         headers={"Content-Type": "application/json; charset=utf-8",
                                  "Authorization": f"Bearer {token}"})
assert response.status_code == 200
mm_task_id = response.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/topology/fit/" + mm_task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(5)

# Find the closest neighbors of each record.
payload = {"LSMTaskId": lsm_task_id, "TopologyModelTaskId": mm_task_id}
response_2 = requests.post(url="https://deeplabs.dev/deep_decision/topology/similar",
                           params=payload,
                           headers={"Content-Type": "application/json; charset=utf-8",
                                    "Authorization": f"Bearer {token}"})
assert response_2.status_code == 200
mm_in_task_id = response_2.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/topology/similar/" + mm_in_task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(2)

data = requests.get(url="https://deeplabs.dev/deep_decision/topology/similar/download/" + mm_in_task_id,
                    headers={"Authorization": f"Bearer {token}"})
print(data.text)
cURL
Todo
JAVA
Todo
Expected Output
34,118,87,79,166
35,37,165,9,215
36,102,172,164,137
37,9,165,35,200
38,95,122,45,125
...
44,85,106,109,11
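Each line of the download lists five row indices, the neighbors reported for one record. A sketch of parsing them; the sample string stands in for data.text:

```python
# Shortened stand-in for the `data.text` returned by the download call.
sample = "34,118,87,79,166\n35,37,165,9,215\n"

# One list of row indices per line.
neighbors = [[int(v) for v in line.split(",")] for line in sample.splitlines() if line.strip()]
```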
Drift
LSMs can monitor data over time to surface drifts in behavior. The following example shows how to monitor behavior drift using an existing LSM model. To run this example you need the task_id of an existing LSM model. To run it, download and unzip into your working directory the following data taken from Kaggle:
Python
import os
import time

import dotenv
import requests

dotenv.load_dotenv("dd_api.env")
token = os.environ["DEEP_DECISION_TOKEN"]

# Upload the first period of data and fit an LSM on it.
input_file = "summary_0.csv"
f = open("./onlinefraud/" + input_file, "rb")
files = {"file": (f.name, f, "multipart/form-data")}
response = requests.post(url="https://deeplabs.dev/deep_decision/upload_data",
                         files=files,
                         headers={"Authorization": f"Bearer {token}"})
assert response.status_code == 200
file_name = response.json()["FileName"]

payload = {"FileName": file_name,
           "Focus": "amount_max",
           "FocusValue": 44373.0,
           "FocusOperator": ">="}
response_2 = requests.post(url="https://deeplabs.dev/deep_decision/fit",
                           params=payload,
                           headers={"Content-Type": "application/json; charset=utf-8",
                                    "Authorization": f"Bearer {token}"})
assert response_2.status_code == 200
task_id = response_2.json()["TaskId"]

running = True
while running:
    response_3 = requests.get(url="https://deeplabs.dev/deep_decision/fit/" + task_id,
                              headers={"Authorization": f"Bearer {token}"})
    if response_3.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(2)

# Upload the second period and run inference against the fitted model.
input_file = "summary_1.csv"
f = open("./onlinefraud/" + input_file, "rb")
files = {"file": (f.name, f, "multipart/form-data")}
response_4 = requests.post(url="https://deeplabs.dev/deep_decision/upload_data",
                           files=files,
                           headers={"Authorization": f"Bearer {token}"})
assert response_4.status_code == 200

payload = {"FileName": input_file, "PriorModelTaskId": task_id}
response_5 = requests.post(url="https://deeplabs.dev/deep_decision/inference",
                           params=payload,
                           headers={"Content-Type": "application/json; charset=utf-8",
                                    "Authorization": f"Bearer {token}"})
assert response_5.status_code == 200
LSMInferenceTaskId = response_5.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/inference/" + LSMInferenceTaskId,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(5)

# Fit a topology model comparing the two periods.
payload = {"LSMTaskId": task_id, "LSMInferenceTaskId": LSMInferenceTaskId}
response_6 = requests.post(url="https://deeplabs.dev/deep_decision/topology/cfit",
                           params=payload,
                           headers={"Content-Type": "application/json; charset=utf-8",
                                    "Authorization": f"Bearer {token}"})
assert response_6.status_code == 200
mm_task_id = response_6.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/topology/cfit/" + mm_task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(5)

# Compute drift for the new period.
payload = {"LSMInferenceTaskId": LSMInferenceTaskId, "TopologyModelTaskId": mm_task_id}
response_7 = requests.post(url="https://deeplabs.dev/deep_decision/topology/drift",
                           params=payload,
                           headers={"Content-Type": "application/json; charset=utf-8",
                                    "Authorization": f"Bearer {token}"})
assert response_7.status_code == 200
mm_in_task_id = response_7.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/topology/drift/" + mm_in_task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(2)

data = requests.get(url="https://deeplabs.dev/deep_decision/topology/drift/download/" + mm_in_task_id,
                    headers={"Authorization": f"Bearer {token}"})
print(data.text)
cURL
Todo
JAVA
Todo
Expected Output
1.4971574203081233
3.422140212921138
1.4015057627344816
3.649078550973383
1.7401598452516722
...
2.2046907395506743
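The drift download holds one distance per record, which presumably measures how far each record has drifted from the prior period. A sketch of summarizing them; the sample string stands in for data.text, and the 3.0 cutoff is purely illustrative:

```python
# Shortened stand-in for the `data.text` returned by the download call.
sample = "1.4971574203081233\n3.422140212921138\n1.4015057627344816\n"

distances = [float(line) for line in sample.splitlines() if line.strip()]
mean_drift = sum(distances) / len(distances)
# Records whose drift exceeds an illustrative cutoff.
outliers = [i for i, d in enumerate(distances) if d > 3.0]
```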
Twinning
LSMs can find twins within new datasets for targeting and analysis. The following example shows how to find twins in a new dataset using an existing LSM model. To run this example you need the task_id of an existing LSM model. To run it, download and unzip into your working directory the following data taken from Kaggle:
Python
import os
import time

import dotenv
import requests

dotenv.load_dotenv("dd_api.env")
token = os.environ["DEEP_DECISION_TOKEN"]

# Upload the first dataset and fit an LSM on it.
f = open("./bank/bank.1.csv", "rb")
files = {"file": (f.name, f, "multipart/form-data")}
response = requests.post(url="https://deeplabs.dev/deep_decision/upload_data",
                         files=files,
                         headers={"Authorization": f"Bearer {token}"})
assert response.status_code == 200
file_name = response.json()["FileName"]

payload = {"FileName": file_name,
           "Focus": "poutcome",
           "FocusValue": "unknown",
           "FocusOperator": "="}
response_2 = requests.post(url="https://deeplabs.dev/deep_decision/fit",
                           params=payload,
                           headers={"Content-Type": "application/json; charset=utf-8",
                                    "Authorization": f"Bearer {token}"})
assert response_2.status_code == 200
task_id = response_2.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/fit/" + task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(5)

# Upload the second dataset and run inference against the fitted model.
f = open("./bank/bank.2.csv", "rb")
files = {"file": (f.name, f, "multipart/form-data")}
response_3 = requests.post(url="https://deeplabs.dev/deep_decision/upload_data",
                           files=files,
                           headers={"Authorization": f"Bearer {token}"})
assert response_3.status_code == 200

payload = {"FileName": "bank.2.csv", "PriorModelTaskId": task_id}
response_4 = requests.post(url="https://deeplabs.dev/deep_decision/inference",
                           params=payload,
                           headers={"Content-Type": "application/json; charset=utf-8",
                                    "Authorization": f"Bearer {token}"})
assert response_4.status_code == 200
LSMInferenceTaskId = response_4.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/inference/" + LSMInferenceTaskId,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(5)

# Fit a topology model comparing the two datasets.
payload = {"LSMTaskId": task_id, "LSMInferenceTaskId": LSMInferenceTaskId}
response_5 = requests.post(url="https://deeplabs.dev/deep_decision/topology/cfit",
                           params=payload,
                           headers={"Content-Type": "application/json; charset=utf-8",
                                    "Authorization": f"Bearer {token}"})
assert response_5.status_code == 200
mm_task_id = response_5.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/topology/cfit/" + mm_task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(5)

# Find twins in the new dataset.
payload = {"LSMInferenceTaskId": LSMInferenceTaskId, "TopologyModelTaskId": mm_task_id}
response_6 = requests.post(url="https://deeplabs.dev/deep_decision/topology/twins",
                           params=payload,
                           headers={"Content-Type": "application/json; charset=utf-8",
                                    "Authorization": f"Bearer {token}"})
assert response_6.status_code == 200
mm_in_task_id = response_6.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/topology/twins/" + mm_in_task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(5)

data = requests.get(url="https://deeplabs.dev/deep_decision/topology/twins/download/" + mm_in_task_id,
                    headers={"Authorization": f"Bearer {token}"})
print(data.text)
cURL
Todo
JAVA
Todo
Expected Output
"[68, 0.033436298591846124]","[62, 0.04832043449395457]" "[63, 0.4380094980488955]","[68, 0.4605327707084767]" "[59, 0.13782657931159953]","[68, 0.15387070262619562]" "[59, 0.17298820215651198]","[68, 0.19223247651858108]" "[59, 0.09788875656452228]","[58, 0.13564280369497914]" ... "[59, 0.17972316648126735]","[68, 0.2015414928549079]"
World State
You can leverage pre-trained LSMs without building a new LSM model. This example runs our World State model for a given date range.
Python
import os
import time

import dotenv
import requests

dotenv.load_dotenv("dd_api.env")
token = os.environ["DEEP_DECISION_TOKEN"]

# Request 50 periods of World State embeddings starting 01/01/2022.
payload = {"ProjectName": "test",
           "Region": "US",
           "Date": "01/01/2022",
           "Periods": 50}
response = requests.post(url="https://deeplabs.dev/deep_decision/world_state/embeddings",
                         params=payload,
                         headers={"Content-Type": "application/json; charset=utf-8",
                                  "Authorization": f"Bearer {token}"})
assert response.status_code == 200
task_id = response.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/world_state/embeddings/" + task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(5)

data = requests.get(url="https://deeplabs.dev/deep_decision/world_state/embeddings/download/" + task_id,
                    headers={"Authorization": f"Bearer {token}"})
print(data.text)
cURL
Todo
JAVA
Todo
Expected Output
[{"region":"US","date":1645232461000,"s&p_500_dl_prices":-0.0427175929,"s&p_500_anomaly_dl_prices":205,"russell_1000_dl_prices":-0.0329695153,"russell_1000_anomaly_dl_prices":244,"nikkei_225_dl_prices":0.0057064018,"nikkei_225_anomaly_dl_prices":731,"natural_gas_dl_prices":-0.0028949996,"natural_gas_anomaly_dl_prices":387,"nasdaq_dl_prices":-0.0345521869,"nasdaq_anomaly_dl_prices":180,"nasdaq_100_dl_prices":-0.0083518486,"nasdaq_100_anomaly_dl_prices":200,"gold_dl_prices":-0.0115861884,"gold_anomaly_dl_prices":334,"ftse_100_dl_prices":-0.0014682044,"ftse_100_anomaly_dl_prices":354,"dax_dl_prices":-0.0727371928,"dax_anomaly_dl_prices":190,"cboe_volatility_index_dl_prices":0.0546578822,"cboe_volatility_index_anomaly_dl_prices":284,"bitcoin_dl_prices":-0.0234457974,"bitcoin_anomaly_dl_prices":246,"TMIN_dl_weather":0.1781523594,"TMIN_anomaly_dl_weather":330,"TMAX_dl_weather":0.3097317937,"TMAX_anomaly_dl_weather":236,"TAVG_dl_weather":0.2377009852,"TAVG_anomaly_dl_weather":194,"SNWD_dl_weather":0.966868867,"SNWD_anomaly_dl_weather":149,"SNOW_dl_weather":0.9587584984,"SNOW_anomaly_dl_weather":157,"PRCP_dl_weather":0.3977812611,"PRCP_anomaly_dl_weather":157,"RHAV_dl_weather":0.8684229476,"RHAV_anomaly_dl_weather":470,"RHMX_dl_weather":0.87289397,"RHMX_anomaly_dl_weather":415,"01_cnt_neg_tone_dl_news":0.0025696601,"01_cnt_neg_tone_anomaly_dl_news":702,"02_cnt_neg_tone_dl_news":0.0019427925,"02_cnt_neg_tone_anomaly_dl_news":759,"03_cnt_neg_tone_dl_news":0.0023542987,"03_cnt_neg_tone_anomaly_dl_news":566,"03_cnt_pos_tone_dl_news":-0.0019865568,"03_cnt_pos_tone_anomaly_dl_news":519,"05_cnt_pos_tone_dl_news":-0.0001768632,"05_cnt_pos_tone_anomaly_dl_news":455,"06_cnt_neg_tone_dl_news":-0.0018127371,"06_cnt_neg_tone_anomaly_dl_news":651,"06_cnt_pos_tone_dl_news":0.0014753919,"06_cnt_pos_tone_anomaly_dl_news":602,"07_cnt_neg_tone_dl_news":-0.0013654009,"07_cnt_neg_tone_anomaly_dl_news":516,"07_cnt_pos_tone_dl_news":-0.0005561117,"07_cnt_pos_tone_anomaly_dl_news":464,"08_cnt_neg_
tone_dl_news":0.0028051624,"08_cnt_neg_tone_anomaly_dl_news":313,"10_cnt_neg_tone_dl_news":0.0037537538,"10_cnt_neg_tone_anomaly_dl_news":705,"11_cnt_neg_tone_dl_news":0.0042574357,"11_cnt_neg_tone_anomaly_dl_news":669,"12_cnt_neg_tone_dl_news":-0.0054685369,"12_cnt_neg_tone_anomaly_dl_news":646,"13_cnt_neg_tone_dl_news":-0.0060976361,"13_cnt_neg_tone_anomaly_dl_news":661,"14_cnt_neg_tone_dl_news":0.0,"14_cnt_neg_tone_anomaly_dl_news":759,"15_cnt_neg_tone_dl_news":0.0,"15_cnt_neg_tone_anomaly_dl_news":587,"16_cnt_neg_tone_dl_news":-0.0005192108,"16_cnt_neg_tone_anomaly_dl_news":687,"18_cnt_neg_tone_dl_news":0.011344678,"18_cnt_neg_tone_anomaly_dl_news":582,"score3_dl_software_exploits":0.4908546815,"score3_anomaly_dl_software_exploits":110,"cnt3_dl_software_exploits":0.8705513005,"cnt3_anomaly_dl_software_exploits":403,"severity3_dl_software_exploits":0.526237196,"severity3_anomaly_dl_software_exploits":179,"impact3_dl_software_exploits":0.7070185172,"impact3_anomaly_dl_software_exploits":351,"min_severity3_dl_software_exploits":0.6293070235,"min_severity3_anomaly_dl_software_exploits":411,"max_severity3_dl_software_exploits":0.9788910928,"max_severity3_anomaly_dl_software_exploits":825,"max_impact3_dl_software_exploits":0.8412687158,"max_impact3_anomaly_dl_software_exploits":757,"min_impact3_dl_software_exploits":0.1855470431,"min_impact3_anomaly_dl_software_exploits":60} ... ]
Psychographics
You can leverage pre-trained LSMs without building a new LSM model. This example runs our LSM Psychographics model using just shopping history.
Python
import json
import os
import time

import dotenv
import requests

dotenv.load_dotenv("dd_api.env")
token = os.environ["DEEP_DECISION_TOKEN"]

# Score a shopping history against the pre-trained psychographics model.
payload = {"ProjectName": "Pyco_Test",
           "History": json.dumps(["Target", "Safeway", "Amazon", "TacoBell", "InAndOut"])}
response = requests.post(url="https://deeplabs.dev/deep_decision/pre-trained/psychographics",
                         params=payload,
                         headers={"Content-Type": "application/json; charset=utf-8",
                                  "Authorization": f"Bearer {token}"})
assert response.status_code == 200
mm_task_id = response.json()["TaskId"]

running = True
while running:
    status = requests.get(url="https://deeplabs.dev/deep_decision/pre-trained/psychographics/" + mm_task_id,
                          headers={"Authorization": f"Bearer {token}"})
    if status.json()["Status"] in ["FAILURE", "SUCCESS"]:
        running = False
    time.sleep(5)

data = requests.get(url="https://deeplabs.dev/deep_decision/pre-trained/psychographics/download/" + mm_task_id,
                    headers={"Authorization": f"Bearer {token}"})
print(data.text)
cURL
Todo
JAVA
Todo
Expected Output
[{"neuroticism":0.012,"agreeableness":0.008955224,"conscientiousness":0.009615385,"openness":0.011764706,"mindful":0.012145749,"extroversion":0.013157895,"impulse":0.020547945,"RF_actionChain_fraud_recieved_fraudaert_text_max":0.1339210908,"RF_bnpl_urge_max":0.3895239346,"RF_actionChain_fraud_easy_max":0.1278611028,"RF_Ukraine_read_news_mobile_device_max":0.3143611367,"RF_promotions_just_wanted_to_try_something_new_max":0.3020388319,"RF_actionChain_fraud_appreciate_max":0.1335004163,"RF_actionChain_fraud_ethical_max":0.2463841159,"RF_actionChain_fraud_present_max":0.1199792379,"RF_crypto_My_friends_would_say_that_I_m_a_risk_taker_GRIP_max":0.0,"RF_promotions_I_went_back_at_least_one_more_time_merchant_outcomes_max":0.3071109915,"RF_actionChain_fraud_unnecessary_max":0.1266302142,"RF_impulsivity_1_urgeDiff_max":1.0,"RF_actionChain_fraud_declineYesNo_max":0.6108015818,"RF_crypto_I_enjoy_taking_risks_in_most_aspects_of_my_life_GRIP_max":0.0,"RF_actionChain_fraud_inflation_worried_max":0.1275277696,"RF_marketing_channel_outdoor_preferred_max":0.3217299143,"RF_marketing_channel_buyCosofPersonalizedAds_max":0.00435739,"RF_promotions_I_never_went_back_merchant_outcomes_max":0.0986544962,"RF_crypto_Taking_risks_makes_life_more_fun_GRIP_max":0.0,"RF_Ukraine_read_news_facebook_max":0.1558929916,"RF_impulsivity_2_urge_max":0.4843093524,"RF_actionChain_fraud_received_alert_max":0.255594067,"RF_marketing_channel_usePersonalData_max":0.0019838536,"RF_actionChain_fraud_debit_credit_card_fraud_worried_max":0.3673081252,"RF_buyers_remorse_buyersRemorse":0.8687597902,"RF_promotions_returned_product_max":0.100922111,"RF_marketing_channel_video_preferred_max":0.0036505848,"RF_Ukraine_How_do_you_primarily_read_the_news_":0.0,"RF_marketing_channel_radio_preferred_max":0.0097651515,"RF_crypto_I_am_attracted__rather_than_scared__by_risk_GRIP_max":0.2538959867,"RF_marketing_channel_socialMedia_preferred_max":0.0005412924,"RF_crypto_I_would_take_a_risk_even_if_it_meant_I_might_get_hurt_GRIP_
max":0.1317617549,"RF_actionChain_fraud_consumer_paid_chargeback_max":0.0017142857,"RF_healthcare_advertisement_deceptive_max":0.7636308081,"RF_promotions_buy_now_pay_later_service_merchant_change_reasons_max":0.0,"RF_actionChain_fraud_real_fraud_max":0.6145682984,"RF_Ukraine_Which_social_media_platform_do_you_read_the_news_on_":0.3063133145,"RF_crypto_I_commonly_make_risky_decisions_GRIP_max":0.255629021,"RF_actionChain_fraud_check_fraud_daliy_max":0.756380673,"RF_marketing_channel_preferTailoredAds_max":0.2974078644,"RF_actionChain_fraud_stayLonger_max":0.2522357115,"RF_crypto_Taking_risks_is_an_important_part_of_my_life_GRIP_max":0.2626744256,"RF_promotions_told_firms_max":0.2999351153,"RF_marketing_channel_email_preferred_max":0.0005157729,"RF_promotions_ever_switched_product_max":0.7128036375,"RF_marketing_channel_commercials_preferred_max":0.0031824894,"RF_actionChain_fraud_trust_max":0.2524955541,"RF_actionChain_fraud_frustrate_max":0.3839993756,"RF_actionChain_fraud_identity_theft_worried_max":0.1292412587,"RF_bnpl_intent_max":0.1361630169,"RF_marketing_channel_snailMail_preferred_max":0.3124951956,"RF_promotions_rebate_program_merchant_change_reasons_max":0.0993936511,"RF_promotions_one_time_discount_coupon_merchant_change_max":0.3073334434,"RF_marketing_channel_printed_preferred_max":0.3420828955,"RF_crypto_I_am_a_believer_of_taking_chances_GRIP_max":0.0004,"RF_promotions_loyalty_program_merchant_change_reason_max":0.5141764263,"RF_marketing_channel_mobile_preferred_max":0.0007104238,"RF_actionChain_fraud_textPreferred_max":0.2553853759,"RF_actionChain_fraud_intrusive_max":0.108424938,"RF_Ukraine_shopped_online_less_max":0.5450570509,"RF_actionChain_fraud_fraudAlert_nextSteps_another_card_max":0.0,"neuroticism_bin":2,"agreeableness_bin":2,"conscientiousness_bin":1,"openness_bin":1,"mindful_bin":3,"extroversion_bin":2,"impulse_bin":3} ... ]