POST
https://api.playment.io/v0/projects/<project_id>/jobs
Each request represents one unit of data to be annotated.
Parameters
project_id
: To be passed in the URL. See Getting project_id.
x-api-key
: Secret key to be passed as a header. See Getting x-api-key.
{
"reference_id": "001",
"data": {
"image_url":"https://dummyimage.com/600x400/000/fff.jpg&text=Dummy+Image+1"
},
"tag": "Sample-task",
"batch_id": "72c888f6-b365-4f27-ad57-d7841da2de0c",
"priority_weight": 5
}
import requests
import json

"""
Details for creating jobs:
project_id ->> ID of the project in which the job is to be created
x_api_key  ->> secret API key used to authenticate job creation
tag        ->> operation tag; shared with you by Playment during integration
batch_id   ->> the batch in which the job is to be created
"""
project_id = ''
x_api_key = ''
tag = ''
batch_id = ''

# Calls the job creation API for a single unit of data
def upload_job(data):
    url = f"https://api.playment.io/v0/projects/{project_id}/jobs"
    response = requests.post(url, headers={'x-api-key': x_api_key}, json=data)
    print(response.json())
    if response.status_code >= 500:
        raise Exception("Something went wrong at Playment's end")
    if 400 <= response.status_code < 500:
        raise Exception(f"Bad request: {response.status_code}")
    return response.json()

# Creates a batch and returns its batch_id
def create_batch(batch_name, batch_description):
    url = f"https://api.playment.io/v1/project/{project_id}/batch"
    data = {
        "project_id": project_id,
        "label": batch_name,
        "name": batch_name,
        "description": batch_description,
    }
    response = requests.post(url, headers={'x-api-key': x_api_key}, json=data)
    print(response.json())
    if response.status_code >= 500:
        raise Exception("Something went wrong at Playment's end")
    if 400 <= response.status_code < 500:
        raise Exception(f"Bad request: {response.status_code}")
    return response.json()['data']['batch_id']

# Optionally create a new batch for this job instead of using an existing one:
# batch_id = create_batch("my-batch", "my batch description")

# Accessible source URL of the image to be annotated
image_url = "https://example.com/image_url_1"

# reference_id must be unique for each job
reference_id = "job1"

job_data = {
    'reference_id': reference_id,
    'tag': tag,
    'data': {'image_url': image_url},
    'batch_id': batch_id,
}

# Helper to serialize arbitrary objects (falling back to __dict__ or str) for inspection
def to_dict(obj):
    return json.loads(
        json.dumps(obj, default=lambda o: getattr(o, '__dict__', str(o)))
    )

print(json.dumps(to_dict(job_data)))
response = upload_job(job_data)
print(response)
{
"data": {
"job_id": "3f3e8675-ca69-46d7-aa34-96f90fcbb732",
"reference_id": "001",
"tag": "Sample-task"
},
"success": true
}
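On success, the API returns `success: true` along with the identifiers. A minimal sketch of extracting `job_id` from such a response body, using the sample above parsed offline:

```python
import json

# Sample success response body, as returned by the job creation endpoint above
response_body = json.loads("""
{
    "data": {
        "job_id": "3f3e8675-ca69-46d7-aa34-96f90fcbb732",
        "reference_id": "001",
        "tag": "Sample-task"
    },
    "success": true
}
""")

# A successful request has success == True; "data" carries the identifiers
if response_body["success"]:
    job_id = response_body["data"]["job_id"]
    reference_id = response_body["data"]["reference_id"]
    print(job_id, reference_id)
```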
Request Key | Description |
---|---|
reference_id | reference_id is a unique identifier for a request. We'll fail a request if you've previously sent another request with the same reference_id. This helps us ensure that we don't charge you for work we've already done for you. |
tag | Each request should have a tag that tells us what operation needs to be performed on that request. We'll share this tag with you during the integration process. |
data | Object. The complete data that is to be annotated. |
image_url | String. Accessible source URL on which annotation will be done. |
batch_id Optional | String. A batch is a way to organize multiple jobs under one batch_id. |
priority_weight Optional | Number. Ranges from 1 to 10, with 1 being the lowest priority and 10 the highest. Default value: 5. |
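The constraints above can be checked client-side before a request is sent. A hedged sketch, where `build_job_payload` is a hypothetical helper (not part of any Playment SDK) that assembles the request keys and validates `priority_weight`:

```python
# Hypothetical helper: builds a job payload from the request keys described
# above, with a basic client-side range check on priority_weight.
def build_job_payload(reference_id, tag, image_url,
                      batch_id=None, priority_weight=None):
    if priority_weight is not None and not 1 <= priority_weight <= 10:
        raise ValueError("priority_weight must be between 1 and 10")
    payload = {
        "reference_id": reference_id,   # must be unique per job
        "tag": tag,
        "data": {"image_url": image_url},
    }
    if batch_id is not None:            # optional key
        payload["batch_id"] = batch_id
    if priority_weight is not None:     # optional; the server defaults to 5
        payload["priority_weight"] = priority_weight
    return payload

payload = build_job_payload(
    "001",
    "Sample-task",
    "https://dummyimage.com/600x400/000/fff.jpg&text=Dummy+Image+1",
    batch_id="72c888f6-b365-4f27-ad57-d7841da2de0c",
    priority_weight=5,
)
print(payload)
```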
Secure attachment access
For secure attachment (image_url here) access and IP whitelisting, refer to Secure attachment access.
Response Key | Description |
---|---|
data | Object. Contains job_id, reference_id and tag. |
reference_id | String. The unique identifier sent with the request. |
job_id | String. UUID; the unique job ID. |
tag | String. The operation tag sent with the request. |
job_id
is the unique ID of a job. It will be used later to fetch the job's result.
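Since reference_id is your own identifier and job_id is assigned by Playment, it can help to keep a local mapping between the two for matching results back to your records. A minimal sketch, where `record_job` is a hypothetical helper fed the creation response shown above:

```python
# Local mapping from your reference_id to the job_id returned by the API,
# so annotation results can later be matched back to your own records.
job_index = {}

def record_job(creation_response):
    data = creation_response["data"]
    job_index[data["reference_id"]] = data["job_id"]

# Example using the sample creation response from this page
record_job({
    "data": {
        "job_id": "3f3e8675-ca69-46d7-aa34-96f90fcbb732",
        "reference_id": "001",
        "tag": "Sample-task",
    },
    "success": True,
})
print(job_index["001"])
```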