Updated: April 6, 2023

Analyzer API Documentation

The Attestiv Analyzer API allows a user to submit a photo and receive an in-line assessment of whether the photo was edited or otherwise tampered with. The Analyzer uses various risk models to gather evidence that an image may have been tampered with. A probabilistic model combines the results and produces a final score that can be used to guide decision making.

Overview

The Attestiv Analyzer provides real-time image forensics in order to catch fraudulent activity at the source. Within seconds of receiving a photo, the API returns the probability that the photo has been deliberately altered. Consumers can choose a threshold appropriate to their risk tolerance and flag incoming photos for review or even reject them outright. Image fraud is a complex problem that requires a multifaceted approach, and the Analyzer is built to take this complexity and turn it into one score to drive decision making. This is done with a hierarchical “model of models”. 

First, the photo goes through various models that identify ways in which an image can be compromised. For instance, the quality model flags poor quality photos; that is, images which are very blurry or noisy. Of course, a blurry photo doesn’t imply anything malicious, but it is one trick that attackers use to cover up their edits. It’s when the blurriness is paired with inconsistent metadata that things start to look suspect. These models are gathering the evidence to build up a case; one which continues to strengthen as Attestiv enhances and expands the models over time.

Next, the results are passed to the tampering model. This model answers the question: what is the probability that this image is fraudulent, given the evidence? There is always some minimal chance of fraud, so in the absence of any information this model returns a very low value. Beyond that, it considers the likelihood that each piece of evidence is direct proof of tampering. Finally, it returns a probability from 0 to 100, where 100 is highly suggestive of fraud and 0 is highly likely to be original. Consumers can use the midpoint for simple decision making, e.g. results above 50 trigger further review. Alternatively, a higher threshold can be used to pick out only the most suspicious photos in the set.
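The threshold-based decision making described above can be sketched as follows. The threshold values here are illustrative policy choices, not part of the API:

```python
# Illustrative thresholds only - each consumer should choose values
# that match their own risk tolerance.
REVIEW_THRESHOLD = 50   # the midpoint rule mentioned above
REJECT_THRESHOLD = 90   # stricter cutoff for the most suspicious photos

def triage(tamper_score: int) -> str:
    """Map a 0-100 tamperScore onto a simple three-way decision."""
    if tamper_score >= REJECT_THRESHOLD:
        return "reject"
    if tamper_score >= REVIEW_THRESHOLD:
        return "review"
    return "accept"
```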

The Models

A list of the Analyzer’s models is shown below. In the API, each model returns a value between 1 and 5 that indicates possible compromise in the given domain. If all the models return 1, there is little evidence of compromise and the overall likelihood of tampering will be low. However, a high value does not always mean the photo was tampered with, as in the quality example above. It’s important that consumers use the final tamper score to make fraud decisions – the supporting values are presented as evidence, but they do not always indicate fraud on their own.

There are cases where users will want to ignore certain kinds of evidence. For instance, an image with no EXIF metadata is suspicious, unless it comes from a system where the metadata is stripped for privacy reasons. Attestiv employs a custom tamper model for these users, whereby the metadata model will still note the lack of EXIF tags, but this will not factor into the final tamper score.

Note that the model scores can support other use cases. For instance, consumers may decide to have a user retake their photo if the quality is too poor (with a score >3).
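The retake check mentioned above could be implemented client-side like this (the helper below is a hypothetical sketch, not part of the API):

```python
def needs_retake(assessments: list[dict], threshold: int = 3) -> bool:
    """Return True when the quality model's compromise score (1-5)
    exceeds the threshold, signalling the photo should be retaken."""
    return any(
        a["model"] == "quality" and a["compromisedScore"] > threshold
        for a in assessments
    )
```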

Triage Tamper Analysis

Attestiv’s real-time suite of tamper detection models is available at /api/v1/forensics/detect_tampering.

It takes a single argument, image, where the desired image is sent as multipart/form-data. The API currently only supports images in jpeg or png format.
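For reference, a multipart/form-data body like the one shown in the raw HTTP sample can be assembled by hand with the Python standard library. This is a hypothetical sketch that only builds the body (it does not send the request):

```python
import uuid

def multipart_body(field: str, filename: str, data: bytes,
                   content_type: str = "image/jpeg") -> tuple[bytes, str]:
    """Assemble a multipart/form-data body by hand.

    Returns (body, boundary); the request's Content-Type header should
    then be 'multipart/form-data; boundary=<boundary>'.
    """
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + data + tail, boundary
```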

Model Name – Purpose
Metadata – Assesses whether the image has traces of edits or other anomalies in the metadata
Provenance – Assesses whether the image has forensic traces from known sources
Photo of Photo (PoP) – Assesses the probability that this image is a photo of a photo
Image Integrity – Assesses whether the image file has any structural inconsistencies
Image Quality – Assesses the overall image quality (remember: the score is the level of potential compromise, so here a score of 5 means the image is extremely poor quality)
Reverse Search – Assesses whether this photo was sourced from the internet
Text Insertion – Assesses whether this document has had text digitally inserted. This model returns a mask highlighting the suspect regions of the photo. (Only returned as part of document triage)

Raw HTTP

POST /api/v1/forensics/detect_tampering HTTP/1.1
Host: attestiv.net
Authorization: Bearer {authToken}
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="image"; filename="my-image.jpg"
Content-Type: image/jpeg

(data)
------WebKitFormBoundary7MA4YWxkTrZu0gW--

cURL

curl --location --request POST 'https://attestiv.net/api/v1/forensics/detect_tampering' --header 'Authorization: Bearer {authToken}' --form 'image=@/path/to/my-image.jpg'

Example Responses

Example 1

In the example below the image has been taken from the internet, and thus the final tamperScore is 100; this is obviously not an original image. The pop model also suspects that this is a photo of a photo. When a model finds something suspicious, it returns a brief message for explainability. Note the “analysisId” – this value is unique to this particular result. Please include it when referring analysis results to Attestiv support.

{
  "assessments": [
    {
      "model": "metadata",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "provenance",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "quality",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "pop",
      "compromisedScore": 5,
      "details": {
        "message": "This image appears to be a photo of a photo"
      }
    },
    {
      "model": "downloads",
      "compromisedScore": 5,
      "details": {
        "message": "This image may have originated on the web",
        "links": [
          "https://example.com/link/to/photo"
        ],
        "totalMatches": 1
      }
    },
    {
      "model": "integrity",
      "compromisedScore": 1,
      "details": {}
    }
  ],
  "image": "pic1.jpg",
  "tamperScore": 100,
  "analysisId": "22ae86b5-fd4d-497f-a8a8-3b49ad75f2ab"
}

Example 2

In this example, the image is not flagged by any model in particular, and thus the final tamperScore is low.

{
  "assessments": [
    {
      "model": "metadata",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "provenance",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "quality",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "pop",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "downloads",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "integrity",
      "compromisedScore": 1,
      "details": {}
    }
  ],
  "image": "img2.jpg",
  "tamperScore": 8,
  "analysisId": "3ae64b43-3d3b-4956-bfb6-f5f0c5ff5958"
}

Photo Inventory 

Raw HTTP

POST /api/v1/forensics/detect_objects HTTP/1.1
Host: attestiv.net
Authorization: Bearer {authToken}
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="image"; filename="my-image.jpg"
Content-Type: image/jpeg

(data)
------WebKitFormBoundary7MA4YWxkTrZu0gW--

cURL

curl --location --request POST 'https://attestiv.net/api/v1/forensics/detect_objects' --header 'Authorization: Bearer {authToken}' --form 'image=@/path/to/my-image.jpg'

Example Object Inventory Responses

In the example below the image is a photo of a tree. The results returned show items detected in the photo along with their confidence.

{
  "objects": [
    {
      "label": "Tree",
      "primary": false,
      "confidence": 98.1
    },
    {
      "label": "Plant",
      "primary": false,
      "confidence": 98.1
    },
    {
      "label": "Outdoors",
      "primary": false,
      "confidence": 92.96
    },
    {
      "label": "Nature",
      "primary": false,
      "confidence": 92.53
    },
    {
      "label": "Fir",
      "primary": false,
      "confidence": 89.26
    },
    {
      "label": "Abies",
      "primary": false,
      "confidence": 89.26
    }
  ]
}

Forensic Deep Scan

This is an async endpoint that is used to initiate a forensic deep scan on the given image. You will receive a task id that can be used to fetch the results, which are typically available within 10 seconds of making the request. The retrieval endpoint returns a status that will change to complete once a result is available. A model argument is required to specify which level of deep scan is to be used – the current options are “high”, “medium”, “low” and “document”.

When fetching the result, an optional “format” argument can be used to retrieve the mask in a specific format. The options are as follows:

  • overlay – Returns a base64 encoded png containing just the mask
  • photo-overlay – Returns a base64 encoded jpeg of the original photo with the mask shown on top
  • overlay-image – Returns a png of the mask as a streaming response
  • photo-overlay-image (default) – Returns a jpeg of the photo and its mask as a streaming response

The two endpoints are available at /api/v1/forensics/deep_scan and /api/v1/forensics/deep_scan_result.
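The submit-then-poll flow can be sketched as below. Here `fetch` stands in for whatever HTTP client posts the taskId to /deep_scan_result and returns the decoded JSON; it is injected so the sketch stays transport-agnostic:

```python
import time

def poll_deep_scan(fetch, task_id: str, interval: float = 1.0,
                   timeout: float = 30.0) -> dict:
    """Poll the result endpoint until the status switches to 'complete'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch(task_id)
        if result.get("status") == "complete":
            return result
        time.sleep(interval)
    raise TimeoutError(f"deep scan {task_id} not ready within {timeout}s")
```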

Raw HTTP

POST /api/v1/forensics/deep_scan HTTP/1.1
Host: attestiv.net
Authorization: Bearer {authToken}
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="image"; filename="my-image.jpg"
Content-Type: image/jpeg

(data)
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="model"

high
------WebKitFormBoundary7MA4YWxkTrZu0gW--

cURL

curl --location --request POST 'https://attestiv.net/api/v1/forensics/deep_scan' --header 'Authorization: Bearer {authToken}' --form 'image=@/path/to/my-image.jpg' --form 'model=high'

Example Responses

The following example demonstrates how to initiate a deep scan and then fetch its result asynchronously.

POST /api/v1/forensics/deep_scan
Formdata arguments:
image
model

Response:
{
   "status": "pending",
   "taskId": "c6dd9773-c899-4939-a2d7-0a3de6497113"
}

POST /api/v1/forensics/deep_scan_result
Formdata arguments:
taskId
format (optional)

Response:
{
   "status": "complete",
   "mask": {base64 encoded image of the deep scan mask}
}

Document Analysis (PDF)

This is an async endpoint that is used to initiate document analysis. You will receive a task id that can be used to fetch the results, which are typically available within 5 seconds of making the request. The retrieval endpoint returns a status that will change to complete once a result is available.

The two endpoints are available at /api/v1/forensics/document_analysis and /api/v1/forensics/fetch_document_result.

Raw HTTP

POST /api/v1/forensics/document_analysis HTTP/1.1
Host: attestiv.net
Authorization: Bearer {authToken}
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="media"; filename="my-document.pdf"
Content-Type: application/pdf

(data)
------WebKitFormBoundary7MA4YWxkTrZu0gW--

cURL

curl --location --request POST 'https://attestiv.net/api/v1/forensics/document_analysis' --header 'Authorization: Bearer {authToken}' --form 'media=@/path/to/my-document.pdf'

Optional Arguments

For many use cases, document analysis benefits from additional context about the document being analyzed. To that end, an optional argument ‘contextData’ can be provided. It should consist of a list of key-value pairs, where the values can be simple atoms or more complex structures like lists or dictionaries.

A set of keys are defined to allow for modified logic in the document analysis process. Currently defined keys are:

  • ‘dont_run’: skip one or more of the analysis steps. The value must be a list containing one or more of ‘producer’, ‘creator’, ‘author’, ‘versions’, or ‘modification_date’.
  • ‘bad_producers’, ‘good_producers’, ‘producer_messages’: these control the analysis of the ‘producer’ metadata field. ‘bad_producers’ and ‘good_producers’ both take lists of strings as values, and the strings may either be a specified producer value, e.g. ‘Microsoft Word’, ‘Word’ (either of which will match the producer string “Microsoft Word”), or a regular expression, e.g. ‘[wW]ord’. Bad producers are PDF software packages that one would not expect to see in the current context (an exclusion list), while good producers are packages that one might normally exclude, but which are acceptable in the current context (an inclusion list). Finally, ‘producer_messages’ is an optional dictionary whose keys are the exact strings from the ‘bad_producers’ parameter and whose values are custom messages for those keys, e.g. {‘[wW]ord’: ‘This PDF was produced by Microsoft Word’}. (This argument is entirely optional; omitting it will still return a generic error message.)
  • ‘bad_creators’, ‘good_creators’, ‘creator_messages’: these operate in exactly the same way as the ‘*_producers’ parameters above, except applied to the creator argument.
  • ‘bad_authors’, ‘author_messages’: document analysis does not flag any authors by default. A list of ‘bad_authors’, in the same format as ‘bad_producers’ and ‘bad_creators’ above, will flag the document if it matches the authorship of the PDF, and ‘author_messages’ follows the same format as above. Why is this parameter useful? A number of PDF software packages will automatically include the username of the creator in the author metadata field. If the document is from a context (e.g. travel receipts) in which the user is not expected to make edits, this check can be used.
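The matching semantics described above can be previewed on the client side with a standard regular-expression search, which mirrors the documented behaviour where ‘Word’ matches “Microsoft Word”. This is only an illustration of the matching rules, not Attestiv’s server logic:

```python
import re

def flags_value(patterns: list[str], value: str) -> bool:
    """True when any entry matches the metadata value. Each entry is
    treated as a regular expression searched within the value, so a
    plain string such as 'Word' matches as a substring."""
    return any(re.search(pattern, value) for pattern in patterns)
```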

cURL with optional arguments

curl --location 'https://attestiv.net/api/v1/forensics/document_analysis' \
--header 'Authorization: Bearer {authToken}' \
--form 'media=@"/path/to/document.pdf"' \
--form 'contextData="(\"bad_producers\", [\".*[wW]ord.*\"]), (\"bad_authors\", [\"John Smith\"])"'

Example Responses

The following example demonstrates how to initiate a document analysis and then fetch its result asynchronously.

POST /api/v1/forensics/document_analysis
Formdata arguments:
media
recordId ← This is an optional parameter
extractImages ← This is an optional parameter
contextData ← This is an optional parameter

Response:
{
   "status": "pending",
   "taskId": "c6dd9773-c899-4939-a2d7-0a3de6497113"
}

POST /api/v1/forensics/fetch_document_result
Formdata arguments:
taskId

Response:
{
    "status": "complete",
    "summary": {
        "total_pages": 1,
        "edited_pages": [],
        "tamperScore": 98
    },
    "page_info": [
        {
            "page_number": "1",
            "edits_detected": false,
            "page_gif": "{base64 encoded gif of the page}"
        }
    ],
    "assessments": [
        {
            "model": "document_analysis",
            "compromisedScore": 1,
            "raw": "",
            "details": ""
        },
        {
            "model": "document_metadata",
            "compromisedScore": 5,
            "raw": [
                "John Smith",
                ".*[Ww]ord.*"
            ],
            "details": "bad_author, bad_producer"
        },
        {
            "model": "document_integrity",
            "compromisedScore": 5,
            "raw": [
                "page_1",
                "page_1",
                "null references"
            ],
            "details": "Page 1 has an element that is hidden by being placed out of bounds, Page 1 has a text element that is hidden by using a small font, Possible deleted objects or empty form fields"
        }
    ],
    "tamperScore": 98
}

# Note that the tamperScore appears in both the "summary" object and the base
# response: these fields are identical. The "summary" instance is for backwards
# compatibility.

ID Verification

This endpoint will compare faces on two different images, such as an image of an ID and an image of a person’s face. The endpoint is available at /api/v1/forensics/detect_faces. It takes two arguments, image1 and image2, where the images are sent as multipart/form-data. The API currently only supports the JPEG format.

Raw HTTP

POST /api/v1/forensics/detect_faces HTTP/1.1
Host: attestiv.net
Authorization: Bearer {authToken}
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="image1"; filename="image1.jpg"
Content-Type: image/jpeg

(data)
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="image2"; filename="image2.jpg"
Content-Type: image/jpeg

(data)
------WebKitFormBoundary7MA4YWxkTrZu0gW--

cURL

curl --location --request POST 'https://attestiv.net/api/v1/forensics/detect_faces' --header 'Authorization: Bearer {authToken}' --form 'image1=@/path/to/image1.jpg' --form 'image2=@/path/to/image2.jpg'

Example ID Verification Responses

In the example below the images are a very close match. The results returned show that the faces detected in the photo match and there is a 99.9% confidence.

				
{
    "status": "Success",
    "facesMatch": "true",
    "confidence": 99.9
}

Combined Analysis

This endpoint can be used to run the previous analysis tools together on the same image.

It is available at /api/v1/forensics/detect. It takes a single argument, image; however, multiple image parts can be supplied in a single request. The desired image or images are sent as multipart/form-data. The API currently only supports the JPEG format.

Photo Triage and Document Triage

The /detect endpoint and its async equivalent /detect_async both support document triage. When a photo is sent, it is automatically classified as being a document (e.g. a photo of a receipt, a scanned image of an invoice, etc) or a regular photo. Depending on the class, a different set of models will be returned in the `detect_tampering_result`. See the “Models” section above for an overview of the tamper models.

Example Responses – Photo Triage

In the example below we wish to execute both tamper detection and object detection without uploading the same file twice. To efficiently execute both calls with a single upload, you can use the /api/v1/forensics/detect endpoint. This endpoint allows more than one image to be uploaded in a single request. The response is an array of objects, where each object contains the results of the tamper detection and object detection calls in the detect_tampering_result and detect_objects_result properties.
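A response in this shape can be flattened into per-image summaries; the helper below is a hypothetical client-side sketch that also surfaces any models listed under "errors":

```python
def summarize_detect(results: list[dict]) -> list[dict]:
    """Produce one summary row per image from a /detect response."""
    rows = []
    for item in results:
        tamper = item.get("detect_tampering_result", {})
        objects = item.get("detect_objects_result", {}).get("objects", [])
        rows.append({
            "image": item.get("image"),
            "tamperScore": tamper.get("tamperScore"),
            # models that failed to run appear in the 'errors' list
            "failed_models": [e["model"] for e in tamper.get("errors", [])],
            "labels": [o["label"] for o in objects],
        })
    return rows
```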

				
[
	{
    	"image": "my-image.jpg",
    	"detect_tampering_result": {
        	"assessments": [
            	{
                	"model": "metadata",
                	"compromisedScore": 1,
                	"details": {}
            	},
            	{
                	"model": "provenance",
                	"compromisedScore": 1,
                	"details": {
                    	"features": {
                        	"digest": "270dc23488b4e99d896297a7ed4ae1398725888ec21f0ba9d94b25b5f408b3cc"
                    	}
                	}
            	},
            	{
                	"model": "downloads",
                	"compromisedScore": 1,
                	"details": {}
            	},
            	{
                	"model": "integrity",
                	"compromisedScore": 1,
                	"details": {}
            	}
        	],
        	"tamperScore": 5,
        	"errors": [
            	{
                	"model": "quality",
                	"details": {
                    	"message": "Deadline Exceeded"
                	}
            	},
            	{
                	"model": "pop",
                	"details": {
                    	"message": "Deadline Exceeded"
                	}
            	}
        	],
        	"image": "my-image.jpg",
        	"analysisId": "587483fa-d087-42cb-814a-b9d3bbca4185"
    	},
    	"detect_objects_result": {
        	"objects": [
            	{
                	"label": "Tree",
                	"primary": false,
                	"confidence": 98.1
            	},
            	{
                	"label": "Plant",
                	"primary": false,
                	"confidence": 98.1
            	},
            	{
                	"label": "Outdoors",
                	"primary": false,
                	"confidence": 92.96
            	},
            	{
                	"label": "Nature",
                	"primary": false,
                	"confidence": 92.53
            	},
            	{
                	"label": "Fir",
                	"primary": false,
                	"confidence": 89.26
            	},
            	{
                	"label": "Abies",
                	"primary": false,
                	"confidence": 89.26
            	}
        	]
    	}
	}
]

cURL

curl --location --request POST 'https://attestiv.net/api/v1/forensics/detect' --header 'Authorization: Bearer {authToken}' --form 'image=@/path/to/my-image.jpg'

Raw HTTP

POST /api/v1/forensics/detect HTTP/1.1
Host: attestiv.net
Authorization: Bearer {authToken}
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="image"; filename="my-image.jpg"
Content-Type: image/jpeg

(data)
------WebKitFormBoundary7MA4YWxkTrZu0gW--

Example Responses – Document Triage

When a document is sent to the endpoint, the detect_tampering_result will look similar to the example below. Note that the text insertion model returns a base64 encoded png. This is the mask that is meant to be displayed over the photo to highlight suspect areas. For API users, it may be more convenient to receive the mask already overlaid on the photo. For this reason, the text insertion model is also available in the /deep_scan endpoint, using the “document” argument. That endpoint provides multiple format options for easily viewing the results.
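The base64 encoded mask can be decoded and written to disk like so (a minimal sketch; the helper name is ours, not the API's):

```python
import base64

def save_text_insertion_mask(detect_tampering_result: dict, path: str) -> bool:
    """Write the text insertion mask png to `path`; returns False when
    no mask is present in the result."""
    for assessment in detect_tampering_result.get("assessments", []):
        if assessment.get("model") == "text_insertion":
            mask_b64 = assessment.get("details", {}).get("mask")
            if mask_b64:
                with open(path, "wb") as fh:
                    fh.write(base64.b64decode(mask_b64))
                return True
    return False
```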

				
[
	{
    	"image": "receipt.jpg",
    	"detect_tampering_result": {
        	"assessments": [
            	{
                	"model": "metadata",
                	"compromisedScore": 1,
                	"details": {}
            	},
            	{
                	"model": "provenance",
                	"compromisedScore": 1,
                	"details": {}
            	},
            	{
                	"model": "integrity",
                	"compromisedScore": 1,
                	"details": {}
            	},
            	{
                	"model": "text_insertion",
                	"compromisedScore": 0,
                	"details": {
                    	"mask": "{base64Encoded png}"
                	}
            	}
        	],
        	"tamperScore": 5,
        	"analysisId": "01efd6f35224834b78a8eff98c60ab96",
        	"type": "document"
    	},
    	"detect_objects_result": {
        	"objects": [
            	{
                	"label": "Text",
                	"primary": false,
                	"confidence": 99.98
            	},
            	{
                	"label": "Receipt",
                	"primary": false,
                	"confidence": 99.8
            	},
            	{
                	"label": "Document",
                	"primary": false,
                	"confidence": 99.8
            	},
            	{
                	"label": "Business Card",
                	"boundingBox": {
                    	"Width": 0.5526850819587708,
                    	"Height": 0.9459810853004456,
                    	"Left": 0.18490348756313324,
                    	"Top": 0.021326353773474693
                	},
                	"primary": true,
                	"confidence": 80.95
            	},
            	{
                	"label": "Paper",
                	"primary": false,
                	"confidence": 80.95
            	},
            	{
                	"label": "Invoice",
                	"primary": false,
                	"confidence": 79.36
            	}
        	]
    	}
	}
]

Combined Analysis (Async)

The models above can also be run together in an async endpoint. This allows the user to poll for the result instead of waiting on a long lived connection.

It is available at /api/v1/forensics/detect_async. It requires a single argument `media`, which is an image sent as multipart/form-data. This endpoint returns a taskId, which can be used to poll for the result using /detect_async_result. The results endpoint accepts a list of taskIds, and will return either a pending status or the full result for each taskId that is sent.

Raw HTTP

POST /api/v1/forensics/detect_async HTTP/1.1
Host: attestiv.net
Authorization: Bearer {authToken}
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="media"; filename="my-image.jpg"
Content-Type: image/jpeg

(data)
------WebKitFormBoundary7MA4YWxkTrZu0gW--

cURL

curl --location --request POST 'https://attestiv.net/api/v1/forensics/detect_async' --header 'Authorization: Bearer {authToken}' --form 'media=@/path/to/my-image.jpg'

Example Flow – Polling a taskId

In the example below we have issued two requests to /detect_async and received two taskIds for the results. We can poll for both results using the /detect_async_result endpoint. Initially, both tasks remain pending. When a result is ready, the status will switch to complete. By design, the results of the forensic APIs are deleted once they have been consumed. If a taskId is sent after its results were received, the status will show `pending` indefinitely.
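Because results are deleted once consumed, a polling client should keep each result the first time it arrives and drop completed taskIds from subsequent polls. A hedged sketch, with `fetch_batch` standing in for the HTTP call to /detect_async_result:

```python
def collect_results(fetch_batch, task_ids: list[str], max_rounds: int = 10) -> dict:
    """Poll for several taskIds, storing each result on first arrival
    and never re-requesting a consumed taskId."""
    pending = set(task_ids)
    done: dict = {}
    for _ in range(max_rounds):
        if not pending:
            break
        batch = fetch_batch(sorted(pending))  # returns the JSON map of id -> result
        for task_id, result in batch.items():
            if result.get("status") == "complete":
                done[task_id] = result
                pending.discard(task_id)
    return done
```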

				
# First forensic call
POST /api/v1/forensics/detect_async
Formdata arguments:
media

Response:
[
  {
   "status": "pending",
   "taskId": "c6dd9773-c899-4939-a2d7-0a3de6497113"
  }
]

# Second forensic call
POST /api/v1/forensics/detect_async
Formdata arguments:
media

Response:
[
  {
   "status": "pending",
   "taskId": "c3ed1a8e-7c38-47a2-bbe7-1256c0a7693f"
  }
]

# Poll for the results
POST /api/v1/forensics/detect_async_result
Formdata arguments:
taskId:"c6dd9773-c899-4939-a2d7-0a3de6497113"
taskId:"c3ed1a8e-7c38-47a2-bbe7-1256c0a7693f"

Response:
{
	"c6dd9773-c899-4939-a2d7-0a3de6497113": {
    	"status": "pending",
    	"task_id": "c6dd9773-c899-4939-a2d7-0a3de6497113"
	},
	"c3ed1a8e-7c38-47a2-bbe7-1256c0a7693f": {
    	"detect_objects_result": {
        	"objects": [
        	...
        	...
        	...
        	]
    	},
    	"detect_tampering_result": {
        	"assessments": [
        	...
        	...
        	...
        	],
        	"tamperScore": 18,
        	"image": "receipt2.jpg",
        	"analysisId": "a2135e6e3634801a85584c80e45773a7",
        	"type": "photo"
    	},
    	"status": "complete"
	}
}

Adding Detection Results to Assets

If you are using the forensic APIs with the records and assets APIs in the Attestiv platform, you will likely want to add the forensic results to the body of the /api/v1/assets POST call. Doing this will allow you to see the tamper scores and object detection results in the Attestiv Dashboard. It will also include this information in any reports generated by the /api/v1/reporting API.

To add the data returned to an asset, within the media property in the request body, add a property called “tamperedResult” and set the value to the result returned from the detect_tampering call. If using /api/v1/forensics/detect, set “tamperedResult” to the value of the “detect_tampering_result” property in the response. To add the results from an object detection call, add a property called “objectsDetectedResult” to the media property. See the example below.
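Attaching the results amounts to copying the two response properties onto the media entry; a minimal sketch (the helper is ours, for illustration):

```python
def attach_forensics(media_item: dict, detect_item: dict) -> dict:
    """Copy /detect results onto an asset's media entry so the scores
    show up in the Attestiv Dashboard and in generated reports."""
    media_item["tamperedResult"] = detect_item["detect_tampering_result"]
    media_item["objectsDetectedResult"] = detect_item["detect_objects_result"]
    return media_item
```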

				
{
   "user":"user@attestiv.com",
   "timestamp":1611083781969,
   "recordId": "87ea8933-a3d2-4eaf-a89e-c786040f30d1",
   "media":[
  	{
     	"mimeType":"image/jpeg",
     	"fingerprint":"ab386530cbe34958bcba8f10b992aef808ace3b445a210a084924125f9069ffc",
     	"location":"41.5115648, -72.1565312",
     	"size":"3024x4032",
     	"tags":["import"],
     	"timestamp":1611083781969,
     	"tamperedResult": {
  "assessments": [
    {
      "model": "metadata",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "provenance",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "quality",
      "compromisedScore": 1,
      "details": {}
    },
    {
      "model": "pop",
      "compromisedScore": 5,
      "details": {
        "message": "This image appears to be a photo of a photo"
      }
    },
    {
      "model": "downloads",
      "compromisedScore": 5,
      "details": {
        "message": "This image may have originated on the web",
        "links": [
          "https://example.com/link/to/photo"
        ],
        "totalMatches": 1
      }
    },
    {
      "model": "integrity",
      "compromisedScore": 1,
      "details": {}
    }
  ],
  "image": "pic1.jpg",
  "tamperScore": 100,
  "analysisId": "22ae86b5-fd4d-497f-a8a8-3b49ad75f2ab"
  	},
     	"objectsDetectedResult": {
     	      "objects": [
            	{
                	"label": "Tree",
                	"primary": false,
                	"confidence": 98.1
            	},
            	{
                	"label": "Plant",
                	"primary": false,
                	"confidence": 98.1
            	},
            	{
                	"label": "Outdoors",
                	"primary": false,
                	"confidence": 92.96
            	},
            	{
                	"label": "Nature",
                	"primary": false,
                	"confidence": 92.53
            	},
            	{
                	"label": "Fir",
                	"primary": false,
                	"confidence": 89.26
            	},
            	{
                	"label": "Abies",
                	"primary": false,
                	"confidence": 89.26
            	}
        	]
  	}
   ]
}

Error Cases

The following errors may occur:

Response – Recommendation
401 UNAUTHORIZED – Please ensure that you’re passing the correct access token via bearer authorization
413 PAYLOAD TOO LARGE – Please use a smaller image
429 TOO MANY REQUESTS – The rate of requests has exceeded the quota; please slow down and retry
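One way to handle these responses client-side is to fail fast on 401 and 413 (retrying cannot fix them) and back off on 429. A sketch, with `send` standing in for the actual HTTP call:

```python
import time

def post_with_retry(send, max_attempts: int = 3, backoff: float = 2.0):
    """Call `send()` -> (status_code, body), retrying only on 429."""
    for attempt in range(max_attempts):
        status, body = send()
        if status == 429:
            time.sleep(backoff * 2 ** attempt)  # exponential backoff
            continue
        if status == 401:
            raise PermissionError("check the bearer access token")
        if status == 413:
            raise ValueError("payload too large; use a smaller image")
        return status, body
    raise RuntimeError("rate limited on every attempt")
```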

Resources

Please contact support@attestiv.com with any issues.