Automation with SpatialScale Rest API¶
The SpatialScale REST API allows you to discover, inventory, store, and analyze location based data at cloud scale via your applications and workflows. This tutorial will teach you the basics of the API.
Authentication¶
All SpatialScale endpoints require authentication in the form of an access token provided in the Authorization header of the request. SpatialScale uses the OAuth 2.0 `Bearer <access_token>` format.
Let’s start by generating a token that we can use in this tutorial:
- Navigate to the Access Tokens page of your SpatialScale account.
- Click the + Create button.
- In the name field, enter “API Example”.
- Make sure assets:read, assets:write, and assets:list are all enabled.
- Click Create.
- Copy the token text to the clipboard.
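To avoid pasting the token into every command, you can keep it in an environment variable for the rest of the tutorial. The variable name SPATIALSCALE_TOKEN is just a convention chosen for this sketch, not something the API requires:

```shell
# Replace <access_token> with the token you just copied.
export SPATIALSCALE_TOKEN="<access_token>"

# Later commands can then reference it, e.g. in a header:
#   -H "Authorization: Bearer $SPATIALSCALE_TOKEN"
```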
Now that you have a token, you can test it with the cURL command below, which retrieves the list of datasets in your account. Be sure to replace `<access_token>` with the token you just copied to your clipboard.
```shell
curl "https://api.spatialscale.com/v1/datasets" -H "Authorization: Bearer <access_token>"
```
If you see JSON output like `{"data": [], "count": 0}`, everything worked. This is the list of datasets in your SpatialScale account. If instead you see output like `{"detail": "Could not validate credentials"}`, double-check that the token on your Tokens page and the token on the command line match.
Scopes¶
Scopes are used to restrict what actions an access token can be used for. SpatialScale has the following scopes:
Scope | Description
---|---
assets:read | Read metadata and access the tiled data for individual assets in the account.
assets:list | List the metadata for all assets available to the account.
assets:write | Create, modify, and delete assets in the account.
Tokens with the assets:write scope should be kept secret, as they allow someone to modify the assets in your account.
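One way to keep a write-capable token out of your source code is to read it from the environment at startup. This is a minimal sketch; the variable name SPATIALSCALE_TOKEN is an assumption, not an API requirement:

```python
import os


def load_token(var: str = "SPATIALSCALE_TOKEN") -> str:
    """Read the access token from an environment variable instead of source code."""
    token = os.environ.get(var, "")
    if not token:
        raise RuntimeError(f"Set the {var} environment variable before running.")
    return token
```

The scripts later in this tutorial could then set `ACCESS_TOKEN = load_token()` rather than embedding the token literally.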
Python requests¶
Now that you know the basics of making requests, let's start accessing some resources with Python. To simplify things we'll use the requests library for making API calls.
The cURL request we made above would be written in Python as:
```python
from typing import Any

import requests

ACCESS_TOKEN = "<access_token>"  # Replace this with your token.


def get_datasets() -> dict[str, Any]:
    """
    Retrieve the list of datasets available with the supplied access token.
    """
    return requests.get(
        url="https://api.spatialscale.com/v1/datasets",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    ).json()


if __name__ == "__main__":
    results = get_datasets()
    print(results)
```
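The response follows the `{"data": [...], "count": N}` shape shown earlier, so a small helper can turn it into a readable summary. Note that a `name` field on each dataset entry is an assumption here, not something the earlier empty-account output confirms:

```python
from typing import Any


def summarize_datasets(response: dict[str, Any]) -> str:
    """Summarize a /v1/datasets response in one line."""
    names = [d.get("name", "<unnamed>") for d in response.get("data", [])]
    return f"{response.get('count', 0)} dataset(s): {', '.join(names) or 'none'}"


# For an empty account this yields "0 dataset(s): none".
```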
Find some imagery¶
Let's go ahead and obtain imagery that covers UT Austin. For this exercise, we're going to accomplish the following:
- Find NAIP photography cells that cover the university boundaries.
- Execute a recipe/job that extracts the cells from a remote catalog and publishes a new dataset.
```python
import json
from typing import Any

import requests

ACCESS_TOKEN = "<access_token>"  # Replace this with your token.


def find_data(payload: dict[str, Any]) -> dict[str, Any]:
    """
    Search for data by AOI and return a configuration for downstream extraction.
    """
    return requests.post(
        url="https://api.spatialscale.com/v1/discover/search",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=payload,
    ).json()


def extract_data(payload: dict[str, Any]) -> dict[str, Any]:
    """
    Extract data from remote catalogs and store it as a new dataset.
    """
    return requests.post(
        url="https://api.spatialscale.com/v1/discover/extract",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=payload,
    ).json()


if __name__ == "__main__":
    # Define an AOI as a GeoJSON Feature.
    aoi = {
        "type": "Feature",
        "properties": {},
        "geometry": {
            "type": "Polygon",
            "coordinates": [
                [
                    [-97.74398803710938, 30.273226302036015],
                    [-97.71785259246826, 30.273226302036015],
                    [-97.71785259246826, 30.295758037778036],
                    [-97.74398803710938, 30.295758037778036],
                    [-97.74398803710938, 30.273226302036015],
                ]
            ],
        },
    }

    # Construct a search payload.
    search_payload = {
        "collection_type": "raster",
        "collection_id": "naip",
        "collection_name": "NAIP: National Agriculture Imagery Program",
        "collection_product_type": "natural_color",
        "aoi": aoi,
    }

    # Obtain a configuration payload for the AOI.
    config = find_data(search_payload)

    # Override the dataset name.
    config["dataset_name"] = "UT Austin - NAIP"

    # Clip cells to the AOI's bounding extent.
    config["crop_to_aoi"] = True

    # Start the data extraction job.
    results = extract_data(config)
    print(json.dumps(results, indent=2))
```
If you return to your Datasets page, you'll see that a new dataset has been created.
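You can also confirm this programmatically by listing datasets with the GET /v1/datasets endpoint from earlier and checking for the new name. As before, the `name` field on each entry is an assumption in this sketch:

```python
from typing import Any


def dataset_exists(response: dict[str, Any], name: str) -> bool:
    """Return True if a dataset with the given name appears in a /v1/datasets response."""
    return any(d.get("name") == name for d in response.get("data", []))


# e.g. dataset_exists(get_datasets(), "UT Austin - NAIP") once the job completes.
```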