Record APIs

TrailBase’s Record APIs let you easily configure endpoints exposing your TABLEs and VIEWs through type-safe, RESTful Create/Read/Update/Delete operations, composite list queries, and even realtime change subscriptions.

For your data to be exposed via Record APIs:

  • TABLEs and VIEWs need to be STRICTly1 typed to guarantee type-safety all the way from your DB records, through JSON schemas, to your client-side language bindings2.
  • To allow for stable sorting and thus efficient cursor-based pagination, TABLEs are required to have an INTEGER, UUIDv4, or UUIDv7 primary key column.

Record APIs can be configured via the admin dashboard or directly in TrailBase’s configuration file.

An example API setup for managing user profiles:

record_apis: [
  {
    name: "profiles"
    table_name: "profiles"
    conflict_resolution: REPLACE
    acl_authenticated: [READ, CREATE]
    create_access_rule: "_REQ_.user = _USER_.id"
  },
]

A quick explanation:

  • The name needs to be unique. It is used to access the API via http://<host>/api/records/v1/<name>.
  • table_name references the TABLE or VIEW that is being exposed.
  • conflict_resolution declares what should happen if a newly created record is conflicting with an existing one.
  • autofill_missing_user_id_column lets you omit fields for columns with a foreign key relationship to _user(id). The field will then be filled with the id of the authenticated user. In most cases this should be off; it is only useful if you cannot explicitly provide the user id yourself, e.g. in a static HTML form.
  • acl_world and acl_authenticated define coarse access: in the example above, only authenticated users can read and create profiles. The access_rules further narrow mutations to records where the user column (or the corresponding request field for insertions) matches the authenticated user. In other words, user X cannot create or modify user Y’s profile.

Note that any TABLE/VIEW can be exposed multiple times as different APIs with different properties.

Access can be controlled through a combination of a simple ACL-based system (a matrix of who can perform which operation) and custom SQL access rules of the form f(req, user, row) -> bool. Generally, the ACLs are checked first and then the access rules are evaluated when present.

For example, to validate that the requester provided a secret key and is a member of the group 'mygroup':

(_REQ_.secret = 'foo' AND EXISTS(
  SELECT 1 FROM groups
  WHERE
    groups.member = _USER_.id
    AND groups.name = 'mygroup'
))
  • _REQ_ is an injected sub-query containing a user request’s fields. It is available in the access rules for CREATE and UPDATE operations. Fields that were either absent in the request or explicitly passed as null will be NULL.
  • To check for the presence of a specific field in a CREATE or UPDATE request, you can use 'field' IN _REQ_FIELDS_. Note that _REQ_FIELDS_ may only contain fields with a corresponding column in the underlying TABLE or VIEW.
  • Similarly, _ROW_ is a sub-query of the target record. It is available in the access rules for READ, UPDATE, and DELETE operations.
  • Lastly, _USER_.id references the id of the currently authenticated user and NULL otherwise.

Independently, you can use VIEWs to filter which rows and columns of your TABLEs should be accessible.

As hinted at by the example above, the SQL access rules can be used to build higher-level access protection such as group ACLs or capabilities. What makes the most sense in your case is very application dependent. The <repo>/examples/blog has an “editor” group to control who can write blog posts.

Somewhat on a tangent, group and capability tables can themselves be exposed via Record APIs. This can be used to programmatically manage permissions, e.g. for building a moderation dashboard. When exposing authorization primitives, make sure the permissions are appropriately tight to avoid permission escalations.

VIEWs can support a variety of use-cases, e.g.: read-only APIs on a subset of columns, APIs exposing complex joins across many tables, computed values across many columns, etc. Note that computed columns require a top-level CAST expression for TrailBase to determine the resulting type and satisfy the type-safety requirements for record APIs, e.g.:

CREATE VIEW profile_view AS
SELECT
  p.*,
  -- The CAST is needed to determine the type of the result.
  CAST(IIF(editors.user IS NULL, FALSE, TRUE) AS BOOLEAN) AS editor
FROM profiles AS p
LEFT JOIN editors ON p.user = editors.user;

Columns with names starting with an underscore can be written on insert or update but are hidden during read/list operations. This is meant as a convenient convention for internal data fields, e.g. hiding the record owner in an otherwise public data set or hiding a user’s internal credit rating from their profile. A similar effect could otherwise be achieved by only allowing create/update operations on the table and exposing a subset of columns through a readable VIEW. However, this would lead to different API endpoints for read vs create/update.
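The hiding rule can be pictured as a filter over a record’s fields. TrailBase applies this server-side; the sketch below merely illustrates the convention, and the field names in the usage example are made up:

```typescript
// Illustrative only: TrailBase hides underscore-prefixed columns
// server-side on read/list; this filter merely pictures the rule.
export function visibleFields(
  record: Record<string, unknown>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(record).filter(([name]) => !name.startsWith("_")),
  );
}
```

For example, visibleFields({ _owner: "user-x", body: "hi" }) keeps only body.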

After setting up your API, TrailBase will expose the following main endpoints3:

  • Create: POST /api/records/v1/<api_name>
  • Read: GET /api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>
  • Update: PATCH /api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>
  • Delete: DELETE /api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>
  • List: GET /api/records/v1/<api_name>?<search_params>
  • Change Subscriptions:
    GET /api/records/v1/<api_name>/subscribe/[*|<url-safe_b64_uuid_or_int>]
  • Schema: GET /api/records/v1/<api_name>/schema

All of the endpoints accept requests that are JSON encoded, url-encoded, or multipart/form-data encoded, which makes them accessible via rich client-side applications, progressive web apps, and static HTML forms alike.
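As a sketch of what that flexibility means in practice, the same create payload can be encoded three ways. The user/bio field names below are illustrative assumptions, not part of any documented schema:

```typescript
// Three equivalent encodings of a create payload for
// POST /api/records/v1/<api_name>. The field names are made up.
const record = { user: "b64-user-id", bio: "hello" };

// 1. JSON body, sent with Content-Type: application/json.
export const jsonBody: string = JSON.stringify(record);

// 2. URL-encoded body, as a static HTML form would submit it.
export const urlEncodedBody: string = new URLSearchParams(record).toString();

// 3. multipart/form-data body, e.g. for forms that also upload files.
export const multipartBody: FormData = (() => {
  const form = new FormData();
  for (const [name, value] of Object.entries(record)) {
    form.append(name, value);
  }
  return form;
})();
```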

The create endpoint lets you insert new records and potentially override existing ones depending on the configured conflict resolution strategy.

import type { Client } from "trailbase";

export const create = async (client: Client): Promise<string | number> =>
  await client.records("simple_strict_table").create({ text_not_null: "test" });

The read endpoint lets you read specific records given their id.

import type { Client } from "trailbase";

export const read = async (client: Client, id: string | number) =>
  await client.records("simple_strict_table").read(id);

The update endpoint lets you modify, i.e. partially update, existing records given their id.

import type { Client } from "trailbase";

export const update = async (
  client: Client,
  id: string | number,
  record: object,
) => await client.records("simple_strict_table").update(id, record);

The delete endpoint lets you remove a record given its id.

import type { Client } from "trailbase";

export const remove = async (client: Client, id: string | number) =>
  await client.records("simple_strict_table").delete(id);

Using the GET /api/records/v1/<api_name>?<params> endpoint and given sufficient permissions, one can query records based on the configured read_access_rule and query parameters.

Parameters:

  • Pagination can be controlled via the following query parameters:
    • limit=N, with a built-in default of 50 and a hard limit of 1024 to avoid abuse.
    • cursor=<primary key> to offset into results using a cursor. Significantly less expensive than OFFSET-based pagination.
    • offset=N to offset into results.
    • count=true will yield a total_count of records in the result. This can be used together with limit and cursor to build pagination UIs.
  • Ordering can be controlled using the order=[[+-]?<column_name>]+ parameter, e.g. order=created,-rank, which sorts records in ascending order by their created column first (no prefix is the same as ”+”) and then in descending order by their rank column due to the minus prefix.
  • Filtering can be controlled by passing one or more filter[<column_name>][op]=<value> parameters. For example, filter[revenue][$gt]=0 would list records with a positive revenue only.
  • Supported operators are:
    • $eq: equal, which is also the default if no explicit operator is specified, i.e. ?success[$eq]=TRUE and ?success=TRUE are identical.
    • $ne: not equal
    • $gte: greater-than-equal
    • $gt: greater-than
    • $lte: less-than-equal
    • $lt: less-than
    • $is: is null or not null, i.e. ?col[$is]=NULL or ?col[$is]=!NULL, respectively
    • $like: SQL LIKE operator, e.g. ?col[$like]=%something%
    • $re: SQL REGEXP operator, e.g. ?col[$re]=^something$
  • Parent records, i.e. records pointed to by foreign key columns, can be expanded using the ?expand=<col0>,<col1> parameter, if the respective columns were allow-listed in the API configuration.
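Putting these parameters together, a list URL can be assembled by hand. The listUrl helper below is not part of any client library, just a sketch of the documented wire format (it does not cover the operator-less ?col=value shorthand for $eq):

```typescript
// Sketch of the documented list-endpoint search parameters.
export interface ListOptions {
  limit?: number;
  cursor?: string;
  order?: string[]; // e.g. ["created", "-rank"]
  filters?: Record<string, Record<string, string>>; // column -> op -> value
}

export function listUrl(apiName: string, opts: ListOptions): string {
  const params = new URLSearchParams();
  if (opts.limit !== undefined) params.set("limit", String(opts.limit));
  if (opts.cursor !== undefined) params.set("cursor", opts.cursor);
  if (opts.order?.length) params.set("order", opts.order.join(","));
  for (const [column, ops] of Object.entries(opts.filters ?? {})) {
    for (const [op, value] of Object.entries(ops)) {
      params.set(`filter[${column}][${op}]`, value);
    }
  }
  return `/api/records/v1/${apiName}?${params}`;
}
```

For instance, listUrl("movies", { limit: 3, order: ["rank"], filters: { watch_time: { $lt: "120" } } }) yields the percent-encoded equivalent of /api/records/v1/movies?limit=3&order=rank&filter[watch_time][$lt]=120.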

For example, to query the top-3 ranked movies with a watch time below 2 hours and “love” in their description:

import type { Client, ListResponse } from "trailbase";

export const list = async (client: Client): Promise<ListResponse<object>> =>
  await client.records("movies").list({
    pagination: {
      limit: 3,
    },
    order: ["rank"],
    filters: [
      {
        column: "watch_time",
        op: "lessThan",
        value: "120",
      },
      {
        column: "description",
        op: "like",
        value: "%love%",
      },
    ],
  });
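Combining limit and cursor, a full pagination loop might look as follows. The Page shape and the injected fetchPage function are assumptions, not the trailbase client API; with the real client, fetchPage could wrap client.records(name).list() and forward its cursor:

```typescript
// Assumed page shape; the injected fetchPage keeps the loop independent
// of any concrete client.
export interface Page<T> {
  records: T[];
  cursor?: string;
}

export async function listAll<T>(
  fetchPage: (cursor: string | undefined, limit: number) => Promise<Page<T>>,
  limit = 50,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined = undefined;
  for (;;) {
    const page = await fetchPage(cursor, limit);
    all.push(...page.records);
    // A short page or a missing cursor signals the end of the results.
    if (page.records.length < limit || page.cursor === undefined) break;
    cursor = page.cursor;
  }
  return all;
}
```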

The streaming subscribe endpoint lets you listen for changes to the table backing an API or to specific records given their id. Change events can be insertions, updates, and deletions.

import type { Client } from "trailbase";

export const subscribe = async (client: Client, id: string | number) =>
  await client.records("simple_strict_table").subscribe(id);

export const subscribeAll = async (client: Client) =>
  await client.records("simple_strict_table").subscribe("*");

The schema endpoint allows for reading the API’s JSON schema definition. This can be useful for driving external code generation or introspection in general.

Record APIs can also support file uploads and downloads. There’s some special handling in place so that only metadata is stored in the underlying table while the actual files are kept in an object store.

By adding a TEXT column with a CHECK(jsonschema('std.FileUpload')) constraint to your table definition, you instruct TrailBase to store file metadata as defined by the “std.FileUpload” JSON schema while keeping the contents in a separate object store. Files can then be uploaded by sending their contents as part of your JSON requests or multipart/form-data POST requests. Downloading files is slightly different, since reading the column through record APIs will only yield the metadata. There’s a dedicated GET API endpoint for file downloads:

/api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>/file/<column_name>

By default, TrailBase will keep the object store on the local file system under <data-dir>/uploads. Alternatively, one can configure an S3 bucket via the configuration file; this is not yet accessible through the admin dashboard. If you need support for other storage backends, let us know.
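Given the endpoint above, the download URL for a record’s file column can be assembled as follows. The profiles API name and avatar column in the usage example are illustrative assumptions:

```typescript
// Sketch: build the documented file-download URL for a record's file column.
export function fileUrl(
  apiName: string,
  recordId: string | number,
  column: string,
): string {
  return `/api/records/v1/${apiName}/${recordId}/file/${column}`;
}
```

For example, fileUrl("profiles", "AQ", "avatar") produces /api/records/v1/profiles/AQ/file/avatar.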

Akin to std.FileUpload above, you can register your own nested JSON schemas for use in column CHECKs. For now, the dashboard only allows viewing all registered JSON schemas; however, you can register schemas using the configuration:

schemas: [
  {
    name: "simple_schema"
    schema:
      '{'
      '  "type": "object",'
      '  "properties": {'
      '    "name": { "type": "string" },'
      '    "obj": { "type": "object" }'
      '  },'
      '  "required": ["name"]'
      '}'
  }
]

Once registered, schemas can be added as column constraints:

CREATE TABLE test (
  simple TEXT CHECK(jsonschema('simple_schema')),
  -- ...
) STRICT;

When generating new client-side bindings for a table or view with such nested schemas, the schemas will be included, ensuring type-safety all the way to the client-side APIs.
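As a sketch of what such a generated type might look like for the simple_schema registered above, here is a hand-written TypeScript interface plus a matching runtime type guard. The names are assumptions; actual bindings come from the code generator:

```typescript
// Hand-written stand-in for generated bindings for "simple_schema":
// "name" is a required string, "obj" an optional object.
export interface SimpleSchema {
  name: string;
  obj?: Record<string, unknown>;
}

export function isSimpleSchema(value: unknown): value is SimpleSchema {
  if (typeof value !== "object" || value === null) return false;
  const record = value as Record<string, unknown>;
  if (typeof record.name !== "string") return false;
  if (record.obj !== undefined) {
    if (typeof record.obj !== "object" || record.obj === null) return false;
  }
  return true;
}
```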


  1. By default, SQLite is not strictly typed. Column types merely express affinities unless the table is explicitly created as STRICT.

  2. Views are trickier to type strictly, being the result of an arbitrary SELECT statement. For columns that are mapped 1:1 from a STRICTly typed TABLE, the type can be inferred. For everything else you can use CAST(col AS <TYPE>) expressions to assert the types of computed columns.

  3. There’s also a few other endpoints, e.g. for downloading files, which will be described further down in the docs.