In the data extraction phase, the extractor is expected to call the external system’s APIs to retrieve all the items that should be synced with DevRev.

If the current run is an initial sync, all items should be extracted. Otherwise, the extractor should retrieve only the items that changed since the start of the last extraction.

Each snap-in invocation runs in a separate runtime instance with a maximum execution time of 13 minutes. After 10 minutes, the Airdrop platform sends a message to the snap-in to gracefully exit.

If a large amount of data needs to be extracted, it might not all be extracted within this time frame. To handle such situations, the snap-in uses a state object. This state object is shared across all invocations and keeps track of where the previous snap-in invocations ended in the extraction process.

Triggering event

Airdrop initiates data extraction by starting the snap-in with a message with event type EXTRACTION_DATA_START when transitioning to the data extraction phase.

During the data extraction phase, the snap-in extracts data from an external system, prepares batches of data and uploads them in the form of artifacts (files) to DevRev.

When the maximum Airdrop snap-in runtime (13 minutes) has been reached, the snap-in must respond to Airdrop with a message with event type EXTRACTION_DATA_PROGRESS, optionally including a progress estimate.
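For example, the progress estimate can be passed as a percentage in the emit payload. This is a minimal sketch, assuming the payload accepts a numeric progress field:

await adapter.emit(ExtractorEventType.ExtractionDataProgress, {
  progress: 50, // rough estimate of how much of the data has been extracted, in percent
});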

If the extraction has been rate-limited by the external system and back-off is required, the snap-in must respond to Airdrop with a message with event type EXTRACTION_DATA_DELAY, specifying the back-off time with the delay attribute (in seconds, as an integer).

In both cases, Airdrop restarts the snap-in with a message with event type EXTRACTION_DATA_CONTINUE. After EXTRACTION_DATA_PROGRESS the restart is immediate, while after EXTRACTION_DATA_DELAY it is postponed for the given number of seconds.

Once the data extraction is done, the snap-in must respond to Airdrop with a message with event type EXTRACTION_DATA_DONE.

If data extraction fails at any point, the snap-in must respond to Airdrop with a message with event type EXTRACTION_DATA_ERROR.

Implementation

Data extraction should be implemented in the data-extraction.ts file.

The snap-in must respond to Airdrop with a message that signals success, a delay, progress, or an error.

Success
await adapter.emit(ExtractorEventType.ExtractionDataDone);
Delay
await adapter.emit(ExtractorEventType.ExtractionDataDelay, {
  delay: 30,
});
Progress
await adapter.emit(ExtractorEventType.ExtractionDataProgress);
Error
await adapter.emit(ExtractorEventType.ExtractionDataError, {
  error: {
    message: "Failed to extract data.",
  },
});
The snap-in must always emit exactly one of these messages per invocation.
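One way to keep that guarantee is to route every exit path through a single emit call, for example by wrapping the extraction work in a try/catch. This is only a sketch of the pattern, not something prescribed by the SDK:

task: async ({ adapter }) => {
  try {
    // ... extract data and push it to the repositories ...
    await adapter.emit(ExtractorEventType.ExtractionDataDone);
  } catch (error) {
    // Any failure ends the invocation with a single error message.
    await adapter.emit(ExtractorEventType.ExtractionDataError, {
      error: { message: 'Failed to extract data: ' + String(error) },
    });
  }
},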

Extracting and storing the data

The SDK library includes a repository system for handling extracted items. Each item type, such as users, tasks, or issues, has its own repository. Repositories are defined in the repos array, where each entry specifies an itemType. The itemType name should match the record_type specified in the provided metadata.

const repos = [
  {
    itemType: 'todos',
  },
  {
    itemType: 'users',
  },
  {
    itemType: 'attachments',
  },
];

The initializeRepos function initializes the repositories and should be the first step when the process begins.

processTask<ExtractorState>({
  task: async ({ adapter }) => {
    adapter.initializeRepos(repos);
    // ...
  },
  onTimeout: async ({ adapter }) => {
    // ...
  },
});

After the repositories have been initialized with initializeRepos, items should be retrieved from the external system and stored in the correct repository by calling the push function.

await adapter.getRepo('users')?.push(items);

Behind the scenes, the SDK library stores items pushed to the repository and uploads them in batches to the Airdrop platform.
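Putting these pieces together, a typical extraction task loops over pages from the external system, pushes each page to the matching repository, and records its position in the state so that a later invocation can resume. The sketch below assumes the usual imports from the @devrev/ts-adaas starter template; fetchTodosPage and the nextPage state field are illustrative assumptions, not part of the SDK:

import { ExtractorEventType, processTask } from '@devrev/ts-adaas';

processTask<ExtractorState>({
  task: async ({ adapter }) => {
    adapter.initializeRepos(repos);

    // Resume from the page stored by a previous invocation, if any.
    let page = adapter.state.todos.nextPage ?? 1;
    while (!adapter.state.todos.completed) {
      const { items, hasMore } = await fetchTodosPage(page); // hypothetical API client call
      await adapter.getRepo('todos')?.push(items);

      if (hasMore) {
        page += 1;
        adapter.state.todos.nextPage = page; // remember where to continue
      } else {
        adapter.state.todos.completed = true;
      }
    }

    await adapter.emit(ExtractorEventType.ExtractionDataDone);
  },
  onTimeout: async ({ adapter }) => {
    // The platform is about to stop this invocation; reporting progress makes
    // Airdrop restart the snap-in with EXTRACTION_DATA_CONTINUE.
    await adapter.emit(ExtractorEventType.ExtractionDataProgress);
  },
});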

Data normalization

Extracted data must be normalized to fit the domain metadata defined in the external-domain-metadata.json file. More details on this process are provided in the Metadata extraction section.

Normalization rules (see the sketch after this list):

  • Null values: All fields without a value should either be omitted or set to null. For example, if an external system uses values such as "" or -1 to mark missing values, those must be set to null.
  • Timestamps: Full-precision timestamps should be formatted as RFC3339 (1972-03-29T22:04:47+01:00), and dates should be just 2020-12-31.
  • References: References must be strings, not numbers or objects.
  • Number fields: Must be valid JSON numbers (not strings).
  • Multiselect fields: Must be provided as an array (not CSV).
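The following sketch shows how a few of these rules might be applied to raw values coming from a hypothetical external system; the helper names are illustrative only:

function toNullIfMissing(value: unknown): unknown {
  // "" and -1 are common "no value" markers; map them to null.
  return value === '' || value === -1 || value === undefined ? null : value;
}

function toReference(value: number | string | null | undefined): string | null {
  // References must be strings, not numbers or objects.
  return value == null ? null : String(value);
}

function toMultiselect(value: string | null | undefined): string[] {
  // Multiselect fields must be arrays, not comma-separated strings.
  return value ? value.split(',').map((part) => part.trim()) : [];
}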

Extracted items are automatically normalized when pushed to the repo if a normalization function is provided under the normalize key in the repo object.

const repos = [
  {
    itemType: 'todos',
    normalize: normalizeTodo,
  },
  {
    itemType: 'users',
    normalize: normalizeUser,
  },
  {
    itemType: 'attachments',
    normalize: normalizeAttachment,
  },
];

For examples of normalization functions, refer to the data-normalization.ts file in the starter template.

Each line of the file contains the id, created_date, and modified_date fields at the beginning of the record. These fields are required. All other fields are contained within the data attribute.

{
  "id": "2102e01F",
  "created_date": "1972-03-29T22:04:47+01:00",
  "modified_date": "1970-01-01T01:00:04+01:00",
  "data": {
    "actual_close_date": "1970-01-01T02:33:18+01:00",
    "creator": "b8",
    "owner": "A3A",
    "rca": null,
    "severity": "fatal",
    "summary": "Lorem ipsum"
  }
}

If the item you are normalizing is a work item (a ticket, task, issue, or similar), it should also contain the item_url_field within the data attribute. This field should hold a URL that points to the item in the external system. The link is shown on the airdropped item in the DevRev app, helping users easily locate the item in the external system.

{
  "id": "2102e01F",
  "created_date": "1972-03-29T22:04:47+01:00",
  "modified_date": "1970-01-01T01:00:04+01:00",
  "data": {
    "actual_close_date": "1970-01-01T02:33:18+01:00",
    "creator": "b8",
    "owner": "A3A",
    "rca": null,
    "severity": "fatal",
    "summary": "Lorem ipsum",
    "item_url_field": "https://external-system.com/issue/123"
  }
}
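As a reference point, a normalization function for the todos repository could look roughly like the following. The shape of the raw item is an assumption about the external system; the returned object follows the format shown above:

function normalizeTodo(item: any) {
  return {
    id: String(item.id), // ids and references must be strings
    created_date: item.created_at, // RFC3339 timestamp from the external system
    modified_date: item.updated_at,
    data: {
      creator: String(item.creator_id),
      owner: item.owner_id ? String(item.owner_id) : null, // missing values become null
      summary: item.summary || null, // "" is treated as missing
      item_url_field: item.html_url, // link back to the item in the external system
    },
  };
}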

Validating extracted data

Extracted artifacts can be validated with the chef-cli using the following command:

$ chef-cli validate-data -m external_domain_metadata.json -r issue < extractor_issues_2.json

You can also generate example data to see the format that the normalized data must follow:

$ echo '{}' | chef-cli fuzz-extracted -r issue -m external_domain_metadata.json > example_issues.json

State handling

To enable information passing between invocations and runs, a limited amount of data can be saved as the snap-in state. Snap-in state persists between phases in one sync run as well as between multiple sync runs.

You can access the state through the SDK's adapter object.

adapter.state['users'].completed = true;

A snap-in must consult its state to obtain information on when the last successful forward sync started.

  • The snap-in’s state is loaded at the start of each invocation and saved at its end.
  • The snap-in’s state must be a valid JSON object.
  • Each sync direction (to DevRev and from DevRev) has its own state object that is not shared.
  • The snap-in state should be smaller than 1 MB, which maps to approximately 500,000 characters.

Effective use of the state and breaking down the problem into smaller chunks are crucial for good performance and user experience. Without knowing what has been processed, the snap-in extracts the same data multiple times, using valuable API capacity and time, and possibly duplicates the data inside DevRev or the external application.

The snap-in starter template contains an example of a simple state. Adding more data to the state can help with pagination and rate limiting by saving the point at which extraction was left off.

export const initialExtractorState: ExtractorState = {
  todos: { completed: false },
  users: { completed: false },
  attachments: { completed: false },
};
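For pagination or rate-limit handling, the state can be extended with extra fields, such as a per-item-type cursor. The nextPage field below is illustrative and not required by the SDK:

export interface ExtractorState {
  todos: { completed: boolean; nextPage?: number };
  users: { completed: boolean; nextPage?: number };
  attachments: { completed: boolean };
}

// During extraction, record the next page to fetch so a later invocation
// can pick up where this one stopped.
adapter.state.todos.nextPage = 5;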

To test state handling during snap-in development, you can pass an option that decreases the timeout between snap-in invocations.

await spawn<DummyExtractorState>({
  ...,
  option: {
    timeout: 1 * 60 * 1000, // 1 minute in milliseconds
  },
});