Self Data Backfill Guide

The Self Data Backfill Guide explains how to import historical data into Amplitude.

Things to Consider

  1. Keep historical data separate. Consider keeping historical data in a separate Amplitude project rather than backfilling it into a live production project. This makes the upload easier, keeps your live Amplitude data clean, and keeps you focused on current data going forward. Chances are you will not check historical data very often, but when you do, it is still easily accessible. In addition, historical user property values would also be processed and would overwrite current live values, and our ingestion system would then sync those out-of-date property values onto new live events as they come in. You can skip user property sync by adding "$skip_user_properties_sync": true to your event payload (see the example payload after this list). You can read more about our data ingestion system below.
  2. Connecting user data between two data sets. If you want to connect historical data with current data, then you should combine the historical data and live data in the same project. You must also have a common ID shared between both sets of data, which is the User ID field in Amplitude’s system. You will need to set the User ID of your historical data to the User ID in your current data if you want the data to connect.
  3. The new user count may change. Amplitude defines a new user based on the earliest event timestamp that Amplitude sees for a given user. As a result, if a user is recorded as new on 6/1/15 and data is backfilled for the user on 2/1/15, the user will then be reflected as new on 2/1/15. Read below for instructions on backfilling users.
  4. Current application data may be compromised. If there is a mismatch between your current User ID and the backfilled User ID, then we interpret the two distinct User IDs as two distinct users. As a result, users will be double counted. Since we cannot delete data once it has been recorded, you may have to create a new project altogether to eliminate any data issues. 
  5. Understand how Amplitude identifies unique users. We use the Device ID and User ID fields to compute the Amplitude ID. Read more here.
  6. Monthly event limit. Each event backfilled counts toward your monthly event volume.
  7. Daily quota for event ingestion. To protect Amplitude from event spam, each project has a daily ingestion limit of 500K events per Device ID (and per User ID). The limit is evaluated over a 24-hour rolling window at 1-hour intervals, so at any given time a particular user or device can send at most 500K events within the preceding 24 hours. If this limit is hit, the response includes exceeded_daily_quota_users / exceeded_daily_quota_devices.
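
As a concrete example of item 1 above, here is a minimal, hypothetical Python sketch that sends a single historical event with "$skip_user_properties_sync": true set in the payload. The API key, event fields, and endpoint shown are placeholders; confirm the exact payload format against the Batch API documentation.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

# One historical event; "$skip_user_properties_sync" tells ingestion not to
# sync this event's (possibly out-of-date) user properties forward.
event = {
    "user_id": "datamonster@example.com",
    "event_type": "Historical Purchase",
    "time": 1433116800000,                    # epoch milliseconds (6/1/15)
    "insert_id": "backfill-2015-06-01-0001",  # enables deduplication
    "user_properties": {"color": "red"},
    "$skip_user_properties_sync": True,
}

# Batch API endpoint (verify against the current Batch API docs).
resp = requests.post(
    "https://api2.amplitude.com/batch",
    json={"api_key": API_KEY, "events": [event]},
    timeout=30,
)
print(resp.status_code, resp.text)
```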

Instructions for the Backfill

  1. Review the documentation for the Batch API. If you exported historical data using the Export API and want to use that export for the backfill, note that the exported fields are not in the same format as the fields needed for import (for example, the Export API lists $insert_id, while the HTTP and Batch APIs expect insert_id without the $ prefix).
  2. Understand which fields you want to send and map your historical data to our fields. We highly recommend that you use the insert_id field so that we can deduplicate events.
  3. Create a test project in Amplitude where you will send sample test data from your backfill. You should do several tests on a few days worth of data in a separate Amplitude project before the final upload to the production project. This way, you can take a look at it as well and make sure things look good. IMPORTANT NOTE: If you mess up the import to your production project, then there is no way for us to "undo" the upload.
  4. Limit your upload to 100 batches/sec and 1000 events/sec. You can batch events into an upload, but we recommend sending no more than 10 events per batch; at 100 batches per second this keeps you within the 1000 events/sec limit. You will also be throttled if you send more than 10 events/sec for a single Device ID. The following is a guideline for our recommended way of backfilling large amounts of data (see the worker sketch after this list):
    1. Break up the set of events into mini non-overlapping sets (for example, partition by device_id).
    2. Have 1 worker per set of events executing steps 1-3.
      1. Read a large number of events from your system.
      2. Partition those events into requests based on device_id or user_id.
      3. Send your requests concurrently/in parallel to Amplitude.
      4. To optimize the above process further, you can also do the following:
  5. In your upload, retry aggressively with high timeouts, and keep retrying until you receive a 200 response. If you send an insert_id, we will deduplicate events with the same insert_id on our end, provided they are received within 7 days of each other.
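
The guideline in step 4 can be sketched in a short script. The example below is a minimal, hypothetical Python sketch (not Amplitude's official tooling): it partitions events by device_id, batches them 10 at a time, runs one worker per partition in parallel, and retries each request until a 200 is returned. The endpoint, API key, and event field names are assumptions to adapt to your own data; the sample scripts linked under Resources show a fuller approach.

```python
import time
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

import requests

API_KEY = "YOUR_API_KEY"                        # placeholder
BATCH_URL = "https://api2.amplitude.com/batch"  # verify against the Batch API docs
BATCH_SIZE = 10                                 # no more than 10 events per batch


def send_batch(events):
    """Send one batch and keep retrying until a 200 is returned."""
    while True:
        resp = requests.post(
            BATCH_URL,
            json={"api_key": API_KEY, "events": events},
            timeout=60,                         # high timeout, as recommended above
        )
        if resp.status_code == 200:
            return
        # Retry on any failure; 429 throttling and backoff are covered in the
        # Data Ingestion System section below.
        time.sleep(2)


def backfill_partition(events_for_partition):
    """Process one non-overlapping partition of events in batches of BATCH_SIZE."""
    for i in range(0, len(events_for_partition), BATCH_SIZE):
        send_batch(events_for_partition[i:i + BATCH_SIZE])


def run_backfill(all_events, num_workers=4):
    # Break the events into non-overlapping sets, here partitioned by device_id,
    # so each device's events stay in order on a single worker.
    partitions = defaultdict(list)
    for event in all_events:
        partitions[event["device_id"]].append(event)

    # One worker per partition, sending requests concurrently.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        list(pool.map(backfill_partition, partitions.values()))
```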

Timing

If you send data with a timestamp that is 30 days old or older, it can take up to 48 hours to appear in some parts of our system, so do not be alarmed if you do not see everything right away. You can use the User Activity tab to check the events you are sending, since it updates in real time regardless of the event's timestamp.

Resources

Sample scripts for data import: https://gist.github.com/djih/2a7e7fb2c1d45c8277f7aef64b682ed6

Sample data: https://d24n15hnbwhuhn.cloudfront.net/sample_data.zip

Data Ingestion System

In Amplitude's ingestion system, each user's current user properties are always being tracked and are synced onto that user's incoming events. When sending data to Amplitude, customers either send event data or send identify calls that update a user's user properties. An identify call updates the user's current user property values and affects the properties synced onto events received after that call. For example, say the user Datamonster currently has one user property, 'color', set to 'red'. Datamonster then logs a 'View Page A' event, triggers an identify that sets 'color' to 'blue', and finally logs a 'View Page B' event:

  1. logEvent -> 'View Page A'
  2. identify -> 'color':'blue'
  3. logEvent -> 'View Page B'

If Amplitude receives events from Datamonster in that exact order, then you would expect 'View Page A' to have 'color' = 'red' and 'View Page B' to have 'color' = 'blue'. This is because in Amplitude, we maintain the value of user properties at the time of the event. For this reason, the order in which events are uploaded is very important. If the identify was received after 'View Page B', then 'View Page B' would have 'color' = 'red' instead of 'blue'. 
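
As a concrete illustration of this ordering, the hypothetical Python sketch below replays Datamonster's sequence using Amplitude's HTTP V2 and Identify APIs. The endpoints and payload shapes are assumptions to verify against the current API reference; the point is simply that the upload order determines which 'color' value each event is synced with.

```python
import json
import time

import requests

API_KEY = "YOUR_API_KEY"  # placeholder


def log_event(event_type):
    """Send a single event via the HTTP V2 API (verify endpoint/fields in the docs)."""
    requests.post(
        "https://api2.amplitude.com/2/httpapi",
        json={"api_key": API_KEY, "events": [{
            "user_id": "datamonster",
            "event_type": event_type,
            "time": int(time.time() * 1000),  # epoch milliseconds
        }]},
        timeout=30,
    )


def identify(user_properties):
    """Update current user properties via the Identify API (verify in the docs)."""
    requests.post(
        "https://api2.amplitude.com/identify",
        data={
            "api_key": API_KEY,
            "identification": json.dumps({
                "user_id": "datamonster",
                "user_properties": user_properties,
            }),
        },
        timeout=30,
    )


# Upload order matters: 'View Page A' is synced with color = red,
# the identify flips the current value to blue,
# and 'View Page B' is synced with color = blue.
log_event("View Page A")
identify({"color": "blue"})
log_event("View Page B")
```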

Amplitude guarantees that events are processed in the order in which they are received by processing all of a user's events on the same ingestion worker. In essence, all of Datamonster's events would queue up, in order, on a single ingestion worker. If these events were instead processed in parallel across two separate workers, it would be much harder to guarantee ordering (e.g., one worker might be faster than the other).

Because each user's events are processed by the same worker, a user that sends an abnormally high number of events in a short amount of time would overload its assigned worker. For this reason, the event upload limit is 300 events/sec per Device ID. Customer backfills can exceed 300 events/sec when a system iterates through historical data and sends it as fast as possible in parallel. Amplitude therefore tracks each Device ID's event rate and rejects events, returning a 429 throttling HTTP response code, if it detects that a particular device is sending faster than 300 events/sec. If you receive a 429 in response to an event upload, the process should sleep for a few seconds and keep retrying the upload until it succeeds, as stated in our Batch API documentation. This ensures that no events are lost in the backfill process. If you do not retry after a 429 response, that specific batch of events will not be ingested.
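
A rough illustration of this sleep-and-retry behavior is the hypothetical helper below: it resubmits a batch, backing off for a few seconds whenever a 429 is returned, and only stops once the upload succeeds. The endpoint and backoff values are assumptions; follow the Batch API documentation for authoritative guidance.

```python
import time

import requests

BATCH_URL = "https://api2.amplitude.com/batch"  # verify against the Batch API docs


def upload_until_accepted(api_key, events, base_sleep=5, max_sleep=60):
    """Resubmit a batch until Amplitude returns a 200, sleeping after each 429."""
    sleep = base_sleep
    while True:
        resp = requests.post(
            BATCH_URL,
            json={"api_key": api_key, "events": events},
            timeout=60,
        )
        if resp.status_code == 200:
            return resp
        if resp.status_code == 429:
            # Throttled: this device/user exceeded the ingestion rate.
            # Sleep for a few seconds and retry the same batch so no events are lost.
            time.sleep(sleep)
            sleep = min(sleep * 2, max_sleep)
        else:
            # Other failures (e.g. 5xx) are retried here as well; inspect 4xx
            # responses separately, since they usually indicate a payload problem.
            time.sleep(base_sleep)
```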

Instructions for Pre-existing Users Backfill

If you have pre-existing users, then you should backfill those pre-existing users to accurately mark when those users were new users. Amplitude marks users new based on the timestamp of their earliest event.

To backfill your pre-existing users, use our Batch API to send a "dummy event" or a signup event whose timestamp is the time when the user was actually new. For instance, if a user signed up on Aug 1st, 2013, the timestamp of the event you send would be Aug 1st, 2013.
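
As a sketch of what such a backfill might look like, the hypothetical Python snippet below sends one backdated signup event per pre-existing user through the Batch API. The endpoint, event name, and timestamps are placeholder assumptions to replace with your own records.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

# (user_id, signup time in epoch milliseconds) pulled from your own records.
pre_existing_users = [
    ("datamonster", 1375315200000),  # signed up Aug 1st, 2013
]

events = [
    {
        "user_id": user_id,
        "event_type": "Sign Up (Backfill)",         # "dummy" event marking when the user was new
        "time": signup_ms,                          # historical timestamp
        "insert_id": f"signup-backfill-{user_id}",  # allows safe re-runs via deduplication
    }
    for user_id, signup_ms in pre_existing_users
]

resp = requests.post(
    "https://api2.amplitude.com/batch",  # verify against the Batch API docs
    json={"api_key": API_KEY, "events": events},
    timeout=30,
)
print(resp.status_code, resp.text)
```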
