This issue was originally reported in #280 but may also relate
to #167 and other potential performance issues in the recording.
In #206 I implemented a new mutation observer that defers
DOM serialization, which gives us a consistent DOM order
for the replay.
In that implementation, we used an array to represent the `addQueue`.
Whenever we need to consume the queue, we iterate it to make
sure there is no dead loop, then shift the first item to see
whether it can be serialized at the new timing.
But this can be very slow when there are many newly added DOM nodes,
since it does an O(n^2) iteration.
For example, if we have three newly added DOM nodes `n1`, `n2`, `n3`,
the iteration looks like this:
```
[n1, n2, n3]
n1 -> n2 -> n3, consume n3
[n1, n2]
n1 -> n2, consume n2
[n1]
n1, consume n1
```
Performance would be much better if the iteration looked like this:
```
[n1, n2, n3]
n3, consume n3
[n1, n2]
n2, consume n2
[n1]
n1, consume n1
```
Simply reversing the mutation payload does not work, because its
order is not always the same as the DOM order.
So in this patch, we replace the `addQueue` with a doubly linked list,
which can:
1. represent the DOM order in its data structure
2. look up the sibling of a list item in O(1) time
3. remove a list item in O(1) time
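A minimal sketch of such a doubly linked list (the class and method names here are illustrative, not the patch's actual implementation):

```typescript
// Sketch of a doubly linked list whose node order mirrors DOM order.
interface ListNode<T> {
  value: T;
  previous: ListNode<T> | null;
  next: ListNode<T> | null;
}

class DoublyLinkedList<T> {
  head: ListNode<T> | null = null;
  tail: ListNode<T> | null = null;
  length = 0;

  // O(1): append a node at the tail.
  addNode(value: T): ListNode<T> {
    const node: ListNode<T> = { value, previous: this.tail, next: null };
    if (this.tail) this.tail.next = node;
    else this.head = node;
    this.tail = node;
    this.length++;
    return node;
  }

  // O(1): unlink a node given a direct reference to it,
  // no scan of the whole queue required.
  removeNode(node: ListNode<T>): void {
    if (node.previous) node.previous.next = node.next;
    else this.head = node.next;
    if (node.next) node.next.previous = node.previous;
    else this.tail = node.previous;
    this.length--;
  }
}
```

With a direct reference to each queued node, consuming an item and looking up its siblings both become single pointer operations instead of array scans.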
* part of #80, support mask input options
* close #188, enhance sampling options
Use a more general sampling strategy interface to describe the
configuration of sampling events collection.
Implemented mousemove, mouse interaction, scroll, and input sampling
strategies.
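As a rough illustration of the idea (the field names below are a sketch of such an interface, not necessarily the exact shipped option shape):

```typescript
// Sketch of a general sampling strategy interface.
interface SamplingStrategy {
  // false disables mousemove recording; a number throttles it to
  // at most one event per that many milliseconds.
  mousemove?: boolean | number;
  mouseInteraction?: boolean;
  // throttle scroll events to one per N milliseconds.
  scroll?: number;
  // 'all' records every input event, 'last' only the final value.
  input?: 'all' | 'last';
}

// A simple time-based throttle that could back a numeric option
// such as `mousemove: 50`.
function throttleByInterval(intervalMs: number) {
  let lastEmitted = -Infinity;
  return (timestamp: number): boolean => {
    if (timestamp - lastEmitted >= intervalMs) {
      lastEmitted = timestamp;
      return true; // keep this event
    }
    return false; // drop this event
  };
}
```

For example, with `throttleByInterval(50)`, events at t=0, t=20, and t=60 would keep the first and third and drop the second.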
- What was broken was that playback would play activity from the first page view but then stop at the second page view (meta event), because the actions after it had been discarded
- This restores the behavior described by the comment 'return the events from last meta to the end.': we never want to discard events that occur after the baseline time
- I believe 'session' is incorrect terminology for this function name, as a session in web analytics usually means a series of page views
related to #6
Since the current 'play at any time offset' implementation is quite simple,
there are many things we can do to optimize its performance.
In this patch, we do the following optimizations:
1. Ignore some of the events during fast forward.
For example, when fast forwarding to 10 minutes later,
we do not need to perform the mouse movement events in that period.
2. Use a fragment element as a 'virtual parent node',
so newly added DOM nodes are appended to this fragment first
and finally appended to the document as one batch operation.
These changes eliminate much of the time previously spent on reflow/repaint.
I've seen a 10x performance improvement with these approaches.
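Optimization 1 can be illustrated with a simple filter over the recorded events (the event shape below is simplified for illustration; the real event types carry more structure):

```typescript
// Simplified recorded-event shape for illustration only.
type RecordedEvent = {
  timestamp: number;
  type: 'mousemove' | 'mutation' | 'snapshot' | 'scroll';
};

// When fast forwarding to `targetTime`, intermediate mousemove
// events have no lasting visual effect, so they can be skipped;
// mutations and other state-changing events must still apply.
function eventsForFastForward(
  events: RecordedEvent[],
  targetTime: number,
): RecordedEvent[] {
  return events.filter(
    (e) => e.timestamp > targetTime || e.type !== 'mousemove',
  );
}
```

Events after the target time are kept untouched, since playback proceeds normally once the seek lands.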
There are still some things we could do better, but not in this patch:
1. Build a virtual DOM tree to store the DOM mutations.
This would minimize the number of DOM operations.
2. Make the fast forward process async and cancellable, which may help UX
by making drag-and-drop interactions in the player's UI look smooth.
- On recordings with many full pageloads, DOM state and mutations were being applied only to be discarded when a new pageload came in, resulting in very slow rebuild times and an inability to interactively 'scrub' through these recordings