* record canvas mutations
close #60, #261
This patch implements the canvas mutation observer.
It consists of both the record and the replay side changes.
On the record side, we add a `recordCanvas` flag to indicate
whether to record canvas elements; the flag defaults to false.
Unlike our other observers, the canvas observer is disabled by
default, because applications with heavy canvas usage may emit
a lot of data as the canvas changes, especially in scenarios
that make heavy use of the `drawImage` API.
So users should audit this behavior themselves and only record
canvas when the flag is set to true.
On the replay side, we add an `UNSAFE_replayCanvas` flag to indicate
whether to replay canvas mutations.
Like the `recordCanvas` flag, `UNSAFE_replayCanvas` defaults
to false. But while the record-side implementation is stable and
safe, the replay-side implementation is UNSAFE.
It's unsafe because we need to add `allow-scripts` to the replay
sandbox, which may cause some unexpected script execution. Currently,
users should be aware of this implementation detail and enable this
feature carefully.
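The two flags are opted into separately on each side. A minimal sketch of the wiring, assuming rrweb's public `record`/`Replayer` entry points (the event buffer and `emit` handler here are illustrative):

```typescript
import * as rrweb from 'rrweb';

// Record side: canvas recording is off by default; opt in explicitly
// after auditing how much data your canvas usage will emit.
const events: any[] = [];
rrweb.record({
  emit(event) {
    events.push(event);
  },
  recordCanvas: true,
});

// Replay side: the UNSAFE_ prefix signals the `allow-scripts`
// sandbox trade-off described above.
const replayer = new rrweb.Replayer(events, {
  UNSAFE_replayCanvas: true,
});
replayer.play();
```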
* update canvas integration test
This issue was originally reported in #280 but may also relate
to #167 and other potential performance issues in the recording.
In #206 I implemented the new mutation observer, which defers
DOM serialization and helps us keep a consistent DOM order
for replay.
In this implementation, we use an array to represent the `addQueue`.
Whenever we need to consume the queue, we iterate over it to make
sure there is no infinite loop, and then shift the first item to see
whether it can be serialized at the new timing.
But this implementation can be very slow when there are many newly
added DOM nodes, since it performs an O(n^2) iteration.
For example, if we have three newly added DOM nodes `n1`, `n2`, `n3`,
the iteration looks like this:
```
[n1, n2, n3]
n1 -> n2 -> n3, consume n3
[n1, n2]
n1 -> n2, consume n2
[n1]
n1, consume n1
```
We would get better performance if the iteration looked like this:
```
[n1, n2, n3]
n3, consume n3
[n1, n2]
n2, consume n2
[n1]
n1, consume n1
```
Simply reversing the mutation payload does not work, because its
order is not always the same as the DOM order.
So in this patch, we replace the `addQueue` with a doubly linked list,
which can:
1. represent the DOM order in its data structure
2. look up the sibling of a list item in O(1) time
3. remove a list item in O(1) time
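The replacement can be sketched as a minimal doubly linked list (illustrative names, not rrweb's actual implementation). Appending preserves DOM order, and consuming from the tail removes each item in O(1) without re-scanning the whole queue:

```typescript
// A node in the queue; `previous`/`next` give O(1) sibling lookup.
interface DLLNode<T> {
  value: T;
  previous: DLLNode<T> | null;
  next: DLLNode<T> | null;
}

class DoubleLinkedList<T> {
  head: DLLNode<T> | null = null;
  tail: DLLNode<T> | null = null;
  length = 0;

  // O(1): append to the tail, preserving insertion (DOM) order.
  addNode(value: T): DLLNode<T> {
    const node: DLLNode<T> = { value, previous: this.tail, next: null };
    if (this.tail) this.tail.next = node;
    else this.head = node;
    this.tail = node;
    this.length++;
    return node;
  }

  // O(1): unlink a node given a direct reference to it.
  removeNode(node: DLLNode<T>): void {
    if (node.previous) node.previous.next = node.next;
    else this.head = node.next;
    if (node.next) node.next.previous = node.previous;
    else this.tail = node.previous;
    node.previous = null;
    node.next = null;
    this.length--;
  }
}
```

Consuming then walks `list.tail` and calls `removeNode` on each serialized item, matching the cheaper iteration shown above.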
* part of #80, support mask input options
* close #188, enhance sampling options
Use a more general sampling strategy interface to describe the
configuration of sampled event collection.
Implemented mousemove, mouse interaction, scroll and input sampling
strategies.
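The strategy interface can be sketched roughly as follows. The field names follow the commit text; the exact shape of the real `SamplingStrategy` type may differ, and the value semantics in the comments are assumptions:

```typescript
// Hedged sketch of a sampling strategy configuration.
type SamplingStrategy = {
  // false to skip mousemove recording entirely, or a number for a
  // throttle interval in milliseconds (assumed semantics).
  mousemove?: boolean | number;
  // false to skip mouse interaction events (click, dblclick, ...).
  mouseInteraction?: boolean;
  // throttle interval in milliseconds for scroll events.
  scroll?: number;
  // 'all' records every input event, 'last' only the final value
  // per input element (assumed semantics).
  input?: 'all' | 'last';
};

// Example: drop mousemove noise, throttle scroll, keep final input values.
const sampling: SamplingStrategy = {
  mousemove: false,
  scroll: 150,
  input: 'last',
};
```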
- What was broken was that it would just play activity from the first page view, but then would stop at the second page view (meta) as actions after that had been discarded
- This restores the functionality given by the comment 'return the events from last meta to the end.' - we never want to discard events that are after the baseline time
- I believe 'session' is the incorrect terminology for this function name, as a session in web analytics usually means a series of page views
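The restored behaviour ("return the events from the last meta to the end") can be sketched as below. The names and event shape are hypothetical; the real function operates on rrweb's event types:

```typescript
// Minimal event shape for illustration only.
type Ev = { type: 'meta' | 'other'; timestamp: number };

// Keep everything from the last meta event at or before the baseline
// time onward; never discard events after the baseline.
function trimEvents(events: Ev[], baselineTime: number): Ev[] {
  let lastMetaBeforeBaseline = 0;
  events.forEach((e, i) => {
    if (e.type === 'meta' && e.timestamp <= baselineTime) {
      lastMetaBeforeBaseline = i;
    }
  });
  return events.slice(lastMetaBeforeBaseline);
}
```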
related to #6
Since the current 'play at any time offset' implementation is pretty simple,
there are many things we can do to optimize its performance.
In this patch, we do the following optimizations:
1. Ignore some of the events during fast forward.
For example, when we are going to fast forward to 10 minutes later,
we do not need to perform mouse movement events during this period.
2. Use a fragment element as the 'virtual parent node',
so newly added DOM nodes are appended to this fragment
and finally appended to the document as a single batch operation.
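Optimization 1 can be sketched as a simple filter. The event shape and function name are hypothetical, not the actual player code:

```typescript
// Minimal event shape for illustration only.
type SimpleEvent = { type: 'mousemove' | 'mutation' | 'scroll'; timestamp: number };

// When fast-forwarding to `targetTime`, transient pointer events before
// the target can be skipped entirely: only the final DOM state matters.
// Mutations must always be applied to keep the DOM consistent.
function eventsToApply(events: SimpleEvent[], targetTime: number): SimpleEvent[] {
  return events.filter((e) => {
    const duringFastForward = e.timestamp < targetTime;
    if (duringFastForward && e.type === 'mousemove') return false;
    return true;
  });
}
```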
These changes eliminate much of the time previously spent on reflow/repaint.
I've seen a 10x performance improvement from these approaches.
There are still some things we can do better, but not in this patch.
1. We can build a virtual DOM tree to store the mutations of DOM.
This will minimize the number of DOM operations.
2. Another thing that may help UX is to make the fast forward process async and cancellable.
This may make the drag-and-drop interactions in the player's UI look smoother.
On recordings with many full page loads, DOM state and mutations were being applied only to be discarded when a new page load came in, resulting in very slow rebuild times and an inability to interactively 'scrub' through these recordings.
According to @eoghanmurray's suggestion, we can support three
main scenarios:
1. record only
2. replay only
3. all in one
Since we have implemented the packer feature, which has a big
influence on bundle size, we provide another three bundles:
1. record and pack
2. replay and unpack
3. all in one with pack and unpack
* Move mutation processing into its own object.
This should stand on its own as a refactor, but is intended as a basis
for exposing the new MutationBuffer object to further outside control e.g.
to 'mute' or batch up mutation emission when the page becomes inactive
from a https://developer.mozilla.org/en-US/docs/Web/API/Page_Visibility_API
point of view
* The `processMutations` function needed to be bound to the `mutationBuffer` object, as otherwise `this` referred to the `MutationObserver` object itself
* Neglected to add the output of `npm run typings`
* Get around the binding problem by using Arrow function expressions
* Prettier formatting
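The `this`-binding problem and the arrow-function fix above can be illustrated in isolation (a standalone sketch, not the actual `MutationBuffer` class):

```typescript
class MutationBufferSketch {
  private queue: string[] = [];

  // Method: `this` depends on the call site. Passing `buf.processMethod`
  // directly as a callback (e.g. to `new MutationObserver(...)`) loses
  // the instance, so `this.queue` is not what you expect.
  processMethod(record: string) {
    this.queue.push(record);
  }

  // Arrow function property: `this` is captured lexically at construction,
  // so the reference can safely be passed around as a callback without
  // an explicit `.bind(buf)`.
  processArrow = (record: string) => {
    this.queue.push(record);
  };

  size(): number {
    return this.queue.length;
  }
}
```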