Last modified: 2014-06-30 23:48:43 UTC

Wikimedia Bugzilla is closed!

Wikimedia migrated from Bugzilla to Phabricator. Bug reports are handled in Wikimedia Phabricator.
This static website is read-only and for historical purposes. It is not possible to log in, and except for displaying bug reports and their history, links might be broken. See T68914, the corresponding Phabricator task, for complete and up-to-date bug report information.
Bug 66914 - VisualEditor: Compress POST data in the client
Status: RESOLVED FIXED
Product: VisualEditor
Classification: Unclassified
Component: MediaWiki integration
Version: unspecified
Hardware: All
OS: All
Importance: High enhancement
Target Milestone: VE-deploy-2014-07-03
Assigned To: Ed Sanders
Keywords: performance
Duplicates: 59659
Depends on:
Blocks: ve-performance

Reported: 2014-06-21 13:40 UTC by Ed Sanders
Modified: 2014-06-30 23:48 UTC
CC: 6 users

See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Attachments

Description Ed Sanders 2014-06-21 13:40:03 UTC
Until Parsoid removes metadata, large articles are going to consist of several megabytes of data (en:Barack_Obama = 3.4MiB). As browsers don't provide any mechanism for compressing POST data, this takes about 20s to upload on a typical ADSL line with 1Mbps upstream.
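
(Back-of-the-envelope check: 3.4 MiB × 8 ≈ 28.5 Mbit, so at a nominal 1 Mbps upstream the raw transfer alone takes on the order of half a minute; the ~20s figure above is the right order of magnitude.)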

Using a JS implementation of deflate, we could achieve 80-90% compression in a few hundred ms on a decent machine: http://jsperf.com/js-deflate

A couple of considerations when performing such a complex calculation in pure JS:

* Really bad JS engines or slow devices (old browsers, IE, mobile) may offer too little overall speed benefit. We may want to detect these cases by user agent or performance profiling.
* The compression function will be synchronous and lock browser interaction. On slower machines this may give the appearance of crashing, or, combined with memory leaks, may actually crash. If this proves to be a significant problem we could look into encoding in chunks of 100k at a time, so we could at least report progress and maybe improve memory usage (at the cost of some overall compression). A sketch of the basic compression step follows this list.
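
For illustration only, a minimal sketch of the deflate step, assuming the pako deflate library is loaded and the page HTML is in a variable html (the change that was eventually merged is linked in comments 2 and 6; the deflateHtml name and the 'rawdeflate,' prefix here are hypothetical):

function deflateHtml( html ) {
	// TextEncoder turns the HTML string into the Uint8Array that
	// pako expects; deflateRaw returns the compressed bytes.
	var bytes = pako.deflateRaw( new TextEncoder().encode( html ) ),
		binary = '',
		i;
	// Build a binary string so btoa() can base64-encode it; base64
	// survives an ordinary form-encoded POST body.
	for ( i = 0; i < bytes.length; i++ ) {
		binary += String.fromCharCode( bytes[ i ] );
	}
	return 'rawdeflate,' + btoa( binary );
}

// Fall back to the uncompressed payload where the primitives are
// missing (the user-agent/profiling check suggested above).
var payload = ( window.pako && window.TextEncoder ) ? deflateHtml( html ) : html;
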
Comment 1 Bartosz Dziewoński 2014-06-21 13:42:58 UTC
Supporting section editing (bug 48429), and submitting only the modified section to the server, might be a better way to improve performance.
Comment 2 Gerrit Notification Bot 2014-06-24 12:55:11 UTC
Change 141678 had a related patch set uploaded by Esanders:
Compress HTML data with deflate before POSTing

https://gerrit.wikimedia.org/r/141678
Comment 3 Ed Sanders 2014-06-24 15:32:45 UTC
There are a number of things we can do to avoid this heavy payload:
* Section editing
* Separating Parsoid metadata
* Sending differential linmod data using a server-side converter

None of these are ready or close to being ready yet, and 20s to save a page is a fairly significant problem right now that compression can mitigate.
Comment 4 Bartosz Dziewoński 2014-06-25 22:39:13 UTC
*** Bug 59659 has been marked as a duplicate of this bug. ***
Comment 5 Matthew Flaschen 2014-06-27 22:42:42 UTC
(In reply to Ed Sanders from comment #0)
> * The compression function will be synchronous and lock browser interaction.

You could look at doing it asynchronously in the background with a Web Worker (i.e. a separate thread; https://developer.mozilla.org/en-US/docs/Web/API/Worker) in supported browsers (IE 10+, and Firefox/Chrome/Safari/Opera going back pretty far).
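
A minimal sketch of that suggestion, again assuming the pako library; the worker file name, the library path, and sendCompressed() are hypothetical:

// deflate-worker.js -- compress off the main thread so the UI
// stays responsive. importScripts() is how Workers load libraries.
importScripts( 'pako.min.js' ); // hypothetical library path

self.onmessage = function ( e ) {
	var bytes = pako.deflateRaw( e.data );
	// Transfer the result buffer back to the page (moved, not copied)
	self.postMessage( bytes.buffer, [ bytes.buffer ] );
};

// Caller, on the main thread:
var worker = new Worker( 'deflate-worker.js' );
worker.onmessage = function ( e ) {
	sendCompressed( new Uint8Array( e.data ) ); // hypothetical POST helper
};
worker.postMessage( new TextEncoder().encode( html ) );
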
Comment 6 Gerrit Notification Bot 2014-06-30 23:43:18 UTC
Change 141678 merged by jenkins-bot:
Compress HTML data with deflate before POSTing

https://gerrit.wikimedia.org/r/141678
