Datasets at a glance
There are two types of datasets: those that only measure objective metrics, and those that additionally collect subjective user feedback. Datasets are arranged by release date, most recent first.
| Year | Size     | Dataset        | Paper             |
|------|----------|----------------|-------------------|
| 2020 | 52.8 MB  | CNSM-20        | [CNSM-20]         |
| 2020 | 121.6 GB | Networking-20  | [Networking-20]   |
| 2019 | 6.21 MB  | WWW-19         | [WWW-19]          |
| 2018 | 1.9 MB   | PAM-18         | [PAM-18]          |
| 2016 | 7.2 GB   | SIGCOMM-QoE-16 | [SIGCOMM-QoE-16]  |
2020: Detecting Degradation of Web Browsing Quality of Experience
In a collaboration with Orange Labs, we performed a longitudinal study of Web services, tracking the most relevant changepoints in the QoE space and discerning changes due to QoE degradation from changes due to content evolution. The study has been published at [CNSM-20].
- 52.8 MB This dataset contains 222k samples of web browsing session measurements, collected over 2.5 months using the Orange Web View platform. The CNSM’20 dataset reports the session logs, tracking about 40 features per session.
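As a quick-start example, the session logs can be inspected with pandas; the file and column names below are assumptions for illustration, so check the dataset documentation for the actual schema.

```python
# Minimal loading sketch; 'cnsm20_sessions.csv' and the 'url' column are
# hypothetical names, not the dataset's actual schema.
import pandas as pd

df = pd.read_csv("cnsm20_sessions.csv")
print(df.shape)          # expect roughly 222k rows, ~40 feature columns
print(df.columns[:10])   # inspect the first few feature names

# Example: number of sessions per target page, assuming a 'url' column
print(df["url"].value_counts().head())
```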
2020: Revealing QoE of Web Users from Encrypted Network Traffic
In a collaboration with Orange Labs, we collected over 200k Web sessions, gathering HAR files along with packet traces, to accurately learn objective WebQoE metrics directly from raw encrypted packets and to stress-test the generalization capability of the models. The study has been published at [Networking-20].
- 121.6 GB We purposely collected two distinct sets with two different tools, namely Web Page Test (WPT) and Web View (WV), varying a number of relevant parameters and conditions, for a total of 200K+ web sessions, roughly equally split between WV and WPT. The Networking’20 dataset comprises variations in geographical coverage, scale, diversity and representativeness (location, targets, L7 and L3 protocols, browser software, viewport settings, etc.).
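Since the dataset bundles HAR files, which are plain JSON, a few lines of standard-library Python suffice to pull per-object sizes and timings out of a capture; 'session.har' is a placeholder file name, not one from the dataset.

```python
# Sketch of reading one HAR file; HAR is plain JSON, so only the standard
# library is needed. 'session.har' is a placeholder name.
import json

with open("session.har") as f:
    har = json.load(f)

for entry in har["log"]["entries"][:5]:   # one entry per HTTP(S) object
    url = entry["request"]["url"]
    size = entry["response"]["bodySize"]  # body bytes (-1 if unknown)
    elapsed = entry["time"]               # total fetch time in ms
    print(f"{elapsed:8.1f} ms  {size:>10} B  {url[:60]}")
```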
2019: Wikipedia subjective metrics dataset (User satisfaction)
In a collaboration with the Wikimedia Foundation, we collected more than 5 months’ worth of Real User Monitoring (RUM) data from Wikipedia users during their normal browsing activity, asking whether they felt the page loading process was fast enough. The study has been published at [WWW-19], and we additionally prepared an extended technical report containing further details [TECHREP-19].
- 6.21 MB The Wikimedia legal team has given clearance for the publication of the datasets, after user deanonymization and content linkability were fully prevented. The WWW’19 dataset comprises over 60,000 user survey answers, each associated with 18 browser performance metrics.
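As a hedged sketch of how the answers can be related to one of the RUM metrics, the snippet below bins page load time into deciles and reports the satisfied fraction per bin; the file and column names ('wiki_survey.csv', 'loadEventEnd', 'satisfied') are assumptions, not the released schema.

```python
# Hedged sketch: fraction of satisfied users per page-load-time decile.
# File and column names are assumed; consult the dataset for real ones.
import pandas as pd

df = pd.read_csv("wiki_survey.csv")
df["plt_bin"] = pd.qcut(df["loadEventEnd"], 10)  # decile bins of load time
print(df.groupby("plt_bin", observed=True)["satisfied"].mean())
```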
2018: Subjective metrics datasets (5-grades ACR scale)
These award-winning datasets were collected for our [PAM-17] and [PAM-18] papers. Subjective metrics were collected with (hundreds of) real humans browsing (tens of) real websites under controlled lab conditions. Details of the testbed are in [PAM-17], and details of the set of candidate pages in [DIRECTORSCUT-16]. To reduce human error and increase repeatability, we also provide our code.
- 486 KB compressed, 1.9 MB raw The sanitized PAM-18 WebMOS dataset comprises over 3,000 user grades, which we describe and use in [PAM-18] and [QoMEX-18]. Details of the sanitization process are in [PAM-18], and (a simplified version of) the Jupyter Notebook used in the paper can be found in the code section; a minimal MOS-computation sketch also follows this list.
- 466 KB compressed, 24 MB raw The original WebMOS dataset (link disabled) used in [PAM-17] is still available, but it was significantly extended in [PAM-18], so there is no reason you should pick this one!
- 1.5 MB compressed, 5 MB raw The complete PAM-18 WebMOS dataset (link disabled) comprises over 9,000 user grades and is also still available (you can find the link on this page if you’re determined enough). However, for repeatability we would prefer you to use the sanitized version above!
2016: Objective metrics datasets
These datasets were collected for our award-winning [SIGCOMM-QoE-16] paper. Objective metrics are collected with an automated process and do not require user intervention. This makes it possible to collect fairly large datasets, with enough repetitions to make statistical analysis accurate.
- 24 MB The Alexa Top-100 Chrome dataset contains objective metrics such as ByteIndex, ObjectIndex, DOM, onLoad, etc. (but not SpeedIndex, as computing it slows down the page rendering process itself, see [SIGCOMM-QoE-16]); see the ByteIndex sketch after this list.
- 7.2 GB The Alexa Top-100 WebPagetest dataset contains objective metrics such as ByteIndex, ObjectIndex, DOM, onLoad, etc., as well as SpeedIndex (computed by WebPagetest from histograms of the page rendering process).
- 100+ GB We have collected much larger datasets (on several Top-1000 lists: FR, EU, World) with 100+ repetitions from WebPagetest. If interested, contact us.