Early
2018
Initial Idea
Founders Rob Dongas and Suzan Oslin begin ideating about a study to evaluate interaction design patterns in VR.
November
2018
Proof of Concept
An initial list of questions is turned into a Google Form, and a testing protocol is created. The first test is run on Sketchbox 3D. The team grows to six, including long-time team members Marina Roselli and Lori Helig.
Spring
2019
Personas and Test Cases
It quickly becomes clear that the study will require a persona for each use-case category to establish foundational assumptions, as well as a test case for each title to ensure that every evaluator collects comparable data.
June/July
2020
Alpha Test
A dry run is launched to test the evaluation form and the pod and communications structure. Each pod evaluates one title in each of three categories, so all 12 categories are tested across the four pods. The overall team grows to 16, including additional dedicated team members Alberto Garcia, Jennie Lee, Lissa Aguilar, and Robert Bellman.
August
2020
Alpha Test Analysis
The framework for the UI and interaction questions proves insufficient for such a wide range of experiences, and the resulting data is not easily compared. The decision is made to bring on an accessibility consultant to run additional usability studies covering a variety of disabilities.
Fall
2020
Committees Created
Committees are created to re-architect the evaluation form and the evaluation summaries. Accessibility usability studies are added to the test plan.
December
2020
Functional QA Test
To test the dynamic Qualtrics evaluation form, every team member selects a title of their choosing, ensuring as much variation as possible.
January
2021
Final Revisions
Further training is provided, an additional evaluator is added to ensure a minimum of four evaluations per title, and a second analyst is added to balance the workload.
February
2021
Beta Test
The home environment for each major platform (Rift, Quest, Vive, WMR) is evaluated to establish a baseline for comparing experiences across headsets. Data is stored for the first time.
March
2021
Final Test Launch
Each pod evaluates and summarizes one title every two weeks over the next 36 weeks, for a total of 18 titles per pod, or 72 titles for the entire team.