open ended scoring | Blog | Human+AI Process Analytics


Metacog releases open ended rubric based machine scoring service

Metacog’s new open-ended scoring service enables richer, deeper, more authentic assessment that goes beyond multiple choice to measure what learners can actually DO, not just what they have MEMORIZED. Games, CEPAs, and simulations that are performance-based and cognitively challenging can now blend assessment with learning in both formative and summative environments.

How much time, energy, and money goes into scoring assessments manually? Estimates vary widely, but one thing is certain: teachers spend a huge part of their day (often at home) scoring by hand. And because so much of this assessment is multiple choice, it gives limited insight into how learners solved a particular challenge. The metacog API toolkit "shows the work": the actions and thought processes learners exhibit during interactions with digital performance-based lessons and assessments, enabling a broader, richer, and more authentic view of what a learner knows. It goes beyond the limited completion and time-on-task data provided by existing web and learning analytics products, and can measure the degree of mastery students demonstrate. Deep insight, reduced manual scoring time, and faster, more meaningful results are all now possible.

Our goal with metacog's scoring service API is to enable a new generation of instruction and assessment products where students create responses instead of merely picking them. Constructed-response is already part of the repertoire of strong educators, but it's too labor-intensive to use frequently enough to inform learning. metacog changes that – it can observe a student's performance in detail and score it quickly enough to drive formative instruction and adaptive learning. Please see the press release here:
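To make the idea concrete, here is a minimal sketch of rubric-based scoring over a stream of logged learner events. This is purely illustrative: the event schema, criterion names, and `score` function are hypothetical and do not reflect metacog's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical event log: each event records one learner action in a
# performance-based task (field names are illustrative only).
Event = Dict[str, str]

@dataclass
class Criterion:
    """One rubric line: a label, a point value, and a rule that
    inspects the full event stream rather than a final answer."""
    name: str
    points: int
    satisfied: Callable[[List[Event]], bool]

def score(events: List[Event], rubric: List[Criterion]) -> Dict[str, int]:
    """Apply each rubric criterion to the learner's event stream."""
    return {c.name: (c.points if c.satisfied(events) else 0) for c in rubric}

# Example: a two-criterion rubric for a simulation task.
rubric = [
    Criterion("tested_hypothesis", 2,
              lambda ev: any(e["action"] == "run_experiment" for e in ev)),
    Criterion("revised_after_data", 1,
              lambda ev: any(e["action"] == "revise_model" for e in ev)),
]

events = [
    {"action": "set_variable", "target": "temperature"},
    {"action": "run_experiment", "target": "trial_1"},
    {"action": "revise_model", "target": "hypothesis"},
]

print(score(events, rubric))
# {'tested_hypothesis': 2, 'revised_after_data': 1}
```

Because the rubric operates on the whole interaction trace, it can credit the process (experimenting, revising) that a multiple-choice item never sees, which is the core idea behind scoring "what learners can actually DO."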

Owen Lawlor